
Identifying the behavioral barriers to task completion, and designing a suite of solutions to improve the experience.


Summary

After assessing the last year of user research, I established a new design philosophy based on Ruth K. Schmidt’s work in behavioral design. Using this framework, I facilitated design workshops with Engineering and Product to identify opportunity areas for the product: the web app had recently come close to feature parity with our backend capabilities, meaning we were running out of projects for the year. I then worked with Product and Support to assess feature requests from customers and develop a series of projects addressing the experiential issues that naturally arise when users shift their expectations in moving from a CLI to a GUI.

Outcomes

A new design philosophy, improved UI components, and several UX-improving projects

Role

Solo Designer
Project Lead

Timeframe

2-3 projects per quarter
1 fiscal year

Context


Machine Learning Data Management Software (MLDM)

MLDM enables robust version control of data for machine learning engineers and data scientists. Not only can users access historical data that has been fed to an AI model, they can also edit and reprocess that data, which lets them experiment with their datasets.

Primary Persona: Machine Learning Engineers

The ML engineer is primarily concerned with building the pipeline that feeds data to the AI model. Their greatest frustration is the arduous task of iterating on and debugging these pipelines until data travels cleanly from its source to the AI model. A million little things can go wrong, and it is their job to find the sometimes minuscule mistakes that end up ruining the entire data pipeline.

An end-to-end user journey across 3 different personas. Pink sticky notes denote pain points; blue describes users’ current solutions to those pain points.

Research


Process

Ruth Schmidt’s Behavioral Ability Model was the key piece of literature review, driving insights as I revisited the past year of research activities. Her model describes 3 levers, which became pivotal language for identifying UX issues:

1. Confidence: “Is it in my best interest to [perform this task]?”
2. Competence: “Do I really understand my options?”
3. Agency: “Are there barriers in the way?”

Leadership from Product and Engineering and I treated these levers as themes and sorted the research data and our current feature requests in a closed card sort to broaden our perspective. I then ran a generative design workshop with the larger team to start ideating solutions.

Findings

As an organization, we had over-indexed on solutions that assumed a high level of competence. The general reasoning was that it only took one highly competent and excited user to demonstrate the value proposition we were offering. But that assumption fell flat when it came to positioning our product as a viable long-term solution for multiple teams.

Summary of findings from the research as they relate to the behavioral levers.

Shifting Point of View

There was a reliance on our technology to do the selling. However, we were surprised when customers churned in favor of less powerful options on the market because of their superior ease of use. At a high level, these are the changes that needed to be made:

1. Confidence: Provide users with safety nets that make it simple to diagnose or reverse destructive mistakes.
2. Competence: Provide even greater clarity into the inner workings of the backend system, especially where unconventional jargon is used.
3. Agency: Increase a team’s collaboration by removing barriers to simpler requests, like changing datasets.

Layout of the generative design workshop run with the team.

Design


Summary

Several projects emerged from the research and the collaborative exercises I facilitated with the team. Not only did these projects target the specific changes that needed to be made, but they were also aligned with the main business goals set by our leadership team.

A Few Projects

Calendar-based navigation for a historical view of pipelines

Part of debugging data pipelines is knowing when the pipeline changed and what changed. One of the most powerful ways to address this problem is to provide a convenient way to view the state of the pipeline at the moment a change occurred.

In the past, users were forced to hunt for the automatically generated ID of an older version of the pipeline and input it into the command line to retrieve the historical data.

Combining calendar navigation with filters and the other metadata we could recover enables users to find breakages far more effectively.
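
In spirit, the calendar view turns version lookup into a date-range query plus metadata filters. The sketch below illustrates that idea in Python; the records, field names, and values are all hypothetical, not the product’s actual data model.

```python
from datetime import date

# Hypothetical pipeline-version records; fields are illustrative only.
versions = [
    {"id": "9f3c2a", "created": date(2023, 3, 12), "author": "svc-etl", "state": "passing"},
    {"id": "b81d07", "created": date(2023, 3, 14), "author": "jdoe", "state": "failing"},
    {"id": "c4e990", "created": date(2023, 3, 20), "author": "jdoe", "state": "failing"},
]

# Calendar navigation is, in effect, a date-range filter combined with
# metadata filters -- no hunting for an auto-generated ID to paste into a CLI.
start, end = date(2023, 3, 13), date(2023, 3, 15)
suspects = [v for v in versions if start <= v["created"] <= end and v["state"] == "failing"]

for v in suspects:
    print(v["id"], v["created"], v["author"])  # -> b81d07, the likely breakage point
```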

The inbox of doom

No one likes learning that the things they’ve built have broken, especially developers facing a long debugging session. While unpleasant, the “inbox of doom” centralizes all known failure states at every step in the data pipeline, making it easier for users to navigate their pipeline and see where breakages are occurring.

Pipelines can get as large as 300 steps, so not having to manually find each error makes the experience a lot easier.
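
Conceptually, the inbox is a single aggregation over every step’s failure state. A minimal sketch, assuming an invented Step record rather than the product’s real schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    name: str
    state: str                    # e.g. "running", "success", "failure"
    error: Optional[str] = None

# A toy pipeline; real pipelines can reach ~300 steps.
pipeline = [
    Step("ingest", "success"),
    Step("clean", "failure", error="schema mismatch on column 'ts'"),
    Step("train", "failure", error="upstream input missing"),
]

# The "inbox" collects every failing step in one place,
# instead of asking the user to walk the pipeline step by step.
inbox = [(s.name, s.error) for s in pipeline if s.state == "failure"]
for name, error in inbox:
    print(f"{name}: {error}")
```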

AI-assisted logs

For nontechnical audiences, reading logs that detail every system response can be unapproachable. This project introduced an AI assistant, trained on our docs and our understanding of system responses, to give human-readable descriptions of what the backend systems are trying to tell the user.
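
A minimal sketch of the concept, not the shipped system: look up the doc snippet for a known system response, then ask a model to restate the log line in plain language. The error code, doc text, and ask_model parameter are all hypothetical stand-ins.

```python
# Invented doc snippets keyed by system response code, for illustration only.
DOCS = {
    "ERR_INPUT_MISSING": "Raised when a pipeline step cannot find its upstream input.",
}

def explain(log_line: str, ask_model) -> str:
    """Translate a raw backend log line into a plain-English description."""
    code = log_line.split()[0]                    # e.g. "ERR_INPUT_MISSING step=train"
    context = DOCS.get(code, "no matching doc entry")
    prompt = (
        "Rewrite this pipeline log line for a nontechnical reader.\n"
        f"Relevant docs: {context}\n"
        f"Log line: {log_line}"
    )
    return ask_model(prompt)      # ask_model stands in for whatever LLM client is used

# Usage: explain("ERR_INPUT_MISSING step=train", ask_model=my_llm_client)
```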

*pssssssst hey
You might have noticed that the example idea I gave in the generative design workshop is very similar to this project. ;)


Thanks for reading this case study!