Opportunity Space
Watson Assistant is an established product in the market for conversational interfaces for customer support. I joined the Watson Assistant team as a Sr. Product Design Lead in early 2021, focusing on the product's authoring experience.
One of my first missions was to ideate around feedback from our power users that diagnosing errors in their chatbots was a time-consuming and ineffective process.
Role: Sr. Design Lead
Process: Iterative
Team: Product Managers, Front-end Developers, Back-end Developers, Software Architect
UNDERSTANDING THE PROBLEM SPACE
How might we equip our customers to identify and remove existing and potential errors in their chatbot?
Scoping the Problem
RESEARCH
To better understand our customers’ needs for identifying and correcting errors, we needed to first understand the kinds of errors they were encountering in authoring their chatbots.
We interviewed a handful of our power users, taking notes on where they ran into common errors and what kinds of pointers would be helpful.
GAUGING TECHNICAL FEASIBILITY
After getting a better understanding of the problem, I worked closely with the Product Manager and Lead Developer to weigh my vision against what was technically feasible for this ask.
Ideation
SKETCHES
Our user interviews revealed some clues about what areas were important to our customers in identifying and removing errors.
I started iterating through several early designs with digital sketches, emphasizing early error detection and proactive prompting before errors were even made.
Starting with sketches allowed me to quickly explore several concepts in a few days. After reviewing my sketches with my cross-functional team as well as the larger design team, I narrowed the concepts down to three candidates for further exploration.
MID-FIDELITY
With a few versions of the designs in hand, I was able to put mid-fidelity designs in front of users and gather early feedback for iteration.
Final Concepts
CONFIDENCE SCORES FOR INTENT CLASSIFICATION
One of the major areas of obscurity in the product is how the Assistant classifies the end user's intent. We received a lot of feedback from users about feeling "in the dark" about how the Assistant identifies the specific action associated with their customers' inputs.
With the introduction of the confidence score functionality, users will be able to see the confidence scores associated with the various candidate actions and see that the Assistant chooses an action based on that score. Users will also be able to see which actions are similar and differentiate them so that the Assistant can make better decisions in the future.
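To make the mechanic concrete, here is a minimal sketch of confidence-based action selection, written in Python with hypothetical names and thresholds; it illustrates the concept, not Watson Assistant's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ActionMatch:
    action: str        # candidate action the classifier matched
    confidence: float  # classifier score in [0.0, 1.0]

def choose_action(matches, threshold=0.5):
    """Rank candidates by confidence; pick the top one if it clears the threshold."""
    ranked = sorted(matches, key=lambda m: m.confidence, reverse=True)
    best = ranked[0] if ranked and ranked[0].confidence >= threshold else None
    return best, ranked

best, ranked = choose_action([
    ActionMatch("Cancel my order", 0.81),
    ActionMatch("Change my order", 0.78),  # near-identical score: a candidate to differentiate
    ActionMatch("Track my order", 0.22),
])
```

Surfacing the full ranked list, rather than just the winning action, is what lets an author spot two actions the classifier can barely tell apart.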
FOLLOWING ALONG WITH STEPS
Another resounding pain point from our user interviews was the inability to follow along in the left panel as the user tests out the Assistant in preview. One can imagine that our power users, who may have 150+ steps in the left panel, could easily get lost when testing out their Assistant in preview.
With the debug feature, the authoring experience will navigate through the steps in the left panel to follow along with the activity in preview. This means that if a user spots an error while previewing their Assistant, they can fix it in the corresponding step right away.
STEP LOCATOR
The step locator came out of a user testing session. While using the preview panel and the follow-along experience, we noticed situations where our user wanted to go back to a previous step. Sometimes our user would want to change some content from a prior step, and with 150+ steps, scrolling through the left panel to find each step to edit would be a chore.
With the step locator, users can navigate to any step within the conversation by clicking the locator icon to the left of each message. This allows for navigation from step 2 to step 150 and then to step 70, and even between different actions/conversation topics.
VARIABLES LOG
Users also complained about not being able to see a log of all their variables. Since each session with an Assistant can involve over fifty variables across action variables, session variables, integration variables, and so on, it gets difficult to keep track of which variables the Assistant has registered.
With the introduction of the variables log, users will be able to see all the variables their Assistant logs as end users interact with it.
Here, users will also be able to hover over any variable and edit its value in order to test different conversation paths with their Assistant.
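As a rough sketch (hypothetical structure and names, not the product's actual data model), the variables log is essentially a per-session snapshot of every registered variable, grouped by its source and editable in place:

```python
# A hypothetical, simplified variables log: one entry per registered variable,
# grouped by where it came from (action, session, integration, ...).
variables_log = {
    "action":      {"order_number": "A-10293", "refund_amount": 25.00},
    "session":     {"user_name": "Dana", "authenticated": True},
    "integration": {"channel": "web_chat"},
}

def edit_variable(log, scope, name, value):
    """Overwrite a logged value to test a different conversation path."""
    log[scope][name] = value

# e.g. simulate an unauthenticated user without replaying the whole conversation
edit_variable(variables_log, "session", "authenticated", False)
```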
Key Results
Working with development, we got a functioning prototype up and running by the end of the second sprint and put it in front of users for a usability test.
92%
of users were able to find 5/5 errors hidden in the test Assistant faster using the new debug feature
87%
of users were able to find all errors hidden in the test Assistant with the new debug feature
96%
of users agreed that they would use the new debug feature in their daily process of authoring their Assistant
Reflections
ACTIVE LISTENING
Last year, the Nielsen Norman Group released a video titled Don't Listen to the Customers that sent the experience design community into a tizzy. Of course, the message wasn't to never listen to what users say; it was to listen to users talk through problems without letting them prescribe the solutions. Many designers have heard a user say, "I think if we just put a button here, we should be good" or "We just need it to have a newer look and feel." This kind of prescriptive feedback can sometimes be a distraction from the underlying problem.
For this mission, I had to do a lot of active listening and looking out for subtext. Since most of our power users were developers and subject matter experts who felt strongly about what they needed for troubleshooting their Assistants, there were a lot of opinions about how to solve the problem. It challenged me to acknowledge their input and refocus the interviews on understanding the problem rather than jumping to solutions.