Watson Conversation

Scaling an API service to an enterprise-grade product

Project Year

2017

Product 

Watson Conversation

Role

User experience
Design Lead

About

Watson Conversation is an enterprise-grade product for building, training, and scaling conversational agents.

More Information

When I joined the Watson design team in 2015, Conversation was one of a group of Watson API-based services meant for developer users. As a service, it enabled developers to automate branching conversations between a user and an application. They could then create chatbot experiences that understood and communicated in natural language.
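As a rough illustration of what "branching conversation" means here, the sketch below models a bot that maps a message to an intent and replies accordingly. This is a toy, not the actual Watson Conversation API: real systems classify free-form text with a trained machine learning model, and all names below are hypothetical.

```python
# Toy branching-conversation sketch; not the Watson Conversation API.
# A keyword lookup stands in for a trained intent classifier.

DIALOG = {
    "greeting": "Hello! How can I help you today?",
    "hours":    "We're open 9am-5pm, Monday through Friday.",
    "fallback": "Sorry, I didn't understand that.",
}

# Hypothetical keyword-to-intent table standing in for a trained model.
KEYWORDS = {
    "hello": "greeting",
    "hi":    "greeting",
    "open":  "hours",
    "hours": "hours",
}

def respond(message: str) -> str:
    """Return the reply for the first intent whose keyword appears."""
    text = message.lower()
    for keyword, intent in KEYWORDS.items():
        if keyword in text:
            return DIALOG[intent]
    return DIALOG["fallback"]
```

Even in this caricature, the limitation that motivated tooling is visible: every branch, phrase, and response has to be authored by hand, in code.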

But as we began to learn more about how real users worked with Watson technologies, we increasingly found that developers were almost never able to single-handedly do the work needed to infuse cognitive capabilities into a business setting. They worked in concert with subject-matter experts, data analysts, and customer-facing staff. It takes a village to raise a bot.

Watson Conversation was the first Platform Service to receive tooling: a graphical user interface for these non-developer users to do their work. During my time on this team, I've worked alongside a multidisciplinary team of designers and remote teams of engineers and product managers, and tested my work with remote sponsor users. We worked with an Agile software development mindset.

Here are some tidbits of my work on this part of the product.

Reframing the experience

Earlier versions of Watson Conversation mirrored the organization of a machine learning model and were geared towards developer users. Even for many engineers this involves a learning curve, and developers weren't always our users to begin with. Clients were hiring multidisciplinary teams of people to build their conversational agents, many of them with no programming knowledge.

My approach to the user experience in Conversation is to reflect human language acquisition rather than teach users about machine learning. I want to enable users to build and teach their chatbot in tandem, and to do their work as autonomously as possible.

Building a bot

First-time users often describe an initial "brain dump" into their system. It's a way for them to learn how things work and what things mean. But they're usually in for a rude awakening: not only do they have to invest a considerable amount of time to get basic conversations happening, but if bad examples are set early on, they can be hard to undo.

Current model

New model

Low-fidelity designs of new model

Placing deployment within the first-time user experience brings in conversation data early, letting users improve and train their system right away.

Context-switching

It's hard to envision how a chatbot should behave without referencing what it knows. This design helps people make connections within their system. I also approached this from a business strategy point of view: surfacing connections and offering shortcuts creates opportunities for upselling.

Scaling a workflow

This design adds functionality previously found exclusively in the training section of the product. This interface reframes tasks and puts them directly in the building experience, which is where users spend most of their time. By breaking down workflow silos, the user takes shortcuts in building out the chatbot, simultaneously trains it, and gives the business unit an opportunity to upsell.

Teaching a bot

One expectation we frequently reset with customers is about how "smart" a chatbot is out of the box. To become "smart", the bot needs to interact with the outside world, and it needs its builder to reinforce, correct, and expand its understanding. Most users have nothing against real end-user data, but some are afraid that it could harm or change their chatbot in undesirable ways. Others just don't have the time for mundane training tasks.

There are also some users who, for lack of a better phrase, helicopter-parent their chatbots. They go through each turn of a user conversation, double-checking and ruminating on the way the system classified each message. This is time-consuming, unsustainable, and expensive.

Below are designs that encourage chatbot resilience and self-sufficiency.

Thinking outside the box

Currently, to teach a chatbot to understand what a person is trying to achieve in a conversation, a user has to enter as many phrases as they can think of that describe that goal. They also have to build out the bot's dictionary of recognized terms by hand. This design enables a user to take shortcuts, using conversation data to fill out those phrases. In turn, the chatbot builds out its dictionary too. The inspiration for this solution came from my knowledge of schema theory and context clues in reading comprehension.
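The manual workflow above, and the shortcut this design introduces, can be sketched roughly like this. It is a simplified model for illustration only, not the product's actual data structures; the goal names, dictionary terms, and helper function are all hypothetical.

```python
# Simplified sketch of the workflow described above; all names are
# hypothetical and this is not the product's actual data model.

# A user goal, taught by entering example phrases by hand.
goal_examples = {
    "book_flight": [
        "I want to book a flight",
        "Help me find a plane ticket",
    ],
}

# The bot's dictionary of recognized terms, also built by hand.
dictionary = {
    "city": ["Boston", "Austin", "Denver"],
}

def harvest_examples(goal: str, conversation_logs: list) -> None:
    """The shortcut the design sketches: pull candidate phrases from
    real conversation data instead of inventing them from scratch."""
    for utterance in conversation_logs:
        if utterance not in goal_examples[goal]:
            goal_examples[goal].append(utterance)

# One real end-user utterance becomes a new training example.
harvest_examples("book_flight", ["Can I fly to Denver on Friday?"])
```

The design's bet is that borrowing phrases from real conversations is both faster and more representative than asking users to imagine every way a goal might be phrased.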

Encouraging good habits

When I first joined the team, I ran usability tests on Recommendations, the product area for improving a conversation system. Few people knew about it, and even fewer were engaging with it. Those who did were wary of how their actions would change their system.

Here is a design that adds affordances for seeing training tasks in context, along with the ability to undo reclassifications.

Breaking down workflow silos

Even with usability improvements to Recommendations, users were still not training their systems. Showing users immediate value in the midst of building their chatbot may encourage adoption of Recommendations. In response to users admitting they felt too lazy to train their system, I'm experimenting with notifications and modal-based training across the product.

Here's an early concept for making these tasks less painful. With permission, the system takes over the user's screen and asks a few questions about one concept, like flashcards. The system uses the user's answers to train on multiple concepts at once.

Good design is good business

By using conversation data to demonstrate the value of Watson's recommendations, users might be more motivated to complete training tasks. If those tasks are integrated into the user's existing workflow, they are more likely to be habit-forming, making upsells easier.

It takes a village to raise a bot

My work on the Conversation team has focused on introducing non-technical people to cognitive technology for the first time. These people are doing the grunt work of forming, teaching, and rearing chatbots. It's been both fun and helpful to use my psychology background to form analogies to how humans acquire language, learn new skills, and form habits.