When I joined the Watson design team in 2015, Conversation was one of a group of Watson API-based services meant for developer users. As a service, it enabled developers to automate branching conversations between a user and an application, so they could create chatbot experiences that understood and communicated in natural language.
But as we began to learn more about how real users worked with Watson technologies, we increasingly found that developers were almost never able to single-handedly do the work needed to infuse cognitive capabilities into a business setting. They worked in concert with subject-matter experts, data analysts, and customer-facing staff. It takes a village to raise a bot.
Watson Conversation was the first Platform Service to receive tooling: a graphical user interface that lets these non-developer users do their work. During my time on this team, I've worked alongside a multidisciplinary team of designers, collaborated with remote teams of engineers and product managers, and tested my work with remote sponsor users. We worked in an Agile software development state of mind.
Here are some tidbits of my work on this part of the product.
Earlier versions of Watson Conversation mirrored the organization of a machine learning model and were geared towards developer users. Even for many engineers this involves a learning curve, and developers weren't always our users to begin with. Clients were hiring multidisciplinary teams of people to build their conversational agents, many of them with no programming knowledge.
My approach to the user experience in Conversation is to reflect human language acquisition more than to teach users about machine learning. I want to enable users to build and teach their chatbot in tandem, and to do their work as autonomously as possible.
First-time users often describe an initial "brain dump" into their system. It's a way for them to try to learn how things work and what things mean. But they're usually in for a rude awakening. Not only do they have to invest a considerable amount of time to get basic conversations to happen, but if bad examples are set early on, they can be hard to undo.
One expectation we frequently reset with customers is about how "smart" a chatbot is out of the box. To become "smart", the bot needs to interact with the outside world, and it needs its builder to reinforce, correct, and expand its understanding. Most users have nothing against real end-user data, but some are afraid that it could harm or change their chatbot in undesirable ways. Others just don't have the time for mundane training tasks.
There are also some users who, for lack of a better phrase, helicopter-parent their chatbots. They go through each turn of a user conversation, double-checking and ruminating on the way the system classified each message. This is time-consuming, unsustainable, and expensive.
Below are designs that encourage chatbot resilience and self-sufficiency.
My work on the Conversation team has focused on introducing non-technical people to cognitive technology for the first time. These people are doing the grunt work of forming, teaching, and rearing chatbots. It's been both fun and helpful to use my psychology background to form analogies to how humans acquire language, learn new skills, and form habits.