Conversational interfaces and chatbots were the focus of my major project during my MA in Interaction Design at NCAD. With my innate sense of curiosity, I saw a chance to investigate the problems and opportunities a new interface paradigm would present.
As an interaction designer, one of the biggest lessons I have learnt is the importance of choosing the right tools for the job. Finding tools that are appropriate in terms of fidelity and context, and that deliver the results which drive a design solution forward, is critical. These are just a handful of techniques you can employ, and by no means an exhaustive or definitive approach.
At the early stage of the project I was lucky enough to talk to Michael Owen Liston and Ciaran Duffy, graduates from the CIID MA who are currently working with chatbots. I'd highly recommend you take time to check out their links above to learn more about them. The insights from those conversations directed me to one particular tool they had used on their own projects: TextIt, which allows you to visually build text- or voice-based applications.
I built the flow in TextIt and deployed it through Facebook Messenger to a set of user testers I had lined up. I was using the chatbot to gauge my users' level of experience with and exposure to bots. In about 80% of cases it was the first time they had engaged with a chatbot, which was a great opportunity to get some honest, unbiased feedback. In my years working in the field, it has been rare to introduce or test an interface this "new" with end users, all the while in the very familiar context of a messaging application.
As I rolled out the experiment over a couple of days and timezones, I asked the participants to photograph and document where and when they used the chatbot. I was able to take away the following:
Although I had found the initial experience prototype beneficial and enlightening, there were a few other aspects of chatbots I wanted to test. The first was to put a realistic goal or task to the participants which could be completed through a chatbot. Secondly, I wanted to test the different methods of user interface input which Facebook Messenger makes available to designers and developers.
For this experience prototype I had to build a more complex Facebook Messenger bot, and after some research I discovered ChatFuel, a WYSIWYG chatbot builder. You can set up and deliver content using Facebook's set of predefined templates, which include text cards, images, carousels/galleries and links. To complete the test, I set up Joe's Wing Palace (a fictional chicken wing restaurant which I cannot guarantee is any good :P ) and deployed the page to Facebook with the bot enabled.
The biggest difference compared to TextIt is that the user can respond using those predefined UI buttons, otherwise known as structured input. Although it is possible with ChatFuel to capture free text input, it is a bit clunky and unreliable. This opens up another conversation about AI and natural language processing which I'll document and share in detail at a later stage. This experiment uncovered the biggest insights for the project to date. For the majority of the participants it was the first time they had used buttons, swiped or clicked around within a conversational interface. And you know what: it was an unbelievable failure!
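To make "structured input" concrete, here is a minimal sketch of the kind of message a Messenger bot sends to offer tappable buttons instead of free text. The payload shape follows the Messenger Send API's button template; the recipient id, button titles and postback payloads are hypothetical placeholders for the Joe's Wing Palace scenario, not ChatFuel's internals.

```python
import json


def button_template(recipient_id, text, options):
    """Build a Messenger Send API request body offering postback buttons.

    Tapping a button sends its machine-readable payload back to the bot,
    so no natural-language parsing of the user's reply is needed.
    """
    return {
        "recipient": {"id": recipient_id},
        "message": {
            "attachment": {
                "type": "template",
                "payload": {
                    "template_type": "button",
                    "text": text,
                    "buttons": [
                        {"type": "postback", "title": title, "payload": payload}
                        for title, payload in options
                    ],
                },
            }
        },
    }


body = button_template(
    "USER_ID",  # placeholder recipient
    "Welcome to Joe's Wing Palace! What would you like to do?",
    [("Make a booking", "BOOK"),
     ("Browse the menu", "MENU"),
     ("Get directions", "DIRECTIONS")],
)
print(json.dumps(body, indent=2))
```

Note that the button template caps the choices at three buttons per message, which is part of why my test offered exactly booking, menu and directions.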
The majority (> 90%) didn't see the buttons onscreen; when presented with those options, they naturally typed in their request, "make a booking" or something similar. They were unaware that they could browse a menu or get directions. While the participants' intent was clear, the interface didn't respect or serve it.
Not surprisingly, in post-test interviews, when I demonstrated the capabilities or explained how the chatbot was supposed to work, the reactions were very different. They could now see the value and opportunities that chatbots could offer. With this in mind, here is what I took forward with me:
Another method of user research I conducted was to bodystorm a few different chatbots. I knew that to explore this area I would need to build and live with a chatbot for a number of weeks. I had the idea that JoeStudyBot would contact me at the end of each day with a prompt for me to micro-blog and document my progress on my project. A chatbot diary if you will.
I wanted to know what features would become important as I crafted my own personal bot. It started with naming the bot, then selecting an icon to represent it on Messenger, and then I got really excited about the potential here (while simultaneously disappointed by the current limitations). Wouldn't it be so much better if these facilities could be extended? I'd love to be able to choose language styles, colours, fonts, emojis, gifs, stickers… the list could and should be endless.
Next, I wanted to examine what it was like living with a bot over a number of weeks. I was curious to see how I would feel when it messaged me. In some cases throughout the experiment I was out for dinner, at a concert or sitting at home watching TV. Different emotions and contexts played across the varying scenarios. I also noted how in some situations I would dismiss the notification quickly if I wasn't able to respond at the time. In other cases I was waiting for it to speak to me, as I knew I had a few good things to say that day. What is important here is that your chatbot should understand much more about appropriateness and context than simply the time of day.
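The idea above can be sketched in a few lines: a naive daily trigger fires purely on the clock, whereas the experiment suggested the bot should also weigh context before interrupting. This is only an illustration; the `Context` record, the 6pm schedule and the signals (busy, already dismissed) are all hypothetical, and in a real bot they would come from platform or calendar APIs.

```python
from dataclasses import dataclass
from datetime import time


@dataclass
class Context:
    now: time              # current local time
    busy: bool             # e.g. out for dinner or at a concert
    dismissed_today: bool  # user already swiped today's prompt away


PROMPT_AFTER = time(18, 0)  # placeholder end-of-day schedule


def should_prompt(ctx: Context) -> bool:
    """Send the diary prompt only when the time is right AND the user is free."""
    if ctx.now < PROMPT_AFTER:
        return False  # too early in the day
    if ctx.busy or ctx.dismissed_today:
        return False  # wrong moment: wait rather than nag
    return True


print(should_prompt(Context(time(20, 30), busy=False, dismissed_today=False)))  # True
print(should_prompt(Context(time(20, 30), busy=True, dismissed_today=False)))   # False
```

Even this toy version captures the lesson: the time check alone would have pinged me mid-concert, while the extra context checks make the bot feel considerate rather than needy.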
Overall, the experiment and method of research were very rich and deep. I learnt that I enjoyed having this little assistant helping keep my project on track, for a couple of reasons:
It also validated a few early concepts for me: it was quick to build (with the right tools), it was customisable to a degree and it could be deployed across multiple channels. An added bonus: it was not another app on my phone taking up space.
These three examples are not the only methods of research conducted throughout the project. I invested time in speaking to businesses, conducting one-on-one interviews and reaching out to Emmet Connolly at Intercom, who provided mentorship and guidance throughout the project. I also spent months experimenting with various chatbots, from my personal assistant Clara to a chatbot that interprets photos you send it, and many more.
As a designer, it is important that your research follows through when you move from the research phase to concept generation. I am a firm believer that the research never really ends and should blend seamlessly into idea generation and concept development. This is why I was experimenting with and building on my personal chatbot right up to the end. Every day is an opportunity to learn something new that ultimately influences your design decisions, so why would you ever stop?
The end result of the research phase helped define and develop TikTok, a conceptual chatbot mobile operating system. I will follow up with another post that outlines the concept development process. In truth, in those few weeks I barely scratched the surface of this very exciting and interesting field of interaction design. Below is a sample of one of the deliverables of the major project. In the video I demonstrate how TikTok could help manage your calendar using X.AI as a service that extends your bot's capabilities.
Apparently they say curiosity killed the cat… whoever "they" are has never met a designer like me. If you would like to know more about my curious nature, why not jump over to Twitter for a chat, or even better, meet my new chatbot here? It's very friendly, knows all my background and can even help set up a meeting for us to chat in person!
TextIt, ChatFuel, Facebook Messenger, Axure, Sketch, Principle, Adobe After Effects
User Research, Experience Prototyping, Ethnographic Research, Paper Prototyping, Axure, User Testing, UI Design, Animation