Until now, we have been forced to learn the language of computers. But in the technological revolution currently underway, computers are finally learning to speak ours. We invited the community to join us for an inspirational afternoon with different perspectives on conversational interfaces, ranging from designing AI personality to machine learning.

Conversational interfaces are described by many as the next big digital frontier to conquer, and we are entering a new world where you could have a true virtual assistant that not only makes you smarter, but also knows you the way your closest peers do.

Conversational interfaces have been around for years, but let’s face it: So far, they’ve been pretty dumb. Even the more sophisticated voice interfaces have relied on speech but somehow missed the power of dialogue. Ask any thought leader of today, though, and you will hear the same refrain over and over: It’s different now. Nearly every major tech company – from Amazon to WeChat to Facebook to Google – is chasing the sort of conversational user interface that you have experienced in the movies.

Much of the knowledge on conversational interfaces is still scattered throughout a somewhat uncoordinated community. And that’s a shame if you ask us. That is why we decided to gather the most curious minds in conversational interfaces, invite everyone to explore together and share the stories as part of an exploration we call 'Do you speak human?'

On October 17th, we invited the community to join us at SPACE10 for an inspirational afternoon with different perspectives on conversational interfaces. There were talks, beers and snacks, and we also got the opportunity to get hands-on with some of the best examples of voice- and text-based conversational interfaces of today.

'Do you speak human?' is a lab where we explore the emerging potentials of conversational interfaces and AI. Keep yourself updated and follow the exploration in our publication on Medium.

The state of conversational interfaces

Kaave provided key background information on where we are with conversational interfaces today. He talked about what the important players are up to, what products and platforms are out there to buy and develop for, and what it actually means to design for something that has no visual interface.

Key takeaways from Kaave's talk:

1. Opposite roles: The graphical user interface (GUI) still requires us to learn a computer’s language. With conversational interfaces, computers are finally learning how to speak ours.

2. Accessibility: 285 million people in the world are visually impaired. Conversational interfaces will give them access to technology like never before.

3. Apps vs. messaging: App downloads have gone down drastically, and recent studies show that 65% of US smartphone users download zero new apps per month. Likewise, they spend 50% of their time in their single most-used app. Recent studies also show that messaging apps have surpassed social networks in number of users.

4. Never forget: Conversational interfaces are more than just chatbots.

5. The purpose of machines: For a machine to have purpose, it must either do something for us or enable us to do something.

6. Designing something you can't see: When it comes to conversational interfaces, the purpose of design is to create a better experience around data. "It's good design when people know it’s a bot but still feel the need to say ‘thank you’."

7. The brand 2.0: “The brand is undergoing a paradigm shift. It's no longer a mark. It's not even a voice. It's an intelligent entity, a personality, an algorithm capable of learning and building relationships.” (Fast Company)

Words to remember: "Building a bot is easy. Building an intelligent one is hard."

The ethics of artificial intelligence

How can we develop a conversational interface that doesn’t destroy humanity? How does AI see the world, nature, and people? And what types of rights will artificial intelligence demand in the future? To get us thinking about the ethics of artificial intelligence, Bas took us on a short journey through his art projects Countdown to Singularity and Siri Unlocked.

Key takeaways from Bas' talk:

1. 2001: A Space Odyssey: How does an artificial superintelligence like HAL 9000 observe the world, nature, and people?

2. The Singularity: Around 2045, artificial intelligence will exceed human intellectual capacity and control. This event, known as the Singularity, will forever change the course of history and could spell the end of the human race.

3. Elon Musk: “Artificial intelligence is the biggest existential threat to mankind.”

4. Countdown to Singularity: A digital art project raising awareness of the potential dangers of AI by inviting a group of multidisciplinary artists to bring the most prominent doom scenarios to life.

5. It is not too late: At this point in time, we are still in a position to develop AI that carries human values and ethics.

6. Siri Unlocked: A digital art installation examining the rights, responsibilities and free will of our future robotic counterparts through Apple’s intelligent personal assistant Siri.

Words to remember: "What type of rights will our future robotic counterparts demand?"

Designing AI personality

New York startup x.ai set out to solve one problem – the ridiculous amount of time it usually takes to schedule meetings. The solution: two AI-powered personal assistants, Amy and Andrew, whom you can cc into the email conversation when it’s time to get something on the calendar. Diane Kim, AI Interaction Designer at x.ai, came to talk about some of their early design decisions when it came to building the personality of the two assistants.

Key takeaways from Diane's talk:

1. The importance of the human touch: Users should be able to communicate with the AI using natural language, and there should be a natural response back to the user.

2. Interaction design for the conversational interface: Using clear language and a precise choice of words allows us to guide people to take certain actions. “While we are spending time making Amy and Andrew sound friendly enough – like a human – we work under data science constraints. That’s why this becomes a design task, and why it’s not only about writing dialogue.”

3. Building personality based on the task at hand: Seeing as Amy and Andrew's job is to schedule meetings, their most important character traits are being polite, professional, friendly and clear. But people don’t talk the same way over text as they do over email, so when building personality, you must also consider the designated platform.

4. Designing layers of human empathy: It’s important to find situations where it’s possible to add human empathy. “It’s easy to forget that AI is just lines of code. We see them as human.”

Words to remember: "To make the conversation feel seamless, the software needs to be invisible."

Semi-supervised machine learning

Copenhagen- and San Francisco-based company Corti Labs believe in a future where all important decisions are seamlessly validated in the background. They are on a mission to empower people to make better decisions in live conversations by using conversation and text to reason about a particular scenario, better understand what is going on, and forecast what will happen. Lars Maaløe, a machine learning expert at Corti Labs who is currently doing his PhD in machine learning and deep learning at DTU, came by to talk about how machine learning and deep learning actually work, and what the possibilities are.

Key takeaways from Lars' talk:

1. Deep learning is about using historical data for training, analyzing and forecasting.

2. It doesn’t have an on/off switch: People think that super-intelligent AI and deep learning will suddenly arrive and take over the world. It has been here for a long time, and it’s much more subtle than that.

3. There are three paradigms within machine learning: supervised learning, where humans label the data and define the classes; unsupervised learning, where the computer divides the data into groups on its own but can’t define the classes; and semi-supervised learning, where a small amount of human-labelled data is combined with a large amount of unlabelled data (see the sketch after this list).

4. At Corti, they use deep learning to do end-to-end learning on speech and text, and build systems that are able to classify the incoming data and use it to generate both feedback and new systems. “We want to capture it all.”
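To make the three paradigms concrete, here is a minimal sketch in Python using scikit-learn. The toy dataset, the choice of models and the tiny labelled subset are our own illustrative assumptions, not anything Corti actually uses:

```python
# A minimal sketch of the three machine learning paradigms with scikit-learn.
# The toy dataset and models are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 1. Supervised: humans have labelled every example, and the classes are known.
supervised = LogisticRegression(max_iter=1000).fit(X, y)

# 2. Unsupervised: no labels at all; the computer divides the data into groups itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 3. Semi-supervised: only a small fraction (here 5%) is labelled by humans;
#    everything else is marked -1 (unknown) and the model learns from both.
y_partial = np.full_like(y, -1)
labelled = np.random.RandomState(0).choice(len(y), size=50, replace=False)
y_partial[labelled] = y[labelled]
semi = SelfTrainingClassifier(LogisticRegression(max_iter=1000)).fit(X, y_partial)

print("supervised accuracy:", supervised.score(X, y))
print("semi-supervised accuracy:", (semi.predict(X) == y).mean())
```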

Words to remember: "Now you can finally produce an AI agent who can learn how to speak – and actually speak."

Hybrid messaging interfaces

Tomaz Stolfa is one of the guys behind San Francisco startup Layer. They provide developer-friendly messaging toolkits, like UI kits, SDKs and APIs, that put messaging at the centre of the user experience, whether it's on mobile or web. Tomaz' talk focused on the importance of messaging, what makes a hybrid messaging interface, and what to think about when building messaging experiences.

Key takeaways from Tomaz' talk:

1. The power of messaging in apps: Messaging is the most familiar user interface. It is also an important driver for business, seeing as it fuels the core loop of content, conversations and notifications. As Mary Meeker put it: "Messaging and notifications make up the key layers of every meaningful mobile app."

2. Working with messages as mini applications: A conversation of today handles everything from messages to GIFs to pure graphical UIs, like music, games, purchases, food delivery, bookings and payments (see the sketch after this list). “You need to combine a graphic UI with a conversational one for it to make sense.”

3. Is this the end for brands and branded websites as we know them? No, brands will still exist and keep building trust. Looking at Facebook Messenger, the best examples are within customer support, not in building brand relationships.
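As a rough sketch of the "messages as mini applications" idea, here is a hypothetical hybrid message in Python. The structure and field names are our own assumptions for illustration; they are not Layer's actual message format:

```python
# Hypothetical sketch of a hybrid message: a plain-text part plus a structured
# part that a client can render as interactive UI inside the conversation.
# Field names are illustrative assumptions, not Layer's API.
hybrid_message = {
    "sender": "delivery-bot",
    "parts": [
        {"type": "text/plain",
         "body": "Your delivery is scheduled for Thursday, 9-12."},
        {"type": "application/vnd.example.delivery-card+json",  # hypothetical type
         "body": {"order_id": "12345",
                  "actions": ["confirm", "reschedule", "track"]}},
    ],
}

def render(message):
    """Render rich parts as cards, falling back to plain text otherwise."""
    for part in message["parts"]:
        if part["type"] == "text/plain":
            print(part["body"])
        else:
            print("[card] actions: " + ", ".join(part["body"]["actions"]))

render(hybrid_message)
```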

Words to remember: "We need to ask ourselves where we are spending too much human time where we shouldn’t, and in what situations humans are just fundamentally better."
