The three ways Alexa is going to get smarter in the next decade

Making machines artificially intelligent is a time-consuming practice of collecting and manually labelling data for AIs to learn from. Today, researchers are enabling machines to learn new concepts continuously from far fewer data samples; in 2021, this will continue, as AI systems rely less on human labelling and more on teaching themselves directly from interactions with users.
This will make a big difference to the “intelligence” of AI assistants. Today, it is second nature for us to complete transactional requests with AI assistants, issuing commands such as “Set the thermostat to 20°C” or “Navigate to Wembley Stadium”. But at the same time, we are left yearning for more human-like conversational experiences, such as “Any ideas for this weekend?” or “Find me cameras under £400”. As we enter the next decade of AI assistants, advances in deep-learning architectures and associated learning techniques will take us towards this more natural way of interacting.


The first of these advances will be semi-supervised learning, in which a small amount of labelled data is combined with large amounts of unlabelled data and then used to train an AI system. For example, in an Alexa initiative for improving automatic speech recognition, a large “teacher” model was first trained on thousands of hours of labelled speech data. Then, the teacher was used to train a “student” model on millions of hours of unlabelled data. The student eventually outperformed the teacher in accuracy by more than ten per cent.
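The teacher-student idea can be sketched in a few lines. The toy below is an illustration only, not Alexa’s actual system: the “teacher” is a fixed logistic model standing in for a large network already trained on labelled data, and the “student” learns purely from the teacher’s soft predictions on unlabelled inputs, with no human labels involved.

```python
# A minimal sketch of teacher-student training on unlabelled data.
# All model choices here (1-D inputs, logistic models) are illustrative
# assumptions, not the architecture Alexa actually uses.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def teacher_predict(x):
    # Stand-in for a large model already trained on labelled data.
    return sigmoid(3.0 * x - 1.5)

def train_student(unlabelled_xs, lr=0.5, epochs=200):
    w, b = 0.0, 0.0  # student parameters, trained only on soft labels
    for _ in range(epochs):
        for x in unlabelled_xs:
            target = teacher_predict(x)  # soft label from the teacher
            pred = sigmoid(w * x + b)
            grad = pred - target         # d(cross-entropy)/d(logit)
            w -= lr * grad * x
            b -= lr * grad
    return w, b

random.seed(0)
unlabelled = [random.uniform(-2, 2) for _ in range(200)]
w, b = train_student(unlabelled)
# The student now closely tracks the teacher's predictions, despite
# never having seen a human-provided label.
```

In practice the student is trained on far more unlabelled data than the teacher ever saw labelled, which is how it can end up more accurate than the teacher itself.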
The second advance will be the emerging field of self-supervised learning, where the AI learns by predicting one part of the input from what it knows about another – without any human teacher. Google’s Bidirectional Encoder Representations from Transformers (BERT), for example, uses large unlabelled corpora of text to “pre-train” a general natural-language model, which can then be optimised for a specific task using a small amount of labelled data.
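The core of this objective is that the training signal comes from the data itself: hide one word and predict it from its neighbours. The sketch below illustrates that masked-prediction idea with simple bigram counts as the “model”; BERT itself uses a large Transformer network, but the absence of human labels is the same.

```python
# A toy illustration of the self-supervised masked-prediction objective:
# no human labels are needed, because the target is a hidden word that
# the model must recover from context. The corpus here is made up for
# illustration.
from collections import Counter, defaultdict

corpus = [
    "set the thermostat to twenty",
    "set the alarm to seven",
    "set the timer to ten",
]

# "Pre-training": count which word follows each context word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_masked(prev_word):
    # Fill in a masked word given the word before it.
    return follows[prev_word].most_common(1)[0][0]

guess = predict_masked("set")  # the word the corpus says follows "set"
```

A real system would then fine-tune the pre-trained model on a small labelled set for a downstream task, such as intent classification.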
The third, and perhaps most significant, advance will be AI self-learning from users’ feedback signals. If an AI assistant does something wrong, users may repeat the request or paraphrase it to make their intent clearer. Using these feedback signals, AI assistants, including Alexa, can detect errors in their interpretation and correct them by reformulating user queries based on context.
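One simple way to picture this is as mining users’ own corrections from interaction logs. The sketch below assumes a hypothetical log of (failed request, successful rephrase) pairs; the function names and data are invented for illustration, and production assistants use far richer context, but the core idea of turning rephrases into automatic rewrites is the same.

```python
# A minimal sketch of learning query reformulations from implicit user
# feedback. The interaction log and its entries are hypothetical.
from collections import Counter, defaultdict

# Each entry: a request the assistant got wrong, followed by the
# user's rephrasing that succeeded.
interaction_log = [
    ("play ambiance", "play ambient music"),
    ("play ambiance", "play ambient music"),
    ("play ambiance", "play ambience sounds"),
    ("call mum", "call mother"),
]

rewrites = defaultdict(Counter)
for failed, rephrased in interaction_log:
    rewrites[failed][rephrased] += 1

def reformulate(query):
    # Rewrite a query to its most common successful rephrasing, if any;
    # otherwise leave it unchanged.
    if query in rewrites:
        return rewrites[query].most_common(1)[0][0]
    return query
```

The key property is that no human annotator ever labels these examples: the user’s repeat or rephrase is itself the training signal.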
In 2021, we will see more AI assistants teaching themselves with minimal human intervention. This will be a giant leap forward in our quest for AIs to learn concepts, acquire common sense and eventually reason like humans themselves.


Rohit Prasad is vice president and head scientist, Alexa Artificial Intelligence at Amazon