Amazon Alexa gets generative AI boost: Amazon hosted its annual Devices and Services 2023 event last night. At the event, Amazon announced that it is supercharging its virtual assistant, Alexa, with generative AI capabilities, bringing the power of large language models (LLMs) to users’ homes. For users, this means a smarter and more human-like Alexa that understands the flow of natural conversation and doesn’t need to be called out every time you interact with it. It also means that Alexa can now understand more complex queries, something along the lines of, “Alexa, turn off the lights in the master bedroom while closing the curtains at bedtime every day.” Earlier, a query like this would have had to be split into two separate sets of commands under Routines, a feature that lets Alexa users club a bunch of requests under a single command.
Beyond the obvious benefits to users, supercharging Alexa with generative AI capabilities and adding support for thousands of new devices also puts Amazon’s virtual assistant leagues ahead of Google Assistant, which has yet to get any of Google Bard’s star power. Interestingly, Google has already connected Bard, its generative AI chatbot, to many of its popular products and services, such as Workspace, YouTube, Google Maps, Google Drive and Google Docs, to name a few. However, it is yet to give Google Assistant a much-needed Bard upgrade, which in turn puts the Google Assistant-powered smart home ecosystem on the back foot compared to Alexa, especially as Amazon is making many of Alexa’s new features available on Echo devices dating back to 2014. It is entirely possible that Google will introduce some of Bard’s capabilities into Google Assistant at its annual Google I/O 2024 conference next year. But by then, Alexa will already be miles ahead of Google’s virtual assistant.
While we mull over the possible implications of Amazon’s announcements from last night, here is a breakdown of the most interesting features that Alexa users across the globe will get to use in the coming days.
Understanding users’ body language
Up until now, Amazon Alexa was capable of understanding only spoken words. But soon, it will also be able to understand who is addressing it, along with their eye contact and gestures.
Alexa has opinions
Another LLM-based feature coming to Alexa will give it opinions, just like the ones humans have. “…Alexa, powered by this LLM, will have opinions—and it will definitely still have the jokes and Easter eggs you’ve come to love from Alexa,” Amazon said while making the announcement last night. As far as availability is concerned, this feature will soon roll out to Echo device users in the US, including on the very first Echo devices that Amazon shipped in 2014.
No need to wake up Alexa each time
Up until now, Alexa users needed to say the wake word, ‘Alexa’, every time they interacted with the assistant; no two consecutive requests could be made without calling out Alexa again. But that is about to change. At the event, Amazon announced that ‘users no longer need to say Alexa over and over again’. This is an opt-in feature that will allow users enrolled in Visual ID to start a conversation with Alexa simply by facing the screen.
Alexa’s speech gets more natural
Alexa’s speech is also becoming more natural. Amazon said it is using a new speech recognition system that adjusts to users’ natural pauses and hesitations to deliver a more free-flowing conversation. “Alexa now adjusts its tone and emotion based on context,” Amazon said. This feature will arrive early next year.
Amazon is also rolling out a new feature called Eye Gaze on Alexa. This new feature will allow customers to gaze at their Amazon tablet to perform pre-set Alexa actions, such as playing music and shows, controlling their home environment, and even calling loved ones. This feature will be available to Alexa users in the US later this year.
Additionally, Alexa is getting a new Call Translation feature that will enable it to caption audio and video calls in real time, helping call participants break down language barriers and communicate more effectively. The feature will also benefit deaf and hard-of-hearing customers.
As far as availability is concerned, Amazon says this feature will be available to Echo Show and Alexa mobile app customers in the US, Canada, Mexico, the UK, Germany, France, Italy, and Spain later this year, in over 10 languages, including English, Spanish, French, German, and Portuguese.
Lastly, Alexa is getting a new feature that will let customers create a digital map of their house and pin their connected devices to it. With Map View, users will get an at-a-glance view of their smart home’s status. “You can also control devices from Map View, so if you’ve just climbed into bed and want to turn off the lights downstairs, you can do that effortlessly,” Amazon added.
As far as availability is concerned, Map View is an opt-in feature: users will be able to choose which rooms are added to their floor plan and select which devices show up. The feature will be available later this year in the US on select smartphones.
Author Name | Shweta Ganjoo