Unlike other companies attempting sweeping transformations with artificial intelligence, Apple is using the emerging technology to improve basic functions in its new gadgets. Without ever saying the words "artificial intelligence" (a phrase Google is fond of), Apple showcased a new line of iPhones and a new Apple Watch. Both new-generation devices include improved semiconductor designs that power the new AI features, which largely improve everyday functions like taking a call or snapping a better photo.
Artificial intelligence didn't come up at Apple's June developer conference either, but it has for months been quietly reshaping the company's core software products behind the scenes. In contrast, Microsoft and Alphabet's Google have set ambitious transformation goals for their AI efforts, even as industry leaders warn about the potential harms of unchecked development of new tools such as generative AI.
Apple built the Watch Series 9 with a new S9 SiP (System in Package) that has improved data-crunching capabilities, notably a four-core "Neural Engine" that can process machine learning tasks up to twice as quickly. Neural Engine is Apple's name for the building blocks in its chips that accelerate AI functions.
The AI components of the watch chip make Siri, Apple’s voice assistant, 25 percent more accurate.
But including the machine learning components also let Apple launch a new way to interact with the device: people can "double tap" by pinching the fingers of their watch hand to answer or end phone calls, pause music, or pull up information like the weather.
The idea is to give people a way to control the Apple Watch when their non-watch hand is busy holding a cup of coffee or walking a dog. The feature works by using the new chip and machine learning to detect subtle movements and changes in blood flow when users tap their fingers together.
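As a rough illustration only (this is not Apple's implementation, and every threshold and name below is invented), a detector of this kind could combine a wrist-motion signal from the accelerometer with a blood-flow signal from the optical heart sensor, and flag a pinch only when both change together:

```python
# Hypothetical sketch of a pinch-gesture detector. It assumes two
# sensor streams sampled over a short window: accelerometer magnitude
# and an optical blood-flow (PPG) reading. All values are illustrative.

def detect_double_tap(accel_window, ppg_window,
                      accel_thresh=1.5, ppg_thresh=0.3):
    """Return True if the window looks like a finger double tap.

    A tap would show up as two short spikes in wrist motion that
    coincide with a change in blood flow as the fingers press together.
    """
    baseline = sum(accel_window) / len(accel_window)
    # Count motion samples well above the window's average motion.
    spikes = sum(1 for a in accel_window if a > baseline * accel_thresh)
    # Measure how much the blood-flow signal varies in the same window.
    ppg_range = max(ppg_window) - min(ppg_window)
    return spikes >= 2 and ppg_range > ppg_thresh

# A calm window (no tap) versus one with two motion spikes plus a
# blood-flow change:
calm = detect_double_tap([1.0] * 20, [0.5] * 20)                # False
tap = detect_double_tap([1.0] * 8 + [3.0, 1.0, 3.0] + [1.0] * 9,
                        [0.4] * 10 + [0.8] * 10)                # True
```

Requiring both signals is what would let such a detector ignore ordinary wrist motion, like walking, that moves the watch without the finger pressure of a pinch.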
The iPhone maker also showed off improved image capture for its lineup of phones. The company has long offered a "portrait mode" that blurs the background, using computing power to simulate the shallow depth of field of a large camera lens. But users had to remember to turn the feature on. Now the camera automatically recognizes when a person is in the frame and gathers the data needed to blur the background later.
Apple is far from the only smartphone maker to add AI to its hardware. Google’s Pixel phones, for example, allow users to erase unwanted people or objects from images.
— Written with inputs from Reuters
Shubham Verma