
Google I/O Wrap-Up: New Pixel Phones, Captions for Everything and the Kitchen TV You've Always Wanted

The tech giant's biggest party of the year had plenty of gadgets for consumers, accessibility breakthroughs for people with hearing and speech impairments, and lots of free food for developers

What to Know

  • Google introduced new hardware at its developer conference: a less-expensive Pixel phone and a 10-inch home smart display
  • Accessibility breakthroughs in closed captioning and speech recognition also took center stage
  • Many new features aim to convince users that Google cares about their privacy

Ever since it outgrew San Francisco's Moscone Center, Google's annual I/O conference has embraced the outdoor vibe in its new home at Shoreline Amphitheater, giving out free sunscreen and water bottles to the 5,000 developers who attend.

In a way, the sunny pop-up campus of geodesic domes and air conditioned tents keeps the conference true to its name: I/O stands for "input/output," but it also stands for "Innovation in the Open." And each year, it's where Google brings out what it's been working on for the world to see.

Though past Google I/O keynotes have reached a staggering length of three and a half hours (anyone remember Sergey Brin's skydiving demo of Google Glass?), recent years have seen the presentation split in two: a product-focused keynote for reporters and fans, followed by a super-nerdy developer keynote after a brief break to refuel on Google's famous free lunch.

This year, Google made a return to hardware launches on the I/O stage, with the new $399 Pixel 3a smartphone (in two sizes) that brings some of the high-end Pixel's most-loved features to an affordable, plastic device. Six months after the flagship Pixel 3's launch, reviewers are already calling the cheaper 3a "the only Pixel to buy." It will be sold at every major U.S. carrier store — with the inexplicable exception of AT&T.

The new phones are joined by a bigger home smart display that will compete with those already available from Facebook and Amazon. The 10-inch Nest Hub Max dwarfs its slightly older sibling, the Google Home Hub, which is being renamed the Nest Hub as Google merges its Nest and Google Home brands. The Max has a camera, which Google repeatedly mentioned has a physical "off" switch for those concerned about their privacy.

Privacy was also a big theme in the launch of Android Q — the as-yet-unnamed 10th major release of Android that will roll out to all users this summer. The smartphone OS has new app permissions to give users control over who gets access to sensitive data like their location, and gives pop-up reminders when an app is using that information in the background. These new permissions will send some developers scrambling to update their apps to keep them compatible with Android Q — but Google's VP in charge of the operating system says it's worth it.
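For developers, the practical upshot is that background location access becomes its own permission, separate from foreground access. A rough sketch of the manifest change (the permission name below is the one Android 10 ultimately shipped; even with both declared, the user can still grant only while-in-use access):

```xml
<!-- AndroidManifest.xml: foreground location, declared as before -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />

<!-- New in Android 10 (Q): background location is a separate,
     user-deniable permission -->
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
```

If the user picks "Allow only while using the app," the app loses location access in the background, and background use by apps that do have the permission is what triggers the system's reminders.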

"Location is a really important part of making sure your data is private, and users care about it a lot," Sameer Samat said in an interview after the keynote. He acknowledged the new app permissions are "a really big change, which I'm sure consumers are going to appreciate."

Another privacy-enhancing feature is the move to perform speech processing directly on devices, instead of streaming audio to the cloud. In Android Q, a new accessibility feature called Live Caption uses entirely on-device speech recognition to provide closed captions for videos and podcasts that don't already have them — including videos you record yourself. Live Caption joins another Google accessibility app called Live Transcribe, which is basically closed captioning for the real world.

Google AI product manager Julie Cattiau works with Dmitri, a research scientist who lost his hearing as a child in Russia and uses Live Transcribe every day in meetings. But she's tackling an even tougher problem: helping Dmitri communicate in English.

"He never heard himself speak English," Cattiau said, so his speech can sometimes be difficult to understand.

Cattiau's product, Project Euphonia, aims to build speech recognition that works for people like Dmitri by finding ways to train Google's speech processing AI model on their unique way of speaking. Dmitri recorded 15 hours of audio to make the model work for him, but Cattiau is optimistic that, in time, the team will be able to build working models from far fewer audio samples, and even tackle speech recognition for people with degenerative disorders like ALS.

"I didn't know my grandmother, but she actually died from ALS, so I know about the disease from my mom," Cattiau said.

Cattiau said she's encouraged by the success Dmitri is already having with her team's work.

"In pretty much every meeting, he has multiple phones open," she said. "If some people in the room may not be used to his speech, they can just read the transcription instead. … It's been amazing to see him follow a conversation, sometimes between multiple people, just by using a mobile application that transcribes everything around him."
