Google’s latest accessibility features include a potentially clever use of AI. The company is updating its Lookout app for Android with an “image question and answer” feature that uses DeepMind-developed AI to elaborate on descriptions of images with no captions or alt text. If the app sees a dog, for example, you can ask (via typing or voice) if that pup is playful. Google is inviting a handful of people with blindness and low vision to test the feature, with plans to expand the audience “soon.”
It will also be easier to get around town if you use a wheelchair — or a stroller, for that matter. Google Maps is expanding wheelchair-accessible labels to everyone, so you’ll know if there’s a step-free entrance before you show up. If a location doesn’t have an accessible entrance, you’ll see an alert as well as details for other accommodations (such as wheelchair-ready seating) to help you decide whether a place is worth the journey.
A handful of smaller updates could still be helpful. Live Caption for calls lets you type responses that are read aloud to recipients. Chrome on desktop (and soon on mobile) now spots URL typos and suggests corrected alternatives. And as previously announced, Wear OS 4 will include faster and more consistent text-to-speech when it arrives later in the year.
Google has been pushing hard on AI in recent months and launched a deluge of features at I/O 2023. The Lookout upgrade might be one of the most useful, though. While AI-generated descriptions are helpful on their own, the Q&A feature can provide details that would normally require another person’s input. That could boost independence for people with vision issues.