While it is difficult to imagine Facebook without photos, for millions of blind and visually impaired people, that has been the reality. Now all that is about to change; Facebook announced last week that it would use Artificial Intelligence (AI) to automatically describe the content of photos to blind and visually impaired users.
Created by Facebook’s Accessibility Team, the feature, called ‘automatic alternative text’, recognizes objects in photos using machine learning. Machine learning builds artificial intelligence by training algorithms on examples so they can make predictions about new data. If you show a piece of software enough pictures of a dog, for example, in time it will be able to identify a dog in a photograph it has never seen.
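The "show it enough dog pictures" idea can be sketched in a few lines. This is a deliberately simplified illustration, not Facebook's system: real image recognition uses deep neural networks over raw pixels, while here each "photo" is reduced to two made-up feature numbers, and the classifier simply assigns the label whose training average is closest.

```python
# Toy learn-from-examples classifier: average each label's training
# features, then predict by nearest average. The feature names
# (furriness, ear pointiness) are hypothetical stand-ins for what a
# real network would learn from pixels.

def centroid(points):
    """Average a list of 2-D feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the given features."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], features))

# Training "photos": (furriness, ear_pointiness) -> label
examples = [
    ((0.9, 0.8), "dog"), ((0.8, 0.9), "dog"),
    ((0.2, 0.1), "car"), ((0.1, 0.2), "car"),
]
model = train(examples)
print(predict(model, (0.85, 0.75)))  # a new furry photo -> "dog"
```

The more (and more varied) labeled examples the model sees, the better its averages capture each category, which is why these systems improve with data.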
Today, the primary way that blind people access the internet is through a screen reader — software that describes the elements displayed on a screen (a link, a button, some text, and so on) and makes it possible to interact with them. But much of the web has long been out of reach: a screen reader can report that an image is present, yet the message conveyed by a picture of a smiling child is lost on a blind person unless someone has described it.
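The gap a screen reader faces can be made concrete with a small sketch. This is not any real screen reader's API, just a hypothetical walk over page elements: links, buttons, and text carry their own words to speak, but an image is only as informative as the alternative text attached to it.

```python
# Minimal sketch of screen-reader-style announcements (hypothetical
# element format, not a real accessibility API).

def announce(element):
    """Return the phrase a screen reader might speak for one element."""
    kind = element["type"]
    if kind in ("link", "button", "text"):
        return f'{kind}: {element["content"]}'
    if kind == "image":
        # With no alt text, all that can be said is that a photo exists.
        return f'image: {element.get("alt", "photo")}'
    return kind

page = [
    {"type": "link", "content": "Home"},
    {"type": "image"},                           # no description: opaque
    {"type": "image", "alt": "a smiling child"}, # described: meaningful
]
for el in page:
    print(announce(el))
```

The second element is exactly the case Facebook's feature targets: automatically generated descriptions fill in the `alt` that photo uploaders rarely write by hand.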
To help people who cannot see photos become part of the community and get the same enjoyment and benefit from the platform as sighted users, Facebook’s Accessibility Team turned to the company’s artificial intelligence division, which is building software to recognize images automatically. Though the technology has been around for a while, powering keyword searches in programs such as Google Photos and Flickr, it is still prone to errors, and millions of objects remain to be parsed.
Nevertheless, the team is already pushing hard on two new tools: recognizing objects in videos, a technology it first demonstrated in November; and something it calls "visual Q&A," which will allow users to ask questions about pictures and receive an answer from Facebook’s AI. You might ask who is in a photo, for example, and it would tell you the names of the Facebook friends who appear in it.