Using Artificial Intelligence to Help Blind People ‘See’ Facebook
Thursday, 07 April 2016 - 13:09
Every day, people share more than 2 billion photos across Facebook, Instagram, Messenger and WhatsApp.
While visual content provides a fun and expressive way for people to communicate online, consuming and creating it poses challenges for people who are blind or severely visually impaired. With more than 39 million people who are blind and over 246 million who have a severe visual impairment, many people may feel excluded from the conversation around photos on Facebook. We want to build technology that helps the blind community experience Facebook the same way others enjoy it.
That’s why today we’re introducing automatic alternative text.
Automatic alternative text, or automatic alt text, is a new feature that generates a description of a photo using advances in object recognition technology. People using screen readers on iOS devices will hear a list of items a photo may contain as they swipe past photos on Facebook. Before today, people using screen readers would hear only the name of the person who shared the photo, followed by the word “photo,” when they came upon an image in News Feed. Now, thanks to automatic alt text, we can offer a richer description of what’s in a photo. For instance, someone could now hear, “Image may contain three people, smiling, outdoors.”
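To make the idea concrete, here is a minimal sketch of how a screen-reader description like the one above might be assembled from object-recognition output. The function name, the tag labels, the confidence scores, and the threshold are all illustrative assumptions, not Facebook’s actual API or values:

```python
# Hypothetical sketch: turning object-recognition tags into alt text.
# Labels, scores, and the 0.8 threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.8  # only report tags the model is confident about


def build_alt_text(tags):
    """tags: list of (label, confidence) pairs from a recognition model."""
    confident = [label for label, score in tags if score >= CONFIDENCE_THRESHOLD]
    if not confident:
        return "photo"  # fall back to the pre-existing behavior
    return "Image may contain: " + ", ".join(confident)


# Example model output for a group photo taken outside
predictions = [("three people", 0.95), ("smiling", 0.87),
               ("outdoors", 0.91), ("dog", 0.42)]
print(build_alt_text(predictions))
```

Here the low-confidence “dog” tag is dropped, reflecting the conservative phrasing (“may contain”) and the trade-off between saying more and saying it reliably.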
This is possible because of Facebook’s object recognition technology, which is based on a neural network with billions of parameters, trained on millions of examples. Each advance in object recognition means the Facebook Accessibility team can make the platform accessible to even more people. When people are connected, they can achieve extraordinary things as individuals and as a community — and when everyone is connected, we all benefit.