How the Deaf Community Can Get the Same Opportunities as the Mainstream Population Afforded by Technological Advances and Innovation
In 2017, the mainstream audience has an abundance of technological options when it comes to user experience. Deaf people’s options are more limited, especially when the technology leans heavily towards audio-only input and output.
A few examples of audio-centric technology include the iPhone's Siri, podcasts, and new streaming video platforms that lack closed captioning.
From Rotary Phones to Siri
In the old days, deaf people could not use the phone at all. No iPhone. No TTY. No amplifiers for the hard of hearing. In the era of rotary phones, deaf people were completely left out of phone conversations.
An excerpt from Wikipedia shows how rotary phones worked:
“To dial a number, the user puts a finger in the corresponding finger hole and rotates the dial clockwise until it reaches the finger stop. The user then pulls out the finger, and a spring in the dial returns it to the resting position. For example, if the user dials "6" on a North American phone, electrical contacts wired through the cam mechanism inside the phone will open and close six times as the dial returns to home position, thus sending six pulses to the central office.”
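The pulse-dialing scheme described above can be sketched in a few lines of Python. This is purely an illustration of the signaling idea, not actual telephone hardware logic; the function names are hypothetical.

```python
# Illustrative sketch of pulse dialing: each digit is sent as a burst of
# electrical pulses while the dial spring returns to its resting position.
# On North American phones, "1" through "9" send 1-9 pulses, and "0" sends 10.

def pulses_for_digit(digit: str) -> int:
    """Number of pulses the dial sends for a single digit."""
    n = int(digit)
    return 10 if n == 0 else n

def dial(number: str) -> list:
    """Return the sequence of pulse bursts the central office would count."""
    return [pulses_for_digit(d) for d in number if d.isdigit()]

print(dial("605"))  # [6, 10, 5] -- dialing "6" sends six pulses
```

Notice that the phone never transmits the digit itself, only a count of pulses, which is exactly why this system offered nothing a deaf caller could see or type.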
Meanwhile, if deaf people wanted to get in touch with a friend who lived six hours away, they either mailed a letter or drove six hours to the friend's house, hoping the friend was home. If the friend wasn't home, they'd write a note, leave it at the door, and drive back home.
This was the case until about the 1970s, when the teletypewriter (TTY) was adapted for telephone use and mass-produced. Finally, deaf people could communicate with each other from afar without having to mail a letter or potentially waste a day tracking down their friends.
From the 1970s until about 2010, deaf people enjoyed a “golden age” of telecommunication: calling each other via TTY, texting on Sidekick and BlackBerry phones, using relay services to reach any company's customer service representatives, and chatting with friends over videophone (VP).
Not being able to use the phone was almost a non-issue.
Then Siri came along in April 2010.
Siri is a voice recognition app: it takes spoken commands from the user, and the phone carries them out. For example, while driving, a person could say “show me a map and give me directions to the nearest McDonald’s”, and the iPhone would display the quickest route to the nearest McDonald’s.
This neat innovation deviated from previous technological advances in one important manner:
It does not factor in the needs of deaf people. Some deaf people either cannot speak at all or cannot speak clearly enough for Siri to understand their commands.
As it currently is, Siri caters to the mainstream market with clear speech. To address the needs of deaf people, Siri needs to be able to recognize commands given in American Sign Language, whether given with two hands or just one hand. While driving, a deaf person could, with one hand, sign to the phone “show me a map and give me directions to the nearest McDonald’s” and it would show the quickest route there.
Another approach would be to make Siri more flexible at understanding a deaf person’s voice commands if that person does not use sign language. Even when a deaf person's speech is easily understood by other human beings, Siri often struggles with it. The recognition parameters need to be expanded to cover the common voice characteristics of deaf speakers, so that it is easier for deaf people to “train” Siri to understand their voice commands.
CRT Televisions to Streaming Services
The earliest televisions used cathode ray tubes (CRT). Starting in 1980, deaf people used captioning decoders with their CRT TVs to display closed captions on their favorite shows or movies.
In 1990, the U.S. government passed the Television Decoder Circuitry Act, which mandated that, starting in 1993, all televisions manufactured for sale in the U.S. must contain a built-in caption decoder as long as the picture tube is 13" or larger. Deaf people no longer needed a decoder if they bought a new TV after 1993.
Through the late 2000s, LCD and plasma TVs flooded the market, with prices dropping every year. Thanks to the aforementioned U.S. law, all these new TVs had caption decoders built in. Having closed captions on almost all TV shows and movies was a non-issue.
However, with the advent of high-speed broadband Internet and 4G / LTE network speeds, video streaming is on the rise. In fact, video streaming's share of broadband traffic has grown rapidly over the last few years. More video content is being shared than ever.
Deaf people have been sharing more video content than perhaps any other group. We are using YouTube, Vimeo, Vine, and Facebook Live, plus commercial TV and movie streaming services like Netflix, Hulu, and Amazon Video.
Unfortunately for deaf people, captions are not as widely available on these streaming services as they are on TV. On Netflix, Hulu, and Amazon Video, not all TV shows or movies were captioned. YouTube, commendably, has tried to automate captions on most videos, but in many cases the captions don’t match what was actually said and thus don’t make any sense.
Vine and Vimeo, as well as YouTube, do allow video owners to edit captions after posting their videos, but how many owners actually do this? Is it a good idea to leave accommodation up to the owners, rather than achieving it through regulatory means?
The good news is that things are still changing for the better. More and more videos are being produced in sign language, though for now this trend has largely been limited to Facebook Live.
Radio Shows to Podcasts
Radio shows and podcasts are among the most difficult technologies for deaf people to get accommodations for. Historically, many people enjoyed radio shows and the famous Fireside Chats of President Franklin D. Roosevelt.
Deaf people could not benefit from these radio shows or fireside chats unless a hearing person was willing to interpret for them. There were not many interpreters with sufficient qualifications back then, either. In the radio show realm, deaf people have been completely left out.
Today, we have podcasts instead of radio shows. Deaf people have the same problem with podcasts because they cannot listen to them.
However, deaf people have the option to directly request written transcripts of these podcasts from the creators. Not all requests would be granted, of course. Transcripts are nice, but it takes a very dedicated volunteer to create them. More likely, someone has to pay the market rate for a podcast transcript. Occasionally a group of volunteers will band together and split up the work.
Therefore, most podcasts simply don’t have transcripts.
Today, there are a few podcast transcription services, but they aren’t free. A hearing person can listen to a 30-minute podcast for free, while a deaf person would likely have to pay anywhere between $30 and $150 for a transcription of the same podcast.
Where Do We Go From Here?
These amazing advancements suit those who can hear; the deaf community should have the same opportunities at minimal extra cost.
Some of these advancements are easy to make accessible to deaf people, such as captioning for streaming services and promoting the use of American Sign Language in video content.
Unless innovative and cost-effective solutions can be found, other advancements, such as podcasts and Siri, will be difficult and/or expensive to adapt to the needs of the deaf community.