A screen reader primer for Daredevil fans

by Christine | Oct 15, 2012 | Blindness & Disability

I was going to start this post by saying something along the lines of: “Usually, for White Cane Day (i.e. October 15 and also known as White Cane Safety Day or International White Cane Day), I post something educational and blindness-related.” Then I realized that would be a lie. Because apparently, I usually forget and have only actually gotten around to it twice. The first time was in 2008 when I wrote a post about the history of the white cane – in the Daredevil comic and the real world – and the year after, I wrote a brief primer on Braille history. And even then, I didn’t get to it until two days late (I suck, I know). I don’t know what the heck happened to all those years in between, but I’ll try not to dwell on how fast time passes without my even noticing.

Anyway, in this case, it’s not too late to make amends and get back on track. So, for this post in the “educational and blindness-related” category, I thought I’d talk a little bit about screen reader technology and its history. Mainly because it’s a pretty cool topic and also has the advantage of feeling somewhat current now that Matt finally has his own computer (see panel from Daredevil #5 below, art by Marcos Martín) and Mark Waid has expressed a general interest in dealing with the 21st-century consequences of Matt’s blindness. As for the general lack of assistive technology in Daredevil historically, see my previous post on the topic.

Matt talks about his computer, from Daredevil #5 by Mark Waid and Marcos Martín

Screen reader basics

So, what are screen readers? Well, in a nutshell, a screen reader (usually a piece of software that runs on the user’s otherwise standard computer) extracts the information that applications send to the screen – text, menus, buttons and so on – which obviously exists whether there is a monitor connected to the computer or not. The information gathered is then interpreted and sent to some kind of output device. The user then interacts with the content presented to him through various keyboard commands (a mouse is pretty useless if you don’t know what you’re clicking on).
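For the more technically minded, here’s a toy sketch in Python of that basic pipeline – capture some text, interpret it, and hand it to a speech synthesizer. It’s purely illustrative: the “screen contents” below are hard-coded stand-ins (a real screen reader hooks into the operating system’s display and accessibility layers), and it assumes the pyttsx3 text-to-speech library is installed.

```python
# Toy illustration of the screen reader idea: text that would normally
# just be drawn on screen is routed to a speech synthesizer instead.
# Assumes: pip install pyttsx3. The "screen contents" here are hard-coded
# stand-ins; a real screen reader reads them from the operating system.

import pyttsx3

screen_lines = [
    "Daredevil #5",
    "1 new message",
    "File  Edit  View  Help",
]

engine = pyttsx3.init()

# A real screen reader walks the interface in response to keyboard
# commands; here we simply speak each captured line in order.
for line in screen_lines:
    engine.say(line)

engine.runAndWait()
```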

I would imagine that when most people think of screen readers, they think of the output being synthetic text-to-speech, but it could just as easily be something like a braille display. Having said that, speech is the more common form of output. Not only do relatively few blind people read braille fluently (though Matt Murdock would actually be a good representative of the particular demographic that does, in that he’s been totally blind since childhood/adolescence), but synthetic speech can be understood at very high rates, which is obviously preferable if you’re dealing with large amounts of information. In fact, a recent study indicates that one of the few “superhuman” abilities even real blind people develop (under the right circumstances) is being able to understand synthetic speech at speeds much higher than sighted people can. We’re talking about speeds as high as 25 syllables per second, which is pretty mind-blowing. Apparently, the visual cortex handles at least some of this enhanced ability. Cool, huh?
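And just to illustrate what “speeding up the voice” looks like from the software side, here’s another small sketch (again assuming the pyttsx3 library; its default rate of roughly 200 words per minute is that library’s convention, not a universal standard):

```python
import pyttsx3

engine = pyttsx3.init()

# pyttsx3's default rate is roughly 200 words per minute; experienced
# screen reader users routinely listen at a multiple of that.
default_rate = engine.getProperty("rate")
engine.setProperty("rate", default_rate * 2)  # double the speaking speed

engine.say("Synthetic speech can be understood at surprisingly high rates.")
engine.runAndWait()
```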

The most commonly used screen readers currently are JAWS (an acronym that stands for Job Access With Speech), Window-Eyes, the open source NVDA, and VoiceOver (which is the native Apple screen reader pre-installed on all modern Macs). JAWS is by far the most popular one, with 49% of users according to WebAIM’s most recent survey. On a side note, if you’re at all interested in accessible web design (I am), WebAIM is a fantastic resource. The survey I linked to above also provides a good indication of which features commonly found online present the biggest challenges to blind users.

A (brief) screen reader history

Given how the personal computer has developed over the last few decades, it should come as no surprise that blind computer users were arguably on more equal footing with their sighted peers before the advent of the graphical user interface we’re familiar with today. I’m old enough (almost 35!) to remember not only a time before computers were everywhere – my elementary school library had its entire inventory on alphabetically organized index cards – but also a time when getting on a computer was much less exciting than it is now. In junior high, the only thrill offered by the computer lab at my school came courtesy of an MS-DOS command window. The text was green against a black screen and, apparently, entering “format C:” was a very, very bad thing to do. However, since everything was text-based, early screen readers had a much easier time relaying the exact information presented on the screen.

For most of us, the GUI (graphical user interface) was a welcome change, ushering in an era of nice and colorful clickable icons and pretty pictures. For blind people, however, having information presented in a fashion that was non-linear and non-textual just added another layer of complexity, and it required a new generation of screen readers. The first such software for the PC was IBM’s Screen Reader/2, which was released in 1994 (its non-GUI predecessor “Screen Reader” was introduced in 1986). JAWS got its first Windows-compatible version in 1995 (the original DOS version was introduced in 1989); the software is developed these days by Freedom Scientific.

Since the ’90s, tons of things have happened. The Internet revolution came along, and web content and web applications have become increasingly rich – and not always as accessible as one would hope. New developments spur new and improved versions of screen reading software, even though they are not quite able to give their users the (near) equal access experience that simpler times allowed.

In some ways, however, the screen reader experience is also becoming a little more portable. Not only do pre-installed screen readers (such as Apple’s VoiceOver) make it easier for a blind person to borrow someone else’s computer without worrying about installing extra software, which is often expensive (unless you’re using an open source alternative), but there are also web-based screen readers that can be activated from virtually any computer with an Internet connection, provided the user can “fly blind,” so to speak, through whatever keystrokes are needed to start it.

While this was in no way exhaustive, I hope that you’ve at least learned something by reading this post. Otherwise, I will have failed miserably. Hey, at least I remembered this year! 😉 And, if you want to get some sense of what kind of information is sent to a screen reader user – and how that information is presented – take a look at Fangs, an easy-to-use Firefox plug-in. It gives you a text-based screen reader simulation of any page you visit.

2 Comments

  1. R.M. Hendershot

    This is so cool … I love days where I learn something new. Thanks, Christine!

  2. Christine

    You’re very welcome! Sorry about the slow reply to your comment though. 😉

