Computer Chronicles Revisited 31 — The Adaptive Firmware Card, Express 3, and Kurzweil Reading Machine

Up to this point on Computer Chronicles–we’re now at February 1985 in terms of broadcast dates–the featured computers have largely relied on typewriter-style keyboards as input devices and cathode-ray tube monitors for output. (Even the mouse had only started to enter more general usage with the 1984 release of the Apple Macintosh.) But in this next episode, the focus is on individuals who had different accessibility requirements when it came to using computers.

In the studio introduction, Cheifet showed Gary Kildall a small handheld computer that was capable of speech synthesis. (The make and model was not identified.) Cheifet noted that if he were speech or hearing impaired and needed to make a telephone call, he could hook this device up to the phone and let the computer speak for him. He asked Kildall if these computer applications originally designed for the disabled would “spin off” and help everyone. Kildall said he thought that worked both ways. He cited the technology being developed for the Mars rover projects, which included robotics and vision processing. He observed such technology could lead to the future development of something like a mechanical seeing-eye dog for the blind.

Ultrasonic Wheelchairs That Rely on Head Movements

Wendy Woods presented her first remote feature, which began with an “example of how hidden computers can help some people regain lost movement.” Specifically, she discussed a motorized wheelchair that relied on ultrasonic sensors rather than a joystick or another physical attachment for operation. The sensors enabled the user to simply point their head in the direction they wanted to go. Two ultrasonic sensors on the back of the chair reflected sound waves off of the user’s head and then measured the difference in time for the waves to return. This enabled the chair’s computer to decide if the user was pointing their head left, right, or straight ahead to indicate their desired direction.
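The episode did not describe the chair’s actual firmware, but the comparison it performs is easy to picture. Here is a minimal sketch in Python, assuming each sensor reports an echo round-trip time in microseconds; the threshold value is my own placeholder rather than anything from the broadcast.

```python
# Illustrative sketch only: the episode does not describe the chair's actual
# firmware. This assumes two ultrasonic sensors behind the user's head that
# each report an echo round-trip time in microseconds.

def head_direction(left_echo_us: float, right_echo_us: float,
                   threshold_us: float = 50.0) -> str:
    """Infer head direction from the difference in echo return times.

    If the head is turned left, it sits closer to the left sensor, so the
    left echo returns sooner than the right one (and vice versa). Readings
    within `threshold_us` of each other count as "straight ahead."
    """
    delta = right_echo_us - left_echo_us
    if delta > threshold_us:
        return "left"      # left echo came back sooner -> head turned left
    if delta < -threshold_us:
        return "right"     # right echo came back sooner -> head turned right
    return "straight"

# Hypothetical readings, in microseconds:
print(head_direction(1200.0, 1400.0))  # -> "left"
print(head_direction(1330.0, 1325.0))  # -> "straight"
```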

Woods then shifted to explaining how the “standard desktop microcomputer” offered a different kind of assistance to mobility-impaired individuals. She narrated some B-roll of an unidentified man using an aluminum keystick to type on a standard keyboard using his mouth. Woods noted this individual used his machine mainly for his work as a computer programmer. But it was the computer’s more routine talents–such as word processing, communications, and online database access–that played a pivotal role in expanding and simplifying his daily life. For example, a modem allowed the user to send and receive messages through a bulletin board system and access an educational library through the PLATO database system. In short, Woods said that for many disabled users, an ordinary microcomputer could mean the difference between reliance and self-reliance.

Exploring Alternative Input and Output Devices

Back in the studio, Cheifet and Kildall conducted their first round table with Peggy Barker and Sue Simpson. Cheifet noted that Simpson had suffered a stroke six years earlier, which led to her near-total paralysis and the loss of speech. Barker, a rehabilitation engineer with the Children’s Hospital at Stanford University, was there to demonstrate several accessibility devices that she used with her clients.

Barker’s first demonstration involved the Adaptive Firmware Card, an expansion card for the Apple II computer line that allowed users to connect alternate input devices. Barker used a special keyboard called the Unicorn Keyboard that plugged into the Adaptive Firmware Card. The Unicorn Keyboard was a programmable membrane keyboard. In this case, Barker said the Unicorn was set up to store whole words and phrases. This allowed someone with limited physical ability to input entire words or phrases so they could still access commercial software.

Kildall asked if you could change the vocabulary of the Unicorn keyboard. Barker said yes, it could be adapted for other applications such as word processing. In this particular case, the Unicorn was set up to work with severely disabled children, specifically those with cerebral palsy. She explained these children had limited “fine” motor control but good “gross” motor control. Kildall joked this type of keyboard would be ideal for playing text-based adventure games.

Cheifet then asked about a client who didn’t even have the ability to press the buttons on a membrane keyboard. Barker said there was another device, which she described as a “head wand,” that she used with clients who suffered from amyotrophic lateral sclerosis (ALS). She demonstrated the head wand, which allowed the user to control switches merely by blinking their eyes.

Cheifet then turned to Simpson, who used a device called the Express 3 to communicate following her loss of speech. Barker explained the Express 3 allowed Simpson to communicate using a built-in LCD or by typing out a message to a printer. The Express 3 also had speech synthesis capabilities. Simpson, who could not type, selected her messages by using a long-range light pen on a large control panel filled with rows and columns of LED “switches.”

Simpson explained that with the Express 3, “The memory storage is like a hotel with 128 floors and 100 rooms on each floor. I have almost unlimited storage.” She then demonstrated the voice synthesis function by saying her name.
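Simpson’s hotel analogy describes a two-level addressing scheme: a “floor” and a “room” together identify one stored message, for 128 × 100 = 12,800 addressable slots. The sketch below is only a model of that analogy, not the Express 3’s actual data layout.

```python
# Rough illustration of the "128 floors x 100 rooms" analogy Simpson used.
# This is not the Express 3's actual storage format, just a model of
# two-level message storage addressed by (level, location).

class MessageStore:
    def __init__(self, levels: int = 128, locations: int = 100):
        self.levels = levels
        self.locations = locations
        self._store: dict[tuple[int, int], str] = {}

    def save(self, level: int, location: int, message: str) -> None:
        if not (0 <= level < self.levels and 0 <= location < self.locations):
            raise IndexError("address outside the 128 x 100 grid")
        self._store[(level, location)] = message

    def recall(self, level: int, location: int) -> str:
        return self._store.get((level, location), "")

store = MessageStore()
store.save(0, 0, "Hello, my name is Sue.")
print(store.recall(0, 0))              # -> "Hello, my name is Sue."
print(store.levels * store.locations)  # -> 12800 addressable messages
```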

Cheifet asked Barker how difficult it was for Simpson to learn the Express 3. Barker said the hardest part was developing head control. But she added that Simpson was “incredibly determined” and picked it up much more quickly than most of her other clients. Kildall said he talked to Simpson before the episode, and she mentioned that she did not have this type of device for several years following her stroke, but when she got the Express 3 it opened up the whole world of communications for her. Cheifet asked how fast Simpson could speak using the Express 3. Simpson said she could not adjust the speed of the speech synthesis.

The Computer That Can Read for You

The final segment began with Cheifet talking to Sue Melrose, a computer evaluator at the Veterans Administration Medical Center in Palo Alto, California, about her work with the Kurzweil Reading Machine, which was also on-set for a demonstration. Cheifet asked Melrose, who is blind, how she read or conducted research prior to using the Kurzweil. Melrose said for research, she would hire a person to read the text to her directly or onto an audiotape. But that was slow, and even listening to a tape took almost as much time as reading.

Cheifet then asked how the Kurzweil worked. Melrose said it was a “reading machine.” It looked much like a photocopier. The user laid a printed page on top and a camera would then scan the text line-by-line and output the content using voice synthesis. Cheifet then put in a sample page of text to demonstrate.

Melrose said she could use a control panel on the included software (running on an Apple II machine) to move the camera around, re-read something in memory, spell out words, and even read out punctuation. Cheifet noted the Kurzweil had a little bit of artificial intelligence built into it. Melrose explained that the Kurzweil often failed to read the first few lines of a text correctly. Part of the reason was that the machine would analyze these first lines to see if it mistook one letter for another–say, reading a “g” with the tail missing as an “a.” The Kurzweil could then reanalyze the text and correct its mistakes.
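In other words, the machine treated the first lines of a page as a calibration sample, learning which letters a given typeface caused it to confuse and then retrofitting corrections to what it had already read. The sketch below is a loose illustration of that idea, not the KRM’s actual algorithm; the word list and confusion table are my own examples.

```python
# Loose sketch of the self-correction idea Melrose described; this is not the
# KRM's actual algorithm. Premise: the recognizer learns that a given typeface
# makes it confuse certain letter pairs (e.g. a "g" with a clipped tail read
# as an "a"), then re-reads earlier text and keeps whichever variant of a
# word is actually in its dictionary.

KNOWN_WORDS = {"reading", "machine", "page"}
LEARNED_CONFUSIONS = [("a", "g")]   # an "a" may really be a "g" in this typeface

def variants(word: str, wrong: str, right: str):
    """Yield copies of `word` with one occurrence of `wrong` swapped for `right`."""
    for i, ch in enumerate(word):
        if ch == wrong:
            yield word[:i] + right + word[i + 1:]

def correct_word(word: str) -> str:
    if word in KNOWN_WORDS:
        return word
    for wrong, right in LEARNED_CONFUSIONS:
        for candidate in variants(word, wrong, right):
            if candidate in KNOWN_WORDS:
                return candidate
    return word  # no plausible correction; leave as read

def correct_line(line: str) -> str:
    return " ".join(correct_word(w) for w in line.split())

# The first line as (mis)read, then as corrected on the second pass:
print(correct_line("readina machine paae"))  # -> "reading machine page"
```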

Cheifet said that in addition to outputting text to the speaker, the Kurzweil could also load it onto a computer, in this case the Apple II. Melrose said yes, and this feature was commonly used to translate texts into braille. Melrose said she often used a special word processing program called Braille Edit to read through the Kurzweil’s transcription and correct any errors.

VA Researchers Study Converting Industrial Robots to Service Humans

Wendy Woods returned to present her second remote feature, this time from the Veterans Administration Medical Center in Palo Alto. She explained that a robot once designed to serve as an arc welder on an assembly line was now being adapted to serve the handicapped. Woods spoke to Bob Yee, one of 125 disabled people participating in the VA’s research project into the use of robotics. The robot itself ran off a Zilog Z80 microprocessor and could respond to a dozen voice commands and do “just about anything a human arm and hand could do,” such as playing chess or picking up a water glass.

Karen G. Engelhardt, the principal investigator on the research project, told Woods that the goal was to develop general-purpose tools that not only could be used by a person with disabilities but that people like her and Woods would want in their own homes. She said that the more people who were willing to purchase these types of robots, the sooner the technology would get to the people they were designing it for.

Yee told Woods that he currently required people to feed him and do all sorts of things that he could not do for himself. But if he had a device like this robotic arm, it would give him a lot more independence from other people. In narration, Woods said Yee’s dream of independence could soon be a reality. She said at least three companies were closely watching the VA’s progress in robotics and planned to mass produce the technology soon. Meanwhile, the designers at the Palo Alto facility were hoping to have the robot on wheels by May 1985.

The Future of Accessible Tech: Talking Screens?

For the final studio round table, Noel Runyan joined Cheifet, Kildall, and Melrose. Runyan, who is blind, was working on the IBM Talking Tablet. Before getting to that, however, Kildall decided to go off on a tangent and asked the guests about a “vision substitution project” he had heard about 10 years earlier that used a camera in conjunction with a tactile-feel device that let the user “sense” what was on the camera. Kildall wanted to know whether anything had ever come of this technology. Runyan said he knew of a couple of different projects like that, including a reading machine developed by a Stanford researcher that let the user read print with their finger. There were also larger systems that could be used to recognize something like a telephone, but most of those were still in the primitive research phase.

Melrose said she actually participated in one such study conducted by the Smith-Kettlewell Eye Research Institute. She said after half an hour, she could learn to distinguish a teapot from a teddy bear. So she agreed with Runyan that this technology was still in a primitive state.

Kildall then asked about the possible use of sonar. Melrose said there were some mobility aids that used sonar feedback. You could get textural cues and grasp how far away an object was, or whether it was to your left or right, but it didn’t help with identification. Kildall asked if further research into such technology might bear fruit later. Melrose said it could, but it needed a lot more work. She said that it was important to not only develop the technology, but also understand what a blind person needed to learn in order to interpret the signals for things such as dimensionality. Runyan added it needed to be the proper kind of information because you could end up running into a wall if you were too busy trying to interpret the machine.

Cheifet then asked Runyan about his own work in the area of audio assistance for the visually impaired. Runyan said his project was called the Talking Tablet. The idea was to use the raw power of the IBM Personal Computer and put it into a portable package that used touch-directed audio. Essentially, there was a tablet with a computer screen and wherever the user touched the screen there would be corresponding speech for that information. Runyan did not actually have a working tablet, but he demonstrated the speech synthesis on an IBM PC.
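Runyan’s description amounts to mapping regions of the screen to spoken labels so that a touch anywhere produces the corresponding speech. Here is a rough sketch of that touch-directed-audio concept; it is not Runyan’s actual software, and speak() is a stand-in for a real synthesizer.

```python
# Rough sketch of "touch-directed audio": screen regions are mapped to spoken
# labels, and touching a region triggers speech. This is not Runyan's actual
# Talking Tablet software; speak() is a placeholder for a real synthesizer.

from dataclasses import dataclass

@dataclass
class Region:
    x: int          # left edge of the region, in screen coordinates
    y: int          # top edge
    width: int
    height: int
    label: str      # text to speak when the region is touched

    def contains(self, tx: int, ty: int) -> bool:
        return (self.x <= tx < self.x + self.width and
                self.y <= ty < self.y + self.height)

def speak(text: str) -> None:
    print(f"[speech synthesizer] {text}")   # placeholder for real TTS output

def handle_touch(regions: list[Region], tx: int, ty: int) -> None:
    for region in regions:
        if region.contains(tx, ty):
            speak(region.label)
            return
    speak("nothing here")

screen = [
    Region(0, 0, 320, 100, "File menu"),
    Region(0, 100, 320, 100, "Document text: quarterly report"),
]
handle_touch(screen, 50, 150)  # -> "[speech synthesizer] Document text: quarterly report"
```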

Bringing Down the Cost of Accessible Technology

Finally, Paul Schindler presented his commentary on the episode’s theme. He noted that almost everything that he could do on a computer today, such as preparing expense accounts or writing articles, was once done manually–and sometimes it was faster and more efficient that way. But when it came to computers and the disabled, that was an area where computers could do things that simply could not be done before.

Schindler said one example was online conferences. He noted that disabled individuals often had trouble participating in meetings. With a computer conference, however, these speech or motion difficulties vanished. Computers thus served as a job equalizer, allowing the disabled to work, which opened up opportunities for 36 million Americans.

Schindler also noted that while most of the computing devices used by the disabled cost nothing extra, there was that “last 8 percent” that could cost a user $2,000 or more. He blamed manufacturers for these high prices and implored them to find a way to bring the costs down.

A Good Week for Hackers, A Bad Week for the IRS

Stewart Cheifet presented this week’s “Random Access,” which dates the episode in February 1985:

  • Cheifet noted “this was not a good week for computers.” First up, an Internal Revenue Service computer failed to process tax withholding payments from about 10,000 employers. This led a second computer to automatically send notices to the employers threatening to seize their assets. IRS clerks sorted out the problem after reconciling the records manually, Cheifet said, and no assets were seized.
  • Hackers managed to gain access to a Sprint long-distance telephone code. The hackers ran up $5,000 in calls before Sprint caught on. But it still took the telephone company 10 days to change the compromised code, during which time the hackers made 17,000 calls. The code belonged to a Sprint customer who subsequently received a 772-page bill totaling $60,000. On the plus side, Cheifet joked the customer received an $8,000 volume discount.
  • Some other hackers managed to compromise the University of Southern California’s transcript computers and sold phony doctorate degrees for $25,000.
  • In non-hacker news, the National Cancer Institute announced its first online cancer treatment database, which would allow any doctor to access up-to-the-minute information on clinical studies, treatments, and facilities with special treatment programs.
  • The Stanford Center for Computer Research in Music–previously featured on Computer Chronicles back in February 1984–said it had developed a computer that could listen to music and turn it into a written score. It took the machine about 5 minutes to score 1 minute of music.
  • Paul Schindler returned with his software review, this time for PFS:Plan, which he described as “the spreadsheet for the rest of us, not the power users.” The $140 program allowed users to create a spreadsheet of 64 columns by 176 rows.
  • Cheifet said the Campbell’s Soup company had developed robots to work in its New Jersey food processing plant. But the problem was plant regulations required all equipment to be washed down frequently for cleanliness. Since robots didn’t take well to water, Cheifet said Campbell’s was now developing “raincoats for the robots.”
  • Apple co-founder Steve Wozniak said he was taking another step back from the company to start a new venture he dubbed “My Best Friend,” which would develop home entertainment products.
  • Speaking of Apple, Cheifet said a reporter touring the company’s Macintosh production facility noticed it used a robotics system developed by rival IBM.

Adaptive Firmware Card Allowed Users to Control Apple II Using Morse Code, Other Methods

The Adaptive Firmware Card (AFC) demonstrated by Peggy Barker was an interesting piece of technology that apparently enjoyed success within its market niche. The first reference I found to the AFC was a September 1982 article in Byte co-authored by Paul Schwejda of the University of Washington and Gregg Vanderheiden of the Trace Research and Development Center in Wisconsin. They described the AFC as,

a low-cost adaptive-firmware card that can be inserted into an Apple II microcomputer to provide a variety of “transparent” input routines including scanning, Morse code, and direct selection techniques. (The term “transparent” here indicates the routines work in conjunction with other programs without requiring that the programs be altered in any way.) In addition to providing ‘keyboard’ input, the card can simulate the use of game paddles and switches for people who cannot use the game paddles themselves.

Schwejda and Vanderheiden said there were 10 different input modes available on the original AFC. A later manual described seven “broad categories” of input, including the aforementioned Morse code method–where the user made inputs using a switch–and the “expanded keyboard method” utilized by Barker in her Chronicles demonstration.
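The Morse code mode reduced the entire keyboard to a single switch, with short and long presses standing in for dots and dashes. The decoder sketched below illustrates how that kind of input can work; the timing threshold and letter table here are my own simplification, not the AFC’s actual firmware.

```python
# Illustrative single-switch Morse decoder, in the spirit of the AFC's Morse
# input mode. The timing threshold and letter table are my own simplification,
# not taken from the AFC's firmware or documentation.

MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y", "--..": "Z",
}

def presses_to_symbols(press_durations_ms: list[float],
                       dash_threshold_ms: float = 200.0) -> str:
    """Classify each switch press as a dot or a dash by how long it was held."""
    return "".join("-" if d >= dash_threshold_ms else "." for d in press_durations_ms)

def decode_letter(press_durations_ms: list[float]) -> str:
    symbols = presses_to_symbols(press_durations_ms)
    return MORSE_TO_CHAR.get(symbols, "?")

# Four presses: short, long, short, short -> ".-.." -> "L"
print(decode_letter([80.0, 350.0, 90.0, 75.0]))  # -> "L"
```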

Barker apparently used the Unicorn Expanded Keyboard, a separate device that effectively functioned as a touch-sensitive tablet. Although the Unicorn and the AFC working in conjunction could theoretically control any off-the-shelf Apple II software, individual programs still needed to be configured to work properly. (The blog Pix’s Origin Adventures published a post in 2014 where the writer attempted to configure the AFC to work with a number of popular 1980s Apple II games with mixed results.)

The AFC was in active development through at least 1988, as a version of the card was produced that year for the Apple IIgs. Don Johnston Developmental Equipment, Inc., was listed as the publisher of the AFC’s software and the product’s distributor. Don Johnston also sold the Unicorn keyboard, which was manufactured by an Oakland-based firm called, appropriately enough, Unicorn Engineering. Don Johnston itself is still in business today.

Perhaps echoing Paul Schindler’s commentary, these devices were not exactly cheap by 1980s standards. A March 1988 paper I found on the Internet Archive said Don Johnston sold the AFC for $400, with the Unicorn keyboard running an additional $295. For reference, the Apple IIgs (without a monitor) sold for around $1,000 at its 1986 launch, so the complete alternative input system that Barker showed off added roughly 70 percent to the Apple’s retail price.

Sue Simpson Inspired Open Source Accessibility Software Project

The device demonstrated by Sue Simpson, the Express 3, was manufactured by the Prentke Romich Company, which is now known as PRC-Saltillo and based in Ohio. Ed Prentke and Barry Romich started the company in 1966 to create devices to help disabled individuals. They released their first microprocessor-based product, the Express 1, in 1979. The Express 3 seen in the Chronicles episode was released in 1982 and was the company’s first device to produce synthesized speech.

In the late 1980s, Simpson started working with an independent software developer, Jon Bjornstad, to help her deal with some of the problems she was having with the Express 3. Many years later, as Simpson’s original device started to deteriorate, Bjornstad decided to create his own software to assist people like Simpson. He named the program Sue Center in Simpson’s honor and made it open source and freely available to anyone who needs it. Mike Cassidy described Bjornstad’s software in a 2009 article for the San Jose Mercury News:

Sue Center relies on a small camera tracking a reflective dot worn on the user’s forehead. When a user moves her head, the cursor glides across the screen. Holding the cursor over commands, letters and icons has the effect of clicking on the item.

Sue Center allows users to create text documents; send e-mail; request photographs, jokes, horoscopes, weather reports and news from the Web. Users can play MP3s, store photos, make phone calls using Skype. It can serve as a television remote or as an on-off switch for electrical appliances.
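The dwell-to-click behavior Cassidy describes (holding the cursor over an item until it registers as a click) is straightforward to sketch. The following is only an illustration of that mechanism, not the actual Sue Center code; it assumes a stream of timestamped cursor positions derived from the head tracking.

```python
# Rough sketch of dwell-to-click cursor behavior, not the actual Sue Center
# code. Assumes a stream of (timestamp, x, y) cursor positions derived from
# head tracking; holding the cursor inside a small radius for long enough
# counts as a "click."

import math

def dwell_click(samples, dwell_seconds=1.5, radius_px=20.0):
    """Yield (x, y) click positions whenever the cursor dwells in place.

    `samples` is an iterable of (timestamp_seconds, x, y) tuples.
    """
    anchor = None           # (t, x, y) where the current dwell started
    for t, x, y in samples:
        if anchor is None:
            anchor = (t, x, y)
            continue
        t0, x0, y0 = anchor
        if math.hypot(x - x0, y - y0) > radius_px:
            anchor = (t, x, y)          # cursor moved away; restart the dwell
        elif t - t0 >= dwell_seconds:
            yield (x0, y0)              # held long enough: register a click
            anchor = (t, x, y)          # start timing the next dwell

# Cursor hovers near (100, 200) for about two seconds, then moves on:
stream = [(0.0, 100, 200), (0.5, 103, 198), (1.0, 101, 202),
          (1.6, 102, 201), (2.0, 400, 300)]
print(list(dwell_click(stream)))  # -> [(100, 200)]
```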

I’m not sure of the current status of Sue Center. Bjornstad has not updated his project website since 2011, and the code repository no longer appears to be active. As for Sue Simpson, Cassidy noted she had passed away, but no date was given.

Kurzweil Found a Practical Use for Optical Character Recognition Technology

The Kurzweil Reading Machine (KRM) is easily the best-known piece of technology featured in this episode. It was the signature invention of Ray Kurzweil, who developed the first optical character recognition (OCR) program in the 1970s. But as Kurzweil put it in a 1999 book, OCR was still a “solution in search of a problem.” That was until one day when Kurzweil said he “happened to sit next to a blind gentleman on a plane flight, and he explained to me that the only real handicap that he experienced was his inability to read ordinary printed material.”

Kurzweil Reading Machine

Inspired by this chance meeting, Kurzweil combined his OCR program with a flat-bed scanner and a text-to-speech synthesizer to create the KRM. After a flurry of early media coverage, Kurzweil managed to attract a major investor–Xerox, which purchased Kurzweil’s company and associated technology in 1980. Xerox continued to refine and produce the underlying OCR technology, now known as Xerox TextBridge.

Notes from the Random Access File

  • This episode is available at the Internet Archive and has an original broadcast date of February 11, 1985.
  • As of 2019, Peggy Barker continued to work as an assistive technology consultant based out of northern California.
  • In recent years, Noel Runyan has become a well-known activist in the area of accessibility and voting rights. In particular, he’s been critical of electronic voting machines, which he described as “garbage” to the Guardian in 2019.
  • From the 1990s until at least the late 2000s, Sue Melrose worked as an instructor at the Hadley School for the Blind in Illinois.
  • After completing her three-year robotics study at the Palo Alto VA, Karen G. Engelhardt went on to work as a senior research scientist at Carnegie Mellon University in Pittsburgh, where she founded its Center for Human Service Robotics. She later worked for NASA’s Jet Propulsion Laboratory. In her early 60s, she decided to become a science teacher and continued to work at a California middle school well into the 2010s.
  • Gary Kildall’s reference to the Mars rover was a little confusing. I assume he meant the two Viking spacecraft that NASA landed on Mars in 1976–although the first Mars “rover” was successfully landed by the Soviet Union in 1971. What we commonly think of today as the Mars rovers did not launch until the 1990s.
  • As for Gary’s prediction of a future with robotic seeing-eye dogs, that has not yet come to pass, although Japanese tech company NSK famously announced a prototype for such a device back in 2011.
  • Steve Wozniak’s startup would actually be called CL 9, not “My Best Friend,” and it made programmable universal remote controls based on the same MOS Technology 6502 microprocessor that Woz used to create the Apple I and Apple II computers. CL 9 only stayed in business for three years before Woz sold off the assets in 1988.