Computer Chronicles Revisited 55 — NaturalLink, Personal Consultant, and Q&A

As enthused as Gary Kildall was about many tech products, notably optical storage, he could be equally grumpy about other subjects, such as artificial intelligence (AI). For instance, back in an early 1985 episode Kildall got noticeably irate at the creator of the Halcyon, who insisted his not-quite-ready prototype of a LaserDisc-based game system had AI. Kildall later wrote that the Halcyon’s botched on-air demo at least demonstrated that “natural languages and computers are not a good fit.”

This next episode, the first broadcast in 1986, revisits this topic. Instead of experimental game systems, however, the products discussed were more conventional business fare such as databases and “expert systems.” Kildall remained somewhat grumpy, especially towards one guest’s product, but at least everything worked during the on-air demos.

Artificial Intelligence as a “Moving Target”

Stewart Cheifet opened the episode while standing in front of an elevator. He noted that a simple computer ran the elevator system. It knew to park the elevators on the ground floor in the morning and on the top floor in the evening based on the pattern of usage. But what if everyone were to suddenly go on the night shift? Well, he said, the elevators would still park themselves the same way, because adjusting to the changed conditions would require “artificial intelligence,” something the elevator’s simple computer lacked.

In the studio, Cheifet and Kildall were playing with Racter, a computer program that tried to carry on a conversation with the user. But Cheifet noted the program was so dumb it couldn’t figure out what he was asking. (Racter came up in a previous episode, where its creators used the software to “generate” an entire book of prose.) Indeed, Racter’s creators claimed the program used AI, which they said referred to “artificial insanity.”

Cheifet said there had been many promises made with respect to artificial intelligence but not much delivery. Were there really any viable artificial intelligence projects out there today? Kildall joked that AI was a moving target. Once the basic research led to a product it was no longer interesting artificial intelligence work. For example, 20 years earlier Arthur Samuel developed a pioneering program to play checkers that was considered good basic AI research. Today, it wouldn’t even make it in a Macy’s store.

Kildall added that many companies offered products that claimed to have features like “natural language processing” and “expert system” capability. The challenge was to separate the fluff from the substance.

SRI Working on Military-Funded “Vision”

Wendy Woods presented the first of her two remote segments for this episode. This first segment looked at a project at SRI International to create “machine vision.” Woods said that human beings were equipped with stereoscopic vision, a distinct feature that allowed us to judge distance and the relative placement of objects. In addition, humans knew how to identify the features of a landscape by color, shape, and placement.

But how could a computer be programmed to do the same and with the same information? Woods said SRI’s machine vision project was designed to produce digital terrain models by scanning stereoscopic photographs of the Earth’s surface. Working from illumination densities and reflectance values, the terrain modeling system–with some human assistance–could calculate an image from any point of view. It could then synthesize a flight path through the terrain based on any chosen route.
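For readers who want a feel for why stereoscopic photographs encode distance at all, here is a minimal sketch of the standard photogrammetry relationship between disparity and depth. It is purely illustrative, with made-up numbers, and is not a description of SRI’s actual terrain-modeling software.

```python
# Illustrative only: the standard stereo relationship depth = f * B / d,
# where f is the focal length (in pixels), B is the baseline between the
# two camera positions, and d is the disparity (pixel shift) of a matched
# feature between the left and right photographs.

def depth_from_disparity(focal_length_px: float,
                         baseline: float,
                         disparity_px: float) -> float:
    """Distance to one matched point, in the same units as the baseline."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_length_px * baseline / disparity_px

# A feature shifted 25 pixels between two photos taken 0.5 units apart
# with a 1,000-pixel focal length sits 20 units from the cameras.
print(depth_from_disparity(1000, 0.5, 25))  # -> 20.0
```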

While the process of creating topographic maps was simple geometry for a human being, Woods said, scanning a photograph, no matter how detailed, was full of ambiguities. For example, tall objects obscured smaller ones. Repetitive patterns could be confusing. And forms could be misinterpreted.

To overcome these ambiguities, Woods said, the SRI project was experimenting with artificial intelligence. The goal was to develop a program that could look at the global constraints of an image and the parts in context with the whole.

Like much of today’s government-funded AI research, Woods added, this program was slanted towards military applications. But in the future, the technology could help us recreate more exotic views, from the inside of a human body to the outside of the moon.

TI Offered Menu-Based “Expertise”

Dr. S. Jerrold “Jerry” Kaplan and Dr. Harry Tennant joined Cheifet and Kildall in the studio for the first of two round table segments. Kaplan was the principal technologist for Lotus Development Corporation. Tennant was a senior member of the technical staff at Texas Instruments (TI).

Kildall opened by noting that AI was a term that people appended to their products to make them sound fancier–like frosting on a cake–and therefore was at risk of being misused. He asked Kaplan how he characterized AI in working with new technology. Kaplan said artificial intelligence was a fundamental software technology of interest primarily to developers. It was not an end-user product in and of itself. There really wasn’t much of an AI market except as tools.

Kildall followed up, asking what AI could add to a program. Kaplan said AI was really a method for programming using symbols and describing their interrelationships, thus enabling the software to reason and reach some kind of conclusions or enhance some existing application. Over the next few years, he said, we could expect an increase in the utility and intelligence of such tools. He said that programs coming out of companies like Lotus would be able to know more about their intended use and interpret the user’s intentions, as opposed to simply executing user commands.

Cheifet turned to Tennant and asked for a demonstration of Texas Instruments’ NaturalLink system. Tennant explained that NaturalLink offered a way to ask questions of the computer in English, so the user didn’t have to learn an arcane computer language.

What Tennant demonstrated here was not something like a modern Internet search engine. Rather, it was a database containing a specific set of information, in this case data about the state of Texas. The user would then “form” English sentences by selecting words from a series of drop-down menus. In this demo, Tennant constructed a sentence asking for the “average number of days of sun” in El Paso. The software returned an answer of “319 days.”

Tennant said the benefit of this software was that someone who didn’t know anything about the kind of database they were interfacing with or its programming could immediately see what kind of information was available and find out how to get that information out.

Kildall clarified that you couldn’t actually type a sentence to query the database. Tennant said that was correct; the user relied on the menu system to construct their sentences. Tennant said he had been doing research for several years into natural language processing, which did involve typing sentences to make queries. But he then decided to develop the NaturalLink approach, which had the benefit of ensuring that 100 percent of user queries would be understood.
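To make the menu idea concrete, here is a minimal sketch of how constraining the user to known phrases guarantees that every assembled query can be answered. The phrases, city data, and function names are all invented for illustration; this is not TI’s implementation.

```python
# A hypothetical sketch of the menu-driven idea behind NaturalLink.
# Because the user can only pick phrases the grammar already knows,
# every assembled sentence maps to a query the system can answer.

CITY_DATA = {"El Paso": {"average number of days of sun": 319}}

ATTRIBUTE_MENU = ["average number of days of sun", "average rainfall"]
CITY_MENU = ["El Paso", "Austin", "Houston"]

def run_query(attribute: str, city: str):
    # Menu selection guarantees both phrases are ones the system understands.
    if attribute not in ATTRIBUTE_MENU or city not in CITY_MENU:
        raise ValueError("phrase not offered by the menus")
    return CITY_DATA.get(city, {}).get(attribute, "no data")

# "What is the <attribute> in <city>?" assembled from two menu picks:
print(run_query("average number of days of sun", "El Paso"))  # -> 319
```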

Kildall continued to push back, asking how you would compare NaturalLink with a system where you actually typed sentences and there were algorithms and heuristics that tried to figure out the sentence. Tennant said that if you took away the menus from NaturalLink, the back-end used the exact same technology that was in a conventional natural-language system. He added there were limitations to any natural-language system and what TI had done was reveal those limits.

Still not satisfied, Kildall asked if you could start typing sentences if you “peeled off” the menu layer. Tennant said you could.

Cheifet turned to Kaplan and asked if he thought that NaturalLink was really an example of artificial intelligence. Kaplan said this was more of a tool, and what you were seeing was an application of that tool to create a database. But in general, he didn’t think it was appropriate to say that something was–or was not–an “AI product.” AI was not something special that you added to an existing program. It was an approach to programming that was flexible and useful in solving problems that previously might have required some sort of human intelligence.

Kildall added there was a gray area between a “user interface” and something that could really understand a sentence. Tennant interjected, saying he would define AI a little bit differently. He said that when AI started out in the early 1950s, it was really a set of problems, such as how could we make computers reason. This range of problems made up AI. Natural language processing was one approach.

Turning to another approach, Tennant offered his second demonstration, this time for TI’s Personal Consultant, an expert system product. Tennant described it as a tool for decision-making, in this particular case for diagnosing problems with a car. (Basically, this was an “Expert Mechanic” module for the Personal Consultant system.) In the demo, Tennant showed that by answering a series of questions presented by Personal Consultant, the software could present a diagnosis.

Kildall asked how this differed from a straight “yes or no” decision-making process where someone had simply put all of the possible answers into the computer. Were there other judgments and valuations? Tennant said it differed in the way the system was programmed. In the case of building a rule-based expert system like Personal Consultant, the knowledge went in as a collection of rules. By chaining together smaller pieces of information, the computer could make a decision.
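The sketch below shows the chaining idea in miniature: small if-then rules fire off the user’s yes/no answers, and their conclusions become new facts that other rules can build on until a diagnosis emerges. The rules and facts are invented for illustration and have nothing to do with Personal Consultant’s actual knowledge base.

```python
# A tiny forward-chaining sketch of a rule-based diagnosis.
# Each rule pairs a set of required facts with a conclusion; firing a rule
# adds its conclusion as a new fact that later rules can use.

RULES = [
    ({"engine cranks": False, "lights dim": True}, "battery is weak"),
    ({"battery is weak": True}, "diagnosis: charge or replace battery"),
    ({"engine cranks": True, "engine starts": False}, "fuel not reaching engine"),
    ({"fuel not reaching engine": True}, "diagnosis: check fuel pump"),
]

def infer(facts: dict) -> dict:
    """Repeatedly fire any rule whose conditions are all satisfied."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if all(facts.get(k) == v for k, v in conditions.items()) \
                    and not facts.get(conclusion):
                facts[conclusion] = True
                changed = True
    return facts

answers = {"engine cranks": False, "lights dim": True}  # user's yes/no answers
print([fact for fact in infer(answers) if fact.startswith("diagnosis")])
# -> ['diagnosis: charge or replace battery']
```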

Kildall asked if these tools could be extended to other applications, such as fixing refrigerators. Tennant said absolutely. He noted that expert systems were now being applied to a very broad range of applications. For example, there was one developed by Johns Hopkins University to diagnose poisoning situations. (This was actually demonstrated in a prior Chronicles episode.)

Cheifet turned back to Kaplan and asked him about what Lotus was working on with respect to AI. Kaplan said they were looking into new classes of applications and products, which he wasn’t prepared to announce at this time. But broadly speaking, Lotus was looking at flexible databases and decision-support systems.

Stanford Professor Warned of “Star Wars” Influence on AI Research

For her second remote segment, Wendy Woods interviewed Stanford University computer science professor Terry Winograd. Woods said Winograd was currently working on two projects–one programming and one political. The first was a natural language for programmers, which let them use simple English words tied to the kind of work the software would be doing.

Winograd told Woods that the goal was to move away from spelling out exactly what the machine should do and toward expressing things in terms that made sense to programmers in their own world. If you were doing payroll, for instance, you wanted to think about people and payrolls and money, not about disk files and records and what was in the computer.

Winograd’s second project concerned funding for AI research, Woods said. He was worried that too many of his colleagues were being lured into lucrative Defense Department contracts, distracting them from basic research into AI. Winograd said the Reagan Administration’s “Star Wars” program in particular was going to distort research. Woods added that up to 50 percent of computer science research was supported by military grants, and only a fraction of computer scientists had refused them.

Woods said that Winograd and dozens of computer science professors were now circulating petitions asking their colleagues not to accept “Star Wars” funding. Their belief was that it would drain money from AI projects with practical applications.

Speaking in Complete Sentences to a Database

For the second and final round table, Gary Hendrix and Dr. Hubert Dreyfus joined Cheifet and Kildall. Hendrix was vice president for advanced technology with Symantec. Dreyfus was a professor of philosophy at the University of California, Berkeley, and the author of What Computers Can’t Do: The Limits of Artificial Intelligence (1972) and Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (1986).

Kildall asked Dreyfus what he saw as the “fallout” from AI in terms of useful products. Dreyfus said that really wasn’t his field. He was a philosopher interested in the limits of AI. So he wasn’t sure about the fallout. But he said that you couldn’t use symbolic descriptions and inferences to make computers intelligent like humans. That didn’t mean you couldn’t find many uses for computers, however.

Why couldn’t you do that, Kildall asked. Dreyfus said that people began by using rules when they were learning a new skill. But as they developed experience, they recognized the situation they were in as similar to one they’d already been in and simply did what had worked before. Therefore, they didn’t need to analyze the situation. In fact, analysis was really for beginners. Intuition was for experts.

Cheifet asked Hendrix if he disagreed with that. Hendrix said there was some truth in what Dreyfus said. But we had to start by asking what enterprise we were involved in. One enterprise was the academic effort to push AI to its limits to find out just what a machine was capable of doing. Another was to take the body of techniques that had been developed so far and try to put them into useful products. Hendrix said he personally believed we hadn’t reached the limits of what we could do with AI, but he was still focused on practical tools.

Kildall next asked for a demonstration of Q&A, a database product from Symantec, which Hendrix described as an “intelligent assistant.” He said that unlike TI’s NaturalLink system, which relied on menus, Q&A let the user type questions in English. The software then analyzed the user’s statements to figure out what they wanted to do.

In his demonstration, Hendrix asked a sample database, “I want you to find the forms for sales people who earn more than $15,000 and display them for me.” The software then broke this down into a proposed series of steps:

Shall I do the following?
Select and view the forms in which
The Department is SALES and
The Salary is > $15,000.00

Q&A then presented each of the matching forms one at a time. You could also pull up a list of all matching forms.
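Once Q&A had turned the English sentence into the structured criteria shown above, applying them is ordinary record selection. The sketch below shows that final step only, with invented records and field names; the natural-language parsing that produced the criteria is the hard part and is not shown.

```python
# Hypothetical records and field names, for illustration only.
forms = [
    {"Name": "Pat Lee",  "Department": "SALES",      "Salary": 18_000},
    {"Name": "Kim Roy",  "Department": "SALES",      "Salary": 14_500},
    {"Name": "Jan Cole", "Department": "ACCOUNTING", "Salary": 22_000},
]

# "...find the forms for sales people who earn more than $15,000..."
# expressed as structured criteria after parsing:
criteria = {"Department": ("==", "SALES"), "Salary": (">", 15_000)}

def matches(form, criteria):
    ops = {"==": lambda a, b: a == b, ">": lambda a, b: a > b}
    return all(ops[op](form[field], value)
               for field, (op, value) in criteria.items())

selected = [form for form in forms if matches(form, criteria)]
print(selected)  # -> only the Pat Lee form
```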

Hendrix said you could also ask follow-up questions using pronouns. For example, he asked Q&A to show the names and addresses of “these people,” referring to the salespeople identified in the original query.

Kildall interrupted, asking if an untrained individual could simply come in and just type any sentence. Hendrix quipped you certainly couldn’t ask, “What’s interesting on TV tonight?” and expect a response. But to answer the question, Hendrix said Q&A could not parse 100 percent of the sentences that were typed. Still, if your purpose was to get a job done rather than to break the system, and you cooperated with the system, it would work.

Cheifet asked Hendrix to explain the “teaching” part of Q&A. Hendrix said that if a user typed a sentence with a word that was unfamiliar to Q&A, the program would stop. As an example, Hendrix typed, “If an employee has a salary between the average Harvard Man’s and John’s, let me see an evaluation and review contents.” The program did not know the word “man.” Hendrix said the system presented a few options. One of them was to “teach” the word as a synonym for “male” in this database.
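A rough sketch of that teaching step appears below: an unknown word halts the lookup, the user maps it onto a term the lexicon already knows, and the mapping is remembered for future queries. The lexicon contents and function are hypothetical, not Symantec’s parser.

```python
# Hypothetical lexicon mapping English words to database terms.
lexicon = {"salary": "Salary", "employee": "Employee", "male": "Sex=M"}

def resolve(word: str) -> str:
    if word in lexicon:
        return lexicon[word]
    # In Q&A this would be an interactive prompt; here we hard-code the
    # choice the demo made: treat "man" as a synonym for "male".
    print(f"I don't know the word '{word}'.")
    synonym = "male"                   # user-supplied answer
    lexicon[word] = lexicon[synonym]   # remember it for next time
    return lexicon[word]

print(resolve("man"))   # first use: teaches the synonym, returns "Sex=M"
print(resolve("man"))   # later uses: already known, no prompt
```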

The program also stopped on “John” because there were two Johns in the database, Cheifet pointed out. Hendrix said you could simply select which “John” you wanted from a list of matching entries. You could also look at individual forms if you needed more information to determine which John was which.

Cheifet asked Dreyfus for his views on the demonstrations seen during the program. Dreyfus said they were impressive. But he cautioned that “expert” systems could only do about 80 or 90 percent as well as the experts in a given field. He would call them “competent systems.” But once you were willing to settle for competence–which was important and valuable–you could then use rules and inferences to make these competent systems useful. But it was a mistake to think that if you just worked harder you could get to “expertise,” i.e. that last 15 percent or so.

Dreyfus added there were two kinds of expertise. The first was the “calculating” kind where a computer could actually do better than people. But most expertise was “intuitive,” where you ended up settling for less with a computer.

Should AI Take an IQ Test?

In his closing commentary, George Morrow said that terms like “artificial intelligence” and “expert systems” were first coined in the late 1950s when university-based computer research was largely funded by the military. These terms were therefore likely used to impress the granting agencies more than anything.

Morrow said much of today’s AI software was novel and very useful. But none of these products would be able to live up to what the term “artificial intelligence” evoked. He agreed with Hubert Dreyfus that we should call expert systems “competent systems.” And if a manufacturer insisted its products had artificial intelligence, that product should take a standard IQ test–with the score affixed to the package!

Would 1986 be the Year of Speech Recognition?

Stewart Cheifet presented this week’s “Random Access,” which was recorded in early January 1986.

  • Some industry watchers were saying that 1986 would be the year that speech recognition technology came into its own. Cheifet said there were three major speech products that could come out this year. Ray Kurzweil promised a word processor that accepted dictation and offered a 10,000 word dictionary. Speech Systems was expected to release a similar product with a 5,000-word vocabulary. And IBM was developing its own product, called Tangora, although there was no word when it might be released. Cheifet added that MIT, Carnegie Mellon, and various Japanese companies were also heavily investing in speech recognition technology.
  • IBM was expected to release its new “clamshell” portable computer in 1986. Cheifet said the rumors were that IBM would actually release two versions–one with a 3.5-inch floppy disk drive and one with no drive. The model-with-drive would also come with a utility called Xcopy that would let a user copy software from a 5.25-inch disk drive. Cheifet added these reports also suggested that IBM would start using 3.5-inch drives on its desktop PCs, setting the stage for a new standard.
  • IBM was also expected to release a new graphical version of TopView, although Cheifet said some reports suggested that version 2.0 would not be out until 1987.
  • Paul Schindler reviewed PFS:Access (Software Publishing Corporation, $95), a communications package.
  • The national computerized Yellow Pages was now a reality. Cheifet said it was called American Business Lists, and it cost $15 per month to access plus $1 per minute connection time. The publisher claimed the online directory contained 6 million business listings.

Dr. Hubert L. Dreyfus (1929 - 2017)

Hubert Dreyfus passed away in 2017 at the age of 87. Dreyfus taught at MIT in the 1950s and 1960s before moving to the University of California, Berkeley, in 1968. He taught at Berkeley until 2016.

Dreyfus first gained notoriety at MIT as a critic of the potential impact of computers on society. According to an August 1966 report in the Terre Haute Star–Dreyfus was from Terre Haute, Indiana–the professor published a pamphlet through the Rand Corporation outlining his tech skepticism:

In essence, Dr. Dreyfus believes the computer has been greatly overrated and will never replace man’s thinking and ability. He says the machine cannot and will not work properly.

In the beginning, though, the computer registered dramatic success, but problems arose and disenchantment resulted. For instance, a computer beat an ex-champion from Connecticut in checkers, but the machine had no luck in chess.

Dr. Dreyfus believes a computer’s attention cannot be attracted by areas on the board that look interesting. It fails to zero in on possibilities that appeal to a type of “fringe consciousness.”

Computer programs have been falling short in language areas, especially in translating. Moreover, Dr. Dreyfus compares the work of specialists attempting to produce artificial intelligence by means of computers with the ancient alchemists who tried to make lead into gold.

Computers eventually learned how to beat humans at chess, of course, but Dreyfus remained steadfast in his views that machines were incapable of learning or applying “common sense.” A March 1985 article from the Austin American-Statesman noted that Dreyfus had been dubbed “the black knight” of AI by a leading science magazine for his stance. But Dreyfus told the paper that he didn’t think so-called expert systems could ever “be programmed to answer the kind of question that a four-year-old child can.”

Dreyfus was such a well-known AI critic at the time that it apparently caused something of a problem for Stewart Cheifet and the Chronicles production team. In a 1986 paper co-authored with his brother, Stuart Dreyfus, and published in the MIT Technology Review, Dreyfus revealed that he’d been scheduled as a guest for the March 1984 Chronicles episode on artificial intelligence that also featured Stanford University professor John McCarthy–the man credited with inventing the term “artificial intelligence”–and IntelliCorp’s Thomas Kehler. Naturally, Dreyfus said he was there to provide a “balanced perspective” on the promise of AI. But things didn’t go according to plan:

After much negotiating, an evening was finally agreed upon for taping the discussion.

That evening the producer and technicians were standing by at the studio and [Dreyfus] had already arrived in San Mateo when word came that McCarthy would not show up because Hubert was to be on the program. A fourth participant, expert-systems researcher Michael Genesereth of Stanford University, also backed out.

All of us were stunned. Representatives from public TV’s NOVA science series and CBS news had already interviewed [Dreyfus] about AI, and he had recently appeared on a panel … [with] McCarthy himself at a meeting sponsored by the New York Academy of Sciences. Why not on KCSM? It seems the “experts” wanted to give the impression that they represented a successful science with marketable products and didn’t want to answer any potentially embarrassing questions.

The shock tactic worked. The station’s executive producer, Stewart Cheifet, rescheduled the taping with McCarthy as well as the demo from IntelliCorp, and he decided to drop the discussion with [Dreyfus]. The viewers were left with the impression that AI is a solid, ongoing science which, like physics, is hard at work solving its quite manageable current problems. The public’s chance to hear both sides was lost and the myth of steady progress in AI was maintained.

Obviously, Dreyfus didn’t hold a grudge as he appeared in this episode, which Cheifet no doubt offered as a makeup for what happened with McCarthy and Kehler. I’d also point out that the first AI episode had Herb Lechner, and not Gary Kildall, as the co-host. I wonder what Gary would have done if he had been present for McCarthy and Kehler’s unprofessional no-show.

Hendrix’s Early Symantec Struggles Created Base for Future Antivirus Giant

Stewart Cheifet introduced Gary Hendrix as a vice president with Symantec. Technically, Hendrix was the founder of Symantec, although, as is often the case when researching the history of technology companies, the complete story is far more complicated.

Hendrix started working in artificial intelligence research as a graduate student at the University of Texas in the early 1970s. As this episode illustrated, Hendrix’s particular field of interest was natural language processing. According to an oral history interview conducted by Dag Spicer for the Computer History Museum in 2004, Hendrix authored a paper that caught the attention of SRI International–the subject of Wendy Woods’ first report from this episode–and he ended up joining SRI even before completing his doctorate.

While working at SRI, Hendrix decided to start an AI-based business. This led to the formation of Machine Intelligence Corporation in 1979, which Hendrix said included 15 people from SRI and Stanford who each contributed $5,000 in capital. Hendrix brought in an industry veteran, Arthur Lash, to serve as CEO.

Machine Intelligence’s initial project was developing a system for using natural language processing to assist in manufacturing. The company received a $25,000 grant from the National Science Foundation to fund the basic research. Hendrix said they managed to produce a Pascal-based prototype for the Apple II that could parse sentences, although there was no database at this point. (The database back-end was later developed by Jerry Kaplan, who worked as a consultant for Symantec.)

The NSF eventually provided Machine Intelligence with an additional grant of $250,000. But Hendrix said shortly thereafter, Lash decided to focus the company on another area of research, robot vision systems. Machine Intelligence ended up partnering with a Japanese robotics company–Hendrix could not recall the name–but that ended up going nowhere. That was effectively the end of Machine Intelligence, Hendrix said, but not of his own natural-language processing project.

After contemplating a sale to Hewlett-Packard, Lash agreed to spin off Hendrix’s project into its own company, which was incorporated as Symantec Corporation in 1982. Hendrix said he initially owned 60 percent of Symantec, with Machine Intelligence retaining the other 40 percent. Lash brought in Steve Shanck, a former Apple sales executive, to serve as Symantec’s CEO.

After Symantec presented a demo of its AI technology–still running on the Apple II–at the American Electronics Association’s March 1983 conference, venture capitalist John Doerr of Kleiner Perkins agreed to invest $3.5 million in the company. (Doerr previously appeared in a March 1985 Chronicles episode discussing IBM’s impact on the industry.) But as Hendrix later admitted to Spicer, his team of mostly academic researchers struggled to develop a viable database product that could actually take advantage of Symantec’s natural language processing research. In particular, Hendrix realized that development needed to move away from Pascal and towards C and assembly language, which would make a product that was more viable on microcomputers like the IBM PC.

Doerr’s solution was to basically merge Symantec with another struggling software company, C&E Software, Inc., which was started by Dennis Coleman and Gordon Eubanks in 1983. Coleman and Eubanks previously worked for Gary Kildall at Digital Research. They had started C&E hoping to target the low end of the office software market, including their own database product. C&E also struggled to ship a product, but its five-person team had far more experience with C and assembly language.

On paper, C&E acquired Symantec, which continued to exist as a wholly owned subsidiary for several more years. Eubanks replaced Shanck as CEO. Hendrix remained in charge of software development, having replaced most of his old Stanford and SRI colleagues with C&E’s programmers.

The reorganized Symantec would finally release its database product, Q&A, in September 1985. It initially retailed for $300. Notably, Q&A required a minimum of 512 KB of memory, which was quite hefty for the time period. (The PC-AT, for instance, came with 256 KB standard.) Symantec therefore also sold an add-on 256 KB memory board for $50.

Q&A went on to enjoy a long life in software terms. Symantec continued to publish updated versions well into the Windows era, with the final release, Q&A 5.0, coming in 1995 and official support ending in 1998.

Symantec itself continues to exist today, though not under that name. This is, yet again, where the story gets complicated. Even before Q&A was released, Symantec published another product called Note-It, a utility for Lotus Development’s Lotus 1-2-3. Note-It was not released under the Symantec name but rather that of Turner Hall Publishing. This was a Symantec subsidiary named after two of the company’s marketing executives, Rod Turner and Dottie Hall.

Even after Q&A proved successful and Symantec started making money, its business quickly started to shift away from Hendrix’s AI-based work towards utility software. Starting in 1987, Symantec went on an acquisition binge, purchasing several other companies to grow in sales and staff. In 1988, a newly formed Symantec Corporation in Delaware absorbed both the original Symantec and C&E. This new Symantec went public in 1989. The following year, Symantec acquired Peter Norton Computing, one of the leading DOS-based utility software developers. Another year later, in 1991, Hendrix left Symantec and moved back to Texas.

During the 1990s, Symantec became synonymous with antivirus and computer security software. Eubanks resigned as CEO in 1999. His successor, former IBM executive John W. Thompson, sold off all of Symantec’s non-security-related products and purchased data center software giant Veritas in 2004. As writer Richard Stiennon noted in a 2020 article for Forbes, the Veritas deal “was a major blunder.” After paying $13.5 billion for Veritas, Symantec ended up selling the division to the Carlyle Group in 2015 for $8 billion.

In 2019, Symantec renamed itself NortonLifeLock, combining the Norton name with LifeLock, yet another brand acquired by Symantec in 2016.

TI Found AI Success…in Soup?

Texas Instruments had been one of the many tech companies to enter–and exit–the personal computer business during the first half of the 1980s. After pulling out of that market altogether by 1985, TI turned to expert systems and AI as a potential growth area. In addition to the software demonstrated on this episode, TI also sold a line of dedicated Lisp machines called the Explorer.

Personal Consultant initially retailed for $3,000 when it was released in 1984, according to InfoWorld. But by late 1985, TI decided to offer a basic version of the program for just $950 and a more expensive Personal Consultant Plus for advanced users. The cheaper version was later renamed Personal Consultant Easy, and by 1989 an updated version was available for $500.

The expert systems market seemed to fizzle out by the end of the 1980s. But technology like Explorer and Personal Consultant did have its niche, as Joe Watson explained in a now-archived article from the Texas Instruments website:

Probably one of the best known commercial expert system applications from this era was the Campbell soup story. One of their long-time experts, who had intimate and detailed knowledge about the soup processing factory flow, was about to retire. Soup was cooked in the can after sealing, and it was essential that a carefully controlled time and temperature profile be followed in the cooking process, or the soup would be spoiled. Whenever one of their many soup canneries developed a flow problem, the result was a lot of bad soup, so the expert would be consulted to quickly correct the problem. In order to preserve the expert’s knowledge base, a group of TI “knowledge engineers” was called in to capture his knowledge, codify it and make it available through an expert system. The company reported that this project was very successful.

Notes from the Random Access File

  • This episode is available at the Internet Archive and has an original broadcast date of January 2, 1986.
  • Jerry Kaplan’s major contribution at Lotus was co-authoring an information manager tool called Agenda. In 1987, Kaplan co-founded GO Corporation, which developed early operating systems for pen-based computers. Kaplan has been involved in a number of other tech startups since then and today remains affiliated with Stanford University as an adjunct lecturer.
  • Harry Tennant remained with Texas Instruments until 1994.
  • Terry Winograd spent nearly 40 years as a professor at Stanford, taking emeritus status in 2012. In addition, Winograd was an advisor and visiting researcher with Google from 1998 to 2012, having previously served as an academic advisor to company co-founder Larry Page.
  • Dr. Hubert Dreyfus was so well-known as an AI critic that his views on artificial intelligence have their own Wikipedia page.
  • One of Dreyfus’ graduate students at Berkeley was Eric Kaplan, a former writer for the animated series Futurama. Kaplan said the character of Dr. Hubert Farnsworth from that program was named in honor of Dreyfus.
  • IBM never released a GUI for TopView. By 1987, IBM shifted its development to a new operating system–OS/2–which did feature a GUI, but that did not debut until 1988.
  • IBM did, however, release that “clamshell” portable. Known as the IBM Convertible PC (a/k/a the IBM 5140), it came out in April 1986 with two 3.5-inch drives and initially retailed for $2,000.