Computer Chronicles Revisited 20 — Expert-Ease and the KEE System

In Part 14 of this series, the Computer Chronicles first discussed the subject of “expert systems.” This referred to computer knowledge bases that purported to replicate a human’s expertise in a particular field. This next Chronicles episode revisits the idea of expert systems as part of a broader discussion of artificial intelligence.

Herbert Lechner is back as Stewart Cheifet’s co-host for this episode. The program opens with Cheifet demonstrating a pocket chess computer–I could not make out the model, but I’m guessing it’s from Radio Shack. Cheifet noted that chess was considered the “ultimate game of skill” and required a certain amount of mental agility or “intelligence.” And yet, this simple computer was capable of applying a certain kind of intelligence by analyzing board positions and selecting moves.

Cheifet then pointed out that for some people, artificial intelligence (AI) suggested an attempt to duplicate the way that a human brain worked. Was that what AI was really about? Lechner said not in terms of modern AI research. He said early AI research did look into duplicating human thought processes. But the current focus was on duplicating the end result of intelligence as opposed to the process. For instance, this meant developing computers that acted as experts–i.e., expert systems–and machines that could communicate with people in human language.

The normal B-roll was preceded by clips of an interview with Dr. Edward Feigenbaum, a professor of computer science at Stanford University. Feigenbaum explained that computers were general symbol processing devices capable of manipulating many kinds of symbols, of which numbers were one important class. But computers were also much more general than that. He noted that researchers had understood the “generality of computation” since the work of Alan Turing in the 1930s. And Feigenbaum said he’d traced these ideas as far back as the “intuitions” expressed by Charles Babbage to Ada Lovelace in 1842. Lovelace said that Babbage’s proposed analytical engine provided a link between the mechanical world and the most abstract of concepts. Feigenbaum said this was now referred to as the physical symbol system hypothesis, which was the basis for present-day work on AI.

In the science of artificial intelligence, Feigenbaum said, researchers used computers to process symbolic knowledge using logical inference methods. In other words, the goal was inference and not calculation in the traditional sense–or knowledge as opposed to numbers.

This led into Cheifet’s narration. He said that one current application of inferential knowledge was in the field of expert systems. The B-roll showed an example of an expert system, an “oil exploration advisor” developed to assist drilling rigs in remote areas. Cheifet explained that the program behaved much like a qualified specialist: It asked questions of the user and then gave advice on how to avoid or correct accidents that were common in drilling operations. If the user was confused by a question, they could ask the program to explain by typing, “WHY?” The “advisor” would then respond with the specific source for its question and explain its reasoning up to that point. Cheifet said this type of symbolic processing had found applications in a broad range of fields from medicine to robotics.
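To make that interaction style concrete, here is a minimal Python sketch of a consultation-style expert system along the lines Cheifet described. The rules and questions are my own invented stand-ins, not the real drilling advisor’s knowledge base; the point is only that a “WHY?” response can be implemented by pointing back at the rule that prompted the current question.

```python
# Minimal sketch of a consultation-style expert system, in the spirit of the
# "oil exploration advisor" described above. The rules and questions here are
# invented for illustration; they are not the real advisor's knowledge base.

RULES = [
    # (conclusion, conditions that must all hold)
    ("risk of stuck pipe", ["torque is increasing", "mud circulation is restricted"]),
    ("advice: stop drilling and circulate", ["risk of stuck pipe"]),
]

def ask(fact, rule):
    """Ask a yes/no question; answering 'why' shows the rule behind the question."""
    while True:
        reply = input(f"Is it true that {fact}? (yes/no/why) ").strip().lower()
        if reply == "why":
            if rule is None:
                print("This is a top-level question; I have no rule to cite.")
            else:
                conclusion, conditions = rule
                print(f"I ask because establishing '{conclusion}' "
                      f"requires: {', '.join(conditions)}.")
        elif reply in ("yes", "no"):
            return reply == "yes"

def established(goal, facts, via=None):
    """Backward-chain on a goal; primitive facts are obtained from the user."""
    if goal not in facts:
        rules = [r for r in RULES if r[0] == goal]
        if rules:
            facts[goal] = any(all(established(c, facts, via=r) for c in r[1])
                              for r in rules)
        else:
            facts[goal] = ask(goal, via)
    return facts[goal]

if established("advice: stop drilling and circulate", {}):
    print("Advice: stop drilling and circulate the mud.")
else:
    print("No corrective action indicated.")
```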

At the same time, Cheifet said, the most difficult area for AI to master was natural language–a capability that would make computers far friendlier but also required enormous processing capacity. The ambiguities of syntax and context had restricted present AI systems to very limited applications. For example, a simple question about geography could lead to multiple interpretations depending on the phrasing. A program had to determine the question’s most likely meaning after conducting an exhaustive deconstruction of the sentence, and even then it could reject an unusual phrasing.

Cheifet concluded by noting that in a parallel development, there were other applications available that gave people who could not speak normally the means to adapt or construct alternative symbolic languages–a remarkable example of using a computer program’s “intelligence” to help interpret human intelligence.

Copying the Knowledge of “World-Class Experts”

Jeffrey Perrone and Nils Nilsson joined Cheifet and Lechner in the studio. Nilsson, another computer science professor at Stanford, tried to define the current scope of AI. He viewed AI as mainly concerned with putting knowledge into computer programs, so that those programs could then solve certain problems for humans. And that knowledge had to be represented in a particular sort of way.

Lechner noted that it was not just that AI programs were smarter, but that they also relied on different techniques than ordinary programs. Nilsson agreed there was a body of techniques used with AI that differed from the rest of computer science.

Cheifet then turned to Perrone, a consultant who marketed expert systems, and asked him to explain how the type of AI research performed by Nilsson was applied in practice. Perrone said that AI-based expert systems could now be used anyplace where “specialized knowledge” was routinely used to make decisions, devise troubleshooting strategies, reach diagnoses, and so forth. He noted that we’d reached a watershed where it was no longer very expensive or difficult for individuals with no technical background to build and use such expert systems.

Lechner then asked Perrone to demonstrate an expert system called Expert-Ease. Perrone explained that Expert-Ease was a tool for generating expert systems applications in virtually any area. It ran on microcomputers like the IBM Personal Computer. And while Expert-Ease applied some of the basic AI concepts that Nilsson discussed, the software also had its own unique approach. For example, instead of requiring an explicit statement of rules to build an expert system, Expert-Ease only needed examples of previous decisions, tasks, or diagnoses.

The actual demo involved an expert system that Perrone had previously developed using Expert-Ease in collaboration with Dr. Hillary Don, an anesthesiologist. The purpose of this particular system was to help diagnose patients with potential airway and breathing problems. The program presented a series of questions to the user, such as:

  • When was the onset of the problem?
  • What is the quality of the stridor (a rasping sound made while breathing)?
  • Is the patient drowsy?
  • Are there any predisposing factors to developing stridor?

Based on the answers provided, Expert-Ease offered a potential diagnosis. Lechner asked who would use a system like this. Perrone said either a doctor or a physician’s assistant could use the system to screen patients “for further fine tuning” by a human with more specific expertise.

Perrone also demonstrated a more “trivial example” of the Expert-Ease software to illustrate how its underlying matrix worked. This example helped a user decide what to do on a Sunday afternoon based on two inputs–what the weather was like and whether the user’s family was with them. Perrone explained these questions or “attributes” were used to build the expert models. (Basically, it’s a glorified spreadsheet.) For any particular problem, the user could define up to 31 attributes and 255 values per attribute. So it was possible to build very large models with Expert-Ease, subject only to the limitations of the host computer’s disk storage. After sketching out the attributes, the user then went to an “example screen” to enter examples of previous decisions.
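For the curious, this “examples in, decisions out” approach can be sketched with a small decision-tree learner. The Python below is a rough ID3-style illustration, not Expert-Ease’s actual algorithm; only the two attributes (weather and family) come from the demo, and the values and recommendations are invented.

```python
# A small ID3-style sketch of inducing a decision rule from worked examples,
# in the spirit of Expert-Ease's example matrix. The rows are invented
# stand-ins for the "Sunday afternoon" demo. Assumes the examples are
# consistent (no two identical rows with different decisions).
from collections import Counter
import math

EXAMPLES = [
    # (attribute values, decision)
    ({"weather": "sunny", "family": "yes"}, "picnic"),
    ({"weather": "sunny", "family": "no"},  "go sailing"),
    ({"weather": "rainy", "family": "yes"}, "board games"),
    ({"weather": "rainy", "family": "no"},  "read a book"),
]

def entropy(rows):
    """Shannon entropy of the decisions in a set of example rows."""
    counts = Counter(decision for _, decision in rows)
    total = len(rows)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def induce(rows, attributes):
    """Recursively build a decision tree by splitting on the best attribute."""
    decisions = {decision for _, decision in rows}
    if len(decisions) == 1:            # all examples agree: this is a leaf
        return decisions.pop()

    def split_entropy(attr):
        values = {attrs[attr] for attrs, _ in rows}
        return sum(entropy([r for r in rows if r[0][attr] == v]) *
                   len([r for r in rows if r[0][attr] == v]) / len(rows)
                   for v in values)

    best = min(attributes, key=split_entropy)
    rest = [a for a in attributes if a != best]
    return {best: {v: induce([r for r in rows if r[0][best] == v], rest)
                   for v in {attrs[best] for attrs, _ in rows}}}

tree = induce(EXAMPLES, ["weather", "family"])
print(tree)   # a nested dict: split on weather first, then family at each branch
```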

Cheifet asked whether it was really possible to seriously approach this kind of problem solving using a personal computer. Perrone said yes. He noted that he’d recently spoken with the Whole Earth Software Review about implementing an expert system to recommend software to its readers.

Cheifet turned back to Nilsson and asked him what was the point of all this? Why would we develop something like Expert-Ease? Nilsson said there were a lot of uses for this type of system. The present systems admittedly had some “brittle” features that limited their utility at the moment. But over the next 10 to 20 years, he believed expert systems would be generally used in a wide range of settings. The goal was for these systems to contain the knowledge of “world-class” experts. In many fields there might only be 5 or 6 such people. An expert system could help spread the knowledge of that small group in a way that could be easily copied.

Programs That Can Build Other Programs

The final segment began with Cheifet talking to Tom Kehler of IntelliGenetics, an AI company co-founded by Edward Feigenbaum. Kehler demonstrated the Knowledge Engineering Environment, or KEE system, which was an example of a more complex expert system than Expert-Ease. Kehler explained the KEE system used a graphical environment to connect underlying knowledge bases. This allowed the user to build models by “just fooling around with” various meters, valves, and objects.

The particular system Kehler demonstrated was designed for a nuclear reactor. The user could take various actions through the graphical interface. Kehler said those actions would then affect a knowledge base that would apply heuristics and decision-making procedures. For instance, the system could test the hypothesis of a particular nuclear accident scenario.

Cheifet clarified that the user could interrogate the system and ask it what course of logic it was following in recommending a specific course of action. Kehler said that was correct. For instance, if the system asked whether a steam generator’s level was decreasing, a nuclear technician could ask the system to explain why it was asking that particular question.
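As a rough sketch of the architecture Kehler described–graphical meters and valves backed by objects in a knowledge base, with heuristics that test a hypothesis and can explain themselves–here is a small, hypothetical Python illustration. The frames, slot names, and the “loss of feedwater” rule are invented; this is not KEE’s actual data model.

```python
# Rough, hypothetical sketch of a frame-style knowledge base with attached
# heuristics, loosely in the spirit of the KEE reactor demo. The frames, slots,
# and the rule below are invented for illustration only.

class Frame:
    """A frame: a named object with slots (attribute -> value)."""
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)

    def __repr__(self):
        return f"{self.name}({self.slots})"

# Each on-screen meter or valve in the demo would be backed by a frame like these.
steam_generator = Frame("steam_generator_1", level=42.0, trend="decreasing")
relief_valve = Frame("relief_valve_1", state="closed")

def loss_of_feedwater_heuristic(kb):
    """Test the hypothesis of a loss-of-feedwater scenario (invented rule)."""
    sg = kb["steam_generator_1"]
    valve = kb["relief_valve_1"]
    if sg.slots["trend"] == "decreasing" and valve.slots["state"] == "closed":
        return ("hypothesis supported: possible loss of feedwater",
                f"because {sg.name}.trend is decreasing and {valve.name} is closed")
    return ("hypothesis not supported", "conditions not met")

knowledge_base = {f.name: f for f in (steam_generator, relief_valve)}
conclusion, explanation = loss_of_feedwater_heuristic(knowledge_base)
print(conclusion)
print("WHY:", explanation)
```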

Cheifet pointed out that the KEE system was written in the Lisp programming language. Why was that the right language for an AI application? Kehler said Lisp was important because you needed very powerful symbolic manipulation capabilities. After all, decision-making was primarily a symbolic activity.

With that, Cheifet rejoined Lechner and Nils Nilsson, along with Dr. John McCarthy, yet another Stanford computer science professor who was also the inventor of Lisp. Lechner asked McCarthy why he invented Lisp. McCarthy replied it was primarily for use in AI work such as the KEE system. Lechner followed up, asking what were the characteristics that made Lisp better for AI than other languages. McCarthy explained that Lisp programs were written in the same format as their data. That made it easy to build programs that could produce other programs. Lechner said it also made it possible to deal with facts and logic more so than numbers. McCarthy said that was correct.
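McCarthy’s “programs as data” point is worth a quick illustration. Lisp shows it natively, but the Python sketch below gestures at the same idea: an expression is just a nested list, a tiny evaluator runs it, and another function can build or rewrite that list before it is evaluated. This is only an analogy, not Lisp itself.

```python
# Illustrative sketch of "programs as data": expressions are plain nested lists,
# a tiny evaluator runs them, and other code can build or rewrite them as data.
# This only gestures at what Lisp does natively; it is not Lisp.
import operator

OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

def evaluate(expr):
    """Evaluate an expression given as a nested list, e.g. ['+', 1, ['*', 2, 3]]."""
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*(evaluate(a) for a in args))
    return expr  # a number evaluates to itself

# A "program" is just data...
program = ["+", 1, ["*", 2, 3]]
print(evaluate(program))                    # 7

# ...so another program can construct or transform it before running it.
def double_literals(expr):
    """Build a new program in which every numeric literal is doubled."""
    if isinstance(expr, list):
        return [expr[0]] + [double_literals(a) for a in expr[1:]]
    return expr * 2

print(evaluate(double_literals(program)))   # 2 + (4 * 6) = 26
```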

Lechner said when it came to these newer “smart machines,” the question on everyone’s mind was, “How smart can machines become?” In other words, what were the limits of artificial intelligence? McCarthy said he saw no limit short of human intelligence. With faster machines, a computer could do in a short time what a human did over a long time.

Cheifet asked Nilsson for his thoughts on this. Nilsson said there were three problems AI researchers currently faced. The first was deciding what knowledge should be represented in a program. The second was how to represent knowledge in the program itself. Finally, how would that knowledge be used? He noted that while there had already been rather successful activities in all three areas, there were still other things that needed to be done.

Cheifet asked if there was any frustration in the field of AI given that there was a lot of hope, yet it was harder than one might think to develop these concepts into a practical application. McCarthy noted there was already a certain level of practical technology in AI and that people were making applications. On the other hand, he made note of Nilsson’s earlier point that AI programs could now contain the knowledge of world-class experts. McCarthy said that was true, provided the knowledge fit into a format that the present software allowed, i.e., “If this is true, then do that.” But more general kinds of knowledge that could be used in vaguer ways were harder to program in this manner.

Cheifet recalled Jeffrey Perrone’s earlier demo of an expert system used to make medical diagnoses. He noted that there were several such systems available, yet there were problems using them. What were those problems? McCarthy said he was not familiar with those specific systems, but he offered a hypothetical example. Say you programmed a system to define a container as “sterile” if all the bacteria in it were dead. That knowledge could not be used directly to determine whether the container was sterile by examining each bacterium individually. It was useful only as general knowledge, i.e., if you heat the container enough, the bacteria will all be killed. Or if you leave bacteria in food, it will spoil.

Cheifet asked where the biggest potential market was for AI applications. Nilsson said the expert systems discussed today represented a rather large market and that companies would continue to develop “less brittle” systems over the next 5 to 15 years. Another application would be computer programs that could converse with humans in English or other ordinary languages. That would make computers more accessible to a much wider variety of people.

Cheifet asked Nilsson to explain what he meant by “brittle.” Nilsson said it had to do with being able to reason about the context in which a particular discussion or conversation with an expert system was taking place. If there was information that was needed to reach a certain decision, and it was there explicitly, then the system could come up with a reasonable answer. But if the system needed “common sense”–i.e., the type of knowledge most humans had by the time they were 10 or 15–then the system would often fail. Cheifet quipped that ironically, common sense was the most difficult thing for a computer to learn. Nilsson retorted that few people were willing to pay to put common sense into a computer, whereas it was commercially important to put something like knowledge about how to operate a nuclear reactor into a computer.

Reagan Didn’t Want Communists Getting Computers

This episode’s “Random Access” segment, recorded in March 1984, was presented by Susan Bimba of KCSM-TV for the second time.

  • The U.S. Department of Commerce moved to stop exports of computer technology to Soviet-controlled eastern Europe. Specifically, the Department placed new restrictions on Digital Equipment Corporation. The company would now need to obtain individual export licenses for each “sophisticated computer” that it shipped to West Germany, Austria, or Norway–three countries that were known to divert western technology to the Soviet bloc.
  • Indeed, despite lobbying from the technology industry, Bimba said President Ronald Reagan had given the U.S. Department of Defense the authority to “advise” the Commerce Department on high-tech exports to all foreign countries. The Pentagon already had that advisory authority with respect to communist countries.
  • Following up on an earlier “Random Access” story, Bimba said National Semiconductor was faced with a ban on making chips for the U.S. government after admitting to “inadequate testing” of its products. Despite that, the company reported a $15.4 million profit for the last quarter.
  • IBM said it planned to ship more than 2 million personal computers in 1984, nearly triple its output for 1983.
  • IBM’s newest machine–the portable IBM 5155–led clone-maker Compaq to announce it would cut the price of its Compaq Portable from $2,995 to $2,495. This would allow Compaq to undercut the IBM portable’s expected retail price of $2,795.
  • Atari said it would lay off 300 more workers, although it planned to expand its sales and engineering staff. Bimba noted that in 1983, Atari lost $500 million, laid off 2,000 workers in the San Francisco area, and moved part of its operations to Taiwan and Hong Kong.
  • Another follow-up: After the Japanese Diet considered legislation to reduce copyright protections for U.S. software companies in Japan, the United States Government warned it would retaliate if the measures passed.
  • Finally, Paul Schindler reviewed Ken Uston’s Professional Blackjack. Schindler said the $70 program would “teach you how to play the game and win using various point-counting methods.” He also noted the “crisp” graphics and the use of sound.

John McCarthy (1927 - 2011)

John McCarthy died in 2011 at the age of 84. He was not simply a pioneer in the field of artificial intelligence. He actually coined the term itself during a 1956 workshop at Dartmouth College. Nils Nilsson wrote a short memoir about McCarthy following the latter’s death. Nilsson recounted McCarthy’s explanation of how he came up with the term:

“I had to call it something, so I called it ‘Artificial Intelligence,’ and I had a vague feeling that I’d heard the phrase before, but in all these years I have never been able to track it down.”

McCarthy spent time at Princeton, Stanford, MIT, and Dartmouth, before moving to Stanford permanently in 1962, where he remained on faculty until he retired in 2000.

As noted in the episode, McCarthy’s most tangible contribution to AI and computer science was his invention of the Lisp programming language, which is still in use today. Nilsson’s memoir briefly described how Lisp came about:

McCarthy spent the summer of 1958 at IBM, and it was there that he first began to recognize the need for a new programming language–one that could support recursion and dynamic storage. (In recursive languages, a program can invoke a [usually] simpler version of itself.) When he returned to MIT in the fall of 1958, he began work on a new language, one he called “LISP” (for “list processor”). Besides recursion, programs written in LISP could process arbitrary symbolic structures and could treat programs (written in LISP) as data (expressed as lists in LISP).

Nilsson also described McCarthy as “scrupulously honest in his claims about his work.” Noting that AI is often accused of “overselling itself”–Stewart Cheifet would probably agree–Nilsson said McCarthy’s papers and memos often contained “modest qualifications” regarding the technology.

Nils Nilsson (1933 - 2019)

Sadly, Nils Nilsson passed away a few years after McCarthy, in 2019 at the age of 86. According to an obituary by Sophia Kunthara in the San Francisco Chronicle, Nilsson worked at SRI International–where Herbert Lechner was a co-worker–for 23 years, serving as its leader from 1980 to 1984. In 1985, he became chairman of Stanford’s computer science department, a post he held until 1990. He then served as the Kumagai Professor of Engineering until his retirement and, according to Kunthara, he “helped transition the department from the School of Humanities and Sciences to the School of Engineering.”

Perrone Followed the AI Trail to Apple

The other two in-studio guests from this episode are still alive as of this writing. Jeffrey Perrone continued to work as a self-employed consultant on expert systems well into the mid-1990s. After spending five years as a senior product manager for E*Trade, Perrone co-founded a now-defunct website called PearlSoup.com, and served as its product manager from 2002 to 2009. During the 2010s, Perrone seems to have bounced between consulting and startup work. In 2020, he joined Apple as a contract “conversation user interface designer.” (I assume that means he’s working on Siri.)

The subject of Perrone’s product demonstration, Expert-Ease, was written by Donald Michie, a British AI researcher who passed away in 2007. Scott Mace provided a detailed write-up of Expert-Ease for the March 19, 1984 issue of InfoWorld. The software cost $2,000 and required an IBM PC with at least 128 KB of RAM. (For reference, a stock PC model only came with 16 KB.) Mace said that as of his report, “about 60 sites worldwide have bought early versions of Expert-Ease.” He quoted Perrone as promising a “new, faster, and easier-to-use version of the program will be released this spring [1984] as it goes into wider circulation.” Apparently, the first version of Expert-Ease was considered too slow, owing to the fact it was written in Pascal.

Kehler Continues to Promote AI Decision-Making

As for Thomas Kehler, he has served as CEO of a number of tech companies, including IntelliGenetics. According to a corporate history by FundingUniverse, Edward Feigenbaum started IntelliGenetics to monetize the AI research he was doing at Stanford. The company introduced its KEE system in 1983 as a “general purpose AI tool that had the potential to develop knowledge-based systems with commercial and industrial applications.” Later that year, IntelliGenetics went public, changing its name to IntelliCorp in 1985.

But as FundingUniverse explained, IntelliCorp struggled to capitalize on the potential of artificial intelligence. Ironically, it was the use of Lisp that made the KEE system less attractive to potential customers:

IntelliCorp, like many of the early AI companies, relied on the LISP programming language, which in turn required a specialized computer that cost as much as $150,000 and could only be used by a single operator. The research and development units of major corporations were willing at first to invest in this technology, thus spurring initial growth in the industry, but the broader corporate marketplace failed to follow suit. It wanted AI programs that could run on its mainframes and minicomputers and did not require its people to learn LISP. As a result, AI advances began to be incorporated into general computer programs, and AI lost its status as a distinct market.

Kehler replaced Feigenbaum as chairman and CEO in 1988 and remained with the company until 1991. Since then, he’s been involved with more than half a dozen tech startups. Kehler is currently CEO of CrowdSmart. He explained his latest venture’s purpose in a 2020 Medium article:

Whenever capital is deployed, one or more individuals make a judgement call to invest or pass. In most investment scenarios there is an investment committee that makes the call to deploy capital. Our project focused on seed and series A investments where there is little to no reliable predictive data and the process is highly reliant on human judgement.

The project was initiated as part of a new company, CrowdSmart, founded in 2015 to create a technology platform that would change the way capital is deployed to startups. (My co-founders were Kim Polese, Markus Guehrs and Fred Campbell). We believed that a scalable, trusted investment rating (like a Moody’s rating or a FICO score), and guidance would level the playing field to create more equitable opportunities for startups and better outcomes for investors.

As for IntelliCorp, it sputtered along for a couple more decades after Kehler’s departure until its software assets were finally sold in 2019 to Austria-based software company Tricentis.

Notes from the Random Access File

  • This episode is available at the Internet Archive and has a recording date of March 16, 1984.
  • Edward Feigenbaum will appear in a later Computer Chronicles episode, and I’ll discuss him in more detail at that time.
  • Stewart Cheifet’s demonstration of a small chess computer may have been another nod to the legacy of John McCarthy, who according to Nils Nilsson helped to develop one of the first chess-playing computer programs in the early 1960s.
  • The use of a computer AI to make decisions regarding a nuclear reactor reminded me of a similar scenario involving Homer J. Simpson some years later.
  • Speaking of The Simpsons, I wonder if the people who made Ken Uston’s Professional Blackjack were also behind Lee Carvallo’s Putting Challenge.
  • Ken Uston’s Professional Blackjack was not just a simple blackjack game. As Paul Schindler mentioned, it taught you how to count cards, which will get you kicked out of a real casino. (Schindler noted Uston himself had recently been ejected from 5 casinos.) What fascinated me was that the game’s settings also adjusted for different types of casino rules. I never knew that blackjack on the Las Vegas Strip was played differently than in downtown Las Vegas casinos.
  • So who the heck was Ken Uston? Schindler described him as a former stockbroker turned professional gambler. An obituary in Blackjack Forum said Uston, who was found dead in a Paris apartment in September 1987, had also been “working as a computer consultant for the Kuwaiti government.”