Computer Chronicles Revisited 25 — The Lawrence Livermore National Laboratory and Japan's National Supercomputer Project

You often hear people describe modern smartphones as a “supercomputer in your pocket.” There’s definitely some truth to that, especially when you compare today’s phones with the supercomputers of 40 years ago. In the mid-1980s, supercomputer manufacturers were still struggling with concepts like parallel processing, i.e., breaking down a program into smaller tasks that could run simultaneously on multiple microprocessors. Today, in contrast, multi-core, multi-threaded CPUs are the norm in just about every personal computing device.
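
As a modern aside (mine, not the show’s), here is a minimal Python sketch of the parallel processing idea: one job split into chunks that run simultaneously on several CPU cores. The chunking scheme and worker count are arbitrary choices for illustration.

```python
# A minimal sketch (mine, not the show's) of parallel processing: split one
# job into chunks and run the chunks simultaneously on multiple CPU cores.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """The small, independent task each worker runs."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # divide the work four ways

    # Serial version: one processor does everything, chunk after chunk.
    serial_result = sum(partial_sum(c) for c in chunks)

    # Parallel version: four worker processes run at the same time.
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel_result = sum(pool.map(partial_sum, chunks))

    assert serial_result == parallel_result
```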

From the Abacus to the Cray X-MP

Stewart Cheifet opened this Computer Chronicles by showing Gary Kildall one of the earliest known computing devices–an abacus. Cheifet joked that if a person was really fast, they could do maybe 2 or 3 operations per second on this 5,000-year-old machine. Today, of course, the subject was supercomputers, which could perform hundreds of millions of operations per second. Why would someone need that kind of incredible speed in a computer? Kildall quipped that for one thing, such speeds helped sell computers. But realistically, there were many problems that did require these kinds of high-performance computers to solve. One example was real-time graphics, where the foreground and background images had to change at a fixed rate, say 30 frames per second. Such graphics required a higher-performance processor to keep up with that frame rate. Other examples Kildall identified were weather modeling and information analysis required by expert systems.
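
To put Kildall’s graphics example in rough numbers (my back-of-envelope figures, not ones cited in the episode), a fixed frame rate turns raw processor speed into a hard per-frame budget:

```python
# Back-of-envelope frame budget (illustrative numbers of my own choosing,
# using roughly the 100 million ops/sec figure from the narration).
ops_per_second = 100_000_000   # supercomputer-class rate
frame_rate = 30                # frames per second for real-time graphics

ops_per_frame = ops_per_second / frame_rate
print(f"{ops_per_frame:,.0f} operations available per frame")
# About 3.3 million operations per frame: everything transformed, shaded, and
# drawn has to fit in that budget, 30 times every second.
```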

Over this week’s B-roll, Cheifet explained that modern supercomputers were over a million times faster than the serial processing machines of the 1950s. Today’s supercomputers relied on massive parallel architectures, multiple functional units, and comprehensive pipelining to reach processing speeds in excess of 100 million operations per second. The applications for these supercomputers were as varied as they were specialized, from satellite imagery to super-real graphics, from space exploration to “Doomsday” weapons research.

Cheifet explained that at the Lawrence Livermore National Laboratory (LLNL), the world’s largest user of supercomputers, magnetic fusion experiments required massive computing power made available through the lab’s Octopus Network. This multi-user system shared seven supercomputers, which rented processing time to over 150 U.S. and foreign-based users who gained access through satellite connections.

The fastest of LLNL’s supercomputers, Cheifet said, was the Cray X-MP. This dual-processor supercomputer was reserved for the most secretive and costly activity at LLNL–nuclear weapons design and analysis.

Returning to our favorite theme for this season, Cheifet cautioned that with the Japanese now challenging American dominance in the supercomputer market, research was taking a different approach. Japan produced machines that were not only faster, but which also offered simpler, more portable software that used standard programs aimed at a much wider market.

Making Algorithms Fit Computers–or Making Computers Fit Algorithms?

In the studio, George A. Michael of LLNL and John Repp from supercomputer manufacturer Control Data Corporation (CDC) joined Kildall and Cheifet. Kildall noted that he had sold one of the first CP/M microcomputers to LLNL about 10 years earlier, which the lab used to operate a microfiche storage peripheral attached to the Octopus Network. So what was going on today at LLNL with supercomputers? Michael explained that the business of the lab was essentially solving partial differential equations, the solutions to which were then applied to technical subjects like nuclear energy and weapons. This required enormous computational power. In that sense, the supercomputers were not an end in and of themselves, but rather the thing that was used to support the lab’s mission.
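
For a rough sense of why numerically solving partial differential equations demands that kind of computing power (my own illustrative numbers, not LLNL figures), the work scales with the number of grid points times the number of timesteps:

```python
# Rough sense of why PDE solving needs so much compute (illustrative numbers
# of my own choosing, not LLNL figures): work scales with grid size and steps.
grid_points = 200 ** 3        # a modest 3-D mesh, 200 cells along each axis
timesteps = 10_000            # steps needed to advance the simulation in time
ops_per_point = 50            # arithmetic operations per cell per step

total_ops = grid_points * timesteps * ops_per_point
machine_rate = 100_000_000    # 100 million operations per second

hours = total_ops / machine_rate / 3600
print(f"{total_ops:.1e} operations, roughly {hours:.0f} hours of machine time")
# About 4e12 operations -- on the order of 11 hours at 100 Mops/s for even this
# modest problem, and real fusion or weapons codes were far larger.
```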

Kildall asked whether LLNL’s approach was to buy more computers as its needs grew, or to apply different techniques to use its existing computers more effectively. Michael said they did both. He noted the secret to a supercomputer–or any computer, really–was that there also had to be a language to express ideas. It was also a matter of developing the right algorithms. This required substantial research, and as a result you could not just buy a new computer and expect to immediately produce results 20 times faster. You had to build the thing.

Kildall noted that LLNL worked a lot with CDC. What were the current limitations of its supercomputer technology? Michael said that if you plotted the historic performance of supercomputers, you would see that there was a 100-factor improvement in performance between 1950 and 1960, and again from 1960 to 1970. But from 1970 to 1980 the performance didn’t even improve by a factor of 10. Why? Michael said it was partly because components were tired or couldn’t go any faster, partly it was that they were doing things incorrectly, and partly it was that the machines required very strange equations and algorithms to really perform.

Kildall pointed out that the CDC STAR-100 was the supercomputer of 10 years ago and it involved a different architecture than today’s machines. The STAR was basically an array-processing machine that required different kinds of compilers and software techniques. Kildall asked whether that problem had been solved over the past 10 years. Repp replied that when the STAR was first introduced, CDC, with the help of LLNL, had to develop a new logic–a new philosophy, as it were–to produce higher performance. The architecture itself had evolved over the past decade largely due to the federal government’s investment in research and development. Now, Repp said, we were starting to see a supercomputer revolution in the commercial world. The technology was now at the point where companies were jumping on the supercomputer bandwagon.

Cheifet asked what those commercial applications were. Repp said the oil industry was investing in supercomputers in a big way, albeit not totally knowing what they were doing. The main use of supercomputers here was to help search for oil and perform various simulations. Repp said the automobile industry was also getting into supercomputers. The computers allowed manufacturers to crash a car via a simulation as opposed to smashing actual vehicles to conduct safety tests. Supercomputers also helped with vehicle design and productivity, i.e., determining how many cars a company needed to make to turn a profit.

Kildall noted that the famous U.S. Navy computer scientist Captain Grace Hopper used to hand out pieces of wire at lectures to demonstrate how far an electrical signal could travel in a nanosecond (i.e., one-billionth of a second). Hopper claimed the nanosecond was a fundamental limitation on the speed of computing. Was this still true? Michael said he didn’t think there was any effective physical limit to computing speed. He said if you told researchers such a limit existed, they would figure out a way around it. Michael said parallel computing was a simple way of getting around the limit Hopper cited. But it required a lot of rethinking of how programs were designed.
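
Hopper’s wires were cut to roughly the distance a signal can travel in one nanosecond, which is easy to check (the arithmetic is mine; the episode doesn’t do the math on air):

```python
# How far a signal can travel in one nanosecond, at most (the speed of light).
c = 299_792_458          # metres per second, in a vacuum
nanosecond = 1e-9        # seconds

distance_m = c * nanosecond
print(f"{distance_m * 100:.1f} cm, or about {distance_m / 0.0254:.1f} inches")
# Roughly 30 cm (11.8 inches); signals in real wire are slower still, which is
# why physical distance inside a single processor limits how fast it can cycle.
```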

Cheifet asked about the technical problems that led Michael to state that increases in supercomputer speeds had hit a wall during this decade. What were the actual barriers? Michael said if you took the STAR as an example, it only performed at blazingly high speeds when its vector unit–which allowed the machine to operate on an array of data simultaneously–was running. But many of the scientific computations required the machine to be kept in the more traditional scalar mode, where the computer could only operate on one piece of data at a time.
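
The scalar-versus-vector distinction Michael described maps loosely onto today’s element-at-a-time loops versus whole-array operations. Here is a rough analogy in Python, with NumPy standing in for a vector unit (an illustration of the idea, not the STAR-100’s actual hardware):

```python
# Rough analogy for scalar vs. vector processing, with NumPy standing in for a
# vector unit (an illustration of the idea, not the STAR-100's actual hardware).
import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# "Scalar mode": operate on one pair of elements at a time.
scalar_result = np.empty_like(a)
for i in range(len(a)):
    scalar_result[i] = a[i] * b[i]

# "Vector mode": one operation applied across the whole array at once.
vector_result = a * b

assert np.allclose(scalar_result, vector_result)
```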

So if the scalar mode was slow, Michael said, that slowed down the entire computation process. Michael described it as a “traumatic experience” to convert certain key physics equations, such as those related to the flow of fluid or the migration of neutrons in a nuclear reactor, to work on a vector unit. The lab had to invent special compilers. During this process they learned that it was easy to convert 60% of the program, but it was that remaining 40% that dominated in terms of how the user perceived the speed of the supercomputer.
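
Michael’s 60/40 observation is essentially the argument later formalized as Amdahl’s law: the unconverted fraction caps the overall speedup no matter how fast the vector unit runs. A short sketch using the episode’s 60 percent figure (the vector speedup values are arbitrary assumptions):

```python
# Amdahl's-law style estimate using the 60/40 split Michael described.
# (The formula is the standard one; the vector speedup values are assumptions.)
def overall_speedup(vector_fraction, vector_speedup):
    serial_fraction = 1.0 - vector_fraction
    return 1.0 / (serial_fraction + vector_fraction / vector_speedup)

for s in (10, 100, 1_000_000):
    print(f"vector unit {s:>9,}x faster -> program runs "
          f"{overall_speedup(0.6, s):.2f}x faster")
# Even with an infinitely fast vector unit the ceiling is 1 / 0.4 = 2.5x, which
# is why the unconverted 40% dominated how fast the machine felt to users.
```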

Kildall restated this to say the real problem was rethinking how to state and present the algorithms to the computer, as opposed to changing the technology itself. Michael said there was some importance in improving the technology, but he essentially agreed with Kildall’s assessment. Michael said the key was learning how to remap algorithms onto new architectures. And that required challenging the computer manufacturers in a sense–instead of warping our algorithms to fit your computers, why don’t you warp your computers to fit my algorithms? Michael noted this dialog was ongoing, not only with the manufacturers like CDC, but also universities that relied on supercomputers. He expected this would lead to some breakthroughs by the end of the 1980s.

Cheifet asked Repp for his response. Repp said he didn’t disagree with Michael’s analysis. But if you looked at the commercial world, instead of rewriting code, companies were relying on conversions. It was time-consuming and expensive to write entirely new code. And many commercial operations had code that had been written and built up over the years for scalar-based machines, and it would require an enormous effort to rewrite all those programs for vector-based computers.

Japan Adopts a Public-Private Approach to Challenge U.S. Supercomputer Dominance

The final segment was largely outside the studio and consisted of a series of interviews Cheifet conducted in Japan with that country’s leading supercomputer scientists. In an opening narration, Cheifet said that at the University of Tokyo, the computer science department had recently acquired a HITAC S-810, one of the fastest supercomputers in the world, capable of performing 800 million floating-point operations per second (FLOPS). Hitachi, the manufacturer of the S-810, was one of ten companies participating in Japan’s National Supercomputer Project, a collaboration between the country’s public and private sectors. The project’s goal was to eventually produce a supercomputer that was 10 times faster than today’s machines–one that could perform 10 billion FLOPS.

Cheifet interviewed Dr. Kashiwagi Hiroshi, the leader of the supercomputer project. Speaking through an interpreter, Kashiwagi explained that this expected increase in speed would be accomplished through chip design. He noted the architecture itself would have to be 100 times faster than present supercomputers in order to achieve a tenfold increase in speed. To accomplish this, the project was exploring the use of Josephson junctions–a “sandwich” made with two layers of superconducting material around a layer of non-conductive material–as well as high-electron-mobility transistors and gallium arsenide, the latter of which was seen as a potentially superior substitute for silicon in producing computer chips. Kashiwagi added that the new architecture itself would require development of a much higher degree of parallel processing than was currently available.

In narration, Cheifet said that testing of these new materials and designs was already underway at the Japanese government’s Electrotechnical Laboratory, located in a part of Tsukuba, Japan, known as “Science City.” Because of the extreme heat created by this new high-speed circuitry, Cheifet said the chips had to be cooled to extreme temperatures–around -270 degrees centigrade.

Cheifet reiterated that higher computational speeds in supercomputers would also depend on developing new ways of handling data through parallel processing and pipeline architecture. Kashiwagi said the best way to explain these concepts was to think of a conveyor belt assembly line in an automobile factory. The conveyor belt started with one part and picked up additional components along the way until the car was completed. This was an efficient way to make one product using one line. Now, if you increased the number of conveyor belts you could produce more cars. But if you increased the number of belts by 100, you shouldn’t expect to increase the output by 100 times, because it was difficult to properly allocate the work to 100 different assembly lines simultaneously. You needed a good manager to control the process. Similarly, you shouldn’t expect 100 times the output from a supercomputer with 100 times as many processors, although you would still see a substantial increase in processing capacity.
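
One simple way to put numbers on Kashiwagi’s assembly-line point (a toy model of my own, not the project’s analysis) is to charge each additional processor a small coordination cost:

```python
# Toy model of the assembly-line point (my own assumption, not the project's
# math): every extra processor adds a little coordination overhead, so 100
# processors deliver well under 100 times the throughput.
def effective_speedup(processors, coordination_cost=0.01):
    return processors / (1.0 + coordination_cost * (processors - 1))

for p in (1, 10, 100):
    print(f"{p:>3} processors -> about {effective_speedup(p):.1f}x the throughput")
# 1 -> 1.0x, 10 -> ~9.2x, 100 -> ~50.3x: more lines help, but the "manager"
# (scheduling and synchronization) keeps the gain well below linear.
```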

Cheifet next interviewed Dr. Aiso Hideo, a professor of electrical engineering at Keio University, who cautioned that the Josephson junction could only be used in a very low temperature environment. If you moved the components from that extremely low temperature to room temperature, the junctions would break down. He therefore thought it would be very difficult to extend such parts to a large-scale computer.

Kashiwagi added that another problem with developing these new machines was the need for new languages. Most research centers currently relied on FORTRAN. But the success of the Japanese project’s machines depended on whether users were willing to change to a different language. He noted that while there were some new languages in development, very few were currently in widespread use, which posed a potentially big problem.

In a final interview, Dr. Mizuno Yukio, a senior vice president with Japanese computer manufacturer NEC, explained some of the “unusual arrangements” between the government and the private sector to make the supercomputer project work. Specifically, Mizuno said any patents arising from the technology developed by the project belonged to the government. But the participating companies had the right to use those patents and develop their own products accordingly. In that sense, the Japanese project was still promoting free competition.

Indeed, Kashiwagi pointed out that good technology was always born out of a competitive environment. If the United States were the only country developing supercomputers, we wouldn’t expect the kind of breakthroughs that would come from competition. Similarly, if Japan monopolized the research, you would not see the current level of progress continue. Kashiwagi said he therefore hoped that Japan would always be a good competitor with the United States.

Could Japan Deliver on Its Promises?

Returning to the studio, Cheifet asked George Michael if there really was a significant difference in the Japanese and American approach to supercomputers. Michael said there was a difference, but he didn’t think that difference explained the current success of the Japanese companies in the market. He cited Fujitsu as an example. In discussions with LLNL, Michael said Fujitsu managed to write a new vector-based compiler for approximately 5,000 existing FORTRAN programs. Michael said it was the “best looking compiler” they’d ever seen, and it suggested the next generation of Japanese supercomputers would pose a serious threat to the American manufacturers.

Asked for CDC’s response, John Repp said the Japanese technology was unproven. The Japanese machines were still in the research environment and had yet to be delivered to any customers. And even if the Japanese started delivering machines by 1986, he said that American companies would already be out with supercomputers that were 10 times more powerful than current models. Extrapolating further, Repp said we were looking at machines improved by a factor of 100 by 1990. So he felt that while the Japanese were tough competitors, America would respond with its own excellent technology.

Cheifet asked if there were any bigger consequences to this USA-Japan competition, say with respect to national defense. Michael said we weren’t faced with a life-or-death situation. But he did feel that if the U.S. companies didn’t get busy then the Japanese supercomputers would become dominant in terms of speed and price. He noted that aside from Cray and CDC, there really weren’t any other major players in the domestic supercomputer market. And that might not be enough to meet the “structured challenge” posed by the Japanese supercomputer project.

CDC Pioneered Supercomputer Market, But Later Collapsed Under Its Own Weight

So if you’re wondering who ultimately won the battle for supercomputer dominance–the United States or Japan–the answer would appear to be China. According to Statista.com, three Chinese companies–Lenovo, Sugon, and Inspur–sold over 60 percent of the world’s top 500 supercomputers as of June 2020. Hewlett-Packard Enterprise, which now includes Cray, is the largest American vendor on that list with roughly 15 percent of the market. Fujitsu is the largest Japanese manufacturer on the list with a market share of less than 3 percent.

One name you will not see on this list is Control Data Corporation. CDC broke apart in the late 1980s after more than two decades of pioneering work in the supercomputer market. A group of employees at a St. Paul, Minnesota, computer company called Engineering Research Associates left to start CDC in 1957. Donald M. Hall offered this account of CDC’s founding in a 2017 retrospective for the Minneapolis Star Tribune:

[T]he initial employee group began to raise money by calling on friends, relatives and strangers in presentations over lunch, at house parties and in any other possible way. By September 1957, they had received commitments for $1.2 million, double the amount anticipated in the prospectus. The state securities commissioner was dumbfounded. As he said, there was no product, no plant, no customers, no money and almost no employees. When the subscription was completed, he received calls from around the country asking: “Is this true?” It was, he told them, as he had approved the new company’s offering document, only eight pages long, and found no legal objection.

The company’s name was chosen from two columns of words by matching combinations that sounded appropriate. No one was much satisfied with the final selection, but they accepted it with resignation, not knowing, of course, that it would quickly electrify the Twin Cities local stock market and within six years be listed on the New York Stock Exchange. From its initial price of $1 per share in 1957, the stock in 1961 traded at $100 a share.

CDC helped define the supercomputer market in the 1960s and 1970s with machines like the CDC 6600, released in 1964, and the aforementioned CDC STAR-100. During this period, CDC’s lead designer was Seymour Cray, who left to start his own firm, Cray Research, in 1972. Meanwhile, throughout the 1970s CDC expanded into a number of other business lines, including commercial credit, computer peripherals like disk drives, and even ticket sales.

By the 1980s, the whole thing had apparently become too unwieldy. An account provided by Minnesota Computing History suggested that CDC “failed to transition quickly to the new realities brought on by personal computers” during the early 1980s. After CDC posted a loss of $567 million in 1985, it tried to reorganize, which in reality meant slowly selling off parts of the company. In 1992, CDC split itself into two companies: Ceridian, which contained the remaining services components of the business; and Control Data Systems (CDS), which was the remainder of the computer hardware business.

Ceridian is still in business today as the publicly traded Ceridian HCM, Inc., a provider of human resources software. BT Group (formerly British Telecom) acquired what was left of CDS in 1999 and merged it with BT’s Syntegra subsidiary.

Repp Rose from HR Assistant to VP of Sales

John Repp had a 22-year career with CDC. According to a May 2020 profile by Nancy Vogt in the Pine and Lakes Echo Journal, Repp left the Navy after serving as a lieutenant and went looking for a management job. When he couldn’t find one–due to his poor college GPA–Repp ended up as a personnel assistant at CDC. He later managed CDC’s data centers before shifting to sales, which Repp told Vogt was his “true passion.”

Repp left CDC and joined Alan Shugart’s Seagate Technology in 1989 as its vice president of sales. He retired from Seagate in 1995. According to Vogt, Repp is now enjoying his retirement with his wife of 60 years, three daughters, and seven grandchildren.

George A. Michael (1926 - 2008)

George Michael worked at the Lawrence Livermore National Laboratory for 38 years. Initially hired as a theoretical physicist in 1953, Michael would go on to play a major role in the laboratory’s supercomputer program. Even after his formal retirement in 1991, he continued to work part-time documenting the history of the laboratory on his personal website, which remains active today in partially finished form.

Michael detailed LLNL’s history of supercomputer acquisitions from 1953 to 1982. With respect to the CDC STAR-100, he commented:

The STAR 100 was the first vector machine in design, but because of its other complexities, was not the first to enter the marketplace. Vector computing was then rather new, and difficult in adaptation. A major problem on the STAR was the disparity in speeds between vector and scalar computations. The STAR provided the largest memories in use at the Lab: one or two million words. They were used as time sharing servers, but it was very hard to do that and efficiency suffered. They were late in delivery. It was difficult to produce efficient programs, and generally failed to make an impression on the marketplace. One of the designers at the Lab remarked that his team had spent seven years to gain mastery of the STAR, and that was seven years of not improving the physics, and he was never going to do that again.

In addition to his work at the laboratory, Michael also co-founded the International Conference for High Performance Computing, Networking, Storage and Analysis, more commonly known today as “SC,” in 1988. Michael died in June 2008 at the age of 82.

Notes from the Random Access File

  • This episode is available at the Internet Archive. I would place the date towards the end of 1983, based largely on Gary Kildall referring to “Captain Grace Hopper.” President Reagan promoted Hopper to Commodore–or Rear Admiral (lower half) in modern parlance–in December 1983 after Congress passed a joint resolution requesting the appointment.
  • LLNL is not named after Lawrence Livermore. The lab is based in Livermore, California. The “Lawrence” was added in honor of Ernest Lawrence, the co-founder of the facility, which was originally called the University of California Radiation Laboratory at Livermore when it started in 1952.
  • LLNL is still very much active today. It is part of the U.S. Department of Energy and has an annual operating budget of around $2.33 billion.
  • The Josephson junction is named for Brian Josephson, who shared the 1973 Nobel Prize in Physics with two others for his work leading to its creation.
  • The IPSJ Computer Museum has some good background on two of the Japanese interview subjects from this episode, Dr. Aiso Hideo and Dr. Mizuno Yukio. Mizuno passed away in 2003.
  • This episode is the last one I’m covering from the first season of Computer Chronicles. Starting with the next post there will be some slight format changes to the show. I also wanted to mention, since I neglected to do so in the last post, that I believe we saw the last of SRI International’s Herbert Lechner on the “Fifth Generation Computers” episode. As far as I know, Lechner is currently retired and living in northern California. Thanks for your service, Herb!