  Another advance in second-generation computers was the development of programming languages. These languages, including COBOL and FORTRAN, replaced the zeros and ones (the binary code) of first-generation machines with words, numbers, and instructions. Writing specific programs for these machines gave rise to the software industry.

  The computers featured on the original series are first- and second-generation machines—projected three hundred years into the future. Thus on the original Enterprise, specific computers handle specific problems—the ship has a library computer, a science computer, a translator computer, and a computer used for navigation. “Futuristic” means they work at incredible speeds and contain vast amounts of information. Many of them are extremely large. Like their primitive ancestors, when pressed to their limits, the machines tend to overheat. Landru, for example, self-destructs in a thick cloud of smoke.

  Though larger and faster than the computers of the 1960s, the original series computers display little imagination or innovation in their basic design. Many of them print answers in machine language that have to be translated. Although most original series computers understand English (and even translate languages from alien cultures into English), most can’t handle simple graphic displays. They are artificially intelligent in that they understand questions, but they are extremely limited in extrapolating data and reaching conclusions. These computers represent the future as envisioned through a narrow tunnel from the past.

  The problem of computer overheating was solved in the real world by the development of a third generation of computers (1964–1971), which replaced individual transistors with integrated circuits on silicon chips. The first integrated circuits, invented in 1958, combined three transistors on a single chip. This was quickly followed by the packing of tens, hundreds, and later thousands of transistors onto one chip. The smaller the transistor, the less distance electricity had to travel and the faster it worked. As component size shrank and more and more transistors were squeezed onto a single chip, computers became faster and smaller.

  Third-generation computers also featured operating systems, which allowed a machine to run a number of different programs at the same time. Second-generation machines had only been able to work on one problem after another. In third-generation machines, the operating system acted as a central program that monitored and managed all operations of the computer. For the first time, computers were able to do multiple tasks simultaneously, which greatly increased their problem-solving speed.
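
  To make the contrast concrete, here is a minimal sketch in Python (our own toy example, with made-up task names): two jobs that would run back-to-back on a second-generation machine instead overlap under a scheduler.

    import threading
    import time

    def job(name, seconds):
        # Stand-in for a program the operating system schedules.
        time.sleep(seconds)
        print(name, "done")

    # Run both "programs" at once rather than one after the other.
    tasks = [threading.Thread(target=job, args=("navigation", 1)),
             threading.Thread(target=job, args=("library lookup", 1))]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()   # total wall time: about one second, not two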

  As integrated circuits spread, the main direction in computer technology was smaller, faster, cheaper. The fourth generation of computers (1971–now) began with one of the breakthrough inventions of the twentieth century, the microprocessor. In 1971, Intel’s 4004 put all of a computer’s core components (CPU, memory, and input/output controls) on a single chip. This first microprocessor contained 2,300 transistors and performed about 60,000 calculations per second. It was manufactured in quantity and then separately programmed for all types of functions.

  Soon, computers were everywhere—in televisions, automobiles, watches, microwave ovens, coffeemakers, toys, cash registers, airplanes, telephone systems, electric power grids, and stock market tickers.

  Steady improvements in photolithography—the method used to etch circuits onto chips—pushed component sizes even smaller, resulting in faster computer speeds. The smaller the component, the faster a signal traveled between transistors. Large-scale integration (LSI) fit hundreds of transistors onto a chip about half the size of a dime. In the 1980s, VLSI (very large-scale integration) fit hundreds of thousands of components onto a chip. ULSI (ultra-large-scale integration) increased that number into the millions. In 1995, Intel’s Pentium chip packed approximately 3.1 million transistors onto a single square inch. Modern microprocessors contain as many as twenty million transistors and perform hundreds of millions of calculations per second. A computer with the power of ENIAC, with its 18,000 vacuum tubes, could today fit onto a chip smaller than the period that finishes this sentence.

  Industry experts estimate that there are more than 15 billion microprocessors in use today. Without them, telephones would still have rotary dials, TVs would have knobs instead of remotes, ATMs wouldn’t exist, and thousands of other facets of modern life wouldn’t work. Nor would an unmanned probe have roamed Mars, sending us pictures of another planet’s landscape.

  Equally important, microprocessors enabled computer companies to manufacture computers for home use. In 1981, IBM introduced its Personal Computer (PC) for the home, office, and schools. Today, along with at least half a billion PCs, we have laptops and handheld computers: Palm Pilots, Newtons, and a multitude of tiny computers that netsurf for us.

  On the original Star Trek, the communicators looked like today’s handheld computers. The Etch A Sketch-sized pads used by Captain Kirk to sign instructions, letters, and invoices (while he ogled Yeoman Rand in her miniskirt and cracked jokes about “the pleasures of shore leave”) were larger and clunkier than today’s powerful handheld computers. But the use of those pads by Kirk and crew exhibited an astonishing foresight.

  Kirk and his crew also used what look like today’s desktop PCs to access databases, communicate with each other, and analyze sensor information. But the most amazing example of the original series’ foresight is that crewmembers routinely gave each other data on disks that look exactly like today’s floppy disks.

  While much of the original series reflected the machines and cultural paranoia of the 1960s, the show also provided a remarkable glimpse of technology in the 1980s. Looking twenty years ahead is a far cry from looking 300 years into the future, of course, but it’s probably the best that can be expected.

  Just as Kirk’s computers reflected the thinking of the 1960s, the TNG, VGR, and DS9 computers reflect today’s thinking. They incorporate much of today’s best computer science research: redundant architectures, neural nets, top-down as well as bottom-up artificial intelligence, nanotechnology, and virtual reality.

  This creates some problems. For example, the Trek computers are outlandish in design and concept. They supposedly run faster than the speed of light, which defies the laws of physics. Though starships travel at warp speed, they are actually warping space, using the four-dimensional curvature of spacetime to achieve faster-than-light (FTL) speeds. Nothing in this theory (which is discussed at great length in The Physics of Star Trek by Lawrence M. Krauss [Basic Books, 1995] and is speculative at best) justifies the concept of electrons in circuits moving at FTL speeds.

  The computers have a redundant architecture to handle system failures, yet constantly fail. They enable holographic doctors to hit humans and to fall in love. The Deep Space Nine computer is so argumentative and obstinate that Chief O’Brien must put it into manual override to save the space station from blowing up. Yet the same computer requires constant supervision, repair, and instructions from human engineers; in other words, it’s not particularly intelligent by today’s standards.

  Then there’s Data. He runs on some sort of advanced neural network (his positronic brain), but he also shows distinct signs of traditional if-then artificial intelligence—witness his love of Sherlock Holmes and his Spocklike deductive powers. And while he’s so advanced that no human seems capable of creating another Datalike creature, Data can’t interface with the ship’s main computer unless somebody takes off his “skins” (the word for the cases that house today’s computers, but in Data’s case the hair-and-skin flap on the back of his head), does some tweaking with a screwdriver or wrench, inserts what appears to be a serial cable, and watches dozens of flashing lights in Data’s skull. (See, for example, “Cause and Effect,” TNG.) Sometimes a crewmember even has to remove Data’s entire head to create the interface. (“Disaster,” TNG)

  The flashing lights harken back to the days when we gazed at blinking LEDs, jotted down which ones were off and on, and then calculated the corresponding hexadecimal values; these values meant something to us, such as ERROR 1320: MEMORY CORRUPTION. It’s silly to think that Data’s head hundreds of years from now will have hexadecimal LEDs to indicate SUCCESS and ERROR. The method is outdated today.
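
  For readers who never had the pleasure, the ritual looked something like this (a toy reconstruction in Python; the LED pattern is invented to match the error code above):

    # Read the LEDs left to right: 1 = lit, 0 = dark.
    leds = "0001001100100000"   # a hypothetical 16-bit panel
    code = int(leds, 2)         # interpret the pattern as a binary number
    print(hex(code))            # prints 0x1320 -> "ERROR 1320"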

  The Star Trek future comes to us courtesy of computer technology. However, we believe that computers will go far beyond the stuff of Star Trek. Tomorrow’s computers will be invisible, highly intelligent, and almost lifelike. Nanotechnology and cybernetic implants will be commonplace. We’ll talk to computers that are in our winter coats and in our summer sandals. Our computers will anticipate what we want before we even ask them. We’ll get ticked off when our computers forget to download our digital newspaper subscriptions, make our morning toast, or automatically design clothes to fit our exact body dimensions and fashion tastes. We’ll forget that computers are computers.

  Getting to this point will require breakthroughs as amazing as the microprocessor. Fortunately, computer scientists are already on the job.

  Since the 1950s, something called Moore’s Law has loosely defined the growth in our computing power. Originally stated in 1965 by Gordon Moore, a co-founder of Intel, it maintains that the number of components that can be put on a computer chip doubles every eighteen months while the price remains the same. Essentially, this means that computer power doubles every eighteen months. (Interestingly, in a 1997 interview with USA Today, Moore said that he originally stated the number of components would double every year, and that in 1971 he revised that to every two years. Eighteen months was never mentioned.)
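
  As a back-of-the-envelope check, we can run this chapter’s own transistor counts through the doubling rule. The short Python calculation below uses the 4004’s 2,300 transistors (1971) and the Pentium’s roughly 3.1 million (1995), both quoted earlier:

    import math

    start, end = 2300, 3_100_000      # transistors: Intel 4004 vs. Pentium
    years = 1995 - 1971
    doublings = math.log2(end / start)
    print(f"{doublings:.1f} doublings in {years} years")
    print(f"about one doubling every {12 * years / doublings:.0f} months")

  The answer, roughly one doubling every twenty-eight months, sides with Moore’s revised two years rather than the popular eighteen.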

  As transistors have become smaller, Moore’s Law has held with remarkable consistency. But there’s a limit to how small we can make tomorrow’s transistors. The limitation has to do with the wavelength of light that’s used to etch circuits on silicon chips. Light beams imprint etching patterns into the silicon, and then gases carve the circuitry according to the patterns. So the circuit can’t be narrower than the wavelength of light.

  The light from a mercury lamp, for example, has wavelengths as short as one-half or one-third of a micron (one millionth of a meter). Light from a pulsed excimer laser may someday etch circuits at one-fifth of a micron.

  But, and it’s a big but, we can’t reduce silicon circuits below one-tenth of a micron. At that size, quantum mechanics kicks in and makes the circuitry undependable. New techniques are essential.
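
  The sizes above are easier to compare in nanometers. A trivial unit-conversion sketch (numbers only; the physics is far messier than this):

    # 1 micron = 1,000 nanometers
    mercury_nm = 1000 / 3    # shortest mercury-lamp wavelength, ~333 nm
    excimer_nm = 1000 / 5    # pulsed excimer laser, 200 nm
    floor_nm   = 1000 / 10   # ~100 nm, where quantum effects take over
    for name, nm in [("mercury", mercury_nm), ("excimer", excimer_nm)]:
        print(f"{name}: {nm:.0f} nm, {nm / floor_nm:.1f}x above the floor")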

  It’s long been postulated that gallium arsenide will replace silicon as the substrate for chips. (A substrate is a “backbone” supporting the circuits.) This new technology will help a little, but it won’t get us to the world of Star Trek: optical isolinear circuitry that breaks the laws of the universe! How far-fetched, then, is a computer that operates on nothing more than beams of light?

  Eight years ago, Bell Labs created an optical transistor, called the symmetric self-electro-optic effect device (S-SEED), a name that could be straight out of Star Trek. Optics are becoming fundamental to computers today. Hence the notions of Star Trek’s optical data network and optical isolinear chips—central pieces of the architecture of the Enterprise computer that we’ll describe in the next chapter—are extensions of what exists in our own world.

  Basically, an optical computer has a filter that either blocks light or lets it through. When the filter lets light through, we have a binary one. Otherwise we have a binary zero. Holographic storage builds on a related trick. We split a laser beam, putting information on one of the two “strands.” Then we cross the strands inside a holographic crystal, forming light patterns at the juncture. If we cross the strands at various angles and in different sections of the crystal structure, we can store enormous amounts of information: thousands of pages of data. To read the data, we shine a laser through the holographic structure. This “reading” laser produces another light beam that displays a holographic image of the stored information.
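
  The on/off filter idea is simply binary encoding by other means. A toy Python model (entirely our own; real optical devices are far subtler) treats each “filter” as a boolean:

    def encode(text):
        # One open/shut filter per bit: True passes light (1), False blocks it (0).
        return [bit == "1" for ch in text for bit in format(ord(ch), "08b")]

    def read_back(filters):
        bits = "".join("1" if f else "0" for f in filters)
        return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

    assert read_back(encode("NCC-1701")) == "NCC-1701"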

  It’s thought that holographic structures will someday store hundreds of billions of bytes. This method alone makes the vast storage capacity of the Enterprise seem possible. With holographic storage, we won’t need the hard drives of mega-monster computers. We’ll need only a tiny holographic crystal structure. Lambertus Hesselink, a computer scientist at Stanford University and chairman of the holographic research firm Optitek, believes that one holographic structure the size of a sugar cube may be able to hold a terabyte of data. With continued refinement of the holographic process, that same sugar cube may in several decades hold as much information as every computer in the entire world does today.1

  Current thinking is that the merging of optical computers with holographic methods will yield the next major computer revolution.

  Amazing! And straight out of Star Trek.

  2

  A Twenty-Fourth-Century Mainframe

  The computer revolution today is a little more than a half-century old. The microprocessor has been in use for only a few decades. Yet in these few decades the computer has changed radically, from a fragile, room-sized agglomeration of vacuum tubes to a tiny chip embedded in automobile dashboards, wristwatches, and even greeting cards. It’s also become embedded in our lives. What computers are and how we relate to them has changed just as radically as their physical form.

  This has happened in just a generation; what will computers be like in 300 years?

  Three hundred years is a long time from now. If we really want to visualize the future, we need to shake ourselves loose of the assumptions of today.

  With that thought in mind, let’s examine the most important component of any Star Trek spaceship—and therefore the most important piece of technology in the entire Star Trek universe: the ship’s main computer system. The computer is responsible for the operation of all other systems on the ship, from life support to navigation to entertainment. We have as our guide to this extraordinary machine the Star Trek: The Next Generation—Technical Manual,1 whose authors compare the Enterprise computer to the nervous system of a human being. Let’s see if it’s a vision of the future.

  When analyzing a computer design, a good first step is to understand its overall structure. For example, does one computer control everything, feeding tasks to workstations? Or do many computers operate in parallel? How are all the components interconnected, and what kind of networking is used? These are basic questions. Once we know the answers, the next step is to identify the underlying modules and their interconnections. In other words, we break the general design into pieces, and then we take a look at the details.a

  The technical manual devotes only five pages to the Enterprise computer. Based on its vague and sketchy description, we’ve inferred the general design shown in Figure 2.1.

  There are five elements here: the library computer access and retrieval software (LCARS, an acronym that you can occasionally see flash on the screen in some episodes, as if it were proprietary software); the main processing core; the micron junction links; the subspace boundary layer; and the optical data network (ODN). We’ll briefly skate through the entire system and then examine each element in detail.

  According to the technical manual, the LCARS “provides both keyboard and verbal interface ability, incorporating highly sophisticated artificial intelligence routines and graphic display organization for maximum crew ease-of-use.” This is a fancy way of saying that crewmembers type commands and press keys, issue voice commands (the verbal interface), and look at a computer screen. We have the equivalent of an LCARS today. Writing this chapter involved typing commands and pressing function keys. Voice recognition software can be bought over the counter at most computer stores. For a couple of months’ wages you can buy a computer with 256 megabytes of random access memory (RAM) and dual Pentium processors, which, with appropriate software, will render three-dimensional moving images as quickly as the LCARS screen on Star Trek. In fact, a good modern screen has crisper colors and better image resolution.

  FIGURE 2.1 Overall Ship Computer System

  As we type on our keyboard and gaze at the monitor in order, say, to write this book, the PC’s two processors work together to handle our commands.b In the same way, all the processors in the main processing core of the Enterprise computer handle the commands that the crew supplies.

  To back up this chapter (in case NT blows), we save it using another filename. We may back up the entire system on zip disks, CDs, or other media. The Enterprise computer, with its three main processing cores, is more like a giant IBM mainframe from the 1970s, with two of the cores providing total system backup—in case one core blows, the Enterprise crew has another ready to assume all system functions. The LCARS consoles are the equivalent of the 1970s graphic display terminals that connected to the old mainframes.
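
  The backup scheme amounts to simple failover. A minimal sketch (our own invention, not the technical manual’s design):

    cores = ["core-1", "core-2", "core-3"]   # three interchangeable cores

    def active_core(failed):
        # The first core that hasn't blown assumes all system functions.
        for core in cores:
            if core not in failed:
                return core
        raise RuntimeError("total system failure")

    assert active_core(failed=set()) == "core-1"
    assert active_core(failed={"core-1"}) == "core-2"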

  The micron junction links shift commands from the main processing cores through a subspace boundary layer into the ODN. Again, fancy terms for things we do today (though we don’t do them at faster-than-light [FTL] speed). Let’s suppose that this chapter is ready for our editor. Our transmission choices are: print the chapter and send it to the editor in an envelope, or e-mail the chapter to him. If we choose e-mail, the Internet does the trick. In our case, we dial a phone number and establish a modem connection to our Internet service provider. Over ordinary phone lines (or higher-speed lines, if someone has cash to burn), we transmit the chapter. The Internet service provider is our micron junction link. The telephone wires are our subspace boundary layer. Our ODN is the Internet. Somewhere in an indescribably messy editorial office, our editor logs onto the Internet and retrieves Chapter 2. Picture him sitting at his PC in our drawing of the Enterprise computer. He’s over there on the right, looking at one of the terminals or control panels.
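
  For the record, “e-mail the chapter” is a few lines of code today. A hedged sketch using Python’s standard library (every address, hostname, and filename below is made up):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "authors@example.com"             # hypothetical
    msg["To"] = "editor@example.com"                # hypothetical
    msg["Subject"] = "Chapter 2"
    with open("chapter2.txt") as f:                 # hypothetical filename
        msg.set_content(f.read())

    with smtplib.SMTP("smtp.example.com") as smtp:  # hypothetical ISP server
        smtp.send_message(msg)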

  The most striking difference between the general design of our PC-linked Internet and the ODN setup of the Enterprise computer is that our technology is more advanced. Our version of the ODN—today’s Internet—connects independent computers around the world. There’s no mainframe controlling the Internet. On Star Trek, the ODN connects LCARS terminals to a giant mainframe that controls all system functions. This is a very old-fashioned networking design.