The Most Important Software Technologies and Products in History

The Most Important Software Innovations

David A. Wheeler

August 1, 2001; Revised January 12, 2003

Here is a list of the most important software innovations:

1837: Software (Babbage's Analytical Engine)

Charles Babbage was an eminent scientist; he was elected Lucasian Professor of Mathematics at Cambridge in 1828 (the chair held by Isaac Newton and Stephen Hawking). In 1837 he publicly described an analytical engine, a mechanical device that would take instructions from a program instead of being designed to do only one task. Babbage had apparently been thinking about the problem for some time before this; as with many innovations, pinning down a single date is difficult. This appears to be the first time the concept of software (computing instructions for a mechanical device) was seriously contemplated. Babbage even noted that the instructions could be reused (a key concept in how today's software works). In 1843 Ada Augusta, Countess of Lovelace, released a translation of "Sketch of the Analytical Engine" with extensive commentary of her own. That commentary contains a clear description of computer architecture and programming that is quite recognizable today, and Ada is often credited as being the "first computer programmer". Unfortunately, due to many factors, the Analytical Engine was never built in Babbage's lifetime, and it would be many years before general-purpose computers were built.

1854: Boolean Algebra

George Boole published "An Investigation of the Laws of Thought". His system for symbolic and logical reasoning became the basis of computing.

1936-37: Turing Machines

Alan Turing wrote his paper "On computable numbers, with an application to the Entscheidungsproblem", in which he first described Turing machines. This mathematical construct showed the strengths - and fundamental limitations - of computer software. For example, it showed that there were some kinds of problems that could not be solved in finite time.

1945: Stored program

In the "First Draft of a Report on the EDVAC", the concept of storing a program in the memory as data was described by John von Neumann. This is a fundamental concept for software manipulation that all software development is based on. Eckert, Mauchly, and Konrad Zuse have all claimed prior invention, but this is uncertain and this draft document is the one that spurred its use. Alan Turing published his own independent conception, but went further in showing that computers could be used for the logical manipulation of symbols of any kind. The approach was first implemented (in a race) by the prototype Mark I computer at Manchester in 1948.

1945: Hypertext

Hypertext was first described in Vannevar Bush's ``As We May Think''. The word ``hypertext'' itself was later coined by Ted Nelson in his 1965 article ``A File Structure for the Complex, the Changing, and the Indeterminate'' (Proceedings of the 20th National Conference, Association for Computing Machinery, New York).

1951: Subroutines

Maurice Wilkes, Stanley Gill, and David Wheeler (not me) developed the concept of subroutines in programs to create re-usable modules and began formalizing the concept of software development.

1952: Assemblers

Alick E. Glennie wrote ``Autocode'', which translated symbolic statements into machine language for the Manchester Mark I computer. Autocoding later came to be a generic term for assembly language programming.

1952: Compilers

Grace Murray Hopper described techniques to select (compile) pre-written code segments in correspondence with codes written in a high level language, i.e., a compiler. Her 1952 paper is titled "The Education of a Computer" (Proc. ACM Conference), and is reprinted in the Annals of the History of Computing (Vol. 9, No. 3-4, pp. 271-281), based on her 1951-1952 effort to develop A-0. She was later instrumental in developing COBOL. A predecessor of the compiler concept was developed by Betty Holberton in 1951, who created a "sort-merge generator".

1954: Practically Compiling Human-like Notation (FORTRAN)

John Backus proposed the development of a programming language that would allow users to express their programs directly in commonly understood mathematical notation. The result was Fortran; the first Fortran implementation was completed in 1957. There were a few compilers before this point: languages such as A-0, A-1, and A-2 inserted subroutines, the Whirlwind I included a special-purpose program for solving equations (but it couldn't be used for general-purpose programming), and an ``interpreter'' for the IBM 701 named Speedcoding had been developed. However, Fortran used notation far more similar to human notation, and its developers created many techniques so that, for the first time, a compiler could create highly optimized code [Ceruzzi 1998, 85].

1955: Stack Principle

Friedrich L. Bauer and Klaus Samelson developed the ``stack principle'' (``the operation postponed last is carried out first'') at the Technische Universität München. This served as the basis for compiler construction, and was naturally extended to all bracketed operation structures and all bracketed data structures.
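
To illustrate the principle, here is a small sketch in Python (my illustration, not Bauer and Samelson's formulation): evaluating a fully parenthesized expression postpones each operation on a stack and carries out the most recently postponed one first.

    # Sketch of the stack principle: the operation postponed last is
    # carried out first. Handles fully parenthesized single-digit
    # arithmetic such as "((1+2)*(3+4))"; illustrative only.
    OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}

    def evaluate(expression):
        operands, operators = [], []
        for ch in expression:
            if ch.isdigit():
                operands.append(int(ch))
            elif ch in OPS:
                operators.append(ch)    # postpone the operation
            elif ch == ')':             # innermost brackets close first:
                op = operators.pop()    # carry out the most recently
                b = operands.pop()      # postponed operation
                a = operands.pop()
                operands.append(OPS[op](a, b))
        return operands.pop()

    print(evaluate("((1+2)*(3+4))"))    # prints 21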

1957: Time-sharing

In Fall 1957 John McCarthy (MIT, US) began proposing time-sharing operating systems, where multiple users could share a single computer (and each believes they control an entire computer). On January 1, 1959, he wrote a memo to Professor Philip Morse proposing that this be done for an upcoming machine. This idea caused immense excitement in the computing field. It's worth noting that Christopher Strachey (National Research Development Corporation, UK) published a paper on ``time-sharing'' in 1959, but his notion of the term was having programs share a computer, not that users would share a computer (programs had already been sharing computers, e.g., in the SAGE project) [Naughton 2000, 73]. By November 1961 Fernando Corbató (also at MIT) had a four-terminal system working on an IBM 709 mainframe. Soon afterwards CTSS (Compatible Time Sharing System) was running, the first effective time-sharing system. Even in today's systems that aren't shared by multiple users, these mechanisms are a critical support for computer security.

1958-1960: List Processing (LISP)

McCarthy (at MIT) developed the LISP programming language for supporting list processing; it continues to be critical for Artificial Intelligence and related work, and is still widely used. List processing was not completely new at this point; at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, Newell, Shaw and Simon described IPL 2, a list processing language for Rand Corporation's JOHNNIAC computer. However, McCarthy realized that a program could itself be represented as a list, refining the approach into a flexible system fundamentally based on list processing. In 1956-1958 he began thinking about what would be needed for list processing, with significant work beginning in 1958 with hand-simulated compilations. LISP demonstrated other important innovations used in many later languages, including polymorphism and unlimited-extent data structures.

1960: Survivable Packet-Switching Networks

In 1960 Paul Baran (RAND) proposed a message switching system that could forward messages over multiple paths. Unlike previous approaches (which required large storage capacities at each node), his approach used higher transmission speeds, so each node could be small, simple, and cheap. Baran’s approach routed messages to their destination instead of broadcasting them to all, and these routing decisions were made locally. In 1961 Leonard Kleinrock (MIT) published ``Information Flow in Large Communication Nets,’’ the first larger work examining and defining packet-switching theory. In 1964 Paul Baran wrote a series of papers titled ``On Distributed Communications Networks’’ that expanded on this idea. This series described how to implement a distributed packet-switching network with no single outage point (so it could be survivable). In 1966 Donald Davies (NPL, UK) publicly presented his ideas on ``packet switching’’ and learned that Baran had already invented the idea. Davies started the ``Mark I’’ project in 1967 to implement it, and ARPANET planning (the ancestor of the Internet) also began in 1967. These concepts are the fundamental basis of the Internet, defining how the Internet uses packet-switching.

1964: Word Processing

The first ``word processor'' was IBM's MT/ST (Magnetic Tape/Selectric Typewriter), which combined the features of the Selectric typewriter with a magnetic tape drive. For the first time, typed material could be edited without having to retype the whole text or chop up a coded copy. Later, in 1972, this morphed into a word processing system we would recognize today.

1964: The Mouse

The mouse was invented in 1964 by Douglas C. Engelbart at SRI, using funding from the U.S. government's ARPA program [Naughton 2000, 81]. Although this could be viewed as a hardware innovation, it isn't much of one (it's little more than an upside-down trackball). The true innovations were the user interface approaches that use the mouse, and those are entirely software innovations. It was patented, though this never resulted in much money for the inventor.

1965: Semaphores

E. W. Dijkstra defined semaphores for coordinating multiple processes. The term derives from railroad signals, which in a similar way coordinate trains on railroad tracks.
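
As a quick sketch of the idea (using Python's standard threading module; the counts and names here are arbitrary), a semaphore initialized to 2 lets at most two of five threads hold a resource at once:

    import threading

    semaphore = threading.Semaphore(2)   # at most 2 holders at a time

    def worker(name):
        with semaphore:                  # Dijkstra's P: wait for a free slot
            print(name, "holds the resource")
        # leaving the block performs V: signal a waiting thread

    threads = [threading.Thread(target=worker, args=("thread-%d" % i,))
               for i in range(5)]
    for t in threads: t.start()
    for t in threads: t.join()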

1965: Hierarchical directories, program names as commands (Multics)

The Multics project spurred several innovations. Multics was the first operating system to sport hierarchical directories, as described in a 1965 paper by Daley and Neumann. Multics was also the first operating system where, in an innovation developed by Louis Pouzin, what you type at command level is the name of a program to run. This led to related innovations like working directories and the shell. In earlier systems, like CTSS, adding a command required recompiling; to run your own program you had to execute a system command that then loaded and ran it. Louis Pouzin implemented a very limited form of this idea on CTSS as "RUNCOM", but the full approach was implemented on Multics with his help. Although fewer ordinary users use a command line interface today, these are still important for many programmers. The Multicians.org site has more information on Multics features.

1965: Unification

J.A. Robinson developed the concept of "unification". This concept - and the algorithms that implement it - became the basis of logic programming.
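
To make the concept concrete, here is a compact sketch of unification in Python (my illustration; it omits the occurs check a full implementation needs). Variables are strings beginning with '?', and compound terms are tuples:

    def walk(term, subst):
        # follow variable bindings to their current value
        while isinstance(term, str) and term.startswith('?') and term in subst:
            term = subst[term]
        return term

    def unify(a, b, subst=None):
        subst = dict(subst or {})
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            return subst
        if isinstance(a, str) and a.startswith('?'):
            subst[a] = b                      # bind variable a to b
            return subst
        if isinstance(b, str) and b.startswith('?'):
            subst[b] = a
            return subst
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):            # unify argument by argument
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None                           # clash: not unifiable

    # f(?X, b) unifies with f(a, ?Y) under {?X: a, ?Y: b}
    print(unify(('f', '?X', 'b'), ('f', 'a', '?Y')))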

1966: Structured Programming

Böhm and Jacopini defined the fundamentals of ``structured programming'', which showed that programs could be created using a limited set of instructions (looping, conditional, and simple sequence) - and thus that the ``goto'' statement was actually not essential. Edsger Dijkstra's 1968 letter "GO TO Statement Considered Harmful" popularized the use of this approach, claiming that the ``goto'' statement produced code that was difficult to maintain.

1966: Spelling Checker

Les Earnest of Stanford developed the first spelling checker circa 1966. He improved it around 1971, and this version quickly spread (via the ARPAnet) throughout the world. Earnest noted that 1970s participants on the ARPAnet "found that both programs and data migrated around the net rather quickly, to the benefit of all" - an early note of the amplifying effect of large networks on OSS/FS development.

1967: Object Oriented Programming

Object-oriented (OO) programming was introduced to the world by the Norwegian Computing Centre's Ole-Johan Dahl and Kristen Nygaard when they released Simula 67. Simula 67 introduced constructs that much later became common in computer programming: objects, classes, virtual procedures, and inheritance. OO programming was later popularized by Smalltalk-80 and, still later, C++. This approach proved especially invaluable when graphical user interfaces became widely used.

1967: Separating Text Content from Format

The first formatted texts manipulated by computer had embedded codes that described how to format the document ("font size X", "center this text"). In contrast, in the late 1960s, people began to use codes that described the meaning of the text (such as "new paragraph" or "title"), with separate information on how to format it. This had many advantages, such as allowing specialists to devise formats and easing searching, and influenced later technologies such as SGML, HTML, and XML. Although it is difficult to identify a specific time for this idea, many credit the use of this approach (sometimes called "generic coding") to a presentation made by William Tunnicliffe, chairman of the Graphic Communications Association (GCA) Composition Committee, during a meeting at the Canadian Government Printing Office in September 1967 (his topic was on the separation of the information content of documents from their format).

1968: The Graphical User Interface (GUI)

Douglas C. Engelbart gave a 90-minute, staged public demonstration of a networked computer system at the Augmentation Research Center, which was the first public appearance of the mouse, windows, hypermedia with object linking and addressing, and video teleconferencing. These are the innovations that are fundamental to the graphical user interface (``GUI’’).

1968: Regular Expressions

Ken Thompson published the paper ``Regular Expression Search Algorithm'' (Communications of the ACM, June 1968), the first known computational use of regular expressions. Regular expressions had been studied earlier in mathematics, based on work by Stephen Kleene. Thompson later embedded this capability in the text editor ed to implement a simple way to define text search patterns. ed's command 'g/regular expression/p' was so useful that a separate utility, grep, was created to print every line in a file that matched the pattern defined by the regular expression. Later, many libraries included this capability, and the widely-used Perl language makes regular expressions a fundamental underpinning of the language. See Jeffrey E.F. Friedl's Mastering Regular Expressions, 1998, pp. 60-62, for more about this history.
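
Here is a sketch of grep's essential behavior using Python's re library (a handful of lines; the script name is my invention):

    import re
    import sys

    # Usage: python mygrep.py PATTERN FILE
    # Prints every line of FILE matching the regular expression PATTERN.
    pattern = re.compile(sys.argv[1])
    with open(sys.argv[2]) as f:
        for line in f:
            if pattern.search(line):
                print(line, end='')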

1969-1970: Standard Generalized Markup Language (SGML)

In 1969 Charles F. Goldfarb, Ed Mosher, and Ray Lorie developed what they called a "Text Description Language" to enable integrating a text editing application, an information retrieval system, and a page composition program. The documents had to be selectable by query from a repository, revised with the text editor, and returned to the database or rendered by the composition program. This was an extremely advanced set of capabilities for its time, and one that simple markup approaches did not support well. They solved this problem by creating a general approach to identifying different types of text, supporting formally-defined document types, and creating an explicit nested element structure. Their approach was first mentioned in a 1970 paper, renamed after their initials (GML) in 1971, and use began in 1973. GML became the basis for the Standard Generalized Markup Language (SGML), ISO Standard 8879. HTML, the basis of the World Wide Web, is an application of SGML, and the widely-used XML (another critically important technology) is a simplified form of SGML. For more information see The Roots of SGML -- A Personal Recollection and A Brief History of the Development of SGML. These standard markup languages have been critical for supporting standard interchange of data that support a wide variety of display devices and querying from a vast store of documents.

1970: Relational Model and Algebra

E.F. Codd introduced the relational model and relational algebra in a famous article in the Communications of the ACM, June 1970. This is the theoretical basis for relational database systems and their query language, SQL. The first commercial relational database, the Multics Relational Data Store (MRDS), was released in June 1976.
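
To sketch the idea (in Python, with made-up sample data), the relational model treats data as tables of rows, and relational algebra builds queries from a few operators such as selection, projection, and join:

    # Relations as lists of dicts; illustrative only.
    employees = [{'id': 1, 'name': 'Ada',   'dept': 10},
                 {'id': 2, 'name': 'Grace', 'dept': 20}]
    depts     = [{'dept': 10, 'dname': 'Research'},
                 {'dept': 20, 'dname': 'Compilers'}]

    def select(relation, predicate):              # sigma
        return [row for row in relation if predicate(row)]

    def project(relation, attrs):                 # pi
        return [{a: row[a] for a in attrs} for row in relation]

    def join(r, s):                               # natural join on shared attributes
        shared = set(r[0]) & set(s[0])
        return [{**x, **y} for x in r for y in s
                if all(x[a] == y[a] for a in shared)]

    # Roughly: SELECT name, dname FROM employees NATURAL JOIN depts WHERE dept = 20
    print(project(select(join(employees, depts), lambda row: row['dept'] == 20),
                  ['name', 'dname']))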

1971: Distributed Network Email

Richard Watson at the Stanford Research Institute suggested that a system be developed for transferring mail from one computer to another via the ARPANET network. Ray Tomlinson of Bolt, Beranek and Newman (BBN) implemented the first email program to send messages across a distributed network, derived from an intra-machine email program and a file-transfer program. This quickly became the ARPANET's most popular and influential service. Note that Tomlinson defined the "@" convention for email addresses. It isn't clear when single-computer email was developed; it's known that MIT's CTSS computer had a message feature in 1965 [Abbate 1999, page 109]. However, email that can span computers is far more powerful than email limited to a single computer. In 1973 the basic Internet protocol for sending email was formed (though RFC 630 wasn't released until April 1974), and in 1975 the Internet mail headers were first officially defined [Naughton 2000, 149]. In 1975, John Vittal released the email program MSG, the first email program with an ``answer'' (reply) command [Naughton 2000, 149].

1972: Modularity Criteria

David Parnas published a definition and justification of modularity via information hiding.

1972: Screen-Oriented Word Processing

Lexitron and Linolex developed the first word processing systems that included video display screens and tape cassettes for storage; with the screen, text could be entered and corrected without having to produce a hard copy. Printing could be delayed until the writer was satisfied with the material. It can be argued that this was the first ``word processor'' of the kind we use today (see a brief history of word processing for more information). Other word processors followed. In 1979, Seymour Rubinstein and Rob Barnaby released ``WordStar'', the first commercially successful word processing program for microcomputers, but this was simply a re-implementation of a previous concept. In March 1980, SSI*WP (the predecessor of WordPerfect) was released.

1972: Pipes

Pipes are ``pipelines’’ of commands, allowing programs to be easily ``hooked together’’. Pipes were originally developed for Unix and widely implemented on other operating systems (including all Unix-like systems and MS-DOS/Windows). M. D. McIlroy insisted on their original implementation in Unix; after a few months their syntax was changed to today’s syntax. Redirection of information pre-existed this point (Dartmouth’s system supported redirection, as did Multics), but it was only in 1972 that they were implemented in a way that didn’t require programs to specially support them and permitted programs to be rapidly connected together.
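
As a sketch of hooking programs together (using Python's subprocess module and assuming a Unix-like system with ls and sort), this is equivalent to the shell pipeline "ls | sort -r":

    import subprocess

    ls = subprocess.Popen(['ls'], stdout=subprocess.PIPE)
    sort = subprocess.Popen(['sort', '-r'], stdin=ls.stdout,
                            stdout=subprocess.PIPE, text=True)
    ls.stdout.close()              # let ls receive SIGPIPE if sort exits early
    print(sort.communicate()[0])   # the reverse-sorted directory listing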

1972: B-Tree

Rudolf Bayer and Edward M. McCreight published the seminal paper on B-trees, a critical data structure widely used for handling large datasets.
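
Here is a sketch of B-tree search in Python (my illustration; insertion and node splitting are omitted). Because each node holds many sorted keys, the tree stays shallow and a lookup touches only a few nodes - the property that makes B-trees suit large datasets on disk:

    from bisect import bisect_left

    class BTreeNode:
        def __init__(self, keys, children=None):
            self.keys = keys                 # sorted keys in this node
            self.children = children or []   # empty for leaf nodes

    def search(node, key):
        i = bisect_left(node.keys, key)
        if i < len(node.keys) and node.keys[i] == key:
            return True                       # found in this node
        if not node.children:
            return False                      # reached a leaf: key is absent
        return search(node.children[i], key)  # descend into the right gap

    root = BTreeNode([20, 40], [BTreeNode([5, 10]),
                                BTreeNode([25, 30]),
                                BTreeNode([50, 60])])
    print(search(root, 30), search(root, 35))   # True False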

1972, 1976: Portable operating systems (OS6, Unix)

By this date high-level languages had been used for many years to reduce development time and increase application portability between different computers. But many believed entire operating systems could not be practically ported in this way, since operating systems needed to control many low-level components. This was a problem, since it was often difficult to port applications to different operating systems. Significant portions of operating systems had been developed using high-level languages: Burroughs wrote much of the B5000's operating system in a dialect of Algol, and later much of Multics was written in PL/I, but both were tied to specific hardware. In 1972 J.E. Stoy and C. Strachey discussed OS6, an experimental operating system for a small computer that was to be portable. In 1973 the fledgling Unix operating system was rewritten in a high-level language that had just been developed, C, though at first the primary goal was not general machine portability of the entire operating system. In 1976-1977 the Unix system was modified further to be portable, and the Unix system did not limit itself to being small - it intentionally included significant capabilities such as a hierarchical filesystem and multiple simultaneous users. This allowed computer hardware to advance more rapidly, since it was no longer necessary to rewrite an operating system when a new hardware idea or approach was developed.

1972: Internetworking using Datagrams

The Cyclades project began in 1972 as an experimental network project funded by the French government. It demonstrated that computer networks could be interconnected (``internetworked'') by the simple mechanism of transferring data packets (datagrams), instead of trying to build session connections or trying to create highly reliable ``intelligent'' networks or ``intelligent'' systems which connected the networks. Removing the requirement for ``intelligence'' when trying to hook networks together had great benefits: it made systems less dependent on a specific medium or technology, and it also made systems less dependent on central authorities to administer them.

At the time, networks were built and refined for a particular media, making it difficult to make them interoperate. For example, the ARPANET protocols (NCP) depended on highly reliable networks, an assumption that broke down for radio-based systems (which used an incompatible set of protocols). NCP also assumed that it was networking specific computers, not networks of networks. The experience of Xerox PARC’s local system (PARC Universal Packet, or PUP), based on Metcalfe’s 1973 dissertation, also showed that ``intelligence’’ in the network was unnecessary - in their system, ``subtracting all the hosts would leave little more than wire.’’

In June 1973, Vinton Cerf organized a seminar at Stanford University to discuss the redesign of the Internet, where it was agreed to emphasize host-based approaches to internetworking. In May 1974, Vinton Cerf and Robert E. Kahn published ``A Protocol for Packet Network Interconnection,'' which put forward their ideas of using gateways between networks and packets that would be encapsulated by the transmitting host. This approach would later be part of the Internet.

In 1977, Xerox PARC’s PUP was designed to support multiple layers of network protocols. This approach resolved a key problem of Vinton Cerf’s Internet design group. Early attempts to design the Internet tried to create a single protocol, but this required too much duplication of effort as both network components and hosts tried to perform many of the functions. By January 1978, Vint Cerf, Jon Postel, and Danny Cohen developed a design for the Internet, using two layered protocols: a lower-level internetwork protocol (IP) which did not require ``intelligence’’ in the network and a higher-level host-to-host transmission control protocol (TCP) to provide reliability and sequencing where necessary (and not requiring network components to implement TCP). This was combined with the earlier approaches of using gateways to interconnect networks. By 1983, the ARPANET had switched to TCP/IP. This layering concept was later expanded by ISO into the ``OSI model,’’ a model still widely used for describing network protocols. Over the years, TCP/IP was refined to what it is today.

1973: Font Generation Algorithms

Of course, there had been many efforts before this to create fonts using mathematical techniques; Felice Feliciano worked on doing so around 1460. However, these older attempts generally produced ugly results. In 1973-1974 Peter Karow developed Ikarus, the first program to digitally generate fonts at arbitrary resolution. In 1978, Donald Knuth revealed his program Metafont, which generated fonts as well (this work went hand-in-hand with his work on the open source typesetting program TeX, which is still widely used for producing typeset papers with significant mathematical content). Algorithmically-generated fonts were fundamental to the Type 1 fonts of PostScript and to TrueType fonts as well. Font generation algorithms made it possible for people to vary their font types and sizes to whatever they wanted, and for displays and printers to achieve the best possible presentation of a font. Today, most fonts displayed on screens and printers are generated by some font generation algorithm.

1974: Monitor

Hoare (1974) and Brinch Hansen (1975) proposed the monitor, a higher-level synchronization primitive; it’s now built into several programming languages (such as Java and Ada).
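
A small sketch of the idea in Python (my illustration; threading.Condition plays the role of the monitor's implicit lock and condition variable, where Java would use synchronized methods): a one-slot mailbox whose procedures are the only way to touch its data:

    import threading

    class Mailbox:
        def __init__(self):
            self.item = None
            self.monitor = threading.Condition()   # lock + condition variable

        def put(self, item):
            with self.monitor:                 # only one thread inside at a time
                while self.item is not None:
                    self.monitor.wait()        # wait until the slot is free
                self.item = item
                self.monitor.notify_all()

        def get(self):
            with self.monitor:
                while self.item is None:
                    self.monitor.wait()        # wait until something arrives
                item, self.item = self.item, None
                self.monitor.notify_all()
                return item

    box = Mailbox()
    threading.Thread(target=lambda: box.put("hello")).start()
    print(box.get())                           # prints: hello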

1975: Communicating Sequential Processes (CSP)

C. A. R. Hoare published the concept of Communicating Sequential Processes (CSP) in ``Parallel Programming: an Axiomatic Approach’’ (Computer Languages, vol 1, no 2, June 1975, pp. 151-160). This is a critically important approach for reasoning about parallel processes.

1977: Diffie-Hellman Security Algorithm

The Diffie-Hellman public key algorithm was published in a way that the public could read about it. According to the United Kingdom's GCHQ, M. J. Williamson had invented this algorithm (or something very similar to it) in 1974, but it was classified, and I'm only counting those discoveries made available to the public. This algorithm allowed users to create a secure communication channel without meeting.
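
Here's a toy sketch of the exchange in Python (the small prime is for readability only - real use needs a very large prime and authenticated parameters):

    import secrets

    p, g = 2087, 5                     # public prime modulus and base (toy values)

    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
    A = pow(g, a, p)                   # Alice sends A in the clear
    B = pow(g, b, p)                   # Bob sends B in the clear

    # Each side combines its own secret with the other's public value,
    # arriving at the same shared key without ever meeting:
    assert pow(B, a, p) == pow(A, b, p)
    print("shared key:", pow(B, a, p))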

1978: RSA security algorithm

Rivest, Shamir, and Adleman published their RSA algorithm, a critical basis for security. It permits authentication or encryption without having to previously exchange a secret shared key, greatly simplifying security. According to the United Kingdom's GCHQ, Clifford Cocks had invented this algorithm in 1973, but it was classified.
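
A toy sketch in Python (the textbook example with tiny primes; real keys use primes hundreds of digits long, and this omits padding and other essentials):

    p, q = 61, 53
    n = p * q                    # public modulus (3233)
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent, coprime with phi
    d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)

    message = 65
    ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
    plaintext = pow(ciphertext, d, n)  # only the key holder can decrypt
    print(ciphertext, plaintext)       # 2790 65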

1978: Spreadsheet

Dan Bricklin and Bob Frankston invented the spreadsheet application (as implemented in their product, VisiCalc). Bricklin and Frankston have made information on VisiCalc’s history available on the web.

1978: Lamport Clocks

Leslie Lamport published ``Time, Clocks, and the Ordering of Events in a Distributed System’’ (Communications of the ACM, vol 21, no 7, July 1978, pp. 558-565). This is an important approach for ordering events in a distributed system.
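
Here is a sketch of the rules in Python (my illustration): each process ticks its counter for local events, stamps outgoing messages, and on receipt advances past the sender's stamp, so causally related events are consistently ordered:

    class Process:
        def __init__(self, name):
            self.name, self.clock = name, 0

        def local_event(self):
            self.clock += 1
            return self.clock

        def send(self):
            self.clock += 1
            return self.clock              # the timestamp travels with the message

        def receive(self, timestamp):
            self.clock = max(self.clock, timestamp) + 1
            return self.clock

    p, q = Process("P"), Process("Q")
    p.local_event()                        # P's clock: 1
    t = p.send()                           # P's clock: 2, message stamped 2
    q.local_event()                        # Q's clock: 1
    print(q.receive(t))                    # Q jumps to max(1, 2) + 1 = 3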

1979: Distributed Newsgroups (USENET)

Tom Truscott and Jim Ellis (Duke University, Durham, NC), along with Steve Bellovin (University of North Carolina, Chapel Hill), set up a system for distributing electronic newsletters, originally between Duke and the University of North Carolina using dial-up lines and the UUCP (Unix-to-Unix copy) program. This was the beginning of the informal network USENET, supporting online forums on a variety of topics, and it took off once USENET was bridged with the ARPANET. ARPANET already had discussion groups (basically mailing lists). However, the owner of an ARPANET discussion group determined who received the information - in contrast, everyone could read USENET postings (a more democratic and scalable approach) [Naughton 2000, 177-179].

1980: Model View Controller (MVC)

The ``Model, View, Controller'' (MVC) triad of classes for developing graphical user interfaces (GUIs) was first introduced as part of the Smalltalk-80 language at Xerox PARC. This work was overseen by Alan Kay, but it appears that many people were actually involved in developing the MVC concept, including Trygve Reenskaug, Adele Goldberg, Steve Althoff, Dan Ingalls, and possibly Larry Tesler. Krasner and Pope later documented the approach extensively and described it as a pattern so it could be more easily used elsewhere. This doesn't mean that all GUIs have been developed using MVC; indeed, in 1997 and 1998, Alan Kay's team moved their Smalltalk graphic development efforts and research to another model based on display trees, called Morphic, which they believe obsoletes MVC. However, this design pattern has since been widely used to implement flexible GUIs, and has influenced later thinking about how to develop GUIs.
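
A minimal console sketch of the triad in Python (my illustration - the class names are invented, and Smalltalk-80's MVC of course drove bitmapped displays rather than print statements):

    class CounterModel:
        def __init__(self):
            self.value, self.observers = 0, []

        def increment(self):
            self.value += 1
            for view in self.observers:   # the model notifies its views
                view.update(self)

    class CounterView:
        def update(self, model):          # the view renders model state
            print("count =", model.value)

    class CounterController:              # the controller maps input to model operations
        def __init__(self, model):
            self.model = model

        def handle(self, key):
            if key == "+":
                self.model.increment()

    model, view = CounterModel(), CounterView()
    model.observers.append(view)
    CounterController(model).handle("+")  # prints: count = 1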

1981?: Remote Procedure Call (RPC)

An RPC (Remote Procedure Call) allows one program to request a service from another program, potentially located on another computer, without having to understand network details. The requester usually waits until the results are returned, and local calls can be optimized (e.g., by using the same address space). The calling is facilitated through an ``interface definition language'' (IDL) used to define the interface. It's difficult to trace this innovation back in time; the earliest form of this concept I've identified is Xerox's Courier RPC protocol, but I believe the concept is much older than the date shown here. Sun's RPC (later an RFC) was derived from this, and later on DCE, CORBA, component programming (COM, DCOM), and web application access (SOAP / UDDI, XML-RPC) all derive from this.
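
As a sketch of the idea (using XML-RPC from Python's standard library; the port and function are arbitrary choices of mine), the client calls add() as if it were a local procedure, and the RPC machinery marshals the arguments, waits, and unmarshals the result:

    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    # Server: expose an ordinary function over the network.
    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(lambda a, b: a + b, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client: a remote call that looks like a local one.
    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    print(proxy.add(2, 3))   # prints 5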

1984: Distributed Naming (DNS)

The ``domain name system’’ (DNS) was invented, essentially the first massively distributed database, enabling the Internet to scale while allowing users to use human-readable names of computers. Every time you type in a host name such as ``www.dwheeler.com’’, you’re relying on DNS to translate that name to a numeric address. Some theoretical work had been done before on massive database distribution, but not as a practical implementation on this scale, and DNS innovated in several ways to make its implementation practical (e.g., by not demanding complete network-wide synchronicity, by distributing data maintenance as well as storage, and by distributing ``reverse lookups’’ through a clever reflective scheme).
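
As a tiny sketch (Python's standard resolver doing the lookup), every name-to-address translation is a query against this distributed database:

    import socket

    # Resolve a host name to a numeric address via DNS.
    print(socket.gethostbyname("www.dwheeler.com"))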

1986: Lockless version management

Dick Grune released to the public the Concurrent Versions System (CVS), the first lockless version management system for software development. In 1984-1985, Grune wanted to cooperate with two of his students when working on a C compiler. However, existing version management systems did not support cooperation well, because they all required that versions be "locked" before they could be edited, and once locked only one person could edit the file. While standing at the university bus stop, waiting for the bus home in bad autumn weather, he created an approach for supporting distributed software development that did not require project-wide locking. After initial development, CVS was publicly posted by Dick Grune to the newsgroup mod.sources on 1986-07-03 in volume 6 issue 40 (and also to comp.sources.unix) as source code (in shell scripts). CVS has since been re-implemented, but its basic ideas have influenced all later version management systems. The initial CVS release did not formally state a license (a common practice at the time), but in keeping with the common understanding of the time, Mr. Grune intended for it to be used, modified, and redistributed; he has specifically stated that he "certainly intended it to be a gift to the international community... for everybody to use at their discretion." Thus, it appears that the initial implementation of CVS was intended to be open source software / free software (OSS/FS) or something closely akin to it. Certainly CVS has been important to OSS/FS since that time; while OSS/FS development can be performed without it, CVS's ideas were a key enabler for many OSS/FS projects, and are widely used by proprietary projects as well. CVS's ideas have been a key enabler in many projects for scaling software development to much larger and more geographically distributed development teams.

1989: Distributed Hypertext via Simple Mechanisms (World Wide Web)

The World Wide Web (WWW)’s Internet protocol (HTTP), language (HTML), and addressing scheme (URL/URIs) were created by Tim Berners-Lee. The idea of hypertext had existed before, and Nelson’s Xanadu had tried to implement a distributed scheme, but Berners-Lee developed a new approach for implementing a distributed hypertext system. He combined a simple client-server protocol, markup language, and addressing scheme in a way that was new, powerful, and easy to implement. Each of the pieces had existed in some form before, but the combination was obvious only in hindsight. Berners-Lee’s original proposal was dated March 1989, and he first implemented the approach in 1990.

1991: Design Patterns

Erich Gamma published his PhD thesis in 1991, the first serious examination of software design patterns as a subject of study, including a number of specific design patterns. In 1995 Gamma, Helm, Johnson, and Vlissides (the ``Gang of Four'') published ``Design Patterns,'' which widely popularized the idea. The concept of ``design patterns'' is old in other fields, specific patterns had been in use for some time, and algorithms had already been collected for some time; some notion of patterns is suggested in earlier works (see the references in both). However, these works crystallized software design patterns in a way that was immediately useful and had not been done before. This has spawned other kinds of thinking, such as trying to identify anti-patterns (``solutions'' whose negative consequences exceed their benefits; see the Antipatterns website, including information on development antipatterns).

1992: Secure Mobile Code (Java and Safe-Tcl)

A system supporting secure mobile code can automatically download potentially malicious code from a remote site and safely run it on a local computer. Sun built its new programming language Oak (later called Java) in 1990-1992, and demonstrated it in September 1992 as part of the Green project's demonstration of its *7 PDA. Oak combined an interpreter (preventing certain illegal actions at run-time) and a bytecode verifier (which examines the mobile code for certain properties before running the program, speeding later execution). Originally intended for the ``set-top'' market, Oak was modified to work with the World Wide Web and re-launched (with much fanfare) as Java in 1995. Nathaniel Borenstein and Marshall Rose implemented a prototype of Safe-Tcl in 1992; it was first used to implement ``active email messages.'' An expanded version of Safe-Tcl was incorporated into regular Tcl in April 1996 (Tcl 7.5).

1993: Refactoring

Refactoring is the process of changing a software system in a way that does not alter its external behavior but improves its internal structure. It's sometimes described as ``improving the design after it's written'', and could be viewed as design patterns in the small. Specific refactorings and the general notion of restructuring programs had been known much longer, of course, but creating and studying a set of source code refactorings was essentially a new idea. This date is based on William F. Opdyke's PhD dissertation, the first lengthy discussion of it (including a set of standard refactorings) I've found. Martin Fowler later published his book ``Refactoring'', which popularized this idea.
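
A small sketch of one classic refactoring, "extract function", in Python (my example): the external behavior is unchanged, but duplicated logic gains a name and a single home:

    # Before: a magic number buried in a loop.
    def report(prices):
        total = 0
        for p in prices:
            total += p * 1.08          # tax rate repeated wherever it's needed
        print("total:", total)

    # After: the computation is extracted and named; behavior is identical.
    TAX_RATE = 1.08

    def with_tax(price):
        return price * TAX_RATE

    def report_refactored(prices):
        print("total:", sum(with_tax(p) for p in prices))

    report([10, 20])                   # both calls print the same total
    report_refactored([10, 20])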

1994: Web-Crawling Search Engines

The World Wide Web Worm (WWWW) indexed 110,000 web pages by crawling along hypertext links and providing a central place to make search requests; this is one of the first (if not the first) web search engines. Text search engines far precede this, of course, so it can be easily argued that this is simply the reapplication of an old idea. However, text search engines before this time assumed that they had all the information locally available and would know when any content changed. In contrast, web crawlers have to locate new pages by crawling through links (selectively finding the ``important’’ ones).
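
Here is a sketch of the core crawling loop in Python (my illustration - a real engine adds politeness delays, robots.txt handling, real HTML parsing, and ranking; the start URL and page limit are arbitrary):

    import re
    import urllib.request
    from urllib.parse import urljoin

    def crawl(start_url, limit=10):
        queue, seen = [start_url], set()
        while queue and len(seen) < limit:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)               # "index" the page (here: just remember it)
            try:
                page = urllib.request.urlopen(url, timeout=5)
                html = page.read().decode(errors="ignore")
            except OSError:
                continue                # unreachable page: skip it
            # follow hypertext links to discover pages not known in advance
            for href in re.findall(r'href="(http[^"]+)"', html):
                queue.append(urljoin(url, href))
        return seen

    print(crawl("https://www.dwheeler.com/"))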

BYTE: The Most Important Software Products

The 20-year story of personal computing often seems to be dominated by hardware. But it’s the software that makes the hardware worth owning: Many early buyers of Apple IIs walked into stores and asked for the VisiCalc machine.

CP/M 2.0

Developed by the late Gary Kildall in 1974, CP/M was the first OS to run on machines from different vendors. It became the preferred OS for most software development, and it looked like it would rule forever.

VisiCalc

Written in 1979 by first-year Harvard Business School student Dan Bricklin and Bob Frankston of MIT, VisiCalc was a godsend to Wall Street users who had bought the first microcomputers two years earlier. Running initially on the Apple II and nearly single-handedly creating the demand for the machine, VisiCalc established spreadsheets as a staple application, setting the stage for Lotus 1-2-3 on the IBM PC in 1982.

WordStar

While writing programs on the Altair, Michael Shrayer hit upon the idea of writing the manuals on the machine. Electric Pencil was born, the first microcomputer word processor. But the first program to exploit the market potential was Seymour Rubinstein’s 1979 masterpiece, WordStar. Other programs took up WordStar-compatible keyboard commands--including the last major upgrade of Electric Pencil.

dBase II

Wayne Ratliff’s creation, first intended to manage a company football pool, was the first serious database management system for CP/M. dBase II, in its DOS incarnation, was a massive success. Ashton-Tate, which acquired dBase from Ratliff, began to lose the lead when it released the bug-ridden dBase IV in 1988. A Windows version (under the ownership of Borland) didn’t appear until 1994, much too late. The dBase language survives in the form of Xbase, supported by vendors such as Microsoft and Computer Associates.

AutoCAD

Autodesk’s AutoCAD started life as a CP/M (Control Program for Microcomputers) application, later moved to DOS, and eventually made the transition to Windows. It brought CAD from minis and mainframes down to the desktop, one of the first programs to make that now-common migration. AutoCAD quickly became--and remains--an industry standard.

Lotus 1-2-3

VisiCalc may have sold Wall Street on the idea of electronic spreadsheets, but 1-2-3 was the spreadsheet that Main Street wanted, too. When the IBM PC and XT took over the world, Lotus's simple but elegant grid was without question the top spreadsheet to run on them, adding graphics and data-retrieval functions to the paradigm established by VisiCalc. By the early 1990s, Lotus could brag that 1-2-3 was the top-selling application of all time.

The Norton Utilities

Before Peter Norton rolled up his sleeves, bit twiddlers were on their own when it came to recovering lost clusters and managing other disk catastrophes. It’s almost the end of the millennium, and most of us still reach for Norton Utilities when something goes wrong with a disk.

DOS 2.0

The version of DOS that truly solidified the Microsoft/IBM platform dominance was 2.0, which came out with IBM’s new XT in 1983. DOS 2.0 had commands to support the XT’s new 10-MB hard drive as well as such now-familiar external commands and files as ANSI.SYS and CONFIG.SYS.

DOS 2.11 became the de facto basis of backward compatibility for any DOS program. In 1990, you might not have known if an application ran on DOS 5.0, but you could be sure it worked on old 2.11. DOS limitations even survive in Windows 95--in particular, the dreaded 640-KB memory limit.

Flight Simulator

To work its magic, Microsoft's simulation of an airplane's cockpit employed low-level graphics routines. It became a mainstay of software suites used to test compatibility with the IBM PC standard. It was also one of the best-selling games of all time.

Novell NetWare

The year of the LAN happened sometime in the 1980s, and it was Novell's NetWare that made it so. NetWare is no lightweight desktop OS; it was an OS that systems administrators could rely on. Versions of this OS are still in use in businesses everywhere.

Unix System V

The best effort so far at unifying the diverse flavors of Unix, System V took off after AT&T's divestiture in 1984, when Ma Bell was freed to market the OS more aggressively. Version 4.0, released in 1989, brought together Xenix, SunOS, 4.3 BSD, and System V to form a single standard. Hardware vendors continued to go their own ways, however, requiring subsequent efforts by numerous groups (e.g., X/Open, OSF, and COSE) to continue the fight for a shrink-wrappable Unix. Those efforts have mostly failed, but Unix's communications standards and network protocols are finding a wider user base as the Internet explodes in popularity.

Mac OS and System 7

The Macintosh wouldn't be the Macintosh without the Mac OS. And it was on the Macintosh that the concept of the desktop GUI really dug in. Later named System 7 in a major 1990 upgrade, the Mac continues to best Windows in ease of use, plug-and-play compatibility, and color matching. Apple's Power Macs and the first Mac clones just might keep System 7 relevant into the next century.

Quicken

This checkbook-balancing program may be better-suited to the needs of its users than any other program on this list save VisiCalc. Scott Cook's company grew from humble beginnings in the mid-1980s to become Microsoft's multibillion dollar dance partner (until the Department of Justice cut in). Once you start balancing your checkbook in Quicken, you don't ever go back.

SideKick 1.0

Besides being the first PIM (personal information manager), its pop-up notepad, calendar, and calculator made Borland International’s SideKick the model for TSRs--an application type that was relatively rare in 1984. Pop-up mini-apps became commonplace in the DOS era, but Windows’ task switching killed the TSR market in the 1990s.

Excel for the Macintosh

VisiCalc and Lotus 1-2-3 started the spreadsheet revolution, but they were character-based. Microsoft Excel for the Macintosh made the benefits of graphical spreadsheets obvious. Microsoft ported Excel to Windows, but Lotus was slow to convert 1-2-3 to Windows. There’s a lesson here: Today, Excel for Windows is the best-selling spreadsheet.

PageMaker

This is the program that launched a million newsletters. PageMaker’s paste-up metaphor also made sense to people who had worked in traditional design and production departments. QuarkXPress might now have a larger share in higher-end publishing, but with Adobe’s money and name behind Aldus, PageMaker promises to remain a competitive desktop publishing system for a long time to come.

LANtastic

For people who thought Novell NetWare was for corporate MIS gurus, Artisoft’s affordable network-card-and-software package was an easy and popular way to link PCs and share resources. With the addition of NetWare server functions in Artisoft’s new LANtastic Dedicated Server, LANtastic keeps a foothold in the future.

Adobe Type

Desktop publishing was still a bit of a toy when Adobe made Type 1 PostScript fonts available on the Macintosh. Thanks to these fonts and the enhanced line spacing and printing control that PostScript provides, the Mac became a tool on which to run a publishing business.

Windows 3.x

Though it was first introduced in 1985, Microsoft Windows spent the rest of the '80s as somewhat of a joke. It was slow, ugly, and underpowered. Then Microsoft rolled out Windows 3.0, a complete rewrite, at a tightly orchestrated, bicoastal multimedia hypefest in the spring of 1990. Gone was the 640-KB DOS memory limit (sort of); in came a flood of applications, a type of multitasking, and the desktop environment most users live in today. Version 3.1, released in 1992, added speed and stability, not to mention OLE, TrueType fonts, and drag-and-drop commands.

Lotus Notes 3.0

Notes is the most innovative and powerful of the numerous contenders in the leading-edge groupware category. Not just E-mail, Notes is brilliant at capturing corporate group-think, thanks to its unique, replicated message system. Notes has become the standard applications development environment in every company that’s ever uttered the word reengineering.

AutoCAD Brought Design to PCs


It slices, it dices, it renders, it models. AutoCAD single-handedly wrested design from minicomputers.

VisiCalc Created Financial Frenzy