Memory Machines The Evolution Of Hypertext

aka Memory Machines: The Evolution Of HyperText, by Belinda Barnet ISBN:1783083441


  • A number of people have dreamed of brain-like Associative writing/reading structures since before there were digital computers.
  • Early implementations were shaped heavily by technical constraints and funding/adoption realities (e.g. treating hypertext as your primary delivery medium isn't viable if few of your target readers have access to an interactive/display computer).
  • Ted Nelson has been around for that entire history, and is very cranky with everyone for failing to fully execute his vision.
  • Many of these systems had some features that are superior (to some people, for some purposes) to the World Wide Web. But Worse Is Better again.

My thoughts: what can we do to get closer to some of those ideals while still working with the World Wide Web?


Foreword: TO MANDELBROT IN HEAVEN by Stuart Moulthrop

Not all bringers of light can be primarily solvers of problems, able to explain fluctuations of cotton prices, skitterings of heartbeats, the shapes of clouds and coastlines. Some catastrophists come among us to raise questions rather than resolve them, to instigate practices that bring on interesting times. One of these more harassing masters, by his own estimate, has not ended up in heaven, but somewhere like the other place. ‘I see today’s computer world, and the Web’, Theodor Holm Nelson (Ted Nelson) tells the author of this book, ‘as resulting from my failure’ (Nelson 2011).

recording made by John McDaid during the Association for Computing Machinery’s second conference on HyperText and hypermedia, in Pittsburgh, 1989. Its waveforms encode the rivetingly clear, stage-brilliant voice of Nelson, delivering a phrase I will not forget: ‘You! Up there! … WRONG’

If anyone has failed, it’s surely the rest of us, who sit some distance from the edge of revelation, blinking into the light

The computer world we live in is indeed broken far away from any higher vision. Too much is still governed by maladaptive models like page, document and file, which, as Nelson always said, distract from true electronic literacy

Working across categories of matter and sign requires techniques such as abstraction and algorithm, as we learn from the likes of Mandelbrot. But geometry is one thing; architecture and engineering, something else. Visionary procedures must be deployed or instantiated in appropriate tools: protocols, data frameworks, programming languages and development interfaces. Along with these instruments, we need adequate cultures of learning and teaching, imitation and sharing, to make the tools broadly useful. In these respects the computer world, or our engagement with it, falls very short

Somehow we conceive writing-out (rhetoric) and writing-forward (programming) as distinct subjects, not as they should be: sign and countersign of a single practice

Douglas Engelbart provided a counter-example: he strapped a brick to a pencil and requested that people write with the assemblage

As anyone whose mind has been warped by videogames knows, computational sign-systems (sometimes called ‘cybertexts’) radically revise the function, and perhaps the core meaning, of failure. When a story unfolds according to contingencies determined equally by logic, chance, and player action, there is no simple way to move from start to finish, no single streambed of discourse. To paraphrase a US philosopher, stuff happens, often with considerable complexity, though driven by deceptively simple rules. Under such sensitively dependent conditions, what constitutes failure, or indeed success?

Welcome to the cultural logic of software or cybertext, that larger domain to which HyperText, the subject of this book, inevitably articulates, either as first revelation or arcane orthodoxy. Seeing a world composed of paths and networks makes a difference. In the regime of play and simulation we do what we can because we must, or must because we can; you get the idea. Play’s the thing. Iteration is all. Failure is a given, but we fail forward, converging in our own ways toward ultimate solution – and bonus levels, and sequels, each its own absurdly delightful reward. To play is to remain in the loop, always alive to possibilities of puzzle, travail and further trial-by-error

In all these respects, the book you are now reading resembles another very good non-brick, James Gleick’s The Information: A History, a Theory, a Flood (2011).

If Engelbart and Nelson are the Babbage and Lovelace of our times, and Xanadu our answer to the Analytical Engine, then we might expect these mighty names to be wistfully celebrated, in about a century’s time, for their sadly clear grasp of what was not immediately to be


ponder how it all might fit together, and more deeply, if it is even possible to say that a technical system ‘evolves’. What, exactly, am I tracing the path of here? In one of those delightful recursive feedback loops that punctuates any history, I discovered that Doug Engelbart is also concerned with how technology evolves over time – so concerned, in fact, that he constructed his own ‘framework’ to explain it. Inspired, I went off and interviewed one of the world’s most eminent palaeontologists, Niles Eldredge, and asked him what he thought about technical evolution. His response forms the basis of Chapter 1

These stories are not at all linear; many of them overlap in time, and most of them transfer ideas and innovations from earlier systems and designs. According to Eldredge (2006), this is actually how technologies evolve over time – by transfer and by borrowing (he calls this the ‘lateral spread of information’). Ideally the chapters would be viewed and read in parallel, side-by-side strips, with visible interconnections between them

A leitmotif recurs throughout this history, a melody that plays through each chapter, slightly modified in each incarnation but nonetheless identifiable; it is the overwhelming desire to represent complexity, to ‘represent the true interconnections’ that crisscross and interpenetrate human knowledge, as Ted puts it (1993).

Now for the important bit: this story stops before the Web. More accurately, it stops before HyperText became synonymous with the Web

I will define precisely what I mean by the word HyperText in the next chapter, but for now we will use Ted Nelson’s popular definition, branching and responding text, best read at a computer screen (Nelson 1993).

For example, the earliest built system we will look at here – the oN-Line System (NLS) – had fine-grained linking and addressing capabilities by 1968

As Stuart Moulthrop writes in Hegirascope (1997), HTML stands for many things (‘Here True Meaning Lies’), one of which is ‘How to Minimise Linking’.

These early systems were not, however, connected to hundreds of millions of other users. You could not reach out through FRESS and read a page hosted in Thailand or Libya. The early systems worked on their own set of documents in their own unique environments... None of the early ‘built’ systems we look at either briefly or in depth in this book – NLS, HES, FRESS, WE, EDS, Intermedia or Storyspace – were designed to accommodate literally billions of users. That’s something only the Web can do.

In that sense, the book is also a call to action. HyperText could be different

This book is about an era when people had grand visions for their HyperText systems, when they believed that the solution to the world’s problems might lie in finding a way to organize the mess of human knowledge

The story changes in the mid-1980s with the emergence of several smaller workstation-based, ‘research-oriented’ systems (Hall et al. 1996, 14). I have included only two of these here, arguably the two that had the least commercial or financial success: Storyspace and Intermedia

HyperCard is the elephant in the pre-Web HyperText room; it popularized hypermedia before the Web and introduced the concept of linking to the general public

If I had another ten thousand words to play with, I would include a chapter on Microcosm and Professor Dame WendyHall

These systems were all pioneering in their own right, but they are (to my mind, at least) part of a different story: the commercialization of HyperText and the birth of Network Culture. That is a different book for a different time, and perhaps a different scholar


The relationship between human beings and their tools, and how those tools extend, augment or ‘boost’ our capacity as a species, is integral to the history of HyperText and the NLS system in particular

There is actually a historical approach that is interested in how objects change over time, but it does not come from the humanities. It comes from evolutionary biology

From the middle of the nineteenth century on, and arguably before this, scholars started remarking on the alarming rate at which technological change was accelerating. This sense of ‘urgency’, that technological change is accelerating and that we need to understand how it occurs, comes through most strongly in Engelbart’s work.

It is important to start with a definition of ‘evolution’ that is suitable and appropriate to both systems. (Eldredge 2011, 298)

For our purposes here, Eldredge’s definition is perfect: ‘the long-term fate of transmissible information’ (Eldredge 2011, 298).

Technical artefacts are not dependent on the previous generation; they can borrow designs and innovations from decades or even centuries ago (retroactivate), or they can borrow from entirely different ‘branches’ of the evolutionary tree (horizontal transfer).

As Noah Wardrip-Fruin observes in his essay ‘Digital Media Archeology’ (2011), in approaching any media artefact it is important to understand the systems that support it, and for digital media in particular, the specific ‘operations and processes’ that make it unique

Digital Media are not simply representations but machines for generating representations

To define the ‘operations and processes’ behind HyperText as an information system, Ted Nelson is the logical first port of call. His original 1965 definition of HyperText was ‘a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper’ (Nelson 1965, 85). This is a useful definition because it emphasizes the most obvious aspect of HyperText: interconnectivity

Nelson’s next definition of HyperText is more useful for our purposes. In his 1974 book, Computer Lib/Dream Machines (the copy I refer to here is the 1987 reprint), Nelson proposed a definition of HyperText that emphasized branching or responding text. ‘HyperText means forms of writing which branch or perform on request; they are best presented on computer display screens’

What Nelson meant by the word ‘perform’, however, also included some computer-based writing systems that haven’t gained mass adoption. One of those systems was stretch-text, a term Nelson coined in 1967 to mean text that automatically ‘extends’ itself when required
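The stretch-text idea is easy to model even though it was never widely built: each span of terse text can carry an optional expansion that replaces it in place on request. A rough sketch, with all names and example text my own:

```python
# Minimal stretch-text sketch: a Span renders its terse form until it
# is 'stretched', at which point it expands in place into more detail.
class Span:
    def __init__(self, terse, detail=None):
        self.terse = terse        # text always available
        self.detail = detail      # optional list of Spans shown when expanded
        self.expanded = False

    def stretch(self):
        """Request expansion; a span with no detail simply stays terse."""
        if self.detail:
            self.expanded = True

    def render(self):
        if self.expanded and self.detail:
            return " ".join(s.render() for s in self.detail)
        return self.terse

doc = Span("Memex stored items.", [
    Span("Memex stored items on microfilm,"),
    Span("linked by user-created trails."),
])
print(doc.render())   # terse view
doc.stretch()
print(doc.render())   # expanded view
```

Real stretch-text as Nelson imagined it would expand continuously and reversibly; this only captures the one-step ‘extend on request’ behaviour.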

although there have been excursions into the other forms of HyperText posited by Nelson (for example, the attempts to build stretch-text), chunk-style HyperText is now the dominant form

Based on Nelson’s own suggestions, and having explained that we are focusing on chunk-style HyperText not stretch-text, I now offer a definition of what we are tracing here:

Written or pictorial material interconnected in an Associative fashion, consisting of units of information retrieved by automated links (Automatic Linking), best read at a screen
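Read as a data model, this definition is just a graph: discrete units of information, plus links that retrieve one unit from another. A minimal sketch of chunk-style hypertext in Python (the class and example content are mine, not drawn from any system in the book):

```python
# Chunk-style hypertext as a graph: units of information ("chunks")
# connected by links that, when followed, retrieve the target unit.
class Hypertext:
    def __init__(self):
        self.chunks = {}          # chunk id -> content
        self.links = {}           # chunk id -> list of target ids

    def add_chunk(self, cid, content):
        self.chunks[cid] = content
        self.links.setdefault(cid, [])

    def link(self, src, dst):
        self.links[src].append(dst)

    def follow(self, cid, n=0):
        """Follow the n-th link out of a chunk; retrieval is automatic."""
        dst = self.links[cid][n]
        return dst, self.chunks[dst]

ht = Hypertext()
ht.add_chunk("memex", "Bush's associative microfilm desk.")
ht.add_chunk("nls", "Engelbart's oN-Line System.")
ht.link("memex", "nls")
print(ht.follow("memex"))  # ('nls', "Engelbart's oN-Line System.")
```

The Web is one instance of this model; most of the systems in this book differ in where the links live and how fine-grained the addressable units are.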

As I explained in the Introduction, there is also a recurrent ‘vision’ in HyperText history that has found many different incarnations: the desire to represent complexity, to ‘represent the true interconnections

Each of the HyperText systems we investigate here has a different interpretation of what that means

There is also an important difference between technical vision and technical prototype

Bill Duvall put it in an interview with the author: ‘The one thing I would say – and this isn’t just true about NLS, this is true about innovation in general – is that […] sometimes just showing somebody a concept is all that you have to do to start an evolutionary path’

This book investigates both technical vision and technical prototypes, but more important, it investigates the relationship between them.


Memex was an electro-optical device designed in the 1930s to provide easy access to information stored associatively on MicroFilm

so much has been written about it that it is easy to forget the most remarkable thing about this device: it has never been built. Memex exists entirely on paper

What is not so well known is the way that Memex came about as a result of both VannevarBush’s earlier work with analogue computing machines and his understanding of the mechanism of Associative memory. I would like to show that Memex was the product of a particular engineering culture, and that the machines that preceded Memex – the Differential Analyzer and the Selector in particular – helped engender this culture and the discourse of analogue computing in the first place.

Prototype technologies create cultures of use around themselves; they create new techniques and new methods that were unthinkable prior to the technology

the only piece of Bush’s work that had any influence over the evolution of hypertext was his successful 1945 article, ‘As We May Think

We begin the Memex story with Bush’s first analogue computer, the Differential Analyzer

The Differential Analyzer was a giant electromechanical gear-and-shaft machine that was put to work during the war calculating artillery ranging tables. In the late 1930s and early 1940s it was ‘the most important computer in existence in the US’.

Many of the people who worked on the machine (for example Harold Hazen, Gordon Brown and Claude Shannon) later made contributions to feedback control, information theory and computing

However, by the spring of 1950 the Analyzer was gathering dust in a storeroom; the project had died

research into analogue computing technology, the Analyzer in particular, contributed to the rise of digital computing. It demonstrated that machines could automate the Calculus, that machines could automate human cognitive techniques

The interwar years found corporate and philanthropic donors more willing to fund research and development within engineering departments, and communications failures during the Great War revealed serious problems to be addressed. In particular, engineers were trying to predict the operating characteristics of power-transmission lines, long-distance telephone lines, commercial radio and other communications technologies

Of particular interest to the engineers was the Carson equation for transmission lines. Although it was a simple equation, it required intensive mathematical integration to solve.

Early in 1925 Bush suggested to his graduate student Herbert Stewart that he devise a machine to facilitate the recording of the areas needed for the Carson equation

Bush observed the success of the machine, and particularly the later incorporation of the two wheel-and-disc integrators, and decided to make a larger one, with more integrators and a more general application than the Carson equation. By the fall of 1928 Bush had secured funds from MIT to build a new machine. He called it the Differential Analyzer, after an earlier device proposed by Lord Kelvin, which might externalize the calculus and ‘mechanically integrate’ its solution (Hartree 2000).

Bush had delivered something that was the stuff of dreams; others could come to the laboratory and learn by observing the machine, by watching it integrate, by imagining other applications. A working prototype is different from a vision or a white paper. It creates its own milieu; it teaches those who use it about the possibilities it contains and its material technical limits. It gets under their skin

Bush, himself, recognized this, and believed that those who used the Analyzer acquired what he called a ‘mechanical calculus’, an internalized knowledge of the machine

The creation of the first Analyzer, and Bush’s promotion of it as a calculation device for ballistic analysis, had created a link between the military and engineering science at MIT that endured for more than 30 years. Manuel De Landa (1994) puts great emphasis on this connection, particularly as it was further developed during WWII

It would result in the formation of the Advanced Research Projects Agency (ARPA) in 1958

In 1935 the US Navy came to Bush for advice on machines to crack coding devices like the new Japanese cipher machines (Burke 1991).

Three new technologies were emerging at the time that handled information: photoelectricity, microfilm and digital electronics

Bush thought and designed in terms of analogies between brain and machine, electricity and information. He shared this central research agenda with Norbert Wiener and Warren McCulloch, both at MIT

Bush called his new machine the Comparator

Bush began the project in mid-1937, while he was working on the Rockefeller Analyzer, and agreed to deliver a code-cracking device based on these technologies by the next summer

Microfilm did not behave the way Bush wanted it to. As a material it was very fragile, sensitive to light and heat and it tore easily. The switch was made to paper tape with minute holes

The Comparator prototype ended up gathering dust in a navy storeroom, but much of the architecture was transferred to subsequent designs

By this time, Bush had also started work on the Memex design

In the 1930s many believed that microfilm would make information universally accessible and thus spark an intellectual revolution

the Encyclopaedia Britannica ‘could be reduced to the volume of a matchbox’

Bush put together a proposal for a new microfilm selection device, based on the architecture of the Comparator, in 1937. Corporate funding was secured for the Selector by pitching it as a microfilm machine to modernize the library

As with the Comparator, long rolls of this film were to be spun past a photoelectric sensing station. If a match occurred between the code submitted by a researcher and the abstract codes attached to this film (Burke 1991), the researcher was presented with the article itself and any articles previously associated with it.
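Stripped of the microfilm and photoelectrics, the Selector’s selection logic is a brute-force linear scan: stream every item past a comparison station and keep the ones whose attached code matches the researcher’s query code. A sketch of just that logic (codes and abstracts invented for illustration):

```python
# The Selector's selection logic as a linear scan: stream items past
# a comparison station, yielding those whose codes match the query.
def select(items, query_code):
    """items: iterable of (code, abstract) pairs streamed in reel order."""
    for code, abstract in items:
        if code == query_code:
            yield abstract

reel = [("A17", "Carson equation survey"),
        ("B02", "Microfilm warping study"),
        ("A17", "Transmission line tables")]
print(list(select(reel, "A17")))
# ['Carson equation survey', 'Transmission line tables']
```

The machine's speed came entirely from the physical scan rate, not from any indexing; every query re-read the whole reel.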

Bush considered the Selector as a step towards the mechanized control of scientific information, which was of immediate concern to him as a scientist. He felt that the fate of the nation depended on the effective management of these ideas lest they be lost in a brewing data storm

Bush planned to spin long rolls of 35mm film containing the codes and abstracts past a photoelectric sensing station so fast, at speeds of six feet per second, that 60,000 items could be tested in one minute. This was at least 150 times faster than the mechanical tabulator

when Bush handed the project over to three of his researchers – John Howard, Lawrence Steinhardt and John Coombs – it was floundering. After three more years of intensive research and experimentation with microfilm, Howard had to inform the navy that the machine would not work because microfilm warps under heat and would deform at great speed

Technical limits shape the way a vision comes into being, but not in the sense of a rude awakening – more like a mutual dance

By the 1960s the project and machine failures associated with the Selector, it seems, made it difficult for Bush to think about Memex in concrete terms

We now turn to Bush’s fascination with, and exposure to, new models of human associative memory gaining currency in his time

Our ineptitude at getting at the record is largely caused by the artificiality of systems of indexing. When data of any sort are placed in storage, they are filed alphabetically or numerically. The human mind does not work that way. It operates by association

Bush’s model of human associative memory was an electromechanical one – a model that was being keenly developed by Claude Shannon, Warren McCulloch and Walter Pitts at MIT, and would result in the McCulloch-Pitts neuron

At the same time, there was a widespread faith in biological-mechanical analogues as techniques to boost human functions

The motor should first of all model itself on man, and eventually augment or replace him

So Memex was first and foremost an extension of human memory and the associative movements that the mind makes through information

The Design of Memex

Bush’s autobiography, Pieces of the Action, and also his essay, ‘Memex Revisited’, tell us that he started work on the design in the early 1930s

The description in this essay employs the same methodology Bush had used to design the Analyzer: combine existing lower-level technologies into a single machine with a higher function that automates the ‘pick-and-shovel’ work of the human mind

Bush was very good at assembling old innovations into new machines

Bush had a ‘global’ view of the combinatory possibilities and the technological lineage

If the user wished to consult a certain piece of information, ‘he [tapped] its code on the keyboard, and the title page of the book promptly appear[ed]

The user could classify material as it came in front of him using a teleautograph stylus, and register links between different pieces of information using this stylus. This was a piece of furniture from the future, to live in the home of a scientist or an engineer, to be used for research and information management

The 1945 Memex design also introduced the concept of ‘trails’, a concept derived from work in neuronal storage-retrieval networks at the time, which was a method of connecting information by linking units together in a networked manner, similar to hypertext paths. The process of making trails was called ‘trailblazing’, and was based on a mechanical provision ‘whereby any item may be caused at will to select immediately and automatically another’ (Bush [1945] 1991, 107), just as though these items were being ‘gathered together from widely separated sources and bound together to form a new book

a mechanical selection head inside the desk to find and create links between items. ‘This is the essential feature of the Memex. The process of tying two items together is the important thing

Bush went so far as to suggest that, in the future, there would be professional trailblazers who took pleasure in creating useful paths through the common record in such a fashion
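In data-structure terms a trail is a named, ordered sequence of item identifiers, stored apart from the items themselves, so that one item can lie on many trails and a trail can be shared like ‘a new book’. A minimal sketch (class and method names are mine; the bow-and-arrow content echoes Bush’s own trail example in ‘As We May Think’):

```python
# A Memex-style trail: a named, ordered sequence of item ids, kept
# separate from the items, so one item may appear on many trails.
class Trailblazer:
    def __init__(self, items):
        self.items = items        # item id -> content
        self.trails = {}          # trail name -> ordered list of item ids

    def tie(self, trail, item_id):
        """'The process of tying two items together is the important thing.'"""
        self.trails.setdefault(trail, []).append(item_id)

    def replay(self, trail):
        """Walk a trail in order, retrieving each item as if bound in a book."""
        return [self.items[i] for i in self.trails[trail]]

tb = Trailblazer({"a": "bow and arrow", "b": "Turkish bow", "c": "elasticity"})
for i in ("a", "b", "c"):
    tb.tie("origins of the bow", i)
print(tb.replay("origins of the bow"))
# ['bow and arrow', 'Turkish bow', 'elasticity']
```

Keeping the trail layer separate from the items is what makes trails exchangeable: a professional trailblazer could hand over only the trail.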

In ‘Memex II’, Bush not only proposed that the machine might learn from the human via what was effectively a Cybernetic Feedback Loop, but proposed that the human might learn from the machine

In our interview, Engelbart claimed it was Bush’s concept of a ‘co-evolution’ between humans and machines, and also his conception of our human ‘augmentation system’, that inspired him

Paradoxically, Bush retreats on this close alignment of memory and machine in his later essays. In ‘Memex II’ he felt the need to demarcate a purely ‘human’ realm of thought from technology, a realm uncontaminated by machines. One of the major themes in ‘Memex II’ is defining exactly what machines can and cannot do

In all versions of the Memex essay, the machine was to serve as a personal memory support. It was not a public database in the sense of the modern Internet; it was first and foremost a private device

the dominant paradigm of human–computer interaction was sanctified and imposed by corporations like IBM, and ‘it was so entrenched that the very idea of a free interaction between users and machines as envisioned by Bush was viewed with hostility by the academic community

Memex remained profoundly uninfluenced by the paradigm of digital computing

Consequently, the Memex redesigns responded to the advances of the day quite differently from how others were responding at the time

‘Delay lines’ stored 1,000 words as acoustic pulses in tubes of mercury

in 1936, no one could have expected that within ten years the whole field of digital ‘computer science’ would so quickly overtake Bush’s project (Weaver and Caldwell cited in Owens 1991, 4). Bush, and the department at MIT that had formed itself around the Analyzer and analogue computing, had been left behind

By 1967 Engelbart was already hard at work on NLS at SRI, and the Hypertext Editing System (HES) was built at Brown University. Digital computing had arrived. Bush, however, was not a part of this revolution

Technological evolution moves faster than our ability to adjust to its changes. More precisely, it moves faster than the techniques that it engenders and the culture it forms around itself

As he writes in his autobiography: ‘The trend [in the ’50s] had turned in the direction of digital machines, a whole new generation had taken hold. If I mixed with it, I could not possibly catch up with new techniques, and I did not intend to look foolish’

Bush was fundamentally uncomfortable with digital electronics as a means to store material

Memex, Inheritance and Transmission

in ‘Memex II’, this project became grander, more urgent – the idea itself far more important than the technical details. He was nearing the end of his life, and Memex was still unbuilt.


Douglas Engelbart wants to improve the model of the human, to ‘boost our capacity to deal with complexity’ as a species

To understand what he means by ‘boost our capacity’ as a species, we must first grasp his philosophical framework. This is important for two reasons. Firstly, this framework profoundly influenced his own approach to invention in the ’60s and ’70s. Secondly, it represents a fascinating (and novel) theory of technical evolution

Engelbart believes that human beings live within an existing technical and cultural system, an ‘augmentation’ system.

There is no ‘naked ape’; from the moment we are born we are always already augmented by language, tools and technologies

Engelbart divides this system into two parts. One part has the material artefacts in it, and the other has all the ‘cognitive, sensory motor machinery’ (Engelbart 1999). ‘I called these the “tool system” and the “human system”’ (Engelbart 1988, 216). Together the tool system and the human system are called the ‘capability infrastructure

For Engelbart, the most important element of the ‘human system’ is language. As Thierry Bardini details in his seminal book on Engelbart, Bootstrapping (2000), he was heavily influenced early in his career by the work of language theorist Benjamin Lee Whorf. Whorf’s central thesis is that the world view of a culture is limited by the structure of the language this culture uses. Engelbart would eventually build a computer system that externalized the ‘networked’ structure of language. Language is a deeply interconnected network of relationships that marks the limit of what is possible

The human system invents itself to better utilize the tool system in a symbiotic relationship. This is the reverse of the liberal human perspective, where human culture exists prior to and separate from technology

But for Engelbart, this is an unbalanced evolution; until now, it has been the tool system that has been driving human beings

Engelbart thinks that we might be able to change this, to create a more ‘balanced’ evolution

He explicitly refused to take on board the dominant mantra of the engineering community: ‘easy to learn, natural to use’

the only way we can direct this evolutionary dynamic is to become conscious of the process itself; we have to become ‘conscious of the candidates for change – in both the tool system and the human system’

To locate the changes that will benefit mankind, one must focus on the human system – what our current social structures and methods are for doing things, and more important, how these might be boosted using elements from the tool system. Engineers should reflect on their own basic cognitive machinery first, specifically on where its limits reside. Language is the most important cognitive machinery. If we can effectively map this ‘pattern system’ and externalize it, we can begin to change the limits of what is possible. This reflexive technique is what he calls 'BootStrapping'

For Engelbart, there is also a structure to this acquired knowledge, a networked structure we should seek to preserve

Unlike Bush, however, Engelbart considers this basic structure derivative of language

This general-purpose concept structure is common to all who use a particular language. Consequently, it can be refined and elaborated into a shared network: we can improve on it

Bardini (2000) argues that Engelbart’s philosophy is consequently opposed to association; association relies on personal, haphazard connections that do not translate into abstract general principles. The connections Engelbart saw in language are more general, and the structure they carry is public property, not based purely on personal experience

According to Bardini, the idiosyncratic thinking implied by the technique of association ‘and assumed by most authors dealing with hypertext or hypermedia systems’ to be the organizational principle behind hypertext (2000, 38) is consequently inadequate to describe Engelbart’s system

Engelbart’s system imposed a general Hierarchical Structure on all content. Every item had a section header, and underneath that would be some text, and underneath that some subtext; in layman’s terms, the system imposed a treelike structure on all content. Though it may seem rigid and modularized at first blush, this structure provided many benefits. (cf. Structured Writing)

One of the most important aspects of NLS is that it allowed multiple different ‘views’ and ways of operating on these pieces of information

Although Nelson and van Dam admired Engelbart’s work, the hierarchy did not allow for the ‘freewheeling’ personal experience they were both after

According to Nelson, the most important reason for Engelbart’s imposed structure was that it facilitated collaboration. Everyone knew the order of things and worked within that structure. This was a crucial difference between NLS and Xanadu: ‘The difference between Doug Engelbart and me is that he sees the world in terms of harmony, and I see it in terms of disagreement. My systems were built with the express anticipation that we would be dealing with disagreement at every level’

in NLS you could link from the bottom of the hierarchy straight to the top if you felt like doing so. The system structure was hierarchical; the linking structure, which was separate from this hierarchical system structure, was associative
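That separation is worth making concrete: one structure gives every statement an address in a single tree, while a second, independent layer of associative links can jump between any two addresses – leaf straight to root included. A rough sketch of the two layers side by side (not NLS’s actual representation; addresses and content invented):

```python
# Two separate structures, as in NLS: a hierarchy that gives every
# statement a structural address, and an associative link layer that
# can connect any two addresses, e.g. the bottom of the tree to the top.
hierarchy = {
    "1":   "Section header",
    "1a":  "Some text under the header",
    "1a1": "A sub-statement under that text",
}

links = [("1a1", "1")]   # associative: a leaf linking straight to the top

def targets(addr):
    """Follow the associative links out of one statement."""
    return [hierarchy[dst] for src, dst in links if src == addr]

print(targets("1a1"))    # ['Section header']
```

Because the link layer never touches the tree, adding a link cannot disturb the ‘order of things’ that collaboration depended on.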

This structure also needed to be learned. Unlike Nelson, Bush and van Dam, Engelbart was not opposed to hierarchy and control, and he was not opposed to engineering what others might consider best left alone; this would be a controversial aspect of his system and his philosophy

How are we to prevent great ideas from being lost? Bush urged men of science to turn their efforts to making this great body of human knowledge more accessible. We cannot progress if we cannot keep track of where we have been. Some ideas are like seeds. ‘Or viruses. If they are in the air at the right time, they will infect exactly those people who are most susceptible to putting their lives in the idea’s service’

These three ‘flashes’ were to become the framework he worked from for the rest of his career

FLASH-3: Ahah-graphic vision surges forth of me sitting at a large CRT console, working in ways that are rapidly evolving in front of my eyes

He already had an image of screen-based interactivity that outlined what such a union might look like: Memex

Licklider, Engelbart and NLS

Working with the dimensional scaling of electronic components taught Engelbart that the power of digital computers would escalate. This drove home the urgency of creating a way for humans to harness this power. ‘If we don’t get proactive...

If the project was to have the kind of impact on the engineering community he wanted, he needed the support of his peers. Engelbart decided to write a conceptual framework for it, an agenda that the computing (and engineering) community could understand

Titled ‘A Conceptual Framework for the Augmentation of Man’s Intellect’ (Engelbart 1963), the paper met with much misunderstanding in the academic community and, even worse, angry silence from the computing community. It did not garner the peer support Engelbart was seeking

Fortunately, one of the few people who had the disciplinary background to be able to understand the new conceptual framework was moving through the ranks at the Advanced Research Projects Agency (ARPA). This man was J. C. R. Licklider, a psychologist from MIT.

In tune with cybernetic theory he envisioned the human being as a kind of complex system that processed information based on feedback from the environment (Licklider 1988, 132), and also had a theory of technical evolution that put the human in a ‘symbiotic’ relationship with technology, as the title of his 1960 paper indicates

Licklider began financing projects that developed thought-amplifying technologies

Engelbart’s project was not heavily funded by ARPA-IPTO until 1967, after Bob Taylor had taken over direction of IPTO

meet with the head of engineering at SRI... he said, ‘You don’t really think what they’re doing up there is science, do you?’ I think that reflected a lot of the official attitude towards what Doug was doing

The freshly outfitted laboratory, the ARC, began its work in 1965. It started with a series of experiments focused on the way people select and connect objects together across a computer screen. A section of the lab was given over to ‘screen selection device’ experiments

‘Someone can just get on a tricycle and move around, or they can learn to ride a bicycle and have more options’ (Engelbart 1999; also Engelbart 1988, 200). This is Engelbart’s favourite analogy. Augmentation systems must be learnt, which can be difficult.

The first timesharing system that the lab received, in 1968, was an SDS 940. After several attempts to find a fast, well-resolved display system, they produced their own

As Bill Duvall remembers: The thing that I would say distinguished NLS from a lot of other development projects was that it was sort of the first – I’m not sure what the right word is; ‘holistic’ is almost a word that comes to mind – project that tried to use computers to deal with documents in a two-dimensional fashion rather than in a one-dimensional fashion

The linking structure skipped across and between Engelbart’s hierarchical structure.

Consequently, between 1969 and 1971 NLS was changed to include an electronic filing arrangement that served as a linked archive of the development team’s efforts (Bardini 2000). This eventually cross-referenced over 100,000 items (Engelbart 1988). It was called the software Journal

The software Journal emphasized interoperability between ideas generated by individual professionals, and access to a shared communication and ‘thought space’ on the working subject. Because it evolved around the techniques and processes of the humans who used it, it was also a primary instance of ‘bootstrapping’.

We originally designed our Journal system to give the user a choice as to whether to make an entry unrecorded (as in current mail systems), or to be recorded in the Journal […] I eventually ordained that all entries would be recorded – no option. (Engelbart 1988, 213)

It was the evolving memory of the development team

Some participants did not want their notes stored, their speculative jottings immortalized or their mistakes remembered; this technology had real social effects

Then [in NLS] we had it that every object in the document was intrinsically addressable

Unlike the NLS object-specific address, the URL is simply a location on a server; it is not attached to the object itself. When the object moves, the address is outdated. You can never be sure an old link will work; on the Web, links are outdated, and information is replaced or removed daily. NLS was far more permanent
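The addressing difference can be sketched in a toy form. Everything below is a hypothetical illustration (the class names, `stmt-1a2` identifier and paths are invented), not the actual NLS or Web machinery:

```python
# Toy contrast between location-based addressing (Web-style URLs) and
# object-based addressing (NLS-style intrinsic ids). Illustrative only.

class LocationStore:
    """Web-style: an address is a path on a server. Moving the object breaks it."""
    def __init__(self):
        self.paths = {}                      # path -> content

    def put(self, path, content):
        self.paths[path] = content

    def move(self, old, new):
        self.paths[new] = self.paths.pop(old)

    def resolve(self, path):
        return self.paths.get(path)          # None once the object has moved


class ObjectStore:
    """NLS-style: every object carries an intrinsic id; location is looked up."""
    def __init__(self):
        self.objects = {}                    # object id -> content
        self.location = {}                   # object id -> current path

    def put(self, oid, path, content):
        self.objects[oid] = content
        self.location[oid] = path

    def move(self, oid, new_path):
        self.location[oid] = new_path        # the id, and any link to it, survives

    def resolve(self, oid):
        return self.objects.get(oid)


web = LocationStore()
web.put("/docs/a", "augment human intellect")
web.move("/docs/a", "/archive/a")
assert web.resolve("/docs/a") is None        # the old link is now broken

nls = ObjectStore()
nls.put("stmt-1a2", "/docs/a", "augment human intellect")
nls.move("stmt-1a2", "/archive/a")
assert nls.resolve("stmt-1a2") == "augment human intellect"  # link still works
```

The point of the sketch is only that a link bound to an object id survives relocation, while a link bound to a location does not.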

Not only did NLS create a new form of storage and a new way of working with computers, but it created a new technical paradigm: Software Engineering... Like the technical paradigm that formed around the Analyzer, NLS created new engineering techniques. The team found that when working with software, you need version control, and especially ways of integrating different versions together. You need to ‘attend’ to change requests, design reports, specifications and other descriptive documents (Irby 1974, 247), and, on top of that, to compile, fix, track and problem-solve the code itself

Throughout this book I have been emphasizing the importance of working Prototypes, and of people actually seeing technical solutions in action

the traditional knowledge-work dictum of ‘PublishOrPerish’ is replaced by ‘DemoOrDie’.

Engelbart took an immense risk and applied for a special session at the ACM/IEEE-CS Fall Joint Computer Conference in San Francisco in December 1968

Demonstration of the NLS Prototype

the organizational and presentational machinery used almost all the remaining research funds for the year

At the time, this was the largest computer science conference in the world

This was the first public appearance of the mouse and the first public appearance of screen splitting, computer-supported associative linking, computer conferencing and a mixed text/graphics interface. It proceeded without a hitch and received a standing ovation

the mother of all demonstrations (Mother Of All Demos)

Many writers claim the demo actually created the paradigm of human-computer interactivity: ‘the idea that the computer could become a medium to amplify man’s intellect became a tangible reality for the people in Engelbart’s audience’.

In the audience, Andy van Dam was ‘totally bowled over’ (2011) by the presentation. He had been thinking about a new version of HES for a while during 1968 because he’d ‘seen its limitations and “first pancake” mistakes’ (ibid.). It was immediately clear to him that NLS was a technical leap forward. In particular, van Dam saw in this demo the benefits of multiaccess computing, device independence and outline editing (van Dam 1999). These were later transferred to FRESS.

Another reason that the mouse, hypertext and the WIMP interface went on to become the dominant paradigm is that some members of Engelbart’s original NLS team jumped across to Xerox PARC.

In his book on Engelbart and the origins of personal computing, Thierry Bardini concludes that ‘internal disillusionment and external disregard sealed the fate of Engelbart’s vision and led to his relative failure’.

Duvall does not think, in his own case, that the decision was simple. He was reluctant to put anything on the record about the subject during our interview, but would say this:

I think […] it wasn’t just a case of people going en masse to another place, but rather it was a case of people responding to the changing focus of the group. When you change the focus of something you want different personnel.

Bill English also felt it was time to do something different. There was not a lot going on at ARC from a technical perspective; development had finished. English felt it was ‘sort of coasting’. He said in our 2011 interview: I think Doug wanted to move on based on his vision, based on the NLS system, on the idea of co-evolution and so forth. I felt it was time to break out of that and build independent tools

NASA support ended by 1969, and although ARPA continued their support until 1977, lack of funds crippled the project

Corporations that were interested in ‘augmenting’ did not wish to hire a small army of system operators, and wanted only a few of NLS’s functions.

ARPA support continued to dwindle, and SRI sold the entire augmentation system to Tymshare Corporation in 1977. The system, named ‘Augment’, was marketed as a package by Tymshare and adapted to the business environment through the early 1980s.

What We Have Inherited: NLS, Vision and Loss

Which brings us to the next chapter – my favourite chapter and the heart of this book: the Magical Place of Literary Memory, Xanadu.


Like the Web, but much better: no links would ever be broken, no documents would ever be lost, and copyright and ownership would be scrupulously preserved

two-way links

hypertext pioneer Theodor Holm Nelson (Ted Nelson), who dubbed the project Xanadu in October 1966

he started thinking about what he calls ‘profuse connection’, the interconnections that permeate life and thought. How can one manage all the changing relationships? How can one represent profuse connection?

Nelson also has a theory about the inheritance and transmission of human knowledge

This corpus is constantly shifting and changing; like biological life, it is evolving. It is a ‘bundle of relationships subject to all kinds of twists, inversions, involutions and rearrangement: these changes are frequent but unpredictable’.

Nelson has, however, released numerous products along the way – including the multidimensional organizing system ZigZag (1996), and more recently Xanadu Space (2007), a system for working with parallel documents (both are available for download on the Web). Neither of these applications is a globe-spanning archive or publishing system, however.

I have chosen to divide this chapter into two main parts. The first is the evolution of the vision, which is not at all straightforward; the second is an explanation of the Xanadu system itself

Ideas and Their Interconnections: The Evolution of the Idea

In 1960, at Harvard University, he took a computer course for the humanities, ‘and my world exploded’.

One of the first ideas was based on his own ‘terrible problem’ keeping notes on file cards. The problem was that his cards really needed to be in several different places at once.

Then each project or sequence would be a list of those items

he conceived of what he called ‘the thousand theories program’, an explorable Computer Assisted Instruction program that would allow the user to study different theories and subjects by taking different trajectories through a network of information. ‘This idea rather quickly became what I would eventually call HyperText’.

This revised design combined two key ideas: side-by-side intercomparison and the reuse of elements (Transclusion).

  • He called this system ‘ZipperedLists’.

Both of these ideas would make their way into Xanadu in some form, but the zippered list in particular would eventuate in a ‘deliverable’ 30 years later: ZigZag.

Crucially, the design also got him published. The word ‘HyperText’ appeared in print for the first time in 1965 after Nelson presented the concept at two conferences

Also in the mid-1960s, Nelson coined the terms ‘HyperMedia’ and ‘hyperfilm’ – terms that employed the same ideas behind hypertext and were meant to encompass image and film: Nelson always meant ‘hypermedia’ when he said ‘hypertext'.

The benefit of a global hypertext system would be ‘psychological, not technical’.

The key ideas that had made their way into Nelson’s vision by 1965 (Nelson maintains these were present for him by 1960, and merely published in 1965) were historical backtracking (Version Control), links and transclusion

Links as Nelson saw them were deeply tied to sequence

Everyone from Stewart Brand to Alan Kay seems to have been ‘inspired’ by Nelson, but nobody has built the design exactly as he wants it. ‘Nobody is building my/our design! They are building things vaguely like hearsay about the design!’

Engelbart gives equal credit to Nelson for discovering the link. They were both working on similar ideas at the same time, Engelbart told me in 1999, but he had the facilities and funding to build a machine that demonstrated those ideas (Engelbart 1999). As an engineer Engelbart was more concerned with constructing the tool system than theorizing it, more aware of how ideas will change qua technical artefact

Nelson did not stay to work for Engelbart in 1967, though Engelbart ‘half-invited’ him. Nelson asked for a job, and ‘Doug said, “Well, we need a programmer right now”’ (Nelson 2010c). Nelson considered this for a moment and then declined. He didn’t think he could teach himself to program quickly enough. Also, at a deeper level he felt that NLS, though brilliant at what it did, was too hierarchical

NLS was designed to boost Work Groups and make them smarter; it evolved around the technical activities of a group of engineers. For this reason, NLS emphasized keyboard commands, workflow and journaling. Xanadu was intended, like Bush’s Memex, to be a very personalized system – to empower the individual. (PIM)

To Nelson, links were not just part of an augmentation toolbox; they were the essence of a revolution – an ideological revolution. Literature need no longer be linear. We don’t have to read the same books in the same order. We don’t have to learn at someone else’s pace and in someone else’s direction. Hypertext furnishes the individual with choices: ‘YOU GET THE PART YOU WANT WHEN YOU ASK FOR IT'

One of the most important collaborators was Roger Gregory, then a science fiction fan working in a used-computer store in Ann Arbor, Michigan

Gregory says he got a group together at Swarthmore and designed a system that he ‘almost had working’ by 1988, when he organized funding through Autodesk

Gregory claims that by 1983 he wanted to ‘get some work done’ on Xanadu without Nelson interfering (‘Ted can be very distracting. [He] is really brilliant but…’ (Gregory 2010)). So Gregory set the company up so that ‘we had Ted in Sausalito and us in Palo Alto’ – a significant car drive away (Gregory 2010). He also signed an agreement with Nelson that gave him the ‘technical rights’ in 1983 – the right to do the development and find backing (which he did five years later, through Autodesk).

Nelson has a different version of events. He believes the Autodesk collapse happened because the Xanadu Operating Company (XOC, the company Gregory was in charge of) tried to ‘change horses mid-stream’ (Nelson 2010c, 285). Gregory’s team rebelled in 1988 and decided they wanted to redesign the system based on completely different principles. From Gregory’s perspective this was madness; they almost had the thing working at this stage. Nelson, Gregory and Gregory’s team were all at odds with one another. Gregory lost the battle and was demoted to programmer, and the team threw out his code. They set out to create a completely different Xanadu.

Autodesk eventually pulled the plug on XOC in 1992, having spent up to $5 million trying to develop it.

The Xanadu System

Nelson is proposing an entirely new ‘computer religion’. This religion attempts to model an information system on the structure of thought and the creative process behind writing, ‘if we can figure out what that is’ (Nelson 1993, 2/5). One thing that we do know, however, is that the nature of thought is change. Consequently, maintains Nelson, a system that is true to thought should be able to retrieve and track changes regardless of the current state of the document

In our 1999 interview Nelson hit upon the sentence he had ‘been looking for for years’ to explain the design in a nutshell:

Xanadu is a system for registered and owned content with thin document shells, reusable by reference, connectable and intercomparable sideways over a vast address space

The Web is not Xanadu, and embedded markup is one of the reasons

Like all grand visions, Xanadu has its critics

One of the most common criticisms is that it is a pipe dream

‘They always had too much data to move in and out of memory’, he says, referring to the 128k machines that Nelson’s programmers were working with in the early 1970s.

With respect to the technical infeasibility of Xanadu at least, Gary Wolf was wrong. Nelson claims this is because he ‘seriously garbled the idea of transclusion’ (Nelson 1995b). I contend it is because Wolf thought the Xanadu system required vast quantities of data to be moved in and out of computer memory. As we have explored, the idea behind Xanadu is to point at the data and then re-create pieces of it as a virtual object

Integral to this idea of pointing at bits of a document rather than storing multiple copies of it in memory is the concept of transclusion

Remote instances remain part of the same virtual object, wherever they are. This concept underpins Nelson’s most famous commercial feature, transcopyright
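As a rough sketch of the idea (the names `pool`, `span` and `render` are invented for illustration; this is not Xanadu’s actual data model), a transcluding document can be modelled as a list of pointers into a shared content pool, so quoted material is included by reference rather than copied:

```python
# Toy sketch of transclusion: a document is a list of (source, start, end)
# pointers into a shared content pool. Rendering follows the pointers to
# re-create the virtual document; nothing is ever duplicated.

pool = {"doc-A": "We are associative creatures; that is what we do."}

def span(source_id, start, end):
    """A pointer to a slice of some source document."""
    return (source_id, start, end)

def render(spans):
    """Re-create the virtual document by resolving each pointer."""
    return "".join(pool[sid][start:end] for sid, start, end in spans)

# doc-B transcludes a phrase from doc-A plus its own connective text.
pool["doc-B-own"] = "Joyce observed that "
doc_b = [span("doc-B-own", 0, 20), span("doc-A", 7, 28)]

assert render(doc_b) == "Joyce observed that associative creatures"
```

Because `doc_b` only points at doc-A, the transcluded phrase remains part of the same underlying object; that identity is what makes per-use accounting (transcopyright) conceivable at all.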

Necessarily, a mechanism must be put in place to permit the system to charge for use, a Micro-Payments system

At the same time, the bridge must leave no trace of who bought the pieces, as this would make reading political

Tim Berners-Lee shared Nelson’s ‘deep connectionist’ philosophy, and his desire to organize information associatively.

while Berners-Lee was met with scepticism and passivity, Nelson – with his energetic and eccentric presentations and business plans – received entirely disparaging responses (Segaller 1998, 288)

Unlike with Bush’s Memex, people keep trying to build the thing as it was first designed.


It is said that the character of Andy in Toy Story was inspired by (Andries) Andy van Dam, professor of computing science at Brown University; several of the Pixar animators were students of van Dam’s and wanted to pay tribute to his pioneering work in computer graphics.

the book he coauthored with James Foley, Computer Graphics: Principles and Practice, appears on Andy’s bookshelf in the film.

In this chapter I trace the development of two important hypertext systems built at Brown: the Hypertext Editing System (HES), codesigned by Ted Nelson and van Dam and developed by van Dam’s students, and the File Retrieval and Editing System (FRESS), designed by van Dam and his students

I focus here on van Dam’s work, but also briefly explore two other systems developed at Brown: the Electronic Document System (EDS), developed under van Dam’s leadership, and Intermedia, developed under Norman Meyrowitz’s leadership (Meyrowitz would go on to be president of Macromedia Inc.).

HES was ‘effectively a word processor with linking facilities’ that ‘led directly to the modern WordProcessor’ (Nelson 2011), which from Nelson’s perspective is where it all started to go wrong. This is when the world started to sink ‘into the degeneracy of paper simulation’ (Nelson 2011).

HES was the world’s first word processor to run on commercial equipment – ‘the first “visual word processor”’.

HES was also the first hypertext system that beginners could use, and it pioneered many modern hypertext concepts

Like Engelbart, the HES team encountered much resistance to their ideas; the world was not ready for text on a screen. This explains why van Dam emphasized print text editing from the outset, much to Nelson’s dismay: they were trying to sell hypertext to people who used typewriters.

Van Dam’s Early Work

His students from the 1960s, 1970s and 1980s have gone on to make important contributions of their own; among the more famous are Randy Pausch, professor of computing science at Carnegie Mellon (who lost his battle with pancreatic cancer several years ago), and Andy Hertzfeld

Van Dam wrote his PhD thesis in 1966, the second computer science PhD in the world, on the ‘digital processing of pictorial data’ using a system that simulated associative memory

The data model on which van Dam did his doctoral work was called MULTILIST, developed by Professors Noah Prywes and Josh Gray at the University of Pennsylvania. It simulated a multilinked list system, based on language keywords. As we see in the chapter on StorySpace, linked lists were one of the earliest (and most obvious) ways of implementing hypertext; this function was possible in many of the early computing languages, including Lisp, and later Pascal.
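A minimal sketch of the multilinked-list idea (class and field names are invented for illustration; this is not the original Penn implementation): each record is threaded onto one chain per keyword, so the same item is reachable through several associative lists at once:

```python
# Toy multilinked list: every record carries one 'next' pointer per
# keyword it is indexed under, so one record sits on several chains.

class Record:
    def __init__(self, text, keywords):
        self.text = text
        self.next = {kw: None for kw in keywords}  # one link per keyword

class MultiList:
    def __init__(self):
        self.heads = {}                 # keyword -> first record on that chain

    def insert(self, text, keywords):
        rec = Record(text, keywords)
        for kw in keywords:
            rec.next[kw] = self.heads.get(kw)      # push onto each keyword chain
            self.heads[kw] = rec
        return rec

    def retrieve(self, keyword):
        """Walk the chain for one keyword, collecting record texts."""
        rec, out = self.heads.get(keyword), []
        while rec is not None:
            out.append(rec.text)
            rec = rec.next[keyword]
        return out

ml = MultiList()
ml.insert("memex as associative store", ["memory", "association"])
ml.insert("NLS journal", ["memory"])
assert ml.retrieve("memory") == ["NLS journal", "memex as associative store"]
assert ml.retrieve("association") == ["memex as associative store"]
```

The associative flavour comes from the overlap: following the "memory" chain and the "association" chain leads to some of the same records by different routes.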

In his senior year in college, van Dam worked on early AI projects involving perceptrons, a simple type of neural network first invented at Cornell in 1957. These were later ‘debunked in a very scathing analysis by Marvin Minsky and Seymour Papert, so they sort of went out of fashion’ (van Dam 1999), at least until reincarnated as the more sophisticated neural networks in use today.

I [also] wanted a system for writing articles, class notes, syllabi, proposals, etc., and it was obvious to me that online editing was the way to go. But once a document is created, you can’t distribute it without going to paper. At the time there was no online community, and most people had no visual displays, certainly no Internet. Thus the print medium was the only way to distribute and share documents.

Theories and models of human thought aside, I have been arguing that NLS’s hierarchical structure was based as much on the fact that the system itself was multiuser, and that it was the centrepiece of (the first) large-scale Software Engineering project, as it was on Engelbart’s conception of the structure of language and thought. The technical process of tracking code versions and reconciling these with design specifications, bug reports and change requests within a team of engineers does not allow for ‘unstructured’ or personalized associations.

Technology already existed, however, for editing text on a computer; it was called a ‘line’ or ‘context’ editor (van Dam 1999). This system was designed for writing or editing computer programs, but it was often used covertly to create documentation

However we slice it, van Dam was not aware of the groundbreaking work that Engelbart was doing in 1966 and 1967. What he was aware of, however, was that an editing system that sufficed for computer programmers would not fulfill the needs of writers and scholars

According to Nelson, who refers to van Dam as ‘von Drat’ in his autobiography (I have been editing this out in quotes here), van Dam invited him ‘to come to Brown to “implement some of your crazy ideas”’ (Nelson 2010b). Nelson claims in his omitted chapter that this experience cost him both time and money (Nelson 2010b). Van Dam, however, recalls that Nelson entered into this collaboration willingly, that it was an unfunded ‘bootleg’ project from the beginning (van Dam hadn’t even told his sponsors) and that in return he got a team of ‘eager beavers’ willing to try to implement a small part of his vision.

Van Dam gathered a team and began work. He stressed in his communications to me that the idea was never to ‘realize’ Xanadu. The intention was much smaller and more circumspect: to ‘implement a part of his vision’.

They were pioneering computer science courses, the only ones offered at Brown at the time; van Dam was trying to build computer science at Brown and to create a tradition of using students to do research work (like programming HES). To do that, he first had to convince students that computing science was a good idea, and then teach them how to program. He wanted at least 30 people ‘who were brave or foolish enough to sign up’ for this suite of subjects (Lloyd 2011), so he did something very clever. He introduced Ted Nelson at an introductory lecture.

From van Dam’s perspective, even with a name that included ‘editing’, it was still a hard sell. He recalls his chairman at the time saying, ‘why don’t you stop with all this hypertext nonsense, and do something serious?’

IBM, however, thought the project serious enough to provide funding through a research contract. This commitment, recalls van Dam, put the project on much more legitimate ground and ensured that the undergraduates who had been programming HES as a bootleg graphics project were then paid for their efforts

Our philosophical position [was] essentially that the writer is engaged in very complicated pursuits, and that this work legitimately has a freewheeling character […] therefore it became our intent to provide the user with unrestricted ‘spatial’ options, and not to bother him with arbitrary concerns that have no meaning in terms of the work being performed

HES stored text as arbitrary-length fragments or ‘strings’ and allowed for edits with arbitrary-length scope (for example, insert, delete, move, copy). This approach differed from NLS, which imposed fixed-length lines or statements upon all content; Engelbart’s 4,000-character limits created a tighter, more controlled environment

The system itself comprised text ‘areas’ that were of any length, expanding and contracting automatically to accommodate material. These areas were connected in two ways: by links and by branches. A link went from a point of departure in one area (signified by an asterisk) to an entrance point in another, or the same, area

Branches were inserted at decision points to allow users to choose ‘next places’ to go
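The areas/links/branches model described above might be sketched like this (a toy illustration with invented names; HES’s real internal representation is not given in the text):

```python
# Toy model of HES's document structure: variable-length text 'areas'
# connected by links (from a marked departure point to an entrance point
# in some area) and by branches (menus of 'next places' at decision points).

class Area:
    def __init__(self, text=""):
        self.text = text        # arbitrary length; grows and shrinks freely
        self.links = {}         # departure offset -> (target area, entrance offset)
        self.branches = {}      # decision offset -> list of candidate next areas

intro = Area("Hypertext lets the reader choose a path. *")
detail = Area("A link jumps from an asterisk to an entrance point elsewhere.")

# A link from the asterisk at the end of `intro` to the start of `detail`.
intro.links[len(intro.text) - 1] = (detail, 0)

# A branch at the end of `intro` offering two 'next places'.
ending_a, ending_b = Area("...one continuation."), Area("...another.")
intro.branches[len(intro.text)] = [ending_a, ending_b]

target, entrance = intro.links[len(intro.text) - 1]
assert target is detail and entrance == 0
assert len(intro.branches[len(intro.text)]) == 2
```

Note how a link here is point-to-point within free-form text, while a branch is an explicit fork: the two mechanisms the text distinguishes.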

The system was much easier to use than NLS, perhaps because it was created as much for writers as for engineers

In early 1968 HES did the rounds of a number of large customers for IBM equipment, for example, Time–Life (Time Magazine) and the New York Times. All these customers based their business on the printed word, but HES was too far out for them. Writing was not something you did at a computer screen

In late 1968 van Dam finally met Doug Engelbart and attended a demonstration of NLS at the Fall Joint Computer Conference. As we explored in Chapter 4, this was a landmark presentation in the history of computing (Mother Of All Demos)

He went on to design the File Retrieval and Editing System (FRESS) at Brown with his team of star undergraduates and one master’s student. As van Dam observed in the Hypertext 1987 conference keynote address:

[My] design goal was to steal or improve on the best ideas from Doug’s NLS and put in some things we really liked from the Hypertext Editing System – a more freeform editing style, no limits to statement size, for example. (van Dam 1988)

Meanwhile, the HES source code was submitted to the IBM SHARE program library. Van Dam proudly recalls that it was used in NASA’s Houston Manned Spacecraft Center for documentation on the Apollo space programme (van Dam 1987). For what it was designed to achieve, HES performed perfectly

Nelson is adamant that the legacy of HES is modern word processing, and that it also led to today’s Web browser

For his part, Tim Berners-Lee claims in his autobiography that he had seen DynaText, a later commercial electronic writing technology that van Dam helped launch after HES (see DeRose 1999), but that he didn’t transfer this design to HTML.

HES, as a first prototype, naturally had its shortcomings. Van Dam’s next system, FRESS, was designed to improve on them

HES was programmed specifically for the IBM System/360 and the 2250 display; there was no device independence.

Second, HES wasn’t multiuser

Although FRESS ‘didn’t have the kinds of chalk-passing protocols that NLS had’ (van Dam 1999) – in NLS, for instance, multiple users could work with a shared view of a single document in progress – it was designed from the outset to run on a timesharing system and to accommodate multiple displays of different types and capabilities

But the most popular development for novice users in FRESS was not its capacity to accommodate multiple displays and users; it was the ‘UnDo’ feature – the feature of which van Dam is most proud (van Dam 2011). FRESS pioneered a single-level undo for both word processing and hypertext. Every edit to a file was saved in a shadow version of the data structure, which allowed for both an ‘AutoSave’ and an undo
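A single-level shadow undo of this kind can be sketched in a few lines (an illustrative toy, not the actual FRESS data structure): every edit first snapshots the current state into a shadow copy, which doubles as an autosave point.

```python
# Toy single-level undo via a shadow copy, in the spirit of FRESS's
# 'UnDo': edits snapshot the old state; undo swaps state and snapshot.

class Buffer:
    def __init__(self, text=""):
        self.text = text
        self.shadow = text          # last saved state; exactly one level deep

    def edit(self, new_text):
        self.shadow = self.text     # snapshot before the change (autosave)
        self.text = new_text

    def undo(self):
        # Swapping makes undo its own inverse: a second undo redoes.
        self.text, self.shadow = self.shadow, self.text

buf = Buffer("first draft")
buf.edit("second draft")
buf.undo()
assert buf.text == "first draft"    # the edit is reverted
buf.undo()
assert buf.text == "second draft"   # only one level: undoing twice redoes
```

The swap trick is why the feature is *single-level*: there is no history stack, just one shadow, so the second undo brings the edit back.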

In NLS, a link took you to a statement. In FRESS it could take you to a character

Another aspect of FRESS that the Web has not implemented is bidirectional linking (Two Way Links). HES had unidirectional links, which the FRESS team decided to change. FRESS was the first hypertext system to provide bidirectional linking
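Bidirectional linking is straightforward to sketch: creating a link registers it at both endpoints, so either end can enumerate its connections. (The structure below is hypothetical, for illustration; the text does not describe FRESS’s actual link records.)

```python
# Toy two-way links: a link is recorded at its source AND its target,
# unlike the Web's one-way anchor, which only the source knows about.

class Node:
    def __init__(self, name):
        self.name = name
        self.links_out = []   # links departing from this node
        self.links_in = []    # links arriving here, visible without crawling

def link(src, dst):
    src.links_out.append(dst)
    dst.links_in.append(src)  # the reverse record the Web's <a href> lacks

poem = Node("poem")
commentary = Node("commentary")
link(commentary, poem)

# From the poem we can find everything that points at it:
assert [n.name for n in poem.links_in] == ["commentary"]
assert [n.name for n in commentary.links_out] == ["poem"]
```

On the Web, answering ‘what links here?’ requires crawling the whole network; with the reverse record it is a local lookup.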

The OutLining functionality in FRESS was inspired by NLS – or as van Dam put it, ‘was a straight rip-off’.

FRESS actually displayed and handled complex documents better than nonhypertext ‘word processing’ systems of the time. It was so intuitive and efficient that it ‘was used as a publishing system as well as a collaborative hypertext environment for teaching, research and development’.

But FRESS was not a runaway success at Brown, and the project received little financial support

In 1976 the National Endowment for the Humanities supported a FRESS application for teaching English poetry (Catano 1979). The FRESS team, and particularly van Dam, had wanted to use the system explicitly for teaching since its inception

Hypertext at Brown: The Electronic Document System (EDS) and Intermedia

Van Dam’s next project at Brown, his third hypertext system, was called the Electronic Document System (EDS) or the Interactive Graphical Documents system (IGD). This tool was designed for presenting predominantly graphical documents, something that had interested him for over a decade

Like FRESS, EDS also provided a variety of coordinated views, but it went further by allowing colour graphic representation of the information web

We spent a lot of time figuring out how to elide, that is, hide, information graphically. We had a ‘detail button’ that let us view things at varying levels of detail. So the author could move these windows around, look at pages and chapters at arbitrary levels of detail, iconically create various kinds of buttons and specify actions to take place when the reader invoked a button. Such actions could include animation and taking a link to another page

In 1983 Andries van Dam, William S. Shipp and Norman Meyrowitz founded the Institute for Research in Information and Scholarship (IRIS) at Brown. Their most notable project was Intermedia, a networked, shared, multiuser hypermedia system explicitly designed for use within university research and teaching environments. Intermedia was started in 1985 and sponsored by the Annenberg/CPB project and IBM.

It was used in several undergraduate courses at Brown from 1986, and some of the course material survives to the present day. For example, the Victorian Web, a resource hypertext originally created by George Landow in Intermedia and later ported to StorySpace and the Web, is still thriving online.

It was originally written for IBM workstations, then ran on A/UX

Like many of the hypertext projects from the mid- to late 1980s, it was also programmed in Mac Pascal and made use of the Macintosh Toolbox (though some of these were in emulation mode), which provided the ‘best direct manipulation interface’.

Hypertext and the Early Internet


Michael Joyce has kept a journal for many years. Before he begins to write, he inscribes the first page with an epigram: Still flowing. As anyone who has read Joyce’s fictions or critical writing will attest, his work is replete with multiple voices and narrative trajectories, a babbling stream of textual overflow interrupted at regular intervals by playful, descriptive whorls and eddies. If there is a common thread to be drawn between his hyperfictions, his academic writing and his novels, then it is this polyglot dialogue, as Robert Coover terms it, a lyrical stream of consciousness

I wanted, quite simply, to write a novel that would change in successive readings and to make those changing versions according to the connections that I had for some time naturally discovered in the process of writing and that I wanted my readers to share

That novel would become afternoon (Joyce 1987), the world’s first hypertext fiction

The development of StorySpace is, at least in part, the story of Joyce’s quest to find a structure for what did not yet exist, or as he wrote in the Markle Report, to find ‘a structure editor […] for creating “multiple fictions”’

As Jay David Bolter put it, ‘Electronic symbols […] seem to be an extension of a network of ideas in the mind itself’ (Bolter 1991, 207; see also Joyce 1998, 23). Storyspace is no exception; Joyce intuitively felt that stories disclose themselves in their connectiveness, and that ‘we are associative creatures. That’s what we do’.


In every interview Joyce has given about afternoon or the development of Storyspace, the leitmotif of a ‘story that changes each time you read it’ returns like a Wagnerian melody

In our interview, he actually corrected my phrasing: ‘Forgive this, this is like an English teacher. It’s not every reading in the sense of every different reader, but I said each reading, and I had in mind readers who would go back to a text again and again’.

In the early ’80s, as a classics professor working at the University of North Carolina, Chapel Hill, Jay David Bolter was also thinking about the computer as a writing space, and about the connections that hold between ideas.

Consequently, when he started working with Joyce in 1983, Bolter had been thinking about the relationship between computing and classical scholarship for a few years. Joyce remembers he ‘felt drawn to Jay’s vision because he was concerned with questions of how the epic narrator, how the Homeric narrator, adjusted stories so they changed’.

Both Joyce and Bolter spent a year as visiting fellows at the Yale AI Project, Bolter from 1982 to 1983, and Joyce from 1984 to 1985

Joyce, in particular, was influenced by the ideas of the indomitable Roger Schank, Director of the Yale AI Project and author of a seminal book in AI called Dynamic Memory, first published in 1982

He also spent a couple of years corresponding with Natalie Dehn, a researcher at Schank’s lab, who used the term extensively in her work on story generation programs

Bolter published a book shortly after his fellowship at Yale that would become a classic in computing studies, Turing’s Man: Western Culture in the Computer Age (1984). In Turing’s Man, he sets out to ‘foster a process of cross-fertilization’ between computing science and the humanities and to explore the cultural impact of computing (Bolter 1984, xii). He also introduces some ideas around ‘spatial’ writing that would recur and grow in importance in his later work: in particular, the relationship between the early Greek system of memory loci (the art of memory) and electronic writing

From the outset, the nodes in Storyspace were called ‘writing spaces’.

It worked explicitly with topographic metaphors, incorporating a graphic ‘map view’ of the link data structure from the first version, along with a tree view and an outline view (which are also visual representations of the data).

Many of the early pre-Web hypertext fictions were written in Storyspace

Those hypertexts that were written in Storyspace had a strange ‘preoccupation with the “topography” of hypertext’ (Ciccoricco 2007, 197). The metaphor also seemed to stick in hypertext theory for many years; Joyce (1998), Dickey (1995), Landow (1992), Nunes (1999) and Johnson-Eilola (1994), to name but a few of the ‘first wave’ theorists, explicitly conjure images of exploration and mapmaking to describe the aesthetics of hypertext, due in no small part to Storyspace and to Bolter’s book Writing Space

we would like to have the most all-encompassing, richest sensory experience possible, while ‘forgetting’ that this experience has been shaped in advance by the media that enable it

GLOSSA was written for the IBM PC in Pascal, one of the high-level development languages in use at the time. The precursors of Storyspace, TALETELLER and TALETELLER 2, and the original Storyspace itself were all written in Pascal – and by 1985, when both Joyce and Bolter were using Macs, Apple had its own Mac Pascal.

Joyce in fact recalls they were trying very hard to avoid hierarchical structures (2011a). Human memory is fallible, though, and data models are very hard to trace at any rate. They always reflect something else, popping up everywhere – like the grin of the Cheshire cat, as McAleese points out

Around this time, Joyce started preparing what he called ‘pseudocode’ to explain his ideas, a delightful mash of poetic handwritten text, symbols and logical notations

Joyce’s ‘urge toward a novel that changes each time it is read’ (Joyce 1998, 181) inspired the most distinctive feature of Storyspace: guard fields

Guard fields control a reader’s experience of the text based on where they have been. This is done by placing conditions on the activation of links, so that certain paths can’t be taken until the reader has visited a specific node. The reader’s experience is thus literally as well as metaphorically shaped by the path already taken, enabling Joyce to repeat terms and nodes throughout the work and have them reappear in new contexts
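The mechanism is simple to sketch. Below is a minimal, hypothetical illustration of the guard-field idea – not Storyspace’s actual implementation; the class names and node titles are invented for the example. Each link carries a guard listing nodes that must already have been visited, so the same node can offer different exits on a later reading.

```python
class Link:
    def __init__(self, target, requires=()):
        self.target = target           # destination node name
        self.requires = set(requires)  # nodes that must already be visited

class Reader:
    def __init__(self, start):
        self.visited = [start]

    def available(self, links):
        """Links whose guards are satisfied by the path taken so far."""
        seen = set(self.visited)
        return [ln for ln in links if ln.requires <= seen]

    def follow(self, link):
        self.visited.append(link.target)

# The same node offers different exits once its guard is satisfied:
links = [Link("she"), Link("white afternoon", requires=("she",))]
r = Reader("begin")
first = [ln.target for ln in r.available(links)]   # only the unguarded link
r.follow(links[0])
second = [ln.target for ln in r.available(links)]  # guard now satisfied
```

On a first pass only the unguarded link is offered; after ‘she’ has been visited, the guarded link appears as well – which is exactly what lets a repeated node ‘reappear in new contexts’.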

In 1985 Bolter became involved with an interdisciplinary research group at UNC directed by a colleague from computer science, John B. Smith.

Textlab was a very productive group, and they developed a number of hypertext systems between 1984 and 1989. Primary among them was the Writing Environment (WE), the system with which Smith was most closely associated. This expository writing system was contemporaneous with Storyspace

Unlike Bolter and Joyce, Smith and the rest of the WE team were ‘not interested in prose, in fiction’ (Smith 2011); WE was designed to produce paper documents for professionals. Users could represent their ideas as nodes, move them into ‘spatial’ clusters and link them into an Associative Network

it became apparent to Smith that:

Jay was more interested in the Macintosh hardware, and he was also interested in applying these ideas to literature and a literary context. So Jay at some point said ‘Okay, I think I’m going to set off and do an alternate system’, probably incorporating a lot of the ideas that were floating around in this group, and also, of course, creating new ideas of his own

Joyce and Bolter’s next implementation, again in Pascal, was called TALETELLER 2. They both emphasized the influence of the Macintosh user interface

In 1985 the team dubbed the program Storyspace, a name chosen to reflect this metaphor. Also in 1985, they received a grant from the Markle Foundation ‘to study methods for creating and presenting interactive compositions […] by developing a microcomputer-based “structure editor”’.

Joyce did, however, have a copy of afternoon to present and distribute one year later at the Hypertext ’87 conference at UNC Chapel Hill, November 13–15. If hypertext had a coming-out party, then it was this first ACM hypertext conference

Apple presented HyperCard with much pomp and ceremony, but it was met with an undertone of disdain (as Joyce recalls it); the feeling was ‘we all knew systems that had a good deal more functionality, like FRESS, and we sort of resented being told, “here’s hypertext”’

There in one room: Ted Nelson’s Xanadu, Engelbart’s NLS/Augment, Walker’s Symbolics Document Examiner, Joyce and Bolter with Storyspace, [Bernstein’s] Hypergate, Meyrowitz and Landow and Yankelovich and van Dam with Intermedia

Along with Stuart Moulthrop’s Victory Garden and Shelley Jackson’s Patchwork Girl, afternoon is arguably the most important hypertext fiction in the history of computing

Together they approached Brøderbund Software, Inc., an American software and game manufacturer best known for the Carmen Sandiego and Galactic Empire games. According to Kirschenbaum, who went through the correspondence between Joyce and Brøderbund, the deal was eventually dropped in 1989 because of ‘what was perceived to be a weakening in the software market, as well as lingering confusion over what the tool did and who its potential audience was’ (Kirschenbaum 2008, 176). After Brøderbund, Joyce entered into conversation with Mark Bernstein, the founder of Eastgate Systems. Eastgate signed a contract licensing Storyspace in December 1990 and has distributed it since that time

The reason Storyspace has survived so long is fairly simple: Eastgate Systems has been maintaining it.

Eastgate has also been criticized for the dominance of its titles in the hypertext canon, and the fact that the company sells these titles for a profit

in 1990 Bernstein stood before the European Conference on Hypertext and uttered the famous first words, ‘Where are the hypertexts?’ – a question that is arguably still just as urgent. Eastgate was set up to provide an answer to that question

Bernstein also developed his own hypertext system, Hypergate, which was operational by 1988. Hypergate is noteworthy because it introduced the concept of ‘breadcrumbs’ – showing whether a link takes you back to a place you’ve already seen. Bernstein invented this concept, and it eventually made its way into the Mosaic browser (Eastgate never implemented breadcrumbs in Storyspace)
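The breadcrumb idea is easy to state precisely. A rough sketch, with a hypothetical function and invented node names (not Hypergate’s actual code): a breadcrumb is just a per-link flag computed from the reader’s history, used to mark links that lead back to familiar ground.

```python
def annotate(link_targets, visited):
    """Pair each outgoing link target with a breadcrumb marker:
    True if the link returns to a node already seen."""
    return [(target, target in visited) for target in link_targets]

# A reader who has seen 'home' gets that link flagged on the way out:
marks = annotate(["home", "garden", "home again"], visited={"home"})
```

The Web’s visited-link styling is the distant descendant of the same idea: annotation of the interface by the reader’s own history.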

So important was Storyspace (and Eastgate) to the early development of the field, argues Katherine Hayles, that works created within Storyspace have come to be known as the Storyspace School

Bernstein extended the program in 1995 to enable users to export HTML templates, but as Katherine Hayles points out, ‘the limitations of Storyspace as a web authoring program are significant’.

The relationship between Storyspace and the Web has never been entirely easy, in part because the Storyspace link model is richer than the Web’s and it turns out that for the sorts of things people want to do in Storyspace the richness matters a lot […] Guard fields are invaluable for large-scale narrative, and we have not come up with an alternative [on the Web].

Although Storyspace continues to be used to produce standalone works, ‘it has been eclipsed as the primary web authoring tool for electronic literature’.

Bolter has a rather pessimistic view of the Web’s impact on networked literature, and on the legacy of Storyspace

Selling hypertext becomes immensely more difficult when literature is everywhere given away for free. As Bolter points out, the literary community ‘often revolted against this, because it was so easily commercialized’.

Looking back on its history, one of the most interesting aspects of Storyspace is the fact that it bridged the gap between humanities and computing science so early, and that it was taken seriously by both fields. This was largely due to the fact that Bolter ‘had a seat in both worlds’.


We have inherited more than just technical designs from the history of hypertext – we have inherited works of literature

The similarities between poststructuralist theory and hypertext were eagerly unpacked by Landow, Joyce, Lanham, Johnson-Eilola, Moulthrop and Bolter among others

The claims made during this period of discovery – when literary theorists ‘discovered’ hypertext – seem utopian now

One dream in particular has recurred throughout this book: a device that ‘enables associative connections that attempt to partially reflect the “intricate web of trails carried by the cells of the brain”’ (Wardrip-Fruin 2003, 35). More precisely, a tool for thought – a tool that might organize the mass of deeply tangled data that surrounds us. For the world grows more and more complex every day, and the information we are expected to keep track of proliferates at every click. How are we to manage the mess? (tools for thought)

The problem that Bush identified in 1945 is just as urgent today. The Web has not solved this problem for us

In this book, I have presented some earlier models of the hypertext concept, and in the process, demonstrated that every model has its benefits and its shortcomings

I want to say that HyperText is not the World Wide Web; the Web is but one particular implementation of hypertext. It’s the best we’ve come up with insofar as it actually works, most of the time – and it has stayed the course for 22 years. It is not, however, the only way hypertext can be done.
