Before and After the Web: George P. Landow (interviewed by Harvey L. Molloy)



George Landow talks with Harvey Molloy about personal projects and future Web speculations.

George P. Landow is Professor of English and Art History at Brown University. This interview was conducted while he was on leave from Brown University and was Shaw Professor of English and Digital Culture and Director of the University Scholars Programme at the National University of Singapore. His books on hypertext and digital culture include Hypermedia and Literary Studies (MIT, 1991) and The Digital Word: Text-Based Computing in the Humanities (MIT, 1993), both of which he edited with Paul Delany, and Hypertext: The Convergence of Contemporary Critical Theory and Technology (Johns Hopkins UP, 1992), which has appeared in various European and Asian languages and as Hypertext in Hypertext (Johns Hopkins UP, 1994), a greatly expanded electronic version with original texts by Derrida, reviews, student interventions, and works by other authors. In 1997, he published a much-expanded, completely revised version as Hypertext 2.0. He has also edited Hyper/Text/Theory (Johns Hopkins UP, 1994).

Harvey L. Molloy is an Assistant Professor in the University Scholars Programme at the National University of Singapore (NUS). His research interests include information design and digital arts. He has seven years' experience in the design industry working as an information designer and has worked for clients in the diverse fields of telecommunications, finance, education and the arts. He is currently the Programme’s Web editor.

The Web and Hypertext

HM: During the 90s, the Web came to dominate how we think about hypertext. What do you think about this domination?

GL: As someone who believes that the model of networked - i.e. uncentered, nonhierarchical - digital technology offers important potential for education, educational institutions, scholarly and creative work, and society as a whole, I am fascinated and delighted by the way the Web has taken hold. As someone who came from the pre-Web hypertext community, I am saddened that people have had to settle for such an impoverished version of hypertextuality.

When I look back upon the history of hypertext, I realize that WWW is a kind of latter-day Hypercard in disseminating the idea and use of this kind of infotech: Like Hypercard, which came into being only after dozens of far richer systems had appeared, it appears free and extraordinarily easy to use. Of course, as soon as one tries to do anything rich and strange with either HTML or Hypercard, one begins to experience it much as boat owners tell me one experiences owning a sailboat - as a giant hole into which one pours unlimited time and money.

The lesson of both Hypercard and WWW seems to be that this misleadingly easy first experience leads to great success; the lesson of WWW seems to be that the networked model - this first step towards Nelson’s Docuverse - matters more than anything else.

HM: What are the limitations of HTML and the Web?

GL: Essentially HTML is a very basic formatting language that looks virtually identical to all the old mainframe and DOS word-processing software - IBM Script, Zywrite, and so on - to which has been added the capacity to create links and embed images. Adding these two features was an act of genius. Basic HTML is extraordinarily easy to use, and with decent HTML editors, such as BBEdit, Dreamweaver, and Homesite, very easy to use for large projects or sites. Using Eastgate Systems' Storyspace 2.0, one can even create giant, multi-directory sites and export them into usable HTML with fairly little effort. So getting started is fairly easy today, as any 12-year-old knows.

The Web today has at least three main deficiencies: First, digital textuality is essentially dynamic; HTML, like its richer predecessor and model, SGML, is suited chiefly to static texts that are created, formatted, and frozen. The very use of the term “homepage,” which derives from a very different world of print, immediately suggests both the difficulties of the technology and the way new users come to it with incorrect - and very limiting - paradigms. HTML and currently available browsers lack some key features that make maintaining any dynamic Web site very time-consuming and therefore expensive.

Second, and related to this last point, is the absence of two defining features of true hypertext - (1) one-to-many linking and (2) automatically generated menus of links available when one clicks on any link-anchor. The first feature, the capacity to attach multiple links to any point in the text or image, creates a vastly richer sense of hypertextuality; in fact many students who learn about hypertext first from an experience of Storyspace, Microcosm, or other systems, find they cannot translate their work into HTML because the Web is “so much flatter,” as they put it, than other forms of hypertext.

In my experience, the second feature, link menus automatically generated by the system, saves much more than half the time and effort required to manage a dynamic site. My sites now comprise more than 42,000 documents and images, and they grow daily. Each time a new document comes to the Victorian Web, I have to do two things: First, I have to format it, which is fairly easy since one can use existing documents as templates for the new one. Second, and much more time-consuming and prone to error, I have to add links to the new doc from as many as six other menus, each of which has to be maintained manually. When one of my contributing editors from Canada (whom, incidentally, I have never met) e-mails an essay on Hardy and Conrad’s use of Miltonic imagery and its relation to their fundamental ideas, links have to be added to the literary relations overviews for each author as well as similar documents for imagery and themes. In richer forms of hypertext, one simply adds a link to each subject heading in each author’s overview using point-and-click techniques; in HTML, one has to edit six documents manually. What a lot of work!
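The bookkeeping described above is exactly what system-generated link menus automate. As a rough sketch of the idea (the registry structure, file names, and menu identifiers here are hypothetical illustrations, not the Victorian Web's actual setup), a script can regenerate every overview menu from a single central registry, so registering one new document updates all six menus at once instead of requiring six manual edits:

```python
# Hypothetical registry: one entry per document, tagged with every
# overview menu (author, imagery, themes, ...) that should list it.
registry = [
    {"title": "Miltonic Imagery in Hardy and Conrad",
     "file": "hardy/milton1.html",
     "menus": ["hardy-literary-relations", "conrad-literary-relations",
               "imagery-overview", "themes-overview"]},
    {"title": "Hardy and the Railway",
     "file": "hardy/railway.html",
     "menus": ["hardy-literary-relations"]},
]

def build_menu(menu_id, registry):
    """Generate the HTML link list for one overview menu."""
    items = [f'<li><a href="{d["file"]}">{d["title"]}</a></li>'
             for d in registry if menu_id in d["menus"]]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"
```

Regenerating all the menus then becomes one loop over the menu identifiers; the point-and-click linking of Storyspace or Microcosm does this kind of propagation for the author automatically.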

A final problem exists in the instability of the Net. Ideally, one should be able to link to many other Web sites. In fact, painful experience proves that a large number of Webmasters, particularly graduate students, request links to their sites and then move or shut down those sites without warning, and server names seem to change at an astonishing rate, thereby breaking links. This means that one of the Web’s greatest promises - a true Nelsonian Docuverse - hasn’t been fulfilled.

HM: Do you think that the Web will continue to hold this dominant position? Do you think that future developments in markup languages - such as XML - will allow the Web to fulfill some of the visionary potential of hypertext as imagined by Bush and Nelson?

GL: According to people close to the latest developments in XML, it will have the strengths of SGML – essentially, tags describe a text element, such as a paragraph or book title, and one decides on formatting them from a central location. It also seems as if the XLink protocols will finally give us one-to-many linking; now it’s up to Microsoft and Netscape to produce decent browsers that will support such features. If they do, the Web world could change at light speed.
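For readers curious what one-to-many linking looks like in markup, here is a minimal XLink "extended link" (the element names are invented for illustration; only the xlink:* attributes come from the XLink specification) in which a single source anchor fans out to two destinations, parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"

# An extended link: one starting resource ("src") with an arc to two
# ending resources -- the one-to-many linking a plain HTML <a> lacks.
doc = """<links xmlns:xlink="http://www.w3.org/1999/xlink">
  <linkset xlink:type="extended">
    <loc xlink:type="locator" xlink:href="hardy-ov.html" xlink:label="src"/>
    <loc xlink:type="locator" xlink:href="imagery.html"  xlink:label="dest"/>
    <loc xlink:type="locator" xlink:href="themes.html"   xlink:label="dest"/>
    <go xlink:type="arc" xlink:from="src" xlink:to="dest"/>
  </linkset>
</links>"""

root = ET.fromstring(doc)
# Collect every destination the single arc fans out to.
destinations = [
    el.get(f"{{{XLINK}}}href")
    for el in root.iter()
    if el.get(f"{{{XLINK}}}label") == "dest"
]
```

A conforming browser would turn that one anchor into a menu of choices at the moment the reader clicks - precisely the automatically generated link menus discussed above.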

HM: Is there a danger that students and researchers will forget the power of other hypertext systems due to the dominance of the Web?

GL: No, I think the danger is that the great majority of students and researchers never even learn about other systems. For someone involved in the field since 1986 or ‘87, one of the most painful (or pathetic) things about many Web-based research projects in Computer Science is seeing people duplicate research done much earlier – often on things that proved to be complete dead ends. Oh well, it keeps them off the street.

HM: In Hypertext 2.0 you noted that “Hypertext also offers a means of experiencing the way a subject expert makes connections and formulates inquiries” (226). How does the Web fare in fulfilling this potential?

GL: Here I think the Web does an excellent job. The ease with which one can create what are essentially links to a glossary permits beginners to read with the help of expert readers - when they wish to do so.

The Web and Education

HM: The Victorian Web began as a Storyspace web - what was your experience in converting the Victorian Web from Storyspace to HTML? What were the effects of this change for authors and readers of the Victorian Web?

GL: First, an enormous amount of work, which continues on a daily basis. Second, an enormously larger audience that is now around a combined 7-8 million hits/month on my two sites (in Singapore and in the US). Third, as a result of the last effect, contributors to the Victorian Web, chiefly faculty members at other institutions and a few graduate students, have increased enormously. We now have around 500 faculty authors, and in the Victorian Web Books section, which consists of HTML translations of central books in the field, we now have a dozen important books originally published by Cornell, North Carolina, Oxford, Routledge, Princeton, Texas, and Yale UP. None of this could have happened without something like the Web.

HM: What’s interesting to me about the Victorian Web and the Hypertext and Critical Theory Web is that you don’t readily distinguish between student authors and established academic writers. Students are effectively engaged in scholarly research projects. Is hypertext unique in allowing students to become active researchers?

GL: Two comments: first, each does distinguish between undergraduate, postgraduate, and faculty contributors – at least to the extent that each byline indicates the status of the author. It does not distinguish among them to the extent that faculty and students or members of the general public comment upon one another’s work.

Second, as with so many other educational and cultural effects, hypertext makes vastly easier something that was theoretically possible earlier and occasionally practiced.

HM: In Hypertext 2.0 you wrote that “Hypertext, by holding out the possibility of newly empowered, self-directed students, demands that we confront an entire range of questions about our conceptions of literary education” (219). What’s your evaluation of the humanities’ response to this possibility?

GL: Qualified medium-range optimism, I guess. Many young teachers immediately saw the possibilities of the Web and other forms of hypertext. For example, using both Storyspace and HTML, Massimo Riva of Brown constructed the massive bilingual Decameron Web with contributions from students and scholars from the USA and Italy. Interestingly enough, scholars working in the fields concerned with earlier literatures - Greek and Latin, Anglo-Saxon, Old Irish, Old Norse, and so on - led the way whereas those in contemporary literature, film, and video often refused even to consider the possibilities of digital technologies. Brown’s Department of Modern Culture and Media, which for almost a decade acted as if all media ended with television and video, blocked several attempts to have an official program or major in digital culture. In my own department, the medievalists and Renaissance scholars have long been immersed in computing, but I have never been able to get those in the Romantic and Victorian periods, including our chair, to look at the Victorian Web, much less use it for their courses or contribute to it themselves. As soon as I went on leave to come to the National University of Singapore, my department stopped teaching my hypertext courses, even though there are quite a few people who could have kept them going. The Old Guard, the Old Fellas (which in this case includes a large number of women), don’t see what this stuff has to do with an English Department.

My off-the-cuff explanation is that although all modern education is based chiefly upon book technology, those working in earlier fields know the texts that they study and teach bear the marks of scribal, oral, and pre-print infotech; those who work in later fields are so inside the Gutenberg galaxy (as McLuhan called it) that they see anything else as fundamentally anticultural.

HM: Do you think that there’s a danger that many teachers in humanities see hypertext as being about computers rather than being a means to do research?

GL: Yup. At the very least, they should be leading their students to learn how to evaluate the quality of information. Of course, since most secondary school teachers and college instructors today themselves don’t know how to do research in traditional libraries, they can’t extend these skills to the Net.

HM: What are some of the issues that need to be considered by Web publishers who want to create online editions of out-of-print books? How do footnotes, references and bibliographies work when a text is moved from print to hypertext?

GL: Since not all users have broadband access to the Internet, avoid adding links to notes where possible by using the following rules: First, all substantial notes should be given titles and treated as separate documents; second, incorporate as many brief comments and notes as possible into the main text; third, for bibliographical information, include a list of works cited at the foot of each individual lexia (document) and then use the MLA short form of in-text citation, which means in practice that you use only as much information in the parenthetical reference as is absolutely necessary. Thus, if you introduce quoted material with “According to Spurgeon’s ‘Christ the Lord’,” you need only a page number: “quoted text” (34). If, however, you wrote, “According to a Victorian preacher… ” you’d have to provide the necessary information in full: “quoted text” (Spurgeon, “Christ the Lord,” 34).

Most of the preceding recommendations, you’ll notice, come straight from the best of current book publishing practice. The problem is that many print publishers, including leading academic ones, have incompetent manuscript editors or inadequate house styles. Thirty-five years ago I was told by editors of leading journals and presses (a) not to use things like “Ibid.” or “Op. Cit.,” and (b) never to use unnecessary notes, but I still come upon books like Timothy Hilton’s fine biography of John Ruskin, the second volume of which Yale University Press published last year, with pages and pages of tiny endnotes consisting of Ibid. and page numbers. A good three-quarters of the endnotes, which are not easy to use in a massive volume, are useless. The lesson here is that one can get by with incompetent manuscript preparation in print, but such poor quality in a Web doc would be a disaster for readers, quickly training them not to follow any links!

The more interesting problems, which we face all the time in the Victorian Web Books, include: (a) what to do with information created, even by the same author, since the book first appeared; (b) how does one add value with links to material not in the original book; and (c) how does one both preserve the text-as-a-book and make it function effectively as a digital text with permeable borders. Finally, can some of the solutions I’ve tried in the Victorian Web be carried out algorithmically?

Power, Authority, Control, and the Web

HM: In your introduction to the 1994 collection of essays Hyper/Text/Theory that you edited you wryly observed that the humanities excels in “finding mice in molehills.” Do you think that by subscribing to the narrative that the utopian idealism of the early 90s has now been superseded by systems of control and the search for the e-dollar that the humanities finds a late capitalist mouse in a cyberspace molehill? Has utopianism about the Web been replaced by a proliferation of technocapitalism and cybernetic governmentality?

GL: Although a certain cyber-utopianism has disappeared as a general characteristic of those involved in the Web, this change has happened in large part because new people with non-utopian goals have quite properly tried to earn a living with the new technology. I don’t see anything wrong in people trying to make money from doing things that other people need or want (not the same thing). At the same time, a lot of people see the Web as a new virtual place of freedom. I find wonderfully encouraging the Web public’s refusal to accept channels and other attempts to turn hypertext into television. Michael Joyce’s brilliant challenge thus far has rung true: “Hypertext is the revenge of text upon television.” If it turns out that the most successful way to make money from the Net is business-to-business sales, a few consumer fields, such as music distribution, and the like - that’s fine. None of this drives out more experimental writing and the like.

HM: While there has been a rise in cybergovernmentality, there has also been a proliferation of free Web hosting, free email services, free egroups, and free Web logs. It’s never been so easy to publish your own material. Do you think that this is significant? Does the rise of these services have implications for teaching and research?

GL: Yes, we find ourselves in a situation of creative anarchy, and, like everyone else, I’m waiting to see how things will shake out and down. I also wonder how long services will remain “free,” or if certain aspects of Internet culture will eventually become a kind of inalienable right. It is also possible that, like broadcast TV, such free services will come at the expense of advertisements, in which case skilled reading will involve becoming blind to commercial enticements.

Certain obvious implications have already been realized: my students in Singapore, like those in the US, often develop their work on their own servers, rather than in (and on) University facilities. In addition, the ability to publish anything makes something like a conventional publisher, who selects, regularizes, and advertises, even more important. I don’t think the Web is the death of publishers - just the death of those who insist on remaining clueless.

HM: In Hypertext 2.0 you argued that “Like other forms of technology, those involving information have shown a double-edged effect, though in the long run - sometimes the run has been very long indeed - the result has always been to democratize information and power” (276). What are some of the dynamics at work which result in this greater democratization?

GL: Although clearly many factors are involved, the single most important one, I believe, is the replacement of hierarchy by the uncenterable network. That makes top-down control difficult; hierarchy and lack of transparency almost unworkable; choice inevitable.

HM: Let’s talk a little about the issue of surveillance and openness. The extent to which Web surveillance has increased surely depends on very local issues. What do you see as the impact of the Web within Singapore and throughout the entire South-East Asia region?

GL: Key issues include (a) literacy, without which accessibility means nothing, (b) access to networked computers, and (c) access to high-speed networks. Much of the population of Singapore has more of these three capacities than most of Europe and America, and vastly more than their neighbors in the region, or countries in South America and Africa.

By announcing recently that Internet service providers are not legally liable for material their customers place on their webservers, the Singapore government took a giant step towards an open society. I have no idea how much Web surveillance actually happens here or throughout the world, though it seems to me that most of it takes a commercial turn, with merchandisers compiling elaborate profiles, which they then exchange with other commercial and possibly governmental entities.

I also don’t have a clear idea of how much surveillance is in fact possible. We all know stories of the Jet Propulsion Lab storing incredible amounts of data sent back by unmanned space vehicles because they don’t have the capacity to process it. Even given the resources of the NSA and the CIA, I wonder how much they can accomplish with the vastly larger amounts of data that pour in from spy satellites, web crawlers, and the like. Singapore has only 3 million people, so the task would be easier if one had access to the same resources.

HM: How do you see copyright issues impinging on online publication and scholarship?

GL: Back somewhere around 1987, the Annenberg/Corporation for Public Broadcasting assembled about a dozen people in Cambridge, Massachusetts, and asked us what would be needed to make hypertext fulfill its potential as an educational and cultural force. Everyone agreed that the hardware and software will take care of themselves; the one factor that we had to work for was a new conception of copyright that involved something like leasing information for a tiny expenditure - Ted Nelson’s vision, of course. Since then nothing has changed.

Unfortunately, too many of the judges and lawmakers who consider such issues throughout the world do not understand networked digitech. Worse, not realizing that many of their conceptions of intellectual property are print based, they assume their notions of intellectual property are universal. Of course, as many students of copyright law have pointed out, in the commercial world large corporations protect their ideas by means of secrecy, not copyright.

HM: What are some of the new issues in hypertext? What would you need to cover if you were writing Hypertext 3.0?

GL: The short answer is that if I knew, I’d be writing Hypertext 3.0 right now. The longer one is that I’d have much more digital fiction, poetry and art to examine, and I’d expect to examine various debates over gender, textual embodiment, and other issues increasingly prominent in contemporary critical theory. Of course, I am particularly eager to see if the promise of XML will be fulfilled.