Monday, April 30, 2007

Book Presentation, Douglas Ryan VanBenthuysen, April 30, 2007

Book Title

Designing Usable Electronic Text, Second Edition

Author

Andrew Dillon

Dillon's Stated Purpose

"The major aim of the present work is to examine and subsequently describe the reading process from a perspective that sheds light on the potential for information technology to support that process."

Dillon's Purpose as Doug sees it

Dillon's overall goal seems to be to suggest to designers of electronic text that they consider ergonomic factors when planning and releasing designs. "Suggest" is perhaps too weak a term; it might be more accurate to say that Dillon asserts an almost ethical imperative to consider such factors. He laments that "human factors are often considered at a stage too late to effect better designs." Specifically, he advocates the use of the TIME framework for planning the design of electronic texts.

Measuring Differences between Screen and Paper Reading

Dillon evaluates the various ways in which one could measure differences between screen and paper reading, referring to the current research to show whether paper or screen scores higher in a given area of evaluation and how that result compares to the expected result.

The differences can be organized as follows:

  1. Differences in Outcome
    1. Speed
    2. Accuracy
    3. Comprehension
    4. Fatigue
    5. Preference
  2. Differences in Process
    1. Eye Movement
    2. Navigation
    3. Manipulation

Differences in Outcome

"Outcome measures concentrate on the results of reading."

Speed

Dillon asserts, "By far the most common experimental finding over the past 20 years is that silent reading from screen is significantly slower than reading from paper." "Figures vary according to means of calculation and experimental design but the weight of evidence suggests a performance deficit of between 20 per cent and 30 per cent when reading from screen." However, other tests have shown that under certain conditions, speeds are statistically equivalent. Dillon concludes that while it is commonplace to assert that screen reading is slower, the matter is not cut and dried.

Accuracy

Dillon admits that this term is rather loose in the research, but it can involve such things as locating information, recalling content, or locating errors (like spelling mistakes). Again, the available research favors print, as studies show significantly poorer accuracy on screens. The exception to this rule comes when the accuracy test involves finding information using search. Studies have shown that hypertext versions make it easier for subjects to find information not contained in headings and titles.

Comprehension

One of the problems with assessing data with regard to comprehension is that several studies fail to use "experienced" subjects. That is, the lower comprehension of some users may be due to a lack of familiarity with screen reading as opposed to some intrinsic problem with comprehension from the screen. More recent studies, particularly since the widespread advent of hypermedia, suggest that there is not a significant difference between comprehension levels on screen and print. Dillon concludes, "So it would seem that reading from screens does not negatively affect comprehension rates though it may affect the speed with which readers can attain a given level of comprehension."

Fatigue

A common concern about reading from screens is the fear of negative side effects for screen-readers. Dillon's analysis of research concludes that, in fact, fatigue is a bigger problem with screen reading, particularly when reading for extended periods of time. Screen quality has, as you might imagine, a huge impact on fatigue. Dillon says, "It would seem safe to conclude that users do not find reading from screens intrinsically fatiguing but that performance levels may be more difficult to sustain over time when reading from average quality screens. As screen standards increase over time this problem should be minimised. However, what may take much longer to overcome is the commonly held belief that screen reading is more fatiguing, regardless of any test results from a user study."

Preference

Dillon reveals that there is a conception that the ergonomic field tends to push technology, often concluding that naïve users tend to dislike computers while researchers tend to illustrate their superiority. Without saying much more about this, Dillon concludes that there remains a strong preference for paper, though he acknowledges some support for the argument that improved screen quality and increased user familiarity with technology are leading to a shift in preference. He cites an interesting survey (from 2000) in which only 15% of readers of an online journal actually read articles online. The rest chose to print the articles. (I might hypothesize, however, that even now, just seven years later, a far greater number of users read online, if for no other reason than the affordability of LCD screens.)

Differences in Process

Measuring process involves examining the ways in which a user physically encounters a text. Dillon discusses the difficulty of evaluating process, as the observation method must be unobtrusive enough to prevent the skewing of results. He also claims that people commonly assert that there must be great process differences between print and screen reading.

Eye Movement

Dillon defines: "Eye movements during reading are characterised by a series of jumps and fixations. The latter are of duration approximately 250 milliseconds and it is during these that word perception occurs." After describing a process for investigating eye-movement patterns while reading, which uses a photoelectric monitoring system, Dillon concludes:

"Analysis revealed that when reading from screen, subjects made significantly more (15 per cent) forward fixations per line. However, this 15% difference translated into only one fixation per line. Generally, eye-movement patterns were similar and no difference in duration was observed. Gould et al. explained the 15 per cent fixation difference in terms of image quality variables. Interestingly, they report that there was no evidence that subjects lost their place, 'turned-off' or refixated more when reading from screen."

Manipulation

Manipulation involves the physical movement of an object, such as flipping a page or clicking a scroll button. Dillon says, "Perhaps the most obvious difference between reading from paper and from screens is the ease with which paper can be manipulated and the corresponding difficulty of so doing with electronic text." He discusses how we learn skills for manipulating paper texts, such as leaving a finger in a page to mark a place, early in life. He also brings up the fact that there exists a wide variety of methods for manipulating electronic texts (using a wheel mouse, a stylus, a keyboard, etc.).

Navigation

Navigation involves moving through a document. Dillon says, "There is a striking consensus among many researchers in the field that this process is the single greatest difficulty for readers of electronic text. This is particularly (but not uniquely) the case with hypertext where frequent reference is made to 'getting lost in hyperspace'" and "The general findings suggest that users are more likely to experience navigation difficulties in hypermedia environments than they are in paper and that considerable design effort is required to ensure that the navigational overhead does not limit the cognitive resources available for content comprehension."

Sources of Difference

Having delineated the types of differences, Dillon examines the reasons why such differences occur. He divides the sources of difference into four categories, each with subcategories:

  1. Physical
    1. Orientation
    2. Aspect Ratio
    3. Handling and Manipulation
    4. Display Size
  2. Perceptual
    1. Image Quality
      1. Flicker
      2. Angle of View
      3. Polarity
    2. Display Characteristics
    3. Anti-aliasing
  3. Cognitive
    1. Short-term memory
    2. Visual Memory
    3. Schema for text
    4. Searching
    5. Individual Differences
  4. Social
    1. Cultural forces
    2. Genres

Physical

Because there are obvious physical differences between electronic and paper texts, researchers have examined how these aspects affect performance.

Orientation

This refers to the way in which you hold a text. For example, are you holding a book in front of your face, looking down on it at a table, or holding it up while lying down? Obviously, books have an advantage in that they are small and portable enough to be reoriented, whereas most electronic texts cannot be so manipulated.

Aspect Ratio

Defined as the relationship between width and height, aspect ratio is fixed in a book but can be altered on screen. Books tend to be taller than they are wide, whereas screens tend to be the opposite, though text can be manipulated to allow it to mimic the aspect ratio of a book.

Handling and Manipulation

This characteristic involves how a text might be physically handled. For instance, a paperback offers advantages in handling and manipulating over a large anthology. Dillon says, "Not only are books relatively easy to handle and manipulate, but also one set of handling and manipulation skills serves to operate most document types. Electronic documents, however, cannot be handled directly at all but must be presented through a computer screen and, even then, the operations one can perform and the manner in which they are performed can vary tremendously from one presentation system to another."

Display Size

"Popular wisdom," Dillon says, "suggests that 'bigger is better' but empirical support for this edict is sparse." Dillon discusses how larger print size on a screen causes more need to interact (to scroll, click to a new page, etc) just as a larger print size in a book results in the need to turn pages more often.

Perceptual

Perceptual differences involve how humans are actually presented with the text for processing.

Image Quality

Image quality seems to be one of the most important factors for Dillon. He divides his discussion of image quality into several components.

Flicker

Because electronic characters are constantly being regenerated, they may appear to flicker, though at a sufficient refresh rate the flicker becomes imperceptible. The consensus is that a refresh rate of at least 72 Hz is necessary for comfort.

Visual Angle and Viewing Distance

Studies have shown that holding a book at different angles can significantly affect speed and accuracy. Additionally, increased distance from a screen reduces fatigue, though screen readers often use short viewing distances to combat reflections and glare.

Image Polarity

Image polarity refers to whether text is black on a white background (positive) or white on a black background (negative). Books and most screen presentations tend to be positive, though it had been in vogue, says Dillon, to claim that negative polarity yielded better results; studies differ.

If I might interject here, I would claim that positive polarity is "natural" to books whereas negative is "natural" to screens, given that to create a negative polarity book would take excessive dye and to create a positive polarity on a screen takes additional light. When printing on paper, you start with white and add to it. The opposite might be said to be true on screen.

Display Characteristics

This involves things like font selection, line spacing, character spacing, kerning, etc. Many studies have been performed to show the effects of these decisions on reading performance. For example, certain fonts lead to increased reading speed.

Anti-Aliasing

Anti-aliasing technology reduces the jagged edges that would otherwise appear on screen letters, making them look more like print letters. Studies have shown that while there are no significant performance differences between reading aliased versus anti-aliased characters, readers overwhelmingly prefer anti-aliased text.

Cognitive

Cognitive differences involve how readers process information once it has been perceived.

Short-term Memory of a Text

Here, Dillon discusses problems such as splitting a sentence across screens, which disrupts short-term memory. Since readers are more likely to flip back a page when reading a book than to go back a screen when reading an electronic text, the effect is potentially more detrimental to electronic texts. However, electronic texts could theoretically be designed to avoid such breaks.

Visual Memory for Location

Dillon asserts, "There is evidence to suggest that readers establish a visual memory for the location of items within a printed text based on their spatial location both on the page and within the document." Screen presentation can therefore be problematic because the information does not always appear on the same part of a page.

Schematic Representations of Documents

Readers have a familiarity with how a text will likely be organized: it will contain a table of contents; depending on the type of work, it will contain an alphabetized index; etc. Dillon asserts that the reason for readers' weaker understanding of structure in electronic documents is not only the medium's short history, but that "it is also the case that the medium's underlying structures do not have equivalent transparency."

Social

Dillon asserts that little work has been done assessing the social elements that shape human responses to documents.

Cultural Forces

Common discussion of electronic text often centers on the idea of cultural forces driving the change in technology. "Eventually a new generation will accept electronic text" would be an example of such thinking. In this regard, Dillon compares the current situation to the early stages of the Gutenberg press. Dillon is cautious in this area throughout the book, however. He brings up the fact, for instance, that proponents of electronic text thought that electronic text would have displaced print by now. He also insists that the performance differences between the media will not disappear simply because people become more experienced.

Genre

Basically, different areas of study have different needs and therefore lead to different levels of acceptance of electronic text and different levels of performance when using electronic text.

Other Elements of Dillon's Argument

The third quarter of the book, which I will be glossing over here, essentially establishes the following:

  1. Electronic text design has not significantly addressed the human factors of the reading process.
  2. Readers approach texts of different genres with different goals and design consideration ought to consider these differences.

The TIMEFrame

Dillon lays out the "TIMEFrame" framework for designing electronic texts to guide those engaged in the design process and to ground his argument in a practical solution rather than mere abstraction. The framework does not give specific design advice, but rather lays out what a designer should be considering when creating an electronic text.

Four Criteria for Framework

Before laying out his framework, Dillon suggests that a design framework must meet the following criteria:

  1. It must be accurate
  2. It must be relatively noncomplex
  3. It must be suitably generic to be of relevance to more than one application
  4. It should be modifiable

TIME Assumptions

Dillon makes clear five assumptions upon which his framework rests:

  1. Humans explore and use information in a goal-directed manner to 'satisfice' the demands of their tasks.
  2. Humans form models of the structure of and relationship between information units. Repeated formation and application of such models leads to the formation of schematic forms and, ultimately, document genres.
  3. Human information usage consists in part of physical manipulation of information sources.
  4. Human reading at the level of word and sentence perception is bounded in part by the established laws of cognitive psychology.
  5. Human information usage occurs in contexts that enable the user to apply multiple sources of knowledge to the information task being performed.

TIME

The TIMEFrame is a framework that presents for consideration four key factors that affect usability: task, information model, manipulation skill and facilities, and ergonomic variables.

(T) Task

The task "reflects the reader's needs and uses for the material." When designing a text, the questions must be raised of why the reader will be coming to that text, what he or she will attempt to get out of the text, and what the reader will actually read.

(I) Information Modeling

The information model "consists of the user's mental model of the information space." This model is the result of the reader's attempt to give the material presented a meaningful structure. This model is not only based on the reader's encounter with the text, but also has to do with the reader's experience.

(M) Manipulation Skills and Facilities

Manipulation skills and facilities "support physical use of the material." For example, will the reader likely "flip through" the book or will it be read straight through? Designers here would consider what Dillon calls "the digital equivalent to the finger that is less permanent than a bookmark and serves the temporary holding of a place in space."

(E) Ergonomic Variables (He calls it "Visual Ergonomics" sometimes)

Ergonomic variables deal with elements such as eye movement, fixations, and other physical acts not directly described as "reading."

Possible Interactions

There are twelve possible "exchanges" between these four factors of the framework. The most typical exchange would flow in the order T -> I -> M -> E. For example, let's say that a reader is picking up a book on botany with the intention of looking up the importance of bees in plant development. The task, then, is to find information about bees. This interacts with the information model in that the reader, either based on knowledge of the genre of book or having already looked at the book, understands that the book is structured with an index at the end showing key words and concepts. The next exchange occurs when the information model interacts with the reader's manipulation skills. Simply, the reader flips to the index, finds page numbers, and flips to those pages, probably keeping a finger in the book to hold the place of the index. Finally, the manipulation interacts with visual ergonomics when the reader's eyes move over the index entry.

There are, however, ways for the factors to interact with each other in a less linear fashion. Here are the 12 possible interactions:

  1. T -> I
  2. T -> M
  3. T -> E
  4. I -> T
  5. I -> M
  6. I -> E
  7. M -> T
  8. M -> I
  9. M -> E
  10. E -> T
  11. E -> I
  12. E -> M

I will not describe each, but just to give a few more examples: M -> T might occur when a reader realizes that a text cannot be manipulated in a certain way, and therefore changes the task. E -> I might occur when the reader's eye movement picks up data about a text that leads to an altered view of the information model.
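The twelve exchanges are simply all ordered pairs of distinct factors. A minimal sketch (my own illustration, not from Dillon) generates them with Python's standard library:

```python
from itertools import permutations

# The four TIMEFrame factors: Task, Information model,
# Manipulation skills, Ergonomic variables.
factors = ["T", "I", "M", "E"]

# Every ordered pair of distinct factors is a possible exchange:
# 4 factors taken 2 at a time yields 12 directed interactions.
exchanges = [f"{a} -> {b}" for a, b in permutations(factors, 2)]

print(len(exchanges))  # 12
print(exchanges[:3])   # ['T -> I', 'T -> M', 'T -> E']
```

Note that `permutations` emits the pairs in exactly the order of the list above, from T -> I through E -> M.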

TIME as a design tool

So what is the TIMEFrame for, anyway? Dillon lays out the framework as a suggestion for designers to consider when designing a text. He says, for example, "The designer could conceptualize the intended users in a TIME framework and thereby guide or inform their prototyping activities."

TIMEFrame Suggests Considerations

Using the framework allows a designer to consider four major issues:

  1. What tasks will users of this text be performing?
  2. What information model will they possess or must they acquire through this text?
  3. What manipulations will be required or will likely be performed?
  4. What ergonomic aspects are involved?

Potential Uses of TIMEFrame

Dillon proposes three potential uses for the TIMEFrame:

  • As a guiding principle for advanced organization
  • Parsing issues into elements to facilitate the identification of important issues
  • Ensuring that all relevant issues are considered

Short, Applicable Advice for Designers

Ever attempting to make his text practical for designers, Dillon puts forth a short list of steps for designers shortly before concluding. So, according to Dillon, here's what you should do if you are attempting to create an electronic text:

  • Identify stakeholders and analyze users
  • Generate scenarios for the contexts of use
  • Simulate usage analysis of the text(s) or document(s) involved according to three (at least but not exclusive) criteria:
    • How it is used
    • Why it is used
    • What readers perceive the information to be
  • Investigate the extent to which the document structure is fixed by existing readers' models.
  • Determine the electronic structure by considering the readers' existing models, potential models and the tasks being performed.
  • Design the manipulation facilities required for basic use and ensure that readers can at least perform these activities simply and intuitively with the mechanisms provided.
  • Add value to the system by offering facilities to perform desirable or advantageous activities that are impossible, difficult or time-consuming with paper or previous designs.
  • Ensure image quality is high.
  • Test the system on users performing real tasks and redesign accordingly.

Final Sentences of the Book

A text without a reader is worthless. Similarly, a technology without a user is pointless. The human is the key; only by relating technologies to the needs and capabilities of the user can worthwhile systems be developed. The work in this book is a step in that direction for electronic texts, but there remains a long journey ahead.

Wednesday, April 25, 2007

Jakob Nielsen's Multimedia and Hypertext and others

Multimedia and Hypertext: The Internet and Beyond

http://www.useit.com/jakob/mmhtbook.html

Technically

a. Definition of Hypertext and Hypermedia:

The simplest way to define hypertext is to contrast it with traditional text like this book. All traditional text, whether in printed form or in computer files, is sequential, meaning that there is a single linear sequence defining the order in which the text is to be read.
Hypertext is nonsequential; there is no single order that determines the sequence in which the text is to be read.
The traditional definition of the term "hypertext" implies that it is a system for dealing with plain text. Since many of the current systems actually also include the possibility for working with graphics and various other media, some people prefer using the term hypermedia, to stress the multimedia aspects of their system. Personally, I would like to keep using the traditional term "hypertext" for all systems since there does not seem to be any reason to reserve a special term for text-only systems. Therefore I tend to use the two terms hypertext and hypermedia interchangeably with a preference to sticking to hypertext.

b. History of Hypertext:

1945  Vannevar Bush proposes Memex
1965  Ted Nelson coins the word "hypertext"
1967  The Hypertext Editing System and FRESS, Brown University, Andy van Dam
1968  Doug Engelbart demo of NLS system at FJCC
1975  ZOG (now KMS), CMU
1978  Aspen Movie Map, first hypermedia videodisk, Andy Lippman, MIT Architecture Machine Group
1984  Filevision from Telos; limited hypermedia database widely available for the Macintosh
1985  Symbolics Document Examiner, Janet Walker
1985  Intermedia, Brown University, Norman Meyrowitz
1986  OWL introduces Guide, first widely available hypertext
1987  Apple introduces HyperCard, Bill Atkinson
1987  Hypertext'87 Workshop, North Carolina
1991  World Wide Web at CERN becomes first global hypertext, Tim Berners-Lee
1992  New York Times Book Review cover story on hypertext fiction
1993  Mosaic anointed Internet killer app, National Center for Supercomputing Applications
1993  A Hard Day's Night becomes the first full-length feature film in hypermedia
1993  Hypermedia encyclopedias sell more copies than print encyclopedias

c. The Architecture of Hypertext Systems

Presentation level: user interface
Hypertext Abstract Machine (HAM) level: nodes and links
Database level: storage, shared data, and network access

The Database Level:
The database level is at the bottom of the three-level model and deals with all the traditional issues of information storage that do not really have anything specifically to do with hypertext.
Ultimately it will be the database level's responsibility to enforce the access controls which may be defined at the upper levels of the architecture.
As far as the database level is concerned, the hypertext nodes and links are just data objects with no particular meaning.

The Hypertext Abstract Machine (HAM) Level:
The HAM is the best candidate for standardization of import-export formats for hypertexts, since the database level has to be heavily machine dependent in its storage format and the user interface level is highly different from one hypertext system to the next.

The User Interface Level
The user interface deals with the presentation of the information in the HAM, including such issues as what commands should be made available to the user, how to show nodes and links, and whether to include overview diagrams or not.
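The three-level split can be made concrete with a toy node-and-link model. This is my own illustrative sketch, not code from Nielsen's book: the database level is reduced to plain dictionaries and lists of records, while the HAM level gives those records meaning as nodes and links.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    content: str

@dataclass
class Link:
    source: str   # node_id of the anchor node
    target: str   # node_id of the destination node
    label: str = ""

@dataclass
class Hypertext:
    # Database level: plain storage; nodes and links are just records here.
    nodes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def add_link(self, link: Link) -> None:
        self.links.append(link)

    def links_from(self, node_id: str) -> list:
        # HAM level: a query that understands what a "link" means.
        return [l for l in self.links if l.source == node_id]

ht = Hypertext()
ht.add_node(Node("intro", "What is hypertext?"))
ht.add_node(Node("history", "From Memex to the Web."))
ht.add_link(Link("intro", "history", "see history"))
# Presentation level would decide how to render this list to the user.
print([l.target for l in ht.links_from("intro")])  # ['history']
```

The presentation level is deliberately absent here: in the three-level model it would be a separate layer that renders a node's content and its outgoing links, without the storage or link semantics leaking into it.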

Usability

Usability is traditionally associated with five usability attributes:

Easy to learn: The user can quickly get some work done with the system.
Efficient to use: Once the user has learned the system, a high level of productivity is possible.
Easy to remember: The casual user is able to return to using the system after some period of not having used it, without having to learn everything all over.
Few errors: Users do not make many errors during the use of the system, or if they do make errors they can easily recover from them. Also, no catastrophic errors must occur.
Pleasant to use: Users are subjectively satisfied by using the system; they like it.

3. Applications

a. Susan Hockey. The Reality of Electronic Editions. Voice, text, hypertext : emerging practices in textual studies / edited by Raimonda Modiano, Leroy F. Searle, and Peter Shillingsburg.
Much hype currently surrounds discussions on hypertext, electronic textuality, theory of electronic text, and related topics, especially in relation to the preparation of electronic editions and archives. Leading textual scholars have embraced the idea of electronic texts and textuality and have speculated at length about this new medium, both in print and on the Internet. In practical terms, however, we are very much farther behind. Few implementations exist, and most of these are, in my view, poorly designed and weak in functionality. They tend to have too much dependence on the Hyper Text Markup Language (HTML) and native web technology and to incorporate too many multimedia gimmicks.
Prescriptive markup indicates the functions that are to be carried out on the text. Prescriptive markup restricts the functionality of the electronic text because the text, once marked up in this fashion, can really be used only for the functions prescribed in the markup. By far the most widely used form of prescriptive markup is that created by word-processing programs.
Descriptive markup is much more powerful and flexible. The concept behind descriptive markup is very simple: instead of indicating what the computer is to do with a given component of the text, descriptive markup merely says what that component is. Standard Generalized Markup Language (SGML) and the Extensible Markup Language (XML) make it possible to create a set of encoding tags that directly correspond to the components of the text. An SGML/XML-based data model can avoid many of the simplification problems associated with HTML and can easily be extended if additional features have to be encoded. The principles of SGML can be summarized briefly as follows:
1. An SGML/XML-encoded text is a plain ASCII file, which can be read independently of any specific word-processing program. XML supports Unicode as well. Thus the same text can be used for many different purposes and will outlast the computer on which it was created.
2. SGML/XML itself is a syntax or framework for defining markup languages, and the set of specific markup tags for one particular project is called an SGML/XML application.
3. These markup tags, and the relationships among them, must be defined in a document type definition (DTD) in SGML and XML (and possibly in a schema in XML), which gives a formal specification of the document's structure.
4. Almost anything, from a complete text down to the detailed interpretation of one part of a word (for example, the "ur" portion of the term "ur-text"), can be encoded in SGML/XML. It is up to the designer of a DTD to determine what is important and what should be encoded, a process known as document analysis.
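The contrast between prescriptive and descriptive markup can be sketched with a tiny XML fragment. The tag names below are my own invention for illustration (a real project would define them in a DTD or schema, as point 3 describes); the point is that the tags say what each component is, so any processing can be applied later.

```python
import xml.etree.ElementTree as ET

# Descriptive markup: <stanza> and <line> name what the components ARE,
# not how to render them. Tag names here are hypothetical examples.
poem = """<poem>
  <title>Untitled</title>
  <stanza>
    <line>First line of verse</line>
    <line>Second line of verse</line>
  </stanza>
</poem>"""

root = ET.fromstring(poem)

# Because nothing in the markup prescribes a function, the same file can
# serve many purposes: count lines, extract titles, transform to HTML, etc.
lines = [line.text for line in root.iter("line")]
print(root.find("title").text)  # Untitled
print(len(lines))               # 2
```

A prescriptive equivalent (say, word-processor codes for "italic, 12pt") would support only the rendering it encodes; the descriptive version above supports rendering, searching, and analysis alike.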

An electronic edition should:
(1) maintain current standards of scholarly editorial excellence;
(2) facilitate changes in scholarly editorial practice;
(3) allow postpublication enhancements of editions;
(4) allow multiple forms of publication; and
(5) conform to relevant standards for electronic text, images, and other material.

b. Kahn, P. (1989b). Linking together books: Experiments in adapting published material into hypertext. Hypermedia 1, 2,111-145.

Describes the conversion of a set of books on Chinese poetry into Intermedia format, giving plenty of screen shots. One interesting illustration is an overview diagram of the translators of the poet Tu Fu, which are ordered in two dimensions: Chronologically on the y-axis and according to the translator's emphasis on sinology or poetry on the x-axis. The author distinguishes between objective links (those present in the text being converted such as explicit literature references) and subjective links (those added because the converter or other hypertext user sees a connection between two items).

“The links in a body of literature can be viewed as connecting two classes of associations: objective and subjective. Objective associations are those that derive from the structure of the texts themselves, the text as data. Subjective associations are those that derive from an interpretive understanding of the material.
Objective associations can be used to generate links between items in the data. In a body of literature, objective associations are to be found in the structural and linguistic properties of the text, and the properties of the author associated with the text. Many current printed reference works, which are already structured along these lines, contain the necessary information to create such objective hypertexts.
Subjective associations are to be found in the same text, but the associations are located in the ideas which a scholar sees as represented by the text. Common examples in literature include commentaries on, or indications of, influences or similarities among authors or works.”

Thursday, April 19, 2007

John Bryant, The Fluid Text, A Theory of Revision and Editing for Book and Screen

I'm posting this for Ileana. -- Doug

Definition

"A fluid text is any literary work that exists in more than one version. It is 'fluid' because the versions flow from one to another."

"Fluidity is inherent both in literary texts and in the process of writing." Bryant considers all the variants, even those unauthorized by the authors such as adaptations in other media, as fluid texts. The so-called definitive text is a multiplicity of texts.

1. THE TEXTUAL DEBATE: INTENDED TEXTS AND SOCIAL TEXTS

The analysis of textual fluidity is a critical approach to both the act of writing and to the act of editing. When referring to Melville's case, Bryant makes no distinction between scribbles and revisions; they are both considered "textual fluidities," and, consequently, transcription of an almost illegible text and editing become equivalent challenges. His intention is to reveal their "critical nature" from the point of view of the author. He supposes there is a critical intention behind each correction made by Melville and considers that editors have to become "managers and explainers of the fluid text" (19).

The two versions of Melville's Typee represent a good example of a fluid text, one made fluid under external pressures and not only by authorial intention. Neither the 1968 critical edition of Hayford, Parker and Tanselle, which chose an early British version as copy-text and restored what was missing from the American edition, nor the McGannian formula of a social/historicist edition is satisfactory, since neither addresses fluidity properly. The critical edition, itself another variant added to the corpus of editions, simply describes fluidity without dealing directly with it. Such an edition contributes to the textual fluidity with yet another product. The social/historicist edition, on the other hand, presents documents from the past from the point of view of the present, decontextualizing them so that readers experience them as events.

2. WORK AS CONCEPT: INTENTIONALIST HISTORICISM AND THE ONTOLOGY OF LITERARY WORK

Bryant tries to identify the place of "textual fluidity" in the paradigm of intentions – revisions – product. Tanselle's ontology of the literary product, with its three states of work, text, and document, gives Bryant the opportunity to elaborate on the ineffability of the work not yet fixed in the words of a document. However, Bryant is not content with Tanselle's concept of the "ineffable" work and prefers the concept of "wording" as a better expression of what has not yet become a concrete, physical document. For him, intentions belong to the ineffable stage of the work, although they are also embedded in the physical document. Because of this process of transmission from one format to another, they can be altered, since "writing itself […] is always a kind of miswriting" (34). In search of intentions, the editor faces the same problem as the restorers of the Sistine Chapel, fully aware that "restorations invariably require defoliation" (35). The reconstruction of a writer's past intentions is thus partially a failure: the more instances involved in the process of creation, the more difficult the restoration. In the case of Typee, it is necessary to distinguish between the original text and the influences or contributions of each of the editors.

3. WORK AS ENERGY: MATERIALIST HISTORICISM AND THE POETICS OF SOCIAL TEXT

By "work as energy" Bryant means a text that presents various interactions between private and public aspects in the form of literary versions, revisions, and adaptations, all made accessible to readers. Confronted with such a complex and fluid text, editors cannot but focus on the process of change, making it evident while looking for the meaning of "the forces of change" (61). Because Bryant constantly returns to Tanselle's concept of the "work" as an open concept not necessarily limited to the concrete object, and to McGann's concept of the "work as event," it is not clear what his critique intends: to show the limitations of the previous definitions, or to use them as far as possible to make his own intelligible. He uses Einstein's connotations of energy to emphasize the paradoxical nature of the work as a physical text that reflects, in the flow of its variations, the alternation of individual and societal contributions, all the elements participating in literary production: "writers, editors, readers, material products, and social events" (62). Bryant is interested in the effects of cultural or personal meddling on the poetics of revision. Changes that occur from one version to another should be made available so that readers have access to the cultural dynamics manifested through transitions.

4. THE FLUID TEXT MOMENT: VERSIONS OF THE VERSION

By a fluid text moment, Bryant understands a moment in which the author was determined to change his or her text in order to see it published (he cites Wright's Native Son), or in which a careless typo determined a series of readings and interpretations (Whitman's Out of the Cradle Endlessly Rocking). This last example, "I a reminiscence song/sing," leads him to the more substantial question: "What makes a version worthy of critical analysis, and hence editing?" (70). Despite Gabler's genetic edition principles, according to which all variants are equally relevant, and Stillinger's "textual pluralism," according to which each version displays a different authorial intention, Bryant believes that the entire range of versions makes a whole and each variant depends on the previous one. The concepts of "variation" and "version" do not serve his theory; a more suitable concept for Bryant is "revision," as "the root cause of all versions" (87).

5. READERS AND REVISION

8 characteristics of versions:

  • Versions are physical but not always available.
  • No version is entire of itself.
  • Versions are revisions.
  • Versions are not authorization.
  • Versions are different.
  • A version must be more than the sum of its variants.
  • Versions have audiences.
  • Versions are critical constructs.

Modes of Production of the Version:

Creation as it is documented in journals and letters, notes and rehearsals, working drafts, circulating drafts, fair copies or typescripts;

Publication: manuscript publication, print publication;

Adaptation in another format or genre, i.e. films.

Three kinds of readers:

  • 1st reader: writer who reads his own work;
  • 2nd reader: writer who self-edits and revises his work;
  • 3rd reader: the 1st external reader (Eliot for Pound).

Bryant draws attention to the role played by cultural revisions, which make the works "look like us" (109): that is, a refashioning of the work to resemble an audience far removed from the original.

6. THE PLEASURES OF THE FLUID TEXT

Bryant idealistically hopes that he may offer for reading "the interval between any two versions" (114). His edition "imaginaire" is the editor's image of a writer's originating condition of work. This kind of self-conscious edition has two functions, one rhetorical and one heuristic. It is meant to offer the reader a historicist and cultural experience; the reader must learn new reading skills while reading and while getting a sense of the historical context of each revision. For such an ambitious project, Bryant supports an editorial strategy that combines print and electronic technologies. The tyranny of the single reading text will be abolished. Readers and editors become two facets of the same object: one consumes what the other conceives as the most reliable edition. He promises the pleasure of reading a fluid-text edition "despite the formidable barriers of textual fluidity" (123). He proposes that the editor use the apparatus (notes, back matter; even though he is fully aware that readers may ignore these) in a more visible way; the editor has to map out the distances between different versions and make the versions' historicity evident, transforming the reader into a less subjective interpreter. Presenting Gabler's synoptic edition of Ulysses along with the critiques of McGann and Tanselle, Bryant praises the achievements of such an edition, commenting only that Gabler should have used the conventional apparatus to unload the text. The remaining problem is to find the right proportion between the text and the accompanying apparatus so as to design an enticing page; Bryant hopes that a well-balanced apparatus will please the reader.

7. EDITING THE FLUID TEXT: AGENDA AND PRAXIS

Editing a fluid text should attend to the changes that occur from one revision to another. Although fluid text editions are fragmentary, they must present unfamiliar material in a familiar way and must provide information about the history of revision; that is, editors should become "narrators of revision" (144). Fluid text editions should be critical, pedagogic, and comprehensive, and should realize the book/screen synergy. According to Bryant, only a combination of codex and hypertext can provide the best format. An electronic archive may store revisions, and the computer may "emulate" textual fluidity (he does not say how!). Intraversional and interversional connections may use electronic formats to display a series of narrative revisions, the relationships inside a series of revisions of a work or outside those series, and relationships between revisions of other works by the same author.

Textual fluidity may expose the intricate process of writing insofar as it is determined by multiple factors, whether external to the writer or part of the larger context of writing. Its audience includes academics, critics, professional writers, and anyone engaged in cultural pursuits.

Monday, April 16, 2007

New Invites

Apparently people are having trouble logging into our Blogger site since Blogger decided to upgrade.

I was able to create a "New Blogger" account using my normal e-mail address, which is also now a Google account (though not a G-mail account). This doesn't entirely make sense to me.

Anyway, if you need a new invite, let me know. doug@wordswordswords.us

Wednesday, April 11, 2007

From George Bornstein's Material Modernism: "Many of us hope that the potentialities of an electronic environment will overcome some of the limits of codex display. For example, an electronic edition could handle significant variants in a far superior way, enabling the viewer to select differing versions of the entire poem -- perhaps two, though theoretically a larger number would be possible too -- and to display them either side by side or interlinearly at the viewer's option. Similarly, multimedia presentations would enable far superior forms of annotation [ed. note: like this!], enabling the viewer to click on the word you and find not only an explanation of Yeats' relation to Maud Gonne, but perhaps a photo of her and excerpts from her own autobiography. Such annotation would need to be layered hypertextually to enable the viewer to select what sort of information he or she wishes. And, finally, computer editions would enable fuller display of bibliographic codes, allowing for both words and pictures of earlier incarnations of a poem. That point deserves further probing. On the one hand, an electronic edition could display pictures of the material embodiments of the text -- cover design, title page, layout, and so forth. On the other hand, and crucially, in the very act of presenting such material the electronic edition indicates its difference from such material. For the computer is a wholly different environment from the book, just as the book is from the manuscript. By displaying previous print incarnations of a poem, the computer emphasizes that it itself is not such a print incarnation, but rather a translation of that into a different medium. We have almost a pure case of Derridean presence and absence: the more the electronic edition invokes images of print incarnations, the more aware the viewer becomes that the electronic edition is not such an incarnation.
That is both its strength and its weakness: its strength because of superior abilities to display and to manipulate word and image mechanically, and its weakness because of its perpetual inability to create the effect of what it is imaging, namely manuscripts and printed materials."

Bornstein's use here of the terms 'word' and 'image' -- as opposed to 'substance' or 'material,' which he does much with at other points -- emphasizes what I continue to consider the unsettling immateriality of electronic texts. There is something vaguely sinister in the way authors -- Bornstein, McGann, and Landow among them -- perpetually laud the benefits of electronic editions while holding tightly to aspects of textuality that are rooted in print technology. To borrow a metaphor from science fiction, it's the same sinister feeling as when an android is capable of almost but not quite entirely mimicking sentience and what we call humanity. The gap between the original, 'natural' thing and its imitator is perceived as wide not because it is wide, but because it is perceptible. It is clear that hypertexts have potentials that have yet to be realized -- but we are trapped in the two-dimensional world of the codex, and thousands of years are not so easily shaken off, no matter how many hyperlinked webs you try to establish.

Then, someone alerted my attention to this YouTube video of a demonstration for something called Perceptive Pixel, which is both dazzling and, to someone for whom reading is partially a tactile undertaking, reassuring. Notice, however, that when typing needs to be done, a keyboard appears on the screen. There's still some tendency to let print technology dictate form -- although the three-dimensional graphs at the end are clearly an attempt to pull away from that.

Andrew Dillon's Designing Usable Electronic Text

I wanted to run the book I have selected for my presentation next Wednesday by everyone.

As I mentioned during the first week, I was thinking of doing something from the IT perspective as opposed to the textual studies perspective. In searching for a book, I found this one, Designing Usable Electronic Text by Andrew Dillon. While not actually from the IT perspective, it is certainly from a different one.

So I'd appreciate any feedback, particularly if you don't think this book would be interesting for the class.

Andrew Dillon is currently the dean of the School of Information at the University of Texas. Prior to that he was a professor of information science at Indiana University. Perhaps more interestingly, before that he was a member of the HUSAT research institute, now part of the Ergonomics & Safety Research Institute. The focus of his study in the area of electronic text seems to be on how the presentation of electronic text affects human beings both physically and psychologically.

The book is now in its second edition, first published in 1994 and revised in 2003 to deal with the increased proliferation of online texts. Dillon states the purpose of the book as follows:

The major aim of the present work is to examine and subsequently to describe the reading process from a perspective that sheds light on the potential for information technology to support that process.


From my initial browsing, the book seems to be drawing on psychological theories of reading, physical effects of computers, and electronic text design, though Dillon claims its subject matter is none of these.

On an interesting side note, I've decided to read the electronic version of this book, available from Mobipocket.com. I'll probably post some observations about this electronic format later. I should mention that one feature tells you how long it should take you to read the book. I wonder how they calculate that. Anyway, the computer says it will take me 23 hours, so I'd better get cracking.

Monday, April 09, 2007

Adding CSS to Dido

I would like to add CSS style sheets to the entire Dido site. If you are not familiar with CSS, you can read an introduction here. Basically, "Cascading Style Sheets" are style sets that can be applied to multiple web pages, allowing you to quickly change the look and feel of an entire site. CSS works alongside HTML. CSS styles, whether embedded in the [head] tag or imported from a .css file, seem to be the norm of web design these days. I like them because they eliminate a lot of the "junk" from HTML files.
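To make the two approaches concrete, here is a minimal sketch of both. The file name (dido.css) and the class name are hypothetical, not taken from the actual Dido site:

```html
<!-- Option 1: styles embedded directly in a single page's head -->
<head>
  <style type="text/css">
    body  { font-family: Georgia, serif; }
    .note { color: #555555; font-style: italic; }
  </style>
</head>

<!-- Option 2: styles imported from a shared .css file; every page
     that links the file picks up site-wide changes automatically -->
<head>
  <link rel="stylesheet" type="text/css" href="dido.css">
</head>
```

With the second option, editing dido.css once restyles every page that links to it, which is what makes CSS attractive for a multi-page site.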

I have already added CSS to the course homepage and the version page to give an example.

Discussion about whether or not CSS is appropriate for the site is welcomed, and, if there is agreement that CSS is the way to go, then we might think about how we are going to define the various styles (what colors, fonts, etc). It is also possible to implement a "style switcher" that will allow visitors to the site to choose from a variety of style sheets.
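A style switcher of the kind mentioned above is usually done by giving each alternate stylesheet link a title and toggling which one is disabled. The sketch below is a hypothetical illustration, not code from the Dido site; in a browser you would pass it the page's stylesheet link elements (e.g. from document.getElementsByTagName('link')):

```javascript
// Hypothetical style-switcher sketch: given the page's titled
// stylesheet <link> elements, enable the chosen one and disable
// the rest. Browsers honor the `disabled` property on <link>.
function switchStyle(links, chosenTitle) {
  let found = false;
  for (const link of links) {
    const active = (link.title === chosenTitle);
    link.disabled = !active; // only the chosen sheet stays enabled
    if (active) found = true;
  }
  return found; // false if no sheet with that title exists
}
```

The visitor's choice could then be remembered in a cookie so the preferred style persists across pages.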

Also, I am having a problem with Catalyst that maybe some of you have encountered. I can download pages fine to make changes, but if I try to upload the page, I am unable to overwrite the existing file. What I have been doing is changing the name of the existing file and moving it to another folder, then uploading the updated file. Is there some better way of updating a file?

Revising Dido

I thought I'd let you know that one of the things I'll be working on for the Dido site is establishing/fixing some links between the Chaucer text and its sources. I would be happy to have others engaged in a similar effort. However, to avoid loss of new links, it probably would be wise for us to coordinate the uploading of revised pages. I'm currently working on the LGW text and the text of Heroides VII.

While We're Referencing Sterne ...

Landow just mentioned Laurence Sterne for about the umpteenth time, which led me to see if someone had done the obvious (if laborious) thing and put a hypertext version of Tristram Shandy somewhere out there in nebulous cyberspace. In fact, they have. A decade ago, a group of ambitious folk set up this set of webs of themes, characters, and digressions -- but in what the website refers to as 'true Shandian fashion,' it is glaringly incomplete and only contains two volumes of the actual text. I haven't yet learned if work is still being done on the project, though I suspect not. Which is a shame -- this website is pretty fun. I spent about ten minutes reading about noses ...
McGann begins by stating that Radiant Textuality is organized around two ideas about humanities-based digital instruments:

1: "that understanding the structure of digital space requires a disciplined aesthetic intelligence," and that texts are our best models of said intelligence in a highly developed form (RT, xi) .

Or, restated, that we need to take a greater interest in the ways that poems are formed and mean when we try to think about what to do with them in a digital space.

2: that "digital technology used by humanities scholars has focused almost exclusively on methods of sorting, accessing and disseminating large bodies of materials, and on certain specialized problems in computational style and linguistics" -- and thus, humanities scholars have been much less engaged with questions about interpretation and self-aware reflection in digital spaces.

McGann uses his work developing the Rossetti Archive in connection with IATH as a way of exploring these two ideas further, with the overarching goal of "imagining what we don't know." Radiant Textuality describes McGann's progress -- but more importantly (by his own admission), the varied failures of the Rossetti Archive, and the importance of these failures to the ongoing work of digital textuality.

His point, however, is not that scholars should be basking in the glow of their computer screens in order to imagine what they don't know, with the digital anarchy newly refreshed each morning -- but rather that we need to be able to imagine what we don't know in a "disciplined and deliberated fashion" (18). This is tied into a prediction that "the next generation of literary and aesthetic theorists who will most matter are people who will be at least as involved with making things as with writing text" (19). Throughout the entire work, the emphasis is on performativity -- on how it affects receptions of poems in print, and
on how best to encourage performative receptions of texts in digital spaces.

In discussions, we have returned repeatedly to the importance of accuracy; to editorial choices and the responsibility placed on the one who makes them. One of the benefits that we seem to see in electronic editions (and this is more true of internet than of CD-ROM editions) is that they can easily be updated to reflect corrections. This is a fairly typical viewpoint. But McGann suggests, as an alternative, treating editing as a theoretical pursuit, falling somewhere among three exemplars.

The first is the Kane-Donaldson edition of Piers Plowman, in which all editorial choices have been governed by a larger complex hypothesis about the work -- so that all the readings are connected with each other, and, by implication, all are correct. But the edition is admittedly created based on Kane and Donaldson's hypothesis of the work.

The second type is the "definitive" editing style that has been prevalent throughout the 20th century.

At the other end of the spectrum, in contrast to both of these, is the "un-editing" style promoted by Randall McLeod.

It is the first and third of these that are most important for the direction that McGann wants to take in his editing pursuits -- to combine McLeod's emphasis on the original materials with Kane and Donaldson's emphasis on developing complex hypotheses about a work. The benefit of technology is that it allows an edition to be designed in order to go through complex interactive transformations (81), not just at the hands of the supposed editors, but at the hands of all users and viewers -- and it is in the processes of transformation and mutability that any edition might be able to "undertake as an essential part of its work a regular and disciplined analysis and critique of itself" (RT 81).

These are not the goals that were in place at the beginning of the construction of the Rossetti Archive, which was begun in the context of the then-new SGML technology, and interest in tagging and linking between words. According to McGann, tagging and linking will eventually become a central act in text reception, as important as reading or writing (RT 68). But this (as well as the general principles of the TEI) treats all texts as "ordered hierarchies of content objects" (139), i.e., traditionally logical informational structures. And this principle "violates some of the most basic reading practices of the humanities community, scholarly as well as popular." We do not read this way. And this does not even address the problem of how to integrate images and visual content, which cannot be searched the way that verbal content can.

The Rossetti Archive "failed" in its original incarnation because of the conflict between trying to produce an edition with all the benefits of both the "critical edition" and "facsimile edition" styles -- and to deal appropriately with the verbal and visual components of Rossetti's work -- AND the pull towards theoretical editing -- imagining what you don't know.

This leads into McGann's ideas on deformance, which provide the thrust for the rest of the work.

To explain, first, consider Rossetti's painting, "The Blessed Damozel."

Here is a close-up facsimile of the original.

McGann and a colleague began playing with the image with a filtering program, and produced this distortion.

And then, using a larger area of the picture, produced this distortion. And finally, this one, which allows a new emphasis on the formal relations of color between the flesh of the damozel, the stars in her crown, and the world behind the embracing lovers.

But according to McGann, you can take this even further:

[ed. note: a series of further filtered images appeared here in the original post.]
In these images, you can continue to look for formal colour ties highlighted by the filter, but in McGann's view, they also exhibit potential for rethinking the value of subjective aesthetic engagement: "They came to being through a series of obscure intuitive operations, a series of transformations and transformations of transformations. The series ended when the image proved satisfying."

Deformance, then, is the work of transforming an aesthetic work in order to learn/imagine what you don't know.

And in the time period of Radiant Textuality, the ultimate version of that arrives in the format of the Ivanhoe Game.

Rules for the game are here, but the six ground rules are as follows:

Ground Rules
1) Give each of your posted moves a title.
2) Indicate how your move is to be connected to other moves or to the source text. Example: "This move replaces paragraph 3 and is linked to Guildenstern's "Still Waiting" move."
3) Keep to the time limit for this game, midnight August 17th to midnight August 25th, but make as many moves as you like in that period.
4) You must adopt a role and keep a private role journal outside of the blog system.
5) Conceal your identity during play by editing your Blogger profile to provide a false "nickname."
6) You must have a high tolerance for unimaginative and infelicitous interfaces. The point here is just to store the content of our moves. Don't get hung up on Blogger.

A history of the moves in one of the early sessions of the game is here. The appendix in Radiant Textuality also includes a set of moves, and the accompanying player file, in which McGann chose to play the role of scholar, collector, bibliographer, and forger Thomas James Wise.

The idea is that the taking on of another role allows one a certain distance from one's actions -- and therefore, better ability to be self-critical. One also, therefore, has more freedom -- and less reluctance -- about self-amending later on.

Saturday, April 07, 2007

A New Edition of the Bayeux Tapestry?

I dislike being the only person posting on this blog. But this animated version of the Bayeux Tapestry is too interesting *not* to post.

What do you think? It's very editorial in terms of the score, but as a pedagogical tool/component of a student edition making use of available technology, I think that overall, it works well.

Wednesday, April 04, 2007

LibraryThing

This is just a quick post to link to LibraryThing, and to correct what I said in class, as it has not been bought by Amazon, and instead is simply 40% owned by ABEBooks. Still a corporation, of course -- but the correction is important.

And I'd also like to call attention to the two LibraryThing blogs: this one is the original, and covers site updates/changes, etc. The other blog, Thingology, discusses "the philosophy and methods of tags, libraries and suchnot" -- and in doing so, they approach some of the issues that Landow and Shillingsburg deal with, specifically democratization and access choices.

Monday, April 02, 2007

I think I should use this post to provide some concrete examples and links to things that I mentioned in class today; if I forget something and you want to hear more about it, just say so in the comments, and I'll edit as necessary.

First, here is the TruthLaidBear blog ecosystem, with rankings by links. If you look at the box on the righthand side of the page, you can click and see the rankings via visitor traffic.

Second, here is a Washington Post article on blogger involvement in the takedown of CBS news anchor Dan Rather back in 2004. In short, Dan Rather and CBS presented documents about President Bush's military service record, and one blogger pointed out features suggesting the documents were forged. This sparked more research by other bloggers, who went looking on the Internet for experts on 1970s typewriters (on which the documents had supposedly been typed). In the end, the bloggers were right, and CBS and Dan Rather lost some credibility. After this incident, bloggers were more successful in lobbying for the rights accorded to more "traditional" journalists; earlier this month, two bloggers were given media seats at the Libby trial.

One of the other major factors contributing to a rise in blogger credibility is the value of bloggers in emergency situations, most recently Hurricanes Katrina and Rita. In contrast to what we talked about this afternoon (the possibility of websites designed to look much more sophisticated than they really are), in national disasters, I would argue that established corporate news sources lose credibility, while people writing from inside the situation gain it. Despite the fact that emergency situations often entail a loss of electricity, people in New Orleans were surprisingly resourceful about posting updates based on what they saw happening around them. These were supplemented later by aid workers who came in and then blogged their experiences each night after returning to places with power.

I also mentioned the academic blogging controversy. The two articles by Ivan Tribble aren't available for linkage, but you can access them online through the UW Libraries -- just search for "chronicle of higher ed," make sure your off-campus access is turned on, and then search the Chronicle homepage for "tribble." You'll pull up Tribble's original article, "Bloggers Need Not Apply," as well as a couple of responses to it, and then his response, two months later. I particularly recommend the response by Rebecca Anne Goetz, who, at the time, was finishing a dissertation in History at Harvard.

You can find many more responses, too -- just google "ivan tribble."

Finally (at least for tonight), it would be silly of me not to mention Second Life, which appears to be part of the vanguard of internet technology. And in many ways, it seems like it achieves the idealized, democratized, intelligibly organized medium that Landow is in search of. But oddly enough, it does this by blending technology and naturalism -- it tries to circumvent reader disorientation by putting information in the form of the everyday world. Does it work? Is Second Life the ultimate hypertext, and does it circumvent the problem of the rhizome by making it look familiar?

Well, I don't know. I don't even have a Second Life account, nor do I plan to get one. But if you want to read more about what other people think, then this page has stories of Second Life in the news.

I may manage to say more tomorrow; for tonight, I'm done.