Monday, October 8, 2012

"Abstraction is hiding things"


A couple of days ago, I came across David Kaiser's recent review of Turing's Cathedral in the London Review of Books (http://www.lrb.co.uk/v34/n18/david-kaiser/boiling-electrons)*. The article snagged my interest because it touches on something we mentioned briefly in the last class: the historical trajectory of the term "computer" as it moved from designating a person to a machine.

Kaiser notes that during WWII, complex calculations "were largely carried out by chains of human operators armed with handheld Marchant calculators," in a process made possible by dividing "the calculations into discrete steps." After this division, "assistants – often the young wives of the laboratory’s technical staff – would crunch the numbers, each one performing the same mathematical operation over and over again. One of them would square any number given to her; another would add two numbers and pass the result to the next down the line."
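
To make that division of labor concrete, here is a minimal sketch of the same arrangement in modern terms (my own illustration, not Kaiser's): each "worker" performs a single operation and hands its result down the line.

```python
# A chain of single-purpose "workers," each performing one operation and
# passing its result to the next: the same division of labor Kaiser
# describes, rendered (anachronistically) as code.

def square(x):
    return x * x                # one worker only squares

def add_seven(x):
    return x + 7                # another only adds (7 is an arbitrary choice)

def run_chain(value, workers):
    for worker in workers:
        value = worker(value)   # hand the intermediate result down the line
    return value

print(run_chain(3, [square, add_seven]))   # (3 * 3) + 7 = 16
```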

I mention this tidbit about the origins of computing because it points to something important if we are to think about code in any critical or systematic way: Historically, the instructions for computing precede the hardware for computing.

The LRB article also goes on to talk about one of the first computers, Eniac. Eniac could execute only fixed programs, "which had to be set in advance by physically rewiring components before any calculations could be performed." Rewiring those components "took weeks of swapping cables, alternating switches, checking and rechecking the resulting combinations." In the case of Eniac, software--if it can even be considered as such--is identical with hardware.

The two points--computers as people, and the rigidity of Eniac's instructions--problematize what we might intuitively think of when we hear the term "software." Are instructions themselves alone enough to constitute software? What exactly are we talking about when we talk about code?

In keeping with the arguments we saw in "Intermediation," Hayles seems to count code as a series of layers, each distinguished by increasing complexity. For Hayles, signifiers start as voltages, and signifieds are "the interpretations that other layers of code give these voltages." The strange thing about this scheme, however, is that it is not precisely clear where hardware ends and software/code begins. (This problem even creeps into Marino's CCS, when he notes that attention to hardware issues such as processor speed may be key to CCS.) With these definitions alone, it seems that the rigid programs of Eniac could be software. But if that is the case, what is to differentiate between software and medium? What is the difference between using software to shape output on a screen and using a pencil to shape letters on a page? Can we distinguish varying levels of materiality between the two processes?
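
To see what this layering looks like in the smallest possible case, here is a toy sketch of my own (not an example Hayles gives): the same bit pattern receives a different "signified" at each layer that reads it.

```python
# The same physical pattern, interpreted differently at each layer.
# (A toy illustration of Hayles's layered signification, not her example.)

bits = "01000001"         # layer 0: a pattern standing in for high/low voltages
number = int(bits, 2)     # layer 1: read as a binary numeral, it signifies 65
character = chr(number)   # layer 2: read through a text encoding, 65 signifies "A"

print(bits, number, character)   # 01000001 65 A
```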

If Hayles points to the continuity between hardware and software, Manovich points to the continuity between software and larger, non-computing, human enterprises. For Manovich, cultural software is much more than a metaphor: Software quite literally underlies most--if not all--contemporary production of culture. This is, of course, what makes it worth studying--and yet it is not clear why software should be privileged over the material infrastructure and hardware that enables software in the first place. Is software more crucial to culture than a cobalt mine in central Africa?

Hayles quotes Accelerated C++: "Abstraction is selective ignorance." Abstraction--and selective ignorance--is necessary in defining any object of study. So the question is: Are software/code studies abstracted enough to be useful frames for analysis?


-----------------------------------------------
*I'm afraid that the article is behind a pay wall. I checked if UW has library access to it, and it does... in microfilm. How's that for material constraints?

6 comments:

  1. First, thank you for taking time with and giving such a vivid illustration of the history we're dealing with in this class -- "the young wives of the laboratory's technical staff" are important antecedents, easily forgotten.

    Secondly, I'd like to follow you into the questions about software and hardware, and whether we should privilege one over the other (or if not, how we might balance them). The "material infrastructure and hardware that enables software in the first place" seems fundamental to the discussion; Manovich doesn't abandon this in his article, and seems to treat it as one of many contexts, but he certainly doesn't focus on it to the level of explicitly remembering a cobalt mine in central Africa. I wonder if petty theft of one of Hayles's concepts might help us here: software and hardware/infrastructure seem to function as partners. Can we simply add them to speech, writing, and code, and expand the collaborative frame?

    I'll point to one moment in the Manovich that might move us forward. He mentions underlying hardware and then parenthetically notes that this is being addressed by "platform studies." Ian Bogost's site gives a brief overview and directions for further discussion: http://www.bogost.com/books/platform_studies.shtml

  2. Yes, thank you for continuing the historical trajectory of the term “computer”. Friedrich Kittler’s _Gramophone, Film, Typewriter_ is another good source for illustrations of “women’s work” in technology. Women as receivers of information: typist, computer, switchboard operator, transcriptionist, etc. Women could transmit the message, just not create it. In keeping with the history of software, hardware, and human interaction, here is a link to the Living Computer Museum here in Seattle: http://www.pdpplanet.com/default.aspx. I worked on the project in its pre-museum phase, unloading machines off trucks and cataloging manuals.

  3. Monday's discussion got us some of the way toward understanding Hayles's big picture, but there was one issue that I wanted to bring up, if only for its lack of clarity. In trying to think through Hayles's argument I frequently returned to her mention of "the labor of computation" (41). Her phrasing here is key as she makes her case for the "complexity of code." Where does the complexity of code reside, she submits, but in computation "that again and again calculates differences to create complexity as properties of emergent computation" (41). Right away we're to understand that the signifying process of code is fundamentally different from those processes Saussure and Derrida observed in natural language -- this is merely a matter of fact. Although a rehearsal of Derrida's recognition of "the material constraints on the choice of sign" and tracing back to the Logos (42-43) "performs" for us the fundamental written nature of spoken language, it actually takes us further from an understanding of the complexity of code: "the trace has no positive existence... This is even less tenable than Saussure's arbitrariness" (46). A computer "reads" voltage change -- that's it.
    What bothers me is that while "the juice" for her is in the complex dynamics generated in code's interaction with speech/writing, she also needs code to be free of the metaphysical baggage with which natural language is so inconveniently laden. The character of code is "in striking contrast," she says, "to communities that decide whether an act of speech or a piece of writing constitutes legible and competent utterance" (50). Code is "executable," and singular in its executability. Yes, this is sort of the payoff, the coup de grâce for theory when it comes to code. We need to revise and make changes, but it certainly doesn't get rid of the metaphysics. She reminds us on page 60 that the computer is a mind-amplification tool, and as such neither supplants nor subsumes speech/writing but brings them into intermediation with code. So, just when we're supposed to set our metaphysical baggage down and leave it behind, we find there's more to pack.
    Getting back to the question of "abstract enough," I find myself asking the same question Rachel does. Hayles seems to have had an opportunity to expand with her "labor of computation" but instead -- at least in this piece -- squanders the conceptual space it earns her in favor of placing focus on "the voltage," a strict hardware-software framing. And while I can hear her responding by saying, "Yes, I'm interested in code for precisely this reason -- just think of the possibilities we encounter when code is able to meet nanotechnology, and, just like DNA, significantly alter this very hardware-software relationship," I think her conception of labor falls victim to this very narrow focus.

  4. I may have a more basic question regarding the Hayles article than those posed above: does it even make sense to compare the worldview of speech and the worldview of writing with the worldview of code? Hayles argues that code has neither the transcendental signified of Saussure's speech system nor the trace of Derrida's grammatology. At the machine level, the computer has no tolerance for ambiguity, but even at higher levels--object-oriented programming, databases, etc.--the ambiguity is carefully prescribed. It seems that it would have to be: in Hayles's signifier-signifier framework for code, what is signified in C++ will eventually become the signifier for some function at the binary level. It seems that there would be no room for différance, no arche-writing, or "fecund force" (46).

    Hayles's examples of code's tolerance for ambiguity (Microsoft's spell checker [46], digital media based on databases [53], electronic literature [54]) express that ambiguity only at the level at which the code is embedded in an interface with humans. To be understandable to other code, it would have to be digitized again. As Hayles explains, "every change in voltage must be given an unambiguous interpretation, or the program is likely not to function as intended. Moreover, changes on one level of programming code must be exactly correlated with what is happening at all the other levels" (47). It seems that the analog for code in the natural world might not be language so much as biological or chemical systems, a connection which Hayles suggests on page 41. Then might Ellen Ullman's words, "A computer program has only one meaning: what it does.... Its entire meaning is its function" (48), be taken at face value?

    (On a minor note, I thought it was very sweet that machine intelligence at the basic level is also enabled by difference or maybe even différance, albeit of another kind, as voltage itself is electrical potential difference.)

  5. This comment has been removed by the author.


  6. I must confess I found Marino's enthusiasm for "mak[ing] the code the text" deeply disconcerting. I have always viewed code as a means to some end. I know that a great deal of creativity goes into its production, but fundamentally isn't coding a form of problem-solving? Doesn't language or sign-based interaction require a two-sided desire to communicate? I may be far off the mark, but as I see it, code is the digital equivalent of a Rube Goldberg machine. Execute the code, knock over the first domino - what's the difference? The computer is none the wiser.

    My own knowledge of coding is excruciatingly limited (HTML tags and the like), so I'm no authority. And here a concern rears its head: I fear that very few of us are such authorities. A humanities scholar reading code as text could discover embedded metaphors, social microcosms, etc. that prove valuable and interesting - or he/she could be bumbling around projecting pseudo-significance onto what is nothing more than a clever solution to a programming problem.

    Marino would probably argue that there is no such thing as JUST a clever solution, since CCS views code as inherently "having meaning beyond its functionality since it is a form of symbolic expression and interaction." This is difficult to refute, especially given that Marino cautions against viewing code as discrete units of meaning in favor of a broader, contextual perspective. Fair enough - look hard enough and you'll see society reflected in any human production. But my inclination is to study that which code produces, not the blueprint.
