A Quantitative Approach to the Character System of La Celestina

The winter in Lincoln, NE was harsh, and very long; so much so that I thought what George R. R. Martin had foretold had finally arrived. But they say every cloud has a silver lining, and the long afternoons spent holed up in the warmth of the library allowed me to explore beyond the theoretical texts on the Digital Humanities and to play with the different tools available for our work. With this post I simply want to show a methodology that those of us who are new to this field can follow while we learn to program and to analyze texts at scale.

First of all, I chose the text by Fernando de Rojas for two reasons. Its nature as a novel in dialogue makes it easy to count the variables assessed here: speakers, addressees, the weight of each message (measured in number of words), and connections. Second, I already knew its character system in advance, so applying Woloch's notion of character space and checking whether the computational methods used gave sound answers should not prove difficult.

Methodology:

With the "raw" text downloaded from Project Gutenberg, the simplest thing to do is count the number of speeches each character has, in order to test one of the basic theories about this work: the change of title from (Tragi)Comedia de Calisto y Melibea (between 1499 and 1502) to the better-known La Celestina, from around 1569 onward. (Table 1)
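As a sketch of that count, the dialogue turns can be tallied with a short script. This is a minimal example, not the exact procedure behind Table 1: it assumes each speech in the plain-text edition opens with the speaker's name in capitals followed by ".-", which is one common layout in transcriptions; the pattern would need adjusting to the actual file.

```python
import re
from collections import Counter

def count_speeches(text, characters):
    """Count how many speeches (dialogue turns) each character has.

    Assumes each speech is introduced by the speaker's name in capitals
    followed by a period and a dash, e.g. "CELESTINA.-"; adjust the
    pattern to the layout of your own copy of the text.
    """
    counts = Counter()
    for match in re.finditer(r'^([A-ZÁÉÍÓÚÑ]+)\.-', text, flags=re.MULTILINE):
        name = match.group(1).capitalize()
        if name in characters:
            counts[name] += 1
    return counts

sample = """CALISTO.- En esto veo, Melibea, la grandeza de Dios.
MELIBEA.- ¿En qué, Calisto?
CALISTO.- En dar poder a natura que de tan perfecta hermosura te dotase.
"""
print(count_speeches(sample, {"Calisto", "Melibea", "Celestina"}))
```

Run over the full e-text with the complete cast, a tally like this produces the raw numbers behind Table 1.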

If we look at Table 1, Celestina is indeed the character with the most speeches in the whole work. The go-between speaks a total of 281 times (21.85%), followed by Calisto with 227 speeches (17.65%), Sempronio and Pármeno (213 and 162, respectively), and Melibea with half as many as the first two: 117 (9.09%). According to Moretti (2011), Celestina would thus hold a large share of the "character space," drawing more attention from the reader/spectator; nothing new or very interesting.

Now, if we follow the system proposed by Sack (2011), that is, calculating each character's presence by the number of times each one is mentioned, we would see the attention shift to Calisto with a total of 157 mentions (17.1%), followed by Melibea, who is mentioned 148 times (16.12%), then Sempronio, Celestina, and Pármeno. So, by mentions, the lovers are indeed the protagonists of their tragicomedy. But this is not enough.

And what happens if we add both figures together?
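Mechanically, the sum is just the addition of the two tallies. The sketch below uses the speech totals reported above; only the two mention counts actually given in this post (Calisto's and Melibea's) are filled in, so the combined figures for the remaining characters are incomplete.

```python
from collections import Counter

# Speeches per character in the whole work, as reported above.
speeches = Counter({"Celestina": 281, "Calisto": 227, "Sempronio": 213,
                    "Pármeno": 162, "Melibea": 117})
# Mentions: only the two counts reported in the post are included here.
mentions = Counter({"Calisto": 157, "Melibea": 148})

# Counter addition merges the two measures of "character space".
combined = speeches + mentions
print(combined.most_common())
```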

Figure 1

What happens here is what happened to Sack with Dickens's Bleak House (132): it is impossible to determine at first glance who the main character is, since the drop from character to character is gradual. In our work, however, there are always five characters above the mean (98.9 speeches), and we can already consider them the most important nodes of the network we will build later.

But it is far more interesting to study how each character's speeches evolve over the course of the work, in order to know (without having to read it) which character acts in which scene. The tool Voyant takes the base text and counts the number of times a user-selected term appears, displaying a distribution of its frequency. We see that Celestina only comes to occupy a large part of the narrative space from the second segment of the text onward (not to be confused with the Second Act), dominating the sphere of the plot until she disappears in the last two segments; notably, the more her dialogue increases, the more everyone else's decreases.
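Voyant's trends graph can be approximated in a few lines: split the text into equal-sized segments and count the term in each. A toy sketch (Voyant itself applies its own tokenization and stop-word handling):

```python
def term_frequency_by_segment(text, term, n_segments=10):
    """Count occurrences of a term in each of n equal-sized segments,
    mimicking the distribution graph that Voyant draws."""
    words = text.lower().split()
    size = max(1, len(words) // n_segments)
    segments = [words[i:i + size] for i in range(0, len(words), size)]
    # Any short trailing remainder is dropped to keep segments comparable.
    return [seg.count(term.lower()) for seg in segments[:n_segments]]

toy = "celestina " + "x " * 9 + "celestina celestina " + "x " * 8
print(term_frequency_by_segment(toy, "Celestina", n_segments=2))
```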

Figure 2

Like Moretti's first experiment, this study offers little information beyond what can already be obtained by reading the work. So the next step was to measure the number of words in each speech in order to calculate the total space each character occupies, and also to note the direction of each speech in order to understand how the characters are connected.

I do not yet know how to automate the word count per speech across a whole play (I am learning little by little with Dr. Matthew Jockers), so I chose the Twelfth Act for my experiment. I consider it one of the most relevant to the development of the plot, and one in which the true nature of the characters rises fully to the surface (in case there was any doubt about it during the first half of the work).

Table 3 gathers the total word count of every speech within Act 12 (including the monologues). We thus see that the domination of narrative space (understood here as number of words rather than number of speeches), which until now Celestina had held when counting the whole work, passes to Calisto, who utters 1,287 words in total. He would be the one receiving the most attention from the reader according to the theory of Woloch and Moretti (2013). Once the social network is built, this fact will be very helpful for understanding how Calisto manages to position himself as the central node linking several characters who, without him, would remain disconnected. In a way, his centrality and protagonism in the work is corroborated. But let us continue, because this is not entirely conclusive.
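The per-speech word count can also be sketched in code, even before automating it across the whole play. Again, the "NAME.-" header convention is an assumption about the transcription, not the format of every edition:

```python
import re
from collections import Counter

def words_per_character(act_text):
    """Sum the words uttered by each character in an act.

    re.split with a capturing group yields
    [preamble, name1, speech1, name2, speech2, ...],
    so names and speeches can be paired off.
    """
    totals = Counter()
    parts = re.split(r'^([A-ZÁÉÍÓÚÑ]+)\.-', act_text, flags=re.MULTILINE)
    for name, speech in zip(parts[1::2], parts[2::2]):
        totals[name.capitalize()] += len(speech.split())
    return totals

sample_act = """CALISTO.- Poned escalas y callad.
SEMPRONIO.- Señor, mira.
"""
print(words_per_character(sample_act))
```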

Does this mean that whoever utters the most words is the protagonist of, in this case, an act? Celestina, for example, again turns out to be one of the most persistent occupants of that space. After reading the act, somewhat betraying the idea of distant reading coined by Moretti himself (after all, we are only paying attention to a single work), we know that no one can escape her protagonism. If we return to the count of speeches, looking at this act alone (Table 4), we see that the roles are reversed and it is Calisto's servants who would hold the greater "character space," becoming the protagonists: Pármeno 38, Sempronio 34, Calisto 24, Celestina 15, Melibea 13, etc. New ideas arise from this.

On the one hand, we might think that the fact that these two act as go-betweens for Calisto and Celestina already makes them more relevant, especially if we add both their interventions together, which would exceed everyone else's. This leads us to categorize the set of characters by type, setting their individual numbers somewhat aside: the go-between, the nobleman, the lady, the manservant, the maidservant, the parents (the manservant type with 375 speeches in total and 72 in Act 12). This would expose the importance of the servant both in the life and in the works of this period, a figure often relegated to secondary character but who in many cases carries more weight (see Pierson).

Finally, we should ask which character has the greatest centrality in the act (and in the social network formed around it) by tallying who speaks to whom, that is, to whom each speech is addressed. The dialogue is widely distributed among all the characters: Calisto addresses five characters (one interaction is with himself), Celestina and Sempronio four each, and Pármeno four (counting his monologue). How much dialogue is exchanged between the characters? Celestina is the one who speaks most to Sempronio and Pármeno, although they speak back to her with less than half as many words (1042/875 vs. 412/89); then, Calisto devotes much of his dialogue space to Melibea (797 words), and she answers him with 531; clearly it is Calisto who drives the action in his dealings with the young woman, both quantitatively and in the traditional reading.
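Those directed exchanges can be stored as weighted pairs. The figures below are the ones counted above for Act 12; pairs not mentioned in this post are simply omitted, so the totals only cover these exchanges:

```python
# (sender, receiver) -> words addressed, from the Act 12 tally above.
flows = {
    ("Celestina", "Sempronio"): 1042,
    ("Celestina", "Pármeno"): 875,
    ("Sempronio", "Celestina"): 412,
    ("Pármeno", "Celestina"): 89,
    ("Calisto", "Melibea"): 797,
    ("Melibea", "Calisto"): 531,
}

def words_sent(flows, sender):
    """Total words a character addresses to others (within these pairs)."""
    return sum(w for (s, _), w in flows.items() if s == sender)

def words_received(flows, receiver):
    """Total words addressed to a character (within these pairs)."""
    return sum(w for (_, r), w in flows.items() if r == receiver)

print(words_sent(flows, "Celestina"), words_received(flows, "Celestina"))
```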

In the last figure, generated with RAW, everything noted above can be seen in a single image: the sender and the space he or she occupies, the number of messages emitted, and the number of words in the messages to each receiver:

Connections

Finally, with all the data gathered so far, and especially the data used in the last figure, it is possible to generate the social network of the Twelfth Act with the program Gephi (a generator of visualizations and of many headaches):
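The same network can be rebuilt in plain code to verify one claim from the visualization: that without Calisto the graph splits into two groups. The edge list below is a simplified subset of the Act 12 interactions described in this post, not Gephi's full graph:

```python
def components(nodes, links, removed=frozenset()):
    """Connected components of the undirected character graph,
    optionally with some characters removed."""
    adj = {n: set() for n in nodes if n not in removed}
    for a, b in links:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            seen.add(cur)
            stack.extend(adj[cur] - comp)
        comps.append(comp)
    return comps

cast = ["Calisto", "Melibea", "Sempronio", "Pármeno", "Celestina"]
links = [("Calisto", "Melibea"), ("Calisto", "Sempronio"),
         ("Calisto", "Pármeno"), ("Sempronio", "Celestina"),
         ("Pármeno", "Celestina")]

print(len(components(cast, links)))                       # one connected graph
print(len(components(cast, links, removed={"Calisto"})))  # splits in two
```

Removing Calisto leaves Melibea's side cut off from the servants and the procuress, which is exactly the connector role the Gephi network shows.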

Network

The male protagonist of the tragicomedy stands as the connector, the unifying element (node) between the work's two groups of characters: the upper class, Melibea's family, on the one hand, and the servants and the procuress on the other. It is in this latter group where, thanks to the weight of the connections between characters (the number of words exchanged between them), we can also see that the balance of "character space" tips toward them rather than toward the lovers. The three of them exchange a great many words, and if we recall Moretti's words ("the number of words tells us how much meaning the character brings into the play (and is often correlated with a discord with power,…)" (109)), we cannot overlook that these are precisely the three characters who disobey the social norms demanded by their environment, and who are punished for their wrongdoing immediately afterward.

Conclusions:

With this new approach it is possible to uncover different facets of the text, for example:

  1. The fact that the servants Sempronio and Pármeno, taken together as the "servant" prototype, speak more than the other types;
  2. The connecting function of Calisto in some passages of the work (Act 12 specifically), rather than of Celestina, as she has always been characterized;
  3. The limited relevance of Melibea's parents to the development of events, as with the servants' lovers. Could all of them be cut? What is their function?
  4. And the clear evidence that Celestina is indeed the character who emits messages most often over the whole work, positioning her as protagonist not only for her scheming but also for the amount of narrative space she occupies.

The most interesting thing, of course, once such data mining is done, would be to build the complete network of the work, to get a view of the whole and study the centrality of certain characters and the weight of their connections, as well as their direction, and to check whether the conclusions obtained coincide, or not, with the readings made of this work over the years.

I encourage you to pick a text you like and play with the tools within your reach. They are simple, easy to use, and intuitive.

Bibliography:

Bostock, Mike. “Sankey Diagrams.” Mike Bostock. May 22, 2013. Web. 20 March 2014.

Condello, M., R. Harrison, J. Isasi, A. Kinnaman, and A. Kumari. “A Character Network Construction for Macroanalysis.” University of Nebraska-Lincoln Literary Lab. (2014).

Elson, David K., Nicholas Dames, and Kathleen R. McKeown. “Extracting Social Networks from Literary Fiction.” Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Uppsala, Sweden (2010): 128-147.

Hockey, Susan. “The History of Humanities Computing.” In A Companion to Digital Humanities. Ed. Susan Schreibman, Ray Siemens, John Unsworth. Oxford: Blackwell, 2004. http://www.digitalhumanities.org/companion/

Moretti, Franco. “Network Theory, Plot Analysis.” Pamphlets of the Stanford Literary Lab. Palo Alto, California, 2011. Web. http://litlab.stanford.edu/LiteraryLabPamphlet2.pdf

–. “Operationalizing: or, the Function of Measurement in Modern Literary Study.” New Left Review, 84 (2013): 103–119.

–. “Conjectures on World Literature.” Distant Reading. London: Verso, 2013: 43-62.

Park, Gyeong-Mi et al. “Structural Analysis on Social Network Constructed from Characters in Literature Texts.” Journal of Computers, 8.9 (2013): 2442-2447.

Pierson, Emma. “Parsing Is Such Sweet Sorrow.” FiveThirtyEight. 17 March 2014. Web. 17 March 2014.

Rojas, Fernando de. La Celestina. Ed. Robert S. Rudder. 1999. E-text.

–. La Celestina. Ed. Dorothy S. Severin. Madrid: Alianza Editorial, 1993.

Sack, Graham Alexander. “Simulating Plot: Towards a Generative Model of Narrative Structure.” Complex Adaptive Systems: Energy, Information and Intelligence Conference. Arlington, Virginia, (2011): 127-136.

–. “Character Networks for Narrative Generation: Structural Balance Theory and the Emergence of Proto-Narratives.” 2013 Workshop on Computational Models of Narrative. Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, (2013): 183-197.

Woloch, Alex. The One vs. the Many: Minor Characters and the Space of the Protagonist in the Novel. Princeton, New Jersey: Princeton University Press, 2003.

In Search of Balance: Debates on Digital Humanities

Writing a blog post is not writing. Writing about digital media does not determine the value of technology. I don’t know how to program. Using social networks, blogs, Blackboard, iGadgets, online-whatever, is only satisfying my feigned ability to use new technologies. You probably don’t know how to program either. But, friend, we gobble the new medium up. Don’t we? Too bad, for we’ve, once again, become the clientele of a small elite that in this case has managed to gain access to the true potentiality of new media. We accept the seller’s words as truth. It is safer to do it that way.

This is not a new story for me.

Instead of pursuing new abilities, we fetishize new toys. (Rushkoff, 14)

At the same time, engines are given the capability to ‘think’ for themselves. If we continue to be mere consumers, we could be handing them our human agency, both individual and collective; our human force given over to our much-treasured gadgets. Where does their knowledge end and my cognition begin? And vice versa? Some of our complex processes are carried out by machines now, so we no longer get any reward for completing them traditionally, or manually.

Douglas Rushkoff offers all these statements to get us to suss the importance of being able to program ourselves. Prof. Ramsay has told us so many times in class. Engines could be shaping our world. But, “after all, who or what is really the focus of the digital revolution?” (14). It looks like instead of taking advantage of what these machines could do for us humans, we are rearranging humans to use/serve machines (15). The solution, says Rushkoff, could lie in deciding how devices are programmed, or at least in knowing how and why they are programmed. We could be programming these tools; why are we not doing it already? What would be the purpose, anyway, one could ask. Being able to “adapt to the technologies of tomorrow” (130), as well as to comprehend that transformation (both of technology and society) comes from programming.

Transformations. By the time I finished that reading I was already wondering: what implications does our ignorance of programming carry for the Humanities? Rushkoff points out that now “we cannot truly communicate, because we have no idea how the media we are using bias the message we are sending and receiving” (133). Thus, if we are going to study all spheres of human knowledge, we need to grasp the bias the medium gives the information. We have to add the digital realm to our inquiries. Maybe Johanna Drucker provides a better insight into the need for a digital scholarship within the Humanities. She turns the question around, though. Instead of the usual “How are digital environments affecting traditional scholarship?”, she asks:

Have the humanities had any impact on the digital environment?

One answer would be ‘what are you talking about?’ Critical thinking is not applied as it used to be. The worrisome fact is that we are taking for granted anything out there for its “persuasive and seductive rhetorical force of visualization.” For that reason, Drucker calls for an assertion of the cultural authority of the Humanities in this digital world we now live in. It would be the best way to give critical and cultural value to these platforms. Method and theory have to become one. Doing and thinking. Not just thinking.

Humanists accepted the new tools as they provided an awesome way to have all the versions of a text on the same webpage. It had never been that easy to compare four versions of the same poem. New levels of inquiry. What might be possible, then, if humanists were actually to create the digital contexts? After all, data mining and concordances have been done without computers.

Now, my questions go back to the beginning of the semester, when we were discussing the meaning of DH and its implementation in the departments of literary studies, history, etc.: Where do we start? Do we want to teach undergrads to program within their chosen non-computational discipline? Personally, I would have loved to be taught more about programming languages while taking all the classes in Modern Languages and Technologies. They certainly gave me an inside look at editing, social networking, blogging, etc. But, besides keeping this blog, creating a Wiki page, and editing a text, I did not learn how to program. The little I know, I learnt on my own. Should we leave it to students to choose whether to learn or not? Should this be a self-taught skill? No, it shouldn’t.

On the other hand, and in tune with Alan Liu, I am also concerned that programming might become the only way to go for DH scholars. Criticism is much needed while we curate cultural artifacts. Or while we create them. Thus, once we come to terms with the need to take part in the programming of our society, not only do we have to start creating digital culture but also be critical and analytical about it. Unquestionably, there is a need to achieve a fair equilibrium in order to understand this age.

Works cited:

Rushkoff, Douglas. Program or Be Programmed: Ten Commands for a Digital Age. New York: OR Books, 2010.

Drucker, Johanna. “Humanistic Theory and Digital Scholarship.” In Debates in the Digital Humanities. 2012.

Liu, Alan. “Where Is Cultural Criticism in the Digital Humanities?.” In Debates in the Digital Humanities. 2012.

Reading in the Information Age. On Hayles’s “How We Think”

If hyper-reading involves jumping from one text to another, I’m guilty. The moment I read that a group of students had created a project in which they gave the characters of Romeo and Juliet Facebook profiles, made them friends, and created an event for the party where the lovers met, I stopped reading Hayles and moved to Google to search for this awesome idea: Romeo and Juliet: A Facebook Tragedy.

So my close reading of How We Think by N. Katherine Hayles (an insightful look at the ways digital texts are affecting or benefiting reading in young audiences, as well as the role the Digital Humanities can play in developing new methods by which hyper- and machine-reading can be implemented to resemble traditional close reading in the Humanities) got interrupted by the end of the chapter “How We Read.” But I must admit that I found it interesting enough to stop and think about where I find myself, and my students, in the triptych she proposes: close-reading, hyper-reading, machine-reading. First, I will give a brief description of the three to see how they intertwine or part ways.

Close-reading, belonging to the field of literary criticism, is “the careful and sustained interpretation of a brief passage of text. Such a reading places great emphasis on the single particular over the general” (Wikipedia). Practised by the Humanities, the method is becoming outdated, deficient, and, at the end of the day, slow-moving. Hayles indicates that close-reading, though still needed to ensure literacy and critical thinking, is no longer sufficient, for it leads to formulaic results or the same conclusions. Moreover, given the great number of published literary works that we can now explore, the method proves unfruitful. Also, until digital text is understood as an object of study for the close-reading technique, this kind of text is going to be marginalized by academia. And so an additional method has to be used.

In the age of information a new type of reading arises for those who spend their time at a computer: hyper-reading. Defined as “reader-directed, screen-based, computer-assisted reading” (167), its basic mode of operation is a fragmentary reading by which the reader pecks at the text, looking for keywords, gathering the main idea in a short time, and moving on to another text, usually via a hyperlink. This has become an imperative. However, the need to skim over texts that fast is enacting changes in our brains, leading to an inability to pay attention in a close-reading manner. I’ll discuss this point later.

Finally, machine-reading is “the automatic, unsupervised understanding of text” (70), according to Etzioni, Banko and Cafarella (2006). It provides the opportunity to explore larger amounts of text, along with the construction of new knowledge impossible to gain before. Though many scholars have yet to twig the relevance of this method, it usually takes researchers from a first intuition to new questions with which to examine the texts. A good example would be Franco Moretti, who, by gathering data on a large corpus of British novels, then asks questions to further explore the development of the genre.

These three kinds of reading cannot be separated. They intertwine in that hyper-reading looks for certain passages that can then be close-read. Machine- and hyper-reading can identify unknown forms or structures that can later also be studied by applying a close reading to the writings.

Now, how do I read? I do a lot of hyper-reading when searching for information to analyse or write about specific novels in my literature classes. Books that I have first read, closely, and in which I found some particular issue I want to analyse. Right now, I’m also moving on to the third realm. I am learning how to use the R language and environment to analyse texts that I might not have read before. How do I think, though? As Hayles points out, I usually know what I am looking for before I begin to hyper- or machine-read, and then I interpret what I find. Given my ability to multitask, I wonder if those modes of reading are changing my brain…

…are my students’ brains also changed and unable to perform one activity with all their attention focused on it? I don’t teach literature (maybe in a couple of semesters) but Spanish. I have sussed that students like to have different tasks or modes of reading while learning new vocabulary, for instance. Memorization of a list of words never worked for me either. So I have pictures, games, videos, or sentences ready for them to learn new words. Usually I don’t ask them for a close reading the day we have a text on culture; they have to hyper-read various texts in order to get the main idea of a new concept. Unfortunately I haven’t yet tried machine-reading with them. So far, it has worked for my classes. Is it negative? Would they be able to move on later to a close-reading mode when they get to 400-level classes on literature?

As Humanists, Digital Humanists, and teachers/profs, I believe we have to understand the technogenesis Hayles talks about. Technology and humans develop by adapting to one another. Digital media is moving human beings toward a faster and more miscellaneous mode of communication. It is shaping new patterns of knowledge and research. I guess that we cannot avoid the change. So we have to adapt to it and take full advantage of it.

Works cited:

Hayles, N. Katherine. How We Think. Chicago: University of Chicago, 2012.

Sosnoski, James. “Hyper-Readers and Their Reading Engines.” In Passions, Pedagogies, and Twenty-First Century Technologies, edited by Gail E. Hawisher and Cynthia L. Selfe, 1999.

The rise of the abstract model

After reading many books that made us philosophise about the implications of new technologies and media and their possible uses in the Digital Humanities, we have got to one that shows the practical uses of applying computational analysis to literature. This is Franco Moretti‘s Graphs, Maps, Trees: Abstract Models for a Literary History (2005). However, he doesn’t mention computers as the necessary tool for creating a new envisioning of literary history.

First of all, this kind of study is going to interconnect shapes, relations, structures, forms and models: connections that have been avoided in literary studies. Moretti believes in the many possibilities that the natural and social sciences can offer to our field, as models, though abstract, can show “what literary history has accomplished so far (not enough), and … what still remains to be done (a lot)” (2). Hooray! I think this statement alone is sufficient reason for the book to be read, as it tells us humanists that we can still have a future, unlike those claiming that ours is a field on its deathbed. But that is not the point today. Let’s see how the author exemplifies how his abstractions can “widen the domain of the literary historian” (2).

Why turn to graphs? Because their quantitative approach (besides encouraging cooperation) can force the study of the collective system of published works. It can lead to the discovery of new facts regarding literature as a whole, for studying the canonical works does not show the complete literary field and its collective system. To provide an example of the scope of this type of work he focuses on the so-called rise of the novel. Graphs allow the study of its pattern as a whole, or cycle, in Britain, India, Spain, etc., genres being the primary cycles that produce the rises and falls on the graph. This allows us to theorise about the longevity of genres throughout history. “Is the wave-like pattern a sort of hidden pendulum of literary history?” asks Moretti (18). Unlike what might be inferred from the hegemonic market system, a study of the data shows that over 44 genres play their part across 160 years: they cluster both in their rise and in their fall. Another question arises then: why the clustering? Audiences/generations; or ideologies; or neither. Data doesn’t offer the why directly. These cycles indicate “a conflict that remains constant throughout the period” (Pomian, 117). Like the discussion about the bleeding death and rebirth of the Humanities. Like Doctor Who’s regenerations over his 50 years on TV.

Now, the book brings us to maps. Or, better put, diagrams (check Carmen’s post for further reading). What knowledge can diagrams add to literary studies? In his first example Moretti demonstrates that the space of narrative can take several different shapes. In this case it takes a circular one and, as with data, this allows further exploration of the literary system. Maps are not “already an explanation; but at least it shows us that there is something that needs to be explained” (39). They can elucidate those experiences of social systems in the past that we usually want to analyse in books, in order for us to understand how people framed their thinking, their ideologies. A map (see Christaller’s central places in Southern Germany) in which it is possible to see that daily life needs no big urban space, till things changed. Commerce and industry develop, and with them the central region gives way to stronger, or new, networks and grapevines. Interconnections that pluralize needs, novelties, products, memories, emigration, crime, repression. The village against the city, the province in between. These kinds of maps are going to show “a matrix of relations, not a cluster of individual locations” (54). And I reckon he has a point here: with maps we can easily visualize the connections as a matrix, a diagram of forces, in order to establish the meaning of society as a collective realm.

The third diagram at the party is the morphological one: the tree. How are trees useful to literary studies? Well, “a tree is a way of sketching how far a certain language has moved from another one, or from their common point of origin. And if language evolves by diverging, why not literature too?” (70). An interesting point of view, for he compares Darwin’s natural selection to languages and literature; “literary survival” (71) he calls it. When “a genre is visualized as a tree […] the genre becomes an abstract ‘diversity spectrum’, whose internal multiplicity no individual text will ever be able to represent” (76). No particular work can stand as the representative of the genre. Like graphs, trees can show the entire compound of literature, as “technology-of-language” (80), manifesting the barriers in culture and their transformations. In the end, this creates a new conception: the study of literature both as it “moves forwards and sideways” (91), inclusive of all the participants in the corpus.

All in all, I loved the book! The reason I liked it is that Moretti gives a great example of how to apply these abstract approaches to literature. They can change our theories (not that the traditional models are wrong), redesign them. Also, he doesn’t get lost in the kind of desperate need for attention to computing/coding/digital/maths that I noticed in other writers, and that made their texts somehow muzzy.

And because Whovians have created a lot of graphs, and maps, and trees, and diagrams about their favourite show, which is on its 50th anniversary now and has this “conflict that remains constant throughout the period,” I post a little video.

Software is changing culture and society

In my last post I discussed the importance of hardware following the reading of Kirschenbaum‘s Mechanisms: New Media and the Forensic Imagination, and I wondered if it is truly necessary for mainstream users to know how their material, physical computer works. This week I move on to ask myself: do we need to know how to program? And that takes me to the software realm, the non-physical part of technology that plays, in my opinion, a bigger role in the change from analog to digital culture. A digital culture that has been constructed with new media, and not upon an imitation of the traditional media. This week’s reading sheds some light on the issue of new media being the leading force in that change and in the construction of a different society.

Lev Manovich poses four main questions in his introduction to Software Takes Command (2013) (you can read it online): 1) “What happens to the idea of a ‘medium’ after previously media-specific tools have been simulated and extended in software?” 2) “Is it still meaningful to talk about different mediums at all?” 3) “Or do we now find ourselves in a new brave world of one single mono medium, or a metamedium?” 4) “What is ‘media’ after software?” (4). We have to ask these questions in order to understand the need for a new discipline, Software Studies, whose aim is “to investigate the role of software in contemporary culture, and the cultural and social forces that are shaping the development of software itself” (10). That is, software and culture interact with each other in a circular way, which we can connect to McLuhan’s ideas as well. Manovich is going to focus on mainstream applications, rather than those used by programmers, because he understands that those practices are the ones shaping society: the software used by most people is changing cultural identity.

Being familiar with the many examples he provides (social media, iOS, Photoshop, etc.), my attention focused on the history of the development of the computer as a tool for learning, discovery and creation by Alan Kay and his team at PARC. Their aim in developing a new tool was to provide users with an already built-in software environment in which they could create new software. His was a metamedium; he transformed Alan Turing’s Universal Machine into a Universal Media Machine. They didn’t want to just mimic the old media (paper, writing, image) but to change them in order to create new forms. Ted Nelson developed hypertext in 1965 to interconnect material in a complex way never available before; Douglas Engelbart, three years later, presented the “view control” system, which later advanced into the GUI following Bruner‘s ideas of enactive, iconic and symbolic learning.

The consequence of their detachment from academia is, I think, pretty clear: commercial use – the fight between different companies to get as many buyers of their software as possible. The user interface soon became popular because it was straightforward and easy for anyone to use. The team had joined the industry. And in order to make it really easy to use, they employed a whole range of metaphors that anyone could understand: save your text in a file/document that you then classify into different folders.

Manovich, nonetheless, attributes this carrying of metaphors from the traditional media into the new one to the lack of a history of software. According to him, the GUI and other software have certainly made learning and creation on computers something almost natural to humans – especially for digital natives. Experts even envision a different future, since “children who were born into and raised in the digital world – are coming of age, and soon our world will be reshaped in their image” (Born Digital). Both the invisibility of software and the development of a new media have to do with the metaphors – or so I understood from the reading – and with the quality of computer media to expand into infinite new forms.

It is as though we are asked to remember and cherish the older media – and erase it at the same time (101)

This is only possible because software builds itself by sums and accumulations of previous languages, creating new ones in the process and thus following what the author calls media hybridization. I believe this is where DH comes to be related. Following Manovich’s notions, are DH projects multimedia or hybrids? Do we want to assemble different media in a setting without mixing them, or are we to create new media from those different materials already on hand?

By joining two or more mediums, their languages are going to interchange their properties and create new structures, unknown to us until the moment they are created. At the same time, culture and society are going to be re-structured – luckily, not immediately. And I wonder, is this why so many people are grumpy about the metamedium, the bonding of the odds and sods of the digital and the analog, of the digital and the Humanities? A fear of those new unknown forms being born? I should cocoa! However, I would say that the software itself is harmless; it is guiding the development of technology in terms of what we are able to create. We should fear the industry, not the medium.

Work cited:

Manovich, Lev. Software Takes Command. New York: Bloomsbury, 2013.

Do we need to know about the medial ideology?

While reading the book for this week, and for some reason, this particular quote by Jacques Derrida, taken from an interview, caught my attention:

“With pens and typewriters you think you know how it works, how ‘it responds.’ Whereas with computers, even if people know how to use them up to a point, they rarely know, intuitively and without thinking –at any rate, don’t know– how the internal demon of the apparatus operates. What rules it obeys. This secret with no mystery frequently marks our dependence in relation to many instruments of modern technology. We know how to use them and what they are for, without knowing what goes on with them, in them, on their side” (88).

Certainly, we love our laptops, smartphones, video game consoles, cameras, etc. Most users just choose to use the technology at their hands to do the basics: write, email, search for information, play – and most of them, I’ve noticed, don’t like reading on the digital gadget. Other users have enough understanding of their gadgets to first buy the product according to their needs and then be able to use it properly, to fix a little bug, or to replace a broken part. Very few people actually know what goes on in the machine when using it, and they are usually the ones who create or advance the technology for the careless users. Everyone who has access to technology is now dependent on the many instruments at hand. Whichever group one belongs to, though, seems to be of little importance when analyzing how the development of bigger storage media has changed the relationship between the human and the machine.

It is to that change, and to the unknown realm of computer processing – the “internal demon of the apparatus” – that Kirschenbaum pays attention in his book Mechanisms: New Media and the Forensic Imagination. He writes about it for many reasons, which I gather as: 1) “we need to recapture [the] role [of storage technologies] in the cultural history of computing”; 2) Agrippa being hacked, thus breaking its main characteristic of destroying itself once opened, and the recovery of hard drives from the WTC; but most importantly, 3) it is in that particular place that electronic texts, as “artifacts-mechanisms – subject to material and historical forms of understanding” (17), are stored for us to read.

These texts are functionally immaterial: they cannot be handled, touched, read with the naked eye. Nonetheless, since they resemble traditional forms of inscription in their process of creation, they fall within the realm of the humanities. Such an approach “assigns value to time, history, and social or material circumstance – even trauma and wear – as part of our thinking… . Product and process, artifact and event, forensic and formal, awareness of the mechanism modulates inscription and transmission through the singularity of a digital present” (23). We have to pay attention to these texts, as they are new forms of representation.

Representations that are going to be stored forever. The author makes this clear when he repeats again and again, in many different ways, that “Every contact leaves a trace.” And forensic investigation can discover those traces no matter how hard we try to erase a document from a disk. But, following Mark Bernstein’s account that the cost of choosing what to dismiss is higher than that of saving everything we can possibly imagine, I ask: do we really want to have all our digital experience saved for eternity? To what extent is it necessary to preserve all electronic textuality? We might be “prisoners of the present” (Leyton). Forensic investigation recovers the past through the discovery of past objects; such investigation helps in getting some tangible or visual instance of the “medial ideology” that most people are unable to uncover.

Now, given Kirschenbaum’s notions, I wonder, yet again: Should everyone know how electronic gadgets work? No. Should they know about the implications of using them in one way or another? Yes.

As a matter of fact, I had never heard of Mystery House before, for instance. Nor have I ever seen an Apple II. Some may think this shows huge ignorance on my part. But I happened to be born when everything was being developed or was already on the market. So when I was old enough to play video games, Olentzero left a Nintendo 64 under the Christmas tree (I think I was 8-9, and I still play with it sometimes); and by the time I had a computer (I was 12), it was running Windows Me, and I got to play The Sims for hours, along with my many other games, as well as writing papers for high school. No one taught me how to use the computer: I learnt. I broke it, I fixed it. Then I got an Internet connection, and I learnt my way around the web.

I think my point here is that, unless someone is really interested in computers or wants to work on forensic investigations, etc., there is no need to be able to clearly see how things work. However, everyone should be aware of the implications of using electronic devices, and that every online/electronic text is stored beyond your control, so that you will never be able to erase its traces – so careful if you are doing something ropey.

Now, the question that comes up many times in class: should digital humanists know the how of their working process? Or should they just use technology as a tool for further research in their different fields? To what extent would knowing how the hard drive that stores all the images of an archive works help someone who wants to analyse the pictures?

I would really find it interesting to learn how everything works; but I believe I won’t have enough time to be capable of understanding everything that is going on in the computer while I write this post, check the default dictionary on the Mac, listen to music on iTunes, and keep an intermittent chat on Facebook with a friend on my second desktop. That gets up my nose quite a lot. I would love to know. But I prefer gaining deep knowledge of a couple of things to wandering around many only to end up knowing nothing.

And now, if you’ll excuse me, I’m going to copy some scans of an old book to an external hard drive, to later extract the text and work with it.

Work cited:

Kirschenbaum, Matthew. Mechanisms: New Media and the Forensic Imagination. Cambridge, MA and London, UK: MIT Press, 2008.

Setting Balance Between the Parts: the Human and the Digital

Reading Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” for this week, I have again come to an understanding of media as a tool for our control. This literary critic and philosopher takes the cinema as one of the biggest challenges to art, since it changed the reproducibility of the world. With mechanization came a new form of reproduction: it was no longer something manual that took human exertion. By creating a product generated by the sum of little reproductions, its artistic value moved from individual devotion to communal exhibition. As a consequence, a new force was given to art: a political function. Let’s see.

The characteristics of film lie not only in the manner in which man presents himself to mechanical equipment but also in the manner in which, by means of this apparatus, man can represent his environment (16).

Here, I understand that one of the main functions of art, in terms of its presence in society, is to set up a balance between the human and the system (the apparatus). This will sound familiar if you read my last post, where I quote Moretti’s notion that “the substantial function of literature is to secure consent.” Real consentire (together + feel) is attained only by guaranteeing a balance between the parts. But how can we get balance in this age of technology, where content is so limited for each individual? Surely, it is impossible to “make of the entire globe, and of the human family, a single consciousness” (61), as McLuhan wonders.

In an attempt to exemplify the difference between a sculptor, a writer, a photographer, and an actor, Benjamin takes us to literature as an art. While in the old days there were just a bunch of authors who exhibited their works in front of a rather big audience, thanks to the press those readers were able to become authors themselves. This led to a blurred line between writer and reader, for the latter was able to get some knowledge of the craft. Given that literary competence became polytechnic, it was possible for literature to be a “common property.”

And I ask myself: since being able to use media or computers is also a polytechnical competence, is new technology a “common property” that can help us all? Is it building a balance? Each day, a bigger number of people use technology and become writers, as they can post, comment on news, chat, show their daily routines, etc. Nonetheless, if “mechanical reproduction of art changes the reaction of the masses toward art” (Benjamin, 15), isn’t media changing the reaction of humans towards art, life, and, more importantly, self-consciousness? That’s dangerous, innit?

Internet access is now, in my opinion, overrated. We certainly have access to more information and research; we can text our roommate, who is sitting next door, or video call our family, who are thousands of kilometers away. That’s really nice, because I do both things.

Nevertheless, like cinema – which is not so much a common property, as it has distorted reality with its huge apparatus of publicity – social media is the goose that lays golden eggs, but not all that glitters is gold. First, in spite of the ability to use the Internet as an encyclopedia to learn about basically anything, users tend to focus their attention on their main interests. This narrows the scope of self-consciousness, in that we cannot be aware of our position within a whole community (Benjamin poses this idea when arguing that capitalist-driven cinema can make audiences oblivious to their social class or position). Moreover, contrary to the common perception that we are all being turned into one homogeneous and beautiful world, smaller cliques are created, each set apart, cordoned off from the others in some cases.

Between 1963 and 1968, Pierre Bourdieu carried out research to demonstrate that personal taste is driven by education and social class and, thus, by exclusion. “Tastes are the practical affirmation of an inevitable difference. It is no accident that, when they have to be justified, they are asserted purely negatively, by the refusal of other tastes” (56). With this, a conflict over the legitimacy of culture is created. Since it is the ruling class who has access to knowledge, it is going to govern tastes. And as distinction is declared, the distance between the groups, the classes, the cliques increases. No homogeneity here.

Finally, I would like to try to connect all of the above with the Humanities and the Digital Humanities. The former is, I guess, the ruling class in this case, for it has centuries of tradition to back up its reasonings. The Humanities are a “common property,” for they study human knowledge and benefit humanity. DH, on the other hand, uses technology as an approach to that human knowledge and, since technology is seen as a force that is changing perception, DH is disregarded by H. But if human perception is changing, the Humanities should study the new variation, right? How are they going to do that if they don’t pay attention to computers? As Humanists, we should be aware of the danger in making a distinction between one and the other. H has to take the new media into account. Only in such a manner will it be able to comprehend the self-consciousness of the 21st century onward. For that is the only way to gain balance between the human and the apparatus.

Works cited and further reading:

Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” (1936). Web.

Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. 1979. Cambridge: Harvard University Press, 1984.

McLuhan, Marshall. Understanding Media: The Extensions of Man. Cambridge, MA: MIT Press, 1997.

Moretti, Franco. “The Soul and the Harpy.” From Signs Taken for Wonders. 1988. Web.