Asha W.

reading 1


Reading: Big data problems we face today can be traced to the social ordering practices of the 19th century.

This essay by Robertson and Travaglia rethinks the concept of big data. According to the article, the first era of big data occurred in the 1800s, and the data consisted primarily of “analog” information as opposed to digital data. One can assume “digital” is used in the Lev Manovich sense of continuous data converted into 1s and 0s. What is important about this first era of big data is that information was collected for the purposes of controlling and categorizing the environment and sentient beings.

In addition, the need to control and categorize large sets of data produced novel processes, procedures, and institutions, such as data visualization, the Dewey Decimal System, and libraries. What is interesting is how Robertson and Travaglia insist that such human-made systems are not inherently objective. Data collection was largely concerned with constructing the non-normative in order to more clearly delineate and produce the normative. Categories such as “illiterate,” “crazy,” and “welfare mother” are not natural or deviant per se.

A significant contribution of the Robertson and Travaglia piece is its interrogation of ideologies, or dominant ways of thinking. The essay essentially states that what is important in the second era of big data is to rethink the taken-for-granted ways we think about the world. The question about “re-writing the ideological inheritance of that first data revolution” is a particularly useful framework. It’s rooted in the intersection of feminist standpoint theory, Black studies, and queer and decolonial thought. I’d describe it as Black feminist. Regardless of the name, this mode of thinking is what’s important.

Reading: Cadwalladr, Carole (04 Dec 2016) “Google, democracy and the truth about internet search,” The Guardian.

The issue of algorithms returning problematic search hits has been widely documented, by Noble among others, so this essay in the Guardian isn’t particularly surprising. Cadwalladr performs Google searches and reports that when she types in prompts such as “Muslims are…,” “Women are…,” and “Jews are…,” the top hits surface websites and answers that Cadwalladr characterizes as right-leaning. In addition, searches probing whether Hitler was a good person return results claiming he was supportive of concentration camp victims. Cadwalladr broaches the topic from several perspectives.

One perspective is that of ethics. Cadwalladr recognizes that humans operate both Google and Facebook and questions their ethical responsibility. The ethical responsibility argument is rooted in the article’s claim that rightwing enthusiasts (if you will) have found tactics to manipulate Google’s PageRank system and move thousands of websites carrying right-leaning and inaccurate information to the top of searches. In addition, Google and Facebook’s response has been to put the onus on users, stating that their algorithms simply reflect the most popular hits and the websites users have created.
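To make that manipulation claim concrete, here is a toy sketch of my own (not from the article, and not Google’s actual system): a simplified PageRank computed by power iteration, in which a small farm of pages that all link to one target page lifts that page above a page with only two independent inbound links. The function name, the example graph, and the damping factor are illustrative assumptions.

def pagerank(links, damping=0.85, iterations=50):
    # links: dict mapping each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# "organic" is linked by two independent sites; "boosted" is propped up by a
# small farm of pages that all link back to it.
links = {
    "organic": [], "siteA": ["organic"], "siteB": ["organic"],
    "boosted": ["farm1", "farm2", "farm3"],
    "farm1": ["boosted"], "farm2": ["boosted"], "farm3": ["boosted"],
}
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))

In this toy graph the farmed page ends up ranked above the organically linked one, which is the basic mechanism the article describes: coordinated networks of sites can simulate the popularity signal the algorithm treats as relevance.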

The other perspective the article takes is one of internet/media literacy. For instance, Cadwalladr writes, “What Epstein’s work has shown is that the contents of a page of search results can influence people’s views and opinions.” In this way, the article constructs a dichotomy between users of the internet and creators of content, as if they cannot be both. This media literacy perspective might be as problematic as the issue of “bias” in algorithms.

Reading: Nunes, Mark (2011) “Error, Noise, and Potential: The Outside of Purpose,” Error: Glitch, Noise, and Jam in New Media Cultures, ed. Mark Nunes. London: Bloomsbury Academic. pp. 3-26.

This article is helpful in clarifying what a glitch is and what constitutes noise. Drawing on Nunes, I describe the “glitch” as a (definite and/or replicated) departure or rupture from an anticipated or orthodox flow of information/data/knowledge or meaning within (digital) communication structures, one that produces what is considered an upset or error. I read this as taking for granted the concept of the “norm” and viewing departures as generative. This perspective is a helpful lens. I approach work from a critical race theory lens that does not see racism, for example, as a glitch but as an organizing principle. I’m wondering how glitch and CRT can be brought together in a productive way.

Similarly, the article describes “noise” as an imprecision, interruption, or unwanted awareness of the instrument/medium. Entropy is a measure of disorder in a system. Noise and entropy seem to relate to structural and systemic concerns. For instance, Nunes (p. 13) writes that it “is inherent to any system of information, natural or technological, to tend towards disorder or to fall apart completely.” Even with noise and entropy positioned in this way, there is still an ideal or standard from which these phenomena deviate and which they disrupt.
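As a point of reference, and as my own gloss rather than Nunes’s, the information-theoretic version of this idea is Shannon entropy, which measures the average uncertainty of a source:

H(X) = -\sum_i p(x_i) \log_2 p(x_i)

A perfectly predictable signal has zero entropy; noise increases the uncertainty a receiver must resolve, which is one way of formalizing the claim that systems of information tend toward disorder.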

I do not disagree with the article. It is productive, and I plan to incorporate the concepts in my research. Still, how can we think about glitch, noise, and entropy without viewing them merely as setbacks that happen to be generative? What if, instead, we think of them as the generative capacity that yields the desire or need for the standard in the first place? For example, in “The Trans*-Ness of Blackness, the Blackness of Trans*-Ness,” Marquis Bey rethinks queerness and transness and repositions them not as deviations but as the generative and uncategorized potential from which categories, standards, and norms develop.

Reading: MacCormick, John (2012) “What is Computable?” 9 Algorithms That Changed the Future: The Ingenious Ideas That Drive Today’s Computers. Princeton: Princeton University Press.

This chapter is helpful in thinking about programs and output, and it can also be read as an analogy. MacCormick uses proof by contradiction to show that one cannot create a program that finds all bugs in other programs, which connects to the notion that computers are limited to what humans know. Since human knowledge is neither infallible nor omniscient, neither is a computer program. In this vein, the end of the chapter presents the notion of “undecidability”: “Part of the paper is devoted to demonstrating that certain calculations cannot be performed by these machines—this is the proof of undecidability, which we have discussed in detail already. But another part of the same paper makes a detailed and compelling argument that Turing’s ‘machine’ (read: computer) can perform any calculation done by a ‘computer’ (read: human)” (MacCormick, p. 198). This level of reflexivity from computer science scholars is refreshing, since an earlier article noted how computers are often viewed as “scientific” and unlimited by human shortcomings. Obviously, if some programs cannot even exist, then there are possibly productive ways of knowing that remain unknown.
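MacCormick’s contradiction argument can be sketched in a few lines of Python. This is my own illustration with hypothetical names, not the chapter’s exact programs: assume a perfect crash-detector exists, then build a program that does the opposite of whatever the detector predicts about it.

# Assumed-to-exist oracle: returns True iff running program_source on
# input_data would crash. The argument shows no correct version can be written.
def always_crashes(program_source: str, input_data: str) -> bool:
    raise NotImplementedError("no correct implementation can exist")

troublemaker_source = """
def troublemaker(input_data):
    # Ask the oracle about this very program, run on its own source code.
    if always_crashes(troublemaker_source, input_data):
        return "fine"            # oracle predicted a crash, so do not crash
    else:
        raise RuntimeError()     # oracle predicted no crash, so crash
"""

# Whatever always_crashes answers about troublemaker(troublemaker_source) is
# wrong, so the assumption that such an oracle exists leads to a contradiction.

This is the same shape of argument Turing used for undecidability: the hypothetical bug-finding program is fed a program built to contradict it, and so it cannot exist.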

As for the analogy, the notion that a computer program’s design determines its output causes me to think of larger systems. MacCormick writes, “Obviously, you can get rather unexpected results if you open a file using a program it was not intended for. In the figure above, you can see what happens” (p. 180). In other words, when the input is not ideal, not designed for the particular system (the program), the results are unexpected. How can we think of the prison industrial complex as a system programmed by design? Certain inputs at the intersection of race, gender, class, mental health, sexuality, etc. lead to high incarceration rates, whereas other inputs do not. Some would argue that prisons are not computers. That is a fair point, but there is a programming-like logic to practices such as stop-and-frisk, neighborhood policing, and officer-involved shootings. And this logic is correlated with certain inputs, namely the presentation of one’s body.

How can we connect this limit on human knowledge, this idea that there are unsolvable problems, to Robertson and Travaglia’s piece on the inheritances of the first big data revolution? Yes, there are problems that are unsolvable. But MacCormick’s chapter could be troubled a bit by questioning the Enlightenment mode of being human as a universal position. Power plays a role in what constitutes knowledge. There are vast ways of knowing that are disregarded. This limit on human knowledge and computability foregrounds questions regarding who is considered human. How can this otherly-human knowledge contribute to solving more of the problems that are considered unsolvable?

Reading: Scherffig, Lasse (2018) “There is No Interface (Without a User). A Cybernetic Perspective on Interaction,” Interface Critique Journal, Vol. 1.

This reconsideration of the human relates to Scherffig’s article on embodiment. For example, Scherffig (p. 75) writes, “This view corresponded to the way enactive cognitive science understands how our actions are ultimately responsible for the perceived features of objects, such as their shape.” Scherffig is also rewriting this scientific inheritance by considering how the body shapes interaction and the interface. In this way, Scherffig’s perspective displaces an ideal and objective scientific entity, the interface in this case. The claim that the interface is an effect of human interaction is a productive position from which to theorize.

With this said, I question the extent to which the author problematizes “control” and “cybernetics” in the conversation regarding negative feedback. Are these inheritances from earlier big data periods, that is, the regulation of the human? Perhaps I am misreading the text, but my question is to what extent valorizing human-centeredness obscures fundamental questions about the aims of such human-centeredness.

Reading: Menkman, Rosa (2011) “Glitch Studies Manifesto,” Video Vortex Reader II: Moving Images Beyond YouTube. Amsterdam: Institute of Network Cultures. pp. 336–347.

This essay contains a distinct definition of noise, one that includes the glitch. Menkman writes that noise constitutes the different moments in which linear transmission is disrupted, and that glitch, encoding/decoding, and feedback artifacts comprise noise. In addition, Menkman notes that noise has a negative connotation. However, that negative connotation does not foreclose the way noise helps construct meaning by defining its opposite. This claim of universal productivity may be overly celebratory. From which perspective is this usage of noise deemed beneficial, and when can it be violent?

Menkman also describes the glitch in its own terms: the glitch is generative in that it questions established discourses and forms, though it can also destroy meaning. At the same time, Menkman warns of the impulse to domesticate the glitch, which turns it into a commodity.


