
Week 9

Gillespie, Tarleton (May 18, 2016) “Algorithms, clickworkers, and the befuddled fury around Facebook Trends.”


This article obviously brings us back to an earlier discussion we had in class about the neutrality of algorithms. As I recall, that discussion became a bit heated as we debated the supposed neutrality of data versus the neutrality of the algorithms that use such data. There was also some debate as to whether we can actually separate data from algorithms when evaluating their supposed objectivity. This article brings those debates to the context of Facebook and the way it uses algorithms to decide what counts as a trend and how much to promote it. To discount the neutrality of these algorithms, Gillespie points to the important fact that algorithms are themselves designed (as are the data that go into them). Further, Gillespie argues that algorithms are full of the people who create them - people who engage in toxic work behaviors and who prioritize profit over the need for thorough political discussion. Beyond discrediting the neutrality of algorithms, this article is important in that it points out another meta-function of algorithms: to some extent they determine what counts as information, or what counts as "data" in the first place (here, what counts as a "trend").


 

Gillespie, Tarleton (May 9, 2016) “Facebook Trending: It’s made of people!! (but we should have already known that).”


This article picks up on (or maybe is the precursor to) the above Gillespie article. It continues the last thought I mentioned in the above entry: the idea that the algorithms used by social media platforms have some ability to determine what counts or looks like information through their power to define trends. These algorithms can distort information, regardless of how popular it is, to emphasize whichever concepts will get the most clicks. In a way, the algorithms that define trends "curate" information in ways that promote and adhere to the user-friendliness of the social media platform. This is important for thinking about other ways that we understand and process information. What exactly does it mean for information to be user friendly? Does the friendliness come from its accessibility, or from its simplification/neat compartmentalization? These are not mutually exclusive, but each is deserving of a more in-depth conversation. Ideally, we should probably be thinking about ways to make information both complex (because it is complex - information is sticky in that it is attached to all kinds of contingencies) and accessible.


 

Manovich, Lev (2016) “The Science of Culture? Cultural Analytics, Social Computing, and the Digital Humanities.”


Here, I'm not quite understanding the premise of a desire to know everything, or a desire to collect data on everything (or perhaps I misunderstood this as the driving line of inquiry). There are certainly power dynamics that shape why we know of some forms of cultural production and not others (which I think the article does hint at). There is also perhaps a privileging of what "counts" as cultural production and what doesn't. The desire to collect things that "don't count" might thus seem to be a way to challenge those hierarchical logics. What seems to be lacking in this discussion, however, is a critical inquiry into why these logics exist and how they come to organize our information. Sure, there are all kinds of messy fun to be had with methods for collecting things that don't "count" or don't "materialize" in the same way as things that do "count". But is it really enough to just collect that information/those products and make them visible? Curating those products in a way that highlights their subversive nature might be a somewhat more productive way to go about finding those data objects (as opposed to putting everything, mainstream and not, onto a single decontextualized platform). But I'm not even sure that action does enough…


 

Striphas, Ted (August-October 2015) “Algorithmic Culture,” European Journal of Cultural Studies 18(4-5): 395–412.


I am a bit skeptical of this notion that human thought and conduct have been absorbed by algorithmic and big data logics. It's important to remember that not all humans live the same kinds of lives (or even are identified/actively identify as "human" - that word is pretty loaded to be throwing around). The strongest example that comes to mind is people who experience different kinds of trauma, and who process that trauma in different ways and to different extents (or don't process it at all). It doesn't necessarily make sense (it certainly isn't always generative) to put trauma into frameworks of algorithms or big data. I'm also interested in this notion of information/data as coming from different, non-"human" actors (and I also happen to be reading John Durham Peters right now). I'm curious about how data is abstracted from other kinds of sites (would this become an "extraction" of data if we are talking about "natural" sites?). I'm concerned about how we frame these data relative to data produced from other kinds of sites - do we put them side by side, or do we do something a bit messier to organize/make sense of them? Is the desire to make sense of/organize that data a kind of human meaning making that is imposed on other non-"human" sites of meaning making?


 

Solon, Olivia and Sam Levin (16 Dec 2016) “How Google’s search algorithm spreads false information with a rightwing bias,” The Guardian.


Like many of these readings have argued, this article points out that the use of algorithms by these dominant online hosts of information contributes to a declining public quality of online spaces. However, after having read all of the assigned readings, it's starting to get difficult to take these readings seriously. We had a similar kind of article assigned for the week we were looking at glitches, when the author expressed shock at the fact that alt-right propaganda exists on Google. I find a similar kind of alarmism here, and it's frustrating to deal with because all of these anxiety articles are being published in the looming shadow (or immediate aftermath) of the 2016 presidential election. I want to reiterate: these extremist views have always existed, and have always been accessible online. They did not just suddenly appear in 2016, and we shouldn't be surprised that these (garbage) opinions exist. The extent to which these alt-right claims are visible to you is not just dependent on how an algorithm organizes information and trends. If you're a person of color, queer, differently abled, poor, a woman, trans, an immigrant, or any other kind of minority, you know that these claims have always existed. We know that information has been organized in ways that tilt toward these oppressive logics, and we know that this is nothing new or born in 2016. I refuse to only blame algorithms for "hiding" extremist claims from people with influence. I'm angry enough to believe that some part of the shock seen in these articles comes from a sort of willful ignorance of the way minorities live in this (and in other) countries. We cannot look to algorithms as the main instigator or as our guiding light - we need to take some social responsibility for how information is gathered and organized, too…
