Last week I noted how algorithms might be biased in their internal logic. This week's reading by Tarleton Gillespie elaborates on this point, showing that every stage of an algorithm such as Facebook's trending algorithm involves decisions made by programmers; it is impossible to separate an algorithm from its designers and users. A critical point for me was where Gillespie argues that "Facebook wanted to take surges of activity that its algorithms could identify and turn them into news-like headlines. But it treated this as an information processing problem, not an editorial one." Facebook, then, is not trying to ignore its curatorial role altogether, which would be impossible; rather, it wants to program its content curators, that is, to turn the problem into an information-processing one. However, while Gillespie argues for news-like editorial curation, the issue is that Facebook's dataset is so massive that it seems impossible for humans to approach it with the critical, editorial lens it requires. The problem, therefore, is the logic of the trend itself.
Gillespie elaborates on the difference between news and trends. However, his articulation of news is somewhat idealistic. To begin with, even before the rise of social media, at least in the case of the United States, news itself had already turned into trends. The capitalist nature of the media landscape, alongside the ideology of individualism, had produced a celebrity culture, and at least part of the news had already become trend-like. Another issue with Gillespie's idealism is that news and trends are no longer separate entities. Trends are not just barometers of social media activity; they sometimes lead to events that are themselves newsworthy, so the relationship is not unidirectional but bidirectional. This is evident in Solon and Levin's article in the Guardian. The logic of these search algorithms is based on elements present in trends, and since trends thrive on sensation, the more radical and controversial topics surface. This is a self-intensifying process: as soon as a topic becomes a trend, it is recommended by these search algorithms, which further intensifies it, and these intensified trends turn into news. This is how Donald Trump won the election, not by following trends but by becoming one.
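The self-intensifying process described above is, in effect, a positive feedback loop: recommendation boosts a topic's visibility, and that visibility feeds back into the trend score that drives further recommendation. The following is a minimal, purely illustrative sketch of that dynamic; the function, parameters, and values are all invented for this example and do not describe any real platform's algorithm.

```python
# Hypothetical sketch of a self-intensifying trend: each step, the
# recommender amplifies current attention (boost), while organic interest
# decays; when boost exceeds decay, attention snowballs.
# All names and parameters here are invented for illustration.

def simulate_trend(initial_attention, boost=0.3, decay=0.1, steps=5):
    """Return the attention level at each step of the feedback loop."""
    attention = initial_attention
    history = [attention]
    for _ in range(steps):
        # Recommendation adds attention in proportion to existing attention,
        # while some of the audience's interest fades each step.
        attention = attention + boost * attention - decay * attention
        history.append(round(attention, 2))
    return history

# A topic that starts with more attention pulls ahead ever faster,
# which is the bidirectional trend-to-news dynamic in miniature.
print(simulate_trend(100))
```

Because the growth term is proportional to current attention, the gap between a trending topic and a non-trending one widens at every step, which is why, in this toy model, small initial differences in sensationalism can end in very different outcomes.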
I find Lev Manovich's article and the idea of Cultural Analytics deeply problematic. I sense in it a globalist viewpoint that does not see differences and struggles. For instance, if we are analyzing photographs of Yemenis' food on Instagram, we have to ask what food means in the current humanitarian disaster, when people have to eat leaves in order to survive; what a photograph of a tree connotes; who has access to the internet; and who the audience of such a photograph is. From my perspective, what Manovich is suggesting is to analyze trends through algorithms. Even if we grant that scholars are better curators than those of Facebook, we are still faced with the problem of the algorithm itself. How do we understand the biased logic of an algorithm, and how do we address that in our methodology? We also have to ask not only what the API we are using provides but also what it hides. What are the things that data cannot represent? What are the remainders when something turns into data?