This week's readings tap into our previous class discussion on algorithmic bias. My question then was whether it is possible that algorithms are not biased in themselves, but only in the input we give them. Part of that discussion held that an algorithm and its data are inseparable, and that one cannot evaluate an algorithm without its data; through these readings, however, I realized that algorithms can also be biased in their very method. Before jumping into this week's readings, I would like to offer an example from Simone Browne's Dark Matters, whose introduction we read. In its epilogue she describes a racist algorithm: a webcam that could not detect dark-toned faces because it "is built on standard algorithms that measure the difference in intensity of contrast between the eyes and the upper cheek and nose," so that "the camera might have difficulty 'seeing' contrast in conditions where there is insufficient foreground lighting" (161). In my previous responses I described a facial-recognition algorithm that had become racist through its biased training set. Here, by contrast, it is the method of the algorithm that rests on a racist idea, defining the face through an exclusive condition.
Coming back to this week's readings, McCormick, in "PageRank: The Technology That Launched Google," discusses the foundational idea behind the PageRank algorithm, which works, at bottom, on the number of links. A page that is linked to more often will have a higher rank, and the links themselves are weighted: a link from a page that is itself heavily linked to counts for more. PageRank is therefore based on popularity; there is no inherent feature of the algorithm that differentiates between two equally popular pages. That said, such a system can be manipulated by its users, as with the web spam that McCormick notes as one of the algorithm's key problems. Can we then argue that the algorithm is unbiased by design, and that it is its users who are biased? We are racist, we link more to racist pages, and racist pages get higher ranks. However, this is only one way of seeing the issue.
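To make the mechanics concrete, here is a minimal sketch of the PageRank idea in Python. The four-page link graph is hypothetical, and the damping factor of 0.85 is the standard illustrative choice; none of this comes from McCormick's article itself.

```python
# A minimal sketch of the PageRank idea on a hypothetical link graph.
import numpy as np

# links[page] = pages that `page` links to (made-up toy data)
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

pages = sorted(links)
n = len(pages)
index = {p: i for i, p in enumerate(pages)}

# Column-stochastic matrix: entry (j, i) is the share of page i's
# rank passed to page j (each page splits its vote among its links).
M = np.zeros((n, n))
for page, outgoing in links.items():
    for target in outgoing:
        M[index[target], index[page]] = 1.0 / len(outgoing)

# Power iteration with damping factor 0.85: a page's rank is roughly
# the chance that a "random surfer" ends up on it.
d = 0.85
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

for page, score in sorted(zip(pages, rank), key=lambda x: -x[1]):
    print(page, round(score, 3))
```

In this toy graph, page C ends up on top simply because more pages link to it; nothing in the computation looks at what any page actually says, which is exactly the popularity-only logic at issue here.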
The other approach is to question the logic behind the algorithm itself. Let's set aside, for a moment, the ontological bias in the act of ranking. The algorithm works on the premise that what is linked to more is more popular and therefore more credible. This is the core problem of democracy: the tyranny of the majority. An idea that is not mainstream, that belongs to a minority, is pushed aside (or rather, down). The algorithm, because it becomes the source of knowledge, then becomes a tool of oppression.

Now let's return to the ontological issue of ranking. "Examining the Impact of Ranking on Consumer Behavior and Search Engine Revenue" explains how the idea of ranking shapes consumer behavior and how a product's placement affects customers psychologically: being positioned lower on the screen has a negative impact, while being higher on the list has a positive one. This explains why ranked shopping sites always place a couple of sponsored products at the top of the list. I have noticed several times that the best-selling product on Amazon was actually the one that had been sponsored in the first place. And since it is placed at the top of the list, it tends to gather positive reviews; after a while the sponsored product becomes THE number one product on the list, and there is no need to advertise it anymore.
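That feedback loop can be sketched as a toy simulation: two identical products, one given a sponsored head start at the top of the list, with the chance of a purchase depending only on screen position. All the numbers here are made up purely for illustration.

```python
# A toy simulation of the position-bias feedback loop described above.
import random

random.seed(0)

# Two identical products; "sponsored" starts at the top of the list.
sales = {"sponsored": 0, "organic": 0}
# Assumed purchase probability by list position: the top slot gets seen more.
position_prob = [0.30, 0.10]

for shopper in range(10_000):
    # Re-rank by sales so far; the sponsored product wins the initial tie.
    ranking = sorted(sales, key=lambda p: (-sales[p], p != "sponsored"))
    for position, product in enumerate(ranking):
        if random.random() < position_prob[position]:
            sales[product] += 1
            break  # at most one purchase per shopper

print(sales)
```

Because the products are identical in everything except starting position, the sponsored one accumulates more sales, which keeps it on top, which brings it still more sales: the initial paid placement compounds into a "number one" ranking that no longer needs advertising.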