Google’s autocompletion: algorithms, stereotypes and accountability | Sociostrategy

At DH2013, the annual Digital Humanities conference, I presented a paper I co-authored with Frederic Kaplan about ongoing research at the DHLab on Google’s autocompletion algorithms. In this paper, we explained why autocompletions are a “linguistic prosthesis”: they mediate between our thoughts and how we express those thoughts in (written) language. So do related searches, or the suggestion “Did you mean … ?” But of all the mediations by algorithms, the mediation by autocompletion algorithms acts in a particularly powerful way because it doesn’t correct us afterwards. It intervenes before we have finished formulating our thoughts in writing. Before we hit ENTER.

Thus, the appearance of an autocompletion suggestion during the search process might lead people to search for that suggestion even though they had no intention to. A recent paper by Baker and Potts (2013) consequently questions “the extent to which such algorithms inadvertently help to perpetuate negative stereotypes” […]
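
For anyone who wants to inspect these suggestions systematically rather than one keystroke at a time, here is a minimal sketch in Python. It queries suggestqueries.google.com, the unofficial, undocumented endpoint that browser search boxes have historically used; that endpoint, its client parameter and the example prefix are assumptions for illustration, not the method of our paper or of Baker and Potts.

```python
# Minimal sketch: sample the autocompletion suggestions Google shows
# for a query prefix. Uses the unofficial suggestqueries.google.com
# endpoint, which is undocumented and may change or be rate-limited.
import json
import urllib.parse
import urllib.request


def autocomplete(prefix: str) -> list[str]:
    """Return the autocompletion suggestions offered for `prefix`."""
    url = (
        "https://suggestqueries.google.com/complete/search?"
        + urllib.parse.urlencode({"client": "firefox", "q": prefix})
    )
    with urllib.request.urlopen(url, timeout=10) as response:
        charset = response.headers.get_content_charset("utf-8")
        # The response is JSON shaped like:
        # ["<prefix>", ["<suggestion 1>", "<suggestion 2>", ...]]
        payload = json.loads(response.read().decode(charset))
    return payload[1]


if __name__ == "__main__":
    # Example prefix chosen to echo the kind of question stems
    # Baker and Potts examined; swap in any prefix you like.
    for suggestion in autocomplete("why do women"):
        print(suggestion)
```

Running it returns the completions a user would be shown mid-typing – that is, before they hit ENTER.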
I am not implying that the negatively stereotyped search term suggestions about women are Google’s intent – I rather suspect a coordinated bunch of MRAs are to blame for the volume of said search terms – but that doesn’t mean Google is completely innocent. The question of accountability goes beyond a binary choice between intentionality and complete innocence.

Unsurprisingly, Google doesn’t take any responsibility. It puts the blame on its own algorithms … as if the algorithms were beyond the company’s control.
