Clay Shirky writes that we’re placing more and more trust in “algorithmic authority.” This becomes even more interesting as we start to think about artificial intelligence.
In his words, we come to “regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources.”
Shirky is a professor of new media at New York University. He describes how “algorithmic authority” is eroding the “institutional monopoly” of trust held by authoritative sources of news and information. This authority, he says, has three characteristics:
- It takes in material from multiple sources, which themselves are not universally vetted for trustworthiness, and combines them without relying on any human manager to sign off on the results before they are published. This is how Google’s PageRank algorithm works; it’s how Twitscoop’s zeitgeist measurement works; it’s how Wikipedia’s post-hoc peer review works.
- It produces good results, and as a consequence, people come to trust it. At this point, it’s become a valuable information tool, but not yet anything more.
- It spreads when people become aware not just of their own trust but also of the trust of others: “I use Wikipedia all the time, and other members of my group do as well.” Once everyone in the group has this realization, checking Wikipedia is tantamount to answering the kinds of questions Wikipedia purports to answer, for that group. This is the transition to algorithmic authority.
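The first characteristic is easiest to see in PageRank, which Shirky cites. A minimal sketch of the underlying idea, in Python, shows how a ranking can emerge purely from link structure, with no human editor signing off on the result. The link graph below is made up for illustration; the real algorithm operates at web scale with many refinements.

```python
# Sketch of the PageRank idea: a page's authority comes from the
# authority of the pages that link to it, computed by iterating
# the "random surfer" model until the ranks settle. No human
# vets the sources; the structure itself produces the ranking.

def pagerank(links, damping=0.85, iterations=50):
    """Rank pages in a link graph {page: [pages it links to]}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Base rank from the "random jump" portion of the model.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # Each page shares its rank equally among its outlinks.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank

# A toy, hypothetical link graph: both "a" and "b" link to "c".
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
# "c" ends up with the highest rank, because the most pages point to it.
```

No one decided that “c” is trustworthy; the judgment falls out of an unmanaged process over unvetted inputs, which is exactly the pattern Shirky is describing.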
Still curious? Learn about The Filter Bubble.