Pariser pointed out several critical things that social personalization gets wrong when it comes to content:
- Anticipation: If there’s a small story about a meeting of the Greek parliament today, a human editor could anticipate that stocks might tumble tomorrow. Algorithms are rarely good at making this kind of abstract correlation.
- Risk Taking: For an algorithm to be successful, it needs to be right most of the time. Suggestion engines almost always offer up “safe” content within a very narrow spectrum. Human editors are willing to take risks on content that might be wildly successful (or fail miserably).
- Big Picture: Algorithms seldom connect the dots between pieces of content to form a big picture of current events. An editor can create a front page (today, a homepage) that shows the news of the day in context, arranged by importance.
- Pairing: Human editors can draw you in with something “clicky” and get you to stick around by pairing that item with something of substance. This is more art than science, which is why algorithms come up short.
- Social Importance: Algorithms are good at surfacing what’s popular but not necessarily what’s important. The war in Afghanistan may not be “likeable” or “clickable,” but a human editor can ensure that stories about it get seen.
- Mind-Blowingness: Pariser spoke about the Napoleon Dynamite problem on Netflix. Users either loved the movie (rated it five stars) or hated it (one star). Because the Netflix algorithm doesn’t like making risky recommendations, it often omitted Napoleon Dynamite from suggestion lists — even though people who like the movie really like the movie.
- Trust: People learn to trust good editors. If something seems boring or irrelevant but a trusted editor says it’s important, you’ll heed the recommendation. Algorithms may never be so trustworthy.
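The Napoleon Dynamite problem above can be sketched in a few lines. This is not Netflix’s actual algorithm — just a toy illustration, with made-up ratings, of why an error-minimizing recommender shies away from polarizing titles: penalizing uncertainty pushes a love-it-or-hate-it film below a uniformly “fine” one, even when their average ratings are identical.

```python
from statistics import mean, pstdev

# Hypothetical ratings: a polarizing film (loved or hated) and a
# "safe" film, both with the same average rating of 3 stars.
polarizing = [5, 1, 5, 1, 5, 1, 5, 1]
safe       = [3, 3, 3, 3, 3, 3, 3, 3]

def risk_averse_score(ratings, penalty=1.0):
    """Rank items by predicted rating minus a penalty for uncertainty.
    High variance makes a recommendation risky, so it scores lower."""
    return mean(ratings) - penalty * pstdev(ratings)

print(risk_averse_score(polarizing))  # 3 - 2.0 = 1.0
print(risk_averse_score(safe))        # 3 - 0.0 = 3.0
```

The safe film wins easily, so the polarizing one never surfaces — which is exactly the “mind-blowing” content a human editor might take a chance on.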
Full article on mashable.com