Interactive storytelling: an oxymoron
December 08, 2010
Craig Mod is psyched about the future of literary storytelling. "With digital media," he writes in "The Digital Death of the Author," an article that's part of New Scientist's "Storytelling 2.0" series, "the once sacred nature of text is sacred no longer. Instead, we can change it continuously and in real time." E-storytelling is to storytelling, he says, as Wikipedia is to a printed encyclopedia. And that's a good thing:
The biggest change is not in the form stories take but in the writing process. Digital media changes books by changing the nature of authorship. Stories no longer have to arrive fully actualised ... [Ultimately,] authorship becomes a collaboration between writers and readers. Readers can edit and update stories, either passively in comments on blogs or actively via wiki-style interfaces.
Sound familiar? It should. In the 1980s and early 1990s, when personal computers were new and their screens appeared to literary theorists as virgin canvases, there was enormous excitement over the possibilities for digital media to revolutionize storytelling. The enthusiasm back then centered on hypertext and multimedia, rather than on Internet collaboration tools, but the idea was the same, as was the "death of the author" rhetoric. By "freeing" text from the page, digital media would blur the line between reader and writer, spurring a profusion of new, interactive forms of literary expression and storytelling. As George Landow and Paul Delany wrote in their introduction to the influential 1991 compendium Hypermedia and Literary Studies, "So long as the text was married to a physical media, readers and writers took for granted three crucial attributes: that the text was linear, bounded, and fixed." The computer would break this static structure, allowing text to become more like "a network, a tree diagram, a nest of Chinese boxes, or a web." That in turn would shift "the boundaries between individual works as well as those between author and reader," overthrowing "certain notions of authorial property, authorial uniqueness, and a physically isolated text."
Then, as now, the celebration of the idea of interactive writing was founded more on a popular ideology of cultural emancipation than on a critical assessment of artistic expression. It reflected a yearning for a radical sort of cultural democratization, which required that "the author" be pulled down from his pedestal and revealed to be a historical accident, a now dispensable byproduct of the technology of the printing press, which had served to fix type, and hence stories, on the page. The author was the father who had to be slain before culture could be liberated from its elitist, patriarchal shackles.
The ability to write communally and interactively with computers is nothing new, in other words. Digital tools for collaborative writing date back twenty or thirty years. And yet interactive storytelling has never taken off. The hypertext novel in particular turned out to be a total flop. When we read stories, we still read ones written by authors. The reason for the failure of interactive storytelling has nothing to do with technology and everything to do with stories. Interactive storytelling hasn't become popular - and will never become popular - because it produces crappy stories that no one wants to read. That's not just a result of the writing-by-committee problem (I would have liked to have a link here to the gruesome product of Penguin Books' 2007 wiki-novel experiment, but, mercifully, it's been removed from the web). The act of reading a story, it turns out, is very different from, and ultimately incompatible with, the act of writing a story. The state of the story-reader is not a state of passivity, as is often, and sillily, suggested, but it is a state of repose. To enter a story, to achieve the kind of immersion that produces enjoyment and emotional engagement, a reader has to give up not only control but the desire to impose control. Readership and authorship are different, if mutually necessary, states: yin and yang. As soon as the reader begins to fiddle with the narrative - to take an authorial role - the spell of the story is broken. The story ceases to be a story and becomes a contraption.
What we actually value most about stories, as readers, is what Mod terms, disparagingly, "full actualization" - the meticulous crafting of an intriguing plot, believable characters and dialogue, and settings and actions that feel true (even if they're fantastical), all stitched together seamlessly with felicitous prose. More than a single author may be involved in this act of artistic creation - a good editor or other collaborator may make crucial contributions, for instance - but it must come to the reader as a harmonious whole (even if it comes in installments).
I agree with Mod that the shift of books from pages to screens will change the way we read books and hence, in time, the way writers write them, but I think his assessment of how those changes will play out is wrongheaded. (See also Alan Jacobs's take, which questions another of Mod's assumptions.) A usable encyclopedia article can, as Wikipedia has shown us, be constructed, "continuously and in real time," by a dispersed group of writers and editors with various talents. But it's a fallacy to believe that what works for an encyclopedia will also work for a novel or a tale. We read and evaluate encyclopedia articles in a completely different way from how we read and evaluate stories. An encyclopedia article can be "good enough"; a story has to be good.
The attack on Do Not Track
December 06, 2010
If your ability to make money hinges on keeping people in the dark, there's nothing quite so discombobulating as the prospect of someone turning on the light.
Last week, the Federal Trade Commission recommended the establishment of a Do Not Track program for the Internet. The program would give people a simple way to block companies from collecting personal data about them, data that is today routinely collected and used for targeted, or "behavioral," advertising. The Do Not Track program, which in some ways would be similar to the popular Do Not Call program for blocking telemarketers, is part of a broader FTC effort to, as David Vladeck, director of the commission's Bureau of Consumer Protection, described in Congressional testimony last Thursday, "improve the transparency of businesses’ data practices, simplify the ability of consumers to exercise choices about how their information is collected and used, and ensure that businesses take privacy-protective measures as they develop and implement systems that involve consumer information," while at the same time being "cautious about restricting the exchange and use of consumer data in order to preserve the substantial consumer benefits made possible through the flow of information."
Vladeck noted in his testimony that, although concerns about online privacy have been growing for years, the digital media and advertising industry's self-regulation efforts have on the whole been scattershot, confusing, and insufficient. Given "these limitations," Vladeck said,
the Commission supports a more uniform and comprehensive consumer choice mechanism for online behavioral advertising, sometimes referred to as “Do Not Track.” The most practical method of providing uniform choice for online behavioral advertising would likely involve placing a setting similar to a persistent cookie on a consumer’s browser, and conveying that setting to sites that the browser visits, to signal whether or not the consumer wants to be tracked or receive targeted advertisements. To be effective, there must be an enforceable requirement that sites honor those choices.
Such a mechanism would ensure that consumers would not have to exercise choices on a company-by-company or industry-by-industry basis, and that such choices would be persistent. It should also address some of the concerns with the existing browser mechanisms, by being more clear, easy-to-locate, and effective, and by conveying directly to websites the user’s choice to opt out of tracking. Such a universal mechanism could be accomplished through legislation or potentially through robust, enforceable self-regulation.
Vladeck also made it clear that the FTC recognizes that "consumers may want more granular options" than a universal opt-out: "We therefore urge Congress to consider whether a uniform and comprehensive choice mechanism should include an option that enables consumers to control the types of advertising they want to receive and the types of data they are willing to have collected about them, in addition to providing the option to opt out completely."
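To see what the proposed mechanism might look like in practice: the browser would send a persistent opt-out signal with every request, and sites would be required to honor it before collecting data or serving targeted ads. The sketch below, in Python, is purely illustrative - the header name ("DNT"), the toy ad server, and the cookie it sets are assumptions of mine, not anything specified by the FTC.

    # Illustrative only: a toy ad server that honors a hypothetical
    # browser-conveyed "do not track" signal before setting a tracking cookie.
    # The "DNT" header name is an assumption, not part of the FTC proposal.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AdServerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The browser would convey the user's choice with every request;
            # here, "1" means "do not track me."
            do_not_track = self.headers.get("DNT") == "1"

            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            if not do_not_track:
                # Set a behavioral-advertising cookie only if the user
                # has not opted out.
                self.send_header("Set-Cookie",
                                 "ad_profile=abc123; Max-Age=31536000")
            self.end_headers()

            body = "generic ad" if do_not_track else "behaviorally targeted ad"
            self.wfile.write(body.encode("utf-8"))

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), AdServerHandler).serve_forever()

The enforcement piece - "an enforceable requirement that sites honor those choices" - is exactly what a snippet like this can't show: the signal only works if regulators can penalize sites that ignore it.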
It has been amusing to watch the companies that have an interest in the surreptitious collection of personal information struggle to find ways to criticize the FTC's sensible proposal. In an article in today's New York Times, for example, several online advertising executives air some remarkably lame objections. “The do-not-track button holds far more complexities than the designers of the framework envision,” intones John Montgomery, the chief operating officer of the digital division of ad giant WPP. “If a number of consumers opt out, it might limit the ability for companies to monetize the Internet.” Yes, and if a number of consumers decide not to shop at Wal-Mart, Wal-Mart might not make as much money. And if a number of consumers decide to cancel their newspaper and magazine subscriptions, it might limit the ability of companies to monetize the printed page. I believe that's called freedom of choice, and it seems to me that consumers should be entrusted with that choice rather than having the choice made for them, secretly, by Montgomery and his ilk.
Montgomery also warns that by opting out of behavioral advertising, consumers would lose the benefits of such advertising. "With behavioral tracking," he says, by way of example, "women will not get ads for Viagra and men will not see ads for feminine hygiene products." This is very true. There would be a tradeoff involved in choosing the Do Not Track option. But, again, shouldn't people be able to make that tradeoff consciously rather than blindly, as is currently the case? Here, too, Montgomery seems to be arguing that people are incapable of making rational choices. Or maybe he's just afraid of the choices they would make if they were given the opportunity to make them.
Another commonly stated objection to Do Not Track is that, by making online advertising less effective, it could lead to a reduction in the amount of content and the number of services given away free online. Adam Thierer, the former president of the libertarian Progress & Freedom Foundation, makes this case at the Technology Liberation Front blog: "Most importantly, if 'Do Not Track' really did work as billed, it could fundamentally upend the unwritten quid pro quo that governs online content and services: Consumers get lots of 'free' sites, services, and content, but only if we generally agree to trade some data about ourselves and have ads served up. After all, as we’ve noted many times before here, there is no free lunch." Note Thierer's use of the terms "unwritten quid pro quo" and "generally agree." What the FTC is suggesting is that the unwritten quid pro quo be written, and that the general agreement be made specific. Does Thierer really believe that invisible tradeoffs are somehow better than visible ones? Shouldn't people know the cost of "free" services, and then be allowed to make decisions based on their own cost-benefit analysis? Isn't that the essence of the free market that Thierer so eloquently celebrates?
There may be valid arguments to make against a Do Not Track program - some of the technical details remain fuzzy, and government regulation, if done clumsily, can impede innovation - but the suggestion that people shouldn't be allowed to make informed choices about their privacy because some businesses may suffer as a result of those choices is ludicrous and even offensive. Like most people, I have the capacity to understand that, should I choose the Do Not Track option, I may on occasion see an ad for tampons where otherwise I would have seen an ad for erectile dysfunction tablets. In fact, I'm already girding my loins in preparation for such a calamitous eventuality. I think I may survive.
The cloud press
December 04, 2010
Stephen Baker points to an illuminating, and troubling, report by Newsweek COO Joseph Galarneau on the current state of electronic publishing. This past week, amid the controversy surrounding WikiLeaks' publishing of confidential messages from US diplomats, Amazon.com tossed WikiLeaks off its cloud computing service. WikiLeaks, the company said in an announcement, had violated at least two clauses in the Amazon Web Services (AWS) customer agreement:
AWS does not pre-screen its customers, but it does have terms of service that must be followed. WikiLeaks was not following them. There were several parts they were violating. For example, our terms of service state that “you represent and warrant that you own or otherwise control all of the rights to the content… that use of the content you supply does not violate this policy and will not cause injury to any person or entity.” It’s clear that WikiLeaks doesn’t own or otherwise control all the rights to this classified content. Further, it is not credible that the extraordinary volume of 250,000 classified documents that WikiLeaks is publishing could have been carefully redacted in such a way as to ensure that they weren’t putting innocent people in jeopardy. Human rights organizations have in fact written to WikiLeaks asking them to exercise caution and not release the names or identities of human rights defenders who might be persecuted by their governments.
Galarneau notes that many traditional publishers, including Newsweek and other newspapers and magazines, also use Amazon Web Services to distribute their stories. Storing and transmitting words and pictures through a cloud computing service is considerably cheaper than building a private data center for web publishing. Indeed, cloud computing is becoming "the 21st century equivalent of the printing press." But, as the Amazon terms of service, with their broad and vague prohibition on content that may "cause injury to any person or entity," reveal, cloud companies operate on different assumptions than do printers. Writes Galarneau:
The power of the press can be dramatically limited when the power to the press is disconnected. Outside the newspaper industry, few publishers actually own their own printing presses. U.S. courts rarely exercise prior restraint (orders that prohibit publication), and most printers rely on their customers to shoulder the legal liability if there are disputes. But as Amazon’s silencing of Wikileaks demonstrates, the rules can change when media companies move on to the Internet, with its very different methods of publishing ...
[As] part of Newsweek’s journey to the cloud, we thought about the same issue that tripped up Wikileaks. In its 77-year history, the magazine has often published confidential or leaked government information. Amazon’s publicly available contract with AWS customers, which Wikileaks likely agreed to, states that Amazon can turn off a website if “we receive notice or we otherwise determine, in our sole discretion” that a website is illegal, “has become impractical or unfeasible for any legal or regulatory reason … (or) might be libelous or defamatory or otherwise malicious, illegal or harmful to any person or entity.” Has Amazon anointed itself as judge, jury and executioner in matters of regulating content on its services?
Galarneau goes on to acknowledge that Amazon has good reason to be nervous about the kinds of content flowing through its servers, given legal ambiguities that continue to surround the distribution of digital information:
But should there be anything for cloud computing companies to fear? Federal law doesn’t hold hosting providers liable for information-related crimes committed by their users, no more so than a phone carrier would be subject to legal action due to a customer making a harassing call. There are gray areas untested by caselaw, [First Amendment attorney Michael] Bamberger added. “If the posting is a criminal act, which the Wikileaks materials may be given the claimed national security implications,” he said, “the service may have a legitimate fear of being charged with aiding and abetting despite federal law.”
Beyond the legal threats Galarneau discusses, there are also public-relations concerns and the possibility (and, in the WikiLeaks case, the reality) of debilitating denial-of-service attacks from parties who don't like the material coming out of your cloud. Of course, printers of controversial material can also face attacks - an angry person can throw a firebomb through a print shop's window - but the fact is that printers are much better protected from such attacks, through their anonymity as well as through their legal agreements with publishers, than are cloud computing companies.
Galarneau discloses that Newsweek struck a deal with Amazon that involves different terms than those in the standard AWS agreement: "Ultimately with Amazon, we agreed on mutually acceptable terms that would protect our editorial independence and ability to publish controversial information. Other media executives who use cloud computing have told me they baked in similar protections into their contracts." But such special protections are themselves problematical:
Newsweek is an established media company that has more clout and resources than the average start-up. How would the next incarnation of the Pentagon Papers be handled if it were published by a lone blogger instead of The New York Times?
Technologies advance more quickly than do laws, and eventually cloud companies may come to operate, vis-à-vis their journalistic clients, the way printers have operated in the past. But one of the lessons from the WikiLeaks case is that, today, there are no such guarantees. The cloud is a fickle medium, with restrictive and even capricious terms of service. Is there any journalism worth its salt that doesn't somehow cause harm to "a person or entity"?
Absorbing self-communion
November 20, 2010
Virginia Heffernan is not the first New York Times Magazine writer to tackle the topic of attention. A correspondent pointed me to another piece (published precisely 100 years ago today), which was titled "The Secret of Success – Intellectual Concentration." It looks at "notable cases where men won fame and fortune through absorbing self-communion." These fellows – they include "Edison, Keene, Pupin, Hewitt, Westinghouse and Gould" – seem to have been gifted with big, fat, wonky attention spans.
I particularly enjoyed the description of Edison's ability to focus his attention:
When Edison, still a telegraph operator in Boston, was receiving or sending messages with a rapidity which at that time had never been surpassed [one can only imagine the vigor of the tweet stream Edison would have emitted! -editor], he began to wonder whether it might not be possible to send two messages each way at the same time through a telegraph wire. As he thought about this matter he became convinced that this seemingly impossible feat could be shown to be not impossible at all, but entirely practicable.
Finally Edison concentrated his mind upon the problem. He ate mechanically; he was almost unconscious of what he put into his mouth. He gave himself no sleep. He sat in his little laboratory as silent as a graven image. What was in his mind no man could tell. What he saw with his intensely concentrated mental vision he alone knew. But as a result of that concentration of mind he gave the world the quadruplex telegraph.
Some years later, one hot Summer day in the year 1878, Edison took a train at New York for the manufacturing village of Derby in the Housatonic Valley in Connecticut. With him was his friend George H. Barker, Professor of Chemistry at the University of Pennsylvania. Both were in high spirits, for the excursion seemed to them to be a sort of brief vacation – a pleasure trip.
But when Edison, on arriving at his destination, stood in front of a new electrical apparatus his whole manner changed instantly. He stood before that great piece of machinery which was creating, or at least capturing, electricity of great volume, utterly unconscious of his surroundings. There was no other intelligence in his eye than that which seemed to be reflected from the machine. If any one spoke to him he did not hear.
He was, in fact, under the complete domination of absolutely concentrated thought. He was another man – almost superman; and when later in the day his friend called to him, saying that it was time to take the return train, then only did Edison seem to awaken from what appeared to be almost a hypnotic or somnambulistic state. And then he said simply: "I think I have solved the secret of the divisibility of the electric current so that we may have the incandescent electric light."
I love that "simply."
It has recently become fashionable (as we swing to the sway of our new technologies) to denigrate solitary, deeply attentive thinking, the kind celebrated and symbolized by Rodin's The Thinker. Ideas and inventions, we're urged to believe, leap not from the head of the self-communing genius but from the whirl of "the network." In fact, you need both - the lonely wizard and the teeming bazaar - as Edison's life so clearly demonstrates. Edison certainly drew on the work and ideas of his predecessors and contemporaries, and his Menlo Park laboratory was by all accounts a noisy orgy room of intellectual cross-fertilization. But, like other deep thinkers, Edison had the ability to screen out the noise and focus his mind – and that capacity, half innate and half hard-won, was also essential to his creativity.
Newton stood on the shoulders of giants, but that doesn't make Newton any less of a giant.
Yes, Virginia, there is attentiveness
November 19, 2010
Virginia Heffernan has a funny little column in this Sunday's New York Times Magazine. She opens by pointing to Jonah Lehrer and me as examples of people who allegedly believe that, as she puts it, "everyone has an attention span" and "an attention span is a freestanding entity like a boxer’s reach, existing independently of any newspaper or chess game that might engage or repel it, and which might be measured by the psychologist’s equivalent of a tailor’s tape." This is complete horseshit. Lehrer and I have different views of how the internet and other media influence attentiveness, but I certainly don't believe that individual human beings have fixed and precisely measurable attention spans, and I'm pretty sure that Lehrer doesn't believe that either. In fact, I can't say I've come across anyone of any sentience who subscribes to such a naive notion. Of course attentiveness is situational, and of course it's influenced by the activities one pursues - indeed, it's the nature of that influence that concerns Lehrer, me, and the many other people who are interested in the cognitive effects of media and other technologies.
In trotting out the strawman of a fixed attention span, Heffernan obfuscates a whole array of interesting, complicated, and important questions. Central to those questions is the fact that "attentiveness" takes many forms. One can, for instance, be attentive to rapid-paced changes in the environment, a form of attentiveness characterized by quick, deliberate shifts in focus. As Lehrer and others have described, there is evidence that video gaming can enhance this kind of attentiveness. There is a very different form of attentiveness that involves filtering out, or ignoring, environmental stimuli in order to focus steadily on one thing - reading a long book, say, or repairing a watch. Our capacity for this kind of sustained attention is being eroded, I argue, by the streams of enticing info-bits pouring out of our networked gadgets. There are also differing degrees of control that we wield over our attention. Research by Clifford Nass and his associates at Stanford suggests that people who are heavy media multitaskers may be sacrificing their ability to distinguish important information from trivia - it all blurs together. And there are, as well, different sorts of distractions - those that can refresh our thinking and those that can short-circuit it.
We're still a long way from understanding exactly how attention works in our minds, but we do know that the way we pay attention has a crucial effect on many of our most important mental processes, from the formation of memories to conceptual and critical thinking. As the psychology professor David Strayer puts it, "Attention is the holy grail. Everything that you’re conscious of, everything you let in, everything you remember and you forget, depends on it.” Heffernan is right to remind us that there is no one ideal form of attentiveness - that focusing too tightly can be as bad as focusing too loosely - but if she truly believes that "the dominant model" of discourse about attentiveness "ignores the content of activities in favor of a wonky span thought vaguely to be in the brain," she hasn't been paying attention.
UPDATE: Another view of attention spans,* also from today's Sunday Times. And an older take.
*In the vernacular sense, meaning, roughly, "ability to sustain one's concentration."
FURTHER UPDATE: Rob Horning chimes in, smartly:
... unlike Heffernan, I see concentration rather than distraction as an act of cultural resistance.
The problem with reckoning with attention problems is not that it is ineffable but that it doesn’t correspond with an economic model that has us spending and replenishing some quantifiable supply of it. But the metaphors built into an “attention span” or “paying attention” or the “attention economy” imagine a scarce resource rather than a quality of consciousness, a mindfulness. It may be that the notion of an attention economy is a sort of self-fulfilling prophecy, bringing into being the problems it posits through the way it frames experience. It may not be constructive to regard attention as scarce or something that can be wasted and let those conceptions govern our relation to our consciousness. The metaphor of how we exert control over our focus may be more applicable, more politically useful in imagining an alternative to the utility-maximizing neoliberal self. The goal would then be not to maximize the amount of stuff we can pay attention to but instead an awareness that much of what nips at us is beneath our attention.
Privacy is relative
November 11, 2010
January 17, 2010: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place." -Eric Schmidt
November 10, 2010: "Google CEO Eric Schmidt announced the salary hike in a memo late Tuesday, a copy of which was obtained by Fortune. The memo was also leaked to Business Insider, which broke the news. Within hours, Google notified its staff that it had terminated the leaker, several sources told CNNMoney. A Google spokesman declined to comment on the issue, or on the memo."
The unrevolution
November 08, 2010
"I am not a Communist," declared the author-entrepreneur Steven Johnson in a recent column in the business section of the New York Times. Johnson made his disclaimer in the course of celebrating the creativity of "open networks," the groups of volunteers who gather on the net to share ideas and produce digital goods of one stripe or another. Because they exist outside the marketplace and don't operate in response to the profit motive, one might think that such collaboratives would represent a threat to traditional markets. After all, what could be more subversive to consumer capitalism than a mass movement of people working without pay to create free stuff for other people? But capitalists shouldn't worry, says Johnson; they should rejoice. The innovations of the unpaid web-enabled masses may be "conceived in nonmarket environments," but they ultimately create "new platforms" that "support commercial ventures." What appears to excite Johnson is not the intrinsic value of volunteerism as an alternative to consumerism, but the way the net allows the efforts of volunteers to be turned into the raw material for profit-making ventures.
Johnson's view is typical of many of the web's most enthusiastic promoters, the Corporate Communalists who feel compelled to distance themselves from, if not ignore entirely, the more radical implications of the trends they describe with starry-eyed avidity. In a new book with a Marx-tinged title, What's Mine Is Yours, the business consultants Rachel Botsman and Roo Rogers begin by describing the onset of what sounds like an anti-market revolution. "The convergence of social networks, a renewed belief in the importance of community, pressing environmental concerns, and cost consciousness," they write, "are moving us away from the old, top-heavy, centralized, and controlled forms of consumerism toward one of sharing, aggregation, openness, and cooperation." Indeed, we are at a moment of transition from "the twentieth century of hyper-consumption," during which "we were defined by credit, advertising, and what we owned," to "the twenty-first century of Collaborative Consumption," in which "we will be defined by reputation, by community, and by what we can access and how we share and what we give away."
But, having raised the specter of an anti-consumerist explosion, Botsman and Rogers immediately defuse the revolution they herald. Like Johnson, they turn out to be more interested in the way online sharing feeds into profit-making ventures. "Perhaps what is most exciting about Collaborative Consumption," they write, with charming naiveté, "is that it fulfills the hardened expectations on both sides of the socialist and capitalist ideological spectrum without being an ideology in itself." In fact, "For the most part, the people participating in Collaborative Consumption are not Pollyannaish do-gooders and still very much believe in the principles of capitalist markets and self-interest ... Collaborative Consumption is by no means antibusiness, antiproduct or anticonsumer." Whew!
As Rob Horning notes in his review of the book, Botsman and Rogers are more interested in co-opting anti-consumerist energies than unleashing them. Economically speaking, they're radical conservatives:
Were the emphasis of What’s Mine Is Yours strictly on giving things away, as opposed to reselling them or mediating the exchanges, it might have been a different sort of book, a far more utopian investigation into practical ways to shrink the consumer economy. It would have had to wrestle with the ramifications of advocating a steady-state economy in a society geared to rely on endless growth. But instead, the authors are more interested in the new crop of businesses that have sprung up to reorient some of the anti-capitalistic practices that have emerged online — file sharing, intellectual property theft, amateur samizdat distribution, gift economies, fluid activist groups that are easy to form and fund, and so on — and make them benign compliments [sic] to mainstream retail markets. Indeed, conspicuously absent from the book is any indication that any business entities would suffer if we all embraced the new consumerism, a gap that seems dictated by the book’s intended audience: the usual management-level types who consume business books.
A similar tension, between revolutionary rhetoric and counterrevolutionary message, runs through the popular "wikinomics" writings of Don Tapscott and Anthony D. Williams. In their new book, Macrowikinomics, they once again promote the net as, to quote from Tom Slee's review, "a revolutionary force for change, carrying us to a radically different future." And yet the blurbs on the back of the book come from a who's who of big company CEOs. The revolution that Tapscott and Williams describe is one that bears, explicitly, the imprimatur of Davos billionaires. For them, too, the ultimate promise of open networks, of wikis, lies in providing new opportunities, or "platforms," for profiteers. Slee notes some of the contradictions inherent in their argument:
On one side, Macrowikinomics exaggerates the political and economic possibilities of digital collaboration as well as the discontinuity between today’s digital culture and the activities of previous generations. On the other side, it ignores the unsavoury possibilities that seem to accompany each and every inspiring initiative on the Internet (every technology has its spam) and inspirational initiatives for change that take place away from the digital world. Most importantly, it does not register the corrosive effect of money (and particularly large amounts of money) on the social production and voluntary networked activity that they are so taken with.
What most characterizes today's web revolutionaries is their rigorously apolitical and ahistorical perspective - their fear of actually being revolutionary. To them, the technological upheaval of the web ends in a reinforcement of the status quo. There's nothing wrong with that view, I suppose - these are all writers who court business audiences - but their writings do testify to just how far we've come from the idealism of the early days of cyberspace, when online communities were proudly uncommercial and the free exchanges of the web stood in opposition to what John Perry Barlow dismissively termed "the Industrial World." By encouraging us to think of sharing as "collaborative consumption" and of our intellectual capacities as "cognitive surplus," these writers suggest that the technologies of the web will have, as their ultimate legacy, the spread of market forces into the most intimate spheres of human activity.
PS These are the first lolcats I've created. Pretty good, huh?
Homo digital
October 28, 2010
In an essay in The American Interest, Sven Birkerts offers a thoughtful survey of some recent writings on the internet and culture, including John Palfrey and Urs Gasser's Born Digital, Jaron Lanier's You Are Not a Gadget, my own The Shallows, and a Clay Shirky post celebrating the post-literary mind.
Here's a maxisnippet:
Modern but Pre-Digital Man was different in untold ways from his counterpart in the Athenian agora. Millennia of history had altered his psychological structure, mentality and even his neural reflexes. What Lanier raises but then ducks is the inevitable question: If change and adaptation have been a constant all along, whence this sudden urgency about a changing “us”? Why not see the digital revolution as just the latest wave of technology, no less a boon than steam power or electricity and hardly an occasion for a top-to-bottom reconsideration of all things human?
If Lanier sidesteps the question, we may at least thank him for raising it. Change may be constant, but the gradations are hugely variable, with degree at some point shading into kind. Consider that the transformations of the human to date have all been dictated by social shifts, inventions and responses to various natural givens: modifications of circumstance, in short. We have adapted over these long millennia to the organization of agriculture, the standardization of time, the growth of cities, the harnessing of electricity, the arrival of the automobile and airplane and mass-scale birth control, to name just a few developments. But the cyber-revolution is bringing about a different magnitude of change, one that marks a massive discontinuity. Indeed, the aforementioned Pre-Digital Man has more in common with his counterpart in the agora than he will with a Digital Native of the year 2050. By this I refer not to cultural or social references but to core phenomenological understandings. I mean perceptions of the most fundamental terms of reality: the natural givens, the pre-virtual presence of fellow humans, the premises of social relationships.