Digital Society

Gelernter’s Law (and John Brockman’s merit)

Gelernter's First Law

Computers make people stupid.

Gelernter's Second Law

One expert is worth a million intellectuals. (This law is only approximate.)

Gelernter's Third Law

Scientists know all the right answers and none of the right questions.

(David Gelernter's answer to the 2004 edition of John Brockman's Annual Question on Edge.org: What's your Law?)

Note about John Brockman:

When you first read or hear about John Brockman, you immediately start to feel intimidated by the gravitational force he exerts in the contemporary intellectual world of the American intelligentsia. I don't want to spoil the adventure by scribbling down a litany of things he has done and is still doing, thereby preventing you from finding out for yourself who he is and whether he matters to you. It is enough to remark that he is one of those amazing personalities, like TED founder Richard S. Wurman, who act as chief editors of many public intellectual debates and who, through their activities, are multipliers of ideas and of the interesting people of today. A suggestion on my part for the 2014 Annual Question would be: How can we revitalize and strengthen democracy in our digital age? In that spirit: keep it up, Mr. Brockman!

Disturbing Smartness

Evgeny Morozov deserves applause for piercing some holes in the hot-air balloon of contemporary tech enthusiasm. His dissection of Silicon Valley's dearest technological development of recent years – smart technology – into good smart products and not-so-good smart stuff can be appreciated in more than an analytical way. In his recent WSJ article, Morozov shows how product engineering tacitly, and faster than we might recognize, becomes social engineering in disguise. "Many smart technologies are heading in another, more disturbing direction. A number of thinkers in Silicon Valley see these technologies as a way not just to give consumers new products that they want but to push them to behave better. Sometimes this will be a nudge; sometimes it will be a shove. But the central idea is clear: social engineering disguised as product engineering." (E. Morozov, Is smart making us dumb?)

That useful feedback can be pushed all the way to behavioural oppression is one of the disturbing ideas that should make us stop and think about our new smart devices. Morozov agrees that "there is reason to worry about this approaching revolution. As smart technologies become more intrusive, they risk undermining our autonomy by suppressing behaviors that someone somewhere has deemed undesirable."

Thinking critically about contemporary digital and technological design decisions, Morozov is not the only enfant terrible of the intellectual elite who points a finger at some troubling blind spots of Silicon-Valley-style enthusiasm and thinking. Jaron Lanier is an equally strong voice, describing how easily technological lock-ins can happen and how, once they happen, they matter far more to our daily life than we may realize. But the developers are not exclusively to blame. As Lanier remarks in You Are Not a Gadget, "software presents what often feels like an unfair level of responsibility to technologists. Because computers are growing more powerful at an exponential rate, the designers and programmers of technology must be extremely careful when they make design choices. The consequences of tiny, initially inconsequential decisions often are amplified to become defining, unchangeable rules of our lives." (Jaron Lanier, You are not a gadget, Chap. 1)

The same probably holds for the smart technologies that Morozov has in mind. Through their design they could create intellectual lock-ins: self-reinforcing behaviouristic feedback loops that remove ideas which do not fit into the digital representation schemes of the pre-programmed devices, that cut away creative and unconventional ways of problem-solving or of simple life decisions, and that therefore fail to deal with the complexity of human nature. "The problem with many smart technologies is that their designers, in the quest to root out the imperfections of the human condition, seldom stop to ask how much frustration, failure and regret is required for happiness and achievement to retain any meaning. It's great when the things around us run smoothly, but it's even better when they don't do so by default. That, after all, is how we gain the space to make decisions – many of them undoubtedly wrongheaded – and, through trial and error, to mature into responsible adults, tolerant of compromise and complexity." (E. Morozov, Is smart making us dumb?)

Maybe our smart devices will make us smart in a different way: by helping us start thinking again about our relation to technology, about which parts of thinking and life we would like to see conquered by technological devices, devilishly smart algorithms or all-observing sensors, and which parts we would like to keep under our own creative and maybe suboptimal power and control.

Linked – the Sociology of Networks

As Albert-László Barabási pointed out in his study about networks¹, scale-free networks are one of the most fascinating things in nature. And since the invention of the ARPANET and of all the networks that followed, which grew together into the Internet and carry its best-known application, the WWW, with the help of a wisely established DNS to organize machines into domains and map host names onto IP addresses², a human-built scale-free network of networks has been shaping our social life.
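To make the idea of a scale-free network a bit more tangible, here is a minimal Python sketch of the preferential-attachment growth rule that Barabási popularized (the function name and parameters are my own illustration, not taken from his book): every new node links to a few existing nodes, chosen with probability proportional to how well connected those nodes already are.

import random
from collections import Counter

def preferential_attachment(n_nodes, m=2, seed=42):
    # Grow a graph Barabási-Albert style: each new node attaches to m
    # existing nodes, picked with probability proportional to their degree.
    rng = random.Random(seed)
    # Start from a small fully connected seed graph of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' lists every node once per edge endpoint, so drawing from it
    # uniformly is the same as drawing proportionally to degree.
    targets = [node for edge in edges for node in edge]
    for new_node in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for old_node in chosen:
            edges.append((new_node, old_node))
            targets.extend((new_node, old_node))
    return edges

edges = preferential_attachment(10_000)
degrees = Counter(node for edge in edges for node in edge)
print(degrees.most_common(5))  # a handful of hubs with disproportionately many links

Running this shows the heavy-tailed degree distribution that makes such networks "scale-free": most nodes keep only a few links, while a small number of hubs attract disproportionately many – the kind of hub position the big platforms discussed below are competing for.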

And since then, a lot of companies have been in the network business. From the European perspective, Facebook still dominates, defining a major part of the online social media connections that people nurture daily with Likes and Timeline updates. Although the universe of social media networks has very different continents and dynamics (see, e.g., the panel discussion at this year's DLD13 conference on "How Social Media is changing China and Asia"), they all share the same economics of the online market. As more companies hunt for market share, the pressure on all participants, users and companies alike, is growing.

Since market share in the online world equals mindshare, this development poses some critical sociological questions, especially with regard to artificial intelligence and our social competence. David Kirkpatrick, in his story about Facebook³, provides a good allegory for this tension between machine intelligence and our social intelligence. He quotes Peter Thiel, one of Facebook's board members and a visionary about the company's future, and although Thiel is talking about the different conceptual foundations of Google and Facebook, I am tempted to read the passage more allegorically than it was probably intended. Here is the quote:

"At its core Google believes that at the end of this globalization process the world will be centered on computers, and computers will be doing everything. That is probably one of the reasons Google has missed the boat on the social networking phenomenon. I don't want to denigrate Google. The Google model is that information, organizing the world's information, is the most important thing.

The Facebook model is radically different. One of the things that is critical about good globalization in my mind is that in some sense humans maintain mastery over technology, rather than the other way around. The value of the company economically, politically, culturally – whatever – stems from the idea that people are the most important thing. Helping the world's people self-organize is the most important thing." (Chap. 17)

These are interesting insights into the epistemological backbones of two big players in the network market. Maybe we are becoming more and more artificially intelligent – and maybe our algorithms are creating stronger social forces than we ever imagined. We'll see…

————–

1. See: Albert-László Barabási, Linked: The New Science of Networks, 2002.

2. See: Andrew S. Tanenbaum, David J. Wetherall, Computer Networks, 5th ed., Prentice Hall, 2011, pp. 54-75.

3. David Kirkpatrick, The Facebook Effect: The Inside Story of the Company That Is Connecting the World, Simon and Schuster, 2010, chap. 17.