Recently, Internet privacy concerns erupted over Facebook’s introduction of facial recognition features. Most of us have some version of this in our home photo-editing software, and many people misunderstood what Facebook was offering and how to use it. The bottom line is that only photos posted by your friends will generate a suggestion that you are in the photo, and only you can tag the photo.
To be sure, there’s potential for abusing some technologies but the facial recognition genie is out of the bottle and it won’t be going back in. That leaves us with the question of how we will marshal this and other technologies so that they are not abused by government despots or evildoers.
One of the cooler heads and sage voices on the Internet privacy beat is Tim O’Reilly.
O’Reilly is the founder and CEO of O’Reilly Media Inc., a $100 million company with the mission “to change the world by spreading the knowledge of innovators.” O’Reilly’s passion for creativity is channeled through books, events, new inventions and personal connectivity on social media. His company published its first book about the Internet in 1992. O’Reilly is considered a translator for the “alpha geeks,” those innovators who are pushing the edge of technology. I spoke with O’Reilly recently about the big picture on privacy issues. What follows is straight talk from the man credited with coining the term Web 2.0, a concept that he says is really about harnessing collective intelligence. O’Reilly’s status among his customers is reflected by his nearly 1.5 million Twitter followers.
What would you say to those who are afraid to engage in social media because of privacy issues?
There is a kind of alarmism that reminds me of the early alarmism about using your credit card on the Internet.
People would say, “Oh, I couldn’t possibly use my credit card on the Internet.” Meanwhile they would think nothing of handing their credit card to some sketchy-looking guy in a dive bar who could take it into a back room for a few minutes and do whatever with it.
I think some aspects of these privacy worries are overblown. Basically, they’re holding the Internet up to a special kind of standard. Every time you use your credit card you are checking in and providing information. And people are checking in with their credit cards more and more often.
For that matter, consider what’s known when you place a phone call. And if you are carrying a cell phone, your phone company knows where you are at all times.
I would say we should instead be asking: What are the things that WE DON’T WANT PEOPLE TO DO with the information they are collecting?
It is better to think about that than to try and pretend that you can avoid having the information collected.
In the recent flap over Facebook’s facial recognition, you said Facebook’s approach might finally cut the Gordian knot on this thorny privacy problem. Could you explain what you mean by that?
In some places, like London, security cameras are everywhere.
That’s the point about Facebook. If you think, “Oh my God, Facebook, Google, you bad people! You shouldn’t do this,” then you’re missing the point.
It will be happening to you anyway. The police will be using cameras and government agencies will be using cameras. Unless you want to opt out of the modern world [you will be on camera].
So we basically have to say, “OK. We have invented these technologies. They are going to be deployed. Let’s figure out what we are worried about and address that.”
The question is: When is it good for users and when isn’t it?
We like the convenience of Google Maps. That means we are basically reporting our location not only through our local phone company but also back to Google. There is a lot of evidence that people are more than willing to trade their privacy for certain things.
It comes back to this: what we really have to do is accept that some people will have access to our private info. Then we have to ask, what are the permitted uses for that info? And what are the non-permitted uses?
And the key to distinguishing between the two is to start asking, “What are we afraid of?”
I think “What are we afraid of” is really, really important. Because once we have that answer then we can start to ask “Are those fears overblown?” “Are they correct?” “Are they something we should be worried about?” It is our opportunity to really think about the possible harm. Then go back and look at the kind of information that is being collected.
In general, we want a regime in which we are consciously saying, “Let’s not make the default be driven by all the things we are afraid of.” Instead, let’s figure out the things we are afraid of and deal with them. And give ourselves more opportunities to get the benefits from the technology.
Can you think of a good example of this?
What should we allow the state to do and not do?
Look at the benefit we are getting from individuals who are recording events, witnessing police brutality, etc. But in places like Burma, when protestors put videos up on YouTube, the junta looked at the videos, identified people and killed them. We don’t want that to happen.
So in that case, Google says, “We are going to use our face detection to intentionally blur faces in politically testy situations and protect them.”
So we need to look at specific harms, specific remedies.
That is opposed to, “Oh well. We are just going to disable these cameras in general.” As in the case of Apple, which has developed technology that would allow venues to disable cameras. [Reflecting the recording industry’s wish to stem bootlegging.]
What we’re really trying to figure out are the right tradeoffs. That YouTube situation is a good example.
The Apple solution is a bad solution because it disables many of the benefits as well as the harm. The Google solution is a good solution because it deals specifically with the harm. I’d like to see more thinking like this as we move into this new world.
What are we afraid of? Is it real? And if it is real– what can we do about it?
What do you see as the bright spots for small business in Web 2.0?
Whenever there is a technical transformation there are winners and losers. There is no question in my mind that many small businesses will be threatened by various technologies.
Generally the people who embrace the new technology are more likely to be winners than those who are afraid of it and fight it. The reason most technology transformations hurt incumbents is that they are too afraid to try things. Their business models are such that they can’t really embrace the new one because it takes away the old one.
There is a great quote by Jeff Bezos, founder of Amazon: “When change is fast it favors the challenger. When change is slow it favors the incumbent.” There is certainly some of that at work.
But there is enough opportunity in what’s happening today that businesses should be going for it.
Building small businesses is really about building the wealth of communities. Can you point me toward technologies that have promise?
Increasingly we will have technologies that will measure activity with a fair amount of location granularity. For example, foot traffic can be measured every 15 seconds as cell phones check in with the cell tower. [O’Reilly is an investor in one such company, Path Intelligence.]
We’ll know a thousand people walked by this hour by counting the cell phones.
The main thing about the wealth of communities is that WE are the wealth of communities. The most important thing for us to think about with social technology and with technology in general is to increase the density of interaction between people.
That’s what creates wealth.
Sally Duros is an independent journalist fascinated by entrepreneurs and how they reinvent their worlds and the worlds around us. You can connect with her on Google+ and on Twitter at saduros.