Posted by Craig H on 21 May 2009
[Sorry, I’m getting a bit behind on things I want to blog about, part 2 of the Symbian Signed story will be up soon!]
On Tuesday I attended a seminar in Westminster on the topic of “Behavioural Targeting, Social Networking and the Challenges of Online Privacy”. “Behavioural targeting” refers to monitoring users’ behaviour online and using the collected data to present them with targeted content (often in the form of advertising).
There was an interesting mix of participants, from government and the civil service (the Home Office had the largest representation of any one organisation) to privacy advocates (Open Rights Group), industry (notably Phorm) and journalists. I wasn’t the only one who thought this might be relevant to mobile – several mobile network operators were present. There is clear potential for monitoring significantly more personal information via a mobile device carried with you, compared to a work or home PC.
The first half of the seminar was largely concerned with the concept and value of online identity, and there was a good deal of discussion about how to enable informed consent by users to the use of their personal information. Predictably the regulator thought that regulation was the answer, and technologists thought that technology was the answer. The “data is the new currency” idea was wheeled out to justify a need for stronger controls. Personally, I still think that the most important thing is to provide people with simple privacy controls, as I previously mentioned.
The second half was the more contentious and interesting one, specifically focusing on behavioural targeting. Again predictably, it polarised into an argument between the advertisers and the privacy advocates.
I think most consumers see the benefits of, and welcome, retailers’ business intelligence driving their own product recommendations (Amazon being the primary example). It does seem to be a significant step beyond that, though, to aggregate data on shopping or browsing habits across multiple sites (as with the widely unpopular Facebook Beacon, NebuAd and Phorm). I was put in mind of a discussion I was part of a few years ago, on the topic of why surveillance bothers people: it was suggested to me then that people routinely (and largely subconsciously) adopt different personas depending on who they’re interacting with (a business meeting, an interview with the bank manager, friends, family, etc.), so if you’re being watched by someone anonymous, you may be subconsciously troubled about which persona to adopt.
I think this notion of personas really needs to be factored into any measures (regulatory or technological) addressing aggregations of personal data. As an aside, I’m intrigued by the legal status of pseudonyms and aliases in real life – I think that under British law an individual is legally entitled to use different names in different circumstances, but I haven’t found a definitive reference, and I suspect moves towards ID cards, and increasing sharing of personal information between government departments, will be eroding any such right in the UK. Using multiple names online might be a practical way of managing personas, in the absence of any more sophisticated mechanism.
Phorm were at pains to point out how they minimise and anonymise the data they collect, but I really feel they were missing the point. Supposing that Phorm aren’t evil and do respect personal privacy, I still have to trust them to do the right thing. It’s certainly conceivable that their technology (a “black box” installed at your Internet Service Provider that inspects all the data sent between your PC and the Internet) could be used to capture all sorts of personally identifiable private information, and we have no way of knowing if a rogue employee or a programming mistake could leak that data to bad guys. There’s also nothing stopping Phorm changing their anonymisation policies in future, should they be acquired by another company, for example.
As usual in security, this boils down to a cost/benefit question: is the benefit (targeted ads) worth the cost (risk of loss of privacy)? As is also quite common in security, the benefits and costs are distributed unequally – the advertisers get more of the benefit while the consumer bears more of the cost (risk). Personally, I’d prefer to guarantee my privacy, as it makes no difference to me whether the ads I’m going to ignore anyway are targeted or not.
Another interesting point that was made was about the definition of personally identifiable information, and hence what is covered by data protection legislation. Obvious personal details, like your name or home address, are clearly covered, but the data that advertisers want to collect is really metadata (data about your preferences and habits, rather than your identity itself) and thus isn’t covered. This is really the basis of Phorm’s argument – they only collect data about what a user does, not about who they are. However, the thing about such metadata is that it can, in some circumstances, be correlated and used to identify individuals, much as AOL’s “anonymised” search data was de-anonymised in 2006.
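To make the correlation risk concrete, here is a minimal sketch (all names, queries and IDs are made up, not from the AOL dataset): an “anonymised” log keyed only by an opaque user number can still leak identity once its queries are joined against public side information, which is the essence of how the 2006 AOL data was broken.

```python
# Anonymised log: real identities replaced with opaque user numbers.
# All data here is hypothetical, for illustration only.
anonymised_log = {
    31337: ["plumbers in Smalltown",
            "Smalltown high school reunion 1970",
            "J. Bloggs & Sons bakery opening hours"],
    90210: ["cheap flights", "weather tomorrow"],
}

# Side information an attacker might hold about named people
# (electoral roll, local newspaper, social network profiles).
side_info = {
    "Joe Bloggs": {"smalltown", "bloggs"},
}

def deanonymise(log, known_people):
    """Link an opaque user ID to a named person when every known
    attribute of that person appears somewhere in the user's queries."""
    matches = {}
    for user_id, queries in log.items():
        text = " ".join(queries).lower()
        for name, attrs in known_people.items():
            if all(attr in text for attr in attrs):
                matches[user_id] = name
    return matches

print(deanonymise(anonymised_log, side_info))  # {31337: 'Joe Bloggs'}
```

The point is that no single query is “personally identifiable”, but the combination is – which is why arguments resting on the metadata/identity distinction feel fragile to me.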
There was lots of interesting discussion, but not really any conclusions, so I think most people (including me, to be honest) went away with much the same views they arrived with. The one thing that probably should have been mentioned, but wasn’t, is the Vendor Relationship Management (VRM) project. I would have mentioned it myself except for two reasons: I don’t know as much as I would like to about it (their blog has been in my RSS feeds for quite a while, but I haven’t had enough time to really read up on the background), and we ran out of time for questions from the floor. The idea of reversing Customer Relationship Management so that the customer remains in control of their personal data is a really strong one, and I do hope that it turns out to have technological and social substance.