
Wednesday, 11 February 2015 09:46

Working with the moral issues in big data

Posted By  David Tebbutt

Using the results of big data is not just a commercial decision. Our CCI columnist David ‘Tebbo’ Tebbutt calls for greater thought in the way we use and monetise personal data.


In our rush to gather and combine big data to create fresh information, it’s easy to forget the impact such activities might have on innocent bystanders. If the data relates to individuals in any way, they can eventually be harmed, regardless of whether they have given permission or been promised anonymity.

The lawmakers do their best, but they are usually behind the curve, dealing with the world as it was rather than as it will be. And this world is changing rapidly as computer systems are capable of handling more data, more quickly and more intelligently.

A person who gives consent for some lifestyle data to be captured and shared has little idea of how and where it might end up being processed. That same person may have agreed to have other data about themselves captured – during a hospital stay, for example – but on the understanding that their identity would be anonymised.

What if the two sets of data ended up overseas, being processed in a jurisdiction with a more casual approach to confidentiality, and someone used information common to both data sets to re-identify the anonymous records? And, worse, then sold them on to a vested interest such as a life insurance company?
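The linkage risk described above can be sketched with toy data. All the names, fields and values here are hypothetical; the point is simply that "anonymised" records which retain quasi-identifiers (such as postcode and birth year) can be joined back to an identified data set that shares those fields:

```python
# A toy illustration of re-identification by linkage (hypothetical data).
# The "anonymised" hospital records omit names but retain quasi-identifiers
# that also appear in a lifestyle data set shared with consent.

lifestyle = [  # consented data set: includes identity
    {"name": "A. Smith", "postcode": "CB1 2AB", "birth_year": 1970, "hobby": "running"},
    {"name": "B. Jones", "postcode": "SW1 9ZZ", "birth_year": 1985, "hobby": "chess"},
]

hospital = [  # "anonymised" data set: names removed, diagnosis kept
    {"postcode": "CB1 2AB", "birth_year": 1970, "diagnosis": "diabetes"},
]

def reidentify(anon_rows, identified_rows):
    """Link anonymised records to identities via shared quasi-identifiers."""
    index = {(r["postcode"], r["birth_year"]): r["name"] for r in identified_rows}
    matches = []
    for row in anon_rows:
        key = (row["postcode"], row["birth_year"])
        if key in index:
            # The "anonymous" record is now tied back to a named person.
            matches.append({"name": index[key], **row})
    return matches

print(reidentify(hospital, lifestyle))
```

Here the supposedly anonymous diabetes record links straight back to A. Smith. Real linkage attacks work the same way, only at scale and with fuzzier matching.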

One organisation that has taken this issue very seriously is the Nuffield Council on Bioethics, which has just reported on its deep enquiry (started in 2010) into the collection, linking and use of data in biomedical research and health care. Much of the report is relevant outside the medical world, and you will find it useful if you have any responsibility for human-related data collection and analysis. The 225-page report is downloadable at http://nuffieldbioethics.org/wp-content/uploads/Biological_and_health_data_web.pdf

Professor Martin Richards, Chair of the Nuffield Council on Bioethics Working Party and Emeritus Professor of Family Research at the University of Cambridge, explains: “We now generate more health and biological data than ever before. This includes GP records, laboratory tests, clinical trials and health apps, and it is becoming easier and cheaper to collect, store and analyse this data.”

As long as these data stores are held within an institution, such as the NHS, there’s some chance that they will remain protected, but the temptation to blend them with other sources must be quite high, especially when the result may increase knowledge and understanding and lead to new treatments and improved health services. But where would the data go to be processed? What laws or protocols would the processors agree to work under? How are the conflicting public, commercial and private interests best served? The report addresses such issues and offers seventeen recommendations.

On the matter of law, the report suggests that, “Compliance with the law cannot guarantee that the use of data is morally acceptable.” This isn’t just a medical issue. It applies to all manner of data captured on individuals, with and without their explicit permission. And, even if permission is granted, it cannot be taken as carte blanche to use the data in a different context from that originally agreed. As the report says of recent information technology advances, “They have led to the emergence of a new attitude towards data that sees them as exploitable raw materials, which can be put to use for a variety of purposes beyond those for which they were originally collected.”

This automated mission creep gets messy, and all that big-data, rapid processing would slow down if permission had to be re-sought for each change of context. Each switch creates a new data set, blended from a number of original sources, which might still carry personal identifiers.

Redress after a breach of confidentiality isn’t much use. The only time to deal with the risk of harm to the data subjects is up front, when setting the ground rules for how the data may be used. If a change of context is contemplated, the risks have to be assessed and addressed beforehand. Strongly enforced codes of practice would be a good complement to relevant laws.

The report makes for interesting reading in your role as an information professional and as an individual who routinely provides data to the medical profession.

About the Author

David ‘Tebbo’ Tebbutt has been working in computing, and writing about its impact, since the launch of the first PCs. He was editor of Personal Computer World and has been a programmer, analyst, project manager, IT manager and director of two software companies.



