Data anonymization is a type of information sanitization whose intent is privacy protection. It is the process of removing personally identifiable information from data sets, so that the people whom the data describe remain anonymous.

Overview

Data anonymization has been defined as a "process by which personal data is altered in such a way that a data subject can no longer be identified directly or indirectly, either by the data controller alone or in collaboration with any other party."[1] Data anonymization may enable the transfer of information across a boundary, such as between two departments within an agency or between two agencies, while reducing the risk of unintended disclosure, and, in certain environments, in a manner that still allows evaluation and analytics after anonymization.

In the context of medical data, anonymized data refers to data from which the patient cannot be identified by the recipient of the information. The name, address, and full postcode must be removed, together with any other information which, in conjunction with other data held by or disclosed to the recipient, could identify the patient.[2]

There is always a risk that anonymized data will not stay anonymous over time. Pairing the anonymized dataset with other data, clever de-anonymization techniques, and sheer computing power are some of the ways previously anonymous data sets have been de-anonymized, leaving the data subjects no longer anonymous.
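To make the risk concrete, the sketch below (in Python, using invented example data and field names rather than any real dataset) shows how an "anonymized" release that keeps quasi-identifiers such as postcode, birth year and sex can be re-identified by joining it against a public auxiliary source:

    # Minimal linkage-attack sketch: the datasets, field names and values
    # below are hypothetical and exist only for illustration.
    anonymized_release = [
        {"zip": "02139", "birth_year": 1965, "sex": "F", "diagnosis": "diabetes"},
        {"zip": "02139", "birth_year": 1982, "sex": "M", "diagnosis": "asthma"},
    ]
    voter_roll = [  # public auxiliary data that still carries names
        {"name": "Alice Smith", "zip": "02139", "birth_year": 1965, "sex": "F"},
        {"name": "Bob Jones", "zip": "94105", "birth_year": 1982, "sex": "M"},
    ]

    def link(records, auxiliary, keys=("zip", "birth_year", "sex")):
        """Re-identify records by joining on shared quasi-identifiers."""
        index = {tuple(row[k] for k in keys): row["name"] for row in auxiliary}
        for record in records:
            name = index.get(tuple(record[k] for k in keys))
            if name is not None:
                yield name, record["diagnosis"]

    for name, diagnosis in link(anonymized_release, voter_roll):
        print(name, "->", diagnosis)   # Alice Smith -> diabetes

Only the first record is re-identified here, because only it matches an auxiliary row on all three quasi-identifiers; in practice, richer auxiliary data makes such matches far more common.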

De-anonymization is the reverse process, in which anonymous data is cross-referenced with other data sources to re-identify the anonymous data source.[3] Generalization and perturbation are the two popular anonymization approaches for relational data.[4] The process of obscuring data while retaining the ability to re-identify it later is also called pseudonymization and is one way companies can store data in a form that is HIPAA compliant.
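A common way to implement pseudonymization is to replace each direct identifier with a keyed hash, keeping the key separate from the released data. The sketch below is one possible approach (HMAC-SHA256 over a hypothetical patient identifier), not a prescribed HIPAA mechanism; whoever holds the key can re-derive the pseudonyms and thus re-link the records:

    import hashlib
    import hmac

    # The key is stored separately from the pseudonymized dataset; holding it
    # is what makes re-identification possible later.
    SECRET_KEY = b"example-key-kept-in-a-separate-system"   # placeholder value

    def pseudonymize(identifier: str) -> str:
        """Replace a direct identifier with a keyed-hash (HMAC-SHA256) pseudonym."""
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"patient_id": "MRN-0042", "diagnosis": "asthma"}   # hypothetical record
    record["patient_id"] = pseudonymize(record["patient_id"])
    print(record)   # the same input always maps to the same pseudonym

Because the mapping is deterministic for a given key, the same individual appears under the same pseudonym across releases, which preserves analytical utility but keeps re-identification possible for anyone holding the key.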

However, according to the Article 29 Data Protection Working Party, Recital 26 of Directive 95/46/EC "signifies that to anonymise any data, the data must be stripped of sufficient elements such that the data subject can no longer be identified. More precisely, that data must be processed in such a way that it can no longer be used to identify a natural person by using “all the means likely reasonably to be used” by either the controller or a third party. An important factor is that the processing must be irreversible. The Directive does not clarify how such a de-identification process should or could be performed. The focus is on the outcome: that data should be such as not to allow the data subject to be identified via “all” “likely” and “reasonable” means. Reference is made to codes of conduct as a tool to set out possible anonymisation mechanisms as well as retention in a form in which identification of the data subject is “no longer possible”."[5]

There are five types of data anonymization operations: generalization, suppression, anatomization, permutation, and perturbation.[6]
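As a rough illustration of three of these operations (generalization, suppression, and perturbation) applied to a single hypothetical record, one might write something like the following; the field names, bucket widths, and noise range are arbitrary choices for the example, not values taken from the cited paper:

    import random

    record = {"name": "Alice Smith", "age": 34, "zip": "02139", "salary": 72000}

    # Suppression: drop the direct identifier entirely.
    suppressed = {k: v for k, v in record.items() if k != "name"}

    # Generalization: replace precise values with coarser categories.
    decade = record["age"] // 10 * 10
    generalized = dict(suppressed,
                       age=f"{decade}-{decade + 9}",        # 34 -> "30-39"
                       zip=record["zip"][:3] + "**")        # "02139" -> "021**"

    # Perturbation: add random noise so the exact value is never released.
    perturbed = dict(generalized,
                     salary=record["salary"] + random.randint(-5000, 5000))

    print(perturbed)

Anatomization and permutation, the remaining two operations, work at the level of whole tables (separating quasi-identifiers from sensitive attributes, or shuffling sensitive values among groups of records) rather than on individual fields, so they are not shown here.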

GDPR requirements

The European Union's General Data Protection Regulation (GDPR) requires that stored data on people in the EU undergo either anonymization or a pseudonymization process.[7] GDPR Recital (26) establishes a very high bar for what constitutes anonymous data, and thereby exempts such data from the requirements of the GDPR, namely “…information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.” The European Data Protection Supervisor (EDPS) and the Spanish Agencia Española de Protección de Datos (AEPD) have issued joint guidance on the requirements for anonymity and for exemption from the GDPR. According to the EDPS and AEPD, no one, including the data controller, should be able to re-identify data subjects in a properly anonymized dataset.[8]

Research by data scientists at Imperial College London and UCLouvain in Belgium,[9] as well as a ruling by Judge Michal Agmon-Gonen of the Tel Aviv District Court,[10] highlights the shortcomings of anonymisation in today's big data world. On this view, anonymisation reflects an outdated approach to data protection, developed when data processing was limited to isolated (siloed) applications, before the rise of “big data” processing involving the widespread sharing and combining of data.[11]

References

  1. ISO 25237:2017 Health informatics -- Pseudonymization. ISO. 2017. p. 7.
  2. "Data anonymization". The Free Medical Dictionary. Retrieved 17 January 2014.
  3. "De-anonymization". Whatis.com. Retrieved 17 January 2014.
  4. Bin Zhou; Jian Pei; WoShun Luk (December 2008). "A brief survey on anonymization techniques for privacy preserving publishing of social network data" (PDF). ACM SIGKDD Explorations Newsletter. 10 (2): 12–22. doi:10.1145/1540276.1540279. S2CID 609178.
  5. "Opinion 05/2014 on Anonymisation Techniques" (PDF). EU Commission. 10 April 2014. Retrieved 31 December 2023.
  6. Eyupoglu, Can; Aydin, Muhammed; Zaim, Abdul; Sertbas, Ahmet (2018-05-17). "An Efficient Big Data Anonymization Algorithm Based on Chaos and Perturbation Techniques". Entropy. 20 (5): 373. Bibcode:2018Entrp..20..373E. doi:10.3390/e20050373. ISSN 1099-4300. PMC 7512893. PMID 33265463. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  7. Skiera, Bernd (2022). The impact of the GDPR on the online advertising market. Klaus Miller, Yuxi Jin, Lennart Kraft, René Laub, Julia Schmitt. Frankfurt am Main. ISBN 978-3-9824173-0-1. OCLC 1303894344.
  8. "Introduction to the Hash Function as a Personal Data Pseudonymisation Technique" (PDF). Spanish Data Protection Authority. October 2019. Retrieved 31 December 2023.
  9. Kolata, Gina (23 July 2019). "Your Data Were 'Anonymized'? These Scientists Can Still Identify You". The New York Times.
  10. "Attm (TA) 28857-06-17 Nursing Companies Association v. Ministry of Defense" (in Yiddish). Pearl Cohen. 2019. Retrieved 31 December 2023.
  11. Solomon, S. (31 January 2019). "Data is up for grabs under outdated Israeli privacy law, think tank says". The Times of Israel. Retrieved 31 December 2023.

Further reading

  • Raghunathan, Balaji (June 2013). The Complete Book of Data Anonymization: From Planning to Implementation. CRC Press. ISBN 9781482218565.
  • Khaled El Emam, Luk Arbuckle (August 2014). Anonymizing Health Data: Case Studies and Methods to Get You Started. O'Reilly Media. ISBN 978-1-4493-6307-9.
  • Rolf H. Weber, Ulrike I. Heinrich (2012). Anonymization: SpringerBriefs in Cybersecurity. Springer. ISBN 9781447140665.
  • Aris Gkoulalas-Divanis, Grigorios Loukides (2012). Anonymization of Electronic Medical Records to Support Clinical Analysis (SpringerBriefs in Electrical and Computer Engineering). Springer. ISBN 9781461456674.
  • Pete Warden. "Why you can't really anonymize your data". O'Reilly Media, Inc. Archived from the original on 9 January 2014. Retrieved 17 January 2014.