Papers by Gianclaudio Malgieri
Il Punto Sul Danno Da Vacanza Rovinata [Taking Stock of Damages for Ruined Holidays]
Danno e Responsabilità, May 1, 2014

DIPLAP, 2015
Recent events have shown the destructive consequences that wildfires can have on the environment,... more Recent events have shown the destructive consequences that wildfires can have on the environment, people's lives, and the economy. Elevated fuel loads (i.e., the amount of flammable material in an area) increase the likeliness as well as the severity of accidental and human-caused fires. Fuel management operations help to reduce the impact of fires by applying treatments on the landscape that decrease fuel load. However, their planning poses a complicated decision problem, which includes multiple sources of uncertainty. In this paper, a problem for fuel treatment planning is presented, formulated, and solved. The optimisation model identifies the best subset of units in the landscape to be treated to minimise the impact of the worst-case wildfire. The model, bilevel in nature, is reformulated as a single-level integer program. Due to its size, which would make it intractable for realistic instances, a solution algorithm applying bound-based filters that reduce the size of the optimisation model while preserving optimality has been devised. Extensive computational testing on randomly generated instances illustrates that the proposed approach is very successful at solving the problem and that the filters indeed reduce the total solution time. Finally, the algorithm is applied to a case study on a landscape in Andalusia, Spain, which shows the capabilities of the proposed approach in addressing a real-world problem.
German Law Journal, 2022

Bridging the Gap Between AI and Explainability in the GDPR: Towards Trustworthiness-by-Design in Automated Decision-Making
IEEE Computational Intelligence Magazine, 2022
Can satisfactory explanations for complex machine learning models be achieved in high-risk automated decision-making? How can such explanations be integrated into a data protection framework safeguarding a right to explanation? This article explores from an interdisciplinary point of view the connection between the existing legal requirements for the explainability of AI systems set out in the General Data Protection Regulation (GDPR) and the current state of the art in the field of explainable AI. It studies the challenges of providing human-legible explanations for current and future AI-based decision-making systems in practice, based on two scenarios of automated decision-making: credit risk scoring and the medical diagnosis of COVID-19. These scenarios exemplify the trend towards increasingly complex machine learning algorithms in automated decision-making, both in terms of data and models. Current machine learning techniques, in particular those based on deep learning, are unable to make clear causal links between input data and final decisions. This limits the provision of exact, human-legible reasons behind specific decisions, and presents a serious challenge to the provision of satisfactory, fair and transparent explanations. Therefore, the conclusion is that the quality of explanations might not be considered an adequate safeguard for automated decision-making processes under the GDPR. Accordingly, additional tools should be considered to complement explanations. These could include algorithmic impact assessments, other forms of algorithmic justification based on broader AI principles, and new technical developments in trustworthy AI. This suggests that eventually all of these approaches would need to be considered as a whole.
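
To illustrate the gap the article identifies, here is a minimal, hypothetical sketch (invented data and feature names, not taken from the article) of a common post-hoc technique: fitting an interpretable surrogate to a black-box credit model. The surrogate's coefficients yield correlational attributions, not the causal links discussed above.

# Hypothetical sketch: post-hoc "global surrogate" explanation of a
# black-box credit model. Data and feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                  # e.g. income, debt, tenure
y = (X[:, 0] - X[:, 1] + rng.normal(0.0, 0.5, 1000) > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit an interpretable model to the black box's own predictions.
surrogate = LogisticRegression().fit(X, black_box.predict(X))
print(surrogate.coef_)   # feature attributions: correlational, not causal

Even a faithful surrogate only reports how the black box's outputs co-vary with its inputs; it cannot certify why a specific applicant was refused, which is the kind of reason-giving at stake under the GDPR.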

Mental Data Protection and the GDPR
SSRN Electronic Journal, 2021
Increasingly, digital technology can be used not only to measure relevant parameters of human anatomy and activity but also to gain exploratory information about mental faculties such as cognitive processes, personal preferences, and affective states. Although decoding the conceptual and non-conceptual content of mental states is unattainable at the current state of technological development, several digital technologies, such as neural interfaces, affective computing systems and digital behavioural technologies, make it possible to establish increasingly reliable statistical associations between certain data patterns and mental activities such as memories, intentions and emotions. Furthermore, AI and big-data analytics potentially permit the exploration of these activities not just retrospectively but also in real time and in a predictive manner. These converging technological developments are increasingly enabling what can be defined as the digital mind, namely the moment-by-moment quantification of the individual-level human mind. In this article, we introduce the notion of "mental data", which we define as any data that can be organized and processed to infer the mental states of a person, including their cognitive, affective and conative states. Further, we analyse the existing legal protection for this broad category of "mental data" by assessing meaningful risks to individuals' rights and freedoms. Our analysis is focused on the EU GDPR, since it is one of the most advanced and comprehensive data protection laws in the world and also has an extraterritorial impact on other legal systems. In particular, we reflect on the nature of mental data, the lawfulness of their processing considering the different legal bases and purposes, and relevant compliance measures. We conclude that, although the contextual definition of "sensitive data" might appear inadequate to cover many examples of mental data (e.g., "emotions" or other "thoughts" not related to health status, sexuality or political/religious beliefs), the GDPR, through an extensive interpretation of "risk" indexes as the EDPB proposes, seems to be an adequate tool to prevent or mitigate risks related to mental data processing. We therefore recommend that interpreters and stakeholders focus on the characteristics of the "processing", rather than merely on the "category of data" at issue. To achieve this goal, we call for a "Mental Data Protection Impact Assessment" (MDPIA), i.e. a specific DPIA procedure that can help to better assess and mitigate the risks that mental data processing can pose to the fundamental rights and freedoms of individuals.

The Cambridge Handbook of Surveillance Law
The two European Courts (the European Court of Human Rights, ECtHR, and, to a lesser degree, the European Union Court of Justice, EUCJ) have contributed greatly to the development of a legal framework for surveillance, whether by law enforcement agencies in the criminal law area or by secret services. Both courts put great emphasis on a system of ex ante and post hoc control by independent supervisory authorities. A complex and controversial issue remains whether the human rights to privacy, respect of communications, and an effective remedy (enshrined in Articles 8 and 13 of the European Convention on Human Rights (ECHR)) require judicial review as a necessary safeguard for secret surveillance or, alternatively, under which conditions parallel systems of non-judicial review can be accepted as adequate safeguards against illegitimate interference in citizens' private life. The European Courts have not yet established a clear doctrine for determining suitable thresholds and parameters. In particular, the ECtHR takes a flexible approach to interpreting Articles 8 and 13 ECHR, depending on several factors ("vital" interests at stake, political considerations, etc.). In general terms, the Court has shown a preference for judicial oversight, but in the European legal order there are several examples of alternative oversight systems assessed positively by the Court, such as quasi-judicial systems (where the independence of the supervisory body, its wide jurisdiction, its power to access data and its power to react effectively are proved) or the system of oversight set up by Data Protection Authorities in the EU member states. However, in recent judgements of the ECtHR and the EUCJ we see an increasing emphasis on the necessity of a "good enough" judicial (ex ante or post hoc) control over surveillance, meaning not simply judicial control, but a system of oversight (judicial, quasi-judicial, hybrid) that can provide effective control over surveillance, supported by empirical checks in the national legal system at issue.

R.I.P.: Rest in Privacy or Rest in (Quasi-)Property?
Data Protection and Privacy, 2019
The protection and management of the personal data of deceased persons is an open issue in both practical and theoretical terms. The fundamental rights to privacy and to personal data protection (as well as secondary legislation, such as the GDPR) seem inadequate to cope with the data of deceased data subjects. Accordingly, data controllers might be free to process the data of deceased subjects without any guarantee. This might have an adverse effect not only on the memory and post-mortem autonomy of deceased people, but also on living relatives. Different theoretical solutions have been proposed: post-mortem privacy (based on post-mortem autonomy) and the analogical application of copyright law or of inheritance law (data as digital assets). The concept of "quasi-property" (from common law jurisprudence) might also prove interesting, since it is an inalienable bundle of rights that protects deceased persons. Some EU member states have already provided different solutions for the data of deceased people...
The democracy of emergency at the time of the coronavirus: the virtues of privacy
The coronavirus emergency calls for a cultural debate on the balancing of rights, freedoms and social responsibilities, aimed at the protection of individual and collective health. Much has rightly been written in recent days about the strategic errors of the past, and about the risk that authoritarian and social control will exploit the fear of contagion to further compress individual freedoms. A lot has been said about the futility of privacy as well. But is there a democratic way that respects fundamental rights in an emergency? Is there a model that can turn respect for democratic freedoms into a tool for an effective common struggle in an emergency?

When Intellectual Capital Meets Personal Data: A Solution for “Intellectual Privacy”
Customer lists are a fundamental component of the intellectual capital of businesses. On one side, companies want to protect the economic relationships between businesspeople and customers; on the other side, new technologies (and in particular Big Data) have greatly expanded the potential of customer information for companies, especially by means of "data mining": behaviour evaluations, forecasts, studies on life expectancy, personalized marketing plans, pricing, automated profiling, credit scores, etc. Such intellectual work on customer information is highly valuable and needs specific protection. Traditionally, trade secrecy is the intellectual property right used to protect these data. In general, the new privacy framework risks threatening the intellectual capital of businesses and discouraging innovation and competitiveness in the development of big data, economic relationships and marketing patterns. Therefore, the only possible solution is a technical one: we propose a form of "decontextualized" shared ownership of this intangible property. We propose, on one side, to develop interests that are common to data subjects and trade secret holders (security measures, accuracy, data updating) by means of "cooperation" between customers and businesses. On the other side, we propose a new taxonomy of personal data, based on the different levels of "relationship" between subjects and data and on the different degrees of intellectual activity involved in the collection, extraction and processing of such data.

I Soggetti Coinvolti nel Trattamento dei Dati Personali nel Cloud Computing: la Rottura del Dualismo Controller-Processor [The Parties Involved in the Processing of Personal Data in Cloud Computing: The Breakdown of the Controller-Processor Dualism]
Conference "Getting around the cloud(s) - Technical and legal issues on Cloud services"
The current Data Protection Directive is inadequate to cope with the complexity of cloud computing technology. However, the proposed General Data Protection Regulation seems to offer a suitable solution to this problem. The main issue is to establish who is the controller and who is the processor in cloud computing data processing, even considering all intermediate or subordinate positions and the two different kinds of data: user-related personal information and cloud-processed personal information. Several asymmetries complicate this qualification: a contractual and a structural asymmetry do not allow the "user" to be defined as a data controller. In fact, although he determines the "scope" and "means" of the processing, he is a mere "cloud consumer": users do not know the individuals involved in the processing, the location of the servers where data are collected, or their displacement; nor can he autonomously determine the technical or security means of the processing, and he just ...

'Ownership' of Customer (Big) Data in the European Union: Quasi-Property as Comparative Solution?
The aim of this paper is to take steps towards determining the "ownership" of Big Data related to consumers, and to understand whether allocating economic rights in personal data to consumers is efficient and consistent with the human rights of individuals, and whether it will strengthen the legibility, agency and negotiability of human-data interactions. In particular, there is an uncertain "grey area" in which determining the "default entitlement" to data is particularly challenging: the intellectual output of algorithms concerning individuals' personality, i.e. all information created by processing raw data (data received, observed, inferred or predicted) through data mining, and in general all customer data which are merely forecasted, statistically predicted, or obtained by the original combination of probabilistic data, metadata and raw data related to customers. To find a solution to this grey-area "default entitlement" issue, several sub-questions need to be addressed...

Data Extra Commercium
Commerce in some data is, and should be, limited by the law (data extra commercium), because some data embody values and interests (in particular, human dignity) that may be detrimentally affected by trade. In this article, drawing on the Roman law principles regarding res extra commercium, we investigate the example of personal data as regulated under the EU Charter and the GDPR. We observe that transactions in personal data are not forbidden but are subject to what we call a dynamically limited alienability rule. This rule is based on two dynamic variables: the nature of the data and the legal basis for commercially trading such data (at the primary or secondary level). Accordingly, in order to deal with such dynamism and the uncertainty it poses, we propose a general two-stage reasonableness test that should help legal practitioners, judges and law-makers in considering when trade in data is illicit and who (if anyone) shall be held responsible for this mischief. Finally, we show how the two-...
Making the most of new laws: Reconciling big data innovation and personal data protection within and beyond the GDPR
The main research question of this contribution, from a legal perspective, is how to reconcile the free flow of information with the protection of personal data. These two interests are often seen as conflicting. Several conceptual and mental steps are, in our view, necessary to go beyond this antagonism and to see a more productive relationship. We propose five steps to answer this question.

Impossible Explanations?
Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021
Can we achieve an adequate level of explanation for complex machine learning models in high-risk AI applications when applying the EU data protection framework? In this article, we address this question, analysing from a multidisciplinary point of view the connection between existing legal requirements for the explainability of AI systems and the current state of the art in the field of explainable AI. We present a case study of a real-life scenario designed to illustrate the application of an AI-based automated decision-making process to the medical diagnosis of COVID-19 patients. The scenario exemplifies the trend towards the use of increasingly complex machine-learning algorithms with growing dimensionality of data and model parameters. Based on this setting, we analyse the challenges of providing human-legible explanations in practice and we discuss their legal implications under the General Data Protection Regulation (GDPR). Although it might appear that there is just one single form of explanation in the GDPR, we conclude that the context in which the decision-making system operates requires that several forms of explanation be considered. Thus, we propose to design explanations in multiple forms, depending on: the moment of disclosure of the explanation (ex ante or ex post); the audience of the explanation (an explanation for an expert or a data controller versus an explanation for the final data subject); the layer of granularity (such as general, group-based or individual explanations); and the level of risk the automated decision poses to fundamental rights and freedoms. Consequently, explanations should embrace this multifaceted environment. Furthermore, we highlight how the current inability of complex, deep-learning-based machine learning models to make clear causal links between input data and final decisions limits the provision of exact, human-legible reasons behind specific decisions. This makes the provision of satisfactory, fair and transparent explanations a serious challenge. Therefore, there are cases where the quality of possible explanations might not be assessed as an adequate safeguard for automated decision-making processes under Article 22(3) GDPR. Accordingly, we suggest that further research should focus on alternative tools in the GDPR (such as algorithmic impact assessments under Article 35 GDPR or algorithmic lawfulness justifications) that might complement the explanations of automated decision-making.
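
As a purely illustrative sketch of the four explanation dimensions listed above (the type and field names below are hypothetical, not drawn from the article or the GDPR):

# Hypothetical sketch of the four explanation dimensions named in the
# abstract; the labels paraphrase the text and are not an official taxonomy.
from dataclasses import dataclass
from enum import Enum

class Moment(Enum):
    EX_ANTE = "before the decision"
    EX_POST = "after the decision"

class Audience(Enum):
    EXPERT = "expert or data controller"
    DATA_SUBJECT = "final data subject"

class Granularity(Enum):
    GENERAL = "system-wide"
    GROUP = "group-based"
    INDIVIDUAL = "individual decision"

@dataclass
class ExplanationSpec:
    moment: Moment
    audience: Audience
    granularity: Granularity
    risk_level: str  # e.g. "high" where fundamental rights are affected

# One combination the proposal would cover: an ex-post, individual-level
# explanation addressed to the data subject of a high-risk decision.
spec = ExplanationSpec(Moment.EX_POST, Audience.DATA_SUBJECT,
                       Granularity.INDIVIDUAL, "high")
print(spec)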

Law and Business, 2021
This paper argues that if we want a sustainable environment of desirable AI systems, we should aim not only at transparent, explainable, fair, lawful, and accountable algorithms, but should also seek "just" algorithms, that is, automated decision-making systems that include all of the above-mentioned qualities (transparency, explainability, fairness, lawfulness, and accountability). This is possible through a practical "justification" statement and process (possibly derived from an algorithmic impact assessment) through which the data controller proves, in practical ways, why the AI system is not unfair, not discriminatory, not obscure, not unlawful, etc. In other words, this justification (possibly derived from a data protection impact assessment of the AI system) proves the legality of the system with respect to all data protection principles (fairness, lawfulness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity, and accountability)...

The concept of fairness in the GDPR
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020
There is growing attention to the notion of fairness in the GDPR in the European legal literature. However, the principle of fairness in the data protection framework is still ambiguous and uncertain, as computer science literature and interpretative guidelines reveal. This paper seeks a better understanding of the concept of fairness in the data protection field through two parallel methodological tools: linguistic comparison and contextual interpretation. In terms of linguistic comparison, the paper analyses the translations of the word "fair" in the GDPR across the EU official languages, as the CJEU suggests in the CILFIT case for the interpretation of EU law. The analysis also takes into account the translation of the notion of fairness in contiguous fields (e.g., in Article 8 of the EU Charter of Fundamental Rights, or in the consumer field, e.g. the Unfair Terms Directive and the Unfair Commercial Practices Directive). In general, the notion of fairness is translated with several different nuances (in accordance or discordance with the previous Data Protection Directive and with Article 8 of the Charter). In some versions different words are used interchangeably (as in the French, Spanish and Portuguese texts); in other versions there seems to be a specific rationale for using different terms in different parts of the GDPR (as in the German and Greek versions). The analysis reveals three main semantic notions: correctness (Italian, Swedish, Romanian), loyalty (French, Spanish, Portuguese and the German "Treu und Glauben") and equitability (French, Spanish and Portuguese). Interestingly, these three notions have common roots in Western legal history: the Roman law notion of bona fide. Taking into account both the value of bona fide in current European legal contexts and a contextual interpretation of the role of fairness in the GDPR, the preliminary conclusion is that fairness refers to a substantial balancing of interests between data controllers and data subjects. The approach of fairness is effect-based: what is relevant is not the formal respect of procedures (in terms of transparency, lawfulness or accountability), but the substantial mitigation of unfair imbalances that create situations of "vulnerability". Building on these reflections, the paper analyses how the notions of fairness and imbalance relate to the idea of vulnerability, within and beyond the GDPR. In sum, the article suggests that the best interpretation of the fairness principle in the GDPR (taking into account both the notion of procedural fairness and that of fair balancing) is the mitigation of data subjects' vulnerabilities through specific safeguards and measures.

The Vulnerable Data Subject: A Gendered Data Subject?
SSRN Electronic Journal, 2021
Vulnerability is an emerging topic in many different fields, but in data protection and privacy the discussion has rarely engaged with gender studies. This paper investigates the notion of the 'vulnerable data subject' from a gender perspective, questioning whether gender should be regarded as a factor of vulnerability at all, and, if yes, how. In addition, what do these reflections tell us about the (gendered or un-gendered) notion of the 'standard data subject'? In EU data protection law (GDPR), even though the term 'vulnerable data subject' is mentioned only incidentally and refers explicitly just to children, several Data Protection Authorities (e.g., in Spain and Poland) have considered "being female" a potential source of a data subject's vulnerability (e.g., in the case of consumers who are victims of sex-related crimes). The US privacy tort, as originally conceived, was built on gendered notions of female modesty, suggesting women were vulnerable and thus connecting women's privacy claims to the 'wrong kind of privacy'. Looking at the history and foundations of privacy and data protection law, questions surface such as whether the 'average data subject' in privacy and data protection legislation is, by default, a man, and whether women might have to be regarded as vulnerable data subjects just because they are women. This article then looks into law-and-economics analyses of consumers' behaviour, but also political philosophy and, in particular, gender studies, where we can observe a real intellectual polarisation: on the one hand, the universalist approach to vulnerability, according to which every human is vulnerable (otherwise vulnerability would be a stigmatizing label); on the other hand, the particularistic approach, according to which some subjects are more vulnerable than others (in particular, women are more vulnerable, i.e., subject to adverse effects, than men in many contexts: the workplace, education, etc.). A third way might be Luna's "layered" theory, based on a contextual and relational (even situational) nature of vulnerability. This solution is compatible with the layered risk-based approach of the GDPR, but also with the intersectional approach in gender studies. This "third way" might also be a cautious solution to the ambiguous and inconsistent focus on vulnerability both in European Commission documents and proposed legislation (e.g., the proposed EU AI Regulation) and in EU data protection practice (considering, e.g., the ineffective protection of children).

Article 8 ECHR Compliant and Foreseeable Surveillance: The ECtHR's Expanded Legality Requirement Copied by the CJEU. A Discussion of European Surveillance Case Law
SSRN Electronic Journal, 2020
The Strasbourg-based European Court of Human Rights has a long record of cases dealing with surveillance, starting with Klass v. Germany (1978). In Klass the Court explicitly accepted the necessity of secret surveillance performed by public authorities in European post-World War II democracies, provided that certain victim and legality requirements deduced from Articles 8 and 13 of the 1950 European Convention on Human Rights (ECHR) are respected. From this premise, the Court proposed several important guidelines for lawful and human-rights-compatible surveillance that, taken together, build up to a comprehensive framework answering questions about the division of powers and checks on potential abuses of power. Today there is a vast body of case law developed by the ECtHR and the European Union Court of Justice (hereafter: CJEU) that confirms and adapts these guidelines, often in view of addressing recent technological developments (e.g. GPS surveillance) or institutional developments (e.g. the overlap between police and secret services). In this article we focus on developments with regard to the legality principle in the context of surveillance in the realm of criminal law and of intelligence work by secret services. A more rigorous interpretation of the legality principle in post-Klass surveillance case law certainly qualifies as one of the most remarkable developments in the European Courts' case law on surveillance. In particular, we show that the strict approach towards the legality requirement enshrined in Article 8 ECHR, adopted by the ECtHR in Huvig (1990) in the context of telephone surveillance, was then re-applied in all the following judgments of the Strasbourg Court and even adopted by the CJEU (from Digital Rights Ireland onwards) in the context of other surveillance practices.

Property and (Intellectual) Ownership of Consumers’ Information: A New Taxonomy for Personal Data
PinG Privacy in Germany, 2016
This article proposes a new taxonomy of personal data in order to determine more clearly ownership and control rights over different kinds of information related to consumers. In an information society, personal data is no longer a mere expression of personality but a strong economic element in the relationships between companies and customers. As a consequence, the new General Data Protection Regulation recognises different levels of control rights for consumers in accordance with a "proprietorial" approach to personal data. Moreover, existing data taxonomies (based on a subject-matter approach) are anachronistic. In an IoT world, the information industry is interested in any data related to consumers: not only their commercial preferences or habits, but also their health conditions, their family and financial status, their sports habits, friendships, and so on. At the same time, there is a great conflict between privacy concerns and the IP interests of companies regarding customer data processing. This article proposes to change the perspective on personal data taxonomy and to classify personal information according to its "relationship" to the data subject and to reality, and to the intellectual activity of businesses in acquiring and processing such data. Three categories are identifiable in this respect: strong relationship data (data provided directly by customers), intermediate relationship data (data observed or inferred and related to the present life of consumers), and weak relationship data (predictive data). Each category attracts different individual rights under the EU General Data Protection Regulation. Data portability is provided just for strong relationship data, whereas no control rights are provided for weak relationship data. At the same time, other rights rebalance the information asymmetries between consumers and enterprises (the right to information, the right not to be subjected to automated profiling, etc.). Therefore, the best balancing approach, in order to respect both the IP rights of companies and the information privacy rights of consumers, is to distinguish "control rights" (access, portability, oblivion) from "reaction rights" (right to information, opposition to automated profiling, etc.).
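
A minimal, hypothetical sketch of the taxonomy-to-rights mapping described above (the right labels are simplified shorthand, not statutory citations, and the assignment for intermediate relationship data is an assumption, since the abstract only fixes the two endpoints):

# Hypothetical sketch of the three-category taxonomy and the (simplified)
# rights attached to each; labels are shorthand, not GDPR article citations.
CONTROL_RIGHTS = {
    "strong":       {"access", "portability", "oblivion"},  # data provided directly
    "intermediate": {"access", "oblivion"},                 # observed/inferred data (assumed)
    "weak":         set(),                                  # predictive data: no control rights
}
REACTION_RIGHTS = {"information", "opposition_to_automated_profiling"}

def rights_for(category: str) -> set[str]:
    """Control rights for the category, plus the reaction rights that
    rebalance information asymmetries for every category."""
    return CONTROL_RIGHTS.get(category, set()) | REACTION_RIGHTS

print(rights_for("weak"))   # only the reaction rights remain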