[Article] The State of Responsible IoT: Governance of Internet of Things and Ethics of Intelligent Algorithms

New technical artifacts connected to the Internet constantly share, process, and store huge amounts of data. This practice is what ties the concept of the Internet of Things (“IoT”) to the concept of Big Data. With the growing dissemination of Big Data and computing techniques, technological evolution and economic pressure spread rapidly, and algorithms have become a key resource for innovation and business models. The rapid diffusion of algorithms and their increasing influence, however, have consequences for the market and for society, including questions of ethics and governance.

Automated systems that turn on the lights and warm up dinner when they detect that you are on your way home from work; smart bracelets and insoles that share with your friends how far you have walked or cycled through the city during the day; sensors that automatically warn farmers when an animal is sick or pregnant. All of these are examples of innovative technologies associated with the still-evolving concept of the Internet of Things.

There is, however, strong divergence around the IoT concept, and no single definition can be considered unanimous. In general terms, it can be understood as an environment of physical objects interconnected with the Internet through small, built-in sensors, creating a ubiquitous computing ecosystem aimed at facilitating people’s daily lives by introducing practical solutions into everyday routines.
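To make this architecture concrete, the following minimal sketch (in Python, using only the standard library) simulates a single sensor reading being packaged as JSON and sent over the Internet to a cloud ingestion endpoint. The endpoint URL, device identifier, and temperature values are hypothetical and invented purely for illustration; a real deployment would read actual hardware and talk to an actual platform, often over a lightweight protocol such as MQTT.

```python
import json
import random
import time
import urllib.request

# Hypothetical ingestion endpoint of a cloud platform (illustrative only).
INGEST_URL = "https://example.com/iot/ingest"


def read_temperature() -> float:
    """Simulate a built-in temperature sensor; a real device would query hardware."""
    return round(20.0 + random.uniform(-2.0, 2.0), 2)


def publish_reading(device_id: str) -> None:
    """Package one reading as JSON and send it over the Internet."""
    payload = json.dumps({
        "device_id": device_id,
        "temperature_c": read_temperature(),
        "timestamp": time.time(),
    }).encode("utf-8")
    request = urllib.request.Request(
        INGEST_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        print("server answered:", response.status)


if __name__ == "__main__":
    # A real device would loop indefinitely; one iteration is enough to show the pattern.
    publish_reading("livingroom-thermometer-01")
```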

In this sense, the combination of intelligent objects and Big Data can significantly change the way we live. Some estimates project that the number of interconnected objects will grow from 25 billion to 50 billion intelligent devices by 2020. The projections for the economic impact of this hyperconnected scenario are impressive: an estimated global impact of more than $11 trillion by 2025. Because of these estimates, IoT has been receiving strong investment from the private sector and has emerged as a possible answer to new challenges in public management, promising, through integrated technologies and massive data processing, more effective responses to problems such as pollution, congestion, crime, and productive inefficiency, among others. In addition, IoT can bring countless benefits to consumers.

All this hyperconnectivity and continuous interaction between devices, sensors, and people has altered the way we communicate and make decisions in the public and private spheres. Increasingly, the information circulating on the Internet will no longer be placed on the network by people alone, but by Things and artificially intelligent algorithms that exchange information among themselves, forming a space of increasingly automated connections and information.

We are witnessing the construction of new relationships with machines and other interconnected devices, in which algorithms begin to make decisions and to guide evaluations and actions that were previously taken by humans. This culture is still relatively recent, and it raises important ethical considerations in view of the ever-increasing impact of algorithmic communication on society.

Given how recent this hyperconnected, IoT-driven digital scenario is, built on the close relationship between intelligent objects, Big Data, and computational intelligence, or on the so-called “ABC” of Information and Communication Technologies (Analytics + Big Data + Cloud Computing), we are not yet fully aware of its potential benefits and risks. We must nevertheless seek an adequate balance in legal regulation: one that does not hinder innovation, but ensures that the law also advances in this area, developing appropriate standards for new technologies and the IoT scenario.

Considering this scenario and the lack of adequate legal regulation, we are experiencing self-regulation by the market itself and regulation that is often embedded in the design of technology, a practice known as “techno-regulation”. IoT is advancing faster than our ability to safeguard individual and collective rights.

Given the context of constant and intense storage, processing, sharing, and monetization of data, it is crucial to discuss the notions of privacy and ethics that should guide technological advances, reflecting on the world in which we want to live and on how we see ourselves in this world of data and machines that the new IoT scenario brings. The way we relate to Things tends to become ever more intense. Data governance and an adequate comprehension of the agency of human and non-human ‘actants’ in this hyperconnected environment are fundamental. Benefits and risks for companies, the State, and consumers should be weighed cautiously. The law must be attentive to its role in this context: on the one hand, not to hamper current economic and technological development; on the other, to regulate technological practices effectively, in order to restrain abuses and protect constitutional rights.

Since algorithms can permeate countless areas of our lives, and as they become more sophisticated, useful, and autonomous, there is a risk that they will make important decisions on our behalf. To foster the integration of algorithms into social and economic processes, algorithm governance tools are needed.

The governance of algorithms can range from the strictly legal and regulatory to the purely technical. Researchers at the University of Zurich argue that algorithm governance must be based on identified threats and suggest a risk-based approach, highlighting risks related to manipulation, bias, censorship, social discrimination, privacy breaches, property rights, and abuse of market power. To prevent these risks from materializing, it is necessary to resort to governance.

One of the main themes raised by legal scholarship when it comes to governance is the opacity of algorithms. The problem of opacity is associated with the difficulty of decoding their results: humans are becoming less and less able to understand, explain, or predict the inner workings, biases, and eventual problems of algorithms. Experts have therefore been discussing the need for greater transparency, aiming at a better understanding of algorithmic decisions and processes.

In relation to this concern, we will now focus on advanced algorithms endowed with machine learning, such as intelligent robots equipped with artificial intelligence, considering that they are technical artifacts (Things) embedded in sociotechnical systems with greater potential for autonomy (based largely on the processing of Big Data) and unpredictability.

The implementation of programs capable of learning how to execute functions typically performed by humans creates new ethical and regulatory challenges, since it increases the possibility of obtaining results other than those intended, or even totally unexpected ones. This is because these mechanisms also act as agents in society and end up influencing the environment around them, even though they are non-human entities. It is not, therefore, only a matter of thinking about the “use” and “repair” of new technologies, but mainly about the proper ethical orientation for their development.

In addition, the more adaptable artificial intelligence programs become, the more unpredictable their actions are, bringing new risks. This makes it necessary for developers of this type of program to be more aware of the ethical responsibilities involved in this activity. The Code of Ethics of the Association for Computing Machinery indicates that professionals in the field should develop “comprehensive and thorough assessments of computer systems and their impacts, including the analysis of possible risks”.

The ability to amass experience and learn from massive data processing, coupled with the ability to act independently and make choices autonomously, can be considered preconditions for legal liability. However, since Artificial Intelligence is not recognized today as a subject of law, it cannot be held individually liable for the potential damage it may cause. In this sense, according to Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, the person (natural or legal) on whose behalf a program was created must, ultimately, be liable for any action generated by the machine. This reasoning is based on the notion that a tool has no will of its own.

From another perspective, in the case of damage caused by the acts of an artificial intelligence, another type of liability draws an analogy with the responsibility attributed to parents for the actions of their children (strict vicarious liability). Under the theory of “robots as tools”, the responsibility for the acts of an AI could thus fall on its producers, users, or programmers, as those responsible for its “training”.

Should an act of an Artificial Intelligence cause damage by reason of deceit or negligence, manufacturing defect, or design failure resulting from blameworthy programming, existing liability rules would most often indicate the “fault” of its creators. Nevertheless, it is often not easy to know how these programs reach their conclusions or why they lead to unexpected and possibly undesirable consequences. This harmful potential is especially dangerous in Artificial Intelligence programs that rely on machine learning mechanisms, in which the very nature of the software involves developing behavior that is not predictable in advance and that will be determined only by the data and events with which the program comes into contact.
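To illustrate why such behavior is shaped by data rather than by explicit instructions, the following minimal sketch (assuming the scikit-learn library is available) trains a toy classifier on an invented “loan approval” history. No approval rule is written by the programmer; the rule is induced from whichever examples the system happens to see, so a skewed history quietly produces a skewed decision pattern.

```python
# Minimal sketch: the decision logic is learned from data, not written by hand.
# The "loan approval" data and feature meanings are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [applicant income (thousands), postcode group]; label: 1 = approved, 0 = denied.
# If the historical examples are skewed (here, postcode group 2 was never approved),
# the model silently absorbs that pattern.
training_examples = [
    [55, 1], [60, 1], [48, 1],
    [52, 2], [70, 2], [65, 2],
]
historical_decisions = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0)
model.fit(training_examples, historical_decisions)

# Two applicants with identical income but different postcode groups:
print(model.predict([[58, 1], [58, 2]]))  # likely [1 0]: the learned rule keys on postcode
```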

As the behavior of an AI is not entirely predictable, being the result of interactions among the various human and non-human agents that make up the sociotechnical system, and even of self-learning processes, it can be extremely difficult to establish the causal nexus between the damage caused and the action of any particular human being or legal entity.

Under the legal framework we have today, this can lead to a situation of “distributed irresponsibility” (the name used in the present work to refer to the possible effect resulting from the inability to identify a causal nexus between an agent’s conduct and the damage caused) among the different actors involved in the process. This happens mainly when the damage arises from the involvement of different agents within a complex sociotechnical system, in which liability will not be obvious, possibly involving at the same time the actions of the intelligent thing itself, of natural persons, and of a legal entity, all linked. This reflects a serious liability challenge that some scholars call the “problem of many hands”.

The ideal regulatory scenario would guide the development of these technical artifacts and manage it from the perspective of protecting fundamental rights. But no reliable answers have yet been found on how to deal with the potential damage that may arise from programming errors, or from machine learning processes that end up incorporating into the machine’s behavior undesired conduct that developers did not predict. Establishing minimum ethical foundations for regulatory purposes is therefore just as important as developing these new technologies, as part of the governance strategy.

Thus, when dealing with Artificial Intelligence, it is essential to promote an extensive debate about the ethical guidelines that should orient the construction of these machines. Clear parameters for how to conduct this study from an ethical point of view, however, are yet to be defined. The need to establish a regulatory framework for this type of technology has already been highlighted by some initiatives.

The European General Data Protection Regulation (GDPR) has already established important guidelines concerning, for example, data collection, storage, and privacy, setting out key principles such as Purpose Limitation, Data Minimization, Storage Limitation, Accuracy, Transparency, Integrity and Confidentiality, and Accountability. It is important to note that, for some scholars, the GDPR also provides for a “right to explanation” of decisions made by automated or artificially intelligent algorithmic systems, as well as a “right not to be subject to automated decision-making”, aiming to enhance the transparency and accountability of automated decisions.

From another angle, a conference held in January 2017 in Asilomar, CA, aimed to define a series of principles so that the development of Artificial Intelligence programs can be beneficial to humanity. Twenty-three principles were set out, among them:

  1. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards;
  2. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible;
  3. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why;
  4. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications;
  5. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

 

Therefore, designers and builders of advanced AI systems should be considered stakeholders in the moral implications of their use and of their damaging autonomous actions. Additionally, the liability of designers and engineers should be considered with regard to their responsibility for guaranteeing values such as privacy, security, and ethics during the design phase of the artifacts, in order to avoid damage a posteriori. Hence, the challenge is to think in terms of “value-sensitive design”. As examples, we can mention the approaches of “privacy by design”, “security by design”, and “ethics by design”, always taking into account what lies within the designer’s sphere of control and influence. This should bring civil society, policymakers, and law enforcement agencies closer to the work of engineers.
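As a small illustration of what “privacy by design” can mean at the level of code, the sketch below (Python standard library only; the field names, event structure, and key handling are simplifying assumptions, not a compliance recipe) applies data minimization and pseudonymization before a reading from a wearable device is stored or shared.

```python
import hashlib
import hmac

# Secret kept by the data controller; in practice it would live in a proper key store.
PSEUDONYMIZATION_KEY = b"replace-with-a-securely-stored-secret"


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym), so records can
    still be linked internally without exposing the raw identifier."""
    return hmac.new(PSEUDONYMIZATION_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()


def minimize(raw_event: dict, allowed_fields: set) -> dict:
    """Data minimization: keep only the fields needed for the declared purpose."""
    return {k: v for k, v in raw_event.items() if k in allowed_fields}


# Example event from a wearable device (invented fields, for illustration only).
raw = {
    "email": "ana@example.com",
    "steps": 8421,
    "gps_trace": [(52.52, 13.40)],
    "heart_rate": 74,
}

# Only step counts are needed for the stated purpose; location and heart rate are dropped,
# and the e-mail address is replaced by a pseudonym before storage.
stored = minimize(raw, allowed_fields={"steps"})
stored["user"] = pseudonymize(raw["email"])
print(stored)
```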

From a legal standpoint, it is fundamental to keep in mind the new nature of a diffuse, shared liability, potentially dispersed across the space, time, and agency of the various actants in the public sphere, human and non-human. We need to think about the context in which assumptions about liability are made. The question before us is not only how to hold computational agents liable, but how to apply such liability reasonably and fairly. We must, therefore, think of a “shared responsibility” among the different actors entangled in the sociotechnical network, according to their spheres of control and influence over the damage situations presented. At the same time, we need to reflect on ethical foundations as part of a governance strategy for intelligent algorithms, which requires a whole new interpretation of the role of Law in this techno-regulated context.

 


 

The State of Responsible IoT 2018
