DEF plans a debate on Digital Ethics in October of this year as a follow-up to the discussions at DEF 2015. This blog post invites comments, critique and suggestions in preparation for that debate.

The need to anticipate legislation and usages

Digital technologies have modified human activity so deeply, and digital usage evolves so quickly, that it is essential to constantly update the rules of digital ethics and to review deontology in many areas (trade, health, education …). With every new opportunity and potential comes the possibility of erroneous use, diversion or malicious action.

Digital ethics touches on many hot issues, including respect for privacy and the consequences of profiling, ethics of content, information collection and storage, the right to be forgotten, cybercrime and terrorism, (mass) surveillance, freedom of expression, the IoT and Big Data, robots and drones, digital artificial body implants and augmented reality in neurobiology, intellectual property, virtual currency, precaution, accountability, responsibility and intentionality, and global and cultural differences in ethical norms.

Digital technology increasingly disrupts the ethical basis of our society due to:

  • the profound social dimension of technologies and the relation of humans to the technology that mediates their lives;
  • the essence of digital technology, whereby any digital entity can be cloned (and/or falsified) ad infinitum, without much effort;
  • the widespread interconnection of mobile computing entities, where shadowing is difficult or impossible because entities are anonymous, or recognizable but hard to trace;
  • the excesses of governments in monitoring citizens of all countries, inside and outside their territory;
  • the proliferation of websites appealing to violence, hatred or terrorist recruitment (calls to jihad);
  • hacktivism (WikiLeaks, Anonymous), whistleblowers and white-hat hackers;
  • the ideology of the digital world as a supranational territory that escapes national courts, or of the Internet as a lawless zone;
  • the rise of Internet oligarchs who challenge governments (tax optimization, etc.) and users (irreversibility of data storage, lock-in to single-source software, “must” funnels).

Moreover, digital ethics will increasingly have to deal with decisions made by autonomous systems (robots, profiling systems, embedded and connected systems, remote-control systems, etc.) that are nevertheless managed by legal persons.

Digital ethics aims to indicate how subjects (human beings, organizations, computers and software, but also digital objects, drones and robots) must act and behave towards each other, and towards those around them, in the digital ecosystem. Digital ethics precedes law, which largely trails the evolution of digital behavior, with regulation mostly endorsing fait-accompli usage. Hence the need to think and act as early as possible.

It is necessary to create rules very early, e.g. on whistleblowing and other forms of challenging accountability and transparency, so that everyone acts with the best intentions in the digital world, without disturbing others or the environment.

Complexity of digital ethics

Digital ethics encompasses the entire field of ethics of IT functions: communications, but also storage and processing. It thus includes:

  • Ethics of communication: among others, do not spy, do not impersonate a sender or a receiver, do not download illegal content, do not record the digital behavior of users without their consent;
  • Ethics of storage: among others, do not store personal information without the express consent of users, do not record the location of people without their consent, do not store sordid content, do not build files of sensitive data on people; and
  • Ethics of computation: do not plant viruses or Trojan horses, do not distribute software with backdoors, do not extract profiles and behavior of people without their consent, etc. Digital mistrust essentially arises here, because of the opacity of licensed software and the asymmetry of “free” services offered in irreversible exchange for personal data.

The ethics of computation

Data is handled by computer programs, so we must act on these programs, complex as they are (Google, Apple, Facebook, Amazon, Microsoft, Cisco …).

Digital ethics cannot be implemented entirely by technical measures.

Meaning is given by observation or interpretation. To automate the ethical quest, we therefore resort to security systems based on rules engines, or we put in place devices or services that inject trust through statistical observation.
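Such a rules-engine approach can be sketched in a few lines. This is a minimal illustration, not an existing system: the `Action` fields and the two example rules are hypothetical, standing in for the ethics-of-storage and ethics-of-computation rules discussed above.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A digital action to be vetted before execution (hypothetical fields)."""
    kind: str             # e.g. "store", "profile", "transmit"
    data_is_personal: bool
    user_consented: bool

# Each rule is a predicate that returns True when the action violates it.
RULES = {
    "no storing personal data without consent":
        lambda a: a.kind == "store" and a.data_is_personal and not a.user_consented,
    "no profiling without consent":
        lambda a: a.kind == "profile" and not a.user_consented,
}

def violations(action: Action) -> list[str]:
    """Return the names of all ethical rules the action would break."""
    return [name for name, broken in RULES.items() if broken(action)]

print(violations(Action("store", data_is_personal=True, user_consented=False)))
# flags the storage rule; an empty list means the action passes the rule set
```

The limitation the text points out applies directly: the engine can only judge what the rules anticipate, so it must be complemented by observation of the system in use.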

Technology does not prevent abuse, even if we were to build a perfectly ethical system. It must be accompanied by social or observation models that, in real time or retrospectively, strengthen or validate the confidence that can be placed in the system.

Inserting the concerns of users into the life cycle of products is a complex task that can meet with only limited success. Taking privacy into account early in the life cycle of products and services (“privacy by design”) helps only so much, since digital innovations often come from diversions of beneficial uses of products or services that their designers had not thought of.

The dangers of simplicity

Concepts like accountable, transparent and open seem simple, but they often cover contradictions, and their meanings become twisted and subjective.

In a completely opaque world, with no transparency, it is difficult to issue rules. But if everything is crystal clear, privacy no longer exists, which leads to a totalitarian world; and intellectual property no longer exists in a world without trade secrets. The private sphere should remain a gray area managed by each subject using security technologies (cryptography and steganography).

Trust requires visibility. Confidence grows from a dialogue between two parties who exchange (parts of) unveiled information. Trust models (based on reputation, recommendation and frequentation) are statistical opinion models.
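One widely used statistical opinion model of this kind is the beta reputation model, which estimates trustworthiness from counts of positive and negative past interactions. A minimal sketch, where the function name and the neutral prior are illustrative assumptions:

```python
def reputation_score(positive: int, negative: int) -> float:
    """Expected trustworthiness under a beta reputation model:
    the mean of a Beta(positive + 1, negative + 1) distribution."""
    return (positive + 1) / (positive + negative + 2)

# A newcomer with no history gets a neutral 0.5; evidence shifts the estimate.
print(reputation_score(0, 0))   # 0.5
print(reputation_score(9, 1))   # 10/12 ≈ 0.83
```

The point of such models is exactly the "play of light and shadow" below: parties reveal only aggregate evidence about each other, not the underlying interactions.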

Digital ethics is a subtle play of light and shadow, protecting the interests of both parties while growing interconnection in an anonymous world.

The open-software model is also presented as a solution. Open software (in the computing sense) is opaque software that makes public the interfaces to which one can connect. Spyware can hide inside open software, and the user cannot verify that all her data will be erased when she leaves the service.

The open-source software or open-library model is different. There, a community of software developers (the Linux model; security tools such as the Tor anonymisation network and the certificates of OpenSSL) or a community of writers (the Wikipedia model) continually improves a work that is still under construction, and this work is done in bright light. These communities also have their faults and their lapses of ethics: backdoors and pressure groups exist here as well, in full light, yet are more difficult to detect.

The reluctance of actors

Since technology falls short, we can appeal to the goodwill of the various actors. Unfortunately, the outlook there is not very positive.

The computer industry has been neglecting the security dimension for too long.

The big players often hide behind generic computing models to narrow their application of ethics: for example, the communication model of telecom operators, or the ownership-and-usage model of software publishers with copyrights.

Telecom operators focus on continuity-of-service and coverage obligations and do not want to worry about filtering communications (viruses, pedophile content, illegal traffic) or about controlling content (ethical or not). The same goes for data-hosting and cloud providers.

Application vendors, who sell meaning, do not want to be responsible for the consequences of the use of their products, which are protected by the notion of copyright: the software is a work of art, used under the exclusive responsibility of the user. This defense by copyright did not exist in 1960.

There is a tendency in the ICT industry to shift all responsibility to users (through licensing agreements). This is not the case in other industries such as automotive or energy. In other words, nobody wants to take responsibility for digital actions: responsibility is fully transferred to the user, who is mostly excluded from knowledge of the system she uses.

Implementation of a digital ethics: values, principles, rules

For all the problems above, new models need to be developed and implemented, based on common and fair action by all stakeholders (suppliers and users, governments, industry and citizens), each taking their fair share of responsibility for the consequences of use.

Digital technologies are far from neutral and are not independent of political geostrategy. Using information technology inevitably involves ethical choices. But those choices need values, and values may conflict (e.g. freedom and privacy, social peace and security), which exposes the complexity of the coexistence of principles and the difficulty of developing such rules.

The Enlightenment values

It is first necessary to highlight the values of the Enlightenment: freedom of the individual, autonomy, informed consent, individual responsibility, sovereignty over personal digital assets, intellectual property, respect for the environment, justice, sharing, solidarity, reversibility of usage (the ability to switch suppliers), and digital dignity (avoiding digital assault by unwanted solicitations …).

The responsibility and intentionality of a physical person for computer actions clearly lie in the realm of digital ethics. But can we attribute (directly or indirectly) the emergence of other IT events (infrastructure failure, malfunction of a cloud, leakage of information, dissemination of virus attacks, etc.)? The identification of digital facts and digital evidence is a major issue to be resolved (forensics, audit, search, etc.). We might start by distinguishing accountability (the action is executed by a human being or an artefact) from responsibility (a legal entity, human or organisation, can be held liable). This could help us arrive at rules for a deontology of digital systems management.

But can digital technologies reject obscurantism and barbarism, and promote knowledge and sustainable action for the planet, without blocking the future technologies that will eventually supplant the digital?

Of course, digital ethics appeals to the deepest philosophical debates, from Aristotle and Plato to Michel Foucault and Roland Barthes. We should involve all stakeholders (including the protest movements) if we are to develop a code of conduct. But questions about the universality of digital ethics may remain unanswered, and it might be best to limit ourselves initially to the Western world in order to establish first versions of an ethical framework.

Finally, when building an ethical framework for digital technologies, we must not fall into the trap of an outdated battle between humans, who can judge a behavior, and machines, which have no consciousness.

The principles

We could identify certain high-level principles borrowed from the rules on privacy and general human rights:

  • Finality: require authentication for an activity, but do not retain the personal data for other purposes;
  • Proportionality: collect data in accordance with the needs of the service or of security assurance;
  • Reciprocity: “free” services tied to making personal information available should come with a duration, a right of withdrawal, and control over the uses of these databases;
  • Least privilege: grant only what is really needed for a service; monitor only what must be controlled;
  • Diversity: no digital uniformity;
  • Universal access: against the digital divide;
  • Progressivity: allow evolution and innovation;
  • Sustainability: ensure a future for humankind;
  • Respectfulness: do not do to others what you would not want to experience yourself;
  • Fairness, equity: treat people equally and without discrimination.
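Taken together, the finality and least-privilege principles amount to deny-by-default, purpose-bound access to personal data. A minimal sketch of how a service could enforce them; the consent register, user names and purpose labels are hypothetical:

```python
# Hypothetical consent register: the purposes each user has agreed to.
CONSENTED_PURPOSES = {
    "alice": {"authentication"},
    "bob": {"authentication", "recommendations"},
}

def may_use(user: str, purpose: str) -> bool:
    """Finality + least privilege: personal data may be used only for a
    purpose the user explicitly consented to; anything else, including
    an unknown user, is refused by default."""
    return purpose in CONSENTED_PURPOSES.get(user, set())

print(may_use("alice", "authentication"))  # True: a consented purpose
print(may_use("alice", "marketing"))       # False: never consented to
```

The deny-by-default direction is the design choice that matters: a new purpose requires a new consent, rather than consent being assumed until withdrawn.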

Clearly there will be other rules and the above may need reformulation. This is part of the discussion needed.

The above rules must affect individuals, but in particular they must also apply immediately to identity-management techniques, network management and the management of service providers (incl. social networks and cloud providers).

This immediately involves the international side of the matter, since the digital world transcends geographic borders and undermines national courts.

The ethical rules are also contextual: one exchange may be acceptable while an influx of calls is harassment; one connection to a server may be normal while a crowd effect is a cyberattack (denial of service, DoS).
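This context dependence can be captured by judging an action against its aggregate rate rather than in isolation, for instance with a sliding-window counter. A minimal sketch; the class name and thresholds are illustrative, not a real intrusion-detection system:

```python
from collections import deque

class ContextualMonitor:
    """Judge the same action by its context: one request is normal,
    a flood of identical requests in a short window looks like
    harassment or a denial-of-service attempt."""

    def __init__(self, max_events: int, window_seconds: float):
        self.max_events = max_events
        self.window = window_seconds
        self.times: deque[float] = deque()

    def acceptable(self, timestamp: float) -> bool:
        self.times.append(timestamp)
        # Forget events that fell out of the observation window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) <= self.max_events

monitor = ContextualMonitor(max_events=3, window_seconds=1.0)
print([monitor.acceptable(t) for t in [0.0, 0.1, 0.2, 0.3]])
# the fourth call within one second is flagged, although each call
# is individually identical to an acceptable one
```

The same mechanism illustrates the earlier point about statistical observation: no single event is unethical; only the aggregate pattern is.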

Most importantly, the concepts of time, place and action (an act committed in a particular place at a particular time) are no longer appropriate on the digital scene, given the emergence of virtualized IT entities. We might need new digital laws that move away from the classic rules of human behavior in the real world.

First conclusions

The user is not in a position to understand the deficiencies or dangers of computer systems, because of their technical difficulty, their irrelevance to her daily activities, the lack of transparency of the IT players, and the opacity of software.

Digital ethics operates in a complex world. It seems necessary to develop and implement new formal models of digital ethics, consisting of subjects (individuals, organizations, robots) that enforce an “ethical conscience” (knowledge and behavior) defined by a set of principles formulated as rules of conduct (one might think of Isaac Asimov’s laws of robotics).

Computing ubiquity makes it increasingly difficult to apply national jurisdiction.

First of all, we must take responsibility ourselves. “Ethics committees” and regulators must have the courage to unravel responsibilities, to designate guilty actors and to propose solutions to remedy the situation, provided they have the instruments of