Ethics, Digitisation and Machine Intelligence

The wheel of technological innovation keeps spinning – and picking up speed. Recently, the discourse on digitisation has been conducted under an old term made new: artificial intelligence. The discourse is often marked by fears: AI seems to cause ethical problems, to be a high-risk technology. The starting points for these concerns are invariably autonomous weapons, the morally unsolvable dilemmas of autonomous driving, or the (in the opinion of very few) imminent AI systems with consciousness and a supposedly logically unavoidable urge to take control of the world. On the other hand, there is a very optimistic discourse that emphasizes the opportunities: prosperity, so one argument goes, can only be secured through economic engagement with this technology. Another claim is that we Europeans can only carry our values into the age of artificial intelligence if we produce this technology ourselves and do not leave it to actors in the USA or China.

The current successes of AI research and implementation are certainly remarkable: in early November 2018, for example, the state-owned Chinese news agency Xinhua presented a newsreader generated entirely by computer. According to media reports, the Xinhua agency stressed that the advantage of an AI newscaster is that it can work 24 hours a day without a break. A strange justification – such a system could just as well work through a whole year and longer without a break.


Like digitisation, “artificial intelligence” has become an indeterminate term that is used in popular discourse to describe anything about smart technology that frightens or inspires us – depending on the perspective. To get to the bottom of this ethical matter, one has to take a closer look.

Often enough it has been stressed that the term artificial intelligence does not fit at all. Intelligence cannot be artificial, the argument goes, because consciousness is necessary for it. An alternative term is machine intelligence, which indicates that this intelligence is something categorically different from what we ascribe to humans. However, machine intelligence is a category defined in comparison to human abilities. We have certain difficulties in determining exactly what human intelligence is, but we know roughly what constitutes intelligent human behaviour. In everyday life, we infer people's intelligence by judging their behaviour. In this sense, we can judge the behaviour of a machine as intelligent if the machine can successfully simulate some or even many important elements of human intelligence.

Systems controlled by algorithms are now regarded as intelligent if they can interpret data correctly, learn from data and use this learning success to accomplish specific goals and tasks. One speaks of “weak AI” when an AI system is oriented towards concretely determinable abilities and reproduces them, such as learning to play Go and beating grandmasters. One would speak of “strong AI”, on the other hand, if the intelligent system, in addition to individual abilities, also possessed (self-)awareness, for example, and could define and pursue entirely new goals for itself completely independently – deciding, say, to learn and practise composing in addition to playing Go. Whether and when it will be possible to develop systems with “strong AI” is highly controversial. – So for now we are dealing with simulations of intelligent behavior by machines. Machines independently recognize patterns, adapt their “behavior” and can make “decisions” based on self-controlled analysis of data. The quotation marks are important: we should not, for example, attribute genuine decision-making ability to machines, because responsibility and the evaluation of a decision's consequences according to social norms and moral values are something we should reserve for people.
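The notion of “weak AI” described above – a system that learns one narrow, concretely determinable ability from data and nothing else – can be illustrated with a minimal sketch. The task, data and update rule here are illustrative inventions (a classic perceptron on a toy classification problem), not any specific system mentioned in the text; the point is merely that the machine adapts its “behavior” from examples without any awareness of what it is doing.

```python
# Minimal sketch of "weak AI": a perceptron that learns a single narrow
# task (a linear classification rule) from labelled examples.
# All data and parameters are invented for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear decision rule from labelled examples."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # adapt only when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the learned rule to a new input."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy task: classify 2D points by whether they lie above the line x + y = 1.
samples = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8)]
labels  = [0, 0, 1, 1]
w, b = train_perceptron(samples, labels)
print(predict(w, b, (0.1, 0.1)), predict(w, b, (1.0, 0.9)))  # → 0 1
```

The system “decides” only in the bracketed sense used above: it applies a learned numerical rule to one task it was trained for, with no capacity to set itself new goals.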

From here, the ethical problems of AI can be mapped out as a further stage of digitisation. On a first level, the big questions about consciousness in the context of “strong AI” are particularly interesting. Is a computer consciousness conceivable, and what consequences would this have for the concept of human consciousness? What is thinking, and can beings other than humans do it? And if so: can or must specific rights be granted to such systems? Isn't a comprehensive superintelligence basically godlike? – Many of these questions have their roots in the philosophy of mind, in neuroscience, in theology, or in all three. Anyone who assumes that thinking has purely material foundations (our biological brain functions) will tend to assume that consciousness can also be realized in machines. At this point, current technological developments are forcing us to rethink what the human being is, what constitutes us, whether we have a soul and what that might be.

A second level deals with questions of human self-understanding. This raises the question of how an AI environment (with AI newscasters, computer assistants that react to speech, and completely autonomous vehicles) influences people's thinking about themselves. What does “human rationality” still mean when machines can make better decisions than humans, for example in skin cancer diagnoses? What are guilt and responsibility when autonomous systems cause accidents? What is a human action worth when robots can act more precisely and tirelessly? What is lost when robots care for our elderly? In a way, we become new people through the technology at our disposal. Digital technologies, including AI systems, enable us to acquire new knowledge, to live different lives (mobility) or to live longer. We do not become completely different people through new technology, but we can and will perceive ourselves anew. This can change a lot.

Finally, the third level deals with concrete questions of the use and regulation of corresponding products; informational self-determination, data protection and the promotion of research and business are also relevant topics here. These questions of applied ethics are as diverse as the fields of application of AI. The three outlined levels must be considered together: anthropological questions are related to deeper considerations on the status of consciousness and thinking, and practical questions to anthropological considerations. The moral aspects in all these areas, especially in the applied dimension, are extremely diverse. And they are important, because today we need answers to the question of what to do. The fact that we are “only” dealing with weak AI does not mean that we do not have strong ethical questions to deal with. These questions are very concrete and already a topic today. All of this is about redefining what is specifically human. We will not be able to answer these questions with technical instruments alone, but also and above all on the basis of our experiences of community, music, dance, religion, love, art and human encounter.

First published in Opus Kulturmagazin, 2019.

Citing this article: Filipović, A. (2019, May 3). Ethics, Digitisation and Machine Intelligence. Unbeliebigkeitsraum. Retrieved from,-digitisation-and-machine-intelligence/

Algorithms and Artificial Intelligence from an Ethical Perspective

At least since my 2013 essay “Die Enge der weiten Medienwelt. Bedrohen Algorithmen die Freiheit öffentlicher Kommunikation?”, the topics of algorithms and artificial intelligence (AI) have been among my research interests. The relevance of “artificial intelligence” for media and communication ethics is obvious: if algorithms and self-learning systems play an increasingly important role in public communication and in the media (personalisation algorithms, for example), then they should also be reflected upon ethically.

The extension of classical communication and media ethics towards questions of information ethics is shown, for example, by the “Handbuch Medien- und Informationsethik” (ed. Jessica Heesen). Indeed, topics of information ethics and media ethics must be integrated today, that is, treated together. This is also evident in the name of our “Zentrum für Ethik der Medien und der digitalen Gesellschaft (zem::dg)”, which I direct together with Klaus-Dieter Altmeppen. Some of my doctoral students are also working on questions related to AI.

In this context of ethical engagement with algorithms and artificial intelligence, a few publications and research projects have recently emerged, which I would like to point out briefly here:

Together with Christopher Koska and Claudia Paganini, I prepared the expert report “Ethik für Algorithmiker – Was wir von erfolgreichen Professionsethiken lernen können” for the Bertelsmann-Stiftung a few weeks ago (2018). It seeks to establish the conditions under which professional ethics work and whether this can be applied to the field of algorithm design.

With Christopher Koska, I published the essay “Gestaltungsfragen der Digitalität. Zu den sozialethischen Herausforderungen von künstlicher Intelligenz, Big Data und Virtualität.” in 2017. A small credo of my ethical perspective is hidden in it:

“Ethics cannot simply be positioned against technological development as such. […] This proposal by no means implies that we should be uncritical of technological developments – on the contrary. Rather, it is about giving oneself the opportunity to gain a critical standpoint in the first place, one that can then also unfold a transformative force. An ethics that takes pleasure in innovation does not exempt itself from this – and precisely for that reason can remain critical. Such an ethics does not let itself be left behind by dramatic technological developments, but tries to keep pace in order to continue helping to shape the course of the race.” (Koska, Filipović 2017: 189)

Together with, and on behalf of, Microsoft Deutschland, we at zem::dg prepared an expert report on the “Social responsibility of curated aggregation portals using the example of MSN”. News and media content is increasingly delivered algorithmically, which raises the question of social responsibility for the quality of public communication (filter bubbles, echo chambers, isolation through personalisation, and so on). The aim of the research project was to illustrate the problem using the example of the MSN platform (Microsoft) and to give practical advice on how the company can live up to its social responsibility.

Under review is a project proposal (2018) that we at the Hochschule für Philosophie drafted together with Prof. Dr. Oliver Alexy (Professorship “Strategic Entrepreneurship” in the Entrepreneurship Research Institute at TUM School of Management) and Prof. Gordon Cheng (Chair of Cognitive Systems and Founding Director of the Institute for Cognitive Systems at TU Munich). Topic: “The Public Value of Artificial Intelligence – Shaping an Ethical Future for AI in Digital Societies”.

The topics of “algorithms” and “artificial intelligence” will play an even greater role in my work in the future. I hope to be able to write something about them here on the blog from time to time.


  • Filipović, Alexander (2013): Die Enge der weiten Medienwelt. Bedrohen Algorithmen die Freiheit öffentlicher Kommunikation? In: Communicatio Socialis 46 (2), S. 192–208. DOI: 10.5771/0010-3497-2013-2-192.
  • Filipović, Alexander; Koska, Christopher; Paganini, Claudia (2018): Ethik für Algorithmiker – Was wir von erfolgreichen Professionsethiken lernen können. Hg. v. Bertelsmann-Stiftung. Gütersloh (Impuls Algorithmenethik, 9). DOI: 10.11586/2018033.
  • Koska, Christopher; Filipović, Alexander (2017): Gestaltungsfragen der Digitalität. Zu den sozialethischen Herausforderungen von künstlicher Intelligenz, Big Data und Virtualität. In: Ralph Bergold, Jochen Sautermeister und André Schröder (Hg.): Dem Wandel eine menschliche Gestalt geben. Sozialethische Perspektiven für die Gesellschaft von morgen. Freiburg: Herder, S. 173–191.