Rüdiger Stix

Vienna Dilemma

The New AI Strategic Dilemma

When we train an AI - at the level described by Henry Kissinger in "AI and the End of Enlightenment" - to apply the ethics of the European approach to Human Rights, this AI will arrive at an ethically based finding that fits perfectly into the system of justice and ethics of European legal thinking on Human Rights … and yet we cannot forecast whether this AI will recognise assisted suicide as a basic Human Right, or as a severe crime…

Executive Summary

At the 4th Strategy Conference in Vienna we identified and defined a new strategic dilemma concerning the impact of AI and emerging disruptive technologies.

In the discussions - especially with Prof. Gen. (ret.) Schmidle on general AI and Prof. Giselher Guttmann on the state of the art in neurophysiology (among others, including Wolfgang Peischel and Fred Korkisch), together with the input by Mario Stiendl (on the military use of BCI) and Bertram Mayer (with his hypothesis on biased programming according to Calvinistic values) - we examined the aspects, challenges and activities of the Finnish EU Presidency (Helsinki) from the point of view of the MoD department on Science, Research and Innovation (WFE), especially regarding:

  • defence and security research and policies, and their
  • legal, ethical and societal implications in Europe,
    in the shadow of the increasing global arms race on AI and disruptive technologies, and the urgent need for
  • fast answer(s) to the evolving regulatory frameworks around AI-based technologies in defence,
  • especially regarding the status of Neutrals in Europe, following the Hague Convention respecting the Laws and Customs of War on Land,
  • and in the cyber domain,
  • including BCI/brain-computer interfaces.

Starting from the Austrian governmental position on emerging disruptive technologies, we have followed (since April 2014) the Austrian "Forschungsatlas/Emerging Technologies" (and Policy Horizons Canada on emerging technologies), with its evaluation of potential technological advances in six key areas, chosen for their likelihood of significant disruptive impact on work, life, firms and policy over the next 15 years - and they are all based on AI.

At least five of the six key areas have a direct impact on security and defence:

  • Neurotechnology and Cognitive Technologies (with Neural Network Computing, Extended Cognition and Neural Interfaces);
  • Health Technologies with all aspects of Human Enhancement (from Biohacking and Genome Editing to Enhanced Organs);
  • Nanotechnology and Materials Science (from smart materials to controlled self-assembly and self-healing materials);
  • Energy Technologies (from Smart Grids to Energy Harvesting and Space-based solar power);
  • Digital and Communication Technologies (from 5G and memristors to digital currencies - with severe effects on the existing international financial system - and telepresence).

Since November 2018, the Austrian government has been working on an Austrian AI strategy, the AIM 2030. To date, the ministries BMEIA and BMLV have no official position on AI and disruptive technologies in defence integrated into the AIM 2030; however, a basic paper on the impact of AI on defence research and the needs arising from it was proposed by the MoD department on Science, Research and Innovation (WFE) at the beginning of the Austrian EU Presidency, discussing the priority technologies of defence research and their regulatory framework.

In Austria we have clusters of international excellence in some of these technologies, especially in genetics (Penninger et al.), quantum physics (Zeilinger et al.), neurology (Giselher Guttmann and Gert Pfurtscheller with EEG, Claus Lamm with fMRI, and the biggest neurosurgery department in Europe at the JKU with Andreas Gruber), autonomous systems (TTTech and AVL List), and at least one outstanding leading scientist in AI (Prof. Sepp Hochreiter, JKU, co-inventor of the LSTM algorithm), among others.

Anyway, as a Neutral we face the need for fast answer(s) to the evolving regulatory frameworks around AI-based technologies, and not only in defence:

In a context-aware computing environment - computers that can both sense and react to their surroundings - AI-based devices have information about the circumstances under which they operate and, based on rules and sensor inputs, react accordingly. Context-aware AI devices may also learn assumptions about the user's current situation.
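The rules-plus-sensors pattern described above can be sketched in a few lines. This is a deliberately minimal toy, not any system discussed in the text; all names (the `SensorReading` fields, the rules, the thresholds) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """A snapshot of the context a device can sense (illustrative fields)."""
    location: str
    noise_db: float
    user_in_meeting: bool

def react(reading: SensorReading) -> str:
    """Apply simple, ordered context rules to decide how the device behaves."""
    if reading.user_in_meeting:
        return "silence notifications"   # rule: never disturb a meeting
    if reading.noise_db > 80.0:
        return "raise ringer volume"     # rule: loud environment
    if reading.location == "home":
        return "default profile"         # rule: relaxed setting at home
    return "vibrate only"                # conservative fallback

print(react(SensorReading("office", 45.0, True)))   # silence notifications
print(react(SensorReading("street", 92.0, False)))  # raise ringer volume
```

Even this trivial sketch illustrates the regulatory point that follows: the device's behaviour is fully determined by rules and sensor inputs, yet an outside observer sees only the resulting action, not the rule that produced it.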

Therefore, as a first elementary step, we face the general question of responsibility and liability for actions, and in particular questions about options and limits under the international law of armed conflict - all resting on the main question: how do AI devices arrive at their solutions?

These urgent regulatory topics fall into three main clusters:

  1. The first ranges from the discussion about autonomous platforms and/or weapon systems versus automated weapons, as discussed in Geneva, to the regulatory frameworks in the cyber domain when using swarm technologies.
  2. In the cyber domain, the NATO Tallinn Manual 2.0 provides some regulatory benchmarks, including the status of Neutrals in cyber warfare, following the Hague Land Warfare Convention.
  3. The third big issue is the increasingly uncertain border between personal identity and personal legal liability on the one side, and on the other the AI-backed environment when using BCI/brain-computer interfaces connected to the net in the cyber domain, while at the same time using (invasive or) non-invasive neurotechnology to alter cognitive or emotional states.
  • In this field we face human rights discussions about personal and cognitive identity, questions about the liability of AI and AI-operated systems, the status of legal or illegal combatants, and in general the discussion on appropriate ethical guidelines.
  • The questions about liability when using AI systems include the handling of IPR/intellectual property rights, with very different approaches in the USA and in Europe (not to mention China). Of course, this is strongly connected to questions of data ownership, security and transparency.
  • The discussions on appropriate ethical guidelines - on military topics pushed worldwide by the NGO "Campaign to Stop Killer Robots" - have a very strong influence on societal acceptance in the western world.

The big problem with the hope of finding some overarching ethical principle for AI regulation is not only the balance between western ethical approaches to data privacy and those of China, or of other relevant powers in the AI arms race:

The most obvious - but very often neglected - problem with the discourse on appropriate ethical guidelines (not only on military topics) is the fact that even at the core of European legal systems we find diametrically opposed solutions to basic human rights issues. Consider the elementary individual right concerning the life of a person in the Netherlands, Germany, Switzerland and Austria: when it comes to the end of life and to voluntarily taking assistance in ending one's own life, in two of these neighbouring countries it is a basic Human Right - and in the two others it is a severe crime…