Current work by the US Department of Defense targets interoperability and interchangeability, non-phreakable quantum radar, and human-machine teaming. These activities, funded with 12.5 billion USD from the 2017 budget, are under pressure from their legal and political legitimization, from economic competition and the arms race, and not least from adversarial espionage: a "third revolution" in the arms sector has begun. A select circle of experts in public international law deals with cybersecurity in applied fields such as the military defence against cyberattacks on critical infrastructure, whose crisis resilience is far from guaranteed; its vulnerability is considered an unsolved problem because normative safeguards in cyberspace remain merely "unreal". The manifold connections to be explored, such as the balance between innovation and social interests, make an interdisciplinary approach necessary. In this context it does no harm if a definition, a statement, or a piece of normative fine-tuning initially turns out woollier than is usual in juridical language. Only in this way, with courage and frankness, can such an approach succeed. This essay contributes some observations from public international law to robotics and cybersecurity. It assumes that juridical and ethical solutions often look different from military-economic ones. A comprehensive prohibition on both the use and the development of cyber operations appears neither sensible nor desirable. Drones, for instance, cannot be withdrawn from service because of their worldwide prevalence. In the course of dynamic further development, however, one can shift the emphasis within the humanitarian-law principles of proportionality and narrowness of impact, thereby gathering inspiration for a libertarian understanding of supplementary legal matters.
In the sense of an ethical reinterpretation, existing standards such as internationally acknowledged due diligence and due care, e.g. the duty to prevent cyberattacks, general information obligations, consultations, risk assessments, and legal assistance, should be treated as a negotiable body of norms. Artificial intelligence, however, poses a special problem here. On the one hand, human-guided weapons and the legal regulations tied to them lose their force; on the other hand, autonomous capacities for development, as in cancer detection and diagnostics by means of innovative data links hidden from human perception, go beyond the facts previously regulated. The current legal instruments therefore appear scarcely sufficient. Naturally, new regulations could be formulated in a more purposeful way. Whether artificial intelligence can be fully subsumed under existing law, or whether an analogous application is possible, will be shown by technological developments. Human-machine teaming, identification, targeting, the fight against terrorism, and the containment of asymmetric warfare in cyberspace will in all probability not override the demand for global commons such as sustained and guaranteed access to the open ocean and to outer space, peace, or the collection of data in the public interest.