CS4R: Cyber Security for Robotics

The cybersecurity for robotics workshop

The next generation of robots will be connected, whether to each other, to industrial systems or to the Internet, and that comes with huge risks in terms of cybersecurity. For years, manufacturers and end-users have worried about safety, but there is no real safety without security. Security is not an endpoint or a product; it is a process, and it needs to be continuously challenged.

In the EU, security is a trendy research topic in robotics. Yet the cyber-maturity of the robotics value chain is evolving slowly relative to the challenging threat landscape that is emerging. The Cyber Security for Robotics workshop (CS4R for short) will address robot-related cybersecurity topics and challenges, providing deep insight into, and discussion of, the potential cyber-risks the current robotics ecosystem faces. Addressing the elephant in the room, experts in the field will discuss their views and the current challenges in the European landscape.




Endika Gil Uriarte, Alias Robotics

Talk 1: Current security threat landscape in robotics


Víctor Mayoral Vilches, Alias Robotics

Following an offensive approach, Víctor will share state-of-the-art tools for robot cybersecurity research and how his team is pentesting robots and robotic systems around the world while helping companies deliver more cyber-secure solutions.

Slides | Video

Talk 2: Anomaly Detection for CPS: Results from the IoT4CPS project


Arndt Bonitz, Austrian Institute of Technology (AIT)

Mario will highlight the outputs of the EU-funded IoT4CPS project and will go on to illustrate some feasible attacks on cyber-physical systems, including MQ intrusion via mobile.

Slides | Video

Talk 3: Robotics honeypots: Learning from robot hackers


Francisco J. Rodríguez Lera, Universidad de León

Honeypots are cyber-deception platforms typically used to lure and study attackers. Learn more about robot hackers and their behaviour in a robotics deception platform: the honeypot created by the University of León (Spain).

Slides | Video

Talk 4: Attacking mobile robot bases


Bernhard Dieber, Joanneum Research

Who can remotely alter the "safety" settings of mobile robots? Dr Dieber and his team have done it for a very popular mobile robot base. Learn more about how they did it in this talk.

Talk 5: Robot Security Survey results


Endika Gil Uriarte, Alias Robotics

Endika will share some of the conclusions drawn from the robot security survey and how they relate to the overall status of robot cybersecurity.

Slides | Video

Round table & closing


Moderator: Endika Gil Uriarte

Also, do not miss the next session:
Cybersecurity for Robotics (CS4R)

PART II, Solutions

Stefan Rass, Alpen-Adria University
Endika Gil Uriarte, Alias Robotics
Víctor Mayoral Vilches, Alias Robotics
and many more



Victor Mayoral Vilches
Endika Gil Uriarte
Bernhard Dieber
Mario Drobics, Austrian Institute of Technology (AIT)
Francisco J. Rodríguez Lera



Tech update session - Room 3
FYCMA - Palacio de Ferias y Congresos de Málaga
Av. de José Ortega y Gasset,
201, 29006 Málaga, Spain


March 3rd, 2020 - 14:00 CET


(+34) 695 72 05 99

Would you like to learn more
about robot cybersecurity?



By threat modeling. Threat modeling helps you better understand your security flaws by studying the data flows and trust boundaries that apply to your use case(s). Once you have a clear picture of which attack vectors you are exposed to, you will be in a position to decide where to invest.
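The idea of flagging data flows that cross trust boundaries can be sketched in a few lines of code. The components, zones and channels below are hypothetical examples, not part of any real threat-modeling tool:

```python
# Threat-modeling sketch (hypothetical component and zone names):
# list the system's data flows, assign each component a trust zone,
# and flag every flow that crosses a zone boundary as attack surface.
from dataclasses import dataclass


@dataclass(frozen=True)
class DataFlow:
    src: str
    dst: str
    channel: str


# Each component sits in an assumed trust zone.
trust_zone = {
    "teleop_app": "internet",
    "robot_controller": "robot",
    "motor_driver": "robot",
}

flows = [
    DataFlow("teleop_app", "robot_controller", "wifi"),
    DataFlow("robot_controller", "motor_driver", "can_bus"),
]


def crosses_boundary(flow: DataFlow) -> bool:
    """A flow crosses a trust boundary when its endpoints are in different zones."""
    return trust_zone[flow.src] != trust_zone[flow.dst]


attack_surface = [f for f in flows if crosses_boundary(f)]
for f in attack_surface:
    print(f"Review {f.channel}: {f.src} -> {f.dst} crosses a trust boundary")
```

Here only the Wi-Fi link between the teleoperation app and the controller crosses a boundary, so it is the flow that deserves scrutiny first.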

Traditional IT and more recent OT network security are based on the castle-and-moat concept: it is hard to obtain access from outside the network, but everyone inside the network is trusted by default. The problem with this approach is that once attackers gain access to the network, they have free rein over everything inside. This is what happens if you only use a VPN. VPNs offer a layer of protection, but that is far from enough to guarantee security, especially since VPNs are not flawless themselves (e.g. see CVE-2019-14899).

Instead, we advocate the Zero Trust security paradigm. Zero Trust means that no one is trusted by default, from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. Zero Trust moves network defenses from wide network perimeters to a narrow focus on individual or small groups of resources. Access to a data resource is granted only when the resource is required, and authentication (of both user and device) is performed before the connection is established.
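A minimal sketch of the difference: in a Zero Trust check, both the user and the device must be independently verified on every request, and authorization is scoped to individual resources rather than to "being inside the network". The user, device and resource names below are invented for illustration:

```python
# Zero Trust-style access check (hypothetical names): nothing is trusted
# for merely being "inside" the network; every request must carry a
# verified user identity AND a verified device identity, and access is
# granted per resource, not per network.
AUTHORIZED = {
    ("operator_alice", "robot-ws-01"): {"telemetry", "teleop"},
}


def grant_access(user: str, device: str, resource: str,
                 user_verified: bool, device_verified: bool) -> bool:
    # Both identities must be authenticated for this request.
    if not (user_verified and device_verified):
        return False
    # Authorization is scoped to the individual resource.
    return resource in AUTHORIZED.get((user, device), set())


print(grant_access("operator_alice", "robot-ws-01", "teleop", True, True))    # True
print(grant_access("operator_alice", "unknown-laptop", "teleop", True, True)) # False
```

Note how a verified user on an unverified or unknown device still gets nothing, which is exactly the property castle-and-moat security lacks.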

Safety is concerned with the possible damage a robot may cause in its environment, whereas security aims at ensuring that the environment does not disturb the robot's operation. Safety and security are connected matters.

There's no safety without security.

We encourage you to start caring about security at the design phase. Defining a proper architecture that takes security into account is key. Security can also be tackled at later phases, but the longer you delay it, the harder and more costly it will be to ensure.

In robotics there is a clear separation between security and quality that is best understood through scenarios involving robotic software components. For example, someone building an industrial Autonomous Guided Vehicle (AGV) or a self-driving car would often need to comply with coding standards (e.g. MISRA C for developing safety-critical systems). The same system's communications, however, regardless of compliance with coding standards, might rely on a channel that provides neither encryption nor authentication and is thereby subject to eavesdropping and man-in-the-middle attacks. Security and quality are thus not mutually exclusive; there will (and should) be elements of both.
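To make the unauthenticated-channel risk concrete, here is a minimal sketch of message authentication using an HMAC from the Python standard library. The command strings and shared key are invented for illustration, and key distribution is deliberately out of scope:

```python
# Why an unauthenticated channel matters: with an HMAC tag over each
# message, a man-in-the-middle who tampers with a command is detected.
# The shared key here is a placeholder; provision keys securely in practice.
import hashlib
import hmac

SHARED_KEY = b"example-shared-key"  # hypothetical key for this sketch


def sign(msg: bytes) -> bytes:
    """Compute an authentication tag for a message."""
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()


def verify(msg: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(msg), tag)


cmd = b"set_speed 0.5"
tag = sign(cmd)
assert verify(cmd, tag)                   # legitimate command accepted
assert not verify(b"set_speed 5.0", tag)  # tampered command rejected
```

Authentication alone does not hide the command from an eavesdropper; encryption (e.g. TLS on the transport) is still needed for confidentiality.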

Making security recommendations on robotic architectures demands a proper understanding of such systems. Similarly, mitigating a vulnerability or a bug requires one to first reproduce the flaw. This can be extremely time consuming with robots, especially ensuring an appropriate environment in which to reproduce and analyze the issue. Current robotic systems are highly complex, a condition that in most cases leads to wide attack surfaces and a variety of potential attack vectors. This complicates the mitigation process and the use of traditional security approaches. In-depth understanding of such systems (robots) is required, and new mechanisms must be used.

Connected to the inherent complexity and time consumption is flaw prioritization. Patch management in robotics requires one to first prioritize existing vulnerabilities. Existing scoring mechanisms such as CVSS have strong limitations when applied to robotics: simply put, they fail to capture the interaction that robots may have with their environments and with humans, which can lead to safety hazards. New scoring techniques, combined with domain know-how, are a must to keep robotic systems secure.
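As a toy illustration of the gap (this is not any real scoring standard), one could imagine escalating a CVSS-style base score with a physical-safety dimension that CVSS itself does not model:

```python
# Toy illustration only, not a real standard: a scoring sketch that adds
# a physical-safety dimension a CVSS-style base score would miss.
def toy_robot_severity(cvss_base: float, safety_impact: float) -> float:
    """cvss_base in [0, 10]; safety_impact in [0, 1] is an assumed scale
    for potential physical harm to humans or the environment."""
    # Escalate toward 10 in proportion to the potential physical harm:
    # a flaw that can hurt people ranks far above its pure-IT score.
    score = cvss_base + (10 - cvss_base) * safety_impact
    return round(min(score, 10.0), 1)


# A medium-severity IT flaw that can drive a robot arm into a human
# ends up near the top of the scale.
print(toy_robot_severity(5.0, 0.8))  # 9.0
print(toy_robot_severity(5.0, 0.0))  # 5.0
```

The point is not the formula itself but what it surfaces: two flaws with identical CVSS scores can warrant very different patch priorities once physical consequences are taken into account.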