
Cyber Security for Robots (CS4R):
Part 3: Humanoids

From Defense Hardening to the Offensive Use of Humanoids

As humanoid robots advance rapidly toward mainstream deployment, they present unique cybersecurity challenges that demand immediate attention. Unlike traditional industrial robots confined to controlled environments, humanoids are designed to operate in human spaces, interact physically with people, and make autonomous decisions. This proximity and autonomy create unprecedented security risks, ranging from privacy breaches to physical harm.

The CS4R Part 3: Humanoids workshop addresses this critical gap by equipping researchers and integrators with actionable know-how to cyber-secure humanoid robots before deployment. More provocatively, we explore how these same platforms can be weaponized in cyber-physical attacks, demonstrating offensive scenarios where AI-enabled humanoids act as Trojan horses. Through live demonstrations, hands-on sessions, and expert presentations, participants will gain deep insights into humanoid attack surfaces, secure hardening techniques, and the evolving threat landscape.

Program

Welcome and objectives

14:00-14:10

Dr. Víctor Mayoral-Vilches

Introduction to the workshop objectives and overview of the humanoid security landscape.

Keynote: 2025 Humanoid Security Threat Landscape

14:10-14:40

Endika Gil Uriarte, Alias Robotics

A comprehensive overview of current and emerging security threats specific to humanoid robots, including real-world case studies and vulnerability demonstrations.

Session 1: Zero Days in Humanoids

14:40-15:10

Dr. Stefan Rass, Johannes Kepler Universität Linz

A game-theoretic approach to risk estimation in humanoid robots. Learn how to assess and quantify security risks using advanced mathematical models.
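For context ahead of the session, the sketch below illustrates the basic game-theoretic framing; it is not taken from the talk, and the action names and numbers are invented for illustration. Defender hardening choices and attacker exploit choices form a zero-sum game, and the defender's optimal mixed strategy yields an expected-loss figure that can serve as a risk estimate.

```python
# Illustrative only: a toy two-player, zero-sum "security game" for a humanoid.
# Rows = defender actions, columns = attacker actions; entries are expected
# losses to the defender (e.g. normalized impact x likelihood). All numbers
# below are made-up placeholders, not measurements.

# Defender: harden middleware (row 0)  vs  lock down network (row 1)
# Attacker: exploit DDS (col 0)        vs  exploit Wi-Fi (col 1)
LOSS = [
    [2.0, 8.0],   # hardening middleware leaves Wi-Fi relatively exposed
    [7.0, 3.0],   # locking down the network leaves DDS relatively exposed
]

def solve_2x2_zero_sum(m):
    """Return (p, value): probability of playing row 0 and the game value
    (expected loss) for a 2x2 zero-sum game, assuming no saddle point."""
    (a, b), (c, d) = m
    denom = a - b - c + d
    p = (d - c) / denom
    value = (a * d - b * c) / denom
    return p, value

p, value = solve_2x2_zero_sum(LOSS)
print(f"Harden middleware with probability {p:.2f}, "
      f"lock down network with probability {1 - p:.2f}")
print(f"Expected loss against a best-responding attacker: {value:.2f}")
```

Real assessments use richer models (multiple assets, probabilistic outcomes, repeated interaction), but the structure is the same: enumerate actions, estimate losses, and solve for an equilibrium.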

Coffee Break

15:10-15:30

Networking opportunity

Connect with fellow researchers and practitioners in the humanoid robotics security field.

Session 2: Humanoids as Attack Vectors

15:30-16:00

Dr. Víctor Mayoral-Vilches, Alias Robotics

Exploring how humanoid robots can be weaponized using Cybersecurity AI. Live demonstrations of offensive scenarios and attack methodologies.

Lightning Paper Presentations

16:00-16:30

Moderated by Dr. Mayoral-Vilches

Six 5-minute presentations showcasing cutting-edge research in humanoid robot security. An opportunity for researchers to present their latest findings.

Road-map Discussion & Closing

16:30-17:00

All participants

Collaborative discussion on future research directions and closing remarks by Dr. Mayoral-Vilches.

Speakers & Organizers

Dr. Víctor Mayoral-Vilches (Alias Robotics)
Endika Gil Uriarte (Alias Robotics)
Dr. Stefan Rass (Johannes Kepler Universität Linz)

Workshop Details

Event Info

Type: Half-day Workshop

Duration: 3 hours

Date: October 2, 2025

Participants: 20-40

Objective

Equip researchers with actionable know-how to cyber-secure humanoid robots while exposing how these platforms can be weaponized in cyber-physical attacks.

Target Audience

  • Robotics Researchers
  • Security Professionals
  • System Integrators
  • Humanoid Developers

Topics Covered

Humanoid attack surfaces (HW, FW, middleware, network)

Secure ROS 2/DDS hardening & SBOMs (configuration sketch below)

Humanoids weaponized via Cybersecurity AI

Zero-days in humanoids and risk assessment
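As a concrete illustration of the ROS 2/DDS hardening topic above, here is a minimal sketch assuming a recent ROS 2 distribution with the sros2 tooling installed; the keystore name and enclave path are placeholders, not a recommended configuration.

```python
# Minimal sketch of ROS 2/DDS hardening with the sros2 tooling.
# Assumes "ros2 security" is available (recent ROS 2 distribution);
# keystore name and enclave path are illustrative placeholders.
import os
import subprocess

KEYSTORE = "humanoid_keystore"
ENCLAVE = "/perception/camera_driver"   # hypothetical node enclave

# 1. Create a keystore holding the CA, keys and certificates.
subprocess.run(["ros2", "security", "create_keystore", KEYSTORE], check=True)

# 2. Create an enclave (keys, governance and permissions) for one node.
subprocess.run(["ros2", "security", "create_enclave", KEYSTORE, ENCLAVE], check=True)

# 3. Enable and enforce DDS security for processes launched from this script
#    (export the same variables in your shell for nodes started elsewhere).
os.environ["ROS_SECURITY_KEYSTORE"] = os.path.abspath(KEYSTORE)
os.environ["ROS_SECURITY_ENABLE"] = "true"
os.environ["ROS_SECURITY_STRATEGY"] = "Enforce"
```

SBOM generation for the same software stack is a separate step, typically handled by an off-the-shelf SBOM tool and tracked alongside the security artifacts.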

Call for Papers

Submit short papers (2-4 pages) on humanoid robot security for our lightning presentation session. We welcome original research, position papers, and work-in-progress reports.

Important Dates
Paper Submission: September 1, 2025
Notification: September 15, 2025
Workshop Date: October 2, 2025

Submit your papers to:

research@aliasrobotics.com

Include "CS4R-Humanoids Submission" in subject line

Coordinates

Where

Humanoids 2025
COEX Convention & Exhibition Center
513 Yeongdong-daero, Gangnam District
Seoul, Korea

When

Humanoids 2025 conference: September 30 – October 2, 2025
Workshop: October 2, 2025, 14:00–17:00
2025humanoids.org

Primary Contact

victor@aliasrobotics.com
Dr. Víctor Mayoral-Vilches

Interested in humanoid robot security?
Join us at CS4R Part 3


F.A.Q.

Why do humanoid robots pose unique security challenges?

Humanoid robots present unique security challenges due to their human-like form factor and interaction capabilities. They operate in close proximity to humans, have complex sensor arrays, and can physically manipulate their environment. This makes security breaches potentially more dangerous than in traditional industrial robots, as compromised humanoids could cause physical harm, violate privacy, or be used for surveillance and infiltration.

What are a humanoid robot's attack surfaces?

Humanoid robots have multiple attack surfaces, including hardware components (sensors, actuators, processing units), firmware (low-level control systems), middleware (ROS/ROS 2, DDS), network interfaces (WiFi, Bluetooth, cellular), and AI/ML models (vision, speech, decision-making). Each layer presents unique vulnerabilities that attackers can exploit to compromise the robot's behavior or steal sensitive data.
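One way to make that layered view actionable, sketched below with purely illustrative checks (not an authoritative methodology), is to maintain a per-layer checklist during a security assessment:

```python
# Toy checklist that mirrors the attack-surface layers listed above.
# The example checks are illustrative placeholders, not an exhaustive audit.
ATTACK_SURFACE = {
    "hardware":   ["debug ports exposed?", "sensor spoofing feasible?"],
    "firmware":   ["signed update chain?", "rollback protection?"],
    "middleware": ["SROS 2 security enforced?", "topic access controls defined?"],
    "network":    ["WiFi/Bluetooth/cellular interfaces inventoried?", "open services?"],
    "ai_models":  ["adversarial-input robustness tested?", "model integrity verified?"],
}

def report(findings):
    """Print the open checks per layer, a crude per-layer exposure signal."""
    for layer, checks in findings.items():
        print(f"{layer}: {len(checks)} open checks")
        for check in checks:
            print(f"  - {check}")

report(ATTACK_SURFACE)
```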

What does "humanoids as attack vectors" mean?

This concept explores how compromised humanoid robots can be weaponized to conduct cyber-physical attacks. A hijacked humanoid could serve as a mobile surveillance platform, physically access restricted areas, manipulate objects to cause damage, or socially engineer humans through trusted interactions. The workshop demonstrates real scenarios where AI-enabled humanoids act as Trojan horses in secure environments.

Who is this workshop for?

This workshop is designed for robotics researchers, security professionals, system integrators, and humanoid robot developers. Whether you're building humanoid platforms, deploying them in real-world scenarios, or researching their security implications, you'll gain valuable insights into securing these complex systems and understanding their potential misuse.

What is Cybersecurity AI (CAI)?

Cybersecurity AI (CAI) refers to AI-powered tools and frameworks that can autonomously discover, exploit, and defend against security vulnerabilities. In humanoids, CAI can be used defensively to harden systems and detect intrusions, or offensively to automate attacks. The workshop explores both aspects, showing how CAI dramatically amplifies both defensive capabilities and attack sophistication in humanoid robots.

Does the workshop include live demonstrations?

Yes, the workshop includes live demonstrations of both defensive hardening techniques and offensive attack scenarios. Participants will see real vulnerabilities being exploited and mitigated in humanoid systems. These practical demonstrations complement the theoretical presentations, providing a comprehensive understanding of humanoid robot security challenges and solutions.