Internal Cognitive Assurance Model for Autonomous Robotic Systems (ICAMARS)



Robotic systems are increasingly taking over or augmenting human roles, as seen in the growing number of autonomous vehicle, ride service, and aerial and maritime vehicle companies. Because these systems are deployed in essentially unbounded environments, they are more susceptible to adversarial attacks. Artificial intelligence is driving this progression of autonomy, as witnessed in self-driving cars, drones, and deep-sea and space exploration. The increased level of autonomy introduces new security exposures that differ from conventional ones. As the Robot Operating System (ROS) has become a de facto standard for many robotic systems, the security of ROS is an important consideration for deployed systems. The original ROS implementation was not designed to mitigate the security risks posed by hostile actors. This shortcoming is addressed in the next generation of ROS, ROS 2, which leverages the Data Distribution Service (DDS) for its messaging architecture and the DDS security extensions to protect data in motion. However, ROS 2 security covers only a subset of the overall system and does not address the new security considerations required for autonomous robotic systems. As a result, many questions emerge, which can be categorized into performance tradeoffs, vulnerability analysis, and determining whether trust metrics and solutions exist. Investigating a number of these questions, the results advocate for a holistic approach. Our focus is therefore a holistic assessment of system trust, which requires incorporating system-, hardware-, software-, cognitive-robustness-, and supplier-level trust metrics into a unified model of trust. While there are extensive writings on various aspects of robotic systems, such as risk management, safety, and security assurance, each source covers only a subset of an overall system and does not consistently incorporate the relevant costs in its metrics.
This study was motivated by the need for a holistic security architecture for autonomous systems. In this research, we define trust metrics for each layer of an autonomous robotic architecture. The resulting internal assurance model uses a Bayesian network to score each subsystem based on its security-enabled features; the network then determines the internal trust of an autonomous robotic system before that trust is extended to an external entity. While this model supports static assessment, our future work extends the base approach to a dynamic operational model in which the defender's kill chain is introduced. With a dynamic Bayesian network, mitigation strategies can then be applied to reduce security risks and provide platform resiliency.
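The layered trust scoring described above can be illustrated with a minimal sketch. The layer names, prior probabilities, and the conditional table below are hypothetical placeholders, not values from the thesis; a real model would elicit its conditional probability table from security experts and security-feature assessments.

```python
# Minimal Bayesian-network trust sketch. All layer names and
# probabilities are illustrative assumptions, not thesis values.
from itertools import product

# Prior probability that each layer is "trusted", based on its
# security-enabled features (hypothetical values).
layer_trust = {
    "hardware": 0.95,   # e.g. secure boot, TPM present
    "software": 0.90,   # e.g. signed packages, SROS2 enabled
    "cognitive": 0.85,  # e.g. adversarial-input detection
    "supplier": 0.80,   # e.g. vetted supply chain
}

def p_system_given(states):
    """P(system trusted | layer states): here simply the fraction of
    trusted layers, standing in for an expert-elicited CPT."""
    return sum(states) / len(states)

def internal_trust(layers):
    """Marginalize over all layer states to get P(system trusted)."""
    names = list(layers)
    total = 0.0
    for states in product([0, 1], repeat=len(names)):
        p_states = 1.0
        for name, s in zip(names, states):
            p = layers[name]
            p_states *= p if s else (1 - p)
        total += p_states * p_system_given(states)
    return total

print(f"Internal trust score: {internal_trust(layer_trust):.3f}")
# → Internal trust score: 0.875
```

Because the stand-in conditional table is linear in the layer states, the score reduces to the mean of the layer priors; a richer table (e.g. a noisy-AND that heavily penalizes any untrusted layer) would change that behavior, which is exactly the flexibility a Bayesian network provides.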

Identifier
  • etd-3666
Defense date
  • 2020
Date created
  • 2020-05-04

