Project Aim

To improve the trustworthiness of autonomous decision-making for mobile robots by ensuring that they can recognise and resolve the operational and legal ambiguities that commonly arise in the real world. This will be achieved through the paradigm of human-like computing (HLC), using meta-interpretive learning (MIL).
The human-like decision-making will be encoded in three ways (sketched in code after this list):

A. By design, from operational and legal experts, in the form of initial logical rules (background knowledge);
B. Through passive learning of new logical representations and rules from human override interventions when the robot is not behaving as expected; and
C. Through recognising ambiguities before they arise and actively learning rules to resolve them with human assistance.
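
As a minimal, hypothetical illustration of routes A-C (not the project's actual framework; every rule name, situation key, and function below is invented for exposition), a toy Python rule engine might look like the following: expert-authored rules supply the background knowledge, a human override is stored as a new rule that takes precedence, and a zero-or-conflicting-conclusions check surfaces ambiguity for human resolution.

# Hypothetical sketch only: a toy rule engine illustrating routes A-C.
# All names are invented for illustration; the real project would learn
# and generalise rules via meta-interpretive learning, not memorisation.

from typing import Callable

Situation = dict[str, object]
Rule = tuple[str, Callable[[Situation], bool], str]  # (name, condition, action)

# (A) Background knowledge: initial logical rules authored by experts.
background: list[Rule] = [
    ("give_way_to_manned_vessel",
     lambda s: bool(s.get("manned_vessel_nearby")), "give_way"),
    ("hold_position_at_crime_scene",
     lambda s: bool(s.get("inside_cordon")), "hold_position"),
]
learned: list[Rule] = []  # rules acquired from human overrides

def decide(situation: Situation) -> str:
    """Return an action; raise if the rules are silent or conflicting."""
    for _, condition, action in learned:  # learned rules take precedence
        if condition(situation):
            return action
    actions = {a for _, condition, a in background if condition(situation)}
    if len(actions) != 1:
        # (C) Zero or conflicting conclusions: an ambiguity to be
        # resolved with human assistance before the robot acts.
        raise LookupError(f"ambiguous actions: {sorted(actions)}")
    return actions.pop()

def learn_from_override(situation: Situation, human_action: str) -> None:
    """(B) Passive learning: memorise the overridden situation as a rule.
    (A real system would generalise it rather than store it verbatim.)"""
    snapshot = dict(situation)
    learned.append(("override_%d" % len(learned),
                    lambda s, snap=snapshot: s == snap, human_action))

# Both expert rules fire at once, so the robot defers to a human,
# then retains the operator's resolution for next time.
scene: Situation = {"manned_vessel_nearby": True, "inside_cordon": True}
try:
    decide(scene)
except LookupError:
    learn_from_override(scene, "hold_position")
print(decide(scene))  # -> hold_position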

A general trustworthy robotic framework will be developed to incorporate the new approach; as a case study, however, we will focus on autonomous aquatic applications, e.g., an autonomous "robot boat" for underwater crime scene investigation and emergency response with the Metropolitan Police.

Publications

Selected recent publications