Counterfactual Assessment and Valuation for Awareness Architecture (CAVAA)

Oxford's CAVAA team members, Dr Alberto Giubilini and Dr Cristina Voinea

Funding

European Commission, EIC 101071178; Co-PI: Julian Savulescu

Project Dates

October 2022 - October 2026

Oxford Project Manager

Alberto Giubilini

Oxford Researcher

Cristina Voinea

Full CAVAA Project

https://cavaa.eu/

Project Description

Robotics and artificial intelligence technologies are becoming increasingly advanced, and some researchers hope to build robots or AI systems that are aware of the world around them. The team at the Uehiro Oxford Institute will examine ethical issues raised by AI awareness, including questions of privacy and value alignment. In collaboration with CAVAA partners at Uppsala University and Sorbonne University, the Oxford team will investigate human judgments about privacy and other values, which may then inform policy recommendations for the design, construction, and regulation of AI systems.

Examining the ethical issues raised by aware AI involves several components. In some of our work, we examine normative and philosophical questions about what it means for an AI to be aware, whether AI systems can infringe our privacy, and what it might take for AI to be aligned with our norms and values. In other work, we investigate the relational psychology of human interaction with AI, as well as human preferences about how AI systems should treat the information they learn about humans. Finally, in order to make reasonable assessments of the risks posed by AI, as well as reasonable suggestions for how to design ethical AI, we learn from researchers who are designing and building current generations of AI architectures and social robots.

Full details of the project and our collaborating partners can be found on the CAVAA website.

Outputs

UOI Publications

2025

Register, C., (2025), 'Individuating artificial moral patients', Philosophical Studies, Vol: online first

2024

Giubilini, A., Voinea, C., Porsdam Mann, S., Earp, B. D. and Savulescu, J., (2024), 'Know Thyself, Improve Thyself: Personalized LLMs for Self-Knowledge and Moral Enhancement', Science and Engineering Ethics, Vol: 30(6): 54 [PMC11582191]

Levy, N., (2024), 'Consciousness ain't all that', Neuroethics, Vol: 17(21)

UOI blogs/media

Blog post 'Friend AI: Personal Enhancement or Uninvited Company?' by Chris Register (8 October 2024)

Workshops and talks

In the spring of 2025, we hosted a workshop on 'Privacy, Awareness, and Alignment in AI', with participants from Oxford, Google DeepMind, Sorbonne, Uppsala, Cambridge, and Sheffield.