Web-based and other digital mental health initiatives are advancing rapidly, outpacing legal regulation. This project aims to provide new understanding of the rights issues affecting users and subjects of digital mental health technologies, and develop a nuanced set of principles to guide legal frameworks. The project will focus on the use of artificial intelligence (AI), machine learning and other algorithmic technologies in the mental health context. Examples include:
‘Digital phenotyping’, in which machine learning is used to analyse physiological and biometric data gathered by smartphones;
‘Mental health apps’, of which there are reportedly more than 10,000;
‘Digital pills’, which combine pharmaceuticals with sensor and tracking technology; and
Mental health-based monitoring and surveillance of students in schools and universities.
As with other areas of technological innovation, these developments need to be governed in ways for which there may be no precedents.
This project aims to improve responsible public governance of web-based algorithmic systems in the mental health context. It will do so by charting the expansion of these technologies and asking how they can be used responsibly, when they should be permitted, and when they should be discouraged or even forbidden. Emphasis will be placed on the knowledge of the groups most affected, particularly people who have experienced mental health crises or psychosocial disability. The project aims to clarify major legal and policy issues for civil society, as well as for policymakers, legislators, and the judiciary. The findings will also be directed to mental health professionals and technologists developing extra-judicial regulation, such as professional guidelines and industry standards.
Dr Piers Gooding (Co-ordinator)
Mr Timothy Kariotis (Research Assistant)
The project is supported by the Mozilla Foundation.