On Tuesday 26th April, Shalaleh Rismani gave an online webinar for Forum 42 titled ‘How can an AI system “break” ethically and socially?’. Ms Rismani is a PhD student in Electrical and Computer Engineering at McGill University in Canada and executive director of the Open Roboethics Institute (ORI).
SFC1 (Year 12) student Jamie attended the talk and wrote the following:
“Ms Rismani took us through an introduction to Artificial Intelligence (AI) principles and ethical issues in the field. She emphasised that the aim of AI is to benefit humanity in a safe and equitable manner, for which a risk assessment programme is crucial to identify potential hazards. In the talk she explained the thought process behind a holistic AI risk assessment, which includes supporting people’s wellbeing, respecting human rights, and ensuring the transparency, security and accountability of AI so that its impact is as positive as originally intended.
Another idea she emphasised was the ambiguity of AI failures in a social context. The example she used was COMPAS, a criminal risk-scoring algorithm that plays a role in sentencing decisions. Given its enormous impact on defendants’ lives, controversy arose when the system appeared to be biased against people of colour. She suggested that this reflects an urgent need for assessments that characterise, identify and mitigate AI failures in social settings, as well as for involving a more inclusive community in the process of designing AI.
She then discussed existing challenges in the field, such as the difficulty of achieving impartiality across different social norms and the tension between human autonomy and the system itself. It was fascinating to explore the overlap between computational science and sociology in this broader view of AI applications.
Finally, she introduced her research project, which focuses on a system for assessing social failures. The key components she outlined were identifying the relevant social norms and the impacts on stakeholders within each subsystem. She employed social failure mode analysis to examine possible outcomes and their likelihood of occurrence. The project has yet to fully address the diversity of social norms, but hopefully in the foreseeable future it can be used to detect AI failures as such systems are refined.”
Jamie, SFC1 (Year 12)