Michael Storozhev’s session ‘The Hidden Risks of Data Science and Machine Learning’ explored the issues living beneath the surface of modern artificial intelligence (AI).
By speaking to these issues, he both raised awareness and identified a potential role for actuaries in ameliorating some of these downsides. Some of Michael’s comments concerned the ethical application of AI.
While this topic has been covered elsewhere, it served as a good reminder that:
- If the underlying data carries bias, model outputs often will too; and
- There is significant risk in models that lack transparency, or the ability to explain ‘why’ a particular prediction has been made.
Michael also pointed to Chris Dolman’s recent article on the need to focus on how we want a decision to be made, rather than treating AI bias as a risk to be managed. More broadly, concerns about the use of AI are topical, especially given the European Union’s (EU) recently proposed regulation of such models.
Perhaps more striking was Michael’s exploration of the environmental and social impacts of AI. On the environmental side:
- Electricity use for training and applying AI models has increased significantly over the past decade. OpenAI’s GPT-3, one of the largest natural language processing models, consumed energy equivalent to 300 trans-American flights while being trained. Part of this reflects a natural limit of model-building: achieving linear performance improvements requires exponential increases in data and training.
- The supply chain around the technology needed for AI is also a concern. Although it is hard to distinguish AI-specific impacts from those of the broader technology sector, the mining of the specific elements needed for modern computation, combined with growing amounts of e-waste, poses a significant environmental challenge. Michael spoke of the 8km-wide ‘dead lake’ of acidic sludge in Mongolia created from the waste products of rare earth mining in that country.
Some social impacts are also largely hidden. While we tend to picture the technology sector as highly paid coders in Silicon Valley, creating labelled training data for models has often required extensive use of low-paid workers performing incredibly repetitive tasks. Amazon’s Mechanical Turk service allows such workers to be deployed at the click of a button, yet a 2018 research paper found that the median wage for these workers was just $2 per hour. And social media companies, such as Facebook, pay contractors to filter obscene content, work that often leads to psychological harm.
Technological advancement is not all doom and gloom, but Michael’s presentation was a useful reminder of the downsides that we do not always see. Food for thought.
CPD: Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.