Reducing the carbon footprint of Artificial Intelligence

A person using ChatGPT on their mobile phone

Artificial intelligence (AI) has surged to the forefront of public attention in recent years, with disruptive technologies such as ChatGPT having the potential to change the way we interact with the world.

At a fundamental level, AI systems rely on machine learning algorithms – computer programs that can be trained to improve their performance over time. The age of big data has brought a huge increase in available training material, so machine learning algorithms can now be developed for applications as diverse as voice and image recognition, self-driving cars, judicial and medical decision-support systems, and computer games, among many others. However, the emergence of such a powerful technology inevitably raises important questions.

“There are unresolved ethical and legal concerns related to data privacy, model biases, as well as the accountability of intelligent systems that decide and act autonomously,” says Daniel Berrar, a data scientist at the Open University. “Balancing the opportunities and risks of AI is one of the major societal challenges of the 21st century.”

However, one crucial aspect of modern AI has received only scant attention to date: sustainability. One dimension of sustainability is the environmental impact of computer-based activities: every element, from the production of the complex electronics that make up a motherboard to the processing power needed to send an email, has an associated CO2 footprint. AI applications can be particularly resource-intensive, and the rapid expansion of the technology has led to a sudden rise in the complexity of computational systems. Consequently, we are now in a period of record demand for computational resources, which often come with a heavy environmental price tag. A worrying effect of the widespread deployment of large-scale AI systems is the considerable carbon footprint that these systems have already begun to produce. For example, the CO2 emissions from training a single large deep neural network for natural language processing can be substantially higher than the emissions of a car over its entire lifetime. However, “the efficient usage of computational resources is only one dimension of sustainable machine learning,” says Berrar.
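
To make that comparison concrete, the back-of-the-envelope sketch below (in Python) estimates the emissions of a single training run from assumed hardware power, training time, data-centre overhead, and grid carbon intensity. Every number is hypothetical and chosen only to illustrate the shape of the calculation, not to describe any particular system.

    # Back-of-the-envelope estimate of the CO2 emissions of one large training run.
    # Every figure below is an assumption chosen for illustration, not a measurement.

    gpu_count = 512            # assumed number of accelerators
    gpu_power_kw = 0.4         # assumed average draw per accelerator, in kW
    training_hours = 24 * 30   # assumed one-month training run
    pue = 1.5                  # assumed data-centre power usage effectiveness
    grid_kg_co2_per_kwh = 0.4  # assumed carbon intensity of the electricity grid

    energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
    emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

    print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
    print(f"Estimated emissions:  {emissions_tonnes:,.1f} tonnes of CO2")

With these illustrative figures the single run comes out at several tens of tonnes of CO2, which is why the emissions of one large training run can plausibly exceed those of a car over its lifetime.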

Supported by the Open University's Open Societal Challenges programme, the TASMAL project aims to investigate further dimensions of sustainable machine learning, such as model robustness, maintainability, interpretability, and the reproducibility of computational results. Although sustainability has begun to gain some attention in the AI community, the main thrust of research and development still focuses primarily on performance optimisation. The TASMAL project is expected to provide new insights that support the development of machine learning algorithms which do not prioritise performance at the expense of other relevant dimensions. “One very important, but perhaps underrated, aspect is the reproducibility of computational results in the machine learning life cycle,” says Berrar. “Reproducibility is a cornerstone of good science, and it is also a crucial element of sustainable machine learning. Artificial intelligence has become ubiquitous, with a tremendous impact on science, economy, healthcare, and society at large. The long-term vision of TASMAL is to make a significant contribution to a greater sustainability of AI.”
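
As a small illustration of what reproducibility can mean in practice, the sketch below shows one widely used convention: fixing the random seeds and writing the software environment to disk next to the results, so a computational experiment can be rerun under the same conditions. It is a generic example, not a description of the TASMAL project's own methods.

    # A minimal sketch of one common reproducibility practice: fixing random seeds
    # and recording the software environment alongside the results. This is a
    # generic illustration, not a description of the TASMAL project's methods.

    import json
    import platform
    import random

    import numpy as np

    SEED = 42  # arbitrary fixed seed so the run can be repeated exactly

    random.seed(SEED)
    np.random.seed(SEED)

    # Record the conditions under which the results were produced.
    run_record = {
        "seed": SEED,
        "python_version": platform.python_version(),
        "numpy_version": np.__version__,
    }

    with open("run_record.json", "w") as f:
        json.dump(run_record, f, indent=2)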