
The Workshop’s Idea

Artificial intelligence has not one but several sustainability problems. In terms of social sustainability, corporations use huge amounts of data to train their models, most often without the consent or copyright permission of the owners of that data, and without proper screening for biases. In terms of environmental sustainability, they use vast amounts of natural resources, such as fresh water for cooling data centers and energy for training their models. In terms of economic sustainability, AI corporations have created a hype around new AI-based tools, aiming to disrupt industries and uproot people’s economic stability, all while being funded by high-risk investments without a clear path towards profitability, and often with disappointing results in terms of reliable technology. Even on a political level, their covert lobbying efforts to influence legislation in the European Union and the United States have raised concerns about the future political sustainability of this economic and technological system.


There is a growing body of work on every aspect of these unsustainable practices within the field of artificial intelligence. However, we contend that a dedicated critical perspective on unsustainable AI is still not visible enough to raise societal awareness of the issue, to have an impact within the industry itself, or to lead to more informed academic debates on the matter. The backdrop against which AI ethics is done, i.e., the material, social, economic, and political conditions under which AI is developed and deployed, is often taken for granted.

This workshop aims to bring together a variety of voices within the field of artificial intelligence, broadly conceived, including AI ethicists and philosophers, AI safety researchers, engineers, social and political scientists, and others, to grow and sustain a critical perspective on the sustainability of AI. We intend to tackle a range of open questions, for example:

  • Just how unsustainable are current AI practices, and how can and should their impact be conceptualized?

  • How can we reliably distinguish AI hype from genuine technological progress?

  • How should we view the state of the industry and its entanglement with democratic (and undemocratic) systems of government? 

  • What should we make of the opaque and ambiguous progress reports and the doomsday scenarios about the advent of AGI put forward by leading CEOs of AI companies?

  • What measures ought to be taken to curb the impact the AI industry has on the planet, on human flourishing, and on questions of justice?
