Interview with Andrés Colmenares, director of the Master in Design for Responsible AI

03 May 2024
Elisava
Andrés Colmenares during a conference at Disseny Hub Barcelona. Source: Elisava.

For more than a decade, Andrés Colmenares has been working to help citizens and organisations make better decisions through anticipation, participation, and collaborative learning tools. At the same time, he researches, networks and curates conferences on the socio-ecological impacts of digital technologies and infrastructures. As a strategist, curator and creative consultant, he has led projects for numerous international organisations, is co-director of The Billion Seconds Institute and directs Elisava’s Master in Design for Responsible AI.

The European Parliament has recently passed an Act on Artificial Intelligence (AI) that, broadly speaking, guarantees safety and respect for fundamental rights while promoting innovation. Among other things, this regulation limits the use of biometric identification systems by security forces and reinforces consumers’ rights. At first glance, it looks like good news, especially for those most critical of the growing presence of AI in our daily lives… What do you think?

The fact that Europe has decided to go through the extensive and bureaucratic process of creating and approving a new law to deal with the impacts of the massive adoption of artificial intelligence systems is good news for EU citizens, governments and, to a large extent, for all types of companies and organisations that use these systems, since these systems and their implications are quite cross-cutting. But it is also true that, as several human rights organisations have reported, the law falls short on human rights, an area that is fundamental for society. For example, it allows high-risk uses such as predictive and biometric mass surveillance systems that affect vulnerable populations such as refugees and undocumented people, and it creates legal loopholes and exceptions under the umbrella of national security. The law also leaves out the serious and growing environmental impacts of these systems.

«There are many interests at stake and large corporations have created an intense lobby to ensure that regulatory aspects of the law do not prevent them from continuing to take advantage of legal loopholes»

In my opinion, covering all current and potential uses of artificial intelligence systems in a single law is not only an impossible task but also a problematic one, because Artificial Intelligence does not exist. It is a term used interchangeably to refer to connected but different things: technologies, sectors, applications, ideologies, research fields, and computing techniques. It exists in the collective imagination as an existential threat to humanity itself and, at the same time, as an advanced and powerfully disruptive technology. In practice, it ranges from systems used for border control and surveillance to those used to generate images, recommend videos, or decide on public benefits. There are many interests at stake, and large corporations have created an intense lobby to ensure that the regulatory aspects of the law do not prevent them from continuing to take advantage of legal loopholes from which many benefit at the cost of violating rights or exploiting the most vulnerable.

The new regulation also forbids the indiscriminate capture of facial images from the internet or surveillance camera recordings to create facial recognition databases, as well as the identification of emotions in the workplace and schools, or predictive policing. These are topics that seem almost science fiction, and that we have seen many times in movies, but are already a reality, aren’t they?

It is worth clarifying that the new law does not forbid these dangerous uses outright; it only imposes restrictions, and it also opens the door to exceptional uses. Many citizens indeed recognise these uses of advanced technologies from science fiction films, which almost always adapt the work of science fiction writers who in many cases try to reflect social problems and abuses of power, anticipate risks, or, in some cases, promote ideologies. We live in a society dominated by the idea that progress is determined by technological innovation, and there is a tendency to celebrate its materialisation in products, services, or experiences.

I find it interesting how the media covers every launch by corporations such as Tesla, Apple, or Amazon as if they were matters of public interest. But the relationship between science fiction films and the imaginary that surrounds artificial intelligence is by no means an anecdotal or secondary aspect. (Films are not the same as books: adapting a literary work for the screen inevitably reduces the plural and critical interpretation of the ideas it reflects.) They are part of the same system, feeding back on each other: science fiction inspires the technology sector, and the technology sector inspires science fiction. It is essential to understand this dialogue in order to demystify artificial intelligence and to understand the power relations and ideologies that converge in its most dangerous implementations, in terms of both human rights and environmental and social impact.

«I find it interesting how the media covers every launch of corporations such as Tesla, Apple, or Amazon as if they were matters of public interest.»

Another interesting novelty in the new act is that artificial or manipulated images, audio, and video content will have to be clearly labelled as such. Such content will also have to respect the EU’s copyright legislation, something that until now has remained in a grey area and has created conflicts for creators, including designers…

Over the last year, one of the most popular uses of artificial intelligence systems has been generative text, image, or sound tools based on so-called large language models, which operate on huge volumes of data. The techniques for capturing, processing, and subsequently using such data, which can include images, texts, and other copyrighted creations, were in many cases illegal, or at best in a legal grey area, even before the new law. But this has not stopped the corporations behind these popular tools (ChatGPT, DALL-E, among others) from doing so, and they continue to do so.

The new law strengthens and specifies requirements on both copyright and data collection and use, as well as the transparency and technical documentation obligations these companies must meet, but the philosophy of these corporations, mostly based outside the EU, is not exactly to comply with the law. The scale of their capital allows them to pay for law firms, teams of lobbyists, public relations and, at worst, fines, while those affected, including independent creators, have insufficient means to enforce their rights or sue offenders.

AI for Biodiversity by Nidia Dias & Google DeepMind. Source: betterimagesofai.org.

At Elisava you are the director of the Master in Design for Responsible AI, which promotes different aspects linked to AI, such as creative research, critical thinking, and strategic decision-making. The master’s is committed to researching how AI affects our daily lives, which involves technological, ethical, and sustainability aspects, among others. How do you approach such an interesting but complex topic in the master’s degree?

We approach it by understanding and exploring artificial intelligence as eco-socio-technological systems, framed in a context determined by the climate emergency, in order to develop the skills and critical thinking necessary to demystify the idea of “Artificial Intelligence” as a first step towards making responsible decisions. In other words, we literally visit and study the places where these systems interact with the territory and the environment: the data centres. With the help of investigative journalists and academic and artistic researchers, we analyse and build maps of the infrastructures that support these systems and their relationships with economic, social, and legal systems.

We analyse with experts the tangible and intangible implications and impacts of these systems and study their link to centuries-old systems of power, with the decolonial, anti-racist, ancestral, plural, and inclusive perspectives necessary to imagine alternative narratives, methodologies, and other critical design tools that help companies and governments innovate responsibly. For this reason, we have teachers and collaborators from multiple disciplines, cultures, and identities. All of this is threaded together with a philosophy of collective, supportive, and horizontal learning in which there is no hierarchy between students and teachers. We invite participants to position themselves as creative researchers from the first day of the programme, and we focus on developing skills rather than on the consumption of information.

I find the interconnection between AI and the climate crisis especially interesting. What connections exist between the two fields, and how can AI benefit the environment and the planet’s survival, if that is possible?

The most critical relationship, meaning the one with the most implications, between the climate emergency and the ecosystem built around artificial intelligence can, according to Mel Hogan and other promoters of the emerging field of Critical Data Center Studies, be summarised with five elements: energy, earth, water, work, and heat. If we visualise AI systems as a tree, its roots would be the thousands of data centres where the data that makes today’s applications of artificial intelligence real is stored and, above all, computed.

«The explosive demand for AI systems is triggering the need to build more and bigger data centres, increasing resource consumption and exploitation, as well as emissions»

These data centres consume a lot of electricity, so the energy needed to keep them running significantly increases the industry’s carbon emissions. Then there is the growing demand for the rare minerals needed to manufacture the chips and other hardware components used to process data. These infrastructures also use significant volumes of water to cool the servers: we cannot forget that computing is a physical process that generates a lot of heat. And within this set of elements we must not forget the precarious workers across the whole chain: in mines, in data labelling, in content moderation, in the server facilities themselves, and even delivery workers controlled by algorithms that only seek to optimise corporate profits.

The explosive demand for AI systems is triggering the need to build more and bigger data centres, increasing resource consumption and exploitation, as well as emissions. We are told not to fly, not to use cars, and to reduce meat consumption to cope with the emergency, but little is said about the impact of the digital economy. There are also several initiatives that seek to use this computational capacity to address the many challenges of the climate crisis, but in the end they are only analytical tools that frame the climate crisis, wrongly in my opinion, as a problem that technology can solve.

Andrés Colmenares gives a talk on IAM. Source: friendsoffriends.com.

Recently, from Elisava’s Master in Design for Responsible AI, you organised the series Critical Futures Talks, a set of hybrid events, interviews, and podcasts that brought together opinions and points of view from the master’s teachers, collaborators, and international guests, who reflected on responsible AI, media, and design in the context of the current climate emergency. What would you say are the main conclusions of the series?

Our main intention with this series is to start conversations about the topics we are studying so that they go beyond the classroom and the school because of their social and cultural relevance, and that more people reflect on the relationship we as a society have with technologies and the ways of thinking and existing that emerge from this relationship. Hundreds of people in dozens of countries and from different sectors registered to participate in person or via live stream, demonstrating the interest in these issues.

In several sessions we talked about topics that are not normally related to AI or the climate emergency, such as our relationship with time, the idea of futures, the importance of ancestral knowledge, or the crisis of sociological imagination that affects humanity. Through the presentations and discussions, we saw that there is growing critical reflection within companies and among design professionals about the way digital technologies are created and used. We see more and more professionals putting their values before the salaries offered by large corporations based on extractivism. In conclusion, there are signs that invite us to believe that things can change if enough people and organisations put their minds to it.

Even if we take the most pessimistic view of how the emergence of AI will transform the professional world, some experts claim that three fields will survive its impact: artificial intelligence itself, energy, and the biosciences. Do you agree?

I do not share the now-generalised narrative of a duality between humans and machines, reflected in the attempt to present “Artificial Intelligence” as something inevitable or desirable that only a few will be able to survive, or as a kind of natural phenomenon that has to happen. It is precisely this ideological dimension that worries me the most, because it is invisible and is being planted deep in the collective unconscious.

We are seeing the massive adoption of all kinds of solutions based on artificial intelligence that maximise the automation of all kinds of processes, which in many cases leaves behind certain professionals and workers, often the most precarious ones, deepening inequality. As a society we must ask ourselves whether it is more important or desirable to live in a more automated and efficient but more unequal and unfair world, or whether we prefer a fairer world, reducing all kinds of inequalities and defending human rights, even if in some cases this means having to go slower.

«As a society we have to ask ourselves whether it is desirable to live in a more automated and efficient, but more unequal world, or whether we prefer a fairer world»

Despite these concerns, substantiated or not, do you believe there are reasons to be optimistic about the current context of change?

For sure, hope defines us as human beings, and under no circumstances should we give up on humanity. But being optimistic is not enough. We have to be critical optimists: we must learn to reconcile the idea that tomorrow can be better than yesterday with questioning who defines, and how we define, what is better, and for whom that tomorrow is better, while being aware of our privileges and positions of power so that we can fight on behalf of those who do not have the luxury of stopping to make this reflection. In the words of the pedagogue Paulo Freire, what defines us as humans is our capacity to transform our reality, and a large part of this capacity is being able to imagine that tomorrow can be better. That is why we exist: to be human.

Plant by Alan Warburton. Source: betterimagesofai.org.