Only two of the attendees, myself included, were engineers from industry and NOT from Europe/US, or whatever the "Western World" means. This became more noticeable once the discussions started, and it stirred up many thoughts in my mind!
Before The Dinner
During one coffee break, I chatted with a Western European graduate in his early twenties who sat beside me. He had finished his bachelor's in Engineering, but he wanted to take a different path and do something about "AI Safety". The fascination with Effective Altruism (EA) and the Future of Humanity Institute in Oxford (which was later shut down in 2024) was obvious in his statements as well as in the stickers on his laptop. He said he was interested in questions of AI risk, sentience, alignment, etc., and wanted to pitch to the organizers the idea of building an "AI Safety" community at the university and in the region.
I asked him whether he was aware of the current issues with building AI systems, including data theft and the exploitation of workers in low-income (aka over-exploited) countries to fuel the models with reliable annotated data. He said, "Oh, I didn't know that".
He thought OpenAI had some advanced automated technique for improving its models (a common misconception, propagated by neglecting the humans powering these systems). Then he pivoted back to his favorite topics, like AI alignment, as if the current issues were irrelevant.
One might say he was just a young, enthusiastic graduate with limited knowledge who simply wanted to belong to a hyped-up community. But this sort of disconnect became even more obvious on a larger scale during the dinner discussions, where the other researchers didn't seem interested in the short-term impact of AI systems on the environment and on humans. All that seemed to matter were the shiny long-term theories about human extinction, sentience, alignment, etc.
At The Dinner
At the dinner table we were clustered into two groups according to food preferences. Each group shared a big platter with a variety of food served on injera (a traditional flatbread). I ended up in a group with the other engineer from industry, the young graduate, and two philosophers.
The young graduate started to explain his plans to form a community, organize meetups, and invite speakers. He started touching on his topics of interest, like:
- What do you think about the scenario of existential risk or extinction?
- Do you think AI systems will be sentient soon? How do you imagine this?
- What if AI systems got aligned like this or that?
- What do you think about transhumanism?
- ….
The philosophers engaged with him, talking about possibilities in the next centuries.
I looked at them and asked: Why do you think this is the problem to focus your energy on now, when there are other current issues to be addressed concerning the environment, resources, and living humans?
The young guy just regurgitated arguments from longtermism about the possibility that reaching AGI would provide us with smarter solutions to all the big problems, like climate change. So it would be better to focus on AGI!
I kept challenging him, asking who decided that current humans were capable of choosing what would be better for people living a few centuries from now.
There was no real interest in the group in engaging with such questions. Even when I mentioned examples of human exploitation or data theft, it seemed no one wanted to be bothered by those distant people in the Global South, or even around the corner. All they wanted to talk about were theoretical long-term topics that wouldn't make them grapple with moral responsibility or the big questions of the present.
The guy proceeded to talk enthusiastically about AI alignment with the philosophers. When I had a chance to take part in the conversation, I said:
- But who decides what alignment is? We live in a world with different cultures and worldviews. Humans have never been fully aligned.
- Why do you assume that a few organizations with a lot of resources, and wealthy people, are the ones to decide which values get embedded in AI systems?
- And why do you assume there's a single set of global values that fits everyone?
The other engineer, who came from South Asia, also started to question this point, highlighting the diversity of cultures around the world!
There were a couple of responses, all revolving around the idea that AGI would find a way, or that maybe some customization would be necessary. But I noticed that whenever these questions came up, the group quickly pivoted to abstract topics.
After The Dinner
At this point I was left with the same impression I had gotten over and over from events, talks, articles, and discussions on tech and AI:
- The discourse is controlled by those who have the resources to develop the latest tech, with dominant US/Eurocentric views.
- Many people don’t care how AI systems are developed or about the underlying work that fuels them, as long as they can benefit from them (more money, consumption, published papers, etc.).
- There’s minimal regard for the exploitation of humans or the negative impact on the environment, especially when it is distant, in other regions whose exploitation has become acceptable fuel for progress in various industries, including but not limited to AI.
- For some privileged people, there’s a sort of feel-good activism, like forming a community around a hypothetical topic (e.g. extinction), rather than doing the actual work that would improve our current societies. It could be a form of escapism, detachment from reality, or a savior complex.
Another side thought was triggered by our surroundings. We sat in an Eritrean restaurant in Europe, evidence of the vibrancy that different cultures bring. But instead of aspiring to engage with other cultures, understand the differences, and learn the impact of tech on them, people wanted to talk about abstract issues and think about alignment delusions.
Was This a Big Surprise?
Not really!
Even before 2023, I was aware of the state of AI Ethics or Responsible AI at the majority of corporations. Some organizations were doing it for PR and greenwashing. Others focused on limited aspects that didn’t interfere with profitability objectives or make them confront big questions. Those who asked the questions and genuinely tried to do their jobs faced censorship, got fired, or had their teams dismantled.
So I already had my fair share of skepticism about the possibility of contributing to the field of Responsible AI through big tech or big-tech-funded institutions.
I also knew that the hype was impacting academia to a certain degree, so I was prepared for any type of discussion. Nevertheless, I kept looking for opportunities where people who study and engage with current challenges might be present. That’s why I attended the workshop, which did offer some learnings on other topics.
Two Years Later
Two years later, I still recall this dinner and my thoughts after it. I have seen the same patterns over and over in the AI discourse, in industry and academia: the disconnect from reality, the avoidance of current issues, and the weight put on less critical hypothetical questions.
I have made peace with the fact that there will be very few spaces where people genuinely care about the current issues related to AI. And I have to accept that I will be consistently disappointed in my quest to find these communities!