Reflections on the ITU's AI for Good summit 2024
My personal observations and thoughts on the conference.
This week I attended the ITU’s AI for Good summit in Geneva and would like to share a few observations and thoughts on the conference.
Firstly, the ‘vibe’ and a few general observations about the conference as a whole. Since it was quite big – approximately 30k participants, several stages, many workshops and company fair stalls – I only got to participate in a thin sliver of the program. Still, here are my overall thoughts:
Futurism: The focus was clearly on the future. The conference itself started with quite a futuristic artistic performance (Fusion Augmenta), future generations were mentioned in almost every speech, and the connection to SDGs (Sustainable Development Goals to be achieved by 2030 set by the UN) was omnipresent.
Futurism was also very palpable in the company fair stalls which presented new high-tech products (often quite half-baked) from robotic dogs to smart prosthetics. Some of them seemed like early prototypes but a few were already operational. They illustrated quite well the breadth of potential AI use cases as they were targeted at solving a variety of real world issues – for example weather forecasting, the mobility of people with disabilities, and emergency response in remote locations.
Robotics was actually the main focus of most stalls. This makes sense as giving a body to the artificial mind is definitely one of the hot topics. Walking around, I realised that I really dislike the uncanny valley humanoid robots (like the Saudi Sara), which look pretty strange and artificial. Instead, I can imagine my future self interacting with something like an i-Roomba on steroids but more cutesy. There was a really cute robot called Buddy – definitely worth checking out even though its capabilities are still limited.
AI Solutions to Human Problems: The overall conference outlook on AI was extremely positive (sometimes it even felt a bit naive). It was called AI for Good after all and this was definitely reflected in the visual (very colourful) style as well as in the content. In her opening speech, the ITU’s Secretary-General Doreen Bogdan-Martin emphasised the opportunities stemming from AI and many optimistic stories of how AI could salvage our lack of progress in achieving many of the SDGs. The company fair then offered solutions for sectors ranging from health care, through education, to climate and energy.
While searching for ways in which the power of AI can be harnessed for all of humanity is definitely great, the risks associated with AI and AI safety more broadly were mostly paid lip service to without real in-depth conversations on this topic. I found that quite disappointing since harnessing the power of AI for good seems counterintuitive without ensuring its safety first. The discussions that mentioned AI risks then often focused on the more short-term risks, like bias or the spread of disinformation, and largely omitted the medium and long-term risks stemming from misalignment and other issues.
[DISCLAIMER: Of course, I was not able to attend all the talks and I might have missed some of these conversations… I definitely missed the one panel with Safety in its title – The critical conversation on AI safety and risk – which took place one day before the official summit dates.)
China (not) in the spotlight: The intense Chinese interest in digital technologies and AI in particular was very obvious. Many of the largest and most visible fair stalls belonged to Chinese companies after all. Chinese government and industry representatives participated in many of the main panels and round tables, including a keynote by the President of China Mobile Communications Group on how AI will bridge new divides. While China did not manage to get their preferred candidate to the ITU’s leadership, its efforts to bring the Chinese perspective to the table seem to be unrelenting. Unfortunately, no Chinese representatives participated in the workshop on Generative AI and Regulation, which was a shame.
Now a few thoughts from the specific sessions that I attended. There was a long workshop on generative AI and regulation on Thursday which was quite insightful. Unfortunately, its international perspective was quite limited – more than half of the workshop was dedicated solely to the EU and Asia was essentially omitted (including Japan, South Korea, India, Russia, and even China). I also attended a workshop on open-source AI for transforming digital public services on Friday which identified some of the issues that I encountered during my ICT for Development classes during my Master’s degree at LSE.
My main takeaways from the sessions:
The regulation workshop included a short introduction into the technical questions connected to AI. After that, the focus lay on the regulation side entirely. This is purely an organisational point but I think that the separation of the policy and the technical side is not a good strategy. Policy largely depends on the technical solutions that are available, demanded, and even practically possible. Instead, I think that the workshop should have intertwined the two to connect the knowledge of the technical leaders with legal experts. I would definitely appreciate more events connecting people with technical background and people with policy background to avoid siloing in policy-making.
Adversarial attacks should be discussed more in the governance circles as they pose a real challenge to model security. Moreover, they will definitely have a huge impact on the policy enforceability as even though the models might pass the testing, new adversarial attacks might jailbreak them. It seems to me that this issue should be more present in the policy debates.
Regulators mostly have to choose between omnibus regulation and sectoral regulation. The EU chose omnibus, which will likely make the EU AI Act quite heavy-handed and less efficient. The UK is going in the sectoral direction and the next (most-likely Labour) government will probably stick with it so we might be soon able to compare these two approaches better. Still, the actionability and enforceability of the EU AI Act was criticised by several legal experts, highlighting uncertainties especially around the need for explainability of models.
While AI models might be a revolutionary tool for educating people in their native language, it really matters in what specific way the information is transmitted. There was a demo of a chatbot that should help farmers upgrade their agricultural knowledge. It was able to answer the farmer’s questions about how to take care of wheat in a given country in the native language, which was quite cool. However, what was less cool was the fact that this communication was written and in a highly formal (almost academic) language. As a person with two university degrees, I sometimes struggle understanding some legal texts in my native tongue so I am quite pessimistic about the application of this chatbot in rural contexts in the hands of people who rarely experienced more than a few years of primary school (even after ensuring that they are able to use the technology itself). The actual design of technologies for development truly matters and the focus should be more on the end-users!
Overall, I am very glad that I attended this summit. It sparked some new questions and I enjoyed the broad variety of topics discussed. While the main sessions were quite generic, I really enjoyed the workshops which offered more deep-dive discussions and technical content. If you get an opportunity to attend next year, I recommend you to try it out. Also, Geneva is quite a nice city for a short visit! 😊