A response to the UNESCO call for an ethical AI framework

Published: 2020-04-23

I want to start this blog post with a disclaimer. This post is a response to this UNESCO article. It might seem that I am criticising UNESCO directly; that is not my intention. UNESCO is a good independent body doing important work. I'm also well aware that the post I will be discussing is not a whitepaper, so its scope is completely different. Still, I think several points of information are missing from it. I hope to expand on and contribute to this conversation, both by proposing more specific issues I think the post should have addressed, and by highlighting points where I take issue with the picture it paints, and why. I hope you will see this post as an attempt to add to the conversation, not detract from it.

A call for an ethical framework

First, the generally positive notes. I am glad that UNESCO is making this call. One of the things they mention is that AI lacks a unified effort to create an ethical framework, and that stricter coordination is needed to make a good attempt. The post writes:

Many actors—businesses, research centres, science academies, United Nations Member States, international organizations and civil society associations—are calling for an ethical framework for AI development. While there is a growing understanding of the issues, related initiatives need more robust coordination. This issue is global, and reflection on it must take place at the global level to avoid a ‘pick-and-choose’ approach to ethics. Furthermore, an inclusive, global approach, with the participation of United Nations funds, agencies and programmes, is required if we are to find ways of harnessing AI for sustainable development.

This is, hopefully obviously, correct. These issues are so pressing and AI's applications so diverse that a united effort is required. I also think that UNESCO is a very well positioned body to play a central role in this, as they highlight themselves:

UNESCO will be a full and active participant in this global conversation. Our organization has many years of experience in the ethics of science and technology.

More specifically, I am glad that an independent body, one that is specifically not a tech company or industry group, is making these moves. I think UNESCO will be able to pay more attention to the human side of AI, a perspective which is sorely lacking in my opinion, but that is something for another post.

Another point that I am very happy to see included is the focus on demographics that are traditionally excluded:

UNESCO priorities must also guide our international action in this area. It is essential to ensure that Africa fully participates in transformations related to AI, not only as a beneficiary but also upstream, contributing directly to its development. In terms of gender equality, we must fight against the biases in our societies to guarantee that they are not reproduced in AI applications.

At this point, the impact of AI systems on people of colour has been well documented, and I wholeheartedly believe the only way to solve this (and it goes without saying that we should, if AI is to have any right to exist in our society) is to include more of the people who are being excluded. This will by no means be sufficient, but it is necessary, so I am happy to see this clause.


As a young data scientist, I am equal parts excited and sceptical of AI. I think it has enormous potential, but not in the way that it is commonly talked about. This is where I want to start to highlight some of the places where I think the call could have gone further. Again, this is not meant as an attack on UNESCO but an attempt to contribute.


My first point of criticism is that the UNESCO post has, in my opinion, fallen prey to the technology hype train without critical examination. For example:

Education is already being profoundly transformed by AI. Very soon, the tools of education—the way we learn, access knowledge and train teachers—will no longer be the same.

But will it really? Because I don't think that it will, or at the very least it shouldn't. I am not an educational scholar, but as far as I can see, the fundamental form of formal education has changed remarkably little since the industrial revolution. Subjects, modes of communication and ideas about education may have changed and will continue to change, but in a more abstract sense, it is still remarkably similar. We are still presented with information about a wealth of subjects, and then we are usually asked to reproduce that information or apply it in very controlled settings.

If we look at the last educational revolution that I am aware of, the internet, we can see that education has indeed changed somewhat in its delivery. Classes are taught more online, information is more readily available, and the scope of education has shifted as a result, so education has definitely changed shape. But is it fundamentally different from what it was before? I don't think so.

The same is true for AI. It has great potential to change the scope and mode of communication of education, but I think it is dangerous to assume that once AI enters the picture, nothing of what came before still applies. AI is not the messiah; it is simply another tool. If anything, COVID-19 has done more to change education than AI has. I know that tech companies are all too happy to promise golden mountains, but I hope that UNESCO can exercise a bit more caution with claims like these in the future.

Yes, AI has the potential to radically change the way education is delivered, but equally, it will not change its essence, and those at the vanguard of tech have a well-documented history of arrogantly assuming they can revolutionise fields they know very little about.

Nor does UNESCO offer any specifics on how AI is set to fundamentally change education. Again, I know this is not a whitepaper, but it is this sort of unexamined "assumption of impact" that can lead people to try to reinvent the wheel, taking away vital resources that the current system so desperately needs. That is something I would love to avoid, which is why I think we ought to be careful with the kind of language UNESCO uses in their post.

Lack of specifics

Ethics is complicated. While I am not an ethics scholar, I do take a significant interest in it. From my perspective, one of the biggest sources of difficulty is the dissonance between the general and the specific. Do you do what is good because it is good, regardless of outcomes, or do the ends justify the means? You can find unintended consequences in almost any ethical framework. All of this is to say that doing ethics comes down to specifics. Consider, for example, the trolley problem, an old thought experiment that has entered mainstream consciousness now that self-driving technology is coming closer than ever to the consumer. The trolley problem is comprised almost entirely of specifics.

This is where I wish the article had been a little more specific. I understand that you can't make specific promises in articles like these, but some specifics aren't hard to include without going into a ton of detail. For example, when they write:

It is essential to ensure that Africa fully participates in transformations related to AI, not only as a beneficiary but also upstream, contributing directly to its development.

While this is true, it gives no further information. For example, at which level should we work to include Africa? Do we incentivise states, academic institutions, private corporations or even individuals to contribute to AI? Should Africa be more involved with all AI endeavours, or just the ones that directly affect it?

It's a start, but one that has been made before

These questions probably stood no chance of being fully answered in this article, but I think it would have been good to mention them. Even if the answers would be as vague as "all" or "yes", it helps to state that explicitly.

I think it is especially important to address these things more explicitly because many of the problems mentioned have solutions that will involve far more than AI. For example:

Can freedom of action be guaranteed when our desires are anticipated and guided? How can we ensure that social and cultural stereotypes are not replicated in AI programming, notably when it comes to gender discrimination? Can these circuits be duplicated? Can values be programmed, and by whom? How can we ensure accountability when decisions and actions are fully automated?

To address any of these questions, much more will have to change than just the AI applications themselves. It will involve cultivating a more nuanced understanding of discrimination among AI developers, as well as changes to education, law and culture.

While UNESCO is a very good forum to house the main debates, these issues are too diverse and far-reaching to be decided in those debates alone. Therefore I think it is critical to give as much concrete information as possible to the people this article is addressing. The fact is that an article like this represents an implicit invitation to contribute to the discourse, and as such, I think it is important to, in some way, state which issues are going to be addressed.

It is true that, at the time of writing, the UNESCO think-tank is set to finish its first meeting tomorrow, so I expect that the discussion will continue and that UNESCO will soon follow up with more specifics on the issues to be addressed and proposals for potential solutions. Until then, this call for an ethical framework is just one more among droves of other calls.

#UNESCO, #AI, #ethics