Artificial intelligence (AI) is neither good, bad, nor neutral. Like any revolutionary new technology, it is the source of many innovations intended to make our lives easier. However, it can also have a negative impact on our lives if developed carelessly or for malicious purposes.
At Yassir, AI underpins a great part of our future development. In the future, every feature you use, from requesting a driver to predicting how long an order takes to prepare, will be powered by some form of machine intelligence. Thus, making sure that we are building a system that a) works for all our users and b) does no harm is crucial. We’d like to walk you through some of the guiding principles we are considering to make our AI safer.
How can AI go wrong?
We have all heard AI horror stories: data privacy scandals, or biases in facial recognition algorithms, which turn out to be much less accurate on the faces of dark-skinned women than on those of light-skinned men. Most of the time, these failures are caused by what are called algorithmic biases, which can severely impair the usefulness of AI.
A (negative) bias is a distortion in the functioning of an algorithm that systematically disadvantages a group of individuals identified by “sensitive” attributes such as gender, ethnicity, etc.
Biases are usually unintentional. We are not aware of them, yet they can lead to grim unintended consequences. Imagine you are a fintech company that extends loans to users after computing a credit score for each of them with an AI algorithm. Given the large unbanked population in Africa, there is a risk that the algorithm extends loans to people who are unable to pay them back, even when the data suggests they can (the so-called fintech debt trap).
Biases mainly come from the data used to build a model and/or from the design of the model itself. They result from the following situations:
- The data are poorly collected and do not reflect reality; certain groups tend to be misrepresented (see the sketch after this list).
- The data are correctly collected but contain structural biases. For example, in historical employment data, certain jobs are systematically attributed to men rather than women.
- The data needed to develop a classification model in supervised learning are poorly labeled by humans.
- Algorithms are not neutral by design: they reflect the values of those who build them. Human bias manifests itself in the choice of the data types integrated into the model, the transformations applied to these data, the weight attributed to them, etc.
- The decision-making process is semi-automatic: the final decision is up to a human, who can misinterpret the results produced by the algorithm and thereby introduce biases.
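To make the first failure mode concrete, here is a minimal sketch of how a team might check whether a collected dataset misrepresents a group relative to a reference population. The data, group labels and census baseline below are hypothetical placeholders, not anything from Yassir’s actual pipelines:

```python
# Minimal sketch: does the collected data under-represent a group
# relative to a reference population? All numbers are hypothetical.
import pandas as pd

# Hypothetical sample of loan applications, with a sensitive attribute.
data = pd.DataFrame({
    "gender":   ["F", "M", "M", "M", "F", "M", "M", "M", "F", "M"],
    "approved": [0, 1, 1, 0, 1, 1, 0, 1, 0, 1],
})

# Share of each group in the collected data vs. an assumed census baseline.
sample_share = data["gender"].value_counts(normalize=True)
population_share = pd.Series({"F": 0.50, "M": 0.50})  # assumed baseline

representation_gap = (sample_share - population_share).abs()
print(representation_gap)
# A large gap for any group signals that the data collection process
# misrepresents that group and should be revisited before modeling.
```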
All of this is even more challenging in an African context. For services used on the continent, users are perhaps less tech-savvy, relevant data is typically sparse and/or of poor quality, pre-trained models are usually trained on non-African datasets, and only a few Africans get to design and interpret those algorithms. So if you design AI for Africa, the risk of embedding biases is higher.
If we want to prevent those biases and maximize the benefits of AI, especially for companies offering services to large swathes of everyday users in developing countries (like Yassir), it is important to confront our blind spots head-on and to adopt ethics by design. It is far more effective to consider ethics at the start of an AI project, as it is difficult to retrofit ethics onto a system once it is implemented, not to mention the damage done when algorithms derail.
What is an ethical AI?
An ethical AI is an AI developed with respect for fundamental rights, according to the following principles:
- Respect for autonomy: Humans interacting with AI systems must be able to maintain full and effective self-determination.
- Prevention of harm: AI systems should not harm humans.
- Fairness: not only avoiding unfair biases, discrimination and stigmatization of certain individuals or groups, but also implementing procedures that allow people to challenge decisions taken by AI and to appeal against those decisions.
- Explainability: AI systems should be transparent, and decisions made using these systems should be explainable.
Few frameworks exist today to guide the development of ethical AI. For instance, the European Union is keen to stand out in the field by making ethics an essential element of cutting-edge AI development, defining a framework in the form of its Ethics Guidelines for Trustworthy AI, which rest on the following elements:
- Compliance with applicable laws and regulations — lawful
- Adherence to ethical principles and values — ethical
- Technical and social robustness, so that AI does not cause unintended harm — robust
We believe that adopting/adapting an existing framework makes more sense than reinventing the wheel.
How we build ethical AI at Yassir
Several techniques have been developed to measure and prevent algorithmic biases; however, techniques alone are not enough. One also needs an ethical compass that provides transparency, mobilizes teams and gives guidance when introducing new AI systems.
Building on existing frameworks, we aim to integrate ethics into the development and deployment of any new AI system by adopting a few key principles:
- Define an AI Ethics Charter for all AI projects.
- Before starting an AI project, assess the risks and the impact related to the algorithm you want to deploy by involving the technical actors, the people directly affected by the algorithm and those who use it. This risk assessment should continue throughout the life cycle of the AI system.
- Ensure diversity in the people involved in the development and implementation of the algorithm.
- Set clear objectives regarding the biases you want to reduce or eliminate, within the applicable legal context.
- Identify metrics (with the technical teams) that allow these biases to be measured (a minimal example follows this list).
- Ensure transparency: it is important to know which elements are decisive in decision-making, especially when things go wrong. This limits the use of hard-to-interpret algorithms such as neural networks, or of third-party algorithms whose intellectual property is protected.
- Ensure the robustness of algorithms, e.g., by verifying that they react adequately when the input data is slightly modified. Given two similar situations, the algorithm must produce similar decisions; this ensures that every user is treated equally, especially in the public sector (see the robustness sketch after this list).
- Monitor the algorithms and re-train them regularly on new data, taking care not to amplify biases through a feedback loop (the decisions made with an algorithm affect the new data, which are themselves used to train the next version of that algorithm).
- Audit algorithms, even though there are not yet clear rules as to how algorithms should be audited.
- Work with public and private stakeholders to develop and adopt industry standards.
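As an illustration of the kind of bias metric we have in mind, here is a minimal sketch computing the demographic parity difference, one common fairness metric: the gap in positive-decision rates between groups. The decisions and group labels below are hypothetical placeholders, not outputs of an actual Yassir model:

```python
# Minimal sketch of one possible fairness metric: the demographic
# parity difference, i.e. the gap in positive-decision rates between
# groups identified by a sensitive attribute. Data is hypothetical.
import numpy as np

def demographic_parity_difference(decisions: np.ndarray,
                                  sensitive: np.ndarray) -> float:
    """Absolute gap in positive-decision rates across groups."""
    rates = [decisions[sensitive == g].mean() for g in np.unique(sensitive)]
    return float(max(rates) - min(rates))

# Hypothetical loan decisions (1 = approved) and group membership.
decisions = np.array([1, 0, 1, 1, 1, 1, 1, 1, 0, 0])
sensitive = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])

gap = demographic_parity_difference(decisions, sensitive)
print(f"Demographic parity difference: {gap:.2f}")
# A team would agree up front on an acceptable threshold for this gap
# and track it throughout the life cycle of the AI system.
```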
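And here is a minimal sketch of the robustness check described above: perturb an input slightly and verify that the model’s score barely moves. The scoring function is a hypothetical stand-in for a trained model, not an actual production system:

```python
# Minimal sketch of a robustness check: similar inputs should get
# similar scores. The scoring function is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(0)

def score(x: np.ndarray) -> float:
    # Stand-in for a trained model's score on a single input.
    weights = np.array([0.4, -0.2, 0.7])
    return float(1 / (1 + np.exp(-x @ weights)))

def robustness_check(x: np.ndarray, eps: float = 0.01,
                     trials: int = 100, tol: float = 0.05) -> bool:
    """True if small input perturbations keep the score within tol."""
    base = score(x)
    for _ in range(trials):
        noisy = x + rng.normal(scale=eps, size=x.shape)
        if abs(score(noisy) - base) > tol:
            return False  # two near-identical cases diverged too much
    return True

x = np.array([0.5, 1.2, -0.3])
print("robust:", robustness_check(x))
```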
What’s next
As technology continues to reshape the future of the African continent, we see enormous opportunities (and risks) in what AI can offer to Africans.
It is beyond doubt that we need a mix of regulatory interventions, robust technology and world-class talent in order to maximize social benefits. However, we also need to consider ethics.
Unfortunately, little precedent exists when it comes to providing safe AI services to millions of Africans. This needs to change and as a company we are committed to doing more towards this goal all while sharing our progress.
As an industry, we need a Hippocratic oath for African AI startups (call it the Al-Khawarizmic oath?), a compass that helps us steer our decisions in the right direction, reduce potential harm and prepare the next generation of AI specialists.
Many believe in the importance of ethics, and at Yassir we are convinced that it is at the core of any sustainable AI/technology offering to come, especially for Africa.