How can we combine ethics and AI in the training sector?

François Debois, Head of Innovation & Design at Cegos Group

Algorithms have no limits. So it’s up to humans to watch over ethics when AI is at work, and to lay down the principles that will protect learners’ interests. Here are our antidotes to the three major risks that AI entails in training.

AI has enormous potential in training. It can accommodate trainees’ learning preferences to produce ultra-customised training experiences and take over low-added-value tasks, freeing trainers to focus on high-value ones. But we mustn’t be naively optimistic: AI also carries risks.

Algorithms have no ethics

Only the humans who design algorithms can foresee their effects and regulate them. A researcher or engineer who lacks proper perspective can create a machine that does wacky things. An example?

Amazon added AI to its recruitment process in 2014, hoping to find the most qualified profile for each vacancy. A year later, it realised that the system was screening women out of a number of positions. Why? Because the AI had been trained on a database containing more CVs from qualified men than from qualified women, and had concluded that male candidates were somehow preferable. Amazon stopped using AI to recruit people in 2018.
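
To see the mechanism at work, here is a deliberately tiny Python sketch, with made-up data and a naive frequency score rather than Amazon’s actual system: a model “trained” only on past hires ends up penalising any wording that was rare among them.

```python
# Minimal sketch of how historical bias leaks into a model.
# Hypothetical data and scoring -- NOT Amazon's actual system.
from collections import Counter

# Historical "hired" CVs: mostly male profiles, so tokens that
# correlate with women (e.g. "women's chess club") are rare.
hired_cvs = [
    "software engineer men's rugby team",
    "software engineer chess club",
    "data analyst men's rugby team",
    "data analyst chess club",
    "software engineer women's chess club",  # only 1 of 5
]

# "Training": learn how often each token appeared among past hires.
token_counts = Counter(tok for cv in hired_cvs for tok in cv.split())
total = sum(token_counts.values())

def score(cv: str) -> float:
    """Average historical frequency of the CV's tokens."""
    tokens = cv.split()
    return sum(token_counts[t] / total for t in tokens) / len(tokens)

# Two equally qualified candidates; the model prefers the one
# whose wording matches the (male-dominated) hiring history.
print(score("software engineer men's rugby team"))    # higher
print(score("software engineer women's rugby team"))  # lower: "women's" is rare
```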

Imagine this scenario: learners with new types of profiles (young generations entering the job market, for instance) are discarded because your AI system doesn’t have enough data about profiles like theirs.

The growing awareness of the need for regulation

In his general policy statement on 4 July 2017, French Prime Minister Édouard Philippe mentioned the need to provide a framework around AI, involving:

  • Algorithm transparency
  • The creation of an international group of AI experts to organise an independent expert study worldwide
  • The creation of education programmes that place transparency and loyalty at the core of training

In a report entitled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published in February 2018, 26 AI specialists made recommendations for avoiding or containing the main risks.

The 3 major risks of using AI in training

During a day of exchanges dedicated to AI organised by the Innovation by Cegos Cluster, we tried to identify the potential risks of using AI in training by exploring plausible disaster scenarios.

We also tried to formulate principles to manage the three major risks:

  • Confinement
  • Deception
  • Uncontrolled dissemination of data

1- The risk of confinement

Disaster scenario: imagine an AI system that only allowed you to learn about specific topics, because they are the most relevant ones to you or the ones that best match your learning preferences.

Or an AI that decides you can’t take a number of courses because you lack the required qualifications (“You are allowed to watch Teletubbies episodes, nothing more advanced!”) or because your physiological or emotional state isn’t right (“You didn’t get enough sleep last night, so you are unfit to learn this morning”, “I sense your stress levels are excessively high, so I am going to pause the course here and resume tomorrow”).

The principle of access to training: anyone can access any training content in the AI system (unless the content could put the learner in danger).

The principle of human supervision whenever AI is used: learners must be able to contact another human being for information about their training experience and path.

2- The risk of deception

Disaster scenario: imagine all trainers are replaced by AI tutors, and you ultimately realise that your “tutor” is taking a soulless, production-line approach and has no intention of helping you make real progress.

The principle of transparency: an obligation to state whether the tutor is an AI system or a human being.

Disaster scenario: imagine you fail an important exam, but cannot find out what you got wrong and why you didn’t make the cut.

The principle of algorithm explainability: learners must be able to understand why they got the score they got (how it was calculated, according to what rules, etc.) or the mechanisms underlying any decision concerning them.
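
As an illustration, here is a minimal Python sketch of an explainable score, using hypothetical criteria and rules: the point is simply that the final mark comes with the line-by-line breakdown that produced it.

```python
# A minimal sketch of an "explainable" exam score, assuming a simple
# points-based rubric (hypothetical criteria and rules).
from dataclasses import dataclass

@dataclass
class ScoredCriterion:
    name: str
    points: float      # points obtained
    max_points: float  # points available
    rule: str          # the rule that was applied

def explain_score(criteria: list[ScoredCriterion]) -> str:
    """Return the final score together with a line-by-line breakdown,
    so the learner can see how each rule contributed."""
    lines = [
        f"- {c.name}: {c.points}/{c.max_points} ({c.rule})"
        for c in criteria
    ]
    total = sum(c.points for c in criteria)
    out_of = sum(c.max_points for c in criteria)
    return "\n".join(lines + [f"Total: {total}/{out_of}"])

print(explain_score([
    ScoredCriterion("Quiz", 12, 15, "1 point per correct answer"),
    ScoredCriterion("Case study", 6, 10, "graded against the rubric"),
    ScoredCriterion("Participation", 4, 5, "1 point per completed module"),
]))
```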

Disaster scenario: imagine a content-curating system that feeds you fake news (generated by an AI system specialised in making fake news appear as plausible as possible).

The principle of information truthfulness: as part of the content-curating process, inform the learner how the algorithm sources content and whether or not the content has been approved by an expert (and allow the learner to contact the expert).
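
One simple way to support this principle is to attach provenance metadata to every curated item. The sketch below uses hypothetical field names and example values; it implies no specific curation tool.

```python
# A minimal sketch of provenance metadata attached to curated content.
# Field names and values are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CuratedItem:
    title: str
    source_url: str               # where the algorithm found the content
    selected_by: str              # why the algorithm picked it
    expert_approved: bool         # has a human expert validated it?
    expert_contact: Optional[str] # how to reach that expert, if any

item = CuratedItem(
    title="Giving feedback remotely",
    source_url="https://example.com/feedback-article",
    selected_by="similarity to your last three completed courses",
    expert_approved=True,
    expert_contact="l&d-experts@example.com",
)

# Show the learner where the content came from and who vouches for it.
print(f"'{item.title}' was selected because of: {item.selected_by}")
print(f"Source: {item.source_url}")
if item.expert_approved:
    print(f"Approved by an expert -- contact: {item.expert_contact}")
else:
    print("Not yet reviewed by an expert.")
```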

3- The risk of uncontrolled dissemination of data

Disaster scenario: imagine all your learning-related information is broadcast around your company, along with your learning preferences (with a view to enabling everyone to adapt accordingly).

Imagine that a hiccup early on in your career stays with you for the rest of your professional life.

The principle of consent to data collection: accommodate different learning methods while protecting confidential personal data (see the sketch after this list):

  • Anonymise statistics
  • Distinguish public-interest data (which learners can choose not to provide) from mandatory data that will be shared
  • Allow learners to withdraw (the sensors may say everything is okay, but you have the right to refuse) so as to renegotiate goals, chat with a human being, etc.
  • Provide a right to data deletion (a “right to be forgotten” mechanism)
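
Here is a minimal Python sketch of what consent-aware learner data could look like, with hypothetical field names: optional data is stored only with explicit consent, the identifier is pseudonymous, and one call erases everything. A real system would also need to meet legal requirements such as the GDPR.

```python
# A minimal sketch of consent-aware learner data; field names are
# hypothetical and real systems must also satisfy GDPR requirements.
from dataclasses import dataclass, field

@dataclass
class LearnerRecord:
    learner_id: str
    mandatory: dict = field(default_factory=dict)  # always stored (e.g. completion)
    optional: dict = field(default_factory=dict)   # stored only with consent
    consent_optional: bool = False                 # learner's explicit choice

    def collect(self, key: str, value, optional: bool = True) -> None:
        """Store optional data only if the learner has consented."""
        if optional and not self.consent_optional:
            return  # no consent, no collection
        (self.optional if optional else self.mandatory)[key] = value

    def erase(self) -> None:
        """Right to be forgotten: wipe everything tied to this learner."""
        self.mandatory.clear()
        self.optional.clear()

record = LearnerRecord("anon-42")              # pseudonymous identifier
record.collect("course_completed", True, optional=False)
record.collect("stress_level", "high")         # dropped: no consent given
print(record.optional)                         # {}
record.erase()                                 # exercise the right to deletion
```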

This article is part of a series on AI and we hope it has helped you prepare to negotiate this major shift.

These posts are drafted by human beings and reviewed by the following expert committee: Fabienne Bouchut, François Debois and Jonathan Tronchet, whom we thank for giving shape to our thoughts.

Written by

François Debois

François used to share his thoughts in this space when he worked at Cegos.