An Open Standard for Ethical Enterprise-Grade AI

Zelros AI
4 min read · Jan 21, 2019


Just like the invention of the wheel, the printing press, or the computer, Artificial Intelligence (AI) will radically reshape the way enterprises work.

This revolution is so profound that every industry will be impacted, without exception: transportation, e-commerce, education, healthcare, energy, insurance, …

The path to organization-wide AI adoption

The most efficient organizations of tomorrow will delegate a large part of their processes to intelligent decision-making algorithms.

By this we mean algorithms based not only on simple hand-crafted if/then/else business rules, but on complex machine-learned decision rules that no human brain can read.

We also mean algorithms based not on a few variables, but on hundreds of them.

Let’s now imagine thousands of such machine learning algorithms, deployed at scale and running 100% of the processes of an entire company: invoicing, customer support, purchasing, emailing, hiring, … If you think about it, this raises major questions about ethics, fairness and trust in the usage of these AIs.

Let’s take only a few examples:

  • how can we verify that algorithm #2834, which selects candidates for job interviews, does not use prohibited special categories of personal data such as political opinions or ethnic origin?
  • are we sure that algorithm #7382, which recommends employee salary increases, was not trained on a biased or discriminatory dataset (e.g. one in which men are paid more than women)?
  • am I able to explain to my client how algorithm #3918 calculates the price of their quote?
  • how accurate is algorithm #5842, which prioritizes customer requests?
  • and so on …

In fact, clear answers to these questions quickly become a necessity if organizations are to widely adopt trustworthy AIs.

The need for an ethical AI standard

At Zelros, we work with many machine learning algorithms; they are at the heart of our product. We are regularly challenged by our customers (insurers in our case, but the same would be true in any industry) about what is inside them, and even about our right to use them.

It quickly became necessary for us to have a tool that transparently bridges the gap between the code of our data scientists and the legitimate questions of the business owners using our AI-first product.

This necessity led us to design a framework contributing to a more explainable and transparent AI.

Publishing our standard for ethical AI at enterprise scale

Today, we are announcing the release of our standard for the transparent use of machine learning algorithms in enterprises.

We decided to publish it on our GitHub, because we know it is only an imperfect preliminary version: it needs to be reviewed by external contributors to improve. By making our standard public, we hope it will gain visibility and attract as much feedback as possible. We would love to learn:

  • how often can the standard be applied?
  • in which cases is the standard not suitable?
  • what is missing from the standard?

A seven-section, evolving standard

The standard we propose takes the form of a checklist that should be filled in for every machine learning model embedded in a product or a service. We believe that completing this document for each AI model in production contributes to their traceability, compliance and transparency.

The standard has seven sections. The full details of each are explained on GitHub, but here is a summary:

1. General Information

This section gives general information about the context of the machine learning algorithm’s usage: who designed it, when it was trained, … and most importantly, its main purpose.

2. Initial Data

This section documents the initial data used to train the model: size, content (description of the variables, …).

What’s more, a unique signature of the dataset is provided in the report, so that the authenticity of a dataset can be verified during an audit. A unique signature is also computed after each data manipulation of sections 3 and 4 below.
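
For illustration, such a signature could be computed like this in Python, assuming the dataset is held in a pandas dataframe (the `dataset_signature` helper is a sketch, not the standard’s prescribed implementation):

```python
# Sketch: fingerprint a dataset with SHA-256, so an auditor can
# recompute the digest later and confirm the data was not altered.
import hashlib
import pandas as pd

def dataset_signature(df: pd.DataFrame) -> str:
    # Sort the columns so the signature does not depend on column order.
    canonical = df.reindex(sorted(df.columns), axis=1).to_csv(index=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

train = pd.DataFrame({"age": [34, 51, 27], "premium": [420.0, 610.5, 305.0]})
print(dataset_signature(train))  # recompute during an audit and compare
```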

3. Data Preparation

This section describes whether any preliminary data manipulation was made before training the machine learning algorithm: any row or column removed, any missing value replaced, … It is important to have transparency on these preparation steps, to check whether, and how, the data has been altered.
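
To make these steps auditable, each transformation can append a human-readable entry to a log that is then copied into the report. Here is a sketch (the helpers and column handling are illustrative):

```python
# Sketch: each preparation step records what it changed.
import pandas as pd

prep_log = []

def drop_incomplete_rows(df: pd.DataFrame, column: str) -> pd.DataFrame:
    before = len(df)
    out = df.dropna(subset=[column])
    prep_log.append(f"removed {before - len(out)} rows with a missing '{column}'")
    return out

def impute_with_median(df: pd.DataFrame, column: str) -> pd.DataFrame:
    n_missing = int(df[column].isna().sum())
    out = df.fillna({column: df[column].median()})
    prep_log.append(f"replaced {n_missing} missing '{column}' values by the median")
    return out
```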

4. Feature Engineering

This section documents whether any hand-crafted variables have been created.

It also explains how the target — the phenomenon we want our algorithm to predict — has been constructed.

The dataset at the output of this section is the dataset used to train the machine learning algorithm.
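
As a sketch, for a hypothetical insurance churn use case (all column names below are made up), hand-crafted variables and the target could be constructed and documented in one auditable place:

```python
# Sketch: gather feature engineering and target construction together,
# so the report can point to a single function.
import pandas as pd

def engineer_features(df: pd.DataFrame, reference_date: pd.Timestamp) -> pd.DataFrame:
    out = df.copy()
    # Hand-crafted variable: days elapsed since the customer's last claim.
    out["days_since_last_claim"] = (reference_date - out["last_claim_date"]).dt.days
    # Target: did the customer churn within 12 months of the reference date?
    horizon = reference_date + pd.DateOffset(months=12)
    out["target"] = (out["churn_date"].notna() & (out["churn_date"] <= horizon)).astype(int)
    return out
```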

5. Training Data Audit

This section provides various statistical descriptions of the dataset used for training: mainly, how often each value of each variable appears, and the statistical distribution of the predicted phenomenon with respect to those values, …

This section is important to highlight potential biases or imbalances in the dataset.
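
A minimal sketch of such an audit, assuming a pandas dataframe with a binary target (the `gender` and `raise` columns are purely illustrative):

```python
# Sketch: for each categorical variable, report how often each value
# appears and the target rate per value, to surface imbalances.
import pandas as pd

def audit_training_data(df: pd.DataFrame, target: str) -> None:
    for col in df.select_dtypes(include=["object", "category"]).columns:
        print(f"--- {col} ---")
        print(df[col].value_counts(normalize=True).round(3))  # value frequencies
        print(df.groupby(col)[target].mean().round(3))        # target rate per value

data = pd.DataFrame({"gender": ["M", "F", "M", "F"], "raise": [1, 0, 1, 0]})
audit_training_data(data, "raise")  # a 100% vs 0% gap would stand out here
```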

6. Model Description

This section provides information about the type of algorithm used (logistic regression, random forest, neural network, …), its hyperparameters, the validation strategy, and its performance (accuracy, …).
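
For instance, this description could be captured as structured metadata along these lines (the fields and values are illustrative, not the standard’s exact schema):

```python
# Sketch: model description as machine-readable metadata.
model_description = {
    "algorithm": "random forest",
    "hyperparameters": {"n_estimators": 500, "max_depth": 8},
    "validation_strategy": "5-fold cross-validation",
    "performance": {"accuracy": 0.87},
}
```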

A unique signature of the trained model is also provided, to verify that the model running in production is indeed the one documented in its companion descriptive document.
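
Here is a minimal sketch of how such a model signature could work, assuming the model is serialized with pickle (the `model_signature` helper is illustrative):

```python
# Sketch: hash the serialized model, then recompute the digest on the
# production artifact and compare. Pickle bytes are not guaranteed to be
# identical across library versions, so hash a stored artifact.
import hashlib
import pickle

def model_signature(model) -> str:
    return hashlib.sha256(pickle.dumps(model)).hexdigest()
```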

7. Model Audit

Finally, this section provides insights into the trained machine learning model: which variables are the most important for prediction, how accuracy improves when more data is added, model stability over time, …
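
As one example of such an insight, variable importances can be read directly from many trained models. Here is a sketch with a scikit-learn random forest (the toy data is purely illustrative):

```python
# Sketch: report which variables weigh most in the model's predictions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

X = pd.DataFrame({"age": [25, 40, 33, 58, 46, 29], "tenure": [1, 9, 4, 20, 12, 2]})
y = [0, 1, 0, 1, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```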

How to contribute?

As mentioned before, we hope to get feedback on this standard, to make it even more robust and useful. For that, feel free to contribute directly on our GitHub.

We will keep you regularly updated on new ideas around this project. Stay tuned!
