Is Artificial Intelligence Biased, or Are We Making It So?


AI can help expose truths hidden inside messy data sets. Algorithms can help us better understand biases we haven’t yet isolated, and spot ethically questionable ripples in human data so we can check ourselves. Exposing human data to algorithms exposes bias. But the machines can’t do it on their own. Even unsupervised learning is semi-supervised in practice, since data scientists choose the training data that goes into the models. Wherever a human is the chooser, bias can be present. So how do we tackle such a beast? What training and policies should we introduce now, rather than waiting for some sort of disaster to happen?

Artificial Intelligence
Bias for Action
Bias Analysis
Machine Learning Algorithms
Machine Learning
Ethics and Compliance
Ethical Decision Making
Ethical Hacking
Artificial Intelligence Applications
Human Rights
Data Mining
Data Analysis
Human Factors
Human Interaction Design
Masarrat A Shah
13 months ago

4 answers


It’s possible AI may be the solution to, as well as the cause of, this problem. Researchers at IBM are working on automated bias-detection algorithms, trained to mimic the anti-bias processes humans use when making decisions, in order to mitigate our own inbuilt biases.
This includes evaluating the consistency with which we (or machines) make decisions. If a different solution is chosen for two different problems even though the fundamentals of each situation are similar, there may be bias for or against some of the non-fundamental variables. In human terms, this could emerge as racism, xenophobia, sexism or ageism.
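That consistency test can be sketched in a few lines: flip one non-fundamental attribute and see whether the model’s decision changes. Everything here (the toy `biased_model`, the attribute name `group_a`) is a hypothetical illustration of the idea, not IBM’s actual algorithm:

```python
def consistency_check(model, records, flip_attribute):
    """Flag records where flipping a non-fundamental (boolean) attribute
    changes the model's decision, suggesting possible bias."""
    flagged = []
    for record in records:
        original = model(record)
        # Copy the record with the protected attribute inverted
        flipped = dict(record, **{flip_attribute: not record[flip_attribute]})
        if model(flipped) != original:
            flagged.append(record)
    return flagged

# Toy model that (wrongly) takes a protected attribute into account
biased_model = lambda r: r["income"] > 50000 and not r["group_a"]

applicants = [
    {"income": 60000, "group_a": True},
    {"income": 60000, "group_a": False},
    {"income": 40000, "group_a": False},
]
flagged = consistency_check(biased_model, applicants, "group_a")
```

The first two applicants get flagged because only group membership, not income, separates their outcomes; the third is denied regardless of group, so it is not flagged.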
Here are three specific steps which organizations can take to minimize the risk of perpetuating societal biases:

First, look at the algorithms themselves and ensure that nothing about the way they are coded perpetuates bias. This is particularly necessary when AI is consistently making predictions that are out of step with reality.
Second, consider ways in which AI itself can help mitigate the risk of biased data – IBM’s bias-detection algorithms could play a part here.
Third, make sure our own house is in order – we can’t expect an AI algorithm trained on data that comes from society to be better than society, unless we’ve explicitly designed it to be.
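The first step, spotting predictions that are out of step with reality, can be approximated by comparing error rates across groups: a large gap suggests the model is systematically wrong for one group. A minimal sketch, with made-up labels and data:

```python
def error_rates_by_group(y_true, y_pred, groups):
    """Per-group misclassification rate. y_true and y_pred are parallel
    lists of labels; groups is a parallel list of group identifiers."""
    rates = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, groups) if gr == g]
        rates[g] = sum(t != p for t, p in pairs) / len(pairs)
    return rates

# Illustrative data: the model is perfect for group "a",
# and wrong on every member of group "b".
y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1]
groups = ["a", "a", "b", "b", "b", "b"]
rates = error_rates_by_group(y_true, y_pred, groups)
```

A gap like this (0.0 vs 1.0) would be a strong signal to audit both the training data and the model’s features before deployment.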

David Cottrell
13 months ago
David Cottrell Totally agree with you; very valid points. - Masarrat A 13 months ago

This is a very interesting question. AI can be biased (and it is in many use cases; just look at the marketing use cases around you). My view is that we are making it so. Until we reach singularity (which is far from reality, at least in the near future), all AI algorithms are based on the datasets we feed them. We feed the data and we configure the algorithms to behave a certain way, so the results are to be expected.

Hitesh Mathpal
13 months ago
True Hitesh, it’s all up to us how we feed it and how we want AI to behave, respecting everyone’s individuality without bias. - Masarrat A 13 months ago
Why do we need bots? To serve our purposes. We define the purpose and we train bots accordingly; it is a bias until bots start thinking differently. - Maya 13 months ago

Consider a Digital Twin. The model has to be created by engineers, who decide how and when the simulation acts like reality. In a second step, the Digital Twin is fed by continuous monitoring data coming from numerous sensors. The DT therefore embeds many human decisions, such as what the model should look like and where the sensors should be installed. So the results include a human bias.

Patrick Henz
13 months ago
As mentioned by David Cottrell, IBM is working on eradicating this human bias; one of the first areas they have tackled is human faces and ethnicities, where AI should not discriminate on the basis of race. - Masarrat A 13 months ago

Every reasonable model should have a validation stage. Incorporating bias detection at that stage can tell us whether the model is unbiased, and it can be reused for continuous validation along with random checks.
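One concrete bias check that can be bolted onto a validation stage is the "four-fifths" disparate-impact ratio, a common fairness heuristic: the positive-outcome rate of the worst-off group should be at least 80% of the best-off group’s. The threshold and data below are illustrative assumptions, not a prescription:

```python
def disparate_impact(decisions, groups, threshold=0.8):
    """Compute the ratio of positive-outcome rates between groups.
    decisions: parallel list of booleans; groups: parallel group labels.
    Returns (ratio, passes) where passes means ratio >= threshold."""
    rates = {}
    for g in set(groups):
        members = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= threshold

# Example validation batch: approvals for two groups.
# Group "a" is approved 3/4 of the time, group "b" only 1/4.
decisions = [True, True, False, True, False, False, True, False]
groups    = ["a",  "a",  "a",   "a",  "b",   "b",   "b",  "b"]
ratio, ok = disparate_impact(decisions, groups)
```

Here the ratio is about 0.33, well under the 0.8 threshold, so the check fails and the model would be sent back for review. Running this on every validation batch, plus random spot checks in production, matches the continuous-validation idea above.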

VGG-Consulting /Vesselin Gueorguiev
13 months ago
Yes, and for that validation we need to come up with policies and ideas, because at the end of the day we humans are unpredictable in every sphere of life. - Masarrat A 13 months ago
The policy should be to have no policies, but to train computer engineers to follow the process of validating their models and to build a culture of doing so! - VGG-Consulting 13 months ago
Agree - Masarrat A 13 months ago
