Is Artificial Intelligence Biased, or Are We Making It So?
AI can help expose the truth hidden inside messy data sets. Algorithms can help us better understand biases we haven't yet isolated, and spot ethically questionable patterns in human data so we can check ourselves. Exposing human data to algorithms exposes bias. But the machines can't do it on their own. Even unsupervised learning is semi-supervised in practice, because data scientists still choose the training data that goes into the models. Wherever a human is the chooser, bias can creep in. So how do we tackle such a beast? What training and policies should we introduce now, rather than waiting for some sort of disaster to happen?
It's possible that AI may be the solution to, as well as the cause of, this problem. Researchers at IBM are working on automated bias-detection algorithms, trained to mimic the anti-bias processes humans use when making decisions, in order to mitigate our own inbuilt biases.
This includes evaluating the consistency with which we (or machines) make decisions. If two problems with similar fundamentals receive different solutions, there may be bias for or against some of the non-fundamental variables. In human terms, this could emerge as racism, xenophobia, sexism or ageism.
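One way to make this consistency test concrete is a counterfactual check: flip a single non-fundamental attribute (such as gender) while holding the fundamentals fixed, and measure how often the model's decision changes. The model, feature names, and data below are illustrative assumptions of mine, not anything described in the original text; this is a minimal sketch of the idea, not IBM's actual method.

```python
# Counterfactual consistency check: does the decision survive flipping
# a non-fundamental attribute? (All names/data here are illustrative.)

def flip_attribute(record, attribute, values):
    """Return a copy of `record` with `attribute` switched to the other value."""
    flipped = dict(record)
    flipped[attribute] = values[1] if record[attribute] == values[0] else values[0]
    return flipped

def consistency_rate(model, records, attribute, values):
    """Fraction of records whose decision is unchanged when only the
    non-fundamental attribute is flipped. 1.0 = fully consistent."""
    unchanged = sum(
        1 for r in records
        if model(r) == model(flip_attribute(r, attribute, values))
    )
    return unchanged / len(records)

# A toy "model" that (wrongly) conditions on gender -- it should fail the check.
biased_model = lambda r: int(r["income"] > 50 and r["gender"] == "M")

applicants = [
    {"income": 60, "gender": "M"},
    {"income": 60, "gender": "F"},
    {"income": 40, "gender": "M"},
    {"income": 80, "gender": "F"},
]

rate = consistency_rate(biased_model, applicants, "gender", ("M", "F"))
# rate == 0.25: three of the four decisions change when gender flips,
# revealing that a non-fundamental variable is driving the outcome.
```

The same harness works for any protected attribute; a low consistency rate is the machine-scale analogue of the inconsistency the paragraph above describes in human decision-makers.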
Here are three specific steps which organizations can take to minimize the risk of perpetuating societal biases:
The first is to look at the algorithms themselves and ensure that nothing about the way they are coded perpetuates bias. This is particularly necessary when an AI system is consistently making predictions that are out of step with reality.
Second, consider ways in which AI itself can help mitigate the risk of biased data – IBM's bias-detection algorithms could play a part here.
Third, make sure our own house is in order – we can't expect an AI algorithm trained on data drawn from society to be better than society, unless we've explicitly designed it to be.
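A simple, widely used starting point for the audits described above is to compare a model's positive-outcome rate across groups and compute a disparate-impact ratio. The data, group labels, and the four-fifths (0.8) threshold below are illustrative assumptions, and this is a sketch of one common audit metric rather than a complete fairness review.

```python
# Disparate-impact audit: compare per-group selection rates of a model's
# predictions. (Predictions, groups, and threshold are illustrative.)
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group rate to the highest. Values well below
    1.0 indicate the model favors one group over another."""
    return min(rates.values()) / max(rates.values())

preds  = [1, 1, 0, 1, 0, 0, 1, 0]          # model decisions (1 = approve)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)      # {'A': 0.75, 'B': 0.25}
ratio = disparate_impact(rates)             # 0.25 / 0.75 = 0.333...

# The conventional "four-fifths" rule flags ratios below 0.8 for review.
flagged = ratio < 0.8
```

Passing such a check doesn't prove a model is fair, but failing it is exactly the kind of early warning signal the three steps above are meant to surface before deployment.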
This is a very interesting question. AI can be biased (and it is in many use cases – just look at the marketing use cases around you). My outlook is: we are making it. Until we reach the state of singularity (which is far from reality, at least in the near future), every AI algorithm is shaped by the dataset we feed it. We feed the dataset and we configure how the algorithm behaves, so the results are what we should expect.
Consider a Digital Twin. The model has to be created by engineers, who decide how and when the simulation behaves like reality. Then, in a second step, the Digital Twin is fueled by continuous monitoring data from numerous sensors. The DT therefore embeds many human decisions, such as what the model should look like and where the sensors should be installed. So the results include human bias.