Every coin has two sides, and every tool can be used for creative as well as destructive purposes. When it comes to tools as powerful as learning networks, this adage takes on new and unforeseen implications. Since machine learning is spawning a host of innovative tools with never-before-seen possibilities, society must reflect on their ethical implications and potential pitfalls.
There are some pressing questions to be debated in the short term, and this article will bring up the most relevant. Read through the following sections and pitch in your opinion in the comments.
Could machine learning algorithms promote social bias?
Given the right training data, machine learning algorithms can be remarkably good at predicting how masses of people are likely to behave by learning from their past behavior. The problem is that people's past behavior often reflects cultural bias, which means these sophisticated programs are making accurate predictions from biased data. There is a real danger that such predictions will not only reflect existing bias but amplify it. This is why it's so important to integrate ethical considerations into machine learning right from the beginning. Otherwise, we could indeed realize our worst collective nightmares of being dominated by ruthless machines.
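To make this concrete, here is a minimal sketch with entirely hypothetical data: a trivial "model" that learns hiring rates from historically biased decisions will faithfully reproduce that bias for equally qualified candidates. The records, group names, and the frequency-based model are all invented for illustration, not drawn from any real system.

```python
# Hypothetical past hiring records: (group, qualified, hired).
# Equally qualified candidates from group "B" were hired less often.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def train(records):
    """Learn the hire rate per (group, qualified) pair -- a trivial 'model'."""
    counts = {}
    for group, qualified, hired in records:
        key = (group, qualified)
        total, yes = counts.get(key, (0, 0))
        counts[key] = (total + 1, yes + hired)
    return {key: yes / total for key, (total, yes) in counts.items()}

model = train(history)
# Two equally qualified candidates get very different predicted outcomes:
print(model[("A", True)])  # 1.0 -- qualified candidate from group A
print(model[("B", True)])  # ~0.33 -- qualified candidate from group B
```

The model is perfectly "accurate" with respect to its training data, which is exactly the problem: accuracy on biased data means reproducing the bias.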
Will deep learning predictions lead to inhumane actions?
Since machine learning evaluates data objectively, it can't discern whether the data it feeds on carries harmful subjective concepts that work against the common good. For example, if an algorithm picks up on data that subtly expresses racist concepts, it won't evaluate the underlying trend critically – instead, it will simply integrate those concepts into its framework. The resulting predictions will carry the same subtle racism, possibly making unfair machine decisions based on the unfair human decisions of the past. We need to teach machines how to evaluate data from a humane standpoint; this is a particularly challenging task, since even we humans often have a difficult time doing so.
Are we giving too much decision power to algorithms?
The previous points are especially concerning when we realize that we're already handing significant decision power to algorithms before we can be confident in their maturity. In some jurisdictions, algorithms are already being used in courts to determine prison sentences, and the advent of robotic judges may be just around the corner. Many people assume that computer programs will make fair decisions because they can look at cases objectively, but this has already proved not to be the case. Typically, sentencing algorithms simply replicate the same culturally biased sentencing patterns they learned from.
Could AI-based advertising models promote disinformation?
In a world already rampant with fake news, disinformation, and confusing political agendas, machine learning is amplifying the problem – as recently observed in the Cambridge Analytica scandal. The scales of the Brexit referendum may well have been tipped by machine learning algorithms used to entice voters, manipulating public opinion through sensationalist content. Machine learning applied to content advertising poses a real threat of pushing inflammatory content that easily goes viral, as opposed to reasonable content that might contribute to the advancement of humanity. In other words, AI-based advertising models feed on sensationalism and throw more fuel onto the roaring fires of populism.
Should we worry about lethal autonomous weapon systems?
In recent years, we have seen many impressive (and scary) developments when it comes to the creation of artificially intelligent weapons which have lethal power as well as the capacity to make autonomous decisions. You may have heard about “Slaughterbots”, highly advanced military drones that can be deployed in a war zone and instructed to use facial recognition to eliminate specific targets. As you can imagine, the destructive potential of such weapons would be tremendous, and might easily lead to outright genocide.
What happens when self-driving cars outperform human drivers?
Self-driving cars are now on the verge of mass distribution, and it won't be long until they become a normal part of our lives. This raises many thought-provoking issues, such as the classic "trolley problem": self-driving cars will have to be programmed to make life-and-death decisions on the fly, choosing between saving the lives of the passengers and saving the lives of people on a collision course with the vehicle. The issue will escalate when self-driving cars eventually outperform human drivers (and it won't be long before they do); when that happens, self-driving cars will replace human drivers, and it will be up to the cars to make life-and-death decisions in imminent traffic accidents everywhere.
Where do we draw the line between surveillance and privacy?
Surveillance cameras are now everywhere, to the point where it's becoming impossible to live in urban areas without being constantly monitored. This isn't too worrying… until those video streams are cross-referenced and processed through sophisticated facial recognition algorithms. When this happens, we're effectively looking at a "Big Brother" situation – the line between public surveillance and privacy breach blurs and tends to disappear. It's already happening in China, where surveillance cameras are used everywhere to monitor citizens and determine their social score based on their ongoing behavior.
If you liked this article, join our growing readership now! The author, Frieder F, specializes in everything related to using technology in a positive and future-focused way. Aside from articles like this one about marketing technology and AI ethics, this blog focuses on inspiring people to realize how advances in data science and learning networks will help improve human society.