This is why we need to regulate AI

By Matt Reaney. Posted 29 January 2018
Ethical artificial intelligence will make life better for humans

As the technology progresses at a rapid pace, it is a critical time for governments and policymakers to consider how we can manage the effects of Artificial Intelligence (AI) on a social, economic and political scale. Artificial Intelligence is not inherently good or bad, but the way we use it could well be one or the other.

Unfortunately, governments and policymakers have so far paid little attention to the impact of this technology. We’re going to see huge changes to employment, privacy and weaponry, to name a few, that if managed incorrectly, or not at all, could spell disaster. Handled correctly, with forward planning and proper regulation, the technology has the potential to better the future of our societies.

Elon Musk’s warnings have made headlines in recent months, as he urges that the regulation of AI be proactive rather than reactive, for fear that reacting after the fact would be much too late. Whether you’re in the Musk or Zuckerberg camp, it’s undeniable that we need to consider all outcomes for society.

Humanity, unite!

It’s been a year since giants in the field of deep learning – Amazon, Facebook, Google, IBM and Microsoft – announced the launch of the non-profit Partnership on AI. Since then, other companies and industry leaders have followed suit, coming together to highlight the need for a governing body on AI and ethics.

Oren Etzioni, CEO of the Allen Institute for AI, has put forward that we should adopt an approach to AI similar to Isaac Asimov’s three laws of robotics – if you’ve not read Asimov’s work already, I highly recommend it!

Whilst these laws are ambiguous at best for an artificially intelligent system to interpret in a world so much further advanced than 1942, when they were written, we could use them as a foundation to shape three laws of AI with much more specific adherence to modern legislation. Rather than the policing of artificial ‘general’ intelligence and some Skynet scenario of superintelligent beings with consciousness, the real need is to regulate the way in which AI technology is used.

And while there are many companies out there working to apply AI for good, what happens if things go wrong? There is real demand for an ethical governing board to oversee AI practice and development as the field rapidly advances.

Ethical AI – worth the risk

One example is a rather dystopian study whose authors recently claimed to have developed software that can determine a person’s sexual orientation from their face alone, using algorithms trained on photos from dating websites.

Although the researchers’ intentions may have been good, there was understandably a big backlash. This sort of facial recognition system is a breach of personal privacy and, in the wrong hands, could be misused to target vulnerable individuals.

Facial recognition is going even further in China, where researchers have claimed they can tell from a person’s face whether they are likely to commit a crime. This use of AI technology brings the destructive physiognomy practices of the 19th century (and earlier) into the modern world, not to mention exacerbating racial bias and stereotyping, with little to no accountability.

Whilst many experts say it is premature to put AI regulations in place, and we are still not entirely sure of the exact impacts and implications AI will bring to society, it’s important to remember that governments and administrations notoriously move at a glacial pace compared with technological progress.

So, it is better to be premature and lay the groundwork for later policies to take shape than to arrive late to the party and risk unmoderated AI.

Matt Reaney is the Founder of Big Cloud – specialists in Big Data and Data Science recruitment.