by Kayla Toh
Domino’s now delivers pizza using driverless cars. Cool, right? But would you let a driverless car drop your child off at school? We look at how much you should trust machine intelligence.
Our rapid approach towards a machine age is reflected in the surge of high-profile news reports on data-driven scandals. One case that has been making headlines lately is the Cambridge Analytica scandal, which brought to light the use of algorithms in data breaches. The story shocked many, but algorithms and artificial intelligence like this lie behind almost every aspect of our lives.
Algorithms are used in our supermarkets, mobile devices, courtrooms, hospitals, banks and more. This provokes the questions: how much are we relying on algorithms? And can we really trust them fully? Critical questions like these need to be answered if we’re to determine what kind of world we ultimately live in, as we advance into the not-so-distant age of social media, algorithms and automation.
Last month, I attended a talk by Hannah Fry at the Royal Society of Arts. Dr Fry is an Associate Professor in the mathematics of cities at University College London and works alongside a unique mix of physicists, mathematicians, computer scientists, architects and geographers to study patterns in human behaviour. She has used mathematical models to explore questions in everything from love to flu epidemics. And now, with endearing enthusiasm, refreshing simplicity and admirable expertise, she helps us to uncover the answers on how to be human in the age of the machine.
Hannah referenced her new book, ‘Hello World: How to be Human in the Age of the Machine’, throughout the talk, as she examined how algorithms are taking over society. People are becoming more reliant on them to determine finance, security, healthcare, transport and more. Hannah gave us a fair-minded account of what the software that now governs our lives can and cannot do. By looking at “the good, the bad and the downright ugly sides of algorithms”, we can judge how much confidence we should put in algorithms and whether or not they’re actually an improvement on human intelligence.
First and foremost, what are algorithms? The word sounds fancier than it really is. According to the Cambridge Dictionary, an algorithm is essentially just a set of mathematical instructions or rules that help to calculate an answer to a problem. Algorithms power machine intelligence, otherwise known as artificial intelligence (AI). AI is the replacement of human labour by a digital computer or robot that performs tasks typically associated with intelligent beings, as defined by the Encyclopaedia Britannica.
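To make that definition concrete, here is one of the oldest algorithms on record, Euclid’s method for finding the greatest common divisor of two numbers. It is exactly what the dictionary describes: a short, finite list of rules that calculates an answer to a problem.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

No intelligence required on the machine’s part: it just follows the rules, step by step, every time.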
In a more unnerving translation, it means machines replacing humans. This may spark a scary vision of a post-apocalyptic robot world in your mind, but as Dr Fry emphasises; machine intelligence is not all doom and gloom. It can be used to create some mind-blowing inventions.
Algorithms and algo-rhythms
An astounding feat of AI innovation is David Cope’s ‘Experiments in Musical Intelligence’. A professor of theory and composition, David used machine intelligence to replicate the music of famous composers, such as Johann Sebastian Bach. He created an AI version of a Bach track, and it’s practically impossible for the average listener to tell the difference between the machine-generated track and the original. If you don’t believe me, you can try it for yourself by listening to the Bach-style chorale here.
The algorithm David used in his experiment is actually similar to the one our phones use for predictive texting. Your smartphone predicts what you want to type by scanning the history of all of your previous texts.
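The core idea can be sketched in a few lines. This is a minimal illustration of the principle, not the actual smartphone implementation: count which word has historically followed which, then suggest the most frequent followers.

```python
from collections import Counter, defaultdict

def build_model(history):
    """Count which word follows which across the user's past texts."""
    model = defaultdict(Counter)
    for text in history:
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Suggest the k words most often typed after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Hypothetical message history for illustration
history = [
    "see you at the gym",
    "meet you at the station",
    "see you at lunch",
]
model = build_model(history)
print(predict_next(model, "at"))  # ['the', 'lunch']
```

Cope’s system works on the same statistical principle, only applied to sequences of notes and chords rather than words.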
How much should we rely on algorithms?
Humans place blind faith in machines. For example, algorithms are regularly used in court cases to predict whether a defendant will commit a crime again in the future and whether they’re a risk to society. But would we prefer a human or a robot determining our fate in the eyes of the law?
Hannah argues that there are pros to using a machine, because the machine can be unbiased, consistent and balanced. However, the judge needs to know when to overrule the machine, because machines don’t understand context the way humans do and can’t see the world as we see it.
To illustrate, Hannah gave a striking example of a court case in which a machine classified a young offender (19 years old) who had consensual sex with a minor (aged 14) as a high risk to society. He received an 18-month jail sentence because, in the logic of the machine, a young defendant would be more likely to commit a crime again in the future. Meanwhile, the machine classified a 36-year-old man who had sexual intercourse with a 14-year-old (a 22-year age gap) as a low risk to society, recommending he avoid a jail sentence altogether. It’s clear that there’s still a very big gap between us and AI.
Machines have great potential to do a lot of social good, but it’s inevitable that they will make mistakes, and we can’t always rely on ourselves to know where to draw the line between trusting AI and not trusting it. For example, AI is being used for facial recognition in the police force, but it has a 98% failure rate. Yet it is still being trusted, and there is currently a scheme to roll it out onto the streets. This is why Hannah insists that we can’t be too trusting of AI. We can’t think of AI as a lone source of certainty; we have to think about the trust issues and failings of the people using and creating the machines.
Learning to trust machines
Machine learning is similar to training a dog: you give it an objective and a reward, and all the in-between steps it has to figure out for itself. With AI, trusting the outcome depends on the circumstances. In some cases, it’s very important to be clear on how the machine got its answer, as in the court case above. But in other situations, you just need to know that the machine’s answer is extremely accurate, and it doesn’t matter how it reached that conclusion; predicting whether or not someone will survive cancer is one example.
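The dog-training analogy can be sketched as a toy program. Below is a minimal trial-and-error learner (a simple epsilon-greedy bandit, with made-up reward probabilities for illustration): we specify only the reward, never the right answer, and the agent figures out the in-between bits for itself.

```python
import random

random.seed(0)

# Hypothetical toy problem: three tricks, only one reliably earns a treat.
REWARD_PROB = {"sit": 0.9, "roll": 0.2, "bark": 0.1}

def train(steps=2000, epsilon=0.1):
    """Trial-and-error learning: try tricks, keep a running estimate of
    how often each one is rewarded, and favour the best one found so far."""
    counts = {a: 0 for a in REWARD_PROB}
    values = {a: 0.0 for a in REWARD_PROB}  # estimated reward per trick
    for _ in range(steps):
        if random.random() < epsilon:        # occasionally explore at random
            action = random.choice(list(REWARD_PROB))
        else:                                # otherwise exploit the best guess
            action = max(values, key=values.get)
        reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return max(values, key=values.get)

print(train())  # the agent settles on the best-rewarded trick
```

Notice that nothing in the code says which trick is "correct"; the answer emerges purely from the pattern of rewards, which is exactly why the reasoning inside a trained system can be so hard to inspect afterwards.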
To improve our trust in AI, we must improve how it’s regulated. Hannah believes that people are beginning to use AI the way medicine was used before certain rules and measures came into place, in the sense that it is becoming morally bankrupt. Machine intelligence is currently untested yet still deployed, and no one is stopping this from happening. If there were stricter regulations on how people use machine intelligence and the purposes it’s used for, we would have fewer ethical concerns about it.
Ultimately, not all the blame for errors can be pinned on machines. We must remember that human failings are inevitable, and the people responsible for designing AI need to take this into account.
Hannah noted that in the last 18 months there has been a push towards writing algorithms that explain other algorithms, so that we can trust AI more. But it’s still very important to know where to draw the line with our faith in machines.
Machines give us an easy sense of authority. Humans like cognitive shortcuts, and we like not having the responsibility of making an important decision, which is why some people defer to the decisions of a robot. But at the same time, we are quick to dismiss robots if they show any kind of flaw or make even a minor error.
Should we fear machine intelligence?
It’s easy for us to fear development in machine intelligence and the rate at which it seems to be improving. Stephen Hawking and Elon Musk have recently been very open about their concerns about AI and machines edging out humans. But Hannah argues that “worrying about AI in the future is like worrying about overcrowding on the moon.”
In other words, a real-life Terminator-style occurrence is highly unlikely, because we’re a long way from ever replicating the neurons of the human brain. Scientists currently can’t even replicate the neurons of a sea slug, which has only around 280 neurons compared to our billions. Machines and robots are nowhere near a match for human minds. Instead of worrying about this, we need to focus on the AI problems we have here and now.
But if we apply Moore’s Law to the pace at which AI develops, it’s possible we will see very sudden and fast-paced growth in AI technology in the future. Moore’s Law is the observation from computing that the number of components on a chip, and so, roughly, a computer’s processing power, doubles about every two years.
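It’s worth pausing on what "doubling every two years" actually means, because exponential growth outruns intuition quickly. A few lines of arithmetic make the point:

```python
def moores_law_growth(years, doubling_period=2.0):
    """Growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (2, 10, 20, 40):
    print(f"after {years:2d} years: x{moores_law_growth(years):,.0f}")
# after  2 years: x2
# after 10 years: x32
# after 20 years: x1,024
# after 40 years: x1,048,576
```

A factor of two sounds modest, but compounded over forty years it becomes a million-fold increase, which is why applying the same logic to AI is both tempting and alarming.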
An example of excellent, but seemingly threatening, AI innovation is the chess-playing program AlphaZero. Yes, this is an amazing accomplishment for AI, but Hannah states that we can’t begin to worry about an AI apocalypse based solely on this. A game of chess has a very clear objective, and that is drastically different from the objective of a driverless car or of trying to solve crime in a city.