Coffee House

Will Artificial Intelligence put my job at risk?

6 June 2014

Ever since the onset of artificial intelligence – machines simulating human reasoning and problem-solving – there has been worry about the machines taking over. Taking our jobs, rendering us unnecessary, perhaps even developing sentience and turning on us, like Skynet in Terminator 2.

Some of those fears have been wildly exaggerated, partly because they rest on a misconception of what artificial intelligence actually is (which, on the whole, still amounts to using examples to train computer programs to mimic human behaviour under quite limited conditions). But they aren’t completely foolish worries. The speed of improvement in artificial intelligence, as in much modern technology, is dazzling and quickening. According to Google’s Ray Kurzweil, in 2045 we will reach what’s known as the ‘Singularity’: the point at which artificial intelligence becomes so advanced that it begins to produce new and ever more advanced versions of itself, leaving us mortals behind. If that happens then yes, consider your job at risk. Along with everything else. (Perhaps we fleshy souls will be usefully employed guarding plug sockets, although the supercomputers will have figured out a way around that problem too, the clever rascals.)

Leaving aside for the moment that artificial intelligence obviously creates jobs too – technology tends to improve total productivity and to create more high-skilled opportunities at the expense of semi-skilled work – what is usually overlooked is how society might respond to all of this. The effect of AI on jobs depends on more than computing speed.


As technology comes to play an increasingly important and signal role in society, expect something of a backlash. Already there are groups and movements hoping to slow its rapid development. In 2011 a Mexican group called the Individualists Tending Toward the Wild was founded with the objective ‘to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course’. That year they detonated a bomb at a prominent nanotechnology research centre in Monterrey. I expect to see more of these groups in the coming years, especially if the Singularity begins to look possible.

Then there’s the government. Even if AI could theoretically do your job better than you – it depends, of course, on what the job is – the prospect of genuinely powerful artificial intelligence is likely to provoke swift and heavy regulation to try to limit its effect on the workforce. A new political party would be founded campaigning on a ‘jobs at risk from AI’ platform, forcing the government of tomorrow to pass strict legislation to limit the damage. Perhaps some kind of points-based system.

Where this all might lead is anyone’s guess. Zoltan Istvan, an American futurist and writer, has recently released a controversial novel called The Transhumanist Wager. In it, a small band of transhumanists – including scientists who work on artificial intelligence – disappear to a floating seastead in international waters called Transhumania. They believe society’s fear of technology is getting in the way of innovation and development. From Transhumania, this band of brilliant scientists launch a world war against the forces of inertia. ‘It’s a novel’, Zoltan assures me, unconvincingly.

Jamie Bartlett is Director, Centre for the Analysis of Social Media

The Spectator is holding a debate, ‘Will Artificial Intelligence Put My Job at Risk?’, at 7pm on Wednesday 18 June at Prince Philip House, SW1. Speakers will include: Andrew Blake, Laboratory Director, Microsoft; Jamie Bartlett, Director of the Centre for the Analysis of Social Media, Demos; Nicola Smith, Head of Economic and Social Affairs, Trades Union Congress; and the author and journalist Bryan Appleyard.



Comments
  • Martin Renold

    If machines could do all the work for us, why would that make us despair? We don’t need work to live, we only need the result of that work.

  • Chris Hobson

    Yes, by 2029 computers will have human intelligence.

  • Jupiter

    After we get to the singularity, humanity will eventually split into something like the Eloi and the Morlocks.

  • ButlerianHeretic

    There will be plenty of opportunities for humans to play a meaningful role in our own fate for a long time. However, those opportunities will require that we use technology to augment our own capabilities in order to keep up with accelerating AI power. If humans do not do this, we will cease to be relevant to the future. I’m not okay with that. But some won’t want to augment themselves, and by not doing so they will essentially opt out of the economy as even burger-flipping jobs get automated. And every increase in the minimum wage makes it more economical to automate them, btw.

    The short version is that society will stratify at a rapid rate. By stratify, I don’t mean some people will be earning $1 million per year while everyone else is earning $10,000 per year. In the not-too-distant future, some people will individually wield power that today belongs to an entire corporation, while everyone else is on welfare. We are already seeing tension and violence caused by the technology have-nots resenting the haves. At some point that’s going to explode, and we may very well see “Butlerian Jihad” style open warfare break out. This is going to rapidly become the biggest challenge our society faces. I’m not sure that America, with our tradition of egalitarianism, can handle it. For my part, I feel that it is very likely that I will soon be forced to go someplace a little less free, with a greater tolerance for technology.

    At the point AI becomes advanced enough to be worth regulating, any nation that does regulate it while its competitors don’t will cease to exist as a meaningful participant in the world economy. Think the controversy over the Kyoto Protocol is fraught? That is nothing. Some nations may be okay with this, some won’t. Considering the level of investment that the US government is already making in AI, it seems very doubtful that the US will regulate AI in meaningful ways unless the alternative is civil war.

    Meanwhile, in the 17 years since the Kyoto Protocol was signed, and still with no global consensus on the issue, computer power has gone through 11 doubling periods. That is, in terms of power for a given volume of computer equipment, current computers are 2,048 times as powerful as the computers available in 1997. In terms of cost it is even more dramatic, because cost falls by half even as power doubles, so modern computers are 4,194,304 times as powerful as computers at the same price point as their 1997 equivalents. Now imagine if the computers in 1997 were already-powerful AI. At the point this becomes a serious question for politicians, it will already be entirely too late to do anything meaningful about it.
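
    A back-of-envelope check of that arithmetic, assuming (as above) a constant 18-month doubling period for power per unit volume and a halving of cost per unit of power over each period – illustrative assumptions, not measurements:

    years = 2014 - 1997                  # 17 years since Kyoto was signed
    doublings = int(years / 1.5)         # assumed 18-month doubling period -> 11
    power_factor = 2 ** doublings        # same volume of equipment: 2,048x
    price_performance = 4 ** doublings   # same price point: power doubles and cost halves each period
    print(doublings, power_factor, price_performance)   # 11 2048 4194304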

  • beenzrgud

    We will use AI to augment our own intelligence and creativity. I think it will be very far into the future before computers can experience a moment of inspiration and make a creative leap.

    • ButlerianHeretic

      I think Jarvis from Iron Man is a much better analogy for future AI than Skynet.

      • beenzrgud

        Yes. We will use AI to help us reach useful conclusions, but the formulation of questions will remain under our control.

        • ButlerianHeretic

          I’m cautious but hopeful here. Today, genetic algorithms and similar processes are better at refining something we create than they are at creating something new. However, Moore’s Law doubling implies that by mid-century a $1,000 computer will have more raw processing power than all of non-augmented humanity combined. So even if a genetic algorithm is not good at inventing things, brute-force creativity may still be a practical possibility.

          I do think that unless some new paradigm of computing allows AI to achieve something analogous to the kind of intuitive leaps that are natural for humans, an augmented human will still beat a pure AI in areas that require original thought. In particular, if quantum computers turn out to be as much better at searching for optimal solutions through genetic algorithms as they are expected to be at searching for encrypted passwords, we may have to rethink that. Also, for a lot of “creative” work that is largely derivative, where change is mostly cosmetic, even genetic algorithms on ordinary digital computers may be enough to render human creativity superfluous. Procedurally generating new car and appliance models every year would be very doable, for example.
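
          As a rough illustration of what I mean by refining rather than creating, here is a toy genetic-algorithm sketch in Python. The baseline design, the target spec and the fitness function are all invented for the example and have to be supplied by a human; the algorithm only polishes what it is given.

          import random

          TARGET = [3.0, -1.5, 2.2, 0.7]     # the "ideal" spec, encoded by a human
          BASELINE = [1.0, 0.0, 1.0, 0.0]    # the existing design the GA starts from

          def fitness(design):
              # higher is better: negative squared distance from the target spec
              return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

          def mutate(design, scale=0.1):
              # small random tweaks to an existing design
              return [d + random.gauss(0, scale) for d in design]

          population = [mutate(BASELINE, 0.5) for _ in range(50)]
          for generation in range(200):
              population.sort(key=fitness, reverse=True)          # rank by fitness
              survivors = population[:10]                         # selection
              population = survivors + [mutate(random.choice(survivors))
                                        for _ in range(40)]       # variation

          print(max(population, key=fitness))   # converges towards TARGET; it never invents a new spec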

          • beenzrgud

            Without unknowns life would be dull, but I also share your optimism.

  • Jay

    No, AI is not going to put our jobs at risk. At least not in the near future, because human-level intelligence is a distant dream. Till then, play with software like the Braina assistant for PC.

  • UniteAgainstSocialism

    I don’t know about AI putting your job at risk, but I wish immigrants would put your job at risk. Maybe then you and the rest of the metropolitan elite would wake up and oppose uncontrolled mass immigration.

    • Hello

      It’s amazing that such an inane and pointless comment as yours received a vote. Just because you don’t like mass immigration doesn’t mean that everyone has to agree with you. It just doesn’t bother some people, you know; they just get on with their lives.

      • UniteAgainstSocialism

        Oh hello. What an inane and pointless comment by yourself, so I’m not surprised you haven’t got any votes. Just because you’re a traitor and love immigration doesn’t mean everyone has to agree with you either. It’s still a free country and I’m allowed to express my opinions. You may not like it – probably why you don’t like free speech.

        • Hello

          I didn’t say that you did have to agree with me; I didn’t say that I love immigration; and I don’t think that loving immigration is synonymous with being a traitor.

          You, on the other hand, did imply that anyone that doesn’t oppose immigration isn’t awake. What I was saying is that it’s perfectly possible to disagree with you at the same time as being “awake”.

          R2D2 and C3PO pose more of a threat to your job than Ivan. Ivan is just increasing the supply of labour, and the labour market will reach equilibrium in the long run. R2D2 and C3PO, on the other hand, are changing the nature of the job, and there’s no reason to assume that you’ll be capable of adapting to a different role.

  • HookesLaw

    Robotics – that’s repetitive, pre-programmed actions – has already affected the workforce. So I am not sure what further effect so-called AI will have, if it ever materialises. AI – that’s self-thinking, unprogrammed machines and robots – is a different kettle of fish. Fine for sci-fi, but why would we really want to build them? The only reason I can think of is to explore the solar system. To boldly go where man cannot go.

    • Scott Bisset

      Imagine the combined IQ of all the scientists in the world – that’s what AI can give you in one program.

  • La Fold

    Have they never seen The Matrix?

  • Rhoda Klapp8

    Natural stupidity beats artificial intelligence every time.
