Is AI going to destroy civilisation?
Possibly, but not for the reasons you might think.
We keep hearing that AI could reach the level of Artificial General Intelligence (AGI). At that stage it could take on a life of its own: in a so-called singularity, it would have no more need for the soft human forms that gave birth to it. In this article I am going to explain why this is not as likely an outcome as many believe. Or, at least, I will explain why it is not the outcome to be feared most.
Some Background
I began working in AI in 1987. At the time I joined British Gas, which had just started a large European project called COGSYS (Cognitive System) with 35 partners. Our goal was to develop a real-time expert system for process control and fault diagnosis. As project leader, I led the trial of the system at our test site. It went well - sufficiently so that British Gas decided to spin out a commercial company called Cogsys Ltd., in conjunction with Salford University.
I became a director and was responsible for delivering commercial applications using COGSYS. During my time there we got involved in a wide variety of fascinating and challenging applications. Some of these combined advanced simulation models and neural networks.
Today, it is in the area of neural networks or “machine learning” that recent breakthroughs in Generative AI, based on Large Language Models (LLMs), have caught the attention of the public, businesses and venture capitalists. If you are an engineer or manager in any organisation, your boss will probably be pushing the implementation of AI.
I am not going to dwell on what is driving this. I have been following the hype around AI for many years. If that hype had been true in the early days, we would all be lying on the beach by now, with nothing better to do than worry about developing skin cancer.
In this article I am going to concentrate on what is happening in software development, but the arguments hold more generally for any field of endeavour that relies on LLMs.
The Vibe Coding Revolution
“Vibe coding” describes a supposedly more intuitive, almost conversational approach to programming. Instead of carefully planning every line of code in advance, you build things by experimenting, tweaking, and responding to how the software behaves in the moment. It involves using AI assistants to quickly generate or adjust code, then refining it based on what “looks right” as you go along.
For a non-programmer, you can think of it a bit like cooking without a strict recipe with the help of the world’s best chefs. You start with an idea, get advice from a chef, mix your ingredients and taste as you go, letting the chef know what you think. You keep adjusting until the result matches the appearance and taste (vibe) you were aiming for, rather than following a rigid set of instructions from the start.
Many traditional programmers use vibe coding tools to work faster rather than replace themselves. Surveys suggest that most developers now use AI coding tools daily. Vibe coding can also be highly democratising, in that journalists, product managers, and hobbyists can now build simple apps by describing what they want in plain English.
For the remainder of this article I am going to concentrate on professional software engineers. This is because these users are building systems that millions of people use every day, and this is where the greatest risk lies. Before I go into the risks, I first need to explain how LLMs work, so that their shortcomings become apparent.
How LLMs Work
LLMs are a type of artificial intelligence designed to understand and generate human-like text - one form of Generative AI. At their core, they don’t “know” things in the way a person does. Instead, they’ve been trained on vast amounts of written material (books, articles, websites) to recognise patterns in language. When you ask a question, the model looks at your words and predicts what words are most likely to come next, one step at a time, based on everything it has learned.
Imagine an extremely well-read autocomplete system. When you type a message on your phone and see suggested next words, that’s a very simple version of what’s going on. An LLM does this on a far more sophisticated level. It considers not just the last word or two, but the whole context of your sentence or paragraph. This allows it to produce responses that feel coherent, relevant, and often surprisingly insightful.
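The autocomplete idea can be sketched in a few lines of toy code. This is an invented, drastically simplified illustration - a bigram counter over a made-up corpus, not anything resembling a real LLM - but the core move is the same: learn statistics from past text, then predict the most likely next word.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then predict the most frequent follower. Real LLMs use neural networks
# over whole contexts, but the underlying idea - predicting the next
# token from statistics of previously seen text - is the same.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - the most frequent follower of "the"
```

Notice that the predictor has no idea what a cat is; it only knows which words tend to follow which. That, scaled up enormously, is the sense in which an LLM "knows" things.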
Another useful analogy is to picture a librarian who has read millions of books and has an extraordinary memory for how ideas are expressed. He or she isn’t actually thinking or reasoning in the human sense. When you ask a question, the librarian doesn’t “look up” a single correct answer. Instead, they compose a response by drawing on patterns and associations they’ve seen across many sources. That’s why LLMs can explain concepts clearly and write in different styles.
However, it is also the reason why they can sometimes be confidently wrong. They’re generating output based on what they have learnt, but they have no way of knowing whether what they have been trained on is nonsense, other than through the aggregation of more material that is hopefully “correct”. You may already have come across this yourself - it’s called AI hallucination. And this is the key reason why you should never rely on everything that an AI tells you, particularly if it relates to things like your personal finances or health.
In essence, LLMs are powerful pattern-recognition and text-generation systems. They’re good at language because they’ve seen so much of it, not because they truly understand the world. This is what makes them incredibly useful for tasks like explaining ideas, drafting text, or answering questions. However, it also means that you should treat their advice as helpful suggestions rather than unquestionable facts.
What has this got to do with vibe coding? Well, programming involves the construction of software programs written using computer languages. There are many programming languages, but all of them boil down to writing textual instructions describing sequences of operations on inputs to produce outputs. Therefore, existing software programs are incredibly valuable sources of input for LLMs. Applications like ChatGPT, Claude and Gemini have been trained on large software repositories, such as open-source code bases on GitHub, and coding knowledge bases such as Stack Overflow.
Professional Vibe Coding
My own reservations about vibe coding are based on personal experience developing a series of hobby projects. I have had mixed results. Sometimes the results are astounding, but other times, much less so. Over time I have changed my view from one of “this is great - anyone can programme now!” to “this could be a disaster - anyone can programme now!”
As a tool for getting started, vibe coding is great. Developing prototypes and MVPs (Minimal Viable Products) has never been easier. The problem is that as the code gets closer to actually being deployed, questions and concerns come to the fore. For example:
How good is the code?
Is it secure?
Does it cover all use cases?
Is it maintainable?
The problem is that vibe coding can generate a lot of code very quickly. That's great if you can trust the code. However, it's now clear that, just as when you ask ChatGPT for the best way to invest your savings, you have to be sceptical about the results.
To put this into real-world terms, here is a quote about what recently happened at Amazon when AI coding went wrong (Times Now News):
Just when everyone thought that AI is going to replace every profession, something or the other comes up which proves it can't - at least in its entirety. A recent report by Financial Times suggests that Amazon engineers asked the internal AI bot dubbed Kiro to fix some issues related to Amazon Web Services […] resulted in a massive 13-hour-long outage.
The problem was that Kiro decided the best way to fix the problems was to delete the deployed system and re-build using its suggested changes. You might argue that the programmer should have checked the code and vetoed the deployment method that Kiro recommended. This is true, but what this example shows is that as AI coding becomes more prevalent, the requirement to validate the output does not go away. In fact it becomes critical.
Problems like this are hardly surprising when you consider that Amazon are pushing the use of Kiro whilst laying off thousands of software engineers. In this case the outage was a commercial problem for Amazon. However, vibe coding is popular everywhere, and it is already creating risks for systems deployed in many different businesses and organisations.
Where is this going to end?
In my beginning is my end.
T.S. Eliot, from the opening of East Coker in the Four Quartets
Unfortunately, it gets worse. LLMs are trained on existing code repositories. And guess what sort of code is increasingly being added to those repositories? The logical outcome is that AI models will be trained on code bases made up of a growing proportion of AI-generated software.
Going back to the discussion of AI hallucination: if the training pools behind vibe coding tools become overwhelmed with AI-generated code, then the chances of newly generated code being reliable will decrease.
Now imagine what happens as such code gets increasingly deployed in banks, hospitals, nuclear power plants, air traffic control and defence systems. If we are lucky, only small things will go wrong. If we are unlucky? It’s impossible to guess the consequences.
And more practically, how will these programs be fixed if no human engineers have ownership of the code?
You might remember the Millennium Bug which was highlighted as a potentially existential problem if computer clocks failed to handle the transition to the year 2000. I remember the years leading up to that very well and the enormous amount of time and money spent on validating systems to make sure that the transition would be smooth.
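For readers who don't remember the detail: the Millennium Bug arose because many older systems stored years as two digits to save memory. A minimal, illustrative sketch of that failure mode (invented numbers, but the genuine bug pattern):

```python
# Illustrative sketch of the Millennium Bug: storing years as two
# digits makes simple date arithmetic break at the year 2000.
def age_two_digit(birth_yy, current_yy):
    """Naive age calculation using two-digit years (the buggy pattern)."""
    return current_yy - birth_yy

# In 1999 this looks fine: someone born in 1965 ("65") is 34.
print(age_two_digit(65, 99))   # 34
# In 2000 ("00") the same person suddenly appears to be -65 years old.
print(age_two_digit(65, 0))    # -65
```

The fix was conceptually trivial - store four-digit years - but finding every place the buggy pattern was buried took years of auditing, which is exactly the kind of effort I discuss below.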
Now think about the hundreds of thousands of systems deployed in every walk of life which may have hidden issues, with no way to anticipate whether or when they might have serious consequences.
What is likely to happen with the AI coding revolution is going to be much worse because it is unclear whether anyone is seriously thinking about the consequences. People may be worried about the problem of AGI but the real risk is much more prosaic.
Let’s be clear. AGI based on LLMs is a fantasy. LLMs simply predict words in sequences based on what they have been trained on and what they have been asked. This does not mean that AGI is impossible using other models of AI. In fact, we could have the worst of all worlds. However, my money is on more mundane problems stemming from the spread of more and more degenerate software.
The old computing adage of garbage in, garbage out has never been more true. As more AI-generated code (and other forms of knowledge) is drawn from an increasingly corrupt pool, the build-up of dangerous code will be insidious and, ultimately, potentially catastrophic.
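The feedback loop behind "garbage in, garbage out" can be made concrete with a toy calculation. The numbers below are entirely invented - they are not measured from any real system - but they show the shape of the problem: if each model generation trains partly on the previous generation's output and adds a few defects of its own, the defect rate ratchets upwards.

```python
# Toy model of compounding defects across model generations. Each
# generation trains on a mix of original human code and the previous
# generation's output, then adds some extra defects of its own.
# All three constants are illustrative assumptions, not measurements.
HUMAN_DEFECTS = 0.02   # assumed defect rate of human-written code
AI_SHARE = 0.6         # assumed fraction of training data that is AI output
EXTRA = 0.05           # assumed extra defects each generation introduces

defect_rate = HUMAN_DEFECTS
for generation in range(1, 6):
    learned = (1 - AI_SHARE) * HUMAN_DEFECTS + AI_SHARE * defect_rate
    defect_rate = learned + EXTRA
    print(f"generation {generation}: defect rate {defect_rate:.3f}")
```

In this sketch the defect rate climbs from 2% towards a much higher equilibrium within a handful of generations. The point is not the particular figures, but that the loop compounds quietly unless humans keep injecting verified code.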
Other issues
We should also be sceptical of claims that LLMs are capable of real innovation. They can give the impression of novelty, but in the end they are recombining text from what they have been trained on.
A separate issue relates to the morality of all this. The engineers who freely shared their code in public repositories receive no reward. As a result, engineers may be less likely to share their code in future - cutting off potential new sources of good-quality software.
What can we do about the problem?
For a start, more people need to understand the short-comings of LLMs. This will require training and education.
Secondly, any organisation that generates software using LLMs needs to take a step back and analyse whether my concerns apply to the way they are using them. Building prototypes and MVPs is a sensible and advantageous way to exploit generative AI. However, when it comes to deploying systems, there needs to be a completely different mindset. Highly skilled and experienced programmers will always be vital in any organisation serious about delivering high-quality, reliable and safe software.
Thirdly, as well as avoiding future problems, organisations like Amazon will need to perform a Millennium Bug-style audit of their systems to track down any existing issues caused by deploying degenerate software.
None of these actions will happen quickly or cheaply - but they are vital. Of course, the longer we leave it, the more difficult and expensive it will be to fix.
Summary
I have discussed how the use of LLMs in the coding domain could lead to the spread of poorly coded systems with potentially catastrophic real-world consequences as their deployment grows.
Additionally, I rate these practical risks to be greater than the theoretical risk of AGI taking over the world. That doesn’t mean AGI won’t happen, but it will not occur through the development of LLMs, at least not as currently architected.
And finally, it is worth considering the bigger picture for a moment. As more and more people come to depend on applications like ChatGPT, the creation of new and innovative human knowledge will plummet. These problems start early, in schools and universities. The following quote from a report by the Higher Education Policy Institute highlights my concerns:
The proportion of students using generative AI tools such as ChatGPT for assessments has jumped from 53% last year to 88% this year.
This is not to say that all students are engaged in plagiarism, but the risk is increasing and the potential loss of actual learning should be taken seriously. I use ChatGPT and other applications a lot for the research needed for my articles, but I'd like to think I was already good at research before these tools came along. It is hard not to wonder what it would be like to grow up with these tools available from the start.
I may be wrong, but based on my experience, I have a strong feeling that this is going to be an increasing issue. It is also one that we should take extremely seriously.
Acknowledgement
This article was stimulated by a YouTube video by Mo Bitar called “After two years of vibe coding, I’m back to coding by hand.”


