
Unleashing AI: How Far Do We Go?
Science & Technology Editor David Wiley, LVI, asks if the world needs restrictions on Artificial Intelligence.

David Wiley
Science & Technology Editor, 2025
Artwork by Leo Hellier
Senior 9
It is the phrase on everybody’s lips right now – artificial intelligence is the ultimate hot topic in the technology world, but concerns have been voiced over the degree of freedom that AI should have to operate. There is no denying the extraordinary power that AI has – anyone who’s ever used ChatGPT to get out of a homework scrape can testify to that – and it can undoubtedly be applied in numerous positive ways.
A tool called Artemis was recently developed in conjunction with human rights agencies to track down and apprehend human traffickers, and AI-powered medical systems can diagnose, and even help treat, some diseases well before they would otherwise be noticed.
What’s more, supporters of AI argue that its development is inextricably tied to the pursuit of knowledge, thanks to neural networks’ astounding power to catalogue, summarize and impart information.
The most common form of AI is a system that is trained on an enormous amount of information, and then draws on the patterns it has learned to formulate new ideas, identify trends or even converse with people, as is seen in chatbots like Copilot. However, AI is not universally embraced, and some even see it as an existential threat to humanity.
It’s not just paranoid conspiracists who fear the impact of AI – in 2023, over 1,100 people notable in technological fields signed a letter that called for a halt, of at least six months, to the development of new AI systems. Among the signatories were Tesla and SpaceX billionaire Elon Musk and Apple co-founder Steve Wozniak, showing that even the industry’s most prominent figures are seriously concerned about AI’s potential threat. The letter warns that there is now an ‘out of control race’ to develop ‘ever more powerful digital minds’ that will end up spiraling out of the control of their creators. The basic fear of the writers and signatories is that the push to continually advance AI will end with an uncontrollable model being produced that could wreak havoc in a digitally interconnected world.
However, experts often play down or even outright dismiss this fear of rogue AI as movie-driven paranoia, and a letter was published in response to the 2023 publication that stressed the positive uses of AI.
This is not the only thing that drives apprehension about AI. Another major concern for many is the threat of AI prejudice. The problem with feeding models huge amounts of data is that if bias is present in the datasets, the AI will adopt those biases and apply them to any situation it is presented with. This of course leads to problems, because AI models can end up racially or religiously biased, and thus disproportionately affect minorities in decision-making.
In 2014, software engineers at Amazon developed an AI tool to streamline the hiring process – a year later it was found that the model discriminated against women during hiring, and it had to be removed from service. This is not, unfortunately, an easy problem to solve; in order to train a completely unbiased AI model, datasets must be meticulously examined to identify and remove prejudices. This is a time-consuming and expensive process, and it also limits how comprehensive our training of AI can be, because large, real-life datasets often cannot be used if strict non-bias eligibility rules are followed.
The issues don’t stop with prejudice either; jobs will inevitably be lost if AI continues to develop and companies eventually realize that an administration job, for example, could be done much more efficiently and cheaply by an AI model. The implementation of AI in weapons could enable slaughter on an even more terrifying scale than is already possible today, and the vast amount of data required for AI raises complicated personal privacy and data protection issues.
Amidst all this doom and gloom, however, it is important that we return to the benefits of AI. There is no reason, given the construction and enforcement of proper rules and regulations, that AI cannot be a tool for the good of humanity. Development of AI in the future should be controlled in order to prevent the creation of harmful models, and individuals, corporations and countries should all work together to continually assess the risks and benefits of AI; co-operation on every scale will be required if AI is going to benefit all of us. Artificial intelligence, then, should have some degree of restriction placed upon it; but we must be careful not to strangle what could be a cornucopia for humanity.