Some key figures in the AI space want training of powerful AI systems to be suspended amid fears of a threat to humanity.
Several important figures in the AI industry have signed an open letter warning of the potential risks in the race to develop AI systems. Many companies are rushing to become the next big thing in AI, and that rush is causing a lot of concern. We are living in a kind of Wild West era for AI, and the absence of hard rules or regulations can be a recipe for disaster.
One of the signatories of this open letter is Elon Musk, the owner of Twitter, one of many who want the training of AI systems above a certain capability to be halted for at least six months. Apple co-founder Steve Wozniak and some of the researchers at DeepMind have also signed it.
This all stems from OpenAI, the company behind ChatGPT. With the recent release of GPT-4, the chatbot everyone is talking about is changing our day-to-day lives. The system can respond to questions and queries, answer advanced exam questions, and now even answer questions about objects in images.
There is a fear that the GPT models may someday become so advanced that they show signs of artificial general intelligence (AGI). Some people are also concerned that these systems may spread false or misleading information, and so they demand that regulations be put in place.
So what’s in this letter?
The letter, linked at the top of this page, comes from The Future of Life Institute and calls for development to be temporarily halted at its current level. It also raises concerns about the risks that more advanced systems might pose, given the speed at which AI technology is advancing.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research”
The Future of Life Institute is a not-for-profit organisation which says its mission is to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life”.
To summarize, the speed at which AI is growing is frightening for some people. A gold rush has started, and everyone is obsessed with becoming the next big thing in AI. This pressure leads some to disregard regulations, and the problems that may arise from developing these kinds of technologies.
There is no doubt that AI will benefit human life in many ways. The question is: why have we not created laws and regulations to ensure bad actors do not exploit this new type of technology? As with all new creations, the creation comes first and the laws and regulations follow.
But is it just old men shouting at trains?
In times gone by, there have always been individuals shouting about progress. Take the internet, for example: it attracted plenty of bad press when it first arrived. What people rarely talk about is all the laws, rules and regulations that were implemented over time to ensure that the internet is, and remains, a safe place for all its users.
This is why there is now a call for some of the big players in AI to put on the brakes for a short period of time, so that laws and regulations can be put in place to stop certain people exploiting this kind of technology.
So what do we think about all this?
Well, we do think AI is growing at an alarming rate. We love AI; we eat, breathe and live it. But there comes a time with any technology when we need to take a step back and really evaluate the direction it is going in. We don’t think anyone needs to “STOP AI” development, or even pause it for six months; we just need these big companies to be more transparent about their work. So maybe OpenAI can start being what it set out to be: “open”.
A number of proposals for regulating the technology have been put forward in the US, UK and EU. However, the UK has ruled out a dedicated regulator for AI, so it looks like we are a little bit behind here.