An AI Task Force Adviser Is Worried About AI


“AI will threaten humans in two years”

An artificial intelligence task force adviser to the UK prime minister has a stark warning: AI will threaten humans in two years.

Could AI Really Threaten Us in Two Years’ Time?

With regulation being thrown around left, right and centre, it’s no surprise that some governments are seeking advice from so-called “AI advisers”. These advisers are supposed to be professionals in the field of artificial intelligence (AI), although in some cases their expertise is debatable.

Let’s face it: generative AI art and text is a fairly new technology that has only really come about thanks to increased computing power, new technical advances and growing demand. We are in the golden age of AI, and millions of dollars are being thrown at companies to develop this kind of technology.

So Who Is This “AI Adviser”?


Matt Clifford, who chairs the government’s Advanced Research and Invention Agency (ARIA), has emphasised the need for a framework that addresses the safety and regulation of AI systems.

If you agree with some of his views and comments in this article, you can reach out to him on either Twitter or LinkedIn.

What Does This AI Adviser Have to Say on the Threats of AI?

During an interview with TalkTV, Matt Clifford said that humans have a narrow window of opportunity in which to regulate and control AI. His fear is that the technology could become too powerful over the next two years, which gives governments very little time to act.

“The near-term risks are actually pretty scary. You can use AI today to create new recipes for bioweapons or to launch large-scale cyber attacks,” said Clifford. We can’t blame Clifford for saying this, as it’s quite easy to break the guardrails on AI and get it to answer questions it’s not supposed to respond to. We even published a guide on how to get ChatGPT to answer any question. That guide was never intended for malicious use; it was more a way around the platform’s strict controls.

Clifford also went on to say: “You can have really very dangerous threats to humans that could kill many humans – not all humans – simply from where we would expect models to be in two years’ time.”

Maybe his views are a little on the extreme side, but we suspect they were meant more as a warning. Two years seems far too soon; we are thinking more like 2035. Still, it’s key that regulation is implemented in order to protect people from bad actors.

What Was the Main Focus of This Interview?

In the interview, Clifford highlighted the increasing capabilities of AI systems and the urgent need for governments to assess the risks associated with using them. He warned that if we don’t implement safety regulations soon, these systems could grow unbelievably powerful within the next two years, posing risks and threats to people in both the short and long term.

As we might have guessed, he referenced the infamous open letter signed by 350 AI experts, including OpenAI CEO Sam Altman, which essentially calls for AI to be treated as a threat on the level of nuclear weapons and pandemics. We find that a little on the extreme side.

Don’t you find it funny that the people creating this kind of technology want to be the ones regulating it? Seems a little odd, right? Shouldn’t they have built their own safeguards into the systems in the first place? We guess they skipped that day at work.

Clifford went on to explain: “The kind of existential risk that I think the letter writers were talking about is … about what happens once we effectively create a new species, an intelligence that is greater than humans.” His main concern is that if AI becomes smarter than humans, we will be doomed.

Well, here at So Artificial we have been working with AI for many years, and what people fail to understand is that AI is not AGI. These new AI technologies are not intelligent in the human sense; they are not self-aware. Sure, they use machine learning to pick up patterns from data. Maybe in the next decade or so we will actually see some form of AGI, but for the moment it’s all smoke and mirrors, and we still have to be good at prompt engineering to get a decent response from these systems.

If we look at ChatGPT, for example, think of it as more of a predictive tool, a bit like the predictive text on your phone but on a far more advanced level. It has been trained on billions of scraped text documents and basically uses machine learning and deep learning to guess the best word to put next in a sentence. Is this intelligence, or just a clever use of technology?
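To make the predictive-text comparison concrete, here’s a minimal toy sketch in Python (our own illustration, not how ChatGPT is actually built): count which word tends to follow which in a tiny corpus, then predict the most likely next word. Real models replace this simple counting with deep neural networks trained on billions of documents, but the underlying task of guessing the next word is the same.

```python
# Toy next-word predictor: a bigram frequency model.
# A deliberately simple stand-in for what large language models
# do at vastly greater scale with neural networks.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # -> "on"
print(predict_next("the"))  # -> "cat" (ties broken by first appearance)
```

No knowledge of cats or mats is involved: the model simply reproduces statistical patterns in its training text, which is exactly the point being made above.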

It’s Important That We Focus on Control and Understanding of New AI Models

Image created with the AI generative art tool Stable Diffusion.

We do, however, agree with the statement Clifford made in regard to AI models: “I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don’t.”

We have been telling people this for the past 12 months. All AI models should be open source so we know what data they are trained on and what they can do. When it comes to controlling AI models, our options are very limited. Basically, companies train AI models on millions of images, text documents or videos so that the AI can create something new based on that training data.

Once a model is trained, it is effectively left to run wild. This is where hallucinations and offensive material can start to appear. A hallucination is where the AI presents something as fact when in reality it isn’t, a bit like a bug in its memory. This is a worrying factor when AI is used to help people and the information provided needs to be accurate.

Just imagine using AI in the healthcare industry, in deciding whether someone broke the law, or in offering someone help and support. What if the system gave the user fake or incorrect information, or, worse, pushed them over the edge? Worrying, right? This is why these models need more control, and why regulation needs to come into play around how they are distributed.

We have already seen issues with fake news and deepfakes, where people’s likenesses were used in inappropriate ways. These concerns are shared by plenty of people within the community, so it’s just a matter of time before governments catch up.

The main problem regulators are coming up against is that everyone wants AI to be adopted in a positive way; the issue is coming up with rules and regulations that make the adoption of this new technology a pleasant one. No one wants a blanket ban on AI as a whole, they just want to moderate it.

So What Does This All Mean for the UK?


The same as it does for every other country in the world. It shows that governments are taking steps to ensure regulation is implemented in the early days of AI, to keep their residents safe.

There is only a limited timeframe for countries to react. If they fail to do anything soon, we could quickly be overrun with these new advanced AI tools. Sure, some of them are proving to be a great asset to everyday life, but others could become quite problematic.

Over the past few weeks we have heard of countless countries stepping in to start regulating these kinds of AI tools. Only last week Japan set out copyright rules on using generative AI tools to make art, and Australia is holding an open public debate about it too. So we are seeing this kind of regulation being widely adopted. Is it good or bad? Only time will tell.

What do you think about regulation in the AI world? Is it beneficial, needed or not needed? Let us know in the comments below.
