Now Even Australia is Considering Regulation
Let’s face it, Artificial Intelligence (AI) is growing at such an alarming rate that it’s no wonder some countries want to restrict it. Today we are hearing that Australia now has AI in its sights and is considering pushing for a ban on what it labels “high-risk” AI tools.
We have just got wind of news that Australia’s government has launched an eight-week consultation into what it sees as the risks and benefits of the mass adoption and use of AI. This smells like the first step towards regulation, and after hearing about Japan pushing regulation on AI-generated art, it’s no surprise other countries are interested in following suit.
All this comes as local regulators ask the public for their opinion on whether a ban should be imposed on what they deem “high-risk” AI platforms within Australia. The consultation will run until July 26th, and experts in the field of AI, along with academics, are expected to talk with policymakers about this type of regulation.
You can read the full details in the document “Safe and Responsible AI in Australia – Discussion Paper” by the Australian Government Department of Industry, Science and Resources.
We do applaud Australia for allowing a public discussion on this matter, so people can actually voice their concerns rather than have regulation passed without any input from the country’s residents.
In the discussion paper, “Safe and Responsible AI in Australia,” the Minister for Industry and Science, Hon. Ed Husic, has pointed out the government’s concerns about AI, generative AI in particular. Husic stated that this technology could go rogue, or that bad actors may use it to create fake news and deepfakes, and we can understand his concerns there. He also pointed out that AI chatbots might encourage suicide, which would be awful. All it takes is a few bad actors to break these systems and turn these chatbots on their head.
So What Are the Concerns of the Australian Government?
Well, to start with, the focus seems to be on the biases of AI models and the prompting required to run them. Let’s face it, when you create a really specific prompt it requires prompt engineering, and some of these prompts can be quite biased and carry certain prejudices, which has caused a lot of concern.
Another concern is that many of these AI technology advances are based outside of the US, where Australia believes regulations are more relaxed, which could pose several risks. To some extent we can agree with this; each country has its own regulations and rules.
In relation to the paper, Husic said: “Using AI safely and responsibly is a balancing act the whole world is grappling with at the moment,” adding that “The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud.” So the government is kind of stuck: AI is beneficial yet worrying at the same time, hence why some kind of regulation needs to be implemented. This is a direct attempt to stop certain AI tools from taking over.
Australia Has Previously Been Concerned About AI
These types of regulations are nothing new; ever since the release of ChatGPT, the whole world has been on its toes over the speed at which AI is growing in power. Australia is no different when it comes to regulation, and it has previously taken steps towards regulating AI. Although these were more a set of guiding rules than a firm judgement, they were still steps taken.
Interestingly though, Australia’s government has allocated $41 million in its recent Budget for developing AI technologies in the country via its National AI Centre, with the aim of promoting ethical AI and its mass adoption. Australia is not against AI, far from it; it just wants its users and creators to be more ethical around it. AI can be used for both good and bad, and certain people need to be made aware of that.
Sadly, despite its efforts, a recent paper from the National Science and Technology Council on generative AI gives some feedback on the mass adoption of ethical AI, and it wasn’t good. It states that adoption is quite low, and this could be mainly down to a lack of public trust. So perhaps this is an area the government could focus on.
AI Regulation Really Does Have Governments on Their Toes
You might think “AI is coming”, but it’s already here and is impacting our day-to-day lives as we speak. It has been for many years, even if you weren’t aware of it; think of self-driving cars, production lines, Alexa, Siri and so on. People are only worried now because they are seeing countless news articles about chatbots and AI art. These are visual, you can see them, so of course they’re going to get people talking, whereas hidden technology that you can’t see seems to just fall off the radar. That is the technology you should be worried about.
Mass adoption is already here, but with the flood of new AI tools from a wide range of developers, governments really do need to step in. All it takes is a few bad actors to fall through the cracks and we could end up in a right mess. It’s quite easy to make AI perform harmful tasks; you just need to train it that way.
Heck, even China has placed a blanket ban on OpenAI’s ChatGPT, as it would prefer to create its own version. Why? Well, why do you think? That’s the real question. We believe it’s because ChatGPT is too Americanised and gives responses geared towards that audience, which might not be suitable for Chinese users or “appropriate” for the Chinese government. Who really knows.
We are also seeing corporations prohibit employees from using generative AI tools over security concerns. So if companies are doing this, then of course governments will want to act too.
Conclusion
We have seen a huge increase in regulation over the past few months, which is no surprise given the rate at which AI technology is growing. Our main concern is that a new tool comes out every day from some fly-by-night company whose main purpose is to make money at any cost. This should be avoided; it could create so many issues, especially if regulation never gets implemented.
Think of a tool that can produce deepfakes in seconds; imagine the chaos online, imagine the exploitation. So we can understand why Australia might want to restrict some AI tools. Not all of them, of course, but yes, some of them.
Don’t get us wrong, we love AI and AI tools, and we love new technological advances, but we do want people to use this type of technology responsibly. Deepfakes can be harmful, especially when they use people’s faces without permission. Sure, it might be “funny”, but trying to pass it off as real is a problem. Taking other people’s work and profiting off it should not be encouraged either.
What are your views on this subject? Are you for regulation or not? Do you agree with Australia opening a public consultation on banning ‘high-risk’ AI tools? If not, what would you do instead?