Artificial Intelligence: How it will be regulated in Australia

By Ashleigh Cooper | 18 March 2024

Australia does not yet have a clear national plan for artificial intelligence. While there is no AI-specific regulation controlling how the technology is used, the increasing risks involved could prompt new protections in 2024.

Using AI in a responsible way means understanding the regulations now and being ready for those regulations that could change in the future. If you are creating a structure for your law firm, it's important to plan ahead. Here's what lawyers need to know.

Does the Australian Government regulate AI?

Yes, although there is no law dedicated solely to AI. Instead, AI is governed by existing legislation, including but not limited to laws protecting consumers, personal information, fair competition, and ownership of creative works, as well as anti-discrimination laws.

Australia has a voluntary framework for ensuring AI is used responsibly and fairly. However, the government wants to introduce stronger protections against the potential dangers of AI. This could involve enacting AI-specific regulations and more actively managing how AI is used.

Is there a plan to regulate AI in 2024?

Yes. In September 2023, the Australian Government agreed to amend the Privacy Act 1988 (Cth) as part of its larger effort to regulate AI. The changes will give people greater visibility of how their personal information is being used.

In June 2023, the government released a discussion paper on using AI safely and responsibly in Australia, which canvassed new laws and standards for developing and deploying AI, and mechanisms for making sure organisations comply with them.

The Bletchley Declaration

The Bletchley Declaration, signed in November 2023, is a commitment by many countries, including Australia, the EU, the UK, the US, and China, to use artificial intelligence in a safe and responsible way.

The OECD's AI Principles guide governments around the world, and Australia's AI ethics framework follows them. Many companies, such as Microsoft and Thomson Reuters, have also adopted their own AI principles aimed at ensuring that AI is created, used, and managed responsibly.

In 2021, UNESCO issued a recommendation on using artificial intelligence in a way that protects people's rights and dignity, particularly in areas such as data governance, social welfare, and the environment.

Most of the attention on ethical AI is focused on generative AI, which became well-known through ChatGPT's capabilities. Generative AI is a powerful tool that uses large language models trained on massive amounts of data to quickly generate answers to questions.

AI is saving professionals time, allowing them to concentrate on higher-value tasks. However, governing AI raises some important risks, including privacy, data security, copyright infringement, bias, and hallucinations.

We need to make sure that AI helps people and doesn't control them.

In the 2023/2024 budget, the government committed $41.2 million to support the responsible use of AI. In November 2023, the government also announced it would work with Microsoft as part of Microsoft's planned $5 billion investment in Australia.

What are the moral questions for artificial intelligence in Australia?

In 2019, Australia published eight AI Ethics Principles, based on the OECD's principles, for using artificial intelligence fairly and ethically. These voluntary principles encourage reliable and fair use of AI and aim to reduce risks for everyone.

In July 2023, the Australian Government established the Artificial Intelligence (AI) in Government Taskforce, which is focused on ensuring that AI is used safely and responsibly by the Australian Public Service (APS). The taskforce is co-led by the DTA and the Department of Industry, Science and Resources.

The government has also examined the risks of using AI in the Australian Public Service, and in October 2023 it released a report containing guidelines for using AI in public services.

What are the potential dangers of AI in Australia?

The Australian Human Rights Commission has identified four main concerns raised by AI: invasion of privacy, algorithmic discrimination, automation bias, and the spread of misinformation.

The Australian Government continues to back the safe and responsible use of artificial intelligence across both the public and private sectors. It has also been urged to appoint an AI Commissioner: a separate government office that would help government and business comply with the rules governing AI technology.

Ashleigh Cooper is currently a Law Clerk and studying a Bachelor of Laws and Bachelor of Business at the University of Southern Queensland (USQ).