27 Jan 2026
Using AI for workplace legal advice can be risky. This article explains the limits of AI and why human legal expertise is essential.
Artificial Intelligence (AI) tools have become a common place for people to seek quick answers, including answers about their rights at work. At the Working Women’s Centre South Australia, we are seeing a growing number of clients who have used AI platforms for employment law advice before contacting a legal service.
Chatbots such as ChatGPT, Google Gemini, and Microsoft Copilot may be useful for general information, but they should not be relied on for accurate legal advice.
While AI tools may appear helpful, the reality is far more complex — and creates risks for workers.
Why Caution Is Essential in Employment Law Contexts
AI tools do not “understand” the law. They generate responses by predicting which words statistically fit together based on vast amounts of existing data. They can sound confident and persuasive — but they do not know whether what they are saying is correct, current, or appropriate to an individual’s circumstances.
In employment law, small details matter. An incorrect assumption about notice periods, unfair dismissal eligibility, workplace rights, or time limits can result in workers missing critical opportunities to protect themselves.
When it comes to employment law, the information these tools provide is often outdated or incorrect.
For example, the answers an AI chatbot gives to an employment question in South Australia might draw on laws from New South Wales, Queensland, or even overseas. That information will not help your case.
Even if you ask the chatbot to refer only to South Australian law, there is a very good chance that it will ‘hallucinate’. There are a growing number of instances of AI inventing cases and laws that do not exist. The South Australian Employment Tribunal recently issued this warning: https://www.saet.sa.gov.au/2025/10/16/referral-to-the-lpcc-for-the-citation-of-fake-cases/
For this reason, we strongly recommend that you do not use AI chatbots for legal advice or to draft applications or submissions. Doing so may harm your case rather than help it.
The Working Women’s Centre wants to hear your story. Please do not provide us with AI-generated information or opinions, as this does not assist us in giving you accurate legal advice.
Employment Law Is Context-Specific
Employment law depends heavily on individual circumstances, such as the type of employment, the length of service, and the terms of any applicable award, agreement, or contract.
AI platforms cannot assess these nuances. The information they provide is general, not personalised — and that distinction matters enormously when workers are deciding whether to challenge a dismissal, raise a complaint, or take legal action.
AI cannot assess risk, apply discretion, or advocate for fairness. Importantly, it cannot challenge power imbalances or hold employers accountable. It does not understand trauma, discrimination, or the lived realities of working women, yet it often presents information with confidence, creating a false sense of certainty.
AI Can Sound Confident — Even When It Is Wrong
AI tools are designed to sound convincing, even when the information they provide is incomplete, outdated, or legally incorrect. One of the key risks associated with AI is automation bias: the tendency to trust information simply because it appears authoritative or confident.
A report by the Australian Human Rights Commission, The Need for Human Rights-centred Artificial Intelligence, highlights why a cautious, human-rights-focused approach to AI is essential — particularly in areas such as employment, discrimination, and access to justice.
The Commission makes it clear that AI should support, not replace, human decision-making — especially where decisions have serious legal and human consequences, such as employment and dismissal.
AI does not understand context, emotion, trauma, or power dynamics. It cannot recognise when someone is vulnerable, experiencing workplace abuse, or afraid of retaliation.
Bias and Discrimination Risks Are Real
The Australian Human Rights Commission also warns that AI systems can reproduce and amplify existing biases, including gender bias and discrimination against marginalised groups. In employment settings, this is particularly concerning.
AI systems trained on biased data may reproduce and amplify those biases in the information they generate.
For women, migrants, people with disability, and workers in insecure employment, this can deepen existing power imbalances instead of addressing them.
Large language models can also invent cases that do not exist, misstate time limits, and present outdated law as if it were current.
What to Do Instead
At the Working Women’s Centre SA, our work is grounded in human-centred, trauma-informed, and rights-based legal support. We provide free, confidential employment law advice to women and gender-diverse workers.
Unlike AI tools, our lawyers listen to your story, assess risk in your individual circumstances, apply discretion, and advocate for fairness.
If you are experiencing a workplace issue, or you are unsure whether something in your workplace is unfair or unlawful, the safest step is to contact a legal service directly so you can make informed decisions.
Your rights are too important to leave to an algorithm.