Australia’s controversial algorithm used to determine funding for aged care is facing mounting backlash, with complaints from patients, assessors and advocacy groups prompting an official investigation and intensifying scrutiny of the government’s reform agenda.
The system, known as the Integrated Assessment Tool (IAT), was introduced in November 2025 as part of sweeping changes to the country’s aged care sector. It relies on an algorithm to evaluate an individual’s needs and assign a funding level for in-home support services. However, the tool is increasingly seen by critics as flawed, opaque and potentially harmful, with reports suggesting that it has reduced care support for some elderly Australians despite worsening health conditions.
Concerns have emerged from both within and outside the system. Assessors tasked with evaluating patients say their professional judgment has effectively been sidelined, as they are required to accept the algorithm’s output without the ability to override it. This has led to frustration among healthcare professionals, some of whom describe their role as reduced to simply entering data rather than making informed clinical decisions.
Patients and their families have also raised alarm, with several reporting that reassessments under the new system resulted in lower funding allocations even when care needs had increased. In some cases, individuals have reportedly avoided requesting reassessments out of fear that their support could be further reduced.
The scale of discontent is reflected in the surge of review requests. Government data indicates that hundreds of people have already sought reassessment of their cases since the algorithm’s rollout, a sharp increase compared to previous years. Critics argue that this spike highlights systemic issues rather than isolated errors, with some alleging that the tool is designed to curb public spending on aged care rather than improve outcomes.
The controversy has drawn comparisons to past policy failures involving automated decision-making, with some experts warning that removing human oversight from such critical assessments risks serious consequences for vulnerable populations. Lawmakers and advocacy groups have questioned the legal and ethical basis for preventing assessors from overriding the algorithm, especially when professional judgment suggests a different outcome.
In response to the growing outcry, the Commonwealth Ombudsman has launched an investigation into the system. The probe follows a wave of complaints from affected individuals, healthcare workers and political figures, and is expected to examine whether the algorithm is delivering fair and appropriate outcomes.
The government, however, has defended the tool, maintaining that it is intended to bring greater consistency and transparency to the assessment process. Officials argue that the previous system allowed inconsistencies in funding allocation and that the algorithm helps standardise decisions across cases. Despite these assurances, critics remain unconvinced, warning that a one-size-fits-all approach cannot adequately account for the complexities of individual health conditions.
The controversy comes at a time when Australia is exploring broader use of artificial intelligence in aged care, with experts divided over its potential benefits and risks. While some believe technology could improve efficiency and quality of care, others caution that inadequate safeguards could lead to unintended harm, particularly for those most in need of support.
As the investigation unfolds, the future of the algorithm, and the broader direction of aged care reforms, remains uncertain. For many stakeholders, the debate underscores a fundamental question: whether critical care decisions should rely on automated systems or retain a central role for human judgment.