Arnotts Technology Lawyers

Justice Perry and Sonya Campbell of the Federal Court of Australia recently prepared a speech on artificial intelligence (AI) and automated decision-making.

In their speech, Justice Perry and Sonya Campbell highlighted the shortcomings of automated decision-making by reference to three issues: bias, review mechanisms, and discretionary decision-making.

Bias

The speakers first noted the inherent bias in automated decision-making. The primary concern is that AI tools are designed by computer scientists, not lawyers. Without a fundamental understanding of how the law operates, computer scientists can inadvertently build their own unconscious assumptions into decision-making systems, producing decisions that appear objective but in fact amount to unlawful discrimination. Because AI relies on historical data to learn and structure its responses, it cannot easily adapt to modern circumstances; much of the law it would draw on was written long ago, and training on such historical data is likely to compound the problem of bias. Although the examples do not come directly from the legal system, the speakers pointed to Amazon's recruitment tool, which favoured male applicants, and the New York Police Department's data-sharing problems as illustrations of bias in automated decision-making.

Review mechanisms

Secondly, the speakers observed that judicial review is reactive and ad hoc. For AI decision-making to be effective, a system must recognise patterns and apply them pre-emptively to future cases; but a system that relies only on past data may be less effective at resolving novel problems. For that reason, the suitability of AI in an administrative decision-making context must be determined as early as possible, at the design stage rather than on review after the fact. The speakers emphasised that legal practitioners and judges must be integral to the design of such systems to ensure that the resulting programs are effective.

Discretionary decision-making

The last point of contention was whether automated decision-making can be discretionary. This is a difficult question, because AI decision-making relies on logic and pre-determined factors: if an unanticipated element arises in a case, it may defeat a program's ability to reach a decision. Similarly, AI cannot assess the subjective elements of a case. For example, in a divorce hearing, a decision may benefit the divorcing parties while heavily affecting a third party such as a child. This highlights that automated decision-making cannot replace its human counterpart; at most, it should serve as an extension of human decision-making, speeding up the process or reducing errors. Without innately human characteristics such as empathy, adaptability, and compassion, AI is unlikely to displace human decision-makers.

For the full text of the speech, see here.