Government, We Need to Talk About AI
The Federal Government is already arriving late to the international table. Regulation of AI needs to be prioritised; inaction is not an option.
Among the post-election commentary of the last month, the regulation of artificial intelligence (AI), one of the most challenging tasks the new government will face, has gone largely unnoticed or, at least, unmentioned. The day before the Australian federal election was the deadline for a consultation launched by the Morrison government on the regulation of the use of AI by private actors and public authorities. In many respects, the Federal Government was already late to the regulatory party. For years, the US, Europe, and China have been developing new approaches to promote and control the use of AI in ways that align with the interests of their societies. Australia's executive must make this a priority.
Opting for paralysis, for fear of stifling innovation, could leave Australia with the worst of both worlds: an inadequately regulated system that deters development in the medium to long term while leaving Australia and Australians unprotected from a multitude of possible risks and harms, including to human rights.
Why regulate AI?
There is a general consensus among human rights experts that States need to regulate the risks artificial intelligence may pose to democracy and to civil and political rights. But how does AI pose a risk to human rights? Think here of a government system that automatically recommends granting or denying refugee status based on the data it can access about countries and applicants. On the one hand, a well-designed AI-driven decision support system could guide an official reviewing such an application and, for example, reduce processing time for the applicant. It might give the official information about the applicant's country of origin and provide insights from other similar applications. Yet, while the goal might be for the applicant to be treated similarly to past applicants in the system, can the AI system identify all relevant information pertinent to the applicant's background? Can it assess "reasonable fear", a key element in qualifying for refugee status? Will it adequately consider the relevance of the applicant's multiple identities (sex, gender identity, disability status, etc.) when making a determination? Perhaps yes, but perhaps not. Above all, and possibly the greatest challenge with such a use of AI: if the system is designed to treat people similarly, how can the human decision-maker justify their decision when the system does not explain how it reached a particular outcome? And when can the official make a final determination that differs from the one recommended by the AI?
To offer another example, imagine an AI-driven university admissions system that determines which applicants will be accepted. Such a system might favour applicants assessed as more likely to complete their degree quickly, possibly selecting fewer students from rural areas, or fewer women, who may take a study break for carer responsibilities, ultimately leading to a shortage of qualified rural or female graduates. In our research, we have evaluated the grave implications of these systems for human rights, stressing how they can be particularly harmful to women's rights.
Where to begin with regulating AI?
One notable challenge is that, at this stage, there is no international agreement on how to deal with AI. The first international instrument with some global support is the UNESCO "Recommendation on the Ethics of Artificial Intelligence", adopted in late 2021. Yet it is simply a recommendation and refers only to ethical questions that governments need to consider. A legally binding treaty is also under development at the Council of Europe (under consideration by its Committee of Ministers and possibly open to Australia, despite the organisation's name), framed as a "transversal legal instrument to regulate the design, development, and use of artificial intelligence systems." Both could serve as a guide for Australia, but in our view, the discussion must start in the domestic sphere.
In a similar vein, the main regulatory approaches in the EU, the USA (at federal and state levels), and China each differ. These regulations pursue rules for AI that accommodate domestic interests, and the Albanese government cannot automatically apply them in the Australian context. Rather, their appropriateness to Australia's needs should be carefully considered, especially because compatibility with the regulations of key partners (namely the USA and the EU) could be essential to guarantee the viability of a nascent domestic AI industry and the protection of the basic interests and rights of Australian citizens.
What to consider when regulating, from a human rights perspective?
An orderly deployment of AI can be an opportunity for a fairer and more efficient society. Our concerns lie with the everyday normalisation of AI, because this usage will affect the greatest number of people across the largest number of countries. Moreover, its implications are likely to be felt differently by different groups of rightsholders. Daily usage of AI, from AI-driven voice assistants in our homes to more complex systems designed to allocate public services, and the daily risks involved have both evident and less evident implications for human rights, which demand the type of higher-principle inquiry we undertake here.
Australia can be a leader in identifying and responding to the human rights implications of AI. To do so, there are a number of key questions to grapple with. In our view, three are the most pertinent: how is AI different as a challenge to human rights? What are the risks of its implementation? And is it reasonable to assume that regulation is a way (perhaps the right way) to address them?
José-Miguel Bello y Villarino is a Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society and at the Law School of the University of Sydney, and a member (on leave) of the Spanish diplomatic corps.
Dr Ramona Vijeyarasa is an academic at the Law School of the University of Technology Sydney and the winner of the 2022 Women in AI Australia-New Zealand award in the law category.
They are the authors of the recently published article "International Human Rights, Artificial Intelligence, and the Challenge for the Pondering State: Time to Regulate?" in the Nordic Journal of Human Rights.
This article is published under a Creative Commons Licence and may be republished with attribution.