Killer robots and unpredictability: the new face of weapons regulation
Transcript
(This transcript is partly AI-generated and may contain inaccuracies.)
Lauren
There is no internationally accepted definition of what a lethal autonomous weapon system is...
Zena
With the current hype of technology, there is an illusion of a lack of human control, and that is far from the case.
National Security Podcast
You’re listening to The National Security Podcast, the show that brings you expert analysis, insights and opinion on the national security challenges facing Australia and the Indo-Pacific. Produced by the ANU National Security College.
Danielle
Welcome to the National Security Podcast. I'm Danielle Ireland-Piper, Academic Director and Associate Professor at the ANU National Security College. Today's podcast is being recorded on the lands of the Ngunnawal and Ngambri people. I pay my respects to their elders past and present. Today we're considering the issue of weapons regulation, and what that means in the age of artificial intelligence and autonomous systems. To that end, I am very pleased to be joined here today by two experts: Dr Lauren Sanders, who is an Adjunct Associate Professor at the School of Law at the University of Queensland, and Dr Zena Assaad, who is a Senior Research Fellow with the School of Engineering at the Australian National University. Welcome both.
Zena
Thank you.
Lauren
Thanks for having me.
Danielle
Great to have you here. So let’s dive in. But before we get into the detail of what the key issues around weapons regulation are, I'm really interested to ask about the specific angles and expertise that you both bring to this topic, and what drew you to this research and to the practice in this area, in this way. So I'll kick off with Zena.
Zena
So my background is actually in aerospace engineering, and when I was doing my PhD I was also working in industry at the same time. I was working specifically as an assistant safety engineer on the regulation of emerging aviation technologies. So at the time, this was when drones were starting to become a lot more prominent, and that's really what started to draw me to the regulation of emerging technologies, specifically in safety-critical domains, because, as you know, aviation is a safety-critical domain, and I also had quite a defence focus in the aviation work that I was doing. That was my original kind of direction towards this area, but since becoming an academic, my continued interest has really been with the further advancement of technology and the different kinds of safety concerns that we're seeing with it. So at the time when I was working in industry, we were starting to get a little bit of a taste and a flavour for some of those different and broader safety concerns, things like trust and privacy, which we didn't really see traditionally. But now, in my academic career, now that we really do have the autonomy and AI craze abuzz, it's definitely been a lot more prominent and it's directed a lot of my research.
Danielle
That's wonderful. And it's so good to have a scientist here with a bunch of lawyers. So that brings me to Lauren. Lauren, in addition to having a PhD in law, you've got a number of other qualifications and also spent 20 years serving in the Australian Defence Force in the Army. What brought you to be interested in the particular issue of weapons regulation?
Lauren
Well, I think during my practice in defence, I spent a lot of time thinking about the regulation of weapons during situations of armed conflict. But the last three years I've spent at the University of Queensland with the Law and Future of War project, which is actually where I've had quite a bit to do with Zena through Trusted Autonomous Systems, where we were specifically looking at the regulation of autonomous weapons systems and the emerging legal debate about whether lethal autonomous weapons systems need to be further regulated at international law. I think having that practical experience, but then these last three years to think more deeply about the specific issues associated with autonomy, has allowed me to zoom out a little bit and think about weapons regulation more broadly. So with that Law and Future of War work, I focused on export controls as they relate to autonomous weapons, but I've also been doing a little bit of work in the disarmament space, which is another line of regulation that I think we don't often connect with issues of use of these kinds of emerging technologies.
Danielle
So before we move on to drill down into that, Lauren, can you spend a moment explaining what we mean when we say lethal autonomous weapons?
Lauren
Sure. I think it's probably best to start by saying there is no internationally accepted definition of what a lethal autonomous weapon system is. But what is colloquially accepted, or accepted as a general starting point for discussion internationally, is a weapon system that can independently identify and strike a target with a lethal effect. So it effectively covers the entirety of the targeting cycle, as opposed to something that is just being used for the intelligence, surveillance and reconnaissance aspect of targeting, or for the actual loitering strike end. It's something that is capable, from start to finish, of identifying, selecting and engaging a target.
Danielle
Yeah, great. And we'll come back to the reason of why we bother regulating weapons in the first place a bit later. But let's go to Zena now. So much of your research, as you just indicated, focuses on safety and assurance of robotics when we're looking at autonomous systems and weapons. So in your view, what are the implications here for defence and how we manage the use of weapons?
Zena
There are a few things here, and I think one of the main ones is that the way that we are looking at safety has shifted. So autonomous and AI-enabled systems have created what I would consider unique safety challenges, and the main reason for those unique safety challenges is really the level of uncertainty that comes with these systems. So traditionally speaking, technical systems that have been implemented in a defence context have had very deterministic outputs. So that means the output is generally known.
Danielle
We know what's gonna happen.
Zena
Yeah, we know what's gonna happen. It's generally known. There are small levels of unpredictability. So any single technical system that is designed or implemented by humans is going to have some level of unpredictability. That's just how it is. Any system can be made to behave unpredictably under given circumstances. Think about your phone, your car, anything. There are always gonna be situations or circumstances where they behave unpredictably. That's common. But what autonomous and AI enabled systems do is they increase that bandwidth a little bit more. And that level of uncertainty is a little bit more inherent in these systems.
Danielle
And so when we're talking about uncertainty on a practical level, are we just not sure what it is the weapon will actually do physically in a given circumstance? Or is it more a problem with the weapon, for example, being able to distinguish between a civilian or a combatant, for example?
Zena
So the second example you gave I would consider to be an error of a system rather than uncertainty.
Danielle
An unpredictability.
Zena
Yeah, and so these are some of the unique safety challenges that come out. So when we talk about unpredictability, what we're saying is these systems are what we deem intelligent systems or complex systems, right? They're able to produce outputs that are different to what we expected them to produce. And what we call these are optimised outputs. So they're able to improve their behaviour over time. So the great example I really like to give is a drone avoiding a bird. You know that the drone is gonna go either up, down, left, or right at a very simplistic level to avoid this bird. You just don't know which one it's going to be. So that is what we mean when we say that there's kind of like a boundary around that unpredictability. And with these systems, that boundary is getting a little bit bigger and bigger. Now I do want to emphasise that that doesn't mean these systems can operate independently or have a mind of their own. That's absolutely not the case. It just means that the risk threshold looks a bit different and that becomes more challenging to navigate and to regulate.
Danielle
Right, because that would be a common misconception, I imagine, that we would generally assume that it means that they're operating independently. Can you explain the sort of science around that a little bit?
Zena
So public narratives are a huge part of this problem. You know, we see a lot around, like, the term killer robots is a huge one. I absolutely hate that term because it's so deliberately controversial. But what it's done is it's planted a seed in people's minds, and that seed has kind of developed a mind of its own in terms of how people perceive these particular systems to operate. So the way that they operate, at a basic level, is that they have objective functions and they need to meet these objective functions. And at an algorithmic level, they are able to learn, based on previous data, how to optimise their outputs to achieve that function. So the purpose of the system is never going to change. If we go back to that drone example, if its purpose is to avoid the bird, that's what it's going to do, it's going to avoid the bird. It may be unpredictable in how it chooses to do that, I don't know if it's going to go up, down, right or left, but at the end of the day, its purpose and its function have remained the same. And it's the same with these systems.
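To make that drone example concrete, here is a minimal, hypothetical sketch in Python (not from the conversation itself): the objective, avoiding the bird, is fixed by the system's designers, but the particular manoeuvre the optimiser settles on can vary from run to run, which is the bounded unpredictability Zena describes. The function names and the noise term are illustrative assumptions only.

# A minimal, hypothetical illustration of bounded unpredictability:
# the objective (avoid the bird) is fixed, but the chosen manoeuvre can vary.
import random

ACTIONS = ["up", "down", "left", "right"]  # the bounded set of possible manoeuvres

def avoidance_cost(action: str, bird_direction: str) -> float:
    """Lower cost means a safer manoeuvre; flying towards the bird is heavily penalised."""
    cost = 1.0 if action == bird_direction else 0.0
    # Small random term standing in for sensor noise and learned preferences
    # that differ between runs; this is what makes the choice unpredictable
    # without ever changing the objective.
    return cost + random.uniform(0.0, 0.3)

def choose_manoeuvre(bird_direction: str) -> str:
    """Optimise the fixed objective: pick the lowest-cost (safest) manoeuvre."""
    return min(ACTIONS, key=lambda a: avoidance_cost(a, bird_direction))

if __name__ == "__main__":
    for run in range(3):
        print(f"run {run}: bird to the left -> drone goes {choose_manoeuvre('left')}")

Running this a few times will typically print a different manoeuvre each time, yet never "left": the behaviour is unpredictable only within the boundary the designers set, which is the distinction between optimised outputs and a system with "a mind of its own".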
Danielle
So in a way, the implications for defence are around planning in some ways?
Zena
They're around planning, and they're also around tempering expectations. So it's really hard to have these conversations and to navigate the safety challenges when you're trying to meet two things. One, you're trying to actually regulate the safety of these systems, and two, you're trying to do that against people's expectations. And the reason that's important is because people's expectations dictate how they operate and how they choose to interact with a particular system. And so you may behave in a way that you didn't otherwise intend to, which may create an unsafe environment, based on a perception that you had about that technology. And so monitoring and trying to temper people's expectations is a huge part of the equation.
Danielle
It's a huge part of national security generally, isn't it? When we're thinking about a wide variety of national security risks, there's not only the risk itself, but also people's perception of that risk and trust in the institutions that are guarding against the risk. So that makes sense. Lauren, when we're talking about weapons regulation there are a number of bodies of law that might be relevant, but of course, the area in which you're an expert and the most relevant area is that of international humanitarian law, which, talking about public perceptions, is commonly misunderstood as being human rights law, but it's a different body of law. It's also known as the law of armed conflict. And of course it arose out of a merchant, I believe, witnessing the horror of battle somewhere in France or Switzerland. And from there we have a body of law that recognises that even when wars themselves are unlawful or we're ostensibly in a state of armed conflict, there are still rules that apply. Can you talk us through how that works in relation to weapons? One might say, well, we're using weapons, they kill people. What's the point of regulating them?
Lauren
Sure. So I think the tale of the Battle of Solferino and Henry Dunant's observation of combatants dying in fairly horrific ways at that battle was the spark that created the International Committee of the Red Cross, which is one of the main keepers of international humanitarian law, or the law of armed conflict. But effectively, the principle of the body of law is, in a sense, pragmatic, because public international law has already failed to an extent if we find ourselves in a situation of armed conflict: since at least the United Nations Charter, under Article 2(4), it's unlawful to start a war of aggression to try and obtain the territory of another. So if we're in a situation of armed conflict, there's been some sort of a failure of law. But this is the law of last resort, which says that even in those contexts, even wars have limits, which means that civilians in particular should be protected to the extent possible from the horrors of war, and that even combatants have certain rights, but also obligations that come from those rights, when they are conducting themselves in armed conflict. So as that applies to weapons, it really creates a number of prohibitions and limitations on the way in which they can be used.
Danielle
And so back in the day, one of the examples was not using a dum-dum bullet, for example, which expands within a body and therefore can cause more harm to a combatant who might otherwise survive as a prisoner of war or be sent home. When we're talking about autonomous weapons, what are the issues at play there? It's quite easy to understand why you would use a less harmful bullet, but what are the issues at play when we're talking about autonomous systems?
Lauren
Mm-hmm. There are a few, but I think probably the two most controversial are, first, the suggestion that a machine would have the capacity to make a decision on behalf of a human about who should live and die. A lot of the rules that relate to the use of force during situations of armed conflict require contextual judgment and consideration based on a wide variety of factors, but also balancing very difficult concepts, in particular under the principle of proportionality, in terms of a direct and concrete military advantage balanced against potential harm to civilians and civilian objects. So one of the main criticisms or critiques of lethal autonomous weapon systems is that they should never be able to make those decisions, and in any event they're not capable of making those decisions now. And then the second issue is really focused on distinction, which is the ability, you referenced it earlier, to distinguish between combatants and civilians, those who are lawfully able to be targeted. So not just combatants, but using that as a shorthand for the group of people against whom lethal force could be used, versus civilians and civilian objects, who are specially protected by the laws of armed conflict. And again, using a machine to make that context-specific assessment in what is a contested, complex, difficult environment, as situations of armed conflict are, is, one, potentially not technically feasible at this stage in relation to some of the more difficult circumstances of armed conflict, and two, highly contextual and, as a consequence, something that many people suggest should not be left to machines or predictive algorithms to determine.
Danielle
If I could be a bit of a devil's advocate there: we know that sometimes humans don't make good or moral decisions and can commit war crimes, for example. Is there an argument that if we use autonomous systems, they can be pre-programmed to follow the laws of armed conflict? Or is it ultimately that you always need that contextual decision to determine whether, for example, a child is posing a threat or not, whether they're armed or not armed?
Lauren
Mm-hmm. Absolutely. I mean, most proponents of lethal autonomous weapons systems will say that there are circumstances where it is better to have the human out of the loop. So the assessment as to what's targetable is programmed prior to the event starting, and for those specific targetable objects which will always be targetable, you know, a tank is always going to be a tank, it's going to be more precise. If you're using these kinds of precision strikes, you're going to have better protection of civilians and your own combat forces because of that standoff capability. So there are certainly pluses in relation to the use of these capabilities. Likewise, the use of decision support systems using autonomy and AI, which are connected into the system, although we're getting a little bit further away from the strict definition of a lethal autonomous weapon system. But that's probably more where you hear that argument being used: to say that we have a list of people we need to make assessments against as to whether they're targetable or not, we have a certain list of criteria or categorisations for those people, and computers are better at doing that without emotion. And the counter to that counter, of course, is that the way it's integrated into the targeting system is where it really matters, and there still need to be levels of human oversight to make sure the way that assessment is being calibrated suits the circumstances. We can probably touch on recent uses of decision support systems by the IDF a bit later if you want to talk to that issue a bit more.
Danielle
Yeah, no, that sounds good. Before we do though, I know you also do a little bit of work on export controls. How does that play into weapons?
Lauren
Well, there are probably two ways I think of export controls. The first is to give effect to the broader international project of disarmament, to say that as a global community, we've decided there are certain kinds of weapons that we just don't want to be handed around without strict controls on them. So nuclear weapons, for example, are only able to be in the hands of certain states as a result of a number of international agreements, and as a consequence, there are very strict export controls on any components that might go towards building nuclear weapons in non-nuclear-weapon states. And the other is in relation to a nation's domestic interests, or its national security interests, in preventing the types of weapons that it wants to use from getting into the hands of those who aren't its allies. So it's more a strategic control, as compared to necessarily a broader disarmament objective.
Danielle
So in essence, it's sort of rules about who we can sell to and what we can sell them. We'll come back to some practical examples around that in a moment, but just moving back to Zena now: in your area of research and study, and a lot of the really important work you're doing, if you had three things that you would recommend to decision and policy makers around defence and weapons systems regulation, what would they be?
Zena
I think the first one for me, touching on a point I made earlier, would be to understand technology properly and to not buy into the hype of technology. With the current hype of technology, there is a very strong illusion of a lack of human control, and that's far from the case.
Danielle
We all watched The Terminator too often, right?
Zena
Exactly, I think so. And you know, people talk a lot about an accountability gap, and it's not something that I subscribe to. I don't believe there's an accountability gap. Even if we got to a point of weapon systems being able to determine and engage a target completely on their own without any human intervention, there was still a lot of human intervention prior to that point, right? Like, we designed that system, we inputted the data, we put in the descriptions of what a target looks like, how we're characterising it. There are huge, huge amounts of decision making, human decision making and human agency, in that equation. So I would say there is no such thing as no human control or no human decision making. So: understand technology properly. The second one would be to lean into existing policy and regulation. There's also a misconception that we have to start from scratch and build brand new measures for these kinds of systems, and I don't believe that's true at all. We have a huge legacy of existing policy, regulations and laws.
Danielle
I feel that a lot about the law!
Zena
Yeah.
Danielle
We try and recreate the wheel for micro situations, but there are general principles that remain applicable, like in space, for example.
Zena
Exactly. I do think there's some that need to be amended and edited for sure.
Danielle
To keep up with the times.
Zena
Yeah. But I don't think that they need to be redone from scratch, reinventing the wheel. Not at all. And then the last one would be, again, one of the points that I made earlier, and it's separating public perceptions and narratives from the decisions that we make. Because I work in the safety and regulation space a lot, I know that meeting public perceptions and public expectations is a huge part of the equation. But because of the sweeping narratives around AI in particular, public perception has been warped to a point where it's not technically accurate. And at some point, we do need to put a line in the sand and say, well, these are public expectations, we can't back them with evidence, they're not technically accurate, so how do we create policy, regulation and law that is actually aligned with technical accuracy as opposed to what is really fear? Like when you use the term killer robot: it's deliberately controversial, it's fearmongering, and it's not an accurate representation of how we're seeing this technology in weapons systems.
Danielle
And I guess the risk then is that we're trying to fix a problem that we don't have as opposed to fixing problems that we do have.
Zena
Absolutely, because public expectation is huge, especially for public policy and public regulation. If you put something forward and it doesn't meet public expectation, the backlash is huge. And so tempering those expectations is a big part of the equation. And so I think there's probably a public education piece that comes into that as well.
Danielle
That makes sense. And what about you, Lauren? If you had your way…
Lauren
Well, conveniently, most of mine really echo what Zena had to say, because we've been talking about this for a number of years. But in relation to that, I think the first point would be to do a proper gap analysis of what existing laws do or don't cover in respect of these technologies, building on Zena's point about understanding the tech properly, and also the point that we don't need to start again from scratch, because when you do that, you can really narrow down the small areas where you need to tweak existing legal policy or the application of existing legal principles. And I point to the work that Zena has assisted us with in the last few years in relation to the legal review of autonomous weapons systems: what exactly is the deficit, or what exactly is the change in the way that you're going to approach an existing assurance mechanism to address this new technology? And the answer is there's not a lot that is very specific to the technology. The second is that Australia has a great system, which it put forward as part of the international debate in relation to lethal autonomous weapons systems, called the system of control. What it does is explain how Australia has layers of control over the use of force by any kind of weapons system across the capability life cycle of its weapons systems. So it starts with processes around acquisition, then processes around assessment of legal compliance before purchase, training models, and rules of engagement during application in situations of armed conflict. So pretty much a cradle-to-grave assurance process in relation to any use of force by the Australian Defence Force. I think building that out and putting some more detail around that policy as it applies to lethal autonomous weapons systems would be really useful for future acquisitions and uses of these types of technologies, also allowing a clear demonstration of the multiple layers of assurance that surround these technologies and enable their lawful use. And finally, in areas like export control and policies around the use of lethal autonomous weapons systems, I think increased transparency is going to be really important going forward. And this goes to Zena's point about education and countering narratives of killer robots, for example. Australia has a broad and very considered approach to the assurance systems that support the use and acquisition of these kinds of technologies, and I think being more transparent about those controls would assist in countering some of those unhelpful and not necessarily accurate narratives.
Danielle
So it sounds like overall there's a public literacy issue here. Absolutely. Lauren, you mentioned before examples of concerns around the use of weapons by the IDF. Did you want to speak to that?
Lauren
Sure. I think recently there's been an enhanced focus on decision support systems and on using artificial intelligence to create target lists. There's the Lavender system that the IDF used for a number of strikes in its Gaza campaign, where there was concern about who was being struck based on that list and the level of collateral damage that was accepted in relation to those particular strikes. And I think what is interesting, when you look into the detail about how that system was used, is that it's not necessarily that the targeting system, the decision support system, was incapable of being used lawfully. It's that the methodology of adopting it and inserting it into the existing targeting cycle didn't have sufficient oversight to check that it was calibrated properly, to say, for example, that one low-level Hamas leader is the equivalent of seven civilian deaths. I mean, I don't know the numbers; I don't know what it was calibrated to, by the way. So, you know, that kind of approach could feasibly be a lawful way of approaching targeting, provided those considerations are properly calibrated and it's used in a controlled and limited sense, rather than cutting and pasting the entirety of the list and then inserting it into another system that doesn't have the necessary level of human oversight or the level of controls over those calibrated lists.
Danielle
And I guess that's because, even separate from the Geneva Conventions and the plethora of provisions that are in there, there are four basic principles at what we call customary international law that dictate the use of weapons, right? And so is this issue going to the customary international law principle of proportionality?
Lauren
Yeah, absolutely. I mean, we also see it in Additional Protocol I, but I think the issue here is that instead of just accepting a big list of accepted targets that are then capable of being struck, there's still a requirement to have that context-specific assessment of what might happen on that particular strike: who else might die at that time of day when that missile is released. And I think the absence of calibrating those two things is something that needs more thought, because there certainly will be circumstances where churning out a targeting list and then immediately implementing it as a strike list is going to be appropriate. There are going to be high-end armed conflicts where those kinds of processes might be relevant, but translating that into a really heavily urbanised environment, where the target you're trying to strike is in a basement under three floors of civilians, is not necessarily the best way to use that technology, albeit that it comes with some of the benefits we talked about before in relation to better access to information, the ability to process more information than humans can, the ability to remove emotion from those decisions, and the like.
Danielle
So in one sense, like any weapon, it's less about the weapon and more about how we as humans use it. Yeah, thank you for that. Zena, in the context of sort of a follow-up question to what we talked about before in terms of your three shopping list items for decision makers and policy makers, what are you most worried or concerned about and what are you most optimistic about in this context?
Zena
I would say I'm most concerned about the level of power that technology companies have in this discussion.
Danielle
Do you mean the private sector?
Zena
Yes, yeah, I mean the private sector. So the recent UN report that was released was a really good example of that. They had people from the technology industry, people from Microsoft and OpenAI, on the committee. And I would argue that the report that was released was not as agnostic as it should have been, is what I would say. So I'm concerned about the level of power and the kind of monopoly that they currently hold in the technology environment, and the way that that's kind of seeping into these discussions and these applications. But something that I'm optimistic about is that I do think we are making strides in demonstrating how these kinds of technologies can be regulated when implemented in weapon systems, or in warfare more broadly. There's been a lot of debate about whether or not we can regulate these systems, and I do believe that we can. And I think we've made very positive strides forward to demonstrate that.
Danielle
And you've heard it here from an engineer, it is possible to regulate these systems.
Zena
It is, it's always possible to regulate them.
Danielle
It's interesting your comment about the private sector. You would see that play out in a number of different contexts. And I guess, is that because there's so much money to be made in the weapons sector? Or is it because there isn't an active enough civil society that's engaging and making submissions in this space?
Zena
It's both, and the reason that we don't have a more even and diverse technology industry is because we have the big five technology companies that are really monopolising the industry at the moment. But I do think, yeah, money is a huge part of it. You know, in the example that Lauren gave of the Lavender system, they used data that was made public, that was publicly retrieved through Google. And you know, if you knew that the things that you were subscribing to with Google were then going to be used in a weapon system, I think a lot of the public wouldn't be very comfortable with that or would have concerns about that.
Danielle
Or would have no idea.
Zena
Yeah, right. And it's the whole thing of those terms and conditions. Like I've said this before, but honestly, Instagram could own my ovaries. I have no idea because I've never read the terms and conditions.
Danielle
I'm a lawyer. I don't think I have.
Zena
Most people don't. And it's because they deliberately do it where you read, you know, 'I agree to the terms and conditions, click here to read them.' Nobody clicks there. And if you do, it's just hundreds and hundreds and hundreds of paragraphs of these things, and nobody actually reads them, because in your mind you're like, what's the worst that can happen? And well, we've kind of seen that in this situation, the worst that can happen is actually part of your data and part of-
Danielle
Is used for a weapon system.
Zena
Yeah, without realising it. And I don't believe the general public were even aware that these links could have been made. And so I think that's something that definitely concerns me: that kind of really subtle and underhanded power that we're seeing with the technology industry and how it's seeping into different industries.
Danielle
Particularly when, as you say, the literacy amongst the public isn't great, you often hear people say, well, I don't mind if people use algorithms on Facebook, I've got nothing to hide. But again, it's what you don't know about what your small piece of data is forming a bigger part of.
Zena
Yeah, it's important to remember that technology advancements are incremental over time. And a lot of the time there are tangible links between seemingly unrelated technical advancements, right? Like we've seen technical advancements in one industry that have led to strides in another industry, even though they seem unrelated on the surface. And that's something to really remember: the way that technology advances is very much interwoven. And so, well, you know, maybe you don't care, maybe you're just posting photos of your cat or something on Facebook and you're like, I don't care about the algorithm. That's great. But the algorithm is used for a lot more than just sending you cute cat videos. So it's just something to keep in mind. Maybe I will, like, I preach about this and I still haven't read the terms and conditions. So maybe I should.
Danielle
Well, we're all going to read the terms and conditions of Instagram after this. They're too long. Lauren, over to you. Did you have a reaction to anything that Zena said there or do you have your own sort of concerns and optimisms?
Lauren
Yeah! Well, I think just building off what Zena has said, I think it's kind of linked, insofar as there's been a defence exemption to the EU's AI Act, for example, and the strides that Australia has made in relation to the regulation of AI have had a similar exemption. So my concern is really in relation to that dual-use issue that Zena has spoken about, and allowing that exemption to continue in terms of viewing these as completely separate industries that you can draw a really neat line between. But I think on the positive side, there have been strides in relation to recognising the need for stronger and swifter regulation from the defence perspective as well, in terms of data usage and in terms of what concrete steps can be taken. Zena and I have recently come back from the Responsible AI in the Military Domain summit, which was held in South Korea in September this year. So in addition to having some great barbecue, we also had some great discussions about the future of regulation and some of the concrete steps that are starting to be taken, with a lot of states now developing their thoughts about how to regulate and how to link in some of these technology uses, where they have algorithms that have been born from commercial use and data that has come from what we all post on Instagram, and how that can be properly moderated, assessed and assured for safe and responsible military use.
Danielle
And of course, by dual use, we mean the fact that a particular piece of equipment or system can have both a civil and a military use, but also that the industries are not disparate in the way they're connected.
Lauren
Mm. Mm, absolutely.
Danielle
And I know I'm putting you both on the spot here with this one, but if you could recommend some additional reading for listeners who might be interested in going a bit deeper into this topic, can I ask you to do that? And please feel free to recommend your own works. Zena?
Zena
So Lauren mentioned earlier that we've been involved in Article 36 reviews, which are the weapon reviews, and that information has actually been made public. So there's a really, really fantastic report, and Lauren, you can probably give the information about where it's available. And this is coming from an academic perspective: a lot of academic writing is inaccessible, but I promise this report is not. It's actually written in a very accessible way, it's very clear, and it also talks to not only the work that we've done, but the work that is going to be done. So I would say, if you're interested in this, that would be one to definitely read.
Danielle
Great. And Lauren, where would we find that, do you think?
Lauren
So it's available online, posted on the UQ Law and Future of War website. It's the legal review of autonomous weapons systems report, and there are three iterations of it, based on what states were saying they think needs to happen in that space at the Group of Governmental Experts discussions under the auspices of the Convention on Certain Conventional Weapons, which is a mouthful in itself, we won't go for the full name of that treaty, and then the subsequent meetings that were held with a number of international experts, hosted by the Australian Defence Force and UQ last year and this year. It's also available on the April's website. So I can give you links to those for your show notes after this.
Danielle
Wonderful. Thank you so much. That gives us all something to dive further into. Did you have any final comments, Lauren?
Lauren
I was just going to say that I quite often just follow Zena's LinkedIn to see what her response is to people posting stuff about killer robots in news articles –
Zena
It gets me so mad.
Lauren
-without addressing the actual technology behind it. So if you want to follow the debate in a sort of closer to real time fashion, I would recommend following Zena on LinkedIn.
Danielle
So we're all gonna, this is our homework, we're all gonna go follow Zena on LinkedIn.
Zena
Lauren's being really nice, it's literally just me ranting, being like, this is why I think this term is stupid.
Danielle
No one will use the term killer robot anymore.
Zena
God, that's the other one, yeah, I'm so sorry. Gee, I maybe should have like tempered myself.
Danielle
We'll all go have a look at those materials. Really grateful for your time and expertise, and to be in the room with both an engineer and a lawyer talking about this really important issue that really does impact upon people's lives and upon our defence and national security systems. Zena, really grateful for your time, and I look forward to following up on more of your work. And Lauren, great to have you all the way from Queensland, and also grateful that you've been teaching into our Masters of National Security Policy course as well. Really grateful for your time on that. Thank you both. Thank you.
Lauren
Thanks for having me.
Zena
Thanks.