HxGN Radio Podcast

Responsible AI in public safety

Public safety organisations are rapidly adopting artificial intelligence (AI) solutions to improve operations and deliver a higher standard of service. But deploying AI comes with great responsibility for system governance, outcome transparency, and protection of citizens’ privacy. In this episode, Rick Zak, Director, Public Safety & Justice Solutions at Microsoft, and Jack Williams, Strategic Product Manager for AI and Analytics at Hexagon, discuss responsible AI in public safety and best practices for agencies beginning their AI journey.

JW: Hi, thank you for tuning into this episode of HxGN Radio. I’m your host, Jack Williams, and today we’ll be talking with Rick Zak of Microsoft about responsible AI and how to deploy it, some approaches, and ways in which Rick and Microsoft address this issue. So, Rick, how are you doing today?

RZ: I’m doing great, Jack, and appreciate the opportunity to work together on this podcast and share some insights for folks not only about artificial intelligence, but sort of the boundaries around it, how you deliver it in a responsible way.

JW: Yeah. I’m particularly interested in this, Rick. It’s an area I’m focused on heavily in my day to day. Will you tell us a little bit more about what you do and your role within Microsoft?

RZ: You know, Microsoft has a real commitment to public safety and has many people on different teams supporting it. My role is in our U.S. state and local government team, and I lead strategy for public safety and justice, which really for us means working at the intersection of strategy, technology, and policy.

JW: We’ve worked with you quite a bit in the past, and I can attest to all those things. Microsoft itself, Rick, you know, you look at it and you see Cognitive Services, you see Azure ML, and you see some of the projects and initiatives. I mean, they’re coming out with amazing stuff every day. And they’ve been working on A.I. and machine learning for about as long as anybody. So, I guess, can you tell me more about the work Microsoft does to make sure that, on the hot topic today, your A.I. is responsible? Take us through the journey. For example, what are some of the guidelines that Microsoft considers, or its stance, when it talks about responsible and ethical A.I.?

RZ: Well, it’s a great question, because artificial intelligence, A.I., is different than a lot of other technologies. And as you know, Microsoft for decades has been delivering technology to support organisations of every kind. A.I. is different, and it’s so connected to people and has the ability to drive outcomes without people involved, right? So, the stakes are much higher as you get to a technology like artificial intelligence. And as a result, Microsoft, alongside developing a lot of the capabilities that you mentioned going back many years, has really focused on the policy side of it and on supporting the technology with frameworks to ensure that when organisations deliver an A.I. solution, they do it in a responsible way. I mean, this started for us almost five years ago, in 2016, at the very early stages of providing artificial intelligence across so many different industries. In fact, that really drove a lot of our work on the technology side as well, because they’re so tightly coupled: artificial intelligence and the frameworks to ensure that it’s delivered appropriately. And so for Microsoft, it has been about building internal teams to review the development of artificial intelligence capabilities, having a committee inside that meets to provide guidance on appropriate uses for artificial intelligence, and having all of those things flow into the work we do, including establishing, by name, an Office of Responsible A.I. That goes back two years now. But again, the whole idea is delivering technology within a framework of base principles that ensure artificial intelligence gets delivered in a way that’s appropriate and responsible. You know, Jack, one thing that would be interesting is to talk through what those principles are around A.I. that flowed out of all of that work from Microsoft.

JW: Yes. So, from my perspective, Rick, when I think of responsibility and ethics and what that really means, you know, we can all have a stance and a policy. When I work with our A.I. group here at Hexagon, we think of responsible A.I. as being very transparent, very explainable, and very interpretable, and being cognisant of the fact that data sets can be biased to some degree. And I know that, you know, could be subjective, but it could also be imbalanced data, right? Overreporting in one area, et cetera. So can you maybe apply those things: interpretability, explainability, transparency, and bias? What’s Microsoft’s take on that? How does Microsoft approach that?

RZ: And you raise excellent points. And that bias question is an important one, because in many scenarios in A.I., you’re using data to teach a system to identify things or to prompt decisions, and it can only base that on the data it gets. So if the data isn’t representative, then you’re not going to get representative outcomes. To answer your broader question, what flowed out of Microsoft’s work on developing the policies to understand appropriate use were actually six principles that we laid out that guide our work. And they’re not Microsoft specific, right? They’re not product based. They’re really principles that can be applied in any case around A.I. And if you want, I’ll just walk through those because they do capture some of the things that you’re talking about. And for us, they put the framework down for technology and strategy.

JW: Yes.

RZ: And if I go through them, the first one we look at is fairness: that A.I. systems should treat everyone fairly, and that similar people across all dimensions should be treated in a similar way. So if we look at an example, like an A.I. system that may provide guidance on medical treatment or a loan application or employment, it should make the same recommendations for everyone who has similar symptoms for medical treatment, or similar financial circumstances, or similar professional qualifications. Fairness should take any of those biases out, so that the same inputs coming in get you the same outputs coming out. The second principle for us is really about reliability and safety, and it’s this idea that A.I. systems should operate reliably, safely, consistently. That consistency is so important under both normal and unexpected conditions. And it’s the idea that if you build a system that can only deliver responsible A.I. in a very narrow set of conditions, then you’ve set yourself up for failure. And so that reliability part and that safety and protection have to be there across all different scenarios, because in some ways it’s a different kind of bias; you have bias from circumstances. The third area that we’ve laid out, the third principle, is privacy and security. A.I. is almost about peeling layers back on data, understanding data inside of other data, and so protecting privacy and securing the important information for people or for businesses is becoming more and more important, and more complex, in a modern technology-driven society. And so if we look at privacy and data concerns, those are core to making sure that, as we gather more data and as systems are processing more and helping to make decisions, we protect the privacy and security of people’s data through that system. There’s a fourth, and I think it goes to some of the things you’re talking about. We refer to it as inclusiveness, which is that you should be building systems in a way where you’re actively identifying the barriers that could, even unintentionally, exclude people from all those things I talked about: reliability, fairness, everything else. So rather than building a system and saying, well, we haven’t run into a problem, so we’re good, the inclusiveness part is about actively seeking out the places where there could be barriers, or ways that you’re unintentionally going to exclude people. So seek out the issues and address them rather than assume that all is well.

JW: Yeah. And I think that last one is kind of the way you enact the first three, in the sense that Microsoft is the platform of choice for most businesses out there today. And we use Microsoft technologies to write software, to manage our cloud services, to create A.I. algorithms, and to leverage services like Cognitive Services. And having those guidelines, I’m assuming, Rick, and tell me if I’m wrong, means that Microsoft is proactively incorporating these principles into its products and providing tools or tips, or I would just say guidance, if you will, because, you know, it’s still a platform and people are still building on top of it. But tools to, for example, obfuscate personal data, or consider data bias and automatically detect, say, an unbalanced data set that might be considered “biased” or statistically not a good data set to use for A.I. Because with A.I. and machine learning, and I’ve worked with the product team as product manager, we were always like, we need more data, more data, more and more data in that area. But just because you want more data, it doesn’t mean it’s good data. And, you know, so is that how Microsoft incorporates those pillars that you just mentioned?
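As a rough sketch of the kind of automated imbalance check described above, and not a specific Microsoft or Hexagon tool, a data set’s class balance can be screened with a few lines of Python before it is used for training; the function name and the acceptable-ratio threshold below are hypothetical.

```python
from collections import Counter

def check_class_balance(labels, max_ratio=3.0):
    """Warn when a training data set's classes are badly imbalanced.

    labels: iterable of class labels, one per training example.
    max_ratio: largest acceptable ratio between the most and least
        common class (the 3.0 default is an arbitrary placeholder).
    """
    counts = Counter(labels)
    if len(counts) < 2:
        return {"balanced": False, "reason": "only one class present",
                "counts": dict(counts)}
    ratio = max(counts.values()) / min(counts.values())
    return {"balanced": ratio <= max_ratio,
            "imbalance_ratio": round(ratio, 2),
            "counts": dict(counts)}

# Example: incident reports heavily over-reported in one district would
# trip the check and prompt a human review before any model is trained.
print(check_class_balance(["district_a"] * 900 + ["district_b"] * 50))
```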

RZ: Yeah. Let me finish, because there are actually two more, and then I want to get to your question, because you’ll see how they’re all connected together. The fifth is transparency, right? You have to know how a system is making its decisions. And I think you called it intelligibility, which is a term that we use as well. It can’t just be a mysterious box; you have to understand how things are getting processed. And the last, and this is one of the most important to me, is accountability. And you highlighted this as well. If you’re going to put a system out there built on A.I., you have to be accountable for how it operates, right? And that really ties back to your question, which is that Microsoft engages internally to make sure that our own tools are created, developed, and deployed in a responsible way around A.I. But then we provide a lot of guidance to the partners that we work with, because we can provide frameworks and help them understand outcomes. And we’ve already addressed a lot of the questions around privacy and security in the way that we develop our own systems, and can provide them valuable insights as they develop their own. Just as one example, I’m part of a discussion with one of our partners who has what could be a really powerful tool for public safety. And we’re helping them through the process of understanding what privacy really means here. What is security? We’re coaching them on places that they may not have looked yet for where those gaps are, or where those barriers could be that would exclude people from that fairness, right? And so you’re right that it’s a collaborative process, and the fact that we’ve done it ourselves, and go through it and continuously do it, makes us a resource not only for the technology, but for some guidance on policy as well.

JW: Yeah, that’s huge, because everybody today wants to incorporate A.I. into their solutions. And honestly, if they don’t, then they’re going to be left behind, in my opinion. But in the rush to operationalise A.I., people might not sit back and think about some of the ethical and responsibility questions you need to consider before you roll something out. And I think that’s very important. I know you guys have worked with us very closely, and we appreciate that and our relationship. And it is something that I can say you guys do a very good job of. And so with regards to public safety, you know, when you talk to customers, partners, and the public at large, how do you convey and communicate this transparency to the public? What do you do? You know, how do you get that message out and kind of settle their Hollywood fears of A.I. taking over the world and the Terminator coming back and starting a robot-human war, you know? So, how do you do that?

RZ: Well, there are a couple of things that we do, and the first is being really open to that discussion around the fear and the mistrust that people sometimes have with A.I. You know, from our point of view, the wrong answer is to say, trust us, it’s all going to be great. That’s not a meaningful answer. What is really meaningful here is to engage in that discussion, lean into that discussion, recognise that A.I. is different because it’s so connected to human intelligence and has a greater potential for both good and harm, and drive through the idea that there are principles behind it. And that often is a very helpful discussion for everybody, not just for Microsoft, but often for the customer or the partner that we’re working with, to say, look, here’s our position. And you may agree with one, you may not like another, you may like them all, but let’s have a real discussion about what A.I. means, what the potential is, and what those guiding principles are going to be as we start to develop new systems. And I think the other part of this really goes to the public, which is being as transparent with the public about how it works as you can. Because here’s a focus we’ve always had in technology in the past, with any technology: the goal has always been getting to that launch date, right, for Hexagon, for Microsoft. We have this date that we say, boy, this is the day we’re going to release it. And if we put that in terms of, particularly, a public safety agency, let’s say, then we all look at what the go-live date is when they’re going to launch that new capability. And what we’ve seen is that the world is different, particularly when you have capabilities like A.I. that can be more intrusive. The launch date, that release date, that go-live date for the public safety agency running A.I. isn’t the day they deploy it. It’s probably six months later, after the public or the community that they serve have had a chance to see it and express their feedback, both positive and negative, about it. That’s the real go-live date. It’s not the ability to deliver a new capability with a public safety agency; it’s to do it in a way that helps them keep it six months later.

JW: I totally understand. I mean, Rick, just kind of a side question: you’ve probably seen there are a lot of watchdog groups out there that are looking into ethical practices of A.I. and looking at data privacy and security issues. I do think there’s going to be a stricter, maybe more formal way of trying to identify misuse of technology, specifically A.I., where it intrudes on people’s privacy and isn’t transparent. I think, even if you don’t buy into this, right, you’re still eventually going to have to do it, because it’s going to be, I believe, mandated or heavily, heavily encouraged behaviour. Because otherwise people are going to turn away over time from something when they don’t know how the A.I. got to its decision or how it came to that notification.

RZ: You make a good point. And I want to dig into that a little, because I liked your point about Skynet and robots coming to get us, which is sometimes the view that people have of A.I.: that it’s going to be something new and different. And one of the things that drives this focus on transparency on our end is the recognition that A.I. is more often going to come in the background of something that somebody is already using, and not as a new system they can vote yes or no on. You see what I mean?

JW: Yes. It’s in the background. It’s a process.

RZ: It’s in the background. And it just comes with something. And I’ll use Netflix as an example, or any streaming service where you get recommendations from it, like, well, you watched this, you might like these three things. It’s essentially A.I. running in the background, and it’s trying to learn what you like and what you don’t like. You see it on social media. You get prompted with things: hey, you watched that movie, and what do you know, you start to get a lot more things like that. So that’s my point, that we’re going to see fewer times and fewer places where you’re buying an A.I. system, and more times when A.I. is just going to be a new sort of capability powering things inside of something you already use. And that’s why the more transparent we can all be about how these capabilities are coming in, how they’re getting built, what’s built into them and what specifically is not, the more comfortable the public will be, whether it’s a public safety agency or the people that they serve, with these new capabilities powered by A.I.

JW: Yeah. I like the way you put that, Rick. You’re exactly right, and I’m going to steal that language. You typically don’t buy an A.I. system; you embed A.I. capabilities into your products, whether that’s to provide proactive alerts, or to provide a better user experience, or to suggest movies that you like. That’s what it’s used for. And I agree. I think the more transparent you are, the better it’s going to serve you. And, listen, with some A.I., simply put, the more data you give it, the better it will behave. I mean, it’s just the nature of A.I.: more data, better A.I. But be very transparent and say, look, if you want to do this, this is the type of data we would be looking at, and this is how we’d analyse it. You know, if you want to participate, there’s probably a lot of value to be added if that person is comfortable with it. And I think as long as you’re straight up with people and don’t try to bury stuff in the background, then it can be very valuable. I think most people would be willing to, I don’t know, trade off a little bit to get some added value on the back end, right?

RZ: Well, you know, one of the things we would want to do is make a distinction, because there are different flavours of A.I. across machine learning, deep learning—

JW: True, true.

RZ: —we want, in that transparency, to be specific with people when A.I. is being used in a scenario where it’s essentially doing what a person does, but doing it at massive scale. And that’s not every case, but it’s an awful lot of cases. And what we’ve seen, whether it’s us directly or one of our partners delivering capabilities, is that when you say this isn’t different from when a person did that job on paper or had to do it themselves, but now A.I. is doing it massively at scale and then bringing a person in to review things, let’s say a match across a massive data set or something like that, what we find is that the trust goes up. Because the more we share about the fact that this isn’t some space-age new process, it’s the same process just done at a massive scale, the more people get comfortable. There’s an asterisk there, which is that we then have to be just as clear when an A.I. system is doing something like deep learning, where it’s getting much more complicated to explain exactly what is going on inside the system. And that’s where we have to be super clear and super transparent as well.
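One generic way to surface how a model is reaching its decisions, sketched below under the assumption of a tabular scikit-learn model and not tied to any Microsoft or Hexagon product, is permutation importance: shuffle each input feature and measure how much held-out accuracy drops. The data and feature names here are synthetic placeholders.

```python
# A minimal sketch of permutation importance as an explainability aid:
# shuffle each input feature and record the drop in held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic placeholder data standing in for whatever records a real
# system would score; nothing here reflects an actual agency data set.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the accuracy drop on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

feature_names = [f"feature_{i}" for i in range(X.shape[1])]
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: importance {score:.3f}")
```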

JW: So, Rick, first off, I’ve got a two-pronged question. First, what are some best practices for public safety agencies that are out there listening and want to begin their work with A.I.? What are your top best practices for agencies to consider when getting started? And then, how do you anticipate the focus on responsible A.I. playing out in the future? Like, how do you see that bubbling up? Do you see it becoming more and more of an emphasis just in certain industries? How do you anticipate the focus on responsible A.I. moving forward? So, best practices and the focus of responsible A.I.

RZ: There are great questions in there, and they’re really tied to each other, right? You know, how do you get started, and how long do you think this is going to go? Do you think this is going to get bigger? On the getting-started part, for an agency who’s listening to the podcast, the first step isn’t technology. It’s policy. It’s understanding how you’re going to use A.I., what your constraints are, and getting some real guardrails for yourself. They can do it at the same time as the technology, right? But I would say don’t leave policy for last, because there are some examples from public safety technology that can show us why. One example is the growth of body-worn cameras. Go back probably five or six years for the really enormous growth of body-worn cameras, and understandably, a lot of law enforcement agencies deployed them. They became a great tool for driving transparency, but a lot of departments didn’t at the same time create the policies required to support them. This was a new kind of data, capturing people at their most vulnerable, very sensitive data. And they had to have policies that said who would wear a camera and who wouldn’t. When could you turn it on? When could you turn it off? Who would be able to review the video? Could an officer review the video before or as part of writing the report? And I won’t give you the answers; there are arguments to be made on all of those questions. But the point was, a lot of departments deployed them without the policies in place. So the first time there was pushback, they didn’t have a policy to stand on around user disclosure or anything else. So it’s a really good example of where it wasn’t just the technology; it was having the technology and the policy tightly coupled that really drove adoption and got the public comfortable with their use. The same thing’s playing out with A.I. And so, for a public safety agency that’s, you know, listening to the podcast: create an internal policy first, right? How are you going to use it yourself as a department? And then how are you going to govern that? A.I. requires that sort of known policy review: who’s going to be responsible for the policy? How are you going to measure whether you’re staying in compliance with your own policy? And then how are you going to use that policy as a way to really support the use of A.I. in the department, both directly and through the solutions that you use? And there are a lot of guideposts for that. There are some really good reference examples of how you put together a governance model that way and then put those principles in place.

JW: Is that a new role? Like, who typically plays that role? And I agree, I agree 100 percent, because you’re going to run into questions and issues that you’ve never seen before. You know, a lot of A.I. does have some configuration, right? You direct what it looks for, in some regards. And there’s a lot of data that otherwise is probably not accessible, or that you maybe weren’t tracking before. Who has access? How do you maintain that compliance? How do you govern it? Is that a new role? I mean, I’m just curious. Like I said, it’s random, but who would do that?

RZ: So, these kinds of decisions really are going to be a combination of leadership on the mission side and the counsel’s office. And depending on the city, the county, the, you know, state entity that’s doing this, it’s going to be somebody whose legal guidance is relied on by the mission leaders, because this is an area that affects both of them. It’s not always law, but the law plays a role. It’s not just mission without any boundaries around it; there are constraints. So, having those people talking and together in some sort of working group is going to be really important. We are seeing a role sometimes emerge as a privacy role, and that person is most likely the one who’s going to get the responsible A.I. responsibility, because there’s such a tight connection between privacy and A.I. But I would just say that is a good thing. So for the chiefs listening to the call: this is a good thing. Don’t be scared. This is about getting it right at the beginning and being able to get new capabilities. It’s not about saying no. It’s about saying yes in the right way and avoiding downstream consequences.

JW: Yeah. Potentially big ones, too. It can’t be overstated. And I agree. So, now I’m just saying, because I would think that if folks aren’t thinking about it now as they roll out initiatives that involve A.I., you might want to just think about, you know, how that governance and compliance and transparency is going to work. So good stuff. And—

RZ: Yeah. Please go ahead.

JW: Yeah. So, tell me about the larger focus. Where do you see the larger focus on responsible A.I. going? Do you think it’s just a fad right now, or is it going to continue to become more and more of a thing, until responsible A.I. is part of everybody’s vocabulary?

RZ: I think it’s going to sustain, Jack. I really do. And it’s because A.I., not just for us and not just for you, surrounds us. You know, if you talk to your phone today, right, if you ask your phone the weather, or you talk to some, you know, home speaker in your house and ask it to turn the lights on, you’re using A.I., right? And so it’s not separate from what we do. What we see is that it’s really getting woven into everything that we do. And so as a result, the idea of doing it responsibly, understanding what a yes is on a scenario and which ones all people, or most people, look at and say no to, that’s only going to increase over time as A.I. plays a larger and larger role in our professional world and our personal lives. And so we may call it different things than responsible A.I. over time. But I think two things: it’s getting more and more pervasive and will continue to increase, and people in society, as individuals, will feel more empowered to push back when they believe systems either aren’t serving their needs or are being delivered without transparency, so that they don’t understand how they work.

JW: Well, I tell you what, Rick, you’ve given me a lot to think about. I took notes the whole time because you were full of great information. I thoroughly enjoyed the conversation. You know, the talk about the pillars that Microsoft uses to make sure its A.I. is responsible, the guidelines that you follow. I will say that Hexagon is in lockstep with you. No wonder we’re partners, because we connect on these types of guidelines. And the points around inclusiveness I thought were very interesting. I appreciate all that information, Rick. And, once again, I enjoyed the conversation. I do want to ask you: do you have any parting words?

RZ: I’ll just say that at Microsoft, we appreciate our partnership with Hexagon, and we look forward to the opportunity to do more podcasts together in the future.

JW: Rick, thanks again. I appreciate your time and joining us on this podcast with Microsoft to talk about A.I. ethics and responsibility. To listen to more episodes of HxGN Radio or to learn more, please visit HxGNSpotlight.com. Thanks, everybody, for tuning in.