Q&A
Discussion on privacy concerns and potential issues with AI-based tools in government services
2:11:17
·
175 sec
Council Member Gutiérrez and Senator Gonzalez discuss deeper concerns about privacy, data sharing, and the use of AI in government services. They explore the potential risks and the need for clear standards and regulations.
- Concerns about predictive services and how collected information might be used
- Discussion on the risks of deploying untested AI tools in government services
- Emphasis on the need for clear frameworks to reduce bias in automated decision-making systems
- Examples of issues in other states where AI tools led to benefit losses or false fraud accusations
- Call for human oversight in consequential decisions made by AI-based tools
Jennifer Gutiérrez
2:11:17
That's excellent.
2:11:18
Yeah.
2:11:18
I mean, I think the commissioner, you know, barely touched on that.
2:11:25
I think he often refers to, kind of, like, what already exists; it seems like they've not really engaged in a ton of feedback besides maybe, like, a survey situation.
2:11:37
So, I mean, what I got clear from the commissioner is that there are MOUs agency to agency.
2:11:43
Obviously, I was asking specifically about law enforcement.
2:11:46
He couldn't really speak to that a lot.
2:11:48
So I'm really encouraged that constituents are asking about that.
2:11:51
Are there, I guess, specific concerns about, kind of, the future?
2:11:59
And you can get back to me on this, because it sounds like they're really looking at, like, kind of, predictive services with the information that people are supplying. Do you think there is concern about what he laid out, or what they laid out today, which is "we wanna see what services people need so we can provide solutions," versus, kind of, what you're hearing on the ground?
2:12:21
What do people really need? Do they really need that?
Kristen Gonzalez
2:12:24
That's a phenomenal question.
2:12:26
So I think we can all agree that we want to see our technology stack in our city and state government be responsive, be easy to use, you know, from a New Yorker perspective, and also give us the information that we need. But deploying new tools that are untested certainly isn't the answer to that.
2:12:44
And what I really wanted to point out to your question is that the question of automated decision making systems having implicit bias is one that we've been dealing with in our city and state governments for years.
2:12:56
And now that we're adding new types of technology like generative AI and large language models, which, again, have been proven to have certain challenges and to hallucinate, you know, we run the risk of actually amplifying some of that bias.
2:13:09
And what we've seen in other states is what happens when these tools are deployed, whether it's an automated decision-making system that did not have a clear framework, again, mandated by government to reduce bias, or a generative AI-based tool.
2:13:23
I'm definitely in the latter right here.
2:13:24
We've actually seen, when these tools have been deployed, some serious issues, like folks losing some of their benefits because the decisions were inaccurate.
2:13:34
And folks being accused of things like fraud in other states because the systems were actually flagging people unnecessarily.
2:13:42
And that's why we wanna see, before we go ahead and do any of this, that we have clear standards and at least a human in the loop when a consequential decision is being made about someone's life with an AI-based tool.
Jennifer Gutiérrez
2:13:54
That's right.
Kristen Gonzalez
2:13:55
Yeah.
Jennifer Gutiérrez
2:13:56
Well, the administration seems very hopeful.
2:14:00
They'll figure this chatbot situation out.
2:14:03
But thank you so much for coming, and thank you for your testimony.
Kristen Gonzalez
2:14:06
Thank you so much for having me.
2:14:08
And, again, I appreciate the conversations today around the digital ID.
Jennifer Gutiérrez
2:14:12
Thank you so much for coming in.