PUBLIC TESTIMONY
Testimony by Darío Maestro, Senior Legal Fellow at the Surveillance Technology Oversight Project (STOP), on AI Legislation
1:47:20 · 4 min
Darío Maestro from the Surveillance Technology Oversight Project (STOP) testified on a package of bills aimed at regulating AI and automated decision systems in NYC. He offered recommendations for strengthening the proposed legislation to ensure effective oversight and accountability in the city's use of AI.
- Intro 926: Suggested specifying standards for AI tools and audits, and recommended a temporary moratorium on AI use in sensitive areas.
- Intro 199: Advocated for giving the proposed Office of Algorithmic Data Integrity real enforcement authority.
- Intro 1024: Emphasized the need for a clear public approval process for AI tools to be included in the centralized list.
Darío Maestro
1:47:20
Good afternoon, Chair,
1:47:22
and members of the Committee on Technology.
1:47:24
Thank you for the opportunity to testify before you today on this critical legislative package.
1:47:29
My name is Darío Maestro.
1:47:31
I am the Senior Legal Fellow at the Surveillance Technology Oversight Project, or STOP.
1:47:36
We are a New York-based civil rights group committed to fighting privacy violations and the discriminatory biases sometimes embedded in new technologies, especially in artificial intelligence and automated decision systems.
1:47:50
In our work, we have witnessed firsthand how these technologies can harm already marginalized communities by reinforcing existing patterns of discrimination, whether by race, gender, or socioeconomic status.
1:48:05
That is why we welcome the trio of bills included on today's agenda.
1:48:11
Specifically, Intros 199, 926, and 1024.
1:48:16
These bills represent a much-needed push towards oversight and accountability in the city's use of AI, as has already been discussed at length during today's hearing.
1:48:26
However, despite this strong foundation, they would benefit from targeted amendments to become genuinely effective.
1:48:35
Today, I am going to discuss each bill, offering a specific recommendation for each one.
1:48:41
First, Intro 926 calls for defining best practices in the use of AI tools by city agencies.
1:48:48
However, we think it falls short by failing to specify what minimum standards these tools should meet or what AI audits and regular reviews should be testing for.
1:49:05
Without standardized audit criteria, we cannot determine whether, and to what extent, these systems perpetuate bias.
1:49:12
At STOP, we have conducted extensive research on AI audits and would be happy to collaborate with the Council and your offices to help develop these necessary rules.
1:49:22
Further, until these standards are set by either legislation or city agencies,
1:49:28
we recommend establishing a temporary moratorium on AI use in sensitive areas like housing, employment, law enforcement, and social services.
1:49:37
Now, turning to Intro 199.
1:49:41
This bill seeks to establish an Office of Algorithmic Data Integrity.
1:49:45
But as it stands, it only gives this office an advisory role.
1:49:49
We believe real enforcement authority is also needed for this office to be effective.
1:49:56
Specifically, it should have the ability to investigate, penalize, and enforce corrective measures, both when a tool is found to be biased or harmful and when agencies fail to comply.
1:50:08
The ability to subpoena code, and to test that code for biases, would also be welcome.
1:50:16
Finally, Intro 1024 mandates a centralized list of AI tools approved for city use.
1:50:23
This adds transparency. I'm... thank you.
1:50:28
I appreciate that.
1:50:30
This adds transparency, as I was saying, but without a clear public approval process to ensure that only safe and unbiased tools make it onto that list, we would have a situation where the mayoral administration could simply rubber-stamp any tool it desires and put it onto the list.
1:50:47
Just to wrap up, we believe that, if properly amended
1:50:50
and combined, this package of legislation could form a powerful and meaningful tool in combating AI biases.
1:51:01
Intro 926 can set the rigorous standards that city agencies must follow in their AI use. Intro 1024 would then function as a guardian, only allowing tools that meet those standards to be used.
1:51:14
And Intro 199 would create the enforcement body that would make sure AI systems comply with the standards of 926 and the approval process of 1024.
1:51:25
We at STOP are ready to work with your offices and the Council to develop these important amendments and secure the strongest possible safeguards for all New Yorkers.
Jennifer Gutiérrez
1:51:34
Thank you.
Darío Maestro
1:51:35
Thank you for the opportunity.
Jennifer Gutiérrez
1:51:36
Thank you.