What AI Regulation Means for Silicon Valley
Julia Reinhardt, Privacy and AI Governance Professional, on data policy, GDPR, and its impact on SMEs
In the latest episode of Voices of the Data Economy, we spoke with Julia Reinhardt, a San Francisco-based expert in Artificial Intelligence governance and privacy and a public policy consultant. As a Mozilla Fellow in Residence, Julia assesses the opportunities and limitations of European approaches to trustworthy Artificial Intelligence in Silicon Valley and their potential for U.S. businesses and advocacy. During our conversation, Julia discussed the different facets of GDPR's impact on Silicon Valley and the challenges of upcoming AI regulation. Here are edited excerpts from the podcast.
Impact of GDPR on Silicon Valley
GDPR has had notable and immediate impacts worldwide. It brought awareness to Silicon Valley that privacy is a human right, something many in the US had not previously considered significant. The global conversation around privacy has shifted in the three years since 2018, and so have the laws. As a direct result of GDPR in Europe, countries like Japan, Brazil, India, and China are in the process of passing GDPR-inspired privacy laws. In addition, California has a new privacy law, which took effect in 2020, thanks to GDPR.
GDPR has also shown Silicon Valley that one of its biggest markets, Europe, has its own set of rules, and that US companies must follow them to be players there and earn money in the region. As a result, many US-based organizations that process the personal data of people worldwide have decided to apply GDPR and extend all the rights that come with it to their customers, even those who are not European residents and live outside of Europe. This gives them an edge in global compliance and makes it easier for them to handle complaints and requests. In addition, GDPR offers them a legal framework and a set of standards.
“I need to mention that a disappointing factor with the GDPR laws is enforcement. Even when tech companies get hit with billion-dollar fines, for them, it’s a tap on the wrist. And so far, GDPR has not changed the underlying business models, the way money is made on the internet by surveilling people’s behavior. So it’s not just the business model of a single company; it’s the economic model on which the entire internet is based, a model that does not have privacy top of mind. Changing it would require fundamental and probably painful adjustments to the way things are structured. That is something GDPR has so far not been able to achieve. And that is a bit disappointing.”
AI regulations in Europe and their global challenges
Julia worked as a German diplomat for almost 15 years, managing bilateral relations, navigating crisis communication, heading up high-level protocol, participating in EU negotiation processes, and promoting innovation and outreach in the Western US.
As part of her work today, she mentions that she intends to make sure the upcoming AI regulation from Europe does not once again leave small players lagging behind, because in the field of AI, size matters. “The more data you can gather, the better your AI system works. We are already pretty far down the road to monopolization because the big players in the market have access to an impressive range of data. They can also afford to gather high-quality data, which enables them to build better-performing AI. And for small-scale providers, what matters most is the clarity of the guidance. The draft that the European Commission tabled has also been a very long time in the making. It is the most ambitious and most comprehensive attempt to rein in the risks linked to the deployment of AI that we have seen so far across the globe. It is a bold new step.” You can read an analysis of the AI regulation proposal here.
Now in 2021, we are at the stage of transforming these principles into practical rules and regulations. The rules the European Commission proposed will not cover all AI systems. Instead, they cover systems deemed to pose a significant risk to the safety and fundamental rights of people in Europe. It is a risk-based approach, and it has several layers. Those layers set different rules for different classes of AI systems: some are prohibited, some are considered high risk, some follow specific rules only, and for others the requirement is simply greater transparency.
Going deeper into the regulations: Code testing for algorithms
You need to know which category your AI system falls into. For some uses of AI, the Commission proposes an outright ban, calling them an unacceptable threat to citizens. Examples include AI systems that cause physical or psychological harm by manipulating people’s behavior or exploiting their vulnerabilities, such as age or disability. Other examples are social scoring systems, where people can collect points, and facial recognition in public spaces by law enforcement authorities; not all facial recognition is banned, only its use by police in public areas. Although there are exceptions.
Most of the regulatory draft focuses on AI considered high risk, and what counts as high risk is defined in the draft. These are the kinds of problematic uses found in recruiting and in the employment and admissions context, in determining a person’s creditworthiness or eligibility for public services and benefits, and in some applications used in law enforcement, security, and the judiciary. These systems will have to meet various requirements and undergo a conformity assessment before they enter the European market.
To ensure that a high-risk AI system complies with several requirements around serious risk management, it must use data sets for training, validation, and testing that are relevant, representative, free of errors, etc. Documentation about a high-risk AI system must be really extensive and very precise: why did you choose certain designs? Why did you design it a specific way? The keyword is always human oversight. High-risk AI systems must be designed to allow people to understand the capabilities and limitations of the system, to counter so-called automation bias, and, if necessary, to reverse or override the output. It is like code testing for algorithms.
Loopholes in the AI regulation: Not the final word
The European Parliament and other bodies in Europe have already called for much stricter rules on some elements of the draft. In addition, certain member states believe it should be more stringent in some cases. However, this is not the final word.
“In my personal opinion, I think the exceptions, like those for facial recognition, are too wide. It is difficult when you ban very specific uses of facial recognition but then, for industry or private uses, there is no ban at all. Even for law enforcement use, there are certain areas where it can be used. Practically speaking, law enforcement in Europe will buy facial recognition systems on the market, wherever they are produced, and use them in those specific cases where they are allowed to. How do you make sure they do not use them for other things? I think that is a huge loophole. I think facial recognition has the potential to actually undermine our free society. In the end, there is a lot to criticize about this draft.”
Here is a list of selected timestamps for the different topics discussed during the podcast:
2:16 — 6:56 Julia’s journey from being a German diplomat to now an advisor on data policy and regulations in the US
6:56 — 12:48 GDPR’s impact on Silicon Valley
15:55 — 20:38 Impact of GDPR on Big Tech and SMEs in the U.S.
20:38 — 25:53 How do the proposed AI regulations impact the U.S.?
25:53 — 29:58 Detailed analysis of the AI regulations
29:58 — 35:55 Loopholes and challenges of the AI regulation
35:55 — End Do innovation and AI regulation go hand in hand?