The potential dangers as artificial intelligence grows more sophisticated and popular

Seth Dobrin:
So, I think if you look at what the E.U. is doing, they have an A.I. regulation that regulates outcomes. Anything that impacts the health, wealth, or livelihood of a human should be regulated.
There's also — so, I'm president of the Responsible A.I. Institute. The letter also calls for tools to assess these things, and that's what we do. We are a nonprofit, and we build tools that align to global standards. So, some of your viewers have probably heard of ISO standards, or CE. You have a CE stamp or a UL stamp on every lightbulb you ever look at.
We build standards for — we build ways to align or conform to standards for A.I. And they're applicable to these types of A.I. as well. But what's important — and this gets to the heart of the letter as well — is, we don't try to understand what the model is doing. We measure the outcome, because, quite honestly, if you or I are getting a mortgage, we don't care if the model is biased.
What we care about is, is the outcome biased, right? We don't necessarily need the model explained. We need to understand why a decision was made. And it's typically the interaction between the A.I. and the human that drives that, not just the A.I. and not just the human.
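[Editor's note: To make the outcome-focused approach Dobrin describes concrete, here is a minimal sketch of auditing decisions rather than model internals, in this case checking whether mortgage approval rates differ across groups. The column names, sample data, and the four-fifths threshold are illustrative assumptions, not the Responsible A.I. Institute's actual tooling.]

```python
# Hypothetical illustration: outcome-based bias check on mortgage decisions.
# Column names, sample data, and the 0.8 threshold are assumptions for this
# sketch, not the Responsible A.I. Institute's actual tools or methodology.
import pandas as pd


def approval_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group, looking only at outcomes, not model internals."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group's approval rate to the highest group's."""
    return rates.min() / rates.max()


if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    rates = approval_rates(decisions, "group", "approved")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    # A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
    print(f"disparate impact ratio: {ratio:.2f}",
          "-> review" if ratio < 0.8 else "-> ok")
```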