Responsible AI should be a priority – now


Responsible AI must be embedded in a company’s DNA.

“Why is bias in AI something we all need to think about today? That’s because AI fuels everything we do today,” Miriam Vogel, president and CEO of EqualAI, told a live audience at this week’s Transform 2022 event.

Vogel discussed AI bias and responsible AI in depth in a fireside chat led by Victoria Espinel of the trade group the Software Alliance.

Vogel has extensive experience in technology and policy, including at the White House, the U.S. Department of Justice (DOJ) and the nonprofit EqualAI, which is dedicated to reducing unconscious bias in the development and use of artificial intelligence. She also serves as chair of the recently launched National AI Advisory Committee (NAIAC), mandated by Congress to advise the President and the White House on AI policy.

As she pointed out, AI is becoming ever more important in our daily lives — and improving them — but at the same time, we have to understand the many inherent risks of AI. Everyone — builders, creators and users alike — must make AI “our partner,” as well as efficient, effective and trustworthy.

“You can’t build trust with your app if you’re not sure that it’s safe for you, and that it’s designed for you,” Vogel said.

Now is the time

We must address responsible AI now, said Vogel, because we are still establishing the “rules of the road.” What constitutes AI remains something of a “gray area.”

And if it isn’t addressed? The consequences could be dire. People may not get access to proper healthcare or employment opportunities as the result of AI bias, and “litigation will come, regulation will come,” Vogel warned.

When that happens, “we can’t untangle the AI systems that we’ve become so dependent on, and that have become intertwined,” she said. “Now, today, is the time for us to be very mindful of what we’re building and deploying, making sure that we are assessing the risks, making sure that we are mitigating those risks.”

Good AI hygiene

Companies must address responsible AI now by establishing strong governance practices and policies and by fostering a culture that is safe, collaborative and visible. Vogel said this must be carried through the whole organization and handled with care and intentionality.

For example, in hiring, companies can start simply by asking if the platforms have been tested for discrimination.

“That basic question alone is so powerful,” Vogel said.

An organization’s HR team must be supported by AI that is inclusive end to end and doesn’t exclude the best candidates from employment or advancement.

It’s a matter of “good AI hygiene,” Vogel said, and it starts with the C-suite.

“Why the C-suite? Because at the end of the day, if you don’t have buy-in at the highest levels, you can’t get the governance framework in place, and you can’t get investment in the governance framework,” Vogel said.

Also, bias detection is an ongoing process: once a framework has been established, there must be a long-term process in place to continually assess whether bias is hampering systems.

“Bias can be embedded in every human touch point,” Vogel said, from data collection, to testing, to design, to development and deployment.

Responsible AI: a human-level problem

Vogel pointed out that discussion of AI bias and AI responsibility was initially limited to programmers, a framing she feels is “unfair.”

“We can’t expect them to solve humanity’s problems on their own,” she said.

It’s human nature: people often imagine only as broadly as their experience or creativity allows. So the more voices that can be brought in, the better the odds of identifying best practices and ensuring that the age-old issue of bias doesn’t permeate AI.

This is already underway, Vogel said, with governments around the world drafting regulatory frameworks. The European Union, for example, is creating a GDPR-like regulation for AI. Additionally, the U.S. Equal Employment Opportunity Commission and the DOJ recently released an “unprecedented” joint statement on reducing discrimination when it comes to disabilities — something AI and its algorithms could make worse if not monitored. The National Institute of Standards and Technology has also been directed by Congress to create a risk management framework for AI.

“We can expect a lot out of the U.S. in terms of AI regulation,” Vogel said.

This includes the recently formed committee that she now chairs.

“We will have an impact,” she said.

Don’t miss the full conversation from the Transform 2022 event.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.
