EY launches advanced tool to assess trustworthiness of AI technology
Global professional services firm Ernst & Young has announced the release of an advanced analytical tool to assess the trustworthiness of artificial intelligence.
Enabled by Microsoft Azure, the EY Trusted AI platform produces a technical score for an artificial intelligence system by leveraging advanced analytics to evaluate its technical design, measuring risk drivers including its “objective, underlying technologies, technical operating environment and level of autonomy compared with human oversight.”
Aimed at resolving the issue of trust in technology, which the firm contends is the biggest barrier to wider AI adoption, the new tool’s risk scoring model is based on the ‘EY Trusted AI conceptual framework’ launched last year, which embeds trust mechanisms in an AI system from the earliest stages of design around the core pillars of ethics, social responsibility, accountability and explainability, and reliability.
“Trust must be a front-line consideration, rather than a box to check after an AI system goes live,” said Keith Strier, EY’s Global Advisory Leader for Artificial Intelligence. “Unlike traditional software, which can be fixed, tested and patched, if a neural network is trained on biased data, it may be impossible to fix, and the entire investment could be lost.”

Users of the new solution – such as AI developers, executive sponsors and risk professionals – will be able to garner deeper insights into a given AI system and better identify and mitigate risks unique to artificial intelligence technology. The platform score produced by the tool is subject to a complex multiplier based on the impact on users, taking into account potential unintended consequences such as social and ethical implications.
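EY has not published the mechanics of the scoring model; purely as an illustration, the following minimal sketch shows how a driver-weighted technical score combined with a user-impact multiplier might be structured. All driver names, weights and formulas here are hypothetical and are not EY’s actual model.

```python
# Hypothetical sketch of a driver-weighted risk score with an impact
# multiplier. None of these names, weights or formulas come from EY;
# they only illustrate the structure described in the article.

RISK_DRIVER_WEIGHTS = {
    "objective": 0.25,               # what the system is built to do
    "underlying_technologies": 0.25,
    "operating_environment": 0.25,
    "autonomy_vs_oversight": 0.25,   # autonomy relative to human oversight
}

def technical_score(driver_ratings: dict[str, float]) -> float:
    """Weighted average of per-driver risk ratings, each rated
    0.0 (low risk) to 1.0 (high risk)."""
    return sum(RISK_DRIVER_WEIGHTS[d] * r for d, r in driver_ratings.items())

def platform_score(driver_ratings: dict[str, float],
                   impact_multiplier: float) -> float:
    """Scale the technical score by a multiplier capturing the impact
    on users, e.g. unintended social and ethical consequences."""
    return technical_score(driver_ratings) * impact_multiplier

if __name__ == "__main__":
    ratings = {
        "objective": 0.3,
        "underlying_technologies": 0.6,
        "operating_environment": 0.4,
        "autonomy_vs_oversight": 0.8,  # highly autonomous, little oversight
    }
    print(f"platform score: {platform_score(ratings, impact_multiplier=1.5):.2f}")
```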
According to the firm, it’s the first solution designed to help enterprises evaluate, monitor and quantify the impact and trustworthiness of AI, while an evaluation of governance and control maturity further serves to reduce residual risks and allow greater planning – helping to safeguard “products, brands, relationships and reputations” in the contemporary risk environment.
“If AI is to reach its full potential, we need a more granular view – the ability to predict conditions that amplify risks and then target mitigation strategies for risks that may undermine trust, while still considering traditional system risks such as reliability, performance and security,” said EY Global Trusted Artificial Intelligence Advisory Leader Cathy Cobey.
Offered as a standalone or managed service – which will be regularly updated with new AI risk metrics, measurement techniques and monitoring tools – the new solution will be available to clients globally this year. Further features include a guided, interactive web-based interface, a drill-down function for additional detail, and the ability to perform dynamic risk forecasting when an AI component changes – such as an agent’s functional capabilities or level of autonomy.
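Continuing the hypothetical sketch above, forecasting the effect of a component change could be as simple as re-scoring the system with one driver rating updated, letting risk teams preview the new platform score before the change ships. Again, this is only an illustration built on the assumed functions defined earlier, not EY’s implementation.

```python
# Hypothetical continuation of the earlier sketch: forecast the score
# impact of changing one component, e.g. raising an agent's autonomy.
# Assumes platform_score() from the previous sketch is in scope.
def forecast_change(driver_ratings: dict[str, float], driver: str,
                    new_rating: float, impact_multiplier: float) -> float:
    """Return the platform score that would result if `driver` were
    re-rated to `new_rating`, without mutating the current ratings."""
    updated = {**driver_ratings, driver: new_rating}
    return platform_score(updated, impact_multiplier)

# e.g. forecast_change(ratings, "autonomy_vs_oversight", 0.9, 1.5)
```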