Middle East well-placed to lead development of AI in healthcare says PwC

17 January 2019 Consultancy-me.com

The Middle East region is well-placed to become a world leader in artificial intelligence research & development for the healthcare sector, says PwC, with the public firmly in approval.

The findings of a new global survey report on digital healthcare transformations released by PwC have prompted the firm to suggest the Middle East is uniquely positioned as a possible international leader and hub for artificial intelligence research & development in the sector. Notably, the willingness of the Middle Eastern public to engage with AI in the healthcare system was greater than anywhere else.

The report – ‘From Virtual to Reality: Six imperatives for becoming an AI-ready healthcare business’ – examined global and regional use cases of AI implementation in the healthcare sector across the areas of leadership and culture, ethics and confidentiality, clinical effectiveness, workforce transformation, and public readiness and regulation, stitching together a roadmap for a successful AI adoption.

From a regional perspective, although almost two thirds of the public and private sector healthcare leaders interviewed from the Middle East believe that AI will have a major impact on their businesses in the coming ten years, fewer than ten percent have begun pursuing the matter – findings which accord with a broader recent BCG study on positive AI perspectives but stalling implementations in the region.

[Chart: Business Impact vs Implementation]

The slow pace of activity places local healthcare businesses far behind those in Europe. Yet another aspect of the study revealed that the Middle Eastern public is considerably more willing – and increasingly so – to engage with an AI-driven healthcare system than respondents in Europe and elsewhere, creating an approximately 50 percent gap between public readiness and business readiness in the region.

For PwC, the widespread public acceptance presents a compelling opportunity: “With regulators actively working on developing governance and regulatory frameworks that can facilitate the application and implementation of AI for healthcare businesses, PwC’s view is that the Middle East is in a unique position to lead the development of international standards and become a hub for AI research and development.”

But more still needs to be done, says the firm, beginning with leadership and culture. The report further identifies the traits of AI-ready leaders, including strategic thinking and foresight, the ability to simplify complexity and create a roadmap which connects all parts of an organisation, and being technologically aware while not losing sight of the need for compassion and emotional intelligence as a fundamental pillar of healthcare.

“AI has the potential to revolutionise every aspect of the healthcare sector,” says PwC Middle East Health Industries Consulting Leader Hamish Clark. “The economic benefits are clear and the technology is already here. However, there is a significant implementation challenge which requires new skills and strong leadership. Disruption to traditional healthcare delivery models is now happening at pace.”


EY launches advanced tool to assess trustworthiness of AI technology

12 April 2019 Consultancy-me.com

Global professional services firm Ernst & Young has announced the release of an advanced analytical tool to assess the trustworthiness of artificial intelligence.

Enabled by Microsoft Azure, the EY Trusted AI platform released by the global professional services firm Ernst & Young produces a technical score of an artificial intelligence system by leveraging advanced analytics to evaluate its technical design, measuring risk drivers including its “objective, underlying technologies, technical operating environment and level of autonomy compared with human oversight.”

Aimed at helping to resolve the issue of trust in technology, which the firm contends is the biggest barrier to wider AI adoption, the new tool’s risk scoring model is based on the ‘EY Trusted AI conceptual framework’ launched last year, which speaks to embedding trust mechanisms in an AI system at the earliest stages around the core pillars of ethics, social responsibility, accountability and explainability, and reliability.

“Trust must be a front-line consideration, rather than a box to check after an AI system goes live,” said Keith Strier, EY’s Global Advisory Leader for Artificial Intelligence. “Unlike traditional software, which can be fixed, tested and patched, if a neural network is trained on biased data, it may be impossible to fix, and the entire investment could be lost.”

[Diagram: AI system overview]

Users of the new solution – such as AI developers, executive sponsors, and risk professionals – will be able to garner deeper insights into a given AI system and better identify and mitigate risks unique to artificial intelligence technology. The platform score produced by the tool is subject to a complex multiplier based on the system’s impact on users, taking into account potential unintended consequences such as social and ethical implications.
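To make the idea of an impact-adjusted trust score concrete, the sketch below combines per-driver risk scores into a single figure and scales it by a user-impact multiplier. This is purely illustrative: the driver names, weights, and formula are assumptions for the sake of example, not EY's actual model, which has not been published in this detail.

```python
# Hypothetical sketch of an impact-adjusted trust score. All names, weights,
# and the formula itself are illustrative assumptions, not EY's actual model.

def trust_score(driver_scores: dict, weights: dict,
                impact_multiplier: float) -> float:
    """Combine per-driver scores (0-100, higher = more trustworthy) into a
    weighted average, then divide by an impact multiplier so that
    higher-impact systems need stronger scores to rate as trustworthy."""
    total_weight = sum(weights.values())
    base = sum(driver_scores[d] * weights[d] for d in weights) / total_weight
    return base / impact_multiplier

# Example with made-up drivers mirroring those named in the article:
score = trust_score(
    driver_scores={"objective": 80, "underlying_tech": 70,
                   "operating_env": 90, "autonomy": 60},
    weights={"objective": 0.3, "underlying_tech": 0.3,
             "operating_env": 0.2, "autonomy": 0.2},
    impact_multiplier=1.25,  # >1 penalises systems with greater user impact
)
print(round(score, 1))  # 60.0
```

The design choice worth noting is the direction of the multiplier: dividing by impact means two systems with identical technical risk profiles receive different scores if one affects users more severely, which matches the article's point that the score is adjusted for potential unintended consequences.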

According to the firm, it’s the first solution designed to help enterprises evaluate, monitor and quantify the impact and trustworthiness of AI, while an evaluation of governance and control maturity further serves to reduce residual risks and allow greater planning – helping to safeguard “products, brands, relationships and reputations” in the contemporary risk environment.

“If AI is to reach its full potential, we need a more granular view – the ability to predict conditions that amplify risks and then target mitigation strategies for risks that may undermine trust, while still considering traditional system risks such as reliability, performance and security,” said EY Global Trusted Artificial Intelligence Advisory Leader Cathy Cobey.

Offered as a standalone or managed service – which will be regularly updated with new AI risk metrics, measurement techniques and monitoring tools – the new solution will be available to clients globally this year. Further features include a guided, interactive web-based interface, a drill-down function for additional detail, and dynamic risk forecasting when an AI component changes – such as an agent’s functional capabilities or level of autonomy.