Deloitte calls governments to action on a global ethical framework for AI
It’s time for the public sector to consider ethics in the development and deployment of AI, says a new report from Deloitte, and many prominent experts would certainly agree.
Released at last month’s World Government Summit in Dubai and aimed at triggering actionable discussions, Deloitte’s ‘AI Ethics: The Next Big Thing in Government’ report barely scratches the surface of the deeply complex conversation around artificial intelligence and its role in – and application within – the public sector. It does, however, raise the question of ethics, urging governments to start taking action and offering a number of foundational recommendations.
And the stakes are certainly high. “Could AI lead to the creation of a ‘useless class’ of millions of human beings? How do we ensure that machines do not harm other humans? How could we control an AI system that has gone beyond our understanding of complexity?” the authors ask. Indeed, the report features nearly 60 question marks in total, including some of the curlier ones raised by public figures such as Stephen Hawking, Nick Bostrom and Elon Musk as to the potential end of humankind as we know it.
Chief among the Deloitte recommendations is a focus on transparency and trust – problematic, to say the least, in the contemporary political climate of ‘fake news’ and authoritarian leanings. Further, the report argues that developing an international regulatory model will be essential for ensuring the ethical development and deployment of AI – quite rightly, but again a tricky prospect given a deteriorating geopolitical order.
“The role of the public sector is essential and twofold,” says Ali Hashmi, Deloitte Middle East’s Artificial Intelligence Leader, “to act as both a role model and regulator.” As the report puts it: “Gaining societal consensus on the ethics of AI is one of the key tasks of the government.” Yet, to date, even the experts cannot get close to agreeing on a basic ethical model, and Musk, for one, appears to have little genuine faith in government to lead by example.

The concluding principles laid out by Deloitte are lofty – for example, that when undertaking an AI project that will result in job losses, an organisation should consider the economic impact on society as a whole – but they would in many ways require a fundamental change to globally entrenched systems; something that both governments and profit-driven multinationals have struggled with in relation to existential threats that are already present and tangible (read: global warming).
Deloitte notes the recent surge of interest in AI ethics worldwide, reeling off a list of academic institutions, public sector bodies (including nation-states) and corporate entities now interested in developing a working ethical framework for “maximising AI benefits while minimising its risks”. The Big Four firm also reminds us that AI is already well and truly here – at the very centre of the operations of many of those same corporate entities, Google and Microsoft among them, which are guiding the current ethical debate.
The call to action is admirable, but the above is a reminder of the deep complications involved – from the erosion of trust in government to the compromised entities driving the profitable development of AI while framing the ethical debate. “Governments and public sector organisations have to reach out to external AI stakeholders – i.e. other governments, institutions – to build partnerships for developing effective codes of ethics,” says Hashmi. What’s required is a road map for how this might successfully occur, driven by public demand.
Before arriving at the proposed principles that should shape a code of ethics, the report first examines the philosophical foundations that should inform those ethics (such as, how do brains work?). But, as has been previously pointed out, philosophers have been grappling with some of the same ontological questions for thousands of years, whereas the advent of human-level machine intelligence is projected by many experts to be as little as ten years away.
Deloitte itself notes in the report that “organisations should anticipate the birth of artificial general intelligence – the intelligence of a machine that could successfully perform any intellectual task that a human being can do,” and from there, it may be just a small step to artificial superintelligence. One sobering thought: Apple co-founder Steve Wozniak has reportedly started feeding his dogs filet out of concern he could be reduced to serving as a future robot’s family pet, telling Vanity Fair, “Once you start thinking you could be one, that’s how you want them treated.”