Roland Berger supports RPA adoption by Qatari maritime logistics firm

12 June 2018 Consultancy-me.com

Qatari maritime transport & logistics firm Milaha has deployed one of the world's first AI solutions for back-end operations, a system that performs clerical and administrative tasks by mirroring human activity. The robotic process automation system, dubbed ‘Tam’ by the company, reduces the time taken for some tasks from days to hours.

Milaha has been testing the system for some time with the help of global strategy consulting firm Roland Berger, alongside technology firm FourNxt, a specialist in artificial intelligence and Internet of Things systems. In its trials, the consortium found that Tam runs around the clock without faults.

According to Milaha’s team, the system can process up to 35 transactions per hour, a workload that would ordinarily take clerical staff over 12 hours to complete. Because it can keep running overnight rather than only during working hours, it significantly increases overall efficiency.
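As a rough, purely illustrative reading of those figures (assuming the 35 transactions per hour are the same batch the article says would take staff over 12 hours, and an eight-hour manual working day, neither of which the article states explicitly), the implied comparison looks like this:

```python
# Illustrative arithmetic only; the figures come from the article, the interpretation is assumed.
bot_rate_per_hour = 35          # transactions per hour claimed for Tam
manual_hours_per_batch = 12     # "over 12 hours" for the same workload, treated as a lower bound
working_hours_per_day = 8       # assumed standard working day for the manual comparison

manual_rate_per_hour = bot_rate_per_hour / manual_hours_per_batch       # roughly 2.9 transactions/hour
daily_bot_throughput = bot_rate_per_hour * 24                           # 840 transactions if run 24/7
daily_manual_throughput = manual_rate_per_hour * working_hours_per_day  # roughly 23 transactions

print(f"Implied manual rate: {manual_rate_per_hour:.1f} transactions/hour")
print(f"Implied speedup on a single batch: at least {manual_hours_per_batch}x")
print(f"Implied daily capacity: {daily_bot_throughput} (bot) vs ~{daily_manual_throughput:.0f} (manual)")
```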

Milaha is a Qatar-based maritime transport and logistics company which is pushing forward in its uptake of Industry 4.0 technologies. The company is invested in the successful use of machine learning and robotic process automation (RPA) as a path to reaching the Qatar National Vision 2030.

Commenting on the implementation of Tam, Milaha’s President and CEO Abdulrahman Essa Al-Mannai said: “The introduction of RPA into our operations is part of a company-wide initiative to automate the workplace with the ultimate goal of maximising our efficiency and accuracy, and we are proud to be one of the first Qatari companies to adopt such cutting-edge technology.”

“Tam will also free up our staff to focus on complex higher-value tasks, decision-making, and client engagement. On the other hand, our automation drive directly contributes to the creation of a knowledge-based economy, a key component of Qatar National Vision 2030.”

Saleh Al-Haroon, Milaha’s Senior Vice President, said of the new system: “The use of Robotic Process Automation is becoming an inevitable requirement for large organisations, and we are already seeing promising results from the new system, particularly with our finance department, which first started implementing it.”

"Using ‘Tam’, our colleagues are able to perform within hours what would usually takes days to finish, and we are working with the developer to brainstorm new ways and avenues in which we can integrate the solution,” Al-Haroon added. 

With three offices in the region, Roland Berger has been busy in 2018 with a number of local tech-related projects. Earlier this year, the consulting firm was among the judges of a start-up competition in Saudi Arabia, hosting the winners at their office in Germany. Meanwhile the firm has also recently joined Bahrain's FinTech Bay as a founding partner.

EY launches advanced tool to assess trustworthiness of AI technology

12 April 2019 Consultancy-me.com

Global professional services firm Ernst & Young has announced the release of an advanced analytical tool to assess the trustworthiness of artificial intelligence.

Enabled by Microsoft Azure, the EY Trusted AI platform produces a technical score for an artificial intelligence system by leveraging advanced analytics to evaluate its technical design, measuring risk drivers including its “objective, underlying technologies, technical operating environment and level of autonomy compared with human oversight.”

Aimed at helping to resolve the issue of trust in technology, which the firm contends is the biggest barrier to wider AI adoption, the new tool’s risk scoring model is based on the ‘EY Trusted AI conceptual framework’ launched last year. That framework calls for embedding trust mechanisms in an AI system from the earliest stages, around the core pillars of ethics, social responsibility, accountability and explainability, and reliability.

“Trust must be a front-line consideration, rather than a box to check after an AI system goes live,” said Keith Strier, EY’s Global Advisory Leader for Artificial Intelligence. “Unlike traditional software, which can be fixed, tested and patched, if a neural network is trained on biased data, it may be impossible to fix, and the entire investment could be lost.”

Users of the new solution, such as AI developers, executive sponsors, and risk professionals, will be able to garner deeper insights into a given AI system and better identify and mitigate risks unique to artificial intelligence technology. The platform score produced by the tool is subject to a complex multiplier based on the impact on users, taking into account potential unintended consequences such as social and ethical implications.
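EY has not published its scoring formula, so the following is only a hypothetical sketch of how a pillar-weighted technical score combined with a user-impact multiplier of the kind described above might be structured. The pillar names come from the framework as reported; the weights, function names, scales and multiplier logic are invented for illustration.

```python
# Hypothetical sketch of a trust-score calculation of the kind described in the article.
# Pillars follow the EY Trusted AI conceptual framework as reported; all weights,
# scales and the impact multiplier are illustrative assumptions, not EY's method.

PILLAR_WEIGHTS = {
    "ethics": 0.25,
    "social_responsibility": 0.25,
    "accountability_explainability": 0.25,
    "reliability": 0.25,
}

def technical_score(pillar_scores: dict[str, float]) -> float:
    """Weighted average of per-pillar scores on a 0-100 scale."""
    return sum(PILLAR_WEIGHTS[p] * pillar_scores[p] for p in PILLAR_WEIGHTS)

def user_impact_multiplier(autonomy: float, affected_users: int) -> float:
    """Toy multiplier: higher autonomy and wider reach discount the raw score more heavily."""
    reach_penalty = min(affected_users / 1_000_000, 1.0)   # saturate at one million users
    return 1.0 - 0.5 * autonomy * reach_penalty            # stays between 0.5 and 1.0

def platform_score(pillar_scores: dict[str, float], autonomy: float, affected_users: int) -> float:
    """Technical score discounted by the user-impact multiplier."""
    return technical_score(pillar_scores) * user_impact_multiplier(autonomy, affected_users)

if __name__ == "__main__":
    scores = {
        "ethics": 80,
        "social_responsibility": 75,
        "accountability_explainability": 60,
        "reliability": 90,
    }
    # A highly autonomous system touching many users ends up with a lower trust score.
    print(round(platform_score(scores, autonomy=0.9, affected_users=2_000_000), 1))
```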

According to the firm, it’s the first solution designed to help enterprises evaluate, monitor and quantify the impact and trustworthiness of AI. An evaluation of governance and control maturity further serves to reduce residual risks and allow for greater planning, helping to safeguard “products, brands, relationships and reputations” in the contemporary risk environment.

“If AI is to reach its full potential, we need a more granular view – the ability to predict conditions that amplify risks and then target mitigation strategies for risks that may undermine trust, while still considering traditional system risks such as reliability, performance and security,” said EY Global Trusted Artificial Intelligence Advisory Leader Cathy Cobey.

Offered as a standalone or managed service, which will be regularly updated with new AI risk metrics, measurement techniques and monitoring tools, the new solution will be available to clients globally this year. Further features include a guided, interactive web-based interface, a drill-down function for additional detail, and the ability to perform dynamic risk forecasting when an AI component changes, such as an agent’s functional capabilities or level of autonomy.