Towards a trustworthy AI

It’s widely accepted that artificial intelligence will have a huge impact on our lives in the coming decades – but how can we be sure this new technology is not just innovative and helpful, but also trustworthy?

To address issues of trust in artificial intelligence (AI), a new technical report by ISO and the International Electrotechnical Commission (IEC) looks at concerns related to trustworthiness and provides practical solutions. ISO/IEC TR 24028:2020, Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence, analyses the factors that can impact the trustworthiness of systems providing or using AI. It can be used by any business regardless of its size or sector.

Weather forecasts, e-mail spam filtering, Google’s search predictions and voice recognition, such as Apple’s Siri, are all examples of AI in everyday life. What these technologies have in common is machine-learning algorithms that enable them to react and respond in real time. Tech enthusiasts and experts expect growing pains as AI technology evolves, but also substantial productivity gains for business. A survey by management consultancy McKinsey estimates that AI analytics could add around USD 13 trillion, or 16%, to annual global GDP by 2030.

The question of trustworthiness is key, explains Wael William Diab, Chair of SC 42, Artificial intelligence, a subcommittee operating under joint technical committee ISO/IEC JTC 1, Information technology: “Every customer – whether it’s a financial services company, whether it’s a retailer, whether it’s a manufacturer – is going to ask: ‘Who do I trust?’ Many aspects, including societal concerns such as data quality, privacy, potentially unfair bias and safety, must be addressed. This recently published technical report is the first of many works that will help achieve this.”

ISO/IEC TR 24028 briefly surveys the existing approaches that can support or improve trustworthiness in technical systems and discusses their potential application to AI. It also discusses possible approaches to mitigating AI system vulnerabilities and ways of improving their trustworthiness.

In addition to providing clearer guidance on trustworthiness and how it is embedded in IT systems, ISO/IEC TR 24028 will help the standards community better understand and identify the specific standardization gaps in AI and, importantly, how to address them through future standards work.

ISO/IEC TR 24028 was developed by joint technical committee ISO/IEC JTC 1, Information technology, subcommittee SC 42, Artificial intelligence, the secretariat of which is held by ANSI, ISO’s member for the USA. The new technical report can be purchased from your national ISO member or through the ISO Store.

Elizabeth Gasiorowski-Denis
