- The role of Chief AI Ethics Officer (CAIEO) is gaining prominence in large companies as digital transformation becomes more complex and AI adoption grows rapidly across all industries;
- Forward-looking companies look to the CAIEO to implement AI-related corporate values across all divisions of the organization. CAIEOs must ensure that the AI technology developed, used and deployed is trustworthy, and that developers have the right tools, education and training to easily integrate these properties into what they produce;
- CAIEOs should have multidisciplinary knowledge of AI techniques, tools and platforms, AI risks and their impact on society, business strategy, industries and public policy, as well as strong communication skills;
- A new report from the Global Future Council on AI for Humanity explores the role of the CAIEO and others in implementing AI ethics in an organization.
Artificial intelligence (AI) affects the lives of billions of people, rapidly transforming our society and questioning what it means to be human. Some think AI is just a buzzword, but it is powerful enough to enable solutions across industries, from personal digital assistants to fraud and failure prediction, autonomous and assisted driving, and health diagnostics. AI can help personalize education and tutoring, create new jobs, and help fight the COVID-19 pandemic and its aftermath.
Along with its positive effects, some AI applications raise legitimate concerns and risks. AI ethics is the multidisciplinary, multi-stakeholder field of study that aims to define and implement technical and non-technical solutions to address these concerns and mitigate risks.
AI solutions could, for example, unintentionally generate discriminatory results because the underlying data is biased in favor of a particular population segment. This could exacerbate existing structural injustices, further distort the balance of power, threaten human rights and limit access to resources and information.
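To make this risk concrete, one common check is to compare favourable-decision rates across population segments (the "demographic parity" gap). The sketch below is a minimal, hypothetical illustration; the loan-decision data and the 50-point gap it reveals are invented for this example.

```python
# Hypothetical illustration: measuring a demographic parity gap in
# model outcomes. All data here is invented for the sketch.

def selection_rate(outcomes):
    """Fraction of favourable (positive) decisions in a group."""
    return sum(outcomes) / len(outcomes)

# 1 = loan approved, 0 = denied, for two population segments
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 2 of 8 approved

gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity gap: {gap:.2f}")  # a large gap flags potential bias
```

A check like this does not prove discrimination on its own, but a large, persistent gap is exactly the kind of signal an AI ethics function would require teams to investigate before deployment.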
Some AI systems might behave like black boxes with little or no explanation as to why they are making their decisions. According to FICO’s latest report on the state of responsible AI, two-thirds (65%) of companies surveyed cannot explain how specific AI-based decisions or predictions are made. This could erode trust in AI and thus hamper its adoption, thereby reducing the positive impacts of this technology. It could also damage a company’s reputation and the confidence of its customers, as well as go against the company’s values.
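By contrast, an interpretable model can decompose each decision into per-feature contributions that a reviewer can audit. The sketch below assumes a simple linear scoring model with invented weights and feature names; it is an illustration of explainable scoring, not any company's actual system.

```python
# Hypothetical sketch: a linear model's score decomposes into per-feature
# contributions, so each decision can be explained. Black-box models
# lack this built-in decomposition. Weights and features are invented.

weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}

# Contribution of each feature = weight * feature value
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Report features in order of influence on this decision
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>15}: {c:+.2f}")
print(f"{'total score':>15}: {score:+.2f}")
```

Here the reviewer can see that the applicant's debt ratio dominates the negative score, which is the kind of account that "two-thirds of companies cannot give" for their black-box predictions.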
The main objective of a Chief AI Ethics Officer (CAIEO) is to integrate the ethical principles of AI into operations within a company, organization or institution. A CAIEO advises and builds accountability frameworks for CEOs and boards of directors on unforeseen risks posed by AI to the organization. They should help companies comply with existing or expected AI regulations and oversee the implementation of many of the organization’s AI governance and ethics training functions.
Several companies, such as BCG, Salesforce, IBM and Microsoft, already have this role under various titles – Head of Responsible AI, AI Ethics Global Leader, Global AI Ethicist or Chief Responsible AI Officer. “[T]he call for specialists in the ethics of artificial intelligence is growing louder as technology leaders publicly recognize that their products can be faulty and harm jobs, privacy and human rights,” said a recent WSJ article.
At a very high level, companies need an AI ethics framework to ensure that AI-based solutions are developed in a way that mitigates the risk of harm to relevant stakeholders. Specifically, a CAIEO should lead the setting of broad AI ethics goals and then help the company understand how to achieve them. They need to ensure that the AI technology under development has the appropriate properties (fairness, robustness, explainability) and that developers have the appropriate tools and training to easily integrate these properties into what they produce. They also need to ensure that the risks of deploying AI (both internally and for other companies) are mitigated appropriately.
To achieve all this, the CAIEO role requires:
- Multidisciplinary knowledge: AI ethics issues cannot be solved by technical solutions and adherence to relevant policies, standards and laws alone. CAIEOs need multidisciplinary knowledge and skills, including technical knowledge of AI, ethical considerations, familiarity with social science and technology law, and know-how in business strategy. They should also facilitate the creation of tools and frameworks for product teams to develop AI responsibly, and work with the entire enterprise, other professional organizations and policy makers to help shape the laws, norms and standards that define and govern best practices globally.
- Effective and inclusive governance: Companies can support and oversee a CAIEO by creating an AI Ethics Committee, led by the CAIEO with representatives from all divisions of the company, and endowed with decision-making power, visibility and the authority of a governance body fully supported by the CEO and senior management. The CAIEO can help each committee member understand AI ethics solutions in their respective business units and integrate them into operations. In large organizations, CAIEOs should implement both centralized, top-down governance initiatives, such as corporate guidelines that outline how the entire organization should detect and mitigate AI bias during the creation or use of an AI solution, and bottom-up initiatives, such as business-unit-specific tools and practices.
- Strategic differentiation and business value: So far, much of the argument for CAIEOs has focused on risk reduction, but this role should also prompt companies to view AI ethics as a source of value and a strategic differentiator rather than a simple set of safeguards to comply with. The ethical principles of AI and their implementation should be linked to the company's values and business model so that all internal and external stakeholders can appreciate the value of developing, deploying and using AI responsibly. CAIEOs need to understand the business value of investing in ethics and fairness, including the costs of product development, implementation and commercial adoption.
- Public communication and advocacy: CAIEOs also need communication skills to facilitate dialogue and trust between stakeholders within and outside the company. Helping people understand issues and persuading them to change their actions requires strong communication skills and working across all parts of the organization. A CAIEO must prepare content for target audiences and advance dialogue and debate. AI ethics involves a deep understanding of regulation, governance and policy issues. However, laws and regulations still lag behind technology and AI adoption, so a focus on values, standards and public perception is essential.
- Company-wide engagement: All of this cannot be done by one person (or team); it requires a company-wide approach in which all business units contribute to achieving these goals. AI requirements must be defined, technical tools built and educational material produced. Customers must be engaged and teams trained. According to the IEEE's Ethically Aligned Design guidelines for autonomous and intelligent systems: “[C]ompanies need to create roles for top marketers, ethicists or advocates who can pragmatically implement ethically aligned design, in both the technology and the social processes that support value-based system innovation.”
Data and AI are becoming essential to most businesses. As organizations reimagine their business models and strategic value in this new data-centric age, it is imperative that they identify and address AI ethics issues and risks correctly and effectively. To do this – and to lead the creation of both social and business value by building, deploying and using AI responsibly and in a trustworthy way – we advocate starting with the appointment of a CAIEO. This role will enable a company-wide approach to AI ethics.