Responsible AI will give you a competitive advantage




There is little doubt that AI is changing the business landscape and providing competitive advantages to those who embrace it. It is time, however, to move beyond the simple implementation of AI and to ensure that AI is deployed safely and ethically. This is called responsible AI, and it serves not only as a safeguard against negative consequences but also as a competitive advantage in its own right.


What is responsible AI?


Responsible AI is a governance framework that covers ethical, legal, safety, privacy, and accountability concerns. Although the implementation of responsible AI varies by company, the necessity of it is clear: without responsible AI practices in place, a company is exposed to serious financial, reputational, and legal risks. On the positive side, responsible AI practices are becoming prerequisites for even bidding on certain contracts, especially when governments are involved, and a well-executed strategy will greatly help in winning those bids. Additionally, embracing responsible AI can improve the company's overall reputation.


Values by design


Much of the difficulty of implementing responsible AI comes down to foresight: the ability to predict what ethical or legal issues an AI system could raise during its development and deployment lifecycle. Right now, most responsible AI considerations happen after an AI product is developed, which is a very ineffective way to implement AI. If you want to protect your company from financial, legal, and reputational risk, you have to start projects with responsible AI in mind. Your company needs to have values by design, not whatever values it happens to end up with at the end of a project.


Implementing values by design


Responsible AI covers a large number of values that need to be prioritized by company leadership. While covering all areas is important in any responsible AI plan, how much effort your company expends on each value is up to its leaders. There has to be a balance between checking for responsible AI and actually implementing AI: if you expend too much effort on responsible AI, your effectiveness may suffer, while ignoring responsible AI is being reckless with company resources. The best way to manage this trade-off is to start with a thorough analysis at the outset of the project, not as an after-the-fact effort.


Best practice is to establish a responsible AI committee that reviews your AI projects before they start, periodically during the projects, and upon completion. The purpose of this committee is to evaluate each project against responsible AI values and to approve it, disapprove it, or disapprove it with required actions that bring the project into compliance, such as requesting that more information be gathered or that parts of the project be fundamentally changed. Like an institutional review board used to monitor ethics in biomedical research, this committee should contain both AI experts and non-technical members. The non-technical members can come from any background and serve as a reality check on the AI experts. The AI experts, on the other hand, may better understand the difficulties and remediations possible, but they can become so accustomed to institutional and industry norms that they are no longer sensitive enough to the concerns of the greater community.


What values should the Responsible AI Committee consider?


The values to focus on should be chosen by the business to fit within its overall mission statement. Your business will likely emphasize specific values, but all major areas of concern should be covered. There are many frameworks you can use for inspiration, such as Google's and Facebook's. For this article, however, we will base the discussion on the recommendations set forth by the High-Level Expert Group on Artificial Intelligence set up by the European Commission in The Assessment List for Trustworthy Artificial Intelligence. These recommendations cover seven areas. We will explore each area and suggest questions to ask about it.


1. Human agency and oversight


AI projects should respect human agency and decision making. This principle concerns how the AI project will influence or support humans in the decision-making process, as well as how the subjects of the AI will be made aware of it and come to trust its outcomes. Some questions that need to be asked include:


  • Are users made aware that a decision or outcome is the result of an AI project?

  • Is there any detection and response mechanism to monitor adverse effects of the AI project?

2. Technical robustness and safety


Technical robustness and safety require that AI projects preemptively address the risks of the AI performing unreliably and minimize the impact of such failures. The AI should perform predictably and consistently, and it should be protected from cybersecurity threats. Some questions that need to be asked include:


  • Has the AI system been tested by cybersecurity experts?

  • Is there a monitoring process to measure and assess risks associated with the AI project?
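
To make the monitoring question concrete, one common approach is to compare the distribution of the model's live predictions against the distribution seen at validation time. The sketch below is a minimal illustration in Python: the data is made up, and the 0.2 alert threshold is a widely used rule of thumb rather than a prescription. It computes the Population Stability Index, a simple drift metric:

```python
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between validation-time scores and live scores."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    live = np.clip(live, edges[0], edges[-1])    # keep live scores inside the bin range
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    live_frac = np.histogram(live, edges)[0] / len(live)
    ref_frac = np.clip(ref_frac, 1e-6, None)     # avoid log(0) on empty bins
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

# Illustrative data: scores captured at validation time vs. this week's traffic.
rng = np.random.default_rng(0)
validation_scores = rng.beta(2.0, 5.0, size=10_000)
live_scores = rng.beta(2.5, 5.0, size=10_000)

drift = psi(validation_scores, live_scores)
# 0.2 is a common rule-of-thumb alert level; tune it for your project.
print(f"{'ALERT' if drift > 0.2 else 'OK'}: PSI = {drift:.3f}")
```

In production, the same check would run on a schedule against logged prediction scores and feed into an alerting system so that adverse shifts trigger a model review.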

3. Privacy and data governance


AI should protect individual and group privacy, both in its inputs and its outputs. The algorithm should not use data that was gathered in a way that violates privacy, and it should not produce results that violate the privacy of its subjects, even when bad actors are trying to force such errors. To do this effectively, data governance must also be a concern. Appropriate questions to ask include:


  • Does any of the training or inference data use protected personal data?

  • Can the results of this AI project be cross-referenced with external data in a way that would violate an individual’s privacy?
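
One simple, illustrative check for the second question is k-anonymity: if any combination of quasi-identifiers in released results maps to only a handful of people, those rows can be re-identified by joining with external data. The sketch below uses hypothetical column names ("zip_code", "age_band", "gender") and is a starting point, not a complete privacy audit:

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Smallest group size when rows are grouped by quasi-identifier columns.
    A low value means some individuals are nearly unique and could be
    re-identified by joining with external data."""
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical released dataset; the three quasi-identifier columns are
# attributes an attacker could match against public records.
released = pd.DataFrame({
    "zip_code": ["10001", "10001", "10002", "10002", "10002"],
    "age_band": ["30-39", "30-39", "40-49", "40-49", "50-59"],
    "gender":   ["F", "F", "M", "M", "F"],
    "score":    [0.7, 0.4, 0.9, 0.2, 0.8],
})

k = k_anonymity(released, ["zip_code", "age_band", "gender"])
if k < 5:  # the minimum acceptable k is a policy decision, not a constant
    print(f"k-anonymity is only {k}; consider coarsening or suppressing rows")
```

Stronger guarantees, such as differential privacy, require dedicated tooling, but even this simple check catches obviously risky data releases.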

4. Transparency


Transparency covers concerns about the traceability of individual results and the overall explainability of AI algorithms. Traceability allows the user to understand why an individual decision was made. Explainability refers to the user being able to understand the basics of the algorithm used to make the decision, as well as which factors were involved in the decision-making process for their specific prediction. Questions to ask are:


  • Do you monitor and record the quality of the input data?

  • Can a user receive feedback as to how a certain decision was made and what they could do to change that decision?
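
Explainability can be approached with model-agnostic tooling. As a minimal sketch, permutation importance reports which input factors a trained model actually relies on; the model and data here are synthetic stand-ins, and per-decision feedback of the kind the second question asks about would require an instance-level method such as SHAP or LIME on top of this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global explainability: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```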

5. Diversity, non-discrimination, and fairness


To be considered responsible AI, the AI project must work as well as possible for all subgroups of people. While AI bias can rarely be eliminated entirely, it can be effectively managed. This mitigation can take place during the data collection process, by including people from more diverse backgrounds in the training dataset, and it can also take place at inference time to help balance accuracy between different groupings of people. Common questions include:


  • Did you balance your training dataset as much as possible to include various subgroups of people?

  • Do you define fairness and then quantitatively evaluate the results?
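
The second question implies choosing a concrete, quantitative definition of fairness. The sketch below uses demographic parity difference (the gap in positive-outcome rates between groups), which is only one of several competing definitions; the data and group labels are made up for illustration:

```python
import numpy as np

def fairness_report(y_true, y_pred, groups):
    """Per-group accuracy and selection rate, plus demographic parity difference."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        acc = (y_true[mask] == y_pred[mask]).mean()
        sel = y_pred[mask].mean()   # fraction receiving the positive outcome
        rates[g] = sel
        print(f"group {g}: accuracy={acc:.3f}, selection rate={sel:.3f}")
    # Demographic parity difference: gap between most- and least-favored group.
    print(f"demographic parity difference: "
          f"{max(rates.values()) - min(rates.values()):.3f}")

# Hypothetical model outputs with a binary group attribute.
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1_000)
y_true = rng.integers(0, 2, size=1_000)
y_pred = np.where(groups == "A",
                  rng.integers(0, 2, size=1_000),
                  (rng.random(1_000) < 0.35).astype(int))

fairness_report(y_true, y_pred, groups)
```

Which definition is appropriate (demographic parity, equalized odds, calibration, and so on) is a judgment call the responsible AI committee should make explicitly and document.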

6. Societal and environmental well-being


An AI project should be evaluated in terms of its impact on its subjects and users as well as its impact on the environment. Social norms, such as democratic decision making and preventing addiction to AI products, should be upheld, and the environmental consequences of the AI project's decisions should be considered where applicable. One factor applicable in nearly all cases is the amount of energy needed to train the required models. Questions that can be asked:


  • Did you assess the project’s impact on its users and subjects as well as other stakeholders?

  • How much energy is required to train the model and how much does that contribute to carbon emissions?
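
The energy question can be answered to a first approximation with back-of-envelope arithmetic. All of the inputs below are illustrative assumptions; substitute measured GPU power draw, your datacenter's PUE, and your region's grid carbon intensity:

```python
# Back-of-envelope training footprint estimate (all inputs are assumptions).
num_gpus = 8
gpu_power_kw = 0.30          # average draw per GPU in kW (e.g., ~300 W)
training_hours = 72
pue = 1.5                    # datacenter power usage effectiveness
grid_kg_co2_per_kwh = 0.4    # grid carbon intensity; varies widely by region

energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
emissions_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"Estimated training energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")
```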

7. Accountability


Some person or organization needs to be responsible for the actions and decisions made by the AI project, and for issues encountered during its development. There should be a system that ensures an adequate possibility of redress in cases where detrimental decisions are made, and some time and attention should be paid to risk management and mitigation. Appropriate questions include:


  • Can the AI system be audited by third parties for risk?

  • What are the major risks associated with the AI project and how can they be mitigated?

The bottom line


The seven values of responsible AI outlined above provide a starting point for an organization’s responsible AI initiative. Organizations that pursue responsible AI will find they increasingly have access to more opportunities, such as bidding on government contracts. Organizations that don’t implement these practices expose themselves to legal, ethical, and reputational risks.


David Ellison is Senior AI Data Scientist at Lenovo.
