
“AI is at a trust inflection point,” according to a March 2024 report in which 35% of 32,000 respondents rejected AI (Edelman, 2024). The report also noted that overall trust in the technology business sector was flat, while trust in other business sectors had increased.

Regarding hesitancy to adopt AI, John Lombard, CEO of NTT Data, put it succinctly in a June 27, 2024 CNBC interview: “The biggest challenge governments have…is trust.”

The answer to the trust challenge is not democratization, especially as defined by organizations with commercial interests in spreading, and selling, AI technology. Accessibility, though important, is by itself a hyper-narrow definition of democratization.

The answer to the low level of trust in AI is constructing AI trustworthiness. Trustworthiness is constructed by instituting step-by-step processes of risk reduction and harm mitigation for the risks taken in building AI, coupled with the expectation of positive outcomes from taking those risks.

As an example of constructing trustworthy AI, 20 high-level steps to build ChatGPT are listed here, and each step is treated as a risk taken by a ChatGPT user.
The 20 steps are also placed into four groupings: Data Handling, Model Development, Optimization & Tuning, and Deployment & User Interaction. For example, the Data Handling group includes three steps: Data Collection, Data Cleaning, and Tokenization.
In turn, four layers of risk reduction are described: for the individual ChatGPT steps, for the groups of steps, for the relationships between steps and groups of steps, and for the relationships with external stakeholders. Each risk reduction capability is briefly described as a process, action, or policy, and two entities (vendors or organizations) that exemplify the capability are noted.
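
To make the structure concrete, here is a minimal sketch (in Python, and not from the article itself) of one way to represent the step/group taxonomy. Only the three Data Handling steps are named in this section, so the other groups are left as placeholders:

```python
# Minimal sketch of the ChatGPT step/group taxonomy described above.
# Only the Data Handling steps are named in this section; the remaining
# groups are placeholders for steps introduced later in the article.

GROUPS = {
    "Data Handling": ["Data Collection", "Data Cleaning", "Tokenization"],
    "Model Development": [],              # steps listed later
    "Optimization & Tuning": [],          # steps listed later
    "Deployment & User Interaction": [],  # steps listed later
}

def group_of(step: str) -> str:
    """Return the group a given ChatGPT step belongs to."""
    for group, steps in GROUPS.items():
        if step in steps:
            return group
    raise KeyError(f"Unknown step: {step}")

print(group_of("Tokenization"))  # -> Data Handling
```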

The risk reduction capability of each layer applied to each ChatGPT step is color-coded. The color-coded risk reduction capabilities are force-ranked relative to the other risk reduction capabilities for that step: Stronger (Green), Moderate (Orange), Lower (Red), and Needs Development (Purple), with theoretical risk reduction capabilities noted.

The point of the color coding and force ranking is to show that trustworthiness is always an ongoing process. By segmenting the layers applied to each step and force-ranking them for risk reduction, at any given time a specific layer will sit in a stronger or weaker position relative to the others, signaling an area for improving risk reduction.
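
As a rough illustration (a hypothetical sketch, not tooling from the article), the force ranking could be encoded and queried as follows. The layer names reflect one reading of the four layers described above, and the example rankings for the Data Cleaning step are invented:

```python
# Hypothetical encoding of the color-coded force ranking for one ChatGPT step.
# Layer names follow one reading of the four layers described above; the
# rankings below are invented for illustration only.

RANK = {"Stronger": 3, "Moderate": 2, "Lower": 1, "Needs Development": 0}
COLOR = {"Stronger": "Green", "Moderate": "Orange",
         "Lower": "Red", "Needs Development": "Purple"}

# Example force ranking of the four risk reduction layers for Data Cleaning.
data_cleaning_layers = {
    "Step-level capability": "Stronger",
    "Group-level capability": "Moderate",
    "Step/group relationships": "Lower",
    "External stakeholder relationships": "Needs Development",
}

# The weakest-ranked layer signals where risk reduction most needs improvement.
weakest = min(data_cleaning_layers,
              key=lambda layer: RANK[data_cleaning_layers[layer]])
rating = data_cleaning_layers[weakest]
print(f"Improve: {weakest} -> {rating} ({COLOR[rating]})")
```

Run across all 20 steps, the same query surfaces, at any given time, which layer of which step is the current weakest point.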

Furthermore, areas in the layers where risk reduction is not robustly present are noted, and options that, in theory, may yield further risk reduction are explored; at the very least these are areas for development, and in some cases brainstormed ideas and options are suggested. Additionally, in many areas, harm mitigation options (ideas to reduce the harm of the risks) are briefly described.

Note that while the four layers of risk reduction capabilities listed for each of the 20 steps amount to a substantial construction of trustworthy AI, the treatment is not comprehensive. Rather, the point is that constructing trustworthy AI is a step-by-step, multi-layered approach and, importantly, that describing risk reduction in this way advances actual, as well as perceived, trustworthiness. (A much more comprehensive treatment could include up to 14 risk reduction layers per ChatGPT step.)

Trust in AI is at a crossroads. A step-by-step process of constructing trustworthy AI is a way forward. The 20 ChatGPT steps provided here are a simple example of actions, processes, and policies toward trustworthy AI.

References:

  • Edelman, M. (2024, March 21). Technology’s tipping point: Why now is the time to earn trust in AI. World Economic Forum.

Dr. Bradley K. Canham, MBC, EdD
