AI Risks and Lost Opportunities (Part One)

by Randall Craig
Filed in: Make It Happen Tipsheet, AI, Blog, Content, Digital Strategy, Strategy, Technology

When you make your business decisions, do you weigh the benefits against the inherent costs and risks? The answer seems obvious, but when it comes to understanding and weighing AI-based risks, the answer becomes a bit murky.

There are two reasons why a risk assessment for new technologies (like ChatGPT and other AI tools) does not happen: most leaders are not aware of the risks (and lost opportunities), and individuals can use both their work computers and personal devices to access these tools, in the same way they access Google or other internet-based resources. It’s open season, where everyone in the organization can experiment, reap the benefits, and be a hero… with little regard for the downside.

This is part one of a two-part series; this article addresses risk, while the next one focuses on lost opportunities.

Nine AI Risks

Hallucination: AI tools are prone to making things up that are just not true. Newspapers and magazines have long used [human] fact-checkers to ensure there were no errors in what they published: the same must be done with AI-generated content.

Bias: Any bias in the output is based completely on the nature of the training data. Most large language models scrape their data from the internet, and there is an implicit assumption that this data contains no bias… which is untrue. At the most basic level, the training data is American-centric. There are also arguments that the training data doesn’t accurately reflect the diversity (cultural, language, race, gender, orientation) of the overall population, and that it over-represents the younger male tech demographic while under-representing other segments, such as seniors, artists, homemakers, etc.

Flawed conclusions: The AI output is a function of both the training data used and the data entered by the user as part of their prompts. If incomplete or incorrect information comes from either source, the output will be similarly flawed. While the training data and AI models will no doubt improve over time, mitigating this risk requires providing more extensive context within the prompts, and also a reality-check review of the output itself. Another way to address this is to use a “private” AI model that also uses your organization’s data for training.

Incomplete answers: ChatGPT output always sounds reasonable, but what might be missing? When asking for a list of the top five factors for _____, what if the most important factor was actually number six, which didn’t make the cut? Or what if the most important factor for your audience was actually number seven? One way to address this is to ask for more information than you will eventually need, and then choose what to use (and not use) from the output.
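For example, instead of asking for “the top five factors for _____,” ask for the top ten, and then select the five that matter most to your audience.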

Unintended leakage of private intellectual property: Many public AI systems will use whatever you feed into your prompts as future training data. So when employees use AI to answer questions or generate content, they may unwittingly be sharing your confidential information, trade secrets, and IP. To mitigate this risk, an AI usage policy, employee training, and restrictive firewall access rules can help. But even with all of this in place, employees can still circumvent these restrictions by using their personal devices.

No permissions for training data: Your organization has likely made significant investments in publicly available (but copyrighted) content. But your investment was not intended to be used by third parties, without permission or payment, to create derivative works such as ChatGPT, Bard, and other systems. As galling as their theft of your content might be, the business risk is that your competitors can query the AI and then quickly replicate your content, but in their voice and their context. Mitigation: some of the AI platforms are just now publishing technical specifications (e.g. robots.txt directives) to let you prevent your website from being scanned for future versions.
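As an illustration, a site owner might add lines like these to their robots.txt file. GPTBot (OpenAI) and Google-Extended (Google’s AI training crawler, used for Bard) are the crawler names those platforms have published; note that honouring these directives is voluntary on the crawler’s part:

    # Ask OpenAI's crawler not to scan this site for AI training
    User-agent: GPTBot
    Disallow: /

    # Ask Google's AI training crawler (used for Bard) to do the same
    User-agent: Google-Extended
    Disallow: /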

No attribution of source material: In most AI systems, there isn’t any attribution of the source data that was used to build the response. This means that it is impossible to say whether “original” output from ChatGPT is actually original, or whether it is, by its very nature, plagiarized. The risk to your business comes from both edges of this sword: if you use ChatGPT to create content without attribution, there may be a legal (and market) risk. And if others do the same, they can pass your work off as their own, gaining a market advantage.

No transparency: Currently, there is no way to audit how an AI system came up with its responses: it is a “black box” problem. As a result, it is just about impossible to uncover bias, source data/attribution issues, or any of the other problems addressed earlier in this note. Transparency would mitigate many of these risks, and would also open the door to the assurance provided by third-party verification.

Inaction: Despite the risks (and the lost opportunities that part two of this series describes), doing nothing, or actively forbidding experimentation, also has a downside. No new possibilities are explored, no workflow efficiencies are incorporated, and no cost savings are possible. Lacking all of these, competitors could deliver better services at lower cost to your clients than you can: inaction leads to competitive disadvantage. Mitigating this risk means allowing controlled experimentation to take place.

This Week’s Action Plan:

Many of these risks can be addressed through staff training, a formal AI policy, choice of platform, and firewall access policies. This week, identify any quick changes that you can make to mitigate these risks.

Related Posts: ChatGPT — Is it stealing your content?

Does this topic resonate? Reach out to Randall: he can present it to your group.  (More presentation topics)
Download Randall’s professional credentials: Speaker credentials one-sheet or Management Advisory credentials.

Content Authenticity Statement: 100% original content: no AI was used in creating this content.

@RandallCraig (Follow me for daily insights)
www.RandallCraig.com: Professional credentials site.
