
AI Hallucinates 25% of the Time. What Does That Mean for Contract Managers?

Recently, I posted a short blog—in Dutch—about the future of IT contract management in the age of AI. Here's the English translation. Some of the feedback I received was that I underestimated the power of AI and should experiment more before drawing conclusions. That is interesting, because I actually use AI a lot—especially ChatGPT.


Professionally, I use AI to draft contract clauses, test business ideas, write and review posts, review my website, and even support proposal writing.

Personally, I rely on it for travel advice, choosing a car, exploring investment options, and even navigating the search for a nursing home for my father.


Do I use paid versions?

Yes for Copilot, partially for Gemini, and no for ChatGPT.


Overall, the experience is very positive: AI is fast, resourceful, and surprisingly good at tone-of-voice.


But there are real downsides. AI still hallucinates—sometimes dramatically. At a recent Gartner Local Executive Briefing, a senior partner estimated hallucination rates at around 25%. I haven’t measured it myself, but I encounter clear examples regularly. And as Sundar Pichai (CEO, Alphabet) recently stated: AI models “are prone to errors” and should be used alongside other tools—not as the sole source of truth.


So what does this mean for IT contract management? My take:


1. Build or adopt an in-house AI model with your IT department.

This mitigates issues around IP, confidentiality, data quality, and hallucinations by training on your own “clean” internal data.


2. Define clear use cases.

For example, I recently needed an LOI to begin negotiations with a business-critical software supplier. Legal was understaffed and had no templates. With appropriate confidentiality safeguards, I used ChatGPT to generate a strong first draft.


3. Elevate contract management in the AI roadmap.

Contract management is rarely at the top of AI project priorities. To get attention, build a clear business case: revenue impact, cost reduction, better compliance, shorter cycle times, etc.


4. Stay critical of AI-generated output.

Always validate with senior colleagues or subject-matter experts. In my experience, around 20% of content needs obvious correction. Another 10% contains subtle errors only experts will spot. Build review cycles around that reality.


