As someone who has led many change management projects, I am concerned that the federal government and many of our most important regional employers are taking an approach to AI transformation that will fail.
The government may not be focusing as it should on the interplay between the efficiency of AI and the value added by human insight and experience.
Some of this is not surprising. A survey of government AI change leaders, for example, showed far greater concern with solving AI technology adoption challenges (data and governance) than with culture and human adoption. Those of us in the field hear anecdotal reports of internal concern among government workers about comprehensive deployment of AI agents or use of large language models (LLMs), at the IRS or the U.S. Patent and Trademark Office (USPTO), for example. Or we learn of problems with Social Security customer service, where the deployment of AI agents has not kept pace with citizen demand.
I suspect that these vignettes are indicative of a more systemic problem. If our change leaders are over-indexing on AI transformation as only a technological challenge, the consequences for our government, the government contractor industry, and government services will be significant and negative.
Limitations of LLMs, AI Agents
We should acknowledge three inherent structural limitations to LLMs and AI Agents.
The first is that they are probabilistic models, which means they will at times be dangerously wrong, and more often incorrect in a nuanced way. I use AI all the time, and I am often struck by the seductive certainty of the answers I get, and by how often they contain a nuanced misstatement or an outright error, particularly on more complex tasks. I'm sure I am not alone in this. The fact is that as wrong answers become more nuanced, they will not be obvious to those who lack training, experience, and context, or who are just plain lazy.
The second is that it is demonstrable, and increasingly widely reported, that using AI without training and a conscious effort to reinforce human creativity creates a toxic brew of organizational groupthink and mediocrity.
The third is that AI trainers have largely run out of fresh human data and are now training their models on AI-derived data. AI narcissism, in which models begin to prefer synthetic data in their answers, is an emerging problem that will reinforce the existing predisposition toward homogeneity.
The Solution
The solution to each of these limitations includes people who are original thinkers.
Why does AI transformation focus more on technology than on those people? Some of this is due to the tech world's love of anything new: AI is just the latest in a long line of technologies that boosters claim will do anything and solve any problem. Look at discussions of the World Wide Web in 2000 for a useful historical reminder, and look at what it has actually become.
But put aside the enthusiastic boosters. Those who want to value humans in an economic analysis of AI transformation have faced a second problem: the tools haven't been there.
The assessment tools currently used to find people who can flourish in an AI-transformed environment have limited or no predictive utility. These tools fail when people can use LLMs to "cheat." Wherever there are "correct" answers to these tests, and there must always be correct and expected answers for comparisons to be made, AI renders the tests obsolete. Once a candidate learns a "correct" answer through an LLM and shares it with others, whether directly or indirectly as the LLM ingests the "right" responses through relentless indexing and data crawling, that answer becomes immediately available to all.
Faced with the inherent limitations of existing tools, the change leader for AI transformation has a problem: how to benchmark capabilities, train staff, and provide an objective and fair process. Perhaps it's not surprising that many leaders simply throw up their hands and focus only on the efficiency gains of AI. It's easier to focus on the things you can measure.
The Case for Original Intelligence
The good news here is you can now objectively measure “original intelligence,” an individual’s ability to expand an idea space and provide objective novelty — whether compared to other people or against the output of LLMs and AI Agents.
People with original intelligence will add value by providing the novelty and differentiation that LLMs and AI agents cannot. They are curious self-starters who are willing to take risks and more comfortable in leadership roles. Original thinkers, not surprisingly, are also much better at taking tools and creating something new; they are the bellwethers of any enterprise transformation, including AI transformation.
Measures of original intelligence can shape training and engagement around individual capabilities, and they can also assess progress. New entrants can be identified as "AI ready" through an objective and fair process. Original intelligence can be the hallmark of a new approach to government service and consulting, one that highlights and celebrates the originality of our workforce and its ability to use AI to create something new.