Original Intelligence Predicts Capability and Achievement
Recent findings show that generating ideas beyond AI's reach signals cognitive ability, creativity, and real-world success.
Overview
Generative AI increasingly matches or exceeds human benchmarks on traditional measures of creativity and fluency. But it is now clear that AI’s outputs converge with each other, reflecting a statistical center shaped by shared data, probability distributions, and global usage patterns (Doshi & Hauser, 2024; Wenger & Kenett, 2025). What looks new in isolation becomes repetitive at scale. Because the same AI systems generate similar content for everyone, AI output alone is not a differentiator.
The core differentiator in the AI landscape is the human ability to produce ideas that fall meaningfully outside the AI-homogenized idea space. This matters most when humans collaborate with AI: the most valuable human contribution to human-AI collaboration is expanding the idea space. This capability—Original Intelligence (OI)—can be measured by quantitatively mapping AI-generated norms and tracking ideas that expand conceptual space beyond those norms. Unlike traditional creativity metrics that emphasize fluency or surface-level novelty (Amabile, 1983; Runco & Jaeger, 2012), OI objectively captures the ability to think in directions AI doesn’t.
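To make the measurement concrete, here is a minimal sketch in Python of one way such a score could be computed. It assumes ideas are compared as text embeddings; the embed helper is a hypothetical placeholder (the underlying research does not specify a particular model), and the score is simply cosine distance from the centroid of AI-generated ideas.

```python
# Minimal sketch of an OI-style score: distance from the AI-generated norm.
# `embed` is a hypothetical placeholder for any sentence-embedding model.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: return one unit-length embedding vector per text."""
    rng = np.random.default_rng(0)  # stand-in for a real embedding model
    vecs = rng.normal(size=(len(texts), 384))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def oi_score(human_idea: str, ai_ideas: list[str]) -> float:
    """Cosine distance of a human idea from the centroid of AI ideas.

    Larger values mean the idea sits farther outside the AI-homogenized
    region of idea space.
    """
    ai_vecs = embed(ai_ideas)
    centroid = ai_vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    human_vec = embed([human_idea])[0]
    return 1.0 - float(human_vec @ centroid)
```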
New research shows that OI is not only quantifiable but also stable, predictive, and highly consequential (Johnson, Moon, Kaufman, & Green, 2025). Individuals with higher OI consistently generate ideas that are more creative, more effective, and more distinct. Most importantly, OI predicts real-world achievement and tracks broader cognitive and creative capabilities. As AI comes to dominate the ordinary, OI is emerging as a leading indicator of differentiating value in the human-AI collaborative landscape.
High-OI Ideas Are High-Value Ideas
The pervasive influence of AI idea homogenization leads to a central question about the human side of human-AI collaboration. Are human ideas that are outside AI’s footprint simply different—or are they good ideas that AI misses?
To answer this, large-scale analyses were conducted across multiple datasets, including college admissions, problem-solving, and creative thinking (Guilford, 1967; Jauk et al., 2014; Luchini et al., 2025; Reiter-Palmon et al., 1998). These studies assessed whether high-OI ideas—those furthest from AI norms—were also high-value ideas.
Across every task, the answer was yes. Ideas that expanded the idea space further were also consistently higher-value ideas based on quantitative analyses of creativity, effectiveness, and quality (Crossley et al., 2023; Flesch, 1948). These results indicate that “thinking outside the bots” signals value. Distinctiveness is not merely deviation—it is contribution.
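A minimal sketch of this kind of analysis, using simulated data in place of the actual study datasets: correlate each idea's OI score with its rated value and inspect the direction and strength of the association. The variable names and numbers below are illustrative assumptions, not published results.

```python
# Sketch of the value analysis: correlate OI scores with quality ratings.
# Data are simulated; this is not the studies' dataset or effect size.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
oi_scores = rng.normal(size=200)                  # distance from AI norms
quality = 0.5 * oi_scores + rng.normal(size=200)  # simulated value ratings

rho, p = spearmanr(oi_scores, quality)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
# A reliably positive rho would indicate that ideas farther from AI norms
# also tend to be judged higher in value.
```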
Original Intelligence Is a Stable Individual Difference
OI is not limited to a specific task and is more than a momentary advantage. It reflects a consistent, measurable, individual-level characteristic.
OI emerged as a differentiator across multiple tasks, and when the same individuals generated multiple responses, their OI scores showed high internal consistency. This suggests that OI reflects a stable cognitive disposition, not random variability or surface-level stylistic quirks.
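One standard way to quantify internal consistency is Cronbach's alpha computed over each person's multiple response scores. The sketch below illustrates the calculation on simulated data; the sample size, number of responses, and reliability structure are assumptions for illustration, not the paper's values.

```python
# Sketch of an internal-consistency check: each row is one person, each
# column the OI score of one of their responses. Data are simulated.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_people, n_items) score matrix."""
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of person totals
    return (n_items / (n_items - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))                 # stable per-person OI level
scores = trait + 0.5 * rng.normal(size=(500, 8))  # 8 responses per person
print(f"alpha = {cronbach_alpha(scores):.2f}")    # high alpha = stable OI
```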
This finding matters. It indicates that some individuals reliably operate in regions of idea space that AI does not reach. OI, therefore, appears to reflect a general and transferable capability.
High OI Predicts General Cognitive Ability
If OI reflects a general thinking ability, it should correlate with other indicators of intellectual capacity. That prediction holds: OI is strongly associated with standardized cognitive performance.
In a sample of over 6,700 college students at a highly selective U.S. university, individuals with the highest OI scores also earned significantly higher scores on the SAT—a widely used measure of general cognitive ability. Students in the top 30% of the OI distribution consistently outperformed their peers in the bottom 30% by more than two standard deviations, a remarkably strong effect in educational research.
These results held across comparisons with both ChatGPT- and Claude-generated baselines, and across multiple bootstrapped random samples. The conclusion is robust: the capacity to generate distinct content that AI does not generate correlates strongly with core intellectual aptitude.
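For intuition about the comparison logic, which applies equally to the GPA analyses below, here is a sketch: a standardized group difference (Cohen's d) between top-30% and bottom-30% OI groups, plus a bootstrap check mirroring the robustness analysis described above. All data are simulated, so the printed effect size is illustrative, not the study's.

```python
# Sketch of the group comparison: Cohen's d between top-30% and
# bottom-30% OI groups, with a bootstrap check. All data are simulated.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(1)
oi = rng.normal(size=6700)
sat = 1200 + 120 * oi + 60 * rng.normal(size=6700)  # assumed OI-SAT link

top = sat[oi >= np.quantile(oi, 0.70)]
bottom = sat[oi <= np.quantile(oi, 0.30)]
print(f"d = {cohens_d(top, bottom):.2f}")

# Bootstrap resampling, mirroring the robustness check described above.
ds = [cohens_d(rng.choice(top, top.size), rng.choice(bottom, bottom.size))
      for _ in range(1000)]
print(f"95% CI: [{np.percentile(ds, 2.5):.2f}, {np.percentile(ds, 97.5):.2f}]")
```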
High OI Predicts Real-World Achievement in College Students
The strongest test of any cognitive metric is its ability to predict meaningful outcomes in real-world contexts.
Across two large college admissions datasets totaling 23,656 students, individuals whose application essays demonstrated high OI went on to earn substantially higher college GPAs than their peers. This effect held at both a highly selective private university (N = 6,766) and a moderately selective public university (N = 16,890).
The results were clear and consistent. At both universities, students in the top 30% of the OI distribution had GPAs well over a full standard deviation above those of their peers.
These effects held regardless of AI model (ChatGPT or Claude), essay prompt, or sampling variation, confirming that OI identified the individuals who went on to achieve the greatest success in a high-stakes context. Notably, these high-OI students also wrote essays that were easier to read and more coherent—refuting the idea that conceptual distinctiveness requires tradeoffs in clarity or quality.
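The readability finding rests on standard metrics such as Flesch Reading Ease (Flesch, 1948). The sketch below implements that classic formula; the syllable counter is a rough heuristic for illustration, not the scoring pipeline the studies actually used.

```python
# Flesch Reading Ease (Flesch, 1948): higher scores mean easier text.
# The syllable counter is a rough heuristic, for illustration only.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as the number of vowel groups."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(f"{flesch_reading_ease('The cat sat on the mat. It purred.'):.1f}")
```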
These results validate OI as a predictor of performance in real-world environments that demand sustained cognitive effort. OI is not only associated with good ideas—it is associated with better achievement outcomes.
OI Is Distinct from Traditional Creativity—and from AI
OI is not simply another name for creativity. Johnson et al. (2025) found that AI-generated content often received higher average creativity ratings than human content, especially when evaluated with standard scoring models. Yet these high-scoring AI outputs tended to cluster in a narrow band of idea space. That is, AI consistently produced similar ideas, even when those ideas were rated as creative.
In contrast, high-OI human content was more diverse, less repetitive, and broader in conceptual scope—and yet also rated higher in creativity when evaluated at the idea level. This reveals a key insight: traditional creativity tests, which predate modern AI, are increasingly insufficient. What they reward, AI can now emulate. What AI alone cannot do is generate ideas that reliably fall outside its own distribution. OI captures this distinctiveness, making it a more precise signal of human value for hybrid collaboration in the AI era.
Conclusion: Original Intelligence Is a Leading Indicator of Success in the AI Landscape
The ability to generate ideas that diverge from AI is a powerful indicator of value. Across multiple domains and large, high-stakes samples, Original Intelligence predicts the generation of better ideas, reflects stable individual differences, and forecasts real-world achievement at scale.
As AI becomes ubiquitous, the differentiators that confer value and enable innovation will depend less on speed or polish and more on human-AI collaborations that extend beyond AI homogenization. OI identifies the human ability to expand the idea space beyond AI, and enables organizations to find and grow this ability in talent development, institutional strategy, and the future of creative work.
References
Amabile, T. M. (1983). The social psychology of creativity. Springer.
Anderson, B. R., Shah, J. H., & Kreminski, M. (2024). Homogenization effects of large language models on human creative ideation. Proceedings of Creativity & Cognition ’24.
Crossley, S. A., et al. (2023). Readability metrics in educational evaluation.
Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), eadn5290.
Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32(3), 221–233.
Guilford, J. P. (1967). The nature of human intelligence. McGraw-Hill.
Jauk, E., et al. (2014). Creative potential and intelligence. Intelligence, 42, 10–21.
Johnson, D., Moon, K., Kaufman, J., & Green, A. (2025). Think outside the bots: Expanding idea space beyond AI predicts achievement. Preprint.
Luchini, M., et al. (2025). Human-AI collaboration in problem-solving. Manuscript in preparation.
Reiter-Palmon, R., et al. (1998). Problem construction and creativity. Creativity Research Journal.
Runco, M. A., & Jaeger, G. J. (2012). The standard definition of creativity. Creativity Research Journal, 24(1), 92–96.
Wenger, E., & Kenett, Y. (2025). We’re different, we’re the same: Creative homogeneity across LLMs. arXiv:2501.19361.