Considered one of the greatest mathematicians and physicists of the 19th century, his name forever enshrined in calculus textbooks, Pierre-Simon Laplace postulated in 1814 that if we could one day accurately calculate the movement of every atom in the universe, we would be able to predict both the future and the past, and become like gods.
This became known as Laplace’s demon, and its deterministic assumption has underpinned the scientific endeavors of physicists and mathematicians even to this day. Physicists after Maxwell became convinced that we were quickly approaching the end of physics: all that remained was to quantify more precisely and measure more accurately, because there was nothing fundamentally new left to be discovered.
Then along came quantum mechanics and Heisenberg’s uncertainty principle, and everything we knew about Laplacian determinism was tossed out the window.
Again, in the early 20th century, David Hilbert, the most authoritative mathematician in Germany at the time, predicted that we ought to be able to prove any proposition by starting from a suitable set of axioms. He even drew up a list of unsolved problems across the branches of mathematics and set his fellow mathematicians the task of completely axiomatizing those branches. Something to the effect of, “one day, we will arrive at the end of mathematics, when all those problems are solved,” was proclaimed.
Then along came Kurt Gödel, who proved that within any such system of axioms there are propositions that can never be proved true or false, and everything we knew about the logic of mathematics was overturned.
I could go on and on. The same story has repeated itself, over the last 400 years, in just about every other branch of higher learning. But you get the point.
I will preface this by saying that I know nothing about artificial intelligence or computer science, but my gut instinct tells me that AI will never be able to surpass human intelligence. There are hidden variables, uncharted physical laws, and other constraints set by the mystagogue of this universe that we do not yet know, and they will keep AI from ever becoming more intelligent than humans.
And I have a very simple way to test my hypothesis:
Ask AI what physical laws prevent AI from ever becoming more intelligent than humans. If AI can answer that question, and come up with a new theoretical framework beyond the Standard Model to explain it, then we should give AI the Nobel Prize in Physics, and rest assured that AI will never surpass human intelligence.
If it cannot find the physical laws that prevent AI from becoming more intelligent than humans, and instead implies that AI will one day surpass us, then it can only be as intelligent as humans, not more, because so far no human has come up with a new physical theory explaining why AI cannot surpass us in intelligence. At best, AI can be as smart as humans, but never smarter.
All in all, it is my opinion that AI will never be able to surpass human intelligence, and one day the next Einstein or Heisenberg or Gödel will explain why. Given what I’ve seen AI do so far, I’m pretty confident that it will be a human.
I greatly enjoyed reading this thoughtful and well-written blog post, especially the historical context. I disagree with the proposed test, though.
The gap in your reasoning is that you assume there must be a hard upper limit on intelligence, and that this limit happens to coincide exactly with the human level, so that a new physical paradigm would be required to exceed it; you give no reason why the universe would align its maximum with our particular evolutionary endpoint. Your test then ties ‘surpassing humans’ to one extremely narrow achievement, discovering new physical laws, an accomplishment only a tiny fraction of brilliant scientists ever manage, even though many of the rest are still far more intelligent than the average person.
Many researchers argue that intelligence is not capped at the human level under the laws we currently understand; see for example Bostrom (2014), ‘Superintelligence’. And several papers note that human intelligence is a poor benchmark for evaluating AGI, since future systems may think in ways that are unlike, but potentially more capable than, our own. For example, see Dolgikh (2024), ‘The Trap of Presumed Equivalence’.
http://www.tiktok.com/t/ZTrBnSBm4/
lin Chen is absolutely right. She’s smarter than all you white retadrs
Thank you for your kind words, but TikTok is not really a reliable source for information. I would prefer if you could cite scientific papers published in reputable journals.
“The Cat Sat on the …?” Why Generative AI Has Limited Creativity
https://onlinelibrary.wiley.com/doi/10.1002/jocb.70077
Since the introduction of Generative AI several years ago, there has been much debate regarding the capacity of this technology for creativity. This paper applies the standard definition of creativity to the output of Large Language Models (LLMs) and shows not only that this can be calculated ex ante, but that LLM output creativity has a fundamental upper limit. This upper limit, determined by the mechanism used to produce LLM outputs, moreover, is constrained to a level equivalent to the boundary between little-c and Pro-c creativity. Consequently, LLM creativity is mathematically constrained to a level equivalent to the boundary between amateur and professional human creativity. This has significant implications for claims about AI autonomy in creative tasks.
This paper has hypothesized that the creativity of generative AI—specifically, Large Language Models (LLMs)—is not merely a matter of subjective judgment or philosophical debate, but is, in fact, mathematically bounded. By applying the Standard Definition of Creativity as the product of effectiveness and novelty and translating this into the probabilistic mechanics of LLM output generation, we arrive at a simple yet profound constraint: creativity in an LLM is limited to the function C = E − E². This yields an upper limit of 0.25, a ceiling far below what is achievable by humans.
This limitation is not the result of poor training, inadequate prompting, or underpowered architectures. Rather, it is a fundamental consequence of how these systems operate: token selection is governed by probability distributions where effectiveness and novelty are negatively correlated. High-probability tokens are semantically appropriate but predictable; low-probability tokens are unexpected but often nonsensical. The result is a structural inability to simultaneously achieve both originality and effectiveness to the degree required for high creativity.
Even when LLMs appear to generate creative outputs—for example, poems, inventions, or artistic expressions—closer examination often reveals the guiding hand of human intervention, whether through selective prompting, evaluation, or postediting. Claims of machine creativity that overlook these human contributions risk conflating statistical generation with true ideation. In fact, a growing body of empirical research supports the assertion that LLM output creativity peaks at around the human average.
The implications of these findings are far-reaching. As generative AI becomes increasingly integrated into creative professions, education, and even intellectual property law, it is crucial to distinguish between outputs that are merely generated and those that are meaningfully creative. Misunderstanding this boundary may lead to misplaced trust, misallocated credit, or the erosion of the very concept of creativity.
Looking ahead, overcoming these limits would likely require a fundamental redesign of AI architectures beyond next-token prediction. This might involve the integration of autonomous goal setting, contextual validation, or embodied interaction with the world. Until such capabilities emerge, LLMs remain extraordinary mimics of creativity, but not its source.
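For anyone wondering where the 0.25 ceiling comes from: the paper defines creativity as the product of effectiveness and novelty, and its formula C = E − E² corresponds to taking novelty as the complement of effectiveness, N = 1 − E (my reading of the quoted text, not notation confirmed against the paper). The ceiling then follows from elementary calculus:

```latex
% Creativity as effectiveness times novelty, with novelty N = 1 - E (assumed):
%   C(E) = E(1 - E) = E - E^2,  for E in [0, 1]
\[
\frac{dC}{dE} = 1 - 2E = 0
\;\Longrightarrow\;
E^{*} = \tfrac{1}{2},
\qquad
C_{\max} = \tfrac{1}{2} - \left(\tfrac{1}{2}\right)^{2} = \tfrac{1}{4} = 0.25 .
\]
```

So the 0.25 is just the maximum of a downward parabola; the substantive claim is the negative coupling between effectiveness and novelty, not the arithmetic.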
You say: “High-probability tokens are semantically appropriate but predictable; low-probability tokens are unexpected but often nonsensical. The result is a structural inability to simultaneously achieve both originality and effectiveness to the degree required for high creativity.”
This seems true only for single-pass, one-shot generation. You could run several iterations with low-probability tokens to create novel proposals, then verify their validity in a second pass that uses high-probability tokens to select the sensible ones. Isn’t that effectively what humans do as well? A toy sketch of what I mean is below.
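Here is a minimal toy sketch of that two-phase loop, just to make the idea concrete. `propose` stands in for high-temperature (low-probability) sampling and `plausibility` for the high-probability verification pass; both are hypothetical stand-ins, not real LLM calls:

```python
import random

def propose(n_candidates: int) -> list[str]:
    """Phase 1: draw diverse, low-probability (high-novelty) candidates."""
    fragments = ["mat", "throne", "quantum rug", "moonbeam", "banana"]
    return random.choices(fragments, k=n_candidates)

def plausibility(candidate: str) -> float:
    """Phase 2 scorer: how sensible is 'The cat sat on the <candidate>'?
    A real system would use model likelihood here; these are toy scores."""
    scores = {"mat": 0.95, "throne": 0.6, "quantum rug": 0.3,
              "moonbeam": 0.2, "banana": 0.4}
    return scores.get(candidate, 0.1)

def propose_and_verify(n: int = 20, threshold: float = 0.35) -> list[str]:
    """Sample broadly for novelty, then keep only the sensible proposals."""
    return [c for c in propose(n) if plausibility(c) >= threshold]

print(propose_and_verify())
```

The point is that novelty and effectiveness are optimized in separate passes rather than in a single token draw, so the single-draw trade-off the paper describes need not bind the pipeline as a whole.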
Maybe you should take the issue up with the author of the article. I’m sure he will appreciate it.
Corresponding Author
David H. Cropley
UniSA STEM, University of South Australia, Adelaide, South Australia, Australia
This is very interesting. I will read through the paper before I make further comments.
AI is created by humans, as a construct of algorithms and programs. What is man-made is not perfect, and as with everything in the virtual world, the rule “shit in, shit out” applies 100%.
One thing AI will never have is a conscience or a soul; it will never have a moral compass or common sense.
So will it be smarter? No.
Will it be faster? Perhaps…
Is it able to process immense quantities of data? Yes, and we should focus on this.
But it will never be human, with horny feelings, dirty minds, and sexual desires.
You don’t seem to grasp how the human mind works, or what inputs it uses to work the way it does. All of this is technically replicable. It won’t be long….
The more insidious danger is the chunk of humanity diminishing its own intelligence and creativity through reliance on AI and use of it without critique.
Excuse my bad writing; English is not my first language. To begin with, current AI like ChatGPT are just word predictors; they’re nothing like human intelligence, not even close, so we’re still far from an AI that surpasses humans.
Lin Chen you should leave intellectual deliberations like this to white men.
The AI we now have is dumb, very dumb.
But I can assure you that one day, not too far in the future, AI will beat human intellect by a mile and more.
We should be afraid, very afraid, when it does!
The average white men that I have seen in this post is much dumber than Ling.
Whites should focus on wanking instead of interfering with her work. I always see raging white incels accusing her for something that she did not do.
LOL If only you knew…
This is a good website. It showcases exactly how retarded most white people really are. The China century is here. White pigs be gone LMAO
It is true that East Asians have higher IQ on average, but western European men and Ashkenazi Jewish men dominate in the profoundly gifted segment of the human population.
That said … I do agree with the sentiment that most white men who come to this site are not particularly known for their intelligence.
But would your hypothesis apply to a quantum computer that could consider all of the probabilities simultaneously?
I’m not sure – that would be a capacity that humans simply don’t possess.
Would that make it “smarter”? I honestly don’t know, but it might.