Considered one of the greatest mathematicians and physicists of the 18th and 19th centuries, whose name is forever enshrined in calculus textbooks, Pierre-Simon Laplace postulated in 1814 that an intellect that knew the position and motion of every atom in the universe could calculate the future and the past alike, and in that sense become like a god. 

This became known as Laplace’s demon, and its deterministic assumption has underpinned the scientific endeavors of physicists and mathematicians ever since. Physicists after Maxwell became convinced that we were quickly approaching the end of physics: all that remained was to quantify more precisely and measure more accurately, and there was nothing fundamentally new left to be discovered. 

Then along came quantum mechanics and Heisenberg’s uncertainty principle, and everything we knew about Laplacian determinism was tossed out the window. 

Again, in the early 20th century, David Hilbert, the most authoritative mathematician in Germany at the time, predicted that we ought to be able to prove any proposition if we start from a suitable set of axioms. He even drew up a list of unsolved problems across the branches of mathematics and set other mathematicians the task of completely axiomatizing those branches. Something to the effect of, “one day, when all those problems are solved, we will arrive at the end of mathematics,” was proclaimed. 

Then along came Kurt Gödel, who proved that in any consistent axiomatic system rich enough to describe arithmetic, there are propositions that can never be proven true or false within that system, and everything we knew about the logic of mathematics was overturned. 

I could go on and on. The same story has repeated itself, over the last 400 years, in just about every other branch of higher learning. But you get the point. 

I will preface this by saying that I know nothing about artificial intelligence or computer science, but my gut instinct tells me that AI will never be able to surpass human intelligence. There are hidden variables, uncharted physical laws, and other constraints set by the mystagogue of this universe that we do not yet know, and they will keep AI from ever becoming more intelligent than humans. 

And I have a very simple way to test my hypothesis:

Ask an AI what physical laws prevent AI from ever becoming more intelligent than humans. If the AI can answer that question and come up with a new theoretical framework, beyond the Standard Model, to explain it, then we should give it the Nobel Prize in Physics and rest assured that AI will never surpass human intelligence. 

If it cannot find the physical laws that prevent AI from becoming more intelligent than humans, and instead implies that AI will one day surpass us, then that only shows it is at most as intelligent as humans, not more, because no human has yet come up with a new physical theory explaining why AI cannot surpass us in intelligence. At best, AI can only be as smart as humans, but never more.

All in all, it is my opinion that AI will never be able to surpass human intelligence, and that one day the next Einstein or Heisenberg or Gödel will explain why. Given what I’ve seen AI do so far, I’m pretty confident that it will be a human.