
Michio Kaku on the Collapse of Moore’s Law

Moore’s Law has been around since 1965, when Gordon E. Moore (who went on to co-found Intel) described it in a paper. Since then, the number of transistors that can be placed inexpensively on an integrated circuit has roughly doubled every two years. A related rule of thumb holds that chip performance doubles every 18 months.
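To get a feel for how quickly that doubling compounds, here is a minimal sketch (not from the article; the 1971 Intel 4004 baseline of roughly 2,300 transistors is a well-known figure, and the function name is my own):

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistor count, assuming a doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Twenty years = ten doublings = ~1,000x; forty years = ~1,000,000x.
for year in (1971, 1991, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Ten doublings multiply the count by 1,024, which is why a trend that looks gentle year to year becomes staggering over decades.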

But Moore’s Law won’t hold forever, and in the video below theoretical physicist Michio Kaku explains how it will collapse. Kaku argues that the collapse isn’t going to happen in some distant future, but within the next decade.

The problem is finding a replacement for silicon, coupled with the exponential nature of Moore’s Law: quite simply, computing power cannot keep doubling every two years indefinitely.

The other issue is that we are about to reach the limits of silicon. According to Kaku, once chip production gets down to 5 nm processes, silicon is finished: any smaller and processors will simply overheat.

What’s beyond silicon? There have been a number of proposals: protein computers, DNA computers, optical computers, quantum computers, molecular computers. Dr. Michio Kaku says “if I were to put money on the table I would say that in the next ten years as Moore’s Law slows down, we will tweak it.”

So, what do you think?

Is Michio Kaku right or is he going to be just one among many who wrongly predicted the demise of Moore’s Law?!


  • Seems like Kaku was open to quantum computers at the very end of the video. I agree that Moore’s Law won’t hold up after the limit of silicon. But I also believe the replacement technology that comes after silicon will be so revolutionary that Moore’s Law will be back on track again. The plot of Moore’s Law isn’t smooth on close examination, but viewed from a distance it evens out.

  •  Feynman: “There’s plenty of room at the bottom.” Kaku: “There’s no more room at the bottom.” What a difference half a century makes!

  • Good point Stephen,

    Ray Kurzweil repeatedly points out that Moore’s Law is just one example of the Law of Accelerating Returns:

    “Moore’s law of Integrated Circuits was not the first, but the fifth paradigm to forecast accelerating price-performance ratios. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to [Newman’s] relay-based “[Heath] Robinson” machine that cracked the Nazi [Lorenz cipher], to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.”

    Thus I think it is very likely it will remain even after we move beyond silicon…

  • Pingback: Hugo de Garis on Singularity 1 on 1: Are We Building Gods or Terminators?

  • Pingback: The Physics of Everything: Michio Kaku Puts The Universe in a Nutshell

  • Arthur Piehl

    I both agree and disagree with Kaku.
    Sure, Moore’s Law can’t go on forever, but we don’t need it to.
    Sure, Si has limits, but so what? There has never been a “standard” Si technology anyway.
    There has never been a standard gate oxide, a standard interconnect, or a standard Si substrate. They have always been “tweaked”. Lithography has also never been standard.
    Sure, there are quantum limits, but so what? All solid-state devices exploit quantum mechanics.
    Whether there are conduction bands or discrete quantum levels just goes with the territory.
    The Heisenberg limit to computing is trillions of times beyond our current capabilities.
    The ultimate limits of quantum computing are so far beyond current capabilities that it doesn’t matter. There will be no discontinuity between “standard” processing and “quantum computing”. There already isn’t. Observable quantum effects will continue to play a more dominant role, but will be part of the solution, not the problem.
    When I joined the industry in 1980, the general wisdom was that we would need x-ray or electron beam lithography to break the 1 micron limit. This was considered an ultimate physical limit based on optical resolution. What happened? Didn’t we understand the laws of physics in 1980? No, we just didn’t believe that we could use those laws to extract more performance. So where are we now? 0.022 microns (22 nm).

    Then we thought that gamma rays and the heat dissipation of DRAMs and NMOS would prevent further shrinking. How naive. Atomic layer deposition is now a standard technology, and molecular self-assembly is on the way. These are all “tweaks”, just like DUV immersion lithography, nitrided “oxides”, CMOS, Si-Ge, Cu interconnects, CMP, PECVD, self-aligned gates, etc. I would never underestimate the future solutions to current problems.

  • And here’s another analysis of it from our favorite physicist-turned-economist: http://www.overcomingbias.com/2013/03/slowing-computer-gains.html

  • Albert Heisenberg

    Excellent point.

    A minor quibble: I think what Kaku meant by “quantum computing” was an allusion to the most bizarre property of quantum mechanics: entanglement. It was Einstein who first postulated entanglement in the EPR paper in the 1930s, and Bell went on to show not only that entanglement was real, but that it couldn’t be explained away by local hidden variables.

    How, and when, we fully optimize entanglement in steady state systems will determine how precipitously computing power can rise in the coming decades.

    Great stuff Arthur, I couldn’t agree more.
