Will more funding be needed to keep Intel competitive?

On 1 August 2024, Intel announced financial results for the second quarter of 2024. They weren’t pretty; the company’s stock dropped more than 25 percent as it announced an aggressive plan to cut costs, including layoffs that will impact 15 percent of its entire workforce.

  • EleventhHour@lemmy.world · 2 months ago

    Funny, I speculated to a friend about a year ago that Intel’s general fuckery might cause them problems not too far down the road, and here they are. Hmm, if only I had money to back that up… I’d probably have more money or whatever.

    Or maybe not so much since I think just about everyone saw this coming. My friend certainly did.

    • Varyk@sh.itjust.works · 2 months ago

      What’s been going on with Intel the last few years? In terms of their troubles, which this extremely vague article (that could have been four sentences) says nothing about.

      • magiccupcake@lemmy.world · 2 months ago

        They’ve had fab problems for years: it cost them a ton of money and took much longer than desired to shrink nodes, so they’ve fallen from a leader in fabrication to being behind.

        Not to mention there’s not much money to be made from fabs, unless you’re TSMC.

        AMD, Qualcomm, Nvidia, Google, and Apple are all huge tech companies that design their own cutting-edge chips; Samsung is the only other company besides Intel that both designs and produces chips.

      • GamingChairModel@lemmy.world · 2 months ago

        Semiconductor manufacturing has gotten better over time, with exponential improvement in transistor density, which translates pretty directly to performance. This observation traces back to the 1960s and is commonly known as Moore’s Law.
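        The exponential in Moore’s Law amounts to a couple of lines of arithmetic. A quick sketch, using the commonly quoted two-year doubling period and an illustrative starting density of 1.0 (neither figure comes from this thread):

```python
# Moore's Law sketch: transistor density doubling every ~2 years.
# The 2-year doubling period and the starting density of 1.0 are
# illustrative assumptions, not figures from the comment above.

def density_after(years: float, start: float = 1.0,
                  doubling_period: float = 2.0) -> float:
    """Relative transistor density after `years`, doubling every
    `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# A decade of doubling every two years is a 32x density increase.
print(density_after(10))  # 32.0
```

        Compounding like this is why falling even one node (roughly one doubling) behind a competitor is such a large performance gap.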

        Fitting more transistors into the same size space required quite a few technical advancements and paradigm shifts. But for the first few decades of Moore’s Law, every time they started to approach some kind of physical limit, they’d develop a completely new technique to get things smaller: photolithography moved from off-the-shelf chemicals purchased from photography companies like Eastman Kodak to specialized manufacturing processes, while the light used moved to shorter and shorter wavelengths, with new technology like lasers used to etch masks even more precisely.

        Most recently, the main areas of physical improvement have been the use of extreme ultraviolet (EUV) wavelengths to get really small features, and certain three-dimensional structures that break out of the old paradigm of stacking a bunch of planar materials on each other. Each of these breakthroughs was 20 years in the making, so the R&D and the implementation details had to be hammered out with partners in a tightly orchestrated process, just to see if it would even work at scale.

        Some manufacturers recognized the huge cost, and the uncertainty of success, in taking stuff from academic papers in the 2000s to actually mass-producing chips in 2025, so they abandoned the leading edge. GlobalFoundries, Micron, and a bunch of others basically decided it wasn’t worth the investment to compete on the newest nodes, and now manufacture on older ones, leaving the leading edge to Intel, Samsung, and TSMC.

        TSMC managed to get EUV working at scale before Intel did. And even though Intel beat TSMC to market with certain three-dimensional structures known as “FinFETs,” over the next two generations TSMC managed to really pack them in at higher density, by combining those FinFETs with lithography techniques that Intel couldn’t figure out fast enough. And every time Intel seemed to get close, a new engineering challenge would stifle them. After a few years of stagnation, they went from being consistently 3 years ahead of TSMC to seeming like they’re about 2 years behind.

        On the design side of things, AMD pioneered chiplet-based design, where different pieces of silicon are packaged together. That allowed them to get higher yields (a single defect in one big slab of silicon can make the whole thing worthless) and to mix and match parts in a more modular way. Intel was slow to adopt that, so AMD started taking the lead in CPU performance per watt.
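        The yield argument for chiplets can be made concrete with the classic Poisson die-yield model, yield = exp(−defect density × die area). The defect density and die areas below are made-up illustrative numbers, not AMD or Intel figures:

```python
import math

# Classic Poisson die-yield model: yield = exp(-defect_density * area).
# Defect density and die areas are illustrative assumptions only.

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free."""
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.5  # assumed defects per cm^2

monolithic = die_yield(6.0, D)  # one big 6 cm^2 die
chiplet    = die_yield(1.5, D)  # one of four 1.5 cm^2 chiplets

print(f"monolithic yield: {monolithic:.0%}")  # ~5%
print(f"chiplet yield:    {chiplet:.0%}")     # ~47%
```

        The same defect density ruins almost every big monolithic die, but with chiplets only the one small defective die is discarded, so far more good silicon survives per wafer.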

        These are difficult engineering challenges, traceable back to decisions made in past decades. Not all of those decisions were obviously wrong at the time, and nobody could’ve predicted then that TSMC and AMD would be able to leapfrog Intel on these specific engineering challenges.

        Intel has a few things on the roadmap that might allow it to leapfrog the competition again (especially if the competition runs into setbacks of its own). Intel is ramping up the use of EUV in its current processes, is bringing up a competing three-dimensional structure it calls RibbonFET to answer TSMC’s gate-all-around transistors (both of which are supposed to replace FinFETs), and is hoping to beat TSMC to backside power delivery, which would represent a significant paradigm shift in how chips are designed.

        It’s true that in business, success begets success, but it’s also true that each new generation presents its own novel challenges, and it’s not easy to see where any given competitor might get stuck. Semiconductor manufacturing is basically wizardry, and the history of the industry shows that today’s leaders can get left behind really quickly.