Qualcomm bought a company named Nuvia, founded by ex-Apple engineers who helped design Apple's M-series silicon, to produce Oryon, which exceeds Apple's M2 Max in single-threaded benchmarks.

The impression I get is that these are for PCs and laptops.

I’ve been following the development of Asahi Linux (Linux on the M-series MacBooks); with this new development there are some exciting times to come.

  • simple@lemm.ee · 88 points · 1 year ago

    I’m just eager to know how much laptops with the new Qualcomm chip will cost. I don’t want to pop the champagne too early only to realize that the new ARM laptops cost $2000.

    • fuckwit_mcbumcrumble@lemmy.world · 34 points · 1 year ago

      I’d expect them to start around $1k. Not many people are going to be buying these devices, so there are no economies of scale.

      Also, I love how Qualcomm announced this CPU and a day later Apple released the M3, which is finally a real upgrade from the M1.

      • jmcs@discuss.tchncs.de · 9 points · 1 year ago

        Lots of tech companies might be interested. For example, at my work we are now stuck halfway between x64 and ARM, both on the server side and on the developer side (Linux users are on x64 and Mac users are on ARM). While multi-arch OCI/Docker containers minimize the pain this causes, it would still be easier to go back to a single architecture.
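
        As an aside, the multi-arch workflow is fairly painless these days. A minimal sketch (the base image and file names are made up for illustration): the same Dockerfile is built once per target platform and published under a single manifest list.

```dockerfile
# Hypothetical minimal image; built per-platform, pulled via one tag.
FROM alpine:3.19
COPY app.sh /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

        Building with `docker buildx build --platform linux/amd64,linux/arm64 -t example/app --push .` then produces one tag that the x64 Linux machines and the ARM Macs both pull natively.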

    • ⑨③③Ⓚ@lemdro.id (OP) · 12 points · 1 year ago

      New tech always comes at a cost; hopefully, with the many manufacturers partnering with Qualcomm on this project, we’ll see competitive pricing, better than the current offering that Apple silicon provides.

      • anon_8675309@lemmy.world · 8 points · 1 year ago

        Used to be, each year-ish computers got faster AND cheaper. So, it doesn’t “always” have to be that way.

        • hedgehog@ttrpg.network · 5 points · 1 year ago

          That’s not happening anymore due to real-world constraints, though. Dennard scaling combined with Moore’s Law allowed us to get more performance per watt until around 2006–2010, when Dennard scaling stopped applying: transistors had gotten small enough that thermal issues and other current-leakage challenges meant chip manufacturers were no longer able to increase clock frequencies each generation.

          Even before 2006 there was still a cost to new development, though; we consumers just got more improvement per dollar each year than we do now.
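
          The "free lunch" can be sketched numerically. Under ideal Dennard scaling, shrinking dimensions and voltage by a factor k while raising frequency by 1/k keeps power density constant (illustrative numbers below, not real process data):

```python
# Sketch of one ideal Dennard scaling step. Dynamic power is
# modeled as P ~ C * V^2 * f (capacitance, voltage, frequency).
def next_generation(C, V, f, area, k=0.7):
    """Scale one generation by linear factor k (k < 1 shrinks)."""
    C2, V2, f2, area2 = C * k, V * k, f / k, area * k * k
    power2 = C2 * V2 ** 2 * f2
    return power2, area2

# Normalized starting point.
C, V, f, area = 1.0, 1.0, 1.0, 1.0
power = C * V ** 2 * f

power2, area2 = next_generation(C, V, f, area)

# Frequency rose by ~1.43x, yet power density is unchanged.
print(power / area, power2 / area2)
```

          Once voltage could no longer be scaled down (leakage), the frequency gain in that model had to come with higher power density instead, hence the shift to more cores rather than faster clocks.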

    • suoko@feddit.it · 4 points · 1 year ago

      You’re right, just like the first RISC-V laptop, which cost more than $1k with awful performance. This will probably follow the M-series trend at about $1.5k, but ARM has a lot of competition…

    • taanegl@beehaw.org · 28 points · 1 year ago

      I kind of agree, in that ARM is even more locked down than x86, but if I could get an ARM machine with UEFI where all the computational power is available to the Linux kernel, then I wouldn’t mind trying one out for a while.

      But yes, I can’t wait for RISC-V systems to become mainstream for consumers.

        • taanegl@beehaw.org · 9 points · 1 year ago

          Generally speaking. I’m not talking about your Raspberry Pis, but even there we find some limitations in getting a system up and booting — and it’s not for lack of transistors.

          But take a consumer-facing ARM device: almost always the bootloader is locked and part of some read-only ROM that voids your warranty if you touch it without permission.

          Compare that with an x86 system, where the bootloader is installed on an independent partition and has to be “declared” to the firmware, which means you can have several systems on the same machine.

          Note how I’m talking about consumer devices and not servers for data centres or embedded systems.
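
          To make the contrast concrete, on a UEFI x86 machine registering an extra bootloader is just a firmware-variable update. A sketch (the disk, partition number, and loader path below are placeholders, adjust for your system):

```shell
# List the firmware's current boot entries (requires root + efivarfs).
efibootmgr -v

# Register a second OS's bootloader from the EFI System Partition.
# /dev/nvme0n1, partition 1, and the loader path are placeholders.
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label "Other Linux" --loader '\EFI\otherlinux\grubx64.efi'
```

          No vendor permission involved — the firmware just keeps a list of entries you can edit.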

    • Chobbes@lemmy.world · 16 points · 1 year ago

      I think you’ll be waiting a pretty long time for high end RISC-V CPUs, unfortunately. I don’t particularly trust Qualcomm, but I’m really hoping to see some good arm laptops for Linux.

  • Petter1@lemm.ee · 29 points · 1 year ago

    I hope Microsoft just gives up and builds a new “Windows” that is just another Linux distro xD

    Ducking Windows can’t even clone the Linux kernel right now

    • V@beehaw.org · 7 points · 1 year ago

      IIRC Microsoft’s woes in the ARM space are two-fold: first, the crushing legacy compatibility and the inability to muster developers around anything newer than Win32; second, signing a deal making Qualcomm the exclusive supplier of ARM processors for Windows for who knows how long.

      • Duxon@feddit.de · 1 point · 1 year ago

        They’re a platform company that provides services. They could build proprietary services on top of a Linux distro. Basically the same as they’re doing now with Edge.

    • the_lone_wolf@lemmy.ml · 18 points · 1 year ago

      Qualcomm is my main fear too. They will ship it with lots of closed-source firmware digitally signed with their private keys, which users can’t replace — so expect a shitty bootloader, and don’t forget the always-running hypervisor, TrustZone, and the world’s most closely kept secret of a modem.

    • AnUnusualRelic@lemmy.world · 11 points · 1 year ago

      I’m more interested in something that has an actual hardware and software ecosystem. I’m no longer interested in soldering my computer and its peripherals together.

    • Semperverus@lemmy.world · 27 points · 1 year ago

      If you want to kill x86, you need to do what Valve and the Wine project did with Proton/WINE (mostly Proton at this point), but for x86 to ARM — and maybe other architectures like RISC-V (especially because the Milk-V Pioneer is a thing).

      There is too much legacy software that will never be converted that people still use to this day. Once you make it easy to transition, it will slowly but steadily start to happen.

      Box86/Box64 are promising, but need help from contributors like you. If you want it to happen, go make it happen, or continue to live in the world you have now.

      • 👁️👄👁️@lemm.ee · 6 points · 1 year ago

        Well, legacy software is fine; that stuff mostly runs on old machines/servers/etc. ARM will be easier to move toward by focusing on the consumer market, where legacy software is less of an issue because consumer programs are frequently updated. Some old server running outdated software that people are afraid to touch — we don’t need to worry about converting that lol.

      • KseniyaK@lemmy.ca · 5 points · 1 year ago

        Well, you do have QEMU, which can run x86 programs on other architectures (not just run x86 virtual machines on top of hosts of other architectures).
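
        That per-program mode is QEMU’s user-mode emulation: it translates a single process’s instructions instead of booting a whole VM. A sketch on an arm64 Linux host (the binary names and library path are assumptions for illustration):

```shell
# A statically linked x86_64 binary runs directly under the emulator:
qemu-x86_64 ./hello-x86_64-static

# Dynamically linked binaries also need the x86_64 libraries; -L
# points QEMU at a sysroot containing them (path is a placeholder):
qemu-x86_64 -L /usr/x86_64-linux-gnu ./hello-x86_64
```

        With binfmt_misc registered, the kernel can even invoke the emulator transparently whenever an x86 binary is executed.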

        • Chobbes@lemmy.world · 2 points · 1 year ago

          In my experience, running ARM on x86 with QEMU was dog slow. That was years ago, though, so hopefully it has gotten better.

  • Parastie@lemmy.world · 16 points · 1 year ago

    The benchmarks for the M3 have its single-core and multicore performance way past similar Intel and AMD chips. Qualcomm’s mobile chips are still nowhere near Apple’s mobile chips. I do not believe for a second that Qualcomm will catch up to the M2 on their first release.

    • zik@zorg.social · 17 points · 1 year ago

      That’s absolutely not true. The M3 Max just about brings Apple’s performance up to levels similar to Intel’s and AMD’s. The Ryzen 9 7945HX3D, for example, is a laptop processor that trades blows with the M3 on benchmarks — single-core the M3 is slightly faster, multi-core the Ryzen is slightly faster — and in performance per watt the Ryzen is marginally better. So really it’s just catching up with older laptop processors from other manufacturers.

      And if you venture outside the laptop space to compare ultimate speed, it’s nowhere near the fastest, particularly multi-threaded. Its multi-threaded performance is around 13% of the AMD EPYC 9754 Bergamo’s, for example.

    • fuckwit_mcbumcrumble@lemmy.world · 9 points · 1 year ago

      Keep in mind this is with up to an 80-watt TDP, versus an effectively 3-year-old architecture, in a select few tests. The M2 was basically just an overclocked M1, with the Pro/Max models getting 2 extra cores. This is Qualcomm’s best-case scenario.

    • pr06lefs@lemmy.ml · 3 points · 1 year ago

      The Qualcomm X Elite benchmarks as faster than the M3 in multicore. Not too surprising, as I think it’s 12 cores vs 10. For single core it’s something like 2700 vs 3200.

      Laptops running the X Elite are supposed to be available mid-2024.

  • OscarRobin@lemmy.world · 12 points · 1 year ago

    The limited benchmarks I’ve seen put the new X Elite at slightly less efficient than the M2 Pro (let alone M3 Pro). It only gets marginally higher scores when operating at 3x the wattage.

    Also, let’s not imagine even for a second that the notoriously terrible ARM vendors are going to make it easy to support this chip, especially not in the long term.

  • HurlingDurling@lemm.ee · 12 points · 1 year ago

    As long as the memory and SSD are upgradable and not soldered to the board, I would buy this laptop.

  • happyhippo@feddit.it · 10 points · 1 year ago

    I don’t wanna repeat myself, but: the 7840U for the next few years, then I hope RISC-V will be mature enough to kick some ass (and that Framework releases a board for it).

    That’s all I dream of.

  • drkhrse96@lemmy.world · 5 points · 1 year ago

    It’s interesting as a comparison to the M3 now, and at different power limits. I’m hoping it will also benefit the Asahi project. As a Windows product I don’t think it’ll be good at all unless Microsoft has a Rosetta-like emulation layer that is nearly as good as Apple’s; without that, this product will not do well.

    • fuckwit_mcbumcrumble@lemmy.world · 5 points · 1 year ago

      Microsoft has a pretty good translation layer; it’s the hardware x86 acceleration that Apple’s CPUs have that most Windows ARM chips lack.

    • CafecitoHippo@lemm.ee · 24 points · 1 year ago

      Even if we were thirsting over it, what’s wrong with it? Apple makes some impressive silicon that’s really efficient. The problem is that it’s tied to their products and closed off. You can marvel at what they’re doing on the production side while not liking their business practices.

          • StarDreamer · 3 points · 1 year ago

            Because anyone who works at the assembly level tends to think that the x86_64 ISA is garbage.

            To be fair, aarch64 is also garbage. But it’s less smelly garbage.

            That being said, I’m not expecting any of these CPUs to be hanging in the Sistine Chapel. So whatever works, I guess.

  • Horsey@kbin.social · 2 points · 1 year ago

    I can’t wait for the hardware Android continuity… that’s the only thing I’m waiting on now to switch to Android, besides the raw performance being equal.