• floofloof@lemmy.ca · 2 months ago

    Intel has not halted sales or clawed back any inventory. It will not do a recall, period.

    Buy AMD. Got it!

      • lath@lemmy.world · 2 months ago

        Yet they do it all the time when a higher-spec CPU is fabricated with physical defects and is then sold as a lower-spec variant.

        • tal@lemmy.today · 2 months ago

          Nobody objects to binning, because people know what they’re getting and the part functions within the specified parameters.

    • grue@lemmy.world · 2 months ago

      I’ve been buying AMD for – holy shit – 25 years now, and have never once regretted it. I don’t consider myself a fanboi; I just (a) prefer having the best performance-per-dollar rather than best performance outright, and (b) like rooting for the underdog.

      But if Intel keeps fucking up like this, I might have to switch on grounds of (b)!

      (Realistically I’d be more likely to switch to ARM or even RISC-V, though. Even if Intel became an underdog, my memory of their anti-competitive and anti-consumer bad behavior remains long.)

      • vxx@lemmy.world · 2 months ago (edited)

        I hate the way Intel is going, but I’ve been using Intel chips for over 30 years and never had an issue.

        So your statement is kind of pointless: it’s such a small data set that it’s irrelevant, with nothing to draw any conclusion from.

      • Damage@slrpnk.net · 2 months ago

        I’ve been on AMD and ATi since the Athlon 64 days on the desktop.

        Laptops are always Intel, simply because that’s what I can find, even though I scour the market extensively every time.

        • Krauerking@lemy.lol · 2 months ago (edited)

          Honestly, I was and am an AMD fan, but if you went back a few years you would not have wanted an AMD laptop. I had one and it was truly awful.

          Battery issues. Low processing power. App crashes and video playback issues. And this was on a more expensive one with a dedicated GPU…

          And then Ryzen came out. You can get AMD laptops now, and I mean that both in the sense that they exist and that they’re actually nice. (I have one.)

          But in 2013 it was Intel or you were better off with nothing.

          • orangeboats@lemmy.world · 2 months ago

            Indeed, the Ryzen laptops are very nice! I have one (the 4800H) and it lasts ~8 hours on battery, far more than what I expected from laptops of this performance level. My last laptop barely achieved 4 hours of battery life.

            I had stability issues in the first year but after one of the BIOS updates it has been smooth as butter.

      • SoleInvictus@lemmy.blahaj.zone · 2 months ago (edited)

        Same here. I hate Intel so much, I won’t even work there, despite it being my current industry and having been headhunted by their recruiter. It was so satisfying to tell them to go pound sand.

      • Final Remix@lemmy.world · 2 months ago (edited)

        I’d had nothing but issues with some computers, laptops, etc… Once I discovered the common factor was Intel, I haven’t had a single problem with any of my devices since. AMD all the way for CPUs.

      • Rai@lemmy.dbzer0.com · 2 months ago (edited)

        Sorry, but after the amazing Athlon X2, the Core and Core 2 (then i-series) lines fuckin wrecked AMD for YEARS. Ryzen took the belt back, but AMD was absolutely wrecked through the Core and i-series era.

        Source: computer building company and also history

        tl;dr: AMD sucked ass for value and performance between Core 2 and Ryzen, then became amazing again after Ryzen was released.

        • grue@lemmy.world · 2 months ago

          AMD “Bulldozer” architecture CPUs were indeed pretty bad compared to Intel Core 2, but they were also really cheap.

      • mox@lemmy.sdf.org (OP) · 2 months ago

        RISC-V isn’t there yet, but it’s moving in the right direction. A completely open architecture is something many of us have wanted for ages. It’s worth keeping an eye on.

          • Vik@lemmy.world · 2 months ago

            Even then, Strix will look to compete with Apple silicon in perf/watt.

        • frezik@midwest.social · 2 months ago

          ARM is only more power efficient below 10 to 15 W or so. Above that, it doesn’t matter much whether it’s ARM or x86.

          The real benefit is somewhat abstract. Only two companies can make x86, and only one of them knows how to do it well. ARM (and RISC-V) opens up the market to more players.

        • barsoap@lemm.ee · 2 months ago (edited)

          Depends on the desktop. I have a NanoPC-T4, originally as a set-top box (that’s what the RK3399 was designed for; it has a beast of a VPU), now on light server and WLAN AP duty, and it’s plenty fast enough for a browser and office. Provided you give it an SSD, that is.

          Speaking of desktop use, though, the graphics driver situation is atrocious. There’s been movement since I last had a monitor hooked up to it, but let’s just say the Linux blob that came with it could do GLES2, while the Android driver does Vulkan. Presumably because ARM wants Rockchip to pay per fucking feature per OS for Mali drivers.

          Oh, the VPU that I mentioned? As said, a beast: it decodes 4K h264 at 60Hz, very good driver support, well-documented instruction set, and mpv supports it out of the box. But because the Mali drivers are shit you only get an overlay, no window-system integration, because it can’t paint to GLES2 textures. Throwback to the 90s.

          Sidenote: some madlads got a dedicated GPU running on the thing. M.2-to-PCIe adapter, and presumably a lot of duct-tape code.

          • cmnybo@discuss.tchncs.de · 2 months ago

            GPU support is a real mess. Those ARM SoCs are intended for embedded systems, not PCs. None of the manufacturers want to release an open-source driver, and the blobs typically don’t work with a recent kernel.

            For ARM on the desktop, I would want an ATX motherboard with a socketed 3+ GHz CPU with 8-16 cores, socketed RAM, and a PCIe slot for a desktop GPU.

            Almost all Linux software will run natively on ARM if you have a working GPU. Getting Windows games to run on ARM with decent performance would probably be difficult; it would probably need a CPU that’s been optimized for emulating x86, like what Apple did with theirs.

        • chingadera@lemmy.world · 2 months ago

          I hope so. I accidentally advised a client to snatch up a Snapdragon Surface (because they had to have a dog shit Surface), and I hadn’t realized that a lot of shit doesn’t quite work yet. Most of it does, which is awesome, but it needs to pick up the pace.

      • sugar_in_your_tea@sh.itjust.works · 2 months ago (edited)

        If there were decent homelab ARM CPUs, I’d be all over that. But everything is either memory-limited (e.g. max 8GB) or datacenter-grade (so $$$$). I want something like a Snapdragon with 4x SATA, 2x M.2, 2+ USB-C, and support for 16GB+ RAM in a mini-ITX form factor. Give it to me for $200-400, and I’ll buy it if it can beat my current NAS in power efficiency (not hard, it’s a Ryzen 1700).
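
        A back-of-envelope sketch of what that efficiency gap is worth (the wattages and the electricity price below are illustrative assumptions, not measurements of a Ryzen 1700 or any ARM board):

```python
# Back-of-envelope idle-power cost for a homelab NAS.
# All wattages and the electricity price are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365

def annual_cost_usd(idle_watts: float, usd_per_kwh: float = 0.15) -> float:
    """Yearly electricity cost of a box idling at idle_watts."""
    kwh_per_year = idle_watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * usd_per_kwh

x86_nas_idle_w = 50   # assumed idle draw, desktop-class x86 build
arm_nas_idle_w = 10   # assumed idle draw, ARM SBC-class board

print(f"x86 NAS: ${annual_cost_usd(x86_nas_idle_w):.2f}/year")  # ~$65.70
print(f"ARM NAS: ${annual_cost_usd(arm_nas_idle_w):.2f}/year")  # ~$13.14
```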

      • Dudewitbow@lemmy.zip · 2 months ago (edited)

        ARM is very much primed to take a lot of server market share from Intel. Amazon is already very committed to making its Graviton ARM CPU its main CPU, and Amazon alone owns a huge share of the server market.

        For consumers, ARM adoption is fully reliant on the respective operating systems and compatibility getting ironed out.

        • icydefiance@lemm.ee · 2 months ago

          Yeah, I manage the infrastructure for almost 150 WordPress sites, and I moved them all to ARM servers a while ago, because they’re 10% or 20% cheaper on AWS.

          Websites are rarely bottlenecked by the CPU, so that power efficiency is very significant.
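
          Order-of-magnitude, that discount adds up at this scale (the site count is from the comment above; the per-site monthly cost is an assumption, not an AWS quote):

```python
# Rough yearly savings from 10-20% cheaper ARM instances across a fleet.
SITES = 150                          # from the comment above
ASSUMED_USD_PER_SITE_MONTH = 20.0    # assumption, not a real AWS figure

for discount in (0.10, 0.20):
    saved_per_year = SITES * ASSUMED_USD_PER_SITE_MONTH * discount * 12
    print(f"{discount:.0%} cheaper -> ~${saved_per_year:,.0f}/year saved")
```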

          • tal@lemmy.today · 2 months ago (edited)

            I really think that most people who think that they want ARM machines are wrong, at least given the state of things in 2024. Like, maybe you use Linux…but do you want to run x86 Windows binary-only games? Even if you can get 'em running, you’ve lost the power efficiency. What’s hardware support like? Do you want to be able to buy other components? If you like stuff like that Framework laptop, which seems popular on here, an SoC is heading in the opposite direction of that – an all-in-one, non-expandable manufacturer-specified system.

            But yours is a legit application. A non-CPU-constrained datacenter application running open-source software compiled against ARM, where someone else has validated that the hardware is all good for the OS.

            I would not go ARM for a desktop or laptop as things stand, though.

            • batshit@lemmings.world · 2 months ago (edited)

              If you didn’t want to game on your laptop, would an ARM device not be better for office work? Considering they’re quiet and their battery lasts forever.

              • Nighed@sffa.community · 2 months ago

                As long as the apps all work. So much stuff is browser-based now, but something always turns up that doesn’t work: mandatory timesheet software, a bespoke tool, etc.

              • frezik@midwest.social · 2 months ago

                ARM chips aren’t better at power efficiency than x86 above 10 or 15W or so. Apple is getting a lot out of them because of TSMC 3nm; even the upcoming AMD 9000 series will only be on TSMC 4nm.

                ARM is great for having more than one competent company in the market, though.

                • batshit@lemmings.world · 2 months ago

                  ARM chips aren’t better at power efficiency than x86 above 10 or 15W or so.

                  Do you have a source for that? It seems a bit hard to believe.

        • sugar_in_your_tea@sh.itjust.works · 2 months ago (edited)

          Linux works great on ARM; I just want something similar to most mini-ITX boards (4x SATA, 2x mini-PCIe, and RAM slots), and I’ll convert my DIY NAS to ARM. But there just isn’t anything between RAM-limited SBCs and datacenter ARM boards.

          • Dudewitbow@lemmy.zip · 2 months ago

            ARM is a mixed bag. IIRC, at the moment the GPU on the Snapdragon X Elite is disabled on Linux, and consumer support relies on how well the hardware manufacturer supports its closed-source driver. In Qualcomm’s case, the history doesn’t look great.

            • sugar_in_your_tea@sh.itjust.works · 2 months ago

              Eh, if they give me a PCIe slot, I’m happy to use that in the meantime. My current NAS uses an old NVIDIA GPU, so I’d just move that over.

              • Zangoose@lemmy.world · 2 months ago

                Apparently (from another comment on a thread about ARM from a few weeks ago) consumer GPU BIOSes contain some x86 instructions that get run on the CPU, so getting full support for ARM isn’t as simple as swapping the cards over to a new motherboard. There are ways to hack around it (some people got AMD GPUs booting on a Raspberry Pi 5 using its PCIe lanes with a bunch of adapters), but it is pretty unreliable.

                • sugar_in_your_tea@sh.itjust.works · 2 months ago

                  Yeah, there are some software issues that need to be resolved, but the bigger issue AFAIK is having the hardware to handle it. The few ARM devices with a PCIe slot often don’t fully implement the spec, such as power delivery. Because of that, driver work just doesn’t happen, because nobody can realistically use it.

                  If they provide a proper PCIe slot (8-16 lanes, on-spec power delivery, etc), getting the drivers updated should be relatively easy (months, not years).

            • conciselyverbose@sh.itjust.works · 2 months ago

              Servers being slow is usually fine. They’re already at way lower clocks than consumer chips because almost all that matters is power efficiency.

            • sugar_in_your_tea@sh.itjust.works · 2 months ago (edited)

              Eh, it looks like ARM laptops are coming along. I give it a year or so for the process to become smooth.

              For servers, AWS Graviton seems to be pretty solid. I honestly don’t need top performance and could probably get away with a Quartz64 SBC; I just don’t want to worry about RAM and would really like 16GB. I just need to serve a dozen or so Docker containers with really low load, and I want to do that with as little power as I can get away with, for minimum noise. It doesn’t need to transcode or anything.

              • Justin@lemmy.jlh.name · 2 months ago (edited)

                ARM laptops don’t support ACPI, which makes them really hard for Linux to support. Having to go back two years to find a laptop with Wi-Fi and GPU support on Linux isn’t practical. If Qualcomm and Apple officially supported Linux like Intel and AMD do, it would be a different story. As it is right now, even Android phones are forced to use closed-source blobs just to boot.

                Those numbers from Amazon are misleading. Linus Torvalds actually builds on an Ampere machine, but they don’t actually do that well in benchmarks.

                https://www.phoronix.com/review/graviton4-96-core

              • CancerMancer@sh.itjust.works · 2 months ago

                Man, so many SBCs come so close to what you’re looking for, but no one has that level of I/O. I was just looking at the ZimaBlade / ZimaBoard and they don’t quite get there either: 2x SATA and a PCIe 2.0 x4. The ZimaBlade has Thunderbolt 4; maybe you can squeeze a few more drives in there with a separate power supply? Seems mildly annoying, but on the other hand, their SBCs only draw like 10 watts.

                Not sure what your application is but if you’re open to clustering them that could be an option.

                • sugar_in_your_tea@sh.itjust.works · 2 months ago (edited)

                  Here’s my actual requirements:

                  • 2 boot drives in mirror - m.2 or SATA is fine
                  • 4 NAS HDD drives - will be SATA, but could use PCIe expansion; currently have 2 8TB 3.5" HDDs, want flexibility to add 2x more
                  • minimum CPU performance - was fine on my Phenom II x4, so not a high bar, but the Phenom II x4 has better single core than ZimaBlade

                  Services:

                  • I/O heavy - Jellyfin (no live transcoding), Collabora (and Nextcloud/ownCloud), Samba, etc.
                  • CPU heavy - CI/CD for Rust projects (relatively infrequent and not a hard requirement), gaming servers (Minecraft for now), speech processing (maybe? Looking to build an Alexa alternative)
                  • others - Actual Budget, Vaultwarden, Home Assistant

                  The ZimaBlade is probably good enough (would need to figure out SATA power), I’ll have to look at some performance numbers. I’m a little worried since it seems to be worse than my old Phenom II x4, which was the old CPU for this machine. I’m currently using my old Ryzen 1700, but I’d be fine downgrading a bit if it meant significantly lower power usage. I’d really like to put this under my bed, and it needs to be very quiet to do that.

          • Justin@lemmy.jlh.name · 2 months ago

            Datacenter CPUs are actually really good for NASes, considering the explosion of NVMe storage. Most consumer CPUs are limited to just 5 M.2 drives and a 10Gbit NIC, but a server mobo will open up for 10+ drives. Something cheap like a first-gen Epyc motherboard gives you a ton of flexibility and speed if you’re OK with the idle power consumption.
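
            A rough sketch of the lane arithmetic behind those numbers (the lane counts are ballpark assumptions, not exact specs for any particular CPU):

```python
# Rough PCIe lane budget: why consumer platforms top out around
# 5 NVMe drives plus a 10GbE NIC, while server platforms open up.
LANES_PER_NVME = 4   # an M.2 NVMe drive typically uses an x4 link
LANES_PER_NIC = 4    # a 10GbE NIC commonly sits on an x4 link

def max_nvme_drives(cpu_lanes: int, nic_lanes: int = LANES_PER_NIC) -> int:
    """NVMe drives that fit after reserving lanes for the NIC."""
    return (cpu_lanes - nic_lanes) // LANES_PER_NVME

print(max_nvme_drives(24))    # ~24 usable lanes on a consumer CPU -> 5
print(max_nvme_drives(128))   # first-gen Epyc advertises 128 lanes -> 31
```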

        • schizo@forum.uncomfortable.business · 2 months ago

          Kinda? It really should be treated as a first-generation product for Windows (because the previous versions were ignored by, well, everyone, because they were utterly worthless), and should be avoided for quite a while if gaming is remotely your goal. It’s probably the future, but the future is later… assuming, of course, that the next-gen x86 CPUs don’t keep getting faster and lower-power (which they are doing) and thus eliminate the entire benefit of ARM.

          And if you DON’T use Windows, you’re looking at a couple of months to a year to get all the drivers into the Linux kernel, then the kernel with the drivers into mainstream distributions, assuming Qualcomm doesn’t do their usual thing of just abandoning support six months in because they want you to buy the next release of their chips instead.

            • schizo@forum.uncomfortable.business · 2 months ago

              I’m having the same dream, but I don’t trust Qualcomm not to fuck everyone. I mean, it’d be nice if they didn’t, but they’ve certainly got the history of being the scorpion, and I’m going to let someone else be the frog until they’ve proven they’re not going to sting me mid-river.

  • deltreed@lemmy.world · 2 months ago

    So, like, did Intel lay off or deprecate its QA teams, similar to what Microsoft did with Windows? Remember when stability was key and everything else was secondary? Pepperidge Farm remembers.

    • john89@lemmy.ca · 2 months ago

      Why would they lay off their QA teams when it’s management and executives who make the decisions to cut corners?

  • Noble Shift@lemmy.world · 2 months ago

    And this is why I never purchase a product with a revision code of *.0, and almost always purchase used.

  • w2tpmf@lemmy.world · 2 months ago

    Any real-world comparison. Gaming frame rate, video encoding… The 13700K beats the 7900X while being more energy efficient and costing less.

    That’s even giving AMD a handicap in the comparison, since the 7700X is supposed to be the direct comparison to the 13700K.

    I say all this as a longggg-time AMD CPU customer. I had planned on buying their CPU before multiple different sources of comparison steered me away this time.

    • M0oP0o@mander.xyz · 2 months ago (edited)

      Ok, so maybe you are missing the part where the 13th and 14th gens are destroying themselves. No one really cares if you use AMD or whatnot; this little issue is Intel’s, and it makes any performance, power use, or cost comparison moot, as the CPU’s ability to not hurt itself in its confusion will now always be in question.

      Also, I don’t think CPU speeds have been a large bottleneck in the last few years, so why both AMD and Intel keep pushing is just silly.

      • w2tpmf@lemmy.world · 2 months ago

        Yeah, that does suck. But I was replying specifically to the person saying Intel hasn’t been relevant for years because of a supposed performance dominance from AMD. That part just isn’t true.

        • M0oP0o@mander.xyz · 2 months ago

          Your comment does not reply to anyone, though; it’s just floating out there on its own.

          And even taken as a reply it still does not make sense, since as of this “issue” any 13th or 14th gen Intel above a 600 is out of the running, since they cannot be trusted not to kill themselves.

          • w2tpmf@lemmy.world · 2 months ago

            Yeah, I’m not really sure how my comment ended up where it is. Connect stacks comments in a weird way and I must have clicked reply in the wrong place.

            I was replying to this…

            Is there really still such a market for Intel CPUs? I do not understand it; AMD’s Zen has been so much better and the superior technology for almost a decade now.

            …which, up until this issue, was NOT true. The entire Zen 2 line was a step behind the Intel chips that released at the same time as it.

            I’ve been running a 3600X for years now and love it… But the i5-10600K that came out at the same time absolutely smashes it in performance.

            • M0oP0o@mander.xyz · 2 months ago

              Those came out a year apart, and neither one “smashes” the other in performance. I doubt you can even notice the difference between the two, and that is the issue with CPUs today: they are not the bottleneck in most systems. I have used both of these (I like the 10600K as well) but they are almost exactly the same “performance”, and I would not turn up my nose at either.

              The issue is that (especially in personal use cases) there is no justification for the newer systems. DDR4 still runs literally everything, and both of these 4+ year old CPUs (now a few gens old) will run anything well outside of exotic cases. You are more likely to see slowdowns from a lack of RAM (since most programs today seem to think the stuff is unlimited), GPU bottlenecks, or just badly optimized software.

    • mox@lemmy.sdf.org (OP) · 2 months ago

      I don’t think we’ve been given any reason to believe this was caused by Intel Management Engine.

  • Justin@lemmy.jlh.name · 2 months ago (edited)

    Intel is about to have a lot of lawsuits on their hands if this delay-deny-deflect strategy doesn’t work out for them. This problem has been going on for over a year, and the details Intel lets slip just keep getting worse and worse. The more customers realize they’re getting defective CPUs, the more outcry there’ll be for a recall. Intel is going to be in a lot of trouble if they wait until regulators force them to do a recall.

    The big moment of truth is next month, when they have earnings and we see what the performance impact from dropping voltages will be. Hopefully it’ll just be 5% and no more CPUs die. I can’t imagine investors will be happy about the cost, though.

    • Archer@lemmy.world · 2 months ago (edited)

      I want to say “gamers, rise up,” but honestly, gamers calling their members of Congress every day and asking what they’re doing about this fraud would be way more effective. Congress is in a Big Tech-regulating mood right now.

  • CaptainBasculin@lemmy.ml · 2 months ago

    Considering AMD has also paused the release of 9th-gen Ryzen just before its release date, I wonder if this issue is caused by TSMC.

  • gearheart@lemm.ee · 2 months ago

    This would be funny if it happened to Nvidia.

    Hope Intel recovers from this. Imagine if Nvidia was the only consumer hardware manufacturer…

    No one wants that.

    • brucethemoose@lemmy.world · 2 months ago (edited)

      This would be funny if it happened to Nvidia.

      It kinda has, with Fermi, lol. The GTX 480 was… something.

      Same reason, too: they pushed the voltage too hard, to the point of stupidity.

      Nvidia does not compete in this market though, as much as they’d like to. They do not make x86 CPUs, and frankly Intel is hard to displace since they have their own fab capacity. AMD can’t take the market themselves because there simply isn’t enough TSMC/Samsung capacity to go around.

      • Kyrgizion@lemmy.world · 2 months ago

        There’s also Intel holding the x86 patent and AMD holding the x64 patent. Those two aren’t going anywhere yet.

        • wax@feddit.nu · 2 months ago (edited)

          Actually, it looks like the base patents have expired. All the extensions (SSE, AVX) are still in effect, though.

    • mlg@lemmy.world · 2 months ago

      This would be funny if it happened to Nvidia.

      Hope Intel recovers from this. Imagine if Nvidia was the only consumer hardware manufacturer…

      Lol there was a reason Xbox 360s had a whopping 54% failure rate and every OEM was getting sued in the late 2000s for chip defects.

        • hardcoreufo@lemmy.world · 2 months ago

          I think the 360 failed for the same reason lots of early/mid-2000s PCs failed: they had issues with chips lifting due to the move away from leaded solder. Over time the formulas improved, and we don’t see that as much anymore. At least that’s the way I recall it.

          • icedterminal@lemmy.world · 2 months ago

            Tagging on here: both the first-model PS3 and the Xbox 360 were hot boxes with insufficient cooling. Both got too hot too fast for their cooling solutions to keep up, resulting in hardware stress that caused the chips’ solder points to weaken until they eventually cracked.

            • john89@lemmy.ca · 2 months ago (edited)

              Owner of original 60gb PS3 here.

              It got very hot and eventually stopped working. It was under warranty and I got an 80gb replacement for $200 cheaper, but lost out on backwards compatibility which really sucked because I sold my PS2 to get a PS3.

              • lennivelkant@discuss.tchncs.de · 2 months ago

                Why would you want backwards compatibility? To play games you already own and like instead of buying new ones? Now now, don’t be ridiculous.

                Sarcasm aside, I do wonder how technically challenging it is to keep your system backwards-compatible. I understand console games are written for specific hardware specs, but I’d assume newer hardware still understands the old instructions. It could be an OS question, but again, I’d assume they would develop the newer version on top of their old, so I don’t know why it wouldn’t support the old features anymore.

                I don’t want to cynically claim that it’s only done for profit reasons, and I’m certainly out of my depth on the topic of developing an entire console system, so I want to assume there’s something I just don’t know about, but I’m curious what that might be.

                • john89@lemmy.ca · 2 months ago

                  It’s my understanding that backwards-compatible PS3s actually had PS2 hardware in them.

                  We can play PS2 and PS1 games if they are downloaded from the store, so emulation isn’t an issue. I think Sony looked at the data and saw they would make more money removing backwards compatibility, so that’s what they did.

                  Thankfully the PS3 was my last console before standards got even lower and they started charging an additional fee to use my internet.

    • nek0d3r@lemmy.world · 2 months ago

      I genuinely think that was the best Intel generation. Things really started going downhill in my eyes after Skylake.

  • sebsch@discuss.tchncs.de · 2 months ago

    Is there really still such a market for Intel CPUs? I do not understand it; AMD’s Zen has been so much better and the superior technology for almost a decade now.

    • M0oP0o@mander.xyz · 2 months ago

      The new AMD generation kinda tossed all the good out the window. Now they are the more expensive option, and even with this Intel fuckup, Intel is likely still going to be the go-to for people that have more sense than money.

      Funny that the good old Zen 3 stuff is still swinging above its weight class.

      • aard@kyu.de · 2 months ago

        AMD keeps some older generations in production as their budget options - and since they have had excellent CPUs for multiple generations now, you get pretty good computers out of that. Even better: with some planning around the chipset lifecycle, you’ll be able to upgrade to another CPU later.

        AMD has established by now that they deliver what they promise - and Intel couldn’t compete with them for a few generations across pretty much the complete product line - so they can now afford to sell bleeding-edge hardware at higher prices. It’s still far from what Intel was charging when they were dominant 10 years ago - and if you need that performance for work, it’s well worth the money. For most private systems I’d always recommend getting last gen, though.

      • Luccus@feddit.org · 2 months ago

        I don’t see data backing up your claim […]

        Links a list where the three top spots substantiate the claim, followed by a comparatively large 8% drop.

        To add a bit of nuance: there are definitely exceptions to the claim. But if I had to make a blanket statement, it would absolutely be in favor of AMD.

        • ruse8145@lemmy.sdf.org · 2 months ago

          The point of the chart is that it alternates over a wide performance range; there isn’t a blanket winner between the company that can’t figure out security and the company that can’t figure out thermals.

      • DefederateLemmyMl@feddit.nl · 2 months ago

        Why does that graph show Epyc (server) and Threadripper (workstation) processors in the upper right corner, but not the equivalent Xeons? If you take those away, it would paint a different picture.

        Also, a price/performance graph does not say much about which is the superior technology. Intel has been struggling to keep up with AMD technologically these past years, and has been upping power targets and thermal limits to do so… which is one of the reasons why we are here (points at headline).

        I do hope they get their act together, because an AMD monopoly would be just as bad as an Intel monopoly. We need the competition, and a healthy x86 market, lest proprietary ARM-based computers take over the market (Apple M-chips, Snapdragon laptops, …).

        • ruse8145@lemmy.sdf.org · 2 months ago

          I guess I’m confused by your fundamental point though: if we aren’t looking for raw processing power on a range of workloads, what is the technology you see them winning in?

        • ruse8145@lemmy.sdf.org · 2 months ago

          I’d guess it’s because I selected single processors, and many of the Xeons are server-oriented, with multi-socket setups expected. Given the original post I’m responding to, I’m more concerned with desktop grade (10-40k pts multi-core) than server grade.

        • tempest@lemmy.ca · 2 months ago

          Aha, because if they included the Xeon Scalables it would show how badly they are doing in the datacenter market.

    • UnderpantsWeevil@lemmy.world · 2 months ago

      Intel is in the precarious position of being the largest surviving American-owned semiconductor manufacturer, with the competition either existing abroad (TSMC, Samsung, ASML) or as a partner/subsidiary of a foreign national firm (Nvidia simply procures its chips from TSMC, GlobalFoundries was bought up by the UAE sovereign wealth fund, etc.).

      Consequently, whenever the US dumps a giant bucket of money into the domestic semiconductor industry, Intel is there to clean up whether or not their technology actually works.

      • frezik@midwest.social · 2 months ago

        Small correction: the largest surviving one that makes desktop/server-class chips. Companies like Texas Instruments and Microchip still have US foundries for microcontrollers.

    • w2tpmf@lemmy.world · 2 months ago

      Naw. Zen was a leap ahead when it came out, but AMD didn’t keep that pace for long and Intel CPUs quickly caught up.

      I just almost bought a Ryzen 9 7900X, but an i7-13700K ended up being cheaper and outperforms the AMD chip.

      • frezik@midwest.social · 2 months ago (edited)

        On what workloads? AMD is king for most games, and at a lower price. It’s also king for heavily multicore workloads, but not on the same CPU that’s best for games.

        In other words, they don’t have a CPU that is king for both at the same time. That’s the one thing Intel was good at, provided you could cool the damn thing.

    • BobGnarley@lemm.ee · 2 months ago

      It’s the only chip that runs on an open-source BIOS, and you can completely disable the Intel ME after boot-up.

      AMD’s PSP is 100% proprietary spyware that can’t be disabled or manipulated into not running.

    • shastaxc@lemm.ee · 2 months ago

      Intel has been working better than AMD in my Linux server. The AMDs kept causing server crashes due to C-state nonsense that no amount of BIOS tweaking would fix. AMD is great for performance and efficiency (and cost/value) in my gaming PC, but it wreaked havoc with my server, which I need to be reliably functional without power restarts.

      So I have both.

    • deeply_moving_queef@lemmy.ml · 2 months ago

      Intel’s iGPU is still by far the best option for applications such as media transcoding. It’s a shame that AMD haven’t focused more on this, but it’s understandable; it’s relatively niche.

    • frezik@midwest.social · 2 months ago

      The argument was that while AMD is better on paper in most things, Intel would give you rock solid stability. That argument has now taken an Iowa-class broadside to the face.

      I don’t watch LTT anymore, but a few years back they had a video where they were really pushing the limits of PCIe lanes on an Epyc chip by stuffing it full of NVMe drives and running them with software RAID (which Epyc’s sick number of cores should be able to handle). Long story short, they ran into a bunch of problems. After talking to Wendel of Level1Techs, he mentioned that sometimes, AMD just doesn’t work the way it seems it should based on paper specs. Intel usually does. (Might be getting a few details wrong about this, but the general gist should be right.)

      This argument was almost the only thing stopping AMD from taking over the server market. The other thing was whether AMD could simply manufacture enough chips in a short time period. The server market is huge; Intel had $16B revenue in “Data Center and AI” in 2023, while AMD’s total revenue was $23B. Now manufacturing ramp-up might be all that’s stopping AMD from owning it.

  • InAbsentia@lemmy.world · 2 months ago

    Thankfully I haven’t had any issues out of my 13700K, but it’s pretty shitty of Intel not to stand behind their products and do a recall.