• 2 Posts
  • 81 Comments
Cake day: February 3rd, 2023

  • I’ll say, I really like macOS! I transitioned from Linux to Mac OS X back in the 10.2 days and stuck with it until 2017 or so. Put Homebrew on there, fill it up with a bunch of 'nix tools, and you get a respectable Unix. Plus you get access to a bunch of commercial software unavailable on Linux. I was very happy with this solution for well over a decade.

    My problem came with Apple. Their hardware lockdowns preventing upgrades and repairs, plus the newer software lockdowns, made it impossible for me to do my work. Like Microsoft, Apple imposes itself on my workflow. And while Linux doesn’t do everything I’d like in the commercial realm, one thing it doesn’t do is stick its nose in my workflow. If something breaks, I can fix it. I don’t have to deal with it demanding I reboot for an update while the box is in the middle of a two-week-long render. I’m not forced onto the network to log in to a central Microsoft or Apple authentication server just to use my computer. I don’t have to deal with them bugging me to put my clients’ data, which I signed a legal NDA promising to protect, on their cloud services just because the company wants me to.

    My issue is not with the software, even on Windows; it’s with the corporate practices! They get in my way. I bought this computer to make money with, not so they can make money off me and my clients! When you sign legal documents promising not to disclose confidential material related to a pending bid, and your OS vendor spies on you, that matters. Most people don’t seem to care, but I sure do. lol


  • So, one point I’ll make on the hardware assist you discuss: it’s actually limited to very specific use cases. And the best way to understand this is to read the ffmpeg x264 encoding guide here:

    https://trac.ffmpeg.org/wiki/Encode/H.264

    The x265 guide is similar, so I won’t repeat it. But there is a dizzying range of considerations when cutting a deliverable file. Concerns such as:

    • Target display. Is it an old-style rec709 display with 8 bits per color and SDR’s roughly six and a half stops of dynamic range? Is it rec2020, 10 bits per color, about eight stops? Is it a movie projector in a theater, with 12 bits per color and even more dynamic range? When producing deliverables, you choose encode output settings specific to the target display type.

    • Quality settings. Typically handled via the Constant Rate Factor (CRF) setting. If you’ve encoded video files, you’ll know the lower the CRF number, the higher the image quality. But the higher the image quality, the lower the overall compression. It’s a tradeoff.

    • Compression. The more computation put into compression, the smaller the video file at any given CRF setting. But the longer it takes to complete the encode.

    This is only for local playback; streaming requires additional tweaks. And it’s only for a deliverable file. In the production pipeline you’d be using totally different formats, which store each frame separately rather than compressing groups of frames, retain far more image data per frame, and are much less compressed or entirely uncompressed.
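    To make those tradeoffs concrete, here’s a minimal sketch of a software x264 deliverable encode along the lines of that guide. File names and the specific flag values are illustrative placeholders, not a recommendation; the echo prints the command for inspection, so drop it to actually run the encode (which requires ffmpeg installed).

    ```shell
    # Sketch of a software x264 deliverable encode for an SDR rec.709 target.
    # input.mov / output.mp4 are placeholder names.
    CRF=20        # lower CRF = higher quality, larger file
    PRESET=slow   # slower preset = more compression computation per frame
    echo ffmpeg -i input.mov \
        -c:v libx264 -preset "$PRESET" -crf "$CRF" -pix_fmt yuv420p \
        -c:a aac -b:a 192k output.mp4
    ```

    Dialing CRF down or the preset up trades file size against quality and encode time, exactly the tradeoffs listed above.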

    The point of this is to highlight the vast difference in the demands placed on encoding throughout the various stages of a project. And to point out that for video production, you care about system I/O bandwidth most of all.

    But hardware encode limits you to very specific output ranges. This is what the preset limitations are all about for, say, Nvidia NVENC hardware-assisted H.264 in ffmpeg. The hardware devs select what they think is the most common use case, say YouTube as an output target (which presumes a network bandwidth and display type), and target their hardware acceleration at that.

    This means most of the marketing talk about hardware assist in M series chips, GPUs, etc. is actually not relevant for production work. It’s only relevant for cutting final deliverable files under specific use cases like YouTube or broadcast (which still wants 10-bit ProRes).

    If you look at just the x264 settings, the hardware-accel presets are so limited that most of the time you’d still be cutting with software encode. Hardware encode comes into play with real time: streaming and live broadcast. The rest of the pipeline? All software.
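    For contrast, a hardware-assisted cut routes through NVENC’s small fixed set of presets and rate-control modes rather than x264’s full option space. A sketch, assuming an Nvidia GPU and an ffmpeg build with NVENC support; names and values are placeholders, and again the echo just prints the command for inspection:

    ```shell
    # Sketch: the same deliverable cut via NVENC hardware assist.
    # NVENC exposes only fixed presets (p1..p7) and a few rate-control modes,
    # not x264's full tuning space.
    CQ=23   # -cq is NVENC's constant-quality knob, roughly analogous to CRF
    echo ffmpeg -i input.mov \
        -c:v h264_nvenc -preset p5 -rc vbr -cq "$CQ" \
        -c:a aac -b:a 192k output.mp4
    ```

    Compare the handful of p1..p7 presets here against the pages of libx264 options in the guide: that gap is the "very specific output ranges" I mean.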








  • Well, you’re absolutely right that they’ve released a Mac Pro. Looking it over, though, the machine is still a terrible deal compared to Threadripper: the Mac Pro maxes out at 192GB RAM and 72 GPU cores, while Threadripper maxes out at 1.5TB RAM and enough PCIe lanes for four GPUs.

    From a price/performance standpoint, you could beat this thing with a lower-end Ryzen 5950X CPU, 256GB RAM, and two Nvidia 4080 GPUs for maybe $2500-$3000 less than the maxed-out Mac Pro.

    But I was wrong there. Thank you for the correction.

    NOTE: A 64-core Threadripper with 512GB RAM and four 4090 GPUs would be suitable for professional machine learning tasks. Better GPUs in the pro space cost much more, though. A 16-core 5950X, 256GB RAM, two 4090 GPUs, and a PCIe SSD RAID would do to edit 8K/12K raw footage with color grading and compositing in DaVinci Resolve or Premiere/After Effects. It would be a good Maya workstation for feature or broadcast 3D animation too.

    That Mac Pro would make a good editing workstation in the broadcast/streaming space, especially if you’re using Final Cut and Motion, but it is not suitable in the machine learning space. And I wouldn’t choose it as a DaVinci Resolve color grading station. The 6K Pro Display XDR is not suitable for pro color grading on the feature side, though it would probably be acceptable for broadcast/streaming projects. On the flip side, a pro color grading monitor is $25K to start and strictly PC anyway.


  • This is why I left the Mac platform and switched to Linux on Threadripper. Apple is just not being honest.

    The M series Mac is not suitable for performance computing. Outrageous prices and small memory sizes make the computer a toy for professional workloads.

    The M series gets its performance from RAM proximity to the CPU, by packaging the RAM and CPU dies together. This proximity lets them reduce the clock divisor and thereby increase total I/O bandwidth. It’s a good idea for phones, tablets, and even laptops. But it’s useless for high-end workstations, where you might need 128-256GB. Or more.

    Also, one of these days AMD or Intel will bolt 8GB onto their CPUs too, and then they’ll squash M. Because per clock tick, ARM still isn’t as efficient as x86 at instruction execution. Just use it as cache. Or declare it fast RAM, like the Amiga did.

    Apple has really snowed people about how and where M gets its performance. It’s a good idea on the small end. But it doesn’t scale.

    This is why they haven’t released an M series Mac Pro.

    Nope, there is in fact a Mac Pro now. I stand corrected.




  • In this case I’m referring to the “shitlib” epithet: branding whole groups of people they disagree with politically as shit. And I’m not even clear exactly what view is being promoted, or how bullying people is supposed to earn them friends outside their little bubble. Much less continued federation.

    Being called a neoliberal because I run a tiny film consultancy is like branding a single-store mom-and-pop bakery owner a 19th-century robber baron. Voting Biden is strictly strategic; I’ve lived in Europe and would vote for democratic socialists in a heartbeat if one could win.

    But I know democratic socialism would never be enough for these self-styled fake Marxists, who’ve never actually read Marx or Engels, don’t know a thing about the history of the October Revolution, think Mao’s Long March of death was actually a victory because he said so, and couldn’t connect left-versus-right seating in the Estates-General to a political movement even if Robespierre introduced them to his favorite guillotine with a severed copy of Burke’s Reflections on the Revolution in France.

    Lemmy.ml is a cutout for the Lemmy devs, who know even less about algorithms than they do about Marxism and political history, confusing both for alphabet soup. The dev crew is one freshman semester into a CS degree with a failing grade, pretending they’re writing system code in inline assembly.

    Incompetence can only get them so far in failing upward. This isn’t a corporate bureaucracy. Lol



  • Nononono you don’t get my perspective at all.

    I’m not on about lemmy.ml banning me, per se. Their server, whatever. This is about much more than that.

    Lemmy devs do not respect the rest of the fediverse. They do not respect basic civility. They especially do not respect alternate views. They promote hate and disparaging slurs as a virtue. They are bitter and little. Petty. And have already earned themselves a bad reputation.

    This puts them at odds with the rest of the fediverse. And that will come back to bite them in due course.

    This is well beyond me. I’ll just be a spectator, joining the wave from the bleachers.



  • I’m not talking about Steam. But if they’re violating the copyrights of creators who choose the GPL, by all means sue their asses too.

    And I’m not talking about RedHat protecting their trademarks, which I consider perfectly legitimate.

    I’m talking about threatening to revoke access to the support program, which is the only way to get RHEL source, as a way to prevent redistribution of GPL’d source code. That’s a violation of Section 6 of the GPL, and therefore a copyright violation against everyone outside RedHat who has contributed source that the project uses.

    RedHat could solve this by making all GPL’d source available to the public. Or by stipulating that redistributed GPL’d code must have RedHat trademarks removed. Or by removing all GPL’d source code from RHEL and using their own internally developed code, or code released under BSD, MIT, or similar licenses. A RedHat RHE-BSD, for example.

    The trademark issue is just a RedHat Herring, so to speak. I’m fine with them demanding all RedHat trademarks be removed from GPL’d source code related to RHEL that others redistribute. But they may not violate the copyrights of contributors. Or else they should be sued for copyright infringement like anyone else. That’s the position I’m taking.

    Note that I don’t demand the complete RHEL system. Components under BSD or MIT licenses, or those entirely written by RedHat, could be withheld and I wouldn’t care about that. This argument is specific to only GPL’d materials contributed by external parties.

    And to be clear: when I say, ‘Sue their asses,’ I don’t mean in a North Carolina court, where RedHat could judge-shop for their best outcome as per their contractual terms. No, I think California or New York would be best, because those jurisdictions are most likely to protect the intellectual property rights of the contributors whose work RedHat includes in RHEL.


  • The charter does matter. Because it’s a community driven document.

    Your argument here is twofold:

    • Community rules are meaningless.

    Therefore:

    • I can do anything I want, fuck you.

    My counterargument is that when majorities in the community realize they’re being punked by the likes of you, the response will be to shun you and your instance with mass defederation.

    Lemmy has these problems partly because the interface design copied from Reddit incentivizes incivility and bad behavior. But also because the leadership of Lemmy is a role model for bad behavior. They created the community it has become.

    Under circumstances like this, I believe mass defederation is exactly the right outcome. Lemmy is rushing headfirst into irrelevancy. Y’all can then go off and do your own hate thing on UnTruth Social or Gab or whatever. Good luck with that.