Hey, I want to add a command to my system. I'm not using any package format or anything; I just want to install a script that I wrote.

I know of some ways to do that:

  • add my script to whichever directory comes first in $PATH
  • add a custom directory to $PATH (in /etc/profile.d/ or /etc/environment) and put the script there
  • add my script to /usr/bin or /usr/local/bin
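As a concrete sketch of that last option: the script name "hello" is made up, and PREFIX points at a temp directory so this runs without root; for a real install you would use PREFIX=/usr/local instead.

```shell
#!/bin/sh
# Sketch: install a script into a bin directory that is on $PATH.
# "hello" is a made-up example name; PREFIX is a temp dir here so the
# sketch is safe to run without root (use /usr/local for real).
set -e
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/bin"

cat > "$PREFIX/hello" <<'EOF'
#!/bin/sh
echo "hello from my script"
EOF

# install(1) copies the file and sets the executable bit in one step;
# plain cp + chmod 755 does the same thing.
install -m 755 "$PREFIX/hello" "$PREFIX/bin/hello"

# The command is found as soon as its directory is on $PATH.
PATH="$PREFIX/bin:$PATH" hello
```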

I remember reading that /etc/profile.d/ isn't sourced by all shells, and that /etc/environment (which exists but is empty on my system) shouldn't be used (it's parsed by pam_env, not by a shell, so things like referencing $PATH inside it don't expand the way you'd expect).
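For reference, a profile.d drop-in for the second option usually looks something like this (the file and directory names are made up). The caveat above applies: only shells that source /etc/profile on login (sh, bash, ksh, ...) will read it; csh derivatives and most non-login shells won't.

```shell
# Sketch of an /etc/profile.d/local-tools.sh drop-in.
# /opt/local-tools/bin is a made-up example directory.
case ":$PATH:" in
    *:/opt/local-tools/bin:*) ;;              # already present, do nothing
    *) PATH="/opt/local-tools/bin:$PATH" ;;   # otherwise prepend it
esac
export PATH
```

The `case` guard makes the snippet idempotent, so sourcing it twice doesn't grow $PATH.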

What should I do to ensure that my command will be available everywhere on any gnu/linux or bsd system?

EDIT: I should clarify that I'm asking this only out of curiosity; I just like knowing how this stuff works. The script was just an example (I thought it would make the question easier to understand, lol). What I'm really interested in is a way to install a command with no chance of that command later not being found by other programs. The many answers I find to this all seem a little muddy, along the lines of "doing X should usually work". I want to know the solution that always works.

  • niemand@discuss.tchncs.deOP

    "He just told you why not to put it in /usr/bin: it's where your package manager puts executables."

    I thought he might tell me why my package manager and I can't both use this directory. The reason for that is not obvious to me.

    • chickenf622@sh.itjust.works

      Because it's good to be able to tell at a glance whether something is an installed package. I also imagine it reduces the risk of a package accidentally overwriting your own scripts if one happens to have the same name as a local script.

    • folkrav@lemmy.ca

      Other people already answered you, but it’s mostly for:

      1. Keeping things obvious: you know who did what
      2. Avoiding potential collisions

    • aperson@beehaw.org

      Because in situations like this, segregation is a good thing. You don’t want automated tools futzing in directories that you might have wanted to keep as-is.

    • InverseParallax@lemmy.world

      Unix has a long-running convention of separating "operating system" files from everything else, so you can blow away something like /opt or /home without making your system unbootable.

      If you stick stuff under /usr/bin then you have to track those files yourself, especially if there are any conflicts.

      Best to just add another path. I use ~/bin because it's easy to get to, and it's a symlink into the git repo that holds my portable environment: just clone it, run a script, and I'm home.
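      A rough sketch of that setup; the repo layout and the "greet" command are made up, and HOME_DIR stands in for $HOME so the sketch can run against a temp directory:

      ```shell
      #!/bin/sh
      # Sketch of a ~/bin symlinked from a "portable environment" git repo.
      # HOME_DIR is a temp stand-in for $HOME so this is safe to run.
      set -e
      HOME_DIR="$(mktemp -d)"

      # Stand-in for cloning the portable-environment repo:
      mkdir -p "$HOME_DIR/dotfiles/bin"
      printf '#!/bin/sh\necho portable\n' > "$HOME_DIR/dotfiles/bin/greet"
      chmod 755 "$HOME_DIR/dotfiles/bin/greet"

      # ~/bin is just a symlink into the repo, so a "git pull" there
      # updates every installed command at once.
      ln -sfn "$HOME_DIR/dotfiles/bin" "$HOME_DIR/bin"

      # A shell rc would then do: PATH="$HOME/bin:$PATH"
      PATH="$HOME_DIR/bin:$PATH" greet
      ```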

      • andruid@lemmy.ml

        And migrate /opt and /home (or even mount them remotely) so that user data is preserved outside of the system!

        Both features make system administration much saner!