I want to create a backup of my Linux system, including user files, from the command line. I tried Timeshift, but its CLI doesn't have an option to include extra folders.

I found a guide on dev.to that explains how to use Timeshift from the command line, but it doesn’t mention how to include user files. According to ItsFOSS, Timeshift is designed to protect system files and settings, not user data, so user home directories are excluded by default.

I came across a list of backup programs for Linux on Slant, and BackInTime is ranked as the best option there.

Has anyone used BackInTime to back up the whole system, including user files? Are there any other tools you would recommend?

Edit: it would also be nice if it had features similar to Timeshift's: incremental snapshots, weekly snapshots, and the ability to list, restore, and delete snapshots, etc.

  • tal@lemmy.today

    Yes, though then you won’t have incremental backups. That is, if you want 20 copies, you’ll require 20x the storage unless you’re using some kind of copy-on-write underlying storage on the backup server side and your copy mechanism is rigged up to leverage that.
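
    If you did have copy-on-write storage underneath, the trick looks roughly like this – a sketch I haven't tested here, with made-up paths, using btrfs-style reflinks:

    ```sh
    # Reflink copies share unchanged blocks with the original, so N "full"
    # copies cost far less than Nx the space (CoW filesystems like btrfs only).
    cp -a --reflink=always /mnt/backup/latest "/mnt/backup/snap-$(date +%F)"
    # Update the live mirror; only changed files allocate new blocks.
    rsync -a --delete /home/ /mnt/backup/latest/
    ```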

    • TGHOST-V0@lemmy.ml

      To be honest, I should check how rdiff-backup works; I don't know it. I would have guessed it does that too, since you're saying it's built on rsync.

      And tar+rsync isn't an option?

      • tal@lemmy.today

        built on rsync

        Well, strictly speaking, it uses librsync, the delta-transfer library at the core of rsync, rather than the rsync command itself. The other backup utility I mentioned, duplicity – which, unlike rdiff-backup, does encryption – also uses librsync.
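
        If you want to poke at the mechanism itself, librsync ships an rdiff command that exposes the signature/delta/patch steps directly (file names here are just placeholders):

        ```sh
        # Signature of the old file, delta against the new file, then
        # reconstruct the new file from the old file plus the delta.
        rdiff signature old.bin old.sig
        rdiff delta old.sig new.bin delta.bin
        rdiff patch old.bin delta.bin reconstructed.bin
        ```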

        I would have guessed it does that too

        No, it stores a full copy of only the most-recent backup; the rest are incrementals, each storing just a set of changes (reverse deltas). If you want to pull from an older backup, then you need to use the rdiff-backup command to reconstruct it. If you just want the most-recent one, you can just copy the files directly. I take a nightly backup, and it goes back…I haven't actually checked whether I have any bound on it, but something over six months.
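
        For reference, my routine amounts to something like the following (destination path made up; this is the older flag syntax – newer rdiff-backup releases also offer subcommands):

        ```sh
        # Nightly run: keeps a plain mirror plus reverse increments.
        rdiff-backup /home /mnt/backup/home
        # List the increments available at the destination.
        rdiff-backup --list-increments /mnt/backup/home
        # Reconstruct the state from ten days ago into another directory.
        rdiff-backup --restore-as-of 10D /mnt/backup/home /tmp/home-10d
        ```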

        And tar+rsync isn't an option?

        Tar won’t give you incrementals for free. You could store full backups and gzip them or something, and that might save some space if your data is compressable, but it’s still a full backup rather than an incremental.

        I mean, I’m sure that, given enough effort, you can set up some job using a shell script that uses rsync internally and generates incremental backups, the same way rdiff-backup does, but then you’re basically heading down the path of reimplementing rdiff-backup. Someone else has already done the work, so…shrugs

        What you’ll wind up with rdiff-backup is functionally what you’ll get with rsync – a mirror of files, with replicated metadata on the destination. But in addition, you get a history of incrementals. In general, one probably would prefer to have those incrementals. I’m not saying that that is absolutely true of everyone. Maybe some use cases legitimately will never have use for a backup prior than the most-recent one. But I think that the common case is that people would probably prefer to have it available.

        • TGHOST-V0@lemmy.ml

          I really will go take a look at it XD.
          You said it:

          I mean, I’m sure that, given enough effort, you can set up some job using a shell script that uses rsync internally and generates incremental backups, the same way rdiff-backup does, but then you’re basically heading down the path of reimplementing rdiff-backup.

          Then you can just use gpg on your rsync backups? Like you said, your solution is the best.

          Personally, I’m not in IT to feed me, I just use rsync+tar, depending the context. Regex it can be cool but hey tools exists.

          And ty then ^^"

          • tal@lemmy.today

            Then you can just use gpg on your rsync backups?

            Yes, if you wanted to implement encryption, you could do it using gpg. That’s what duplicity does.
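
            The shape of it is something like this (recipient and paths made up) – more or less what duplicity automates for you:

            ```sh
            # Encrypt the archive to your own GPG key before it leaves the box.
            tar -czf - /home | gpg --encrypt --recipient you@example.com \
                > "/mnt/backup/home-$(date +%F).tar.gz.gpg"
            # Restoring reverses it (file name is whatever the run produced):
            gpg --decrypt /mnt/backup/home-2025-01-01.tar.gz.gpg | tar -xz
            ```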

            If you wanted to try implementing deltas for incrementals yourself in shell, I guess you could try doing so using xdelta, though I have no idea whether it’s possible to make it reasonable on performance for this kind of workload. It’s just that, I mean, all of this stuff takes time and testing, and people have already built backup systems atop rsync in the form of rsnapshot or rdiff-backup or duplicity and such.
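
            Per file, the xdelta approach would look roughly like this (xdelta3 syntax; file names made up, and whether it scales to a whole tree is the open question):

            ```sh
            # Encode a delta from yesterday's archive to today's, then prove
            # you can rebuild today's archive from yesterday's plus the delta.
            xdelta3 -e -s yesterday.tar today.tar delta.vcdiff
            xdelta3 -d -s yesterday.tar delta.vcdiff rebuilt.tar
            ```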

            I’ve got nothing against rsync. I use it to replicate file trees all the time, and it’s fine for what it was built for. It’s just that as a tool, it’s aimed at generating a replicated filetree…but generally, backups can benefit from more than just a replicated tree of files.

            I’m not specifically-saying that rdiff-backup is the end-all be-all backup system, just that it’s what I’ve found to be useful, and that I’m familiar with rsync. If it kept an index of hashes of files, it could dedup whole files and make renames-space-efficient, which would be kind of nice. If it retained copies of inode numbers, it could be used to cheaply-detect renames. There are algorithms to detect non-aligned duplicate chunks, which could cram the size down further. It needs a filesystem as the target, not a blob store, the way it looks like restic+rclone can, and for some users – like, say they want to use Amazon S3 storage as their backing storage to get offsite storage – might make sense. It can’t leverage information stored in something like btrfs to rapidly-detect that a file has changed; like rsync, it can use mtime as a quick check, without hashing the file. It isn’t (itself) designed for things like backing up live SQL databases. But for my use case, and I think for most people, it probably covers the stuff that they’re gonna want in a backup tool.