Disclaimer: This method is recommended for standalone systems that are not accessed by other users or processes on the network, since data that changes from one minute to the next can interfere with the backup process.
Make a Backup With rsync
The rsync command-line tool is one of the most popular backup tools on Linux systems, for several reasons. It lets you make incremental backups of an entire directory tree, both locally and to a remote server. Better yet, you can automate the backups using shell scripts and cron jobs.
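A minimal sketch of such a backup, using made-up paths, might look like this (-a preserves permissions, ownership, and timestamps; -v prints what is being copied):

rsync -av /home/user/Documents/ /mnt/backup/Documents/

To automate it, the same command (typically without -v) could go into the user's crontab, for example to run every night at 02:00:

0 2 * * * rsync -a /home/user/Documents/ /mnt/backup/Documents/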
That about covers it as far as local backups are concerned. As you can tell, rsync is very easy to use. It gets slightly more complex when using it to sync data with an external host over the Internet, but we will show you a simple, fast, and secure way to do that.
Other than installing SSH and rsync on the server, all that really needs to be done is to set up the directories on the server where you would like the files backed up, and make sure that SSH is locked down. Make sure the user you plan on using has a complex password, and it may be a good idea to switch the port that SSH listens on (the default is 22).
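As a rough illustration, the relevant lines in /etc/ssh/sshd_config might look like this (the port number is only an example; restart the SSH daemon afterwards, the service name varies by distribution):

Port 2222
PermitRootLogin no
PasswordAuthentication yes   # switch to "no" once key-based authentication is set up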
As you can see from the screenshot above, the output when backing up across the network is much the same as when backing up locally; the only thing that changes is the command you use. Notice also that it prompted for a password, which is used to authenticate with SSH. You can set up RSA keys to skip this step, which will also make it easier to automate rsync.
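The remote command is the same as the local one with the destination replaced by user@host:path. A sketch, assuming the server set up above listens on port 2222, could be:

rsync -av -e "ssh -p 2222" /home/user/Documents/ user@backup-server:/backups/documents/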
Another useful thing you can do is put your backups into a zip file. You will need to specify where you would like the zip file to be placed, and then rsync that directory to your backup directory. For example:
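One possible sketch, with placeholder archive name and paths:

zip -r /tmp/archive.zip /home/user/Documents && rsync -av /tmp/archive.zip /mnt/backup/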
As a sysadmin, I spend most of my energy on two things (other than making sure there is coffee): Worrying about having backups and figuring out the simplest, best way to do things. One of my favorite tools for solving both problems is called rsync.
All this is important if we want to make backups. This behavior is the same as the cp command. We can also use the cp command to copy directories recursively, as well as preserve attributes and ownership. The big difference is that rsync can checksum the files and compare source and destination contents, where cp -u only looks at the modification time. Rsync's additional functionality is useful for preserving the backup's integrity (we'll get into integrity later in this series).
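By default rsync decides what to transfer from file size and modification time; to force a full checksum comparison instead, add -c (--checksum), for example (paths are placeholders):

rsync -avc /home/user/Documents/ /mnt/backup/Documents/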
Whether transferring files locally or remotely, rsync first creates a file list containing information (by default, the file size and last-modification timestamp) that is then used to determine whether a file needs to be constructed. For each file to be constructed, weak and strong checksums are computed for all blocks such that each block is S bytes long, non-overlapping, and has an offset divisible by S. Using this information, a large file can be constructed with rsync without transferring the entire file. For a more detailed practical and mathematical explanation, refer to how rsync works and the rsync algorithm, respectively.
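For local copies rsync normally skips the delta algorithm and copies whole files. As an illustration (paths are placeholders), the block-based transfer can be forced and the block size S pinned explicitly:

rsync -av --no-whole-file --block-size=2048 /data/largefile /mnt/backup/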
The rsync protocol can easily be used for backups, only transferring files that have changed since the last backup. This section describes a very simple scheduled backup script using rsync, typically used for copying to removable media.
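A minimal sketch of such a script, assuming the removable drive is mounted at /mnt/usb:

#!/bin/sh
# Abort if the backup drive is not actually mounted
mountpoint -q /mnt/usb || { echo "backup drive not mounted" >&2; exit 1; }

# Mirror the home directory onto the drive, deleting files removed from the source
rsync -a --delete --exclude='.cache/' /home/user/ /mnt/usb/backup/

The script can then be called from cron, or run manually whenever the drive is plugged in.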
Instead of running backups at fixed time intervals, such as those scheduled with cron, it is possible to run a backup every time one of the files you are backing up changes. systemd.path units use inotify to monitor the filesystem and can be used in conjunction with systemd.service files to start any process (in this case your rsync backup) based on a filesystem event.
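A sketch of such a path unit, watching an assumed /home/user/Documents directory, could be saved as /etc/systemd/system/backup.path and enabled with systemctl enable --now backup.path:

[Unit]
Description=Trigger a backup when the watched directory changes

[Path]
PathChanged=/home/user/Documents

[Install]
WantedBy=multi-user.target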
Then create a systemd.service file that will be activated when a change is detected. By default, the service activated is the one with the same name as the path unit (in this case backup.path) but with the .service extension instead of .path (in this case backup.service).
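A matching backup.service could be as simple as a oneshot unit that runs the rsync command (source and destination paths are assumptions):

[Unit]
Description=rsync backup triggered by backup.path

[Service]
Type=oneshot
ExecStart=/usr/bin/rsync -a --delete /home/user/Documents/ /mnt/backup/Documents/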
There must be a symlink to a full backup already in existence as a target for --link-dest. If the most recent snapshot is deleted, the symlink will need to be recreated to point to the most recent snapshot. If --link-dest does not find a working symlink, rsync will proceed to copy all source files instead of only the changes.
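A sketch of one snapshot run, with assumed paths and a date-stamped snapshot directory:

rsync -a --delete --link-dest=/mnt/backup/latest /home/user/ /mnt/backup/$(date +%Y-%m-%d)/
ln -sfn /mnt/backup/$(date +%Y-%m-%d) /mnt/backup/latest

Files unchanged since the previous snapshot are hard-linked from it rather than copied, so each snapshot appears complete while only the changes take up additional space.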
This section is about using rsync to transfer a copy of the entire / tree, excluding a few selected directories. This approach is considered to be better than disk cloning with dd since it allows for a different size, partition table and filesystem to be used, and better than copying with cp -a as well, because it allows greater control over file permissions, attributes, Access Control Lists and extended attributes.
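A hedged example of such a full-system copy to a mounted backup volume (the destination path is an assumption; run it from bash so the braces expand into multiple --exclude options):

rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt/backup/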
It is very important to make backups of your data, as you never know when disaster will strike! One powerful, cross-platform tool to help you achieve this is rsync. In this post I'll explain why rsync is useful and how you can use it to set up your own backups.
If you execute this command for the first time, the remote server's cryptographic fingerprint is shown. This is used to ensure that you're talking to your server, and it prevents attackers from impersonating your server without you noticing (the fingerprint is unique to your server). If SSH detects a change in this fingerprint, it will refuse to connect and rsync will not transfer any files. This is critical, as copying your backups to a random remote destination would be a potential data leak!
If rsync can connect successfully, it will then determine which files need to be transferred. If you have set the "-v" flag, you'll see the progress line by line. After the transfer is complete, rsync will exit with a summary.
You can see in the screenshot that rsync automatically detected the new file and transferred only that file to the remote server, saving bandwidth, time, and processing power. This works for new files as well as for files that have been updated.
Because rsync is "just a terminal command", you can easily use it inside your own scripts. Think of a script as a collection of rsync commands, each pairing a "source" with a "destination" so that your files are mapped to their respective backup locations.
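A sketch of such a script, with made-up source and destination pairs:

#!/bin/sh
# Each line pairs a source with its backup destination
rsync -a --delete /home/user/Documents/ /mnt/backup/Documents/
rsync -a --delete /home/user/Pictures/ /mnt/backup/Pictures/
rsync -a --delete /etc/ user@backup-server:/backups/etc/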
Suppose that your source directory A contains files totaling 1 GB in size, and that directory B already holds the same 1 GB of files. Then you make small changes in A amounting to about 0.1 GB. With rsync, you won't have to copy the whole 1.1 GB of data from A to B; you only have to transfer the 0.1 GB of differences. Why copy mostly the same data if you can copy only the differences? This minimizes network usage, which is useful if you have limited bandwidth.
Let's jump straight to the code. I would strongly encourage you to code along; I find it more useful when learning something new if I actually type the commands. Moreover, don't just type everything you see in this article and stop there. Experiment with these commands. Read the man rsync page. Make variations. Do things that I don't mention here. Break things! Just make sure you make a backup first (see what I did there? :D). Only by doing this will you get the most out of this article.
Now you will find that all the files inside source/ have been copied into destination/. If you run the command rsync source/* destination/ again without making any changes, rsync won't do anything (there are no deltas).
Finally, if you add a directory inside source/, the rsync command above won't sync that directory (nor the contents inside it). To sync directories within a source directory, you need to run rsync recursively.
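For example, either of the following would pick up nested directories as well (the second also preserves permissions, ownership, and timestamps):

rsync -r source/ destination/
rsync -a source/ destination/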
Wait a second... doesn't that sound like Dropbox? Yup! Dropbox has tons of features that rsync doesn't, but at its core, Dropbox is a fancy, glorified rsync with durability added.
Notice that I don't have a forward slash after Projects even though it is a directory. When you rsync a directory without a trailing slash, rsync creates a directory with the same name as the source at the destination; here, that means a Projects/ directory is created on my gc host.
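As a sketch (gc stands in for the remote host from the text), the two forms behave differently:

rsync -av Projects gc:            # creates a Projects directory in the remote home
rsync -av Projects/ gc:Projects/  # copies only the contents of Projects into gc:Projects/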
Rsync is a powerful command for creating backups or syncing two directories. If you only need to do a one-time copy, the cp command is probably simpler. But if you need to keep two directories in sync, rsync is a better option.
Backing up data is an essential part of both individual and enterprise infrastructures. Machines with the Linux operating system can use rsync and ssh to facilitate the process.
Note: You can avoid entering a password every time you back up data with rsync over SSH. Set up SSH key-based authentication, and you will be able to log in to the remote machine without a password.
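A rough outline of that setup, with a placeholder host name and paths:

ssh-keygen -t rsa -b 4096             # generate a key pair (accept the defaults)
ssh-copy-id user@backup-server        # install the public key on the remote machine
rsync -av /home/user/Documents/ user@backup-server:/backups/documents/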
This tutorial showed you how to back up data using rsync both locally and over a network. Take caution when using this tool and make sure you do a dry run if you are unsure about the rsync options you want to use.
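A dry run only requires adding -n (--dry-run) to the command you intend to run, for example with placeholder paths:

rsync -avn --delete /home/user/Documents/ /mnt/backup/Documents/

rsync then reports what it would transfer or delete without touching the destination.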
I copied my entire root (/) tree with Rsync using a single command. Yes, it is just a one-liner. While there are many tools to back up your system, I find this method super easy and convenient, at least for me.
Bought a small SSD, connected it to my running Raspberry Pi with a USB/SATA cable, and made 2 partitions on it. Then I copied the boot and root partitions with Rsync: the files on the root partition as you described, and the files on the boot partition with rsync -av.
Same as a local copy. Just replace the destination with your remote location, something like this (run from bash so the braces expand into multiple --exclude options):
rsync -aHAXxv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} [source_dir] [destination_host:/destination_dir]
sudo nano /etc/rsyncd.conf

# Global configuration of the rsync service
pid file = /var/run/rsyncd.pid

# Username and group for working with backups
uid = backup-user
gid = backup-user

# Don't allow modifying the source files
read only = yes

# Data source information
[data]
path = /path/to/backup
list = yes
auth users = backup-user
secrets file = /etc/rsyncd.passwd
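With a module like [data] defined, a client could pull from the daemon roughly like this (host name and paths are assumptions; the secret file holds only the backup-user password):

echo 'backup-user-password' > ~/.rsyncd.secret
chmod 600 ~/.rsyncd.secret
rsync -av --password-file="$HOME/.rsyncd.secret" backup-user@backup-server::data /local/backup/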