Setting up a good backup solution on any system is essential. You’ve spent hours installing and configuring everything, and you may even be storing critical data on your Ubuntu server or computer, so now isn’t the time to skip the backup configuration to save 15 minutes. I’ve been a system administrator for almost 20 years, and I’ll share my tips with you in this article.
A good backup solution must be automated and configured by anticipating the worst-case scenario. The archive needs to be stored on a different computer (or external storage), and the recovery process documented and tested regularly.
That’s good practice at least. What you set up from there will highly depend on the content and role of your computer or server. You won’t do the same thing for a personal computer, where almost everything is stored online, and a critical server for a large company.
Creating Backups on Ubuntu using the GUI
If you use Ubuntu for personal use or have a desktop environment on it, you’ll find many good graphical applications available on Ubuntu to create backups. Most of them are based on rsync (a file transfer tool); they just have slightly different interfaces, with more or fewer features.
I will focus on two options here, but you are welcome to browse the Ubuntu Software app to find alternatives if you are looking for something specific.
Method 1: Using Deja Dup
The first solution I tested for you is “Déjà Dup Backups”. It’s available in Ubuntu Software and comes with the most important features for a personal computer (folders to include, destination: local or in the cloud, encryption, automation, etc.).
To install “Déjà Dup”, simply open Ubuntu Software and use the search engine to quickly find it:
Click on “Install” to download and install it on your computer.
You can then find the shortcut in the app launcher to start the configuration:
- A welcome window shows up, choose to create your first backup:
- Choose the files to back up (and to exclude):
The default option will save your home folder, excluding the Downloads folder, which is generally large and contains temporary files rather than essential ones. But you can easily adjust these settings via the interface.
- Choose the destination:
Déjà Dup can send the file to a local folder, or a network server, but also to Google Drive or Microsoft OneDrive.
As we’ll see later, it’s good practice not to keep the files on the same computer, but you can decide to create the archive locally and move it to another location later if you’d prefer.
- You can then protect your backup with a password if you want (do it if you save it to a network folder or use a shared computer).
Once your first backup is created, you can decide whether you want to schedule it. It’s often a good idea, even if you don’t run it every day. The default option is a weekly backup, which I think is fine for personal use, but you can change it in the preferences.
Déjà Dup also has a great interface to recover files from previous backups.
When you click on the “Restore” tab, you’ll see the files in the last backup:
You can select an older version from the date list in the bottom-right corner.
Select the files you want to recover, and click on “Restore”.
You can then decide to restore them to the original location or to a different folder.
Method 2: Using Timeshift
The idea of Timeshift is slightly different. If you are familiar with macOS, it’s like Time Machine. It takes incremental snapshots of the file system at regular intervals.
The goal is to avoid reinstalling everything if your system doesn’t work as expected one day (typically after a major upgrade). It’s a system restore tool, not a backup tool for your personal files.
Timeshift is also available in Ubuntu Software, so you can quickly find it via the search engine and install it in one click:
You’ll then find it in your main menu. Start it to begin the configuration and take your first snapshot.
When you open Timeshift for the first time, you’ll get a setup wizard asking which snapshot type you want to use (even if you don’t really have a choice ^^):
You’ll then access the Timeshift interface. If it doesn’t start automatically, click on “Create” to create the first snapshot:
As I told you, it’s a full system snapshot, so the first one may take a while. Timeshift takes all the files on your system and creates an archive with them. The first snapshot will be huge, and subsequent ones only save the differences, so they should be faster and smaller.
If needed, you can click on “Settings” to change the default configuration.
If you plan to do this, you can cancel the first snapshot, adjust the settings, and start it again afterward.
In the settings window, you’ll find:
- Location: where do you want to keep the snapshots?
- Schedule: should Timeshift run automatically and when?
- Users: by default, Timeshift doesn’t save user folders, but you can decide to change this, to include hidden files (generally applications data) or all files.
- Filters: Create custom filters to exclude files you don’t want to include in the snapshots.
Warning: Timeshift is great for getting back to a specific day in the past (especially if you noticed major issues after a version upgrade, for example), but it’s not really the backup tool you might be looking for.
If you have a GUI, I guess it’s a personal computer. I think using the first method (Déjà Dup) should be enough in most cases. It’ll often be easier to reinstall Ubuntu from scratch and restore the Déjà Dup archive than to play with snapshots containing outdated files.
In any case, make sure to plan for the worst-case scenario, but also for the simple recovery of a single file, which you may need more often. If you have just accidentally deleted one photo of your son, will you really restore the full system snapshot from 3 days ago? Or would you prefer browsing the last Déjà Dup backup to recover only that file?
Don’t miss the next two parts of this guide for other ideas and best practices.
Note: Timeshift comes with a command line interface, so it’s possible to use it via a terminal. I won’t cover it in the next section, but it might be a good option, as I think this app is mainly useful when you host critical applications and don’t want to have to redo everything after the next Ubuntu upgrade.
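To give you an idea, here is what the Timeshift command line looks like. This is just a sketch: the commands assume Timeshift is installed, they need root, and the snapshot name in the last command is a made-up example.

```shell
# List existing snapshots (requires root and an installed Timeshift)
sudo timeshift --list

# Create a new snapshot manually, with a comment to find it later
sudo timeshift --create --comments "Before Ubuntu upgrade"

# Restore a specific snapshot (the name below is just an example)
sudo timeshift --restore --snapshot '2024-05-01_10-00-00'
```

This makes it possible to script a snapshot right before a risky operation, such as a distribution upgrade.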
Creating Backups on Ubuntu with Commands
As explained in the introduction, most backup applications are just graphical interfaces for the commands we’ll see in this section.
If you use Ubuntu on a server, you most likely already know them. And even if you have a GUI and decent bash scripting skills, you may find these methods more powerful and customizable.
By the way, I titled this article for Ubuntu because the GUI part is specific to it. But for the rest of the article, all the tips apply to any Linux distribution.
Method 1: Using Tar
The goal of the tar command is to create an archive from files or folders. It’s a bit like WinRAR, WinZip, 7-Zip or other tools you may be more used to on other systems.
So, the idea with this command is to take one folder (for example, the Documents folder in your home directory) and create an archive of it: one file with all the documents in it. It’s not a backup command in itself, but it’s the first step.
The basic syntax to create an archive with tar is:
tar <options> <target> <directory>
The target will be the archive file name (e.g. backup.tar) and the directory is the source folder (e.g. /home/pat/Documents).
Then, there are many options that can be used:
- c – Create: create a new archive.
- x – Extract: extract files from an archive.
- v – Verbose: show progress.
- f – File: specify the archive file name on the command line.
- z – Gzip: create or extract a gzip-compressed archive.
- j – Bzip2: create or extract a bzip2-compressed archive.
Yep, I understand it’s not very intuitive, but don’t worry, I’ll show you the most common combinations.
Here are a few concrete examples you can use when you back up your system:
- Create a compressed archive of a specific folder:
tar czvf backup.tar.gz /home/pat/Documents
For backups, we’ll generally use compression (either gzip or bz2) to create a smaller archive file.
- Extract the compressed archive to a temporary folder:
tar xzvf backup.tar.gz -C /home/pat/restore/
The “-C” option allows you to extract the files to a specific location instead of the current directory.
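To make this concrete, here is a self-contained sketch of recovering a single file from an archive without extracting everything. It builds a throwaway archive in a temporary folder, so all the paths are just examples:

```shell
# Build a small sample archive in a temporary folder
workdir=$(mktemp -d)
mkdir -p "$workdir/Documents"
echo "hello" > "$workdir/Documents/notes.txt"
tar czf "$workdir/backup.tar.gz" -C "$workdir" Documents

# List the archive contents without extracting anything
tar tzf "$workdir/backup.tar.gz"

# Extract only one file, to a separate restore folder
mkdir -p "$workdir/restore"
tar xzf "$workdir/backup.tar.gz" -C "$workdir/restore" Documents/notes.txt
cat "$workdir/restore/Documents/notes.txt"
```

Listing with `tar tzf` first is a good habit: you can check what’s in the archive (and that it’s readable) before touching anything on disk.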
From there, you can easily create a bash script that will create archives for your most important folders, or even combine everything into one file. But remember, we can’t consider this a backup solution if it’s not automated (scheduled) and sent to a remote location (network drive, external storage or cloud).
We’ll get back to this at the end of the article. I first need to introduce another important command.
If you’re new to the Linux command line, this article will give you the most important Linux commands to know, plus a free downloadable cheat sheet to keep handy.
Method 2: Using Rsync
Rsync is not an alternative to tar, but it’s often a better solution.
The overall idea is to sync files from one folder to another, or from one computer to another. You’ll typically use it to back up your files to another server on the network (a NAS or a Raspberry Pi acting as a NAS, for example).
What’s great is that it can compare the files between the source and the destination, and only transfer the ones that changed. So, if you back up a 100 GB folder with all your personal photos, it won’t transfer everything each time (unlike the tar approach, which creates a full new archive each week). It will only transfer the new or modified files to the network folder (or nothing if you haven’t added or changed anything).
On most Linux distributions, “rsync” is pre-installed by default, so you don’t have anything to do before using it.
If it doesn’t work on your system, try to use the package manager to install it. It’s available in the default repository. On Debian-based distributions, you can use:
sudo apt update
sudo apt install rsync
The main syntax of rsync is:
rsync <options> <source> <destination>
The source is the folder you want to back up, and the destination where you’ll keep the copy.
They can be local paths or remote resources. Note that rsync skips directories unless you pass the -r or -a option.
- A relative path:
rsync -a important_files/ ./backup-folder/
- An absolute path:
rsync -a important_files /home/pat/backup-folder/
- A remote location, using ssh:
rsync -a important_files USER@IP_ADDRESS:/media/backups/
For the options, there is a long list available, but here are the ones I use most of the time:
- a – Archive: to get an exact copy (preserve permissions, copy links, etc.)
- u – Update: skip files that are newer on the destination than on the source (avoids overwriting more recent copies).
- z – Compress: Compress files during transfer (limit the bandwidth used).
- r – Recursive: Sync subfolders (not required if you use “a”).
Use man rsync to see all the details.
So typically, you’ll use this command to keep an identical copy of an important folder in a separate location (ideally a NAS or other computer on the network).
Here is an example:
rsync -auzr /home/$USER/Documents/ [USER]@[IP]:/media/backups/[USER]/Docs/
I have an article about creating backups on Raspberry Pi where I go into more detail. It’s about Raspberry Pi, but it’s Debian-based, so everything works the same on Ubuntu.
Best Practices for Ubuntu Backups
Now that you know the tools and commands you can use to back up your Ubuntu computer, it’s important to have some general concepts in mind.
It’s not enough to create a Tar archive once a year or to let Timeshift run once a week and consider yourself safe.
Automate your backups
Backups need to be scheduled as often as needed, depending on your situation. You can’t rely on your goodwill to run them manually from time to time. Please take 5 more minutes and schedule them.
If you use one of the GUI applications I introduced at the beginning, you are already set. They both have schedule options, just make sure they are configured correctly for you (not everyone has the same needs).
If you use command lines, you can create a basic script that will run your “tar” or “rsync” commands. The first benefit is that you won’t have to remember the syntax and options. And we’ll see that you can easily schedule a script to run automatically.
It can be a simple script that maybe looks like this:
#!/bin/bash
tar czf /media/backups/backup_documents.tar.gz /home/pat/Documents
tar czf /media/backups/backup_pictures.tar.gz /home/pat/Pictures
rsync -auzr /media/backups/* [USER]@[IP]:/share/backups
In this script, I create two archives with some of the most important folders in my home directory. Then I transfer them to a remote location. Feel free to suffix file names with the current date if you want to keep a history.
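A date-suffixed variant could look like the sketch below. The folders here are temporary stand-ins so the example runs anywhere; replace them with your real source and backup paths:

```shell
#!/bin/bash
# Stand-in folders so the sketch is runnable; use your real paths instead
source_dir=$(mktemp -d)     # e.g. /home/pat/Documents
backup_dir=$(mktemp -d)     # e.g. /media/backups
echo "report" > "$source_dir/report.txt"

# Suffix the archive name with the current date to keep a history
today=$(date +%Y-%m-%d)
tar czf "$backup_dir/documents_$today.tar.gz" -C "$source_dir" .

# Optionally prune archives older than 30 days to cap disk usage
find "$backup_dir" -name "documents_*.tar.gz" -mtime +30 -delete

ls "$backup_dir"
```

The find line is what keeps a dated history from silently filling your disk; adjust the retention period to your needs.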
Once the script is created (and tested), you still need to schedule it.
On Linux, you can use cron to do this:
- Open the user’s crontab:
crontab -e
- Paste a line like this at the end of the file:
0 0 * * * /usr/local/bin/backup.sh
This cron will run your backup script each day at midnight.
Edit it with your script location and the schedule you want to use.
You can use this tool to generate the beginning of the line if you are not familiar with the crontab syntax.
- Save and quit (CTRL+O, Enter, CTRL+X)
If you back up files that require privileged access, don’t forget to schedule the script in the root crontab instead (sudo crontab -e).
That’s it, you should already be in pretty good shape. A backup script will evolve over time and needs to be updated to make sure it’s still saving the important files. You may also update it after some file recoveries to make the process easier or safer.
Choose a safe backup location
The main recommendation here is to not keep the backup files on the same computer.
If you do this, it doesn’t really matter which solution you pick. I like using a NAS for this, but even just an external drive is already better than nothing.
The issue with external storage is that the copy can’t easily be automated. You can automate the archive creation on the computer, but you’ll do the external copy manually from time to time, so it’s less reliable.
If your hard drive fails and you haven’t copied the backups to the external drive recently, you’re screwed. But at least you have “something” if you did it a few times.
NAS or cloud storage can be accessed directly in your scripts or applications, so you don’t have to worry about it.
In any case, make sure you have your backup in an external location. For critical data, it’s even recommended to keep them in a different physical location (floor, building, etc.). So even in case of a fire or flood, you’re pretty safe.
Keeping your backups in the cloud is often the most expensive option if you have a lot of data, but it’s the safest solution.
Test your backups regularly
Last but not least, test your backups regularly!
I can’t emphasize this enough. I can’t tell you how many times as a sysadmin I had to recover a backup, only to discover that the recovery process was either too long or not working:
- Extracting files from a 100 GB+ archive takes forever.
I hope you don’t have 500 employees waiting for you.
- Searching in the backup for a file edited yesterday, and discovering that the most recent files are 2 years old.
- Recovering MySQL tables where the data is in the file but not the table structure (at least it’s better than the other way around).
So make sure to test your backups. Write the procedure, so you know what to do when something bad happens, and test it regularly.
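A minimal automated check could look like this self-contained sketch. It builds a throwaway archive so the checks have something to verify; in real life you’d point them at your actual backup file:

```shell
# Build a throwaway archive so the checks below have something to verify
workdir=$(mktemp -d)
echo "data" > "$workdir/file.txt"
tar czf "$workdir/backup.tar.gz" -C "$workdir" file.txt

# 1. Can the archive actually be read? (catches truncated/corrupt files)
if tar tzf "$workdir/backup.tar.gz" > /dev/null 2>&1; then
  status="OK"
else
  status="CORRUPT"
fi
echo "Archive check: $status"

# 2. Is the backup recent? (here: modified within the last day)
recent=$(find "$workdir" -name "backup.tar.gz" -mtime -1)
[ -n "$recent" ] && echo "Backup is recent"
```

Running something like this from cron and mailing yourself the result catches the classic failure mode where backups silently stopped months ago.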
You might be stressed when you need to do this for real (especially in a work environment), so it’s a good idea to practice first.