How to Backup FreeNAS to Google Drive Using Duplicati

We all have our own backup solutions, some better than others, but the standard is the 3-2-1 Backup Strategy, which suggests having at least (3) copies of your data (not including the production data itself), with (2) of those copies stored locally on different hard drives, and (1) copy stored somewhere offsite. Most of us datahoarders and homelabbers have some implementation of this rule in one form or another.

If you are just looking for the tutorial and want to skip through all of my personal backstory bullshit, just scroll on to the end, and don’t complain about it. This is a personal blog, not some Medium article. At the end, I will discuss how to set up incremental, versioned, block-level, encrypted backups to Google Drive on FreeNAS.

Note: the single caveat is that the unlimited storage only applies to GSuites for Business accounts that have 5 or more users (otherwise, you will be paying normal Google Drive storage fees).

ZFS Redundancy and Backing Up to FreeNAS

My [tiny in comparison to most] FreeNAS setup runs 4x6TB WD Reds in a RAID-Z2 configuration. I know what you’re thinking — that’s overkill for a 4-drive setup, but it was necessary before I had a Dell R710 to run a legitimate FreeNAS server, back when I was running FreeNAS bare metal on a Mac Mini with 2 external USB3.0 dual-drive docking stations. If one of the docking stations failed (which it did before I replaced it), I wouldn’t lose all of my data. And in my opinion, you can never have enough redundancy. So in my situation, if a single drive fails, I will have enough time to save up for an 8-10TB WD Red Pro to replace it (and an excuse for an eventual zpool upgrade) without having to worry about another drive failing in the meantime and losing all of my data.

I have multiple backup solutions for ESXi and other computers, all of which end at FreeNAS. Even my ESXi host (a Mac Mini at the moment) mounts an NFS datastore and runs all of my guest VMs directly from FreeNAS. I have a single ESX guest that functions as a management server and runs multiple rsync backup scripts via cron jobs, which pull important files, directories, databases, and Docker containers from EC2 instances (so that I don’t have to pay for AWS AMIs or volume snapshots) and store them on the NAS, along with the scripts themselves that run the backup jobs. Additionally, the same management server runs an Ansible configuration (also stored on the NAS) in order to keep all of my EC2 instances and ESX guests patched nightly. So you can see that FreeNAS has become a crucial part of my overall home workflow, and it absolutely cannot become a single point of failure.
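
Just to give you an idea of what one of those cron-driven rsync jobs looks like, here is a rough sketch (the hostnames, SSH key, and NAS mount point below are placeholders for illustration, not my actual setup):

# Hypothetical crontab entry on the management server: pull from an EC2 host at 2am
# 0 2 * * * root /usr/local/bin/backup-ec2.sh

#!/bin/sh
# backup-ec2.sh -- illustrative sketch only; adjust the host, key, and paths to your own setup
SRC_HOST="ec2-user@my-instance.example.com"
DEST="/mnt/freenas/backups/ec2"   # NFS mount backed by the FreeNAS pool

# Pull configs and app data down to the NAS, deleting anything removed on the source
rsync -az --delete -e "ssh -i /root/.ssh/backup_key" "${SRC_HOST}:/etc/" "${DEST}/etc/"
rsync -az --delete -e "ssh -i /root/.ssh/backup_key" "${SRC_HOST}:/opt/app/data/" "${DEST}/app-data/"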

Local FreeNAS Backup Strategy

I have had multiple backup strategies and solutions over the years, but the core of it is an additional 6TB WD Red, also formatted as ZFS, that I store ZFS snapshots on. At the moment, I have a base snapshot, a yearly snapshot, and monthly snapshots, which I can recover files from at any point in time, so it’s basically a mirror of my main ZFS pool, but with recovery capabilities. Obviously this is not going to work once my pool outgrows 6TB in actual used space, so eventually I will need to replace that guy with a 10TB drive or something in that range. Additionally, I have a 4TB external USB3 drive that holds all of my really important data from before I migrated to ZFS. That’s the local storage “oh-shit” solution. If I can ever get a decent USB3.0 card to work with my R710, I will probably hook up an external 4+TB drive at some point and implement an occasional file-level rsync backup.
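
For the curious, the mechanics of that snapshot drive are nothing fancy. The pool, dataset, and snapshot names below are made up for illustration (not my actual layout), but the routine boils down to something like this:

# Take a recursive monthly snapshot of the main pool
zfs snapshot -r Volume1@monthly-2020-04

# One-time full copy of the pool onto the single-drive backup pool
zfs send -R Volume1@base | zfs receive -F backup1/Volume1

# After that, only send what changed since the previous monthly snapshot
zfs send -R -i Volume1@monthly-2020-03 Volume1@monthly-2020-04 | zfs receive -F backup1/Volume1

# Browse the snapshots available for file recovery
zfs list -t snapshot -r backup1/Volume1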

Shannon also keeps all of the kids’ photos on her computer, because she doesn’t trust the NAS and doesn’t understand single points of failure. I have to stay on her constantly to copy and organize her photos on the NAS, but she maintains her own backup strategy of organizing them twice — once on her computer and then again on the NAS. I won’t go there, but it is what it is! So between the ZFS snapshots, the “oh shit” external drive, and arguably my wife’s computer, I’d say I have the 2 on-site backups fairly squared away.

Offsite Backup

In addition to Shannon storing what she believes to be a full database of all important photos on her computer, I have all of the family photos also backed up in both iCloud and Google Photos. And yeah, I pay like $2 a month for each in order to have enough of their expensive storage, but honestly, if I were to lose everything, I would at least want to have our photos, and the price is negligible when you consider being able to have a decade’s worth of photos and videos at your fingertips. And yes, Google is probably using facial recognition to be able to track my kids in the future and target them after they age and become cyber criminals, but honestly, I’m not going to worry about that right now.

So what about my other stuff? Well, to be honest, when GitHub announced unlimited free private repositories a while back, that was like a dream come true for me, because I really don’t care for Bitbucket at all, so I try to keep all of my useful scripts and projects (that are not open source) stored in private GitHub repos. This also includes a config file repo that the previously-described management server uses to pull and store nightly updates to the configuration files for ESXi, FreeNAS, my two routers, and my pfSense firewall.
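
That nightly config pull is nothing magical either. As a rough sketch (the hostnames and repo layout are placeholders, although pfSense really does keep its entire config in /cf/conf/config.xml and FreeNAS keeps its settings database at /data/freenas-v1.db), it looks something like this:

#!/bin/sh
# nightly-configs.sh -- illustrative sketch; hosts and repo path are placeholders
REPO="/opt/config-backups"   # local clone of the private GitHub repo
cd "${REPO}" || exit 1

# pfSense stores its entire configuration in a single XML file
scp root@pfsense.example.lan:/cf/conf/config.xml pfsense/config.xml

# FreeNAS stores its configuration in a SQLite database
scp root@freenas.example.lan:/data/freenas-v1.db freenas/freenas-v1.db

# Commit and push whatever changed overnight
git add -A
git commit -m "Nightly config backup $(date +%F)" && git push origin master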

So what about the meat and potatoes?

These are nice for little one-off strategies, but the 3-2-1 Rule says that I should keep a full copy of my data offsite. The cheapest option would be to buy a 10TB external USB3 drive (and a working USB3 card for the R710), rsync the whole enchilada, and give it to my parents the next time they come to visit sometime within the next couple years after the COVID-19 quarantine is lifted. And I will actually probably do that at some point in the future to replace and/or coexist with my “oh-shit” local backup solution. But what if I want my data now and don’t want to have to drive a 4 hour round-trip in order to restore it? Well that’s obviously what the cloud is there for.

So what cloud solution is the best solution?

I had Crashplan for years before they changed their business structure. I loved it. It was great. It was some ridiculously low price for unlimited storage, but then that changed when they *cough* sold out *cough*. So I decided to go with Backblaze instead. I went ahead and paid for my first month or whatever and set up a Windows VM to back up FreeNAS via a CIFS/SMB share, only to realize that they don’t support network drives. So I considered their B2 cloud storage service, which actually has an official FreeNAS plug-in, but I determined that it would be too expensive. The simplest solution would be AWS Glacier Deep Archive, but that would also be expensive, especially if I ever wanted to retrieve my data (Amazon always intends to get you one way or another). So finally, I decided on returning to Crashplan, this time opting for their new Crashplan For Small Business solution, which does include the native ability to back up network shares…

Unlike the old Crashplan, which could run headless in Linux, this version required a GUI (or at least it did when I signed up), and I didn’t have a Windows VM at the time that I could use as a constant backup source. What I did have, however, was a MacOS VM that I was using for some mobile development projects, so I decided to use that to mount the NAS via NFS and back up everything from there. It worked alright. Sometimes. Many times I would log in and it wouldn’t be able to connect to the backup server, and I would have to restart it, and, well, honestly it was just crappy. I also found out that they ignored certain file types, including .vdi, .vmdk, etc., which meant that I wouldn’t be able to back up any VMs I had stored on my NAS. At this point, the MacOS VM was running on a local datastore instead of FreeNAS, and I knew that one of the hard drives on the host was bad, so I had all of my VMs in one datastore, with only ISOs stored on the hard drive that I thought might crash at any time. As it turned out, I thought wrong, and I lost all of my VMs. Luckily I had backed up all of the important configs and files to the NAS, so I was able to get pfSense and my web, app, and management tiers back up within a day, but the MacOS VM, which required lots of convoluted configuration, was screwed, so it stayed down. I would periodically get emails about how my data hadn’t been backed up in X number of days, and then I would see the monthly PayPal receipts for a backup service that I wasn’t using, and finally, when I saw that only 2.2TB total had even been backed up, that’s when I said “f*ck this camel and the straw that broke its back” — it was time for a better solution. I had heard some rumors, but as it turned out, I had no idea what I had been missing.

Enter Google Drive

Linus Tech Tips posted a video (which I happened to miss at the time) back in 2018, called I Hope Google Doesn’t Ban Us – …Abusing Unlimited Google Drive. In this video, they discover a loophole in GSuites for Business accounts with 5 or more users that allows for unlimited Google Drive storage. They also discover that bandwidth is not unlimited, as you are limited to 750GB of upload per day — however, that limit applies per user, so they come up with a ridiculously ingenious setup that lets them max out their upload speed by utilizing multiple users without hitting the per-user bandwidth cap. That being said, they have a pretty significant fiber pipe, and normal users would probably be fine sticking with a single user without hitting the cap, which [if my math is correct] would allow you to upload somewhere around 10TB in about 2 weeks (750GB/day × 14 days ≈ 10.5TB).

This was a no-brainer for me, because I currently have accounts on 3 separate GSuites for Business domains. One is for my workplace, but I didn’t want to mix personal data with work data, and if I ever stop working there, I would lose access to my data. I also have an account with The National Upcycled Computing Collective, Inc. (NUCC), but I’m fairly certain theirs is a GSuite for Nonprofits account, and I didn’t know if the same rules applied, plus it seemed like bad form. The most logical solution was to use my GSuites for Business account with KM CyberSecurity, as Keatron Evans is a very close personal friend and mentor, so I asked him beforehand, and he was cool with it. KM CyberSecurity is also one of our affiliates, and they graciously provide the hosting for this website, so it seemed to be a perfect fit.

So what is the downside of doing it using rclone like Linus and his crew? Well, if you are at all concerned about Google having file-level access to 100% of the data stored on your NAS, that would be the primary concern. The upside would be having file-level access to all of your data from the browser or an app on your smartphone, but that is already possible and much more secure without Google’s prying eyes by setting up something such as Nextcloud and just mounting your volume there for easy access. So what is a secure alternative to storing your files directly in Google Drive using rclone?

Allow me to introduce Duplicati

Duplicati is “a backup client that securely stores encrypted, incremental, compressed backups on local storage, cloud storage services and remote file servers” — and it comes with native support for Google Drive! Some of the stand-out features of Duplicati are as follows:

  • Strong encryption
  • Incremental backups
  • Compression
  • Online backup verification
  • Deduplication
  • Fail-safe design
  • Web interface
  • Command line interface
  • Metadata
  • Scheduler
  • Auto-updater
  • Ability to backup open files

I’ve been using it for a few days now, and it hasn’t missed a beat. It’s extremely simple, yet extremely powerful, it’s fast, and it’s something that I have really missed without even knowing it. I have so much love for this tool already that it’s kind of ridiculous. So now I’m going to show you how you can get this shit set up on FreeNAS.

Installing Duplicati in a FreeBSD Jail

I must admit that I cannot take any credit whatsoever for this tutorial. After quite a bit of Googling, I was led to Reddit, where I then found it hidden here on the iXsystems community site, and I figured I would update it, put my own twist on it, and try to increase its visibility by putting another resource out on the web for anyone looking to back up FreeNAS to Google Drive. The whole thing takes less than 10 minutes to set up.

Just be sure to replace the ip4_addr value with whichever IP you want to use, the defaultrouter value if yours is not 192.168.1.1, and 11.3-RELEASE with the value of your current release. Also, obviously replace the file paths with your own. And for the love of God, do not ask me how this works. To be honest, I just consider it magic and roll with it:

# SSH to FreeNAS:
ssh root@freenas

# Create temporary package file:
echo '{"pkgs":["mono","py27-sqlite3","curl","ca_root_nss"]}' > /tmp/pkg.json

# Create duplicati iocage jail:
iocage create -n "duplicati" -p /tmp/pkg.json -r 11.3-RELEASE ip4_addr="vnet0|192.168.1.17/24" defaultrouter="192.168.1.1" vnet="on" allow_raw_sockets="1" boot="on"

# Remove the temporary package file:
rm /tmp/pkg.json

# Create config directory outside the jail:
mkdir -p /mnt/Volume1/apps/duplicati

# Create the /config mount point inside the jail, then add the config directory to the duplicati fstab:
iocage exec duplicati mkdir /config
iocage fstab -a duplicati /mnt/Volume1/apps/duplicati /config nullfs rw 0 0

# Mount the backup source directory in the duplicati jail:
iocage exec duplicati mkdir /mnt/backup
iocage fstab -a duplicati /mnt/Volume1/datastore_to_backup /mnt/backup nullfs rw 0 0

# Exec into the duplicati iocage jail as root:
iocage console duplicati

# Additional setup steps:
ln -s /usr/local/bin/mono /usr/bin/mono
mkdir /usr/local/share/duplicati

# Download the duplicati zip file (check for latest before download):
curl -o duplicati.zip 'https://updates.duplicati.com/beta/duplicati-2.0.5.1_beta_2020-01-18.zip'

# Move the zip file into its own directory and extract it:
mkdir duplicati && mv duplicati.zip duplicati/
cd duplicati && unzip duplicati.zip && rm duplicati.zip

# Copy the extracted contents to the correct location (note the relative path):
cd .. && cp -rf duplicati/* /usr/local/share/duplicati/
rm -rf duplicati

# Create duplicati user and group:
pw user add duplicati -c duplicati -u 818 -d /nonexistent -s /usr/sbin/nologin

# Set up permissions
chown -R duplicati:duplicati /usr/local/share/duplicati /config

# Create the service directory (it usually already exists):
mkdir -p /usr/local/etc/rc.d

While inside the iocage jail as root, create the following service file using vi /usr/local/etc/rc.d/duplicati, and copy/paste the contents below (remember to hit i for insert before pasting, for all the nano folks, and then Esc > :wq! to save the file):

Note: this file has the web UI password in it, so replace YOURPASSWORDHERE with an actual password.

#!/bin/sh

# $FreeBSD$
#
# PROVIDE: duplicati
# REQUIRE: LOGIN
# KEYWORD: shutdown
#
# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
# to enable this service:
#
# duplicati_enable: Set to YES to enable duplicati
# Default: NO
# duplicati_user: The user account used to run the duplicati daemon.
# This is optional, however do not specifically set this to an
# empty string as this will cause the daemon to run as root.
# Default: duplicati
# duplicati_group: The group account used to run the duplicati daemon.
# This is optional, however do not specifically set this to an
# empty string as this will cause the daemon to run with group wheel.
# Default: duplicati
# duplicati_data_dir: Directory where duplicati configuration
# data is stored.
# Default: /config

. /etc/rc.subr
name=duplicati
rcvar=${name}_enable
load_rc_config $name

: ${duplicati_enable:="NO"}
: ${duplicati_user:="duplicati"}
: ${duplicati_group:="duplicati"}
: ${duplicati_data_dir:="/config"}

command="/usr/sbin/daemon"
procname="/usr/local/bin/mono"
command_args="-p ${duplicati_data_dir}/duplicati.pid -f ${procname} /usr/local/share/duplicati/Duplicati.Server.exe --webservice-port=8200 --webservice-interface=any --webservice-password=YOURPASSWORDHERE -d ${duplicati_data_dir}"

start_precmd=duplicati_precmd
duplicati_precmd() {
    export USER=${duplicati_user}
    if [ ! -d ${duplicati_data_dir} ]; then
        install -d -o ${duplicati_user} -g ${duplicati_group} ${duplicati_data_dir}
    fi

    export XDG_CONFIG_HOME=${duplicati_data_dir}
}

run_rc_command "$1"

Now let’s run a few more commands to get it completely working:

# Set correct permissions on duplicati service:
chmod u+x /usr/local/etc/rc.d/duplicati

# Set duplicati service to start on boot:
sysrc "duplicati_enable=YES"

# Exit out of duplicati iocage jail back into FreeNAS:
exit

# Start the duplicati service:
iocage exec duplicati service duplicati restart

Now you should be able to browse to http://<jail_ip>:8200 and enter the password you set in /usr/local/etc/rc.d/duplicati. Then just select Google Drive as your backup destination, allow access via OAuth2, etc., choose the folder on Google Drive where you want your backups stored (it will create the folder if it doesn’t already exist), and select the datastore directory that you mounted into the jail previously as your backup source.

You will be allowed to set an optional backup retention plan, and bingo-bango, you’re ready to tango.
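
And if you would rather script the job than click through the web UI, Duplicati also ships a command line interface in the directory we extracted earlier. This is only a sketch; the Google Drive folder name, passphrase, and AUTHID (which you obtain from Duplicati’s OAuth sign-in page when you authorize Google Drive) are placeholders:

# Run a one-off encrypted backup of the mounted dataset to Google Drive (from inside the jail)
mono /usr/local/share/duplicati/Duplicati.CommandLine.exe backup \
    "googledrive://FreeNAS-Backups?authid=YOUR_AUTHID_HERE" \
    /mnt/backup \
    --passphrase="YOUR_ENCRYPTION_PASSPHRASE"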

If you find anything wrong with this article or note any corrections that need to be made, please feel free to leave a comment.

Comments (3)

  1. AJS
    I tried this but I get stuck on iocage fstab -a duplicati /mnt/Volume1/apps/duplicati /config nullfs rw 0 0. I filled in my own pool (/mnt/Backup/Duplicati) but... I get this error on this step: Destination: /mnt/VMs-Jails/iocage/jails/duplicati/root/config does not exist or is not a directory. (This is not my filled-in path.)
    • AJS
      Found the error: the mkdir for this path is missing. Also later, in cp /duplicati/* /usr ..... the path does not exist, so it must first be made!
      • AJS
        Sorry, my mistake, it was another problem... it does not see duplicati as a directory, so I just copied first and then did cd .., i.e. cp -rf * /usr/local/share/duplicati/. But I ran into an issue that I had also hit when I tried the plugin first: I cannot back up the pools inside my pool that need to be backed up. So I have a pool, Data, which I wanted backed up... so I mounted /mnt/Data to my jail. When I SSH into my jail I can see everything, all pools and other folders, but when I am in the Duplicati web GUI it does not see anything and I get errors. In the plugin I at least saw some directories that were made in the pool but not created via a new dataset in the pool Data. But with your method I see only the pool Data with a big ! mark, and when trying to back up I get warnings that it cannot access the data, so how do I solve this issue? With one method (yours) I seemingly have to add all datasets individually, and with the other method I have to first make directories and remove all datasets, but I need the datasets, and adding them all one by one is a hell of a job... I mean this whole FreeNAS is a hell of a job, but you need something... I was the only one that loved FreeNAS Corral (10.0), which died because I was the only one. Everything just worked easily, it was less difficult to set up Dockers and jails, and the ACLs just worked; it is a hell in FreeNAS 11 and it gives me a lot of headaches when using different accounts and FTP. Some FTP accounts cannot see their own home directory, for instance, unless I make them a member of wheel... FreeNAS 11 is just crap...
