Tag Archives: Backup


Today I received an email from Symform announcing that they will discontinue the Symform service on July 31, 2016:

Announcement: Symform Service To Be Discontinued On July 31, 2016

Dear Symform Customer,

At Quantum, we strive to make great products that deliver a great user experience and solve key business problems. At the same time, we periodically evaluate our products and services in the context of our overall strategic goals and direction.

With this in mind, we have made the difficult decision to discontinue our Symform storage cloud services as of July 31, 2016. We appreciate the many customers who have used and contributed to the Symform service.

We have put together a detailed plan to support Symform customers and partners throughout this transition. For more information, check out the Symform Community site and related FAQs.

Your Symform Team

Synology has created wonderful devices and a marvelous operating system to run on them. The applications they offer for these devices are what make us love our Synology so much. For almost any task we can think of there is a package available, and if it isn't available from Synology's repository, there are many third-party repositories to fill the gap.

One of the applications delivered by Synology is the Backup and Replication tool. It backs up the precious data stored on the volume or volumes of our DiskStation, and Synology has also added features to back up the settings of some of the applications in its own package repository. The Backup and Replication tool can only access the volumes of the DiskStation, not files stored elsewhere in the root of the filesystem. Understandably, we can't expect Synology to back up the settings of third-party applications it doesn't even know exist. On the other hand, we do use these applications, and for obvious reasons it would be nice to have a mechanism to back up at least their configurations. With some self-support, a shell script and the scheduled task function of the DiskStation, it is possible to create a reliable mechanism to back up the files you can't reach directly with the Backup and Replication tool.

The idea is simple: create a script that puts all the files and folders listed in a separate file into a tar archive, and make sure the archive is stored in an area that can be backed up with the Backup and Replication tool. The configuration files may contain sensitive information (e.g. passwords), so make sure this area is only accessible to the users (administrators) who are allowed to see the content of these files.

The script

If you are using a Windows machine, the best thing to do is use WinSCP. WinSCP gives you access to the filesystem of your DiskStation and makes sure the text files you create have the right encoding, which ensures that the scripts you write actually work. See Getting access to Synology's filesystem elsewhere on this site.

The script is straightforward:


#!/bin/sh

# Constants (adjust the paths to your own situation)
LIST="/volume1/ConfBackup/tobackup.lst";
BU_FOLDER="/volume1/ConfBackup/archive";
BU_FILE="$BU_FOLDER/backup_$(date +%Y-%m-%d'_'%H-%M-%S).tgz";

# Create the backup folder if it doesn't exist.
if [ ! -d "$BU_FOLDER" ]; then
  mkdir "$BU_FOLDER";
fi

# Create a backup of the files and folders in $LIST
tar czf "$BU_FILE" --dereference --files-from "$LIST"

The script is called confbackup.sh, but it can have any name you like as long as it has a .sh extension. Also make sure it has the right permissions: the execute bit should be set and the right owner and group should have access to the script. A '750' will do the job when both the owner and the group are 'root'.
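Setting those permissions is a one-liner. The sketch below uses a temporary stand-in file so it can run anywhere; on the DiskStation you would point the same commands (as root) at your own confbackup.sh and additionally run chown root:root on it.

```shell
# Create a temporary stand-in for the script and apply mode 750:
# owner read/write/execute, group read/execute, others nothing.
SCRIPT=$(mktemp)
printf '#!/bin/sh\necho backup\n' > "$SCRIPT"
chmod 750 "$SCRIPT"
stat -c '%a' "$SCRIPT"   # prints 750
rm -f "$SCRIPT"
```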

There are two 'variables' that deserve some attention: LIST and BU_FOLDER. The variable LIST contains the full path and file name of the file listing all the files and folders to include in the archive; make sure that file exists at the location you enter here. BU_FOLDER is the folder where the archive is stored; it will be created if it doesn't exist. This location should be in the area you can reach with the Backup and Replication tool, so the archive itself gets backed up. Don't put a '/' at the end: the variable BU_FILE adds it when it builds the file name of the archive.
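To see why the trailing '/' must be left off, you can evaluate the BU_FILE expression on its own (the folder below is just an example path):

```shell
# BU_FOLDER without a trailing slash; BU_FILE supplies the '/' itself
# and appends a timestamped archive name.
BU_FOLDER="/volume1/ConfBackup/archive"
BU_FILE="$BU_FOLDER/backup_$(date +%Y-%m-%d'_'%H-%M-%S).tgz"
echo "$BU_FILE"   # e.g. /volume1/ConfBackup/archive/backup_2016-07-31_12-00-00.tgz
```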

The tobackup.lst file will look like this:
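For example (the entries below are purely illustrative; list the configuration files and folders of the packages you actually use):

```
/usr/local/etc
/etc/crontab
/root/.profile
```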


Each line contains either a folder or a file. Symbolic links are dereferenced when the archive is built, and when a folder is listed, all its sub-folders are included. If you want to alter the set of files in the archive, you simply alter tobackup.lst accordingly.
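Those two properties are easy to verify with a throw-away list before you schedule the real script (all paths below are temporary stand-ins for LIST and the entries in it):

```shell
# A folder entry pulls in its sub-folders, and --dereference stores the
# symlinked file as a regular copy instead of a link.
TMP=$(mktemp -d)
mkdir -p "$TMP/data/sub"
echo "hello" > "$TMP/data/sub/app.conf"
ln -s "$TMP/data/sub/app.conf" "$TMP/link.conf"
printf '%s\n%s\n' "$TMP/data" "$TMP/link.conf" > "$TMP/tobackup.lst"
tar czf "$TMP/backup.tgz" --dereference --files-from "$TMP/tobackup.lst"
tar tzf "$TMP/backup.tgz"   # lists data/, data/sub/, data/sub/app.conf, link.conf
rm -rf "$TMP"
```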


The only thing left to do is to schedule a task that executes the script periodically. This can be done with Synology's Task Scheduler, found in the Control Panel of your DiskStation. Just create a user-defined script task. Enter a logical task name in the Task field and use root as the user context to run the script in. The Run command is the user-defined script: /volume1/ConfBackup/confbackup.sh. (Make sure the path and the file name match yours.) In the Schedule tab you can define a schedule to run the script. If you don't change much, once a week will be sufficient; otherwise set any schedule that suits your needs.

The current version (4.20) of Symform on my NAS works like a charm, and their support is reachable again. The story below is outdated.

Symform is a one-of-a-kind cloud backup solution that can be used on your Synology DiskStation to back up your precious data. It's a p2p solution that doesn't use datacenters. The concept builds on the fact that people never use the complete capacity of their storage, which enables an alternative way to pay for your cloud storage: donate bytes from your local storage, and you get 50% of the donated space back as cloud storage. Symform slices your data into 96 chunks and distributes them all over the world. Check here to let Symform explain it themselves; I also found another article here that evaluates Symform's security concept.

So yes, I'm enthusiastic about Symform. It's an affordable way to have an off-site backup of my valuable data, and it supports a wide range of devices and operating systems: Synology, Netgear ReadyNAS, QNAP, Windows, Mac and Linux. I use it on a Synology. To offload the Symform data from my DiskStation, I use an external USB hard drive attached to the DiskStation. This hard drive is the primary destination for the backup jobs of my DiskStation, and the Symform client synchronizes this backup data to the off-site Symform cloud. The USB drive also contains the folder with disk space donated to Symform. With this method I keep my NAS as private as possible.

EDIT: It seems that Symform has fixed the memory leak. With my current version of the Symform client the software behaves itself normally: I haven't seen 'explosions' in memory or processor usage. I will monitor this and get back here when something changes. Therefore the following is obsolete, but it stays here for archival purposes.

EDIT: Unfortunately I just found out that Symform was resource hungry again, so I re-activated the scheduled tasks.

EDIT: It seems that Symform's client on my DiskStation has started to behave itself again with the current version.

Currently there is an issue with the client for Synology. (This may also apply to other NAS devices.) There seems to be a memory leak in the software: within a couple of days the memory usage literally explodes and drains my DiskStation's resources. It usually starts with a couple of megabytes, but after a few days it can consume more than a gigabyte! (And that can really ruin your day.) This forces me to restart the Symform package manually in the Package Center every other day. The other option is to stop using Symform, but that leaves me without an off-site backup.

I trust that the people of Symform will fix this sooner or later. Meanwhile I have to automate the stopping and starting of the Symform client. Fortunately the DiskStation has a feature for that: the Task Scheduler, located in the Control Panel.

You'll need to create two tasks: one to restart the Symform processes and one to kill the Symform log uploader. I configured them as daily user-defined script tasks as follows:

Task name: Stop Symform Sync
Run command: /*/@appstore/Symform/SymformNode.sh restart

Task name: Kill SymForm LogUploader
Run command: kill $(ps -w |grep symformloguploader|grep -v grep|awk "{print \$1}")

This applies to client version 3.18.0 (?) of Symform. If the people of Symform fix the memory leak, you can simply disable these tasks. Until then you have the best of both worlds: keep using Symform without the risk of the client software eating your NAS resources.

For a long time I dealt with an issue in my WordPress installation, but I didn't trust myself to experiment around and possibly end up with a non-working WordPress installation. So I decided to find a way to clone my whole WordPress installation and satisfy my experimental urge there. Once I had a way to clone my WordPress installation, I would also have a way to handle disaster recovery of my WordPress blog.

It has to be a reliable method and must be more or less foolproof, because I wouldn't like to spend much time on repairs if it isn't necessary. I think I've found a solution that ticks all the boxes: a plugin called Duplicator (http://wordpress.org/extend/plugins/duplicator/).

Duplicator can be installed as a regular WordPress plugin. The interface is simple but effective, and a complete manual can be found here, at the LifeInTheGrid website.

The first time you get acquainted with Duplicator, you will see the screen above without the list of packages. Once you have created a backup of your WordPress site, the list of packages will grow. To create a backup, simply press the Create Package button, located in the red circled area in the image above. Give it a logical name, press the Create Package Set button, sit back and wait. When the backup is finished, a package set will appear.

To transport your site to another location, you just press the two buttons (in the green area) behind your package. These buttons start two downloads: a zip file containing your WordPress site and the installer (installer.php). Save these files on your computer.

To rebuild your site at another location (or, in case of a disaster, at your original location), make sure that you have created an empty WordPress root folder and an empty wordpressblog database schema at the target location. Make sure you have the appropriate rights on the database schema and on the WordPress root folder. (I assume you have a working Apache and MySQL on your target system.)

Copy the zip file and installer.php to the WordPress root folder on your target system, open installer.php in your favorite browser (e.g. http://myaddress/wordpress/installer.php) and follow the instructions on the screen.

If you met the requirements, the installer has three steps to follow:

  1. The first step asks for your MySQL credentials. (This is also the last opportunity to drop all the tables in the wordpressblog schema, by ticking the box before Table Removal.) Enter your MySQL credentials here to let the installer create the database and tables.

  2. The second step lets you change the URL and the physical location of your WordPress root folder. Hardcoded locations in the pages and posts will not be changed. It also allows you to decide which plugins should be started or not. Disable all plugins during the installation, but not the login redirectors. Enable the plugins after the installation on the Admin page of your WordPress site.

  3. The last step gives you directions to follow to make sure your WordPress site is able to run again. Make sure you remove the following files from the WordPress root folder: installer.php, installer-log.txt, installer-data.sql.

You should now be able to open your working WordPress site from the new location.

It doesn't harm to back up your MySQL database server. In case of an emergency you will probably be glad you did. This post describes a simple method to do so.

The first action is to create a user in the MySQL database server with just enough rights to fulfill its task. Open your favorite MySQL management tool and create a new user with just a name and a password. (Don't assign any databases.) In this case we will use BackupUser as the username with the password P@$$w0rd.

Assign the following global rights:

  • Select
  • Reload
  • Show Databases
  • Lock Tables

Save the user in MySQL and close the MySQL management tool.
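If you prefer the command line over a management tool, the same user can be created with a few SQL statements run as root in the mysql client. This is only a sketch: the user name and password are the examples from above, and 'localhost' assumes the backup script runs on the database host itself.

```sql
-- Create the backup user and grant exactly the global rights listed above.
CREATE USER 'BackupUser'@'localhost' IDENTIFIED BY 'P@$$w0rd';
GRANT SELECT, RELOAD, SHOW DATABASES, LOCK TABLES ON *.* TO 'BackupUser'@'localhost';
FLUSH PRIVILEGES;
```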

The next thing to do is create a shell script. Open an SSH connection to your NAS, start vi and type the following script:


#!/bin/sh

# Constants (adjust to your own situation; keep the trailing '/' on DIR)
DIR=/volume1/backup/
KEEP=10
DB_USER=BackupUser
DB_PASS='P@$$w0rd'
FILENAME=${DIR}mysqldump-$(date +%Y-%m-%d_%H-%M-%S).gz

# create backup dir if it does not exist
mkdir -p ${DIR}

# remove all backups except the $KEEP latest
BACKUPS=`find ${DIR} -name "mysqldump-*.gz" | wc -l | sed 's/\ //g'`
while [ $BACKUPS -ge $KEEP ]; do
  ls -tr1 ${DIR}mysqldump-*.gz | head -n 1 | xargs rm -f
  BACKUPS=`expr $BACKUPS - 1`
done

# create backups securely
#umask 006

# dump all the databases in a gzip file
/usr/syno/mysql/bin/mysqldump -u $DB_USER -p$DB_PASS --opt --all-databases --flush-logs | gzip > $FILENAME

The constant DIR contains the path where the script saves its backup files (keep the trailing '/'). You can change this to suit your own needs, and KEEP determines how many backup files are retained.

You can run this script by adding a task to the crontab. How crontab works and how to add the task is described in Crontab explained. All you have to do is add the following line to /etc/crontab:

1       0       *       *       *       root    sh /volume1/backup/backupMySql.sh

This task will run every day at 0:01. Change it to the moment you want this backup to run, and don't forget to restart the cron daemon.

Restoring a database can be done with:

mysql -uroot -pecare2@ < alldatabases.sql

Replace the password with your own root password, and replace alldatabases.sql with the file name of your backup. (The dump created by the script is gzip-compressed, so unpack it first with gunzip.)