
CrashPlan Performance Tips


Sometimes you’ll find very good tips in CrashPlan’s support knowledge base articles, but sometimes help has to be found elsewhere. Is your backup occupying a lot of processor time and slowing down your NAS? Do you have slow upload speeds? Try these tweaks!

 


Data De-duplication I

The first thing you should try is setting the backup set’s de-duplication feature to Minimal. Access the headless client via the GUI and navigate to:

Settings > Backup > (select the backup set) > Advanced Settings

[Screenshot: CrashPlan’s Data De-duplication settings]

Change the “Data de-duplication” setting from “Automatic” to “Minimal” and click “OK”.
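
If you haven’t set up headless GUI access yet, the usual trick is to tunnel the CrashPlan service port over SSH and point a desktop client at the tunnel. The snippet below is only a sketch: it assumes the engine on the NAS listens on its default port 4243, that your desktop client reads conf/ui.properties, and (on CrashPlan 4.3 or later) that the .ui_info token from the NAS has been copied to the desktop as well.

# On your desktop: forward a local port to the CrashPlan engine on the NAS
# (assumes the engine listens on 127.0.0.1:4243 on the NAS, which is the default)
ssh -L 4200:localhost:4243 admin@your-nas

# Then, in the desktop client's conf/ui.properties, point the GUI at the tunnel:
#   servicePort=4200
# Restart the desktop GUI and it will talk to the headless engine through the tunnel.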

 


Data De-duplication II

You’ll need to SSH into the NAS running the headless client and navigate to the configuration directory (Synology path shown):

/var/packages/CrashPlan/target/conf

There, edit the file “my.service.xml” (using e.g. “vi my.service.xml“) and scroll down until you find these two lines:

<dataDeDupAutoMaxFileSizeForWan>10</dataDeDupAutoMaxFileSizeForWan>
<dataDeDuplication>MINIMAL</dataDeDuplication> 

On the first line you’ll probably see a very long number. We’re trying to stop CrashPlan from de-duplicating any data at all, so set it to a very low value (1 or 10 bytes), save the file, and you’re good to go. You should see better upload speeds now!

Note that if you have more than one backup set you’ll find this line multiple times.
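
If you’d rather not hunt through the XML by hand, a one-liner like the one below sets the value for every backup set in one go. This is only a sketch that assumes a Synology box (adjust the path for other systems) whose sed supports in-place editing; keep a backup of the file before touching it.

CONF=/var/packages/CrashPlan/target/conf/my.service.xml

# Keep a copy of the original config before changing anything
cp "$CONF" "$CONF.bak"

# Set the WAN de-duplication file-size threshold to 1 byte for every backup set
sed -i 's|<dataDeDupAutoMaxFileSizeForWan>[0-9]*</dataDeDupAutoMaxFileSizeForWan>|<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>|g' "$CONF"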

Don’t forget to restart the engine after editing the file.
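
How you restart the engine depends on how CrashPlan was installed. On a Synology, the commands below should work from the same SSH session, assuming the package is registered under the name “CrashPlan”; otherwise stop and start it from Package Center.

# Restart the CrashPlan package so the edited my.service.xml is re-read
synopkg stop CrashPlan
synopkg start CrashPlan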

 


CrashPlan’s official comment (Nov 2014) on the caveats of altering these settings:

We were very proud when LifeHacker readers voted us the most popular online backup service.

We love our technically advanced customers, and we want them to be able to make the best choices for their backups! Toward that end, we want LifeHacker readers to be informed of the technical implications they can expect if they implement this modification.

Essentially, it will increase, perhaps greatly, the amount of data CrashPlan needs to send to its destinations, especially for file types that change frequently (e.g., PST files). While it doesn’t disable de-duplication (de-dupe is a core function of how CrashPlan backs up data), this modification does change the way that CrashPlan uses de-dupe, by forcing it to use a specific block size.

Setting this value to a single byte means de-duplication stops after one altered byte. This means that even if the rest of the file has already been backed up, we back it up again. Since CrashPlan is engineered to back up your newest data first, this modification could result in a constant upload of the same “new” data, leaving older (and possibly more important) data completely unprotected.

Also, while this change can increase CrashPlan’s bandwidth utilization if source disk I/O is a bottleneck, it eliminates the use of algorithms that enable massive boosts to efficiency during backup. This presents a false economy, as the amount of data that needs to be sent to complete a backup is increased dramatically while bandwidth utilization increases, making it look like CrashPlan is “working faster.”

In a very rare best-case scenario, it will add an estimated 25–30% overhead to backup data transport; in worst-case scenarios, it can double, triple or even quadruple the amount of data that needs to be sent to destination archives.

This article is great, and it raises good questions about our de-duplication settings. We will address this topic in detail on our documentation site soon.

 

 


Do you have any other performance suggestions for CrashPlan? Share them with the community!

 

 
