A few days after the last DSM upgrade, which had been working great, my Synology decided to play dead on me (the flashing-blue-LED, no-boot issue) and I ended up having to install and configure everything from scratch. As for CrashPlan itself, I noticed there are some important steps to take in order to have it behave nicely with your Synology DS.
Bear in mind that at this point I was adopting a 2.0 TB backup dataset, and I was actually very impressed with how [relatively] fast and uneventful the whole process was: it took well under half a day.
Nonetheless, some optimisation was necessary, especially after I found my Synology unresponsive because of CrashPlan's excessive CPU usage. After some patient attempts to connect through SSH, issuing a `top` command identified Java/CrashPlanService as the culprit; I was then able to kill the service, and after that I could access DSM again.
The thing is, after restarting CrashPlan the CPU was still heavily occupied by it even though it was almost idle (it was stuck analysing the file dataset for new files, which seemed off). Here's a capture of `top`; no wonder the NAS was unresponsive:
```
  PID USER PR  NI    VIRT    RES  %CPU %MEM   TIME+ S COMMAND
17464 root 39  19 5257.1m 716.7m 190.4 35.9 7:01.18 S /var/packages/Java8/target/j2sdk-image/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=Cras+
```
Also interesting here: CrashPlan was "only" using 717 MB of RAM for a 2.0 TB backup dataset (two different backup sets: 150k + 150k = 300k files).
So here are some reminder tips for optimising a fresh CrashPlan install:
- Change your Java heap size right away (if you have a large dataset, either in number of files or in total size), using this guide. The current default is 1024 MB, but 1500 MB may be a better value for datasets larger than 1 TB.
- If your CrashPlan client is reporting abnormal information right after an upgrade, try issuing `reconnect` in its CLI.
- If your processor is not very powerful, and especially if the server is not dedicated to CrashPlan and file sharing, set a lower maximum processor usage in the backup settings (although I've read somewhere that this has no effect on headless configurations; maybe someone can comment on this).
- Set the auto de-duplication settings to minimal using this guide (it will reduce CPU usage but likely increase bandwidth, so it may be overkill depending on what you're looking for; in my case I definitely prefer to have more CPU available for the other services on my NAS).
- Check and set the maximum bandwidth values in CrashPlan's configuration to something that doesn't create a bottleneck in your network(s), unless you're sure some device is providing QoS with CrashPlan taken into account.
- Block CrashPlan's auto-upgrade feature to prevent it from stopping itself, using this guide, but remember to upgrade properly in the near future!
- For large datasets, use different (but not too many!) backup sets, separating data that changes regularly from archival data (for the latter you can schedule full analysis scans to run at longer intervals).
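For the de-duplication tip, the change in the linked guide boils down to editing `my.service.xml` while the CrashPlan service is stopped. The element names below are as I recall them and should be treated as an assumption; verify them against your own `my.service.xml` before editing:

```xml
<!-- Inside my.service.xml (edit only with the service stopped!).
     Capping the auto de-dup file size at 1 byte effectively disables
     de-duplication for real files, trading bandwidth for CPU.
     Element names are an assumption; check your own file. -->
<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>
<dataDeDupAutoMaxFileSizeForLan>1</dataDeDupAutoMaxFileSizeForLan>
```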
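On the heap-size tip: on a plain Linux install the limit lives in CrashPlan's `run.conf` as an `-Xmx` flag, and the same idea applies to the Synology package (the path below is an assumption; check where your package keeps it). A minimal sketch, demonstrated against a scratch copy of the file so you can inspect the edit before touching the real one:

```shell
#!/bin/sh
# Assumed location on a stock Linux install; the Synology package may differ.
RUN_CONF=/usr/local/crashplan/bin/run.conf

# Work on a scratch copy; if run.conf is absent (as on this demo box),
# create one with the stock 1024 MB default for illustration.
cp "$RUN_CONF" /tmp/run.conf 2>/dev/null || \
  echo 'SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -Xms20m -Xmx1024m"' > /tmp/run.conf

# Raise the maximum heap from the 1024 MB default to 1536 MB.
sed -i 's/-Xmx[0-9]*m/-Xmx1536m/' /tmp/run.conf
grep -o '\-Xmx[0-9]*m' /tmp/run.conf   # prints: -Xmx1536m
```

Copy the edited file back over the real `run.conf` and restart the CrashPlan service for the new heap size to take effect.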
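As for blocking auto-upgrades, a common trick on headless installs is to strip all permissions from CrashPlan's upgrade directory so the downloaded upgrade payload can never be unpacked or run. The package path is an assumption; the sketch demonstrates the permission change on a scratch directory first:

```shell
#!/bin/sh
# Assumed path for the Synology CrashPlan package -- verify on your system.
UPGRADE_DIR=/var/packages/CrashPlan/target/upgrade

# Demonstrated on a scratch directory; point this at $UPGRADE_DIR on the NAS.
DEMO_DIR=/tmp/crashplan-upgrade-demo
mkdir -p "$DEMO_DIR"

# No read/write/execute for anyone: the service cannot stage upgrades here.
chmod 000 "$DEMO_DIR"
stat -c '%a %n' "$DEMO_DIR"   # prints: 0 /tmp/crashplan-upgrade-demo
```

Remember to restore the permissions (`chmod 755`) when you do want to let an upgrade through, as per the reminder above.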
Any additional suggestions?