PART 1 – CHECKING IF IT’S THE JAVA HEAP SIZE CAUSING THE ISSUE
First things first: if the CrashPlan service is actually restarting (i.e. you see that behavior in the DSM Package Manager log), then the most likely cause is that the Java maximum heap size is set too low. It may have been reset when the package was reinstalled, so make sure it is still correct.
If you have access through the GUI, even if only for a short period of time, double-click the “CrashPlan” logo at the top right and, in the pop-up window, type the command below; it will display the current value:
– Note that this IS NOT PERSISTENT: it will reset with each restart. Use the next suggestion for a persistent solution.
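If you have SSH access, you can also confirm what maximum heap the running service actually got by reading the -Xmx flag off the Java process's command line. A minimal sketch; the sample command line below is an illustration (on the NAS you would feed in the live process list instead), and the exact process name varies by package:

```shell
# Verify the configured max heap by extracting the -Xmx flag.
# On the NAS you would use the live process list, e.g.:
#   ps w | grep [C]rashPlanService
# A sample command line keeps this snippet self-contained.
cmdline='java -Dapp=CrashPlanService -Xms20m -Xmx1536m com.backup42.service.CPService'

# Pull out the value after -Xmx (e.g. "1536m"):
heap=$(echo "$cmdline" | grep -oE 'Xmx[0-9]+[mMgG]' | sed 's/Xmx//')
echo "Configured max heap: $heap"
```

If the value shown is still the default after you changed it, the setting did not survive the last restart and you need the persistent fix below.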
To change the Java RAM Heap Size:
- Via GUI (if you have access):
Just double-click the “CrashPlan” logo and type the Java heap size you want in MB (1536 MB in this example) with this command:
java mx 1536, restart
- Via SSH – open an SSH connection to your NAS and edit this file:
In that file, uncomment (delete the “#” character) the USR_MAX_HEAP variable line and change it to an adequate value. If you have 2 GB of RAM and over 1 TB of data, I suggest “1500M”:
#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)
#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files
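The uncomment-and-edit step can also be done in one line with sed. A sketch, assuming the file contains a commented line of the form `#USR_MAX_HEAP=1024M`; the example works on a throwaway copy, so substitute the path of your actual variables file before running it for real:

```shell
# Build a throwaway copy so the sketch is self-contained;
# point sed at your real variables file instead.
cat > syno_package.vars.example <<'EOF'
#uncomment to expand Java max heap size beyond prescribed value (will survive upgrades)
#you probably only want more than the recommended 1024M if you're backing up extremely large volumes of files
#USR_MAX_HEAP=1024M
EOF

# Uncomment the USR_MAX_HEAP line and raise it to 1500M:
sed -i 's/^#\(USR_MAX_HEAP=\).*/\11500M/' syno_package.vars.example

# Confirm the edit took:
grep USR_MAX_HEAP syno_package.vars.example
```

Restart the CrashPlan package afterwards so the new heap size is picked up.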
PART 2 – FIXES FOR UPLOAD STALLED OR RANDOMLY RESTARTING
Assuming the Java heap value is adequate, if you’re seeing messages referring to a connection “timeout”, or CrashPlan reindexes and analyzes but stalls at the upload stage, then you might try the following:
- Compact your data set:
To Compact your data set, go to Destinations > Cloud > CrashPlan Central.
You’ll see a “Compact” button on the right side of the “Space Used” line, as shown in this screenshot.
- Erase your local cache using the instructions provided by Code42 at the page linked below, then use the Compact button after restarting CrashPlan, while it’s rebuilding the index/cache:
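The cache-clearing step boils down to stopping the service, emptying the cache directory, and starting the service again so it rebuilds. A hedged sketch as a small shell function; the default path in the comment is an assumption for the Synology package, so verify it on your box before deleting anything:

```shell
# clear_cache: wipe the contents of a CrashPlan cache directory so the
# index/cache is rebuilt on the next start. Stop the CrashPlan package
# before running this and start it again afterwards.
clear_cache() {
    dir="${1:?usage: clear_cache /path/to/cache}"
    if [ -d "$dir" ]; then
        rm -rf "$dir"/*          # remove contents, keep the directory itself
        echo "Cache cleared: $dir"
    else
        echo "No cache directory at $dir" >&2
        return 1
    fi
}

# On the NAS (path is an assumption -- check where your package keeps its cache):
#   clear_cache /var/packages/CrashPlan/target/cache
```

Once the service is back up and rebuilding its cache, hit the Compact button as described above.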
Repeat the process if necessary. In my case, the problem was solved after CrashPlan ran through a series of process stages (“indexing”, “synchronizing file information”, “synchronizing block information”), all of which took about 20 hours on a 1.5 TB data set.