How to back up a hard disk to another machine

jeffrey
Posts: 168
Joined: Mon 16 Jan 2006, 04:20
Location: Christchurch, New Zealand

How to back up a hard disk to another machine

#1 Post by jeffrey »

I am considering how to make backups of some old PCs that run Linux.
My current favoured option is to use a Live CD (why not Puppy Linux?) to boot from, then use NFS to mount the backup file system on the remote machine, and then simply back up the local disk to the remote machine.
I've tested this out with a Fedora Core 5 installation's /boot partition, which is about 100MB, of which only 20MB is used. I copied it to the remote machine, which was fine. Then I tried again with compression, which reduced the file size from 100MB down to 63MB. Realising that the unused 80MB in the file system is probably full of garbage, I wrote zeros to it and then copied it again with compression, this time ending up with a mere 13MB file. This is pretty simple, and although it's a manual process, the disk-zeroing step probably makes it more space-efficient than most automated backup alternatives. (It wouldn't be hard to automate, but I haven't time for that at present.)
The steps that I followed to do this (with a standard Puppy Linux 2.16) are listed below, with a consolidated sketch of the commands after the list:
1. On the remote machine, say monster, allow a directory, say /backups, to be writeable by NFS by adding "/backups (rw)" to /etc/exports and restarting the nfs service.
2. On the local machine mount the NFS disk with "mount -o rw,nolock monster:/backups /mnt/data". The nolock avoids the 30s timeout caused by mount looking for portmap (according to delboy711, post 12523).
3. Wipe the unused 80MB of /dev/hda1 with zeros by mounting it as /mnt/hda1 with the standard Pmount tool, then issue "dd if=/dev/zero of=/mnt/hda1/filler-to-write-zeros-to-disk bs=1024k count=80" and "rm /mnt/hda1/filler-to-write-zeros-to-disk" to release the disk space now that it has been wiped.
4. Copy /dev/hda1 from the local machine, say scary, to the backups directory on the remote machine with "gzip < /dev/hda1 | dd of=/mnt/data/scary-dev-hda1.gz"
And that's all.
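
Gathering those commands into one place, here is a rough sketch of the whole procedure. The host name monster, the partition /dev/hda1, the mount points and the 80MB figure are just the values from my example, so substitute your own; I've also written the exports line with an explicit * wildcard, which is the more usual form of what step 1 describes.

# --- on the remote machine (monster), as root ---
echo "/backups *(rw)" >> /etc/exports       # export /backups read-write
exportfs -ra                                # or restart the nfs service, as in step 1

# --- on the local machine (scary), booted from the Puppy live CD ---
mkdir -p /mnt/data
mount -o rw,nolock monster:/backups /mnt/data

# zero the unused space so it compresses away (count=80 matches the ~80MB free in my example)
mkdir -p /mnt/hda1
mount /dev/hda1 /mnt/hda1                   # or mount it with Pmount
dd if=/dev/zero of=/mnt/hda1/filler bs=1024k count=80
rm /mnt/hda1/filler
umount /mnt/hda1

# compress the raw partition straight onto the NFS share
gzip < /dev/hda1 | dd of=/mnt/data/scary-dev-hda1.gz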
To restore it use "gzip -d" in the opposite direction, but I haven't tested this and even if I had I wouldn't guarantee that it will work for you, so don't complain to me if it doesn't work! Experiment on an expendable disk first.
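
Untested, but the restore direction would presumably look something like the sketch below, again with my example names; note that it overwrites whatever is on the target partition.

# boot the target machine from the Puppy CD, mount the NFS share as before, then:
gzip -dc /mnt/data/scary-dev-hda1.gz | dd of=/dev/hda1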
Of course, I'm not actually interested in /dev/hda1, but in /dev/hda in its entirety. What this experiment tells me is that empty disk space compresses to almost nothing once I've written zeros to it. I would have to do the same to hda2 before backing it up, because it's the / file system while hda1 is merely the small /boot file system, but the same approach should work there too. I'm not sure about writing zeros to the swap partition (hda3 in my case, I think) because Puppy may be smart enough to be using it!
I hope that this will be of interest to someone.

Bruce B

#2 Post by Bruce B »

Interesting.

You can turn your swap partition off, then write zeros.

swapoff /dev/???

Of course you'll get optimal compression if you copy free space which contains zeros.
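
Something along these lines, using jeffrey's hda3 as the example device (your swap partition may be named differently):

swapoff /dev/hda3                        # stop using the swap partition
dd if=/dev/zero of=/dev/hda3 bs=1024k    # fill it with zeros; it ends with "no space left", which is expected
mkswap /dev/hda3                         # recreate the swap signature that dd just wiped out
swapon /dev/hda3                         # optional: turn swap back on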

I'm going to try #4 and see how it works, I might use it to replace my present backup routines.

Bruce B

#3 Post by Bruce B »

I started the backup with a slight variation. While watching the screen and thinking, it occurred to me that I might have problems restoring it.

gzip < /dev/hda1 | dd bs=8192 of=/mnt/hda8/hda1.gz

128273+1 records in
128273+1 records out

That should be 9 MB of compressed data.

Seems to have worked fine.

--------------

Restoring is something I'll have to think about and do some R&D on.

A proposal for an easy way to wipe free space for better compression:

# umount -a                          (unmount everything that can be unmounted; / itself stays mounted)
# dd if=/dev/zero of=/zerofile.tmp   (fill the free space on / with zeros; stops with "no space left on device")
# sync
# rm /zerofile.tmp                   (release the space again)
# sync


----------

Flash
Official Dog Handler
Posts: 13071
Joined: Wed 04 May 2005, 16:04
Location: Arizona USA

#4 Post by Flash »

Jeffrey, as a How-To, this leaves too much as an exercise for the reader. :? See this post. Did you install nfs in Puppy? If so, how?

valpy
Posts: 67
Joined: Wed 18 Apr 2007, 20:38
Location: Looking at the tapestry

#5 Post by valpy »

The Pudd utility in Puppy is useful for backups (Start->Utility->Pudd in 2.15, but it's available at least from 2.13 onwards). I have used it, though not over NFS, but if you're creating a backup file for a disk or set of partitions I think it should work over NFS.

Follow the prompts - it will use dd, writing zeros to the unused parts of the disks/partitions, and will then compress the image as a .gz file (all from the gui).

Pudd can restore images too - a very nice simple backup and restore method.

Bruce B

#6 Post by Bruce B »

I found the following on the Internet

Creating a hard drive backup image

# dd if=/dev/hda | gzip >/mnt/hdb1/system_drive_backup.img.gz

Here dd is making an image of the first hard drive, and piping it through the gzip compression program. The compressed image is then placed in a file on a separate drive. To reverse the process:

[Restore hard drive from image]

# gzip -dc /mnt/hdb1/system_drive_backup.img.gz | dd of=/dev/hda

Here, gzip is decompressing (the -d switch) the file, sending the results to stdout (the -c switch), which are piped to dd, and then written to /dev/hda.

source:

http://wiki.linuxquestions.org/wiki/Dd# ... ckup_image

jeffrey
Posts: 168
Joined: Mon 16 Jan 2006, 04:20
Location: Christchurch, New Zealand

#7 Post by jeffrey »

Apologies for my poor post. It isn't bullet-proof, I'm sure. I made two particular mistakes:
1. Flash, I didn't realise there were How-To standards that I've fallen short of. Sorry about that. Are these published? Looking at some other How-Tos I'm not sure where I'm deficient (apart from not including step-by-step restore instructions), but re-reading my instructions I can see that they're not as clear as they could be. The commands need to be run from a console window (i.e. from rxvt). NFS must be installed, configured (with a writeable directory), and running on the remote machine (in my case a Fedora Core 5 Linux machine). Puppy, running on the local machine, simply uses its existing 'mount' command to mount that remote NFS file system, so Puppy does not need NFS installed for this backup strategy to work. I should also have said that Puppy should be run entirely in RAM so that the hard drive is not mounted during the backup.
2. valpy, I hadn't noticed Pudd (which is quite embarrassing). That’s an excellent piece of work and nearly does all that I need as it stands, but not quite. It has a backup-to-remote-machine option which uses netcat/nc (already present on my remote Fedora machine and an amazingly simple tool that I was ignorant of), but since my remote machine already has NFS I’ll just mount the NFS remote file-system on my local machine as above and get Pudd to write to that mount point as if it is a local directory. At least with Puppy 2.14 (which is what I’m using today), Pudd does not write zeros to file systems before backing them up, so my instructions for that are still valid and can result in a major saving of space and transfer time.
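
For the curious, a hand-rolled netcat transfer would look roughly like the sketch below. I haven't checked exactly what Pudd does; port 9000 is an arbitrary example and the -l/-p syntax varies between netcat builds, so treat this as untested.

# on the remote machine (monster): listen on a port and save whatever arrives
nc -l -p 9000 > /backups/scary-dev-hda1.gz

# on the Puppy machine: compress the partition and send it across the network
gzip < /dev/hda1 | nc monster 9000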

Bruce B, thanks for the "swapoff /dev/hda3" suggestion.

valpy
Posts: 67
Joined: Wed 18 Apr 2007, 20:38
Location: Looking at the tapestry

#8 Post by valpy »

If you use Pudd, there is a dialogue at the point of creating the image which does give you the option of writing zeros to unused parts of the image - or you can do it yourself.

valpy
Posts: 67
Joined: Wed 18 Apr 2007, 20:38
Location: Looking at the tapestry

#9 Post by valpy »

Should have said, this is in 2.13 - haven't tried 2.14

valpy
Posts: 67
Joined: Wed 18 Apr 2007, 20:38
Location: Looking at the tapestry

#10 Post by valpy »

Just tried Pudd in 2.14; it does offer the option to write zeroes.

There is a final dialogue before the image is created:

"Puppy Universal DD: optimise compression

/dev/sda6 will be copied to /root/myfile.img.gz, compressed with gzip.
Compression may be greatly improved if the unused part of /dev/sda6 is zeroised. This involves temporarily mounting it on /mnt/tmp, writing zeroes to the unused areas, then unmounting it.
Would you like to do this size optimisation?"

Flash
Official Dog Handler
Posts: 13071
Joined: Wed 04 May 2005, 16:04
Location: Arizona USA

#11 Post by Flash »

Jeffrey, I only delete spam, obvious duplicate posts, posts with objectionable language that otherwise have no redeeming value, etc. You're safe. :)

It can be pretty hard to see which forum a post like yours belongs in. I thought it had the makings of a useful How-To, with just a little more work.

I realize that there are no written standards for a How-To. Like everything else in Puppy, this forum is a work in progress, done by volunteer enthusiasts. Still, it seems obvious to me that a How-To should be self-sufficient, and written so that the average puppian can give it a try. As the forum's index page suggests, How-Tos are meant to be like recipes. Experimental recipes. They might not turn out to work for everyone, but they should be tested at least once by the author and written clearly enough and contain enough information to enable the target audience to assemble the ingredients and try it for themselves. :)

Bruce B

#12 Post by Bruce B »

Flash, I posted right below you and now it's gone. Did you delete it?

jeffrey
Posts: 168
Joined: Mon 16 Jan 2006, 04:20
Location: Christchurch, New Zealand

#13 Post by jeffrey »

valpy, my apologies about Pudd and wiping unused areas of file systems with zeros. You are quite correct that for a partition (ie file system) it does offer such an option. I missed that. Note that it doesn't do this when backing up an entire drive (which is what I want). I may spend a little time writing an enhancement to Pudd to do that. Pudd is quite an impressive piece of work as it is.
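
In the meantime the zeroing can be done by hand on each mountable partition before imaging the whole drive with plain dd; a rough, untested sketch using my hda1/hda2 layout as the example:

for p in hda1 hda2; do                              # every mountable partition on the drive
    mkdir -p /mnt/$p
    mount /dev/$p /mnt/$p
    dd if=/dev/zero of=/mnt/$p/filler bs=1024k      # runs until "no space left on device"
    rm /mnt/$p/filler
    umount /mnt/$p
done
# (deal with the swap partition as per Bruce B's swapoff suggestion above)
gzip < /dev/hda | dd of=/mnt/data/scary-dev-hda.gz  # then image the whole drive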

Thank you all for your helpful and patient posts.

I use Puppy Linux at home. It is regarded as a personal toy at work, but with its excellent hardware detection and Pudd that attitude may change...

Flash
Official Dog Handler
Posts: 13071
Joined: Wed 04 May 2005, 16:04
Location: Arizona USA

#14 Post by Flash »

Bruce B wrote:Flash, I posted right below you and now it's gone. Did you delete it?
No, and I didn't see any post from you other than the one quoted. Anyway, it's been days since I deleted a post.

Bruce B

#15 Post by Bruce B »

Flash wrote:
Bruce B wrote:Flash, I posted right below you and now it's gone. Did you delete it?
No, and I didn't see any post from you other than the one quoted. Anyway, it's been days since I deleted a post.
Thank you for your reply. I didn't think it was you, but now I know. Once in a while a post doesn't get posted. I might do well to save the text before submitting.

rrolsbe
Posts: 185
Joined: Wed 15 Nov 2006, 21:53

Might want to add a split option

#16 Post by rrolsbe »

jeffrey wrote:valpy, my apologies about Pudd and wiping unused areas of file systems with zeros. You are quite correct that for a partition (ie file system) it does offer such an option. I missed that. Note that it doesn't do this when backing up an entire drive (which is what I want). I may spend a little time writing an enhancement to Pudd to do that. Pudd is quite an impressive piece of work as it is.

Thank you all for your helpful and patient posts.

I use Puppy Linux at home. It is regarded as a personal toy at work, but with its excellent hardware detection and Pudd that attitude may change...
Jeffrey

Don't know if a Puppy package is available that has the Unix split command, but adding the split command to the Pudd disk backup utility would be great. UPDATE: It looks like split is in Puppy 2.16; it was not in my version of Puppy 2.13.
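
To illustrate what I mean (not something Pudd does at the moment), splitting by hand would look roughly like this. The 650M piece size, the file names and the destination directory are only examples, and the size suffix accepted by split depends on which version is installed.

# back up: compress the drive and cut the stream into 650M pieces (....aa, ....ab, ...)
dd if=/dev/hda | gzip | split -b 650M - /mnt/data/drive.img.gz.

# restore: reassemble the pieces in order, decompress, and write back to the drive
cat /mnt/data/drive.img.gz.* | gzip -dc | dd of=/dev/hda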

See more info at the following link:
http://wiki.linuxquestions.org/wiki/Dd

The Pudd backup utility is great now but it could be fantastic.

Regards
Ron
Last edited by rrolsbe on Wed 13 Aug 2008, 20:25, edited 1 time in total.

rrolsbe
Posts: 185
Joined: Wed 15 Nov 2006, 21:53

Pudd disk backup MUCH larger than partition backup

#17 Post by rrolsbe »

First I performed a partition backup and selected the zero-blank-space option; the partition backup file for sda1 was 2.7G.

Then I backed up the entire drive and the file size was HUGE. I had to stop the backup at 25G before it filled the destination drive. Since the other two partitions are very small in comparison to sda1 (see the fdisk output below), can anyone explain this? Has anyone had a similar experience?

sh-3.00# ls -alh ubu*
-rwxrwxrwx 1 root root 25G 2007-05-06 12:08 ubuntu6.10_drive.img.gz
-rwxrwxrwx 1 root root 2.7G 2007-05-04 10:48 ubuntu_sda1_partition.img.gz


sh-3.00# fdisk -l

Disk /dev/sda: 40.9 GB, 40982151168 bytes
255 heads, 63 sectors/track, 4982 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start     End      Blocks   Id  System
/dev/sda1   *        1    1262    10136983+  83  Linux
/dev/sda2         1263    1467     1646662+   5  Extended
/dev/sda5         1263    1467     1646631   82  Linux swap / Solaris

sda2 and sda5 together are only about 20% of the size of sda1.

Update: It looks like I may only be using slightly over 10G of the 40G; not sure why, since I took the suggested Ubuntu install defaults. The other 30G is outside any partition, so it never got zeroed and probably holds truly random data that won't compress well.

Regards
Ron
