RAID arrays in (Puppy) Linux

How to do things, solutions, recipes, tutorials
tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#21 Post by tempestuous »

And here's an alternative approach - since you don't want the host-RAID functionality of the Promise controller, you can avoid using IDE3/4 altogether and just use IDE2 instead -
connect your two storage drives to IDE2, using a single IDE cable, with one drive set as MASTER, the other as SLAVE.
Software RAID will work just as well this way.
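Once jumpered and cabled this way, the two drives should simply turn up as ordinary block devices; a quick way to confirm both are detected (standard commands, nothing RAID-specific) -

Code: Select all

# list every drive and partition the kernel has detected
cat /proc/partitions
fdisk -l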

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#22 Post by toronado »

Thanks tempestuous.

I formatted the drives as regular drives in GParted and I'm just going to use them as-is without any RAIDing for the time being. (configuring the software raid at the command line is a bit beyond my current comfort level anyway - maybe I'll give it a try in the future)

Can't use IDE 1 & 2 because they are already in use! :) Got this box filled with old IDE drives. Hey, gotta do something with them.

BTW, I'm using ext4. Why do you prefer ext3?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#23 Post by tempestuous »

I was wary of ext4 when it was first introduced, because there seemed to be quite a few bugs with it, but it should be fine these days.
toronado wrote:configuring the software raid at the command line is a bit beyond my current comfort level
Well I just updated my instructions in the third post with more detail, but here's exactly what I think you need -
first install the mdadm-3.1.4 dotpet.
Assuming you want RAID0 (the two drives striped for maximum capacity and speed), run these commands -

Code: Select all

mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
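To see when the array is ready you can look at /proc/mdstat (a standard check, not something specific to the dotpet) - a RAID0 array comes up straight away, while redundant levels like RAID1/5 show a resync progress bar -

Code: Select all

# shows array state and, for redundant RAID levels, resync progress
cat /proc/mdstat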
Wait until this initialization is completed, then save the RAID configuration with -

Code: Select all

mdadm --detail --scan >> /etc/mdadm.conf
Format the new RAID array (md0) with ext4, using some special formatting options suited to RAID operation -

Code: Select all

mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
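For reference, those -E values come from the usual mke2fs rule of thumb: stride = chunk size / block size, and stripe-width = stride x number of data disks, so they depend on the chunk size mdadm actually used. A worked example, assuming mdadm's default 512K chunk and the 4096-byte blocks above -

Code: Select all

# stride       = 512K chunk / 4K block      = 128
# stripe-width = 128 stride x 2 data disks  = 256
# (a 128K chunk would give 32 and 64, i.e. the figures in the command above)
mkfs.ext4 -v -m .1 -b 4096 -E stride=128,stripe-width=256 /dev/md0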
Finally, mount the RAID array -

Code: Select all

mkdir /mnt/md0
mount /dev/md0 /mnt/md0
See if you can transfer some files to/from /mnt/md0
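A quick read/write test would be something like this (the filename is just an example) -

Code: Select all

echo "raid test" > /mnt/md0/testfile.txt
cat /mnt/md0/testfile.txt
df -h /mnt/md0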
That's it.

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#24 Post by toronado »

OK, I'll give it a try. But one question about this first line...

Code: Select all

mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
Before running this command, should the drives have only unallocated space, or should I have already created the partitions and formatted them?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#25 Post by tempestuous »

i) yes, the drives need to be partitioned first - each as a primary partition. I understand that you have done this already.
ii) it doesn't matter whether you format them or not prior to array creation. As the array is created, any existing formatting is destroyed anyway.
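If anyone needs to do the partitioning from the command line instead of GParted, a rough sketch with fdisk (device names assumed to be sda and sdb, as later in this thread) -

Code: Select all

# in fdisk, press n (new), p (primary), 1, accept the default start/end, then w (write)
fdisk /dev/sda
fdisk /dev/sdb
# the partition type can be left at the default 83 (Linux)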

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#26 Post by toronado »

tempestuous wrote:i) yes, the drives need to be partitioned first - each as a primary partition. I understand that you have done this already.
ii) it doesn't matter whether you format them or not prior to array creation. As the array is created, any existing formatting is destroyed anyway.
OK, thanks for explaining. For the purpose of this exercise, I used GParted to create primary partitions (unformatted) on both drives, then proceeded with the commands you gave me.

Here is a copy/paste of the terminal session:

Code: Select all

sh-4.1# mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
sh-4.1# mdadm --detail --scan >> /etc/mdadm.conf
sh-4.1# mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
mke2fs 1.41.14 (22-Dec-2010)
fs_types for mke2fs.conf resolution: 'ext4'
Calling BLKDISCARD from 0 to 160048349184 failed.
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=32 blocks, Stripe width=64 blocks
9773056 inodes, 39074304 blocks
39074 blocks (0.10%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
1193 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
sh-4.1# mkdir /mnt/md0
sh-4.1# mount /dev/md0 /mnt/md0
sh-4.1# 
At this point /mnt/md0 does not show up on the desktop, and Pmount doesn't see it at all. It does not appear to be mounted.

EDIT: I tried:

Code: Select all

sh-4.1# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
sh-4.1# mount /dev/md0 /mnt/md0
And md0 appeared on the desktop with the green dot. However, when I clicked on it, it didn't open; instead Pmount launched and didn't see it.

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#27 Post by tempestuous »

Don't be concerned that /mnt/md0 doesn't show up on the desktop, nor that Pmount doesn't see the array.
Both of these probably require extra code written into them to understand software arrays.
toronado wrote:

Code: Select all

sh-4.1# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
This should only be necessary if the udev rule that I included in the mdadm dotpet fails to automatically restore the array at each bootup. This needs investigation.
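One way to check whether the udev rule did its job after a reboot is to look at the array state (just standard mdadm checks, nothing from the dotpet itself) -

Code: Select all

# if auto-assembly worked, md0 should already be listed as active
cat /proc/mdstat
mdadm --detail /dev/md0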
toronado wrote:

Code: Select all

sh-4.1# mount /dev/md0 /mnt/md0
This command appears to run without error - which means that Puppy can see the array, and mount it.
You should browse to /mnt/md0 with ROX, and see if you can transfer files to/from. If this fails, I suspect that the ext4 filesystem was not properly created within the array. In this case I suggest you reformat with ext3. To format, the array must first be unmounted -

Code: Select all

umount /mnt/md0
now go ahead with the reformatting -

Code: Select all

mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#28 Post by tempestuous »

Also, what version of Puppy are you running?

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#29 Post by toronado »

tempestuous wrote:Don't be concerned that /mnt/md0 doesn't show up on the desktop, nor that Pmount doesn't see the array.
Both of these probably require extra code written into them to understand software arrays.
OK.
tempestuous wrote:
toronado wrote:

Code: Select all

sh-4.1# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
This should only be necessary if the udev rule that I included in the mdadm dotpet fails to automatically restore the array at each bootup. This needs investigation.
It is necessary on my system. Not sure why.
tempestuous wrote:
toronado wrote:

Code: Select all

sh-4.1# mount /dev/md0 /mnt/md0
This command appears to run without error - which means that Puppy can see the array, and mount it.
You should browse to /mnt/md0 with ROX, and see if you can transfer files to/from.
It works. I just added the commands to assemble and mount the RAID array to my Samba Auto-Start script and it works great. I also added a Samba share on md0, and that works too.

So basically, everything seems to be working. md0 isn't showing up on the desktop with this method, but as you said, that isn't really needed.

One thing is puzzling to me though... ROX seems to show empty directories in /mnt/ for partitions that don't exist. So, if I don't mount md0, there is an (empty) directory there anyway. (Once I mount md0 the directory shows the actual contents of the partition.) I found many other "dummy" directories there for previously created partitions and whatnot that no longer exist. I deleted these and they haven't reappeared, so I think it's OK.

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#30 Post by toronado »

tempestuous wrote:Also, what version of Puppy are you running?
lupu528

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#31 Post by tempestuous »

toronado wrote:ROX seems to show empty directories in /mnt/ for partitions that don't exist. So, if I don't mount md0, there is an (empty) directory there anyway. I found many other "dummy" directories there for previously created partitions and whatnot that no longer exist.
Yes, it seems to be a quirk of the mdadm application that it creates additional "dummy" or "ghost" device nodes, typically of the format "/dev/md_d0" or similar. You can manually stop these dummy arrays like this -

Code: Select all

mdadm --stop /dev/whatever
but as long as these devices/arrays don't interfere with the correct device/array, there's no problem, and you can just ignore them.
toronado wrote:I just added the commands to assemble and mount the RAID array to my Samba Auto-Start script and it works great.
Sure, that startup script is fine, but generally the correct place to put such additional commands is /etc/rc.d/rc.local
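For example, the relevant lines in /etc/rc.d/rc.local would simply be the same two commands used earlier in this thread -

Code: Select all

# assemble and mount the software RAID array at bootup
mdadm --assemble /dev/md0
mount /dev/md0 /mnt/md0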
I have just updated the instructions in the third post.

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#32 Post by tempestuous »

And in the final wash-up ... you have Puppy running with a functioning Linux software RAID0 array.
Bravo.

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#33 Post by toronado »

tempestuous wrote:
toronado wrote:ROX seems to show empty directories in /mnt/ for partitions that don't exist. So, if I don't mount md0, there is an (empty) directory there anyway. I found many other "dummy" directories there for previously created partitions and whatnot that no longer exist.
Yes, it seems to be a quirk of the mdadm application that it creates additional "dummy" or "ghost" device nodes, typically of the format "/dev/md_d0" or similar. You can manually stop these dummy arrays like this -

Code: Select all

mdadm --stop /dev/whatever
but as long as these devices/arrays don't interfere with the correct device/array, there's no problem, and you can just ignore them.
This is probably getting off-topic for this thread, but I don't think this issue pertains to mdadm. (It might not have anything to do with ROX either.) For example, well before installing the mdadm pet I noticed empty directories in /mnt/ for partitions such as sda2 sdb2 sdc4 sdd2 etc. and none of these partitions actually existed (or at least they didn't exist at the time I was browsing /mnt/). And while some of these partitions I actually created in the past and subsequently deleted (such as sdc4), others (like sda2, sdb2, sdd2) I don't recall ever creating in the first place. It's an old computer, so maybe it's haunted. :-)
tempestuous wrote:
toronado wrote:I just added the commands to assemble and mount the RAID array to my Samba Auto-Start script and it works great.
Sure, that startup script is fine, but generally the correct place to put such additional commands is /etc/rc.d/rc.local
I have just updated the instructions in the third post.
OK thanks.

carenrose
Posts: 36
Joined: Tue 11 Dec 2012, 03:22

#34 Post by carenrose »

Sorry, I know this thread is old, but I figured this is probably the best place to put this question.

I have succeeded in setting up my RAID (1, btw). I'm now trying - and failing - to install Grub.

From the menu-entry GUI program, when I get to the step that asks something like "which disk or whatever do you want to put it in on?" and suggests "/dev/sda(1?)", I tried /dev/md0, /dev/sda (and/or /dev/sda1), and /dev/sdb (and the same).
I am not currently on that computer so I don't remember if that step asked for the disk or the partition, but whichever it asked for I put, ok? :D
It said that /dev/md0 "is not a valid Linux" something or other, and it couldn't mount sda or sdb to do its stuff there - which I'd figured.

Anyways, I tried the command line. The below is an approximation of what it said: (Yes, I know I'm being overly descriptive.)

Code: Select all

# grub
Blah blah something about BIOS this will take a while. ...

grub > find /boot/grub/stage1

Error 15: File not found
Ok, so I copied the files from /usr/sbin/grub (or wherever they are) to the /boot/grub that I created on md0 ... so they should be present on *both physical drives*, right? And grub still won't find stage1 when I do the above again.

What am I missing? Am I just totally lost?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#35 Post by tempestuous »

Ah, you're trying to boot from a RAID array, and that's complicated. Personally, I would avoid this, and install Puppy on a separate (non-RAID) drive - even a small USB flash drive. Then just use the RAID array for your user-data.

But if you're determined to persist, you will need to do some research and experimentation, and then rebuild Puppy's initrd.
Let me explain it in principle: the software RAID array can generally only be understood by, and thus be accessible to, a running operating system. It's difficult (but not impossible) to access files on the array at bootup.

What you need to do is include all necessary drivers, utilities, and configuration logic into Puppy's initial ramdisk.
In your case, that means rebuilding the initrd image to include the mdadm application and the raid1 kernel module, and also modifying the initrd startup scripts to assemble and mount the RAID array right at the start of the boot sequence.
I have no experience in this, so cannot help with the fine details.

Once this is all achieved, yes, you can put the grub configuration files onto the array ...
but as I understand it, the Master Boot Record must still be installed onto a single physical drive. I don't believe it's possible to share the MBR on an array - that would effectively be sharing two boot sectors. I can't imagine that any motherboard BIOS would be able to recognise this.
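If you do persist, the MBR step itself from the legacy grub shell would look roughly like this - the (hd0,0) and (hd0) numbering is only an assumption, to be adjusted to whichever drive actually holds your /boot/grub files -

Code: Select all

# grub
grub> root (hd0,0)
grub> setup (hd0)
grub> quit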

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#36 Post by toronado »

I've installed a new version of Puppy (PhatSlacko 5.5.02 for its easy Samba config), installed mdadm-3.2.5 from the PPM, and rebooted, but there's no md0 in /mnt.

I tried:

Code: Select all

# mdadm --assemble /dev/md0
mdadm: /dev/md0 not identified in config file.
So before I go messing things up further, what should I try next?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#37 Post by tempestuous »

So as I understand it, you have installed a new version of Puppy, but you want to access the software RAID array that you previously created?
toronado wrote:

Code: Select all

mdadm: /dev/md0 not identified in config file.
Oops, it sounds like you didn't keep a copy of your configuration file (from your earlier installation) - /etc/mdadm.conf
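If the old file really is gone, the ARRAY line can usually be regenerated from the RAID superblocks on the drives themselves - standard mdadm usage, not anything specific to the PPM package -

Code: Select all

# rebuild the config file from the drives' superblocks, then assemble
mdadm --examine --scan >> /etc/mdadm.conf
mdadm --assemble /dev/md0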

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#38 Post by toronado »

tempestuous wrote:So as I understand it, you have installed a new version of Puppy, but you want to access the software RAID array that you previously created?
toronado wrote:

Code: Select all

mdadm: /dev/md0 not identified in config file.
Oops, it sounds like you didn't keep a copy of your configuration file (from your earlier installation) - /etc/mdadm.conf
I haven't erased the previous install (it's on a separate partition). Are you saying all I need to do is copy over the config file?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#39 Post by tempestuous »

I think so, yes.
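In practice that would be something like the following, assuming the old installation's partition is mounted somewhere (the /mnt/old_puppy path below is only a placeholder) -

Code: Select all

# copy the saved config across, then assemble and mount as before
cp /mnt/old_puppy/etc/mdadm.conf /etc/mdadm.conf
mdadm --assemble /dev/md0
mount /dev/md0 /mnt/md0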

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#40 Post by toronado »

Thanks, it worked.
