RAID arrays in (Puppy) Linux

How to do things, solutions, recipes, tutorials
Message
Author
disciple
Posts: 6984
Joined: Sun 21 May 2006, 01:46
Location: Auckland, New Zealand

#16 Post by disciple »

I pulled a couple of RAID cards and the hard drives out of an old Windows server from work and plugged them into my Puppy machine, and they work without me doing anything, so I'm assuming they're hardware-RAID adapters... as opposed to BIOS-RAID adapters that my Puppy has somehow handled automatically.
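One rough way to sanity-check this, assuming the lspci and dmraid tools are available in your Puppy:

Code: Select all

# see how the controller identifies itself on the PCI bus
lspci | grep -i raid
# a true hardware-RAID card presents the array to Linux as one plain block device;
# a BIOS/fake-RAID card carries metadata on the drives, which dmraid will report
dmraid -r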

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#17 Post by toronado »

tempestuous wrote:3. LINUX SOFTWARE RAID...
disable the RAID function in bios
On my MSI 6830E motherboard there are 4 IDE ports... 2 regular IDE ports, and 2 more that are intended for use with the BIOS RAID. I've always thought that the only way to have the BIOS RAID IDE ports functioning was to enable the BIOS RAID function in the BIOS (OnBoard ATA133 RAID). Otherwise the hard drives connected to those ports don't even show up in GParted.

Are you saying that by disabling the on-board BIOS RAID and by loading "modprobe raid1" those IDE ports will function and the devices connected to them will be visible to the OS?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#18 Post by tempestuous »

toronado wrote:Are you saying that by disabling the on-board BIOS RAID and by loading "modprobe raid1" those IDE ports will function ...
No. With the RAID function disabled in bios, those ports should act as a standard IDE port, no extra drivers required.

toronado wrote:I've always thought that the only way to have the BIOS RAID IDE ports functioning was to enable the BIOS RAID function in the BIOS (OnBoard ATA133 RAID). Otherwise the hard drives connected to those ports don't even show up in GParted.
Well I'm not familiar with the MS-6830E, or MSI KT3 Ultra, or whatever it's called, but unless there's something special about that board, this sounds very wrong.

My first thought is that maybe the bios settings are confusing - maybe there's a setting which enables the RAID ports, and a second setting which enables the RAID function associated with those ports? So maybe there's a distinction between "Enable" and "Enable RAID".

My second thought is that you should check your drives and cables -
if there's a single IDE drive connected to each RAID port, then these drives should have their rear jumpers set for "MASTER" or "CABLE SELECT". If CABLE SELECT, then it's important that you connect the black plug on the IDE cable to the drive, not the grey plug.
You should certainly not have the jumpers on your drives set for "SLAVE".

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#19 Post by toronado »

tempestuous, thanks for your reply.

The full name of this antique is "MSI KT3 Ultra-ARU", the "ARU" part designating that it has the two extra IDE ports (IDE 3 & IDE 4) for BIOS RAID (via a Promise PDC20276 chip), and that it supports USB 2.0 :-)

The hard drives connected to the BIOS RAID (IDE 3 & IDE 4) ports are jumpered for "master" and are connected to the "master" connectors on the drive cables.

AMIBIOS is the "main" motherboard BIOS.

There is an "other" BIOS which deals with the IDE RAID ports which is the MBFastTrack133 "Lite" BIOS by Promise Technology.

Within the AMIBIOS, under "Integrated Peripherals", there is a setting for "OnBoard ATA133 RAID" which has just two possible options: "Enable" or "Disable".

If set to "Enable" then the "MBFastTrack133 BIOS" will run immediately after POST and scan those RAID IDE ports to see if any devices are connected, and determine if a RAID array exists. At this time you also have the option to enter the "FastBuild" utility to set-up an array.

(If set to "Disable" then the "MBFastTrack133 BIOS" will NOT run and devices connected to the BIOS RAID (IDE 3 & IDE 4) ports do not show up in GParted.)

If you do not have any array defined, and you do not enter the utility to set one up, then the MBFastTrack133 auto-creates two separate single-drive "arrays":

Array 1: 1+0 Stripe
Array 2: 1+0 Stripe

(I have tried deleting the single drive "arrays" after they are created, but MBFastTrack133 just reboots the computer and auto-creates them again. So there appears to be no way around the single-drive "arrays".)

These single-drive "arrays" show up in GParted as:
/dev/sda
/dev/sdb

It appears that the single-drive "arrays" are for all intents and purposes functioning as though these were normal IDE ports with single drives connected to them.

(IIRC, I used it this way with OpenFiler a while back and was able to use the software RAID functions within OpenFiler to create a RAID array or even just a JBOD setup.)

Do you think I might be able to use it this way with the Linux Software RAID in Puppy?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#20 Post by tempestuous »

OK, I think I understand - there's a secondary (separate) bios dedicated to the Promise controller for IDE 3/4.
toronado wrote:If you do not have any array defined, and you do not enter the utility to set one up, then the MBFastTrack133 auto-creates two separate single-drive "arrays":

Array 1: 1+0 Stripe
Array 2: 1+0 Stripe
As you say, this autoconfiguration appears to be two "single-drive arrays" ... which seems a little silly to me, since the word "array" implies multiple drives! Personally I consider this configuration a "null array".

Yes, I suggest you proceed to treat these drives within Puppy Linux as though they are single, normal drives.
My only worry is that the Promise controller may still be acting as a host-RAID interface (even though there's no RAID striping/mirroring involved) and it might be necessary to translate this interface to the Linux OS via the dmraid application.
If so, there's no way to get around using host-RAID, so you might as well use it to configure your 2-drive array ...
but let's not assume this worst case.

With the Promise bios enabled in its default state, as you described, boot up to Puppy Linux.
Don't install dmraid - let's see if the drives act as normal IDE devices -
with GParted, define each drive as a single partition, and format each as ext3 (ext3 is my preferred filesystem).
Just to be safe, reboot.
Hopefully Puppy will now see the two new drives. See if you can copy some files to/from these drives.
If successful, this means that the Promise host-RAID function is inactive or benign, and you can proceed to use Linux software RAID as I described in the third post.
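If you want to confirm from the command line that the drives really are visible as plain block devices, a quick check (assuming they appear as sda and sdb):

Code: Select all

# list every partition the kernel currently knows about
cat /proc/partitions
# or print the partition table of each drive
fdisk -l /dev/sda /dev/sdb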

Be aware that once you have configured your RAID array (using mdadm), the formatting you previously did is gone.
You must now treat the two drives as a single unit, and format them again. I prefer to do this from the commandline rather than use GParted.

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#21 Post by tempestuous »

And here's an alternative approach - since you don't want the host-RAID functionality of the Promise controller, you can avoid using IDE3/4 altogether and just use IDE2 instead -
connect your two storage drives to IDE2, using a single IDE cable, with one drive set as MASTER, the other as SLAVE.
Software RAID will work just as well this way.

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#22 Post by toronado »

Thanks tempestuous.

I formatted the drives as regular drives in GParted and I'm just going to use them as-is without any RAIDing for the time being. (Configuring the software RAID at the command line is a bit beyond my current comfort level anyway - maybe I'll give it a try in the future.)

Can't use IDE 1 & 2 because they are already in use! :) Got this box filled with old IDE drives. Hey, gotta do something with them.

BTW, I'm using ext4. Why do you prefer ext3?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#23 Post by tempestuous »

I was wary of ext4 when it was first introduced, because there seemed to be quite a few bugs with it, but it should be fine these days.
toronado wrote:configuring the software RAID at the command line is a bit beyond my current comfort level
Well I just updated my instructions in the third post with more detail, but here's exactly what I think you need -
first install the mdadm-3.1.4 dotpet.
Assuming you want RAID0 - the two drives striped for maximum capacity and speed, run these commands -

Code: Select all

mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
Wait until this initialization is completed, then save the RAID configuration with -

Code: Select all

mdadm --detail --scan >> /etc/mdadm.conf
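For reference, the appended line should look roughly like this (the UUID is unique to your array):

Code: Select all

ARRAY /dev/md0 metadata=1.2 UUID=...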
Format the new RAID array (md0) with ext4, using some special formatting options suited to RAID operation -

Code: Select all

mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
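For reference: stride = chunk size ÷ filesystem block size, and stripe-width = stride × number of data drives. The values above correspond to a 128 KiB chunk (32 × 4 KiB blocks) striped across 2 drives; note that mdadm's default chunk is 512 KiB, for which the matching values would be stride=128 and stripe-width=256. The array will work either way - these options only tune performance.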
Finally, mount the RAID array -

Code: Select all

mkdir /mnt/md0
mount /dev/md0 /mnt/md0
See if you can transfer some files to/from /mnt/md0
That's it.
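If you ever want to check on the array's health, the kernel's view of all software arrays is in /proc/mdstat:

Code: Select all

# one-line summary of every active md array
cat /proc/mdstat
# full detail on a specific array
mdadm --detail /dev/md0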

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#24 Post by toronado »

OK, I'll give it a try. But one question about this first line...

Code: Select all

mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
Before running this command, should the drives have only unallocated space, or should I have already created the partitions and formatted them?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#25 Post by tempestuous »

i) yes, the drives need to be partitioned first - each as a primary partition. I understand that you have done this already.
ii) it doesn't matter whether you format them prior to array creation; any existing formatting is destroyed as the array is created anyway.
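For anyone reading along who hasn't partitioned yet, a minimal sketch with fdisk (assuming the drive is /dev/sda - double-check the device name first, as this is destructive):

Code: Select all

fdisk /dev/sda
# then at the fdisk prompt:
#   n   new partition
#   p   primary
#   1   partition number (accept the default start/end to use the whole drive)
#   w   write the partition table and exit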

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#26 Post by toronado »

tempestuous wrote:i) yes, the drives need to be partitioned first - each as a primary partition. I understand that you have done this already.
ii) it doesn't matter whether you format them prior to array creation; any existing formatting is destroyed as the array is created anyway.
OK, thanks for explaining. For the purpose of this exercise, I used GParted to create primary partitions (unformatted) on both drives, then proceeded with the commands you gave me.

Here is a copy/paste of the terminal session:

Code: Select all

sh-4.1# mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
sh-4.1# mdadm --detail --scan >> /etc/mdadm.conf
sh-4.1# mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
mke2fs 1.41.14 (22-Dec-2010)
fs_types for mke2fs.conf resolution: 'ext4'
Calling BLKDISCARD from 0 to 160048349184 failed.
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=32 blocks, Stripe width=64 blocks
9773056 inodes, 39074304 blocks
39074 blocks (0.10%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
1193 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
sh-4.1# mkdir /mnt/md0
sh-4.1# mount /dev/md0 /mnt/md0
sh-4.1# 
At this point /mnt/md0 does not show up on the desktop, and Pmount doesn't see it at all. It does not appear to be mounted.

EDIT: I tried:

Code: Select all

sh-4.1# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
sh-4.1# mount /dev/md0 /mnt/md0
And the md0 appeared on the desktop with the green dot. However when I clicked on it, it didn't open and instead Pmount launched and didn't see it.

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#27 Post by tempestuous »

Don't be concerned that /mnt/md0 doesn't show up on the desktop, nor that Pmount doesn't see the array.
Both of these probably require extra code written into them to understand software arrays.
toronado wrote:

Code: Select all

sh-4.1# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
This should only be necessary if the udev rule that I included in the mdadm dotpet fails to automatically restore the array at each bootup. This needs investigation.
toronado wrote:

Code: Select all

sh-4.1# mount /dev/md0 /mnt/md0
This command appears to run without error - which means that Puppy can see the array, and mount it.
You should browse to /mnt/md0 with ROX, and see if you can transfer files to/from it. If this fails, I suspect that the ext4 filesystem failed to be created properly within the array. In that case I suggest you reformat with ext3. To format, the array must first be unmounted -

Code: Select all

umount /mnt/md0
now go ahead with the reformatting -

Code: Select all

mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
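And a quick way to test file transfers from the command line once the array is mounted again (a sketch - the file names are arbitrary):

Code: Select all

mount /dev/md0 /mnt/md0
# write a 100 MB test file, copy it onto the array, then compare checksums
dd if=/dev/urandom of=/tmp/testfile bs=1M count=100
cp /tmp/testfile /mnt/md0/testfile
md5sum /tmp/testfile /mnt/md0/testfile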

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#28 Post by tempestuous »

Also, what version of Puppy are you running?

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#29 Post by toronado »

tempestuous wrote:Don't be concerned that /mnt/md0 doesn't show up on the desktop, nor that Pmount doesn't see the array.
Both of these probably require extra code written into them to understand software arrays.
OK.
tempestuous wrote:
toronado wrote:

Code: Select all

sh-4.1# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
This should only be necessary if the udev rule that I included in the mdadm dotpet fails to automatically restore the array at each bootup. This needs investigation.
It is necessary on my system. Not sure why.
tempestuous wrote:
toronado wrote:

Code: Select all

sh-4.1# mount /dev/md0 /mnt/md0
This command appears to run without error - which means that Puppy can see the array, and mount it.
You should browse to /mnt/md0 with ROX, and see if you can transfer files to/from it.
It works. I just added the commands to assemble and mount the RAID array to my Samba Auto-Start script and it works great. I also added a Samba share on md0, which works well too.

So basically, everything seems to be working. md0 isn't showing up on the desktop with this method, but as you said, that isn't really needed.

One thing is puzzling to me though... ROX seems to show empty directories in /mnt/ for partitions that don't exist. So, if I don't mount md0, there is an (empty) directory there anyway. (Once I mount md0 the directory shows the actual contents of the partition.) I found many other "dummy" directories there for previously created partitions and whatnot that no longer exist. I deleted these and they haven't reappeared, so I think it's ok.

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#30 Post by toronado »

tempestuous wrote:Also, what version of Puppy are you running?
lupu528

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#31 Post by tempestuous »

toronado wrote:ROX seems to show empty directories in /mnt/ for partitions that don't exist. So, if I don't mount md0, there is an (empty) directory there anyway. I found many other "dummy" directories there for previously created partitions and whatnot that no longer exist.
Yes, it seems to be a quirk of the mdadm application that it creates additional "dummy" or "ghost" device nodes, typically of the form "/dev/md_d0" or similar. You can manually stop these dummy arrays like so -

Code: Select all

mdadm --stop /dev/whatever
but as long as these devices/arrays don't interfere with the correct device/array, there's no problem, and you can just ignore them.
toronado wrote:I just added the commands to assemble and mount the RAID array to my Samba Auto-Start script and it works great.
Sure, that startup script is fine, but generally the correct place to put such additional commands is /etc/rc.d/rc.local
I have just updated the instructions in the third post.
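Based on the commands used earlier in this thread, the additions to /etc/rc.d/rc.local would look something like this:

Code: Select all

# assemble the array described in /etc/mdadm.conf, then mount it
mdadm --assemble /dev/md0
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0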

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#32 Post by tempestuous »

And in the final wash-up ... you have Puppy running with a functioning Linux software RAID-0 array.
Bravo.

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#33 Post by toronado »

tempestuous wrote:
toronado wrote:ROX seems to show empty directories in /mnt/ for partitions that don't exist. So, if I don't mount md0, there is an (empty) directory there anyway. I found many other "dummy" directories there for previously created partitions and whatnot that no longer exist.
Yes, it seems to be a quirk of the mdadm application that it creates additional "dummy" or "ghost" device nodes, typically of the form "/dev/md_d0" or similar. You can manually stop these dummy arrays like so -

Code: Select all

mdadm --stop /dev/whatever
but as long as these devices/arrays don't interfere with the correct device/array, there's no problem, and you can just ignore them.
This is probably getting off-topic for this thread, but I don't think this issue pertains to mdadm. (It might not have anything to do with ROX either.) For example, well before installing the mdadm pet I noticed empty directories in /mnt/ for partitions such as sda2 sdb2 sdc4 sdd2 etc. and none of these partitions actually existed (or at least they didn't exist at the time I was browsing /mnt/). And while some of these partitions I actually created in the past and subsequently deleted (such as sdc4), others (like sda2, sdb2, sdd2) I don't recall ever creating in the first place. It's an old computer, so maybe it's haunted. :-)
tempestuous wrote:
toronado wrote:I just added the commands to assemble and mount the RAID array to my Samba Auto-Start script and it works great.
Sure, that startup script is fine, but generally the correct place to put such additional commands is /etc/rc.d/rc.local
I have just updated the instructions in the third post.
OK thanks.

carenrose
Posts: 36
Joined: Tue 11 Dec 2012, 03:22

#34 Post by carenrose »

Sorry, I know this thread is old, but I figured this is probably the best place to put this question.

I have succeeded in setting up my RAID (1, btw). I'm now trying - and failing - to install Grub.

From the menu-entry GUI program, when I get to the step that asks something like "which disk or whatever do you want to put it on?" and suggests "/dev/sda(1?)", I tried /dev/md0, /dev/sda (and/or /dev/sda1), and /dev/sdb (and the same).
I am not currently on that computer so I don't remember whether that step asked for the disk or the partition, but whichever it asked for, that's what I put, ok? :D
It said that /dev/md0 "is not a valid Linux" something-or-other, and it couldn't mount sda or sdb to do its stuff there - which I'd figured.

Anyways, I tried the command line. The below is an approximation of what it said: (Yes, I know I'm being overly descriptive.)

Code: Select all

# grub
Blah blah something about BIOS this will take a while. ...

grub > find /boot/grub/stage1

Error 15: File not found
Ok, so I copied the files from /usr/sbin/grub (or wherever they are) to the /boot/grub directory I created on md0... so they should be present on *both physical drives*, right? And grub still won't find them when I run the above again.

What am I missing? Am I just totally lost?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#35 Post by tempestuous »

Ah, you're trying to boot from a RAID array, and that's complicated. Personally, I would avoid this, and install Puppy on a separate (non-RAID) drive - even a small USB flash drive. Then just use the RAID array for your user-data.

But if you're determined to persist, you will need to do some research and experimentation, and then rebuild Puppy's initrd.
Let me explain it in principle: the software RAID array can generally only be understood by, and thus accessible to, a running operating system. It's difficult (but not impossible) to access files on the array at bootup.

What you need to do is include all necessary drivers, utilities, and configuration logic into Puppy's initial ramdisk.
In your case, that means rebuilding the initrd image to include the mdadm application and the raid1 kernel module, and also modifying the initrd startup scripts to assemble and mount the RAID array right at the start of the boot sequence.
I have no experience in this, so cannot help with the fine details.
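In broad strokes, though, the unpack/repack cycle looks something like this (a sketch only - it assumes the initrd is a gzipped cpio archive, and the details vary between Puppy versions):

Code: Select all

# unpack the existing initrd into a working directory
mkdir /tmp/initrd-work
cd /tmp/initrd-work
zcat /path/to/initrd.gz | cpio -i -d
# ... add mdadm, the raid1 module, and the assemble/mount logic here ...
# then repack it
find . | cpio -o -H newc | gzip -9 > /tmp/initrd.gz.new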

Once this is all achieved, yes, you can put the grub configuration files onto the array ...
but as I understand it, the Master Boot Record must still be installed onto a single physical drive. I don't believe it's possible to share the MBR across an array - that would effectively mean two boot sectors, and I can't imagine any motherboard bios being able to recognise that.
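If you do get that far, installing legacy GRUB's boot code onto one physical drive from the grub shell would look roughly like this (a sketch - the (hd0,0) numbering depends on how your bios orders the drives):

Code: Select all

grub> root (hd0,0)
grub> setup (hd0)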

Post Reply