RAID arrays in (Puppy) Linux

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

RAID arrays in (Puppy) Linux

#1 Post by tempestuous »

This is an overview of RAID configuration in Linux. I have provided the necessary configuration utilities, but this is not really a full HOWTO, because I don't currently have any RAID hardware myself, and can't vouch for the final outcome.
A quick summary of currently available RAID modes:

RAID-0 - striping
RAID-1 - mirroring
RAID-10 - striping across mirrored pairs
RAID-4/RAID-5/RAID-6 - striping with parity (failure protection within the array)

1. HARDWARE RAID
As long as there's a compatible Linux driver available for your RAID interface device, no special software or configuration is required. The whole idea of hardware RAID is that the configuration is independent of the operating system. So Puppy will see whatever logical drives your RAID card has configured, and will remain happily unaware of exactly which physical drives and partitions are actually in use.
Puppy currently contains drivers for RAID interfaces by Adaptec, LSI Logic, Highpoint, Intel, IBM, Promise, and PMC.
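If you want to confirm that Puppy has recognised the controller, a couple of generic checks (just standard Linux commands, nothing specific to any one brand) are to look for the driver in the kernel messages and see whether the logical drive shows up as an ordinary disk -

Code: Select all

dmesg | grep -i raid     # kernel messages mentioning the RAID driver
fdisk -l                 # the array should appear as a single large disk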

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#2 Post by tempestuous »

2. HOST RAID
This is also known as fakeraid, onboard RAID, or bios RAID.
http://en.wikipedia.org/wiki/RAID#Firmw ... based_RAID

The hardware (or firmware) simply sets the RAID configuration, but RAID control must then be handled in software.

In Windows this means software specific to the particular brand/model of host-RAID device.
In Linux, "dmraid" can access these fakeraid arrays, regardless of brand/model.
dmraid dotpet package now attached.

EDIT: Jan 27 2011
dmraid updated to ver 1.0.0.rc16-3
Package now includes the latest libdevmapper and libdevmapper-event libraries,
so it should work in earlier Puppies, even though it was compiled in Puppy 5.1

Since the RAID array can be seen by both Windows and Linux, this setup is useful if you want to dual boot between the two operating systems. And since Windows cannot read most of the filesystems that Linux can, you will probably want to format the RAID volume under Windows - most likely as FAT32.

There's a HOWTO here -
http://en.gentoo-wiki.com/wiki/RAID/Onboard
but it's a little complex, especially since it explains how to boot from a RAID device.
Dealing with fakeraid/software raid arrays is easier if you boot your Linux operating system from a non-RAID partition.
In this case, here's a summary -
First install the dmraid dotpet package attached to this post.
Next load the device-mapper kernel module -

Code: Select all

modprobe dm-mod
You will see the message "WARNING: Deprecated config file /etc/modprobe.conf" - don't worry, that's a trivial warning.
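If you want to double-check that the module actually loaded (optional), lsmod will list it -

Code: Select all

lsmod | grep dm_mod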

This dm-mod kernel module depends on the device-mapper library, which Puppy 5.1/5.2 already contains ...
but my dmraid dotpet will automatically replace this library with a newer version.
Check that dmraid can now see your RAID array(s) -

Code: Select all

dmraid -r
If it looks good, activate the array(s)

Code: Select all

dmraid -a y
Check that the activated array(s) are now listed in /dev/mapper

Code: Select all

ls /dev/mapper/
You will likely see several devices listed there, e.g. for an nForce interface you might see "nvidia_x1" and "nvidia_x2". Mount whatever is listed, like this -

Code: Select all

mkdir /mnt/raid1
mount /dev/mapper/nvidia_x1 /mnt/raid1

mkdir /mnt/raid2
mount /dev/mapper/nvidia_x2 /mnt/raid2
Hopefully you can now see the contents of your RAID array in the directories you just mounted, in this case /mnt/raid1 and /mnt/raid2
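If you want the array(s) activated and mounted automatically at each boot, you could add the equivalent commands to /etc/rc.d/rc.local, in the same way as described for mdadm in the next post. A sketch only, using the nvidia_x1 example above - adjust the device name and mount point to suit your own array -

Code: Select all

modprobe dm-mod
dmraid -a y
mkdir -p /mnt/raid1
mount /dev/mapper/nvidia_x1 /mnt/raid1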

This form of RAID offers no real hardware assistance. If you need to dual boot such hardware with Windows, dmraid is the only choice, but if your setup is exclusively Linux, full Linux software RAID is a better choice. See the next post.
Last edited by tempestuous on Sat 29 Jan 2011, 06:45, edited 4 times in total.

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#3 Post by tempestuous »

3. LINUX SOFTWARE RAID

If your RAID setup is running only under Linux, and there's no need to dual boot into Windows, this is a better RAID solution than bios-RAID.
For motherboards with onboard RAID (Host RAID) as mentioned in the previous section, disable the RAID function in bios. Yes, this sounds a bit strange, but Linux will take over all RAID configuration.
There's a HOWTO here -
https://raid.wiki.kernel.org/index.php/ ... _operation
Here's a summary. Let's assume you want RAID1 (mirroring for redundancy)
using /dev/sdb1 and /dev/sdc1 (your drives must be partitioned first, e.g. with GParted).
RAID0 support is built directly into the Puppy kernel, but RAID1 support is via the external kernel module "raid1", so we first need to load this module -

Code: Select all

modprobe raid1
Install the mdadm dotpet package attached to this post. Then run this command to configure the RAID array

Code: Select all

mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1
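The new mirror will begin an initial sync, which can take quite a while. You can watch its progress in /proc/mdstat (a standard mdadm check, not part of the original summary) -

Code: Select all

cat /proc/mdstat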
You should hear the drives working as the array is initialised. Once this is complete, the new RAID configuration must be saved, with this command -

Code: Select all

mdadm --detail --scan >> /etc/mdadm.conf
(It's worth keeping a copy of this configuration file on a USB dongle, for example, in case your system drive fails in the future, and you need to be able to access the RAID array on a different installation.)
Your new RAID array is /dev/md0. Go ahead and format it with ext3 (alternative filesystems may be considered)

Code: Select all

mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
All those extra formatting options I just included are explained in the RAID wiki I mentioned earlier.
Now mount the RAID array -

Code: Select all

mkdir /mnt/md0
mount /dev/md0 /mnt/md0
Now you can browse to /mnt/md0 with ROX, and test it by copying some files to/from.

That's it, but this configuration also needs to be restored at each bootup. My mdadm dotpet includes a special mdadm udev rule which, in theory, will auto-detect software-RAID arrays at bootup and restore their configuration ... unfortunately, testing under Puppy 5.28 has shown that this udev function fails!
So you need to add some extra commands to Puppy's startup scripts - open /etc/rc.d/rc.local in Geany, and add these 3 lines -

Code: Select all

modprobe raid1
mdadm --assemble /dev/md0
mount /dev/md0 /mnt/md0
Save.
Now each time you boot Puppy, your RAID array will be ready to use at /mnt/md0
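If you ever want to confirm that the array really was assembled after a reboot, you can query it directly (optional check) -

Code: Select all

mdadm --detail /dev/md0
cat /proc/mdstat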
Last edited by tempestuous on Sat 26 Oct 2013, 01:47, edited 3 times in total.

prehistoric
Posts: 1744
Joined: Tue 23 Oct 2007, 17:34

plan recovery first

#4 Post by prehistoric »

This is not about a current RAID system I'm running, but reflects past experience. RAID 0 (striping) loses data when any one of its drives fails; it is emphatically not fault tolerant. All other RAID configurations have some fault tolerance.

While a redundant array can reduce data loss from drive failures, it does not reduce the rate of drive failures. If you are running four drives, you can expect four times the rate of drive failures. This means you should plan for recovery from failure of a drive in a redundant array before you get into a situation where you must do this to avoid losing data. I've seen people screw things up because they expected everything to be handled automatically. Once the data is really gone, it stays gone.

One easy mistake is discovering that you can't match the failed drive without ordering a replacement by mail. It helps to keep a spare of the same type on hand from the day you set up the array.

If you are going to run a RAID array to avoid data loss, think things through before you get in trouble.

nickdobrinich
Posts: 77
Joined: Fri 06 Apr 2007, 03:29
Location: Cleveland OH USA

hardware RAID

#5 Post by nickdobrinich »

If you are using hardware RAID of any kind, make sure you have an exact duplicate RAID card.
I have had them fail and it is not pleasant or cheap.

If RAID is provided on the motherboard, make sure to have a duplicate motherboard.

My current thinking is this:
Although software RAID does not have the performance of hardware RAID, it may be best to use it in a RAID 1 or higher configuration for drive fault tolerance.
Spend the money you save by not buying 2 RAID cards on a faster multicore CPU.

And if it is critical data, absolutely positively have an offsite backup plan in place.
The hardware can be replaced, the drives can be replaced.
The OS can be reinstalled.
But your data is the most valuable thing you have.
Which I only remember when the worst-case, could-never-happen scenario strikes: the UPS dies hard, the dog chews the power cord.

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

dmraid vs mdadm

#6 Post by tempestuous »

I see that my reference to dual-booting with Windows may have caused some confusion in a different forum thread.
So let me clarify that bios-RAID and Linux software RAID are technically similar: the only difference is that with bios-RAID the hardware adapter sets the RAID configuration (as an onboard bios setting). And as other forum members here have mentioned, if this hardware fails you will need to reconnect your drives to an identical adapter to retrieve your data.
That's not to say that Linux software RAID is foolproof - if your Linux OS dies, sure, you can just install Linux afresh ... but you need to remember the configuration you originally created the RAID array with. Pen, paper and old-fashioned record keeping should not be forgotten in these high-tech times.

The choice between dmraid and mdadm is often one of practicality. If you have been running a bios-RAID array under Windows, chances are you already have a certain amount of data on the array which you would like to keep. If you decide to install Linux on this same computer, regardless of whether Windows will be kept or not, the easiest option is to manage this array under Linux with dmraid, which will recognise the bios-RAID configuration and let you keep using the array without any reformatting.

But if the RAID array's data is already backed up somewhere else and you're happy to totally reconfigure a Linux system, the better option is to disable the bios-RAID function, then run full Linux software RAID using mdadm. Obviously, all existing data on drives within the new array will be lost.
What's interesting about this situation is that the bios-RAID adapter contributes nothing to the RAID configuration; it reverts to being a multiple-drive host interface.

Indeed, software RAID requires no special drive interfaces. Just connect 2 SATA hard drives, for example, on the standard SATA ports of a fairly standard motherboard and you can configure them for full software RAID using mdadm.
... just remember to write down the RAID configuration you set up.
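One way to do that without relying on memory is to dump the details to a file kept somewhere off the array - a USB stick, for example. A sketch only; /mnt/usbstick is just an illustrative mount point, so substitute wherever your stick actually mounts -

Code: Select all

mdadm --detail --scan > /mnt/usbstick/mdadm.conf.backup   # for mdadm arrays
dmraid -s > /mnt/usbstick/dmraid-sets.txt                 # for bios-RAID sets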

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#7 Post by tempestuous »

Changing the subject back to hardware RAID:
here are the configuration utilities for the LSI MegaRAID controller family,
supported models: MegaRAID SCSI 320 & SATA 150/300.

The original files were obtained from here
http://www.lsi.com/storage_home/product ... csi_3202x/
which were originally zipped binary files. I have repackaged them as dotpets, and slightly adapted the MegaMon start script for Puppy.

Please note: MegaRAID adapters are supported in Puppy by the megaraid driver, and RAID arrays should work just fine without anything else.
These utilities are optional extras.

megarc 1.11 is a commandline MegaRAID Configuration Utility
run "megarc" to launch

megamgr 5.20 is a gui MegaRAID Configuration Utility
run "megamgr" to launch

MegaMon 3.8 is a daemon that monitors the RAID array
The package installs /etc/init.d/raidmon, which will run as a daemon at each boot up.
It logs events to /var/log/megaserv.log with a date & time stamp, and sends mail to root when those events occur.
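If you just want to keep an eye on what MegaMon is reporting, you can follow the log file it writes (standard tail usage) -

Code: Select all

tail -f /var/log/megaserv.log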

nickdobrinich
Posts: 77
Joined: Fri 06 Apr 2007, 03:29
Location: Cleveland OH USA

software RAID on ClearOS

#8 Post by nickdobrinich »

@tempestuous:
Yes, your comments about writing down the RAID config are spot on.
Write it down on paper or keep it on a flash drive, not in a file on the RAID drive.
I often wonder if I will be able to wade through the RAID configuration 3 years from now, in full-panic, everybody-is-screaming-at-me mode.

On a related topic.
I am currently looking into setting up ClearOS (its parents being Red Hat Enterprise Linux, via CentOS) in a software RAID configuration.

I have a 160 GB IDE drive and two matched 1 TB SATA drives configured as software RAID1, for a small-office 8-user Samba file server with OpenX or Zimbra email to connect with client Outlooks (aka LookOut).

What is a recommended setup for the Linux directory structure?
Is it best to have only /boot and a swap partition (twice the size of RAM) on the 160 GB drive, with / and everything else on the RAIDed drives?
Should I be looking at a more elaborate setup?
What is my worst single point of failure condition?

Is anyone familiar with NUT (network UPS tools) to monitor an APC UPS to bring the server down gracefully in an extended power outage?
Any guidance here would be much appreciated as this is my first RAID setup.

PS Just for added pressure, my paycheck will print off this system.
Or not.
Last edited by nickdobrinich on Wed 10 Nov 2010, 13:47, edited 1 time in total.

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#9 Post by tempestuous »

My experience is video production/video server related, so my comments carry no great weight for web server situations, but for what it's worth, I would install my Linux OS plus all applications on the non-RAID boot drive and keep the RAID array purely for data. Then you would configure each individual application (OpenX, Zimbra, etc) to store its user data on the RAID.

Regarding partitioning on the boot drive: standard Puppy convention is a single ext3 or ext4 partition for Puppy (boot + /home + /) plus a 1.5x RAM size Linux swap partition.
But Puppy will take up such a small part of your 160G drive that it would make sense to create at least one more partition for another Linux installation.
And once you get into a multi-boot situation, it's a good idea to have a separate boot partition. This should ideally be the very first partition on the drive, and only needs to be very small - say 50MB. I would format the boot partition as ext3.
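Purely as an illustration of that advice (the sizes are arbitrary examples, not recommendations), the 160G boot drive might end up partitioned something like this -

Code: Select all

/dev/sda1    50 MB   ext3   boot partition (bootloader + kernels)
/dev/sda2    20 GB   ext3   Puppy ( / plus /home )
/dev/sda3    20 GB   ext3   spare, for a second Linux installation
/dev/sda4     3 GB   swap   roughly 1.5 x RAM (here assuming 2 GB RAM)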

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#10 Post by tempestuous »

Here is the latest LVM2, a Logical Volume Manager for Linux.
The source code was obtained from
ftp://sources.redhat.com/pub/lvm2/

This is not directly associated with RAID systems, but it works in a similar fashion to dmraid. Like dmraid, it depends on the device-mapper library, which Puppy 5.1 already contains.

UPDATE Jan 29 2011
My LVM dotpet package has been upgraded to contain its own libdevmapper and libdevmapper-event libraries, and these are exactly the same as the devmapper libraries contained in my dmraid package earlier in this thread. So these two dotpet packages, LVM and dmraid, can coexist on the same Puppy installation.
Puppy 5.x is definitely compatible, and earlier Puppies might be compatible.

Instructions are here -
http://tldp.org/HOWTO/LVM-HOWTO/

This is an advanced tool. If you don't know what it is, you don't need it.
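For anyone who does need it, the basic workflow looks roughly like this - a minimal sketch, assuming /dev/sdb1 and /dev/sdc1 are spare partitions and that a 100G logical volume is wanted; the HOWTO above covers the real detail -

Code: Select all

modprobe dm-mod
pvcreate /dev/sdb1 /dev/sdc1          # mark the partitions as physical volumes
vgcreate myvg /dev/sdb1 /dev/sdc1     # group them into a volume group
lvcreate -L 100G -n mylv myvg         # carve out a 100G logical volume
mkfs.ext3 /dev/mapper/myvg-mylv
mkdir /mnt/mylv
mount /dev/mapper/myvg-mylv /mnt/mylv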

Aug 26 2014
Forum download link is broken. Groan.
Dotpet now available here -
http://www.smokey01.com/tempestuous/LVM2.2.02.79.pet
Last edited by tempestuous on Tue 26 Aug 2014, 00:32, edited 2 times in total.

gcmartin

Using LVM2

#11 Post by gcmartin »

Thanks tempestuous for these needed tools
tempestuous wrote:Here is the latest LVM2 ....
There are 2 PETs shown. I am looking at using them in 2 very different Puppy distros:
ttuuxxx's 4.3.2-SCSI and playdayz's Pup 5.2 live CDs, where Puppy would give me diagnostic abilities for the LVM2 volumes that exist. One set of systems has SCSI drives with LVM2 operational, and a second set has LVM2 on SATA drives.

Do I need both PETs for these distros?
Do I install the "libdevmapper" PET first, or last?

Edit: Thanks again for the update. I am not running Puppy on RAID hardware. I do need the LVM2 items.

Thanks
Last edited by gcmartin on Mon 31 Jan 2011, 21:44, edited 2 times in total.

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#12 Post by tempestuous »

Puppy 5.x already contains libdevmapper. Thus:

Puppy 5.2 requires just LVM2.2.02.79.pet

Puppy 4.3.2-SCSI requires LVM2.2.02.79.pet plus libdevmapper-1.02.60.pet
The order of installation makes no difference.

The LVM2 utility should work with IDE/SATA/SCSI.
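Once installed, a quick sanity check that the binary and the devmapper libraries agree is to print the version and scan for volume groups (optional) -

Code: Select all

lvm version
vgscan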

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#13 Post by tempestuous »

Thanks to some good testing by forum member CindyJ, the dmraid dotpet earlier in this thread is confirmed to work with bios-RAID devices. I have just upgraded the dmraid dotpet and updated the instructions. It's a shame the 150-or-so other people who earlier downloaded this package failed to offer such troubleshooting and assistance.

I have also updated the LVM dotpet so it contains matching libdevmapper libraries.

disciple
Posts: 6984
Joined: Sun 21 May 2006, 01:46
Location: Auckland, New Zealand

#14 Post by disciple »

tempestuous wrote:3. LINUX SOFTWARE RAID

If your RAID setup is running only under Linux, and there's no need to dual boot into Windows, this is the best RAID solution.
Is software RAID better than hardware RAID just because you don't need two RAID cards (one to use and one for backup)?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#15 Post by tempestuous »

Sorry, what I should have said was:
"... this is the better RAID solution than bios-RAID"

If you have a true hardware-RAID adapter, this is the very best solution. True hardware-RAID adapters are less common, and expensive.
And beware: many bios-RAID devices are incorrectly assumed to be hardware-RAID.

disciple
Posts: 6984
Joined: Sun 21 May 2006, 01:46
Location: Auckland, New Zealand

#16 Post by disciple »

I pulled a couple of RAID cards and the hard drives out of an old Windows server from work and plugged them into my Puppy machine, and they work without me doing anything, so I'm assuming they're hardware-RAID adapters ... as opposed to BIOS-RAID adapters that Puppy has somehow handled automatically.

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#17 Post by toronado »

tempestuous wrote:3. LINUX SOFTWARE RAID...
disable the RAID function in bios
On my MSI 6830E motherboard there are 4 IDE ports... 2 regular IDE ports, and 2 more that are intended for use with the BIOS RAID. I've always thought that the only way to have the BIOS RAID IDE ports functioning was to enable the BIOS RAID function in the BIOS (OnBoard ATA133 RAID). Otherwise the hard drives connected to those ports don't even show up in GParted.

Are you saying that by disabling the on-board BIOS RAID and loading the raid1 module ("modprobe raid1"), those IDE ports will function and the devices connected to them will be visible to the OS?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#18 Post by tempestuous »

toronado wrote:Are you saying that by disabling the on-board BIOS RAID and by loading "modprobe raid1" those IDE ports will function ...
No. With the RAID function disabled in the bios, those ports should act as standard IDE ports, no extra drivers required.

toronado wrote:I've always thought that the only way to have the BIOS RAID IDE ports functioning was to enable the BIOS RAID function in the BIOS (OnBoard ATA133 RAID). Otherwise the hard drives connected to those ports don't even show up in GParted.
Well I'm not familiar with the MS-6830E, or MSI KT3 Ultra, or whatever it's called, but unless there's something special about that board, this sounds very wrong.

My first thought is that maybe the bios settings are confusing - maybe there's a setting which enables the RAID ports, and a second setting which enables the RAID function associated with those ports? So maybe there's a distinction between "Enable" and "Enable RAID".

My second thought is that you should check your drives and cables -
if there's a single IDE drive connected to each RAID port, then these drives should have their rear jumpers set for "MASTER" or "CABLE SELECT". If CABLE SELECT, then it's important that you connect the black plug on the IDE cable to the drive, not the grey plug.
You should certainly not have the jumpers on your drives set for "SLAVE".

toronado
Posts: 95
Joined: Wed 04 Sep 2013, 21:09

#19 Post by toronado »

tempestuous, thanks for your reply.

The full name of this antique is "MSI KT3 Ultra-ARU", the "ARU" part designating that it has the two extra IDE ports (IDE 3 & IDE 4) for BIOS RAID (via a Promise PDC20276 chip), and that it supports USB 2.0 :-)

The hard drives connected to the BIOS RAID (IDE 3 & IDE 4) ports are jumpered for "master" and are connected to the "master" connectors on the drive cables.

AMIBIOS is the "main" motherboard BIOS.

There is an "other" BIOS which deals with the IDE RAID ports which is the MBFastTrack133 "Lite" BIOS by Promise Technology.

Within the AMIBIOS, under "Integrated Peripherals", there is a setting for "OnBoard ATA133 RAID" which has just two possible options: "Enable" or "Disable".

If set to "Enable" then the "MBFastTrack133 BIOS" will run immediately after POST and scan those RAID IDE ports to see if any devices are connected, and determine if a RAID array exists. At this time you also have the option to enter the "FastBuild" utility to set-up an array.

(If set to "Disable" then the "MBFastTrack133 BIOS" will NOT run and devices connected to the BIOS RAID (IDE 3 & IDE 4) ports do not show up in GParted.)

If you do not have any array defined, and you do not enter the utility to set one up, then the MBFastTrack133 auto-creates two separate single-drive "arrays":

Array 1: 1+0 Stripe
Array 2: 1+0 Stripe

(I have tried deleting the single drive "arrays" after they are created, but MBFastTrack133 just reboots the computer and auto-creates them again. So there appears to be no way around the single-drive "arrays".)

These single-drive "arrays" show up in GParted as:
/dev/sda
/dev/sdb

It appears that the single-drive "arrays" are for all intents and purposes functioning as though these were normal IDE ports with single drives connected to them.

(IIRC, I used it this way with OpenFiler a while back and was able to use the software RAID functions within OpenFiler to create a RAID array or even just a JBOD setup.)

Do you think I might be able to use it this way with the Linux Software RAID in Puppy?

tempestuous
Posts: 5464
Joined: Fri 10 Jun 2005, 05:12
Location: Australia

#20 Post by tempestuous »

OK, I think I understand - there's a secondary (separate) bios dedicated to the Promise controller for IDE 3/4.
toronado wrote:If you do not have any array defined, and you do not enter the utility to set one up, then the MBFastTrack133 auto-creates two separate single-drive "arrays":

Array 1: 1+0 Stripe
Array 2: 1+0 Stripe
As you say, this autoconfiguration appears to be two "single-drive arrays" ... which seems a little silly to me, since the word "array" implies multiple drives! Personally I consider this configuration a "null array".

Yes, I suggest you proceed to treat these drives within Puppy Linux as though they are single, normal, drives.
My only worry is that the Promise controller may still be acting as a host-RAID interface (even though there's no RAID striping/mirroring involved) and it might be necessary to translate this interface to the Linux OS via the dmraid application.
If so, there's no way to get around using host-RAID, so you might as well use it to configure your 2-drive array ...
but let's not assume this worst case.

With the Promise bios enabled in its default state, as you described, boot up to Puppy Linux.
Don't install dmraid - let's see if the drives act as normal IDE devices -
with GParted define each drive as a single partition, and format them as ext3 (ext3 is my preferred drive format).
Just to be safe, reboot.
Hopefully Puppy will now see the two new drives. See if you can copy some files to/from these drives.
If successful, this means that the Promise host-RAID function is inactive or benign, and you can proceed to use Linux software RAID as I described in the third post.

Be aware that once you have configured your RAID array (using mdadm), the formatting you previously did is gone.
You must now treat the two drives as a single unit, and format them again. I prefer to do this from the commandline rather than use GParted.
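For example, if the two drives end up as /dev/sda and /dev/sdb with a single partition on each, the sequence from post #3 would look something like this (a sketch only - adjust device names and formatting options to suit) -

Code: Select all

modprobe raid1
mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --detail --scan >> /etc/mdadm.conf
mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0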

Post Reply