Puppy Linux Discussion Forum » House Training » HOWTO (Solutions)

RAID arrays in (Puppy) Linux
tempestuous
Posted: Sun 17 Oct 2010, 06:08    Subject: RAID arrays in (Puppy) Linux

This is an overview of RAID configuration in Linux. I have provided the necessary configuration utilities, but this is not really a full HOWTO, because I don't currently have any RAID hardware myself, and can't vouch for the final outcome.
A quick summary of currently available RAID modes:

RAID-0 - striping (no redundancy)
RAID-1 - mirroring
RAID-10 - striping across mirrored pairs
RAID-4/RAID-5/RAID-6 - parity-based redundancy within the array

1. HARDWARE RAID
As long as there's a compatible Linux driver for your RAID interface device, no special software or configuration is required. The whole idea of hardware RAID is that the configuration is independent of the operating system, so Puppy will simply see whatever logical drives your RAID card presents, and will remain happily unaware of which physical drives and partitions are actually in use.
Puppy currently contains drivers for RAID interfaces by Adaptec, LSI Logic, HighPoint, Intel, IBM, Promise, and PMC.
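If you're unsure whether Puppy has recognised the controller and its logical drive(s), a quick sanity check from a terminal (standard commands, nothing extra to install) -
Code:
cat /proc/partitions
dmesg | grep -i raid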
tempestuous
Posted: Sun 17 Oct 2010, 06:12

2. HOST RAID
This is also known as fakeraid, onboard RAID, or bios RAID.
http://en.wikipedia.org/wiki/RAID#Firmware.2Fdriver-based_RAID

The hardware (or firmware) simply sets the RAID configuration, but RAID control must then be handled in software.

In Windows this means driver software specific to the particular brand/model of host RAID device.
In Linux, "dmraid" can access these fakeraid arrays regardless of brand/model.
A dmraid dotpet package is attached to this post.

EDIT: Jan 27 2011
dmraid updated to ver 1.0.0.rc16-3
Package now includes the latest libdevmapper and libdevmapper-event libraries,
so it should work in earlier Puppies, even though it was compiled in Puppy 5.1

Since the RAID array can be seen by both Windows and Linux, this setup is useful if you want to dual boot between the two operating systems. And since Windows cannot handle most of the filesystems that Linux can, you will probably want to format the RAID drives under Windows, most likely with FAT32, which both operating systems can read and write.

There's a HOWTO here -
http://en.gentoo-wiki.com/wiki/RAID/Onboard
but it's a little complex, mainly because it also explains how to boot from a RAID device.
Dealing with fakeraid/software RAID arrays is easier if you boot your Linux operating system from a non-RAID partition.
For that case, here's a summary -
First install the dmraid dotpet package attached to this post.
Next load the device-mapper kernel module -
Code:
modprobe dm-mod

You will see "WARNING: Deprecated config file /etc/modprobe.conf" - don't worry, that warning is harmless.

The dmraid tool depends on the device-mapper library, which Puppy 5.1/5.2 already contains ...
but my dmraid dotpet will automatically replace that library with a newer version.
Check that dmraid can now see your RAID array(s) -
Code:
dmraid -r

If it looks good, activate the array(s)
Code:
dmraid -a y

Check that the activated array(s) are now listed in /dev/mapper
Code:
ls /dev/mapper/

You will likely see several devices listed there, e.g. for an nForce interface you might see "nvidia_x1" and "nvidia_x2". Mount whatever is listed, like this -
Code:
mkdir /mnt/raid1
mount /dev/mapper/nvidia_x1 /mnt/raid1

mkdir /mnt/raid2
mount /dev/mapper/nvidia_x2 /mnt/raid2

Hopefully you can now see the contents of your RAID array in the directories you just mounted, in this case /mnt/raid1 and /mnt/raid2
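When you're finished with the array (before shutting down, for example), unmount and deactivate in the reverse order, using the same example mount points as above -
Code:
umount /mnt/raid1
umount /mnt/raid2
dmraid -an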

This form of RAID offers no real hardware assistance. If you need to dual boot such hardware with Windows, dmraid is the only choice, but if your setup is exclusively Linux, full Linux software RAID is a better choice. See the next post.
Attachment: dmraid-1.0.0.rc16-3.pet (205.74 KB, downloaded 859 times)
tempestuous
Posted: Sun 17 Oct 2010, 06:21

3. LINUX SOFTWARE RAID

If your RAID setup runs only under Linux, and there's no need to dual boot into Windows, this is a better RAID solution than bios-RAID.
For motherboards with onboard RAID (host RAID) as described in the previous post, disable the RAID function in the bios. Yes, this sounds a bit strange, but Linux will take over all RAID configuration.
There's a HOWTO here -
https://raid.wiki.kernel.org/index.php/RAID_setup#Mdadm_modes_of_operation
Here's a summary. Let's assume you want RAID1 (mirroring for redundancy)
using /dev/sdb1 and /dev/sdc1 (the drives must first be partitioned, with GParted for example).
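Before proceeding, it's worth double-checking from a terminal that both partitions really exist; fdisk's list option is one quick way -
Code:
fdisk -l /dev/sdb /dev/sdc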
RAID0 support is built directly into the Puppy kernel, but RAID1 support is via the external kernel module "raid1", so we first need to load this module -
Code:
modprobe raid1

Install the mdadm dotpet package attached to this post. Then run this command to configure the RAID array -
Code:
mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1

You should hear the drives working as the array is initialised. Once this is complete, the new RAID configuration must be saved, with this command -
Code:
mdadm --detail --scan >> /etc/mdadm.conf

(It's worth keeping a copy of this configuration file on a USB dongle, for example, in case your system drive fails in the future and you need to access the RAID array from a different installation.)
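The initial synchronisation of the mirror can take quite a while on large drives. You can check its progress at any time -
Code:
cat /proc/mdstat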
Your new RAID array is /dev/md0. Go ahead and format it with ext3 (alternative filesystems could also be used) -
Code:
mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0

All those extra formatting options I just included are explained in the RAID wiki I mentioned earlier.
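As a worked example of where those numbers come from (this assumes the default 4 KiB filesystem block and a 128 KiB chunk, with 2 data disks; adjust to suit your own array) -
Code:
# stride       = chunk / block       = 128 KiB / 4 KiB = 32
# stripe-width = stride x data disks = 32 x 2          = 64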
Now mount the RAID array -
Code:
mkdir /mnt/md0
mount /dev/md0 /mnt/md0

Now you can browse to /mnt/md0 with ROX, and test the array by copying some files to and from it.
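You can also confirm that the array is healthy and that both drives are active -
Code:
mdadm --detail /dev/md0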

That's it, but we also need to restore this configuration at each bootup. My mdadm dotpet includes a special mdadm udev rule which, in theory, will auto-detect software RAID arrays at bootup and restore their configuration ... unfortunately, testing under Puppy 5.2.8 has shown that this udev function fails!
So you need to add some extra commands to Puppy's startup script - open /etc/rc.d/rc.local in Geany, and add these 3 lines -
Code:
modprobe raid1
mdadm --assemble /dev/md0
mount /dev/md0 /mnt/md0

Save.
Now each time you boot Puppy, your RAID array will be ready to use at /mnt/md0
Attachment: mdadm-3.1.4.pet (251.91 KB, downloaded 789 times; compiled in Puppy 5.1)
prehistoric
Posted: Sun 17 Oct 2010, 09:06    Subject: plan recovery first

This is not about a RAID system I'm currently running, but reflects past experience. RAID 0 (striping) fails and loses data when any one drive fails; it is emphatically not fault tolerant. All the other RAID configurations have some fault tolerance.

While a redundant array can reduce data loss from drive failures, it does not reduce the rate of drive failures: if you are running four drives, you can expect four times the rate of failures. This means you should plan your recovery from a failed drive in a redundant array before you are in a situation where you must get it right to avoid losing data. I've seen people screw things up because they expected everything to be handled automatically. Once the data is really gone, it stays gone.

One avoidable mistake is discovering that you can't match the failed drive without mail-ordering a new one. It helps to buy a spare of the same type on the day you set up the array.

If you are going to run a RAID array to avoid data loss, think things through before you get in trouble.
nickdobrinich
Posted: Sat 06 Nov 2010, 00:19    Subject: hardware RAID (backup hardware precautions)

If you are using hardware RAID of any kind, make sure you have an exact duplicate RAID card.
I have had them fail, and it is not pleasant or cheap.

If RAID is provided on the motherboard, make sure to have a duplicate motherboard.

My current thinking is this:
although software RAID does not match the performance of hardware RAID, it may be best to use it, in a RAID 1 or higher configuration, for drive fault tolerance.
Spend the money you save by not buying two RAID cards on a faster multicore CPU.

And if it is critical data, absolutely positively have an offsite backup plan in place.
The hardware can be replaced, the drives can be replaced,
and the OS can be reinstalled.
But your data is the most valuable thing you have.
Which I only remember when the worst-case, could-never-happen scenario strikes: the UPS dies hard, or the dog chews through the power cord.
tempestuous
Posted: Sat 06 Nov 2010, 06:19    Subject: dmraid vs mdadm

I see that my reference to dual-booting with Windows may have caused some confusion in a different forum thread.
So let me clarify: bios-RAID and Linux software RAID are technically similar; the only real difference is that with bios-RAID the hardware adapter sets the RAID configuration (as an onboard bios setting). And as other forum members here have mentioned, if this hardware fails you will need to reconnect your drives to an identical adapter to retrieve your data.
That's not to say that Linux software RAID is foolproof - if your Linux OS dies, sure, you can just install Linux afresh ... but you need to remember the configuration you initially created the RAID array with. Pen, paper and old-fashioned record keeping should not be forgotten in these high-tech times.

The choice between dmraid and mdadm is often one of practicality. If you have been running a bios-RAID array under Windows, chances are you already have a certain amount of data on the array which you would like to keep. If you decide to install Linux on this same computer, regardless of whether Windows will be kept or not, the easiest option is to manage this array under Linux with dmraid, which will recognise the bios-RAID configuration and let you keep using the array without any reformatting.

But if the RAID array's data is already backed up somewhere else and you're happy to totally reconfigure a Linux system, the better option is to disable the bios-RAID function, then run full Linux software RAID using mdadm. Obviously, all existing data on drives within the new array will be lost.
What's interesting about this situation is that the bios-RAID adapter contributes nothing to the RAID configuration; it reverts to being a multiple-drive host interface.

Indeed, software RAID requires no special drive interfaces. Just connect two SATA hard drives, for example, to the ordinary SATA ports of a fairly standard motherboard, and you can configure them for full software RAID using mdadm.
... just remember to write down the RAID configuration you set up.
tempestuous
Posted: Sat 06 Nov 2010, 06:51

Changing the subject back to hardware RAID:
here are the configuration utilities for the LSI MegaRAID controller family;
supported models are MegaRAID SCSI 320 and SATA 150/300.

The original files were obtained from here -
http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_scsi/megaraid_scsi_3202x/
- as zipped binaries. I have repackaged them as dotpets, and slightly adapted the MegaMon start script for Puppy.

Please note: MegaRAID adapters are supported in Puppy by the megaraid driver, and RAID arrays should work just fine without anything else.
These utilities are optional extras.

megarc 1.11 is a command-line MegaRAID configuration utility;
run "megarc" to launch it.

megamgr 5.20 is a GUI MegaRAID configuration utility;
run "megamgr" to launch it.

MegaMon 3.8 is a daemon that monitors the RAID array.
The package installs /etc/init.d/raidmon, which will run as a daemon at each bootup.
It then logs events to /var/log/megaserv.log with date and time stamps,
and sends mail to root for those events.
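To confirm the monitor is running and see what it has recorded, you can watch its log from a terminal -
Code:
tail -f /var/log/megaserv.log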
Attachment: megarc-1.11.pet (260.34 KB, downloaded 520 times)
Attachment: megamgr-5.20.pet (245.81 KB, downloaded 533 times)
Attachment: MegaMon-3.8.pet (410.81 KB, downloaded 551 times)
nickdobrinich
Posted: Sat 06 Nov 2010, 20:11    Subject: software RAID on ClearOS (RAID partitioning in a Linux system)

@tempestuous:
Yes, your comments about writing down the RAID config are spot on.
Write it down on paper, or keep it on a flash drive - not in a file on the RAID drive itself.
I often wonder whether I will be able to wade through the RAID configuration three years from now, in full-panic, everybody-is-screaming-at-me mode.

On a related topic:
I am currently looking into setting up ClearOS (descended from Red Hat Enterprise Linux via CentOS) in a software RAID configuration.

I have a 160 GB IDE drive and two matched 1 TB SATA drives configured as software RAID1, for a small-office 8-user Samba file server with OpenX or Zimbra email to connect with the clients' Outlook (aka LookOut).

What is a recommended setup for the Linux directory structure?
Is it best to have only /boot and a swap partition of twice the RAM size on the 160 GB drive, with / and everything else on the RAIDed drives?
Should I be looking at a more elaborate setup?
What is my worst single point of failure?

Is anyone familiar with NUT (Network UPS Tools) for monitoring an APC UPS, so the server can be brought down gracefully in an extended power outage?
Any guidance would be much appreciated, as this is my first RAID setup.

PS Just for added pressure, my paycheck will print off this system.
Or not.

tempestuous
Posted: Sun 07 Nov 2010, 09:25

My experience is in video production and video servers, so my comments carry no great weight for a file/mail server situation, but for what it's worth: I would install the Linux OS plus all applications on the non-RAID boot drive and keep the RAID array purely for data. Then configure each individual application (OpenX, Zimbra, etc.) to store its user data on the RAID.

Regarding partitioning on the boot drive: standard Puppy convention is a single ext3 or ext4 partition for Puppy (boot + /home + /), plus a Linux swap partition of about 1.5x the RAM size.
But Puppy will take up such a small part of your 160 GB drive that it would make sense to create at least one more partition for another Linux installation.
And once you get into a multi-boot situation, it's a good idea to have a separate boot partition. This should ideally be the very first partition on the drive, and it only needs to be small - say 50 MB. I would format the boot partition as ext3, as illustrated below.
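As an illustration of that layout on the 160 GB drive (device names and sizes are only suggestions) -
Code:
# /dev/sda1   ~50 MB       ext3   boot partition (first on the drive)
# /dev/sda2   ~1.5x RAM    swap
# /dev/sda3   remainder    ext3   Puppy (/ + /home)
# /dev/sda4   (optional)   ext3   spare partition for another Linux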
tempestuous
Posted: Wed 05 Jan 2011, 00:03

Here is the latest LVM2, a Logical Volume Manager for Linux.
The source code was obtained from
ftp://sources.redhat.com/pub/lvm2/

This is not directly associated with RAID systems, but it works in a similar fashion to dmraid. Like dmraid, it depends on the device-mapper library, which Puppy 5.1 already contains.

UPDATE Jan 29 2011
My LVM dotpet package has been upgraded to contain its own libdevmapper and libdevmapper-event libraries, exactly the same devmapper libraries as in my dmraid package earlier in this thread. So the two dotpet packages, LVM and dmraid, can coexist on the same Puppy installation.
Puppy 5.x is definitely compatible, and earlier Puppies might also be.

Instructions are here -
http://tldp.org/HOWTO/LVM-HOWTO/

This is an advanced tool. If you don't know what it is, you don't need it.
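For the curious, here is a minimal sketch of putting a single partition under LVM control; /dev/sdb1, the volume group name "vg0" and the volume name "data" are only examples -
Code:
pvcreate /dev/sdb1              # register the partition as a physical volume
vgcreate vg0 /dev/sdb1          # create a volume group on it
lvcreate -L 10G -n data vg0     # carve out a 10 GB logical volume
mkfs.ext3 /dev/vg0/data         # format it
mkdir /mnt/data
mount /dev/vg0/data /mnt/data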

UPDATE Aug 26 2014
The forum download link is broken. Groan.
The dotpet is now available here -
http://www.smokey01.com/tempestuous/LVM2.2.02.79.pet

gcmartin
Posted: Thu 13 Jan 2011, 00:56    Subject: Using LVM2

Thanks tempestuous for these needed tools
tempestuous wrote:
Here is the latest LVM2 ....
There are 2 PETs shown. I am looking at using LVM2 with two very different Puppy distros:
ttuuxxx's 4.3.2-SCSI and playdayz's Pup 5.2 liveCDs, where Puppy would give me diagnostic abilities for the LVM2 volumes that already exist. One set of systems has SCSI drives with LVM2 operational, and a second set has SATA LVM2 volumes.

Do I need both PETs for these distros?
Do I install the "libdevmapper" PET first or last?

Edit: Thanks again for the update. I am not running Puppy on RAID hardware. I do need the LVM2 items.

Thanks

_________________
Get ACTIVE Create Circles; Do those good things which benefit people's needs!
We are all related ... It's time to show that we know this!
3 Different Puppy Search Engine or use DogPile

tempestuous
Posted: Thu 13 Jan 2011, 06:32

Puppy 5.x already contains libdevmapper. Thus:

Puppy 5.2 requires just LVM2.2.02.79.pet

Puppy 4.3.2-SCSI requires LVM2.2.02.79.pet plus libdevmapper-1.02.60.pet
The order of installation makes no difference.

The LVM2 utility should work with IDE/SATA/SCSI.
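For the kind of diagnostics you describe, the usual starting point is to scan for existing physical volumes, volume groups and logical volumes, then activate whatever is found -
Code:
pvscan
vgscan
vgchange -ay
lvscan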
tempestuous
Posted: Sat 29 Jan 2011, 03:20

Thanks to some good testing by forum member CindyJ, the dmraid dotpet earlier in this thread is confirmed to work with bios-RAID devices, and I have just upgraded the dmraid dotpet and updated the instructions. It's a shame that the 150-or-so other people who earlier downloaded the package didn't offer such troubleshooting and assistance.

I have also updated the LVM dotpet so it contains matching libdevmapper libraries.
disciple
Posted: Sun 03 Apr 2011, 02:49

tempestuous wrote:
3. LINUX SOFTWARE RAID

If your RAID setup is running only under Linux, and there's no need to dual boot into Windows, this is the best RAID solution.

Is software RAID better than hardware RAID just because you don't need two RAID cards (one to use and one as a backup)?

_________________
DEATH TO SPREADSHEETS
- - -
Classic Puppy quotes
- - -
Beware the demented serfers!
tempestuous
Posted: Sun 03 Apr 2011, 03:18

Sorry, what I should have said was:
"... this is the better RAID solution than bios-RAID"

If you have a true hardware-RAID adapter, that is the very best solution. True hardware-RAID adapters are less common, and expensive.
And beware: many bios-RAID devices are incorrectly assumed to be hardware RAID.