Beyond PXE - Puppy Network booting

Under development: PCMCIA, wireless, etc.
Aitch
Posts: 6518
Joined: Wed 04 Apr 2007, 15:57
Location: Chatham, Kent, UK

#16 Post by Aitch »

gcmartin

I got impatient...ha ha :wink:

http://murga-linux.com/puppy/viewtopic. ... 955#499955

More PR, isn't it?

Aitch :)

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#17 Post by jamesbond »

Dutchman again, yes, that's one of the ideas.

mhanif - "virtual remastering" is functionally equivalent of "standard puppy + savefile". It's better to use the "puppy + savefile" because you save space - only the changed / modified files needs to be kept, instead of the entire pup.sfs

NFS can export multiple directories, no problem. The problem with NFS is security - traditionally, NFS security has been based on IP address - not something you want to live with in this age. There are ways to tie NFS security to Kerberos, apparently, but honestly I have no idea how to do this - at either the server level or the client level.

Does anyone think the standard puppy setup is secure enough to run sshd? It's very easy to set up a "regular" user on puppy for ssh purposes, but I'm not sure whether puppy is locked down enough when running as that user (i.e. so that user can't wipe important directories, for example). Running a chroot-ed sshd sounds better - I need to explore this dropbear.
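For the chroot route, the jail itself can be tiny - something along these lines (paths and the user name are invented; an untested sketch, not a recipe):

# minimal chroot jail for the ssh user - busybox provides the shell
mkdir -p /jail/bin /jail/dev /jail/home/spot
cp /bin/busybox /jail/bin/
ln -s busybox /jail/bin/sh
# about the only device node the jail needs
mknod -m 666 /jail/dev/null c 1 3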

The Terminal Server (TS) concept looks interesting, but as I said above, it's difficult to do using puppy as the server. As a client - no problem, we have loads of TS clients: vnc, rdesktop, and others. As a server - well, you need a multi-user puppy, and we are nowhere near that.

gcmartin - thanks for the info on Edubuntu. Does the setup you mention run out of the box? If I download the Edubuntu LiveCD, will all that you mention work straight away - no special installation process necessary (e.g. go to synaptic, download these packages, install this script, edit this config file, change those settings ...)? If yes, that would be great!

NBD is there mostly for performance reasons. Problems with NBD is similar NFS - it has no security. And to make it worse, it can only export one block device per service - thus if you want to server 10 different users, you need 10 instances of NBD server. If the server is light-weight enough, this probably isn't a problem. Using a combination of ssh and nbd, we can explore the possibility of starting an NBD server when the remote user logs in from their ssh client. Boot process continues with the client PC mounting the save-file over NBD.
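To illustrate the one-export-per-service limitation, roughly (IP, ports and file names are made up):

# on the server: one nbd-server instance per user, each on its own port
nbd-server 10001 /home/alice/pupsave.2fs -C /dev/null
nbd-server 10002 /home/bob/pupsave.2fs -C /dev/null

# on each client: attach to your own port, then mount the savefile
nbd-client 192.168.1.5 10001 /dev/nbd0
mount /dev/nbd0 /mnt/pupsave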

Good ideas everyone, thanks for sharing. Anyone else?
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#18 Post by jamesbond »

Following up on the concept outlined earlier, I've got chroot-ed dropbear to work. Combined with sftp-server from openssh, it serves sshfs smoothly with a non-root user id. I was thinking of using NBD - but why bother; just put a symlink to the pup.sfs in users' home directories and let dropbear serve both the pup.sfs and pupsave.sfs.
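The server-side layout is then roughly this (jail paths invented; the symlinks are relative so they still resolve inside the chroot):

# one shared pup.sfs inside the dropbear jail, one symlink per user home
ln -s ../../pup.sfs /jail/home/alice/pup.sfs
ln -s ../../pup.sfs /jail/home/bob/pup.sfs
# each user's own pupsave sits next to it, e.g. /jail/home/alice/pupsave.sfs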

All that is needed is plumbing scripts - the server side is about 1.6M (1.1M of which is a full busybox, which we can definitely cut down). The sshfs client side (not including busybox and network drivers) is 2.5M, and that is because I'm too lazy to convert those glib dependencies to uclibc instead (glib+libc=2M).
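What the client-side plumbing boils down to, roughly (user, IP and paths are assumptions):

# mount the user's chroot-ed home over sshfs
sshfs alice@192.168.1.5: /mnt/ssh -o reconnect
# loop-mount the shared base read-only and the savefile read-write
mount -o loop,ro /mnt/ssh/pup.sfs /pup_ro
mount -o loop /mnt/ssh/pupsave.sfs /pup_rw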

I think we can do the same with cifs. The client-side requirement for cifs will be very minimal (cifs is a kernel module, and a static mount.cifs is only 70k), as compared to sshfs. The samba server component, on the other hand ... and I don't think it's that easy to make samba run under chroot (simply because there are too many libs?).
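For comparison, the whole cifs client side is more or less one command (server, share and credentials are examples):

# kernel cifs plus a static mount.cifs is all the client needs
mount.cifs //192.168.1.5/puppy /mnt/cifs -o user=spot,pass=woof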

Hmmm. Must consider performance also. Which one is faster, sshfs or samba?
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#19 Post by jamesbond »

Some teasers :)
I did this with Samba for reasons I already said above.

Step 1 - After PXE booting - stopped, waiting for network credentials to connect to the samba server (not puppy's credentials - puppy always runs as root).
Step 2 - After entering credentials (spot) and setting up all the unionfs layers, just before switch_root.
Step 3 - After switch_root and executing /etc/rc.d/rc.sysinit, stopped in the terminal before going to the X desktop.
Step 4 - Within the X desktop. Mount shows all the mountpoints. I'm using a 128MB pupsave for this experiment.

This is an implementation of method 3 (please refer to the first post). Summary: PXE booting, with pup.sfs and pupsave.sfs over cifs. Persistence is over the network - users can log in to any PC and will see their own desktop just the way they left it. Puppy runs as root as usual, but access to cifs is governed by a network username/password separate from puppy's root account. Everybody sees only their own pupsave file and cannot access or mess with the others. If more security is required, an encrypted pupsave can be used (I didn't do this in the experiment).
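In very rough outline, the boot plumbing looks like this (a conceptual sketch only - names, paths and the switch_root details are simplified; this is not the actual init script):

# inside the initramfs, after asking the user for network credentials
mount.cifs //server/puppy /cifs -o user=$NETUSER,pass=$NETPASS
mount -o loop,ro /cifs/pup.sfs /pup_ro              # shared base layer
mount -o loop /cifs/$NETUSER/pupsave.sfs /pup_rw    # per-user writable layer
mount -t unionfs -o dirs=/pup_rw=rw:/pup_ro=ro none /new_root
exec switch_root /new_root /sbin/init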
Attachments
step1.png (162.11 KiB)
step2.png (163.86 KiB)
step3.png (167.23 KiB)
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#20 Post by jamesbond »

Same as above, but running under sshfs (under chroot-ed dropbear). Feels a tad slower - maybe because of the encryption overhead.
Attachments
step5.png (142.23 KiB)
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

gcmartin

Edubuntu

#21 Post by gcmartin »

Your question on Edubuntu - answer: YES. (Sorry for the delay. Day job!)

Pluses:
  • Loads of documentation
  • Lots of assistance across the spectrum of users - educational users as well as product specialists.
This area seems to attract a very, very large community of people helping.

It's an out-of-the-box implementation. All you need is some HDD space to house the filesystems that will be built (we used LVMs).
(I think you understand why LVMs are important in areas where there is unexpected, uncontrolled growth in user needs.)

With the expertise I've witnessed from your netbook understanding, this is a no-brainer for you. I expect more of your time will go into the initial reading than into the actual download, setup and use. It's a guided approach. LTSP PCs on the LAN are dumb. On average about 50+ simultaneous users, out of a defined user base of 200+ people, run on a 4GB RAM dual-Xeon server, and it is currently nowhere near max. Edubuntu comes with a full complement of Office/Classroom tools for students and faculty. Expansions are planned this summer.

We are trying to determine if we can securely allow off-site users direct access from the internet. Right now, we are using a Microsoft Terminal Server as a helper to get internet user connections to the Edubuntu ID.

Hope this helps.
P.S. You are absolutely right about Puppy not being an attractive platform for something like this.

Master_wrong
Posts: 452
Joined: Thu 20 Mar 2008, 01:48

#22 Post by Master_wrong »

jamesbond wrote: Hmmm. Must consider performance also. Which one is faster, sshfs or samba?
from
http://www.saltycrane.com/blog/2010/04/ ... fs-ubuntu/

it seems samba and nfs are faster:
I'm no expert, but from what I've gathered, sshfs is faster than WebDAV and slower than Samba and NFS. However, Samba and NFS are typically more difficult to set up than sshfs.
so this leaves the question of which is faster... nfs or samba.
From this link I assume nfs is faster:
http://forums.whirlpool.net.au/archive/701909
From my experience NFS is a much faster protocol than SMB. So moving large amounts of data around is going to be better with NFS.

Using Kerberos in conjunction with NFS will make it just as secure as SMB, although kerberised NFS can be a pain to setup.

and here are the test results from another site.
According to his tests:
SMB : 9.6 MB/s
NFS (native QNAP) : 8.8 MB/s
UNFS (ipkg) : 16 MB/s
http://www.mpcclub.com/forum/showthread.php?t=21484
Cluster-Pup v.2-Puppy Beowulf Cluster
[url]http://www.murga-linux.com/puppy/viewtopic.php?p=499199#499199[/url]

gcmartin

SAMBA difficulty

#23 Post by gcmartin »

Master_wrong wrote: ...
I'm no expert, but from what I've gathered, sshfs is faster than WebDAV and slower than Samba and NFS. However, Samba and NFS are typically more difficult to set up than sshfs.
The decision to use either of these rests on whether there is a Microsoft presence in your network and among its users. Every Microsoft PC is built with SMB/CIFS. None of them come out of the box with NFS. Every MS user knows Network Neighborhood; none know anything about NFS or troubleshooting it or ... Because 99.44% of this earthly world is Microsoft, this is helpful knowledge, especially in light of MS coming out with a personal hand-held running an MS OS.

SAMBA is NOT very hard to set up. The biggest issue is what role you want your SAMBA PC to play on the network. It can be something very, very simple, like sharing a folder or a printer, all the way up to controlling every PC on your network.
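At the very simple end, a share is only a handful of lines in smb.conf (share name, path and user are examples):

[global]
   workgroup = WORKGROUP
   security = user

[puppy]
   # one folder shared to one authenticated user
   path = /srv/puppy
   valid users = spot
   read only = no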

Now, let's set the FS discussion aside for a moment, and focus on a definition of what a TS system (server and its clients) should be doing. When we are discussing here, we should identify which of the following we are discussing:
  1. If we are trying to do a TS, then our discussion is around how we intend to get the clients set up to access and run ON THE SERVER. We might call this Real TS.
  2. If we are trying to give isolated users on our LAN who have PXE boot and are running a desktop OS a means to have protected storage, that is a completely different thing. We might call this Extended PXE.
And there may be other implementation subsets, too.

Distributed File Systems (i.e. NFS, CIFS, DFS, etc.) are a separate discussion from getting something operational, and certainly separate from a TS, where none of this is needed except to help get a thin client on the air.

Let's not miss the forest for the trees that we readily see.

Hope this helps.
Last edited by gcmartin on Thu 10 Mar 2011, 05:49, edited 1 time in total.

gcmartin

One Linux Terminal Server

#24 Post by gcmartin »

Those interested in experiencing a Linux TS, go here:
Please take a moment to help the community by leaving them a response about your experience. This helps all of us.

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#25 Post by jamesbond »

Final teaser: sshfs with nbd.
step6 image - just after booting, stopped at the command line.
step7 image - within the X desktop, showing the mount points - pup.sfs is mounted read-only over nbd, pupsave is mounted over sshfs.
Boot speed - I didn't time it, but it feels faster than sshfs or cifs alone.
All servers are chroot-ed except dnsmasq - me too lazy :)
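Client-side, the hybrid looks roughly like this (IP, port and paths are assumptions):

# shared pup.sfs, read-only, over nbd - one server instance for everyone
nbd-client 192.168.1.5 9000 /dev/nbd0
mount -o ro /dev/nbd0 /pup_ro
# per-user pupsave over sshfs, authenticated as the network user
sshfs alice@192.168.1.5: /mnt/ssh -o reconnect
mount -o loop /mnt/ssh/pupsave.sfs /pup_rw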
Attachments
step6.png (24.18 KiB)
step7.png (140.48 KiB)
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#26 Post by jamesbond »

Here's an idea.

I have an MBWE (that is, a Western Digital MyBook World Edition - a simple NAS of sorts). By default it serves cifs (well, it's a NAS for home users - and I really mean *home* users; it's painfully slow serving cifs). BTW people say the newer version is *much* faster, but I don't have that.

The good thing about the MBWE is that it runs Linux, and it's rather hackable - in fact, there is a whole community whose purpose is to transform the MBWE from just a humble NAS into all sorts of things.

One only needs to add dnsmasq (available in optware) for PXE and nbd (must compile) - and then add the recipe from this thread - to get a fully functioning, multi-user puppy boot server.
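The dnsmasq side of it is only a few lines of config (interface, address range and paths are examples):

# /etc/dnsmasq.conf - DHCP plus built-in TFTP for PXE boot
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp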

cheers!
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

rufwoof
Posts: 3690
Joined: Mon 24 Feb 2014, 17:47

NBD

#27 Post by rufwoof »

jamesbond wrote: NBD is there mostly for performance reasons. The problem with NBD is similar to NFS - it has no security. And to make it worse, it can only export one block device per service - thus if you want to serve 10 different users, you need 10 instances of the NBD server. If the server is light-weight enough, this probably isn't a problem.
Old thread/post I know - stumbled across it whilst looking for where to post an x-ref for http://murga-linux.com/puppy/viewtopic. ... 78#1039578
You can have multiple clients served from a single nbd server instance - provided you're content to use copy-on-write, where each user sees the same unchanging base set of files but can individually edit those files, with the changes stored in separate diff files. Like an sfs layered setup where the changes are lost at shutdown/disconnect and each user/client sees only their own changes.

Yes, the server would have to run an instance for each user if changes were to be saved back onto the server system. But if changes are stored locally then the server can be just a single instance.

Consider for example where I create an sfs of a documents folder on my server box 192.168.1.5:

cd /mnt/sda1
mksquashfs docs docs.sfs -b 4096

Using a block size of 4K here, as that's the minimum for mksquashfs whilst it's the maximum for nbd.

I then serve that up using nbd

nbd-server 9000 /mnt/sda1/docs.sfs -c -C /dev/null

On a client I use nbd-client to link that server process to a device

nbd-client 192.168.1.5 9000 /dev/nbd0

and then mount that sfs

mkdir /tmp/sfs
mount /dev/nbd0 /tmp/sfs

and create an aufs structure for that so changes can be recorded:
mkdir /root/changes
mkdir /tmp/top
mount -t aufs -o br=/root/changes:/tmp/sfs none /tmp/top

So when I open rox on /tmp/top and make changes to files there (the sfs content), the changes are stored in /root/changes (which here I assume remains persistent, so prior changes remain available after a reboot).

When done, I umount /tmp/top and /tmp/sfs ... and if I later re-establish that, the prior changes will still be available in /root/changes, i.e. all changes are recorded on the local system. Multiple users can use a similar arrangement on their own boxes, all sharing the single core sfs, but with each user's changes stored locally.

Only if I want those changes stored back on the server would another nbd-server port be required, one for each user, in order to hold the changes folder content on the server. But equally that could be done via sftp, scp or some other transfer method.

In practice, there's not much point in actually serving up a main sfs on the server; the only benefit is that, being compressed, files will be quicker to read across the LAN. If an actual volume, or a filesystem-in-a-file, is served along with copy-on-write, that's the same as being read-only anyway. And a filesystem-in-a-file or actual volume is easier to update, compared to having to unsquashfs, make the changes and mksquashfs a new version of the sfs in order to update it.

Whichever is used, the main core documents will be safe from the likes of being wiped out or modified by ransomware, as they're read-only and on another box. Only the changes (such as the /root/changes folder content) might be destroyed/encrypted/changed - but those changes might equally be destroyed/encrypted/changed if they were stored in a rw folder on the server.
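A sketch of that alternative (file name and size invented): serve an uncompressed ext2 image with nbd copy-on-write, so clients can even mount it rw - their writes land in per-connection diff files on the server and vanish at disconnect:

# build and populate the image on the server
dd if=/dev/zero of=/mnt/sda1/docs.ext2 bs=1M count=512
mkfs.ext2 -F /mnt/sda1/docs.ext2
mkdir -p /mnt/img
mount -o loop /mnt/sda1/docs.ext2 /mnt/img
cp -a /mnt/sda1/docs/. /mnt/img/
umount /mnt/img
# -c = copy-on-write: the served image itself is never modified
nbd-server 9001 /mnt/sda1/docs.ext2 -c -C /dev/null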
[size=75]( ͡° ͜ʖ ͡°) :wq[/size]
[url=http://murga-linux.com/puppy/viewtopic.php?p=1028256#1028256][size=75]Fatdog multi-session usb[/url][/size]
[size=75][url=https://hashbang.sh]echo url|sed -e 's/^/(c/' -e 's/$/ hashbang.sh)/'|sh[/url][/size]
