Easy Containers for Puppy Linux

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#21 Post by jamesbond »

BarryK wrote:A useful read on abstract sockets, though it is unclear to me why they should be a security threat:

http://tstarling.com/blog/2016/06/x11-s ... isolation/
Interesting find. I found out that my Xorg has been listening on the abstract sockets too - it's just that I didn't notice until you brought it up!

In my previous test I used "unshare -piumUrfn --mount-proc", which is almost identical to yours, except that the extra "-n" enables network isolation as well; according to the article you linked, that is the only way to "hide" an abstract socket from within the container. No wonder my test didn't work (which I expected not to work). When I dropped that "-n", I got the same result as you - X apps start even when /tmp/.X11-unix/X0 is hidden.
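
A quick way to see which abstract sockets are in play (for example, whether Xorg is listening on one) is to read /proc/net/unix, where the kernel shows abstract names with a leading '@'. A minimal sketch (the function name is my own, Linux only):

```python
# Sketch: list abstract-namespace Unix sockets by parsing /proc/net/unix.
# Abstract socket names appear there with a leading '@', which is how the
# kernel renders the leading NUL byte of an abstract address.

def abstract_sockets(proc_path="/proc/net/unix"):
    """Return the names of abstract Unix sockets currently bound."""
    names = []
    with open(proc_path) as f:
        next(f)                         # skip the header line
        for line in f:
            fields = line.split()
            # The optional 8th field is the socket path, if bound.
            if len(fields) >= 8 and fields[7].startswith("@"):
                names.append(fields[7])
    return names

if __name__ == "__main__":
    for name in abstract_sockets():
        print(name)   # Xorg typically shows up as '@/tmp/.X11-unix/X0'
```

Running this on the host versus inside an "unshare -n" container shows the difference: with a new network namespace the host's abstract sockets disappear from the list.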

I think it's a security threat because:
a) you can't prevent access from within a standard chroot (you need network namespaces to disable it)
b) you can't control the permissions of the abstract socket
Which basically means that if you know the name of the socket, then **everybody** can connect.
Very bad. I should disable this immediately.
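
Point (b) is easy to demonstrate: an abstract socket has no inode, so there is no file mode for the kernel to check at connect time. A small sketch (the socket name is made up for the demo):

```python
# Sketch: any process in the same network namespace can connect to an
# abstract socket if it knows the name -- there is no filesystem node,
# hence no permission bits to enforce.
import socket

NAME = "\0demo-abstract-sock"   # leading NUL byte = abstract namespace

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(NAME)               # nothing appears in the filesystem
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(NAME)            # succeeds: no permission check at all
conn, _ = server.accept()
conn.sendall(b"anyone can talk to me")
print(client.recv(64).decode())
```

Contrast this with a path-based socket in /tmp/.X11-unix, where chmod on the socket file (and on the directory) can restrict who may connect.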
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

User avatar
technosaurus
Posts: 4853
Joined: Mon 19 May 2008, 01:24
Location: Blue Springs, MO
Contact:

#22 Post by technosaurus »

I wrote a whole analysis paper on Linux containers several years back (I can upload it if anyone is interested). Apart from isolation and security, they seem pretty nice for thin-client computing, because hundreds of clients can run the same executable without significantly increasing the memory or disk load (with BTRFS) thanks to copy-on-write.

When I was researching it, one of the things that seemed problematic was how to selectively share /dev/* among multiple users, and I don't know if that ever really got sufficiently resolved. I wanted to let thin clients mount their local USB drive on a remote server, inside a container. Then there were XDMCP and the Network Audio System - AFAIK there is still no way to map /dev/audio (etc.) to a network-based alternative within a container (for seamlessly running older apps without having to modify the code)... basically it's really good for hosting providers, but not so much for end users who actually do stuff, aside from the security and the portability of the container, and there are other ways to get those.

There is a really useful tool for packaging an executable with all its necessary files, called Magic Ermine. It is proprietary, but the developer (I think her name is Valery?) was really supportive and even offered free licenses and hosting for open-source projects. She has a similar open-source project called Statifier on SourceForge. This is similar to Flatpak (formerly xdg-app), Snappy or AppImage... also similar to ROX apps, except one file instead of one directory. I don't see any reason why these cannot be combined with containers or even extended further.

Going back to thin clients (now "cloud computing") is starting to make sense again, because network speeds have started to rival disk speeds, and RAM is getting cheap enough that a fairly basic server can keep all applications loaded in RAM and serve them up faster than a local disk (which on some newer ARM-based machines is as small as 512 MB of flash).

Now that there are computers for under $10 with sufficient processing power, it is probably better to create/modify a simple caching network filesystem, using something like zram swap over TCP to a precompressed, completely installed distro. That way the client distro only needs enough infrastructure to connect to the internet and handle the caching filesystem, so it will appear to have every single {Debian,Arch,Fedora,...} package installed, but only need about a floppy's worth of storage to boot... as a bonus, updates are handled automatically by the caching filesystem, just like web-page caching (only the server needs to update them). Doing this kind of system with containers, or with single-file formats like Snappy or Flatpak, makes even more sense, because there are fewer, more compressible files to manage...
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].

amigo
Posts: 2629
Joined: Mon 02 Apr 2007, 06:52

#23 Post by amigo »

Barry, anything that goes through X in any way is a security threat, because the server runs suid. Anyone who can crash that server with malformed input, or exploit a known bug, can get root access.

Really, containers are just a variation on the idea of chroot. And overlayfs is simply a Linus Torvalds-approved way of implementing a filesystem 'union'. He always said that the only way the concept would be accepted into the mainline kernel was as a 'stackable' filesystem.

User avatar
BarryK
Puppy Master
Posts: 9392
Joined: Mon 09 May 2005, 09:23
Location: Perth, Western Australia
Contact:

#24 Post by BarryK »

I have created Easy Linux, version 0.2 pre-alpha, a first play at a "container-friendly" OS:

http://murga-linux.com/puppy/viewtopic.php?t=109958

Have been traveling, and doing other things, so I haven't had time to think further about the container issues raised in this thread. That shm Xorg crash is still unresolved.
[url]https://bkhome.org/news/[/url]

User avatar
rufwoof
Posts: 3690
Joined: Mon 24 Feb 2014, 17:47

#25 Post by rufwoof »

technosaurus wrote:I wrote a whole analysis paper on linux containers several years back (can upload if anyone is interested) Apart from isolation and security, they seems pretty nice for thin client computing because hundreds of clients can run the same executable without significantly increasing the memory load or disk load (with BTRFS) due to copy on write.
Apparently (my knowledge is near zero) with BTRFS you can create/use sub-volumes
A subvolume in btrfs can be accessed in two ways:

like any other directory that is accessible to the user

like a separately mounted filesystem (options subvol or subvolid)

In the latter case the parent directory is not visible or accessible. This is similar to a bind mount, and in fact the subvolume mount does exactly that.
Over on the Debian forum:
I can test installations of any Linux distro in the same partition, making use of subvolumes. You won't need VirtualBox again to test new distros.

User avatar
BarryK
Puppy Master
Posts: 9392
Joined: Mon 09 May 2005, 09:23
Location: Perth, Western Australia
Contact:

#26 Post by BarryK »

Easy Containers is continuing to evolve, see blog post:

http://bkhome.org/news/201805/easyos-py ... n-091.html

Now supporting Linux Capabilities, for improved security.
[url]https://bkhome.org/news/[/url]

User avatar
BarryK
Puppy Master
Posts: 9392
Joined: Mon 09 May 2005, 09:23
Location: Perth, Western Australia
Contact:

#27 Post by BarryK »

jamesbond wrote:I can reproduce the BadShmSeg error. This is how I did it:
1. I ran "unshare -piumUrfn --mount-proc" in a terminal (this launches the "container", but shares the filesystem with the host).
2. Inside that terminal I then launched geany (or anything else).
3. Then I got this:

Code:

The program 'geany' received an X Window System error.
This probably reflects a bug in the program.
The error was 'BadShmSeg (invalid shared segment parameter)'.
  (Details: serial 2165 error_code 128 request_code 130 minor_code 3)
  (Note to programmers: normally, X errors are reported asynchronously;
   that is, you will receive the error a while after causing it.
   To debug your program, run it with the --sync command line
   option to change this behavior. You can then get a meaningful
   backtrace from your debugger if you break on the gdk_x_error() function.)
Subsequent invocations of geany (or any other X program) will work without error.

I have an explanation: the modern X server tries to enable shared-memory support for X clients, for performance. Since we specified the "-i" switch (= don't share IPC, including shared-memory segments), shared memory created by the X server on the host cannot be accessed by X clients in the container. Thus it fails. Now my guess: when this fails, some flag is set (perhaps in the X server itself?), so that subsequent X clients access the server without using shared memory anymore. To see the list of shared-memory segments, run "ipcs". Run this on the host, and in the container, to see the difference.
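
For comparison, the SysV shared-memory table that "ipcs -m" reports (and that MIT-SHM allocates from) can be read directly from /proc/sysvipc/shm. A small sketch, with my own function name, to run on the host and inside the container:

```python
# Sketch: list SysV shared-memory segments, roughly what "ipcs -m" shows,
# by parsing /proc/sysvipc/shm. Inside a container with an unshared IPC
# namespace ("-i"), the host's segments will not appear here.

def shm_segments(proc_path="/proc/sysvipc/shm"):
    """Return (shmid, size, nattch) for each SysV shared-memory segment."""
    segments = []
    with open(proc_path) as f:
        header = f.readline().split()       # column names: key shmid perms ...
        for line in f:
            row = dict(zip(header, line.split()))
            segments.append((int(row["shmid"]),
                             int(row["size"]),
                             int(row["nattch"])))
    return segments

if __name__ == "__main__":
    for shmid, size, nattch in shm_segments():
        print(f"shmid={shmid} size={size} attached={nattch}")
```

An empty list inside the container, with Xorg's segments visible on the host, is exactly the mismatch that makes the first MIT-SHM attempt fail.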
I am playing with 'Pflask', which is a single C executable, a kind of "secure chroot". It defaults to isolating all the namespaces. I ran into this same problem: an X app crashed the first time, and after that they work.

However, the error message was not specifically "BadShmSeg"; it was "BadAccess (attempt to access private resource denied)". But the same fix - not unsharing IPC - resolves it.

I posted about this to the pflask github site:

https://github.com/ghedo/pflask/issues/26
[url]https://bkhome.org/news/[/url]
