Fatdog64-620 Final (17 April 2013) and 621 (9 May 2013)

A home for all kinds of Puppy related projects
Message
Author
heywoodj
Posts: 85
Joined: Sun 15 Mar 2009, 04:39

#241 Post by heywoodj »

heywoodj wrote:
Well this wifi problem is turning out to be bigger than I thought at first, as I'm unable to connect to a second wireless router...
After trying to connect to a couple more routers as I mentioned here, http://murga-linux.com/puppy/viewtopic.php?t=86425, I concluded that in fact the wireless card was working, but in a very inadequate way. The maximum reliable connection range is about 15 ft (~5 m) and less if walls/floors are involved.

I don't think I've ever had such poor wireless performance. I've always had acceptable wireless performance on all other Puppies up to now.

Is there anything to boost the performance close to that achieved by running this laptop on Win8?

For the record, the internal card is an Atheros AR9285 using the ath9k driver.

User avatar
Ted Dog
Posts: 3965
Joined: Wed 14 Sep 2005, 02:35
Location: Heart of Texas

Zombie whiteout files

#242 Post by Ted Dog »

Starting at line 739 of system-init:

Code: Select all

						fname="${pp%.wh.*}${pp#*.wh.}"	# without the .wh. part
						if [ -e "$fname" ]; then		# if the file exist, then ...
							rm -rf "$fname" > /dev/null	# delete the file
							rm -rf "$pp" > /dev/null	# also delete the whiteout
						fi
					done
Files deleted are not being written to the save file, but the whiteout hidden file is. However, if the file is not found, the whiteout hidden file remains behind. If a file with the same name is later recreated, would it not be re-deleted (with the whiteout finally being removed)? This is exactly the type of behavior I'm experiencing: it takes two saves to make changes stick.

Sage
Posts: 5536
Joined: Tue 04 Oct 2005, 08:34
Location: GB

#243 Post by Sage »

don't think I ever had such poor wireless performance
Sounds like a confusion between SW, FW & HW. Try opening up the router and inspecting for bad caps? Try a different filter? Other than that, re-site your router according to the (de)structions that came with it. Place the device as close as possible to the master socket and run cables from there for your wired connections. It's permissible to re-site the master socket, eg into the loft, before running some cabling downstairs. Knock down a few structural walls to improve the wireless signal?!

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

Re: Zombie whiteout files

#244 Post by jamesbond »

Ted Dog wrote:Starting at line 739 of system-init:

Code: Select all

						fname="${pp%.wh.*}${pp#*.wh.}"	# without the .wh. part
						if [ -e "$fname" ]; then		# if the file exist, then ...
							rm -rf "$fname" > /dev/null	# delete the file
							rm -rf "$pp" > /dev/null	# also delete the whiteout
						fi
					done
Files deleted are not being written to the save file, but the whiteout hidden file is. However, if the file is not found, the whiteout hidden file remains behind. If a file with the same name is later recreated, would it not be re-deleted (with the whiteout finally being removed)? This is exactly the type of behavior I'm experiencing: it takes two saves to make changes stick.
Thanks Ted Dog, I think you have a point. This would only happen in one particular scenario - when you delete a file that exists in a lower SFS layer and later re-create it (e.g. replacing /bin/bash with an updated version). I guess the solution is to check whether the file is newer than the whiteout; if it is, then delete the whiteout, otherwise delete both of them.

EDIT: Replace those lines with this:

Code: Select all

						if [ -e "$fname" ]; then		# if both file and whiteout exist, then ...
							if [ "$pp" -nt "$fname" ]; then # delete the older one
								rm -rf "$fname" > /dev/null	# delete the file if older
							else
								rm -rf "$pp" > /dev/null	# otherwise delete the whiteout
							fi
						fi
Also change line 1041 and 1042

Code: Select all

[ "$MULTI_MOUNT" ] && BRANCHES=$MULTI_MOUNT:$BRANCHES 
[ "$SAVEFILE_MOUNT" ] && BRANCHES=$SAVEFILE_MOUNT:$BRANCHES 
with this

Code: Select all

[ "$MULTI_MOUNT"    ] && BRANCHES=$MULTI_MOUNT=ro+wh:$BRANCHES
[ "$SAVEFILE_MOUNT" ] &&
if [ "$TMPFS_MOUNT" ]; then BRANCHES=$SAVEFILE_MOUNT=ro+wh:$BRANCHES
else BRANCHES=$SAVEFILE_MOUNT:$BRANCHES
fi
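As a side note, the `-nt` (newer-than) comparison that drives the fix can be demonstrated in isolation. This is a purely illustrative sketch, using made-up scratch files in /tmp to stand in for the real whiteout and its target:

```shell
#!/bin/sh
# Illustrative only: shows the semantics of the -nt test used in the fix.
wh=/tmp/demo.wh.file            # stand-in for the whiteout
f=/tmp/demo.file                # stand-in for the recreated file
touch "$wh"
sleep 1
touch "$f"                      # recreated later, so it is newer than the whiteout
if [ "$wh" -nt "$f" ]; then
    echo "whiteout is newer: the file was deleted, keep the whiteout"
else
    echo "file is newer: it was recreated, so the whiteout should go"
fi
rm -f "$wh" "$f"
```

Since the whiteout is created at deletion time and the file is recreated afterwards, the `-nt` check distinguishes the two cases by timestamp alone.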
heywoodj,
"iwconfig" command can be used to adjust the radio power of your wireless, although it doesn't always work.

cheers!
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

User avatar
Ted Dog
Posts: 3965
Joined: Wed 14 Sep 2005, 02:35
Location: Heart of Texas

#245 Post by Ted Dog »

Thanks for looking into the zombie whiteout possibility. Your code, once located, is so easy to follow that I'm encouraged to try my hand at labelled save-session control (allowing the user to select which sessions to skip or group).
However, the remaster process goes really slowly due to my CPU's weak compression/decompression. Would there be a way to take the whole set of files in tmp during remaster, before the SFS compression, and gzip them into the initrd without an SFS file? That would give a super-huge initrd (actually only 20-25% larger, since it is still gzipped), so the system would run uncompressed in RAM as a remaster option for machines with massive amounts of RAM. You could check the disk usage of the full filesystem and report it before compression. I still have 7 GB of free RAM on a fully decompressed Fatdog64. Let's make this fat puppy sing.... :wink:

User avatar
Ted Dog
Posts: 3965
Joined: Wed 14 Sep 2005, 02:35
Location: Heart of Texas

Too FAT TOO FURRY-us total raw speed remaster option

#246 Post by Ted Dog »

Furiously fast program execution, when running from RAM in an uncompressed state.
I made this option for myself to speed up testing during my early days with Puppy Linux, so it's doable. As a rule of thumb, the system should have RAM equal to 9x the size of the gzipped initrd to run nicely, i.e. to make large changes, rezip, reburn, etc. For the average user, 4x RAM is enough, assuming no plans to remaster in RAM, etc.
For example, Fatdog64 would be around 285 MB, so a machine with 2.8 GB or more of RAM would be supported nicely, and it would still be usable, with some headroom restrictions, down to around 1.2 GB of RAM.

WARNING: the speed improvement is so drastic you can never be happy going back... Like driving a very fast car, a true sports car, you will never be fully satisfied with anything less..... It's addictive.

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#247 Post by jamesbond »

That is a fun idea, I can't resist :lol:
Here's one way you can do it.
1. Open a terminal (call this terminal #1), launch sandbox.sh, and choose the layers you want to include in the "remaster". You will need to include all the bottom 3 layers (pup_init, pup_ro and kernel-modules).
2. Open another terminal (call this terminal #2)
3. Run the following script inside terminal #2 - important: in terminal #2, NOT in the sandbox itself.
4. You will get initrd.gz in /tmp
5. I assume you know what to do with that initrd.gz :)
6. Close terminal #1 and #2.

The resulting initrd booted on my 2GB virtualbox instance, so it's not too bad 8)
Expect some scripts to fail, because when run this way you're actually running without a basesfs (all the files are in the initrd).

Code: Select all

#!/bin/sh
# undo sandbox modifications
cd /mnt/fakeroot
cp /etc/profile etc/profile
cp /etc/shinit etc/shinit
cp /usr/bin/xwin usr/X11R7/bin/xwin
cp /usr/bin/wmexit usr/X11R7/bin/wmexit
cp /usr/bin/X usr/X11R7/bin/X
rm etc/BOOTSTATE

# prepare for bootup
ln -s sbin/system-init init
ln -s usr/share/blank.sfs kernel-modules.sfs
sed 's/ mount / busybox mount /; s/^mount /busybox mount /; s_cp -a /bin _cp -a /bin /lib64 /libexec /archive _' init > sbin/system-init.new
mv sbin/system-init.new sbin/system-init
chmod +x sbin/system-init

# generate new giant initrd
find | grep -Ev '^./usr/lib/|^./usr/X11R7/lib/|^./dev/|^./tmp/|^./proc/|^./sys/|^./aufs/' | cpio -o -H newc | gzip > /tmp/initrd.gz
EDIT: code should cd to /mnt/fakeroot, not /mnt/sandbox.
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#248 Post by jamesbond »

Here is another idea how you can do it.
Instead of making squashfs of the root filesystem, you could create an ext2 image file large enough to hold the stuff (about 700MB at the moment), and put the content of squashfs there, uncompressed. Name the image file as "fd64-620.sfs" and put it in initrd, and then "cpio -o -H newc | gzip" as usual to produce the initrd stuff.

This is "more compatible" than the above because it still runs with a basesfs, but the basesfs is uncompressed. The entire initrd is still compressed, but it will be decompressed only once during boot-up. In my brief test this seems to give better boot speed.

Note: just because the extension is "SFS", it doesn't mean it has to be squashfs file :wink:
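A quick way to see this: the image's actual type is decided by its magic bytes, not its file name. A purely illustrative sketch (the demo path and the faked magic are made up for the example; little-endian squashfs images begin with the four bytes "hsqs"):

```shell
#!/bin/sh
# Illustrative only: check the magic bytes instead of trusting the extension.
image=/tmp/demo.sfs
printf 'hsqs' > "$image"            # fake a squashfs magic just for this demo
magic=$(head -c 4 "$image")
if [ "$magic" = "hsqs" ]; then
    echo "$image: squashfs image"
else
    echo "$image: not squashfs (could be ext2, as in the trick above)"
fi
rm -f "$image"
```

The loop-mount code only cares about what mke2fs (or mksquashfs) actually wrote into the file, so renaming an ext2 image to .sfs works fine.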

Instruction:
1. Open a terminal (call this terminal #1), launch sandbox.sh, and choose the layers you want to include in the "remaster". Only include the pup_ro layers onwards; *DO NOT* include the bottom 2 layers (pup_init and kernel-modules).
2. Open another terminal (call this terminal #2)
3. Type the commands in the following script one-by-one inside terminal #2 - important: in terminal #2, NOT in the sandbox itself.
4. Replace /path/to/original/initrd with the path to your original initrd. Its contents will be changed, so make sure you have a handy backup copy, and it must be on a writable medium.
5. Replace /tmp/initrd-XXXXX with the random path generated by filemnt.
6. At the end of the script, your /path/to/original/initrd will have been updated, and you can then use it for making an iso, etc.
7. Close terminals #1 and #2.

Code: Select all

#!/bin/sh
# undo sandbox modifications
cd /mnt/fakeroot	# work from the sandbox root (assumed /mnt/fakeroot, as in the previous script)
cp /etc/profile etc/profile
cp /etc/shinit etc/shinit
cp /usr/bin/xwin usr/X11R7/bin/xwin
cp /usr/bin/wmexit usr/X11R7/bin/wmexit
cp /usr/bin/X usr/X11R7/bin/X
rm etc/BOOTSTATE

# make the imagefile
head -c 700M /dev/zero > /tmp/fd64-620.sfs
mke2fs /tmp/fd64-620.sfs

# mount the image file
mount -o loop /tmp/fd64-620.sfs /mnt/data

# copy the files
find | grep -Ev '^./usr/lib/|^./usr/X11R7/lib/|^./dev/|^./tmp/|^./proc/|^./sys/|^./aufs/' | cpio -p /mnt/data

# umount the imagefile
umount -d /mnt/data

# open the original initrd
filemnt /path/to/original/initrd

# replace the fd64-620.sfs inside the opened initrd with the new image
cp /tmp/fd64-620.sfs /tmp/initrd-XXXXX # where XXXXX is the location in which filemnt has opened the initrd

# re-pack initrd
/tmp/initrd-XXXXX/repack-initrd # you will end up with 700+ MB of initrd, in the original /path/to/original/initrd location

# gzip the initrd 
gzip /path/to/original/initrd
EDIT: Typed the script but forgot to include the instruction, doh!
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

gcmartin

#249 Post by gcmartin »

This is a great idea for a "local" adaptation of FD. It offers the creation of an ISO by the local user which, once booted, will run the system minus the overhead of compression.

Excellent insight from @TedDog and implementation examples from @JamesBond.

I wonder if this could be a future addition to the general system or an option to the Remaster tool.

I do understand the RAM impact, but the performance has a positive impact on the user experience. Would this qualify as a "Real-Time" category?

Thanks.

User avatar
Ted Dog
Posts: 3965
Joined: Wed 14 Sep 2005, 02:35
Location: Heart of Texas

#250 Post by Ted Dog »

It should also be possible without having to rebuild it uncompressed: if the entire contents of the regular SFS were copied into RAM (decompressing) just prior to switch-root, the SFS could then be deleted - or, in my case, just remain behind to aid compatibility, since I've got the RAM room to spare. RAM would need to be 10x the size of the compressed SFS to function without issue.
When I did this years ago I used 10x the size as the warning level: since I found multiplying by 10 easier to script than multiplying by 9, I just appended a zero character to the end of the size string and used that for the comparison.
I had a single script, similar to the last one above, that then created an on-the-fly iso, burned it to a DVD+RW, and called the reboot command. It was nice: hit the icon on the desktop, leave to eat or do something else for a few minutes. If my desktop was in view when I came back, the test was successful.

A single bootcode could be added to do this.
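The "append a zero" trick mentioned above can be sketched as follows. This is purely illustrative; the sizes are example numbers, not measured values:

```shell
#!/bin/sh
# Appending the character "0" to a decimal string multiplies it by ten,
# so the 10x-RAM rule can be checked without arithmetic on the size itself.
sfs_mb=285                  # example compressed SFS size, in MB
needed_mb="${sfs_mb}0"      # "2850" - i.e. 285 * 10
ram_mb=4096                 # example installed RAM, in MB
if [ "$ram_mb" -ge "$needed_mb" ]; then
    echo "OK: $ram_mb MB RAM meets the 10x rule ($needed_mb MB needed)"
else
    echo "warning: below the 10x rule ($needed_mb MB needed, $ram_mb MB present)"
fi
```

The string comparison still works numerically because `[ -ge ]` treats both operands as integers.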

User avatar
Ted Dog
Posts: 3965
Joined: Wed 14 Sep 2005, 02:35
Location: Heart of Texas

#251 Post by Ted Dog »

Actually, you should make running slow a boot flag, not FAST.

I always have to add a waitdev of 2 to 4 seconds for my machine to function correctly.

Those 2 to 4 seconds would be all we need to decompress the SFS if it's already loaded in RAM.

Check for 10x RAM and decompress unless told not to. :wink:

It's the repeated decompressing of the same files as they are accessed individually that slows the system.

Also, this reminds me of something I read online (can't recall the source): Red Hat uses an SFS as a package container for a single file which is itself a filesystem, like ext2, used by the system as a block device. That way a different layer of reading code in the kernel drivers is used, and performance increases, since the files most likely to be used next are already in the block driver's RAM buffer.

I did not understand how that actually worked, but since you are already so close to reproducing that method, why not see if it holds true?

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#252 Post by jamesbond »

Replace process_base2ram with this one.
Beware that the initial expansion process is a bit slow.
To use, boot with base2ram=expand

Code: Select all

process_base2ram() {
	local mntdev size avail
	# can only do it when basesfs exist and is not bind-mounted
	if [ "$BASE_SFS_MOUNT" -a -z "$BASE_SFS_BIND" ]; then	
		# setup 
		case $base2ram in
			yes)
				echo Copying $BASE_SFS_PATH to RAM ...	
				mntdev=$(awk -v MNT=$BASE_SFS_MOUNT '$2 == MNT {print $1; exit}' /proc/mounts)
				BASE_SFS_PATH=$BASE_SFS_DEFAULT_PATH
				dd if=$mntdev of=$BASE_SFS_PATH bs=1M
				;;
			
			expand)
				size=$(du -sm $BASE_SFS_MOUNT); size=${size%%/*}
				size=$(( ( $size * 110 ) / 100 )) # make it 10% larger to account for ext2 overheads
				avail=$(df -m $BASELINE_MOUNT | awk 'NR==2 {print $2}')
				if [ $avail -gt $size ]; then
					echo "Expanding $BASE_SFS_PATH ($size MB) to RAM ..."
					BASE_SFS_PATH=${BASE_SFS_DEFAULT_PATH%.sfs}.ext2
					dd if=/dev/zero of=$BASE_SFS_PATH bs=1M count=0 seek=$size 1>&2
					
					mke2fs -m 0 -F $BASE_SFS_PATH 1>&2
					mkdir /tmp/newbase; mount -o loop $BASE_SFS_PATH /tmp/newbase
					cp -a $BASE_SFS_MOUNT/* /tmp/newbase
					umount -d /tmp/newbase; rmdir /tmp/newbase
				else 
					echo "base2ram: not expanding, needed $size MB but only $avail MB available."
					base2ram=no
				fi
				;;
			
		esac
		
		# shared yes/expand cleanup
		case $base2ram in
			yes|expand)
				BASE_SFS_DEVICE=
				umount -d $BASE_SFS_MOUNT
				if [ -e $BASE_SFS_DEV_MOUNT ]; then
					umount -d $BASE_SFS_DEV_MOUNT
					rmdir $BASE_SFS_DEV_MOUNT
				fi
				
				# use RAM copy instead
				! mount_loop $BASE_SFS_PATH $BASE_SFS_MOUNT -o ro && BASE_SFS_MOUNT=						
				;;	
		esac
	fi
}
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

heywoodj
Posts: 85
Joined: Sun 15 Mar 2009, 04:39

#253 Post by heywoodj »

Sage wrote:
don't think I ever had such poor wireless performance
Sounds like a confusion between SW, FW & HW...
You're probably right, but the thing that gets me is that the same machine at the same location does wireless fine in @#!&&^* Win8. So, yeah, SW, HW & FW are not playing nicely together with the Atheros card in FD620.

Since this seems to be one of the only versions to have UEFI support, it's what I've been using.

Knock down a few structural walls to improve the wireless signal?!
I'm getting the sledge hammer now! :wink:
jamesbond wrote:"iwconfig" command can be used to adjust the radio power of your wireless, although it doesn't always work.
Do you mean boosting the transmission power from the router? On my older Linksys router reflashed with dd-wrt there were adjustable power settings, but I don't know about the other routers, most of which are not mine!

I found that iwconfig has a txpower parameter, but about all I can do is turn it on and off, not adjust the power. If I try to set txpower, I get:

Code: Select all

# iwconfig wlan0 txpower 30
Error for wireless request "Set Tx Power" (8B26) :
    SET failed on device wlan0 ; Invalid argument.
Is my syntax right?

User avatar
Ted Dog
Posts: 3965
Joined: Wed 14 Sep 2005, 02:35
Location: Heart of Texas

#254 Post by Ted Dog »

Great - a turbo boost option from the start, without a remaster, for the next version of Fatdog. I think this will become popular, since most 64-bit computers come with enough RAM.

I, of course, will try this first, then stair-step the fast remaster option to solve my original issue. Thanks for all the scripts and how-to knowledge... :lol:

User avatar
Ted Dog
Posts: 3965
Joined: Wed 14 Sep 2005, 02:35
Location: Heart of Texas

#255 Post by Ted Dog »

Boot option base2ram=expand works nicely. It does not work in a huge init without editing the code that resets base2ram=no, since everything is already loaded.

But so far, loading apps with lots of files (SeaMonkey, Gimp, etc.) is noticeably faster, in a good way. The system overall seems a bit less clogged.

gcmartin

#256 Post by gcmartin »

Base2RAM: I see similar improvements in my test. THX

Here to help

jamesbond
Posts: 3433
Joined: Mon 26 Feb 2007, 05:02
Location: The Blue Marble

#257 Post by jamesbond »

heywoodj wrote:
jamesbond wrote:"iwconfig" command can be used to adjust the radio power of your wireless, although it doesn't always work.
Do you mean boosting the transmission power from the router? On my older Linksys router reflashed with dd-wrt there were adjustable power settings, but I don't know about the other routers, most of which are not mine!
No, the one on the laptop. You have done it right as you quoted below.
I found that iwconfig has a txpower parameter, but about all I can do is turn it on and off, not adjust the power. If I try to set txpower, I get:

Code: Select all

# iwconfig wlan0 txpower 30
Error for wireless request "Set Tx Power" (8B26) :
    SET failed on device wlan0 ; Invalid argument.
Is my syntax right?
Syntax is right, but apparently support for the command varies with the card. The other way of doing it is either poking the hardware directly (with the "iw" command, not included in Fatdog yet) or messing with the regulatory settings (using 'CRDA' and then creating your own database - but 'CRDA' is not currently included in Fatdog either). Either way you're going to exceed the manufacturer's stated limitations (which isn't always a bad thing - just see what they do in the CPU overclocking forums), but you need to know what you're doing. :) I'm not sure why the Windows drivers can transmit more power than the Linux drivers :(
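For reference, "iw" expresses transmit power in mBm (hundredths of a dBm), so the familiar dBm value has to be converted first. A minimal sketch, assuming wlan0 as the interface name; it only prints the command, since actually running it needs root and a card that accepts the setting:

```shell
#!/bin/sh
# "iw" takes txpower in mBm: 100 mBm = 1 dBm.
dbm=20
mbm=$((dbm * 100))
echo "iw dev wlan0 set txpower fixed $mbm"   # prints the command to run
```

Whether the driver honors the request is still card-dependent, just as with iwconfig.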
Fatdog64 forum links: [url=http://murga-linux.com/puppy/viewtopic.php?t=117546]Latest version[/url] | [url=https://cutt.ly/ke8sn5H]Contributed packages[/url] | [url=https://cutt.ly/se8scrb]ISO builder[/url]

Hans
Posts: 45
Joined: Thu 16 Mar 2006, 22:38
Location: The Netherlands

power wireless adapter

#258 Post by Hans »

heywoodj and jamesbond,

Some time ago I did some research on this matter. In my experience, and that of other Linux users at work, the Linux machines always disconnect from wireless sooner when working in a group of Windows users. I was also not able to adjust the driver settings to make a difference. The same goes for the minimum bandwidth limit: it might be that this setting is a bit higher than on a Windows machine, thus disconnecting sooner.

I gave up in the end. Any suggestions would still be welcome, of course.

gcmartin

#259 Post by gcmartin »

FWIW, I had a similar problem recently at my auto dealership's open network. My trusty laptop booting live FD621 would NOT connect under any circumstances. I tried a normal boot with save-sessions AND I also booted without save-sessions (native). So I removed the LiveDVD and booted Windows, which happily connected.

Probably would have spent some time in research, but, my car was complete within 30minutes. I did think it odd, though.

Questions
  • What would be the recommended approach to assist development when these problems occur?
  • Are there specific reports or command results that development would like to see when this occurs?

WillM
Posts: 173
Joined: Wed 30 Dec 2009, 04:42
Location: Oakland, California

#260 Post by WillM »

This is an sfs for GIMP-2.8.4 installed to /opt.
http://ftp.nluug.nl/ibiblio/distributio ... .4_620.sfs

Post Reply