Puppy Linux Discussion Forum - Forum Index
Puppy HOME page : puppylinux.com
"THE" alternative forum : puppylinux.info
 
 Forum index » Taking the Puppy out for a walk » Announcements
Community Edition anyone interested?
Moderators: Flash, Ian, JohnMurga
Page 34 of 42 [621 Posts]   Goto page: Previous 1, 2, 3, ..., 32, 33, 34, 35, 36, ..., 40, 41, 42 Next
Author Message
saintless


Joined: 11 Jun 2011
Posts: 2215
Location: Bulgaria

Posted: Fri 06 Dec 2013, 07:41

Iguleder wrote:
saintless wrote:
(Forget about Wheezy - it is much too RAM-hungry.)

Forget about good hardware support and modern applications. Confused

I'm starting to agree about the modern applications. It might be worth the RAM usage. If someone can manage to reduce the 500 MB and more to an acceptable base, I agree to Wheezy.
What do you think about Squeeze with kernel 3.2.0-0.bpo.4-686-pae, at a starting point of 37 MB? Is that good enough hardware support? It can be made 64-bit.

Cheers, Toni

_________________
Farewell, Nooby, you will be missed...
Volhout


Joined: 28 Dec 2008
Posts: 375

Posted: Fri 06 Dec 2013, 08:15    Post subject: Community Edition
Subject description: why
 

The problem I see in the way this project is going is the approach that is being taken.

In Puppy land there are a lot of ISOs. These Puppies (a collection of programs running on a kernel and drivers) are tested and debugged.

If you base a CE version on one of these ISOs you get a completely tested and more or less stable set of software... that you then tear apart and use as a base to create something new. But by doing so, you also tear apart its quality, stability and functionality.

A good example: "we are using 214x9 because it runs better on my PC than 214x10". I make one promise: if you tear apart 214x9 you will end up with something just as unstable as 214x10. Technosaurus spent a lot of time making these versions. You would do better to take 214x10 and fix the problem (before or after the tearing apart).

In my opinion... you pick a base... and start working. Suitable for older or newer computers... don't make the scope too big by combining a 2.4 kernel (214x) with a 3.10 kernel (Slacko 64-bit). Pick one, and work your way through the stuff. Set a single goal - it will be enough work.

It seems 01micko has gained some leverage (click on the link in his signature). Please start helping him. Leave 214x for what it is (sorry, Technosaurus).

One final remark: I would re-define "old computer".

According to Microsoft, 44% of the world's Microsoft-installed PCs are still running XP. That means that 44% of PCs have below 4 GB of RAM (typically 1 GB) and a P4-type processor. I would consider that your new "OLD PC", and not the PC that was already old when Puppy was launched 10 years ago. Those newer old PCs can never be converted to W7 or W8, cannot be re-installed with XP(*), and these users need Linux within the next two years, or... buy new PCs.

Volhout.

(*) Believe me, I tried... The original install on the restore partition was XP SP1, and you can NOT get XP re-installed in a controlled way via the update manager, since it will not work with the IE6 in that image. The only thing you can do is manually install SP3 and IE8, but there is still no way to install all the missing diarrhoea of updates. After two nights of frustration I installed LXLE (Lubuntu 12.04 LTS with LibreOffice + more in one ISO) and was up and running in 17 minutes. (For that particular friend, Puppy would not have offered the convenience he needs, since Puppy currently, despite the effort developers put into single-click installs, still trips up on a variety of programs. There is always the tiny "missing this lib" that can be fixed... but not by him, and preferably not every week... by me.)
Of course there are plenty of ISOs and keygens available if you want illegal XP stuff.
jpeps

Joined: 31 May 2008
Posts: 3220

Posted: Fri 06 Dec 2013, 11:21

Iguleder wrote:
saintless wrote:
(Forget about Wheezy - it is much too RAM-hungry.)

Forget about good hardware support and modern applications. Confused



ThinkGeek.com: cheap and power-efficient. Forget about repositories

http://www.thinkgeek.com/product/f151/?itm=adwords_labelsGeek_Toys_and_adwords_labelsOn_Sale&rkgid=1132042983&cpg=ogty1&source=google_toys&device=c&network=g&matchtype=&gclid=CNOR8ur1m7sCFUdbfgodmHkAIw
saintless


Joined: 11 Jun 2011
Posts: 2215
Location: Bulgaria

Posted: Fri 06 Dec 2013, 11:36

jpeps wrote:
ThinkGeek.com: cheap and power-efficient. Forget about repositories

http://www.thinkgeek.com/product/f151/?itm=adwords_labelsGeek_Toys_and_adwords_labelsOn_Sale&rkgid=1132042983&cpg=ogty1&source=google_toys&device=c&network=g&matchtype=&gclid=CNOR8ur1m7sCFUdbfgodmHkAIw

Yes, Puppy repositories are much better than the Squeeze ones. And most of the kernels are above 3...
Where was my mind?

_________________
Farewell, Nooby, you will be missed...
darry1966

Joined: 26 Feb 2012
Posts: 368
Location: New Zealand

Posted: Fri 06 Dec 2013, 12:19    Post subject: Old hardware Base

For old hardware I think the updated 4.31 - 4.32v3 would be better than 214X: it has a later Xorg, and surely with a little work it can be updated a little more - or there is the lovely Lucid 525 retro. 214X just doesn't work for me.
Iguleder


Joined: 11 Aug 2009
Posts: 1872
Location: Israel, somewhere in the beautiful desert

Posted: Fri 06 Dec 2013, 14:20    Post subject: Re: Old hardware Base

darry1966 wrote:
214X just doesn't work for me.


The problematic word here is "me". Everybody has their own hardware and that one old Puppy that happens to work fine for their specific setup.

Just face it: you can't support today's hardware with old software (Wary, 2.14x, any retro or old Puppy version), and you can't just upgrade the kernel and X of an old Puppy to achieve good hardware support.

Either build a modern Puppy or stick with an ancient version forever.

_________________
My homepage
wanderer

Joined: 20 Oct 2007
Posts: 215

Posted: Fri 06 Dec 2013, 15:07

Greetings, community edition puppy fans.

I am presently reading the "Woof at GitHub" thread, and trying to learn how to make a minimal base ISO from woof-CE that can be modified and expanded into whatever we want. This, I feel, would address essentially all of the concerns that have been mentioned in the CE thread.

I encourage everyone to look into this option.

have fun

wanderer

Last edited by wanderer on Mon 09 Dec 2013, 11:14; edited 4 times in total
amigo

Joined: 02 Apr 2007
Posts: 2226

Posted: Fri 06 Dec 2013, 15:10

Actually, it is quite possible to support both old hardware and new software. glibc can be compiled to support older versions of the kernel. There really is no hard connection or conflict between hardware and software. Any kernel version can be run with any runtime libs you like - as long as glibc was compiled to allow support for those older kernel versions. You produce a runtime of libs/progs using later versions of the libs/progs; then you can run any version of the kernel - which is what decides whether new or old hardware is better supported. One glibc & Co., and then the user's choice of kernel - say 2.6.32 or 3.12, or whatever.
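For the curious: you can see the kernel floor a given glibc build was given, because it is recorded in an ELF note inside libc.so.6. A quick sketch (the library path varies by distro, so we ask the dynamic linker cache for it):

```shell
# Locate libc.so.6 via the dynamic linker cache (its path differs
# across distros), then read its ELF note: the "for GNU/Linux x.y.z"
# part is the minimum kernel version this glibc build will accept.
libc=$(ldconfig -p | awk '/libc\.so\.6 /{print $NF; exit}')
file -L "$libc"
```

On a typical build the output includes something like "for GNU/Linux 2.6.32", i.e. that glibc refuses to run on anything older.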
Iguleder


Joined: 11 Aug 2009
Posts: 1872
Location: Israel, somewhere in the beautiful desert

Posted: Fri 06 Dec 2013, 15:27

That's true, but only if you start from scratch, since it's hard to replace 2.14's kernel, glibc and X. Laughing

It's wise to use a glibc compiled against old headers - that's what I do in my builds. However, you also need two X stacks.

_________________
My homepage
darry1966

Joined: 26 Feb 2012
Posts: 368
Location: New Zealand

Posted: Fri 06 Dec 2013, 15:41    Post subject: Re: Old hardware Base

Iguleder wrote:
darry1966 wrote:
214X just doesn't work for me.


The problematic word here is "me". Everybody has their own hardware and that one old Puppy that happens to work fine for their specific setup.

Just face it: you can't support today's hardware with old software (Wary, 2.14x, any retro or old Puppy version), and you can't just upgrade the kernel and X of an old Puppy to achieve good hardware support.

Either build a modern Puppy or stick with an ancient version forever.


1. I was talking about retro hardware, and the base(s) I suggested are proven; Seamonkey is taken care of as far as a browser is concerned. And yes, I guess I could have worded my post better: instead of the "me" statement I should have said that 214x didn't work well on my crappy Sempron, and that Xorg on these Pups is far better. As for newer hardware, I guess Raring or Pemasu's Wheezy - both are ready for prime time.

Just update what needs updating in 4.32 or retro 5.25 - again, ideal for retro machines.
bark_bark_bark

Joined: 05 Jun 2012
Posts: 784
Location: USA

Posted: Fri 06 Dec 2013, 17:55    Post subject: Re: Community Edition
Subject description: why
 

Volhout wrote:
...That means that 44% of PCs have below 4 GB of RAM (typically 1 GB) and a P4-type processor... And not the PC that was already old when Puppy was launched 10 years ago.


I think the P-IV was new when Puppy came out. I think the most powerful ones are from 2006.

Volhout wrote:
...buy new PCs.


Also, there is a problem with the "buy a new PC" statement, because not everyone is "rich" enough to do so. $600 is a s**t lot to pay, and even $300 for an i3/i5 refurb is just too much.

_________________
Desktop: Intel 945PSN Motherboard, 3.2Ghz P-IV "Prescott 2M", 2GB RAM, 500GB WD HDD, Slackware 14.1
greengeek

Joined: 20 Jul 2010
Posts: 2407
Location: New Zealand

Posted: Fri 06 Dec 2013, 21:33    Post subject: Re: Community Edition
Subject description: why
 

Volhout wrote:
if you tear apart 214x9 you will end up with something just as unstable as 214x10. Technosaurus spent a lot of time making these versions.
I think Ttuuxxx, not Technosaurus...
mikeslr


Joined: 16 Jun 2008
Posts: 765
Location: Union New Jersey USA

Posted: Sat 07 Dec 2013, 22:58    Post subject: How about resolving glibc questions via an application?

amigo wrote:
Actually, it is quite possible to support both old hardware and new software. glibc can be compiled to support older versions of the kernel. There really is no hard connection or conflict between hardware and software. Any kernel version can be run with any runtime libs you like - as long as glibc was compiled to allow support for those older kernel versions. You produce a runtime of libs/progs using later versions of the libs/progs; then you can run any version of the kernel - which is what decides whether new or old hardware is better supported. One glibc & Co., and then the user's choice of kernel - say 2.6.32 or 3.12, or whatever.


Correct me if I'm wrong: a Pup could also be built whose kernel was compiled to support the compiling and subsequent use of several glibc versions. What would then be needed are (1) pet developers specifying in the pet specs the glibc dependency of each pet, and (2) a modification of PPM to examine the Pup as to whether that dependency is met.

Alternatively, perhaps you could read what I said about "Semi-Full" installs, http://murga-linux.com/puppy/viewtopic.php?p=742201#742201, and perhaps add several glibc builds to the "core". Or build two Pups, one modern and one retro. Then, as was done for graphics cards, add an application which would read the specs of the computer and the glibc built into the core, and advise the user which version of an application would work best for his or her computer.
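A PPM check like that could be quite small. Purely as a sketch - PET_GLIBC_MIN is an invented pet-spec field used for illustration, not anything real pet specs carry today - the comparison could look like:

```shell
# Hypothetical sketch of the PPM-side check described above.
# PET_GLIBC_MIN is an invented pet-spec field, for illustration only.
glibc_version() {
    # e.g. "ldd (GNU libc) 2.17" -> "2.17"
    ldd --version | head -n1 | grep -o '[0-9][0-9.]*$'
}
version_ge() {
    # true if $1 >= $2, comparing version-style (GNU sort -V)
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

PET_GLIBC_MIN=2.10
if version_ge "$(glibc_version)" "$PET_GLIBC_MIN"; then
    echo "pet ok: glibc $(glibc_version) >= $PET_GLIBC_MIN"
else
    echo "pet needs glibc >= $PET_GLIBC_MIN"
fi
```

That assumes a glibc system (ldd's banner) and a GNU sort with -V; both hold on every Pup I know of.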

mikesLr
jpeps

Joined: 31 May 2008
Posts: 3220

Posted: Sun 08 Dec 2013, 02:56    Post subject: Re: How about resolving glibc questions via an application?

mikeslr wrote:


Correct me if I'm wrong: a Pup could also be built whose kernel was compiled to support the compiling and subsequent use of several glibc versions. What would then be needed are (1) pet developers specifying in the pet specs the glibc dependency of each pet, and (2) a modification of PPM to examine the Pup as to whether that dependency is met.


Dependencies can probably be met and things can still fail to work correctly without a fresh compile, once you alter the toolchain.
amigo

Joined: 02 Apr 2007
Posts: 2226

Posted: Sun 08 Dec 2013, 06:11

As Iguleder points out, the situation with the latest kernel and Xorg drivers is inter-related - this was not always so. But with glibc, the point is that glibc is compiled to accept a range of kernel versions. The kernel doesn't know or care anything about glibc - it's the libc (glibc, musl, uclibc, ...) which has to understand the kernel.

glibc stable is now at 2.18. It can be compiled to support any range of kernel versions, although there are some important watershed versions where really major things changed in the kernel. The old ways can still be supported in glibc, but they do make the libs larger. A reasonable approach easily lets you use kernels back to linux-2.6.32. The next big step backward in time from there would be linux-2.6.17; 2.6.16 and earlier are really different.

Next, when compiling glibc, it refers to headers which are part of the kernel source. In Slackware these are called 'kernel-headers'; in Debian and others, they are the libc headers. They define the interface of your chosen libc with the kernel. *They do not have to be the headers of any kernel you are actually going to run.* They simply need to be 'sanitized' headers - if you were compiling a system with another kernel, like Hurd, then you'd use the Hurd headers, etc.

So, compile a late glibc (I still wouldn't use 2.18!) against kernel headers from 2.6.32, or 3.0..., specifying the minimum kernel version to support - 2.6.32 is a nice number.
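To make that concrete - a sketch only, with illustrative version numbers and paths (--enable-kernel and --with-headers are the actual glibc configure switches for this):

```shell
# 1) Install sanitized headers from the kernel source tree you chose;
#    these need not belong to the kernel you will actually run.
cd linux-2.6.32
make ARCH=i386 INSTALL_HDR_PATH=/opt/kernel-headers headers_install

# 2) Build glibc against those headers (glibc insists on an
#    out-of-tree build), declaring the oldest kernel to support.
cd ../glibc-2.17
mkdir -p build && cd build
../configure --prefix=/usr \
    --with-headers=/opt/kernel-headers/include \
    --enable-kernel=2.6.32
make && make install DESTDIR=/tmp/glibc-pkg
```

Any kernel older than the --enable-kernel value will then be refused at runtime with "kernel too old".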

Then, compile and use any kernel version between 2.6.32 and the latest, with its modules, to support whatever range of hardware you like or need. glibc will have no problems with any of them.

However, as Iguleder says, you'd need to maintain two X stacks in order to support both the old*est* and new*est* graphics cards. The fairly recent addition of KMS (Kernel Mode Setting) means that significant parts of the graphics drivers wound up in the kernel. These bits of code must be co-ordinated with the Xorg server version and, particularly, the xf86 input and video drivers.

A similar situation exists for firmware - particularly for wireless devices. But many distros manage to find reasonable ranges within which to work.

Now, *big heads-up*: nothing stops you from using the latest software applications you want. Your glibc version will eventually get in the way - but that usually takes a while. The point is that application versions have very little direct relation to the kernel version - in fact applications don't even communicate directly with the kernel.

As to the question of 32-bit or 64-bit: you simply need multiple package trees containing the different arches. That means maintaining two 'ports' of the (mostly) same distro. That means you need a very nice system for building and rebuilding those little units of software (called packages) - which you can then use to assemble whatever sort of system you want, based on those little logical, manageable units. The method of assembling them into an end product or installed system should be completely divorced from the method and system of creating those packages - and I mean any sort of bundle/app/package/SFS that you plan to use.

The one-script-to-do-it-all approach is much too messy for the job - and worse, it takes your eye off that concept of manageable, logical units of functionality which can be assembled, upgraded and extended in any way you imagine. And you still have to do that even if you have no intention of doing 'dependency resolution' or providing depends info.

Every time you add something which comes from sources, you are already using the logical system - the natural order of things means that each source tarball contains a limited set of code, for a single or closely-related set of functionalities. It only makes sense to follow that logic and create a similar object, called a package, which can be correctly installed or upgraded, and whose name and version reflect some useful logic.

If you want dependency info or resolution, then you need a sophisticated system for determining those dependencies - and that cannot depend on some list of dependencies downloaded from somewhere; it has to be generated as part of the configure-compile-package process. Some packaging systems, like Arch, require you to manually *supply* the list. Debian build systems will generate them for you - as does T2. My package builder generates them but lets you add items which might be missed: auto-generated but tweakable. Anyway, accurate dependency information can *only* be determined at compile time. And sometimes a bit of human input is needed: for instance, it would be extremely difficult to programmatically determine that the 'man' program needs 'groff' in order to work. It's not a binary dependency.
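The binary half of that generation step is essentially a walk over the packaged files, reading each ELF binary's NEEDED entries. A rough sketch (objdump comes with binutils; deps_of is just an illustrative name):

```shell
# Print the shared libraries a binary declares it NEEDs -- the raw
# material for auto-generated dependency lists. Non-binary dependencies
# (man needing groff, etc.) will never show up here; that takes a human.
deps_of() {
    objdump -p "$1" 2>/dev/null | awk '/NEEDED/ {print $2}'
}

deps_of /bin/sh
```

A package builder runs that over every executable and library in the package, maps each NEEDED soname back to the package that ships it, and that mapping becomes the auto-generated depends list.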

A LiveCD project needs really, really sophisticated package management. What makes them so hard to maintain, and so prone to being 'forked', is that the method of assembling, altering and upgrading individual units gets abandoned. It is very easy to manually alter or programmatically 'remaster' a local copy - and to keep doing that for a while. But the first time you want to upgrade, extend or down-size that one-image idea, every custom thing that you *manually* did is prone to be lost in your next product. And when you release a product with a minor error, the only way to deliver the fix is to create the whole product again - one missing file means you have to upload the whole (fixed) thing again, and your users have to download the whole (fixed) thing again. And how can you upgrade such a system reasonably?
Powered by phpBB © 2001, 2005 phpBB Group