Puppy Linux Discussion Forum » Off-Topic Area » Programming

Facts and myth about installing new glibc?
jamesbond

Joined: 26 Feb 2007
Posts: 2183
Location: The Blue Marble

PostPosted: Tue 05 Feb 2013, 23:17    Post subject:  Facts and myth about installing new glibc?  

Ok, here we go. This has been discussed a few times, but I'd like to dig a bit deeper with those who know more.

Traditional wisdom says that upgrading glibc by simply replacing it with a newer version is frowned upon, because it can cause subtle errors and breakage, resulting in a very difficult-to-troubleshoot situation. As the wisdom goes, to properly upgrade glibc, one needs to:
a) build a new glibc
b) build a new gcc using that new glibc
c) re-compile all the apps using that new gcc+glibc.
==> a lot of work, and that's why we don't see glibc upgraded every day.

Anecdotal challenges to the traditional wisdom:
- a (dynamically linked) app compiled for glibc 2.6 will still run on glibc 2.14, so it seems that glibc upgrade is harmless (for that application)
- Arch Linux with its rolling release upgrades glibc as they go along, without requiring recompilation for all the installed packages.
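The first anecdote has a concrete mechanism behind it: glibc exports versioned symbols, and each dynamically linked binary records the minimum version it needs of each symbol, which is why a glibc 2.6 binary keeps running on 2.14. A small sketch of how to see what a binary demands (the sample lines here are canned for illustration; on a real system you would feed in `readelf --dyn-syms /path/to/app`):

```shell
#!/bin/sh
# Extract the highest GLIBC_x.y symbol version a binary requires.
# Canned sample input stands in for real readelf output so the logic
# itself is reproducible.
max_glibc_version() {
  grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n1
}

sample='printf@GLIBC_2.2.5
memcpy@GLIBC_2.14'
printf '%s\n' "$sample" | max_glibc_version   # → GLIBC_2.14
```

Any glibc whose own version is at least that high should satisfy the binary, which is the backward-compatibility guarantee the anecdote relies on.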

Question:
1. Why isn't Arch falling to pieces? What do they do to avoid breaking app compatibility when they install a new glibc?
2. If old apps work with the new glibc, what's the worry about replacing glibc then?
3. I understand that step 2 (rebuild gcc) is necessary, but do we really need to recompile all the apps?

cheers!

PS: This question was originally asked of me by Q5sys, and I was unable to answer it. Thus this post :)

_________________
Fatdog64, Slacko and Puppeee user. Puppy user since 2.13.
Contributed Fatdog64 packages thread
amigo

Joined: 02 Apr 2007
Posts: 2251

PostPosted: Wed 06 Feb 2013, 14:47    Post subject:  

Arch is not 'falling to pieces', and neither is Slackware. But they both have a few crumbs falling off every day -many of which don't get noticed for a while.

Say it's only 2% of programs which misbehave. Actually, upgrading other libs _besides glibc_ will cause problems more often. Because Arch and Slackware both _do not_ recompile everything which depends on a lib which has been upgraded, they have an extra 2% chance of being bitten -I mean aside from any un-discovered bugs which are present even in properly compiled-linked libs or programs.

gcc itself is more sensitive to glibc upgrades than most other software, so it's especially important to have a sane toolchain. You forgot to mention the other component of the toolchain -binutils, which links the objects created by gcc into the final binary. binutils/gcc/glibc.

The only way to be *as sure as possible, with or without extensive testing* that the OS is sane, is to have compiled everything with the same toolchain, in the *correct order*. The point is that a *complete* OS is one which can reproduce itself. If you are a real stickler, this means that rebuilding the entire OS would yield binaries *exactly the same* as the ones already on the system. If you partially upgrade (or re-compile with different configs) *any* library and do not recompile the programs which are linked to that library, then you will not get exact binary replicas -and may or may not experience problems related to the new lib/configs/toolchain difference.

Note that all the above applies even if no libs or programs have been *upgraded* to a newer version -just compiling using different configure options or patches can cause the same problems.
Usually, when upgrading one component of the toolchain, all are upgraded, but this is not always the case. But if glibc itself is upgraded or recompiled, then the whole toolchain must be re-compiled in 2 passes. The second pass ensures that each of the components was compiled by, linked by, and linked against the same three components.

The whole process of upgrading or re-compiling *a running system*, without having to fully isolate each upgraded item (like you have to do to cross-compile), depends on having enough binary compatibility to be able to build the new toolchain so you can 'bootstrap' the rest of the system. There are some very low-level programs which can easily be broken by upgrading very basic libs. glibc is the most ubiquitous requirement of all the libs -nearly every single library and program on the system will link to it. The next most basic library is zlib. There are 5-6 very critical libs which must be rebuilt/upgraded early on during a build. Some of them even have circular dependencies, which means you have to re-compile a lib and its 'partner' two times to get them both correct.

The most common scenario used by most distros is to upgrade all three components of the toolchain together. In such a case, they'd:
1. First upgrade to the newer binutils -compiling them using the old gcc, linking them using the old version of themselves -against the old glibc version.
2. Upgrade to the newer gcc, compiling it using the old version of gcc, linking it using the *new* binutils -against the old glibc.
3. Then, upgrade to the newer glibc, compiling it using the *new* gcc (linked against old glibc) and linking it using the *new* binutils (compiled with old gcc, linked using old binutils).

So, now at this point all three items have been upgraded, but the toolchain is not sane, because all the elements of the new chain have not been used to produce the newer versions. That means that *second pass* is needed:
4. Recompile the new version of binutils, using the Pass 1 *new* gcc version (still linked against old glibc). On this second pass, it will now be linked against the new Pass 1 glibc version. This new binutils will be linked by the same version of binutils *as what it is* -the Pass 1 binutils.
5. Recompile gcc (using the Pass 1 build), linking with the new now-sane binutils, against the new Pass 1 glibc.
6. Recompile glibc, using the new now-sane binutils and the new now-sane gcc.
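The six steps can be sketched as a script. `build_pkg` here is a hypothetical stand-in that only prints each step (in reality each line would be a full configure/make/make install); the point is purely the ordering and which old/new pieces are in play at each stage:

```shell
#!/bin/sh
# Sketch of the two-pass toolchain rebuild described above.
# build_pkg is a made-up helper: it just reports what would be built
# and with which compiler, linker, and libc in use at that moment.
build_pkg() { echo "pass $1: build $2 (compiler: $3, linker: $4, libc: $5)"; }

# Pass 1: each component still leans on an old piece somewhere.
build_pkg 1 binutils old-gcc old-binutils old-glibc
build_pkg 1 gcc      old-gcc new-binutils old-glibc
build_pkg 1 glibc    new-gcc new-binutils old-glibc  # gcc itself still linked to old glibc

# Pass 2: rebuild everything using only the new components.
build_pkg 2 binutils new-gcc new-binutils new-glibc
build_pkg 2 gcc      new-gcc new-binutils new-glibc
build_pkg 2 glibc    new-gcc new-binutils new-glibc
```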

See how easy that is! LOL But, let's not get ahead of ourselves with joy just yet. Upgrading the toolchain is *never* a simple matter of using the latest version of any one of them. The versions must be carefully chosen and tested for suitability *with each other*. Some combinations which work for one arch will not work for others. Here is where it pays to 'spy on' some major distros for choosing a workable combination -they'll always have more people testing than you!

As I said, lots of libs/prog versions may be more likely to hiccup than the toolchain -GTK & Co. comes quickly to mind, but actually the worst offenders are libpng, libjpeg, zlib, and a couple of others.

Also, I mentioned *correct order*. This means that libs which are *depended on* must be rebuilt before the programs that depend on them. But what is, or is *not*, on your system at compile-time is just as important, since some configuration routines automatically check for the presence of certain *optional* libs and enable the use of them if found. This means that in order to avoid lib2 having a dependence on lib1, lib1 must *not* be installed when lib2 is compiled. Or, if the features of lib1 are wanted, then it must be installed at the time lib2 is compiled.
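A toy model of that configure behaviour (the `LIBFOO_INSTALLED` variable and the probe are made up, but the pattern matches what autoconf-style scripts do -the same source grows or loses a dependency depending on what happens to be installed):

```shell
#!/bin/sh
# Hypothetical model of an autoconf-style optional-dependency probe:
# if libfoo happens to be present at compile time, the package silently
# picks up a runtime dependency on it.
configure() {
  if [ -n "$LIBFOO_INSTALLED" ]; then
    echo "checking for libfoo... yes (enabling libfoo support)"
  else
    echo "checking for libfoo... no"
  fi
}

LIBFOO_INSTALLED=1
configure    # → checking for libfoo... yes (enabling libfoo support)
unset LIBFOO_INSTALLED
configure    # → checking for libfoo... no
```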

Having a completely-sane toolchain and having every binary on the system built using that toolchain will not eliminate all software bugs -but it will eliminate *all* bugs which are due to version mis-match. Rolling-release distros never reach the above state -ever. gentoo comes close to achieving this, while still being constantly in flux. But I'm sure it fails, at times -I see too many failed upgrades using 'emerge' to convince me otherwise. debian is the best at achieving 100% binary sanity and fedora comes pretty close. Practically *everybody else* falls short of this. fedora doesn't have as much vetting as debian -every single lib and program in debian is tested -at least somewhat- after first being made 'sane'. That's why debian is always a few versions behind the latest for nearly everything it includes -that's the price to be paid for such levels of sanity and dependability -since the testing process also manages to flush out some of the bugs which are there even in 'sane' binaries.

I sincerely hope that this has been helpful and accurate, without the impression of spreading FUD.
jpeps

Joined: 31 May 2008
Posts: 3220

PostPosted: Wed 06 Feb 2013, 16:42    Post subject:  

For anyone really interested, I'd recommend going through the procedure at Linux From Scratch at least once.
amigo

Joined: 02 Apr 2007
Posts: 2251

PostPosted: Thu 07 Feb 2013, 03:56    Post subject:  

You'd think that building LFS or gentoo would make anyone understand it all. But, it isn't really so. Since LFS relies on pre-written scripts and gentoo on mass-rebuild commands, it's difficult for most folks to really get an overview of what is going on. The concepts of circular-dependency and toolchain sanity don't become clear at all.

Still, doing either one will give a user a lot of appreciation for how big a job it is to build and maintain even a small distro.
jamesbond

Joined: 26 Feb 2007
Posts: 2183
Location: The Blue Marble

PostPosted: Thu 07 Feb 2013, 06:06    Post subject:  

Thank you both, especially amigo, for the informative post (thanks for pointing out the binutils dependency too).

I'm on the side of the traditional wisdom. My (simple) reasoning is that a new version of glibc (or, like you say, the same version of glibc compiled with a different configuration or even different compiler options) may have different hardcoded offsets in its data structures somewhere, so that offset 5 in struct A may point to field X in the old glibc but to field Y in the new glibc. That being the case, I would expect a lot more breakage, but it isn't happening - so my excuse is that much of what is inside glibc is based on POSIX, so those parts remain the same between versions. But reasoning is reasoning - it is not fact, so I'm glad to hear from someone who has more field experience.

I'm glad you mention that there *are* in fact breakages in rolling-release distros; perhaps not that apparent but there are. I'm surprised that even gentoo has this problem, I thought as a source-based distro it automatically avoids this issue.

That being the case, for these distros (and for us, if we choose to follow their way) it becomes benefit/risk management. What is the benefit of upgrading to a new glibc versus the risk of getting obscure and hard-to-debug breakage? Is it worth it to let 2% of the programs fail, if we enjoy the benefit that the new glibc brings to the other 98%? Would it be possible to identify which progs fall into that 2%? (I suspect the answer is no - unless one goes and does QA and tests every function of every program.) Am I correct?

Thanks for your insight on Debian. I've sometimes wondered why, even in the "sid" line, debian packages are always a few versions behind. But I do know that debian source packages are usually very dependable, and now you have explained why.

jpeps, I have been through the LFS cycle once (well, perhaps half of it :) - still trying to find time to finish it). Like amigo said, it is instructive, and one certainly gets a feel for the effort required to build a working system "from scratch", and it is even littered with explanations here and there, but there is no deep and cohesive explanation of why certain things must be done in a certain way (which is understandable - that is not its purpose). I got a little more than I bargained for when I tried to cross-compile bootstrap-linux (based on musl) from x86 to ARM last year - and saw first-hand that there are tons of hidden (= undocumented, or perhaps documented beyond my reach) assumptions that gcc makes - especially when compiling itself.

As a side question: I think I understand the circular dependency between gcc/glibc/binutils, but what I don't understand is why the circular dependency is needed at all. As far as theory goes, there should not be any dependency - a static compiler can produce code from source without any libc; a libc can be built by, and be usable for, any compiler as long as it follows agreed specifications (calling conventions, name mangling, etc); and the same goes for binutils (as long as it agrees with the compiler on object formats, etc). LFS builds the toolchain two times (one for cross-compiling - amigo's steps 1-3 - and one for building the actual LFS binaries - amigo's steps 4-6), and for each toolchain gcc is compiled twice: a "naked gcc" (amigo's step 2 or 5) and then the full gcc linked with glibc (amigo's step 3 or 6). I don't understand why the "naked gcc" (the output of step 2 or 5) can't be used as-is to compile programs directly. Isn't it just a matter of specifying where the std include files and libs are? Why do we even need step 3/6? Is this circular dependency a (mis)feature of gcc?

cheers!

Q5sys


Joined: 11 Dec 2008
Posts: 1066

PostPosted: Thu 07 Feb 2013, 19:11    Post subject: Re: Facts and myth about installing new glibc?  

jamesbond wrote:
Ok, here we go. This has been discussed a few times, but I'd like to dig a bit deeper with those who know more.

Traditional wisdom says that upgrading glibc by simply replacing it with a newer version is frowned upon, because it can cause subtle errors and breakage, resulting in a very difficult-to-troubleshoot situation. As the wisdom goes, to properly upgrade glibc, one needs to:
a) build a new glibc
b) build a new gcc using that new glibc
c) re-compile all the apps using that new gcc+glibc.
==> a lot of work, and that's why we don't see glibc upgraded every day.

Anecdotal challenges to the traditional wisdom:
- a (dynamically linked) app compiled for glibc 2.6 will still run on glibc 2.14, so it seems that glibc upgrade is harmless (for that application)
- Arch Linux with its rolling release upgrades glibc as they go along, without requiring recompilation for all the installed packages.

Question:
1. Why isn't Arch falling to pieces? What do they do to avoid breaking app compatibility when they install a new glibc?
2. If old apps work with the new glibc, what's the worry about replacing glibc then?
3. I understand that step 2 (rebuild gcc) is necessary, but do we really need to recompile all the apps?

cheers!

PS: This question was originally asked of me by Q5sys, and I was unable to answer it. Thus this post :)


I'll attach my original PM with Jamesbond here...

Q5sys wrote:
Glibc is one of those things I fail at wrapping my head around all the time. First off... I don't get why apps break. I can go and grab a package from Slackware 13 that uses glibc 2.11 and it'll work fine in Fatdog or Lighthouse. So why would updating Fatdog from 2.14 to, say, 2.17 suddenly break apps? I get that gcc and such have to be updated, but why does everything else break?
I keep thinking we are doing something wrong. After all, how on earth are rolling releases getting it right? Take Arch Linux: as soon as they approve glibc, it's rolled out to everyone. They are on 2.17 right now, and they aren't requiring everyone to redownload everything to work with the newer glibc. They release glibc along with gcc and the things that need to be updated, and the system works.
What do they know that we don't?


I've always had headaches when dealing with glibc... I do know you can sorta side-step it by compiling a newer gcc/glibc, sticking them somewhere else in the system, and then editing Make/configure scripts so that programs compile using those newer versions instead of the older ones; but it's an ugly hack in my mind. Sure, it works, but I've never attempted to take it another step further and do it again... so perhaps I've just been lucky so far in what I've needed it for.
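That "park a second toolchain somewhere else" hack can be sketched roughly like below. All the paths are hypothetical, and the exact flags vary per package; the rpath and dynamic-linker options are the usual way to keep the resulting binary pointed at the new libs at runtime instead of the system ones:

```shell
# Sketch, assuming a new gcc/glibc installed under /opt/newtc (hypothetical).
export PATH=/opt/newtc/bin:$PATH
export CC=/opt/newtc/bin/gcc
# Point the build at the new headers/libs, and bake in where to find them
# at runtime (the dynamic-linker filename is arch-specific):
./configure CFLAGS="-I/opt/newtc/include" \
            LDFLAGS="-L/opt/newtc/lib -Wl,-rpath,/opt/newtc/lib \
                     -Wl,--dynamic-linker=/opt/newtc/lib/ld-linux-x86-64.so.2"
```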

One of the reasons I was talking to Jamesbond about this, and want to dig deeper into it, is that eventually I want to turn Slackbones into a rolling release. It may take me 5 years to get it to that point... but I want it to get there. I know this is one major issue I'm going to have to learn about... so I'm getting myself started and learning where the pitfalls are that I need to figure out. That way I have a rough roadmap of what I need to learn and resolve before I actually try to make it happen.

Apparently the Carolina guys are having a problem with this as well, but I don't know who's doing the dev work there nowadays. Ever since it cracked the 300MB 32-bit ISO size, I stopped following it closely. Maybe they can get something out of this thread too.



amigo wrote:
Arch is not 'falling to pieces', and neither is Slackware. But they both have a few crumbs falling off every day -many of which don't get noticed for a while.

Say it's only 2% of programs which misbehave. Actually, upgrading other libs _besides glibc_ will cause problems more often. Because Arch and Slackware both _do not_ recompile everything which depends on a lib which has been upgraded, they have an extra 2% chance of being bitten -I mean aside from any un-discovered bugs which are present even in properly compiled-linked libs or programs.

I guess I've been lucky then, because I've only had Arch break on me twice. And both of those times were because of major changes in the system... and because I failed to read the forums before doing a pacman -Syu. Once I read about the issues, I was able to fix it easily enough.


amigo wrote:
gcc itself is more sensitive to glibc upgrades than most other software, so it's especially important to have a sane toolchain. You forgot to mention the other component of the toolchain -binutils, which links the objects created by gcc into the final binary. binutils/gcc/glibc.

The most common scenario used by most distros is to upgrade all three components of the toolchain together. In such a case, they'd:
1. First upgrade to the newer binutils -compiling them using the old gcc, linking them using the old version of themselves -against the old glibc version.
2. Upgrade to the newer gcc, compiling it using the old version of gcc, linking it using the *new* binutils -against the old glibc.
3. Then, upgrade to the newer glibc, compiling it using the *new* gcc (linked against old glibc) and linking it using the *new* binutils (compiled with old gcc, linked using old binutils).

So, now at this point all three items have been upgraded, but the toolchain is not sane, because all the elements of the new chain have not been used to produce the newer versions. That means that *second pass* is needed:
4. Recompile the new version of binutils, using the Pass 1 *new* gcc version (still linked against old glibc). On this second pass, it will now be linked against the new Pass 1 glibc version. This new binutils will be linked by the same version of binutils *as what it is* -the Pass 1 binutils.
5. Recompile gcc (using the Pass 1 build), linking with the new now-sane binutils, against the new Pass 1 glibc.
6. Recompile glibc, using the new now-sane binutils and the new now-sane gcc.

See how easy that is! LOL But, let's not get ahead of ourselves with joy just yet. Upgrading the toolchain is *never* a simple matter of using the latest version of any one of them. The versions must be carefully chosen and tested for suitability *with each other*. Some combinations which work for one arch will not work for others. Here is where it pays to 'spy on' some major distros for choosing a workable combination -they'll always have more people testing than you!


[/facepalm] I completely forgot about binutils. How exactly does gdb fit into this? Can it just be built after... or does it have to be worked in as well? I normally do debugging work with strace... but I'm sure eventually I'll use gdb more, so I might as well ask now instead of later.


amigo wrote:
Also, I mentioned *correct order*. This means that libs which are *depended on* must be rebuilt before the programs that depend on them. But what is, or is *not*, on your system at compile-time is just as important, since some configuration routines automatically check for the presence of certain *optional* libs and enable the use of them if found. This means that in order to avoid lib2 having a dependence on lib1, lib1 must *not* be installed when lib2 is compiled. Or, if the features of lib1 are wanted, then it must be installed at the time lib2 is compiled.

That's one piece of information that always seems to be left out when dealing with circular dependencies... Thanks for mentioning it and reminding me. I sometimes forget about avoiding the circular-dep issue by removing optional deps.

amigo wrote:
Having a completely-sane toolchain and having every binary on the system built using that toolchain will not eliminate all software bugs -but it will eliminate *all* bugs which are due to version mis-match. Rolling-release distros never reach the above state -ever. gentoo comes close to achieving this, while still being constantly in flux. But I'm sure it fails, at times -I see too many failed upgrades using 'emerge' to convince me otherwise. debian is the best at achieving 100% binary sanity and fedora comes pretty close. Practically *everybody else* falls short of this. fedora doesn't have as much vetting as debian -every single lib and program in debian is tested -at least somewhat- after first being made 'sane'. That's why debian is always a few versions behind the latest for nearly everything it includes -that's the price to be paid for such levels of sanity and dependability -since the testing process also manages to flush out some of the bugs which are there even in 'sane' binaries.

I've only built gentoo once before... just to do it and try to learn; I never messed with it beyond that.

amigo wrote:
I sincerely hope that this has been helpful and accurate, without the impression of spreading FUD.

I can't speak for anyone else, but I definitely appreciate what you've said. Direct, honest, and clearly explained information is always good. I hope no one would consider it FUD, but I guess some might. I've always enjoyed reading your posts, since you seem to just state the facts as they are and leave the rest up to everyone else's opinions.


amigo wrote:
You'd think that building LFS or gentoo would make anyone understand it all. But, it isn't really so. Since LFS relies on pre-written scripts and gentoo on mass-rebuild commands, it's difficult for most folks to really get an overview of what is going on. The concepts of circular-dependency and toolchain sanity don't become clear at all.

Still, doing either one will give a user a lot of appreciation for how big a job it is to build and maintain even a small distro.


As I mentioned before, I did build gentoo once, but honestly... I don't think I really got much out of it. Sure, I went through the process... but watching crap scroll by for hours does not make anyone a Linux expert (contrary to what some people think). It was an interesting adventure, but at the end I kinda felt like I had just built a big Lego kit. I put all the pieces together in the right order and followed the instructions... but didn't really know much more than when I started. Aside from a few things here and there... most of it was just 'going through the motions'.


jamesbond wrote:
Thank you both, especially amigo, for the informative post (thanks for pointing out the binutils dependency too).

Ditto.

jamesbond wrote:
I'm glad you mention that there *are* in fact breakages in rolling-release distros; perhaps not that apparent but there are. I'm surprised that even gentoo has this problem, I thought as a source-based distro it automatically avoids this issue.

Thinking about it a bit more... perhaps one of the reasons issues don't arise that much is that, since it's a rolling release, all the packages are constantly being updated as soon as a new version is deemed stable... so packages are always catching up with the newer toolchain as time goes by.


jamesbond wrote:
That being the case, for these distros (and for us, if we choose to follow their way) it becomes benefit/risk management. What is the benefit of upgrading to a new glibc versus the risk of getting obscure and hard-to-debug breakage? Is it worth it to let 2% of the programs fail, if we enjoy the benefit that the new glibc brings to the other 98%? Would it be possible to identify which progs fall into that 2%? (I suspect the answer is no - unless one goes and does QA and tests every function of every program.) Am I correct?


Based on nothing more than 'assumptions' in my own head, at this point I'd think the 2% failure rate is acceptable, because at that point we can just recompile those programs as needed with the newer toolchain. But that's just me speaking for me... that's a decision every dev would have to consider and decide for themselves.


jamesbond wrote:
As a side question: I think I understand the circular dependency between gcc/glibc/binutils, but what I don't understand is why the circular dependency is needed at all. As far as theory goes, there should not be any dependency - a static compiler can produce code from source without any libc; a libc can be built by, and be usable for, any compiler as long as it follows agreed specifications (calling conventions, name mangling, etc); and the same goes for binutils (as long as it agrees with the compiler on object formats, etc). LFS builds the toolchain two times (one for cross-compiling - amigo's steps 1-3 - and one for building the actual LFS binaries - amigo's steps 4-6), and for each toolchain gcc is compiled twice: a "naked gcc" (amigo's step 2 or 5) and then the full gcc linked with glibc (amigo's step 3 or 6). I don't understand why the "naked gcc" (the output of step 2 or 5) can't be used as-is to compile programs directly. Isn't it just a matter of specifying where the std include files and libs are? Why do we even need step 3/6? Is this circular dependency a (mis)feature of gcc?

+1 on this question.

amigo

Joined: 02 Apr 2007
Posts: 2251

PostPosted: Fri 08 Feb 2013, 06:41    Post subject:  

I knew that wouldn't be the end of it -no matter how I tried...

1. Indeed, why not just use a completely static toolchain? Sounds great until you need to compile and use some C++ code; there you must have some shared libs which are part of the gcc and g++ packages.
Every C++ program needs libstdc++, which is part of the g++ installation, and many of them also need libgcc_s.so, which is part of gcc. No getting around that. And C++ programs' dependencies are even worse than C progs' -usually tied to a specific version, with no forward or backward compatibility at all.

Well, what about statically linking to glibc, at least? You can do that, but some components of glibc will still need the shared libs at runtime -you'll see the warning about it at compile-time. Sounds crazy, but it's true. You can lay it all on one fellow, whose name you can quickly find using Google or your favorite search engine. Just type in the two words 'glibc asshole' and the first name you see in the results is him -no kidding!
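To make that caveat concrete, here is roughly what the link-time warning looks like (an illustrative transcript; the exact wording varies by glibc version):

```shell
# Illustration only: statically linking a program that calls an NSS-backed
# function such as getpwnam() still needs shared glibc pieces at runtime,
# and the linker says so at link time:
#
#   $ gcc -static lookup.c -o lookup
#   warning: Using 'getpwnam' in statically linked applications requires
#   at runtime the shared libraries from the glibc version used for linking
```

The culprit is glibc's NSS (name service switch) machinery, which loads plugin libraries like libnss_files.so at runtime even from a "static" binary.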

2. "2% failure rate is acceptable" Totally irresponsible if you pretend to be a developer. Do you feel the same about security bugs? I mean, if I know that 2% of the programs I distribute to the world could quite possibly be a security risk, is it okay if I don't *do all I can* to eliminate them? As a user, you can take any such risk you like. But, if others are using your work, how could you possibly take that attitude?

"identify which progs fall into that 2% (I suspect the answer is no - unless one goes to do QA and test every functionality of every programs). Am I correct?" Yes, you are correct. As I said, no one beats debian on this account -they have hundreds of package maintainers and many thousands of testers. Of course, their results find many bugs that have nothing to do with the toolchain or mis-matched library versions.

3. "gdb" gdb is not part of the toolchain. It is included in the very best cross-toolchain builds because of what the feedback from it is worth. I don't use it, so I don't build it. But I'm never on the front, bleeding edge either. The basic toolchain is just binutils/gcc/libc (it doesn't have to be glibc). But it is more complicated with modern versions of gcc, which depend not only on libc, but also on libelf, gmp, mpfr and mpc. So you have to work those into your 2-pass strategy. Lovely!

4. "read the forums before doing a pacman -Syu" First, you have to remember that pacman has to be able to do the right thing in every situation -that means someone has designed and implemented a sophisticated way of determining what *is* the Right Thing. And that's just from the standpoint of installing or upgrading already-built packages. The package maintainer has already worked out the hard part of creating those packages the way they are wanted. In an ideal world... In reality, arch/gentoo, etc. drop the ball sometimes for a few days. How many threads have you seen about fixing up problems after running 'emerge world' or 'pacman -Syu'? Sure, it all gets worked out after a while -but don't ya wish it hadn't happened?

Hmm, expect me to need a couple of days rest again after this. I think explaining these things is harder than doing them, so maybe I'll relax by having a go at the most complex cross-compile toolchain possible -a Canadian cross! What's that, you say? I'll explain with an example: I have a nice fast x86 desktop, an older iMac with ppc arch, and I have a new ARM device. Let's say I want to compile a toolchain on my x86, but I want it to run on the old iMac. But the toolchain will be used to build binaries for the new ARM box... that's a Canadian cross -the Holy Grail. Just kidding! I have better things to do -and I'm really glad I do.
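In GNU configure terms, the three machines in that example map onto --build, --host, and --target. A sketch of the invocation (the triplet spellings below are plausible assumptions, not tested values):

```shell
# Canadian cross, expressed as a configure invocation (illustrative only):
#   --build  = machine doing the compiling      (fast x86 desktop)
#   --host   = machine the toolchain runs on    (old PowerPC iMac)
#   --target = machine the toolchain emits for  (new ARM device)
./configure --build=x86_64-pc-linux-gnu \
            --host=powerpc-unknown-linux-gnu \
            --target=arm-unknown-linux-gnueabi
```

An ordinary cross-compile is the special case where --build and --host are the same machine; the Canadian cross makes all three differ, which is why it is the hardest configuration to get right.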
Q5sys


Joined: 11 Dec 2008
Posts: 1066

PostPosted: Fri 08 Feb 2013, 09:59    Post subject:  

amigo wrote:
2. "2% failure rate is acceptable" Totally irresponsible if you pretend to be a developer. Do you feel the same about security bugs? I mean, if I know that 2% of the programs I distribute to the world could quite possibly be a security risk, is it okay if I don't *do all I can* to eliminate them? As a user, you can take any such risk you like. But, if others are using your work, how could you possibly take that attitude?

"identify which progs fall into that 2% (I suspect the answer is no - unless one goes to do QA and test every functionality of every programs). Am I correct?" Yes, you are correct. As I said, no one beats debian on this account -they have hundreds of package maintainers and many thousands of testers. Of course, their results find many bugs that have nothing to do with the toolchain or mis-matched library versions.


Ok, perhaps I didn't explain myself fully enough. I don't mind a 2% failure rate, because I'd be planning on fixing those issues before releasing. I'm not talking about building the newest toolchain and throwing it out to everyone before testing what will and won't work with it. That, in my mind, is irresponsible. But if I go into it knowing the changes I make will break 2% of my repo (for example)... then I'm going into it knowing I need to rework at least 2% of my repo before release. Is that extra work for me? Yes, absolutely.
Now I can only speak for myself, but if I'm willing to do that extra work to resolve the 2% breakage before release... how is that being irresponsible? Perhaps you're looking at this from a different perspective, and if that's the case I want to understand your perspective, because I'm obviously missing something.

As for security... No, releasing security bugs into the wild is never acceptable. But I know for a fact that during development of patches, sometimes other issues come up which are resolved before release of the patch. Well... except in Oracle's case with Java.

My 2% comment was about failure of things on my end BEFORE release to users. I don't mind breaking things in the alpha stage since it's only me or another dev working on it. I would never intend to release something like that publicly with the knowledge that it might mess up something for a user.

amigo wrote:
Hmm, expect me to need a couple of days' rest again after this. I think explaining these things is harder than doing them, so maybe I'll relax by having a go at the most complex cross-compile toolchain possible -a Canadian cross! What's that, you say? I'll explain with an example: I have a nice fast x86 desktop, an older iMac with ppc arch, and I have a new ARM device. Let's say I want to compile a toolchain on my x86, but I want it to run on the old iMac. But the toolchain will be used to build binaries for the new ARM box... that's a Canadian cross -the Holy Grail. Just kidding! I have better things to do -and I'm really glad I do.


Enjoy taking a break. Smile And when you are able to drop some more insight into the thread, I'll be appreciative.

amigo

Joined: 02 Apr 2007
Posts: 2251

PostPosted: Sat 09 Feb 2013, 03:12    Post subject:  

I didn't mean to be accusatory. The problem of finding the failures remains the same -it's a bigger job than 'creating a distro' or even creating a single package. If you have lots of testers, or devise some automated way of checking everything, it becomes easier. In the end, checking some applications requires very interested users who know the software, use it often and 'push' it often.
tallboy


Joined: 21 Sep 2010
Posts: 444
Location: Oslo, Norway

PostPosted: Sat 09 Feb 2013, 14:36    Post subject:  

This is very interesting reading, even if you gentlemen reside on a programming level quite a long way above mine. And it pleases me that the discussion between you is kept on a civilized level, far from the 'asshole' version...

Please correct me if I am wrong, but it seems to me that many of Puppy's package repositories contain .pets that were originally compiled for another version. With regard to what you have written above, will that 'mix-and-match' policy be the reason for some of the bugs that keep reappearing in later versions? Would a different compiling policy improve on that situation?

tallboy

_________________
True freedom is a live Puppy on a multisession CD/DVD.
amigo

Joined: 02 Apr 2007
Posts: 2251

PostPosted: Sat 09 Feb 2013, 16:11    Post subject:  

will that 'mix-and-match' policy be the reason for some of the bugs that keep reappearing in later versions? Would a different compiling policy improve on that situation?

Yes, and Yes.

I'm really surprised that people continue to try to produce or use these apps, hoping they'll work somewhere else. If only, instead of posting packaged applications, people would post build scripts which configure, compile and create the packages -that would solve it, at least as long as one could bring oneself to run that script, or could cajole someone else with the same variant into doing so.
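A bare-bones sketch of the kind of build script meant here -the package name, version and source URL are placeholders, not a real package:

```shell
# Hypothetical build script: fetch, configure, compile, and stage.
# PKG, VER and the download URL are invented for illustration.
PKG=foo
VER=1.0
wget -c "http://example.com/src/${PKG}-${VER}.tar.gz"
tar xf "${PKG}-${VER}.tar.gz"
cd "${PKG}-${VER}"
./configure --prefix=/usr
make
make DESTDIR="$PWD/pkgroot" install
# ...then wrap pkgroot/ into a .pet (or .txz, etc.) using the native
# packaging tool of whichever Puppy variant actually ran this script.
```

Because each machine compiles against its own installed libraries, the resulting package matches that variant by construction -which is the whole point of shipping the script instead of the binary.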
jamesbond

Joined: 26 Feb 2007
Posts: 2183
Location: The Blue Marble

PostPosted: Sat 09 Feb 2013, 18:58    Post subject:  

Thank you amigo.

Quote:
1. Indeed, why not just use a completely static tool chain? Sounds great until you need to compile and use some C++ code. There you must have some shared libs which are part of the gcc and g++ packages.
Every C++ program needs libstdc++, which is part of the g++ installation, and many of them also need libgcc_s.so, which is part of gcc. No getting around that. And C++ programs' dependencies are even worse than C progs' -usually tied to a specific version, with no forward or backward compatibility at all.
I totally forgot about C++, perhaps because I don't use (and don't like) C++. Well, that's another reason to keep avoiding C++ Smile

But is this a strict requirement of C++? I mean, other compilers like Clang/LLVM (or pcc or BSD cc) - do they impose the same requirements, or is this just a quirk of gcc + glibc? Judging by your next statement about that "glibc asshole" - I think it is probably the latter Smile (Thanks for that google search keyword - it gives me no end of entertaining posts Smile).

cheers!

_________________
Fatdog64, Slacko and Puppeee user. Puppy user since 2.13.
Contributed Fatdog64 packages thread
technosaurus


Joined: 18 May 2008
Posts: 4351

PostPosted: Sat 09 Feb 2013, 19:28    Post subject:  

It would be fine, except amigo's comment about C++ is only partly true: you can build uClibc++ without exceptions and RTTI and not need the libstdc++ bits. Also, even if you do add exception and RTTI support, you only _need_ libsupc++, which is a much smaller static library that can be included into the uClibc++ libs.

Note: without exceptions and/or RTTI, some programs will still not build, but you can make your own try/catch/throw/... wrappers like this:
http://www.di.unipi.it/~nids/docs/longjump_try_trow_catch.html
The most complex thing I have done this way is the scintilla libs for geany (dillo3 and fltk-1.3 build fine sans exceptions and RTTI, without any changes)

_________________
Web Programming - Pet Packaging 100 & 101
tallboy


Joined: 21 Sep 2010
Posts: 444
Location: Oslo, Norway

PostPosted: Sun 10 Feb 2013, 02:33    Post subject:  

amigo wrote:
I'm really surprised that people continue to try to produce or use these apps, hoping they'll work somewhere else. If only, instead of posting packaged applications, people would post build scripts which configure, compile and create the packages -that would solve it, at least as long as one could bring oneself to run that script, or could cajole someone else with the same variant into doing so.


I don't want to move away from the original topic, but one of the related problems with .pets is that there is no way to see from its name how it was compiled. Including all necessary info in the name of a .pet compiled for one specific Puppy would not be sensible, but could a short code be included in the name, one that corresponds to a 'compilation code' included with the Puppy version, à la the iso's md5sum?
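One possible shape for such a code, sketched purely as an illustration -the choice of inputs to hash, and the package name, are invented:

```shell
# Hypothetical 'compilation code': hash a few toolchain facts down to
# six hex digits and embed them in the package name.  A .pet built on
# a matching toolchain would carry a matching code.
code=$( (gcc -dumpversion; ldd --version | head -n1) | md5sum | cut -c1-6 )
echo "myapp-1.0-${code}.pet"
```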

tallboy

_________________
True freedom is a live Puppy on a multisession CD/DVD.
Powered by phpBB © 2001, 2005 phpBB Group