Static Linking Considered Harmful
I found this on Ulrich Drepper's website - I'm not sure if he wrote it. According to Wikipedia he's the lead contributor and maintainer of glibc.
http://www.akkadia.org/drepper/no_static_linking.html
His site also includes things like tutorials on "Optimizing with gcc and glibc" and "How to Write Shared Libraries"
Do you know a good gtkdialog program? Please post a link here
Classic Puppy quotes
ROOT FOREVER
GTK2 FOREVER
All perfectly reasonable arguments, but no answer to the library version mayhem which is the curse of Linux. That is why people choose to use static libraries.
[b]Classic Opera 12.16 browser SFS package[/b] for Precise, Slacko, Racy, Wary, Lucid, etc available[url=http://terryphillips.org.uk/operasfs.htm]here[/url] :)
Have you really seen that "mayhem"? I just haven't, really.
I mentioned the other day: I tend to wonder if "dependency hell" on Linux is mostly just urban legend.
I do see it on Windows at work all the time. Most programs will have their own versions of all their dependencies, kept in their own folders (just as bad as static linking for wasting space and bandwidth). But the odd application will install things in the main "system" folder, and very often these will be obsolete versions, in which case they will break a bunch of other programs, because they override the versions programs keep in their own folder. But if there was a standard "repository" for Windows (there is a real package manager, used by cygwin and osgeo4w and various things), there wouldn't be a problem, because all the programs would be compiled against the same current libs.
Do you know a good gtkdialog program? Please post a link here
Classic Puppy quotes
ROOT FOREVER
GTK2 FOREVER
I guess what I'm saying is really that if the package management system works acceptably, it is the answer to your "mayhem".
Do you know a good gtkdialog program? Please post a link here
Classic Puppy quotes
ROOT FOREVER
GTK2 FOREVER
- technosaurus
- Posts: 4853
- Joined: Mon 19 May 2008, 01:24
- Location: Blue Springs, MO
- Contact:
puppy is mostly dynamically linked --- that's why when you "upgrade" libraries, things can break ... not so when statically linked
go ahead and upgrade libxcb... and have fun
upgrade from gtk2.16 to any version past 2.18 and be annoyed
I would take any advice from the maintainer of wontfix-libc with a grain of salt
the problem isn't really shared libs either - it's the crap GNU tools we use to build them that unnecessarily link in symbols that are not needed, because pkg-config wrongly says to do so (or auto* thinks it did)
hint: when you have a properly configured toolchain you can build a nearly unbreakable gtk2 binary with
gcc $CFLAGS `pkg-config gtk+-2.0 --cflags` *.c -o outputbinary $LDFLAGS -lgtk-x11-2.0
but the stupid autotools link in the entire friggin' dependency toolchain directly, causing every used function to get its own special spot in the global offset table so that it can theoretically start .0000001s faster so long as nothing _ever_ moves, changes, gets rebuilt with slightly modified options or compiler flags ... then it loads much much much slower (not to mention, creating an unnecessarily larger binary)
then god forbid you want/need to upgrade to a version with a changed API ... say xcb ... even though only libX11 directly depends on it (and a few less popular apps), nearly everything built against libX11 will break... no problem - just recompile X11 and you're good, right? nope - the linker listened to you when you told it to directly link libxcb, so everything you compiled with autotools is now broken
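That transitive over-linking is easy to see with binutils. Here's a minimal sketch (file names are invented, and -lm stands in for any library the program never actually calls); -Wl,--no-as-needed forces the classic behaviour explicitly, since some distros now default to --as-needed:

```shell
cat > prog.c <<'EOF'
int main(void) { return 0; }   /* calls nothing from libm */
EOF
# classic behaviour: a DT_NEEDED entry is recorded for every -l flag, used or not
gcc -Wl,--no-as-needed -o overlinked prog.c -lm
# --as-needed: only libraries that actually resolve a symbol get recorded
gcc -Wl,--as-needed -o trimmed prog.c -lm
readelf -d overlinked | grep NEEDED   # lists libm.so.* even though it's unused
readelf -d trimmed | grep NEEDED      # libm is gone, only libc remains
```

The second binary survives an incompatible libm upgrade that would break the first, which is the xcb situation in miniature.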
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].
So are there alternatives to the gnu tools that solve those problems?
Do you know a good gtkdialog program? Please post a link here
Classic Puppy quotes
ROOT FOREVER
GTK2 FOREVER
- technosaurus
disciple wrote: So are there alternatives to the gnu tools that solve those problems?
yes - jwm doesn't use them, mtpaint doesn't (they have their own configure scripts), and it is perfectly acceptable to have an editable Makefile or other custom build script.
The problem is, we dropped the ball so long ago by not integrating the dev environment with the packaging environment, that now it's really too late for a small team to go back and pick up the ball and "do it right" ... but we can at least do it better
On a separate note - just for reference I can build (and have) a fully self contained 2.6.32 kernel with a statically linked and built-in userland including X in a single 1Mb kernel image that will run with <4mb RAM ... not really possible with shared glibc ... but then again I am using multicall binaries built with my own userland build scripts (only because it was easier for me to do it that way... Not something I would want to do for _every_ package by myself)
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].
technosaurus wrote: The problem is, we dropped the ball so long ago by not integrating the dev environment with the packaging environment, that now it's really too late for a small team to go back and pick up the ball and "do it right" ... but we can at least do it better
I'm struggling to follow you here - have you come straight from the package management thread by any chance?
Who is the "we" you refer to? Puppy packagers? Developers of Linux software in general? Distro builders in general?
Do you know a good gtkdialog program? Please post a link here
Classic Puppy quotes
ROOT FOREVER
GTK2 FOREVER
- technosaurus
disciple wrote: I'm struggling to follow you here - have you come straight from the package management thread by any chance? Who is the "we" you refer to? Puppy packagers? Developers of Linux software in general? Distro builders in general?
not just Linux, pretty much all *nixes. Have you ever watched what a ./configure script does or gone through one? OMG, what a disaster, but I find it hilarious when I download a 1000 byte program with a 100kb config script - Rob Landley says it best
http://landley.net/notes-2011.html#28-08-2011
*BSD has bsdbuild and others which do essentially the same thing
my point with integrating dev and packaging was that all of the garbage that the configure script does could already be done by a properly set-up package management system
for example (not well thought out, but just a "for instance")
if a library (libmyclib) provides snprintf it could add the following to the "<systemconfig_file>"
#define HAS_SNPRINTF -lmyclib
which would not only tell the system that we have snprintf, but how to link it
... the only plausible way to even attempt this (and get it into mainstream) is to try and shim it into the autotools caching mechanism to make it think it has already verified everything
.....sorry to get off topic, but getting back to shared vs. static
shared libs are vulnerable to this
LD_PRELOAD=/tmp/vicious_attacklib.so <binary>
if a shared lib has a vulnerability ALL of the programs linked against it also do (with a static link, some may have been linked against non-vulnerable versions or the vulnerable code may not even be linked in if it isn't used)
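For anyone who hasn't seen the LD_PRELOAD trick in action, a minimal self-contained sketch (the file names are made up, and a real attack library would do something far nastier than swallowing output). A statically linked build of the same program would ignore the preload entirely:

```shell
# hypothetical victim program
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF
# hypothetical "attack" library: interposes puts() and swallows all output
cat > interpose.c <<'EOF'
int puts(const char *s) { (void)s; return 0; }
EOF
gcc -o hello hello.c
gcc -shared -fPIC -o interpose.so interpose.c
./hello                            # prints "hello"
LD_PRELOAD=./interpose.so ./hello  # prints nothing - puts() was hijacked
```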
and it is FUD that static binaries are slower (in fact they are ~100-4000% faster)
http://sta.li/faq
From the article: * fixes (either security or only bug) have to be applied to only one place: the new DSO(s). If various applications are linked statically, all of them would have to be relinked. By the time the problem is discovered the sysadmin usually forgot which apps are built with the problematic library. I consider this alone (together with the next one) to be the killer arguments.
breakages only have to be applied in 1 place too
If you maintain your source tree it's pretty easy to figure out using simple tools (find and grep).
and verify changes/lack of changes using edelta or xdelta
From the article: * Security measures like load address randomization cannot be used. With statically linked applications, only the stack and heap address can be randomized. All text has a fixed address in all invocations. With dynamically linked applications, the kernel has the ability to load all DSOs at arbitrary addresses, independent from each other. In case the application is built as a position independent executable (PIE) even this code can be loaded at random addresses. Fixed addresses (or even only fixed offsets) are the dreams of attackers. And no, it is not possible in general to generate PIEs with static linking. On IA-32 it is possible to use code compiled without -fpic and -fpie in PIEs (although with a cost) but this is not true for other architectures, including x86-64.
yes, because they aren't nearly as vulnerable to the primary vectors for those exploits such as LD_* attacks and ldd escalations (people put locks on doors, not walls)
you _can_ "statically" link a pie, just compile your "static" lib(s) with -fpic (you will get the dirty pages and other pic overhead but at least the unused code will be removed)
From the article: * more efficient use of physical memory. All processes share the same physical pages for the code in the DSOs. With prelinking startup times for dynamically linked code is as good as that of statically linked code.
no, they share _some_ pages (only read-only ones), add quite a few extra dirty pages, and if you prelink then the load times skyrocket once you change a single shared lib - don't ever do it, it will suck almost immediately
I have tested this with a plethora of compiler/linker optimizations, hacks and tricks and the closest I could get to the speed of its static binary counterpart's startup was still only half as fast
From the article: * all kinds of features in the libc (locale (through iconv), NSS, IDN, ...) require dynamic linking to load the appropriate external code. We have very limited support for doing this in statically linked code. But it requires that the dynamically loaded modules available at runtime must come from the same glibc version as the code linked into the application. And it is completely unsupported to dynamically load DSOs this way which are not part of glibc. Shipping all the dependencies goes completely against the advantage of static linking people cite: that shipping one binary is enough to make it work everywhere.
another wontfix glibc bug
From the article: * Related, trivial NSS modules can be used from statically linked apps directly. If they require extensive dependencies (like the LDAP NSS module, not part of glibc proper) this will likely not work. And since the selection of the NSS modules is up to the person deploying the code (not the developer), it is not possible to make the assumption that these kind of modules are not used.
yet another wontfix glibc bug
From the article: * no accidental violation of the (L)GPL. Should a program which is statically linked be given to a third party, it is necessary to provide the possibility to regenerate the program code.
seriously - you don't think it is possible to accidentally violate the LGPL? ... if you can't remember what library version a program is linked against (because you didn't track it), what magic makes you remember patching it to build your code
if you are statically linking there is no doubt whether or not you need to include the static libs of lgpl libs
From the article: * tools and hacks like ltrace, LD_PRELOAD, LD_PROFILE, LD_AUDIT don't work. These can be effective debugging and profiling, especially for remote debugging where the user cannot be trusted with doing complex debugging work.
exactly, but there are others that do work that _aren't_ a giant gaping security hole... tools that aren't designed specifically for shared libraries (strace for instance) do work ... that's like saying hammers don't make very good screwdrivers
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].
Thanks, that's a great link, although it would be good to see a lot of real-life numbers, particularly as those guys are focused on small programs. As a user I don't think I generally care about small programs (exceptions would be a few things used by shell scripts like pburn, but for most of those you ideally use busybox anyway). Where I would notice a big performance increase is in big programs like browsers. They say:
usually big static executables (which we try to avoid) easily outperform dynamic executables with lots of dependencies
I'm guessing "easily outperform" is a lot less than 4000%... but where are the numbers?
How common are "Good libraries" that "implement each library function in separate object (.o) files, this enables the linker (ld) to only extract and link those object files from an archive (.a) that export the symbols that are actually used by a program"?
================
Off-topic: have you tried Stali or Sabotage Linux or anything? I was quite interested in a distro based around uClibc or something and busybox, but it looked like they weren't that viable (alive and with a good selection of apps). A distro based on static linking would be even more interesting.
Do you know a good gtkdialog program? Please post a link here
Classic Puppy quotes
ROOT FOREVER
GTK2 FOREVER
- technosaurus
disciple wrote: How common are "Good libraries" that "implement each library function in separate object (.o) files, this enables the linker (ld) to only extract and link those object files from an archive (.a) that export the symbols that are actually used by a program"?
Not very, I can think of one that does it (dietlibc) but it is not really "Good" for other reasons
disciple wrote: Off-topic: have you tried Stali or Sabotage Linux or anything? I was quite interested in a distro based around uClibc or something and busybox, but it looked like they weren't that viable (alive and with a good selection of apps). A distro based on static linking would be even more interesting.
Goingnuts and I have been trying to marry the best of both worlds by merging the best mix of smaller (but still useful) tools, static build advantages, small replacement libraries, the multicall binary (mcb) concept and compiler/linker optimizations
we try to strike a balance between size, functionality etc
for instance one mcb contains the X11 apps ... xinit, Xvesa, jwm, rxvt
another has gtk1 apps ... Rox-Filer, minimum profit, dillo1, Xdialog, mtpaint, aumix
I did do testing on this with multiple apps open, compared to the same mcb built against my Wary box's shared libs, and resource usage in Wary increased at nearly double the rate per app. This is fairly consistent with the firefox and seamonkey builds from lamarelle.org (in case you want a "real world" example), though they only use static mozilla libs (well, mostly)
Last edited by technosaurus on Wed 22 Feb 2012, 18:23, edited 1 time in total.
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].
technosaurus wrote: Have you ever watched what a ./configure script does or gone through one? OMG, what a disaster, but I find it hilarious when I download a 1000 byte program with a 100kb config script - Rob Landley says it best
In 2006, when I was choosing which image editor project to contribute my code to, one of my primary criteria was "a configure script which I could read". mtPaint fit the bill; GIMP didn't. The rest is history.
technosaurus: have you done a statistical analysis of common library usage?
The first thing to do would be making a list of apps. that are relevant, then take statistics on their libs. to use in separating static and shared libs.
The thought being: compile libs. statically if they're small and seldom used.
Though even common libs. with many different versions would qualify too. The most commonly used version would be shared and the others static.
- technosaurus
Gparted is a good example - the lib*mm.so libs are essentially garbage. Barry even compiles them statically (though he skips the compiler flags needed for removing most of the unneeded sections). As an exercise, try compiling gparted with all shared libs, then with static libs from the devx, and compare the size differences.
Then if you want to see what difference cflags make, rebuild all the mm libs with -ffunction-sections -fdata-sections and -Wl,--gc-sections
(there are more, but these make the most difference)
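A small demonstration of what those flags actually do (file names invented): with -ffunction-sections each function lands in its own section, and -Wl,--gc-sections then drops every section nothing references:

```shell
cat > gclib.c <<'EOF'
int used(void)   { return 1; }
int unused(void) { return 2; }   /* never called - should be discarded */
EOF
cat > gc_main.c <<'EOF'
int used(void);
int main(void) { return used() - 1; }
EOF
gcc -ffunction-sections -fdata-sections -c gclib.c gc_main.c
gcc -Wl,--gc-sections -o gcdemo gc_main.o gclib.o
# unused() should no longer appear in the final binary's symbol table
nm gcdemo | grep -w unused || echo "unused() was garbage-collected"
```

Without the section flags, the whole of gclib.o's .text is one section and nothing can be dropped, which is why both the compile-time and link-time flags are needed together.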
Check out my [url=https://github.com/technosaurus]github repositories[/url]. I may eventually get around to updating my [url=http://bashismal.blogspot.com]blogspot[/url].
Speaking of Sabotage...
I've played with Sabotage Linux a little because it uses a libc (musl libc, http://www.etalabs.net/musl/) that matches what I want:
-Standard ABI (fairly constant except for major bugs; aims at LSB ABI compatibility, so eventually it will work with glibc binaries)
-Small source, quick to build (maybe a two-minute build time on an Atom at -j1; uses a hand-edited config.mak instead of a ./configure shell script; 7.2 MB for the latest version of the source + 700 KB of includes)
-Standards-conformant source interface (mostly conforms to X/Open, ISO C99, POSIX; treats nonconformance as a bug that must be fixed instead of "wontfix")
-Fully supports static compilation; designed so every function gets its own file for minimum link overhead, and no extra shared libs get dragged in behind your back
-Small binaries (1.5 MB libc.a includes everything, whereas glibc takes 3 MB just for libc.a and then has all the other stuff; 550 KB libc.so vs 1.3 MB; and a static or shared binary will be smaller than when linked with glibc)
-Designed for low RAM use - the author wrote it because his computer couldn't run libc6 and libc5 was inadequate.
The standards-conformance attitude is an advantage over every other libc; the ABI beats uclibc/klibc, and the size beats glibc.
I've pondered making a Puppy based on it, but don't know that much; but it sure would be nice for that purpose...
I'm currently working (slowly) on "Muslin", a musl-based Linux distro. The long-term aim is to have something light, fast, and X/Open (SUSv4/UNIX2008) conformant.
You guys don't have anything to do with "Starch Linux", do you? It is supposed to be all statically linked, with musl.
Do you know a good gtkdialog program? Please post a link here
Classic Puppy quotes
ROOT FOREVER
GTK2 FOREVER
disciple wrote: You guys don't have anything to do with "Starch Linux", do you? It is supposed to be all statically linked, with musl.
Not really, but some of the developers post on the musl mailing list about mostly-unrelated topics.
From what I've seen, it's currently not a bootstrapping environment. It really got started 2 months ago, and is a side/hobby project (ie, slow development).