Question
I recently received a:
...relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
error while trying to compile a program as a shared library.
Now the solution to this is not too difficult (recompile all dependencies with -fPIC), but after some research it turns out that this problem is only present on x86-64 platforms. On 32-bit, any position-dependent code can still be relocated by the dynamic loader.
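For concreteness, here is a minimal sketch of how the error typically arises (the file names are made up, and the exact relocation reported varies with compiler version and defaults; toolchains that default to PIE may need -fno-pic to reproduce it):

    /* shared.c: taking the address of a global makes the compiler
       materialise an absolute address, which without -fPIC on
       x86-64 becomes a 32-bit relocation such as R_X86_64_32. */
    int counter;

    int *counter_addr(void)
    {
        return &counter;
    }

    /* Building it:
         gcc -c shared.c -o shared.o
         gcc -shared shared.o -o libshared.so    # fails with the error above
       versus:
         gcc -fPIC -c shared.c -o shared.o
         gcc -shared shared.o -o libshared.so    # links fine
    */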
The best answer I could find is:
x86 has support for .text relocations (which is what happens when you have position-dependent code). This support comes at a cost, namely that every page containing such a relocation becomes basically unshared, even if it sits in a shared library, thereby spoiling the very concept of shared libs. Hence we decided to disallow this on amd64 (plus it creates problems if the value needs more than 32 bits, because all .text relocs only have size 'word32')
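As an aside, whether a given library actually contains such .text relocations can be checked from its dynamic section (libfoo.so here is just a placeholder name):

    readelf -d libfoo.so | grep TEXTREL    # prints a TEXTREL entry if any exist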
But I don't find this quite adequate. If relocations spoil the concept of shared libraries, why can it still be done on 32-bit platforms? Also, if changes needed to be made to the ELF format to support 64-bit, why were not all fields increased in size to accommodate?
This may be a minor point, but it is motivated by the fact that a) the code in question is a scientific code and it would be nice not to take a performance hit, and b) this information was nigh impossible to find in the first place!
[Edit: 'The Answer'
@awoodland's answer is probably the best 'literal answer'; @servn added some good information.
While searching for more about the different types of relocations I found this and, ultimately, an x86_64 ABI reference (see page 68).]
Answer 1:
As I understand it, the problem is that x86-64 introduces a new, faster way of referencing data relative to the instruction pointer, which did not exist on x86-32.
This article has a nice in-depth analysis of it, and gives the following executive summary:
The ability of x86-64 to use instruction-pointer relative offsetting to data addresses is a nice optimisation, but in a shared-library situation assumptions about the relative location of data are invalid and can not be used. In this case, access to global data (i.e. anything that might be changed around on you) must go through a layer of abstraction, namely the global offset table.
I.e. -fPIC adds an extra layer of abstraction to addressing, to make what was previously possible (and a desirable feature) in the usual addressing style still work with the newer architecture.
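A rough sketch of the difference being described, using a read of a global as the example (the assembly shown is typical GCC output, but the exact instructions vary with compiler version and flags):

    /* access.c: reading a global that may be defined in another module. */
    extern int shared_value;

    int read_value(void)
    {
        return shared_value;
    }

    /* Without -fPIC (small code model), the data can be reached
       directly, e.g. with a RIP-relative operand:

         movl   shared_value(%rip), %eax

       With -fPIC, the distance between code and data is unknown
       until load time, so the access is indirected through the
       global offset table:

         movq   shared_value@GOTPCREL(%rip), %rax   # fetch the address from the GOT
         movl   (%rax), %eax                        # then load the value
    */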
Answer 2:
But I don't find this quite adequate. If it is the case that relocations spoil the concept of shared libraries, why can it be done on 32bit platforms?
It can be done, it just isn't particularly efficient: computing the relocations has runtime costs, the relocated pages take additional memory (each patched page becomes a private copy rather than a shared one), and the mechanism introduces a lot of complexity into the executable loader. Also, Linux distros really want to encourage all code to be compiled with -fPIC, because changing the base address of an executable is a mitigation strategy that makes writing exploits for security vulnerabilities more difficult.
It's also worth mentioning that -fPIC isn't generally a significant performance cost, especially if you use -fvisibility=hidden or equivalent.
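For instance, hidden visibility tells the compiler that a symbol cannot be preempted by another module, so even under -fPIC it can skip the GOT indirection (a sketch with made-up names):

    /* internal.c: a global that is private to the library. */
    __attribute__((visibility("hidden"))) int cache_size = 64;

    int get_cache_size(void)
    {
        /* Compiled with -fPIC, GCC can still emit a direct
           RIP-relative load here (movl cache_size(%rip), %eax)
           rather than going through the GOT, because a hidden
           symbol cannot be interposed at run time. */
        return cache_size;
    }

Compiling with -fvisibility=hidden applies the same treatment to every symbol that isn't explicitly exported.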
why were not all fields increased in size to accommodate?
The "field" in question is the immediate field of x86-64 addressing modes, which isn't under the control of the ELF developers.
Answer 3:
You can use the -mcmodel=large option to build shared libraries without -fPIC on x86_64.
Reference: http://eli.thegreenplace.net/2012/01/03/understanding-the-x64-code-models/
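A sketch of what that build looks like (file names are placeholders, and whether the link succeeds depends on the linker's tolerance for text relocations):

    gcc -mcmodel=large -c shared.c -o shared.o   # addresses become 64-bit immediates
    gcc -shared shared.o -o libshared.so         # may link, with DT_TEXTREL set

The large code model materialises addresses as full 64-bit immediates (R_X86_64_64 relocations), so the 'word32' overflow problem from the question disappears, but the resulting library still carries text relocations, with the page-sharing cost described above.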
Source: https://stackoverflow.com/questions/7216244/why-is-fpic-absolutely-necessary-on-64-and-not-on-32bit-platforms