I found this thread about Rust supporting RHEL 6:
https://github.com/rust-lang/rust/issues/62516
But I actually found that applications built on CentOS 7 cannot run on CentOS 6 because of their glibc version requirements.
As documented in:
https://github.com/rust-lang/libc/issues/1617
fn main() {
println!("Hello, world!");
}
$ ./hello-world
./hello-world: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./hello-world)
Reading that Rust would actually support EL6 makes me wonder why the highest available glibc version is chosen at build time, and not the glibc version that is actually required.
I'm pretty sure this is expected behavior. The build system's libc is used to link the executable.
To achieve real backward compatibility you would not require the newest version, but only the one whose functionality you actually use.
As documented in the other issue
https://github.com/rust-lang/libc/issues/1617
other compilers are completely backward compatible, and their binaries do run on CentOS 6 when compiled on CentOS 7, because they do not require the newest available symbol versions.
$ ./hello-world_fpc.run
Hello, world!
$ ./hello-world_fpc.run -h
Usage: /path/to/hello-world_fpc.run -h
Going even further, to build backward- and forward-compatible software you would avoid the newest features of the newest version (to stay backward compatible) and drop old, obsolete features (to stay forward compatible),
as documented in this issue:
https://github.com/rust-lang/rust/issues/34668
You are probably talking about the __cxa_thread_atexit_impl@@GLIBC_2.18 symbol: https://github.com/rust-lang/rust/blob/d8bdb3fdcbd88eb16e1a6669236122c41ed2aed3/src/libstd/sys/unix/fast_thread_local.rs#L19-L43
It's used since Rust uses TLS during executable startup.
This is duplicate of https://github.com/rust-lang/rust/issues/36826
Closing as duplicate, then.
I executed the command documented in
https://github.com/rust-lang/rust/issues/36826
on the CentOS 7 hello-world application:
$ objdump -T hello-world|grep -i glibc
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 getenv
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 dl_iterate_phdr
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.4 __snprintf_chk
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 free
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 abort
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __errno_location
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_getattr_np
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 sigaction
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.4 __xpg_strerror_r
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 write
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 getpid
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.2 pthread_cond_wait
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_mutexattr_destroy
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.4 __stack_chk_fail
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 mmap
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_setspecific
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_mutex_destroy
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_mutexattr_init
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strrchr
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 lseek
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 dladdr
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memset
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_condattr_destroy
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 getcwd
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 close
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_attr_getstack
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memchr
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 read
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memcmp
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_attr_init
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3 __tls_get_addr
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.2 pthread_cond_signal
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_rwlock_rdlock
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strcmp
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 signal
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.2 pthread_cond_init
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_getspecific
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_mutex_unlock
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_mutexattr_settype
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 malloc
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __fxstat
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_rwlock_unlock
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.3 pthread_condattr_setclock
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 realloc
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 munmap
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_key_create
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_condattr_init
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memmove
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_self
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memrchr
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.2 pthread_cond_destroy
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 open
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 sysconf
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_attr_destroy
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_key_delete
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 bsearch
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 posix_memalign
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 sigaltstack
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_mutex_init
0000000000000000 w DF *UND* 0000000000000000 GLIBC_2.2.5 __cxa_finalize
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 pthread_mutex_lock
The function documented in
https://github.com/rust-lang/rust/issues/36826
does not show up.
Instead, the output shows that memcpy() requires GLIBC_2.14:
$ objdump -T hello-world|grep -i glibc|grep -i "GLIBC_2.14"
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy
So it is not a duplicate,
but it does go against the claim of "Rust being compatible with EL6",
because, as documented in
https://github.com/rust-lang/libc/issues/1617
CentOS 6 only provides symbol versions up to GLIBC_2.12:
$ rpm -q --provides glibc|grep -i "GLIBC_"
libc.so.6(GLIBC_2.10)(64bit)
libc.so.6(GLIBC_2.11)(64bit)
libc.so.6(GLIBC_2.12)(64bit)
libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.2.6)(64bit)
libc.so.6(GLIBC_2.3)(64bit)
libc.so.6(GLIBC_2.3.2)(64bit)
libc.so.6(GLIBC_2.3.3)(64bit)
libc.so.6(GLIBC_2.3.4)(64bit)
libc.so.6(GLIBC_2.4)(64bit)
libc.so.6(GLIBC_2.5)(64bit)
libc.so.6(GLIBC_2.6)(64bit)
libc.so.6(GLIBC_2.7)(64bit)
libc.so.6(GLIBC_2.8)(64bit)
libc.so.6(GLIBC_2.9)(64bit)
But it also provides the memcpy function at version GLIBC_2.2.5:
$ objdump -T /lib64/libc.so.6|grep -i cpy|grep -i memcpy
0000000000091370 w DF .text 0000000000000009 GLIBC_2.2.5 wmemcpy
00000000001010f0 g DF .text 000000000000001b GLIBC_2.4 __wmemcpy_chk
0000000000089720 g DF .text 0000000000000465 GLIBC_2.2.5 memcpy
0000000000089710 g DF .text 0000000000000009 GLIBC_2.3.4 __memcpy_chk
$ objdump -T /lib64/libc.so.6|grep -i cpy
00000000000ff460 g DF .text 000000000000015d GLIBC_2.3.4 __stpcpy_chk
0000000000091470 w DF .text 00000000000000cb GLIBC_2.2.5 wcpncpy
0000000000080c20 g iD .text 0000000000000029 GLIBC_2.2.5 strcpy
0000000000090e20 w DF .text 0000000000000172 GLIBC_2.2.5 wcsncpy
0000000000101150 g DF .text 0000000000000040 GLIBC_2.4 __wcpcpy_chk
000000000008d600 g DF .text 000000000000009e GLIBC_2.2.5 __strcpy_small
0000000000084d10 w iD .text 0000000000000029 GLIBC_2.2.5 stpncpy
0000000000091370 w DF .text 0000000000000009 GLIBC_2.2.5 wmemcpy
00000000000845a0 g DF .text 0000000000000009 GLIBC_2.3.4 __mempcpy_chk
0000000000082c90 g iD .text 0000000000000029 GLIBC_2.2.5 strncpy
0000000000101190 g DF .text 0000000000000017 GLIBC_2.4 __wcsncpy_chk
0000000000084d10 g iD .text 0000000000000029 GLIBC_2.2.5 __stpncpy
00000000000896d0 w DF .text 0000000000000033 GLIBC_2.2.5 memccpy
00000000000845b0 w DF .text 0000000000000452 GLIBC_2.2.5 mempcpy
00000000001010f0 g DF .text 000000000000001b GLIBC_2.4 __wmemcpy_chk
00000000001010a0 g DF .text 0000000000000043 GLIBC_2.4 __wcscpy_chk
0000000000101130 g DF .text 000000000000001b GLIBC_2.4 __wmempcpy_chk
0000000000091540 w DF .text 0000000000000009 GLIBC_2.2.5 wmempcpy
0000000000089720 g DF .text 0000000000000465 GLIBC_2.2.5 memcpy
00000000000ff8b0 g DF .text 0000000000000186 GLIBC_2.3.4 __strncpy_chk
00000000000845b0 g DF .text 0000000000000452 GLIBC_2.2.5 __mempcpy
0000000000090ba0 g DF .text 0000000000000027 GLIBC_2.2.5 wcscpy
000000000008d530 g DF .text 00000000000000c2 GLIBC_2.2.5 __mempcpy_small
000000000008d6a0 g DF .text 000000000000009f GLIBC_2.2.5 __stpcpy_small
0000000000089710 g DF .text 0000000000000009 GLIBC_2.3.4 __memcpy_chk
00000000000ffa40 g DF .text 00000000000000dc GLIBC_2.4 __stpncpy_chk
0000000000084c00 g iD .text 0000000000000029 GLIBC_2.2.5 __stpcpy
00000000000ff620 g DF .text 000000000000015d GLIBC_2.3.4 __strcpy_chk
0000000000101350 g DF .text 0000000000000017 GLIBC_2.4 __wcpncpy_chk
0000000000084c00 w iD .text 0000000000000029 GLIBC_2.2.5 stpcpy
0000000000091440 w DF .text 0000000000000027 GLIBC_2.2.5 wcpcpy
This rather seems related to the issue explained at:
https://stackoverflow.com/questions/35656696/explanation-of-memcpy-memmove-glibc-2-14-2-2-5
Interesting. It should still work when also building on CentOS 6 though, can you confirm this?
As mentioned in the backward-compatibility thread
https://github.com/rust-lang/rust/issues/62516#issuecomment-516130362
Rust is not shipped for CentOS 6.
For this reason I build the applications on CentOS 7, but I need to run them on CentOS 6, too.
https://github.com/rust-lang/rust/issues/62516#issuecomment-516130362 implies that Rust should still work on older versions via rustup:
If our customers would like to use Rust on older RHEL, they can do so via rustup, and we'll support them in the same way we would for any other third-party software.
Rust probably won't work on CentOS 6 because the kernel is too old.
Rust produces call to llvm.memcpy intrinsics and LLVM prefers improved memcpy@@GLIBC_2.14 over old memcpy@@GLIBC_2.2.5.
I don't think there is an easy way for a user to make LLVM pick the older version, but let's ping the LLVM experts: @nagisa @nikic @rkruppe
other Compilers are completely backward compatible and do run on Centos6 when compiled on Centos7 because they do not require the newest available version.
You cannot count on it. If your code had memcpy you'd have exactly the same issue.
Generally if you want to have your binaries working on older system you should build glibc matching the old system and link against it instead of system one.
Rust definitely works in a CentOS 6 userspace with the standard rustup/etc toolchain - I build applications in a centos:6 docker image.
@mati865 I don't think it's possible to just tell LLVM to target an older glibc. The older glibc needs to be actually available, either by building on an old distro (easiest!), by manually building an older glibc, or by using something like crosstool-ng.
As documented in
https://github.com/rust-lang/rust/issues/36826
and others,
there are many issues with library versions when deploying applications across platforms,
but the comment from @sfackler makes me think the Rust std itself is not what pulls in the newer symbols.
A solution could be to ship rust-std_el6 and rust-std_el7 as platform-specific dynamic libraries, with Rust applications linking dynamically against them.
As it was common for the _Microsoft MFC_ library.
https://www.microsoft.com/en-us/download/details.aspx?id=5555
It would be even better to have a Cargo.toml switch to choose between dynamic and static linking.
I'd like to add:
another very well-known example of this architecture is the _DirectX_ library.
You install it separately, but the application that uses it is still a separately compiled and distributed binary.
I was reviewing the library dependencies of the Rust hello-world executable and found some that are actually very surprising:
$ ldd target/release/hello-world
linux-vdso.so.1 => (0x00007ffe11e2a000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fcf1e55d000)
librt.so.1 => /lib64/librt.so.1 (0x00007fcf1e355000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fcf1e139000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fcf1df23000)
libc.so.6 => /lib64/libc.so.6 (0x00007fcf1db56000)
/lib64/ld-linux-x86-64.so.2 (0x00007fcf1e992000)
As a comparison, I checked the hello-world executable generated with the other compiler, which runs on CentOS 6 right out of the box:
$ ldd hello-world_fpc.run
not a dynamic executable
The disassembly of hello-world_fpc.run shows that this compiler actually doesn't need any external libraries to print a simple string "_Hello, World!_" to STDOUT:
0x4002ab <DORUN+235> callq 0x41c650 <fpc_get_output>
0x4002b0 <DORUN+240> mov %rax,%rbx
0x4002b3 <DORUN+243> lea 0x83c76(%rip),%rdx # 0x483f30
0x4002ba <DORUN+250> mov %rbx,%rsi
0x4002bd <DORUN+253> mov $0x0,%edi
0x4002c2 <DORUN+258> callq 0x41c8f0 <fpc_write_text_shortstr>
0x4002c7 <DORUN+263> callq 0x4169d0 <fpc_iocheck>
0x4002cc <DORUN+268> mov %rbx,%rdi
0x4002cf <DORUN+271> callq 0x41c820 <fpc_writeln_end>
0x4002d4 <DORUN+276> callq 0x4169d0 <fpc_iocheck>
0x4002d9 <DORUN+281> mov -0x8(%rbp),%rdi
0x4002dd <DORUN+285> mov -0x8(%rbp),%rax
0x4002e1 <DORUN+289> mov (%rax),%rax
0x4002e4 <DORUN+292> callq *0x1f8(%rax)
0x4002ea <DORUN+298> callq 0x413c40 <fpc_popaddrstack>
0x4002ef <DORUN+303> lea -0x10(%rbp),%rdi
0x4002f3 <DORUN+307> callq 0x40ac60 <fpc_ansistr_decr_ref>
0x4002f8 <DORUN+312> mov -0x70(%rbp),%rax
0x4002fc <DORUN+316> test %rax,%rax
0x4002ff <DORUN+319> je 0x400306 <DORUN+326>
0x400301 <DORUN+321> callq 0x413dd0 <fpc_reraise>
0x400306 <DORUN+326> mov -0x78(%rbp),%rbx
0x40030a <DORUN+330> leaveq
0x40030b <DORUN+331> retq
So I'm really surprised that the Rust hello-world would pull in so many libraries, even libpthread.so.0 for threading, just to print a simple string "_Hello, world!_" to STDOUT.
But this actually explains why Rust executables are that large and start up more slowly.
Use the musl target if you want fully static binaries.
Yes, this is expected behavior, as mentioned above. Closing.
All this discussion comes to show that it is the linkage of the Rust hello-world application against the highest available symbol versions of the libc.so.6 system library that makes the executable fail on CentOS 6.
Actually this is a problem for cross-compiling, which is not what I was expecting, and which other compilers can actually handle, as shown by many examples in this discussion.
I just stumbled over another use case affected by this limitation:
https://brave-browser.readthedocs.io/en/latest/installing-brave.html#linux
The new _Brave_ browser written in _Rust_
https://github.com/brave
is not installable on any CentOS version before CentOS 8.
It just fails due to the described issue:
# yum install brave-browser
Failure: Package: brave-browser-1.7.92-1.x86_64 (brave-browser)
Requires: libc.so.6(GLIBC_2.18)(64bit)
For publishers this is quite a weighty drawback, since it limits the audience heavily or forces them to enlarge their build farm with build hosts for every possible distribution version.
That is really not a favourable situation for a startup project.
(I myself, as a possible customer, won't be able to install it in the near future ...)
This will affect its actual usage for cross-distribution projects in production,
as commented in:
https://users.rust-lang.org/t/rust-2020-growth/34956/181
The new _Brave_ Browser written in _Rust_
It has some components written in Rust, but brave-core is written in C++, based on Chromium.
is not installable on any Centos Version before Centos8.
If Brave wants to support older systems, then they need to build on an older system to get compatible symbol versions. This is a consequence of how glibc and others manage ABI compatibility, not Rust itself.
This is a consequence of how glibc and others manage ABI compatibility, not Rust itself.
Since the Rust application compiles on both CentOS 6 and CentOS 7, the Rust code can evidently work with either version of the glibc library, but there might be a switch that decides how the memcpy() function is called.
And as all this discussion comes to explain, it is not the glibc library that prevents you from writing an application compatible with older glibc versions.
On CentOS 7, glibc provides backward compatibility all the way back to GLIBC_2.0:
$ cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)
$ rpm -qi glibc|grep -iE "(name|version|release)"
Name : glibc
Version : 2.17
Release : 260.el7
$ rpm -q --provides glibc|grep -i "GLIBC_"|grep -i libc.so.6|sort
libc.so.6(GLIBC_2.0)
libc.so.6(GLIBC_2.1)
libc.so.6(GLIBC_2.10)
libc.so.6(GLIBC_2.10)(64bit)
libc.so.6(GLIBC_2.11)
libc.so.6(GLIBC_2.1.1)
libc.so.6(GLIBC_2.11)(64bit)
libc.so.6(GLIBC_2.12)
libc.so.6(GLIBC_2.1.2)
libc.so.6(GLIBC_2.12)(64bit)
libc.so.6(GLIBC_2.13)
libc.so.6(GLIBC_2.1.3)
libc.so.6(GLIBC_2.13)(64bit)
libc.so.6(GLIBC_2.14)
libc.so.6(GLIBC_2.14)(64bit)
libc.so.6(GLIBC_2.15)
libc.so.6(GLIBC_2.15)(64bit)
libc.so.6(GLIBC_2.16)
libc.so.6(GLIBC_2.16)(64bit)
libc.so.6(GLIBC_2.17)
libc.so.6(GLIBC_2.17)(64bit)
libc.so.6(GLIBC_2.2)
libc.so.6(GLIBC_2.2.1)
libc.so.6(GLIBC_2.2.2)
libc.so.6(GLIBC_2.2.3)
libc.so.6(GLIBC_2.2.4)
libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.2.6)
libc.so.6(GLIBC_2.2.6)(64bit)
libc.so.6(GLIBC_2.3)
libc.so.6(GLIBC_2.3.2)
libc.so.6(GLIBC_2.3.2)(64bit)
libc.so.6(GLIBC_2.3.3)
libc.so.6(GLIBC_2.3.3)(64bit)
libc.so.6(GLIBC_2.3.4)
libc.so.6(GLIBC_2.3.4)(64bit)
libc.so.6(GLIBC_2.3)(64bit)
libc.so.6(GLIBC_2.4)
libc.so.6(GLIBC_2.4)(64bit)
libc.so.6(GLIBC_2.5)
libc.so.6(GLIBC_2.5)(64bit)
libc.so.6(GLIBC_2.6)
libc.so.6(GLIBC_2.6)(64bit)
libc.so.6(GLIBC_2.7)
libc.so.6(GLIBC_2.7)(64bit)
libc.so.6(GLIBC_2.8)
libc.so.6(GLIBC_2.8)(64bit)
libc.so.6(GLIBC_2.9)
libc.so.6(GLIBC_2.9)(64bit)
So, when a Rust application built on CentOS 6 can run on CentOS 7, why can't you build on CentOS 7 a Rust application that can run on CentOS 6?
Which again makes it an issue of the _Rust_ compiler.
And yes, this issue forces the _DevOps_ department to keep separate CentOS 6 and CentOS 7 infrastructure: the same application must be built on a CentOS 6 host for CentOS 6 deploys and on a CentOS 7 host for CentOS 7 deploys.
In modern times of _Microservices_ and _Multihost Distributed Systems_ this is not feasible.
As an example, think of an _Ansible_ module written in Rust.
The _Ansible_ module is just another use case that makes this issue a roadblock for any _Rust_ development going into production.
@domibay-hugo as said in https://github.com/rust-lang/rust/issues/67173#issuecomment-563353655 there is probably no easy way to tell LLVM to link the older symbol (though if you find one, many people will be grateful).
Rust does exactly the same thing as other compilers do. Take the memcpy example from http://www.cplusplus.com/reference/cstring/memcpy :
$ clang-9 memcpy.c && objdump -T a.out
a.out: file format elf64-x86-64
DYNAMIC SYMBOL TABLE:
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 printf
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main
0000000000000000 w D *UND* 0000000000000000 __gmon_start__
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy
$ gcc memcpy.c && objdump -T a.out
a.out: file format elf64-x86-64
DYNAMIC SYMBOL TABLE:
0000000000000000 w D *UND* 0000000000000000 _ITM_deregisterTMCloneTable
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.4 __stack_chk_fail
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 printf
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main
0000000000000000 w D *UND* 0000000000000000 __gmon_start__
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy
0000000000000000 w D *UND* 0000000000000000 _ITM_registerTMCloneTable
0000000000000000 w DF *UND* 0000000000000000 GLIBC_2.2.5 __cxa_finalize
@jonas-schievink
Yes, Rust applications compile on CentOS 6.
As mentioned, I set up a CentOS 6 build host with the rustup script to settle this discussion.
$ cat /etc/centos-release
CentOS release 6.10 (Final)
$ curl https://sh.rustup.rs -sSf | sh
stable installed - rustc 1.42.0 (b8cedc004 2020-03-09)
$ cargo build
Compiling hello-world v0.1.0 (/home/usr15/rust/hello-world)
Finished dev [unoptimized + debuginfo] target(s) in 1.46s
$ cargo build --release
Compiling hello-world v0.1.0 (/home/usr15/rust/hello-world)
Finished release [optimized] target(s) in 0.32s
$ target/release/hello-world
Hello, world!
But now I am completely surprised by the application's linkage:
$ ldd target/release/hello-world
linux-vdso.so.1 => (0x00007fff6eafb000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f7ce0a6e000)
librt.so.1 => /lib64/librt.so.1 (0x00007f7ce0866000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f7ce0648000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f7ce0432000)
libc.so.6 => /lib64/libc.so.6 (0x00007f7ce009e000)
/lib64/ld-linux-x86-64.so.2 (0x000055ed96f5a000)
$ objdump -T target/release/hello-world|grep -i glibc|grep -i "memcpy"
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memcpy
Because I see that the Rust standard library does NOT require GLIBC_2.14 to run.
It just runs fine with GLIBC_2.2.5 as well.
What's more, the CentOS 6 hello-world binary does run on CentOS 7:
$ cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)
$ ./hello-world_el6.run
Hello, world!
So, where does this requirement of GLIBC_2.14 come from?
Because I see that the Rust Standard Library does NOT require GLIBC_2.14 to run.
It just runs fine with GLIBC_2.2.5 as well.
Rust is currently built from CentOS 5: https://github.com/rust-lang/rust/blob/4ca5fd2d7b6b1d75b6cb8f679e8523fb3e7b19e2/src/ci/docker/dist-x86_64-linux/Dockerfile#L1
That way it doesn't pull in any symbols that are too new.
So, where does this requirement of GLIBC_2.14 come from?
It comes from the system you are building on.
When the linker looks for memcpy, glibc on the system you are building on points it to use memcpy@@GLIBC_2.14 symbol.
@mati865
I tried the C code you gave on CentOS 6 and CentOS 7.
Compiled on CentOS 6, it runs on CentOS 7 as well.
$ cat /etc/centos-release
CentOS release 6.10 (Final)
$ ./memcpy_el6.run
person_copy: Pierre de Fermat, 46
$ cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)
$ ./memcpy_el6.run
person_copy: Pierre de Fermat, 46
$ objdump -T memcpy_el6.run
memcpy_el6.run: file format elf64-x86-64
DYNAMIC SYMBOL TABLE:
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 printf
0000000000000000 w D *UND* 0000000000000000 __gmon_start__
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memcpy
$ objdump -T memcpy_el7.0.run
memcpy_el7.0.run: file format elf64-x86-64
DYNAMIC SYMBOL TABLE:
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 printf
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main
0000000000000000 w D *UND* 0000000000000000 __gmon_start__
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy
$ cat /etc/centos-release
CentOS release 6.10 (Final)
$ ./memcpy_el7.0.run
./memcpy_el7.0.run: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./memcpy_el7.0.run)
But then I found an interesting piece of information at:
https://stackoverflow.com/questions/4032373/linking-against-an-old-version-of-libc-to-provide-greater-application-coverage
(Actually, finding so many questions about this theme on the Internet indicates that being backward and forward compatible is a real need for application developers.
... And looking at the dates of the posts (2010/2011), the need is as old as the compilers.)
So I added to the C code:
__asm__(".symver memcpy,memcpy@GLIBC_2.2.5");
Now the Linkage is different:
$ rpm -qi glibc |grep -iE "(name|version|release)"
Name : glibc
Version : 2.17
Release : 292.el7
$ objdump -T /lib64/libc.so.6|grep -i memcpy
00000000000a8b10 w DF .text 0000000000000009 GLIBC_2.2.5 wmemcpy
0000000000116f10 g DF .text 0000000000000014 GLIBC_2.4 __wmemcpy_chk
0000000000094a80 g iD .text 0000000000000055 GLIBC_2.14 memcpy
000000000008f870 g iD .text 000000000000004b (GLIBC_2.2.5) memcpy
00000000001151a0 g iD .text 0000000000000055 GLIBC_2.3.4 __memcpy_chk
$ objdump -T memcpy_el7.run
memcpy_el7.run: file format elf64-x86-64
DYNAMIC SYMBOL TABLE:
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memcpy
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 printf
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main
0000000000000000 w D *UND* 0000000000000000 __gmon_start__
This works since glibc 2.17 still provides (GLIBC_2.2.5) memcpy.
Now memcpy_el7.run runs on CentOS 6:
$ cat /etc/centos-release
CentOS release 6.10 (Final)
$ ./memcpy_el7.run
person_copy: Pierre de Fermat, 46
Compiled on Centos6 it runs as well on Centos7.
That is entirely expected, glibc is backwards compatible. It has no forward compatibility though so it won't run on CentOS 6 if you compiled it on CentOS 7.
so I added to the C Code:
__asm__(".symver memcpy,memcpy@GLIBC_2.2.5");
As said before, Rust creates calls to llvm.memcpy. LLVM converts it to memcpy, and then glibc resolves it to memcpy@@GLIBC_2.14.
There is no issue on the Rust side; you have to convince LLVM to use memcpy@@GLIBC_2.2.5.
Or get rid of the OS-dependent bindings by replacing them with your own code, as the other compiler I quoted does,
or like the one found at:
https://doc.redox-os.org/kernel/kernel/externs/fn.memcpy.html
Use a *-linux-musl target if you want to avoid glibc altogether. Otherwise, *-linux-gnu is beholden to the glibc's ABI policy, where symbols are versioned to introduce changes, and the latest version is selected at link time.
FWIW, the memcpy symbol is a good example where you probably do want the new version when possible, when your deployment targets support it. The old memcpy acted like memmove, making no assumptions about overlap between the source and destination. The newer memcpy assumes they do not overlap (per spec), which allows a more optimized implementation. The glibc developers chose to use symbol versioning for this change so that older uncompliant programs would still work with the old version.
You are using clang, not musl.
@afwn90cj93201nixr2e1re I suggest opening a thread in https://users.rust-lang.org/ with information you have.
You know: asking questions is an art.
You will get more replies if you present the question better: what do you intend? What is your effort so far? Don't just link to the code or some issue. State the problem more clearly.
Edit: I believe there are many community Rust projects using musl, such as tokei, ripgrep.
Even having installed musl-gcc from the official download, it does not produce static binaries
and keeps requiring glibc:
$ musl-gcc memcpy.c
$ ldd a.out
linux-vdso.so.1 => (0x00007fffcf7d7000)
libc.so.6 => /lib64/libc.so.6 (0x00007f47defe9000)
/usr/share/musl/lib/ld-musl-x86_64.so.1 => /lib64/ld-linux-x86-64.so.2 (0x00007f47df3b7000)
$ objdump -T a.out
a.out: file format elf64-x86-64
DYNAMIC SYMBOL TABLE:
0000000000000000 w D *UND* 0000000000000000 _ITM_deregisterTMCloneTable
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 printf
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main
0000000000000000 w D *UND* 0000000000000000 __gmon_start__
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy
0000000000000000 w D *UND* 0000000000000000 _Jv_RegisterClasses
0000000000000000 w D *UND* 0000000000000000 _ITM_registerTMCloneTable
0000000000000000 w DF *UND* 0000000000000000 GLIBC_2.2.5 __cxa_finalize
According to this discussion, the only viable workaround is building on old _CentOS 6_ build hosts,
as documented in the cited issue:
Linking against an old version of libc
Bruh, it's not gonna help you) It's still gonna use glibc as depend for cxa at least.
Obviously, maintaining and building on _CentOS 6_ will get you into other kinds of trouble, since all CentOS 6 SSL packages are outdated and unusable.
If you are building an application that uses SSL you will get some serious headaches.
And thus the CentOS 6 build host is only a conditional workaround that does not always work.
yep
eggxactly