Protobuf: Make C++ implementation C++11 only.

Created on 1 Mar 2017 · 48 comments · Source: protocolbuffers/protobuf

C++11 has been out for several years and we are thinking of allowing C++11 features to be used in the code base. That means the implementation will not compile if you don't have a C++11 compiler.

We may create a branch that works for C++98. The branch will only accept bug fixes, not new features, optimizations, etc.

Please reply in this thread if you think this would be an issue for your project.

c++

Most helpful comment

Why don't you just fast forward to C++14?
Lots of goodies!

All 48 comments

Hello,

Thank you for collecting feedback about this. My project consists of shared libraries on Linux. Some libraries are built with C++11 and others must be built with C++98 (for compatibility with some other libraries). We link protobuf dynamically, so this may not be an issue for us, unless C++11 language features are used in header files that our C++98 libraries include.

This would be an issue for our project. We support back to CentOS 5 (GCC 4.1) and a number of platforms in between for which we are not yet able to turn on the C++11 flag. One of the attractive things about protobuf was its impressive backwards support, and it would be sad to have older platforms be stuck with less.

(PS. I feel bad saying this since I would _love_ to move to C++11 only for our project too, haha! It is a sad world in which we can't yet use "new" features from 6 years ago. I'm sure libraries feel this pain even more than applications do. All of which is to say, I feel your pain! 😭 )

Hi there, we made a related decision in gRPC: version 1.0 supports older compilers with only limited C++11 support (though not quite as old as those protobuf supports), but version 1.1 and beyond only supports true C++11 compilers (as of grpc/grpc#8602). I understand, though, that protobuf's predicament is greater since it is a package that has been public for much longer, has more users, and has been supporting C++98.

I strongly favor moving to a 4.x C++11-only branch. Users tied to pre-C++11 can continue using the 3.x branch, which we support in maintenance mode.

Note that linking pre-C++11 and post-C++11 libraries is already very tricky, since the standard libraries are slightly incompatible (see https://gcc.gnu.org/wiki/Cxx11AbiCompatibility). Hence I think the impact of not being able to link libraries that depend on both the pre-C++11 and post-C++11 proto runtimes is minor, given you already have dangerous issues lurking around.

Furthermore, for c++ protos, we already have the rule that all proto files in a binary have to be compiled with the exact same version of the compiler and runtime.

We have C++14 enabled at work so this change would be welcome. Protobuf has great compatibility w.r.t. the serialized format, so people who are stuck with old environments can continue using the old libs.

Hello all,

Now I'm using GCC 5.4.0 to compile protobuf. However, it couldn't be built because of new C++11 features.
I tried to modify the Makefile to add the compile option "-std=c++11" to "CXXFLAGS", but it still didn't work. How can I make it build?

Solution:

I used the command "configure CXXFLAGS=-std=c++11" to configure protobuf.

+1. Backporting GCC 5.4.x/7.x onto old unmaintained distros is also straightforward, provided you strip out unneeded language targets via the configure build flags.

Requiring C++11 will make it basically impossible to support protobuf on some Python versions on Windows. In particular, Python 2.7 support will not be possible and Python 3.5 will need to be the minimum requirement. This assumes that protobuf continues to need compiled portions for these Python versions. The reason is that CPython ties each major.minor version of the interpreter to a particular Visual Studio version on Windows. As a result, any extension one builds needs to be built with the same runtime as the interpreter, or risk segfaults. A full listing can be found in this reference. Note that Python 2.6 and 2.7 require Visual Studio 2008 (VC 9), which does not have C++11 support. Also, Python 3.4 requires Visual Studio 2010 (VC 10), which also does not have C++11 support. Only Python 3.5 and 3.6, which require Visual Studio 2015 (VC 14), have C++11 support.

cc @xfxyjwf

With respect to @jakirkham's note, Python does not let people build its recent releases with older Visual Studio versions. IMO protobuf should follow suit. There will be no way to move forward otherwise.

Also note that Visual Studio 2010 has basic C++11 support, see the official reference.

The googlei18n/libphonenumber project which uses protobuf currently doesn't support C++11. See googlei18n/libphonenumber#1594 for the discussion.

Make it a 4.x and require C++11. It should be a clean slate. C++98 is old now and people should really make up their minds to move to a newer C++ version. There are so many benefits, it's not even funny.

I support this; we'll move to C++11 later.

Hi,

we use the Wind River compiler, and there is no support for C++11 there. Unfortunately it is not possible for us to switch to a different compiler (it is the company-mandated compiler...).

If you switch, do you plan to support older releases?

If we go with C++11 only, we will probably maintain a branch and only backport severe bug fixes.

Why don't you just fast forward to C++14?
Lots of goodies!

There are a lot of projects that aren't C++11 compatible.

I notice that move constructors for generated messages have landed in master. Fantastic! But there still aren't move constructors/assignments for the containers RepeatedField, RepeatedPtrFieldBase, and RepeatedPtrField<T>. (I mean for the whole object, rather than for the elements they contain.) Any chance of adding these too? I think it would be useful and a very small change.

(I hope it's OK that I cross posted this from #2791; this seems to be the main report.)
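
To illustrate what I mean, here is a rough sketch of what the existing message-level move support already enables; `MyMessage` and its `name` field are hypothetical stand-ins for a generated type, not actual protobuf code:

```cpp
// A minimal sketch, assuming a hypothetical generated message "MyMessage"
// with a string field "name"; this is not protobuf's actual generated code.
#include <utility>
#include <vector>

#include "my_message.pb.h"  // hypothetical generated header

void CollectMessages(std::vector<MyMessage>* out) {
  MyMessage msg;
  msg.set_name("example");
  // With the new move constructor, this transfers the message's internal
  // storage instead of deep-copying it.
  out->push_back(std::move(msg));
}
```

Move support on RepeatedField/RepeatedPtrField themselves would similarly let a whole repeated field be returned or swapped without copying its elements.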

According to the plan, 3.5 will start to preserve unknown fields. I hope 3.5 will be C++98-compatible, so old compilers will get this important feature in their final release.

@zzm3145 The next 3.5.0 release will not require C++11. We will likely start to require C++11 when we migrate to Abseil, Google's C++ common libraries:
https://opensource.googleblog.com/2017/09/introducing-abseil-new-common-libraries.html

I do not know how efficient the calls guarded by GoogleOnceInit() are (they happen once per parameterless message constructor invocation), but replacing them with the C++11 singleton guarantee on the generated fileset's TableStruct would certainly hand the efficiency concern over to the platform (trusting the platform to do that is the best way, in my opinion).

As implemented, unless doing link-time code generation, it seems to me that GoogleOnceInit() is a real call into the library, with a real call frame set up and the prefetch pipeline kaput, at the very least.
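
As a rough sketch of the suggestion (hypothetical names, not protobuf's actual code), a C++11 function-local static gives the one-time, thread-safe initialization guarantee directly, letting the compiler emit whatever guard is cheapest for the platform:

```cpp
// Sketch only: "FileTable" stands in for a generated per-file TableStruct.
struct FileTable {
  FileTable() { /* build default instances, field offsets, etc. */ }
};

const FileTable& GetFileTable() {
  // C++11 "magic static": initialized exactly once, even under concurrent
  // calls, replacing an explicit GoogleOnceInit-style guard.
  static const FileTable table;
  return table;
}
```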

@kkm000 With C++11 support removing GoogleOnceInit is indeed something we want to do.

However, GoogleOnceInit is implemented smartly enough that, after inlining, it's an atomic read on a global variable and only a call in the initialization case. Still, using C++11 support will remove this dependency, allow the compiler to do whatever is best for the platform, and be more readable.

@gerben-s: Yup, my bad, I misread the inline piece. It calls GoogleOnceInitImpl (which is out-of-line) only after the inline lock-free flag test. D'oh!

Yes, C++11 and its library remove a lot of low-level stuff: atomics, mutexes, threads, futures, you name it. Great stuff! atomicops_internals_x86_msvc.h? Oh, not any more! :)
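
For example (illustrative only, not protobuf code), the standard headers cover what those per-platform internal headers used to provide:

```cpp
#include <atomic>
#include <mutex>
#include <thread>

std::atomic<int> counter{0};  // portable lock-free atomic, no per-OS atomicops
std::mutex state_mutex;       // portable mutex, no pthread/Win32 #ifdefs

void Worker() {
  counter.fetch_add(1, std::memory_order_relaxed);
  std::lock_guard<std::mutex> lock(state_mutex);
  // ...update shared state under the lock...
}

int main() {
  std::thread t1(Worker), t2(Worker);  // portable threads
  t1.join();
  t2.join();
  return counter.load() == 2 ? 0 : 1;
}
```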

Using C++11 would be great. I'm trying to use protobuf on an embedded ARM target with FreeRTOS and a C++11 layer, and it's tricky because of all the pthread stuff.

One of the reasons we like and use protobuf is that it can easily help us bridge the communication gap between the embedded world and the server world. Some of the embedded devices we use are not running the latest Linux, nor do they support C++11. I think it would be a shame if protobuf were to require C++11, for this reason. If you are only targeting fast, modern devices, then protobuf loses its edge, because faster, more modern "embedded" devices like a Raspberry Pi have less of a problem with more bloated protocols and bloated languages.

Isn't it striking that @mikejt4 noted how beneficial C++11 is for an embedded ARM system, and right in the next comment @kenlars99 argued that C++11 is deleterious for an embedded ARM system? (☉_☉)

I would add that we use a number of major libraries in our embedded system: boost, poco, cppunit, and a number of C-only libraries like sqlite, openldap, openssl... None of these require C++11. My issue is not with C++11 (I was very disappointed when we could not use it due to some of the target embedded systems we have to support), but with requiring it.

@kenlars99 I see, so the actual problem is that those older systems have no support for C++11 at all; the runtime libraries and such have not been built for them. Fair point. How viable do you think it is to stay on the latest protobuf version that can still cross-compile for these targets? The C++ world is moving on anyway, so if not with protobuf, you would likely hit the same issue with another product. There is not much extension to the protobuf data formats anyway; proto2 is still around, for one, and there is probably a limit to how much of the latest and greatest one can bring to a senescent platform.

Hi again from gRPC! So, would it be possible in your embedded world to be ok with the C++11 language without the C++11 library? This is the approach that we're taking in grpc as we move the core implementation to C++11 from what was originally C89.

@kkm000 Well, we have a single code base that we cross-compile for different embedded targets, some old, some new. It seems like it would be a burden to keep our code compiling with both the old and the new, that is, both non-C++11 and C++11. Depending on the API changes, that could touch a lot of our code.

@vjpai Unlike gRPC, protobuf was originally written in C++ with heavy dependencies on the C++ standard library already. Using only the C++11 language features without its standard library isn't a viable option for protobuf.

An update from the protobuf team: it has been decided that starting from version 3.6.0, protobuf will require C++11 to compile. A 3.5.x branch will be maintained for pre-C++11 compilers, but only bug fixes will be accepted on that branch. We are already accepting C++11 features into our internal code base and will accept pull requests using C++11 features on the GitHub repo as well.

FWIW, I had seen some C++11 features in 3.5.0 as well, so it would be good to tidy that up if it is intended to work on pre-C++11 compilers. See these comments for details. I haven't checked recently to see whether they have been fixed.

@jakirkham I checked your build log but it's using v3.4.1. Have you tried 3.5.0? The issue you pointed to doesn't exist in 3.5.0 as far as I can tell.

I had backported some patches from 3.5.0 to 3.4.1, but I agree that is not the same as building 3.5.0. I have retried fresh with 3.5.0 and ran into an issue ( https://github.com/google/protobuf/issues/4064 ).

Ran into an issue with 3.5.1 that we missed. Raised as issue ( https://github.com/google/protobuf/issues/4094 ).

@jayantkolhe @pherl Is it possible to build portable C++11 Linux release binaries on any system that isn't RHEL6? Here's some knowledge from my notes file:

  • RHEL 5 (2007) went EOL 2017-03 (cc: @matthauck). It used kernel 2.6.18, glibc 2.5, and GCC 4.1, which supported -std=c++98 (GPLv2 w/ runtime exception).
  • RHEL 6 (2011) doesn't go EOL until 2020-11. It uses kernel 2.6.32, libc 2.12, and GCC 4.4, which supports the -std=c++0x flag (GPLv3 w/ GCC Runtime Library Exception v3.1). If you build C++ binaries on RHEL 6, they should work on pretty much any Linux since 2011.
  • Debian 8 (2015) doesn't go EOL until 2020-05. It uses libc 2.19 and GCC 4.9, which has complete -std=c++11 support. If you build binaries on Debian 8, they should work on pretty much any Linux since 2014.
  • Ubuntu 16+ uses GCC 5+. After much work, the TensorFlow team determined it's impossible to use these newer versions of GCC to create dynamically linked release binaries that will actually run on any Linux distro from before 2016. This can be verified by checking ldd -v for CXXABI_1.3.8.

I'd like to know if it's possible to use something like musl-cross-make to statically link musl libc (MIT) and LLVM's libc++ (MIT). In that case, we'd be able to write C++14 code, compile it with GCC 6+ (which can do things like auto-vectorizing for AVX-512 with -ftree-vectorize), and have our release binaries work on Linux 2.6+ (~2006).

At conda-forge, we are migrating to some new compilers produced by Anaconda (they already use them) that are based on GCC 7.2 that @mingwandroid worked on. We would still use them to build on CentOS 6, but as glibc is generally backwards compatible, the binaries built should still work on newer Linux systems. Maybe this is interesting to you?

Are you able to compile a GCC 7 toolchain that runs on RHEL 6 and produces dynamically linked binaries that are ABI compatible with libstdc++ 4.4?

No, we provide our own libstdc++. You could link to it statically I guess (we don't though).

The toolchain is a pseudo-cross compiler targeting (and hosted on) CentOS 6 / glibc 2.12. It is used to build all of the Anaconda Distribution on Linux.

That's one solution, but protobuf can't assume an Anaconda subsystem is available when distributing binaries on PyPI. PEP 513 says manylinux1 binaries must work on CentOS 5, which can be ignored, but ideally shouldn't be.

Take for example gcc-7.2.0-i486-linux-musl.tar.xz, which is 31 MB, runs on RHEL 4+ / i486+, and can turn helloworld.c into a 5 kB static binary that runs on any Linux and can be optimized for any microarchitecture, e.g. Skylake. It also supports C++17. The catch is GPLv3 if you want portable binaries.

Yup, and thanks for paying attention to the manylinux1 specs. manylinux2010 targets RHEL 6 / glibc 2.12 though, so it may be possible to consider our tools (with static libstdc++) for that?

I like that musl toolchain; very cool.

I heard from @gunan that manylinux2010 doesn't work in practice, i.e. manylinux1 has to be used anyway, even if it can't run on RHEL 5, because certain tools will break. I'm not recommending statically linking libstdc++, which could raise legal questions. Also note that GNU changed the license of libstdc++ when C++11 support was introduced. That's why libc++ exists, but I don't know how to use it.

Any more details you can provide about this manylinux2010 issue would be welcome.

@yifeif to comment on manylinux issues

I think manylinux2010 should be okay? But it might not be fully rolled out yet: https://github.com/pypa/manylinux/issues/179

I'm going to close this issue because this is now pretty much done and we're using C++11 now. Feel free to comment if you have any C++11-related issues, though.
