As part of https://github.com/eclipse/openj9/pull/754/ we removed the ut_j9jit.h and ut_j9jit.c files, which are required to build the JIT. Unfortunately this breaks out-of-source JIT builds because these files no longer exist and they are not currently autogenerated by the JIT makefiles.
To give context: many people working on the JIT will clone the OMR and OpenJ9 repositories and build only the JIT component (libj9jit29.so) in an out-of-source build by pointing the makefiles to the location of the JDK. An example make invocation would be:
cd openj9/compiler/trj9
make BUILD_CONFIG=prod J9SRC=/jdk/lib/s390x/compressedrefs JIT_SRCBASE=`pwd`/../../../ JIT_OBJBASE=/jdk/lib/s390x/compressedrefs/objs-fjeremic JIT_DLL_DIR=/jdk/lib/s390x/compressedrefs PLATFORM=s390-linux64-gcc
This assumes a previous full build has been completed in the /jdk/lib/s390x/compressedrefs directory and that a brand new repository has been cloned at the top level. The reason most JIT developers do this is that we usually keep the JDK level fixed, i.e. compile the VM/GC/JCL once, clone the OMR and OpenJ9 repositories out of source somewhere else, and only build the JIT from there on out to test different changes.
With https://github.com/eclipse/openj9/pull/754/ this is now broken since the aforementioned files are missing. We need to either reinstate these files or figure out a clean way to run tracegen as part of the JIT makefiles to autogenerate these two files from j9jit.tdf.
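For the second option, one possible shape of the makefile hook is sketched below. This is only an illustration: the TRACEGEN variable, the .tdf path, and the dependency wiring are assumptions, not the actual contents of the JIT makefiles.

```make
# Hypothetical sketch: regenerate the trace stubs from j9jit.tdf before
# compiling the JIT. TRACEGEN and the .tdf location are assumed names.
TRACEGEN ?= tracegen
J9JIT_TDF := $(JIT_SRCBASE)/openj9/runtime/compiler/trj9/env/j9jit.tdf

# tracegen emits ut_j9jit.h, ut_j9jit.c, and ut_j9jit.pdat next to the .tdf
ut_j9jit.h ut_j9jit.c: $(J9JIT_TDF)
	$(TRACEGEN) -treatWarningAsError -generatecfiles -file $(J9JIT_TDF)

# Making the JIT objects depend on the generated header would force a
# fresh out-of-source build to generate the files before compiling.
$(JIT_OBJBASE)/%.o: ut_j9jit.h
```

The key point is that the generated files become ordinary make targets, so an out-of-source build regenerates them automatically instead of expecting them to be checked in.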
@dnakamura FYI.
Of the two options,
or figure out a clean way to run tracegen as part of the JIT makefiles to autogenerate these two files from j9jit.tdf.
Has my vote. With the advent of the clone / clone / make build process, is this functionality still needed? Aren't incremental rebuilds sufficient after the initial (slower) full compile?
With the advent of the clone / clone / make build process, is this functionality still needed? Aren't incremental rebuilds sufficient after the initial (slower) full compile?
In my experience, speaking with other people who commonly work on the JIT: yes, it is still needed. I think most people (chime in otherwise please!) build out of source so they can keep committing changes to their repository as they develop a feature. Oftentimes we find ourselves having to compile a particular JIT against a particular cached build which has already been precompiled and archived, so out-of-source builds are a must.
+1 for regenerating the code
Anyone familiar with tracegen? How exactly does one invoke it on a particular .tdf file?
tracegen -file j9jit.tdf should work! For the CMake side of things, there is add_tracegen(j9jit.tdf), which is in cmake/modules/OmrTracegen.cmake
@youngar I tried tracegen and it generated ut_j9jit.h and ut_j9jit.pdat files. How can I generate the ut_j9jit.c file?
c:\gitrepo\openj9\runtime\compiler\trj9\env>tracegen
TraceGen found tdf file .\j9jit.tdf
Processing tdf file .\j9jit.tdf
Calculating groups for j9jit
Creating header file: .\ut_j9jit.h
Creating pdat file: .\ut_j9jit.pdat
Try using the flag -generatecfiles. Our current invocation in OMR is ./tracegen -treatWarningAsError -generatecfiles -threshold 1 -root .
The output from help is:
./tracegen [-threshold num] [-w2cd] [-generateCfiles] [-treatWarningAsError] [-root rootDir] [-file file.tdf] [-force]
-threshold Ignore trace level below this threshold (default 1)
-w2cd Write generated .C and .H files to current directory (default: generate in the same directory as the TDF file)
-generateCfiles Generate C files (default false)
-treatWarningAsError Abort parsing at the first TDF error encountered (default false)
-root Comma-separated directories to start scanning for TDF files (default .)
-file Comma-separated list of TDF files to process
-force Do not check TDF file timestamp, always generate output files. (default false)
Also note that -root . will walk the file tree starting at . and process any .tdf files it finds. Instead you probably want to use -file j9jit.tdf
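Putting the suggestions above together, a single-file invocation that also emits the .c file would look something like this (flag spellings are taken from the help output above; the working directory containing j9jit.tdf is an assumption based on the earlier console transcript):

```shell
# Assumed location of j9jit.tdf; adjust to your checkout layout.
cd openj9/runtime/compiler/trj9/env
tracegen -treatWarningAsError -generatecfiles -file j9jit.tdf
# Should produce ut_j9jit.h, ut_j9jit.c, and ut_j9jit.pdat next to the .tdf
```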
Dormant for a couple of months. Closing.
@DanHeidinga this is still an issue for people doing development on the JIT - having to rebuild everything and not being able to easily build out of tree is a real pain. I don't know why someone not working on it actively means the issue should be closed... It is a problem that should be tracked and hopefully someone can find time to work on it...
I'm sorry to hear this continues to be a pain. Given the silence since Dec and the suggested workarounds, this seemed like a good candidate to close.
It is a problem that should be tracked and hopefully someone can find time to work on it...
The project currently has ~244 issues open, making it very hard to see the forest for the trees. Issues that are long dormant and not getting any attention should be closed to keep the list at a reasonable length so developers and users can reasonably figure out what's going on in the project.
This approach to closing items was agreed to on one of the very early OpenJ9 hangouts.
Marking these issues as closed seems suboptimal - it makes it look like they were fixed or won't be fixed. We need a better way of tracking a backlog, because not every issue is going to be fixed immediately, but we need to make sure they can be found so people can work on them time/interest permitting.