I've been using Spack for over 6 months now, and it has made my life way easier. But there is still one nagging problem that I can't get past: hashes and the need for constant reinstalls. They're great for ensuring you get exactly what you want, but they make using Spack in production a nightmare.
Every time a new version of some random library is added, Spack reinstalls that library and everything that depends on it instead of just using the old one.
Every time a new Python variant is added, Spack reinstalls Python and all of my Python packages.
When dependency types were merged, Spack reinstalled everything built with CMake, M4, Automake, Autoconf, Libtool, Flex, Bison, etc.
When namespaces and newarch support were merged, Spack essentially believed that I had nothing installed and reinstalled everything I asked for. I can't link to any previously installed packages. Worse yet, whenever I try to uninstall, activate, or deactivate something, Python tells me that there are multiple packages installed and I need to be more specific (see #1178). Of course, when you have dozens of identical packages installed and the only difference is that some were installed before newarch support was merged and some after, it isn't actually possible to be more specific. I can't even uninstall any of these older packages with Spack; I have to do it manually, because Spack isn't backwards compatible enough to locate the old packages. spack find no longer gives me the right path because it drops the "linux-x86_64".
The easy solution here is to reinstall everything from scratch every time a major change is made to how Spack handles hashes. But this just isn't feasible when using Spack in production. At this point I have close to 500 packages installed. If I try to uninstall and reinstall all of these, I would get hundreds of tickets from users asking why their software no longer runs. Hell, I don't even know what exact specs I used to install them, or what the default variants were at the time.
Even a Spack command that automatically reinstalled every package wouldn't be of much help. We don't use Modules, we use a different system called SoftEnv which Spack doesn't support. If I reinstalled everything, the installation directories would change since they contain the hash. I would then have to manually edit my SoftEnv system to point to the new hashes.
Of course, a large percentage of these 500 packages are duplicates of each other. These duplicates are slowing Spack down to a grinding halt. It can take me over a couple minutes to uninstall or activate a single package. Python packages in particular are bad.
I started using Spack to make my life easier, not to make it more tedious. If I'm going to continue using Spack, I shouldn't have to keep reinstalling everything every couple of months.
So this issue is an attempt to open up a conversation about how we can make Spack more flexible and backwards compatible. How do others handle this problem? Is there any way we can prevent it?
Hi,
I'm an EasyBuilder and I know where you are coming from (although Spack seems to make it more involved). I have actually been doing, for more than four years already, what you dread: reinstalling the bunch of 100s of packages every couple of months, roughly after each worthy release. And why not?! Machine time - and space - is cheap, while human time is expensive. It has proven to be useful, time and again. Worked for me!
F.
@fgeorgatos How do you handle your reinstalls? If I have 500+ packages that need to be reinstalled, it will take close to a week to uninstall and reinstall everything, work out bugs that have crept into the packages since the last time I installed, and update all of my environment variables to point to the new installations. Do you not get hundreds of angry emails asking why everyone's software no longer runs?
My point here is that this isn't the way it has to be. If I install a package, and then a new feature gets added to Spack, the installation hasn't changed. But Spack now believes that the package was never installed. I consider this to be a bug.
Do you have a branch that I can look at to diagnose the spack find -p bug? That is not how it should be behaving.
-Greg
From: "Adam J. Stewart" <[email protected]notifications@github.com>
Reply-To: LLNL/spack <[email protected]reply@reply.github.com>
Date: Wednesday, July 20, 2016 at 3:53 PM
To: LLNL/spack <[email protected]spack@noreply.github.com>
Subject: Re: [LLNL/spack] How to deal with changing hashes and reinstalls (#1325)
My point here is that this isn't the way it has to be. If I install a package, and then a new feature gets added to Spack, the installation hasn't changed. But Spack now believes that the package was never installed. I consider this to be a bug.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHubhttps://github.com/LLNL/spack/issues/1325#issuecomment-234108749, or mute the threadhttps://github.com/notifications/unsubscribe-auth/ANUwcD4SOG2Cph0VCFPguS90TBKNns_zks5qXqcBgaJpZM4JRPuh.
Yes, I understand that Spack is still beta software, and things like this will change from time to time. But if you think that Spack will ever settle down, then you're fooling yourself. Every new user wants their own cookie cutter feature that does things just like Homebrew does. Every new package presents a new challenge, and may not be possible within the confines of Spack's current spec resolution. Spack packages will update with every new version, variant, and patch for years to come. What if we decide to start hashing package.py files, patches, and everything in between? I think we need to get out of the mindset that a hash is absolute and start expecting the hash to change. There is one major barrier to this that I see. The hash is included in the installation directory.
I've said this before and I'll say it again: I'm not a fan of including the hash in the installation directory. I don't see its purpose. The way I see it, there are two uses for the hash: uniquely identifying an installation, and keeping installation directories distinct.
On the first: great reason, but isn't the installation already uniquely identified by the .spack/spec.yaml file? And why couldn't it be uniquely identified by a .spack/hash.txt file in the same directory? And if this is what the hash in the installation directory is for, then what is the spack database for?
On the second: also a great reason, but gcc-5.3.0-ridiculouslylonghash isn't any more identifiable than gcc-5.3.0-one and gcc-5.3.0-two. If they just need to be different, that's easy to keep track of. If Spack needs a way to locate the installation directory, well, isn't that what the database is for?
Yes, this would be a radical change. But imagine this: what if someone merged a new functionality into Spack, like newarch or namespaces, and you could simply run spack reindex and everything would just work? Spack would update all of the hashes in .spack/hash.txt or in the database or wherever, and Spack wouldn't decide that you didn't have anything installed anymore. Is this really that impossible?
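To sketch what I mean (this is hypothetical, not Spack's actual implementation -- the hash.txt layout, the canonicalization, and the helper names are all my assumptions), a hash-migrating reindex could be as simple as walking the install tree and refreshing each prefix's stored hash from its spec.yaml:
# Hypothetical sketch of a hash-migrating "spack reindex"; not real
# Spack code. Assumes each prefix keeps its concrete spec in
# .spack/spec.yaml, and that the new hash would live in .spack/hash.txt.
import hashlib
import json
import os

import yaml  # PyYAML

def recompute_hash(spec_yaml_path):
    """Recompute a spec's hash under the current hashing rules."""
    with open(spec_yaml_path) as f:
        spec_dict = yaml.safe_load(f)
    # Canonicalize so the same spec always hashes the same way.
    canonical = json.dumps(spec_dict, sort_keys=True)
    return hashlib.sha1(canonical.encode()).hexdigest()[:32]

def reindex(install_root):
    """Refresh .spack/hash.txt for every installed prefix."""
    for dirpath, dirnames, filenames in os.walk(install_root):
        if os.path.basename(dirpath) == '.spack' and 'spec.yaml' in filenames:
            new_hash = recompute_hash(os.path.join(dirpath, 'spec.yaml'))
            with open(os.path.join(dirpath, 'hash.txt'), 'w') as f:
                f.write(new_hash + '\n')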
@becker33 I can post some spack find -p stuff tomorrow. But that is the least of my concerns. In my mind, the newarch support broke spack install, uninstall, activate, and deactivate since spack doesn't think I have any packages installed. If Spack reinstalls all of these packages, then both installations have identical specs, making it impossible to differentiate them.
The spack reinstall command mentioned above would also be nice to have.
I completely agree that this needs to be addressed, and is _very_ important, but I don't think the changes would need to be so radical. (maybe?) Maybe the hash is still in the installation directory but we hash _a lot_ less stuff...
I think the recent change with dependency types could lay the groundwork for fixing this issue. Maybe a good place to start is being _a lot_ less strict on build type dependencies. For example, I should install cmake v3.6.0 or whatever and be good for a year or so. Setting aside explicitly installing cmake, no matter which package's DAG changes, I don't need to rebuild cmake... ever... Packages might need to be better about specifying the version they depend on, because a lot of software legitimately depends on a certain version of cmake (or really they just need version 2.8+).
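For illustration, a recipe could declare only a loose, build-time cmake requirement using the newly merged dependency types. The package, URL, and checksum below are made up; only depends_on(..., type='build') is the real mechanism:
# Hypothetical package.py illustrating a relaxed build-only dependency.
from spack import *

class Example(Package):
    """Made-up package; any cmake >= 2.8 can build it."""
    homepage = "https://example.com"
    url = "https://example.com/example-1.0.tar.gz"

    version('1.0', '0123456789abcdef0123456789abcdef')

    # Build-time only: newer cmake versions appearing later should not
    # force this package (or its dependents) to rebuild.
    depends_on('cmake@2.8:', type='build')

    def install(self, spec, prefix):
        cmake('.', *std_cmake_args)
        make()
        make('install')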
Another idea: if the spack install is completely self contained (has its own custom dynamic linker, etc. - I have been meaning to work on #1106, which hopes to add that, for a few weeks now), I wouldn't need to reinstall _everything_ when we decide to upgrade our systems from rhel6 to rhel7 (which I believe we would have to do, and that seems scary)...
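To be clear about what "self contained" could mean here (this is my sketch of the idea behind #1106, not its actual implementation; the loader path is purely illustrative), one could stamp each installed binary with a Spack-owned dynamic linker:
# Hypothetical: make installed binaries carry their own ELF interpreter
# so they survive an OS upgrade. Requires the external patchelf tool.
import subprocess

SPACK_LINKER = '/soft/spack/glibc/lib/ld-linux-x86-64.so.2'  # illustrative path

def make_self_contained(binary, linker=SPACK_LINKER):
    """Point `binary` at a Spack-provided dynamic linker instead of the OS one."""
    subprocess.check_call(['patchelf', '--set-interpreter', linker, binary])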
Also, this by no means addresses the root of the problem (and you mentioned you don't use environment modules), but it would be nice to have either way: finer control over modulefile generation in the face of conflicts. #984 and #1135 did _a lot_ for environment modules, which is awesome. But at the end of the day, spack might be a rat's nest of 500+ dependencies etc., but we only need a handful of modulefiles. The user shouldn't type module avail and be bombarded with 500+ modulefiles, each with a ridiculously long hash. Users only really care about python2.7, not python 2.7.11 (well, sometimes they do - but alas, not all packages follow semantic versioning...). Thinking about everything from that perspective would guide a lot of choices in regards to fixing this issue, because those things tend to be the things that warrant a package being re-compiled in the first place.
Also, we need a concept of ABI compatibility in the database, the hash, or somewhere.
p.s. I maintained a fork of Linuxbrew (a fork of Homebrew) for a long while, and we got by using that in production for over a year; the install prefix was package/version/compiler.name-compiler.version, e.g. $root/python/2.7.11/gcc-6.1.0.
Really I am just hanging on until we can use spack to install packages in a container (not Docker until/unless it solves its security issues, but Singularity or Shifter).
I don't think one can completely eliminate these problems (like the newarch PR, which IIRC introduced a new directory structure and required a complete rebuild), but I agree that there should be some steps taken to minimise such issues.
1) reuse:
For example, I should install cmake v3.6.0 or whatever and be good for a year or so.
This is a promise of https://github.com/LLNL/spack/issues/839.
The same issue also discusses resolving dependencies against already installed packages, which would help minimize duplicates.
2) reindex
If a new variant is added, one could assume that previously installed packages did not have this feature enabled, i.e. they are ~new-variant. This is, of course, not always true, but a good rule of thumb (see the sketch below). If spack reindex were clever enough to update the database or .spack folders with this info, it would also help together with (1).
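A minimal sketch of that rule of thumb (the names here are illustrative, not Spack internals):
# Hypothetical reindex helper: any variant a stored spec predates is
# recorded as disabled ('~variant').
def backfill_variants(stored_variants, current_variant_names):
    """Return stored_variants extended with False for unknown variants."""
    filled = dict(stored_variants)
    for name in current_variant_names:
        filled.setdefault(name, False)  # assume '~name' for older installs
    return filled

# A python installed before a 'tk' variant existed:
print(backfill_variants({'ucs4': False}, ['ucs4', 'tk']))
# -> {'ucs4': False, 'tk': False}, i.e. python~ucs4~tk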
Assuming that newarch-like changes don't come that often, those two solutions should greatly reduce the reinstallation needs.
Hi Adam,
Installs are done in separate directories, and the current version of buildsets is essentially one symlink pointer to one of these dirs (this works brilliantly well with modules, since only the modulefiles are under stable pathnames - e.g. /opt/hpcbios/latest/modules/all). Software binaries instead live under tagged directories, e.g. 20160721 - good to see with ldd.
Never remove old dirs until at least 12 months pass. This permits perfect roll-back (helps with emerged bugs) or roll-forward (useful for advanced testing). With this kind of freedom there is no complaining, given the advanced service - assuming you religiously make the changes only during maintenance windows. Yes, that is a MUST. The only real question here is whether you are eventually going to be providing bug fixes (via upstream tools and software) or not! If you already do OS upgrades, e.g. RedHat minors, you already have your answer.
Basically, with tools such as Spack or EasyBuild, I'm surprised that not everybody is doing this already. Software builds are just one more product on an HPC platform; they should be treated similarly to several runs of, say, a climate code: store it all if you want perfect post-processing. The fact that they are useful for other software is merely a merry coincidence.
F.
I've been using Spack for over 6 months now, and it has made my life way easier. But there is still one nagging problem that I can't get past. Hashes and the need for constant reinstalls. They're great for ensuring you get exactly what you want, but they make using Spack in production a nightmare.
Hashes are a big part of what makes your life easier. It seems to me your real problem is not hashes themselves, but the fact that hashes are a moving target for you.
Every time a new version of some random library is added, Spack reinstalls that library and everything that depends on it instead of just using the old one.
...
I have to do it manually because Spack isn't backwards compatible enough to locate the old packages. spack find no longer gives me the right path because it drops the "linux-x86_64".
I kind of faced the same situation myself and, although I have no perfect solution to propose at the moment, I see two main problems: tracking the develop branch in production, and the lack of regular releases.
The former is your decision and you have to get along with it: if you use develop and pull frequently, you accept the associated risks, including changes that are potentially not backward compatible.
The latter is a slightly more complicated matter, and there are reasons why the current situation is such: the software is beta and rapidly changing, and managing the project the way it's managed right now keeps the entire process much more flexible.
If we want more stability, a viable solution would be releasing versions of spack on a regular basis, even if those versions are still considered beta. This would permit selecting a version for deploying production software and sticking with it until the next maintenance window.
Release procedures, though, come with a price the community must be willing to pay: a slightly higher degree of discipline and more commitment on non-feature issues. For instance:
- would you be willing to trade the immediate availability of a long-awaited new feature for a more stable codebase?
- would it be ok if, a month before the next release, spack goes into feature freeze mode and only fix PRs will be merged to develop?
- would you be willing to contribute unit-test-only PRs to cover part of the 35% that is not covered right now?
To be crystal clear, I would advocate for it, but I don't think the majority of people actively contributing to the project share my opinion here.
@mwilliammyers
Also, this by no means addresses the root of the problem (and you mentioned you don't use environment modules) but it would be nice to have either way: finer control over modulefile generation in the face of conflicts. #984 and #1135 did a lot for environment modules which is awesome. But at the end of the day: spack might be a rat's nest of 500+ dependencies etc. but we only need a handful of modulefiles.
In modules.yaml you have the blacklist keyword, which accepts constraints. Blacklisted packages won't be considered for module file generation.
@adamjstewart
Yes, this would be a radical change. But imagine this. What if someone merged a new functionality into Spack, like newarch or namespaces, and you could simply run spack reindex and everything would just work. Spack would update all of the hashes in .spack/hash.txt or in the database or wherever, and Spack wouldn't decide that you didn't have anything installed anymore.
And goodbye reproducibility :smile:
Seriously: I think it would already be a problem right now that variants are boolean. If I add a new variant, should I consider it True or False when reindexing? And if I remove it?
And this does not even take into account the case where variants could be generalized to admit a set of values...
Wow, I'm glad this Issue is generating so much discussion. It's nice to see that I'm not the only one out there facing this problem. Let me respond to some of your comments:
@mwilliammyers:
I think the recent change with dependency types could lay the groundwork for fixing this issue. Maybe a good place to start is being a lot less strict on build type dependencies. For example, I should install cmake v3.6.0 or whatever and be good for a year or so. Setting aside explicitly installing cmake, no matter what package's DAG changes, I don't need to rebuild cmake... ever... Packages might need to be better about specifying the version they depend on because a lot of software legitimately depends on a certain version of cmake (or really they just need version 2.8+).
I agree, but I would take this one step further. I've mentioned this before in #646 and I think it has been echoed in several issues elsewhere (#311, #577, #644, #839, #1055, #1280). Spack needs to take into account already installed packages before concretization, not just for build dependencies but for all packages. For example, let's take these situations:
New version of dependency
I install py-numpy, then someone adds a new version of zlib to Spack. I then install py-scipy. Instead of linking against my already installed Python and py-numpy, Spack reinstalls Python and py-numpy with the newer version of zlib. This isn't the behavior I want, and I don't think it's the behavior that most users expect. Unless I explicitly say spack install py-scipy ^[email protected], I clearly don't care what version of zlib gets used. Yes, we can add an install flag that still forces rebuilds if people really want it. But by default, Spack should use what it has installed unless asked to do otherwise.
New variants added or removed
This is where @alalazo's concern comes in:
If I add a new variant should I consider it True or False when reindexing ? And if I remove it ?
@davydden proposed the following solution to this:
If a new variant is added one could assume that previously installed packages did not have this feature enabled, i.e. they are ~new-variant. This is, of course, not always true, but a good rule of thumb.
I'm going to have to side with @alalazo on this one. We can't assume a new variant is disabled in older installations. In fact, I would say it's just as common to add a variant that makes an already present feature optional (#966 is an example). A better course of action would be to assume the default value of the newly added feature, although this is not guaranteed to be the case either.
@alalazo Here is what I propose as the most robust solution. How do we choose the new value of a previously non-existing variant? We don't. If I install python~ucs4 and a new +tk variant is added, the old Python installation cannot be guaranteed to be python~ucs4+tk or python~ucs4~tk. Why not just describe it as python~ucs4? To be clear, what I'm suggesting is that Spack needs to understand that variants can be enabled, disabled, and unknown. If a variant is unknown, that's completely fine. If I install py-numpy, which depends on python, well, python is already installed. If I install py-numpy ^python~tk, then I'm going to have to reinstall python, because I don't know whether or not the old installation has the +tk variant enabled. (A toy sketch of this three-valued logic follows below.)
Note that these changes would not even involve changing the hash. python~ucs4 is still python~ucs4, even after the +tk variant is added.
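Here is that sketch, purely illustrative; none of these names exist in Spack:
# Toy model: a stored variant can be enabled, disabled, or unknown
# (because the variant postdates the installation).
ENABLED, DISABLED, UNKNOWN = '+', '~', '?'

def can_reuse(installed_value, requested_value):
    """Can an installed package satisfy a request for this variant?"""
    if requested_value is None:       # request doesn't mention the variant
        return True
    if installed_value == UNKNOWN:    # predates the variant: can't promise
        return False
    return installed_value == requested_value

# python installed before '+tk' existed:
installed = {'ucs4': DISABLED, 'tk': UNKNOWN}
print(can_reuse(installed['tk'], None))      # True:  plain 'py-numpy' reuses python
print(can_reuse(installed['tk'], DISABLED))  # False: 'py-numpy ^python~tk' rebuilds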
@alalazo:
Release procedures though come with a price the community must be willing to pay : a slightly higher degree of discipline and more commitment on non-feature issues. For instance :
would you be willing to trade the immediate availability of a long-awaited new feature for a more stable codebase?
Yeah, probably. I can see why this would irk some people. But those that really depend on this feature can always work on that branch. It seems like every major PR (build dependencies, newarch, etc) results in at least a dozen separate issues. In my mind, PRs that change the hash shouldn't be added until we are ready for a new release.
would it be ok if, a month before the next release, spack goes into feature freeze mode and only fix PRs will be merged to develop?
Yes, I think this would be a good idea. We should either freeze develop, or a new release branch according to the Git Flow model. During this time, only bug fixes should be accepted. I also think we need to separate out the core spack libraries and spack packages. The core libraries should be frozen, but not the packages.
would you be willing to contribute unit-test-only PRs to cover part of the 35% that is not covered right now?
Yes, I agree that whenever someone touches the core Spack libraries, they should add unit tests for it. So far I personally haven't really made that many changes outside of spack create, so I'm willing to maintain a set of unit tests for that subcommand. Honestly, we need better documentation on how to add and run unit tests locally. I didn't figure it out for myself until very recently.
@fgeorgatos So what you're suggesting is: instead of having a single Spack installation in /soft/spack/, create a new Spack installation every time the hash significantly changes (newarch/build-deptypes style merges). It could either be dated (/soft/spack-20160721/) or numbered (/soft/spack-0.9.1/). Then every time you end up reinstalling something important, you can change your environment variables to point to the new installation, while keeping the old installation around for a while to prevent angry users. If a problem somehow arises, you can point the environment variables back to the old build, like you suggested. While this does end up with significantly more builds, it makes reinstalls much easier.
While I still think we should implement my above suggestions for having Spack check installed packages before concretization, this will help for the less common installation directory breaking changes (see #1329). It will also make reinstalls much easier for me.
We should dictate that the spec can only change once during a release, specifically at the end of a release cycle on a separate release branch (see the Git Flow model). Every hash changing PR would force a new release, since it is incompatible with the previous release. @alalazo @eschnett @tgamblin Thoughts?
Yes, we can add an install flag that still forces rebuilds if people really want it. But by default, Spack should use what it has installed unless asked to do otherwise.
:+1:
Why not just describe it as python~ucs4? To be clear, what I'm suggesting is that Spack needs to understand that variants can be enabled, disabled, and unknown.
I like this idea; so far I don't see any scenario where it would break something or lead to unexpected errors. Not sure if it is easy to introduce, though.
would it be ok if a month before the next release spack goes in feature freeze mode and only fixes PR will be merged to develop?
That's certainly a good idea and I would like to have it. develop should be frozen, and only fixes to packages are to be merged. Alternatively, one can introduce a release-candidate branch and then merge fixes there and to develop. In the meantime, new features can still be added to develop.
The whole freezing idea suggests that we need a way to test Spack to catch errors in packages. It would be good to automate this and test all packages on a few chosen OS / compiler / MPI / Blas / Lapack combos, but we won't be able to check everything (!) due to the combinatorial nature of Spack. So I would suggest, at least for now, that the Spack community test some packages before each release. As an example, I use Spack mostly to install dealii, so when we are in freeze mode I will be willing to test spack install --run-tests dealii on Ubuntu and macOS with different MPIs and Blas/Lapacks.
This involves installation of arpack-ng, petsc, trilinos, mumps, suite-sparse, superlu-dist, slepc, tbb, and others -- quite good coverage of packages related to sparse linear algebra.
If several people try to install at least a few of the packages important to them, we would iron out most of the errors before each release. That would certainly help to have more satisfied users.
@adamjstewart As long as the spec for a package doesn't change, Spack can always convert your spec to a hash.
So... what if you maintain the specs used to install the 500 packages your users are interested in, and associate a "short name" with each spec? When Spack upgrades, you re-run spack install on those 500 specs. Then run a script to make a symlink from each short name to the Spack hash currently associated with the given spec. Your users would access packages via YOUR short names, NOT Spack specs. By keeping control of the association between short names and installed prefixes, you insulate your users from changing hashes. (A sketch of such a script follows below.)
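A sketch of what that script could look like; the short names, specs, and link root are examples, and it assumes spack location -i, which reports an installed spec's prefix:
# Hypothetical "short name" indirection: users see stable symlinks,
# while the hashes behind them may change on every Spack upgrade.
import os
import subprocess

SHORT_NAMES = {            # curated by the admin; examples only
    'python-2.7': 'python@2.7.11%gcc@6.1.0',
    'zlib-default': 'zlib@1.2.8',
}
LINK_ROOT = '/soft/shortnames'   # illustrative path

def prefix_for(spec):
    """Ask Spack for the install prefix of an (installed) spec."""
    out = subprocess.check_output(['spack', 'location', '-i', spec])
    return out.decode().strip()

for name, spec in SHORT_NAMES.items():
    link = os.path.join(LINK_ROOT, name)
    if os.path.lexists(link):
        os.remove(link)
    os.symlink(prefix_for(spec), link)  # re-point after reinstalls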
Yes, we can add an install flag that still forces rebuilds if people really want it. But by default, Spack should use what it has installed unless asked to do otherwise.
I've said before... as convenient as this sounds, it is a recipe for unrepeatable builds. Don't want it, don't want it, don't want it.
A better (but similar) approach is to allow Spack to direct its activities according to a list of package specs. If that list is the set of installed packages, then you get the above behavior. But you could also query the set of installed packages on one Spack, and use the list to direct builds on another Spack, where those packages weren't already installed.
@citibeth I'm not sure I understand your concerns about reproducibility. My suggestion of taking into account previously installed packages doesn't affect reproducibility in any way. In my mind, reproducibility implies that it is possible to reinstall something exactly the same way as someone else. Let's look at the following scenarios:
1a. Implicit build (with current logic)
If two people run spack install python, they both end up with python. But depending on what version of Spack they are using and what their packages.yaml file looks like, they could end up with wildly different installations.
1b. Implicit build (but take into account already installed packages)
The behavior is the same as before, except that Spack doesn't reinstall Python and every Python module you've ever needed just because a new version of zlib came out.
As you can see, if you don't explicitly state what you want, your build cannot be reproducible under the current logic _or_ with my suggestions. If you really want something reproducible, you have to be explicit.
2a. Explicit build (with current logic)
If two people run:
spack install [email protected]%[email protected]+tk~ucs4 arch=linux-centos6-x86_64 \
^[email protected]%[email protected] arch=linux-centos6-x86_64 \
^[email protected]%[email protected] arch=linux-centos6-x86_64 \
^[email protected]%[email protected] arch=linux-centos6-x86_64 \
^[email protected]%[email protected] arch=linux-centos6-x86_64 \
^[email protected]%[email protected] arch=linux-centos6-x86_64 \
^[email protected]%[email protected] arch=linux-centos6-x86_64 \
^[email protected]%[email protected] arch=linux-centos6-x86_64 \
^[email protected]%[email protected] arch=linux-centos6-x86_64 \
then they will get a reproducible build, no matter what version of Spack they are on or what their packages.yaml file looks like.
2b. Explicit build (but take into account already installed packages)
If ^[email protected]%[email protected] arch=linux-centos6-x86_64 is already installed, Spack will use it. But since I'm being absurdly specific with the above installation instructions, I'm guaranteed to get a reproducible build.
Yes, typing out the command in 2a is silly, and no one does it, for a reason: because we don't care about reproducibility. What we care about is that Spack records exactly what we installed in the python-superlonghash/.spack directory, so that if for whatever reason we need to reproduce the build, we can. If I install python@3.5.2, that means I want to install Python version 3.5.2. It doesn't say anything about what version of zlib I want to use. Right now, Spack ignores what is installed and uses the latest version of zlib available. What I'm suggesting is that if the user doesn't specify what version of zlib they want to use, just use what is already installed.
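As a toy illustration of the behavior I'm asking for (this is not Spack's concretizer, just the version-selection rule in miniature):
# Prefer an already-installed version when the user leaves the version
# unconstrained; otherwise honor the explicit request.
def choose_version(available, installed, requested=None):
    key = lambda v: tuple(int(p) for p in v.split('.'))
    if requested is not None:
        return requested              # explicit spec always wins
    reuse = [v for v in available if v in installed]
    return max(reuse or available, key=key)

available = ['1.2.7', '1.2.8', '1.2.11']
print(choose_version(available, installed=['1.2.8']))            # '1.2.8' (reused)
print(choose_version(available, installed=[]))                   # '1.2.11' (newest)
print(choose_version(available, ['1.2.8'], requested='1.2.11'))  # '1.2.11'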
@citibeth I'm not sure I understand your concerns about reproducibility.
If two people have the same version of a program (in this case Spack), and they give it the same input (same command line; say, spack spec python and the same packages.yaml), then reproducibility means that they will get the same result. Taking into account previously installed packages makes spack spec (and spack install) depend not just on the explicit inputs, but also on the entire history of _other_ spack install commands the user has given this Spack instance.
My suggestion of taking into account previously installed packages doesn't affect reproducibility in any way. In my mind, reproducibility implies that it is possible to reinstall something exactly the same way as someone else.
Under current Spack, that is possible as long as you copy the other person's command line and packages.yaml file. That would no longer be the case if currently-installed packages are taken into account.
1a. Implicit build (with current logic)
If two people run spack install python, they both end up with python. But depending on what version of Spack they are using and what their packages.yaml file looks like, they could end up with wildly different installations.
But if I say "I'm having problems with my Python install, can you take a look...?" then you can copy my exact Spack and packages.yaml file (both relatively easy) and reproduce exactly what I'm seeing. You don't have to also copy gigabytes of my installed software base.
1b. Implicit build (but take into account already installed packages)
The behavior is the same as before, except that Spack doesn't reinstall Python and every Python module you've ever needed just because a new version of zlib came out.
I believe there are better ways to achieve this:
- Turn the implicit explicit. That is, obtain a list of full package specs, based on my installed packages. I can run spack install against that list (or edit it down / curate it based on my needs). I can send that list to you as an input for your Spack, and you can reproduce my behavior, regardless of what you happen to have installed.
- Don't worry about rebuilding. Rebuilding is cheap (at least, cheaper than when we did it by hand). See #1338.
As you can see, if you don't explicitly state what you want, your build cannot be reproducible under the current logic _or_ with my suggestions. If you really want something reproducible, you have to be explicit.
- No one ever explicitly states everything they want; see #1338.
- Your build CAN be reproducible under current logic, as long as you replicate the Spack and the packages.yaml.
2a. Explicit build (with current logic)
If two people run: [the fully specified spack install command above] then they will get a reproducible build, no matter what version of Spack they are on or what their packages.yaml file looks like.
Not true. Maybe someone added a new optional variant to bzip.
Yes, typing out the command in 2a is silly, and no one does it, for a reason: because we don't care about reproducibility.
Not really true. We don't type full specs because we don't care about a lot of the details of a package's dependencies. But when something breaks, we make a longer spec. And once we've gotten a spec that works, we VERY MUCH DO care about reproducibility. I've gotten my software build working, and I want to pass it off to a colleague who will build the same software and use it. Neither of us really cares whether they end up building an extra python in the process. As long as it works the same on their computer as on mine.
What we care about is that Spack records exactly what we installed in the python-superlonghash/.spack directory, so that if for whatever reason we need to reproduce the build, we can. If I install python@3.5.2, that means I want to install Python version 3.5.2. It doesn't say anything about what version of zlib I want to use.
See #1338.
Right now, Spack ignores what is installed and uses the latest version of zlib available. What I'm suggesting is that if the user doesn't specify what version of zlib they want to use, just use what is already installed.
The problem is, Spack has no way to know whether the new zlib fixes a grievous security flaw, breaks binary compatibility, or just makes one of the compression algorithms a little faster. Without that knowledge, the safest thing to do is update to the new version.
What's the problem? Spack is automated and CPU cycles are cheap.
@adamjstewart See here for an example of how I distribute my software and what I mean by reproducibility: https://github.com/citibeth/icebin
I don't distribute complete concretized specs, because they contain a lot of extraneous information that would require continuous updating. Of the specs I do distribute, I want them to be repeatable when run on the same version of Spack.
Hello @adamjstewart,
Yes, precisely that. I've been using a variation of the dated approach you mention, with no more angry users. Typically, a particular date corresponds to specific tool and configuration file hashes (i.e. it looks like a date, but it is merely a tag). Btw, I use modulefiles to tune env. variables pointing to the right place - this adds elegance.
Sometimes people might complain about too many modules, but honestly it's not as serious as presented when the trees are separate, and it's not my problem either :-p . Space consumption is insignificant in a true HPC context, and the user convenience advantage is huge. I buy into that!
As others said in this thread, builds are cheap now, so let's take advantage.
F.
Adding #1362 to the list of issues that request that already installed packages be considered during concretization.
Users depend on the packages that we install for them. They write scripts that depend on the software stack on the cluster. Therefore I would prefer not to reinstall the entire software stack every few months, because if things change, the users would need to adapt their scripts every few months, and they would complain a lot.
We get requests from scientists that they want to rerun a job with exactly the same setup as they had, when running the job for the first time maybe 2 years ago.
Up to now on our clusters, we had the policy "install once and keep until the cluster is end-of-life" to provide a certain continuity for the scientists. If there is a change for a software package, we install the newer version side-by-side with the existing version. Because of this, we had some cases where users reran jobs after 3 years and got exactly the same result down to the last bit.
Hashes being a moving target is also a problem for me for several reasons (one of them is related to having the hash as part of the installation directory, as explained in #4164 ).
Reproducible builds can also be regarded from a different point of view. For me it is also important that the same spack command (explicitly specifying all dependencies with versions, variants, and options) results in the same installation with the same hash. This would allow scripting the installations: starting from a system with only the minimal requirements for spack, one could set up a whole software stack from scratch.
I was playing around with spack in the past two months, trying to set up a script that installs 90 very basic packages. When regularly running git pull once a day to get all recent changes of the development branch, the script broke on average once or twice a week (and after uninstalling all packages and rerunning the script, the result looked different every week).
From my examination of spack, it looks like this would be a show-stopper for adoption of spack at our site. Is pinning this issue down a priority?
@0xaf1f: yes, this is being worked on, and there should be a new concretization algorithm added in the next couple months that will be able to more aggressively reuse what is installed.
Can you elaborate on your requirements?
That's great! We manage several hundred applications on our cluster, so this bug would mean the accumulation of too much cruft for us to be able to clean up after. So if there is a solution in sight, we can probably get started test-driving it. From what I can say offhand, we're looking for:
- automation of building and installation, with dependency resolution and flexible build configurations (like with/without 64bit integers)
- integration with lmod
It'd also be nice for users to be able to benefit from packages we make by using them elsewhere, like in a singularity container.
I think the proposed new concretizer will only solve some of your problems. In the past, hashes for exactly the same build have changed for several reasons, including:
- changes to the hash algorithm itself (core Spack)
- changes in a package's available options
The new concretizer would not solve either of these issues.
The new concretizer would solve a subtle but related issue: Spack CHOOSING a different package/variant just because it's now available, when it didn't use to be (i.e. forced upgrades every time Spack has a new version available).
Oh, that's a bummer. I guess that takes us back to where we started.
@0xaf1f The two points you mentioned above:
- automation of building and installation, with dependency resolution and flexible build configurations (like with/without 64bit integers)
- integration with lmod
are well covered in Spack.
The issues with hashing arise only if you are using develop in production and pull frequently to update it. If instead at some point you branch off develop and maintain your separate release branch, you won't see any problem with changing hashes.
Afif... I'm confused now about what you mean by "changing hash". Spack involves a few transformations from a spec that you type to a hash: briefly, the abstract spec you type is concretized into a full spec, and that full spec is then hashed. If anything changes in any part of the process, the hash will be different. The following things can change the hash, depending on your situation:
- Upgrading the hash algorithm (core Spack)
- Changes in how a spec concretizes (eg. due to a new version available)
- Changes in a package's available options
The surest way to avoid any change in hash has historically been to not upgrade Spack.
There are also ways to ensure you get the same concretized spec when you want it to be the same.
In general, I would not rely on the hash being the same forever. There are also ways ("adding another level of indirection") to deal with that.
A little more information on what you plan to do, and what problems you foresee, would be really helpful.
@0xaf1f: there's no reason any of your use cases wouldn't work in Spack right now. I guess I'm not sure what aspect of the hashing you're nervous about, as @citibeth mentioned.
Spack hashes are designed to identify a unique configuration. If the Spack builtin packages change over time, the current concretizer defaults to pick up the latest available version of a package. The usual issue people have with this is that if the default configuration changes slightly for dependencies, then you end up rebuilding a lot of packages when you could've reused an existing installation. The new concretizer will by default allow you to reuse an existing installation, with an option to use the most recent thing, or (potentially) the package from a recent release. This would reduce the number of rebuilds you see over time.
In module schemes, you're free to get rid of hashes altogether. See @alalazo's tutorial here. Your users don't have to be exposed to hashes. In a container, you can use spack view to link things into a prefix so that users aren't exposed to the hashes of a particular install. There is a container guide here. And we'll be merging an update to that soon.
Does that help?
@alalazo
The issues with hashing arise only if you are using develop in production, and pull frequently to update it. If instead at some point you branch off develop, and maintain your separate release branch, you won't see any problem with changing hashes.
We would probably stick to official releases. Although I'm seeing now that all the packages are maintained in the same repository as the package manager itself (which is not an idea I particularly like, though I understand if this is still an early stage of development and they're still tightly coupled), so I can imagine that people would be using develop in production just to be able to get the latest packages.
@citibeth
Afif,... I'm confused now about what you mean by "changing hash"?
I didn't use that term; I just referred to "this bug".
The following things can change the hash, depending on your situation:
- Upgrading the hash algorithm (core Spack)
- Changes in how a spec concretizes (eg. due to a new version available)
- Changes in a package's available options
My concern is with (1): a new hash for an unchanged spec due to updates in the hashing algorithm. I don't want to end up with packages installed by spack that a newer spack can no longer recognize and keep track of, so that exact duplicates of those packages would end up getting installed and I'd have to clean up after the whole thing.
In general, I would not rely on the hash to be the same forever. There are also ways ("adding another level of indirection") to deal with that.
I take it you're referring to https://github.com/spack/spack/issues/1325#issuecomment-234351593.
For the record, my last comment was posted before I saw @tgamblin's response right before it. Reading now...
My concern is with (1): a new hash for an unchanged spec due to updates in the hashing algorithm. I don't want to end up with packages installed by spack that a newer spack can no longer recognize and keep track of, so that exact duplicates of those packages would end up getting installed and I'd have to clean up after the whole thing.
This should be "solved" by the new concretizer. I think of it more as a workaround then a solution.
In module schemes, you're free to get rid of hashes altogether.
In practice, this isn't actually true. Since Spack will reinstall the exact same package with different hashes, it's impossible to differentiate between them without the hash, resulting in module file conflicts.
My concern is with (1): a new hash for an unchanged spec due to updates in the hashing algorithm. I don't want to end up with packages installed by spack that a newer spack can no longer recognize and keep track of, so that exact duplicates of those packages would end up getting installed and I'd have to clean up after the whole thing.
This should be "solved" by the new concretizer. I think of it more as a workaround then a solution.
I understood the concretizer as using versions of dependencies already installed, rather than trying to use the latest for everything. If that's the case, it wouldn't address my issue at all (if a newer spack can't recognize any previously installed version at all due to the hash algorithm changes). Did I misunderstand?
In module schemes, you're free to get rid of hashes altogether.
In practice, this isn't actually true. Since Spack will reinstall the exact same package with different hashes, it's impossible to differentiate between them without the hash, resulting in module file conflicts.
So the same package would still get reinstalled if the hash algorithm changes? That's exactly my concern.
I'm referring to the case of:
- same spec
- same dependency versions
- same build options
with the _only_ difference being a newer version of spack featuring a new hashing algorithm. Will this package get reinstalled and the existing installation become unrecognizable?
I don't want to end up with packages installed by spack that a newer spack can no longer recognize and keep track of, so that exact duplicates of those packages would end up getting installed and I'd have to clean up after the whole thing.
Spack can already see previously installed versions of things, so no installation becomes "unrecognizable". It's all still there in spack find. You can still refer to the prior spec by hash, and we could, even now, add a spack garbage-collect or similar command to remove everything that isn't needed by a certain set of packages (e.g. those installed explicitly). A hash change doesn't mean anything falls out of the install database, and even if you remove the database, a spack reindex will still pick up packages installed with an old algorithm. You can still generate modules for old packages even now.
What Spack does not currently do is look at what is installed and try hard to reuse it before installing a new package. We concretize first, with the current hash scheme, and we reuse only if hashes match. The new concretizer will check semantically whether an old dependency can be used for a new install.
The way this works is that the hash a spec is installed with is set in stone at install time and stored with the package. We don't re-hash old specs, we continue to respect their original hash.
So the same package would still get reinstalled if the hash algorithm changes?
No. It's not implemented yet, but if a package that satisfies the spec you asked for in spack install already exists, by default we could just say "already installed". If you say spack install --newest, then maybe we could give you the old behavior, and you get a completely new package with new dependencies alongside the old one.
I'm referring to the case of
- same spec
- same dependency versions
- same build options
with the only difference being a newer version of spack featuring a new hashing algorithm. Will this package get reinstalled and the existing installation become unrecognizable?
Depends on whether we're talking about an abstract or concrete spec. But in general, this can happen (and I think this is kind of reasonable default behavior):
$ spack install foo
# foo is concretized and installed with hash /abc123
# <hash algorithm changes>
$ spack install foo
# spack looks in the DB, sees a foo that satisfies the abstract spec 'foo',
# spack says foo is already installed.
$ spack install foo --newest
# spack concretizes foo without looking at what is installed and
# just takes the newest version of everything
# a new foo is installed alongside the old foo with hash /def456
Make sense?
I think so. And that mostly looks great. But with spack install foo --newest, if foo and all its dependencies were already at their latest versions, you would end up with two completely identical installations of foo, but with different hashes, right? It's admittedly not as large a problem as I was originally concerned about, but I think that's still a problem.
@alalazo https://github.com/alalazo
The issues with hashing arise only if you are using develop in production,
and pull frequently to update it. If instead at some point you branch off
develop, and maintain your separate release branch, you won't see any
problem with changing hashes.We would probably stick to official releases
Do you mean official releases of Spack or the packages Spack installs?
IMHO, there is not a big problem working with interim Spacks, as long as
the packages themselves are official releases.
. Although I'm seeing now that all the packages are maintained in the same
repository as the package manager itself (which is not an idea I
particularly like, though I understand if this is still an early stage of
development and they're still tightly coupled),It will likely be decoupled at some point.
so I can imagine that people would be using develop in production just to
be able to get the latest packages.
No, that's a bad idea. Unlike real versions, develop
is not checksummed,
and can therefore be a security risk. Plus it's not reproducable, making
it hard to debug if people have problems; because somepackage@develop
that you install today could be different from somepackage@develop
that
someone else installs tomorrow. You should only use develop
for packages
that YOU write and you KNOW are not tainted, and then only while developing
the package. Otherwise, you should use properly checksummed versions.
Even if the upstream authors haven't released an official version, you can
still make an unofficial release of that package by choosing a particular
version out of their Git repo.
The following things can change the hash, depending on your situation:
- Upgrading the hash algorithm (core Spack)
My concern is with (1): a new hash for an unchanged spec due to updates in the hashing algorithm. I don't want to end up with packages installed by Spack that a newer Spack can no longer recognize and keep track of, so that exact duplicates of those packages would end up getting installed and I'd have to clean up after the whole thing.
There are two issues here: (1) Spack decides to rebuild (just about) everything, and (2) cleaning up the garbage. Issue (1) will happen, but it's not such a big deal, because Spack is an AUTO builder. Issue (2), I believe, Todd has already spoken to.
In general, I would not rely on the hash to be the same forever. There are also ways ("adding another level of indirection") to deal with that.
I take it you're referring to #1325 (comment): https://github.com/spack/spack/issues/1325#issuecomment-234351593.
Yes. I'm also speaking to Spack environments. I have an experimental version of Spack that creates "environments." Basically, it does a bunch of Spack installs according to a file listing what I need installed, and then immediately creates a script of "module load" commands that load exactly what was just installed. The hashes are embedded in the "module load" script, but are never used directly by the user. And it doesn't matter what other stuff is sitting around in Spack, be it garbage or useful packages used by another environment or project. When using an environment, I only get the packages I planned on getting.
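As a rough sketch of the idea (the file name, format, and command below are illustrative of that experimental branch, not a documented interface; mainline Spack later grew environments along similar lines):
$ cat environment.yaml
# the complete list of what this project needs installed
specs:
- [email protected]
- [email protected] +mpi
$ spack install-environment environment.yaml   # hypothetical command name
$ cat loads.sh   # generated module-load script; the hashes live here
module load mpich-3.3-gcc-7.2.0-abc1234
module load hdf5-1.10.1-gcc-7.2.0-def5678
The user sources loads.sh and never types a hash.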
I understood the concretizer as using versions of dependencies already installed, rather than trying to use the latest for everything. If that's the case, it wouldn't address my issue at all (if a newer Spack can't recognize any previously installed version at all due to the hash algorithm changes). Did I misunderstand?
The result of the current concretizer depends on (a) the spec you provide, (b) the nature of the package.py files, and (c) the packages.yaml files you provide. The proposed change would allow the concretizer to also depend on (d) a list of fully concretized specs; for example, the list of things already installed.
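To make (c) concrete, here is a small packages.yaml sketch of the kind of site preferences that steer the concretizer (package names are just examples):
$ cat ~/.spack/packages.yaml
packages:
  mpich:
    version: [3.3]     # prefer this version unless a spec demands otherwise
  all:
    providers:
      mpi: [mpich]     # prefer mpich to satisfy the virtual mpi dependency
The proposed (d) would add one more input of the same flavor: a list of already-concretized specs the concretizer may reuse instead of resolving everything fresh.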
In module schemes, you're free to get rid of hashes altogether.
In practice, this isn't actually true. Since Spack will reinstall the exact same package with different hashes, it's impossible to differentiate between the installs without the hash, resulting in module file conflicts.
So the same package would still get reinstalled if the hash algorithm changes? That's exactly my concern.
Yes, they would. What is the problem with that? Realistically... the days of Spack's hash algorithm changing are drawing to an end.
I'm referring to the case of
- same spec
- same dependency versions
- same build options
with the only difference being a newer version of Spack featuring a new hashing algorithm. Will this package get reinstalled and the existing installation become unrecognizable?
To be clear: suppose you install [email protected], and that results in hash h101. Now you upgrade Spack and you install [email protected], resulting in hash h202. Spack will re-build because there is nothing installed under hash h202. However, the copy of [email protected] installed under h101 is still there and still works. Spack just won't be able to "find" it.
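In other words, on disk you end up with something like this (paths and hashes illustrative):
$ ls $SPACK_ROOT/opt/spack/linux-x86_64/gcc-7.2.0/
foo-4.5-h101.../    # built by the old Spack; still present and still runs
foo-4.5-h202.../    # rebuilt by the upgraded Spack under the new hash
Anything you linked or module-loaded against h101 keeps working; the new Spack simply no longer accounts for it.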
@citibeth:
so I can imagine that people would be using develop in production just to be able to get the latest packages.
No, that's a bad idea.
@alalazo means the develop branch of Spack, not develop versions of specific packages.
@0xaf1f:
you would end up with two completely identical installations of foo, but with different hashes, right? It's admittedly not as large a problem as I was originally concerned about, but I think that's still a problem.
I don't anticipate that this would happen frequently. We don't change the hash algorithm frequently anymore, and when we add new things to the spec, we tend to do it in ways that do not invalidate old hashes. In general, hash changes happen because something different was actually built.
I suspect that there may be features added in the future that could require hashes to change, but we'd do a major release around something like that, and the concretizer changes mitigate the rebuild requirements.
@citibeth
Although I'm seeing now that all the packages are maintained in the same repository as the package manager itself (which is not an idea I particularly like, though I understand if this is still an early stage of development and they're still tightly coupled),
It will likely be decoupled at some point.
:+1:
so I can imagine that people would be using develop in production just to be able to get the latest packages.
No, that's a bad idea.
I was also referring to the develop branch of Spack, using it in order to get the latest package definitions. I'd really look forward to the decoupling, because we would want to stick to official releases of Spack while still getting the newer package definitions that come in.
To be clear: suppose you install [email protected], and that results in hash h101. Now you upgrade Spack and you install [email protected], resulting in hash h202. Spack will re-build because there is nothing installed under hash h202.
This is not the behavior that @tgamblin described in https://github.com/spack/spack/issues/1325#issuecomment-367864290. My understanding from that comment was that if you upgrade Spack and try to install [email protected], it will tell you it's already installed (unless you use --newest).
The behavior you're describing is what I have a problem with. It will result in lots of cruft that will be difficult to clean out.
I confess I did not check all the discussion, but wanted to add that using Lmod with hierarchical module naming and with hash_length: 0 is a problem.
Say you have mpich/3.3 installed and, depending on it, boost, all on develop from Spack. The mpich package was updated, and so the concretizer gets a new hash for mpich/3.3 for whatever reason. Then you install p4est, which depends on mpich. A new build of mpich gets installed and p4est with it, but the module for the new mpich/3.3 is skipped. Now a user cannot load p4est, since they cannot load p4est's version of mpich/3.3 by ml.
What I would expect is that spack checks whether the version of the package it's currently installing is already available. If it is, prefer that package install (or at least merge the leaves' modules).
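For reference, the setting in question lives in modules.yaml; with the hash suffix dropped, two different builds map to the same module name, which is how one of them gets skipped. A sketch (the exact schema may differ across Spack versions):
$ cat ~/.spack/modules.yaml
modules:
  lmod:
    hash_length: 0    # drop the hash suffix from module names; two builds
                      # of mpich/3.3 now collide on the same module path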
so I can imagine that people would be using develop in production just to be able to get the latest packages.
No, that's a bad idea.
I was also referring to the develop branch of Spack, using it in order to get the latest package definitions. I'd really look forward to the decoupling because we would want to stick to official releases of Spack while still getting the newer package definitions that come in.
OK, I see. I agree; I don't think there's a problem with using develop versions of Spack in production in order to get the latest packages. Because:
- Packages are being updated all the time, and people need to use them all the time, and Spack can (realistically) only make releases occasionally.
- You are right to be potentially worried about relying on "non-released" features in core Spack. But in reality, I've found the PR process for anything in core to be extremely deliberate and careful. I would not be overly concerned about using develop versions of Spack.
What I DO think is important is that, whatever version of Spack you DO use, you make it repeatable. I fork Spack to provide my own "spin" for our local environment. It includes standard Spack, plus whatever changes / fixes / additions I needed to get it working for our needs. I'm always trying to get those fixes integrated as PRs, of course; but there's always some gap between upstream Spack and the Spack we use.
To be clear: suppose you install [email protected], and that results in hash h101. Now you upgrade Spack and you install [email protected], resulting in hash h202. Spack will re-build because there is nothing installed under hash h202.
This is not the behavior that @tgamblin described in #1325 (comment). My understanding from that comment was that if you upgrade Spack and try to install [email protected], it will tell you it's already installed (unless you use --newest).
I looked up #1325 again, and I stand by my original claim.
The behavior you're describing is what I have a problem with. It will result in lots of cruft that will be difficult to clean out.
I HIGHLY recommend you use Spack Environments. Environments let you define the complete set of what you need in a portable, repeatable fashion. On top of that, I have a PR to garbage collect anything that's been installed that is NOT part of a current environment:
https://github.com/scheibelp/spack/pull/1
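A sketch of that workflow, using environment and garbage-collection commands along the lines of what was being proposed (exact names and flags may differ from what that PR implements):
$ spack env create myproject spack.yaml   # declare the set you actually need
$ spack env activate myproject
$ spack install                           # build exactly that set
$ spack gc                                # reclaim installs no environment references
Because the environment file is the source of truth, cruft left behind by hash changes becomes something you can delete mechanically rather than hunt down by hand.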
The discussion in this issue seems to be stale. I'm hoping that Spack's new concretizer will solve a lot of the issues raised in this post. If anyone has any further feedback, please open a new issue.