PM2 restart "services.json" doesn't update environment variables

Created on 20 Jun 2014 · 39 comments · Source: Unitech/pm2

We're using a JSON file to define all our node services and their appropriate environment variables, script starting points etc.

When we add an environment variable for one of our services and issue pm2 restart services.json, the variable isn't actually added to the process (verified using pm2 dump).

Deleting all the services and adding them again does work, but it takes down our APIs for a while.
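
For reference, a minimal services.json of the kind described here might look like this (the app name and variables are illustrative):

{
  "apps": [
    {
      "name": "api",
      "script": "./api.js",
      "env": {
        "NODE_ENV": "production",
        "SERVICE_PORT": "3000"
      }
    }
  ]
}

Adding a new key under "env" and running pm2 restart services.json is the step that fails to propagate.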

Labels: Open for PR, Enhancement

Most helpful comment

I would prefer to preserve environment on restart without changes. If you want to change the value of an ENV var, it's better to pm2 delete my-app; pm2 start my-app.js

All 39 comments

On restart pm2 will not reload services.json variables.
I don't know if a fix is possible in cluster mode; it needs investigation.

+1 - exporting regular environment variables (via export VAR=...) didn't seem to work either, so I've switched my app to use processes.json. Now I find out this doesn't work either? :)

I would prefer to preserve environment on restart without changes. If you want to change the value of an ENV var, it's better to pm2 delete my-app; pm2 start my-app.js

Our use case is continuous deployment. One of the environment variables is the hash of the last GitHub commit, which becomes available on the client so we can tell which version it's running. This variable changes with every restart. pm2 delete followed by pm2 start is not a zero-downtime reload.
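
Roughly the flow being described, assuming a shell deploy step (COMMIT_HASH is an illustrative name):

$ COMMIT_HASH=$(git rev-parse HEAD) pm2 restart services.json

which is exactly the restart path that currently ignores the new value.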

I would prefer to preserve environment on restart without changes.

Is it possible to merge the environment somehow? So you can do VAR1=123 pm2 start xxx and VAR2=456 pm2 restart xxx, and both variables will be present? I know it's very tricky, but it could solve this.
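
To illustrate the merge being suggested (this is the proposal, not current pm2 behavior; the app name is illustrative):

$ VAR1=123 pm2 start app.js --name my-app   # process.env.VAR1 === '123'
$ VAR2=456 pm2 restart my-app               # proposed: both VAR1 and VAR2 set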

If I need to change variables in an application I usually use a configuration file, not ENV variables, so I really don't know why they would need to change once the application has started. Of course, that's only a point of view.

@soyuka, I agree with you, but Meteor _forces_ us to use environment variables instead of config files. Maybe you could mention in that ticket your thoughts on this?

That environment variable looks really weird.

But it is a valid use-case, and if some applications use it, pm2 should probably support it.

Version 0.10.0, which fixes this issue, has been published:

$ npm install pm2@latest -g

Now environment variables are refreshed on restart (via CLI and via JSON declaration).

Does this work for startOrRestart?

Doesn't work for me using pm2 0.10.8.

Sorry, I was using

pm2 restart appName

and not

pm2 restart configFile.json

Everything is working now.

Thanks.

But startOrRestart really doesn't work.

I'm trying to figure out how one is supposed to refresh an environment variable for a cluster that is doing 0 downtime reloads using:

pm2 reload myapp.json

It seems the environment refresh was only implemented for pm2 restart, which doesn't give zero downtime.

Is it intentional that changes to the JSON file are not picked up by PM2 when a reload is invoked? pm2 delete and pm2 start seem a long way from zero downtime.

+1 @mykwillis Would like to have a way to gracefully load a new environment configuration.

+1 -- this would be very useful -- found that pm2 restart didn't even work -- I had to do pm2 delete; pm2 start xxx.json

+1 without this feature I can't perform zero downtime deploy

+1 This is a much needed feature to have reload / gracefulReload refresh env vars.

It's not the case for reload / gracefulReload, but this feature is available for restarts with PM2 1.0:

You can try it:

$ npm install Unitech/pm2#development -g

You can also restart with different pre-defined environment variables via

$ pm2 restart services.json --env production
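
The pre-defined environments live in the JSON declaration itself; a minimal sketch of what such a file might contain (names illustrative):

{
  "apps": [
    {
      "name": "api",
      "script": "./api.js",
      "env": { "NODE_ENV": "development" },
      "env_production": { "NODE_ENV": "production" }
    }
  ]
}

With this declaration, pm2 restart services.json --env production applies the env_production block on top of env.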

Also agree. Would definitely like the ability to update env with zero downtime. i.e. reload
e.g.
pm2 reload app.json --env debug (sets NODE_DEBUG, DEBUG vars)
pm2 reload app.json --env nodebug (turns debugging off)

+1

+1

@gflandre @Florelli did you try the development version?

@soyuka no I haven't yet (our issue is with production use, so it requires us to use a stable version).
Does it allow zero-downtime reloads w/ env update? I'll give it a go on my dev env anyway :)

I've run into this problem too; I need it to reload the environment variables. I couldn't get it to do so until I deleted the process itself from PM2 and started it as if it were new.

Any update on this? We have the same issue with 2.1.3, although we do not use the JSON file.

Same issue ...

See --update-env docs here.
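
With --update-env, a restart takes the current shell environment into account instead of keeping the one saved at first start; for example (my-app and MY_VAR are placeholders):

$ MY_VAR=new-value pm2 restart my-app --update-env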

@vmarchaud seems not to affect log file configuration changes though...

Yeah, some options aren't updated. We are thinking about changing this behavior, but it has some pros and cons.

+1 - restart and reload will not reload the ignore_watch configuration, which took me two hours of debugging to discover.

Reminder: you need to pass the ecosystem.json path if you started the application with it. pm2 doesn't save its location, so it can't pick up changes via pm2 restart applicationName.

@vmarchaud my bash history shows that I used pm2 restart ecosystem.json all the time. I tried again just a moment ago, and the output was:

# pm2 restart ecosystem.json
[PM2] Applying action restartProcessId on app [UEMINA](ids: 0)
[PM2] [UEMINA](0) ✓

and the change to ignore_watch didn't take effect. But if I change watch from true to false, that does work.
My pm2 version is 2.0.18, on CentOS 7.

@Zing22 You should update; the behavior changed around 2.x, if I recall correctly.

@vmarchaud The problem is still there. BTW, the exec mode is fork.
Is there a command like pm2 show 0 to display the ignore_watch list?

anyway, thanks for your time : )

@Zing22 You can dump the process data using pm2 prettylist and see the options
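
For example, to check whether the running process still carries the old ignore_watch value (the grep filter is just one way to narrow the dump):

$ pm2 prettylist | grep -A 3 ignore_watch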

Ok so maybe I'm taking crazy pills or something, but I keep having persistent problems with ENV vars and code/assets just downright _not updating_ when I do a pm2 restart either from the server or Capistrano deployment. It's maddening. If I ever make a meaningful change, I have to do cap production pm2:stop && cap production pm2:delete || cap production pm2:start, and now it's to the point where I don't _ever_ trust that restart is going to do what I need it to. It just seems horribly unreliable to me and it's happened on multiple projects that are in no way copied or related to one another. Am I just fundamentally misunderstanding what is supposed to be happening here?

On a prior project, I had some static web assets being served from an express server, and said assets would just refuse to update unless I first deleted the pm2 process and restarted it. There's no mechanism for doing this even intentionally that I'm aware of, and it caused many hours of frustration over many weeks, and ultimately required writing a simple bash script to stop/delete/start the process on deployment because it just wasn't working out of the box.

More context: I'm using .env files (via the node-dotenv module) and npm shrinkwrap. Shrinkwrap changes were also incidentally ignored on the last deploy: when I finally stopped and deleted the pm2 process, the next run exhibited a require error that was an issue on my UAT deployment but NOT production, which tells me that pm2 is "caching" something I don't want it to.

All other process managers I've worked with do a CLEAN RESTART from scratch when you restart them. There should be no state left on the process via PM2 or any other portion of the stack.

If I can't get this figured out in a sensible way I'm going to have to drop PM2 entirely for my sanity's sake :-)

Although this issue was closed in 2014, I still see weird behavior around restarts, even with tiny example projects. The only consistent solution I have is to run pm2 kill to unload the daemon entirely.

Please fix this broken, complex and useless functionality. Env vars are env vars: not JSON, not YML, not ecosystems, not anything different from export VAR=VALUE and process.env returning the correct value.
