ml-agents: Logging Std of Reward to TensorBoard

Created on 13 Nov 2019 · 4 comments · Source: Unity-Technologies/ml-agents

Is there an option to display the standard deviation of the reward in TensorBoard? Please advise on the fastest way to isolate this value.

discussion

All 4 comments

Not currently, but it would be trivial to add. In ml-agents/mlagents/trainers/trainer.py there is a line that reads stat_mean = float(np.mean(self.stats[key])), followed by a summary.value.add() call. You can simply add another summary.value.add() that uses np.std instead of np.mean.
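A minimal sketch of the change described above. The stats dict and the stat_mean line mirror the comment; the summarize_stats helper, the " Std" tag suffix, and the exact loop structure are assumptions for illustration, not the actual trainer.py layout.

```python
import numpy as np

def summarize_stats(stats):
    """Return {tag: scalar} pairs to write to the TensorBoard summary.

    stats maps a stat name to the list of values accumulated since the
    last summary interval (as in trainer.py's self.stats).
    """
    scalars = {}
    for key, values in stats.items():
        if len(values) > 0:
            stat_mean = float(np.mean(values))  # the existing line in trainer.py
            stat_std = float(np.std(values))    # the one-line addition for std
            scalars[key] = stat_mean
            scalars[key + " Std"] = stat_std    # assumed tag name for the new scalar
    return scalars

# Example: mean and std of a batch of cumulative rewards.
print(summarize_stats({"Environment/Cumulative Reward": [1.0, 2.0, 3.0]}))
```

Each returned pair would then be written with summary.value.add(tag=..., simple_value=...), so the std appears in TensorBoard as its own scalar chart next to the mean.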

Thank you, it works.
Please advise on why there are two trainer.py files?
I am working on Windows.
My understanding is that one is in the GitHub clone and the other is in Anaconda's site-packages, installed via pip. The duplication of files is frustrating; I made the mistake of editing the wrong trainer.py at first.

Hi @AsadJeewa, if you're planning to modify trainer.py, install ml-agents with pip install -e ./ in the ml-agents-envs and ml-agents directories, rather than pip install mlagents. This forces mlagents-learn to use the files in your GitHub clone. The duplication comes from how pip works: a normal install copies the files into the site-packages folder, whereas an editable install (-e) points at your working copy.
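Concretely, the editable install described above looks like this (a setup sketch; the directory names match the ml-agents repository layout referenced in this thread, and the uninstall step is an assumption to clear a previous pip install mlagents):

```shell
# Remove a previously pip-installed copy so the clone takes precedence
pip uninstall -y mlagents mlagents-envs

# From the root of your ml-agents clone, install both packages in editable mode
pip install -e ./ml-agents-envs
pip install -e ./ml-agents
```

After this, edits to trainer.py in the clone take effect immediately with no second copy in site-packages to keep in sync.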

Thank you. Much appreciated
