Ray: [docs] Provide tips for configuring logging

Created on 19 Jun 2020 · 8 comments · Source: ray-project/ray

I'm using RLlib, but I believe that this is a general question regarding ray.

What is your question?

I have a nicely configured RL environment using structlog for logging, which I want to use with RLlib.
Unfortunately, RLlib/ray seems to ignore my per-module log configurations:

import logging

logging.getLogger('my_env').setLevel(logging.WARNING)
logging.getLogger('my_env.env.simulation').setLevel(logging.INFO)

When running the env without Ray, this sets the logging level to WARNING for my entire environment, except for my_env.env.simulation, which logs at INFO level.
With Ray, all my INFO-level logs are printed - even those outside my_env.env.simulation that I typically don't want to see.
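For reference, the per-module behaviour described above can be checked with the standard library alone (no Ray involved): child loggers inherit the nearest configured ancestor's level unless one is set explicitly.

```python
import logging

# Per-module configuration: the whole package logs at WARNING,
# except one submodule which is lowered to INFO.
logging.getLogger('my_env').setLevel(logging.WARNING)
logging.getLogger('my_env.env.simulation').setLevel(logging.INFO)

# 'my_env.env' has no level of its own, so it inherits WARNING from 'my_env'.
print(logging.getLogger('my_env.env').getEffectiveLevel() == logging.WARNING)
# The explicitly configured submodule keeps INFO.
print(logging.getLogger('my_env.env.simulation').getEffectiveLevel() == logging.INFO)
```

This is exactly the configuration that gets lost inside Ray workers, since they are separate processes with their own logging state.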

I saw that ray.init() allows configuring a fixed logging level, but that doesn't allow per-module configuration. Setting configure_logging=False in ray.init() also had no effect.

Is there any way to keep my flexible structlog logging with Ray? It's essential for understanding what's going on in my environment...

I guess this is related to https://github.com/ray-project/ray/issues/2953 ?

Ray version and other system information (Python version, TensorFlow version, OS):

  • Ray 0.8.5
  • Tensorflow 2.2.0
  • Python 3.8.3
  • OS: Ubuntu 20.04 on WSL (Win 10)

All 8 comments

Do I have to configure logging again inside the workers somehow? How would I do that?

Ok, I figured it out: I do have to configure logging again within the workers. To do that, I added a call to my configure_logging() function in the environment's constructor, like this:

import gym
import structlog

class MyEnv(gym.Env):
    def __init__(self, env_config):
        # other stuff
        self.log = structlog.get_logger()
        # Re-apply the logging configuration inside the worker process
        configure_logging()

It works fine now :)

Before that, I only called configure_logging() once outside the environment in my main script.
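The configure_logging() helper itself is not shown in the thread (the author's version uses structlog). As a rough stdlib-only sketch of what such a helper might do, with the gym base class stubbed out, the key point is that it is idempotent and called from the constructor so each worker process re-applies the per-module levels:

```python
import logging

def configure_logging():
    # Hypothetical stand-in for the author's structlog-based helper:
    # (re)apply per-module levels; safe to call more than once.
    logging.basicConfig(level=logging.INFO)
    logging.getLogger('my_env').setLevel(logging.WARNING)
    logging.getLogger('my_env.env.simulation').setLevel(logging.INFO)

class MyEnv:  # stands in for gym.Env
    def __init__(self, env_config=None):
        # Ray workers are separate processes, so logging configured in the
        # driver does not carry over; reconfigure here instead.
        configure_logging()
        self.log = logging.getLogger('my_env')
```

Because each worker constructs the env itself, the call runs in every worker process and restores the driver-side logging setup there.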

@stefanbschneider any idea about how can we improve the docs or your experience here?

Hm. I did search the docs but couldn't find anything related to my problem. How about mentioning it in one sentence where the docs describe how to configure custom environments with RLlib?
And possibly also mention it in the development tips regarding logging?

OK got it - thanks for the tip! I'll leave this issue open and close it later once it gets addressed in the docs.

@richardliaw If you want, I can create a PR adding it to the documentation?

Also, if there's any interest, I could add my workflow for training, saving, and testing RLlib agents to the docs: https://github.com/ray-project/ray/issues/9123
Based on the comments, it seems like this could also be useful for others.

Yeah, @stefanbschneider both would be great!
