Boto3: Loads of DEBUG messages with new s3transfer package

Created on 9 Aug 2016 · 10 Comments · Source: boto/boto3

I just updated my boto3 and it installed a new s3transfer package (0.1.1). The problem is that I now get a flood of DEBUG messages for every S3 download my app performs.

DEBUG:s3transfer.utils: Acquiring 0
DEBUG:s3transfer.tasks: DownloadSubmissionTask({'transfer_future': <s3transfer.futures.TransferFuture object at 0x10ae6a350>}) about to wait for the following futures []
DEBUG:s3transfer.tasks: DownloadSubmissionTask({'transfer_future': <s3transfer.futures.TransferFuture object at 0x10ae6a350>}) done waiting for dependent futures
DEBUG:s3transfer.tasks: Executing task DownloadSubmissionTask({'transfer_future': <s3transfer.futures.TransferFuture object at 0x10ae6a350>}) with kwargs {'io_executor': <s3transfer.futures.BoundedExecutor object at 0x10ae34f90>, 'request_executor': <s3transfer.futures.BoundedExecutor object at 0x10ae34b90>, 'osutil': <s3transfer.utils.OSUtils object at 0x10ae34950>, 'client': <botocore.client.S3 object at 0x10ae343d0>, 'transfer_future': <s3transfer.futures.TransferFuture object at 0x10ae6a350>, 'config': <boto3.s3.transfer.TransferConfig object at 0x10ae34910>}
DEBUG:s3transfer.futures: Submitting task GetObjectTask({'extra_args': {}, 'bucket': 'BUCKET', 'key': 'FILE'}) to executor <s3transfer.futures.BoundedExecutor object at 0x10ae34b90> for transfer request: 0.
DEBUG:s3transfer.utils: Acquiring 0
DEBUG:s3transfer.tasks: GetObjectTask({'extra_args': {}, 'bucket': 'BUCKET', 'key': 'FILE'}) about to wait for the following futures []
DEBUG:s3transfer.tasks: GetObjectTask({'extra_args': {}, 'bucket': 'BUCKET', 'key': 'FILE'}) done waiting for dependent futures
DEBUG:s3transfer.tasks: Executing task GetObjectTask({'extra_args': {}, 'bucket': 'BUCKET', 'key': 'FILE'}) with kwargs {'fileobj': <open file '/var/folders/kd/yx5n34z97kz10x25_5z9sq9m9qvmpm/T/tmpgrD02f.6A5DbD3C', mode 'wb' at 0x109865a50>, 'bucket': 'BUCKET', 'download_output_manager': <s3transfer.download.DownloadFilenameOutputManager object at 0x10ae90890>, 'extra_args': {}, 'callbacks': [], 'client': <botocore.client.S3 object at 0x10ae343d0>, 'key': 'FILE', 'io_chunksize': 262144, 'max_attempts': 5}
DEBUG:s3transfer.futures: Submitting task JoinFuturesTask({}) to executor <s3transfer.futures.BoundedExecutor object at 0x10ae34b90> for transfer request: 0.
DEBUG:s3transfer.utils: Acquiring 0
DEBUG:s3transfer.tasks: JoinFuturesTask({}) about to wait for the following futures [<Future at 0x10aeb9390 state=running>]
DEBUG:s3transfer.utils: Releasing acquire 0/None
DEBUG:s3transfer.tasks: JoinFuturesTask({}) about to wait for <Future at 0x10aeb9390 state=running>
DEBUG:s3transfer.futures: Submitting task IOWriteTask({'offset': 0}) to executor <s3transfer.futures.BoundedExecutor object at 0x10ae34f90> for transfer request: 0.
DEBUG:s3transfer.utils: Acquiring 0
DEBUG:s3transfer.tasks: IOWriteTask({'offset': 0}) about to wait for the following futures []
DEBUG:s3transfer.tasks: IOWriteTask({'offset': 0}) done waiting for dependent futures
DEBUG:s3transfer.tasks: Executing task IOWriteTask({'offset': 0}) with kwargs {'fileobj': <open file '/var/folders/kd/yx5n34z97kz10x25_5z9sq9m9qvmpm/T/tmpgrD02f.6A5DbD3C', mode 'wb' at 0x109865a50>, 'offset': 0}
DEBUG:s3transfer.utils: Releasing acquire 0/None
DEBUG:s3transfer.tasks: JoinFuturesTask({}) done waiting for dependent futures
DEBUG:s3transfer.utils: Releasing acquire 0/None
DEBUG:s3transfer.tasks: Executing task JoinFuturesTask({}) with kwargs {'futures_to_wait_on': [None]}
DEBUG:s3transfer.futures: Submitting task IORenameFileTask({'final_filename': '/var/folders/kd/yx5n34z97kz10x25_5z9sq9m9qvmpm/T/tmpgrD02f'}) to executor <s3transfer.futures.BoundedExecutor object at 0x10ae34f90> for transfer request: 0.
DEBUG:s3transfer.utils: Acquiring 0
DEBUG:s3transfer.utils: Releasing acquire 0/None
DEBUG:s3transfer.tasks: IORenameFileTask({'final_filename': '/var/folders/kd/yx5n34z97kz10x25_5z9sq9m9qvmpm/T/tmpgrD02f'}) about to wait for the following futures []
DEBUG:s3transfer.tasks: IORenameFileTask({'final_filename': '/var/folders/kd/yx5n34z97kz10x25_5z9sq9m9qvmpm/T/tmpgrD02f'}) done waiting for dependent futures
DEBUG:s3transfer.tasks: Executing task IORenameFileTask({'final_filename': '/var/folders/kd/yx5n34z97kz10x25_5z9sq9m9qvmpm/T/tmpgrD02f'}) with kwargs {'osutil': <s3transfer.utils.OSUtils object at 0x10ae34950>, 'final_filename': '/var/folders/kd/yx5n34z97kz10x25_5z9sq9m9qvmpm/T/tmpgrD02f', 'fileobj': <open file '/var/folders/kd/yx5n34z97kz10x25_5z9sq9m9qvmpm/T/tmpgrD02f.6A5DbD3C', mode 'wb' at 0x109865a50>}
DEBUG:s3transfer.utils: Releasing acquire 0/None

Most helpful comment

Set s3transfer to CRITICAL only.

logging.getLogger('s3transfer').setLevel(logging.CRITICAL)

All 10 comments

I do not think that we would want to remove debug logs. They come in handy if users run into issues that we are not able to reproduce. Could you exclude these logs by adding a filter that drops anything related to s3transfer?
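For anyone looking to follow this suggestion, a filter along these lines should work (a minimal sketch using only the standard library; the class name is mine, not from any boto3 API):

```python
import logging

class ExcludeS3Transfer(logging.Filter):
    """Drop any record emitted by the s3transfer logger hierarchy."""
    def filter(self, record):
        # Rejects 's3transfer' itself and children such as
        # 's3transfer.tasks' and 's3transfer.utils'.
        return not record.name.startswith('s3transfer')

# Attach the filter to the handler that is printing the noise,
# e.g. a console handler on the root logger:
handler = logging.StreamHandler()
handler.addFilter(ExcludeS3Transfer())
logging.getLogger().addHandler(handler)
```

Note that filters attached to a handler apply only to that handler, so other handlers (e.g. a syslog handler) would still receive the s3transfer records.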

I could exclude them, but I use a ton of boto3 calls and have never seen this kind of debug message from any other part of the library.

Do you currently see logs for botocore as well? I feel that botocore has more debug logs than s3transfer and it is used for every boto3 call.

In general, boto3 tends not to log much itself because it is just a wrapper over other libraries that do most of the heavy lifting, such as botocore and s3transfer, which need much more debug logging to help understand what may be happening.

I don't get any botocore debug messages.

Interesting. How are you enabling debug logging in your application, then? Are you using boto3.set_stream_logger() or something else? If you use boto3.set_stream_logger() you will only get debug logs from boto3 and none of the underlying libraries.
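To illustrate the point about set_stream_logger(): attaching a handler to one logger name only captures that logger's hierarchy, not sibling hierarchies like s3transfer. A stdlib-only sketch of the same mechanism (not the actual boto3 helper):

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setLevel(logging.DEBUG)

# Attach a DEBUG handler to the 'boto3' logger only -- roughly what
# boto3.set_stream_logger('boto3') does under the hood.
logging.getLogger('boto3').setLevel(logging.DEBUG)
logging.getLogger('boto3').addHandler(handler)

logging.getLogger('boto3.resources').debug('visible')    # child of 'boto3'
logging.getLogger('s3transfer.tasks').debug('invisible')  # separate hierarchy

# stream now contains 'visible' but not 'invisible'.
```

This is why a root-level DEBUG handler (as in the config below) behaves differently: every library's records propagate up to the root logger.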

I'm not explicitly calling any logging methods on boto3. All I typically do is call boto3.client(...) for services.

So how are you enabling logging in your application then?

During app create:
logging.config.dictConfig(json.load(open('config/logging.json')))

with logging.json:

{
    "version": 1,
    "formatters": {
        "simple": {
            "format": "%(levelname)s:%(name)s: %(message)s"
        },
        "line": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        },
        "json": {
            "format": "{\"loggerName\":\"%(name)s\", \"asciTime\":\"%(asctime)s\", \"pathName\":\"%(pathname)s\", \"logRecordCreationTime\":\"%(created)f\", \"functionName\":\"%(funcName)s\", \"levelNo\":\"%(levelno)s\", \"lineNo\":\"%(lineno)d\", \"time\":\"%(msecs)d\", \"levelName\":\"%(levelname)s\", \"message\":\"%(message)s\"}"
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simple",
            "stream": "ext://sys.stdout"
        },
        "syslog": {
            "class": "logging.handlers.SysLogHandler",
            "level": "INFO",
            "formatter": "json"
        }
    },
    "loggers": {
        "standard": {
            "level": "INFO",
            "handlers": [
                "syslog"
            ],
            "propagate": 1
        }
    },
    "root": {
        "level": "DEBUG",
        "handlers": [
            "console"
        ]
    }
}
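Given a config like the one above, the root logger at DEBUG picks up everything that propagates to it, including s3transfer. One way to keep the root at DEBUG while quieting s3transfer is to add a dedicated logger entry for it in the dictConfig. A trimmed sketch (not the poster's actual config):

```python
import logging
import logging.config

logging.config.dictConfig({
    "version": 1,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "DEBUG"}
    },
    "loggers": {
        # Raise the threshold for the s3transfer hierarchy only;
        # everything else still reaches the root handler at DEBUG.
        "s3transfer": {"level": "WARNING"}
    },
    "root": {"level": "DEBUG", "handlers": ["console"]}
})
```

Child loggers such as s3transfer.tasks inherit the WARNING level from the s3transfer entry, so no per-module entries are needed.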

Ahh, I see what is happening. We need to add a default null handler in s3transfer. I sent a PR to fix this in the s3transfer library.
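For context, the null-handler convention mentioned here is the standard pattern from the Python logging documentation for library authors; this is a sketch of the idea, not the contents of the actual PR:

```python
import logging

# In a library's package __init__, attach a NullHandler to the
# library's top-level logger. This ensures the library's records are
# handled gracefully (and, on Python 2, avoids the "No handlers could
# be found for logger ..." warning) when the application has not
# configured logging for that logger.
logging.getLogger('s3transfer').addHandler(logging.NullHandler())
```

Applications remain free to configure the logger however they like; the NullHandler simply does nothing with records passed to it.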
