Flair: Integration with Weights & Biases (wandb)

Created on 26 Mar 2020 · 15 comments · Source: flairNLP/flair


Integration with Weights & Biases (wandb) would help with research. For example, it is implemented in https://github.com/ThilinaRajapakse/simpletransformers.

All 15 comments

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

I'm also interested in that - I had good experiences when using it with Transformers :)

Hey, I'm an ML engineer at Weights and Biases. I'd love to build an integration if the maintainers are interested.

Hey @AyushExel, I think this could be great. Do you need any assistance from the maintainers or just permission? I don't think @alanakbik would have any opposition to allowing this.

Hi @AyushExel, oops, I should have responded earlier - an integration would be great! Is there anything you need from our side for this?

Hey @bclavie @alanakbik I was mostly looking for permission and validation to build an integration. I'd also love to know if you have any thoughts on what you'd like the integration to be like as there are many possible levels of integration:

  • Metric streaming -> capture metrics, hyperparameters, and experiment media to make experiments reproducible.
  • Model and dataset versioning -> along with metrics, wandb can also be used to automatically version your models, checkpoints, and datasets.

These are some of the options, among other features, and they can be used together or individually depending on the level of integration you'd prefer. For example, see this deep-integration use-case walkthrough that I did for YOLOv5. But for starting out, I'd say we aim at building a very simple metric-logging integration (v1), and then, based on user feedback, we can introduce other features like model tracking, resuming crashed runs across devices, etc.
Let me know if that sounds interesting :)
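A minimal "v1" metric-streaming hook along the lines described above might look like the sketch below. The `WandbMetricLogger` class, the `flair-demo` project name, and the dummy metrics are illustrative assumptions, not Flair or wandb API; the `wandb.init` / `wandb.log` calls themselves are the standard wandb interface, and the fallback branch only lets the sketch run when wandb is not installed:

```python
try:
    import wandb
except ImportError:
    wandb = None  # sketch still runs without wandb installed


class WandbMetricLogger:
    """Hypothetical helper: streams per-epoch metrics to W&B and keeps a local copy."""

    def __init__(self, project, config=None, mode="disabled"):
        # mode="disabled" makes wandb.init a no-op; use "online" for real runs
        self.history = []
        self.run = wandb.init(project=project, config=config or {}, mode=mode) if wandb else None

    def log(self, metrics, step=None):
        self.history.append(dict(metrics))  # local record, useful for tests/offline runs
        if self.run is not None:
            wandb.log(metrics, step=step)

    def finish(self):
        if self.run is not None:
            self.run.finish()


# Usage: log dummy loss/F1 values for two "epochs"
logger = WandbMetricLogger(project="flair-demo", config={"learning_rate": 0.1})
for epoch in range(2):
    logger.log({"epoch": epoch, "train_loss": 1.0 / (epoch + 1), "dev_f1": 0.80 + 0.05 * epoch})
logger.finish()
```

In a real integration, `log` would be called from the trainer's per-epoch hook with the actual train/dev scores, and `mode="online"` would stream them to the wandb dashboard.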

This definitely sounds very interesting, so we'd very much appreciate it if this could be integrated!

Can someone tell me what formatting setup is recommended? I'm using black and it's making a lot of unrelated edits.

We actually stopped using black - right now it's anarchy but I think most people use the auto-formatting of their IDEs like PyCharm.

@alanakbik thanks. I've got metric, hyperparameter, and model checkpoint logging and versioning working on my branch. See an example run here.
I wanted to discuss a few things, including how we might also track/log and version datasets visually. Would you be up for a quick meeting? My colleague might've sent you an email about that. Just let us know what time works best for you. Just to confirm, is this your email address -> alan [dot] akbik [ät] hu-berlin [dot] de ?
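The checkpoint versioning mentioned above typically goes through wandb Artifacts, which attach files (model weights, datasets) to a run so each version is tracked. A hedged sketch, assuming a helper named `version_checkpoint` and a `flair-demo` project (both illustrative, not from the branch); `wandb.Artifact` and `run.log_artifact` are the real wandb API:

```python
import os
import tempfile

try:
    import wandb
except ImportError:
    wandb = None  # sketch still runs without wandb installed


def version_checkpoint(path, name="flair-model"):
    """Log the file at `path` as a versioned model artifact; returns the artifact name."""
    if wandb is not None:
        # mode="disabled" keeps this sketch offline; use "online" for real runs
        run = wandb.init(project="flair-demo", mode="disabled")
        artifact = wandb.Artifact(name, type="model")
        artifact.add_file(path)
        run.log_artifact(artifact)
        run.finish()
    return name


# Usage with a stand-in checkpoint file:
with tempfile.NamedTemporaryFile(suffix=".pt", delete=False) as f:
    f.write(b"fake-weights")
versioned = version_checkpoint(f.name)
os.unlink(f.name)
```

Each `log_artifact` call with the same name creates a new version (v0, v1, ...) in the wandb UI, which is what makes checkpoints reproducible across runs.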

Please also log the other metrics, like those for TensorBoard (self.metrics_for_tensorboard) - oh, they are probably logged but just not enabled in the run.

@djstrong I'm running this example: https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_7_TRAINING_A_MODEL.md
Yes, it should log everything that can be logged, including TensorBoard metrics. I think this example just has fewer metrics. If you have any other examples that log multiple metrics, including TB metrics, please let me know and I'll use them for my test cases.
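Nested evaluation results (per-class precision/recall/F1, as in a typical classification report) need to be flattened into a dict of scalars before being passed to `wandb.log`. A small sketch of such a bridge; the `flatten_metrics` helper and the sample report shape are assumptions, not Flair code:

```python
def flatten_metrics(nested, sep="/"):
    """Flatten one level of nesting: {"micro avg": {"f1-score": 0.9}} -> {"micro avg/f1-score": 0.9}."""
    flat = {}
    for key, value in nested.items():
        if isinstance(value, dict):
            for sub_key, sub_value in value.items():
                flat[f"{key}{sep}{sub_key}"] = sub_value
        else:
            flat[key] = value
    return flat


# Usage with a report shaped like a typical per-class evaluation:
report = {
    "micro avg": {"precision": 0.91, "recall": 0.89, "f1-score": 0.90},
    "loss": 0.35,
}
flat = flatten_metrics(report)
# `flat` is now ready to be passed to wandb.log(flat)
```

Using a `/` separator also makes wandb group the related metrics into one panel section in the dashboard.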

@AyushExel sure, let's sync up - I haven't received an email yet, but feel free to ping me via email or LinkedIn!
