TaBERT is a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations for utterances and table schemas (columns).
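For anyone who wants to experiment in the meantime, encoding an utterance/table pair with the authors' `table_bert` package looks roughly like the sketch below, adapted from the example in their README. The checkpoint path and table values are placeholders, not something shipped with the package:

```python
# Sketch based on the usage example in the facebookresearch/TaBERT README.
# The checkpoint path below is a placeholder for a downloaded checkpoint.
from table_bert import TableBertModel, Table, Column

model = TableBertModel.from_pretrained('/path/to/tabert_base_k3/model.bin')

# A table is described by its columns (with types and sample values)
# plus a few content rows, then tokenized with the model's own tokenizer.
table = Table(
    id='List of countries by GDP (PPP)',
    header=[
        Column('Nation', 'text', sample_value='United States'),
        Column('Gross Domestic Product', 'real', sample_value='21,439,453'),
    ],
    data=[
        ['United States', '21,439,453'],
        ['China', '27,308,857'],
    ],
).tokenize(model.tokenizer)

# A natural language utterance grounded in the table.
context = 'show me countries ranked by GDP'

# Encode the utterance and table together; TaBERT returns representations
# for the context tokens and for each column of the schema.
context_encoding, column_encoding, info_dict = model.encode(
    contexts=[model.tokenizer.tokenize(context)],
    tables=[table],
)
```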
Their README says the implementation for this project is still a WIP, but it has said that since release.
It should be fairly easy to add it here, since they are already using this library in their implementation.
Does the Hugging Face team have more information about this? Can the community open a PR for it, or should we wait for the original authors?
Looking forward to this new model.
Hello all!
I looked at the list of models on the Transformers site and TaBERT is still not listed. Does anyone know when it will be ready?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.