Tfjs: Support exporting model weights from browser

Created on 21 Mar 2018 · 27 comments · Source: tensorflow/tfjs

layers feature

Most helpful comment

We're working on the design!

All 27 comments

+1

No +1's please

+1

+1

Will this also include exporting without the browser? I would like to be able to export a trained model to a file from node (after running model.fit(...)).

This issue doesn't specifically cover that, but we do plan to support checkpointing to TensorFlow SavedModels in Node.js.

any updates on this?

We're working on the design!

Looks like some progress in the last few days... You guys rock, looking forward to this! Is there an ETA? :)

@broggi, thanks for checking.

For basic features such as saving tf.Models (Keras-style models) to browser local storage or as downloaded files, we are looking at the next couple of weeks as the ETA. For more advanced features, such as sending models to HTTP servers, the ETA will be longer. For saving non-Keras-style models, e.g., FrozenModels loaded from converted TensorFlow SavedModels, the ETA will be even longer.

@caisq so erm... how is the progress going?

@PheoOneWhoMade Apart from the comments made in this issue, you can also look at the linked pull requests to see the progress as it happens.

@caisq - Please please forgive this possibly dumb question - but if you can write to localstorage, why would letting me send the model to my server take longer? I know next to nothing (sorry!) about the internals of tf, but in my coding experience, serializing the data to a buffer is the hard part - where you write it is the easy part. If you just hand me the buffer you would put in localstorage, I'll take care of the bog-standard ajax transmission to my server - at least it seems that simple to me haha! Sorry for possibly stating the obvious - and that may be exactly what you're doing, I just was confused when you said "local storage = quick, upload to server = long term".

@josiahbryan Good question. The hard part, namely serializing the topology and weights of the model, will be done by tf.Model.save(). The client just needs to specify where and how to store the serialization artifacts, e.g., tf.Model.save('localstorage://fooModel'); or tf.Model.save('https://my.model.server/'). The client can also implement a custom IOHandler class to perform custom actions on the artifacts. The artifacts are the same regardless of how and where you save the data. I implied that the HTTP route is a little more complex than routes like local storage and IndexedDB because HTTP involves a server, and we as authors of the library need to provide some examples of how to write the server. By comparison, saving to local storage and IndexedDB happens purely on the client side, so those routes are simpler in that sense.
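To make the IOHandler idea concrete: it is essentially an object with `save` and/or `load` methods that receive or return the serialized artifacts. Here is a minimal in-memory sketch in plain JavaScript; the artifact field names follow the shapes discussed in this thread and are illustrative, not the exact tfjs interface.

```javascript
// A minimal in-memory IOHandler sketch. The artifact field names
// (modelTopology, weightSpecs, weightData) are illustrative of the
// shape discussed in this thread, not the definitive tfjs API.
function createMemoryIOHandler() {
  let stored = null;
  return {
    // Would be called by model.save(handler) with serialized artifacts.
    async save(artifacts) {
      stored = {
        modelTopology: artifacts.modelTopology,
        weightSpecs: artifacts.weightSpecs,
        weightData: artifacts.weightData, // ArrayBuffer of raw weights
      };
      return {modelArtifactsInfo: {dateSaved: new Date()}};
    },
    // Would be called by the model loader; returns the artifacts as-is.
    async load() {
      if (stored === null) throw new Error('Nothing saved yet');
      return stored;
    },
  };
}
```

The same artifacts object could just as easily be POSTed to a server inside `save`, which is the point made above: the serialization is identical regardless of destination.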

@caisq The FormData-based browserHTTPRequest is interesting. It should be an efficient way of handling the browser weights as binary data.

Is having a JSON serialization format also being considered?

For example:

{
    "model": MODEL_DATA_HERE,
    "model.weights.bin.base64": MODEL_WEIGHTS_AS_BASE64_ENCODED_STRING
} 

I think it's generally a lot easier to push around JSON (despite the inefficiencies of base64 encoding) and then migrate to a more performant approach when needing to optimize.
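For concreteness, the proposed JSON shape could be produced and consumed like this (a plain Node.js sketch of the suggestion above, not a tfjs API; field names mirror the example):

```javascript
// Pack model topology plus raw weight bytes into the single-JSON
// format proposed above. Base64 adds roughly 33% size overhead.
function weightsToJson(modelTopology, weightData /* ArrayBuffer */) {
  const b64 = Buffer.from(weightData).toString('base64');
  return JSON.stringify({
    'model': modelTopology,
    'model.weights.bin.base64': b64,
  });
}

function weightsFromJson(json) {
  const obj = JSON.parse(json);
  const buf = Buffer.from(obj['model.weights.bin.base64'], 'base64');
  return {
    modelTopology: obj['model'],
    // Copy into a standalone ArrayBuffer, detached from Buffer pooling.
    weightData: buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength),
  };
}
```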

@rajsite Base64 encoding has some overhead. So we'll focus on the more efficient ArrayBuffer approach for now.

I'm just curious: how do you plan to export or save your models? We are going to support the most commonly used saving routes (browser local storage, IndexedDB, file downloads, and HTTP requests) and hide the detailed format under the hood. I just want to learn about your use case in case it is not covered by the above-mentioned routes.

@caisq thanks for considering adding more export/save options.
In my use case, the models are created from custom structures; I would need to save the weights per layer when training, then later load the weights when creating each layer manually for predictions.
Is there some API for serializing/deserializing the weights, or do I need to just do it manually?
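Pending official support, a manual approach could flatten each layer's named weights into a JSON spec plus one concatenated binary buffer, roughly in the spirit of the weight manifests discussed in this thread. The sketch below is plain Node.js with illustrative names, not a tfjs API:

```javascript
// Serialize named weight tensors (plain Float32Arrays here) into a
// spec list plus one concatenated byte buffer, and read them back.
// Illustrative only; tfjs's actual weight manifest format may differ.
function serializeWeights(entries /* [{name, shape, values: Float32Array}] */) {
  const specs = [];
  const chunks = [];
  for (const {name, shape, values} of entries) {
    specs.push({name, shape, dtype: 'float32'});
    chunks.push(Buffer.from(values.buffer, values.byteOffset, values.byteLength));
  }
  return {specs, data: Buffer.concat(chunks)};
}

function deserializeWeights({specs, data}) {
  const out = {};
  let offset = 0;
  for (const {name, shape} of specs) {
    const n = shape.reduce((a, b) => a * b, 1);
    // Copy the slice so the resulting ArrayBuffer is 4-byte aligned.
    const bytes = data.buffer.slice(
        data.byteOffset + offset, data.byteOffset + offset + n * 4);
    out[name] = new Float32Array(bytes);
    offset += n * 4; // float32 = 4 bytes per element
  }
  return out;
}
```

Saving the spec as JSON and the buffer as a binary blob keeps the two halves independent, which also makes it easy to load weights into individually constructed layers later.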

@caisq Saving to local storage before the user persists the model to the server over HTTP handles one of my use cases.

The HTTP implementation in your fork would almost do it, but I would need to tweak some CORS configuration for the fetch. It might be more flexible to take a fetch configuration object as a parameter and replace the body (then you also wouldn't need to take a method manually).

Alternatively, for my specific use case, exposing the credentials property would probably be sufficient.

@rajsite I can make it possible to configure additional fields of the RequestInit used with the fetch, including cache, credentials, headers and mode. How does that sound?
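Allowing callers to pass those RequestInit fields through might look like the following (a plain-JavaScript sketch under the assumption that the handler keeps control of method and body; not the actual browserHTTPRequest implementation):

```javascript
// Merge caller-supplied RequestInit fields into the fetch options,
// while keeping method and body under the handler's control.
// Hypothetical sketch, not the final tfjs API.
function buildFetchOptions(body, userRequestInit = {}) {
  const {cache, credentials, headers, mode} = userRequestInit;
  return {
    // Only copy the whitelisted fields the caller actually set.
    ...(cache !== undefined && {cache}),
    ...(credentials !== undefined && {credentials}),
    ...(headers !== undefined && {headers}),
    ...(mode !== undefined && {mode}),
    method: 'POST', // always set by the handler, not the caller
    body,
  };
}
```

Whitelisting the fields (rather than spreading the whole user object) is what prevents a caller from accidentally overriding the method or body that the handler must control.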

@atanasster I see. Interesting use case. We're going to support saving weights from a subset of a model's layers and loading weights into a subset of a model's layers. But it may not come out in the initial release with the save/export features.

@caisq those should handle my current usage 👍
Also, being able to make my own IOHandler sounds like a good backup.

@caisq , thanks looking forward to it. If you don't mind, can I send you my manual serialization link on GitHub for some feedback?

@caisq I am trying to load a Keras model through Node.js on the backend. I see that an IOHandler interface is present and is used by the loadModel function. Is it currently possible to implement such an interface for use with Node.js? Should I give it a try, or are other parts still missing for this to work? Is this functionality perhaps going to be implemented in the tfjs-node repo?

We're going to implement IOHandlers for writing to disk for tfjs-node very soon!

Update:
The model importing and exporting feature for the browser environment is launched with @tensorflow/tfjs release 0.11.1. Please see tutorial at:
https://js.tensorflow.org/tutorials/model-save-load.html

Please use the new API and let us know what you think. If you find bugs or have enhancement requests, please file GitHub issues.

I will leave this issue open for now because we plan to modify a few examples in https://github.com/tensorflow/tfjs-examples so they can save/load trained or fine-tuned models locally.

The saving/loading support for TensorFlow.js in Node.js will be released later.

You can play with the runnable code snippets on our website to get a feeling of the new API:
https://js.tensorflow.org/api/latest/#loadModel

Now that a concrete example has been added to the Iris example in tfjs-examples, I will close this issue. I filed a separate issue to track the support of model saving and loading in Node.js:
https://github.com/tensorflow/tfjs/issues/343
