Keras: TensorBoard

Created on 2 Dec 2015 · 4 Comments · Source: keras-team/keras

As most TensorFlow users know, one of the most powerful features it packs is its amazing visualization tooling. Does Keras support TensorBoard? Or should we add it to the "implement" list?

stale

Most helpful comment

Hey @AntreasAntoniou, you can implement some visualizations using callbacks. I'm still working on it, but you can easily extend this:

import tensorflow as tf

from keras.callbacks import Callback
import keras.backend.tensorflow_backend as tfbe

class TensorBoardViz(Callback):
    def __init__(self, model, feed, freq=2, log_file="./logs"):
        super(TensorBoardViz, self).__init__()
        self.model = model
        self.freq = freq
        self.log_file = log_file
        self.sess = tfbe._get_session()
        self.feed = feed
        self.w_hists = []
        self.b_hists = []
        self.out_hists = []
        # Attach a histogram summary to every node that exposes
        # weights, biases, or an output tensor.
        for node in self.model.nodes:
            cur_node = self.model.nodes[node]
            if hasattr(cur_node, "W"):
                self.w_hists.append(tf.histogram_summary("{}_W".format(node), cur_node.W))
            if hasattr(cur_node, "b"):
                self.b_hists.append(tf.histogram_summary("{}_b".format(node), cur_node.b))
            if hasattr(cur_node, "get_output"):
                self.out_hists.append(tf.histogram_summary("{}_out".format(node), cur_node.get_output()))
        self.merged = tf.merge_all_summaries()
        self.writer = tf.train.SummaryWriter(self.log_file, self.sess.graph_def)

    def on_epoch_end(self, epoch, logs=None):
        # Evaluate the merged summaries every `freq` epochs and write them out.
        if epoch % self.freq == 0:
            summary_str = self.sess.run(self.merged, feed_dict=self.feed)
            self.writer.add_summary(summary_str, epoch)

You then just have to run TensorBoard pointed at the log directory.
You should see your graph and histograms of the parameters.
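For reference, launching TensorBoard against the callback's log directory would look something like this (the `./logs` path here just matches the `log_file` default above):

```shell
# Point TensorBoard at the directory the callback writes summaries to,
# then open http://localhost:6006 in a browser.
tensorboard --logdir=./logs
```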

I tested it with a simple convnet and it seems to work well.

I wonder whether it's possible to avoid passing the model into the callback, and also to access the loss and/or accuracy easily.

Does someone have a solution to monitor the loss and other metrics?

All 4 comments

Does someone have a solution to monitor the loss and other metrics?

Yes, the loss is simply the output of a TensorFlow operation (e.g. model._train, model._test). You should be able to add summaries of these ops, which would then give you plots of the evolution of the train and test loss.

Other metrics besides loss and accuracy would need their own node in the graph, so they would be harder to support for the time being.
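Since Keras already hands the loss (and accuracy, when tracked) to callbacks through the `logs` dict in `on_epoch_end`, a lightweight way to monitor them is just to record those values. A minimal sketch, using a plain class that mimics the `keras.callbacks.Callback` interface so it stays self-contained (`"loss"` and `"acc"` are the keys Keras populates):

```python
class LossHistory:
    """Sketch of a callback that records per-epoch metrics from `logs`.

    Mimics the keras.callbacks.Callback interface for illustration only;
    in practice you would subclass keras.callbacks.Callback and pass an
    instance via model.fit(..., callbacks=[history]).
    """

    def __init__(self):
        self.losses = []
        self.accuracies = []

    def on_epoch_end(self, epoch, logs=None):
        # Keras passes a dict like {"loss": 0.5, "acc": 0.9} here.
        logs = logs or {}
        if "loss" in logs:
            self.losses.append(logs["loss"])
        if "acc" in logs:
            self.accuracies.append(logs["acc"])
```

The recorded lists could then be fed into summary ops (or plotted directly) without touching the graph internals.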

@tboquet Would you like to submit a PR for a TensorBoard callback?

@fchollet yep, sure! I will try to submit it by the end of the day. Even if it's not perfect, it will be possible to build on the version I currently have.

@tboquet hi, what is feed? Is it (trainX, trainY)?

