
1 TensorBoard: Debug, monitor, and examine machine learning models.
Chi Zeng, Software Engineer, Alphabet

2 Let's train a simple model and use TensorBoard.
A convolutional layer followed by a series of fully connected layers.

3 Code a convolutional layer.
W_conv = weight_variable([5, 5, 1, 12])
b_conv = bias_variable([12])
conv = tf.nn.conv2d(
    image_shaped_input, W_conv,
    strides=[1, 1, 1, 1], padding='SAME')
h = tf.nn.relu(conv + b_conv)
pool = tf.nn.max_pool(h, ksize=[1, 2, 2, 1],
                      strides=[1, 2, 2, 1],
                      padding='SAME')
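As an aside, with 'SAME' padding the output spatial size of a conv or pooling op is simply the input size divided by the stride, rounded up. A quick pure-Python check (a sketch, independent of TensorFlow):

```python
import math

def same_padding_output_size(input_size, stride):
    """Output spatial size for 'SAME' padding: ceil(input / stride)."""
    return math.ceil(input_size / stride)

# A 28x28 MNIST image through the conv (stride 1), then the 2x2 max pool (stride 2):
conv_out = same_padding_output_size(28, 1)        # 28
pool_out = same_padding_output_size(conv_out, 2)  # 14
print(conv_out, pool_out)
```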

4 Code a fully connected layer.
weights = weight_variable([input_dim, output_dim])
variable_summaries(weights)
biases = bias_variable([output_dim])
variable_summaries(biases)
preactivate = tf.matmul(input_tensor, weights) + biases
activations = act(preactivate)
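Stripped of TensorFlow, a fully connected layer is just a matrix multiply, a bias add, and an activation. A minimal NumPy sketch (the names here are chosen for illustration, not taken from the slide's code):

```python
import numpy as np

def dense(input_tensor, weights, biases, act=np.tanh):
    """Fully connected layer: act(x @ W + b)."""
    preactivate = input_tensor @ weights + biases
    return act(preactivate)

x = np.ones((1, 3))        # batch of 1, input_dim = 3
W = np.zeros((3, 2))       # input_dim x output_dim
b = np.array([0.0, 1.0])
print(dense(x, W, b))      # tanh of [0, 1] for the single row
```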

5 Write the TensorFlow graph to disk.
main_writer = tf.summary.FileWriter(FLAGS.log_dir, sess.graph)
Result:

6 [image slide: the resulting graph visualization]

7 Use name scopes to organize subgraphs.
with tf.name_scope('conv_layer'):
    ...
with tf.name_scope('layer1'):
    ...
Result: Much cleaner.
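Conceptually, a name scope just prepends a slash-separated prefix to the name of every op created inside it, which is what lets TensorBoard collapse each scope into a single graph node. A toy pure-Python imitation of that naming rule (not TensorFlow's implementation):

```python
import contextlib

_scopes = []

@contextlib.contextmanager
def name_scope(name):
    """Toy imitation of tf.name_scope: push a name prefix for the block."""
    _scopes.append(name)
    try:
        yield
    finally:
        _scopes.pop()

def full_name(op_name):
    """The fully qualified name an op would get inside the open scopes."""
    return '/'.join(_scopes + [op_name])

with name_scope('conv_layer'):
    print(full_name('weights'))   # conv_layer/weights
    with name_scope('pool'):
        print(full_name('max'))   # conv_layer/pool/max
```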

8 [image slide: the graph grouped by name scopes]

9 Use summary ops to log data to TensorBoard.
tf.summary.scalar
tf.summary.image
tf.summary.audio
tf.summary.histogram
and so on…
Data is saved to a logs directory and read by TensorBoard.

10 [image slide: summary data rendered in TensorBoard]

11 We initialize weights with non-zero values.
# We can't initialize these variables to 0.
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
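tf.truncated_normal draws from a normal distribution but redraws any sample that lands more than two standard deviations from the mean. A small pure-Python sketch of that rejection rule (an illustration, not TensorFlow's code):

```python
import random

def truncated_normal(n, stddev=0.1, mean=0.0):
    """Draw n samples, redrawing any outside 2 standard deviations."""
    samples = []
    while len(samples) < n:
        x = random.gauss(mean, stddev)
        if abs(x - mean) <= 2 * stddev:
            samples.append(x)
    return samples

values = truncated_normal(1000, stddev=0.1)
print(max(abs(v) for v in values))  # never exceeds 0.2
```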

12 [image slide]

13 [image slide]

14 We resolve the NaNs by using softmax.
probabilities = tf.nn.softmax(y)
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(one_hot_labels * tf.log(probabilities)))
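The reason tf.nn.softmax helps here is numerical stability: naively exponentiating large logits overflows to inf, producing NaNs downstream. The standard trick is to subtract the maximum logit before exponentiating, which leaves the result unchanged. A NumPy sketch of that trick (an illustration, not TensorFlow's internal code):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

# np.exp(1000.0) alone overflows to inf, but the shifted version is fine:
probs = softmax(np.array([1000.0, 1000.0]))
print(probs)  # [0.5 0.5]
```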

15 Collect performance metadata.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
summary, _ = sess.run(
    [merged_train_summaries, train_step],
    feed_dict=feed_dict(tf.contrib.learn.ModeKeys.TRAIN),
    options=run_options,
    run_metadata=run_metadata)

16 [image slide]

17 Optimize learning rate and dropout.
# Various learning rates to try.
learning_rates = [1e-2, 1e-3, 1e-4]
# The probability of keeping a unit during dropout.
dropouts = [0.6, 0.8, 1.0]
for learning_rate in learning_rates:
    for dropout in dropouts:
        run_name = 'run_{:.2e}_{:.2e}'.format(learning_rate, dropout)
        logdir = os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                              'tensorflow/mnist/logs/', run_name)
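The nested loops above are a plain grid search; the same grid can be generated with itertools.product. The distinct run names matter because each run gets its own log directory, which is what lets TensorBoard overlay the runs for comparison. A sketch:

```python
import itertools

learning_rates = [1e-2, 1e-3, 1e-4]
dropouts = [0.6, 0.8, 1.0]

# One run name (and hence one log directory) per hyperparameter combination.
run_names = [
    'run_{:.2e}_{:.2e}'.format(lr, dp)
    for lr, dp in itertools.product(learning_rates, dropouts)
]
print(len(run_names))  # 9 runs
print(run_names[0])    # run_1.00e-02_6.00e-01
```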

18 [image slide]

19 [image slide]

20 TensorBoard is a team effort.
Justine Tunney, Tech Lead
Nick Felt, Core Team
Dandelion Mané, Original Creator
Shanqing Cai, Author of Debugger Dashboard
Some other key contributors are not listed.

21 Thank you.

