Artificial Intelligence · 14 min read

Introduction to TensorFlow: Graphs, Sessions, Variables, Placeholders, and MNIST Handwritten Digit Recognition

This tutorial provides a concise, Python-based introduction to TensorFlow, covering core concepts such as computation graphs, sessions, data structures, variables, placeholders, and feed_dict, and walking through a complete MNIST handwritten digit classification example with code snippets.

Qunar Tech Salon

Deep learning is now pervasive in fields like image recognition, speech, and translation; this tutorial offers a brief, Python-focused introduction to the most popular open-source deep-learning framework, TensorFlow, without attempting to be an exhaustive cookbook.

TensorFlow, released by Google under the Apache 2.0 license in November 2015, has become extremely popular, as shown by query‑frequency trends.

TensorFlow is chosen for its easy‑to‑use Python API, ability to run on single or multiple CPUs/GPUs across platforms (Android, Windows, Linux, etc.), built‑in visualization with TensorBoard, checkpointing for experiment management, automatic differentiation via graph computation, and a large community.

For rapid prototyping, high-level wrappers such as tf.contrib.learn, tf.contrib.slim, Keras, and tflearn (see its GitHub examples) can be used instead of the low-level APIs.

Installation is straightforward; the official TensorFlow documentation provides step‑by‑step OS setup instructions.

TensorFlow basics: building a computation graph and executing it with a session. Example:

import tensorflow as tf
graph = tf.Graph()
with graph.as_default():
    foo = tf.Variable(3, name='foo')
    bar = tf.Variable(2, name='bar')
    result = foo + bar
    initialize = tf.global_variables_initializer()

Printing the tensor without a session shows only its symbolic name:

print(result)  # Tensor("add:0", shape=(), dtype=int32)

Running the graph in a session performs the actual computation:

with tf.Session(graph=graph) as sess:
    sess.run(initialize)
    res = sess.run(result)
    print(res)  # 5

Data structures: TensorFlow tensors have a rank (the number of dimensions), a shape (the size of each dimension), and a data type (e.g., DT_FLOAT for 32-bit floats).
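
Since rank, shape, and dtype map one-to-one onto NumPy array attributes (DT_FLOAT corresponds to np.float32), a quick NumPy sketch illustrates all three:

```python
import numpy as np

# Rank, shape, and dtype illustrated with NumPy arrays, which map
# directly onto TensorFlow tensors (np.float32 corresponds to DT_FLOAT).
scalar = np.array(3.0, dtype=np.float32)         # rank 0, shape ()
vector = np.array([1.0, 2.0], dtype=np.float32)  # rank 1, shape (2,)
matrix = np.zeros((784, 200), dtype=np.float32)  # rank 2, shape (784, 200)

print(matrix.ndim)   # rank: 2
print(matrix.shape)  # shape: (784, 200)
print(matrix.dtype)  # dtype: float32
```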

Variables store mutable parameters. They must be initialized before use, typically with tf.global_variables_initializer(). Example:

weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights")
biases = tf.Variable(tf.zeros([200]), name="biases")

Variables differ from constants: a constant's value is stored in the graph definition itself and is replicated wherever the graph is loaded, while a variable's value lives in the session and can be saved and restored independently. Printing the graph definition shows the embedded constant:

const = tf.constant(1.0, name="constant")
print(tf.get_default_graph().as_graph_def())

Placeholders and feed_dict allow feeding external data at runtime. A missing feed raises an InvalidArgumentError:

foo = tf.placeholder(tf.int32, shape=[1], name='foo')
bar = tf.constant(2, name='bar')
result = foo + bar
with tf.Session() as sess:
    print(sess.run(result))  # raises InvalidArgumentError: foo was never fed

Providing a feed resolves the error:

with tf.Session() as sess:
    print(sess.run(result, {foo: [3]}))  # [5]

MNIST handwritten digit example: after downloading the dataset, placeholders for inputs x (shape [None, 784]) and labels y_ (shape [None, 10]) are defined, along with weight and bias variables.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Download the dataset (if needed) and load it with one-hot labels.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

The model computes a softmax over the linear transformation:

y = tf.nn.softmax(tf.matmul(x, W) + b)
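
As a sanity check, here is a NumPy sketch of the same step (shapes and the zero initialization follow the tutorial; subtracting the row maximum is the standard numerically stable way to evaluate softmax and does not change the result):

```python
import numpy as np

# NumPy version of the model step: softmax over x @ W + b.
# Shapes match the tutorial: x is (batch, 784), W is (784, 10), b is (10,).
rng = np.random.default_rng(0)
x = rng.random((2, 784)).astype(np.float32)
W = np.zeros((784, 10), dtype=np.float32)
b = np.zeros(10, dtype=np.float32)

logits = x @ W + b
# Stable softmax: shift by the row max before exponentiating.
shifted = logits - logits.max(axis=1, keepdims=True)
y = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

print(y.shape)        # (2, 10)
print(y.sum(axis=1))  # each row sums to 1
```

With zero-initialized W and b the logits are all zero, so each row of y is the uniform distribution (0.1 per class); training moves the parameters away from this starting point.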

The loss uses mean cross‑entropy, and a gradient‑descent optimizer with learning rate 0.5 updates the parameters:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.global_variables_initializer()
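
For intuition, the same loss can be computed by hand in NumPy on a toy batch of two examples (the labels and predictions below are made up for illustration):

```python
import numpy as np

# Mean cross-entropy as defined above: mean over the batch of
# -sum(y_ * log(y)) taken along the class axis.
y_true = np.array([[0, 1, 0], [1, 0, 0]], dtype=np.float32)  # one-hot labels
y_pred = np.array([[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]], dtype=np.float32)

cross_entropy = np.mean(-np.sum(y_true * np.log(y_pred), axis=1))
print(cross_entropy)  # averages -log(0.8) and -log(0.6), roughly 0.367
```

Because the one-hot label zeroes out every term except the true class, the loss is simply the average negative log-probability assigned to the correct digit.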

Training runs for 1000 iterations with mini‑batches of 100 samples:

with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

Accuracy is evaluated inside the same session by comparing predicted and true labels:

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
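
The metric itself is easy to verify in NumPy on a made-up toy batch: take the argmax of predictions and labels, compare them, and average the matches as floats:

```python
import numpy as np

# Accuracy: fraction of rows where the predicted class (argmax of y_pred)
# matches the true class (argmax of the one-hot y_true).
y_pred = np.array([[0.1, 0.8, 0.1], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
y_true = np.array([[0, 1, 0],       [0, 1, 0],       [0, 0, 1]])

correct = np.equal(np.argmax(y_pred, axis=1), np.argmax(y_true, axis=1))
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 2 of 3 correct -> ~0.667
```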

The tutorial concludes that the reader has learned about graphs, sessions, basic data structures, variables, placeholders, and how to assemble them into a functional MNIST digit recognizer.

Machine Learning · Python · Deep Learning · Neural Networks · TensorFlow · MNIST
Written by Qunar Tech Salon

Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.