TensorFlow Basics (1/2): Computation graph
Here I want to lay out the fundamental mechanics of working with TensorFlow.
As far as I understand, TensorFlow's biggest proposition is that you can find the parameters that minimize/maximize a given function without having to specify its gradients.
When using TensorFlow to train neural networks (which are complex functions), this is particularly handy for the backpropagation step with arbitrary architectures, activations and cost functions.
Importing TensorFlow
The first thing necessary is to load the library. I prefer to use the shorter ‘tf’ alias.
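A one-line sketch; note that this post assumes the TensorFlow 1.x graph-and-Session API:

```python
# Import TensorFlow under the conventional short alias.
import tensorflow as tf
```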
Evaluating a function
Before trying to optimize functions, let's just try to represent a function with parameters and evaluate it. Take the function \(y\) below:
\[y(x) = 2x + 4\]

To represent this in TensorFlow we would create two constants, a parameter and a function that ties it all together.
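A minimal sketch, assuming the TensorFlow 1.x API; representing the parameter \(x\) as a placeholder is my choice here:

```python
# Two constants holding the coefficients of y(x) = 2*x + 4.
a = tf.constant(2.0)
b = tf.constant(4.0)

# The parameter: a placeholder whose value is fed in when the graph runs.
x = tf.placeholder(tf.float32)

# Tie it all together. This only builds graph nodes; nothing is computed yet.
y = a * x + b
```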
In TensorFlow lingo we have just defined a computation graph. To evaluate \(y\) at \(x=5\) we would run the graph inside a session.
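Continuing the sketch above, with the 1.x Session API:

```python
# Launch a session and evaluate y, feeding 5.0 in for the placeholder x.
with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: 5.0})
    print(result)
```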
This should produce:
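```
14.0
```

since \(2 \cdot 5 + 4 = 14\).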
Finding the parameters that minimize a function
Let’s now consider the following function:
\[y=\left(\frac{x}{4}-3\right)^2\]

In TensorFlow we create the corresponding computation graph.
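Another sketch under the same 1.x assumptions; this time \(x\) is a tf.Variable, since the optimizer in the next step will need to update it:

```python
# The parameter we will optimize over, initialized at an arbitrary value.
x = tf.Variable(0.0)

# The function to minimize: y = (x/4 - 3)^2.
y = tf.square(x / 4.0 - 3.0)
```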
What if we are interested in finding the value of \(x\) that minimizes \(y\)? TensorFlow comes with optimizers that search for the minimum using gradients derived automatically from the computation graph. We just have to initialize \(x\) at an arbitrary location and repeatedly run the optimization operation, which updates \(x\) to move it closer to the minimum.
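A sketch of that loop, again assuming the 1.x API; the learning rate and iteration count are arbitrary illustrative choices:

```python
# Plain gradient descent on y with respect to x.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(y)

with tf.Session() as sess:
    # Give x its initial value (0.0, as declared above).
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train_op)  # each run nudges x toward the minimum
    print(sess.run(x))      # should print a value close to 12.0
```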
The number printed should be close to \(12\), the value of \(x\) that minimizes \(y\): the squared term vanishes exactly when \(\frac{x}{4}-3=0\), i.e. at \(x=12\).
Next
Next I will write about how to use this foundation to train a neural network.