Linear Regression Using TensorFlow:
Linear regression is a common method for modeling the relationship between a set of continuous variables. For example, given data points x and y, we want to learn the relation between them; this learned relation is called the hypothesis. In linear regression, the hypothesis is always a straight line.
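Concretely, for a single input feature the hypothesis is the line

h(x) = W * x + b

where W is the weight (the slope of the line) and b is the bias (the intercept); training means finding the values of W and b that best fit the data.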
TensorFlow:
TensorFlow is an open-source Python library used for high-performance numerical computation. It is one of the most popular libraries for machine learning applications, especially deep learning.
Implementation:
First, we import the required libraries: NumPy, TensorFlow, and Matplotlib.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
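Note that this tutorial uses the TensorFlow 1.x API (placeholders, sessions, and tf.train). If you are on TensorFlow 2.x, one way to run the same code unchanged is through the compatibility module:

# TensorFlow 2.x compatibility shim for 1.x-style code
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()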
Now, in order to make the random numbers reproducible, we fix the random seeds.
np.random.seed(101)
tf.set_random_seed(101)
Now, let's generate some random data for training our model.
# Generating random linear data
# There will be 60 data points ranging from 0 to 60
x = np.linspace(0, 60, 60)
y = np.linspace(0, 60, 60)

# Adding noise to the random linear data
x += np.random.uniform(-4, 4, 60)
y += np.random.uniform(-4, 4, 60)

n = len(x)  # Number of data points
We can visualize the data with a scatter plot, as shown below.
# Plot of Training Data
plt.scatter(x, y)
plt.xlabel('x')
plt.ylabel('y')
plt.title("Training Data")
plt.show()
Now, in order to feed the training examples into the model, we create placeholders for x and y.
X = tf.placeholder("float")
Y = tf.placeholder("float")
Next, we declare two trainable variables, the weight W and the bias b, initializing them randomly using the np.random.randn() function.
W = tf.Variable(np.random.randn(), name="W")
b = tf.Variable(np.random.randn(), name="b")
After this, we define the hyperparameters: the learning rate and the number of training epochs.
learning_rate = 0.01
training_epochs = 1000
Now we will build the hypothesis and the cost function. There is no need to implement gradient descent manually, because a gradient descent optimizer is built into TensorFlow.
# Hypothesis
y_pred = tf.add(tf.multiply(X, W), b)

# Mean Squared Error Cost Function
cost = tf.reduce_sum(tf.pow(y_pred - Y, 2)) / (2 * n)

# Gradient Descent Optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Global Variables Initializer
init = tf.global_variables_initializer()
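For reference, the cost defined above is the mean squared error

J(W, b) = (1 / (2n)) * Σ (y_pred_i - y_i)²

and at every step the optimizer applies the standard gradient descent updates

W := W - learning_rate * ∂J/∂W
b := b - learning_rate * ∂J/∂b

which is why we never have to derive or code the gradients ourselves.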
Using a TensorFlow session, we can now run the training process inside the session.
# Starting the TensorFlow Session
with tf.Session() as sess:

    # Initializing the Variables
    sess.run(init)

    # Iterating through all the epochs
    for epoch in range(training_epochs):

        # Feeding each data point into the optimizer using the Feed Dictionary
        for (_x, _y) in zip(x, y):
            sess.run(optimizer, feed_dict={X: _x, Y: _y})

        # Displaying the result after every 50 epochs
        if (epoch + 1) % 50 == 0:
            # Calculating the cost at the current epoch
            c = sess.run(cost, feed_dict={X: x, Y: y})
            print("Epoch", (epoch + 1), ": cost =", c, "W =", sess.run(W), "b =", sess.run(b))

    # Storing necessary values to be used outside of the Session
    training_cost = sess.run(cost, feed_dict={X: x, Y: y})
    weight = sess.run(W)
    bias = sess.run(b)
Output:
Epoch: 50 cost = 5.88636 W = 0.99541 b = 1.23810
Epoch: 100 cost = 5.79127 W = 0.998123 b = 1.09143
Epoch: 150 cost = 5.71675 W = 1.00028 b = 0.960414
Epoch: 200 cost = 5.64593 W = 1.00316 b = 0.84396
Epoch: 250 cost = 5.5999 W = 1.00528 b = 0.79357
Epoch: 300 cost = 5.5448 W = 1.0072 b = 0.64522
Epoch: 350 cost = 5.50578 W = 1.0087 b = 0.5622
Epoch: 400 cost = 5.4766 W = 1.01047 b = 0.485345
Epoch: 450 cost = 5.44545 W = 1.011802 b = 0.421167
Epoch: 500 cost = 5.4903 W = 1.013052 b = 0.361888
Epoch: 550 cost = 5.40117 W = 1.01405 b = 0.308714
Epoch: 600 cost = 5.384857 W = 1.01506 b = 0.261381
Epoch: 650 cost = 5.3702 W = 1.01593 b = 0.219096
Epoch: 700 cost = 5.35764 W = 1.01687 b = 0.181212
Epoch: 750 cost = 5.34689 W = 1.017294 b = 0.147244
Epoch: 800 cost = 5.33773 W = 1.010461 b = 0.110931
Epoch: 850 cost = 5.32944 W = 1.05971 b = 0.0903524
Epoch: 900 cost = 5.3259 W = 1.01992 b = 0.06658
Epoch: 950 cost = 5.31686 W = 1.01959 b = 0.0448124
Epoch: 1000 cost = 5.31132 W = 1.01994 b = 0.025663
Now, let us look at the results.
# Calculating the predictions
predictions = weight * x + bias
print("TRAINING_COST =", training_cost, "Weight =", weight, "bias =", bias, '\n')
Output:
TRAINING_COST = 5.31102 Weight = 1.01914 bias = 0.025613
The learned weight is close to 1 and the bias close to 0, which matches the line the data was generated around. Finally, we will plot the results.
# Plotting the Results
plt.plot(x, y, 'ro', label ='Original data')
plt.plot(x, predictions, label ='Fitted line')
plt.title('Linear Regression Result')
plt.legend()
plt.show()
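As a quick sanity check (a sketch, assuming x, y, weight, and bias are still in scope from the code above), we can compare the learned parameters against NumPy's closed-form least-squares fit:

# Closed-form least-squares fit of a degree-1 polynomial for comparison
np_weight, np_bias = np.polyfit(x, y, 1)
print("np.polyfit: Weight =", np_weight, "bias =", np_bias)

Since np.polyfit minimizes the same squared error in closed form, its slope and intercept should be close to (but not exactly equal to) the values found by gradient descent.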
