
[Machine Learning/Deep Learning] Implementing multi-variable linear regression in TensorFlow
Deep Learning | 2019. 11. 29. 19:47

 Notes on Professor Sung Kim's lecture series [모두를 위한 딥러닝 (Deep Learning for Everyone)]

 - https://www.youtube.com/watch?reload=9&v=BS6O0zOGX4E&feature=youtu.be&list=PLlMkM4tgfjnLSOjrEJN31gZATbcj_MpUm&fbclid=IwAR07UnOxQEOxSKkH6bQ8PzYj2vDop_J0Pbzkg3IVQeQ_zTKcXdNOwaSf_k0

 - References: Andrew Ng's ML class

  1) https://class.coursera.org/ml-003/lecture

  2) http://holehouse.org/mlclass/ (note)



1. Hypothesis using matrix

 - H(x1, x2, x3) = x1*w1 + x2*w2 + x3*w3

 -> Stacking each instance's inputs as a row of a matrix X and collecting the weights into a column vector W, the whole hypothesis collapses to a single matrix product: H(X) = XW
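
 The equivalence is easy to check numerically. Below is a minimal NumPy sketch (my own illustration, not part of the lecture; the rows of X reuse the training data from section 2, and the weight values are arbitrary):

import numpy as np

# Five instances, three input variables each (one instance per row of X)
X = np.array([[73., 80., 75.],
              [93., 88., 93.],
              [89., 91., 90.],
              [96., 98., 100.],
              [73., 66., 70.]])
# Weights as a 3x1 column vector (arbitrary values, for illustration only)
W = np.array([[1.], [2.], [3.]])

# Element-wise form: x1*w1 + x2*w2 + x3*w3, computed per instance
elementwise = X[:, 0:1] * W[0] + X[:, 1:2] * W[1] + X[:, 2:3] * W[2]
# Matrix form: H(X) = XW
matrix_form = X.dot(W)

print(np.allclose(elementwise, matrix_form))  # True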

 

2. TensorFlow implementation

 2-1. The conventional approach (one placeholder per input variable)

import tensorflow as tf
tf.set_random_seed(777) # for reproducibility
 
x1_data = [73., 93., 89., 96., 73.]
x2_data = [80., 88., 91., 98., 66.]
x3_data = [75., 93., 90., 100., 70.]
 
y_data = [152., 185., 180., 196., 142.]
 
# placeholders for tensors that will always be fed at run time
x1 = tf.placeholder(tf.float32)
x2 = tf.placeholder(tf.float32)
x3 = tf.placeholder(tf.float32)
 
Y = tf.placeholder(tf.float32)
 
w1 = tf.Variable(tf.random_normal([1]), name='weight1')
w2 = tf.Variable(tf.random_normal([1]), name='weight2')
w3 = tf.Variable(tf.random_normal([1]), name='weight3')
b = tf.Variable(tf.random_normal([1]), name='bias')
 
hypothesis = x1 * w1 + x2 * w2 + x3 * w3 + b
 
# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
 
# Minimize. Need a very small learning rate for this data set
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)
 
# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())
 
for step in range(2001):
    cost_val, hy_val, _ = sess.run([cost, hypothesis, train],
                                   feed_dict={x1: x1_data, x2: x2_data,
                                              x3: x3_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)
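
 Once training finishes, the same session can be reused to predict the score for unseen inputs. A minimal sketch (the input values below are hypothetical, chosen only to illustrate the feed_dict call):

# Predict for a new, hypothetical instance (x1=100, x2=70, x3=101)
print(sess.run(hypothesis, feed_dict={x1: [100.], x2: [70.], x3: [101.]}))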

 

 2-2. Using a matrix

import tensorflow as tf
tf.set_random_seed(777) # for reproducibility
 
x_data = [[73., 80., 75.],
          [93., 88., 93.],
          [89., 91., 90.],
          [96., 98., 100.],
          [73., 66., 70.]]
y_data = [[152.],
          [185.],
          [180.],
          [196.],
          [142.]]
 
 
# placeholders for tensors that will always be fed at run time
# (None leaves the number of instances free; 3 is the number of input variables)
X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])
 
W = tf.Variable(tf.random_normal([3, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
 
# Hypothesis
hypothesis = tf.matmul(X, W) + b
 
# Simplified cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
 
# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)
 
# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())
 
for step in range(2001):
    cost_val, hy_val, _ = sess.run(
        [cost, hypothesis, train], feed_dict={X: x_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)
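
 As before, the trained graph can score new instances. Because X was declared with shape [None, 3], a single row or a whole batch can be fed at once (the values below are hypothetical):

# Predict for one hypothetical instance
print(sess.run(hypothesis, feed_dict={X: [[100., 70., 101.]]}))
# Predict for a batch of two hypothetical instances
print(sess.run(hypothesis, feed_dict={X: [[60., 70., 110.], [90., 100., 80.]]}))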

Posted by CCIBOMB