Lec 06-1 - Softmax Regression: Introduction to the Basic Concepts

1. Logistic Regression 

    1) Limitations of Linear Regression

      - Even for very large or very small input values the weights are applied linearly, so the output is unbounded and the error grows.

    2) Logistic Regression

      - Logistic (Sigmoid) Function : maps any input, however large or small, into the range 0 to 1, which makes it suitable for Binary Classification.
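
         A minimal sketch of the sigmoid in plain NumPy (the names and test values here are my own illustration, not from the lecture):

         import numpy as np

         def sigmoid(z):
             # squashes any real number into the open interval (0, 1)
             return 1.0 / (1.0 + np.exp(-z))

         print(sigmoid(-100.0), sigmoid(0.0), sigmoid(100.0))   # ~0.0, 0.5, ~1.0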

       

         

      

          



2. Multinomial classification

   1) Implementing grades A/B/C - three independent Binary Classifications, one per class (sketched below)
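
      One way to picture the three binary classifiers is a single matrix multiplication where each row of W is one classifier. A minimal NumPy sketch (the numbers are made up for illustration):

      import numpy as np

      x = np.array([1.0, 2.0, 3.0])        # one sample with 3 features
      W = np.array([[0.1, 0.2, 0.3],       # weights of the "A" classifier
                    [0.4, 0.5, 0.6],       # weights of the "B" classifier
                    [0.7, 0.8, 0.9]])      # weights of the "C" classifier

      scores = W.dot(x)                    # three independent scores, one per class
      print(scores)                        # [1.4 3.2 5. ]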

          


        




Lec 06-2: The Cost Function of the Softmax Classifier


1. SoftMax Function


   1) Multinomial classification with Sigmoid

      

     

     Each element ends up between 0 and 1, but the values do not express how the classes relate to one another (they do not form a probability distribution).


   2) SoftMax

       Converts the raw SCORES (logits) into PROBABILITIES: every output lies between 0 and 1 and the outputs sum to 1.
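
       A minimal NumPy sketch of what softmax does to the scores (the values are my own illustration):

       import numpy as np

       scores = np.array([2.0, 1.0, 0.1])                 # raw scores (logits)
       probs = np.exp(scores) / np.sum(np.exp(scores))    # softmax

       print(probs)          # ~[0.659 0.242 0.099]
       print(probs.sum())    # 1.0 -- unlike per-class sigmoid, the outputs sum to 1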


    3) One-Hot Encoding

       Pick the class with the highest probability (arg max) and represent it as a vector with a 1 in that position and 0 everywhere else.


    4) Cross-entropy cost function

         D(S, L) = -Σ_i L_i * log(S_i)

         S : predicted value (softmax output),    L : true label (one-hot)


        [check cost function] - see the sketch below
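
        A small sanity check of the cost function (my own sketch, not from the slides): the cost goes to 0 when the prediction matches the one-hot label and blows up when it does not.

        import numpy as np

        def cross_entropy(S, L):
            # D(S, L) = -sum_i L_i * log(S_i)
            return -np.sum(L * np.log(S))

        L = np.array([0.0, 1.0])                               # true class is the second one

        print(cross_entropy(np.array([0.0001, 0.9999]), L))    # ~0    (correct prediction)
        print(cross_entropy(np.array([0.9999, 0.0001]), L))    # ~9.2  (wrong prediction, huge cost)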

        


  5) Cross-Entropy vs Logistic Cost (proof?) - see the derivation below
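
      A quick check of the claim (my own derivation, not copied from the slides): for two classes write the one-hot label as L = (y, 1-y) and the prediction as S = (H(x), 1-H(x)). Then

         D(S, L) = -Σ_i L_i log(S_i) = -y·log(H(x)) - (1-y)·log(1-H(x))

      which is exactly the logistic regression cost, so cross-entropy is its generalization to more than two classes.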


      

        




Lab 06-1: Implementing Softmax Classification with TensorFlow


1. Hypothesis with SOFTMAX

# scores (logits): tf.matmul(X, W) + b
hypothesis = tf.nn.softmax(tf.matmul(X, W) + b)

2. Cost Function : Cross Entropy

# Cross entropy cost/loss

cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))


optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
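
Putting the pieces together, a minimal runnable sketch (TensorFlow 1.x). The toy x_data/y_data and the shapes (4 features, 3 classes) are assumptions chosen to match the test inputs used below, not code taken from the original post:

import tensorflow as tf

x_data = [[1, 2, 1, 1], [2, 1, 3, 2], [3, 1, 3, 4], [4, 1, 5, 5],
          [1, 7, 5, 5], [1, 2, 5, 6], [1, 6, 6, 6], [1, 7, 7, 7]]
y_data = [[0, 0, 1], [0, 0, 1], [0, 0, 1], [0, 1, 0],
          [0, 1, 0], [0, 1, 0], [1, 0, 0], [1, 0, 0]]

X = tf.placeholder("float", [None, 4])
Y = tf.placeholder("float", [None, 3])
W = tf.Variable(tf.random_normal([4, 3]), name='weight')
b = tf.Variable(tf.random_normal([3]), name='bias')

# softmax hypothesis, hand-written cross-entropy cost, gradient descent
hypothesis = tf.nn.softmax(tf.matmul(X, W) + b)
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(2001):
    sess.run(optimizer, feed_dict={X: x_data, Y: y_data})
    if step % 200 == 0:
        print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}))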


3. Test & One-Hot Encoding

  1) One Row 

# Testing & One-hot encoding

a = sess.run(hypothesis, feed_dict={X: [[1, 11, 7, 9]]})

print(a, sess.run(tf.arg_max(a, 1)))


raw : [[  1.38904958e-03   9.98601854e-01   9.06129117e-06]]

output : [1]


  2) Multiple Rows

# Testing & One-hot encoding

all = sess.run(hypothesis, feed_dict={X: [[1, 11, 7, 9],
                                          [1, 3, 4, 3],
                                          [1, 1, 0, 1]]})

print(all, sess.run(tf.arg_max(all, 1)))


raw : [[  1.38904958e-03   9.98601854e-01   9.06129117e-06]
       [  9.31192040e-01   6.29020557e-02   5.90589503e-03]
       [  1.27327668e-08   3.34112905e-04   9.99665856e-01]]

output: [1 0 2]


Lab 06-2: Implementing Softmax Classification with TensorFlow (softmax_cross_entropy_with_logits)


1. softmax_cross_entropy_with_logits 


logits = tf.matmul(X, W) + b

hypothesis = tf.nn.softmax(logits)

# cross entropy written out by hand
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))

# the same cost using the built-in op (labels must be one-hot)
cost_i = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y_one_hot)
cost = tf.reduce_mean(cost_i)


2. Animal Classification 

  1) load data 

import numpy as np

xy = np.loadtxt('data-04-zoo.csv', delimiter=',', dtype=np.float32)

x_data = xy[:, 0:-1]    # every column except the last: features
y_data = xy[:, [-1]]    # last column: class label 0~6, kept as shape (?, 1)




  2) one_hot and reshape


     Y = tf.placeholder(tf.int32, [None, 1])               # class labels 0~6, shape = (?, 1)

     Y_one_hot = tf.one_hot(Y, nb_classes)                 # tf.one_hot adds a rank: shape = (?, 1, 7)

     Y_one_hot = tf.reshape(Y_one_hot, [-1, nb_classes])   # flatten back to shape = (?, 7)
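
     A small sketch of why the reshape is needed (my own illustration; the three labels are made up). tf.one_hot applied to a (?, 1) tensor produces a rank-3 result, so it has to be flattened back to (?, nb_classes):

     import tensorflow as tf

     nb_classes = 7
     labels = [[0], [3], [6]]                         # three samples: classes 0, 3 and 6

     Y = tf.placeholder(tf.int32, [None, 1])
     one_hot = tf.one_hot(Y, nb_classes)              # shape (?, 1, 7)
     flat = tf.reshape(one_hot, [-1, nb_classes])     # shape (?, 7)

     with tf.Session() as sess:
         print(sess.run(tf.shape(one_hot), feed_dict={Y: labels}))   # [3 1 7]
         print(sess.run(tf.shape(flat), feed_dict={Y: labels}))      # [3 7]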


3. Result

'''
Step:     0 Loss: 5.106 Acc: 37.62%
Step:   100 Loss: 0.800 Acc: 79.21%
Step:   200 Loss: 0.486 Acc: 88.12%
Step:   300 Loss: 0.349 Acc: 90.10%
Step:   400 Loss: 0.272 Acc: 94.06%
Step:   500 Loss: 0.222 Acc: 95.05%
Step:   600 Loss: 0.187 Acc: 97.03%
Step:   700 Loss: 0.161 Acc: 97.03%
Step:   800 Loss: 0.140 Acc: 97.03%
Step:   900 Loss: 0.124 Acc: 97.03%
Step:  1000 Loss: 0.111 Acc: 97.03%
Step:  1100 Loss: 0.101 Acc: 99.01%
Step:  1200 Loss: 0.092 Acc: 100.00%
Step:  1300 Loss: 0.084 Acc: 100.00%
...
[True] Prediction: 0 True Y: 0
[True] Prediction: 0 True Y: 0
[True] Prediction: 3 True Y: 3
[True] Prediction: 0 True Y: 0
[True] Prediction: 0 True Y: 0
'''

