DL in Python/Tensorflow2.x 기초

[tensorflow2.x 기초 - 3] MNIST data로 CNN 구현을 통해 공부하는 모델 생성방법

SuHawn 2020. 8. 30. 19:16

 

 

 

 

 

Layer Explaination

In [1]:
import tensorflow as tf
 

Input Image

Input, DataSet ~ 시각화

 

패키지 로드

  • os
  • glob
  • matplotlib
In [2]:
import os

import matplotlib.pyplot as plt
%matplotlib inline
In [3]:
from tensorflow.keras import datasets

(train_x, train_y), (test_x, test_y) = datasets.mnist.load_data()
In [5]:
image = train_x[0]
 

이미지 shape

In [6]:
# 이미지 shape 확인
image.shape
Out[6]:
(28, 28)
In [7]:
plt.imshow(image, 'gray')
plt.show
Out[7]:
<function matplotlib.pyplot.show(*args, **kw)>
 
 

차원 수 조정

 

[batch_size, height, width, channel]

In [8]:
# 차원 수 높이기 
image = image[tf.newaxis, ..., tf.newaxis]
image.shape
Out[8]:
(1, 28, 28, 1)
 

Feature Extraction

 
image.png
 

Convolution

 
image.png
 

filters: layer에서 나갈 때 몇 개의 filter를 만들 것인지 (a.k.a weights, filters, channels)
kernel_size: filter(Weight)의 사이즈
strides: 몇 개의 pixel을 skip 하면서 훑어지나갈 것인지 (사이즈에도 영향을 줌)
padding: zero padding을 만들 것인지. VALID는 Padding이 없고, SAME은 Padding이 있음 (사이즈에도 영향을 줌)
activation: Activation Function을 만들것인지. 당장 설정 안해도 Layer층을 따로 만들 수 있음, none으로 default값을 가짐

In [11]:
tf.keras.layers.Conv2D(filters = 3, kernel_size = (3, 3), strides = (1,1), padding = 'SAME', activation = 'relu')
Out[11]:
<tensorflow.python.keras.layers.convolutional.Conv2D at 0x279f2e79a88>
 

(3, 3) 대신에 3으로도 대체 가능

In [13]:
tf.keras.layers.Conv2D(3,3,1,'SAME')
Out[13]:
<tensorflow.python.keras.layers.convolutional.Conv2D at 0x279f2ee4c08>
 

Visualization

 
  • tf.keras.layers.Conv2D
In [17]:
image.dtype
Out[17]:
dtype('uint8')
In [18]:
image = tf.cast(image,dtype = tf.float32)
image.dtype
Out[18]:
tf.float32
In [22]:
layer = tf.keras.layers.Conv2D(5,3,1, padding="SAME")
layer
Out[22]:
<tensorflow.python.keras.layers.convolutional.Conv2D at 0x279f2eeb448>
In [25]:
output = layer(image)
output.shape
Out[25]:
TensorShape([1, 28, 28, 5])
In [28]:
import numpy as np
np.min(image), np.max(image)
Out[28]:
(0.0, 255.0)
In [31]:
np.min(output), np.max(output)
# 원래 기본 이미지는 0~255까지의 이미지였는데, output은 convolution을 거치면서 변한 것을 확인할 수 있음
Out[31]:
(-306.50308, 188.83943)
In [30]:
plt.subplot(1,2,1)
plt.imshow(image[0,:,:,0],'gray')
plt.subplot(1,2,2)
plt.imshow(output[0, :, :, 0], 'gray')
plt.show
Out[30]:
<function matplotlib.pyplot.show(*args, **kw)>
 
 

weight 불러오기

  • layer.get_weights()
In [37]:
weight = layer.get_weights()
weight # list type
Out[37]:
[array([[[[ 0.13222298,  0.0945777 ,  0.27978405, -0.12514322,
           -0.21175227]],
 
         [[ 0.03347826, -0.26219758,  0.25944474,  0.25151113,
            0.02023402]],
 
         [[-0.14181814, -0.00426349,  0.05727997,  0.2529222 ,
           -0.26176995]]],
 
 
        [[[-0.19263117, -0.08363104, -0.29363093,  0.28605852,
           -0.25016427]],
 
         [[-0.30346307, -0.29521894, -0.0281097 , -0.22222798,
            0.11144575]],
 
         [[-0.21989791, -0.07354879, -0.16204111, -0.18821964,
           -0.29326138]]],
 
 
        [[[ 0.18713298,  0.21952489, -0.10914342,  0.2212607 ,
           -0.10599694]],
 
         [[ 0.16472292,  0.26418367, -0.00079775,  0.19178542,
           -0.11449274]],
 
         [[-0.2160782 ,  0.16700968, -0.20024309, -0.32081217,
           -0.13047013]]]], dtype=float32),
 array([0., 0., 0., 0., 0.], dtype=float32)]
In [38]:
len(weight)
Out[38]:
2
In [41]:
print("weight :",weight[0].shape, "bias :", weight[1].shape)
 
weight : (3, 3, 1, 5) bias : (5,)
In [45]:
plt.imshow(image[0, : ,:, 0], 'gray')
plt.show
Out[45]:
<function matplotlib.pyplot.show(*args, **kw)>
 
In [65]:
plt.figure(figsize = (15,5))
plt.subplot(131)
plt.hist(output.numpy().ravel(), range = [-300, 300])
plt.ylim(0,300)
plt.subplot(132)
plt.title(weight[0].shape)
plt.imshow(weight[0][:,:,0,0],'gray')
plt.subplot(133)
plt.title(output.shape)
plt.imshow(output[0,:,:,0],'gray')
plt.colorbar()
plt.show()
 
 

Activation Function

 

relu

 
image.png
In [52]:
tf.keras.layers.ReLU()
Out[52]:
<tensorflow.python.keras.layers.advanced_activations.ReLU at 0x279f4ea4b88>
In [54]:
act_layer = tf.keras.layers.ReLU()
act_output = act_layer(output)
act_output.shape  # shape는 그대로 형태 유지
Out[54]:
TensorShape([1, 28, 28, 5])
In [56]:
# actiavtion function에 들어가기 이전

np.min(output), np.max(output)
# relu activation function은 output의 0 미만 값은 모두 0으로 바꾸는 함수이다.
Out[56]:
(-306.50308, 188.83943)
In [57]:
# activation function에 들어가고 난 후
np.min(act_output), np.max(act_output)
Out[57]:
(0.0, 188.83943)
In [67]:
plt.figure(figsize = (15,5))
plt.subplot(121)
plt.hist(act_output.numpy().ravel(), range = [-300, 300])
plt.ylim(0,300)

plt.subplot(122)
plt.title(act_output.shape)
plt.imshow(act_output[0, :, :, 0], 'gray')
plt.colorbar()
plt.show()

# activation function 값을 거치고 난 후 plot을 보면
# 확연하게 사진에 존재하는 값이 줄어든 것을 느낄 수 있다.(검은 부분의 비중이 증가)
 
 

Pooling

 
image.png
 
  • tf.keras.layers.MaxPool2D
In [69]:
tf.keras.layers.MaxPool2D(pool_size = (2,2), strides = (2,2), padding = 'SAME')
Out[69]:
<tensorflow.python.keras.layers.pooling.MaxPooling2D at 0x279f6854748>
In [70]:
pool_layer = tf.keras.layers.MaxPool2D(pool_size = (2,2), strides = (2,2), padding = 'SAME')
pool_output = pool_layer(act_output)
In [71]:
act_output.shape
Out[71]:
TensorShape([1, 28, 28, 5])
In [73]:
pool_output.shape
# shape이 반으로 줄어든 것을 확인할 수 있음.
# pooling은 의미있다고 판단하는 값의 특징만을 남기고 사진의 정보를 압축하는 것
# (max pooling의 경우는 가장 큰 값이 의미가 있다고 판단.)
Out[73]:
TensorShape([1, 14, 14, 5])
In [76]:
plt.figure(figsize = (15, 5))
plt.subplot(121)
plt.hist(pool_output.numpy().ravel(), range = [-2,2])
plt.ylim(0,100)

plt.subplot(122)
plt.title(pool_output.shape)
plt.imshow(pool_output[0,:,:,0],'gray')
plt.colorbar()
plt.show
Out[76]:
<function matplotlib.pyplot.show(*args, **kw)>
 
 

Fully Connected

 
image.png
  • y = wX + b
  • w : weight
  • b : bias
 

Flatten

 
image.png
 
  • tf.keras.layers.Flatten()
In [78]:
import tensorflow as tf
In [79]:
tf.keras.layers.Flatten()
Out[79]:
<tensorflow.python.keras.layers.core.Flatten at 0x279f4325a88>
In [83]:
layer = tf.keras.layers.Flatten()
In [84]:
flatten = layer(output)
In [87]:
output.shape # 기존 output shape
Out[87]:
TensorShape([1, 28, 28, 5])
In [91]:
flatten.shape 
# flatten 이후 shape
# shape은 [1, 3920]이며 여기서 1은 batch size를 말한다.
Out[91]:
TensorShape([1, 3920])
In [90]:
28*28*5 # 연산을 진행했을 때, 동일한 수를 갖는 것을 확인할 수 있음
Out[90]:
3920
In [93]:
plt.figure(figsize = (10,5))
plt.subplot(211)
plt.hist(flatten.numpy().ravel())
plt.subplot(212)
plt.imshow(flatten[:,:100], 'jet')
plt.show()
 
 

Dense

 
image.png
 
  • tf.keras.layers.Dense
In [94]:
tf.keras.layers.Dense(32, activation = 'relu') # 32개의 node를 생성, relu를 사용하여 값을 전달
# Dense - 몇개의 노드를 만들 것이냐?
Out[94]:
<tensorflow.python.keras.layers.core.Dense at 0x279f4f57b48>
In [95]:
layer = tf.keras.layers.Dense(32, activation = 'relu')
In [97]:
output = layer(flatten)
In [98]:
output.shape
Out[98]:
TensorShape([1, 32])
In [100]:
layer_2 = tf.keras.layers.Dense(10, activation = 'relu')
output_example = layer_2(output)
In [101]:
output_example.shape
Out[101]:
TensorShape([1, 10])
 

DropOut

 
image.png
 
  • tf.keras.layers.Dropout
In [103]:
layer = tf.keras.layers.Dropout(0.7) 
output = layer(output)
# 비율을 인수로 넣는다. 일시적으로 랜덤하게 특정 부분만 학습시킨다.
In [104]:
output.shape
Out[104]:
TensorShape([1, 32])
 

Build Model - 모델을 한 번에 설계

 
image.png
In [105]:
from tensorflow.keras import layers
In [106]:
input_shape = (28, 28, 1) # input의 shape
num_classes = 10 # 데이터 레이블 클래스 개수 0~9
In [109]:
inputs = layers.Input(shape = input_shape)

## Feature Extraction
# 1st convolution block
net = layers.Conv2D(32, 3, padding = "SAME")(inputs)
net = layers.Activation('relu')(net)
net = layers.Conv2D(32, 3, padding = 'SAME')(net)
net = layers.Activation('relu')(net)
net = layers.MaxPool2D((2,2))(net)
net = layers.Dropout(0.25)(net)


# 2nd convolution block
net = layers.Conv2D(64, 3, padding = "SAME")(inputs)
net = layers.Activation('relu')(net)
net = layers.Conv2D(64, 3, padding = 'SAME')(net)
net = layers.Activation('relu')(net)
net = layers.MaxPool2D((2,2))(net)
net = layers.Dropout(0.25)(net)

## Fully Connected
net = layers.Flatten()(net)
net = layers.Dense(512)(net)
net = layers.Activation('relu')(net)
net = layers.Dropout(0.25)(net)
net = layers.Dense(num_classes)(net) # 10개 클래스 - 10개의 노드를 내보냄.
net = layers.Activation('softmax')(net)

model = tf.keras.Model(inputs=inputs, outputs = net, name = 'Basic_CNN')
In [110]:
model
Out[110]:
<tensorflow.python.keras.engine.functional.Functional at 0x279f4c8ab88>
 

Summary

In [111]:
model.summary()
 
Model: "Basic_CNN"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 28, 28, 1)]       0         
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 28, 28, 64)        640       
_________________________________________________________________
activation_3 (Activation)    (None, 28, 28, 64)        0         
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 28, 28, 64)        36928     
_________________________________________________________________
activation_4 (Activation)    (None, 28, 28, 64)        0         
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 14, 14, 64)        0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 14, 14, 64)        0         
_________________________________________________________________
flatten_3 (Flatten)          (None, 12544)             0         
_________________________________________________________________
dense_3 (Dense)              (None, 512)               6423040   
_________________________________________________________________
activation_5 (Activation)    (None, 512)               0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 10)                5130      
_________________________________________________________________
activation_6 (Activation)    (None, 10)                0         
=================================================================
Total params: 6,465,738
Trainable params: 6,465,738
Non-trainable params: 0
_________________________________________________________________
In [ ]: