Deep Learning: Generating Anime Characters with DCGAN
Posted by starlet_kiss
Note: due to hardware constraints, this run did not actually produce generated images, but the code should be sound and can serve as a reference.
This post implements anime character generation with a DCGAN. For the final results, see the blog of K同学啊. The steps are essentially the same as in the previous post, which used a DCGAN to generate handwritten digits.
1. Import the libraries
import tensorflow as tf
import numpy as np
import glob,imageio,os,PIL,pathlib
import matplotlib.pyplot as plt
# Configure matplotlib for Chinese text
plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # display the minus sign correctly
2. Data preparation
data_dir = "E:/tmp/.keras/datasets/car_face_photos"
data_dir = pathlib.Path(data_dir)
pic_paths = list(data_dir.glob('*'))
pic_paths = [str(path) for path in pic_paths]
img_count = len(pic_paths)  # 21,551 images in total
plt.figure(figsize=(10, 5))
plt.suptitle("Sample images", fontsize=15)
for i in range(40):
    plt.subplot(5, 8, i + 1)
    plt.xticks([])
    plt.yticks([])
    # Display the image
    images = plt.imread(pic_paths[i])
    plt.imshow(images)
plt.show()
Preview of the images:
Data preprocessing:
1. Normalize pixel values to [-1, 1].
2. Resize each image to 64x64.
3. Shuffle the data and split it into batches of batch_size.
# Data preprocessing
def preprocess_image(image):
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [64, 64])
    return (image - 127.5) / 127.5

def load_and_preprocess_image(path):
    image = tf.io.read_file(path)
    return preprocess_image(image)
path_ds = tf.data.Dataset.from_tensor_slices(pic_paths)
image_ds = path_ds.map(load_and_preprocess_image,num_parallel_calls=tf.data.experimental.AUTOTUNE)
buffer_size = 60000
batch_size = 256
dataset = image_ds.shuffle(buffer_size).batch(batch_size)
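Before building the models, it is worth sanity-checking the pipeline. A minimal sketch (not part of the original post) that pulls one batch and verifies its shape and value range:

# Take a single batch and confirm preprocessing: shape (256, 64, 64, 3), values in [-1, 1]
sample_batch = next(iter(dataset))
print(sample_batch.shape)
print(tf.reduce_min(sample_batch).numpy(), tf.reduce_max(sample_batch).numpy())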
3. Building the generator and the discriminator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce images from random noise. It starts with a Dense layer that takes the noise seed as input, then upsamples repeatedly until the desired image size of 64x64x3 is reached. The model is shown below.
Except for the last layer, which uses tanh, every layer uses LeakyReLU as its activation function.
def Generator_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(4*4*1024, use_bias=False, input_shape=(100,)))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Reshape((4, 4, 1024)))
    assert model.output_shape == (None, 4, 4, 1024)
    # 1: upsample to 8x8x512
    model.add(tf.keras.layers.Conv2DTranspose(512, (5, 5), strides=(2, 2), padding="same", use_bias=False))
    assert model.output_shape == (None, 8, 8, 512)
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())
    # 2: upsample to 16x16x256
    model.add(tf.keras.layers.Conv2DTranspose(256, (5, 5), strides=(2, 2), padding="same", use_bias=False))
    assert model.output_shape == (None, 16, 16, 256)
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())
    # 3: upsample to 32x32x128
    model.add(tf.keras.layers.Conv2DTranspose(128, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 32, 32, 128)
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())
    # 4: upsample to 64x64x3 with a tanh output
    model.add(tf.keras.layers.Conv2DTranspose(3, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 64, 64, 3)
    return model
generator = Generator_model()
generator.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 16384) 1638400
_________________________________________________________________
batch_normalization (BatchNo (None, 16384) 65536
_________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 16384) 0
_________________________________________________________________
reshape (Reshape) (None, 4, 4, 1024) 0
_________________________________________________________________
conv2d_transpose (Conv2DTran (None, 8, 8, 512) 13107200
_________________________________________________________________
batch_normalization_1 (Batch (None, 8, 8, 512) 2048
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 16, 16, 256) 3276800
_________________________________________________________________
batch_normalization_2 (Batch (None, 16, 16, 256) 1024
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 16, 16, 256) 0
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 32, 32, 128) 819200
_________________________________________________________________
batch_normalization_3 (Batch (None, 32, 32, 128) 512
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 32, 32, 128) 0
_________________________________________________________________
conv2d_transpose_3 (Conv2DTr (None, 64, 64, 3) 9600
=================================================================
Total params: 18,920,320
Trainable params: 18,885,760
Non-trainable params: 34,560
_________________________________________________________________
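As a quick smoke test (a sketch, not in the original post), feed the untrained generator a single noise vector; the output should be a 64x64x3 tensor of tanh values in [-1, 1]:

noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
print(generated_image.shape)              # (1, 64, 64, 3)
plt.imshow((generated_image[0] + 1) / 2)  # rescale from [-1, 1] to [0, 1] for display
plt.show()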
The discriminator is a CNN-based image classifier.
# Build the discriminator
def Discriminator_model():
    model = tf.keras.Sequential([
        # Input: 64x64 RGB images, matching the 3-channel data pipeline above
        tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding="same", input_shape=[64, 64, 3]),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding="same"),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Conv2D(256, (5, 5), strides=(2, 2), padding="same"),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Conv2D(512, (5, 5), strides=(2, 2), padding="same"),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Flatten(),
        # Output a raw logit: the loss below uses from_logits=True and applies the sigmoid internally
        tf.keras.layers.Dense(1)
    ])
    return model
discriminator = Discriminator_model()
discriminator.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 32, 32, 128) 9728
_________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 32, 32, 128) 0
_________________________________________________________________
dropout (Dropout) (None, 32, 32, 128) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 16, 16, 128) 409728
_________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 16, 16, 128) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 16, 16, 128) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 8, 8, 256) 819456
_________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 8, 8, 256) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 8, 8, 256) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 4, 4, 512) 3277312
_________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 4, 4, 512) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 4, 4, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 8192) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 8193
=================================================================
Total params: 4,524,417
Trainable params: 4,524,417
Non-trainable params: 0
_________________________________________________________________
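Likewise, the untrained discriminator can be run on the image generated above (a sketch; since the loss below uses from_logits=True, the output is a raw logit, where positive leans "real" and negative leans "fake"):

decision = discriminator(generated_image, training=False)
print(decision.numpy())  # a single raw logit, typically near 0 before training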
4. Loss functions and optimizers
Define the cross-entropy:
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
Because the model is split into a generator and a discriminator, their losses are computed differently.
Discriminator loss: the sum of the loss for classifying real images as 1 and the loss for classifying generated images as 0, because the discriminator wants to label real images 1 and generated images 0.
Generator loss: the loss for classifying generated images as 1, because the generator wants its output to be judged real, i.e., labeled 1.
def Discriminator_loss(real_out, fake_out):
    real_loss = cross_entropy(tf.ones_like(real_out), real_out)
    fake_loss = cross_entropy(tf.zeros_like(fake_out), fake_out)
    return real_loss + fake_loss

def Generator_loss(fake_out):
    return cross_entropy(tf.ones_like(fake_out), fake_out)
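As a quick numerical check (not in the original post): an untrained discriminator outputs logits near zero, and sigmoid(0) = 0.5, so each cross-entropy term starts near -ln(0.5) ≈ 0.693:

# Zero logits correspond to a probability of 0.5 after the sigmoid
dummy_real = tf.zeros([4, 1])
dummy_fake = tf.zeros([4, 1])
print(Discriminator_loss(dummy_real, dummy_fake).numpy())  # ~1.386, i.e. 2 * ln(2)
print(Generator_loss(dummy_fake).numpy())                  # ~0.693, i.e. ln(2)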
Two separate optimizers are used, one per model:
generator_opt = tf.keras.optimizers.Adam(learning_rate=1e-4)
discriminator_opt = tf.keras.optimizers.Adam(learning_rate=1e-4)
Parameter settings; the seed is generated once so that progress can be visualized on the same 16 noise vectors after each epoch:
epochs = 600
noise_dim = 100
num_exp_to_generate = 16
seed = tf.random.normal([num_exp_to_generate,noise_dim])
5. Batch training
The training loop begins with the generator receiving a random seed as input, which it uses to produce an image. The discriminator then distinguishes real images (drawn from the training set) from fake ones (produced by the generator). A loss is computed for each of the two models, and its gradients are used to update the generator and the discriminator. The full epoch loop is sketched after the plotting function below.
def train_step(images):
    # A batch of noise: batch_size random vectors, each of length noise_dim (100)
    noise = tf.random.normal([batch_size, noise_dim])
    # Two tapes: one records the generator's ops, the other the discriminator's
    with tf.GradientTape() as gen_tape, tf.GradientTape() as dis_tape:
        real_out = discriminator(images, training=True)     # discriminator's verdict on real images
        gen_image = generator(noise, training=True)         # generator turns the noise into images
        fake_out = discriminator(gen_image, training=True)  # discriminator's verdict on generated images
        gen_loss = Generator_loss(fake_out)                 # generator loss from the verdict on fakes
        dis_loss = Discriminator_loss(real_out, fake_out)   # discriminator loss from both verdicts
    # Compute each loss's gradients with respect to the matching model's weights
    gradient_gen = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradient_dis = dis_tape.gradient(dis_loss, discriminator.trainable_variables)
    # Apply the gradients to update each model
    generator_opt.apply_gradients(zip(gradient_gen, generator.trainable_variables))
    discriminator_opt.apply_gradients(zip(gradient_dis, discriminator.trainable_variables))
Visualize the generated images and save them locally:
def Generator_plot_image(gen_model, test_noise, epoch):
    # Call the model directly; Keras predict() does not accept a training argument
    pre_images = gen_model(test_noise, training=False)
    fig = plt.figure(figsize=(4, 4))
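The original code breaks off at this point; what follows is a hedged completion of the plotting body in the common DCGAN-tutorial style, where the [-1, 1] to [0, 1] rescaling and the output file name are assumptions:

    for i in range(pre_images.shape[0]):
        plt.subplot(4, 4, i + 1)
        plt.imshow((pre_images[i] + 1) / 2)  # map tanh output back to [0, 1] for display
        plt.axis('off')
    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))  # assumed file name
    plt.show()

The post sets epochs = 600 but never shows the epoch loop itself; a minimal sketch that ties train_step, the fixed seed, and the plotting function together:

def train(dataset, epochs):
    for epoch in range(epochs):
        for image_batch in dataset:
            train_step(image_batch)
        # Visualize progress on the same fixed noise after every epoch
        Generator_plot_image(generator, seed, epoch + 1)

train(dataset, epochs)

The otherwise-unused glob and imageio imports at the top suggest the saved frames were meant to be stitched into a GIF, along these lines:

# Collect the per-epoch frames and write them out as an animated GIF
frames = [imageio.imread(f) for f in sorted(glob.glob('image_at_epoch_*.png'))]
imageio.mimsave('dcgan_anime.gif', frames)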