
Convolutional Neural Networks with TensorFlow

Overview

TensorFlow is a famous deep learning framework. In this blog post, you will learn the basics of this extremely popular Python library and understand how to use it to implement deep, feed-forward artificial neural networks.

More precisely, today's tutorial will introduce you to the following topics:

  • You will first be introduced to tensors and how they differ from matrices. Once you understand what tensors are, you will be introduced to the TensorFlow framework, within which you will also see how even a single line of code is implemented via a computational graph in TensorFlow. You will then learn some of the package's concepts that play a major role when you do deep learning, such as constants, variables, and placeholders;
  • Then you will move on to the most interesting part of this tutorial, namely the implementation of a convolutional neural network: first, you will try to understand the data. You will use Python and its libraries to load, explore, and analyze the data. You will also preprocess it: you will learn how to visualize images as matrices, reshape the data, and rescale the images between 0 and 1 if needed;
  • With all of this done, you are ready to construct the deep neural network model: you start by defining the network parameters, then learn how to create wrappers to keep the code simple, define the weights and biases, model the network, and define the loss and optimizer nodes. Once all of this is in place, you can train and test your model;
  • After evaluating the model, you will learn more about overfitting and how to overcome it by adding a dropout layer. You will then train the model again, with the dropout layers inserted in the network, evaluate it on the test set, and compare the results of the two models. Next, you will make predictions on the test data, convert the probabilities into class labels, and plot a few test samples that your model classified correctly and a few that it classified incorrectly. You will visualize the classification report, which carries the precision, recall, and f1-score for every class present in the test dataset.

Tensors

In layman's terms, a tensor is a way of representing data in deep learning. A tensor can be a 1-dimensional array, a 2-dimensional array, a 3-dimensional array, and so on. You can think of a tensor as a multidimensional array. In machine learning and deep learning, you have datasets that are high-dimensional, in which each dimension represents a different feature of the dataset.

Consider the following example of a dog-versus-cat classification problem, where the dataset you are working with has multiple varieties of both cat and dog images. Now, in order to correctly classify a dog or a cat when given an image, the network has to learn discriminative features such as color, face structure, ears, eyes, the shape of the tail, and so on.

These features are incorporated by the tensors.


Tip: if you want to learn more about tensors, check out srcmini's TensorFlow tutorial for beginners.

But how are tensors any different from matrices? You will find out in the next section!

Tensors vs. Matrices: the Differences

A matrix is a two-dimensional grid of size $n \times m$ that contains numbers: you can add and subtract matrices of the same size, multiply one matrix with another as long as the sizes are compatible ($(n \times m) \times (m \times p) = n \times p$), and multiply an entire matrix by a constant.

A vector is a matrix with just one row or column (but see below).

A tensor is often thought of as a generalized matrix. That is, it could be

  • a 1-D matrix (a vector is actually such a tensor),
  • a 3-D matrix (something like a cube of numbers),
  • a 0-D matrix (a single number), or
  • a higher-dimensional structure that is harder to visualize.

The dimensionality of a tensor is called its rank.

Any rank-2 tensor can be represented as a matrix, but not every matrix is really a rank-2 tensor. The numerical values of a tensor's matrix representation depend on the transformation rules that have been applied to the entire system.
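To make the notion of rank concrete, here is a small illustrative snippet (it is not part of this tutorial's pipeline) that builds tensors of rank 0 through 3 with TensorFlow 1.x:

import tensorflow as tf

scalar = tf.constant(3.0)                       # rank 0: a single number
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1: like a vector
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2: like a matrix
cube = tf.zeros([2, 3, 4])                      # rank 3: a "cube" of numbers

print(scalar.shape, vector.shape, matrix.shape, cube.shape)
# () (3,) (2, 2) (2, 3, 4)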

TensorFlow: Constants, Variables, and Placeholders

TensorFlow is a framework developed by Google and released on November 9, 2015. It is written in Python, C++, and CUDA. It supports platforms such as Linux, Microsoft Windows, macOS, and Android. TensorFlow provides multiple APIs in Python, C++, Java, and more. The most widely used API is the Python one, and you will use it to implement a convolutional neural network in this tutorial.

The name TensorFlow is derived from the operations, such as adding or multiplying, that artificial neural networks perform on multidimensional data arrays. These arrays are called tensors in this framework, which is slightly different from what you saw earlier.

So why is there a "flow" when we are talking about operations?

Let's consider a simple equation and its diagram, represented as a computational graph. Note: don't worry if you don't get the equation straight away; it is just there to help you understand how the flow takes place while using the TensorFlow framework.

prediction = tf.nn.softmax(tf.matmul(W, x) + b)
[Figure: the computational graph of the equation above]

In TensorFlow, every line of code that you write has to go through a computational graph. As in the figure above, you can see that first $W$ and $x$ get multiplied; then comes $b$, which is added to the output of $W$ and $x$. After the output of $W$ and $x$ is added to $b$, a softmax function is applied and the final output is generated.

You will find that constants, variables, and placeholders come in handy when you use TensorFlow to define the input data, class labels, weights, and biases.

  • Constants take no input; you use them to store constant values. They produce a constant output that they store.
import tensorflow as tf
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

Here, nodes a and b are constants that store the values 2.0 and 3.0, while node c stores the operation that multiplies nodes a and b. When you initialize a session and run c, you will see that the output returned is 6.0:

sess = tf.Session()
sess.run(c)
6.0
  • Placeholders allow you to feed in input on the fly. Placeholders are used because of this flexibility: they let your computational graph take the input as a parameter. Defining a node as a placeholder assures that the node is expected to receive a value later, at runtime. Here, "runtime" means that the input is fed to the placeholder when you run the computational graph.
# Creating placeholders
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)

# Assigning addition operation w.r.t. a and b to node add
add = a + b

# Create session object
sess = tf.Session()

# Executing add by passing the values [1, 3] [2, 4] for a and b respectively
output = sess.run(add, {a: [1, 3], b: [2, 4]})
print('Adding a and b:', output)
('Adding a and b:', array([ 3., 7.], dtype=float32))

In this case, you have provided tf.float32 for the data type. Note that this data type is therefore single-precision, stored in a 32-bit format. When you don't do this, as in the first example, TensorFlow will infer the type of the constant/variable from the initial value.
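As a small aside, you can check this inference behavior yourself; the snippet below is only an illustration:

# Without an explicit dtype, TensorFlow infers it from the initial value.
c_int = tf.constant(2)      # inferred as tf.int32
c_flt = tf.constant(2.0)    # inferred as tf.float32
print(c_int.dtype, c_flt.dtype)
# <dtype: 'int32'> <dtype: 'float32'>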

  • Variables allow you to modify the graph so that it can produce new outputs for the same input. A variable lets you add such parameters or nodes to the graph that are trainable; that is, the value can be modified over a period of time.
#Variables are defined by providing their initial value and type
variable = tf.Variable([0.9, 0.7], dtype = tf.float32)

#variable must be initialized before a graph is used for the first time. 
init = tf.global_variables_initializer()
sess.run(init)

Constants are initialized when you call tf.constant, and their values can never change. Variables, by contrast, are not initialized when you call tf.Variable. To initialize all the variables in TensorFlow, you need to explicitly call the global variable initializer global_variables_initializer(), which initializes all the existing variables in your TensorFlow code, as the code block above shows.

Variables survive across multiple executions of a graph, unlike normal tensors, which are only instantiated when the graph is run and are deleted immediately afterwards.
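Here is a minimal sketch of that persistence, assuming the session sess from above and TensorFlow 1.x's tf.assign; it is an illustration, not part of this tutorial's pipeline:

# The variable's value survives across sess.run() calls.
state = tf.Variable(0, name='counter')
increment = tf.assign(state, state + 1)

sess.run(tf.global_variables_initializer())
for _ in range(3):
    print(sess.run(increment))  # prints 1, then 2, then 3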

In this section, you have seen that placeholders are used for holding the input data and class labels, whereas variables are used for the weights and biases. Don't worry if you don't yet have the right intuition for how computational graphs work, or for the placeholders and variables that are commonly used in deep learning; you will cover all of these topics later in this tutorial.

Convolutional Neural Networks (CNN) in TensorFlow

The Fashion-MNIST Dataset

Before you go ahead and load in the data, it's good to take a look at what you will exactly be working with! The Fashion-MNIST dataset contains Zalando's article images, with 28 x 28 grayscale images of 65,000 fashion products from 10 categories, and 6,500 images per category. The training set has 55,000 images, and the test set has 10,000 images. You can double-check this once you have loaded in the data! 😉

Fashion-MNIST is similar to the MNIST dataset that you might already know, which is used to classify handwritten digits. That means that the image dimensions and the training and test splits are similar.

Tip: if you want to learn how to implement a multi-layer perceptron (MLP) for classification tasks with the latter dataset, go to this tutorial, or if you want to learn about convolutional neural networks and their implementation in the Keras framework, check out this tutorial.

You can find the Fashion-MNIST dataset here. Unlike the Keras or Scikit-Learn packages, TensorFlow has no predefined module from which you can load the Fashion-MNIST dataset, even though it comes with the MNIST dataset by default. To load the data, you first need to download it from the link above and then structure it in a specific folder format, as shown below, in order to be able to work with it. Otherwise, TensorFlow will download and use the original MNIST.
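Judging from the extraction messages that you will see when loading the data later in this tutorial, the expected layout is the four gzipped IDX files placed under a data/fashion folder, along these lines:

data/fashion/
├── train-images-idx3-ubyte.gz
├── train-labels-idx1-ubyte.gz
├── t10k-images-idx3-ubyte.gz
└── t10k-labels-idx1-ubyte.gz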

Loading the Data

You start off by importing all the required modules, such as numpy, matplotlib, and, most importantly, TensorFlow.

# Import libraries
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0" #for training on gpu

After you have imported all the modules, you will now learn how to load the data in TensorFlow, which should be pretty straightforward. The only thing you should take into account is the one_hot=True argument, which you also find in the line of code below: it converts the categorical class labels into binary vectors.

In one-hot encoding, you convert the categorical data into a vector of numbers. You do this because machine learning algorithms can't work with categorical data directly. Instead, you generate one boolean column for each category or class. Only one of these columns can take the value 1 for each sample. That explains the term "one-hot encoding".

But what does such a one-hot encoded data column look like?

For your problem statement, the one-hot encoding will be a row vector, and for each image it will have a dimension of 1 x 10. The important thing to note here is that the vector consists of all zeros except for the class that it represents; there you will find a 1. For example, the ankle boot image that you will plot later on has a label of 9, so for all the ankle boot images the one-hot encoding vector would be [0 0 0 0 0 0 0 0 0 1].
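Just as an illustration (this is not part of the loading pipeline), you could build that vector yourself with NumPy:

import numpy as np

label = 9  # ankle boot
one_hot = np.eye(10, dtype=int)[label]
print(one_hot)
# [0 0 0 0 0 0 0 0 0 1]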

Now that all of this is clear, it's time to import the data!

data = input_data.read_data_sets('data/fashion', one_hot=True)
Extracting data/fashion/train-images-idx3-ubyte.gz
Extracting data/fashion/train-labels-idx1-ubyte.gz
Extracting data/fashion/t10k-images-idx3-ubyte.gz
Extracting data/fashion/t10k-labels-idx1-ubyte.gz

Once you have the training and testing data loaded, you are all set to analyze the data in order to get some intuition about the dataset that you are going to work with in this tutorial!

Analyzing the Data

Before you start any heavy lifting, it's always a good idea to check what the images in the dataset look like. First, you can take the programmatic approach and check their dimensions. Also, take into account that, if you want to explore the images, they have already been rescaled between 0 and 1. That means you won't need to rescale the image pixels again!

# Shapes of training set
print("Training set (images) shape: {shape}".format(shape=data.train.images.shape))
print("Training set (labels) shape: {shape}".format(shape=data.train.labels.shape))

# Shapes of test set
print("Test set (images) shape: {shape}".format(shape=data.test.images.shape))
print("Test set (labels) shape: {shape}".format(shape=data.test.labels.shape))
Training set (images) shape: (55000, 784)
Training set (labels) shape: (55000, 10)
Test set (images) shape: (10000, 784)
Test set (labels) shape: (10000, 10)

From the above output, you can see that the training data has a shape of 55000 x 784: there are 55,000 training samples, each a 784-dimensional vector. Similarly, the test data has a shape of 10000 x 784, since there are 10,000 testing samples.

A 784-dimensional vector is nothing but a 28 x 28 matrix. That's why you will reshape each training and testing sample from a 784-dimensional vector into a 28 x 28 x 1 matrix, in order to feed the samples into the CNN model.

For simplicity, let's create a dictionary that maps the class names to their corresponding categorical class labels.

# Create dictionary of target classes
label_dict = {
    0: 'T-shirt/top',
    1: 'Trouser',
    2: 'Pullover',
    3: 'Dress',
    4: 'Coat',
    5: 'Sandal',
    6: 'Shirt',
    7: 'Sneaker',
    8: 'Bag',
    9: 'Ankle boot',
}

Also, let's take a look at the images in the dataset:

plt.figure(figsize=[5, 5])

# Display the first image in training data
plt.subplot(121)
curr_img = np.reshape(data.train.images[0], (28, 28))
curr_lbl = np.argmax(data.train.labels[0, :])
plt.imshow(curr_img, cmap='gray')
plt.title("(Label: " + str(label_dict[curr_lbl]) + ")")

# Display the first image in testing data
plt.subplot(122)
curr_img = np.reshape(data.test.images[0], (28, 28))
curr_lbl = np.argmax(data.test.labels[0, :])
plt.imshow(curr_img, cmap='gray')
plt.title("(Label: " + str(label_dict[curr_lbl]) + ")")
<matplotlib.text.Text at 0x7f3d17e38cd0>
[Figure: the first training image (label: Coat) and the first test image (label: Ankle boot)]

The output of the above two plots shows one sample image each from the training and test data, and these images are assigned the class labels 4 (Coat) and 9 (Ankle boot), respectively. Similarly, other fashion products will have different labels, but similar products will have the same label. This means that all 6,500 ankle boot images will have a class label of 9.

Data Preprocessing

The images are of size 28 x 28 (or a 784-dimensional vector).

The images have already been rescaled between 0 and 1, so you don't need to rescale them again, but to be sure, let's visualize an image from the training dataset as a matrix. Along with that, let's also print the maximum and minimum values of the matrix.

data.train.images[0]
array([0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.00784314, 0.0509804 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.00784314, 0.00392157, 0.        , 0.        , 0.5137255 , 0.92549026, 0.909804  , 0.87843144, 0.2901961 , 0.        , 0.        , 0.00392157, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.00392157, 0.        , 0.        , 0.        , 0.41960788, 0.9176471 , 0.87843144, 0.8470589 , 0.8980393 , 0.8980393 , 0.21568629, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.00784314, 0.        , 0.        , 0.36078432, 0.8000001 , 0.8352942 , 0.8431373 , 0.882353  , 0.8470589 , 0.9215687 , 0.80392164, 0.8941177 , 0.7019608 , 0.2509804 , 0.        , 0.        , 0.00784314, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.00392157, 0.        , 0.        , 0.75294125, 0.8980393 , 0.854902  , 0.8470589 , 0.78823537, 0.90196085, 1.        , 0.882353  , 0.8196079 , 0.8352942 , 0.8431373 , 0.89019614, 0.4901961 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.01568628, 0.        , 0.09411766, 0.909804  , 0.8078432 , 0.8313726 , 0.8941177 , 0.8235295 , 0.8000001 , 0.86666673, 0.76470596, 0.85098046, 0.8470589 , 0.8078432 , 0.8470589 , 0.8000001 , 0.        , 0.        , 0.00784314, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.01176471, 0.        , 0.3921569 , 0.93725497, 0.85098046, 0.8117648 , 0.86274517, 0.87843144, 0.83921576, 0.8431373 , 0.8313726 , 0.8588236 , 0.8196079 , 0.8352942 , 0.8313726 , 0.90196085, 0.15686275, 0.        , 0.01176471, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.6745098 , 0.9333334 , 0.86666673, 0.882353  , 0.854902  , 0.86274517, 0.86666673, 0.91372555, 0.87843144, 0.8235295 , 0.8431373 , 0.86666673, 0.83921576, 0.92549026, 0.40784317, 0.        , 0.00784314, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.86274517, 0.9215687 , 0.87843144, 0.882353  , 0.8705883 , 0.854902  , 0.85098046, 0.7843138 , 0.8745099 , 0.8431373 , 0.8588236 , 0.8705883 , 0.85098046, 0.91372555, 0.6       , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.82745105, 0.90196085, 0.8941177 , 0.8862746 , 0.882353  , 0.86666673, 0.8705883 , 0.85098046, 0.83921576, 0.86274517, 0.8588236 , 0.8470589 , 0.8588236 , 0.8980393 , 0.7843138 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        
, 0.01568628, 0.8941177 , 0.8862746 , 0.90196085, 0.882353  , 0.87843144, 0.882353  , 0.8745099 , 0.8352942 , 0.8588236 , 0.86666673, 0.8588236 , 0.854902  , 0.8705883 , 0.8862746 , 0.9176471 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.227451  , 0.93725497, 0.87843144, 0.91372555, 0.882353  , 0.8745099 , 0.8745099 , 0.86666673, 0.83921576, 0.8745099 , 0.8588236 , 0.85098046, 0.854902  , 0.86274517, 0.86666673, 0.8431373 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.37254903, 0.9568628 , 0.8705883 , 0.9058824 , 0.8862746 , 0.8745099 , 0.87843144, 0.87843144, 0.85098046, 0.86274517, 0.854902  , 0.8588236 , 0.86666673, 0.8588236 , 0.85098046, 0.89019614, 0.14901961, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.52156866, 0.9490197 , 0.8705883 , 0.9490197 , 0.89019614, 0.87843144, 0.8862746 , 0.89019614, 0.83921576, 0.86666673, 0.86274517, 0.8588236 , 0.8705883 , 0.909804  , 0.83921576, 0.9215687 , 0.27450982, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.70980394, 0.9294118 , 0.87843144, 0.8745099 , 0.909804  , 0.8745099 , 0.882353  , 0.89019614, 0.85098046, 0.8745099 , 0.8588236 , 0.8588236 , 0.86666673, 0.8431373 , 0.8431373 , 0.92549026, 0.42352945, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.854902  , 0.91372555, 0.90196085, 0.6431373 , 0.94117653, 0.8745099 , 0.882353  , 0.8862746 , 0.854902  , 0.8745099 , 0.8470589 , 0.86666673, 0.86274517, 0.61960787, 0.86274517, 0.8980393 , 0.62352943, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.95294124, 0.909804  , 0.8941177 , 0.49411768, 0.9843138 , 0.87843144, 0.882353  , 0.90196085, 0.8862746 , 0.8745099 , 0.854902  , 0.86274517, 0.8980393 , 0.47450984, 0.91372555, 0.8941177 , 0.7607844 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.8588236 , 0.9058824 , 0.8000001 , 0.427451  , 1.        , 0.8588236 , 0.89019614, 0.8862746 , 0.7803922 , 0.882353  , 0.8745099 , 0.8431373 , 0.9450981 , 0.36078432, 0.8980393 , 0.882353  , 0.8352942 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.01960784, 0.8941177 , 0.90196085, 0.7372549 , 0.4901961 , 1.        , 0.85098046, 0.8862746 , 0.90196085, 0.8352942 , 0.882353  , 0.8705883 , 0.83921576, 0.9921569 , 0.36862746, 0.8588236 , 0.87843144, 0.9294118 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.12156864, 0.91372555, 0.91372555, 0.68235296, 0.5647059 , 1.        , 0.8470589 , 0.87843144, 0.91372555, 0.8705883 , 0.882353  , 0.87843144, 0.8431373 , 0.9960785 , 0.41960788, 0.8196079 , 0.8705883 , 0.8431373 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.3529412 , 0.9058824 , 0.909804  , 0.63529414, 0.59607846, 1.        , 0.854902  , 0.882353  , 0.91372555, 0.854902  , 0.8745099 , 0.87843144, 0.8352942 , 1.        
, 0.43529415, 0.7568628 , 0.87843144, 0.86666673, 0.19607845, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.6784314 , 0.9450981 , 0.93725497, 0.6431373 , 0.61960787, 0.9960785 , 0.86274517, 0.882353  , 0.9176471 , 0.85098046, 0.8705883 , 0.8705883 , 0.8352942 , 0.9960785 , 0.45882356, 0.7843138 , 0.8941177 , 0.91372555, 0.65882355, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.4431373 , 0.82745105, 1.        , 0.6313726 , 0.6862745 , 0.9960785 , 0.8588236 , 0.8941177 , 0.9176471 , 0.86666673, 0.8745099 , 0.87843144, 0.8352942 , 0.9960785 , 0.5137255 , 0.7960785 , 0.82745105, 0.8000001 , 0.18431373, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.8196079 , 0.9215687 , 0.8588236 , 0.8941177 , 0.9176471 , 0.86666673, 0.87843144, 0.8745099 , 0.8470589 , 0.9960785 , 0.5882353 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.87843144, 0.9176471 , 0.86666673, 0.8941177 , 0.9176471 , 0.86666673, 0.8705883 , 0.8745099 , 0.86274517, 0.9333334 , 0.6862745 , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.00784314, 0.        , 0.        , 0.91372555, 0.90196085, 0.8745099 , 0.882353  , 0.909804  , 0.86274517, 0.8705883 , 0.87843144, 0.86274517, 0.9215687 , 0.72156864, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.00392157, 0.        , 0.        , 1.        , 0.9450981 , 0.8980393 , 0.9333334 , 0.93725497, 0.882353  , 0.9058824 , 0.92549026, 0.8941177 , 0.9725491 , 0.86666673, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.37647063, 0.6745098 , 0.7686275 , 0.81568635, 0.8705883 , 0.85098046, 0.8196079 , 0.7843138 , 0.75294125, 0.64705884, 0.26666668, 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        , 0.        ], dtype=float32)
np.max(data.train.images[0])
1.0
np.min(data.train.images[0])
0.0

Let's reshape the images so that they have dimensions of 28 x 28 x 1, which can be fed as input to the network.

# Reshape training and testing image
train_X = data.train.images.reshape(-1, 28, 28, 1)
test_X = data.test.images.reshape(-1, 28, 28, 1)
train_X.shape, test_X.shape
((55000, 28, 28, 1), (10000, 28, 28, 1))

You don't need to reshape the labels, since they already have the correct dimensions, but let's put the training and testing labels in separate variables and also print their respective shapes, just to be on the safe side.

train_y = data.train.labels
test_y = data.test.labels
train_y.shape, test_y.shape
((55000, 10), (10000, 10))

The Deep Neural Network

You will use three convolutional layers:

  • The first layer will have 32 filters of size 3 x 3,
  • the second layer will have 64 filters of size 3 x 3, and
  • the third layer will have 128 filters of size 3 x 3.

In addition, there are three max-pooling layers, each of size 2 x 2.

[Figure: the architecture of the network described above]

You start by defining the number of training iterations training_iters, the learning rate learning_rate, and the batch size batch_size. Keep in mind that all of these are hyperparameters and that they have no fixed values, since they differ for every problem statement.

Nevertheless, here is what you can generally expect:

  • The number of training iterations is the number of times you train the network;
  • It is good practice to use a learning rate of 1e-3. The learning rate is a factor that is multiplied with the updates applied to the weights; it really helps in reducing the cost/loss/cross-entropy, and eventually helps in converging or reaching the local optima. The learning rate should be neither too high nor too low, but balanced;
  • The batch size means that your training images will be divided into batches of a fixed size, and each batch takes a fixed number of images and trains on them. It's recommended to use a batch size that is a power of 2, since the number of physical processors is often a power of 2, and using a number of virtual processors different from a power of 2 leads to poor performance. Also, keeping the batch size too large can lead to memory errors, so you have to make sure that the machine you run this code on has sufficient RAM to handle the specified batch size.
training_iters = 200
learning_rate = 0.001
batch_size = 128
    

Network Parameters

Next, you need to define the network parameters. First, you define the number of inputs: this is 784, since the image is initially loaded as a 784-dimensional vector. Later, you will see how to reshape the 784-dimensional vector into a 28 x 28 x 1 matrix. Second, you also define the number of classes, which is simply the number of class labels.

# MNIST data input (img shape: 28*28)
n_input = 28

# MNIST total classes (0-9 digits)
n_classes = 10
    

Now it's time to use those placeholders, about which you read earlier in this tutorial. You will define an input placeholder x, which will have dimensions None x 28 x 28 x 1, and an output placeholder with dimensions None x 10. To reiterate, placeholders allow you to do operations and build your computational graph without feeding in data.

Similarly, y will hold the labels of the training images in matrix format, namely a None x 10 matrix.

The row dimension is None. That is because you have defined batch_size, which tells the placeholders that they will receive this dimension at the time you feed the data to them. Since you set the batch size to 128, that will be the row dimension of the placeholders.

#both placeholders are of type float
x = tf.placeholder("float", [None, 28, 28, 1])
y = tf.placeholder("float", [None, n_classes])
    

Creating Wrappers for Simplicity

In your network architecture model, you will have multiple convolutional and max-pooling layers. In such cases, it's always a better idea to define the convolution and max-pooling functions once, so that you can call them as many times as you need in your network.

  • In the conv2d() function, you pass 4 arguments: the input x, the weights W, the bias b, and the strides. The last argument is set to 1 by default, but you can always play with it to see how the network performs. The first and last stride must always be 1, because the first is for the image number and the last is for the input channel (the images are grayscale images with only one channel). After applying the convolution, you add the bias and apply an activation function called the Rectified Linear Unit (ReLU).
  • The max-pooling function is simple: it takes the input x and a kernel size k, which is set to 2. This means that the max-pooling filter will be a square matrix with dimensions 2 x 2, and the stride by which the filter moves is also 2.

You will use SAME padding, which makes sure that the boundary pixels of the image are not left out while performing the convolution operations; SAME padding will essentially add zeros at the boundaries of the input and allow the convolution filter to access the boundary pixels as well.

Similarly, in the max-pooling operation, SAME padding will also add zeros. Later, when you define the weights and biases, you will notice that an input of size 28 x 28 is downsampled to 4 x 4 after applying three max-pooling layers.
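You can verify that 28 x 28 ends up as 4 x 4 with a quick back-of-the-envelope check: with SAME padding and a stride of 2, each pooling layer halves the spatial size, rounding up. A small illustrative calculation (not part of the model code):

import math

size = 28
for layer in range(1, 4):
    size = int(math.ceil(size / 2.0))  # SAME padding: out = ceil(in / stride)
    print("after max-pool", layer, "->", size)
# after max-pool 1 -> 14
# after max-pool 2 -> 7
# after max-pool 3 -> 4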

def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')
    

With the conv2d and maxpool2d wrappers defined, you can now define your weight and bias variables. So, let's get started!

But first, let's understand each weight and bias parameter step by step. You will create two dictionaries: one for the weights and a second for the bias parameters.

  • If you recall from the figure above, the first convolutional layer has 32 filters of size 3 x 3, so the shape parameter of the first key in the weights dictionary (wc1) takes a tuple with 4 values: the first and second are the filter size, the third is the number of channels in the input image, and the last one represents the number of convolution filters you want in the first convolutional layer. The first key in the biases dictionary, bc1, will have 32 bias parameters.
  • Similarly, the second key of the weights dictionary (wc2) has a shape parameter that takes a tuple with 4 values: the first and second again refer to the filter size, and the third represents the number of channels from the previous output. Since you pass 32 convolution filters over the input image, you will have 32 channels as output from the first convolutional layer operation. The last value represents the number of filters you want in the second convolutional layer. Note that the second key in the biases dictionary, bc2, will have 64 parameters.

You will do the same for the third convolutional layer.

  • Now, it's important to understand the fourth key (wd1). After applying 3 convolution and max-pooling operations, you have downsampled the input image from 28 x 28 to 4 x 4, and now you need to flatten this downsampled output to feed it as input to the fully connected layer. That's why you do the multiplication 4*4*128: the first two numbers are the spatial size of the downsampled output, and 128 is the number of channels from the previous layer's output, that is, the output of convolutional layer 3. The second element of the tuple passed to shape holds the number of neurons that you want in the fully connected layer. Similarly, in the biases dictionary, the fourth key, bd1, has 128 parameters.

You will follow the same logic for the last fully connected layer, for which the number of neurons will be equal to the number of classes.

weights = {
    'wc1': tf.get_variable('W0', shape=(3, 3, 1, 32), initializer=tf.contrib.layers.xavier_initializer()),
    'wc2': tf.get_variable('W1', shape=(3, 3, 32, 64), initializer=tf.contrib.layers.xavier_initializer()),
    'wc3': tf.get_variable('W2', shape=(3, 3, 64, 128), initializer=tf.contrib.layers.xavier_initializer()),
    'wd1': tf.get_variable('W3', shape=(4*4*128, 128), initializer=tf.contrib.layers.xavier_initializer()),
    'out': tf.get_variable('W6', shape=(128, n_classes), initializer=tf.contrib.layers.xavier_initializer()),
}
biases = {
    'bc1': tf.get_variable('B0', shape=(32), initializer=tf.contrib.layers.xavier_initializer()),
    'bc2': tf.get_variable('B1', shape=(64), initializer=tf.contrib.layers.xavier_initializer()),
    'bc3': tf.get_variable('B2', shape=(128), initializer=tf.contrib.layers.xavier_initializer()),
    'bd1': tf.get_variable('B3', shape=(128), initializer=tf.contrib.layers.xavier_initializer()),
    'out': tf.get_variable('B4', shape=(10), initializer=tf.contrib.layers.xavier_initializer()),
}
    

Now it's time to define the network architecture! Unfortunately, this is not as straightforward as it is in the Keras framework!

The conv_net() function takes 3 arguments as input: the input x and the weights and biases dictionaries. Again, let's go through the construction of the network step by step:

  • First, you reshape the 784-dimensional input vector into a 28 x 28 x 1 matrix. As you saw earlier, the images are loaded as 784-dimensional vectors, but you will feed the input to the model as a matrix of size 28 x 28 x 1. The -1 in the reshape() function means that it will infer the first dimension on its own, while the remaining dimensions are fixed, that is, 28 x 28 x 1;
  • Next, as shown in the architecture diagram of the model, you define conv1, which takes the input image, the weights wc1, and the bias bc1. Then you apply max-pooling on the output of conv1, and you carry out a basically similar process up through conv3;
  • Since your task is to classify which class label a given image belongs to, after passing through all the convolutional and max-pooling layers you flatten the output of conv3. Next, you connect the flattened conv3 neurons with each and every neuron in the next layer. Then you apply an activation function on the output of the fully connected layer fc1;
  • Finally, the last layer will have 10 neurons, since you have to classify 10 labels. That means you connect all the neurons of fc1 with the 10 neurons in the output layer.
def conv_net(x, weights, biases):

    # here we call the conv2d function we had defined above and pass the input image x, weights wc1 and bias bc1.
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    # Max Pooling (down-sampling), this chooses the max value from a 2*2 matrix window and outputs a 14*14 matrix.
    conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    # here we call the conv2d function we had defined above and pass the input image x, weights wc2 and bias bc2.
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    # Max Pooling (down-sampling), this chooses the max value from a 2*2 matrix window and outputs a 7*7 matrix.
    conv2 = maxpool2d(conv2, k=2)

    conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
    # Max Pooling (down-sampling), this chooses the max value from a 2*2 matrix window and outputs a 4*4 matrix.
    conv3 = maxpool2d(conv3, k=2)

    # Fully connected layer
    # Reshape conv3 output to fit fully connected layer input
    fc1 = tf.reshape(conv3, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    # Output, class prediction
    # finally we multiply the fully connected layer with the weights and add a bias term.
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out
    

Loss and Optimizer Nodes

You will start by constructing the model and calling the conv_net() function, passing in the input x and the weights and biases. Since this is a multi-class classification problem, you will use a softmax activation on the output layer. This gives you the probabilities for each class label. The loss function you use is cross-entropy.

The reason you use cross-entropy as a loss function is that its value is always positive and tends toward zero as the neuron gets better at computing the desired output y for all training inputs x. These are both properties you would intuitively expect from a cost function. It avoids the problem of learning slowing down, which means that even if the weights and biases are initialized in a wrong fashion, it helps the network recover faster and doesn't hamper the training phase.
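For reference, and as a standard definition rather than something stated in the original text, the cross-entropy cost over $n$ training samples can be written as $C = -\frac{1}{n} \sum_{x} \sum_{j} y_j \ln \hat{y}_j$, where $y$ is the one-hot ground-truth vector and $\hat{y}$ is the vector of predicted softmax probabilities. Since only the true class has $y_j = 1$, each sample contributes $-\ln \hat{y}_{\text{true}}$, which shrinks toward zero as the predicted probability of the correct class approaches 1.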

In TensorFlow, you can define both the activation function and the cross-entropy loss function in a single line. You pass two parameters, the predicted output and the ground-truth labels y. You then take the mean (reduce_mean) over all batches to get a single loss/cost value.

Next, you define one of the most popular optimization algorithms: the Adam optimizer. You can read more about optimizers from here. You specify the learning rate and explicitly state that you want to minimize the cost that you computed in the previous step.

pred = conv_net(x, weights, biases)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    

Model Evaluation Nodes

To test your model, let's define two more nodes: correct_prediction and accuracy. They will evaluate your model after every training iteration, which will help you keep track of its performance. The model is tested on the 10,000 test images after every iteration; these images are never seen during the training phase.

You could always save the graph and run the testing part later. But for now, you will test within the session.

# Here you check whether the index of the maximum value of the predicted image is equal to the actual labelled image; both will be column vectors.
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))

# Calculate accuracy across all the given images and average them out.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    

Remember that the weights and biases are variables and that you have to initialize them before you can make use of them. So let's do that with the following lines of code:

# Initializing the variables
init = tf.global_variables_initializer()
    

Training and Testing the Model

When training and testing your model in TensorFlow, you go through the following steps:

  • You start by launching the graph. This is a class that runs all the TensorFlow operations and launches the graph in a session. All the operations have to be within the indented block;
  • Then you run the session, which executes the variables that were initialized in the previous step and evaluates the tensors;
  • Next, you define a for loop that runs for the number of training iterations you specified in the beginning. Right after that, you initiate a second for loop for the number of batches, which depends on the batch size you chose; so you divide the total number of images by the batch size;
  • You then feed in the images based on the batch size that you pass in batch_x, together with their respective labels in batch_y;
  • Now comes the most important step. Just as you ran the initializer after creating the graph, you now feed the placeholders x and y with the actual data in a dictionary and run the session, passing the cost and the accuracy that you defined earlier. It returns the loss (cost) and the accuracy;
  • You can print the loss and the training accuracy after each epoch (training iteration) is completed.

After each training iteration is completed, you run only the accuracy node, passing all 10,000 test images and labels. This gives you an idea of how accurately your model is performing while it is being trained.

It is usually recommended that you test only once the model is fully trained, and that during the training phase you validate after each epoch. For now, though, let's stick with this approach.

with tf.Session() as sess:
    sess.run(init)
    train_loss = []
    test_loss = []
    train_accuracy = []
    test_accuracy = []
    summary_writer = tf.summary.FileWriter('./Output', sess.graph)
    for i in range(training_iters):
        for batch in range(len(train_X)//batch_size):
            batch_x = train_X[batch*batch_size:min((batch+1)*batch_size, len(train_X))]
            batch_y = train_y[batch*batch_size:min((batch+1)*batch_size, len(train_y))]
            # Run optimization op (backprop).
            opt = sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss and accuracy
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x, y: batch_y})
        print("Iter " + str(i) + ", Loss= " + \
              "{:.6f}".format(loss) + ", Training Accuracy= " + \
              "{:.5f}".format(acc))
        print("Optimization Finished!")

        # Calculate accuracy for all 10000 mnist test images
        test_acc, valid_loss = sess.run([accuracy, cost], feed_dict={x: test_X, y: test_y})
        train_loss.append(loss)
        test_loss.append(valid_loss)
        train_accuracy.append(acc)
        test_accuracy.append(test_acc)
        print("Testing Accuracy:", "{:.5f}".format(test_acc))
    summary_writer.close()
    
    Iter 0, Loss= 0.338081, Training Accuracy= 0.87500
    Optimization Finished!
    ('Testing Accuracy:', '0.83890')
    Iter 1, Loss= 0.210727, Training Accuracy= 0.91406
    Optimization Finished!
    ('Testing Accuracy:', '0.87810')
    Iter 2, Loss= 0.169724, Training Accuracy= 0.95312
    Optimization Finished!
    ('Testing Accuracy:', '0.89260')
    Iter 3, Loss= 0.154453, Training Accuracy= 0.93750
    Optimization Finished!
    ('Testing Accuracy:', '0.89600')
    Iter 4, Loss= 0.143760, Training Accuracy= 0.93750
    Optimization Finished!
    ('Testing Accuracy:', '0.89610')
    Iter 5, Loss= 0.142700, Training Accuracy= 0.93750
    Optimization Finished!
    ('Testing Accuracy:', '0.89680')
    Iter 6, Loss= 0.114542, Training Accuracy= 0.94531
    Optimization Finished!
    ('Testing Accuracy:', '0.90190')
    Iter 7, Loss= 0.104471, Training Accuracy= 0.94531
    Optimization Finished!
    ('Testing Accuracy:', '0.90100')
    Iter 8, Loss= 0.089115, Training Accuracy= 0.96094
    Optimization Finished!
    ('Testing Accuracy:', '0.90360')
    Iter 9, Loss= 0.090392, Training Accuracy= 0.96094
    Optimization Finished!
    ('Testing Accuracy:', '0.90420')
    Iter 10, Loss= 0.066802, Training Accuracy= 0.98438
    Optimization Finished!
    ('Testing Accuracy:', '0.89960')
    Iter 11, Loss= 0.062734, Training Accuracy= 0.98438
    Optimization Finished!
    ('Testing Accuracy:', '0.89870')
    Iter 12, Loss= 0.071126, Training Accuracy= 0.98438
    Optimization Finished!
    ('Testing Accuracy:', '0.88770')
    Iter 13, Loss= 0.051628, Training Accuracy= 0.98438
    Optimization Finished!
    ('Testing Accuracy:', '0.89670')
    Iter 14, Loss= 0.049411, Training Accuracy= 0.98438
    Optimization Finished!
    ('Testing Accuracy:', '0.90280')
    Iter 15, Loss= 0.057557, Training Accuracy= 0.97656
    Optimization Finished!
    ('Testing Accuracy:', '0.89920')
    Iter 16, Loss= 0.053462, Training Accuracy= 0.98438
    Optimization Finished!
    ('Testing Accuracy:', '0.89780')
    Iter 17, Loss= 0.042286, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.89980')
    Iter 18, Loss= 0.017384, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.89930')
    Iter 19, Loss= 0.017027, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.89130')
    Iter 20, Loss= 0.032651, Training Accuracy= 0.98438
    Optimization Finished!
    ('Testing Accuracy:', '0.89500')
    Iter 21, Loss= 0.032651, Training Accuracy= 0.98438
    Optimization Finished!
    ('Testing Accuracy:', '0.88890')
    Iter 22, Loss= 0.030661, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91070')
    Iter 23, Loss= 0.010199, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90620')
    Iter 24, Loss= 0.006742, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91330')
    Iter 25, Loss= 0.015453, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91110')
    Iter 26, Loss= 0.011107, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91070')
    Iter 27, Loss= 0.012601, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90920')
    Iter 28, Loss= 0.013160, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90470')
    Iter 29, Loss= 0.006266, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91160')
    Iter 30, Loss= 0.007183, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91080')
    Iter 31, Loss= 0.006205, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91490')
    Iter 32, Loss= 0.008915, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.90940')
    Iter 33, Loss= 0.001174, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91470')
    Iter 34, Loss= 0.002065, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91310')
    Iter 35, Loss= 0.002440, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91210')
    Iter 36, Loss= 0.001424, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91710')
    Iter 37, Loss= 0.002666, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91510')
    Iter 38, Loss= 0.001833, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91070')
    Iter 39, Loss= 0.004789, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91510')
    Iter 40, Loss= 0.003274, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91160')
    Iter 41, Loss= 0.001958, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91270')
    Iter 42, Loss= 0.004119, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90950')
    Iter 43, Loss= 0.003570, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90750')
    Iter 44, Loss= 0.008136, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91090')
    Iter 45, Loss= 0.003319, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91210')
    Iter 46, Loss= 0.001454, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91420')
    Iter 47, Loss= 0.000695, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91050')
    Iter 48, Loss= 0.002378, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91070')
    Iter 49, Loss= 0.001328, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91520')
    Iter 50, Loss= 0.002429, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90830')
    Iter 51, Loss= 0.000840, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91150')
    Iter 52, Loss= 0.002838, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91120')
    Iter 53, Loss= 0.001496, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91040')
    Iter 54, Loss= 0.001293, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91010')
    Iter 55, Loss= 0.002120, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91390')
    Iter 56, Loss= 0.000601, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91540')
    Iter 57, Loss= 0.001594, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91200')
    Iter 58, Loss= 0.001877, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91370')
    Iter 59, Loss= 0.000769, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91670')
    Iter 60, Loss= 0.002815, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91100')
    Iter 61, Loss= 0.007895, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91280')
    Iter 62, Loss= 0.009527, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91520')
    Iter 63, Loss= 0.003365, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91010')
    Iter 64, Loss= 0.001035, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91570')
    Iter 65, Loss= 0.004112, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91210')
    Iter 66, Loss= 0.002977, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91100')
    Iter 67, Loss= 0.000183, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91530')
    Iter 68, Loss= 0.000348, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91290')
    Iter 69, Loss= 0.001012, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90890')
    Iter 70, Loss= 0.010831, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91020')
    Iter 71, Loss= 0.000026, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91830')
    Iter 72, Loss= 0.000644, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91190')
    Iter 73, Loss= 0.001083, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91410')
    Iter 74, Loss= 0.000335, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91290')
    Iter 75, Loss= 0.012580, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91660')
    Iter 76, Loss= 0.000295, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91390')
    Iter 77, Loss= 0.001756, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91280')
    Iter 78, Loss= 0.001754, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91300')
    Iter 79, Loss= 0.086850, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91600')
    Iter 80, Loss= 0.002057, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91170')
    Iter 81, Loss= 0.018806, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91100')
    Iter 82, Loss= 0.000346, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91100')
    Iter 83, Loss= 0.000076, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91240')
    Iter 84, Loss= 0.000004, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91590')
    Iter 85, Loss= 0.001539, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90970')
    Iter 86, Loss= 0.000007, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91530')
    Iter 87, Loss= 0.002044, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91350')
    Iter 88, Loss= 0.000013, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91260')
    Iter 89, Loss= 0.000201, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90670')
    Iter 90, Loss= 0.000217, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91070')
    Iter 91, Loss= 0.000109, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91990')
    Iter 92, Loss= 0.000056, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91500')
    Iter 93, Loss= 0.001038, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91120')
    Iter 94, Loss= 0.000383, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91110')
    Iter 95, Loss= 0.000004, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91650')
    Iter 96, Loss= 0.000905, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91490')
    Iter 97, Loss= 0.000080, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91370')
    Iter 98, Loss= 0.013281, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91180')
    Iter 99, Loss= 0.005379, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91390')
    Iter 100, Loss= 0.001301, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91240')
    Iter 101, Loss= 0.000181, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91610')
    Iter 102, Loss= 0.000176, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91490')
    Iter 103, Loss= 0.000249, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91160')
    Iter 104, Loss= 0.000944, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.90990')
    Iter 105, Loss= 0.000097, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91700')
    Iter 106, Loss= 0.000887, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91350')
    Iter 107, Loss= 0.000004, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91720')
    Iter 108, Loss= 0.000010, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91530')
    Iter 109, Loss= 0.000097, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91670')
    Iter 110, Loss= 0.000072, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91780')
    Iter 111, Loss= 0.000240, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91640')
    Iter 112, Loss= 0.002454, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91370')
    Iter 113, Loss= 0.000007, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91690')
    Iter 114, Loss= 0.000013, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91340')
    Iter 115, Loss= 0.000450, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91440')
    Iter 116, Loss= 0.000401, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91680')
    Iter 117, Loss= 0.000428, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91510')
    Iter 118, Loss= 0.000002, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91680')
    Iter 119, Loss= 0.000159, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91640')
    Iter 120, Loss= 0.000326, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91610')
    Iter 121, Loss= 0.000944, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91060')
    Iter 122, Loss= 0.003092, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91160')
    Iter 123, Loss= 0.000741, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91460')
    Iter 124, Loss= 0.000804, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91710')
    Iter 125, Loss= 0.000302, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91900')
    Iter 126, Loss= 0.000462, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91470')
    Iter 127, Loss= 0.000300, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91560')
    Iter 128, Loss= 0.000005, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91470')
    Iter 129, Loss= 0.000101, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91610')
    Iter 130, Loss= 0.000666, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91530')
    Iter 131, Loss= 0.000094, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.92080')
    Iter 132, Loss= 0.011843, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91940')
    Iter 133, Loss= 0.002057, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91220')
    Iter 134, Loss= 0.000372, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91760')
    Iter 135, Loss= 0.000468, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91360')
    Iter 136, Loss= 0.000006, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91880')
    Iter 137, Loss= 0.000097, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91150')
    Iter 138, Loss= 0.000013, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91740')
    Iter 139, Loss= 0.000171, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91520')
    Iter 140, Loss= 0.000157, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91310')
    Iter 141, Loss= 0.000008, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91630')
    Iter 142, Loss= 0.000035, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91680')
    Iter 143, Loss= 0.000693, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91560')
    Iter 144, Loss= 0.000539, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91470')
    Iter 145, Loss= 0.000129, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91680')
    Iter 146, Loss= 0.000347, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91660')
    Iter 147, Loss= 0.000241, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.92050')
    Iter 148, Loss= 0.000007, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91640')
    Iter 149, Loss= 0.000021, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91870')
    Iter 150, Loss= 0.000358, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91390')
    Iter 151, Loss= 0.000101, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91460')
    Iter 152, Loss= 0.000044, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91400')
    Iter 153, Loss= 0.000015, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91700')
    Iter 154, Loss= 0.000063, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91680')
    Iter 155, Loss= 0.000149, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91140')
    Iter 156, Loss= 0.000277, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91470')
    Iter 157, Loss= 0.000098, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91910')
    Iter 158, Loss= 0.000023, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91490')
    Iter 159, Loss= 0.000239, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91640')
    Iter 160, Loss= 0.001147, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91570')
    Iter 161, Loss= 0.000009, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91650')
    Iter 162, Loss= 0.000963, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91630')
    Iter 163, Loss= 0.000422, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91530')
    Iter 164, Loss= 0.000007, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91360')
    Iter 165, Loss= 0.000026, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91480')
    Iter 166, Loss= 0.000294, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91340')
    Iter 167, Loss= 0.000350, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91330')
    Iter 168, Loss= 0.000917, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91990')
    Iter 169, Loss= 0.000174, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91390')
    Iter 170, Loss= 0.000066, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91740')
    Iter 171, Loss= 0.000078, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91560')
    Iter 172, Loss= 0.000020, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91540')
    Iter 173, Loss= 0.000010, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91630')
    Iter 174, Loss= 0.000048, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91720')
    Iter 175, Loss= 0.008362, Training Accuracy= 0.99219
    Optimization Finished!
    ('Testing Accuracy:', '0.91170')
    Iter 176, Loss= 0.000415, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91590')
    Iter 177, Loss= 0.000202, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91620')
    Iter 178, Loss= 0.000279, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91560')
    Iter 179, Loss= 0.000003, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91100')
    Iter 180, Loss= 0.000128, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91750')
    Iter 181, Loss= 0.000288, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91930')
    Iter 182, Loss= 0.000138, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91680')
    Iter 183, Loss= 0.000400, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91990')
    Iter 184, Loss= 0.000049, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.92000')
    Iter 185, Loss= 0.000866, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91420')
    Iter 186, Loss= 0.000241, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91670')
    Iter 187, Loss= 0.000004, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91610')
    Iter 188, Loss= 0.000058, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91290')
    Iter 189, Loss= 0.000194, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91650')
    Iter 190, Loss= 0.000008, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91480')
    Iter 191, Loss= 0.000010, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91790')
    Iter 192, Loss= 0.000916, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91710')
    Iter 193, Loss= 0.000006, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91460')
    Iter 194, Loss= 0.000001, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91600')
    Iter 195, Loss= 0.000046, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91460')
    Iter 196, Loss= 0.000044, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91750')
    Iter 197, Loss= 0.000633, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91110')
    Iter 198, Loss= 0.000028, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91830')
    Iter 199, Loss= 0.000206, Training Accuracy= 1.00000
    Optimization Finished!
    ('Testing Accuracy:', '0.91870')
    

The test accuracy looks impressive. It turns out that your classifier does better than the benchmark reported here, which is an SVM classifier with a mean accuracy of 0.897. Also, the model does pretty well compared to some of the deep learning models mentioned on the GitHub profile of the creators of the Fashion-MNIST dataset.

However, the model looks like it is overfitting, since the training accuracy is higher than the test accuracy. Are these results really all that good?

Let's put the model evaluation into perspective and plot the accuracy and loss between the training and validation data:

plt.plot(range(len(train_loss)), train_loss, 'b', label='Training loss')
plt.plot(range(len(train_loss)), test_loss, 'r', label='Test loss')
plt.title('Training and Test loss')
plt.xlabel('Epochs ', fontsize=16)
plt.ylabel('Loss', fontsize=16)
plt.legend()
plt.figure()
plt.show()

<matplotlib.figure.Figure at 0x7feac8194250>
    
[Figure: training and test loss]
plt.plot(range(len(train_loss)), train_accuracy, 'b', label='Training Accuracy')
plt.plot(range(len(train_loss)), test_accuracy, 'r', label='Test Accuracy')
plt.title('Training and Test Accuracy')
plt.xlabel('Epochs ', fontsize=16)
plt.ylabel('Accuracy', fontsize=16)
plt.legend()
plt.figure()
plt.show()

<matplotlib.figure.Figure at 0x7feac80419d0>
    
[Figure: training and test accuracy]

From the above two plots, you can see that the test accuracy almost stagnated after 50-60 epochs and rarely increased at certain epochs. In the beginning, the test accuracy was increasing linearly with the loss, but then it did not increase much.

The validation loss shows that this is a sign of overfitting: similar to the test accuracy, it decreased linearly, but after 25-30 epochs it started to increase. This means that the model tried to memorize the data and succeeded.

That's it for this tutorial, but there is a task for you all:

  • Your task is to reduce the overfitting of the above model by introducing the dropout technique (a minimal sketch follows this list). For simplicity, you may want to follow along with the Convolutional Neural Networks in Python with Keras tutorial; even though it is in Keras, the accuracy and loss heuristics remain pretty much the same. Going through that tutorial will therefore help you add dropout layers to your current model, since both tutorials have exactly similar architectures;
  • Secondly, try to improve the test accuracy, maybe by deepening the network a bit, adding learning rate decay for faster convergence, trying to play with the optimizer, and so on!
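To give you a head start on the first task, here is a minimal sketch of where dropout could go; the keep_prob placeholder and the 0.75 keep probability are example choices, not code from this tutorial:

# Sketch only (TF 1.x): a keep_prob placeholder controls the dropout strength.
keep_prob = tf.placeholder(tf.float32)

# Inside conv_net(), right after fc1 = tf.nn.relu(fc1):
fc1 = tf.nn.dropout(fc1, keep_prob)

# In the session, keep 75% of activations while training...
# sess.run(optimizer, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.75})
# ...and disable dropout at test time:
# sess.run([accuracy, cost], feed_dict={x: test_X, y: test_y, keep_prob: 1.0})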

Go Further and Master Deep Learning with TensorFlow!

This tutorial was a good start for understanding how TensorFlow works under the hood, along with an implementation of convolutional neural networks in Python. If you were able to follow along easily, or even with a little more effort, well done! Try doing some experiments with the same model architecture but using different types of publicly available datasets. You could also try playing with different weight initializers, maybe deepen the network architecture, change the learning rate, and so on, and see how your network performs by changing these parameters. But try changing only one parameter at a time; that way you will develop more intuition about each of them.

There is still a lot to cover, so why not take srcmini's Deep Learning in Python course? In the meantime, also make sure to check out the TensorFlow documentation, if you haven't done so already. You will find more examples and information on all the functions, arguments, more layers, etc. It will undoubtedly be an indispensable resource when you are learning how to work with neural networks in Python!
