This post follows the notebook "使用CNN在MNIST上实现简单的攻击样本" (implementing simple adversarial examples on MNIST with a CNN): https://github.com/wmn7/ML_Practice/blob/master/2019_06_03/CNN_MNIST%E5%8F%AF%E8%A7%86%E5%8C%96.ipynb

Contents

- Simple adversarial examples on MNIST
  - 1 Train a digit classification network
  - 2 Control the output probability and see what the input looks like
  - 3 Make a correctly classified image be misclassified

Simple adversarial examples on MNIST

To see what a particular filter is detecting, we make the activation it produces as large as possible.

To obtain adversarial examples, a simple idea along the same lines is to make the final output for one chosen class larger and then backpropagate (with the network parameters frozen) to modify the input.

Schematic of the idea (figure not reproduced here).

1 Train a digit classification network

Configuration of the model, the data, and the hyper-parameters:

```python
import time
import csv, os

import PIL.Image
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
from cv2 import resize

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data.sampler import SubsetRandomSampler
from torch.autograd import Variable

import copy
import torchvision
import torchvision.transforms as transforms

os.environ['CUDA_VISIBLE_DEVICES'] = '1'

# --------------------
# Device configuration
# --------------------
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# ----------------
# Hyper-parameters
# ----------------
num_classes = 10
num_epochs = 3
batch_size = 100
validation_split = 0.05  # use 5% of the training set as a validation set
learning_rate = 0.001

# -------------
# MNIST dataset
# -------------
train_dataset = torchvision.datasets.MNIST(root='./',
                                            train=True,
                                            transform=transforms.ToTensor(),
                                            download=True)

test_dataset = torchvision.datasets.MNIST(root='./',
                                           train=False,
                                           transform=transforms.ToTensor())

# -----------
# Data loader
# -----------
test_len = len(test_dataset)  # number of test samples: 10000
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

for (inputs, labels) in test_loader:
    print(inputs.size())  # [100, 1, 28, 28]
    print(labels.size())  # [100]
    break

# ---------------------------
# Split off a validation set
# ---------------------------
dataset_len = len(train_dataset)  # 60000
indices = list(range(dataset_len))

# Randomly splitting indices:
val_len = int(np.floor(validation_split * dataset_len))                  # size of the validation set
validation_idx = np.random.choice(indices, size=val_len, replace=False)  # indices of the validation samples
train_idx = list(set(indices) - set(validation_idx))                     # indices of the training samples

## Defining the samplers for each phase based on the random indices:
train_sampler = SubsetRandomSampler(train_idx)
validation_sampler = SubsetRandomSampler(validation_idx)

train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           sampler=train_sampler,
                                           batch_size=batch_size)
validation_loader = torch.utils.data.DataLoader(train_dataset,
                                                sampler=validation_sampler,
                                                batch_size=batch_size)

train_dataloaders = {'train': train_loader, 'val': validation_loader}  # keep the loaders in a dictionary
train_datalengths = {'train': len(train_idx), 'val': val_len}          # keep the train/validation sizes
```

Build a simple neural network:

```python
# -------------------------------------------------------
# Convolutional neural network (two convolutional layers)
# -------------------------------------------------------
class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7*7*32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out
```
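As a quick sanity check on the architecture (my addition, not part of the original notebook): the two stride-2 max-pools shrink the 28×28 input to 7×7, which is why the fully connected layer expects 7*7*32 = 1568 input features. A minimal sketch, assuming the ConvNet class defined above:

```python
# Shape-check sketch (assumes the ConvNet definition above).
net = ConvNet(num_classes=10)
net.eval()                                # avoid touching BatchNorm running stats
dummy = torch.randn(1, 1, 28, 28)         # one fake MNIST-sized image
features = net.layer2(net.layer1(dummy))  # activations before flattening
print(features.shape)                     # torch.Size([1, 32, 7, 7]) -> 7*7*32 = 1568
print(net(dummy).shape)                   # torch.Size([1, 10])
```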
Plot the loss and accuracy curves:

```python
# -----------
# draw losses
# -----------
def draw_loss_acc(train_list, validation_list, mode='loss'):
    plt.style.use('seaborn')
    plt.figure()  # start a new figure so the Loss and Accuracy plots do not overlap
    # set inter
    data_len = len(train_list)
    x_ticks = np.arange(1, data_len + 1)
    plt.xticks(x_ticks)
    if mode == 'Loss':
        plt.plot(x_ticks, train_list, label='Train Loss')
        plt.plot(x_ticks, validation_list, label='Validation Loss')
        plt.xlabel('Epoch')
        plt.ylabel('Loss')
        plt.legend()
        plt.savefig('Epoch_loss.jpg')
    elif mode == 'Accuracy':
        plt.plot(x_ticks, train_list, label='Train Accuracy')
        plt.plot(x_ticks, validation_list, label='Validation Accuracy')
        plt.xlabel('Epoch')
        plt.ylabel('Accuracy')
        plt.legend()
        plt.savefig('Epoch_Accuracy.jpg')
```

Define the training function:

```python
# ---------------
# Train the model
# ---------------
def train_model(model, criterion, optimizer, dataloaders, train_datalengths, scheduler=None, num_epochs=2):
    """Arguments:
    1. model: the model to train
    2. criterion: the loss function
    3. optimizer: the optimizer
    4. dataloaders: the training/validation data loaders
    5. train_datalengths: sizes of the train and validation sets (used to compute accuracy)
    6. scheduler: learning-rate update policy
    7. num_epochs: number of training epochs
    """
    since = time.time()
    # keep the best model weights and the best accuracy seen so far
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    train_loss = []  # record the training loss after every epoch
    train_acc = []
    validation_loss = []
    validation_acc = []
    for epoch in range(num_epochs):
        print('Epoch [{}/{}]'.format(epoch + 1, num_epochs))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                if scheduler is not None:
                    scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode
            running_loss = 0.0     # accumulated over one epoch
            running_corrects = 0   # accumulated over one epoch
            # Iterate over data.
            total_step = len(dataloaders[phase])
            for i, (inputs, labels) in enumerate(dataloaders[phase]):
                # inputs = inputs.reshape(-1, 28*28).to(device)
                inputs = inputs.to(device)
                labels = labels.to(device)
                # zero the parameter gradients
                optimizer.zero_grad()
                # forward
                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)    # predictions from the outputs
                    loss = criterion(outputs, labels)   # loss from the outputs
                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
                if (i + 1) % 100 == 0:
                    # print once every 100 iterations (i.e. every 100 * batch_size samples)
                    iteration_loss = loss.item() / inputs.size(0)
                    iteration_acc = 100 * torch.sum(preds == labels.data).item() / len(preds)
                    print('Mode {}, Epoch [{}/{}], Step [{}/{}], Accuracy: {}, Loss: {:.4f}'.format(
                        phase, epoch + 1, num_epochs, i + 1, total_step, iteration_acc, iteration_loss))
            epoch_loss = running_loss / train_datalengths[phase]
            epoch_acc = running_corrects.double() / train_datalengths[phase]
            if phase == 'train':
                train_loss.append(epoch_loss)
                train_acc.append(epoch_acc)
            else:
                validation_loss.append(epoch_loss)
                validation_acc.append(epoch_acc)
            print('*' * 10)
            print('Mode: [{}], Loss: {:.4f}, Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
            print('*' * 10)
            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
        print()
    time_elapsed = time.time() - since
    print('*' * 10)
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))
    print('*' * 10)
    # load best model weights
    final_model = copy.deepcopy(model)      # the model after the last epoch
    model.load_state_dict(best_model_wts)   # the model that was best on the validation set
    draw_loss_acc(train_list=train_loss, validation_list=validation_loss, mode='Loss')    # plot the loss curves
    draw_loss_acc(train_list=train_acc, validation_list=validation_acc, mode='Accuracy')  # plot the accuracy curves
    return (model, final_model)
```

Start training and save the model:

```python
if __name__ == '__main__':
    # initialize the model
    model = ConvNet(num_classes=num_classes).to(device)
    # print the model structure
    # print(model)

    # ------------------
    # Loss and optimizer
    # ------------------
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    # ---------------
    # Train the model
    # ---------------
    (best_model, final_model) = train_model(model=model, criterion=criterion, optimizer=optimizer,
                                            dataloaders=train_dataloaders, train_datalengths=train_datalengths,
                                            num_epochs=num_epochs)

    torch.save(model, 'CNN_MNIST.pkl')
```

The model structure printed by print(model):

```
ConvNet(
  (layer1): Sequential(
    (0): Conv2d(1, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (layer2): Sequential(
    (0): Conv2d(16, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (fc): Linear(in_features=1568, out_features=10, bias=True)
)
```

Training log:

```
Epoch [1/3]
----------
Mode train, Epoch [1/3], Step [100/570], Accuracy: 96.0, Loss: 0.0012
Mode train, Epoch [1/3], Step [200/570], Accuracy: 96.0, Loss: 0.0011
Mode train, Epoch [1/3], Step [300/570], Accuracy: 100.0, Loss: 0.0004
Mode train, Epoch [1/3], Step [400/570], Accuracy: 100.0, Loss: 0.0003
Mode train, Epoch [1/3], Step [500/570], Accuracy: 96.0, Loss: 0.0010
**********
Mode: [train], Loss: 0.1472, Acc: 0.9579
**********
**********
Mode: [val], Loss: 0.0653, Acc: 0.9810
**********
Epoch [2/3]
----------
Mode train, Epoch [2/3], Step [100/570], Accuracy: 99.0, Loss: 0.0005
Mode train, Epoch [2/3], Step [200/570], Accuracy: 98.0, Loss: 0.0003
Mode train, Epoch [2/3], Step [300/570], Accuracy: 98.0, Loss: 0.0003
Mode train, Epoch [2/3], Step [400/570], Accuracy: 97.0, Loss: 0.0005
Mode train, Epoch [2/3], Step [500/570], Accuracy: 98.0, Loss: 0.0004
**********
Mode: [train], Loss: 0.0470, Acc: 0.9853
**********
**********
Mode: [val], Loss: 0.0411, Acc: 0.9890
**********
Epoch [3/3]
----------
Mode train, Epoch [3/3], Step [100/570], Accuracy: 99.0, Loss: 0.0006
Mode train, Epoch [3/3], Step [200/570], Accuracy: 99.0, Loss: 0.0009
Mode train, Epoch [3/3], Step [300/570], Accuracy: 98.0, Loss: 0.0004
Mode train, Epoch [3/3], Step [400/570], Accuracy: 99.0, Loss: 0.0003
Mode train, Epoch [3/3], Step [500/570], Accuracy: 100.0, Loss: 0.0002
**********
Mode: [train], Loss: 0.0348, Acc: 0.9890
**********
**********
Mode: [val], Loss: 0.0432, Acc: 0.9867
**********
**********
Training complete in 0m 32s
Best val Acc: 0.989000
**********
```
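The test_loader built at the beginning is never used for a final score in the original notebook. If a held-out test accuracy is wanted after training, a minimal evaluation sketch (my addition, reusing model, device, test_loader and test_len from above) could look like this:

```python
# Test-set evaluation sketch (assumes the trained model and the loaders defined above).
model.eval()
correct = 0
with torch.no_grad():
    for inputs, labels in test_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        preds = model(inputs).argmax(dim=1)
        correct += (preds == labels).sum().item()
print('Test accuracy: {:.4f}'.format(correct / test_len))
```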
The loss and accuracy curves from training (saved as Epoch_loss.jpg and Epoch_Accuracy.jpg) are not reproduced here.

Load the model for a quick test, using the following 7 (the first test image, saved as test_data_0.jpg) as an example:

```python
model = torch.load('CNN_MNIST.pkl')

# print(test_dataset.data[0].shape)  # torch.Size([28, 28])
# print(test_dataset.targets[0])     # tensor(7)

unload = transforms.ToPILImage()
img = unload(test_dataset.data[0])
img.save('test_data_0.jpg')

# run the model on this image
inputdata = test_dataset.data[0].view(1, 1, 28, 28).float() / 255
inputdata = inputdata.to(device)
outputs = model(inputdata)
print(outputs)
```

```
tensor([[ -7.4075,  -3.4618,  -1.5322,   1.1591, -10.6026,  -5.5818, -17.4020,
          14.6429,  -4.5378,   1.0360]], device='cuda:0', grad_fn=<AddmmBackward0>)
```

```python
print(torch.max(outputs, 1))
```

```
torch.return_types.max(
values=tensor([14.6429], device='cuda:0', grad_fn=<MaxBackward0>),
indices=tensor([7], device='cuda:0'))
```

OK, the prediction is correct: 7.
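Note that the values printed above are raw logits from the final linear layer, not probabilities. If actual class probabilities are wanted, a softmax can be applied; a small sketch (my addition, reusing the outputs tensor computed above):

```python
# Softmax sketch (assumes `outputs` holds the logits printed above).
probs = F.softmax(outputs, dim=1)  # probabilities that sum to 1
print(probs)
print(probs.argmax(dim=1))         # still predicts class 7
```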
2 Control the output probability and see what the input looks like

Steps: initialize a random image, make the network's output for one chosen digit as large as possible, and then look at what kind of image the network considers to be that digit.

```python
# a hook class
class SaveFeatures():
    """Register a forward hook and remove it when done."""
    def __init__(self, module):
        self.hook = module.register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        # self.features = output.clone().detach().requires_grad_(True)
        self.features = output.clone()

    def close(self):
        self.hook.remove()
```

Now put it to work:

```python
# hook the model (index 2 of model.children() is the final fc layer,
# so activations.features holds the 10 class scores)
layer = 2
activations = SaveFeatures(list(model.children())[layer])

# hyper-parameters
lr = 0.005             # learning rate for the input image
opt_steps = 100        # number of optimization steps
upscaling_factor = 10  # upscaling factor (only for saving the final images at a larger size)

# store the optimized images
true_nums = []
random_init, num_init = 1, 0  # the type of initialization

# iterate over the ten target digits
for true_num in range(0, 10):
    # initialize the input image (the data and its optimizer must be created together)
    sz = 28
    if random_init:
        img = np.uint8(np.random.uniform(0, 255, (1, sz, sz))) / 255
        img = torch.from_numpy(img[None]).float().to(device)
        img_var = Variable(img, requires_grad=True)
    if num_init:
        # turn the 7 into 0, 1, 2, ..., 9
        img = test_dataset.data[0].view(1, 1, 28, 28).float() / 255
        img = img.to(device)
        img_var = Variable(img, requires_grad=True)
    # define the optimizer on the image itself
    optimizer = torch.optim.Adam([img_var], lr=lr, weight_decay=1e-6)

    for n in range(opt_steps):  # optimize pixel values for opt_steps times
        optimizer.zero_grad()
        model(img_var)  # forward pass (fills activations.features via the hook)
        if random_init:
            loss = -activations.features[0, true_num]  # push the chosen output up
        if num_init:
            loss = -activations.features[0, true_num] + F.mse_loss(img_var, img)  # push the chosen output up while staying close to the original image
        loss.backward()
        optimizer.step()

    # print the final activations
    print(activations.features[0, true_num])
    # tensor(23.8736, device='cuda:0', grad_fn=<SelectBackward0>)
    print(activations.features[0])
    # tensor([ 23.8736, -32.0724,  -5.7329, -18.6501, -16.1558, -18.9483,  -0.3033,
    #         -29.1561,  14.9260, -13.5412], device='cuda:0', grad_fn=<SelectBackward0>)
    print()

    img = img_var.cpu().clone()
    img = img.squeeze(0)
    # clip the image so pixel values stay in [0, 1]
    img[img > 1] = 1
    img[img < 0] = 0
    true_nums.append(img)
    unloader = transforms.ToPILImage()
    img = unloader(img)
    img = cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2BGR)
    sz = int(upscaling_factor * sz)  # calculate new image size
    img = cv2.resize(img, (sz, sz), interpolation=cv2.INTER_CUBIC)  # scale image up
    if random_init:
        cv2.imwrite('random_regonize{}.jpg'.format(true_num), img)
    if num_init:
        cv2.imwrite('real7_regonize{}.jpg'.format(true_num), img)
```

Output for the remaining digits (1-9):

```
tensor(22.3530, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([-22.4025,  22.3530,   6.0451,  -9.4460,  -1.0577, -17.7650, -11.3686,
        -11.8474,  -5.5310, -16.9936], device='cuda:0', grad_fn=<SelectBackward0>)

tensor(50.2202, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([-16.9364, -16.5120,  50.2202,  -9.5287, -25.2837, -32.3480, -22.8569,
        -20.1231,   1.1174, -31.7244], device='cuda:0', grad_fn=<SelectBackward0>)

tensor(48.8004, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([-33.3715, -30.5732,  -6.0252,  48.8004, -34.9467, -17.8136, -35.1371,
        -17.4484,   2.8954, -11.9694], device='cuda:0', grad_fn=<SelectBackward0>)

tensor(31.5068, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([-24.5204, -13.6857,  -5.1833, -22.7889,  31.5068, -20.2855, -16.7245,
        -19.1719,   2.4699, -16.2246], device='cuda:0', grad_fn=<SelectBackward0>)

tensor(37.4866, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([-20.2235, -26.1013, -25.9511,  -2.7806, -19.5546,  37.4866, -10.8689,
        -30.3888,   0.1591,  -8.5250], device='cuda:0', grad_fn=<SelectBackward0>)

tensor(35.9310, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([ -6.1996, -31.2246,  -8.3396, -21.6307, -16.9098,  -9.5194,  35.9310,
        -33.0918,  10.2462, -28.6393], device='cuda:0', grad_fn=<SelectBackward0>)

tensor(23.6772, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([-21.5441, -10.3366,  -4.9905,  -0.9289,  -6.9219, -23.5643, -23.9894,
         23.6772,  -0.7960, -16.9556], device='cuda:0', grad_fn=<SelectBackward0>)

tensor(57.8378, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([-13.5191, -53.9004,  -9.2996, -10.3597, -27.3806, -27.5858, -15.3235,
        -46.7014,  57.8378, -17.2299], device='cuda:0', grad_fn=<SelectBackward0>)

tensor(37.0334, device='cuda:0', grad_fn=<SelectBackward0>)
tensor([-26.2983, -37.7131, -16.6210,  -1.8686, -11.5330, -11.7843, -25.7539,
        -27.0036,   6.3785,  37.0334], device='cuda:0', grad_fn=<SelectBackward0>)
```
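The loop above writes each optimized image to disk with cv2. For a quick side-by-side look at all ten results, a small matplotlib sketch (my addition, reusing the true_nums list filled in the loop) could be:

```python
# Visualization sketch (assumes true_nums holds the ten optimized 1x28x28 images).
fig, axes = plt.subplots(1, 10, figsize=(20, 2))
for digit, ax in enumerate(axes):
    ax.imshow(true_nums[digit].squeeze(0).detach().numpy(), cmap='gray')
    ax.set_title(str(digit))
    ax.axis('off')
plt.savefig('all_digits.jpg')
```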
```python
# remove the hook
activations.close()

for i in range(0, 10):
    _, pre = torch.max(model(true_nums[i][None].to(device)), 1)
    print('i:{},Pre:{}'.format(i, pre))
```

```
i:0,Pre:tensor([0], device='cuda:0')
i:1,Pre:tensor([1], device='cuda:0')
i:2,Pre:tensor([2], device='cuda:0')
i:3,Pre:tensor([3], device='cuda:0')
i:4,Pre:tensor([4], device='cuda:0')
i:5,Pre:tensor([5], device='cuda:0')
i:6,Pre:tensor([6], device='cuda:0')
i:7,Pre:tensor([7], device='cuda:0')
i:8,Pre:tensor([8], device='cuda:0')
i:9,Pre:tensor([9], device='cuda:0')
```

Looking at the images with the highest responses, the outlines of the digits 0-9 can be faintly made out (see the saved random_regonize*.jpg images; figures not reproduced here).

3 Make a correctly classified image be misclassified

The steps are as follows:

- Use an image of a specific digit, e.g. the 7 above (the initialization differs from before: a specific image instead of random noise).
- Use the network trained above and keep its parameters fixed.
- The last layer has 10 outputs (roughly, class scores), so set the loss to the negative score of the chosen class, which is convenient for gradient descent.
- Gradient descent drives the negative score down, i.e. the corresponding class score up.
- What is actually being modified is the input image, so the network becomes increasingly convinced that the image shows the chosen digit.

In short: take the image of the digit 7 as input, freeze the network parameters, keep raising the score of some other digit by changing the input image, and the network ends up fooled.

All that is needed is to change this line from Section 2

```python
random_init, num_init = 1, 0  # the type of initialization
```

to

```python
random_init, num_init = 0, 1  # the type of initialization
```

The 7 then successfully fools the network into every one of the ten classes (see the saved real7_regonize*.jpg images; figures not reproduced here).

Note that in this case the loss additionally includes F.mse_loss(img_var, img), which makes the score of the chosen digit grow while keeping the image close, pixel-wise, to the original 7.
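To check how small the perturbation actually is, here is a short verification sketch (my addition; it assumes the Section 2 loop was re-run with random_init, num_init = 0, 1, so that true_nums[i] holds the 7 perturbed toward class i):

```python
# Perturbation-check sketch (assumes true_nums from the targeted run above).
orig = test_dataset.data[0].view(1, 1, 28, 28).float() / 255
for target in range(10):
    adv = true_nums[target][None]                       # shape (1, 1, 28, 28)
    diff = (adv - orig).abs()
    pred = model(adv.to(device)).argmax(dim=1).item()
    print('target {}: predicted {}, max|diff| = {:.3f}, mean|diff| = {:.4f}'.format(
        target, pred, diff.max().item(), diff.mean().item()))
```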