Study reference: a PyTorch implementation of Deep Dream, https://github.com/duc0/deep-dream-in-pytorch

Contents

1 Principle
2 VGG model structure
3 Complete code
4 Output
5 Ablation experiments
6 torch.norm()

1 Principle

The idea behind Deep Dream is quite similar to that of 【Pytorch】Visualization of Feature Maps（1）——Maximize Filter: Deep Dream tries to make the activations of an entire layer as large as possible, whereas the latter maximizes the activation of a single filter within a layer.

(The original post includes a diagram of the recursive multi-scale procedure; it draws only one level of recursion, followed by a three-level example. The figures are not reproduced here.)

On the CNN side, `deepDream` picks one layer of the network, freezes the network weights, enables gradients on the input image, and iteratively minimizes the negative L2 norm of that layer's output (which is equivalent to maximizing its activations), thereby modifying the input image:

```python
loss = -out.norm()  # push negative values more negative and positive values more positive
```

This is the core line: the loss is the negative L2 norm of the chosen layer's output. Amplifying the responses (negatives become more negative, positives more positive) makes the L2 norm larger, and therefore the loss smaller.

2 VGG model structure

```
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)   # LAYER_ID 28
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)   # LAYER_ID 30
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
```

The two annotated ReLU layers are the targets used later: feature layer (27) corresponds to LAYER_ID = 28 and feature layer (29) to LAYER_ID = 30. The offset of one comes from `list(model.features.modules())`, whose element 0 is the whole Sequential container.
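
To double-check how the feature-layer indices above map to LAYER_ID, you can rebuild the same module list that the complete code below uses and print individual entries. A minimal sketch (the variable names here are only for illustration):

```python
from torchvision import models

vgg = models.vgg16(pretrained=True)

# modules[0] is the whole nn.Sequential, so feature layer (k) sits at index k + 1
modules = list(vgg.features.modules())
print(modules[28])  # ReLU(inplace=True) -> feature layer (27), i.e. LAYER_ID = 28
print(modules[30])  # ReLU(inplace=True) -> feature layer (29), i.e. LAYER_ID = 30
```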

3 Complete code

The complete code is as follows:

```python
# imports
import torch
from torchvision import models, transforms
import torch.optim as optim
import numpy as np
from matplotlib import pyplot
from PIL import Image, ImageFilter, ImageChops

# hyperparameters
CUDA_ENABLED = True
LAYER_ID = 28        # the layer to maximize the activations through
NUM_ITERATIONS = 5   # number of iterations to update the input image with the layer's gradient
LR = 0.2

"""
we downscale the image recursively, apply the deep dream computation, scale up, and then
blend with the original image
"""
NUM_DOWNSCALES = 20
BLEND_ALPHA = 0.5


# model, normalization constants and image transforms
class DeepDream:
    def __init__(self, image):
        self.image = image
        self.model = models.vgg16(pretrained=True)
        # print(self.model)
        if CUDA_ENABLED:
            self.model = self.model.cuda()
        self.modules = list(self.model.features.modules())

        # vgg16 uses 224x224 images
        imgsize = 224
        self.mean = [0.485, 0.456, 0.406]
        self.std = [0.229, 0.224, 0.225]
        self.normalise = transforms.Normalize(mean=self.mean, std=self.std)
        self.transformPreprocess = transforms.Compose([
            transforms.Resize((imgsize, imgsize)),
            transforms.ToTensor(),
            self.normalise
        ])
        self.tensorMean = torch.Tensor(self.mean)
        if CUDA_ENABLED:
            self.tensorMean = self.tensorMean.cuda()
        self.tensorStd = torch.Tensor(self.std)
        if CUDA_ENABLED:
            self.tensorStd = self.tensorStd.cuda()

    def toimage(self, img):
        # undo the normalization so the tensor can be displayed as an image
        return img * self.tensorStd + self.tensorMean

    def deepDream(self, image, layer, iterations, lr):
        """
        Core routine: forward the image through the network up to `layer`, then update
        the image for `iterations` steps so that the layer's activations are maximized.
        """
        transformed = self.transformPreprocess(image).unsqueeze(0)  # preprocessing: every input is resized to 224x224
        if CUDA_ENABLED:
            transformed = transformed.cuda()
        input_img = torch.autograd.Variable(transformed, requires_grad=True)
        self.model.zero_grad()
        optimizer = optim.Adam([input_img.requires_grad_()], lr=lr)
        for _ in range(iterations):
            optimizer.zero_grad()
            out = input_img
            for layerid in range(layer):  # layer = 28
                out = self.modules[layerid + 1](out)  # self.modules[28] = ReLU(inplace=True)
            # out: torch.Size([1, 512, 14, 14])
            loss = -out.norm()  # negative L2 norm: minimizing it maximizes the layer activations
            loss.backward()
            optimizer.step()
            # plain gradient-ascent alternative:
            # input_img.data = input_img.data + lr * input_img.grad.data
        # remove the batch dimension: torch.Size([1, 3, 224, 224]) -> torch.Size([3, 224, 224])
        input_img = input_img.data.squeeze()
        # c,h,w -> h,w,c for visualization
        input_img.transpose_(0, 1)  # torch.Size([224, 3, 224])
        input_img.transpose_(1, 2)  # torch.Size([224, 224, 3])
        input_img = self.toimage(input_img)  # torch.Size([224, 224, 3])
        if CUDA_ENABLED:
            input_img = input_img.cpu()
        input_img = np.clip(input_img, 0, 1)
        return Image.fromarray(np.uint8(input_img * 255))

    def deepDreamRecursive(self, image, layer, iterations, lr, num_downscales):
        """
        Apply deepDream at multiple scales: recurse on a blurred, downscaled copy,
        upscale the result and blend it back into the current image.
        """
        if num_downscales > 0:
            # scale down the image
            image_gauss = image.filter(ImageFilter.GaussianBlur(2))  # Gaussian blur
            half_size = (int(image.size[0] / 2), int(image.size[1] / 2))  # halve width and height
            if half_size[0] == 0 or half_size[1] == 0:
                half_size = image.size
            image_half = image_gauss.resize(half_size, Image.ANTIALIAS)  # on Pillow >= 10 use Image.LANCZOS instead

            # run deepDreamRecursive on the scaled-down image
            image_half = self.deepDreamRecursive(image_half, layer, iterations, lr, num_downscales - 1)
            print('Num Downscales: {}'.format(num_downscales))
            print('Half Image', np.shape(image_half))
            # pyplot.imshow(image_half)
            # pyplot.show()

            # scale the result image back up to the original size
            image_large = image_half.resize(image.size, Image.ANTIALIAS)
            print('Large Image', np.shape(image_large))
            # pyplot.imshow(image_large)
            # pyplot.show()

            # blend the two images
            image = ImageChops.blend(image, image_large, BLEND_ALPHA)
            print('Blend Image', np.shape(image))
            # pyplot.imshow(image)
            # pyplot.show()
        img_result = self.deepDream(image, layer, iterations, lr)  # iteratively modify the input image to maximize activations
        print(np.shape(img_result))
        img_result = img_result.resize(image.size)
        print(np.shape(img_result))
        # pyplot.imshow(img_result)
        # pyplot.show()
        return img_result

    def deepDreamProcess(self):
        return self.deepDreamRecursive(self.image, LAYER_ID, NUM_ITERATIONS, LR, NUM_DOWNSCALES)


if __name__ == '__main__':
    img = Image.open('cat.png').convert('RGB')
    # generate the deep dream image
    img_deep_dream = DeepDream(img).deepDreamProcess()
    pyplot.title('Deep dream images')
    pyplot.imshow(img_deep_dream)
    pyplot.show()
```
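
The commented-out update line in deepDream hints at a simpler variant: plain gradient ascent on the input image, with no Adam optimizer and no multi-scale recursion. Below is a minimal single-scale sketch of that idea (illustrative only; the function name dream_once and its default arguments are mine, not part of the referenced repo, and it runs on CPU for simplicity):

```python
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image


def dream_once(pil_img, layer_id=28, iterations=20, lr=0.05):
    model = models.vgg16(pretrained=True).eval()
    modules = list(model.features.modules())
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    img = preprocess(pil_img).unsqueeze(0).requires_grad_(True)
    for _ in range(iterations):
        if img.grad is not None:
            img.grad.zero_()
        out = img
        for i in range(layer_id):
            out = modules[i + 1](out)   # forward through the first `layer_id` feature layers
        loss = -out.norm()              # negative L2 norm of the chosen layer's activations
        loss.backward()
        with torch.no_grad():
            img -= lr * img.grad        # descending on -norm is ascending on the norm
    # undo normalization and convert back to a PIL image
    result = img.detach().squeeze().permute(1, 2, 0)  # c,h,w -> h,w,c
    result = result * torch.tensor([0.229, 0.224, 0.225]) + torch.tensor([0.485, 0.456, 0.406])
    return Image.fromarray(np.uint8(np.clip(result.numpy(), 0, 1) * 255))


# usage (same cat.png as in the main script):
# dream_once(Image.open('cat.png').convert('RGB')).save('dream.png')
```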

4 Output

Console output:

```
(224, 224, 3)
(1, 1, 3)
Num Downscales: 1
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 2
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 3
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 4
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 5
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 6
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 7
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 8
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 9
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 10
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 11
Half Image (1, 1, 3)
Large Image (1, 1, 3)
Blend Image (1, 1, 3)
(224, 224, 3)
(1, 1, 3)
Num Downscales: 12
Half Image (1, 1, 3)
Large Image (2, 2, 3)
Blend Image (2, 2, 3)
(224, 224, 3)
(2, 2, 3)
Num Downscales: 13
Half Image (2, 2, 3)
Large Image (5, 5, 3)
Blend Image (5, 5, 3)
(224, 224, 3)
(5, 5, 3)
Num Downscales: 14
Half Image (5, 5, 3)
Large Image (11, 11, 3)
Blend Image (11, 11, 3)
(224, 224, 3)
(11, 11, 3)
Num Downscales: 15
Half Image (11, 11, 3)
Large Image (23, 23, 3)
Blend Image (23, 23, 3)
(224, 224, 3)
(23, 23, 3)
Num Downscales: 16
Half Image (23, 23, 3)
Large Image (47, 47, 3)
Blend Image (47, 47, 3)
(224, 224, 3)
(47, 47, 3)
Num Downscales: 17
Half Image (47, 47, 3)
Large Image (94, 94, 3)
Blend Image (94, 94, 3)
(224, 224, 3)
(94, 94, 3)
Num Downscales: 18
Half Image (94, 94, 3)
Large Image (188, 188, 3)
Blend Image (188, 188, 3)
(224, 224, 3)
(188, 188, 3)
Num Downscales: 19
Half Image (188, 188, 3)
Large Image (376, 376, 3)
Blend Image (376, 376, 3)
(224, 224, 3)
(376, 376, 3)
Num Downscales: 20
Half Image (376, 376, 3)
Large Image (753, 753, 3)
Blend Image (753, 753, 3)
(224, 224, 3)
(753, 753, 3)
```

Partial results: the original post shows the dreamed images for Num Downscales 15, 16, 17, 18, 19 and 20 (not reproduced here).

5 Ablation experiments

Settings tried in the ablation (the corresponding result images are in the original post and are not reproduced here):

NUM_DOWNSCALES = 50
NUM_ITERATIONS = 10
LAYER_ID = 23
LAYER_ID = 30

6 torch.norm()

torch.norm() computes the norm of a tensor. Given an input tensor x and a scalar p, torch.norm(x, p) returns the p-norm taken over all elements of x (p defaults to 2).

Besides the scalar p, torch.norm() also accepts:

dim: the dimension(s) along which to compute the norm; by default the norm is taken over all elements.
keepdim: if True, the reduced dimension(s) are kept with size 1, so the output has the same number of dimensions as the input; otherwise they are removed from the output.
out: an optional output tensor for the result.

Reference: 《PyTorch中torch.norm函数详解》

```python
import torch

x = torch.tensor([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12]], dtype=torch.float32)
print(x.norm())
print(x.norm(1))
print(x.norm(2))
```

Output:

```
tensor(25.4951)
tensor(78.)
tensor(25.4951)
```
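
The dim and keepdim arguments described above are not exercised by that snippet. Here is a small follow-up sketch with the same x (the printed values are approximate; note that recent PyTorch versions recommend torch.linalg.norm over torch.norm):

```python
import torch

x = torch.tensor([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12]], dtype=torch.float32)

# 2-norm of each row (reduce over dim=1): roughly tensor([ 5.4772, 13.1909, 21.1187])
print(x.norm(2, dim=1))

# keepdim=True keeps the reduced axis with size 1, so the shape is (3, 1) instead of (3,)
print(x.norm(2, dim=1, keepdim=True).shape)  # torch.Size([3, 1])
```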