
This post covers implementations of several attention mechanisms: EA, MHSA, SK, DA, and EPSA.

[Deep Learning] Attention Mechanisms (Part 1)

[Deep Learning] Attention Mechanisms (Part 3)

Contents

1. EA (External Attention)

2. Multi-Head Self-Attention

3. SK (Selective Kernel Networks)

4. DA (Dual Attention)

5. EPSA (Efficient Pyramid Squeeze Attention)


1. EA (External Attention)

EA (External Attention) captures global spatial information using two lightweight linear layers that act as shared, learnable external memories. Paper: "Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks" (paper link).

[Figure: External Attention structure]

The code is as follows (code link):

import numpy as np
import torch
from torch import nn
from torch.nn import init
# Added so the snippet runs on its own: math for the weight init, F for softmax/relu,
# _BatchNorm for the init branch below.
import math
import torch.nn.functional as F
from torch.nn.modules.batchnorm import _BatchNorm

class External_attention(nn.Module):
    '''
    Arguments:
        c (int): The input and output channel number.
    '''

    def __init__(self, c):
        super(External_attention, self).__init__()
        self.conv1 = nn.Conv2d(c, c, 1)

        self.k = 64
        self.linear_0 = nn.Conv1d(c, self.k, 1, bias=False)

        self.linear_1 = nn.Conv1d(self.k, c, 1, bias=False)
        self.linear_1.weight.data = self.linear_0.weight.data.permute(1, 0, 2)

        # The original snippet used an externally supplied norm_layer here;
        # BatchNorm2d is substituted so the module is self-contained.
        self.conv2 = nn.Sequential(
            nn.Conv2d(c, c, 1, bias=False),
            nn.BatchNorm2d(c))

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.Conv1d):
                n = m.kernel_size[0] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, _BatchNorm):
                m.weight.data.fill_(1)
                if m.bias is not None:
                    m.bias.data.zero_()

    def forward(self, x):
        idn = x
        x = self.conv1(x)

        b, c, h, w = x.size()
        n = h * w
        x = x.view(b, c, h * w)  # b * c * n

        attn = self.linear_0(x)  # b, k, n
        attn = F.softmax(attn, dim=-1)  # b, k, n
        attn = attn / (1e-9 + attn.sum(dim=1, keepdim=True))  # b, k, n
        x = self.linear_1(attn)  # b, c, n

        x = x.view(b, c, h, w)
        x = self.conv2(x)
        x = x + idn
        x = F.relu(x)
        return x
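A quick shape check (my own example, not part of the original post; it assumes the External_attention class above is in scope):

x = torch.randn(2, 64, 32, 32)   # batch of 2, 64 channels, 32x32 feature map
ea = External_attention(c=64)
y = ea(x)
print(y.shape)                   # torch.Size([2, 64, 32, 32]) -- same shape as the input

The module is shape-preserving, so it can be inserted after any convolutional stage.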

2. Multi-Head Self-Attention

The classic attention mechanism and the cornerstone of the Transformer. Paper: "Attention Is All You Need" (paper link).

[Figure: multi-head self-attention structure]

The code is as follows (code link):

import numpy as np
import torch
from torch import nn
from torch.nn import init

class ScaledDotProductAttention(nn.Module):
    '''
    Scaled dot-product attention
    '''

    def __init__(self, d_model, d_k, d_v, h, dropout=.1):
        '''
        :param d_model: Output dimensionality of the model
        :param d_k: Dimensionality of queries and keys
        :param d_v: Dimensionality of values
        :param h: Number of heads
        '''
        super(ScaledDotProductAttention, self).__init__()
        self.fc_q = nn.Linear(d_model, h * d_k)
        self.fc_k = nn.Linear(d_model, h * d_k)
        self.fc_v = nn.Linear(d_model, h * d_v)
        self.fc_o = nn.Linear(h * d_v, d_model)
        self.dropout = nn.Dropout(dropout)

        self.d_model = d_model
        self.d_k = d_k
        self.d_v = d_v
        self.h = h

        self.init_weights()

    def init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                init.normal_(m.weight, std=0.001)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, queries, keys, values, attention_mask=None, attention_weights=None):
        '''
        Computes
        :param queries: Queries (b_s, nq, d_model)
        :param keys: Keys (b_s, nk, d_model)
        :param values: Values (b_s, nk, d_model)
        :param attention_mask: Mask over attention values (b_s, h, nq, nk). True indicates masking.
        :param attention_weights: Multiplicative weights for attention values (b_s, h, nq, nk).
        :return:
        '''
        b_s, nq = queries.shape[:2]
        nk = keys.shape[1]

        q = self.fc_q(queries).view(b_s, nq, self.h, self.d_k).permute(0, 2, 1, 3)  # (b_s, h, nq, d_k)
        k = self.fc_k(keys).view(b_s, nk, self.h, self.d_k).permute(0, 2, 3, 1)  # (b_s, h, d_k, nk)
        v = self.fc_v(values).view(b_s, nk, self.h, self.d_v).permute(0, 2, 1, 3)  # (b_s, h, nk, d_v)

        att = torch.matmul(q, k) / np.sqrt(self.d_k)  # (b_s, h, nq, nk)
        if attention_weights is not None:
            att = att * attention_weights
        if attention_mask is not None:
            att = att.masked_fill(attention_mask, -np.inf)
        att = torch.softmax(att, -1)
        att = self.dropout(att)

        out = torch.matmul(att, v).permute(0, 2, 1, 3).contiguous().view(b_s, nq, self.h * self.d_v)  # (b_s, nq, h*d_v)
        out = self.fc_o(out)  # (b_s, nq, d_model)
        return out
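A minimal self-attention call (an illustrative sketch added here, assuming the class above is in scope); queries, keys, and values are all the same token sequence:

x = torch.randn(2, 49, 512)       # (batch, tokens, d_model), e.g. a flattened 7x7 feature map
mhsa = ScaledDotProductAttention(d_model=512, d_k=64, d_v=64, h=8)
out = mhsa(x, x, x)               # self-attention: the sequence attends to itself
print(out.shape)                  # torch.Size([2, 49, 512])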

3. SK (Selective Kernel Networks)

SK is a channel attention mechanism: it lets the network adaptively weight convolution branches with different kernel sizes. Paper: "Selective Kernel Networks" (paper link).

[Figure: SK unit structure]

The code is as follows (code link):

import numpy as np
import torch
from torch import nn
from torch.nn import init
from collections import OrderedDict

class SKAttention(nn.Module):

    def __init__(self, channel=512, kernels=[1, 3, 5, 7], reduction=16, group=1, L=32):
        super().__init__()
        self.d = max(L, channel // reduction)
        self.convs = nn.ModuleList([])
        for k in kernels:
            self.convs.append(
                nn.Sequential(OrderedDict([
                    ('conv', nn.Conv2d(channel, channel, kernel_size=k, padding=k // 2, groups=group)),
                    ('bn', nn.BatchNorm2d(channel)),
                    ('relu', nn.ReLU())
                ]))
            )
        self.fc = nn.Linear(channel, self.d)
        self.fcs = nn.ModuleList([])
        for i in range(len(kernels)):
            self.fcs.append(nn.Linear(self.d, channel))
        self.softmax = nn.Softmax(dim=0)

    def forward(self, x):
        bs, c, _, _ = x.size()
        conv_outs = []
        ### split
        for conv in self.convs:
            conv_outs.append(conv(x))
        feats = torch.stack(conv_outs, 0)  # k,bs,channel,h,w

        ### fuse
        U = sum(conv_outs)  # bs,c,h,w

        ### reduction channel
        S = U.mean(-1).mean(-1)  # bs,c
        Z = self.fc(S)  # bs,d

        ### calculate attention weight
        weights = []
        for fc in self.fcs:
            weight = fc(Z)
            weights.append(weight.view(bs, c, 1, 1))  # bs,channel
        attention_weights = torch.stack(weights, 0)  # k,bs,channel,1,1
        attention_weights = self.softmax(attention_weights)  # k,bs,channel,1,1

        ### fuse
        V = (attention_weights * feats).sum(0)
        return V
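Usage sketch (my addition, assuming the SKAttention class above): the block preserves both channel count and spatial size, so it drops straight into an existing backbone:

x = torch.randn(2, 512, 14, 14)
sk = SKAttention(channel=512, reduction=16)
y = sk(x)
print(y.shape)                    # torch.Size([2, 512, 14, 14])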

4. DA (Dual Attention)

DA combines channel attention and spatial (position) attention. Paper: "Dual Attention Network for Scene Segmentation" (paper link).

[Figure: Dual Attention structure]

The code (code link):

import numpy as np
import torch
from torch import nn
from torch.nn import init
# These two imports follow the module layout of the repository the snippet comes from.
from model.attention.SelfAttention import ScaledDotProductAttention
from model.attention.SimplifiedSelfAttention import SimplifiedScaledDotProductAttention

class PositionAttentionModule(nn.Module):

    def __init__(self, d_model=512, kernel_size=3, H=7, W=7):
        super().__init__()
        self.cnn = nn.Conv2d(d_model, d_model, kernel_size=kernel_size, padding=(kernel_size - 1) // 2)
        self.pa = ScaledDotProductAttention(d_model, d_k=d_model, d_v=d_model, h=1)

    def forward(self, x):
        bs, c, h, w = x.shape
        y = self.cnn(x)
        y = y.view(bs, c, -1).permute(0, 2, 1)  # bs,h*w,c
        y = self.pa(y, y, y)  # bs,h*w,c
        return y

class ChannelAttentionModule(nn.Module):

    def __init__(self, d_model=512, kernel_size=3, H=7, W=7):
        super().__init__()
        self.cnn = nn.Conv2d(d_model, d_model, kernel_size=kernel_size, padding=(kernel_size - 1) // 2)
        self.pa = SimplifiedScaledDotProductAttention(H * W, h=1)

    def forward(self, x):
        bs, c, h, w = x.shape
        y = self.cnn(x)
        y = y.view(bs, c, -1)  # bs,c,h*w
        y = self.pa(y, y, y)  # bs,c,h*w
        return y

class DAModule(nn.Module):

    def __init__(self, d_model=512, kernel_size=3, H=7, W=7):
        super().__init__()
        # Pass the constructor arguments through (the original snippet hard-coded 512/3/7/7 here).
        self.position_attention_module = PositionAttentionModule(d_model=d_model, kernel_size=kernel_size, H=H, W=W)
        self.channel_attention_module = ChannelAttentionModule(d_model=d_model, kernel_size=kernel_size, H=H, W=W)

    def forward(self, input):
        bs, c, h, w = input.shape
        p_out = self.position_attention_module(input)
        c_out = self.channel_attention_module(input)
        p_out = p_out.permute(0, 2, 1).view(bs, c, h, w)
        c_out = c_out.view(bs, c, h, w)
        return p_out + c_out
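A shape sketch (my own example; it assumes the two imported attention modules are available). The input's channel count must equal d_model and its spatial size must equal H x W, because the channel branch builds an (H*W)-dimensional attention:

x = torch.randn(2, 512, 7, 7)     # channels = d_model, spatial size = H x W
da = DAModule(d_model=512, kernel_size=3, H=7, W=7)
y = da(x)
print(y.shape)                    # torch.Size([2, 512, 7, 7])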

5. EPSA (Efficient Pyramid Squeeze Attention)

EPSA splits the channels into multi-scale convolution branches and re-weights each branch with an SE-style attention vector. Paper: "EPSANet: An Efficient Pyramid Squeeze Attention Block on Convolutional Neural Network" (paper link).

[Figure: EPSA (PSA) module structure]

The code is as follows (code link):

import torch  # added: torch.cat is used in PSAModule.forward
import torch.nn as nn

class SEWeightModule(nn.Module):

    def __init__(self, channels, reduction=16):
        super(SEWeightModule, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1, padding=0)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1, padding=0)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        out = self.avg_pool(x)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.fc2(out)
        weight = self.sigmoid(out)
        return weight

def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1, groups=1):
    """standard convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
                     padding=padding, dilation=dilation, groups=groups, bias=False)

def conv1x1(in_planes, out_planes, stride=1):
    """1x1 convolution"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)

class PSAModule(nn.Module):

    def __init__(self, inplans, planes, conv_kernels=[3, 5, 7, 9], stride=1, conv_groups=[1, 4, 8, 16]):
        super(PSAModule, self).__init__()
        self.conv_1 = conv(inplans, planes // 4, kernel_size=conv_kernels[0], padding=conv_kernels[0] // 2,
                           stride=stride, groups=conv_groups[0])
        self.conv_2 = conv(inplans, planes // 4, kernel_size=conv_kernels[1], padding=conv_kernels[1] // 2,
                           stride=stride, groups=conv_groups[1])
        self.conv_3 = conv(inplans, planes // 4, kernel_size=conv_kernels[2], padding=conv_kernels[2] // 2,
                           stride=stride, groups=conv_groups[2])
        self.conv_4 = conv(inplans, planes // 4, kernel_size=conv_kernels[3], padding=conv_kernels[3] // 2,
                           stride=stride, groups=conv_groups[3])
        self.se = SEWeightModule(planes // 4)
        self.split_channel = planes // 4
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        batch_size = x.shape[0]
        x1 = self.conv_1(x)
        x2 = self.conv_2(x)
        x3 = self.conv_3(x)
        x4 = self.conv_4(x)

        feats = torch.cat((x1, x2, x3, x4), dim=1)
        feats = feats.view(batch_size, 4, self.split_channel, feats.shape[2], feats.shape[3])

        x1_se = self.se(x1)
        x2_se = self.se(x2)
        x3_se = self.se(x3)
        x4_se = self.se(x4)

        x_se = torch.cat((x1_se, x2_se, x3_se, x4_se), dim=1)
        attention_vectors = x_se.view(batch_size, 4, self.split_channel, 1, 1)
        attention_vectors = self.softmax(attention_vectors)
        feats_weight = feats * attention_vectors
        for i in range(4):
            x_se_weight_fp = feats_weight[:, i, :, :]
            if i == 0:
                out = x_se_weight_fp
            else:
                out = torch.cat((x_se_weight_fp, out), 1)
        return out
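Usage sketch (my own example): with the default conv_groups=[1, 4, 8, 16], both inplans and planes // 4 need to be divisible by 16 for the grouped convolutions to be valid:

x = torch.randn(2, 256, 28, 28)
psa = PSAModule(inplans=256, planes=256)
y = psa(x)
print(y.shape)                    # torch.Size([2, 256, 28, 28])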

