Spatial Pyramid Pooling Improvements: SPP / SPPF / SimSPPF / ASPP / RFB / SPPCSPC / SPPFCSPC / SPPELAN — a study of SPP and its variants

马肤


Abstract: This article surveys improvements to spatial pyramid pooling, including SPP (Spatial Pyramid Pooling), SPPF (the faster SPP variant), SimSPPF (simplified SPPF), and ASPP (Atrous Spatial Pyramid Pooling). By redesigning the pooling stage, these modules improve the efficiency and quality of feature extraction. Variants such as RFB (Receptive Field Block) and SPPCSPC further boost model performance, delivering more efficient feature extraction and more accurate predictions, and newer designs such as SPPELAN have also shown good results in practice.

🌟 For more tutorials on the YOLO family of algorithms, feel free to subscribe to my columns 🌟

If your fundamentals are weak, I recommend reading 《目标检测蓝皮书》 (the Object Detection Blue Book) 📘, which covers a wealth of practical object-detection knowledge and is an ideal way to get up to speed quickly.

To learn how to train and improve the YOLOv5 and YOLOv7 algorithms, check out the column 《YOLOv5/v7 改进实战》🌟. It contains a rich set of practical YOLO tutorials designed specifically for readers who want to modify YOLO. The column has surpassed 600k+ reads 🚀 and is regarded as one of the most classic tutorials on the web; every improvement comes with detailed step-by-step instructions!

The 《YOLOv5/v7 进阶实战》🏅 column is a more advanced follow-up to 《YOLOv5/v7 改进实战》🌟. Besides a large number of cutting-edge improvements, it also includes several step-by-step deployment and compression tutorials, suitable for both short papers and theses!

For YOLOv8 tutorials, follow the column 《YOLOv8改进实战》🍀. This is my newest, carefully designed column, updated in step with the official YOLOv8 project and focused on the latest improvements, covering detection, classification, segmentation, and keypoint tasks!


I have recently been posting video versions of these explanations on Bilibili; if you are interested, follow my Bilibili homepage.


Table of Contents

    • 1 Principles
      • 1.1 SPP (Spatial Pyramid Pooling)
      • 1.2 SPPF (Spatial Pyramid Pooling - Fast)
      • 1.3 SimSPPF (Simplified SPPF)
      • 1.4 ASPP (Atrous Spatial Pyramid Pooling)
      • 1.5 RFB (Receptive Field Block)
      • 1.6 SPPCSPC
      • 1.7 SPPFCSPC🍀
      • 1.8 SPPELAN
    • 2 Parameter comparison
    • 3 How to add these modules
    • 4 Issue
    • More of my hands-on YOLOv5 content🍀🌟🚀

1 Principles

        1.1 SPP(Spatial Pyramid Pooling)

The SPP module was proposed by Kaiming He et al. in the paper 《Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition》 (2015).

SPP stands for Spatial Pyramid Pooling; it was designed mainly to solve two problems:

1. It avoids the image distortion caused by cropping or warping image regions to a fixed size;
2. It eliminates the CNN's repeated feature extraction over image regions, greatly speeding up proposal generation and saving computation cost.

(Figures 1-2: SPP structure diagrams)

        # Note: this and the following module snippets are meant to live in YOLOv5's models/common.py,
        # which already provides `import warnings`, `import torch`, `import torch.nn as nn` and the `Conv` block.
        class SPP(nn.Module):
            # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
            def __init__(self, c1, c2, k=(5, 9, 13)):
                super().__init__()
                c_ = c1 // 2  # hidden channels
                self.cv1 = Conv(c1, c_, 1, 1)
                self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
                self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
            def forward(self, x):
                x = self.cv1(x)
                with warnings.catch_warnings():
                    warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
                    return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
        

        1.2 SPPF(Spatial Pyramid Pooling - Fast)

This module was proposed by YOLOv5 author Glenn Jocher based on SPP. It is much faster than SPP, hence the name SPP-Fast: three chained k=5 max-pools reproduce the receptive fields of SPP's parallel k=5/9/13 pools, so the two are numerically equivalent (see the check after the code).

(Figure 3: SPPF structure diagram)

        class SPPF(nn.Module):
            # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
            def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
                super().__init__()
                c_ = c1 // 2  # hidden channels
                self.cv1 = Conv(c1, c_, 1, 1)
                self.cv2 = Conv(c_ * 4, c2, 1, 1)
                self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            def forward(self, x):
                x = self.cv1(x)
                with warnings.catch_warnings():
                    warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
                    y1 = self.m(x)
                    y2 = self.m(y1)
                    return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
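
To make the "equivalent to SPP(k=(5, 9, 13))" comment concrete, here is a minimal check sketch. It assumes SPP, SPPF and YOLOv5's Conv are all defined in the same file (as in models/common.py); once the 1x1 conv weights are shared, the two modules produce identical outputs:

import torch

spp = SPP(64, 64)            # parallel pools, k = (5, 9, 13)
sppf = SPPF(64, 64, k=5)     # three chained k = 5 pools
sppf.cv1.load_state_dict(spp.cv1.state_dict())  # share the same 1x1 conv + BN weights
sppf.cv2.load_state_dict(spp.cv2.state_dict())
spp.eval(); sppf.eval()

x = torch.randn(1, 64, 32, 32)
with torch.no_grad():
    print(torch.allclose(spp(x), sppf(x)))  # expected: True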
        

        1.3 SimSPPF(Simplified SPPF)

This module comes from Meituan's YOLOv6. It differs from SPPF essentially only in the activation function (ReLU instead of SiLU); in a quick test, a single ConvBNReLU block ran about 18% faster than a ConvBNSiLU block (a rough timing sketch is included after the code).

(Figure 4: SimSPPF structure diagram)

        class SimConv(nn.Module):
            '''Normal Conv with ReLU activation'''
            def __init__(self, in_channels, out_channels, kernel_size, stride, groups=1, bias=False):
                super().__init__()
                padding = kernel_size // 2
                self.conv = nn.Conv2d(
                    in_channels,
                    out_channels,
                    kernel_size=kernel_size,
                    stride=stride,
                    padding=padding,
                    groups=groups,
                    bias=bias,
                )
                self.bn = nn.BatchNorm2d(out_channels)
                self.act = nn.ReLU()
            def forward(self, x):
                return self.act(self.bn(self.conv(x)))
            def forward_fuse(self, x):
                return self.act(self.conv(x))
        class SimSPPF(nn.Module):
            '''Simplified SPPF with ReLU activation'''
            def __init__(self, in_channels, out_channels, kernel_size=5):
                super().__init__()
                c_ = in_channels // 2  # hidden channels
                self.cv1 = SimConv(in_channels, c_, 1, 1)
                self.cv2 = SimConv(c_ * 4, out_channels, 1, 1)
                self.m = nn.MaxPool2d(kernel_size=kernel_size, stride=1, padding=kernel_size // 2)
            def forward(self, x):
                x = self.cv1(x)
                with warnings.catch_warnings():
                    warnings.simplefilter('ignore')
                    y1 = self.m(x)
                    y2 = self.m(y1)
                    return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
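
As a rough illustration of the speed claim above, here is a CPU micro-benchmark sketch. SimConv is the ReLU block defined above and Conv is YOLOv5's SiLU block (assumed importable from the same file); the input size is arbitrary and the absolute numbers depend entirely on hardware:

import time
import torch

def bench(block, x, iters=200, warmup=10):
    # average forward latency in milliseconds (CPU; add torch.cuda.synchronize() for GPU timing)
    block.eval()
    with torch.no_grad():
        for _ in range(warmup):
            block(x)
        t0 = time.perf_counter()
        for _ in range(iters):
            block(x)
    return (time.perf_counter() - t0) / iters * 1e3

x = torch.randn(1, 256, 40, 40)
relu_block = SimConv(256, 256, kernel_size=3, stride=1)  # ConvBNReLU
silu_block = Conv(256, 256, 3, 1)                        # ConvBNSiLU (YOLOv5 Conv)
print(f"ConvBNReLU: {bench(relu_block, x):.3f} ms   ConvBNSiLU: {bench(silu_block, x):.3f} ms")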
        

        1.4 ASPP(Atrous Spatial Pyramid Pooling)

Inspired by SPP, the semantic segmentation model DeepLabv2 introduced the ASPP (Atrous Spatial Pyramid Pooling) module, which applies several parallel atrous (dilated) convolution layers with different sampling rates. The features extracted at each rate are processed in separate branches and then fused to produce the final result. By using different dilation rates, the module builds kernels with different receptive fields to capture multi-scale object information. The structure is fairly simple, as shown below:

(Figure 5: ASPP structure diagram)

ASPP was proposed in DeepLab, and later DeepLab versions refined it (adding BN layers, depthwise-separable convolutions, etc.), but the basic idea has not changed.

        # ASPP without BN (minimal version); requires: import torch, import torch.nn as nn, import torch.nn.functional as F
        class ASPP(nn.Module):
            def __init__(self, in_channel=512, out_channel=256):
                super(ASPP, self).__init__()
                self.mean = nn.AdaptiveAvgPool2d((1, 1))  # (1,1)means ouput_dim
                self.conv = nn.Conv2d(in_channel,out_channel, 1, 1)
                self.atrous_block1 = nn.Conv2d(in_channel, out_channel, 1, 1)
                self.atrous_block6 = nn.Conv2d(in_channel, out_channel, 3, 1, padding=6, dilation=6)
                self.atrous_block12 = nn.Conv2d(in_channel, out_channel, 3, 1, padding=12, dilation=12)
                self.atrous_block18 = nn.Conv2d(in_channel, out_channel, 3, 1, padding=18, dilation=18)
                self.conv_1x1_output = nn.Conv2d(out_channel * 5, out_channel, 1, 1)
            def forward(self, x):
                size = x.shape[2:]
                image_features = self.mean(x)
                image_features = self.conv(image_features)
                image_features = F.interpolate(image_features, size=size, mode='bilinear', align_corners=False)  # F.upsample is deprecated
                atrous_block1 = self.atrous_block1(x)
                atrous_block6 = self.atrous_block6(x)
                atrous_block12 = self.atrous_block12(x)
                atrous_block18 = self.atrous_block18(x)
                net = self.conv_1x1_output(torch.cat([image_features, atrous_block1, atrous_block6,
                                                      atrous_block12, atrous_block18], dim=1))
                return net
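
A small sketch of the dilation arithmetic used above: a k x k kernel with dilation d covers an effective window of d*(k-1)+1 pixels, so the 3x3 branches with rates 6/12/18 see 13x13, 25x25 and 37x37 regions, and setting padding equal to the dilation keeps the spatial size unchanged:

import torch
import torch.nn as nn

x = torch.randn(1, 512, 20, 20)
for d in (1, 6, 12, 18):
    conv = nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=d, dilation=d)
    eff = d * (3 - 1) + 1  # effective kernel size
    print(f"rate={d:2d}  effective kernel={eff}x{eff}  output={tuple(conv(x).shape)}")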
        

        1.5 RFB(Receptive Field Block)

The RFB module was proposed in 《ECCV2018: Receptive Field Block Net for Accurate and Fast Object Detection》. Its starting point is to mimic the receptive fields of the human visual system in order to strengthen the network's feature extraction. Structurally, RFB borrows from Inception, mainly adding dilated convolutions on top of the Inception design to effectively enlarge the receptive field.

(Figures 6-7: RFB and RFB-s structure diagrams)

The architectures of RFB and RFB-s. RFB-s mimics the smaller pRFs of shallow human retinotopic maps by using more branches with smaller kernels.

        class BasicConv(nn.Module):
            def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1, groups=1, relu=True, bn=True):
                super(BasicConv, self).__init__()
                self.out_channels = out_planes
                if bn:
                    self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups, bias=False)
                    self.bn = nn.BatchNorm2d(out_planes, eps=1e-5, momentum=0.01, affine=True)
                    self.relu = nn.ReLU(inplace=True) if relu else None
                else:
                    self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups, bias=True)
                    self.bn = None
                    self.relu = nn.ReLU(inplace=True) if relu else None
            def forward(self, x):
                x = self.conv(x)
                if self.bn is not None:
                    x = self.bn(x)
                if self.relu is not None:
                    x = self.relu(x)
                return x
        class BasicRFB(nn.Module):
            def __init__(self, in_planes, out_planes, stride=1, scale=0.1, map_reduce=8, vision=1, groups=1):
                super(BasicRFB, self).__init__()
                self.scale = scale
                self.out_channels = out_planes
                inter_planes = in_planes // map_reduce
                self.branch0 = nn.Sequential(
                    BasicConv(in_planes, inter_planes, kernel_size=1, stride=1, groups=groups, relu=False),
                    BasicConv(inter_planes, 2 * inter_planes, kernel_size=(3, 3), stride=stride, padding=(1, 1), groups=groups),
                    BasicConv(2 * inter_planes, 2 * inter_planes, kernel_size=3, stride=1, padding=vision, dilation=vision, relu=False, groups=groups)
                )
                self.branch1 = nn.Sequential(
                    BasicConv(in_planes, inter_planes, kernel_size=1, stride=1, groups=groups, relu=False),
                    BasicConv(inter_planes, 2 * inter_planes, kernel_size=(3, 3), stride=stride, padding=(1, 1), groups=groups),
                    BasicConv(2 * inter_planes, 2 * inter_planes, kernel_size=3, stride=1, padding=vision + 2, dilation=vision + 2, relu=False, groups=groups)
                )
                self.branch2 = nn.Sequential(
                    BasicConv(in_planes, inter_planes, kernel_size=1, stride=1, groups=groups, relu=False),
                    BasicConv(inter_planes, (inter_planes // 2) * 3, kernel_size=3, stride=1, padding=1, groups=groups),
                    BasicConv((inter_planes // 2) * 3, 2 * inter_planes, kernel_size=3, stride=stride, padding=1, groups=groups),
                    BasicConv(2 * inter_planes, 2 * inter_planes, kernel_size=3, stride=1, padding=vision + 4, dilation=vision + 4, relu=False, groups=groups)
                )
                self.ConvLinear = BasicConv(6 * inter_planes, out_planes, kernel_size=1, stride=1, relu=False)
                self.shortcut = BasicConv(in_planes, out_planes, kernel_size=1, stride=stride, relu=False)
                self.relu = nn.ReLU(inplace=False)
            def forward(self, x):
                x0 = self.branch0(x)
                x1 = self.branch1(x)
                x2 = self.branch2(x)
                out = torch.cat((x0, x1, x2), 1)
                out = self.ConvLinear(out)
                short = self.shortcut(x)
                out = out * self.scale + short
                out = self.relu(out)
                return out
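
A quick usage sketch (toy shapes chosen for illustration, not the paper's setting): at stride=1, BasicRFB keeps the spatial resolution and maps in_planes to out_planes, so it can stand in for an SPP-type layer:

import torch

m = BasicRFB(in_planes=1024, out_planes=1024, stride=1)
y = m(torch.randn(1, 1024, 20, 20))
print(tuple(y.shape))  # expected: (1, 1024, 20, 20)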
        

        1.6 SPPCSPC

This is the SPP-style block used in YOLOv7. It performs better than SPPF, but its parameter count and computation cost are considerably higher.

(Figure 8: SPPCSPC structure diagram)

        class SPPCSPC(nn.Module):
            # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
            def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):
                super(SPPCSPC, self).__init__()
                c_ = int(2 * c2 * e)  # hidden channels
                self.cv1 = Conv(c1, c_, 1, 1)
                self.cv2 = Conv(c1, c_, 1, 1)
                self.cv3 = Conv(c_, c_, 3, 1)
                self.cv4 = Conv(c_, c_, 1, 1)
                self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
                self.cv5 = Conv(4 * c_, c_, 1, 1)
                self.cv6 = Conv(c_, c_, 3, 1)
                self.cv7 = Conv(2 * c_, c2, 1, 1)
            def forward(self, x):
                x1 = self.cv4(self.cv3(self.cv1(x)))
                y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1)))
                y2 = self.cv2(x)
                return self.cv7(torch.cat((y1, y2), dim=1))
        
        # Grouped SPPCSPC: with g=4 grouped convolutions the parameter count and FLOPs stay close to the plain baseline; effect on accuracy untested
        class SPPCSPC_group(nn.Module):
            def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):
                super(SPPCSPC_group, self).__init__()
                c_ = int(2 * c2 * e)  # hidden channels
                self.cv1 = Conv(c1, c_, 1, 1, g=4)
                self.cv2 = Conv(c1, c_, 1, 1, g=4)
                self.cv3 = Conv(c_, c_, 3, 1, g=4)
                self.cv4 = Conv(c_, c_, 1, 1, g=4)
                self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
                self.cv5 = Conv(4 * c_, c_, 1, 1, g=4)
                self.cv6 = Conv(c_, c_, 3, 1, g=4)
                self.cv7 = Conv(2 * c_, c2, 1, 1, g=4)
            def forward(self, x):
                x1 = self.cv4(self.cv3(self.cv1(x)))
                y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1)))
                y2 = self.cv2(x)
                return self.cv7(torch.cat((y1, y2), dim=1))
        

        1.7 SPPFCSPC🍀

Borrowing the idea behind SPPF, I optimized SPPCSPC into SPPFCSPC, which gains speed while keeping the receptive field unchanged. I showed this module to the YOLOv7 author and it was not rejected; the full reply is in Section 4 Issue.

This structure has since been adopted in YOLOv6 3.0 with very good results; see the YOLOv6 3.0 paper for detailed experiments.

(Figure 9: SPPFCSPC structure diagram)

        class SPPFCSPC(nn.Module):
            
            def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=5):
                super(SPPFCSPC, self).__init__()
                c_ = int(2 * c2 * e)  # hidden channels
                self.cv1 = Conv(c1, c_, 1, 1)
                self.cv2 = Conv(c1, c_, 1, 1)
                self.cv3 = Conv(c_, c_, 3, 1)
                self.cv4 = Conv(c_, c_, 1, 1)
                self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
                self.cv5 = Conv(4 * c_, c_, 1, 1)
                self.cv6 = Conv(c_, c_, 3, 1)
                self.cv7 = Conv(2 * c_, c2, 1, 1)
            def forward(self, x):
                x1 = self.cv4(self.cv3(self.cv1(x)))
                x2 = self.m(x1)
                x3 = self.m(x2)
                y1 = self.cv6(self.cv5(torch.cat((x1,x2,x3, self.m(x3)),1)))
                y2 = self.cv2(x)
                return self.cv7(torch.cat((y1, y2), dim=1))
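
Since SPPFCSPC only rearranges the parameter-free max-pool branch of SPPCSPC, the two modules have exactly the same number of parameters (which is why the whole-model counts in Section 2 are identical). A small sketch to verify this on a standalone module (channel numbers are illustrative):

def n_params(m):
    return sum(p.numel() for p in m.parameters())

a = SPPCSPC(1024, 1024)        # k = (5, 9, 13), parallel pools
b = SPPFCSPC(1024, 1024, k=5)  # chained k = 5 pools
print(n_params(a), n_params(b), n_params(a) == n_params(b))  # the two counts are equal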
        

        1.8 SPPELAN

This is the newest module added in YOLOv9. The idea is very simple; give it a try if you are interested (a small usage sketch follows the code).

(Figure 10: SPPELAN structure diagram)

        import numpy as np
        import torch.nn as nn
        import torch
        def autopad(k, p=None, d=1):  # kernel, padding, dilation
            # Pad to 'same' shape outputs
            if d > 1:
                k = (
                    d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]
                )  # actual kernel-size
            if p is None:
                p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
            return p
        class Conv(nn.Module):
            # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)
            default_act = nn.SiLU()  # default activation
            def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
                super().__init__()
                self.conv = nn.Conv2d(
                    c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False
                )
                self.bn = nn.BatchNorm2d(c2)
                self.act = (
                    self.default_act
                    if act is True
                    else act
                    if isinstance(act, nn.Module)
                    else nn.Identity()
                )
            def forward(self, x):
                return self.act(self.bn(self.conv(x)))
            def forward_fuse(self, x):
                return self.act(self.conv(x))
        class SP(nn.Module):
            def __init__(self, k=3, s=1):
                super(SP, self).__init__()
                self.m = nn.MaxPool2d(kernel_size=k, stride=s, padding=k // 2)
            def forward(self, x):
                return self.m(x)
        class SPPELAN(nn.Module):
            # spp-elan
            def __init__(
                self, c1, c2, c3
            ):  # ch_in, ch_out, number, shortcut, groups, expansion
                super().__init__()
                self.c = c3
                self.cv1 = Conv(c1, c3, 1, 1)
                self.cv2 = SP(5)
                self.cv3 = SP(5)
                self.cv4 = SP(5)
                self.cv5 = Conv(4 * c3, c2, 1, 1)
            def forward(self, x):
                y = [self.cv1(x)]
                y.extend(m(y[-1]) for m in [self.cv2, self.cv3, self.cv4])
                return self.cv5(torch.cat(y, 1))
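
A minimal usage sketch (channel numbers are illustrative, not taken from the YOLOv9 config): SPPELAN(c1, c2, c3) projects to c3 hidden channels, applies three chained 5x5 max-pools, concatenates the four feature maps and projects to c2, keeping the spatial size:

m = SPPELAN(512, 512, 256)
y = m(torch.randn(1, 512, 20, 20))
print(tuple(y.shape))  # expected: (1, 512, 20, 20)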
        

2 Parameter comparison

Here I replace the SPP module in yolov5s.yaml with each of the candidate modules; the resulting whole-model parameter counts and GFLOPs are:

Model | Parameters | GFLOPs
SPP | 7225885 | 16.5
SPPF | 7235389 | 16.5
SimSPPF | 7235389 | 16.5
ASPP | 15485725 | 23.1
BasicRFB | 7895421 | 17.1
SPPCSPC | 13663549 | 21.7
SPPFCSPC🍀 | 13663549 | 21.7
Grouped SPPCSPC | 8355133 | 17.4
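
The figures above count the entire yolov5s model after swapping the SPP-type layer in yolov5s.yaml, so they include the backbone and head. For a quick module-level comparison you can also count a standalone block directly (a sketch; these module-only numbers will naturally differ from the whole-model figures in the table):

def n_params(module):
    return sum(p.numel() for p in module.parameters())

for name, m in [("SPP",      SPP(1024, 1024)),
                ("SPPF",     SPPF(1024, 1024)),
                ("SimSPPF",  SimSPPF(1024, 1024)),
                ("SPPCSPC",  SPPCSPC(1024, 1024)),
                ("SPPFCSPC", SPPFCSPC(1024, 1024))]:
    print(f"{name:10s} {n_params(m):>12,d}")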

3 How to add these modules

Step 1: put the code for each module into common.py.

Step 2: register the class names in yolo.py (a sketch of the change follows).
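
For YOLOv5 this means adding the new class names to the channel-handling branch of parse_model() in models/yolo.py. A hedged sketch is below; the exact contents of the existing tuple differ between YOLOv5 versions, and SPPELAN expects an extra channel argument (c3), so it may need its own handling:

# models/yolo.py -> parse_model(): append the new modules to the existing tuple
if m in (Conv, Bottleneck, SPP, SPPF, Focus, BottleneckCSP, C3,   # existing entries (abridged)
         SimSPPF, ASPP, BasicRFB, SPPCSPC, SPPFCSPC):             # <-- new classes added here
    c1, c2 = ch[f], args[0]
    if c2 != no:  # not a detection output
        c2 = make_divisible(c2 * gw, 8)
    args = [c1, c2, *args[1:]]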

Step 3: modify the model configuration file.

The yolov5 configuration file (backbone only) is as follows:

        # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
        # YOLOv5 v6.0 backbone
        backbone:
          # [from, number, module, args]
          [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
           [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
           [-1, 3, C3, [128]],
           [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
           [-1, 6, C3, [256]],
           [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
           [-1, 9, C3, [512]],
           [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
           [-1, 3, C3, [1024]],
           [-1, 1, SPPF, [1024, 5]],  # 9
           # [-1, 1, ASPP, [512]],  # 9
           # [-1, 1, SPP, [1024]],
           # [-1, 1, SimSPPF, [1024, 5]],
           # [-1, 1, BasicRFB, [1024]],
           # [-1, 1, SPPCSPC, [1024]],
           # [-1, 1, SPPFCSPC, [1024, 5]], # 🍀
          ]
        

        4 Issue

Q: Why use SPPCSPC instead of SPPFCSPC?

YOLOv5's SPPF is much faster than SPP.

Why not try to replace SPPCSPC with SPPFCSPC?

(Figure 11: the YOLOv7 author's reply in the GitHub issue)

        A:

Max pooling uses very little computation; if you program it well, the one above can run its three max-pool layers in parallel, while the one below must process its three max-pool layers sequentially.

        By the way, you could replace SPPCSPC by SPPFCSPC at inference time if your hardware is friendly to SPPFCSPC.

Feel free to try it if you are interested.
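
A sketch of what that inference-time swap can look like: because the max-pool branches carry no weights, the parameters of a (trained) SPPCSPC load directly into SPPFCSPC, and the outputs match:

import torch

trained = SPPCSPC(1024, 1024)               # stands in for a trained module
fast = SPPFCSPC(1024, 1024, k=5)
fast.load_state_dict(trained.state_dict())  # cv1..cv7 line up; the pools have no parameters
trained.eval(); fast.eval()

x = torch.randn(1, 1024, 20, 20)
with torch.no_grad():
    print(torch.allclose(trained(x), fast(x)))  # expected: True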


More of my hands-on YOLOv5 content🍀🌟🚀

1. Hands-on Yolo v5 (v6.2) parameter tuning (inference) 🌟 highly recommended

2. Hands-on Yolo v5 (v6.2) parameter tuning (training) 🚀

3. Hands-on Yolo v5 (v6.2) parameter tuning (validation)

4. How to quickly train a Yolov5 model on your own dataset

5. Hands-on: adding attention mechanisms to Yolov5 (v6.2), part 1 (with schematic diagrams of 30+ attention modules from top conferences) 🌟 highly recommended 🍀 8 new ones added

6. Hands-on: adding attention mechanisms to Yolov5 (v6.2), part 2 (attention inside the C3 module)

7. How to swap the activation function in Yolov5?

8. How to switch Yolov5 to BiFPN?

9. An analysis of Yolov5 (v6.2) data augmentation

10. Changing the upsampling method in Yolov5 (nearest / bilinear / bicubic / trilinear / transposed convolution)

11. How to switch Yolov5 to EIOU / alpha IOU / SIoU?

12. Replacing the Yolov5 backbone with Megvii's lightweight CNN ShuffleNetv2

13. Applying the lightweight universal upsampling operator CARAFE to YOLOv5

14. Spatial pyramid pooling improvements: SPP / SPPF / SimSPPF / ASPP / RFB / SPPCSPC / SPPFCSPC 🚀

15. SPD-Conv, a module for low-resolution images and small objects

16. GSConv + Slim-neck: reducing model complexity while improving accuracy 🍀

17. Head decoupling | adding the YOLOX decoupled head to YOLOv5 | a big accuracy booster 🍀

18. Stand-Alone Self-Attention | building a pure-attention FPN+PAN structure 🍀

19. Hands-on YOLOv5 model pruning 🚀

20. Hands-on YOLOv5 knowledge distillation 🚀

21. Hands-on YOLOv7 knowledge distillation 🚀

22. Improving YOLOv5 | introducing the DenseNet dense-connection idea | building a densely connected module 🍀



Questions and corrections are welcome; if this article helped, please give it a like 👍📖🌟


Update log (Aug 16, 2022): added receptive-field annotations to the figures 🍀

Update log (Aug 29, 2022): added the SimSPPF module and tested its speed

Update log (Aug 30, 2022): corrected the SPPCSPC structure diagram

Update log (Aug 30, 2022): added the SPPFCSPC structure

Update log (May 19, 2023): fixed a minor error in RFB

Update log (Jul 23, 2023): fixed an RFB bug



