**Authors:** Jonathan Long, Evan Shelhamer, Trevor Darrell (UC Berkeley)

Published: 2015

Paper: https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf

Code: https://github.com/shelhamer/fcn.berkeleyvision.org

**Significance:** The seminal work applying deep learning to semantic image segmentation.

I. Abstract

Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [19], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [4] to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.

II. Network Architecture

The authors rely on three main techniques: convolutionalization, upsampling, and a skip architecture.

1. Convolutionalization

Replace the final fully connected layers of a classification network (e.g., AlexNet, VGG16) with convolutional layers, so the network accepts inputs of arbitrary size and produces a spatial map of class scores rather than a single class vector.
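As a rough illustration (not the authors' released Caffe code), the sketch below shows in PyTorch how VGG16's fully connected layers fc6/fc7 and the score layer can be rewritten as 7×7 and 1×1 convolutions; the class count `num_classes = 21` (PASCAL VOC) and the 500×500 test input are illustrative assumptions.

```python
# Minimal sketch of "convolutionalization", assuming PyTorch and torchvision are installed.
import torch
import torch.nn as nn
from torchvision.models import vgg16

num_classes = 21  # assumed: PASCAL VOC, 20 classes + background

backbone = vgg16(weights=None)   # the convolutional feature extractor is kept as-is
features = backbone.features     # output: [N, 512, H/32, W/32]

# The fully connected classifier is re-expressed as convolutions,
# so it slides over feature maps of any spatial size.
classifier = nn.Sequential(
    nn.Conv2d(512, 4096, kernel_size=7),          # fc6 -> 7x7 convolution
    nn.ReLU(inplace=True),
    nn.Dropout2d(),
    nn.Conv2d(4096, 4096, kernel_size=1),         # fc7 -> 1x1 convolution
    nn.ReLU(inplace=True),
    nn.Dropout2d(),
    nn.Conv2d(4096, num_classes, kernel_size=1),  # per-class score map
)

x = torch.randn(1, 3, 500, 500)          # arbitrary input size
coarse_scores = classifier(features(x))  # coarse spatial grid of class scores
print(coarse_scores.shape)               # e.g. torch.Size([1, 21, 9, 9])
```

The resulting coarse score map is what the paper's upsampling and skip steps then refine back to the input resolution.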