<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>TASOD</title>
<link rel="stylesheet" type="text/css" href="assets/scripts/bulma.min.css">
<link rel="stylesheet" type="text/css" href="assets/scripts/theme.css">
<link rel="stylesheet" type="text/css" href="https://cdn.bootcdn.net/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
</head>
<body>
<section class="hero is-light" style="">
<div class="hero-body" style="padding-top: 50px;">
<div class="container" style="text-align: center;margin-bottom:5px;">
<h1 class="title">
Exploring Driving-Aware Salient Object Detection via Knowledge Transfer
</h1>
<div class="author">Jinming Su<sup>1,3</sup></div>
<div class="author">Changqun Xia<sup>2</sup></div>
<div class="author">Jia Li<sup>1,2</sup></div>
<div class="group">
<a href="http://cvteam.net/">CVTEAM</a>
</div>
<div class="aff">
<p><sup>1</sup>State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University, Beijing, China</p>
<p><sup>2</sup>Pengcheng Laboratory, Shenzhen, China</p>
<p><sup>3</sup>Meituan</p>
</div>
<div class="con">
<p style="font-size: 24px; margin-top:5px; margin-bottom: 15px;">
ICME 2021
</p>
</div>
<div class="columns">
<div class="column"></div>
<div class="column"></div>
<div class="column">
<a href="https://ieeexplore.ieee.org/abstract/document/9428102" target="_blank">
<p class="link">Paper</p>
</a>
</div>
<div class="column">
<a href="https://github.com/iCVTEAM/TASOD/" target="_blank">
<p class="link">Code</p>
</a>
</div>
<div class="column"></div>
<div class="column"></div>
</div>
</div>
</div>
</section>
<div style="text-align: center;">
<div class="container" style="max-width:850px">
<div style="text-align: center;">
<img src="assets/TASOD/head.png" class="centerImage">
</div>
</div>
<div class="head_cap">
<p style="color:gray;">
The framework of the baseline
</p>
</div>
</div>
<section class="hero">
<div class="hero-body">
<div class="container" style="max-width: 800px" >
<h1 style="">Abstract</h1>
<p style="text-align: justify; font-size: 17px;">
Recently, general salient object detection (SOD) has made
great progress with the rapid development of deep neural networks.
However, task-aware SOD has hardly been studied due to the lack
of task-specific datasets. In this paper, we construct a driving
task-oriented dataset in which pixel-level masks of salient objects
have been annotated. Compared with general SOD datasets, we
find that the cross-domain knowledge difference and the task-specific
scene gap are the two main challenges in focusing on salient objects
while driving. Inspired by these findings, we propose a baseline
model for driving task-aware SOD via a knowledge transfer
convolutional neural network. In this network, we construct an
attention-based knowledge transfer module to bridge the knowledge
difference. In addition, an efficient boundary-aware feature
decoding module is introduced to perform fine feature decoding
for objects in complex task-specific scenes. The whole
network integrates the knowledge transfer and feature decoding
modules in a progressive manner. Experiments show that the
proposed dataset is very challenging and that the proposed method
outperforms 12 state-of-the-art methods on it, which
facilitates the development of task-aware SOD.
</p>
</div>
</div>
</section>
<section class="hero is-light" style="background-color:#FFFFFF;">
<div class="hero-body">
<div class="container" style="max-width:800px;margin-bottom:20px;">
<h1>
Representative Examples
</h1>
</div>
<div class="container" style="max-width:800px">
<div style="text-align: center;">
<img src="assets/TASOD/result.png" class="centerImage">
</div>
</div>
</div>
</section>
<section class="hero" style="padding-top:0px;">
<div class="hero-body">
<div class="container" style="max-width:800px;">
<div class="card">
<header class="card-header">
<p class="card-header-title">
BibTex Citation
</p>
<a class="card-header-icon button-clipboard" style="border:0px; background: inherit;" data-clipboard-target="#bibtex-info" >
<i class="fa fa-copy" style="height:20px;"></i>
</a>
</header>
<div class="card-content">
<pre style="background-color:inherit;padding: 0px;" id="bibtex-info">@inproceedings{9428102,
title={Exploring Driving-Aware Salient Object Detection via Knowledge Transfer},
author={Su, Jinming and Xia, Changqun and Li, Jia},
booktitle={2021 IEEE International Conference on Multimedia and Expo (ICME)},
year={2021},
}</pre>
</div>
</div>
</div>
</div>
</section>
<script type="text/javascript" src="assets/scripts/clipboard.min.js"></script>
<script>
new ClipboardJS('.button-clipboard');
</script>
</body>
</html>