<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<meta name="author" content="yulunliu">
<title>Learning to See Through Obstructions with Layered Decomposition</title>
<!-- CSS -->
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link href="website/css/materialize.css" type="text/css" rel="stylesheet" media="screen,projection"/>
<link href="website/css/style.css" type="text/css" rel="stylesheet" media="screen,projection"/>
<link href="website/css/font-awesome.min.css" rel="stylesheet">
<!--<meta property="og:image" content="http://gph.is/2oZQz8h" />-->
</head>
<body>
<div class="navbar-fixed">
<nav class="grey darken-4" role="navigation">
<div class="nav-wrapper container"><a id="logo-container" href="#" class="brand-logo"></a>
<a href="#" data-activates="nav-mobile" class="button-collapse"><i class="material-icons">menu</i></a>
<ul class="left hide-on-med-and-down">
<li><a class="nav-item waves-effect waves-light" href="#home">Home</a></li>
<li><a class="nav-item waves-effect waves-light" href="#abstract">Abstract</a></li>
<li><a class="nav-item waves-effect waves-light" href="#paper">Paper</a></li>
<li><a class="nav-item waves-effect waves-light" href="#download">Download</a></li>
<li><a class="nav-item waves-effect waves-light" href="#results">Results</a></li>
<li><a class="nav-item waves-effect waves-light" href="#reference">References</a></li>
</ul>
</div>
</nav>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container scrollspy" id="home">
<h4 class="header center black-text">Learning to See Through Obstructions with Layered Decomposition</h4>
<br>
<div class="row center">
<h5 class="header col offset-l1 l2 m4 s12">
<div class="author"><a href="http://www.cmlab.csie.ntu.edu.tw/~yulunliu/" target="_blank">Yu-Lun Liu<sup>1,5</sup></a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="author"><a href="https://www.wslai.net/" target="_blank">Wei-Sheng Lai<sup>2</sup></a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="author"><a href="https://faculty.ucmerced.edu/mhyang/" target="_blank">Ming-Hsuan Yang<sup>2,4</sup></a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="author"><a href="https://www.csie.ntu.edu.tw/~cyy/" target="_blank">Yung-Yu Chuang<sup>1</sup></a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="author"><a href="https://filebox.ece.vt.edu/~jbhuang/" target="_blank">Jia-Bin Huang<sup>3</sup></a></div>
</h5>
</div>
<div class="row center testA">
<h5 class="header col offset-l1 l2 m4 s12">
<div class="affiliation"><a href="https://www.ntu.edu.tw/" target="_blank"><sup>1</sup>National Taiwan University</a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="affiliation"><a href="https://ai.google/research" target="_blank"><sup>2</sup>Google</a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="affiliation"><a href="https://vt.edu/" target="_blank"><sup>3</sup>Virginia Tech</a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="affiliation"><a href="https://www.ucmerced.edu/" target="_blank"><sup>4</sup>University of California, Merced</a></div>
</h5>
<h5 class="header col l2 m4 s12">
<div class="affiliation"><a href="https://www.mediatek.tw/" target="_blank"><sup>5</sup>MediaTek Inc.</a></div>
</h5>
</div>
</div>
</div>
<div class="container">
<div class="section">
<!-- Icon Section -->
<div class="row center">
<div class="col l12 m12 s12">
<iframe width="560" height="315" src="https://www.youtube.com/embed/oqdvYRYOT5s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
</div>
</div>
<div class="section">
<!-- Icon Section -->
<div class="row center">
<div class="col l12 m12 s12">
<img class="responsive-img" src="website/teaser.png" alt="Teaser: examples of obstruction removal results">
</div>
</div>
</div>
<br>
<div class="row section scrollspy" id="abstract">
<div class="title">Abstract</div>
We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions, or raindrops, from a short sequence of images captured by a moving camera.
Our method leverages the motion differences between the background and the obstructing elements to recover both layers.
Specifically, we alternate between estimating dense optical flow fields of the two layers and reconstructing each layer from the flow-warped images via a deep convolutional neural network.
The learning-based layer reconstruction allows us to accommodate potential errors in the flow estimation and brittle assumptions such as brightness consistency.
We show that training on synthetically generated data transfers well to real images.
Our results on numerous challenging scenarios of reflection and fence removal demonstrate the effectiveness of the proposed method.
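As a rough illustration of this alternating scheme (not the paper's implementation), the sketch below substitutes a zero-flow field for the learned flow estimator and a per-pixel temporal median for the reconstruction CNN; even with those stand-ins, a sparse obstruction that moves differently from the background is suppressed.

```python
import numpy as np

def warp(frame, flow):
    # Backward-warp a grayscale frame by a dense flow field
    # (nearest-neighbor sampling, for simplicity).
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def decompose(frames, num_iters=3):
    # Alternate between (1) estimating a flow field per frame and
    # (2) reconstructing the background layer from the flow-warped frames.
    # Stand-ins: zero flow instead of the learned flow network, and a
    # per-pixel median instead of the learned reconstruction CNN.
    for _ in range(num_iters):
        flows = [np.zeros(f.shape + (2,)) for f in frames]       # flow stand-in
        warped = np.stack([warp(f, fl) for f, fl in zip(frames, flows)])
        background = np.median(warped, axis=0)                   # CNN stand-in
        obstruction = np.clip(frames[0] - background, 0.0, None)
    return background, obstruction
```

With a static background and an obstruction that occupies any given pixel in at most a minority of the frames, the temporal median recovers the background layer exactly; the learned components in the paper replace these stand-ins to handle real parallax and soft reflections.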
</div>
<div class="row section scrollspy" id="paper">
<div class="title">Papers</div>
<br>
<div class="row">
<div class="col m12 s12 center">
<a href="https://arxiv.org/abs/2008.04902" target="_blank">
<img src="website/images/icon_pdf.png">
</a>
<br>
<a href="https://arxiv.org/abs/2008.04902" target="_blank">arXiv</a>
</div>
</div>
</div>
<div class="row">
<div class="subtitle">Citation</div>
<p>Yu-Lun Liu, Wei-Sheng Lai, Ming-Hsuan Yang, Yung-Yu Chuang, and Jia-Bin Huang, "Learning to See Through Obstructions with Layered Decomposition", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021</p>
<br>
<div class="subtitle">Bibtex</div>
<pre>
@article{Liu-TPAMI-2021,
  author  = {Liu, Yu-Lun and Lai, Wei-Sheng and Yang, Ming-Hsuan and Chuang, Yung-Yu and Huang, Jia-Bin},
  title   = {Learning to See Through Obstructions with Layered Decomposition},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2021}
}
</pre>
</div>
<div class="section row scrollspy" id="download">
<div class="title">Download</div>
<div class="row">
<div class="col m6 s12 center">
<a href="https://github.com/alex04072000/SOLD" target="_blank">
<img src="website/images/github.png">
</a>
<br>
<a href="https://github.com/alex04072000/SOLD" target="_blank">Code</a>
</div>
<div class="col m6 s12 center">
<a href="https://drive.google.com/file/d/1W90j60fMjroVvc2zUhYspXvifMewYCGi/view?usp=sharing" target="_blank">
<img src="website/images/icon_zip.png">
</a>
<br>
<a href="https://drive.google.com/file/d/1W90j60fMjroVvc2zUhYspXvifMewYCGi/view?usp=sharing" target="_blank">Results (2.23GB)</a>
</div>
</div>
</div>
<div class="section row scrollspy" id="results">
<div class="title">Results</div>
<div class="row center">
<div class="subtitle"><a href="website/Obstruction_HTML_CameraReady/result.html">Additional Comparisons</a></div>
<a class="summary" href="website/Obstruction_HTML_CameraReady/result.html"><div class="col s12 summary-obstruction"></div></a>
</div>
<br>
</div>
<div class="row section scrollspy" id="reference">
<div class="title">References</div>
<ul>
<li>•
<a href="https://arxiv.org/abs/1812.01461" target="_blank">The visual centrifuge: Model-free layered video representations</a>, CVPR, 2019.
</li>
<li>•
<a href="https://arxiv.org/abs/1806.10781" target="_blank">Accurate and efficient video de-fencing using convolutional neural networks and temporal information</a>, ICME, 2018.
</li>
<li>•
<a href="https://arxiv.org/abs/1708.03474" target="_blank">A generic deep architecture for single image reflection removal and image smoothing</a>, ICCV, 2017.
</li>
<li>•
<a href="https://zpascal.net/cvpr2014/Guo_Robust_Separation_of_2014_CVPR_paper.pdf" target="_blank">Robust separation of reflection from multiple images</a>, CVPR, 2014.
</li>
<li>•
<a href="https://ieeexplore.ieee.org/document/6751413" target="_blank">Exploiting reflection change for automatic reflection removal</a>, ICCV, 2013.
</li>
<li>•
<a href="https://people.csail.mit.edu/changil/assets/video-reflection-removal-through-spatio-temporal-optimization-iccv-2017-nandoriya-et-al.pdf" target="_blank">Video reflection removal through spatio-temporal optimization</a>, ICCV, 2017.
</li>
<li>•
<a href="https://sites.google.com/site/obstructionfreephotography/" target="_blank">A computational approach for obstruction-free photography</a>, SIGGRAPH, 2015.
</li>
<li>•
<a href="https://eccv2018.org/openaccess/content_ECCV_2018/papers/Jie_Yang_Seeing_Deeply_and_ECCV_2018_paper.pdf" target="_blank">A deep learning approach for single image reflection removal</a>, ECCV, 2018.
</li>
<li>•
<a href="https://arxiv.org/abs/1806.05376" target="_blank">Single image reflection separation with perceptual losses</a>, CVPR, 2018.
</li>
<li>•
<a href="https://arxiv.org/abs/2004.01180" target="_blank">Learning to see through obstructions</a>, CVPR, 2020.
</li>
</ul>
</div>
</div>
<footer class="page-footer grey lighten-3">
<!--
<div class="row">
<div class="col l4 offset-l4 s12">
<script type="text/javascript" id="clustrmaps" src="//cdn.clustrmaps.com/map_v2.js?cl=ffffff&w=330&t=tt&d=fhGpuMgoLYXytRWhcIV-396rCSmJYtpAJdk3tTNbAnY"></script>
</div>
</div>
-->
<div class="footer-copyright center black-text">
Copyright © Jason Lai 2017
</div>
</footer>
<!-- Scripts-->
<script src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src="js/materialize.js"></script>
<script src="js/init.js"></script>
</body>
</html>