<!DOCTYPE html>
<html lang="zxx" class="no-js">
<head>
<!-- Mobile Specific Meta -->
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Favicon-->
<link rel="shortcut icon" href="img/fav.png">
<!-- Author Meta -->
<meta name="author" content="colorlib">
<!-- Meta Description -->
<meta name="description" content="">
<!-- Meta Keyword -->
<meta name="keywords" content="">
<!-- meta character set -->
<meta charset="UTF-8">
<!-- Site Title -->
<title>PeAR WPI</title>
<link href="https://fonts.googleapis.com/css?family=Poppins:100,200,400,300,500,600,700" rel="stylesheet">
<!--
CSS
============================================= -->
<link rel="stylesheet" href="css/linearicons.css">
<link rel="stylesheet" href="css/font-awesome.min.css">
<link rel="stylesheet" href="css/bootstrap.css">
<link rel="stylesheet" href="css/magnific-popup.css">
<link rel="stylesheet" href="css/nice-select.css">
<link rel="stylesheet" href="css/animate.min.css">
<link rel="stylesheet" href="css/owl.carousel.css">
<link rel="stylesheet" href="css/jquery-ui.css">
<link rel="stylesheet" href="css/main.css">
<link href="css/icofont/icofont.min.css" rel="stylesheet">
<link href="css/remixicon/remixicon.css" rel="stylesheet">
<link href="css/owl.carousel/assets/owl.carousel.min.css" rel="stylesheet">
<link href="css/boxicons/css/boxicons.min.css" rel="stylesheet">
<link rel="stylesheet" href="https://cdn.rawgit.com/jpswalsh/academicons/master/css/academicons.min.css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-171009851-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-171009851-1');
</script>
</head>
<body>
<!-- EDIT ME -->
<header id="header">
<div class="container main-menu">
<div class="row align-items-center justify-content-between d-flex">
<!-- style="margin-left: -36vh; margin-right: -36vh" -->
<div id="logo">
<a href="https://www.wpi.edu/" style="font-size: 24px; font-weight: 600; color: #ddd"><img src="img/logos/WPILogo2.png" width="48px" alt="" title=""> </a><a href="index.html" style="font-size: 24px; font-weight: 600; color: #ddd"><img src="img/logos/LogoWhiteRed.png" width="48px" alt="" title=""> Perception and Autonomous Robotics Group</a>
</div>
<nav id="nav-menu-container">
<ul class="nav-menu">
<li><a title="Home" href="index.html" style="position: relative; top: -4px"><i style="font-size: 28px" class="fa fa-home"></i></a></li>
<li class="menu-has-children"><a title="Research" href="research.html">Research</a>
<ul>
<li><a href="research.html">Research Areas</a></li>
<!-- <li><a href="softwares.html">Softwares/Datasets</a></li> -->
<li><a href="publications.html">Publications/Softwares/Datasets</a></li>
<li><a href="labs.html">Research Labs And Facilities</a></li>
</ul>
</li>
<li><a title="Teaching" href="teaching.html">Teaching</a></li>
<li><a title="Media" href="media.html">Media</a></li>
<li><a title="Openings" href="openings.html">Openings</a></li>
<li><a title="Events" href="events.html">Events</a></li>
</ul>
</nav><!-- #nav-menu-container -->
</div>
</div>
</header> <!-- EDIT ME -->
<!-- Start Sample Area -->
<section class="sample-text-area">
<div class="container">
<h3 class="text-heading">Vision-Based Quadrotor Flight</h3>
<p class="sample-text">
We work on camera-based navigation algorithms for aerial robots. The goal is to develop a fully functional system that can operate in the wild without the need for any external infrastructure such as GPS or motion capture.
</p>
</div>
</section>
<!-- End Sample Area -->
<!-- Start Align Area -->
<div class="whole-wrap">
<div class="container">
<div class="section-top-border" style="text-align: justify">
<h3 class="mb-30">EVPropNet</h3>
The rapid rise in the accessibility of unmanned aerial vehicles, or drones, poses a threat to general security and confidentiality. Most commercially available or custom-built drones are multi-rotors composed of multiple propellers. Since these propellers rotate at high speed, they are generally the fastest-moving parts in an image and cannot be directly "seen" by a classical camera without severe motion blur. We utilize a class of sensors particularly suited to such scenarios, called event cameras, which have high temporal resolution, low latency, and high dynamic range.<br><br>
In this paper, we model the geometry of a propeller and use it to generate simulated events, which are then used to train a deep neural network, called EVPropNet, to detect propellers in the output of an event camera. EVPropNet transfers directly to the real world without any fine-tuning or retraining. We present two applications of our network: (a) tracking and following an unmarked drone and (b) landing on a near-hover drone. We successfully evaluate and demonstrate the proposed approach in many real-world experiments with different propeller shapes and sizes. Our network can detect propellers at a rate of 85.1% even when 60% of the propeller is occluded, and it can run at up to 35 Hz on a 2 W power budget. To our knowledge, this is the first deep-learning-based solution for detecting propellers (and thereby drones). Finally, our applications also show impressive success rates of 92% and 90% for the tracking and landing tasks, respectively.<br><br>
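The full simulation model and network are described in the paper and released code; the snippet below is only a rough, hypothetical sketch of the underlying idea of generating training events from modeled propeller geometry (all names and parameters here are illustrative): render a parametric blade mask at successive rotation angles and treat per-pixel changes between consecutive renders as simulated ON/OFF events.<br>
<pre><code># Hypothetical illustration only -- not the EVPropNet training pipeline.
# Render a toy parametric propeller mask at successive angles and turn
# per-pixel changes between consecutive renders into simulated events.
import numpy as np

def propeller_mask(angle, size=128, blades=2, blade_len=50.0, half_width=0.12):
    """Boolean image of a simple multi-blade propeller rotated by `angle` (rad)."""
    y, x = np.mgrid[0:size, 0:size] - size // 2
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = np.zeros((size, size), dtype=bool)
    for b in range(blades):
        phi = angle + 2.0 * np.pi * b / blades
        ang_dist = np.abs(np.angle(np.exp(1j * (theta - phi))))
        mask |= np.logical_and(r &lt; blade_len, ang_dist &lt; half_width)
    return mask

def simulate_events(rpm=6000.0, sample_hz=10000.0, steps=200):
    """Events as (x, y, t, polarity) tuples from consecutive mask differences."""
    events = []
    prev = propeller_mask(0.0)
    omega = 2.0 * np.pi * rpm / 60.0                     # rad/s
    for k in range(1, steps):
        t = k / sample_hz
        cur = propeller_mask(omega * t)
        ys, xs = np.nonzero(np.logical_xor(cur, prev))   # pixels that changed
        pol = cur[ys, xs].astype(np.int8) * 2 - 1        # +1 ON, -1 OFF
        events.extend(zip(xs.tolist(), ys.tolist(), [t] * len(xs), pol.tolist()))
        prev = cur
    return events
</code></pre><br>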
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/2106.15045">EVPropNet: Detecting Drones By Finding Propellers For Mid-Air Landing And Following</a></h4><br>
<div class="highlight-sec">
<h6>RSS 2021</h6>
</div>
<p>
<b>Nitin J. Sanket</b>, Chahat Deep Singh, Chethan M. Parameshwara, Cornelia Fermuller, Guido C.H.E. de Croon, Yiannis Aloimonos, <i>Robotics: Science and Systems (RSS)</i>, 2021.<br>
</p>
<h6>
<a href="https://arxiv.org/abs/2106.15045"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/EVPropNet"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/EVPropNet"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a>
<!-- <a href="research/evpropnet.html"><i class="fa fa-quote-right"></i> Cite </a> -->
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/evpropnet.png" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<hr>
<h3 class="mb-30">NudgeSeg</h3>
Recent advances in object segmentation have demonstrated that deep neural networks excel at segmenting specific classes of objects in color and depth images. However, their performance is dictated by the number of classes and objects used for training, which hinders generalization to never-seen objects, or zero-shot samples. To exacerbate the problem further, object segmentation from image frames relies on recognition and pattern-matching cues. Instead, we utilize the 'active' nature of a robot and its ability to 'interact' with the environment to induce additional geometric constraints for segmenting zero-shot samples. In this paper, we present the first framework to segment unknown objects in a cluttered scene by repeatedly 'nudging' the objects and moving them to obtain additional motion cues at every step, using only a monochrome monocular camera. We call our framework NudgeSeg. These motion cues are used to refine the segmentation masks. We successfully test our approach on segmenting novel objects in various cluttered scenes and provide an extensive comparison with image and motion segmentation methods. We show an impressive average detection rate of over 86% on zero-shot objects.<br><br>
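As a minimal, hypothetical sketch of the motion cue at the heart of the loop above (not the released NudgeSeg implementation), one can difference the frames captured before and after a nudge and group the changed pixels into candidate object regions; the helper below assumes grayscale uint8 frames and uses SciPy for the morphology and labeling.<br>
<pre><code># Hypothetical sketch: turn the image motion induced by one "nudge" into a
# segmentation cue, using only before/after frames from a monochrome camera.
import numpy as np
from scipy import ndimage

def motion_cue_masks(before, after, diff_thresh=15, min_area=300):
    """Connected regions of pixels that changed between two grayscale frames."""
    diff = np.abs(after.astype(np.int16) - before.astype(np.int16))
    moved = diff &gt; diff_thresh                             # pixels that moved
    moved = ndimage.binary_closing(moved, iterations=2)     # fill small holes
    labels, n = ndimage.label(moved)                         # group into regions
    masks = []
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() &gt;= min_area:                       # drop noise blobs
            masks.append(region)
    return masks

# Usage idea: after each nudge, intersect these motion regions with the current
# object hypotheses. A hypothesis that moves rigidly as one region is kept and
# refined; one that splits across several motion regions is divided and
# re-verified with further nudges.
</code></pre><br>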
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/2109.13859">NudgeSeg: Zero-Shot Object Segmentation by Repeated Physical Interaction</a></h4><br>
<div class="highlight-sec">
<h6>IROS 2021</h6>
</div>
<p>
<b>Nitin J. Sanket*</b>, Chahat Deep Singh*, Cornelia Fermuller, Yiannis Aloimonos, <i>IEEE International Conference on Intelligent Robots and Systems (IROS)</i>, 2021.<br>
* Equal Contribution
</p>
<h6>
<a href="https://arxiv.org/abs/2109.13859"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/NudgeSeg"><i class="fa fa-globe"></i> Project Page </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a>
<!-- <a href="research/nudgeseg.html"><i class="fa fa-quote-right"></i> Cite </a> -->
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/nudgeseg.png" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<hr>
<h3 class="mb-30">MorphEyes</h3>
Morphable design and depth-based visual control are two emerging trends driving advancements in quadrotor autonomy. Stereo cameras strike a good balance between weight and depth-estimation accuracy, but their depth range is limited and dictated by the baseline chosen at design time. In this paper, we present a framework for quadrotor navigation based on a stereo camera system whose baseline can be adapted on-the-fly. We present a method to calibrate the system at a small number of discrete baselines and interpolate the parameters for the entire baseline range, along with an extensive theoretical analysis of calibration and synchronization errors. We showcase three different applications of such a system for quadrotor navigation: (a) flying through a forest, (b) flying through a static or dynamic gap of unknown shape and location, and (c) accurate 3D pose detection of an independently moving object. We show that our variable-baseline system is more accurate and robust in all three scenarios. To our knowledge, this is the first work that applies the concept of morphable design to achieve a variable-baseline stereo vision system on a quadrotor.<br><br>
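The paper contains the full calibration procedure and error analysis; the snippet below is only a minimal sketch, using the standard pinhole stereo relation Z = fB/d, of why an adaptable baseline helps and of one way (simple linear interpolation, assumed here purely for illustration) to obtain calibration parameters between the discretely calibrated baselines.<br>
<pre><code># Minimal sketch: depth from disparity for a variable-baseline stereo rig,
# with calibration parameters interpolated between a few baselines at which
# the rig was actually calibrated. Illustrative only, not the paper's method.
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Standard pinhole stereo relation: Z = f * B / d."""
    d = np.maximum(disparity_px, 1e-6)        # guard against zero disparity
    return focal_px * baseline_m / d

def interpolate_calibration(baseline_m, calib_baselines, calib_params):
    """Linearly interpolate each calibration parameter across baselines.

    calib_baselines: sorted 1D array of baselines used during calibration (m).
    calib_params:    array of shape (num_baselines, num_params), e.g. focal
                     length, principal point, rectification terms.
    """
    return np.array([np.interp(baseline_m, calib_baselines, calib_params[:, j])
                     for j in range(calib_params.shape[1])])

# A wider baseline keeps depth resolvable at longer range for the same minimum
# disparity, while a narrower baseline reduces the minimum usable depth and
# eases matching up close; adapting B in flight trades between the two.
baselines = np.array([0.10, 0.20, 0.30])                  # calibrated baselines
params = np.array([[640.0, 320.0], [642.0, 321.0], [645.0, 322.5]])
print(interpolate_calibration(0.25, baselines, params))   # interpolated f, cx
print(depth_from_disparity(8.0, 0.25, 643.5))             # about 20 m
</code></pre><br>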
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/2011.03077" style="font-weight: 600;"> MorphEyes: Variable Baseline Stereo For Quadrotor Navigation</a></h4><br>
<div class="highlight-sec">
<h6>ICRA 2021</h6>
</div>
<p>
<b>Nitin J. Sanket</b>, Chahat Deep Singh, Varun Asthana, Cornelia Fermuller, Yiannis Aloimonos, <i>IEEE International Conference on Robotics and Automation (ICRA) </i>, 2021.<br>
</p>
<h6>
<a href="https://arxiv.org/abs/2011.03077"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/MorphEyes"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/MorphEyes"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a> <br><br>
<!-- <a href="research/morpheyes.html"><i class="fa fa-quote-right"></i> Cite </a>
-->
<h4>Featured in</h4> <br>
<a href="https://www.crowdsupply.com/stereopi/stereopi/updates/stereopi-powered-drones-and-the-stereopi-v2"><img src="img/logos/CrowdSupply.png" width="200px" alt="" class="img-fluid"></a> <a href="https://robotics.umd.edu/research/multirobot-systems"><img src="img/logos/MRC.png" width="200px" alt="" class="img-fluid"></a>
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/morpheyes.png" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<hr>
<h3 class="mb-30">PRGFlow</h3>
Odometry on aerial robots has to be of low latency and high robustness while also respecting the Size, Weight, Area and Power (SWAP) constraints dictated by the size of the robot. Visual sensors coupled with Inertial Measurement Units (IMUs) have proven to be the best combination for obtaining robust, low-latency odometry on resource-constrained aerial robots. Recently, deep learning approaches for visual-inertial fusion have gained momentum due to their high accuracy and robustness. A remarkable advantage of these techniques, which has been lacking in previous approaches, is their inherent scalability (adaptation to different-sized aerial robots) and unification (the same method works on different-sized aerial robots) when paired with compression methods and hardware acceleration. To this end, we present a deep learning approach for visual translation estimation and loosely fuse it with an inertial sensor for full 6-DoF odometry estimation. We also present a detailed benchmark comparing different architectures, loss functions, and compression methods to enable scalability. We evaluate our network on the MSCOCO dataset and the visual-inertial fusion on multiple real-flight trajectories.<br><br>
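As a purely illustrative sketch of loose visual-inertial fusion (a generic complementary-filter-style update under an assumed downward-facing camera, not the specific formulation in the paper), the network's per-frame image translation can be given metric scale from altitude and blended with an IMU prediction.<br>
<pre><code># Hypothetical sketch of loosely-coupled visual-inertial odometry: a network
# supplies per-frame image translation, the IMU supplies short-term prediction,
# and a complementary blend fuses the two. Not PRGFlow's exact filter.
import numpy as np

def fuse_step(pos, vel, accel_world, dt, flow_px, altitude_m, focal_px,
              alpha=0.98):
    """One fusion step; `flow_px` is the network's mean image translation (px)."""
    # IMU prediction: integrate world-frame acceleration (gravity removed).
    vel_pred = vel + accel_world * dt
    pos_pred = pos + vel_pred * dt

    # Visual measurement: scale image translation to meters using altitude,
    # assuming a downward-facing camera (metric scale from height).
    vis_xy = flow_px * altitude_m / focal_px
    pos_meas = np.array([pos[0] + vis_xy[0], pos[1] + vis_xy[1], altitude_m])

    # Complementary blend: trust the IMU at high rate, correct drift with vision.
    pos_new = alpha * pos_pred + (1.0 - alpha) * pos_meas
    vel_new = alpha * vel_pred + (1.0 - alpha) * (pos_new - pos) / dt
    return pos_new, vel_new

# Example: hover at 2 m with a small rightward image translation of 3 px/frame.
p, v = np.zeros(3), np.zeros(3)
p, v = fuse_step(p, v, np.zeros(3), 0.02, np.array([3.0, 0.0]), 2.0, 320.0)
print(p)
</code></pre><br>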
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/2006.06753" style="font-weight: 600;"> PRGFlow: Unified SWAP‐aware deep global optical flow for aerial robot navigation</a></h4><br>
<div class="highlight-sec">
<h6>Electronics Letters 2021</h6>
</div>
<p>
<b>Nitin J. Sanket</b>, Chahat Deep Singh, Cornelia Fermuller, Yiannis Aloimonos, <i>Electronics Letters</i>, 2021.<br>
</p>
<h6>
<a href="https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/ell2.12274"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/PRGFlow"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/PRGFlow"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a>
<!-- <a href="research/prgflow.html"><i class="fa fa-quote-right"></i> Cite </a> -->
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/PRGFlow.png" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<hr>
<h3 class="mb-30">EVDodgeNet</h3>
Dynamic obstacle avoidance on quadrotors requires low latency. A class of sensors particularly suited to such scenarios is event cameras. In this paper, we present a deep-learning-based solution for dodging multiple dynamic obstacles on a quadrotor with a single event camera and onboard computation. Our approach uses a series of shallow neural networks for estimating both the ego-motion and the motion of independently moving objects. The networks are trained in simulation and transfer directly to the real world without any fine-tuning or retraining. We successfully evaluate and demonstrate the proposed approach in many real-world experiments with obstacles of different shapes and sizes, achieving an overall success rate of 70%, including objects of unknown shape and a low-light testing scenario. To our knowledge, this is the first deep-learning-based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor. Finally, we also extend our work to the pursuit task by merely reversing the control policy, demonstrating that our navigation stack can cater to different scenarios.<br><br>
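The sketch below is a hypothetical illustration of the "reverse the control policy" remark above, not the paper's controller: given the tracked image-plane centroid of the segmented moving object, command a velocity away from its predicted position to dodge, and simply negate that command to pursue.<br>
<pre><code># Hypothetical sketch: dodge or pursue based on the tracked image-plane
# centroid of an independently moving object (e.g. from the segmentation
# networks described above). Not the EVDodgeNet control implementation.
import numpy as np

def avoidance_command(centroids_px, dt, image_center_px, gain=0.005,
                      lookahead_s=0.5, max_speed=1.0, mode="dodge"):
    """centroids_px: (N, 2) recent centroids; returns a 2D velocity command."""
    c = np.asarray(centroids_px, dtype=float)
    velocity_px = (c[-1] - c[0]) / (dt * (len(c) - 1))    # object speed, px/s
    predicted = c[-1] + velocity_px * lookahead_s          # where it will be
    error_px = predicted - np.asarray(image_center_px)     # offset from us
    cmd = -gain * error_px                                  # move away from it
    if mode == "pursue":
        cmd = -cmd                                          # reversed policy
    return np.clip(cmd, -max_speed, max_speed)

# An object predicted to pass on one side of the image yields a sidestep to the
# opposite side; with mode="pursue" the same geometry drives the quadrotor
# toward the object instead.
print(avoidance_command([[100, 240], [140, 240], [180, 240]], 0.01, (320, 240)))
</code></pre><br>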
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/1906.02919" style="font-weight: 600;"> EVDodgeNet: Deep Dynamic Obstacle Dodging with Event Cameras</a></h4><br>
<div class="highlight-sec">
<h6>ICRA 2020</h6>
</div>
<p>
<b>Nitin J. Sanket*</b>, Chethan M. Parameshwara*, Chahat Deep Singh, Ashwin V. Kuruttukulam, Cornelia Fermuller, Davide Scaramuzza, Yiannis Aloimonos, <i>IEEE International Conference on Robotics and Automation (ICRA)</i>, Paris, 2020.<br>
* Equal Contribution
<!-- Add text background in p tag with div -->
</p>
<h6>
<a href="https://arxiv.org/abs/1906.02919"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/EVDodgeNet"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/EVDodgeNet"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a> <br><br>
<!-- <a href="research/evdodgenet.html"><i class="fa fa-quote-right"></i> Cite </a> -->
<h4>Featured in</h4> <br>
<a href="https://mashable.com/video/drone-uses-ai-to-dodge-objects-thrown-at-it/"><img src="img/logos/Mashable.png" width="140px" alt="" class="img-fluid"></a> <a href="https://futurism.com/the-byte/watch-drones-dodge-stuff-thrown"><img src="img/logos/Futurism.png" width="140px" alt="" class="img-fluid"></a>
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/EVDodgeNet.gif" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<br><br>
<hr>
<h3 class="mb-30">GapFlyt</h3>
Although quadrotors, and aerial robots in general, are inherently active agents, their perceptual capabilities in the literature have so far been mostly passive in nature. Researchers and practitioners today use traditional computer vision algorithms with the aim of building a representation of general applicability: a 3D reconstruction of the scene. Using this representation, planning tasks are constructed and accomplished to allow the quadrotor to demonstrate autonomous behavior. These methods are inefficient because they are not task-driven, and they are not what flying insects and birds use: such agents have been solving the problem of navigation and complex control for ages without ever building a 3D map, and they are highly task-driven.<br><br>
In this paper, we propose a framework of bio-inspired perceptual design for quadrotors. We use this philosophy to design a minimalist sensorimotor framework for a quadrotor to fly through unknown gaps, without a 3D reconstruction of the scene, using only a monocular camera and onboard sensing. We successfully evaluate and demonstrate the proposed approach in many real-world experiments with different settings and window shapes, achieving a success rate of 85% at 2.5 m/s even with a minimum tolerance of just 5 cm. To our knowledge, this is the first paper to address the problem of detecting a gap of unknown shape and location with a monocular camera and onboard sensing.<br><br>
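The full active-vision pipeline is described in the paper; as a loose, hypothetical illustration of one structure-less cue (not necessarily the exact formulation used), the foreground around a gap is closer than the background seen through it, so under camera motion it produces larger optical-flow magnitudes, and thresholding flow magnitude accumulated over a few frames can expose a candidate gap region without any 3D reconstruction. The snippet assumes grayscale uint8 frames and OpenCV's Farneback flow.<br>
<pre><code># Loose illustration, not the GapFlyt pipeline: accumulate dense optical-flow
# magnitude over a short window and keep large low-parallax regions as
# candidate gaps (the background seen through the gap moves less in the image).
import cv2
import numpy as np

def gap_candidates(gray_frames, mag_percentile=60, min_area=2000):
    """Binary mask of large low-parallax regions from a list of uint8 frames."""
    acc = np.zeros(gray_frames[0].shape, dtype=np.float32)
    for f0, f1 in zip(gray_frames[:-1], gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(f0, f1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        acc += np.linalg.norm(flow, axis=2)                # stack flow magnitude
    thresh = np.percentile(acc, mag_percentile)
    background = (acc &lt; thresh).astype(np.uint8)          # low parallax = far
    n, labels, stats, _ = cv2.connectedComponentsWithStats(background)
    mask = np.zeros_like(background)
    for i in range(1, n):                                   # label 0 = high-parallax pixels
        if stats[i, cv2.CC_STAT_AREA] &gt;= min_area:
            mask[labels == i] = 1
    return mask
</code></pre><br>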
<h3> References</h3><br>
<div class="rowunmod">
<div class="col-lg-6 col-md-6 mt-sm-20 left-align-p" style="padding-left:0; padding-right:0">
<h4><a href="https://arxiv.org/abs/1802.05330" style="font-weight: 600;"> GapFlyt: Active Vision Based Minimalist Structure-less Gap Detection For Quadrotor Flight</a></h4><br>
<div class="highlight-sec">
<h6>RA-L 2018 | IROS 2018</h6>
</div>
<p>
<b>Nitin J. Sanket*</b>, Chahat Deep Singh*, Kanishka Ganguly, Cornelia Fermuller, Yiannis Aloimonos, <i>IEEE Robotics and Automation Letters</i>, 2018.<br>
* Equal Contribution<br><br>
<span style="font-weight: 600; color:#c30000"><i class="fa fa-trophy"></i>Awarded the Brin Family Prize, 2018.</span> <span style="font-weight: 600"><a href="https://aero.umd.edu/news/story/brin-family-prize-celebrates-student-innovation"> <i class="fa fa-newspaper-o"></i> News Article</a> </span>
<!-- Add text background in p tag with div -->
<!-- Border Radius -->
</p>
<h6>
<a href="https://arxiv.org/abs/1802.05330"><i class="fa fa-file-text-o"></i> Paper </a> <a href="http://prg.cs.umd.edu/GapFlyt"><i class="fa fa-globe"></i> Project Page </a> <a href="https://github.com/prgumd/GapFlyt"><i class="fa fa-github"></i> Code </a> <a href="http://umd.edu"><i class="fa fa-map-marker"></i> UMD </a> <br><br>
<!-- <a href="research/gapflyt.html"><i class="fa fa-quote-right"></i> Cite </a> -->
<h4>Featured in</h4> <br>
<a href="https://spectrum.ieee.org/automaton/robotics/drones/insectinspired-vision-system-helps-drones-pass-through-small-gaps"><img src="img/logos/IEEESpectrum.png" width="140px" alt="" class="img-fluid"></a> <a href="https://techcrunch.com/2018/09/12/new-techniques-teach-drones-to-fly-through-small-holes/"><img src="img/logos/TechCrunch.png" width="160px" alt="" class="img-fluid"></a> <br><br>
<a href="https://news.developer.nvidia.com/insect-inspired-drone-uses-ai-to-fly-through-narrow-gaps/"><img src="img/logos/NVIDIA.png" width="60px" alt="" class="img-fluid"></a>
<a href="https://technologynewsupdate.com/new-techniques-teach-drones-to-fly-through-small-holes/"><img src="img/logos/TechNewsUpdate.png" width="200px" alt="" class="img-fluid"></a> and <a href="http://prg.cs.umd.edu/media">many more</a>
</h6>
</div>
<div class="col-lg-6 col-md-6 mt-sm-20 right-align-p">
<img src="img/research/gapflyt.gif" alt="" class="img-fluid" style="border-radius: 16px;">
</div>
</div>
<br><br>
<hr>
</div>
</div>
</div>
<!-- EDIT FOOT -->
<!-- start footer Area -->
<section class="facts-area section-gap" id="facts-area" style="background-color: rgba(255, 255, 255, 1.0); padding: 40px">
<div class="container">
<div class="title text-center">
<p> <a href="index.html"><img src="img/logos/LogoBlackRed.png" width="128px" alt="" title=""></a><br><br>
Perception and Autonomous Robotics Group <br>
Worcester Polytechnic Institute <br>
Copyright © 2023<br>
<span style="font-size: 10px">Website based on <a href="https://colorlib.com" target="_blank">Colorlib</a></span>
</p>
</div>
</div>
</section>
<!-- End footer Area --> <!-- EDIT FOOT -->
<script src="js/vendor/jquery-2.2.4.min.js"></script>
<script src="js/popper.min.js"></script>
<script src="js/vendor/bootstrap.min.js"></script>
<script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyBhOdIF3Y9382fqJYt5I_sswSrEw5eihAA"></script>
<script src="js/easing.min.js"></script>
<script src="js/hoverIntent.js"></script>
<script src="js/superfish.min.js"></script>
<script src="js/jquery.ajaxchimp.min.js"></script>
<script src="js/jquery.magnific-popup.min.js"></script>
<script src="js/jquery.tabs.min.js"></script>
<script src="js/jquery.nice-select.min.js"></script>
<script src="js/isotope.pkgd.min.js"></script>
<script src="js/waypoints.min.js"></script>
<script src="js/jquery.counterup.min.js"></script>
<script src="js/simple-skillbar.js"></script>
<script src="js/owl.carousel.min.js"></script>
<script src="js/mail-script.js"></script>
<script src="js/main.js"></script>
</body>
</html>