<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Research</title>
<link rel="stylesheet"
href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.0/css/bootstrap.min.css"
integrity="sha384-9gVQ4dYFwwWSjIDZnLEWnxCjeSWFphJiwGPXr1jddIhOegiu1FwO5qRGvFXOdJZ4"
crossorigin="anonymous">
<link rel="stylesheet"
href="css/style.css">
</head>
<body>
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarTogglerDemo01" aria-controls="navbarTogglerDemo01" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarTogglerDemo01">
<ul class="navbar-nav mr-auto mt-2 mt-lg-0">
<li class="nav-item">
<a class="nav-link" href="main.html">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="About.html">About Us</a>
</li>
<li class="nav-item">
<a class="nav-link" href="Team.html">Team</a>
</li>
<li class="nav-item">
<a class="nav-link" href="Analysis.html">Analysis</a>
</li>
</ul>
</div>
</nav>
<header class="grey-top">
<h1 class="text-success">Research</h1>
</header>
<div class="w-20 p-2" style="background-color:green;"></div>
<p class="res-title">
<h2>Research on Machine Learning</h2>
<ul>
<li><strong>Machine Learning</strong> – allows computers to learn without being explicitly programmed.
<ul>
<li>Where optimization minimizes loss on sample sets, ML minimizes loss on unseen samples.</li>
</ul>
</li>
<li><strong>Image Classification</strong> – a task often framed as a multi-label assignment: it involves extracting information from an image and then associating that information with one or more class labels.</li>
<li><strong>Supervised Learning</strong> – done in the context of classification, mapping inputs to output labels (ex: regression, classification).
<ul>
<li>Goal is to find specific relationships or structure in the input data that allow us to effectively produce correct output data.</li>
<li>Main considerations: model complexity, bias-variance tradeoff.</li>
</ul>
</li>
<li><strong>Unsupervised Learning</strong> – learn the inherent structure of our data without using explicitly provided labels (ex: clustering).
<ul><li>Common use cases: exploratory analysis and dimensionality reduction.</li></ul>
</li>
<li><strong>Reinforcement Learning</strong> – the computer acts in a dynamic environment where it is rewarded for correct actions and seeks to maximize cumulative reward.</li>
<li><strong>Generalization</strong> (in terms of ML) – the ability of a learning machine to perform accurately on new, unseen examples after experiencing a training data set.
<ul><li>Because training sets are finite and the future is uncertain, ML can only give a high probability of an outcome, not a guarantee.</li></ul>
</li>
</ul>
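The supervised-learning and generalization ideas above can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not any particular library's API: a 1-nearest-neighbour classifier "learns" from labelled samples and is then asked about a point it has never seen. All data here is made up for the example.

```python
# Toy supervised learning: fit on labelled points, predict an unseen one.

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`."""
    best_label, best_dist = None, float("inf")
    for (x, y), label in train:
        d = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label

# Labelled training set: a "cat" cluster near (0, 0), a "dog" cluster near (5, 5).
train = [((0, 0), "cat"), ((1, 0), "cat"), ((5, 5), "dog"), ((5, 6), "dog")]

# An unseen sample near the "dog" cluster: the model generalizes from a
# finite training set, so the answer is probable, not guaranteed.
print(nearest_neighbor(train, (4, 5)))  # dog
```

Because the training set is finite, a query far from both clusters would still get one of the two labels, which is exactly the "high probability, not a guarantee" caveat noted above.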
<h2>Research on InceptionV3</h2>
<ul>
<li><strong>Factorized Convolutions:</strong> reduce computational cost by reducing the number of parameters in the network, while also keeping a check on network efficiency.</li>
<li><strong>Smaller Convolutions:</strong> help reduce the number of parameters by replacing large filters with multiple smaller ones.
<ul><li>Example: a 5×5 filter is replaced with two stacked 3×3 filters (18 parameters instead of 25).</li></ul>
</li>
<li><strong>Asymmetric Convolutions:</strong> a 3 × 3 convolution can be replaced by a 1 × 3 convolution followed by a 3 × 1 convolution (6 parameters instead of 9). If the 3 × 3 convolution were instead replaced by two 2 × 2 convolutions (8 parameters), the count would be slightly higher than with the asymmetric factorization.</li>
<li><strong>Auxiliary Classifier:</strong> a small CNN inserted between layers during training; its loss is added to the main network loss. In GoogLeNet auxiliary classifiers were used to help train a deeper network, whereas in Inception v3 the auxiliary classifier acts as a regularizer.</li>
<li><strong>Grid Size Reduction:</strong> traditionally done with pooling; Inception v3 instead concatenates parallel stride-2 convolution and pooling branches, reducing the grid size efficiently while avoiding a representational bottleneck.</li>
</ul>
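The parameter counts quoted above (25 vs 18, 9 vs 6 vs 8) are simple per-kernel arithmetic. The sketch below checks them for a single input/output channel with bias terms ignored; a k1 × k2 kernel contributes k1 · k2 weights, and stacked kernels add their counts. The helper name `params` is ours, not from any framework.

```python
# Parameter-count arithmetic behind the Inception v3 factorizations,
# per input/output channel pair, biases ignored.

def params(*kernels):
    """Total weights of a stack of (height, width) kernels."""
    return sum(h * w for h, w in kernels)

five_by_five    = params((5, 5))           # 25
two_3x3_stack   = params((3, 3), (3, 3))   # 18: same receptive field as one 5x5
three_by_three  = params((3, 3))           # 9
asymmetric_pair = params((1, 3), (3, 1))   # 6: 1x3 followed by 3x1
two_2x2_stack   = params((2, 2), (2, 2))   # 8: slightly more than the asymmetric pair

print(five_by_five, two_3x3_stack, three_by_three, asymmetric_pair, two_2x2_stack)
# 25 18 9 6 8
```

The same ratios scale linearly with channel counts, which is why these factorizations save the same fraction of parameters in a full network.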
</p>
</body>
</html>