Commit 1f5d67b

regen

vmchale committed Oct 8, 2024
1 parent cb98aa6

Showing 1 changed file (docs/index.html) with 41 additions and 32 deletions.
@@ -258,6 +258,7 @@ <h1 class="title">Apple by Example</h1>
<li><a href="#map" id="toc-map">Map</a></li>
<li><a href="#zip" id="toc-zip">Zip</a></li>
<li><a href="#fold" id="toc-fold">Fold</a></li>
<li><a href="#scan" id="toc-scan">Scan</a></li>
<li><a href="#array-literals" id="toc-array-literals">Array
Literals</a></li>
<li><a href="#reverse" id="toc-reverse">Reverse</a></li>
@@ -310,6 +311,8 @@ <h1 class="title">Apple by Example</h1>
<li><a href="#numerically-stable-geometric-mean"
id="toc-numerically-stable-geometric-mean">Numerically Stable Geometric
Mean</a></li>
<li><a href="#array-strides" id="toc-array-strides">Array
Strides</a></li>
<li><a href="#train-neural-network" id="toc-train-neural-network">Train
Neural Network</a></li>
<li><a href="#shoelace-theorem" id="toc-shoelace-theorem">Shoelace
@@ -435,6 +438,9 @@ <h2 id="fold">Fold</h2>
<p><code>/</code> folds over an array.</p>
<pre><code> &gt; (+)/⍳ 1 100 1
5050</code></pre>
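<p>For comparison, a rough Python equivalent of the fold above (an
illustrative sketch, not part of the Apple docs):</p>
<pre><code>from functools import reduce

# ⍳ 1 100 1 is the integers 1..100; (+)/ folds them with addition
reduce(lambda acc, x: acc + x, range(1, 101))  # 5050</code></pre>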
<h2 id="scan">Scan</h2>
<pre><code> &gt; (+)Λ (irange 1 3 1)
Vec 3 [1, 3, 6]</code></pre>
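<p>A minimal Python sketch of the same prefix sum, using
<code>itertools.accumulate</code> as a stand-in for <code>Λ</code>:</p>
<pre><code>from itertools import accumulate

# irange 1 3 1 is [1, 2, 3]; the scan keeps each partial sum
list(accumulate([1, 2, 3]))  # [1, 3, 6]</code></pre>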
<h2 id="array-literals">Array Literals</h2>
<p>Array literals are delineated by <code>⟨</code>…<code>⟩</code>.</p>
<pre><code> &gt; ⟨_1,0::int⟩
@@ -688,6 +694,9 @@ <h2 id="numerically-stable-geometric-mean">Numerically Stable Geometric
<pre><code>λxs.
⸎ avg ← [{n ⟜ ℝ(:xs); ((+)/xs)%n}]
; e:(avg (_.&#39;xs))</code></pre>
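<p>The stable trick is to average the logarithms and exponentiate once
at the end, rather than multiplying all the elements together. A NumPy
sketch of the same idea (an illustration, not part of the Apple
docs):</p>
<pre><code>import numpy as np

def geomean(xs):
    # exp of the mean of logs avoids overflow in the running product
    return np.exp(np.mean(np.log(xs)))

geomean([1.0, 2.0, 4.0])  # 2.0</code></pre>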
<h2 id="array-strides">Array Strides</h2>
<pre><code>λds. }:((*)Λₒ 1::int ds)</code></pre>
<p>This computes the strides of an array in column-major order: an
exclusive running product of the dimensions (the scan is seeded with 1
and <code>}:</code> drops the last element).</p>
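<p>A small Python sketch of the same computation, assuming
column-major (Fortran-order) layout:</p>
<pre><code>from itertools import accumulate
from operator import mul

def strides(dims):
    # exclusive running product: equivalent to scanning all dims with
    # seed 1 and dropping the final (full) product
    return list(accumulate(dims[:-1], mul, initial=1))

strides([2, 3, 4])  # [1, 2, 6]</code></pre>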
<h2 id="train-neural-network">Train Neural Network</h2>
<pre><code>λwh.λwo.λbh.λbo.
{ X ⟜ ⟨⟨0,0⟩,⟨0,1⟩,⟨1,0⟩,⟨1,1⟩⟩;
@@ -709,38 +718,38 @@ <h2 id="train-neural-network">Train Neural Network</h2>
}</code></pre>
<p>This is equivalent to the <a
href="https://towardsdatascience.com/implementing-the-xor-gate-using-backpropagation-in-neural-networks-c1f255b4f20d">Python</a>:</p>
<div class="sourceCode" id="cb60"><pre
class="sourceCode python"><code class="sourceCode python"><span id="cb60-1"><a href="#cb60-1" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> numpy <span class="im">as</span> np</span>
<span id="cb60-2"><a href="#cb60-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb60-3"><a href="#cb60-3" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> sigmoid (x):</span>
<span id="cb60-4"><a href="#cb60-4" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> <span class="dv">1</span><span class="op">/</span>(<span class="dv">1</span> <span class="op">+</span> np.exp(<span class="op">-</span>x))</span>
<span id="cb60-5"><a href="#cb60-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb60-6"><a href="#cb60-6" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> sigmoid_derivative(x):</span>
<span id="cb60-7"><a href="#cb60-7" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> x <span class="op">*</span> (<span class="dv">1</span> <span class="op">-</span> x)</span>
<span id="cb60-8"><a href="#cb60-8" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb60-9"><a href="#cb60-9" aria-hidden="true" tabindex="-1"></a>inputs <span class="op">=</span> np.array([[<span class="dv">0</span>,<span class="dv">0</span>],[<span class="dv">0</span>,<span class="dv">1</span>],[<span class="dv">1</span>,<span class="dv">0</span>],[<span class="dv">1</span>,<span class="dv">1</span>]])</span>
<span id="cb60-10"><a href="#cb60-10" aria-hidden="true" tabindex="-1"></a>expected_output <span class="op">=</span> np.array([[<span class="dv">0</span>],[<span class="dv">1</span>],[<span class="dv">1</span>],[<span class="dv">0</span>]])</span>
<span id="cb60-11"><a href="#cb60-11" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb60-12"><a href="#cb60-12" aria-hidden="true" tabindex="-1"></a>hidden_layer_activation <span class="op">=</span> np.dot(inputs,hidden_weights)</span>
<span id="cb60-13"><a href="#cb60-13" aria-hidden="true" tabindex="-1"></a>hidden_layer_activation <span class="op">+=</span> hidden_bias</span>
<span id="cb60-14"><a href="#cb60-14" aria-hidden="true" tabindex="-1"></a>hidden_layer_output <span class="op">=</span> sigmoid(hidden_layer_activation)</span>
<span id="cb60-15"><a href="#cb60-15" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb60-16"><a href="#cb60-16" aria-hidden="true" tabindex="-1"></a>output_layer_activation <span class="op">=</span> np.dot(hidden_layer_output,output_weights)</span>
<span id="cb60-17"><a href="#cb60-17" aria-hidden="true" tabindex="-1"></a>output_layer_activation <span class="op">+=</span> output_bias</span>
<span id="cb60-18"><a href="#cb60-18" aria-hidden="true" tabindex="-1"></a>predicted_output <span class="op">=</span> sigmoid(output_layer_activation)</span>
<span id="cb60-19"><a href="#cb60-19" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb60-20"><a href="#cb60-20" aria-hidden="true" tabindex="-1"></a><span class="co">#Backpropagation</span></span>
<span id="cb60-21"><a href="#cb60-21" aria-hidden="true" tabindex="-1"></a>error <span class="op">=</span> expected_output <span class="op">-</span> predicted_output</span>
<span id="cb60-22"><a href="#cb60-22" aria-hidden="true" tabindex="-1"></a>d_predicted_output <span class="op">=</span> error <span class="op">*</span> sigmoid_derivative(predicted_output)</span>
<span id="cb60-23"><a href="#cb60-23" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb60-24"><a href="#cb60-24" aria-hidden="true" tabindex="-1"></a>error_hidden_layer <span class="op">=</span> d_predicted_output.dot(output_weights.T)</span>
<span id="cb60-25"><a href="#cb60-25" aria-hidden="true" tabindex="-1"></a>d_hidden_layer <span class="op">=</span> error_hidden_layer <span class="op">*</span> sigmoid_derivative(hidden_layer_output)</span>
<span id="cb60-26"><a href="#cb60-26" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb60-27"><a href="#cb60-27" aria-hidden="true" tabindex="-1"></a><span class="co">#Updating Weights and Biases</span></span>
<span id="cb60-28"><a href="#cb60-28" aria-hidden="true" tabindex="-1"></a>output_weights <span class="op">+=</span> hidden_layer_output.T.dot(d_predicted_output)</span>
<span id="cb60-29"><a href="#cb60-29" aria-hidden="true" tabindex="-1"></a>output_bias <span class="op">+=</span> np.<span class="bu">sum</span>(d_predicted_output,axis<span class="op">=</span><span class="dv">0</span>,keepdims<span class="op">=</span><span class="va">True</span>)</span>
<span id="cb60-30"><a href="#cb60-30" aria-hidden="true" tabindex="-1"></a>hidden_weights <span class="op">+=</span> inputs.T.dot(d_hidden_layer)</span>
<span id="cb60-31"><a href="#cb60-31" aria-hidden="true" tabindex="-1"></a>hidden_bias <span class="op">+=</span> np.<span class="bu">sum</span>(d_hidden_layer,axis<span class="op">=</span><span class="dv">0</span>,keepdims<span class="op">=</span><span class="va">True</span>)</span></code></pre></div>
<div class="sourceCode" id="cb62"><pre
class="sourceCode python"><code class="sourceCode python"><span id="cb62-1"><a href="#cb62-1" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> numpy <span class="im">as</span> np</span>
<span id="cb62-2"><a href="#cb62-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb62-3"><a href="#cb62-3" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> sigmoid (x):</span>
<span id="cb62-4"><a href="#cb62-4" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> <span class="dv">1</span><span class="op">/</span>(<span class="dv">1</span> <span class="op">+</span> np.exp(<span class="op">-</span>x))</span>
<span id="cb62-5"><a href="#cb62-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb62-6"><a href="#cb62-6" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> sigmoid_derivative(x):</span>
<span id="cb62-7"><a href="#cb62-7" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> x <span class="op">*</span> (<span class="dv">1</span> <span class="op">-</span> x)</span>
<span id="cb62-8"><a href="#cb62-8" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb62-9"><a href="#cb62-9" aria-hidden="true" tabindex="-1"></a>inputs <span class="op">=</span> np.array([[<span class="dv">0</span>,<span class="dv">0</span>],[<span class="dv">0</span>,<span class="dv">1</span>],[<span class="dv">1</span>,<span class="dv">0</span>],[<span class="dv">1</span>,<span class="dv">1</span>]])</span>
<span id="cb62-10"><a href="#cb62-10" aria-hidden="true" tabindex="-1"></a>expected_output <span class="op">=</span> np.array([[<span class="dv">0</span>],[<span class="dv">1</span>],[<span class="dv">1</span>],[<span class="dv">0</span>]])</span>
<span id="cb62-11"><a href="#cb62-11" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb62-12"><a href="#cb62-12" aria-hidden="true" tabindex="-1"></a>hidden_layer_activation <span class="op">=</span> np.dot(inputs,hidden_weights)</span>
<span id="cb62-13"><a href="#cb62-13" aria-hidden="true" tabindex="-1"></a>hidden_layer_activation <span class="op">+=</span> hidden_bias</span>
<span id="cb62-14"><a href="#cb62-14" aria-hidden="true" tabindex="-1"></a>hidden_layer_output <span class="op">=</span> sigmoid(hidden_layer_activation)</span>
<span id="cb62-15"><a href="#cb62-15" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb62-16"><a href="#cb62-16" aria-hidden="true" tabindex="-1"></a>output_layer_activation <span class="op">=</span> np.dot(hidden_layer_output,output_weights)</span>
<span id="cb62-17"><a href="#cb62-17" aria-hidden="true" tabindex="-1"></a>output_layer_activation <span class="op">+=</span> output_bias</span>
<span id="cb62-18"><a href="#cb62-18" aria-hidden="true" tabindex="-1"></a>predicted_output <span class="op">=</span> sigmoid(output_layer_activation)</span>
<span id="cb62-19"><a href="#cb62-19" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb62-20"><a href="#cb62-20" aria-hidden="true" tabindex="-1"></a><span class="co">#Backpropagation</span></span>
<span id="cb62-21"><a href="#cb62-21" aria-hidden="true" tabindex="-1"></a>error <span class="op">=</span> expected_output <span class="op">-</span> predicted_output</span>
<span id="cb62-22"><a href="#cb62-22" aria-hidden="true" tabindex="-1"></a>d_predicted_output <span class="op">=</span> error <span class="op">*</span> sigmoid_derivative(predicted_output)</span>
<span id="cb62-23"><a href="#cb62-23" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb62-24"><a href="#cb62-24" aria-hidden="true" tabindex="-1"></a>error_hidden_layer <span class="op">=</span> d_predicted_output.dot(output_weights.T)</span>
<span id="cb62-25"><a href="#cb62-25" aria-hidden="true" tabindex="-1"></a>d_hidden_layer <span class="op">=</span> error_hidden_layer <span class="op">*</span> sigmoid_derivative(hidden_layer_output)</span>
<span id="cb62-26"><a href="#cb62-26" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb62-27"><a href="#cb62-27" aria-hidden="true" tabindex="-1"></a><span class="co">#Updating Weights and Biases</span></span>
<span id="cb62-28"><a href="#cb62-28" aria-hidden="true" tabindex="-1"></a>output_weights <span class="op">+=</span> hidden_layer_output.T.dot(d_predicted_output)</span>
<span id="cb62-29"><a href="#cb62-29" aria-hidden="true" tabindex="-1"></a>output_bias <span class="op">+=</span> np.<span class="bu">sum</span>(d_predicted_output,axis<span class="op">=</span><span class="dv">0</span>,keepdims<span class="op">=</span><span class="va">True</span>)</span>
<span id="cb62-30"><a href="#cb62-30" aria-hidden="true" tabindex="-1"></a>hidden_weights <span class="op">+=</span> inputs.T.dot(d_hidden_layer)</span>
<span id="cb62-31"><a href="#cb62-31" aria-hidden="true" tabindex="-1"></a>hidden_bias <span class="op">+=</span> np.<span class="bu">sum</span>(d_hidden_layer,axis<span class="op">=</span><span class="dv">0</span>,keepdims<span class="op">=</span><span class="va">True</span>)</span></code></pre></div>
<h2 id="shoelace-theorem"><a
href="https://artofproblemsolving.com/wiki/index.php/Shoelace_Theorem">Shoelace
Theorem</a></h2>