
Commit

document convolve
vmchale committed Aug 18, 2024
1 parent 858abbb commit d3487a9
Showing 3 changed files with 73 additions and 32 deletions.
22 changes: 22 additions & 0 deletions doc/apple-by-example.md
@@ -410,6 +410,28 @@ Use `->n` to access the `n`th element of a tuple, viz.
4
```

## Convolve

Convolve `(⨳ {m,n})` is like [dyadic infix](#dyadic-infix) for higher-rank
windows: it applies a function to each sliding window of the given shape. With
a rank-1 window `{n}` it agrees with dyadic infix `` \`n ``, as the two
examples below show.

```
> [(⋉)/x] ⨳ {3} ⟨_1,0,4,_2,3,3,1,1,0,_5.0⟩
Vec 8 [4.0, 4.0, 4.0, 3.0, 3.0, 3.0, 1.0, 1.0]
```

```
> [(⋉)/x] \`3 ⟨_1,0,4,_2,3,3,1,1,0,_5.0⟩
Vec 8 [4.0, 4.0, 4.0, 3.0, 3.0, 3.0, 1.0, 1.0]
```

```
> ([((+)/* 0 (x::Arr (2 × 3) float))%ℝ(:x)] ⨳ {2,3}) ⟨⟨1.0,2,0,0⟩,⟨_1,_2,3,4⟩,⟨5,6,3,1⟩,⟨3,1,1,3⟩⟩
Arr (3×2) [ [0.5, 1.1666666666666665]
, [2.333333333333333, 2.5]
, [3.1666666666666665, 2.5] ]
```
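
For readers more familiar with NumPy, the examples above might be sketched as
follows. This is an illustration, not part of Apple; `sliding_window_view`
requires NumPy ≥ 1.20, and the variable names are ours:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

x = np.array([-1, 0, 4, -2, 3, 3, 1, 1, 0, -5.0])
# (⋉)/x over each length-3 window: a sliding maximum, 10 - 3 + 1 = 8 results
print(sliding_window_view(x, 3).max(axis=1))
# [4. 4. 4. 3. 3. 3. 1. 1.]

m = np.array([[1.0, 2, 0, 0],
              [-1, -2, 3, 4],
              [5, 6, 3, 1],
              [3, 1, 1, 3]])
# mean over each 2x3 window: a 3x2 result, matching Arr (3×2) above
print(sliding_window_view(m, (2, 3)).mean(axis=(2, 3)))
```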

## REPL Functionality

### Load
7 changes: 7 additions & 0 deletions doc/user-guide.md
@@ -39,6 +39,13 @@ Then:
source("R/apple.R")
```

### Editor Integration

There is a [Vim plugin](https://github.com/vmchale/apple/tree/canon/vim) and a
[VSCode extension](https://marketplace.visualstudio.com/items?itemName=vmchale.apple). The Vim plugin provides digraphs for entering Apple's special characters, which may be helpful; see `:h apple`.

The file extension is `.🍎` or `.🍏`.

## Python Extension Module

To JIT compile a function:
76 changes: 44 additions & 32 deletions docs/index.html
@@ -280,6 +280,7 @@ <h1 class="title">Apple by Example</h1>
<li><a href="#random-numbers" id="toc-random-numbers">Random
Numbers</a></li>
<li><a href="#tuples" id="toc-tuples">Tuples</a></li>
<li><a href="#convolve" id="toc-convolve">Convolve</a></li>
<li><a href="#repl-functionality" id="toc-repl-functionality">REPL
Functionality</a>
<ul>
@@ -592,6 +593,17 @@ <h2 id="tuples">Tuples</h2>
#t
&gt; (1.0,#t,4::int)-&gt;3
4</code></pre>
<h2 id="convolve">Convolve</h2>
<p>Convolve <code>(⨳ {m,n})</code> is like <a
href="#dyadic-infix">dyadic infix</a> for higher-rank windows: it applies a
function to each sliding window of the given shape. With a rank-1 window
<code>{n}</code> it agrees with dyadic infix <code>\`n</code>, as the two
examples below show.</p>
<pre><code> &gt; [(⋉)/x] ⨳ {3} ⟨_1,0,4,_2,3,3,1,1,0,_5.0⟩
Vec 8 [4.0, 4.0, 4.0, 3.0, 3.0, 3.0, 1.0, 1.0]</code></pre>
<pre><code> &gt; [(⋉)/x] \`3 ⟨_1,0,4,_2,3,3,1,1,0,_5.0⟩
Vec 8 [4.0, 4.0, 4.0, 3.0, 3.0, 3.0, 1.0, 1.0]</code></pre>
<pre><code> &gt; ([((+)/* 0 (x::Arr (2 × 3) float))%ℝ(:x)] ⨳ {2,3}) ⟨⟨1.0,2,0,0⟩,⟨_1,_2,3,4⟩,⟨5,6,3,1⟩,⟨3,1,1,3⟩⟩
Arr (3×2) [ [0.5, 1.1666666666666665]
, [2.333333333333333, 2.5]
, [3.1666666666666665, 2.5] ]</code></pre>
<h2 id="repl-functionality">REPL Functionality</h2>
<h3 id="load">Load</h3>
<pre><code> &gt; :yank chisqcdf math/chisqcdf.🍎
Expand Down Expand Up @@ -680,38 +692,38 @@ <h2 id="train-neural-network">Train Neural Network</h2>
}</code></pre>
<p>This is equivalent to the <a
href="https://towardsdatascience.com/implementing-the-xor-gate-using-backpropagation-in-neural-networks-c1f255b4f20d">Python</a>:</p>
<div class="sourceCode" id="cb54"><pre
class="sourceCode python"><code class="sourceCode python"><span id="cb54-1"><a href="#cb54-1" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> numpy <span class="im">as</span> np</span>
<span id="cb54-2"><a href="#cb54-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb54-3"><a href="#cb54-3" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> sigmoid (x):</span>
<span id="cb54-4"><a href="#cb54-4" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> <span class="dv">1</span><span class="op">/</span>(<span class="dv">1</span> <span class="op">+</span> np.exp(<span class="op">-</span>x))</span>
<span id="cb54-5"><a href="#cb54-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb54-6"><a href="#cb54-6" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> sigmoid_derivative(x):</span>
<span id="cb54-7"><a href="#cb54-7" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> x <span class="op">*</span> (<span class="dv">1</span> <span class="op">-</span> x)</span>
<span id="cb54-8"><a href="#cb54-8" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb54-9"><a href="#cb54-9" aria-hidden="true" tabindex="-1"></a>inputs <span class="op">=</span> np.array([[<span class="dv">0</span>,<span class="dv">0</span>],[<span class="dv">0</span>,<span class="dv">1</span>],[<span class="dv">1</span>,<span class="dv">0</span>],[<span class="dv">1</span>,<span class="dv">1</span>]])</span>
<span id="cb54-10"><a href="#cb54-10" aria-hidden="true" tabindex="-1"></a>expected_output <span class="op">=</span> np.array([[<span class="dv">0</span>],[<span class="dv">1</span>],[<span class="dv">1</span>],[<span class="dv">0</span>]])</span>
<span id="cb54-11"><a href="#cb54-11" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb54-12"><a href="#cb54-12" aria-hidden="true" tabindex="-1"></a>hidden_layer_activation <span class="op">=</span> np.dot(inputs,hidden_weights)</span>
<span id="cb54-13"><a href="#cb54-13" aria-hidden="true" tabindex="-1"></a>hidden_layer_activation <span class="op">+=</span> hidden_bias</span>
<span id="cb54-14"><a href="#cb54-14" aria-hidden="true" tabindex="-1"></a>hidden_layer_output <span class="op">=</span> sigmoid(hidden_layer_activation)</span>
<span id="cb54-15"><a href="#cb54-15" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb54-16"><a href="#cb54-16" aria-hidden="true" tabindex="-1"></a>output_layer_activation <span class="op">=</span> np.dot(hidden_layer_output,output_weights)</span>
<span id="cb54-17"><a href="#cb54-17" aria-hidden="true" tabindex="-1"></a>output_layer_activation <span class="op">+=</span> output_bias</span>
<span id="cb54-18"><a href="#cb54-18" aria-hidden="true" tabindex="-1"></a>predicted_output <span class="op">=</span> sigmoid(output_layer_activation)</span>
<span id="cb54-19"><a href="#cb54-19" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb54-20"><a href="#cb54-20" aria-hidden="true" tabindex="-1"></a><span class="co">#Backpropagation</span></span>
<span id="cb54-21"><a href="#cb54-21" aria-hidden="true" tabindex="-1"></a>error <span class="op">=</span> expected_output <span class="op">-</span> predicted_output</span>
<span id="cb54-22"><a href="#cb54-22" aria-hidden="true" tabindex="-1"></a>d_predicted_output <span class="op">=</span> error <span class="op">*</span> sigmoid_derivative(predicted_output)</span>
<span id="cb54-23"><a href="#cb54-23" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb54-24"><a href="#cb54-24" aria-hidden="true" tabindex="-1"></a>error_hidden_layer <span class="op">=</span> d_predicted_output.dot(output_weights.T)</span>
<span id="cb54-25"><a href="#cb54-25" aria-hidden="true" tabindex="-1"></a>d_hidden_layer <span class="op">=</span> error_hidden_layer <span class="op">*</span> sigmoid_derivative(hidden_layer_output)</span>
<span id="cb54-26"><a href="#cb54-26" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb54-27"><a href="#cb54-27" aria-hidden="true" tabindex="-1"></a><span class="co">#Updating Weights and Biases</span></span>
<span id="cb54-28"><a href="#cb54-28" aria-hidden="true" tabindex="-1"></a>output_weights <span class="op">+=</span> hidden_layer_output.T.dot(d_predicted_output)</span>
<span id="cb54-29"><a href="#cb54-29" aria-hidden="true" tabindex="-1"></a>output_bias <span class="op">+=</span> np.<span class="bu">sum</span>(d_predicted_output,axis<span class="op">=</span><span class="dv">0</span>,keepdims<span class="op">=</span><span class="va">True</span>)</span>
<span id="cb54-30"><a href="#cb54-30" aria-hidden="true" tabindex="-1"></a>hidden_weights <span class="op">+=</span> inputs.T.dot(d_hidden_layer)</span>
<span id="cb54-31"><a href="#cb54-31" aria-hidden="true" tabindex="-1"></a>hidden_bias <span class="op">+=</span> np.<span class="bu">sum</span>(d_hidden_layer,axis<span class="op">=</span><span class="dv">0</span>,keepdims<span class="op">=</span><span class="va">True</span>)</span></code></pre></div>
<div class="sourceCode" id="cb57"><pre
class="sourceCode python"><code class="sourceCode python"><span id="cb57-1"><a href="#cb57-1" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> numpy <span class="im">as</span> np</span>
<span id="cb57-2"><a href="#cb57-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb57-3"><a href="#cb57-3" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> sigmoid (x):</span>
<span id="cb57-4"><a href="#cb57-4" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> <span class="dv">1</span><span class="op">/</span>(<span class="dv">1</span> <span class="op">+</span> np.exp(<span class="op">-</span>x))</span>
<span id="cb57-5"><a href="#cb57-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb57-6"><a href="#cb57-6" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> sigmoid_derivative(x):</span>
<span id="cb57-7"><a href="#cb57-7" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> x <span class="op">*</span> (<span class="dv">1</span> <span class="op">-</span> x)</span>
<span id="cb57-8"><a href="#cb57-8" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb57-9"><a href="#cb57-9" aria-hidden="true" tabindex="-1"></a>inputs <span class="op">=</span> np.array([[<span class="dv">0</span>,<span class="dv">0</span>],[<span class="dv">0</span>,<span class="dv">1</span>],[<span class="dv">1</span>,<span class="dv">0</span>],[<span class="dv">1</span>,<span class="dv">1</span>]])</span>
<span id="cb57-10"><a href="#cb57-10" aria-hidden="true" tabindex="-1"></a>expected_output <span class="op">=</span> np.array([[<span class="dv">0</span>],[<span class="dv">1</span>],[<span class="dv">1</span>],[<span class="dv">0</span>]])</span>
<span id="cb57-11"><a href="#cb57-11" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb57-12"><a href="#cb57-12" aria-hidden="true" tabindex="-1"></a>hidden_layer_activation <span class="op">=</span> np.dot(inputs,hidden_weights)</span>
<span id="cb57-13"><a href="#cb57-13" aria-hidden="true" tabindex="-1"></a>hidden_layer_activation <span class="op">+=</span> hidden_bias</span>
<span id="cb57-14"><a href="#cb57-14" aria-hidden="true" tabindex="-1"></a>hidden_layer_output <span class="op">=</span> sigmoid(hidden_layer_activation)</span>
<span id="cb57-15"><a href="#cb57-15" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb57-16"><a href="#cb57-16" aria-hidden="true" tabindex="-1"></a>output_layer_activation <span class="op">=</span> np.dot(hidden_layer_output,output_weights)</span>
<span id="cb57-17"><a href="#cb57-17" aria-hidden="true" tabindex="-1"></a>output_layer_activation <span class="op">+=</span> output_bias</span>
<span id="cb57-18"><a href="#cb57-18" aria-hidden="true" tabindex="-1"></a>predicted_output <span class="op">=</span> sigmoid(output_layer_activation)</span>
<span id="cb57-19"><a href="#cb57-19" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb57-20"><a href="#cb57-20" aria-hidden="true" tabindex="-1"></a><span class="co">#Backpropagation</span></span>
<span id="cb57-21"><a href="#cb57-21" aria-hidden="true" tabindex="-1"></a>error <span class="op">=</span> expected_output <span class="op">-</span> predicted_output</span>
<span id="cb57-22"><a href="#cb57-22" aria-hidden="true" tabindex="-1"></a>d_predicted_output <span class="op">=</span> error <span class="op">*</span> sigmoid_derivative(predicted_output)</span>
<span id="cb57-23"><a href="#cb57-23" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb57-24"><a href="#cb57-24" aria-hidden="true" tabindex="-1"></a>error_hidden_layer <span class="op">=</span> d_predicted_output.dot(output_weights.T)</span>
<span id="cb57-25"><a href="#cb57-25" aria-hidden="true" tabindex="-1"></a>d_hidden_layer <span class="op">=</span> error_hidden_layer <span class="op">*</span> sigmoid_derivative(hidden_layer_output)</span>
<span id="cb57-26"><a href="#cb57-26" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb57-27"><a href="#cb57-27" aria-hidden="true" tabindex="-1"></a><span class="co">#Updating Weights and Biases</span></span>
<span id="cb57-28"><a href="#cb57-28" aria-hidden="true" tabindex="-1"></a>output_weights <span class="op">+=</span> hidden_layer_output.T.dot(d_predicted_output)</span>
<span id="cb57-29"><a href="#cb57-29" aria-hidden="true" tabindex="-1"></a>output_bias <span class="op">+=</span> np.<span class="bu">sum</span>(d_predicted_output,axis<span class="op">=</span><span class="dv">0</span>,keepdims<span class="op">=</span><span class="va">True</span>)</span>
<span id="cb57-30"><a href="#cb57-30" aria-hidden="true" tabindex="-1"></a>hidden_weights <span class="op">+=</span> inputs.T.dot(d_hidden_layer)</span>
<span id="cb57-31"><a href="#cb57-31" aria-hidden="true" tabindex="-1"></a>hidden_bias <span class="op">+=</span> np.<span class="bu">sum</span>(d_hidden_layer,axis<span class="op">=</span><span class="dv">0</span>,keepdims<span class="op">=</span><span class="va">True</span>)</span></code></pre></div>
<h2 id="shoelace-theorem"><a
href="https://artofproblemsolving.com/wiki/index.php/Shoelace_Theorem">Shoelace
Theorem</a></h2>
