
Commit

deploy: 368bbe3
j-mendez committed Dec 4, 2023
1 parent 4038bf1 commit 1b1f8d2
Showing 4 changed files with 30 additions and 2 deletions.
14 changes: 14 additions & 0 deletions print.html
@@ -379,6 +379,20 @@ <h2 id="storing-and-exporting-data"><a class="header" href="#storing-and-exporti
// currently the only export method; the file path is optional (data goes to storage by default)
await website.exportJsonlData(&quot;./storage/test.jsonl&quot;);
</code></pre>
<h2 id="stop-crawl"><a class="header" href="#stop-crawl">Stop crawl</a></h2>
<p>To stop a crawl, use <code>website.stopCrawl(id)</code>: pass the crawl ID to stop a specific run, or leave it empty to stop all active crawls.</p>
<pre><code class="language-ts">const website = new Website(&quot;https://choosealicense.com&quot;);

const onPageEvent = (_err, page) =&gt; {
  console.log(page);
// stop the concurrent crawl when 8 pages are found.
if (website.size &gt;= 8) {
website.stop();
}
};

await website.crawl(onPageEvent);
</code></pre>
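The stop-after-N-pages pattern above can also be sketched without the crawler library itself. The <code>MockWebsite</code> class below is a hypothetical stand-in for the real <code>Website</code> API (and runs its page loop synchronously for clarity), but it shows the same flag-and-check flow:

```typescript
// Sketch of the stop-after-N-pages pattern, using a hypothetical
// MockWebsite stand-in for the real Website class (assumption: the
// real crawl is async; this mock runs synchronously for clarity).
class MockWebsite {
  size = 0;
  private stopped = false;

  // Mirrors website.stop(): flags the crawl loop to halt.
  stop(): void {
    this.stopped = true;
  }

  // Invokes the page callback for each "page" until stopped.
  crawl(onPage: (err: Error | null, page: { url: string }) => void): void {
    for (let i = 0; i < 100 && !this.stopped; i++) {
      this.size += 1;
      onPage(null, { url: `https://choosealicense.com/page-${i}` });
    }
  }
}

const website = new MockWebsite();
website.crawl((_err, _page) => {
  // stop the crawl once 8 pages have been seen
  if (website.size >= 8) {
    website.stop();
  }
});
console.log(website.size); // 8
```

Because the stop flag is only checked between pages, a page already being processed still completes; the same is true of the concurrent crawl in the real API.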
<div style="break-before: page; page-break-before: always;"></div><h1 id="page"><a class="header" href="#page">Page</a></h1>
<p>A single page on a website, useful if you just need the root URL.</p>
<h2 id="new-page"><a class="header" href="#new-page">New Page</a></h2>
2 changes: 1 addition & 1 deletion searchindex.js

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion searchindex.json

Large diffs are not rendered by default.

14 changes: 14 additions & 0 deletions website.html
@@ -306,6 +306,20 @@ <h2 id="storing-and-exporting-data"><a class="header" href="#storing-and-exporti

// currently the only export method; the file path is optional (data goes to storage by default)
await website.exportJsonlData(&quot;./storage/test.jsonl&quot;);
</code></pre>
<h2 id="stop-crawl"><a class="header" href="#stop-crawl">Stop crawl</a></h2>
<p>To stop a crawl, use <code>website.stopCrawl(id)</code>: pass the crawl ID to stop a specific run, or leave it empty to stop all active crawls.</p>
<pre><code class="language-ts">const website = new Website(&quot;https://choosealicense.com&quot;);

const onPageEvent = (_err, page) =&gt; {
  console.log(page);
// stop the concurrent crawl when 8 pages are found.
if (website.size &gt;= 8) {
website.stop();
}
};

await website.crawl(onPageEvent);
</code></pre>

</main>
