forked from raystack/firehose
Commit: docs: move docs to docusaurus (raystack#172)

* docs: revamp docs
* docs: move docs to docusaurus

Showing 91 changed files with 9,054 additions and 12,869 deletions.
(One changed file was deleted; its contents are not shown.)

**New file** (+33 lines) — a GitHub Actions workflow named `docs`:
```yaml
name: docs

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  documentation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - name: Installation
        uses: bahmutov/npm-install@v1
        with:
          install-command: yarn
          working-directory: docs
      - name: Build docs
        working-directory: docs
        run: cd docs && yarn build
      - name: Deploy docs
        env:
          GIT_USER: ravisuhag
          GIT_PASS: ${{ secrets.DOCU_RS_TOKEN }}
          DEPLOYMENT_BRANCH: gh-pages
          CURRENT_BRANCH: master
        working-directory: docs
        run: |
          git config --global user.email "[email protected]"
          git config --global user.name "ravisuhag"
          yarn deploy
```
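The build-and-deploy steps in the workflow above can be reproduced locally with roughly the following commands. This is a sketch, not part of the commit: it assumes the repository has a Docusaurus project in `./docs` with a `deploy` script in its `package.json` (which is what the workflow's `yarn deploy` invokes), and that a GitHub token with push access is available in `$GIT_PASS`.

```shell
# Hypothetical local equivalent of the workflow's build and deploy steps.
# Assumes ./docs contains the Docusaurus site and $GIT_PASS holds a push token.
cd docs
yarn               # install dependencies (the workflow uses bahmutov/npm-install)
yarn build         # generate the static site into docs/build
GIT_USER="ravisuhag" DEPLOYMENT_BRANCH="gh-pages" yarn deploy
```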
(One changed file was deleted; its contents are not shown.)

**New file** (+20 lines) — a `.gitignore` for the Docusaurus site:
```gitignore
# Dependencies
/node_modules

# Production
/build

# Generated files
.docusaurus
.cache-loader

# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*
```
**Modified file** (-51/+33 lines) — the docs landing page is replaced with the standard Docusaurus website README.

Removed content (the old Firehose introduction):

````markdown
# Introduction

Firehose is a cloud-native service for delivering real-time streaming data to destinations such as service endpoints (HTTP or GRPC) & managed databases (MongoDB, Prometheus, Postgres, InfluxDB, Redis, & ElasticSearch). With Firehose, you don't need to write applications or manage resources. It automatically scales to match the throughput of your data and requires no ongoing administration. If your data is present in Kafka, Firehose delivers it to the destination (SINK) that you specify.

![](.gitbook/assets/overview%20%283%29.svg)

## Key Features

Discover why users choose Firehose as their main Kafka consumer.

* **Sinks:** Firehose supports sinking stream data to log console, HTTP, GRPC, Postgres (JDBC), InfluxDB, Elasticsearch, Redis, Prometheus, and MongoDB.
* **Scale:** Firehose scales in an instant, both vertically and horizontally, for high-performance streaming sinks with zero data drops.
* **Extensibility:** Add your own sink to Firehose with a clearly defined interface, or choose from the ones already provided.
* **Runtime:** Firehose can run inside containers or VMs in a fully managed runtime environment like Kubernetes.
* **Metrics:** Always know what's going on with your deployment, with built-in monitoring of throughput, response times, errors, and more.

## Supported Sinks

The following sinks are supported in Firehose:

* [Log](https://en.wikipedia.org/wiki/Log_file) - standard output
* [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) - HTTP services
* [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) - Postgres DB
* [InfluxDB](https://en.wikipedia.org/wiki/InfluxDB) - a time-series database
* [Redis](https://en.wikipedia.org/wiki/Redis) - an in-memory key-value store
* [ElasticSearch](https://en.wikipedia.org/wiki/Elasticsearch) - a search database
* [GRPC](https://en.wikipedia.org/wiki/GRPC) - GRPC-based services
* [Prometheus](https://en.wikipedia.org/wiki/Prometheus_%28software%29) - a time-series database
* [MongoDB](https://en.wikipedia.org/wiki/MongoDB) - a NoSQL database
* [Bigquery](https://cloud.google.com/bigquery) - a data warehouse provided by Google Cloud
* [Blob Storage](https://gocloud.dev/howto/blob/) - storage for large stores of unstructured data, such as Google Cloud Storage, Amazon S3, and the Apache Hadoop Distributed File System

## How is Firehose different from Kafka Connect?

* **Ease of use:** Firehose is easier to install, and using a different sink only requires changing a few configurations. Kafka Connect, when used in distributed mode across multiple nodes, requires connectors to be installed on all the workers within the cluster.
* **Filtering:** Value-based filtering is much easier to implement than in Kafka Connect, and requires no additional plugins or schema registry.
* **Extensible:** Firehose provides a comprehensible abstract sink contract, making it easier to add a new sink. It also comes with built-in serialization/deserialization and doesn't require any converters or serializers when implementing a new sink.
* **Easy monitoring:** Firehose provides a detailed health dashboard (Grafana) for effortless monitoring.
* **Connectors:** The available Kafka Connect connectors usually have limitations; it is rare to find all the required features in a single connector, and equally rare to find documentation for them.
* **Fully open-source:** Firehose is completely open-source, while the separation of commercial and open-source features in Kafka Connect is not very structured; monitoring and advanced features via Confluent Control Center require an enterprise subscription.

## How can I get started?

Explore the following resources to get started with Firehose:

* [Guides](guides/overview.md) provide guidance on creating Firehose with different sinks.
* [Concepts](concepts/README.md) describe all important Firehose concepts.
* [FAQs](reference/faq/index.md) list common frequently asked questions about Firehose and related components.
* [Reference](reference/configuration/) contains details about configurations, metrics, and other aspects of Firehose.
* [Contributing](contribute/contribution.md) contains resources for anyone who wants to contribute to Firehose.
````

Added content (the new Docusaurus website README):

````markdown
# Website

This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.

### Installation

```
$ yarn
```

### Local Development

```
$ yarn start
```

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.

### Build

```
$ yarn build
```

This command generates static content into the `build` directory and can be served using any static content hosting service.

### Deployment

```
$ GIT_USER=<Your GitHub username> USE_SSH=true yarn deploy
```

If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
````
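For `yarn deploy` to know where to push, Docusaurus reads the deployment target from the site's `docusaurus.config.js`. The commit diff does not show that file, so the following is a hedged sketch of the relevant fields only, with values inferred from the repository (`raystack/firehose`) and the workflow's `gh-pages` branch; treat every value as an assumption, not as the actual configuration.

```javascript
// Hypothetical excerpt of docs/docusaurus.config.js — the fields `docusaurus deploy` uses.
// All values below are assumptions inferred from the repo name and workflow, not from the diff.
module.exports = {
  url: 'https://raystack.github.io',   // assumed GitHub Pages host
  baseUrl: '/firehose/',               // assumed project-pages path
  organizationName: 'raystack',        // GitHub org/user that owns the repo
  projectName: 'firehose',             // repository name
  deploymentBranch: 'gh-pages',        // matches DEPLOYMENT_BRANCH in the workflow
};
```

With these fields set, `GIT_USER=<Your GitHub username> yarn deploy` builds the site and pushes `build/` to the `gh-pages` branch of `organizationName/projectName`.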
(Additionally, one file was deleted and one binary file was changed; their contents are not shown.)