Flow content v1 (#4)
# Description of change

~~1st preview build of WIP content:
https://deploy-preview-381--nifty-wozniak-71a44b.netlify.app/hz-flow/5.5-snapshot/~~

2nd preview:
https://deploy-preview-386--nifty-wozniak-71a44b.netlify.app/hz-flow/5.5-snapshot/

# Feedback

You can give feedback with comments or by starting a review as normal.

---------

Co-authored-by: Oliver Howell <[email protected]>
amandalindsay and oliverhowell authored Aug 22, 2024
1 parent 378a694 commit 98c706f
Showing 92 changed files with 2,834 additions and 801 deletions.
6 changes: 4 additions & 2 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,8 +1,10 @@
/node_modules/
/docs/
/docs/*.mdx
/build/
/test/
/docs/*.mdx
.DS_Store
package-lock.json
/pdf-docs/
Gemfile.lock
Gemfile.lock
.vscode
4 changes: 3 additions & 1 deletion .vscode/settings.json
@@ -1 +1,3 @@
{}
{
"asciidoc.antora.enableAntoraSupport": true
}
Binary file added docs/modules/ROOT/images/network-diagram.png
Binary file added docs/modules/ROOT/images/network_diagram_flow.png
80 changes: 42 additions & 38 deletions docs/modules/ROOT/nav.adoc
@@ -1,55 +1,59 @@
.Get started
* xref:index.adoc[Overview]
* xref:introduction:index.adoc[What is {short-product-name}]
* About Taxi
* xref:introduction:index.adoc[What is {short-product-name}?]
* xref:introduction:about-taxi.adoc[About Taxi]
* xref:guides:index.adoc[Tutorials]
** xref:guides:install.adoc[]
** xref:guides:install.adoc[Install {short-product-name}]
** xref:guides:apis-db-kafka.adoc[Connect APIs, a database & Kafka]
** xref:guides:compose.adoc[Compose APIs and database]
** xref:guides:apis-db-kafka.adoc[Create first integration]
** xref:guides:work-with-xml.adoc[Work with XML and JSON]
** xref:guides:streaming-data.adoc[Stream data]
** xref:guides:gen-taxi-from-code.adoc[Generating Taxi from code]
** xref:guides:gen-code-from-taxi.adoc[Generating code from Taxi]
* xref:workspace:overview.adoc[Your workspace]
** xref:workspace:connecting-a-git-repo.adoc[]
** xref:workspace:connecting-a-disk-repo.adoc[]
** xref:workspace:connecting-a-git-repo.adoc[Pull API specs from Git]
** xref:workspace:connecting-a-disk-repo.adoc[Pull API specs from disk]
.Manage data sources
* xref:describing-data-sources:configuring-connections.adoc[]
* xref:describing-data-sources:authentication-to-services.adoc[]
* xref:describing-data-sources:open-api.adoc[]
* xref:describing-data-sources:soap.adoc[]
* xref:describing-data-sources:http.adoc[]
* xref:describing-data-sources:protobuf.adoc[]
* xref:describing-data-sources:databases.adoc[]
* xref:describing-data-sources:hazelcast.adoc[]
* xref:describing-data-sources:mongodb.adoc[]
* xref:describing-data-sources:kafka.adoc[]
* xref:describing-data-sources:aws-services.adoc[]
* xref:describing-data-sources:caching.adoc[]
* xref:describing-data-sources:intro-to-semantic-integration.adoc[]
* xref:describing-data-sources:tips-on-taxonomies.adoc[]
* xref:describing-data-sources:configuring-connections.adoc[Configure connections]
* xref:describing-data-sources:authentication-to-services.adoc[Authenticate to services]
* xref:describing-data-sources:open-api.adoc[Use OpenAPI specs]
* xref:describing-data-sources:soap.adoc[Use SOAP WSDLs]
* xref:describing-data-sources:http.adoc[Work with HTTP services]
* xref:describing-data-sources:protobuf.adoc[Work with Protobuf]
* xref:describing-data-sources:databases.adoc[Describe databases]
* xref:describing-data-sources:hazelcast.adoc[Use a Hazelcast data source]
* xref:describing-data-sources:mongodb.adoc[Describe a Mongo DB]
* xref:describing-data-sources:kafka.adoc[Use a Kafka data source]
* xref:describing-data-sources:aws-services.adoc[Use AWS services]
* xref:describing-data-sources:caching.adoc[Use caching]
.Query data
* xref:querying:writing-queries.adoc[]
* xref:querying:mutations.adoc[]
* xref:querying:kotlin-sdk.adoc[]
* xref:querying:queries-as-endpoints.adoc[]
* xref:querying:observability.adoc[]
* xref:querying:writing-queries.adoc[Query with {short-product-name}]
* xref:querying:mutations.adoc[Perform mutations]
* xref:querying:queries-as-endpoints.adoc[Publish queries]
* xref:querying:observability.adoc[Observe queries]
.Data streams & formats
* xref:streams:streaming-data.adoc[Stream data]
* xref:querying:streams.adoc[Data pipelines]
* xref:data-formats:overview.adoc[Data formats]
* xref:data-formats:avro.adoc[]
* xref:data-formats:csv.adoc[]
* xref:data-formats:json.adoc[]
* xref:data-formats:xml.adoc[]
* xref:data-formats:protobuf.adoc[]
* xref:data-formats:custom-data-formats.adoc[]
** xref:data-formats:avro.adoc[Avro]
** xref:data-formats:csv.adoc[CSV]
** xref:data-formats:json.adoc[JSON]
** xref:data-formats:xml.adoc[XML]
** xref:data-formats:protobuf.adoc[Protobuf]
** xref:data-formats:custom-data-formats.adoc[Custom formats]
.Deployment
* xref:deploying:production-deployments.adoc[]
* xref:deploying:configuring-{short-product-name}.adoc[]
* xref:deploying:managing-secrets.adoc[]
* xref:deploying:authentication.adoc[]
* xref:deploying:data-policies.adoc[]
* xref:deploying:distributing-work-on-a-cluster.adoc[]
* xref:deploying:production-deployments.adoc[Deploy {short-product-name}]
* xref:deploying:configuring.adoc[Configure {short-product-name}]
* xref:deploying:managing-secrets.adoc[Manage secrets]
* xref:deploying:authentication.adoc[Enable authentication]
* xref:deploying:authorization.adoc[Enable authorization]
* xref:deploying:data-policies.adoc[Data policies]
* xref:deploying:distributing-work-on-a-cluster.adoc[Distribute on a cluster]
.Reference
* xref:glossary.adoc[Glossary]
* xref:describing-data-sources:intro-to-semantic-integration.adoc[Semantic integration]
* xref:describing-data-sources:tips-on-taxonomies.adoc[Taxonomy tips]
31 changes: 31 additions & 0 deletions docs/modules/ROOT/pages/glossary.adoc
@@ -0,0 +1,31 @@
= Glossary
:description: Glossary of {short-product-name} terms, acronyms and abbreviations

For a full list of the terms, acronyms and abbreviations used in our documentation, see the https://docs.hazelcast.com/hazelcast/latest/glossary[Hazelcast glossary].

[glossary]
BFF:: Backend-for-frontend.
CLI:: Command-line interface.
Data pipeline:: A series of actions that ingest data from one or more sources and move it to a destination for storage and analysis.

Data source:: The databases, APIs, and messages that fuel Hazelcast Flow.
Hazelcast Flow:: Flow is a data gateway that automates the integration of microservices across your enterprise. Flow accelerates application development by connecting multiple data sources and APIs together, without having to write integration code.
IaC:: Infrastructure as Code.
JWT:: JSON Web Token, an open standard for transmitting information securely between parties as a JSON object.
K8s:: Kubernetes (often abbreviated to K8s), an open-source system that manages and deploys containerized applications.
mTLS:: Mutual TLS (mTLS), a method that ensures the authenticity of the parties at each end of a network connection.
Mutation:: Mutation queries make changes; for example, in performing a task or updating a record.
OIDC:: OpenID Connect, an identity layer built on top of OAuth 2.0.
OOME:: Out of Memory Error.
PKCE:: Proof Key for Code Exchange, an extension used in OAuth 2.0 to improve security for public clients.
Projection:: Projections are a way of taking data from one place, and then transforming and combining it with other data sources.
Query:: A request built using Flow's ability to retrieve and analyze data from different sources across an ecosystem. For example, a query might stitch three services together: a database, API and Kafka streaming data.
SAML:: Security Assertion Markup Language, an open standard by which an identity provider (IdP) authenticates users and passes authentication data to a service provider.
Semantic data type:: A method of encoding data that allows software to discover and map data based upon its meaning rather than its structure.
Serialization:: Process of converting an object into a stream of bytes in order to store it or transmit it to memory, a database, or a file. Its main purpose is to save the state of an object so it can be recreated when needed. The reverse process is called deserialization.
SSE:: Server-sent events.
TaxiQL:: The query language of Taxi, a simple language for describing how data and APIs across an ecosystem relate to one another.
Taxonomy:: The practice of classifying and categorizing data.
Time to live (TTL):: A value that determines how long data is retained, before it is discarded from Flow's internal cache.
Workspace:: A Flow Workspace is a collection of schemas, API specs and Taxi projects that describe your data sources and the data and capabilities they provide.
WSDL:: Web Services Description Language.
45 changes: 25 additions & 20 deletions docs/modules/ROOT/pages/index.adoc
@@ -1,36 +1,41 @@
= Quickstart
:description: Connect all your APIs & data sources dynamically, without writing integration code.
= Overview
:description: Connect all your APIs and data sources dynamically, without writing integration code.

{long-product-name} automates integration for your microservices.
Welcome to {long-product-name}.

Get started right now, by spinning up {short-product-name} on your machine
{short-product-name} is a data gateway that automates integration for your microservices. Using {short-product-name}, you can automate the integration, transformation and discovery of data from different data sources (APIs, databases, message brokers) across your enterprise.

[,bash]
----
curl https://start.{code-product-name}.com > docker-compose.yml
docker compose up -d
----
{short-product-name} integrates on-the-fly, automatically adjusting as your data sources change.

Then visit http://localhost:9022 in your browser.
This is powered by rich semantic schemas, which infer how data across your organization links together, and which automate the integration and discovery of data.

== What is {long-product-name}?
image:network_diagram_flow.png[]

{short-product-name} is a data gateway that automates the integration, transformation and discovery of data from data sources (API's, databases, message brokers) across your enterprise.
These topics are aimed at developers and architects who want to harness the power of {short-product-name}, and administrators deploying and integrating {short-product-name} within an organization.

{short-product-name} integrates on-the-fly, automatically adjusting as your data sources change.
== Quickstart

This is powered by rich semantic schemas, which infer how data across your organisation links together, and automate the integration and discovery of data.
Get started right now, by spinning up {short-product-name} on your machine:

<ImageWithCaption src=\{NetworkDiagram} addLightBackground/>
[,bash]
----
curl https://start.flow.com > docker-compose.yml
docker compose up -d
----

== Quick Links
Then visit http://localhost:9021 in your browser.

Once you have {short-product-name} running locally, connect your microservices, and start querying for data+++<QuickLinks>++++++<QuickLink title="Connecting data sources" icon="connect" href="docs/describing-data-sources/configuring-connections" description="Connect your APIs, Databases and Message Queues to {short-product-name}.">++++++</QuickLink>+++ +++<QuickLink title="Querying" icon="query" href="/docs/querying/writing-queries" description="Query for data through {short-product-name}'s API, and let {short-product-name} handle the integration plumbing for you.">++++++</QuickLink>+++ +++<QuickLink title="Follow a guide" icon="guides" href="/docs/guides" description="A handful of guides to help get productive with {short-product-name}">++++++</QuickLink>+++ +++<QuickLink title="Head to production" icon="production" href="/docs/deploying/production-deployments" description="Deploy {short-product-name} on your Kubernetes cluster or using Docker Compose">++++++</QuickLink>++++++</QuickLinks>+++
== Next steps

== Getting help
Once you have {short-product-name} running locally, connect your microservices, and start querying for data.

Stuck? Need help? Have an idea? We'd love to hear from your.
* xref:describing-data-sources:configuring-connections.adoc[Connect data sources] - Connect your APIs, Databases and Message Queues to {short-product-name}
* xref:querying:writing-queries.adoc[Query data] - Query for data through {short-product-name}'s API, and let it handle the integration plumbing for you
* xref:deploying:production-deployments.adoc[Head to production] - Deploy {short-product-name} on your Kubernetes cluster or using Docker Compose
* xref:guides:index.adoc[Tutorials] - Use our step-by-step tutorials to quickly get productive with {short-product-name}

You can connect with the {short-product-name} team in a number of ways
== Getting help

Need help? Have an idea? We'd love to hear from you.

You can connect with the {short-product-name} team at https://support.hazelcast.com/s/[Hazelcast Support].
123 changes: 123 additions & 0 deletions docs/modules/connecting-data-sources/pages/connecting-a-database.adoc
@@ -0,0 +1,123 @@
= Connecting a database
:description: Learn how to create, modify and remove connections to databases

== Overview
{short-product-name} can connect to databases to fetch data when running a query.

This topic explains how to create database connections, both through the user interface, and through {short-product-name}'s config files.

== Using the UI
The UI allows you to create new connections to databases, as well as see a list of the connections already configured.

NOTE: Currently, only creating new connections in the UI is supported. To edit and remove existing connections,
modify the xref:connecting-a-database.adoc#using-a-configuration-file[config file].

=== Create a new connection
There are two ways to add a new database connection in {short-product-name}:

==== Create a connection in the Connection Manager
The Connection Manager is where you create and view connections.

* Click on the *Connection Manager* icon in the left-hand navigation menu
* Specify a connection name
* Select the database type from the list of possible connections
* Based on the database you select, a list of required parameters is shown
* Fill out the form
* Click *Test connection*
* If the connection test is successful, click *Create*

image:connection-manager.png[]

The connection is created, and written to the config file.

==== Create a connection when importing a new data source
You can also create a connection inline when specifying a new data source.

* From the home page, click *Add a data source*.
* Alternatively, click *Schema Explorer* in the left-hand navigation menu, then click *Add new*
* For the schema type to import, select `Database table`

image:new_data_source_flow.png[]

* Click the connection name drop-down
* Select *Add new connection...* A pop-up form appears, allowing you to create a connection
* Specify a connection name
* Select the database type from the list of possible connections. Based on the database you select, a list of required parameters is shown
* Fill out the form
* Click *Test connection*
* If the connection test is successful, click *Create*

After the connection is created, the pop-up form closes and your new connection is populated in the schema form.

image:create-connection-popup.png[]


=== Required permissions
To view, create or edit database connections through the UI, users must have the following permissions granted.

|===
| Activity | Required permission

| View the list of database connections
| `VIEW_CONNECTIONS`

| Create or modify a database connection
| `EDIT_CONNECTIONS`
|===

For more information about role-based security, see the topic on xref:deploying:authorization.adoc[authorization].

== Using a configuration file
All the connections configured for {short-product-name} are stored in a config file, including any that you configure through the UI.

By default, this file is called `connections.conf`, and is located in the `conf/` directory beneath the directory from which you launch {short-product-name}.
If this file does not exist when {short-product-name} is launched, it's created the first time a connection is created via the UI.

You can specify the name of the configuration file when launching {short-product-name}, by setting the parameter `{short-product-name}.connections.config-file`.
The same configuration file is used for all connection types, not just databases; for example, Kafka connections are stored here too.

=== Defining a database connection

```hocon
jdbc { # The root element for database connections
another-connection { # Defines a connection called "another-connection"
connectionName = another-connection # The name of the connection. Must match the key used above.
jdbcDriver = POSTGRES # Defines the driver to use. See below for the possible options
connectionParameters { ## A list of connection parameters. The actual values here are defined by the driver selected.
database = transactions # The name of the database
host = our-db-server # The host of the database
password = super-secret # The password
port = "2003" # The port
username = jack # The username to connect with
}
}
}
```

For the full specification, including supported database drivers and their connection details, see xref:describing-data-sources:configuring-connections.adoc[Configure connections].
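The comment in the example above notes that `connectionName` must match the key it is nested under. As an illustrative sketch only (this is not {short-product-name}'s actual loader), the following Python snippet checks that invariant against a parsed form of the config:

```python
def check_connection_names(jdbc_block: dict) -> None:
    """Verify each connection's connectionName matches its enclosing key."""
    for key, conn in jdbc_block.items():
        name = conn.get("connectionName")
        if name != key:
            raise ValueError(
                f"connection {key!r} declares connectionName {name!r}; they must match"
            )

# Parsed form of the HOCON example above (HOCON parsing itself is omitted)
config = {
    "another-connection": {
        "connectionName": "another-connection",
        "jdbcDriver": "POSTGRES",
        "connectionParameters": {"database": "transactions", "host": "our-db-server"},
    }
}
check_connection_names(config)  # a mismatch would raise ValueError
```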

=== Passing sensitive data
It may not always be desirable to specify sensitive connection information directly in the config file, especially
if the file is checked into source control.

Environment variables can be used anywhere in the config file, following the https://github.com/lightbend/config#uses-of-substitutions[HOCON standards].

For example:

```HOCON
jdbc {
another-connection {
connectionName = another-connection
jdbcDriver = POSTGRES # Defines the driver to use. See below for the possible options
connectionParameters {
      # .. other params omitted for brevity ..
password = ${postgres_password} # Reads the environment variable "postgres_password"
}
}
}
```
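To illustrate how this kind of `${VAR}` substitution behaves, here is a minimal Python sketch. It is not {short-product-name}'s actual resolver (which follows the full HOCON specification); it simply replaces placeholders with environment variable values:

```python
import os
import re

def resolve_env_substitutions(text: str) -> str:
    """Replace HOCON-style ${VAR} placeholders with environment variable values."""
    def lookup(match: re.Match) -> str:
        name = match.group(1)
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"environment variable {name!r} is not set")
        return value
    return re.sub(r"\$\{(\w+)\}", lookup, text)

os.environ["postgres_password"] = "super-secret"  # normally set outside the process
print(resolve_env_substitutions("password = ${postgres_password}"))
# prints: password = super-secret
```

A missing variable raises an error rather than silently leaving the placeholder in place, which mirrors how unresolved required substitutions fail at startup.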

