mention scylladb in docs #32

Open · wants to merge 1 commit into base: main
6 changes: 3 additions & 3 deletions README.md
@@ -1,14 +1,14 @@
# Apache Flink Cassandra Connector
# Apache Flink Cassandra(ScyllaDB) Connector

This repository contains the official Apache Flink Cassandra connector.
This repository contains the official Apache Flink Cassandra (ScyllaDB) connector. ScyllaDB is a drop-in replacement for Apache Cassandra, so it works with this connector as well: simply point the connection string at a ScyllaDB cluster instead of a Cassandra one.
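
As a sketch of the drop-in claim above (host, keyspace, and table below are placeholders, not names from this repository), only the contact point changes when targeting ScyllaDB:

```java
// Hypothetical example: the same CassandraSink builder works against ScyllaDB;
// only the contact point differs from a Cassandra setup.
CassandraSink.addSink(resultStream)
        .setQuery("INSERT INTO example.wordcount (word, count) VALUES (?, ?);")
        .setHost("scylla-node.example.com", 9042) // ScyllaDB node instead of a Cassandra node
        .build();
```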

## Apache Flink

Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities.

Learn more about Flink at [https://flink.apache.org/](https://flink.apache.org/)

## Building the Apache Flink Cassandra Connector from Source
## Building the Apache Flink Cassandra(ScyllaDB) Connector from Source

Prerequisites:

22 changes: 15 additions & 7 deletions docs/content.zh/docs/connectors/datastream/cassandra.md
@@ -4,7 +4,9 @@ weight: 4
type: docs
aliases:
- /zh/dev/connectors/cassandra.html
- /zh/dev/connectors/scylladb.html
- /zh/apis/streaming/connectors/cassandra.html
- /zh/apis/streaming/connectors/scylladb.html
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
@@ -25,9 +27,9 @@ specific language governing permissions and limitations
under the License.
-->

# Apache Cassandra Connector
# Apache Cassandra/ScyllaDB Connector

This connector provides sinks that writes data into a [Apache Cassandra](https://cassandra.apache.org/) database.
This connector provides sinks that write data into, and sources that read from, [Apache Cassandra](https://cassandra.apache.org/) or [ScyllaDB](https://www.scylladb.com) databases.

<!--
TODO: Perhaps worth mentioning current DataStax Java Driver version to match Cassandra version on user side.
@@ -45,9 +47,15 @@ There are multiple ways to bring up a Cassandra instance on a local machine:
1. Follow the instructions from [Cassandra Getting Started page](http://cassandra.apache.org/doc/latest/getting_started/index.html).
2. Launch a container running Cassandra from [Official Docker Repository](https://hub.docker.com/_/cassandra/)

## Cassandra Source
## Installing ScyllaDB
There are multiple ways to bring up a ScyllaDB instance on a local machine:

1. Follow the instructions from [ScyllaDB Getting Started page](https://docs.scylladb.com/getting-started/).
2. Launch a container running ScyllaDB from [Official Docker Repository](https://hub.docker.com/r/scylladb/scylla/)

## Cassandra/ScyllaDB Source
Flink provides a [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)
bounded source to read from Cassandra and return a collection of entities as `DataStream<Entity>`.
bounded source to read from Cassandra/ScyllaDB and return a collection of entities as `DataStream<Entity>`.
An entity is built by Cassandra mapper ([MappingManager](https://javadoc.io/static/com.datastax.cassandra/cassandra-driver-mapping/3.11.2/com/datastax/driver/mapping/MappingManager.html))
based on a POJO containing annotations (as described in Cassandra object mapper).
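
The paragraph above can be sketched as follows. Keyspace, table, and query are hypothetical, and the constructor shape follows the connector's FLIP-27 source as documented, so treat this as an illustration rather than a verbatim API reference:

```java
ClusterBuilder clusterBuilder = new ClusterBuilder() {
    @Override
    protected Cluster buildCluster(Cluster.Builder builder) {
        // The contact point may be a Cassandra or a ScyllaDB node
        return builder.addContactPointsWithPorts(
                new InetSocketAddress("localhost", 9042)).build();
    }
};

// Bounded source returning entities mapped from rows of the queried table
CassandraSource<Pojo> source = new CassandraSource<>(
        clusterBuilder,
        Pojo.class,
        "SELECT * FROM example.pojo;",
        () -> new Mapper.Option[] {Mapper.Option.saveNullFields(true)});

DataStream<Pojo> stream = env.fromSource(
        source, WatermarkStrategy.noWatermarks(), "CassandraSource");
```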

@@ -78,11 +86,11 @@ Regarding performance, the source splits table data like this:

If `tableSize` cannot be determined or the previous `numSplits` computation produces too few splits, the source falls back to `numSplits = parallelism`.

## Cassandra Sinks
## Cassandra/ScyllaDB Sinks

### Configurations

Flink's Cassandra sink are created by using the static CassandraSink.addSink(DataStream<IN> input) method.
Flink's Cassandra/ScyllaDB sink is created by using the static `CassandraSink.addSink(DataStream<IN> input)` method.
This method returns a CassandraSinkBuilder, which offers methods to further configure the sink, and finally `build()` the sink instance.
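
For instance, a minimal builder chain might look like this (host and CQL statement are placeholders):

```java
CassandraSink.addSink(input)  // input: e.g. a DataStream<Tuple2<String, Long>>
        .setQuery("INSERT INTO example.wordcount (word, count) VALUES (?, ?);")
        .setHost("127.0.0.1")  // Cassandra or ScyllaDB contact point
        .build();
```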

The following configuration methods can be used:
@@ -231,7 +239,7 @@ result.print().setParallelism(1)
{{< /tabs >}}


### Cassandra Sink Example for Streaming POJO Data Type
### Cassandra/ScyllaDB Sink Example for Streaming POJO Data Type
An example of streaming a POJO data type and storing the same POJO entity back to Cassandra. In addition, this POJO implementation needs to follow the [DataStax Java Driver Manual](http://docs.datastax.com/en/developer/java-driver/2.1/manual/object_mapper/creating/) to annotate the class, as each field of this entity is mapped to an associated column of the designated table using the DataStax Java Driver `com.datastax.driver.mapping.Mapper` class.

The mapping of each table column can be defined through annotations placed on a field declaration in the Pojo class. For details of the mapping, please refer to CQL documentation on [Definition of Mapped Classes](http://docs.datastax.com/en/developer/java-driver/3.1/manual/object_mapper/creating/) and [CQL Data types](https://docs.datastax.com/en/cql/3.1/cql/cql_reference/cql_data_types_c.html)
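
A minimal annotated POJO along these lines (keyspace, table, and column names are illustrative):

```java
@Table(keyspace = "example", name = "wordcount")
public class WordCount {

    @Column(name = "word")
    private String word = "";

    @Column(name = "count")
    private long count = 0;

    public WordCount() {}

    public String getWord() { return word; }
    public void setWord(String word) { this.word = word; }

    public long getCount() { return count; }
    public void setCount(long count) { this.count = count; }
}
```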
24 changes: 16 additions & 8 deletions docs/content/docs/connectors/datastream/cassandra.md
@@ -1,10 +1,12 @@
---
title: Cassandra
title: Cassandra/ScyllaDB
weight: 4
type: docs
aliases:
- /dev/connectors/cassandra.html
- /dev/connectors/scylladb.html
- /apis/streaming/connectors/cassandra.html
- /apis/streaming/connectors/scylladb.html
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
@@ -25,9 +27,9 @@ specific language governing permissions and limitations
under the License.
-->

# Apache Cassandra Connector
# Apache Cassandra/ScyllaDB Connector

This connector provides sinks that writes data into a [Apache Cassandra](https://cassandra.apache.org/) database.
This connector provides sinks that write data into, and sources that read from, [Apache Cassandra](https://cassandra.apache.org/) or [ScyllaDB](https://www.scylladb.com) databases.

<!--
TODO: Perhaps worth mentioning current DataStax Java Driver version to match Cassandra version on user side.
@@ -45,9 +47,15 @@ There are multiple ways to bring up a Cassandra instance on a local machine:
1. Follow the instructions from [Cassandra Getting Started page](http://cassandra.apache.org/doc/latest/getting_started/index.html).
2. Launch a container running Cassandra from [Official Docker Repository](https://hub.docker.com/_/cassandra/)

## Cassandra Source
## Installing ScyllaDB
There are multiple ways to bring up a ScyllaDB instance on a local machine:

1. Follow the instructions from [ScyllaDB Getting Started page](https://docs.scylladb.com/getting-started/).
2. Launch a container running ScyllaDB from [Official Docker Repository](https://hub.docker.com/r/scylladb/scylla/)

## Cassandra/ScyllaDB Source
Flink provides a [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)
bounded source to read from Cassandra and return a collection of entities as `DataStream<Entity>`.
bounded source to read from Cassandra/ScyllaDB and return a collection of entities as `DataStream<Entity>`.
An entity is built by Cassandra mapper ([MappingManager](https://javadoc.io/static/com.datastax.cassandra/cassandra-driver-mapping/3.11.2/com/datastax/driver/mapping/MappingManager.html))
based on a POJO containing annotations (as described in Cassandra object mapper).
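
A sketch of the steps above, with a hypothetical keyspace, table, and query; the constructor shape follows the connector's documented FLIP-27 source, but treat it as an illustration rather than a verbatim API reference:

```java
ClusterBuilder clusterBuilder = new ClusterBuilder() {
    @Override
    protected Cluster buildCluster(Cluster.Builder builder) {
        // The contact point may be a Cassandra or a ScyllaDB node
        return builder.addContactPointsWithPorts(
                new InetSocketAddress("localhost", 9042)).build();
    }
};

// Bounded source returning entities mapped from rows of the queried table
CassandraSource<Pojo> source = new CassandraSource<>(
        clusterBuilder,
        Pojo.class,
        "SELECT * FROM example.pojo;",
        () -> new Mapper.Option[] {Mapper.Option.saveNullFields(true)});

DataStream<Pojo> stream = env.fromSource(
        source, WatermarkStrategy.noWatermarks(), "CassandraSource");
```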

@@ -78,11 +86,11 @@ Regarding performance, the source splits table data like this:

If `tableSize` cannot be determined or the previous `numSplits` computation produces too few splits, the source falls back to `numSplits = parallelism`.

## Cassandra Sinks
## Cassandra/ScyllaDB Sinks

### Configurations

Flink's Cassandra sink are created by using the static CassandraSink.addSink(DataStream<IN> input) method.
Flink's Cassandra/ScyllaDB sink is created by using the static `CassandraSink.addSink(DataStream<IN> input)` method.
This method returns a CassandraSinkBuilder, which offers methods to further configure the sink, and finally `build()` the sink instance.
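
A minimal builder chain along these lines (host and CQL statement are placeholders):

```java
CassandraSink.addSink(input)  // input: e.g. a DataStream<Tuple2<String, Long>>
        .setQuery("INSERT INTO example.wordcount (word, count) VALUES (?, ?);")
        .setHost("127.0.0.1")  // Cassandra or ScyllaDB contact point
        .build();
```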

The following configuration methods can be used:
@@ -231,7 +239,7 @@ result.print().setParallelism(1)
{{< /tabs >}}


### Cassandra Sink Example for Streaming POJO Data Type
### Cassandra/ScyllaDB Sink Example for Streaming POJO Data Type
An example of streaming a POJO data type and storing the same POJO entity back to Cassandra. In addition, this POJO implementation needs to follow the [DataStax Java Driver Manual](http://docs.datastax.com/en/developer/java-driver/2.1/manual/object_mapper/creating/) to annotate the class, as each field of this entity is mapped to an associated column of the designated table using the DataStax Java Driver `com.datastax.driver.mapping.Mapper` class.

The mapping of each table column can be defined through annotations placed on a field declaration in the Pojo class. For details of the mapping, please refer to CQL documentation on [Definition of Mapped Classes](http://docs.datastax.com/en/developer/java-driver/3.1/manual/object_mapper/creating/) and [CQL Data types](https://docs.datastax.com/en/cql/3.1/cql/cql_reference/cql_data_types_c.html)
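
For illustration, a minimal annotated POJO could look like this (keyspace, table, and column names are hypothetical):

```java
@Table(keyspace = "example", name = "wordcount")
public class WordCount {

    @Column(name = "word")
    private String word = "";

    @Column(name = "count")
    private long count = 0;

    public WordCount() {}

    public String getWord() { return word; }
    public void setWord(String word) { this.word = word; }

    public long getCount() { return count; }
    public void setCount(long count) { this.count = count; }
}
```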