tCassandraOutput properties for Apache Spark Streaming
These properties are used to configure tCassandraOutput running in the Spark Streaming Job framework.
The Spark Streaming tCassandraOutput component belongs to the Databases family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored. |
Sync columns |
Click this button to retrieve the schema from the previous component connected in the Job. |
Keyspace |
Type in the name of the keyspace into which you want to write data. |
Action on keyspace |
Select the operation you want to perform on the keyspace to be used. |
Column family |
Type in the name of the column family into which you want to write data. |
Action on column family |
Select the operation you want to perform on the column family to be used. This list is available only when you have selected Update, Upsert or Insert from the Action on data drop-down list. |
Action on data |
Select the action you want to perform on the data of the defined table: Insert, Update, Upsert or Delete (a comparable connector-based write is sketched below).
For more advanced actions, use the Advanced settings view. |
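For context only, the following minimal Scala sketch shows what a comparable write looks like with the Spark Cassandra Connector in a Spark Streaming program. The demo.users table, the socket source and the field names are hypothetical, and this is not the code the component generates.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._

// Assumed local setup and a demo.users(id int PRIMARY KEY, name text) table.
val conf = new SparkConf()
  .setAppName("CassandraStreamingWrite")
  .setMaster("local[2]")
  .set("spark.cassandra.connection.host", "127.0.0.1")
val ssc = new StreamingContext(conf, Seconds(5))

// Stand-in source; a real Job would typically read from Kafka or similar.
val users = ssc.socketTextStream("localhost", 9999)
  .map(_.split(','))
  .map(fields => (fields(0).toInt, fields(1)))

// Each micro-batch is written with CQL INSERTs, which are upserts by
// nature: a row with an existing primary key is overwritten, not duplicated.
users.saveToCassandra("demo", "users", SomeColumns("id", "name"))

ssc.start()
ssc.awaitTermination()
```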
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema: select this option to view the schema only. Change to built-in property: select this option to change the schema to Built-in for local changes. Update repository connection: select this option to change the schema stored in the Repository and decide whether to propagate the changes to all the Jobs upon completion.
The schema of this component does not support the Object type and the List type. |
Built-In: You create and store the schema locally for this component only. |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. When the schema to be reused has default values that are integers or functions, ensure that these default values are not enclosed within quotation marks. If they are, you must remove the quotation marks manually. For more information, see the related description of retrieving table schemas in the Talend Studio User Guide. |
Advanced settings
Configuration |
Add the Cassandra properties you need to customize when upserting data into Cassandra, such as the consistency level for writes, which is specified as a numerical value mapping to a Cassandra consistency level.
When a row is added to the table, you need to click the new row in the Property name column to display the list of the available properties and select the property or properties to be customized. For further information about each of these properties, including the consistency level values, see the Tuning section at the following link: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md. |
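As a point of reference, the Configuration table maps onto write-tuning keys documented in the Spark Cassandra Connector reference linked above. The sketch below shows the connector-level equivalents; the values are illustrative only.

```scala
import org.apache.spark.SparkConf

// Illustrative values only; the keys come from the Spark Cassandra
// Connector reference (see the Tuning section linked above).
val conf = new SparkConf()
  .set("spark.cassandra.connection.host", "127.0.0.1")
  // Consistency level applied to writes (e.g. ONE, QUORUM, LOCAL_QUORUM).
  .set("spark.cassandra.output.consistency.level", "LOCAL_QUORUM")
  // Maximum number of rows grouped into a single batch statement.
  .set("spark.cassandra.output.batch.size.rows", "500")
  // Number of batches executed in parallel by each task.
  .set("spark.cassandra.output.concurrent.writes", "5")
```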
Use unlogged batch |
Select this check box to handle data in batches using Cassandra's UNLOGGED approach. This feature is available to the following three actions: Insert, Update and Delete. You then need to configure how the batch mode works.
The ideal situation for using batches with Cassandra is when a small number of tables must synchronize the data to be inserted or updated. With this UNLOGGED approach, the Job does not write batches into Cassandra's batchlog system and thus avoids the performance penalty incurred by that writing. For further information about the Cassandra BATCH statement and the UNLOGGED approach, see Batches. |
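To make the UNLOGGED semantics concrete, here is a minimal sketch of the kind of CQL batch involved, issued through the Spark Cassandra Connector. The demo.events table and the local contact point are assumptions; this is not the statement the component itself generates.

```scala
import org.apache.spark.SparkConf
import com.datastax.spark.connector.cql.CassandraConnector

// Assumed local node and a demo.events(id int, seq int, payload text,
// PRIMARY KEY (id, seq)) table.
val conf = new SparkConf().set("spark.cassandra.connection.host", "127.0.0.1")

CassandraConnector(conf).withSessionDo { session =>
  // An UNLOGGED batch skips the batchlog; grouping writes that share the
  // same partition key keeps the batch on a single replica set.
  session.execute(
    """BEGIN UNLOGGED BATCH
      |INSERT INTO demo.events (id, seq, payload) VALUES (1, 1, 'a');
      |INSERT INTO demo.events (id, seq, payload) VALUES (1, 2, 'b');
      |APPLY BATCH""".stripMargin)
}
```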
Insert if not exists |
Select this check box to insert rows only when they do not exist in the target table. This feature is available to the Insert action only. |
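The underlying CQL construct here is INSERT with IF NOT EXISTS, a lightweight transaction. A minimal sketch, assuming a hypothetical demo.users table and a local node:

```scala
import org.apache.spark.SparkConf
import com.datastax.spark.connector.cql.CassandraConnector

val conf = new SparkConf().set("spark.cassandra.connection.host", "127.0.0.1")

CassandraConnector(conf).withSessionDo { session =>
  // The row is written only when no row with this primary key exists yet.
  session.execute(
    "INSERT INTO demo.users (id, name) VALUES (1, 'Ada') IF NOT EXISTS")
}
```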
Delete if exists |
Select this check box to remove from the target table only the rows that have matching records in the incoming flow. This feature is available to the Delete action only. |
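In CQL terms this corresponds to a conditional DELETE with IF EXISTS. A minimal sketch, again assuming a hypothetical demo.users table:

```scala
import org.apache.spark.SparkConf
import com.datastax.spark.connector.cql.CassandraConnector

val conf = new SparkConf().set("spark.cassandra.connection.host", "127.0.0.1")

CassandraConnector(conf).withSessionDo { session =>
  // The row is removed only if a row with this primary key currently exists.
  session.execute("DELETE FROM demo.users WHERE id = 1 IF EXISTS")
}
```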
Use TTL |
Select this check box to write the TTL data in the target table. In the column list that is displayed, you need to select the column to be used as the TTL column. The DB type of this column must be Int. This feature is available to the Insert action and the Update action only. |
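For comparison, the Spark Cassandra Connector exposes the same per-row TTL idea through TTLOption.perRow, where an Int field carries the expiry in seconds. A sketch under that assumption, with a hypothetical demo.events table:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._
import com.datastax.spark.connector.writer.{TTLOption, WriteConf}

// Assumed demo.events(id int PRIMARY KEY, payload text) table; the "ttl"
// field is an Int holding the time-to-live in seconds, matching the
// component's requirement that the TTL column be of the Int DB type.
val sc = new SparkContext(
  new SparkConf()
    .setAppName("PerRowTTL")
    .setMaster("local[2]")
    .set("spark.cassandra.connection.host", "127.0.0.1"))

case class Event(id: Int, payload: String, ttl: Int)
val events = sc.parallelize(Seq(Event(1, "a", 3600), Event(2, "b", 60)))

// TTLOption.perRow("ttl") reads each row's expiry from the "ttl" field
// instead of applying one constant TTL to the whole write.
events.saveToCassandra("demo", "events",
  SomeColumns("id", "payload"),
  writeConf = WriteConf(ttl = TTLOption.perRow("ttl")))
```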
Use Timestamp |
Select this check box to write the timestamp data in the target table. In the column list that is displayed, you need to select the column to be used to store the timestamp data. The DB type of this column must be BigInt. This feature is available to the following actions: Insert, Update and Delete. |
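The connector-level counterpart is TimestampOption.perRow, where a Long field (matching the BigInt DB type) carries the write time, conventionally in microseconds since the epoch as Cassandra write timestamps are. A sketch under those assumptions:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._
import com.datastax.spark.connector.writer.{TimestampOption, WriteConf}

// Assumed demo.events(id int PRIMARY KEY, payload text) table; "ts" is a
// Long holding the write timestamp in microseconds since the epoch.
val sc = new SparkContext(
  new SparkConf()
    .setAppName("PerRowTimestamp")
    .setMaster("local[2]")
    .set("spark.cassandra.connection.host", "127.0.0.1"))

case class Event(id: Int, payload: String, ts: Long)
val events = sc.parallelize(Seq(Event(1, "a", 1700000000000000L)))

// TimestampOption.perRow("ts") stamps each written row with the value
// carried in that row's "ts" field.
events.saveToCassandra("demo", "events",
  SomeColumns("id", "payload"),
  writeConf = WriteConf(timestamp = TimestampOption.perRow("ts")))
```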
IF condition |
Add the condition to be met for the Update or the Delete action to take place. This condition allows you to be more precise about the columns to be updated or deleted. |
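An IF condition of this kind turns the statement into a conditional (lightweight-transaction) update or delete in CQL terms. A sketch with a hypothetical table and values:

```scala
import org.apache.spark.SparkConf
import com.datastax.spark.connector.cql.CassandraConnector

val conf = new SparkConf().set("spark.cassandra.connection.host", "127.0.0.1")

CassandraConnector(conf).withSessionDo { session =>
  // The update is applied only when the current value of "name" is 'Ada'.
  session.execute(
    "UPDATE demo.users SET name = 'Grace' WHERE id = 1 IF name = 'Ada'")
}
```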
Special assignment operation |
Complete this table to construct advanced Cassandra SET commands that make the Update action more specific, for example, to add a record to the beginning or to a particular position of a given column. In the Update column column of this table, you need to select the column to be updated and then select the operation to be used from the Operation column (the corresponding CQL constructs are sketched below). |
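The CQL collection assignments behind such SET commands look like the following; the table, column and values are hypothetical:

```scala
import org.apache.spark.SparkConf
import com.datastax.spark.connector.cql.CassandraConnector

val conf = new SparkConf().set("spark.cassandra.connection.host", "127.0.0.1")

CassandraConnector(conf).withSessionDo { session =>
  // Prepend to a list column.
  session.execute("UPDATE demo.users SET tags = ['urgent'] + tags WHERE id = 1")
  // Append to a list column.
  session.execute("UPDATE demo.users SET tags = tags + ['archived'] WHERE id = 1")
  // Overwrite the element at a particular position.
  session.execute("UPDATE demo.users SET tags[0] = 'first' WHERE id = 1")
}
```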
Row key in the List type |
Select the column to be used to construct the WHERE clause of Cassandra to perform the Update or the Delete action on only selected rows. The column(s) to be used in this table should be from the set of the Primary key columns of the Cassandra table. |
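Such a WHERE clause typically takes the form of an IN relation on a primary key column; a sketch with hypothetical keys:

```scala
import org.apache.spark.SparkConf
import com.datastax.spark.connector.cql.CassandraConnector

val conf = new SparkConf().set("spark.cassandra.connection.host", "127.0.0.1")

CassandraConnector(conf).withSessionDo { session =>
  // Only the rows whose primary key matches one of the listed values
  // are affected by the Delete action.
  session.execute("DELETE FROM demo.users WHERE id IN (1, 2, 3)")
}
```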
Delete collection column based on position/key |
Select the column to be used as reference to locate the particular row(s) to be removed. This feature is available only to the Delete action. |
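In CQL terms this corresponds to deleting a list element by position or a map entry by key; a sketch with hypothetical columns:

```scala
import org.apache.spark.SparkConf
import com.datastax.spark.connector.cql.CassandraConnector

val conf = new SparkConf().set("spark.cassandra.connection.host", "127.0.0.1")

CassandraConnector(conf).withSessionDo { session =>
  // Remove the first element of the "tags" list column.
  session.execute("DELETE tags[0] FROM demo.users WHERE id = 1")
  // Remove the entry keyed 'theme' from the "prefs" map column.
  session.execute("DELETE prefs['theme'] FROM demo.users WHERE id = 1")
}
```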
Usage
Usage rule |
This component is used as an end component and requires an input link. This component should use one and only one tCassandraConfiguration component present in the same Job to connect to Cassandra. The presence of more than one tCassandraConfiguration component in the same Job causes the execution of the Job to fail. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |