tCacheIn properties for Apache Spark Streaming
These properties are used to configure tCacheIn running in the Spark Streaming Job framework.
The Spark Streaming tCacheIn component belongs to the Processing family.
This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:
View schema: select this option to view the schema only.
Change to built-in property: select this option to change the schema to Built-In for local changes.
Update repository connection: select this option to change the schema stored in the Repository and decide whether to propagate the changes to all the Jobs. |
Built-In: You create and store the schema locally for this component only. |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Output cache |
Select the tCacheOut component from which tCacheIn reads the RDD cache. |
Usage
Usage rule |
This component is used as a start component and requires an output link. It is used along with tCacheOut: at each iteration, tCacheOut stores the input data as a cache so that tCacheIn can read that cache without having to compute the whole Spark DAG (Directed Acyclic Graph, the model Spark uses to schedule Spark actions) again; see the caching sketch after this table. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. A code-level sketch of such a connection follows this table. |
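For readers who want to relate the tCacheOut/tCacheIn pattern to Spark itself, the sketch below shows the underlying RDD persistence mechanism in plain Java Spark code. It is a minimal illustration only, not the code Talend generates; the class name, input data, and local master URL are placeholder assumptions.

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    // Minimal sketch of the caching pattern behind tCacheOut/tCacheIn.
    public class CacheSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("CacheSketch").setMaster("local[*]");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // An expensive transformation chain: without caching, every action
            // below would re-execute the whole DAG from the source data.
            JavaRDD<Integer> expensive = sc
                    .parallelize(Arrays.asList(1, 2, 3, 4, 5))
                    .map(x -> x * x);

            // tCacheOut's role: materialize the RDD in memory once.
            expensive.cache();

            // tCacheIn's role: later reads reuse the cached partitions
            // instead of recomputing the map() step.
            long count = expensive.count();
            int sum = expensive.reduce(Integer::sum);

            System.out.println("count=" + count + ", sum=" + sum);
            sc.stop();
        }
    }

The same persist-once, read-many idea applies to the micro-batches of a Spark Streaming Job.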
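Likewise, the following is a minimal code-level sketch of what the connection defined in the Spark Configuration tab amounts to. The Job name, master URL, jar path, and staging directory are hypothetical placeholders, not values Talend generates.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    // Sketch only: every value below is a placeholder assumption.
    public class ConnectionSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("MyTalendJob")                          // hypothetical Job name
                    .setMaster("yarn")                                  // or spark://host:7077, local[*], ...
                    .setJars(new String[] { "/tmp/myjob/lib/job.jar" }) // dependent jar files shipped with the Job
                    .set("spark.yarn.stagingDir",
                         "hdfs:///user/talend/staging");                // directory the jar files are transferred to

            JavaSparkContext sc = new JavaSparkContext(conf);
            // ... Job logic ...
            sc.stop();
        }
    }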