
tDataEncrypt properties for Apache Spark Streaming

These properties are used to configure tDataEncrypt running in the Spark Streaming Job framework.

The Spark Streaming tDataEncrypt component belongs to the Data Quality family.

The component in this framework is available in Talend Real-Time Big Data Platform and in Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word "line" when naming the fields.

Click Sync columns to retrieve the schema from the previous component connected in the Job.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion. If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

 

Built-In: You create and store the schema locally for this component only.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
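
In the Java code that the Studio generates for the Job, a schema of this kind corresponds to a plain row class with one public field per column. A minimal sketch for a hypothetical two-column schema (the class and field names below are illustrative, not generated names you should rely on):

    // Illustrative sketch of the row class behind a two-column schema.
    // The names row1Struct, customer_id, and email are hypothetical.
    public class row1Struct implements java.io.Serializable {
        public String customer_id; // column 1, type String
        public String email;       // column 2, type String
    }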

Password

This value must be enclosed in double quotes.

When using an existing cryptographic file, enter the password required to use this file.

When generating a cryptographic file, enter the password used to encrypt this file.

This password is required to decrypt the data using the tDataDecrypt component.
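
This section does not spell out how the password is turned into an encryption key. A common approach for password-protecting such a file is to derive a symmetric key with PBKDF2 from the standard Java API; the sketch below shows the general technique only, and the salt handling and iteration count are illustrative assumptions, not confirmed tDataEncrypt internals:

    // Sketch: deriving an AES key from a password with PBKDF2
    // (standard javax.crypto API). The salt and iteration count are
    // illustrative assumptions, not documented component behavior.
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    public class DeriveKey {
        public static SecretKeySpec fromPassword(char[] password, byte[] salt) throws Exception {
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            PBEKeySpec spec = new PBEKeySpec(password, salt, 65536, 128);
            byte[] keyBytes = factory.generateSecret(spec).getEncoded();
            return new SecretKeySpec(keyBytes, "AES");
        }
    }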

Cryptographic file path

When using an existing cryptographic file, enter the path to the cryptographic file. It must be enclosed in double quotes.

When generating a cryptographic file, enter the destination file path. This must be a local file path. You can enter two types of values:
  • A context variable, for example: context.mycontext
  • A file path enclosed in double quotes: "C:/user/cryptofiles/mycryptofile"

This cryptographic file is encrypted using AES-GCM.

This cryptographic file is required to decrypt the data using the tDataDecrypt component.

For more information about the cryptographic file, see the data encryption process.
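
AES-GCM corresponds to the standard Java AES/GCM/NoPadding cipher transformation. As a point of reference, the sketch below encrypts a byte array with that transformation; the 12-byte IV and 128-bit authentication tag are conventional GCM parameters assumed for illustration, not documented tDataEncrypt internals:

    // Sketch: AES-GCM encryption with the standard Java API.
    // The 12-byte IV and 128-bit tag are conventional, assumed values.
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class GcmEncrypt {
        public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(plaintext);
            // Prepend the IV so the decrypting side can read it back.
            byte[] out = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return out;
        }
    }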

Generate cryptographic file

Click this button to generate the cryptographic file.

In the dialog box, select the cryptographic method used to encrypt the input data:
  • AES, which is a 128-bit block cipher standardized by the National Institute of Standards and Technology (NIST).
  • Blowfish, which is a 64-bit unpatented block cipher developed by Bruce Schneier.
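
Both options are symmetric block ciphers for which the standard Java KeyGenerator API can produce keys. A minimal sketch, assuming a 128-bit key size (a common default that both ciphers accept, not a documented component setting):

    // Sketch: generating a symmetric key for either dialog option.
    // The 128-bit key size is an assumed, common default.
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class GenerateKey {
        // method is "AES" or "Blowfish", matching the dialog box options
        public static SecretKey generate(String method) throws Exception {
            KeyGenerator generator = KeyGenerator.getInstance(method);
            generator.init(128);
            return generator.generateKey();
        }
    }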

Encryption

Select the corresponding Encrypt check boxes to encrypt input columns.

You can encrypt all column data types, except Dynamic, but the output encrypted data is of String type.

Configure the output schema of the component to set the type of the columns to be encrypted to String.

Columns that are not selected are not encrypted and are returned as-is by the component.
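
Because cipher output is raw bytes, encrypted values must be rendered as text to fit the String columns of the output schema. The sketch below uses Base64 as one common byte-to-String encoding; the encoding tDataEncrypt actually applies is not specified in this section:

    // Sketch: rendering cipher output (raw bytes) as a String column value.
    // Base64 is an assumed encoding, used here for illustration only.
    import java.util.Base64;

    public class CiphertextAsString {
        public static String encode(byte[] ciphertext) {
            return Base64.getEncoder().encodeToString(ciphertext);
        }
    }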

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Usage

Usage rule

This component is usually used as an intermediate component, and it requires an input component and an output component.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, because the Job needs its dependent jar files at execution time, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using Qubole, add a tS3Configuration to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration Apache Spark Streaming or tS3Configuration Apache Spark Streaming.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.
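
In Yarn mode, Spark itself exposes a staging location for application files through the standard spark.yarn.stagingDir property. Whether the Studio maps the field described above to exactly this property is not stated here, so the sketch below is illustrative only; the HDFS path and application name are placeholders:

    // Sketch: pointing a Spark application at a jar staging directory
    // in Yarn mode. The HDFS path and application name are placeholders.
    import org.apache.spark.SparkConf;

    public class StagingDirExample {
        public static SparkConf configure() {
            return new SparkConf()
                    .setAppName("tDataEncryptStreamingJob")
                    .set("spark.yarn.stagingDir", "hdfs:///user/talend/staging");
        }
    }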
