tDataUnmasking properties for Apache Spark Streaming
These properties are used to configure tDataUnmasking running in the Spark Streaming Job framework.
The Spark Streaming tDataUnmasking component belongs to the Data Quality family.
Basic settings
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Sync columns to retrieve the schema from the previous component connected in the Job. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema, Change to built-in property, and Update repository connection.
The output schema of this component contains one read-only column, ORIGINAL_MARK. This column indicates whether a record is an original record (true) or a substitute record (false). |
Built-In: You create and store the schema locally for this component only. |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Modifications |
Define in the table what fields to unmask and how to unmask them:
Input Column: Select the column from the input flow that contains the data to be unmasked. You can unmask all data masked with tDataMasking using the FF1 with AES or FF1 with SHA-2 method combined with a user-defined password. These modifications are based on the function you select in the Function column.
Category: Select a category of unmasking functions from the list.
Function: Select the function that will unmask the data. The functions you can select from the Function list depend on the data type of the input column.
Method: From this list, select the Format-Preserving Encryption (FPE) algorithm that was used to mask the data, FF1 with AES or FF1 with SHA-2. The FF1 with AES method is based on the Advanced Encryption Standard in CBC mode. The FF1 with SHA-2 method depends on the secure hash function HMAC-256. Java 8u161 is the minimum required version to use the FF1 with AES method. To use this FPE method with Java versions earlier than 8u161, download the Java Cryptography Extension (JCE) unlimited strength jurisdiction policy files from the Oracle website. To unmask data, the FF1 with AES and FF1 with SHA-2 methods require the password specified in Password or 256-bit key for FF1 methods when the data was masked with the tDataMasking component. When using the Character handling functions, such as Replace all, Replace characters between two positions, and Replace all digits, with FPE methods, you must select an alphabet. From the Alphabet list, select the alphabet used to mask the data with the tDataMasking component.
Extra Parameter: This field is used by some of the functions and is disabled when not applicable. When applicable, enter a number or a letter to decide the behavior of the function you have selected.
Keep format: This option applies only to Strings. Select this check box to keep the input format when using the Bank Account Unmasking, Credit Card Unmasking, Phone Unmasking and SSN Unmasking categories. That is to say, if there are spaces, dots ('.'), hyphens ('-') or slashes ('/') in the input, those characters are kept in the output. If you select this check box when using Phone Unmasking functions, the characters that are not numbers from the input are copied to the output as is. |
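The FPE methods above are reversible only with the original password, and Keep format passes separator characters through untouched. The following Python sketch illustrates those two properties with a toy password-derived digit substitution. This is not the FF1 algorithm (FF1 is a NIST-specified Feistel construction over AES or HMAC) and none of these function names are Talend APIs; it only demonstrates what "format-preserving" and "reversible with the same password" mean:

```python
import hashlib
import random

def digit_permutation(password: str) -> list:
    # Derive a deterministic permutation of the digits 0-9 from the password.
    seed = int.from_bytes(hashlib.sha256(password.encode("utf-8")).digest(), "big")
    digits = list("0123456789")
    random.Random(seed).shuffle(digits)
    return digits

def mask(value: str, password: str) -> str:
    perm = digit_permutation(password)
    # Non-digit characters (spaces, '.', '-', '/') pass through unchanged,
    # mirroring the "Keep format" behavior described above.
    return "".join(perm[int(c)] if c.isdigit() else c for c in value)

def unmask(value: str, password: str) -> str:
    perm = digit_permutation(password)
    inverse = {d: str(i) for i, d in enumerate(perm)}
    # Only the same password reproduces the permutation, so only the same
    # password inverts the substitution correctly.
    return "".join(inverse[c] if c.isdigit() else c for c in value)
```

For example, masking "4111-1111-1111-1111" keeps the hyphens in place and replaces only the digits, and unmasking with the same password restores the original value.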
Advanced settings
FF1 settings |
Password or 256-bit key for FF1 methods: To unmask data, the FF1 with AES and FF1 with SHA-2 methods require the password or secret key specified in Password or 256-bit key for FF1 methods when the data was masked with the tDataMasking component.
Use tweaks: Select this check box if tweaks were generated while masking the data. When selected, the Column containing tweaks list is displayed. A tweak allows you to unmask all the data of a record.
Column containing the tweaks: Available when the Use tweaks check box is selected. Select the column that contains the tweaks. If you do not see it, make sure you have declared in the input component the tweaks generated by the masking component.
Key derivation function: Select the same key derivation function that was used to mask the data. By default, PBKDF2 with 300,000 iterations is selected. |
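The default key derivation, PBKDF2 with 300,000 iterations, stretches the configured password into the 256-bit key that the FF1 methods consume. A minimal sketch of that derivation using Python's standard library (the explicit salt parameter here is an illustrative assumption; Studio manages salting internally):

```python
import hashlib

def derive_ff1_key(password: str, salt: bytes, iterations: int = 300_000) -> bytes:
    # PBKDF2-HMAC-SHA256 with 300,000 iterations produces a 256-bit (32-byte)
    # key. The same password, salt, and iteration count always yield the same
    # key, which is why unmasking must use the same settings as masking.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt,
                               iterations, dklen=32)
```

A different password (or a different iteration count) yields a different key, so data masked under one configuration cannot be unmasked under another.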
Output the original row |
Select this check box to output the original (masked) data rows in addition to the unmasked rows. Having both data rows can be useful in debug or test processes. |
Null input returns null |
This check box is selected by default. When selected, the component outputs null when input values are null. When cleared, the unmasking function is applied to null input data.
From Talend Studio R2024-08 onwards, when Null input returns null is selected and the input data is null, the unmasking function is not applied, null is returned, and the input data are sent to the main flow. |
Empty input returns an empty output |
When this check box is selected, the component returns the input values if they are empty. Otherwise, the selected functions are applied to the input data. |
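Taken together, the two options above define a simple pass-through policy for null and empty inputs before any unmasking function runs. A sketch of that decision logic (the function and parameter names are illustrative, not Talend APIs):

```python
def apply_unmasking(value, unmask_fn,
                    null_returns_null=True, empty_returns_empty=True):
    # "Null input returns null": a null (None) input is passed through
    # untouched and the unmasking function is not applied.
    if value is None:
        return None if null_returns_null else unmask_fn(value)
    # "Empty input returns an empty output": an empty input is returned as is.
    if value == "" and empty_returns_empty:
        return ""
    # Otherwise the selected unmasking function is applied.
    return unmask_fn(value)
```

For example, with both options enabled, `None` and `""` are forwarded unchanged while any other value goes through the unmasking function.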
Send invalid data to "Invalid" output flow |
This check box is selected by default. When selected, data that cannot be unmasked are sent to the "Invalid" output flow. |
Usage
Usage rule |
This component is used as an intermediate step. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, because the Job expects its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them. This connection is effective on a per-Job basis. |