tNLPPredict properties for Apache Spark Batch
These properties are used to configure tNLPPredict running in the Spark Batch Job framework.
The Spark Batch tNLPPredict component belongs to the Natural Language Processing family.
The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.
Basic settings
Schema and Edit Schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Sync columns to retrieve the schema from the previous component connected in the Job. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema, Change to built-in property, and Update repository connection.
Read-only columns are added to the output schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
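The schema described here is conceptually the structure of the rows that flow through the Spark Job. The following is a minimal PySpark sketch, not code generated by the Studio; the column names original_text and token are hypothetical examples of an original text column and a token column, shown in one possible layout:

```python
# Minimal PySpark sketch (not Studio-generated code).
# The column names "original_text" and "token" are hypothetical examples,
# shown here in one possible layout (one token per row).
from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StructType, StringType

spark = SparkSession.builder.appName("schema_sketch").getOrCreate()

input_schema = StructType([
    StructField("original_text", StringType(), nullable=True),
    StructField("token", StringType(), nullable=True),
])

df = spark.createDataFrame(
    [("Spark runs the prediction.", "Spark"),
     ("Spark runs the prediction.", "runs")],
    schema=input_schema,
)
df.printSchema()
```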
Define a storage configuration component
Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS. If you leave this check box clear, the target file system is the local system. The configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system.
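As an illustration of the difference this check box makes, the plain PySpark sketch below (not Studio-generated code; the host name and paths are hypothetical) contrasts a path resolved against the local file system with a fully qualified HDFS location reached through a configured namenode:

```python
# Plain PySpark sketch (not Studio-generated code); host and paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fs_sketch").getOrCreate()

# Without a storage configuration component, the data stays on the local file system.
local_df = spark.read.text("file:///tmp/nlp/input.txt")

# With an HDFS configuration (for example, tHDFSConfiguration) providing the
# connection details, the same data is read from the cluster instead.
hdfs_df = spark.read.text("hdfs://namenode:8020/user/talend/nlp/input.txt")
```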
Original text column
Select the column in the input schema that holds the original text to be labeled.
Token column
Select the column used for feature construction and prediction. |
Additional Features
Select this check box to add additional features to the Additional feature template. When you add features, their order must be the same as the order of the additional features used in the tNLPModel component to generate the model file.
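The order matters because the model file was generated with the features in a fixed sequence. The hypothetical Python sketch below (not the Talend implementation) only illustrates the constraint:

```python
# Hypothetical sketch (not the Talend implementation): the prediction-time
# feature list must line up with the list used in tNLPModel when the model
# file was generated.
training_feature_order = ["pos_tag", "is_capitalized", "word_shape"]
prediction_feature_order = ["pos_tag", "is_capitalized", "word_shape"]

assert training_feature_order == prediction_feature_order, (
    "Additional features must be listed in the same order as in tNLPModel"
)
```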
NLP model path
Set the path to the folder from which you want to retrieve the model files. If the model is stored in a single file, select the Use the model file check box and set the path to the model file, for example: "/opt/model/<model_name>".
If you want to store the model in a specific file system, for example S3 or HDFS, you must use the corresponding configuration component in the Job and select the Define a storage configuration component check box in the component Basic settings. Use the configuration component that corresponds to the file system to be used.
The button for browsing does not work with the Spark Local mode; if you are using the other Spark Yarn modes that the Studio supports with your distribution, ensure that you have correctly configured the connection in a configuration component in the same Job, such as tHDFSConfiguration.
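To make the accepted path forms concrete, the values below are hypothetical examples only: a folder of model files, a single model file when Use the model file is selected, and a scheme-qualified HDFS location used together with a storage configuration component.

```python
# Hypothetical path examples only; adapt them to your environment.

# Default: the path points to the folder that holds the model files.
nlp_model_path = "/opt/model/"

# With the "Use the model file" check box selected, the path points to one file;
# "my_model.crf" is a made-up name standing in for your actual <model_name>.
nlp_model_file = "/opt/model/my_model.crf"

# Model stored on HDFS: requires a tHDFSConfiguration component in the Job and
# the "Define a storage configuration component" check box selected.
nlp_model_path_hdfs = "hdfs://namenode:8020/user/talend/models/"
```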
Usage
Usage rule
This component is used as an intermediate step. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.
Spark Batch Connection
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis.
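Outside the Studio, the closest plain-Spark equivalent of this configuration is setting the cluster master and the dependent jar files on the Spark session. The sketch below is generic PySpark with hypothetical values, not the code the Studio generates:

```python
# Generic PySpark sketch (not Studio-generated code); the master URL and jar
# locations are hypothetical.
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = (
    SparkConf()
    .setAppName("spark_batch_job_sketch")
    .setMaster("yarn")  # connection to the Spark cluster for the whole Job
    # Dependent jar files the Job needs; Spark ships them to the executors.
    .set(
        "spark.jars",
        "hdfs://namenode:8020/user/talend/lib/dep1.jar,"
        "hdfs://namenode:8020/user/talend/lib/dep2.jar",
    )
)

spark = SparkSession.builder.config(conf=conf).getOrCreate()
```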