
tMapROjaiOutput Standard properties

These properties are used to configure tMapROjaiOutput running in the Standard Job framework.

The Standard tMapROjaiOutput component belongs to the Databases NoSQL family.

The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.

Basic settings

Distribution and Version

Select the version of your MapR cluster. This cluster must host the MapR-DB database to be used.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

  • Built-In: You create and store the schema locally for this component only.

  • Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.

  • View schema: choose this option to view the schema only.

  • Change to built-in property: choose this option to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change the schema stored in the repository and decide whether to propagate the changes to all the Jobs upon completion.

    If you just want to propagate the changes to the current Job, you can select No upon completion and choose this schema metadata again in the Repository Content window.

This component supports the Document type. If a field holds entire documents, select Document in the Type column for this field in the schema editor.

Click Sync columns to retrieve the schema from the previous component connected in the Job.

Use kerberos authentication

If you are accessing a MapR-DB database through OJAI that runs with Kerberos security, select this check box, then enter the Kerberos principal name and password in the fields that are displayed.

Every time you launch your Job, the component submits this authentication information to Kerberos to obtain a new kinit ticket.

Use a keytab to authenticate

Select the Use a keytab to authenticate check box to log into a Kerberos-enabled system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored on the machine on which your Job actually runs, for example, on a Talend JobServer.

Note that the user who executes a keytab-enabled Job is not necessarily the one the principal designates, but that user must have read permission on the keytab file being used. For example, if the username you use to execute the Job is user1 and the principal to be used is guest, ensure that user1 has the right to read the keytab file.
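The read-permission requirement above can be verified before launching the Job. A minimal sketch, assuming a POSIX file system; the keytab path shown is hypothetical:

```python
import os

def can_use_keytab(keytab_path: str) -> bool:
    """Return True if the current OS user can read the keytab file.

    The user running the Job need not match the Kerberos principal,
    but must be able to read the keytab file itself.
    """
    return os.path.isfile(keytab_path) and os.access(keytab_path, os.R_OK)

# Hypothetical path; adjust to where the keytab is stored on the JobServer.
# can_use_keytab("/etc/security/keytabs/guest.keytab")
```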

Table

Enter the name of the table to be processed.

Action on table

Select an operation to be performed on the table defined.

  • None: No operation is carried out.

  • Drop and create table: The table is removed and created again.

  • Create table: The table is created. The table must not already exist.

  • Create table if does not exist: The table is created if it does not exist.

  • Drop table if exist and create: The table is removed if it already exists and created again.

  • Truncate: The table content is deleted.

Action on data

Select an action to be performed on data of the table defined.

  • Insert: Add new entries to the table. If duplicates are found, the Job stops.

  • Replace: If the table already contains data, delete all the existing data and insert the new data. If the table is empty, insert the new data.

  • Insert or Replace: Replace the documents whose IDs exist in both the database and the data to be written, and insert the documents whose IDs do not yet exist in the database.

  • Delete: Remove the entries corresponding to the input flow.
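The four actions differ mainly in how they treat documents whose ID already exists in the table. A minimal in-memory sketch of these semantics, where a dict keyed by _id stands in for the MapR-DB table; this is an illustration, not the component's actual implementation:

```python
def apply_action(table: dict, rows: list, action: str) -> dict:
    """Illustrate Insert / Replace / Insert or Replace / Delete semantics.

    `table` maps _id -> document; `rows` is the incoming flow.
    """
    if action == "Insert":
        for row in rows:
            if row["_id"] in table:
                # Duplicates stop the Job.
                raise ValueError("duplicate _id: %s" % row["_id"])
            table[row["_id"]] = row
    elif action == "Replace":
        table.clear()  # delete all existing data first
        for row in rows:
            table[row["_id"]] = row
    elif action == "Insert or Replace":
        for row in rows:
            table[row["_id"]] = row  # upsert by document ID
    elif action == "Delete":
        for row in rows:
            table.pop(row["_id"], None)  # remove matching entries
    return table
```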

Bulk write

Select this check box to insert, update or remove data in bulk.

In the Bulk write size field, enter the size of each query group to be processed by MapR-DB.
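Bulk writing amounts to grouping the incoming rows into query batches of the configured size before sending each batch to MapR-DB. A sketch of that grouping; the batch size of 3 is arbitrary:

```python
def batches(rows, bulk_write_size):
    """Yield successive groups of at most `bulk_write_size` rows."""
    for i in range(0, len(rows), bulk_write_size):
        yield rows[i:i + bulk_write_size]

# list(batches(list(range(7)), 3)) -> [[0, 1, 2], [3, 4, 5], [6]]
```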

Mapping

Each column of the schema defined for this component represents a field of the documents to be read. In this table, you need to specify the parent nodes of these fields, if any.

For example, consider the following document:

   {
      _id: ObjectId("5099803df3f4948bd2f98391"),
      person: { first: "Joe", last: "Walker" }
   }

The first and last fields have person as their parent node, while the _id field does not have any parent node. Once completed, this Mapping table should read as follows:

Column     Parent node path
_id
first      "person"
last       "person"
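In effect, the Mapping table tells the component where to nest each flat schema column when building the document. A sketch of that reconstruction, reusing the column and path names from the example above; the dotted-path handling is an assumption for illustration:

```python
def build_document(row: dict, parent_paths: dict) -> dict:
    """Nest flat schema columns under their parent node paths."""
    doc = {}
    for column, value in row.items():
        path = parent_paths.get(column, "")
        node = doc
        if path:
            for part in path.split("."):  # e.g. nested paths like "a.b"
                node = node.setdefault(part, {})
        node[column] = value
    return doc

row = {"_id": "5099803df3f4948bd2f98391", "first": "Joe", "last": "Walker"}
paths = {"first": "person", "last": "person"}
# build_document(row, paths)
# -> {"_id": "5099803df3f4948bd2f98391",
#     "person": {"first": "Joe", "last": "Walker"}}
```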

Die on error

Select this check box to stop the execution of the Job when an error occurs. This check box is cleared by default, meaning that rows on error are skipped and the process is completed for error-free rows.
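With the check box cleared, the behavior amounts to a per-row try/skip loop. A hedged sketch, where process() is a placeholder for the actual write operation:

```python
def write_rows(rows, process, die_on_error=False):
    """Process each row; skip failures unless die_on_error is set."""
    written, rejected = 0, 0
    for row in rows:
        try:
            process(row)
            written += 1
        except Exception:
            if die_on_error:
                raise  # stop the Job on the first error
            rejected += 1  # skip the row and keep going
    return written, rejected
```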

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.

NB_LINE_REJECTED: the number of rows rejected. This is an After variable and it returns an integer.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.

For more information about variables, see Using contexts and variables.

Usage

Usage rule

tMapROjaiOutput executes the action defined on the documents in a given MapR-DB database based on the flow incoming from the preceding component in your Job.
