tRunJob Standard properties
These properties are used to configure tRunJob running in the Standard Job framework.
The Standard tRunJob component belongs to the System and the Orchestration families.
The component in this framework is available in all Talend products.
The tRunJob component is supported with limitations when used within Data Services and Routes (with cTalendJob): only S4 (Minor) support cases are accepted, no patches are provided, and assistance is offered on a "best effort" basis only. In most cases the problems encountered are class loading issues, which can sometimes be resolved but not always.
This is because tRunJob is not designed to work in a Service/Route style (ESB) deployment, so regular support is not provided if you decide to use it there, even though it may work in many cases. If you have used tRunJob in this way in the past, it is recommended that you change your Job design to use Joblets instead.
For Data Integration (DI) and other non-ESB use cases, tRunJob remains a valuable and fully supported component.
Basic settings
Schema and Edit Schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: view the schema, change it to a built-in schema, or update the repository connection.
This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see Dynamic schema. The dynamic schema feature is designed for retrieving the unknown columns of a table and should be used for that purpose only; it is not recommended for creating tables.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Copy Child Job Schema | Click to fetch the child Job schema. |
Use dynamic job | Select this check box to allow multiple Jobs to be called and processed. When this option is enabled, only the latest version of the Jobs can be called and processed, and an independent process is used to run the subJob. The Context and the Use an independent process to run subJob options disappear. |
Context job | This field is visible only when the Use dynamic job option is selected. Enter the name of the Job that you want to call, from the list of selected Jobs. |
Job | Select the Job to be called and processed. Make sure the called Job has already been executed at least once beforehand, to ensure that it runs smoothly through tRunJob. |
Version | Select the child Job version that you want to use. |
Context | If you defined contexts and variables for the Job to be called by this component, select the applicable context entry from the list. |
Use an independent process to run subJob | Select this check box to run the subJob in an independent process. This helps to solve issues related to memory limits. |
Die on child error | Clear this check box to execute the parent Job even if an error occurs when executing the child Job. |
Transmit whole context | Select this check box to get all the context variables from the parent Job. Deselect it to get all the context variables from the child Job. If this check box is selected when the parent and child Jobs have the same context variables defined, the variable values of the parent Job are used during the child Job execution, unless values are also defined in the Context Param table, in which case the Context Param values are used instead. |
Context Param | You can change the value of selected context parameters. Click the [+] button to add the parameters defined in the Context view of the child Job. For more information on context parameters, see Using contexts and variables. The values defined here are used during the child Job execution even if Transmit whole context is selected (see the sketch after this table). |
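The following is a minimal sketch, written in plain Java rather than Talend-generated code, of the precedence rule described for Transmit whole context and Context Param: values transmitted from the parent context reach the child Job, and any entry in the Context Param table overrides them for that run. The variable names db_url and batch_size are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch (not Talend generated code) of how context values reach the child
// Job: Context Param entries override values transmitted from the parent,
// even when "Transmit whole context" is selected.
public class ContextParamPrecedence {
    public static void main(String[] args) {
        Map<String, String> parentContext = new HashMap<>();
        parentContext.put("db_url", "jdbc:mysql://prod-host:3306/sales"); // hypothetical variable
        parentContext.put("batch_size", "500");

        // Entries added in the Context Param table of tRunJob.
        Map<String, String> contextParamTable = new HashMap<>();
        contextParamTable.put("batch_size", "50"); // override for this child run

        // Effective child context: parent values first, then table overrides.
        Map<String, String> childContext = new HashMap<>(parentContext);
        childContext.putAll(contextParamTable);

        // batch_size is now "50"; db_url keeps the parent value.
        System.out.println(childContext);
    }
}
```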
Advanced settings
Propagate the child result to the output schema | Select this check box to propagate the output data stored in the buffer memory via the tBufferOutput component in the child Job to the output component in the parent Job. This property takes effect only when there is data coming from an input component such as tFixedFlowInput. This check box is cleared by default. It is invisible when the Use dynamic job or Use an independent process to run subJob check box is selected. |
Print Parameters | Select this check box to display the internal and external parameters in the Console. |
JVM Setting | Set JVM settings for the Job to be called or processed (see the example after this table). |
Use dynamic context for subJob | Select this option to specify a context variable group for the subJob to be called. After selecting this option, enter a variable name in the text field to the right of this option. Note that the value of the variable needs to be the name of an existing context variable group (see the example after this table). Note: This option takes precedence over the Context option in the Basic settings view. |
Use extra classpath for subJob | Select this option to specify extra class paths for the subJob to be called. After selecting this option, provide one or multiple class paths in the text field to the right of this option. If you provide multiple paths, separate them using ; (on Windows) or : (on Linux), as shown in the example after this table. Note: This option is available when Use dynamic job or Use an independent process to run subJob is selected in the Basic settings view. |
Load context parameters from file | Select this check box to create a temporary file where the context parameters used in the Job are written. The child Job then reads the temporary file to retrieve the context parameters, and the file is deleted at the end of the execution. For Spark Batch and Spark Streaming Jobs, Spark local execution is the only supported mode, because the temporary file is created on the local machine and a distributed Job cannot load it from the client node. Note: This option is available when Use dynamic job or Use an independent process to run subJob is selected in the Basic settings view. |
Use Base64 (for byte[]) | This option is selected by default and ensures that the Base64 encoding scheme is used to transfer byte arrays (see the illustration after this table). |
tStatCatcher Statistics | Select this check box to gather the processing metadata at the Job level as well as at each component level. |
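For the JVM Setting option, entries are standard JVM arguments. As a hedged example, the following hypothetical values (adjust them to your environment) set the initial and maximum heap sizes for the process running the child Job:

```
-Xms256M
-Xmx1024M
```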
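For Use dynamic context for subJob, the variable you name must resolve at run time to the name of an existing context group of the child Job. A hedged example: assuming a context variable named subjob_env and a context group named Test defined in the child Job, an exported parent Job could be launched as follows (the script name is hypothetical; --context_param is the standard way to set a context variable on the command line):

```
./ParentJob_run.sh --context_param subjob_env=Test
```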
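For Use extra classpath for subJob, the separator depends on the operating system, as in the following illustration (the jar names are hypothetical placeholders):

```
# Windows: separate multiple class paths with ";"
C:\libs\custom-lib-1.0.jar;C:\libs\helper.jar

# Linux: separate multiple class paths with ":"
/opt/libs/custom-lib-1.0.jar:/opt/libs/helper.jar
```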
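The Use Base64 (for byte[]) option refers to the standard Base64 encoding scheme. The self-contained Java snippet below illustrates the scheme itself, not tRunJob's internal code: a byte array survives a text-based transfer intact when encoded and decoded this way.

```java
import java.util.Arrays;
import java.util.Base64;

// Illustration of the Base64 scheme (not tRunJob internals): arbitrary
// bytes round-trip safely through a text representation.
public class Base64RoundTrip {
    public static void main(String[] args) {
        byte[] original = {0x00, (byte) 0xFF, 0x7F, 0x10};
        String encoded = Base64.getEncoder().encodeToString(original);
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded);                          // prints AP9/EA==
        System.out.println(Arrays.equals(original, decoded)); // prints true
    }
}
```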
Global Variables
Global Variables | ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.
CHILD_RETURN_CODE: the return code of a child Job. This is an After variable and it returns an integer.
CHILD_EXCEPTION_STACKTRACE: the exception stack trace from a child Job. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill in a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it. For more information about variables, see Using contexts and variables. |
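As a hedged example of consuming these variables, the sketch below reads the After variables published by a tRunJob component assumed to be named tRunJob_1 (Talend stores them in globalMap under COMPONENTNAME_VARIABLE keys). In a real Job you would put the body of main in a tJava component, where globalMap is supplied by the generated code; here it is stubbed so the sketch compiles and runs on its own. Die on child error must be cleared for the parent Job to reach this code when the child fails.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of checking a child Job's outcome after tRunJob_1 (hypothetical name).
public class ChildReturnCodeCheck {
    public static void main(String[] args) {
        Map<String, Object> globalMap = new HashMap<>(); // stub for the generated Job's globalMap
        globalMap.put("tRunJob_1_CHILD_RETURN_CODE", 1); // pretend the child failed

        Integer rc = (Integer) globalMap.get("tRunJob_1_CHILD_RETURN_CODE");
        String trace = (String) globalMap.get("tRunJob_1_CHILD_EXCEPTION_STACKTRACE");
        if (rc != null && rc.intValue() != 0) {
            System.err.println("Child Job failed with return code " + rc);
            if (trace != null) {
                System.err.println(trace);
            }
        }
    }
}
```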
Usage
Usage rule | This component can be used as a standalone Job, and it can help clarify a complex Job design by avoiding grouping too many subJobs together in a single Job. If you want to create a reusable group of components to be inserted in several Jobs, or several times in the same Job, you can use a Joblet. Unlike tRunJob, a Joblet uses the context variables of the Job in which it is inserted. For more information on Joblets, see What is a Joblet. This component also allows you to call a Job of a different framework, such as a Spark Batch Job or a Spark Streaming Job. |
Connections | Outgoing links (from this component to another): Row: Main. Trigger: On Subjob Ok, On Subjob Error, Run if, On Component Ok, On Component Error.
Incoming links (from one component to this one): Row: Main, Reject, Iterate. Trigger: On Subjob Ok, On Subjob Error, Run if, On Component Ok, On Component Error, Synchronize, Parallelize.
For further information regarding connections, see Using connections in a Job. |