tElasticSearchInput properties for Apache Spark Batch

These properties are used to configure tElasticSearchInput running in the Spark Batch Job framework.

The Spark Batch tElasticSearchInput component belongs to the ElasticSearch family.

This component is available in all Talend products with Big Data and in Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.

The schema of the data output by this component is read-only and contains two columns: id_document and json_document. The json_document column contains the body of the documents read from Elasticsearch. To explore the data in this json_document column, use tExtractJSONFields to extract the fields to be used.
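
As an illustration of this two-column output, the following sketch (plain Java using the Jayway JsonPath library, outside of any Talend component) shows the kind of JSONPath extraction that tExtractJSONFields performs on the json_document column. The sample document, its id, and the field names are hypothetical:

import com.jayway.jsonpath.JsonPath;

public class JsonDocumentSketch {
    public static void main(String[] args) {
        // A row read by tElasticSearchInput has exactly two columns.
        String id_document = "AVoBxeg9wplA_Kv2k0pV"; // hypothetical document id
        String json_document = "{ \"user\" : \"costinl\", \"message\" : \"hello\" }";

        // tExtractJSONFields applies JSONPath expressions such as this one
        // to turn json_document into regular columns.
        String user = JsonPath.read(json_document, "$.user");
        System.out.println(id_document + " -> " + user); // AVoBxeg9wplA_Kv2k0pV -> costinl
    }
}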

Use an existing configuration

Select this check box and, from the Component List drop-down list, select the connection component whose connection details you want to reuse.

Nodes

Enter the location of the cluster hosting the Elasticsearch system to be used.

Index

Enter the name of the index you want to read documents from.

An index is the largest unit of storage in the Elasticsearch system.

Type

Enter the name of the type the documents to be read belong to.

For example, blogpost_en and blogpost_fr can be two types that represent English blog posts and French blog posts, respectively.

You can dynamically use the values of a given column as document types. If you need to do so, enter the name of that column in a pair of braces ({}), for example, {blog_author}.

Query

Enter the ElasticSearch query to be performed by this component.

When editing queries, use the syntax required by Elasticsearch along with the escape characters required by Java, and enclose the query in double quotation marks.

For example, in the Elasticsearch documentation, an example query reads as follows:

es.query = { "query" : { "term" : { "user" : "costinl" } } }

In this Query field, you should write the same query in the following way:

"{ \"query\" : { \"term\" : {\"user\" : \"costinl\" } } }"

Advanced settings

Use SSL/TLS

Select this check box to enable an SSL or TLS encrypted connection.

Then you need to use the tSetKeystore component in the same Job to specify the encryption information.

Configuration

Add the parameters accepted by Elasticsearch to perform more customized actions.

For example, enter es.mapping.id in the Key column and the name of the document field that holds the document ID in the Value column to use that field's value as the document ID. Note that you must put double quotation marks around the entered information.

For a list of the parameters you can use, see https://www.elastic.co/guide/en/elasticsearch/hadoop/master/configuration.html.
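
To give a sense of what these key/value pairs control, the following sketch shows a few elasticsearch-hadoop settings as they would look if set programmatically on a Spark configuration; the Talend component sets them for you from this table. The host, index, type, and field names below are hypothetical:

import org.apache.spark.SparkConf;

public class EsConfigurationSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .set("es.nodes", "localhost:9200")       // corresponds to the Nodes field
            .set("es.resource", "blog/blogpost_en")  // corresponds to the Index and Type fields
            .set("es.mapping.id", "blog_id");        // a Configuration table entry: use blog_id as the document id
        System.out.println(conf.get("es.mapping.id"));
    }
}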

Usage

Usage rule

This component is used as a start component and requires an output link.

Drop a tElasticSearchConfiguration component in the same Job to connect to ElasticSearch. Then you need to select the Use an existing configuration check box and then select the tElasticSearchConfiguration component to be used.
Note that the Talend components for Spark Jobs support Elasticsearch versions up to 6.4.2.

This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job requires its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:
  • Yarn mode (Yarn client or Yarn cluster):
    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
    • When using Qubole, add a tS3Configuration to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.
    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.
