
New features

Shared features


Support for dynamic settings in TCK components

All check boxes and drop-down lists of TCK (Talend Component Kit) components can now be customized in the Dynamic settings tab, with the following limitations:
  • The check boxes and drop-down lists that have dependent parameters are not supported.
  • The tStatCatcher Statistics, Enable parallel execution, and Show Information check boxes are not supported.
  • Parameters that use repository values are not supported.
  • The Component list drop-down list is supported only when it depends on the Use existing connection check box and that check box is selected.
For more information, see Dynamic settings tab of components in a Job.

Support for mass database type migration in DB components for Data Integration Jobs

You can now migrate a database connection that is centralized in the Metadata folder and reused in Data Integration Jobs to another database type. For more information, see Migrating database connection.
Note: The Snowflake database will be supported from 8.0 R2024-10.
"Database connection migration " dialog box.

Big Data

Improvement of Spark Universal user interface

The Spark Universal user interface has changed. You now have to first select the Runtime/mode environment, and then the Version.
Support for Kubernetes with Spark 3.1.x to 3.5.x versions

You can now run your Spark Jobs in Kubernetes mode with Spark versions 3.1.x to 3.5.x, using the new Spark 3.x option in the Version drop-down list. You can configure it either in the Spark Configuration view of your Spark Jobs or in the Hadoop Cluster Connection metadata wizard.
Spark Configuration tab of Spark Batch Job highlighting the new Spark 3.x version.
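As a rough illustration of what Kubernetes mode means at the Spark level, the sketch below shows the kind of configuration such a Job ultimately hands to Spark. The property keys are standard Spark-on-Kubernetes settings; the API server URL, container image, and namespace are placeholder values, not Studio defaults, and in Talend Studio you set the equivalents through the Spark Configuration view rather than in code.

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class SparkOnKubernetesSketch {
    public static void main(String[] args) {
        // Standard Spark-on-Kubernetes properties; every value below is a placeholder.
        SparkConf conf = new SparkConf()
                .setAppName("spark-universal-k8s-demo")
                .setMaster("k8s://https://kubernetes.example.com:6443")        // cluster API server
                .set("spark.kubernetes.container.image", "apache/spark:3.5.1") // executor image
                .set("spark.kubernetes.namespace", "spark-jobs")
                .set("spark.executor.instances", "2");

        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();
        spark.range(10).show(); // trivial action, just to confirm the executors come up
        spark.stop();
    }
}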
Availability note: Beta
Support for Amazon EMR 7.x with Spark Universal 3.5.x
You can now run your Spark Jobs on an Amazon EMR cluster using Spark Universal with Spark 3.5.x in Yarn cluster mode. You can configure it either in the Spark Configuration view of your Spark Jobs or in the Hadoop Cluster Connection metadata wizard.

When you select this mode, Talend Studio is compatible with Amazon EMR 7.x.

Data Integration


tOpenAIClient component in Standard Jobs (GA)

The tOpenAIClient component is generally available as of 8.0 R2023-09.

For more information, read the tOpenAIClient documentation.
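The options exposed by the component are described in its documentation; as a rough idea of what a chat-completion call involves at the API level, the plain-Java sketch below posts a request to OpenAI's public REST endpoint. The endpoint and payload follow OpenAI's documented API; the model name and the OPENAI_API_KEY environment variable are placeholders, not component parameters.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OpenAiChatSketch {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY"); // assumed to be set in the environment

        // Standard chat-completions payload: a model name and one user message.
        String body = """
            {"model": "gpt-4o-mini",
             "messages": [{"role": "user", "content": "Summarize Talend Studio in one sentence."}]}
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // raw JSON; a Job would map this to its output schema
    }
}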

New Solr components in Standard Jobs

The following Solr components are now available to read data from and write data to an Apache Solr web service:
  • tSolrInput allows you to authenticate to a Solr web service and retrieve the information you need from Solr collections through queries.
  • tSolrOutput allows you to insert, upsert, or delete data in a Solr core.

For more information, read the Solr documentation.
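As a hedged illustration of the operations these components cover, the SolrJ sketch below indexes a document, queries it back, and deletes it. The base URL, core name, and field names are placeholders, and SolrJ is assumed to be on the classpath; in Talend Studio the equivalent settings are entered in the component views rather than in code.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class SolrRoundTripSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder Solr base URL and core name.
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/my_core").build()) {

            // Write path (what tSolrOutput covers): index one document, then commit.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "talend-001");
            doc.addField("title", "Release notes");
            client.add(doc);
            client.commit();

            // Read path (what tSolrInput covers): run a query and iterate over the results.
            QueryResponse response = client.query(new SolrQuery("title:Release*"));
            for (SolrDocument d : response.getResults()) {
                System.out.println(d.getFieldValue("id") + " -> " + d.getFieldValue("title"));
            }

            // Delete path: remove the document by id and commit.
            client.deleteById("talend-001");
            client.commit();
        }
    }
}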

New tPineconeClient component in Standard Jobs

The new tPineconeClient component allows you to upsert, query, fetch, update, or delete records in Pinecone index namespaces.

For more information, read the tPineconeClient documentation.
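As a hedged illustration of the upsert and query actions, the plain-Java sketch below calls Pinecone's public REST data-plane API directly. The index host, namespace, vector values, and PINECONE_API_KEY environment variable are placeholders, not component parameters; in Talend Studio you configure the action and index in the component view instead.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PineconeUpsertQuerySketch {
    // Placeholder index host: each Pinecone index exposes its own data-plane URL.
    private static final String INDEX_HOST = "https://my-index-abc123.svc.us-east-1-aws.pinecone.io";

    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("PINECONE_API_KEY"); // assumed to be set
        HttpClient http = HttpClient.newHttpClient();

        // Upsert one 4-dimensional vector into a namespace (the component's upsert action).
        String upsert = """
            {"vectors": [{"id": "doc-1", "values": [0.1, 0.2, 0.3, 0.4]}],
             "namespace": "release-notes"}
            """;
        send(http, apiKey, INDEX_HOST + "/vectors/upsert", upsert);

        // Query the same namespace for the 3 nearest neighbours (the query action).
        String query = """
            {"vector": [0.1, 0.2, 0.3, 0.4], "topK": 3,
             "namespace": "release-notes", "includeValues": false}
            """;
        send(http, apiKey, INDEX_HOST + "/query", query);
    }

    private static void send(HttpClient http, String apiKey, String url, String body) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Api-Key", apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(url + " -> " + response.statusCode());
    }
}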

Data Mapper

Support for map export as CSV or Excel files

You can now export a Standard map as a CSV file with the new Maps as CSV option. The exported file is also compatible with Excel.

For more information, see Exporting a map.

Export Maps as CSV dialog box.
Improvement of output name generation with flattening maps

When you create a flattening map with multiple outputs, output naming is improved: generated output names are now based on the input source element.

For more information on the naming conventions, see Flattened structure naming.
