
New features

Shared features

Feature Description
Support for Bearer Token authentication to connect to a JFrog Artifactory repository
You can now connect to a JFrog Artifactory repository using the Bearer Token authentication type.

This feature will be fully available when Bearer Token authentication is also supported in Talend Administration Center in a future monthly release.
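For reference, Bearer Token authentication simply sends the token in the HTTP Authorization header of each request. The following minimal Java sketch illustrates that mechanism outside of Talend Studio; the Artifactory host is a placeholder and the token is read from a hypothetical ARTIFACTORY_TOKEN environment variable. For example:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ArtifactoryBearerTokenSketch {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("ARTIFACTORY_TOKEN"); // access or identity token
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.jfrog.io/artifactory/api/repositories"))
                .header("Authorization", "Bearer " + token) // Bearer Token authentication
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 200 when the token is accepted
    }
}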

Big Data

Feature Description
Support for mass Hadoop distribution migration for Big Data Jobs
You can now migrate a Hadoop distribution that is centralized in the repository and reused in Big Data Jobs from one distribution to another. For more information, see Migrating Hadoop distribution.
"Distribution migration" dialog box.
New MapRDB components for HPE Datafabric 7.7 in Standard Jobs
The following MapRDB components are now available in Standard Jobs when you use an HPE Datafabric 7.7 cluster:
  • tMapRDBClose
  • tMapRDBConnection
  • tMapRDBInput
  • tMapRDBOutput
  • tMapROjaiInput
  • tMapROjaiOutput

The Distribution and Version parameters are removed from the Basic settings view.

Support for time travel with Iceberg components in Standard Jobs
The time travel feature is now available in the tIcebergInput component in Standard Jobs. The new Use time travel check box in the Basic settings view allows you to read the data from an Iceberg table by specifying either a datetime or a snapshot ID.

The SQL query parameter is also changed to a Use custom SQL check box.

tIcebergInput Basic settings view highlighting the new parameters for time travel and SQL query in Standard Jobs.
Support for time travel with Iceberg components in Spark Batch Jobs
The time travel feature is now available in the tIcebergInput component in Spark Batch Jobs. The new Use time travel check box in the Basic settings view allows you to read the data from an Iceberg table by specifying either a branch, a tag, a datetime, or a snapshot ID.
tIcebergInput Basic settings view highlighting the new parameter for time travel in Spark Batch Jobs.
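For both of the Iceberg items above, the underlying capability is a time-travel read of the table. As an outside-of-Studio illustration only, the Java sketch below uses Iceberg's Spark read options (snapshot-id and as-of-timestamp; branch and tag options also exist); the catalog, database, and table names are placeholders, and in Studio the Use time travel check box configures this for you. For example:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class IcebergTimeTravelSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-time-travel")
                .getOrCreate();

        // Read the table as it was at a past instant (milliseconds since epoch).
        Dataset<Row> asOfTime = spark.read()
                .format("iceberg")
                .option("as-of-timestamp", "1727740800000")
                .load("my_catalog.db.events");

        // Or pin the read to a specific snapshot ID.
        Dataset<Row> atSnapshot = spark.read()
                .format("iceberg")
                .option("snapshot-id", "5937117119577207000")
                .load("my_catalog.db.events");

        asOfTime.show();
        atSnapshot.show();
    }
}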
Support for Amazon EMR 7.x with Spark Universal 3.5.x
You can now run your Spark Jobs on an Amazon EMR cluster using Spark Universal with Spark 3.5.x in Yarn cluster mode. You can configure it either in the Spark Configuration view of your Spark Jobs or in the Hadoop Cluster Connection metadata wizard.

When you select this mode, Talend Studio is compatible with Amazon EMR 7.x.

Note that there is an issue with Jobs containing Kinesis components on Amazon EMR 7.x: when you use tKinesisInput, the output is empty.

Continuous Integration

Feature Description

Talend CI Builder upgraded to version 8.0.19

Talend CI Builder is upgraded from version 8.0.18 to version 8.0.19.

Use Talend CI Builder 8.0.19 in your CI commands or pipeline scripts from this monthly version onwards until a new version of Talend CI Builder is released.

Data Integration

Feature Description

New tEmbeddingAI component in Standard Jobs (Beta)

The new tEmbeddingAI component is now available in Standard Jobs, and allows you to leverage embedding models to efficiently process data with AI.
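In broad terms, an embedding model turns a record (for example a text field) into a numeric vector so that similar records end up close together, which is what makes downstream matching, deduplication, or semantic search efficient. The conceptual Java sketch below, with toy 4-dimensional vectors standing in for real model output, shows the kind of similarity computation this enables; it does not describe the component's internals. For example:

public class EmbeddingSimilaritySketch {
    // Cosine similarity between two embedding vectors (1.0 = same direction).
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Toy vectors; real embeddings usually have hundreds or thousands of dimensions.
        double[] descriptionA = {0.21, -0.40, 0.13, 0.88};
        double[] descriptionB = {0.19, -0.38, 0.10, 0.91};
        System.out.printf("similarity = %.4f%n", cosine(descriptionA, descriptionB));
    }
}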

Support for Atlas vector search for MongoDB components in Standard Jobs

MongoDB components can now use Atlas vector search capabilities.
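For context, an Atlas vector search is expressed as a $vectorSearch aggregation stage that finds the documents whose stored embedding is closest to a query vector. The Java driver sketch below is a generic illustration, not the component's implementation: the connection string, index name (vector_index), field path (embedding), and query vector are all placeholders. For example:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Arrays;
import java.util.List;

public class AtlasVectorSearchSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb+srv://<your-cluster-uri>")) {
            MongoCollection<Document> coll =
                    client.getDatabase("demo").getCollection("articles");

            // In practice the query vector comes from the same embedding model
            // that produced the stored vectors.
            List<Double> queryVector = Arrays.asList(0.12, -0.07, 0.33);

            Document vectorSearch = new Document("$vectorSearch",
                    new Document("index", "vector_index")
                            .append("path", "embedding")
                            .append("queryVector", queryVector)
                            .append("numCandidates", 100)
                            .append("limit", 5));

            coll.aggregate(Arrays.asList(vectorSearch))
                .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}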

New option to customize header name for tHTTPClient in Standard Jobs

The Use custom authorization token header and Use custom token prefix options have been added to the tHTTPClient component, and allow you to enter a custom authorization token header name and prefix when using the OAuth 2.0 authentication type.
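In effect, these options change the name of the header that carries the access token and the prefix placed before the token value. The short Java sketch below shows the difference with hypothetical values (header name X-Auth-Token and prefix ApiToken); the default remains an Authorization header with a Bearer prefix. For example:

import java.net.URI;
import java.net.http.HttpRequest;

public class CustomTokenHeaderSketch {
    public static void main(String[] args) {
        String accessToken = "abc123"; // hypothetical OAuth 2.0 access token

        // Default:                Authorization: Bearer abc123
        // With the new options:   X-Auth-Token: ApiToken abc123
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/resource"))
                .header("X-Auth-Token", "ApiToken " + accessToken)
                .GET()
                .build();

        System.out.println(request.headers().map());
    }
}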

JTOpen upgraded to version 20.x for AS400 components in Standard Jobs

The AS400 components now use version 20.x of the JTOpen library.
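JTOpen (also distributed as jt400, the IBM Toolbox for Java) is the open-source library used to access IBM i (AS400) systems from Java. The sketch below is a plain, Studio-independent example of querying an IBM i database through the JTOpen JDBC driver, with placeholder host, library, table, and credentials. For example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class As400JdbcSketch {
    public static void main(String[] args) throws Exception {
        // JTOpen (jt400) JDBC driver and URL format.
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:as400://myas400.example.com/MYLIB", "USER", "PASSWORD");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT CUSTOMER_ID FROM MYLIB.CUSTOMERS")) {
            while (rs.next()) {
                System.out.println(rs.getString("CUSTOMER_ID"));
            }
        }
    }
}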

Data Mapper

Feature Description
New database functions in DSQL map
You can now use the following functions when you work with databases:
  • databaseLookup to get values from a database
  • databaseLookupAndUpdate to get values from a database, and update these values
  • databaseUpdateAndLookup to update values from a database, and get these values
Support for block with the WITH clause in DSQL map
You can now use a block after the WITH clause.
For example:
WITH $default_colors = {
   foreground = 'black',
   background = 'white'
}
Support for multiple keys with the GROUP BY clause in DSQL map
You can now use a comma-separated list of multiple keys in the GROUP BY clause.
For example:
FROM customer
GROUP BY rating AS r, address.zipcode AS zip
Syntax update for String literal in DSQL map
The syntax for the string data type is updated:
  • strings are delimited by single or double quotes
  • CR, NL, and LF characters are not allowed in a quoted string
  • \ is used to escape a quote in a string
  • the following escape sequences are supported:
    • \n
    • \r
    • \f
    • \t
    • \b
    • \/
    • \' in single-quoted strings
    • \" in double-quoted strings
Support for multiple collections after FROM and UNNEST clauses in DSQL map
You can now specify more than one collection after the FROM or UNNEST clauses.
For example:
FROM customer AS c, order AS o
SELECT { c.custid, o.orderno }

Data Quality

Feature Description
New JSON validation component
You can now validate JSON columns in Standard Jobs using the tJSONValidator component.
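For context, validating JSON typically means checking each record against a schema and collecting the violations. The generic Java sketch below uses the networknt json-schema-validator library purely as an illustration of that idea; it is not a statement about which library or schema dialect the tJSONValidator component uses. For example:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.networknt.schema.JsonSchema;
import com.networknt.schema.JsonSchemaFactory;
import com.networknt.schema.SpecVersion;
import com.networknt.schema.ValidationMessage;

import java.util.Set;

public class JsonValidationSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonSchemaFactory factory = JsonSchemaFactory.getInstance(SpecVersion.VersionFlag.V7);

        // Hypothetical schema and record; a real Job would take these from columns or files.
        JsonSchema schema = factory.getSchema(
                "{\"type\":\"object\",\"required\":[\"id\"],"
              + "\"properties\":{\"id\":{\"type\":\"integer\"}}}");
        JsonNode record = mapper.readTree("{\"id\":\"not-a-number\"}");

        Set<ValidationMessage> errors = schema.validate(record);
        errors.forEach(e -> System.out.println(e.getMessage())); // reports the type violation
    }
}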
AS400
Versions 7R3 and later are now supported.
