Kafka custom Avro schema and limitations

When creating a Kafka dataset, you can enter a custom Avro schema, which is then used when reading from and writing to the selected topic.
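For reference, a custom schema is a standard Avro record definition, which the Avro specification expresses in JSON. A minimal sketch of the kind of schema that could be entered for the dataset follows; the record and field names are illustrative, not taken from the product:

```json
{
  "type": "record",
  "name": "Customer",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "name", "type": "string"}
  ]
}
```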

The table below lists actions you can perform in Talend Pipeline Designer and their impact on reading from and writing to the Kafka dataset; a sketch illustrating binary compatibility follows the table.

| Action on the dataset | Consequence in the application |
|---|---|
| Fetch a sample from a new topic (no records) in a Kafka dataset with no schema | The sample is empty |
| Fetch a sample from a new topic (no records) in a Kafka dataset with a valid schema | The sample is empty |
| Fetch a sample from an existing topic in a Kafka dataset with no schema | The sample is empty |
| Fetch a sample from an existing topic in a Kafka dataset with a binary-compatible schema | The sample is displayed |
| Fetch a sample from an existing topic in a Kafka dataset with an incompatible schema | An error is displayed |
| Run a pipeline that writes to a new topic in a Kafka dataset with no schema | The pipeline runs successfully; the records are persisted using the schema of the last component before the Kafka component |
| Run a pipeline that writes to a new topic in a Kafka dataset with a schema that is compatible with the pipeline data | The pipeline runs successfully; the records are persisted using the dataset schema |
| Run a pipeline that writes to a new topic in a Kafka dataset with a schema that is not compatible with the pipeline data | The pipeline fails with an exception |
| Run a pipeline that reads from an existing topic in a Kafka dataset with a binary-compatible schema | The pipeline runs successfully |
| Run a pipeline that reads from an existing topic in a Kafka dataset with an incompatible schema | The pipeline fails with an exception |
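The "binary compatible" and "incompatible" rows follow standard Avro schema resolution rules. Below is a minimal sketch, using Python and the fastavro library (not part of Talend Pipeline Designer), that reproduces both outcomes: a reader schema that adds a defaulted field decodes existing records, while one that changes a field type fails. All schemas and field names here are illustrative assumptions, and the raw schemaless encoding shown is only one common way Avro records are written to Kafka.

```python
# Illustrative sketch of Avro binary compatibility using fastavro.
import io

from fastavro import schemaless_reader, schemaless_writer

# Schema the records in the topic were originally written with (assumed).
writer_schema = {
    "type": "record",
    "name": "Customer",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
    ],
}

# A binary-compatible reader schema: the added field has a default,
# so records written with writer_schema can still be decoded.
compatible_schema = {
    "type": "record",
    "name": "Customer",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
        {"name": "email", "type": ["null", "string"], "default": None},
    ],
}

# An incompatible reader schema: 'id' changed from long to string,
# which is not an allowed Avro type promotion.
incompatible_schema = {
    "type": "record",
    "name": "Customer",
    "fields": [
        {"name": "id", "type": "string"},
        {"name": "name", "type": "string"},
    ],
}

# Encode one record with the writer schema (schemaless binary encoding).
buf = io.BytesIO()
schemaless_writer(buf, writer_schema, {"id": 1, "name": "Ada"})

# Reading with the compatible schema succeeds; the new field gets its default.
buf.seek(0)
record = schemaless_reader(buf, writer_schema, compatible_schema)
print(record)  # {'id': 1, 'name': 'Ada', 'email': None}

# Reading with the incompatible schema raises a schema resolution error,
# mirroring the "pipeline fails with an exception" rows above.
buf.seek(0)
try:
    schemaless_reader(buf, writer_schema, incompatible_schema)
except Exception as exc:
    print(f"schema resolution failed: {exc}")
```

The rule of thumb behind these outcomes: fields added to the reader schema need defaults, and existing field types may only change along Avro's allowed promotions (for example, int to long, or float to double).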
