Tokenizing Japanese text

This scenario applies only to Talend Data Management Platform, Talend Big Data Platform, Talend Real-Time Big Data Platform, Talend MDM Platform, Talend Data Services Platform, and Talend Data Fabric.

Using the tJapaneseTokenize component, you can split Japanese text into tokens.
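Because Japanese text is written without spaces between words, tokenization relies on morphological analysis rather than simple whitespace splitting. The following Java sketch illustrates this idea using the open-source Kuromoji analyzer; it is an illustrative assumption only and is not necessarily how the tJapaneseTokenize component works internally.

// Minimal sketch of Japanese tokenization, assuming the open-source
// Kuromoji IPADIC analyzer (com.atilika.kuromoji:kuromoji-ipadic).
// For illustration only; not the internal implementation of tJapaneseTokenize.
import com.atilika.kuromoji.ipadic.Token;
import com.atilika.kuromoji.ipadic.Tokenizer;

public class JapaneseTokenizeSketch {
    public static void main(String[] args) {
        Tokenizer tokenizer = new Tokenizer();
        String text = "お寿司が食べたい。"; // "I want to eat sushi."
        for (Token token : tokenizer.tokenize(text)) {
            // Print each token's surface form with its top-level part of speech.
            System.out.println(token.getSurface() + "\t" + token.getPartOfSpeechLevel1());
        }
    }
}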

To replicate the example described below, download the tJapaneseTokenize_standard_scenario.zip file.

The tJapaneseTokenize_standard_scenario.zip file contains:
  • the plain text file inputJapaneseText.txt, which holds the Japanese text, its transcription, and its English translation; and
  • the tJapaneseTokenizeJob.zip file, which holds the Job.
