
tJapaneseTokenize

Splits Japanese text into tokens.

Tokenization is an important pre-processing step that prepares text data for subsequent analysis, transliteration, text mining or natural language processing tasks.

Unlike English or French, Japanese does not use spaces to mark word boundaries, which makes splitting Japanese text into tokens more challenging.

Based on the IPADIC dictionary, tJapaneseTokenize deduces where word boundaries exist and adds a space to separate tokens.
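To give a sense of what dictionary-based segmentation produces, here is a minimal sketch using the open-source Kuromoji tokenizer, which also bundles an IPADIC model. This is an illustration of the general technique, not the component's actual implementation; the sample sentence, class name and the space-joining step are assumptions for demonstration.

import java.util.List;
import java.util.stream.Collectors;

import com.atilika.kuromoji.ipadic.Token;
import com.atilika.kuromoji.ipadic.Tokenizer;

public class JapaneseTokenizeSketch {
    public static void main(String[] args) {
        // Kuromoji performs IPADIC dictionary-based segmentation,
        // the same general approach described above.
        Tokenizer tokenizer = new Tokenizer();

        // Sample sentence: "I want to eat sushi." (assumed input)
        List<Token> tokens = tokenizer.tokenize("お寿司が食べたい。");

        // Join the surface forms with a space, mirroring how
        // tJapaneseTokenize separates the deduced tokens.
        String spaced = tokens.stream()
                              .map(Token::getSurface)
                              .collect(Collectors.joining(" "));

        System.out.println(spaced); // prints: お 寿司 が 食べ たい 。
    }
}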

The IPADIC dictionary was developed by the Information-Technology Promotion Agency of Japan (IPA). This dictionary is based on the IPA corpus and is the most widely used dictionary for Japanese tokenization.

In local mode, Apache Spark 1.6, 2.1, 2.3, 2.4 and 3.0 are supported.
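Local mode here refers to Spark's local master URL, which runs the Job on a single machine without a cluster. A minimal sketch of opening such a session follows; the class and application names are assumptions, and the SparkSession API shown requires Spark 2.0 or later (Spark 1.6 Jobs use the older SparkContext API instead).

import org.apache.spark.sql.SparkSession;

public class LocalModeSketch {
    public static void main(String[] args) {
        // "local[*]" tells Spark to run on the current machine,
        // using all available cores; no cluster is required.
        SparkSession spark = SparkSession.builder()
                .appName("tJapaneseTokenizeSample") // assumed name
                .master("local[*]")
                .getOrCreate();

        // ... the Job's tokenization logic would run here ...

        spark.stop();
    }
}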

For more technologies supported by Talend, see Talend components.

Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
