Deduplicating data

You can deduplicate your data using the function that best fits your use case.

Deduplicating values in columns

You can use the Deduplicate rows with identical values function to easily delete rows that duplicate other rows, either partly or entirely.

Note: This function is not compatible with Spark Jobs, HDFS exports, or S3 exports.

Duplicated information can be introduced into spreadsheets by human error, such as a bad copy and paste, as well as by automated operations. In the following dataset, which contains basic customer information, you will notice that the firstname and lastname columns both contain values that appear more than once.

Dataset containing duplicated customer information.

Entries such as Jake and Peralta make it look like the firstname and lastname columns contain duplicates when taken separately. A closer look, however, shows that rows 1, 2, and 4 belong to separate customers who merely share a first or last name. Row 3, on the other hand, is a genuine duplicate of row 2, and is even missing some information.

Because deduplicating the two columns separately would lose valuable information about customers who happen to share a first or last name, you will apply the Deduplicate rows with identical values function to both columns at once. This way, the function only removes rows where both the first and last names are duplicated, such as row 3, which duplicates row 2, as well as any similar duplicates further down in the dataset.
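
To make the difference concrete, the same two-column rule can be sketched outside the tool with pandas. This is an illustrative approximation only: the library choice and the sample values below are assumptions, not the product's implementation.

    import pandas as pd

    # Hypothetical values mirroring the scenario above: rows 1, 2 and 4
    # are distinct customers, while row 3 duplicates row 2 and is even
    # missing its email address.
    customers = pd.DataFrame({
        "firstname": ["Jake", "Jake", "Jake", "Ann"],
        "lastname":  ["Peralta", "Doe", "Doe", "Peralta"],
        "email":     ["jp@example.com", "jd@example.com", None, "ap@example.com"],
    })

    # Deduplicating on the (firstname, lastname) pair removes only rows
    # where BOTH values repeat, keeping the first occurrence (row 2).
    deduped = customers.drop_duplicates(subset=["firstname", "lastname"])
    print(deduped)

    # By contrast, deduplicating each column separately would also discard
    # distinct customers who merely share a first or last name.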

Procedure

  1. While pressing the Ctrl key, click the headers of the firstname and lastname columns to select their content.
  2. In the functions panel, type Deduplicate rows with identical values and click the result to display the options of the associated function.
  3. From the Matching criterion drop-down list, select the matching rule that you want to apply, Exact value for example. The three rules are compared in the code sketch after this procedure.
    • Simplified text: Punctuation, white space, case, and accents are ignored. For example, if Pâté-en-croûte is your reference value, rows with pate-eN-cRoute will be deleted, but not rows with Pâté n croûte.
    • Ignore case and accents: Case and accents are not taken into account. For example, if Pâté-en-croûte is your reference value, rows with pate-en-croute will be deleted, but not rows with pate en croute.
    • Exact value: The most restrictive rule. Rows are deleted only if their values exactly match the reference value.
  4. Click Submit.
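
The three matching criteria can be read as increasingly aggressive normalizations applied before values are compared. The sketch below illustrates that idea in Python; the function names and normalization details are assumptions inferred from the examples above, not the tool's actual code.

    import unicodedata

    def strip_accents(text: str) -> str:
        # Decompose accented characters and drop the combining marks.
        return "".join(
            c for c in unicodedata.normalize("NFD", text)
            if unicodedata.category(c) != "Mn"
        )

    def normalize(value: str, criterion: str) -> str:
        # Map a cell value to the comparison key for the chosen criterion.
        if criterion == "exact":
            return value
        key = strip_accents(value).lower()
        if criterion == "ignore_case_accents":
            return key
        if criterion == "simplified":
            # Additionally drop punctuation and white space.
            return "".join(c for c in key if c.isalnum())
        raise ValueError(f"unknown criterion: {criterion}")

    # The documented examples hold under these rules:
    ref = "Pâté-en-croûte"
    assert normalize("pate-eN-cRoute", "simplified") == normalize(ref, "simplified")
    assert normalize("Pâté n croûte", "simplified") != normalize(ref, "simplified")
    assert normalize("pate-en-croute", "ignore_case_accents") == normalize(ref, "ignore_case_accents")
    assert normalize("pate en croute", "ignore_case_accents") != normalize(ref, "ignore_case_accents")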

Results

The row that duplicated row 2 has been deleted, while the other rows containing identical values in a single column were kept because they did not match the two-column criterion.
Dataset containing customer information without duplication.

Deduplicating rows

You can use the Remove duplicate rows function to easily delete all rows that are exact duplicates of another, keeping only one occurrence of each in your dataset.

Note: This function is not compatible with Spark Jobs, HDFS exports, or S3 exports.

Duplicated information can be introduced into spreadsheets by human error, such as a bad copy and paste, as well as by automated operations. In this example, you received a dataset containing customer information where every row is systematically duplicated.

Dataset containing duplicated customer information.

You will use the Remove duplicate rows function to easily clean your dataset.
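
Conceptually, this is the whole-row counterpart of the previous function. In pandas terms it amounts to a plain drop_duplicates over every column; as before, this is an illustrative sketch with hypothetical data, not the tool's internals.

    import pandas as pd

    # Hypothetical dataset where every row appears twice.
    customers = pd.DataFrame({
        "firstname": ["Jake", "Jake", "Ann", "Ann"],
        "lastname":  ["Peralta", "Peralta", "Doe", "Doe"],
    })

    # Compare complete rows and keep the first occurrence of each.
    cleaned = customers.drop_duplicates()
    print(cleaned)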

Procedure

  1. Click the header of any column from your dataset.
  2. Click the Table tab of the functions panel to display the list of functions that can be applied to the whole table.
  3. Point your mouse over the Remove duplicate rows function and click the eye icon to preview its effects.
    Dataset containing duplicated customer information highlighted.
  4. Click Submit to apply the function.

Results

All the duplicated information has been removed in one simple action, leaving only one occurrence of each row in your dataset.
