Mounting Kubernetes PersistentVolumeClaims for persistent storage in Dynamic Engine

Use PersistentVolumeClaims to mount persistent storage volumes into your containers so that your Job (Data Integration, Big Data, and Data Service) and Route tasks can access stateful data, share data between pods, and keep data across pod restarts.

About this task

Mounting Kubernetes PersistentVolumeClaims (PVCs) provides persistent storage independent of pod lifecycle. This is useful for:

  • Storing data that needs to persist across pod restarts
  • Sharing data between multiple pods
  • Accessing pre-existing data volumes

Procedure

  1. Create a Kubernetes PersistentVolumeClaim (PVC) resource file.

    Example

    Create a file named pvc-custom-claim.yaml:

    cat <<EOF > pvc-custom-claim.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-custom-claim-linked-to-a-static-or-dynamic-persistentvolume
      namespace: qlik-processing-env-$DYNAMIC_ENGINE_ENVIRONMENT_ID
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: standard 
    EOF

    Replace $DYNAMIC_ENGINE_ENVIRONMENT_ID with the actual ID and update storageClassName to match your cluster's available storage class. For consistency, use the same storage class that was configured for your Dynamic Engine environment (as specified in Provisioning a storage class dedicated to Dynamic Engine environment services).
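
    If you are unsure which storage class to use, you can list the classes available in your cluster; the default class, if any, is flagged as (default) in the output:

    kubectl get storageclass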

    This example uses dynamic provisioning: The storageClassName tells Kubernetes to automatically create a PersistentVolume when this PVC is created. Use this when you want Kubernetes to provision storage on demand.

    For static provisioning instead (when an administrator has already created a PersistentVolume), replace the entire spec section with:

    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      volumeName: my-existing-pv

    Replace my-existing-pv with the actual PersistentVolume name created by your administrator. Omit storageClassName for static provisioning.

    When to use each:

    • Dynamic provisioning (with storageClassName) — Storage is created automatically when needed. Use for most scenarios.
    • Static provisioning (with volumeName) — You bind to an existing PersistentVolume. Use when specific volumes are pre-allocated or require special configuration (see the example sketch after this list).
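
    For reference, a statically provisioned PersistentVolume created by your administrator might look like the following sketch. The nfs backend, server, and path are placeholder values only; your cluster may use a different volume type and capacity.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-existing-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: nfs.example.com   # placeholder NFS server
        path: /exports/data       # placeholder export path
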
  2. Create a Helm values file that configures PVC mounting.

    Example

    Create a file named pvc-values.yaml:

    configuration:
      # For Data Integration and Big Data Job tasks
      jobDeployment:
        additionalSpec:
          enabled: true
          volumeMounts:
            - name: external-data-storage
              mountPath: /tmp/data
          volumes:
            - name: external-data-storage
              persistentVolumeClaim:
                claimName: my-custom-claim-linked-to-a-static-or-dynamic-persistentvolume
    
      # For Data Service and Route tasks
      dataServiceRouteDeployment:
        additionalSpec:
          enabled: true
          volumeMounts:
            - name: external-data-storage
              mountPath: /tmp/data
          volumes:
            - name: external-data-storage
              persistentVolumeClaim:
                claimName: my-custom-claim-linked-to-a-static-or-dynamic-persistentvolume

    This configuration mounts the PVC my-custom-claim-linked-to-a-static-or-dynamic-persistentvolume at /tmp/data. All data written to this path persists across pod restarts.

  3. Deploy or upgrade your Dynamic Engine environment to create the required namespace.
    helm upgrade --install dynamic-engine-environment-$DYNAMIC_ENGINE_ENVIRONMENT_ID \
      oci://ghcr.io/talend/helm/dynamic-engine-environment \
      --version ${DYNAMIC_ENGINE_VERSION} \
      -f $DYNAMIC_ENGINE_ENVIRONMENT_ID-values.yaml

    Replace $DYNAMIC_ENGINE_ENVIRONMENT_ID with your environment ID and ${DYNAMIC_ENGINE_VERSION} with your environment's version.

    This command creates the Dynamic Engine environment and its associated namespace qlik-processing-env-$DYNAMIC_ENGINE_ENVIRONMENT_ID.
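
    You can optionally confirm that the namespace now exists before creating the PVC in the next step:

    kubectl get namespace qlik-processing-env-$DYNAMIC_ENGINE_ENVIRONMENT_ID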

  4. Create the PVC in your Dynamic Engine environment namespace.
    kubectl apply -f pvc-custom-claim.yaml
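
    You can check the status of the claim right away. If your storage class uses the WaitForFirstConsumer volume binding mode, the PVC may remain in Pending status until a pod that mounts it is scheduled; this is expected.

    kubectl get pvc my-custom-claim-linked-to-a-static-or-dynamic-persistentvolume \
      -n qlik-processing-env-$DYNAMIC_ENGINE_ENVIRONMENT_ID
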
  5. Upgrade your Dynamic Engine environment with the PVC mounting configuration.
    helm upgrade dynamic-engine-environment-$DYNAMIC_ENGINE_ENVIRONMENT_ID \
      oci://ghcr.io/talend/helm/dynamic-engine-environment \
      --version ${DYNAMIC_ENGINE_VERSION} \
      -f $DYNAMIC_ENGINE_ENVIRONMENT_ID-values.yaml \
      -f pvc-values.yaml

    Replace $DYNAMIC_ENGINE_ENVIRONMENT_ID with your environment ID and ${DYNAMIC_ENGINE_VERSION} with your environment's version.

  6. Verify the PVC is mounted in running pods.
    # Check PVC status
    kubectl get pvc -n qlik-processing-env-$DYNAMIC_ENGINE_ENVIRONMENT_ID
    
    # Verify the mounted volume
    kubectl get pod <pod-name> -n qlik-processing-env-$DYNAMIC_ENGINE_ENVIRONMENT_ID \
      -o jsonpath='{.spec.volumes[?(@.name=="external-data-storage")]}' | jq .

    Replace <pod-name> with the name of a running task pod. The expected output shows the PVC reference:

    {
      "name": "external-data-storage",
      "persistentVolumeClaim": {
        "claimName": "my-custom-claim-linked-to-a-static-or-dynamic-persistentvolume"
      }
    }
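
    As an optional functional check, you can write and read a test file on the mounted path from inside a running task pod. The pod name is a placeholder, and this assumes that the container image provides a shell:

    # Write a test file to the mounted volume
    kubectl exec <pod-name> -n qlik-processing-env-$DYNAMIC_ENGINE_ENVIRONMENT_ID -- \
      sh -c 'echo "persistence test" > /tmp/data/test.txt'

    # Read it back; because the path is backed by the PVC, the file survives pod restarts
    kubectl exec <pod-name> -n qlik-processing-env-$DYNAMIC_ENGINE_ENVIRONMENT_ID -- \
      cat /tmp/data/test.txt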

Results

Your tasks in the Dynamic Engine environment now have access to persistent storage through PersistentVolumeClaims. Data written to the mounted directory persists across pod restarts, redeployments, and environment upgrades. If the PVC is configured with the ReadWriteMany access mode, multiple pods can access the same volume.
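
If multiple pods need to share the same volume, you can request the ReadWriteMany access mode in the PVC spec from step 1, provided that your storage class or pre-created PersistentVolume supports it (not all provisioners do). A minimal variant of the spec section:

spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard   # must map to a provisioner that supports ReadWriteMany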

When the mounted volumes or their configuration are updated, reload behavior differs by task type:

  • For Data Integration (including Big Data) Job tasks, the next task run automatically uses the updated configuration.
  • For Route and Data Service tasks, which run continuously, changes are not automatically reloaded into running containers. To apply the changes to these tasks, update the task in Talend Management Console to trigger a redeployment on your Kubernetes cluster.
