
Deploying Dynamic Engine in a GKE cluster

Install a Dynamic Engine instance and its environment in a Google Kubernetes Engine (GKE) cluster using Helm charts and GKE-specific storage classes.

Note: This procedure applies only to Standard GKE clusters with a fixed number of nodes. GKE Autopilot mode is not supported.

Before you begin

  • Basic GKE knowledge is required. Install the Google Cloud CLI on your machine to interact with Google Cloud and GKE. For more information, see:
    • Google Cloud SDK installation guide for installing and initializing Google Cloud CLI.
    • GKE access for kubectl for installing the gke-gcloud-auth-plugin and authenticating to your Google Cloud project.
      Tip: To install the plugin, run:
      gcloud components install gke-gcloud-auth-plugin
      To authenticate with Google Cloud and set your project, run:
      gcloud auth login
      gcloud config set project $PROJECT_ID
      Replace $PROJECT_ID with your Google Cloud project ID.
  • Create or update your GKE cluster with the configuration required by Dynamic Engine:
    gcloud container clusters create $CLUSTER_NAME \
      --addons=GcpFilestoreCsiDriver \
      --cluster-version=1.30 \
      --location=us-central1-a \
      --machine-type=e2-standard-4 \
      --num-nodes=3
    If your cluster already exists, update it as follows:
    gcloud container clusters update $CLUSTER_NAME \
      --update-addons=GcpFilestoreCsiDriver=ENABLED \
      --region=$REGION
    These commands enable the GcpFilestoreCsiDriver add-on, specify a machine type with at least 4 vCPUs (for example, e2-standard-4 or higher), and provision a fixed number of nodes.
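Optionally, you can confirm that the Filestore CSI driver add-on is enabled before moving on by inspecting the cluster description. This is a minimal sketch; the `--format` path below assumes the current GKE API field name, and the location must match your cluster:

```shell
# Prints "True" when the Filestore CSI driver add-on is enabled on the cluster.
# Assumes $CLUSTER_NAME is set; adjust --location for your zone or region.
gcloud container clusters describe "$CLUSTER_NAME" \
  --location=us-central1-a \
  --format="value(addonsConfig.gcpFilestoreCsiDriverConfig.enabled)"
```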
  • Configure access to your GKE cluster by generating and setting up your local kubeconfig credentials. For more information, see GKE access for kubectl.
    Tip: To generate your kubeconfig, run:
    gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION --project $PROJECT_ID
    Then verify your current Kubernetes context:
    kubectl config get-contexts
    If the current context is not the intended one, set it:
    kubectl config use-context $GKE_CONTEXT
    $GKE_CONTEXT is usually gke_$PROJECT_ID_$REGION_$CLUSTER_NAME. Replace the variables with your actual values.
  • Ensure all system pods are running in your GKE cluster. You can run this command to check the status:
    kubectl get pods -n kube-system
    The Google Cloud Platform console also provides the status information.
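Instead of inspecting the pod list manually, you can ask kubectl to block until the system pods report Ready. This is an optional convenience, not a required step:

```shell
# Wait up to 5 minutes for all kube-system pods to reach the Ready condition.
# The command exits with a non-zero status if any pod fails to become Ready in time.
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s
```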
  • The dynamic-engine-crd custom resource definitions must already be installed from the oci://ghcr.io/talend/helm/dynamic-engine-crd Helm chart. If they are not, run the following commands to install them:
    1. Find the chart version to be used:
      • Run the following Helm command:
        helm show chart oci://ghcr.io/talend/helm/dynamic-engine-crd --version <engine_version>
      • Check the version directly in Talend Management Console, or consult the Dynamic Engine changelog for the chart version included in your Dynamic Engine version.
      • Use an API call to the Dynamic Engine version endpoint.
    2. Run the following command to install the Helm chart of a given version:
      helm install dynamic-engine-crd oci://ghcr.io/talend/helm/dynamic-engine-crd --version <helm_chart_version>
      Replace <helm_chart_version> with the chart version supported by your Dynamic Engine version.

      If you do not specify a version, the latest available dynamic-engine-crd chart version is installed.
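After installing the chart, you can verify that the release and its custom resource definitions are present. The grep pattern below is an assumption about the CRD API group name; adjust it to match the CRDs your chart version actually installs:

```shell
# Confirm that the dynamic-engine-crd Helm release is deployed.
helm list --filter dynamic-engine-crd

# List the installed CRDs; the "talend" filter is an assumption,
# so adjust it if your chart uses a different API group.
kubectl get crds | grep -i talend
```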

About this task

This procedure describes how to deploy a customized Dynamic Engine environment in a GKE cluster using Helm charts. It includes steps for configuring GKE storage classes and customizing Helm values files for GKE compatibility.

Procedure

  1. Configure a ReadWriteMany GKE storage class for Dynamic Engine.
    Create a ReadWriteMany storage class dedicated to Dynamic Engine, and in later steps, use its name as defaultStorageClassName. For example:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: dyn-engine
    provisioner: filestore.csi.storage.gke.io
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    reclaimPolicy: Delete
    parameters:
      tier: <type-of-storage-class>
      network: <name-of-configured-network>
    This example creates a storage class called dyn-engine, whose provisioner filestore.csi.storage.gke.io creates persistent volumes of the type and in the network specified by the tier and network fields, respectively.
    • The following types are available for this tier field:
      • standard-rwx
      • premium-rwx
      • enterprise-rwx
      • enterprise-multishare-rwx
      • zonal-rwx
      See GKE multi-share storage classes for details.
    • The name of the network is the one you have configured for your GKE cluster. If you did not specify a network name for your cluster, use default.

      If you remove the network field from the current storage class configuration, the default name is automatically used.
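Assuming you saved the manifest above to a file such as dyn-engine-storage-class.yaml (the file name here is only an example), you can create and verify the storage class as follows:

```shell
# Create the storage class from the manifest above
# (dyn-engine-storage-class.yaml is an example file name).
kubectl apply -f dyn-engine-storage-class.yaml

# Verify that the class exists and uses the Filestore provisioner.
kubectl get storageclass dyn-engine
```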

  2. Deploy the Dynamic Engine instance and its environment.
    1. Run this command to deploy the engine instance:
      helm upgrade --install dynamic-engine \
        -f $DYNAMIC_ENGINE_ID-helm-values/$DYNAMIC_ENGINE_ID-values.yaml \
        oci://ghcr.io/talend/helm/dynamic-engine

      This is a default deployment. The command installs the Dynamic Engine instance, or upgrades it if it has already been deployed. Replace $DYNAMIC_ENGINE_ID with your Dynamic Engine ID, for example, c-m-sjufu4qy.

    2. Create a custom Helm values file for the environment.

      Example

      cat <<EOF > $DYNAMIC_ENGINE_ENVIRONMENT_ID-custom-gke-values.yaml
      ---
      configuration:
        persistence:
          defaultStorageClassName: dyn-engine
      EOF
      This example sets dyn-engine, the storage class you created earlier, as the value of defaultStorageClassName. Change it to match the storage class dedicated to your Dynamic Engine.
    3. Deploy the Helm charts for the Dynamic Engine environment.
      helm upgrade --install dynamic-engine-environment-$DYNAMIC_ENGINE_ENVIRONMENT_ID \
        -f $DYNAMIC_ENGINE_ENVIRONMENT_ID-values.yaml \
        -f $DYNAMIC_ENGINE_ENVIRONMENT_ID-custom-gke-values.yaml \
        oci://ghcr.io/talend/helm/dynamic-engine-environment \
        --version $DYNAMIC_ENGINE_VERSION
      The command installs the Dynamic Engine environment, or upgrades it if it has already been deployed.
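To check that both releases deployed successfully, you can query their Helm status; the release names below match the helm upgrade --install commands used in the previous steps:

```shell
# Show the status of the engine and environment releases.
helm status dynamic-engine
helm status dynamic-engine-environment-$DYNAMIC_ENGINE_ENVIRONMENT_ID

# List all pods created by the deployment across namespaces.
kubectl get pods --all-namespaces
```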

Results

Once complete, the Dynamic Engine environment services are installed in your GKE cluster and are ready to run tasks or plans.

In Talend Management Console, the status of this environment changes to Ready.

If the deployment fails or the Dynamic Engine services are disassociated, the status becomes Not ready.

What to do next

After successful deployment, you can add tasks to the Dynamic Engine environment as you would for standard engines. For details, see Adding a Job task in a Dynamic Engine environment.
