Deploying Dynamic Engine in a GKE cluster
Install a Dynamic Engine instance and its environment in a Google Kubernetes Engine (GKE) cluster using Helm charts and GKE-specific storage classes.
Before you begin
- Basic GKE knowledge is required. Install the Google Cloud CLI on your machine to interact with Google Cloud and GKE. For more information, see:
  - Google Cloud SDK installation guide for installing and initializing the Google Cloud CLI.
  - GKE access for kubectl for installing the gke-gcloud-auth-plugin plugin and authenticating to your Google Cloud project.
  Tip: To install the plugin, run:
  gcloud components install gke-gcloud-auth-plugin
  To authenticate with Google Cloud and set your project, run the following commands, replacing $PROJECT_ID with your Google Cloud project ID:
  gcloud auth login
  gcloud config set project $PROJECT_ID
- Create or update your GKE cluster with the configuration required by Dynamic Engine. To create a new cluster, run:
  gcloud container clusters create $CLUSTER_NAME \
    --addons=GcpFilestoreCsiDriver \
    --cluster-version=1.30 \
    --location=us-central1-a \
    --machine-type=e2-standard-4 \
    --num-nodes=3
  This command enables the GcpFilestoreCsiDriver add-on, specifies a machine type with at least 4 vCPUs (for example, e2-standard-4 or above), and provisions a fixed number of nodes.
  If your cluster already exists, update it as follows:
  gcloud container clusters update $CLUSTER_NAME \
    --update-addons=GcpFilestoreCsiDriver=ENABLED \
    --region=$REGION
- Configure access to your GKE cluster by generating and setting up your local kubeconfig credentials. For more information, see GKE access for kubectl.
  Tip: To generate your kubeconfig, run:
  gcloud container clusters get-credentials $CLUSTER_NAME --region $REGION --project $PROJECT_ID
  Then verify your current Kubernetes context:
  kubectl config get-contexts
  If the current context is not the intended one, set it. $GKE_CONTEXT is usually gke_$PROJECT_ID_$REGION_$CLUSTER_NAME; replace the variables with your actual values.
  kubectl config use-context $GKE_CONTEXT
- Ensure all system pods are running in your GKE cluster. You can run this command to check the status:
  kubectl get pods -n kube-system
  The Google Cloud Platform console also provides the status information.
- The dynamic-engine-crd custom resource definitions must have been installed using the oci://ghcr.io/talend/helm/dynamic-engine-crd Helm chart. If not, run the following commands for the installation:
  1. Find the chart version to be used, in one of the following ways:
     - Run the following Helm command:
       helm show chart oci://ghcr.io/talend/helm/dynamic-engine-crd --version <engine_version>
     - See the version directly from Talend Management Console, or check the Dynamic Engine changelog for the chart version included in your Dynamic Engine version.
     - Use an API call to the Dynamic Engine version endpoint.
  2. Run the following command to install the Helm chart of a given version, replacing <helm_chart_version> with the chart version supported by your Dynamic Engine version:
     helm install dynamic-engine-crd oci://ghcr.io/talend/helm/dynamic-engine-crd --version <helm_chart_version>
     If you do not specify a version, the latest available dynamic-engine-crd chart version is installed.
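As a quick sanity check for the kubeconfig step above, the context name that gcloud writes can be derived from your project, region, and cluster values. The snippet below is a minimal sketch assuming the standard gke_ naming convention; the sample values are placeholders, not values from this procedure:

```shell
# Derive the default kubeconfig context name generated by
# `gcloud container clusters get-credentials`
# (assumption: the standard gke_<project>_<location>_<cluster> naming).
PROJECT_ID="my-project"
REGION="us-central1"
CLUSTER_NAME="my-cluster"
GKE_CONTEXT="gke_${PROJECT_ID}_${REGION}_${CLUSTER_NAME}"
echo "$GKE_CONTEXT"
# → gke_my-project_us-central1_my-cluster
```

You can compare this value against the output of kubectl config get-contexts before switching contexts.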
About this task
This procedure describes how to deploy a customized Dynamic Engine environment in a GKE cluster using Helm charts. It includes steps for configuring GKE storage classes and customizing Helm values files for GKE compatibility.
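For the GKE storage class configuration mentioned above, a Filestore-backed StorageClass typically looks like the following sketch. This is an illustrative config fragment, not a manifest mandated by Dynamic Engine: the name, tier, and network values are placeholders to adapt to your cluster and Helm values files.

```yaml
# Illustrative StorageClass using the GKE Filestore CSI driver
# (the GcpFilestoreCsiDriver add-on enabled in the prerequisites).
# Name, tier, and network are placeholder values; adjust to your setup.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-rwx            # hypothetical name to reference from Helm values
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard                 # Filestore service tier
  network: default               # VPC network for the Filestore instance
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```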
Procedure
Results
Once complete, the Dynamic Engine environment services are installed in your GKE cluster.
In Talend Management Console, the status of this environment becomes Ready, confirming that it is ready to run tasks or plans.
If the deployment fails or the Dynamic Engine services are disassociated, the status becomes Not ready.
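To spot-check the deployed services yourself, you can filter pod states from kubectl output. The snippet below demonstrates the filtering logic on inlined sample output; the pod names are hypothetical, and in a real cluster you would pipe kubectl get pods --no-headers for your engine namespace into the same awk filter:

```shell
# Count pods that are not Running or Completed, from `kubectl get pods`-style
# output. Sample output is inlined here for illustration; in practice:
#   kubectl get pods -n <engine_namespace> --no-headers | awk '$3!="Running" && $3!="Completed"'
sample='engine-svc-abc       1/1  Running  0  5m
engine-worker-xyz    1/1  Running  0  5m
engine-init-123      0/1  Pending  0  1m'
not_ready=$(printf '%s\n' "$sample" | awk '$3 != "Running" && $3 != "Completed" {n++} END {print n+0}')
echo "$not_ready"
# → 1
```

A result of 0 means every listed pod is in a healthy terminal or running state; any other value points at pods worth inspecting with kubectl describe pod.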
What to do next
After successful deployment, you can add tasks to the Dynamic Engine environment as you would for standard engines. For details, see Adding a Job task in a Dynamic Engine environment.