
Kestra Integration: Declarative Spark Orchestration

Kestra's main page

Overview

Kestra is an open-source data orchestration platform that enables the definition of complex pipelines through a declarative YAML syntax. By integrating Kestra with Ilum, engineering teams can combine Kestra's orchestration capabilities with Ilum's Spark-on-Kubernetes execution engine to build robust, scalable data workflows.

This integration allows for the decoupling of workflow logic from execution logic: Kestra manages the dependencies, scheduling, and triggers, while Ilum manages the Spark lifecycle, resource allocation, and Kubernetes pod orchestration.

Architectural Interaction

The integration relies on the native spark-submit protocol proxied through Ilum's control plane. The communication flow for a Spark job submission is as follows:

  1. Workflow Trigger: Kestra initiates a task based on a schedule, API call, or external event.
  2. Task Execution: The Kestra Worker executes the SparkCLI plugin task.
  3. Submission: The Spark Client initiates a REST request to Ilum's Core component (ilum-core:9888), which emulates a Spark Master interface.
  4. Orchestration: Ilum translates the request into a Kubernetes SparkApplication or manages the driver pod directly, applying resource quotas and scaling policies.
  5. Monitoring: Kestra polls the execution status while Ilum provides real-time logs and metrics.
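Step 3 above uses the Spark standalone REST submission protocol. As a rough sketch of what crosses the wire, the payload below shows the shape of the request spark-submit POSTs to the Ilum endpoint; the field values are illustrative, borrowed from the workflow example later on this page, and the live curl call is commented out because it requires cluster access.

```shell
# Sketch of the CreateSubmissionRequest payload that spark-submit sends to
# the REST endpoint emulated by ilum-core:9888 (field values illustrative).
cat <<'EOF' > /tmp/submission.json
{
  "action": "CreateSubmissionRequest",
  "clientSparkVersion": "3.5.5",
  "appResource": "s3a://data-lake-artifacts/jobs/spark-job-1.0.jar",
  "mainClass": "com.enterprise.data.IngestionJob",
  "appArgs": [],
  "sparkProperties": {
    "spark.master": "spark://ilum-core:9888",
    "spark.submit.deployMode": "cluster",
    "spark.executor.instances": "2"
  },
  "environmentVariables": {}
}
EOF
# The live submission would then be (requires network access to the cluster):
#   curl -s -X POST -H 'Content-Type: application/json' \
#     --data @/tmp/submission.json \
#     http://ilum-core:9888/v1/submissions/create
cat /tmp/submission.json
```

Seeing the raw payload makes it clear why `--conf spark.master.rest.enabled=true` is mandatory: without the REST protocol, the client would attempt the legacy RPC handshake that Ilum's virtual master does not speak.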

Installation & Prerequisites

Kestra is packaged as an optional module within the Ilum platform.

  1. Enable Module: Ensure the Kestra module is enabled in your values.yaml during Ilum installation. Refer to the Production Deployment guide for configuration details.
  2. Network Access: The Kestra pods must have network access to the ilum-core service on port 9888 (Spark Master REST port).

To access the Kestra UI for workflow design:

kubectl port-forward svc/ilum-kestra-service 8080:8080

Navigate to http://localhost:8080/external/kestra.


Configuring Spark Workflows

Defining a Spark job in Kestra involves creating a flow that utilizes the io.kestra.plugin.spark.SparkCLI task. This task wraps the standard Spark submission process.

Basic Workflow Structure

The following YAML definition demonstrates a standard Spark batch job submission.

id: spark-data-ingestion
namespace: com.enterprise.data
description: "Daily ingestion pipeline utilizing Ilum Spark cluster"

inputs:
  - id: jar_location
    type: STRING
    defaults: "s3a://data-lake-artifacts/jobs/spark-job-1.0.jar"
  - id: main_class
    type: STRING
    defaults: "com.enterprise.data.IngestionJob"

tasks:
  - id: submit-spark-job
    type: io.kestra.plugin.spark.SparkCLI
    commands:
      - |
        spark-submit \
          --master spark://ilum-core:9888 \
          --deploy-mode cluster \
          --conf spark.master.rest.enabled=true \
          --conf spark.executor.instances=2 \
          --conf spark.executor.memory=4g \
          --conf spark.executor.cores=2 \
          --class {{ inputs.main_class }} \
          {{ inputs.jar_location }}

Configuration Parameters Explained

  • --master spark://ilum-core:9888: This targets Ilum's virtual Spark Master. Unlike a standard standalone master, this endpoint is backed by Ilum's orchestration logic which handles the Kubernetes scheduling.
  • --deploy-mode cluster: Critical. This instructs the client to launch the driver program inside the cluster (managed by Ilum). Using client mode would attempt to run the driver within the Kestra worker pod, which is not recommended for production due to resource contention and network isolation.
  • --conf spark.master.rest.enabled=true: Enables the REST submission protocol required for communicating with the Ilum control plane.

Artifact Management

The Spark JAR file specified in {{ inputs.jar_location }} must be accessible by the Spark Driver pod spawned by Ilum. Recommended storage backends include S3 (MinIO), HDFS, or GCS. Local file paths from the Kestra worker are not accessible to the Spark cluster.
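A minimal pre-flight sketch for this constraint, using the same s3a path as the example above: the commented mc commands illustrate a hypothetical MinIO upload (the alias, endpoint, and bucket names are assumptions), and the runnable part simply checks that the flow input uses a cluster-accessible scheme.

```shell
# Hypothetical JAR upload to MinIO (alias, endpoint, and bucket are assumptions):
#   mc alias set ilum-minio http://ilum-minio:9000 "$ACCESS_KEY" "$SECRET_KEY"
#   mc cp target/spark-job-1.0.jar ilum-minio/data-lake-artifacts/jobs/
#
# The flow input must reference a cluster-accessible scheme, never a local path:
jar_location="s3a://data-lake-artifacts/jobs/spark-job-1.0.jar"
case "$jar_location" in
  s3a://*|hdfs://*|gs://*) verdict="cluster-accessible" ;;
  *) verdict="local path: invisible to the Spark driver pod" ;;
esac
echo "$jar_location -> $verdict"
```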

Visual representation of the workflow DAG in Kestra


Advanced Workflow Patterns

1. Dynamic Parameter Injection

Pass runtime variables such as execution dates or upstream data paths into your Spark application arguments.

tasks:
  - id: daily-aggregation
    type: io.kestra.plugin.spark.SparkCLI
    commands:
      - |
        spark-submit \
          --master spark://ilum-core:9888 \
          --deploy-mode cluster \
          --conf spark.master.rest.enabled=true \
          --class com.etl.Aggregator \
          s3a://bucket/jars/etl.jar \
          --date {{ execution.startDate | date("yyyy-MM-dd") }} \
          --input-path s3a://raw-data/{{ execution.startDate | date("yyyy/MM/dd") }}/
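For reference, the two date expressions above render into an argument string and a partition path. This shell sketch reproduces the same formatting with GNU date for a fixed, illustrative execution date; at runtime Kestra injects the real execution.startDate.

```shell
# Reproduce the date() filters above with GNU date. The execution date is
# fixed here for illustration only (Kestra supplies it at runtime).
START_DATE="2024-06-01"
DATE_ARG=$(date -d "$START_DATE" +%Y-%m-%d)
INPUT_PATH="s3a://raw-data/$(date -d "$START_DATE" +%Y/%m/%d)/"
echo "--date $DATE_ARG --input-path $INPUT_PATH"
# → --date 2024-06-01 --input-path s3a://raw-data/2024/06/01/
```

Partitioning the input path by yyyy/MM/dd this way keeps each daily run idempotent: re-executing a past date re-reads exactly that day's partition.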

2. Parallel Execution

Execute multiple independent Spark jobs concurrently to maximize cluster utilization.

tasks:
  - id: parallel-processing
    type: io.kestra.core.tasks.flows.Parallel
    tasks:
      - id: job-region-eu
        type: io.kestra.plugin.spark.SparkCLI
        commands: ["spark-submit ... --class com.jobs.EUJob ..."]

      - id: job-region-us
        type: io.kestra.plugin.spark.SparkCLI
        commands: ["spark-submit ... --class com.jobs.USJob ..."]
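Conceptually, the Parallel task behaves like backgrounded shell jobs: both submissions start immediately, and the parent step succeeds only when both children do. A stubbed shell analogue, where run_job is a stand-in for the real spark-submit calls:

```shell
# Shell analogue of the Parallel task: both jobs run concurrently and the
# step fails if either child fails. run_job stubs the real spark-submit.
run_job() { echo "submitting $1"; }
run_job com.jobs.EUJob & pid_eu=$!
run_job com.jobs.USJob & pid_us=$!
wait "$pid_eu" && wait "$pid_us" && echo "both regions submitted"
```

Because each child is its own Spark application, Ilum schedules their driver and executor pods independently, so parallelism is bounded by the cluster's resource quotas rather than by Kestra.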

Execution triggers allow for dynamic input parameters


Performance Engineering: Optimization of Submission Latency

By default, Kestra may launch a new Docker container for each task execution. For high-frequency workflows, the overhead of spinning up a container and initializing the JVM for the Spark Client can introduce latency.

Host-Process Execution Strategy

To eliminate container startup overhead, you can configure Kestra to execute the spark-submit command directly in the host process of the Kestra worker. This requires a custom worker image with Spark binaries pre-installed.

1. Build Custom Worker Image

Create a Dockerfile that layers the Spark client binaries onto the Kestra base image.

# Use the official Kestra image as the base
FROM kestra/kestra:v0.22.6

USER root

# Install Spark client dependencies
RUN apt-get update && apt-get install -y curl tar openjdk-17-jre-headless && \
    rm -rf /var/lib/apt/lists/*

# Download and install spark-submit
ENV SPARK_VERSION=3.5.5
RUN curl -O https://dlcdn.apache.org/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop3.tgz && \
    tar -xzf spark-${SPARK_VERSION}-bin-hadoop3.tgz -C /opt && \
    mv /opt/spark-${SPARK_VERSION}-bin-hadoop3 /opt/spark && \
    rm spark-${SPARK_VERSION}-bin-hadoop3.tgz

# Configure path
ENV PATH=$PATH:/opt/spark/bin
ENV SPARK_HOME=/opt/spark

# Restore the Kestra user
USER kestra
WORKDIR /app
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["--help"]
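Building and pushing this image might look like the following sketch; the registry host and tag scheme are assumptions, so adjust them to your environment, and the actual docker commands are commented out since they require Docker and registry credentials.

```shell
# Compose a tag for the custom worker image (registry host is an assumption;
# the tag encodes both the Kestra base version and the bundled Spark version).
SPARK_VERSION=3.5.5
KESTRA_VERSION=v0.22.6
IMAGE="registry.example.com/ilum/kestra-spark:${KESTRA_VERSION}-spark${SPARK_VERSION}"
echo "$IMAGE"
# Actual build and push (requires Docker and registry credentials):
#   docker build -t "$IMAGE" .
#   docker push "$IMAGE"
```

Encoding both versions in the tag makes it obvious which Spark client a given worker carries when you later bump either component.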

2. Configure Task Runner

Modify your workflow to use the Process task runner, bypassing the Docker isolation for the submission step.

tasks:
  - id: low-latency-submission
    type: io.kestra.plugin.spark.SparkCLI
    taskRunner:
      type: io.kestra.plugin.core.runner.Process
    commands:
      - |
        spark-submit \
          --master spark://ilum-core:9888 \
          --deploy-mode cluster \
          --conf spark.master.rest.enabled=true \
          --class {{ inputs.YourMainClass }} \
          {{ inputs.YourSparkJar }}

Observability & Debugging

Job Correlation

When a workflow is executed:

  1. Kestra UI: Displays the spark-submit logs (stdout/stderr), providing immediate feedback on the submission status.
  2. Ilum UI: The job appears in the "Applications" view. The Spark Driver logs provide the deep execution details.
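Beyond the two UIs, the driver state can also be polled over the same REST protocol used for submission. In this hedged sketch the submission ID is illustrative and the live curl call is commented out; the runnable part parses a canned response of the same shape.

```shell
# Live polling would be (submission ID illustrative, requires cluster access):
#   curl -s http://ilum-core:9888/v1/submissions/status/driver-20240601-0001
# A SubmissionStatusResponse resembles the canned JSON below; extract the state.
response='{"action":"SubmissionStatusResponse","driverState":"RUNNING","success":true}'
state=$(echo "$response" | sed -n 's/.*"driverState":"\([A-Z]*\)".*/\1/p')
echo "driver state: $state"
# → driver state: RUNNING
```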

Kestra execution timeline showing task duration and status

Failure Handling

Spark jobs may fail due to transient cluster issues (e.g., preemption). Configure automatic retries in Kestra to handle these gracefully.

tasks:
  - id: resilient-spark-job
    type: io.kestra.plugin.spark.SparkCLI
    retry:
      type: constant
      interval: PT5M
      maxAttempt: 3
    commands: ...
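With this constant policy the retries are evenly spaced: assuming maxAttempt counts total attempts, the task runs up to three times with five minutes (PT5M) between attempts, i.e. at most ten minutes of backoff before the execution is marked failed. The arithmetic:

```shell
# Worst-case backoff for a constant retry policy: (attempts - 1) * interval.
INTERVAL_MIN=5   # PT5M
MAX_ATTEMPT=3    # assumed to be the total number of attempts
TOTAL_WAIT=$(( (MAX_ATTEMPT - 1) * INTERVAL_MIN ))
echo "up to $MAX_ATTEMPT attempts, $TOTAL_WAIT minutes of backoff"
# → up to 3 attempts, 10 minutes of backoff
```

Size the interval to outlast the transient failures you expect (e.g. pod preemption and rescheduling), and keep the worst-case total inside your pipeline's SLA.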

Verification of the job execution within the Ilum Dashboard