Configure External Kafka
You can configure the mini_runtime deployment to connect to an external Kafka broker instead of deploying a built-in Kafka instance. This is useful when you want to use a managed Kafka service or share a Kafka cluster across multiple deployments.
Features
Support for external Kafka brokers (single or multiple brokers)
SASL/PLAIN authentication support
Prerequisites
Before configuring an external Kafka broker, ensure:
Your Kafka broker is accessible from your Kubernetes cluster
Required Kafka topics are created
If using SASL authentication, you have valid credentials
Network connectivity is established between Kubernetes pods and Kafka broker
Configuration Steps
Step 1: Set Up Your Kafka Cluster
You can use a managed Kafka service from cloud providers or set up your own Kafka cluster. Below are links to official documentation for creating Kafka clusters on major cloud platforms:
Cloud-Managed Kafka Services
Amazon Web Services (AWS):
Amazon MSK (Managed Streaming for Apache Kafka)
Google Cloud Platform (GCP):
Google Cloud Managed Service for Apache Kafka
Microsoft Azure:
Azure Event Hubs for Apache Kafka
Create Required Kafka Topics
Regardless of which platform you choose, ensure the following topics are created:
Required Topics:
akto.api.logs (recommended: 3 partitions, replication factor 3 for production)
akto.api.producer.logs (recommended: 3 partitions, replication factor 3 for production)
Note: Many managed Kafka services support automatic topic creation. If enabled, topics will be created automatically when the application first connects.
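If automatic topic creation is disabled on your cluster, the required topics can be created with the `kafka-topics.sh` CLI that ships with Apache Kafka. A sketch (the broker address is a placeholder; substitute your own):

```shell
# Create the two required topics, following the production
# recommendation of 3 partitions and replication factor 3.
kafka-topics.sh --create \
  --bootstrap-server kafka.example.com:9092 \
  --topic akto.api.logs \
  --partitions 3 \
  --replication-factor 3

kafka-topics.sh --create \
  --bootstrap-server kafka.example.com:9092 \
  --topic akto.api.producer.logs \
  --partitions 3 \
  --replication-factor 3
```

Managed services typically expose the same operation through their own console or CLI; use whichever tool your platform provides.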
Step 2: Determine Kafka Broker Address
Obtain the broker address (hostname:port or IP:port) that is accessible from your Kubernetes cluster. Ensure the address is routable from within the cluster and does not use localhost, 127.0.0.1, or host.docker.internal, since those resolve to the pod itself rather than the broker.
Step 3: Install Mini Runtime with External Kafka
For Kafka Without Authentication:
Create a values file (custom-kafka-values.yaml):
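A minimal sketch of what such a values file might look like. The key names below are assumptions for illustration; confirm the exact paths against the chart's own values.yaml before use:

```yaml
# custom-kafka-values.yaml -- illustrative sketch; key names are
# assumptions, verify against the chart's values.yaml.
externalKafka:
  enabled: true
  brokerUrl: "kafka.example.com:9092"  # must be routable from the cluster
```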
Install or upgrade the Helm chart:
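For example (release name, chart reference, and namespace below are placeholders; substitute your own):

```shell
# Install the chart, or upgrade it in place if already installed,
# applying the custom Kafka values file.
helm upgrade --install mini-runtime akto/mini-runtime \
  --namespace akto --create-namespace \
  -f custom-kafka-values.yaml
```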
For Kafka With SASL Authentication:
Create a values file (custom-kafka-sasl-values.yaml):
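A minimal sketch with SASL/PLAIN enabled. Again, the key names are assumptions for illustration; confirm them against the chart's values.yaml. The comments indicate the environment variables each key is expected to drive:

```yaml
# custom-kafka-sasl-values.yaml -- illustrative sketch; key names are
# assumptions, verify against the chart's values.yaml.
externalKafka:
  enabled: true
  brokerUrl: "kafka.example.com:9092"
  auth:
    enabled: true        # maps to KAFKA_AUTH_ENABLED="true"
    mechanism: "PLAIN"   # maps to AKTO_KAFKA_SASL_MECHANISM
    username: "akto-user"
    password: "change-me"
```

Avoid committing real credentials to version control; prefer injecting them from a Kubernetes Secret or a CI variable where the chart supports it.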
Install or upgrade the Helm chart:
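For example (release name, chart reference, and namespace are placeholders):

```shell
# Install or upgrade, applying the SASL-enabled values file.
helm upgrade --install mini-runtime akto/mini-runtime \
  --namespace akto --create-namespace \
  -f custom-kafka-sasl-values.yaml
```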
Using Command-Line Parameters:
You can also configure external Kafka directly via command-line parameters:
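A sketch of the equivalent invocation using `--set` flags. The value key paths are assumptions for illustration; confirm them against the chart's values.yaml:

```shell
# Same configuration expressed as --set flags instead of a values file.
helm upgrade --install mini-runtime akto/mini-runtime \
  --namespace akto \
  --set externalKafka.enabled=true \
  --set externalKafka.brokerUrl="kafka.example.com:9092" \
  --set externalKafka.auth.enabled=true \
  --set externalKafka.auth.username="akto-user" \
  --set externalKafka.auth.password="change-me"
```

Note that passwords passed via `--set` end up in shell history and release metadata; a values file or Secret is generally safer.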
Step 4: Verify Connection
Check that the mini_runtime pods are connecting to Kafka successfully:
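For example (namespace and deployment name are placeholders matching the install sketches in this guide):

```shell
# Confirm the pods are Running, then scan their logs for Kafka
# connection messages or errors.
kubectl get pods -n akto
kubectl logs -n akto deployment/mini-runtime | grep -i kafka
```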
Troubleshooting
Connection Refused Errors
If you see errors like "Connection to node -1 could not be established":
Verify Kafka is running:
Test network connectivity:
Check advertised listeners match your configuration:
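The three checks above can be run roughly as follows (broker address, namespace, and config path are placeholders):

```shell
# 1. Verify the broker responds (run where the Kafka CLI is available).
kafka-broker-api-versions.sh --bootstrap-server kafka.example.com:9092

# 2. Test TCP connectivity from inside the cluster with a throwaway pod.
kubectl run kafka-net-test --rm -it --image=busybox --restart=Never -- \
  nc -vz kafka.example.com 9092

# 3. Inspect advertised.listeners on the broker; it must advertise an
#    address the pods can actually reach, not localhost.
grep advertised.listeners /path/to/kafka/config/server.properties
```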
Authentication Failures
If you see "Failed authentication" errors in Kafka logs:
Verify credentials are correct in your values file
Check JAAS configuration matches the username/password
Ensure SASL mechanism is PLAIN:
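For reference, the client-side SASL/PLAIN settings the application's connection must effectively match look like this (username and password are placeholders):

```properties
# Standard Kafka client settings for SASL/PLAIN.
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="akto-user" \
  password="change-me";
```

If the broker expects a different mechanism (e.g. SCRAM-SHA-512), authentication will fail even with correct credentials.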
DNS Resolution Issues
If you see "UnknownHostException" errors:
Use IP addresses instead of hostnames
Test DNS resolution from within the cluster:
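For example, using a short-lived pod (the hostname is a placeholder):

```shell
# Resolve the broker hostname from inside the cluster; failure here
# means cluster DNS cannot see the broker's name.
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup kafka.example.com
```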
Pods Not Starting
Check the deployment environment variables:
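For example (deployment name and namespace are placeholders):

```shell
# List the environment variables currently set on the deployment.
kubectl set env deployment/mini-runtime -n akto --list
```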
Verify these variables are set correctly:
AKTO_KAFKA_BROKER_URL
KAFKA_AUTH_ENABLED (should be "true" for SASL, absent for PLAINTEXT)
KAFKA_USERNAME and KAFKA_PASSWORD (for SASL)
Environment Variables Set by Configuration
When you enable external Kafka, the following environment variables are automatically configured:
Without Authentication (PLAINTEXT):
AKTO_KAFKA_BROKER_URL: Your broker URL
AKTO_KAFKA_BROKER_MAL: Your broker URL
KAFKA_AUTH_ENABLED: Not set (defaults to false)
With SASL Authentication:
AKTO_KAFKA_BROKER_URL: Your broker URL
AKTO_KAFKA_BROKER_MAL: Your broker URL
KAFKA_AUTH_ENABLED: "true"
AKTO_KAFKA_SASL_MECHANISM: "PLAIN"
KAFKA_USERNAME: Your username
KAFKA_PASSWORD: Your password