
Connect Akto with Kubernetes in AWS

Learn how to send API traffic data from your Kubernetes cluster to Akto.

Introduction

Akto needs traffic from your staging, production, or other environments to discover APIs and analyze them for API misconfigurations. It does so by connecting to one of your traffic sources. One such source is your Kubernetes cluster.
Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management.
You can add the Akto DaemonSet to your Kubernetes cluster. It is very lightweight and runs one pod per node in your cluster. It intercepts all the node traffic and sends it to the Akto traffic analyzer, which reconstructs your application's API requests and responses, understands API metadata, and finds misconfigurations. Akto can handle high traffic volumes, and you can always configure how much traffic is sent to the Akto dashboard.

Overview of Akto-Kubernetes setup

Kubernetes deployment for Akto in AWS
This is how you run Akto's traffic collector on your Kubernetes nodes as a DaemonSet and send mirrored traffic to Akto.

Pre-requisites to send data from your Kubernetes cluster to Akto on AWS

  1. You have permissions to create and assign roles to InstanceProfiles.
  2. You should have installed the Akto dashboard in the same VPC as your application server EC2 instances.
  3. Your application should receive unencrypted traffic. SSL, if any, should be terminated before it reaches your application server EC2 instance. Usually, SSL termination happens at the API Gateway or load balancer.
  4. You should have permissions to add a DaemonSet to your Kubernetes setup (see the quick check after this list).
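To verify the last point, you can run a quick permission check with kubectl. Replace <NAMESPACE> with the namespace you plan to deploy the DaemonSet into (a placeholder here):

# Prints "yes" if your current kubectl context may create DaemonSets in that namespace
kubectl auth can-i create daemonsets -n <NAMESPACE>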

Configuring Akto traffic processing stack and creating AWS policy

Follow these steps to add the DaemonSet config to your Kubernetes setup -
  1. Navigate to Quick Start on your Akto dashboard and expand the Connect traffic data section.
Navigate to quick start
  2. Scroll down to the Kubernetes DaemonSet section.
Scroll to Kubernetes
Kubernetes DaemonSet
  3. Copy the policy JSON and click on the Akto Dashboard role link.
Copy AWS policy
  4. Click on the JSON tab and paste the policy.
Paste policy in AWS
  5. Click on the Review policy button.
Click on review policy
  6. Enter AktoDashboardPolicy as the policy name and click on the Create Policy button (or add the policy from the AWS CLI, as shown after this step).
Enter the name of the policy
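If you prefer the AWS CLI over the console, steps 3-6 above roughly amount to adding the copied policy JSON to the Akto Dashboard role. This is only a sketch: it assumes you saved the copied policy JSON to a local file named akto-dashboard-policy.json, and AKTO_DASHBOARD_ROLE stands in for whatever role the Akto Dashboard role link pointed to.

# Add the copied policy JSON to the dashboard role as an inline policy named AktoDashboardPolicy
aws iam put-role-policy \
  --role-name AKTO_DASHBOARD_ROLE \
  --policy-name AktoDashboardPolicy \
  --policy-document file://akto-dashboard-policy.json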
  7. Once the policy is created, go back to the dashboard.
  8. You should now see a Setup DaemonSet stack button. Click on this button to set up the traffic processing stack.
This will process your API traffic data and populate APIs on the dashboard. This might take a few minutes to complete.
Setup DaemonSet stack

Setting up the Akto DaemonSet pod on your K8s cluster

  1. Once complete, you should see a DaemonSet config. Copy the config and paste it into a text editor.
Copy the configuration
You can also copy from here -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: akto-k8s
  namespace: {NAMESPACE}
  labels:
    app: {APP_NAME}
spec:
  selector:
    matchLabels:
      app: {APP_NAME}
  template:
    metadata:
      labels:
        app: {APP_NAME}
    spec:
      hostNetwork: true
      containers:
        - name: mirror-api-logging
          image: aktosecurity/mirror-api-logging:k8s_agent
          env:
            - name: AKTO_TRAFFIC_BATCH_TIME_SECS
              value: "10"
            - name: AKTO_TRAFFIC_BATCH_SIZE
              value: "100"
            - name: AKTO_INFRA_MIRRORING_MODE
              value: "gcp"
            - name: AKTO_KAFKA_BROKER_MAL
              value: "<AKTO_NLB_IP>:9092"
            - name: AKTO_MONGO_CONN
              value: "<AKTO_MONGO_CONN>"
  2. Replace {NAMESPACE} with your app's namespace and {APP_NAME} with the name of your app (on AWS, you can also script this, as shown after this step). If you have installed on AWS -
  • Go to EC2 > Instances > Search for Akto Mongo Instance > Copy the private IP.
  • Replace <AKTO_MONGO_CONN> with mongodb://10.0.1.3:27017/admini, where 10.0.1.3 is the private IP (example).
  • Go to EC2 > Load balancers > Search for AktoNLB > Copy its DNS name.
  • Replace <AKTO_NLB_IP> with the DNS name, e.g. AktoNLB-ca5f9567a891b910.elb.ap-south-1.amazonaws.com
If you have installed on GCP, Kubernetes or OpenShift -
  • Get the Mongo service's DNS name from the Akto cluster.
  • Replace <AKTO_MONGO_CONN> with mongodb://mongo.p03.svc.cluster.local:27017/admini (where mongo.p03.svc.cluster.local is the Mongo service).
  • Get the Runtime service's DNS name from the Akto cluster.
  • Replace <AKTO_NLB_IP> with the DNS name, e.g. akto-api-security-runtime.p03.svc.cluster.local
Replace namespace in text editor
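If you installed on AWS and prefer the command line, the lookups and substitutions above can be scripted. This is only a sketch: it assumes the AWS CLI is configured for the account and region where Akto runs, that the Mongo instance's Name tag is "Akto Mongo Instance" and the load balancer is named AktoNLB (as in the console searches above), that the config has been saved as akto-daemonset-config.yaml (step 3 below), and that my-namespace / my-app are placeholders for your own values.

# Private IP of the Akto Mongo EC2 instance
MONGO_IP=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=Akto Mongo Instance" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PrivateIpAddress" --output text)

# DNS name of the AktoNLB load balancer
NLB_DNS=$(aws elbv2 describe-load-balancers --names AktoNLB \
  --query "LoadBalancers[0].DNSName" --output text)

# Substitute the placeholders in the saved config (GNU sed; on macOS use: sed -i '')
sed -i \
  -e "s/{NAMESPACE}/my-namespace/g" \
  -e "s/{APP_NAME}/my-app/g" \
  -e "s/<AKTO_NLB_IP>/${NLB_DNS}/g" \
  -e "s#<AKTO_MONGO_CONN>#mongodb://${MONGO_IP}:27017/admini#g" \
  akto-daemonset-config.yaml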
  3. Create a file akto-daemonset-config.yaml with the above YAML config.
Create Akto daemonset config yaml
  4. Run kubectl apply -f akto-daemonset-config.yaml -n <NAMESPACE> in your terminal.
Apply the YAML config
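If the apply succeeds, kubectl typically prints daemonset.apps/akto-k8s created. You can also confirm that the collector pods have started on your nodes; the app label comes from the config above, and the placeholders are yours to fill in:

kubectl get pods -n <NAMESPACE> -l app=<APP_NAME>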
  5. Run kubectl get daemonsets -n <NAMESPACE> in your terminal. It should show the akto-k8s DaemonSet.
Run the command
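The output should look roughly like the following (illustrative only; the counts reflect the number of nodes in your cluster):

NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
akto-k8s   3         3         3       3            3           <none>          2m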
  6. Go to the API Inventory on your Akto dashboard to see your new APIs.
Check API Inventory

Frequently Asked Questions (FAQs)

The traffic will contain a lot of sensitive data - does it leave my VPC?
Data remains strictly within your VPC. Akto doesn't take data out of your VPC at all.
Does adding the DaemonSet have any impact on performance or latency?
Zero impact on latency. The DaemonSet doesn't sit in the request path like a proxy; it simply observes traffic, very similar to tcpdump. It is very lightweight. We have benchmarked it against traffic as high as 20M API requests/min, and it consumes very little CPU and RAM.
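If you want to see the footprint for yourself once the DaemonSet is running, and your cluster has metrics-server installed (an assumption), you can check its resource usage with:

kubectl top pods -n <NAMESPACE> -l app=<APP_NAME>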

Troubleshooting Guide

When I hit apply, it says "Something went wrong". How can I fix it?
Akto runs a CloudFormation template behind the scenes to set up the data processing stack and traffic mirroring sessions with your application servers' EC2 instances. This error means the CloudFormation setup failed.
  1. Go to AWS > CloudFormation > Search for "mirroring".
  2. Click on the Akto-mirroring stack and go to the Events tab.
  3. Scroll down to the oldest error event.
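You can also pull the failed events from the command line. A sketch, assuming the AWS CLI is configured for the right account and region; replace the stack name with the one you found in the console search above:

aws cloudformation describe-stack-events \
  --stack-name <your-akto-mirroring-stack-name> \
  --query "StackEvents[?contains(ResourceStatus, 'FAILED')].[LogicalResourceId,ResourceStatus,ResourceStatusReason]" \
  --output table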
The CloudFormation template failed with "Client.InternalError: Client error on launch.". How should I fix it?
This is a common AWS error. Follow the steps here.
The CloudFormation template failed with "We currently do not have sufficient capacity in the Availability Zone you requested... Launching EC2 instance failed."
You can reinstall Akto in a different availability zone, or you can go to the Template tab and save the CloudFormation template to a file. Search for "InstanceType" and replace all occurrences with a type that is available in your availability zone. You can then go to AWS > CloudFormation > Create stack and use this new template to set up traffic mirroring (or launch it from the CLI, as shown below).
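If you go the save-and-edit route, you can also launch the modified template from the CLI instead of the console. A sketch only: the file name is an assumption, and --capabilities is only needed if the template creates IAM resources (check the error message if the call is rejected):

aws cloudformation create-stack \
  --stack-name akto-mirroring \
  --template-body file://akto-mirroring-template.yaml \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM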
I don't see my error on this list here.
Please send us all details at [email protected] or reach out via Intercom on your Akto dashboard. We will definitely help you out.

Get Support for your Akto setup

There are multiple ways to request support from Akto. We are available 24x7 on the following:
  1. In-app Intercom support. Message us with your query on Intercom in the Akto dashboard and someone will reply.
  2. Join our Discord channel for community support.
  3. Contact [email protected] for email support.
  4. Contact us here.