Connect Akto with GKE
GKE is the industry's first fully managed Kubernetes service, with a full Kubernetes API, release channels, and multi-cluster support.
Once complete, you should see a daemonset config. Copy the config and paste it into a text editor.
Replace {NAMESPACE} with your app namespace and {APP_NAME} with the name of your app.

If you have installed on AWS -

Go to EC2 > Instances > Search for Akto Mongo Instance > Copy its private IP.
Replace AKTO_MONGO_CONN with mongodb://10.0.1.3:27017/admini, where 10.0.1.3 is the private IP (example).
Go to EC2 > Load balancers > Search for AktoNLB > Copy its DNS name.
Replace AKTO_NLB_IP with the DNS name, e.g. AktoNLB-ca5f9567a891b910.elb.ap-south-1.amazonaws.com.
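If you prefer the AWS CLI over the console, here is a hedged sketch of the same two lookups; the instance tag name and the AktoNLB name prefix are taken from the console steps above, so adjust them if your setup names things differently:

```bash
# Private IP of the Akto Mongo EC2 instance (assumes it is tagged Name="Akto Mongo Instance")
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=Akto Mongo Instance" \
  --query "Reservations[].Instances[].PrivateIpAddress" \
  --output text

# DNS name of the AktoNLB load balancer
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?contains(LoadBalancerName, 'AktoNLB')].DNSName" \
  --output text
```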
If you have installed on GCP, Kubernetes or OpenShift -

Get the Mongo service's DNS name from the Akto cluster.
Replace AKTO_MONGO_CONN with mongodb://mongo.p03.svc.cluster.local:27017/admini (where mongo.p03.svc.cluster.local is the Mongo service).
Get the runtime service's DNS name from the Akto cluster.
Replace AKTO_NLB_IP with the DNS name, e.g. akto-api-security-runtime.p03.svc.cluster.local.
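One way to look these up, assuming p03 is the namespace your Akto cluster runs in (p03 is only the example namespace used above):

```bash
# List the services in the Akto namespace; note the Mongo and runtime service names
kubectl get services -n p03

# In-cluster DNS names follow <service>.<namespace>.svc.cluster.local, for example:
#   mongo.p03.svc.cluster.local
#   akto-api-security-runtime.p03.svc.cluster.local
```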
Create a file akto-daemonset-config.yaml with the above YAML config.
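For orientation, a minimal sketch of what that file ends up looking like; everything except the daemonset name akto-k8s, the placeholders, and the env var names from the steps above is hypothetical, so always use the exact config copied from your Akto dashboard:

```bash
# Illustrative skeleton only - paste the exact config from the Akto dashboard instead.
cat > akto-daemonset-config.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: akto-k8s
  namespace: <NAMESPACE>                        # your app namespace
spec:
  selector:
    matchLabels:
      app: <APP_NAME>
  template:
    metadata:
      labels:
        app: <APP_NAME>
    spec:
      containers:
        - name: akto-k8s
          image: <IMAGE_FROM_DASHBOARD_CONFIG>  # hypothetical placeholder
          env:
            - name: AKTO_MONGO_CONN
              value: "mongodb://mongo.p03.svc.cluster.local:27017/admini"
            - name: AKTO_NLB_IP
              value: "akto-api-security-runtime.p03.svc.cluster.local"
EOF
```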
Call kubectl apply -f akto-daemonset-config.yaml -n <NAMESPACE> in your kubectl terminal.
Run the command kubectl get daemonsets in the terminal. It should show the akto-k8s daemonset.
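Put together, the apply-and-verify steps look like this (replace <NAMESPACE> with your app namespace; the output line is indicative):

```bash
# Deploy the Akto daemonset into your app's namespace
kubectl apply -f akto-daemonset-config.yaml -n <NAMESPACE>

# Verify it is running - you should see a daemonset named akto-k8s
kubectl get daemonsets -n <NAMESPACE>
# NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
# akto-k8s   3         3         3       3            3           <none>          1m
```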
Go to API Discovery on the Akto dashboard to see your new APIs.
The traffic will contain a lot of sensitive data - does it leave my VPC?
Data remains strictly within your VPC. Akto doesn't take data out of your VPC at all.
Does adding DaemonSet have any impact on performance or latency?
Zero impact on latency. The DaemonSet doesn't sit in the request path like a proxy; it simply intercepts traffic, very similar to tcpdump. It is very lightweight and consumes very little CPU and RAM; we have benchmarked it against traffic as high as 20M API requests/min.
When I hit apply, it says "Something went wrong". How can I fix it?
Akto runs a CloudFormation template behind the scenes to set up the data processing stack and the traffic mirroring sessions with your application servers' EC2 instances. This error means the CloudFormation setup failed. To find the cause:

Go to AWS > CloudFormation > Search for "mirroring".
Click on the Akto-mirroring stack and go to the Events tab.
Scroll down to the oldest error event.
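If you prefer the CLI, the same events can be listed with the AWS CLI; the stack name below is an assumption, so use whatever name the Akto-mirroring stack has in your account:

```bash
# List failed stack events; the output is reverse-chronological, so the oldest failure is at the bottom
aws cloudformation describe-stack-events \
  --stack-name Akto-mirroring \
  --query "StackEvents[?contains(ResourceStatus, 'FAILED')].[Timestamp,LogicalResourceId,ResourceStatusReason]" \
  --output table
```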
The CloudFormation template failed with "Client.InternalError: Client error on launch.". How should I fix it?
This is a common, known AWS error. Follow the steps here.
The CloudFormation template failed with "We currently do not have sufficient capacity in the Availability Zone you requested... Launching EC2 instance failed."
You can reinstall Akto in a different availability zone. Alternatively, go to the Template tab and save the CloudFormation template to a file, search for "InstanceType" and replace all occurrences with a type that is available in your availability zone, then go to AWS > CloudFormation > Create stack and use this new template to set up traffic mirroring.
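If you take the edited-template route and prefer the CLI to the console, here is a sketch of re-creating the stack from the saved file; the stack name, file name, and capabilities flags are assumptions, so match whatever the original Akto-mirroring stack used:

```bash
# Create a new traffic-mirroring stack from the template edited to use an available InstanceType
aws cloudformation create-stack \
  --stack-name akto-mirroring-v2 \
  --template-body file://akto-mirroring-template.yaml \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
```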
I am seeing Kafka-related errors in the daemonset logs
If you get an error like "unable to reach host" or "unable to push data to kafka", do the following steps (a sketch of the commands follows below):

Grab the IP of the akto-runtime instance by running "kubectl get service -n {NAMESPACE}".
Use helm upgrade to update the value of the kafkaAdvertisedListeners key to LISTENER_DOCKER_EXTERNAL_LOCALHOST://localhost:29092,LISTENER_DOCKER_EXTERNAL_DIFFHOST://{IP_FROM_STEP_1}:9092.
Set AKTO_KAFKA_BROKER_MAL to the same IP, as {IP_FROM_STEP_1}:9092, in the daemonset config and reapply the daemonset config.
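A sketch of those steps; the Helm release and chart names (akto, akto/akto) are assumptions, so check your own release with helm list, and note that the comma inside the --set value has to be escaped:

```bash
# 1. Find the IP of the akto-runtime service
kubectl get service -n {NAMESPACE}

# 2. Update kafkaAdvertisedListeners so Kafka advertises that IP (release/chart names assumed)
helm upgrade akto akto/akto -n {NAMESPACE} --reuse-values \
  --set kafkaAdvertisedListeners="LISTENER_DOCKER_EXTERNAL_LOCALHOST://localhost:29092\,LISTENER_DOCKER_EXTERNAL_DIFFHOST://{IP_FROM_STEP_1}:9092"

# 3. Set AKTO_KAFKA_BROKER_MAL to {IP_FROM_STEP_1}:9092 in akto-daemonset-config.yaml, then reapply
kubectl apply -f akto-daemonset-config.yaml -n <NAMESPACE>   # your app namespace
```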
I don't see my error in this list.
Please send us all details at support@akto.io or reach out via Intercom on your Akto dashboard. We will definitely help you out.
There are multiple ways to request support from Akto. We are available 24x7 on the following:
In-app Intercom support. Message us with your query on Intercom in the Akto dashboard and someone will reply.
Join our Discord channel for community support.
Contact help@akto.io for email support.
Contact us here.