Helm Deploy
You can install Akto via Helm charts. Read the announcement blog here.
Resources
Akto's Helm chart repo is on GitHub here. You can also find Akto on Helm.sh here.
Prerequisites
Please ensure you have the following -
A Kubernetes cluster where you have deploy permissions
The helm command installed. Check here
Steps
Here are the steps to install Akto via Helm charts -
Prepare Mongo Connection string - You can create a fresh Mongo instance, or reuse an existing one if you have previously installed Akto in your cloud.
Install Akto via Helm
Verify Installation and harden security
Prepare Mongo Connection string
Akto's Helm setup needs a Mongo connection string as input. It can come from any of the following -
Your own Mongo. Ensure the machine where you set up Mongo is NOT exposed to the public internet; it shouldn't have a public IP. You can set up a Mongo cluster as follows. Create the following file: mongo-cluster-setup.yaml
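The official template isn't reproduced here; below is a minimal sketch of what mongo-cluster-setup.yaml could look like — a headless Service plus a 3-replica StatefulSet, which matches the pod names (mongo-0/1/2) and the connection string used below. All resource names, image tags, and sizes are assumptions, not Akto's prescribed values.

```yaml
# mongo-cluster-setup.yaml — a minimal sketch, not Akto's official template.
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  clusterIP: None        # headless: gives each pod a stable DNS name
  selector:
    app: mongo
  ports:
    - port: 27017
      targetPort: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:5
          # Run every member as part of a replica set named "rs0".
          command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```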
Now execute the following command: kubectl apply -f mongo-cluster-setup.yaml -n {namespace}
Wait a couple of minutes until the three Mongo pods (mongo-0, mongo-1, and mongo-2) are in the Running state. Once they are running, execute the following commands to initialize the cluster:
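The exact initialization commands aren't reproduced here; the sketch below assumes the StatefulSet above with a replica set named rs0, and uses the default namespace to match the connection string in the next step — adjust both if yours differ.

```sh
# Initiate a 3-member replica set named "rs0" from inside mongo-0.
# Hostnames follow the headless-service DNS pattern
# <pod>.<service>.<namespace>.svc.cluster.local.
kubectl exec -it mongo-0 -n default -- mongosh --eval '
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-0.mongo.default.svc.cluster.local:27017" },
    { _id: 1, host: "mongo-1.mongo.default.svc.cluster.local:27017" },
    { _id: 2, host: "mongo-2.mongo.default.svc.cluster.local:27017" }
  ]
})'
```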
The connection string would then be mongodb://mongo-0.mongo.default.svc.cluster.local:27017,mongo-1.mongo.default.svc.cluster.local:27017,mongo-2.mongo.default.svc.cluster.local:27017/admin
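To sanity-check the string before handing it to Helm, you can connect from a throwaway client pod — a sketch, not part of Akto's docs:

```sh
# Spin up a temporary mongo client pod and verify the replica set responds.
kubectl run -it --rm mongo-client --image=mongo:5 --restart=Never -- \
  mongosh "mongodb://mongo-0.mongo.default.svc.cluster.local:27017,mongo-1.mongo.default.svc.cluster.local:27017,mongo-2.mongo.default.svc.cluster.local:27017/admin" \
  --eval 'rs.status().ok'
```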
Mongo Atlas. You can use a Mongo Atlas connection as well:
Go to the Database Deployments page for your project.
Click on the Connect button.
Choose the Connect your application option.
Copy the connection string. It should look like mongodb://....
AWS DocumentDB. If you are on AWS, you can use Amazon DocumentDB too. You can find the connection string on the Cluster page itself.
Existing Akto setup. If you have previously installed Akto via the CloudFormation template and want to move to Helm, please execute the following steps. This guide should be used only if you are NOT using AWS Traffic Mirroring. If you are indeed using AWS Traffic Mirroring, please contact us at support@akto.io.
Go to AWS > EC2 > Auto Scaling Groups and search for Akto. Edit all Auto Scaling groups and set min/max/desired to 0.
This shuts down all existing Akto infra and just leaves Akto-Mongo running.
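You can do the same from the AWS CLI — a sketch; the group name is a placeholder for each Akto Auto Scaling group you found in the console:

```sh
# Scale an Akto Auto Scaling group down to zero (repeat per group).
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name {akto-asg-name} \
  --min-size 0 --max-size 0 --desired-capacity 0
```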
[Optional - if you want to delete the CloudFormation stacks once migration completes] - We have to "clone" this Akto Mongo instance. You can create an AMI and launch a new instance from it. Alternatively, you can also -
Go to AWS > EC2 > Instances > Search for "Akto Mongo instance". Launch a new instance using this template.
SSH into the new Mongo instance, run sudo su -, and then docker stop mongo.
Run rm -rf /akto/infra/data/ on the new Mongo instance.
Copy /akto/infra/data/ from the old Mongo instance to the new one, at the same directory location (/akto/infra/data/), using SCP (a sketch follows below).
Run docker start mongo.
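A hedged sketch of the SCP step; the key path and IP are placeholders, and the remote user may need write access to /akto/infra/ (or copy to a temporary directory and move it as root):

```sh
# Run from the old Mongo instance; recreates /akto/infra/data/ on the new one.
scp -i ~/.ssh/{keypair}.pem -r /akto/infra/data/ \
    ubuntu@{new-mongo-private-ip}:/akto/infra/
```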
If you have installed Akto's K8s agent in your K8s cluster in the previous CloudFormation setup, please run kubectl delete -f akto-daemonset-config.yml to halt traffic processing too. Use the private IP of this Mongo instance while installing the Helm chart (refer to the Install Akto via Helm section).
Once you set up Akto via the Helm chart, try logging in with your previous credentials and check the data. All your data should be retained.
Change the AKTO_NLB value to the output of kubectl get services/flash-akto-runtime -n staging -o jsonpath="{.spec.clusterIP}"
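If you want to script this edit, here is one way — a sketch that assumes AKTO_NLB appears as a plain AKTO_NLB: <value> entry in akto-daemonset-config.yml; adjust the sed pattern to your file's actual format:

```sh
# Capture the runtime service's cluster IP and patch the daemonset config.
AKTO_NLB_IP=$(kubectl get services/flash-akto-runtime -n staging -o jsonpath="{.spec.clusterIP}")
sed -i "s|AKTO_NLB:.*|AKTO_NLB: ${AKTO_NLB_IP}|" akto-daemonset-config.yml
```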
Run kubectl apply -f akto-daemonset-config.yml
Confirm Akto dashboard has started receiving new data.
Please Do Not Delete the AWS CloudFormation Stacks. Doing so will delete the Mongo instance too and you'll lose the data. If you want to delete the AWS CloudFormation stacks, first set up a duplicate Mongo instance as described in step (4), and use the private IP of that new instance in step (6).
Mongo on K8s with a persistent volume. You can set up Mongo on the K8s cluster itself with a persistent volume. A sample template is provided here. Use the IP of this service as the Mongo private IP in the Install Akto via Helm section. If you are migrating from a previous Akto installation, you have to bootstrap the persistent volume with the original Mongo instance's data before you start the Mongo service.
Mongo cluster setup via CloudFormation template. Use the following CloudFormation template link.
This cfn template requires 2 inputs:
PrivateSubnetId: Select the private subnet in which you want the cluster to be created. Make sure this subnet has a route to a NAT gateway.
KeyPair: This key pair will be used to SSH into the instances.
The default instance type in the template is m6a.large; you can change it in the template as per your requirements. We recommend not using t3/t4 instance types for running a cluster. Once this template executes successfully, you will see 3 EC2 instances created; you can find the connection URL in the stack's Outputs section. Note: please ensure your K8s cluster has connectivity to Mongo.
Install Akto via Helm
Add Akto repo
helm repo add akto https://akto-api-security.github.io/helm-charts
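After adding the repo, refresh your local chart index so Helm picks up the latest chart version:

```sh
helm repo update
```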
Install Akto via helm
helm install akto akto/akto -n dev --set mongo.aktoMongoConn="<AKTO_CONNECTION_STRING>"
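Note that helm install does not create the target namespace on its own. If the dev namespace doesn't exist yet, create it first, or let Helm do it with --create-namespace:

```sh
kubectl create namespace dev
# ...or, equivalently, let Helm create it during install:
helm install akto akto/akto -n dev --create-namespace \
  --set mongo.aktoMongoConn="<AKTO_CONNECTION_STRING>"
```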
Run kubectl get pods -n <NAMESPACE> and verify you can see 4 pods.
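If you'd rather block until everything is up, kubectl wait can do that — assuming the dev namespace from the install command above:

```sh
# Wait up to 5 minutes for all pods in the namespace to become Ready.
kubectl wait --for=condition=Ready pods --all -n dev --timeout=300s
```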
Verify Installation and harden security
Run the following to get the Akto dashboard URL
kubectl get services/akto-dashboard -n dev | awk -F " " '{print $4;}'
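If you prefer jsonpath to awk, this is equivalent, assuming the dashboard Service is of type LoadBalancer (on AWS the hostname lives under status.loadBalancer.ingress):

```sh
# Print the load balancer hostname of the dashboard service.
kubectl get services/akto-dashboard -n dev \
  -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"
```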
Open the Akto dashboard on port 8080, e.g. http://a54b36c1f4asdaasdfbd06a259de2-acf687643f6fe4eb.elb.ap-south-1.amazonaws.com:8080/
As good security measures, enable HTTPS by adding a certificate and put the dashboard behind a VPN. If you are on AWS, follow the guide here.
If Akto Cluster is Deployed in a Separate Kubernetes Cluster
If you encounter the error Can't connect to Kafka in your daemonset, and you have exposed the Akto runtime service via a route that doesn't resemble *.svc.cluster.local, you'll need to update the KAFKA_ADVERTISED_LISTENERS environment variable in the akto-runtime deployment. Follow these steps:
Change the KAFKA_ADVERTISED_LISTENERS environment variable to match your route using the following command:
kubectl set env deployment/{deployment-name} KAFKA_ADVERTISED_LISTENERS="LISTENER_DOCKER_EXTERNAL_LOCALHOST://localhost:29092, LISTENER_DOCKER_EXTERNAL_DIFFHOST://{Service_Endpoint}:9092" -n {namespace}
Verify the change with this command:
kubectl get deployment {deployment-name} -o jsonpath="{.spec.template.spec.containers[?(@.name=='kafka1')].env[?(@.name=='KAFKA_ADVERTISED_LISTENERS')].value}" -n {namespace}
Replace {deployment-name}, {Service_Endpoint}, and {namespace} with your actual deployment name, service DNS, and namespace respectively.
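For illustration, here is the same command with hypothetical values filled in — deployment akto-runtime, namespace akto, and an example external endpoint; none of these are prescribed values:

```sh
# Point the external listener at the route through which the daemonset reaches Kafka.
kubectl set env deployment/akto-runtime \
  KAFKA_ADVERTISED_LISTENERS="LISTENER_DOCKER_EXTERNAL_LOCALHOST://localhost:29092, LISTENER_DOCKER_EXTERNAL_DIFFHOST://akto-runtime.example.internal:9092" \
  -n akto
```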