
Connect Akto with F5


Last updated 4 months ago


F5 is a leading application security and delivery platform that provides advanced traffic management and security features. Integrating F5 with Akto allows automatic discovery and security testing of all APIs flowing through your F5 infrastructure, ensuring comprehensive security coverage across your application delivery network.

Prerequisites

  1. Click on the Quick Start tab in the left nav.

  2. Search for Hybrid SaaS Connector and click Connect.

  3. Copy the token shown under the Runtime Service Command heading. You will use it later when setting up the Akto Traffic Processor.

Setting Up Akto Traffic Collector

  1. Create a new instance.

  2. Log in to the instance and save the following file as docker-compose-traffic-collector.yml

version: '2.1'

services:
  zoo1:
    image: confluentinc/cp-zookeeper:6.2.1
    restart: always
    hostname: zoo1
    user: "0"
    volumes:
      - ./data-zoo-data:/var/lib/zookeeper/data
      - ./data-zoo-logs:/var/lib/zookeeper/log
      - ./data-zoo-secrets:/etc/zookeeper/secrets
    container_name: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zoo1:2888:3888
    labels:
      com.centurylinklabs.watchtower.enable: "false"

  kafka1:
    image: confluentinc/cp-kafka:6.2.1
    restart: always
    hostname: kafka1
    user: "0"
    ports:
      - "9092:9092"
      - "19092:19092"
      - "29092:29092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_EXTERNAL_DIFFHOST://${AKTO_KAFKA_IP}:9092, LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL_LOCALHOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_EXTERNAL_DIFFHOST:PLAINTEXT, LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL_LOCALHOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_CREATE_TOPICS: "akto.api.logs:3:3"
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 60000
      KAFKA_LOG_RETENTION_HOURS: 5
      KAFKA_LOG_SEGMENT_BYTES: 104857600
      KAFKA_LOG_CLEANER_ENABLE: "true"
      KAFKA_CLEANUP_POLICY: "delete"
      KAFKA_LOG_RETENTION_BYTES: 10737418240
    volumes:
      - ./data-kafka-data:/var/lib/kafka/data
      - ./data-kafka-secrets:/etc/kafka/secrets
    depends_on:
      - zoo1
    labels:
      com.centurylinklabs.watchtower.enable: "false"
  akto-api-security-traffic-collector:
    image: ayush12493/udp-packet-reassembler:latest
    env_file: ./docker-akto-collector.env
    restart: always
    mem_limit: 2g
    network_mode: host
    privileged: true
    cap_add:
      - SYS_PTRACE
      - SYS_ADMIN
    volumes:
      - /lib/modules:/lib/modules
      - /sys/kernel:/sys/kernel
      - /:/host
  init-kafka:
    image: confluentinc/cp-kafka:6.2.1
    depends_on:
      - kafka1
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
      # blocks until kafka is reachable
      kafka-topics --bootstrap-server 172.17.0.1:9092 --list

      echo -e 'Creating kafka topics'
      kafka-topics --bootstrap-server 172.17.0.1:9092 --create --if-not-exists --topic akto.api.logs --replication-factor 1 --partitions 2

      echo -e 'Successfully created the following topics:'
      kafka-topics --bootstrap-server 172.17.0.1:9092 --list
      "
  3. Replace ${AKTO_KAFKA_IP} in the above file with your instance's IP.

  4. Save the snippet below as docker-akto-collector.env. Replace <traffic_processor_instance_ip> with your instance's IP.

AKTO_TRAFFIC_BATCH_TIME_SECS=10
AKTO_TRAFFIC_BATCH_SIZE=100
AKTO_KAFKA_BROKER_MAL=<traffic_processor_instance_ip>:9092
AKTO_BYTES_IN_THRESHOLD=10

  5. Run docker-compose -f docker-compose-traffic-collector.yml up -d

  6. Expose UDP port 1053 on this instance.
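Before wiring up F5, you can sanity-check that the Traffic Collector instance is reachable on UDP 1053. The sketch below is illustrative and not part of the official setup; COLLECTOR_IP is a placeholder for your Traffic Collector instance's IP.

```python
import socket

COLLECTOR_IP = "127.0.0.1"   # placeholder: your Traffic Collector instance's IP
COLLECTOR_PORT = 1053        # UDP port exposed in the step above

def send_test_datagram(ip: str, port: int, payload: bytes) -> int:
    """Send a single UDP datagram and return the number of bytes handed to the OS."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(payload, (ip, port))

if __name__ == "__main__":
    sent = send_test_datagram(COLLECTOR_IP, COLLECTOR_PORT, b"akto-udp-reachability-check")
    print(f"sent {sent} bytes")
```

Because UDP is connectionless, a successful sendto only confirms the datagram left this host; confirm receipt by checking the akto-api-security-traffic-collector container logs on the collector instance.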

Traffic Processor Setup

  1. Log in to the Traffic Collector instance.

  2. Save the following file as docker-compose-runtime.yml

version: '2.1'

services:
  akto-api-security-runtime:
    image: public.ecr.aws/aktosecurity/akto-api-security-mini-runtime:latest
    env_file: ./docker-mini-runtime.env
    mem_limit: 8g
    restart: always
  3. Save the following file as docker-mini-runtime.env. Replace <token> with the token value copied in the Prerequisites step, and replace <traffic_processor_instance_ip> with your instance's IP.

AKTO_CONFIG_NAME=staging
AKTO_KAFKA_TOPIC_NAME=akto.api.logs
AKTO_KAFKA_BROKER_URL=<traffic_processor_instance_ip>:9092
AKTO_KAFKA_BROKER_MAL=<traffic_processor_instance_ip>:9092
AKTO_KAFKA_GROUP_ID_CONFIG=asdf
AKTO_KAFKA_MAX_POLL_RECORDS_CONFIG=100
AKTO_ACCOUNT_NAME=Helios
AKTO_TRAFFIC_BATCH_SIZE=100
AKTO_TRAFFIC_BATCH_TIME_SECS=10
USE_HOSTNAME=true
AKTO_INSTANCE_TYPE=RUNTIME
DATABASE_ABSTRACTOR_SERVICE_URL=https://cyborg.akto.io
DATABASE_ABSTRACTOR_SERVICE_TOKEN=<token>
RUNTIME_MODE=hybrid

  4. Run docker-compose -f docker-compose-runtime.yml up -d

F5 Setup

Node Setup

  1. Create a new node in your F5 dashboard. Use the IP of the Traffic Collector instance as the Address.

Pool Setup

  1. Create a new pool in your F5 dashboard and name it trafficpoolnew (the iRule below opens its HSL connection with -pool trafficpoolnew; if you pick a different name, update the iRule to match).

    1. Address - the IP of the Traffic Collector instance

    2. Service Port - 1053

IRULE

  1. Create a new iRule with the following Tcl script:

when RULE_INIT {
    # Define the delimiter first, since every marker below references it.
    set static::delimiter "__"
    set static::hsl_start "${static::delimiter}HSL_START${static::delimiter}"
    set static::request_header_start "${static::delimiter}REQHS${static::delimiter}"
    set static::request_header_end "${static::delimiter}REQHE${static::delimiter}"
    set static::header_name "${static::delimiter}HEAN${static::delimiter}"
    set static::header_value "${static::delimiter}HEAV${static::delimiter}"
    set static::response_header_start "${static::delimiter}RESPHS${static::delimiter}"
    set static::response_header_end "${static::delimiter}RESPHE${static::delimiter}"
    set static::max_collect_len 10000
    set static::request_payload "${static::delimiter}REQPS${static::delimiter}"
    set static::response_payload "${static::delimiter}REQPE${static::delimiter}"
    set static::hsl_end "${static::delimiter}HSL_END${static::delimiter}"
}

when CLIENT_ACCEPTED {
    set hsl [HSL::open -proto UDP -pool trafficpoolnew]
    set sessionId "[IP::client_addr][TCP::client_port][IP::local_addr][TCP::local_port][expr { int(100000000 * rand()) }]"
    binary scan [md5 $sessionId] H* correlationId junk
}

when HTTP_REQUEST {
    set request_time [clock clicks -milliseconds]
    set reqHeaderString "${static::request_header_start}"
    set contentTypeHeaderValue ""
    foreach aHeader [HTTP::header names] {
      if { [string tolower $aHeader] == "content-type" } {
        set contentTypeHeaderValue [HTTP::header value $aHeader]
      }
      set lwcasekey [string map -nocase {"\\"" "\\\\\\""}[string tolower $aHeader]]
      set value [string map -nocase {"\\"" "\\\\\\""} [HTTP::header value $aHeader]]
      set headers "${static::header_name}${lwcasekey}${static::header_value}${value}"
      append reqHeaderString $headers
    }
    append reqHeaderString ${static::request_header_end}
    set uri [HTTP::uri]
    set method [HTTP::method]
    set client_addr [IP::client_addr]
    set local_port [TCP::local_port]
    set host [HTTP::host]
    set request_payload ""

}

when HTTP_REQUEST_DATA {
    if {[HTTP::payload length] > 0 } {
        set capture_length [HTTP::payload length]
        if { $capture_length >  $static::max_collect_len } {
          set capture_length $static::max_collect_len
        }
        set request_payload [b64encode [string range "[HTTP::payload]" 0 $capture_length ]]
    }
}

when HTTP_RESPONSE  {
    set response_time [clock clicks -milliseconds]
    set resHeaderString "${static::response_header_start}"
    foreach aHeader [HTTP::header names] {
      set lwcasekey [string map -nocase {"\\"" "\\\\\\""}[string tolower $aHeader]]
      set value [string map -nocase {"\\"" "\\\\\\""} [HTTP::header value $aHeader]]
      set headers "${static::header_name}${lwcasekey}${static::header_value}${value}"
      append resHeaderString $headers
    }
    append resHeaderString "${static::response_header_end}"

    HTTP::collect $static::max_collect_len
    set forwarded_data 0
    if { [HTTP::header exists "Content-Length"] && [HTTP::header value "Content-Length"] == 0 } {
       set response_payload ""
       HSL::send $hsl "${static::hsl_start}$correlationId $method $uri $client_addr $local_port [HTTP::status] $host [HTTP::version] $request_time $response_time $reqHeaderString $resHeaderString ${static::request_payload}$request_payload ${static::response_payload}$response_payload ${static::hsl_end}\\n"
       set forwarded_data 1
    }
}

when HTTP_RESPONSE_DATA {
    set response_payload ""
    if { [HTTP::payload length] > 0 } {
        set capture_length [HTTP::payload length]
        if { $capture_length >  $static::max_collect_len } {
          set capture_length $static::max_collect_len
        }
        set response_payload [b64encode [string range "[HTTP::payload]" 0 $capture_length]]
    }
    if { $forwarded_data != 1 } {
        HSL::send $hsl "${static::hsl_start}$correlationId $method $uri $client_addr $local_port [HTTP::status] $host [HTTP::version] $request_time $response_time $reqHeaderString $resHeaderString ${static::request_payload}$request_payload ${static::response_payload}$response_payload ${static::hsl_end}\\n"
    }
}
  2. Attach the iRule to your virtual server from the Resources section of the virtual server.
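The iRule serializes each HTTP transaction into a single delimiter-framed UDP record. Akto's traffic collector parses these records internally; purely to illustrate the wire format the iRule emits, here is a minimal Python sketch that splits one record back into its fields. The field layout follows the HSL::send line above; parse_hsl_record is a hypothetical helper, not part of Akto.

```python
import base64

DELIM = "__"

def _between(s: str, start: str, end: str) -> str:
    """Return the substring between the first `start` marker and the next `end` marker."""
    i = s.find(start)
    if i == -1:
        return ""
    i += len(start)
    j = s.find(end, i)
    return s[i:j] if j != -1 else s[i:]

def _parse_headers(block: str) -> dict:
    """Split a __HEAN__name__HEAV__value sequence into a dict."""
    headers = {}
    for chunk in block.split(f"{DELIM}HEAN{DELIM}"):
        if chunk:
            name, _, value = chunk.partition(f"{DELIM}HEAV{DELIM}")
            headers[name] = value
    return headers

def parse_hsl_record(record: str) -> dict:
    """Parse one record produced by the iRule's HSL::send into a dict."""
    body = _between(record, f"{DELIM}HSL_START{DELIM}", f"{DELIM}HSL_END{DELIM}").strip()
    # The first ten space-separated tokens are fixed positional fields.
    (corr, method, uri, client, port, status, host, version,
     req_time, resp_time) = body.split(" ", 10)[:10]
    return {
        "correlation_id": corr,
        "method": method,
        "uri": uri,
        "client_addr": client,
        "local_port": port,
        "status": status,
        "host": host,
        "http_version": version,
        "request_time_ms": req_time,
        "response_time_ms": resp_time,
        "request_headers": _parse_headers(
            _between(body, f"{DELIM}REQHS{DELIM}", f"{DELIM}REQHE{DELIM}")),
        "response_headers": _parse_headers(
            _between(body, f"{DELIM}RESPHS{DELIM}", f"{DELIM}RESPHE{DELIM}")),
        # Payloads are base64-encoded; note that the iRule labels the
        # response payload with REQPE (see RULE_INIT above).
        "request_body": base64.b64decode(
            _between(body, f"{DELIM}REQPS{DELIM}", f"{DELIM}REQPE{DELIM}").strip() or b""),
        "response_body": base64.b64decode(
            body.split(f"{DELIM}REQPE{DELIM}", 1)[1].strip()
            if f"{DELIM}REQPE{DELIM}" in body else b""),
    }
```

This sketch assumes the URI contains no spaces; it is meant only to make the delimiter framing concrete, not to be a production parser.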

Go to app.akto.io and log in or sign up to your account.