
Developer Documentation

Backport is an open-source API gateway that adds WAF, rate limiting, caching & idempotency to any backend. Sign up, set your target URL, and start proxying — no code changes needed.

Getting Started

Quickstart Guide

Sign up, set your backend URL in the dashboard, and start sending requests through the gateway. Takes under 2 minutes.

  1. Create a free account
  2. Verify your email
  3. Go to Dashboard → Settings → Set your target backend URL
  4. Copy your API key from Dashboard → API Keys
  5. Start sending requests through the proxy

How It Works

Request Flow

Every request passes through the gateway pipeline before reaching your backend.

  1. Auth: validate the API key
  2. WAF: scan for threats
  3. Rate Limit: enforce quotas
  4. Cache: serve from memory
  5. Proxy: forward to the backend

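The pipeline above can be sketched as a chain of stages, each of which either passes the request along or short-circuits with an error response. This is an illustrative sketch only (stage logic and request shape are assumptions, not Backport's internals):

```python
# Illustrative gateway pipeline: run each stage in order; any stage may
# short-circuit the request before it reaches the backend.

def auth(req):
    # Reject requests without a plausible API key (hypothetical check)
    return None if req.get("api_key", "").startswith("bk_") else (401, "Unauthorized")

def waf(req):
    # Toy threat scan; the real WAF uses many regex patterns
    return (403, "Forbidden") if "DROP TABLE" in req.get("body", "") else None

def rate_limit(req):
    return None  # always allow in this sketch

def handle(req, backend):
    for stage in (auth, waf, rate_limit):
        blocked = stage(req)
        if blocked:
            return blocked  # short-circuit: backend is never hit
    return backend(req)  # proxy: forward to the backend

# Usage
ok = handle({"api_key": "bk_abc", "body": "{}"}, lambda r: (200, "OK"))
bad = handle({"api_key": "wrong", "body": "{}"}, lambda r: (200, "OK"))
```

The key property is that a blocked request never reaches the backend stage.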
Overview

Gateway Features

  • WAF Rules: 17 regex patterns
  • Core Modules: 4 (WAF, Rate, Cache, Idem)
  • Attack Types: 6 (SQLi, XSS, Path, CMD, LDAP, XXE)
  • Cache TTL: 5 min (default for GET requests)

Authentication & API Keys

Every request through the Backport gateway must include a valid API key in the X-API-Key header. Keys are automatically created when you sign up and can be managed from the Dashboard.

How to get your API key

  1. Sign up at backport.io/auth/signup
  2. Verify your email (a 6-digit OTP is sent to your inbox)
  3. After verification, your API key is returned automatically. You can also find it in Dashboard → API Keys.

Using your API key

# GET request through the gateway
curl -X GET https://backport.io/proxy/users \
  -H "X-API-Key: bk_your_key_here"

# POST request with idempotency (e.g. payments)
curl -X POST https://backport.io/proxy/checkout \
  -H "X-API-Key: bk_your_key_here" \
  -H "Idempotency-Key: txn_unique_12345" \
  -H "Content-Type: application/json" \
  -d '{"amount": 5000}'
Response Example
{
  "id": "usr_a1b2c3d4",
  "email": "you@example.com",
  "plan": "free",
  "is_verified": true,
  "api_key": "bk_live_xxxxxxxxxxxx",
  "created_at": "2026-04-20T10:30:00Z"
}

Note: The /proxy/ prefix routes traffic through the gateway. The path after /proxy/ is forwarded to your configured target backend URL.

API Key limits by plan

Plan   Max API Keys
Free   1
Plus   3
Pro    10

Proxy Endpoint

All traffic flows through a single proxy endpoint. The gateway authenticates your request, applies WAF rules, checks rate limits, serves from cache if available, and forwards to your configured backend.

Base URL
https://backport.io/proxy/{path}
Supports all HTTP methods: GET, POST, PUT, PATCH, DELETE, OPTIONS, HEAD

Request Headers

Header            Required   Description
X-API-Key         Yes        Your Backport API key (starts with bk_)
Content-Type      No         Standard content type for POST/PUT requests
Idempotency-Key   No         Unique key to prevent duplicate POST/PUT/PATCH requests
X-Target-Url      No         Override the target backend URL (useful for playground/SDK)
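As a small illustration, a client-side helper can assemble this header set before sending a request. This is a hypothetical convenience function, not part of any Backport SDK:

```python
def proxy_headers(api_key, idempotency_key=None, content_type=None):
    """Build the headers for a request through the gateway.
    Only X-API-Key is required; the others are optional (see table above)."""
    headers = {"X-API-Key": api_key}
    if content_type is not None:
        headers["Content-Type"] = content_type
    if idempotency_key is not None:
        headers["Idempotency-Key"] = idempotency_key
    return headers

# Usage
h = proxy_headers("bk_your_key_here", idempotency_key="txn_unique_12345")
```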

Example: Proxy a Request

# Your backend: https://api.yourservice.com/users
# Through Backport: https://backport.io/proxy/users

curl -X GET https://backport.io/proxy/users \
  -H "X-API-Key: bk_your_key_here"

# Response (with gateway headers)
HTTP/1.1 200 OK
X-Backport-Cache: MISS
X-Backport-Latency: 142ms
X-Backport-Idempotent: -
Content-Type: application/json

[
  {"id": 1, "name": "Alice", "role": "admin"},
  {"id": 2, "name": "Bob", "role": "user"}
]

Response Codes

Code                      Meaning
200 OK                    Request passed through the gateway successfully
304 Not Modified          Response served from the LRU cache (if caching is enabled)
401 Unauthorized          Invalid or missing API key
403 Forbidden             WAF blocked a malicious payload
413 Payload Too Large     Request body exceeds the maximum size (10 MB)
429 Too Many Requests     Rate limit exceeded for your plan
502 Bad Gateway           Backend returned an invalid response
503 Service Unavailable   Circuit breaker open: backend is unreachable
504 Gateway Timeout       Backend did not respond within 30 seconds

Response Headers

Backport adds the following headers to every proxied response, so you can monitor gateway behavior in your application.

Header                  Description
X-Backport-Cache        HIT or MISS: whether the response was served from cache
X-Backport-Idempotent   REPLAY if the idempotency key was already processed
X-Backport-Latency      Total gateway processing time in milliseconds

WAF Security

Backport includes a Web Application Firewall (WAF) with 17 pre-compiled regex patterns that inspect every request at the gateway level before it reaches your backend. The WAF can be toggled on or off from your dashboard settings; it is OFF by default, so enable it when you're ready.

SQL Injection

5 patterns — UNION SELECT, DROP TABLE, OR 1=1, xp_cmdshell, sp_executesql

XSS Attacks

4 patterns — <script> tags, onerror handlers, javascript: URIs, <iframe>/<embed>

Path Traversal

2 patterns — ../ directory escapes, /etc/passwd, /proc/self access

Command Injection

3 patterns — shell metacharacters, subshell execution, backtick injection

LDAP Injection

1 pattern — detects LDAP filter manipulation syntax

XML/XXE

1 pattern — blocks <!DOCTYPE SYSTEM and <!ENTITY declarations

Important: The WAF uses regex-based pattern matching. While it covers common attack vectors, always validate and sanitize inputs at your application layer as defense-in-depth. WAF is a first line of defense, not a replacement for secure coding practices.
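To illustrate how regex-based screening works, here is a minimal sketch. The patterns below are simplified examples for demonstration only, not Backport's actual rule set:

```python
import re

# Simplified, illustrative patterns -- NOT Backport's actual 17 rules.
WAF_PATTERNS = [
    ("sqli", re.compile(r"union\s+select|drop\s+table|or\s+1=1", re.I)),
    ("xss", re.compile(r"<script|onerror\s*=|javascript:", re.I)),
    ("path_traversal", re.compile(r"\.\./|/etc/passwd")),
]

def scan(payload):
    """Return the first matching attack category, or None if clean."""
    for name, pattern in WAF_PATTERNS:
        if pattern.search(payload):
            return name  # gateway would respond 403 Forbidden
    return None
```

A real rule set needs far more care (encoding tricks, obfuscation, false positives), which is exactly why the WAF should complement, not replace, input validation in your application.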

Rate Limiting

Backport applies a sliding-window rate limiter to protect your backend from traffic spikes and abuse. Rate limiting is enabled by default. When the limit is exceeded, requests get an HTTP 429 response and your backend is never hit.

Rate limits by plan

Plan   Requests/min   Window
Free   100            60s sliding
Plus   500            60s sliding
Pro    5,000          60s sliding

How it works: Rate limits are tracked in-memory using a sliding window algorithm. Each request timestamp is stored per user. Timestamps older than 60 seconds are pruned. If the count exceeds your plan limit, the request is immediately rejected with HTTP 429.
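The algorithm described above can be sketched as follows. This is an illustrative reimplementation of the sliding-window idea, not Backport's actual code:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Per-user sliding window: store request timestamps, prune anything
    older than the window, and reject once the count reaches the limit."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = {}  # user -> deque of timestamps

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(user, deque())
        while q and now - q[0] >= self.window:
            q.popleft()  # prune timestamps outside the window
        if len(q) >= self.limit:
            return False  # caller should respond with HTTP 429
        q.append(now)
        return True
```

The deque keeps pruning cheap: timestamps are appended in order, so only the oldest entries ever need to be removed.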

LRU Caching

Heavy GET endpoints like analytics, reports, or lists often hit your database on every request. If you enable caching in your dashboard settings, Backport intercepts GET responses with status 200 and stores them in an in-memory LRU cache. By default, caching is OFF. Enable it from settings.

  • Only GET requests with 200 status are cached
  • Default TTL: 5 minutes — expired entries are evicted automatically
  • Maximum 1,000 cached entries — oldest entries are evicted when the limit is reached
  • Cached responses return with X-Backport-Cache: HIT header
  • Subsequent cache hits are served in under 2ms without touching your backend

Note: Cache is stored in-memory and resets on server restart. This is suitable for reducing repeated database queries for frequently accessed read endpoints.
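The behavior in the bullets above (bounded size, TTL expiry, LRU eviction) can be sketched with an ordered dict. This is an illustrative model, not Backport's implementation:

```python
import time
from collections import OrderedDict

class TTLLRUCache:
    """Bounded LRU cache with per-entry TTL, as described above."""

    def __init__(self, max_entries=1000, ttl=300.0):  # 5-minute default TTL
        self.max_entries = max_entries
        self.ttl = ttl
        self.store = OrderedDict()  # key -> (expires_at, value)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self.store.get(key)
        if entry is None:
            return None  # MISS
        expires_at, value = entry
        if now >= expires_at:
            del self.store[key]  # expired: evict lazily on access
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return value  # HIT

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self.store[key] = (now + self.ttl, value)
        self.store.move_to_end(key)
        if len(self.store) > self.max_entries:
            self.store.popitem(last=False)  # evict least recently used
```

`OrderedDict.move_to_end` is what makes this "least recently used" rather than "oldest inserted": any hit refreshes the entry's position.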

Idempotency Keys

Duplicate POST requests are a common problem — especially for payments, orders, and form submissions. When a user loses connection and retries, your backend might process the same action twice. Backport solves this by storing the first response and replaying it for duplicate keys. Idempotency is enabled by default.

# First request — processed normally and cached
curl -X POST https://backport.io/proxy/checkout \
  -H "X-API-Key: bk_your_key" \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: txn_88910" \
  -d '{"amount": 5000, "currency": "INR"}'

# Retry with same Idempotency-Key — returns original response
# without hitting your backend
curl -X POST https://backport.io/proxy/checkout \
  -H "X-API-Key: bk_your_key" \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: txn_88910" \
  -d '{"amount": 5000, "currency": "INR"}'

  • Works with POST, PUT, and PATCH methods
  • Triggered by the Idempotency-Key request header
  • Maximum 5,000 stored idempotency results per server
  • Duplicate requests return with X-Backport-Idempotent: REPLAY header
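The store-and-replay behavior above can be modeled in a few lines. This is an illustrative sketch (names and the handler interface are hypothetical):

```python
class IdempotencyStore:
    """First response for a key is stored; later requests with the same
    key get it back unchanged, without re-running the handler."""

    def __init__(self, max_entries=5000):
        self.max_entries = max_entries
        self.results = {}  # idempotency key -> stored response

    def execute(self, key, handler):
        if key in self.results:
            return self.results[key], True  # replayed (X-Backport-Idempotent: REPLAY)
        response = handler()  # first time: actually hit the backend
        if len(self.results) < self.max_entries:
            self.results[key] = response
        return response, False

# Usage: the second call with the same key never invokes the handler again
store = IdempotencyStore()
charge = lambda: {"status": "charged", "amount": 5000}
first, replayed = store.execute("txn_88910", charge)
again, replayed2 = store.execute("txn_88910", charge)
```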

Dashboard API

The dashboard uses JWT-based authentication. After login, a token is returned that you can use to access your account data programmatically. All dashboard endpoints require an Authorization: Bearer <token> header.

Common Dashboard Endpoints
GET      /api/user/me
GET      /api/user/keys
POST     /api/user/keys
DELETE   /api/user/keys/{key_id}
GET      /api/user/settings
PUT      /api/user/settings
GET      /api/user/logs
GET      /api/user/traffic
GET      /api/user/analytics/stats
GET      /api/billing/plan

# Login to get JWT token
curl -X POST https://backport.io/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "you@example.com", "password": "your_password"}'

# Use the token to access dashboard API
curl -X GET https://backport.io/api/user/me \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIs..."

Response Transformation

Response transformation allows you to modify API responses at the gateway layer before they reach the client. This is useful for stripping sensitive fields, reshaping payloads to match a frontend contract, or adding computed metadata — all without changing your backend code. You can configure transformation rules from the Dashboard under Settings, and they apply globally to all proxied responses for your account.

Supported transformations

Add Fields

Inject new key-value pairs into the response body, such as timestamps, gateway metadata, or computed fields

Remove Fields

Strip sensitive or unnecessary fields like internal IDs, passwords, or debug information from responses

Rename Keys

Map existing keys to new names — useful when your backend uses snake_case but clients expect camelCase

Filter Response Body

Apply include/exclude rules to return only the fields you specify, effectively whitelisting the response schema

Create or update transform rules

# Set transformation rules via the dashboard API
curl -X PUT https://backport.io/api/user/settings \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIs..." \
  -H "Content-Type: application/json" \
  -d '{
    "transformations": {
      "remove_fields": ["internal_id", "debug_trace"],
      "add_fields": { "gateway": "backport", "status": "active" },
      "rename_keys": { "user_name": "name", "user_email": "email" },
      "filter_mode": "whitelist",
      "filter_fields": ["id", "name", "email", "created_at"]
    }
  }'

Note: Transformations are applied in the order: remove → rename → add → filter. This ensures that renamed keys are available when add or filter operations run. Changes take effect immediately across all proxied endpoints for your account.
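The documented ordering (remove → rename → add → filter) can be sketched as a single function over a JSON object. This is an illustration of the ordering semantics, not Backport's transformation engine:

```python
def transform(body, rules):
    """Apply transform rules in the documented order:
    remove -> rename -> add -> filter (whitelist)."""
    out = dict(body)
    for field in rules.get("remove_fields", []):
        out.pop(field, None)  # 1. strip sensitive fields first
    for old, new in rules.get("rename_keys", {}).items():
        if old in out:
            out[new] = out.pop(old)  # 2. rename, so filters see new names
    out.update(rules.get("add_fields", {}))  # 3. inject gateway metadata
    if rules.get("filter_mode") == "whitelist":
        allowed = set(rules.get("filter_fields", []))
        out = {k: v for k, v in out.items() if k in allowed}  # 4. whitelist
    return out

# Usage: rename happens before the whitelist, so "name" survives filtering
rules = {
    "remove_fields": ["internal_id"],
    "rename_keys": {"user_name": "name"},
    "add_fields": {"gateway": "backport"},
    "filter_mode": "whitelist",
    "filter_fields": ["name", "gateway"],
}
result = transform({"internal_id": 7, "user_name": "Alice", "extra": 1}, rules)
```

Running filter before rename would drop `user_name` entirely, which is why the ordering matters.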

API Mocking

API Mocking lets you define fake endpoint responses in the Backport dashboard. When your backend is unreachable, down, or still under development, the gateway automatically serves the mocked response instead of returning a 502 error. This is invaluable for frontend development, integration testing, and creating demo environments without needing a live backend. Mock endpoints match by path and HTTP method, and you can set custom status codes, headers, and response bodies.

Creating a mock endpoint

# Create a mock for GET /api/users
curl -X POST https://backport.io/api/user/mocks \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIs..." \
  -H "Content-Type: application/json" \
  -d '{
    "method": "GET",
    "path": "/api/users",
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "body": [
      { "id": 1, "name": "Alice", "role": "admin" },
      { "id": 2, "name": "Bob", "role": "viewer" }
    ]
  }'

How mock responses work

  • Mock endpoints are matched by HTTP method and path — exact match only
  • When your backend is healthy, real responses are served and mocks are ignored
  • When your backend returns an error or times out, the gateway falls back to the mock
  • Each mock can define a custom status code, response headers, and JSON body
  • Mocks can be enabled or disabled individually from the dashboard

Example: If your backend at api.example.com is down and you have a mock for GET /api/users, a request to /proxy/api/users will return the mocked JSON with HTTP 200 — your frontend never sees a 502.
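The fallback logic described above can be sketched like this. Exact matching on (method, path) and the "healthy backend wins" rule are taken from the bullets; everything else here is an illustrative assumption:

```python
def proxy_with_mock(method, path, call_backend, mocks):
    """Try the backend first; on a 5xx or timeout, serve an exact-match
    mock (method + path) if one is configured and enabled."""
    try:
        status, body = call_backend(method, path)
        if status < 500:
            return status, body  # backend healthy: mocks are ignored
    except TimeoutError:
        pass  # treat a timeout like a backend failure
    mock = mocks.get((method, path))
    if mock is not None and mock.get("enabled", True):
        return mock["status"], mock["body"]  # serve the configured mock
    return 502, {"error": "bad gateway"}  # no mock: surface the failure

# Usage: backend is down, but a mock for GET /api/users is configured
def backend_down(method, path):
    raise TimeoutError("backend unreachable")

mocks = {("GET", "/api/users"): {"status": 200, "body": [{"id": 1, "name": "Alice"}]}}
status, body = proxy_with_mock("GET", "/api/users", backend_down, mocks)
```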

Webhook Notifications

Webhooks let you receive real-time HTTP callbacks when important events occur on your gateway. Instead of polling the dashboard for logs or alerts, configure a webhook URL and Backport will POST a JSON payload to your endpoint automatically. This is essential for integrating with Slack, PagerDuty, custom monitoring dashboards, or any system that accepts incoming HTTP requests. You can set up webhook URLs from the Dashboard under Settings.

Supported events

Event            Description
waf_block        A request was blocked by the WAF rule engine
rate_limit_hit   A client exceeded their rate limit and received HTTP 429
backend_error    The target backend returned a 5xx error or timed out
slow_endpoint    A proxied request took longer than 5 seconds to complete

Example webhook payload

// POST to your webhook URL
{
  "event": "waf_block",
  "timestamp": "2026-04-15T10:32:00Z",
  "gateway": "backport",
  "data": {
    "ip": "203.0.113.42",
    "method": "POST",
    "path": "/proxy/login",
    "blocked_reason": "SQL injection pattern detected",
    "request_headers": {
      "user-agent": "Mozilla/5.0 ...",
      "x-api-key": "bk_****redacted"
    }
  }
}
  • Webhook payloads are sent as JSON with Content-Type: application/json
  • Failed deliveries are retried up to 3 times with exponential backoff
  • You can configure multiple webhook URLs for different event types
  • All payloads include a timestamp and the gateway event type at the top level
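The retry behavior above (up to 3 retries with exponential backoff) can be sketched as follows. The delivery function and delay values are illustrative assumptions:

```python
import time

def deliver_webhook(post, payload, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Attempt delivery, retrying up to max_retries times with exponential
    backoff. `post` is any callable that sends the payload and returns an
    HTTP status code."""
    for attempt in range(max_retries + 1):
        if post(payload) < 400:
            return True  # delivered
        if attempt < max_retries:
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False  # gave up after the final retry

# Usage: a fake endpoint that fails twice, then succeeds
attempts = []
def flaky(payload):
    attempts.append(payload)
    return 500 if len(attempts) < 3 else 200

ok = deliver_webhook(flaky, {"event": "waf_block"}, sleep=lambda s: None)
```

Injecting `sleep` keeps the sketch testable; a real sender would also add jitter to avoid synchronized retry storms.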

Self-Hosting with Docker

If you prefer full control over your infrastructure, Backport can be self-hosted using Docker and Docker Compose. Self-hosting gives you the entire gateway stack running on your own servers, with no vendor lock-in and no usage limits. The self-hosted version includes all features — WAF, rate limiting, caching, idempotency, response transformation, API mocking, and webhooks. Setup takes less than five minutes with the provided docker-compose configuration.

Docker Compose setup

# docker-compose.yml
version: "3.8"

services:
  backport:
    image: ghcr.io/qureshi-1/backport:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/backport
      - JWT_SECRET=your-super-secret-jwt-key
      - NEXT_PUBLIC_API_URL=http://localhost:3000
      - REDIS_URL=redis://cache:6379
      - NODE_ENV=production
    depends_on:
      - db
      - cache
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=backport
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped

  cache:
    image: redis:7-alpine
    volumes:
      - redisdata:/data
    restart: unless-stopped

volumes:
  pgdata:
  redisdata:

Environment variables

Variable              Description
DATABASE_URL          PostgreSQL connection string for persistent storage
JWT_SECRET            Secret key for signing JWT auth tokens; use a strong random string
NEXT_PUBLIC_API_URL   Public URL of your gateway instance (used for CORS and email links)
REDIS_URL             Redis connection string for caching and session storage
NODE_ENV              Set to production for optimized builds, development for debugging

# Start the gateway
docker compose up -d

# View logs
docker compose logs -f backport

# Stop the gateway
docker compose down

License: Backport is released under the MIT License. You are free to use, modify, and distribute it for personal and commercial purposes. See the GitHub repository for the full license text and contributing guidelines.