Documentation
Build something fast
Everything you need to integrate OwlMQ — from your first publish to production-grade self-healing playbooks.
Get started
Quick Start
Send your first message in under 60 seconds.
SDK Reference
Full API reference for TypeScript, Python, Go, Rust, and 11 more languages.
REST API
Low-level HTTP API for publishing, consuming, and managing queues.
IAMP Protocol
Intent-Aware Message Protocol specification and message envelope schema.
Playbooks
Self-healing playbook DSL reference and dry-run testing guide.
Observability
Prometheus metrics, OpenTelemetry tracing, and ClickHouse analytics setup.
import { OwlMQ } from '@owlmq/sdk';

// Initialize client
const client = new OwlMQ({
  apiKey: process.env.OWLMQ_API_KEY,
  region: 'us-east-1',
});

// Publish a message
await client.publish('orders.created', {
  orderId: 'ord_123',
  userId: 'usr_456',
  total: 99.99,
  currency: 'USD',
}, {
  priority: 'high',
  intent: 'payment.process', // IAMP intent tag
});

// Consume messages
const consumer = client.consume('orders.created', {
  group: 'payment-service',
  concurrency: 5,
});

consumer.on('message', async (msg) => {
  await processPayment(msg.data);
  await msg.ack();
});

await consumer.start();
console.log('Consumer running at 10M+ msg/sec');

REST API Reference
v2.0 · Base URL: https://api.owlmq.dev

/v1/messages/publish - Publish a message to a topic
/v1/messages/consume - Long-poll for messages from a consumer group
/v1/messages/{id}/ack - Acknowledge successful message processing
/v1/messages/{id}/nack - Negative-acknowledge and requeue with delay
/v1/queues - List all queues in tenant
/v1/queues - Create a new queue with IAMP config
/v1/queues/{name} - Delete a queue and all pending messages
/v1/metrics - Prometheus metrics endpoint (tenant-scoped)
/v1/trace/{correlationId} - Fetch causal message graph for a correlation ID
/v1/playbooks - Create a self-healing playbook
/v1/playbooks/{id}/runs - Get playbook execution history
/v1/intelligence/routing - Get AI routing decisions (last 1000)

IAMP Protocol
The Intent-Aware Message Protocol (IAMP) is OwlMQ's core innovation. Every message carries semantic intent, enabling AI-powered routing, priority inference, and self-healing without config changes.
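The envelope schema below can also be sketched as a TypeScript type. The field names and example values come from the schema; the `IampEnvelope` type itself is illustrative only and not an official SDK export:

```typescript
// Illustrative typing of the IAMP envelope (not an official SDK export).
type Priority = 'critical' | 'high' | 'normal' | 'low';

interface IampEnvelope<T = unknown> {
  messageId: string;      // Unique ID (ULID)
  correlationId: string;  // Causal chain ID
  intent: string;         // Semantic intent tag
  topic: string;          // Routing topic
  payload: T;             // Your message data
  metadata: {
    priority: Priority;
    ttl: number;          // Seconds to live
    retryPolicy: { maxAttempts: number; backoff: string };
    sla?: { maxLatencyMs: number; alertThreshold: number };
  };
  publishedAt: string;    // ISO 8601 timestamp
  publishedBy: string;    // "service/version"
}

// Example envelope matching the schema
const envelope: IampEnvelope<{ orderId: string }> = {
  messageId: 'msg_01HX...', // placeholder ID
  correlationId: 'corr_ord_123',
  intent: 'payment.process',
  topic: 'orders.created',
  payload: { orderId: 'ord_123' },
  metadata: {
    priority: 'high',
    ttl: 3600,
    retryPolicy: { maxAttempts: 3, backoff: 'exponential' },
    sla: { maxLatencyMs: 200, alertThreshold: 0.95 },
  },
  publishedAt: '2025-01-15T10:30:00Z',
  publishedBy: 'payment-service/v2',
};
```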
{
  "messageId": "msg_01HX...",        // Unique ID (ULID)
  "correlationId": "corr_ord_123",   // Causal chain ID
  "intent": "payment.process",       // Semantic intent tag
  "topic": "orders.created",         // Routing topic
  "payload": { ... },                // Your message data
  "metadata": {
    "priority": "high",              // critical | high | normal | low
    "ttl": 3600,                     // Seconds to live
    "retryPolicy": {
      "maxAttempts": 3,
      "backoff": "exponential"
    },
    "sla": {
      "maxLatencyMs": 200,           // SLA enforcement
      "alertThreshold": 0.95         // 95th percentile target
    }
  },
  "publishedAt": "2025-01-15T10:30:00Z",
  "publishedBy": "payment-service/v2"
}

Self-Healing Playbooks
Pro+
Playbooks are declarative YAML files that define automated responses to queue anomalies. When a condition is met, OwlMQ executes the playbook automatically.
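As a rough sketch of how trigger conditions like those in the playbook below are evaluated (the types and function here are illustrative, not OwlMQ internals):

```typescript
// Illustrative trigger evaluation, mirroring the playbook's trigger types.
interface QueueStats {
  lag: number;        // messages behind, over the trigger window
  errorRate: number;  // fraction 0..1, over the trigger window
}

type Trigger =
  | { type: 'lag_exceeded'; threshold: number }
  | { type: 'error_rate'; threshold: number };

// Returns the subset of triggers that fired for the current window's stats.
function firedTriggers(stats: QueueStats, triggers: Trigger[]): Trigger[] {
  return triggers.filter((t) =>
    t.type === 'lag_exceeded' ? stats.lag > t.threshold : stats.errorRate > t.threshold
  );
}

// Example: a lag of 12,000 exceeds the 10,000-message threshold,
// while a 2% error rate stays under the 5% threshold.
const fired = firedTriggers(
  { lag: 12000, errorRate: 0.02 },
  [
    { type: 'lag_exceeded', threshold: 10000 },
    { type: 'error_rate', threshold: 0.05 },
  ]
);
console.log(fired.map((t) => t.type)); // → ['lag_exceeded']
```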
name: payment-queue-healing
version: "1.0"
target: orders.payment

triggers:
  - type: lag_exceeded
    threshold: 10000    # messages
    window: 60s
  - type: error_rate
    threshold: 0.05     # 5% error rate
    window: 300s

actions:
  - step: scale_consumers
    scale_by: 3x
    max_instances: 50
  - step: reroute_traffic
    condition: error_rate > 0.1
    target_queue: orders.payment.fallback
    percentage: 50
  - step: alert
    channels: [slack, pagerduty]
    severity: high
    message: "Payment queue healing triggered: {{ trigger.type }}"

rollback:
  on_failure: restore_previous_state
  timeout: 300s

15+ official SDKs
Observability
Pro+
Prometheus Metrics
Point your scrape config at `/v1/metrics`. Includes lag, throughput, latency histograms, and error rates per queue.
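For reference, a minimal Prometheus scrape job for this endpoint might look like the following; the job name, target host, and bearer-token auth are assumptions, not documented values:

```yaml
scrape_configs:
  - job_name: owlmq              # assumed job name
    metrics_path: /v1/metrics
    scheme: https
    static_configs:
      - targets: ['api.owlmq.dev']
    authorization:
      credentials: <your-api-key>  # assumed auth scheme
```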
owlmq_queue_lag{queue="orders.payment"} 142

OpenTelemetry Tracing
Distributed traces across publish → route → consume. Compatible with Jaeger, Tempo, and Honeycomb.
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io

ClickHouse Analytics
Stream all events to ClickHouse for real-time analytics dashboards. Query billions of events in milliseconds.
SELECT topic, count() FROM owlmq.events GROUP BY topic

Need help? We're here.
OwlMQ is an internal HyperBridge platform. Reach out to the OwlMQ team directly; we respond within hours.