Load test and benchmark on AWS.
Places we can add a timestamp:
1. The publisher
2. Outpost receipt of event
3. Delivery attempt
4. Destination receipt of attempt

The interval from timestamp 2 to timestamp 3 is the true latency of the Outpost deployment.
In a given scenario, what latency does Outpost add, measured from 2 to 3?
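As a sketch, the 2→3 latency can be computed per event once both timestamps are recorded. The field names below are assumptions for illustration, not Outpost's actual schema:

```python
# Hypothetical sketch: per-event latency that Outpost adds, measured from
# timestamp 2 (Outpost receipt of event) to timestamp 3 (delivery attempt).
# Timestamps are epoch seconds; names are illustrative only.

def outpost_latency_ms(received_at: float, delivery_attempted_at: float) -> float:
    """Latency from Outpost receipt (2) to delivery attempt (3), in milliseconds."""
    return (delivery_attempted_at - received_at) * 1000.0

# e.g. event received at t=100.000s, delivery attempted at t=100.042s -> ~42 ms
```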
@leggetter to add scenarios; then we can size.
Scenario: 100,000 events / AWS serverless infrastructure: throughput and latency
- Topics: Not currently testing topic-based routing as part of the benchmark.
- Publishers: 10 concurrent
- Destinations/Subscriptions: It's possible that connections could be reused, so we want to test with a number of different destinations. For example, each webhook destination should ideally point to a different endpoint to avoid connection reuse.
- Webhooks: 100
- RabbitMQ: 10
- Message size: Use common Stripe/Shopify JSON event payload
- Message rate: each publisher should publish 100 events/second. With 10 publishers, it should take 100 seconds to publish 100,000 messages
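As a quick sanity check, the load parameters above imply the following arithmetic (all numbers taken from the scenario itself):

```python
# Scenario parameters as stated in the issue.
PUBLISHERS = 10
RATE_PER_PUBLISHER = 100        # events/second per publisher
TOTAL_EVENTS = 100_000

aggregate_rate = PUBLISHERS * RATE_PER_PUBLISHER      # 1,000 events/second overall
publish_duration_s = TOTAL_EVENTS / aggregate_rate    # 100 seconds to publish everything
inter_event_gap_s = 1 / RATE_PER_PUBLISHER            # each publisher sends every 10 ms

print(aggregate_rate, publish_duration_s, inter_event_gap_s)
```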
Measure:
- Throughput in messages/second:
  `totalMessages / toSeconds(lastMessageTimestamp - firstMessageTimestamp)`
- Latency:
  - P50 (median): Half of the messages were faster than this value.
  - P95: 95% of messages were faster; 5% were slower.
  - P99: Useful for catching tail latency (rare but potentially critical delays).
- The AWS costs for running the benchmark test
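A minimal sketch of the throughput and percentile calculations above. The percentile function is a simple nearest-rank implementation chosen for clarity; a real harness might use an HDR histogram instead:

```python
import math

def throughput_msgs_per_s(first_ts: float, last_ts: float, total_messages: int) -> float:
    """totalMessages / toSeconds(lastMessageTimestamp - firstMessageTimestamp)."""
    return total_messages / (last_ts - first_ts)

def percentile(latencies_ms: list[float], p: float) -> float:
    """Nearest-rank percentile: the value that p% of samples are at or below."""
    s = sorted(latencies_ms)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# 100,000 messages published over 100 seconds -> 1,000 msg/s
print(throughput_msgs_per_s(0.0, 100.0, 100_000))   # 1000.0

latencies = [float(i) for i in range(1, 101)]        # uniform 1..100 ms samples
print(percentile(latencies, 50))                     # 50.0
print(percentile(latencies, 95))                     # 95.0
print(percentile(latencies, 99))                     # 99.0
```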
Deployment:
- Publish: API
- InternalMQ:
- SQS
- RabbitMQ
- Destination types
- Webhooks
- RabbitMQ
- SQS