AWS based load test & Benchmark #204

@alexluong

Description

Load test and benchmark on AWS.

Places where we can add a timestamp:

  1. The publisher
  2. Outpost receipt of event
  3. Delivery attempt
  4. Destination receipt of attempt

The interval from 2 to 3 is the true latency of the Outpost deployment.

In a given scenario, what latency does Outpost add, measured from 2 to 3?
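A minimal sketch of how the four timestamp points above could be attached to an event envelope and the 2-to-3 latency computed. The field names here are hypothetical, not part of any existing Outpost schema:

```python
from datetime import datetime

# Hypothetical envelope carrying the four measurement points listed above.
event = {
    "id": "evt_001",
    "published_at": None,             # 1. the publisher
    "outpost_received_at": None,      # 2. Outpost receipt of event
    "delivery_attempt_at": None,      # 3. delivery attempt
    "destination_received_at": None,  # 4. destination receipt of attempt
}

def outpost_latency_ms(event: dict) -> float:
    """Latency Outpost adds, measured from point 2 to point 3."""
    received = datetime.fromisoformat(event["outpost_received_at"])
    attempted = datetime.fromisoformat(event["delivery_attempt_at"])
    return (attempted - received).total_seconds() * 1000

event["outpost_received_at"] = "2024-01-01T00:00:00.100+00:00"
event["delivery_attempt_at"] = "2024-01-01T00:00:00.350+00:00"
print(outpost_latency_ms(event))  # 250.0
```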

@leggetter to add scenarios; then we can size the infrastructure.


Scenario: 100,000 events / AWS serverless infrastructure: throughput and latency

  • Topics: Not currently testing topic-based routing as part of benchmark.
  • Publishers: 10 concurrent
  • Destinations/Subscriptions: it's possible that connections could be reused, so we do want to test with a number of different destinations. For example, each webhook destination should ideally point to a different endpoint to avoid connection reuse.
    • Webhooks: 100
    • RabbitMQ: 10
  • Message size: Use common Stripe/Shopify JSON event payload
  • Message rate: each publisher should publish 100 events/second. With 10 publishers, it should take 100 seconds to publish 100,000 messages
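The publisher side of this scenario could be sketched as a rate-limited load generator like the one below. The `publish` function is a placeholder, not the real Outpost publish API; a real run would issue an HTTP request there:

```python
import time
from concurrent.futures import ThreadPoolExecutor

PUBLISHERS = 10           # concurrent publishers (from the scenario)
RATE_PER_PUBLISHER = 100  # events/second per publisher
TOTAL_EVENTS = 100_000    # scenario total

def publish(event_id: str) -> None:
    # Placeholder: a real run would POST the event to the publish API.
    pass

def run_publisher(publisher_id: int, count: int, rate: float) -> int:
    """Publish `count` events, paced to hold the target rate."""
    interval = 1.0 / rate
    next_send = time.monotonic()
    for i in range(count):
        now = time.monotonic()
        if now < next_send:
            time.sleep(next_send - now)
        publish(f"pub{publisher_id}-evt{i}")
        next_send += interval
    return count

def run_load(total=TOTAL_EVENTS, publishers=PUBLISHERS, rate=RATE_PER_PUBLISHER):
    per_publisher = total // publishers
    with ThreadPoolExecutor(max_workers=publishers) as pool:
        results = pool.map(run_publisher, range(publishers),
                           [per_publisher] * publishers,
                           [rate] * publishers)
    return sum(results)
```

At full scale (100,000 events, 10 publishers, 100 events/second each) this finishes in roughly 100 seconds, matching the scenario's message rate.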

Measure:

  1. Throughput in messages/second: totalMessages / toSeconds(lastMessageTimestamp - firstMessageTimestamp)
  2. Latency:
  • P50 (Median): Half of the messages were faster than this value.
  • P95: 95% of messages were faster; 5% were slower.
  • P99: Useful to catch tail latency (rare but potentially critical delays).
  3. The AWS costs of running the benchmark test
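Given the captured timestamps, the throughput and percentile metrics above can be computed with the standard library. A sketch, assuming latencies have already been collected in milliseconds:

```python
import statistics

def throughput(total_messages: int, first_ts: float, last_ts: float) -> float:
    """Messages/second: totalMessages / toSeconds(last - first)."""
    return total_messages / (last_ts - first_ts)

def latency_percentiles(latencies_ms: list) -> dict:
    """P50/P95/P99 from a list of per-message latencies in milliseconds."""
    # quantiles() returns 99 cut points; indices 49/94/98 are P50/P95/P99.
    qs = statistics.quantiles(latencies_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

# Example: 100,000 messages published over 100 seconds -> 1,000 msg/s.
print(throughput(100_000, 0.0, 100.0))  # 1000.0
```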

Deployment:

  • Publish: API
  • InternalMQ:
    • SQS
    • RabbitMQ
  • Destination types
    • Webhooks
    • RabbitMQ
    • SQS
