
Multi-hop latency measurement example #501

Draft
wants to merge 4 commits into master
Conversation

eboasson (Contributor) commented Aug 6, 2024

This adds a simple tool for doing latency measurements across multiple hops. It assumes the clocks are closely synchronised, so that one-way latencies can be computed directly.

It can operate with a number of different types, all very simple:

    struct Hop8 {
      uint32 seq;
      octet z[8 - 4];
    };

and variants where the total size is 128, 1k, 8k and 128k bytes. Each process takes a stage, with the source publishing in partition Pj, the sink subscribing in partition Pj and the forwarders subscribing in Pj and publishing in Pk, where j is the stage argument and k = j+1.

Each process additionally subscribes to "junk data" and optionally publishes samples at randomised intervals with a configurable average rate.

Signed-off-by: Erik Boasson <[email protected]>
This adds a simple tool for doing latency measurements between processes when writing
multiple topics.  It assumes the clocks are synchronised to a high degree so that one-way
latencies can be computed directly.

It can operate with a number of different types, all very simple:

    struct Hop128 {
      @key uint32 key;
      uint32 seq;
      octet z[128 - 8];
    };

and variants where the total size is 8, 1k, 8k and 128k bytes.

A process is either a source of data, writing an instance for each of N topics every
10ms, or a sink, recording the latency and topic for each received sample.  A source
can publish these N samples in as quick succession as possible, or request 100us sleeps
in between.

Signed-off-by: Erik Boasson <[email protected]>
The sinks now record triplets: latency, topic idx and source id.

Sources now try to agree on a reference time, then start publishing at a configurable
delay from that reference time.  This allows running two sources publishing at exactly
the same time (if you have enough cores), or, with different delays, deliberately
keeping them from publishing at the same time.

It adds a run script that does a small 2x2 matrix of tests with two sources and two sinks:
starting at the same time or the second delayed by 2ms, and with/without sleep 100us in
between samples.

It also fixes a potential race condition on appending latencies in the listener.  Since
the listeners in this experiment always get invoked on the same thread it didn't cause any
problems.

Signed-off-by: Erik Boasson <[email protected]>
This adds cwpl where processes communicate in pairs, each process publishing N topics in
partition Pk and subscribing in Pj, where k is the specified id and j=k+1-2(k%2), so 0 and
1 form a pair, 2 and 3 form a pair, etc. A number of different types are available.

Each writer has its own thread; readers record latencies and optionally write them to a
file on termination.

Signed-off-by: Erik Boasson <[email protected]>