**client/**
**client.rs** (18246 bytes)
**error.rs** (3058 bytes)

EWS error values.

Provides an error type specific to EWS operations that may wrap an
underlying [`protocol_shared::error::ProtocolError`].
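The general shape of such a wrapper can be sketched as follows. The `EwsError` name and its variants are hypothetical, and `ProtocolError` is stubbed out here so the sketch is self-contained; only the wrapping relationship comes from the description above:

```rust
use std::fmt;

/// Stand-in for `protocol_shared::error::ProtocolError`, stubbed out
/// so this sketch compiles on its own.
#[derive(Debug)]
struct ProtocolError(String);

/// Hypothetical EWS-specific error that may wrap a protocol error.
#[derive(Debug)]
enum EwsError {
    Protocol(ProtocolError),
    Other(String),
}

impl fmt::Display for EwsError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            EwsError::Protocol(e) => write!(f, "protocol error: {}", e.0),
            EwsError::Other(msg) => write!(f, "{msg}"),
        }
    }
}

impl std::error::Error for EwsError {}

/// With a `From` impl, `?` lifts protocol errors into EWS errors.
impl From<ProtocolError> for EwsError {
    fn from(e: ProtocolError) -> Self {
        EwsError::Protocol(e)
    }
}

fn main() {
    let err: EwsError = ProtocolError("401 Unauthorized".into()).into();
    assert_eq!(err.to_string(), "protocol error: 401 Unauthorized");
}
```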
**headerblock.rs** (6321 bytes)
**headers.rs** (4696 bytes)
**lib.rs** (22825 bytes)
**line_token.rs** (5810 bytes)

This module implements structures that help synchronize error handling
across [`OperationQueue`] runners. It revolves around the [`Line`]
struct, an asynchronous flow control structure that behaves a bit like a
mutex, except that consumers waiting for the [`Line`] to be released do
not subsequently lock it themselves.

The design of a [`Line`] is inspired by that of a [one-track railway
line](https://en.wikipedia.org/wiki/Token_(railway_signalling)). To avoid
collisions, conductors must acquire a token at the entrance to the line,
which ensures they're the only one on it. While the token is held, traffic
around this line stops until it's released again.

Here we're not using this concept to drive trains, but to ensure that
whenever multiple [`OperationQueue`] runners encounter an authentication or
throttling error (or another type of error that might cause requests to be
retried), only one runner handles it while the others wait for the error to
be resolved before retrying.

[`OperationQueue`]: crate::operation_queue::OperationQueue
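The key behaviour described above is a lock whose waiters resume without acquiring it. As a loose synchronous analogue (the real module is asynchronous, and every name below is hypothetical), this can be modelled with a `Mutex<bool>` plus a `Condvar`:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Synchronous analogue of a `Line` token: one holder at a time;
/// waiters only block until release and do NOT take the token
/// themselves afterwards.
struct Line {
    held: Mutex<bool>,
    released: Condvar,
}

impl Line {
    fn new() -> Self {
        Line { held: Mutex::new(false), released: Condvar::new() }
    }

    /// Try to take the token; returns true if we are now the holder.
    fn try_acquire(&self) -> bool {
        let mut held = self.held.lock().unwrap();
        if *held { false } else { *held = true; true }
    }

    /// Release the token and wake every waiter.
    fn release(&self) {
        *self.held.lock().unwrap() = false;
        self.released.notify_all();
    }

    /// Block until the token is free, without acquiring it.
    fn wait_released(&self) {
        let mut held = self.held.lock().unwrap();
        while *held {
            held = self.released.wait(held).unwrap();
        }
    }
}

fn main() {
    let line = Arc::new(Line::new());
    assert!(line.try_acquire());  // first runner handles the error
    assert!(!line.try_acquire()); // the others fail to acquire it...

    let waiter = {
        let line = Arc::clone(&line);
        thread::spawn(move || line.wait_released()) // ...and wait instead
    };
    line.release(); // error resolved; waiters resume without locking
    waiter.join().unwrap();
}
```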
**observers.rs** (5292 bytes)
**operation_queue.rs** (13699 bytes)

This module contains the request queueing logic for EWS operations.

It exposes two types:

* [`OperationQueue`], a struct that represents the queue of requests
  attached to an EWS client, and
* [`QueuedOperation`], a trait representing an operation that can be
  added to the queue.

Consumers can add any type that implements [`QueuedOperation`] to the end
of an [`OperationQueue`] via [`OperationQueue::enqueue`].
# How it works
The queue is expected to be used while wrapped in an [`Arc`].

The queue's inner buffer is an unbounded MPMC channel from
[`async_channel`]. When a new operation is enqueued (using
[`OperationQueue::enqueue`]), it is sent through this channel via the
matching [`async_channel::Sender`].
[`OperationQueue::start`] starts an infinite loop in the background for each
runner. This loop waits for a new operation to be queued (or gets the next
operation in line) by `await`ing the inner channel's
[`async_channel::Receiver`], and performs it.
Operations are thus dequeued in order, each runner waiting for its current
operation to complete before picking up the next one. Multiple operations
may still run in parallel, depending on the number of items on the queue
and the number of runners specified to [`OperationQueue::start`].
Performing an operation also includes handling authentication and throttling
errors, retrying the request if necessary. This means that, if an operation
needs to be retried due to this kind of failure, the retries are performed
*before* the next operation. This is because authentication and throttling
errors impact all operations indiscriminately, so pushing retries to the
back of the queue (rather than performing them immediately) would mean
performing a number of requests we already know will fail.
[`OperationQueue::stop`] stops all of the runners by closing the underlying
[`async_channel`] channel. Operations that have already been queued up by
this point are still performed in order, but any subsequent call to
[`OperationQueue::enqueue`] returns with an error. Runners ultimately break
out of their loop once the channel is empty.
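This lifecycle can be sketched synchronously, with `std::sync::mpsc` plus a shared receiver standing in for the real code's `async_channel` MPMC channel, and boxed closures standing in for queued operations. All names and signatures below are hypothetical:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// A queued operation, erased to a boxed closure for this sketch
/// (the real crate uses a `QueuedOperation` trait instead).
type Operation = Box<dyn FnOnce() + Send>;

/// Synchronous analogue of `OperationQueue`.
struct OperationQueue {
    sender: mpsc::Sender<Operation>,
    runners: Vec<thread::JoinHandle<()>>,
}

impl OperationQueue {
    /// Spawn `runner_count` runner threads, each looping on the shared
    /// receiver until the channel is closed *and* drained.
    fn start(runner_count: usize) -> Self {
        let (sender, receiver) = mpsc::channel::<Operation>();
        let receiver = Arc::new(Mutex::new(receiver));
        let runners = (0..runner_count)
            .map(|_| {
                let receiver = Arc::clone(&receiver);
                thread::spawn(move || loop {
                    // Take the next operation in line; `Err` means the
                    // channel is closed and empty, so the runner exits.
                    let op = match receiver.lock().unwrap().recv() {
                        Ok(op) => op,
                        Err(_) => break,
                    };
                    op(); // retries would happen here, before the next op
                })
            })
            .collect();
        OperationQueue { sender, runners }
    }

    /// Push an operation onto the back of the queue; fails once stopped.
    fn enqueue(&self, op: Operation) -> Result<(), &'static str> {
        self.sender.send(op).map_err(|_| "queue is stopped")
    }

    /// Close the channel; already-queued operations still run to completion.
    fn stop(self) {
        drop(self.sender);
        for runner in self.runners {
            runner.join().unwrap();
        }
    }
}

fn main() {
    let queue = OperationQueue::start(2);
    let (done_tx, done_rx) = mpsc::channel();
    for i in 0..4 {
        let done_tx = done_tx.clone();
        queue.enqueue(Box::new(move || done_tx.send(i).unwrap())).unwrap();
    }
    queue.stop(); // waits for the 4 queued operations to finish
    let mut results: Vec<i32> = done_rx.try_iter().collect();
    results.sort();
    assert_eq!(results, vec![0, 1, 2, 3]);
}
```

The `Arc<Mutex<Receiver>>` is only needed because `std::sync::mpsc` receivers cannot be cloned; an MPMC channel like `async_channel`'s lets each runner hold its own receiver directly.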
# Design considerations
This queue relies on multiple levels of abstraction to define the type of an
operation, from the queue's point of view. While [`QueuedOperation`] is the
public-facing trait that consumers should implement,
[`ErasedQueuedOperation`] is the type of operations the queue actually sees.
It wraps [`QueuedOperation`] in such a way that both the queue and its
runners can use it as a trait object. This is necessary because
[`QueuedOperation::perform`] is an async method, which would break [Rust's
rules on dyn compatibility] (since `async fn` is shorthand for returning
`impl Future<...>`, an opaque type whose size cannot be known at compile
time).
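The erasure technique described above is the usual boxed-future pattern. The sketch below reuses the two trait names from this doc, but the method signatures, the `Ping` type, and the minimal executor are all hypothetical:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Public-facing trait: the `async fn` makes it dyn-incompatible,
/// because each implementation returns its own opaque `impl Future`.
trait QueuedOperation {
    async fn perform(&self) -> String;
}

/// Erased counterpart: boxing the future gives the method a concrete,
/// sized return type, so this trait can be used as a trait object.
trait ErasedQueuedOperation {
    fn perform<'a>(&'a self) -> Pin<Box<dyn Future<Output = String> + 'a>>;
}

/// Blanket impl: any `QueuedOperation` can be viewed through the erased
/// trait, which is what the queue and its runners work with.
impl<T: QueuedOperation> ErasedQueuedOperation for T {
    fn perform<'a>(&'a self) -> Pin<Box<dyn Future<Output = String> + 'a>> {
        Box::pin(QueuedOperation::perform(self))
    }
}

struct Ping;

impl QueuedOperation for Ping {
    async fn perform(&self) -> String {
        "pong".to_string()
    }
}

/// Minimal executor, sufficient for futures that never return `Pending`.
fn block_on<F: Future + ?Sized>(mut fut: Pin<&mut F>) -> F::Output {
    fn noop_raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { noop_raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(noop_raw()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    // `Box<dyn QueuedOperation>` would not compile; the erased form does.
    let ops: Vec<Box<dyn ErasedQueuedOperation>> = vec![Box::new(Ping)];
    let mut fut = ops[0].perform();
    assert_eq!(block_on(fut.as_mut()), "pong");
}
```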
Regarding the design of the queue itself, a previous approach involved using
a [`VecDeque`] as the queue's inner buffer, but relying on [`async_channel`]
simplifies both the queue's structure and the logic for waiting for new
items to become available.
Queueing requests in [`moz_http`] was also considered, but this approach was
abandoned as well, since it would mean retries caused by throttling or
authentication issues would be added to the back of the queue rather than
performed immediately.
[`Arc`] is used in a few places to ensure memory is correctly managed. Since
we only dispatch to the local thread, [`Rc`] could be used instead. However,
it would make sense, as a next step, to look into dispatching to the
background tasks thread pool instead. In this context, using `Arc` now could
avoid a hefty change in the future (at a negligible performance cost).
[Rust's rules on dyn compatibility]:
<https://doc.rust-lang.org/reference/items/traits.html#dyn-compatibility>
[`VecDeque`]: std::collections::VecDeque
[`Rc`]: std::rc::Rc
**operation_sender/**
**operation_sender.rs** (21526 bytes)
**outgoing.rs** (31970 bytes)
**server_version.rs** (8860 bytes)
**xpcom_io.rs** (2299 bytes)