__init__.py |
|
0 |
_client_importer.py |
|
829 |
aio |
|
|
auth.py |
Authentication related API end-points for Taskcluster and related
services. These API end-points are of interest if you wish to:
* Authorize a request signed with Taskcluster credentials,
* Manage clients and roles,
* Inspect or audit clients and roles,
* Gain access to various services guarded by this API.
|
27153 |
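The authorization checks behind these end-points rest on scope satisfaction. A minimal sketch of the rule as commonly described for Taskcluster (a held scope ending in `*` covers any required scope sharing that prefix; `satisfies` is a hypothetical helper, not part of the client, and role expansion is not modeled):

```python
def satisfies(held: set, required: str) -> bool:
    """Return True if any held scope grants the required scope.

    A held scope grants a required scope when they are equal, or when
    the held scope ends in "*" and is a prefix of the required scope.
    """
    return any(
        s == required or (s.endswith("*") and required.startswith(s[:-1]))
        for s in held
    )

held = {"auth:create-client:project/*", "secrets:get:some-secret"}
print(satisfies(held, "auth:create-client:project/ci"))  # True
print(satisfies(held, "auth:delete-client:project/ci"))  # False
```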
authevents.py |
The auth service is responsible for storing credentials, managing
assignment of scopes, and validation of request signatures from other
services.
These exchanges provide notifications when credentials or roles are
updated. This is mostly so that multiple instances of the auth service
can purge their caches and synchronize state. But you are of course
welcome to use these for other purposes, monitoring changes for example.
|
5670 |
github.py |
The github service is responsible for creating tasks in response
to GitHub events, and posting results to the GitHub UI.
This document describes the API end-point for consuming GitHub
web hooks, as well as some useful consumer APIs.
When Github forbids an action, this service returns an HTTP 403
with code ForbiddenByGithub.
|
5989 |
githubevents.py |
The github service publishes a pulse
message for supported github events, translating Github webhook
events into pulse messages.
This document describes the exchange offered by the taskcluster
github service.
|
8162 |
hooks.py |
The hooks service provides a mechanism for creating tasks in response to events.
|
9395 |
hooksevents.py |
The hooks service is responsible for creating tasks at specific times or
in response to webhooks and API calls. Using this exchange allows us to
make hooks which respond to particular pulse messages.
These exchanges provide notifications when a hook is created, updated
or deleted. This is so that the listener running in a different hooks
process at the other end can direct another listener, specified by
`hookGroupId` and `hookId`, to synchronize its bindings. But you are of
course welcome to use these for other purposes, monitoring changes for
example.
|
3933 |
index.py |
The index service is responsible for indexing tasks. The service ensures that
tasks can be located by user-defined names.
As described in the service documentation, tasks are typically indexed via Pulse
messages, so the most common use of API methods is to read from the index.
Slashes (`/`) aren't allowed in index paths.
|
7022 |
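The no-slashes rule for index paths can be sketched as a small validator. The dot-separated, non-empty-component shape assumed here is an illustration, not the service's full grammar, and the namespace in the example is hypothetical:

```python
def valid_index_path(path: str) -> bool:
    """Reject paths containing slashes, per the service docs; also
    require non-empty dot-separated components (an assumption)."""
    if not path or "/" in path:
        return False
    return all(part != "" for part in path.split("."))

print(valid_index_path("project.example.builds.latest"))  # True
print(valid_index_path("project/example"))                # False
```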
notify.py |
The notification service listens for tasks with associated notifications
and handles requests to send emails and post pulse messages.
|
6395 |
notifyevents.py |
This exchange contains little more than the simple free-form
message that this service publishes on request by anybody with
the proper scopes.
|
2201 |
object.py |
The object service provides HTTP-accessible storage for large blobs of data.
Objects can be uploaded and downloaded, with the object data flowing directly
from the storage "backend" to the caller rather than via this service.
Once uploaded, objects are immutable until their expiration time.
|
6751 |
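The immutable-until-expiration rule can be modeled with a toy in-memory store. This is an illustration of the documented contract only; the real service keeps data in a storage backend and has a richer upload protocol:

```python
from datetime import datetime, timedelta, timezone

class ObjectStore:
    """Toy model of the rule above: a name cannot be re-uploaded
    until the previous object under that name has expired."""

    def __init__(self):
        self._objects = {}  # name -> (data, expires)

    def upload(self, name, data, expires, now=None):
        now = now or datetime.now(timezone.utc)
        current = self._objects.get(name)
        if current is not None and current[1] > now:
            raise PermissionError(f"{name} is immutable until {current[1].isoformat()}")
        self._objects[name] = (data, expires)

    def download(self, name):
        return self._objects[name][0]
```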
purgecache.py |
The purge-cache service is responsible for tracking cache-purge requests.
Users create purge requests for specific caches on specific workers, and
these requests are timestamped. Workers consult the service before
starting a new task, and purge any caches older than the timestamp.
|
3761 |
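The worker-side check described above can be sketched as follows. The helper name and dict shapes are hypothetical, not part of the real client or worker implementations:

```python
from datetime import datetime, timezone

def caches_to_purge(cache_created, purge_before):
    """Return the names of caches created before the timestamp of a
    purge request for them.

    cache_created: {cacheName: datetime the cache was created}
    purge_before:  {cacheName: datetime of the latest purge request}
    """
    return sorted(
        name
        for name, created in cache_created.items()
        if name in purge_before and created < purge_before[name]
    )

t = lambda day: datetime(2024, 1, day, tzinfo=timezone.utc)
print(caches_to_purge({"ci-level-1": t(1), "tools": t(5)},
                      {"ci-level-1": t(3)}))  # ['ci-level-1']
```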
queue.py |
The queue service is responsible for accepting tasks and tracking their state
as they are executed by workers, in order to ensure they are eventually
resolved.
## Artifact Storage Types
* **Object artifacts** contain arbitrary data, stored via the object service.
* **Redirect artifacts** will redirect the caller to a URL when fetched,
with a 303 (See Other) response. Clients will not apply any kind of
authentication to that URL.
* **Link artifacts** are treated as if the caller requested the linked
artifact on the same task. Links may be chained, but cycles are forbidden.
The caller must have scopes for the linked artifact, or a 403 response will
be returned.
* **Error artifacts** consist only of metadata which the queue will
store for you. These artifacts are only meant to indicate that the
worker or the task failed to generate a specific artifact that would
otherwise have been uploaded. For example, docker-worker will upload an
error artifact if the file it was supposed to upload doesn't exist or
turns out to be a directory. Clients requesting an error artifact will
get a `424` (Failed Dependency) response. This is mainly designed to
ensure that dependent tasks can distinguish between artifacts that were
supposed to be generated and artifacts whose names are misspelled.
* **S3 artifacts** are used for static files which will be
stored on S3. When creating an S3 artifact the queue will return a
pre-signed URL to which you can do a `PUT` request to upload your
artifact. Note that the `PUT` request **must** specify the `content-length`
header and **must** give the `content-type` header the same value as in
the request to `createArtifact`. S3 artifacts will be deprecated soon,
and users should prefer object artifacts instead.
## Artifact immutability
Generally speaking, you cannot overwrite an artifact once created.
But if you repeat the request with the same properties the request will
succeed as the operation is idempotent.
This is useful if you need to refresh a signed URL while uploading.
Do not abuse this to overwrite artifacts created by another entity,
such as a worker host overwriting an artifact created by worker code!
The queue defines the following *immutability special cases*:
* A `reference` artifact can replace an existing `reference` artifact.
* A `link` artifact can replace an existing `reference` artifact.
* Any artifact's `expires` can be extended (made later, but not earlier).
|
44340 |
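The immutability special cases above can be sketched as a single check. The artifact dicts here (`storageType`, `expires`, plus any other properties) are a simplified assumption for illustration, not the real `createArtifact` payload or validation:

```python
from datetime import datetime, timezone

def may_replace(existing, new):
    """Sketch of the queue's artifact-immutability rules."""
    if new == existing:
        return True  # identical repeat: the operation is idempotent
    if existing["storageType"] == "reference" and new["storageType"] in ("reference", "link"):
        return True  # a reference may become a reference or a link
    # Otherwise only `expires` may change, and only to a later time.
    return (
        new == {**existing, "expires": new["expires"]}
        and new["expires"] > existing["expires"]
    )

jan = datetime(2024, 1, 1, tzinfo=timezone.utc)
feb = datetime(2024, 2, 1, tzinfo=timezone.utc)
ref = {"storageType": "reference", "expires": jan}
print(may_replace(ref, {"storageType": "link", "expires": jan}))  # True
print(may_replace({"storageType": "s3", "expires": feb},
                  {"storageType": "s3", "expires": jan}))         # False
```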
queueevents.py |
The queue service is responsible for accepting tasks and tracking their
state as they are executed by workers, in order to ensure they are
eventually resolved.
This document describes AMQP exchanges offered by the queue, which allows
third-party listeners to monitor tasks as they progress to resolution.
These exchanges target the following audiences:
* Schedulers, who take action after tasks are completed,
* Workers, who want to listen for new or canceled tasks (optional),
* Tools that want to update their view as tasks progress.
You'll notice that all the exchanges in this document share the same
routing key pattern. This makes it very easy to bind to all messages
about a certain kind of task.
**Task specific routes**: a task can define task-specific routes using
the `task.routes` property. See the task creation documentation for
details on the permissions required to provide task-specific routes. If
a task has the entry `'notify.by-email'` in `task.routes`, all messages
about this task will be CC'ed with the routing key
`'route.notify.by-email'`.
These routes are always prefixed `route.`, so they cannot interfere
with the _primary_ routing key documented here. Notice that the
_primary_ routing key is always prefixed `primary.`. This is ensured
in the routing key reference, so API clients will do this automatically.
Please note that, because of the way RabbitMQ works, a message will
only arrive in your queue once, even if you have bound to the exchange
with multiple routing key patterns that match more than one of the
CC'ed routing keys.
**Delivery guarantees**: most operations on the queue are idempotent,
which means that if repeated with the same arguments they will ensure
completion of the operation and return the same response.
This is useful if the server crashes or the TCP connection breaks, but
when re-executing an idempotent operation, the queue will also resend
any related AMQP messages. Hence, messages may be repeated.
This shouldn't be much of a problem, as the best you can achieve using
confirm messages with AMQP is at-least-once delivery semantics. Hence,
this only prevents you from obtaining at-most-once delivery semantics.
**Remark**: some messages generated by timeouts may be dropped if the
server crashes at the wrong time. Ideally, we'll address this in the
future. For now we suggest you ignore this corner case, and notify us
if it is of concern to you.
|
27135 |
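The `primary.`/`route.` prefixing described above can be sketched as follows. `message_routing_keys` is a hypothetical helper, and the primary key in the example is simplified; the real primary routing key is a structured, multi-field string not modeled here:

```python
def message_routing_keys(primary_key, task_routes):
    """Routing keys a queue message is published with: the primary
    routing key (prefixed "primary.") plus one CC'ed key per
    task-specific route (each prefixed "route.")."""
    return [f"primary.{primary_key}"] + [f"route.{r}" for r in task_routes]

print(message_routing_keys("task-completed.someTaskId", ["notify.by-email"]))
# ['primary.task-completed.someTaskId', 'route.notify.by-email']
```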
secrets.py |
The secrets service provides a simple key/value store for small bits of secret
data. Access is limited by scopes, so values can be considered secret from
those who do not have the relevant scopes.
Secrets also have an expiration date, and once a secret has expired it can no
longer be read. This is useful for short-term secrets such as a temporary
service credential or a one-time signing key.
|
4385 |
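The expiry behavior can be modeled in a few lines. This mirrors the documented rule, not the client's API; the `store` dict shape and helper name are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

def read_secret(store, name, now=None):
    """Toy model of the rule above: an expired secret can no longer
    be read and behaves as if it were absent.

    store: {name: {"secret": ..., "expires": datetime}}
    """
    now = now or datetime.now(timezone.utc)
    entry = store.get(name)
    if entry is None or entry["expires"] <= now:
        raise KeyError(f"no such (unexpired) secret: {name}")
    return entry["secret"]
```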
workermanager.py |
This service manages workers, including provisioning for dynamic worker pools.
Methods interacting with a provider may return a 503 response if that provider has
not been able to start up, such as if the service to which it interfaces has an
outage. Such requests can be retried as for any other 5xx response.
|
14025 |
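A retry loop matching the advice above might look like this. `ServerError` is a hypothetical stand-in for an HTTP error carrying a status code, not a class from the real client:

```python
import time

class ServerError(Exception):
    """Hypothetical HTTP error carrying a status code."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def call_with_retries(request, attempts=5, base_delay=1.0):
    """Retry a callable on 5xx responses with exponential backoff;
    4xx errors are not retried. (Sketch only; production clients
    usually also add jitter to the delay.)"""
    for i in range(attempts):
        try:
            return request()
        except ServerError as err:
            if err.status < 500 or i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```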
workermanagerevents.py |
These exchanges provide notifications when a worker pool is created or
updated. This is so that the provisioner running in a different process
at the other end can synchronize to the changes. But you are of course
welcome to use these for other purposes, monitoring changes for example.
|
3396 |