User:DKinzler (WMF)/Client Software Guidelines
This page is currently a draft. This is a personal brain dump and not aligned with anyone.
This document provides guidance for client software that is accessing Wikimedia APIs, such as bots and gadgets.
Terminology
Throughout this guideline, we distinguish between maintainers, operators, and users of client-side software. Depending on the kind of software, these may be the same person, or three entirely different groups of people:
- A maintainer of the software is someone who has control over the program code. They can make changes to the software itself.
- An operator of the software does not modify the software, but determines when, where, and how the software will be used. The operator may also be the maintainer at the same time, but that doesn't have to be the case: a developer who uses a component developed by someone else to create a web site would be operating that component, but they would not be maintaining it.
- The users control a specific execution of the software, but are not in control of the source code or configuration, and may not even be aware that they are using the software. For example, when a user visits a web site, their browser may load client software that makes a call to the MediaWiki API on their behalf, without them knowing or caring about it. On the other hand, a user who runs a wiki bot from the command line on Toolforge would also be the operator of the software at the same time (and, if they wrote the bot, also the maintainer).
Best Practices
The practices laid out here are designed to reduce the workload of the maintainers of client-side software and server-side software alike. Following these practices may require some additional effort up front, but will avoid nasty surprises and unplanned work down the road. Client software that follows best practices is less likely to break unexpectedly, and less likely to be blocked by community admins or Wikimedia staff. It also makes life easier for the people who work on the server side.
So, please:
stay informed
Maintainers and operators of client-side code should follow announcements on the mediawiki-api-announce mailing list. Additional lists that may be useful for staying informed about ongoing developments and upcoming changes include mediawiki-api and wikitech-l.
be reachable
When problems arise, it's always best to talk about them before taking action. While the Wikimedia Foundation reserves the right [TBD: link to terms of service] to block any incoming requests in order to protect its operational integrity, Wikimedia will try to reach out to the operator of problematic clients in order to resolve problems, ideally before requests need to be blocked. Of course, this requires a way to contact the operator.
To ensure this is possible, clients should adhere to the following practices:
user-agent Set the User-Agent header according to the Wikimedia User-Agent policy. The User-Agent header should allow the server to identify the client software and provide a way to contact the operator of that software in case of issues. The header must have the following form:
User-Agent: <client name>/<version> (<contact information>) <library name>/<version> [<library name>/<version>...]
If the User-Agent header cannot be set (e.g. because the client code is executing in a web browser, which sets its own User-Agent header), set the Api-User-Agent header instead. [TBD: how can we automate this for Gadgets?]
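A minimal sketch of applying this practice with Python's standard library; the client name, version, and contact details shown here are placeholders that an operator would replace with their own:

```python
import urllib.request

# Hypothetical client name, version, and contact details -- substitute your own,
# following the form required by the Wikimedia User-Agent policy.
USER_AGENT = "ExampleWikiBot/1.0 (https://example.org/bot; bot@example.org) python-urllib/3.x"

def make_request(url: str) -> urllib.request.Request:
    """Build a request carrying the policy-compliant User-Agent header."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})

req = make_request("https://en.wikipedia.org/w/api.php?action=query&format=json")
# In a browser context, where the User-Agent header cannot be overridden,
# the same string would be sent in the Api-User-Agent header instead.
```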
fake-user-agent Using a fake User-Agent string, e.g. pretending to be a browser, is likely to get you blocked.
authenticate When making authenticated requests, make sure to authenticate as the user who is actually controlling the activity. When making API calls on behalf of others, use their credentials to authenticate, not yours.
Unauthenticated requests on behalf of others are generally OK, but should be avoided for write operations and for expensive or high-volume queries.
In the case of a web application, one suitable mechanism for performing authenticated requests on behalf of others is OAuth2.
Suppose Joanne creates a web page on Toolforge that allows people to post messages to multiple users. This is implemented by calling APIs that edit the respective user talk pages. These API calls must not be made using Joanne's credentials. Instead, the users who wish to post the messages must first authorize Joanne's tool to act on their behalf using OAuth. The API calls must then be made using the relevant OAuth tokens. This way, the edits to the talk pages are attributed to the users who actually controlled them, rather than to Joanne, who wrote the tool.
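The request-building side of this pattern might look as follows. This sketch assumes an OAuth 2.0 access token has already been obtained through the authorization flow; the endpoint URL, token value, and tool name are illustrative:

```python
import urllib.request

def authed_request(url: str, access_token: str) -> urllib.request.Request:
    """Attach the *user's* OAuth 2.0 access token as a Bearer credential,
    so the resulting edit is attributed to the user who authorized the
    tool, not to the tool's operator."""
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {access_token}",
            # Hypothetical tool name and contact, per the User-Agent policy.
            "User-Agent": "ExampleMessageTool/1.0 (tool@example.org)",
        },
    )

# "USER_TOKEN" stands in for a token obtained via the OAuth authorization flow.
req = authed_request("https://meta.wikimedia.org/w/rest.php/v1/page/Example", "USER_TOKEN")
```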
avoid incompatibility
One common reason for unexpected breakage in client software is the use of APIs that were not intended to be stable interfaces in the first place. To avoid this, client software should follow these practices:
internal-apis Do not rely on APIs that are indicated to be non-public (restricted, private, or internal). Such APIs are reserved for use by software that is controlled by Wikimedia and its affiliates. This also applies to undocumented APIs.
APIs may be indicated to be non-public by their documentation, or by markers in the address used to access them [TBD: reference URL design guides].
If Wikimedia builds a feature specifically for Wikipedia, this may involve creating an API that serves data to populate the user interface components used by that feature. While there is nothing secret about that data, the way it is bundled and structured is specific to the user interface component (backend-for-frontend pattern), and there is no plan to keep it stable. To avoid surprises, third party clients may not access this API.
Wikimedia may set up a server cluster optimized for serving the Wikipedia mobile app. The APIs served by this server cluster would be the same as the ones offered to the general public, but access to this cluster would be reserved for use by the Wikipedia app, to guarantee operational stability.
experimental-apis Accessing APIs that are marked as experimental is acceptable, but they should not be relied upon. They may change or vanish without warning.
Stable APIs may change or be deprecated and removed. In this case, the server will try to warn callers of deprecated functionality about the upcoming change. Such warnings do not prevent the request from succeeding, and they may not be relevant to the ultimate user of the software. They are directed at maintainers and operators, to inform them about a problem that may cause similar requests to fail in the future.
Maintainers of client side software should make sure that they are aware of such warnings by applying the following practices:
log-warnings Bring warnings to the attention of the maintainer, for instance by doing one of the following:
- Write an entry in a log file
- Fail automated tests
- Output a message on the command line when in verbose mode
- Display a warning when in development mode
Some well known ways in which the server may report warnings are:
- Deprecation and Sunset response headers.
- [TBD: X-WMF-Warning or X-API-Warning headers]
- [TBD: warning section in response body, e.g. in the action API]
HTTP client libraries may hide warnings from the client code by transparently following redirects. For instance, a deprecated API endpoint may use a 308 redirect to direct the client to the new endpoint, while providing a deprecation header. To allow the client software to process this header and make the developer aware of the issue, automatic resolution of redirects has to be disabled in the underlying library.
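The practices above could be combined along the following lines. This is a minimal sketch using Python's standard logging module; the Deprecation and Sunset header names follow RFC 8594 and the related draft, and whether a given Wikimedia API emits them is an assumption to verify against the API's documentation:

```python
import logging

logger = logging.getLogger("api-client")

def surface_warnings(status: int, headers: dict) -> list:
    """Collect deprecation-related warnings from a response and log them,
    so the maintainer sees them (e.g. in a log file or a failing test)."""
    warnings = []
    if "Deprecation" in headers:
        warnings.append(f"Deprecated endpoint: {headers['Deprecation']}")
    if "Sunset" in headers:
        warnings.append(f"Endpoint scheduled for removal: {headers['Sunset']}")
    if status == 308 and "Location" in headers:
        # Only visible if the HTTP library is told NOT to follow
        # redirects automatically (e.g. allow_redirects=False in requests).
        warnings.append(f"Endpoint moved permanently to {headers['Location']}")
    for warning in warnings:
        logger.warning(warning)
    return warnings
```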
be mindful of resource usage
One common reason why the Wikimedia Foundation may block client software that is operated in good faith is that it consumes an unreasonable amount of resources, typically by sending too many requests. In that case, the server will often try to instruct the client to slow down. To avoid putting undue stress on the servers, clients should apply the following practices:
documented-rate-limits Keep the rate of requests to Wikimedia APIs below the documented limit for the given API [TBD: provide an easy way to get all applicable limits]. Note that limits may apply separately for individual APIs and across multiple APIs.
default-rate-limits If no other rate limits are specified, apply the following rules of thumb:
- Keep API requests sequential. That is, do not run multiple API requests in parallel.
- Alternatively, if running requests in parallel is important to your use case, make no more than 10 requests in 10 seconds, across all Wikimedia APIs.
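The 10-requests-per-10-seconds rule of thumb can be sketched as a small sliding-window limiter. The class and its parameters here are illustrative, not a Wikimedia-provided API:

```python
import time
from collections import deque

class RateLimiter:
    """Allows at most `max_requests` requests within any `window`-second
    span, matching the rule-of-thumb default of 10 requests per 10 s."""

    def __init__(self, max_requests: int = 10, window: float = 10.0):
        self.max_requests = max_requests
        self.window = window
        self.sent = deque()  # monotonic timestamps of recent requests

    def wait(self) -> None:
        """Block until another request may be sent, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) >= self.max_requests:
            # Sleep until the oldest recorded request leaves the window.
            time.sleep(self.window - (now - self.sent[0]))
            self.sent.popleft()
        self.sent.append(time.monotonic())

limiter = RateLimiter()
# Call limiter.wait() before each API request.
```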
One typical cause of clients unintentionally sending a large number of requests at a high rate is badly implemented retry logic. If you want to implement automatic retries, please apply the following practices:
may-retry Only retry on errors that are transient by nature, such as:
- HTTP status 503 ("service unavailable")
- HTTP status 504 ("gateway timeout")
- HTTP status 429 ("too many requests")
- HTTP status 404 ("not found") when received right after creating the respective resource. In that case, the error is assumed to be due to stale data on the server side.
delay-retry Make sure to slow down your retries. Specifically:
- If possible, follow instructions about rate limits and retries provided in the response. In particular, implement support for the Retry-After header, and delay any retry at least as long as specified by that header.
- If the response does not contain information about delaying the retry, or it cannot be used for some reason, then use exponential backoff, starting with a one second delay and doubling the delay time with every attempt.
- Alternatively, just wait ten seconds between all retry attempts. This is the simplest approach, but it often causes unnecessarily long delays.
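The may-retry and delay-retry practices combine into a loop like the following sketch; the `send` callable and its `(status, headers, body)` return shape are illustrative stand-ins for whatever HTTP client the software uses:

```python
import time

# Transient statuses that it is safe to retry (see may-retry above).
RETRYABLE = {429, 503, 504}

def retry_delay(attempt: int, retry_after: str = None) -> float:
    """Seconds to wait before retry number `attempt` (0-based).
    Honor Retry-After when the server provides it (delay-seconds form);
    otherwise use exponential backoff: 1 s, 2 s, 4 s, ..."""
    if retry_after is not None:
        return float(retry_after)
    return 2.0 ** attempt

def call_with_retries(send, max_attempts: int = 5):
    """`send` is a hypothetical callable returning (status, headers, body)."""
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status not in RETRYABLE:
            return status, headers, body
        time.sleep(retry_delay(attempt, headers.get("Retry-After")))
    return status, headers, body
```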
pywikibot uses exponential backoff starting with 5 seconds, a maximum total wait time of 120 seconds, and a maximum of 15 retries (when using retry-after).
inform users about errors
To avoid surprises, users should be informed about any failure to successfully complete a request to an API.
Client software should detect and handle any errors indicated in the responses received from the server. In general, errors should be surfaced to the people affected by them, and to the people who can remedy them. To this end, client code should apply the following practices:
show-errors Inform the user about any failure to complete an API request, unless the failure could be resolved automatically (e.g. by retrying, see #MUST delay retries). In particular, alert the user about unresolved server-side issues (HTTP status 5xx) and about unexpected failures to perform the request (HTTP status 4xx). Use the information contained in the response body to provide further information to the user [TBD: reference relevant error payload spec] (see also #SHOULD gracefully handle HTML content when receiving errors).
[TBD: talk about localization].
show-blocks Please take particular care to provide detailed information to the user when a request has failed because the user lacks the necessary privileges (perhaps because of a user or IP block). This problem may arise suddenly for requests that have worked fine until now, that may work fine again in the future, or that work fine for other users. It may be difficult for the user to understand why a particular request was blocked, so it is important to provide them with as many details as possible.
[TBD: reference spec for block info data structures]
html-errors When handling responses with a 4xx or 5xx status code, handle HTML in the response body gracefully, even when it is received from an endpoint that is documented to always return a machine-readable format such as JSON.
The reason is that, while API endpoints should be designed to return machine-readable error descriptions, intermediate layers such as proxies and caches will often generate HTML responses when something goes wrong. The client should make an effort to process the HTML response in a meaningful way.
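One way to sketch this graceful handling with the standard library: branch on the Content-Type of the error response, and fall back to extracting the page title from HTML produced by an intermediate layer. The "message" key used for the JSON branch is an assumption; the actual field name depends on the API's error payload spec:

```python
import json
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Pulls the <title> text out of an HTML error page."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def error_summary(content_type: str, body: str) -> str:
    """Derive a human-readable message from a 4xx/5xx response body,
    whether it is JSON (as documented) or HTML from a proxy or cache."""
    if "json" in content_type:
        data = json.loads(body)
        # Key name is an assumption; consult the API's error payload spec.
        return data.get("message", body)
    if "html" in content_type:
        parser = TitleExtractor()
        parser.feed(body)
        return parser.title or "Unknown server error"
    return body.strip()
```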