Talk:Requests for comment/Standards for external services

The bullet "There is no existing FLOSS software that provides the same functionality" doesn't fit the grammatical structure of that list and seems a bit out of place, given that below it states "Be licensed under an OSI-approved license".
- Have a documented installation and uninstallation process that conforms to our implementation guidelines
- Have a documented upgrade process that conforms to our implementation guidelines
- Provide a mechanism by which support (community or otherwise) can be requested
- Provide a mechanism by which patches can be proposed
I'd hope that these apply to all services we develop, not just in-house ones. They seem pretty fundamental to maintainability (the ability to test locally, the ability for volunteers to get involved...)
Have pinned / pinnable dependencies that don't need to be downloaded at runtime and/or from untrusted sources
Dependency management at the WMF (and across the industry) is something that has been bothering me for some time; I'd like to see us adopt something more robust here.
These dependencies are just as much a part of our applications as the code that we write, and while we demonstrate a lot of rigor around the design and implementation of our code (admirable), we typically treat the dependencies we pull in as black boxes (scary). I think a dependency (including transitive ones) should be selected after careful evaluation and review, and then monitored for changes. Changes should likewise be reviewed, including (but not limited to) determining whether the purported benefits outweigh the risks of upgrading. And this work should be coordinated on an organization-wide basis; services that share a common dependency should use the same version unless there are good reasons for doing otherwise.
As far as trust goes, I do not believe that any of the mechanisms we use for fetching dependencies remotely offers a verifiable chain of trust (other than those we source from the Debian archive). At best, we can validate that a repository's web server certificate is signed by an authority, but that says nothing about the code that resides there. We should consider running our own internal repositories, and importing dependencies into them after selection and review.
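As a sketch of what pointing clients at such a repository could look like for Python (the mirror host here is hypothetical):

```ini
# pip.conf -- resolve packages only from a curated internal mirror
# (pypi.internal.example.org is a hypothetical host)
[global]
index-url = https://pypi.internal.example.org/simple
```

Equivalent knobs exist for the other ecosystems, e.g. npm's registry setting and Composer's repositories configuration.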
I do fully agree with you, but I think we cannot realistically fix the whole industry ourselves. We can make strides in the right direction (and moving to build on k8s and using the tooling for scanning for vulnerabilities should help), keeping in mind the rest of the industry is mindlessly running in the opposite direction.
Making this requirement stricter would require infrastructure (for npm, PyPI, Composer, etc.) that we don't have the resources to build or maintain. This milder version of the requirement should at least protect against software that wants to download dependencies at runtime.
Perhaps going to the extreme I suggested, immediately and with an expectation of total compliance, is asking too much, but surely it's a target worth shooting for.
The status quo as I've observed it:
- There is no barrier to entry to adding a new dependency to a project (read: anything goes)
- We specify a range of acceptable versions (e.g. ^3.5.2)
- We retrieve them from the remote repository with each new deploy (including transitive dependencies)
We can't even guarantee that builds are reproducible.
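For contrast, a fully pinned Python requirements file with hash verification looks roughly like this (package versions are only illustrative, and the hash values are placeholders):

```
# requirements.txt -- exact versions plus expected artifact hashes
# (digests elided here; a real file carries the full sha256 values)
requests==2.31.0 \
    --hash=sha256:<digest-of-the-published-wheel>
urllib3==2.0.7 \
    --hash=sha256:<digest-of-the-published-wheel>
```

Installing with pip install --require-hashes -r requirements.txt then refuses anything unpinned or unverified, including transitive dependencies, which is what makes the build reproducible.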
Would local repositories really be that much work? Where do you see the time being spent?
Making things a bit stricter is a good idea, and I agree we should work in that direction. I was trying to steer away from specifics in the RfC, though; how would you suggest rewording the principle in the RfC?
For security-sensitive services this is a reasonable expectation IMO. (Currently the only security-sensitive "service" is MediaWiki itself, which does have its own repository of pinned dependencies, and changes to it are code-reviewed.) For the average service, it is probably unrealistic and the effort is disproportionate to the benefits.
Something in the guidelines about trying to avoid unnecessary dependencies would be nice (mainly for npm, where there is a culture of including all kinds of one-liner utility functions in some kind of misguided DRY effort).
I really like the list of requirements for Wikimedia production, but I was wondering if it would also make sense to standardize on WMF-preferred mechanisms for monitoring, service discovery, RPC protocol, etc., at least when the service in question is also WMF-developed -- or is that outside the scope of this proposal?
I think you nailed an important point - this document loses a lot of its standardizing power if we don't get into (some) implementation details that impose better standardization across the board.
While the principles defined here should be (almost) immutable over time, the implementation guide stemming from them might evolve at a much faster pace.
For example: we want an application to be observable and its metrics exposed (principle), we want them exposed under the /metrics endpoint and to contain binned percentiles of latency for every endpoint we have and counters for requests and errors (implementation guideline).
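A minimal sketch of that implementation guideline, assuming Python and the prometheus_client library (metric names, labels, and buckets here are illustrative, not an agreed convention):

```python
# Illustrative sketch only: metric names and buckets are not an agreed
# WMF convention, just one way to satisfy the guideline above.
from prometheus_client import Counter, Histogram, start_http_server

# Latency histogram: bucketed observations per endpoint, from which
# percentiles can be derived at query time.
REQUEST_LATENCY = Histogram(
    'http_request_duration_seconds',
    'Request latency per endpoint',
    ['endpoint'],
    buckets=[0.01, 0.05, 0.1, 0.5, 1, 5],
)
REQUESTS = Counter(
    'http_requests_total', 'Requests served', ['endpoint', 'status'])
ERRORS = Counter(
    'http_request_errors_total', 'Requests that failed', ['endpoint'])

def handle(endpoint):
    # Time the request and count outcomes for this endpoint.
    with REQUEST_LATENCY.labels(endpoint=endpoint).time():
        try:
            ...  # actual request handling goes here
            REQUESTS.labels(endpoint=endpoint, status='200').inc()
        except Exception:
            ERRORS.labels(endpoint=endpoint).inc()
            raise

start_http_server(9100)  # serves the /metrics endpoint
```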
I would like to make this distinction more explicit in the document so that once this is approved we can link it to the current version of the implementation guidelines.
That sounds great to me.
I agree. The text of this RfC should be immutable over time. At the same time, it should point to a separate document that gets into the nitty-gritty implementation details that are required at the time.
I'd like to echo Joe's sentiments about removing this line, and the related line in the DO column. It seems rather misleading to me, and it doesn't communicate what we actually want, which is simply that adding external services shouldn't impact reliability.