
Topic on Talk:Requests for comment/Content-Security-Policy

Definitely a good idea

The Anome (talkcontribs)

Setting Content-Security-Policy is definitely a good idea. Like Strict-Transport-Security, I think it's a basic best practice for any modern website.

I think this needs quite a lot of thinking through before it's implemented, but I like the careful, measured approach the proposer has taken in proposing an incremental rollout rather than a flag-day update.

Two questions:

  • is there any way that we can either scan for, or instrument, what's going on at the moment to help find out what might break as we tighten up security?
  • could we perhaps centralize CSS and JS content on separate domains devoted solely to serving such executable content across all Wikimedia sites, so that, for example, content now at //en.wikipedia.org/.../something.js would be served via //js-server.wikimedia.org/en.wikipedia.org/.../something.js , and likewise for corresponding css-server.wikimedia.org, font-server.wikimedia.org, etc. hostnames? (These domains could, of course, all point to the same servers/server cluster as everything else: the distinction is purely semantic, to allow for fine-grained policy filtering.)

    This would require some more (but relatively simple) infrastructural changes to create the necessary server-side configuration to serve the content from the new URLs, but would make the policy much smaller and easier to understand.
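
    To make that concrete, the whole policy might then shrink to something on the order of (hostnames as above, purely illustrative, wrapped here for readability):

        Content-Security-Policy: default-src 'self';
            script-src 'self' js-server.wikimedia.org;
            style-src 'self' css-server.wikimedia.org;
            font-src 'self' font-server.wikimedia.org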

Bawolff (talkcontribs)

For instrumenting:

We could maybe use the search engine to look for strings that look suspicious (perhaps insource:/on\w+=/ intitle:js). insource:/javascript:/ intitle:js yields quite a lot actually. I think part of the problem there is that there is currently no way to pass a function to mw.util.addPortletLink.
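
For example, a user script doing something like the first pattern below (portlet, id and function names purely illustrative) would break once javascript: URLs are disallowed; the second pattern keeps working under CSP:

    // Common pattern today: a javascript: URL as the link target.
    // A strict script-src would block this.
    mw.util.addPortletLink( 'p-tb', 'javascript:myGadgetAction()', 'My gadget' );

    // CSP-friendly alternative: add the link with an inert href,
    // then attach the handler to the returned list-item element.
    var portlet = mw.util.addPortletLink( 'p-tb', '#', 'My gadget', 't-my-gadget' );
    $( portlet ).on( 'click', function ( e ) {
        e.preventDefault();
        myGadgetAction();
    } );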

Ultimately, we won't have a true picture until we actually start adding the Content-Security-Policy-Report-Only header. At the initial stage, we could also send the header to only something like 0.1% of users, so we don't end up drowning in reports.
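
Concretely, the report-only variant would look something like this (source list and report endpoint illustrative, not the proposed policy):

    Content-Security-Policy-Report-Only: default-src 'self' *.wikimedia.org; report-uri /csp-report

Since it's report-only, the browser doesn't block anything; it just POSTs a JSON violation report to the report-uri, so we can see what would break before turning enforcement on.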

For separating on to separate domains:

The main reason the policy is long is compatibility with user scripts loading things from other sites. If we're willing to break that (when I first wrote the RFC I thought it would be bad to break that, but now I'm actually leaning more towards accepting that breakage, especially if we still allow meta), we could simply use 'self' for every directive, and I think that would be very easy to understand.
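
i.e. the whole thing could boil down to roughly this (sketch only, keeping meta as the one extra script source):

    Content-Security-Policy: default-src 'self'; script-src 'self' meta.wikimedia.org; style-src 'self'; img-src 'self'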

If we do decide to limit default-src (or more specifically connect-src) we would need the full list of Wikimedia domains in order to do CORS.
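
For a sense of scale, just that one directive would already look something like this (list not exhaustive):

    connect-src 'self' *.wikipedia.org *.wikimedia.org *.wiktionary.org *.wikiquote.org *.wikibooks.org *.wikisource.org *.wikinews.org *.wikiversity.org *.wikivoyage.org *.wikidata.org *.mediawiki.org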

So I don't think we'd actually get much from separating out those content types into separate domains.

The Anome (talkcontribs)

I take your point. Given the sheer amount of content we're already serving with each page, having the complete list of domains in the header is probably not that much extra bloat, bearing in mind the substantial improvement in security it could make possible, and also the likelihood that HTTP/2 header compression will optimize it away for all but the first page load.

However, we should also bear in mind that the default size of the dynamic header compression dictionary is only 4096 bytes: see https://tools.ietf.org/html/rfc7540#section-6.5.2 -- if we blow that buffer out, the full benefits of header compression will cease to apply. So someone will need to keep an eye on possible "header bloat" in the longer term.
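
Rough back-of-envelope (policy length purely illustrative): each entry in the HPACK dynamic table costs the header name plus the value plus 32 bytes of overhead, and the table is shared with every other header on the connection, so a single large CSP value eats a big chunk of it:

    entry size ≈ len("content-security-policy") + len(policy value) + 32
               ≈ 23 + 1500 + 32 ≈ 1555 bytes of the 4096-byte default table

A couple of headers at that size would push everything else out of the table.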
