Multi-Content Revisions/Content Meta-Data#Scalability This is wrong, and it doesn't take into account that you are essentially committing, for the future, things such as image_revisions, (page) revisions, category_revisions, and all other types of content revisions to the same couple of tables, producing hugely tall tables, when we already have problems with the current sizes and no one has produced a partitioning solution that works in all cases. There is currently partitioning in place for the logging and revision tables, but it is completely undocumented and works only in a few cases (because it requires index changes). I do not see you solving that issue.
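To make the "hugely tall tables" concern concrete, here is a minimal back-of-the-envelope sketch. All row counts below are hypothetical placeholders, not real production figures; the point is only that a shared table is as tall as the sum of every content type, while separate tables grow independently and the largest one alone bounds maintenance cost (ALTER TABLE time, backup windows, and so on):

```python
# Hypothetical per-type row counts (illustrative only, NOT real wiki figures).
per_type_rows = {
    "page_revisions": 800_000_000,
    "image_revisions": 50_000_000,
    "category_revisions": 20_000_000,
}

# With separate tables, each table grows on its own; maintenance cost is
# driven by the tallest single table.
tallest_separate = max(per_type_rows.values())

# With one shared revision/meta-data table, every row of every content type
# lands in the same place, so its height is the sum of all types combined,
# and it only grows as more extensions store content there.
shared_height = sum(per_type_rows.values())

print(tallest_separate)  # 800000000
print(shared_height)     # 870000000
```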
For those who support this "clean" proposal: know that it will be slower than more conservative approaches, which could keep the same idea without slowing down queries or making maintenance much slower. Slow maintenance == slow deployments == slow MediaWiki growth.
You should keep meta-information for revisions of different kinds in separate tables; otherwise this will not work, not only for performance reasons but for maintenance reasons too (think of the multiple extensions that will add information to the same table and that, in your own words, "we will never clean up"). It is the same way some 20 other extensions created content and were later discontinued, but at least they stored their data in their own separate sets of tables.
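A minimal sketch of the separation I am arguing for, assuming a simple routing layer (the table names and the registry function here are hypothetical, not actual MediaWiki schema): each content type writes its revision meta-data to its own table, so a discontinued extension's data can be dropped without touching anyone else's storage.

```python
# Hypothetical per-type revision meta-data tables (names are illustrative).
REVISION_TABLES = {
    "page": "page_revision_meta",
    "image": "image_revision_meta",
    "category": "category_revision_meta",
}

def table_for(content_type: str) -> str:
    """Route a revision write to its type-specific table.

    An extension registers its own table; when the extension is retired,
    its table can be archived or dropped in isolation, instead of leaving
    dead rows forever inside one shared monolithic table.
    """
    try:
        return REVISION_TABLES[content_type]
    except KeyError:
        raise ValueError(f"unregistered content type: {content_type}")

print(table_for("image"))  # image_revision_meta
```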
This may all seem clean and nice on paper to the casual observer, but I do not see you giving actual numbers based on the performance issues we are currently suffering. And you have shown me many times that you clearly lack the DB knowledge to take care of those ("we could just use mysql partitioning, right?"). Which is not a big deal at all, except for the fact that you actively ignore the warnings from those who suffer these problems every day, and end up harassing them by email just to get your point across.
I tried to work with you, but as you do not want me to, I will have to present my own alternative multi-content revision proposal, one that takes realistic maintenance and performance issues into account, and that includes a migration plan requiring minimal changes to the database, so it can be deployed faster, is more backwards compatible, and does not produce 10000-million-row tables.