
Topic on Project:Support desk

Error loading articles after upgrading to MW 1.39

SteelRodent (talkcontribs)

I've got a wiki on my workstation (localhost). To get a different application working I had to upgrade PHP to 8.2 (which in turn meant switching to a different Apache build), which in turn meant upgrading MW from 1.28 to 1.39. I used the old LocalSettings and edited it according to what the errors complained about.


The upgrade produced a plethora of errors while running the update scripts, mainly missing fields and tables, which I added manually after looking up what they're supposed to look like. After a lot of fiddling I finally got the wiki to load, but it's not happy. I have no idea if I'm still missing any tables and/or fields, since it doesn't tell me. I enabled all the PHP extensions it demanded and was able to fix the user table so I can log in, but beyond that I can't figure out why it doesn't work.


None of the articles will load, and I simply get:


Error: There is currently no text in this page. You can search for this page title in other pages, search the related logs, or create this page.


This goes for ALL articles and category pages, with no exception, including image file pages. Through Special:AllPages it will list all articles in the wiki, but it gives the same error no matter which one I try to read. All categories are also still listed, and the category pages show subcategory and article listing, but again an error for the body. It also fails to show what category any page is in.


When I attempt to edit any page, it says:

The revision #0 of the page named "Main Page" does not exist.

This is usually caused by following an outdated history link to a page that has been deleted. Details can be found in the deletion log.


It also fails to load the "toolbar" on the edit page, but that may be down to an extra extension whose name I can't recall.


Another thing is that 1.39 doesn't seem to work correctly on localhost, and I can't tell if this is part of why it doesn't load correctly. My wiki is called "P32" and it does load, but when it redirects, like after logging in, it points to www.P32.com, which doesn't exist, and so I get more errors. I can't figure out what causes that or how to stop it; I never had any issues with this in previous versions.


Tech part in short:

MW 1.39.1

PHP 8.2.1

Apache 2.4.55 (Win64)

MySQL 8.2.1 Community Server

all running locally on Windows 10


There's no internet access or domain for this machine (meaning the httpd is not reachable from outside my LAN)

Bawolff (talkcontribs)

For the first part: what you describe sounds like referential integrity issues, particularly with the actor migration.

A lot of people have luck by first upgrading to 1.32, then running cleanupUsersWithNoId.php and migrateActors.php, and then updating the rest of the way.
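Roughly, that sequence (run from the 1.32 wiki root, against your existing database) looks like this; the exact options vary by version, so check each script's --help first, and the prefix value below is only a placeholder:

php maintenance/update.php
php maintenance/cleanupUsersWithNoId.php --prefix=imported
php maintenance/migrateActors.php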

> Another thing is that 1.39 doesn't seem to function correctly on localhost, and I can't tell if this is what causes it to not load correctly. That is, my wiki is called "P32" and it does load correctly, but when it does redirects, like after logging in, it will point to www.P32.com which doesn't exist, and thus I get more errors, and I can't figure out what causes that or how to stop it. Never had any issues with that in previous versions.

What is $wgServer set to in LocalSettings.php ?

SteelRodent (talkcontribs)

I guess I need to download 1.32 first and see if it'll fix things.

EDIT: This has turned out to be a bigger challenge than I thought. The 1.32 scripts are spitting out fatal PHP errors despite downgrading PHP to 7.4, so I've gotten no further.


As for the settings:


$wgServer           = "/localhost/P32/";

$wgServerName       = "P32";

$wgSitename         = "P32";


It's not clear from the documentation whether it needs all three, and it's obviously not designed to run without a domain, so this is what I came up with after some trial and error.

Bawolff (talkcontribs)

$wgServer needs to have a protocol and host only, so it can be 'http://localhost', '//localhost' or 'https://localhost'; it cannot have a path. The path part of the URL comes from $wgScriptPath and (optionally) $wgArticlePath.

$wgServerName should not be set (it is detected automatically), and that is also the wrong value for it.
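For a wiki reachable at http://localhost/P32/ that would mean something like this in LocalSettings.php (assuming the wiki files live in a P32 folder under the document root; adjust the script path to your layout):

$wgServer     = "http://localhost";
$wgScriptPath = "/P32";
$wgSitename   = "P32";
// and simply remove the $wgServerName line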

Ciencia Al Poder (talkcontribs)

Instead of 1.32, use 1.35, which may have some of the migration bugs fixed. However, if you've already upgraded your database, you may have lost data; see phab:T326071. There's no way to downgrade the database, so preferably you should restore from a backup.

SteelRodent (talkcontribs)

I got the 1.32 update scripts to work by disabling some error checks PHP disagreed with in versionchecker.php (I assume because I couldn't find the right PHP version for that MW release). Then I did the whole thing again with 1.35, and then again with 1.39, using the --force option on update.php in the hope it'd rebuild all the references.


Unfortunately it hasn't solved the problem. I have checked that everything is still there in the page and text tables, so the primary content isn't lost per se, and all the category and file tables are intact (or at least have their content) as well. I assume that means it's a matter of lost/missing references somewhere; I just don't know enough about how MW uses its enormous number of tables to tell what's missing or wrong.

Rebastion2 (talkcontribs)

This is similar to a few other support requests on here, and it increasingly looks like an enormous problem for many legacy wikis in the way MediaWiki handled these upgrades. It's just not understandable how the upgrade scripts could fail at certain version bumps, and once you're past some version and unable to restore to an earlier point (which is effectively impossible for active wikis), these reference tables are somehow "lost" forever. What this needs is a fix that would restore certain references, maybe by re-inserting mock revision 0s so these pages are accessible and editable again...

SteelRodent (talkcontribs)

Been reading through the manual about the structure of the primary tables and believe I've found why my wiki is broken:

page.page_latest has matching entries in revision.rev_id, and revision.rev_page correctly points back to page.page_id, but...

revision.rev_text_id is not present, and was likely removed by the update scripts since it was discontinued with 1.32 as part of the MCR project. The manual states it was replaced by content.content_address, with the slots table linking each revision to its content row, and that is where it breaks.

The content and slots tables only have 71 rows (which appear to match each other), while there are 7689 rows in the page table and over 25000 rows in the text table. Meanwhile the revision table has roughly the same number of rows as the text table. This means the references held in revision.rev_text_id were not correctly migrated into the slots and content tables before the field was removed.

Another problem is that slots.slot_revision_id is supposed to match revision.rev_id, but for every single one of the 71 rows in the slots table, slot_revision_id has no matching entry in the revision table.

As there is no back reference from the text table to any of the other tables, and the rev_text_id field is missing, there does not appear to be any way to rebuild the wiki database from this state.
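For anyone who wants to check their own database for the same breakage, a quick sketch along these lines would count the dangling references (the credentials and database name are placeholders, and it assumes the default empty table prefix):

<?php
// Hedged sketch: count dangling references between revision and slots.
$db = new PDO('mysql:host=localhost;dbname=wiki;charset=utf8mb4', 'wikiuser', 'secret');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Revisions with no slots row - their content is unreachable through the new MCR tables.
$orphanRevs = $db->query(
    'SELECT COUNT(*) FROM revision r
     LEFT JOIN slots s ON s.slot_revision_id = r.rev_id
     WHERE s.slot_revision_id IS NULL'
)->fetchColumn();

// Slots rows pointing at revisions that do not exist.
$danglingSlots = $db->query(
    'SELECT COUNT(*) FROM slots s
     LEFT JOIN revision r ON r.rev_id = s.slot_revision_id
     WHERE r.rev_id IS NULL'
)->fetchColumn();

echo "Revisions without a slots row: $orphanRevs\n";
echo "Slots rows without a revision: $danglingSlots\n";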


Unfortunately I messed up and didn't back up the database before I started upgrading, so I don't have a recent dump with the revision table intact. Thus the only way for me to rebuild is to make a new DB for the wiki, pull the blobs from the old one, and paste them into the wiki. But without a proper reference from the revision table there's no way to tell which entry in the text table is the latest revision of a page.

I will say that if the update script is going to completely delete parts of the database, it absolutely must verify that all the data in those fields has been copied correctly before deleting them. The fact that the row counts of the new tables don't match the ones they're meant to replace is pretty damn bad when they're the only reference to the actual content. I understand that we're trying to avoid duplicating data throughout the database, but if the text table had a back reference to the revision table we'd be able to rebuild most of the database from that one table.

Rebastion2 (talkcontribs)

Yeah, this all appears to be a major clusterf***. Luckily, in my case I so far seem to have (anecdotally) incidents in the lower double digits of pages/files that are actually causing trouble, so if I knew how to approach it I could even fix some things manually, but you must be going insane with the entire database affected like this... In my case I noticed way too late that there was a problem in the first place, precisely because just a handful of content pages are affected, so it took me a while to stumble across these and notice there was corruption in the database dating back to who knows which update over the last 10 years...

Tgr (WMF) (talkcontribs)

If every single page is broken, that's probably T326071. Sorry :( You will have to restore the old wiki from a dump, and then either wait for 1.39.2 to be released, or upgrade to 1.35 and from there to 1.39.


If only a few pages are broken, that's probably a different issue.

SteelRodent (talkcontribs)

I didn't check the actor field, but there is a part of the MW manual (I forget exactly where, but on one of the pages about the content, slots, or revision tables) that states something like "if this reference is invalid you'll see 'Error...'" and lists the exact error I got.

Either way, that database is utterly hosed. As I said before, I screwed up and don't have a recent backup because I forgot to make one before running the update (note to self: figure out how to automate the dumps), so the only backup I've got is 3 years old and missing a good chunk of the content, because much of what is in the wiki was imported from other sources during the covid lockdown. This is entirely on me, but luckily this is not an online wiki - I mostly use it for documentation, prototype articles, and keeping track of various stuff.

So, anyway, I stuck with 1.39 and let it build a new database. And now I'm stuck with the arduous task of manually recreating all the content. To do this I've thrown together a PHP script that pulls the blobs from the old text table so I can paste them into the new database. It's a slow process, but I can't see any way of automating it in a meaningful way.
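For anyone in the same boat, the core of such a script is nothing fancy; something roughly like this (credentials and database name are placeholders, and it assumes the blobs/ output directory already exists):

<?php
// Hedged sketch: dump each blob from the old text table into its own file.
$db = new PDO('mysql:host=localhost;dbname=old_wiki;charset=utf8mb4', 'wikiuser', 'secret');
foreach ($db->query('SELECT old_id, old_text, old_flags FROM text') as $row) {
    $blob  = $row['old_text'];
    $flags = (string)$row['old_flags'];
    // Rows flagged 'gzip' were compressed with gzdeflate; rows flagged 'object' or 'external' need different handling.
    if (strpos($flags, 'gzip') !== false) {
        $blob = gzinflate($blob);
    }
    file_put_contents("blobs/{$row['old_id']}.wiki", $blob);
}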

Rebastion2 (talkcontribs)

I would still be interested in knowing how this could be "fixed" somewhat if it really just concerns a very low number of pages. In my case I think (I can't say for sure) only about 20 or so pages may be affected (pages and files, mostly files). I would love a step-by-step guide as to 1) how to reliably identify all the pages/tables impacted, 2) how to locate what is missing, and 3) how to fill the actor/rev IDs with something bogus in a way that, even though it might be hackish, would allow me to access this content again and possibly delete it from the wiki interface or overwrite it with a newer revision, etc. One of the reasons I noticed these things in the first place was that upgrading to 1.39 made half the website unusable. But just like SteelRodent, I don't have any backups that are pre-1.35 and have current content at the same time. I'm stuck in a 1.38 no-man's-land with a partially corrupt database...

Tgr (WMF) (talkcontribs)

It's hard to provide a guide for DB corruption because the DB can get corrupted in an infinite number of ways. You might be able to use findMissingActors.php --field rev_actor --overwrite-with 'Unknown user' to reassign missing users. I don't know if there is anything similar for revisions.

SteelRodent (talkcontribs)

When it comes to automatically restoring the revisions, I really don't see how it can be done in a meaningful way once you lose all references to the text table, since that table doesn't refer to anything else. Once the reference to the text table is lost, there's nothing to identify which row in that table belongs to which article, and thus the entire table is effectively orphaned. The files are indexed in a different table, and at least in my case that seemed to be working fine, but then revisions for files are handled in a completely different way.

The revision table is apparently the only one that refers to the title of pages, so it would require a very intelligent parser to read through the text table and figure out what goes where, and I've never seen a wiki where every article contains its title in the body in a consistent format. The alternative I can see is to create new dummy entries for every entry in the text table (and let's not forget that not everything in a wiki has a body), but then, depending on the wiki in question, you'll end up with potentially thousands of redundant articles that you'll have to identify, move, delete, and so on, and then you're effectively back to doing the whole thing manually anyway, which makes it much faster to just restore it manually to begin with.

Ciencia Al Poder (talkcontribs)

Since revision ids and text ids are incremental, you may be able to guess the text id for a revision id by sorting both and assigning them in order. There are several causes for text ids not matching; these are the ones that come to mind:

  • Some extensions, like AbuseFilter, insert text records for their filters and other data, which may add more text ids without creating revision ids.
  • When reverting edits, I think MediaWiki reuses the text ID of the original revision being reverted to for the new revision.

You may do an initial assignment, and then see how it generally looks. If you see revisions with text that doesn't belong to the page, try again from the revision where it starts to differ, by adding some offset, and so on.

For the AbuseFilter records, you may be able to get those text ids from the abuse filter tables and discard them directly. You'll have to dig into which tables they're stored in yourself, though.
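As a very rough sketch of that pairing (credentials are placeholders, and $offset is what you'd tweak by hand once the texts visibly stop lining up); it only prints the guesses so you can review them before writing anything back:

<?php
// Hedged sketch of the "sort both and assign in order" idea.
$db = new PDO('mysql:host=localhost;dbname=wiki;charset=utf8mb4', 'wikiuser', 'secret');
$revIds  = $db->query('SELECT rev_id FROM revision ORDER BY rev_id')->fetchAll(PDO::FETCH_COLUMN);
$textIds = $db->query('SELECT old_id FROM text ORDER BY old_id')->fetchAll(PDO::FETCH_COLUMN);

$offset = 0; // bump this when the guessed text no longer matches the page
foreach ($revIds as $i => $revId) {
    if (!isset($textIds[$i + $offset])) {
        break;
    }
    echo "rev_id $revId => old_id {$textIds[$i + $offset]}\n";
}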

Rebastion2 (talkcontribs)

SteelRodent, good point. However, if I could move, delete, and so on, that would already be huge progress. At the moment I can't, because that throws errors. I'd rather have some bogus content in the wiki that I can then move, delete, edit, etc., than have corrupted content that can't be used or fixed on an editorial level...

Tgr (WMF) (talkcontribs)

You can probably reconnect text entries with revisions by recomputing the sha1 hash, as long as the DB got corrupted in such a way that the sha1 is still present. Filed T328169 about that.
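Very roughly, something like this (an untested sketch; it assumes rev_sha1 survived, that the text rows are plain uncompressed wikitext, and that the gmp extension is available for the base-36 conversion MediaWiki uses; credentials are placeholders):

<?php
// Hedged sketch: match text rows to revisions by recomputing rev_sha1
// (MediaWiki stores it as the SHA-1 of the content in base 36, zero-padded to 31 chars).
$db = new PDO('mysql:host=localhost;dbname=wiki;charset=utf8mb4', 'wikiuser', 'secret');

$revBySha = [];
foreach ($db->query('SELECT rev_id, rev_sha1 FROM revision') as $row) {
    $revBySha[$row['rev_sha1']][] = $row['rev_id'];
}

foreach ($db->query('SELECT old_id, old_text FROM text') as $row) {
    $sha1Base36 = str_pad(gmp_strval(gmp_init(sha1($row['old_text']), 16), 36), 31, '0', STR_PAD_LEFT);
    if (isset($revBySha[$sha1Base36])) {
        echo "old_id {$row['old_id']} matches rev_id(s) " . implode(', ', $revBySha[$sha1Base36]) . "\n";
    }
}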

Rebastion2 (talkcontribs)
Tgr (WMF) (talkcontribs)

Not too much, unfortunately; most developers tend to focus on tasks that are relevant for Wikipedia & co, and this one isn't. Maybe if you can produce a sample of exactly how your DB is corrupted, someone can suggest a query or snippet for fixing it. (Also, do you want to recover the data, or just get rid of the affected pages? Deleting them from the page table and deleting the relevant revisions from the revision table should accomplish that.)
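For a single broken page that boils down to roughly this (a sketch only; back up first, 123 is a placeholder page_id, credentials are placeholders, and related rows in tables like slots or ip_changes may need the same treatment):

<?php
// Hedged sketch: remove one broken page and its revisions directly from the database.
$db = new PDO('mysql:host=localhost;dbname=wiki;charset=utf8mb4', 'wikiuser', 'secret');
$pageId = 123; // placeholder: the page_id of the broken page
$db->prepare('DELETE FROM revision WHERE rev_page = ?')->execute([$pageId]);
$db->prepare('DELETE FROM page WHERE page_id = ?')->execute([$pageId]);
// Afterwards, running maintenance scripts like refreshLinks.php and initSiteStats.php is a good idea.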

Rebastion2 (talkcontribs)

Thanks for the reply. The second option may also work for me, unless I discover loads more pages affected. It would be great (for clueless non-IT guys like me) to have a thorough guide as to 1) how to precisely detect all affected content in the database, 2) how and where to delete without doing additional damage, and 3) how to identify the files and thumbs in the file system when files are affected (in my case some are files).

81.2.249.137 (talkcontribs)

The same thing happened to me when I went from 1.35.14 to 1.39.8. I thought I was following exactly what the docs suggested, but I still ended up with no articles in my wiki. Damn.
