Unicode normalization considerations
This page does not reflect the most recent information.
Overview
Since version 1.4, MediaWiki applies Unicode Normalization Form C (NFC) to text input. There are good reasons to normalize:
- It avoids inconsistencies that arise when page titles contain the same characters composed at different levels of decomposition.
- There was a perennial problem with media files uploaded from Safari, whose page titles arrived in decomposed form, while most other tools supply composed forms.
- Searches work as expected regardless of which composition form the text input uses.
Normalization Form C was chosen because:
- The vast majority of input data is already in form C, with precomposed characters.
- Form C is supposed to be relatively lossless, with the only changes being invisible transformations between base character + combining character sequences and precomposed characters. In theory, text should never change appearance because it's been normalized to form C.
- And further, the W3C recommends it.
MediaWiki doesn't apply any normalization to its output; for example cafe<nowiki/>́ becomes "café" (output as U+0065 followed by U+0301, with no precomposed U+00E9 appearing).
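To make the composed/decomposed distinction concrete, here is a small PHP sketch (using the intl extension's Normalizer class, not MediaWiki code) showing what NFC would do to that decomposed sequence:

<?php
// Decomposed input: 'e' followed by U+0301 COMBINING ACUTE ACCENT.
$decomposed = "cafe\u{0301}";
// NFC recombines the pair into the precomposed U+00E9.
$composed = Normalizer::normalize($decomposed, Normalizer::FORM_C);
echo bin2hex($decomposed), "\n";  // 63616665cc81  (ends in U+0065 U+0301)
echo bin2hex($composed), "\n";    // 636166c3a9    (ends in U+00E9)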
When MediaWiki shows an internal link, the page title is also normalized to form C, even if it is written with HTML entities, character references, or most other workarounds that would otherwise escape that transformation in the wikitext source.
However, no NFC transformation happens (as of MediaWiki 1.35.0) on characters embedded in the page title via percent encoding, such as %E1%BD%B5.
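A short PHP sketch of that gap (again using the intl Normalizer, and assuming %E1%BD%B5 is U+1F75 GREEK SMALL LETTER ETA WITH OXIA, which NFC maps to U+03AE):

<?php
// The percent-decoded bytes are valid UTF-8 but not in form C.
$raw = rawurldecode('%E1%BD%B5');
var_dump(Normalizer::isNormalized($raw, Normalizer::FORM_C));  // bool(false)
// Normalizing would change the character, which is exactly the
// transformation the percent-encoded title currently escapes.
$nfc = Normalizer::normalize($raw, Normalizer::FORM_C);
echo bin2hex($raw), ' -> ', bin2hex($nfc), "\n";  // e1bdb5 -> ceae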
Problems
However, several problems have surfaced over time:
- Some Arabic, Persian, and Hebrew combining vowel markers sort incorrectly.
- Some of these are just buggy fonts or renderers and only affect some platforms.
- A few cases, however, can produce incorrect text, because the defined classifications don't include enough distinctions to produce semantically correct ordering. This affects primarily older texts such as Biblical Hebrew.
- A surprising composition exclusion in Bangla.
- The result doesn't render correctly with some tools, probably again a platform-specific bug.
- Some third-party search tools apparently don't know how to normalize and fail to locate texts so normalized.
The rendering and third-party search problems are annoying, though if we stay on our high horse we can try to ignore them and let the other parties fix their broken software over time.
The canonical ordering problems are a harder issue; you simply can't get these right by following the current specs. Unicode won't change the ordering definitions because it would break their compatibility rules, so unless they introduce *new* characters with the correct values... Well, it's not clear this is going to happen.
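To illustrate the ordering problem, here is a hedged PHP sketch, assuming the standard combining class assignments (hiriq U+05B4 has class 14, patah U+05B7 has class 17) and a deliberately artificial letter-plus-two-marks sequence:

<?php
// Author's order: bet, then patah (U+05B7), then hiriq (U+05B4).
$input = "\u{05D1}\u{05B7}\u{05B4}";
$nfc   = Normalizer::normalize($input, Normalizer::FORM_C);
// Canonical ordering sorts the marks by combining class (14 before 17),
// so the hiriq jumps ahead of the patah and the author's order is lost.
echo bin2hex($input), "\n";  // d791d6b7d6b4
echo bin2hex($nfc), "\n";    // d791d6b4d6b7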
What can we do about it?
We can either ignore it and hope it goes away (easy, but entails dealing with ongoing complaints from particular linguistic groups), or we can give up on comprehensive normalization and change how we use it to maximize the benefits while minimizing the problems.
If we consider normalization form C (NFC) to be destructive (though not as much as its evil little sister NFKC), one possible plan might look like this:
- Remove the normalization check on all web input; replace it with a more limited check for UTF-8 validity but allow funny composition forms through, as is.
- Apply NFC directly in the places where it's most needed:
- Page title normalization in Title::secureAndSplit()
- Search engine index generation
- Search engine queries
This is minimally invasive, allowing page text to contain arbitrary composition forms while ensuring that linking and internal search continue to work. It requires no database format changes, and could be switched on without service disruption.
However, it does leave visible page titles in the normalized, potentially ugly or incorrect form.
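A rough PHP sketch of where the checks would live under this plan (the helper names are hypothetical, not MediaWiki's own):

<?php
// Web input: reject invalid UTF-8 only; leave composition forms untouched.
function isAcceptableWebInput(string $text): bool {
    return mb_check_encoding($text, 'UTF-8');
}

// Page titles, search index text, and search queries: apply NFC so that
// links and internal search keep matching regardless of input form.
function normalizeForTitleOrSearch(string $text): string {
    return Normalizer::normalize($text, Normalizer::FORM_C);
}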
In the longer term
A further possibility would be to allow page titles to be displayed in non-normalized forms. This might be done in concert with allowing arbitrary case forms ('iMonkey' instead of 'IMonkey').
In this case, the page table might be changed to include a display title form:
page_title:         'IMonkey'
page_display_title: 'iMonkey'
or perhaps even scarier case-folded stuff:
page_title:         'imonkey'
page_display_title: 'iMonkey'
The canonical and display titles would always be transformable to one another to maintain purity of wiki essence; you should be able to copy the title with your mouse and paste it into a [[link]] and expect it to work.
These kinds of changes could be more disruptive, requiring changes to the database structure and possibly massive swapping of data around in the tables from one form to another, so we might avoid it unless there are big benefits to be gained.
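A hypothetical sketch of that round-trip property, assuming the canonical form were simply NFC plus the usual first-letter capitalization (the function name is invented for illustration, not an existing MediaWiki API):

<?php
// Map a display title back to the canonical page_title form.
function canonicalTitle(string $displayTitle): string {
    $t = Normalizer::normalize($displayTitle, Normalizer::FORM_C);
    $first = mb_substr($t, 0, 1, 'UTF-8');
    return mb_strtoupper($first, 'UTF-8') . mb_substr($t, 1, null, 'UTF-8');
}

echo canonicalTitle('iMonkey'), "\n";  // IMonkey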
Other normalization forms
NFC was originally chosen because it's supposed to be semantically lossless, but experience has shown that that's not quite as true as we'd hoped.
We may then consider NFKC, the compatibility composition form, for at least some purposes. It's more explicitly lossy; the compatibility forms are recommended for performing searches since they fold additional characters, for instance mapping "full-width" Latin letters onto their plain Latin equivalents.
It would likely be appropriate to use NFKC for building the search index and to run on search input to get some additional matches on funny stuff. I'm not sure if it's safe enough for page titles, though; perhaps with a display title, but probably not without.
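For example, NFKC folds "full-width" Latin letters down to plain ASCII, which is the kind of extra matching a search index would gain (PHP intl sketch, not MediaWiki code):

<?php
// Full-width Latin letters fold to their ASCII forms under NFKC.
$fullwidth = "ＭｅｄｉａＷｉｋｉ";
echo Normalizer::normalize($fullwidth, Normalizer::FORM_KC), "\n";  // MediaWiki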
Normalization and unicodification can both be done by bots. While no bot has yet been known to "normalize", the function is possible. The "Curpsbot-unicodify" bot has unicodified various articles on Wikipedia, and this should not be undone.
Related pages
- phab:T4399 (with response by Ken Whistler, Unicode 5.0 editor)
- http://www.gossamer-threads.com/lists/wiki/wikitech/184440 (mailing list thread "Unicode equivalence")