
Help:Export


Wiki pages can be exported in a special XML format for import into another MediaWiki installation (if this function is enabled on the destination wiki, and the user is a sysop there), or for use elsewhere, for instance for analysing the content. See also Syndication feeds for exporting information other than pages, and Help:Import on importing pages.

How to export

There are at least four ways to export pages:

  • Paste the name of the article into the box on Special:Export, or use //www.mediawiki.org/wiki/Special:Export/FULLPAGENAME (a scripted sketch of this follows the list).
  • The backup script dumpBackup.php dumps all the wiki pages into an XML file. dumpBackup.php only works on MediaWiki 1.5 or newer. You need to have direct access to the server to run this script. Dumps of Wikimedia projects are regularly made available at https://dumps.wikimedia.org/.
  • There is an interface known as OAI-PMH to regularly fetch pages that have been modified since a specific time. For Wikimedia projects this interface is not publicly available; see Wikimedia update feed service. OAI-PMH contains a wrapper format around the actual exported articles.
  • Use the Pywikibot framework (Special:MyLanguage/Manual:Pywikibot); it is not explained here.
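
For the first of these methods, the export XML of a single page can also be fetched directly over HTTP. Below is a minimal sketch in Python; the host www.mediawiki.org and the page title Help:Contents are illustrative assumptions, not part of the interface description.

# Fetch the export XML of one page via the Special:Export/FULLPAGENAME URL.
# The host and page title are illustrative assumptions.
from urllib.parse import quote
from urllib.request import Request, urlopen

title = "Help:Contents"
url = "https://www.mediawiki.org/wiki/Special:Export/" + quote(title, safe=":")
req = Request(url, headers={"User-Agent": "export-example/0.1"})  # Wikimedia servers expect a User-Agent
with urlopen(req) as response:
    xml_text = response.read().decode("utf-8")

print(xml_text[:200])  # begins with the <mediawiki ...> root element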

By default only the current version of a page is included. Optionally you can get all versions with date, time, user name and edit summary. Optionally the latest version of all templates called directly or indirectly are also exported. If you import a dump that doesn't include templates, then the resulting pages will probably render incorrectly if the templates they need do not exist on the destination wiki.

You can also copy the SQL database. This is how dumps of the database were made available before MediaWiki 1.5, and it won't be explained here further.

Using 'Special:Export'

For example, to export all pages of a namespace:

Get the names of pages to export


  1. Go to Special:Allpages and choose the desired namespace.
  2. Copy the list of page names to a text editor
  3. Put all page names on separate lines
    1. You can achieve that relatively quickly if you copy the part of the rendered page with the desired names and paste it into, say, MS Word (use "Paste special" as unformatted text), then open the replace function (Ctrl+H), enter ^t in "Find what", enter ^p in "Replace with", and click the Replace All button. (This relies on tabs between the page names; these typically result from the page names being inside td tags in the HTML source.)
    2. The text editor Vim also allows for a quick way to fix line breaks: after pasting the whole list, run the command :1,$s/\t/\r/g to replace all tabs by carriage returns and then :1,$s/^\n//g to remove every line containing only a newline character.
    3. Another approach is to copy the formatted text into any editor exposing the HTML. Remove all <tr> and </tr> tags, replace all <td> tags with <tr><td> and all </td> tags with </td></tr>; the HTML will then be parsed into the needed format.
    4. If you have shell and MySQL access to your server, you can use this script:

mysql -umike -pmikespassword -hlocalhost wikidbname <<EOF
select page_title from wiki_page where page_namespace=0;
EOF

Note: replace mike and mikespassword with your own credentials. Also, this example shows tables with the prefix wiki_.

  4. Prefix the namespace to the page names (e.g. 'Help:Contents'), unless the selected namespace is the main namespace (a scripted sketch follows at the end of this subsection).
  5. Repeat the steps above for other namespaces (e.g. Category:, Template:, etc.)

A similar script for PostgreSQL databases looks like this:

psql -At -U wikiuser -h localhost wikidb -c "select page_title from mediawiki.page"

Note: replace wikiuser with your own username; the database will prompt you for a password. This example shows tables without the prefix wiki_ and with the namespace (schema) specified as part of the table name.
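
Prefixing the namespace to the page names can also be scripted once you have the list of titles. A minimal sketch, assuming the titles sit one per line in a file named titles.txt and that the Help namespace is wanted (both assumptions for illustration):

# Prepend a namespace prefix to every page name in a plain-text list.
# The file names and the "Help" namespace are assumptions for illustration.
namespace = "Help"

with open("titles.txt", encoding="utf-8") as f:
    titles = [line.strip() for line in f if line.strip()]   # skip empty lines

with open("titles_prefixed.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(f"{namespace}:{t}" for t in titles))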

Perform the export

  • Go to Special:Export and paste all your page names into the textbox, making sure there are no empty lines.
  • Click 'Submit query'
  • Save the resulting XML to a file using your browser's save facility. (The same request can also be scripted, as sketched below.)
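
A hedged sketch of scripting that form submission, assuming the Special:Export parameters pages (newline-separated titles), curonly and templates; the host and the example titles are assumptions only:

# Submit a list of page names to Special:Export and save the returned XML.
# The parameter names follow the export form; host and titles are assumptions.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

page_names = ["Help:Contents", "Help:Links"]          # example titles (assumed)
data = urlencode({
    "pages": "\n".join(page_names),   # one title per line, no empty lines
    "curonly": "1",                   # only the current revision of each page
    "templates": "1",                 # include templates used by the pages
}).encode("utf-8")

req = Request("https://www.mediawiki.org/wiki/Special:Export",
              data=data, headers={"User-Agent": "export-example/0.1"})
with urlopen(req) as response:
    with open("export.xml", "wb") as f:
        f.write(response.read())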

And finally...

  • Open the XML file in a text editor.

Scroll to the bottom to check for error messages.

Now you can use this XML file to perform an import.
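
The saved file can also be checked programmatically. A minimal sketch that parses the file (the name export.xml is an assumption) and counts the exported pages; a parse error here usually means the export was truncated or contains an error message:

# Sanity-check a saved export file: parse it and count the <page> elements.
# The file name "export.xml" is an assumption.
import xml.etree.ElementTree as ET

root = ET.parse("export.xml").getroot()       # raises ParseError if the XML is malformed
export_ns = root.tag.split("}")[0] + "}"      # e.g. {http://www.mediawiki.org/xml/export-0.11/}
pages = root.findall(export_ns + "page")
print(f"{len(pages)} page(s) in the export")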

Exporting the full history

Exporting the revision history may be desirable to retain authorship information and attribution. A checkbox in the Special:Export interface selects whether to export the full history (all versions of an article) or only the most recent version. A maximum of 100 revisions are returned; other revisions can be requested as detailed in Parameters to Special:Export.
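
A hedged sketch of requesting the full history of one page through the export form; the history parameter name mirrors the checkbox described above and, like the host and the title, is an assumption to be checked against Parameters to Special:Export:

# Request the full revision history of one page (subject to the revision cap).
# The "pages" and "history" parameter names, the host and the title are assumptions.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

data = urlencode({"pages": "Help:Contents", "history": "1"}).encode("utf-8")
req = Request("https://www.mediawiki.org/wiki/Special:Export",
              data=data, headers={"User-Agent": "export-example/0.1"})
with urlopen(req) as response:
    with open("export-history.xml", "wb") as f:
        f.write(response.read())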

Export format

The format of the XML file you receive is the same in all cases. It is codified in XML Schema at https://www.mediawiki.org/xml/export-0.11.xsd. This format is not intended for viewing in a web browser. Some browsers show you pretty-printed XML with + and - links to view or hide selected parts. Alternatively, the XML source can be viewed using the "view source" feature of the browser, or, after saving the XML file locally, with a program of your choice. If you read the XML source directly, it won't be difficult to find the actual wiki text. If you don't use a special XML editor, < and > appear as &lt; and &gt; to avoid a conflict with XML tags; likewise, & is coded as &amp; to avoid ambiguity.

In the current version the export format does not contain an XML replacement of wiki markup (see Wikipedia DTD for an older proposal). You only get the wiki text as you would get it when editing the article.

Example

  <mediawiki xml:lang="en">
    <page>
      <title>Page title</title>
      <restrictions>edit=sysop:move=sysop</restrictions>
      <revision>
        <timestamp>2001-01-15T13:15:00Z</timestamp>
        <contributor><username>Foobar</username></contributor>
        <comment>I have just one thing to say!</comment>
        <text>A bunch of [[Special:MyLanguage/text|text]] here.</text>
        <minor />
      </revision>
      <revision>
        <timestamp>2001-01-15T13:10:27Z</timestamp>
        <contributor><ip>10.0.0.2</ip></contributor>
        <comment>new!</comment>
        <text>An earlier [[Special:MyLanguage/revision|revision]].</text>
      </revision>
    </page>
    
    <page>
      <title>Talk:Page title</title>
      <revision>
        <timestamp>2001-01-15T14:03:00Z</timestamp>
        <contributor><ip>10.0.0.2</ip></contributor>
        <comment>hey</comment>
        <text>WHYD YOU LOCK PAGE??!!! i was editing that jerk</text>
      </revision>
    </page>
  </mediawiki>

DTD

Here is an unofficial, short Document Type Definition version of the format. If you don't know what a DTD is, just ignore it.

<!ELEMENT mediawiki (siteinfo,page*)>
<!-- version contains the version number of the format (currently 0.3) -->
<!ATTLIST mediawiki
  version  CDATA  #REQUIRED 
  xmlns CDATA #FIXED "https://www.mediawiki.org/xml/export-0.3/"
  xmlns:xsi CDATA #FIXED "http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation CDATA #FIXED
    "https://www.mediawiki.org/xml/export-0.3/ https://www.mediawiki.org/xml/export-0.3.xsd"
  xml:lang  CDATA #IMPLIED
>
<!ELEMENT siteinfo (sitename,base,generator,case,namespaces)>
<!ELEMENT sitename (#PCDATA)>      <!-- Name of the wiki -->
<!ELEMENT base (#PCDATA)>          <!-- URL of the main page -->
<!ELEMENT generator (#PCDATA)>     <!-- MediaWiki version string -->
<!ELEMENT case (#PCDATA)>          <!-- How cases in page names are handled -->
   <!-- possible values: 'first-letter' | 'case-sensitive'
        'Case-insensitive' option is reserved for future -->
<!ELEMENT namespaces (namespace+)> <!-- List of namespaces and prefixes -->
  <!ELEMENT namespace (#PCDATA)>     <!-- Contains namespace prefix -->
  <!ATTLIST namespace key CDATA #REQUIRED> <!-- Internal namespace number -->
<!ELEMENT page (title,id?,restrictions?,(revision|upload)*)>
  <!ELEMENT title (#PCDATA)>         <!-- Title with namespace prefix -->
  <!ELEMENT id (#PCDATA)> 
  <!ELEMENT restrictions (#PCDATA)>  <!-- Optional page restrictions -->
<!ELEMENT revision (id?,timestamp,contributor,minor?,comment?,text)>
  <!ELEMENT timestamp (#PCDATA)>     <!-- According to ISO8601 -->
  <!ELEMENT minor EMPTY>             <!-- Minor flag -->
  <!ELEMENT comment (#PCDATA)> 
  <!ELEMENT text (#PCDATA)>          <!-- Wikisyntax -->
  <!ATTLIST text xml:space CDATA  #FIXED "preserve">
<!ELEMENT contributor ((username,id) | ip)>
  <!ELEMENT username (#PCDATA)>
  <!ELEMENT ip (#PCDATA)>
<!ELEMENT upload (timestamp,contributor,comment?,filename,src,size)>
  <!ELEMENT filename (#PCDATA)>
  <!ELEMENT src (#PCDATA)>
  <!ELEMENT size (#PCDATA)>

Processing XML export

Many tools can process the exported XML. If you process a large number of pages (for instance a whole dump), you probably won't be able to fit the document into main memory, so you will need a parser based on SAX or other event-driven methods.
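
Python's built-in iterparse is one such event-driven option. A minimal sketch, assuming the dump is in a local file named dump.xml, that streams through the file and prints each page title without holding the whole document in memory:

# Stream a large export file with xml.etree.ElementTree.iterparse: completed
# elements are processed and discarded, so the whole dump never has to fit in
# main memory.  The file name "dump.xml" is an assumption.
import xml.etree.ElementTree as ET

for event, elem in ET.iterparse("dump.xml", events=("end",)):
    if elem.tag.endswith("}page"):            # element tags carry the export namespace
        for child in elem:
            if child.tag.endswith("}title"):
                print(child.text)             # one exported page title per line
        elem.clear()                          # free the processed subtree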

You can also use regular expressions to directly process parts of the XML code. This may be faster than other methods, but it is not recommended because it is difficult to maintain.
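
For illustration only, a fragile sketch of the regular-expression approach (it assumes a local dump.xml and that page titles never contain the literal text </title>):

# Extract page titles with a regular expression instead of an XML parser.
# This is fragile by design and only shown to illustrate the approach.
import re

with open("dump.xml", encoding="utf-8") as f:
    text = f.read()                            # the whole dump must fit in memory here

titles = re.findall(r"<title>(.*?)</title>", text)
print(titles[:10])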

Please list methods and tools for processing XML export here:


Details and practical advice

  • To determine the namespace of a page you have to match its title to the prefixes defined in /mediawiki/siteinfo/namespaces/namespace (a sketch of this lookup follows the list).
  • Possible restrictions are:
    • sysop - Protected pages
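
A minimal sketch of the namespace lookup described above, assuming a local dump.xml: read the prefixes from /mediawiki/siteinfo/namespaces/namespace and match each page title against them.

# Determine the namespace of each page by matching its title prefix against
# the namespace list in <siteinfo>.  The file name "dump.xml" is an assumption.
import xml.etree.ElementTree as ET

root = ET.parse("dump.xml").getroot()
ns = root.tag.split("}")[0] + "}"             # export namespace, e.g. {http://www.mediawiki.org/xml/export-0.11/}

# key -> prefix, e.g. 0 -> "" (main), 10 -> "Template", 14 -> "Category"
prefixes = {int(n.get("key")): (n.text or "")
            for n in root.findall(f"{ns}siteinfo/{ns}namespaces/{ns}namespace")}

def namespace_key(title):
    prefix = title.split(":", 1)[0]
    for key, name in prefixes.items():
        if name and prefix == name:
            return key
    return 0                                  # no known prefix: main namespace

for page in root.findall(f"{ns}page"):
    title = page.find(f"{ns}title").text
    print(namespace_key(title), title)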

Why export

Why not just use a dynamic database download?

Suppose you are building a piece of software that at certain points displays information that came from Wikipedia. If you want your program to display the information in a different way than can be seen in the live version, you'll probably need the wikicode that is used to enter it, instead of the finished HTML.

Also if you want to get all of the data, you'll probably want to transfer it in the most efficient way that's possible. The Wikimedia servers need to do quite a bit of work to convert the wikicode into HTML. That's time consuming both for you and for the Wikimedia servers, so simply spidering all pages is not the way to go.

To access any article in XML, one at a time, go to Special:Export/Title_of_the_article

See also