
API talk:Main page/Archive 1


Suggested API 27 April 2006


I was not able to find a list of suggested API functions for retrieving information via SOAP or whatever. If one already exists, please delete this page. If you consider meta inappropriate for this kind of page, please delete it, or move it to a subpage of my user page. If you feel offended by the pseudo-syntax or naming of these functions, alter them unless they are beyond repair. Thanks.

Data Retrieval


Articles

article_mw ( string lemma )

article_mw() returns a single article from a wiki in the original MediaWiki syntax.

article_xml ( string lemma )

article_xml() returns a single article from a wiki as XML output.

article_section_mw ( string lemma, int section )

Returns a single section of a single article from a wiki in the original MediaWiki syntax; section is the same section number as used by the edit option.
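
For reference, the current index.php and api.php already offer close equivalents of these calls; a rough, hedged mapping (URLs and the page title are purely illustrative):

 # raw wikitext of a page
 http://en.wikipedia.org/w/index.php?title=Albert_Einstein&action=raw
 # the same content through the API
 http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&titles=Albert_Einstein&format=xml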

Search
  • list of articles that contain string query
  • size of an article (chars, bytes, words)
  • list of authors of an article (ugly to compute when it comes to articles with 1000+ revisions)

API?


What about telling the unfamiliar what an API actually is? The page doesn't make much sense without knowing that beforehand… Jon Harald Søby 17:42, 30 October 2006 (UTC)

Added links. Feel free to add/change the page to make the information better. Thx. --Yurik 19:38, 31 October 2006 (UTC)Reply

User preferences options


I tried to download User.php in order to view the uioptions, but I cannot locate the file starting from http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/BotQuery/. Is it within svn.mediawiki.org ?

Secondly, I compared some of the uioptions identifiers to the WP user preferences identifiers (<label for="wpHourDiff">). They do not match (e.g., timecorrection vs wpHourDiff). It would be nice to use the same identifiers.

Regards, Sherbrooke 12:48, 8 October 2006 (UTC)Reply

User.php is located in /phase3/includes. The user option name is the same as used internally in the database, and is constant across all wikis. --Yurik 19:19, 8 October 2006 (UTC)

Category belonging to an article


Actually, query.php can't get the categories of a redirect, but there are sensible uses for categorized redirects, e.g. a book's original title is a redirect to the local title and can itself be categorized. It would be nice if api.php solved this problem. Phe 15:37, 18 October 2006 (UTC)

Afaics fixing it in query.php seems not difficult: in genPageCategoryLinks() use $this->existingPageIds instead of $this->nonRedirPageIds, but I'm unsure whether it'll break existing bots... Phe 15:42, 18 October 2006 (UTC)
Added in the new api. --Yurik 06:32, 14 May 2007 (UTC)Reply

How to retrieve the html format?


I am interested in retrieving a formatted HTML version of the content of an article, with the links parsed and pointing to Wikipedia pages, but without tabs, menu options, etc. -- the article as you can see it inside Wikipedia.--Opinador 11:26, 5 November 2006 (UTC)

The easiest way would probably be to add action=render to the regular wiki request: http://en.wikipedia.org/w/index.php?title=Main_Page&action=render. I plan to add it to the API, but it will not happen soon - currently the internal wiki code is not very well suited for this kind of request. --Yurik 16:17, 6 November 2006 (UTC)
Thank you, I think that is enough for me at the moment. But it would probably be useful to have this approach in the API, or at least a parameter to force template expansion before returning the page results.--Opinador 10:26, 7 November 2006 (UTC)
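
For later readers: the API did eventually grow actions covering both of these requests; hedged examples (assuming a reasonably current api.php):

 # parsed HTML of a page, without the surrounding skin
 http://en.wikipedia.org/w/api.php?action=parse&page=Main_Page&format=xml
 # wikitext with templates expanded, without converting to HTML
 http://en.wikipedia.org/w/api.php?action=expandtemplates&text={{Some_Template}}&format=xml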

I'd like to request that the API be extended to include the equivalent of Special:Linksearch. This would facilitate maintenance of certain types of external links prone to spam and policy problems that cannot necessarily be shot on sight using Spam blacklist. Please let me know if this is something that's already in the API that I've missed. Mike Dillon 04:16, 18 December 2006 (UTC)Reply

Done. --Yurik 02:42, 15 May 2007 (UTC)Reply

Formatting bug


When I call the API with "action=opensearch", it will only return results in the JSON format. It doesn't matter what the search term is or what I pass to the format argument. Forgive me if this isn't the proper place to report a bug like this. 68.253.13.166 20:14, 3 February 2007 (UTC)Reply

Bug 9143 is tracking this, and has been postponed. It has more details. --Yurik 02:01, 22 May 2007 (UTC)

feedwatchlist bug


Even after successfully logging in with "action=login", if we don't use cookies but rather pass the lgusername, lguserid and lgtoken returned as parameters along with "action=feedwatchlist", then we get an HTTP 500 error (tried this with http://en.wikipedia.org/w/api.php just now). The documentation says that both parameters and cookies should work. Jyotirmoyb 05:52, 20 February 2007 (UTC)

If you are trying to reproduce this bug with a browser, make sure to clear the cookies set by a successful login. Jyotirmoyb 05:55, 20 February 2007 (UTC)Reply
Is it normal behaviour that watchlist topics appear approx. 1 day late? Thanks, 84.160.6.41 21:05, 9 March 2007 (UTC)Reply
Both issues have been fixed -- the 500 error now returns a "not logged in" feed element, and the 1-day delay has been fixed. --Yurik 02:02, 22 May 2007 (UTC)

ApiQuery


There should be a possibility of registering methods in $mQueryListModules, $mQueryPropModules and $mQueryMetaModules, for example by setting variables in LocalSettings.php:

$wgApiQueryListModules["listname"] = "ApiQueryMethodName";

and then in ApiQuery.php something like:

Index: ApiQuery.php
===================================================================
--- ApiQuery.php        (wersja 1004)
+++ ApiQuery.php        (kopia robocza)
@@ -70,6 +70,13 @@
 
        public function __construct($main, $action) {
                parent :: __construct($main, $action);
+
+               global $wgApiQueryListModules;
+               if (is_array( $wgApiQueryListModules )) {
+                       foreach ( $wgApiQueryListModules as $lmethod => $amethod) {
+                               $this->mQueryListModules[$lmethod] = $amethod;
+                       }
+               }
                $this->mPropModuleNames = array_keys($this->mQueryPropModules);
                $this->mListModuleNames = array_keys($this->mQueryListModules);
                $this->mMetaModuleNames = array_keys($this->mQueryMetaModules);

83.12.240.122 15:39, 15 March 2007 (UTC) Krzysztof Krzyżaniak (eloy) <eloy@kofeina.net>Reply

Thanks, done. --Yurik 03:28, 15 May 2007 (UTC)Reply

JSON Callback


It would be really nice if the api had a JSON callback, à la Yahoo Web Services. This way, any website could make an AJAX request to Wikipedia! --Bkkbrad 17:16, 26 February 2007 (UTC)Reply

Done. --Yurik 18:45, 15 May 2007 (UTC)Reply
Fantastic -- thanks! --Bkkbrad 16:12, 8 November 2007 (UTC)Reply
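
For reference, the callback is exposed through the callback parameter combined with format=json; a hedged example (the callback name is illustrative):

 http://en.wikipedia.org/w/api.php?action=query&prop=info&titles=Main_Page&format=json&callback=handleResult

The response body is then wrapped as handleResult({...}), so it can be loaded cross-domain from a <script> tag.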

YAML Formatted Errors


I'm working on writing some methods to access the API, requesting my results in YAML format. It works fine when I send a proper request, which is most of the time, but occasionally I do something wrong and get the helpful error message. However, on having my code process the YAML result, I'm getting an error. Specifically, it reports "ArgumentError: syntax error on line 12, col 2: ` For more details see API'" Is this because the result is not properly formatted YAML, or a Ruby error? I don't know very much about YAML, learning as I go along, but I like the format so far. Is anyone else having this issue, and is it an issue at all? Thanks. Eddie Roger 04:21, 21 February 2007 (UTC)Reply

Is it still broken? --Yurik 02:04, 22 May 2007 (UTC)Reply

Broken POST?


I just downloaded 1.9.3 to a new dev box of mine, and my scripts seem to have all broken due to a problem with POST. I've switched to GET to test this, and it works fine, but when POSTing, I get the generic help output and not the YAML results I expect. Did someone change something with the latest release? Eddie Roger 03:04, 21 March 2007 (UTC)Reply

Bump? Anyone? I've kept playing with this with no luck. Can anyone at least point me towards the source file that is responsible for returning the results or handling POST requests?
Update: Sorry, Ruby was broken, not MediaWiki. Ruby 1.8.4 had a bug in its post method. Upgrading fixed. Hope that helps someone.

Hi all, the API is really practical, but I miss a list of all external web links from a page. In de:Spezial:Linksearch all web links of a page are presumably listed; the API could simply display that list..?--Luxo 20:42, 20 March 2007 (UTC)

done. --Yurik 02:04, 22 May 2007 (UTC)Reply

Watchlist not working any longer?


Hi Yuri. It seems that the watchlist feature of api.php is not working any longer :-( [1]. It is not even listed in the help! I went to svn, but I don't see any recent change that justifies the change. Tizio 14:33, 2 April 2007 (UTC)Reply

Ops, just found this: [2]. Sorry for the duplicate. Tizio 14:39, 2 April 2007 (UTC)Reply
Domas removed it because it "failed to conform to db standards" and advised developers to "just use Special:Watchlist", which sucks, but so be it. Unless it is completely rewritten to conform to db standards (not use the master db, but instead use the one dedicated to watchlists, etc.), it will be gone. AmiDaniel 18:16, 2 April 2007 (UTC)
Nuts, it was soooo useful :-( Especially on a slow connection, the difference is really evident. Tizio 15:04, 3 April 2007 (UTC)Reply
I changed the DB group being used. Unless there is another issue with the query, we should get someone to reinstate it. --Yurik 03:29, 15 May 2007 (UTC)Reply
Thanks!!!! bugzilla:9482 (as I said, that was really useful; as the rest of the API, btw). Tizio 12:03, 19 May 2007 (UTC)Reply

Changes in revision query


Was there a recent change in the way revision queries are processed? It seems like the revision ids are no longer returned automatically whereas before they were and api.php still claims they are. Also, it seems oldid (i.e. the previous revision's ID) is no longer returned and there's no way to enable that option within the query. Any pointers how I can get that information? Sebmol 05:52, 22 May 2007 (UTC)Reply

How can we get the revision ids ? --NicoV 07:49, 22 May 2007 (UTC)Reply
Because not everyone needed the IDs, they have been added to the list of available properties. To get them, simply include "ids" as one of the rcprop values [3]. Both pageid and revid are there. Sorry, but i had to make this change to make it cleaner. --Yurik 21:04, 22 May 2007 (UTC)Reply
Ok, thanks. --NicoV 21:36, 22 May 2007 (UTC)Reply

Would it be possible to have an option on the "links" request to filter out links that are imported by templates ? I am currently working on a tool to help fixing links to disambiguation pages, and I'd be interested in getting only the links that are directly in the page.

This request could probably also be applied to other functions (templates, extlinks, ...).

PS: Thanks for the API --NicoV 21:36, 22 May 2007 (UTC)Reply

It might be possible, but I am afraid this might be a bit complex: the api would have to enumerate recursively every template used by the pages, and subtract all links found in templates from those on the page itself. The problem is that all links from the template pages will be found, not just those included in other pages (<includeonly>, etc). Plus some links might appear both on the page and in a template, and doing this will eliminate them.
The real solution would be to add a bit flags (short int) field to the pagelinks table to record during page save/update what links are real and what links come from templates. --Yurik 03:11, 23 May 2007 (UTC)Reply

Changes to redirects


Have there been any changes to redirects? Since a few days, querying for content revisions no longer seems to automatically redirect when using &redirects. Thanks for the API! Tom De Smedt

Shouldn't be. Please provide an example of the broken URL request. --Yurik 18:47, 25 May 2007 (UTC)Reply
Yurik, here is an example request: http://en.wikipedia.org/w/api.php?action=query&redirects&format=xml&prop=revisions&rvprop=content&titles=wolf - this query used to return the content of the "Gray wolf" article, but now returns a redirect page. --Tomdesmedt 18:59, 27 May 2007 (UTC)Reply
Thanks, fixed in r22502. --Yurik 08:06, 28 May 2007 (UTC)Reply
OK, works fine now! Thanks again for the API --Tomdesmedt 12:00, 29 May 2007 (UTC)Reply

List members of a category (list=categorymembers)


This doesn't seem to be implemented yet, but it works (mostly) in query.php. Is there a ETA for it to move here? I spoke with Yurik this afternoon about a bug in query.php. Carl CBM 22:05, 4 June 2007 (UTC)Reply

Considering that Yurik is the only one implementing it, and he is extremely busy with his primary work... Probably soon :) --Yurik 23:50, 4 June 2007 (UTC)Reply
I posted here mostly to leave a record of the conversation. I wasn't sure if there is a deployment plan for the new API, but it looks like there isn't. In the meantime, there are still the various libraries that do it by screen scraping. CBM 18:43, 5 June 2007 (UTC)Reply
Not sure what you mean by deployment -- it is deployed on all WikiMedia servers, and is part of the normal distribution. It is disabled by default, but can easily be enabled with a small configuration change. Libraries really ought not to do the screenscraping -- simply because HTML changes very often (by far more often than the API specs :)) --Yurik 03:57, 6 June 2007 (UTC)Reply
I meant the timeline for the full implementation of API.php, sorry. I agree that screenscraping is a method of last resort. CBM 03:43, 7 June 2007 (UTC)Reply
done. --Yurik 10:55, 17 June 2007 (UTC)Reply

Whitespace in front of XML declaration


I've noticed that my mediawiki site has a space in front of the xml declaration (for format=xml) which royally jams up every xml reader there is. This doesn't seem to happen on the wikipedia website, but I can't find a bug for it or where the space comes from. And my version is new (literally a week old). Anyone know of this bug and how to fix it w/o a hack? If not, is there any way to overload the output functions and print where they are coming from?

Check LocalSettings.php for byte-order marks and other whitespace at the top of the file (above <?php). You can remove the ?> at the bottom if present, since it's not needed. Also check extension files. robchurch | talk 00:36, 22 June 2007 (UTC)Reply
Yeah I had that problem on our wiki's too and finally tracked it down to whitespace after the ?> on the end of a number of extension's php files. --Nad 10:27, 22 June 2007 (UTC)Reply
I had the same problem. It was whitespace in EditCount extension file EditCount.i18n.php near the end ?>. --Till Kraemer 18:12, 18 January 2009 (UTC)Reply
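
A small, hedged helper for tracking this down -- it just flags PHP files that have bytes before the opening <?php (e.g. a BOM) or anything other than a single newline after the final closing ?>; the extensions path is illustrative:

<?php
// Flag extension files that would emit stray output before the XML declaration.
$files = new RecursiveIteratorIterator(
	new RecursiveDirectoryIterator( './extensions' ) );
foreach ( $files as $file ) {
	$path = (string)$file;
	if ( substr( $path, -4 ) !== '.php' ) {
		continue;
	}
	$text = file_get_contents( $path );
	if ( substr( $text, 0, 5 ) !== '<?php' ) {
		echo "Bytes before <?php (BOM or whitespace?): $path\n";
	}
	$close = strrpos( $text, '?>' );
	if ( $close !== false ) {
		$tail = substr( $text, $close + 2 );
		if ( $tail !== '' && $tail !== "\n" ) {
			echo "Bytes after the closing ?>: $path\n";
		}
	}
}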

namespace select with list=usercontrib


Is it possible to add an option to choose which namespaces to use when list=usercontrib? i.e. that this link will work. Yonidebest 21:00, 30 June 2007 (UTC)Reply

Done in r23741. --Catrope 15:17, 5 July 2007 (UTC)Reply
Thanks! Yonidebest 15:21, 5 July 2007 (UTC)Reply

Silent normalization of interwiki titles


See Bug 10147.

Fixed. --Yurik 07:42, 6 July 2007 (UTC)Reply

InterWiki table


Would it be possible to implement retrieval of the InterWiki table through something like this?

api.php?action=query&meta=siteinfo&siprop=interwiki

I'm currently working on a Recent Changes enhancement, and need to parse wikicode inside comments. Luckily, comments only allow very little wikicode, but I still need to parse InterWiki links. index.php?title=mw:Pagename is not guaranteed to work for all InterWiki bindings (more exactly, it doesn't work if iw_local=0 for that binding), so that's not a good solution. Also, I need to place IW links in a different CSS class, and I can't do that if I can't tell IW from internal links.

Thanks in advance. --Catrope 20:16, 31 May 2007 (UTC)Reply

Sounds quite do-able to me -- only problem is that interwiki tables tend to be quite obscenely large, and so we may have to split up the returns. Please add a bug to http://bugzilla.wikimedia.org where it's much more likely to be acted upon :). Thanks. AmiDaniel 08:33, 1 June 2007 (UTC)Reply
Posted, thanks for the link. --Catrope 13:29, 1 June 2007 (UTC)Reply
done. --Yurik 07:46, 6 July 2007 (UTC)Reply

possible category problem


Hi, using the API on large categories gives me fewer results than it should, see en:Category:Uncategorized from September 2006 for example. My program using the query API reports 4712 articles, when there are actually over 5000 articles in that category. I have tested my software and I don't see that it is doing anything wrong -- indeed it works perfectly on smaller categories -- though I still don't rule out that I am at fault. I would be grateful if someone could use their query API software on that category and see how many results they are given. thanks Martin 09:57, 17 September 2006 (UTC)

Using the C# example from the en user manual gives even fewer results, so I think something somewhere is going wrong with large cats. Martin 09:51, 18 September 2006 (UTC)
Hi Martin, I will take a look in the next few days. Thanks for reporting. --Yurik 13:35, 18 September 2006 (UTC)Reply
Yurik fixed a related bug in queryapi.php a few days ago. Phe 15:33, 18 October 2006 (UTC)Reply
Tried in api - seems to be working fine. --Yurik 09:37, 8 July 2007 (UTC)Reply

Hi,

With a group of friends we are working on a MediaWiki platform to share content about design. Maybe API users can give us some advice?

We are trying to make other sites able to build their own navigation into our content.

The advantage for the users of these sites is that they could have personalised access to the content they were looking for, on the site they were already on. The advantage for the wiki is better participation, through "edit" links on the other sites that lead back to the wiki in edit mode.

After two days of research on the web I found:

Yet I can't manage to use the extracted content. It only prints the raw HTML code instead of interpreting it.

If you can help us it would be super!

Thanks !

--Thibho 16:00, 28 June 2009 (UTC)Reply

Some news: a friend told me that it would be better to use the XML output: http://en.design-platform.org/api.php?action=parse&page=Brand&format=xml . Then we would have to build a script to extract the content of the "text" tag, to transform the links, and to build a link that redirects to the wiki in edit mode. This is beyond my abilities, so I am continuing my research to see if I can find examples to start from. --Thibho 17:17, 28 June 2009 (UTC)


So the solutions we use are:


Option 1: Using iframes
This integrates a minimalistic skin of the wiki into the site with an iframe.
This solution makes it possible to keep the integrated skin up to date, since only a minimalistic part of the wiki is integrated and the rest can be managed by the site owner.
<iframe src ="http://for-iframes.wikidomaine.com">
  <p>Your browser does not support iframes.</p>
</iframe>
In the wiki's LocalSettings.php:
switch ($_SERVER["SERVER_NAME"])
{
	case "for-iframes.wikidomaine.com":
		$wgDefaultSkin = "minimalist_skin";
		$wgAllowUserSkin = false;
		break;

	default:
		$wgDefaultSkin = "wiki_original_skin";
		break;
} 
The main problem with this is that users cannot copy/paste the URL, since it is always the same. We are looking for a solution for this aspect.


Option 2: look and feel with a full site-like skin and the same menu
This uses the same type of switch as before, but with a full skin and a copy of the site's menu.
This solution can be simpler if there is only one partner site, or if you create a skin template that can easily be modified for each site.


There are also many solutions based on scripts, page rewriting, the API, etc., which have the advantage of being managed entirely by the partner site, but which require more expertise from the site's webmaster.


Thanks !


--Thibho 21:15, 7 July 2009 (UTC)Reply

Protected pages


Hi. I have another request... would it be possible to add & protected or something similar to retrieve the list of protected articles? I could not find any other way (other than downloading page.sql) for doing so. Thanks! Paolo Liberatore 12:46, 3 October 2006 (UTC)Reply

I am not sure the list of protected articles is available from the database. If it is, I can certainly expose it. --Yurik 16:24, 3 October 2006 (UTC)Reply
That would be great! Thank you. Paolo Liberatore 16:16, 10 October 2006 (UTC)Reply
As an alternative that could possibly be simpler to implement, the entire record of an article from the page table could be retrieved via prop=info. Paolo Liberatore 17:52, 6 November 2006 (UTC)
I just noticed there is a page_restrictions field in the page table. I will have to find out what it might contain (the Page table documentation is not very descriptive), and afterwards add that to prop=info. Filtering by that field is problematic because it's a tiny blob (I don't think MySQL can handle blob indexing, but I might be wrong). --Yurik 22:31, 6 November 2006 (UTC)
Here is my reading of the source code. The point where page_restrictions is interpreted is Title.php. There, loadRestrictions reads this field, parses it and stores the result in the mRestrictions array. This array is then used by getRestrictions to return an array containing the groups that are allowed to perform a given action. As far as I can see, this array is such that mRestrictions['edit'] is an array that contains the groups that are allowed to perform the edit action, etc., except that if this array is empty, no restrictions (other than the default ones) apply.
In the database dump, the page_restrictions field appears to be either empty or something like edit=autoconfirmed:move=sysop or just sysop. The explanation of the table says that this is a comma-separated list, but the comments in Title.php mention that that is an old format (I think the value 'sysop' is actually in this format; no page in wikien currently has a comma in the page_restrictions field). Paolo Liberatore 15:52, 11 November 2006 (UTC)
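
A hedged sketch of parsing the newer colon-separated format described above (purely illustrative -- not the actual Title.php code, and simplified to one group per action):

<?php
// Turn a page_restrictions value such as "edit=autoconfirmed:move=sysop"
// into an action => group map. A bare old-format value like "sysop" is
// applied to both edit and move here.
function parsePageRestrictions( $field ) {
	$restrictions = array();
	foreach ( explode( ':', trim( $field ) ) as $pair ) {
		if ( $pair === '' ) {
			continue;
		}
		if ( strpos( $pair, '=' ) !== false ) {
			list( $action, $group ) = explode( '=', $pair, 2 );
			$restrictions[$action] = $group;
		} else {
			$restrictions['edit'] = $pair;
			$restrictions['move'] = $pair;
		}
	}
	return $restrictions;
}

print_r( parsePageRestrictions( 'edit=autoconfirmed:move=sysop' ) );
// Array ( [edit] => autoconfirmed [move] => sysop )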

Diffs


Are there any plans to provide raw diffs between revisions? This would be particularly helpful for vandalism-fighting bots that check text changes for specific keywords and other patterns. The advantage would be that diffs require much less bandwidth and therefore improve bot response time. What do you think? Sebmol 15:24, 21 November 2006 (UTC)

Count


Would it be possible to add an option which would simply return the number of articles that satisfy the supplied conditions? This would be especially helpful for counting large categories, like en:Category:Living people. HTH HAND —Phil | Talk 15:59, 29 November 2006 (UTC)

I second that emotion. Zocky 23:26, 9 December 2006 (UTC)Reply
Unfortunately this is harder than it looks - issuing a count(*) against the database if the client asks for list=allpages would grind the db to a halt :). The same goes for the list of all revisions, user contribution counters (we even had to introduce an extra db field to keep that number instead of counting), etc. So no plans for now, unless some other alternative is given. --Yurik 09:41, 8 July 2007 (UTC)

version/image info


I think it would be useful to get information about the software version (i.e. core version, extensions, hooks/functions installed) and images (e.g. a checksum would be useful to see if an image has changed). -Sanbeg 21:08, 7 December 2006 (UTC)

Some of it is available through the siteinfo request. More will be done later. --Yurik 01:59, 22 May 2007 (UTC)Reply

Select category members by timestamp


Hello, is it possible to retrieve category members with a parameter that lists only those starting from a certain timestamp? This should be possible, since a timestamp is stored in the categorylinks database table. Is this the correct place to ask, or should I file a bug in Bugzilla? Bryan 12:18, 24 December 2006 (UTC)

Similarly, getting all pages that embed a given template, by timestamp, would be useful for monitoring when a template gets used, because it contains structured data that can be reflected on other websites.82.6.99.21 15:14, 9 February 2007 (UTC)
Not sure what you mean. Please file a feature req in bugzilla with the sample request/response. --Yurik 09:43, 8 July 2007 (UTC)Reply

mw-plusminus data


MediaWiki recently added mw-plusminus tags to Special:Recentchanges and Special:Watchlist, which display the number of characters added/removed by an edit. I was wondering if it may be possible for api.php to both retrieve this info and somehow append the data to query requests that involve page histories, recentchanges, etc. I would most like to see this available in usercontribs queries, as that data is currently not retrievable through any other medium (save for pulling up each edit one at a time and comparing the added/removed chars). Thanks. AmiDaniel 23:55, 26 February 2007 (UTC)

Done for rc & wl. --Yurik 20:09, 8 July 2007 (UTC)Reply

Query Limits


Hi all, another suggestion from me. On the API it is stated that the query limit of 500 items is waived for bot-flagged accounts up to the MediaWiki maximum of 5000. Would it be possible to extend this same waiver to logged-in admin accounts? I have multiple instances in some of my more recent scripts that query, quite literally, thousands of items (backlinks, contribs, etc.) that are primarily only used by myself and other administrators on enwiki. Having to break these queries up into units of 500 causes a rather dramatic performance decrease on the user's end, and I can't imagine it's particularly friendly on the servers either (or rather, submitting two or three requests of 5000 each would likely be less harmful to the servers than twenty or thirty requests of 500 each). Generally, admins are considered similarly well-trusted not to abuse such abilities, and they are also small in number on all projects. Also, the MediaWiki software itself allows similar queries of up to 5000, and I must presume that loading this data through MediaWiki is far more burdensome on the servers than submitting the same query through the API. This is clearly more a political matter than a technical one, but I'd like to know if there are any objections to implementing such a change. Thanks. AmiDaniel 05:05, 1 March 2007 (UTC)

done. By amidaniel. :) --Yurik 05:58, 17 July 2007 (UTC)Reply
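
For anyone still stuck below the higher limits, the usual workaround is to follow the query-continue element returned with each batch; a rough, hedged PHP sketch (URL, list module and error handling are illustrative):

<?php
// Collect all backlinks 500 at a time by following <query-continue>.
$base = 'http://en.wikipedia.org/w/api.php?action=query&list=backlinks'
	. '&bltitle=Main%20Page&bllimit=500&format=php';
$continue = null;
$titles = array();
do {
	$url = $base . ( $continue !== null ? '&blcontinue=' . urlencode( $continue ) : '' );
	$result = unserialize( file_get_contents( $url ) );
	$batch = isset( $result['query']['backlinks'] ) ? $result['query']['backlinks'] : array();
	foreach ( $batch as $link ) {
		$titles[] = $link['title'];
	}
	$continue = isset( $result['query-continue']['backlinks']['blcontinue'] )
		? $result['query-continue']['backlinks']['blcontinue']
		: null;
} while ( $continue !== null );
echo count( $titles ) . " backlinks collected\n";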

partial content


It is great to be able to retrieve Wikipedia's content through an API, but nobody is really going to need the whole page.

Would it be possible to pass a variable which adds only the lead paragraph (everything before the TOC) to the XML, not the whole content? This would be ideal for embedding into another context, with a "View Full Article" link added underneath.

Any chance of this becoming a possibility?--83.189.27.110 00:49, 19 March 2007 (UTC)

I'd also consider it useful to be able to retrieve a single section from an article, so that one could first load the article lead and the TOC, and then see only some of the sections. This would be a great improvement in efficiency, especially for people on slow links (e.g., mobile) and for "community" pages, such as en:Wikipedia:Administrator's noticeboard, some of which tend to be very long. Tizio 15:19, 20 March 2007 (UTC)
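
Retrieving just the lead can nowadays be approximated with the section parameter of the revisions module; a hedged example (section 0 is everything before the first heading):

 http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&rvsection=0&titles=Albert_Einstein&format=xml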

returning the beginning/end of a document


Is there any possibility of an API function that returns the first x bytes at the beginning or end of a document? An increasing amount of metadata is stored at the beginning or end of wikipedia articles, talk pages, etc. And tools like "popups" that preview pages don't need to transmit the whole document to preview part of it. I am thinking of client, server, and bandwidth efficiency. Outriggr 01:43, 20 April 2007 (UTC)Reply

Current status ?


Hi,

I have started developing a tool in Java to deal with some maintenance tasks, especially for disambiguation pages. For the moment, I have planned two parts:

  • Much like CorHomo, a way to find and fix all articles linking to a given disambiguation page.
  • A way to find and fix all links to disambiguation pages in a given article.

To finish my tool, I need a few things done in the API :

  • Retrieving the list of links from an article.
  • Submitting a new version of an article.

Do you have any idea when these features will be available in the API ?

How can I help? (I am rather a beginner in PHP; I have just developed a MediaWiki extension, here, but I can learn.)

--NicoV 13:24, 23 April 2007 (UTC)

For your "Retrieving the list of links from an article" problem, check the query.php what=links query. It's already been implemented, just not brought over into api.php. As for submitting a new version of an article, I'm afraid there is no ETA on this, and I'm honestly not sure how Yurik is going to about doing this. For now, I'd recommend you do it as it's been done for years--submitting an edit page with "POST". It's not the greatest solution, but it works. If you'd like to help, please write to w:User_talk:Yurik; I'm sure your help is needed! AmiDaniel 18:10, 23 April 2007 (UTC)Reply
Thanks for the answer. I was hoping to use a unique API, but no problem in using query.php for the links. For submitting a new version, do you know where I can find an example of submitting an edit page ? --NicoV 20:33, 23 April 2007 (UTC)
Unfortunately, dealing with forms, and especially MediaWiki forms (as submitting them typically requires fetching user data from cookies), is not particularly easy in Java. For a basic example of submitting a form in Java, see this IBM example. For most of the stuff I've developed, instead of trying to construct my own HttpClient and handle cookies myself, I've simply hooked into the DOM of a web browser and let it take care of the nitty gritty of sending the POSTs and GETs. You can see an example of this method using IE's DOM here. I've not found a good way to get the latter to work with Java, as Java does not have very good ActiveX/COM support, though it can sort of be done by launching a separate browser instance. I'm afraid I can't really help much more, unfortunately, as it's a problem that's not been well solved by anyone. It's easy to build MediaWiki crawlers, but not so easy to build bots that actually interact with MediaWiki. AmiDaniel 21:20, 23 April 2007 (UTC)
A en-wiki user has written something for page editing in java: en:User:MER-C/Wiki.java. HTH. Tizio 13:10, 26 April 2007 (UTC)Reply
I have used parts of the java class you linked, and it's working :) Thanks --NicoV 07:57, 22 May 2007 (UTC)Reply

Ok, thanks. I will take a closer look at the examples. --NicoV 05:38, 27 April 2007 (UTC)

Some have been done (links, etc). See the main page. --Yurik 11:48, 14 May 2007 (UTC)Reply
Thanks, I will try to use it when it's available on the French Wikipedia --NicoV 20:28, 14 May 2007 (UTC)

Hi, I've also started a Java API (see my blog entry: Java API for MediaWiki query API); I'd like to hear opinions about the design and usability of the library. Thanks Axelclk 17:57, 22 June 2007 (UTC)

Real current status


Could somebody please tell me the current status of the API regarding fetching a list of links from a site via api.php?action=query&titles=Albert%20Einstein&prop=links ?

This works just fine with the latest MediaWiki on Wikipedia, but on my MediaWiki v1.10 it doesn't work at all. Error reported: unknown_prop. I investigated for some time and noticed that there is no includes/api/ApiQueryLinks.php in 1.10. The import of the module is also commented out in ApiQuery.php, like this:

  private $mQueryListModules = array (
     'info' => 'ApiQueryInfo',
     'revisions' => 'ApiQueryRevisions'
  );
  // 'links' => 'ApiQueryLinks'
  // ...some other modules as well

So, what's going on? On API:Query - Page Info it is stated that the link fetch feature is available with MW 1.9. I am confused...

So am I :) From what I remember, it has been working for a long time. There is a new release coming out shortly that will include all the proper changes. For now, I would suggest simply copying the entire API directory from the current SVN - the API would not mess up anything, so if worst comes to worst, it will simply not work :) --Yurik 21:37, 13 July 2007 (UTC)
Unfortunately, this didn't work for me. I checked out the latest trunk/phase3/api. After moving it into includes, nothing works at all :( The PHP engine stated: Fatal error: Call to undefined function wfScript() in /srv/www/vhosts/wikit/htdocs/mediawiki-1.10.0/includes/api/ApiFormatBase.php on line 88. So I thought I'd try to get just the ApiQueryLinks class to work with the 1.10.0 API code and included the class in ApiQuery. After that I tried to call the API, but it seems the ApiQueryGeneratorBase class can't be found anywhere. Hopefully the API will work again in one of the stable releases to come... --Bell 09:09, 16 July 2007 (UTC)

Hi, you could try to run the 'unstable' version - I actually found it to be reasonably stable. Just sync it from the svn and run update.php to get the databases up to date. Also, 1.10.1 is out. --Yurik 06:07, 17 July 2007 (UTC)

Tokens


I, personally, don't understand why state-changing actions currently require tokens. Shouldn't the lg* parameters be enough to determine whether the client is allowed to perform a certain action? If so, why do you need tokens?

On a side note, I intend to start writing a PHP-based bot using this API, and will try to include every feature the API offers. Since both this talk page and its parent page have been quiet for two weeks, I was wondering if new API features are still posted here. If that is the case, I'll monitor this page and add new features to my (still to be coded) bot when they appear. I'll keep you informed on my progress. --Catrope 17:20, 9 May 2007 (UTC)Reply

The motivation for tokens at Manual:Edit token is that they are used to prevent session hijacking. Tizio 19:02, 15 May 2007 (UTC)Reply
Ah, I understand now. Didn't think it through as deeply as you did. --Catrope 20:19, 22 May 2007 (UTC)Reply
Would it be possible to at least grab editing tokens by API, and require session data, cookie tokens, etc. to get them? The same can be done with index.php (if not mixed up in a lot of other HTML). Gracenotes 17:04, 4 June 2007 (UTC)Reply

Login not working ?


Hi, is there a problem with the "login" action? Whenever I try (in the last hour), I get the following message:

<error code="unknown_action" info="Unrecognised value for parameter 'action'">

-- NicoV 20:57, 21 May 2007 (UTC)Reply

Yes, unfortunately I had to disable login action until a more secure solution is implemented. The current implementation allowed countless login attempts, allowing crackers to break weak passwords by brute force. Disabling it was the only solution. Any help with fixing login module would be greatly appreciated and bring it back faster. --Yurik 01:58, 22 May 2007 (UTC)Reply
Would it be possible, while disabling it, to return an understandable message, like result=Illegal or another value like result=LoginDisabled? Currently, when asking for an XML result, the result is not XML. My tool wasn't prepared for this :). That way, when I get this answer, I would be able to validate the login using another method (by going through the Special:Login page).
Concerning help, I am not very good at PHP, so I doubt I can help you much. I suppose simple tricks like delaying the answer of the login action wouldn't be sufficient. --NicoV 07:17, 22 May 2007 (UTC)Reply
The format is obviously not handled properly - even if everything else fails, the proper format should be used. Please file it as a bug. Thanks for the LoginDisabled suggestion - I might implement something along those lines later. Now, if only someone could help with the login PHP code :) It's not hard, just annoying :( --Yurik 21:07, 22 May 2007 (UTC)

I just sent a patch off to Yurik that fixes these security vulnerabilities (the big one that Yurik mentioned, at least) and re-enables the login, so this will hopefully make it into the repo by tomorrow and will be synced up on Wikimedia's servers within the week. Sorry for the inconvenience. AmiDaniel 10:36, 23 May 2007 (UTC)

That's great news, but in the meantime the Special:Login page has just been modified with a captcha (at least on the French Wikipedia), so the method I was using to edit pages is not working any more :( Is there a way to edit pages with a tool? --NicoV 15:30, 23 May 2007 (UTC)

Limits


The limits on some queries (like logevents) are lower than what is allowed via the normal MW interface. - VasilievVV 15:34, 29 May 2007 (UTC)

RC (list=recentchanges) also has this problem. api.php limits it to 500, but I can get up to 5000 using the regular interface. --Catrope 15:37, 29 May 2007 (UTC)Reply
This has recently been changed to allow the limits of 5000 and 50 for fast and slow queries, respectively, for sysops as well as bots. We're hesitant to enable higher limits for non-sysops and non-bots, however, until we can do some serious efficiency testing of the interface. AmiDaniel 08:35, 1 June 2007 (UTC)
Why is it necessary to have different limits for bots and normal users? They have equal limits in the UI (5000). So I think it would be better to set a 5000 limit for all queries that don't read page content - VasilievVV 08:38, 2 June 2007 (UTC)

Bad title error


Compare the result of the following two queries:

http://www.mediawiki.org/w/api.php?action=query&prop=info&titles=API%7CDog&format=jsonfm
http://www.mediawiki.org/w/api.php?action=query&prop=info&titles=API%7C%7CDog&format=jsonfm

In the latter query, the second title is empty (and thus invalid), which causes the API to (rightfully) throw an error. The downside is that it destroys my entire query (the original query that caused my error here contained 500 titles, of which one was empty). This is pretty unfriendly behavior, but fixing it raises the issue of having to return both an error and a query result in one response (which is currently impossible).

Alternatively, I can make sure there are no empty titles in my query (which fixed my problem), but are empty titles the only ones that trigger the "invalid query" error? If not, could someone provide a list of exactly what causes api.php to return "invalid query"?

Thanks in advance --Catrope 15:32, 5 June 2007 (UTC)Reply

  • This problem still exists. The same thing happens if there is a bad title as opposed to an empty title; example: this query just returns a "bad title" error but doesn't tell you which one of the titles caused the error, and doesn't return any information about the titles that are OK. --Russ Blau 14:26, 18 September 2007 (UTC)Reply
Thanks. Is there a bug filed for this? --Yurik 17:28, 18 September 2007 (UTC)Reply

Searching content?


There's a comment from 2006 in the General Archive above that says:

Search
  • list of articles that contain string query

Is this being implemented in the API? -- SatyrTN 21:55, 7 June 2007 (UTC)Reply

A feature like this would be essential for search-and-replace bots. --Catrope 07:40, 8 June 2007 (UTC)Reply
So would that be a "yes"? :) -- SatyrTN 14:02, 13 June 2007 (UTC)Reply
I have no idea if it's going to be implemented or not, I'm not in charge of developing the API. I was just saying that it's very useful, and should be implemented.--Catrope 15:29, 13 June 2007 (UTC)Reply
I agree that it would be an excellent feature to have. My biggest concern at present, though, is the ability to unit test the API: we are in dire need of a good testing framework before we move forward. --Yurik 01:11, 14 June 2007 (UTC)
What sort of framework, exactly, are you thinking of? There is a lot of bot code around that might be a starting point. CBM 02:51, 14 June 2007 (UTC)Reply

Page watched ?


With the API, is there a way to know if a page is watched by the user ?

That would be useful for me: I have written a tool to help fix links to disambiguation pages, and currently when the tool submits a new version of a page, the page is always unwatched. I can probably deal with this, but that would mean reading a lot more from Wikipedia, because the "wpWatchthis" checkbox is far from the beginning of the page. --NicoV 21:12, 8 June 2007 (UTC)

You don't need the checkbox, the buttons right on top of the page are enough. There will be a "watch" button if the page isn't watched, and an "unwatch" button if it is. Of course when editing pages is implemented in the API, you'll no longer need all of this. --Catrope 08:47, 11 June 2007 (UTC)Reply
Yes, thanks. The only drawback seems to be the waste of bandwidth (the watch button is almost at the end of the HTML file, while the checkbox is earlier, and a lot earlier on the French Wikipedia because a lot of automatic stuff is added in between). I have already tested with the checkbox, but it doesn't work how I'd like it to for users who have the preference of automatically watching pages they edit. I will try the Watch button. --NicoV 21:39, 12 June 2007 (UTC)
Hmm, maybe request the watchlist (through the API) at the beginning of your program, then check every page title against that list? --Catrope 12:57, 13 June 2007 (UTC)Reply
Thanks again, but I have decided to use the "watch"/"unwatch" button, until (I hope) there's a way of getting this info when retrieving page data through the API. --NicoV 17:40, 14 June 2007 (UTC)Reply
You really don't need to. With two requests to the API (one to log in and one to get your watchlist through action=query&list=watchlist) you can store your watchlist in an array. When editing a page, simply check if it's in the array. --Catrope 19:42, 14 June 2007 (UTC)Reply
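
A rough sketch of that approach (hedged and illustrative: it assumes the client is already logged in and sends its session cookies, and that the watchlist fits in one 500-item batch):

<?php
// Fetch the watchlist once, keep the titles in a lookup array,
// and test individual pages against it before saving.
$url = 'http://fr.wikipedia.org/w/api.php?action=query&list=watchlist'
	. '&wlprop=title&wllimit=500&format=php';
$data = unserialize( file_get_contents( $url ) ); // add login cookies in real code
$watched = array();
foreach ( $data['query']['watchlist'] as $item ) {
	$watched[$item['title']] = true;
}
$isWatched = isset( $watched['Paris'] ); // check any page title this way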

DEFAULTSORT key


I know very little about programming and computers, but this API thing sounds like it could be used to get useful data such as a list of biographical articles without DEFAULTSORT keys. Am I right to think that a query could be run that could detect all articles with w:Template:WPBiography on their talk pages, but which didn't have the DEFAULTSORT magic word somewhere in the article, and furthermore that the categories (cl) function with "Parameters: clprop=sortkey (optional)" could detect existing pipe-sorting in the categories and output that data as well? Or am I misunderstanding the purpose and limits of API? Carcharoth 10:16, 18 June 2007 (UTC)Reply

It's possible, but:
  • The API can list articles with a certain template in them or all articles in a certain category, but it can't search through them. You'll have to write a script that does that.
  • That script could distinguish DEFAULTSORT, pipe-sorted and non-sorted articles, but it can't automatically correct them to use the DEFAULTSORT magic. You'll have to do that by hand.
I'll write that script some time this week (as the DEFAULTSORT issue is present on BattlestarWiki as well), so keep an eye on this page to see when it's there. --Catrope 15:06, 18 June 2007 (UTC)Reply
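
A minimal sketch of such a script (hedged and illustrative only: it walks talk pages embedding the template via list=embeddedin, then checks each article's wikitext for the DEFAULTSORT magic word; namespaces, limits, title handling and continuation would all need tuning):

<?php
$api = 'http://en.wikipedia.org/w/api.php';
// Talk pages (namespace 1) transcluding Template:WPBiography.
$list = unserialize( file_get_contents( $api
	. '?action=query&list=embeddedin&eititle=Template:WPBiography'
	. '&einamespace=1&eilimit=500&format=php' ) );
foreach ( $list['query']['embeddedin'] as $page ) {
	$title = preg_replace( '/^[^:]+:/', '', $page['title'] ); // drop "Talk:"
	$rev = unserialize( file_get_contents( $api
		. '?action=query&prop=revisions&rvprop=content&format=php&titles='
		. urlencode( $title ) ) );
	$pageData = array_shift( $rev['query']['pages'] );
	$text = $pageData['revisions'][0]['*'];
	if ( stripos( $text, '{{DEFAULTSORT' ) === false ) {
		echo "No DEFAULTSORT: $title\n";
	}
}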

Feature request : list=random


Let the API generate a list of random pages as the list. Parameters:

  • Namespace(s)
  • Redirect filter
  • Limit

Would be useful for tools that look for pages/images matching criteria that are not easily obtained otherwise, e.g., pages with no (trivial) categories. --Magnus Manske 23:51, 23 June 2007 (UTC)Reply

Future edit API


I think that when the edit functionality is to be implemented, it should be defined as POST, and not GET. AzaToth 23:43, 24 June 2007 (UTC)Reply

I think it shouldn't deny either kind of request as both have their uses. It's the names and values contained in the requests that should determine the functionality, not the kind of request. --Nad 04:13, 25 June 2007 (UTC)Reply
nah, there's a reason for GET and POST to be different HTTP-verbs. one to get information from the server, one to change information on the server. btw, login should never work with GET, you don't want your password in the server logs. -- D 12:02, 25 June 2007 (UTC)Reply
GET and POST are allowed for all API requests, and there is no easy way to change that for just one action. IMO, users who are stupid enough to send a GET request with their password or other sensitive stuff should bear the consequences. BTW, action=edit users will be forced to use POST in most cases, since the query string supplied with GET is limited to 255 characters, while most articles are substantially longer. In fact, the longest Wikipedia article comes close to being 450KB in size. --Catrope 18:11, 26 June 2007 (UTC)Reply

API Interface for .NET


I've been putting together an open source .NET interface for the MediaWiki API. I've looked around the site for where to list it, but haven't found anything that looks like the right place; can anyone point me in the right direction as to where? 65.27.174.205 02:15, 1 July 2007 (UTC)

I have made something similar a while back, but haven't published it. Not sure if mediawiki would want to keep this, but you could always add it to sourceforge. --Yurik 03:42, 2 July 2007 (UTC)Reply
Yeah not a big deal, currently have it on google hosting maybe i'll move it. Thanks 65.27.174.205 21:43, 2 July 2007 (UTC)Reply
It seems there are no sources there? Where can the sources or library be downloaded? uk:User:Alex_Blokha
On http://sourceforge.net/projects/jwbf/ you can find an API interface for Java. What do you think about a list of projects which supply connectors to the MediaWiki API?
Will create a page for them, thanks for the link. Please sign your posts with --~~~~. --Yurik 18:49, 8 July 2007 (UTC)Reply

Why there is no wsdl?


Why do you not release a WSDL API? http://www.google.com/search?hl=uk&q=define:wsdl Why should we write a new wrapper for each platform/language, instead of just using WSDL, which is implemented and tested on each platform? For example, on the .NET platform, a WSDL service is included in a project with 3 mouse clicks. uk:User:Alex_Blokha

WSDL is just for web services, whereas we have a simple HTTP get/post request-response protocol to allow many different clients to seamlessly use our API. A WSDL wrapper might be useful, and could be added at a later date. Feel free to contribute. --Yurik 04:02, 17 July 2007 (UTC)
I don't program in PHP. The only thing I know about it is that WSDL support is included in the latest versions of PHP. But if you can provide Windows web hosting, a web service based on existing frameworks for Wikipedia can be created -- by me, for example. uk:User:Alex_Blokha
Although the MediaWiki API is written in PHP, its output is still XML by default. You can change that to JSON, PHP, and some other formats by setting the format= parameter. --Catrope 14:32, 1 August 2007 (UTC)Reply

imageinfo/json


Why doesn't imageinfo use a list for the image revisions? Using stringified numbers as keys to a map is a bit strange, and especially the extra "repository" entry makes it a bit hard to parse. Within a revision, "size", "width" and "height" look very much like numbers to me, so why are these strings too? -- 23:42, 6 August 2007 (UTC)

Fixed the size values. Not sure how to implement the list yet (need to move rep name somewhere else). --Yurik 03:34, 7 August 2007 (UTC)Reply
oh, very nice, thank you :) i guess the repository is not fixed easily, pushing down the revisions one level would be a bit incoherent.. -- 08:13, 7 August 2007 (UTC)Reply
I fixed this by adding page-level tag "imagerepository". --Yurik 08:45, 9 August 2007 (UTC)Reply
thanks again :) -- 12:46, 10 August 2007 (UTC)Reply

exturlusage


Is there a reason list=exturlusage contains a "p" tag instead of an "eu" as I would expect? -- 00:58, 8 August 2007 (UTC)

Thx, fixed. --Yurik 08:44, 9 August 2007 (UTC)Reply

Protectedpages


Yurik, I posted this on your enwiki talk page, then realized that this might be a better venue.

I was wondering if you (or any other API users) would be interested in reviewing an API query module I've cooked up, to see if it'd be worthy of a commit. It's basically a clone of ApiQueryAllpages, except it implements Special:Protectedpages instead of Special:Allpages, removing the need for bot developers to either screen-scrape Special:Protectedpages for a list of protected pages, loop through QueryLogevents, or run QueryAllpages through QueryInfo for protection information.

Perhaps it'd be better implemented by extending an existing module? Let me know what you think.  :) — madman bum and angel 07:08, 8 August 2007 (UTC)Reply

ApiQueryProtectedpages.php

I added this to the list=allpages - makes more sense there. Hope it does not ruin db performance. --Yurik 08:43, 9 August 2007 (UTC)Reply
Thanks! I shouldn't think it would, but I'll watch it carefully. If it does, then I suppose we can switch limits for that query to the slow query limits. Madman 16:18, 9 August 2007 (UTC)Reply
Unfortunately it is not the number of items that takes long time, it's the query itself. Will monitor it later on. --Yurik 05:28, 13 August 2007 (UTC)Reply

rvlimit


hi! the documentation says

 rvlimit        - limit how many revisions will be returned (enum)
                  No more than 50 (500 for bots) allowed.

this seems not to be true: when logged in, i can get 500 revisions without having the bot flag set. -- 12:46, 10 August 2007 (UTC)Reply

I think administrators have the same limits as bots. Tizio 16:14, 14 August 2007 (UTC)

result-less lists


Another thing that bugs me: when getting e.g. list=backlinks for a page with zero links to it, I get

[] 

back. i'd expect

{ "query": 
  { "backlinks": 
    [] } } 

instead. the same goes for format=xml where i get a simple <api /> and many other places than just list=backlinks -- 18:16, 11 August 2007 (UTC)Reply

Please file a bug with a sample query url. --Yurik 05:27, 13 August 2007 (UTC)Reply
I filed a similar bug (bug 10887) Bryan Tong Minh 21:35, 14 August 2007 (UTC)Reply

Post/edit data


Posting data via the api isn't possible at this time. So, which other method can be used for it right now? It seems like there are some alternatives:

  • post form data via http
  • post data via MediaWiki's XML import interface
  • use one of the bot frameworks that encapsulate that job, so we don't have to think further

But there aren't any PHP bot frameworks right now. Has anybody implemented posting data via PHP yet? If so, please link/post your solution. Thanks!

I'm new to HTTP programming, so forgive me if I'm misunderstanding your question. You can use index.php for writing a page. I have some Java code which does that (and more that is specific to my bot) at en:User:WatchlistBot/source.java; the writing is in Page.java, in the put() method. I got started using en:User:Gracenotes/Java_code, which is much less code and shows how to read/write pages. Mom2jandk 20:23, 28 August 2007 (UTC)
Extension:Simple Forms allows editing/creating articles from a URL, but doesn't use edit tokens. Once the API supports editing, SimpleForms will use that in preference to its own methods, but for now it is a working PHP way of editing articles. --Nad 21:03, 28 August 2007 (UTC)
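
A hedged PHP outline of the index.php form-post approach mentioned above (the field names are those of the standard edit form; obtaining wpEditToken/wpEdittime from the edit page, and logging in to get the cookies, are left out):

<?php
// Save a page by POSTing the usual edit-form fields to action=submit.
$editToken = '...'; // value of wpEditToken scraped from the edit form
$editTime  = '...'; // value of wpEdittime from the same form
$fields = array(
	'wpTextbox1'  => 'New page text',
	'wpSummary'   => 'edited via script',
	'wpEditToken' => $editToken,
	'wpEdittime'  => $editTime,
	'wpSave'      => 'Save page',
);
$ch = curl_init( 'http://mydomain.com/index.php?title=Sandbox&action=submit' );
curl_setopt( $ch, CURLOPT_POST, 1 );
curl_setopt( $ch, CURLOPT_POSTFIELDS, http_build_query( $fields ) );
curl_setopt( $ch, CURLOPT_COOKIEFILE, 'cookies.txt' ); // reuse the login cookies
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, 1 );
$response = curl_exec( $ch );
curl_close( $ch );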

How to find API URL?


Let's say I'm creating some tools that get information (via the API) off a wiki specified by the user.

So the tools need the API's URL, but how does the user know the URL of the API? For example, on Wikipedia mod_rewrite is used to give pretty URLs, so the API is available at http://en.wikipedia.org/w/api.php. We know it is there because we are interested in that kind of thing and read mediawiki.org, but if my tools asked the user for the URL of the wiki's API, I think it is unlikely they would have known to use this address.

Is there a specified pattern to where the non-modrewrite versions of the URLs go in a wiki, or is the /w thing just a weak convention? Is there an established way for the URL of the api to be advertised?

Ideally, I'd like the user to just be able to give the URL of the wiki's main page, and for the URL of the API to be somehow discoverable from that. I think the normal URL of the main page is the easiest address for the user to give, especially given that tools using the API will be used by people who don't know what an "API", "web services", "mod rewrite" etc are. Jim Higson 15:00, 24 August 2007 (UTC)Reply

If the user can give you the location of index.php (available by editing a page and looking at the URL), you should be able to replace index.php with api.php. But really you have no control over HTTP rewriting, so on a particular server it's possible that api.php is not accessible from the same place index.php is accessible from. CBM 16:39, 24 August 2007 (UTC)Reply
Yes, I know about replacing "index.php" with "api.php", but on most MediaWiki installations this won't work because it will try to rewrite the URL to index.php?title=api.php. Asking the user to edit a page and look at the URL is not as user friendly as if they just had to enter the URL of the main page.
How about this as a suggestion for advertising the API: the main interface accepts action=advertiseapi, and when this is present responds with the URL for api.php? This way the API could be found quite easily by automated tools without preknowledge of that particular wiki's URL structure. Jim Higson 13:28, 25 August 2007 (UTC)Reply
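
One rough way to automate the guess (hedged and illustrative only): given any page URL, probe a handful of common script paths and keep the first one that answers like api.php.

<?php
// Return the first common api.php location that responds with an <api> document.
function findApiUrl( $pageUrl ) {
	$parts = parse_url( $pageUrl );
	$base = $parts['scheme'] . '://' . $parts['host'];
	$candidates = array( '/w/api.php', '/api.php', '/wiki/api.php', '/mediawiki/api.php' );
	foreach ( $candidates as $path ) {
		$body = @file_get_contents( $base . $path . '?action=query&meta=siteinfo&format=xml' );
		if ( $body !== false && strpos( $body, '<api' ) !== false ) {
			return $base . $path;
		}
	}
	return null;
}
echo findApiUrl( 'http://en.wikipedia.org/wiki/Main_Page' );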

http response codes


I wasn't sure where to ask this, so please let me know if somewhere else would be better. I'm porting my bot (en:User:WatchlistBot) to java, using the API. I get an http response code of 400 when I try to write this page, and a 403 code when I try to write other pages (the 403 code is recent, the other has been happening for awhile). The pages seem to be written correctly despite the error codes. I'm an experienced java programmer, but new to http. I looked up generally what the error codes mean, but it's not very helpful. Can anyone tell me specifically what this means? I have an older version of the source posted and linked from the bot page. I can update that if it would help. Mom2jandk 20:11, 28 August 2007 (UTC)Reply

A useful online tool for debugging http requests and responses is http://web-sniffer.net. Extract the exact request your bot is making, then make that same request from the web-sniffer and it will show you exactly what the server is responding with. --Nad 21:12, 28 August 2007 (UTC)Reply
I had a look at the Java code and noticed that "wpSave" was missing from the POST request vars; to replicate a normal post, MW may like to have that set to "Save page". --Nad 21:27, 28 August 2007 (UTC)

GET method preview


See 11173 for details. Looking for some feedback.

Addendum: I have tested the LivePreview feature, and have determined that any vulnerabilities this could possibly introduce already basically exist via LivePreview and even normal Preview. Currently, to submit a preview, all you need is to send POST data for the wpTextbox1 and wpPreview=Show+preview (I believe). Splarka 23:45, 3 September 2007 (UTC)Reply

Login using CURL + API


Hi, I tried to let the user log in using the API method.

the code is like this:

$postfields = array();
$postfields[] = array("action", "login");
$postfields[] = array("lgname", "$id");
$postfields[] = array("lgpassword", "$pass");
$postfields[] = array("format", "xml");

foreach ($postfields as $subarray) {
    list($a, $b) = $subarray;
    $b = urlencode($b);
    $postedfields[] = "$a=$b";
}

$urlstring = join("\n", $postedfields);
$urlstring = ereg_replace("\n", "&", $urlstring);
$ch = curl_init("http://mydomain.com/api.php");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $urlstring);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$buffer = curl_exec($ch);
curl_close($ch);


However, this code does not log in the user when it is executed.

May I know whether I can use cURL with the MediaWiki API?

Or is there any other way to make the login work besides sending a header location???

I don't see where you specify that cookies are to be used. I don't use libcurl under PHP, but the C binding has the options CURLOPT_COOKIEFILE, CURLOPT_COOKIEJAR, and CURLOPT_COOKIE, which are probably the same in PHP. Tizio 12:08, 12 September 2007 (UTC)Reply
Try Snoopy, a PHP class that makes this much easier, and also supports cookies. --Catrope 14:23, 12 September 2007 (UTC)Reply
Oh, I'm trying to put the variables like lgname and lgpassword in the link that will be posted to the API; the link will look like this: http://mydomain.com/api.php?action=login&lgname=$id&lgpassword=$pass&format=xml

When I echo $buffer, it gives me output like

<?xml version="1.0" encoding="utf-8" ?>
<api>
  <login result="Success" lguserid="3" lgusername="Liyen" lgtoken="qq23f583d20cae7508d79abbbe8f8217" />
</api>


The output looks just like what we get by typing the URL into the browser address bar! How come cURL is not working, but if I type the link in the address bar, it works?

I'm new to the MediaWiki API... can anyone please guide me? If I must use cookies in this case, how can I apply them?

Thanks a lot!!! --Liyen

Try
curl_setopt($ch, CURLOPT_COOKIEFILE, "cookies.txt");
curl_setopt($ch, CURLOPT_COOKIEJAR, "cookies.txt");
before curl_exec. Tizio 16:53, 12 September 2007 (UTC)Reply
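For comparison, a minimal sketch of the same login flow in Python, where a requests.Session keeps the cookies automatically (the wiki URL and credentials are placeholders; recent MediaWiki versions additionally require fetching a login token first, as shown):

import requests

S = requests.Session()                      # the session stores cookies between requests
API = "https://mydomain.com/api.php"        # placeholder URL

# Newer MediaWiki versions require a login token before action=login.
token = S.get(API, params={
    "action": "query", "meta": "tokens", "type": "login", "format": "json",
}).json()["query"]["tokens"]["logintoken"]

r = S.post(API, data={
    "action": "login", "lgname": "MyUser", "lgpassword": "secret",
    "lgtoken": token, "format": "json",
}).json()
print(r["login"]["result"])                 # "Success" once the session cookies are accepted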

Count Revisions

[edit]

It would be nice to get the total number of revisions while getting info for a given page. Currently one needs to make several calls if the number of revisions is larger than 50 (the current limit). Is this being considered, or are performance issues at stake? -- Sérgio

Get page contents after a given revision?

[edit]

Hi, is it possible to get the page contents after a given revision? I know how to get the contents of the revision, but what I currently want is the full page after the revision has been incorporated. --Sérgio

Those are the same. The contents of revision 12345 are the contents of the page after revision 12345 was incorporated. --Catrope 13:54, 8 October 2007 (UTC)Reply

Wikipedia API URL bad guess

[edit]
$ w3m -dump http://en.wikipedia.org/api.php
Did you mean to type w:en:api.php ? You will be
automatically redirected there in five seconds.

No, I meant to type http://en.wikipedia.org/w/api.php ... you see, one cannot guess where it might be on different installations. Anyway, the above Wikipedia message could give a better guess. Jidanni 00:46, 24 October 2007 (UTC)Reply

I think people going to en.wikipedia.org/Dog meaning to go to /wiki/Dog instead are a lot more common than people mixing up /api.php and /w/api.php. Most people who use api.php know where it's located. --Catrope 22:34, 30 October 2007 (UTC)Reply

Sorting

[edit]

Is there any function for sorting the query results? Especially for lists of pages? Augiasstallputzer 12:16, 11 November 2007 (UTC)Reply

No. But it's fairly trivial to write a script (PHP or otherwise) that sorts the list for you. --217.121.125.222 20:01, 11 November 2007 (UTC)Reply
[edit]

Hi,

I am using the backlinks query for Wikipedia Cleaner and it seems that it's not working any more. I was using it with the titles parameter, but I tried bltitle and it's still not working. The example provided in API:Query - Lists is not working either: http://en.wikipedia.org/w/api.php?action=query&list=backlinks&bltitle=Main%20Page&bllimit=5&blfilterredir=redirects returns no backlinks. --NicoV 21:07, 16 November 2007 (UTC)Reply

It does for me. --Catrope 10:25, 17 November 2007 (UTC)Reply
Yes, it's working now, but yesterday there was no <query> tag, only the <query-continue> tag. --NicoV 18:38, 17 November 2007 (UTC)Reply

Used templates' parameters

[edit]

It would be great to be able to get the parameters of the templates used on a page. Sorry, that was me :)

I mean - right now I have to get the page content and parse it for template parameters (like Commons categories or coordinates), and this generates a huge amount of traffic. || WASD 01:22, 20 December 2007 (UTC)Reply

Unfortunately, these parameters aren't stored in the database separately (only in the article text), so we can't get them efficiently. --Catrope 13:34, 20 December 2007 (UTC)Reply
All right then, at least now I know that it's not possible :) || WASD 19:59, 20 December 2007 (UTC)Reply

Hi,

these parameters aren't stored in the database separately, but wouldn't it be useful to have the possibility to parse them by default from the API, so that not everybody has to parse templates by himself? Only a parameter which replaces {{ and | with < (and of course the template/XML closing tags) ;)? No less traffic/computing time, but easier to use. merci & greetz VanGore 22:31, 1 September 2008 (UTC)Reply

The only thing this does is move computing time from the client to the server. Now, parsing template parameters just once for one page may be cheap, but parsing them over and over again for hundreds or thousands of clients is not. That's why we try to move computing time from the server to the client if that's a responsible thing to do, like in this case. If the parameters were stored in the database, retrieving them on the server side would be cheaper than extracting them on the client side, so in that case this feature would obviously be added. But since it's not, it won't. --Catrope 11:30, 2 September 2008 (UTC)Reply

Hi Catrope,

thanks for your reply. I understand that big projects like the Wikimedia projects can't offer XML-parsed templates. But for most data queries, people use their own dumps. Would it be possible to implement the template-to-XML feature in the software, just not as the default? I tried to understand the template structure and the hints in de:Hilfe:Personendaten/Datenextraktion, but I didn't understand them. Do you know of any documentation about MediaWiki templates? merci & greetz VanGore 10:12, 3 September 2008 (UTC)Reply

Judging by the database queries in that document, I'd say it's way out of date. This wiki also has information about the general database layout and the table that keeps track of template usage. Template parameters are only stored in the actual wikitext, so you can't find them in any database tables. The wikitext itself is stored in the text table. --Catrope 13:44, 3 September 2008 (UTC)Reply

Hi Catrope, "I'd say it's way out of date" - yes, I searched too long for cur ;). Thanks for the rest, I will try now with this information. Last question: do you know where I can find the PHP code for templates? merci & greetz VanGore 15:08, 3 September 2008 (UTC)Reply

The PHP code for parsing templates is probably in includes/parser/Parser.php , but the parser is a complex part of the MW code that's difficult to understand. You'd probably be better off writing your own code to extract template arguments. If you need advice, try talking to Tim Starling, he wrote the parser. --Catrope 15:23, 3 September 2008 (UTC)Reply

Hi Catrope,

cool, I couldn't see the wood for the trees ;) Reading includes/Parser.php and Manual:Parser.php is quite good for understanding; writing [my] own code to extract template arguments will be easier, I hope ;) Otherwise I'll come back or contact Tim... merci & greetz VanGore 16:02, 3 September 2008 (UTC)Reply
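A note for later readers: client-side extraction of template arguments no longer has to be written from scratch. A minimal sketch, assuming Python and the third-party mwparserfromhell library (the wikitext string is a placeholder, not a real page):

import mwparserfromhell

wikitext = "{{Infobox country|capital=Berlin|population=83000000}}"  # placeholder page text
code = mwparserfromhell.parse(wikitext)
for template in code.filter_templates():
    print(str(template.name).strip())
    for param in template.params:
        # param.name and param.value are wikicode; str() gives the raw text
        print("  ", str(param.name).strip(), "=", str(param.value).strip())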

usercontribs not working any longer?

[edit]

Hi!
For example, http://de.wikipedia.org/w/api.php?action=query&list=usercontribs&ucuserprefix=84.167 does not list edits made in 2008. I have to use "ucend" explicitly, but IIRC yesterday I did not have to. So, is there no way to get the latest edits of a range (without setting "ucend")? -- 85.180.68.18 09:27, 27 March 2008 (UTC)Reply

ucuserprefix now also sorts by username before it starts sorting by date. I know that's weird, but it's for performance reasons. --Catrope 20:02, 28 March 2008 (UTC)Reply
OK, thanks. So (as said in #Sorting) there's no way for me to change that behavior? The only solution I can imagine now is a self-written script which gets the result of api.php and post-processes the data offline. Is there a better way? -- 85.180.71.89 17:06, 29 March 2008 (UTC)Reply
I'll request a DB schema change so we can go back to the previous, more intuitive behavior while not killing performance. --Catrope 12:11, 6 April 2008 (UTC)Reply

Getting the URL of a page

[edit]

It would be very useful to have the full URL to the page in "prop=info". Is there any way to get it in another way? Thanks! -- OlivierCroquette 18:48, 3 June 2008 (UTC)Reply

It's not that difficult. From /wherever/api.php you need to go to /wherever/index.php?title=Foo_bar . If the wiki uses pretty URLs, you'll automatically be redirected to the pretty URL. --Catrope 10:55, 20 June 2008 (UTC)Reply
I'm also curious as to how to generate a full URL for a page. This seems like an odd thing not to provide. Here is the issue. I would like to give the API query string to an external application (SharePoint) that can consume an XML or JSON result and display a list of pages as links back to the wiki. There seems to be no property that generates a full URL.
One step further, it would be nice to add the ability to generate a full URL to list=categorymembers as well.
Konjurer (talk) 14:18, 10 January 2013 (UTC)Reply
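A note for later readers: newer MediaWiki versions can return the full URL directly via prop=info with inprop=url (the title below is only an example):

https://en.wikipedia.org/w/api.php?action=query&prop=info&inprop=url&titles=Main%20Page

The result then includes fullurl and editurl attributes for each page.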

Generator + categoryinfo bug

[edit]

I think I have found a bug in the API. (I don't have a bugzilla account and don't know how to use bugzilla so I'll report here instead.)

Bug description:

When using a generator to get a list of categories and then using "prop=categoryinfo", the categoryinfo is not shown for all the categories in the list. I have tested this on several Wikimedia projects and I see the same bug on all of them.

Background:

We were discussing how to use the API to find redirected categories that still contain pages, so we can fix those pages. See en:Template talk:Category redirect#New categorization ideas. Using the API would be way more efficient than our current approaches.

Examples:

Here are two different queries that show the bug. The first query lists the categoryinfo for the hard redirects. The other query lists the categoryinfo for our "soft" redirects (that is categories with our "this category is redirected" template on them).

http://en.wikipedia.org/w/api.php?action=query&generator=allpages&gapnamespace=14&gapfilterredir=redirects&gaplimit=500&prop=categoryinfo

http://en.wikipedia.org/w/api.php?action=query&generator=categorymembers&gcmtitle=Category:Wikipedia_category_redirects&gcmnamespace=14&gcmlimit=500&prop=categoryinfo

I hope you guys can fix this, since this would be a very good way for us to find the pages that need their categorising fixed. (Easier for the bots and humans, and it costs less server load.) And it would work on all projects.

--Davidgothberg 12:35, 26 August 2008 (UTC)Reply

This is really a symptom of a very different bug: categories that have a description page but never had any members aren't considered categories by the software. Filed at BugZilla. --Catrope 12:57, 26 August 2008 (UTC)Reply
Oh! Your answer solves the problem in our case! If those categories have never had any members, then we don't need their category info. Well, I think it solves it for all such usage, since as long as you know that "no category info" = empty category, it should be okay. That should perhaps be documented in the API documentation.
Thanks for your answer. I know some bot owners on some projects that will be very happy to be able to use this query now. I'll report it to them.
--Davidgothberg 13:11, 26 August 2008 (UTC)Reply
Knowing that no categoryinfo means no members is nice, but of course this is in fact a bug and should be fixed. The bug causing it is outside of the API, though, which is why I filed it at Bugzilla. --Catrope 14:37, 26 August 2008 (UTC)Reply

Difference with query.php

[edit]

There was a useful feature in query.php: when getting category members, the allowed prop included timestamp (revision table) and touched (page table). With api.php we can only get the timestamp (action=query, list=categorymembers, cmprop=ids|title|sortkey|timestamp). touched is useful because you can implement a category-members cache on the client side: when the category members change by adding/removing a category in an article, touched is updated and the client knows it must update its cache. Phe 07:58, 27 August 2008 (UTC)Reply

You can get touched by using generator=categorymembers&prop=info --Catrope 11:15, 27 August 2008 (UTC)Reply
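For instance, a query along these lines (the category name is only an example) returns the touched timestamp for every member through the generator:

https://en.wikipedia.org/w/api.php?action=query&generator=categorymembers&gcmtitle=Category:Physics&gcmlimit=50&prop=info&format=xml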
Nice tip, thanks, and it was given as an example in the documentation ... Phe 15:56, 27 August 2008 (UTC)Reply

Is an XML Schema or DTD for the API responses available?

[edit]

Hi there. I'm working on a Java client to access the MediaWiki API. I would like to use XML as the response format and JAXB to process the API responses. So it would be great if there were already a schema or DTD to generate the corresponding Java classes from. Thanks, --Gnu1742 10:50, 29 August 2008 (UTC)Reply

We're working on such a feature, see this bug. --Catrope 21:00, 31 August 2008 (UTC)Reply

Problems with action=edit

[edit]

Hi, I have some problems with api.php?action=edit. It always tries to edit w/api.php instead of the page with the specified title. At first I thought that it was a mod_rewrite problem, but the query parameters seem to be correct.

Seems to be this bug http://www.organicdesign.co.nz/MediaWiki_1.11_title_extraction_bug
If disabling your rewrite rules fixes this, it's not a bug in MediaWiki. --Catrope 20:59, 19 October 2008 (UTC)Reply
Only a problem if you're using short URLs. I have found a quick and easy workaround: in your LocalSettings.php, wrap your $wgArticlePath="/$1" line with the condition if (preg_match("/api\.php$/", $_SERVER['PHP_SELF'])) { ... } --FokeyJoe (2009-11-25)

Location Search?

[edit]

Hi. This is my first time participating in a Wikipedia discussion, so I apologize if I'm not following proper etiquette by posting here like this. Over the summer, I used some kind of location search to request a list of articles that were in a particular location. You could filter the results by the category the point of interest fell under (natural formation, government institution, business, etc.). I can't seem to find any mention of this old functionality, which was quite handy. All I can find are two jailbroken iPhone apps that seem to do this sort of thing.

The first is this app, which is using a cached copy of the data set from the wikipedia-world project. It's close, but I'm sure there used to be (or maybe should be) a way of making these kinds of queries through the API.

The other is Geopedia, an iPhone app which seems to do the kind of searching I thought was possible. However, there's no documentation, no homepage and no way for me to contact the developer to ask how they did this.

Am I going crazy? I could have sworn that there was a way of searching Wikipedia articles for those that are within some distance of a specific coordinate pair. Can someone point me in the right direction?

As far as I know these coordinates aren't stored in the database separately, so any implementation that isn't hugely inefficient would have to make such a database table, either automatically using an extension, or using a database dump. --Catrope 18:25, 24 November 2008 (UTC)Reply
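A note for later readers: on Wikimedia wikis this has since become possible through the GeoData extension, which adds list=geosearch; a query of roughly this shape (the coordinates and radius are just examples) returns pages near a point:

https://en.wikipedia.org/w/api.php?action=query&list=geosearch&gscoord=37.7891|-122.4015&gsradius=10000&gslimit=10&format=json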

Account simulation

[edit]

With the ability to simulate an account when performing an action, MediaWiki could be completely integrated into virtually any site. This would require a new sort of flag in LocalSettings to allow trusted systems to act that way. For example, I'm interested in only the history and parsing/editing bits, and have my own account management, access control system, and page generation system. 75.75.182.36 01:57, 28 February 2009 (UTC)Reply

You could just use action=login to log in with the account you want to 'simulate'. You can allow authentication from other sources with AuthPlugins. --Catrope 10:10, 28 February 2009 (UTC)Reply

Persistent Connections?

[edit]

Is there any possibility Wikimedia's HTTP can be upgraded to 1.1, in particular to allow persistent HTTP connections? Or is there some other way I'm missing of connecting to a persistent server, so I don't have to re-establish the connection for every single usage of the API? Language Lover 17:09, 20 April 2009 (UTC)Reply

It appears to me (and good old openssl and wireshark) that the secure.wikimedia.org server does indeed allow persistent connections and does support HTTP/1.1 keep-alive as standard. -- JSharp 01:31, 21 April 2009 (UTC)Reply
Oops, I think you might need this: [4] . It's the ssl-enabled link to the en.wp API. :) -- JSharp 01:36, 21 April 2009 (UTC)Reply
Is there any news? I am facing the same problem. My application sends many but very short requests (with very short answers), so connection opening and closing takes most of the time and traffic for each request. Persistent connections would speed things up. (Even secure.wikimedia.org seems to disallow persistent connections.)

How to integrate

[edit]

Hi

Apologies for being totally ignorant of HTML and MediaWiki, but I wonder if there's a way using MediaWiki markup or using HTML to get the output of a query (such as this one) into a Wikipedia page, formatted in some reasonable way. As I don't regularly check this site, I would appreciate a {{Talkback }} at my en talk page (in sig below).

Thanks Bongomatic 04:24, 1 October 2009 (UTC)Reply

Image license information

[edit]

Is there a way to get the license of an image through the api?

By category is probably easiest, assuming the site categorizes by license. There is no built-in module for license information, though. Splarka 08:45, 22 January 2010 (UTC)Reply

Possible to detect a page's existence?

[edit]

Hi there. I'm currently working on a bit of PHP software. I'm wondering if it's possible to detect a page's existence using the API? If not, is there any external way of doing so? Thanks. Smashman2004 21:42, 7 July 2010 (UTC)Reply

Fail reply is fail. Smashman2004 16:41, 15 July 2010 (UTC)Reply

Throttling

[edit]

Does the API implement any type of throttling for non-bot users? There is discussion at the Spam attacks article on the Signpost this week about the need for such throttling. Apparently the attacker mentioned in the article was able to post at an average rate of 1 article per second, which is a little bit scary. Best practice among bot operators is generally no faster than 1 post every 5 seconds. Hard-coding such a limit into the API (at least for accounts not approved as bots) might not be a bad idea. Kaldari 17:15, 17 August 2010 (UTC)Reply

Manual:$wgRateLimits applies to the API as well as normal edits, as far as I know. Bawolff 23:23, 17 October 2010 (UTC)Reply

API/userid?

[edit]

Is there a way to get someone's userid from the api?

raw code

[edit]

The documentation says that for the source code of pages "index.php?action=raw" should be used. Is this mandatory or is there a way to do it over the API? --::Slomox:: >< 00:13, 22 November 2010 (UTC)Reply

Yes, you can. See API:Query_-_Properties#revisions_.2F_rv. Bawolff 03:58, 22 November 2010 (UTC)Reply
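For instance, a query of this shape (the title is only an example) returns the raw wikitext of the latest revision through the API:

https://www.mediawiki.org/w/api.php?action=query&prop=revisions&rvprop=content&titles=API:Main%20page&format=xml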

Support for multiple categories when using list=categorymembers

[edit]
  • jlaska 66.187.233.202 - I see there are extensions that allow querying for pages that are members of multiple categories (such as Extension:CategoryIntersection and Extension:Multi-Category_Search). Those are nice as they provide a Special: page; however, they don't appear to add API support. From what I've tested, it appears that list=categorymembers does not support joining multiple cmtitle= values. Are there plans to add this support, or alternative API methods for finding pages that exist in multiple categories?
I was thinking of using two queries for an AND-query, where the second query would use the returned pageids from the first query with the parameters action=query, list=categorymembers and the pageids parameter, but the pageids parameter does not restrict the search to pages among the list of pageids. — Fnielsen (talk) 10:29, 6 October 2012 (UTC)Reply

Extract headings and subheadings

[edit]

Hi. Can the API extract headings and subheadings easily? Cheers, 131.111.1.66 11:20, 13 March 2011 (UTC).Reply

Sure --Catrope 11:40, 13 March 2011 (UTC)Reply
Thank you Catrope; very useful! 86.9.199.117 22:04, 13 March 2011 (UTC)Reply
[edit]

Hi. Can I extract all links of a given page easily using the API? Thanks. 86.9.199.117 22:02, 13 March 2011 (UTC)Reply

You can get all links on a page or get all links to a page. --Catrope 22:45, 13 March 2011 (UTC)Reply
Thanks. Randomblue 11:11, 14 March 2011 (UTC)Reply
Is it possible to sort them by the order they are in the page? Helder 14:59, 14 March 2011 (UTC)
No. --Catrope 17:49, 14 March 2011 (UTC)Reply

Stripping off templates, refs, interwiki links, etc.

[edit]

Hi. Can the API keep only the text (with links) and headings from an article, stripping off all the rest? Thank you. Randomblue 11:11, 14 March 2011 (UTC)Reply

No, you'd have to do that yourself somehow. --Catrope 11:24, 14 March 2011 (UTC)Reply

Extracting template information

[edit]

I would like, for example, to extract information from infoboxes. Suppose that I'm interested in country infoboxes. I would like to be able to extract, for each page that transcludes the country infobox, basic information such as "capital", "population", "president", etc. Has this already been done? What would be the best way to do this? Cheers, 173.192.170.114 17:39, 22 March 2011 (UTC).Reply

This is not available from the API directly. There are projects like DBpedia that have done work in this direction --Catrope 14:51, 23 March 2011 (UTC)Reply

A way to retrieve article class?

[edit]

Even though I've searched the MediaWiki API thoroughly, I haven't found a way to retrieve article class information, such as A-class, good, featured, etc. Exported article text does not contain this information, nor is there a property corresponding to it. Moreover, dumping the page content with Perl's WWW::Mechanize doesn't help, because the relevant text is generated on the fly and is not captured by Mechanize.

I'd appreciate any pointers… Patrinos 08:52, 9 April 2011 (UTC)Reply

Concepts like featured articles were invented by Wikipedians, but they don't exist as such in the MediaWiki software. You may be able to detect featured-ness using categories or templates used on the page or something like that. --Catrope 10:15, 12 April 2011 (UTC)Reply
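As an illustration of the category approach: on the English Wikipedia, featured articles are tracked in a category with roughly the name used below, so a query like the following (any title can be substituted) shows whether a page carries that category:

https://en.wikipedia.org/w/api.php?action=query&prop=categories&clcategories=Category:Featured%20articles&titles=Moon&format=xml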

apfrom bug?

[edit]

Hi. 1) I load from '%': http://en.wikipedia.org/w/api.php?action=query&list=allpages&apfrom=%&aplimit=11&format=xml 2) apfrom is '%d', so I get http://en.wikipedia.org/w/api.php?action=query&list=allpages&apfrom=%d&aplimit=11&format=xml 3) Now apfrom is '%25', so I get http://en.wikipedia.org/w/api.php?action=query&list=allpages&apfrom=%25&aplimit=11&format=xml 4) Voilà, I'm on '%' again. Why? Emijrp 09:40, 9 April 2011 (UTC)Reply

Because you didn't w:percent-encode your parameters when constructing your URL. --R'n'B 11:32, 10 April 2011 (UTC)Reply

API queries from PHP

[edit]

Are all API queries callable using PHP, e.g. for MediaWiki extensions? Randomblue 15:49, 10 April 2011 (UTC)Reply

Yes, see API:Calling internally. However, for most things you'll probably want to use core functionality or a database query rather than going through the API. --Catrope 10:17, 12 April 2011 (UTC)Reply

Retrieve data from a category with more than 500 articles

[edit]

I want to make a list of the Italian_verbs from Wiktionary. I use the query:

To get the first 500 items. How do I get the rest of the data? (I think it has to do with cmcontinue, but I don't understand how to use it...) Thanks Jobnikon 17:31, 8 May 2011 (UTC)Reply

At the bottom, you'll see <categorymembers cmcontinue="page|1833609|ALLOTTARE ALLOTTARE" />. So to get the next 500 results, repeat the same API call with cmcontinue=page|1833609|ALLOTTARE%0AALLOTTARE (the %0A part is what you get when you XML-decode then URL-encode). --Catrope 16:36, 9 May 2011 (UTC)Reply
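For later readers, a minimal sketch of such a continuation loop, assuming Python with the requests library and a recent MediaWiki that uses the continue parameter (the category name is only an example):

import requests

S = requests.Session()
API = "https://en.wiktionary.org/w/api.php"
params = {
    "action": "query",
    "list": "categorymembers",
    "cmtitle": "Category:Italian verbs",   # example category; adjust as needed
    "cmlimit": "500",
    "format": "json",
}
while True:
    data = S.get(API, params=params).json()
    for member in data["query"]["categorymembers"]:
        print(member["title"])
    if "continue" not in data:
        break
    params.update(data["continue"])        # carry cmcontinue etc. into the next request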
[edit]

Please, is there a way in the API to get a list of images in a wiki along with the pages that link to these images? --Mohamed Ouda 14:15, 2 July 2011 (UTC)Reply

Not in one request, no. You can get the list of all images with list=allimages and you can get the list of pages linking to a specific image with list=imageusage. The latter only takes one image at a time though. --Catrope 10:21, 3 July 2011 (UTC)Reply

Get translations from wiktionary?

[edit]

I wonder how to use Wiktionary to translate words, for example to get a data dump with all the English words and their translations in Spanish? I have seen it done on this site http://www.dicts.info/doc/dl/download.php so it's possible. But how do they do it?

Probably by downloading a dump of all Wiktionary content and parsing the translations out of it. MediaWiki itself doesn't treat the translations specially, they're just words appearing in a box with fancy styling, so the API doesn't provide any way to retrieve these short of grabbing the entire page content and parsing out the translations yourself. --Catrope 09:01, 28 October 2011 (UTC)Reply

Problems with MW1.18?

[edit]

We're in soft launch for a new wiki and playing around with MW 1.18 (yes - we know it's beta). In testing out the HotCat gadget, we've noticed a problem with our API. It wasn't an issue with MW 1.17, and while we are playing around with things during the soft launch, I can't think of any settings that would have an impact on that. The API was working correctly before, with MW 1.17 - and nothing was changed between the upgrade and the test of the API. It doesn't seem to output all of the data. Examples:

Anyone have any ideas? --Varnent 01:41, 11 November 2011 (UTC) (updated URLs)Reply

This was fixed in trunk in r99236 but it was overlooked and didn't make it into the 1.18 beta. I've tagged it for 1.18 so it'll go in the final release. You can try applying the diff of that revision locally (it's an easy one), that should fix it. --Catrope 13:13, 11 November 2011 (UTC)Reply
Excellent - thank you!! --Varnent 19:27, 11 November 2011 (UTC)Reply

enable API

[edit]

If this is the article on how the API works, shouldn't there be a short blurb saying that you add $wgEnableAPI = true to enable it? Or is that not correct? These instructions are always so subpar. Igottheconch 06:57, 12 December 2011 (UTC)Reply

The API is enabled by default in non-ancient versions of MediaWiki. Also, if you feel the instructions are "so subpar", by all means go and improve them. It's a wiki, anyone can edit. --Catrope 14:53, 14 December 2011 (UTC)Reply

Doubled Content-Length in HTTP Header

[edit]

I posted this on the help desk, but it probably is more appropriate here:

  • MediaWiki version: 16.0
  • PHP version: 5.2.17 (cgi)
  • MySQL version: 5.0.91-log
  • URL: www.isogg.org/w/api.php

I am trying to track down a bug in the API which is causing a doubled Content-Length in the header. This is causing a lot of issues with a Python bot. Here is the report from web-sniffer showing the content of the api.php call from this wiki. All other pages, when called (i.e. the Main Page, etc.), only report one Content-Length. Is the API forcing the headers? Why is only this one doubled?

Status: HTTP/1.1 200 OK
Date: Mon, 30 Jan 2012 14:31:25 GMT
Content-Type: text/html; charset=utf-8
Connection: close
Server: Nginx / Varnish
X-Powered-By: PHP/5.2.17
MediaWiki-API-Error: help
Cache-Control: private
Content-Encoding: gzip
Vary: Accept-Encoding
Content-Length: 16656
Content-Length: 16656

As you can see, this is an Nginx server. On an Apache server with 16.0, only one Content-Length is sent. Could that be the issue, and how do I solve it? Thanks.

-Hutchy68 15:10, 30 January 2012 (UTC)Reply

Wanted: showcase of cool uses

[edit]

I'd like to get a showcase of uses of the MediaWiki API -- can anyone link to apps or tools that use it well or interestingly? Sumana Harihareswara, Wikimedia Foundation Volunteer Development Coordinator 22:55, 1 February 2012 (UTC)Reply

A basic use (for editing pages) is described on w:Wikipedia:WikiProject User scripts/Guide/Ajax#jQuery examples. Helder 12:13, 2 February 2012 (UTC)

Is it possible to edit the length of a search snippet?

[edit]

Using the prop parameter "snippet" returns a small part of the beginning of the article, and for my needs it is a bit too small. Is there any way to edit the length (when using the API) of the snippet returned, to make it longer? --86.140.133.206 21:42, 24 March 2012 (UTC)Reply

Personal attack removed

template including list

[edit]

How do I get the list of pages which include a template through the API? Without the API I can get it using Special:WhatLinksHere, but the equivalent API function does not return this information to me... Sorry for my low level of English and if I am asking in the wrong place. --Base (talk) 15:54, 21 May 2012 (UTC)Reply

I found the answer: it is API:Embeddedin. Thanks --Base (talk) 16:12, 21 May 2012 (UTC)Reply

API_talk:Usercontribs#Just new pages

[edit]

Please answer my question here: API_talk:Usercontribs#Just new pages. Thanks. --BaseBot (talk) 20:48, 1 June 2012 (UTC)Reply

action=tokens

[edit]

Should we encourage usage of action=tokens, to fetch tokens, rather than the method described in API:Edit and related pages? →Στc. 01:26, 9 June 2012 (UTC)Reply

Wikt markup removed from API output

[edit]

Querying the API for an article which has wikt markup causes the term to be absent from the API response. See: http://en.wikipedia.org/w/api.php?action=opensearch&format=xml&search=Uncanny+Valley and http://en.wikipedia.org/wiki/Uncanny_valley . The word 'revulsion' is absent from the API response.

API:Edit - Set user preferences and other old proposals

[edit]

It doesn't make sense to keep that with the same naming convention as everything else. Is there already a convention for a better place to put it? If not, what about API Proposals/Edit - Set user preferences in the main namespace. I don't think the Manual: namespace works either, since the manual should be for stuff in the actual software. Superm401 - Talk 21:22, 30 January 2013 (UTC)Reply

[edit]

How would I format a request to the API to get a count of how many pages link to a specific page? I'm trying to write myself a little JavaScript which, in part, will expand the "What links here" link in my p-tb to include a count of pages in a specific NS (or two or three) that link to the article and a total count of pages that link to it. I intend to use such a script to save me some time when I'm tagging articles for improvement on enwiki, to quickly know whether the article is an "orphan" or not. Thank you. T13   ( C • M • Click to learn how to view this signature as intended ) 15:30, 12 April 2013 (UTC)Reply

Okay, so I have found that "api.php?action=query&list=backlinks&blnamespace=0&bltitle=" + wgPageName; will tell me if there are any articles that link to the page, but how can I get the full count (if greater than 500)? I don't care "what" articles link there, just how many. T13   ( C • M • Click to learn how to view this signature as intended ) 17:01, 12 April 2013 (UTC)Reply
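There is no count-only parameter, so the usual approach is to page through list=backlinks and count the results; a minimal sketch, assuming Python with the requests library (the title is only an example, and bllimit=max only reaches 5000 per request for bot/high-limit accounts):

import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "list": "backlinks",
    "bltitle": "Main Page",     # example title
    "blnamespace": "0",
    "bllimit": "max",
    "format": "json",
}
count = 0
while True:
    data = requests.get(API, params=params).json()
    count += len(data["query"]["backlinks"])
    if "continue" not in data:
        break
    params.update(data["continue"])
print(count)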

user is deleting text

[edit]

Can someone deal with this user? Can someone block him? 108.243.173.217 23:45, 29 April 2013 (UTC)Reply

Dealt with. Clearly not interested in contributing constructively.--Jasper Deng (talk) 23:55, 29 April 2013 (UTC)Reply

Retreiving List Pages/ Category pages

[edit]

Hey, I am trying to use the API to retrieve the pages inside a given list or category. This query seems to be working for me to get, for instance, everyone in the category American_film_actors Link: http://en.wikipedia.org/w/api.php?action=query&list=categorymembers&cmtitle=Category:American_film_actors&cmlimit=30&format=json

But I can't find anywhere in the documentation how to do something like this with a list page, e.g. List_of_accordionists Link

Is it possible for someone to point me in the right direction or pass me a sample query that will accomplish this?

You mean the list of links in the page? See links / pl. --NicoV (talk) 21:01, 13 June 2013 (UTC)Reply

That was it! Thanks very much!

Selective leak of data provided by recentchanges as a generator

[edit]

Here is a request:

http://pl.wikipedia.org/w/api.php?action=query&prop=categories&format=xml&generator=recentchanges&grcnamespace=0&grcprop=title&grcshow=!redirect&grclimit=10&grctype=new

It should return categories for the latest new pages, but its behaviour is weird to me. For some pages it returns categories and for others it doesn't. It's not about outdated data in the API (e.g. someone already added categories, but the API still doesn't know that), because many of the pages with no categories (as the API reports) actually have categories.

Can somebody help me with that? --81.190.176.223 15:28, 29 June 2013 (UTC)Reply

Ask about active user

[edit]

Hello, I want to ask how the MediaWiki API counts active users.

When I look at the site statistics (https://id.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=statistics), it says that there are 1743 active users. But when I list all of the active users (https://id.wikipedia.org/w/api.php?action=query&list=allusers&auprop=editcount%7Cgroups%7Cregistration&aulimit=500&auactiveusers), there are more than 1743 rows.

Can anyone explain? It also gives a different result from the special page (https://id.wikipedia.org/wiki/Istimewa:Pengguna_aktif). Is it possible to extract data from the MediaWiki API that matches the Special:ActiveUsers page? Thank you. William Surya Permana (talk) 17:41, 10 August 2013 (UTC)Reply

abuselog: Permission denied

[edit]

I have all the rights needed to see the abuse log, but the request "action=query&list=abuselog" returns an error with code="aflpermissiondenied" info="Permission denied" --AS (talk) 16:02, 22 December 2013 (UTC)Reply

Works fine for me, even when logged out, on this wiki. If you are talking about a different wiki, make sure that a group you are in has the abusefilter-log permission in Special:ListGroupRights. --Skizzerz 18:38, 22 December 2013 (UTC)Reply
Now works for me too... Anyway, thanks. --AS (talk) 22:38, 22 December 2013 (UTC)Reply

Removing formats except JSON

[edit]

According to the page, there are plans to remove all formats except JSON. First off, what do we have that supports that statement? Second, I would assume that XML would also be a standard format that would be kept, but that needs to be made clear, assuming I'm correct. – RobinHood70 talk 05:57, 8 January 2014 (UTC)Reply

For anyone interested, the proposal is indeed to remove XML. You can read more about it here. – RobinHood70 talk 03:32, 26 February 2014 (UTC)Reply
[edit]

Is there a way to retrieve the URL of the wiki's logo? I expected it to be in meta siteinfo, but it's not. 124.181.108.69 07:12, 25 March 2014 (UTC)Reply

There's logo="//upload.wikimedia.org/wikipedia/en/b/bc/Wiki.png" in the result from siteinfo. --NicoV (talk) 07:20, 25 March 2014 (UTC)Reply
Hmm, that's not in the example, must be outdated. Thanks! 124.181.108.69 11:04, 25 March 2014 (UTC)Reply

An error with api

[edit]

I cannot access api.php; it gives me this error: Parse error: syntax error, unexpected T_FUNCTION in C:\AppServ\www\wordpress\mediawiki-1.19.15\includes\api\ApiFormatBase.php on line 279. Please help! — Preceding unsigned comment added by 124.181.108.69 (talkcontribs) 11:04, 25 March 2014‎

Looking at it, I saw nothing wrong, so I googled it and came up with this. It sounds like you've got PHP 5.2, where this seems to expect PHP 5.3 or above. – RobinHood70 talk 20:08, 8 April 2014 (UTC)Reply

Internal error in ApiFormatXml::recXmlPrint() has integer keys without _element value

[edit]

Apparently, since yesterday, some requests give this error.

Example: https://fr.wikipedia.org/w/api.php?bltitle=BNF&action=query&blredirect=&list=backlinks&format=xml&bllimit=max gives

<?xml version="1.0"?>
<api servedby="mw1053">
  <error code="internal_api_error_MWException" info="Exception Caught: Internal error in ApiFormatXml::recXmlPrint: (redirlinks, ...) has integer keys without _element value. Use ApiResult::setIndexedTagName()." xml:space="preserve" />
</api>

Also reported in MediaWiki-API mailing list. --NicoV (talk) 08:45, 10 October 2014 (UTC)Reply

The problem is visible in WPCleaner and AWB. Problem reported as bug 71907 by AWB developer. --NicoV (talk) 13:52, 10 October 2014 (UTC)Reply

How to deal with error while getting all users

[edit]

Hi, I am trying to get the list of all users. Mostly I get the following error:

{
    "servedby": "mw1196",
    "error": {
        "code": "internal_api_error_MWException",
        "info": "Exception Caught: Internal error in ApiQueryAllUsers::execute: Saw more duplicate rows than expected",
        "*": ""
    }
}

So I reduced the number of users the API gets in each call to 1, and even then I get the error. For example, see: https://en.wikipedia.org/w/api.php?action=query&list=allusers&format=jsonfm&aufrom=!!!%20Professional%20Analism%20Account!%20ATTAAAAAAAACK!!!!&aulimit=1&auwitheditsonly=%22%22&auexcludegroup=bot

How do I deal with this error now? Any other way to get the list of all users?

Srijankedia (talk) 20:37, 26 October 2014 (UTC)Reply

There's nothing wrong with your code, it's a problem with the PHP code on Wikipedia. Anomie is probably the best person to look into that, since he's been working heavily on the API code recently. Hopefully, he'll comment shortly. There's no other direct way of getting the user list via the API. – RobinHood70 talk 05:29, 27 October 2014 (UTC)Reply
Okay. I hope this gets resolved soon. I have to use it urgently, or I would just crawl the Wikipedia website :) Srijankedia (talk) 17:36, 27 October 2014 (UTC)Reply
Great! It seems to be fixed now. Thanks for the quick work on this. Srijankedia (talk) 00:23, 28 October 2014 (UTC)Reply

Missing documentation pages

[edit]
import pywikibot
s = pywikibot.Site('mediawiki', 'mediawiki')
missing_docs = sorted([p for p in [pywikibot.Page(s, 'API:' + module.title()) for module in s._paraminfo.query_modules] if not p.exists()])
print('The following query modules do not have documentation:')
for p in missing_docs:
    print('* [[' + p.title() + ']]')

The following query modules do not have documentation:

Modules with no documentation

(repeating s/query/action/)

The following action modules do not have documentation:

Many of these could be a redirect to an existing relevant documentation page. For the ones which are not covered by wiki documentation yet, maybe they could be Google Code-In tasks? (And no, the api.php help is not a good replacement; it only documents the *current* API, and not always accurately - it doesn't document in which version various features were added, or provide a place where people can add notes about oddities.) John Vandenberg (talk) 12:22, 28 November 2014 (UTC)Reply

It would probably be more useful if you used the 'helpurls' property in the action=paraminfo result, instead of blindly assuming that "API:$modulename" should exist. Anomie (talk) 14:17, 1 December 2014 (UTC)Reply
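A minimal sketch of that approach, assuming Python with the requests library (the module names below are just examples, and the exact parameter syntax and output shape of paraminfo can differ between MediaWiki versions):

import requests

API = "https://www.mediawiki.org/w/api.php"
data = requests.get(API, params={
    "action": "paraminfo",
    "modules": "query+allpages|query+backlinks",   # example modules
    "format": "json",
}).json()
for module in data["paraminfo"]["modules"]:
    # helpurls lists the documentation links each module declares, if any
    print(module["name"], module.get("helpurls"))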
I've created redirects for all modules which have a 'helpurls' property. p.s. the helpurls for 'Betafeatures' is a broken link. There are many modules without a value in the helpurls property, which are all the red links above. John Vandenberg (talk) 17:35, 4 December 2014 (UTC)Reply

user:RobinHood70 has done a massive restructure of API:Properties and API:Meta, splitting each module to be a separate page. template:API-head now automatically links to the current action=help page for the module, so wiki documentation can be started by simply adding {{API-head}} to a page, like API:Fileusage.

Worth noting, it is now possible to use {{Special:ApiHelp/query+flowinfo}} to produce

prop=flowinfo (fli)

(main | query | flowinfo)
  • This module is deprecated.
  • This module requires read rights.
  • Source: Flow
  • License: GPL-2.0-or-later

Get basic Structured Discussions information about a page.


I find that layout ugly, but it is better than nothing.

See also Thread:Project:Current_issues/Api_documentation_bot. John Vandenberg (talk) 07:08, 22 February 2015 (UTC)Reply

If RobinHood70 hasn't already, someone should submit a patch to update the return value of getHelpUrls() in the affected modules. Anomie (talk) 14:20, 23 February 2015 (UTC)Reply
Sorry, that hadn't even occurred to me. I've gotten out of MediaWiki code modification. As a Windows user, I found it completely unintuitive and I was spending up to an hour just to make the tiniest change, and not sure I'd done it right even then. To add to the fun, I'm cognitively impaired and both times that I submitted something, I was pretty much useless afterwards. So no, no more code mods for me, at least not until we make modifying the code as easy as changing a wiki page (hint!). Sorry to dump this work on someone else's lap, but submitting code modifications is just not something I'm able to get into in any meaningful way. Robin Hood  (talk) 16:35, 23 February 2015 (UTC)Reply
Actually, it just occurred to me, I don't have to be the one to actually submit the changes. I think I can manage a git clone without too much difficulty, so I can go through the files and make the necessary changes, then zip them up and send them to someone else if that would be easier. Robin Hood  (talk) 16:57, 23 February 2015 (UTC)Reply
Okay, if anyone wants it, I have a zip file with all the changed files as well as a patch file if that's any better. The zip file is here, since we can't upload zip files to MW. Robin Hood  (talk) 19:32, 23 February 2015 (UTC)Reply
You could try the Gerrit patch uploader to upload the patch. Anomie (talk) 13:48, 24 February 2015 (UTC)Reply
I'd forgotten that existed. Thanks, Anomie! Pending the outcome of the discussion on my talk page, I'll give it a try. Robin Hood  (talk) 16:17, 24 February 2015 (UTC)Reply
User:RobinHood70, I took your patch set and did git apply /tmp/RobinHood70/patch.diff, and submitted it with git review. It's up at gerrit:202368. I tested all the help links; it looks good to me. Thanks for doing this! I still think it could be better to move the wiki pages for query submodules to be actual subpages of API:Query (or maybe subpages of API:Lists, API:Meta, API:Properties) so that you can navigate back to the API wiki pages, but this is forward progress. Onward :-) -- SPage (WMF) (talk) 10:20, 7 April 2015 (UTC)Reply

Question regarding Wiki ID and similar propertes

[edit]

I stumbled over a website that seems to collect MediaWiki API information for MediaWikis all over the web. I found my wiki there with a value "Has Wiki ID" displayed. What is this wiki ID, and how can I prevent this information from being visible, or influence what is displayed there at all? Or does this depend on the server setup? I checked the wikiapiary.com entry for a friend's MediaWiki project. For his project some properties were not displayed that are visible for mine, so I wondered. --CayceP (talk) 13:22, 14 January 2015 (UTC)Reply

It appears that the "Wiki ID" reported there is the wikiid property in the meta=siteinfo query with siprop=general, which in turn depends on the values of $wgDBprefix and $wgDBname for your wiki. You could modify this information using the APIQuerySiteInfoGeneralInfo hook, although doing so seems rather pointless. Other properties listed on that site presumably come from other data in meta=siteinfo or other API queries. Anomie (talk) 14:27, 15 January 2015 (UTC)Reply

Using API for retrieving images hosted at Commons (variable must be resolution and upload date)

[edit]

Hi! My post at Commons:Forum#Bilder nach Auflösung über Kategorien etc. suchen (in German) did not result in something which I can manage, so I am trying my luck here :-). I am looking for an API query which is able to retrieve images uploaded to Commons on a certain date or in a certain period with small resolutions (500/640/960/1024 etc. height or width). The variables must be resolution and date. I tried to use some parameters via API:Allimages and Manual:image table, constructing a query based on timestamps via aistart/aiend and resolution size in pixels via img_width/img_height, but all efforts failed. A listing in ascending or descending order by upload date would be great. Is there a kind soul who could orient me? Thx. --Gunnex (talk) 00:08, 21 February 2015 (UTC)Reply

You can't filter by size in the original query, so you'd have to download all the information, then filter the results down to just the ones you're looking for. The query would look like this:
https://commons.wikimedia.org/w/api.php?action=query&list=allimages&aiprop=size&aisort=timestamp&aistart=2015-02-19T00:00:00Z&aiend=2015-02-20T00:00:00Z
Change the dates to whatever you need. Let me know if you need any additional help. Robin Hood  (talk) 02:04, 21 February 2015 (UTC)Reply
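A minimal sketch of that client-side filtering, assuming Python with the requests library (the date range and size threshold are only examples):

import requests

API = "https://commons.wikimedia.org/w/api.php"
params = {
    "action": "query",
    "list": "allimages",
    "aiprop": "size|url|user",
    "aisort": "timestamp",
    "aistart": "2015-02-19T00:00:00Z",   # example period
    "aiend": "2015-02-20T00:00:00Z",
    "ailimit": "500",
    "format": "json",
}
while True:
    data = requests.get(API, params=params).json()
    for img in data["query"]["allimages"]:
        if max(img["width"], img["height"]) <= 1024:   # example size filter
            print(img["name"], img["width"], img["height"])
    if "continue" not in data:
        break
    params.update(data["continue"])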
I got it! I added some more parameters (user/url/limit/nobots) and with
  • https://commons.wikimedia.org/w/api.php?action=query&list=allimages&aiprop=size|user|url&aisort=timestamp&aistart=2015-02-19T00:00:00Z&aiend=2015-02-20T00:00:00Z&aifilterbots=nobots&ailimit=200
I can work almost perfectly. Thx for the help! --Gunnex (talk) 10:57, 21 February 2015 (UTC)Reply
No problem. Glad to hear you got it working! Robin Hood  (talk) 15:27, 21 February 2015 (UTC)Reply

Author and timestamp

[edit]

Hi, which query would you recommend for getting the author of an article and the respective timestamp? I'm looking for the metadata of the first revision of a certain article. --jobu0101 (talk) 13:25, 29 March 2015 (UTC)Reply

https://www.mediawiki.org/w/api.php?action=query&prop=revisions&rvlimit=1&rvprop=timestamp%7Cuser&rvdir=newer&titles=API:Revisions
Just replace the wiki and titles with whatever you need. Note that you can only query one page at a time with this method. Robin Hood  (talk) 15:23, 29 March 2015 (UTC)Reply
Thank you very much! --jobu0101 (talk) 22:45, 29 March 2015 (UTC)Reply
[edit]

I found out how to generate a list of all backlinks (see here). But I'm more interested in the number of backlinks. Is it possible to simply get the number of backlinks? --jobu0101 (talk) 07:43, 8 April 2015 (UTC)Reply

Not currently. The feature request is phab:T19993; note that an implementation would likely return a number from 0 to 'limit', or some indicator that there are more than the limit (e.g. if the limit is 100, 0–100 or "101+"). Anomie (talk) 13:16, 8 April 2015 (UTC)Reply
Too bad. So calculating the number will result in a large amount of traffic with the current API. --jobu0101 (talk) 13:33, 8 April 2015 (UTC)Reply
By the way, there are some cases where it is possible to access the number of results using another method. For example, it's possible to get the number of edits a certain user has made without going through all their edits and counting them manually. How do you know that there is no other possibility for this backlink issue? --jobu0101 (talk) 13:39, 8 April 2015 (UTC)Reply

API and FlaggedRevs

[edit]

How can I check (via the API) whether an article has been reviewed / has never been reviewed / has at some point been reviewed but the most recent revision is not reviewed? Malarz pl (talk) 09:27, 13 April 2015 (UTC)Reply

See the examples on the FlaggedRevs page, here and here. Robin Hood  (talk) 16:42, 13 April 2015 (UTC)Reply
I also found that list=unreviewedpages gives some old reviewed pages if I'm not at least an editor, but I don't know exactly which (it's probably a bug). --Tacsipacsi (talk) 16:50, 13 April 2015 (UTC)Reply
I'd like to check one article. I don't need a list of all unreviewedpages / oldreviewedpages. Malarz pl (talk) 06:42, 14 April 2015 (UTC)Reply
The start and end parameters will help for unreviewedpages (set both to the title you're looking at), but it looks like there's nothing that can easily be done for the oldreviewedpages. I'd suggest posting on the extension's talk page and maybe the authors will add that functionality. Robin Hood  (talk) 17:28, 14 April 2015 (UTC)Reply

Existence of articles

[edit]

I have a question related to the question of Smashman (talk · contribs), which he posted almost five years ago without getting any answer. Of course I know that almost all queries where you ask for certain information about a page also let you know if the page doesn't exist. But is there a canonical way to ask the API if certain pages exist? I'm looking for a solution with very little output. Just a yes/no is enough (maybe a third possibility in case the page exists but is a redirect). --jobu0101 (talk) 17:04, 14 April 2015 (UTC)Reply

For a yes/no, the closest you'll get is action=query with no submodules, like this. If you want redirect status too, you can add &redirects=1 or use prop=info. Anomie (talk) 13:04, 15 April 2015 (UTC)Reply
Thank you. That's quite good. I didn't come that close. --jobu0101 (talk) 16:26, 15 April 2015 (UTC)Reply
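A minimal sketch of that minimal query, assuming Python with the requests library (the titles and wiki are only examples): pages that don't exist come back with a missing attribute in the result.

import requests

def pages_exist(titles, api="https://en.wikipedia.org/w/api.php"):
    data = requests.get(api, params={
        "action": "query",
        "titles": "|".join(titles),
        "redirects": 1,              # resolve redirects so the target is what gets checked
        "format": "json",
    }).json()
    # Pages that don't exist carry a "missing" key in the result.
    return {page["title"]: "missing" not in page
            for page in data["query"]["pages"].values()}

print(pages_exist(["Main Page", "This page hopefully does not exist"]))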

Tokens 2

[edit]

Hey, can you have a look here? --jobu0101 (talk) 17:52, 1 May 2015 (UTC)Reply

ApiSandbox

[edit]

In ApiSandbox it is not possible to "set" the "Parameter for query" -> "continue" to an empty string. At least, the XML returned has different tags (e.g. a <query-continue> tag) from the ones returned if the continue parameter is included as an empty string. This disorients new users of the ApiSandbox. --Xoristzatziki (talk) 18:59, 21 June 2015 (UTC)Reply

I'm working on fixing ApiSandbox to allow present-but-empty in gerrit:209570; in the mean time, you're unfortunately out of luck with that one. Anomie (talk) 13:31, 29 June 2015 (UTC)Reply

changes of revisions

[edit]

Is there a simpler way to get everything that caused a recent revision change in articles? Anything that modified the content or the number of articles in the wiki. This would be useful for keeping the "pages-meta-current" XML file obtained from dumps.wikimedia.org up to date until we have a new dump. Namely:

  • edits (any edit, either by users or by bots, or by moving the article to a new name and changing the content to "#redirect xxx" - anything that changes the content of the page),
  • deletions (any deletion that removed the article from the article list, e.g. a move by a sysop that did not create a "#redirect xxx"),
  • new articles (either by creation, or by undeleting a page, or by moving a page to a new name - anything that adds an article to the article list of the wiki)

--Xoristzatziki (talk) 05:46, 23 June 2015 (UTC)Reply

action=query&list=recentchanges? Anomie (talk) 13:32, 29 June 2015 (UTC)Reply

Implications of the Data & Developer Hub project on Api: namespace

[edit]

There is an ongoing discussion about dev.wikimedia.org and the implications it might have for revamping this namespace (as opposed to creating a new one in Dev:). Please check Thread:Project:Current_issues/Adding_a_dev_namespace_for_"Data_and_developer_hub"_articles and reply there.--Qgil-WMF (talk) 06:03, 1 July 2015 (UTC)Reply

IRC office hour on APIs and the mw.org namespace

[edit]

Tuesday 2015-09-01T18:00 UTC (11am San Francisco time) we will be having an IRC office hour about

  • T105133 - Organize current and new content in the API: namespace at mediawiki.org
  • T101441 - Integrate the new Web APIs hub with mediawiki.org
  • T98897 - Deploy SkinPerNamespace extension on mediawiki.org

If you're interested in developing with, documenting, or promoting MediaWiki/Wikimedia APIs, you should attend. I hope by having a realtime conversation we can come to a shared understanding faster. -- SPage (WMF) (talk) 02:31, 27 August 2015 (UTC)Reply

API use in wiki-family environment?

[edit]

Has anyone had experience with using the API in a wiki-family setup? We're using the giant switch statement method and a single database with different prefixes to separate the installs.

&redirects not working as expected

[edit]

I tried https://commons.wikimedia.org/w/api.php?action=query&titles=Category:Biserica%20fortificat%C4%83%20din%20Al%C5%A3%C3%A2na&redirects The category Biserica fortificată din Alţâna is a redirect, but the API result doesn't show this... What is wrong? --Arch2all (talk) 10:01, 7 November 2015 (UTC)Reply

The reason seems to be that category redirects are "soft" (the template "Category redirect" instead of #REDIRECT). I managed to check this with https://commons.wikimedia.org/w/api.php?action=query&prop=templates&tltemplates=Template:Category_redirect&titles=Category:Biserica%20fortificat%C4%83%20din%20Al%C5%A3%C3%A2na
But now I only know that the page is a redirect, not what the target is. How can I find that out (i.e. the parameters of the redirect template)? --Arch2all (talk) 11:22, 7 November 2015 (UTC)Reply
Finally I found I could extract the new title by parsing the links to namespace 14: https://commons.wikimedia.org/w/api.php?action=parse&prop=links&page=Category:Biserica%20fortificat%C4%83%20din%20Al%C5%A3%C3%A2na
This works for now, but is pretty complicated (it needs 2 API calls). Maybe someone knows an easier way? --Arch2all (talk) 12:17, 7 November 2015 (UTC)Reply
Soft redirects are, in essence, pages like any other article, so there's no easy way to figure out where they redirect to. Your method is one way. Another one is to load the page text itself via a revisions query (or via parse, if you prefer) and then parse the resulting text for the template itself. That, of course, requires that you know the template's name and how it works—not a big issue if you're working only on Commons, but if you're loading pages from different wikis, it's gonna be tricky, since you'd need to know all the right names and whether they all work the same. There's no other solution that I can think of. Robin Hood  (talk) 22:59, 7 November 2015 (UTC)Reply
Thanks for the detailed answer. I'm just interested in redirects on Commons, so it's not too tricky. Maybe I'd better analyze the redirect page's wikitext myself with a revisions query (as you mentioned) instead of a parse query with automatically extracted links, to avoid getting the wrong link (if there are several links to namespace 14). --Arch2all (talk) 08:21, 8 November 2015 (UTC)Reply
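A minimal sketch of that wikitext approach, assuming Python with the requests library and the third-party mwparserfromhell parser (the category title is the one from this thread; "Category redirect" is the template name used on Commons):

import requests
import mwparserfromhell

API = "https://commons.wikimedia.org/w/api.php"
title = "Category:Biserica fortificată din Alţâna"
data = requests.get(API, params={
    "action": "query",
    "prop": "revisions",
    "rvprop": "content",
    "titles": title,
    "format": "json",
}).json()
page = next(iter(data["query"]["pages"].values()))
wikitext = page["revisions"][0]["*"]          # "*" holds the wikitext in the classic JSON format
for template in mwparserfromhell.parse(wikitext).filter_templates():
    if template.name.matches("Category redirect"):
        # The first positional parameter of the template is the redirect target.
        print("Redirects to:", template.params[0].value.strip_code().strip())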