Extension talk:Lucene-search/archive/2007
Error when editing pages
I followed your tutorial and installed LuceneSearch. All went fine, but when I edit a page, I get this error:
Fatal error: Call to undefined method LuceneSearch::setLimitOffset() in /path/to/wiki/includes/SearchEngine.php on line 222
I'm using MediaWiki 1.10.0. Is this a known problem or just a configuration issue? It looks like LuceneSearch.php and LuceneSearch_body.php don't define that function at all. Same with the LuceneSearch::update() function... --12 July 2007
- You're missing
$wgDisableSearchUpdate = true;
in your LocalSettings.php. It should be placed before the require_once statement. --Rainman 17:48, 12 July 2007 (UTC)
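For reference, a minimal sketch of how the relevant lines in LocalSettings.php would look (the extension path is illustrative and may differ in your installation):
# Disable MediaWiki's built-in search index updates; the Lucene daemon
# does the indexing instead. This line must come before the extension is loaded.
$wgDisableSearchUpdate = true;
require_once("$IP/extensions/LuceneSearch/LuceneSearch.php");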
Installing Lucene on Windows 2003 Server
Is there a way to install LuceneSearch under Windows? I run my wiki on a Windows 2003 Server with XAMPP and I want to use the features of Lucene. I found at m:Installing lucene search that Wikipedia uses the C# engine of Lucene.
Is there a compiled version of the C# engine to install on my Apache running on Windows 2003 Server? --stp-- 13:40, 1 August 2007 (UTC)
- As far as I know, no. --Rainman 09:54, 3 August 2007 (UTC)
I am also interested in a Windows 2003 tutorial for improving MediaWiki search results. Cedarrapidsboy 14:29, 2 August 2007 (UTC)
- You can use the old C# daemon following the tutorial at m:Installing lucene search. Wikimedia sites used to use this one, but now use the latest (Java) version. The new version could in principle run on Windows with some modifications (the main problem is its use of symbolic and hard links), but there is no-one around to patch it. --Rainman 09:54, 3 August 2007 (UTC)
Could you explain how to compile the old C# daemon under Windows with Mono? There are no "make" and "make install" commands under Windows :((( --Konstbel 09:04, 31 March 2008 (UTC)
- Any luck with the patch for Windows? --zhamrock 16:48, 28 July 2008 (SGT)
There is a .dll version available here: http://incubator.apache.org/lucene.net/download/ , but I don't know if this helps --jdpond 21:53, 27 August 2007 (UTC)
- The problem is not in Lucene itself, but in the LSearch daemon, which makes use of the Linux filesystem to efficiently fetch new indexes, keep old copies, and swap copies after a background warmup phase. --Rainman 09:18, 28 August 2007 (UTC)
- The 2.1 branch seems to have some support for Windows (see FSUtil.java). Is someone actively working on this? Any idea what the status is? --Cneubauer 19:35, 13 January 2009 (UTC)
- No, no-one is actively working on Windows support. The lucene-search-2.1 branch won't work on Windows, although it could with some poking around, e.g. restructuring the IndexRegistry class. --Rainman 22:08, 13 January 2009 (UTC)
- I managed to get it running on Windows by patching FSUtil.java. I'm using NTFS hard links and a free Microsoft tool to create directory links (linkd.exe). It may not be as flexible as the Linux version using symbolic links, but it works for me, especially because I'm able to do the development of a wiki search client completely on my Windows machine. If someone is interested, leave a note on my user page. I would commit it to the repository myself, but I guess I'm not allowed to do so. --Kai Kühn 19:24, 2 February 2009 (UTC)
- Please put the patch on bugzilla. --Rainman 18:38, 2 February 2009 (UTC)
- done. Patch is here --Kai Kühn 20:53, 2 February 2009 (UTC)
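For reference, the Windows primitives Kai mentions look roughly like this on the command line (a sketch with illustrative paths; fsutil ships with Windows, linkd.exe comes with the Windows Resource Kit):
REM create an NTFS hard link to an existing index file
fsutil hardlink create C:\indexes\snapshot\segments.gen C:\indexes\current\segments.gen
REM create a directory junction, the rough equivalent of a symlinked index directory
linkd C:\indexes\current C:\indexes\update-20090202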
- Can someone post the remaining Windows-related tasks needed to build the configuration files and indexes and fetch the updates? After compiling the lucene-search-2.1 branch with Kai's FSUtils.java patch, I tried running the command in the /configure script, but it failed because it still looks for Bash.
C:\lucene-search-2.1>java -cp LuceneSearch.jar org.wikimedia.lsearch.util.Configure C:\Inetpub\wwwroot\mediawiki
Exception in thread "main" java.io.IOException: Cannot run program "/bin/bash": CreateProcess error=3, The system cannot find the path specified
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at java.lang.Runtime.exec(Runtime.java:593)
        at java.lang.Runtime.exec(Runtime.java:466)
        at org.wikimedia.lsearch.util.Command.exec(Command.java:41)
        at org.wikimedia.lsearch.util.Configure.getVariable(Configure.java:84)
        at org.wikimedia.lsearch.util.Configure.main(Configure.java:49)
Caused by: java.io.IOException: CreateProcess error=3, The system cannot find the path specified
        at java.lang.ProcessImpl.create(Native Method)
        at java.lang.ProcessImpl.<init>(ProcessImpl.java:81)
        at java.lang.ProcessImpl.start(ProcessImpl.java:30)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
        ... 5 more
- FSUtils is only the tip of the iceberg; there are many other issues, especially how we handle index updates (which rely on symlinks and such for efficiency and simplicity)... So there is no "simple hack" you can do to make it work... --Rainman 13:17, 3 December 2009 (UTC)
Missing Method?
I installed everything following the instructions (on MediaWiki 1.10.1), but I'm getting this when I hit the search button:
Fatal error: Call to undefined method LuceneSearch::getRedirect() in /var/www/mediawiki-1.10.1/includes/SpecialPage.php on line 396
Is this a known issue with 1.10.1, or am I missing something? --217.6.3.114 06:34, 6 August 2007 (UTC)
- No idea, getRedirect() is defined in SpecialPage, and LuceneSearch inherits SpecialPage. You might be using some odd php version, or something else might be wrong... --Rainman 10:55, 6 August 2007 (UTC)
- My PHP version is PHP 5.2.0-8+etch7 (cli) (built: Jul 2 2007 21:46:15). Do you really think this might be the problem? I believe it is more likely that I forgot something obvious that is not mentioned in the instructions. For example, I had to download ExtensionFunctions.php from svn, because it is not shipped with MediaWiki or the extension. Do I need to register the extension anywhere other than in LocalSettings.php? --217.6.3.114 12:55, 6 August 2007 (UTC)
- I've seen people complain about various MediaWiki things not working with PHP 5.2; switching back to PHP 5.1 usually fixes it. But I'm by no means a PHP expert (I mainly do the Java part), so I cannot really tell if it would help. If you can, give it a try and let us know if it helps. --Rainman 16:48, 6 August 2007 (UTC)
- There seems to be no PHP 5.1 package available for Debian Etch, so I guess there's no chance to make search work. --217.6.3.114 12:10, 7 August 2007 (UTC)
- I submitted a bug report: bugzilla:10835 --7 August 2007
- Yep, seen it .. I still think it might be a php problem, or maybe a broken eAccelerator or something like that... --Rainman 10:33, 21 August 2007 (UTC)
- Is eAccelerator required for this extension? We do not use it.--217.6.3.114 08:58, 7 September 2007 (UTC)
- Found the solution! The problem was an incompatibility between the MWSearch extension and LuceneSearch. I forgot that MWSearch was still active when I installed LuceneSearch. After deactivating MWSearch the problem was gone. --217.6.3.114 08:05, 11 September 2007 (UTC)
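In other words, only one search extension should be loaded in LocalSettings.php at a time; a sketch with illustrative paths:
# Having both MWSearch and LuceneSearch active causes clashes like the
# getRedirect() error above, so comment one of them out:
# require_once("$IP/extensions/MWSearch/MWSearch.php");
require_once("$IP/extensions/LuceneSearch/LuceneSearch.php");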
Wildcard Search
Is there a way to use wildcards as described on http://lucene.apache.org/java/docs/queryparsersyntax.html#Wildcard%20Searches? --217.6.3.114 12:50, 12 September 2007 (UTC)
- Yes. Currently only simple prefixes work (e.g. test*) since I didn't get to test the performance impact of other wildcard schemes. If you want to patch it yourself, look at WikiQueryParser.java around line 669 (the makeQueryFromTokens() function); you probably want to replace buffer[length-1]=='*' with something that checks whether * or ? appears anywhere in the buffer. --Rainman 16:23, 12 September 2007 (UTC)
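A minimal sketch of the check Rainman describes, assuming the buffer and length variables visible in makeQueryFromTokens(); the variable name isWildcard and the surrounding structure are illustrative, not a tested patch:
// Original test: only a trailing '*' marks a wildcard term.
// boolean isWildcard = buffer[length-1] == '*';
// Relaxed test: treat the term as a wildcard if '*' or '?' appears anywhere.
boolean isWildcard = false;
for (int i = 0; i < length; i++) {
    if (buffer[i] == '*' || buffer[i] == '?') {
        isWildcard = true;
        break;
    }
}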
dumpBackup.php causes DB connection error: Unknown error
Following the simple index creation tutorial "Building the index", I tried to run
php maintenance/dumpBackup.php --current --quiet > wikidb.xml && java -cp LuceneSearch.jar org.wikimedia.lsearch.importer.Importer -s wikidb.xml wikidb
But the script throws the mentioned error. After much trouble and study of this script, I found a solution for the problem. It occurs because dumpBackup.php requires the file includes/backup.inc. That file does the main backup work and uses some MediaWiki variables ($wg...). This is no problem when dumpBackup.php runs inside MediaWiki, but as a standalone console script it is missing these $wg... parameters. So dumpBackup.php uses empty strings for $wgDBtype, $wgDBadminuser, $wgDBadminpassword, $wgDBname and $wgDebugDumpSql, and this causes the "DB connection error: Unknown error" at run time. I solved the problem with a self-written PHP wrapper script, which only initializes these variables and then simply includes dumpBackup.php, and now it works fine. This is my wrapper script:
<?php
## dumpBackupInit - wrapper script to run the MediaWiki XML dump "dumpBackup.php" correctly
## @author: Stefan Furcht
## @version: 1.0
## @require: /srv/www/htdocs/wiki/maintenance/dumpBackup.php

# The following variables must be set to get dumpBackup.php to work.
# You'll find these values in the DB section of your MediaWiki config, LocalSettings.php.
$wgDBtype = 'mysql';
$wgDBadminuser = "[MySQL-Username]";
$wgDBadminpassword = "[MySQL-Usernames-Password]";
$wgDBname = '[mediaWiki-Database-scheme]';
$wgDebugDumpSql = 'true';

# The XML dumper 'dumpBackup.php' requires the variables set above to run.
# Simply include the original dumpBackup script:
require_once("/srv/www/htdocs/wiki/maintenance/dumpBackup.php");
?>
Now you can use this script just like dumpBackup.php, except that it will (hopefully) run correctly. Example: php dumpBackupInit.php --current > WikiDatabaseDump.xml
I hope this will help you. Please excuse my probably bad English.
Regards -Stefan- 12 September 2007
- dumpBackup.php uses AdminSettings.php (and not LocalSettings.php), so you need to set it up (basically you would rename AdminSettings.sample and fill-in the data). What would be in AdminSettings.php is exactly what you provide in your wrapper, see Manual:System_administration#Maintenance_scripts. --Rainman 16:12, 12 September 2007 (UTC)
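A sketch of what the filled-in AdminSettings.php would contain (the credentials are placeholders for a database user with admin rights on the wiki database):
<?php
# AdminSettings.php - read only by maintenance scripts such as dumpBackup.php
$wgDBadminuser     = 'wikiadmin';
$wgDBadminpassword = 'adminpass';
?>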
Thank you very much. I had never read what AdminSettings.php actually does. After setting these vars, it works fine. So you can delete my "wrapper script" from this discussion page. But perhaps it would be useful to mention explicitly on the extension page that AdminSettings.php must be set up to run dumpBackup.php, because some people may never have had to deal with this file before. Thanks for this very great extension. -Stefan- 79.211.199.66 08:14, 20 September 2007 (UTC)
lsearchd killed in virtual hosting environment
When running lsearchd in a virtual hosting environment, it would work for 10-20 seconds or so, then it would fail with the message "killed." Thanks to Rainman's help, I verified that the resource requirements of the application exceeded the capacity available in the virtual hosting environment (whether it was the size of the JVM or the number of threads, I was never sure). It runs fine and with modest resource requirements on a dedicated server. Dbkayanda 20:44, 14 October 2007 (UTC)
Also, I notice in lsearch.conf there are a number of variables for the Storage backend:
- Storage.username
- Storage.password
etc. Do these need to be modified for my environment, or do they get ignored? --15 September 2007
- These are for the incremental updater (it stores articles rank info). If you don't use it, it gets ignored. --Rainman 17:23, 15 September 2007 (UTC)
Error while initially creating index
I am trying to get the LuceneSearch extension running on a MediaWiki 1.11.0rc1 installation under openSUSE 10.2. LuceneSearch.jar and mwdumper.jar were generated from svn sources with ant and javac version 1.5.0_12. I followed the instructions, but when I try to build the index, I get a null-pointer exception:
me@mypc:~/var/lucene> java -cp ~/bin/lucene-search-2/LuceneSearch.jar org.wikimedia.lsearch.importer.Importer -s wikidb_TEST.xml wikidb_TEST
MediaWiki Lucene search indexer - index builder from xml database dumps.
Trying config file at path /home/muenzebrock/.lsearch.conf
0 [main] INFO org.wikimedia.lsearch.util.UnicodeDecomposer - Loaded unicode decomposer
8 [main] INFO org.wikimedia.lsearch.ranks.RankBuilder - First pass, getting a list of valid articles...
324 pages (1.213,483/sec), 324 revs (1.213,483/sec)
316 [main] INFO org.wikimedia.lsearch.ranks.RankBuilder - Second pass, calculating article links...
375 [main] INFO org.wikimedia.lsearch.util.Localization - Reading localization for En
377 [main] WARN org.wikimedia.lsearch.util.Localization - Error processing message file at file:///srv/www/htdocs/php/mediawiki1.11.0rc1/languages/messages/MessagesEn.php
378 [main] WARN org.wikimedia.lsearch.util.Localization - Could not load localization for En
324 pages (2.677,686/sec), 324 revs (2.677,686/sec)
465 [main] INFO org.wikimedia.lsearch.importer.Importer - Third pass, indexing articles...
Exception in thread "main" java.lang.NullPointerException
        at java.io.File.<init>(File.java:194)
        at org.apache.lucene.store.FSDirectory.getDirectory(FSDirectory.java:117)
        at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:204)
        at org.wikimedia.lsearch.importer.SimpleIndexWriter.openIndex(SimpleIndexWriter.java:67)
        at org.wikimedia.lsearch.importer.SimpleIndexWriter.<init>(SimpleIndexWriter.java:49)
        at org.wikimedia.lsearch.importer.DumpImporter.<init>(DumpImporter.java:39)
        at org.wikimedia.lsearch.importer.Importer.main(Importer.java:128)
I played with the Indexes.path variable in lsearch.conf, but with no luck. --19 September 2007
- Do you have permissions to write to the directory you set as Indexes.path in /home/muenzebrock/.lsearch.conf? --Rainman 14:13, 19 September 2007 (UTC)
- Yes. For debugging, I set it to be world-writable. --205.175.225.24 14:20, 19 September 2007 (UTC)
- You can do imports only at the indexer, so, did you set your lsearch-global.conf right? i.e. assign the index wikidb_TEST to your host mypc (not localhost or 127.0.0.1) in the Index section? --Rainman 14:47, 19 September 2007 (UTC)
- This is the part of lsearch-global.conf that I touched (i.e. the rest is similar to the file in svn):
# databases can be writen as {url}, where url contains list of dbs
[Database]
#wikilucene : (single) (language,en) (warmup,0)
#wikidev : (single) (language,sr)
#wikilucene : (nssplit,3) (nspart1,[0]) (nspart2,[4,5,12,13]), (nspart3,[])
#wikilucene : (language,en) (warmup,10)
wikidb_TEST : (single) (language,de) (warmup,100)

# Search groups
# Index parts of a split index are always taken from the node's group
# host : db1.part db2.part
# Mulitple hosts can search multiple dbs (N-N mapping)
[Search-Group]
#oblak : wikilucene wikidev
oblak : wikidb_TEST

# Index nodes
# host: db1.part db2.part
# Each db.part can be indexed by only one host
[Index]
#oblak: wikilucene wikidev
oblak : wikidb_TEST
- Now I see my mistake: I should have replaced oblak with my hostname, right? I was wondering what that meant anyway ;-) Thanks for your quick help on this. --205.175.225.24 15:00, 19 September 2007 (UTC)
This error can also occur if you follow the installation instructions exactly and use a FQDN in the [Search-Group] and [Index] sections. Use only the hostname part of the $HOSTNAME, omitting the domain name part, if it is included. -- 216.143.51.66 15:54, 7 February 2008 (UTC)
- Hmm, I got this same error and fixed it by adding my complete hostname and domain to the various config files and the hostname file. In my case
<hostname> : wikidb
didn't work but
<hostname>.domain.com : wikidb
did. --Cneubauer
- ANOTHER EXPERIENCE: the installation manual says to use the environment variable $HOSTNAME in the global config. For SuSE I can say that you need to use the complete hostname stored in /etc/HOSTNAME! --195.216.198.100 10:58, 12 June 2008 (UTC)
Hi
I've got a similar error:
root@rainbow:/usr/local/search/ls2 # java -cp LuceneSearch.jar org.wikimedia.lsearch.importer.Importer -s /srv/www/htdocs/mwiki/wikidb.xml wikidb
MediaWiki Lucene search indexer - index builder from xml database dumps.
Trying config file at path /root/.lsearch.conf
Trying config file at path /usr/local/search/ls2/lsearch.conf
1 [main] INFO org.wikimedia.lsearch.util.UnicodeDecomposer - Loaded unicode decomposer
15 [main] INFO org.wikimedia.lsearch.util.Localization - Reading localization for De
507 [main] INFO org.wikimedia.lsearch.ranks.RankBuilder - First pass, getting a list of valid articles...
114 pages (118.626/sec), 114 revs (118.626/sec)
1666 [main] INFO org.wikimedia.lsearch.ranks.RankBuilder - Second pass, calculating article links...
114 pages (428.571/sec), 114 revs (428.571/sec)
2044 [main] INFO org.wikimedia.lsearch.importer.Importer - Third pass, indexing articles...
Exception in thread "main" java.lang.NullPointerException
        at java.io.File.<init>(File.java:194)
        at org.apache.lucene.store.FSDirectory.getDirectory(FSDirectory.java:117)
        at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:204)
        at org.wikimedia.lsearch.importer.SimpleIndexWriter.openIndex(SimpleIndexWriter.java:67)
        at org.wikimedia.lsearch.importer.SimpleIndexWriter.<init>(SimpleIndexWriter.java:49)
        at org.wikimedia.lsearch.importer.DumpImporter.<init>(DumpImporter.java:39)
        at org.wikimedia.lsearch.importer.Importer.main(Importer.java:128)
My configs:
root@rainbow:/usr/local/search/ls2 # cat lsearch-global.conf | grep ^[^#]
[Database]
wikidb : (single) (language,de) (warmup,10)
[Search-Group]
rainbow : wikidb
[Index]
rainbow : wikidb
[Index-Path]
<default> : /usr/local/search/indexes
[OAI]
wikidd : http://rainbow.local.com/mwiki/index.php
[Properties]
Database.suffix=itowiki_
ExactCase.suffix=itowiki_
[Namespace-Prefix]
all : <all>
[0] : 0
[1] : 1
[2] : 2
[3] : 3
[4] : 4
[5] : 5
[6] : 6
[7] : 7
[8] : 8
[9] : 9
[10] : 10
[11] : 11
[12] : 12
[13] : 13
[14] : 14
[15] : 15
and the other config:
root@rainbow:/usr/local/search/ls2 # cat lsearch.conf | grep ^[^#]
MWConfig.global=file:///usr/local/search/ls2/lsearch-global.conf
MWConfig.lib=/usr/local/search/ls2/lib
Indexes.path=/usr/local/search/indexes
Search.updateinterval=1
Search.updatedelay=0
Search.checkinterval=30
Index.snapshotinterval=5
Index.maxqueuecount=5000
Index.maxqueuetimeout=12
Storage.master=rainbow
Storage.username=root
Storage.password=mysecret
Storage.adminuser=root
Storage.adminpass=mysecret
Storage.useSeparateDBs=false
Storage.defaultDB=lsearch
Storage.lib=/usr/local/search/ls2/sql
SearcherPool.size=3
Localization.url=file:///srv/www/htdocs/mwiki/languages/messages
Logging.logconfig=/usr/local/search/ls2/lsearch.log4j
Logging.debug=false
and finally:
root@rainbow:/usr/local/search/ls2 # cat lsearch.log4j | grep ^[^#]
log4j.rootLogger=INFO, A1
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
Kind regards Stefan 17 January 2008
Multiple wikis in one database
Is there a way to index and search multiple wikis that are contained within one database? I've tried a few things in the configuration and command lines, and I've not figured out a way to do this.
Thanks! --Laduncan 16:31, 8 October 2007 (UTC)
- If you want to get search results combined from multiple wikis, that is still not supported (as of v2.0). Next minor release might show some improvements in that direction.. --Rainman 16:55, 8 October 2007 (UTC)
- Thanks for the quick info! --Laduncan 20:31, 8 October 2007 (UTC)
- I have Lucene search running on an installation which contains 3 wikis sharing the same database, using prefixes. The search results give the wrong count: when I search within a wiki, it seems to actually search all 3 wikis, but shows only the hits belonging to the current wiki. That way I get, for example, only 6 hits listed on the result page, because the other, invisible hits were in the other two wikis. How do I get the first 20 hits for the current wiki listed? (I do not want to see the hits in the other wikis, and I do not want them counted.) --83.202.49.58 10:06, 16 March 2009 (UTC)
Requiring less exact matches
It appears that the search in the fulltext is doing an implicit AND -- that is, all the words need to be in the document for it to appear in the results list.
For what I'm doing, I'd like to have the default be "OR," and let the ranking algorithm hopefully bring the most relevant content to the top. (The queries my users will be using will be long and complex, and will generally match nothing with "AND.")
I can manually search with OR between the words, but I wanted to know if I could change the configuration of the extension to have it do that by default.
Thanks in advance, Dbkayanda 00:57, 15 October 2007 (UTC)
- Personally, I think ranking is not smart enough to give the best results if the default operator is OR, but you can change it by hacking the code a bit. In WikiQueryParser.java, line 112 reads:
BooleanClause.Occur boolDefault = BooleanClause.Occur.MUST;
Replace the last part with BooleanClause.Occur.SHOULD. --Rainman 14:41, 15 October 2007 (UTC)
- Worked like a charm. Thanks, as always, for your help. --16 October 2007
Index of attachments (doc, pdf, xls)
[edit]Hi Robert,
I found this cool MediaWiki extension for the Lucene search engine. Is there a possibility to index all attachments like PDF, HTML, DOC and XLS with this addon?
I found some information in the Lucene FAQ - http://wiki.apache.org/lucene-java/LuceneFAQ#head-37523379241b88fd90bcd1de81b74e7ec8843f72 - on how to index attachments. Is it possible to use such indexed files with the MediaWiki extension you wrote?
Thanks a lot! Alex--14:51, 22 October 2007 (UTC)
- Yes, there are libraries that can parse PDF, DOC, etc. and work with Lucene, but I haven't got around to including them in the extension yet, and I probably won't have time in the next few months... If you really need it, you can try to hack it yourself: you would probably want the Importer to fetch the media file (maybe with ?action=raw), then construct an Article object whose contents are the parsed text and pass it to the indexer. --Rainman 21:08, 22 October 2007 (UTC)
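To illustrate the idea (this is not part of the extension): a hypothetical helper that fetches an uploaded PDF over HTTP and extracts its text with Apache PDFBox; the extracted text could then be wrapped in an Article object and passed to the indexer. The class name and the choice of PDFBox are assumptions, not lucene-search API:
import java.io.InputStream;
import java.net.URL;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

public class AttachmentText {
    /** Download a PDF from the wiki and return its plain text for indexing. */
    public static String fetchPdfText(String fileUrl) throws Exception {
        try (InputStream in = new URL(fileUrl).openStream();
             PDDocument doc = PDDocument.load(in)) {
            return new PDFTextStripper().getText(doc);
        }
    }
}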
- Are all namespaces indexed by the current LuceneSearch extension? Also the Image namespace that contains all the file data? Does the extension then only index the most recent file description? Where do I have to start in LuceneSearch_body.php?
Thanks! Alex --12:06, 23 October 2007 (UTC)
- All articles from the database get indexed. LuceneSearch_body.php is just an interface to the Java daemon that does all the work, so you'll need to modify the Java code. What currently gets indexed is just the image descriptions; the media files themselves are stored outside the database, in the file system... --Rainman 10:20, 23 October 2007 (UTC)
Binary version of LuceneSearch.jar?
[edit]Hello,
Where can I get a binary version of LuceneSearch.jar? I don't have ant on the server this is being installed on, and I tried building LuceneSearch.jar on my desktop computer using ant, but it failed with errors about missing MediaWiki Java classes. I'd prefer a binary, if possible, so I can get this up and running ASAP.
Ben --8 December 2007
Soundex searches?
Will this extension support Soundex-like searches for spelling mistakes etc.? --12 December 2007
- Probably in the next major release (hopefully end of January). --Rainman 14:11, 13 December 2007 (UTC)
Special page search complains about "problem with wiki search"
After following the instructions as closely as possible, the plugin renders the special page as such:
[ search_string on text area ] [ dropdown_list ] [search_button]
<noexactmatch-nocreate> There was a problem with the wiki search. This is probably temporary; try again in a few moments, or you can search the wiki through an external search service:
Content in square brackets is just my attempt to recreate the GUI.
Is there something missing in the way it is using the host to do the search? --Cartoro 00:00, 20 December 2007 (UTC)
- Check your log files for more info about what went wrong ... --Rainman 18:31, 20 December 2007 (UTC)
- Yes, I wanted to see that... but I couldn't find any log files.... sorry, silly question, but where are they? Could this be a problem with accessing the actual DB? --Cartoro 22:11, 20 December 2007 (UTC)
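Judging from the default lsearch.log4j quoted elsewhere on this page, the daemon logs to the console via a ConsoleAppender, so messages appear in the terminal where lsearchd was started. A sketch of a file-based alternative (the log path is illustrative):
log4j.rootLogger=INFO, F1
log4j.appender.F1=org.apache.log4j.FileAppender
log4j.appender.F1.File=/var/log/lsearchd.log
log4j.appender.F1.layout=org.apache.log4j.PatternLayout
log4j.appender.F1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n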
Port 8123 already in use.
[edit]Hi again,
I'm still trying to make it run. I've found that most of the problems are due to misconfiguration on my part. Java error messages are not very helpful at first, but that is the case with any new functionality one comes across.
When I tried to run ./lsearchd, it came up with this:
java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
        java.net.SocketTimeoutException: Read timed out
        at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)
        at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
        at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)
        at sun.rmi.registry.RegistryImpl_Stub.rebind(Unknown Source)
        at org.wikimedia.lsearch.interoperability.RMIServer.register(RMIServer.java:24)
        at org.wikimedia.lsearch.interoperability.RMIServer.bindRMIObjects(RMIServer.java:60)
        at org.wikimedia.lsearch.config.StartupManager.main(StartupManager.java:52)
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at java.io.DataInputStream.readByte(DataInputStream.java:248)
        at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)
further down, it came up with this message:
120488 [Thread-1] FATAL org.wikimedia.lsearch.frontend.HTTPIndexServer - Dying: bind error: Address already in use
Has anybody seen this? I still think it is a trivial error on my part, but I still cannot find the cause. --Cartoro 00:00, 20 December 2007 (UTC)
- The above is RMI complaining that it cannot register the networked objects. That should be harmless unless you're using distributed searching. The message below seems to be what it says: some other app is using the ports (the searcher is by default on 8123, and the indexer on 8321) - make sure you don't have an old version of lsearchd still running. Use the command nmap localhost to find out which ports are taken. If those default ports are taken by other apps, change them in lsearch.conf and in LocalSettings.php ... --Rainman 13:36, 20 December 2007 (UTC)
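For example, to check whether the default ports are already bound on a typical Linux host (assuming either tool is installed):
nmap -p 8123,8321 localhost
netstat -tlnp | grep -E ':(8123|8321)'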
Error when running ./lsearchd
I am getting the following error when running ./lsearchd:
53664-jpbaello:/srv/www/htdocs/search/ls2 # ./lsearchd
RMI registry started.
Trying config file at path /root/.lsearch.conf
Trying config file at path /srv/www/htdocs/search/ls2/lsearch.conf
Error resolving local hostname. Make sure that hostname is setup correctly.
java.net.UnknownHostException: 53664-jpbaello: 53664-jpbaello
        at java.net.InetAddress.getLocalHost(InetAddress.java:1346)
        at org.wikimedia.lsearch.config.GlobalConfiguration.determineInetAddress(GlobalConfiguration.java:124)
        at org.wikimedia.lsearch.config.GlobalConfiguration.<init>(GlobalConfiguration.java:102)
        at org.wikimedia.lsearch.config.GlobalConfiguration.getInstance(GlobalConfiguration.java:112)
        at org.wikimedia.lsearch.config.Configuration.<init>(Configuration.java:105)
        at org.wikimedia.lsearch.config.Configuration.open(Configuration.java:68)
        at org.wikimedia.lsearch.config.StartupManager.main(StartupManager.java:39)
Exception in thread "main" java.lang.NullPointerException
        at java.util.Hashtable.get(Hashtable.java:336)
        at org.wikimedia.lsearch.config.GlobalConfiguration.makeIndexIdPool(GlobalConfiguration.java:468)
        at org.wikimedia.lsearch.config.GlobalConfiguration.read(GlobalConfiguration.java:413)
        at org.wikimedia.lsearch.config.GlobalConfiguration.readFromURL(GlobalConfiguration.java:247)
        at org.wikimedia.lsearch.config.Configuration.<init>(Configuration.java:116)
        at org.wikimedia.lsearch.config.Configuration.open(Configuration.java:68)
        at org.wikimedia.lsearch.config.StartupManager.main(StartupManager.java:39)
And then it goes back to the command prompt. I believe this is an error because I cannot get it to create the index. I'm a little new to this, though, and not sure if I am doing things right. Also, sorry if I am not putting this in right either! Any ideas? --Think411 22 December 2007
- As the error message suggests, your hostname seems to be wrong. Is "53664-jpbaello" really your hostname? Use "echo $HOSTNAME" to verify this. Check if this hostname correctly maps to your IP in /etc/hosts. Or, try using your IP instead of your hostname. --Rainman 12:21, 22 December 2007 (UTC)
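For example, /etc/hosts should map the machine's hostname to a reachable address; the IP below is an assumption, substitute your server's actual one:
127.0.0.1       localhost
192.168.0.10    53664-jpbaello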