When indexer tries to insert a new URL into the database or to index an existing one, it first checks whether the URL has a corresponding "Server" or "Realm" command in indexer.conf. URLs without a corresponding "Server" or "Realm" command are not indexed. By default, URLs that are already in the database but no longer have a corresponding Server/Realm command are deleted from the database. This may happen, for example, after some Server/Realm commands have been removed from indexer.conf.
This is the main command of the indexer.conf file. It is used to add servers, or parts of them, to be indexed. The format of the Server command is:
Server [subsection] <URL> [alias]
This command also tells indexer to insert the given URL into the database at startup.
"Server" command has required "URL" and two optional "subsection" and "alias" parameters. Usage of alias optional parameters is covered in Aliases section.
For example, the command Server http://localhost/ allows indexing of the whole http://localhost/ server and makes indexer insert the given URL into the database at startup. You can also specify a path to index only a subsection of the server: Server http://localhost/subsection/. This likewise tells indexer to insert the given URL at startup.
Note: You can suppress indexer's adding of the URLs given in Server commands by using the -q indexer command line argument. This is useful when you have hundreds or thousands of Server commands and their URLs are already in the database, as it makes indexer start up more quickly.
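For example, once the URLs from your Server commands are already in the database, a quicker start (a minimal sketch) is simply:

indexer -q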
There are several ways indexer can check whether a URL corresponds to a Server command. Use the optional subsection parameter to specify the checking behavior for a server. The possible values of subsection are the same as the arguments of the "Follow" command: page, path, site or world, with "path" as the default. If subsection is not specified, the current "Follow" value is used. So the single command Server site http://localhost/ and the combination of Follow site with Server http://localhost/ have the same effect.
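For example, these two indexer.conf fragments are equivalent (a sketch reusing the http://localhost/ example): the single command

Server site http://localhost/

and the pair

Follow site
Server http://localhost/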
path subsection
When indexer looks for a "Server" command corresponding to a URL, it checks whether the URL starts with the URL given in the Server command argument, with the trailing file name removed. For example, if Server path http://localhost/path/to/index.html is given, all URLs that begin with http://localhost/path/to/ correspond to this Server command.
The following commands have the same effect, except that they insert different URLs into the database:
Server path http://localhost/path/to/index.html
Server path http://localhost/path/to/index
Server path http://localhost/path/to/index.cgi?q=bla
Server path http://localhost/path/to/index?q=bla
site subsection
indexer checks that the discovered URL has the same hostname as the URL given in the Server command. For example, Server site http://localhost/path/to/a.html allows indexing of the whole http://localhost/ server.
world subsection
If the world subsection is specified in a Server command, every URL is considered to match this Server command. See the explanation below.
page subsection
This subsection describes only the single URL given in the Server argument.
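For example (a sketch reusing the URL from the site example above), the following command covers only that exact URL and nothing else:

Server page http://localhost/path/to/a.html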
subsection in the news:// scheme
The subsection is always treated as "site" for the news:// URL scheme, because news:// has no nested paths like ftp:// or http://. Use Server news://news.server.com/ to index a whole news server, or, for example, Server news://news.server.com/udm to index all messages from the "udm" hierarchy.
The Realm command is a more powerful means of describing a web area to be indexed. The format of the Realm command is:
Realm [String|Regex] [Match|NoMatch] <URLMask> [alias]
It works almost like "Server" command but takes a regular expression or string wildcards as it's argument. There are two comparison types in Realm command. String wildcards is default match type. You can use ? and * signs in URLMask parameters, they means "one character" and "any number of characters" respectively. For example, if you want to index all HTTP sites in .ru domain, use this command:
Realm http://*.ru/*
The Regex comparison type takes a regular expression as its argument; activate it using the "Regex" keyword. For example, you can describe everything in the .ru domain with a regular expression:
Realm Regex ^http://.*\.ru/
The second optional argument specifies the match type. The possible values are "Match" and "NoMatch", with "Match" as the default. Realm NoMatch has the reverse effect: a URL that does not match the given URLMask corresponds to this Realm command. For example, use this command to index everything outside the .com domain:
Realm NoMatch http://*.com/*
Optional "alias" argument allows providing very complicated URL rewrite more powerful than other aliasing mechanism. Take a look into Aliases section for "alias" argument usage explanation. Alias works only with "Regex" comparison type and has no effect with "String" type.
Since subsection determines which part of the argument given in a Server command is compared with a URL, the Realm command has no such optional subsection parameter: it would be useless with string wildcards and regular expressions. For the same reason, the "Follow" command does not affect the "Realm" command. Imagine that you have:
Follow path
Realm http://localhost/*
URL http://localhost/somepath/
If you add a URL such as http://localhost/somepath/ into the database, either using the "URL" indexer.conf command shown above or by running indexer -i -u http://localhost/somepath/, indexer WILL follow any URL outside the /somepath/ directory of localhost if there is a link to it from /somepath/. Follow path has no effect when the Realm command is used.
The Subnet command is another way to describe the web area to be indexed. The format of the Subnet command is:
Subnet [Match|NoMatch] <IP mask>
It works almost like "Server" command but takes a string wildcards as it's argument which is compared against IP address instead of URL. Argument may have * and ? signs, they means "one character" and "any number of characters" respectively. For example, if you want to index all HTTP sites in your local subnet, use this command:
Subnet 192.168.*.*
You may use "NoMatch" optional argument. For example, if you want to index everything without 195.x.x.x subnet, use:
Subnet NoMatch 195.*.*.*
indexer checks "Server" and "Realm" commands in the order of their appearance. Thus, if you want to give different parameters to, for example, a whole server and one of its subsections, you should put the subsection line before the whole server's line. Imagine that you have a server subdirectory that contains news articles; those articles should certainly be reindexed more often than the rest of the server. The following combination may be useful in such cases:
# Add subsection
Period 200000
Server http://servername/news/
# Add server
Period 600000
Server http://servername/
These commands give the /news/ subdirectory a different reindexing period than the server as a whole. indexer will choose the first "Server" record for http://servername/news/page1.html, since it matches and was given first.
The default behavior of indexer is to follow only links that have a corresponding Server/Realm command in the indexer.conf file. It also jumps between servers if both of them are present in indexer.conf, either directly in a Server command or indirectly through a Realm command. For example, suppose there are two Server commands:
Server http://www/
Server http://web/
When indexing http://www/page1.html, indexer WILL follow the link http://web/page2.html if it is found. Note that these pages are on different servers, but BOTH of them have a corresponding Server record.
If one of the Server commands is deleted, indexer will remove all expired URLs from that server during the next reindexing.
The first way to change the default behavior described above is to use the "Follow world" indexer.conf command. indexer will then walk through ANY URL it finds and will jump between different servers. Theoretically, it would index the entire Internet in this case if there were no hardware limits.
When "Follow world" command is specified, indexer just adds one server record to memory with an empty start URL during loading indexer.conf. This empty server will be found only in the case when no other Server records with non-empty start URL are found.
The second way to change the default behavior is to use the "DeleteNoServer no" command. This command means that URLs already in the database will not be deleted even if they have no corresponding Server/Realm command. "DeleteNoServer no" is implemented by adding one empty server record, just like "Follow world". The difference between the two commands is that with "DeleteNoServer no" indexer follows links ONLY INSIDE servers and does not jump between different servers. This allows indexing only those servers that are already in the database, without following links to other servers.
Example of command sequence:
DeleteNoServer no
Server http://www/
Server http://web/
While indexing http://www/page1.html, indexer WILL follow the link http://www/page2.html but will NOT follow the link http://web/page2.html, because http://www/page1.html and http://web/page2.html are on different servers.
Note: If you delete a URL from the list in url.txt while using the "DeleteNoServer no" scheme, indexer WILL NOT delete the URLs of that server. Imagine that you have removed http://www/ from url.txt. To remove all URLs of this server from the database you will have to run indexer -C -u http://www/%.
You may note that "Realm *" is somewhat like "DeleteNoServer no"; it has almost the same effect. The only difference is that "Realm *" does allow indexer to jump between servers.
The third scheme is very useful when running indexer -i -f url.txt. You may maintain the list of required servers in url.txt. When a new URL is added to url.txt, indexer will index that URL's server during the next startup.
If you are using "DeleteNoServer no", it does not matter whether you pass the root URL of the server (http://www/) or one of its internal pages (http://www/path/to/some/page.html); indexer will index the whole http://www/ server.
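For example, with "DeleteNoServer no" set you might keep a url.txt like this (a sketch; the server names are just the placeholders used above):

http://www/
http://web/

and load it at startup with:

indexer -i -f url.txt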
The HrefOnly indexer.conf command is useful when indexing large mailing list archives. Use it to index only the content of the actual messages, while the index and thread pages are only scanned for URLs and not indexed themselves.
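A minimal sketch, assuming HrefOnly takes no arguments and applies to the Server commands that follow it (the archive URL is hypothetical):

HrefOnly
Server http://lists.example.com/archive/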
mnoGoSearch has alias support, which makes it possible to index sites while taking the actual data from another location. For example, if you index a local web server, it is possible to take pages directly from disk without involving your web server in the indexing process. Another example is building a search engine for a primary site while using its mirror for indexing. There are several ways of using aliases.
Format of "Alias" indexer.conf command:
Alias <masterURL> <mirrorURL>
For example, suppose you wish to index http://search.mnogo.ru/ using its nearest German mirror, http://www.gstammw.de/mirrors/mnoGoSearch/. Add these lines to your indexer.conf:
Server http://search.mnogo.ru/
Alias http://search.mnogo.ru/ http://www.gstammw.de/mirrors/mnoGoSearch/
search.cgi will display URLs from the master site http://search.mnogo.ru/, but indexer will fetch the corresponding pages from the mirror site http://www.gstammw.de/mirrors/mnoGoSearch/.
Another example: you want to index everything in the udm.net domain, and one of the servers, for example http://home.udm.net/, is stored on the local machine in the /home/httpd/htdocs/ directory. These commands will be useful:
Realm http://*.udm.net/
Alias http://home.udm.net/ file:/home/httpd/htdocs/
indexer will take home.udm.net from the local disk and index the other sites via HTTP.
Aliases are searched in the order of their appearance in indexer.conf, so you can create different aliases for a server and parts of it:
# First, create alias for example for /stat/ directory which
# is not under common location:
Alias http://home.udm.net/stat/ file:/usr/local/stat/htdocs/
# Then create alias for the rest of the server:
Alias http://home.udm.net/ file:/usr/local/apache/htdocs/
Note: if you change the order of these commands, the alias for the /stat/ directory will never be found.
You may also specify the location used by indexer as an optional argument of the Server command:
Server http://home.udm.net/ file:/home/httpd/htdocs/
Aliases in the Realm command are a very powerful feature based on regular expressions. The idea is similar to how the PHP preg_replace() function works. Aliases in the Realm command work only with the "regex" match type and do not work with the "string" match type.
Use this syntax for Realm aliases:
Realm regex <URL_pattern> <alias_pattern>
indexer matches the URL against URL_pattern and builds the URL alias using alias_pattern. alias_pattern may contain references of the form $n, where n is a number in the range 0-9. Every such reference is replaced by the text captured by the n'th parenthesized subpattern; $0 refers to the text matched by the whole pattern. Opening parentheses are counted from left to right (starting from 1) to obtain the number of a capturing subpattern.
Example: your company hosts several hundred users with domains of the form www.username.yourname.com. Every user's site is stored on disk in "htdocs" under the user's home directory: /home/username/htdocs/.
You may write this command in indexer.conf (note that the dot '.' character has a special meaning in regular expressions and must be escaped with a '\' when a literal dot is meant):
Realm regex (http://www\.)(.*)(\.yourname\.com/)(.*) file:/home/$2/htdocs/$4
Imagine that indexer is processing the page http://www.john.yourname.com/news/index.html. It will build the references $0 to $4 as follows:
$0 = 'http://www.john.yourname.com/news/index.html' (the whole pattern match)
$1 = 'http://www.', captured by the subpattern '(http://www\.)'
$2 = 'john', captured by the subpattern '(.*)'
$3 = '.yourname.com/', captured by the subpattern '(\.yourname\.com/)'
$4 = 'news/index.html', captured by the subpattern '(.*)'
Then indexer will compose the alias using the $2 and $4 references:
file:/home/john/htdocs/news/index.html
and will use the result as the document location when fetching it.
You may also specify "AliasProg" command for aliasing purposes. AliasProg is useful for major web hosting companies which want to index their web space taking documents directly from a disk without having to involve web server in indexing process. Documents layout may be very complex to describe it using alias in Realm command. AliasProg is an external program that can be called, that takes a URL and returns one string with the appropriate alias to stdout. Use $1 to pass URL to command line.
For example, this AliasProg command uses the 'replace' utility from the MySQL distribution to replace the URL substring http://www.apache.org/ with file:/usr/local/apache/htdocs/:
AliasProg "echo $1 | /usr/local/mysql/bin/mysql/replace http://www.apache.org/ file:/usr/local/apache/htdocs/"
You may also write your own, arbitrarily complex, program to process URLs.
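For instance, a minimal sketch of such a program as a shell script (the script path and host name are hypothetical):

#!/bin/sh
# /usr/local/bin/url2file.sh (hypothetical): rewrite a URL prefix to a local file: location
echo "$1" | sed 's|^http://www\.example\.com/|file:/var/www/htdocs/|'

and the corresponding indexer.conf line:

AliasProg "/usr/local/bin/url2file.sh $1"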
The ReverseAlias indexer.conf command allows URL mapping before a URL is inserted into the database. Unlike the Alias command, which applies its mapping right before a document is downloaded, the ReverseAlias command applies its mapping as soon as a link is found.
ReverseAlias http://name2/ http://name2.yourname.com/
Server http://name2.yourname.com/
All links with the short server name will be mapped to links with the full server name before they are inserted into the database.
One possible use is cutting unnecessary strings such as PHPSESSION=XXXX from URLs.
For example, cutting it from a URL like http://www/a.php?PHPSESSION=XXX, where PHPSESSION is the only parameter; the question mark is deleted as well:
ReverseAlias regex (http://[^?]*)[?]PHPSESSION=[^&]*$ $1
Cutting it from a URL like http://www/a.php?PHPSESSION=xxx&..., i.e. when PHPSESSION is the first parameter but other parameters follow it. The '&' sign after PHPSESSION is deleted as well; the question mark is not deleted:
ReverseAlias regex (http://[^?]*[?])PHPSESSION=[^&]*&(.*) $1$2
Cutting it from a URL like http://www/a.php?a=b&PHPSESSION=xxx or http://www/a.php?a=b&PHPSESSION=xxx&c=d, where PHPSESSION is not the first parameter. The '&' sign before PHPSESSION is deleted:
ReverseAlias regex (http://[^&]*)&PHPSESSION=[^&]*(.*) $1$2
It is also possible to define aliases in the search template (search.htm). The Alias command in search.htm is identical to the one in indexer.conf, but it is applied at search time rather than at indexing time.
The syntax of the search.htm Alias command is the same as in indexer.conf:
Alias <find-prefix> <replace-prefix>
For example, there is the following command in search.htm:
Alias http://localhost/ http://www.mnogo.ru/
and a search returns a page with the following URL:
http://localhost/news/article10.html
As a result, the $(DU) variable will be replaced NOT with this URL:
http://localhost/news/article10.html
but with the following URL (the result of applying the Alias):
http://www.mnogo.ru/news/article10.html
Since version 3.1.7, mnoGoSearch has the "ServerTable" indexer.conf command. It works in the SQL version and has no effect with the built-in database.
When "ServerTable srv" is specified, indexer will load servers information from given "srv" SQL table. Check the structure in create/mysql/server.txt file. If there is no structure example for your database, take it as an example. Please send us structure for your database at <devel@mnogosearch.org>!
You may use several "ServerTable" commands to load server information from different tables.
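For example (a sketch; the table names are arbitrary):

ServerTable srv
ServerTable srv_user_submitted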
The servers table contains all the fields needed to describe server parameters. Field names correspond to indexer.conf commands; for example, the "period" field corresponds to the "Period" indexer.conf command. Default field values are the same as the default indexer.conf parameters.
"gindex" field corresponds "Index" command. Name is slightly changed to avoid SQL reserved word usage.
"gindex","deletebad","use_robots","delete_no_server","use_clones" are actually boolean parameters. 0 means "no", 1 means "yes".
"follow" field values are 0 for "page", 1 for "path", 2 for "site" and 4 for "world".
Note: Only those rows whose "active" field is 1 are read from the table. This is useful to let users submit new URLs into the servers table while giving the administrator the ability to approve the added URLs.
mnoGoSearch indexer can use external parsers to index various file types (mime types).
A parser is an executable program that converts one of the mime types to text/plain or text/html. For example, if you have PostScript files, you can use the ps2ascii parser (filter), which reads a PostScript file from stdin and writes ASCII to stdout.
Indexer supports four types of parsers that can:
read data from stdin and send result to stdout
read data from file and send result to stdout
read data from file and send result to file
read data from stdin and send result to file
Configure mime types
Configure your web server to send the appropriate "Content-Type" header. For Apache, have a look at the mime.types file; most mime types are already defined there.
If you want to index local files, use the "AddType" command in indexer.conf to associate file name extensions with their mime types. For example:
AddType text/html *.html
Add parsers
Add lines with parser definitions. The lines have the following format with three arguments:
Mime <from_mime> <to_mime> <command line>
For example, the following line defines a parser for man pages:
# Use deroff for parsing man pages ( *.man )
Mime application/x-troff-man text/plain deroff
This parser will take data from stdin and write its result to stdout.
Many parsers cannot operate on stdin and require a file to read from. In this case, indexer creates a temporary file in /tmp and removes it when the parser exits. Use the $1 macro in the parser command line to substitute the file name. For example, the Mime command for the "catdoc" MS Word to ASCII converter may look like this:
Mime application/msword text/plain "/usr/bin/catdoc -a $1"
If your parser writes its result to an output file, use the $2 macro. indexer will replace $2 with a temporary file name, start the parser, read the result from that temporary file and then remove it. For example:
Mime application/msword text/plain "/usr/bin/catdoc -a $1 >$2"
The parser above will read data from the first temporary file and write its result to the second one. Both temporary files are removed when the parser exits. Note that the result of this parser is exactly the same as for the previous one; they just use different execution modes: file->stdout and file->file respectively.
You can use pipes in a parser's command line. For example, these lines are useful to index gzipped man pages from a local disk:
AddType application/x-gzipped-man *.1.gz *.2.gz *.3.gz *.4.gz
Mime application/x-gzipped-man text/plain "zcat | deroff"
Some parsers may produce output in a charset other than the one given in the LocalCharset command. Specify the charset to make indexer convert the parser's output to the proper one. For example, if your catdoc is configured to produce output in the windows-1251 charset but LocalCharset is koi8-r, use this command for parsing MS Word documents:
Mime application/msword "text/plain; charset=windows-1251" "catdoc -a $1"
When executing a parser, indexer sets the UDM_URL environment variable to the URL being processed. You can use this variable in parser scripts.
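For example, a minimal sketch of a wrapper script that logs each processed URL before converting the document with catdoc (the script and log paths are hypothetical):

#!/bin/sh
# /usr/local/bin/catdoc-log.sh (hypothetical): record the URL, then convert the file
echo "$UDM_URL" >> /var/log/indexer-parser.log
exec /usr/bin/catdoc -a "$1"

with the corresponding Mime command:

Mime application/msword text/plain "/usr/local/bin/catdoc-log.sh $1"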
RPM parser by Mario Lang <lang@zid.tu-graz.ac.at>
/usr/local/bin/rpminfo:
#!/bin/bash
/usr/bin/rpm -q --queryformat="<html><head><title>RPM: %{NAME} %{VERSION}-%{RELEASE}(%{GROUP})</title><meta name=\"description\" content=\"%{SUMMARY}\"></head><body>%{DESCRIPTION}\n</body></html>" -p $1
indexer.conf:
Mime application/x-rpm text/html "/usr/local/bin/rpminfo $1"
It renders nice RPM information like this:
3. RPM: mysql 3.20.32a-3 (Applications/Databases) [4]
Mysql is a SQL (Structured Query Language) database server.
Mysql was written by Michael (monty) Widenius. See the CREDITS
file in the distribution for more credits for mysql and related
things....
(application/x-rpm) 2088855 bytes
catdoc MS Word to text converter
Home page, also listed on Freshmeat
indexer.conf:
Mime application/msword text/plain "catdoc $1"
xls2csv MS Excel to text converter
It is supplied with catdoc.
indexer.conf:
Mime application/vnd.ms-excel text/plain "xls2csv $1"
pdftotext Adobe PDF converter
Supplied with xpdf project.
Homepage, also listed on Freshmeat
indexer.conf:
Mime application/pdf text/plain "pdftotext $1 -"
rthc RTF to text converter
Homepage, also listed on Freshmeat
indexer.conf:
Mime "text/rtf*" text/html "rthc --use-stdout $1 2>/dev/null"
rthc also writes some output to stderr, so it is redirected to /dev/null.
Please feel free to contribute your scripts and parsers configuration to <general@mnogosearch.org>.