XOWA allows custom creation of wikis by either excluding or including articles based on word matches. The system is based on the <a href="http://dansguardian.org/" rel="nofollow" class="external text">Dansguardian</a> format: an open-source (GPLv2) system for filtering web pages.
See <a href="http://xowa.org/home/wiki/Options/Import_Dansguardian.html" id="xolnki_2" title="Options/Import Dansguardian">Options/Import_Dansguardian</a>
Phraselist files are located at <code>/xowa/bin/any/xowa/cfg/bldr/filter/wiki_name/dansguardian</code>. For example, on a Windows system, a phraselist file for simple.wikipedia.org can be placed at <code>C:\xowa\bin\any\xowa\cfg\bldr\filter\simple.wikipedia.org\dansguardian\phraselist1.txt</code>
</li>
<li>
Phraselist files can be placed in sub-directories for grouping purposes. For example, files can be placed at <code>C:\xowa\bin\any\xowa\cfg\bldr\filter\simple.wikipedia.org\dansguardian\group1\phraselist1.txt</code> and <code>C:\xowa\bin\any\xowa\cfg\bldr\filter\simple.wikipedia.org\dansguardian\group2\phraselist2.txt</code>
</li>
</ul>
<h3>
<span class="mw-headline" id="Format">Format</span>
</h3>
<p>
Phraselist files are plain text files with the following format:
</p>
<ul>
<li>
Each line is either a rule or a comment
</li>
<li>
Comments start with the hash sign (<code>#</code>). For example, <code># this is a comment</code>
</li>
<li>
Each rule has one or more words enclosed in angle brackets (<code>&lt;&gt;</code>) and separated by commas. For example, <code>&lt; earth &gt;,&lt; mars &gt;</code>
</li>
<li>
Each rule ends with a score, also enclosed in angle brackets. For example, <code>&lt;70&gt;</code>
</li>
</ul>
<p>
Phraselists are applied during import. The following process occurs:
</p>
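<p>
The phraselist format above can be parsed with a short sketch. This is a hypothetical Python helper (<code>parse_phraselist</code> is not part of XOWA); it only illustrates the comment and rule syntax described above.
</p>

```python
import re

def parse_phraselist(lines):
    """Parse Dansguardian-style phraselist lines into (words, score) rules.

    Hypothetical helper for illustration; XOWA's actual parser may differ.
    """
    rules = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank lines and comments
            continue
        # Capture every < ... > group on the line; the last group is the score.
        groups = re.findall(r"<([^<>]*)>", line)
        if len(groups) < 2:
            continue  # malformed: a rule needs at least one word and a score
        words = [g.strip() for g in groups[:-1]]
        score = int(groups[-1].strip())
        rules.append((words, score))
    return rules
```

<p>
For example, parsing <code>&lt; earth &gt;,&lt; mars &gt;&lt;30&gt;</code> yields the rule <code>(["earth", "mars"], 30)</code>.
</p>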
<ul>
<li>
The import starts for a wiki
</li>
<li>
All phraselists for the wiki are loaded into memory.
</li>
<li>
Each article's wikitext is analyzed against the phraselists to generate a score.
<ul>
<li>
The article's title is not analyzed: titles generally have only one or two words and are not useful for phraselist matching
</li>
<li>
The HTML is not analyzed; doing so would slow down the import process dramatically. For example, for English Wikipedia, analyzing the wikitext only slows the import from 2 hours 40 minutes to 3 hours, whereas analyzing the HTML would slow it to 70 hours.
</li>
</ul>
</li>
</ul>
<p>
A rule is matched if any part of the wikitext contains the words in the rule.
</p>
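<p>
Matching and scoring can be sketched as follows. This hypothetical <code>score_text</code> helper is not XOWA's implementation: it assumes simple whitespace tokenization, and it assumes a multi-word rule is counted once per complete set of its words (the smallest occurrence count among the rule's words), which is consistent with the worked examples below.
</p>

```python
def score_text(text, rules):
    """Score wikitext against parsed (words, score) rules.

    Sketch only. A rule matches if the text contains all of its words;
    each rule is counted once per match, where the match count is the
    smallest occurrence count among the rule's words (an assumption).
    """
    tokens = text.lower().split()
    score = 0
    for words, rule_score in rules:
        counts = [tokens.count(w.lower()) for w in words]
        matches = min(counts) if counts else 0
        score += matches * rule_score
    return score
```

<p>
With the two example rules from the next section, <code>score_text("earth planet something", rules)</code> yields 80: 50 for the "planet" rule plus 30 for the "earth planet" rule.
</p>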
<p>
For example, let's say we want to build a phraselist that lets us build a wiki without any astronomy articles. We could use something like the following:
</p>
<pre>
&lt; planet &gt;&lt;50&gt;
&lt; earth &gt;,&lt; planet &gt;&lt;30&gt;
</pre>
<p>
Now consider these short sample articles:
</p>
<ul>
<li>
An article with just the word "planet" would have a score of 50
</li>
<li>
An article with the words "earth planet something" would have a score of 80: 50 for matching "planet" and 30 for matching both "earth" and "planet"
</li>
<li>
An article with just the word "earth" would have a score of 0; it also needs the word "planet" to get the score of 30
</li>
<li>
Rules can match multiple times. For example, if an article has the text "planet planet planet", then its score would be 150, not 50, because it matches the "planet" rule 3 times
</li>
<li>
Similarly "earth planet something earth planet something earth planet something" would have a score of 240 because it matches "earth planet" 3 times (90) and "planet" 3 times (150)
</li>
<li>
However, "earth planet something earth planet something earth" would have a score of only 160 because it matches the "earth planet" rule 2 times (60) and the "planet" rule 2 times (100)
</li>
</ul>
<p>
By default, any article that matches a rule (has a score &gt; 0) will be excluded. Note that this exclusion threshold can be raised from 0 to something higher, like 100. See <a href="http://xowa.org/home/wiki/Options/Import_Dansguardian.html" id="xolnki_3" title="Options/Import Dansguardian">Options/Import_Dansguardian</a>
</p>
<p>
The import filter can also be used to build content-specific wikis. For example, let's say you want to build a wiki that only <b>includes</b> articles with the words "planet" and "earth planet". The following can be done:
</p>
<ul>
<li>
Use the same phraselists as above, but negate the numbers:
</li>
</ul>
<pre>
&lt; planet &gt;&lt;-50&gt;
&lt; earth &gt;,&lt; planet &gt;&lt;-30&gt;
</pre>
<ul>
<li>
Change the initial score from 0 to 50
</li>
<li>
Leave the exclude score at 0
</li>
</ul>
<p>
When running the import, the following will happen:
</p>
<ul>
<li>
An article that has the words "earth planet something" will have a score of -30: the initial score of 50 plus the combined rule scores of -80 (-50 and -30). Because -30 is not greater than the exclude score of 0, it will not be excluded.
</li>
<li>
An article that has the words "a b c" will keep its initial score of 50. Because 50 is greater than the exclude score of 0, it will be excluded
</li>
<li>
Note that a phraselist file can have many rules. The number of rules does not significantly slow down the runtime of the import filter. For example, if Simple Wikipedia imports in 3 minutes with 100 rules, it should still take about 3 minutes with 10,000 rules
</li>
<li>
However, the number of rules does affect the amount of memory required. For example, 100 rules may take 1 MB, while 10,000 rules may take 10 MB.
</li>
</ul>
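<p>
The include-mode behavior described above (an initial score, negated rule scores, and an exclude threshold) can be sketched with a hypothetical helper. The names, the tokenization, and the match counting are assumptions for illustration, not XOWA's actual code; the threshold and initial score correspond to the options described on the Options/Import_Dansguardian page.
</p>

```python
def should_exclude(text, rules, initial_score=0, exclude_threshold=0):
    """Return True if an article should be excluded. Sketch only.

    Starts from initial_score, adds each rule's score once per match
    (match count = smallest occurrence count among the rule's words,
    an assumption), then compares against exclude_threshold.
    """
    tokens = text.lower().split()
    score = initial_score
    for words, rule_score in rules:
        counts = [tokens.count(w.lower()) for w in words]
        score += min(counts) * rule_score
    return score > exclude_threshold

# Include-mode setup: negated rules plus an initial score of 50.
rules = [(["planet"], -50), (["earth", "planet"], -30)]
assert should_exclude("earth planet something", rules, initial_score=50) is False  # score -30: kept
assert should_exclude("a b c", rules, initial_score=50) is True                    # score 50: excluded
```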
</div>
</div>
<div class="portal" id='xowa-portal-donate'>
<h3>Donate</h3>
<div class="body">
<ul>
<li><a href="https://archive.org/donate/index.php" title="Support archive.org!">archive.org</a></li><!-- listed first due to recent fire damages: http://blog.archive.org/2013/11/06/scanning-center-fire-please-help-rebuild/ -->