<!DOCTYPE html>
<html dir="ltr">
<head>
<meta http-equiv="content-type" content="text/html;charset=UTF-8" />
<title>Dev/Command-line/Dumpsx - XOWA</title>
<link rel="shortcut icon" href="/xowa/wiki/home/page/xowa_logo.png" />
<link rel="stylesheet" href="/xowa/wiki/home/page/xowa_common.css" type="text/css">
</head>
<body class="mediawiki ltr sitedir-ltr ns-0 ns-subject skin-vector action-submit vector-animateLayout" spellcheck="false">
<div id="mw-page-base" class="noprint"></div>
<div id="mw-head-base" class="noprint"></div>
<div id="content" class="mw-body">
<h1 id="firstHeading" class="firstHeading"><span>Dev/Command-line/Dumpsx</span></h1>
<div id="bodyContent" class="mw-body-content">
<div id="siteSub">From XOWA: the free, open-source, offline wiki application</div>
<div id="contentSub"></div>
<div id="mw-content-text" lang="en" dir="ltr" class="mw-content-ltr">
<p>
In addition to text-dumps, XOWA can also generate file-dumps and html-dumps.
</p>
<table class="metadata plainlinks ambox ambox-delete" style="">
<tr>
<td class="mbox-empty-cell">
</td>
<td class="mbox-text" style="">
<p>
<span class="mbox-text-span">Please note that this script is for power users. It is not meant for casual users.</span>
</p>
<p>
<span class="mbox-text-span">Please read through these instructions carefully. If you fail to follow these instructions, you may end up downloading millions of images by accident, and have your IP address banned by Wikimedia.</span>
</p>
<p>
<span class="mbox-text-span">Also, the script will change in the future, and without any warning. There is no backward compatibility. Although the XOWA databases have a fixed format, the scripts do not. If you discover that your script breaks, please refer to this page, contact me for assistance, or go through the code.</span>
</p>
</td>
</tr>
</table>
<p>
<br>
</p>
<table class="metadata plainlinks ambox ambox-delete" style="">
<tr>
<td class="mbox-empty-cell">
</td>
<td class="mbox-text" style="">
<p>
<span class="mbox-text-span">The html-dump is officially experimental. They will become hardened for forward-compatibility, but they are not yet ready for it.</span>
</p>
<p>
<span class="mbox-text-span">Although the XOWA Android app works fine with the html-dumps for English Wikipedia, I still need to run the html-dump code through more wikis. There is a high probability that I may find something that causes me to change the html-dump format. When that happens, the old html-dumps will not work with the newest XOWA Android app.</span>
</p>
<p>
<span class="mbox-text-span">Basically, you should generate these html-dumps for personal use / testing. Please do not generate them for many wikis, or distribute them en masse, without preparing to redo all your work again.</span>
</p>
</td>
</tr>
</table>
<p>
<br>
</p>
<div id="toc" class="toc">
<div id="toctitle">
<h2>
Contents
</h2>
</div>
<ul>
<li class="toclevel-1 tocsection-1">
<a href="#Background"><span class="tocnumber">1</span> <span class="toctext">Background</span></a>
</li>
<li class="toclevel-1 tocsection-2">
<a href="#Overview"><span class="tocnumber">2</span> <span class="toctext">Overview</span></a>
</li>
<li class="toclevel-1 tocsection-3">
<a href="#Requirements"><span class="tocnumber">3</span> <span class="toctext">Requirements</span></a>
<ul>
<li class="toclevel-2 tocsection-4">
<a href="#commons.wikimedia.org_.28file-dump_mode_only.29"><span class="tocnumber">3.1</span> <span class="toctext">commons.wikimedia.org (file-dump mode only)</span></a>
</li>
<li class="toclevel-2 tocsection-5">
<a href="#www.wikidata.org"><span class="tocnumber">3.2</span> <span class="toctext">www.wikidata.org</span></a>
</li>
<li class="toclevel-2 tocsection-6">
<a href="#Hardware"><span class="tocnumber">3.3</span> <span class="toctext">Hardware</span></a>
</li>
<li class="toclevel-2 tocsection-7">
<a href="#Internet-connectivity_.28file-dump_mode_only.3B_optional.29"><span class="tocnumber">3.4</span> <span class="toctext">Internet-connectivity (file-dump mode only; optional)</span></a>
</li>
<li class="toclevel-2 tocsection-8">
<a href="#Pre-existing_image_databases_for_your_wiki_.28file-dump_mode_only.3B_optional.29"><span class="tocnumber">3.5</span> <span class="toctext">Pre-existing image databases for your wiki (file-dump mode only; optional)</span></a>
</li>
</ul>
</li>
<li class="toclevel-1 tocsection-9">
<a href="#gfs"><span class="tocnumber">4</span> <span class="toctext">gfs</span></a>
</li>
<li class="toclevel-1 tocsection-10">
<a href="#Terms_.28file-dump_mode_only.29"><span class="tocnumber">5</span> <span class="toctext">Terms (file-dump mode only)</span></a>
<ul>
<li class="toclevel-2 tocsection-11">
<a href="#lnki"><span class="tocnumber">5.1</span> <span class="toctext">lnki</span></a>
</li>
<li class="toclevel-2 tocsection-12">
<a href="#orig"><span class="tocnumber">5.2</span> <span class="toctext">orig</span></a>
</li>
<li class="toclevel-2 tocsection-13">
<a href="#xfer"><span class="tocnumber">5.3</span> <span class="toctext">xfer</span></a>
</li>
<li class="toclevel-2 tocsection-14">
<a href="#fsdb"><span class="tocnumber">5.4</span> <span class="toctext">fsdb</span></a>
</li>
</ul>
</li>
<li class="toclevel-1 tocsection-15">
<a href="#HTML_dump"><span class="tocnumber">6</span> <span class="toctext">HTML dump</span></a>
<ul>
<li class="toclevel-2 tocsection-16">
<a href="#Plain-html_databases"><span class="tocnumber">6.1</span> <span class="toctext">Plain-html databases</span></a>
</li>
</ul>
</li>
<li class="toclevel-1 tocsection-17">
<a href="#Command-line"><span class="tocnumber">7</span> <span class="toctext">Command-line</span></a>
</li>
<li class="toclevel-1 tocsection-18">
<a href="#Script"><span class="tocnumber">8</span> <span class="toctext">Script</span></a>
</li>
</ul>
</div>
<h2>
<span class="mw-headline" id="Background">Background</span>
</h2>
<p>
XOWA generates three types of dumps:
</p>
<ul>
<li>
text-dumps: These contain wikitext for a page. For example: <code>[[Earth]]</code>
</li>
<li>
html-dumps: These contain HTML for a page (compiled from its wikitext). For example: <code>&lt;a href="/wiki/Earth"&gt;Earth&lt;/a&gt;</code>
</li>
<li>
file-dumps: These contain files for a page. For example: the binary data for <a href="https://commons.wikimedia.org/wiki/File:Africa_and_Europe_from_a_Million_Miles_Away.png" rel="nofollow" class="external free">https://commons.wikimedia.org/wiki/File:Africa_and_Europe_from_a_Million_Miles_Away.png</a> (aka: the Blue Marble).
</li>
</ul>
<p>
Text-dumps are generated within the program through <a href="/xowa/wiki/home/page/Dashboard/Import/Online.html" id="xolnki_2" title="Dashboard/Import/Online" class="xowa-visited">Import online</a> and <a href="/xowa/wiki/home/page/Dashboard/Import/Offline.html" id="xolnki_3" title="Dashboard/Import/Offline" class="xowa-visited">Import offline</a>.
</p>
<p>
Html-dumps and file-dumps are generated only through a command-line script.
</p>
<p>
This page describes the process to generate html-dumps and file-dumps.
</p>
<h2>
<span class="mw-headline" id="Overview">Overview</span>
</h2>
<p>
The dump script works in the following way:
</p>
<ul>
<li>
It loads a page.
</li>
<li>
It converts the wikitext to HTML
<ul>
<li>
In file-dump mode, it compiles a list of [[File]] links.
</li>
<li>
In html-dump mode, it also saves the HTML into the XOWA html databases.
</li>
</ul>
</li>
<li>
It repeats until there are no more pages.
</li>
<li>
In file-dump mode, it then performs the following additional steps:
<ul>
<li>
It analyzes the list of [[File]] links to come up with a unique list of thumbs.
</li>
<li>
It downloads the thumbs and creates the XOWA file databases.
</li>
</ul>
</li>
</ul>
<p>
The script for Simple Wikipedia is listed below.
</p>
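<p>
Structurally, that script is a list of build commands wrapped in a small amount of boilerplate. The skeleton below is an abbreviated sketch reduced from the full script (the commented-out <code>add</code> line is a placeholder, and the comments are my reading of the command names, not official documentation):
</p>
<pre class='code'>
app.bldr.pause_at_end_('n');                    // do not pause for a keypress when the run ends
app.scripts.run_file_by_type('xowa_cfg_app');   // run the standard 'xowa_cfg_app' configuration scripts
app.bldr.cmds {
    // one 'add' line per build step, in the order the steps should run:
    // add ('wiki_domain' , 'command_name') {option_1 = 'value_1';}
}
app.bldr.run;                                   // execute all queued commands
</pre>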
<p>
You should also refer to <a href="/xowa/wiki/home/page/Dev/Command-line.html" id="xolnki_4" title="Dev/Command-line" class="xowa-visited">Dev/Command-line</a> for general instructions on running XOWA from the command-line.
</p>
<h2>
<span class="mw-headline" id="Requirements">Requirements</span>
</h2>
<h3>
<span class="mw-headline" id="commons.wikimedia.org_.28file-dump_mode_only.29">commons.wikimedia.org (file-dump mode only)</span>
</h3>
<p>
You will need the latest version of commons.wikimedia.org. If you use an older version, you will have missing images or wrong size information.
</p>
<p>
For example, if you have a commons.wikimedia.org from 2015-04-22 and are trying to import a 2015-05-17 English Wikipedia, then any new images added after 2015-04-22 will not be picked up.
</p>
<h3>
<span class="mw-headline" id="www.wikidata.org">www.wikidata.org</span>
</h3>
<p>
You also need the latest version of www.wikidata.org. Note that English Wikipedia and other wikis use Wikidata through the {{#property}} call or Module code. If you have an earlier version, then data will be missing or out of date.
</p>
<h3>
<span class="mw-headline" id="Hardware">Hardware</span>
</h3>
<p>
You should have a recent-generation machine with relatively high-performance hardware, especially if you're planning to run the script for English Wikipedia.
</p>
<p>
For context, here is my current machine setup for generating the image dumps:
</p>
<ul>
<li>
Processor: 3.5 GHz with 8 MB L3 cache (Intel Core i7-4770K)
</li>
<li>
Memory: 16 GB DDR3-1600 SDRAM (PC3-12800)
</li>
<li>
Hard drive: 1 TB SSD (Samsung 850 EVO)
</li>
<li>
Operating System: openSUSE 13.2
</li>
</ul>
<p>
(Note: The hardware was assembled in late 2013 for about US$1,600.)
</p>
<p>
For English Wikipedia, the entire process takes about 70 hours.
</p>
<h3>
<span class="mw-headline" id="Internet-connectivity_.28file-dump_mode_only.3B_optional.29">Internet-connectivity (file-dump mode only; optional)</span>
</h3>
<p>
You should have a broadband connection to the internet. The script will need to download dump files from Wikimedia, and some of them (like English Wikipedia's) run to tens of GB.
</p>
<p>
You can opt to download these files separately and place them in the appropriate location beforehand. However, the script below assumes that the machine is always online. If you are offline, you will need to comment out the "util.download" lines yourself, as in the example below.
</p>
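<p>
For example, to skip the wikitext download for simple.wikipedia.org because you have already placed the dump file in the expected location, comment out the corresponding line of the script (the comment text here is illustrative):
</p>
<pre class='code'>
// download skipped: pages-articles dump was downloaded and placed manually
// add ('simple.wikipedia.org' , 'util.download') {dump_type = 'pages-articles';}
</pre>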
<h3>
<span class="mw-headline" id="Pre-existing_image_databases_for_your_wiki_.28file-dump_mode_only.3B_optional.29">Pre-existing image databases for your wiki (file-dump mode only; optional)</span>
</h3>
<p>
XOWA will automatically re-use the images from existing image databases so that you do not have to redownload them. This is particularly useful for large wikis, where redownloading millions of images would be undesirable.
</p>
<p>
It is strongly advised that you download the image database for your wiki. You can find a full list here: <a href="http://xowa.sourceforge.net/image_dbs.html" rel="nofollow" class="external free">http://xowa.sourceforge.net/image_dbs.html</a>. Note that if an image database does not exist for your wiki, you can still proceed with the script. Where the databases belong depends on their version (the snippet after the list shows how the script selects between the two formats):
</p>
<ul>
<li>
If you have v1 image databases, they should be placed in <code>/xowa/file/wiki_domain-prv</code>. For example, English Wikipedia should have <code>/xowa/file/en.wikipedia.org-prv/fsdb.main/fsdb.bin.0000.sqlite3</code>
</li>
<li>
If you have v2 image databases, they should be placed in <code>/xowa/wiki/wiki_domain/prv</code>. For example, English Wikipedia should have <code>/xowa/wiki/en.wikipedia.org/prv/en.wikipedia.org-file-ns.000-db.001.xowa</code>
</li>
</ul>
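<p>
In the script below, the <code>file.fsdb_make</code> command selects between these two formats through the <code>src_bin_mgr__fsdb_version</code> option. The excerpt below is taken from the Script section; the added comments are my reading of the options, so treat them as guidance rather than official documentation:
</p>
<pre class='code'>
add ('simple.wikipedia.org' , 'file.fsdb_make') {
    commit_interval = 1000; progress_interval = 200; select_interval = 10000;
    ns_ids = '0|4|14|100';
    // 'v1': pre-existing image databases use the older .sqlite3 layout under /xowa/file/
    // 'v2': pre-existing image databases use the newer .xowa layout under /xowa/wiki/wiki_domain/prv/
    src_bin_mgr__fsdb_version = 'v2';
    src_bin_mgr__fsdb_skip_wkrs = 'page_gt_1|small_size';
    src_bin_mgr__wmf_enabled = 'y';
}
</pre>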
<h2>
<span class="mw-headline" id="gfs">gfs</span>
</h2>
<p>
The script is written in the <code>gfs</code> format. This is a custom scripting format unique to XOWA. It is similar to JSON, but also supports comments.
</p>
<p>
Unfortunately, the error-handling for gfs is quite minimal. When making changes, please make them in small steps and be prepared to revert to backups.
</p>
<p>
The following is a brief list of rules (a combined example follows the list):
</p>
<ul>
<li>
Single-line comments start with "//" and run to the end of the line ("\n"); multi-line comments are enclosed in "/*" and "*/". For example: <code>// single-line comment</code> or <code>/* multi-line comment */</code>
</li>
<li>
Booleans are "y" and "n" (yes / no or true / false). For example: <code>enabled = 'y';</code>
</li>
<li>
Numbers are 32-bit integers and are not enclosed in quotes. For example, <code>count = 10000;</code>
</li>
<li>
Strings are surrounded by apostrophes (') or quotes ("). For example: <code>key = 'val';</code>
</li>
<li>
Statements are terminated by a semi-colon (;). For example: <code>procedure1;</code>
</li>
<li>
Statements can take arguments in parentheses. For example: <code>procedure1('argument1', 'argument2', 'argument3');</code>
</li>
<li>
Statements are grouped with curly braces ({}). For example: <code>group {procedure1; procedure2; procedure3;}</code>
</li>
</ul>
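<p>
As a hedged illustration of how these rules combine (the group and procedure names below are placeholders, not real XOWA commands):
</p>
<pre class='code'>
// a single-line comment runs to the end of the line
group1 {
    enabled = 'y';                           // boolean
    count = 10000;                           // 32-bit integer
    key = 'val';                             // string
    procedure1;                              // statement without arguments
    procedure2('argument1', 'argument2');    /* statement with arguments */
}
</pre>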
<h2>
<span class="mw-headline" id="Terms_.28file-dump_mode_only.29">Terms (file-dump mode only)</span>
</h2>
<h3>
<span class="mw-headline" id="lnki">lnki</span>
</h3>
<p>
A <code>lnki</code> is short for "<b>l</b>i<b>nk</b> <b>i</b>nternal". It refers to all wikitext with the double bracket syntax: [[A]]. A more elaborate example for files would be [[File:A.png|thumb|200x300px|upright=.80]]. Note that the abbreviation was chosen to differentiate it from <code>lnke</code>, which is short for "<b>l</b>i<b>nk</b> <b>e</b>xternal". For the purposes of this script, all lnki data comes from the current wiki's data dump.
</p>
<h3>
<span class="mw-headline" id="orig">orig</span>
</h3>
<ul>
<li>
An <code>orig</code> is short for "<b>orig</b>inal file". It refers to the original file metadata. For the purposes of this script, all orig data comes from commons.wikimedia.org.
</li>
</ul>
<h3>
<span class="mw-headline" id="xfer">xfer</span>
</h3>
<ul>
<li>
An <code>xfer</code> is short for "transfer file". It refers to the actual file to be downloaded.
</li>
</ul>
<h3>
<span class="mw-headline" id="fsdb">fsdb</span>
</h3>
<ul>
<li>
The <code>fsdb</code> is short for "<b>f</b>ile <b>s</b>ystem <b>d</b>ata<b>b</b>ase". It refers to the internal table format of the XOWA image databases.
</li>
</ul>
<h2>
<span class="mw-headline" id="HTML_dump">HTML dump</span>
</h2>
<h3>
<span class="mw-headline" id="Plain-html_databases">Plain-html databases</span>
</h3>
<p>
The script in the Script section below generates pages that are gz-compressed and then further compressed with XOWA's secondary MediaWiki-specific compression (hzip). If you just want plain HTML pages to use in another application, you can substitute this command:
</p>
<pre>
hdump_bldr {enabled = 'y'; zip_tid_html = 'raw'; hzip_enabled = 'n'; hzip_diff = 'n';}
</pre>
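<p>
For context, that block replaces the <code>hdump_bldr</code> group inside the <code>file.lnki_temp</code> command of the script below. With plain-html output, the command would read:
</p>
<pre class='code'>
add ('simple.wikipedia.org' , 'file.lnki_temp') {
    commit_interval = 10000; progress_interval = 50; cleanup_interval = 50;
    select_size = 25;
    ns_ids = '0|4|14|100';
    // plain HTML: no gz compression and no secondary mediawiki-specific compression
    hdump_bldr {enabled = 'y'; zip_tid_html = 'raw'; hzip_enabled = 'n'; hzip_diff = 'n';}
}
</pre>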
<p>
After the build completes, you can open up any of the XOWA HTML databases and run the following SQL:
</p>
<pre>
SELECT * FROM html LIMIT 10;
</pre>
<h2>
<span class="mw-headline" id="Command-line">Command-line</span>
</h2>
<p>
Dump scripts require a lot of memory: you should have at least 8 GB, and preferably 16 GB. Use a command-line like the following:
</p>
<p>
<span class='path'>java -Xmx15000m -XX:+HeapDumpOnOutOfMemoryError -jar xowa_linux_64.jar --app_mode cmd --cmd_file make_wiki.gfs --show_license n --show_args n</span>
</p>
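<p>
If the machine has only 8 GB of RAM, lower the heap cap accordingly; for example (a rough, untested suggestion; tune <code>-Xmx</code> to your hardware):
</p>
<p>
<span class='path'>java -Xmx7000m -XX:+HeapDumpOnOutOfMemoryError -jar xowa_linux_64.jar --app_mode cmd --cmd_file make_wiki.gfs --show_license n --show_args n</span>
</p>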
<h2>
<span class="mw-headline" id="Script">Script</span>
</h2>
<pre class='code'>
app.bldr.pause_at_end_('n');
app.scripts.run_file_by_type('xowa_cfg_app');
app.bldr.cmds {
// build commons database; this only needs to be done once, whenever commons is updated
add ('commons.wikimedia.org' , 'util.cleanup') {delete_all = 'y';}
add ('commons.wikimedia.org' , 'util.download') {dump_type = 'pages-articles';}
add ('commons.wikimedia.org' , 'util.download') {dump_type = 'categorylinks';}
add ('commons.wikimedia.org' , 'util.download') {dump_type = 'page_props';}
add ('commons.wikimedia.org' , 'util.download') {dump_type = 'image';}
add ('commons.wikimedia.org' , 'text.init');
add ('commons.wikimedia.org' , 'text.page');
add ('commons.wikimedia.org' , 'text.cat.core');
add ('commons.wikimedia.org' , 'text.cat.link');
add ('commons.wikimedia.org' , 'text.cat.hidden');
add ('commons.wikimedia.org' , 'text.term');
add ('commons.wikimedia.org' , 'text.css');
add ('commons.wikimedia.org' , 'wiki.image');
add ('commons.wikimedia.org' , 'file.page_regy') {build_commons = 'y';}
add ('commons.wikimedia.org' , 'wiki.page_dump.make');
add ('commons.wikimedia.org' , 'wiki.redirect') {commit_interval = 1000; progress_interval = 100; cleanup_interval = 100;}
add ('commons.wikimedia.org' , 'util.cleanup') {delete_tmp = 'y'; delete_by_match('*.xml|*.sql|*.bz2|*.gz');}
// build wikidata database; this only needs to be done once, whenever wikidata is updated
add ('www.wikidata.org' , 'util.cleanup') {delete_all = 'y';}
add ('www.wikidata.org' , 'util.download') {dump_type = 'pages-articles';}
add ('www.wikidata.org' , 'util.download') {dump_type = 'categorylinks';}
add ('www.wikidata.org' , 'util.download') {dump_type = 'page_props';}
add ('www.wikidata.org' , 'util.download') {dump_type = 'image';}
add ('www.wikidata.org' , 'text.init');
add ('www.wikidata.org' , 'text.page');
add ('www.wikidata.org' , 'text.cat.core');
add ('www.wikidata.org' , 'text.cat.link');
add ('www.wikidata.org' , 'text.cat.hidden');
add ('www.wikidata.org' , 'text.term');
add ('www.wikidata.org' , 'text.css');
add ('www.wikidata.org' , 'util.cleanup') {delete_tmp = 'y'; delete_by_match('*.xml|*.sql|*.bz2|*.gz');}
// build simple.wikipedia.org
// NOTE!: deletes all files in /xowa/wiki/simple.wikipedia.org
add ('simple.wikipedia.org' , 'util.cleanup') {delete_all = 'y';}
// download wikitext dump from http://dumps.wikimedia.org/backup-index.html
add ('simple.wikipedia.org' , 'util.download') {dump_type = 'pages-articles';}
// download category dump from http://dumps.wikimedia.org/backup-index.html
add ('simple.wikipedia.org' , 'util.download') {dump_type = 'categorylinks';}
// download page_props dump from http://dumps.wikimedia.org/backup-index.html (needed for hidden categories)
add ('simple.wikipedia.org' , 'util.download') {dump_type = 'page_props';}
// download image dump from http://dumps.wikimedia.org/backup-index.html
add ('simple.wikipedia.org' , 'util.download') {dump_type = 'image';}
// initial step to create stub databases for wikitext
add ('simple.wikipedia.org' , 'text.init');
// calculate redirect_id for #REDIRECT pages. needed for html databases
add ('simple.wikipedia.org' , 'text.page') {redirect_id_enabled = 'y';}
// generates title-search database
add ('simple.wikipedia.org' , 'text.search');
// generates desktop css
add ('simple.wikipedia.org' , 'text.css');
// generates main category database
add ('simple.wikipedia.org' , 'text.cat.core');
// generates category-to-page databases
add ('simple.wikipedia.org' , 'text.cat.link');
// identifies hidden categories
add ('simple.wikipedia.org' , 'text.cat.hidden');
// performs final steps for wikitext databases
add ('simple.wikipedia.org' , 'text.term');
// create local "page" tables in each "text" database for "lnki_temp"
add ('simple.wikipedia.org' , 'wiki.page_dump.make');
// create a redirect table for pages in the File namespace
add ('simple.wikipedia.org' , 'wiki.redirect') {commit_interval = 1000; progress_interval = 100; cleanup_interval = 100;}
// create an "image" table to get the metadata for all files in the current wiki
add ('simple.wikipedia.org' , 'wiki.image');
// NOTE!: deletes all downloaded bz2 / gz / xml / sql files
add ('simple.wikipedia.org' , 'util.cleanup') {delete_tmp = 'y'; delete_by_match('*.xml|*.sql|*.bz2|*.gz');}
// parse every page in the listed namespace and gather data on their lnkis.
// this step will take the longest amount of time.
add ('simple.wikipedia.org' , 'file.lnki_temp') {
// save data every # of pages
commit_interval = 10000;
// print progress to command-line shell every # of pages
progress_interval = 50;
// free memory by flushing internal caches every # of pages
cleanup_interval = 50;
// specify # of pages to read into memory at a time, where # is in MB. For example, 25 means read approximately 25 MB of page text into memory
select_size = 25;
// namespaces to parse. See en.wikipedia.org/wiki/Help:Namespaces
ns_ids = '0|4|14|100';
// generate html-dump databases
hdump_bldr {
// enable / disable html-dump generation
enabled = 'y';
// 'raw' : no compression; stores in plain text
// 'gz' : compresses to gz
// 'bzip2': compresses to bz2
zip_tid_html = 'gz';
// 'y': does secondary mediawiki-specific compression to make databases even smaller (about 30%)
// 'n': does not do secondary compression
hzip_enabled = 'y';
// post-processing check to make sure XOWA-compression format decompresses back to original format
hzip_diff = 'y';
}
}
// aggregate the lnkis
add ('simple.wikipedia.org' , 'file.lnki_regy');
// generate orig metadata for files in the current wiki (for example, for pages in en.wikipedia.org/wiki/File:*)
add ('simple.wikipedia.org' , 'file.page_regy') {build_commons = 'n';}
// generate all orig metadata for all lnkis
add ('simple.wikipedia.org' , 'file.orig_regy');
// generate list of files to download based on "orig_regy" and XOWA image code
add ('simple.wikipedia.org' , 'file.xfer_temp.thumb');
// aggregate list one more time
add ('simple.wikipedia.org' , 'file.xfer_regy');
// identify images that have already been downloaded
add ('simple.wikipedia.org' , 'file.xfer_regy_update');
// download images. This step may also take a long time, depending on how many images are needed
add ('simple.wikipedia.org' , 'file.fsdb_make') {
commit_interval = 1000; progress_interval = 200; select_interval = 10000;
ns_ids = '0|4|14|100';
// specify whether original wiki databases are v1 (.sqlite3) or v2 (.xowa)
src_bin_mgr__fsdb_version = 'v2';
// always redownload certain files
src_bin_mgr__fsdb_skip_wkrs = 'page_gt_1|small_size';
// allow downloads from wikimedia
src_bin_mgr__wmf_enabled = 'y';
}
// generate registry of original metadata by file title
add ('simple.wikipedia.org' , 'file.orig_reg');
// drop page_dump tables
add ('simple.wikipedia.org' , 'wiki.page_dump.drop');
}
app.bldr.run;
</pre>
</div>
</div>
</div>
<div id="mw-head" class="noprint">
<div id="left-navigation">
<div id="p-namespaces" class="vectorTabs">
<h3>Namespaces</h3>
<ul>
<li id="ca-nstab-main" class="selected"><span><a id="ca-nstab-main-href" href="index.html">Page</a></span></li>
</ul>
</div>
</div>
</div>
<div id='mw-panel' class='noprint'>
<div id='p-logo'>
<a style="background-image: url(/xowa/wiki/home/page/xowa_logo.png);" href="https://gnosygnu.github.io/xowa/" title="Visit the main page"></a>
</div>
<div class="portal" id='xowa-portal-home'>
<h3>XOWA</h3>
<div class="body">
<ul>
<li><a href="https://gnosygnu.github.io/xowa/" title='Visit the main page'>Main page</a></li>
<li><a href="https://gnosygnu.github.io/xowa/blog.html" title='Follow XOWA''s development process'>Blog</a></li>
<li><a href="https://gnosygnu.github.io/xowa/screenshots.html" title='See screenshots of XOWA'>Screenshots</a></li>
<li><a href="https://gnosygnu.github.io/xowa/download.html" title='Download the XOWA application'>Download XOWA</a></li>
<li><a href="/xowa/wiki/home/page/Dashboard/Image_databases" title='Download offline wikis and image databases'>Download wikis</a></li>
<li><a href="https://gnosygnu.github.io/xowa/reviews.html" title='Read what others have written about XOWA'>Media</a></li>
<li><a href="/xowa/wiki/home/page/Help/About.html" title='Get more information about XOWA'>About</a></li>
</ul>
</div>
</div>
<div class="portal" id='xowa-portal-help'>
<h3>Help</h3>
<div class="body">
<ul>
<li><a href="/xowa/wiki/home/page/App/Setup/System_requirements.html" title='Get XOWA&apos;s system requirements'>Requirements</a></li>
<li><a href="/xowa/wiki/home/page/App/Setup/Installation.html" title='Get instructions for installing XOWA'>Installation</a></li>
<li><a href="/xowa/wiki/home/page/App/Import/Simple_Wikipedia.html" title='Learn how to set up Simple Wikipedia'>Set up Simple Wikipedia</a></li>
<li><a href="/xowa/wiki/home/page/App/Import/English_Wikipedia.html" title='Learn how to set up English Wikipedia'>Set up English Wikipedia</a></li>
<li><a href="/xowa/wiki/home/page/App/Import/Other_wikis.html" title='Learn how to set up Other Wikipedias'>Set up Other Wikipedias</a></li>
<li><a href="/xowa/wiki/home/page/Help/Feedback.html" title='Questions? Comments? Leave feedback for XOWA'>Feedback</a></li>
<li><a href="/xowa/wiki/home/page/Help/Contents.html" title='View a list of help topics'>Contents</a></li>
</ul>
</div>
</div>
<div class="portal" id='xowa-portal-links'>
<h3>Links</h3>
<div class="body">
<ul>
<li><a href="http://dumps.wikimedia.org/backup-index.html" title="Get wiki datababase dumps directly from Wikimedia">Wikimedia dumps</a></li>
<li><a href="https://archive.org/search.php?query=xowa" title="Search archive.org for XOWA files">XOWA @ archive.org</a></li>
<li><a href="http://en.wikipedia.org" title="Visit Wikipedia (and compare to XOWA!)">English Wikipedia</a></li>
</ul>
</div>
</div>
<div class="portal" id='xowa-portal-donate'>
<h3>Donate</h3>
<div class="body">
<ul>
<li><a href="https://archive.org/donate/index.php" title="Support archive.org!">archive.org</a></li><!-- listed first due to recent fire damages: http://blog.archive.org/2013/11/06/scanning-center-fire-please-help-rebuild/ -->
<li><a href="https://donate.wikimedia.org/wiki/Special:FundraiserRedirector" title="Support Wikipedia!">Wikipedia</a></li>
<!-- <li><a href="" title="Support XOWA! (but only after you've supported archive.org and Wikipedia)">XOWA</a></li> -->
</ul>
</div>
</div>
</div>
</body>
</html>