Review the storage format options at <a href="http://xowa.org/home/wiki/Options/Import.html" id="xolnki_3" title="Options/Import" class="xowa-visited">Options/Import</a>
Choose whether XOWA should decompress the dump file or read directly from the compressed dump (which is slower).
<ul>
<li>
<b>decompress the dump file</b>: This will be faster, but it will need additional disk space. For example, a 10 GB .bz2 file will create another 25 GB .xml file. Note that the additional .xml file is deleted when the import completes.
</li>
<li>
<b>read from compressed dump</b>: This will be slower, but it will not use additional disk space. For example, a 2-hour import for "decompress first" may take 2.5 hours for "read from compressed dump". Note that this slowness only affects import; the generated wiki will perform the same as a "decompress first" import.
</li>
<li>
Choose a version of the category system:
<ul>
<li>
version 1: imports faster; uses less disk space; has less functionality than MediaWiki
</li>
<li>
version 2: imports slower; uses more disk space; has the same functionality as MediaWiki.
</li>
</ul>
</li>
</ul>
</li>
<li>
Click "Import now". A message box will pop up when the wiki is installed.
</li>
</ul>
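<p>As a rough sketch, the two storage options above correspond to the following standalone bzip2 commands (a tiny stand-in file is used here in place of a real multi-GB dump; all file names are examples):</p>

```shell
# Create a small stand-in for a wiki dump and compress it.
printf '<page>example</page>\n' > sample.xml
bzip2 -kf sample.xml                 # makes sample.xml.bz2, keeps sample.xml

# Option 1: "decompress the dump file" -- fast, but the .xml needs
# extra disk space next to the .bz2 until the import finishes.
bzip2 -dkf sample.xml.bz2            # recreates sample.xml

# Option 2: "read from compressed dump" -- no extra disk space;
# the data is streamed and decompressed on the fly, which is slower.
bzcat sample.xml.bz2
```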
<p>
Other notes:
</p>
<ul>
<li>
If you choose to decompress the dump file:
<ul>
<li>
The uncompressed file will be stored in the wiki directory. For example, if you've chosen simple.wikipedia.org, the path will be /xowa/wiki/simple.wikipedia.org/simplewiki-latest-pages-articles.xml
</li>
<li>
The uncompressed dump file will be automatically deleted when the process completes
</li>
</ul>
</li>
<li>
The dump file (.bz2) will not be deleted
<ul>
<li>
If you are downloading the dump file, it will be automatically moved to /xowa/wiki/#dump/done/ when completed
</li>
<li>
If you are importing from a dump file, the dump file will remain in the same location
</li>
</ul>
</li>
<li>
If you want to remove a wiki, see <a href="http://xowa.org/home/wiki/App/Import/Removal.html" id="xolnki_4" title="App/Import/Removal">App/Import/Removal</a>
</li>
</ul>
<h3>
<span class="mw-headline" id="Without_internet_connection">Without internet connection</span>
</h3>
<div class="mw-collapsible mw-collapsed">
<p>
Follow these instructions if you don't have an internet connection or if you have other reasons not to use the automatic import.
</p>
<h4>
<span class="mw-headline" id="Getting_the_files">Getting the files</span>
</h4>
<ul>
<li>
Go to <a href="http://dumps.wikimedia.org/" rel="nofollow" class="external free">http://dumps.wikimedia.org/</a> and search for your wiki's directory. For example, Simple Wikipedia is <a href="http://dumps.wikimedia.org/simplewiki/" rel="nofollow" class="external free">http://dumps.wikimedia.org/simplewiki/</a>
<ul>
<li>
Download the file named <code><i>wiki</i>-<i>date</i>-pages-articles.xml.bz2</code>.
</li>
<li>
(version 2 category system) Download two additional files
</li>
</ul>
</li>
<li>
Place the files in one of two locations:
<ul>
<li>
<code>/xowa/wiki/<i>wiki_domain</i>/</code> where <i>wiki_domain</i> should be replaced with the name of the wiki. For example, if you're using Simple Wikipedia then the files should be placed in <code>/xowa/wiki/<i>simple.wikipedia.org</i>/</code>
<ul>
<li>
This is the recommended location. XOWA will automatically detect all files placed in the wiki directory.
</li>
</ul>
</li>
<li>
<code>/anywhere_else_on_your_system/</code>
<ul>
<li>
If you don't have enough disk space on the disk where XOWA is installed, you can place the dump file in a directory on any other disk. You will need to select the file during the import process.
</li>
<li>
Note that only the wiki dump file can be placed anywhere. The category files must be in <code>/xowa/wiki/<i>wiki_domain</i>/</code>
</li>
</ul>
</li>
</ul>
</li>
<li>
Decompress the dump files manually if you want to (it may be slightly faster than having XOWA do it)
</li>
</ul>
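<p>For example, the download-and-place steps for Simple Wikipedia can be done from a shell as follows (a sketch: the XOWA root path and the wget line are assumptions, and a placeholder file stands in for the real multi-GB download):</p>

```shell
DUMP=simplewiki-latest-pages-articles.xml.bz2
XOWA_ROOT=./xowa                    # assumption: XOWA's install directory

# Real download (commented out here); "latest" can be used as the date:
# wget "http://dumps.wikimedia.org/simplewiki/latest/$DUMP"
touch "$DUMP"                       # placeholder standing in for the download

# Recommended location: the wiki directory, so XOWA auto-detects the file.
mkdir -p "$XOWA_ROOT/wiki/simple.wikipedia.org"
mv "$DUMP" "$XOWA_ROOT/wiki/simple.wikipedia.org/"
```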
<h4>
<span class="mw-headline" id="Running_the_import">Running the import</span>
</h4>
<ul>
<li>
Review the storage format options at <a href="http://xowa.org/home/wiki/Options/Import.html" id="xolnki_5" title="Options/Import" class="xowa-visited">Options/Import</a>
<ul>
<li>
If you've placed the wiki dump file in another location, you will need to select "read from file" and enter the location of that file.
</li>
</ul>
</li>
<li>
Choose whether XOWA should decompress the dump file (this needs additional disk space) or read directly from the compressed dump (this is slower).
<ul>
<li>
Note that this option is ignored if you have already decompressed the files manually.
</li>
</ul>
</li>
<li>
Choose a version of the category system:
<ul>
<li>
version 1: imports faster; uses less disk space; has less functionality than MediaWiki
</li>
<li>
version 2: imports slower; uses more disk space; has the same functionality as MediaWiki.
</li>
</ul>
</li>
<li>
Click "Import now". A message box will pop up when the wiki is installed.
</li>
<li>
Download the logo for the wiki whenever you are next online. The logo should be placed in <code>xowa/user/anonymous/wiki/<i>wiki</i>/html/logo.png</code>.
</li>
</ul>
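<p>The logo step above can be sketched as shell commands (Simple Wikipedia as the example; a placeholder file stands in for the logo you download while online, and "anonymous" is the default user profile):</p>

```shell
# Placeholder standing in for the logo downloaded while online.
printf 'not-a-real-png' > logo.png

# Place it where XOWA looks for the wiki's logo (path from the text above).
mkdir -p xowa/user/anonymous/wiki/simple.wikipedia.org/html
cp logo.png xowa/user/anonymous/wiki/simple.wikipedia.org/html/logo.png
```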
<p>
Other notes:
</p>
<ul>
<li>
If you choose to automatically decompress the dump file:
<ul>
<li>
The uncompressed file will be stored in the wiki directory. For example, if you've chosen simple.wikipedia.org, the path will be /xowa/wiki/simple.wikipedia.org/simplewiki-latest-pages-articles.xml
</li>
<li>
The uncompressed dump file will be automatically deleted when the process completes
</li>
</ul>
</li>
<li>
The dump file (.bz2) will not be deleted
<ul>
<li>
If you are downloading the dump file, it will be automatically moved to /xowa/wiki/#dump/done/ when completed
</li>
<li>
If you are importing from a dump file, the dump file will remain in the same location
</li>
</ul>
</li>
</ul>
<p>
Instead of running the import now, you can choose to generate a script and run it manually later. If you are interested in a script-based import, do the following:
</p>
<ul>
<li>
Make the same choices as you did above for "With internet connection" or "Without internet connection"
</li>
<li>
Click "Generate script" instead of "Import now". The script will now appear in the text box.
</li>
<li>
Repeat as necessary to combine other wikis / commands into the script
</li>
<li>
Run the script by clicking "Run script"
<ul>
<li>
Alternatively, you can copy it to <code>/xowa/xowa_build.gfs</code> and run <code>java -Xmx256m -jar xowa_windows.jar --cmd_file xowa_build.gfs --app_mode cmd</code>. (xowa_windows.jar is used here as the example; Linux users should substitute xowa_linux.jar and Mac users xowa_macosx.jar.)
</li>
</ul>
</li>
</ul>
<ul>
<li>
<ul>
<li>
<span class="reference-text">V1 is a category system built from just the wikitext in "pages-articles". It is quick and does not take up much space, but it will be incomplete.</span>
</li>
<li>
<span class="reference-text">V2 is a category system that reproduces the exact category system in Wikipedia. However, it requires additional files ("categorylinks") and takes up more space (as much as 10 GB for English Wikipedia).</span>
</li>
</ul>
<span class="reference-text">For more info, see <a href="http://xowa.org/home/wiki/App/Category.html" id="xolnki_7" title="App/Category">App/Category</a></span>
</li>
<li id="cite_note-import_now-1">
<span class="mw-cite-backlink"><a href="#cite_ref-import_now_1-0">^</a></span><span class="reference-text">Use this button to import the selected wiki now.</span>
</li>
<li id="cite_note-generate_and_run-2">
<span class="mw-cite-backlink"><a href="#cite_ref-generate_and_run_2-0">^</a></span><span class="reference-text">Use these buttons to...</span>
<ul>
<li>
<span class="reference-text">Generate the script</span>
</li>
<li>
<span class="reference-text">Edit it by hand, or copy it for later reference</span>
</li>
<li>
<span class="reference-text">Run the script when ready</span>
</li>
</ul>
</li>
</ul>
<div class="portal" id="xowa-portal-links">
<h3>Links</h3>
<div class="body">
<ul>
<li><a href="http://dumps.wikimedia.org/backup-index.html" title="Get wiki database dumps directly from Wikimedia">Wikimedia dumps</a></li>
<li><a href="https://archive.org/search.php?query=xowa" title="Search archive.org for XOWA files">XOWA @ archive.org</a></li>
<li><a href="http://en.wikipedia.org" title="Visit Wikipedia (and compare to XOWA!)">English Wikipedia</a></li>
</ul>
</div>
</div>
<div class="portal" id='xowa-portal-donate'>
<h3>Donate</h3>
<div class="body">
<ul>
<li><a href="https://archive.org/donate/index.php" title="Support archive.org!">archive.org</a></li><!-- listed first due to recent fire damages: http://blog.archive.org/2013/11/06/scanning-center-fire-please-help-rebuild/ -->