Dev/Command-line/Dumps

From XOWA: the free, open-source, offline wiki application

XOWA can generate two types of dumps: file-dumps and html-dumps.


Overview

The download-thumbs script downloads all thumbs for pages in a specific wiki. It works in the following way:

  • It loads a page.
  • It converts the wikitext to HTML:
    • If thumb mode is enabled, it compiles a list of [[File]] links.
    • If HTML-dump mode is enabled, it saves the HTML into the XOWA html databases.
  • It repeats until there are no more pages.
  • If thumb mode is enabled, it performs the following additional steps:
    • It analyzes the list of [[File]] links to come up with a unique list of thumbs.
    • It downloads the thumbs and creates the XOWA file databases.
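
Both modes run through the same build pipeline. As a preview, the minimal excerpt below (taken from the full script later on this page) shows where each mode is switched on:

app.bldr.cmds {
  // parses every page and gathers its [[File]] links (thumb mode)
  add ('simple.wikipedia.org' , 'file.lnki_temp') {
    // hdump_bldr switches on HTML-dump mode as well
    hdump_bldr {
      enabled = 'y';
    }
  }
}
app.bldr.run;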

The full script for Simple Wikipedia is listed in the Script section below.

Requirements

commons.wikimedia.org

You will need the latest version of commons.wikimedia.org. If you have an older version, images will be missing or have incorrect size information.

For example, if you have a commons.wikimedia.org dump from 2015-04-22 and are trying to import a 2015-05-17 English Wikipedia, then any images added after 2015-04-22 will not be picked up.

www.wikidata.org

You also need the latest version of www.wikidata.org. Note that English Wikipedia and other wikis use Wikidata through the {{#property}} call or Module code. If you have an earlier version, data will be missing or out of date.
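
For example, a template or infobox may embed a call like {{#property:P36}} to display an item's capital (P36, Wikidata's "capital" property, is used here purely as an illustration); with a stale copy of www.wikidata.org, such calls resolve to outdated or empty values.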

Hardware

You should have a recent-generation machine with relatively high-performance hardware, especially if you're planning to generate images for English Wikipedia.

For context, here is my current machine setup for generating the image dumps:

  • Processor: Intel Core i7-4770K; 3.5 GHz with 8 MB L3 cache
  • Memory: 16 GB DDR3-1600 SDRAM (PC3-12800)
  • Hard Drive: 1TB SSD
  • Operating System: openSUSE 13.2

(Note: The hardware was assembled in late 2013.)

Even on this machine, the entire process for English Wikipedia takes about 50 hours.

Internet connectivity (optional)

You should have a broadband connection to the internet. The script needs to download dump files from Wikimedia, and some of them (such as English Wikipedia's) are tens of gigabytes.

You can opt to download these files separately and place them in the appropriate locations beforehand. However, the script below assumes that the machine is always online. If you are offline, you will need to comment out the "util.download" lines yourself, as shown below.
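
For example, using the gfs comment syntax described below, a single download step from the full script can be disabled like this:

// add     ('simple.wikipedia.org' , 'util.download')         {dump_type = 'pages-articles';}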

Pre-existing image databases for your wiki (optional)

XOWA will automatically re-use the images from existing image databases so that you do not have to redownload them. This is particularly useful for large wikis, where redownloading millions of images would be wasteful.

It is strongly advised that you download the image databases for your wiki. You can find a full list here: http://xowa.sourceforge.net/image_dbs.html. Note that if an image database does not exist for your wiki, you can still proceed with the script.

  • If you have v1 image databases, they should be placed in /xowa/file/wiki_domain-prv. For example, English Wikipedia should have /xowa/file/en.wikipedia.org-prv/fsdb.main/fsdb.bin.0000.sqlite3
  • If you have v2 image databases, they should be placed in /xowa/wiki/wiki_domain/prv. For example, English Wikipedia should have /xowa/wiki/en.wikipedia.org/prv/en.wikipedia.org-file-ns.000-db.001.xowa (see the configuration sketch after this list)
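
Whichever version you have, the file.fsdb_make step in the script below must be told which format to re-use. A minimal sketch, using the key and values that appear in the full script:

add ('simple.wikipedia.org' , 'file.fsdb_make') {
  // 'v1' re-uses .sqlite3 image databases; 'v2' re-uses .xowa ones
  src_bin_mgr__fsdb_version = 'v1';
}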

gfs

The script is written in the gfs format, a custom scripting format specific to XOWA. It is similar to JSON, but also supports comments.

Unfortunately, error-handling for gfs is quite minimal. When making changes, make them in small steps and be prepared to fall back to backups.

The following is a brief list of rules (a combined example follows the list):

  • Comments are either single-line, starting with "//" and running to the end of the line, or multi-line, enclosed in "/*" and "*/". For example: // single-line comment or /* multi-line comment */
  • Booleans are "y" and "n" (yes / no or true / false). For example: enabled = 'y';
  • Numbers are 32-bit integers and are not enclosed in quotes. For example, count = 10000;
  • Strings are surrounded by apostrophes (') or quotes ("). For example: key = 'val';
  • Statements are terminated by a semi-colon (;). For example: procedure1;
  • Statements can take arguments in parentheses. For example: procedure1('argument1', 'argument2', 'argument3');
  • Statements are grouped with curly braces ({}). For example: group {procedure1; procedure2; procedure3;}
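
Putting these rules together, here is a small illustrative fragment (the identifiers are hypothetical; only the syntax matters):

/* a hypothetical group illustrating the rules above */
group {
  enabled = 'y';               // boolean: y = yes/true
  count = 10000;               // 32-bit integer, unquoted
  key = 'val';                 // string in apostrophes
  procedure1('argument1');     // statement taking one argument
}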

Terms

lnki

A lnki is short for "link internal". It refers to all wikitext with the double-bracket syntax: [[A]]. A more elaborate example for files would be [[File:A.png|thumb|200x300px|upright=.80]]. Note that the abbreviation was chosen to differentiate it from lnke, which is short for "link external". For the purposes of the script, all lnki data comes from the current wiki's data dump.

orig

An orig is short for "original file". It refers to the original file metadata. For the purposes of this script, all orig data comes from commons.wikimedia.org.

xfer

An xfer is short for "transfer file". It refers to the actual file to be downloaded.

fsdb

The fsdb is short for "file system database". It refers to the internal table format of the XOWA image databases.


Script

app.bldr.pause_at_end_('n');
app.scripts.run_file_by_type('xowa_cfg_app');
app.bldr.cmds {
  // build commons database; this only needs to be done once, whenever commons is updated
  add     ('commons.wikimedia.org' , 'util.cleanup')          {delete_all = 'y';}
  add     ('commons.wikimedia.org' , 'util.download')         {dump_type = 'pages-articles';}
  add     ('commons.wikimedia.org' , 'util.download')         {dump_type = 'categorylinks';}
  add     ('commons.wikimedia.org' , 'util.download')         {dump_type = 'page_props';}
  add     ('commons.wikimedia.org' , 'util.download')         {dump_type = 'image';}
  add     ('commons.wikimedia.org' , 'text.init');
  add     ('commons.wikimedia.org' , 'text.page');
  add     ('commons.wikimedia.org' , 'text.cat.core');
  add     ('commons.wikimedia.org' , 'text.cat.link');
  add     ('commons.wikimedia.org' , 'text.cat.hidden');
  add     ('commons.wikimedia.org' , 'text.term');
  add     ('commons.wikimedia.org' , 'text.css');
  add     ('commons.wikimedia.org' , 'wiki.image');
  add     ('commons.wikimedia.org' , 'file.page_regy')        {build_commons = 'y';}
  add     ('commons.wikimedia.org' , 'wiki.page_dump.make');
  add     ('commons.wikimedia.org' , 'wiki.redirect')         {commit_interval = 1000; progress_interval = 100; cleanup_interval = 100;}
  add     ('commons.wikimedia.org' , 'util.cleanup')          {delete_tmp = 'y'; delete_by_match('*.xml|*.sql|*.bz2|*.gz');}

  // build wikidata database; this only needs to be done once, whenever wikidata is updated
  add     ('www.wikidata.org' , 'util.cleanup')          {delete_all = 'y';}
  add     ('www.wikidata.org' , 'util.download')         {dump_type = 'pages-articles';}
  add     ('www.wikidata.org' , 'util.download')         {dump_type = 'categorylinks';}
  add     ('www.wikidata.org' , 'util.download')         {dump_type = 'page_props';}
  add     ('www.wikidata.org' , 'util.download')         {dump_type = 'image';}
  add     ('www.wikidata.org' , 'text.init');
  add     ('www.wikidata.org' , 'text.page');
  add     ('www.wikidata.org' , 'text.cat.core');
  add     ('www.wikidata.org' , 'text.cat.link');
  add     ('www.wikidata.org' , 'text.cat.hidden');
  add     ('www.wikidata.org' , 'text.term');
  add     ('www.wikidata.org' , 'text.css');
  add     ('www.wikidata.org' , 'util.cleanup')          {delete_tmp = 'y'; delete_by_match('*.xml|*.sql|*.bz2|*.gz');}

  // build simple.wikipedia.org
  add     ('simple.wikipedia.org' , 'util.cleanup')          {delete_all = 'y';}
  add     ('simple.wikipedia.org' , 'util.download')         {dump_type = 'pages-articles';}
  add     ('simple.wikipedia.org' , 'util.download')         {dump_type = 'categorylinks';}
  add     ('simple.wikipedia.org' , 'util.download')         {dump_type = 'page_props';}
  add     ('simple.wikipedia.org' , 'util.download')         {dump_type = 'image';}
  add     ('simple.wikipedia.org' , 'util.download')         {dump_type = 'pagelinks';}
  add     ('simple.wikipedia.org' , 'text.init');
  add     ('simple.wikipedia.org' , 'text.page') {
    // calculate redirect_id for #REDIRECT pages; needed for html databases
    redirect_id_enabled = 'y';
  }
  add     ('simple.wikipedia.org' , 'text.search');

  // upload desktop css
  add     ('simple.wikipedia.org' , 'text.css');

  // upload mobile css
  add     ('simple.wikipedia.org' , 'text.css') {css_key = 'xowa.mobile'; /* css_dir = 'C:\xowa\user\anonymous\wiki\simple.wikipedia.org-mobile\html\'; */}

  add     ('simple.wikipedia.org' , 'text.cat.core');
  add     ('simple.wikipedia.org' , 'text.cat.link');
  add     ('simple.wikipedia.org' , 'text.cat.hidden');
  add     ('simple.wikipedia.org' , 'text.term');
  
  // create local "page" tables in each "text" database for "lnki_temp"
  add     ('simple.wikipedia.org' , 'wiki.page_dump.make');
  
  // create a redirect table for pages in the File namespace
  add     ('simple.wikipedia.org' , 'wiki.redirect')         {commit_interval = 1000; progress_interval = 100; cleanup_interval = 100;}
  
  // create an "image" table to get the metadata for all files in the current wiki
  add     ('simple.wikipedia.org' , 'wiki.image');

  // parse all page-to-page links
  add     ('simple.wikipedia.org' , 'wiki.page_link');

  // calculate a score for each page using the page-to-page links
  add     ('simple.wikipedia.org' , 'search.page__page_score') {iteration_max = 100;}

  // update link score statistics for the search tables
  add     ('simple.wikipedia.org' , 'search.link__link_score') {page_rank_enabled = 'y';}

  // update word count statistics for the search_word table
  add     ('simple.wikipedia.org' , 'search.word__link_count');

  // cleanup all downloaded files as well as temporary files
  add     ('simple.wikipedia.org' , 'util.cleanup')          {delete_tmp = 'y'; delete_by_match('*.xml|*.sql|*.bz2|*.gz');}
  
  // parse every page in the listed namespaces and gather data on their lnkis.
  // this step will take the longest amount of time.
  add     ('simple.wikipedia.org' , 'file.lnki_temp') {
    // save data every # of pages
    commit_interval = 10000; 

    // update progress every # of pages
    progress_interval = 50;

    // free memory by flushing internal caches every # of pages
    cleanup_interval = 50;

    // specify how much page text to read into memory at a time, in MB. For example, 25 means read approximately 25 MB of page text into memory
    select_size = 25;

    // namespaces to parse. See en.wikipedia.org/wiki/Help:Namespaces
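    // here: 0 = Main, 4 = Project, 14 = Category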
    ns_ids = '0|4|14';


    // enable generation of ".html" databases. This will increase processing time by 20% - 25%
    hdump_bldr {
      // generate html databases
      enabled = 'y';

      // compression method for html: 1=none; 2=zip; 3=gz; 4=bz2
      zip_tid = 3;

      // enable additional custom compression
      hzip_enabled = 'y';

      // perform extra validation step of custom compression
      hzip_diff = 'y';
    }
  }
  
  // aggregate the lnkis
  add     ('simple.wikipedia.org' , 'file.lnki_regy');
  
  // generate orig metadata for files in the current wiki (for example, for pages in en.wikipedia.org/wiki/File:*)
  add     ('simple.wikipedia.org' , 'file.page_regy')        {build_commons = 'n';}
  
  // generate all orig metadata for all lnkis
  add     ('simple.wikipedia.org' , 'file.orig_regy');
  
  // generate list of files to download based on "orig_regy" and XOWA image code
  add     ('simple.wikipedia.org' , 'file.xfer_temp.thumb');
  
  // aggregate list one more time
  add     ('simple.wikipedia.org' , 'file.xfer_regy');

  // identify images that have already been downloaded
  add     ('simple.wikipedia.org' , 'file.xfer_regy_update');
  
  // download images. This step may also take a long time, depending on how many images are needed
  add     ('simple.wikipedia.org' , 'file.fsdb_make') {
    commit_interval = 1000; progress_interval = 200; select_interval = 10000;
    ns_ids = '0|4|14';
    
    // specify whether the pre-existing image databases are v1 (.sqlite3) or v2 (.xowa)
    src_bin_mgr__fsdb_version = 'v1';
    
    // always redownload certain files
    src_bin_mgr__fsdb_skip_wkrs = 'page_gt_1|small_size';
    
    // allow downloads from wikimedia
    src_bin_mgr__wmf_enabled = 'y';
  }
  
  // generate registry of original metadata by file title
  add     ('simple.wikipedia.org' , 'file.orig_reg');
  
  // drop page_dump tables
  add     ('simple.wikipedia.org' , 'wiki.page_dump.drop');
}
app.bldr.run;
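
To execute the script, save it to a .gfs file and launch XOWA in command-line mode. The invocation below is a sketch, not a definitive command: the jar name assumes the 64-bit Linux package, the file name is arbitrary, and the --app_mode / --cmd_file flags should be checked against your version's command-line help:

java -Xmx8g -jar xowa_linux_64.jar --app_mode cmd --cmd_file make_dumps.gfs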
