|
|
|
|
Version 1.1 fixes a few bugs, the major one being the inability to import the 20
|
|
|
|
|
|
|
|
|
|
Because changes have been made to the search schema and the search indexer has been upgraded (to Solr 4.5), all data will need to be re-indexed. Therefore, there is no upgrade path; follow the instructions below to set up Stackdump again. It is recommended to install this new version in a new directory instead of overwriting the existing one.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
h2. Changes and upgrading from v1.1 to v1.2
|
|
|
|
|
|
|
|
|
|
The major change in the v1.2 release is a set of improvements to the speed of importing data. There are some other smaller changes too, including new PowerShell scripts to start and manage Stackdump on Windows, as well as a few bug fixes for running on Windows. The search indexing side of things has not changed, so data imported using v1.1 will continue to work in v1.2. _Data from older versions, however, needs to be re-indexed. See the above section on upgrading to v1.1 for more details._
|
|
|
|
|
|
|
|
|
|
h2. Changes and upgrading from v1.2 to v1.3
|
|
|
|
|
|
|
|
|
|
v1.3 is primarily a bugfix release, for a fairly serious bug. It turns out Stackdump had been subtly overwriting questions as more sites were imported, because it assumed post IDs were unique across all sites when in fact they were not. This meant that as more sites were imported, previously imported sites started to lose questions. The fix required a change to the search index, therefore *the data directory will need to be deleted and all data will need to be re-imported after installing this version*. Thanks to @yammesicka for reporting the issue.
|
|
|
|
|
|
|
|
|
|
Other changes include a new setting to allow disabling the link and image URL rewriting, and a change to the @import_site@ command so it doesn't bail immediately if there is a Solr connection issue - it will prompt and allow resumption after the connection issue has been resolved.
|
|
|
|
|
|
|
|
|
|
h3. Importing the StackOverflow data dump, September 2013
|
|
|
|
|
|
|
|
|
|
The StackOverflow data dump has grown significantly since I started this project back in 2011. With the improvements in v1.2, on a VM with two cores and 4GB of RAM running CentOS 5.7 on a single, standard hard drive containing spinning pieces of metal,
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
In total, the StackOverflow data dump has *15,933,529 posts* (questions and answers), *2,332,403 users* and a very large number of comments.
|
|
|
|
|
|
|
|
|
|
I attempted this on a similarly spec'ed Windows 7 64-bit VM as well - 23 hours later, it was still trying to process the comments. SQLite, Python or simply disk performance is very poor there for some reason. Therefore, if you intend to import StackOverflow, I would advise you to run Stackdump on Linux instead. The smaller sites all complete within a reasonable time though, and as far as I'm aware there are no perceptible performance issues on Windows.
|
|
|
|
|
|
|
|
|
|
h2. Setting up
|
|
|
|
|
|
|
|
|
|
Stackdump was designed for offline environments or environments with poor internet access, therefore it is bundled with all the dependencies it requires (with the exception of Python, Java and 7-zip).
|
|
|
|
|
As long as you have:
|
|
|
|
|
* "Python":http://python.org/download/,
|
|
|
|
|
* "Java":http://java.com/en/download/manual.jsp,
|
|
|
|
|
* "Stackdump":https://bitbucket.org/samuel.lai/stackdump/downloads,
|
|
|
|
|
|
|
|
|
|
* the "StackExchange Data Dump":https://archive.org/details/stackexchange (download the sites you wish to import - note that StackOverflow is split into 7 archive files; only Comments, Posts and Users are required), and
|
|
|
|
|
* "7-zip":http://www.7-zip.org/ (needed to extract the data dump files)
|
|
|
|
|
|
|
|
|
|
...you should be able to get an instance up and running.
|
|
|
|
|
Remember to set your PowerShell execution policy to at least @RemoteSigned@ first.
|
|
|
|
|
|
|
|
|
|
h3. Extract Stackdump
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Stackdump was designed to be self-contained, so to get it up and running, simply extract the Stackdump download archive to an appropriate location.
|
|
|
|
|
|
|
|
|
|
h3. Verify dependencies
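This step can be sketched as a small shell loop that checks each external dependency is on the @PATH@. The command names are assumptions - your system may install Python as @python2@, or 7-zip as @7z@ rather than @7za@.

```shell
# Check that Stackdump's external dependencies are on the PATH.
# Command names are assumptions: your distribution may install
# Python as python2, or 7-zip as 7z instead of 7za.
for cmd in python java 7za; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "$cmd: found"
    else
        echo "$cmd: MISSING"
    fi
done
```

Any @MISSING@ entry needs to be installed (or the command name adjusted) before continuing.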
|
|
|
|
|
|
|
|
|
|
To start the import process, execute the following command -
|
|
|
|
|
|
|
|
|
|
@stackdump_dir/manage.sh import_site --base-url site_url --dump-date dump_date path_to_xml_files@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
... where @site_url@ is the URL of the site you're importing, e.g. __android.stackexchange.com__; @dump_date@ is the date of the data dump you're importing, e.g. __August 2012__, and finally @path_to_xml_files@ is the path to the directory containing the XML files that were just extracted. The @dump_date@ is a text string that is shown in the app only, so it can be in any format you want.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
For example, to import the August 2012 data dump of the Android StackExchange site, with the files extracted into @/tmp/android@, you would execute -
|
|
|
|
|
|
|
|
|
|
@stackdump_dir/manage.sh import_site --base-url android.stackexchange.com --dump-date "August 2012" /tmp/android@
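For reference, the files for this example would have been extracted into @/tmp/android@ beforehand with 7-zip. A hedged sketch (the archive file name is an assumption - the real data dump file names vary by site and dump date):

```shell
# Extract the (assumed) Android data dump archive into /tmp/android.
# Falls back to a dry-run message if 7-zip or the archive is absent.
ARCHIVE=android.stackexchange.com.7z
if command -v 7za >/dev/null 2>&1 && [ -f "$ARCHIVE" ]; then
    7za x "$ARCHIVE" -o/tmp/android
else
    echo "would extract $ARCHIVE to /tmp/android"
fi
```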
|
|
|
|
|
|
|
|
|
|
It is normal to get messages about unknown PostTypeIds and missing comments and answers. These errors are likely due to those posts being hidden via moderation.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This can take anywhere from a minute to 20 hours or more depending on the site you're importing. As a rough guide, __android.stackexchange.com__ took a minute on my VM, while __stackoverflow.com__ took just under 24 hours.
|
|
|
|
|
|
|
|
|
|
Repeat these steps for each site you wish to import. Do not attempt to import multiple sites at the same time; it will not work and you may end up with half-imported sites.
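A sketch of importing several sites strictly one after another (the site list, dump date and directory layout are assumptions; @MANAGE@ should point at your actual Stackdump install):

```shell
# Import sites sequentially; never run these in parallel.
# Degrades to a dry run if Stackdump is not at this path.
MANAGE=stackdump_dir/manage.sh
for site in android.stackexchange.com gaming.stackexchange.com; do
    if [ -x "$MANAGE" ]; then
        "$MANAGE" import_site --base-url "$site" \
            --dump-date "September 2013" "/tmp/dumps/$site"
    else
        echo "would import $site from /tmp/dumps/$site"
    fi
done
```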
|
|
|
|
|
|
|
|
|
|
To start Stackdump, execute the following command -

@stackdump_dir/start_web.sh@
|
|
|
|
|
|
|
|
|
|
... and visit port 8080 on that machine. That's it - your own offline, read-only instance of StackExchange.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
If you need to change the port that it runs on, or modify other settings that control how Stackdump works, see the 'Optional configuration' section below for more details.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Both the search indexer and the app need to be running for Stackdump to work.
|
|
|
|
|
|
|
|
|
|
h2. Optional configuration
|
|
|
|
|
|
|
|
|
|
There are a few settings for those who like to tweak. There's no need to adjust them normally though; the default settings should be fine.
|
|
|
|
|
|
|
|
|
|
The settings file is located in @stackdump_dir/python/src/stackdump/settings.py@. The web component will need to be restarted after changes have been made for them to take effect.
|
|
|
|
|
|
|
|
|
|
* *SERVER_HOST* - the network interface to run the Stackdump web app on. Use _'0.0.0.0'_ for all interfaces, or _'127.0.0.1'_ for localhost only. By default, it runs on all interfaces.
|
|
|
|
|
* *SERVER_PORT* - the port to run the Stackdump web app on. The default port is _8080_.
|
|
|
|
|
* *SOLR_URL* - the URL to the Solr instance. The default assumes Solr is running on the same system. Change this if Solr is running on a different system.
|
|
|
|
|
* *NUM_OF_DEFAULT_COMMENTS* - the number of comments shown by default for questions and answers before the remaining comments are hidden (and shown when clicked). The default is _3_ comments.
|
|
|
|
|
* *NUM_OF_RANDOM_QUESTIONS* - the number of random questions shown on the home page of Stackdump and the site pages. The default is _3_ questions.
|
|
|
|
|
* *REWRITE_LINKS_AND_IMAGES* - by default, all links are rewritten to either point internally or be marked as an external link, and image URLs are rewritten to point to a placeholder image. Set this setting to _False_ to disable this behaviour.
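As a minimal sketch of what a tweak looks like, assuming the settings are plain Python assignments in @settings.py@ (here performed on a scratch copy; in a real install you would edit @stackdump_dir/python/src/stackdump/settings.py@ directly and then restart the web component):

```shell
# Work on a scratch copy; real installs edit settings.py in place.
SETTINGS=$(mktemp)
cat > "$SETTINGS" <<'EOF'
SERVER_HOST = '0.0.0.0'
SERVER_PORT = 8080
EOF
# change the port (GNU sed syntax)
sed -i "s/^SERVER_PORT = .*/SERVER_PORT = 8081/" "$SETTINGS"
grep SERVER_PORT "$SETTINGS"   # prints: SERVER_PORT = 8081
```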
|
|
|
|
|
|
|
|
|
|
h2. Running Stackdump as a service
|
|
|
|
|
|
|
|
|
|
Stackdump also comes bundled with some init.d scripts, which were tested on CentOS 5. These are located in the @init.d@ directory. To use them, you will need to modify them to specify the path to the Stackdump root directory and the user to run under.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Another option is to use "Supervisor":http://supervisord.org/ with a simple configuration file, e.g.,
|
|
|
|
|
|
|
|
|
|
bc.. [program:stackdump-solr]
|
|
|
|
|
command=/path/to/stackdump/start_solr.sh
|
|
|
|
|
priority=900
|
|
|
|
|
user=stackdump_user
|
|
|
|
|
stopasgroup=true
|
|
|
|
|
stdout_logfile=/path/to/stackdump/solr_stdout.log
|
|
|
|
|
stderr_logfile=/path/to/stackdump/solr_stderr.log
|
|
|
|
|
|
|
|
|
|
[program:stackdump-web]
|
|
|
|
|
command=/path/to/stackdump/start_web.sh
|
|
|
|
|
user=stackdump_user
|
|
|
|
|
stopasgroup=true
|
|
|
|
|
stdout_logfile=/path/to/stackdump/web_stdout.log
|
|
|
|
|
stderr_logfile=/path/to/stackdump/web_stderr.log
|
|
|
|
|
|
|
|
|
|
p. Supervisor v3.0b1 or later is required, due to the _stopasgroup_ parameter. Without this parameter, Supervisor will not be able to stop the Stackdump components properly as they're being executed from a script.
|
|
|
|
|
|
|
|
|
|
Yet another option for those using newer Linux distributions is to create native "systemd service definitions":http://www.freedesktop.org/software/systemd/man/systemd.service.html of type _simple_ for each of the components.
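As a sketch, a _simple_ unit for the web component might look like the following (the unit name, paths and user are assumptions; a matching @stackdump-solr.service@ would wrap @start_solr.sh@):

```ini
# /etc/systemd/system/stackdump-web.service (hypothetical)
[Unit]
Description=Stackdump web app
After=stackdump-solr.service

[Service]
Type=simple
User=stackdump_user
ExecStart=/path/to/stackdump/start_web.sh

[Install]
WantedBy=multi-user.target
```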
|
|
|
|
|
|
|
|
|
|
h2. Maintenance
|
|
|
|
|
|
|
|
|
|
Stackdump stores all its data in the @data@ directory under its root directory. If you want to start fresh, just stop the app and the search indexer, delete that directory and restart the app and search indexer.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
To delete certain sites from Stackdump, use the @manage_sites@ management command -
|
|
|
|
|
|
|
|
|
|
@stackdump_dir/manage.sh manage_sites -l@ to list the sites (and their site keys) currently in the system;
|
|
|
|
|
@stackdump_dir/manage.sh manage_sites -d site_key@ to delete a particular site.
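Putting the two together, a guarded sketch (the site key @android@ is an assumption - use a key from the @-l@ listing):

```shell
# List the imported sites, then delete one by its site key.
# Degrades to a dry-run message if Stackdump is not at this path.
MANAGE=stackdump_dir/manage.sh
if [ -x "$MANAGE" ]; then
    "$MANAGE" manage_sites -l
    "$MANAGE" manage_sites -d android
else
    echo "would list sites, then delete site key 'android'"
fi
```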
|
|
|
|
|
|