- Create a place to hold crashes:

```
mkdir crashreports
cd crashreports
```
- Build the stackwalker:

```
git clone https://github.com/rhelmer/minidump-stackwalk
cd minidump-stackwalk
make
cd ..
```
- Install Socorro:

```
mkvirtualenv crashreports
pip install https://people.mozilla.org/~rhelmer/socorro-master.tar.gz
```
- Run the Collector and Processor:

```
socorro collector &
socorro processor &
```
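If a later submission fails with a connection error, it can help to confirm the collector is actually listening first. This is just a generic TCP probe (not part of Socorro); port 8882 is the collector port used in the submission step below:

```python
# Generic TCP probe: returns True if something is accepting connections
# on the given host/port. 8882 is the collector's port in this guide.
import socket

def port_open(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("127.0.0.1", 8882) once the collector is up
```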
- Now try submitting a test crash:

```
curl 'https://raw.githubusercontent.com/mozilla/socorro/master/testcrash/raw/7d381dc5-51e2-4887-956b-1ae9c2130109.dump' > testcrash.dump
curl -F ProductName=TestApp \
     -F Version=1.0 \
     -F [email protected] \
     http://127.0.0.1:8882/submit
```
- You should see a CrashID returned:

```
CrashID=bp-d1ffc5df-f26f-40d3-aa11-6d4ea2150113
```
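For scripted submissions, the same POST can be made without curl. Here is a sketch using only the Python standard library: the field names (`ProductName`, `Version`, `upload_file_minidump`) and collector URL are taken from the curl example above, while the helper functions themselves are my own illustration:

```python
# Sketch: submit a minidump to the local collector and parse the
# "CrashID=..." reply. Field names and URL mirror the curl example;
# the multipart encoding here is hand-rolled from RFC 7578 basics.
import io
import urllib.request
import uuid

def build_multipart(fields, files):
    """Build a multipart/form-data body; returns (content_type, body_bytes)."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write((
            "--{}\r\n"
            'Content-Disposition: form-data; name="{}"\r\n\r\n'
            "{}\r\n").format(boundary, name, value).encode())
    for name, (filename, data) in files.items():
        buf.write((
            "--{}\r\n"
            'Content-Disposition: form-data; name="{}"; filename="{}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).format(boundary, name, filename).encode())
        buf.write(data)
        buf.write(b"\r\n")
    buf.write("--{}--\r\n".format(boundary).encode())
    return "multipart/form-data; boundary={}".format(boundary), buf.getvalue()

def submit_crash(dump_bytes, url="http://127.0.0.1:8882/submit"):
    content_type, body = build_multipart(
        {"ProductName": "TestApp", "Version": "1.0"},
        {"upload_file_minidump": ("testcrash.dump", dump_bytes)})
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": content_type})
    with urllib.request.urlopen(req) as resp:
        reply = resp.read().decode()  # e.g. "CrashID=bp-..."
    return reply.partition("=")[2].strip()

# e.g.: with open("testcrash.dump", "rb") as f: print(submit_crash(f.read()))
```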
Both the raw (`.json`, `.dump`) and processed (`.jsonz`) files will be available under `./crashes`:

```
$ find ./crashes -name "d1ffc5df-f26f-40d3-aa11-6d4ea2150113.*"
./crashes/20150113/name/d1/ff/d1ffc5df-f26f-40d3-aa11-6d4ea2150113.dump
./crashes/20150113/name/d1/ff/d1ffc5df-f26f-40d3-aa11-6d4ea2150113.json
./crashes/20150113/name/d1/ff/d1ffc5df-f26f-40d3-aa11-6d4ea2150113.jsonz
```
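The directory layout follows a simple pattern: a date directory, a constant `name` component, then the first two hex-digit pairs of the crash ID as radix buckets. A small illustration, inferred from the listing above rather than taken from Socorro's source:

```python
# Rebuild the storage path pattern seen in the listing above.
# The "name"/date components and two-character buckets are inferred
# from that output, not from Socorro's implementation. (Note also
# that the crash ID itself ends in 150113, matching the date dir.)
def crash_path(crash_id, date, root="./crashes"):
    return "/".join([root, date, "name", crash_id[:2], crash_id[2:4], crash_id])

print(crash_path("d1ffc5df-f26f-40d3-aa11-6d4ea2150113", "20150113"))
# → ./crashes/20150113/name/d1/ff/d1ffc5df-f26f-40d3-aa11-6d4ea2150113
```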
For production use, run the collector app under uwsgi:

```
uwsgi -H ~/.virtualenvs/crashreports -w socorro.wsgi.collector -s my_socket
```
Then configure your webserver to pass connections to that socket.
Nginx: http://uwsgi-docs.readthedocs.org/en/latest/Nginx.html
Apache: http://uwsgi-docs.readthedocs.org/en/latest/Apache.html
Fetch the Socorro index mappings:

```
curl 'https://raw.githubusercontent.com/mozilla/socorro/master/socorro/external/elasticsearch/mappings/socorro_index_settings.json' > socorro_index_settings.json
```
Stop the Socorro processor, and run it with the following options:

```
socorro processor \
  --destination.crashstorage_class='socorro.external.crashstorage_base.PolyCrashStorage' \
  --destination.storage_classes='socorro.external.fs.crashstorage.FSLegacyDatedRadixTreeStorage,socorro.external.elasticsearch.crashstorage.ElasticSearchCrashStorage' \
  --destination.storage1.elasticsearch_base_settings=./socorro_index_settings.json
```
This configures Socorro to store processed crashes using `PolyCrashStorage` as the destination `crashstorage_class`, which stores crashes in multiple places. The destination `storage_classes` are set to save crashes both to the filesystem (`FSLegacyDatedRadixTreeStorage`) and to ElasticSearch (`ElasticSearchCrashStorage`). The ElasticSearch index settings are expected to be in `./socorro_index_settings.json`.
By default the processor uses an ElasticSearch instance at `localhost:9200`; this can be changed with:

```
--destination.storage1.elasticsearch_urls='http://...:9200'
```

Try passing `--help` along with the other options to see all of these defaults.
Kibana is a web-based tool for exploring data in ElasticSearch and making dashboards: http://www.elasticsearch.org/overview/kibana/installation/
You can find crashes in the `socorro` index.
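As a quick check without Kibana, the index can also be queried directly over ElasticSearch's standard `_search` API. A standard-library sketch: the host and index name are the defaults above, but `processed_crash.product` is an assumed field name, so adjust it to whatever your mapping actually contains:

```python
# Query the "socorro" index directly via ElasticSearch's _search endpoint.
# NOTE: "processed_crash.product" is an assumed field name, not confirmed
# against the Socorro mappings -- adjust to your actual document layout.
import json
import urllib.request

def build_query(product, size=5):
    return {"query": {"match": {"processed_crash.product": product}},
            "size": size}

def search_crashes(product, host="http://localhost:9200", index="socorro"):
    req = urllib.request.Request(
        "{}/{}/_search".format(host, index),
        data=json.dumps(build_query(product)).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["hits"]["hits"]

# e.g. search_crashes("TestApp")
```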
Stop the Socorro processor, and run it with the following options:

```
socorro processor \
  --destination.crashstorage_class='socorro.external.crashstorage_base.PolyCrashStorage' \
  --destination.storage_classes='socorro.external.fs.crashstorage.FSLegacyDatedRadixTreeStorage,socorro.external.boto.crashstorage.BotoS3CrashStorage' \
  --destination.storage1.access_key=YOUR_ACCESS_KEY \
  --destination.storage1.secret_access_key=YOUR_SECRET_ACCESS_KEY \
  --destination.storage1.bucket_name=YOUR_BUCKET_NAME
```
This configures Socorro to store processed crashes using `PolyCrashStorage` as the destination `crashstorage_class`, which stores crashes in multiple places. The destination `storage_classes` are set to save crashes both to the filesystem (`FSLegacyDatedRadixTreeStorage`) and to Amazon S3 (`BotoS3CrashStorage`). Make sure to set `YOUR_ACCESS_KEY`, `YOUR_SECRET_ACCESS_KEY`, and `YOUR_BUCKET_NAME` to your own values.
Create the default `breakpad` database for Socorro to use:

```
# FIXME need better way to get raw_sql
mkdir -p socorro/external/postgresql/raw_sql/
cp -rp ~/src/socorro/socorro/external/postgresql/raw_sql/* \
  socorro/external/postgresql/raw_sql/
cp -rp ~/src/socorro/alembic .
rm alembic/versions/*.py
socorro setupdb --database_name=breakpad
```
Create PostgreSQL partitions (this must be run on a weekly basis):

```
# FIXME doesn't actually work yet
# socorro crontabber --job=weekly-reports-partitions --force
python ~/.virtualenvs/crashreports/lib/python2.7/site-packages/socorro/cron/crontabber_app.py \
  --job=weekly-reports-partitions --force
```
Stop the processor and run it with the following options:

```
socorro processor \
  --destination.crashstorage_class='socorro.external.crashstorage_base.PolyCrashStorage' \
  --destination.storage_classes='socorro.external.fs.crashstorage.FSLegacyDatedRadixTreeStorage,socorro.external.postgresql.crashstorage.PostgreSQLCrashStorage'
```
This uses `PolyCrashStorage` to store crashes in both the filesystem (`FSLegacyDatedRadixTreeStorage`) and PostgreSQL (`PostgreSQLCrashStorage`).
If you wish to have a UI like https://crash-stats.mozilla.com, you can use the same Django-based project that Mozilla does. Note that the webapp is not included in the package installed above.