Socorro Installation
Overview
This guide illustrates how to install Socorro with the minimum components needed for a medium-sized project. Socorro is used at Mozilla to manage thousands of crashes a day, spanning multiple applications. The full architecture allows distributed loads, throttling, queuing and other optimization techniques required for such a large volume of crashes. Medium-sized projects like PlaneShift may use a simplified architecture, which is described in this installation guide.
This guide was written in August 2013, using a Debian Squeeze (6.0.7) server.
The official reference documentation is, at this point, pretty minimal and not complete enough to get through the install on its own; if you want to have a look anyway, it's located here: http://socorro.readthedocs.org/en/latest/installation.html
The full architecture schema is here: http://socorro.readthedocs.org/en/latest/generalarchitecture.html
The architecture we are going to use is the following:
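In short, the flow through the components (a rough sketch of my own, based on the component descriptions below) is:

your application (breakpad) --HTTP POST--> Collector (inside Apache)
Collector --writes dump--> filesystem
Monitor --polls the filesystem--> queues a job in Postgres
Processor --polls Postgres, reads the dump from the filesystem--> writes the processed report to Postgres
Webapp (Django UI) --> Middleware (Apache/WSGI) --> Postgres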
Components
This section lists the components and their usage. There is a reference to their configuration files after you have "deployed for production" (this will be explained later). You don't need to read through all of this chapter now, but you can use it later on as a reference when you are troubleshooting.
Supervisor
Supervisor is the process which starts the needed components of Socorro, namely the Processor and the Monitor. You can start and stop supervisor with:
> /etc/init.d/supervisor stop
> /etc/init.d/supervisor start
- Its configuration file is located in /etc/supervisor
- The configuration files of the processes it starts are located in /etc/supervisor/conf.d
- Its log files are located in /var/log/supervisor/
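The supervisor package also ships the supervisorctl tool, which you can use to check or restart the individual processes without bouncing the whole daemon; the process name below is just an example, the real names depend on the files in /etc/supervisor/conf.d:

> supervisorctl status
> supervisorctl restart socorro-processor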
Collector
- This is the application which receives the dump files from your application. This is the first piece we want to have working.
- In our simplified install it runs inside Apache, so it's not started by supervisor. There are ways to run it separately, but we are not interested in that type of configuration.
- Its main configuration file is /etc/socorro/collector.ini
- The collector writes the dumps to the filesystem.
- Running inside Apache, its log output goes to the Apache logs: /var/log/apache2/error.log
Monitor
Monitor polls the file system and then queues up the crashes in Postgres.
- Monitor is started by supervisor
- Its main configuration file is /etc/socorro/monitor.ini
- The starting app is here: /data/socorro/application/socorro/monitor/monitor_app.py
- The real app is here: /data/socorro/application/socorro/monitor/monitor.py
- Its log file is here: /var/log/socorro/monitor.log
Processor
Processor polls Postgres and consumes the jobs that Monitor just queued; it also uses the filesystem to process some data on the dumps.
- Processor is started by supervisor
- Its main configuration file is /etc/socorro/processor.ini
- The starting app is here: /data/socorro/application/socorro/processor/processor_app.py
- The real app is here: /data/socorro/application/socorro/processor/processor.py
- Its log file is here: /var/log/socorro/processor.log
Middleware
Middleware is an API layer which is called by the various components to execute their operations; in particular, it is used by the webapp UI to query the database.
- Runs inside Apache with wsgi, so it's not started by supervisor
- The real app is here: /data/socorro/application/socorro/middleware/middleware_app.py
- Configuration file: /etc/socorro/middleware.ini
- Running inside Apache, its log output goes to the Apache logs: /var/log/apache2/error.log
Webapp
This is the UI you use to visualize the latest crashes and stacktraces.
- Doesn't have a specific process running, as it's made of web pages calling the middleware
- It's a Django application (you can google it).
- Located here: /data/socorro/webapp-django
- Configuration file: /data/socorro/webapp-django/crashstats/settings/base.py
- Configuration file: /data/socorro/webapp-django/crashstats/settings/local.py
Crontabber
This is the batch process which runs Socorro's periodic jobs, for example populating the aggregate (matview) tables used by the web UI.
- It's run through crontab
- Located here: /data/socorro/application/socorro/cron/crontabber.py
- Its execution directory is here: /home/socorro/persistent/
- Configuration file: /etc/socorro/crontabber.ini
- Its log file is here: /var/log/socorro/crontabber.log
- Its state file is configured in crontabber.ini as: database_file='/home/socorro/persistent/crontabbers.json'
CrashMover (DO NOT USE)
Crashmover is an additional component not used in our simplified install. Just forget about it and all related config files.
- Just for reference, it's started by supervisor
- The starting app is here: /data/socorro/application/scripts/newCrashMover.py
- The real app is here: /data/socorro/application/socorro/storage
We may want to disable it (Need to understand how)
Directory structure
Before proceeding with the installation, it's important you understand the directory structure which will be created by the standard install, so you can more easily troubleshoot the installation if needed.
/home/planeshift/socorro
This is where I've checked out the sources and did the initial Socorro installation. Our project is called "planeshift" and our default user on that server was "planeshift" as well. This is the initial environment where everything gets built and tested. It's called the "development environment" as it's used for internal testing and not for production usage. When the installation is completed, you will deploy the necessary pieces to the other directories with the "deploying in production" procedure (see below). After the production deployment is done, none of the files (including configs) in this directory will be used anymore.
/etc/supervisor/conf.d
Contains the supervisor configuration files, like 1-socorro-processor.conf, 2-socorro-crashmover.conf and 3-socorro-monitor.conf.
These scripts point the supervisor to execute the apps in /data/socorro/application/
/etc/socorro
Contains all the .ini files, like: collector.ini, crashmover.ini, monitor.ini, processor.ini
/home/socorro
This is the primary storage location for uploaded minidumps; no configuration files are present here.
/data/socorro
Contains all applications as executed in the production environment
Please note there are configuration files under /data/socorro/application/config, like collector.ini, crashmover.ini, monitor.ini and processor.ini, but those are NOT used in the final install: one step of the install is to copy them under /etc/socorro, where the final configuration files will reside.
/var/log/socorro
Contains the logs from the applications like Monitor and Processor.
/var/log/supervisor
Contains the log of the supervisor.
Database Structure
I tried to generate a schema of the database... work in progress...
How to proceed
These are the steps we are going to follow for the installation:
- install all components as per Mozilla instructions
- deploy the components to the "production environment", which is nothing more than a different set of directories
- test and troubleshoot each of the installed components
Install all components
For this chapter we assume you have a clean operating system install and none of the components is installed yet. In your case some of the components may already be there; just check the versions in that case.
I've taken note of the versions which were installed on my system (the "Setting up ..." lines below).
Install build essentials
> apt-get install build-essential subversion (already present)
Install python software
> apt-get install python-software-properties
Setting up python-apt-common (0.7.100.1+squeeze1) ...
Setting up python-apt (0.7.100.1+squeeze1) ...
Setting up iso-codes (3.23-1) ...
Setting up lsb-release (3.2-23.2squeeze1) ...
Setting up python-gnupginterface (0.3.2-9.1) ...
Setting up unattended-upgrades (0.62.2) ...
Setting up python-software-properties (0.60.debian-3) ...
> apt-get install libpq-dev python-virtualenv python-dev
Setting up libpython2.6 (2.6.6-8+b1) ...
Setting up python2.6-dev (2.6.6-8+b1) ...
Setting up python-dev (2.6.6-3+squeeze7) ...
Setting up python-pkg-resources (0.6.14-4) ...
Setting up python-setuptools (0.6.14-4) ...
Setting up python-pip (0.7.2-1) ...
Setting up python-virtualenv (1.4.9-3squeeze1) ...
> apt-get install python2.6 python2.6-dev
Install postgres 9.2
In my case I was using Debian Squeeze. On this release the default postgres is 8.4, which is too old to work with Socorro because it doesn't have JSON support, so I needed to add the repos for postgres 9.2.
Create /etc/apt/sources.list.d/pgdg.list and add this line:
deb http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main
> wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
> sudo apt-get update
> apt-get install postgresql-9.2 postgresql-plperl-9.2 postgresql-contrib-9.2 postgresql-server-dev-9.2
Ensure that timezone is set to UTC
> vi /etc/postgresql/9.2/main/postgresql.conf
timezone = 'UTC'
> service postgresql restart
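You can double-check the setting from psql:

> su - postgres -c "psql -c 'show timezone;'"
 TimeZone
----------
 UTC
(1 row)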
Install other needed components
> apt-get install rsync libxslt1-dev git-core mercurial
> apt-get install python-psycopg2
> apt-get install libsasl2-dev
Add a new superuser account to postgres
(executed as root)
> su - postgres -c "createuser -s planeshift"
Remove the security layer for postgres (this is to avoid the PostgreSQL error "FATAL: Peer authentication failed").
Edit /etc/postgresql/9.2/main/pg_hba.conf and change the following line from 'peer' to 'trust':
host all all 127.0.0.1/32 peer
so that it becomes:
host all all 127.0.0.1/32 trust
> service postgresql restart
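To verify the change, try a TCP connection as the superuser created above; with 'trust' in place it should connect without asking for a password:

> psql -U planeshift -h 127.0.0.1 -d postgres -c 'select 1;'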
Download and install Socorro
(executed as planeshift)
> cd
> git clone --depth=1 https://github.com/mozilla/socorro socorro
> cd socorro
> git fetch origin --tags --depth=1
> git checkout 56 (release 56 chosen as the stable one)
Node/npm is required, install it:
> apt-get install openssl libssl-dev
> git clone https://github.com/joyent/node.git
> cd node
> git tag
> git checkout v0.9.12
> ./configure --openssl-libpath=/usr/lib/ssl
> make
> make test
> sudo make install
> node -v # checks that it's installed and running
Update python-pip (as root):
> pip install --upgrade pip
> /home/planeshift/socorro/socorro-virtualenv/bin/pip install --upgrade pip
Install lessc
> npm install less -g
Install json_extensions for use with PostgreSQL
From inside the Socorro checkout
> export PATH=$PATH:/usr/lib/postgresql/9.2/bin
> make json_enhancements_pg_extension
Run unit/functional tests
From inside the Socorro checkout
> make test
Install minidump_stackwalk
From inside the Socorro checkout
This is the binary which processes breakpad crash dumps into stack traces:
> make minidump_stackwalk
Setup environment
> make bootstrap-dev (this is needed only once)
Every time you want to run socorro commands you will have to:
> . socorro-virtualenv/bin/activate
> export PYTHONPATH=.
> (execute your command here)
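If you do this often, you can collect those lines in a small helper script (the file name is hypothetical, the content is just the steps above) and source it instead:

# hypothetical ~/socorro-env.sh -- load it with: . ~/socorro-env.sh
cd /home/planeshift/socorro
. socorro-virtualenv/bin/activate
export PYTHONPATH=.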
Populate PostgreSQL Database
as user root
> cd socorro
> psql -f sql/roles.sql postgres
as user planeshift
> cd socorro
> . socorro-virtualenv/bin/activate
> export PYTHONPATH=.
You cannot start from an empty database, as there are multiple tables (for example the list of operating systems) which have to be populated for the system to work. For this reason you need to use --fakedata, which loads some of those tables together with some sample products (WaterWolf, NightTrain).
I don't remember which one of the two commands below worked, but it's one of the two; the other should give you an error:
> ./socorro/external/postgresql/setupdb_app.py --database_name=breakpad --fakedata --dropdb --database_superusername=breakpad_rw --database_superuserpassword=bPassword
> ./socorro/external/postgresql/setupdb_app.py --database_name=breakpad --fakedata --database_superusername=planeshift --dropdb
Create partitioned reports_* tables
Socorro uses PostgreSQL partitions for the reports table, which must be created on a weekly basis.
Normally this is handled automatically by the cronjob scheduler crontabber, but it can be run as a one-off:
> python socorro/cron/crontabber.py --job=weekly-reports-partitions --force
I needed to run it as I was getting an error on Processor saying:
ProgrammingError: relation "raw_crashes_20130826" does not exist
LINE 1: insert into raw_crashes_20130826 (uuid, raw_crash, date_proc...
After running the command above, the table raw_crashes_20130826 was created.
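You can list the existing weekly partitions directly from postgres to confirm:

breakpad=# select tablename from pg_tables where tablename like 'raw_crashes_%' or tablename like 'reports_2%' order by tablename;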
Run socorro in dev mode
In dev mode Socorro runs only on the local server: you launch the services manually and they listen on ports 8882 and 8883. Usually you don't want this in production, but it can be useful to test that everything works.
Copy default config files
> cp config/collector.ini-dist config/collector.ini
> cp config/processor.ini-dist config/processor.ini
> cp config/monitor.ini-dist config/monitor.ini
> cp config/middleware.ini-dist config/middleware.ini
Run the apps
> cd socorro
> . socorro-virtualenv/bin/activate
> export PYTHONPATH=.
> screen -S processor python socorro/processor/processor_app.py --admin.conf=./config/processor.ini
> screen -S monitor python socorro/monitor/monitor_app.py --admin.conf=./config/monitor.ini
> screen -S middleware python socorro/middleware/middleware_app.py --admin.conf=config/middleware.ini
> screen -S collector python socorro/collector/collector_app.py --admin.conf=./config/collector.ini
Deploy the apps for production usage
Install prerequisites
> apt-get install supervisor rsyslog libapache2-mod-wsgi memcached
Setup directory structure
> mkdir /etc/socorro
> mkdir /var/log/socorro
> mkdir -p /data/socorro
> useradd socorro
> chown socorro:socorro /var/log/socorro
> mkdir -p /home/socorro/primaryCrashStore /home/socorro/fallback /home/socorro/persistent
> chown www-data:socorro /home/socorro/primaryCrashStore /home/socorro/fallback /home/socorro/persistent
> chmod 2775 /home/socorro/primaryCrashStore /home/socorro/fallback /home/socorro/persistent
Install Socorro for production
> cd /home/planeshift/socorro
> make install
(as root)
> cd /home/planeshift/socorro
> cp config/*.ini /etc/socorro/
To set up the configuration files properly you have two options:
- Download the ones I used and modify them as needed for your installation
- Ask each app to generate its ini file for you, then modify it as needed for your installation
Generate your own /etc/socorro/collector.ini
> login as the socorro user
> export PYTHONPATH=/data/socorro/application:/data/socorro/thirdparty
> python /data/socorro/application/socorro/collector/collector_app.py --admin.conf=/etc/socorro/collector.ini --help
> python /data/socorro/application/socorro/collector/collector_app.py --admin.conf=/etc/socorro/collector.ini --admin.dump_conf=/tmp/c1.ini
> cp /tmp/c1.ini /etc/socorro/collector.ini
The most important parameters in collector.ini are:
wsgi_server_class='socorro.webapi.servers.ApacheModWSGI' (tells collector to run inside Apache)
fs_root='/home/socorro/primaryCrashStore' (points to the directory where your disk storage is)
crashstorage_class='socorro.external.fs.crashstorage.FSDatedRadixTreeStorage' (uses the new FSDatedRadixTreeStorage class which organizes files by date in the disk storage area)
IMPORTANT NOTE: all your processes should use the same crashstorage_class or they will not be able to find the files
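Putting those together, the relevant portion of /etc/socorro/collector.ini in this setup ends up as follows (everything else is left at its generated defaults):

wsgi_server_class='socorro.webapi.servers.ApacheModWSGI'
fs_root='/home/socorro/primaryCrashStore'
crashstorage_class='socorro.external.fs.crashstorage.FSDatedRadixTreeStorage'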
Generate your own /etc/socorro/processor.ini
> login as the socorro user
> export PYTHONPATH=/data/socorro/application:/data/socorro/thirdparty
> python /data/socorro/application/socorro/processor/processor_app.py --admin.conf=/etc/socorro/processor.ini --help
> chown www-data:socorro /home/socorro [NOT NEEDED? WAS root:root]
> python /data/socorro/application/socorro/processor/processor_app.py --admin.conf=/etc/socorro/processor.ini --source.crashstorage_class=socorro.external.fs.crashstorage.FSDatedRadixTreeStorage --admin.dump_conf=/tmp/p1.ini
> edit the p1.ini file manually and delete everything inside [c_signature]
> python /data/socorro/application/socorro/processor/processor_app.py --admin.conf=/tmp/p1.ini --admin.dump_conf=/tmp/p2.ini --destination.storage_classes='socorro.external.postgresql.crashstorage.PostgreSQLCrashStorage, socorro.external.fs.crashstorage.FSRadixTreeStorage'
> edit the p2.ini file manually and delete everything inside [c_signature]
> python /data/socorro/application/socorro/processor/processor_app.py --admin.conf=/tmp/p2.ini --admin.dump_conf=/tmp/p3.ini --destination.storage1.crashstorage_class=socorro.external.fs.crashstorage.FSRadixTreeStorage
> edit the p3.ini file manually and delete everything inside [c_signature]
> edit p3.ini and set fs_root=/home/socorro/primaryCrashStore . There should be two places, one under [destination] storage1 and one under [source]
Generate your own /etc/socorro/middleware.ini
> login as the socorro user
> export PYTHONPATH=/data/socorro/application:/data/socorro/thirdparty
> python /data/socorro/application/socorro/middleware/middleware_app.py --admin.conf=/etc/socorro/middleware.ini --help
> python /data/socorro/application/socorro/middleware/middleware_app.py --admin.conf=/etc/socorro/middleware.ini --admin.dump_conf=/tmp/m1.ini
> edit /tmp/m1.ini and change: filesystem_class='socorro.external.fs.crashstorage.FSDatedRadixTreeStorage'
> comment out the 'platforms', 'implementation_list' and 'service_overrides' variables, as those are printed wrongly by the dumper and will give an error while running the middleware app
Configure Crontabber
edit /etc/socorro/crontabber.ini
database_file='/home/socorro/persistent/crontabbers.json'
Be sure the user socorro can write to that directory.
Comment out the unneeded jobs:
edit /data/socorro/application/socorro/cron/crontabber.py
#socorro.cron.jobs.bugzilla.BugzillaCronApp|1h
#socorro.cron.jobs.matviews.CorrelationsCronApp|1d|10:00
#socorro.cron.jobs.ftpscraper.FTPScraperCronApp|1h
#socorro.cron.jobs.automatic_emails.AutomaticEmailsCronApp|1h
#socorro.cron.jobs.modulelist.ModulelistCronApp|1d
Define your throttling conditions
Due to the high volume of crashes received at Mozilla, by default Socorro does not accept all crashes but 'throttles' some of them, meaning those crashes are rejected and not processed. To control this, the Collector has throttling rules defined in /etc/socorro/collector.ini.
If you want all your crashes to be accepted, change this line to:
throttle_conditions='''[("*", True, 100)]'''
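If instead you want to keep some throttling, the rules are evaluated in order and the first matching one wins. An untested sketch, assuming the same (raw crash field, condition, percent) tuple format as the rule above: accept every PlaneShift crash, and 10% of everything else:

throttle_conditions='''[
  ("ProductName", lambda x: x == "PlaneShift", 100),
  ("*", True, 10)
]'''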
Add your own product to the database
The default installation provides two sample products (WaterWolf, NightTrain) and some sample versions of those products, but most likely you want to add your own app to Socorro. Here is what I did.
At the time of this writing there is a bug in https://github.com/mozilla/socorro/blob/master/socorro/external/postgresql/raw_sql/procs/add_new_product.sql
So you need to edit /home/planeshift/socorro/socorro/external/postgresql/raw_sql/procs/add_new_product.sql at line 28 this way:
INSERT INTO products ( product_name, sort, rapid_release_version, release_name, rapid_beta_version )
VALUES ( prodname, current_sort + 1, initversion, COALESCE(ftpname, prodname), rapid_beta_version );
Basically rapid_beta_version was missing, causing an error in the execution of add_new_product() below.
Edit the database (as user planeshift)
> psql -U planeshift -d breakpad
> SELECT add_new_product('PlaneShift', '0.5.10','12345','PlaneShift',1);
> SELECT add_new_release ('PlaneShift','0.5.10','Release',201305051111,'Windows',1,'release','f','f');
> SELECT update_product_versions(200); -- generates product version info for older releases (200 days)
The last line inserts the data into the 'product_versions' table. By default update_product_versions() considers only the new releases within a 30-day timeframe from the current date, but if you call it as update_product_versions(200) it will cover the previous 200 days. In my case the release date was older than 30 days, so it was not showing up in the web UI.
Adding your own Active Daily User information (ADU)
Many of the reports presented in the webapp UI use the number of active users per day to generate their graphs. This information is not produced by Socorro itself; it has to be provided by the administrator. At Mozilla there is a separate department doing this.
In particular, Socorro expects to receive the ADU information in the raw_adu table. The tricky part is that this information should be entered daily and be available for all products/platforms/versions/builds/release-channels.
For the moment I've populated it with a few simple statements which assign one value to all platforms/versions/builds/release-channels of a specific product (a cron-able version is sketched after the statements below).
INSERT INTO raw_adu (
  SELECT 1000, '2013-08-28', v.product_name, platform, platform,
         release_version, build_id, release_channel, 'dummy_product_guid', now()
  FROM product_versions v, product_version_builds b, product_release_channels c
  WHERE v.product_version_id = b.product_version_id
    AND v.product_name = c.product_name
    AND v.product_name = 'NightTrain'
);
INSERT INTO raw_adu (
  SELECT 100, '2013-08-28', v.product_name, platform, platform,
         release_version, build_id, release_channel, 'dummy_product_guid', now()
  FROM product_versions v, product_version_builds b, product_release_channels c
  WHERE v.product_version_id = b.product_version_id
    AND v.product_name = c.product_name
    AND v.product_name = 'WaterWolf'
);
INSERT INTO raw_adu (
  SELECT 10000, '2013-08-28', v.product_name, platform, platform,
         release_version, build_id, release_channel, 'dummy_product_guid', now()
  FROM product_versions v, product_version_builds b, product_release_channels c
  WHERE v.product_version_id = b.product_version_id
    AND v.product_name = c.product_name
    AND v.product_name = 'PlaneShift'
);
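Since this has to happen daily, here is an untested sketch of how the same INSERT could be wrapped in a script run from cron; the file name, the flat ADU value and the use of yesterday's date are my assumptions:

#!/bin/sh
# hypothetical /etc/cron.daily/fill-raw-adu (untested sketch)
# assigns one flat ADU count to every platform/version/build/channel
# of a single product, for yesterday's date
PRODUCT='PlaneShift'
ADU=10000
DAY=$(date -d yesterday +%Y-%m-%d)
psql -U planeshift -d breakpad <<EOF
INSERT INTO raw_adu (
  SELECT $ADU, '$DAY', v.product_name, platform, platform,
         release_version, build_id, release_channel, 'dummy_product_guid', now()
  FROM product_versions v, product_version_builds b, product_release_channels c
  WHERE v.product_version_id = b.product_version_id
    AND v.product_name = c.product_name
    AND v.product_name = '$PRODUCT'
);
EOF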
After you have done this, crontabber will populate the other tables for you, including product_adu, build_adu, ...
Cronjobs for Socorro
Socorro’s cron jobs are managed by crontabber. crontabber runs every 5 minutes from the system crontab.
> cp scripts/crons/socorrorc /etc/socorro/
Edit the crontab:
> crontab -e
*/5 * * * * socorro /data/socorro/application/scripts/crons/crontabber.sh
(note: the 'socorro' field is the user to run as, which is valid in /etc/crontab or /etc/cron.d; if you instead use crontab -e as the socorro user, drop that field)
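After a few minutes, verify that crontabber is actually being invoked by looking at its log (the path comes from the Components section above):

> tail -n 20 /var/log/socorro/crontabber.log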
Configure and start daemons
Copy default configuration files
> cp puppet/files/etc_supervisor/*.conf /etc/supervisor/conf.d/
The files provided with the standard install point to the old version of the apps, so you need to modify those as follows.
> vi /etc/supervisor/conf.d/1-socorro-processor.conf
command = /data/socorro/application/socorro/processor/processor_app.py --admin.conf=/etc/socorro/processor.ini
> vi /etc/supervisor/conf.d/3-socorro-monitor.conf
command = /data/socorro/application/socorro/monitor/monitor_app.py --admin.conf=/etc/socorro/monitor.ini
> /etc/init.d/supervisor stop
> /etc/init.d/supervisor start
Configure Apache
There are two ways you can run the apps.
- With Virtual hosts
- With Virtual Directories (this is what I've used)
In case you want to use Virtual Hosts, you can use this: (I DIDN'T TEST THIS)
> cp puppet/files/etc_apache2_sites-available/{crash-reports,crash-stats,socorro-api} /etc/apache2/sites-available
In case you want to use Virtual Directories, you can use this:
> vi /etc/apache2/apache2.conf
add at the end of the file:
Include socorro.conf
> vi /etc/apache2/socorro.conf (new file)
WSGIPythonPath /data/socorro/application:/data/socorro/application/scripts
WSGIPythonHome /home/planeshift/socorro/socorro-virtualenv
WSGIScriptAlias /crash-stats /data/socorro/webapp-django/wsgi/socorro-crashstats.wsgi
WSGIScriptAlias /crash-reports /data/socorro/application/wsgi/collector.wsgi
WSGIScriptAlias /bpapi /data/socorro/application/wsgi/middleware.wsgi
RewriteEngine on
Redirect /home/ /crash-stats/home/
Activate apache modules
> a2enmod headers
> a2enmod proxy
> a2enmod rewrite
> /etc/init.d/apache2 restart
Set access rights on cache dir
> chmod -R 777 /data/socorro/webapp-django/static/CACHE/
Configure WebAPP
Edit configuration file: /data/socorro/webapp-django/crashstats/settings/local.py
DEFAULT_PRODUCT = 'PlaneShift'
Edit /data/socorro/webapp-django/crashstats/settings/base.py for webapp user authentication database:
'NAME': '/home/socorro/sqlite.crashstats.db'
> cp /data/socorro/webapp-django/sqlite.crashstats.db /home/socorro
> chown www-data:socorro /home/socorro/sqlite.crashstats.db
Test each of the components
Generating a crash
The crash minidump should be generated by Breakpad in your real application.
Uploading a crash
You can upload through the app and Breakpad, but there is also a simpler way which you can use for testing.
Edit /data/socorro/application/socorro/collector/throttler.py adding these lines to def throttle(self, raw_crash):
# check if user submitted the crash with minidump_upload
# and adjust parameters accordingly
if 'prod' in raw_crash:
    raw_crash['ProductName'] = raw_crash['prod']
if 'ver' in raw_crash:
    raw_crash['Version'] = raw_crash['ver']
It just remaps those two parameters, as they differ between Breakpad and the minidump_upload program.
> cd /data/socorro/stackwalk/bin/
> ./minidump_upload -p PlaneShift -v 0.5.12 6cc10361-c469-1504-1d91efef-7b8e750c.dmp http://194.116.72.94/crash-reports/submit
If the upload works you should get something like this:
Successfully sent the minidump file. Response: CrashID=bp-9e3504c5-0967-4b40-9563-aeadc2130829
Check if Collector receives your dump
In a working installation the URL yoursite.org/crash-reports/submit should return a "None" value. If you get an internal error 500 or a 404 then there is a problem; in both cases you should check /var/log/apache2/error.log to see what the problem is.
When the collector properly receives a dump, you should see this line in error.log:
> tail -f -n100 /var/log/apache2/error.log
[Thu Aug 29 08:22:43 2013] [error] 2013-08-29 08:22:43,932 INFO - MainThread - 9e3504c5-0967-4b40-9563-aeadc2130829 received
where the hex string is the id of your dump.
You should also see new files created in /home/socorro/primaryCrashStore; for example, for the dump above:
> ls /home/socorro/primaryCrashStore/20130828/name/9e/a8/9ea81c35-f847-4951-93a5-f75a92130828
-rw-rw-rw- 1 www-data socorro 309960 Aug 28 02:04 9ea81c35-f847-4951-93a5-f75a92130828.dump
-rw-rw-rw- 1 www-data socorro    225 Aug 28 02:04 9ea81c35-f847-4951-93a5-f75a92130828.json
-rw-r--r-- 1 socorro  socorro   8731 Aug 28 02:04 9ea81c35-f847-4951-93a5-f75a92130828.jsonz
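The date-based layout of FSDatedRadixTreeStorage makes it non-obvious where a given crash lives; if you only know the crash id, find will locate the files:

> find /home/socorro/primaryCrashStore -name '9ea81c35-f847-4951-93a5-f75a92130828*'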
If in the log you see something like:
[Thu Aug 29 08:30:19 2013] [error] 2013-08-29 08:30:19,052 DEBUG - MainThread - deferring Planeshift 0.5.12
this means the crash has been 'throttled', in other words not processed, due to the throttle rules you have in place.
If the dump is not throttled, and therefore accepted, you should see this in the log:
[Thu Aug 29 09:23:32 2013] [error] 2013-08-29 09:23:32,821 DEBUG - MainThread - not throttled Planeshift 0.5.12
Check if Monitor/Processor read the dump from disk
If the monitor finds the new file to process it should print something like this:
> tail -f -n100 /var/log/socorro/monitor-stderr.log:
2013-08-29 09:23:42,066 DEBUG - standard_job_thread - new job: 9ea81c35-f847-4951-93a5-f75a92130828
If the processor finds the new job it should print something like this:
> tail -f -n100 /var/log/socorro/processor-stderr.log:
2013-08-29 09:23:43,919 DEBUG - QueuingThread - incomingJobStream yielding normal job 59cfcb3a-a1a0-492b-9c73-8ac032130829
2013-08-29 09:23:43,920 INFO - Thread-4 - starting job: 59cfcb3a-a1a0-492b-9c73-8ac032130829
2013-08-29 09:23:44,011 DEBUG - Thread-4 - skunk_classifier: reject - not a plugin crash
2013-08-29 09:23:44,012 INFO - Thread-4 - finishing successful job: 59cfcb3a-a1a0-492b-9c73-8ac032130829
Check if the dump information lands in postgres
The crash will go into the database; you can check it with a query:
select * from raw_crashes where uuid='59cfcb3a-a1a0-492b-9c73-8ac032130829';
select * from reports where uuid='59cfcb3a-a1a0-492b-9c73-8ac032130829';
Check if crontabber is doing its job
breakpad=# select product_version_id from product_versions where product_name='PlaneShift';
Check if it populated the ADU tables:
breakpad=# select count(*) from product_adu where product_version_id IN (select product_version_id from product_versions where product_name='PlaneShift');
breakpad=# select count(*) from build_adu where product_version_id IN (select product_version_id from product_versions where product_name='PlaneShift');
breakpad=# select count(*) from crashes_by_user_build where product_version_id IN (select product_version_id from product_versions where product_name='PlaneShift');
breakpad=# select count(*) from crashes_by_user where product_version_id IN (select product_version_id from product_versions where product_name='PlaneShift');
breakpad=# select uuid,date_processed from reports where product = 'PlaneShift';
select uuid,date_processed from reports_clean where product_version_id IN (select product_version_id from product_versions where product_name='PlaneShift');
select count(*) from signatures where signature='psclient@0x15201c';
Check if any of your crashes went into the reports_bad table:
select release_channel,signature from reports_bad, reports where reports_bad.uuid=reports.uuid and product='PlaneShift';
If this query returns rows, it means those crashes were rejected while building reports_clean (for example because of an unrecognized release channel) and will not show up in the UI.
Access the web UI
http://194.116.72.94/crash-stats/home/products/WaterWolf
Troubleshooting
Postgres useful commands:
> /etc/init.d/postgresql start
> /etc/init.d/postgresql stop
> psql -U planeshift -d breakpad
breakpad# \dt (shows the tables)
breakpad# \d products (describes the table products)
unable to open database file
Hitting http://194.116.72.94/crash-stats/home/frontpage_json?product=PlaneShift&versions=0.5.12 fails with "unable to open database file":
Request Method: GET
Request URL: http://194.116.72.94/crash-stats/home/frontpage_json?product=PlaneShift&versions=0.5.12
Django Version: 1.4.5
Exception Type: OperationalError
Exception Value: unable to open database file
Exception Location: /data/socorro/webapp-django/vendor/lib/python/django/db/backends/sqlite3/base.py in _sqlite_create_connection, line 278
Answer: this database is used for authenticated sessions; it does not need to be the Socorro postgres db, but it needs to be somewhere with write access :) either an sqlite db or postgres/mysql/anything Django supports.
Solution: edit /data/socorro/webapp-django/crashstats/settings/base.py for database setting
'NAME': '/home/socorro/sqlite.crashstats.db'
> cp /data/socorro/webapp-django/sqlite.crashstats.db /home/socorro
> chown www-data:socorro /home/socorro/sqlite.crashstats.db
raw_adu has not been updated
2013-08-29 11:32:49,656 DEBUG - MainThread - error when running <class 'socorro.cron.jobs.matviews.BuildADUCronApp'> on None
Traceback (most recent call last):
  File "/data/socorro/application/socorro/cron/crontabber.py", line 703, in _run_one
    for last_success in self._run_job(job_class, config, info):
  File "/data/socorro/application/socorro/cron/base.py", line 174, in main
    function(when)
  File "/data/socorro/application/socorro/cron/base.py", line 213, in _run_proxy
    self.run(connection, date)
  File "/data/socorro/application/socorro/cron/jobs/matviews.py", line 52, in run
    self.run_proc(connection, [target_date])
  File "/data/socorro/application/socorro/cron/jobs/matviews.py", line 22, in run_proc
    cursor.callproc(self.get_proc_name(), signature)
InternalError: raw_adu has not been updated for 2013-08-28
This is caused by the raw_adu table not being populated properly.
crontabber skipping ... because it's not time to run
You are getting stuck because crontabber doesn't want to run the apps: it says "skipping ... because it's not time to run".
Change /data/socorro/application/socorro/cron/crontabber.py to add more debug output:
def time_to_run(self, class_, time_):
    """return true if it's time to run the job.  This is true if there is
    no previous information about its last run or if the last time it ran
    and set its next_run to a date that is now past."""
    app_name = class_.app_name
    _debug = self.config.logger.debug
    try:
        info = self.database[app_name]
    except KeyError:
        if time_:
            h, m = [int(x) for x in time_.split(':')]
            # only run if this hour and minute is < now
            now = utc_now()
            _debug("LUCADEBUG time to run %s , current time %s", time_, now)
            if now.hour > h:
                return True
            elif now.hour == h and now.minute >= m:
                return True
            return False
        else:
            # no past information, run now
            return True
    next_run = info['next_run']
    _debug("LUCADEBUG next run %s , current time %s", next_run, utc_now())
    if next_run < utc_now():
        return True
    return False
If you want to force your crontabber to run anyway:
> rm /home/socorro/persistent/crontabbers.json
> psql -U planeshift -d breakpad
breakpad=# update crontabber_state set state='{}';
> sudo -u socorro /data/socorro/application/scripts/crons/crontabber.sh
crontabber not populating product_adu
The current implementation of /data/socorro/application/socorro/external/postgresql/raw_sql/procs/update_build_adu.sql considers only the builds of the last 6 days; if the build date of a product is older, it skips it. It also considers only the build_types 'nightly' and 'aurora'. This is hardcoded for Firefox.
To ensure you index also old products and builds, do the following:
> vi /data/socorro/application/socorro/external/postgresql/raw_sql/procs/update_build_adu.sql
remove (or comment out) this clause from the query:
--AND '2013-08-28' <= ( bdate + 6 )
To accept also your build_type, do the following:
> vi /data/socorro/application/socorro/external/postgresql/raw_sql/procs/update_build_adu.sql
change this line (adding your build_type, 'release' in my case):
AND product_versions.build_type IN ('nightly','aurora','release')
Then reload the function:
> psql -U planeshift -d breakpad
breakpad=# \i /data/socorro/application/socorro/external/postgresql/raw_sql/procs/update_build_adu.sql
Do the same for:
/data/socorro/application/socorro/external/postgresql/raw_sql/procs/update_crashes_by_user_build.sql
References
Json crash example
{ "InstallTime": "1357622062", "Theme": "classic/1.0", "Version": "4.0a1", "id": "{ec8030f7-c20a-464f-9b0e-13a3a9e97384}", "Vendor": "Mozilla", "EMCheckCompatibility": "true", "Throttleable": "0", "URL": "http://code.google.com/p/crashme/", "version": "20.0a1", "CrashTime": "1357770042", "ReleaseChannel": "nightly", "submitted_timestamp": "2013-01-09T22:21:18.646733+00:00", "buildid": "20130107030932", "timestamp": 1357770078.646789, "Notes": "OpenGL: NVIDIA Corporation -- GeForce 8600M GT/PCIe/SSE2 -- 3.3.0 NVIDIA 313.09 -- texture_from_pixmap\r\n", "StartupTime": "1357769913", "FramePoisonSize": "4096", "FramePoisonBase": "7ffffffff0dea000", "Add-ons": "%7B972ce4c6-7e08-4474-a285-3208198ce6fd%7D:20.0a1,crashme%40ted.mielczarek.org:0.4", "BuildID": "20130107030932", "SecondsSinceLastCrash": "1831736", "ProductName": "WaterWolf", "legacy_processing": 0, "ProductID": "{ec8030f7-c20a-464f-9b0e-13a3a9e97384}" }
Other older notes, DO NOT consider
Create a screen startup file “launchScorro” that'll be used for the Socorro scripts:
cd /home/planeshift/socorro
. socorro-virtualenv/bin/activate
export PYTHONPATH=.
startup_message off
autodetach on
defscrollback 10000
termcap xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'
screen -S processor python socorro/processor/processor_app.py --admin.conf=./config/processor.ini
screen -S monitor python socorro/monitor/monitor_app.py --admin.conf=./config/monitor.ini
screen -S middleware python socorro/middleware/middleware_app.py --admin.conf=config/middleware.ini [NOT NEEDED as it runs inside Apache]
screen -S collector python socorro/collector/collector_app.py --admin.conf=./config/collector.ini
Tweaking the database
> INSERT INTO products VALUES ('PlaneShift','0.1','0.1','PlaneShift','0');
> INSERT INTO product_versions VALUES (17,'PlaneShift','0.5','0.5.10','0.5.10',0,'0.5.10','2013-08-23','2013-12-23','f','Release','f','f',null);
If something goes wrong and you want to delete your product and versions
> DELETE from products where product_name='PlaneShift';
> DELETE from product_versions where product_name='PlaneShift';
> DELETE from releases_raw where product_name='PlaneShift';
> DELETE from product_productid_map where product_name='PlaneShift';
> DELETE from product_release_channels where product_name='PlaneShift';