/* here be dragons */

experiment/scripts contains the code used to run the experiments.

Running the timed benchmark: use n_jobs_runner.py, which dispatches n jobs, each containing synced_workflow.py, to the cluster (see for example monitoring/test.job). As of now, this should be run from the same directory as , since the script in the job checks that enough receipts from abel have shown up (i.e. that the jobs have started), so no workflow starts before all the jobs containing the workflow have started on abel. You can run watch "python check_finished.py" to see when the jobs are done, although the heuristic it uses is a little crude: it checks that the slurm .out file has the expected length (57 lines, hardcoded, sorry). get_minutes_from_abel_receipt.py gets you the runtime of the fastest-finishing job, the average runtime of all jobs, and the runtime of the slowest-finishing job. The receipts from my experiments are in experiment/speed_out.

Running the disk-space benchmark: check_growth.py gets you the (incremental) growth of the db after each tool in the test workflow. NB: this uses the lap library and assumes access to pymongo, so run it with the python in the lap-tree!

Some trivial plotting code can be found in /experiments/plot.
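A few sketches of the main moving parts follow; they are reconstructions from the descriptions above, not the actual scripts. First, the synchronisation barrier that the job script implements: a minimal version, assuming (hypothetically) that each started job drops a receipt file into a shared directory and that the expected job count is given on the command line. Directory layout and names here are guesses, not what n_jobs_runner.py/synced_workflow.py actually do.

    import sys
    import time
    from pathlib import Path

    def wait_for_receipts(receipt_dir, n_jobs, poll_seconds=10):
        """Block until at least n_jobs receipt files have shown up in receipt_dir."""
        receipt_dir = Path(receipt_dir)
        while sum(1 for p in receipt_dir.glob("*") if p.is_file()) < n_jobs:
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        # e.g. python wait_for_receipts.py receipts/ 16
        wait_for_receipts(sys.argv[1], int(sys.argv[2]))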
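The check_finished.py heuristic boils down to counting lines in the slurm output files. Something along these lines, assuming the .out files sit in the working directory (the 57 comes from the hardcoded value mentioned above; everything else is a guess):

    from pathlib import Path

    EXPECTED_LINES = 57  # hardcoded expected length of a finished job's slurm output

    def job_is_finished(out_file):
        """Treat a job as finished once its slurm output reaches the expected length."""
        with open(out_file) as f:
            return sum(1 for _ in f) >= EXPECTED_LINES

    if __name__ == "__main__":
        out_files = sorted(Path(".").glob("*.out"))
        done = sum(job_is_finished(p) for p in out_files)
        print(f"{done}/{len(out_files)} jobs finished")

This is what makes the watch invocation handy: watch simply re-runs the count every couple of seconds until everything reports finished.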
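Finally, the core idea of the disk-space measurement: snapshot the database size between tool runs. A sketch using pymongo's dbstats command; the connection details, database name and run_tool hook are placeholders, not what check_growth.py actually uses (and again, run it with the lap-tree python).

    from pymongo import MongoClient

    def db_size_bytes(db):
        """Current data size of the database, via MongoDB's dbstats command."""
        return db.command("dbstats")["dataSize"]

    def measure_growth(db, tools, run_tool):
        """Run each tool and print how much the db grew afterwards."""
        previous = db_size_bytes(db)
        for tool in tools:
            run_tool(tool)  # placeholder for however the test workflow invokes a tool
            current = db_size_bytes(db)
            print(f"{tool}: +{current - previous} bytes")
            previous = current

    if __name__ == "__main__":
        client = MongoClient("localhost", 27017)  # assumed connection details
        db = client["lap"]                        # hypothetical database name
        # measure_growth(db, ["tool_a", "tool_b"], run_tool=...)

fin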