The two big changes are:

- the ability to use YAML files to specify samples,
- the introduction of `run_for_all` (and `run_for_all_samples`) functions to simplify the usage of the `parallel` module.

Several of the other changes were made to support these two features. Additionally, some minor fixes and improvements were made.
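Combined, the two features allow a script along these lines (a hedged sketch, not code from the release itself: it assumes `run_for_all_samples` hands each parallel invocation one still-unprocessed sample as a `ReadSet`, and the file names and reference are invented for illustration):

```
ngless "1.1"
import "parallel" version "1.1"

# Load the sample list from a YAML file (new in this release)
samples = load_sample_list("samples.yaml")

# Each parallel invocation of the script picks one still-unprocessed
# sample; locking and retrying are handled by the parallel module
input = run_for_all_samples(samples)

mapped = map(input, reference="hg38")

# input.name() (also new in this release) recovers the sample name
write(mapped, ofile="outputs/" + input.name() + ".bam")
```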
Full ChangeLog:
- Added `load_sample_list` function to load samples in YAML format.
- Added `compress_level` argument to the `write` function to specify the compression level.
- Added `name()` method to `ReadSet` objects, so you can do:

  ```
  input = load_fastq_directory("my-sample")
  print(input.name())
  ```

  which will print `my-sample`.
- Added `println` function, which works like `print` but prints a newline after the output.
- `print()` now accepts ints and doubles as well as strings.
- Added `run_for_all` function to the `parallel` module, simplifying its API.
- When a job run with the `parallel` module fails, its log is now written to the corresponding `.failed` file.
- External modules can now use the `sequenceset` type to represent a FASTA file.
- The `load_fastq_directory` function now supports `.xz` compressed files.
- The `parallel` module now checks for stale locks before retrying failed tasks. The former model could lead to a situation where a particular sample failed deterministically and then blocked progress even when some locks were stale.
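For reference, the sample lists consumed by `load_sample_list` are YAML files. The snippet below is a hypothetical illustration of the general shape (sample names grouped with their FASTQ files, both single-end and paired-end); the authoritative schema is the one described in the NGLess documentation, and all names here are invented:

```yaml
basedir: fastq-files
samples:
  sample-1:
    - paired:
        - sample-1.pair.1.fq.gz
        - sample-1.pair.2.fq.gz
  sample-2:
    - single:
        - sample-2.fq.gz
```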
Bugfixes
- The `parallel` module should generate a `.failed` file for each failed job, but this was not happening in every case; it now does.
- Fixed parsing of GFF files to support negative values (reported by Josh Sekela on the mailing-list).