Quality value based models and methods for sequencing data
First isolated by Friedrich Miescher in 1869 and then identified by James Watson and Francis Crick in 1953, the double-stranded deoxyribonucleic acid (DNA) molecule of Homo sapiens took fifty years to be completely reconstructed and finally put at the disposal of researchers for in-depth study and analysis.
The first technologies for DNA sequencing appeared around the mid-1970s; among them, the most successful has been the chain-termination method, usually referred to as the Sanger method. It remained the de facto standard for sequencing until, at the beginning of the 2000s, Next Generation Sequencing (NGS) technologies started to be developed. These technologies can produce huge amounts of data at competitive costs in terms of dollars per base, but further advances are now appearing in the form of Single Molecule Real Time (SMRT) sequencers, such as those of Pacific Biosciences, which promise to produce fragments of lengths never available before. However, none of the above technologies can read an entire DNA molecule; they can only produce short fragments (called reads) of the sample, in a process referred to as sequencing. Although all these technologies have different characteristics, one recurrent trend in their evolution has been the constant growth of the fraction of errors injected into the final reads. While Sanger machines produce as few as 1 erroneous base in 1000, the recent PacBio sequencers have an average error rate of 15%; NGS machines place themselves roughly in the middle, with an expected error rate of around 1%.
With such heterogeneity of error profiles, and with more and more data produced every day, algorithms able to cope with different sequencing technologies are becoming fundamental; at the same time, models describing the sequencing process, including its error profile, are gaining importance. A key feature that can make these approaches truly effective is the ability of sequencers to produce quality scores, which measure the probability of observing a sequencing error.
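Quality scores are commonly reported on the Phred scale, where a score Q corresponds to an error probability of 10^(-Q/10). A minimal sketch of the conversion (the ASCII offset of 33 assumes the standard Sanger/Illumina 1.8+ FASTQ encoding; the function names are illustrative):

```python
def phred_to_error_prob(q):
    """Convert a Phred quality score Q into the probability
    that the corresponding base call is wrong: 10^(-Q/10)."""
    return 10 ** (-q / 10)

def fastq_char_to_error_prob(c, offset=33):
    """Decode a FASTQ quality character (Sanger encoding,
    ASCII offset 33) into an error probability."""
    return phred_to_error_prob(ord(c) - offset)

# Q = 30 corresponds to 1 erroneous base call in 1000.
```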
In this thesis we present a stochastic model for the sequencing process and show its application to the problems of clustering and filtering of reads. The novel idea is to use quality scores to build a probabilistic framework that models the entire process of sequencing. Although relatively straightforward, the development of such a model requires a proper definition of probability spaces and of the events on such spaces. To keep the model simple and tractable, several simplifying hypotheses need to be introduced; each of them, however, must be explicitly stated and extensively discussed.
The final result is a model of the sequencing process that can be used to give a probabilistic interpretation of the problems defined on sequencing data and to characterize the corresponding probabilistic answers (i.e., solutions).
To experimentally validate the aforementioned model, we apply it to two different problems: reads clustering and reads filtering. The first set of experiments introduces a set of novel alignment-free D2-type measures resulting from the extension of the well-known D2 statistics to incorporate quality values. More precisely, instead of adding a unit contribution to the k-mer count statistic (as for the D2 statistics), each k-mer contributes an additive term corresponding to its probability of being correct, as defined by our stochastic model. We show that these new measures are effective when applied to the clustering of reads, by employing the clusters they produce as input to the problems of metagenomic binning and de-novo assembly.
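As an illustration of the idea (a sketch, not the thesis's exact definition), a quality-weighted k-mer statistic can be built by letting each k-mer occurrence add the probability that all its bases are correct, taken here as the product of the per-base correctness probabilities; a D2-type similarity is then the inner product of two such weighted count vectors. All function names below are hypothetical:

```python
from collections import defaultdict

def weighted_kmer_counts(read, error_probs, k):
    """For every k-mer occurrence in the read, accumulate the probability
    that all of its k bases were called correctly (product of 1 - p_err)
    instead of the unit contribution used by plain D2 counts."""
    counts = defaultdict(float)
    for i in range(len(read) - k + 1):
        p_correct = 1.0
        for p in error_probs[i:i + k]:
            p_correct *= 1.0 - p
        counts[read[i:i + k]] += p_correct
    return counts

def d2_score(counts_a, counts_b):
    """D2-type similarity: inner product of the two k-mer count vectors."""
    return sum(v * counts_b[w] for w, v in counts_a.items() if w in counts_b)
```

With all error probabilities at zero this reduces exactly to the classic unit-contribution D2 count, which is a useful sanity check for the weighting.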
In the second set of experiments conducted to validate our stochastic model, we applied the same definition of a correct read to the problem of reads filtering. We first define rank filtering, a lossless filtering technique that sorts reads according to a given criterion; we then use the sorted list of reads as input to algorithms for reads mapping and de-novo assembly. The idea is that, in the reordered set, reads ranking higher should have better quality than those at lower ranks. To test this conjecture, we use such filtering as a pre-processing step for reads mapping and de-novo assembly; in both cases we observe improvements when our rank filtering approach is used.
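A rank filter in this spirit can be sketched as a plain sort of the read set under a quality-derived score; the scoring criterion used here (mean per-base correctness probability over a (sequence, error_probs) pair) is an illustrative assumption, not the thesis's exact definition:

```python
def rank_filter(reads, key=None):
    """Lossless rank filtering: return the reads reordered (none removed)
    so that reads scoring higher under `key` come first."""
    if key is None:
        # Illustrative default criterion: mean per-base correctness
        # probability, where each read is a (sequence, error_probs) pair.
        key = lambda read: sum(1.0 - p for p in read[1]) / len(read[1])
    return sorted(reads, key=key, reverse=True)
```

Because no read is discarded, a downstream mapper or assembler can consume the whole reordered list, or truncate it at any prefix length, without the filter itself having committed to a loss threshold.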
http://paduaresearch.cab.unipd.it/7983/1/SchimdPhDThesisA.pdf