- J. S. Vitter.
“Design and Analysis of Dynamic Huffman Codes,”
*Journal of the ACM*, **34**(4), October 1987, 825-845. A shorter version appears as “The Design and Analysis of Dynamic Huffman Coding,” *Proceedings of the 26th Annual IEEE Symposium on Foundations of Computer Science (FOCS '85)*, Portland, OR, October 1985, 293-302. We introduce and analyze a new one-pass algorithm for constructing dynamic Huffman codes and also analyze the one-pass algorithm due to Faller, Gallager, and Knuth. In each algorithm, both the sender and the receiver maintain equivalent dynamically varying Huffman trees, and the coding is done in real time. We show that the number of bits used by the new algorithm to encode a message of *t* letters is fewer than *t* bits more than that used by the conventional two-pass Huffman scheme, independent of the alphabet size. This is best possible in the worst case for any one-pass Huffman method. Tight upper and lower bounds are derived. Empirical tests show that the encodings produced by the new algorithm are shorter than those of the other one-pass algorithm and, except for long messages, are shorter than those of the two-pass method. The new algorithm is well suited for online encoding/decoding in data networks and for file compression.
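The conventional two-pass baseline that the one-pass algorithms are measured against can be sketched as follows: a first pass gathers symbol frequencies and builds a static Huffman tree, and a second pass would emit the codewords. This is an illustrative Python sketch with our own function names, not Vitter's one-pass algorithm:

```python
import heapq
from collections import Counter

def huffman_code_lengths(message):
    """Two-pass baseline: first pass counts frequencies, then a static
    Huffman tree is built by repeatedly merging the two lightest nodes.
    Returns a map from symbol to codeword length."""
    freq = Counter(message)
    if len(freq) == 1:                      # degenerate one-symbol alphabet
        return {next(iter(freq)): 1}
    # heap items: (weight, tiebreak, {symbol: depth}); the unique tiebreak
    # integer prevents Python from ever comparing the dicts
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def encoded_length(message):
    """Total bits the second (encoding) pass would output."""
    lengths = huffman_code_lengths(message)
    freq = Counter(message)
    return sum(freq[s] * lengths[s] for s in freq)
```

For `"aaaabbc"` the tree gives `a` a 1-bit code and `b`, `c` 2-bit codes, so the message costs 10 bits; a one-pass scheme must approach this without the first pass.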

- J. S. Vitter.
“Algorithm 673: Dynamic Huffman Coding,”
*ACM Transactions on Mathematical Software*, **15**(2), June 1989, 158-167. Also appears in *Collected Algorithms of ACM*. We present a Pascal implementation of the one-pass algorithm for constructing dynamic Huffman codes that is described and analyzed in a companion paper [Vitter, 1987]. The program runs in real time; that is, the processing time for each letter of the message is proportional to the length of its codeword. The number of bits used to encode a message of *t* letters is fewer than *t* bits more than that used by the well-known two-pass algorithm. This is best possible for any one-pass Huffman scheme. In practice it uses fewer bits than all other Huffman schemes. The algorithm has applications in file compression and network transmission.

- P. G. Howard and J. S. Vitter.
“Analysis of Arithmetic Coding for Data
Compression,”
invited paper in special issue on data compression for
images and texts in
*Information Processing and Management*, **28**(6), 1992, 749-763. A shorter version appears as an invited paper in *Proceedings of the 1991 IEEE Data Compression Conference (DCC '91)*, Snowbird, UT, April 1991, 3-12.

- P. G. Howard and J. S. Vitter.
“New Methods for Lossless Image Compression Using
Arithmetic Coding,”
invited paper in special issue on data compression for
images and texts in
*Information Processing and Management*, **28**(6), 1992, 765-779. A shorter version appears in *Proceedings of the 1991 IEEE Data Compression Conference (DCC '91)*, Snowbird, UT, April 1991, 257-266.

- P. G. Howard and J. S. Vitter.
“Practical Implementations of
Arithmetic Coding,”
invited paper in
*Images and Text Compression*, Kluwer Academic Publishers, 1992. A shorter version appears as an invited paper in *Proceedings of the 3rd International Conference on Advances in Communication and Control Systems (COMCON '91)*, Victoria, Canada, October 1991.

- P. G. Howard and J. S. Vitter.
“Error Modeling for Hierarchical
Lossless Image Compression,”
*Proceedings of the 1992 IEEE Data Compression Conference (DCC '92)*, Snowbird, UT, March 1992, 269-278. We present a new method for error modeling applicable to the MLP algorithm for hierarchical lossless image compression. This method, based on a concept called the variability index, provides accurate models for pixel prediction errors without requiring explicit transmission of the models. We also use the variability index to show that prediction errors do not always follow the Laplace distribution, as is commonly assumed; replacing the Laplace distribution with a more general symmetric exponential distribution further improves compression. We describe a new compression measurement called compression gain, and we give experimental results showing that the MLP method using the variability index technique for error modeling gives significantly more compression gain than other methods in the literature.

- J. S. Vitter and P. Krishnan.
“Optimal Prefetching via Data
Compression,”
*Journal of the ACM*, **43**(5), September 1996, 771-793. A shorter version appears in *Proceedings of the 32nd Annual IEEE Symposium on Foundations of Computer Science (FOCS '91)*, San Juan, Puerto Rico, October 1991, 121-130. Caching and prefetching are important mechanisms for speeding up access time to data on secondary storage. Recent work in competitive online algorithms has uncovered several promising new algorithms for caching. In this paper, we apply a form of the competitive philosophy for the first time to the problem of prefetching to develop an optimal universal prefetcher in terms of fault rate, with particular applications to large-scale databases and hypertext systems. Our algorithms for prefetching are novel in that they are based on data compression techniques that are both theoretically optimal and good in practice. Intuitively, in order to compress data effectively, one has to be able to predict future data well, and thus good data compressors should also predict well for purposes of prefetching. We show for powerful models such as Markov sources and *m*th-order Markov sources that the page fault rates incurred by our prefetching algorithms are optimal in the limit for almost all sequences of page accesses.
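The intuition that a good predictor for compression doubles as a prefetcher can be illustrated with a toy first-order Markov predictor, which prefetches the page that has most often followed the current one. This is a sketch with our own names (`MarkovPrefetcher`, `access`, `predict`), far simpler than the compression-based predictors of the paper:

```python
from collections import defaultdict, Counter

class MarkovPrefetcher:
    """Toy first-order Markov predictor: count which page follows which,
    and prefetch the most frequent successor of the current page."""

    def __init__(self):
        self.follow = defaultdict(Counter)  # page -> Counter of successors
        self.prev = None

    def access(self, page):
        """Record one page access, updating the transition counts."""
        if self.prev is not None:
            self.follow[self.prev][page] += 1
        self.prev = page

    def predict(self):
        """Best guess for the next page, or None if nothing is known."""
        counts = self.follow.get(self.prev)
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```

After observing the alternating access string `ababab`, the predictor standing at page `b` guesses `a` next, which is exactly the behavior a context-model compressor would exploit.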

- P. Krishnan and J. S. Vitter.
“Optimal Prediction for Prefetching in
the Worst Case,”
*SIAM Journal on Computing*, **27**(6), December 1998, 1617-1636. A shorter version appears in *Proceedings of the 5th Annual SIAM/ACM Symposium on Discrete Algorithms (SODA '94)*, Alexandria, VA, January 1994, 392-401. Response-time delays caused by I/O are a major problem in many systems and database applications. Prefetching and cache-replacement methods are attracting renewed attention because of their success in avoiding costly I/Os. Prefetching can be looked upon as a type of online sequential prediction, where the predictions must be accurate as well as made in a computationally efficient way. Unlike other online problems, prefetching cannot admit a competitive analysis, since the optimal offline prefetcher incurs no cost when it knows the future page requests. Previous analytical work on prefetching by Vitter and Krishnan modeled the user as a probabilistic Markov source.

In this paper, we look at the much stronger form of worst-case analysis and derive a randomized algorithm that we prove analytically converges almost surely to the optimal fault rate in the worst case for every sequence of page requests with respect to the important class of finite-state prefetchers. In particular, we make no assumption about how the sequence of page requests is generated. This analysis model can be looked upon as a generalization of the competitive framework, in that it compares an online algorithm in a worst-case manner over all sequences against a powerful yet non-clairvoyant opponent. We simultaneously achieve the computational goal of implementing our prefetcher in optimal constant expected time per prefetched page, using the optimal dynamic discrete random variate generator of Matias, Vitter, and Ni.

- P. G. Howard and J. S. Vitter.
“Parallel Lossless Image Compression
Using Huffman and Arithmetic Coding,”
*Information Processing Letters*, **59**, 1996, 65-73. A shorter version appears in *Proceedings of the 1992 IEEE Data Compression Conference (DCC '92)*, Snowbird, UT, March 1992, 299-308. We show that high-resolution images can be encoded and decoded efficiently in parallel. We present an algorithm based on the hierarchical MLP method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels are of different lengths; parallelization of the prediction and error modeling components is straightforward.

- J.-H. Lin and J. S. Vitter.
“Nearly Optimal Vector Quantization via
Linear Programming,”
*Proceedings of the 1992 IEEE Data Compression Conference (DCC '92)*, Snowbird, UT, March 1992, 22-31. We present new vector quantization algorithms based on the theory developed by the authors. The new approach is to formulate a vector quantization problem as a 0-1 integer linear program. We first solve its relaxed linear program by linear programming techniques. Then we transform the linear program solution into a provably good solution for the vector quantization problem. These methods lead to the first known polynomial-time full-search vector quantization codebook design algorithm and tree pruning algorithm with provable worst-case performance guarantees. We also introduce the notion of pseudo-random pruned tree-structured vector quantizers. Initial experimental results on image compression are very encouraging.

- P. G. Howard and J. S. Vitter.
“Design and Analysis of Fast Text
Compression Based on Quasi-Arithmetic Coding,”
*Information Processing and Management*, **30**(6), 1994, 777-790. A shorter version appears in *Proceedings of the 1993 IEEE Data Compression Conference (DCC '93)*, Snowbird, UT, April 1993. We give a detailed algorithm for fast text compression. Our algorithm, related to the PPM method, simplifies the modeling phase by eliminating the escape mechanism and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. We provide details of the use of quasi-arithmetic code tables, and analyze their compression performance. Our Fast PPM method is shown experimentally to be almost twice as fast as the PPMC method, while giving comparable compression.

- P. G. Howard and J. S. Vitter.
“Fast and Efficient Lossless Image
Compression,”
*Proceedings of the 1993 IEEE Data Compression Conference (DCC '93)*, Snowbird, UT, April 1993. We present a new method for lossless image compression that gives compression comparable to JPEG lossless mode with about five times the speed. Our method, called FELICS, is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding we use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter we present and analyze a provably good method for estimating the single coding parameter. (Note: This method is the foundation for the subsequently developed state-of-the-art methods now used for lossless image compression.)
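The Golomb/Rice codes mentioned above are simple enough to sketch: a Rice code with parameter k writes the quotient n >> k in unary and the remainder in k binary bits. This is an illustrative Python sketch; FELICS's adaptive estimation of the parameter is not shown:

```python
def rice_encode(n, k):
    """Rice-encode a nonnegative integer n with parameter k:
    unary quotient (q ones and a terminating zero), then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0{}b".format(k))
    return bits

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")                  # length of the unary run
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

With k = 2, the value 9 encodes as `11001`: two unary ones for the quotient 2, a zero terminator, then `01` for the remainder 1. Small values of n get short codewords, which is why the choice of k must track the magnitude of the prediction errors.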

- P. G. Howard and J. S. Vitter.
“Fast Progressive Lossless Image
Compression,”
*Proceedings of the 1994 IS&T/SPIE International Symposium on Electronic Imaging Science and Technology*, San Jose, CA, February 1994. We present a method for progressive lossless compression of still grayscale images that combines the speed of our earlier FELICS method with the progressivity of our earlier MLP method. We use MLP's pyramid-based pixel sequence, and image and error modeling and coding based on that of FELICS. In addition, we introduce a new prefix code with some advantages over the previously used Golomb and Rice codes. Our new progressive method gives compression ratios and speeds similar to those of non-progressive FELICS and those of JPEG lossless mode, also a non-progressive method.

The image model in Progressive FELICS is based on a simple function of four nearby pixels. We select two of the four nearest known pixels, using the two with the middle (non-extreme) values. Then we code the pixel's intensity relative to the selected pixels, using single bits, adjusted binary codes, and simple prefix codes like Golomb codes, Rice codes, or the new family of prefix codes introduced here. We estimate the coding parameter adaptively for each context, the context being the absolute value of the difference of the predicting pixels; we adjust the adaptation statistics at the beginning of each level in the progressive pixel sequence.

- P. G. Howard and J. S. Vitter.
“Arithmetic Coding for Data
Compression,”
*Proceedings of the IEEE*, **82**(6), June 1994, 857-865. A good overview of arithmetic coding and implementation considerations. Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.
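The interval-narrowing mechanism at the heart of arithmetic coding can be sketched with exact rational arithmetic. Using `Fraction` sidesteps precisely the precision issues the paper's table-lookup implementation addresses, so this is only the textbook idea, with hypothetical function names:

```python
from fractions import Fraction

def _cumulative(probs):
    """Cumulative lower bounds of each symbol's probability interval."""
    cum, c = {}, Fraction(0)
    for s, p in probs.items():
        cum[s] = c
        c += p
    return cum

def arithmetic_encode(message, probs):
    """Narrow [low, low + width) by each symbol's subinterval.
    Any number in the final interval identifies the message."""
    cum = _cumulative(probs)
    low, width = Fraction(0), Fraction(1)
    for s in message:
        low += width * cum[s]
        width *= probs[s]
    return low, width

def arithmetic_decode(x, n, probs):
    """Recover n symbols from a number x inside the encoded interval."""
    cum = _cumulative(probs)
    out = []
    for _ in range(n):
        for s in probs:
            if cum[s] <= x < cum[s] + probs[s]:
                out.append(s)
                x = (x - cum[s]) / probs[s]   # rescale to [0, 1)
                break
    return "".join(out)
```

The final interval width equals the product of the symbol probabilities, so roughly -log2(width) bits suffice to name a point inside it; that is the source of arithmetic coding's near-entropy compression.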

- D. T. Hoang, P. M. Long, and J. S. Vitter.
“Explicit Bit Minimization
for Motion-Compensated Video Coding,”
*Proceedings of the 1994 IEEE Data Compression Conference (DCC '94)*, Snowbird, UT, March 1994, 175-184. We compare methods for choosing motion vectors for motion-compensated video compression. Our primary focus is on videophone and videoconferencing applications, where very low bit rates are necessary, where the motion is usually limited, and where the frames must be coded in the order they are generated. We provide evidence, using established benchmark videos of this type, that choosing motion vectors to minimize codelength subject to (implicit) constraints on quality yields substantially better rate-distortion tradeoffs than minimizing notions of prediction error. We illustrate this point using an algorithm that operates within the standard. We show that using quadtrees to code the motion vectors in conjunction with explicit codelength minimization yields further improvement. We describe a dynamic-programming algorithm for choosing a quadtree to minimize the codelength.

- D. T. Hoang, P. M. Long, and J. S. Vitter.
“Dictionary Selection using Partial Matching,”
*Information Sciences*, **119**(1-2), 1999, 57-72. A shorter version appeared as “Multiple-Dictionary Compression using Partial Matching,” *Proceedings of the 1995 IEEE Data Compression Conference (DCC '95)*, Snowbird, UT, March 1995, 272-281. Motivated by the desire to find text compressors that compress better than existing dictionary methods, but run faster than PPM implementations, we describe methods for text compression using multiple dictionaries, one for each context of preceding characters, where the contexts have varying lengths. The context to be used is determined using an escape mechanism similar to that of PPM methods. We describe modifications of three popular dictionary coders along these lines and experiments evaluating their efficacy using the text files in the Calgary corpus. Our results suggest that modifying LZ77 along these lines yields an improvement in compression of about 4%, that modifying LZFG yields a compression improvement of about 8%, and that modifying LZW in this manner yields an average improvement on the order of 12%.
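As a baseline for the dictionary coders being modified, plain LZW can be sketched as follows. This sketch keeps one global dictionary; the paper's variant maintains a separate dictionary per preceding-character context, chosen via a PPM-style escape mechanism, which is not shown here:

```python
def lzw_encode(text):
    """Plain LZW: grow the longest known phrase, emit its code when the
    extension is unknown, and add the extension to the dictionary."""
    dictionary = {chr(i): i for i in range(256)}
    phrase, out = "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch
        else:
            out.append(dictionary[phrase])
            dictionary[phrase + ch] = len(dictionary)
            phrase = ch
    if phrase:
        out.append(dictionary[phrase])
    return out

def lzw_decode(codes):
    """Rebuild the dictionary on the fly, mirroring the encoder."""
    dictionary = {i: chr(i) for i in range(256)}
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = dictionary.get(code, prev + prev[0])  # the KwKwK special case
        out.append(entry)
        dictionary[len(dictionary)] = prev + entry[0]
        prev = entry
    return "".join(out)
```

Because both sides grow the dictionary identically, no dictionary is ever transmitted; the context-conditioned variant in the paper applies the same idea once per context.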

- D. T. Hoang, P. M. Long, and J. S. Vitter.
“Rate-Distortion Optimizations for Motion Estimation
in Low-Bitrate Video Coding,”
*IEEE Transactions on Circuits and Systems for Video Technology*, **8**(4), August 1998, 488-500. A shorter version appears in *Proceedings of the Digital Video Compression Conference, IS&T/SPIE 1996 Symposium on Electronic Imaging: Science & Technology*, **2668**, San Jose, CA, January-February 1996, 18-27. We present and compare methods for choosing motion vectors for motion-compensated video coding. Our primary focus is on videophone and videoconferencing applications, where very low bit rates are necessary, where motion is usually limited, and where frames must be coded in the order they are generated. We provide evidence, using established benchmark videos typical of these applications, that choosing motion vectors explicitly to minimize rate, subject to implicit constraints on distortion, yields better rate-distortion tradeoffs than minimizing notions of prediction error. Minimizing a linear combination of rate and distortion results in further rate-distortion improvements. Using a heuristic function of the prediction error and the motion vector codelength results in compression performance comparable to the more computationally intensive coders while running much faster. We incorporate these ideas into coders that operate within the standard.

- D. T. Hoang, P. M. Long, and J. S. Vitter.
“Efficient Cost Measures for Motion Compensation at Low Bit
Rates,”
*Proceedings of the 1996 IEEE Data Compression Conference (DCC '96)*, Snowbird, UT, April 1996, 102-111. We present and compare methods for choosing motion vectors for block-based motion-compensated video coding. The primary focus is on videophone and videoconferencing applications, where low bit rates are necessary, where motion is usually limited, and where the amount of computation is also limited. In a typical block-based motion-compensated video coding system, motion vectors are transmitted along with a lossy encoding of the residuals. As the bit rate decreases, the proportion required to transmit the motion vectors increases. We provide experimental evidence that choosing motion vectors explicitly to minimize rate (including motion vector coding), subject to implicit constraints on distortion, yields better rate-distortion tradeoffs than minimizing some measure of prediction error. Minimizing a combination of rate and distortion yields further improvements. Although these explicit-minimization schemes are computationally intensive, they provide invaluable insight which we use to develop practical algorithms. We show that minimizing a simple heuristic function of the prediction error and the motion vector code-length results in rate-distortion performance comparable to explicit-minimization schemes while being computationally feasible. Experimental results are provided for coders that operate within the H.261 standard.

- D. T. Hoang, E. Linzer, and J. S. Vitter,
“Lexicographic Bit Allocation for MPEG Video,”
Special issue on high-fidelity media processing
in
*Journal of Visual Communication and Image Representation*, **8**(4), December 1997, 384-404. An extended abstract appears in *Proceedings of the 1997 IEEE International Conference on Image Processing (ICIP '97)*, Santa Barbara, CA, October 1997. A shorter version appears as “A Lexicographic Framework for MPEG Rate Control,” *Proceedings of the 1997 IEEE Data Compression Conference (DCC '97)*, Snowbird, UT, March 1997, 101-110. We consider the problem of allocating bits among pictures in an MPEG video coder to equalize the visual quality of the coded pictures while meeting buffer and channel constraints imposed by the MPEG Video Buffering Verifier. We address this problem within a framework that consists of three components: (1) a bit-production model for the input pictures; (2) a set of bit-rate constraints imposed by the Video Buffering Verifier; and (3) a novel lexicographic criterion for optimality. Under this framework, we derive simple necessary and sufficient conditions for optimality that lead to efficient algorithms.

- P. M. Long, A. I. Natsev, and J. S. Vitter,
“Text Compression via Alphabet Re-representation,”
*Proceedings of the Data Compression Conference (DCC '97)*, Snowbird, UT, March 1997. We consider re-representing the alphabet so that a representation of a character reflects its properties as a predictor of future text. This enables us to use an estimator from a restricted class to map contexts to predictions of upcoming characters. We describe an algorithm that uses this idea in conjunction with neural networks. The performance of this implementation is compared to other compression methods, such as UNIX compress, gzip, PPMC, and an alternative neural network approach.

- D. T. Hoang and J. S. Vitter.
*Efficient Algorithms for MPEG Video Compression*, John Wiley & Sons, New York, NY, 2002. The interplay between compression and buffer-control algorithms in order to maximize network performance and achieve high visual clarity has shown great results, and *Efficient Algorithms for MPEG Video Compression* is the first book dedicated to the subject.

Video belongs to a class of information called *continuous media*. Continuous media are characterized by the essentially continuous manner in which the information is presented. This is in contrast to *discrete media*, in which there is no essential temporal component. Text, images, and graphics are examples of discrete media, while movies, sound, and computer animation are examples of continuous media. Even though a slide show is a time-based presentation of images, it is not a continuous medium, since each image is viewed as an individual item. On the other hand, a video clip, while also consisting of a sequence of images, is a continuous medium, since each image is perceived in the context of past and future images.

With continuous media, therefore, the temporal dimension becomes important. For example, a video sequence compressed with a constant image quality for every frame is often more desirable than one in which the image quality varies noticeably over time. However, because the compressibility of individual frames varies over time, maintaining a constant image quality results in a variation in coding rate over time. The process of controlling the coding rate to meet the requirements of a transmission channel or storage device, while maintaining a desired level of quality, is called *bit rate control*. In this monograph, we focus on the rate control of compressed video. Specifically, we present a new framework for allocating bits to the compression of pictures in a video sequence.

Existing optimal rate control techniques typically regulate the coding rate to minimize a sum-distortion measure. While these techniques can leverage the wealth of tools from least-mean-square optimization theory, they do not guarantee constant-quality video, an objective often mentioned in the literature. In this book, we develop a framework that casts rate control as a resource allocation problem with continuous variables, nonlinear constraints, and a novel lexicographic optimality criterion that is motivated by uniform video quality. With the lexicographic criterion, we propose a new concept of coding efficiency to better reflect the constancy in quality that is generally desired from a video coder.

Rigorous analysis within this framework reveals a set of necessary and sufficient conditions for optimality for coding at both constant and variable bit rates. With these conditions, we are able to construct polynomial-time algorithms for optimal bit rate control. Experimental implementations of these algorithms confirm the theoretical analysis and produce encodings that are more uniform in quality than that achieved with existing rate control methods. As evidence of the generality and flexibility of the framework, we show how to extend the framework to allocate bits among multiple variable bit rate bitstreams that are to be transmitted over a common constant bit rate channel and to encompass the case of discrete variables.

- R. Grossi and J. S. Vitter.
“Compressed Suffix Arrays and Suffix Trees, with Applications to
Text Indexing and String Matching,”
*SIAM Journal on Computing*, **35**(2), 2005, 378-407. An extended abstract appears in *Proceedings of the 32nd Annual ACM Symposium on Theory of Computing (STOC '00)*, Portland, OR, May 2000, 397-406.

The proliferation of online text, such as on the World Wide Web and in databases, motivates the need for space-efficient index methods that support fast search. Consider a text T of n binary symbols to index. Given any query pattern P of m binary symbols, the goal is to search for P in T quickly, with T being fully scanned only once, namely, when the index is created. All indexing schemes published in the last thirty years support searching in Θ(m) worst-case time and require Θ(n) memory words (or Θ(n log n) bits), which is significantly larger than the text itself. In this paper we provide a breakthrough both in searching time and index space under the same model of computation as the one adopted in previous work. Based upon new compressed representations of suffix arrays and suffix trees, we construct an index structure that occupies only O(n) bits and compares favorably with inverted lists in space. We can search any binary pattern P, stored in O(m / log n) words, in only o(m) time.

Specifically, we achieve optimal search time for sufficiently large m, and we can list all the pattern occurrences in optimal additional time for the main ranges of the parameters.
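An uncompressed suffix array, the classical structure these indexes compress, can be sketched naively as follows. The construction is deliberately simplistic and fine only for illustration; the paper's contribution is precisely avoiding this array's large footprint:

```python
def suffix_array(text):
    """Sorted starting positions of all suffixes (naive construction;
    real builders avoid materializing the suffixes)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    """Binary-search the suffix array for suffixes starting with pattern,
    returning the sorted list of occurrence positions."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                       # leftmost suffix >= pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                       # just past the last match
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[i] for i in range(start, lo))
```

Because the suffixes are sorted, every occurrence of a pattern forms one contiguous run in the array, which is what makes binary search (and the compressed variants' structured search) possible.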

- R. Grossi, A. Gupta, and J. S. Vitter.
“High-Order Entropy-Compressed Text Indexes,”
*Proceedings of the 14th Annual SIAM/ACM Symposium on Discrete Algorithms (SODA '03)*, Baltimore, MD, January 2003, 841-850. For a more detailed version, see “An Algorithmic Framework for Compression and Text Indexing.” We present a novel implementation of compressed suffix arrays exhibiting new tradeoffs between search time and space occupancy for a given text (or sequence) of n symbols over an alphabet Σ, where each symbol is encoded by log |Σ| bits. We show that compressed suffix arrays can be stored in space essentially equal to n times the hth-order empirical entropy of the text, plus lower-order terms, while retaining full text indexing functionalities, such as searching any pattern sequence of length m in time nearly linear in m. This means that our index is nearly optimal in space apart from lower-order terms, achieving asymptotically the empirical entropy of the text (with a multiplicative constant 1). If the text is highly compressible and the alphabet size is small, we obtain a text index with fast search whose size is sublinear in that of the original text. We also report further results and tradeoffs on high-order entropy-compressed text indexes.

- L. Foschini, R. Grossi, A. Gupta, and J. S. Vitter.
“When Indexing Equals Compression: Experiments on Suffix Arrays
and Trees,”
*ACM Transactions on Algorithms*, **2**(4), 2006, 611-639. An extended abstract appears in R. Grossi, A. Gupta, and J. S. Vitter, “When Indexing Equals Compression: Experiments with Compressing Suffix Arrays and Applications,” *Proceedings of the 15th Annual SIAM/ACM Symposium on Discrete Algorithms (SODA '04)*, New Orleans, LA, January 2004, 636-645. We report on a new and improved version of high-order entropy-compressed suffix arrays, which has theoretical performance guarantees comparable to previous work, yet represents an improvement in practice. Our experiments indicate that the resulting text index offers state-of-the-art compression. In particular, we require roughly 20% of the original text size, without requiring a separate instance of the text, and support fast and powerful searches. To our knowledge, this is the best known method in terms of space for fast searching. We can additionally use a simple notion to encode and decode block-sorting transforms (such as the Burrows-Wheeler transform), achieving a slightly better compression ratio than `bzip2`. We also provide a compressed representation of suffix trees (and their associated text) in a total space that is comparable to that of the text alone compressed with `gzip`.

- L. Foschini, R. Grossi, A. Gupta, and J. S. Vitter.
“Fast Compression with a Static Model in High-Order Entropy,”
*Proceedings of the 2004 IEEE Data Compression Conference (DCC '04)*, Snowbird, UT, March 2004, 23-25. We report on a simple encoding format called `wzip` for decompressing block-sorting transforms, such as the Burrows-Wheeler Transform (BWT). Our compressor uses the simple notions of gamma encoding and RLE organized with a wavelet tree to achieve a slightly better compression ratio than `bzip2` in less time. In fact, our compression/decompression time depends on the empirical hth-order entropy of the input. Another key contribution of our compressor is its simplicity. Our compressor can also operate as a full-text index with a small amount of additional data, while still preserving backward compatibility with just the compressor.

- A. Gupta, W.-K. Hon, R. Shah, and J. S. Vitter.
“Compressed Data Structures: Dictionaries and the
Data-Aware Measures,”
*Theoretical Computer Science*, **387**(3), November 2007, 313-331. We propose measures for compressed data structures in which space usage is measured in a data-aware manner. In particular, we consider the fundamental dictionary problem on set data, where the task is to construct a data structure to represent a set S of n items out of a universe U and support various queries on S. We use a well-known data-aware measure for set data called gap to bound the space of our data structures. We describe a novel dictionary structure whose space is bounded in terms of the gap measure. Under the RAM model, our dictionary supports membership, rank, select, and predecessor queries in nearly optimal time, matching the time bound of Andersson and Thorup's predecessor structure while simultaneously improving upon their space usage. Our dictionary structure uses gap bits in the leading term (i.e., with a multiplicative constant of 1) and answers queries in near-optimal time. Seen from the worst-case perspective, we present the first dictionary structure of this kind that supports these queries in near-optimal time under the RAM model. We also build a dictionary that requires the same space and supports membership, select, and partial rank queries even more quickly. To the best of our knowledge, this is the first result of its kind that achieves data-aware space usage and retains near-optimal query time.
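The gap measure can be made concrete with a small sketch. Under one common formulation (ours here, purely for illustration), the space charged to a sorted set is the sum over consecutive gaps of the bits needed to write each gap in binary, so clustered sets cost far less than a worst-case bound over all n-element subsets of the universe would suggest:

```python
import math

def gap_measure(items):
    """Illustrative data-aware measure: sum of ceil(log2(gap + 1)) over
    the gaps of the sorted set (first gap taken from 0), charging at
    least one bit per item."""
    s = sorted(items)
    gaps = [s[0]] + [b - a for a, b in zip(s, s[1:])]
    return sum(max(1, math.ceil(math.log2(g + 1))) for g in gaps)
```

The clustered set {1, 2, 3, 4} measures 4 bits, while the equally sized but spread-out {100, 200, 300, 400} measures 28: a data-aware structure can exploit exactly this difference, which a purely combinatorial encoding cannot.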

- A. Gupta, W.-K. Hon, R. Shah, and J. S. Vitter.
“Compressed Dictionaries: Space Measures, Data Sets, and Experiments,”
in
*Proceedings of the 5th International Workshop on Experimental Algorithms (WEA '06)*, Menorca, Spain, May 2006, 158-169. We present an experimental study of the space-time tradeoffs for the dictionary problem. Our primary goal is to reduce the space requirement for storing a dictionary data structure. Many compression schemes have been developed for dictionaries, falling generally into the categories of combinatorial encodings and data-aware methods, that still support queries efficiently. We show that for many real-world datasets, data-aware methods lead to a worthwhile compression over combinatorial methods. Additionally, we design a new data-aware building-block structure called BSGAP that presents improvements over other data-aware methods.

- R. Grossi, A. Gupta, and J. S. Vitter.
“An Algorithmic Framework for Compression and Text Indexing,”
manuscript.
This work is an extension of two separate pieces of
research in conference form:
“High-Order Entropy-Compressed Text Indexes,”
*Proceedings of the 14th Annual SIAM/ACM Symposium on Discrete Algorithms (SODA '03)*, Baltimore, MD, January 2003, 841-850; and “Nearly Tight Bounds on the Encoding Length of the Burrows-Wheeler Transform,” *Proceedings of the 5th Workshop on Analytical Algorithmics and Combinatorics (ANALCO '08)*, San Francisco, CA, January 2008, 191-202. We present a unified algorithmic framework to obtain nearly optimal space bounds for text compression and compressed text indexing, apart from lower-order terms. For a text T of n symbols drawn from an alphabet Σ, our bounds are stated in terms of the hth-order empirical entropy of the text, H_h. In particular, we provide a tight analysis of the Burrows-Wheeler transform (BWT), establishing a bound of n H_h + M(T, Σ, h) bits, where M(T, Σ, h) denotes the asymptotic number of bits required to store the empirical statistical model for contexts of order up to h appearing in T. Using the same framework, we also obtain an implementation of the compressed suffix array (CSA) that achieves a similar space bound while still retaining competitive full-text indexing functionality.

The novelty of the proposed framework lies in its use of the finite set model instead of the empirical probability model (as in previous work), giving us new insight into the design and analysis of our algorithms. For example, we show that our analysis gives improved bounds whose model-dependent terms do not depend on the text length n, while comparing favorably with bounds stated in terms of the modified hth-order empirical entropy of T. Moreover, we show a strong relationship between a compressed full-text index and the succinct dictionary problem. We also examine the importance of lower-order terms, as these can dwarf any savings achieved by high-order entropy. We report further results and tradeoffs on high-order entropy-compressed text indexes in the paper.

- M. Y. Eltabakh, W.-K. Hon, R. Shah, W. Aref, and J. S. Vitter.
“The SBC-tree: An Index for Run-Length Compressed Sequences,”
*Proceedings of the 11th International Conference on Extending Database Technology (EDBT '08)*, Nantes, France, March 2008, 523-534. Run-Length Encoding (RLE) is a data compression technique that is used in various applications, e.g., biological sequence databases, multimedia, and facsimile transmission. One of the main challenges is how to perform operations, e.g., indexing, searching, and retrieval, on the compressed data without decompressing it. In this paper, we present the String B-tree for Compressed sequences, termed the SBC-tree, for indexing and searching RLE-compressed sequences of arbitrary length. The SBC-tree is a two-level index structure based on the well-known String B-tree and a 3-sided range query structure. The SBC-tree supports substring as well as prefix matching, and range search operations over RLE-compressed sequences. The SBC-tree has an optimal external-memory space complexity of pages, where is the total length of the compressed sequences, and is the disk page size. The insertion and deletion of all suffixes of a compressed sequence of length m takes I/O operations. Substring matching, prefix matching, and range search execute in an optimal I/O operations, where is the length of the compressed query pattern and is the query output size. We also present two variants of the SBC-tree: the SBC-tree that is based on an R-tree instead of the 3-sided structure, and the one-level SBC-tree that does not use a two-dimensional index. These variants do not have provable worst-case theoretical bounds for search operations, but perform well in practice. The SBC-tree index is realized inside PostgreSQL in the context of a biological protein database application.
Performance results illustrate that using the SBC-tree to index RLE-compressed sequences achieves up to an order of magnitude reduction in storage, up to 30% reduction in I/Os for the insertion operations, and retains the optimal search performance achieved by the String B-tree over the uncompressed sequences.
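
For readers unfamiliar with RLE, a minimal sketch of the encoding that the SBC-tree indexes over, representing a string as (symbol, run length) pairs; the paper's on-disk layout is of course more involved:

```python
def rle_encode(s):
    """Encode a string as a list of (symbol, run_length) pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1           # extend the current run
        else:
            runs.append([ch, 1])       # start a new run
    return [(c, r) for c, r in runs]

def rle_decode(runs):
    """Expand (symbol, run_length) pairs back into the string."""
    return "".join(c * r for c, r in runs)
```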

- A. Gupta, W.-K. Hon, R. Shah, and J. S. Vitter.
“Dynamic Rank/Select Dictionaries with Applications to XML Indexing,”
manuscript.
We consider a central problem in text indexing: Given a text over an alphabet , construct a compressed data structure answering the queries , , and for a symbol . Many data structures consider these queries for static text . We consider the dynamic version of the problem, where we are allowed to insert and delete symbols at arbitrary positions of . This problem is a key challenge in compressed text indexing and has direct application to dynamic XML indexing structures that answer subpath queries [XBW].

We build on the results of [RRR, GMR] and give the best known query bounds for the dynamic version of this problem, supporting arbitrary insertions and deletions of symbols in . Specifically, with an amortized update time of , we suggest how to support , , and queries in time, for any . The best previous query times for this problem were , given by [Makinen Navarro]. Our bounds are competitive with state-of-the-art static structures [GMR]. Some applicable lower bounds for the partial sums problem [PD] show that our update/query tradeoff is also nearly optimal. In addition, our space bound is competitive with the corresponding static structures. For the special case of bitvectors (i.e., ), we also show the best tradeoffs for query/update time, improving upon the results of [Makinen Navarro, Hon, RRR].

Our focus on fast query/slower update is well-suited for a query-intensive XML indexing environment. Using the XBW transform [XBW], we also present a dynamic data structure that succinctly maintains an ordered labeled tree and supports a powerful set of queries on .
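
A naive baseline clarifies the interface being dynamized: insert, delete, access, rank, and select over a changing sequence. This O(n)-per-query Python sketch shows only the semantics; the structures in the paper answer the same queries in compressed space with polylogarithmic time:

```python
class DynamicRankSelect:
    """Naive dynamic sequence with rank/select; for interface
    illustration only (every operation is linear time)."""

    def __init__(self, text=""):
        self.t = list(text)

    def insert(self, i, c):
        self.t.insert(i, c)

    def delete(self, i):
        del self.t[i]

    def access(self, i):
        return self.t[i]

    def rank(self, c, i):
        """Number of occurrences of c in t[0:i]."""
        return self.t[:i].count(c)

    def select(self, c, j):
        """Position of the jth occurrence of c (1-based), or -1."""
        seen = 0
        for i, ch in enumerate(self.t):
            if ch == c:
                seen += 1
                if seen == j:
                    return i
        return -1
```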

- A. Gupta, W.-K. Hon, R. Shah, and J. S. Vitter.
“A Framework for Dynamizing Succinct Data Structures,”
*Proceedings of the 34th Annual International Colloquium on Automata, Languages, and Programming (ICALP '07)*, Wrocław, Poland, July 2007, published in Lecture Notes in Computer Science, **4596**, Springer-Verlag, Berlin, Germany, 521-532. We present a framework to dynamize succinct data structures, to encourage their use over non-succinct versions in a wide variety of important application areas. Our framework can dynamize most state-of-the-art succinct data structures for dictionaries, ordinal trees, labeled trees, and text collections. Of particular note is its direct application to XML indexing structures that answer queries. Our framework focuses on achieving information-theoretically optimal space along with near-optimal update/query bounds.

As the main part of our work, we consider the following problem central to text indexing: Given a text over an alphabet , construct a compressed data structure answering the queries , , and for a symbol . Many data structures consider these queries for static text . We build on these results and give the best known query bounds for the dynamic version of this problem, supporting arbitrary insertions and deletions of symbols in .

Specifically, with an amortized update time of , any static succinct data structure for , taking time for queries, can be converted by our framework into a dynamic succinct data structure that supports , , and queries in time, for any constant . When , we achieve query times. Our update/query bounds are near-optimal with respect to the lower bounds.

- R. Grossi, A. Gupta, and J. S. Vitter.
“Nearly Tight Bounds on the Encoding Length of the
Burrows-Wheeler Transform,”
*Proceedings of the 5th Workshop on Analytical Algorithmics and Combinatorics (ANALCO '08)*, San Francisco, CA, January 2008, 191-202. For a more detailed version, see “An Algorithmic Framework for Compression and Text Indexing.” We present a unified algorithmic framework to obtain nearly optimal space bounds for text compression and compressed text indexing, apart from lower-order terms. For a text of symbols drawn from an alphabet , our bounds are stated in terms of the th-order empirical entropy of the text, . In particular, we provide a tight analysis of the Burrows-Wheeler transform (BWT) establishing a bound of bits, where denotes the asymptotic number of bits required to store the empirical statistical model for contexts of order appearing in . Using the same framework, we also obtain an implementation of the compressed suffix array (CSA) that achieves bits of space while still retaining competitive full-text indexing functionality.

The novelty of the proposed framework lies in its use of the finite set model instead of the empirical probability model (as in previous work), giving us new insight into the design and analysis of our algorithms. For example, we show that our analysis gives improved bounds since , where and do not depend on the text length , while is the modified th-order empirical entropy of . Moreover, we show a strong relationship between a compressed full-text index and the succinct dictionary problem. We also examine the importance of lower-order terms, as these can dwarf any savings achieved by high-order entropy. We report further results and tradeoffs on high-order entropy-compressed text indexes in the paper.
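
The BWT analyzed above can be demonstrated with a toy construction by explicit rotation sorting (quadratic time, for exposition only; practical implementations derive the transform from a suffix array):

```python
def bwt(s, sentinel="$"):
    """Burrows-Wheeler transform: last column of the sorted rotations."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(b, sentinel="$"):
    """Invert by repeatedly prepending the BWT column and sorting."""
    table = [""] * len(b)
    for _ in range(len(b)):
        table = sorted(b[i] + table[i] for i in range(len(b)))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)
```

For instance, `bwt("banana")` yields `"annb$aa"`, whose long runs of equal symbols are what makes the transform so compressible.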

- Y.-F. Chien, W.-K. Hon, R. Shah, and J. S. Vitter.
“Geometric Burrows-Wheeler Transform: Linking Range
Searching and Text Indexing,”
*Proceedings of the 2008 IEEE Data Compression Conference (DCC '08)*, Snowbird, UT, March 2008. We introduce a new variant of the popular Burrows-Wheeler transform (BWT) called the Geometric Burrows-Wheeler Transform (GBWT). Unlike BWT, which merely permutes the text, GBWT converts the text into a set of points in 2-dimensional geometry. Using this transform, we can answer many open questions in compressed text indexing: (1) Can compressed data structures be designed in external memory with performance similar to that of their uncompressed counterparts? (2) Can compressed data structures be designed for position-restricted pattern matching? We also introduce a reverse transform, called Points2Text, which converts a set of points into text. This transform allows us to derive the best known lower bounds in compressed text indexing. We show a strong equivalence between data structural problems in geometric range searching and text pattern matching. This provides a way to derive new results in compressed text indexing by translating results from range searching.

- W.-K. Hon, T.-W. Lam, R. Shah, S.-L. Tam, and J. S. Vitter.
“Compressed Index for Dictionary Matching,”
*Proceedings of the 2008 IEEE Data Compression Conference (DCC '08)*, Snowbird, UT, March 2008.The past few years have witnessed several exciting results on compressed representation of a string that supports efficient pattern matching, and the space complexity has been reduced to bits, where denotes the th-order empirical entropy of , and is the size of the alphabet. In this paper we study compressed representation for another classical problem of string indexing, which is called dictionary matching in the literature. Precisely, a collection of strings (called patterns) of total length is to be indexed so that given a text , the occurrences of the patterns in can be found efficiently. In this paper we show how to exploit a sampling technique to compress the existing O(n)-word index to an -bit index with only a small sacrifice in search time.

- P. Ferragina, R. Grossi, A. Gupta, R. Shah, and J. S. Vitter.
“On Searching Compressed String Collections Cache-Obliviously,”
*Proceedings of the 27th Annual ACM Symposium on Principles of Database Systems (PODS '08)*, Vancouver, Canada, June 2008. Current data structures for searching large string collections are limited in that they either fail to achieve minimum space or they cause too many cache misses. In this paper, we discuss some edge linearizations of the classic trie data structure that are simultaneously cache-friendly and storable in compressed space. The widely known front-coding scheme is one example of linearization; it is at the core of Prefix B-trees and many other disk-conscious compressed indexes for string collections. However, it is largely thought of as a space-effective heuristic without efficient search support.

In this paper, we introduce new insights on front-coding and other novel linearizations, and study how close their space occupancy is to the information-theoretic minimum. The moral is that they are not just heuristics. The second contribution of this paper engineers these linearizations to design a novel dictionary encoding scheme that achieves nearly optimal space, offers competitive I/O-search time, and is also conscious of the query distribution. Finally, we combine those data structures with cache-oblivious tries and obtain a succinct variant, whose space is close to the information-theoretic minimum.
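
Front-coding itself is easy to state: each string in a sorted list is stored as the length of its longest common prefix with its predecessor plus the remaining suffix. A minimal sketch of the scheme (not the paper's engineered variant):

```python
def front_code(sorted_strings):
    """Store each string as (lcp_with_previous, remaining_suffix)."""
    out, prev = [], ""
    for s in sorted_strings:
        l = 0
        while l < min(len(s), len(prev)) and s[l] == prev[l]:
            l += 1
        out.append((l, s[l:]))
        prev = s
    return out

def front_decode(coded):
    """Rebuild the sorted list from (lcp, suffix) pairs."""
    out, prev = [], ""
    for l, suf in coded:
        s = prev[:l] + suf
        out.append(s)
        prev = s
    return out
```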

- Y.-F. Chien, W.-K. Hon, R. Shah, S. V. Thankachan, and J. S. Vitter,
“Geometric BWT: Compressed Text Indexing via Sparse Suffixes and Range Searching,”
*Algorithmica*, **71**, 2015, published online May 2013, 258-278. A shorter version of some results appeared in W.-K. Hon, R. Shah, S. V. Thankachan, and J. S. Vitter, “On Entropy-Compressed Text Indexing in External Memory,” *Proceedings of the 16th International Conference on String Processing and Information Retrieval (SPIRE '09)*, Saariselkä, Finland, August 2009, published in Lecture Notes in Computer Science, **5721**, Springer, Berlin, Germany, 75-89. We introduce a new variant of the popular Burrows-Wheeler transform (BWT), called the Geometric Burrows-Wheeler Transform (GBWT), which converts a text into a set of points in 2-dimensional geometry. We also introduce a reverse transform, called Points2Text, which converts a set of points into text. Using these two transforms, we show a strong equivalence between data structural problems in geometric range searching and text pattern matching. This allows us to apply the lower bounds known in the field of orthogonal range searching to problems in compressed text indexing. In addition, we give the first succinct (compact) index for I/O-efficient pattern matching in external memory, and show how this index can be further improved to achieve higher-order entropy-compressed space.

- S.-Y. Chiu, W.-K. Hon, R. Shah, and J. S. Vitter.
“I/O-efficient Compressed Text Indexes: From Theory to Practice,”
*Proceedings of the 2010 IEEE Data Compression Conference (DCC '10)*, Snowbird, UT, March 2010.Pattern matching on text data has been a fundamental field of Computer Science for nearly 40 years. Databases supporting full-text indexing functionality on text data are now widely used by biologists. In the theoretical literature, the most popular internal-memory index structures are the suffix trees and the suffix arrays, and the most popular external-memory index structure is the string B-tree. However, the practical applicability of these indexes has been limited mainly because of their space consumption and I/O issues. These structures use a lot more space (almost 20 to 50 times more) than the original text data and are often disk-resident.

Ferragina and Manzini (2005) and Grossi and Vitter (2005) gave the first compressed text indexes with efficient query times in the internal-memory model. Recently, Chien et al. (2008) presented a compact text index in external memory based on the concept of the Geometric Burrows-Wheeler Transform. They also presented lower bounds which suggested that it may be hard to obtain a good index structure in external memory.

In this paper, we investigate this issue from a practical point of view. On the positive side we show an external-memory text indexing structure (based on R-trees and KD-trees) that saves space by about an order of magnitude as compared to the standard String B-tree. While saving space, these structures also maintain a comparable I/O efficiency to that of String B-tree. We also show various space vs. I/O efficiency trade-offs for our structures.

- W.-K. Hon, R. Shah, and J. S. Vitter.
“Compression, Indexing, and Retrieval for Massive String Data,”
invited keynote paper in
*Proceedings of the 21st Annual Symposium on Combinatorial Pattern Matching (CPM '10)*, New York, NY, June 2010, published in Lecture Notes in Computer Science, **6129**, Springer, Berlin, Germany, 40-51. This keynote address from CPM 2010 gives a quick history of indexing using compressed data structures, as well as some of the current challenges to realize their full potential. Slides for CPM '10 keynote talk (Adobe pdf)

The field of compressed data structures seeks to achieve fast search time, but using a compressed representation, ideally requiring less space than that occupied by the original input data. The challenge is to construct a compressed representation that provides the same functionality and speed as traditional data structures. In this invited presentation, we discuss some breakthroughs in compressed data structures over the course of the last decade that have significantly reduced the space requirements for fast text and document indexing. One interesting consequence is that, for the first time, we can construct data structures for text indexing that are competitive in time and space with the well-known technique of inverted indexes, but that provide more general search capabilities. Several challenges remain, and we focus in this presentation on two in particular: building I/O-efficient search structures when the input data are so massive that external memory must be used, and incorporating notions of relevance in the reporting of query answers.

- M. Oguzhan Külekci, J. S. Vitter, and Bojian Xu.
“Fast Pattern Matching via -bit Filtering Based Text Decomposition,”
invited paper from
*ISCIS 2010* in *The Computer Journal*, **55**(1), January 2012, electronic publication December 20, 2010, `doi.org/10.1093/comjnl/bxq090`. An extended abstract appears in “Boosting Pattern Matching Performance via -bit Filtering,” *Proceedings of the 25th International Symposium on Computer and Information Sciences (ISCIS '10)*, London, September 2010, published in Lecture Notes in Electrical Engineering, **62**, Springer, Berlin, Germany, 27-32. This study explores an alternative way of storing text files to answer exact match queries faster. We decompose the original file into two parts as filter and payload. The filter part contains the most informative bits of each byte, and the remaining bits of the bytes are concatenated in order of appearance to generate the payload. We refer to this structure as the -bit filtered format. When an input pattern is to be searched on the -bit filtered structure, the same decomposition is performed on the pattern. The bits from each byte of the pattern form the pattern filter bit sequence, and the rest is the payload. The pattern filter is first scanned on the filter part of the file. At each match position detected in the filter part, the pattern payload is verified against the corresponding location in the payload part of the text. Thus, instead of searching an -byte pattern on an -byte text, the first bits are scanned on bits, followed by a verification of bits at the respective locations of the matching positions. Experiments conducted on natural language texts, plain ASCII DNA sequences, and random byte sequences showed that the search performance of the proposed scheme is on average two times faster than the best tested exact pattern matching algorithms. The highest gain is obtained on plain ASCII DNA sequences. We also developed an effective bitwise pattern matching algorithm of possible independent interest within this study.
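
The filter/payload decomposition can be sketched generically. Which bits count as "most informative" is a parameter of the scheme; this toy simply takes the top bits of each byte, which need not match the paper's selection, and verifies payloads only where the filter stream matches:

```python
def split_filter_payload(data, bits=2):
    """Split each byte into its `bits` most significant bits (filter)
    and the remaining low bits (payload)."""
    filt = [b >> (8 - bits) for b in data]
    payload = [b & ((1 << (8 - bits)) - 1) for b in data]
    return filt, payload

def filtered_search(text, pattern, bits=2):
    """Scan the filter stream first; verify the payload only at
    positions where the filter matches (naive O(nm) toy scan)."""
    tf, tp = split_filter_payload(text, bits)
    pf, pp = split_filter_payload(pattern, bits)
    m = len(pattern)
    hits = []
    for i in range(len(text) - m + 1):
        if tf[i:i + m] == pf and tp[i:i + m] == pp:
            hits.append(i)
    return hits
```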

- W.-K. Hon, R. Shah, and J. S. Vitter.
“Space-Efficient Frameworks for Top-k String Retrieval,”
*Journal of the ACM*, **61**(2), April 2014, 9.1-9.36. An extended abstract appears in W.-K. Hon, R. Shah, and J. S. Vitter, “Space-Efficient Frameworks for Top-k String Retrieval Problems,” *Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS '09)*, Atlanta, GA, October 2009, 713-722. Given a set of strings of total length , our task is to report the “most relevant” strings for a given query pattern . This involves somewhat more advanced query functionality than the usual pattern matching, as some notion of “most relevant” is involved. In the information retrieval literature, this task is best achieved by using inverted indexes. However, inverted indexes work only for some predefined set of patterns. In the pattern matching community, the most popular pattern-matching data structures are suffix trees and suffix arrays. However, a typical suffix tree search involves going through all the occurrences of the pattern over the entire string collection, which might be a lot more than the required relevant documents.

The first formal framework to study such retrieval problems was given by Muthukrishnan. He considered two metrics for relevance: frequency and proximity. He took a threshold-based approach on these metrics and gave data structures taking words of space. We study this problem in a slightly different framework of reporting the top k most relevant documents (in sorted order) under similar and more general relevance metrics. Our framework gives a linear-space data structure with optimal query times for arbitrary score functions. As a corollary, it improves the space utilization for the problems considered by Muthukrishnan while maintaining optimal query performance. We also develop compressed variants of these data structures for several specific relevance metrics.
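
As a baseline for the top-k problem under the frequency metric, a brute-force Python sketch; the paper's data structures produce the same ranked answer in linear or compressed space with optimal query time, which this full scan does not:

```python
from heapq import nlargest

def top_k_docs(docs, pattern, k):
    """Return ids of the k documents with the most occurrences of
    pattern, most frequent first; documents without a match are omitted."""
    def freq(d):
        count, i = 0, d.find(pattern)
        while i != -1:
            count, i = count + 1, d.find(pattern, i + 1)
        return count

    scored = [(freq(d), doc_id) for doc_id, d in enumerate(docs)]
    return [doc_id for f, doc_id in nlargest(k, scored) if f > 0]
```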

- M. O. Külekci, W.-K. Hon, R. Shah, J. S. Vitter, and B. Xu.
“Ψ-RA: a Parallel Sparse Index for Genomic Read Alignment,”
invited paper from
*BIBM 2010* in *BMC Genomics*, **12**(Suppl. 2), 2011, S7. An extended abstract appears in “Ψ-RA: A Parallel Sparse Index for Read Alignment on Genomes,” *Proceedings of the IEEE International Conference on Bioinformatics & Biomedicine (BIBM '10)*, Hong Kong, December 2010, 663-668. Background: Genomic read alignment involves mapping (exactly or approximately) short reads from a particular individual onto a pre-sequenced reference genome of the same species. Because all individuals of the same species share the majority of their genomes, short-read alignment provides an alternative and much more efficient way to sequence the genome of a particular individual than does direct sequencing. Among the many strategies proposed for this alignment process, indexing the reference genome and searching for short reads over the index is a dominant technique. Our goal is to design a space-efficient indexing structure with fast searching capability to handle the massive short reads produced by next-generation high-throughput DNA sequencing technology.

Results: We concentrate on indexing DNA sequences via sparse suffix arrays (SSAs) and propose a new short read aligner named Ψ-RA (PSI-RA: parallel sparse index read aligner). The motivation in using SSAs is the ability to trade memory against time. It is possible to fine-tune the space consumption of the index based on the available memory of the machine and the minimum length of the arriving pattern queries. Although SSAs have been studied before for exact matching of short reads, an elegant way of providing approximate matching capability was missing. We provide this by defining the rightmost mismatch criterion, which prioritizes errors towards the end of the reads, where errors are more probable. Ψ-RA supports any number of mismatches in aligning reads. We give comparisons with some of the well-known short read aligners, and show that indexing a genome with an SSA is a good alternative to the Burrows-Wheeler transform or seed-based solutions.

Conclusions: Ψ-RA is expected to serve as a valuable tool in the alignment of short reads generated by next-generation high-throughput sequencing technology. Ψ-RA is very fast in exact matching and also supports rightmost approximate matching. The SSA structure that Ψ-RA is built on naturally incorporates the modern multicore architecture, and thus further speed-up can be gained. All the information, including the source code of Ψ-RA, can be downloaded at

`http://www.busillis.com/o_kulekci/PSIRA.zip`.

- M. O. Külekci, J. S. Vitter, and B. Xu.
“Efficient Maximal Repeat Finding Using the Burrows-Wheeler Transform and Wavelet Tree,”
under submission.
An extended abstract appears in “Time- and Space-efficient Maximal Repeat Finding
Using the Burrows-Wheeler Transform and Wavelet Trees,”
*Proceedings of the IEEE International Conference on Bioinformatics & Biomedicine (BIBM '10)*, Hong Kong, December 2010, 622-625.Finding repetitive structures in genomes and proteins is important to understand their biological functions. Many data compressors for modern genomic sequences rely heavily on finding repeats in the sequences. Small-scale and local repetitive structures are better understood than large and complex interspersed ones. The notion of maximal repeats captures all the repeats in the data in a space-efficient way. Prior work on maximal repeat finding used either a suffix tree or a suffix array along with other auxiliary data structures. Their space usage is 19-50 times the text size with the best engineering efforts, prohibiting their usability on massive data such as the whole human genome. We focus on finding all the maximal repeats from massive texts in a time- and space-efficient manner. Our technique uses the Burrows-Wheeler Transform and wavelet trees. For data sets consisting of natural language texts and protein data, the space usage of our method is no more than three times the text size. For genomic sequences stored using one byte per base, the space usage of our method is less than double the sequence size. Our space-efficient method keeps the timing performance fast. In fact, our method is orders of magnitude faster than the prior methods for processing massive texts such as the whole human genome, since the prior methods must use external memory. For the first time, our method enables a desktop computer with 8GB internal memory (actual internal memory usage is less than 6GB) to find all the maximal repeats in the whole human genome in less than 17 hours. We have implemented our method as general-purpose open-source software for public use.
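
The notion of a maximal repeat, a substring occurring at least twice whose occurrences cannot all be extended by the same character to the left or to the right, can be checked by brute force on tiny inputs. The paper's BWT/wavelet-tree method computes the same set space-efficiently at genome scale; this O(n^4) toy only illustrates the definition:

```python
def maximal_repeats(t):
    """Brute-force maximal repeats: substrings occurring >= 2 times
    whose occurrence sets are both left-diverse and right-diverse."""
    n = len(t)
    found = set()
    for length in range(1, n):
        for i in range(n - length + 1):
            s = t[i:i + length]
            occ = [j for j in range(n - length + 1) if t[j:j + length] == s]
            if len(occ) < 2:
                continue
            # None stands for "text boundary", which always diversifies
            lefts = {t[j - 1] if j > 0 else None for j in occ}
            rights = {t[j + length] if j + length < n else None for j in occ}
            if len(lefts) > 1 and len(rights) > 1:
                found.add(s)
    return found
```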

- W.-K. Hon, T.-H. Ku, T.-W. Lam, R. Shah, S.-L. Tam, S. V. Thankachan, and J. S. Vitter.
“Compressing Dictionary Matching Index via Sparsification Technique,”
*Algorithmica*, **71**, online edition January 2014, 24 pp. A shorter version appears in W.-K. Hon, T.-H. Ku, R. Shah, S. V. Thankachan, and J. S. Vitter, “Compressed Dictionary Matching With One Error,” *Proceedings of the 2011 IEEE Data Compression Conference (DCC '11)*, Snowbird, UT, March 2011, 113-122, with contributions from papers in DCC '08 and ISAAC '09. Given a set D of patterns of total length n, the dictionary matching problem is to index D such that for any query text T, we can locate the occurrences of any pattern within T efficiently. This problem can be solved in optimal O(|T|+occ) time by the classical AC automaton (Aho and Corasick, Commun. ACM 18(6):333-340, 1975), where occ denotes the number of occurrences. The space requirement is O(n) words, which is still far from optimal. In this paper, we show that in many cases, the sparsification technique can be applied to improve the space requirements of the indexes for dictionary matching and its related problems. First, we give a compressed index for dictionary matching, and show that such an index can be generalized to handle dynamic updates of D. Also, we give a compressed index for approximate dictionary matching with one error. In each case, the query time is only slowed down by a polylogarithmic factor when compared with that achieved by the best O(n)-word counterparts.
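
The classical Aho-Corasick automaton referenced above is the O(n)-word baseline whose space the paper compresses. A compact Python sketch of the goto trie with failure links:

```python
from collections import deque

def build_ac(patterns):
    """Aho-Corasick automaton: goto trie, failure links, and output
    sets merged along failure links (BFS order)."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        node = 0
        for ch in p:
            if ch not in goto[node]:
                goto.append({}); fail.append(0); out.append(set())
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].add(p)
    queue = deque(goto[0].values())   # root's children keep fail = 0
    while queue:
        u = queue.popleft()
        for ch, v in goto[u].items():
            queue.append(v)
            f = fail[u]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[v] = goto[f].get(ch, 0)
            out[v] |= out[fail[v]]
    return goto, fail, out

def match(text, patterns):
    """Report (start, pattern) for every dictionary occurrence in text."""
    goto, fail, out = build_ac(patterns)
    node, hits = 0, []
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        for p in out[node]:
            hits.append((i - len(p) + 1, p))
    return sorted(hits)
```

Matching runs in time proportional to the text length plus the number of occurrences, as in the O(|T|+occ) bound quoted above.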

- R. Grossi, J. S. Vitter, and B. Xu.
“Wavelet Trees: from Theory to Practice,”
in preparation. An extended abstract appears in
*Proceedings of the International Conference on Data Compression, Communication and Processing (CCP '11)*, Palinuro, Italy, June 2011, 10 pages. The wavelet tree data structure is a space-efficient technique for rank and select queries that generalizes from binary characters to an arbitrary multicharacter alphabet. It has become a key tool in modern full-text indexing and data compression because of its capabilities in compressing, indexing, and searching. We present a comparative study of its practical performance regarding a wide range of options on the dimensions of different coding schemes and tree shapes. Our results are both theoretical and experimental: (1) We show that the run-length coding size of wavelet trees achieves the 0-order empirical entropy size of the original string with leading constant 1, when the string's 0-order empirical entropy is asymptotically less than the logarithm of the alphabet size. This result complements the previous works that are dedicated to analyzing run-length-encoded wavelet trees. It also reveals the scenarios when run-length encoding becomes practical. (2) We introduce a full generic package of wavelet trees for a wide range of options on the dimensions of coding schemes and tree shapes. Our experimental study reveals the practical performance of the various modifications.
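
A pointer-based (non-succinct) wavelet tree conveys the recursive idea: each node splits its alphabet in half, stores one bit per symbol indicating the half, and delegates rank queries downward. A hedged sketch using plain Python lists where real implementations use compressed bitvectors:

```python
class WaveletTree:
    """Toy wavelet tree supporting rank(c, i): occurrences of c in
    s[0:i]. Assumes queried symbols occur in the indexed string."""

    def __init__(self, s, alphabet=None):
        self.alpha = sorted(set(s)) if alphabet is None else alphabet
        if len(self.alpha) <= 1:
            self.bits = None          # leaf: a run of one symbol
            self.n = len(s)
            return
        mid = len(self.alpha) // 2
        self.left_alpha = set(self.alpha[:mid])
        self.bits = [0 if c in self.left_alpha else 1 for c in s]
        self.left = WaveletTree([c for c in s if c in self.left_alpha],
                                self.alpha[:mid])
        self.right = WaveletTree([c for c in s if c not in self.left_alpha],
                                 self.alpha[mid:])

    def rank(self, c, i):
        if self.bits is None:
            return min(i, self.n)
        if c in self.left_alpha:      # map the prefix count down a level
            return self.left.rank(c, self.bits[:i].count(0))
        return self.right.rank(c, self.bits[:i].count(1))
```

With level-wise bitvectors supporting constant-time binary rank, each query costs one bitvector rank per tree level, i.e., logarithmically many in the alphabet size.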

- M. Patil, S. V. Thankachan, R. Shah, W.-K. Hon, J. S. Vitter, and S. Chandrasekaran.
“Inverted Indexes for Phrases and Strings,”
*Proceedings of the 34th Annual ACM SIGIR Conference (SIGIR '11)*, Beijing, July 2011, 10 pages. Inverted indexes are the most fundamental and widely used data structures in information retrieval. For each unique word occurring in a document collection, the inverted index stores a list of the documents in which this word occurs. Compression techniques are often applied to further reduce the space requirement of these lists. However, the index has a shortcoming, in that only predefined pattern queries can be supported efficiently. In terms of string documents where word boundaries are undefined, if we have to index all the substrings of a given document, then the storage quickly becomes quadratic in the data size. Also, if we want to apply the same type of indexes for querying phrases or sequences of words, then the inverted index will end up storing redundant information. In this paper, we show the first set of inverted indexes which work naturally for strings as well as phrase searching. The central idea is to exclude a document from the inverted list of a string if every occurrence of that string in the document is subsumed by another indexed string of which it is a prefix. With this we show that our space utilization is close to optimal. Techniques from succinct data structures are deployed to achieve compression while allowing fast access in terms of frequency and document id based retrieval. Compression and speed tradeoffs are evaluated for different variants of the proposed index. For phrase searching, we show that our indexes compare favorably against a typical inverted index deploying position-wise intersections. We also show efficient top-k based retrieval under relevance metrics like frequency and

*tf-idf*.

- T.-H. Ku, W.-K. Hon, R. Shah, S. V. Thankachan, and J. S. Vitter.
“Compressed Text Indexing With Wildcards,”

*Journal of Discrete Algorithms*, **19**, March 2013, 23-29. An extended abstract appears in *Proceedings of the 18th International Conference on String Processing and Information Retrieval (SPIRE '11)*, Pisa, Italy, October 2011, published in Lecture Notes in Computer Science, Springer, Berlin, Germany. Let be a text of total length , where characters of each are chosen from an alphabet of size , and denotes a wildcard symbol. The text indexing with wildcards problem is to index such that when we are given a query pattern , we can locate the occurrences of in efficiently. This problem has been applied in indexing genomic sequences that contain single-nucleotide polymorphisms (SNPs), because SNPs can be modeled as wildcards. Recently, Tam et al. (2009) and Thachuk (2011) have proposed succinct indexes for this problem. In this paper, we present the first compressed index for this problem, which takes only bits of space, where is the th-order empirical entropy ( ) of .
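
With a single-character wildcard in the text (written `?` in this toy; the paper uses a dedicated symbol), a wildcard position matches any pattern character. A naive scan illustrating the query semantics that the compressed index supports:

```python
def wildcard_text_search(text, pattern):
    """Report start positions where pattern occurs in a text that may
    contain single-character wildcards '?' (each matches any symbol).
    Naive O(n*m) scan for illustration only."""
    m, hits = len(pattern), []
    for i in range(len(text) - m + 1):
        if all(text[i + j] == "?" or text[i + j] == pattern[j]
               for j in range(m)):
            hits.append(i)
    return hits
```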

- J. S. Vitter.
“Compressed Data Structures with Relevance,”
invited keynote abstract,
*Proceedings of the 2012 ACM Conference on Information and Knowledge Management (CIKM '12)*, Maui, Hawaii, October-November 2012. Slides for the CIKM '12 keynote talk (Adobe pdf).

We describe recent breakthroughs in the field of compressed data structures, in which the data structure is stored in a compressed representation that still allows fast answers to queries. We focus in particular on compressed data structures to support the important application of pattern matching on massive document collections. Given an arbitrary query pattern in textual form, the job of the data structure is to report all the locations where the pattern appears. Another variant is to report all the documents that contain at least one instance of the pattern. We are particularly interested in reporting only the most relevant documents, using a variety of notions of relevance. We discuss recently developed techniques that support fast search in these contexts as well as under additional positional and temporal constraints.

- W.-K. Hon, R. Shah, S. V. Thankachan, and J. S. Vitter.
“On Position Restricted Substring Searching in Succinct Space,”
*Journal of Discrete Algorithms*,**17**, 2012, 109-114. Part of the results appear in an extended abstract in Y.-F. Chien, W.-K. Hon, R. Shah, and J. S. Vitter, “Geometric Burrows-Wheeler Transform: Linking Range Searching and Text Indexing,”*Proceedings of the 2008 IEEE Data Compression Conference (DCC '08)*, Snowbird, UT, March 2008. We study the position-restricted substring searching (PRSS) problem, where the task is to index a text of characters over an alphabet set of size , in order to answer the following: given a query pattern (of length ) and two indices and , report all occurrences of in . The known indexes take bits or bits of space and answer the query in time or in optimal time, respectively, where is any positive constant. The main drawback of these indexes is their space requirement of bits, which can be much more than the optimal bits needed to store the text . This paper addresses an open question asked by Mäkinen and Navarro [LATIN, 2006]: is it possible to design a succinct index answering PRSS queries efficiently? We first study the hardness of this problem and prove the following result: a succinct index cannot answer PRSS queries optimally, at least in the pointer machine model, nor in the RAM model unless the bounds on the well-researched orthogonal range query problem improve. However, for the special case of sufficiently long query patterns, that is, for , we derive an bits index with optimal query time, where and are the space (in bits) of the compressed suffix arrays (with time for pattern search) of and (the reverse of ), respectively. The space can be reduced further to bits, with resulting query time . For the general case, where there is no restriction on the pattern length, we obtain an bits index with query time. We use suffix sampling techniques to achieve these space-efficient indexes.

- S. V. Thankachan, R. Shah, W.-K. Hon, and J. S. Vitter.
“Document Listing for Queries With Excluded Pattern,”
*Proceedings of the 23rd Annual Conference on Combinatorial Pattern Matching (CPM '12)*, Helsinki, Finland, July 2012, published in Lecture Notes in Computer Science,**6129**, Springer, Berlin, Germany, 185-195. Let be a given collection of string documents of total length . We consider the problem of indexing such that, whenever two patterns and come as an online query, we can list all those documents containing but not . Let represent the number of such documents. An index proposed by Fischer et al. (LATIN, 2012) can answer this query in time. However, its space requirement is bits. We propose the first linear-space index for this problem, with a worst-case query time of .

- W.-K. Hon, R. Shah, S. V. Thankachan, and J. S. Vitter.
“Faster Compressed Top-k Document Retrieval,”
*Proceedings of the 2013 IEEE Data Compression Conference (DCC '13)*, Snowbird, UT, March 2013. Let be a given collection of string documents of total length . Our task is to index such that whenever a pattern (of length ) and an integer k come as a query, the k documents in which appears most often can be listed efficiently. In this paper, we propose a compressed index taking bits of space, which answers a query with per-document report time. This improves the per-document report time of the previously best-known index with (asymptotically) the same space requirements [Belazzougui and Navarro, SPIRE 2011]. Here, represents the size (in bits) of the compressed suffix array (CSA) of the text obtained by concatenating all documents in , and is the time for decoding a suffix array value using the CSA.
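
The query being supported can be stated as a tiny brute-force baseline (a purely illustrative sketch of ours; `count_occ` and `top_k_docs` are hypothetical names, and the compressed index of course answers the query without scanning the documents):

```python
import heapq

def count_occ(doc, p):
    # Number of (possibly overlapping) occurrences of pattern p in doc.
    return sum(1 for i in range(len(doc) - len(p) + 1) if doc[i:i + len(p)] == p)

def top_k_docs(docs, p, k):
    # Indices of the k documents in which p appears most often.
    return heapq.nlargest(k, range(len(docs)), key=lambda d: count_occ(docs[d], p))

docs = ["abab", "ab", "bbb"]
# top_k_docs(docs, "ab", 2) -> [0, 1]
```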

- S. V. Thankachan, W.-K. Hon, M. Patil, R. Shah, and J. S. Vitter.
“Indexes for Document Retrieval with Relevance,”
*Ian Munro Festschrift*, August 2013, published in Lecture Notes in Computer Science, Festschrift Series, Springer, Berlin, Germany. Document retrieval is a special type of pattern matching that is closely related to information retrieval and web searching. In this problem, the data consist of a collection of text documents, and given a query pattern , we are required to report all the documents (not all the occurrences) in which this pattern occurs. In addition, the notion of relevance is commonly applied to rank all the documents that satisfy the query, and only those documents with the highest relevance are returned. Such a concept of relevance has been central to the effectiveness and usability of present-day search engines like Google, Bing, Yahoo, or Ask. When relevance is considered, the query has an additional input parameter , and the task is to report only the documents with the highest relevance to , instead of finding all the documents that contain . For example, one such relevance function could be the frequency of the query pattern in the document. In the information retrieval literature, this task is best achieved by using inverted indexes. However, if the query consists of an arbitrary string--which can be a partial word, multiword phrase, or more generally any sequence of characters--we cannot take advantage of word boundaries, and we need a different approach.

This leads to one of the active research topics in the string matching and text indexing community in recent years, and various aspects of the problem have been studied, such as space-time tradeoffs, practical solutions, multipattern queries, and I/O efficiency. In this article, we review some of the initial frameworks for designing such indexes and summarize the developments in this area.

- R. Shah, C. Sheng, S. V. Thankachan, and J. S. Vitter.
“Top-k Document Retrieval in External Memory,”
*Proceedings of the 21st Annual European Symposium on Algorithms (ESA '13)*, Sophia-Antipolis, France, September 2013, published in Lecture Notes in Computer Science, Springer, Berlin, Germany. Let be a given set of (string) documents of total length . The top-k document retrieval problem is to index such that when a pattern of length and a parameter k come as a query, the index returns the k documents most relevant to . Hon et al. [HSV09] proposed a linear-space framework to solve this problem in time. This query time was improved to by Navarro and Nekrich [NN12]. These results are powerful enough to support arbitrary relevance functions like frequency, proximity, PageRank, etc. Despite continued progress on this problem in theoretical, practical, and compression aspects, nontrivial bounds in the external memory model have so far been elusive. In this paper, we propose the first external memory index supporting top-k document retrieval queries (outputs unsorted) in optimal I/Os, where is the block size. The index space is almost linear words, where is the iterated logarithm of . We also improve the existing internal memory results. Specifically, we propose a linear-space index for retrieving the top-k documents in time, once the locus of the pattern match is given.

- Y. Nekrich and J. S. Vitter.
“Optimal Color Range Reporting in One Dimension,”
*Proceedings of the 21st Annual European Symposium on Algorithms (ESA '13)*, Sophia-Antipolis, France, September 2013, published in Lecture Notes in Computer Science, Springer, Berlin, Germany. Color (or categorical) range reporting is a variant of the orthogonal range reporting problem in which every point in the input is assigned a color. While the answer to an orthogonal point reporting query contains all points in the query range , the answer to a color reporting query contains only the distinct colors of the points in . In this paper we describe an -space data structure that answers one-dimensional color reporting queries in optimal time, where is the number of colors in the answer and is the number of points in the data structure. Our result can also be dynamized and extended to the external memory model.
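
The classic reduction behind one-dimensional color reporting can be sketched in a few lines (a naive illustration under our own naming, not the paper's optimal structure): store for each point the index of the previous point of the same color; a color occurs in [a, b] exactly when some position i in [a, b] has prev[i] < a, and reporting exactly those positions yields each color once.

```python
def build_prev(colors):
    # prev[i] = index of the previous point with the same color, or -1 if none.
    last, prev = {}, []
    for i, c in enumerate(colors):
        prev.append(last.get(c, -1))
        last[c] = i
    return prev

def color_report(colors, prev, a, b):
    # Each distinct color in colors[a..b], reported exactly once.
    return [colors[i] for i in range(a, b + 1) if prev[i] < a]

colors = ['r', 'g', 'r', 'b', 'g', 'r']
prev = build_prev(colors)
# color_report(colors, prev, 1, 4) -> ['g', 'r', 'b']
```

The optimal structure in the paper avoids the linear scan over [a, b]; the reduction itself is what carries over.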

- Q. Yu, H. Huo, J. S. Vitter, J. Huan, and Y. Nekrich.
“An Efficient Exact Algorithm for the Motif Stem Search Problem over Large Alphabets,”
*IEEE/ACM Transactions on Computational Biology and Bioinformatics*,**12**(2), March/April 2015, 384-397,`dx.doi.org/10.1109/TCBB.2014.2361668`. A shorter version appears in “StemFinder: An Efficient Algorithm for Searching Motif Stems over Large Alphabets,”*Proceedings of 2013 IEEE International Conference on Bioinformatics and Biomedicine (BIBM '13)*, Shanghai, December 2013. In recent years, there has been increasing interest in planted motif search (PMS), with applications to discovering significant segments in biological sequences. However, there has been little discussion of PMS over large alphabets. This paper focuses on motif stem search (MSS), which was recently introduced to search for motifs on large-alphabet inputs. A motif stem is an -length string with some wildcards. The goal of the MSS problem is to find a set of stems that represents a superset of all motifs present in the input sequences, where the superset is expected to be as small as possible. The three main contributions of this paper are as follows: (1) we build the motif stem representation more precisely by using regular expressions; (2) we give a method for generating all possible motif stems without redundant wildcards; and (3) we propose an efficient exact algorithm, called StemFinder, for solving the MSS problem. Compared with previous algorithms, StemFinder runs much faster and is the first to solve the (17, 8), (19, 9), and (21, 10) challenging instances on protein sequences; moreover, StemFinder reports fewer stems, which represent a smaller superset of all motifs. StemFinder is freely available at http://sites.google.com/site/feqond/stemfinder.
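
To illustrate contribution (1), here is a hypothetical sketch (our own names and wildcard notation; the paper's actual stem representation is more refined) of turning a stem whose wildcard positions are written '?' into a regular expression and scanning a protein sequence with it:

```python
import re

def stem_to_regex(stem, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    # '?' stands for a wildcard position matching any alphabet symbol.
    return re.compile("".join(f"[{alphabet}]" if ch == "?" else re.escape(ch)
                              for ch in stem))

def stem_occurrences(stem, sequence):
    # All (possibly overlapping) start positions where the stem matches.
    pattern = stem_to_regex(stem)
    return [i for i in range(len(sequence) - len(stem) + 1)
            if pattern.fullmatch(sequence, i, i + len(stem))]

# e.g. stem "AC?G" occurs in "ACTGACAG" at positions 0 and 4
```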

- M. Lewenstein, Y. Nekrich, and J. S. Vitter.
“Space-Efficient String Indexing for Wildcard Pattern Matching,”
*Proceedings of the 31st Symposium on Theoretical Aspects of Computer Science (STACS '14)*, Lyon, France, March 2014, published in Leibniz International Proceedings in Informatics, Schloss Dagstuhl, Wadern, Germany.In this paper we describe compressed indexes that support pattern matching queries for strings with wildcards. We present a data structure that uses bits for any and reports all occ occurrences of a wildcard string in time, where is the alphabet size, is the number of alphabet symbols, and is the number of wildcard symbols in the query string. We also present an -bit index with query time and an -bit index with query time. These are the first data structures for this problem that need only bits of space.

- H. Huo, L. Chen, J. S. Vitter, and Y. Nekrich.
“A Practical Implementation for Compressed Suffix Arrays with Applications to Self-indexing,”
in preparation.
An extended abstract appears in
*Proceedings of the 2014 IEEE Data Compression Conference (DCC '14)*, Snowbird, UT, March 2014. In this paper, we develop a simple and practical storage scheme for compressed suffix arrays (CSA). Our CSA can be constructed in linear time and needs bits of space simultaneously for any and any constant , where denotes the -th order entropy. We compare the performance of our method with that of two established compressed indexing methods, viz. the FM-index and Sadakane's CSA. Experiments on the Canterbury Corpus and the Pizza&Chili Corpus show significant advantages of our algorithm over the two other indexes in terms of compression and query time. Our storage scheme achieves better performance on all types of data present in these two corpora, except for evenly distributed data, such as DNA. The source code for our CSA is available online.

- Q. Yu, H. Huo, X. Chen, H. Guo, J. S. Vitter, and J. Huan.
“An Efficient Algorithm for Discovering Motifs in Large DNA Data Sets,”
*IEEE Transactions on NanoBioscience*,**14**(5), July 2015, epublished April 9, 2015. A shorter version appears in “An Efficient Motif Finding Algorithm for Large DNA Data Sets,”*Proceedings of 2014 IEEE International Conference on Bioinformatics and Biomedicine (BIBM '14)*, Belfast, UK, November 2014. Planted motif discovery has been successfully used to locate transcription factor binding sites in dozens of promoter sequences over the past decade. However, not enough work has been done on identifying motifs in next-generation sequencing (ChIP-seq) data sets, which contain thousands of input sequences and thereby pose a new challenge: making good identifications in reasonable time. To meet this need, we propose a new planted motif discovery algorithm named MCES, which identifies motifs by mining and combining emerging substrings. In particular, to handle larger data sets, we design a MapReduce-based strategy to mine emerging substrings in a distributed fashion. Experimental results on simulated data show that (i) MCES is able to identify motifs efficiently and effectively in thousands to millions of input sequences, and runs faster than state-of-the-art motif discovery algorithms such as F-motif and TraverStringsR; and (ii) MCES is able to identify motifs without known lengths, and has better identification accuracy than the competing algorithm CisFinder. The validity of MCES is also tested on real data sets. MCES is freely available at http://sites.google.com/site/feqond/mces.

- Q. Yu, H. Huo, R. Zhao, D. Feng, J. S. Vitter, and J. Huan.
“RefSelect: A Reference Sequence Selection Algorithm for Planted Motif Search,”
*BMC Bioinformatics*,**17**(Suppl 9), July 19, 2016,
`dx.doi.org/10.1186/s12859-016-1130-6`. A shorter version appears in “Reference Sequence Selection for Motif Searches,”*Proceedings of 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM '15)*, Bethesda, MD, November 2015, 569-574. The planted motif search (PMS) is an important yet challenging problem in computational biology. Pattern-driven PMS algorithms usually use out of input sequences as reference sequences to generate candidate motifs, and they can find all the motifs in the input sequences. However, most of them simply take the first sequences in the input as reference sequences without an elaborate selection process, and thus they may exhibit sharp fluctuations in running time, especially for large alphabets.

In this paper, we formulate the reference sequence selection problem and propose a method named RefSelect to solve it quickly by evaluating the number of candidate motifs for the reference sequences. RefSelect brings a practical time improvement to state-of-the-art pattern-driven PMS algorithms. Experimental results show that RefSelect (1) makes the tested algorithms solve the PMS problem steadily and efficiently, (2) in particular makes them achieve a speedup of up to about 100 times on protein data, and (3) is also suitable for large data sets containing hundreds or more sequences.

The proposed algorithm RefSelect can be used to address the execution-time instability exhibited by many pattern-driven PMS algorithms. RefSelect requires a small amount of storage space and is capable of selecting reference sequences efficiently and effectively. A parallel version of RefSelect is also provided for handling large data sets.

- J. I. Munro, Y. Nekrich, and J. S. Vitter.
“Fast Construction of Wavelet Trees,”
*Theoretical Computer Science*,**638**, 2016, 91-97. A shorter version appears in*Proceedings of the 21st International Conference on String Processing and Information Retrieval (SPIRE '14)*, Ouro Preto, Brazil, October 2014, published in Lecture Notes in Computer Science, Springer, Berlin, Germany. In this paper we describe a fast algorithm that creates a wavelet tree for a sequence of symbols. We show that a wavelet tree can be constructed in O(n log σ / √(log n)) time, where n is the number of symbols and σ is the alphabet size.
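
For readers unfamiliar with the structure, a minimal (unoptimized) wavelet tree and its rank query can be sketched as follows; this is a plain pointer-based illustration of ours, not the paper's fast construction:

```python
# Each node splits its alphabet in half and stores a bitmap recording which
# half each symbol of its sequence falls into; rank(c, i) counts occurrences
# of symbol c in the prefix of length i by descending the tree.

class WaveletTree:
    def __init__(self, seq, alphabet=None):
        self.alphabet = sorted(set(seq)) if alphabet is None else alphabet
        if len(self.alphabet) <= 1:
            self.bits = None          # leaf: a single symbol
            return
        mid = len(self.alphabet) // 2
        left_set = set(self.alphabet[:mid])
        self.bits = [0 if c in left_set else 1 for c in seq]
        self.left = WaveletTree([c for c in seq if c in left_set],
                                self.alphabet[:mid])
        self.right = WaveletTree([c for c in seq if c not in left_set],
                                 self.alphabet[mid:])

    def rank(self, c, i):
        # Occurrences of symbol c in seq[:i].
        if self.bits is None:
            return i
        mid = len(self.alphabet) // 2
        bit = 0 if c in self.alphabet[:mid] else 1
        j = sum(1 for b in self.bits[:i] if b == bit)
        return (self.left if bit == 0 else self.right).rank(c, j)

wt = WaveletTree("abracadabra")
# wt.rank('a', 11) -> 5
```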

- J. I. Munro, Y. Nekrich, and J. S. Vitter.
“Compressed Data Structures for Dynamic Sequences and Document Collections,”
*Proceedings of the 34th Annual ACM Symposium on Principles of Database Systems (PODS '15)*, Melbourne, Australia, May-June 2015, 277-289. In the dynamic indexing problem, we must maintain a changing collection of text documents so that we can efficiently support insertions, deletions, and pattern matching queries. We are especially interested in developing efficient data structures that store and query the documents in compressed form. All previous compressed solutions to this problem rely on answering rank and select queries on a dynamic sequence of symbols. Because of the lower bound in [Fredman and Saks, 1989], answering rank queries presents a bottleneck in compressed dynamic indexing. In this paper we show how this lower bound can be circumvented using our new framework. We demonstrate that the gap between the static and dynamic variants of the indexing problem can be almost closed. Our method is based on a novel framework for adding dynamism to static compressed data structures. Our framework also applies more generally to dynamizing other problems. We show, for example, how it can be applied to develop compressed representations of dynamic graphs and binary relations.

- X. Chen, H. Huo, J. Huan, J. S. Vitter, W. Zheng, and L. Zou.
“MSQ-Index: A Succinct Index for Fast Graph Similarity Search,”
*IEEE Transactions on Knowledge and Data Engineering*, November 20, 2019. Graph similarity search has received considerable attention in many applications, such as bioinformatics, data mining, pattern recognition, and social networks. Existing methods for this problem have limited scalability because of the huge amount of memory they consume when handling very large graph databases with millions or billions of graphs.

In this paper, we study the problem of graph similarity search under the graph edit distance constraint. We present a space-efficient index structure based upon the q-gram tree that incorporates succinct data structures and hybrid encoding to achieve improved query-time performance with minimal space usage. Specifically, the space usage of our index requires only 5%-15% of the previous state-of-the-art indexing size on the tested data, while at the same time achieving a 2-3 times speedup in query time on small data sets. We also boost query performance by augmenting the global filter with range search, which allows us to perform a query in a reduced region. In addition, we propose two effective filters that combine degree structures and label structures. Extensive experiments demonstrate that our proposed approach is superior in space and competitive in filtering power compared to the state-of-the-art approaches. To the best of our knowledge, our index is the first in-memory index for this problem that successfully scales to the large dataset of 25 million chemical structure graphs from PubChem.

- H. Huo, L. Chen, H. Zhao, J. S. Vitter, Y. Nekrich, and Q. Yu,
“A Data-Aware FM-index,”
*Proceedings of the 17th Meeting on Algorithm Engineering and Experiments (ALENEX '15)*, San Diego, CA, January 2015. In this paper we present some practical modifications of the higher-order entropy-compressed text indexing method of Foschini et al. [6], based upon the Burrows-Wheeler transform and the FM-index. Our method, called FM-Adaptive, applies a wavelet tree to the entire BWT. It partitions each bit vector of the nodes in the wavelet tree into blocks and applies hybrid encoding along with the run-length Gamma code, rather than the fixed-length code of [14], to each block, thereby exploiting data-aware compression. FM-Adaptive retains the theoretical performance of previous work and introduces some improvements in practice. At the same time, extensive experiments indicate that our index achieves superior performance, especially in terms of compression, in comparison with state-of-the-art indexing techniques. The source code is available online.
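
As background, the Burrows-Wheeler transform that FM-style indexes (including FM-Adaptive) build on can be computed naively as follows; this is an O(n² log n) teaching sketch of ours, whereas practical construction uses linear-time suffix sorting:

```python
def bwt(text, terminator="$"):
    # Append a unique end marker (assumed smaller than all text symbols),
    # sort all cyclic rotations, and read off the last column.
    s = text + terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

# bwt("banana") -> "annb$aa"
```

The last column groups equal symbols by their following context, which is what makes the BWT highly compressible and rank queries over it (via a wavelet tree) sufficient for pattern search.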

- H. Huo, Z. Sun, S. Li, J. S. Vitter, X. Wang, Q. Yu, and J. Huan.
“CS2A: A Compressed Suffix Array-Based Method for Short Read Alignment,”
*IEEE Access*,**5**, March 14, 2017, 4551-4560,`dx.doi.org/10.1109/ACCESS.2017.2682107`. Next-generation sequencing technologies generate enormous amounts of short reads, which poses a significant computational challenge for short read alignment. Furthermore, because of sequence polymorphisms in a population, repetitive sequences, and sequencing errors, difficulties remain in correctly aligning all reads. We propose a space-efficient compressed suffix array-based method for short read alignment (CS2A) whose space achieves the high-order empirical entropy of the input string. Unlike BWA, which uses two bits to represent a nucleotide and is thus suited to constant-sized alphabets, our encoding scheme can be applied to strings over any alphabet. In addition, we present approximate pattern matching on the compressed suffix array (CSA) for short read alignment. Our CS2A supports both mismatch and gapped alignments for single-end and paired-end read mapping, and is capable of efficiently aligning short sequencing reads to genome sequences. The experimental results show that CS2A can compete with the popular aligners in memory usage and mapping accuracy. The source code is available online.

- X. Chen, H. Huo, J. Huan, and J. S. Vitter.
“Efficient Graph Similarity Search in External Memory,”
*IEEE Access*,**5**, March 14, 2017, 4551-4560,`dx.doi.org/10.1109/ACCESS.2017.2682107`. Many real-world applications, such as bioinformatics, data mining, pattern recognition, and social network analysis, benefit from efficient solutions for the graph similarity search problem. Existing methods have limited scalability when handling large graph databases, for example, those with millions or billions of graphs that cannot fit in main memory. In this paper, we study the problem of graph similarity search under the graph edit distance constraint in external memory. We present an efficient framework for arbitrary q-gram-based representations of a graph. Specifically, we propose a q-gram matrix index stored in hybrid layout in external memory to achieve efficient query processing, by converting the q-gram counting filter into a sparse matrix-vector multiplication (SpMV) problem. Furthermore, we boost query performance by transforming the global filter into a two-dimensional query rectangle, which allows us to perform a query in a reduced region, significantly reducing the number of query I/Os in practice. Extensive experiments on real datasets confirm that (1) our method can compete with the state-of-the-art in-memory methods in index size and filtering ability, and outperforms them in scalability, coping with the PubChem dataset of 25 million chemical structure graphs; and (2) compared with the popular q-gram-based external inverted index, our external index structure requires far fewer query I/Os on the PubChem dataset.

- H. Huo, X. Chen, Y. Zhao, X. Zhu, and J. S. Vitter.
“Practical Succinct Text Indexes in External Memory,”
*Proceedings of the 2018 Data Compression Conference (DCC '18)*, Snowbird, UT, March 2018, 217-226. Chien et al. [1, 2] introduced the geometric Burrows-Wheeler transform (GBWT) as the first succinct text index for I/O-efficient pattern matching in external memory; it operates by transforming a text into a point set in the two-dimensional plane. In this paper we introduce a practical succinct external memory text index, called mKD-GBWT. We partition into subregions by partitioning the x-axis into intervals using the suffix ranges of characters of and partitioning the y-axis into intervals using characters of , where is the alphabet size of . In this way, we can represent a point using fewer bits and perform a query in a reduced region, so as to improve the space usage and I/Os of GBWT in practice. In addition, we plug a crit-bit tree into each node of the string B-trees to represent the variable-length strings stored there. Experimental results show that mKD-GBWT provides a significant improvement in space usage compared with state-of-the-art indexing techniques. The source code is available online [3].

- H. Huo, X. Chen, X. Guo, and J. S. Vitter.
“Efficient Compression and Indexing for Highly Repetitive DNA Sequence Collections,”
*IEEE/ACM Transactions on Computational Biology and Bioinformatics*, January 2020. The development of next-generation, high-throughput sequencing technologies has dramatically reduced the cost of producing next-generation sequencing (NGS) data, leading to explosive growth in NGS data.

In this paper, we focus upon the important problem of indexing and searching highly repetitive DNA sequence collections. Given a collection of sequences of length each, we can represent succinctly in bits using time, where is the th-order empirical entropy of the sequence that is used as the reference sequence, is the total number of variations between and the sequences in , and is a small fixed constant. We can restore the length- substring of in time and report the occurrences where occurs in in time. In addition, we propose a method to find the variations between and the sequences in , with which we can build succinct structures to enable fast search. For highly repetitive sequences, experimental results on the tested data demonstrate that the proposed method has significant advantages in space usage and retrieval time over the current state-of-the-art methods.

The source code is available online.

- H. Huo, P. Liu, C. Wang, H. Jiang, and J. S. Vitter.
“CIndex: Compressed Indexes for Fast Retrieval of FASTQ Files,”
*Bioinformatics*, btab655, published electronically September 15, 2021,`doi.org/10.1093/bioinformatics/btab655`, 9 pages. We propose a compressed index for FASTQ files called CIndex. CIndex uses the Burrows-Wheeler transform and the wavelet tree, combined with hybrid encoding, succinct data structures, and special tables, to achieve minimal space usage and fast retrieval on compressed FASTQ files. Experiments conducted on real, publicly available datasets from various sequencing instruments demonstrate that our proposed index substantially outperforms existing state-of-the-art solutions. For count, locate, and extract queries on reads, our method uses 2.7-41.66 percentage points less space and provides speedups of 70-167.16 times, 1.44-35.57 times, and 1.3-55.4 times, respectively. For extracting records from FASTQ files, our method uses 2.86-14.88 percentage points less space and provides a speedup of 3.13-20.1 times. CIndex has the additional advantage that it can readily be adapted to work as a general-purpose text index; experiments show that it performs very well in practice.

The software is available on Github: https://github.com/Hongweihuo-Lab/CIndex.

- H. Huo, C. Hong, and J. S. Vitter.
“Practical High-order Entropy-compressed Text Indexing,”
*IEEE Transactions on Knowledge and Data Engineering*, 2021,`doi.org/10.1109/TKDE.2021.3114401`. Source code published in*Code Ocean*, November 2020,`codeocean.com/capsule/3554560/tree/v1`. Compressed self-indexes are used widely in string processing applications, such as information retrieval, genome analysis, data mining, and web searching. The index not only indexes the data but also encodes it, in compressed form. Moreover, the index and the data it encodes can be operated upon directly, without the need to uncompress the entire index, thus saving time while maintaining small storage space. In some applications, such as genome analysis, existing methods do not exploit the full possibilities of compressed self-indexes, and thus we seek faster and more space-efficient indexes. In this paper, we propose a practical high-order entropy-compressed self-index for efficient pattern matching in a text. We give practical implementations of compressed suffix arrays using a hybrid encoding in the representation of the neighbor function. We analyze the performance, in theory and practice, of our recommended indexing method, called GeCSA. We can improve retrieval time further using an iterated version of the neighbor function. Experimental results on the tested data demonstrate that the proposed index GeCSA has good overall advantages in space usage and retrieval time over state-of-the-art indexing methods, especially on repetitive data.