Pattern Matching for Massive Data Sets
Project Award Date: 08-01-2010
Pattern matching is a fundamental research field with applications in domains such as biological sequence alignment, web search engines, and network intrusion detection. Given a pattern "P" and a text string "T," the central problem is to find occurrences of P in T. When data become massive, we cannot assume that the text fits in RAM. Pattern matching problems must instead be considered in more appropriate models, such as the external-memory model, the cache-oblivious model, streaming models, the MapReduce paradigm, and multi-core models.
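To make the central problem concrete, the following is a minimal in-memory sketch of finding all occurrences of P in T using the classic Knuth-Morris-Pratt algorithm (O(|T| + |P|) time). This is purely illustrative of the RAM-model baseline; the project itself targets the external-memory and streaming settings described above, and the function name here is hypothetical.

```python
def find_occurrences(pattern, text):
    """Return start indices of all occurrences of pattern in text (KMP)."""
    if not pattern:
        return []
    # Failure function: length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text once, never backing up over text characters.
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - len(pattern) + 1)
            k = fail[k - 1]  # allow overlapping matches
    return hits
```

For example, searching for "ana" in "banana" reports the two overlapping occurrences at positions 1 and 3.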
In collaboration with researchers from Louisiana State University, ITTC researchers will develop efficient search algorithms and indexes for settings where data reside on disk or in network storage, or are accessible only as an online stream. Data must remain efficiently searchable even when stored in compressed form. Issues of I/O efficiency and space utilization are central to this project, which involves developing suitable massive-data-set models, deriving optimal theoretical bounds, and implementing practical tools. Methodologies include combinatorial and randomized methods in pattern matching, succinct data structures, top-k query processing, and I/O-efficient indexes.
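To illustrate the index-based approach in miniature: a suffix array is one of the classic full-text indexes underlying the succinct and I/O-efficient structures mentioned above. The sketch below builds a suffix array naively and answers pattern queries by binary search; it is a toy illustration under simplifying assumptions (whole text in memory, naive O(n² log n) construction), not the project's actual data structure, and the function names are hypothetical.

```python
def build_suffix_array(text):
    """Sort all suffix start positions lexicographically (naive build)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def locate(pattern, text, sa):
    """Return sorted start positions of pattern via binary search on sa."""
    m = len(pattern)
    # Leftmost suffix whose first m characters are >= pattern.
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    # Leftmost suffix whose first m characters are > pattern.
    hi = len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])
```

Because the index is built once and each query touches only O(log n) suffixes, the same idea adapts naturally to disk-resident data, where minimizing random accesses is the dominant cost.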
The project will build new, solid theoretical foundations in pattern matching, with direct applications to fields such as databases and information retrieval. It will significantly advance the current state of the art in web search engine technology (by impacting the way inverted indexes are used) and in genome sequence alignment tools (e.g., BLAST). Tools and software developed during this project will be widely distributed to the research community.
Prof. Rahul Shah from Louisiana State University serves as co-PI on this project.
Faculty Investigator(s): Jeffrey Vitter (PI)