Cross-Layer Memory Management
Recent trends in computing call for an increased focus on power and energy
consumption, as well as support for multi-tenant use cases.
At the same time, data-intensive computing is placing larger demands on
physical memory systems than ever before.
However, it is very challenging to obtain precise control over the
distribution of memory capacity, bandwidth, and power.
Controlling memory power and performance is difficult because these effects
depend upon the results of activities across multiple layers of the vertical
execution stack, and such results are often not available at any single layer.
To address these challenges, we are investigating approaches to memory
management that increase collaboration between layers of the vertical
execution stack (i.e., compilers, applications, middleware, operating system,
and hardware).
We have designed and developed an approach that enables applications to
provide guidance to the operating system regarding allocation and recycling
of physical memory.
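The project's guidance interfaces are its own; as a rough, minimal analogy
using the existing Linux madvise(2) mechanism (exposed in Python 3.8+ on
Linux; the buffer size and advice choice here are purely illustrative), an
application can tell the kernel which physical pages it no longer needs:

```python
import mmap

# Map four pages of anonymous, private memory and touch them so the
# kernel actually backs them with physical page frames.
LENGTH = mmap.PAGESIZE * 4
buf = mmap.mmap(-1, LENGTH, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
buf[:] = b"\xab" * LENGTH

# Guidance to the OS: this data is no longer needed, so the kernel may
# reclaim the physical frames immediately rather than waiting for memory
# pressure. Later reads of the range see zero-fill-on-demand pages.
buf.madvise(mmap.MADV_DONTNEED)
print(buf[0])  # -> 0: the original contents were discarded
```

The same call family (MADV_WILLNEED, MADV_SEQUENTIAL, etc.) lets an
application shape how the OS allocates and recycles its physical memory.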
We are currently exploring dynamic profiling and analysis to
automatically derive and apply beneficial guidance during program execution.
Publications: Linux Symposium 2014, VEE 2013
Exploiting Phase Interactions during Phase Order Search
Programs written in managed languages, such as Java and C#, execute in the
context of a virtual machine (VM), also called a runtime system, which
compiles program methods at runtime to achieve high-performance emulation.
Managed runtime systems need to consider several factors when deciding how,
when, or if to compile program methods, including: the compiling speed and
code quality produced by the available compiler(s), the execution frequency
of individual methods, and the availability of compilation resources.
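One classic way to balance these factors is a counter-based policy; the
sketch below is a toy illustration (the class, method names, and threshold
value are invented here, not taken from our systems) in which a method is
interpreted until its invocation count proves it hot enough to repay a
one-time compilation cost:

```python
HOT_THRESHOLD = 10  # invocation count that triggers compilation (assumed)

class Method:
    """Toy stand-in for a VM method with a simple hotness counter."""
    def __init__(self, name):
        self.name = name
        self.calls = 0
        self.compiled = False

    def invoke(self):
        self.calls += 1
        # Compile only methods that execute frequently enough to repay
        # the one-time compilation cost with faster compiled code.
        if not self.compiled and self.calls >= HOT_THRESHOLD:
            self.compiled = True
        return "compiled" if self.compiled else "interpreted"

m = Method("hot_loop_body")
modes = [m.invoke() for _ in range(12)]
print(modes.count("interpreted"), modes.count("compiled"))  # 9 3
```

Choosing the threshold is itself a tradeoff: too low wastes compilation
resources on cold methods, too high leaves hot methods interpreted.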
Our research in this area explores the tradeoffs involved in selective
compilation and the potential of applying iterative search techniques to
tune these compilation decisions.
Publications: TACO 2013
It is widely accepted that program-specific or function-specific compiler
optimization phase sequences achieve better overall performance than any
single fixed optimization phase ordering.
In order to find the best combination of phases to apply to a particular
function or program, researchers have developed iterative search techniques
to quickly evaluate many different orderings of optimization phases.
While such techniques have been shown to be effective, they are also extremely
time-consuming due to the large number of phase combinations that must be
evaluated for each application.
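To see the scale: with even 15 distinct phases (an assumed, modest count),
the number of orderings in which each phase appears exactly once is already
15 factorial; real searches that allow repeated phases and variable-length
sequences face a still larger space.

```python
from math import factorial

n_phases = 15  # assumed number of distinct optimization phases
print(f"{factorial(n_phases):,} orderings")  # 1,307,674,368,000 orderings
```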
We conduct research that aims to reduce the phase ordering search space by
identifying and exploiting certain interactions between phases during the
search.
In addition to speeding up exhaustive iterative searches, this work has led
to the invention of a technique that can improve the efficacy of individual
optimization phases, as well as novel heuristics that find more effective
phase ordering sequences much faster than current approaches.
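A hedged sketch of the pruning idea (the toy phases and program states below
are invented for illustration, not our compiler's): if two different phase
prefixes produce identical code, their remaining subsequences need only be
explored once, so a search that merges identical intermediate states applies
fewer phases than independent enumeration while finding the same final
results.

```python
# Toy "phases": each is a deterministic transformation of a program
# state, modeled here as a tuple of ints.
def const_fold(s): return tuple(x for x in s if x != 0)  # drop "dead" zeros
def dedup(s):      return tuple(dict.fromkeys(s))        # remove duplicates
def reorder(s):    return tuple(sorted(s))               # canonical order

PHASES = [const_fold, dedup, reorder]

def naive_search(state, depth):
    """Enumerate every length-`depth` phase sequence independently."""
    results, evals = set(), 0
    def go(s, d):
        nonlocal evals
        if d == depth:
            results.add(s)
            return
        for phase in PHASES:
            evals += 1
            go(phase(s), d + 1)
    go(state, 0)
    return results, evals

def pruned_search(state, depth):
    """Level-by-level search that merges identical intermediate states:
    equal states behave identically under all future phases, so each
    distinct state is expanded only once per level."""
    frontier, evals = {state}, 0
    for _ in range(depth):
        nxt = set()
        for s in frontier:
            for phase in PHASES:
                evals += 1
                nxt.add(phase(s))
        frontier = nxt
    return frontier, evals

start = (0, 2, 1, 2, 0)
r_naive, e_naive = naive_search(start, 3)
r_pruned, e_pruned = pruned_search(start, 3)
print(r_naive == r_pruned, e_naive, e_pruned)  # True 39 30
```

Even on this three-phase toy, merging saves work; on realistic phase sets
the number of distinct function instances grows far more slowly than the
number of phase sequences, which is what makes exhaustive search tractable.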
Publications: CASES 2013, Master's Thesis (2010)