Suparna Bhattacharya
Eminent Speaker
Short CV: Suparna Bhattacharya is a Distinguished Technologist in the Hewlett Packard Enterprise Storage CTO office. She holds a PhD in Computer Science and Automation from the Indian Institute of Science and a B.Tech in Electronics and Electrical Communication Engineering from IIT Kharagpur. Suparna worked at IBM from 1993 to 2014, delving into operating systems and file-system internals on various platforms. She made her foray into the Linux kernel in 2000 and discovered the joys of working on open source. Her contributions to Linux span multiple areas, and she was regularly invited to chair sessions at the Linux Kernel Summit, a by-invitation-only event where key contributors to the Linux kernel get together to decide its future roadmap. Suparna was elected to the IBM Academy of Technology in 2005; in 2012, after an educational leave of absence to pursue doctoral studies, she moved to IBM's research division. Her dissertation, "A systems perspective of software runtime bloat and its power-performance implications", influenced diverse research communities, resulting in publications at top-tier venues such as SIGMETRICS, ECOOP, HotOS, HotPower, OOPSLA and ICML, and was recognized as the best PhD thesis in the Department of Computer Science and Automation at IISc. At IBM Research, Suparna initiated exploratory projects on software-defined memory and systems-software co-design for extreme-scale contextual and cognitive computing. These days at Hewlett Packard Enterprise, she focuses on the implications of emerging non-volatile memory technologies for future systems.
Title of Talk 1: Bridging Lilliput and Brobdingnag—How the Linux kernel has sustained diversity without giving up on efficiency
Synopsis: As the Linux kernel has evolved to satisfy the needs of diverse operating environments, advances in technology have only intensified the challenge of reconciling conflicting design goals: making new kernel contributions while simultaneously containing code and data bloat. Yet Linux has maintained a remarkable track record of sustaining a high rate of change for more than a decade since the beginning of the 2.6 kernel series. Although the code size has increased to 20 million lines, run-time efficiency remains a central concern in kernel programming. The trick lies in adopting development patterns that minimize the overheads of feature generality in code paths where that generality is not needed. New contributors are thus urged to carefully craft their proposed changes as incremental steps of simple, logically self-contained patches for ease of review and maintenance. It sounds easy enough, but getting used to the model is harder than it appears; the effort, however, is well worth it.
This talk elucidates the challenges that necessitate the minimalist approach characteristic of Linux kernel development, with specific examples drawn mostly from personal experience. The iterations it took to get key enterprise features finally included bring out the elegant ways in which Linux redefines the concept of modularity as simple innovations around a low center of gravity. It is perhaps this addiction to ephemeralization in the incorporation of even the smallest kernel enhancements that keeps Linux from becoming a victim of its own success as it stretches into spaces where no general-purpose OS has gone before.
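As one concrete illustration of keeping feature generality out of hot paths, consider the kernel's static-key (jump-label) mechanism, which lets a rarely enabled feature compile down to a patched no-op on the common path. The sketch below is modeled on the API in include/linux/jump_label.h; the feature and function names are hypothetical, intended only to show the style of minimalism the talk discusses.

    /* A hypothetical optional feature guarded by a static key: the
     * branch site is live-patched when the key is flipped, so the
     * common case pays essentially nothing while the feature is off. */
    #include <linux/jump_label.h>
    #include <linux/types.h>

    static DEFINE_STATIC_KEY_FALSE(my_feature_enabled);

    static void my_feature_hook(unsigned long arg)
    {
            /* slow path: runs only while the feature is switched on */
    }

    void hot_path(unsigned long arg)
    {
            if (static_branch_unlikely(&my_feature_enabled))
                    my_feature_hook(arg);
            /* ... common-case work continues here ... */
    }

    /* Flipped from, e.g., a sysfs or debugfs toggle: */
    void my_feature_set(bool on)
    {
            if (on)
                    static_branch_enable(&my_feature_enabled);
            else
                    static_branch_disable(&my_feature_enabled);
    }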
Title of Talk 2: Software Bloat, Lost Throughput and Wasted Joules
Synopsis: There appears to be an inherent tension between development-time productivity and the run-time efficiency of IT solutions. Software systems tend to accumulate "bloat" as a side effect of the very software engineering trends that have been so successful in fueling the widespread growth of frameworks, which enable applications to rapidly evolve in scale and function to handle ever more complex business logic, integration requirements, and unanticipated changes in demanding environments.
In the past, the benefits of such flexibility have far outweighed the cost of the overheads incurred. Today, shifting hardware technology trends and operational challenges create a need to pay greater attention to these inefficiencies and to devise techniques for mitigating their impact on energy consumption and system performance. However, distinguishing excess resource utilization due to "bloat" from essential function is highly non-trivial without a deep knowledge of the intended semantics of very complex software.
This talk presents a systems perspective on the problem of runtime bloat in large framework-based Java applications and its implications for energy-efficient design, including results from recent research and interesting open questions still to be tackled.
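Although the talk's case studies come from Java frameworks, the flavor of bloat at issue can be sketched in a few lines of C: a value takes a detour through a generic representation, as middleware layers often impose for flexibility, and the excess work costs both throughput and joules. The functions below are hypothetical stand-ins, not examples drawn from the talk itself.

    #include <stdio.h>
    #include <stdlib.h>

    /* Bloated path: the operands cross a "framework" boundary by
     * being serialized to text and parsed back, a common source of
     * transient-object and conversion overhead. */
    static long bloated_add(long a, long b)
    {
        char buf_a[32], buf_b[32];
        snprintf(buf_a, sizeof buf_a, "%ld", a);   /* serialize */
        snprintf(buf_b, sizeof buf_b, "%ld", b);
        return strtol(buf_a, NULL, 10)             /* parse back */
             + strtol(buf_b, NULL, 10);
    }

    /* Essential function: the same result without the detour. */
    static long lean_add(long a, long b)
    {
        return a + b;
    }

    int main(void)
    {
        long acc = 0;
        for (long i = 0; i < 10000000; i++)
            acc += bloated_add(i, i);   /* compare against lean_add() */
        printf("%ld\n", acc);
        return 0;
    }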
Title of Talk 3: Shaping software for the persistent memory era and other research problems at the NVM frontline
Synopsis: As a variety of non-volatile memory technologies orders of magnitude faster than flash finally appear on the horizon, we are approaching a long-awaited inflection point where storage will be best accessed as persistent memory rather than via IO interfaces. At these low latencies, the path-length overheads induced by traditional software stacks may no longer be affordable. This has significant architectural ramifications for software and systems design, both evolutionary and revolutionary, and has motivated substantial research attention in recent times. However, researchers have only begun to scratch the surface of the rich array of subtle issues and open problems that remain to be tackled. This talk provides an appreciation of the evolution of persistent memory technologies and a glimpse of some of the exciting challenges that lie ahead of us.
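To make the contrast with IO interfaces concrete, the minimal sketch below shows the load/store access model from userspace: the file is mapped and updated with ordinary stores rather than read()/write() system calls. It assumes a file on a DAX-mounted persistent-memory filesystem at a hypothetical path; on real NVM one would typically flush cache lines, e.g. via PMDK's libpmem, rather than rely on msync() alone.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical path on a DAX-mounted pmem filesystem. */
        int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
        if (fd < 0 || ftruncate(fd, 4096) != 0)
            return 1;

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Persistence by store, not by write(): an ordinary string
         * copy updates the durable medium directly. */
        strcpy(p, "persisted via loads and stores");
        msync(p, 4096, MS_SYNC);   /* durability point in this sketch */

        munmap(p, 4096);
        close(fd);
        return 0;
    }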
Title of Talk 4: Big data analytics: When will system software catch up?
Synopsis: The big data wave has spurred an exciting ecosystem of analytics platforms and software frameworks. Yet, despite all the advancements of recent years, present-day operating systems and storage servers are still far from being naturally geared toward big data analytics. Much of the innovation in this space has instead been realized via layers of middleware that step around the gaps between the needs of this emerging computing environment and the world for which established operating system mechanisms (such as virtual memory) and storage architectures were traditionally designed. However, such diffusion of OS and storage functionality from the core of the system to higher layers in the stack introduces inefficiencies, along with challenges of cross-layer coordination and adaptation as new system architectures evolve.
As the landscape of hardware technologies undergoes fundamental changes with the advent of byte-addressable non-volatile memory, photonic interconnects and specialized SoCs, perhaps it is time to take a step back and co-design stacks more systematically to cope with the unabated growth of digital data, given estimates of 30+ billion intelligent devices by 2020. This talk explores what it could take to revitalize native system software mechanisms, in both the storage layers and the operating system, to keep up with the intrinsic needs of big data analytics.
Suparna Bhattacharya
Qualifications: PhD CSA, Indian Institute of Science; B.Tech, ECE, IIT Kharagpur
Title: Distinguished Technologist
Affiliation: Hewlett Packard Enterprise
Contact Details: [email protected] or [email protected]