Santonu Sarkar

Qualifications: Ph.D., Computer Science, Indian Institute of Technology Kharagpur

Title: Professor

Affiliation: Department of Computer Science and Information Systems, BITS Pilani, K.K. Birla Goa Campus

Contact Details: santonus@goa.bits-pilani.ac.in or santonus@acm.org
http://www.bits-pilani.ac.in/goa/santonus/profile

Short CV:
Dr. Santonu Sarkar received his Ph.D. in computer science from the Indian Institute of Technology Kharagpur in 1996. He has more than 18 years of experience in applied research, product and application development, consulting, and project and client account management. He is currently a professor in the Computer Science and Information Systems department at BITS Pilani, K.K. Birla Goa Campus. Prior to BITS, Dr. Sarkar built the next-generation computing research group at Infosys Labs, defining research goals, creating solutions, and fostering research collaboration with internal and external partners. Earlier, at Accenture Technology Labs, he was responsible for research-based tool development in software engineering, building a research network and ecosystem within and outside the organization, and managing research hiring and the global internship program (such as the MIT-Accenture relationship program). Dr. Sarkar has published in and served on the program committees of IEEE TSE, IEEE SE, JSS, IS, IEEE TSC, APSEC, ISEC, ICSE, ISSRE, ICDCN, IEEE Cloud, IEEE DSN, and IEEE ISPA. During his Ph.D. he also published papers in the areas of ODBMS, object-oriented VLSI CAD, and expert systems. He has filed several patents, 10 of which have been granted.

Title of Talk 1: Software Dependability for Next Gen Systems
Synopsis: Software for massive transaction-processing systems, smart grids, and telecommunication systems is required to deliver the expected quality of service without disruption almost all the time. Because of the enormous complexity of such a system and its operating environment, applications become much more vulnerable to failure. Assuring the fault tolerance of such systems is therefore both essential and highly challenging; the challenges arise from changing business needs and unexpected usage scenarios in the operating environment. Given this, what are the important ingredients for crafting the fault tolerance of such a system? In this talk we present a range of useful models and methods, spanning the stages of the software development life cycle, for designing and managing such a system.
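
As a small illustration of one such ingredient (not taken from the talk itself), the Python sketch below retries a transient operation with exponential backoff and jitter, a basic run-time fault-tolerance mechanism for transaction-processing services; the function names and parameters are assumptions made for the example.

    # Illustrative sketch only: retrying a transient operation with
    # exponential backoff, one of the simplest fault-tolerance ingredients
    # a transaction-processing service might use at run time.
    import random
    import time

    def call_with_retry(operation, max_attempts=5, base_delay=0.1):
        """Invoke `operation`; on failure, wait (with jitter) and retry,
        giving up after `max_attempts` tries."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == max_attempts:
                    raise  # attempts exhausted: surface the failure to the caller
                # exponential backoff with jitter to avoid retry storms
                time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))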

Title of Talk 2: Software Engineering Challenges in Building HPC Applications
Synopsis: The promise of parallelism has fascinated researchers for many decades. In the past, parallel computing efforts showed promise, but in the end uniprocessor computing always prevailed because of the lack of software support for developing and running programs on parallel machines. In the uniprocessor computing model, software designers primarily focused on building large-scale software that is reusable, efficient in memory utilization, and optimized for execution speed. For speedup, designers implicitly relied on the transistor count, which used to double every 18 months. The implicit hardware/software contract was that growth in transistor count and power dissipation was acceptable as long as software architects maintained the existing sequential programming model. As chip manufacturers soon hit the limit on the power a chip can dissipate, they turned to multi-core and many-core CPUs to offer better speedup. In addition, co-processors such as NVIDIA GPGPUs and Intel's Xeon Phi came into the mainstream, offering massively parallel computation for certain classes of workloads. This forced software designers to find a new paradigm to sustain ever-increasing performance demands. However, writing applications that exploit such a heterogeneous environment is extremely hard. Moreover, large computations over a sustained period consume enormous amounts of energy, which is becoming a concern for infrastructure owners. It is therefore extremely important to revisit existing techniques and create new ones for developing applications with both power and performance in mind. In this talk we look at some of the software engineering challenges in building applications for hardware that supports parallel computing and how to overcome them.
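
To make the speedup discussion concrete, here is a minimal Python sketch, assuming Amdahl's law as the model of parallel speedup (the synopsis does not name it explicitly); it shows why adding cores helps only up to the limit set by the sequential fraction of a program.

    # Illustrative sketch (not from the talk): Amdahl's law puts an upper
    # bound on parallel speedup when part of a program stays sequential.
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        """Predicted speedup for a program whose parallelizable share
        is `parallel_fraction`, run on `cores` processing units."""
        sequential = 1.0 - parallel_fraction
        return 1.0 / (sequential + parallel_fraction / cores)

    if __name__ == "__main__":
        # Even with 95% of the work parallelized, 64 cores give only ~15x,
        # which is one reason exploiting many-core and GPGPU hardware is hard.
        for cores in (4, 16, 64, 256):
            print(cores, round(amdahl_speedup(0.95, cores), 1))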

Title of Talk 3: Cloud Based Next Generation Service and Key Challenges
Synopsis: The National Institute of Standards and Technology (NIST) defines cloud computing as a model that enables on-demand access to computing resources, provisioned rapidly with minimal client-provider interaction or management effort. Consumers of the cloud obtain everything, such as software, platforms, and even computing infrastructure, as a service rather than as an on-premise product. The cloud is perceived to be the game-changer of this decade because it offers a flexible, on-demand, and elastic delivery model for business, especially for services. To realize the potential and promise of the cloud, we need to step back and revisit the entire lifecycle of a service along six dimensions, namely the way a service is (i) designed, (ii) engineered, (iii) deployed, (iv) measured, (v) managed, and (vi) experienced. Doing so helps identify the key concerns and barriers to wide-scale adoption. In this talk, we introduce the notion of the "next generation service," which embraces cloud computing as its backbone. We then describe each of the six aspects and identify a set of important technical challenges that hinder the complete realization of the next generation service. These challenges remain largely unresolved, and we believe that satisfactory solutions to them would go a long way toward making the benefits of the next generation service available for wider consumption.

Title of Talk 4: Virtualized Environments—Benefits and Overheads
Synopsis: Virtualization is a disruptive technology that can drive significant cost savings; the key, however, lies in optimal resource allocation. Resource management in virtualized environments entails virtual machine (VM) sizing and placement on physical machines. Unlike in a non-virtualized environment, resource management becomes the primary challenge because multiple workloads run on the same physical infrastructure and can be dynamically consolidated across machines. Comprehensive resource management therefore requires models and methods for workload and performance analysis that account for the impact of virtualization overheads, and the outcomes of these approaches must be applied with the performance characteristics of the underlying hardware and software (primarily the hypervisor) in mind. In this talk, we begin with an introduction to virtualization technology and highlight the key challenges in resource allocation. We then lay the foundation for statistical analysis of workloads and delve into approaches for VM sizing and placement. Finally, we discuss virtual machine introspection as a mechanism for verifying these algorithms empirically.
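
The synopsis does not prescribe a particular placement algorithm; as one common, simplified illustration of the VM placement problem it describes, the Python sketch below packs VMs onto physical machines with a first-fit-decreasing heuristic, modeling only CPU demand (all capacities and names are assumptions for the example).

    # Illustrative sketch only: a first-fit-decreasing heuristic for VM
    # placement, one of many possible approaches to the problem the talk
    # describes. CPU demand is the only resource modeled here.
    def place_vms(vm_demands, host_capacity):
        """Assign each VM (given by its CPU demand) to the first host with
        room, considering VMs in decreasing order of demand.
        Returns a list of hosts, each a list of VM demands."""
        hosts = []
        for demand in sorted(vm_demands, reverse=True):
            for host in hosts:
                if sum(host) + demand <= host_capacity:
                    host.append(demand)
                    break
            else:
                hosts.append([demand])  # no host fits: open a new physical machine
        return hosts

    if __name__ == "__main__":
        # Example: consolidate eight VMs onto hosts with 16 CPU units each.
        print(place_vms([8, 6, 5, 4, 4, 3, 2, 1], host_capacity=16))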




