
Focus Areas

Computational mechanics

Computational mechanics is concerned with the development and application of computational methods based on the principles of mechanics. The field has had a profound impact on science and technology over the past three decades, effectively transforming much of classical Newtonian theory into practical and powerful tools for prediction and understanding of complex systems and for creating optimal designs.

Active research topics within our Group include the development of new finite element methods (e.g., the discontinuous Galerkin method); computational acoustics and fluid-structure interaction; algorithms for dynamic and transient transport phenomena; adaptive solution schemes using configurational forces; and modeling the behavior of complex materials and biological tissues. The group is actively engaged in methods and algorithm development for high-performance computing, including massively parallel computing. A recent emphasis is the coupling of techniques for analysis at the quantum, atomistic, and continuum levels to achieve multiscale modeling.
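To give a flavor of the finite element methods mentioned above, here is a minimal illustrative sketch (not the group's code; the function name and the midpoint quadrature rule are choices made for this example) of a one-dimensional piecewise-linear finite element solver for the Poisson problem -u'' = f with homogeneous Dirichlet boundary conditions:

```python
import numpy as np

def fem_poisson_1d(f, n):
    """Solve -u'' = f on [0, 1] with u(0) = u(1) = 0 using
    piecewise-linear finite elements on a uniform mesh of n elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n + 1, n + 1))   # global stiffness matrix
    F = np.zeros(n + 1)            # global load vector
    for e in range(n):
        # Element stiffness for linear elements: (1/h) * [[1, -1], [-1, 1]]
        K[e:e + 2, e:e + 2] += (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        # Element load via midpoint quadrature.
        xm = 0.5 * (x[e] + x[e + 1])
        F[e:e + 2] += 0.5 * h * f(xm)
    # Homogeneous Dirichlet conditions: solve on the interior nodes only.
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    return x, u

# Manufactured solution: f = pi^2 sin(pi x) gives exact u = sin(pi x).
x, u = fem_poisson_1d(lambda t: np.pi**2 * np.sin(np.pi * t), 64)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Refining the mesh (increasing `n`) drives `err` down, which is the convergence behavior that more advanced methods such as discontinuous Galerkin generalize to complex geometries and discontinuous fields.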

Multiphysics modeling

Multiphysics modeling arises from the need to model complex mechanical, physical, and/or biological systems whose functionality depends on interactions among chemical, mechanical, and/or electronic phenomena. These systems often span wide ranges of time and length scales, which requires the development of numerical and mathematical techniques to describe and model the coupling between those scales, with the goal of designing and optimizing new engineering devices.

Myriad applications exist, ranging from novel molecular-scale devices based on nanotubes and proteins, to sensors and motors that operate under principles unique to the nanoscale. Computer simulation is playing an increasingly important role in nanoscience research to identify the fundamental atomistic mechanisms that control the unique properties of nanoscale systems.

Computational bioengineering

Computational bioengineering is a rapidly advancing field of research that promises major discoveries of both fundamental and technological importance in the coming years. The interface between biology and computational engineering will be among the most fruitful research areas: as biology continues its transformation into a quantitative discipline, engineers, especially those employing computation, will play a central role in the next phase of the biological revolution.

As physical models improve and greater computational power becomes available, simulation of complex biological processes, such as the biochemical signaling behavior of healthy and diseased cells, will become increasingly tractable. The group plays an active part in this research effort at Stanford through collaborative projects with the School of Medicine in areas such as the mechanics of the ear and hearing, the eye and vision, growth and remodeling, simulation of proteins and mechanically gated ion channels, tissue engineering, and stem cell differentiation.

Machine Learning

Machine Learning (ML) is emerging as a new way to solve complex engineering problems. It has had great success in computer science for processing large amounts of data, and it is now opening new opportunities in computational mechanics. The Mechanics and Computation Group is pursuing several areas of research in ML, including: physics-based learning models, reduced order models to reduce the complexity of large-scale simulations, novel design strategies, autonomous driving, device and sensor monitoring, optimization, imaging and inverse problems, decision making, classification, and regression.
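As a small illustration of the reduced order models mentioned above, the following sketch (illustrative only; the snapshot data and function name are invented for this example) builds a proper orthogonal decomposition (POD) basis from solution snapshots via the singular value decomposition, then reconstructs a snapshot from a handful of reduced coordinates:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition: return the r dominant spatial modes
    of a matrix whose columns are solution snapshots, plus singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# Illustrative snapshot data: a travelling-wave-like field sampled in time.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
S = np.array([np.sin(2 * np.pi * (x - c)) + 0.1 * c * np.cos(4 * np.pi * x)
              for c in t]).T              # shape (200, 50): 50 snapshots

Phi, s = pod_basis(S, r=4)
# Project one snapshot onto the reduced basis and reconstruct it:
a = Phi.T @ S[:, 10]                      # 4 reduced coordinates, not 200 values
recon = Phi @ a
err = np.linalg.norm(recon - S[:, 10]) / np.linalg.norm(S[:, 10])
```

The reduced coordinates `a` stand in for the full state, which is the basic mechanism by which reduced order models cut the cost of repeated large-scale simulations.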

Metal additive manufacturing

3D printing has captured the imagination of the public because of the freedom and access to do-it-yourself projects that it enables, and the surging popularity of Maker Faires. Metal 3D printing, or metal additive manufacturing, is poised to play an important role in the automotive, aeronautics, aerospace, and national defense industries, among others. The group has an incipient program in this area that merges the application and development of powerful modeling and simulation tools with the design and creation of experiments to advance the state of the art in metal 3D printing. It brings the virtual and the real world seamlessly together, since simulation tools will play a crucial role in the understanding, design, and control of manufacturing processes with this technology.

Experimental nano-mechanics

Experimental nano-mechanics is the measurement of the mechanical response of nanostructured materials. We study the strength, deformation and failure of nanoscale metals, oxides and semiconductors in order to design strong and lightweight structural materials, damage tolerant composites, and mechanically actuated nano-sensors. Active projects include compression testing of core-shell nanocrystals, development of high strain rate testing techniques for microscale samples, and additive manufacturing of metallic nanostructures.

Microscale mechanical measurements

Microscale devices for system monitoring and modeling are also used to measure nanoscale mechanical behavior. In the Mechanics and Computation Group, we have a special interest in micro- and nanoscale mechanical behavior, including material properties and the biomedical applications of nanofabricated devices.

Research includes developing diagnostic tools, measurement and analysis systems, and reliable manufacturing methods. Active projects include piezoresistive force sensing and optimal processing, cell stimulation and force measurements, understanding the biological sense of touch, and silicon probes for microscopy and sensing.



Computing resources

The Mechanics and Computation Group has access to Sherlock, Stanford's shared computing resource. We have also purchased nodes on Sherlock that are reserved for our group; these nodes have NVIDIA TITAN X GPUs available for large-scale scientific simulations.

Here is a brief description of Sherlock:
  1. Four load balanced login nodes
  2. 120 general compute nodes, 16 CPUs with dual socket Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (8 core/socket); 64 GB RAM (1866 MHz DDR3), 100 GB local SSD 
  3. 2 "bigmem" nodes, 32 CPUs with quad socket Intel(R) Xeon(R) CPU E5-4640 @ 2.40GHz (8 core/socket); 1.5 TB RAM; 13 TB local storage 
  4. 6 GPU nodes with dual socket Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz; 256 GB RAM; 200 GB local storage
    1. 2 nodes with 8 NVIDIA Tesla K20Xm
    2. 3 nodes with 8 NVIDIA GeForce GTX TITAN Black
  5. 2:1 oversubscribed FDR Infiniband network 
  6. Isilon high speed NFS for /home; backed up via snapshots and replicated to alternate site. Snapshots are in ~/.snapshot 
  7. 1 PB Lustre parallel file system for /scratch 

The group also has access to resources from the High Performance Computing Center.


The group has a 3D printer (Aconity 3D Mini) for research purposes.