CSG Seminars


  • Introduction to Wafer Etching using VSim
    Daniel Main, Tech-X Corporation, abstract
    [#s1232, 10 Feb 2021]
Plasma processing chambers for the etching of wafers are often used to create a uniform etch along most of the wafer. In such a chamber, a plasma is created using an RF source via capacitive coupling (CCP) or inductive coupling (ICP). The source region is often far from the wafer (thousands of electron Debye lengths), so the plasma is nearly uniform throughout most of the chamber. The physics that requires a kinetic approach therefore occurs near the wafer (within a few hundred Debye lengths). An important part of the process is the acceleration of the ions by the sheath that forms near the wafer. However, the discontinuity in the boundary near the edge of the wafer leads to a non-uniform sheath and hence non-uniform ion velocities impacting the wafer. One way to make the sheath more uniform is to place a “focus ring” (FR) near the wafer edge. To model the essential physics near the wafer, including the effect of the FR on the sheath dynamics, we have used the electromagnetic, fully kinetic, particle-in-cell simulation package VSim. The simulation includes electrons, argon ions, and neutral argon gas. We also include collisions between electrons and neutral species, secondary emission off the wafer, and the self-consistent calculation of the electric field, with a proper treatment of the wafer and FR dielectric constants. Since the electric field is determined by Poisson’s equation, a full kinetic treatment of the electrons is essential for computing the sheath physics, and hence the ion dynamics, correctly. Because of the small spatial and time steps required for a fully kinetic model, the domain covers about half the wafer up to its edge and about 200 Debye lengths above the wafer. We inject both electrons and ions (modeled as drifting Maxwellians) at the boundary opposite the wafer using incoming-flux boundary conditions, which ensure a smooth transition from the assumed infinite plasma reservoir outside the simulation into the simulation domain. We use rejection sampling to compute the correct incoming-flux velocities of the injected particles. The boundary that includes the wafer is an absorbing boundary; electrons and ions accumulate on the dielectrics at this boundary. We show that elastic collisions tend to create a more symmetric Ion Angular-Energy Distribution (IAED) function about the normal. Finally, we demonstrate the role the focus ring has on the IAED and sheath dynamics. On the practical side, I will provide a live demonstration of how to open and run VSim on Eddy. I will demonstrate how to set up the wafer etching problem and start the job using the built-in scheduler. Finally, I will show you how to visualize the data.
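The idea behind flux-weighted injection can be illustrated outside VSim: particles crossing a boundary from a Maxwellian reservoir arrive at a rate proportional to their normal velocity, so the injected normal velocities must be drawn from v·f(v), not f(v). Below is a minimal rejection-sampling sketch for the non-drifting case (the drifting-Maxwellian case used in the talk is analogous but messier); `sample_flux_maxwellian`, the thermal speed `vt`, and the cutoff `vcut` are illustrative names, not VSim API.

```python
import math
import random

def sample_flux_maxwellian(vt, vcut=5.0, rng=random):
    """Draw one inward normal velocity from the flux-weighted half-Maxwellian
    g(v) ~ v * exp(-v^2 / (2 vt^2)),  0 < v <= vcut * vt,
    by rejection sampling against a uniform proposal."""
    g_max = vt * math.exp(-0.5)  # peak of g, attained at v = vt
    while True:
        v = rng.uniform(0.0, vcut * vt)            # uniform candidate
        if rng.uniform(0.0, g_max) <= v * math.exp(-v * v / (2.0 * vt * vt)):
            return v                               # accept with prob g(v)/g_max
```

The accepted samples have mean vt·sqrt(pi/2), noticeably faster than the reservoir Maxwellian's mean speed, which is exactly why naive sampling from f(v) under-injects fast particles and distorts the sheath.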
  • Large-Eddy Simulation of Flow Past Cylinders and Airfoils
    Ravi Samtaney, abstract, slides
    [#s950, 10 Jan 2019]
A canonical flow in fluid mechanics is flow past a circular cylinder. At a Reynolds number of roughly 300,000, a curious phenomenon known as the drag crisis is observed. We report recent progress on wall-resolved large-eddy simulation (LES) of flow past a circular cylinder. Three configurations are considered: a smooth cylinder, a grooved cylinder, and a rotating cylinder. An examination of the local skin friction and flow-separation details indicates that the drag crisis is not necessarily attributable to the boundary layer transitioning from a laminar to a turbulent state. In addition, we discuss recent results of wall-modeled LES of flow past airfoils, with an emphasis on strong validation against experiments. The wall model is essential to avoid resolving the flow at the wall and involves the derivation of a Dirichlet boundary condition for the velocity at a virtual wall. Acknowledgement: The Shaheen Cray XC40 at KAUST was utilized for all the simulations.
  • Solvers for Sparse Linear Systems
    Jin Chen, abstract, slides
    [#s952, 10 Dec 2018]
Finding the solutions of large sparse linear systems is part of our daily work here at the Lab. These linear systems are ubiquitous and appear in a wide range of applications in computational science. The goal of this seminar is to impart a working knowledge of the sparse direct and iterative methods used to solve these systems. The presentation will provide an overview of the algorithms, storage schemes, and available software, so that attendees can both understand the methods and know how best to use them.
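To make the storage schemes and iterative methods mentioned above concrete, here is a minimal sketch, in plain Python, of a compressed sparse row (CSR) matrix-vector product and an unpreconditioned conjugate-gradient (CG) iteration for symmetric positive-definite systems. Function names are illustrative; production work would use a library such as SciPy, PETSc, or SuperLU rather than hand-rolled loops.

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix stored in CSR form: data holds the nonzeros
    row by row, indices their column numbers, indptr the row boundaries."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

def cg(data, indices, indptr, b, tol=1e-10, maxiter=200):
    """Conjugate gradient for a symmetric positive-definite CSR matrix,
    starting from x0 = 0; tol bounds the squared residual norm."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x0
    p = r[:]                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = csr_matvec(data, indices, indptr, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

For the 1D Laplacian tridiag(-1, 2, -1), only the ~3n nonzeros are stored, and CG touches the matrix solely through `csr_matvec` — the key structural difference from a direct factorization, which must manage fill-in.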
  • Scientific Visualization Tools and Techniques
    Eliot Feibush, PPPL, abstract, slides
    [#s851, 26 Sep 2018]
    Scientific visualization enables insight, verification, and communication for presentations and publications. Visualization is closely tied to analyzing and exploring data generated by simulations and acquired in experiments. Animation is effective for representing complex behavior of variables over time. I will present software tools and techniques available for scientific visualization. Interactive programs such as VisIt and Paraview have a graphical user interface for exploring and displaying data. Visualization workflows developed in Python for PPPL projects will also be presented.
  • Optimization and testing for the gyrokinetic PIC codes ORB5 and XGC
    Aaron Scheinberg, PPPL, abstract, slides
    [#s784, 06 Jul 2018]
Like many legacy codes, the gyrokinetic PIC code ORB5 was written without GPUs or multi-core processors in mind and was thus unable to take full advantage of modern supercomputer architectures. Here, I summarize my work on ORB5: modularizing the code, introducing new algorithms, and implementing OpenMP parallelism. I then summarize recent progress on XGCa GPU functionality and unit testing.
  • Engineering GFDL’s Climate Models For Future Architectures
    Raymond Menzel (GFDL), abstract
    [#s854, 19 Jun 2018]
  • Introduction to parallel computing with MPI
    Stephane Ethier, PPPL, abstract, slides
    [#s852, 15 Dec 2017]
    This tutorial will introduce the Message Passing Interface (MPI), the most widely used method for distributed parallelism on small departmental clusters as well as on large leadership class computers. Practical use of MPI will be discussed, along with more advanced capabilities.
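MPI programs need a launcher (e.g. `mpirun`) to run, but the index arithmetic behind the most common distributed-parallelism pattern — dividing a global array as evenly as possible among ranks — can be sketched in plain Python. The function name `block_range` is illustrative; the same counts feed `MPI_Scatterv`/`MPI_Gatherv` in a real MPI code.

```python
def block_range(n, size, rank):
    """Half-open index range [lo, hi) owned by `rank` when n items are split
    among `size` ranks: every rank gets n // size items, and the first
    n % size ranks each get one extra, so loads differ by at most one."""
    base, extra = divmod(n, size)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi
```

In an MPI program each rank calls this with its own rank number (from `MPI_Comm_rank`) and loops only over its slice; the ranges tile the global index space with no gaps or overlaps, which is what makes a subsequent gather reassemble the full array correctly.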