Investigating Solution Convergence in a Global Ocean Model Using a 2048-Processor Cluster of Distributed Shared Memory Machines
Author(s)
Menemenlis, Dimitris; Ciotti, Bob; Henze, Chris; Hill, Christopher N.
Terms of use
Creative Commons Attribution (publisher with Creative Commons license)
Abstract
Up to 1920 processors of a cluster of distributed shared memory machines at the NASA Ames Research Center are being used to simulate ocean circulation globally at horizontal resolutions of 1/4, 1/8, and 1/16-degree with the Massachusetts Institute of Technology General Circulation Model, a finite volume code that can scale to large numbers of processors. The study aims to understand physical processes responsible for skill improvements as resolution is increased and to gain insight into what resolution is sufficient for particular purposes. This paper focuses on the computational aspects of reaching the technical objective of efficiently performing these global eddy-resolving ocean simulations. At 1/16-degree resolution the model grid contains 1.2 billion cells. At this resolution it is possible to simulate approximately one month of ocean dynamics in about 17 hours of wallclock time with a model timestep of two minutes on a cluster of four 512-way NUMA Altix systems. The Altix systems' large main memory and I/O subsystems allow computation and disk storage of rich sets of diagnostics during each integration, supporting the scientific objective to develop a better understanding of global ocean circulation model solution convergence as model resolution is increased.
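As a back-of-envelope check on the throughput figures quoted in the abstract, the sketch below derives the number of model timesteps per simulated month and the ratio of simulated time to wallclock time. It assumes a 30-day month and uses only the numbers stated above (2-minute timestep, ~17 hours of wallclock per simulated month); it is an illustration, not part of the paper.

```python
# Back-of-envelope check of the throughput quoted in the abstract:
# ~1 month of ocean dynamics in ~17 hours of wallclock time with a
# 2-minute model timestep. A 30-day month is assumed here.

SECONDS_PER_DAY = 86_400
MODEL_TIMESTEP_S = 120        # 2-minute timestep
SIMULATED_DAYS = 30           # "approximately one month"
WALLCLOCK_HOURS = 17

# Number of model timesteps needed to cover one simulated month.
timesteps = SIMULATED_DAYS * SECONDS_PER_DAY // MODEL_TIMESTEP_S

# How much faster simulated time advances than real (wallclock) time.
speedup = (SIMULATED_DAYS * SECONDS_PER_DAY) / (WALLCLOCK_HOURS * 3600)

print(timesteps)              # 21600 timesteps per simulated month
print(round(speedup, 1))      # simulated time runs ~42x faster than wallclock
```

At roughly 21,600 timesteps in 61,200 wallclock seconds, the cluster sustains about one 1.2-billion-cell timestep every three seconds, which gives a sense of the scale of the computation described in the abstract.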
Date issued
2007
Department
Massachusetts Institute of Technology. Department of Earth, Atmospheric, and Planetary Sciences
Journal
Scientific Programming
Publisher
Hindawi Publishing Corporation
Citation
Hill, Chris, Dimitris Menemenlis, Bob Ciotti, and Chris Henze. “Investigating Solution Convergence in a Global Ocean Model Using a 2048-Processor Cluster of Distributed Shared Memory Machines.” Scientific Programming 15, no. 2 (2007): 107–115. © 2007 Hindawi Publishing Corporation
Version: Final published version
ISSN
1058-9244
1875-919X