Surviving the deluge of biosimulation data
Abstract
New hardware, in particular massively parallel and graphics processing unit (GPU)-based computers, has boosted molecular simulations to levels that would have been unthinkable just a decade ago. At the classical level, it is now possible to perform atomistic simulations of systems containing over 10 million atoms and to collect trajectories extending into the millisecond range. Such achievements are moving biosimulations into the mainstream of structural biology research, complementary to experimental studies. The drawback of this impressive development is the management of data, at a time when the inherent value of data is becoming ever more apparent. In this review, we summarize the main characteristics of (bio)simulation data, how we can store them, how they can be reused for new, unexpected projects, and how they can be transformed to make them FAIR (findable, accessible, interoperable, and reusable).

This article is categorized under:
- Molecular and Statistical Mechanics > Molecular Dynamics and Monte-Carlo Methods
- Computer and Information Science > Databases and Expert Systems