This repository contains input examples for calculations with the CP2K package, including NonAdiabatic Molecular Dynamics (NA-MD) calculations via the CP2K/Libra interface.
This repository summarizes inputs for different types of electronic structure calculations. The same files are also available elsewhere (project_cp2k_libra, project_perovskite_crystal_symmetry) for use in nonadiabatic dynamics. However, here we provide detailed instructions on how to use CP2K and how the functionality and timings change with different inputs.
Here we use CP2K v6.1 compiled with Intel Parallel Studio 2019. For TD-DFT with hybrid functionals, we use CP2K v7.1 compiled with the GCC-8.3 compiler, because in earlier versions of CP2K the TD-DFT calculation does not converge for hybrid functionals due to a problem with converging the ADMM calculations. It is also worth noting that the reported timings were obtained on Intel(R) Xeon(R) E5-2699 v4 @ 2.20GHz CPUs.
For CP2K installation we recommend using ./install_cp2k_toolchain.sh in the tools/toolchain folder of CP2K.
For compilation with Intel Parallel Studio you can follow the instructions given on the XConfigure website.
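As a minimal sketch of a toolchain-based build (the flag names and package choices below are assumptions and can differ between CP2K releases; check ./install_cp2k_toolchain.sh --help for your version):

# Hypothetical toolchain build producing the psmp (MPI + OpenMP) binary.
cd cp2k/tools/toolchain
./install_cp2k_toolchain.sh --mpi-mode=openmpi --with-libint=install --with-libxc=install --with-elpa=install
# The toolchain prints the exact commands to finish the build; typically:
cp install/arch/local.* ../../arch/
cd ../..
source tools/toolchain/install/setup
make -j 8 ARCH=local VERSION=psmp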
Here, we use the 2D perovskite (BA)2PbI4 as a test system; its CIF file is available from the crystallography website. We will use the unit cell with 156 atoms and a 2x2x2 supercell with 1248 atoms, and check the performance of the following CP2K functionality:
- geometry preparation
- energy (single-point) calculations
- convergence with respect to basis set parameters
- geometry and cell optimization
- single-point calculations at the TD-DFT level
- single-point calculations with hybrid density functionals
- single-point calculations of huge systems
- molecular dynamics
- time-overlap calculations using cube files
- nonadiabatic coupling calculations
The legacy folder contains a number of recent tutorials by Mohammad Shakiba and Brendan Smith, but
they are yet to be organized and revised. These tutorials contain detailed and very useful instructions
which will eventually be migrated to the main folder here.
3. Extra resources
Here, we provide links to the repositories with the codes used in the following publications:
We highly welcome improvements to the input files in this repository, so please feel free to share your inputs and timings with us if you use them.
Instructions on how to compile CP2K with Intel Parallel Studio can be found at this link.
For installation of CP2K using GNU compilers, you can use the CP2K toolchain. If you want to submit jobs through the Slurm environment, use OpenMPI; in our experience, compiling with MPICH can cause problems when submitting through Slurm.
There are different versions of CP2K installed on UB CCR. CP2K v9.1 compiled with GNU compilers v11.2.0, Intel MKL v2020.2, OpenMPI v3.1.1, FFTW3, Libint, Libxc, Plumed2, and ELPA has been tested, can be run on all general compute and faculty cluster nodes, and is available here:
/projects/academic/cyberwksp21/Software/cp2k-gnu/cp2k-9.1/exe/local/cp2k.psmp
/projects/academic/cyberwksp21/Software/cp2k-gnu/cp2k-9.1-no-elpa/exe/local/cp2k.psmp
To load the dependencies for running the GNU-compiled version of CP2K, use the following:
source /projects/academic/cyberwksp21/Software/cp2k-intel/avx512-dependencies/cp2k-9.1-avx512/tools/toolchain/build/setup_gcc
source /projects/academic/cyberwksp21/Software/cp2k-intel/avx512-dependencies/cp2k-9.1-avx512/tools/toolchain/install/setup
module load mkl/2020.2
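As a hypothetical usage example (the process count and input/output file names are placeholders to adapt for your own job), the GNU-compiled binary can then be launched with, e.g.:

# Example interactive run of the GNU-compiled psmp binary; adjust -np and file names.
mpirun -np 32 /projects/academic/cyberwksp21/Software/cp2k-gnu/cp2k-9.1/exe/local/cp2k.psmp -i energy.inp -o energy.log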
CP2K v8.2 and v9.1 compiled with Intel Parallel Studio 2020 libraries (with ELPA, Libint, and Libxc) run on the Valhalla nodes, the cpn-v and cpn-u nodes of Scavenger, and the cpn-u and cpn-q nodes of the general compute partition (CPUs with AVX2 instructions), and are available here:
/projects/academic/cyberwksp21/Software/cp2k-intel/cp2k-9.1/exe/Linux-x86-64-intelx/cp2k.psmp
/projects/academic/cyberwksp21/Software/cp2k-intel/cp2k-8.2/exe/Linux-x86-64-intelx/cp2k.psmp
To load the dependencies for running the CP2K versions compiled with Intel compilers, use the following:
module unload intel-mpi/2020.2
module load intel-mpi/2020.2
module load intel/20.2
For submitting jobs in the Slurm environment, set
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
and for this specific version, use srun instead of mpirun or mpiexec to run CP2K.
Finally, since most of the CPU cores on UB CCR support only one thread per core, set
export OMP_NUM_THREADS=1
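Putting the pieces above together, a minimal Slurm submission script for the Intel-compiled CP2K might look like the sketch below; the partition name, node and task counts, walltime, and input/output file names are assumptions you should adapt to your own allocation and job.

#!/bin/bash
#SBATCH --partition=general-compute    # assumption: choose a partition matching your allocation
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=24:00:00

# Load the Intel toolchain modules (as above)
module unload intel-mpi/2020.2
module load intel-mpi/2020.2
module load intel/20.2

# Use Slurm's PMI library with Intel MPI and run pure MPI (one thread per rank)
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so
export OMP_NUM_THREADS=1

# Launch with srun (not mpirun/mpiexec) for this build; input/output names are placeholders
srun /projects/academic/cyberwksp21/Software/cp2k-intel/cp2k-9.1/exe/Linux-x86-64-intelx/cp2k.psmp -i md.inp -o md.log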