qeAPI
An API between Quantum ESPRESSO and Python
Python-QE API
Some useful tools are listed here for preparing and analyzing DFT calculations using Quantum Espresso. This repository (or repo) contains the following Python scripts:
- raw2qe.py, a collection of functions and classes that transform CIF files into input texts.
- qe2cif.py, a collection of functions that read output files from QE calculations and write atomic configurations into CIF files.
- qe2DDEC.py, a collection of functions that read charge density CUBE files and analyze them with the DDEC6 method to give bond order and overlap population information for arbitrary atomic configurations.
- run_cases.py, a script that prepares batch files for individual calculations and enforces the dependence among different kinds of jobs using pw.x, pp.x, bands.x, dos.x, and projwfc.x.
- run_cases_phonon.py, a script that prepares batch files for individual calculations and enforces the dependence among different kinds of jobs using pw.x, ph.x, matdyn.x, and q2r.x.
Three Jupyter notebooks are also included to show some examples of using functions in the three script files:
- pwPrep_test.ipynb shows examples of using raw2qe.py
- qe2DDEC_test.ipynb shows examples of using qe2DDEC.py
- qe2cif_test.ipynb shows examples of using qe2cif.py
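As a rough flavor of what qe2cif.py automates, the short sketch below uses plain ASE rather than the qe2cif.py API itself (the function names in that script may differ, and the file names here are placeholders) to read the final structure from a pw.x output and save it as a CIF file:

# Plain-ASE sketch of the output-to-CIF step that qe2cif.py automates;
# file names are placeholders, not files shipped with this repo.
from ase.io import read, write

atoms = read("example_rlx.out", format="espresso-out")  # last ionic step of the pw.x run
write("example_rlx.cif", atoms)                         # save the relaxed structure as CIF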
The sections below are arranged as
- Install Python
- Install Packages
- Compile Quantum Espresso
  - Without libxc
  - With Libxc
- Set up the file system for DFT calculations
- Submit Calculation jobs
- !!! The workflow for running calculations
- Useful commands for running jobs on a slurm system
- Useful scripts for running batch jobs on a slurm system
- Navigation of the Repo
- Citation
Install Python
Due to the variety of Windows/Mac/Linux OS distributions, arguably the easiest way to install Python is to install Anaconda, a distribution of Python 3.7 along with some pre-installed numerical packages (e.g. numpy and scipy). The complete list of pre-installed packages in Anaconda can be found here.
- For installation instructions on Windows OS, click here.
- For installation instructions on Mac OS, click here.
Once you have installed Anaconda, open the Anaconda Navigator by finding it in your start menu or search bar. The initial GUI looks like the picture shown below:

Now click the Install button below Jupyter Notebook. Jupyter Notebook provides an intuitive way to write and interpret your Python code, as we shall see later.
Install Packages
We need to install the ASE package (check the documentation here) in your Anaconda distribution. To do so, click the Environments tab in the Navigator shown above to get

- If ASE is not installed, you can find it by navigating to the drop-down menu Not installed and then searching for ASE:

- Select the package you want to install, and click Apply in the popup window.
- Now you can use ASE in your Python code.
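Once ASE is installed, a quick sanity check in a notebook cell confirms that the package can be imported (a minimal sketch; the printed version will depend on your installation):

# Quick check that ASE was installed into the active Anaconda environment.
import ase
from ase.build import bulk

print(ase.__version__)            # prints the installed ASE version
atoms = bulk("Cu", "fcc", a=3.6)  # build a simple fcc copper cell as a smoke test
print(atoms)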
To use Jupyter Notebook, we simply open Anaconda Navigator and click the Jupyter icon. Your browser will start automatically and show the following GUI:
You can start by creating a folder with the New drop-down menu:
The name of your new folder will be Untitled folder but you can always change its name later. We now open Untitled folder in jupyter to arrive at the following interface:
Create a Python 3 notebook and open it to get
where the command window in the green rectangle is the currently active window. Let's write a simple command and run it with the Shift+Enter combo:
From the picture above, we know that
- Jupyter notebook creates a new empty command window below the previously active window
- The output of the previously active window shows up right before the currently active window.
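If you want something concrete to try, any short command works; for example, this trivial (made-up) cell:

# A trivial first cell to run with Shift+Enter; the output (4) appears directly below the cell.
print(2 + 2)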
Compile Quantum Espresso
In this section we provide a way to compile Quantum Espresso on a typical university-level computing platform, with and without external libraries. The file system for each individual user runs on a Linux system with access to a home folder.
Without libxc
- Create a folder dft in your home folder.
- Enter the new folder: cd dft
- Download the newest QE package here to your personal computer, then upload the downloaded tar.gz file to the dft folder using FileZilla or similar software.
- Run tar -xvf qe-X.Y.Z.tgz to unzip the source files into dft/qe-X.Y.Z, where X, Y, and Z are version numbers.
- Load the required compiler modules with module load gcc/7.2.0 and module load openmpi/4.0.5-gcc-7.2.0. Make sure these versions of the two packages match each other!!!

As of April 7, 2021, compiling Quantum Espresso with the intel package does not invoke the fftw package, while fftw (-lfftw) is loaded automatically when Quantum Espresso is compiled with gcc.

- Run cd qe-X.Y.Z to get into the source folder.
- If you're running QE on a slurm system, you might want to add MANUAL_DFLAGS = -D__ISO_C_BINDING in the make.inc file to avoid a data parsing error.
- Configure the source files by running ./configure -enable-parallel
- Compile with make all
With Libxc
Important Note: Some exchange-correlation functionals in Libxc are not well-tested for all classes of materials. Make sure you know what you're doing if you want to use Libxc with QE.
- Install libxc using autotools:
(1) Download libxc here
(2) Upload the tar.gz file to the dft folder
(3) Unzip the file using tar -xvf libxc-x.y.z.tar.gz, where x, y, and z are version numbers again
(4) Run the following commands (only the --prefix path needs to be adjusted):
./configure --prefix=PATH/TO/LIBXC
make
make check
make install
- Configure QE (note that you might need to change the module versions below):
cd qe-X.Y.Z/
module load gcc openmpi (you can also use the intel package to compile QE, but gcc+openmpi is generally safer)
./configure -enable-parallel
- Change the make.inc file and run make:
(1) open the make.inc file in the folder qe-X.Y.Z using nano make.inc
(2) change the DFLAGS line into DFLAGS = -D__DFTI -D__LIBXC -D__MPI
(3) add -I/path/to/libxc/include/ to IFLAGS
(4) set LD_LIBS = -L/path/to/libxc/lib/ -lxcf90 -lxc; if an error happens during the compilation process, replace -lxcf90 with -lxcf03
(5) if the libxc version is < 5.0, xc_f03 must be replaced with xc_f90 everywhere in the following files: funct.f90, xc_lda_lsda_drivers.f90, xc_gga_drivers.f90, xc_mgga_drivers.f90, dmxc_drivers.f90, and dgcxc_drivers.f90 in the Modules folder, and xctest_qe_libxc.f90 in the PP/src folder
(6) save the file, and run make all
Set up the file system for DFT calculations
- Create a folder at /home/ named /pseudo/ to store your pseudopotential files.
- Download a full-element pseudopotential package. My personal favorite is the GBRV pseudopotential library. Other good resources are Pseudo Dojo and SSSP on the Materials Cloud.
- Before you upload your pseudopotential files to the computation platform, it is recommended to rename your commonly used pseudopotentials to the format X.upf, with X being the element symbol (a renaming sketch is given after this list). Below is what my pseudo folder looks like:
where rVV10_kernel_table and vdW_kernel_table are generated by running generate_rVV10_kernel_table.x and generate_vdW_kernel_table.x in the /PW/src subfolder of your QE installation. You will use them when you have strong van der Waals interactions in your atomic system.
- The outdir option in your input file should always be /home/netID/scratch/ plus a unique name for your calculation case. If your .in files have unique names, raw2qe.py sets outdir to /home/netID/scratch/ + your input file name automatically.
Scratch is a temporary storage space that saves your output files for at most 30 days. Make sure you transfer your data after the calculations are done.
- Create an inputdir folder to store all your input files. In the same folder, create shortcuts to your QE executables with the following commands (run in your inputdir folder):

ln -s /path/to/qe/installation/PW/src/pw.x pw.x
ln -s /path/to/qe/installation/PP/src/pp.x pp.x
ln -s /path/to/qe/installation/PP/src/dos.x dos.x
ln -s /path/to/qe/installation/PP/src/projwfc.x projwfc.x

We use symbolic links to avoid complicated paths when we run DFT calculations.
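To rename a batch of downloaded pseudopotentials into the X.upf convention mentioned above, a small helper like the one below can be used. This is a minimal sketch assuming GBRV-style file names such as fe_pbe_v1.5.uspp.F.UPF; adapt the split rule to whichever library you downloaded.

# Minimal renaming sketch (assumes GBRV-style names like fe_pbe_v1.5.uspp.F.UPF);
# this is not part of raw2qe.py -- adjust the rule for other pseudopotential libraries.
from pathlib import Path

pseudo_dir = Path.home() / "pseudo"              # the /pseudo folder created above

for pp in pseudo_dir.glob("*.UPF"):
    element = pp.name.split("_")[0].capitalize() # 'fe_pbe_v1.5...' -> 'Fe'
    target = pseudo_dir / f"{element}.upf"
    if not target.exists():
        pp.rename(target)                        # fe_pbe_v1.5.uspp.F.UPF -> Fe.upf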
Submit Calculation jobs
The run_cases_xxx.py script in the folder run_cases_script prepares SBATCH files for batch job submission on a computation platform running a slurm system (e.g. a campus cluster). It generates SBATCH files for all the .in files in your inputdir folder and chooses the executable command based on the names of the input files. Similarly, run_cases_phonon.py prepares SBATCH files for phonon-related calculations using ph.x, dynmat.x, and q2r.x. The complete rules used by run_cases.py are listed below:
- If the file is named xxxx.in without the _ symbol, run_cases.py asks the system to run mpirun ./pw.x -in xxxx.in > xxxx.out
- If _pp is in the name of a .in file, it asks the system to run mpirun ./pp.x -in xxxx_pp.in > xxxx_pp.out
- If _dos is in the name of a .in file, it asks the system to run ./dos.x -in xxxx_dos.in > xxxx_dos.out
- If _pdos is in the name of a .in file, it asks the system to run ./projwfc.x -in xxxx_pdos.in > xxxx_pdos.out
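A minimal sketch of this naming rule is shown below. It is only an illustration of the convention, not the actual run_cases.py code; the real script also handles mpirun flags and the SBATCH header.

# Sketch of the filename -> command rule described above (not the actual run_cases.py code).
def qe_command(input_file):
    """Return the run command implied by the input-file naming convention."""
    stem = input_file[:-3]                    # strip the trailing '.in'
    if "_pp" in input_file:
        return f"mpirun ./pp.x -in {input_file} > {stem}.out"
    if "_pdos" in input_file:
        return f"./projwfc.x -in {input_file} > {stem}.out"
    if "_dos" in input_file:
        return f"./dos.x -in {input_file} > {stem}.out"
    return f"mpirun ./pw.x -in {input_file} > {stem}.out"

print(qe_command("fe2o3.in"))                 # mpirun ./pw.x -in fe2o3.in > fe2o3.out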
A typical sbatch file is shown below:
where #SBATCH -N 4 and #SBATCH -n 14 set the number of compute nodes and the total number of CPU cores you want to use for the calculation.
As of April 7, 2021, the slurm system running on the HPC platform allows users to specify the number of CPU cores only. You might consider deleting the #SBATCH -N 4 line, as it might cause slot allocation errors.
- #SBATCH --partition=xxxx chooses your job queue (the queue can be eng-research or secondary if you are using the NCSA campus cluster at UIUC).
- #SBATCH --time=04:00:00 tells the system to kill your job after some time (the job will be killed after 4 hours in this case). The maximum walltime is 4 hrs. (If you have a large job that takes more than 4 hrs to finish, try using restart_mode in the input file; see pwPrep_test.ipynb.)
- #SBATCH --job-name="scf-rlx" gives a name to your job (the name is scf-rlx in this case).
- module load python/3 loads the necessary packages for your jobs.
- cd /home/yourID/inputdir is required for the system to find your links to pw.x, pp.x, etc.
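For a concrete picture of what such a file contains, the sketch below writes a header with the directives discussed above. It is only an illustration of what run_cases.py produces; the real script adds more options, and the partition, walltime, and paths here are placeholders.

# Sketch of an SBATCH file like the one described above; NOT the exact file run_cases.py writes.
def write_sbatch(input_file, partition="secondary", ntasks=14, workdir="/home/yourID/inputdir"):
    stem = input_file[:-3]
    lines = [
        "#!/bin/bash",
        f"#SBATCH -n {ntasks}",
        f"#SBATCH --partition={partition}",
        "#SBATCH --time=04:00:00",
        f'#SBATCH --job-name="{stem}"',
        "module load python/3",
        f"cd {workdir}",
        f"mpirun ./pw.x -in {input_file} > {stem}.out",
    ]
    with open(f"{stem}.sbatch", "w") as f:
        f.write("\n".join(lines) + "\n")

write_sbatch("fe2o3.in")   # creates fe2o3.sbatch next to the input file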
!!! The workflow for running calculations
- Upload all your input files to the inputdir folder.
- Make sure run_cases_xxx.py is also in the inputdir folder and load the python3 module with module load python/3.

Please use run_cases_cc.py on the NCSA campus cluster and run_cases_expanse.py on Expanse@XSEDE. If you don't have access to either of these HPC platforms, you might consider modifying run_cases_cc.py to fit your needs.

- Run run_cases_xxx.py with python3 run_cases.py. This is a piece of interactive code, and it will ask you several questions before it generates sbatch files. The questions are shown in the picture below.
The last question shown above asks which queue you want to submit your jobs to. If you're an engineering student at UIUC, you may type beckman, eng-research, or secondary (a common queue for everybody). After these questions, the code will spit out the names of the input files and sbatch files. The interaction with run_cases_phonon.py is slightly different: it will ask you one more question, like the picture shown below. If your answer to the last question is larger than 2, the code will create a job-array command to run the ph.x-related job repeatedly. For some reason, job arrays are still in beta on the slurm system, so some mysterious computation errors might occur.
- Many sbatch files should be created now; run ./serialjob in your command line to submit all your jobs.
- Wait till the ./serialjob command finishes and use squeue -u yourNetID to check the status of your jobs.
- Jobs with C status are finished or have run out of walltime. Jobs with R are running, and jobs with PD status are still pending in the queue.
- Once all your calculations are done, download your data and use the functions in qe2DDEC.py or qe2cif.py to start your analysis.
Useful commands for running jobs on a slurm system
- scancel JOBID cancels a job using its job id.
- squeue -u usrID checks your job status using your user ID.
- squeue -u usrID -p xxxxx checks your job status on partition xxxxx.
- sbatch xxxx.sbatch submits a job using a sbatch file.
- sbatch --exclude cccc xxxxx.sbatch submits a job while avoiding node cccc.
Useful scripts for running batch jobs on a slurm system
Submitting a large number of jobs at the same time can be messy. Platform configuration errors can happen at any time, resulting in mysterious terminations and unfinished jobs. If you want to keep everything organized, you might want to use the scripts provided in the folder workflow_scripts. If you want to learn more or change these scripts, you will get a better idea of them by reading the summary at the top of each script. A recommended workflow for running a large batch of jobs on an HPC platform is shown in the flowchart below.

Navigation of the Repo
Important note: Please pay attention to the comment lines starting with !!! in the code. Those lines tell you how to change the code if you're using a different pseudopotential or a different operating system environment.
pwPrep_test.ipynb shows examples of using raw2qe.py to
(1) Update options in input files
(2) Prepare input files for pw.x,dos.x, and projwfc.x
qe2DDEC.ipynb shows examples of using qe2DDEC.py to
(1) prepare job_control.txt to initialize DDEC6 analysis
(2) run DDEC6 binary executable in a pythonic way
(3) extract useful information from the analyses, e.g. overlap population and bond orders
qe2cif.ipynb shows examples of using qe2cif.py to
(1) read atomic configurations from QE outputs
(2) save relaxed atomic configurations in a cif format
(3) adjust atoms' info using ASE
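For instance, a typical adjustment with plain ASE might look like the following (a generic ASE sketch, not code from qe2cif.py; the file names are placeholders):

# Generic ASE sketch (not taken from qe2cif.py): tweak a relaxed structure before writing it out.
from ase.io import read, write

atoms = read("example_rlx.cif")   # placeholder file name
supercell = atoms * (2, 2, 1)     # build a 2x2x1 supercell of the relaxed structure
write("example_221.cif", supercell)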
workflow_scripts folder contains scripts for
(1) Preparing SBATCH file for each job,
(2) Creating the bash script serialjob that submits various kinds of jobs while respecting their dependencies,
(3) Checking the integrity of output files and categorizing their names into complete.txt, restart.txt, and modify.txt (see the sketch after this list),
(4) Modifying settings in input files, and
(5) Removing output files listed in restart.txt.
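To give a flavor of the integrity check in step (3): a pw.x output that finished cleanly ends with a JOB DONE marker, so a minimal check could look like the sketch below. This is not the actual workflow script, whose file lists and criteria may differ.

# Minimal sketch of an output-integrity check (not the actual workflow script):
# pw.x prints "JOB DONE." near the end of a run that finished cleanly.
from pathlib import Path

complete, restart = [], []
for out in sorted(Path(".").glob("*.out")):
    text = out.read_text(errors="ignore")
    (complete if "JOB DONE." in text else restart).append(out.name)

Path("complete.txt").write_text("\n".join(complete) + "\n")
Path("restart.txt").write_text("\n".join(restart) + "\n")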
Citation
Please cite our paper if you decide to use our code for your research:
[1] Liu, Sizhe, and Kyle C. Smith. "Intercalated Cation Disorder in Prussian Blue Analogues: First-Principles and Grand Canonical Analyses." The Journal of Physical Chemistry C 123.16 (2019): 10191-10204.
If you end up using our qe2DDEC.py, please cite the following papers too:
[2] T. A. Manz and N. Gabaldon Limas, “Introducing DDEC6 atomic population analysis: part 1. Charge partitioning theory and methodology,” RSC Adv., 6 (2016) 47771-47801.
[3] N. Gabaldon Limas and T. A. Manz, “Introducing DDEC6 atomic population analysis: part 2. Computed results for a wide range of periodic and nonperiodic materials,” RSC Adv., 6 (2016) 45727-45747.