Revision | 314ab5863865ce512193a6b2e5458205be056d9f (tree) |
---|---|
Time | 2013-07-24 16:36:28 |
Author | Mikiya Fujii <mikiya.fujii@gmai...> |
Committer | Mikiya Fujii
README is updated for MPI-parallelization. #31588
git-svn-id: https://svn.sourceforge.jp/svnroot/molds/trunk@1416 1136aad2-a195-0410-b898-f5ea1d11b9d8
@@ -29,49 +29,71 @@ | ||
29 | 29 | |
30 | 30 | ============================================================================== |
31 | 31 | REQUIREMENTS: |
32 | - MolDS requires c++ mpi compiler that is wrapping Intel (icpc) or GNU (g++) and boost-libraries. | |
33 | - Valid versions of the wrapped c++ compilers are icpc 12.0.4(MkL 10.3 update 4), g++ 4.4, or later | |
34 | - because the MolDS is implemented with openMP 3.0. | |
35 | - To compile MolDS with GNU, furthermore, openBLAS (version 0.2.5 or later) is also required. | |
36 | - | |
37 | - To get and install the boost-libraries, see the HP:<http://www.boost.org/>. | |
38 | - The version of the boost would be no problem if 1.46.0 or later is used. | |
39 | - Especially, the boost-libraries should be builded with MPI | |
40 | - because MolDS needs boost_mpi-library(i.e. -lboost_mpi). | |
41 | - | |
42 | - To get and install the openBLAS-libraries, see the HP:<http://xianyi.github.com/OpenBLAS/>. | |
43 | - Note that "USE_OPENMP = 1" should be set for the installation of the opneBLAS. | |
44 | - Furthermore, "INTERFACE64 = 1" is also needed when you install the openBLAS into 64-bits machines | |
32 | + -Compilers: | |
33 | + MolDS requires a C++ MPI compiler (e.g. Intel MPI or Open MPI) | |
34 | + that wraps the Intel (icpc with MKL) or GNU (g++) C++ compiler. | |
35 | + Valid versions of the MPI compilers are Intel MPI 4.0.2 or later and Open MPI 1.4.5 or later. | |
36 | + Valid versions of the wrapped C++ compilers are icpc 12.0.4 (MKL 10.3 update 4) or later | |
37 | + and g++ 4.4 or later, because MolDS is implemented with OpenMP 3.0. | |
38 | + | |
39 | + -Boost C++ Libraries | |
40 | + The Boost C++ Libraries built with MPI support are needed. | |
41 | + To get and install Boost, see the web page: <http://www.boost.org/>. | |
42 | + Any version of Boost 1.46.0 or later should work. | |
43 | + In particular, Boost must be built with MPI | |
44 | + because MolDS needs the boost_mpi library (i.e. -lboost_mpi). | |
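      For reference, a minimal sketch of building Boost together with its MPI library
      (the source directory and install prefix are hypothetical, and the build driver
      may be called bjam instead of b2 depending on the Boost version):
        $ cd /path/to/boost-source
        $ ./bootstrap.sh --prefix=/opt/boost
        $ echo "using mpi ;" >> project-config.jam   # user-config.jam on older versions
        $ ./b2 --with-mpi --with-serialization install
      Consult the Boost.MPI documentation of your version for the exact procedure.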
45 | + | |
46 | + -Linear Algebra Packages (i.e. BLAS and LAPACK) | |
47 | + MolDS needs a linear algebra package. The current implementation of MolDS | |
48 | + assumes MKL (Intel's Math Kernel Library) as the linear algebra package for the | |
49 | + Intel compiler and OpenBLAS for the GNU compiler. | |
50 | + See also the Compilers section above for the required MKL version. | |
51 | + To get and install OpenBLAS, see the web page: <http://xianyi.github.com/OpenBLAS/>. | |
52 | + Any version of OpenBLAS 0.2.5 or later should work. | |
53 | + Note that "USE_OPENMP = 1" should be set when building OpenBLAS. | |
54 | + Furthermore, "INTERFACE64 = 1" is also needed when you install OpenBLAS on 64-bit machines. | |
45 | 55 | |
46 | 56 | ============================================================================== |
47 | 57 | COMPILE(using GNUmake): |
48 | 58 | In the "src" directory in the MolDS package. |
49 | 59 | |
50 | - Case i) Using Intel mpi c++ compiler (mpiicpc) | |
60 | + Case i) The Intel MPI compiler (mpiicpc) wrapping the Intel C++ compiler (icpc) | |
51 | 61 | Change the "BOOST_TOP_DIR" in Makefile to the top directory of the |
52 | - boost-libraries in your systems. | |
62 | + Boost C++ Libraries on your system. | |
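      For illustration, the edited line in the Makefile might look like the following
      (the path is hypothetical):
        BOOST_TOP_DIR = /usr/local/boost_1_46_0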
53 | 63 | |
54 | 64 | To compile MolDS on 32 bits machine, |
55 | 65 | $ make INTEL=32 |
56 | 66 | |
57 | 67 | To compile MolDS on 64 bits machine, |
58 | 68 | $ make INTEL=64 |
69 | + | |
70 | + Case ii) The Open MPI compiler (mpicxx) wrapping the Intel C++ compiler (icpc) | |
71 | + Change the "BOOST_TOP_DIR" in Makefile to the top directory of the | |
72 | + Boost C++ Libraries on your system. | |
73 | + | |
74 | + To compile MolDS on 32 bits machine, | |
75 | + $ make INTEL=32 CC=mpicxx | |
59 | 76 | |
60 | - Case ii) Using GNU c++ compiler (mpicxx) | |
77 | + To compile MolDS on 64 bits machine, | |
78 | + $ make INTEL=64 CC=mpicxx | |
79 | + | |
80 | + | |
81 | + Case iii) The Open MPI compiler (mpicxx) wrapping the GNU C++ compiler (g++) | |
61 | 82 | Change the "BOOST_TOP_DIR" in "Makefile_GNU" to the top directory of the |
62 | - boost-libraries in your systems. | |
83 | + Boost C++ Libraries on your system. | |
63 | 84 | Change the "OPENBLAS_TOP_DIR" in "Makefile_GNU" to the top directory of the |
64 | - boost-libraries in your systems. | |
85 | + OpenBLAS on your system. | |
65 | 86 | |
66 | 87 | Then, just type: |
67 | 88 | $ make -f Makefile_GNU |
68 | 89 | |
69 | - For both case, the compile succeeded if you could fine "MolDS.out" in the "src" directory. | |
70 | - Type "$ make clean" when you wanna clean the compilation. | |
90 | + In all cases, the compilation succeeded if you can find "MolDS.out" in the "src" directory. | |
91 | + If you want to clean the compilation, type | |
92 | + $ make clean | |
71 | 93 | If you want to compile MolDS in debug-mode, |
72 | 94 | -g, -rdynamic(for function names in backtrace) and -DMOLDS_DBG should be added to CFLAGS, |
73 | - that is, hit the following command: | |
74 | - $make CFLAGS="-O0 -g -rdynamic -DMOLDS_DBG" | |
95 | + namely, run the following command: | |
96 | + $ make CFLAGS="-O0 -g -rdynamic -DMOLDS_DBG" | |
75 | 97 | |
76 | 98 | ============================================================================== |
77 | 99 | CARRY OUT MolDS: |
@@ -82,8 +104,31 @@ CARRY OUT MolDS: | ||
82 | 104 | or |
83 | 105 | $ ./MolDS.out input.in |
84 | 106 | |
85 | - For the calculations with multiple processes(n) by MPI: | |
86 | - $ mpirun -np n MolDS.out input.in | |
107 | + For the calculations with multiple threads, type | |
108 | + $ export OMP_NUM_THREADS=n1 | |
109 | + $ ./MolDS.out input.in | |
110 | + Here, n1 is the number of threads. | |
111 | + | |
112 | + For the calculations with multiple processes by MPI: | |
113 | + $ mpirun -np n2 MolDS.out input.in | |
114 | + Here, n2 after the "-np" option is the number of processes. | |
115 | + | |
116 | + For the calculations with multiple threads and multiple processes, type | |
117 | + $ export OMP_NUM_THREADS=n1 | |
118 | + $ mpirun -np n2 MolDS.out input.in | |
119 | + Here, n1 is the number of cores on each node and n2 is the number of nodes. | |
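      If your MPI launcher supports explicit process placement, one process per node
      can be requested directly. A sketch assuming Open MPI (the -npernode flag and
      the hostfile name are launcher-specific assumptions, not part of MolDS):
        $ export OMP_NUM_THREADS=n1
        $ mpirun -np n2 -npernode 1 -hostfile hosts.txt MolDS.out input.in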
120 | + | |
121 | + In multiple-process calculations, only process 0 outputs the results. | |
122 | + If you want to get the output from all processes, | |
123 | + -DMOLDS_DBG should be added to CFLAGS at compile time. | |
124 | + Then, launch only one process on each node and redirect the output to a | |
125 | + node-unique file (e.g. on the local file system of each node), | |
126 | + namely, | |
127 | + $ make CFLAGS="-DMOLDS_DBG" | |
128 | + $ export OMP_NUM_THREADS=n1 | |
129 | + $ mpirun -np n2 MolDS.out input.in > /localFileSystem/output.dat | |
130 | + Here, n1 is the number of cores on each node and n2 is the number of nodes. | |
131 | + | |
87 | 132 | ============================================================================== |
88 | 133 | SAMPLE and TEST |
89 | 134 | See files in "test" directories for sample files. |
@@ -99,44 +144,50 @@ SAMPLE and TEST | ||
99 | 144 | |
100 | 145 | $ ruby Test_Of_MolDS.rb test1.in test2.dat test3 ... |
101 | 146 | |
147 | + Note that this test script needs at least 4 cores. | |
148 | + | |
102 | 149 | ============================================================================== |
103 | 150 | CAPABILITIES: |
104 | 151 | |
105 | - Electronic state and molecular dynamics: | |
106 | - | HF | CIS | MD(gs) | MD(es) | MC(gs) | MC(es) | RPMD(gs) | RPMD(es) | Optimize(gs) | Optimize(es) | Frequencies(gs) | | |
107 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
108 | - CNDO2 | OK | -- | -- | -- | OK | -- | -- | -- | -- | -- | -- | | |
109 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
110 | - INDO | OK | -- | -- | -- | OK | -- | -- | -- | -- | -- | -- | | |
111 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
112 | - ZINDO/S | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | -- | | |
113 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
114 | - MNDO | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
115 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
116 | - AM1 | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
117 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
118 | - AM1-D | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
119 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
120 | - PM3 | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
121 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
122 | - PM3-D | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
123 | - ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
124 | - PM3/PDDG | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
152 | + -Electronic state and molecular dynamics | |
153 | + | HF | CIS | MD(gs) | MD(es) | MC(gs) | MC(es) | RPMD(gs) | RPMD(es) | Optimize(gs) | Optimize(es) | Frequencies(gs) | | |
154 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
155 | + CNDO2 | OK | -- | -- | -- | OK | -- | -- | -- | -- | -- | -- | | |
156 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
157 | + INDO | OK | -- | -- | -- | OK | -- | -- | -- | -- | -- | -- | | |
158 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
159 | + ZINDO/S | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | -- | | |
160 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
161 | + MNDO | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
162 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
163 | + AM1 | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
164 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
165 | + AM1-D | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
166 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
167 | + PM3 | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
168 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
169 | + PM3-D | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
170 | + ---------|-----|-----|--------|--------|--------|--------|----------|----------|--------------|--------------|-----------------| | |
171 | + PM3/PDDG | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | OK | | |
125 | 172 | |
126 | 173 | "OK", "Sch", and "--" mean available, shceduled, and non-scheduled methods, respectively. |
127 | 174 | "gs" and "es" mean ground and excited states, respectively. |
128 | 175 | i.e., MD(gs) and MD(es) mean Born-Oppenheimer Molecular Dynamics on ground and excited states, respectively. |
129 | 176 | |
130 | - Elements: | |
131 | - CNDO2 | H, Li, C, N, O, and S | |
132 | - INDO | H, Li, C, N, and O | |
133 | - ZINDO/S | H, C, N, O, and S | |
134 | - MNDO | H, C, N, O, and S | |
135 | - AM1 | H, C, N, O, and S | |
136 | - AM1-D | H, C, N, O, and S | |
137 | - PM3 | H, C, N, O, and S | |
138 | - PM3-D | H, C, N, O, and S | |
139 | - PM3/PDDG | H, C, N, O, and S | |
177 | + -Elements | |
178 | + CNDO2 | H, Li, C, N, O, and S | |
179 | + INDO | H, Li, C, N, and O | |
180 | + ZINDO/S | H, C, N, O, and S | |
181 | + MNDO | H, C, N, O, and S | |
182 | + AM1 | H, C, N, O, and S | |
183 | + AM1-D | H, C, N, O, and S | |
184 | + PM3 | H, C, N, O, and S | |
185 | + PM3-D | H, C, N, O, and S | |
186 | + PM3/PDDG | H, C, N, O, and S | |
187 | + | |
188 | + -Parallelization | |
189 | + OpenMP parallelization: everywhere in MolDS | |
190 | + MPI parallelization: only the CIS calculation is parallelized with MPI. | |
140 | 191 | |
141 | 192 | ============================================================================== |
142 | 193 | HOW TO WRITE INPUT: |
@@ -158,6 +209,7 @@ HOW TO WRITE INPUT: | ||
158 | 209 | THEORY_END |
159 | 210 | |
160 | 211 | -options |
212 | + Write the following options in the SCF directive. | |
161 | 213 | "max_iter", "rms_density", "damping_thresh", "damping_weight", |
162 | 214 | "diis_num_error_vect", "diis_start_error", "diis_end_error", |
163 | 215 | "vdW", "vdW_s6", and "vdW_d" are prepared as options. |