Dalton 2018.2 build error with --int64

salviano
Posts: 1
Joined: 07 Oct 2020, 16:38
First name(s): Salviano
Middle name(s): Araujo
Last name(s): Leão
Affiliation: Instituto de Física - Universidade Federal de Goiás
Country: Brazil

Dalton 2018.2 build error with --int64

Post by salviano » 09 Oct 2020, 20:20

I have tried to build Dalton with --int64, but I cannot get it to work: I always get some errors. My steps were:

1) Source code

git clone --recursive https://gitlab.com/dalton/dalton.git
cd dalton
git checkout Dalton2018.2
git submodule update
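
(To double-check the checkout, something like the following should report the 2018.2 tag and clean submodules; exact output may differ:

git describe --tags
git submodule status
)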

2) Build

./setup --mpi --int64 --blas=/usr/lib/x86_64-linux-gnu/liblapack64.a --lapack=/usr/lib/x86_64-linux-gnu/libblas64.a --prefix=/usr/local/bin/ build_mpi | tee setup.out
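
(Re-reading this command, the --blas and --lapack paths look swapped relative to the library names; presumably the intended call was

./setup --mpi --int64 --blas=/usr/lib/x86_64-linux-gnu/libblas64.a --lapack=/usr/lib/x86_64-linux-gnu/liblapack64.a --prefix=/usr/local/bin/ build_mpi | tee setup.out

The log below is from the command as typed.)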

File setup.out

Configuring system: Ubuntu
FC=mpif90 CC=mpicc CXX=mpicxx cmake -DENABLE_MPI=ON -DENABLE_SGI_MPT=OFF -DENABLE_OPENMP=OFF -DENABLE_64BIT_INTEGERS=ON -DENABLE_GPU=OFF -DENABLE_CUBLAS=OFF -DENABLE_REAL_SP=OFF -DENABLE_STATIC_LINKING=OFF -DENABLE_SCALAPACK=OFF -DEXPLICIT_BLAS_LIB=/usr/lib/x86_64-linux-gnu/liblapack64.a -DENABLE_AUTO_BLAS=OFF -DEXPLICIT_LAPACK_LIB=/usr/lib/x86_64-linux-gnu/libblas64.a -DENABLE_AUTO_LAPACK=OFF -DCMAKE_INSTALL_PREFIX=/usr/local/bin/ -DCMAKE_BUILD_TYPE=release /home/marcos/Salviano/Dalton2018/dalton18.2

-- The Fortran compiler identification is GNU 9.3.0
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working Fortran compiler: /usr/bin/mpif90
-- Check for working Fortran compiler: /usr/bin/mpif90 -- works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/bin/mpif90 supports Fortran 90
-- Checking whether /usr/bin/mpif90 supports Fortran 90 -- yes
-- Check for working C compiler: /usr/bin/mpicc
-- Check for working C compiler: /usr/bin/mpicc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/mpicxx
-- Check for working CXX compiler: /usr/bin/mpicxx -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Since you specified 64bit integers the math lib search order is (only) MKL;ACML
-- This is because apart from MKL and ACML default math library installations are built for 32bit integers
-- If you know that the library you want to use provides 64bit integers, you can select the library
-- with -D BLAS_TYPE=X or -D LAPACK_TYPE X (X: MKL ESSL OPENBLAS ATLAS ACML SYSTEM_NATIVE)
-- or by redefining MATH_LIB_SEARCH_ORDER
-- Found MPI_C: /usr/bin/mpicc (found version "3.1")
-- Found MPI_CXX: /usr/bin/mpicxx (found version "3.1")
-- Found MPI_Fortran: /usr/bin/mpif90 (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Performing Test MPI_COMPATIBLE
-- Performing Test MPI_COMPATIBLE - Success
-- Performing Test MPI_F90_I4
-- Performing Test MPI_F90_I4 - Failed
-- Performing Test MPI_F90_I8
-- Performing Test MPI_F90_I8 - Success
-- Performing Test ENABLE_MPI3_FEATURES
-- Performing Test ENABLE_MPI3_FEATURES - Success
-- Found Git: /usr/bin/git
-- Polarizable Continuum Model via PCMSolver DISABLED
-- Configuring done
-- Generating done
-- Build files have been written to: /home/marcos/Salviano/Dalton2018/dalton18.2/build3

configure step is done
now you need to compile the sources:
$ cd build3
$ make
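
(For completeness, the compile and test steps were along these lines; the -j value is arbitrary:

cd build3
make -j6
ctest | tee ctest.out

With DALTON_LAUNCHER set as listed further down, each test runs through mpirun.)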

3) ctest results

84% tests passed, 78 tests failed out of 496

Label Time Summary:
aosoppa = 261.32 sec*proc (18 tests)
benchmark = 62.01 sec*proc (10 tests)
cc = 111.95 sec*proc (87 tests)
cc3 = 40.87 sec*proc (31 tests)
ccr12 = 166.84 sec*proc (68 tests)
cholesky = 12.34 sec*proc (10 tests)
dalton = 13804.62 sec*proc (475 tests)
dft = 273.04 sec*proc (45 tests)
dpt = 16.52 sec*proc (8 tests)
energy = 746.76 sec*proc (19 tests)
essential = 640.74 sec*proc (117 tests)
fde = 82.76 sec*proc (2 tests)
gen1int = 141.39 sec*proc (2 tests)
geo = 2654.56 sec*proc (29 tests)
long = 1167.89 sec*proc (18 tests)
mcscf = 289.47 sec*proc (3 tests)
medium = 4764.87 sec*proc (75 tests)
mp2r12 = 53.51 sec*proc (17 tests)
multistep = 136.82 sec*proc (9 tests)
numder = 218.99 sec*proc (5 tests)
pcm = 1188.62 sec*proc (17 tests)
peqm = 824.26 sec*proc (24 tests)
prop = 2441.79 sec*proc (44 tests)
qfit = 7.59 sec*proc (3 tests)
qm3 = 159.50 sec*proc (25 tests)
qmmm = 161.62 sec*proc (8 tests)
rsp = 4427.36 sec*proc (79 tests)
runtest = 8922.34 sec*proc (204 tests)
short = 6934.41 sec*proc (373 tests)
soppa = 677.11 sec*proc (18 tests)
unknown = 8.98 sec*proc (1 test)
verylong = 450.89 sec*proc (18 tests)
walk = 826.02 sec*proc (8 tests)
weekly = 80.38 sec*proc (21 tests)

Total Test time (real) = 2458.89 sec

The following tests FAILED:
2 - dft_b3lyp_cart (Failed)
3 - dft_b3lyp_magsus_nosym (Failed)
4 - dft_b3lyp_molhes_nosym (Failed)
5 - dft_b3lyp_nosym (Failed)
6 - dft_b3lyp_sym (Failed)
7 - dft_blyp_nosym (Failed)
8 - dft_blyp_sym (Failed)
9 - dft_camb3lyp (Failed)
10 - dft_camb3lyp_magsus (Failed)
11 - dft_cr_sym (Failed)
12 - dft_disp_d2 (Failed)
13 - dft_disp_d3 (Failed)
14 - dft_disp_d3bj (Failed)
15 - dft_energy_sym (Failed)
16 - dft_hcth120 (Failed)
17 - dft_lb94 (Failed)
18 - dft_lda_cart (Failed)
19 - dft_lda_nosym (Failed)
20 - dft_lda_sym (Failed)
21 - dft_lda_molhes_nosym (Failed)
22 - dft_open_b3lyp (Failed)
23 - dft_open_lr (Failed)
24 - dft_optimize (Failed)
25 - dft_pbe (Failed)
26 - dft_polar (Failed)
27 - dft_properties_sym (Failed)
28 - dft_properties_nosym (Failed)
29 - dft_rspexci (Failed)
30 - dft_qr (Failed)
31 - dft_qr_qlop (Failed)
32 - dft_stex (Failed)
33 - dft_hsrohf (Failed)
34 - dft_spin_local (Failed)
35 - tddft_tda (Failed)
128 - prop_expgrad (Failed)
132 - rsp_cpp_mcd (Failed)
135 - rsp_cpph (Failed)
152 - rsp_exsm (Failed)
183 - rsp_hfc (Failed)
184 - rsp_fullhfc (Failed)
194 - rsp_g_rohf_ecc (Failed)
199 - rsp_g_ldax (Failed)
200 - rsp_g_b3lypx (Failed)
210 - rsp_soestm (Failed)
212 - qmmm1 (Failed)
213 - qmmm2 (Failed)
214 - qmmm3 (Failed)
216 - qmmm5 (Failed)
217 - qmmm6 (Failed)
219 - qm3pcm_1 (Failed)
222 - dftmm_1 (Failed)
223 - dftmm_2 (Failed)
224 - dftmm_3 (Failed)
225 - dftmm_4 (Failed)
226 - dftmm_5 (Failed)
227 - dftmm_6 (Failed)
228 - dftmm_7 (Failed)
229 - dftmm_8 (Failed)
230 - dftmm_9 (Failed)
367 - qfit_dft_transition_charges (Failed)
369 - hfreqfromg (Failed)
467 - pcm_bterm_sym (Failed)
468 - pcm_dipole (Failed)
473 - pcm_neq_exc_sym (Failed)
476 - pcm_shield_spin (Failed)
478 - pcm_trp_qr (Failed)
485 - fde_static-vemb_dipole_short (Failed)
486 - fde_static-vemb_dipole_long (Failed)
487 - benchmark_eri_adz (Failed)
488 - benchmark_eri_adzs (Failed)
489 - benchmark_eri_atzs (Failed)
490 - benchmark_eri_r12 (Failed)
491 - benchmark_eri_r12xl (Failed)
492 - benchmark_her_adz (Failed)
493 - benchmark_her_adzs (Failed)
494 - benchmark_her_atzs (Failed)
495 - benchmark_her_r12 (Failed)
496 - benchmark_her_r12xl (Failed)

I have also tried with the MKL library, and I get the same failed tests:

./setup --mpi --int64 --type release --prefix /usr/local/bin/ build_mpi | tee setup.out

File setup.out

Configuring system: Ubuntu
FC=mpif90 CC=mpicc CXX=mpicxx cmake -DENABLE_MPI=ON -DENABLE_SGI_MPT=OFF -DENABLE_OPENMP=OFF -DENABLE_64BIT_INTEGERS=ON -DENABLE_GPU=OFF -DENABLE_CUBLAS=OFF -DENABLE_REAL_SP=OFF -DENABLE_STATIC_LINKING=OFF -DENABLE_SCALAPACK=OFF -DCMAKE_INSTALL_PREFIX=/usr/local/bin/ -DCMAKE_BUILD_TYPE=release /home/marcos/Salviano/Dalton2018/2018.2/dalton

-- The Fortran compiler identification is GNU 9.3.0
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working Fortran compiler: /usr/bin/mpif90
-- Check for working Fortran compiler: /usr/bin/mpif90 -- works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/bin/mpif90 supports Fortran 90
-- Checking whether /usr/bin/mpif90 supports Fortran 90 -- yes
-- Check for working C compiler: /usr/bin/mpicc
-- Check for working C compiler: /usr/bin/mpicc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/mpicxx
-- Check for working CXX compiler: /usr/bin/mpicxx -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Since you specified 64bit integers the math lib search order is (only) MKL;ACML
-- This is because apart from MKL and ACML default math library installations are built for 32bit integers
-- If you know that the library you want to use provides 64bit integers, you can select the library
-- with -D BLAS_TYPE=X or -D LAPACK_TYPE X (X: MKL ESSL OPENBLAS ATLAS ACML SYSTEM_NATIVE)
-- or by redefining MATH_LIB_SEARCH_ORDER
-- Found BLAS: MKL (/usr/lib/x86_64-linux-gnu/libmkl_gf_ilp64.so;/usr/lib/x86_64-linux-gnu/libmkl_sequential.so;/usr/lib/x86_64-linux-gnu/libmkl_core.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libm.so)
-- Found LAPACK: MKL (/usr/lib/x86_64-linux-gnu/libmkl_lapack95_ilp64.a;/usr/lib/x86_64-linux-gnu/libmkl_gf_ilp64.so)
-- Found MPI_C: /usr/bin/mpicc (found version "3.1")
-- Found MPI_CXX: /usr/bin/mpicxx (found version "3.1")
-- Found MPI_Fortran: /usr/bin/mpif90 (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Performing Test MPI_COMPATIBLE
-- Performing Test MPI_COMPATIBLE - Success
-- Performing Test MPI_F90_I4
-- Performing Test MPI_F90_I4 - Failed
-- Performing Test MPI_F90_I8
-- Performing Test MPI_F90_I8 - Success
-- Performing Test ENABLE_MPI3_FEATURES
-- Performing Test ENABLE_MPI3_FEATURES - Success
-- Found Git: /usr/bin/git
-- Polarizable Continuum Model via PCMSolver DISABLED
-- Configuring done
-- Generating done
-- Build files have been written to: /home/marcos/Salviano/Dalton2018/2018.2/dalton/build_mpi

configure step is done
now you need to compile the sources:
$ cd build_mpi
$ make

ctest also fails in the same places.

My environment variables for the build:

CFLAGS=-g -O3 -fstack-protector-strong -Wformat -Werror=format-security -m64
CPPFLAGS=-O3 -Wdate-time -D_FORTIFY_SOURCE=2 -m64
CXXFLAGS=-g -O3 -fstack-protector-strong -Wformat -Werror=format-security -m64
FCFLAGS=-g -O3 -fstack-protector-strong -m64 -frecursive -fdefault-integer-8
FFLAGS=-g -O3 -fstack-protector-strong -m64 -frecursive -fdefault-integer-8
DALTON_TMPDIR=/tmp/scratch
DALTON_LAUNCHER=mpirun -np 6
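
(These are exported in the shell before running ./setup and ctest, e.g.:

export FFLAGS="-g -O3 -fstack-protector-strong -m64 -frecursive -fdefault-integer-8"
export DALTON_TMPDIR=/tmp/scratch
export DALTON_LAUNCHER="mpirun -np 6"

and similarly for CFLAGS, CPPFLAGS, CXXFLAGS and FCFLAGS.)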

Can someone advise?

Thanks,
Salviano

taylor
Posts: 597
Joined: 15 Oct 2013, 05:37
First name(s): Peter
Middle name(s): Robert
Last name(s): Taylor
Affiliation: Tianjin University
Country: China

Re: Dalton 2018.2 build error with --int64

Post by taylor » 12 Oct 2020, 01:07

Just as a passing remark, your subject line is perhaps a bit misleading: you do not have any build errors (the program builds and can be run), but some of the tests fail. That is not the important issue, of course!

An important question at this point is why you want to build with 64-bit integers at all. In general this is deprecated, because at no point has anyone worked systematically through the code to ensure everything is correct for such a build. The manual contains a passing remark that 64-bit integers may be useful in some Cholesky-decomposed CC calculations, although these days no-one seems to know exactly what that remark signifies. Otherwise the only reason for using 64-bit integers would be to address more than 16GB of memory per task. The only situation I can imagine where that need might arise is in very large full CI calculations (CASSCF, etc.). If that is what you want to do, then it would appear that the CASSCF and RASSCF tests all pass, so your build should be OK for that purpose.
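
Roughly speaking, the 16GB figure is just arithmetic: Dalton allocates its work space in 8-byte double-precision words, and a signed 32-bit integer can index at most 2^31 of them, so the largest addressable work array is 2^31 words x 8 bytes/word = 16 GiB per task.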

I realize that it is not very satisfactory from an aesthetic point of view to say "either don't build with 64-bit integers, or live with errors from the test suite", but I fear those two alternatives are the only ones at present. To conclude: in our group we use Dalton for a large variety of different calculations and have no 64-bit integer version built at all. The 32-bit integer version should be perfectly satisfactory unless you are doing something that involves very large CI spaces.
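
To be concrete, a conventional 32-bit-integer build is simply your first command without --int64 (and without the 64-bit library paths and the -fdefault-integer-8 flags), something like

./setup --mpi --prefix=/usr/local/bin/ build_mpi

after which the math-library detection is free to pick up a standard 32-bit-integer BLAS/LAPACK.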

Best regards
Pete
