Ctest failed for installation with mkl and int64
- Posts: 8
- Joined: 24 Apr 2014, 17:48
- First name(s): leonardo
- Last name(s): abreu
- Affiliation: ufg
- Country: Brazil
Ctest failed for installation with mkl and int64
Hi everyone,
I'm a new Dalton user and I'm having problems with the Dalton installation.
The first time I installed Dalton I did:
tar xvzf DALTON-2013.2-Source.tar.gz
cd DALTON-2013.2-Source
./setup
cd build
ctest
make install
In this case the gfortran, gcc, and g++ compilers were used, and
ctest passed without any problems. However, with this procedure I can use only
one processor when running Dalton.
To solve this problem I uninstalled Dalton and then reinstalled it using the Intel compilers,
adding MKL and 64-bit integer support:
tar xvzf DALTON-2013.2-Source.tar.gz
cd DALTON-2013.2-Source
./setup --fc=ifort --cc=icc --cxx=icpc --mkl=parallel --int64
cd build
make -j12
ctest -j12
But in this case 109 of the ctest cases failed.
Why does ctest fail when I use --int64 or --mkl=parallel? How can I install Dalton so that it uses
my 12 processors?
- Posts: 1210
- Joined: 26 Aug 2013, 13:22
- First name(s): Radovan
- Last name(s): Bast
- Affiliation: none
- Country: Germany
Re: Ctest failed for installation with mkl and int64
dear Leonardo,
first of all, most likely you don't need --int64; this can be a reason for the failing tests.
do not use this flag unless you are sure that you need it.
then, with --mkl=parallel you link against a threaded MKL. check MKL_NUM_THREADS when you
run with ctest -j12; otherwise you run 12 tests at a time, each of them trying to use
up to 12 processors.
threaded MKL can only parallelize BLAS/LAPACK calls. depending on what
actual calculation you run this may be a significant portion of the total run time or not.
if you want to make use of Dalton's own parallelization you should compile with
MPI (--mpi) if you want to run Dalton, and with MPI and/or OpenMP (--omp) if
you want to run LSDalton.
final remark: also control MKL_NUM_THREADS if you run with MPI, to avoid
having MKL and Dalton compete over the cores.
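for example (just a sketch, not taken from your setup; adapt the file names to your own input):
Code:
# one MKL thread per test while 12 tests run at the same time
export MKL_NUM_THREADS=1
ctest -j12

# same idea later on for an MPI run: one MKL thread per MPI rank
export MKL_NUM_THREADS=1
dalton -N 12 -M 7000 arq.dal arq.mol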
good luck,
radovan
- Posts: 8
- Joined: 24 Apr 2014, 17:48
- First name(s): leonardo
- Last name(s): abreu
- Affiliation: ufg
- Country: Brazil
Re: Ctest failed for installation with mkl and int64
Hi Radovan,
Thanks for your support.
I tried to install DALTON with MPI (./setup --mpi) but I am still having problems.
1 - During the installation, ctest failed on the last 5 tests:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The following tests FAILED:
539 - dectests/decmp2_energy (Failed)
540 - dectests/decmp2_density (Failed)
541 - dectests/decmp2_gradient (Failed)
542 - dectests/decmp2_geoopt (Failed)
549 - dectests/decmp2_gradient_debug (Failed)
Errors while running CTest
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2 - After doing "make install" I tried to run a Dalton job with:
nohup dalton -N 12 -M 7000 arq.dal arq.mol &
and received the error message:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**** OUTPUT FROM DALTON SHELL SCRIPT ****
*****************************************
DALTON release 2013.3
Invocation: /usr/local/bin/dalton -N 12 -M 7000 gam-hf.dal ATAR.mol
Seg Jul 7 10:15:26 BRT 2014
Calculation: gam-hf_ATAR (input files: gam-hf.dal and ATAR.mol)
PID : 20202
Input dir : /home/marcos/leonardo/teste/atar
Scratch dir: /home/tmp/DALTON_scratch_marcos/gam-hf_ATAR_20202
INFO : OMP_NUM_THREADS set to 1 because it was not defined by user
INFO : and the cores are probably used by MPI
mpiexec_marcosfapeg1: cannot connect to local mpd (/tmp/mpd2.console_marcos); possible causes:
1. no mpd is running on this host
2. an mpd is running but was started without a "console" (-n option)
Error in /opt/intel/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpiexec -np 12 /usr/local/dalton-2013-teste/dalton/dalton.x, exit code 255
DALTON.OUT has not been created from the present run.
/home/tmp/DALTON_scratch_marcos/gam-hf_ATAR_20202 is therefore not deleted by this script.
List of created files in /home/tmp/DALTON_scratch_marcos/gam-hf_ATAR_20202 :
total 12
4 -rwxrwxr-x 1 marcos marcos 135 Jul 7 10:15 DALTON.INP
4 -rwxrwxr-x 1 marcos marcos 2515 Jul 7 10:15 MOLECULE.INP
4 -rw-rw-r-- 1 marcos marcos 20 Jul 7 10:15 gam-hf_ATAR.tar.gz
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
What should I do to solve this problem?
- jmelo
- Posts: 64
- Joined: 27 Aug 2013, 16:59
- First name(s): Juan
- Middle name(s): Ignacio
- Last name(s): Melo
- Affiliation: Dpto. Fisica Fac. Ciencias Exactas y Naturales, Univ. Bs As. And IFIBA- CONICET
- Country: Argentina
- Location: Facultad de Ciencas Exactas y Naturales, Universidad de Buenos Aires, Argentina
Re: Ctest failed for installation with mkl and int64
Hi Leo,
Regarding point 2:
assuming that your machine has enough RAM to run that calculation (that is, 12 processes with 7 GB of RAM each),
the problem might be that you skipped .PARALLEL in your .dal file.
Please attach your input files so we can help you a bit more.
Best regards,
jim
- Posts: 1210
- Joined: 26 Aug 2013, 13:22
- First name(s): Radovan
- Last name(s): Bast
- Affiliation: none
- Country: Germany
Re: Ctest failed for installation with mkl and int64
hi Leo,
please send the output of this:
Code:
$ grep -i MPI_Fortran build/CMakeCache.txt
i assume that you submit interactively (without queuing system or scheduler)
because this is your own machine, right?
i don't think there is a problem in the input.
the .PARALLEL keyword is not needed since 2011 (the code ignores
it if it is present).
best wishes,
radovan
- Posts: 8
- Joined: 24 Apr 2014, 17:48
- First name(s): leonardo
- Last name(s): abreu
- Affiliation: ufg
- Country: Brazil
Re: Ctest failed for installation with mkl and int64
Hi Everybody,
thanks for your support.
Radovan, "grep -i MPI_Fortran build/CMakeCache.txt" gave me this output:
MPI_Fortran_COMPILER:FILEPATH=/usr/bin/mpif90
MPI_Fortran_COMPILE_FLAGS:STRING=
MPI_Fortran_INCLUDE_PATH:STRING=/usr/include/mpich2;/usr/include/mpich2
MPI_Fortran_LIBRARIES:STRING=/usr/lib/libmpichf90.so;/usr/lib/libmpichf90.so;/usr/lib/libmpich.so;/usr/lib/libopa.so;/usr/lib/libmpl.so;/usr/lib/x86_64-linux-gnu/librt.so;/usr/lib/libcr.so;/usr/lib/x86_64-linux-gnu/libpthread.so
MPI_Fortran_LINK_FLAGS:STRING= -Wl,-Bsymbolic-functions -Wl,-z,relro
//Details about finding MPI_Fortran
FIND_PACKAGE_MESSAGE_DETAILS_MPI_Fortran:INTERNAL=[/usr/lib/libmpichf90.so;/usr/lib/libmpichf90.so;/usr/lib/libmpich.so;/usr/lib/libopa.so;/usr/lib/libmpl.so;/usr/lib/x86_64-linux-gnu/librt.so;/usr/lib/libcr.so;/usr/lib/x86_64-linux-gnu/libpthread.so][/usr/include/mpich2;/usr/include/mpich2][v()]
//ADVANCED property for variable: MPI_Fortran_COMPILER
MPI_Fortran_COMPILER-ADVANCED:INTERNAL=1
//ADVANCED property for variable: MPI_Fortran_COMPILE_FLAGS
MPI_Fortran_COMPILE_FLAGS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: MPI_Fortran_INCLUDE_PATH
MPI_Fortran_INCLUDE_PATH-ADVANCED:INTERNAL=1
//ADVANCED property for variable: MPI_Fortran_LIBRARIES
MPI_Fortran_LIBRARIES-ADVANCED:INTERNAL=1
//ADVANCED property for variable: MPI_Fortran_LINK_FLAGS
MPI_Fortran_LINK_FLAGS-ADVANCED:INTERNAL=1
And yes, I'm installing Dalton on my own machine. But afterwards I will do the same on a cluster with 18 machines like mine.
-----------------------------------------------------------------------------------------------------
JIM, my input .dal file is:
**DALTON INPUT
.RUN RESPONSE
**WAVE FUNCTION
.HF
**RESPONSE
*CUBIC
.DIPLEN
.DC-KERR
.FREQUE
1
0.0
-----------------------------------------------------------------------------------------------------
And tkjaer, thanks for the link.
- Posts: 1210
- Joined: 26 Aug 2013, 13:22
- First name(s): Radovan
- Last name(s): Bast
- Affiliation: none
- Country: Germany
Re: Ctest failed for installation with mkl and int64
ok, from the different posts it looks like your dalton script
launches dalton with /opt/intel/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpiexec,
but you rather want to use /usr/bin/mpiexec (i am guessing the correct launcher is there).
for this, edit build/dalton and look for "MPIRUN=".
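something along these lines (just a sketch; the exact wording of the MPIRUN line in your build/dalton script may differ):
Code:
# find the launcher line in the run script
grep -n 'MPIRUN=' build/dalton
# then change the path it points to, from the Intel runtime launcher
#   /opt/intel/composer_xe_2013_sp1.0.080/mpirt/bin/intel64/mpiexec
# to the MPICH2 launcher that matches the mpif90 your build picked up:
#   /usr/bin/mpiexec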
- Posts: 8
- Joined: 24 Apr 2014, 17:48
- First name(s): leonardo
- Last name(s): abreu
- Affiliation: ufg
- Country: Brazil
Re: Ctest failed for installation with mkl and int64
Hi Radovan,
Finally DALTON runs fine. My test finished correctly with 12 processors. As I told you before, now I have to install DALTON on a cluster
with 18 machines like mine. I'm still having problems with the 5 failing ctest tests. I saw and tried what tkjaer's link suggested, but it didn't work.
Do you think that is a problem?
Thanks again for your support.
Best regards,
Leonardo.
Re: Ctest failed for installation with mkl and int64
It is only a problem if you plan on running DEC calculations using the LSDALTON code.
- Posts: 8
- Joined: 24 Apr 2014, 17:48
- First name(s): leonardo
- Last name(s): abreu
- Affiliation: ufg
- Country: Brazil
Re: Ctest failed for installation with mkl and int64
Hi Everyone,
After installing Dalton with MPI I tried to run a CC job and received the error message:
-----------------------------------------------------------------------------------------------------
ERROR: CC is not MPI parallelized!
For parallelization speedup, e.g. use parallel MKL
ERROR: CC not implemented for parallel calculations!
--- SEVERE ERROR, PROGRAM WILL BE ABORTED ---
-----------------------------------------------------------------------------------------------------
So, how can I install DALTON with both MPI and MKL, so that I can use one or the other according to my needs?
I tried to install with:
1 - ./setup --mpi --mkl=parallel
In this case "make" failed after 85% of the process.
2 - ./setup --mpi --omp
In this case "make" completed to 100%, but I got error messages and the dalton and
lsdalton executables were not created in the build directory.
best regards,
Leonardo
- Posts: 1210
- Joined: 26 Aug 2013, 13:22
- First name(s): Radovan
- Last name(s): Bast
- Affiliation: none
- Country: Germany
Re: Ctest failed for installation with mkl and int64
dear Leonardo,
without the error messages i have to guess, and my guess is that
your compilers are GNU and not Intel.
if you compile with GNU, then you want to do this:
Code:
export MATH_ROOT=/opt/intel/mkl   # adapt to where MKL is installed
./setup --mpi --omp
in the configuration output you should see that setup found the MKL libraries.
if not, do not continue and first find out why MKL was not found
in MATH_ROOT.
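one quick way to check this (just a sketch; the exact variable names in the cache may differ between Dalton versions, so also look at the setup/cmake screen output):
Code:
$ grep -i mkl build/CMakeCache.txt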
if you compile with Intel compilers:
Code:
./setup --mpi --mkl=parallel --omp
if the latter fails, please post the error.
radovan
- Posts: 8
- Joined: 24 Apr 2014, 17:48
- First name(s): leonardo
- Last name(s): abreu
- Affiliation: ufg
- Country: Brazil
Re: Ctest failed for installation with mkl and int64
Hi Radovan,
Thanks for your help. Dalton is running well. I installed Dalton with the GNU compilers
as you suggested and everything worked fine. I had no errors in ctest and now
I can use all my processors. Thanks for everything, and thanks to everybody who helped me.
Best regards,
Leonardo Abreu.
- Posts: 1210
- Joined: 26 Aug 2013, 13:22
- First name(s): Radovan
- Last name(s): Bast
- Affiliation: none
- Country: Germany
Re: Ctest failed for installation with mkl and int64
thanks - great to know. happy computing with Dalton!
radovan