I have just started using Dalton, and it is the first compiled computational code I have used, so perhaps my trouble is simply due to my own inexperience. Sorry if the question is inappropriate.
I have trouble on some machines with memory allocation, both when using the WRKMEM environment variable and when using the -mb option. On the cluster, I can allocate more than 4 GB without any problems either way; on our lab machines (which run a more up-to-date system) I cannot, or at least this is what I deduce from the "dalton" script output:
DALTON: WRKMEM conversion error; WRKMEM = "2147483648" (here some space characters follow, which are trimmed when I publish this post)
DALTON: read as LMWORK = 11053 (here a lot of space characters precede the number, which are trimmed when I publish this post)
DALTON: default work memory size used. 64000000
DALTON: master work memory size also used for slaves. 64000000
Work memory size (LMWORK+2): 64000002 = 488.28 megabytes; node 1
1: Directories for basis set searches:
Work memory size (LMWORK+2): 64000002 = 488.28 megabytes; node 0
0: Directories for basis set searches:
If I set WRKMEM to exactly 4 GB in words (536870912), it works fine, and I can read in the script output that 4.00 GB are used. What additional information should I provide, and what could be the reason for this [mis?]behaviour?
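For reference, here is my own arithmetic behind the numbers above (this is my assumption, not Dalton code: WRKMEM seems to be counted in 8-byte words, since 536870912 words matches the reported 4.00 GB):

```python
# My own arithmetic, not Dalton's: WRKMEM appears to be counted in
# 8-byte (64-bit) words, since 536870912 words * 8 bytes = 4 GB.
WORD_BYTES = 8

works = 536870912      # accepted; reported as 4.00 GB by the script
fails = 2147483648     # rejected with "WRKMEM conversion error"

print(works * WORD_BYTES)   # 4294967296 bytes = 4 GB
print(fails == 2 ** 31)     # True: the failing value is exactly 2^31,
                            # one past the largest 32-bit signed integer
```

So the value that fails for me is exactly one more than a signed 32-bit integer can hold, which is why I suspect something 32-bit in the build.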
While collecting information on the issue, I noticed that on the cluster I had compiled against the 64-bit LAPACK and BLAS libraries (located under /usr/lib64), but on the lab machines it was a 32-bit (?) BLAS/LAPACK (located under /usr/lib). Could this be the culprit?
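To check which flavour of the libraries the build actually picked up, I put together this small check (my own script, nothing from Dalton; the paths are guesses, adjust them to where your BLAS/LAPACK files actually live). It reads the fifth byte of the ELF header (EI_CLASS), which is 1 for 32-bit and 2 for 64-bit objects:

```python
import os

def elf_class(path):
    """Return '32-bit' or '64-bit' based on the ELF header's EI_CLASS byte."""
    with open(path, "rb") as f:
        ident = f.read(5)
    if ident[:4] != b"\x7fELF":
        return "not an ELF file"
    return {1: "32-bit", 2: "64-bit"}.get(ident[4], "unknown")

# Paths are guesses -- point this at your actual BLAS/LAPACK .so files.
for lib in ("/usr/lib/liblapack.so", "/usr/lib64/liblapack.so"):
    if os.path.exists(lib):
        print(lib, "->", elf_class(lib))
```

(The same information can be obtained with the standard `file` utility.)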
Thank you very much in advance.
Information about the nodes is attached. Sorry, the compilation command was the same in both cases but is included only in labmachine_info.txt.