Can you point me at what particular tcp/udp ports hydra and smpd services use...
Dear Colleagues, Can you point me at which particular TCP/UDP ports the hydra and smpd services use to communicate over the network? I tried to make an exception for the mpiexec utility and the hydra service in...

MPI over mmapped region kills kernel?
Hi, I use MPI (with InfiniBand RDMA enabled) over an mmapped region. The mmapped region is larger than physical memory, so I expect the TLB to be updated often, which may incur 'undefined' behavior...

MPI - Code hangs when send/recv large data
Hi all, I have been confused by the strange behaviour of the Intel MPI library for days. When I send small data, everything is fine. However, when I send large data, the following code hangs: #include...

Installation Problems of Intel Parallel Studio XE 2016 Update 1 Cluster...
Hi, I installed a local copy of Intel Parallel Studio on a local machine easily using the install GUI (install_GUI.sh), and it works properly on that machine. Then I decided to try to install...

time statistics provided by Intel Trace Analyzer
I am using the Trace Analyzer for an MPI job running on 4 nodes (80 physical cores total, 80 MPI ranks). When I run 'mpirun -trace ...', the job takes roughly 10 times longer than the same job running...

MPI problems with parallel SIESTA
Hello, I need to use the scientific software package SIESTA 3.2 (TranSIESTA, actually), but I'm having a hard time getting the code to run on our cluster. I posted this to another forum, but someone gave...

Performance loss migrating from Xeon X5550 to Xeon E5-2650 v2
Hi, I'm migrating in-house Fortran software from a cluster with Intel Xeon X5550 processors to a cluster with Intel Xeon E5-2650 v2 processors, and I'm experiencing a loss of performance. I...

mpirun and LSF
Hi, I'm trying to use the -ppn (or -perhost) option with Hydra and LSF 9.1, but it doesn't work (nodes have 16 cores):
$ bsub -q q_32p_1h -I -n 32 mpirun -perhost 8 -np 16 ./a.out
Job <750954> is...

MPI within parallel_studio_xe_2016_update2 not working under certain conditions
Hi, We just migrated from XE 2013 to XE 2016 Update 2. We use the compiler suite and the MPI library to build the MPI environment for ab initio software such as PWscf (Quantum ESPRESSO) and OpenMX. Before...

MPI process hangs on 'MPI_Finalize'
Hi, When I run my MPI application across 40 machines, one MPI process does not finish and hangs in 'MPI_Finalize' (the other MPI processes on the other machines show zero CPU usage). Below, I...

MPI generates numerous SCIF/scif_connect failure warnings
I am running a heterogeneous job on the host and a Xeon Phi coprocessor. If I run the MPI job on just the host or just the card, everything is smooth. When I split the job between the host and the Xeon...

mmap() + MPI one-sided communication fails when DAPL UD enabled
Hi! I used a trick to read a page located on a remote machine's disk: mmap() the whole file on each machine and create MPI one-sided communication windows on it. It works fine...
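For readers unfamiliar with the first half of this trick, a minimal sketch of the per-node mmap() step is below. The MPI one-sided windows from the post are omitted, and the helper name and temp-file path are illustrative assumptions, not the poster's code:

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* map_scratch_file: create a scratch file of `len` bytes and map it shared,
 * roughly the per-node mmap()-over-a-file step the post describes.
 * Returns NULL on failure.  Path template is an assumption. */
static char *map_scratch_file(size_t len)
{
    char tmpl[] = "/tmp/mmap_demo_XXXXXX";
    int fd = mkstemp(tmpl);
    if (fd < 0)
        return NULL;
    unlink(tmpl);                    /* file lives on until fd is closed */
    if (ftruncate(fd, (off_t)len) != 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                       /* the mapping outlives the descriptor */
    return p == MAP_FAILED ? NULL : (char *)p;
}
```

In the post's setup an MPI window (MPI_Win_create) would then be created over the returned region; touching a page not yet in memory triggers a page fault, which is exactly where RDMA transports can get into trouble with pages the NIC has not pinned.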

error while loading shared libraries: libiomp5.so: cannot open shared object...
I wrote an MPI program to run on an Intel Xeon Phi in native mode, on the Stampede supercomputer. The code contains OpenMP pragmas. I compiled it using the following command:
$ mpiicc program.c...
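This error usually means the OpenMP runtime directory is not on the dynamic loader's search path on the target. A hedged shell sketch follows; the install prefix is an assumption, so substitute your site's path (on Stampede, the module system provides it), and for native runs the library must also be visible on the card itself:

```shell
# Hypothetical location of the MIC-side libiomp5.so -- an assumption,
# not a Stampede-specific fact; adjust to your compiler install.
IOMP_DIR=/opt/intel/lib/mic
# Prepend it to the loader path, preserving any existing entries.
export LD_LIBRARY_PATH="$IOMP_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

For native Xeon Phi execution, the same idea applies on the coprocessor side (via the MIC environment rather than the host's), since the loader on the card is the one that must find libiomp5.so.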

mvapich2-2.2b and intel-15.0.4
Hi all, apologies if this is not the correct forum. I am trying to compile mvapich2-2.2b with the Intel 2015 compiler (15.0.4), and I am getting the following error during make:
make[2]: Entering...

Allocate the memory of an entire node on a single MPI process
Dear all, I want to benchmark an implementation on cluster architectures. The number of processes need not be high; however, I need as much memory as possible for each MPI process. For example, I have...

libgcc-independent binary
Hi there, I want to generate binaries that are independent of libgcc. This can be done with the following compile options: ifort -O0 -fp-model source -ip -inline-factor=100 -unroll-aggressive x.f90...

MPI_Finalize() won't finalize if stdout and stderr are redirected via freopen
Hi, I have a problem using Intel MPI (5.1 Update 3) with redirection of stdout and stderr. When launched with multiple processes, if both stdout and stderr are redirected to (two different) files, then...
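For context, the redirection pattern in question is the standard freopen() idiom sketched below, MPI omitted; the helper name and file paths are illustrative assumptions, not the poster's code. The hang reported in the post arises only once Intel MPI's own stdout/stderr forwarding is layered on top of this:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* redirect_stdio: re-point stdout and stderr at two separate files, the
 * freopen() pattern the post applies per process (MPI itself omitted).
 * Mode "w" truncates any existing file.  Returns 0 on success. */
static int redirect_stdio(const char *out_path, const char *err_path)
{
    if (freopen(out_path, "w", stdout) == NULL)
        return -1;
    if (freopen(err_path, "w", stderr) == NULL)
        return -1;
    return 0;
}
```

After the call, all subsequent printf/fprintf(stderr, ...) output lands in the two files rather than on the launcher's forwarded streams, which is why per-rank file names are typically used in the MPI case.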

MPI_Comm_rank, MPI_THREAD_MULTIPLE, and performance
Hi everyone, We found the following behavior in Intel MPI (5.0.3) using both the Intel compilers and gcc: in a hybrid OpenMP-MPI environment, the performance of MPI_Comm_rank goes down if MPI is initialized...

Installing an older version of Intel Parallel Studio XE
Hi, I wish to install Intel Parallel Studio 2015 using a 2016 named-user license on our Linux cluster. Do I use the host ID of the head node so that the compiler is available on all nodes? Also, I'm not a root...

nested MPI application
Hi there, I have an MPI application, say p1.exe, compiled with the command mpiifort p1.f90 /Od /Qopenmp /link. The program receives a parameter from the command line at runtime: p1.exe SRN:n. When n=1, the MPI is...