HPC Orchestrator integration tests
I am trying to run the HPC Orchestrator integration tests. The Fortran tests fail because I do not have a license for the Intel Fortran compiler - I only have a C++ compiler license. Does anybody...

IMPI and DAPL fabrics on an InfiniBand cluster
Hello, I have been trying to submit a job on our cluster for an intel17-compiled, impi-enabled code. I keep running into trouble at startup when launching through PBS. This is the submission...

Version incompatibility inside a cluster
Hey, good morning. My name is Eliomar, I'm from Venezuela. I'm fairly new to working with MPI; I'm doing a master's project with this technology. But I'm facing a problem with my implementation that I don't...

MPI_Mprobe() makes no progress for internode communicator
Hi all, my understanding (correct me if I'm wrong) is that MPI_Mprobe() has to guarantee progress if a matching send has been posted. The minimal working example below, however, runs to completion on a...
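
For reference, a minimal matched-probe receive in C - rank 0 posts a send, rank 1 probes and then receives with MPI_Mrecv() - might look like the sketch below. The tag, count, and rank layout are illustrative assumptions, not details taken from the truncated post.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Message msg;
            MPI_Status status;
            /* Should return once the matching send has been posted */
            MPI_Mprobe(0, 0, MPI_COMM_WORLD, &msg, &status);
            int count, payload;
            MPI_Get_count(&status, MPI_INT, &count);
            /* Receive exactly the probed message */
            MPI_Mrecv(&payload, count, MPI_INT, &msg, &status);
            printf("rank 1 received %d int(s), value %d\n", count, payload);
        }

        MPI_Finalize();
        return 0;
    }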

Obtaining the total throughput & latency using IMB-benchmark
Hello, I am new to using the IMB benchmark and wanted to check whether getting the total throughput and the latency from it is possible. Currently the IMB benchmark provides the...
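
As a general note, drawn from memory of the Intel MPI Benchmarks output rather than from the post above: the IMB-MPI1 PingPong test reports both latency (the t[usec] column) and bandwidth (the Mbytes/sec column) per message size, typically run across two nodes with something like "mpirun -np 2 -ppn 1 IMB-MPI1 PingPong"; whether a single aggregate throughput figure is available depends on the chosen test and on the -multi mode, so the IMB user guide is the place to confirm.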

Register for the Intel® HPC Developer Conference
The Intel® HPC Developer Conference, taking place November 11–12, 2017 in Denver, Colorado, is a great opportunity for developers who want to gain technical knowledge and hands-on experience with HPC...

Intel multinode Run Problem
Hi there, I have a system with 6 compute nodes; the /opt folder is NFS-shared and the Intel Parallel Studio cluster version is installed on the NFS server. I am using Slurm as the workload manager. When I run a VASP job on...

Intel Cluster Checker collection issue
Dear all, I'm using Intel(R) Cluster Checker 2017 Update 2 (build 20170117), installed locally on the master node in /opt/intel as part of Intel Parallel Studio XE. However, when running clck-collect I get...
View ArticleHow do disable intra-node comminucation
I would like to test the network latency/bandwidth of each node that I am running on in parallel. I think the simplest way to do this would be to have each node test itself. My question is: How can I...
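
One way to let each node run a self-contained test (a sketch under assumptions, not something stated in the truncated post) is to split MPI_COMM_WORLD into per-node communicators with MPI_Comm_split_type() and time a ping-pong inside each node-local communicator. Whether that traffic actually exercises the network adapter or stays in shared memory depends on the MPI library's fabric settings, which would still need to be adjusted separately.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        /* Group together all ranks that share a node (shared memory) */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);

        int node_rank, node_size;
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        /* Simple ping-pong between the first two ranks on each node */
        if (node_size >= 2 && node_rank < 2) {
            char buf[1024] = {0};
            int peer = 1 - node_rank;
            const int iters = 1000;
            double t0 = MPI_Wtime();
            for (int i = 0; i < iters; ++i) {
                if (node_rank == 0) {
                    MPI_Send(buf, sizeof buf, MPI_CHAR, peer, 0, node_comm);
                    MPI_Recv(buf, sizeof buf, MPI_CHAR, peer, 0, node_comm,
                             MPI_STATUS_IGNORE);
                } else {
                    MPI_Recv(buf, sizeof buf, MPI_CHAR, peer, 0, node_comm,
                             MPI_STATUS_IGNORE);
                    MPI_Send(buf, sizeof buf, MPI_CHAR, peer, 0, node_comm);
                }
            }
            if (node_rank == 0)
                printf("avg round trip: %g us\n",
                       (MPI_Wtime() - t0) / iters * 1e6);
        }

        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }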

MPI ISend/IRecv deadlock on AWS EC2
Hi, I'm encountering an unexpected deadlock in this Fortran test program, compiled with Parallel Studio XE 2017 Update 4 on an Amazon EC2 cluster (Linux system). $ mpiifort -traceback nbtest.f90 -o...
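
The Fortran source is not included in the excerpt, so purely as a reference point: a pairwise nonblocking exchange does not deadlock when both requests are posted before either is waited on and both are completed together, roughly as in this C sketch (message size, tag, and rank pairing are made-up values).

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int peer = rank ^ 1;                  /* pair ranks 0-1, 2-3, ... */
        if (peer < size) {
            enum { N = 4096 };
            double sendbuf[N], recvbuf[N];
            for (int i = 0; i < N; ++i) sendbuf[i] = rank;

            /* Post both operations first, then complete them together */
            MPI_Request reqs[2];
            MPI_Irecv(recvbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(sendbuf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

            printf("rank %d exchanged %d doubles with rank %d\n", rank, N, peer);
        }

        MPI_Finalize();
        return 0;
    }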

Shared memory initialization failure
Hi all, running our MPI application on a newly set up RHEL 7.3 system using SGE, we obtain the following error: Fatal error in MPI_Init: Other MPI error, error stack: MPIR_Init_thread(805): fail failed...

mpirun command does not distribute jobs to compute nodes
Dear folks, I have the Intel(R) C Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 13.0.1.117 Build 20121010 on my system. I am trying to submit a job using mpirun to my machine...
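
The rest of the post is cut off; as a general observation (not something stated above), mpirun only spreads ranks across nodes when it is told which hosts to use - for Intel MPI that usually means a host file passed via -machinefile (or hosts listed with -hosts) plus password-less ssh to every compute node - so a missing or single-entry host list is the first thing worth checking.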

ITAC -- Naming generated .stf file to differentiate runs
Hello, I am using ITAC from the 2017.05 Intel Parallel Cluster Studio. I issue a number of mpirun command lines with ITAC tracing enabled. I am trying, though, to assign specific names to the generated...
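
The post is truncated; from memory of the ITAC configuration variables (worth verifying in the ITAC documentation), the generated trace name can be set per run through the VT_LOGFILE_NAME environment variable, with VT_LOGFILE_PREFIX selecting the output directory - for example exported via -genv on each mpirun command line - so every run ends up with its own .stf file.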

Intel MPI cross-OS launch error
Env: node1: Windows 10 (192.168.137.1); node2: Debian 8 virtual machine (192.168.137.3). Test app: the test.cpp included with the Intel MPI package. 1. Launch from Windows...

PBS system said: 'MPI startup(): ofa fabric is not available and fallback...
I've been using a PBS system for testing my code. I have a PBS script to run my binary. But then I get: "> [0] MPI startup(): ofa fabric is not available and fallback fabric is not enabled". And I...
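
Not from the post itself: in Intel MPI 2017 the fabric is selected with the I_MPI_FABRICS environment variable and the fallback behaviour with I_MPI_FALLBACK, so when the OFA path is unavailable the usual workarounds are to pick a fabric that actually exists on the nodes (for example I_MPI_FABRICS=shm:tcp, or shm:dapl on InfiniBand) or to enable the fallback; the exact spelling should be checked against the Intel MPI reference for the installed version.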

Fatal error using MPI in Linux
Hi, I'm using a virtual Linux Ubuntu machine (Linux-VirtualBox 4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10 18:29:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux) with 8 GB RAM. For a process in Matlab, the...

Drastic reduction in performance when compute node is running at half load
We have compute nodes with 24 cores (48 threads) and 64 GB RAM (2x32 GB). When I run a sample code (matrix multiplication) on one of the compute nodes in one thread, it takes only 4 seconds. But when I...

Slowdown of message exchange by multiple orders of magnitude due to dynamic...
Hello, we develop MPI algorithms on the SuperMUC supercomputer [1]. We compile our algorithms with Intel MPI 2018. Unfortunately, it seems that the message transfer between two processes which have not...
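
The excerpt stops mid-sentence, but the title points at dynamic connection establishment. Two generic mitigations, offered as assumptions rather than anything stated in the post: Intel MPI's I_MPI_DYNAMIC_CONNECTION environment variable controls whether connections are set up lazily, and a warm-up round of point-to-point traffic before the timed phase pays the connection cost up front, roughly like this:

    #include <mpi.h>
    #include <stdlib.h>

    /* Touch every peer once so connection setup happens before timing starts */
    static void warm_up(MPI_Comm comm) {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        char *sendbuf = calloc(size, 1);
        char *recvbuf = calloc(size, 1);
        MPI_Request *reqs = malloc(2 * (size_t)size * sizeof *reqs);
        int n = 0;

        for (int peer = 0; peer < size; ++peer) {
            if (peer == rank) continue;
            MPI_Irecv(&recvbuf[peer], 1, MPI_CHAR, peer, 0, comm, &reqs[n++]);
            MPI_Isend(&sendbuf[peer], 1, MPI_CHAR, peer, 0, comm, &reqs[n++]);
        }
        MPI_Waitall(n, reqs, MPI_STATUSES_IGNORE);
        MPI_Barrier(comm);

        free(sendbuf); free(recvbuf); free(reqs);
    }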

HPCC benchmark HPL results degrade as more cores are used
I have a 6-node cluster consisting of 12 cores per node, with a total of 72 cores. When running the HPCC benchmark on 6 cores - 1 core per node, 6 nodes - the HPL result is 1198.87 GFLOPS. However,...