        OpenMPI MPMD: getting the communicator size

        Date: 2023-09-26

                  This article describes how to get the communicator size in an OpenMPI MPMD run; it may be a useful reference if you are facing the same problem.

                  Problem description

                  I have two OpenMPI programs which I start like this:

                  mpirun -n 4 ./prog1 : -n 2 ./prog2
                  

                  Now how do I use MPI_Comm_size(MPI_COMM_WORLD, &size) such that I get the size values as

                  prog1 size=4
                  prog2 size=2.
                  

                  As of now I get "6" in both programs.
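
                  For context, here is a minimal sketch of the naive query described above (an assumed reconstruction, not code from the question): both executables only ask MPI_COMM_WORLD for its size, so every one of the 4 + 2 processes reports 6.

                   #include <stdio.h>
                   #include <mpi.h>
                   
                   int main( int argc, char *argv[] ) {
                       MPI_Init( &argc, &argv );
                   
                       int size;
                       // MPI_COMM_WORLD spans every process launched by the mpirun line,
                       // so both prog1 and prog2 see the total of 6 processes here
                       MPI_Comm_size( MPI_COMM_WORLD, &size );
                       printf( "%s sees size=%d\n", argv[0], size );
                   
                       MPI_Finalize();
                       return 0;
                   }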

                  Recommended answer

                  This is doable, albeit a bit cumbersome. The principle is to split MPI_COMM_WORLD into communicators based on the value of argv[0], which contains the executable's name.

                  Something like this should work:

                  #include <stdio.h>
                  #include <string.h>
                  #include <stdlib.h>
                  #include <mpi.h>
                  
                  int main( int argc, char *argv[] ) {
                  
                      MPI_Init( &argc, &argv );
                  
                      int wRank, wSize;
                      MPI_Comm_rank( MPI_COMM_WORLD, &wRank );
                      MPI_Comm_size( MPI_COMM_WORLD, &wSize );
                  
                      int myLen = strlen( argv[0] ) + 1;
                      int maxLen;
                       // Gathering the maximum length of the executable names
                      MPI_Allreduce( &myLen, &maxLen, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD );
                  
                      // Allocating memory for all of them
                      char *names = malloc( wSize * maxLen );
                      // and copying my name at its place in the array
                      strcpy( names + ( wRank * maxLen ), argv[0] );
                  
                       // Now collecting all the executable names
                      MPI_Allgather( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                                     names, maxLen, MPI_CHAR, MPI_COMM_WORLD );
                  
                      // With that, I can sort-out who is executing the same binary as me
                      int binIdx = 0;
                      while( strcmp( argv[0], names + binIdx * maxLen ) != 0 ) {
                          binIdx++;
                      }
                      free( names );
                  
                      // Now, all processes with the same binIdx value are running the same binary
                      // I can split MPI_COMM_WORLD accordingly
                      MPI_Comm binComm;
                      MPI_Comm_split( MPI_COMM_WORLD, binIdx, wRank, &binComm );
                  
                      int bRank, bSize;
                      MPI_Comm_rank( binComm, &bRank );
                      MPI_Comm_size( binComm, &bSize );
                  
                      printf( "Hello from process WORLD %d/%d running %d/%d %s binary
                  ",
                              wRank, wSize, bRank, bSize, argv[0] );
                  
                      MPI_Comm_free( &binComm );
                  
                      MPI_Finalize();
                  
                      return 0;
                  }
                  

                  On my machine, I compiled and ran it as follows:

                  ~> mpicc mpmd.c
                  ~> cp a.out b.out
                  ~> mpirun -n 3 ./a.out : -n 2 ./b.out
                  Hello from process WORLD 0/5 running 0/3 ./a.out binary
                  Hello from process WORLD 1/5 running 1/3 ./a.out binary
                  Hello from process WORLD 4/5 running 1/2 ./b.out binary
                  Hello from process WORLD 2/5 running 2/3 ./a.out binary
                  Hello from process WORLD 3/5 running 0/2 ./b.out binary
                  

                  Ideally, this could be greatly simplified by using MPI_Comm_split_type() if a corresponding split type for sorting by binary existed. Unfortunately, no such MPI_COMM_TYPE_ is predefined in the MPI 3.1 standard. The only predefined one is MPI_COMM_TYPE_SHARED, which sorts out processes running on the same shared-memory compute node... Too bad! Maybe something to consider for the next version of the standard?
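
                  As an illustration (a sketch assuming a standard MPI 3.x implementation, not part of the original answer), this is what the existing MPI_COMM_TYPE_SHARED split looks like; it groups processes by shared-memory node, not by binary, so it does not solve the problem above:

                   #include <stdio.h>
                   #include <mpi.h>
                   
                   int main( int argc, char *argv[] ) {
                       MPI_Init( &argc, &argv );
                   
                       int wRank;
                       MPI_Comm_rank( MPI_COMM_WORLD, &wRank );
                   
                       // The only split type predefined by MPI 3.1: one communicator
                       // per shared-memory node, regardless of which binary each rank runs
                       MPI_Comm nodeComm;
                       MPI_Comm_split_type( MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, wRank,
                                            MPI_INFO_NULL, &nodeComm );
                   
                       int nRank, nSize;
                       MPI_Comm_rank( nodeComm, &nRank );
                       MPI_Comm_size( nodeComm, &nSize );
                       printf( "WORLD rank %d is rank %d/%d on its node\n", wRank, nRank, nSize );
                   
                       MPI_Comm_free( &nodeComm );
                       MPI_Finalize();
                       return 0;
                   }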

                  That concludes this article on getting the communicator size with OpenMPI MPMD; hopefully the answer above is helpful.
