Programming Taskbook



1100 training tasks on programming

©  M. E. Abramyan (Southern Federal University, Shenzhen MSU-BIT University), 1998–2024

 



Inter-communicators and process creation

The basic tools for creating inter-communicators and using them for point-to-point communication are defined in the MPI-1 standard. Therefore, five tasks of this group (MPI8Inter1–MPI8Inter4 and MPI8Inter9) can be solved using the MPICH 1.2.5 system. The other tasks are devoted to the new functions for inter-communicator creation (MPI8Inter5–MPI8Inter8), to collective communications via inter-communicators (MPI8Inter10–MPI8Inter14), and to the use of inter-communicators for process creation (MPI8Inter15–MPI8Inter22). All these features appeared in the MPI-2 standard, so you should use the MPICH2 1.3 system to solve these tasks.

You should use a copy of the communicator MPI_COMM_WORLD as the peer communicator (the third parameter of the MPI_Intercomm_create function). Use the MPI_Comm_dup function to create this copy.
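This recipe can be sketched as follows; the variable names half, peer, and inter are illustrative, and the example assumes an even number of processes split into even-rank and odd-rank groups whose leaders are the world-rank processes 0 and 1:

```c
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split MPI_COMM_WORLD into two groups (here: even and odd ranks). */
    MPI_Comm half;
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &half);

    /* The peer communicator is a copy of MPI_COMM_WORLD. */
    MPI_Comm peer;
    MPI_Comm_dup(MPI_COMM_WORLD, &peer);

    /* Local leader: rank 0 of the local group; remote leader: the rank of
       the other group's leader in the peer communicator (world ranks 1 and 0
       for the even and odd groups, respectively). The last-but-one argument
       is a tag that must match in both groups. */
    MPI_Comm inter;
    MPI_Intercomm_create(half, 0, peer, (rank % 2 == 0) ? 1 : 0, 0, &inter);

    /* ... point-to-point communication via inter ... */

    MPI_Comm_free(&inter);
    MPI_Comm_free(&peer);
    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}
```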

The parameters of the MPI_Comm_spawn function, which is used for process creation in the MPI8Inter15–MPI8Inter22 tasks, should be as follows: the first parameter should be the name of the executable file ptprj.exe; for the second parameter argv, it is enough to specify the NULL constant; the fourth parameter info should be the MPI_INFO_NULL constant; the last parameter array_of_errcodes should be the MPI_ERRCODES_IGNORE constant. If the task does not specify the source communicator for the process creation, then this communicator is assumed to be MPI_COMM_WORLD.

Instead of the string "ptprj.exe", you can use the function char* GetExename() that is implemented in the Programming Taskbook and returns the full name of the executable file.
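A minimal sketch of such a call with the parameter values listed above (the variable names children and parent are illustrative; this is a fragment, not a complete program):

```c
#include <mpi.h>
#include <stddef.h>

/* In the initial processes: spawn one copy of the executable. */
MPI_Comm children;
MPI_Comm_spawn(GetExename(),        /* or the string "ptprj.exe" */
               NULL,                /* argv: no additional arguments */
               1,                   /* number of processes to create */
               MPI_INFO_NULL,       /* info: no placement hints */
               0,                   /* root: whose parameters are used */
               MPI_COMM_WORLD,      /* source communicator */
               &children,           /* resulting inter-communicator */
               MPI_ERRCODES_IGNORE);

/* In a spawned process, the matching inter-communicator is obtained with
   MPI_Comm_get_parent; it returns MPI_COMM_NULL in the initial processes. */
MPI_Comm parent;
MPI_Comm_get_parent(&parent);
```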

Inter-communicator creation

MPI8Inter1°. The number of processes K is an even number. An integer X is given in each process. Using the MPI_Comm_group, MPI_Group_range_incl, and MPI_Comm_create functions, create two communicators: the first one contains the even-rank processes in the same order (0, 2, …, K − 2), the second one contains the odd-rank processes in the same order (1, 3, …, K − 1). Output the ranks R of the processes included in these communicators. Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. Using the MPI_Send and MPI_Recv functions for this inter-communicator, send the integer X from each process to the process with the same rank from the other group of the inter-communicator and output the received integers.

MPI8Inter2°. The number of processes K is an even number. An integer C and a real number X are given in each process. The numbers C are equal to 0 or 1, and the number of processes with C = 1 is equal to the number of processes with C = 0. The integer C is equal to 0 in the process of rank 0 and is equal to 1 in the process of rank K − 1. Using one call of the MPI_Comm_split function, create two communicators: the first one contains processes with C = 0 (in the same order) and the second one contains processes with C = 1 (in the inverse order). Output the ranks R of the processes included in these communicators (note that the first and the last processes of the MPI_COMM_WORLD communicator will receive the value R = 0). Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. Using the MPI_Send and MPI_Recv functions for this inter-communicator, send the real number X from each process to the process with the same rank from the other group of the inter-communicator and output the received numbers.
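The "same order / inverse order" requirement in tasks like this one can be expressed through the key parameter of MPI_Comm_split: processes with equal colors are ordered within their new communicator by ascending key. A sketch under the assumptions of MPI8Inter2, where C and rank are assumed to hold the given integer and the world rank (the name subcomm is illustrative):

```c
/* color = C selects the group; for C = 1 the key decreases as the world
   rank grows, so these processes end up in inverse order in their group. */
MPI_Comm subcomm;
int key = (C == 0) ? rank : -rank;
MPI_Comm_split(MPI_COMM_WORLD, C, key, &subcomm);
```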

MPI8Inter3°. The number of processes K is a multiple of 3. A real number X is given in the processes of rank 3N (N = 0, …, K/3 − 1), real numbers X and Y are given in the processes of rank 3N + 1, a real number Y is given in the processes of rank 3N + 2. Using the MPI_Comm_group, MPI_Group_range_incl, and MPI_Comm_create functions, create three communicators: the first one contains processes of rank 3N in the same order (0, 3, …, K − 3), the second one contains processes of rank 3N + 1 in the inverse order (K − 2, K − 5, …, 1), the third one contains processes of rank 3N + 2 in the same order (2, 5, …, K − 1). Output the ranks R of the processes included in these communicators. Then combine these communicators into two inter-communicators using the MPI_Intercomm_create function. The first inter-communicator contains the first and second groups of processes, the second one contains the second and third groups of processes. Using the MPI_Send and MPI_Recv functions for these inter-communicators, exchange the numbers X in the processes with the same rank in the first and second group and the numbers Y in the processes with the same rank in the second and third group. Output the received number in each process.

Note. The MPI_Intercomm_create function should be called once in the processes of the first and third groups and twice in the processes of the second group; the same number of calls is required for the MPI_Send and MPI_Recv functions.

MPI8Inter4°. The number of processes K is a multiple of 3. Three integers are given in each process. The first integer (named C) is in the range 0 to 2, each of the values 0, 1, 2 occurs K/3 times, and the processes 0, 1, 2 contain the values C = 0, 1, 2, respectively. Using one call of the MPI_Comm_split function, create three communicators: the first one contains processes with C = 0 (in the same order), the second one contains processes with C = 1 (in the same order), the third one contains processes with C = 2 (in the same order). Output the ranks R of the processes included in these communicators (note that the processes 0, 1, 2 of the MPI_COMM_WORLD communicator will receive the value R = 0). Then combine these communicators into three inter-communicators using two calls of the MPI_Intercomm_create function in each process. The first inter-communicator contains groups of processes with C equal to 0 and 1, the second one contains groups of processes with C equal to 1 and 2, the third one contains groups of processes with C equal to 0 and 2 (thus, the created inter-communicators will form a ring connecting all three previously created groups). Denoting the next two given integers in the first group as X and Y, in the second group as Y and Z, and in the third group as Z and X (in this order) and using two calls of the MPI_Send and MPI_Recv functions for these inter-communicators, exchange the numbers X in the processes with the same rank in the first and second group, the numbers Y in the processes with the same rank in the second and third group, and the numbers Z in the processes with the same rank in the first and third group. Output the received numbers in each process.

MPI8Inter5°. The number of processes K is a multiple of 4. An integer X is given in each process. Using the MPI_Comm_group, MPI_Group_range_incl, and MPI_Comm_create functions, create two communicators: the first one contains the first half of the processes (of rank 0, 1, …, K/2 − 1 in this order), the second one contains the second half of the processes (of rank K/2, K/2 + 1, …, K − 1 in this order). Output the ranks R1 of the processes included in these communicators. Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. Using the MPI_Comm_create function for this inter-communicator, create a new inter-communicator whose first group contains the even-rank processes of the first group of the initial inter-communicator (in the same order) and whose second group contains the odd-rank processes of the second group of the initial inter-communicator (in the inverse order). Thus, the first and second groups of the new inter-communicator will include the processes of the MPI_COMM_WORLD communicator with ranks 0, 2, …, K/2 − 2 and K − 1, K − 3, …, K/2 + 1 respectively. Output the ranks R2 of the processes included in the new inter-communicator. Using the MPI_Send and MPI_Recv functions for the new inter-communicator, send the integer X from each process to the process with the same rank from the other group of the inter-communicator and output the received numbers.

MPI8Inter6°. The number of processes K is a multiple of 4. A real number X is given in each process. Using the MPI_Comm_group, MPI_Group_range_incl, and MPI_Comm_create functions, create two communicators: the first one contains the first half of the processes (of rank 0, 1, …, K/2 − 1 in this order), the second one contains the second half of the processes (of rank K/2, K/2 + 1, …, K − 1 in this order). Output the ranks R1 of the processes included in these communicators. Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. Using one call of the MPI_Comm_split function for this inter-communicator, create two new inter-communicators: the first one contains the even-rank processes of the initial inter-communicator, the second one contains the odd-rank processes of the initial inter-communicator; the processes of the second group of each new inter-communicator should be in the inverse order. Thus, the first new inter-communicator will include groups of the processes of the MPI_COMM_WORLD communicator with ranks 0, 2, …, K/2 − 2 and K − 2, K − 4, …, K/2, and the second new inter-communicator will include groups of the processes of the MPI_COMM_WORLD communicator with ranks 1, 3, …, K/2 − 1 and K − 1, K − 3, …, K/2 + 1. Output the ranks R2 of the processes included in the new inter-communicators. Using the MPI_Send and MPI_Recv functions for the new inter-communicators, send the number X from each process to the process with the same rank from the other group of this inter-communicator and output the received numbers.

MPI8Inter7°. The number of processes K is an even number. An integer C is given in each process. The numbers C are equal to 0 or 1. A single value of C = 1 is given in the first half of the processes, the number of values of C = 1 is greater than one in the second half of the processes and, in addition, there is at least one value C = 0 in the second half of the processes. Using the MPI_Comm_split function, create two communicators: the first one contains the first half of the processes (of rank 0, 1, …, K/2 − 1 in this order), the second one contains the second half of the processes (of rank K/2, K/2 + 1, …, K − 1 in this order). Output the ranks R1 of the processes included in these communicators. Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. Using the MPI_Comm_split function for this inter-communicator, create a new inter-communicator with groups which contain processes from the corresponding groups of the initial inter-communicator with the values C = 1 (in the inverse order). Thus, the first group of the new inter-communicator will include a single process, and the number of processes in the second group will be in the range 2 to K/2 − 1. Output the ranks R2 of the processes that are included in the second group of the new inter-communicator (this group contains more than one process). Input an array Y of K2 integers in the single process of the first group of the new inter-communicator, where K2 is the number of the processes in the second group. Input an integer X in each process of the second group of the new inter-communicator. Using the required number of calls of the MPI_Send and MPI_Recv functions for all the processes of the new inter-communicator, send all the integers X to the single process of the first group and send the element of the array Y with the index R2 to the process R2 of the second group (R2 = 0, 1, …, K2 − 1). 
Output all received numbers (the integers X should be output in ascending order of ranks of sending processes).

Note. In MPICH2 version 1.3, the MPI_Comm_split function call for an inter-communicator fails if some values of its color parameter are equal to MPI_UNDEFINED; thus, you should use only non-negative values of color in this situation. In addition, the program can behave incorrectly if the MPI_Comm_split function creates empty groups for some inter-communicators (this is possible if some color values are specified for processes of one group of the initial inter-communicator and differ from the color values of all processes of the other group).

MPI8Inter8°. An integer C is given in each process. The integer C is in the range 0 to 2, all the values of C (0, 1, 2) are given for the even-rank processes and for the odd-rank processes. Using one call of the MPI_Comm_split function, create two communicators: the first one contains the even-rank processes (in ascending order of ranks), the second one contains the odd-rank processes (in ascending order of ranks). Output the ranks R1 of the processes included in these communicators. Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. Using one call of the MPI_Comm_split function for this inter-communicator, create three new inter-communicators with groups which contain processes from the corresponding groups of the initial inter-communicator with the same values of C (in the same order). Thus, for instance, the first group of the first new inter-communicator will include the even-rank processes with C = 0 and the second group of the third new inter-communicator will include the odd-rank processes with C = 2. Output the ranks R2 of the processes included in the new inter-communicators. Input an integer X in the processes of the first group of each new inter-communicator, input an integer Y in the processes of the second group of each new inter-communicator. Using the required number of calls of the MPI_Send and MPI_Recv functions for all the processes of all the new inter-communicators, send all the integers X to each process of the second group of the same inter-communicator and send all the integers Y to each process of the first group of the same inter-communicator. Output all received numbers in ascending order of ranks of sending processes.

MPI8Inter9°. The number of processes K is an even number. An integer C is given in each process. The integer C is in the range 0 to 2, the first value C = 1 is given in the process 0, the first value C = 2 is given in the process K/2. Using the MPI_Comm_split function, create two communicators: the first one contains processes with C = 1 (in the same order), the second one contains processes with C = 2 (in the same order). Output the ranks R of the processes included in these communicators (output the integer −1 if the process is not included into the created communicators). Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. A group containing processes with C = 1 is considered to be the first group of the created inter-communicator and the group of processes with C = 2 is considered to be its second group. Input an integer X in the processes of the first group, input an integer Y in the processes of the second group. Using the required number of calls of the MPI_Send and MPI_Recv functions for all the processes of the inter-communicator, send all the integers X to each process of the second group and send all the integers Y to each process of the first group. Output all received numbers in ascending order of ranks of sending processes.

Collective communications for inter-communicators

MPI8Inter10°. The number of processes K is an even number. An integer C is given in each process. The integer C is in the range 0 to 2, the first value C = 1 is given in the process 0, the first value C = 2 is given in the process K/2. Using the MPI_Comm_split function, create two communicators: the first one contains processes with C = 1 (in the same order), the second one contains processes with C = 2 (in the same order). Output the ranks R of the processes included in these communicators (output the integer −1 if the process is not included into the created communicators). Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. A group containing processes with C = 1 is considered to be the first group of the created inter-communicator and the group of processes with C = 2 is considered to be its second group. Input integers R1 and R2 in each process of the inter-communicator. The values of the numbers R1 coincide in all processes and indicate the rank of the selected process of the first group; the values of the numbers R2 also coincide in all processes and indicate the rank of the selected process of the second group. A sequence of three integers X is given in the selected process of the first group, a sequence of three integers Y is given in the selected process of the second group. Using two calls of the MPI_Bcast collective function in each process of the inter-communicator, send the numbers X to all the processes of the second group, send the numbers Y to all the processes of the first group, and output the received numbers.
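Collective operations over an inter-communicator use the root argument differently from intra-communicator ones: in the sending group, the root process passes MPI_ROOT and all its other processes pass MPI_PROC_NULL, while every process of the receiving group passes the sender's rank in the remote group. A sketch of one of the two broadcasts in MPI8Inter10, sending X from process R1 of the first group (the names inter, in_first_group, and my_rank are illustrative):

```c
int X[3];
if (in_first_group)
    /* sending side: only the process of rank R1 supplies the data */
    MPI_Bcast(X, 3, MPI_INT,
              (my_rank == R1) ? MPI_ROOT : MPI_PROC_NULL, inter);
else
    /* receiving side: root is the sender's rank in the remote group */
    MPI_Bcast(X, 3, MPI_INT, R1, inter);
```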

MPI8Inter11°. The number of processes K is an even number. An integer C is given in each process. The integer C is in the range 0 to 2, the first value C = 1 is given in the process 0, the first value C = 2 is given in the process K/2. Using the MPI_Comm_split function, create two communicators: the first one contains processes with C = 1 (in the same order), the second one contains processes with C = 2 (in the same order). Output the ranks R of the processes included in these communicators (output the integer −1 if the process is not included into the created communicators). Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. A group containing processes with C = 1 is considered to be the first group of the created inter-communicator and the group of processes with C = 2 is considered to be its second group. Input an integer R1 in each process of the inter-communicator. The values of the number R1 coincide in all processes and indicate the rank of the selected process of the first group. An array X of K2 integers is given in the selected process of the first group, where K2 is the number of processes in the second group. Using one call of the MPI_Scatter collective function in each process of the inter-communicator, send the element X[R2] to the process R2 of the second group (R2 = 0, …, K2 − 1) and output the received numbers.

MPI8Inter12°. The number of processes K is an even number. An integer C is given in each process. The integer C is in the range 0 to 2, the first value C = 1 is given in the process 0, the first value C = 2 is given in the process K/2. Using the MPI_Comm_split function, create two communicators: the first one contains processes with C = 1 (in the same order), the second one contains processes with C = 2 (in the same order). Output the ranks R of the processes included in these communicators (output the integer −1 if the process is not included into the created communicators). Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. A group containing processes with C = 1 is considered to be the first group of the created inter-communicator and the group of processes with C = 2 is considered to be its second group. Input an integer R2 in each process of the inter-communicator. The values of the number R2 coincide in all processes and indicate the rank of the selected process of the second group. An integer X is given in all the processes of the first group. Using one call of the MPI_Gather collective function in each process of the inter-communicator, send all the integers X to the selected process of the second group. Output the received numbers in this process in ascending order of ranks of sending processes.

MPI8Inter13°. The number of processes K is an even number. An integer C is given in each process. The integer C is in the range 0 to 2, the first value C = 1 is given in the process 0, the first value C = 2 is given in the process K/2. Using the MPI_Comm_split function, create two communicators: the first one contains processes with C = 1 (in the same order), the second one contains processes with C = 2 (in the same order). Output the ranks R of the processes included in these communicators (output the integer −1 if the process is not included into the created communicators). Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. A group containing processes with C = 1 is considered to be the first group of the created inter-communicator and the group of processes with C = 2 is considered to be its second group. An integer X is given in each process of the first group, an integer Y is given in each process of the second group. Using one call of the MPI_Allreduce collective function in each process of the inter-communicator, receive the number Ymin in each process of the first group and the number Xmax in each process of the second group, where the number Ymin is the minimal value of the given integers Y and the number Xmax is the maximal value of the given integers X. Output the received numbers.

MPI8Inter14°. The number of processes K is an even number. An integer C is given in each process. The integer C is in the range 0 to 2, the first value C = 1 is given in the process 0, the first value C = 2 is given in the process K − 1. Using the MPI_Comm_split function, create two communicators: the first one contains processes with C = 1 (in the same order), the second one contains processes with C = 2 (in the inverse order). Output the ranks R of the processes included in these communicators (output the integer −1 if the process is not included into the created communicators). Then combine these communicators into an inter-communicator using the MPI_Intercomm_create function. A group containing processes with C = 1 is considered to be the first group of the created inter-communicator and the group of processes with C = 2 is considered to be its second group. An array X of K2 integers is given in each process of the first group, where K2 is the number of processes in the second group; an array Y of K1 integers is given in each process of the second group, where K1 is the number of processes in the first group. Using one call of the MPI_Alltoall collective function in each process of the inter-communicator, send the element Y[R1] of each array Y to the process R1 of the first group (R1 = 0, …, K1 − 1) and send the element X[R2] of each array X to the process R2 of the second group (R2 = 0, …, K2 − 1). Output the received numbers in ascending order of ranks of sending processes.

Process creation

MPI8Inter15°. A real number is given in each process. Using the MPI_Comm_spawn function with the first parameter "ptprj.exe", create one new process. Using the MPI_Reduce collective function, send the sum of the given numbers to the new process. Output the received sum in the debug section using the Show function in the new process. Then, using the MPI_Bcast collective function, send this sum to the initial processes and output it in each process.

MPI8Inter16°. An array A of K real numbers is given in each process, where K is the number of processes. Using one call of the MPI_Comm_spawn function with the first parameter "ptprj.exe", create K new processes. Using the MPI_Reduce_scatter_block collective function, send the maximal value of the elements A[R] of the given arrays to the new process of rank R (R = 0, …, K − 1). Output the received maximal value in the debug section using the Show function in each new process. Then, using the MPI_Send and MPI_Recv functions, send the maximal value from the new process of rank R (R = 0, …, K − 1) to the initial process of the same rank and output the received numbers in the initial processes.

MPI8Inter17°. The number of processes K is an even number. Arrays of K/2 real numbers are given in the processes of rank 0 and 1. Using one call of the MPI_Comm_spawn function with the first parameter "ptprj.exe", create two new processes. Using one call of the MPI_Comm_split function for the inter-communicator connected with the new processes, create two new inter-communicators: the first one contains the group of even-rank initial processes (0, …, K − 2) and the new process of rank 0 as the second group, the second one contains the group of odd-rank initial processes (1, …, K − 1) and the new process of rank 1 as the second group. Using the MPI_Send function in the initial processes and the MPI_Recv function in the new processes, send all the given numbers from the first process of the first group of each inter-communicator to the single process of its second group. Output the received numbers in the debug section using the Show function in the new processes. Then, using the MPI_Scatter collective function for inter-communicators, send one number from the new process to each process of the first group of the corresponding inter-communicator (in ascending order of ranks of receiving processes) and output the received numbers.

MPI8Inter18°. The number of processes K is an even number. Arrays A of K/2 real numbers are given in each process. Using one call of the MPI_Comm_spawn function with the first parameter "ptprj.exe", create K new processes. Using one call of the MPI_Comm_split function for the inter-communicator connected with the new processes, create two new inter-communicators: the first one contains the group of even-rank initial processes (0, …, K − 2) and the even-rank new processes as the second group, the second one contains the group of odd-rank initial processes (1, …, K − 1) and the odd-rank new processes as the second group. Perform the following actions for each created inter-communicator: (1) find the minimal value (for the first inter-communicator) or the maximal value (for the second one) of the elements A[R] (R = 0, …, K/2 − 1) of all the arrays A given in the first group of this inter-communicator; (2) send the found value to the new process of rank R in the second group of the corresponding inter-communicator. For instance, the minimal value of the first elements of the arrays given in the even-rank initial processes should be sent to the first of the new processes, and the maximal value of the first elements of the arrays given in the odd-rank initial processes should be sent to the second of the new processes (since this process has rank 0 in the corresponding inter-communicator). To do this, use the MPI_Reduce_scatter_block collective function. Output the received values in the debug section using the Show function in each new process. Then, using the MPI_Reduce collective function, find the minimum of the values received in the second group of the first inter-communicator, send the found minimum to the first process of the first group of this inter-communicator (that is, to the process 0 in the MPI_COMM_WORLD communicator), and output the received minimum.
Also, find the maximum of the values received in the second group of the second inter-communicator, send the found maximum to the first process of the first group of this inter-communicator (that is, to the process 1 in the MPI_COMM_WORLD communicator), and output the received maximum.

MPI8Inter19°. An array A of 2K integers is given in the master process, where K is the number of processes. Using one call of the MPI_Comm_spawn function with the first parameter "ptprj.exe", create K new processes. Using the MPI_Intercomm_merge function for the inter-communicator connected with the new processes, create a new intra-communicator that includes both the initial and the new processes. The order of the processes in the new intra-communicator should be as follows: the initial processes, then the new ones (to specify this order, use the appropriate value of the parameter high of the MPI_Intercomm_merge function). Using the MPI_Scatter collective function for the new intra-communicator, send the element A[R] of the array A to the process of rank R in this intra-communicator (R = 0, …, 2K − 1). Output the numbers received in the initial processes in the section of results, output the numbers received in the new processes in the debug section using the Show function. Then, using the MPI_Reduce collective function in this intra-communicator, find and output the sum of all numbers in the process of rank 1 in this intra-communicator.
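The high parameter of MPI_Intercomm_merge determines which local group comes first in the merged intra-communicator: the group that passes high = 0 precedes the group that passes high = 1. A sketch of the ordering required in MPI8Inter19 (the variable children is assumed to be the inter-communicator returned by MPI_Comm_spawn):

```c
MPI_Comm parent, merged;
MPI_Comm_get_parent(&parent);
if (parent == MPI_COMM_NULL)
    /* an initial process: its group should come first */
    MPI_Intercomm_merge(children, 0, &merged);
else
    /* a spawned process: its group should come last */
    MPI_Intercomm_merge(parent, 1, &merged);
```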

MPI8Inter20°. The number of processes K is not a multiple of 4. An integer A is given in each process. Using one call of the MPI_Comm_spawn function with the first parameter "ptprj.exe", create as many new processes (1, 2, or 3) as needed to make the total number of processes K0 in the application a multiple of 4. Define an integer A equal to −R − 1 in each new process, where R is the process rank. Using the MPI_Intercomm_merge function for the inter-communicator connected with the new processes, create a new intra-communicator that includes both the initial and the new processes. The order of the processes in the new intra-communicator should be as follows: the initial processes, then the new ones (to specify this order, use the appropriate value of the parameter high of the MPI_Intercomm_merge function). Using the MPI_Cart_create function for the new intra-communicator, define a Cartesian topology for all processes as a two-dimensional (K0/4 × 4) grid, which is periodic in the second dimension (ranks of processes should not be reordered). Find the process coordinates in the created topology using the MPI_Cart_coords function. Output the coordinates found in the initial processes in the section of results, output the coordinates found in the new processes in the debug section with the "X = " and "Y = " comments using the Show function. Using the MPI_Cart_shift and MPI_Sendrecv_replace functions, perform a cyclic shift of the integers A given in all processes of each column of the grid by step −1 (that is, the number A should be sent from each process in the column, with the exception of the first process, to the previous process in the same column and from the first process in the column to the last process in the same column). Output the integers A received in the initial processes in the section of results, output the integers A received in the new processes in the debug section with the "A = " comment using the Show function.

MPI8Inter21°. A real number is given in each process; this number is denoted by the letter A in the master process and by the letter B in the slave processes. Using two calls of the MPI_Comm_spawn function with the first parameter "ptprj.exe", create two groups of new processes as follows: the first group (named the server group) should include one process, the second group (named the client group) should include K − 1 processes, where K is the number of initial processes. Send the number A from the master process to the single new process of the server group, send the number B from each slave process to the corresponding new process of the client group (in ascending order of the process ranks). Output the number received in each new process in the debug section using the Show function. Using the MPI_Open_port, MPI_Publish_name, and MPI_Comm_accept functions on the server side and the MPI_Lookup_name and MPI_Comm_connect functions on the client side, establish a connection between the two new groups of processes by means of a new inter-communicator. Using the MPI_Send and MPI_Recv functions for this inter-communicator, receive the number A in each process of the client group from the process of the server group. Find the sum of the received number A and the number B received earlier from the initial slave process, and output the sum A + B in the debug section using the Show function in each process of the client group. Send this sum to the corresponding initial slave process and output the received sum in this process (the sum found in the process of rank R of the client group should be sent to the initial process of rank R + 1).

Note. The MPI_Lookup_name function call in the client processes should be performed after the MPI_Publish_name function call in the server process. You can, for example, use the MPI_Barrier function for the initial processes and the server process: in the server process, the MPI_Barrier function should be called after the call of the MPI_Publish_name function, whereas in the initial processes, the MPI_Barrier function should be called before the call of the MPI_Comm_spawn function that creates the client group.
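The server/client handshake used in MPI8Inter21 and MPI8Inter22 can be sketched as follows; the service name "ptsvc" and the communicator names are illustrative choices, not part of the tasks:

```c
char port[MPI_MAX_PORT_NAME];

/* Server side: open a port, publish it under a service name,
   and wait for the client group to connect. */
MPI_Comm clients;
MPI_Open_port(MPI_INFO_NULL, port);
MPI_Publish_name("ptsvc", MPI_INFO_NULL, port);
MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &clients);
/* ... communication via the inter-communicator clients ... */
MPI_Unpublish_name("ptsvc", MPI_INFO_NULL, port);
MPI_Close_port(port);

/* Client side (only after the server has called MPI_Publish_name): */
MPI_Comm server;
MPI_Lookup_name("ptsvc", MPI_INFO_NULL, port);
MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);
```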

MPI8Inter22°. An integer N is given in each process. The integer N can take three values: 0, 1 and K (K > 1). There is exactly one process with the value N = 1 and exactly K processes with the value N = K. In addition, an integer A is given in the processes with the non-zero integer N. Using the MPI_Comm_split function, split the initial communicator MPI_COMM_WORLD into two new communicators: the first new communicator should include the process with N = 1, the second one should include the processes with N = K. Using one call of the MPI_Comm_spawn function with the first parameter "ptprj.exe" for each new communicator, create two groups of new processes. The number of processes in each new group must coincide with the number of processes in the corresponding communicator (that is, the first group, named the server group, should include one process and the second one, named the client group, should include K processes). Send the integer A from each initial process to the new process; the rank of the receiving process should coincide with the rank of the sending process in the new communicator. Output the received integers in the debug section using the Show function. Using the MPI_Open_port, MPI_Publish_name, and MPI_Comm_accept functions on the server side and the MPI_Lookup_name and MPI_Comm_connect functions on the client side, establish a connection between the two new groups of processes by means of a new inter-communicator. Using the MPI_Gather collective function for this inter-communicator, send all the integers A from the processes of the client group to the single process of the server group and output the received numbers in the debug section using several calls of the Show function in the process of the server group. Then, using the MPI_Send and MPI_Recv functions, send all these numbers from the process of the server group to the initial process that has created the server group. Output the received numbers in this initial process.

Note. The MPI_Lookup_name function call in the client processes should be performed after the MPI_Publish_name function call in the server process. You can, for example, send the number A to the process of the server group using the MPI_Ssend function and call the MPI_Barrier function for the MPI_COMM_WORLD communicator after the call of the MPI_Ssend function (on the side of the receiving process, you should receive the number A only after the call of the MPI_Publish_name function). In the other processes of the MPI_COMM_WORLD communicator, you should call the MPI_Barrier function and then send the numbers A to the processes of the client group. Thus, any of the processes of the client group will receive the number A only when the process of the server group has already called the MPI_Publish_name function.



 


Designed by
M. E. Abramyan and V. N. Braguilevsky

Last revised:
01.01.2024