Friday, June 1, 2012

Giving Away a Cisco Live Full Conference Pass

Back in May, we attended a local Cisco event here in Toronto. Besides talking to Cisco engineers about their datacenter products and networking technologies, we also met with some of the technical people behind the UCS servers (more on Cisco UCS Blade Servers & Open Grid Scheduler/Grid Engine in a later blog entry).

We also received a Cisco Live Conference Pass, which grants access to everything at the conference (i.e., the full experience) in San Diego, CA, on June 10-14, 2012. We are giving it to the first person who sends us the right answer to the following question:

When run with 20 MPI processes, what will the value of recvbuf[i][i] be for i=0..19 in MPI_COMM_WORLD rank 17 when this application calls MPI_Finalize?


#include <mpi.h>


int sendbuf[100];
int recvbuf[20][100];
MPI_Request reqs[40];


MPI_Request send_it(int dest, int len)
{
   int i;
   MPI_Request req;
   for (i = 0; i < len; ++i) {
       sendbuf[i] = dest;
   }
   MPI_Isend(sendbuf, len, MPI_INT, dest, 0, MPI_COMM_WORLD, &req);
   return req;
}


MPI_Request recv_it(int src, int len)
{
   MPI_Request req;
   MPI_Irecv(recvbuf[src], len, MPI_INT, src, 0, MPI_COMM_WORLD, &req);
   return req;
}


int main(int argc, char *argv[])
{
   int i, j, rank, size;
   MPI_Init(NULL, NULL);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &size);


   /* Bound the number of procs involved, just so we can be lazy and
      use a fixed-length sendbuf/recvbuf. */
   if (rank < 20) {
       for (i = j = 0; i < size; ++i) {
           reqs[j++] = send_it(i, 5);
           reqs[j++] = recv_it(i, 5);
       }
       MPI_Waitall(j, reqs, MPI_STATUSES_IGNORE);
   }


   MPI_Finalize();
   return 0;
}
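
A hint for the empirically minded: you can compile the program with mpicc and launch it with something like "mpirun -np 20 ./quiz", but keep in mind that a run only shows what one MPI implementation happens to do on one machine, which may or may not match what the MPI standard actually guarantees. If you do want to instrument it, a minimal (hypothetical, not part of the quiz) addition is to print the diagonal at rank 17 right before MPI_Finalize():

   /* Hypothetical instrumentation, not part of the original quiz:
      dump the diagonal of recvbuf at rank 17 just before calling
      MPI_Finalize().  Requires #include <stdio.h> at the top. */
   if (rank == 17) {
       for (i = 0; i < 20; ++i) {
           printf("recvbuf[%d][%d] = %d\n", i, i, recvbuf[i][i]);
       }
   }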


The code and question above were written by Mr. Open MPI himself, Jeff Squyres, who worked with us as far back as the pre-Oracle Grid Engine days on PLPA, and who suggested that we migrate to the hwloc topology library. (Side note: when the Open Grid Scheduler project became the maintainer of the open source Grid Engine code base in 2011, Grid Engine Multi-Core Processor Binding with hwloc was one of the first major features we added to Open Grid Scheduler/Grid Engine to support discovery of newer system topologies.)
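
For the curious, here is a minimal sketch of what hwloc-based topology discovery looks like. This is our own illustration of the library's basic API (compile with -lhwloc), not actual Grid Engine source code:

#include <stdio.h>
#include <hwloc.h>

int main(void)
{
   hwloc_topology_t topology;

   /* Allocate a topology context and probe the current machine. */
   hwloc_topology_init(&topology);
   hwloc_topology_load(topology);

   /* Count the sockets, cores, and hardware threads (PUs) found. */
   printf("%d socket(s), %d core(s), %d hardware thread(s)\n",
          hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_SOCKET),
          hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE),
          hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU));

   hwloc_topology_destroy(topology);
   return 0;
}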

So send us the answer: the first person to answer the question correctly gets the pass to attend the conference!