Week 1: Generate graphs with varying topologies, edge densities, and sizes, and
reorganize the graph structure to improve locality and storage efficiency through graph
sharding and compression.
Week 2: Implement graph and scheduler utilities and an MPI pipeline that supports
all graph representations, including all-to-all communication.
Week 3:
Week 3.1:
Simulate realistic runtime communication costs. (Jason)
Implement a ring interconnect topology. (Jason)
Build an OpenMP pipeline that supports all graph representations. (Rui)
Week 3.2:
Implement experiment pipelines and supporting utilities to record results and plot
communication/computation costs and cache behavior, broken down by graph type and
representation. (Jason)
Implement coarse-grained, fine-grained, and lock-free data structures in OpenMP. (Rui)
Week 4:
Week 4.1: Debug and continue optimizing the current OpenMP and MPI
implementations; benchmark their scalability on PSC machines. (Both)
Week 4.2: Summarize and analyze the final results. (Both)