Welcome! I’m a researcher at Inria Paris in the ERC DeepSea / Gallium group. Before that, I did my PhD with John Reppy at the University of Chicago, and a postdoc with Umut Acar at the Max Planck Institute for Software Systems in Kaiserslautern, Germany.

Contact me by email at me@mike-rainey.site.

This list is organized by research topic. A number of papers appear under multiple topics, as appropriate. For a list without duplicates, see the list of references at the bottom of this page.

Design and implementation of algorithms to map computations generated by parallel programs onto multicore machines: (Acar, Charguéraud, and Rainey 2015; Acar, Charguéraud, and Rainey 2013; Bergstrom et al. 2010; Bergstrom et al. 2012; Fluet et al. 2010; Fluet et al. 2008; Rainey 2010; Fluet, Rainey, and Reppy 2008)

Making parallel programs more robust in the face of parallel-specific overheads: (Acar, Charguéraud, and Rainey 2016; Acar, Charguéraud, and Rainey 2015; Acar, Charguéraud, and Rainey 2011; Bergstrom et al. 2010; Bergstrom et al. 2012; Rainey 2010)

Programming languages to raise the level of abstraction of parallel programs: (Fluet et al. 2007; Acar, Charguéraud, and Rainey 2012; Acar et al. 2016)

Work-efficient algorithm for fast parallel depth-first search of directed graphs: (Acar, Charguéraud, and Rainey 2015)

Compiler optimization to control the layout of parallel-friendly data structures: (Bergstrom et al. 2013)

Efficient algorithms and data structures that are amenable to parallel programming: (Charguéraud and Rainey 2017; Acar, Charguéraud, and Rainey 2015; Acar, Charguéraud, and Rainey 2014; Wise et al. 2005)

Concurrent data structures: (Acar, Ben-David, and Rainey 2017)

Engineering the SML/NJ compiler to handle advanced features of foreign-function calls. (Blume, Rainey, and Reppy 2008)

A technique to help understand the causes of poor speedups: (Acar, Charguéraud, and Rainey 2017)

Principles and Practice of Parallel Programming, February 2013 (Acar, Charguéraud, and Rainey 2013)

slides


Language Abstractions for Multicore Environments, July 2013 (Acar, Charguéraud, and Rainey 2012)

slides


MPI-SWS weekly seminar, June 2011

slides


This project features a C++ implementation of the fast DFS-like graph-traversal algorithm from our SC’15 paper (Acar, Charguéraud, and Rainey 2015).

This project features a C++ template library which implements ordered, in-memory containers that are based on a new B-tree-like data structure.

PASL is a C++ library that provides algorithms for mapping computations generated by programs with implicit threading to multicore machines.

Manticore is a parallel programming language aimed at general-purpose applications that run on multicore processors.

I worked on the back end of the compiler. My main projects were code generation for the x86-64 architecture and support for foreign-function calls.

- FHPC 2018. Program-committee co-chair
- FHPC 2016. Program-committee member
- ICFP 2015. Program-committee member
- ECOOP 2014. Artifact-evaluation-committee member
- FHPC 2013. Program-committee member
- External reviewer for various other venues (partial list)

Get the BibTeX file used to generate these references.

Acar, Umut A., Arthur Charguéraud, and Mike Rainey. 2017. “Parallel Work Inflation, Memory Effects, and Their Empirical Analysis.” *CoRR* abs/1709.03767. http://arxiv.org/abs/1709.03767.

Acar, Umut A., Naama Ben-David, and Mike Rainey. 2017. “Contention in Structured Concurrency: Provably Efficient Dynamic Nonzero Indicators for Nested Parallel Computation.” ACM. http://gallium.inria.fr/~rainey/dynsnzi.pdf.

Acar, Umut A., Arthur Charguéraud, and Mike Rainey. 2011. “Oracle Scheduling: Controlling Granularity in Implicitly Parallel Languages.” In *Proceedings of the 2011 ACM International Conference on Object Oriented Programming Systems Languages and Applications*, 46:499–518. 10. ACM. http://gallium.inria.fr/~rainey/oracle_scheduling.pdf.

———. 2012. “Efficient Primitives for Creating and Scheduling Parallel Computations.” In *Declarative Aspects of Multicore Programming*. http://gallium.inria.fr/~rainey/damp2012_primitives.pdf.

———. 2013. “Scheduling Parallel Programs by Work Stealing with Private Deques.” In *18th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming*, 48:219–28. 8. ACM. http://gallium.inria.fr/~rainey/full.pdf.

———. 2014. “Theory and Practice of Chunked Sequences.” In *The 22nd Annual European Symposium on Algorithms*, 25–36. Springer. http://gallium.inria.fr/~rainey/chunked_seq.pdf.

———. 2015. “A Work-Efficient Algorithm for Parallel Unordered Depth-First Search.” In *Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis*, 67:1–67:12. ACM. pdfs_sc15.pdf.

———. 2016. “Oracle-Guided Scheduling for Controlling Granularity in Implicitly Parallel Languages.” *Journal of Functional Programming*. Cambridge University Press. http://gallium.inria.fr/~rainey/jfp-oracle-guided.pdf.

Acar, Umut A., Arthur Charguéraud, Mike Rainey, and Filip Sieczkowski. 2016. “Dag-Calculus: A Calculus for Parallel Computation.” In *The 21st ACM SIGPLAN International Conference on Functional Programming*. ACM. http://gallium.inria.fr/~rainey/dag-calculus.pdf.

Bergstrom, Lars, Matthew Fluet, Mike Rainey, John Reppy, Stephen Rosen, and Adam Shaw. 2013. “Data-Only Flattening for Nested Data Parallelism.” In *18th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming*, 48:81–92. 8. ACM. http://gallium.inria.fr/~rainey/ppopp13-flat.pdf.

Bergstrom, Lars, Matthew Fluet, Mike Rainey, John Reppy, and Adam Shaw. 2012. “Lazy Tree Splitting.” *Journal of Functional Programming* 22 (4-5). Cambridge University Press: 382–438. http://gallium.inria.fr/~rainey/jfp-lts-submitted.pdf.

Bergstrom, Lars, Mike Rainey, John Reppy, Adam Shaw, and Matthew Fluet. 2010. “Lazy Tree Splitting.” In *The 15th ACM SIGPLAN International Conference on Functional Programming*, 45:93–104. 9. ACM. http://gallium.inria.fr/~rainey/icfp10-lts.pdf.

Blume, Matthias, Michael Rainey, and John Reppy. 2008. “Calling Variadic Functions from a Strongly-Typed Language.” In *Proceedings of the 2008 ACM SIGPLAN Workshop on ML*, 47–58. ACM. http://gallium.inria.fr/~rainey/ml-varargs.pdf.

Charguéraud, Arthur, and Mike Rainey. 2017. “Efficient Representations for Large Dynamic Sequences in ML.” ML Family Workshop. https://hal.inria.fr/hal-01669407.

Fluet, Matthew, Mike Rainey, and John Reppy. 2008. “A Scheduling Framework for General-Purpose Parallel Languages.” In *The 13th ACM SIGPLAN International Conference on Functional Programming*, 43:241–52. 9. ACM. http://gallium.inria.fr/~rainey/icfp08-sched.pdf.

Fluet, Matthew, Mike Rainey, John Reppy, and Adam Shaw. 2008. “Implicitly-Threaded Parallelism in Manticore.” In *The 13th ACM SIGPLAN International Conference on Functional Programming*, 43:119–30. 9. ACM. http://gallium.inria.fr/~rainey/icfp08-implicit.pdf.

———. 2010. “Implicitly Threaded Parallelism in Manticore.” *Journal of Functional Programming* 20 (5-6). Cambridge University Press: 537–76.

Fluet, Matthew, Mike Rainey, John Reppy, Adam Shaw, and Yingqi Xiao. 2007. “Manticore: A Heterogeneous Parallel Language.” In *Proceedings of the 2007 Workshop on Declarative Aspects of Multicore Programming*, 37–44. ACM.

Rainey, Mike. 2010. “Effective Scheduling Techniques for High-Level Parallel Programming Languages.” PhD thesis, University of Chicago. http://gallium.inria.fr/~rainey/rainey-phd.pdf.

Wise, David S., Craig Citro, Joshua Hursey, Fang Liu, and Michael Rainey. 2005. “A Paradigm for Parallel Matrix Algorithms: Scalable Cholesky.” In *Euro-Par 2005 – Parallel Processing*. Springer. http://dx.doi.org/10.1007/11549468_76.