Greetings, I’m a researcher at Inria Paris in the ERC Deepsea / Gallium group. Before that, I did my PhD with John Reppy at the University of Chicago and a postdoc with Umut Acar at the Max Planck Institute for Software Systems (MPI-SWS) in Kaiserslautern, Germany.
Contact me by email at .
This list is organized by research topic. A number of papers appear under multiple topics, as appropriate. For a list without duplicates, see the list of references at the bottom of this page.
Design and implementation of algorithms to map computations generated by parallel programs onto multicore machines: (Acar, Charguéraud, and Rainey 2015; Acar, Charguéraud, and Rainey 2013; Bergstrom et al. 2010; Bergstrom et al. 2012; Rainey 2010; Fluet, Rainey, and Reppy 2008)
Making parallel programs more robust in the face of parallel-specific overheads: (Acar, Charguéraud, and Rainey 2016; Acar, Charguéraud, and Rainey 2015; Acar, Charguéraud, and Rainey 2011; Bergstrom et al. 2010; Bergstrom et al. 2012; Rainey 2010)
Programming languages to raise the level of abstraction of parallel programs: (Fluet et al. 2007; Acar, Charguéraud, and Rainey 2012)
Work-efficient algorithm for fast parallel depth-first search of directed graphs: (Acar, Charguéraud, and Rainey 2015)
Compiler optimization to control the layout of parallel-friendly data structures: (Bergstrom et al. 2013)
Efficient algorithms and data structures that are amenable to parallel programming: (Acar, Charguéraud, and Rainey 2015; Acar, Charguéraud, and Rainey 2014; Wise et al. 2005)
Engineering the SML/NJ compiler to handle advanced features of foreign-function calls: (Blume, Rainey, and Reppy 2008)
A Work-Efficient Algorithm for Parallel Unordered Depth-First Search (Acar, Charguéraud, and Rainey 2015)
Supercomputing, November 2016
Scheduling parallel programs by work stealing with private deques (Acar, Charguéraud, and Rainey 2013)
Principles and Practice of Parallel Programming, February 2013
Higher-level implicit parallelism with PASL (Acar, Charguéraud, and Rainey 2012)
Language Abstractions for Multicore Environments, July 2013
Fork-join model and work stealing
MPI-SWS weekly seminar, June 2011
My GitHub profile.
This project features a C++ template library that implements ordered, in-memory containers based on a new B-tree-like data structure.
PASL is a C++ library that provides algorithms for mapping computations generated by programs with implicit threading to multicore machines.
Manticore is a parallel programming language aimed at general-purpose applications that run on multicore processors.
I worked on the back end of the compiler. My main projects covered code generation for x86-64 and support for foreign-function calls.
Get the bibtex file used to generate these references.
Acar, Umut A., Arthur Charguéraud, and Mike Rainey. 2011. “Oracle Scheduling: Controlling Granularity in Implicitly Parallel Languages.” In Proceedings of the 2011 ACM International Conference on Object Oriented Programming Systems Languages and Applications, 499–518. ACM. http://chargueraud.org/research/2011/oracle/oracle_scheduling.pdf.
———. 2012. “Efficient Primitives for Creating and Scheduling Parallel Computations.” In Declarative Aspects of Multicore Programming. http://chargueraud.org/research/2012/damp/damp2012_primitives.pdf.
———. 2013. “Scheduling Parallel Programs by Work Stealing with Private Deques.” In 18th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 219–228. ACM. http://chargueraud.org/research/2013/ppopp/full.pdf.
———. 2014. “Theory and Practice of Chunked Sequences.” In The 22nd Annual European Symposium on Algorithms, 25–36. Springer. http://deepsea.inria.fr/chunkedseq/chunked_seq.pdf.
———. 2015. “A Work-Efficient Algorithm for Parallel Unordered Depth-First Search.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 67:1–67:12. ACM. http://chargueraud.org/research/2015/pdfs/pdfs_sc15.pdf.
———. 2016. “Oracle-Guided Scheduling for Controlling Granularity in Implicitly Parallel Languages.” Journal of Functional Programming. Cambridge University Press.
Bergstrom, Lars, Matthew Fluet, Mike Rainey, John Reppy, and Adam Shaw. 2012. “Lazy Tree Splitting.” Journal of Functional Programming 22 (4–5): 382–438. Cambridge University Press. http://manticore.cs.uchicago.edu/papers/jfp-lts-submitted.pdf.
Bergstrom, Lars, Matthew Fluet, Mike Rainey, John Reppy, Stephen Rosen, and Adam Shaw. 2013. “Data-Only Flattening for Nested Data Parallelism.” In 18th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 81–92. ACM. http://manticore.cs.uchicago.edu/papers/ppopp13-flat.pdf.
Bergstrom, Lars, Mike Rainey, John Reppy, Adam Shaw, and Matthew Fluet. 2010. “Lazy Tree Splitting.” In The 15th ACM SIGPLAN International Conference on Functional Programming, 93–104. ACM. http://manticore.cs.uchicago.edu/papers/icfp10-lts.pdf.
Blume, Matthias, Michael Rainey, and John Reppy. 2008. “Calling Variadic Functions from a Strongly-Typed Language.” In Proceedings of the 2008 ACM SIGPLAN Workshop on ML, 47–58. ACM. http://gallium.inria.fr/~rainey/articles/ml-varargs.pdf.
Fluet, Matthew, Mike Rainey, and John Reppy. 2008. “A Scheduling Framework for General-Purpose Parallel Languages.” In The 13th ACM SIGPLAN International Conference on Functional Programming, 241–252. ACM. http://manticore.cs.uchicago.edu/papers/icfp08-sched.pdf.
Fluet, Matthew, Mike Rainey, John Reppy, Adam Shaw, and Yingqi Xiao. 2007. “Manticore: A Heterogeneous Parallel Language.” In Proceedings of the 2007 Workshop on Declarative Aspects of Multicore Programming, 37–44. ACM.
Rainey, Mike. 2010. “Effective Scheduling Techniques for High-Level Parallel Programming Languages.” PhD thesis, University of Chicago. http://manticore.cs.uchicago.edu/papers/rainey-phd.pdf.
Wise, David S., Craig Citro, Joshua Hursey, Fang Liu, and Michael Rainey. 2005. “A Paradigm for Parallel Matrix Algorithms: Scalable Cholesky.” In Euro-Par 2005 – Parallel Processing. Springer. http://dx.doi.org/10.1007/11549468_76.