I am broadly interested in new abstractions for improving the design and implementation of efficient kernels for tensor algebra and related domains. My work spans theory, where I develop methodologies for designing efficient kernels, and practice, where I apply those methodologies to build efficient implementations of specific kernels.
Here is my CV and my Google Scholar.
Publications
Nandeeka Nayak, Xinrui Wu, Toluwanimi O. Odemuyiwa, Michael Pellauer, Joel S. Emer, and Christopher W. Fletcher. “FuseMax: Leveraging Extended Einsums to Optimize Attention Accelerator Design”. MICRO 2024. [paper] [artifact]
Nandeeka Nayak, Toluwanimi O. Odemuyiwa, Shubham Ugare, Christopher W. Fletcher, Michael Pellauer, and Joel S. Emer. “TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators”. MICRO 2023. [paper] [artifact] [compiler]
IEEE Micro Top Picks 2023 Honorable Mention
Jose Rodrigo Sanchez Vicarte, Pradyumna Shome, Nandeeka Nayak, Caroline Trippel, Adam Morrison, David Kohlbrenner, and Christopher W. Fletcher. “Opening Pandora’s Box: A Systematic Study of New Ways Microarchitecture Can Leak Private Data”. ISCA 2021. [paper] [artifact]
Intel Hardware Security Academic Award 2022 Honorable Mention
Nandeeka Nayak, Makoto Nara, Timmy Gambin, Zoë Wood, and Christopher M. Clark. “Machine Learning Techniques for AUV Side-Scan Sonar Data Feature Extraction as Applied to Intelligent Search for Underwater Archaeological Sites”. FSR 2019. [paper]
Tutorials
TeAAL and HiFiber: Precise and Concise Descriptions of (Sparse) Tensor Algebra Accelerators. Co-located with MICRO 2024. [website] [artifact] [slides]
Talks/Posters
FuseMax: Leveraging Extended Einsums to Optimize Attention Accelerator Design. MLArchSys 2024. [program] [paper]
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. Highlights of Parallel Computing 2024. [program] [paper]
Extended Einsums: Domain-Specific Kernels in the Language of Tensor Algebra. Stanford AHA Seminar 2024.
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. Workshop on Sparse Tensor Computations 2023. [program] [talk]
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. CTSTA 2023. [program]
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. DRAGSTERS 2023. [program]