About Me
Hi! I am a fifth-year Computer Science PhD student at the University of California, Berkeley, advised by Chris Fletcher. I work on understanding efficient implementations of domain-specific kernels, with a focus on building abstractions that unify a wide variety of kernels and accelerator designs into a small set of primitives. This work is in collaboration with Joel Emer and Michael Pellauer. I have applied this analysis to a range of domains, including sparse tensor algebra, transformers, and fully homomorphic encryption.
I transferred to UC Berkeley in January 2024, following my advisor. Before that, I was a student at the University of Illinois Urbana-Champaign, where I worked on hardware security and began my research on domain-specific kernels.
Before coming to the University of Illinois, I received my B.S. in Computer Science from Harvey Mudd College in 2020. There, I worked with Chris Clark in the Lab for Autonomous and Intelligent Robotics. Additionally, for my senior capstone project, I added a numerical programming library to the programming language Factor.
In my free time, I enjoy cooking, social dancing, traveling with my family, and studying Korean.
Please feel free to reach out to me by email at nandeeka [at] berkeley [dot] edu, on GitHub, or on LinkedIn.
Publications
Nandeeka Nayak, Xinrui Wu, Toluwanimi O. Odemuyiwa, Michael Pellauer, Joel S. Emer, and Christopher W. Fletcher. “FuseMax: Leveraging Extended Einsums to Optimize Attention Accelerator Design”. MICRO ‘24. [paper]
Nandeeka Nayak, Toluwanimi O. Odemuyiwa, Shubham Ugare, Christopher W. Fletcher, Michael Pellauer, and Joel S. Emer. “TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators”. MICRO ‘23. [paper]
IEEE Micro Top Picks 2023 Honorable Mention
Jose Rodrigo Sanchez Vicarte, Pradyumna Shome, Nandeeka Nayak, Caroline Trippel, Adam Morrison, David Kohlbrenner, and Christopher W. Fletcher. “Opening Pandora’s Box: A Systematic Study of New Ways Microarchitecture Can Leak Private Data”. ISCA ‘21. [paper]
Intel Hardware Security Academic Award 2022 Honorable Mention
Nandeeka Nayak, Makoto Nara, Timmy Gambin, Zoë Wood, and Christopher M. Clark. “Machine Learning Techniques for AUV Side-Scan Sonar Data Feature Extraction as Applied to Intelligent Search for Underwater Archaeological Sites”. FSR ‘19. [paper]
Talks/Posters
FuseMax: Leveraging Extended Einsums to Optimize Attention Accelerator Design. MLArchSys 2024. [program] [paper]
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. Highlights of Parallel Computing 2024. [program] [paper]
Extended Einsums: Domain-Specific Kernels in the Language of Tensor Algebra. Stanford AHA Seminar 2024.
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. Workshop on Sparse Tensor Computations 2023. [program] [talk]
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. CTSTA 2023. [program]
TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. DRAGSTERS 2023. [program]
Students Mentored
Current Students
- Yuxin Jin (March 2024 - present)
- Chenxi Wan (March 2024 - present)
- Xinrui (Alice) Wu (May 2023 - present) → UCLA PhD
- Timor Averbuch (May 2023 - present)
Former Students
- Jules Peyrat (April 2024 - August 2024) → EPFL Master’s
- Alex Dicheva (August 2022 - October 2023)