Olivia Weng. Neural Network Quantization for Efficient Inference: A Survey. arXiv:2112.06126. December 2021.
Olivia Weng, Alireza Khodamoradi, Gabriel Marcano, Nojan Sheybani, Farinaz Koushanfar, Kristof Denolf, Ryan Kastner. ResNet Reshaper: Reshaping Residual Networks for Resource-Efficient Inference on FPGAs. In submission.
Colin Drewes, Olivia Weng, Steven Harris, Winnie Wang, William Hunter, Christopher McCarty, Ryan Kastner, Dustin Richmond. Turn on, Tune in, Listen up: Maximizing Channel Capacity in Time-to-Digital Converters. In submission.
Colin Drewes, Steven Harris, Winnie Wang, Richard Appen, Olivia Weng, Ryan Kastner, William Hunter, Christopher McCarty, Dustin Richmond. A Tunable Dual-Edge Time-to-Digital Converter. In IEEE 29th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). Virtual, May 2021.
Michael Barrow, Olivia Weng, and Ryan Kastner. Design Space Exploration for Machine Learning Architectures. In Workshop on Reimagining Codesign hosted by US DOE, Office of Advanced Scientific Computing Research. Virtual, March 2021.
Olivia Weng, Alireza Khodamoradi, and Ryan Kastner. Hardware-efficient Residual Networks for FPGAs. In Proceedings of Workshop on System-level Design Methods for Deep Learning on Heterogeneous Architectures (SLOHA) at Design, Automation and Test in Europe (DATE). Grenoble, France (Virtual), February 2021.
Olivia Weng and Andrew A. Chien. Evaluating Achievable Latency and Cost: SSD Latency Predictors. In Workshop on Accelerated Machine Learning (AccML) at High Performance Embedded Architectures and Compilers (HiPEAC). Bologna, Italy, January 2020.