Eugene Brevdo

Staff Software Engineer

Google DeepMind

Biography

Eugene Brevdo is a Staff SWE at Google DeepMind. His research interests span many interconnected areas:

  • Optimization under uncertainty/constraints and experimental design, e.g., for software systems and high-throughput screening in biology.
  • Software systems for training and deploying ML, bandit, and RL models.
  • Machine Learning applied to optimizing large software systems (databases, datacenter scheduling, caches, compilers like LLVM and XLA/TPU).

Eugene received his PhD in Electrical Engineering from Princeton University, where his advisers were Peter Ramadge and Ingrid Daubechies.

Education
  • PhD in Electrical Engineering, 2011

    Princeton University

  • BSc in Electrical, Computer, and Systems Engineering, 2005

    Rensselaer Polytechnic Institute

Experience

Staff Software Engineer
Google Brain + Google DeepMind
Jan 2022 – Present · California

Brain Sequin (now GDM Alchemy) team.

  • Focusing on protein understanding and optimization under uncertainty.
  • Multi-stage peptide library design, co-optimizing cell permeability and protein binding.
  • LLMs for protein function annotation and target-conditional optimization.

Staff Software Engineer
Google Brain
Apr 2017 – Dec 2021 · California

Co-TLM of the TF-Agents team (2018 – 2021).

TLM of the Brain Learned Systems Team (2017 – 2022). Clients included Spanner, Compiler, and Cloud infrastructure teams.

  • Built smarter query optimizers, cache eviction algorithms, and inlining and register allocation passes.
  • Grew the Learned Systems team from 1 to 7 researchers and engineers.
  • Aligned engagements between Brain, Technical Infrastructure, and Cloud orgs.
  • Set research direction for systems and ML engineers.

Senior Software Engineer
Google Brain
Oct 2015 – Mar 2017 · California

SWE on the Brain Applied Machine Intelligence team.

  • Core TensorFlow maintainer.
  • Developed interfaces and support for sparse and sequential input, debugged graph control flow, implemented CPU and GPU kernels; whatever needed doing.
  • Founding SWE / API designer of TF Distributions (now TensorFlow Probability).

Software Engineer
Google Research
Apr 2014 – Sep 2015 · California

Hacked on DistBelief and helped open-source TensorFlow.

Software Engineer
Lifecode, Inc.
Mar 2013 – Mar 2014 · California

Built supervised learning ML pipelines for clinical diagnosis of rare diseases from NGS assays.

Senior Data Scientist
The Climate Corporation
Mar 2013 – Mar 2014 · California

I worked on two teams:

  • Computational Climatology: Statistical weather forecasting on short-to-medium time scales (2 weeks to 2 years) using a combination of techniques from climatology, machine learning/statistics, and spatiotemporal signal processing.
  • Computational Agronomy: Analyzed, assimilated, and reconciled remotely sensed weather and agricultural data. Built growth forecasts for corn, sorghum, soy, and winter wheat.

Research Intern
Siemens Corporate Research
May 2008 – Aug 2008 · Princeton, NJ

Focused on applications of Compressive Sensing to inverse problems in medical imaging.

  • Developed a CS-based estimator for computed tomography with sinogram occlusion.
  • Developed a novel CS-based reconstruction technique for ultrasound tomography.

Featured Publications

(2023). HALP: Heuristic Aided Learned Preference Eviction Policy for YouTube Content Delivery Network. 20th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2023, Boston, MA, April 17-19, 2023.

(2023). Kepler: Robust Learning for Parametric Query Optimization. Proc. ACM Manag. Data.

(2023). The Next 700 ML-Enabled Compiler Optimizations. CoRR.

(2022). A Transferable Approach for Partitioning Machine Learning Models on Multi-Chip-Modules. Proceedings of Machine Learning and Systems 2022, MLSys 2022, Santa Clara, CA, USA, August 29 - September 1, 2022.

(2022). Differentiable Architecture Search for Reinforcement Learning. International Conference on Automated Machine Learning, AutoML 2022, 25-27 July 2022, Johns Hopkins University, Baltimore, MD, USA.

(2022). ProtNLM: Model-based Natural Language Protein Annotation.

(2021). MLGO: a Machine Learning Guided Compiler Optimizations Framework. CoRR.

(2021). Reverb: A Framework For Experience Replay. CoRR.

(2018). Dynamic control flow in large-scale machine learning. Proceedings of the Thirteenth EuroSys Conference, EuroSys 2018, Porto, Portugal, April 23-26, 2018.

(2018). Tensor2Tensor for Neural Machine Translation. Proceedings of the 13th Conference of the Association for Machine Translation in the Americas, AMTA 2018, Boston, MA, USA, March 17-21, 2018 - Volume 1: Research Papers.

(2017). Deep Probabilistic Programming. 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

(2017). TensorFlow Distributions. CoRR.

(2016). TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. CoRR.

(2009). Stylistic analysis of paintings using wavelets and machine learning. 17th European Signal Processing Conference, EUSIPCO 2009, Glasgow, Scotland, UK, August 24-28, 2009.

(2008). Image processing for artist identification. IEEE Signal Process. Mag.
