Publications by Tags

Compression, Equivariance, Extrapolation, Generalization, Information Theory, Invariance, Meta Learning, NLP, Representation Learning, Robustness, Self-Supervised Learning, Time Series, Uncertainty, Vision

Selected Papers

Improving Self-Supervised Learning by Characterizing Idealized Representations

Y. Dubois, T. Hashimoto, S. Ermon, P. Liang

NeurIPS 2022

TLDR: We characterize idealized self-supervised representations, which leads to actionable insights for improving SSL algorithms.

Invariance, Representation Learning, Self-Supervised Learning, Vision

Optimal Representations for Covariate Shifts

Y. Ruan*, Y. Dubois*, C. J. Maddison

ICLR 2022

TLDR: We give a simple variational objective whose optima are exactly the set of representations that are robust under covariate shift.

Information Theory, Invariance, Representation Learning, Robustness, Self-Supervised Learning, Vision

Lossy Compression for Lossless Prediction

Y. Dubois, B. Bloem-Reddy, K. Ullrich, C. J. Maddison

NeurIPS 2021 Spotlight Presentation 🎉

TLDR: We formalize compression with respect to ML algorithms rather than human perception.

Compression, Information Theory, Invariance, Representation Learning, Self-Supervised Learning, Vision

Learning Optimal Representations with the Decodable Information Bottleneck

Y. Dubois, D. Kiela, D. J. Schwab, R. Vedantam

NeurIPS 2020 Spotlight Presentation 🎉

TLDR: We characterize and approximate optimal representations for supervised learning.

Generalization, Information Theory, Representation Learning, Vision

Compression

Lossy Compression for Lossless Prediction

Y. Dubois, B. Bloem-Reddy, K. Ullrich, C. J. Maddison

NeurIPS 2021 Spotlight Presentation 🎉

TLDR: We formalize compression with respect to ML algorithms rather than human perception.

Compression, Information Theory, Invariance, Representation Learning, Self-Supervised Learning, Vision

Equivariance

Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes

A. Y. K. Foong*, W. P. Bruinsma*, J. Gordon*, Y. Dubois, J. Requeima, R. E. Turner

NeurIPS 2020

TLDR: We propose a translation equivariant (latent) neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Convolutional Conditional Neural Processes

J. Gordon*, W. P. Bruinsma*, A. Y. K. Foong, J. Requeima, Y. Dubois, R. E. Turner

ICLR 2020 Oral Presentation 🎉

TLDR: We propose a translation equivariant conditional neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Extrapolation

Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes

A. Y. K. Foong*, W. P. Bruinsma*, J. Gordon*, Y. Dubois, J. Requeima, R. E. Turner

NeurIPS 2020

TLDR: We propose a translation equivariant (latent) neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Convolutional Conditional Neural Processes

J. Gordon*, W. P. Bruinsma*, A. Y. K. Foong, J. Requeima, Y. Dubois, R. E. Turner

ICLR 2020 Oral Presentation 🎉

TLDR: We propose a translation equivariant conditional neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Location Attention for Extrapolation to Longer Sequences

Y. Dubois, G. Dagan, D. Hupkes, E. Bruni

ACL 2020

TLDR: We propose an attention mechanism that improves the extrapolation capacity of neural NLP models.

Extrapolation, NLP, Robustness

Generalization

Learning Optimal Representations with the Decodable Information Bottleneck

Y. Dubois, D. Kiela, D. J. Schwab, R. Vedantam

NeurIPS 2020 Spotlight Presentation 🎉

TLDR: We characterize and approximate optimal representations for supervised learning.

Generalization, Information Theory, Representation Learning, Vision

Information Theory

Optimal Representations for Covariate Shifts

Y. Ruan*, Y. Dubois*, C. J. Maddison

ICLR 2022

TLDR: We give a simple variational objective whose optima are exactly the set of representations that are robust under covariate shift.

Information Theory, Invariance, Representation Learning, Robustness, Self-Supervised Learning, Vision

Lossy Compression for Lossless Prediction

Y. Dubois, B. Bloem-Reddy, K. Ullrich, C. J. Maddison

NeurIPS 2021 Spotlight Presentation 🎉

TLDR: We formalize compression with respect to ML algorithms rather than human perception.

Compression, Information Theory, Invariance, Representation Learning, Self-Supervised Learning, Vision

Learning Optimal Representations with the Decodable Information Bottleneck

Y. Dubois, D. Kiela, D. J. Schwab, R. Vedantam

NeurIPS 2020 Spotlight Presentation 🎉

TLDR: We characterize and approximate optimal representations for supervised learning.

Generalization, Information Theory, Representation Learning, Vision

Invariance

Learning Instance-Specific Data Augmentations

N. Miao, E. Mathieu, Y. Dubois, T. Rainforth, Y. W. Teh, A. Foster, H. Kim

arXiv

TLDR: We introduce a method for automatically learning input-specific augmentations from data.

Invariance, Vision

Improving Self-Supervised Learning by Characterizing Idealized Representations

Y. Dubois, T. Hashimoto, S. Ermon, P. Liang

NeurIPS 2022

TLDR: We characterize idealized self-supervised representations, which leads to actionable insights for improving SSL algorithms.

Invariance, Representation Learning, Self-Supervised Learning, Vision

Optimal Representations for Covariate Shifts

Y. Ruan*, Y. Dubois*, C. J. Maddison

ICLR 2022

TLDR: We give a simple variational objective whose optima are exactly the set of representations that are robust under covariate shift.

Information Theory, Invariance, Representation Learning, Robustness, Self-Supervised Learning, Vision

Lossy Compression for Lossless Prediction

Y. Dubois, B. Bloem-Reddy, K. Ullrich, C. J. Maddison

NeurIPS 2021 Spotlight Presentation 🎉

TLDR: We formalize compression with respect to ML algorithms rather than human perception.

Compression, Information Theory, Invariance, Representation Learning, Self-Supervised Learning, Vision

Meta Learning

Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes

A. Y. K. Foong*, W. P. Bruinsma*, J. Gordon*, Y. Dubois, J. Requeima, R. E. Turner

NeurIPS 2020

TLDR: We propose a translation equivariant (latent) neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Convolutional Conditional Neural Processes

J. Gordon*, W. P. Bruinsma*, A. Y. K. Foong, J. Requeima, Y. Dubois, R. E. Turner

ICLR 2020 Oral Presentation 🎉

TLDR: We propose a translation equivariant conditional neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

NLP

Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning

S. Santurkar, Y. Dubois, R. Taori, P. Liang, T. Hashimoto

arXiv

TLDR: We systematically investigate whether the additional language supervision in CLIP helps models learn more transferable representations.

NLP, Representation Learning, Self-Supervised Learning, Vision

Location Attention for Extrapolation to Longer Sequences

Y. Dubois, G. Dagan, D. Hupkes, E. Bruni

ACL 2020

TLDR: We propose an attention mechanism that improves the extrapolation capacity of neural NLP models.

Extrapolation, NLP, Robustness

Representation Learning

Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning

S. Santurkar, Y. Dubois, R. Taori, P. Liang, T. Hashimoto

arXiv

TLDR: We systematically investigate whether the additional language supervision in CLIP helps models learn more transferable representations.

NLP, Representation Learning, Self-Supervised Learning, Vision

Improving Self-Supervised Learning by Characterizing Idealized Representations

Y. Dubois, T. Hashimoto, S. Ermon, P. Liang

NeurIPS 2022

TLDR: We characterize idealized self-supervised representations, which leads to actionable insights for improving SSL algorithms.

Invariance, Representation Learning, Self-Supervised Learning, Vision

Optimal Representations for Covariate Shifts

Y. Ruan*, Y. Dubois*, C. J. Maddison

ICLR 2022

TLDR: We give a simple variational objective whose optima are exactly the set of representations that are robust under covariate shift.

Information Theory, Invariance, Representation Learning, Robustness, Self-Supervised Learning, Vision

Lossy Compression for Lossless Prediction

Y. Dubois, B. Bloem-Reddy, K. Ullrich, C. J. Maddison

NeurIPS 2021 Spotlight Presentation 🎉

TLDR: We formalize compression with respect to ML algorithms rather than human perception.

Compression, Information Theory, Invariance, Representation Learning, Self-Supervised Learning, Vision

Learning Optimal Representations with the Decodable Information Bottleneck

Y. Dubois, D. Kiela, D. J. Schwab, R. Vedantam

NeurIPS 2020 Spotlight Presentation 🎉

TLDR: We characterize and approximate optimal representations for supervised learning.

Generalization, Information Theory, Representation Learning, Vision

Robustness

Optimal Representations for Covariate Shifts

Y. Ruan*, Y. Dubois*, C. J. Maddison

ICLR 2022

TLDR: We give a simple variational objective whose optima are exactly the set of representations that are robust under covariate shift.

Information Theory, Invariance, Representation Learning, Robustness, Self-Supervised Learning, Vision

Location Attention for Extrapolation to Longer Sequences

Y. Dubois, G. Dagan, D. Hupkes, E. Bruni

ACL 2020

TLDR: We propose an attention mechanism that improves the extrapolation capacity of neural NLP models.

Extrapolation, NLP, Robustness

Self-Supervised Learning

Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning

S. Santurkar, Y. Dubois, R. Taori, P. Liang, T. Hashimoto

arXiv

TLDR: We systematically investigate whether the additional language supervision in CLIP helps models learn more transferable representations.

NLP, Representation Learning, Self-Supervised Learning, Vision

Improving Self-Supervised Learning by Characterizing Idealized Representations

Y. Dubois, T. Hashimoto, S. Ermon, P. Liang

NeurIPS 2022

TLDR: We characterize idealized self-supervised representations, which leads to actionable insights for improving SSL algorithms.

Invariance, Representation Learning, Self-Supervised Learning, Vision

Optimal Representations for Covariate Shifts

Y. Ruan*, Y. Dubois*, C. J. Maddison

ICLR 2022

TLDR: We give a simple variational objective whose optima are exactly the set of representations that are robust under covariate shift.

Information Theory, Invariance, Representation Learning, Robustness, Self-Supervised Learning, Vision

Lossy Compression for Lossless Prediction

Y. Dubois, B. Bloem-Reddy, K. Ullrich, C. J. Maddison

NeurIPS 2021 Spotlight Presentation 🎉

TLDR: We formalize compression with respect to ML algorithms rather than human perception.

Compression, Information Theory, Invariance, Representation Learning, Self-Supervised Learning, Vision

Time Series

Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes

A. Y. K. Foong*, W. P. Bruinsma*, J. Gordon*, Y. Dubois, J. Requeima, R. E. Turner

NeurIPS 2020

TLDR: We propose a translation equivariant (latent) neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Convolutional Conditional Neural Processes

J. Gordon*, W. P. Bruinsma*, A. Y. K. Foong, J. Requeima, Y. Dubois, R. E. Turner

ICLR 2020 Oral Presentation 🎉

TLDR: We propose a translation equivariant conditional neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Uncertainty

Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes

A. Y. K. Foong*, W. P. Bruinsma*, J. Gordon*, Y. Dubois, J. Requeima, R. E. Turner

NeurIPS 2020

TLDR: We propose a translation equivariant (latent) neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Convolutional Conditional Neural Processes

J. Gordon*, W. P. Bruinsma*, A. Y. K. Foong, J. Requeima, Y. Dubois, R. E. Turner

ICLR 2020 Oral Presentation 🎉

TLDR: We propose a translation equivariant conditional neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Vision

Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning

S. Santurkar, Y. Dubois, R. Taori, P. Liang, T. Hashimoto

arXiv

TLDR: We systematically investigate whether the additional language supervision in CLIP helps models learn more transferable representations.

NLP, Representation Learning, Self-Supervised Learning, Vision

Learning Instance-Specific Data Augmentations

N. Miao, E. Mathieu, Y. Dubois, T. Rainforth, Y. W. Teh, A. Foster, H. Kim

arXiv

TLDR: We introduce a method for automatically learning input-specific augmentations from data.

Invariance, Vision

Improving Self-Supervised Learning by Characterizing Idealized Representations

Y. Dubois, T. Hashimoto, S. Ermon, P. Liang

NeurIPS 2022

TLDR: We characterize idealized self-supervised representations, which leads to actionable insights for improving SSL algorithms.

Invariance, Representation Learning, Self-Supervised Learning, Vision

Optimal Representations for Covariate Shifts

Y. Ruan*, Y. Dubois*, C. J. Maddison

ICLR 2022

TLDR: We give a simple variational objective whose optima are exactly the set of representations that are robust under covariate shift.

Information Theory, Invariance, Representation Learning, Robustness, Self-Supervised Learning, Vision

Lossy Compression for Lossless Prediction

Y. Dubois, B. Bloem-Reddy, K. Ullrich, C. J. Maddison

NeurIPS 2021 Spotlight Presentation 🎉

TLDR: We formalize compression with respect to ML algorithms rather than human perception.

Compression, Information Theory, Invariance, Representation Learning, Self-Supervised Learning, Vision

Learning Optimal Representations with the Decodable Information Bottleneck

Y. Dubois, D. Kiela, D. J. Schwab, R. Vedantam

NeurIPS 2020 Spotlight Presentation 🎉

TLDR: We characterize and approximate optimal representations for supervised learning.

Generalization, Information Theory, Representation Learning, Vision

Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes

A. Y. K. Foong*, W. P. Bruinsma*, J. Gordon*, Y. Dubois, J. Requeima, R. E. Turner

NeurIPS 2020

TLDR: We propose a translation equivariant (latent) neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision

Convolutional Conditional Neural Processes

J. Gordon*, W. P. Bruinsma*, A. Y. K. Foong, J. Requeima, Y. Dubois, R. E. Turner

ICLR 2020 Oral Presentation 🎉

TLDR: We propose a translation equivariant conditional neural process.

Equivariance, Extrapolation, Meta Learning, Time Series, Uncertainty, Vision