Namuk Park

Email: namuk.park@gmail.com, GitHub: @xxxnell, Twitter: @xxxnell, Google Scholar: scholar link, CV: cv link
The focus of my research has been on understanding how deep neural networks work, and why they work that way, in order to build a more generalizable machine learning system. In particular, I have studied the role of inductive biases in neural networks and how to utilize them to improve performance.
In more detail, my research covers the following topics: (1) “empirical analysis of Vision Transformers” (through the lenses of loss landscape visualization, the Hessian eigenvalue spectrum, Fourier analysis, etc.); (2) “probabilistic neural networks” (e.g., Bayesian neural networks and ensemble methods); and (3) “generalization” and “trustworthy machine learning” (e.g., uncertainty estimation and robustness). Recently, I have been interested in (4) “self-supervised learning” (e.g., contrastive learning and masked image modeling) and (5) “AI for biology and science”.
At Prescient Design, my area of focus is building foundation models for proteins. We expect that a wide range of downstream tasks, including protein design and affinity prediction, can be improved by exploiting self-supervised learning methods.
Before joining Prescient Design, I worked at NAVER AI Lab as a visiting researcher. I received a Ph.D. in Computer Science from the School of Integrated Technology at Yonsei University in South Korea, and a B.S. in Physics from Yonsei University, graduating as Valedictorian of the College of Sciences.

Publications

[4] Namuk Park, Wonjae Kim, Byeongho Heo, Taekyung Kim, Sangdoo Yun, “What Do Self-Supervised Vision Transformers Learn?” ICLR 2023.
We show that (i) Contrastive Learning (CL) primarily captures global patterns compared with Masked Image Modeling (MIM), (ii) CL is more shape-oriented whereas MIM is more texture-oriented, and (iii) CL plays a key role in the later layers while MIM focuses on the early layers.
[3] Namuk Park and Songkuk Kim. “How Do Vision Transformers Work?” ICLR 2022. Spotlight. Zeta-Alpha’s Top 100 most cited AI papers for 2022. BenchCouncil’s Top 100 AI achievements from 2022 to 2023.
We show that the success of "multi-head self-attentions" (MSAs) lies in the "spatial smoothing" of feature maps, NOT in capturing long-range dependencies. In particular, we demonstrate that MSAs (i) flatten the loss landscapes, (ii) are low-pass filters, in contrast to Convs, and (iii) significantly improve accuracy when positioned at the end of a stage (not the end of a model). See also [2].
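As a rough illustration of the Fourier analysis mentioned above, the sketch below measures how much high-frequency content a layer's feature maps contain, which is one way to compare the low-pass behavior of MSAs with the high-pass behavior of Convs. This is a minimal sketch, not the paper's code; the function name and the frequency cutoff are illustrative assumptions.

```python
import torch

def high_frequency_ratio(feature_map: torch.Tensor, cutoff: float = 0.5) -> float:
    """Fraction of spectral energy above `cutoff` (relative to Nyquist),
    for a feature map of shape (channels, height, width)."""
    # 2D Fourier transform of each channel, shifted so low frequencies sit in the middle.
    spectrum = torch.fft.fftshift(torch.fft.fft2(feature_map), dim=(-2, -1))
    energy = spectrum.abs() ** 2
    _, h, w = feature_map.shape
    # Normalized distance of each frequency bin from the center of the spectrum.
    fy = torch.linspace(-1.0, 1.0, h).view(-1, 1).expand(h, w)
    fx = torch.linspace(-1.0, 1.0, w).view(1, -1).expand(h, w)
    radius = torch.sqrt(fx ** 2 + fy ** 2)
    high = energy[:, radius > cutoff].sum()
    return (high / energy.sum()).item()

# White noise has substantial high-frequency energy; a blurred (low-pass
# filtered) feature map would yield a noticeably smaller ratio.
x = torch.randn(64, 32, 32)
print(high_frequency_ratio(x))
```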
[2] Namuk Park and Songkuk Kim. “Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness.” ICML 2022. Winner of Qualcomm Innovative Fellowship South Korea.
We show that "spatial smoothing" (e.g., a simple blur filter) improves the accuracy, uncertainty, and robustness of CNNs, all at the same time. This is primarily due to that spatial smoothing flattens the loss landscapes by "spatially ensembling" neighboring feature maps of CNNs. See also [1].
[1] Namuk Park, Taekyu Lee, and Songkuk Kim. “Vector Quantized Bayesian Neural Network Inference for Data Streams.” AAAI 2021.
We show that "temporal smoothing" (i.e., moving average of recent predictions) significantly improves the computational performance of Bayesian NN inference without loss of accuracy by “temporally ensembling” the latest & previous predictions. To do so, we propose "ensembles for proximate data points", as an alternative theory to “ensembles for a single data point”—this theory is the foundation of [2] and [3].

Awards & Honors

“Top Reviewer” at NeurIPS 2023.
“Outstanding Thesis Award, Third prize”, Yonsei University, Jun 2022.
“Winner of Qualcomm Innovative Fellowship South Korea”, Qualcomm, Nov 2021.
“Research Grant Support for Ph.D. Students”, National Research Foundation of South Korea, Jun 2021 – Feb 2022.
“National Fellowship from Global Open Source Frontier”, NIPA (National IT Industry Promotion Agency of South Korea), Jun 2019 – Dec 2020.
“CJK (China–Japan–South Korea) OSS (Open Source/Software) Award”, The Organizing Committee of the CJK OSS Award, Nov 2019.
“OSS Competition, Honorable Mention”, NAVER Corporation, Feb 2019.
“OSS Challenge, First prize: the Award from the Minister of Science and ICT”, Nov 2018.
“OSS Competition (2nd phase), First prize”, NAVER Corporation, Aug 2018.
“OSS Competition (1st phase), Second prize”, NAVER Corporation, Feb 2018.
“National Ph.D. Full Ride Fellowship”, Institute for Information and Communications Technology Promotion of South Korea, Mar 2011 – Feb 2016.
“The Valedictorian of the College of Sciences”, Yonsei University, Feb 2011.
“Yonsei University Alumni Full Ride Scholarship for Undergraduate Students”, “GE Scholarship”, “National Scholarship for Science and Engineering”, and other merit-based scholarships, Sep 2008 – Feb 2011.

Talks

“How Do Vision Transformers Work?”, [2, 3]
Seminar at SeoulTech, Aug 2022
AI Seminar at UNIST, Mar 2022
Tech Talk at NAVER WEBTOON, Jan 2022
NAVER Tech Talk at NAVER Corporation, Dec 2021
“Uncertainty in AI: Deep Learning Is Not Good Enough for Safe AI”, [1]
Keras Korea Meetup at AI Yangjae Hub, Dec 2019
OSS Contribution Festival at NIPA, Dec 2019
South Korea-Uzbekistan SW Technology Seminar at NIPA & Tashkent University of Information Technologies, Oct 2019
“A Fast and Lightweight Probability Tool for AI in Scala”, [code]
North-East Asia OSS Forum at NIPA, Nov 2019
OSS Day (Keynote) at NIPA, Nov 2018
Scala Night Korea at Scala User Group Korea, Apr 2018