Henry Ko

technical blog


About Me

Here's a brief bio of me that includes snippets of my life.

Jeju Island, Korea

I was born and raised in Seoul, Korea up until middle school, but went to high school on Jeju Island, a volcanic island at the southern tip of Korea. I spent a lot of time biking and diving, and seeing the beauty of the ocean made me want to study marine science in college.

Hyeopjae Beach in Jeju

UC Berkeley - Freshman Year 2019

A lot changed my first semester there. With the nudge of a friend, I took an introductory CS class taught in Snap! (Berkeley's version of Scratch) and loved it. Maybe it was the combination of having a friend to do fun projects with and many dorm friends also taking CS classes that made it so enjoyable.

Korean Navy 2020~2022

When COVID struck, I came back to Korea and served in the Korean Navy. I was on a battleship as a boatswain's mate, but towards the end of my service I met a friend who introduced me to ML. It felt like a happy combination of science and engineering. After my service, I did my first ML project in underwater computer vision, restoring my distorted diving photos into clean ones.

Deep SeaNN outputs

UC Berkeley - 2022~Spring 2024

I went down lots of rabbit holes in this era, mostly in computer vision. It was an exciting time when big ideas were coming out every month.

And I started to wonder what it would be like to do these rabbit hole dives full time, away from classes for a while.


Seoul, Korea - 2024

In the Spring of 2024, I started my gap year to study things full-time.

I discovered that explainability is a quality I value highly in approaches, and that making programs go fast is a lot of fun.

And this got me into the world of making ML faster and more efficient.

Namsan Tower, Seoul

UC Berkeley - 2025 (current)

I'm interested in ideas that share a theme of "doing more with less".

- Q. How can we make powerful and flexible, yet energy-efficient hardware fit for ML? [less energy]

- Q. How can we accelerate synthetic data generation? [efficient post-training]

- Q. How could we create steerable AI that's shaped through interactions? [less finetuning]

- Q. What are good ways to compress data? (i.e., how do we train smarter models using less data?) [efficient pre/post-training]

Contact

If any of this interests you, please reach out!

email: ko.hyeonmok at berkeley dot edu
github: henryhmko

Misc.

The formats in this blog were inspired by Lilian Weng, Simon Boehm, and Fabien Sanglard.