EAG stands for Effective Altruism Global; it’s a conference where EAs meet. This was my first one. I haven’t really processed any of it yet, but I promised I would post my thoughts today, so I’ll do the quickest possible version.
TLDR: I met a ton of people. I heard gossip about GPT-4, Sam Altman, and Grimes. I saw Eliezer Yudkowsky wearing a hat, standing by the dinner meeting point, considering a table of protein bars.
I woke up in NY at 6am Friday, JFK→SFO, arrived at the hotel in Oakland at 4pm, perfect timing. Saw my roommate Hillel in the lobby, other people I recognized. Checked in, dropped bags in the room, went to the EAG first-timers orientation. Then “relaxed speed-friending,” which seems like a contradiction in terms. Met a guy—I’m going to leave out everyone’s names for the most part, though I noted them privately—who advises high-net-worth individuals on where they should give their money. Mostly he tries to win elections for Democrats. He said, “If Trump wins again, everything all these people”—and he gestured broadly at the room—“are working on goes away.”
No use doing a full play-by-play. In sum: I had interesting conversations, and I encountered people who were about as stressed out as people tend to get, which was itself interesting. The 1:1s—one-on-ones are just meetings, or, in the words of Alexander Briand to one of his Twitter followers who was trying to sneak into the hotel, an “EA social ritual”—had by Sunday taken on the flavor of an extreme sport. People were limping. People were surprisingly emotional. People were occasionally distinctly annoyed by, and even rude about, my ill-informedness, which was actually usefully informative: it communicated, much better than anything online could, what people expect to be common knowledge, which is probably a pretty good proxy for what actually is common knowledge, and therefore for the gaps in my awareness that might be important.
I didn’t party quite as much as I was expecting to, but I did party—mostly on Saturday. On Friday I asked everyone “where the parties at,” and they said the parties were on Sunday. But by Sunday night I chose to stay in and rewatch the DeepMind documentary about AlphaGo and take notes on it, because I might use it for a thing I’m writing, and because, I guess, I was feeling both professionally inspired and completely socially burnt out. So I missed all the big parties, but I also reread a bit of the Epic of Gilgamesh and thought about what humans try and fail to do. The Epic of Gilgamesh is one of humanity’s oldest surviving stories, and it’s about how we’re fucked, ultimately. It’s about how we really want to be better than we are but can’t do it. It’s about how we’re all going to die.
One good thing to do here might be to address people who don’t know much, or anything, about why some people think AI might be dangerous. This rushed post probably isn’t the best place to get into that, so I’ll just note that it was a major theme of the conference and leave it at that, except to mention that at this very moment there’s another conference happening in the Bay, organized in part by Ilya Sutskever, co-founder and chief scientist of OpenAI, addressing this problem. I am not an AI expert, but I think it’s very reasonable to interpret social proof—i.e., who takes the problem seriously—as evidence of how seriously non-experts should take it. And a lot of AI experts take this problem very seriously. Anyway, this site I helped edit around Christmas, a kind of choose-your-own-adventure deal, was made specifically for this purpose: to introduce people to these arguments. It’s mostly aimed at people who are already doing technical AI work, but it may also be interesting to others.
Anyway I have to go so that’s a wrap on EAG notes. God bless you and keep you.
PS — going to change the name/domain of this substack, but not today, apparently.