JW

weekly scroll - W21

interesting stuff i'm reading / watching this week.

advice on finding your purpose

https://foundersatwork.posthaven.com/find-your-people

transcript of a graduation speech by Y Combinator cofounder Jessica Livingston. it speaks to people who have drive and motivation but struggle to find a mission for that ambition. she warns against seeking comfort in structured paths, an urge we all feel after finishing school (or at least those of us for whom school was a good fit). her advice: make friends with a lot of people. talk to the ones doing interesting work.

it's more than just restoration

https://www.youtube.com/watch?v=5v5gxow0B8c

this is my new favorite restoration video. fixing something that's broken is an incredible thing, a foundationally human urge, and as a result it's an incredibly popular genre of youtube video. these videos are typically faceless, wordless, and straightforward: something is broken, and a set of hands fixes it. this one is no exception, but SimonFordman takes it a step further. in "it WILL run", he gets a wartime jeep running after it sat for the better part of a half-century, and along the way he manages to tell a compelling story. there's subtle humor, there's personality, and there's an undeniable desire to give this jeep one last hurrah. maybe I'll write more later about why this video is just so good, but for now I just urge you to watch it. (2x speed is fine too if you're short on time lol).

why do we have so much debt

https://www.youtube.com/watch?v=bZ6HodKDxJE

great video on the history of our financial system and why the US has so much debt. my main takeaway is that debt is not a bad thing for an economy if there's productivity to absorb it. for a long time my understanding was that debt == bad. that's sort of the first thing you learn if you browse r/personalfinance for any amount of time: if you want to succeed financially, you first have to repay any debt. I guess that might be good advice for most people, but it makes it seem like any and all debt is a bad thing. in reality, there's good debt and bad debt. taking out a loan is a good idea if you actually put that money to productive use; otherwise you're just spending money you don't have. a quick sketch of that distinction is below.
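to make the good debt / bad debt distinction concrete, here's a minimal sketch with made-up numbers (the loan size, interest rate, and return below are all hypothetical):

```python
# made-up numbers, just to make "good debt vs bad debt" concrete
loan = 10_000           # hypothetical loan
interest_rate = 0.06    # hypothetical 6% annual interest
interest_owed = loan * interest_rate            # 600

# good debt: the borrowed money goes into something productive
# that returns more than the interest costs, say 10%
productive_return = loan * 0.10                 # 1,000
print(productive_return - interest_owed)        # +400, ahead even after paying the interest

# bad debt: the money is just consumed, nothing comes back,
# so you're out the interest on top of still owing the principal
print(0 - interest_owed)                        # -600
```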

in the particular case of the US government, all that debt isn't that big of a deal because the US dollar has become the de facto global currency, so any debt taken out can be absorbed by the global economy, not just the country itself. this, paired with the decoupling from gold, means the US can take on far more debt than other countries. all of this hinges, though, on the US dollar staying widely used and on there not being a mass run on US bonds by other countries.

maybe LLMs actually are reasoning?

https://www.anthropic.com/research/mapping-mind-language-model
https://www.anthropic.com/research/tracing-thoughts-language-model
https://www.youtube.com/watch?v=64lXQP6cs5M

interesting findings from the team at anthropic. featured on the dwarkesh podcast, two of their researchers discuss how they mapped neuron activations in claude to more human-understandable concepts. the most striking thing to me, assuming their concepts actually map onto how the models work, is that this provides strong evidence that the models are actually "thinking": not necessarily in the human way, but doing much more than just "next token prediction".
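as a cartoon of what "mapping activations to concepts" means: you can picture concepts as directions in the model's activation space and check which ones an activation points along. their real method learns sparse features from the model itself; the directions, concept names, and activation vector below are hand-made for illustration only.

```python
# cartoon of "which concepts light up for this activation" -- the concept
# directions and the activation vector are made up for illustration;
# the real work learns these features from the model, it doesn't hand-write them.
import numpy as np

# pretend these are directions in activation space tied to human-readable concepts
concept_directions = {
    "golden gate bridge": np.array([0.9, 0.1, 0.0, 0.2]),
    "addition / arithmetic": np.array([0.0, 0.8, 0.5, 0.1]),
    "uncertainty / 'I don't know'": np.array([0.1, 0.0, 0.2, 0.9]),
}

# a fake activation vector from some layer of the model
activation = np.array([0.05, 0.7, 0.6, 0.1])

# score each concept by how strongly the activation points along its direction
for name, direction in concept_directions.items():
    score = float(activation @ direction) / (np.linalg.norm(direction) * np.linalg.norm(activation))
    print(f"{name}: {score:.2f}")
# the arithmetic-ish direction scores highest here, i.e. that "feature" lights up
```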

in the podcast, the researchers claim that these LLMs are actually under-parameterized rather than over-parameterized. they have to cram so much information into the weights that, as a result, they start to form relevant abstractions. to predict the next token, they must actually learn instead of memorize. one example of this came from comparing simple addition to a complicated sine function. when the models were asked to do simple addition like 9 + 15, the researchers could map the neuron activations to concepts like modular arithmetic and fuzzy lookups; the models were, in a sense, performing addition (a toy sketch of that idea is below). in contrast, when asked to evaluate a more complicated function (negative sines or something), the models skipped the reasoning pathways and just output a reasonable-sounding number. in fact, there were "I don't know" features lighting up, indicating that the models "knew" when they didn't really know how to do something.
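here's a toy sketch of that two-path story about addition: one path makes a rough, low-precision guess at the magnitude of the sum, another does a mod-10 style lookup of the exact ones digit, and the two get combined. this is only an illustration of the idea, not a claim about claude's actual circuitry.

```python
# toy decomposition of addition: fuzzy magnitude path + exact ones-digit path
import random

def fuzzy_magnitude(a: int, b: int) -> int:
    """rough path: a low-precision estimate, here simulated with a little noise."""
    return a + b + random.randint(-4, 4)

def ones_digit(a: int, b: int) -> int:
    """lookup path: just the last digit of the sum, i.e. (a + b) mod 10."""
    return (a + b) % 10

def combine(a: int, b: int) -> int:
    """snap the rough estimate to the nearest number with the right ones digit."""
    estimate = fuzzy_magnitude(a, b)
    unit = ones_digit(a, b)
    base = (estimate // 10) * 10 + unit
    return min((base - 10, base, base + 10), key=lambda x: abs(x - estimate))

print(combine(9, 15))   # 24 -- the rough guess alone could be off by a few,
                        # but the exact ones digit pins down the answer
```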

what this all means is that for some tasks, like addition, models are actually generalizing concepts. the researchers also mention that the number of synapses in the human brain (something like a quadrillion) far exceeds the number of parameters these language models have now (around 4 trillion at the maximum), so there's still some way to go.