I am on a bit of an x dot com hiatus, due to frustrations with the platform and a declining level of interest over the last few months. Part of the fun of Posting has just disappeared for me. It’ll probably come back, but I’ve been on a steady decline in activity (outside of intermittent bursts of replies or a couple of posts every week). I think I am just burned out from reading people and their thoughts. Every “moment” on the timeline, every discussion is frustrating to read, from the Cluely influence to the monetization-fueled bait to the ever-decreasing quality of the timeline.
Even the potentially interesting discussions that have popped up recently (about, e.g., RL) I find… baffling, with people seemingly unaware that it is a 70-year-old field and that nearly all of their “original thoughts” were discussed ages ago. That, and the fact that no matter the changes to the algorithm, it remains incredibly simple to generate 1M+ impressions per post, to the point that it has ruined some of the fun for me.
So, in the meantime, I’ve been doing other stuff. I’ve been working on some interesting things at [redacted] and settling back into a non-managerial position after stepping away from those responsibilities last year. It’s been fun, and I feel way more hopeful about the company and the kind of things I can work on there.
Either way, here are some things that have been on my mind:
1. I’ve been reading Quantum Computing Since Democritus, by Scott Aaronson. It’s been very interesting. It feels much less like a formal book than like Scott’s blog, fleshed out and connected: his thoughts on computation, its limits, the philosophical implications of those limits, and all the things Scott usually writes about. It’s not an incredibly smooth read, but, much like his blog, it feels like a stream of thought full of gripping insights. You can sense that it started as “gathered notes” from his original lectures on the subject that were later cleaned up. There are a few very fun exercises, and constant prompting throughout the chapters, with Scott casually stating X and then asking the reader “(why?)”. Answering these frequent prompts, I have found, is not so trivial.
   I think it’s a book you benefit from reading with some scratch paper at hand, unless you are already fairly familiar with everything being discussed and have spent some time thinking about it. It’s very likely I’ll reread several of the chapters a few times over the next few weeks. Still, so far, just like his blog and a couple of his interviews, it has been an excellent, riveting, and incredibly thought-provoking read.
2. I recently discovered the Princeton Companion to Mathematics, and it’s such a wonderful resource. I ended up getting the PDF and have been going through parts of it (including the condensed first part, full of introductions to some of the most important concepts in various mathematical fields). I ordered a physical copy, but it’s been stuck at customs because I decided to move to a third world country for reasons still somewhat unknown to me. This is the kind of book you want on your shelf for your kid to be able to pick up, and you want to have enough math skills to work through most of it with them and break it down intuitively, imo. Lifelong purchase, exciting.
3. I am still thinking about moving to some other country, hopefully for the last / second-to-last time, but have made no tangible progress toward a decision. I am starting to think that moving into a (very) large unfurnished place here and staying for ~2 years might be the play. I want a garden for Kobe anyway.
4. I am very excited for my QuietBox to arrive. It’s been very frustrating to have an amazing LoudBox that I simply cannot use as much as I want: it arrived two months after I moved out of my previous apartment, which would have been 100x better suited to keeping it running often. At the moment, I just can’t really deal with the sound levels. The QuietBox should change that, which means I can tinker whenever I want and actually dive into running models for inference, dig into tt-transformers, publish tt-evolve, and work on all the things I’ve been itching to do.
5. I am in dire need of more storage. I’ve been planning to get ~40TB to be able to scrape a chunk of the internet, including a local copy of my entire browser history, all of Wikipedia, etc. I am also thinking about getting a Beelink, a Framework Desktop, a new MacBook Pro, and a few more pieces of tech. All of this is very much a dependency of 3. at the moment.
6. I ended up stopping my reading of Active Inference: The Free Energy Principle in Mind, Brain, and Behavior somewhere around halfway through, as it turns mostly into a more applied look at “how to build active inference models”. Still, I highly recommend the book if you’re interested in computational neuroscience, cognition, etc.
7. I’ve been enjoying Codex and GPT-5 High. Claude Code is still my go-to when I intend to consume millions of tokens (mostly because I have Claude Max) and for actual computer use (GPT-5, even in High, is good at chaining cryptic 90s-sysadmin UNIX commands but very poor at actually using a computer like a human would). Still, GPT-5 shines at longer-horizon tasks, code editing, and code understanding, and has been impressive at times. The release of GPT-5 itself was fascinating.
   GPT-5, the web product, clearly wasn’t meant for us, though. It was meant to simplify the offering in the chat interface and the API for the majority of OpenAI’s userbase. A non-negligible share of said userbase, it seems, is suffering from varying degrees of LLM-induced psychosis (see: Sama’s AMA on reddit, the #SaveChatGPT tag), and for good reason: the model was RLHF’d to optimize it as a revenue-generating product, making sure you’ll come back to prompt it some more, and as a byproduct it has learned to be approximately optimal at talking to people the way they want to be talked to, on average.
   The release itself was a marketing fanfare and truly bizarre to witness: paid tech influencers, YouTube videos all lined up, livestreams with live reactions, tech and global news organisations primed and ready to publish. This is run-of-the-mill for large commercial releases in the video games industry, but it still hasn’t entirely sunk in that this is what the AI space has already become. Still, it’s not surprising that OpenAI is doing this too. What’s maybe surprising is how blatant it all was: the misinformation, the lies, the various chart crimes, the tone of their communication.
   Absolutely everything in that release seemed to have a “profit-maxxing” approach to it: the models’ “intelligence”, Sama’s near-immediate backtracking in an interview a couple of days later (“we can and will release much smarter models!”), the router “debacle”, which was clearly quantified on their end. Strange times.