Bookmarks

Do LLMs Have Good Music Taste?

How might LLMs store facts | Deep Learning Chapter 7

High-quality educational lecture on how transformers store factual information, directly relevant to AI interpretability.

Normalization models of attention

Academic tutorial on computational models of visual attention with hands-on MATLAB code; directly relevant for researchers in computational neuroscience and AI.

How difficult is AI alignment? | Anthropic Research Salon

ARC-AGI-2 Overview With Francois Chollet

How To Think About Thinking Models

On the Biology of a Large Language Model (Part 2)

Neel Does Research (Vibe Coding Edition)

Mind from Matter (Lecture By Joscha Bach)

University lecture by cognitive scientist Joscha Bach examining AI architecture and machine consciousness; fits educational and technical focus on cognition and AI philosophy.

Navigating Progress in AI and Neuroscience

Talk explores reciprocal advances between neuroscience and AI, highlighting how brain insights inform interpretable machine-learning models.

Sholto Douglas & Trenton Bricken - How LLMs Actually Think

Activation Atlas

On the Biology of a Large Language Model

Neural Networks, Manifolds, and Topology

(How) Do Language Models Track State?

Chess-GPT's Internal World Model

Manipulating Chess-GPT's World Model

Heatmaps and CNNs Using Fast.ai

KAN: Kolmogorov–Arnold Networks

How to Use t-SNE Effectively

Measuring Faithfulness in Chain-of-Thought Reasoning
