Bookmarks

Comments on Source

This section of the wiki allows anyone to document, explain, ask questions about, or comment on the Lua source code. You may link to [1] or paste the code in question.

TLA+ is hard to learn

I’m a fan of the formal specification language TLA+. With TLA+, you can build models of programs or systems, which helps you reason about their behavior. TLA+ is particularly useful for reason…

Some questions

Google's approach to email

My favorite books

A star means I’m currently reading the book but already enjoying it.

A Fat Pointer Library

libCello Official Website

The Basics

Here’s what I consider to be the basics.

The Fast Track

To accelerate the development of prospective mathematical scientists, we have selected a series of textbooks one can study to reach expertise in mathematics and physics as efficiently as possible.

A Note About Zig Books for the Zig Community

The author discusses the idea of writing a Zig book and shares personal plans for self-publishing their own book. They weigh the pros and cons of working with a publisher versus self-publishing, emphasizing the importance of considering creative freedom and revenue sharing. The author encourages those interested in writing a Zig book to carefully evaluate their options, noting that the Zig community values learning materials and support.

GitHub - DoctorWkt/acwj: A Compiler Writing Journey

This GitHub repository documents the author's journey to create a self-compiling compiler for a subset of the C language. The author shares steps taken and explanations to help others follow along practically. The author credits Nils M Holm's SubC compiler for inspiration and differentiates their code with separate licensing.

immersivemath: Immersive Linear Algebra

This text introduces a book on linear algebra with chapters covering vectors, dot products, matrix operations, and more. It aims to help readers understand fundamental concepts and tools in linear algebra through clear explanations and examples. The book includes topics such as Gaussian elimination, determinants, rank, and eigenvalues.

Principles of compiler design

This entry describes a book on compiler design principles, authored by Alfred V. Aho and Jeffrey D. Ullman and running 604 pages. It includes bibliographical references, but the EPUB and PDF versions are not available.

How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study

Meta's LLaMA family has become one of the most powerful open-source Large Language Model (LLM) series. Notably, LLaMA3 models were recently released and achieve impressive performance across various benchmarks after super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-limited scenarios, we explore LLaMA3's capabilities when quantized to low bit-widths. This exploration holds the potential to unveil new insights and challenges for low-bit quantization of LLaMA3 and other forthcoming LLMs, especially in addressing the performance degradation that arises in LLM compression. Specifically, we evaluate 10 existing post-training quantization and LoRA fine-tuning methods on LLaMA3 at 1-8 bits and on diverse datasets to comprehensively reveal LLaMA3's low-bit quantization performance. Our experimental results indicate that LLaMA3 still suffers non-negligible degradation in these scenarios, especially at ultra-low bit-widths. This highlights the signif...
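To make "low-bit quantization" concrete, here is a minimal sketch of symmetric round-to-nearest post-training quantization of a weight vector in plain Python. This is only an illustration of the general idea, not any of the specific methods the paper evaluates; the function name and values are invented for the example.

```python
def quantize_rtn(weights, bits):
    """Toy symmetric round-to-nearest quantization of a list of floats.

    Maps each weight to a signed integer in [-2^(bits-1), 2^(bits-1) - 1]
    using a single per-tensor scale, then dequantizes back to floats so
    the rounding error is visible.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    dequant = [qi * scale for qi in q]
    return q, dequant

# Example: quantize four weights to 4 bits and inspect the error.
w = [0.9, -0.45, 0.12, -0.03]
q, w_hat = quantize_rtn(w, bits=4)
```

The gap between `w` and `w_hat` is the quantization error that the paper measures at scale; at 1-2 bits the grid becomes so coarse that this error dominates, which is why ultra-low bit-widths degrade quality.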

The Illustrated Stable Diffusion

AI image generation with Stable Diffusion involves an image information creator and an image decoder. Diffusion models use noise and powerful computer vision models to generate aesthetically pleasing images. Text can be incorporated to control the type of image the model generates in the diffusion process.
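The iterative refinement at the heart of diffusion can be sketched with a toy 1-D example. The "oracle" noise predictor below stands in for the learned U-Net of the image information creator, and the 1-D list stands in for a latent image; every name here is illustrative, not Stable Diffusion's actual API.

```python
def toy_denoise(noisy, steps=50, step_size=0.5):
    """Toy analogue of the diffusion sampling loop.

    Each step predicts the 'noise' in the current sample and removes a
    fraction of it, gradually refining the sample toward a clean target.
    Here the clean target is simply the zero vector and the predictor is
    an oracle, so the loop converges geometrically.
    """
    target = [0.0] * len(noisy)     # stand-in for the clean signal
    x = list(noisy)
    for _ in range(steps):
        # Oracle "model": the true noise is the gap between x and the target.
        predicted_noise = [xi - ti for xi, ti in zip(x, target)]
        x = [xi - step_size * p for xi, p in zip(x, predicted_noise)]
    return x

denoised = toy_denoise([5.0, -3.0])   # starts noisy, ends near [0.0, 0.0]
```

In the real pipeline the loop runs in latent space, the predictor is conditioned on the text embedding, and the image decoder turns the final latent into pixels; only the shape of the iteration is the same.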

Visual Guides to understand the basics of Large Language Models

This article provides a compilation of tools and articles that aim to break down the complicated concepts of Large Language Models (LLMs) in an intuitive way. It acknowledges that many people struggle with understanding the basics of LLMs and offers resources to help solidify their understanding. The article includes a table of contents with links to various resources, such as "The Illustrated Transformer" by Jay Alammar, which provides visualizations to explain the transformer architecture, a fundamental building block of LLMs. The goal is to make the concepts of LLMs easily understood and accessible.

Paper page - Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

The page offers instructions for citing the paper (arxiv.org/abs/2401.01335) in three different types of README.md files, so that those pages link back to it.


Pen and Paper Exercises in Machine Learning

This is a collection of (mostly) pen-and-paper exercises in machine learning. The exercises are on the following topics: linear algebra, optimisation, directed graphical models, undirected graphical models, expressive power of graphical models, factor graphs and message passing, inference for hidden Markov models, model-based learning (including ICA and unnormalised models), sampling and Monte-Carlo integration, and variational inference.

The Random Transformer

This blog post provides an end-to-end example of the math within a transformer model, with a focus on the encoder part. The goal is to understand how the model works, and to make it more manageable, simplifications are made and the dimensions of the model are reduced. The post recommends reading "The Illustrated Transformer" blog for a more intuitive explanation of the transformer model. The prerequisites for understanding the content include basic knowledge of linear algebra, machine learning, and deep learning. The post covers the math within a transformer model during inference, attention mechanisms, residual connections and layer normalization, and provides some code to scale it up.
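The attention step the post walks through can be sketched in plain Python: a toy scaled dot-product attention over lists of vectors. The tiny dimensions and variable names are illustrative, chosen in the same spirit as the post's reduced model sizes.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.

    Q, K, V are lists of equal-length vectors (lists of floats).
    Returns one output vector per query: a weighted average of the
    value vectors, weighted by query-key similarity.
    """
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs.
out = attention([[1.0, 0.0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

The attention weights for each query sum to 1, so each output row is a convex combination of the value vectors; the residual connections and layer normalization the post covers wrap around this core operation.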

Reader: Frequently Asked Questions

Changelog

December 19, 2023
- Added section about the Daily Digest
- Explained limitations of Kindle/Google/etc books
- Explained link between Reader docs and Readwise highlights
- Updated info about auto-highlighting feature
- Expanded section about PDF highlights
- Added browser extension hot key (alt+R)

December 7, 2023
- Added more context for a

Subcategories