Bookmarks
On Technical Challenges: Lock Free Programming
The Oxide Computer Company job application process asks applicants to answer a set of personal questions about their career and experiences.
To B or not to B: B-Trees with Optimistic Lock Coupling
CedarDB is a database system that delivers unmatched performance for transactions and analytics, from small writes to handling billions of rows. Built on cutting-edge research to power today’s tools and tomorrow’s challenges.
Why is Yazi fast?
This article assumes that you have already used Yazi and are familiar with most of its features.
Why async Rust?
I genuinely can’t understand how anybody could look at the mess that’s Rust’s async and think that it was a good design for a language that already had the reputation of being very complicated to write.
Introducing Limbo: A complete rewrite of SQLite in Rust
We forked SQLite with the libSQL project. What would it be like if we just rewrote it?
TCP Server in Zig - Part 5a - Poll
Using non-blocking sockets and poll to improve the scalability of our system.
Async Rust can be a pleasure to work with (without `Send + Sync + 'static`)
Async Rust is powerful. And it can be a pain to work with (and learn). Async Rust can be a pleasure to work with, though, if we can do it without `Send + Sync + 'static`.
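As a quick illustration of the point (a sketch of mine, not code from the article): a future that holds an `Rc` across an `.await` is not `Send`, so it cannot be handed to a work-stealing multi-threaded executor, but it runs fine on a single-threaded one such as `futures::executor::block_on`.

```rust
use std::rc::Rc;

// Holding an Rc across an .await point makes the generated future !Send,
// so it could not be spawned onto a work-stealing, multi-threaded executor
// that demands `Send + 'static`.
async fn uses_rc() -> usize {
    let shared = Rc::new(vec![1, 2, 3]);
    std::future::ready(()).await; // await point while `shared` is still live
    shared.len()
}

fn main() {
    // A single-threaded executor imposes no `Send + Sync + 'static` bounds,
    // so the !Send future runs without any extra ceremony.
    assert_eq!(futures::executor::block_on(uses_rc()), 3);
}
```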
Ringing in a new asynchronous I/O API
The new "io_uring" interface simplifies asynchronous I/O in the Linux kernel by using two ring buffers for submission and completion queues. Applications set up these buffers with a system call and submit I/O requests through a structured format. The design aims to address the long-standing shortcomings of the kernel's existing AIO interface by improving both efficiency and ease of use.
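To get a feel for the submission/completion flow from Rust, here is a rough sketch using the community `io-uring` crate (its API as I recall it; the article itself discusses the raw kernel interface): one read request goes onto the submission ring, and the kernel posts its result on the completion ring.

```rust
use io_uring::{opcode, types, IoUring};
use std::os::unix::io::AsRawFd;
use std::{fs, io};

fn main() -> io::Result<()> {
    // One io_uring instance owns both rings: submission (SQ) and completion (CQ).
    let mut ring = IoUring::new(8)?;

    let file = fs::File::open("/etc/hostname")?;
    let mut buf = vec![0u8; 1024];

    // Describe a read in a submission queue entry (SQE).
    let read_e = opcode::Read::new(types::Fd(file.as_raw_fd()), buf.as_mut_ptr(), buf.len() as _)
        .build()
        .user_data(0x42);

    // Safety: the fd and buffer must stay valid until the request completes.
    unsafe { ring.submission().push(&read_e).expect("submission queue is full") };

    // Tell the kernel about the new SQE and wait for one completion (CQE).
    ring.submit_and_wait(1)?;
    let cqe = ring.completion().next().expect("completion queue is empty");

    assert_eq!(cqe.user_data(), 0x42);
    println!("read {} bytes", cqe.result());
    Ok(())
}
```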
How long does it take to make a context switch?
Context switching times vary significantly across Intel CPU models, with higher-end parts generally faring better. The cost is heavily affected by cache usage and by thread migration between cores, both of which make switches more expensive. Matching the number of worker threads to the number of hardware threads improves CPU efficiency and reduces context-switching overhead.
Rust Atomics and Locks
This book by Mara Bos explores Rust programming language's concurrency features, including atomics, locks, and memory ordering. Readers will gain a practical understanding of low-level concurrency in Rust, covering topics like mutexes and condition variables. The book provides insights on implementing correct concurrency code and building custom locking and synchronization mechanisms.
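In the spirit of the book's hands-on approach, a minimal test-and-set spin lock looks roughly like this (a sketch, not the book's exact code; the book goes on to wrap the protected data behind an RAII guard).

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A minimal test-and-set spin lock; only the atomics are shown here.
pub struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    pub const fn new() -> Self {
        SpinLock { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // Acquire pairs with the Release in `unlock`, so everything the
        // previous holder wrote before unlocking is visible to us.
        while self.locked.swap(true, Ordering::Acquire) {
            std::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}
```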
Introduction
Wait-freedom ensures that each thread can make progress independently, completing every operation in a bounded number of steps regardless of what other threads do. Lock-freedom guarantees that the system as a whole keeps making progress, but an individual thread may starve. Obstruction-freedom only guarantees progress for a thread that runs without interference from the others, making it the weakest of the three guarantees.
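The difference is easiest to see on a shared counter (an illustrative sketch, not from the linked introduction): a single hardware fetch-and-add is wait-free, while a compare-and-swap retry loop is only lock-free.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

static COUNTER: AtomicU64 = AtomicU64::new(0);

// Wait-free: one hardware read-modify-write; every thread finishes in a
// bounded number of steps no matter what the others do.
fn increment_wait_free() {
    COUNTER.fetch_add(1, Ordering::Relaxed);
}

// Lock-free but not wait-free: some thread succeeds on every round of the
// CAS loop, but an individual thread can retry indefinitely if it keeps
// losing the race.
fn increment_lock_free() {
    let mut current = COUNTER.load(Ordering::Relaxed);
    loop {
        match COUNTER.compare_exchange_weak(current, current + 1, Ordering::Relaxed, Ordering::Relaxed) {
            Ok(_) => return,
            Err(actual) => current = actual,
        }
    }
}

fn main() {
    increment_wait_free();
    increment_lock_free();
    assert_eq!(COUNTER.load(Ordering::Relaxed), 2);
}
```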
1024cores
Dmitry Vyukov shares information on synchronization algorithms, multicore design patterns, and high-performance computing on his website, 1024cores.net. He focuses on shared-memory systems and does not cover topics like clusters or GPUs. New content is added regularly, and readers can subscribe for updates.
Properly Testing Concurrent Data Structures (Jul 5, 2024)
The article discusses how to effectively test concurrent data structures by using managed threads that can be paused and resumed. Controlling thread execution makes interleavings of randomly generated operations reproducible, so concurrency bugs can be found and debugged deterministically. The author emphasizes the need for proper synchronization mechanisms to ensure that only one thread is active at a time during tests.
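A toy version of the idea (not the article's actual harness): worker threads block until the test driver grants them a step, so the driver alone decides the interleaving and can replay it.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let mut controls = Vec::new();
    let mut handles = Vec::new();
    for id in 0..2 {
        let (go_tx, go_rx) = mpsc::channel::<()>();
        let (done_tx, done_rx) = mpsc::channel::<()>();
        controls.push((go_tx, done_rx));
        handles.push(thread::spawn(move || {
            for step in 0..3 {
                go_rx.recv().unwrap(); // wait for permission from the driver
                println!("thread {id} runs step {step}"); // the operation under test
                done_tx.send(()).unwrap(); // report the step as finished
            }
        }));
    }

    // The driver alone decides the interleaving; here, strict round-robin.
    for _ in 0..3 {
        for (go, done) in &controls {
            go.send(()).unwrap();
            done.recv().unwrap();
        }
    }

    for handle in handles {
        handle.join().unwrap();
    }
}
```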
Updating the Go Memory Model
The Go memory model needs updates to clarify how synchronization works and to endorse race detectors for safer concurrency. It suggests adding typed atomic operations and possibly unsynchronized atomics to improve program correctness and performance. The goal is to ensure that Go programs behave consistently and avoid data races, making them easier to debug.
Programming Language Memory Models (Memory Models, Part 2), July 6, 2021
Modern programming languages use atomic variables and operations to help synchronize threads and prevent data races. This ensures that programs run correctly by allowing proper communication between threads without inconsistent memory access. All major languages, like C++, Java, and Rust, support sequentially consistent atomics to simplify the development of multithreaded programs.
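A sketch (not from the post) of the kind of guarantee this buys: flag-based message passing between two threads, which is well defined with atomics but would be a data race with plain loads and stores.

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    let writer = thread::spawn(|| {
        DATA.store(42, Ordering::SeqCst);    // publish the payload...
        READY.store(true, Ordering::SeqCst); // ...then raise the flag
    });
    let reader = thread::spawn(|| {
        while !READY.load(Ordering::SeqCst) {} // spin until the flag is up
        // With sequentially consistent (or release/acquire) atomics the
        // reader is guaranteed to observe the payload written before the flag.
        assert_eq!(DATA.load(Ordering::SeqCst), 42);
    });
    writer.join().unwrap();
    reader.join().unwrap();
}
```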
What Color is Your Function?
Functions in a programming language can be either red or blue, affecting how they are called and used. Red functions are asynchronous and typically more complex to work with than blue functions. The choice between red and blue functions can impact code organization and maintainability.
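In Rust terms the two colors look like this (illustrative only, not from the essay): red functions compose with `.await`, while a blue caller needs an executor such as `futures::executor::block_on` at the boundary.

```rust
// "Blue" (synchronous) functions call each other directly.
fn blue() -> u32 {
    1
}

// "Red" (async) functions can call blue functions and await other red ones...
async fn red() -> u32 {
    blue() + red_helper().await
}

async fn red_helper() -> u32 {
    1
}

fn main() {
    // ...but a blue caller cannot simply call `red()`: it gets a future back
    // and needs an executor at the boundary, which is exactly the friction
    // the essay describes.
    assert_eq!(futures::executor::block_on(red()), 2);
}
```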
What every systems programmer should know about concurrency
The document delves into the complexities of concurrency for systems programmers, explaining the challenges of running multithreaded programs where code is optimized and executed in unexpected sequences. It covers fundamental concepts like atomicity, enforcing order in multithreaded programs, and memory orderings. The text emphasizes the importance of understanding how hardware, compilers, programming languages, and applications interact to create a sense of order in multithreaded programs. Key topics include atomic operations, read-modify-write operations, compare-and-swap mechanisms, and memory barriers in weakly-ordered hardware architectures.
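To make the compare-and-swap material concrete, here is a sketch (mine, not the paper's) of the classic lock-free "Treiber" stack push: a read-modify-write retried until no other thread has changed the head in between.

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

struct Node<T> {
    value: T,
    next: *mut Node<T>,
}

pub struct Stack<T> {
    head: AtomicPtr<Node<T>>,
}

impl<T> Stack<T> {
    pub fn new() -> Self {
        Stack { head: AtomicPtr::new(ptr::null_mut()) }
    }

    pub fn push(&self, value: T) {
        let node = Box::into_raw(Box::new(Node { value, next: ptr::null_mut() }));
        loop {
            let head = self.head.load(Ordering::Relaxed);
            unsafe { (*node).next = head };
            // Compare-and-swap: install `node` only if `head` is still the
            // value we read; otherwise another thread won the race, so retry.
            if self
                .head
                .compare_exchange(head, node, Ordering::Release, Ordering::Relaxed)
                .is_ok()
            {
                return;
            }
        }
    }
}
// Pop is deliberately omitted: safe memory reclamation (ABA, use-after-free)
// is where lock-free data structures get genuinely hard.
```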
Leslie Lamport
Leslie Lamport wrote several papers on verifying and specifying concurrent systems using TLA. He discovered algorithms through formal derivation and emphasized mechanical verification of concurrent algorithms. His work influenced the development of the TLAPS proof system.
Causal ordering
Causal ordering is essential for understanding distributed systems, where events may not have a clear time order. This concept helps determine the causal relationship between events in a system. It enables reasoning about causality, leading to simpler solutions in distributed computing.
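A vector clock is the standard device for the happened-before relation the article builds on; this is a minimal sketch of mine, not the article's code.

```rust
/// A minimal vector clock: one counter per process.
#[derive(Clone, PartialEq, Debug)]
struct VClock(Vec<u64>);

impl VClock {
    fn new(processes: usize) -> Self {
        VClock(vec![0; processes])
    }

    /// A local event on process `i`.
    fn tick(&mut self, i: usize) {
        self.0[i] += 1;
    }

    /// On message receive: take the element-wise max of the two clocks, then
    /// count the receive itself as a local event.
    fn receive(&mut self, sender_clock: &VClock, receiver: usize) {
        for (mine, theirs) in self.0.iter_mut().zip(&sender_clock.0) {
            *mine = (*mine).max(*theirs);
        }
        self.tick(receiver);
    }

    /// `self` causally precedes `other` iff no component is greater and the
    /// clocks differ; incomparable clocks mean the events are concurrent.
    fn happened_before(&self, other: &VClock) -> bool {
        self != other && self.0.iter().zip(&other.0).all(|(a, b)| a <= b)
    }
}

fn main() {
    let mut a = VClock::new(2);
    let mut b = VClock::new(2);
    a.tick(0);        // an event on process 0
    b.receive(&a, 1); // process 1 receives a message carrying a's clock
    assert!(a.happened_before(&b));
    assert!(!b.happened_before(&a));
}
```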
Tree-Structured Concurrency — 2023-07-01
Structured concurrency is a programming concept that ensures clear control flow in concurrent programs. In the context of async Rust, it guarantees properties like cancellation propagation, which means that dropping a future will also cancel all nested futures. The text discusses examples of unstructured and structured concurrency patterns, emphasizing the importance of applying structured concurrency to improve program correctness and maintainability. It also mentions the need for more API support to fully achieve structured concurrency in async Rust, suggesting practical approaches like using task queues or adopting the smol model for task spawning. Overall, structured concurrency provides a way to reason about async Rust programs effectively and enhance their reliability.
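A minimal flavor of the structured pattern (a sketch; the post works through richer examples): the children are created and awaited inside the parent future, so they cannot outlive it, and dropping the parent drops, and therefore cancels, both children with it.

```rust
use futures::future;

// Structured: both children live inside `parent`'s future, are awaited before
// it returns, and are dropped (cancelled) if `parent` itself is dropped.
async fn parent() -> (i32, i32) {
    let child_a = async { 1 };
    let child_b = async { 2 };
    future::join(child_a, child_b).await
}

// Unstructured, by contrast, would hand the children to a global `spawn` and
// let them outlive the function that created them.

fn main() {
    assert_eq!(futures::executor::block_on(parent()), (1, 2));
}
```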
Scheduling Internals
The document delves into the concept of concurrency in programming, exploring how tasks can be handled concurrently using different methods like threads, async I/O, event loops, and schedulers. It discusses the challenges and benefits of each approach, illustrating examples in C code to demonstrate the practical implementations. The text covers topics like preemptive and non-preemptive schedulers, implementation details in languages like Go and Rust, as well as the use of event loops for efficient task handling. It also touches on the importance of understanding program state management and the impact on task execution in concurrent programming.
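As a tiny taste of the non-preemptive case (a sketch in Rust rather than the article's C): a cooperative scheduler is just a queue of tasks that run until they voluntarily yield, here modeled as closures that report whether they want another turn.

```rust
use std::collections::VecDeque;

// Each task does a slice of work and returns true if it wants to be scheduled
// again, i.e. it yields instead of running to completion.
type Task = Box<dyn FnMut() -> bool>;

fn run(mut queue: VecDeque<Task>) {
    while let Some(mut task) = queue.pop_front() {
        if task() {
            queue.push_back(task); // cooperative: the task gave up the CPU voluntarily
        }
    }
}

fn main() {
    let mut queue: VecDeque<Task> = VecDeque::new();
    for id in 0..2 {
        let mut remaining = 3;
        queue.push_back(Box::new(move || {
            println!("task {id}: {remaining} steps left");
            remaining -= 1;
            remaining > 0
        }));
    }
    run(queue);
}
```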