Bookmarks
Bloom’s 3 Stages of Talent Development
First, fun and exciting playtime. Then, intense and strenuous skill development. Finally, developing one’s individual style while pushing the boundaries of the field.
Russell’s Paradox and Possible Solutions
The origins of set theory can be traced back to a Bohemian priest, Bernhard Bolzano (1781-1848), who was a professor of religion at the University of Prague.
A note about "The Humane Representation of Thought"
A year and a half ago, on a plane, I wrote An Ill-Advised Personal Note about "Media for Thinking the Unthinkable".
Working memory - Wikipedia
Working memory is a cognitive system with a limited capacity that can hold information temporarily. It is important for reasoning and the guidance of decision-making and behavior.
Working hurts less than procrastinating, we fear the twinge of starting
When you procrastinate, you're probably not procrastinating because of the pain of working. …
Depth-First Procrastination
When subgoals recur infinitely.
"CBLL, Research Projects, Computational and Biological Learning Lab, Courant Institute, NYU"
Yann LeCun's Web pages at NYU
Omens of exceptional talent
Gaiseric…was a man of moderate height and lame in consequence of a fall from his horse. He was a man of deep thought and few words
I’m often asked about the signs of exceptional talent I’ve observed, probably because I spend too much time running around talking to people & observing things, instead of doing anything useful.
Patrick Collison, Sam Altman, and Tyler Cowen are the three names that come to mind when thinking about this question. Of my writing, Intelligence killed …
An Introduction to Current Theories of Consciousness
(Crosspost from my blog) • • There are a few academic lists of theories of consciousness (Doerig 2020), as well as some good blog-post series about specific ideas (shout out to SelfAwarePatterns), but…
Being the (Pareto) Best in the World
John Wentworth argues that becoming one of the best in the world at *one* specific skill is hard, but it's not as hard to become the best in the worl…
Noghartt's garden
A place where I put all my thoughts
Unpacking Intuition
Can intuition be taught? The way in which faces are recognized, the structure of natural classes, and the architecture of intuition may all be instances of the same process. The conjecture that intuition is a species of recognition memory implies ...
Conscious exotica
From algorithms to aliens, could humans ever understand minds that are radically unlike our own?
The Perfect Plan
Too often do we obsess over the perfect plan to chase our dreams, resulting in analysis paralysis. Instead of being stuck in this limbo, I've made the perfect plan for anyone to chase their dreams.
The Magic of Sampling, and its Limitations (posted February 4, 2023)
Sampling can accurately estimate the percentage of items with a specific trait. The accuracy of the estimate depends chiefly on the number of samples taken, and for the estimate to be unbiased, every item must have an equal chance of being selected.
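The mechanics can be sketched in a few lines (a toy illustration of the idea, not code from the post; the population and trait are made up):

```python
import random

def estimate_fraction(items, has_trait, n_samples, seed=0):
    """Estimate the fraction of items with a trait via uniform random sampling."""
    rng = random.Random(seed)
    sample = [rng.choice(items) for _ in range(n_samples)]
    return sum(has_trait(x) for x in sample) / n_samples

# Toy population of 100,000 items, 30% of which carry the trait.
population = [i % 10 < 3 for i in range(100_000)]
print(estimate_fraction(population, lambda x: x, 1_000))  # close to 0.3
```

With 1,000 samples the standard error is about 1.4 percentage points regardless of population size, which is the post's central point: sample count, not population size, drives accuracy.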
Chess-GPT's Internal World Model
The blog post discusses how a GPT model trained on chess games learns to predict moves and track the board state without being explicitly given the rules. It successfully classified chess pieces with high accuracy and estimated player skill levels based on game moves. The findings suggest that models trained on strategic games can effectively learn complex tasks through pattern recognition.
Teach Yourself Programming in Ten Years
The text discusses the misconception of learning programming quickly, arguing that true expertise takes about ten years of dedicated practice. It highlights the importance of hands-on experience, interacting with other programmers, and working on varied projects: mastery comes from practical application, learning from others, and continuous practice over time.
Class Warfare
The text discusses a woman's conversation about company politics and self-interest, highlighting a zero-sum mentality within organizations. It emphasizes the need to shift away from this mindset and focus on creating value instead. The author suggests that combating this mentality starts with internal change and encourages individuals to reject zero-sum thinking for long-term benefit.
How I got better at debugging
Julia Evans shares her journey of improving her debugging skills through logical thinking, confidence, expanding knowledge, communication, and using tools like strace and tcpdump. By being systematic, confident, knowledgeable, and open to collaboration, she transformed debugging from a challenging task to an exciting learning opportunity. Her story emphasizes the importance of persistence, curiosity, and practical problem-solving in mastering the art of debugging.
Death Note: L, Anonymity & Eluding Entropy
The text discusses Light's mistakes in using the Death Note and how they led to his de-anonymization by L. Light's errors, such as revealing his precise killing methods and using confidential police information, significantly reduced his anonymity. The text also explores strategies Light could have employed to better protect his anonymity while using the Death Note.
Copying Better: How To Acquire The Tacit Knowledge of Experts
The text discusses how to acquire expert intuition, known as tacit knowledge, through emulation and apprenticeship. Naturalistic Decision Making (NDM) research helps extract and teach expert judgment using methods like Cognitive Task Analysis and the recognition-primed decision making model. Experts rely on implicit memory and pattern recognition to make rapid assessments and decisions, which can be challenging to verbalize.
Speech-to-text models
Speech-to-text AI enhances communication and accessibility by transcribing spoken words into text accurately and efficiently. Machine learning and AI advancements have significantly improved the accuracy and adaptability of speech-to-text systems. These technologies open up new possibilities for inclusive and effective communication across various industries.
BSTJ 57: 6. July-August 1978: The UNIX Time-Sharing System. (Ritchie, D.M.; Thompson, K.)
The UNIX Time-Sharing System is a versatile operating system with unique features. It runs on Digital Equipment Corporation computers and emphasizes simplicity and ease of use. UNIX has been widely adopted for research, education, and document preparation purposes.
king - man + woman is queen; but why?
The text explains how the word2vec algorithm transforms words into vectors for analyzing similarities and relationships between words. By using vector arithmetic, it can find analogies such as "king - man + woman = queen." Understanding word co-occurrences can provide insight into the meaning of words through the distributional hypothesis.
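The analogy arithmetic can be sketched with toy vectors (hand-picked hypothetical embeddings, not real word2vec weights):

```python
import math

# Toy 3-d "embeddings" chosen by hand to make the analogy work.
vecs = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.1, 0.8],
    "man":    [0.1, 0.9, 0.1],
    "woman":  [0.1, 0.1, 0.9],
    "prince": [0.8, 0.85, 0.2],
    "apple":  [0.0, 0.2, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def analogy(a, b, c):
    """Return the word closest (by cosine) to vec(a) - vec(b) + vec(c)."""
    target = [x - y + z for x, y, z in zip(vecs[a], vecs[b], vecs[c])]
    # Exclude the query words themselves, as word2vec tooling typically does.
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vecs[w], target))

print(analogy("king", "man", "woman"))  # -> queen
```

The same nearest-neighbor-of-a-vector-sum query is what the article unpacks; real word2vec just learns the vectors from co-occurrence statistics instead of having them hand-picked.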
Human Knowledge Compression Contest
The Human Knowledge Compression Contest measures intelligence through data compression ratios. Better compression leads to better prediction and understanding, showcasing a link between compression and artificial intelligence. The contest aims to raise awareness of the relationship between compression and intelligence, encouraging the development of improved compressors.
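The compression-prediction link can be illustrated with a quick sketch (my example using Python's zlib, not part of the contest itself): data whose regularities a compressor can "predict" shrinks, while unpredictable noise does not.

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size over original size; lower means more predictable."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"the cat sat on the mat. " * 1000
noise = os.urandom(len(structured))

# Repetitive text compresses to a tiny fraction of its size;
# incompressible random bytes do not shrink at all.
print(ratio(structured) < ratio(noise))  # -> True
```

The contest pushes this idea to its limit: compressing Wikipedia well requires modeling the knowledge in it, which is why the organizers treat compression ratio as a proxy for intelligence.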
Generative Agents: Interactive Simulacra of Human Behavior
The content discusses generative agents that simulate believable human behavior for interactive applications. These agents populate a sandbox environment, interact with each other, plan their days, form relationships, and exhibit emergent social behaviors. The paper introduces a novel architecture that allows agents to remember, retrieve, reflect, and interact dynamically.
Consciousness, Cognition and the Neuronal Cytoskeleton – A New Paradigm Needed in Neuroscience
Viewing the brain as a complex computer of simple neurons is insufficient to explain consciousness and cognition. A new paradigm is needed that considers the brain as a scale-invariant hierarchy, with quantum and classical processes occurring in cytoskeletal microtubules inside neurons. Evidence shows that microtubules regulate specific firings of axonal branches and modulate membrane and synaptic activities. This new paradigm suggests that information processing for cognitive and conscious brain functions occurs in microtubules and involves both top-down and bottom-up regulation within the brain hierarchy. The precise mechanisms of consciousness may reveal themselves most readily in Layer V cortical pyramidal neurons, which contain large networks of mixed-polarity microtubules.
Bekenstein bound
The Bekenstein bound is an upper limit on the entropy or information that can be contained within a given finite region of space with a finite amount of energy. It implies that the information of a physical system must be finite if the region of space and energy are finite. The bound was derived from arguments involving black holes and has implications for thermodynamics and general relativity. It can be proven in the framework of quantum field theory and has applications in various fields, such as black hole thermodynamics and the study of human brains.
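For reference, the bound in its standard form (standard physics, not quoted from this summary; here $S$ is entropy, $k$ Boltzmann's constant, $R$ the radius of a sphere enclosing the system, and $E$ its total energy including rest mass):

```latex
S \le \frac{2 \pi k R E}{\hbar c}
% Equivalently, in bits, using E = mc^2:
%   I \le \frac{2 \pi R E}{\hbar c \ln 2} \approx 2.577 \times 10^{43} \, m R \ \text{bits},
% with m in kilograms and R in meters.
```

Plugging in a human brain's mass and size gives a finite (if astronomically large) cap on the information it can hold, which is the "study of human brains" application the summary mentions.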
Memory in Plain Sight: A Survey of the Uncanny Resemblances between Diffusion Models and Associative Memories
Diffusion Models and Associative Memories show surprising similarities in their mathematical underpinnings and goals, bridging traditional and modern AI research. This connection highlights the convergence of AI models towards memory-focused paradigms, emphasizing the importance of understanding Associative Memories in the field of computation. By exploring these parallels, researchers aim to enhance our comprehension of how models like Diffusion Models and Transformers operate in Deep Learning applications.
Memory in Plain Sight: A Survey of the Uncanny Resemblances between Diffusion Models and Associative Memories
Diffusion Models (DMs) have become increasingly popular generative models, setting the state of the art on many generation benchmarks, but their mathematical descriptions can be complex. In this survey, the authors provide an overview of DMs from the perspective of dynamical systems and Ordinary Differential Equations (ODEs), revealing a mathematical connection to Associative Memories (AMs). AMs are energy-based models that share similarities with denoising DMs, but they allow for the computation of a Lyapunov energy function and gradient descent to denoise data. The authors also summarize the 40-year history of energy-based AMs, starting with the Hopfield Network, and discuss future research directions for both AMs and DMs.
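The Hopfield-style denoising the survey refers to can be sketched minimally (a classical Hopfield network with Hebbian weights; the two stored patterns are my own toy choices):

```python
import numpy as np

# Two 6-unit bipolar patterns to store in the associative memory.
patterns = np.array([
    [1,  1,  1, -1, -1, -1],
    [1, -1,  1, -1,  1, -1],
])

# Hebbian weights; zero diagonal so no unit reinforces itself.
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0)

def energy(x):
    """Lyapunov energy: non-increasing under asynchronous updates."""
    return -0.5 * x @ W @ x

def recall(x, sweeps=5):
    """Denoise x by asynchronous sign updates, descending the energy."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

noisy = np.array([1, 1, -1, -1, -1, -1])  # first pattern, one bit flipped
print(recall(noisy))  # recovers the first stored pattern
```

The point of the survey is that this "corrupted input rolls downhill in an energy landscape to a clean memory" picture and a diffusion model's iterative denoising are mathematically close cousins.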
A New Physics Theory of Life | Quanta Magazine
According to physicist Jeremy England, the origin and evolution of life can be explained by the fundamental laws of nature. He proposes that living things are better at capturing and dissipating energy from their environment than inanimate objects, and has derived a mathematical formula based on established physics that explains this capacity. His theory, which he argues underlies rather than replaces Darwin's theory of evolution, has sparked controversy among his colleagues: some see it as a potential breakthrough, others find it speculative. The idea rests on the second law of thermodynamics and the process of dissipating energy; he argues that self-replication and structural organization are mechanisms by which systems increase their ability to dissipate energy. The theory may have implications for understanding the formation of patterned structures in nature.
K-Level Reasoning with Large Language Models
Large Language Models (LLMs) have shown proficiency in complex reasoning tasks, but their performance in dynamic and competitive scenarios remains unexplored. To address this, researchers have introduced two game theory-based challenges that mirror real-world decision-making. Existing reasoning methods tend to struggle in dynamic settings that require k-level thinking, so the researchers propose a novel approach called "K-Level Reasoning" that improves prediction accuracy and informs strategic decision-making. This research sets a benchmark for dynamic reasoning assessment and enhances the proficiency of LLMs in dynamic contexts.
Measuring Faithfulness in Chain-of-Thought Reasoning
Large language models (LLMs) are more effective when they engage in step-by-step "Chain-of-Thought" (CoT) reasoning, but it is unclear if this reasoning is a faithful explanation of the model's actual process. The study examines how interventions on the CoT affect model predictions, finding that models vary in how strongly they rely on the CoT. The performance boost from CoT does not solely come from added test-time compute or specific phrasing. As models become larger and more capable, they tend to produce less faithful reasoning. The results suggest that faithful CoT reasoning depends on carefully chosen circumstances such as model size and task.
Thinking in Systems: International Bestseller: Donella H. Meadows, Diana Wright: 9781603580557: Amazon.com: Books
"Thinking in Systems" is a book that explores the concept of systems thinking, which involves analyzing the interconnectedness and dynamics of various systems. The book uses examples such as the human body, businesses, and societal systems to illustrate how stocks and flows contribute to achieving system goals. It also highlights the importance of aligning stated goals with actual outcomes and discusses the need for change in systems that are not functioning optimally. The book emphasizes the complexity of systems and the challenges of making meaningful improvements.
Subcategories
- applications (15)
- computer_architecture (1)
- ethics (1)
- expert_systems (2)
- game_ai (5)
- knowledge_representation (4)
- machine_learning (324)
- natural_language_processing (3)
- planning_and_scheduling (2)
- robotics (2)
- software_development (1)
- theory (1)