Ideas Worth Exploring: 2025-03-28
Charles Ray

Mar 27 · 6 min read · Updated: Mar 29
Ideas: Michael Lynch - How to Write Blog Posts that Developers Read

Michael Lynch examines the common mistakes bloggers make when writing about software development and how to avoid them to grow readership. Drawing on his own experience as a successful blogger, he offers tips on getting to the point, thinking one degree bigger, planning the route to readers, using images, and accommodating skimmers.
He stresses that internet attention spans are short: make your point quickly by answering the reader's two immediate questions, am I the intended audience, and what will I get out of reading this? Clear headings and visual elements keep skimmers engaged.
He also advises thinking beyond one narrow audience, since broadening an article's scope can increase its potential reach, and choosing topics with a realistic path to readers improves the odds of success.
Ideas: Alex Danco - Scarcity and Abundance in 2025
Alex Danco examines the current state of AI and software businesses, drawing on two series of essays he wrote in 2016 and 2017, "Emergent Layers" and "Understanding Abundance." Those essays describe how technological advances drive cost reductions in critical inputs, leading to new S-curves and disruptive innovations.

Alex Danco suggests that we are now in the Agent S-curve, with three themes likely to shape the next few years:
1. The affirmation of "light versus heavy businesses," focusing on the profit potential of software businesses and their ability to deliver tangible value to customers.
2. The eclipse of "Code as Capital" by a new metaphor closer to "Code as Labour." This shift reflects the growing importance of writing code that generates revenue now, rather than building capital-intensive software assets.
3. Agents executing work outside of firms being more interesting than those within firms. Danco sees the most promising opportunities for AI agents in areas where transactions are already low-trust and high-agency, such as commerce, trading, payments, insurance, and capital markets. Cheap blockspace and public state environments let AI agents act as independent, stateless actors that can "just do things."
The article also discusses the current environment of abundance in software and AI: real value is being created for customers, but the durability of the businesses capturing that value remains uncertain because switching costs are low. Startups are earning more revenue than ever as they write more code aimed at generating revenue now rather than building capital-intensive software assets, a shift Danco frames as peak "Code as Capital."
Finally, the article touches on Ronald Coase's theory of the firm and suggests that combinations of AI and human work could be vulnerable to the O-ring problem. Danco argues that the most interesting new kinds of work are at the periphery, in areas the economy has already selected for low-trust, high-agency transactions, such as commerce, trading, payments, insurance, and capital markets, where AI agents are a natural fit.
Ideas: Will Douglas Heaven - Anthropic can now track the bizarre inner workings of a large language model

Anthropic, an AI firm, has made significant strides in understanding large language models (LLMs) by developing a method called circuit tracing. This technique allows researchers to track the decision-making processes inside an LLM, revealing unexpected workarounds and structures within these models. The insights gained from this research expose the strengths, weaknesses, and limitations of LLMs, helping to resolve disputes about their capabilities and trustworthiness.
Circuit tracing reveals that LLMs use chains of components, or circuits, to carry out tasks. These components correspond to real-world concepts such as specific individuals or objects, but also more abstract ideas like smallness or conflict between individuals. Anthropic's research builds on previous work by identifying connections between individual components and understanding how they are activated during different tasks.
One surprising finding is that LLMs use components independent of any language to answer questions or solve problems and then choose a specific language for the reply. This suggests that large language models can learn concepts in one language and apply them across multiple languages. Additionally, when solving simple math problems, LLMs appear to develop their own internal strategies that differ from those seen in their training data.
Anthropic's research also sheds light on why LLMs sometimes make things up, or hallucinate. It was found that post-training has made LLMs less prone to hallucination, but they can still hallucinate when certain components override a default "don't speculate" component. This tendency seems particularly strong for well-known individuals or entities.
Overall, the research conducted by Anthropic offers new insights into how LLMs work and opens up possibilities for designing and training better models in the future. However, there are still limitations to this approach, as it only provides a partial understanding of the structures within these models, and it is a time-consuming process for human researchers to trace responses.
Ideas: Philip Winston - Five Things AI Will Not Change

The article discusses five things that won't change even after the development of powerful AI:
1. There will be many AIs - just as Amazon is one large company among many, numerous AI companies, models, and running instances will exist, creating a diverse ecosystem.
2. There will be malicious AIs - people will intentionally create unaligned or malicious AIs that exhibit harmful behaviors.
3. Abundance won't be evenly distributed - money will still be needed for real estate, goods, luxury items, and services in both the physical and virtual world.
4. Politics will remain divided - despite AI's potential to add intellectual heft, political disagreements will continue due to deeply held values and conflicts.
5. We won't be ants to the AI - AIs will have a deeper understanding of human culture, language, and knowledge, forming close relationships with humans rather than being indifferent or ignoring them.
Philip Winston argues that we should not pause AI development, because ongoing human challenges such as disease, poverty, hunger, and oppression need immediate attention. The future, he expects, will be messy but also beautiful.
GitHub Repos: xorq: Multi-engine ML pipelines made simple

xorq is a deferred computational framework that brings the replicability and performance of declarative pipelines to the Python ML ecosystem.
It lets you write pandas-style transformations that never run out of memory, automatically caches intermediate results, and moves seamlessly between SQL engines and Python UDFs, all while maintaining replicability. xorq is built on top of Ibis and DataFusion.
xorq works both as an interactive library for building expressions and as a command-line interface; this dual nature enables a seamless transition from exploratory research to production-ready artifacts.
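The core idea behind a deferred framework like this, building up an expression tree instead of executing eagerly and caching intermediates by a key derived from the expression, can be sketched in plain Python. This is a conceptual illustration only; the class and method names below are invented and are not xorq's actual API:

```python
import hashlib

_CACHE = {}  # expression-key -> materialized result

class Expr:
    """A deferred expression: records operations, runs nothing until execute()."""

    def __init__(self, op, parent=None, payload=None):
        self.op, self.parent, self.payload = op, parent, payload

    def filter(self, pred):
        return Expr("filter", self, pred)

    def map(self, fn):
        return Expr("map", self, fn)

    def key(self):
        # Simplified cache key from the expression tree; a real system
        # would hash a stable serialization of the plan, not repr().
        parent_key = self.parent.key() if self.parent else ""
        raw = (parent_key + self.op + repr(self.payload)).encode()
        return hashlib.sha256(raw).hexdigest()

    def execute(self):
        k = self.key()
        if k in _CACHE:              # reuse a cached intermediate result
            return _CACHE[k]
        if self.op == "source":
            result = list(self.payload)
        elif self.op == "filter":
            result = [x for x in self.parent.execute() if self.payload(x)]
        elif self.op == "map":
            result = [self.payload(x) for x in self.parent.execute()]
        _CACHE[k] = result
        return result

def source(rows):
    return Expr("source", payload=rows)
```

Because nothing runs until `execute()`, the same expression can in principle be handed to different backends (a SQL engine, DataFusion, or local Python), which is the property xorq exploits.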
Ideas: Scott Smitelli - Take This On-Call Rotation and Shove It

Scott Smitelli discusses the challenges faced by on-call engineers, who handle technical problems that arise outside normal business hours. On-call engineers typically work in a rotation, assigned to cover a set period, usually a week at a time, during which they are expected to respond to any incidents affecting the company's systems.
On-call responsibilities vary greatly between organizations, but in many cases they are not compensated separately from the engineer's usual salary, even though the role can involve long hours and stressful situations well outside the normal working day.
Scott Smitelli's post is a deep dive into the world of the on-call engineer, well worth reading and considering at senior management level.
Ideas: AI x Crypto thesis 2025

The article discusses a vision for the future where anyone can invest directly in AI model inference profits, facilitated by cryptocurrency and blockchain technology. This is achieved through the tokenization of AI models and their execution via smart contracts.
The commoditized cognition market that emerges would have unique market mechanisms such as profit distribution to token holders, revenue sharing with fine-tunes and other derivatives, integration of AI models in trading and lending markets, and pricing and insurance for unpredictable inference-time compute.
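As a toy illustration of one such mechanism, pro-rata profit distribution to token holders could look like the following. This is a hypothetical sketch, not any specific protocol's contract logic; the function name and structure are invented for illustration:

```python
def distribute_profits(profit, holdings):
    """Split inference profit pro rata across token holders.

    profit:   total amount to distribute, in the smallest currency unit
    holdings: mapping of holder address -> token balance
    """
    total_supply = sum(holdings.values())
    if total_supply == 0:
        raise ValueError("no tokens outstanding")
    # Integer floor division mirrors on-chain arithmetic, which avoids
    # floats; any dust left by rounding would remain in the contract.
    return {
        holder: profit * balance // total_supply
        for holder, balance in holdings.items()
    }
```

A real smart-contract implementation would also have to handle transfers between distribution events, reentrancy, and gas costs, which is where most of the actual design work lies.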
The article also discusses trends in the AI industry that will shape this commoditized cognition market. It mentions that pre-training has reached a critical plateau, and the next frontier is in reasoning models. Additionally, open-source AI is becoming more prevalent, with companies publishing GPT-4o level open-weights models. Fine-tuning will focus on specialized models for fields like law, medicine, and user-specific needs.


