Visible Dimensions
A five-part book about explicit indices in tensor computation.
This is not a complete manual for Einlang. It is a small book with one question:
Can the compiler really see what the tensor program means?
Most tensor code carries meaning in places the compiler cannot fully inspect: axis positions, reshape chains, broadcasting conventions, loops in the host language, and gradient engines that reconstruct structure after execution has already happened. Einlang begins with a narrower bet. If dimensions are named explicitly, and if tensor programs are written as relationships among those names, then some hidden structure becomes visible at the source level.
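Einlang source is not shown in this front matter, but the contrast it bets on can be sketched in ordinary NumPy, where `einsum` strings make index names explicit. The array values here are illustrative; only the notational difference matters.

```python
import numpy as np

A = np.ones((2, 3))
B = np.ones((3, 4))

# Positional style: the contracted axis is implicit in argument order
# and axis position; the compiler-visible source says nothing about it.
C_pos = A @ B

# Named-index style: the relationship among axes is written down.
# "ij,jk->ik" says: j is shared and summed away; i and k survive.
C_named = np.einsum("ij,jk->ik", A, B)

assert np.allclose(C_pos, C_named)
```

The two lines compute the same values; the second one carries its coordinate relation in the source text, which is the kind of visibility the book is about.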
The point is not to build a new palace around tensor computation. The point is to sharpen one small knife: explicit indices as a way to remove semantic shadow from tensor programs.
The five parts form a path rather than a language manual. Each part begins with a puzzle, develops a small model, and then studies a standard-library specimen. The snippets use only a tiny core:
- let bindings;
- named indices;
- sum;
- derivative requests written as @y / @x;
- recurrence over an index such as t.
Some snippets are executable Einlang; others use mathematical notation to keep the coordinate relation in focus. The aim is not to catalog every interface. The aim is to ask what becomes visible when a tensor program stops hiding its dimensions.
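Three of the core pieces have close NumPy analogues; the derivative request @y / @x does not, so it is omitted here. The names (W, x, h, u) and values below are illustrative, not taken from the book's examples.

```python
import numpy as np

# Named indices and sum: y[i] = sum_j W[i, j] * x[j]
W = np.arange(6.0).reshape(2, 3)
x = np.array([1.0, 2.0, 3.0])
y = np.einsum("ij,j->i", W, x)

# Recurrence over an index such as t: h[t] = a * h[t-1] + u[t]
a = 0.5
u = np.array([1.0, 1.0, 1.0])
h = np.zeros(3)
h[0] = u[0]
for t in range(1, 3):
    h[t] = a * h[t - 1] + u[t]
```

A binding like `y` introduces a result whose surviving coordinate is i; the coordinate j exists only inside the sum, and t orders the recurrence.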
The standard-library excerpts are compact examples that reveal the model.
The examples are read in two layers. First comes the structure: which binding is introduced, which coordinates survive, which coordinates are local, and what dependency relation remains visible. Then comes the handhold: a concrete coordinate to test, a likely mistake to catch, or a phrase that lets the idea stay in memory.
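The two layers can be shown on one contraction. The structure layer names the coordinates and sorts them into surviving and local; the handhold layer checks a single concrete coordinate by hand. The matrices here are made up for the check.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Structure layer: C[i, k] = sum_j A[i, j] * B[j, k]
# i and k survive into C; j is local, consumed by the sum.
C = np.einsum("ij,jk->ik", A, B)

# Handhold layer: test one concrete coordinate.
# C[0, 1] = A[0, 0]*B[0, 1] + A[0, 1]*B[1, 1] = 1*6 + 2*8 = 22
assert C[0, 1] == 22.0
```

The likely mistake this catches is swapping the roles of j and k: writing "ij,kj->ik" contracts the wrong axis of B, and the hand-checked coordinate no longer matches.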
Line of Argument
The argument follows one ascent:
blindness → roles → maps → missing coordinates → consumed coordinates → multi-role coordinates → gradients → pullbacks → local derivative shape → time → storage → RNN dependency → framework question → attention → notation bargain
Each step keeps the same discipline: name the coordinates, ask which ones survive, ask which ones are local, and notice what becomes checkable once the relationship is written down.
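Part III applies this discipline to the pullback through matrix multiplication. As a standard-calculus preview in NumPy (not Einlang), note how the roles rotate: in the forward pass j is consumed, while in the pullback to A it is k that is consumed and j that must reappear to match A's surviving coordinates. The array shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
dC = rng.standard_normal((2, 4))  # upstream gradient dL/dC

# Forward:  C[i, k] = sum_j A[i, j] * B[j, k]   -- j is consumed.
# Pullback: dA[i, j] = sum_k dC[i, k] * B[j, k] -- k is consumed;
# j survives because it must line up with A's own coordinates.
dA = np.einsum("ik,jk->ij", dC, B)

assert dA.shape == A.shape
```

Written this way, the pullback is checkable from the source: the surviving coordinates of dA are forced to be exactly those of A, which positional code leaves implicit in a transpose.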
Contents
Front Matter
Part I: Visible Axes
- 1. What Can the Compiler Not See?
- 2. Axis Roles Are Not Axis Positions
- 3. Coordinate Maps in the Standard Library
Part II: Missing and Consumed Coordinates
Part III: Gradients as Structure
- 7. What Is a Gradient?
- 8. Matrix Multiplication Teaches the Pullback
- 9. Local Derivatives, Global Shape
Part IV: Time as an Axis
Part V: A Larger Visible-Dimension World
- 13. If Dimensions Had Names Everywhere
- 14. Attention as Named Communication
- 15. What the Notation Refuses to Hide
Back Matter
Reading Promise
There are no exercises. There is no attempt to cover every feature of the implementation. Each part begins with a familiar fragment of tensor code, asks what information is missing, and then follows the idea into smaller examples and standard-library specimens.
The intended aftertaste is simple: after you have named a dimension once, it is harder to pretend that axes are merely numbers.