
Where "Shift Left" Fails: Type Theater

Jacob Elder — 15 minute read — January 31, 2025

  1. Type Theater: Incoming Data
    1. Solution: Decoders
  2. Type Theater: The Database
    1. Aside: Please Just Learn SQL Already
    2. Solution: Compile-Time SQL Clients
  3. Wrapping up

I have been building web apps for long enough to remember when it was commonplace to SSH (or Telnet!) into the server and just vim index.php. As you can imagine, it was pretty easy to bring the site down like that, so we started doing what is now called Continuous Integration and Continuous Deployment: automated tests as a precondition for deployment.

Later, we adopted static analysis tools and code linters. This helped, but only so much could be done with the dynamically typed languages which were popular in the early internet.

As a self-taught programmer with no college degree, I started with dynamic languages: Perl, JavaScript, PHP, Ruby. I viewed the statically typed languages (and the programmers who could wield them) with reverence and awe. C, C++, Java, these were “real” languages that required big brains, and I should stay in my dynamically typed playpen. But as I matured, my understanding shifted: dynamic languages require hubris, require belief that you can keep an entire complicated codebase in your head with perfect recall. Statically typed languages, it turns out, invite humility.

With a static type system, each step in a program’s lifecycle can be a little contract with itself. Break that contract and the language will tell you exactly how and where. These languages can provide immediate feedback to the developer about a program’s correctness before it is even run. Bridging the gap between all of the existing dynamic code (and the developers who wrote it) and the statically typed utopia are so-called gradual type systems such as TypeScript.
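
A trivial illustration of that contract in TypeScript:

function area(width: number, height: number): number {
  return width * height
}

// The compiler rejects this call before the program ever runs:
// Argument of type 'string' is not assignable to parameter of type 'number'.
area("3", 4)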

Smarter people than I have explored the meanings of “strong typing” and “strict typing” and “soundness,” and I won’t repeat or try to improve on any of that in this article. What interests me is the ability to “shift left” by using types, and where gradual typing can get in the way.

IDE or Editor → Linter → Compiler (if any) → Automated Tests → Code Review/Pull Requests → Staging (if any) → Production

Where in this diagram would you rather discover a bug in your code? The answer, invariably, is as close to the left as possible. Ideally, in your IDE before you even save the file.

That’s what type safety can provide. With the right software stack, you can reason about the whole behavior of the system without leaving your IDE. Feedback is nearly instantaneous. Ideally, that type safety should extend all the way from the UI to the persistence layer.

Which brings us to “type theater.” I coined this phrase as a spin on Bruce Schneier’s “Security Theater”:

The RFID bracelets are what I’ve come to call security theater: security primarily designed to make you feel more secure.

Similarly, type theater is merely the feeling of a type system, without the assurances, safety, and productivity thereof.

In gradual type systems like TypeScript for JavaScript, Sorbet for Ruby, or Pyright for Python, even if your own code starts well-typed, the libraries you depend on often don’t. Take, for example, the sprawling DefinitelyTyped project, which has type definitions for ~9,000 popular JavaScript modules, all written by third parties. Any similarity between these types and the reality of the libraries they represent is purely coincidental.

But sometimes that tradeoff is worth it, like when the only library for the job is written in the dynamically typed variant of your language, or when maintaining a legacy codebase. Schneier goes on to say,

I’ve regularly maligned security theater as a waste, but it’s not always, and not entirely, so.

And here the analogy continues to work: TypeScript as an alternative to JavaScript is almost universally preferred. I like to use a meteorological analogy here: if JavaScript is “wintry mix,” then TypeScript lowers the temperature just enough to build a snowman.

But I’m not here to just throw shade on the any keyword. Most TypeScript projects I’ve seen have two major areas of type theater: in the handling of incoming data, and at the persistence layer.

Type Theater: Incoming Data

It’s rather common to see a Lambda handler or REST endpoint defining its input roughly like this:

type Input = {
  id: string // UUID
}

const input: Input = JSON.parse(args.input)

Here we’re asserting that the input definitely has an id property, which is a string we can assume represents a UUID, and it has no other properties besides id. Our code can go on to refer to input.id and treat it like a string.

Now let’s imagine a slightly different input has been received.

{
  "id": 1
}

Our assumptions broken, our code has undefined behavior at runtime. The result could range from difficult-to-debug error messages to negative downstream effects.
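
To see why the compiler couldn’t catch this: JSON.parse returns any, so the annotation on input is an unchecked assertion, not a validation. A minimal reproduction:

type Input = {
  id: string // UUID
}

// JSON.parse returns `any`, so this assignment type-checks unconditionally
const input: Input = JSON.parse('{"id": 1}')

// The compiler believes id is a string; at runtime it's the number 1
input.id.toUpperCase() // TypeError: input.id.toUpperCase is not a function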

For example, did you know that in this scenario, if that id were passed to a where clause, MySQL would implicitly cast the string column to a number for the comparison, and match the first row whose id merely begins with 1? It’s true.

Solution: Decoders

The right way to do this is a technique called a decoder. Rather than insisting that JSON.parse(input) is of type Input, we should rigorously check the shape of input and only continue if it exactly matches our expected schema.

import { z } from "zod";

export const inputSchema = z.object({
  id: z.string().uuid()
});

// We can still have an Input type by inferring it from the schema
export type Input = z.infer<typeof inputSchema>;

// Ok!
inputSchema.parse({ id: "f47ac10b-58cc-4372-a567-0e02b2c3d479" })

// Each of these throws with a detailed error message
inputSchema.parse({ id: "🪴" })
inputSchema.parse({ id: 1 })
inputSchema.parse([{ id: "f47ac10b-58cc-4372-a567-0e02b2c3d479" }])
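
Wired into the earlier handler, the decoder replaces the unchecked cast. A sketch, with an illustrative handler shape and module path:

import { inputSchema, type Input } from "./input-schema";

export const handler = (rawBody: string) => {
  // safeParse returns a discriminated union instead of throwing
  const result = inputSchema.safeParse(JSON.parse(rawBody));
  if (!result.success) {
    return { statusCode: 400, body: result.error.message };
  }
  // Past this point, result.data genuinely is an Input
  const input: Input = result.data;
  return { statusCode: 200, body: input.id };
};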

The state of the art for this is surprisingly ergonomic, with good options for most languages.

Language Project
TypeScript Zod
TypeScript SchemaWax
TypeScript Effect.Schema
TypeScript ajv
Rust Serde
Kotlin kotlinx.serialization
Go encoding/json and github.com/go-playground/validator
Haskell aeson
Ruby sorbet-coerce
Python Pydantic

If an app accepts input from the outside world, and isn’t using anything in the above table, it’s probably performing type theater.

Type Theater: The Database

The other major venue for type theater is the persistence layer. Recently there has been an explosion in the popularity of tools that offer this flow:

Application Schema as Source of Truth → Introspection → Generated Migrations → Database Schema
Application Schema as Source of Truth → Application’s Idea of Database Schema → Type-safe Queries → Database Schema

Prisma and Drizzle Kit both fit this description. It sounds like a great experience, doesn’t it? No more remembering alter table syntax or learning some cumbersome new DSL.

However, a migration tool which derives its migrations from changes to application-level state files (e.g. schema.prisma, or Drizzle’s schema object) will necessarily have to provide an escape hatch: editing the raw SQL DDL (Data Definition Language) files directly.

The application schema generally doesn’t support more than the lowest common denominator of database features: a few primitive types, foreign keys, secondary and unique indexes. Take schema.prisma for example. Basic stored views are available as a preview feature, but not materialized views. Advanced index options, constraints, triggers, functions, stored procedures, and anything introduced by an extension (like types and operators from pgvector) are not supported and will require you to modify the migrations generated by Prisma.

Many common migration steps will block reads or writes in PostgreSQL: creating an index without concurrently blocks writes for the duration of the build, changing a column’s type takes an access exclusive lock while the table is rewritten, and adding a not null constraint forces a scan of the entire table.

Large or busy tables, which you won’t have in dev, make these issues more noticeable. However, the migration generation tool doesn’t know anything about the cardinality or lock contention of the tables it will be run against. Typically it only ever has access to your dev database, not production, and certainly not any runtime metrics. This is fundamentally a very hard problem to solve, requiring migration tools’ upstream developers to have a good understanding of real-world scenarios and an exceptionally complicated testing apparatus. That’s a lot to ask of open-source maintainers, and it would be difficult for anyone else to reliably test for your specific production scenarios.

How well will the migration tool handle the introduction of a not null constraint on a previously nullable column? This sort of problem would be difficult to catch even with a robust staging environment, and typically would be solved by SQL surgery dictated by business logic.

That isn’t speculation, but a war story. At $PREVIOUS_JOB, we added Prisma pretty late in the project, and it generated several migrations which impacted site stability.

Enter Squawk, a linter for Postgres migrations. Adding it to our CI/CD pipeline did catch some problems before they shipped, though not until after the pull request had been created. Squawk does a great job of noticing when a migration could introduce a lock on a potentially busy table, and it does all this without knowing anything about your production traffic or schema; all it needs to see is the .sql files containing the DDL statements that would be run.

The actual experience looks more like this:

Application Schema as Source of Truth → Introspection → Generated Migrations → Manual changes to SQL DDL statements → (changed) Database Schema
Application Schema as Source of Truth → Application’s Idea of Database Schema → assumed Type-safe Queries → ❌ FAIL → Database Schema

The other common kind of migration system requires its users to write the migrations themselves, either by directly composing SQL DDL statements or through a DSL (Domain Specific Language) or query builder. Some of these DSLs are quite clever (notably ActiveRecord’s), allowing the user to simply define the new state of the table while the tool “figures out” the up and down migration scripts. These systems generally don’t involve any reflection or introspection, and correctness is even more up to the user.

The developer experience here strongly prefers atomic migrations, because one is directly authoring a program which generates and executes SQL DDL statements. However, migration atomicity is often at odds with minimizing production impact: create index concurrently, for example, cannot be run inside a transaction. Without an enclosing transaction, any error will require database surgery before the next attempt. At scale, it’s just not safe to run unmodified DDL statements that “worked” in dev.

The features missing from the application schema are also generally missing from the associated ORM or query builder API. If you need to call a stored procedure, use functions or operators, or use any feature provided by an extension, you will have to use another raw SQL escape hatch. And out that escape hatch also goes your type safety.
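
Prisma’s $queryRaw illustrates the point: its type parameter is just another unchecked assertion, exactly like the JSON.parse cast from earlier. A sketch, assuming a hypothetical pgvector-backed items table:

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Prisma has no API for pgvector's `<->` operator, so we drop to raw SQL
async function nearestItems(queryEmbedding: number[]) {
  const vector = `[${queryEmbedding.join(",")}]`; // pgvector text literal
  // The generic below is a promise, not a check: nothing verifies that the
  // query really returns these columns with these types
  return prisma.$queryRaw<{ id: string; distance: number }[]>`
    SELECT id, embedding <-> ${vector}::vector AS distance
    FROM items
    ORDER BY distance
    LIMIT 10
  `;
}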

If type safety starts with those application-level schema definitions, but the escape hatch allows users to make arbitrary changes to the .sql files, or forces such changes because the tool lacks SQL feature parity, then the type safety implied by such a system must be considered “type theater.”

Aside: Please Just Learn SQL Already

SQL has been around for about 50 years. Throughout those five decades it has been, and continues to be, constantly reinvented, adapted, and improved.

My preferred variant, PostgreSQL, has been around since 1996, with the most recent version (packed with new features) coming out just two months ago. There are myriad options for hosting PostgreSQL, each with its own tradeoffs, advantages, and features. The developers of ORMs and query builders proclaim support for Postgres, but whose?

When the NoSQL movement (since retconned as “Not Only SQL”) got developers excited about schema-free document stores, the SQL community responded by introducing JSON support. Most popular databases now support storing JSON objects as a first-class column type, with structural validation and indexes, and we can now write queries which efficiently read and write deep inside those JSON columns. But ORM support for JSON operations remains niche. I’m not aware of any ORM which supports PostgreSQL’s JSON Functions and Operators without falling back to raw SQL.

SQLite is among the most widely deployed software of all time, and probably one of the most common file formats in the world along with HTML, PDF, and JPEG. It is used by literally millions of applications with literally billions and billions of deployments. You are almost certainly reading this sentence in an application which is dependent on libsqlite3.

DuckDB, the new OLAP on the block, has invested in making SQL syntax more friendly for interactive use.

Amidst this explosion of innovation, why are we limiting the solution space of our applications to only the lowest common denominator database features that ORMs deign to expose?

While not evenly distributed, SQL knowledge is highly democratized within our industry. Most developers have at least some exposure to SQL, many are experts, and because of SQL’s pervasiveness, anyone with good SQL comprehension will be valuable to any company. It (literally) pays to learn SQL. (See also: The Unreasonable Effectiveness of SQL.)

As a result of SQL’s proliferation, there is no shortage of example code for LLMs to train on. GPT-4 is extremely good at writing complex SQL. (See also: my own SQL Sage project.)

Sometimes the ORM does try to support the feature you need, but does so incorrectly, and open-source maintainers don’t have the resources to fix it. Case in point: drizzle-orm#3555. Too many times, I’ve watched a developer prototype a complex SQL query, make it correct and performant, and then ask, “cool, now how do I translate this into my ORM’s syntax?” Why do we contort ourselves into using query builders when SQL is actually everywhere?

ORMs and query builders (see also: Stop Using Knex.js) purport to offer simple, safe database access. In practice, they really make you choose between “safe” and “simple.”

Solution: Compile-Time SQL Clients

As it turns out, you can write real, raw SQL without sacrificing type safety at all. In this video clip, I refactor a select * query in my Green Thumb Guide project, which uses SQLx in Rust. Watch how min_temp_f is known to be a 64-bit floating point value, even though that isn’t specified anywhere except the database schema.

Note that the compiler errors are coming directly from SQLite, as are the Rust types. This is VS Code with rust-analyzer.

What’s happening here is that as I write SQL as a string, a compile-time macro executes a version of that query on my local development database. From here, two outcomes are possible:

  1. The query fails. This hooks into your IDE’s language server so your query errors are right next to your type errors.
  2. The query succeeds. The system ignores the response rows, but the metadata includes information about each column that would have come back. This is enough for the compiler to know the type of the result.

At runtime, the query is known to be compatible with the database schema, and any further assertions of the shape of the result (e.g. decoders) are optional. Query parameters are automatically quoted or converted as necessary for that type. This precludes the possibility of SQL injection attacks.
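
In TypeScript, PgTyped offers the same experience: you write the query as a tagged template, and its CLI runs it against your dev database to generate parameter and result types. A sketch, assuming PgTyped’s tagged-template mode (the table, generated type name, and module path are illustrative):

import { Client } from "pg";
import { sql } from "@pgtyped/runtime";
// Generated by the pgtyped CLI after checking the query against the dev database
import type { IFindPlantQuery } from "./plants.queries.types";

export const findPlant = sql<IFindPlantQuery>`
  SELECT id, min_temp_f FROM plants WHERE id = $plantId
`;

// Fully typed: the compiler knows min_temp_f is a number because the
// database said so, not because we asserted it
const client = new Client();
await client.connect();
const rows = await findPlant.run({ plantId: "123" }, client);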

This workflow is now possible in most popular statically typed programming languages:

Honorable mention: ZIO Quill for Scala comes close, but raw SQL still has to go through a DSL and a subquery.

Language Supported Databases Project
Rust PostgreSQL, MySQL, MariaDB, SQLite SQLx
TypeScript PostgreSQL, MySQL sqlx-ts
TypeScript PostgreSQL PgTyped
TypeScript PostgreSQL, MySQL, SQLite Prisma TypedSQL (based on SQLx, in preview)
Go PostgreSQL, MySQL, SQLite sqlc
Haskell PostgreSQL postgresql-typed
Haskell PostgreSQL hasql-th
Any JVM language PostgreSQL, HSQL, MySQL, SQLite, Oracle, DB2, CockroachDB SQLDelight
Java PostgreSQL, MySQL, SQLite, Oracle, DB2, DuckDB, etc. (any Type 4 JDBC driver) Manifold SQL

If an application uses raw SQL, and isn’t using anything in the above table, it’s probably performing type theater.

With this setup, the only thing that matters in development, as in production, is the truth of the database schema. A useful side effect here is that your code won’t compile in dev if you forget to run a migration. In this way, the choice of migration system is irrelevant (though SQLx does provide one). This makes incremental or ad hoc adoption possible.

Wrapping up

Strong typing provides real assurances and early error detection, increasing developer productivity and trust in the codebase. By contrast, “type theater” gives the appearance of safety without rigorous validation or guarantees. It most often appears when developers assume incoming data conforms to types without formal schema checks, or when application-level schema definitions and tools pretend to manage the truth of the database schema. Rigorously enforcing schemas on external inputs and treating the database as the single source of truth for migrations and queries address both pitfalls.

With the right choice of software stack, we can simultaneously achieve safe code, reliable deployments, and developer happiness.

(And you should probably add Squawk to your CI/CD pipeline. I have no relationship to that project other than gratitude.)
