<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Antonio Souza</title>
    <link>http://localhost:3000</link>
    <description>Thoughts on programming, software architecture, and technology</description>
    <language>en-us</language>
    <lastBuildDate>Tue, 03 Feb 2026 00:00:00 GMT</lastBuildDate>
    <atom:link href="http://localhost:3000/feed.xml" rel="self" type="application/rss+xml"/>
  <item>
    <title><![CDATA[Framework Boundaries and Macro Hygiene: When Magic Becomes a Liability]]></title>
    <link>http://localhost:3000/posts/2026-02-03-framework-boundaries-and-macro-hygiene</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2026-02-03-framework-boundaries-and-macro-hygiene</guid>
    <description><![CDATA[What shipping to production taught me about the line between helpful abstractions and architectural violations in Rust web frameworks.]]></description>
    <pubDate>Tue, 03 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
## Framework Boundaries and Macro Hygiene: When Magic Becomes a Liability

Today I shipped a backend to TestFlight. Not a demo, not a proof of concept—an actual application that actual users will touch. The backend is written in Rapina, the Rust web framework I maintain. This matters because dogfooding is the only real test of whether your abstractions hold up.

They didn't. Not entirely.

We found a bug in our route attribute macros. Variable shadowing. Type inference breaking in subtle ways. The kind of bug that doesn't show up in examples or tutorials, only in real handler functions with real complexity and real deadlines.

The immediate fix is mechanical: better hygiene in macro expansion, prefixed identifiers, clearer separation between generated code and user code. But the bug surfaced a deeper question about framework design that I've been thinking about all day: **where's the line between helpful magic and architectural violation?**

## The Bug

Here's what was happening. Rapina uses attribute macros for routing:

```rust
#[get("/users/:id")]
async fn get_user(
    State(db): State<Database>,
    Path(id): Path<UserId>,
) -> Result<Json<UserResponse>> {
    let user = db.users().find(id).await?;
    Ok(Json(UserResponse::from(user)))
}
```

The macro expands this into the actual axum route registration, extracting path parameters, handling state injection, wrapping errors in our standard envelope format. Standard framework stuff.

But the expansion was generating variable names that could collide with the handler's own scope. If your handler happened to use certain common names for local bindings, type inference would break. Not with a clear error—with confusing messages about trait bounds and lifetime mismatches three layers deep in the inference chain.

This only surfaced in production because production code is messier than tutorial code. Real handlers have more local variables, more complex control flow, more generic parameters in play. The "works in the README" version doesn't survive contact with actual engineering.

## The Real Problem

The bug itself is fixable. The *category* of bug is what matters.

When a framework macro generates code that makes assumptions about the caller's namespace, it's not just a hygiene violation—it's an architectural boundary violation. The macro is reaching across the abstraction layer and making decisions about things it shouldn't know about.

This is the same category of problem that makes dependency injection frameworks in languages like Java and C# so hard to reason about. The framework stops being a tool you use and starts being an environment you exist inside. It has opinions about your internal structure, not just your public interface.

Rust's macro hygiene rules exist specifically to prevent this. When you write a declarative macro, identifiers you introduce don't collide with identifiers in the caller's scope unless you explicitly use `$var:ident` to capture them. This is a *design constraint*, not just a safety feature. It enforces that the macro operates in its own namespace and interacts with the caller only through explicit parameters.

Procedural macros can violate this more easily because they're generating raw token streams. You can emit any identifier you want. The compiler won't stop you from shadowing the caller's variables. This is power, and with power comes the responsibility to not be an asshole.

## The Tension

But here's where it gets interesting: developers *expect* framework magic. The entire value proposition of frameworks like FastAPI, Rails, ASP.NET Core—they wire things up automatically. You write a function with the right signature, slap an attribute on it, and the framework figures out how to turn HTTP bytes into typed parameters and typed results back into HTTP bytes.

That's *good* magic. That's the abstraction doing its job.

So where's the line?

I think it's this: **magic is acceptable when it operates on the public interface, unacceptable when it makes assumptions about the internal implementation.**

A routing macro can look at your function signature and generate the glue code to extract path parameters. That's operating on the public interface—the function's type. It's information you're explicitly publishing.

A routing macro should *not* be generating variable names that could collide with your function body's internal bindings. That's making assumptions about your implementation details—information you didn't publish.

This maps directly to the Rust type system's philosophy. Public vs. private isn't just about visibility—it's about contracts. Your public API is what you promise to the outside world. Your private implementation is what you reserve the right to change. A framework that makes assumptions about your private implementation is violating the abstraction boundary.

## Production Changes Everything

The reason this bug only surfaced in production is important. Tutorial code is clean. Tutorial code has one responsibility per function, minimal local state, straightforward control flow. Tutorial code is *pedagogically optimized*, not *engineering optimized*.

Production code is a mess. It has edge cases and error handling and "TODO: refactor this" comments from three months ago. It has functions that grew beyond their original scope because the deadline was yesterday. It has generic parameters that seemed like a good idea at the time.

And that mess is *fine*. That's what real software looks like. The problem is when your framework can't handle it.

This is why I'm skeptical of frameworks that prioritize demo elegance over production robustness. If your framework only works cleanly in the examples, it's not a framework—it's a pitch deck.

Rapina's philosophy is "90% of apps should require 10% of decisions." That means we need to work in the messy 90%, not just the clean 10%. It means when someone writes a handler function that's longer than it should be and has more local variables than is ideal, the framework doesn't break. It doesn't shadow their variables. It doesn't make their compile errors incomprehensible.

## The AI Era Angle

There's another reason this matters now: AI coding assistants.

LLMs are great at generating clean tutorial-style code. They're less great at understanding the implicit assumptions and hidden coupling in "magical" frameworks. When a framework has a lot of implicit behavior—names that get generated behind the scenes, types that get inferred through complex trait chains, macros that reach into your scope—the AI can't reason about it clearly.

This isn't just a current limitation of LLMs. It's a fundamental property of implicit systems. If the behavior isn't in the code, it's not in the training data. If the coupling isn't visible in the tokens, the model can't learn it.

Rapina's goal is to be AI-friendly by being *explicit*. Predictable structure, clear boundaries, minimal magic. When an AI generates a Rapina handler, it should be obvious what code gets generated by the macro and what code is user-written. The boundary should be clear in the token stream.

This is the same property that makes code maintainable by humans. Explicitness aids reasoning. Rust enforces explicitness through its type system and ownership model. Frameworks should extend that philosophy, not undermine it.

## What I Changed

The fix we shipped today does three things:

1. **Prefixed all generated identifiers** with `__rapina_` to prevent collisions with user code
2. **Minimized the expansion scope** so generated bindings are introduced only where they're needed, not at the function level
3. **Added hygiene tests** that intentionally use common variable names in handlers to catch future violations

But more importantly, we documented the principle: **Rapina macros operate on function signatures, not function bodies.** This is now a design constraint, not just a bug fix.

If we need information from inside the function, we require it to be expressed in the signature—through parameters, return types, or explicit attributes. We don't reach in and assume.

## The Broader Pattern

This maps to a pattern I've seen across systems at different scales: **the best abstractions are the ones that respect boundaries.**

In distributed systems: services that communicate only through explicit contracts (APIs, message schemas) are more maintainable than services that share databases or internal implementation details.

In type systems: functions that take explicit parameters are easier to reason about than functions that close over mutable state.

In build systems: explicit dependencies in a manifest are better than implicit dependencies discovered at runtime.

And in frameworks: macros that operate on public interfaces are better than macros that make assumptions about private implementation.

Rust's ownership system enforces boundaries at the memory level. Its trait system enforces boundaries at the interface level. Framework design should extend this to the architectural level.

## What This Means for Technical Leadership

If you're building frameworks, libraries, or any kind of abstraction layer, the question isn't "what can I make implicit?" It's "what *must* remain explicit to preserve the abstraction boundary?"

Magic is seductive. Reducing boilerplate is genuinely valuable. But every implicit behavior is a tradeoff. You're trading explicitness for convenience, and the cost comes due when someone tries to debug the implicit behavior or extend it in ways you didn't anticipate.

This is especially true in the AI era. The systems we build need to be *legible*—not just to humans, but to tools that operate on code as data. LLMs, static analyzers, refactoring tools—they all depend on being able to see the structure clearly.

Rust gives us the tools to enforce this legibility: strong types, explicit lifetimes, hygiene rules in macros, the borrow checker. The question is whether we use them, or whether we work around them in pursuit of "better DX."

I think the right answer is: use them. Lean into the constraints. Treat them as design guidance, not obstacles. The code that results is more robust, more maintainable, and more legible—to humans and machines alike.

That's what architectural discipline means. Not just "writing clean code," but **designing systems where the boundaries are clear and enforced by the tools.**

## Conclusion

We shipped to TestFlight today. The backend held up. The bug we found was real, but it was fixable, and more importantly, it was *findable*. It manifested as a compile-time error, not a runtime surprise. That's Rust doing its job.

The fix we shipped makes the framework more disciplined. The boundary between framework and application is now clearer and more rigorously enforced. This makes the framework less magical, but more trustworthy.

And in production, trustworthy beats magical every time.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[When the Type System Justifies Deleting Process]]></title>
    <link>http://localhost:3000/posts/2026-01-27-when-the-type-system-justifies-deleting-process</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2026-01-27-when-the-type-system-justifies-deleting-process</guid>
    <description><![CDATA[Why I killed gitflow in favor of trunk-based development—and what that decision reveals about architectural discipline in Rust.]]></description>
    <pubDate>Tue, 27 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
Yesterday I merged contributions from two different developers on Rapina, the Rust web framework I'm building. Today I deleted our develop branch.

This wasn't a reckless move. It was the result of watching rust-lang/rust ship 150,000+ commits with hundreds of contributors on a single main branch—and realizing our gitflow setup was solving a problem Rust had already solved at the type system level.

## The Gitflow Tax

We run five repositories. Four to five developers. Gitflow everywhere: feature branches, develop branch, release branches, hotfix branches. The ceremony compounds across repos. A feature spanning two services means coordinating merges across two develop branches. Reviews lag because developers aren't sure which branch represents "current truth." Merge conflicts accumulate.

Gitflow was designed in 2010 for a different era. Vincent Driessen introduced it as a branching model for teams shipping versioned software with long-lived release cycles. It adds process-level isolation: develop is your integration branch, main is your stable release branch, and the space between them is a buffer where you catch problems before they hit production.

The assumption: **you need process to prevent bad code from reaching production.**

That assumption makes sense in languages where the type system doesn't help you. JavaScript, Python, Ruby—languages where tests are your primary safety net and runtime exceptions are a fact of life. In those ecosystems, branch isolation buys you time to catch issues before they compound.

But Rust changes the equation.

## What the Compiler Already Guarantees

When I reviewed contributions to Rapina today, I wasn't looking for null pointer dereferences. I wasn't checking if someone forgot to handle an error case. I wasn't scanning for data races or use-after-free bugs.

The compiler had already ruled those out.

The code review was about **design coherence**:
- Does this API fit the framework's architectural model?
- Is this abstraction at the right level?
- Does this change make the framework easier or harder to reason about?

These are questions the type system can't answer. But everything else—memory safety, concurrency safety, error handling discipline—is enforced before the code is even eligible for review.

This is the insight: **Rust moves invariants from runtime and process into compile time.** Gitflow is a process-level workaround for weak compile-time guarantees. When the compiler enforces invariants, the process can be radically simpler.

## How rust-lang/rust Ships

I've been contributing to the Rust compiler. The repository has over 150,000 commits. Hundreds of active contributors. A CI pipeline that runs thousands of tests across platforms and configurations.

They use a single main branch (historically called `master`, now `main`). No develop branch. Feature work happens in PRs. Merges go straight to main after review and CI passes. Release branches are cut only when preparing a release—at the last responsible moment.

This isn't reckless cowboy coding. It's **compiler-enforced discipline** backed by comprehensive CI.

The branching strategy is simple because the language doesn't allow the complexity to hide. If your PR breaks something, the compiler or CI catches it before merge. There's no "integrate into develop and see what happens" phase. The feedback is immediate and deterministic.

## The Decision

I started with Mediator, our core backend service. Deleted the develop branch. Updated CI to treat main as the single source of truth. Moved to trunk-based development: feature branches merge directly to main after review and CI.

The team asked the expected questions:
- "What if something breaks in production?"
- "How do we isolate work-in-progress features?"
- "Isn't this risky?"

The answers:
1. **If something breaks in production, it's not because we skipped develop—it's because our tests or type modeling are insufficient.** The develop branch was giving us false confidence. It wasn't catching the bugs that matter.
2. **Feature isolation happens at the architecture level, not the branch level.** If a feature isn't ready, it's behind a feature flag or not exposed in the API. The type system ensures incomplete features don't compile into incoherent states.
3. **The risk profile doesn't change—it just becomes visible faster.** Gitflow delays feedback. Trunk-based development surfaces integration issues immediately, when they're cheapest to fix.

## When Process Is a Smell

This is a broader principle in technical leadership: **when you find yourself adding process to compensate for language limitations, you're treating symptoms instead of causes.**

Gitflow compensates for languages where merging code from multiple contributors is inherently risky. The branch model is risk mitigation.

Rust eliminates entire classes of risk at compile time. The mitigation becomes overhead.

This doesn't mean Rust projects never need complex branching strategies. Regulated industries, embedded systems with hardware-in-the-loop testing, teams with async deployment cycles—these contexts might still justify heavier process.

But for a web framework with solid CI and a type system that enforces invariants? Trunk-based development isn't just viable—it's the architecturally honest choice.

## What This Reveals About Architectural Discipline

Rapina is a web framework that emphasizes architectural discipline. The design philosophy: **the framework should make bad architecture hard to write.**

Deleting the develop branch is an extension of that philosophy. If the branching strategy is complex, it's because the architecture allows states and transitions that shouldn't be possible. Fix the architecture. Simplify the process.

The same principle applies to the framework itself:
- If users need extensive documentation to avoid memory bugs, the API is wrong.
- If users need linters to catch concurrency issues, the abstraction is leaky.
- If users need gitflow to prevent broken merges, the type modeling is weak.

Rust enables a different contract: **the compiler is the first line of defense. Process is the last resort.**

## The AI Era Context

Why does this matter for technical leadership in the AI era?

Because AI-generated code is about to flood every codebase. LLMs can write syntactically correct code in any language. But they can't reason about invariants across a system. They can't model ownership semantics. They can't enforce architectural discipline.

In dynamically typed languages, AI-generated code looks fine until it explodes at runtime. The solution is more tests, more process, more review overhead.

In Rust, AI-generated code either compiles or doesn't. The type system is a forcing function. If the AI doesn't understand ownership, the code won't build. If the AI violates an invariant, the compiler rejects it.

This changes the role of the technical lead. In dynamically typed ecosystems, leadership means building process to catch what the language doesn't. In Rust, leadership means **designing type systems that catch what process can't.**

Deleting gitflow is a small decision. But it reflects a larger shift: the compiler becomes the enforcer, and the leader's job is to design the rules the compiler enforces.

## What I'm Watching

Rapina received two contributions from different developers yesterday. Both were clean merges. The code reviews were about design, not bugs.

This is what an open source community forming around architectural discipline looks like. Contributors aren't fighting the type system—they're using it to write better code. The framework's type model is the shared language.

I'm watching to see if this scales. If the community grows, will trunk-based development remain viable? Or will there be a threshold where the coordination overhead justifies reintroducing branch isolation?

My hypothesis: **if the type system is strong enough, the threshold is much higher than most teams assume.**

We'll see.

## Takeaways for Technical Leads

If you're leading a Rust project and still using gitflow, ask:

1. **What invariants does the develop branch protect that the compiler doesn't already enforce?**
2. **What percentage of bugs caught in develop would have been caught by better type modeling or CI?**
3. **Is the branching complexity solving an architectural problem or hiding it?**

If the answers are "not many," "most of them," and "hiding it," consider trunk-based development.

The transition isn't costless. You need solid CI. You need a team that trusts the compiler. You need architectural discipline in how features are flagged and deployed.

But if you have those things—and if you're using Rust—the process can be radically simpler.

Because when the type system enforces the invariants, the branches become theater.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[Compile-Time Boundaries and Architectural Optionality]]></title>
    <link>http://localhost:3000/posts/2026-01-26-compile-time-boundaries-and-architectural-optionality</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2026-01-26-compile-time-boundaries-and-architectural-optionality</guid>
    <description><![CDATA[How Rust's feature flags force explicit decisions about dependency composition—and why that friction is a feature, not a bug.]]></description>
    <pubDate>Mon, 26 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
Architectural discipline begins with the question: *what can we defer, and what must we decide now?*

In most systems, that question gets answered implicitly. Dependencies accumulate. Abstractions leak. By the time you're debugging a production incident, you've forgotten which layers were supposed to be optional and which are load-bearing.

Rust doesn't let you forget. Today, while implementing logging adapters for [sheen](https://github.com/arferreira/sheen), I was reminded why.

## The Problem: Multiple Logging Backends in Rust

The Rust ecosystem has two dominant logging approaches:

1. **`log`** — a lightweight facade. Libraries log through it, applications provide a backend.
2. **`tracing`** — structured, span-based observability. More powerful, heavier, async-aware.

If you're building a library that needs to emit diagnostics, you face a choice: depend on `log`, depend on `tracing`, or roll your own abstraction and support both.

The naive solution: bundle both. Detect at runtime which one the user has configured, and route logs accordingly.

This is what most ecosystems do. It's ergonomic. It's convenient.

It's also wrong.

## Why Bundling Is an Architectural Smell

When you bundle optional dependencies, you're making a runtime decision that should have been made at compile time. You're saying: "I don't know which of these you'll need, so I'll ship both and let you figure it out."

The costs:

- **Binary bloat.** Every user pays for code they don't use.
- **Coupling.** Your abstraction now depends on both logging crates, even if the user only wants one.
- **Hidden complexity.** The branching logic lives inside your library. Users can't see it. They can't reason about it without reading your source.

In a dynamic language, this is unavoidable. You can't eliminate unused code paths at compile time because you don't know what will execute until runtime.

Rust gives you a choice. Feature flags let you push that decision upstream.

## The Discipline of Feature Flags

Here's how I structured sheen's adapters:

```rust
// Core abstraction (no logging dependencies)
pub trait Logger {
    fn log(&self, level: Level, message: &str);
}

// Adapter for `log` crate (behind "log" feature)
#[cfg(feature = "log")]
pub struct LogAdapter;

#[cfg(feature = "log")]
impl Logger for LogAdapter {
    fn log(&self, level: Level, message: &str) {
        log::log!(level.into(), "{}", message);
    }
}

// Adapter for `tracing` crate (behind "tracing" feature)
#[cfg(feature = "tracing")]
pub struct TracingAdapter;

#[cfg(feature = "tracing")]
impl Logger for TracingAdapter {
    fn log(&self, level: Level, message: &str) {
        tracing::event!(level.into(), "{}", message);
    }
}
```

**No default features.** If you depend on sheen, you get the core abstraction and nothing else. To actually log, you must explicitly enable `log` or `tracing`:

```toml
[dependencies]
sheen = { version = "0.1", features = ["log"] }
```

Or both:

```toml
sheen = { version = "0.1", features = ["log", "tracing"] }
```

The key insight: **the library doesn't decide what you ship. You do.**
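On the library side, that opt-in contract is declared in `Cargo.toml`. A sketch of how sheen's feature table could look (versions and exact names are illustrative):

```toml
[features]
default = []              # no default features: consumers opt in explicitly
log = ["dep:log"]
tracing = ["dep:tracing"]

[dependencies]
log = { version = "0.4", optional = true }
tracing = { version = "0.1", optional = true }
```

The `dep:` syntax ties each feature to an optional dependency, so enabling `log` is the only way the `log` crate enters the build.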

## What This Looks Like at Scale

This pattern isn't just for logging. It's everywhere in Rust's backend ecosystem:

- **Serialization:** `serde` supports JSON, YAML, TOML, MessagePack—all behind feature flags.
- **HTTP clients:** `reqwest` lets you opt into rustls vs OpenSSL, blocking vs async, cookies, gzip—each a separate feature.
- **Databases:** `sqlx` supports Postgres, MySQL, SQLite—pick one, pay for one.

The discipline compounds. When every layer of your stack follows this pattern, you build systems where:

1. **Dependencies are legible.** Run `cargo tree -e features` and see exactly which features and crates you're shipping.
2. **Coupling is explicit.** If a feature pulls in a heavy dependency, that's visible in the feature graph.
3. **Composition is user-controlled.** The application decides the tradeoffs, not the library author.

This is architectural discipline enforced by the compiler.

## The Cost of Explicitness

There's friction here. Users have to read docs. They have to make decisions. They might get it wrong the first time.

In contrast, the "just works" approach—bundle everything, auto-detect at runtime—has zero cognitive load. You add the dependency, and it figures itself out.

So why not do that?

Because **deferred decisions are technical debt.**

Every runtime branch is a code path you have to test. Every bundled dependency is a supply chain risk. Every implicit coupling is a future refactor waiting to happen.

Rust's feature flags front-load that complexity. You pay the cost at integration time, when you're actively thinking about dependencies. In exchange, you get:

- Smaller binaries.
- Faster compile times (unused features aren't compiled).
- Clearer contracts (the feature list documents what's optional).

Most importantly, **you avoid runtime surprises.** There's no "wait, why is this library trying to initialize a logger I didn't configure?" moment. If you didn't enable the feature, the code doesn't exist.

## When Explicitness Is Wrong

This pattern isn't universal. There are cases where feature flags add more friction than value:

1. **Stable, universal dependencies.** If everyone needs `serde`, just depend on it. Feature-flagging it is ceremony.
2. **Tightly coupled features.** If feature A only makes sense with feature B, they shouldn't be separate flags.
3. **Internal implementation details.** If the choice doesn't affect the public API, don't expose it as a feature.

The heuristic: **feature flags should map to user-facing decisions, not implementation details.**

If you're feature-flagging because "the user might not need this," ask: does the user even know this exists? If not, it's probably not a feature; it's an internal abstraction you're leaking.

## Leadership Implications

Here's where this becomes a technical leadership question.

On a team, feature flags create coordination costs. Someone has to document them. Someone has to test combinations. Someone has to field questions when users enable the wrong set.

In a high-trust, slow-moving environment, that's fine. You write the docs, you own the combinations, you support the users.

In a fast-moving startup, it might be the wrong tradeoff. Ship the batteries-included version. Optimize for iteration speed, not binary size.

The leadership question: **what's your team's relationship with optionality?**

If you're building infrastructure that will be reused across many projects—an internal platform, a shared library, a framework—then explicitness pays off. You're forcing each consumer to think about what they need, and that thinking prevents drift.

If you're building a one-off service under deadline pressure, explicitness might be premature optimization. Ship it working, refactor later if you need to.

Rust doesn't make that decision for you. It gives you the tools to express optionality cleanly. Whether you use them is a judgment call.

## The Compiler as Architectural Review

What I appreciate about Rust's approach is that **the decision is visible in the code.**

When you see:

```toml
[dependencies]
sheen = { version = "0.1", features = ["log", "tracing"] }
```

...you know someone made a choice. They decided to pull in both logging backends. Maybe that's intentional. Maybe it's a mistake. Either way, it's **legible.**

Contrast with a runtime approach:

```javascript
const logger = require('universal-logger');
```

What did you just pull in? Which backends? Which transitive dependencies? You'd have to read the source or inspect `node_modules` to know.

Rust's feature flags make the dependency graph a first-class part of the architecture. They turn optionality from a runtime concern into a compile-time contract.

And when the compiler enforces your architectural boundaries, you can't accidentally violate them. You can't merge a PR that silently bundles a dependency you didn't mean to ship. The CI build fails.

This is what I mean when I say Rust enforces architectural discipline. It's not about the borrow checker or memory safety—it's about making implicit decisions explicit, and letting the type system hold you to them.

## Closing Thought

The hardest part of architecture isn't choosing the right abstraction. It's choosing where to draw the boundaries.

Rust's feature flags are a tool for drawing those boundaries explicitly. They let you defer decisions to the right layer—not runtime vs compile-time, but *library author* vs *application developer.*

The library provides capabilities. The application composes them. The compiler ensures the composition is sound.

That's the discipline: knowing what you control, what you defer, and what you enforce.

It's a small decision—how to structure a logging adapter—but it reflects a larger principle. When you build systems where optionality is explicit and composition is user-controlled, you're not just writing better Rust.

You're practicing technical leadership. You're making the right things easy and the wrong things hard. And you're building systems that stay legible as they scale.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[Building sheen: Bringing charmbracelet/log to Rust]]></title>
    <link>http://localhost:3000/posts/2026-01-23-building-sheen-bringing-charmbracelet-log-to-rust</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2026-01-23-building-sheen-bringing-charmbracelet-log-to-rust</guid>
    <description><![CDATA[How I built sheen, a Rust logging library inspired by charmbracelet/log, exploring traits, dynamic dispatch, and ergonomic macros]]></description>
    <pubDate>Fri, 23 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
![sheen in action](/sheen.gif)

Recently I decided to start building a web framework in Rust. I know, there are lots of Rust web frameworks out there. But lately I've grown uncomfortable with how backend frameworks have evolved.

I've worked with several of them: Rails, Django, Loco. They all work well, but they've become so flexible that the flexibility, which sounds like a feature, often turns into boilerplate, hidden complexity, and inconsistent architectures. And since 2023, with AI writing a portion of the code, the mess compounds even faster.

So I decided to create [Rapina](https://github.com/arferreira/rapina), an opinionated Rust web framework exploring better DX in an AI-assisted world.

As I love open source, I went browsing for libraries to support me with the logging/tracing challenge. I had been using the Rust tracing-subscriber crate — it's what Loco uses. But then I came across this one: [charmbracelet/log](https://github.com/charmbracelet/log).

I found it so special. But the bad side? Written in Go.

Damn. Why doesn't Rust have something this clean?

## What made charmbracelet/log special

Looking at their README, a few things stood out:

- Ease of config — just works out of the box
- Beautiful UI — colorful, aligned, readable
- Very adaptable to any kind of project
- Flexible but not closed — you can customize everything without fighting the library

Their API is dead simple:
```go
log.Info("Hello World!")
log.Error("failed to bake cookies", "err", err)
```

And it just looks good. Colored levels, structured fields, timestamps. No ceremony.

## Starting out

I decided to build it. Created the project:
```bash
cargo new sheen --lib
```

The name "sheen" came from wanting something that evokes polished, glossy, refined output. And it was available on crates.io.

My first decision: what to build first?

I started with the `Level` enum. It's the foundation — everything else depends on it:
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum Level {
    Trace,
    Debug,
    Info,
    Warn,
    Error,
}
```

The key decision here: deriving `PartialOrd` and `Ord` means enum variants are ordered by declaration. So `Level::Trace < Level::Debug < Level::Info`. This makes filtering logs trivial:
```rust
pub fn enabled(&self, level: Level) -> bool {
    level >= self.level
}
```

No manual comparisons, no match statements. The type system does the work.

From there I built the `Logger` struct, added colors with `owo-colors`, then structured fields, timestamps, prefixes. Each feature small and incremental.

## Architecture decisions

### The Formatter trait

At first, all formatting logic lived inside the `log()` method. It worked, but I knew adding JSON output would mean ugly if/else blocks everywhere. Not scalable.

I wanted users to be able to swap formatters cleanly:
```rust
let logger = Logger::new().formatter(JsonFormatter);
```

This meant introducing a trait:
```rust
pub trait Formatter: Send + Sync {
    fn format(
        &self,
        level: Level,
        message: &str,
        timestamp: Option<&str>,
        prefix: Option<&str>,
        fields: &[(String, String)],
        extra: &[(&str, &dyn Debug)],
    ) -> String;
}
```

The `Send + Sync` bounds are required because the global logger lives in a static and might be accessed from multiple threads. Rust forces you to think about this upfront.

### `Box<dyn Formatter>` — dynamic dispatch

I wanted to store any formatter in the Logger struct:
```rust
pub struct Logger {
    level: Level,
    formatter: dyn Formatter,  // ❌ won't compile
}
```

The problem: `dyn Formatter` could be `TextFormatter` (0 bytes) or `JsonFormatter` (maybe 8 bytes) or some user's custom formatter. Rust needs to know struct sizes at compile time.

The solution: `Box<dyn Formatter>`. Box puts the data on the heap and stores a fat pointer to it (the data pointer plus a vtable pointer, a fixed 16 bytes on 64-bit), so the struct's size is known at compile time:
```rust
pub struct Logger {
    level: Level,
    formatter: Box<dyn Formatter>,  // ✅ fixed-size fat pointer
}
```

Think of it like storing a locker number instead of the actual item. The locker number always fits in your pocket, regardless of what's inside the locker.

This is a classic Rust pattern for runtime polymorphism. The small overhead of heap allocation and dynamic dispatch is negligible for a logger.

### Ergonomic macros

I wanted the API to feel natural:
```rust
sheen::info!("Server started", port = 3000, host = "localhost");
```

The macro:
```rust
#[macro_export]
macro_rules! info {
    ($msg:expr) => {
        $crate::global::logger().info($msg, &[])
    };
    ($msg:expr, $($key:ident = $value:expr),* $(,)?) => {
        $crate::global::logger().info(
            $msg,
            &[$(( stringify!($key), &$value as &dyn std::fmt::Debug )),*]
        )
    };
}
```

`stringify!($key)` converts the identifier `port` to the string `"port"`. The repetition pattern `$(...),*` handles zero or more key=value pairs. The trailing `$(,)?` allows an optional trailing comma.

## Why sheen is a good project to learn Rust

If you're looking to level up your Rust skills, building a logging library covers a lot of ground:

**Traits and generics** — The Formatter trait teaches you how to design extensible APIs. You'll understand the difference between static dispatch (`impl Trait`) and dynamic dispatch (`dyn Trait`).

**Ownership patterns** — `Box<dyn Trait>` for owned trait objects, `&dyn Debug` for borrowed trait objects. You'll learn when to use each.

**Macros** — Declarative macros with `macro_rules!` are powerful. Building `info!`, `debug!`, etc. teaches pattern matching on syntax.

**Builder pattern** — Idiomatic Rust configuration:
```rust
Logger::new()
    .level(Level::Debug)
    .prefix("myapp")
    .timestamp(true)
```

**Global state** — Using `OnceLock` for safe, lazy initialization of a global logger.

**TTY detection** — `std::io::IsTerminal` for smart behavior (colors in terminal, plain text when piped).

The codebase is small enough to understand completely, but covers patterns you'll use in larger projects.

## What's next

Features planned:

- `log` crate compatibility — work with existing Rust ecosystem
- Custom color themes
- File output support
- More time format options

Check the [issues](https://github.com/arferreira/sheen/issues) — several are tagged `good first issue` if you want to contribute.

## Try it
```toml
[dependencies]
sheen = "0.2"
```
```rust
fn main() {
    sheen::init();
    sheen::info!("Hello from sheen", version = "0.2.0");
}
```

- [GitHub](https://github.com/arferreira/sheen)
- [Crates.io](https://crates.io/crates/sheen)

---

*sheen is inspired by [charmbracelet/log](https://github.com/charmbracelet/log). Thanks to them for showing what good logging DX looks like.*
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[Why I'm Building Rapina: A Web Framework for APIs You Can Actually Trust]]></title>
    <link>http://localhost:3000/posts/2026-01-18-why-im-building-rapina</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2026-01-18-why-im-building-rapina</guid>
    <description><![CDATA[Modern APIs are easy to write and hard to trust. Rapina is a Rust web framework built for predictability, auditability, and security—by humans, accelerated by AI.]]></description>
    <pubDate>Sun, 18 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
I've been writing APIs for nearly two decades. I've seen them grow, break, and become unmaintainable nightmares. Recently, while working with Loco (a Rails-like framework for Rust), I hit a wall. The codebase was becoming a mess. Conventions were inconsistent. Every endpoint did things slightly differently. And with AI assistants now writing code alongside us, the chaos was multiplying.

That's when I decided to build something different.

## The Problem Nobody Names

Modern APIs are easy to write and hard to trust.

They grow fast, break silently, accumulate inconsistencies, and depend on tribal knowledge. Over time, they become hostile territory for maintenance—whether by humans or AI.

I've seen this pattern repeat across companies, teams, and tech stacks. And I realized the root cause isn't the language or the framework. It's the lack of **predictability**, **auditability**, and **guardrails by default**.

## The Pain Points

### "I don't know if this API is correct"

Handlers accept vague inputs. Responses change without warning. Errors aren't standardized. OpenAPI doesn't reflect reality.

### "Every endpoint does things differently"

One endpoint returns `{ data }`, another `{ user }`. Errors vary by module. Auth is applied inconsistently. Observability depends on human discipline.

### "Refactoring is terrifying"

A small change breaks the frontend. You can't tell if it's a breaking change. Nobody trusts big refactors. OpenAPI doesn't help.

### "AI helps write code, but makes the mess worse"

Generated code has no pattern. Errors are inconsistent. Naming is chaotic. Logic gets duplicated.

### "Onboarding is slow and people-dependent"

New devs ask everything. Docs are outdated. Knowledge isn't in the code.

### "Production is fragile by default"

Logs are inconsistent. No trace IDs. Timeouts forgotten. Security is optional.

## Enter Rapina

Rapina is a web framework for Rust, inspired by FastAPI's developer experience, but built with a different philosophy:

**Predictable, auditable, and secure APIs—written by humans, accelerated by AI.**

It's not about being "another web framework" or "FastAPI for Rust." It's about solving the **trust problem** in modern APIs.

## What It Looks Like

```rust
use rapina::prelude::*;

#[derive(Deserialize)]
struct CreateUser {
    name: String,
    email: String,
}

#[derive(Serialize)]
struct User {
    id: u64,
    name: String,
    email: String,
}

#[get("/users/:id")]
async fn get_user(id: Path<u64>) -> Result<Json<User>> {
    let id = id.into_inner();

    if id == 0 {
        return Err(Error::not_found("user not found"));
    }

    Ok(Json(User {
        id,
        name: "Antonio".to_string(),
        email: "antonio@example.com".to_string(),
    }))
}

#[post("/users")]
async fn create_user(body: Json<CreateUser>) -> Json<User> {
    let input = body.into_inner();
    Json(User {
        id: 1,
        name: input.name,
        email: input.email,
    })
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let router = Router::new()
        .get("/users/:id", get_user)
        .post("/users", create_user);

    Rapina::new()
        .router(router)
        .listen("127.0.0.1:3000")
        .await
}
```

Clean. Typed. Predictable.

## Standardized Errors with Trace IDs

Every error returns a consistent envelope:

```json
{
  "error": {
    "code": "NOT_FOUND",
    "message": "user not found"
  },
  "trace_id": "550e8400-e29b-41d4-a716-446655440000"
}
```

No more guessing what format an error will be in. No more hunting through logs without a trace ID.

## Dependency Injection That Makes Sense

```rust
#[derive(Clone)]
struct AppConfig {
    app_name: String,
}

#[get("/")]
async fn hello(config: State<AppConfig>) -> String {
    format!("Hello from {}!", config.into_inner().app_name)
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let config = AppConfig {
        app_name: "My API".to_string(),
    };

    Rapina::new()
        .state(config)
        .router(router)
        .listen("127.0.0.1:3000")
        .await
}
```

## Per-Request Dependencies

Need auth? Create a `CurrentUser` extractor:

```rust
struct CurrentUser {
    user_id: u64,
}

impl FromRequestParts for CurrentUser {
    async fn from_request_parts(
        parts: &http::request::Parts,
        _params: &PathParams,
        _state: &Arc<AppState>,
    ) -> Result<Self> {
        let user_id = parts
            .headers
            .get("x-user-id")
            .and_then(|v| v.to_str().ok())
            .and_then(|v| v.parse().ok())
            .ok_or_else(|| Error::unauthorized("missing or invalid token"))?;

        Ok(CurrentUser { user_id })
    }
}

#[get("/me")]
async fn get_me(user: CurrentUser) -> Json<User> {
    Json(User {
        id: user.user_id,
        name: "Current User".to_string(),
        email: "me@example.com".to_string(),
    })
}
```

No auth header? Automatic 401 with a proper error response. No panic. No surprise.

## The Philosophy

Rapina follows three principles:

- **Predictability** — Clear conventions, obvious structure. You know what to expect.
- **Auditability** — Typed contracts, traceable errors. You can prove it's correct.
- **Security** — Guardrails by default. You have to opt-out of safety, not opt-in.

## What's Coming

Rapina is still young, but the foundation is solid:

- [x] Basic router with path parameters
- [x] Typed extractors (`Json`, `Path`, `State`)
- [x] Proc macros (`#[get]`, `#[post]`, `#[put]`, `#[delete]`)
- [x] Standardized error handling with `trace_id`
- [x] Dependency injection (`State<T>`, `FromRequestParts`)
- [ ] Query parameters extractor
- [ ] Validation (`Validated<T>`)
- [ ] Auth (Bearer JWT, `CurrentUser`)
- [ ] Middleware system
- [ ] Observability (tracing, structured logs)
- [ ] Automatic OpenAPI generation
- [ ] CLI (`rapina new`, `rapina routes`, `rapina doctor`)
- [ ] Contract-based testing
- [ ] Breaking change detection

## The Ultimate Test

If in 5 years someone can:

- Understand the API without talking to anyone
- Refactor without fear
- Let an AI work on it without creating chaos

Then Rapina will have fulfilled its purpose.

## Want to Contribute?

Rapina is open source and I'd love your help. Whether you're coming from Python, Ruby, PHP, or any other ecosystem—if you care about building APIs that are maintainable, predictable, and AI-friendly, come join us.

**GitHub:** [https://github.com/arferreira/rapina](https://github.com/arferreira/rapina)

The codebase is clean, the architecture is documented, and there's plenty of low-hanging fruit for first-time contributors:

- Adding new extractors (Query, Header, Cookie)
- Improving error messages
- Writing documentation
- Adding examples
- Building the middleware system

Let's build something we can actually trust.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[How AI will obliterate your career in 18 months (and why you should let it)]]></title>
    <link>http://localhost:3000/posts/2026-01-17-how-ai-will-obliterate-your-career</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2026-01-17-how-ai-will-obliterate-your-career</guid>
    <description><![CDATA[Why most engineers are approaching the AI revolution wrong, and the 9 stages of engineering identity you need to understand to survive.]]></description>
    <pubDate>Sat, 17 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
If you're anything like me, you think "learning AI" is a cope.

Because most engineers are approaching the AI revolution in the completely wrong way.
They're either panic-learning prompt engineering because some influencer told them to, or they're doubling down on "fundamentals" and pretending Cursor AI isn't writing production code faster than their senior developers. Both groups are fucked. But for different reasons.

If you're one of these people, I'm not here to talk down on you (though I will be harsh). I've chased 10x more tech trends than I've actually mastered: microservices, GraphQL, the whole Web3 rabbit hole. I think that should be the case for most engineers: you can't know what works until you know what doesn't.
But the fact that engineers are about to get replaced not by AI, but by engineers who understand how to use AI? That's not a trend. That's gravity.
However, as much as I think "upskilling on AI" is missing the point, it's always wise to reflect on the career you hate so you can launch yourself toward something much better.

So whether you want to build a company, escape the FAANG grind, or stop being a code monkey for a PM who can't tell Redux from a Redis cache, I want to share 7 ideas you probably haven't heard before on identity, leverage, and survival in the age of infinite code generation.

This will be comprehensive.

This isn't one of those posts you skim and forget.

This is something you'll want to bookmark, take notes on, and actually execute on over the next week.

The protocol at the end (to dig deep into why you became an engineer and what you actually want to build) will take about a full day to complete, with effects that last far longer than your last sprint cycle.

Let's begin.


## 1 - You aren't building what you want because you aren't the engineer who would build it

When it comes to surviving the AI revolution, engineers focus on one of two strategies:
1. Learning new skills (least important, second order)
2. Becoming a different type of person (most important, first order)


Most engineers panic-learn the latest framework, hype themselves up to "ship daily" for two weeks, then fall back into Jira tickets, GitHub issues, Linear tickets (whatever) and stand-ups without realizing they were trying to build a great career on a rotting foundation.

If this doesn't make sense, let's run through an example.

Think of somebody successful in tech. It can be a founder who sold their company for $100M, a staff engineer at Stripe who ships features that print money, or an indie hacker pulling $50K/month from a SaaS they built in 3 months.

Do you think the founder has to "grind" to build features? Does the staff engineer have to discipline themselves to write clean code or communicate well on stand-ups? To you, it might seem like that on the surface, but the truth is they can't see themselves living any other way. The founder has to grind to NOT ship. The staff engineer feels physical discomfort looking at poorly architected systems.


To some people, my lifestyle seems extreme. I've been coding for ~15 years, corporate, startups, personal projects, crypto and so on. To me, it's natural. When my wife tells me I should "take a break from coding", I hold my tongue from saying "If I weren't having fun, why would I be doing this?".


This next sentence may sound simple, but it's baffling how many engineers don't get it: if you want a specific outcome in your career, you must adopt the identity that creates that outcome long before you reach it. If someone says they want to "become a senior engineer", I often don't believe them. Not because they're incapable, but because that same person says, "I can't wait until I'm a senior engineer so I can stop studying documentation". I hate to break it to you, but if you don't adopt the habits that make someone senior (systems thinking, code review discipline, architectural taste, publishing at least one open source library), you'll plateau and waste years wondering why you're stuck at mid-level.


When you truly change your identity, all of the habits that don't move the needle toward your goal become disgusting, because you have a deep and profound awareness of what kind of career those actions compound into. You're okay with your current career because you're not fully aware of what your daily actions are leading to.

You say you want to escape 9-5 and build your own thing. But your actions show otherwise. And it goes deeper than you think.


## 2 - You aren't building what you want because you don't actually want to build it

"Trust only movement. Life happens at the level of events, not of words. Trust movement." – Alfred Adler

If you want to change who you are as an engineer, you must understand how the mind works so you can reprogram it.

The first step is understanding that all behavior is goal-oriented. It's teleological. When you think about it, this is obvious, but when we dig into it, most people don't want to hear it.

You open VSCode because you want to ship something.
You scroll social media because you want to avoid the anxiety of shipping.

Those are clear. But most of the time, your goals are unconscious. You may not realize that when you refactor code for the third time instead of launching, you're trying to protect yourself from the judgment that comes from putting something real into the world.


On an even more unconscious level, you pursue goals that actively harm you, but you justify them in ways that are socially acceptable:

* If you can't stop bikeshedding in code reviews, you may justify it as "caring about code quality," but in reality, you're trying to feel intellectually superior without the risk of building something yourself.
* If you say you want to leave your current job but stay without any real reason, you may think you "lack courage," but the truth is you're pursuing the goal of safety, predictability, and not looking like a failure to other engineers who see that job as the ultimate achievement.

The lesson here: real change requires changing your goals.

I don't mean setting some surface-level OKR. I mean changing your point of view. Because a goal is a projection into the future that acts as a lens of perception, it allows you to notice information, ideas, and opportunities that help you achieve it.

If your goal is "get promoted to senior", you'll notice political moves and resume-padding projects.
If your goal is "building something people pay for", you'll notice market gaps and customer pain.

Different lenses. Different lives.

## 3 - You aren't building what you want because you're afraid to be that engineer

"If you have accepted an idea—from yourself, your teachers, your parents, friends, tech Twitter—and you are firmly convinced that idea is true, it has the same power over you as the hypnotist's words have over the hypnotized subject." – Maxwell Maltz

Here's how you become the engineer you are today, and how you'll become the engineer you'll be tomorrow. This is the anatomy of engineering identity:

1 - You want to achieve a goal (get hired, get promoted, build a product)
2 - You perceive reality through the lens of that goal
3 - You only notice "important" information that allows you to achieve it (learning React, Rust, whatever)
4 - You act toward that goal and receive feedback
5 - You repeat that behavior until it becomes automatic (you "become" a React developer)
6 - That behavior becomes part of who you think you are ("I'm a frontend engineer")
7 - You defend your identity to maintain psychological consistency
8 - Your identity shapes new goals, restarting the cycle

The unfortunate reality is you must break the cycle between steps 6 and 7. But this process starts when you're young.
You wanted to survive. Your parents taught you that "good grades → good college → good job → good life." Unless you break that pattern, you're still chasing their definition of success.

And your parents? They were conditioned by the Industrial Age belief that specialization = security. "Pick a lane. Become an expert. Retire at 65."

That worked when companies had 40-year lifespans. Now the average is 15 years. Your "safe" job is getting automated by an intern with Claude Code.

To take it deeper: once your physical survival is handled (which it is—you're reading this on a $1000 phone), you start surviving on the conceptual level. You protect and reproduce your identity.

When your body is threatened, you fight or flee. When your identity is threatened, the same thing happens.

If you identify as "a Python developer," you'll feel threatened when someone suggests Rust is better. You'll feel stress. You'll defend Python in ways that have nothing to do with technical merit.

If you were raised in a "FAANG or bust" culture and didn't think for yourself, you'll attack indie hackers as "not real engineers."

The same happens when you unconsciously see yourself as "the senior who's seen it all" or "the guy who knows Kubernetes." You will sabotage your own growth to protect that identity.


## 4 - The career you want exists at a specific level of engineering consciousness

Engineers evolve through predictable stages over time. Most people crystallize at one level and never leave.

I've synthesized this from models like Dreyfus (skill acquisition), Kegan (adult development), and my own 15 years observing engineers. Here are the 9 stages of engineering identity:
1. Tutorial Hell – You can't separate learning from doing. Every project needs a guide.
2. Survival Mode – You learn to protect yourself. Copy-paste from Stack Overflow. Hide your imposter syndrome.
3. Team Player – You are your team's stack. "We're a React shop" feels like objective reality.
4. Self-Aware Coder – You notice you have opinions that don't match the team. You wonder if microservices are actually necessary, but don't say it out loud yet.
5. Principled Engineer – You build your own system of beliefs. You can defend your architectural choices. You believe the right patterns yield the right results.
6. Pragmatist – You realize your "principles" were shaped by the jobs you've had. You hold them more loosely. You start saying "it depends."
7. Systems Thinker – You see code as part of larger systems (business, team, incentives). You know your own biases but can't fully escape them.
8. Meta-Engineer – You see all frameworks, including "good code," as useful fictions. You know the map is not the territory. You watch yourself play "senior engineer" with gentle amusement.
9. Builder – There's no separation between work and creation. You don't "go to work." You just build. Coding, sleeping, thinking—it's all the same flow.

Most engineers reading this are between stages 3-7. That's a huge range.

If you're at 3-4, you're desperate for change but can't make sense of it yet. If you're at 6-8, you're reading this to either learn something or kill time productively.

The good news? Moving through any stage follows a pattern.



## 5 - Intelligence is the ability to build what you want

"The only real test of intelligence is if you get what you want out of life." – Naval Ravikant

There's a formula for career success:
* Agency (ability to act)
* Opportunity (market/timing)
* Intelligence (ability to iterate and learn)

If you have agency but no opportunity, you're a genius building the wrong thing. If you have opportunity but no intelligence, you'll never capitalize on it.

First, let's talk intelligence in the context of engineering. For that, we look to cybernetics—the art of steering toward a goal.

A cybernetic system has these properties:
1. A goal
2. Action toward that goal
3. Sensing where you are
4. Comparing current state to goal
5. Acting again based on feedback

You can judge intelligence by the system's ability to iterate and persist.

A ship blown off course that corrects. A compiler that catches errors and suggests fixes. An engineer who ships, gets feedback, and ships again.

Low-intelligence engineers get stuck on problems and quit. They hit a bug and blame the framework. They fail to get users and assume "the market isn't ready."

High-intelligence engineers realize any problem can be solved on a large enough timescale. There's a sequence of choices that leads to the outcome you want.

When I say "goals," I'm not talking about JIRA tickets.

I'm talking about teleology—the idea that everything serves a purpose. Goals determine how you see the world.
For most engineers, those goals were assigned:
* "Get the FAANG job"
* "Hit senior by 30"
* "Don't rock the boat"

A known path that doesn't work anymore.

To become more intelligent as an engineer:
1. Reject the known path (FAANG → senior → retirement)
2. Dive into the unknown (build in public, ship fast, fail often)
3. Set new, higher goals to expand your mind
4. Embrace chaos and allow for growth
5. Study the principles (not just the syntax)
6. Become a deep generalist (AI rewards breadth + depth, not just depth)

This isn't the traditional definition of intelligence. But this sequence creates the neural connections that separate great engineers from mediocre ones.


## 6 - How to launch into a completely new engineering career

The best periods of my career came after getting absolutely fed up with the lack of progress I was making.
How do you dig into your mind? How do you become aware of your conditioning? How do you reach insights that change the trajectory of your career?

Through questioning.
Something so few engineers do. You can tell by how they talk about tech—parroting takes, defending frameworks they've never shipped with.
I want to give you a protocol you can use every year to reset your career and launch into a season of intense growth.

This will require one full day to complete. Pen, paper, and an open mind.
When I observe engineers who successfully flip their identity, it happens fast after a buildup of tension. There are 3 phases:
1. Dissonance – They feel like they don't belong in their current role and get fed up.
2. Uncertainty – They don't know what's next. They experiment or spiral.
3. Discovery – They find what they want to build and make 6 years of progress in 6 months.

Our goal: help you reach dissonance, navigate uncertainty, and discover what you actually want to build.

### Part 1) Morning – Career Excavation – Vision & Anti-Vision
Set aside 15-30 minutes to answer these questions. Do NOT use AI. Break past the limiter on your mind.

**Dissonance Questions:**
1. What is the dull, persistent frustration you've learned to live with in your career? Not burnout—what you've learned to tolerate.
2. What do you complain about repeatedly but never actually change? Write down your top 3 career complaints from the past year.
3. For each complaint: If someone only watched your behavior (not your words), what would they conclude you actually want?
4. What truth about your current job would be unbearable to admit to an engineer you deeply respect?

**Anti-Vision (The Career You're Avoiding):**
1. If nothing changes for 5 years, describe an average Tuesday. Where do you wake up? What does your calendar look like? What code are you writing? How do you feel at 5pm?
2. Now 10 years. What opportunities closed? What projects died? What do former colleagues say about you when you're not in the Slack?
3. End of your life. You played it safe. Never built your thing. What was the cost? What did you never let yourself create?
4. Who in your life is already living the future you just described? Someone 5, 10, 20 years ahead on the same path. How do you feel about becoming them?
5. What identity would you have to give up to actually change? ("I am a [language] engineer", "I am the guy who knows [framework]")
6. What's the most embarrassing reason you haven't changed? The one that makes you sound weak, not reasonable?
7. If your current behavior is self-protection, what are you protecting? And what is it costing you?

If you answered truthfully, you should feel disgust for how you're spending your engineering career. Now we orient that energy in a positive direction.

**Minimum Viable Vision:**
1. Forget practicality. Snap your fingers, 3 years from now—what does an average Tuesday look like? Same detail as question 5.
2. What would you have to believe about yourself for that life to feel natural? "I am the type of engineer who..."
3. What's one thing you'd do this week if you were already that engineer?

### Part 2) Throughout The Day – Breaking Autopilot
Set random reminders with these questions:
* 11:00am: What am I avoiding by doing what I'm doing right now?
* 1:30pm: If someone filmed my last 2 hours, what would they conclude I want from my career?
* 3:15pm: Am I moving toward the career I hate or the career I want?
* 5:00pm: What's the most important thing I'm pretending isn't important?
* 7:30pm: What did I do today to protect my identity rather than grow?
* 9:00pm: When did I feel most alive today? Most dead?

During walks/commutes, contemplate:
* What would change if I stopped needing people to see me as "the senior engineer"?
* Where am I trading aliveness for safety?
* What's the smallest version of the engineer I want to become that I could be tomorrow?

### Part 3) Evening – Synthesis
After today, answer:
1. What feels most true about why you've been stuck?
2. What is the actual enemy? Not your manager. Not the tech stack. The internal pattern running the show.
3. Write one sentence that captures what you refuse to let your career become. This is your compressed anti-vision.
4. Write one sentence that captures what you're building toward. Your vision MVP.

**Create Goals (Lenses, Not Deadlines):**
1. One-year lens: What would have to be true in one year to know you broke the old pattern?
2. One-month lens: What project or skill would make the one-year lens possible?
3. Daily lens: What are 2-3 actions you can timeblock tomorrow that the engineer you're becoming would simply do?


## 7 - Turn Your Career Into A Game

"The optimal state of inner experience is one in which there is order in consciousness." – Mihaly Csikszentmihalyi

You now have all the components. Let's organize them:

**The 6 Components of Career-as-Game:**
1. Anti-Vision – The career I refuse to live
2. Vision – The ideal I'm building toward
3. 1-Year Goal – The mission
4. 1-Month Project – The boss fight (what you're shipping)
5. Daily Levers – The quests (priority tasks)
6. Constraints – The rules (what you won't sacrifice)

**Why this works:**
Games create obsession. They have clear feedback loops, progression systems, and stakes.
Your vision is how you win. Your anti-vision is what happens if you lose. Your 1-year goal is the mission. Your 1-month project is the boss fight. Your daily levers are the quests. Your constraints are the rules that force creativity.
These act as concentric circles—a forcefield around your mind that guards against distractions.
The more you play, the stronger the force becomes. Soon it becomes who you are.
And you wouldn't have it any other way.

The engineers who survive the AI revolution won't be the ones who learn prompt engineering.
They'll be the ones who rebuilt their identity from the ground up.
Start tomorrow.

– Antonio
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[Building a token launch platform from scratch]]></title>
    <link>http://localhost:3000/posts/2025-08-15-building-a-token-launch-platform-from-scratch</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2025-08-15-building-a-token-launch-platform-from-scratch</guid>
    <description><![CDATA[How I'm building Mauá, a pump.fun-style token creation platform for the Brazilian market using Solana, Rust, and Next.js]]></description>
    <pubDate>Fri, 15 Aug 2025 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
![pumpfun](/pumpfun-logo.webp)

I'm building Mauá, a token creation platform for the Brazilian market. Think pump.fun, but designed for Brazilians who want to launch tokens on Solana without touching code.

The idea is simple: anyone can create a token in two minutes, trade it immediately through an automated bonding curve, and if it gains enough traction, it graduates to a real DEX. No liquidity providers needed. No complex setup. Just name, symbol, image, and you're live.

Here's how I'm building it.

---

## The core problem

Traditional token launches require liquidity. Someone has to put up capital to create a trading pair. This creates two problems: barrier to entry for creators, and rug pull risk for buyers when that liquidity gets pulled.

The solution is a bonding curve. Instead of a liquidity pool, the smart contract itself becomes the market maker. Price is determined mathematically based on supply. Buy tokens, price goes up. Sell tokens, price goes down. The formula is deterministic. No one can drain the liquidity because the liquidity is the contract.

When a token reaches a certain market cap threshold, it graduates. The contract automatically migrates the token to Raydium with real liquidity, and trading continues there.

---

## Architecture decisions

I need four main pieces: smart contracts on Solana, a backend API, a TypeScript SDK, and a frontend.

For the smart contracts, I'm using Anchor. It's the standard framework for Solana development, and it gives me type safety and a clean IDL that I can use to generate TypeScript bindings.

For the backend, I chose Rust with the Loco framework. This might seem like overkill for a web API, but I have my reasons. First, I'm already deep in Rust for the smart contracts, so context switching is minimal. Second, Loco is essentially Rails for Rust. It gives me the rapid prototyping speed of a batteries-included framework with the performance of a compiled language. Third, I want the option to add compute-heavy features later without reaching for a different stack.

The SDK is TypeScript. It wraps all the Solana interactions into clean functions: create token, buy, sell, migrate. It handles PDA derivation, transaction building, and event listening. The frontend doesn't need to know anything about Solana internals.

The frontend is Next.js 14 with Tailwind. App Router, server components where they make sense, client components for wallet interactions. Nothing fancy here. I want it to feel fast and work on mobile, because most Brazilian users will be on their phones.

---

## The smart contract

The program has a few key instructions: initialize the factory, create token, buy tokens, sell tokens, migrate to Raydium, and update fees.

The factory account holds global state: fee rates, authority, total tokens created. Each token gets its own TokenInfo account with reserves, supply, graduation threshold, and metadata.

The bonding curve math is straightforward. Price increases quadratically with supply:

```
price = supply² × constant
```

When someone buys, I calculate how many tokens they get for their SOL based on the current point on the curve. When someone sells, I calculate how much SOL they get back. A 1% fee is taken on every transaction.
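
Integrating the curve gives the trade math. Here's a sketch in TypeScript with an illustrative curve constant and a simplified 1% fee; the on-chain program does the same thing with checked integer arithmetic rather than floats:

```typescript
// Sketch of the quadratic bonding curve math. price(s) = K * s^2, so the
// SOL cost to move supply from s1 to s2 is the integral K * (s2^3 - s1^3) / 3.
// K is illustrative, not the real on-chain constant.
const K = 1e-15; // illustrative curve constant
const FEE = 0.01; // 1% fee on every transaction

// Tokens received for a given SOL amount at the current supply.
function tokensOut(solIn: number, supply: number): number {
  const netSol = solIn * (1 - FEE); // fee taken off the top
  const s2 = Math.cbrt((3 * netSol) / K + supply ** 3);
  return s2 - supply;
}

// SOL returned for selling a given token amount at the current supply.
function solOut(tokensIn: number, supply: number): number {
  const s1 = supply - tokensIn;
  const gross = (K * (supply ** 3 - s1 ** 3)) / 3;
  return gross * (1 - FEE);
}
```

Note the asymmetry this creates: buying and immediately selling the same tokens returns slightly less SOL than you put in, because the fee is charged on both legs.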

The critical security measures: all math uses checked operations to prevent overflow, PDA signing ensures only the program can move funds, and the graduation threshold is immutable after creation.

Events fire for everything: TokenCreated, TokensBought, TokensSold, TokenGraduated, TokenMigrated. The backend listens to these to keep the database in sync with on-chain state.

---

## The backend

Loco gives me a Rails-like structure: controllers, models, views, workers, mailers. I'm using SeaORM for database access, which feels natural coming from ActiveRecord.

Authentication is magic link based. No passwords. User enters email, gets a link, clicks it, gets a JWT. Simple and secure.

The token endpoints are straightforward:

- `GET /api/tokens` returns active tokens
- `GET /api/tokens/trending` returns tokens sorted by recent volume
- `GET /api/tokens/graduated` returns tokens that made it to Raydium
- `GET /api/tokens/:mint` returns a single token with calculated fields
- `POST /api/tokens` creates a new token (triggers the on-chain creation)
- `GET /api/tokens/my` returns the authenticated user's tokens

The views calculate derived fields: current price from the bonding curve formula, market cap from price times supply, graduation progress as a percentage toward the threshold.
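
A hypothetical version of those derived-field calculations, using the same quadratic curve; the constant and field names are made up for illustration:

```typescript
// Derived fields computed at read time. price(s) = K * s^2 as in the
// bonding curve; K and the field names are illustrative, not the real ones.
const K = 1e-15;

function derivedFields(supply: number, graduationThresholdSol: number) {
  const priceSol = K * supply ** 2; // spot price at the current supply
  const marketCapSol = priceSol * supply; // price times circulating supply
  const graduationProgress = Math.min(
    100,
    (marketCapSol / graduationThresholdSol) * 100
  ); // percent of the way toward graduating to Raydium
  return { priceSol, marketCapSol, graduationProgress };
}
```

Computing these at read time means the database only stores raw reserves and supply, and the formulas live in one place.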

A background worker listens to Solana events and updates the database. When someone buys or sells through the contract, the worker catches the event and updates reserves. When a token graduates, it updates the status. This keeps the API fast because reads hit Postgres, not the blockchain.

---

## The SDK

The SDK is the glue between the frontend and the blockchain. It exports typed functions that match the smart contract instructions:

```typescript
await sdk.createToken({
  name: "MyToken",
  symbol: "MTK",
  imageUri: "https://..."
});

await sdk.buyTokens({
  mint: tokenMint,
  solAmount: 0.1
});

await sdk.sellTokens({
  mint: tokenMint,
  tokenAmount: 1000
});
```

It also exports calculation helpers: estimate tokens out for a given SOL input, estimate SOL out for a given token input, calculate current price, calculate fees. The frontend uses these to show users what they'll get before they sign.

Event listeners let the frontend subscribe to updates for a specific token. When someone else buys or sells, the UI updates in real time.

---

## The frontend

The app has four main pages: home with a grid of tokens, individual token pages with charts and trading, a create flow, and a profile page.

The token grid shows live data from the backend. Trending tokens rotate in a carousel. Each card shows the token image, name, current price, and graduation progress.

The individual token page is where the action happens. A price chart, recent trades, holder distribution, and a trading panel. The trading panel connects to the user's wallet (Phantom, Solflare, etc.) and uses the SDK to execute buys and sells.

The create flow is multi-step: upload image, enter name and symbol, preview, confirm. I want this to feel dead simple. Two minutes from idea to live token.

Hooks abstract the data fetching: `useTokens`, `useTrendingTokens`, `useToken`. They hit the backend API and handle loading and error states. The components stay clean.

---

## Deployment

I'm using Railway for hosting and GitHub Actions for CI/CD.

The monorepo structure caused some headaches. Railway kept trying to build from the repository root instead of the specific app directories. The fix was configuring the root directory in Railway's dashboard to point to `apps/backend` and `apps/frontend` respectively.

Each push to main triggers a workflow that deploys the changed service. Backend changes deploy the backend. Frontend changes deploy the frontend. Both get their own Railway service with their own environment variables.

The backend needs DATABASE_URL and REDIS_URL (Railway provides these when you add Postgres and Redis), plus JWT secrets, SMTP credentials for magic links, and Solana RPC configuration.

The frontend needs NEXT_PUBLIC_API_BASE pointing to the backend URL and Solana network configuration.

---

## What's left

The smart contract needs to be audited before mainnet. I'm running on devnet while I iron out the edge cases.

The backend listener that syncs on-chain events to the database needs more work. Right now it handles the happy path. I need to add retry logic and handle reorgs gracefully.

The frontend needs wallet adapter integration for the actual trading flow. Right now the trading panel is mocked. The SDK is ready, I just need to wire it up.

And I need to figure out the onramp story. Brazilian users need to buy SOL with PIX to use the platform. That's a whole separate problem.

---

## Why I'm building this

The token launch market is dominated by platforms that weren't built for Brazilians. The UX assumes English speakers. The payment rails assume you have easy access to crypto.

I want to build something that feels native. Portuguese UI. PIX integration. Mobile-first design. Content and community features that resonate with Brazilian internet culture.

The technical architecture is mostly solved. The bonding curve works. The contracts are secure. The backend is fast. The frontend is responsive.

Now it's about execution. Ship features, get users, iterate on feedback. The code is the easy part. Building something people actually want to use is the hard part.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[The engineers who leave disasters behind]]></title>
    <link>http://localhost:3000/posts/2024-01-15-the-engineers-who-leave-disasters-behind</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2024-01-15-the-engineers-who-leave-disasters-behind</guid>
    <description><![CDATA[What happens when you inherit a codebase from engineers who mastered looking busy without actually delivering anything]]></description>
    <pubDate>Mon, 15 Jan 2024 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
I just joined a startup as tech lead. Four engineers. Eight months of development. An MVP that was supposed to be ready.

There is nothing.

Let me be more precise. There is code. Thousands of lines of it. Repositories, branches, commits with messages like "fix" and "wip" and "asdf". There are Jira tickets marked as done. There are Slack threads where people discussed architecture decisions. There are invoices paid.

What there isn't: a working product.

---

## What I found

The backend is a maze of half-implemented features. Three different authentication systems, none of them complete. Database migrations that reference tables that don't exist. Environment variables scattered across five different .env files with no documentation about which one is correct.

The frontend has components that import from packages that aren't in package.json. There are TODO comments from six months ago. There's a utils folder with 47 files, most of them duplicates of each other with slight variations.

The infrastructure is a graveyard. AWS resources that no one remembers creating. A Kubernetes cluster running nothing. Terraform files that don't match the actual state of anything.

The tests don't run. The CI pipeline is red and has been red for four months. No one noticed because no one was looking.

Eight months. Four engineers. Nothing to show.

---

## The pattern

This isn't the first time I've seen this. It won't be the last.

There's a type of engineer who has mastered the art of looking busy. They know the vocabulary. They can talk about microservices, event sourcing, domain-driven design, clean architecture. They can fill a whiteboard with boxes and arrows. They can stretch a two-week task into two months with status updates that sound reasonable.

They create complexity because complexity hides incompetence. The more convoluted the system, the harder it is for anyone to evaluate whether they're actually delivering value.

They job-hop before the consequences catch up. By the time the company realizes the codebase is a disaster, they're already at the next startup with a shinier title and a higher salary.

They leave behind engineers like me to clean up the mess.

---

## The damage

Let's talk about what this actually costs.

The startup I just joined burned through eight months of runway paying four engineers. Let's say an average salary of $4,000 per month. Four engineers for eight months is $128,000 in engineering costs alone. Add AWS bills, software subscriptions, office space, equipment. Call it $200,000 total.

For nothing.

But the real cost isn't the money. It's the time. The founders spent eight months believing they were building something. They pitched investors based on a timeline that assumed the product would be ready. They delayed sales conversations. They passed on partnership opportunities.

Now they're starting over with a fraction of the runway they had.

This is what bad engineers do. They don't just fail to deliver. They burn resources that can never be recovered. They destroy companies.

---

## How to spot them

They have opinions about everything but ownership of nothing. Ask them about their previous project and they'll tell you about the architecture. Ask them what they shipped and watch them deflect.

They reach for complexity first. Every problem needs a new framework, a new service, a new abstraction layer. Simple solutions are beneath them. They're not here to solve problems. They're here to build impressive-sounding systems.

Their code is write-only. They understand it when they write it. No one else ever will. Documentation is "on the backlog." Comments are for people who don't understand the code.

They're allergic to deadlines. Everything takes longer than expected. There are always blockers. The requirements were unclear. The third-party API was poorly documented. The product team kept changing things.

They blame the tools, the process, the team, the company. Never themselves.

---

## What I expect from engineers

Ship something. I don't care if it's ugly. I don't care if it's not scalable. I don't care if it doesn't follow every best practice you read about on Hacker News. Ship something that works and solves a real problem.

Own your work. If you wrote it, you're responsible for it. If it breaks, you fix it. If it's slow, you optimize it. If it's confusing, you document it. Don't throw code over the wall and walk away.

Be honest about timelines. If you don't know how long something will take, say so. If you're stuck, say so. If you made a mistake, say so. I can work with uncertainty. I can't work with bullshit.

Delete more than you write. The best code is code that doesn't exist. Every line you add is a liability. Fight the urge to build. Solve the problem with the minimum amount of complexity.

Respect other people's money. Someone is paying for your time. That money came from somewhere. Investors, customers, founders who mortgaged their houses. Every hour you waste is their money burning.

---

## The uncomfortable truth

The tech industry has a hiring problem. We optimize for credentials, not competence. We ask people to invert binary trees on whiteboards instead of evaluating whether they can actually build things.

We let people fail upward. The engineer who spent two years on a project that never shipped gets a senior title at the next company because they have "experience with large-scale systems."

We don't fire people fast enough. We give second chances and third chances and fourth chances because firing is uncomfortable and maybe they just need more mentorship.

We don't talk about this because it's impolite. We're all supposed to be supportive and positive and assume good intent.

I'm done being polite about it.

---

## What I'm doing now

I threw away almost everything. Kept a few utility functions that actually worked. Started over with a clean repo.

Two weeks in, we have a working authentication system. One authentication system. That actually works.

The new rule is simple: nothing gets merged until it's deployed and working in production. No more phantom progress. No more tickets marked done that aren't actually done.

We have daily demos. Every day, you show what you built. Not what you worked on. What you built. If you can't demo it, you didn't build it.

Code reviews are mandatory and brutal. I'm not here to make friends. I'm here to ship a product. If your code is bad, I'll tell you it's bad. If you can't handle that, this isn't the team for you.

---

## To the engineers who do this

You know who you are.

You've hopped between three or four companies in the last five years. Each time, you left just before the technical debt caught up. Each time, you had a good excuse. The company was poorly managed. The requirements kept changing. The team didn't have the right culture.

You've never stayed long enough to maintain what you built. You've never been around when someone else had to understand your code. You've never faced the consequences of your decisions.

You've gotten away with it because the industry lets you get away with it.

But here's the thing: people talk. The circles you move in are smaller than you think. Your reputation exists even if you're not aware of it. The mess you left behind has your name on it.

One day you'll find that the referral you needed didn't come through. That the company that seemed excited went quiet. That the offer you expected never materialized.

It catches up. It always catches up.

---

## To the founders and hiring managers

Stop being impressed by vocabulary. Start being impressed by shipped products.

Call the references. Not the ones they gave you. Find people who actually worked with them. Ask specific questions: What did they ship? How was their code to maintain? Did they meet deadlines?

Look at their GitHub. Not the stars on their repos. The actual code. The commit messages. The documentation. How they respond to issues.

Give them a real problem to solve. Not an algorithm puzzle. A small version of something you actually need built. See how they approach it. See if they ask clarifying questions. See if they ship something that works.

Fire faster. The cost of keeping a bad engineer for six months is catastrophic. The discomfort of a firing conversation is temporary. Make the trade.

---

## Final thought

Software engineering is one of the highest-paying professions in the world. We have leverage that most workers can only dream of. A single engineer can build something that generates millions in value.

That leverage comes with responsibility. When we waste time, we waste serious money. When we build poorly, we create problems that compound for years. When we fail to deliver, companies die.

Most engineers understand this. They take pride in their work. They ship. They own their mistakes. They leave codebases better than they found them.

But too many don't. And the industry is too tolerant of it.

I'm not tolerant of it. Not anymore.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[Rewriting a carbon footprint platform to handle 6 million calculations]]></title>
    <link>http://localhost:3000/posts/2022-11-15-rewriting-a-carbon-footprint-platform-to-handle-6-million-calculations</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2022-11-15-rewriting-a-carbon-footprint-platform-to-handle-6-million-calculations</guid>
    <description><![CDATA[How I led a team to rebuild a critical calculation engine in Go after our Lambda architecture couldn't handle enterprise scale]]></description>
    <pubDate>Tue, 15 Nov 2022 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
![golang](/golang.jpg)

I just took on the challenge of leading a team of 6 software engineers to build a product that measures carbon footprint from different sources: electricity, fuel, freight, employees, and more. The goal is to help companies understand and offset their environmental impact.

It sounds straightforward. It's not.

---

## The inheritance

When I look at what we actually have as a product, it's far from what we need to build.

The company's first iteration was a Shopify plugin. The idea was simple: e-commerce businesses would install the plugin, customers would see a carbon calculator at checkout, and companies would get a dashboard to track their measurements and offsets.

They validated it. It didn't work. They pivoted to a larger platform targeting enterprise clients.

Now I'm standing in front of 6 engineers, most of them junior, with a legacy Python codebase built on AWS Lambda microservices. The architecture made sense for the plugin. It doesn't make sense for what we need to build now.

---

## Designing for survival

The challenge is to design something new while salvaging what we can from the existing code.

My decision: build a central API that mediates all user requests. I call it the mediator. We're using Django because it's fast to prototype and flexible enough to integrate other technologies as we grow. The team already knows Python, so we can move quickly.

We're rewriting all the calculation algorithms as separate Lambdas using the Chalice framework. This is the hard part. Converting complex mathematical formulas into code while maintaining performance is not trivial. Carbon calculations depend on emission factors, conversion rates, distance matrices, and dozens of variables that change by region and fuel type.

For the frontend, we're building with Next.js and Tailwind CSS. Our UI/UX designer already created the entire design system, which makes it easy to transfer everything to Storybook and start building components. The sales team is active, so we're validating every feature with potential leads in real time.

Four months of intense work. The MVP is done.

---

## The first real client

The timing is perfect. Sales just closed our first major contract. We're going to process all the data from TIM, one of Brazil's largest telecom companies, starting with their São Paulo operation.

Then the first spreadsheet arrives.

TIM wants us to calculate the last year of emissions retroactively. Just for one small third-party logistics operation. I open the file.

6 million freight records.

Each record requires multiple calculations. Each calculation depends on external variables. Many of them require API calls to get emission factors, distance data, fuel coefficients. Every call has latency.

We run the import.

Nothing processes. The system chokes. The MVP is broken on one of its most critical features: freight calculation.

---

## The bottleneck

I call a meeting with the founders.

The problem is clear: Lambda runtime is killing us. Freight calculation alone involves at least 4 Lambdas that call 8 more Lambdas. The cold starts, the invocation overhead, the serialization between functions. It all adds up. For a few hundred records, it's fine. For 6 million, it's impossible.

My proposal: extract the entire freight calculation from Lambda and rewrite it in Go.

Why Go? I need raw performance and easy concurrency. With Go, I can process 4 rows per second per thread and spin up parallel workers to handle the volume simultaneously. Lambda's execution model doesn't give me that control.

The founders agree. We have a client waiting.

---

## Three more months

We spend the next three months rewriting the freight calculation engine in Go.

It's not just the calculation logic. We build a WebSocket server, also in Go, that pushes real-time status updates to the frontend. Users can see exactly where their calculation is: how many records processed, estimated time remaining, any errors encountered.

We add Kafka for messaging between services. The calculation workers pull jobs from the queue, process them, and publish results. The system is decoupled, scalable, and light.
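
The worker fan-out is where the throughput comes from. Here's a minimal sketch of the same pattern, written in TypeScript purely for illustration; the real engine is Go consuming from Kafka, and the names and concurrency number here are made up:

```typescript
// N "lanes" pull the next unprocessed row until the queue is drained,
// mirroring the Go worker pool (this stands in for Kafka consumers).
async function processAll<T, R>(
  rows: T[],
  worker: (row: T) => Promise<R>,
  concurrency: number
): Promise<R[]> {
  const results: R[] = new Array(rows.length);
  let next = 0; // shared cursor; safe here because JS runs single-threaded
  const lanes = Array.from({ length: concurrency }, async () => {
    while (next < rows.length) {
      const i = next++;
      results[i] = await worker(rows[i]);
    }
  });
  await Promise.all(lanes);
  return results;
}
```

The key property is that slow rows don't block fast ones on other lanes, which is exactly what the Lambda chain couldn't give us.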

Finally, we're ready to test with the same spreadsheet that broke us before.

We run it.

Hours pass. The progress bar moves steadily. No crashes. No timeouts. No memory explosions.

It works.

---

## What I learned about technical decisions

The easy choice would have been to optimize the Lambdas. Add more memory. Increase timeouts. Batch the requests. We could have spent months squeezing incremental improvements out of an architecture that wasn't designed for this workload.

Instead, we rewrote the critical path in a different language.

This isn't always the right call. Rewrites are expensive. They introduce new bugs. They require the team to learn new tools. But sometimes the architecture itself is the constraint. No amount of optimization will fix a fundamental mismatch between what you're building and how you're building it.

The decision framework I'm using now:

1. Identify the actual bottleneck, not the perceived one
2. Ask whether optimization can solve it or if the constraint is architectural
3. If architectural, isolate the problem and rebuild only that piece
4. Choose the right tool for that specific problem, even if it's different from the rest of the stack

We didn't rewrite the entire system in Go. We didn't throw away Django or the Lambdas that work fine for other calculations. We identified the one piece that couldn't scale and rebuilt it with the right tool.

The mediator still mediates. The Lambdas still calculate electricity and fuel. The frontend still runs on Next.js. But when 6 million freight records come in, they flow through a Go service designed specifically for that job.

Sometimes the best architecture is the one that lets different parts of your system be built differently.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[What is equity and why you should care as a tech lead]]></title>
    <link>http://localhost:3000/posts/2022-10-15-what-is-equity-and-why-you-should-care-as-a-tech-lead</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2022-10-15-what-is-equity-and-why-you-should-care-as-a-tech-lead</guid>
    <description><![CDATA[A practical guide to understanding startup equity, vesting schedules, and what to watch out for when negotiating compensation]]></description>
    <pubDate>Sat, 15 Oct 2022 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
![equity](/equity.png)

As you advance in your career as a software engineer, it's common to feel stuck. You don't know where to go next. Maybe you're at a small startup that just raised funding, and suddenly people are throwing around terms like equity, vesting, cliff, and dilution.

I am in that exact position.

---

## My story

I am a tech lead at a startup that has just raised $500,000. My salary is around $4,000 per month, which is already good for the Brazilian market. The founders offered me a raise.

Instead, I asked for equity.

This decision forced me to actually understand how startups work. What are stock options? How does vesting work? What happens if the company fails or gets acquired?

I spent weeks researching. Here's what I learned.

---

## What is equity

Equity is ownership in a company. When you receive equity as part of your compensation, you're getting a piece of the business.

For startups, this usually comes in the form of stock options. You're not getting actual shares immediately. You're getting the option to buy shares at a fixed price (the strike price) in the future.

The idea is simple: if the company grows and becomes valuable, your options let you buy shares at the old, lower price. You can then sell them at the current, higher price. The difference is your profit.

If the company fails or never grows, your options are worth nothing.

---

## How vesting works

You don't get all your equity at once. It vests over time, usually four years.

A typical vesting schedule looks like this:

- **Cliff:** You get nothing for the first year. If you leave before 12 months, you walk away with zero equity.
- **Monthly or quarterly vesting:** After the cliff, your equity vests gradually. If you have 10,000 options over 4 years with a 1-year cliff, you get 2,500 after year one, then roughly 208 per month after that.

This protects the company from people who join, grab equity, and leave immediately.
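
The schedule above is easy to sanity-check in code. A toy calculator, assuming monthly vesting after the cliff; the numbers are the example from the text, not any particular grant:

```typescript
// 4-year monthly vesting with a 1-year cliff. At the cliff, the first
// year's worth vests all at once; after that, vesting is linear by month.
function vestedOptions(totalOptions: number, monthsElapsed: number): number {
  const totalMonths = 48; // 4 years
  const cliffMonths = 12; // 1-year cliff
  if (monthsElapsed < cliffMonths) return 0; // leave early, walk away with zero
  if (monthsElapsed >= totalMonths) return totalOptions; // fully vested
  return Math.floor((totalOptions * monthsElapsed) / totalMonths);
}
```

With 10,000 options: nothing at month 11, 2,500 at the cliff, then roughly 208 more each month until month 48.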

---

## What equity is actually worth

Here's the uncomfortable truth: most startup equity is worth nothing.

The majority of startups fail. Even if they succeed, your equity gets diluted with every funding round. That 1% you negotiated at seed stage might be 0.3% by Series B.
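
Dilution compounds multiplicatively: each round that sells a fraction of the company scales every existing stake by what's left. A quick sketch, with round sizes invented for illustration:

```typescript
// Each round selling a fraction f of the company multiplies every existing
// shareholder's stake by (1 - f). The round fractions below are made up.
function dilutedStake(initialPct: number, roundFractions: number[]): number {
  return roundFractions.reduce((pct, f) => pct * (1 - f), initialPct);
}
```

A 1% grant, after rounds selling 25% and then 20% of the company, becomes 1 × 0.75 × 0.80 = 0.6%; one more 25% round takes it to 0.45%.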

To understand what your equity might be worth, you need to know:

- **The company's current valuation:** If the company is valued at $5 million and you own 0.5%, your stake is theoretically worth $25,000.
- **The strike price:** How much do you have to pay to exercise your options?
- **Liquidation preferences:** In many exits, investors get paid first. Sometimes there's nothing left for common shareholders.
- **Dilution:** How much funding does the company plan to raise? Each round shrinks your percentage.

Don't treat equity as guaranteed money. Treat it as a lottery ticket with better odds than usual.

---

## What you should watch out for

**Get everything in writing.** I've seen engineers work for years believing they had equity, only to discover there was no signed agreement. No document, no equity.

**Understand your vesting schedule.** Know your cliff date. Know when each chunk vests. Track it yourself.

**Ask about the cap table.** How much of the company do employees own in total? If it's less than 10%, that's a red flag. It means the founders aren't sharing much.

**Know your strike price.** If you have to pay $10,000 to exercise your options, that changes the math significantly.

**Check the exit scenarios.** Ask what happens if the company sells for $10 million, $50 million, or $100 million. Many option grants have clauses that can surprise you.

**Understand the tax implications.** In some countries, you owe taxes when you exercise, not when you sell. This can create a cash flow problem.

---

## When to negotiate equity

Equity makes sense when:

- You believe in the company's potential
- You can afford to take a lower salary in exchange
- You're joining early enough that your percentage is meaningful
- You plan to stay long enough to vest a significant amount

Equity doesn't make sense when:

- You need every dollar of salary to pay bills
- The company has no clear path to an exit
- You're joining late and the percentage is tiny
- The founders are shady about sharing cap table information

In my case, I am already comfortable with my salary. Taking equity instead of a raise is a bet I can afford to make.

---

## Where to learn more

Y Combinator has published a ton of free resources about startup equity, fundraising, and how the ecosystem works. Their library is the best place to start if you want to understand this world.

A few specific resources:

- [YC's guide to equity compensation](https://www.ycombinator.com/library)
- [The Holloway Guide to Equity Compensation](https://www.holloway.com/g/equity-compensation) (free to read online)
- YC's job board (Work at a Startup) is also a good place to find startup jobs and connect with founders

Understanding equity is changing how I evaluate opportunities. It's not just about salary anymore. It's about understanding the whole picture: cash, equity, risk, and potential upside.

---

## The bottom line

Equity is not free money. It's a bet on a company's future, paid for with your time and sometimes your salary.

As a tech lead, you're in a position to negotiate. But negotiate from knowledge, not hope. Understand what you're getting, what it's worth, and what could go wrong.

The worst outcome isn't equity that ends up worthless. It's spending years at a company thinking you own something you don't.

Get it in writing. Track your vesting. Understand the math. Then make your decision.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[Why You Don't Need Microservices]]></title>
    <link>http://localhost:3000/posts/2022-01-15-why-you-dont-need-microservices</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2022-01-15-why-you-dont-need-microservices</guid>
    <description><![CDATA[A pragmatic analysis of when monoliths make more sense than microservices]]></description>
    <pubDate>Sat, 15 Jan 2022 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
Most developers have heard the siren song of microservices. They promise scalability, independent deployments, and team autonomy. But here's the thing: **you probably don't need them.**

## The Microservices Hype

Every tech conference talks about microservices as if they're the solution to every architectural problem. Companies love to showcase their microservices architecture as a badge of sophistication. But most of these companies started with monoliths and grew them successfully.

## When Monoliths Make Sense

A monolithic architecture is perfect for most applications. Here's why:

1. **Simpler deployment** - One artifact, one process, one database
2. **Easier debugging** - Stack traces make sense across the entire application
3. **Better performance** - No network hops between services
4. **Faster development** - No need for service discovery, API versioning, or distributed tracing

## The Reality Check

Microservices add complexity:

- Distributed system challenges (network partitions, eventual consistency)
- Deployment orchestration (Kubernetes, Docker Swarm, etc.)
- Service mesh overhead (Istio, Linkerd)
- Monitoring and observability across services
- API versioning and compatibility

## Code Example

Here's what a simple monolith looks like:

```typescript
// Simple, straightforward code
async function createOrder(userId: string, items: Item[]) {
  const user = await db.users.findById(userId)
  const order = await db.orders.create({
    userId,
    items,
    total: calculateTotal(items),
  })
  
  await sendEmail(user.email, { orderId: order.id })
  return order
}
```

In a microservices world, this becomes:

```typescript
// Distributed across 3 services, 2 databases, and a message queue
async function createOrder(userId: string, items: Item[]) {
  const user = await userService.getUser(userId) // HTTP call
  const order = await orderService.create({       // HTTP call
    userId,
    items,
  })
  
  await messageQueue.publish('order.created', {   // Async message
    orderId: order.id,
    userEmail: user.email,
  })
  
  // Hope the email service processes the message correctly
  return order
}
```

## When You Actually Need Microservices

Microservices make sense when:

- You have **multiple teams** that need to deploy independently
- You have **legacy systems** that can't be rewritten
- You have **extreme scale** requirements (think Netflix, Amazon)
- Different parts of your system have **wildly different scalability needs**

For 99% of applications, a well-structured monolith is the better choice.
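
For illustration, here's one way "well-structured" can look inside a monolith: modules that hide their storage behind narrow interfaces, so the seams you'd extract a service along already exist. The types and names are made up for the sketch:

```typescript
// Illustrative only: explicit module boundaries inside a monolith.

interface Item { sku: string; price: number }
interface Order { id: string; userId: string; total: number }

// The module hides its storage behind a narrow class interface. If the
// order module ever needs independent scaling, this seam is where you'd
// extract a service — callers wouldn't have to change shape.
class OrderModule {
  private orders = new Map<string, Order>();

  create(userId: string, items: Item[]): Order {
    const order: Order = {
      id: `order-${this.orders.size + 1}`,
      userId,
      total: items.reduce((sum, item) => sum + item.price, 0),
    };
    this.orders.set(order.id, order);
    return order;
  }
}

const orderModule = new OrderModule();
const order = orderModule.create("user-1", [
  { sku: "book", price: 30 },
  { sku: "mug", price: 12 },
]);
// Everything above runs in one process: no network hop, one stack trace.
```

The discipline is in the boundaries, not the deployment unit.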

## Conclusion

Start simple. Build a monolith. Refactor when you have **actual problems**, not theoretical ones. Premature optimization is the root of all evil, and microservices are often premature complexity.


Remember: the best architecture is the one that gets your product shipped and maintained with the least amount of complexity.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[How I Failed an Interview Because of My English and Fixed It in 6 Months]]></title>
    <link>http://localhost:3000/posts/2020-02-15-como-fui-reprovado-por-ingles-e-resolvi-em-6-meses</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2020-02-15-como-fui-reprovado-por-ingles-e-resolvi-em-6-meses</guid>
    <description><![CDATA[My journey from failing an interview because of my English to communicating fluently in 6 months]]></description>
    <pubDate>Sat, 15 Feb 2020 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
It was 2020, and we were completely isolated at home because of the Covid-19 pandemic. I opened my LinkedIn and it was flooded with recruiters sending messages looking for my expertise.

Although I already had experience on international projects, the communication always happened over text messages. Most of it came from freelance work. I was just starting the journey of doing technical and HR interviews entirely in English.

That's when a recruiter from [Runa](https://www.linkedin.com/company/runahr/) called me. It felt odd because he started in English right away. He explained that he had an opportunity and wanted to do an interview over video call. We scheduled it for two days later.

On the call there was also a senior engineer. It went smoothly. The role was for a tech lead based in São Paulo/Mexico. The offer was very good: R$20k plus bonus.

I was approved and invited to the second stage in São Paulo. I traveled there and went to the office in Vila Mariana for the heavier technical interview.

The challenge was complex and had a catch: solving a real problem and leading the team remotely, right there. I wasn't expecting that level of testing. My English also wasn't good enough for the role.

A week later, the recruiter called me. He said my technical performance was very good, but my English and communication were well below what was expected. There were senior engineer openings, but he suggested I focus on improving my English and try again in 6 months.

It was a bucket of ice water.

I hadn't even applied for the role, but after that hiring process I was excited. Knowing that I was technically ready to lead an international team, but that my English was terrible, left me frustrated.

That's when I started looking for ways to improve my English in a short time. I could communicate, but very slowly and with tons of grammar mistakes. Sometimes I spoke fast to try to cover up those mistakes.

After a quick YouTube search, I found [this video by Akita](https://www.youtube.com/watch?v=OkboNGQ9LU0). I already followed him from having worked with Rails. Understanding that reaching fluency was possible, and possible on my own, made me decide to take on the challenge.

---

## The fundamentals first

I started by focusing on the fundamentals of the language: verbs, sentence structure, and conversations about everyday topics.

The idea was to refine what I already knew. What I knew had to become solid.

I used Cambly. I spent about $600 on lessons and conversations with different people. Most of them were already teachers, so it went smoothly. Once I told them what I wanted to improve, they naturally kept refining my English.

Right after that, I went all in on switching everything to English. Even my TV. I couldn't speak English every day, but I tried to communicate in English whenever possible: commenting on open source projects, participating in communities.

I became friends with a guy in California who was building a startup. We had a trade: I was his tech advisor and he helped me with my English. That was a turning point. Within 6 months, I was already quite comfortable with natural day-to-day conversation.

---

## My current routine

When I wake up, I read a book in English. Most of the time, technical books. Then I read articles on sites like HackerNews and write a brief summary of what I understood, writing in English and saying it out loud.

In downtime, like lunch, I watch YouTube videos, all in English. "Day in the life" videos help me pick up everyday jargon and slang. I understand around 60-70%.

At night, I watch English-language shows with subtitles with my wife. We're currently watching The Ranch. The accent is genuinely hard to follow, and sometimes it makes my head hurt.

---

## The mistakes that slowed me down the most

**I always underestimated myself.** I thought nobody would understand me. I kept pushing the decision to apply for international roles to later. Huge mistake. In 5 years working for companies abroad, I've only met people who encouraged me. Nobody ever laughed at my English or made a face at my accent.

**I thought I had to understand 100% on the first try.** That doesn't exist even in Portuguese. Communication is hard. Asking questions is essential. How many times in meetings in Portuguese do you ask people to repeat themselves? In English it's the same.

**I took too long to start applying.** I kept waiting for the perfect English that never came. Meanwhile, I missed opportunities.

---

## Preparing for technical interviews

### Lifesaving phrases

Memorize these phrases. They will save you when you freeze:

- "Could you repeat that, please?"
- "Let me make sure I understood..."
- "I'm thinking through this problem..."
- "My approach would be..."

### When you don't understand the question

The worst mistake is pretending you understood. Interviewers would much rather have someone who asks for clarification than someone who answers the wrong question.

The wrong way: "Uh... yes... I think... maybe..." followed by coding something random.

The right way: "I want to make sure I understand the requirements. Are you asking me to...?"

### A template for behavioral answers

Most behavioral questions follow the STAR format. Prepare simple stories:

- **Situation:** "At my current job..."
- **Task:** "I needed to..."
- **Action:** "So I decided to..."
- **Result:** "This led to..."

Simple words. Short sentences. Total clarity.

---

## Beating the psychological barrier

What holds Brazilians back the most isn't vocabulary. It's fear. Fear of making mistakes. Fear of the accent. Fear of looking dumb.

Your accent is an advantage. It shows you speak at least two languages. How many monolingual Americans do you know?

Three steps to lose the fear:

1. Start making mistakes in safe environments: dev Discord servers, calls with friends
2. Accept that you'll make mistakes forever. To this day I still sometimes say "I have 27 years"
3. Focus on being understood, not on being perfect. If the message got through, mission accomplished

---

## Technical content to start consuming today

**YouTube channels:**
- WebDevCody
- ByteByteGo
- Hello Interview
- Starter Story
- A Life Engineered
- Andrej Karpathy

**Podcasts:**
- The Pragmatic Engineer
- Refactoring Podcast
- The Peterman Pod
- Soft Skills Engineering
- Front End Happy Hour
- Latent Space: The AI Engineer Podcast
- localfirst.fm
- Lenny's Podcast
- The Knowledge Project
- Deep Questions with Cal Newport

**Communities:**
- Taro
- Reactiflux

Joining communities with live events that interest you helps a lot. They don't necessarily have to be tech-focused, although that helps if your goal is to take your career international.

---

## Your next step

Don't leave this article without doing something practical:

1. Switch your phone and computer language to English (1 minute)
2. Watch a 10-minute video about your favorite framework in English (10 minutes)
3. Write a comment in English on your next PR (2 minutes)
4. Schedule a 1:1 meeting with someone at your company to practice English (1 email)

The secret isn't studying more. It's practicing more.

---

## Summary

Forget certificates and perfect grammar. Companies want clear communication, not diplomas. Focus on being understood using simple words. Your accent is normal and acceptable.

The effective path has two phases: a solid foundation in the first 3 months, then smart immersion from months 3 to 6. Lessons for the basics (verbs, structure, vocabulary), then practice (YouTube, entertainment, conversation at work).

Prepare specifically for interviews. Have phrases ready and practice with AI. Learn to ask for clarification without fear. Use the STAR structure with simple words.

You could be doing international interviews sooner than you think. Or you could keep looking for the perfect course. The choice is yours.
]]></content:encoded>
  </item>
  <item>
    <title><![CDATA[Cryptocurrencies After the Crash: Bubble, Correction, or Growing Pains?]]></title>
    <link>http://localhost:3000/posts/2018-02-15-cryptocurrencies-after-the-crash</link>
    <guid isPermaLink="true">http://localhost:3000/posts/2018-02-15-cryptocurrencies-after-the-crash</guid>
    <description><![CDATA[Bitcoin fell from nearly USD 20,000 to USD 6,000–8,000. Was it a bubble? Does that even matter?]]></description>
    <pubDate>Thu, 15 Feb 2018 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[
If December was euphoria, January was gravity.

After one of the most aggressive speculative runs in modern financial history, cryptocurrencies slammed into reality. Bitcoin fell from nearly USD 20,000 to the USD 6,000–8,000 range. Most altcoins lost more than half of their value. Some were nearly wiped out.

The mood shifted overnight.

The same people who were asking *"How high can this go?"* started asking *"Is this dead?"*  
The word *bubble* came back — louder, angrier, more confident.

So let's ask the question again, properly this time:

**Was it a bubble? And does that even matter?**

---

## The Obsession With Bubbles

Humans love clean narratives.

When prices go up fast, we call it *revolution*.  
When they go down fast, we call it *fraud*.

Reality is always messier.

Yes, 2017 was irrational.  
Yes, valuations ran far ahead of fundamentals.  
Yes, a lot of people bought things they didn't understand with money they couldn't afford to lose.

That doesn't make the entire system invalid.

It makes it **young**.

---

## We've Seen This Movie Before

Anyone who worked in technology during the late 90s feels a deep sense of déjà vu right now.

The dot-com crash of 2000–2001 followed the exact same psychological arc:
- Exponential optimism  
- Capital chasing ideas faster than execution  
- Weak companies riding strong narratives  
- A violent correction  
- Public mockery  
- Declaring the whole thing a mistake  

In 2001, the Nasdaq collapsed. Countless companies vanished. Careers were disrupted. Confidence evaporated.

And yet — here we are.

The internet didn't fail.  
It **shed excess**.

What survived became infrastructure.

---

## Crashes Don't Kill Technologies — They Filter Them

A crash doesn't mean *nothing had value*.  
It means *everything was priced as if it were perfect*.

That's unsustainable in any system.

What collapses after a speculative mania:
- Poor execution  
- Empty promises  
- Projects built only for price appreciation  

What remains:
- Core ideas  
- Real utility  
- Hard lessons  
- Stronger builders  

Cryptocurrencies didn't disappear in January 2018.  
They got quieter. And quieter is where real work happens.

---

## Prediction Is Still a Trap

After the crash, people look for certainty.

Indicators. Ratios. Models. Forecasts.

They all suffer from the same problem:

> **Correlation is not causation.**

Markets are driven by human behavior — fear, greed, hope, regret — wrapped in numbers. You can analyze tendencies, but you cannot predict outcomes with precision.

Anyone claiming certainty is either selling something or lying to themselves.

---

## Risk, Properly Understood

The real mistake people made wasn't buying Bitcoin.

It was misunderstanding **risk**.

Risk isn't volatility.  
Risk is *position size*.

If losing an investment would destroy your life, it was never an investment — it was a gamble.

If losing it would hurt, but not break you, then you were engaging in **asymmetric risk**.

That asymmetry is what made crypto interesting in the first place:
- Limited downside (defined by position sizing)  
- Massive upside (if adoption continues over years, not weeks)

That equation hasn't changed just because prices fell.
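
The asymmetry above can be sketched in a few lines. All numbers here are hypothetical, not investment advice:

```typescript
// Sketch of asymmetric risk via position sizing. Hypothetical numbers.

// Without leverage, the most a position can cost the portfolio is the
// position itself — that's what "risk is position size" means.
function maxLossPct(positionSize: number, portfolio: number): number {
  return (positionSize / portfolio) * 100;
}

// The upside is not bounded by the position size the same way.
function payoff(positionSize: number, multiple: number): number {
  return positionSize * multiple;
}

// A 2% position going to zero costs 2% of the portfolio: painful, not fatal.
const worstCase = maxLossPct(2_000, 100_000);

// The same position at a 10x outcome returns 20% of the starting portfolio.
const bestCase = payoff(2_000, 10);
```

The downside is fixed the day you size the position; only the upside depends on what markets do next.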

---

## "But What If It Never Comes Back?"

This question surfaces after every crash in history.

And it's always the wrong frame.

The better question is:
**Did something fundamental break?**

As of February 2018:
- The network still works  
- Transactions still settle  
- Developers are still building  
- Institutions are still experimenting  
- Governments are still paying attention  

That doesn't look like a dead system.  
It looks like one digesting excess.

---

## The Market Is Less Fun — and That's a Feature

Speculative manias are exciting.  
They're also terrible environments for thinking.

After a crash:
- Noise disappears  
- Weak ideas fade  
- Signal improves  
- Time horizons lengthen  

What remains isn't hype — it's conviction.

And conviction doesn't shout.

---

## Final Thought

So — was 2017 a bubble?

Probably.

But bubbles are not the opposite of progress.  
They are the cost of discovering new territory.

The dot-com bubble didn't kill the internet.  
It taught it how to grow up.

Cryptocurrencies may be going through the same process.

And history suggests that what survives this phase won't look impressive today —  
but it will be impossible to ignore later.
]]></content:encoded>
  </item>
  </channel>
</rss>