Spring Cleaning
March, 2026
Previously on Dorsal Dev Log
Last time we saw our friendly neighbourhood coder, he was finally implementing environment variables and secrets. Since then, Dorsal has learned to automatically redeploy applications when those environment variables change!
While February was a big milestone, with some new features finally completed, March has primarily focused on wrapping up those features’ loose ends and cleaning up old code.
Consolidating Kubernetes
In an earlier implementation, Dorsal’s Kubernetes deployments were largely tied to the Pulumi stack lifecycle. This made things simpler at the time, but it also meant that every update was heavier and took longer to apply.
Since then, Kubernetes resources have been migrated to Argo CD. This month, that process was completed, with the final ConfigMap being removed from Pulumi and the pulumi-kubernetes package removed as a dependency.
Now, Pulumi manages only AWS resources, and Argo CD handles Kubernetes state reconciliation. This decouples the application lifecycle from infrastructure management, speeding up and simplifying iterative application changes.
As a result, changes to environment variables in Dorsal are reflected with an application redeployment in Kubernetes within 90 seconds.
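For a sense of what the Argo CD half of that split looks like, here is a minimal, hypothetical Application manifest; the names, repo URL, and chart path are placeholders for illustration, not Dorsal’s actual configuration:

```yaml
# Hypothetical Argo CD Application. Argo CD watches the source repo and
# keeps the cluster in sync with the rendered Helm chart, so Pulumi never
# has to touch Kubernetes resources directly.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app          # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/manifests.git  # placeholder repo
    targetRevision: main
    path: charts/example-app                    # placeholder chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      prune: true     # delete resources removed from the manifest
      selfHeal: true  # revert manual drift in the cluster
```

With `automated` sync enabled, a change to the manifest in the repo is all it takes for the cluster to converge, which is what makes the sub-90-second redeploy possible.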
Wrangling Webhooks
There are two primary sets of concerns in the Dorsal implementation: Stacks and Applications.
Stacks are Pulumi Kubernetes Operator resources. Applications are Argo CD applications, which deploy a Helm Chart.
Given what I’ve said in the previous section, it should be obvious how these implementation details are related but not tightly coupled.
One communication strategy that I’ve implemented to bridge the gap between these subsections is Stack webhooks. These are callbacks, not unlike those in the Pulumi Cloud, which POST the Pulumi Stack exports back to a provided webhook endpoint.
Internally, the callback endpoint for Dorsal itself is a Dorsal API. This allows me to then ingest any relevant stack exports, such as Route 53 nameservers, store them in the database, and trigger an Application manifest re-render.
The Application manifest is then re-rendered with the new value(s), if any. If anything changed, Argo CD spots the update to the Application manifest and reconciles it with the cluster state, automatically rolling out the new Application in Kubernetes.
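The ingest step can be sketched roughly like this. To be clear, every name below is hypothetical, sketching the flow rather than Dorsal’s actual API: the callback receives the Stack exports, diffs them against what is stored, and a re-render is only triggered when something actually changed.

```python
# Hypothetical sketch of the webhook ingest step: diff the incoming Stack
# exports against the stored copy and report which keys changed, so a
# manifest re-render (and thus an Argo CD sync) only happens when needed.

def diff_stack_exports(stored: dict, incoming: dict) -> dict:
    """Return only the exports whose values are new or different."""
    return {
        key: value
        for key, value in incoming.items()
        if stored.get(key) != value
    }

def handle_stack_webhook(stored: dict, payload: dict) -> dict:
    """Ingest a Stack webhook payload and decide whether to re-render."""
    changed = diff_stack_exports(stored, payload.get("exports", {}))
    stored.update(changed)  # stand-in for persisting to the database
    return {"changed": changed, "rerender": bool(changed)}
```

For example, a payload carrying fresh Route 53 nameservers would come back with `rerender: True` the first time, and `rerender: False` on a repeat delivery with identical exports.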
When I was working out the details for how to design the webhooks, I initially opted for more specific callbacks. This month, I decided to standardize around a single callback per Stack that includes the stack exports, and occurs at the end of the Stack update.
This should work well as a forcing function, both for keeping the behaviour comprehensible and for requiring the stack exports to include all important Stack resource artifacts.
Pulumi Stack exports will be important in my eventual goal of providing a “bring your own state backend” option, where you can import these exports into any other Pulumi stack that you have, making Dorsal more extensible for end users.
Jekyll
Lastly, I have decided to accept the reality that my in-house Markdown Blog server was a bad product.
It was never merged to the trunk branch and was simply hosting this Dev Log as a proof of concept. However, that server and branch have now been shut down, and this blog has been converted to a standard Dorsal-hosted static website using Jekyll.
Ultimately, this is a far better solution anyway. It provides all of the benefits of static website hosting that I’ve already implemented, such as a CDN and integration with Route 53.
For now, this is still defined via a generic “website” GUI in the Dorsal Console, but I’m thinking of adding a few static site generator-specific forms that will make this super easy for end users.
For example, right now, I’m defining:
- Build Context: `blog/dev-log`
- Content: `blog/dev-log/_site`
- Command: `jekyll build`
It would be really nice to give users the option to select “Jekyll” as a standardized static website option, and then provide a more Jekyll-specific form that simply asks for the “Path to _config.yaml”.
Since I’ve already required the user to connect their GitHub repo, I could then automatically verify that this path is valid and contains the config file. I could even parse the file and pull out the build output directory, if one is provided.
Under the hood, it would still just be the generic static website config, but a friendly abstraction would make it feel much better.
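The mapping could be as small as a single function. This is entirely hypothetical, sketching the idea rather than Dorsal’s code; it assumes the user supplies the config file path, and it honours Jekyll’s `destination` override when one is parsed out of the config:

```python
# Hypothetical sketch: derive the generic static-website settings from a
# user-supplied Jekyll config path. Jekyll's default output directory is
# _site, overridable via the `destination` key in its config file.
from pathlib import PurePosixPath
from typing import Optional

def jekyll_to_generic(config_path: str,
                      destination: Optional[str] = None) -> dict:
    """Map a Jekyll config path to the generic static-site config."""
    build_context = str(PurePosixPath(config_path).parent)
    output_dir = destination or "_site"
    return {
        "build_context": build_context,
        "content": str(PurePosixPath(build_context) / output_dir),
        "command": "jekyll build",
    }
```

Fed the path for this blog, it would reproduce the same three values I listed above, while still leaving the generic config as the single source of truth under the hood.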
The most exciting part about all of this for me, though, is that I was able to convert my personal blog to use Jekyll as well (any static site generator would have worked), and host it with Dorsal.
If memory serves, this is the first end user content officially hosted with Dorsal and configured purely via the Console. There were no administrator shenanigans required.
Next Steps
There are two big, final steps that are extremely clear. In theory, with these completed, I’ll be able to finally offer Dorsal hosting to some trusted users.
These steps are:
- Refactoring the Docker Container / Kubernetes Deployment data model
- Eventually: integrating payment processing
Because my first handful of testers won’t be charged, the second one is more of a formality, but I’d still like to have that well-tested by some friends/acquaintances before opening up the Kubernetes hosting offer to the internet as a whole.
The first step, however, is the one that I’ve been procrastinating on for about a week… It’s critical, but it’s also a lot of work, and my brain has been tired lately.
Much like the environment variables and secrets management, though, I know that I’ll be so glad once it’s done, so that’s the primary goal for the immediate future.
Wish me luck.
Until next month,
Nathan