Gift of GitOps
December 2025
Pulumi Cloud
Over the last 3-4 years, I've had a handful of calls with the Pulumi team. Each one was enjoyable and productive in its own way. The team is always a pleasure to work with, and the product has improved tremendously since I first started using it over 4 years ago. With that context, it is with no ill will that I share that my focus for this past month was migrating off of the Pulumi Cloud.
For the past year, the Pulumi Cloud has served as Dorsal's Pulumi state backend. This convenient default has allowed me to build out some initial functionality, find my footing, and explore what I want the product to be. However, from the beginning, I knew that it would not be a workable platform for my business model, due to the pricing structure.
Though the pricing has been improved this year, which was much appreciated, the per-resource pricing model remains. With a little back-of-the-napkin math, I quickly saw that the cost for me to store the resource state in Pulumi would be greater than the cost of the resource itself in AWS...
So, no bueno.
The Pulumi Kubernetes Operator 2.0
On the most recent call that I had with the Pulumi team, I requested to hear more about the Pulumi Kubernetes Operator 2.0 (PKO). Personally, I found the documentation to be lacking. Despite my (self-assessed) reasonable level of familiarity with Kubernetes, I didn't see a paved road to a Hello World deployment.
After the call, the Pulumi team sent over some links to help get me started with the PKO. Unfortunately, they were all documents that I had read before, but they did point me in the general direction.
With Argo CD managing the Cluster, I was able to iterate quickly through guess-and-check testing cycles. Over a weekend, I managed to tease out how I could not only run the PKO but also migrate all of my Pulumi Stacks from the Pulumi Cloud to my newly created PostgreSQL backend.
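For the curious, moving a stack between backends boils down to Pulumi's export/import flow. Here's a rough sketch; the stack name and connection string are placeholders, and whether your PostgreSQL backend accepts a plain connection string like this depends on your setup:

```shell
# Export the stack's state while still logged in to Pulumi Cloud
pulumi stack export --stack org/project/prod --file prod-state.json

# Log in to the self-hosted backend instead
pulumi logout
pulumi login "postgres://pulumi@postgres.internal:5432/pulumi"

# Recreate the stack there and import the exported state
pulumi stack init prod
pulumi stack import --stack prod --file prod-state.json
```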
Oh, the joys of moving quickly when you don't yet have paying customers!
In the 3 or so weeks since this migration, I am pleased to report that it has been an overwhelming success.
Kubernetes Native
One immediate and obvious benefit came from moving the core of my operational complexity to a Kubernetes-native solution.
The Dorsal architecture is designed to gradually reconcile whatever state is declared in the database, based on the end-user's configuration, with the deployed state of the Pulumi Stack and the user's application code. Previously, this was achieved through the Pulumi REST API, webhooks, and, at times, repeated work. This required that actions be idempotent, at least for any meaningful values.
Because Kubernetes operates in, effectively, this same model, it was a very natural process to adopt declarative Kubernetes manifests for a larger chunk of the system. For example, rather than POSTing the deployment configuration to the Pulumi REST API, I can now provide the entire configuration directly in the rendered Stack manifests.
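To make that concrete, here's roughly what one of those rendered Stack manifests looks like. The names, repo URL, and config values are illustrative, not Dorsal's actual configuration:

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: customer-app-prod
  namespace: dorsal-deployments
spec:
  stack: prod
  projectRepo: https://github.com/example/customer-app-infra
  branch: main
  # The deployment configuration lives right in the manifest,
  # rather than being POSTed to a REST API.
  config:
    aws:region: us-east-1
    app:replicas: "2"
  # Secrets are referenced, never inlined.
  envRefs:
    PULUMI_CONFIG_PASSPHRASE:
      type: Secret
      secret:
        name: pulumi-backend-secret
        key: passphrase
```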
Now, I simply need to have a process for generating the Stack (Pulumi) and Application (Argo CD) manifests based on the user-provided values in the Dorsal database, and the Kubernetes magic handles the rest.
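On the Argo CD side, each generated Application is entirely conventional: point it at the repo of rendered manifests and let auto-sync do the rest. Again, the repo URL and paths here are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: customer-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/dorsal-rendered-manifests
    targetRevision: main
    path: customers/customer-app
  destination:
    server: https://kubernetes.default.svc
    namespace: dorsal-deployments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```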
From my earliest implementations of Dorsal, I've aimed to maintain a one-way data flow. While sometimes (oftentimes) more difficult to implement, I think it is far easier to reason about. It also allowed for the idempotent actions, which Kubernetes now handles as efficient no-ops.
It has also been a rather enjoyable side-quest. I've learned some new things about Kubernetes and a ton about GitOps. As with everything in Dorsal, enjoying the process of this refactor was valuable in seeing it through to completion.
Road to Private Beta
In the past month, I also split out my accounts. A year ago, I signed up for Dorsal (as the first ever user, of course) with my personal Gmail account. In that single account, I used two projects to separate my concerns: "Dorsal" and "Personal."
As of this month, I now have two accounts. In one, I am an Administrator, and I manage all of my Dorsal resources within it. In the other, I'm a lowly Beta user, where I manage a more limited set of resources for my Personal account.
To my surprise, this has been tremendously helpful in understanding the user experience. I thought I already had that covered, as Dorsal runs on Dorsal, and I manage everything from within the Dorsal Console whenever possible, but having separate accounts has allowed me to properly separate the experiences of Admin and end user.
It has allowed me to clearly spot the missing functionality. Rather than just "quickly updating this" directly in the database, as I was before, everything that my personal account does must be supported directly in the Console. This has also revealed how many things I still need to do... But I guess those things were always missing, whether I recognized them or not.
With this big reimplementation of the state management largely complete, I can once again look ahead to completing necessary features that facilitate an early adopter's experience with the Dorsal Console.
I have a contact who is interested in testing Dorsal as a deployment platform. His feedback will, surely, be invaluable as I try to figure out exactly how Dorsal fits into the existing landscape of products and services.
Once the quality of his experience is acceptable, I'll be aiming to expand to other Alpha users and, possibly, Beta users. I'd love to launch the Compute service to a Public Preview in the first half of 2026...
Looking Ahead
As I write this, I'm on a flight to Mexico to spend time with my in-laws for the Christmas season, so I won't make any bold claims about timelines or upcoming features.
I will say, however, that I'm optimistic. I feel like I am discovering a bit of product-market fit through all of this dogfooding, and I'm realizing that there might be something here worth building after all.
Originally, I had planned to launch with a static website hosting platform. Upon building the proof of concept, I (as an end user, myself) felt that it wasn't a great experience. Obviously, I didn't have any secret sauce that magically deployed your website any better than some other service... and I wasn't going to try to recreate everything that Railway already does with build system detection.
So, now, I'm looking at a Compute service as my initial launch product. Sure, it'll sort of just be a Kubernetes wrapper... but I think that has value in and of itself. It's not the first of its kind, but I have some potentially unique ideas, and I think the concept itself has merit. It will also integrate seamlessly with the static website deployment service, as that continues to take shape.
Check back next month for another update - hopefully with a new feature or two!
Happy Holidays,
Nathan