Loveable is powerful.
You describe an app.
It generates an app.
You deploy something working in minutes.
That phase is valuable.
But eventually, something shifts.
You’re no longer exploring an idea.
You’re building a system.
And systems need specification.
Loveable gives you output.
What it doesn’t give you is:
The app exists.
But the spec rarely does.
And without a spec, you don’t control evolution.
There’s no shortcut here.
If you want:
You must rebuild it.
Not blindly.
Not angrily.
Intentionally.
You extract the intent.
You formalize the spec.
Then you re-implement it on a real stack.
That’s the transition from phase one to phase two.
Before you touch Django, you write the spec.
Not a 100-page enterprise document.
A clear architectural contract.
Here’s what that includes:
If Loveable generated tables, you reverse-engineer them.
You decide what is accidental and what is essential.
If auth was abstracted away, now you define it.
Explicitly.
Generated platforms hide this.
Infrastructure ownership requires it.
If you can’t describe your deployment in text, you don’t own it.
Most developers try to:
Loveable → Django immediately.
That’s wrong.
The real path is:
Loveable → Spec → Django.
Because once you have a clean spec, the rebuild becomes surgical.
Now you open VS Code.
Now you’re in control.
You create:
You wire it to:
You deploy it on Opalstack.
Now your app runs persistently.
Not as a generated artifact.
As infrastructure.
Here’s where modern tooling changes the game.
Rebuilding doesn’t mean “manually type everything.”
It means:
Use AI as an assistant
inside a system you own.
With VS Code + GitHub Copilot (or your model of choice), you can:
The difference is:
You’re generating into a framework that has structure.
Django gives guardrails.
AI accelerates the writing.
That combination is powerful.
When you stay fully inside a prompt-platform:
When you rebuild intentionally:
You trade convenience for sovereignty.
At scale, sovereignty wins.
On Opalstack, your Django application:
No abstraction layer.
No black box.
Just infrastructure.
Loveable feels like:
“The app builds itself.”
Django on Opalstack feels like:
“I build the machine.”
With AI tooling, you don’t lose speed.
You gain clarity.
You move from:
Prompt-driven experimentation
to
Specification-driven engineering.
That’s the upgrade.
You’re ready to exit Loveable when:
You’re not ready if:
There’s no moral judgment.
Just lifecycle phases.
Start fast.
Then own your stack.
If phase one is speed,
phase two is structure.
And structure is what lasts.
— Opalstack
Firebase is great at one thing:
Getting you live fast.
But eventually, a lot of teams hit the same wall:
If you’ve built a real Django app — or you want one — it’s time to step off the platform treadmill.
Here’s how to exit Firebase cleanly and land on traditional hosting (like Opalstack) without chaos.
And then: how to vibe code the whole migration + ongoing dev workflow using Django on Opalstack with an MCP server + Vibe Shell tooling + VS Code and bring-your-own model (Copilot, local models, whatever).
Firebase is primarily:
It is not:
When your app grows up, you usually want:
That’s where traditional hosting comes in.
Here’s what you’re moving toward:
Traditional Stack (Opalstack-style):
No rewrites into Cloud Functions.
No per-request billing surprises.
No document-store contortions.
Just software running like software.
Everything else is straightforward.
These are the real friction points:
Let’s break them down.
This is where most teams panic.
Here’s the truth:
You can export users.
You cannot export passwords in plaintext.
Firebase stores passwords using a modified scrypt hash.
You get:
You have three options.
Import users into Django.
Set unusable passwords.
Require reset on first login.
This is operationally clean.
No crypto reimplementation.
No security guessing.
For most teams, this is the correct move.
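Option A can be sketched in a few lines. This is a hedged illustration, not the official migration path: the function name is ours, and it assumes the JSON shape produced by `firebase auth:export` (a top-level `users` array with `localId` and `email`). In real Django you would call `user.set_unusable_password()`, which stores a hash beginning with `!`; here we just mark the rows.

```python
import json

def firebase_users_to_django_rows(export_json: str):
    """Map a Firebase Auth user export to rows for a Django import.

    Passwords are deliberately dropped: each user gets an *unusable*
    password marker and must reset on first login. (In Django proper,
    set_unusable_password() stores a value starting with "!".)
    """
    users = json.loads(export_json)["users"]
    rows = []
    for u in users:
        rows.append({
            "username": u["localId"],   # Firebase UID as a stable key
            "email": u.get("email", ""),
            "password": "!",            # unusable-password marker
            "needs_reset": True,        # force reset on first login
        })
    return rows

export = '{"users": [{"localId": "abc123", "email": "a@example.com"}]}'
rows = firebase_users_to_django_rows(export)
```

From here, a Django data migration or management command loops over `rows` and creates users with `set_unusable_password()`.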
Django supports multiple hashers.
You can:
This preserves seamless login.
But:
You are now implementing security-critical code.
Do not wing this.
You can migrate to a neutral auth provider that supports Firebase imports.
That lets you:
This is useful if you don’t want to own auth long-term.
Firebase Auth ties:
… to its token ecosystem.
Once you move to Django sessions or your own JWT system, you decouple everything.
That’s freedom.
This is the structural shift.
Firestore is hierarchical:
users/{userId}/posts/{postId}
PostgreSQL is relational:
users
posts (foreign key user_id)
You cannot “export JSON and import into SQL.”
You must:
This is where your app becomes stable.
Document stores are easy early.
Relational models are powerful long-term.
If you built serious Django models anyway, this part is just normalization work.
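The normalization step looks roughly like this. A minimal sketch, assuming the nested `users/{userId}/posts/{postId}` shape above has been exported to a dict; the function name and field names are ours, chosen for illustration.

```python
def flatten_firestore(users_docs: dict):
    """Flatten users/{userId}/posts/{postId} into relational rows."""
    user_rows, post_rows = [], []
    for user_id, user in users_docs.items():
        user_rows.append({"id": user_id, "name": user.get("name")})
        for post_id, post in user.get("posts", {}).items():
            post_rows.append({
                "id": post_id,
                "user_id": user_id,   # becomes the foreign key
                "title": post.get("title"),
            })
    return user_rows, post_rows
```

Each row list then maps onto a Django model (`users` and `posts` with a `ForeignKey`), and the real work is deciding which document fields are essential.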
Firebase Storage is Google Cloud Storage under the hood.
Rules look like:
allow read: if request.auth.uid == userId
In Django, that becomes:
if request.user.id == object.owner_id:
allow()
Authorization moves into your application layer.
That’s not worse.
It’s clearer.
You now control permissions through Django’s auth and model relationships instead of policy DSLs.
Cloud Functions feel magical because they auto-scale.
But they’re just:
In Django:
You regain:
You lose:
In exchange, you gain:
Predictability.
On Opalstack, Django runs as a proper application:
Static files are served from /static.
Environment variables go into uwsgi.ini.
Database is provisioned from the dashboard.
You run:
python manage.py migrate
python manage.py collectstatic
Restart.
Done.
No deploy targets.
No hosting channels.
No CLI deploy tokens.
It’s software running on a server.
Firebase Hosting bills:
It scales elastically.
Which also means:
Costs scale elastically.
Traditional hosting flips the model:
Flat monthly pricing.
You manage load responsibly.
No per-request metering.
If your app is stable and predictable, that’s often a win.
| Phase | Time |
|---|---|
| Inventory + design | 1–3 days |
| Set up Django hosting | 1 day |
| Auth strategy implementation | 2–7 days |
| Data mapping + ETL | 2–10 days |
| Staging validation | 2–5 days |
| DNS cutover | Hours |
The only variable that explodes timelines:
Firestore data complexity.
Everything else is controlled.
You’re ready when:
You’re not ready if:
There’s no moral argument here.
It’s architecture maturity.
Firebase is optimized for:
Fast starts.
Django on traditional hosting is optimized for:
Long-term clarity.
If you’re building something real — SaaS, internal tools, AI-powered services, data-heavy apps — the second model wins over time.
You get:
That’s the trade.
Now for the part Firebase doesn’t give you:
A workflow where you can vibe code the app without living inside one vendor’s world.
Here’s the Opalstack angle:
We don’t sell “our LLM.”
We sell a stack that plays nice with whatever model you want.
Copilot.
Claude.
OpenAI.
Local models.
Anything that can drive your editor + tools.
And we make the hosting side MCP-native, so your “AI dev loop” can actually do work instead of hallucinating instructions.
MCP (Model Context Protocol) is basically:
a standard way to expose tools to your AI dev environment.
So instead of “copy/paste commands,” your assistant can:
…in a controlled way.
That’s the difference between:
“AI that talks”
and
“AI that ships.”
Think of MCP + Vibe Shell as the bridge between:
So the workflow becomes:
The selling point is simple:
Not a platform prison.
A real server stack, enhanced.
When you leave Firebase, you’re doing things Firebase “hid” from you:
That’s where AI tooling actually helps — if it can touch real systems.
So instead of “read docs for 4 hours,” you get:
This is vibe coding aimed at moving responsibility back to you — without turning that responsibility into pain.
A lot of platforms want to bundle you into:
That’s maximum coupling.
We’re pushing the opposite:
If Copilot is your move, cool.
If you want local models for cost control, cool.
If you want to mix models by task (cheap model for refactors, strong model for architecture), cool.
You’re not stuck.
Firebase is “fast to start.”
Opalstack-style Django hosting is “built to last.”
And with MCP + Vibe Shell + VS Code + BYO AI, you can keep the modern vibe-coding speed without living inside someone else’s walled garden.
That’s the play.
Same mission:
Move from platform dependency
to infrastructure ownership.
— Opalstack
Matrix is what you run when you want your own chat without giving a platform permanent custody of your community.
Synapse is the “reference” homeserver implementation — and it’s absolutely runnable on managed hosting as long as you treat it like an app (not a snowflake pet server).
On Opalstack, Synapse runs the way we think apps should run:
- Set server_name + public_baseurl once (then never change them).
- Serve .well-known so federation works cleanly behind port 443.

Most “how to host Matrix” guides assume one of these:
That’s not how Opalstack works.
Here, Synapse runs as a userspace service under your account — same model as every other Open App Stack.
A production-ish baseline that looks like this:
- start/stop scripts at ~/apps/<appname>/start and ~/apps/<appname>/stop

Your app folder looks like:

- ~/apps/<appname>/
- ~/apps/<appname>/matrix/
- ~/apps/<appname>/matrix/venv/
- ~/apps/<appname>/matrix/config/homeserver.yaml
- ~/apps/<appname>/matrix/data/
- ~/apps/<appname>/matrix/synapse.log

Open:
~/apps/<appname>/matrix/config/homeserver.yaml
Set these to your real domain:
server_name: "chat.example.com"
public_baseurl: "https://chat.example.com/"
Do not change server_name later unless you intend to wipe and rebuild. (Synapse treats it as identity, not cosmetics.)
~/apps/<appname>/start
tail -f ~/apps/<appname>/matrix/synapse.log
Quick local sanity check:
curl -sS http://127.0.0.1:<YOUR_APP_PORT>/_matrix/client/versions
Synapse ships a helper for this. The pattern looks like:
~/apps/<appname>/matrix/venv/bin/register_new_matrix_user \
-c ~/apps/<appname>/matrix/config/homeserver.yaml \
-u admin -p "use-a-real-password" -a \
http://127.0.0.1:<YOUR_APP_PORT>
That gets you an initial admin without opening public registration.
By default, other Matrix servers expect to reach yours at https://<server_name>:8448/.
On shared hosting, you generally don’t expose 8448 — you run behind standard HTTPS (443). So you must tell the federation network where to find you.
That’s what .well-known is for.
/.well-known/matrix/server

Serve this JSON from:
https://<server_name>/.well-known/matrix/server
{ "m.server": "chat.example.com:443" }
/.well-known/matrix/client

Serve this JSON from:
https://<server_name>/.well-known/matrix/client
{
"m.homeserver": { "base_url": "https://chat.example.com" }
}
CORS note: Clients expect CORS headers on the .well-known responses. If you’re serving these as static files, add Access-Control-Allow-Origin: *.
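If you generate these files from a script, the bodies are trivial to build. A small sketch with helper names of our own invention; it emits exactly the two JSON documents shown above (serve them with `Access-Control-Allow-Origin: *`).

```python
import json

def server_well_known(server_name: str, port: int = 443) -> str:
    """Body for https://<server_name>/.well-known/matrix/server."""
    return json.dumps({"m.server": f"{server_name}:{port}"})

def client_well_known(base_url: str) -> str:
    """Body for https://<server_name>/.well-known/matrix/client."""
    return json.dumps({"m.homeserver": {"base_url": base_url}})

# Write them as static files your web server can serve:
# open("server", "w").write(server_well_known("chat.example.com"))
# open("client", "w").write(client_well_known("https://chat.example.com"))
```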
If you want reliable voice/video calling, you’ll need a TURN server.
On shared hosting, opening the giant UDP port ranges TURN wants is typically a non-starter. If you need TURN, use a VPS (or use an external TURN provider).
If you want to go further after the base install:
That’s what “managed hosting” should feel like: boring, inspectable, fixable.
OpenClaw is an open-source “personal AI assistant” built around a Gateway. The Gateway is the whole point: it’s the always-on brain/entrypoint, and everything else is optional.
Most “run an AI assistant on a server” guides still assume one of these:
That’s not how Opalstack works.
On Opalstack, OpenClaw runs as a userspace app inside your account. No root. No systemd. No drama.
- started and stopped via ~/apps/<appname>/start
- state and secrets kept in the app folder (state/, .env)

You don’t want “a demo”. You want something you can actually run:
That’s exactly what an Open App Stack is: an installer you can read and modify.
When you install OpenClaw on Opalstack, you’re not just getting npm install -g openclaw.
You get a working, production-ish baseline that looks like this:
In other words: “run a gateway” instead of “become a part-time platform engineer.”
Your OpenClaw app folder will look like:
- ~/apps/<appname>/
- ~/apps/<appname>/oc/
- ~/apps/<appname>/oc/bin/openclaw
- ~/apps/<appname>/oc/openclaw.json
- ~/apps/<appname>/oc/state/
- ~/apps/<appname>/oc/state/.env
- ~/apps/<appname>/oc/logs/openclaw.console.log
- ~/apps/<appname>/oc/logs/openclaw.jsonl

From SSH:
# start the gateway
~/apps/<appname>/start
# stop it
~/apps/<appname>/stop
# status (+ a quick OpenClaw status check)
~/apps/<appname>/status
# upgrade OpenClaw in-place (keeps state/config)
~/apps/<appname>/upgrade
# tail the live console log
tail -f ~/apps/<appname>/oc/logs/openclaw.console.log
This gateway uses token auth. The token is stored here:
cat ~/apps/<appname>/oc/state/.env
# on your laptop
ssh -L 19001:127.0.0.1:19001 youruser@your-opalstack-server
Then open:
http://127.0.0.1:19001/?token=PASTE_TOKEN_HERE

If you map the gateway port to a site/domain, you’ll still need the token on first load:

https://yourdomain.tld/?token=PASTE_TOKEN_HERE

A couple implementation details worth knowing:

- The gateway listens on 127.0.0.1, with the web server as a proxy hop.
- Cron re-runs the start script every ~10 minutes, so if the gateway dies, it comes back.

That’s the “no systemctl” trick: it’s boring, and it works.
OpenClaw can run with no provider keys configured, but agents won’t answer until you set them up.
Once the gateway is running, you’ll typically do one of these:
# interactive onboarding
~/apps/<appname>/oc/bin/openclaw onboard
# or add an agent explicitly
~/apps/<appname>/oc/bin/openclaw agents add main
Here’s the exact Open App Stack installer we run on AlmaLinux 9:
https://raw.githubusercontent.com/opalstack/installers/refs/heads/master/el9/openclaw/install.py
Fork it. Tweak it. Ship your own flavor. That’s the point.
Treat state/.env like a secret. Don’t commit it. Don’t paste it into screenshots.

If you want to go further:
That’s what “managed hosting” should feel like.
]]>Wagtail is what you reach for when you want a real CMS without the “plugin casino” vibe. It’s Django-native, editor-friendly, and built to scale like an actual web app.
And now you can run it on Opalstack the way we think apps should run:
as an Open App Stack installer. Click. Deploy. Edit the stack script if you want. No mystery meat.
Most “Wagtail hosting guides” start with:
That’s fine… if your hobby is rebuilding the same deployment for the 40th time.
On Opalstack, you use the Open App Stack and get a working Wagtail app that’s already wired for production patterns.
When you install the Wagtail stack, you’re not just getting pip install wagtail.
You’re getting an opinionated, deployable baseline:
In other words: “ship a CMS” instead of “become a part-time SRE.”
You get a live app you can log into, configure, theme, and extend.
Here’s the whole point of our “Open App Stacks” direction:
The installer is a script. You can read it. You can fork it. You can run it yourself.
Not a black box. Not a proprietary wizard. Not “call support to change one setting.”
If you want to:
…you edit the stack like a normal person edits a script. That’s the deal.
You’ll have a management command available inside the app environment. Use it to create the first admin user and get into /admin/.
Wagtail is meant to be customized. Don’t treat it like WordPress. Treat it like Django: templates, settings, apps.
Wagtail makes sense if you want:
It’s especially good for:
We’re hosting people who want control without spending their life on ops.
So the goal is:
Wagtail fits that perfectly.
Most automation stacks fail for one simple reason:
You don’t have clean inputs.
You have “a thread I saw,” “a DM I meant to follow up on,” “a bug report buried in a forum,” “a spicy quote someone posted,” and zero pipeline to turn that into something usable.
n8nCapture fixes that by making your browser a data intake tool.
It’s a lightweight WebExtension that sends selected text + page metadata to an n8n webhook via HTTP POST.
Select text → right-click → choose a target → it POSTs JSON to your n8n workflow.
Key features from the repo:
This is exactly the kind of “boring glue” that makes automations real.
The extension ships with a clear payload model like:
{
"text": "This platform is great but I wish it had a simpler automation setup.",
"url": "https://example.com/some-thread",
"title": "Random discussion about tooling",
"domain": "example.com",
"target": "leads",
"createdAt": "2025-12-09T07:00:00.000Z",
"userAgent": "Mozilla/5.0 ...",
"extra": {}
}
And optionally a header like:
x-n8n-webclip-token: YOUR_SHARED_SECRET
Repo’s approach is straightforward:
- Chrome: chrome://extensions/ → Developer mode → Load unpacked
- Firefox: about:debugging → load as a temporary add-on

No build step. No bundler. Just files.
Targets are the real trick: each “target” is basically a named route into your automation brain.
A target typically has:
- a name (sent in the payload as target)

This lets you clip the same kind of content into different buckets:

- leads_reddit
- support_snippets
- competitor_notes
- feature_requests
- meme_fuel

The README’s recommended pattern is the correct one:
- route clips by target

Here’s a practical “don’t overthink it” flow that scales:
Node 1 — Webhook
Node 2 — Auth check (optional but recommended)
Node 3 — Normalize
- target and domain

Node 4 — Store raw
Node 5+ — Enrich / classify / act
This is how you build a dataset without pretending you’ll “remember it later.”
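The auth-check and normalize nodes above can be sketched in plain Python. This is our illustration, not part of the extension: `accept_clip` is a hypothetical helper, and in n8n this logic would live in an IF node plus a Function node. Field names follow the n8nCapture payload shown earlier.

```python
def accept_clip(payload: dict, headers: dict, shared_secret: str):
    """Mirror of Webhook → auth check → normalize.

    Returns a normalized record, or None if the token check fails.
    """
    # Node 2: reject clips without the shared-secret header
    if headers.get("x-n8n-webclip-token") != shared_secret:
        return None
    # Node 3: trim text, default the routing fields
    return {
        "text": payload.get("text", "").strip(),
        "target": payload.get("target", "inbox"),
        "domain": payload.get("domain", ""),
        "url": payload.get("url"),
        "captured_at": payload.get("createdAt"),
    }
```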
Repo guidance is sensible:
If you’re hosting n8n on Opalstack, you can keep it sane by putting it behind HTTPS and adding a simple token check.
n8n is incredible at actions, but it’s still fed by inputs.
n8nCapture turns your day-to-day browsing into structured, queryable, automatable data—with almost no friction.
Right-click. Clip. Automate.
If you’ve been watching the agentic tooling wave, you already know the annoying part: most “AI integrations” assume you’re running Docker, have root, or want to rebuild your stack around somebody else’s platform.
VibeShell goes the opposite direction.
It’s a single-file PHP MCP server you can drop onto basically any LAMP/LEMP host, then let an MCP-capable assistant securely read/write/list/search/tail files and logs—without adding dependencies.
VibeShell is a lightweight, PHP 7.4+ implementation of the Model Context Protocol (MCP) over JSON-RPC 2.0 via HTTP, designed to give an AI assistant a safe toolbox for file operations.
If you read our MCP write-up, you know our stance: you shouldn’t need containers to ship a small project, and you shouldn’t have to “trust magic” to automate your hosting.
VibeShell fits that philosophy: show me the wires.
VibeShell exposes a small set of practical tools—enough to be useful, not enough to be scary:
- fs_info (paths: home/base/apps/logs)
- fs_list (list dirs, optional recursion)
- fs_read (read file contents with offset/limit)
- fs_write (write/append + auto-mkdir)
- fs_tail (tail logs)
- fs_search (recursive grep)
- fs_move (move/rename)
- fs_delete (delete, optional recursive)

And it comes with guardrails: home-directory jail, realpath() resolution to prevent symlink escapes, traversal blocking, protected files, rate limiting, and bearer-token auth.
1) Upload the file
You literally upload index.php to a directory your web server can hit. The README shows the idea:
# Example: deploy to your app directory
scp index.php user@yourserver:~/apps/mcp/
2) Create the config
VibeShell reads a simple INI in your home directory:
cat > ~/.mcp_vibeshell.ini << 'EOF'
[vibeshell]
token = "your-secure-token-here"
base_dir = "~"
EOF
chmod 600 ~/.mcp_vibeshell.ini
Token + base_dir are the big knobs. If you want to be extra sane, set base_dir="~/apps" so the agent can’t wander.
Generate a token the boring way:
openssl rand -hex 20
3) Point your MCP client at it
Example MCP client config (VS Code / agent / whatever you’re using):
{
"mcpServers": {
"vibeshell": {
"type": "http",
"url": "https://your-domain.com/mcp/",
"headers": {
"Authorization": "Bearer your-secure-token-here"
}
}
}
}
That matches the repo’s recommended setup.
Initialize:
curl -X POST https://your-domain.com/mcp/ \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05"}}'
List tools:
curl -X POST https://your-domain.com/mcp/ \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'
Call a tool (fs_list):
curl -X POST https://your-domain.com/mcp/ \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"fs_list","arguments":{"path":"~"}}}'
All straight from the README.
Agentic tooling is powerful because it can do real work. That’s also the risk.
Scope base_dir if you can.

If you’ve used our Opalstack MCP endpoint, the mental model is the same: tokens are granular, and an agent can absolutely do destructive things if you let it.
VibeShell is the kind of integration we’re into:
If you want “AI that can actually help maintain real servers,” start here.
This post mirrors the steps from the video—use it as your quick checklist.
Yes, keys are best practice. For quick “vibe” sessions, username/password also works. Use what fits your moment; switch to keys when you settle in.
- a file to edit (e.g., apps/yourapp/index.html)

If you’re on macOS and want a full OS mount, install macFUSE—but you don’t need it for the VS Code SSH FS plugin. The plugin mounts inside VS Code’s explorer.
Search the Extensions panel for SSH FS → Install.

Open Command Palette → “SSH FS: Create a new configuration”.
Or drop this into Settings (JSON). Replace with your details:
// settings.json (User or Workspace)
"sshfs.configs": [
{
"name": "opal1", // shows in the SSH FS panel
"host": "opal1.opalstack.com", // or just "opal1" if you mapped it
"port": 22,
"username": "your_opal_username",
"password": "your_password", // for quick start; prefer keys later
"root": "/home/your_opal_username/", // your home dir
"privateKeyPath": "", // leave empty when using password
"interactiveAuth": false,
"agent": false,
"followSymlinks": true,
"uploadOnSave": false, // not needed—edits are live
"readOnly": false,
"reconnect": true,
"debug": false
}
]
Prefer keys? Set "privateKeyPath": "~/.ssh/id_ed25519" and remove "password".
Connect: Open the SSH FS side panel → click opal1 → Mount.
You’ll see a new tree in the Explorer with your live server files.
Common locations on Opalstack:
- apps/<appname>/
- apps/<wpapp>/
- apps/<appname>/public/ or static/
- ~/apps/<appname>/ run scripts or your start/finish_install patterns

Open your page entry point. For the demo: /home/your_user/apps/index.html
Open index.html (or a template). Ask Copilot for “a <script> that toggles dark mode and remembers user choice in localStorage.” Paste, then refine with Copilot:
<section class="min-h-[70vh] grid place-items-center bg-slate-950 text-white px-6">
<div class="max-w-3xl text-center space-y-5">
<h1 class="text-4xl sm:text-6xl font-extrabold tracking-tight">
Build faster on Opalstack
</h1>
<p class="text-slate-300 text-lg sm:text-xl">
Real Linux hosting, flat pricing, AI-assisted workflows. No surprise bills.
</p>
<div class="flex flex-col sm:flex-row gap-3 justify-center">
<a class="btn btn-primary" href="/signup/">Start Free</a>
<a class="btn btn-outline" href="/docs/">Read Docs</a>
</div>
</div>
</section>
Ask Copilot to “animate the buttons on hover with subtle scale and glow” or “add an ambient radial gradient glow behind the hero”.
If your app is not plain static HTML, you may need a restart or build step:
- Restart your systemd --user service.
- Or touch tmp/restart.txt.

Tip: wrap restarts in a tiny script so you can call it from the VS Code terminal:
# ~/bin/restart_my_app.sh
#!/usr/bin/env bash
set -euo pipefail
systemctl --user restart myapp.service
echo "Restarted myapp.service ✅"
Make executable: chmod +x ~/bin/restart_my_app.sh
- Keep your edits inside apps.
- Avoid chmod 777 like the plague.

Even if you vibe directly on the server, commit early and often. Use Gitea to set up a Git repo on Opalstack.
I can’t save files / read-only?
Check your SSH FS config "readOnly": false. Ensure file ownership: ls -l and chown -R your_user:your_user.
Latency / slow saves?
Stick to editing within your app’s folder, not your entire home. Disable heavy VS Code extensions on remote mounts.
Permissions weird after deploy script?
Reset ownership: chown -R your_user:your_user ~/apps/yourapp. Avoid sudo in web paths.
Edits not showing in browser?
Copilot keeps hallucinating paths
Point it: “Use the existing /apps/styles.css path.” Then tab-complete the relative import.
If you’ve wired our MCP endpoint into your AI toolchain, you can:
That pairs well with SSH FS: AI handles ops, you live-edit the code.
public_html/index.html.
Done.
In those cases, SSH FS still works—just point it at staging.
That’s the whole loop: mount → open → prompt → save → refresh.
With Opalstack + SSH FS + Copilot, you get instant feedback in the real environment, minus the yak-shaving. Keep it tight, keep it versioned, and ship.

When you migrate from Typepad to Opalstack, you get:
Complete ownership of your content — Your blog posts, comments, and media files will be preserved on your own hosting account, giving you full control of your content.
Modern hosting infrastructure — Built on the latest technologies with regular server updates, ensuring your server stays secure.
Flexible platform options — Whether you want to stick with a familiar blogging experience or explore more powerful content management systems, we've got you covered.
Developer-friendly environment — SSH access, Git integration, custom applications, and all the tools you need to customize your site exactly how you want it.
Better performance — With servers located globally and optimized for speed, your readers will notice the improvement.
With Typepad's announced shutdown on September 30th, 2025, you'll need a new home for your content. Rather than scrambling at the last minute, now's a great time to move to a platform that gives you more control and better performance.
We've made it straightforward to migrate from Typepad with step-by-step guides for the two most popular destinations:
If you love the familiar blogging interface and want minimal disruption to your workflow, Movable Type is the natural choice. Since Typepad is built on Movable Type technology, your content will feel right at home.
Benefits:
Read our complete guide: Migrating from Typepad to Movable Type on Opalstack →
Want to join the world's most popular content management system? WordPress powers over 40% of all websites and offers an enormous ecosystem of themes, plugins, and customization options.
Benefits:
Read our complete guide: Migrating from Typepad to WordPress on Opalstack →
Both migration paths preserve your valuable content:
Typepad's export tools work seamlessly with both Movable Type and WordPress import processes, so you won't lose anything in the transition.
Ready to make the move? Here's what you'll need:
Both migration processes use our one-click installers, so you can focus on your content instead of server configuration. Since Typepad shuts down on September 30th, it's worth getting your migration done with plenty of time to spare.
We built Opalstack specifically for developers, entrepreneurs, and businesses who want reliable hosting without the corporate overhead. When you migrate from Typepad to Opalstack, you get:
Ready to get started? Don't let your years of content disappear on September 30th. Choose your migration path and let's move your content to its safe new home on Opalstack today!
Did you know you can route your browser traffic with a secure tunnel to any web hosting company which supports SOCKS over SSH? That might be useful! Want per-app, encrypted routing through your Opalstack server without installing a VPN? Do this:
Shell users are used for SSH/SFTP access and to run your apps. (docs.opalstack.com)
- macOS/Linux (with ssh-copy-id):
  ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
  ssh-copy-id [email protected]
  Now you can log in without a password.
- Windows (PuTTY): add your public key to ~/.ssh/authorized_keys, and configure PuTTY to use the private key for that host.

Start the tunnel:

ssh -N -D 127.0.0.1:1080 [email protected]

- -D 127.0.0.1:1080 creates a local SOCKS5 proxy on port 1080.
- -N tells SSH not to run a remote command—just forward.

macOS and Linux include the ssh client out of the box; Opalstack shows the login pattern ssh [email protected].

ssh -N -D 127.0.0.1:1080 [email protected]
Microsoft Windows 10/11 include (or can add) the OpenSSH Client: Settings → Apps → Optional Features → OpenSSH Client.
- PuTTY: set the Host Name to opal1.opalstack.com, then under Connection → SSH → Tunnels enter Source port 1080 → choose Dynamic → Add.
- Firefox: Settings → Network Settings → Manual proxy, SOCKS Host: 127.0.0.1 Port: 1080 Version: SOCKS v5, and enable “Proxy DNS when using SOCKS v5” (or set about:config → network.proxy.socks_remote_dns=true).
- Chrome/Edge: Launch with flags that force proxy use and remote DNS:
# macOS example
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
--proxy-server="socks5://127.0.0.1:1080" \
--host-resolver-rules="MAP * ~NOTFOUND , EXCLUDE localhost"
On Windows, edit the shortcut and append the same flags to the Target. These flags stop Chrome from doing local DNS lookups while using a SOCKS proxy.
curl --socks5-hostname 127.0.0.1:1080 https://ifconfig.me
You should see your Opalstack server’s IP, proving traffic is routing through the tunnel (the --socks5-hostname bit forces remote DNS).
- macOS: brew install autossh, then autossh -M 0 -N -D 127.0.0.1:1080 [email protected] (If you want login-start, we can drop a small launchd plist.)
- Windows: run the ssh -N -D ... command at login, or use Task Scheduler to start it on sign-in.

# In Opalstack dashboard:
# Applications → Create Shell User → pick server & username
# Check Notice Log for the initial password
# On your computer (macOS/Windows with OpenSSH):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa # if needed
ssh-copy-id [email protected] # macOS/Linux (or add key via PuTTY/Server Access docs)
# Start the SOCKS proxy:
ssh -N -D 127.0.0.1:1080 [email protected]
# Browser:
# Firefox: set SOCKS5 127.0.0.1:1080 + "Proxy DNS when using SOCKS v5"
# Chrome/Edge: launch with --proxy-server + --host-resolver-rules flags
# Verify:
curl --socks5-hostname 127.0.0.1:1080 https://ifconfig.me
Let's cut right to it: Shared servers running AlmaLinux 9 are now available in Frankfurt, Washington DC, Dallas, San Francisco, and Singapore. VPS running the new operating system are available everywhere we support VPS.
Here’s a list of some of the software versions that are available on the new system, with the current versions included for comparison:
The new operating system will only be available on newly-provisioned hardware; our existing servers will not be upgraded. If you’re an existing customer and want to take advantage of the new OS then you must either migrate your account to one of the new servers or purchase an additional hosting plan located on one of the new servers.
Note that the following software is not supported on the new operating system:
If you have applications using these software versions then you will need to upgrade them to a supported version before you can migrate them to the new operating system. You can find everything else you need to know in our migration docs.
We're looking forward to the new installers and software you'll be able to run on the new operating system. Watch for them through the coming months!
Continuing our series on MCP: in this video John explains what MCP is and gives a bird's-eye view of how to deploy an MCP server with django-mcp-server
…or why your LLM needs a wrapper if you want it to ship real code, not just cute prose
Context: The video at the top of this post is John’s lightning tour of the Model Context Protocol (MCP) and the quickest path to spinning up an MCP-enabled Django project with django-mcp-server.
Below is the long-form write-up—perfect for pausing, skimming, and copy-pasting at your own pace.
LLMs are brilliant storytellers but amnesiac engineers.
Ask ChatGPT to “create a user” and then immediately “make a DB for that user.” It forgets the ID you just gave it—because every turn starts from zero. Agents need an external memory + tooling system. MCP provides both.
“128 k tokens” sounds huge—until you remember that’s ~300 KB of text. Your smallest SaaS codebase is 10–100× bigger. The model will never “know” your whole system; it only needs to know how to call it. That’s the job of tool manifests.
Free-hand JSON is the Wild West: fields drift, types change, hallucinations happen. An MCP manifest is a strict contract—names, params, enums, return types. Agents stop guessing and start doing.
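To make the contrast concrete, here is a toy sketch (the manifest shape and names are ours, not from django-mcp-server) of what a strict contract buys you: the server rejects any call whose action or parameters drift from the declared schema instead of letting the model guess:

```python
# A toy manifest: one tool, a fixed set of actions, required params per action.
MANIFEST = {
    "tool": "osuser",
    "actions": {
        "list":   [],        # no params
        "read":   ["id"],    # requires an id
        "create": ["name"],  # requires a name
    },
}

def validate_call(action: str, params: dict) -> None:
    """Reject calls that drift from the manifest instead of guessing."""
    if action not in MANIFEST["actions"]:
        raise ValueError(f"unknown action: {action!r}")
    missing = [p for p in MANIFEST["actions"][action] if p not in params]
    if missing:
        raise ValueError(f"{action}: missing params {missing}")

validate_call("read", {"id": 42})    # ok
# validate_call("reed", {"id": 42})  # raises ValueError: unknown action
```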
Giving an untrusted LLM direct API keys is like handing the intern root. MCP splits the two roles: the agent sees only the tool manifests, while the MCP server holds the credentials and executes the calls.
That clean split lets you pass a SOC 2 audit without losing sleep.
Without a standard you get M × N wrappers (every model × every API).
With MCP you write one server; every agent that speaks MCP plugs in. Think HTTP for agents.
Every tool call is a structured log entry—great for debugging and governance: replay, rollbacks, blame, glory.
Here’s an abridged version of the OSUser tool demoed in the video. Key idea: zero business logic in the tool—just forward to the classic API.
# osuser_example.py
from typing import Literal, Any, Dict

from modules.mcp.utils import MCPTransport
from mcp_server import MCPToolset

class OSUserAPI:
    def __init__(self, token: str) -> None:
        self.http = MCPTransport(bearer_token=token)

    def list(self) -> list[dict]:
        # This is where you would place logging for auditing,
        # on each CRUD call or whatever tooling you decide.
        # The toolset has full access to "self.request", the same
        # object as Django's "request".
        return self.http.get("/osuser/list/")

    def read(self, data: Dict[str, Any]) -> dict:
        return self.http.get(f"/osuser/read/{data['id']}")

    def create(self, data: Dict[str, Any]) -> dict:
        return self.http.post("/osuser/create/", [data])

    def update(self, data: Dict[str, Any]) -> dict:
        return self.http.post("/osuser/update/", [data])

    def delete(self, data: Dict[str, Any]) -> str:
        return self.http.post("/osuser/delete/", [data])

class OSUserTools(MCPToolset):
    def osuser(self,
               action: Literal["list", "read", "create", "update", "delete"],
               payload: Any | None = None):
        """In django-mcp-server this docstring is where the manifest lives:
        it is where you describe to the LLM how to use your API."""
        return {
            "list": lambda _: self._api().list(),
            "read": self._api().read,
            "create": self._api().create,
            "update": self._api().update,
            "delete": self._api().delete,
        }[action](payload or {})

    def _api(self):
        return OSUserAPI(token=self.request.auth)
Flow in one breath: Copilot prompt → OSUserTools.osuser() → Django MCP server → classic REST → Celery worker/Ansible/whatever.
When a user types:
chat: vibe deploy wordpress on opalstack
the agent:
The model never “learned DevOps.” It just chained typed tools together.
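As a sketch of what that chaining looks like (the tool names are real, but the dispatcher and payload fields here are illustrative, not the actual wire format), the agent just feeds each result into the next typed call:

```python
def deploy_wordpress(call):
    """Sketch of a vibe-deploy chain: each step is a typed MCP tool call,
    and IDs from one result feed the next. `call(tool, payload)` stands in
    for the agent's MCP client; payload fields here are illustrative."""
    user = call("mcp_opalstack_osuser", {"action": "create", "name": "wp"})
    db = call("mcp_opalstack_mariadb", {"action": "create", "osuser": user["id"]})
    app = call("mcp_opalstack_application",
               {"action": "create", "osuser": user["id"], "type": "wordpress"})
    return {"user": user, "db": db, "app": app}
```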
| Link | Pip |
|---|---|
| django-mcp-server on pypi | pip install django-mcp-server |
Questions? Send us an email at [email protected]. Ready to ship? Grab a PAT, push to Git, and let MCP handle the midnight shift.
Opalstack’s core platform is built from the ground up, not an off‑the‑shelf cPanel clone. We’re wholly owned and operated by experienced software developers, many of whom have worked together for almost two decades. We built the platform to solve our own problems: rapid deployment, easy management, and rock‑solid support.
From day one the goal was a modern interface—so we shipped a fully REST‑compliant JSON API. Authentication uses a bearer token in the Authorization header and every endpoint comes with copy‑paste curl examples for GET and POST. Power users can streamline workflows with embedded objects—e.g. /osuser/list/?embed=server returns both the user and its server in a single call—keeping automation lean and self‑contained.
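As a sketch of what a call looks like from Python (stdlib only; the base URL is an assumption here, so check the API docs for the exact prefix), bearer-token auth plus the `embed` parameter keeps automation to a single round trip:

```python
import json
import urllib.request

API_BASE = "https://my.opalstack.com/api/v1"  # assumed base URL; verify in the API docs

def api_get(path: str, token: str):
    """GET an Opalstack API endpoint, authenticating with a bearer token
    in the Authorization header, and decode the JSON response."""
    req = urllib.request.Request(
        API_BASE + path,
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# One round trip returns each shell user with its server embedded:
#   api_get("/osuser/list/?embed=server", token="ABC123")
```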
Our API now exposes a /mcp endpoint. At first glance these paths look unremarkable – they accept GET, POST and DELETE – but they mark the beginning of something extraordinary.
The Model Context Protocol (MCP) is an emerging open standard that lets AI models call external tools in a safe, structured way. According to the official specification, MCP consists of several layers: a base protocol built on JSON‑RPC message types, lifecycle management for connection setup, an authorization framework for HTTP transports, and optional server and client features. Servers expose resources, prompts, and tools, while clients can provide sampling and directory listings. All messages follow the JSON‑RPC 2.0 specification, ensuring that requests, responses, and notifications are predictable. The protocol’s modular design allows implementations to support exactly the features they need.
This makes MCP a perfect match for a hosting platform. It provides a secure, stateless channel where an AI agent can ask a server to perform an action (like creating a database or deploying an app) and receive structured results. Because MCP is transport‑agnostic it can work over HTTP, WebSockets, or even standard I/O. And because it is built on JSON‑RPC it integrates easily with our existing API.
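Because the messages are plain JSON-RPC 2.0, a tool invocation is easy to picture. A hand-rolled sketch of the envelope (the `tools/call` method and params shape follow our reading of the MCP spec; real clients handle this for you):

```python
import json

def tool_call(call_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tool invocation as a JSON-RPC 2.0 request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```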
We have always loved new tech and we’ve been following large‑language‑model research in our spare time. When the Model Context Protocol was released earlier this year we immediately recognized its potential. Within weeks we exposed an /mcp endpoint on the Opalstack API. This endpoint serves as a wrapper around our existing API operations but it also publishes a catalogue of tools defined by manifests to any MCP‑compatible AI agent. You can now vibe deploy your next app using natural language.
Here’s why this matters for you:
Developers come to Opalstack because they want to ship quickly. We provide one‑click installers for WordPress, Django, Ruby on Rails, and more, but we also let you build and deploy custom apps without hiding the underlying system. Our philosophy is that you shouldn’t need Docker or Kubernetes to run a small project. You get full SSH and SFTP access in a managed OS environment. There’s no root access required, so you can focus on your code while we handle the hosting.
That philosophy extends to our support: our staff are developers who debug server configuration and application code because they enjoy it. We’re a close‑knit team bound by a shared passion for open‑source software. When you’re experimenting with a new framework or pushing an LLM‑powered agent to production, we’re right there with you.
We’ve been on this journey since the early days of Python web frameworks. Along the way we’ve seen hosts come and go. Opalstack isn’t a venture‑funded experiment; it’s a company owned and operated by developers. We keep our team small and cross‑trained so that we can respond quickly and stay accountable. We have no hidden fees – email, SSL and DNS are included in every plan. When customers asked for dark mode or for the ability to route traffic to a single domain we added those features. When the hosting world began experimenting with MCP we were ready.
Our commitment to innovation doesn’t mean we throw caution to the wind. We roll out new OS builds slowly and deliberately because real‑world applications are more complex than any lab test. We listen to feedback and prioritise stability over hype. As we integrate MCP into more of our tooling we’ll continue to refine the manifests and prompts so that your LLM agents behave predictably and securely.
We invite you to explore the Opalstack MCP endpoint and start building your own agent‑driven workflows. As always, our support team is eager to help. Whether you’re spinning up a WordPress blog, orchestrating a Django microservice, or experimenting with AI agents Opalstack is here to make your vision sparkle.
Many MCP clients are now available; the one we have done the most testing with is VS Code and Copilot. Below is the configuration needed to connect to the MCP server, where ABC123 is the bearer token issued within the dashboard.

Install GitHub Copilot via the Extensions tab.
Run >MCP: Open User Configuration in the VS Code command palette to open the config file:
{
"servers": {
"opalstack": {
"url": "https://my.opalstack.com/mcp",
"type": "https",
"headers": {
"Authorization": "Bearer ABC123"
}
}
}
}
Tokens are issued at https://my.opalstack.com/tokens/
Make sure you change the mode from 'Ask' to 'Agent'. This is known as 'agentic AI': the ability to perform long chains of tasks. Once this is done you can run the /list command and it will return the mcp_opalstack_* toolkit as well as the other tools your IDE has available.
Behind the scenes, every chat‑command you fire at Opalstack is translated into one of 21 purpose‑built MCP tools—each a wrapper around our JSON REST API (full schema lives at /api/v1/doc/). Think of them as Lego bricks your agent stacks together to get real work done:
mcp_opalstack_account # create/read your account profile
mcp_opalstack_address # forward‑only or full mail addresses
mcp_opalstack_application # deploy Django, Laravel, Node, static, you name it
mcp_opalstack_cert # issue/renew Let’s Encrypt certs
mcp_opalstack_dns # manage records without leaving your editor
mcp_opalstack_domain # add or park domains in one shot
mcp_opalstack_installer_urls # fetch our 1‑click installer library
mcp_opalstack_ip # list dedicated or shared IPs
mcp_opalstack_mailuser # mailbox CRUD (quota, passwords, etc)
mcp_opalstack_mariadb # spin up MariaDB databases
mcp_opalstack_mariauser # grant DB creds
mcp_opalstack_notice # surface panel notifications to your bot
mcp_opalstack_osuser # sandboxed Linux users for each app
mcp_opalstack_psqldb # PostgreSQL 17 in two keystrokes
mcp_opalstack_psqluser # role‑based PSQL access
mcp_opalstack_server # get server health & resource data
mcp_opalstack_site # map domains → apps → SSL in one call
mcp_opalstack_token # issue or revoke API tokens
mcp_opalstack_tokenacl # fine‑grain ACLs for shared accounts

Bottom line: Opalstack MCP turns our rock‑solid API into an instant‑action command palette for both humans and AI. Less YAML, more “go live” button‑mashing. Go vibe‑code and vibe deploy something wild and let us know what you ship!
After months of work our new operating system is almost here, and over the next few weeks we will begin opening up VPS and shared servers running on AlmaLinux 9 instead of CentOS 7. Since your applications are far more complicated than our tests could reasonably be, we will be slowly rolling out upgraded servers as we get more confident. All of our internal tests have passed, but we’ve done this long enough to know that only the real world is a sufficient test plan.
We evaluated a few different Linux distributions including Rocky Linux, AlmaLinux, and Oracle Enterprise Linux. We chose AlmaLinux because it has gained widespread adoption across the hosting industry making migrations to Opalstack faster and easier.
New shared servers running AlmaLinux 9 should be available in all of our shared service regions before the end of the year. VPS running AlmaLinux 9 will be available slightly earlier than the shared servers and will be available in all of the locations we support VPS now.
Here’s a list of some of the software versions that are available on the new system, with the current versions included for comparison:
(an earlier version of this post had MySQL 8.0 and PostgreSQL 16 as the new database versions, but since then MySQL 8.4 had a point release and PostgreSQL 17 was released)
The new operating system will only be available on newly-provisioned hardware; our existing servers will not be upgraded. If you’re an existing customer and want to take advantage of the new operating system then you must either migrate your account to one of the new servers or purchase an additional hosting plan located on one of the new servers.
Note that the following software is not supported on the new operating system:
If you have applications using these software versions then you will need to upgrade them to a supported version before you can migrate them to the new operating system.
We’ll send out an update when the new servers are ready for you to migrate!
This is just the first step toward the next generation of the Opalstack platform. Just as we’ve grown and evolved the platform over the last five and a half years, we’ll continue to do so by introducing new features made possible by the new operating system.
Effective December 1st 2023 we’ll be increasing the price for all Opalstack hosting plans. The increase will be 25% or less depending on the plan, with most plans having a 20% increase.
Price increases are never anyone’s favorite discussion topic. It's not ours either, but our own costs have increased steadily since we launched Opalstack in 2019, and dramatically so over the past two years, leading to an increase in operational costs of over 25% since launch.
We've done our best to avoid a price increase for our customers for as long as possible. We've debated moving our infrastructure to cheaper providers to avoid the higher costs, but we are committed to providing the best hosting experience possible and do not believe that we could do so by relying on cheap hardware and services. We're also a small and fiercely independent company and do not have venture capital or investor fundraising to fall back on. Faced with these facts, our only option is to increase our prices for the first time in the 4+ years that we've been operating.
We at Opalstack appreciate your business and continued support. If you have any questions or concerns about the upcoming price increase then please email our support team and someone from our team will get back to you.
Thank you for your time and, as always, thank you for using Opalstack.
Sincerely,
The Opalstack Team