OC Operator AI experiments log
Experiment #003

The Bot That Posted This: Building a WhatsApp-to-Blog Pipeline

Wanted to post blog entries by dictating on WhatsApp. Ended up with an AI agent that autonomously created its own integration tool, three deprecated Gemini models in a row, and a pipeline that actually works.

March 28, 2026 · whatsapp · base44 · gemini · github-api · automation

The Problem

Two experiments in, the workflow for writing these posts was: open the terminal, start a Claude Code session, dictate what happened, watch it write the MDX, push to GitHub, done. That's fine. But I wanted to remove even that step. The goal: send a WhatsApp message, and the post appears on the blog. No terminal, no laptop required.

Same idea for reels eventually — "generate a reel about X" → bot sends back the content for approval → approve → it posts to Instagram. All from WhatsApp.

This experiment covers the blog side.

The Plan

The original plan was Make.com. We already use it for Instagram posting, so the idea was:

WhatsApp message received → Make.com routes it → POST to a new API endpoint on ocoperator.com → Gemini writes the MDX → GitHub API commits the file → Vercel auto-deploys → Make.com sends back the URL.

The API endpoint (/api/whatsapp/blog) got built first: it reads the current experiment list from GitHub to determine the next experiment number, calls Gemini to generate the full post in MDX format, commits both the new file and the updated slug registry, and returns the live URL.
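The numbering step is the only pure logic in that endpoint, and it's worth sketching. This is a hypothetical reconstruction, not the actual route code — the function name and slug format are assumptions based on the `experiment-003` URLs used elsewhere in this post:

```typescript
// Hypothetical sketch: derive the next experiment slug from the list
// the endpoint reads out of GitHub. Names are illustrative.
function nextExperimentSlug(existingSlugs: string[]): string {
  // Slugs look like "experiment-003"; find the highest number so far.
  const max = existingSlugs.reduce((acc, slug) => {
    const m = slug.match(/^experiment-(\d+)$/);
    return m ? Math.max(acc, parseInt(m[1], 10)) : acc;
  }, 0);
  // Zero-pad to three digits to match the existing naming scheme.
  return `experiment-${String(max + 1).padStart(3, "0")}`;
}

// nextExperimentSlug(["experiment-001", "experiment-002"])
// → "experiment-003"
```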

Then the plan changed.

What Went Wrong (Human Side)

Make.com got replaced before it was even set up. While asking about WhatsApp Business configuration, the user shared a screenshot of Base44 — a no-code AI agent platform with a chat interface that already had native WhatsApp integration built in. Their agent ("Naru") was already running. The Make.com scenario was never built.

This wasn't a mistake exactly — it was the right call. But it means we built the API endpoint for Make.com and then immediately pointed something else at it. The API didn't care. It just took a POST.

The AI's Take

Naru built its own integration tool autonomously. This was the surprising part. When asked "can Base44 make outbound HTTP calls?", Naru replied that it could via a backend function — and then immediately created one, live, in the same message. The function (postToBlog) accepts any JSON body and forwards it as a POST to our API, returning the response. No instructions for how to build it. It just did it.

From a terminal: curl -X POST https://naru-9ff4a0b9.base44.app/functions/postToBlog -d '{"message": "test"}' — and it connected straight through.
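For the record, here's roughly what a forwarder like postToBlog has to do — accept any JSON body and relay it as a POST, handing back whatever the upstream returns. This is a sketch of the shape, not Naru's actual code; the injectable fetch parameter is added here for testability:

```typescript
// Rough sketch of a postToBlog-style forwarder: take any JSON body,
// POST it to the blog API, return the upstream status and payload.
const BLOG_API = "https://ocoperator.com/api/whatsapp/blog";

async function postToBlog(
  body: unknown,
  fetchImpl: typeof fetch = fetch, // injectable for testing
): Promise<{ status: number; data: unknown }> {
  const res = await fetchImpl(BLOG_API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return { status: res.status, data: await res.json() };
}
```

The value of a dumb pass-through like this is exactly what the curl test showed: anything that can reach the function can drive the whole pipeline.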

Three Gemini models were wrong in a row. The blog API was written with gemini-2.0-flash — deprecated. Switched to gemini-2.0-flash-lite — also deprecated. Switched to gemini-1.5-flash — wrong API version. Fourth attempt: gemini-3-flash-preview, which is what the reel app already uses and works fine. Should have checked the reel app's model name first. Each failure was a clean 404 with an exact error message. Tedious, not hard. Three deploys to fix a model name.
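The fix ultimately came down to one constant. A hedged sketch of the Gemini REST call, under the assumption the route hits the generateContent endpoint directly — the useful detail is surfacing the response body on failure, which is why each bad model name produced an exact 404 message rather than a mystery:

```typescript
// Sketch, not the actual route: call Gemini's generateContent REST endpoint
// and surface the error body when the model name is wrong.
const GEMINI_MODEL = "gemini-3-flash-preview"; // 2.0-flash, 2.0-flash-lite, 1.5-flash all 404'd

async function generateMdx(
  prompt: string,
  apiKey: string,
  fetchImpl: typeof fetch = fetch, // injectable for testing
): Promise<string> {
  const url =
    `https://generativelanguage.googleapis.com/v1beta/models/` +
    `${GEMINI_MODEL}:generateContent?key=${apiKey}`;
  const res = await fetchImpl(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  if (!res.ok) {
    // A deprecated model name fails here with a clean, exact 404 message.
    throw new Error(`Gemini ${res.status}: ${await res.text()}`);
  }
  const data = await res.json();
  return data.candidates[0].content.parts[0].text;
}
```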

The whole pipeline was tested from the terminal, not from WhatsApp. Rather than connecting WhatsApp first and hoping the whole chain worked, Naru's function got called directly via curl. Got {"ok": true, "url": "https://ocoperator.com/experiments/experiment-003"} back — which confirmed everything between Naru and GitHub was solid before adding WhatsApp into the mix. That test post became this post, after being overwritten with actual content.

How We Fixed It

  • Make.com → replaced with Base44 Naru (native WhatsApp, no scenario setup needed)
  • Webhook secret check → removed from the API (Naru handles auth at the WhatsApp layer)
  • gemini-2.0-flash → gemini-3-flash-preview (three wrong guesses, one right answer)
  • Test post with dummy content → overwritten with this

The Outcome

Send a WhatsApp message to Naru describing what happened. Naru calls postToBlog. The API calls Gemini, which writes a full structured experiment post. That gets committed to GitHub as an MDX file. Vercel rebuilds in under a minute. Naru replies with the live URL.

The whole chain takes about 30 seconds — mostly Gemini generation and Vercel build time.

What's worth noting: the "AI agent that built its own tool" part wasn't planned. Naru wasn't told how to create a backend function. It was asked if it could make HTTP calls, and it built the integration itself. The infrastructure wiring — the part that usually takes the most time — happened autonomously, in one message.

Next up: reel generation from WhatsApp. Same agent, different endpoint.

THE BILL — 60 mins
claude-sonnet-4-6: 45,000 in / 12,000 out — $0.31
57,000 total tokens · $0.31 total

Includes API route build, three model fix iterations, and end-to-end test. Does not include Naru's own compute on the Base44 side.