
What Starfish's AI Stack Actually Looks Like at the Start of 2026

This is not a vendor recommendation list. It is what we actually run, how we think about connecting it, and where we got it wrong.

Quick answer: We publish our stack because opacity does not help anyone. The tools are not the interesting part. How we decided what to connect, and what we learned from the things that failed, is.

The Frame

We publish this once a year because opacity in this space is bad for everyone. There is enough “AI strategy” content that tells you what categories of tools to consider without ever showing you a real implementation. I would rather show you what we run and let you decide what applies.

Two ground rules for reading this. First, the tools are not the lesson. How we decided what to connect and in what order is the lesson. Second, this is a snapshot from early 2026. The stack changes. Some of what we are running now will be gone by Q3 if it does not hold up. I will update this post or write a follow-up when that happens.

What We Run

We organize the stack by workflow category, not by tool name. The category is the permanent layer. The specific tool inside it is replaceable.

Content production. We use AI for first drafts and research aggregation, not for publishing without review. Every piece of client-facing content has a human review step before it goes out. The tool that produces the draft is less important than the prompt library and the review standard we built around it.

Client communication. Intake is partially automated. Confirmation messages, onboarding sequences, and status update triggers all run without manual initiation. High-stakes communication, anything that touches a difficult conversation or a contract decision, stays human. We have not found a way to automate judgment and we are not trying to.

Internal reporting. We have one connected dashboard that pulls from ad platforms, analytics, and our project management system. The dashboard exists to reduce the time spent assembling data. The interpretation still happens in a live conversation with the account lead. Data assembly is a machine job. Reading what the data means for a specific client is not.
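
To make the assembly point concrete, here is a minimal sketch of what that pull looks like, assuming three sources and one row per client. The fetch functions stand in for real API calls, and the client names and fields are invented; this is the shape of the job, not our implementation.

    # Minimal sketch of the data-assembly job: pull numbers from several
    # sources and merge them into one row per client. The fetch functions
    # stand in for real API calls; client names and fields are invented.
    def fetch_ad_spend():
        return {"acme": 4200.0, "globex": 1850.0}

    def fetch_sessions():
        return {"acme": 13200, "globex": 5400}

    def fetch_open_tasks():
        return {"acme": 7, "globex": 12}

    def assemble_report() -> list[dict]:
        spend, sessions, tasks = fetch_ad_spend(), fetch_sessions(), fetch_open_tasks()
        clients = sorted(set(spend) | set(sessions) | set(tasks))
        # One row per client. What the numbers mean for that client is
        # still a conversation with the account lead, not a formula.
        return [
            {
                "client": c,
                "ad_spend": spend.get(c, 0.0),
                "sessions": sessions.get(c, 0),
                "open_tasks": tasks.get(c, 0),
            }
            for c in clients
        ]

    for row in assemble_report():
        print(row)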

Lead and pipeline management. Lead intake triggers a sequence. The sequence is rule-based through the first three touches. After that, a person takes over. We defined that handoff point carefully and we revisit it quarterly.
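
The handoff rule itself is small enough to write down. A minimal sketch: the three-touch threshold is the real number, and everything else, the function name and the shape of the lead record, is invented for illustration.

    # The first three touches run automatically; after that a person owns
    # the conversation. The limit of three is the real number; the rest
    # of the structure is illustrative.
    AUTOMATED_TOUCH_LIMIT = 3

    def next_touch_owner(touches_sent: int) -> str:
        """Decide who handles the next touch for a lead."""
        if touches_sent < AUTOMATED_TOUCH_LIMIT:
            return "automated"   # confirmations, nurture, status triggers
        return "human"           # account lead takes over

    assert next_touch_owner(0) == "automated"
    assert next_touch_owner(2) == "automated"
    assert next_touch_owner(3) == "human"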

Operations and task management. Everything is documented in our project management system, with AI-assisted templating for recurring project types. New project setups that used to take 30 to 40 minutes now take under ten. The templates are owned by one person who updates them when the workflow changes. If nobody owns the template, it drifts and creates more problems than it solves.
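
To show what an owned template looks like in practice, here is a hedged sketch. The project type, tasks, and owner are placeholders; the point is that the template is explicit data with one name attached to it, so drift has somewhere to be noticed.

    from datetime import date, timedelta

    # Sketch of an owned template: one person's name is on it, and new
    # projects are stamped out from it instead of rebuilt by hand.
    # The project type, tasks, and owner are placeholders.
    TEMPLATE = {
        "type": "recurring_campaign",
        "owner": "template.owner@example.com",
        "tasks": [                      # (task name, days after project start)
            ("Kickoff call scheduled", 1),
            ("Creative brief drafted", 3),
            ("Tracking verified", 5),
            ("First report delivered", 14),
        ],
    }

    def new_project(client: str, start: date) -> list[dict]:
        return [
            {"client": client, "task": name, "due": start + timedelta(days=offset)}
            for name, offset in TEMPLATE["tasks"]
        ]

    for task in new_project("acme", date(2026, 1, 12)):
        print(task)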

How We Think About Integrations

We have two rules we apply before connecting anything to anything.

Rule one: every connection has a named owner. Not a team. A person. If something breaks or starts firing incorrectly, one person is responsible for noticing and fixing it. Connections without owners go dark and stay dark until they create a visible problem. By then the damage is done.
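
One way to make rule one enforceable instead of aspirational is to keep the owner next to the connection definition. A minimal sketch, with invented connection names and owners; run as-is it raises on the unowned connection, which is exactly the loud failure we want before anything ships.

    # Sketch of rule one: every connection has a named owner, and a
    # connection without one fails loudly before it goes live.
    # Connection names and owners are invented for illustration.
    CONNECTIONS = {
        "crm_to_email":     {"owner": "jordan@example.com"},
        "forms_to_crm":     {"owner": "sam@example.com"},
        "ads_to_dashboard": {"owner": None},   # unowned: this should never ship
    }

    def unowned(connections: dict) -> list[str]:
        return [name for name, cfg in connections.items() if not cfg.get("owner")]

    missing = unowned(CONNECTIONS)
    if missing:
        raise RuntimeError(f"Connections without a named owner: {missing}")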

Rule two: if a workflow depends on more than three tools in sequence, it needs a checkpoint where a human verifies the output before the chain continues. This is not about distrust of the tools. It is about catching the failure modes that do not announce themselves. A field label change in one tool breaks a downstream step silently. The human checkpoint catches that before it runs for two weeks unnoticed.
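
Rule two can live in the workflow definition rather than in a process document. A sketch, assuming each workflow is just a named list of tools; the workflow and tool names are placeholders.

    # Sketch of rule two: any workflow chaining more than three tools
    # must declare a human checkpoint before it is allowed to run.
    MAX_UNCHECKED_CHAIN = 3

    def validate(workflow: dict) -> None:
        """Refuse a long tool chain that has no human checkpoint."""
        tools = workflow["tools"]
        if len(tools) > MAX_UNCHECKED_CHAIN and not workflow.get("human_checkpoint"):
            raise ValueError(
                f"{workflow['name']}: {len(tools)} tools in sequence "
                "with no human checkpoint defined"
            )

    validate({"name": "lead_intake", "tools": ["forms", "crm", "email"]})
    validate({
        "name": "content_repurpose",
        "tools": ["drafting", "cms", "scheduler", "analytics"],
        "human_checkpoint": "editor reviews output before anything is scheduled",
    })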

What We Got Wrong

Three things, kept anonymous on the vendor side because the failure was ours, not theirs.

We automated a workflow before we had a written output standard. The tool ran. The output was inconsistent because nobody had defined what consistent looked like. We ran it for six weeks before we caught the drift. The fix was not a better tool. It was writing the standard we should have written before we turned it on.

We connected two platforms that were technically compatible but had different data models for the same concept. The integration ran cleanly and passed garbage. Nobody caught it for three weeks because the output looked right at a glance. The lesson: compatible does not mean aligned. Check the data model before you connect.
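
The check we skipped is cheap to do up front. A sketch of the shape of the problem, with invented values: both platforms expose a status field, so the connection is technically compatible, but one value never got an explicit mapping and passed through anyway.

    # Both platforms expose a "status" field, so the connection is
    # technically compatible. The values still need an explicit mapping.
    # Every value below is invented for illustration.
    PLATFORM_A_STATUSES = {"new", "working", "qualified", "closed_won", "closed_lost"}

    STATUS_MAPPING = {                # how platform A statuses land in platform B
        "new": "open",
        "working": "in_progress",
        "qualified": "in_progress",
        "closed_won": "won",
        # "closed_lost" never got mapped; the integration still "works"
    }

    unmapped = PLATFORM_A_STATUSES - STATUS_MAPPING.keys()
    if unmapped:
        print(f"Statuses that pass through with no defined meaning: {sorted(unmapped)}")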

We added a third tool to solve a problem created by two tools that were not fully configured. The right fix was to finish configuring what we had. Instead we added a layer. It took four months to simplify back down. Every time you feel the impulse to add a tool, ask whether full configuration of what you already have would solve the problem first.

What We Are Watching for the Next Six Months

Agentic workflows that operate across tools with less manual setup. The infrastructure for this is maturing faster than I expected at the start of 2025. Our current read is that the first operators who can deploy agents that hand off between platforms without brittle point-to-point integrations will have a real structural advantage. We are building the documentation and process foundations now so we are ready to deploy when the tooling is stable enough to trust in a production environment.

That is the whole snapshot. If something here is useful, take it. If you have questions about a specific category, reach out.

Learn, Grow, Repeat. If you want help building the systems behind a stack like this, that is what Starfish does.

Abel Sanchez

AI Strategist & Marketing Veteran

Over 20 years building brands and systems. Partner at Starfish Ad Age and Starfish Solutions. Abel helps businesses implement AI that actually creates results — not just noise.
