Working with AI as a team

Hrishi Olickel

Large language models are amazing. Used well, they can make us genuinely better at what we do. Unfortunately, they sit somewhere between people and software, and that's where things get confusing.

This matters more to us than to most. We build reliable agentic systems for a living. It's not just what we sell - it's what we believe.

If we can't practice good AI hygiene internally, we have no business telling others how to do it.

This post is about how we define work and our relationship to AIs at Southbridge: what your role is, what you're actually making, and how to keep AI from eroding the trust that makes collaboration possible.

Why Are You Here?

In a world where models can do more and more every day, the (still) irreplaceable human contribution is intent.

Here's the simple version of any work you do: intent goes in, things happen, and output comes out.

You decide something should exist. You have reasons for wanting it to exist in a particular way. You do some work (alone, with others, with AI), and produce an output: a document, code, a presentation, whatever.

Intent is the fingerprint you leave on your work. Why does this exist? Why is it this way and not some other way? What decisions were made? What was tried and didn't work? Everything from the prompts to the process to the thoughts you had that got you out of the shower and typing is authorial intent.

Intent is what underwrites your work and makes it extensible. If someone needs to continue your work, debug it, or extend it, they need to understand the intent behind it. Without that, they're starting from scratch, even if they have all your outputs in front of them.

Even worse is when the intent is not just missing, it's obfuscated. If you've ever read something and been confused about who was talking (AI, human, or something else entirely), you know what I mean.

What Do You Make?

The result of your work is something other humans rely on. They might build on it, learn from it, use it to make decisions, or pay for it. When you produce something and share it with others, you're implicitly saying: there's a mind behind this. You can engage with it. It's worth your time.

LLMs break this implicit contract when the writer becomes a middleman forwarding content they haven't processed. The reader encounters competent-sounding prose, engages with it seriously, maybe even builds on it, and then discovers that the information or opinion didn't actually come from the person they thought it did.

Even worse is the 'oh' moment: that realization that you've been consuming work that had less effort put into it than you put into reading it (wonderfully put by the Oxide LLM policy, something we strongly recommend reading along with this one). Or that you've based your own work on someone else's output that was itself based on a hallucination.

When this happens once, it's a mistake. When it happens repeatedly, people adapt.

We have good defenses: if we read someone's output twice and realize it wasted our time, by the fourth time we just don't give it much attention, and it joins the sea of slop we've become banner blind to. We assume it's not worth the effort. That reputation is incredibly hard to recover from.

The Ideal Standard

Dylan once said about good songwriting that there shouldn't be anything you can stick your finger through.

That's what we're aiming for: ideal work is underwritten by clearly communicated human intent. Someone should be able to point to any line of your work and ask "why is it this way?" and you should have an answer.

Not "I don't know, Claude wrote it." (or even worse, "I think Claude added that somewhere."), but an actual answer that traces back to a decision made by an identifiable intelligence (yours, an AI's, someone else's) in a context that can be recovered.

A few questions that test for this:

  • Can you explain why something is this way and not some other way? If you can't, you don't have intent. You have output that happened to you.
  • If someone engages seriously with this, can you have a real conversation right now? Or would you need to go back and actually read it first? Would you dread serious engagement or welcome it?
  • If you left tomorrow, could someone continue your work? Not just read it. Continue it. Build on it. Debug it.
  • Is there anything here that no one can account for? Phantom work: things that exist but nobody knows why they're there.

Work that passes these tests is work that can be built upon. Work that fails them might still be useful, but it's closer to research material than deliverable.

Finally, the ideal standard for work is output that you take seriously, and expect others to take seriously. There is a virtuous cycle here: when you take work seriously, others take it seriously, and that expectation reinforces itself. The reverse is also true, and much faster.

Vibing

Vibing is when you're working with AI in a kind of fugue state. You're not deciding what should exist; you're reacting to what appears. Prompt, output, prompt, output. The AI proposes, you vaguely approve, things accumulate. Much like a heavy night of drinking, you might not even remember which AIs were involved the next morning.

At the end, there's a lot of stuff. It might look impressive. But if you ask "why is it this way?" the honest answer is: I don't know. It emerged. The output is an emergent property of a process that nobody was really steering.

This isn't inherently bad! Vibing is fun. It's useful for exploration, for seeing what's possible, for generating raw material. It's awesome.

The problem is when vibed work stumbles into being treated as real work through negligence, without anyone noticing it crossed a line. Research starts wearing the costume of deliverable. Exploration gets presented as conclusion. And because the output looks competent, people build on it, trust it, cite it.

Then it breaks somewhere, and nobody knows why, because nobody ever knew how it worked. Or someone tries to continue it and realizes they have to start over. Or, worst case, someone folds a hallucination into their brain as fact, because nothing in the presentation suggested it wasn't carefully verified.

The work has to be set on fire and restarted simply because there's no way to continue it, often not even by the exact person who was doing it.

Smells to Notice

Borrowing from code smells, here are some work smells that suggest something's gone wrong.

"I'd need to go back and read it before I could discuss it." If you wrote it (or "wrote" it) and can't engage with it immediately, you forwarded output. You didn't produce work.

"Claude wrote it" as explanation rather than attribution. "Claude wrote the first draft, I revised it for X and Y reasons" is fine. "Claude wrote it" as the full explanation of why something exists is not. (Apologies to other models, I'm sure your work is just as good to be taking credit for)

"I'm not sure why it's this way." If you can't explain a choice, you didn't make it. Someone, or something, did, and you weren't involved enough to know.

Work that can't be continued by someone else. If you're the only person who can debug it or extend it, that's a bus factor problem. But if you can't continue it in a month either, it's not actually work. It's an artifact.

Phantom sections. Parts of a document or codebase that exist but nobody can account for. Where did this come from? Who decided it should be here? If the answer is "it was in Claude's output and I didn't remove it," that's phantom work.

Outputs longer than your inputs. Not a hard rule, but a smell. If you wrote 50 words and got 5000 back and shipped it, the math suggests you didn't add much.

AI Policy at Southbridge

Here's the rule:

Work presented without context is assumed to meet the ideal standard.

When you share something (to the team, to customers, to Notion, wherever), we assume by default that you are underwriting every inch of it. That all of it contains clear intent - ideally yours. That someone can engage with it, build on it, trust it.

If that's true, great. No special marking needed beyond basic honesty about how you got there.

If it's not true, you need to add context. Some examples:

  • "This is exploratory. I haven't verified it, don't build on this yet."
  • "This is Claude's analysis, lightly edited. I'm sharing because it's interesting, not because I've confirmed it."
  • "First draft, will need review before it's usable."
  • "Compiled from multiple AI outputs, I've checked sections X and Y but not Z."

Another wonderful way to cover complex, emergent AI work is by talking about how you got there. A good example is Claude Code: An Analysis, which is 100K words generated by AI, hallucinations included. However, it was accompanied by Conducting Smarter Intelligences Than Me: 8,000 manually written words covering human intent, process, changes, and opinions, which made the work accessible and brought it a lot closer to the ideal.

The point isn't to prohibit AI use or require everything to be handwritten. The point is: don't let people think they're encountering your intent when they're not.

Sharing AI output as AI output, curated by you, marked as such, is completely legitimate. Sharing AI output as your work, when you haven't processed it into actual work, is where trust erodes.

A personal ask

Could you try something, for a week? Work according to the ideal standard even when no one's watching. Even for "personal" projects. Even when it's just exploration. Document intent before you ever touch a chat window. Write down prompts manually. Every time you correct a mistake from an AI, investigate why it happened and note down your hypothesis. Keep a record of your chats and your agentic logs.

Frognu, the Southbridge mascot, asking you to try something for a week

This will feel slower at first. You're getting so much less done! But in my experience (and this might not be yours, so take it as an invitation not a mandate) it actually makes you faster over time.

Work that meets the ideal standard doesn't need to be redone. When you come back to it in two months, it's put away well: you can pick up where you left off. It takes you minutes - not hurried hours - to hand it over, or ask for help. You're not constantly firefighting the downstream consequences of work that nobody really understood.

You also learn more about the AIs you're working with. When you're working intentionally (steering, deciding, capturing your reasoning), you learn how these tools/minds actually behave. Where they're reliable. Where they hallucinate. What kind of guidance they need. That knowledge compounds.

For quick, cheap, voluminous work, we already have AIs. The human job isn't to be a slower AI. It's to provide intent, judgment, and direction, and to get better at doing that over time.

AIs can be drinking buddies or precision instruments. Depends how you treat them.


TL;DR

  • Intent is your contribution. AI generates output; you provide the why.
  • When you share work, people assume you're behind it.
  • If that assumption is wrong, add context so they know what they're getting.
  • Work without intent is exploration. Exploration is fine, just don't let it masquerade as finished work.
  • Consider working to the ideal standard all the time. It seems slower but probably isn't.