CuriouslyC 2 hours ago

As someone who's built a project in this space, I can tell you this is incredibly unreliable. Subagents don't get the full system prompt (including stuff like CLAUDE.md directions), so they're flying very blind in your projects. They tend to get derailed by their lack of project knowledge and veer into mock solutions and "let me just make a simpler solution that demonstrates X."

I advise people to only use subagents for stuff that is very compartmentalized, because they're hard to monitor and prone to failure in complex codebases, where agents live and die by project knowledge curated in files like CLAUDE.md. If your main Claude instance doesn't give a good handoff to a subagent, or a subagent doesn't give a good handback to the main Claude, shit will go sideways fast.

Also, don't lean on agents for refactoring. Their ability to refactor a codebase goes in the toilet pretty quickly.

  • zarzavat an hour ago

    > Their ability to refactor a codebase goes in the toilet pretty quickly.

    Very much this. I tried to get Claude to move some code from one file to another. Some of the code went missing. Some of it was modified along the way.

    Humans have strategies for refactoring, e.g. "I'm going to start from the top of the file and Cut code that needs to be moved and Paste it in the new location". LLMs don't have a clipboard (yet!), so they can't do this.

    Claude can only do this refactoring reliably if it can keep both the source and destination files in context. This was a large file, so it got lost. Even then it needs direct supervision.
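
    A deterministic "clipboard" could just be a tool the agent calls instead of retyping code. A minimal sketch, assuming a Node tool setup (moveLines and the example paths are made up):

        // Cut a 1-indexed, inclusive line range out of one file and append
        // it to another, byte-for-byte, so nothing can be silently dropped
        // or "improved" along the way.
        import { readFileSync, writeFileSync } from "node:fs";

        function moveLines(srcPath: string, start: number, end: number, dstPath: string): void {
          const srcLines = readFileSync(srcPath, "utf8").split("\n");
          const cut = srcLines.splice(start - 1, end - start + 1);
          writeFileSync(srcPath, srcLines.join("\n"));
          const dst = readFileSync(dstPath, "utf8");
          writeFileSync(dstPath, dst + "\n" + cut.join("\n"));
        }

        // Usage: moveLines("src/utils.ts", 120, 180, "src/string-utils.ts");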

  • theshrike79 2 hours ago

    I don't use subagents to do things; they're best for analysing things.

    Like "evaluate the test coverage" or "check if the project follows the style guide".

    This way the "main" context only gets the report and doesn't waste space on massive test outputs or reading multiple files.
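
    A minimal sketch of that pattern, assuming the @anthropic-ai/sdk package (the function name, model choice and prompts are illustrative): the file dump burns tokens in a throwaway context, and only the short report comes back.

        import Anthropic from "@anthropic-ai/sdk";
        import { readFileSync } from "node:fs";

        const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY

        // Run one analysis task in a fresh, disposable context.
        async function analysisReport(task: string, files: string[]): Promise<string> {
          const dump = files
            .map((f) => `--- ${f} ---\n${readFileSync(f, "utf8")}`)
            .join("\n");
          const msg = await anthropic.messages.create({
            model: "claude-3-5-haiku-latest", // assumption: any cheap model will do
            max_tokens: 1024,
            system: "You are a code analyst. Reply with a short report only.",
            messages: [{ role: "user", content: `${task}\n\n${dump}` }],
          });
          // Only this report ever reaches the main agent's context.
          return msg.content.map((b) => (b.type === "text" ? b.text : "")).join("");
        }

        // analysisReport("Check if these files follow the style guide", ["src/a.ts"]);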

    • olivermuty 2 hours ago

      This is only a problem if an agent is made in a lazy way (all of them).

      Chat completion sends the full prompt history on every call.

      I am working on my own coding agent and seeing massive improvements by rewriting history using either a smaller model or a freestanding call to the main one.

      It really mitigates context poisoning.
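
      A minimal sketch of what I mean, assuming the @anthropic-ai/sdk package (keepRecent, the model name and the summary prompt are arbitrary choices):

          import Anthropic from "@anthropic-ai/sdk";

          type Turn = { role: "user" | "assistant"; content: string };

          const anthropic = new Anthropic();

          // Collapse everything but the last few turns into a summary from
          // a smaller model, so the main model sees a short, clean prefix
          // instead of the raw log.
          async function compactHistory(turns: Turn[], keepRecent = 6): Promise<Turn[]> {
            if (turns.length <= keepRecent) return turns;
            const old = turns.slice(0, -keepRecent);
            const msg = await anthropic.messages.create({
              model: "claude-3-5-haiku-latest", // assumption: any cheaper model
              max_tokens: 512,
              messages: [{
                role: "user",
                content:
                  "Summarize this session so an agent can continue it. Keep " +
                  "file paths, decisions and open tasks; drop raw tool output.\n\n" +
                  old.map((t) => `${t.role}: ${t.content}`).join("\n"),
              }],
            });
            const summary = msg.content.map((b) => (b.type === "text" ? b.text : "")).join("");
            return [
              { role: "user", content: `Summary of earlier work:\n${summary}` },
              ...turns.slice(-keepRecent),
            ];
          }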

      • CuriouslyC an hour ago

        There's a large body of research on context pruning/rewriting (I know because I'm knee-deep in benchmarks in release prep for my context compiler); definitely don't ad hoc this.

      • mattmanser 2 hours ago

        Everyone complains that when you compact the context, Claude tends to get stupid.

        Which, as far as I understand it, means summarizing the context with a smaller model.

        Am I misunderstanding you? The practical experience of most people seems to contradict your results.

        • NitpickLawyer 6 minutes ago

          One key insight I have from having worked on this since the early stages of LLMs (before ChatGPT came out) is that the current crop of LLM clients or "agentic clients" don't log, write, or keep track of success over time. It's more of a "fire and forget" environment right now, and that's why a lot of people are getting vastly different results. Hell, even week to week on the same tasks you get different results (see the recent "Claude getting dumber" drama).

          Once we start to see that kind of self-feedback going into next iterations (with possible training runs between sessions, a "dreaming" stage from OG RL, distilling a session, grabbing key insights, storing them, surfacing them at the next inference, etc.), then we'll see true progress in this space.
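
          The simplest version of that loop is tiny; a sketch (everything here, down to the file name, is hypothetical):

              import { readFileSync, writeFileSync, existsSync } from "node:fs";

              const STORE = ".agent-insights.json";

              function loadInsights(): string[] {
                return existsSync(STORE)
                  ? JSON.parse(readFileSync(STORE, "utf8"))
                  : [];
              }

              // Called at session end, after a model call has distilled
              // the transcript into short lessons ("tests live in tests/,
              // run them with make test").
              function saveInsights(newOnes: string[]): void {
                writeFileSync(
                  STORE,
                  JSON.stringify([...loadInsights(), ...newOnes], null, 2)
                );
              }

              // Called at session start: surface what earlier sessions learned.
              function systemPrompt(base: string): string {
                const insights = loadInsights();
                return insights.length
                  ? `${base}\n\nLessons from earlier sessions:\n- ${insights.join("\n- ")}`
                  : base;
              }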

          The problem is that a lot of people work on these things in silos. The industry is much more geared towards quick returns, having to show something now, rather than building strong foundations based on real data. Kind of an analogy to early Linux dev. We need our own Linus, it would seem :)

dutchCourage an hour ago

That sounds crazy to me; Claude Code has so many limitations.

Last week I asked Claude Code to set up a Next.js project with internationalization. It tried to install a third-party library instead of using the internationalization method recommended for the latest version of Next.js (using Next's middleware) and could not produce a functional version of the boilerplate site.
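
For reference, the middleware approach recommended in the Next.js docs looks roughly like this (a trimmed sketch: the locale list is an example, and real code would detect the locale from the Accept-Language header rather than always redirecting to the default):

    // middleware.ts
    import { NextResponse, type NextRequest } from "next/server";

    const locales = ["en", "de", "fr"];
    const defaultLocale = "en";

    export function middleware(request: NextRequest) {
      const { pathname } = request.nextUrl;
      const hasLocale = locales.some(
        (l) => pathname === `/${l}` || pathname.startsWith(`/${l}/`)
      );
      if (hasLocale) return; // already locale-prefixed, let it through
      // Otherwise redirect /about -> /en/about.
      return NextResponse.redirect(
        new URL(`/${defaultLocale}${pathname}`, request.url)
      );
    }

    export const config = {
      // Skip Next internals, API routes and static files.
      matcher: ["/((?!_next|api|.*\\..*).*)"],
    };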

There are some specific cases where agentic AI does help me, but I can't picture an agent running unchecked effectively in its current state.

simianwords an hour ago

Slightly off topic, but I would really like an agentic workflow that is embedded in my IDE as well as my code hosting provider, like GitHub for pull requests.

Ideally I would like to spin off multiple agents to solve multiple bugs or features. The agents have to use the CI in GitHub to get feedback on tests. And I would like to view it in my IDE because I like the ability to understand code by jumping through definitions.

Support for multiple branches at once: I should be able to spin off multiple agents that work on different branches simultaneously.

  • Jare 39 minutes ago

    Would that be solved by having several clones of your repo, each with an IDE and a Claude working on its own problem? Much like how multiple people work in parallel.
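
    git worktree is a lighter-weight version of the same idea: several working directories from one clone, one branch and one agent each. A sketch (the claude -p invocation is illustrative; substitute whatever agent CLI you use):

        import { execSync, spawn } from "node:child_process";

        // One working directory per branch, one agent per directory,
        // so parallel edits never collide.
        function spawnAgentOnBranch(branch: string, task: string): void {
          const dir = `../worktrees/${branch}`;
          execSync(`git worktree add ${dir} -b ${branch}`);
          spawn("claude", ["-p", task], { cwd: dir, stdio: "inherit" });
        }

        // spawnAgentOnBranch("fix-login-redirect", "Fix the login redirect bug");
        // spawnAgentOnBranch("feat-dark-mode", "Add a dark mode toggle");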

    • simianwords 25 minutes ago

      Yeah, but it's not ideal. I thought of this too.

raminf 2 hours ago

Was going to ask how much all this cost, but this sort of answers it:

> "Managing Cost and Usage Limits: Chaining agents, especially in a loop, will increase your token usage significantly. This means you’ll hit the usage caps on plans like Claude Pro/Max much faster. You need to be cognizant of this and decide if the trade-off—dramatically increased output and velocity at the cost of higher usage—is worth it."

Frannky 3 hours ago

Is it a good idea to generate more code faster to solve problems? Can I solve problems without generating code?

If code is a liability and the best part is no part, what about leveraging Markdown files only?

The last programs I created were just CLI agents with Markdown files and MCP servers (some code here, but very little).

The feedback loop is much faster, allowing me to understand what I want after experiencing it, and self-correction is super fast. Plus, you don't get lost in the implementation noise.
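
The shape of it, as a minimal sketch assuming the @anthropic-ai/sdk package (the playbooks/ directory and model name are placeholders, and the MCP wiring is omitted): the Markdown files are the program, and you change behavior by editing prose.

    import Anthropic from "@anthropic-ai/sdk";
    import { readFileSync, readdirSync } from "node:fs";

    const anthropic = new Anthropic();

    // Concatenate every playbook into one system prompt.
    const playbook = readdirSync("playbooks")
      .filter((f) => f.endsWith(".md"))
      .map((f) => readFileSync(`playbooks/${f}`, "utf8"))
      .join("\n\n");

    async function run(userRequest: string): Promise<string> {
      const msg = await anthropic.messages.create({
        model: "claude-3-5-haiku-latest", // assumption; use whatever fits
        max_tokens: 2048,
        system: playbook, // the Markdown files *are* the program
        messages: [{ role: "user", content: userRequest }],
      });
      return msg.content.map((b) => (b.type === "text" ? b.text : "")).join("");
    }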

  • ehnto 2 hours ago

    Code you didn't write is an even bigger liability, because if the AI gets off track and you can't guide it back, you may have to spend the time to learn its code and fix the bugs.

    It's no different to inheriting a legacy application though. And from the perspective of a product owner, it's not a new risk.

    • zarzavat an hour ago

      Claude is a junior. The more you work with it, the more you get a feel for which tasks it will ace unsupervised (some subset of grunt work) and which tasks to not even bother using it for.

      I don't trust Claude to write reams of code that I can't maintain, except when that code is embarrassingly testable, i.e. it has an external source of truth.

    • Frannky an hour ago

      There is no generated code. It is just a user interacting with a CLI terminal (via a LibreChat frontend), guided by Markdown files, with access to MCPs.

user3939382 an hour ago

I’ve got this down to a science.

zachwills 4 days ago

Follow-up from my last post; lots of you were asking for more examples. I will be around this morning if anybody has questions.

  • bazhand 3 days ago

    Can it work without Linear, using md files?