automatic6131 8 hours ago

The kind of person who wants to build a website copier is exactly who I had in mind for the target of vibecoding.

Bad idea, bad execution, I like it when a plan comes together.

  • anupsingh123 8 hours ago

    I think there's some confusion about what justcopy does - it's for cloning YOUR OWN projects, not scraping other people's websites. Built it out of frustration when I tried to fork one of my projects for a different idea and it took a full day even with Claude Code and Cursor. Lots of manual config updates, dependency changes, renaming stuff, etc. The $200 mistake was about agent orchestration, not the ethics of the product. But appreciate the feedback - clearly need to communicate the use case better.

    • chucksta 26 minutes ago

      >For those who don’t know, we’re building a tool that lets you copy any website, customize it, and deploy it - all automated.

      _any_ website, can't imagine why there is _any_ confusion.

    • automatic6131 7 hours ago

      I'm not going to pay you to slightly rip off my own ideas. Who is going to pay you for this, and what are they doing with it?

cafebabbe 3 hours ago

Ah so this is where the current GDP growth comes from.

pjdkoch 4 hours ago

If you buy senior engineering hours and give them vague requirements, this is close enough to what you'll get.

dominicrose 7 hours ago

Even without AI, companies have been burning cash uncontrollably on cloud services. I guess it's worth it when the time saved, scalability, etc. are much more valuable than the money.

leptons 8 hours ago

Oh, they burned a lot more than $200; you just paid $200. These things cost way more to run than what people pay for them; the price is heavily subsidized.

  • simonw 8 hours ago

    I think the opposite is much more likely to be true: that vendors who charge money for inference are charging more than it costs them to service a prompt.

    I've heard from sources that I trust that both AWS and Google Gemini charge more than it costs them in energy to run inference.

    You can get a good estimate for the truth here by considering open weight models. It's possible to determine exactly how much energy it costs to serve DeepSeek V3.2 Exp, since that model is open weight. So run that calculation, then take a look at how much providers are charging to serve it and see if they are likely operating at a loss.

    Here are some prices for that particular model: https://openrouter.ai/deepseek/deepseek-v3.2-exp/providers
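    Since that methodology may not be obvious, here is a back-of-envelope sketch of the energy side of the calculation. Every number in it (GPU count, power draw, throughput, electricity price) is an illustrative assumption, not a measurement:

```python
# Back-of-envelope energy cost per million output tokens.
# Every number below is an illustrative assumption, NOT a measurement.

gpu_power_kw = 8 * 0.7      # assume an 8-GPU node at ~700 W per GPU under load
tokens_per_second = 2000    # assumed aggregate throughput across batched requests
usd_per_kwh = 0.10          # assumed industrial electricity price

seconds_per_m_tokens = 1_000_000 / tokens_per_second
energy_kwh = gpu_power_kw * seconds_per_m_tokens / 3600
energy_cost_usd = energy_kwh * usd_per_kwh

print(f"assumed energy cost per 1M tokens: ${energy_cost_usd:.3f}")
```

    Under those assumed numbers the energy bill works out to roughly $0.08 per million output tokens, which you can then compare against the per-token prices the providers in the link above are charging.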

    • beAbU 3 hours ago

      You can't conveniently ignore the cost of model development and training.

      This is like saying solar power is free if you ignore the equipment and installation costs.

      Worse still, model creators are in an arms race. They can't release a model and call it a day, waiting for it to start paying for itself. They need to jump immediately to the next version of the model or risk falling behind.

    • Tade0 8 hours ago

      If that's the case, then why are AI companies bleeding money?

      Or: what are they bleeding money on?

      • simonw 7 hours ago

        They lose money on research, on training, and on offering model trials for free (a marketing expense).

        That doesn't mean that when they do charge for the models - especially via their APIs - they are serving them at a unit-cost loss.

        • Tade0 6 hours ago

          Ok, fine, but I think it's disingenuous to only mention energy expenditure. There's also infrastructure, necessary re-training, and R&D - and we don't know how much of that must be spent just to stay in the market.

          • simonw 6 hours ago

            Competitive, venture-backed companies in a high-growth market losing money once you take R&D into account is how the tech industry has worked for decades.

            Shopify, Uber and Airbnb all hit profitability after 14 years. Amazon took 9.

            • Tade0 2 hours ago

              The companies mentioned didn't require the sort of R&D that AI does.

              And this isn't something that will go away anytime soon. OpenAI for instance is projecting that in 2030 R&D will still account for 45% of their costs. They think they'll be profitable by that time, or so they're telling investors.

        • surgical_fire 7 hours ago

          Depends on the vendor and how they charge. OpenAI loses money on subscriptions [1]. Maybe the people who pay 200 bucks on a subscription are exactly the kind of people that will try to use the maximum out of it, and if you go down to the 20 bucks tier you will find more of the type of user that pays but doesn't use it all that much?

          I would presume that companies selling compute for AI inference either make some money or at least break even when they serve a request. But I wouldn't be surprised if they are subsidizing this cost for the time being.

          [1]: https://finance.yahoo.com/news/sam-altman-says-losing-money-...

          • simonw 7 hours ago

            That "losing money on subscriptions" story is a one-off Sam Altman tweet from January 2025, when they were promoting their brand new $200 account and the first version of Sora. I wouldn't treat that as a universal truth.

            https://twitter.com/sama/status/1876104315296968813

            "insane thing: we are currently losing money on openai pro subscriptions!

            people use it much more than we expected"

            • surgical_fire 6 hours ago

              Sam Altman is a bullshitter. A liar cares about the truth and attempts to hide it. A bullshitter doesn't care whether something is true or false, and is just using rhetoric to convince you of something.

              I don't doubt that they lose money on the $200 subscription, because the people who pay $200 are probably the ones who will max out usage over time, no matter how wasteful. Sam Altman framed it as "it's so useful people are using it more than we expected!" because he's interested in having everyone believe that LLMs are the future. It's all bullshit.

              If I had to guess, they probably at least break even on API calls, and might make some money on lower-tier subscriptions (i.e., people who pay for it but use it sparingly, on an as-needed basis).

              But that is boring, and hints at limited usability. Investors won't want to burn hundreds of billions in cash for something that may be sort of useful. They want destructive amounts of money in return.

      • Ferret7446 7 hours ago

        On building the next new feature/integration/whatever? I feel like this should be a rhetorical question, but given that it was asked, apparently it's not...

      • anupsingh123 8 hours ago

        btw this was DeepSeek-V3.2. If I'd been using Claude Sonnet 4.5, we'd be looking at a $2000 bill instead.

        • Tade0 7 hours ago

          Okay, yikes. Good thing you can even set up those controls, unlike with that other company in the compute infrastructure business.

anupsingh123 9 hours ago

Classic "I'll be right back" moment that cost me real money.

Building justcopy.ai - lets you clone, customize and ship any website. Built 7 AI agents to handle the dev workflow automatically.

Kicked them off to test something. Went to grab coffee.

Came back to a $100 spike on my OpenRouter bill. First thought: "holy shit we have users!"

We did not have users.

Added logging. The agent was still running. Making calls. Spending money. Just... going. Completely autonomous in the worst possible way. Final damage: $200.

The fix was embarrassingly simple:

- Check for interrupts before every API call
- Add hard budget limits per session
- Set timeouts on literally everything
- Log everything so you're not flying blind

Basically: autonomous ≠ unsupervised. These things will happily burn your money until you tell them to stop.
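Those four guards can be sketched as a thin wrapper around the model client. This is a minimal illustration with hypothetical names (`AgentGuard`, `est_cost_usd`), not the actual justcopy code, and real timeouts should be enforced in the HTTP client itself:

```python
import time

class BudgetExceeded(Exception):
    pass

class AgentGuard:
    """Interrupt check + hard budget cap around every model call (sketch)."""

    def __init__(self, budget_usd: float, call_timeout_s: float = 60.0):
        self.budget_usd = budget_usd
        self.call_timeout_s = call_timeout_s  # pass through to your HTTP client
        self.spent_usd = 0.0
        self.stopped = False

    def stop(self) -> None:
        """Flip from a supervisor thread or UI to halt the run."""
        self.stopped = True

    def call(self, fn, est_cost_usd: float):
        # Check for interrupts before every API call.
        if self.stopped:
            raise RuntimeError("run stopped by operator")
        # Hard budget limit per session.
        if self.spent_usd + est_cost_usd > self.budget_usd:
            raise BudgetExceeded(
                f"${self.spent_usd:.2f} spent, cap is ${self.budget_usd:.2f}")
        t0 = time.monotonic()
        result = fn()  # the real API call; give it self.call_timeout_s
        self.spent_usd += est_cost_usd
        # Log everything so you're not flying blind.
        print(f"call ok in {time.monotonic() - t0:.1f}s, "
              f"total spend ${self.spent_usd:.2f}")
        return result
```

If every model call goes through `guard.call(...)`, a stop flag or a blown budget halts the agent before the next request, instead of after your coffee.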

Has this happened to anyone else? What safety mechanisms are you using?

  • magicalhippo 8 hours ago

    I thought the hotel AIs playing poker together in Altered Carbon were a bit cheesy, until these newfangled LLM-driven agents came along and it all seemed a lot more realistic.

    Agents doing nothing, just doing things for the sake of doing things.

    Seems we're there.

  • W3schoolz 8 hours ago

    What a great learning opportunity! Supervision is key and budget limits are highly valuable in preventing surprises.

    That said, a budget limit of $5-10k per agent makes sense IMO. You're underpaying your agents and won't get principal-engineer quality at those rates.

  • fragmede 9 hours ago

    Privacy.com credit card with a limit set, and making sure that billing is not set to auto on the LLM platform.

    • anupsingh123 9 hours ago

      How would that help with supervising agent runs for each user on justcopy.ai?

      • W3schoolz 8 hours ago

        What is justcopy.ai? Is justcopy.ai the project you are working on? How can I find out more about justcopy.ai?

        • anupsingh123 8 hours ago

          Yes, I built 7 AI agents for copying any website. The goal is to create production-quality copies. You can sign up and give it a try! I'm still refining these agents, but first I'm trying to add restrictions so they don't burn through my wallet lol

          I started this project out of frustration. When I tried to clone other projects using Claude Code and customize them a bit—simple Next.js, ECS, CDK, and Express server setups—it took several hours just to get everything working. I realized that while vibe coding is great, it's still time-consuming to build a production-ready, functioning product.

  • SpaceNoodled 8 hours ago

    My chief safety mechanism is not using money-burning slop generators.

    • anupsingh123 8 hours ago

      That's one approach. For me, the agent setup cut what used to be a full day of manual work down to minutes - even with the $200 learning tax, that's still a net win. But I get the skepticism.