• sylver_dragon@lemmy.world · 3 days ago

    You’d think that a competent technology company, with their own AI, would be able to figure out a way to spoof Cloudflare’s checks. I’d still think that.

    • snooggums@lemmy.world · 3 days ago (edited)

      Or find a more efficient way to manage data, since their current approach is basically DDoSing the internet, both for training data and for responding to user interactions.

      • flux@lemmy.ml · 2 days ago

        This is not about training data, though.

        Perplexity argues that Cloudflare is mischaracterizing AI Assistants as web crawlers, saying that they should not be subject to the same restrictions since they are user-initiated assistants.

        Personally I think that claim is a decent one: user-initiated requests should not be subject to robot limitations, and they are not the source of DDoS attacks on web sites.

        I think the solution is quite clear, though: either make use of the user’s identity to waltz through the blocks, or even make use of the user’s browser to do it. Once a captcha appears, let the user solve it.

        Though technically making all this happen flawlessly is quite a big task.
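
        A rough sketch of that fallback, assuming a hypothetical ask_user_browser callback that opens the URL in the user’s own browser session and returns the page once they pass the challenge (not Perplexity’s actual code, just the shape of the idea):

        ```python
        import requests

        def looks_like_cloudflare_challenge(resp):
            # Heuristic only: challenged responses typically come back as a
            # 403/503 from the Cloudflare edge with a "cf-mitigated" header.
            return (
                resp.status_code in (403, 503)
                and resp.headers.get("cf-mitigated") == "challenge"
            )

        def fetch_page(url, ask_user_browser):
            # First try a plain server-side fetch from the assistant backend.
            resp = requests.get(url, timeout=15)
            if looks_like_cloudflare_challenge(resp):
                # Defer to the client: the (assumed) callback opens the URL in
                # the user's browser, lets them solve the captcha, and hands
                # the rendered HTML back to the assistant.
                return ask_user_browser(url)
            resp.raise_for_status()
            return resp.text
        ```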

        • snooggums@lemmy.world · 2 days ago

          > Personally I think that claim is a decent one: user-initiated requests should not be subject to robot limitations, and they are not the source of DDoS attacks on web sites.

          They are one of the sources!

          The AI scraping that happens when a user enters a prompt is DDoSing sites, on top of the scraping for training data that is already DDoSing them. These shitty companies are slamming the same sites over and over again in the least efficient way possible, because they don’t reuse the data scraped for training when they process a user prompt that does a web search.

          Scraping once extensively and scraping a bit less but far more frequently have similar impacts.
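
          For illustration, a minimal sketch of what “reuse what you already fetched” could look like: a shared cache with a short TTL and ETag revalidation, so ten prompts touching the same article don’t mean ten full fetches from the origin (names and numbers are made up, not any company’s real pipeline):

          ```python
          import time
          import requests

          class PageCache:
              """Naive shared cache: serve recent copies, revalidate with ETags."""

              def __init__(self, ttl_seconds=900):
                  self.ttl = ttl_seconds
                  self.entries = {}  # url -> (fetched_at, etag, body)

              def get(self, url):
                  now = time.time()
                  cached = self.entries.get(url)
                  if cached and now - cached[0] < self.ttl:
                      return cached[2]  # fresh enough: the site isn't touched at all

                  headers = {"If-None-Match": cached[1]} if cached and cached[1] else {}
                  resp = requests.get(url, headers=headers, timeout=15)
                  if resp.status_code == 304 and cached:
                      # Unchanged since last fetch: keep the stored copy, reset the clock.
                      self.entries[url] = (now, cached[1], cached[2])
                      return cached[2]
                  resp.raise_for_status()
                  self.entries[url] = (now, resp.headers.get("ETag"), resp.text)
                  return resp.text
          ```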

          • flux@lemmy.ml · 2 days ago

            When a user enters a prompt, the backend may retrieve a handful of pages to serve that prompt. It won’t retrieve every page of a site. That’s hardly different from a user running a search and opening the five topmost results in tabs. If that isn’t a DoS attack, then an agent doing the same thing isn’t a DDoS attack.
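
            A toy sketch of that access pattern, with search standing in for whatever ranking backend returns the top result URLs (purely illustrative):

            ```python
            import requests

            def retrieve_for_prompt(query, search, top_n=5):
                # `search` is assumed to return a ranked list of result URLs,
                # like the top links a person would click after a web search.
                pages = []
                for url in search(query)[:top_n]:
                    resp = requests.get(url, timeout=15)
                    if resp.ok:
                        pages.append(resp.text)
                return pages  # a handful of targeted fetches, not a site crawl
            ```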

            Constructing the training material in the first place is a different matter, but if you’re asking about fresh events or new APIs, the training data just doesn’t cut it. The training, and the retrieval of material for it, were done a long time ago.

    • The Quuuuuill@slrpnk.net · 3 days ago

      see, but they’re not competent. further, they don’t care. most of these ai companies are snake oil. they’re selling you a solution that doesn’t meaningfully solve a problem. their main way of surviving is saying “this is what it can do now, just imagine what it can do if you invest money in my company.”

      they’re scammers, the lot of them, running ponzi schemes with our money. if the planet dies for it, that’s no concern of theirs. ponzi schemes require the schemer to have no long term plan, just a line of credit that they can keep drawing from until they skip town before the tax collector comes