• 0 Posts
  • 16 Comments
Joined 7 months ago
Cake day: November 10th, 2024

  • I think this would only be acceptable if the “AI-assisted” system kicks in when call volumes are high (when dispatchers are overburdened with calls).

    For anyone who’s been in a situation where you’re frantically trying to get ahold of 911 and have to make 10 calls before you get through, a system like this would have been really useful for relieving whatever call-volume situation was going on at the time. In my case it didn’t end up mattering much, because the guy had already been dead for a bit.

    And for those of you who are dispatchers: I get it, it can be frustrating to field 911 calls for the most ridiculous of reasons, but I still think it would be best if a system like this only kicks in when necessary.

    Being able to talk to a human right away is way better than essentially being asked to “press 1 if this is really an emergency, press 2 if this is not an emergency”.


  • I had to click to figure out just what an “AI Browser” is.

    It’s basically Copilot/Recall, but only for your browser. If the models run locally, the information is protected, and none of it is transmitted, then I don’t see a problem with this (although they would have to prove it by being open source). But, as it is, this just looks like a browser with major privacy/security flaws.

    At launch, Dia’s core feature is its AI assistant, which you can invoke at any time. It’s not just a chatbot floating on top of your browser, but rather a context-aware assistant that sees your tabs, your open sessions, and your digital patterns. You can use it to summarize web pages, compare info across tabs, draft emails based on your writing style, or even reference past searches.

    Reading into it a bit more:

    Agrawal is also careful to note that all your data is stored and encrypted on your computer. “Whenever stuff is sent up to our service for processing,” he says, “it stays up there for milliseconds and then it’s wiped.” Arc has had a few security issues over time, and Agrawal says repeatedly that privacy and security have been core to Dia’s development from the very beginning. Over time, he hopes almost everything in Dia can happen locally.

    Yeah, the part about sending everything appearing in my browser window (passwords, banking, etc.) to some other computer for processing makes the other assurances worthless. At least they have plans to eventually run everything locally, but this is a hard pass for me.


  • I didn’t factor in mobile power usage as much in the equation before because it’s fairly negligible. However, I downloaded an app to track my phone’s energy use just for fun.

    A mobile user browsing the fediverse draws power at a rate of ~1 watt (depending on the phone, of course, and whether you’re on WiFi or LTE, etc.).

    For a mobile user on WiFi:
    In the 16 seconds it takes a desktop user to burn through the energy of those 2 prompts to ChatGPT, that same mobile user would only use ~0.00444 Wh.

    Looking at it another way, a mobile user could browse the fediverse for 18min before they match the 0.3 Wh that a single prompt to ChatGPT would use.

    For a mobile user on LTE:
    With Voyager I was getting a rate of ~2 Watts.
    With a browser I was getting a rate of ~4 Watts.

    So to match the energy of a single prompt to ChatGPT, you could browse the fediverse on Voyager for ~9 minutes, or in a browser for ~4.5 minutes.

    I’m not sure how accurate this app is, and I didn’t test extensively to really nail down exact values, but those numbers sound about right.
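
    The arithmetic above is easy to sanity-check. A minimal sketch, using the figures assumed in this thread (~0.3 Wh per ChatGPT prompt, ~1 W for a phone on WiFi, ~2 W for Voyager on LTE, ~4 W for a browser on LTE):

    ```python
    # Back-of-the-envelope check of the figures above.
    # All wattages and the per-prompt energy are rough estimates from this thread.

    PROMPT_WH = 0.3  # Wh per ChatGPT prompt (assumed)

    def wh_used(watts: float, seconds: float) -> float:
        """Energy in watt-hours for a device drawing `watts` for `seconds`."""
        return watts * seconds / 3600

    def minutes_to_match(watts: float, prompts: int = 1) -> float:
        """Minutes of browsing at `watts` that equals `prompts` prompts' energy."""
        return prompts * PROMPT_WH / watts * 60

    print(wh_used(1, 16))        # WiFi phone for 16 s -> ~0.00444 Wh
    print(minutes_to_match(1))   # WiFi phone: ~18 min per prompt
    print(minutes_to_match(2))   # Voyager on LTE: ~9 min
    print(minutes_to_match(4))   # browser on LTE: ~4.5 min
    ```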







  • “environmentally damaging”
    I see a lot of users on here saying this about any use case for AI without actually doing any sort of comparison.

    In some cases, AI absolutely uses more energy than an alternative, but you really need to break it down and it’s not a simple thing to apply to every case.

    For instance: using an AI visual detection model hooked up to a camera to detect when rain droplets are hitting the windshield of a car. A completely wasteful example. In comparison, you could just use a small infrared emitter that pulses every now and then and measures how much light reflects back through the glass to tell when water is on the windshield. That sensor uses far less electricity and has been working just fine in cars as they’re used today.

    Compare that to enabling DLSS in a video game, where NVIDIA uses multiple AI models to improve performance. As long as you cap the framerate, the frame generation, upscaling, etc. will actually conserve electricity, because your hardware no longer works as hard to process and render the graphics (especially if you’re playing on a 4K monitor).

    Looking at Wikipedia’s use case: how long would it take for users to go through and write a summary or a “simple.wikipedia” page for every article, and how much electricity would that use? Compare that to running everything through an LLM once and quickly generating a summary (a use case where LLMs actually excel). It’s honestly not that simple either, because we’d also have to consider how often these summaries get regenerated. Is it every time someone makes a minor edit to a page? Every few days/weeks, after multiple edits have accumulated? Etc.
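
    To put rough numbers on the one-time-generation case, here’s a sketch. Both inputs are assumptions: the ~0.3 Wh/prompt figure used earlier in this thread, and an article count of roughly 7 million (in the ballpark for English Wikipedia), ignoring regeneration after edits:

    ```python
    # Hypothetical back-of-the-envelope: one-time LLM summarization of
    # every English Wikipedia article. Both inputs are rough assumptions.

    WH_PER_SUMMARY = 0.3   # Wh per summary (per-prompt figure from this thread)
    ARTICLES = 7_000_000   # roughly the English Wikipedia article count

    total_kwh = WH_PER_SUMMARY * ARTICLES / 1000
    print(total_kwh)  # -> ~2100 kWh for one full pass
    ```

    About 2.1 MWh for a single pass over everything, which is on the order of a couple of months of a typical household’s electricity use. The real question is the regeneration cadence, since that multiplies this one-time cost.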

    Then you also have to consider: even if a particular use case uses more electricity, does it actually save time? Is the time saved worth the extra electricity? And how was that electricity generated anyway: solar, coal, gas, wind, nuclear, hydro, or geothermal?

    Edit: typo







  • Sandbar_Trekker@lemmy.today to Technology@lemmy.world · *Permanently Deleted* · 2 months ago

    It’s probably better this way.

    Otherwise you end up with people accusing movies of using AI when they didn’t.

    And then there’s the question of how you decide where to draw the line for what’s considered AI as well as how much of it was used to help with the end result.

    Did you use AI for storyboarding, but no diffusion tools were used in the end product?

    Did one of the writers use ChatGPT to brainstorm some ideas, but nothing was copied from it directly?

    Did they use a speech-to-text model to help create the subtitles in different languages, but then double-check all the work with translators?

    Etc.