A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.

SellCell only surveyed users with an AI-enabled phone – that's an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn't give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users took part.

Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.

From the data listed so far, it seems that people just aren't using AI. For both iPhone and Galaxy users, only about two-fifths of those surveyed have tried AI features – 41.6% for iPhone and 46.9% for Galaxy.

So, that’s a majority of users not even bothering with AI in the first place and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.

  • 9488fcea02a9@sh.itjust.works · 6 days ago

    I hate that I can no longer trust what comes out of my phone camera to be an accurate representation of reality. I turn off all the AI enhancement stuff, but who knows what kind of fuckery is baked into the firmware.

    NO, I don't want fake AI depth of field. NO, I do not want fake AI “makeup” fixing my ugly face. NO, I do not want AI deleting tourists in the background of my picture of the Eiffel Tower.

    NO, I do not want AI curating my memories and reality. Sure, my vacation photos have shitty lighting and bad composition. But they are MY photos and MY memories of something I experienced personally. AI should not be “fixing” that for me.

    • arakhis_@feddit.org · 9 hours ago

      Classic techbro overhype:

      Add the new feature into everything without separating it out or offering a choice to opt out of it.

    • Flic@mstdn.social · 6 days ago

      @9488fcea02a9 @ForgottenFlux I remember reading a whole article about how Samsung now just shoves a hi-res picture of the moon on top of pictures you take with the moon in so it looks like it takes impressive photos. Not sure if the scandal meant they removed that “feature” or not

  • ZeroGravitas@lemm.ee · 7 days ago

    A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.

    It’s like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.

    • Imacat@lemmy.dbzer0.com · 6 days ago

      99.999% accurate would be pretty useful. There's plenty of misinformation without AI. Nothing and nobody will be perfect.

      Trouble is they range from 0-95% accurate depending on the topic and given context while being very confident when they’re wrong.

    • Kaja • she/her@lemmy.blahaj.zone · 7 days ago

      We’re not talking about an AI running a nuclear reactor, this article is about AI assistants on a personal phone. 0.001% failure rates for apps on your phone isn’t that insane, and generally the only consequence of those failures would be you need to try a slightly different query. Tools like Alexa or Siri mishear user commands probably more than 0.001% of the time, and yet those tools have absolutely caught on for a significant amount of people.

      The issue is that the failure rate of AI is high enough that you have to vet the outputs which typically requires about as much work as doing whatever you wanted the AI to do yourself, and using AI for creative things like art or videos is a fun novelty, but isn’t something that you’re doing regularly and so your phone trying to promote apps that you only want to use once in a blue moon is annoying. If AI were actually so useful you could query it with anything and 99.999% of the time get back exactly what you wanted, AI would absolutely become much more useful.

    • NuXCOM_90Percent@lemmy.zip · 7 days ago

      People love to make these claims.

      Nothing is “100% accurate” to begin with. Humans spew constant FUD and outright malicious misinformation. Just do some googling for anything medical, for example.

      So either we acknowledge that everything is already “sewage” and this changes nothing or we acknowledge that people already can find value from searching for answers to questions and they just need to apply critical thought toward whether I_Fucked_your_mom_416 on gamefaqs is a valid source or not.

      Which gets to my big issue with most of the “AI Assistant” features. They don’t source their information. I am all for not needing to remember the magic incantations to restrict my searches to a single site or use boolean operators when I can instead “ask jeeves” as it were. But I still want the citation of where information was pulled from so I can at least skim it.

      • tauren@lemm.ee · 6 days ago

        For real. If a human performs task X with 80% accuracy, an AI needs to perform the same task with 80.1% accuracy to be a better choice - not 100%. Furthermore, we should consider how much time it would take for a human to perform the task versus an AI. That difference can justify the loss of accuracy. It all depends on the problem you’re trying to solve. With that said, it feels like AI on mobile devices hardly solves any problems.

      • ZeroGravitas@lemm.ee · 7 days ago

        I think you nailed it. In the grand scheme of things, critical thinking is always required.

        The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I’m not an artist, so I oohed and aahed at some of the AI art I got to see, especially in the early days, when we weren’t flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I’ll pass.

        The only legit use of AI in my field that I know of is a unit test generator, where tests were measured for stability and code coverage increase before being submitted for dev approval. But actual non-trivial production-grade code? Hell no.

        • NuXCOM_90Percent@lemmy.zip · 7 days ago

          Even those examples are the kinds of things that “fall apart” if you actually think things through.

          Art? Actual human artists tend to use a ridiculous amount of “AI” these days and have been for well over a decade (probably closer to two, depending on how you define “AI”). Stuff like magic erasers/brushes are inherently looking at the picture around it (training data) and then extrapolating/magicking what it would look like if you didn’t have that logo on your shirt and so forth. Same with a lot of weathering techniques/algorithms and so forth.

          Same with coding. People more or less understand that anyone who is working on something more complex than a coding exercise is going to be googling a lot (even if it is just that you will never ever remember how to do file i/o in python off the top of your head). So a tool that does exactly that is… bad?

          Which gets back to the reality of things. Much like with writing a business email or organizing a calendar: If a computer program can do your entire job for you… maybe shut the fuck up about that program? Chatgpt et al aren’t meant to replace the senior or principal software engineer who is in lots of design meetings or optimizing the critical path of your corporate secret sauce.

          It is replacing junior engineers and interns (which is gonna REALLY hurt in ten years but…). Chatgpt hallucinated a nonsense function? That is what CI testing and code review is for. Same as if that intern forgot to commit a file or that rockstar from facebook never ran the test suite.

          Of course, the problem there is that the internet is chock full of “rock star coders” who just insist the world would be a better place if they never had to talk to anyone and were always given perfectly formed tickets so they could just put their headphones on and work and ignore Sophie’s birthday and never be bothered by someone asking them for help (because, trust me, you ALWAYS want to talk to That Guy about… anything). And they don’t realize that they were never actually hot shit and were mostly always doing entry level work.

          Personally? I only trust AI to directly write my code for me if it is in an airgapped environment because I will never trust black box code I pulled off the internet to touch corporate data. But I will 100% use it in place of google to get an example of how to do something that I can use for a utility function or adapt to solving my real problem. And, regardless, I will review and test that just as thoroughly as the code Fred in accounting’s son wrote because I am the one staying late if we break production.


          And just to add on, here is what I told a friend’s kid who is an undergrad comp sci:

          LLMs are awesome tools. But if the only thing you bring to the table is that you can translate the tickets I assigned to you to a query to chatgpt? Why am I paying you? Why am I not expensing a prompt engineering course on udemy and doing it myself?

          Right now? Finding a job is hard but there are a lot of people like me who understand we still need to hire entry level coders to make sure we have staff ready to replace attrition over the next decade (or even five years). But I can only hire so many people and we aren’t a charity: If you can’t do your job we will drop you the moment we get told to trim our budget.

          So use LLMs because they are an incredibly useful tool. But also get involved in design and planning as quickly as possible. You don’t want to be the person writing the prompts. You want to be the person figuring out what prompts we need to write.

          • EldritchFeminity@lemmy.blahaj.zone · 6 days ago

            In short, AI is useful when it’s improving workflow efficiency and not much else beyond that. People just unfortunately see it as a replacement for the worker entirely.

            If you wanna get loose with your definition of “AI,” you can go all the way back to the MS Paint magic wand tool for art. It’s simply an algorithm for identifying pixels within a certain color tolerance of each other.

            The issue has never been the tool itself, just the way that it’s made and/or how companies intend to use it.

            Companies want to replace their entire software division, senior engineers included, with ChatGPT or equivalent because it’s cheaper, and they don’t value the skill of their employees at all. They don’t care how often it’s wrong, or how much more work the people that they didn’t replace have to do to fix what the AI breaks, so long as it’s “good enough.”

            It’s the same in art. By the time somebody is working as an artist, they’re essentially at a senior software engineer level of technical knowledge and experience. But society doesn’t value that skill at all, and has tried to replace it with what is essentially a coding tool trained on code sourced from pirated software and sold on the cheap. A new market of cheap knockoffs on demand.

            There’s a great story I heard from somebody who works at a movie studio where they tried hiring AI prompters for their art department. At first, things were great. The senior artist could ask the team for concept art of a forest, and the prompters would come back the next day with 15 different pictures of forests while your regular artists might have that many at the end of the week. However, if you said, “I like this one, but give me some versions without the people in them,” they’d come back the next day with 15 new pictures of forests, but not the original without the people. They simply could not iterate, only generate new images. They didn’t have any of the technical knowledge required to do the job because they depended completely on the AI to do it for them. Needless to say, the studio has put a ban on hiring AI prompters.

      • AnAmericanPotato@programming.dev · 7 days ago

        99.999% would be fantastic.

        90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).

        What we have now is like…I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?

        I haven’t used Samsung’s stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it’s great.

        Ideally, I don’t ever want to hear an AI’s opinion, and I don’t ever want information that’s baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That’s what LLMs are actually good at.

        • NuXCOM_90Percent@lemmy.zip · 7 days ago

          Again: What is the percent “accurate” of an SEO infested blog about why ivermectin will cure all your problems? What is the percent “accurate” of some kid on gamefaqs insisting that you totally can see Lara’s tatas if you do this 90 button command? Or even the people who insist that Jimi was talking about wanting to kiss some dude in Purple Haze.

          Everyone is hellbent on insisting that AI hallucinates and… it does. You know who else hallucinates? Dumbfucks. And the internet is chock full of them. And guess what LLMs are training on? It’s the same reason I always laugh when people talk about how AI can’t do feet or hands while ignoring the existence of Rob Liefeld, or WHY so many cartoon characters only have four fingers.

          Like I said: I don’t like the AI Assistants that won’t tell me where they got information from and it is why I pay for Kagi (they are also AI infested but they put that at higher tiers so I get a better search experience at the tier I pay for). But I 100% use stuff like chatgpt to sift through the ninety bazillion blogs to find me a snippet of a helm chart that I can then deep dive on whether a given function even exists.

          But the reality is that people are still benchmarking LLMs against a reality that has never existed. The question shouldn’t be “we need this to be 100% accurate and never hallucinate” and instead be “What web pages or resources were used to create this answer” and then doing what we should always be doing: Checking the sources to see if they at least seem trustworthy.
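The "answers with sources" idea in the comment above can be sketched in a few lines. This is a toy illustration, not any real assistant's design: the corpus, URLs, and keyword-overlap scoring are all invented for the example, and a real system would use a proper retriever.

```python
# Toy sketch of "answers with sources": instead of returning a bare
# answer, surface each snippet together with the page it came from,
# so the reader can vet the source themselves.

def answer_with_sources(query, corpus):
    """corpus: list of (url, text) pairs. Returns matching snippets
    ranked by naive keyword overlap, each paired with its source URL."""
    q_words = set(query.lower().split())
    scored = []
    for url, text in corpus:
        overlap = len(q_words & set(text.lower().split()))
        if overlap:
            scored.append((overlap, text, url))
    scored.sort(reverse=True)  # best keyword match first
    # The citation stays attached to every snippet we surface.
    return [(text, url) for _, text, url in scored]

# Hypothetical corpus for illustration only.
corpus = [
    ("https://example.org/helm-basics", "helm charts package kubernetes manifests"),
    ("https://example.org/cooking", "how to bake sourdough bread at home"),
]
results = answer_with_sources("what are helm charts", corpus)
```

The point isn't the (deliberately naive) ranking; it's that the return type carries the source alongside the text, which is exactly the checkable-citation behavior the comment is asking for.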

  • nuko147@lemm.ee · 6 days ago

    This is what happens when companies prioritize hype over privacy and try to monetize every innovation. Why pay €1,500 for a phone only to have basic AI features? AI should solve real problems, not be a cash grab.

    Imagine if AI actually worked for users:

    • Show me all settings to block data sharing and maximize privacy.
    • Explain how you optimized my battery last week and how much time it saved.
    • Automatically silence spam calls without selling my data to third parties.
    • Detect and block apps that secretly drain data or access my microphone.
    • Automatically organize my photos by topic without uploading them to the cloud.
    • Do everything I could do with Tasker just by saying it in plain words.
    • arakhis_@feddit.org · 9 hours ago

      How could you ensure AI sorts your pictures privately if the requests to analyze your sensitive imagery have to be made on a server? (One that based its knowledge on disrespecting others' copyright anyway, lol.)

      • nuko147@lemm.ee · 8 hours ago

        Why must it connect to a server to do that? Why can't it work offline? DeepSeek showed us that it's possible. The companies want everyone to think that AI only works online. For example, the AI image enhancements on my mid-range Samsung phone work offline.

          • nuko147@lemm.ee · 8 hours ago

            A lot of people assume that AI must mean a permanent server connection. I don't mind if it's a bit slower, as long as it's part of my device.

  • Obelix@feddit.org · 6 days ago

    People here like to shit on AI, but it has its use cases. It’s nice that I can search for “horse” in Google Photos and get back all pictures of horses, and it is also really great for creating small scripts. I, however, do not need an LLM chatbot on my phone, and I really don’t want it everywhere in every fucking app with a subscription model.

      • Guns0rWeD13@lemmy.world · 6 days ago

        People wouldn’t shit on AI if it were actually replacing our jobs without taking our pay, and creating a system of resource management free from human greed and error.

  • fritobugger2017@lemmy.world · 6 days ago

    My kids’ school just did a survey, and part of it included questions about teaching technology, with a big focus on the use of AI. My response was “No,” full stop. They need to learn how to do traditional research first so that they can spot-check the error-ridden results generated by AI. Damn it, school, get off the bandwagon.

    • Akito@lemm.ee · 6 days ago

      And what exactly is the difference between researching shit sources on the plain internet and getting the same shit via an AI, except that manually it takes 6 hours and with AI it takes 2 minutes?

      • clonedhuman@lemmy.world · 5 days ago

        I think the fact someone would need to explain this to you makes it pointless to try and explain it to you. I can’t tell whether you’re honestly asking a question or just searching for a debate to attempt to justify your viewpoint.

        • Akito@lemm.ee · 2 days ago

          You’re implying there are trusted sources. I’m saying there are no trusted sources whatsoever, and you should doubt every source equally. So, who’s the one not understanding the principle?

  • TylerBourbon@lemmy.world · 6 days ago

    I do not need it, and I hate how it’s constantly forced upon me.

    Current AI feels like the Metaverse. There’s no demand for it or need for it, yet they’re trying their damndest to shove it into anything and everything like it’s a new miracle answer to every problem that doesn’t exist yet.

    And all I see it doing is making things worse. People use it to write essays in school; that just makes them dumber because they don’t have to show they understand the topic they’re writing. And considering AI doesn’t exactly have a flawless record when it comes to accuracy, relying on it for anything is just not a good idea currently.

    • Akito@lemm.ee · 6 days ago

      If they write essays with it and the teacher is not checking their actual knowledge, the teacher is at fault, not the AI. AI is literally just a tool, like a pen or a ruler in school. Except much, much bigger and much, much more useful.

      It is extremely important to teach children how to handle AI properly and responsibly, or else they will be fucked in the future.

      • TylerBourbon@lemmy.world · 5 days ago

        I agree it is a tool, and they should be taught how to use it properly, but I disagree that it’s like a pen or a ruler. It’s more like a GPS or a Roomba. Yes, they are tools that can make your life easier, but it’s better to learn how to read a map and operate a vacuum or a broom than to be taught to rely on the tool doing the hard work for you.

        • Akito@lemm.ee · 2 days ago

          You are sincerely advocating for teaching how to read a physical map? When will you ever need that, short of a zombie apocalypse?

          It might be good to teach them this skill additionally, for the sake of brain development. But we should stay in reality and not replace real tools with obsolete ones in education, because children should be prepared for the real world, not for some world that does not exist (anymore).

          It’s the same reason I find it ridiculous how much children are cushioned to the brim and denied a view of the real world for 17 years and ~355 days in the US system. As soon as they turn 18, they start to see the real world, and they are not at all prepared for the surprise.

          • TylerBourbon@lemmy.world · 1 day ago

            > You are sincerely advocating for teaching how to read a physical map? When will you ever need that ever, without a Zombie apocalypse?

            I strongly advocate it; it’s a basic skill. Like simple math, reading and writing, balancing a budget, cooking, etc., being able to read a map is a necessary basic skill.

            Maps aren’t obsolete. GPS literally works off the existence of maps. Claiming maps are obsolete is like saying that cooking food at home is obsolete because you can order delivery.

  • Zak@lemmy.world · 6 days ago

    The AI thing I’d really like is an on-device classifier that decides with reasonably high reliability whether I would want my phone to interrupt me with a given notification or not. I already don’t allow useless notifications, but a message from a friend might be a question about something urgent, or a cat picture.

    What I don’t want is:

    • Ways to make fake photographs
    • Summaries of messages I could just skim the old-fashioned way
    • Easier access to LLM chatbots

    It seems like those are the main AI features bundled on phones now, and I have no use for any of them.
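The notification filter described above doesn't even need a large model to sketch. Here's a toy heuristic version in Python; a real implementation would presumably be a small trained classifier, and the keywords, contact list, and threshold here are all invented for illustration.

```python
# Toy sketch of an on-device "should this notification interrupt me?"
# filter: a tiny scoring heuristic, not a trained model. Keywords,
# contacts, and the threshold are hypothetical examples.

URGENT_WORDS = {"urgent", "emergency", "now", "asap", "help"}
CLOSE_CONTACTS = {"alice", "mom"}  # hypothetical allow-list

def should_interrupt(sender: str, text: str) -> bool:
    score = 0
    if sender.lower() in CLOSE_CONTACTS:
        score += 2  # messages from close contacts matter more
    words = set(text.lower().split())
    if words & URGENT_WORDS or text.strip().endswith("?"):
        score += 2  # urgent wording or a question may need a reply
    if "http" in text.lower() and score == 0:
        score -= 1  # bare links (memes, cat pictures) can wait
    return score >= 2

# A question from a friend interrupts; a random newsletter link does not.
interrupt_question = should_interrupt("Alice", "Are you still coming tonight?")
interrupt_link = should_interrupt("newsletter", "http://example.com/cats")
```

The appeal of doing this on-device, as the comment suggests, is that the message content never has to leave the phone; even this crude heuristic runs in microseconds with no network access.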

    • drthunder@midwest.social · 6 days ago

      That’s useful AI that doesn’t take billions of dollars to train, though. (it’s also a great idea and I’d be down for it)

      • dustyData@lemmy.world · 6 days ago

        You mean paying money to people to actually program, in fair exchange for their labor and expertise, instead of stealing it from the internet? What are you, a socialist?

        /s

  • snooggums@lemmy.world · 7 days ago

    “Stop trying to make fetch AI happen. It’s not going to happen.”

    AI is worse than adding no value; it is an actual detriment.

    • octopus_ink@slrpnk.net · 6 days ago

      I feel like I’m in those years of You really want a 3d TV, right? Right? 3D is what you’ve been waiting for, right? all over again, but with a different technology.

      It will be VR’s turn again next.

      I admit I’m really rooting for affordable, real-world, daily-use AR though.