This is about dealing with advertisements while browsing. I just recently learned that Google is in the process of killing uBlock Origin on the Chrome browser, as well as on all other Chromium-based browsers.

For years we’ve heard people complaining, bitching, and whining about how they keep seeing ads, while those trying to help them keep wasting time explaining that they’re surfing without extensions, whether on Chrome, Firefox, or another browser.

By this point, I’ve long since stopped being that helper: if you cared at all about the advertisements you see, you would have gotten on board with ad blockers long ago. You bring this on yourself.

  • Nutteman@lemmy.world · 12 points · 2 months ago

    That would require an actual AGI to emerge, which it has not and is not going to. LLMs are fancy text prediction tools and little more.
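    The “fancy text prediction” point can be made concrete with a toy bigram model. This is only an illustrative sketch (the corpus and function names are invented here; real LLMs learn vastly richer statistics over subword tokens), but the core task is the same: given context, predict the next token.

    ```python
    # Toy bigram "language model": predict the next word purely from
    # counts of which word followed which in a tiny corpus.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent next word after `word`, or None."""
        counts = follows[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # "cat" follows "the" most often here
    ```

    Scaled up by many orders of magnitude in data and parameters, with a neural network instead of a count table, this is still the training objective of an LLM: next-token prediction, not reasoning toward goals.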

    • Ceedoestrees@lemmy.world · 2 points · 2 months ago

      What we see in AI as average consumers is like an RC Hot Wheels car next to the state-of-the-art tanks being used by big corporations.

      Just imagine: if an early LLM can fool an engineer into thinking it’s sentient, consider what a state-of-the-art system could do, one designed to predict the market, run propaganda bots on social media, or straight up manufacture news stories with the footage to back them up.

      The AI being used by big corporations is so advanced that it’s one of the reasons countries have been trying to digitally isolate themselves. It’s really not an if, it’s a when.

        • Ceedoestrees@lemmy.world · 1 point · 2 months ago

          I do. I did get a little lost in the weeds with my point, though, as I was speaking more generally about how AI is already powerful and dangerous, because AI safety is a subject in this thread.

      • huginn@feddit.it · 2 points · 2 months ago

        The “AI” being used by big corporations is still fundamentally an LLM and has all the flaws of an LLM. It’s not a Hot Wheels car vs. a tank; it’s a Hot Wheels car vs. a $2 billion RC car.

        • Ceedoestrees@lemmy.world · 1 point · 2 months ago

          I’d like to get into how both OP and I are talking about how fast AI, not just LLMs, is scaling, and the potential it has across a variety of industries; most concerning to me is its use by investment firms. But I need to go to the barber because I already have enough split hairs.

          • huginn@feddit.it · 1 point · 2 months ago

            It is my understanding that the fundamental architecture (the general-purpose transformer) is identical between the “AI” used by BlackRock and by OpenAI.

            If you have some evidence to the contrary I’d always appreciate the chance to learn.

            But the transformer-based architecture is fundamentally flawed: it will always hallucinate.
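            One mechanical reason generative transformers always produce *some* answer: decoding samples from a softmax over the vocabulary, which by construction gives every token nonzero probability. There is no built-in “I don’t know” outcome at that level. A minimal sketch (the logit values below are invented for illustration):

            ```python
            # Softmax turns any logit vector into a valid probability
            # distribution, so decoding must always pick a token, even
            # when the model is maximally uncertain.
            import math

            def softmax(logits):
                m = max(logits)  # subtract max for numerical stability
                exps = [math.exp(x - m) for x in logits]
                total = sum(exps)
                return [e / total for e in exps]

            # Near-uniform logits: the model is "unsure", yet decoding
            # still samples a token from this distribution.
            uncertain = softmax([0.01, 0.0, -0.01])

            # Peaked logits: one token dominates.
            confident = softmax([5.0, 0.0, -1.0])

            print(uncertain)
            print(confident)
            ```

            Whether that inevitability can be tamed with retrieval, calibration, or abstention training is a separate debate, but the base decoding loop itself never declines to answer.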

    • capital@lemmy.world · 2 points · 2 months ago

      Are you assuming LLMs are the only way humans could ever try making an AGI? If so, why do you assume that?

      • Nutteman@lemmy.world · 3 points · 2 months ago

        There’s more important shit to worry about than whether an unproven sci-fi concept will come into being any time soon.

        • capital@lemmy.world · 1 point · 2 months ago

          Yeah, agreed. That’s not what I asked though.

          This response is a bit of a misdirection, since we all discuss shit that isn’t the most important all the time.

      • anothermember@lemmy.zip · 3 points · 2 months ago · edited

        I agree that AGI is dangerous, but I don’t see LLMs as evidence that we’re close to AGI; I think they should be treated as separate issues.

        • capital@lemmy.world · 1 point · 2 months ago · edited

          Given what I think I know about LLMs, I agree. I don’t think they’re the path to AGI.

          The person I replied to said AGI was never going to emerge.

      • Jack Riddle@sh.itjust.works · 1 point · 2 months ago

        If people start developing a new, more promising kind of “AI”, we can talk about it then. For now, the thing we call “AI” sucks and just steals.

    • Dr. Moose@lemmy.world · 1 point · 2 months ago · edited

      which it has not and is not going to

      So you’re confident that AGI is not fundamentally possible? That would contradict basically every scientist in the world, and this is exactly why this issue is so difficult. Ironically, you’re proving my point about the OP’s question lol