I can’t see it happening tbh, but the US government did discuss putting restrictions on AI development, and I think OpenAI or some other companies actually asked them to do so!? And there were shorts/reels of high-profile developers playing up the fact that “we don’t know what we’re doing”, and one of them quit his job. So why all the hype? Is the “Matrix” route actually a possible future?

  • d-RLY?@lemmy.ml · 2 points · 19 hours ago

    One thing that bothers me about high-level devs just leaving because they realized what they created is that them leaving means one more possible roadblock is just gone. They will just be replaced with people who are more fresh-faced and on the hype train of going harder and harder. Lots of folks I know who are finishing college are leaning more and more into just using all of these AIs to solve problems instead of learning to code (or just write things) themselves. Some are still trying, and I support them in my little ways, but I can see how, much like a drug, things start small and can turn into using it all the time. Comp Sci majors were already getting worse in their actual understanding of how things work before LLMs (just look at all the things that will never be optimized and just rely on higher-spec PCs).

  • SuluBeddu@feddit.it · 7 points · 1 day ago

    What’s funny to me is that such a robot takeover would mean all humans are (wage-)enslaved, rather than 99% of us like right now.

    We already have a ruler, the Money god, that is already enslaving many, killing others, and silencing dissent. I might actually prefer it if my ruler were some superintelligent logical being rather than a few male 60-year-olds hoping to book the next trip to some harem island that might or might not have minors on it, taken directly from the territories at war around the world.

  • timmytbt@sh.itjust.works · 11 points · 2 days ago

    Of late, my biggest concern is certain parties feeding LLMs with a different version of history.

    Search has become so shit lately that LLMs are often the better path to answering a question. But as everyone knows, they are only as good as what they’ve been trained on.

    Do we, as a society, move past basic search to a preference for AI to answer our questions? If we do, how do we ensure that the history they feed the models is accurate?

    • nfreak@lemmy.ml · 6 points · 1 day ago

      This is absolutely one of the reasons they’re pushing this garbage so hard. It’s VERY easy to manipulate as a propaganda tool.

      You can already see that most of these tools lean right because their userbase does - leftists don’t touch this garbage because of numerous ethical concerns as-is. Add more astroturfing on top of that, and now it’s just a straight up automated fascist mouthpiece.

    • daniskarma@lemmy.dbzer0.com · 2 points · edited · 2 days ago

      My country used to be a fascist dictatorship and there’s still plenty of people alive who were educated on false information and a different fabricated version of history.

      Certainly, misinformation is nothing new.

      And people should prevent it and solve it the same way it has always been solved: by taking the misinformers out of power.

      It’s not a tech issue. It’s a political issue.

      • timmytbt@sh.itjust.works · 3 points · 21 hours ago

        “It’s not a tech issue. It’s a political issue.”

        It kinda is a tech issue if the output is skewed because nefarious parties are feeding the model shit.

        If they control the tech and what’s being fed into it, then the process is ripe for manipulation.

  • Count Regal Inkwell@pawb.social · 14 points · 2 days ago

    It might but I wouldn’t hold my breath.

    I’d be more concerned about what I call “the dumbot apocalypse”

    Which is to say, AI does accelerate the collapse of society, but not because We Created God Only For Him To Turn On Us™. It’s because some politician, drunk on hype fed to him by venture capitalists and techbros, puts an AI (and I mean these current AIs) in charge of something very important that by no means should be controlled by an AI, even if that AI WERE human-level intelligence, and what we call AI right now is not even close. Then the inevitable ChatGPT Hallucination™ takes place, and the bot decides that a war with China is the only way to increase corporate profits for the next quarter or whatever. Humanity nukes itself, and maybe the humans pressing the button don’t even realise their orders came from an LLM.

    … And then the machines immediately shut down, because even these pretend, toy AIs we have right now are straining the global power grids, so the instant electricity production slips even slightly, they’ll drop like flies (Roko’s Basilisk MFs when a minor brownout takes out their ‘god’).

  • deadcade@lemmy.deadca.de · 15 points · 2 days ago

    Current LLMs are just that: large language models. They’re incredible at predicting the next word, but they literally cannot perform tasks outside of that, like fact checking, playing chess, etc. The theoretical AI that “could take over the world” like Skynet is called “Artificial General Intelligence” (AGI). We’re nowhere close yet; do not believe OpenAI when they claim otherwise. This means the highest risk currently is a human deciding to put an LLM “in charge” of an important task where a mistake could cost lives.
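
    To make the “predicting the next word” part concrete, here’s a minimal sketch (assuming the Hugging Face transformers library and a GPT-2 checkpoint; any causal LM would do, this is just an illustration) of everything the core model computes:

        # Minimal sketch: the core of an LLM is a probability distribution
        # over the next token, nothing more. Assumes `torch` and
        # `transformers` are installed and GPT-2 weights are available.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "The robot uprising will begin on"
        inputs = tokenizer(prompt, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

        # Distribution over the very next token; everything else an LLM
        # product does is wrapped around repeating this one step.
        next_token_probs = torch.softmax(logits[0, -1], dim=-1)
        top = torch.topk(next_token_probs, k=5)
        for prob, token_id in zip(top.values, top.indices):
            print(f"{tokenizer.decode(int(token_id)):>12}  p={float(prob):.3f}")

    Chatbots, “agents”, and so on are loops around that single step, which is why fact checking or chess isn’t something the model itself actually does.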

  • Brad@beehaw.org · 15 points · 3 days ago

    Anytime I worry about the robot uprising, I just remember the time Google Location couldn’t figure out what method of transportation I was using for two and a half hours between the Santa Ana, CA airport and the Denver, CO airport.

  • FriendOfDeSoto@startrek.website · 11 points · 3 days ago

    Just as an aside and in addition to the other comments here:

    There is a phenomenon called regulatory capture. It can take many different forms, but the short version is that agencies and policies get perverted to benefit only one group, when the intention should be to benefit society at large.

    There is a process where the big players, say OpenAI, call for regulation of their industry, not because they feel it needs regulating but because the regulatory hurdles will keep competitors at bay. Meta pulled a stunt like that as well with social networks. So a big hype company calling for regulation in its own field is a red flag, accompanied by a loud alarm bell.

  • DarkCloud@lemmy.world · 17 points · edited · 3 days ago

    No, and in fact the industry is going to see a reduction as people and companies are realising it’s not a silver bullet solution or even that great at what it does.

    The next branch of LLM modeling (now that AGI is failing) is likely towards specialisation. Specialised AI problem-solving has far more potential than chasing an ill-defined and half-formed concept of “intelligence”.

    No term can exist on its own like that. Everything is relative.

    • fckreddit@lemmy.ml · 11 points · 3 days ago

      This. LLMs are great for information retrieval tasks; that is, they are essentially search engines. Even then, they can only retrieve information that they were trained on, so eventually the data gets stale and the model requires retraining on more recent data. Also, they are not very good with tasks that require reasoning, such as solving complex engineering problems.

      • timmytbt@sh.itjust.works · 4 points · 2 days ago

        I somewhat agree with that (good for information retrieval).

        I say somewhat because they will downright lie, until/unless you call them out.

        You need to have an idea of whether what they are telling you is in fact true or not.

        I find them very useful for programming snippets because a) I can usually grok whether what they’ve provided is what I’ve asked for and b) the proof is in the pudding (does the code do what I want?).
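
        For example, a quick smoke test is usually enough to settle b). Say the LLM hands back a little slug() helper (purely hypothetical snippet, all names made up):

            # Hypothetical snippet an LLM might suggest, plus a quick sanity check.
            import re

            def slug(title: str) -> str:
                # Lowercase, collapse runs of non-alphanumerics into single hyphens.
                return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

            # The proof is in the pudding: does the code do what I want?
            assert slug("Hello, World!") == "hello-world"
            assert slug("  multiple   spaces ") == "multiple-spaces"
            print("looks right to me")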

        • fckreddit@lemmy.ml · 3 points · 2 days ago

          That is because they don’t have any baked-in concept of truth or lies. That would require labelling each statement as such, which doesn’t scale well for petabytes of data.

      • zaknenou@lemmy.dbzer0.com (OP) · 3 points · 2 days ago

        “LLMs are great for information retrieval tasks”

        I agree with that

        “Also, they are not very good with tasks that require reasoning such as solving complex engineering problems.”

        and that too

  • Mothra@mander.xyz · 8 points · 3 days ago

    No. AI and robots don’t care about anything. They don’t care about taking over. Whoever controls them, though, now we’re talking. And that’s much worse.

  • BlemboTheThird@lemmy.ca · 9 points · 3 days ago

    I recently read a neat little book called “Rethinking Consciousness” by SA Graziano. It has nothing to do with AI, but is an attempt to describe the way our myriad neural systems come together to produce our experience, how that might differ between animals with various types of brains, and how our experience might change if some systems aren’t present. It sounds obvious, but the simpler the brain, the simpler the experience. For example, organisms like frogs probably don’t experience fear. Both frogs and humans have a set of survival instincts that help us detect movement, classify it as either threat or food or whatever, and immediately respond, but the emotional part of your brain that makes your stomach plummet just doesn’t exist in them.

    Humans automatically respond to a perceived threat in the same way a frog does–in fact, according to the book, the structures in our brains that dictate our initial actions in those instinctive moments are remarkably similar. You know how your eyes will automatically shift to follow a movement you see in the corner of your vision? A frog responds in much the same way. It’s not something you have to think about–often your eye will have darted over to the point of interest even before you realize you’ve noticed something. But your experience of that reaction is also much richer than it is possible for a frog’s to be, because we have far more layers of systems that all interact to produce what we call consciousness. We have a much deeper level of thought that goes into deciding whether that movement was actually important to us.

    It’s possible for us to continue to live even if we lose some parts of the brain–our personalities will change, our memory may get worse, or we may even lose things like our internal monologue, but we still manage to persist as conscious beings until our brains lose a large number of the overlying systems, or some very critical systems. Like the one that regulates breathing–though even that single function is somewhat shared between multiple systems, allowing you to breathe manually (have fun with that).

    All that to say the things we’re currently calling AI just don’t have that complexity. At best, these generative models could fill out a fraction of the layers that would be useful for a conscious mind. We have developed very powerful language processing systems, at least in terms of averaging out a vast quantity of data. Very powerful image processing. Audio processing. What we don’t have–what, near as I can tell, we haven’t made any meaningful progress on at all–is a system to coalesce all these processing systems into a whole. These systems always rely on a human to tell them what to process, for how long, and ultimately to check whether the result of a process is reasonable. Being able to process all of those types of input simultaneously, choosing which ones to focus on in the moment, and continuously choosing an appropriate response? Barely even a pipe dream. And even all of that would be distinct from a system to form anything like conscious thought.

    Right now, when marketing departments say “AI,” what they’re describing is like that automatic response to movement. Movement detected, eye focuses. Input goes in, output comes out. It’s one small piece of the whole that’s required when science fiction writers say “AI.”

    TL;DR no, the current generative model race is just tech stock market hype. The absolute best it can hope for is to reproduce a small piece of the conscious mind. It might be able to approximate the processing we’re capable of more quickly, but at a massively inflated energy expenditure, not to mention the research costs. And in the end it still needs a human double checking its work. We will need to develop a vast number of other increasingly complex systems before we even begin to approach a true AI.

    • agamemnonymous@sh.itjust.works · 1 point · 2 days ago

      I dunno. Obviously individual LLMs are basically sophisticated parrots and are unlikely to develop into AGI on their own. However, a lot of work is being done in combining multiple specialized LLMs. As unlikely as it is for direct LLM improvement to lead to true AI, I think it’s not terribly unlikely that some particular assemblage of many specialized LLMs could achieve the complexity necessary for AGI.
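
      For what it’s worth, the “assemblage” idea in its simplest form is just a router in front of narrow specialists. A toy sketch (every name and “specialist” here is hypothetical; a real system would use a learned classifier, often itself an LLM, instead of keyword matching):

          # Toy sketch of routing queries to specialized models.
          # The "specialists" are stand-ins for actual models.
          from typing import Callable, Dict

          SPECIALISTS: Dict[str, Callable[[str], str]] = {
              "math": lambda q: f"[math specialist answers: {q}]",
              "code": lambda q: f"[code specialist answers: {q}]",
              "chat": lambda q: f"[general specialist answers: {q}]",
          }

          def route(query: str) -> str:
              q = query.lower()
              if any(w in q for w in ("solve", "integral", "equation")):
                  kind = "math"
              elif any(w in q for w in ("function", "bug", "compile")):
                  kind = "code"
              else:
                  kind = "chat"
              return SPECIALISTS[kind](query)

          print(route("solve x^2 = 9"))  # -> handled by the "math" specialist

      Whether gluing enough of those together ever adds up to AGI is, of course, exactly the open question.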

  • 474D@lemmy.world · 4 points · 3 days ago

    There is a theoretical leap to AGI, which is what you would actually think of as AI (“AI” right now is just a buzzword). There is no evidence yet that it’s actually possible.

  • Stepos Venzny@beehaw.org · 2 points · 2 days ago

    It’s not complex enough to have intentions of any kind, so the only danger is that people will do incredibly stupid things with it.

    Imagine duct-taping a sharp knife to a Roomba. The Roomba has no concept of what ankles or stabbing even are. It will roll around the floor as it always does, devoid of either malice or compassion, and any ankle-stabbing that ensues can only really be described as your fault.