I’m an anarcho-communist; all states are evil.

Your local herpetology guy.

Feel free to AMA about picking a pet, or reptiles in general; I have a lot of recommendations for that!

  • 0 Posts
  • 189 Comments
Joined 1 year ago
Cake day: July 6th, 2024

  • These have been listed repeatedly: love, think, understand, contemplate, discover, aspire, lead, philosophize, etc.

These are not tasks, except maybe philosophize and discover, which even current models can do… heck, Google is using old shitty ones to do it:

    https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

I said a task, not a feeling. A task is a manipulation of the world to achieve a goal, not something vague and undefinable like love.

    We want a machine that can tell us what to do, instead.

There’s no such thing; there’s no objective right answer to this in the first place. No conscious being we know of can do this, so why would a conscious machine be able to? This is just you asking the impossible; consciousness would not help even the tiniest bit with this problem. You have to say “what to do to achieve x” for it to have meaning, which these machines could do without solving the hard problem of consciousness at all.

Yet again you fail to name one valuable aspect of solving consciousness. You keep saying we need the hard problem of consciousness solved for AGI, but you can’t name even one way in which it would provide a functional improvement to anything.


  • Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.

These cannot be grasped through subjective experience, and I would say nothing can possibly achieve this, not any human at all. The best we can do is poll humanity and go by approximations, which I believe is best handled by something automatic. Humans can’t answer these questions in the first place, so why should I expect something without subjective experience to do any worse?

    When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics?

Because this is unpopular, and there are many things online saying not to… do you think humans are immune to this? When has consciousness ever prevented such an outcome?

    How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?

We don’t, but we also don’t know that about conscious beings, so there’s still no stated advantage to consciousness.




Jobs are not arbitrary; they’re tasks humans want another human to accomplish, and an AGI could accomplish all of those that a human can.

For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel’s first incompleteness theorem.

Why do you assume we have to? Even a shitty current AI can do a decent job at this if you fact-check it, better than a lot of modern politicians. Feed it the entire internet and let it figure out what humans value; why would we do this manually?

    In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.

Humans are conscious and have gotten no closer to doing this, ever; I see no reason to believe consciousness will help at all with this matter.