The US is indeed in a very good position, having only two land borders and two oceans between it and everyone else. They just need to get Mexico to mine its southern border while the US mines its own.
But Europe, Russia, China and India have plenty of land borders and won’t be able to escape refugee flows or conflicts. Large parts of India might become uninhabitable. Food prices are going to fluctuate. Global trade will become unstable or collapse, disabling the complex globalized industrial economy. Nuclear war is very likely. People still don’t know what’s coming.
So doesn’t that mean the Earth and Sun do not orbit a fixed common center, but a varying point (the solar system barycenter) determined mostly by Jupiter?
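For a rough sense of scale, a two-body back-of-envelope sketch (using approximate textbook values for Jupiter’s mass ratio and orbital distance, not precise ephemeris data) suggests the Sun–Jupiter barycenter alone sits just outside the Sun’s surface:

```python
# Rough two-body estimate of the Sun–Jupiter barycenter offset.
# All constants are approximate textbook figures, not ephemeris data.
M_RATIO = 1 / 1047.35      # Jupiter mass / Sun mass
A_JUPITER_KM = 7.785e8     # Jupiter's semi-major axis, km
R_SUN_KM = 6.96e5          # solar radius, km

# Distance of the barycenter from the Sun's center:
#   r_sun = a * m_jupiter / (m_sun + m_jupiter)
offset_km = A_JUPITER_KM * M_RATIO / (1 + M_RATIO)

print(f"barycenter offset: {offset_km:,.0f} km")
print(f"solar radius:      {R_SUN_KM:,.0f} km")
print("outside the Sun's surface:", offset_km > R_SUN_KM)
```

So yes: mostly because of Jupiter, the Sun itself wobbles around a point slightly outside its own surface, and the other giant planets shift that point around further.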
Centrists have bamboozled me again!
There are very fine bullshits on both sides
Hmm 😇 The afterlife might be a good way to make it up. Have you seen “The Good Place”?
You’re just rephrasing the same approach, over, and over, and over. It’s like you’re not even reading what I’m saying.
No, I read what you are saying. I just think that you are something that “acts intelligent without actually being intelligent”. Here is why: all that you’ve written is based on very simple, primitive brain cells, synapses and synaptic connections. It’s self-evident that this is not really something designed to be intelligent. You’re just “really good at parroting sentences”. And you clearly agree that I’m doing the same 😄
Clearly LLMs are not intelligent and don’t understand, and it would take many other systems to make them so. But what they do show is that the “creative spark”, even though very mediocre in quality, can be created with a critical mass of quantity. It’s like it’s just one small part of our mind, the “creative writing center”, without intelligence. But it’s there, just because we added more data and processing.
Quality through quantity, that is what we seem to be, and that is what is so shocking. And it’s obvious that there is a kind of disgust or bias against such a notion, a kind of embarrassment of the brain to be just thinking meat.
Now you might be absolutely right that my specific suggestion for an approach is bullshit, I don’t know enough about it. But I am pretty sure we’ll get there without understanding exactly how it works.
I agree somewhat with that, but only if the starting conditions were completely random. If you instead set the conditions to be similar to what we know about humanity, you’d have to anticipate cooperation as well as competition and parasitic behavior, leading to wars and atrocities. And that also assumes they actually get a chance to grow up, for the suffering to have any meaning. If you just turn off your science experiment at some point, you have invalidated the argument.
Either way, when you’re playing god you’d have to morally justify yourself. Imagine you create a universe that eventually becomes an eternal hell where trillions of sentient beings are tortured, something like “I Have No Mouth, and I Must Scream”.
You’d look at things like the Holocaust or a million other atrocities and say “this is fine”. Also, you can’t assume they’d die out naturally in 5 billion years; they might colonize other planets and go on and on and on until you pull the switch. They might have created beautiful art and things and preserved much of their history for future generations, and then poof, all gone. What if they found out? Would you say “I created them, therefore I own them and can do with my toys as I please”? Really?
My main argument would be that it would be incredibly unethical. And any intelligent civilization powerful enough to create a simulation like this would be more likely than not to be ethical, and if it was this unethical it is unlikely to exist for long. Those would be two potential reasons why the “infinite regress” in simulation theory is unlikely.
Star Maker is an interesting exploration of simulation theory.
Yeah, I imagine generative AI as just one small part of a human mind, so we’d need to create a whole lot more for AGI. But it’s shocking (at least to me) that it works at all, just through more data and compute power. That you can make qualitative leaps just by increasing the quantity. Maybe we’ll see more progress now.
Yeah. But maybe this is how you teach an AI a broader understanding of the real world, or really a slightly less narrow view. Human brains also have to learn and reconcile all these conflicting data points and then create a kind of understanding from them. For any machine learning model it would only be an intuitive instinct.
Like you would have a bunch of these “tables” that show relationships between various tokens and embody concepts. Maybe you need to combine different kinds of models, organized and trained differently, to resolve such things. I only have a very surface-level understanding of how machine learning works, so I know this is very speculative. Maybe you’re right and it can only ever reflect the training data. Then maybe you’d need to edit the training data, but you could also perhaps use other AIs to “reinterpret” training data based on other models.
Like with all the data on Reddit: could you train a model to detect sarcasm or lies, or to differentiate between liberal, leftist and fascist types of arguments? Not just recognizing the tokens or talking points, but the semantics of an argument, like detecting a non sequitur? You probably need “general knowledge” understanding for that. But any kind of AI like that would be incredibly interesting for social media, so your client can tag certain posts, or root out bot/shill networks that work for special interests (fossil fuel, USA, China, Russia).
So all the stuff “conflicting with each other and making a giant spider web of issues to juggle” might be what you can train an AI to pull apart into “appeal to emotion” or “materialistic view” or “belief in inequality” or “preemptive bias counteractor”. Maybe it actually could extract that and help us communicate better.
Eh I really need to learn more about AI to understand the limits.
Would it be possible to create a kind of “formula” to express the abstract relationship of ethnic makeup, location, year and field? Like convert a table of population, country and ethnicity mix per year, and then train the model on that. It’s clear that it doesn’t understand the meaning or abstract concept, but it can associate and extrapolate things. So it could “interpret” what the image description says while training and then use the prompt better. So if you prompted “english queen 1700” it would output a white queen; if you input year 2087 it would be ever so slightly less pasty.
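A purely hypothetical sketch of what such a “formula” could look like as a lookup step (every name, year bucket and number here is invented for illustration, not real demographic data): map (region, year) to a demographic weighting and use it to rewrite the prompt before it reaches the image model.

```python
# Toy illustration only: a hand-made table standing in for the
# "population / country / year" data the comment imagines training on.
# All entries are invented placeholders, not real demographic figures.
DEMOGRAPHICS = {
    ("england", 1700): {"white european": 1.00},
    ("england", 2087): {"white european": 0.55, "south asian": 0.20,
                        "black": 0.15, "other": 0.10},
}

def nearest_bucket(region: str, year: int) -> tuple:
    """Pick the table entry whose year is closest to the requested year."""
    candidates = [k for k in DEMOGRAPHICS if k[0] == region]
    return min(candidates, key=lambda k: abs(k[1] - year))

def condition_prompt(prompt: str, region: str, year: int) -> str:
    """Append the highest-weighted demographic descriptor to the prompt."""
    weights = DEMOGRAPHICS[nearest_bucket(region, year)]
    top = max(weights, key=weights.get)
    return f"{prompt}, {top}"

print(condition_prompt("english queen", "england", 1700))
print(condition_prompt("english queen", "england", 2087))
```

A trained model would of course learn this association statistically rather than from an explicit table; the sketch just makes the year-conditioning idea concrete.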
They are experimenting and tuning. Apparently without any correction there is significant racist bias; basically the AI reflects the long-term racial bias in the training data. According to this BBC article, it was an attempt to correct this bias that went a bit overboard.
PS: I find it hilarious. If anything it elevates the AI system to art, since it now provides an emotionally provoking mirror about white identity.
DDG recently started to exclude terms from my search query and returns more random garbage and a “search only …” link. So often I search, find nothing, then realize DDG messed up again. Really not sure why it’s doing that.
Huh. I hate advertising and think it’s brainwashing: it harms our minds, increases consumerism, causes massive environmental damage and biases all content and news production.
Affiliate links are just another form of advertising.
So the solution is very easy: outlaw affiliate links, or advertising in general. Amazon and others should no longer be allowed to offer affiliate systems. Or remove any page with affiliate links, or linking to pages with affiliate links, from search results.
You can’t create rules and structures that prioritize profit and expect positive results.
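The search-result filtering part of that proposal is at least mechanically easy to start on. Amazon Associates links, for instance, carry a `tag=` query parameter; a rough heuristic sketch (covering only that one pattern, and the domain check is deliberately loose):

```python
from urllib.parse import urlparse, parse_qs

def looks_like_amazon_affiliate(url: str) -> bool:
    """Heuristic: Amazon Associates links carry a `tag=` query parameter."""
    parts = urlparse(url)
    if "amazon." not in parts.netloc:
        return False
    return "tag" in parse_qs(parts.query)

print(looks_like_amazon_affiliate(
    "https://www.amazon.com/dp/B000EXAMPLE?tag=somesite-20"))  # affiliate
print(looks_like_amazon_affiliate(
    "https://www.amazon.com/dp/B000EXAMPLE"))                  # plain link
```

The hard part isn’t detection, of course; it’s that search engines and affiliate networks have no incentive to apply a filter like this.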
Rip INTERWEBZ 2023 just preserve it how it was before it got sick
what do you think when you read stories like these
Honestly they are so fucking sad I try to avoid reading them. Another example is this one: She Was Denied an Abortion After Roe Fell. This Is a Year in Her Family’s Life.
The monsters have been trying to do the same things in Europe though: the UK has underfunded the NHS, and healthcare in Germany is in deliberate decline too.
Another criterion might be being self-employed. I have little experience with that, and it probably has its pros and cons depending on what corporate culture you’d otherwise face as an employee. But it might be worth keeping in mind when choosing your profession.
I’ve recently read a comment saying the Great Chinese Firewall somehow “learns” that you are using a VPN. So people do quick tests (“yep, VPN works”), but a little later it doesn’t work anymore. No clue if that’s true though.