• 1 Post
  • 48 Comments
Joined 2 years ago
Cake day: July 16th, 2023

  • scratchee@feddit.uk to Comic Strips@lemmy.world · No entry · 13 days ago

    I mean… “would you love me if I was a worm?” was a textbook meme.

    And then there’s the fact that “meme” really means “an idea shared by humans that survives by a process reminiscent of natural selection”, so yeah, all comics shared on here are competing memes, by the original definition.


  • The theory is that profit seeking is “good” capitalism, where you make money by increasing overall productivity and skimming the extra off the top, but rent seeking is “bad” profiteering where you use your leverage to manipulate the situation so you can derive income without increasing productivity.

    So building a factory on your land is capitalism, but leaving the hovels untouched and charging high rent because your tenants have nowhere else to go is rent-seeking.

    In practice, of course, lots of rent-seeking behaviour is done by people who claim to be capitalists, so it’s at least a good way to argue with them on their own terms.


  • Yeah, I’m a big fan of pulling out the concept of “rent seeking” as an ultimate evil and threat to capitalism, because you can explain why all the late-stage capitalist horrors are actually anti-capitalist and get capitalists on side. Sometimes you don’t have to tear someone’s world view apart to get them to support making the world better.



  • scratchee@feddit.uk to Comic Strips@lemmy.world · *Permanently Deleted* · 1 month ago

    I feel like that’s justified though. Both sides were out in the middle of fuck all squabbling over a rock with a handful of people on it, very few civilian casualties, low stakes for both countries (well ok, maybe not for the specific governments, but neither side had to worry about being invaded anywhere that really mattered to them). Compared to something like the Ukraine war, it’s more of a skirmish.

    EDIT: to be clear, I do not want the Falklands to be “my war”; it was a stupid war.






  • Equally of course, if we use our mighty intellects to override our breeding instincts entirely then we’d arrive at the same extinction rather more quickly.

    So you know, damned if you do, damned if you don’t.

    Given our current birth rates in the western world, I’m less worried about our breeding instincts than about our inability to convince everyone that their children should live in a better world than they did; apparently that’s the instinct that broke first.


  • scratchee@feddit.uk to Comic Strips@lemmy.world · How it feels · 2 months ago

    Student loans in the US are a problem because they are a bad deal.

    If they were replaced with a more generous interest rate (e.g. somewhere around break-even for government debt, maybe a little higher to compensate for low earners, but nothing like the profit-making rates used now), and repayment were applied progressively (which, as you point out, will generally be fine since graduates should earn plenty on average), then maybe nobody would be pushing for forgiveness.

    But US student loan debt is privatised, so the government can’t easily improve the terms, thus everyone reaches for the hammer of paying it off.
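
    To make it concrete why the rate matters so much, here’s a quick sketch comparing total repayment on a fixed-rate amortised loan at a profit-making rate versus a near-break-even one. All figures (principal, rates, term) are illustrative assumptions, not actual US loan terms:

```python
def total_repaid(principal, annual_rate, years, payments_per_year=12):
    """Total paid over the life of a fixed-rate amortised loan."""
    r = annual_rate / payments_per_year
    n = years * payments_per_year
    if r == 0:
        return principal  # no interest: you just repay the principal
    # Standard amortisation formula for the fixed periodic payment.
    payment = principal * r / (1 - (1 + r) ** -n)
    return payment * n

principal = 40_000.0
commercial = total_repaid(principal, 0.07, 10)  # profit-making rate
break_even = total_repaid(principal, 0.02, 10)  # ~government borrowing cost
print(f"at 7%: ${commercial:,.0f}   at 2%: ${break_even:,.0f}")
```

    On these assumed numbers the 7% loan costs over ten thousand dollars more in interest than the 2% one, which is the gap the forgiveness argument is really about.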


  • scratchee@feddit.uk to Comic Strips@lemmy.world · Assistants · 2 months ago

    Whilst I’ve mostly avoided LLMs so far, it seems like that should actually work a bit. LLMs imitate us, and if you warn a human to be extra careful they will (usually) try to be more careful, so an LLM should have internalised that behaviour. That doesn’t mean they’ll be much more accurate, though. Maybe they’d be less likely to output humanlike mistakes on purpose? That wouldn’t help much with the LLM-like mistakes they’re making all on their own, though.


  • scratchee@feddit.uk to Comic Strips@lemmy.world · Schrödinger's cat · 3 months ago

    To address the second half of your comment: how do I explain “something apparently happening” when nothing ever happens? Many worlds may claim that no definitive events occur, but it does claim that states become mutually dependent and interact, and that’s all we need to perceive something occurring. What we perceive as events does not need to line up with “real” events outside our environment.

    If you can emulate the universe on a computer, then you can also (with enough processing power) simply generate universe states at random. Either way you’ll eventually generate redbob, and either way redbob exists; even if he’s just a pattern of numbers, he doesn’t get to know that. Do events exist if the universe is just randomly generated numbers? Of course not, but redbob still thinks they do.

    Given all that, events do not need to be “real”, they just need to look real from our perspective.

    Edit: to be clear, I’m not supporting nutty concepts like the idea that we’re in a simulation or a random number generator; I’m just using them as thought experiments to show that eventless systems can internally emulate the appearance of events and states.


  • Bob is a scientist. They have hooked a computer up to the R-vs-T experiment, and when R occurs the screen flashes red.

    When the screen flashes red, red photons collide with Bob’s skin and eyes, signals enter their brain and they observe a red screen, and they remember it.

    After all that, the collection of atoms describing Bob (their state) contains lots of dependency on that red screen. They are redbob: their state could only exist in a universe where that screen was red.

    So, given the state of redbob, I think it’s reasonable to say that they perceived R.

    Neither R nor T has actually happened, but the redbob state I described cannot ever have observed T; it can only have observed R. So redbob only exists in a limited subset of the full universal quantum state, coexisting with a red screen and with R, because they must.

    There’s of course a second state of matter that is the scientist observing T. Bluebob.

    An outside observer of the universe might insist neither R nor T has occurred, that both Bobs are equally real, that the quantum soup contains it all.

    But if you are redbob, you have still observed R.

    We are all redbob all the time.

    But why do we sometimes observe quantum superpositions, why do we not see a fully classical universe?

    If we imagine putting Bob in a quantum-tight box, and then instead of asking him what he saw we ask only questions that don’t require us to know which Bob he is (with the only link carefully designed not to change even slightly in response to the massive differences between redbob and bluebob), then we get to be the outside observer: our quantum state is indifferent between the Bobs, so our perspective encompasses them both. We can prove this is distinct from simply not knowing which Bob is in a classical box, because unlike the classical box, we really are able to manipulate a soup of all the states we’re not dependent on.
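
    The redbob/bluebob story can be sketched numerically. This is a minimal two-qubit toy of my own (the labels and the model are illustrative assumptions): once Bob’s memory becomes correlated with the screen, the screen’s reduced density matrix loses its off-diagonal interference terms, which is the “loss of vision” into the other branch:

```python
import numpy as np

# A two-level "screen" (R or T) and a two-level "Bob" whose memory
# becomes correlated with it. Before observation, Bob is uncorrelated
# and the screen's state still shows interference terms; after
# observation (entanglement), each Bob only "sees" his own branch.
R, T = np.array([1.0, 0.0]), np.array([0.0, 1.0])       # screen states
red, blue = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # Bob's memory

def screen_density(psi):
    """Reduced density matrix of the screen, tracing out Bob."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)  # partial trace over Bob

# Before observation: screen in superposition, Bob uncorrelated.
before = np.kron((R + T) / np.sqrt(2), red)
# After observation: Bob's memory depends on what the screen showed.
after = (np.kron(R, red) + np.kron(T, blue)) / np.sqrt(2)

print(screen_density(before))  # off-diagonals 0.5: interference intact
print(screen_density(after))   # off-diagonals 0: branches decohered
```

    Nothing has “collapsed” in the second case; the whole state is still there, but no measurement on the screen alone can show interference between the redbob and bluebob branches.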


  • Neither R nor T happens, so there is no “limited perspective” that could give you R or T if neither event ever happens at all.

    Whether R or T ever happens from an external perspective doesn’t really matter to us, though.

    If we accept many worlds as true for a second, then it follows that the total quantum state describes quite a lot: your exact configuration is somewhere within it. But crucially, your exact configuration is dependent on other configurations. At a large scale, you depend on your parents having existed, so you can only perceive the parts of the total state where your parents existed, because any part of the state where they didn’t exist does not contain a you to perceive it. And this is also true at much smaller scales. From this, “collapse” is just your loss of vision into parts of the universe where you cannot be, and the quantum uncertainty you can see is really just whatever quantum states are not in conflict with your existence (where “you” is this very, very specific configuration of you, so anything that alters you at all differently is in conflict with you). So sure, R might never happen if you want to say that, but R becomes what you can observe once you’re dependent on R, so it is reasonable to describe R as having happened from your perspective.

    On your point about the lack of mathematical rigour in all this: I do not deny it, and am not well placed to resolve it. But just as my arguments are mathematically unhelpful, I’m not entirely sure an actual mathematical solution would help much in a verbal discussion, since I suspect there would be a range of valid mathematical models which could be argued to line up with the range of philosophical interpretations, and without external observations we’d have no way to distinguish between them. But maybe that’s defeatist; we’ve gotten this far, after all, so maybe we’ll find a way to pin things down further.



  • There’s nothing special about the brain in any quantum theory (except pop science).

    In many worlds, it’s not so much that the classical world is an illusion as that it’s a limited perspective, similar to how the “observable” universe is just a limitation of our position. In both cases the theory is that there’s more beyond the edge that we cannot see (and in both cases we have no way to test that).

    I don’t think there’s much difference between many worlds and random selection in the end, at least from our perspective. Either way we experience only state contingent on our own state, so for any quantum superposition that contains us (ok, sure, contains our brain) we can experience only one concrete resolution, since the others would require our brain to be in a different state. Many worlds adds “but there is a disconnected copy of us experiencing the other valid states after we entered the superposition”, while random chance says “and the other states disappeared when the superposition collapsed”. But without reaching past that horizon of our own state, they’re measurably identical theories, so they’re either both equally valid or both equally pointless speculation, depending on how strict you want to be.


  • scratchee@feddit.uk to Comic Strips@lemmy.world · Audio 📢 · 4 months ago

    It’s literally the non-verbal equivalent of the classic “shouting fire in a crowded theatre” scenario; it should already be illegal under existing law imo.

    “Honking a horn at the operator of a high-speed, multi-ton machine over their radio” seems like pretty clear-cut recklessness in my book.



  • The difference between LLMs and human intelligence is stark. But the difference between LLMs and other forms of computer intelligence is stark too (e.g. LLMs can’t do fairly basic maths, whereas computers have always been superintelligences in the calculator domain). It’s reasonable to assume that someone will figure out how to make an LLM that integrates better with the rest of the computer sooner rather than later, and we don’t really know what that’ll look like. And that requires few genuinely new capabilities.
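
    A minimal sketch of that kind of integration, with the “LLM” stubbed out. The stub, the tool name, and the question format are all assumptions for illustration; real systems wire this routing to an actual model’s tool-calling API:

```python
# Instead of asking the model to do arithmetic itself, it emits a
# tool call and a plain calculator does the maths exactly.
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calculator(expr: str) -> float:
    """Safely evaluate a basic arithmetic expression via the AST."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def stub_llm(question: str) -> dict:
    # Stand-in for a real model. This toy assumes questions of the
    # form "What is <expr>?" and always delegates to the calculator.
    return {"tool": "calculator",
            "args": question.split("What is ")[1].rstrip("?")}

def answer(question: str):
    call = stub_llm(question)
    if call["tool"] == "calculator":
        return calculator(call["args"])

print(answer("What is 1234 * 5678?"))  # exact, unlike a raw LLM guess
```

    The point isn’t this toy dispatcher; it’s that once the model reliably hands work to ordinary software, it inherits the computer’s existing superpowers.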

    The reality is we don’t know how many steps lie between now and AGI. Before the big LLM hype, some people insisted that quality language processing was the key missing feature; now that looks a little naive, but we still don’t know exactly what’s missing. So better to plan ahead and maybe arrive early at solutions than to wait until AGI has arrived and done something irreversible before starting to plan for it.