• 0 Posts
  • 53 Comments
Joined 1 year ago
Cake day: July 23rd, 2023



  • Speaking from 10+ YoE developing metrics, dashboards, uptime, all that shit, and another 5+ on top of that managing all of it at an exec level: this is bullshit. There is a disconnect between the automated systems that tell us something is down and the people who want to tell the outside world something is down. If you are a small company, there’s a decent chance you launched your product without proper alerting and monitoring, so you have to manage outages manually. If you are GitHub or AWS size, you know exactly when shit hits the fan, because you have contracts that depend on uptime and you’re going to need justification for downtime. Assuming a healthy environment, you’re running blameless postmortems, and at that scale you’ve done millions of them; part of resolving one is making sure you’ll know before it happens again. Internally you always know when there is an outage; choosing what to expose externally is about making yourself look good, not about customer experience.

    What you’re describing is the incident management process. That doesn’t require management input either, because you’re not going to wait for some fucking suit to respond to a Slack message. Your alarms have severities, and those severities give you agency (see the sketch below). Again, a small business might not have this, but at large scale, especially anywhere holding something like a SOC 2, you have procedures in place and you’re stopping the bleeding. Some level of leadership will step in and translate what the individual contributors are doing into business speak; that doesn’t prevent you from telling your customers shit is fucked up.
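
    To make “severities give you agency” concrete, here’s a toy sketch of severity-gated publishing. Every name in it (the threshold, the statusPage object, the alert shape) is hypothetical rather than any real status-page API; the point is that the alert’s severity, not an approval chain, decides what customers see.

```javascript
// Toy sketch of severity-gated status publishing. All names here are
// hypothetical; no real status-page API looks exactly like this.
const PUBLISH_THRESHOLD = 2; // policy: sev1 and sev2 are customer-visible

const statusPage = {
  setComponentState(component, state) {
    console.log(`status page: ${component} -> ${state}`);
  },
};

function startIncident(alert) {
  // Internal incident response starts immediately; there is no approval gate.
  console.log(`incident opened: ${alert.component} (sev${alert.severity})`);
}

function onAlert(alert) {
  startIncident(alert);
  // Severity, not a suit in a Slack thread, decides what customers see.
  if (alert.severity <= PUBLISH_THRESHOLD) {
    statusPage.setComponentState(alert.component, "degraded");
  }
}

onAlert({ component: "us-east-1 writes", severity: 1 }); // published
onAlert({ component: "internal dashboard", severity: 4 }); // internal only
```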

    The only time a company actually needs to evaluate what’s going on before announcing is a security incident. There’s a huge difference between “my honeypot blew up” and “the database in this region is fucked, so customers can’t write anything to it and probably can’t use our product.” My honeypot blowing up might mean I’m actually compromised, or it might mean the attackers hit the honeypot and nothing else; figuring out which takes time. Can’t send traffic to a region? There’s literally no reason the customer would be able to either, so why am I not telling them?

    I read your response as coming from either someone who knows nothing about the field or someone on the business side who doesn’t actually understand how a single pane of glass works. If that’s not the case, I apologize. This is a huge pet peeve for basically everyone in the SRE/DevOps space who has to consume these shitty status pages.


  • This is a common problem; the same thing happens with AWS outages. Business people get to flip the switches manually here, completely divorced from the actual monitoring. An internal alert fires, engineers start looking at it, and the outage only appears on the status page once someone approves publishing it. For places like GitHub and AWS, outages are tied to SLAs, which are tied to payouts or discounts for huge customers, so there’s an immense incentive not to declare an outage even when everything is on fire. I have yelled at AWS, GitHub, Azure, and a few smaller vendors over this exact bullshit. One time we had a Textract outage run for over six hours before AWS finally declared one. We were fucking screaming at our TAM by the end, because no one in our collective networks could use the service, yet they refused to declare an outage.



  • The problem is the underlying API. parseInt("550e8400-e29b-41d4-a716-446655440000", 10) (this is a UUID) returns 550, because parseInt consumes leading characters as long as they’re valid digits in the given radix and silently stops at the first one that isn’t, here the "e". If you’re expecting that input not to parse as a number, then JavaScript fails you. To some degree there is a need for things to follow common standards. If your whole team understands how parseInt works, agrees that those strings should parse as numbers, and keeps designing for that, you’re golden.
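
    A quick illustration; strictParseInt at the end is just a hypothetical sketch of one way to opt out of that behavior:

```javascript
const uuid = "550e8400-e29b-41d4-a716-446655440000";

// parseInt reads "550", hits the "e" (not a valid base-10 digit), and stops.
console.log(parseInt(uuid, 10)); // 550

// Number() requires the entire string to be numeric, so it rejects the UUID.
console.log(Number(uuid)); // NaN

// Hypothetical helper: validate the whole string before parsing.
function strictParseInt(s) {
  return /^[+-]?\d+$/.test(s) ? parseInt(s, 10) : NaN;
}
console.log(strictParseInt(uuid)); // NaN
console.log(strictParseInt("550")); // 550
```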


  • The correct way to get someone to move to FOSS is to show them how to do it, not just tell them it exists. OP already said they can do the YouTube -> captioned gif workflow in 10 minutes, so you need to provide a simple tutorial that identifies the tools to use, how to set them up, and how to build a workflow that produces a captioned gif (or similar format) in under 10 minutes.

    Notice how I explained what was wrong and how to do it? That’s what’s missing from most “you need to use FOSS” posts, including yours.
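
    So, for instance, a minimal sketch of such a tutorial, assuming yt-dlp and an ffmpeg build with libass, plus a caption file subs.srt you write yourself or export from a subtitle editor (the URL and filenames are placeholders):

```sh
# 1. Grab the source video (output name is arbitrary).
yt-dlp -o clip.mp4 "https://www.youtube.com/watch?v=VIDEO_ID"

# 2. Burn in captions from subs.srt, cut a 5-second slice, and convert it
#    to a gif with a generated palette for decent colors. The subtitles
#    filter needs an ffmpeg build with libass; putting -ss/-t after -i is
#    slower but keeps the caption timing aligned.
ffmpeg -i clip.mp4 -ss 10 -t 5 \
  -filter_complex "subtitles=subs.srt,fps=12,scale=480:-1:flags=lanczos,split[a][b];[a]palettegen[p];[b][p]paletteuse" \
  out.gif
```

    Setup is two package installs plus writing the caption file, which should fit the 10-minute budget once you’ve done it a couple of times.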


  • Calling a license by anything other than its name and stated purpose is something I’d dare to call mislabeling. If Creative Commons adds “anti-commercial-AI” to CC BY-NC-SA 4.0, then and only then is it not mislabeling. That’s like me calling the US copyrights on the books sitting next to me “anti-bitfucker” licenses. They have nothing to do with you at this point in time, so it is misleading for me to claim otherwise.

    While you are correct that Lemmy itself does not add a license and many instances do not add one, it’s not as simple as “the user notifies [you] must abides by [their] licenses.” Jurisdiction matters. The Fediverse host the content is pulled from matters. A myriad of other factors matter. As you correctly pointed out, there is no precedent for any of this, so, as I pointed out, unless you’re willing to go to court and can prove damages, it is effectively useless.



  • They’re mislabeling the license too. CC BY-NC-SA 4.0 has nothing to do with “anti-commercial-AI.” It provides some terms for using content and, in theory, if OP is willing to take someone to court, should provide some basis if the license is being abused. Until there’s actual precedent, though, it’s debatable whether or not sucking up CC BY-NC-SA 4.0 content is a breach of the license. For it to actually matter, someone needs to demonstrably prove 1) CC BY-NC-SA 4.0 content was sucked up by AI, 2) it was their content and it was licensed at the time, 3) the terms of the license were violated, and 4) whatever other legal shit pops up during the course of the litigation. “Someone” has to be someone with deep fucking pockets willing to go the distance across many international jurisdictions.