I hadn't seen the fail whale in a loooong time, and neither had most other folks; that's why there was a flood of "the resurrection of the fail whale" memes when Musk bought Twitter. Core services at Twitter had been pretty stable for quite a while.
Twitter usage spiked and reached record numbers after Elon took over.
Again, Twitter was prone to outages before Elon: https://www.reuters.com/technology/twitter-services-down-thousands-users-downdetector-2021-04-17/
Anecdotal experience of seeing the fail whale during peak usage doesn't really say much.
Those seven words are doing a lot of work right there. For workloads that benefit from it, sure, it solves a lot of problems. It also lets you *mask* problems from devs, a pattern a friend likes to call "defensive sysadminning": you can paper over poor performance, uncaught exceptions and crashes, memory issues (hi, Java in Docker, especially older apps that are just bundled into a container and aren't container aware), and so on, simply by scaling. It doesn't solve everything, it adds complications of its own, and it requires serious thought about resource isolation, RBAC settings, state management, and more. It's not one-size-fits-all, and on large projects it takes a fair amount of work to both operate and optimize for.
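To make the Java-in-Docker point concrete, here's a rough sketch of what "container aware" ends up meaning in practice (the app name, image, and numbers are all made up): you have to tell both the scheduler and the JVM about the same memory ceiling, because an older JVM will size its heap off the host's RAM, blow past the cgroup limit, get OOMKilled, and autoscaling will quietly hide that from the devs.

```yaml
# Hypothetical Deployment for an older, not-container-aware Java app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-java-app          # made-up name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-java-app
  template:
    metadata:
      labels:
        app: legacy-java-app
    spec:
      containers:
        - name: app
          image: registry.example.com/legacy-java-app:1.0   # made-up image
          resources:
            requests:
              memory: "512Mi"
            limits:
              memory: "1Gi"      # the cgroup limit the old JVM won't notice on its own
          env:
            - name: JAVA_TOOL_OPTIONS
              # Roughly: on 8u191+/11+ the JVM can size itself from the cgroup;
              # on older builds you'd pin -Xmx well below the limit instead.
              value: "-XX:MaxRAMPercentage=75.0"
```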
And all that's assuming the app in question benefits from k8s. Not everything is microservices in lightweight containers. Some things don't work well there. Use the right tool for the right job.
I love k8s, I really do; for orchestrating containerized workloads it's fantastic. But it's not a panacea that removes the need for ops staff. In my experience it doesn't reduce your need for ops staff at all, it just lets us work differently where it's applicable. It's an optimization for scaling and ease of orchestration, not for trimming the ops team.
It also typically requires us, right now, to walk the service devs through using it correctly. Packaging and orchestration layers come in many flavors, and containers and k8s are still relatively young; in many cases devs don't know how to optimize for them. Even defining the deliverable across multiple teams can be frustrating (service dev: "What do you mean I should be writing a helm chart? What is that? Right now I just do kubectl apply -f on my Stack Overflow-supplied deployment yaml, can't you do that in prod??" or "What do you mean you have to have a process for managing secrets? I just shoved 'em all in a file in my container," and so on).
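For the secrets complaint, the pattern we usually end up walking devs toward looks something like the sketch below (names and values are placeholders, and a real setup would populate the Secret from a vault or CI rather than committing it): keep the credential out of the image entirely and hand it to the pod at runtime.

```yaml
# Hypothetical Secret plus the bit of the Deployment that consumes it,
# instead of baking credentials into the container image.
apiVersion: v1
kind: Secret
metadata:
  name: api-credentials          # made-up name
type: Opaque
stringData:
  API_TOKEN: "replace-me"        # placeholder; don't commit real values
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-service             # made-up name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: some-service
  template:
    metadata:
      labels:
        app: some-service
    spec:
      containers:
        - name: app
          image: registry.example.com/some-service:1.0   # made-up image
          envFrom:
            - secretRef:
                name: api-credentials   # exposes API_TOKEN as an env var at runtime
```

Once a team is writing things like this anyway, templating it into a helm chart is a much smaller conversation than starting from "I just kubectl apply my yaml."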
Also, something I've found: in many ways k8s isn't radically different from orchestrating piles of stateless nodes like I did on the HPC side years ago. What's old is new again.
This really doesn't contribute anything to the discussion. I'm well aware of what Kubernetes does. So are the current Twitter engineers.