Interesting but not very convincing imo.
For one thing, it's based more on philosophy than on an understanding of the underlying technology.
One fundamental error it makes is assuming that an AGI would be "intelligent" in the same way humans are "intelligent": that just because a human cannot do something (the example given is increasing the IQ of other humans), a hypothetical AGI could not do it either.
Even that analogy is poor, as there's a large body of evidence indicating that IQ varies with environment. So you can make choices that will ultimately increase the IQ of another human. You can't really do this with an adult, but you can with a child.
So going back to our AGI: it will naturally think like a machine, not a human. What a sentient machine would think like is currently pure hypothesis. But there's no basis for the assumption that it would think like a human computer programmer, nor that the methods of increasing its intelligence would be in any way similar to those that work on a human.
It is probably fair to assume a sentient computer program would have a superior innate understanding of computing, in the same way we innately understand fellow humans better than we understand other living creatures.
Further, the primary bottleneck, once you've actually brought an AGI into existence, is compute power. An AGI is just software, and as long as the compute power it has access to is sufficient, its abilities can be developed and improved over time like any other software's. This stands to reason based on what we know about how computers work, and it already applies to the ML systems in use everywhere today.
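To make that concrete, here's a toy sketch (in Python, with a made-up task and made-up numbers) of the "same software, more compute" point: the training routine below never changes, only the budget it's handed - model size and gradient steps - and its accuracy on the task typically improves as that budget grows.

```python
# Toy sketch: the training routine is fixed; only the compute budget
# (hidden units and gradient steps) changes between runs. The task
# (fitting a noisy sine wave) and all numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(512, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

def train(hidden_units, steps, lr=0.01):
    """One unchanging piece of software; capability scales with its budget."""
    W1 = rng.normal(0, 0.5, (1, hidden_units)); b1 = np.zeros(hidden_units)
    W2 = rng.normal(0, 0.5, (hidden_units, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                  # forward pass
        pred = h @ W2 + b2
        err = pred - y
        # backward pass: plain full-batch gradient descent
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return float((err ** 2).mean())

for units, steps in [(4, 2_000), (32, 20_000), (128, 50_000)]:
    print(f"{units:>4} units, {steps:>6} steps -> MSE {train(units, steps):.4f}")
```

Nothing about the algorithm changes between the three runs; the only input that changes is how much compute it's allowed to burn.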
It is a virtual certainty that computers will keep getting more powerful over time. There may well be debate over how rapidly that will happen, but look at the jump in performance the M1 chip delivered after years of stagnation in CPU improvements. Engineers seem pretty good at continuing to push the limits of computing performance even as Moore's law gets harder and harder to sustain. Then there's the development of quantum computers, which is a whole other thing in and of itself.
One final point, quoting the article:
A few A.I. programs have been designed to play a handful of similar games, but the expected range of inputs and outputs is still extremely narrow. Now, alternatively, suppose that you’re writing an A.I. program and you have no advance knowledge of what type of inputs it can expect or of what form a correct response will take. In that situation, it’s hard to optimize performance, because you have no idea what you’re optimizing for.
I'm sure it is hard - AI is a difficult thing to develop - but DeepMind has already created general game-playing AI (MuZero), running on specialised ASICs (TPUs), that learns a wide range of games - board games, arcade games, etc. - to a superhuman level without even being told the rules, largely by playing against itself.
There is no pre-programmed algorithm telling it "if this, do that." The neural network just learns whatever game is put in front of it and is then rapidly able to defeat any human player.
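As a rough illustration of the self-play idea (a toy sketch, nowhere near DeepMind's actual architecture or scale), the little Python program below learns tic-tac-toe purely by playing against itself, with no hand-coded strategy anywhere in it.

```python
# Toy sketch: tabular self-play learning for tic-tac-toe. There is no
# "if this, do that" strategy anywhere below; the agent improves purely
# from the win/lose/draw signal of games played against itself.
import random
from collections import defaultdict

EMPTY = 0
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

Q = defaultdict(float)            # (state, move, player) -> estimated value
ALPHA, EPSILON = 0.3, 0.1         # learning rate, exploration rate

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def choose(board, player):
    """Pick a move greedily from the value table, with a little exploration."""
    moves = [i for i in range(9) if board[i] == EMPTY]
    if random.random() < EPSILON:
        return random.choice(moves)
    state = tuple(board)
    return max(moves, key=lambda m: Q[(state, m, player)])

def self_play_episode():
    board, history, player = [EMPTY] * 9, [], 1
    while True:
        move = choose(board, player)
        history.append((tuple(board), move, player))
        board[move] = player
        result = winner(board)
        if result or EMPTY not in board:
            break
        player = -player
    # Credit every move in the game with the final outcome for its player.
    for state, move, p in history:
        reward = 0.0 if result == 0 else (1.0 if p == result else -1.0)
        key = (state, move, p)
        Q[key] += ALPHA * (reward - Q[key])

for _ in range(50_000):
    self_play_episode()
print(f"table now holds {len(Q)} state-action values learned from self-play alone")
```

Obviously tic-tac-toe is trivial next to Go or StarCraft, but the structure is the same: the only teacher the agent ever has is itself.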
Given this technology already exists right now, why would a future AGI not be able to apply the same kind of rapid learning and self-teaching to programming? If such an AI can hypothetically replace human jobs in the future, there's no reason to think programmer wouldn't be one of them, and no reason to think an AI would be limited to human-level programming skill.
I'd actually go further and say you don't necessarily need an AGI to rapidly develop and improve an AI program. You'd just need a narrow AI that's good at generalised programming and algorithm development.
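Even that narrow version can be sketched in miniature (a deliberately crude, hypothetical example, nothing like a real program-synthesis system): generate random candidate programs in a tiny arithmetic DSL and keep whichever scores best against a test suite.

```python
# Deliberately crude sketch: search for a program by generating random
# candidates in a tiny arithmetic DSL and scoring them against a test suite.
# The target function, the DSL, and the search budget are invented for illustration.
import random

TESTS = [(x, 3 * x + 2) for x in range(-5, 6)]   # behaviour we want: f(x) = 3x + 2
OPS = ["+", "-", "*"]

def random_expr(depth):
    """Generate a random expression over x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(0, 5))])
    return f"({random_expr(depth - 1)} {random.choice(OPS)} {random_expr(depth - 1)})"

def score(expr):
    """How many test cases does the candidate program satisfy?"""
    try:
        return sum(eval(expr, {"x": x}) == out for x, out in TESTS)
    except Exception:
        return 0

best, best_score = "x", score("x")
for _ in range(20_000):
    candidate = random_expr(depth=3)
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s
    if best_score == len(TESTS):
        break
print(f"best program found: {best}  ({best_score}/{len(TESTS)} tests passed)")
```

Blind search over toy expressions is about as narrow as it gets, yet it still "writes" programs it was never shown; a system that did this well, at scale, wouldn't need to be anything like a general intelligence to be useful for improving other software.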