Dedicated VRAM ≠ performance. We are so used to the conventional definitions that we forget to look at the bigger picture: a GPU does not need dedicated VRAM to deliver good performance. The devil is in the details.
Apple's developer documentation stresses that Apple GPUs use system memory; this is part of their overall design, and these GPUs operate very differently from Nvidia or AMD ones. Apple GPUs generally need to perform less work to achieve the same result, and they are less reliant on memory bandwidth because they are better at optimizing memory accesses. Additionally, Apple GPUs give the programmer direct access to the on-chip cache (tile memory), which allows rendering techniques to be implemented much more efficiently and with much less RAM usage.

For example, many modern games use so-called deferred shading, a multi-step rendering technique where information about object materials is first collected in memory buffers (the G-buffer) and then used to compute complex lighting effects. This technique generally needs a lot of fast VRAM, since these buffers are large and have to be read and written multiple times. On an Apple GPU, however, the same technique can be performed in a single pass using on-chip tile memory only: there is no need for a lot of memory to hold the material buffers, and no need to move that data around. This is one of the reasons why Apple GPUs can be very fast while using less memory; they are simply "smarter".
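To make this concrete, here is a minimal Metal sketch in Swift (my own illustration, not taken from the original post; the `device`, `width`, and `height` values are assumed to come from the surrounding renderer setup). It shows how a G-buffer attachment can be declared as memoryless so that it only ever exists in on-chip tile memory:

```swift
import Metal

// Illustrative sketch (assumed context: `device`, `width`, `height` exist):
// a G-buffer attachment declared as .memoryless lives only in on-chip
// tile memory, is never backed by system RAM, and is never written back.
func makeMemorylessGBufferTexture(device: MTLDevice,
                                  width: Int,
                                  height: Int) -> MTLTexture? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba16Float,   // e.g. packed material data per pixel
        width: width,
        height: height,
        mipmapped: false)
    desc.storageMode = .memoryless   // on-chip only, no RAM allocation
    desc.usage = .renderTarget
    return device.makeTexture(descriptor: desc)
}

// When attached to a single render pass, the G-buffer is produced and
// consumed entirely on-chip, so nothing is loaded from or stored to memory.
func attachGBuffer(_ texture: MTLTexture, to pass: MTLRenderPassDescriptor) {
    pass.colorAttachments[1].texture = texture
    pass.colorAttachments[1].loadAction = .dontCare   // nothing read from RAM
    pass.colorAttachments[1].storeAction = .dontCare  // nothing written back to RAM
}
```

The lighting fragment shader can then read these attachments directly from tile memory in the same render pass (via Metal's programmable blending or tile shading), which is why the large material buffers never have to exist in RAM at all.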