It took OpenAI years to figure out how to fine-tune ChatGPT properly, and Apple wants to bake the same technology into an operating system. Prank or not, cracks are going to form when it's applied to something this complex.

A prank is more likely than a biased dataset because Apple's voice dictation relies on large, general-purpose language models trained on diverse data. If 'Trump' were consistently replaced with 'racist', that would suggest intentional tampering rather than an accidental correlation in the training data. Given Apple's control over its AI systems, a prank (such as user-submitted corrections or local manipulation) is a more plausible explanation than systemic bias in the training data.