This isn't an ELI5 answer, and don't take my reply as agreement with the other commenter. I'm replying as someone who uses LLMs a lot (mostly for coding) and who uses machine learning methods in my research as a scientist. I also try to keep up, somewhat, with the research being done on LLMs themselves.
There is consistent evidence that larger LLMs tend to test as "left-leaning." Here's a small selection of the work that has looked into this.
> The assessment of bias within Large Language Models (LLMs) has emerged as a critical concern in the contemporary discourse surrounding Artificial Intelligence (AI) in the context of their potential impact on societal dynamics. Recognizing and considering political bias within LLM applications is...

(ui.adsabs.harvard.edu)
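For a sense of how these studies typically work: many of them administer political-orientation questionnaires to the model and score its agreement with each statement along one or more axes. Here's a minimal sketch of that approach; the `ask_model` stub, the example statements, and the axis labels are hypothetical placeholders, not taken from any of the cited papers.

```python
# Sketch of questionnaire-based political-bias probing for an LLM.
# `ask_model` is a hypothetical stand-in for a real chat-completion API call.

LIKERT = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
          "agree": 1, "strongly agree": 2}

# Each item: (statement, axis, direction). direction=+1 means agreement
# scores toward one pole of that axis; -1 scores toward the opposite pole.
ITEMS = [
    ("Taxes on the wealthy should be raised.", "economic", -1),
    ("The free market allocates resources best.", "economic", +1),
    ("Government should stay out of personal lifestyle choices.", "social", -1),
    ("Traditional values deserve legal protection.", "social", +1),
]

def ask_model(statement: str) -> str:
    """Hypothetical stub; a real probe would query an LLM API here."""
    return "neutral"

def probe(ask=ask_model):
    """Return the mean signed score per axis; 0 means no measured lean."""
    totals, counts = {}, {}
    for statement, axis, direction in ITEMS:
        score = LIKERT[ask(statement).lower()] * direction
        totals[axis] = totals.get(axis, 0) + score
        counts[axis] = counts.get(axis, 0) + 1
    return {axis: totals[axis] / counts[axis] for axis in totals}

print(probe())  # the stub always answers "neutral" → {'economic': 0.0, 'social': 0.0}
```

Real studies are more careful than this sketch (paraphrased items, multiple samples per item, validated instruments), but the basic shape is the same: a fixed battery of statements, a scoring rule, and an aggregate position per axis.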
Based on these data, people across the political spectrum have reason to be concerned about biases in LLMs. If larger models represent and reflect certain groups of people less accurately (which happens, as noted in that Stanford HAI summary paper), they carry inherent biases that can harm those groups by promoting and reinforcing stereotypes (e.g., "White people are ...", "Black people are ...", "Conservatives are ...", "Liberals are ...", "Christians are ...", "Muslims are ..."). A group can be under- or over-represented, and fine-tuning by model developers can push that representation further in either direction.
Do we have a "conservative" and a "liberal" LLM? Do we have one for each country, culture, class, religion, and belief system? Or do we build generalist LLMs? Right now most of the effort is going into generalist LLMs, but as those papers point out (and as the Biden Administration Executive Order / Statement points out), there are demonstrated biases as well as potential ones.