Doesn't Perplexity rely on ChatGPT/Gemini/Grok? lmao all they're buying is some GUI and some logic for searching the web.
ChatGPT answer:
Yes, Perplexity has recently released its own small LLMs! You are correct: Perplexity is no longer relying only on third-party models (like OpenAI, Anthropic, or Meta), but has started to develop and deploy its own models as well.
Details:
- Perplexity LLMs:
In May 2024, Perplexity announced their own family of language models called Perplexity-LLM (sometimes referred to as PPLX).
- The first public models are called PPLX-7B and PPLX-70B (the numbers are the parameter counts in billions, as with Llama or Mistral).
- These models are available via API and can be selected on some platforms, including LM Arena and Hugging Face.
- Usage in Perplexity App:
In the Perplexity web app, you can now choose between different models, including their own (PPLX), as well as models from OpenAI, Anthropic, and others.
- In LM Arena:
Yes, you may have seen Perplexity (PPLX-7B) or Perplexity (PPLX-70B) in the model selection list. These are Perplexity’s own models.
- Open Source:
As of June 2024, Perplexity has released some of their models for research and commercial use, and you can find them on Hugging Face.
Summary:
Perplexity now has its own LLMs (PPLX-7B, PPLX-70B, etc.), and they are available for use both in their own app and on third-party platforms like LM Arena. They still offer access to other models, but they are actively developing and deploying their own.
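If the PPLX models really are "available via API" as the answer claims, a call would presumably go through an OpenAI-style chat-completions endpoint. The endpoint URL and the model name below are assumptions taken from the answer above, not verified against Perplexity's actual docs; this sketch only assembles the request payload, it does not hit the network:

```python
import json

# Hypothetical request to a Perplexity chat-completions API.
# The endpoint and model name ("pplx-70b") are assumptions based on
# the answer above -- check Perplexity's API docs for real values.
API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint


def build_request(question: str, model: str = "pplx-70b") -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Be precise and cite sources."},
            {"role": "user", "content": question},
        ],
    }


payload = build_request("Does Perplexity train its own models?")
body = json.dumps(payload)  # this JSON string is what would be POSTed to API_URL
print(payload["model"])  # pplx-70b
```

Since the payload shape is the standard chat-completions format, swapping the model name is all it would take to route the same question to a different backend.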
And here is another answer on how it works, from an unknown LLM; it is probably outdated, but precise about Perplexity's internal architecture:
Perplexity, in its public-facing application (perplexity.ai), does not run a fully proprietary LLM "from A to Z." Instead, it acts as an orchestrator or "model aggregator," switching between several different sources:
- OpenAI GPT-3.5-Turbo / GPT-4
– These are the primary “heavy-hitter” models for most Q&A.
– Pro users can even access GPT-4 directly—Perplexity simply calls the OpenAI API.
- Open-Source LLMs (e.g. Meta Llama 2, Mistral, etc.)
– For certain queries or cost-sensitive scenarios, Perplexity routes requests to smaller/cheaper models.
– It doesn’t host these models itself; they run on cloud GPUs (Vercel, Lambda Labs, etc.) but remain open-source.
- RAG (Retrieval-Augmented Generation) layer
– Perplexity first fetches the most relevant passages from the web or trusted sources and then “mixes” them into its answer.
– You get both a generated response and direct citations/links showing where each fact came from.
- No “Perplexity-branded” LLM from scratch
– There’s currently no 100% in-house model called, say, “Perplexity-1.0.”
– The focus is on the UX, retrieval layer, and orchestrating existing APIs/open-source instances.
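The RAG flow described above can be sketched roughly like this. The retrieval function is a hard-coded stand-in for Perplexity's real web index, and the numbered-citation prompt format is invented for illustration:

```python
# Rough sketch of the retrieve-then-generate flow described above.
# retrieve() returns hard-coded stand-ins for a real web search,
# and the prompt format is invented for illustration.

def retrieve(query: str) -> list[dict]:
    """Stand-in for Perplexity's web retrieval layer."""
    return [
        {"url": "https://example.com/a", "text": "Passage relevant to the query."},
        {"url": "https://example.com/b", "text": "Another supporting passage."},
    ]


def build_rag_prompt(query: str) -> str:
    """Mix retrieved passages into a prompt, numbered so the model can cite them."""
    passages = retrieve(query)
    context = "\n".join(
        f"[{i}] {p['text']} (source: {p['url']})"
        for i, p in enumerate(passages, 1)
    )
    return (
        "Answer the question using only the sources below. "
        f"Cite them as [1], [2], ...\n\n{context}\n\nQuestion: {query}"
    )


prompt = build_rag_prompt("How does Perplexity work?")
# `prompt` would then be sent to whichever backend model is selected.
```

Because each passage carries its source URL into the prompt, the model's bracketed citations can be mapped back to links in the final answer, which is exactly the "generated response plus direct citations" behavior described above.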
In short:
Perplexity’s sole owned “asset” is its sophisticated platform that:
• retrieves and aggregates relevant web passages
• wraps them into a prompt
• sends that prompt to the best API (OpenAI, Anthropic, Llama, etc.)
• and returns a well-cited answer.
So you’re not waiting for some future “Perplexity LLM”—you’re already getting answers from a mix of top models plus Perplexity’s own retrieval layer. 🙌
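Put together, the orchestration the answer describes amounts to a router in front of the prompt builder: cheap open-source models for simple lookups, heavyweight API models for hard questions. A toy version, with made-up model names and a naive heuristic (Perplexity's actual routing logic is not public):

```python
# Toy model router in the spirit of the orchestration described above.
# The model names and the routing heuristic are made up; Perplexity's
# actual routing logic is not public.

CHEAP_MODEL = "mistral-7b-instruct"  # assumed open-source backend
HEAVY_MODEL = "gpt-4"                # assumed API backend


def route(query: str) -> str:
    """Naive heuristic: send short lookups to the cheap model,
    and long or multi-part questions to the heavy one."""
    is_complex = len(query.split()) > 15 or "?" in query[:-1]
    return HEAVY_MODEL if is_complex else CHEAP_MODEL


print(route("capital of France?"))
print(route("Compare the training data, context windows, and licensing "
            "of Llama 2 and Mistral, and explain which is better for RAG?"))
```

The word-count threshold and the "multiple question marks" check are arbitrary; a production router would more plausibly use query classification, per-model cost, and latency budgets to make the same decision.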