Thank you for explaining this. I wasn’t aware of that. It raises the question: if Apple were to buy Perplexity, what would OpenAI think about it?
Keep doing your own research @maxfromdenmark — that explanation of Perplexity is woefully uninformed. Spending 30 minutes on Perplexity will show you that it is way more than just a brand — it’s a platform that orchestrates multiple LLMs and related infrastructure to provide excellent search, research, and dynamic dashboards on demand. It is a reference for how to build LLM-based systems that deserve to exist. A Perplexity purchase would bring similar talent as an OpenAI purchase, but without OpenAI’s IP, organizational, and integration complexity, and would probably cost a lot less.
 
  • Love
Reactions: smulji
ChatGPT is garbage for researchers. It generates fake citations and papers that do not exist. I use Perplexity instead to point me in the right direction of relevant papers, and sometimes to have a dialogue with. I pay $20 for the pro version and it’s way better for me than ChatGPT. ChatGPT does have better image generation, but that is useless for me.
 
That’s true; my understanding as well is that Perplexity is just a GUI on top of all the other models. What does buying that GUI system do for Apple?
 
Thanks for the reply.

That article is propaganda with near zero substance published by people who have financial interest in convincing investors to throw money at them.

Sorry to be so blunt, but there’s virtually no technical merit to what they wrote. They don’t even source any specific benchmarks, for good reason. They should be publishing academic research if they have real breakthroughs, not marketing blog posts.

Quantum computing will make strides in the next decade, but as far as GenAI is concerned I’m firmly of the belief that World Models are the frontier for the next ~5-7 years, and it’s why Meta is putting so much wood behind that arrow, so to speak.

I do agree a Perplexity buyout is questionable given Apple’s brand focus. I don’t see them ever running a search engine, just because they can’t control the content well, and their current leadership team doesn’t seem willing to compromise on that in any area of their business.
Let’s just agree to disagree.
 
  • Wow
Reactions: TJFDenver
That’s true; my understanding as well is that Perplexity is just a GUI on top of all the other models. What does buying that GUI system do for Apple?
As far as I know they have trained their own models, at least for Chinese content. I asked Perplexity and it told me that 😁
 
  • Like
Reactions: KPOM
Doesn't Perplexity rely on ChatGPT/Gemini/Grok? lmao all they're buying is some GUI and some logic for searching the web.
ChatGPT answer:
Yes, Perplexity has recently released its own small LLMs! You are correct—Perplexity is no longer only utilizing third-party models (like OpenAI, Anthropic, or Meta), but has started to develop and deploy its own models as well.

Details:

  • Perplexity LLMs:
    In May 2024, Perplexity announced their own family of language models called Perplexity-LLM (sometimes referred to as PPLX).
    • The first public models are called PPLX-7B and PPLX-70B (the numbers refer to the number of parameters, similar to Llama or Mistral).
    • These models are available via API and can be selected in some platforms, including LM Arena and Hugging Face.
  • Usage in Perplexity App:
    In the Perplexity web app, you can now choose between different models, including their own (PPLX), as well as models from OpenAI, Anthropic, and others.
  • In LM Arena:
    Yes, you may have seen Perplexity (PPLX-7B) or Perplexity (PPLX-70B) in the model selection list. These are Perplexity’s own models.
  • Open Source:
    As of June 2024, Perplexity has released some of their models for research and commercial use, and you can find them on Hugging Face.

Summary:
Perplexity now has its own LLMs (PPLX-7B, PPLX-70B, etc.), and they are available for use both in their own app and on third-party platforms like LM Arena. They still offer access to other models, but they are actively developing and deploying their own.
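For anyone curious what "available via API" means in practice, here's a minimal sketch of querying a Perplexity-hosted model through its OpenAI-compatible chat-completions endpoint. The API key and model name below are placeholders (check Perplexity's current model list), so treat this as an illustration rather than a verified recipe.

Code:
# Minimal sketch: querying a Perplexity-hosted model via its
# OpenAI-compatible chat-completions API. Key and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",        # placeholder
    base_url="https://api.perplexity.ai",     # Perplexity's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="sonar",                            # placeholder; use whatever model Perplexity currently lists
    messages=[
        {"role": "system", "content": "Answer concisely and cite your sources."},
        {"role": "user", "content": "Does Perplexity train its own models?"},
    ],
)
print(response.choices[0].message.content)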


And another answer on how it works, from an unknown LLM and probably outdated, but precise about Perplexity’s internal characteristics:

Perplexity, in its public‐facing application (perplexity.ai), does not run a fully proprietary LLM “from A to Z.” Instead, it acts as an orchestrator or “model aggregator,” switching between several different sources:

  1. OpenAI GPT-3.5-Turbo / GPT-4
    – These are the primary “heavy-hitter” models for most Q&A.
    – Pro users can even access GPT-4 directly—Perplexity simply calls the OpenAI API.
  2. Open-Source LLMs (e.g. Meta Llama 2, Mistral, etc.)
    – For certain queries or cost-sensitive scenarios, Perplexity routes requests to smaller/cheaper models.
    – It doesn’t host these models itself; they run on cloud GPUs (Vercel, Lambda Labs, etc.) but remain open-source.
  3. RAG (Retrieval-Augmented Generation) layer
    – Perplexity first fetches the most relevant passages from the web or trusted sources and then “mixes” them into its answer.
    – You get both a generated response and direct citations/links showing where each fact came from.
  4. No “Perplexity-branded” LLM from scratch
    – There’s currently no 100% in-house model called, say, “Perplexity-1.0.”
    – The focus is on the UX, retrieval layer, and orchestrating existing APIs/open-source instances.
In short:
Perplexity’s sole owned “asset” is its sophisticated platform that:
• retrieves and aggregates relevant web passages
• wraps them into a prompt
• sends that prompt to the best API (OpenAI, Anthropic, Llama, etc.)
• and returns a well-cited answer.

So you’re not waiting for some future “Perplexity LLM”—you’re already getting answers from a mix of top models plus Perplexity’s own retrieval layer. 🙌
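To make that orchestration concrete, here's a rough, self-contained Python sketch of the retrieve, wrap, route, and cite loop described above. The retriever, router, and model call are trivial stand-in stubs made up for illustration; none of this is Perplexity's actual code.

Code:
# Rough sketch of the retrieve -> wrap -> route -> cite flow described above.
# All functions are stand-in stubs, not Perplexity internals.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    url: str

def search_web(question: str, top_k: int = 3) -> list[Passage]:
    # Stand-in for the retrieval layer (a real system would hit a search index).
    return [Passage(text=f"Fact relevant to: {question}", url="https://example.com")][:top_k]

def pick_model(question: str) -> str:
    # Stand-in router: heavy model for long/complex queries, cheap open model otherwise.
    return "heavy-model" if len(question) > 80 else "cheap-open-model"

def call_llm(model: str, prompt: str) -> str:
    # Stand-in for the API call to OpenAI / Anthropic / an open-source model.
    return f"[{model}] grounded answer with citations like [1]"

def answer(question: str) -> str:
    passages = search_web(question)
    sources = "\n".join(f"[{i+1}] {p.text} ({p.url})" for i, p in enumerate(passages))
    prompt = (
        "Answer using only the numbered sources and cite them as [n].\n"
        f"Sources:\n{sources}\nQuestion: {question}"
    )
    return call_llm(pick_model(question), prompt)

print(answer("How does Perplexity route queries between models?"))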
 
  • Like
Reactions: heretiq
Thank you for explaining this. I wasn’t aware of that. It raises the question: if Apple were to buy Perplexity, what would OpenAI think about it?

Nothing. Perplexity pays OpenAI every time it uses ChatGPT. Apple buying Perplexity means the entire Apple user base will use ChatGPT = $$$

I got given a year's sub to Perplexity Pro. I've been making myself use it whenever I remember, but I don't get what all the fuss is about. It all looks good when I search about things I don't know, but whenever I search fields about which I DO know things, it's always just a little bit wrong, no matter which LLM I choose. You always have to check everything, make sure the sources cited are real, etc. I can see why people like it, but it's just not quite there yet.

An LLM has no intelligence. It copies and pastes information it finds, and in between it mixes things up. All LLMs should have a warning that you should cross-check the information. Accuracy of results differs from one benchmark to another. You can check leaderboards here:
This is another one:

This is a very interesting take. There will definitely be some crashing and consolidation among LLM players, but Perplexity is 10X better than most of its cohort and backed by Amazon — so they’re less likely to have a fire sale.

Actually, Perplexity is one of the worst. They do not have a competitive advantage. They are just a middle ground between you and ChatGPT. You can definitely get everything on Perplexity by going directly to ChatGPT, for the most part. Heck, Grok is free so far.
 
Keep doing your own research @maxfromdenmark — that explanation of perplexity is woefully uninformed. Spending 30 minutes on perplexity will show you that it is way more than just a brand — it’s a platform that orchestrates multiple llms and related infrastructure to provide excellent search, research and dynamic dashboards on demand. It is a reference for how to build LLM-based systems that deserve to exist. A Perplexity purchase would bring similar talent as an OpenAI purchase but without OpenAI IP, organizational and integration complexity, and probably cost a lot less.
I’ve been using it since I read an article here and I’m extremely glad I’m using it.
 
  • Like
Reactions: heretiq
Apple has lost the plot. No one is using perplexity anymore because you can do everything that it does in ChatGPT now.
I started wondering about that myself. I have been using Perplexity for a long time and I get a second response from ChatGPT. Usually they’re pretty similar. But now ChatGPT has been giving me more detailed responses than Perplexity.
 
Actually, Perplexity is one of the worst. They do not have a competitive advantage. They are just a middle ground between you and ChatGPT. You can definitely get everything on Perplexity by going directly to ChatGPT, for the most part. Heck, Grok is free so far.
I subscribe to both ChatGPT and Perplexity and use them both every day for personal productivity and for embedded functionality via API in apps we build. I can assure you that OpenAI is a better general-purpose front-end LLM, but Perplexity is better for real-time information, deep research, and complex multi-agent tasks. While both provide comparable functionality, there are areas where the results are better with one or the other, and if your work requires the best result across varying use cases, you definitely cannot “get everything on Perplexity by going directly to ChatGPT.” Also, I refuse to use Grok for the same reason I don’t use X.
 
I subscribe to both ChatGPT and Perplexity and use them both every day for personal productivity and for embedded functionality via API in apps we build. I can assure you that OpenAI is a better general-purpose front-end LLM, but Perplexity is better for real-time information, deep research, and complex multi-agent tasks. While both provide comparable functionality, there are areas where the results are better with one or the other, and if your work requires the best result across varying use cases, you definitely cannot “get everything on Perplexity by going directly to ChatGPT.” Also, I refuse to use Grok for the same reason I don’t use X.

What is an example of a multi-agent task?
 
What is an example of a multi-agent task?
A great example of a multi-agent task is Automated Financial Analysis and Reporting.

Perplexity can leverage multiple agents to automate the workflow by:
  1. Gathering financial data from various sources
  2. Performing calculations and generating reports
  3. Creating interactive dashboards for business insights
This task involves data retrieval, analysis, and visualization agents working together to deliver the desired results. This is a simple one to answer your question, but if you really want to see what Perplexity can do, try using Perplexity Deep Research to figure out something complex and pay attention to the real-time reasoning and agent calls, or give Perplexity Labs a complex research and analysis task and ask it to present the results in an interactive dashboard.
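To make this concrete, here's a toy, self-contained Python sketch of that three-agent pipeline (data retrieval, analysis, visualization). The agents are trivial stand-ins I made up; real agents would call external data sources and a dashboard renderer, but the composition pattern is the same.

Code:
# Toy sketch of the three-agent workflow described above (retrieval,
# analysis, visualization). The agents are stand-ins, not anything
# Perplexity actually runs; it only illustrates how such a pipeline composes.

def data_agent(ticker: str) -> dict:
    # Stand-in: a real agent would pull filings, prices, news, etc.
    return {"ticker": ticker, "revenue": [100.0, 112.0, 125.0]}

def analysis_agent(data: dict) -> dict:
    # Compute simple year-over-year growth from the gathered figures.
    rev = data["revenue"]
    growth = [(b - a) / a for a, b in zip(rev, rev[1:])]
    return {"ticker": data["ticker"], "yoy_growth": growth}

def dashboard_agent(report: dict) -> str:
    # Stand-in: a real agent would render an interactive dashboard.
    rows = ", ".join(f"{g:.1%}" for g in report["yoy_growth"])
    return f"{report['ticker']} YoY revenue growth: {rows}"

def run_pipeline(ticker: str) -> str:
    # The orchestration step: each agent's output feeds the next.
    return dashboard_agent(analysis_agent(data_agent(ticker)))

print(run_pipeline("AAPL"))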
 
A great example of a multi-agent task is Automated Financial Analysis and Reporting.

Perplexity can leverage multiple agents to automate the workflow by:
  1. Gathering financial data from various sources
  2. Performing calculations and generating reports
  3. Creating interactive dashboards for business insights
This task involves data retrieval, analysis, and visualization agents working together to deliver the desired results. This is a simple one to answer your question, but if you really want to see what Perplexity can do, try using Perplexity Deep Research to figure out something complex and pay attention to the real-time reasoning and agent calls, or give Perplexity Labs a complex research and analysis task and ask it to present the results in an interactive dashboard.

Doesn't other AI like ChatGPT do this already? It's called thinking models I believe, like DeepSeek R1.
 
Doesn't other AI like ChatGPT do this already? It's called thinking models I believe, like DeepSeek R1.
Yes, other platforms do this as well, but Perplexity excels at it and performs consistently well with multi-agent/tool orchestration to satisfy complex requests.

Here’s a recent example:

We are using Google Firebase Studio on a client project. Firebase Studio uses Google Gemini (arguably the best coding LLM at this point) for coding assistance. However, Gemini’s performance degrades sharply as project complexity increases, and we found ourselves at a point where literally every code change introduced a regression in both application code and dev environment configuration.

We spent days implementing fix after fix within Firebase Studio until someone suggested using a third-party LLM to find an answer. We fed seven days of Firebase Studio console logs (contained in an attached .txt file) into both ChatGPT and Perplexity using their Deep Research options.

Here is the prompt used with both:

Firebase Studio has become extremely unreliable. Every change causes a regression or a repeat of the same issue as the last change. This has completely destroyed project productivity because running the project takes so much time and more time is spent on repeat issues than advancing the project. Please review the attached Firebase Studio project partial error history, help me diagnose the issue with Firebase Studio, and provide guidance on how to correct it. I suspect it may be related to conflated Firebase Studio IDE and execution environment configuration parameters.

ChatGPT Deep Research Response:

Thanks for sharing the detailed error history. To help you effectively, could you please confirm a few things:
  1. What is your current Firebase Studio IDE and execution environment setup (e.g., custom .idx/dev.nix file, ports used, OS)?
  2. Are there specific moments where regressions occur (e.g., dynamic imports, service form additions)?
  3. Is your goal now to identify root causes for repeated regressions, or do you also want guidance on improving your overall development workflow in Firebase Studio?

Perplexity Deep Research Response:

After analyzing the error history in your Firebase Studio project, I’ve identified several interconnected issues causing the persistent regressions and repeated failures. The primary problems stem from configuration conflicts, data serialization errors, and port management issues that create a cascade of failures with each code change.

Perplexity went on to provide specific code and configuration changes for the three issues, as well as an addendum to the project plan to provide safeguards against Gemini’s forgetfulness with complex projects — which caused it to re-inject the same problems immediately after fixing them, over and over.

While both tools have multi-agent capabilities, Perplexity used the same file that was fed to ChatGPT plus best-in-class multi-agent orchestration and tool calling right out of the box to give us a one-shot solution that we literally copied and pasted into Gemini to instruct it on how to fix itself.

In this case Perplexity was 100X better than ChatGPT. This is not isolated. I’ve seen this disparity of performance time and again — which is why we use both and are biased towards Perplexity for complex problem solving.

I hope this example prompts you to explore Perplexity for yourself if your use cases involve highly complex problem solving.
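For anyone wanting to reproduce this kind of log triage programmatically rather than through the web UI, here's a hypothetical sketch: read the log file, keep the most recent chunk that fits a context window, and send it with a diagnostic prompt to a chat-completions API. The file name, key, and model are placeholders, not what we actually used.

Code:
# Hypothetical programmatic equivalent of the workflow above: read a large
# console-log file, truncate it to a manageable size, and send it with a
# diagnostic prompt. Endpoint, key, model, and file name are placeholders.
from openai import OpenAI

MAX_CHARS = 200_000  # rough guard so the log excerpt fits in the model's context window

log_text = open("firebase_studio_console.log", encoding="utf-8", errors="replace").read()[-MAX_CHARS:]

prompt = (
    "Firebase Studio has become extremely unreliable; every change causes a regression. "
    "Review the attached console-log excerpt, diagnose the root causes, and propose fixes.\n\n"
    f"--- LOG EXCERPT ---\n{log_text}"
)

client = OpenAI(api_key="YOUR_API_KEY")   # placeholder
resp = client.chat.completions.create(
    model="model-of-your-choice",          # placeholder
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)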
 
Yes, other platforms do this as well, but Perplexity excels at it and performs consistently well with multi-agent/tool orchestration to satisfy complex requests.

Here’s a recent example:

We are using Google Firebase Studio on a client project. Firebase Studio uses Google Gemini (arguably the best coding LLM at this point) for coding assistance. However, Gemini’s performance degrades sharply as project complexity increases, and we found ourselves at a point where literally every code change introduced a regression in both application code and dev environment configuration.

We spent days implementing fix after fix within Firebase Studio until someone suggested using a third-party LLM to find an answer. We fed seven days of Firebase Studio console logs (contained in an attached .txt file) into both ChatGPT and Perplexity using their Deep Research options.

Perplexity went on to provide specific code and configuration changes for the three issues, as well as an addendum to the project plan to provide safeguards against Gemini’s forgetfulness with complex projects — which caused it to re-inject the same problems immediately after fixing them, over and over.

While both tools have multi-agent capabilities, Perplexity used the same file that was fed to ChatGPT plus best-in-class multi-agent orchestration and tool calling right out of the box to give us a one-shot solution that we literally copied and pasted into Gemini to instruct it on how to fix itself.

In this case Perplexity was 100X better than ChatGPT. This is not isolated. I’ve seen this disparity of performance time and again — which is why we use both and are biased towards Perplexity for complex problem solving.

I hope this example prompts you to explore Perplexity for yourself if your use cases involve highly complex problem solving.

Thanks for sharing.

- 7 days of logs: how long was that context?

- If Perplexity is superior, why go back to ChatGPT or Gemini at all?
 
Thanks for sharing.

- 7 days of logs: how long was that context?

- If Perplexity is superior, why go back to ChatGPT or Gemini at all?
The log was very long — I didn’t measure it, but it was so long that it caused a beachball on a 32GB M1 Max MacBook Pro using a Markdown editor, so we had to use BBEdit to handle it.

In my experience: Perplexity is superior for tasks involving deep research and complex multi-agent orchestration, with and without code generation and/or actually deploying an interactive dashboard or even a full-blown app; ChatGPT is superior for ideation, abstract reasoning, and complex numerical analysis and linguistics-oriented tasks.

My work spans all of the above so I use each for what they do best and also together to do conceptual work on one and implementation on the other depending on the nature of the project.

My use of Gemini is exclusively for web apps using Firebase Studio. For Apple ecosystem development (Swift), the primary LLMs are Claude 3.7 Sonnet and ChatGPT o4-mini.
 
  • Like
Reactions: MacBH928
The log was very long — I didn’t measure it, but it was so long that it caused a beachball on a 32GB M1 Max MacBook Pro using a Markdown editor, so we had to use BBEdit to handle it.

In my experience: Perplexity is superior for tasks involving deep research and complex multi-agent orchestration, with and without code generation and/or actually deploying an interactive dashboard or even a full-blown app; ChatGPT is superior for ideation, abstract reasoning, and complex numerical analysis and linguistics-oriented tasks.

My work spans all of the above so I use each for what they do best and also together to do conceptual work on one and implementation on the other depending on the nature of the project.

My use of Gemini is exclusively for web apps using Firebase Studio. For Apple ecosystem development (Swift), the primary LLMs are Claude 3.7 Sonnet and ChatGPT o4-mini.

Thanks for sharing, this was helpful.
 
  • Like
Reactions: heretiq