
MacRumors

macrumors bot
Original poster
Apr 12, 2001
If you haven't had enough of AI apps, there's yet another to download and try out as of today. European company Mistral's Le Chat chatbot can now be used through a dedicated iOS app.

[Image: le-chat-mistral.jpg]

Mistral is a French AI company founded by engineers from Google and Meta. It creates its own open-weight large language models, and is aiming to compete with OpenAI. Le Chat has been available on the web, but the app will make it better able to compete with ChatGPT, DeepSeek, Gemini, and other options.

Like many competing chatbots, Le Chat supports natural language conversations, real-time web search, document analysis, and image generation.

Le Chat is free to use, but access to the highest-performing models is limited. A $14.99 per month fee unlocks a Pro tier with unlimited web browsing, extended access to news, and unlimited messaging.

Le Chat can be downloaded from the App Store for free. [Direct Link]

Article Link: Mistral AI's 'Le Chat' Chatbot Now Available on iPhone
 
Many cutting-edge LLMs don't yet use a 'mixture-of-experts' architecture, but Mistral's do. That's one differentiator for trying out Le Chat compared to other AIs.

Apparently, it's tricky to retrofit the mixture-of-experts concept onto existing LLMs, so models need to be rebuilt with that concept in mind if they want to harness the benefits it offers.
 
Good company. I still have Mixtral 8x7B and Miqu on my desktop.
It's probably not objectively true, and Deepseek R1 has most of the spotlight now, but to me, they'll always be the original cool, open-source, and competitive AI lab.
 
Many cutting-edge LLMs don't yet use a 'mixture-of-experts' architecture, but Mistral's do. That's one differentiator for trying out Le Chat compared to other AIs.

Apparently, it's tricky to retrofit the mixture-of-experts concept onto existing LLMs, so models need to be rebuilt with that concept in mind if they want to harness the benefits it offers.
For those of us who don’t know, please could you explain the advantage of a “mixture of experts” mode?
 
What is even the difference between all these? What is their selling point?
One argument is that AI is very politicized now, and a lot of consumers are starting to alter their spending habits in relation to the trade wars:

Personally, I'm trying to lessen my contributions to the U.S. economy, as I don't yet grasp how dedicating four years to making a single American and his business partners (even more) exorbitantly rich and powerful will help the country or its allies at large.

That's a level of 4D chess my woke, socialist brain cannot compute. Too many organic soy lattes served by DEI-hired trans baristas, I guess.

Anyway

-Not paying or generating data for OpenAI is a great selling point as they are one of Trump's partners.

Mistral also isn't that big of a step down. So it's not really much of a sacrifice.
 
Have just tried it. The app typeface on an iPhone is a bit smaller compared to ChatGPT, but it can still come in handy whenever ChatGPT complains that I have exceeded my daily request limit. Add Copilot and DeepSeek to the mix and everybody should be nicely covered for a day of free requests. 😉
 
What is even the difference between all these? What is their selling point?
Some have bigger context lengths and bigger models are more reliable than smaller models. Some will waste your time showing how they come to an answer. They call this reasoning, but it is mostly a lot of babbling and sometimes the reasoning looks like a mental homeless person talking to an imaginary friend.

Anyway, according to the AI clown show in the media, before last month there were no open-weight models people could use online or locally, even though Ollama and Hugging Face have thousands to play with.

Those clown outlets said DeepSeek "democratizes" AI, but we always had models to download, and the only way you can get close to ChatGPT-level performance is if you have an extremely powerful computer or you use a model in the cloud. The biggest model you can run on a quad RTX 5090 still won't be a ChatGPT, and if this makes me look like a ChatGPT fan, I'm not. I can live without all this jargon and hoopla.
 
Have just tried it. The app typeface on an iPhone is a bit smaller compared to ChatGPT, but it can still come in handy whenever ChatGPT complains that I have exceeded my daily request limit. Add Copilot and DeepSeek to the mix and everybody should be nicely covered for a day of free requests. 😉

you are fined one credit for violation of the verbal-morality statute​

 
Have just tried it. The app typeface on an iPhone is a bit smaller compared to ChatGPT, but it can still come in handy whenever ChatGPT complains that I have exceeded my daily request limit. Add Copilot and DeepSeek to the mix and everybody should be nicely covered for a day of free requests. 😉

You could use Hugging Face Chat for free to load and use thousands of models, including the ones from Mistral AI.
 
For those of us who don’t know, please could you explain the advantage of a “mixture of experts” mode?
To my understanding, a 'mixture-of-experts' model breaks a large AI into several smaller, specialized parts.

When a user asks a question, the system picks only the best-suited parts from the 'mixture of experts' and tasks them with answering it.

The goal behind 'mixture-of-experts' models is to make the AI's responses faster and more accurate compared to traditional dense LLMs, which run every parameter for every query.
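To make the routing idea concrete, here's a minimal toy sketch in Python/NumPy. This is not Mistral's actual implementation — the expert count, top-2 selection, and random linear "experts" are all illustrative assumptions — but it shows the core trick: a router scores all experts, and only the few best-scoring ones actually run for a given input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 8 "experts", each a small linear layer; a router scores
# how well each expert fits a given input, and only the top-2 run.
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_forward(x):
    scores = x @ router                  # one score per expert
    top = np.argsort(scores)[-TOP_K:]    # indices of the best-matching experts
    # Softmax over just the selected experts' scores
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    # Only TOP_K of NUM_EXPERTS expert layers actually compute anything,
    # so cost scales with TOP_K even though capacity scales with NUM_EXPERTS
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

x = rng.normal(size=DIM)
y = moe_forward(x)
print(y.shape)  # (16,)
```

That last comment is the whole appeal: you get the capacity of all eight experts while paying the compute cost of only two per query.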
 
If it doesn’t know the answer does it shrug its shoulders and say “I do not know zee answer to this ridiculous question”
 