ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.
- In late summer 2025, a publicly developed large language model (LLM) will be released — co-created by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS).
- This LLM will be fully open: its openness is designed to support broad adoption and foster innovation across science, society, and industry.
- A defining feature of the model is its multilingual fluency in over 1,000 languages.
I’m sure the community will find something to hate about this as well, since this isn’t an article about an LLM failing at something.
According to the article, they’ve even addressed my environmental concerns. Since it’s created by universities, I don’t think we’ll even have this shoved down our throats all the time.
I doubt it will be more useful than any other general LLM so far, but hate it? Nah.
Gigantic hater of all things LLM or “AI” here.
The only genuine contribution I can think of that LLMs have made to society is their translation capabilities. So even I can see how a fully open source model with “multilingual fluency in over 1,000 languages” could be potentially useful.
And even if it is all a scam, if this prevents people from sending money to China or the US as they are falling for the scam, I guess that’s also a good thing.
Could I find something to hate about it? Oh yeah, most certainly! :)
I hear there are cool advances in medicine, engineering and such. I imagine techbros have an exponentially bigger budget, though.
Usually when I see this, it's using machine learning approaches other than LLMs, and the researchers behind it are typically very careful not to use the term AI, as they are fully aware that this is not what they are doing.
There’s huge potential in machine learning, but LLMs are very little more than bullshit generators, and generative AI is theft producing soulless garbage. LLMs are widely employed because they look impressive, but for anything that requires substance machine learning methods that have been around for years tend to perform better.
If you can identify cancer in x-rays using machine learning, that's awesome, but that's very separate from the AI hype machine that is currently running wild.
To be fair, the LLMs they use for chatbots and stolen-pics generators are not AI either.
Yeah, I just find it to be a great rule of thumb. Those who understand what they are doing will be aware that they are not dealing with AI, those who jump to label it as such are usually bullshit artists.
Most rational AI hater.
LLMs are useful for inspiration, light research, etc.
They should never be used as part of a finished product or as the main scaffolding.
Honestly, they are pretty good for research too. You can't imagine the amount of obscure shit that my ChatGPT has surfaced when I bounce ideas off it. But yeah, it's terrible in finished products; I think everyone knows that, and in a year or two, if they don't improve, I expect we'll be back to shoving it behind the scenes as was done before ChatGPT. It's for the best.
That's not research. That's simply surfacing tidbits it found on the net that happen to be true.
I've asked many questions of many LLMs in my chosen areas of interest and modest expertise, seeking more than basic knowledge (which they often surprisingly lack). There is always at least one error, often so subtle it goes unnoticed until it's too late.
So what you’re saying is that it’s good for research, because you can’t research what you don’t know about.
It’s good for giving starting points which is exactly what I meant.
Next time I'll write a dissertation with hyper-specifics, because it seems that's necessary every time LLMs are involved, as there's always someone looking to nitpick the statements.
No you rude fuck.
If I ask a simple question about a subject, let's say foraging, since I do that a lot, and it's wrong, it's friggin wrong.
I'll ask about a specific plant. Full disclosure: this is one of my things. 40 years at it. Ok? No big stretch to think I know a thing or two.
So I ask about, let's say, Japanese barberry, an invasive plant that is hated by many, and rightly so at times. The question is: is it edible?
The answer given was no. The truth is the opposite: it is edible. Hell, there are recipes online for barberry jam. Now don't go just eating them, though. It's smart to test one or two leaves to see if an individual is allergic. That's not part of the answer, that's foraging 101. But I digress. The AI was wrong and then argued about it until I pulled up all of the evidence. The AI then admitted it was wrong, but who cares? It's not alive. Winning an argument with AI is like beating oneself at poker.
Another example: I'll ask about intervals in music (guitar teaching is my main profession now, and has been my passion for 48 years). It got the major scale intervals wrong.
I asked one of them (can't remember which, apologies) if yogurt can replace eggs as a binding agent, and it said no. That's a friggin home ec tip that's been around for at least a century.
People who give dissertations don't brag about it, especially to make a point in a thread. It only makes one seem like a person who isn't confident in what they're saying, so they drop a line that they feel will impress others. It doesn't.
Others' experience with the answers given by AI is as important, vital, and real as yours, but you'll brush it off because you feel that somehow you have more insight than others. You don't. You just have more time to pore over AI's mistakes and massage it into producing something close to what you want. That shows an abundance of time available, which means you aren't doing the things I'm talking about.
Or it means this is something you do for your job and it works for those specific needs. Which is fine, but your needs are not the world's. My needs have been poorly met by that tool you espouse. Much like a rake won't help a guy digging a hole, AI is the wrong tool for most jobs.
Which means your opinion of my evaluation of AI results is skewed, because you don't value others' experience, no matter how intelligent you are. And that is a sign of ignorance.
I wish you a good day