"the post-training performance gains in Qwen3.5 primarily stem from our extensive scaling of virtually all RL tasks and environments we could conceive."
I don't think anyone is surprised by this, but it's interesting that you still see people claim the training objective of LLMs is next-token prediction.
The "Average Ranking vs Environment Scaling" graph below that is pretty confusing though! Took me a while to realize the Qwen points near the Y-axis were for Qwen 3, not Qwen 3.5.
dash2 today at 1:11 PM
You'll be pleased to know that it chooses "drive the car to the wash" on today's latest embarrassing LLM question.
Would love to see a Qwen 3.5 release in the 80-110B range, which would be perfect for 128GB devices. While Qwen3-Next is 80B, it unfortunately doesn't have a vision encoder.
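Rough napkin math on why that size range lines up with 128GB (my own assumptions about weight-only quantization, nothing from the announcement):

    # Back-of-the-envelope memory for quantized weights; KV cache and runtime overhead come on top.
    def weight_gb(params_billions: float, bits_per_weight: int) -> float:
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    for params in (80, 110):
        for bits in (8, 4):
            print(f"{params}B @ {bits}-bit ~= {weight_gb(params, bits):.0f} GB of weights")

    # 110B @ 8-bit ~= 110 GB and 80B @ 4-bit ~= 40 GB, so even the large end of that
    # range fits a 128GB machine at 8-bit, with some headroom for KV cache and the OS.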
gunalx today at 12:59 PM
Sad to not see smaller distills of this model released alongside the flagship. That has historically been why I liked Qwen releases. (Lots of different sizes to pick from on day one.)
bertili today at 12:36 PM
Last Chinese New Year we would not have predicted a Sonnet 4.5-level model that runs locally and fast on a 2026 M5 Max MacBook Pro, but it's now a real possibility.
vessenes today at 2:41 PM
Great benchmarks. Qwen is a highly capable open model, especially their vision series, so this is great to see.
Interesting rabbit hole for me: its AI report mentions Fennec (Sonnet 5) releasing Feb 4. I was like "No, I don't think so", then did a lot of googling and learned that this is a common misperception among AI-driven news tools. Looks like there was a leak, rumors, a planned(?) launch date, and ... it all adds up to a confident launch summary.
What's interesting is that I'd missed all the rumors myself, so we ended up with a sort of useful hallucination. Notable.
mynti today at 11:27 AM
Does anyone know what kind of RL environments they are talking about? They mention they used 15k environments. I can think of maybe a couple hundred that make sense to me, but what fills out a number that large?
azinman2 today at 4:50 PM
Does anyone else have trouble loading the Qwen blogs? I always get their loading placeholders and nothing ever comes in. I don’t know if this is ad-blocker related or what… (I’ve even disabled it, but it still won’t load.)
The "native multimodal agents" framing is interesting. Everyone's focused on benchmark numbers but the real question is whether these models can actually hold context across multi-step tool use without losing the plot. That's where most open models still fall apart imo.
ggcr today at 9:52 AM
From the HuggingFace model card [1] they state:
> "In particular, Qwen3.5-Plus is the hosted version corresponding to Qwen3.5-397B-A17B with more production features, e.g., 1M context length by default, official built-in tools, and adaptive tool use."
Does anyone know more about this? The OSS version seems to have a 262144 context length, so I guess for the 1M they'll ask you to use YaRN?
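If it works like previous Qwen long-context releases, the 1M figure is probably just a YaRN rope_scaling override on top of the native window. A minimal sketch of what I'd expect (the repo id, factor, and exact keys are my guesses, not from the model card):

    from transformers import AutoConfig, AutoModelForCausalLM

    model_id = "Qwen/Qwen3.5-397B-A17B"  # hypothetical HF repo id

    # Stretch the native 262144-token window ~4x to reach ~1M positions via YaRN.
    config = AutoConfig.from_pretrained(model_id)
    config.rope_scaling = {
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 262144,
    }
    config.max_position_embeddings = 1_048_576

    model = AutoModelForCausalLM.from_pretrained(
        model_id, config=config, torch_dtype="auto", device_map="auto"
    )

Whether quality actually holds at 1M without whatever extra machinery the hosted Plus version adds is the open question.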
Wow, the Qwen team is pushing out content (models + research + blog posts) at an incredible rate! Looks like omni-modal models are their focus? The benchmarks look intriguing, but I can’t stop thinking of the HN comments about Qwen being known for benchmaxing.
sasidhar92 today at 4:34 PM
Going by the pace, I am more bullish that the capabilities of Opus 4.6 or the latest GPT will be available on a Mac with under 24GB of memory.
Matl today at 1:58 PM
Is it just me, or are the 'open source' models increasingly impractical to run on anything other than massive cloud infra, at which point you may as well go with the frontier models from Google, Anthropic, OpenAI, etc.?
codingbear today at 6:45 PM
Do they mention the hardware used for training? Last I heard there was a push to use Chinese silicon; no idea how ready it is for that.
benbojangles today at 6:43 PM
Was using Ollama, but qwen3.5 was unavailable earlier today.
XCSme today at 5:08 PM
I just started creating my own benchmarks (questions that are very simple for humans but tricky for AI, like the 'how many r's in strawberry' kind; still WIP).
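Roughly what the harness looks like so far, in case anyone wants to do the same thing locally. This is a sketch against a local Ollama server; the model tag is just a placeholder for whatever you have pulled:

    import requests

    MODEL = "qwen3.5"  # placeholder tag; use whatever model you actually have pulled
    CASES = [
        ("How many r's are in the word 'strawberry'? Answer with a number only.", "3"),
        ("Which month comes immediately before March? Answer with one word.", "february"),
    ]

    def ask(prompt: str) -> str:
        # Ollama's /api/generate endpoint returns the full completion when stream=False.
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": MODEL, "prompt": prompt, "stream": False},
            timeout=120,
        )
        return r.json()["response"]

    correct = 0
    for prompt, expected in CASES:
        answer = ask(prompt).strip().lower()
        ok = expected in answer
        correct += ok
        print(f"{'PASS' if ok else 'FAIL'}  {prompt!r} -> {answer!r}")
    print(f"{correct}/{len(CASES)} correct")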
Anyone else getting an automatically downloaded PDF 'ai report' when clicking on this link?
It's damn annoying!
collinwilkins today at 4:54 PM
At this point it seems every new model scores within a few points of the others on SWE-bench. The actual differentiator is how well it handles multi-step tool use without losing the plot halfway through, and how well it works with an existing stack.
XCSme today at 3:28 PM
Let's see what Grok 4.20 looks like. Not open-weight, but so far one of the high-end models at really good rates.
isusmelj today at 12:32 PM
Is it just me, or is the page barely readable? Lots of text is light grey on a white background. I might have "dark" mode on in Chrome + macOS.
ddtaylor today at 12:54 PM
Does anyone know the SWE-bench scores?
Western0 today at 5:30 PM
Can anyone tell me how to generate sound from text locally?
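Is something like this the right direction? Just a sketch pieced together from the transformers text-to-speech pipeline docs; the checkpoint is one example of a small open model, not something from this release:

    import scipy.io.wavfile
    from transformers import pipeline

    # Fully local text-to-speech: small open checkpoint, no API calls.
    tts = pipeline("text-to-speech", model="suno/bark-small")

    out = tts("Hello, this audio was generated entirely on my own machine.")
    scipy.io.wavfile.write(
        "speech.wav",
        rate=out["sampling_rate"],
        data=out["audio"].squeeze(),  # flatten to 1-D samples for the WAV writer
    )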
lollobomb today at 1:23 PM
Yes, but does it answer questions about Tiananmen Square?