Qwen3.5: Towards Native Multimodal Agents

352 points - today at 9:32 AM


nl today at 11:00 PM
"the post-training performance gains in Qwen3.5 primarily stem from our extensive scaling of virtually all RL tasks and environments we could conceive."

I don't think anyone is surprised by this, but I think it's interesting that you still see people who claim the training objective of LLMs is next token prediction.

The "Average Ranking vs Environment Scaling" graph below that is pretty confusing though! Took me a while to realize the Qwen points near the Y-axis were for Qwen 3, not Qwen 3.5.

dash2 today at 1:11 PM
You'll be pleased to know that it chooses "drive the car to the wash" on today's latest embarrassing LLM question.
danielhanchen today at 9:40 AM
For those interested, made some MXFP4 GGUFs at https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF and a guide to run them: https://unsloth.ai/docs/models/qwen3.5
simonw today at 12:58 PM
tarruda today at 1:18 PM
Would love to see a Qwen 3.5 release in the 80-110B range, which would be perfect for 128GB devices. While Qwen3-Next is 80B, it unfortunately doesn't have a vision encoder.
gunalx today at 12:59 PM
Sad to not see smaller distills of this model released alongside the flagship. That has historically been why I liked Qwen releases (lots of different sizes to pick from, from day one).
bertili today at 12:36 PM
Last Chinese New Year we would not have predicted a Sonnet 4.5-level model that runs local and fast on a 2026 M5 Max MacBook Pro, but it's now a real possibility.
vessenes today at 2:41 PM
Great benchmarks. Qwen is a highly capable open model, especially their visual series, so this is great to see.

Interesting rabbit hole for me: the AI report it generated mentions Fennec (Sonnet 5) releasing Feb 4. I was like "No, I don't think so", then I did a lot of googling and learned that this is a common misperception amongst AI-driven news tools. Looks like there was a leak, rumors, a planned(?) launch date, and... it all adds up to a confident launch summary.

What's interesting about this is I'd missed all the rumors, so we had a sort of useful hallucination. Notable.

mynti today at 11:27 AM
Does anyone know what kind of RL environments they are talking about? They mention they used 15k environments. I can think of maybe a couple hundred that make sense to me, but what fills out a number that large?
azinman2 today at 4:50 PM
Does anyone else have trouble loading from the qwen blogs? I always get their placeholders for loading and nothing ever comes in. I don’t know if this is ad blocker related or what… (I’ve even disabled it but it still won’t load)
ranguna today at 3:23 PM
Already on open router, prices seem quite nice.

https://openrouter.ai/qwen/qwen3.5-plus-02-15

fdefitte today at 8:15 PM
The "native multimodal agents" framing is interesting. Everyone's focused on benchmark numbers but the real question is whether these models can actually hold context across multi-step tool use without losing the plot. That's where most open models still fall apart imo.
ggcr today at 9:52 AM
From the HuggingFace model card [1] they state:

> "In particular, Qwen3.5-Plus is the hosted version corresponding to Qwen3.5-397B-A17B with more production features, e.g., 1M context length by default, official built-in tools, and adaptive tool use."

Does anyone know more about this? The OSS version seems to have a 262144 context length; I guess for the 1M they'll ask you to use YaRN?

[1] https://huggingface.co/Qwen/Qwen3.5-397B-A17B
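If it works like previous Qwen long-context releases, extension past the native window is enabled by adding a YaRN `rope_scaling` entry to the model's `config.json`. A hedged sketch, assuming the same pattern carries over to Qwen3.5 (the exact field values here are my guess, not from the model card; a factor of 4.0 over a 262144 native window would give roughly 1M):

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 262144
  }
}
```

Note that with static YaRN the scaling is applied regardless of input length, which is why model cards usually recommend enabling it only when you actually need the longer context.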

deleted today at 7:01 PM
Alifatisk today at 12:42 PM
Wow, the Qwen team is pushing out content (models + research + blog posts) at an incredible rate! Looks like omni-modality is their focus? The benchmarks look intriguing, but I can't stop thinking of the HN comments about Qwen being known for benchmaxing.
sasidhar92 today at 4:34 PM
Going by this pace, I am more bullish that the capabilities of Opus 4.6 or the latest GPT will be available on a 24GB Mac.
Matl today at 1:58 PM
Is it just me or are the 'open source' models increasingly impractical to run on anything other than massive cloud infra at which point you may as well go with the frontier models from Google, Anthropic, OpenAI etc.?
codingbear today at 6:45 PM
Do they mention the hardware used for training? Last I heard there was a push to use Chinese silicon. No idea how ready it is for use
benbojangles today at 6:43 PM
Was using Ollama, but Qwen3.5 was unavailable earlier today.
XCSme today at 5:08 PM
I just started creating my own benchmarks (questions that are very simple for humans but tricky for AI, like the "how many r's in strawberry" kind; still a WIP).

Qwen3.5 is doing ok on my limited tests: https://aibenchy.com
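The kind of benchmark item described above can be sketched in a few lines. This is a hypothetical harness of my own, not how aibenchy is actually implemented:

```python
# Minimal sketch of a letter-counting benchmark item: trivial for humans,
# but tokenization makes it surprisingly hard for LLMs.

def expected_answer(word: str, letter: str) -> int:
    """Ground truth: count occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

def grade(model_reply: str, word: str, letter: str) -> bool:
    """Pass if the model's free-text reply contains the correct count."""
    return str(expected_answer(word, letter)) in model_reply

# Example item: "strawberry" contains three r's.
print(expected_answer("strawberry", "r"))  # 3
print(grade("I count 3 r's in strawberry.", "strawberry", "r"))  # True
```

Substring grading like this is crude (it would accept "3" appearing anywhere in the reply), but it is enough to catch the classic failure mode where a model confidently answers "2".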

trebligdivad today at 12:50 PM
Anyone else getting an automatically downloaded PDF 'ai report' when clicking on this link? It's damn annoying!
collinwilkins today at 4:54 PM
At this point it seems every new model scores within a few points of the others on SWE-bench. The actual differentiator is how well it handles multi-step tool use without losing the plot halfway through, and how well it works with an existing stack.
XCSme today at 3:28 PM
Let's see what Grok 4.20 looks like. Not open-weight, but so far one of the high-end models at really good rates.
isusmelj today at 12:32 PM
Is it just me or is the page barely readable? Lots of text is light grey on a white background. I might have dark mode on in Chrome + macOS.
deleted today at 5:44 PM
deleted today at 5:44 PM
deleted today at 12:33 PM
ddtaylor today at 12:54 PM
Does anyone know the SWE bench scores?
Western0 today at 5:30 PM
Can anyone tell me how to generate sound from text locally?
lollobomb today at 1:23 PM
Yes, but does it answer questions about Tiananmen Square?