Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7
142 points - today at 5:37 PM
ericpauley today at 6:40 PM
Going to have to disagree on the backup test. Opus's flamingo is actually on the pedals and seat, with functional spokes and a beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.
I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the Pelican.
wood_spirit today at 8:45 PM
Such a disconnect from the minutes I lost today before giving up on Gemini, trying to get it to update a diagram in a slide. The one-shot joke stuff is great, but trying to say "that is close, but just make this small change" seems impossible. It's the gap between toy and tool.
mentalgear today at 7:18 PM
I understand the 'fun factor', but at this point I really wonder what this pelican still proves. I mean, providers certainly could have adapted to it if they wanted, and if you want to test how well a model handles potentially out-of-distribution prompts, it might be more worthwhile to mix different animals with different activities (a whale on a skateboard) than to always use the same one.
jbellis today at 7:35 PM
For coding, qwen 3.6 35b a3b solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same-size qwen 3.5. So it's at best very slightly improved, and not at all in the class of qwen 3.5 27b dense (26 solved), let alone opus 4.6 (95/98 solved).
sailingcode today at 8:44 PM
I'm an iguana and need to wash my bicycle in the carwash. Shall I walk or take the bus?
VHRanger today at 7:52 PM
That's not surprising; in our testing, Opus & Sonnet have been regressing on many non-coding tasks since about the 4.1 release.
lofaszvanitt today at 8:09 PM
That Qwen flamingo on the unicycle is actually quite good. A work of art.
JaggerFoo today at 8:48 PM
FYI, the author is using a 128GB M5 MacBook Pro, per another article of theirs.
comandillos today at 6:50 PM
I've been using Qwen3.5-35B-A3B for a bit via open code and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and in how well it handles the agentic workflow.
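In case it's useful, here's a minimal sketch of what driving a local Qwen checkpoint through MLX directly looks like. It assumes the mlx-lm Python package (pip install mlx-lm); the repo id is a placeholder for whichever community conversion you've actually pulled:

    # Minimal sketch: run a local Qwen checkpoint with mlx-lm on Apple silicon.
    # The repo id below is a placeholder, not a confirmed model name.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Qwen3.5-35B-A3B-4bit")

    # Wrap the prompt in the model's chat template so the instruct model
    # sees a proper conversation turn.
    messages = [{"role": "user", "content": "Generate an SVG of a pelican riding a bicycle."}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    print(generate(model, tokenizer, prompt=prompt, max_tokens=2048, verbose=False))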
aliljet today at 7:52 PM
I'm really curious: what competes with Claude Code for driving a local LLM like Qwen 3.6?
jedisct1 today at 8:29 PM
I'm currently testing Qwen3.6-35B-A3B with https://swival.dev for security reviews.
It's pretty good at finding bugs, but not so good at writing patches to fix them.
throwuxiytayq today at 8:40 PM
I literally cannot believe that people are wasting their time doing this either as a benchmark or for fun. After every single language model release, no less.
19qUq today at 7:40 PM
How about switching to MechaStalin on a tricycle? It gets kind of boring.