Really interesting release. Based on tokenizer behavior, instruction-restraint tests, and explicit reasoning-mode separation, this looks very much like a quantized Qwen 3-Instruct long-context deployment rather than Qwen 2.5. The “slow but deep” profile fits the Giga Potato name perfectly. Curious to see how far you push Qwen 3 once MoE costs settle.
Begun, the produce model wars have.
Pretty good model for being free. It's slower than Grok Code Fast 1, but I did prefer the outputs!
Sounds great! Let's eat some potatoes.
Cool
Thank you
Hi! Who is hosting this model? I mean, where is it running? What is the policy regarding data protection? Thanks!