Artificially Inspired, Week #2: Open Source Meets the Big Players
We ran the new open-source Wan 2.5 head-to-head with the big players, and it nearly kept up! The demo can be viewed in the original LinkedIn post.
Most AI tools today are locked behind paid APIs. You send a request, they do the work on their servers, and they send the result back. Quick and easy, but not always flexible.
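For context, the hosted pattern looks roughly like this: a minimal Python sketch against a hypothetical endpoint. The URL, field names, and key are placeholders for illustration, not any specific vendor's real API.

```python
import requests

# Hypothetical hosted video-generation endpoint; names are illustrative.
API_URL = "https://api.example.com/v1/video/generate"
API_KEY = "sk-..."  # placeholder key

def generate_video(prompt: str) -> bytes:
    """Send a prompt to a hosted model and return the rendered video bytes."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "duration_seconds": 5},
        timeout=300,
    )
    resp.raise_for_status()
    # The server did all the work; we only ever see the finished result.
    return resp.content

if __name__ == "__main__":
    video = generate_video("leaves shimmering in afternoon light")
    with open("clip.mp4", "wb") as f:
        f.write(video)
```

Convenient, but everything between the request and the response is a black box, which is exactly the flexibility trade-off we mean.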
Open-source models like Wan 2.5 are different. They’re free to use, customisable, and can run locally (if you have the hardware). That means more control, more transparency, and no subscription required.
We’ve been testing Wan 2.5’s preview model in ComfyUI, pushing it to tackle some of the hardest things in AI video: the shimmer of leaves, shifting light, natural camera motion, motion blur, and even a face turning mid-shot.
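If you want to script runs rather than click through the UI, ComfyUI exposes a local HTTP endpoint for queueing workflows. Here is a minimal sketch, assuming a ComfyUI server on its default port (8188) and a workflow graph exported from the UI via "Save (API Format)"; the file name and the node id are illustrative and depend on your own export.

```python
import json
import requests

# Queue a Wan 2.5 workflow on a local ComfyUI server.
COMFY_URL = "http://127.0.0.1:8188/prompt"  # ComfyUI's default address

# A graph previously exported from the ComfyUI menu via "Save (API Format)".
with open("wan_workflow.json") as f:
    workflow = json.load(f)

# Tweak the positive-prompt node before queueing. The node id ("6")
# is hypothetical; look it up in your own exported JSON.
workflow["6"]["inputs"]["text"] = "a face turning mid-shot, natural motion blur"

resp = requests.post(COMFY_URL, json={"prompt": workflow}, timeout=30)
resp.raise_for_status()
print("Queued:", resp.json()["prompt_id"])
```

Once queued, the render runs entirely on your own hardware, which is what makes the comparisons below possible.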
Compared with the others, Kling still has the edge on prompt adherence, and Veo 3.1 feels more refined in tone and style. But Wan comes surprisingly close, and the fact that it's open source makes it a strong favourite for us.
Another unexpected bonus: when run locally, we can monitor energy use in real time, a meaningful win for teams tracking sustainability goals.
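The logging itself can be as simple as polling the GPU's reported power draw while a render runs. A minimal sketch, assuming an NVIDIA card with nvidia-smi on the PATH (power.draw is reported in watts):

```python
import subprocess
import time

def gpu_power_watts() -> float:
    """Read the current power draw of the first GPU, in watts."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.strip().splitlines()[0])  # first GPU only

samples = []
try:
    while True:
        samples.append(gpu_power_watts())
        time.sleep(1)  # one sample per second
except KeyboardInterrupt:
    # Each sample covers ~1 s, so energy in watt-hours is roughly sum(W*s) / 3600.
    wh = sum(samples) / 3600
    print(f"~{wh:.2f} Wh over {len(samples)} s")
```

Start it before a generation, stop it with Ctrl-C afterwards, and you get a rough per-clip energy figure, something the hosted APIs simply don't expose.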
We’d have loved to include Sora, but access remains limited... maybe next time.
Every week we push these models a little harder, and every week we see new possibilities for what AI video can do.
Tried it yet? If not, use our tips and go have a crack at it.