• divan 7 days ago
    I'm using Wonder Dynamics for slightly different purposes (capturing and analyzing complex sports movements from video) and I'm deeply impressed by what it is capable of. While it still struggles where state-of-the-art pose estimation and camera motion estimation models struggle, the whole package and implementation are just insanely impressive: from the web UI, which is incredibly fast even when uploading 4K@120fps footage, to the final result, which for me is a Blender file and a clean plate video. Extremely easy to use. A lot of love and care has been put into this product.

    Wonder Animation seems to be a use-case-specific improvement over already impressive capabilities. Normal "Live Action" projects can also detect cuts, but the "Animation" project seems to understand the space across multiple cuts/angles.

  • Abecid 7 days ago
    kudos
  • sech8420 7 days ago
    I'm a bit confused. While the demo looks amazing, I feel it's quite misleading, as is some of the wording they use.

    Is it actually creating the 3D environment and character models, or are those premade, and it's instead handling only character rigging and camera tracking?

    • imaginationra 7 days ago
      You have to provide rigged 3D character models yourself (or use their premade ones). It does camera tracking plus motion matching, or whatever algo/AI fun it uses, to track the biped animation. So yeah, you feed it a video and the 3D models, and it spits out either a video of the composite or the 3D scene for further use/massaging in other applications (rough sketch of the body-tracking half below).
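
      For the curious, the per-frame body-tracking step done with open-source pieces looks roughly like this. This is MediaPipe, not whatever Wonder actually runs (that's proprietary), and the filename is just a placeholder:

          import cv2
          import mediapipe as mp

          # Sketch of the tracking step using MediaPipe's open-source pose
          # model; "performance.mp4" is a stand-in for your uploaded footage.
          cap = cv2.VideoCapture("performance.mp4")
          with mp.solutions.pose.Pose(model_complexity=2) as pose:
              while cap.isOpened():
                  ok, frame = cap.read()
                  if not ok:
                      break
                  # MediaPipe expects RGB; OpenCV reads BGR.
                  results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                  if results.pose_world_landmarks:
                      # 33 3D landmarks per frame; retargeting these onto your
                      # rig is where the real work (and the product) is.
                      print(results.pose_world_landmarks.landmark[0])
          cap.release()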

      btw, animation filmmaker here - I tested a previous version and it was a janky toy that wasn't useful to me. I checked out the new stuff today but didn't get to testing it after reading through the several pages of limitations on the camera work, composition, etc. it can handle. I don't want my cinematography/blocking constrained.

      Nice site design tho (shrug)

      • sech8420 7 days ago
        I appreciate this information. Saves me some time. Thanks
        • Joel_Mckay 7 days ago
          We've looked at a number of FOSS and commercial options for a project recently, and found that, once video occlusions are involved, most options were not much better than https://freemocap.org/.

          However, we did purchase commercial seats for https://faceit-doc.readthedocs.io/en/latest/mocap_general/ for Blender, and have found it workable with the results from the iPhone 3D camera app compared to other options (the 52-marker lip sync, gaze, and blink cycles will still need to be cleaned up to look less glitched in complex lighting; rough smoothing sketch below).
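
          A minimal cleanup sketch, assuming the face mesh is the active object and the ARKit-style shape keys ("jawOpen" etc.) are baked as keyframes - a plain moving average over each shape-key f-curve knocks down the single-frame glitches:

              import bpy

              # Assumes baked shape-key animation on the active object's mesh.
              obj = bpy.context.object
              action = obj.data.shape_keys.animation_data.action

              for fc in action.fcurves:
                  # Only touch shape-key channels, e.g. key_blocks["jawOpen"].value
                  if not fc.data_path.startswith('key_blocks['):
                      continue
                  values = [kp.co[1] for kp in fc.keyframe_points]
                  for i in range(1, len(values) - 1):
                      # Replace each key with the average of itself and its neighbours.
                      fc.keyframe_points[i].co[1] = (values[i - 1] + values[i] + values[i + 1]) / 3.0
                  fc.update()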

          Combined with Auto-Rig Pro in Blender, re-targeting for Unreal Engine with volume-preserving rigs is fairly trivial (these can avoid secondary transforms, so elbows don't fold in weird ways by default the way Makehuman-rigged assets do).

          Best of luck. After dropping/donating a few grand into several dozen add-on projects, we concluded there were still quite a few version-rotted or broken Blender add-ons around that people had zero interest in maintaining (some already made redundant by FOSS work, etc.). However, there were also a few tools that were surprisingly spectacular... you'll still likely need to run both 3.6.x and 4.x... YMMV =3

  • vivzkestrel 7 days ago
    Looks really good. What are some use cases you have in mind outside the movie/animated film industry?
  • BoNour 7 days ago
    Lowkey, if the tech keeps up with the demo, it's a huge W for indie movie makers or amateurs. I used their tech last year and was surprised by how well it worked.
  • _davide_ 7 days ago
    Anyone surprised that Autodesk is citing Blender in a non-negative light?
    • pavlov 7 days ago
      It’s like Microsoft embracing Linux. They’ll come around to it slowly, and in five years there will be an Autodesk-branded Blender.
  • bsenftner 7 days ago
    Welp, looks like one of the entry-level jobs in VFX is now fully automated. The role was called "tracker," and they did camera motion recovery, set/prop placement recovery, and actor match moving - sometimes including facial performances, which have had a lot of automation attention for years.
    • dagmx 7 days ago
      The quality of Wonder Dynamics is still a long way from handling more complex real-world shots or more fine-grained movement.

      It'll certainly help, but the death of manual tracking is greatly exaggerated.

      • bsenftner 7 days ago
        That's good to know. Today, the end of this month, marks 20 years since I last worked as a tracker.
  • doctorpangloss 7 days ago
    It's 2044. You wake up at 8am as a freelance VFX artist in the slums of Miami Islands. You pay 3.0 dogecoin to activate your Internet connectivity and refresh your ACACV degree. There's a 12pm deadline for the 4D VR animations in the New New Yorker's October 18th anniversary celebration of Jeffrey Toobin's Zoom call. An update to Wonder Animation 35 is released, saying "New: Retargeting for testicles." Rejoice! Your brain sheath rewards you with a lower dose of GLP-1 agonists. You spend the time you saved to earn points to claim an airdrop by watching a withered Elon Musk rant about electric boats.
    • serf 6 days ago
      >"New: Retargeting for testicles."

      Someone in a long-past IRC channel once said that all the (visual) detail in a human is in their fingerprints and their ballbag; everything else is generic.

      I don't really do VFX, so I can't speak to the truth of that.

      /s but at least it sounds like we're automating the hard work for our VFX colleagues. /s