
Towards Total Control in AI Video Generation


Video foundation models such as Hunyuan and Wan 2.1, while powerful, do not offer users the kind of granular control that film and TV production (particularly VFX production) demands.

In professional visual effects studios, open-source models like these, along with earlier image-based (rather than video) models such as Stable Diffusion, Kandinsky and Flux, are typically used alongside a range of supporting tools that adapt their raw output to meet specific creative needs. When a director says, “That looks great, but can we make it a little more [n]?” you can’t respond by saying the model isn’t precise enough to handle such requests.

Instead, an AI VFX team will use a range of traditional CGI and compositing techniques, combined with custom procedures and workflows developed over time, to push the limits of video synthesis a little further.

So, by analogy, a foundation video model is much like a default installation of a web browser such as Chrome: it does a lot out of the box, but if you want it to adapt to your needs, rather than vice versa, you're going to need some plugins.

Control Freaks

In the world of diffusion-based image synthesis, the most important such third-party system is ControlNet.

ControlNet is a technique for adding structured control to diffusion-based generative models, allowing users to guide image or video generation with additional inputs such as edge maps, depth maps, or pose information.

ControlNet’s various methods allow for depth-to-image (top row), semantic segmentation-to-image (lower left) and pose-guided image generation of humans and animals (lower right).

Instead of relying solely on text prompts, ControlNet introduces separate neural network branches, or adapters, that process these conditioning signals while preserving the base model’s generative capabilities.

This enables fine-tuned outputs that adhere more closely to user specifications, making it particularly useful in applications where precise composition, structure, or motion control is required:

With a guiding pose, a variety of accurate output types can be obtained via ControlNet. Source: https://arxiv.org/pdf/2302.05543
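For readers unfamiliar with the mechanism, the core trick is simple enough to sketch. The following is a heavily simplified, hypothetical PyTorch rendering of the ControlNet-style adapter idea (not the official implementation): a trainable copy of part of the frozen base network processes the encoded control signal, and its output is merged back into the base path through zero-initialized convolutions, so the adapter changes nothing until training moves those weights away from zero.

```python
import copy
import torch.nn as nn

class ZeroConv(nn.Module):
    """1x1 convolution initialized to zero, so the branch contributes nothing at the start of training."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x):
        return self.conv(x)

class ControlNetStyleAdapter(nn.Module):
    """Toy ControlNet-style branch: a trainable copy of a base encoder block,
    fed with an encoded control signal (edges, depth, pose) and merged back
    into the frozen base path through zero-initialized convolutions."""
    def __init__(self, base_block: nn.Module, channels: int):
        super().__init__()
        self.control_block = copy.deepcopy(base_block)  # trainable copy of the frozen block
        self.zero_in = ZeroConv(channels)
        self.zero_out = ZeroConv(channels)

    def forward(self, base_features, control_features):
        # base_features: activations from the frozen model
        # control_features: the conditioning signal, already encoded to the same shape
        h = self.control_block(base_features + self.zero_in(control_features))
        return base_features + self.zero_out(h)  # residual injection into the base path
```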

However, adapter-based frameworks of this kind operate externally on a set of neural processes that are very internally-focused. These approaches have several drawbacks.

First, adapters are trained independently, leading to branch conflicts when multiple adapters are combined, which can degrade generation quality.

Second, they introduce parameter redundancy, requiring extra computation and memory for each adapter, which makes scaling inefficient.

Third, despite their flexibility, adapters often produce sub-optimal results compared to models that are fully fine-tuned for multi-condition generation. These issues make adapter-based methods less effective for tasks requiring seamless integration of multiple control signals.

Ideally, the capacities of ControlNet would be trained natively into the model, in a modular way that could accommodate later, much-anticipated innovations such as simultaneous video/audio generation, or native lip-sync capabilities (for external audio).

As it stands, every extra piece of functionality represents either a post-production task or a non-native procedure that has to navigate the tightly-bound and sensitive weights of whichever foundation model it’s operating on.

FullDiT

Into this standoff comes a new offering from China that proposes a system in which ControlNet-style measures are baked directly into a generative video model at training time, instead of being relegated to an afterthought.

From the new paper: the FullDiT approach can incorporate identity imposition, depth and camera movement into a native generation, and can summon up any combination of these at once. Source: https://arxiv.org/pdf/2503.19907

Titled FullDiT, the new approach fuses multi-task conditions such as identity transfer, depth-mapping and camera movement into an integrated part of a trained generative video model; the authors have produced a prototype trained model, with accompanying video clips available at a project site.

In the example below, we see generations that incorporate camera movement, identity information and text information (i.e., guiding user text prompts):

Click to play. Examples of ControlNet-style user imposition with only a native trained foundation model. Source: https://fulldit.github.io/

It should be noted that the authors do not propose their experimental trained model as a functional foundation model, but rather as a proof-of-concept for native text-to-video (T2V) and image-to-video (I2V) models that offer users more control than just an image prompt or a text prompt.

Since there are no similar models of this kind yet, the researchers created a new benchmark titled FullBench, for the evaluation of multi-task videos, and claim state-of-the-art performance in the like-for-like tests they devised against prior approaches. However, since FullBench was designed by the authors themselves, its objectivity is untested, and its dataset of 1,400 cases may be too limited for broader conclusions.

Perhaps the most interesting aspect of the architecture the paper puts forward is its potential to incorporate new types of control. The authors state:

‘In this work, we only explore control conditions of the camera, identities, and depth information. We did not further investigate other conditions and modalities such as audio, speech, point cloud, object bounding boxes, optical flow, etc. Although the design of FullDiT can seamlessly integrate other modalities with minimal architecture modification, how to quickly and cost-effectively adapt existing models to new conditions and modalities is still an important question that warrants further exploration.’

Though the researchers present FullDiT as a step forward in multi-task video generation, it should be considered that this new work builds on existing architectures rather than introducing a fundamentally new paradigm.

Nonetheless, FullDiT currently stands alone (to the best of my knowledge) as a video foundation model with ‘hard coded’ ControlNet-style facilities – and it’s good to see that the proposed architecture can accommodate later innovations too.

Click to play. Examples of user-controlled camera moves, from the project site.

The new paper is titled FullDiT: Multi-Task Video Generative Foundation Model with Full Attention, and comes from nine researchers across Kuaishou Technology and The Chinese University of Hong Kong. The project page is here and the new benchmark data is at Hugging Face.

Method

The authors contend that FullDiT’s unified attention mechanism enables stronger cross-modal representation learning by capturing both spatial and temporal relationships across conditions:

According to the new paper, FullDiT integrates multiple input conditions through full self-attention, converting them into a unified sequence. By contrast, adapter-based models (leftmost above) use separate modules for each input, leading to redundancy, conflicts, and weaker performance.

Unlike adapter-based setups that process each input stream separately, this shared attention structure avoids branch conflicts and reduces parameter overhead. They also claim that the architecture can scale to new input types without major redesign – and that the model schema shows signs of generalizing to condition combinations not seen during training, such as linking camera motion with character identity.

Click to play. Examples of identity generation from the project site.

In FullDiT’s architecture, all conditioning inputs – such as text, camera motion, identity, and depth – are first converted into a unified token format. These tokens are then concatenated into a single long sequence, which is processed through a stack of transformer layers using full self-attention. This approach follows prior works such as Open-Sora Plan and Movie Gen.

This design allows the model to learn temporal and spatial relationships jointly across all conditions. Each transformer block operates over the entire sequence, enabling dynamic interactions between modalities without relying on separate modules for each input – and, as we have noted, the architecture is designed to be extensible, making it much easier to incorporate additional control signals in the future, without major structural changes.
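As an illustration of that design (assuming a generic transformer and invented token shapes, not FullDiT's actual code), the sketch below concatenates condition tokens of every modality with the video-latent tokens into a single sequence and runs plain full self-attention over it, with no per-modality adapter branches:

```python
import torch
import torch.nn as nn

class FullAttentionBlock(nn.Module):
    """A plain transformer block; every token attends to every other token."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

# Hypothetical token tensors, shape (batch, num_tokens, dim)
text_tok  = torch.randn(1, 32, 512)   # text condition
cam_tok   = torch.randn(1, 20, 512)   # per-frame camera embeddings
id_tok    = torch.randn(1, 48, 512)   # identity-map patch embeddings
depth_tok = torch.randn(1, 84, 512)   # 3D depth-patch embeddings
video_tok = torch.randn(1, 308, 512)  # noised video latent tokens

# One unified sequence; a single stack of blocks handles all modalities jointly.
seq = torch.cat([text_tok, cam_tok, id_tok, depth_tok, video_tok], dim=1)
blocks = nn.Sequential(*[FullAttentionBlock() for _ in range(4)])
out = blocks(seq)
```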

The Power of Three

FullDiT converts each control signal into a standardized token format so that all conditions can be processed together in a unified attention framework. For camera motion, the model encodes a sequence of extrinsic parameters – such as position and orientation – for each frame. These parameters are timestamped and projected into embedding vectors that reflect the temporal nature of the signal.

Identity information is treated differently, since it is inherently spatial rather than temporal. The model uses identity maps that indicate which characters are present in which parts of each frame. These maps are divided into patches, with each patch projected into an embedding that captures spatial identity cues, allowing the model to associate specific regions of the frame with specific entities.

Depth is a spatiotemporal signal, and the model handles it by dividing depth videos into 3D patches that span both space and time. These patches are then embedded in a way that preserves their structure across frames.

Once embedded, all of these condition tokens (camera, identity, and depth) are concatenated into a single long sequence, allowing FullDiT to process them together using full self-attention. This shared representation makes it possible for the model to learn interactions across modalities and across time without relying on isolated processing streams.
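To make the three preceding paragraphs a little more concrete, here is a hedged sketch of the tokenization step; the dimensions, patch sizes and projection layers are my own illustrative choices rather than the paper's:

```python
import torch
import torch.nn as nn

dim = 512  # shared token width (illustrative)

# Camera: one extrinsic vector per frame (e.g. 12 values of a 3x4 [R|t] matrix),
# projected frame-by-frame into embeddings that keep the temporal ordering.
camera_proj = nn.Linear(12, dim)
camera = torch.randn(1, 20, 12)        # (batch, frames, extrinsic params)
camera_tokens = camera_proj(camera)    # (1, 20, dim)

# Identity: spatial maps marking which character occupies which region,
# split into 2D patches and projected patch-by-patch.
id_patch = nn.Conv2d(in_channels=4, out_channels=dim, kernel_size=16, stride=16)
identity_maps = torch.randn(1, 4, 64, 112)                             # (batch, identity channels, H, W)
identity_tokens = id_patch(identity_maps).flatten(2).transpose(1, 2)   # (1, patches, dim)

# Depth: a depth video cut into 3D patches spanning both space and time.
depth_patch = nn.Conv3d(in_channels=1, out_channels=dim,
                        kernel_size=(4, 16, 16), stride=(4, 16, 16))
depth_video = torch.randn(1, 1, 21, 64, 112)                           # (batch, 1, frames, H, W)
depth_tokens = depth_patch(depth_video).flatten(2).transpose(1, 2)     # (1, patches, dim)

# All conditions end up as (batch, n_tokens, dim) and can be concatenated.
condition_seq = torch.cat([camera_tokens, identity_tokens, depth_tokens], dim=1)
```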

Data and Tests

FullDiT’s training approach relied on selectively annotated datasets tailored to each conditioning type, rather than requiring all conditions to be present simultaneously.

For textual conditions, the initiative follows the structured captioning approach outlined in the MiraData project.

Video collection and annotation pipeline from the MiraData project. Source: https://arxiv.org/pdf/2407.06358

For camera motion, the RealEstate10K dataset was the main data source, due to its high-quality ground-truth annotations of camera parameters.

However, the authors observed that training exclusively on static-scene camera datasets such as RealEstate10K tended to reduce dynamic object and human movements in generated videos. To counteract this, they conducted additional fine-tuning using internal datasets that included more dynamic camera motions.

Identity annotations were generated using the pipeline developed for the ConceptMaster project, which allowed efficient filtering and extraction of fine-grained identity information.

The ConceptMaster framework is designed to address identity decoupling issues while preserving concept fidelity in customized videos. Source: https://arxiv.org/pdf/2501.04698

Depth annotations were obtained from the Panda-70M dataset using Depth Anything.

Optimization Through Data-Ordering

The authors also implemented a progressive training schedule, introducing more challenging conditions earlier in training to ensure the model acquired robust representations before simpler tasks were added. The training order proceeded from text to camera conditions, then identities, and finally depth, with easier tasks generally introduced later and with fewer examples.

The authors emphasize the value of ordering the workload in this way:

‘During the pre-training phase, we noted that more challenging tasks demand extended training time and should be introduced earlier in the learning process. These challenging tasks involve complex data distributions that differ significantly from the output video, requiring the model to possess sufficient capacity to accurately capture and represent them.

‘Conversely, introducing easier tasks too early may lead the model to prioritize learning them first, since they provide more immediate optimization feedback, which hinder the convergence of more challenging tasks.’

An illustration of the data training order adopted by the researchers, with red indicating greater data volume.
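A minimal sketch of how such an ordering could be expressed in a training script follows; the stage boundaries and data fractions are invented for illustration and are not taken from the paper:

```python
# Hypothetical curriculum mirroring the described ordering: harder conditions
# enter training earlier and see more data; easier ones are appended later
# with fewer examples.
curriculum = [
    {"conditions": ["text"],                                "relative_data": 1.00},
    {"conditions": ["text", "camera"],                      "relative_data": 0.60},
    {"conditions": ["text", "camera", "identity"],          "relative_data": 0.30},
    {"conditions": ["text", "camera", "identity", "depth"], "relative_data": 0.15},
]

for stage_idx, stage in enumerate(curriculum, start=1):
    # build_mixture()/train_stage() would live in the real pipeline; here we only report the plan
    print(f"stage {stage_idx}: {stage['conditions']} (data fraction {stage['relative_data']})")
```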

After initial pre-training, a final fine-tuning stage further refined the model to improve visual quality and motion dynamics. Thereafter the training followed that of a standard diffusion framework*: noise added to video latents, and the model learning to predict and remove it, using the embedded condition tokens as guidance.
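In outline, one training step of such a framework looks like the sketch below, which shows a generic epsilon-prediction objective under my own assumptions (noise schedule, latent shapes) rather than the authors' code:

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(model, video_latents, condition_tokens, num_timesteps=1000):
    """Generic epsilon-prediction objective for a conditioned video diffusion model.
    video_latents is assumed to be 5D: (batch, channels, frames, H, W)."""
    b = video_latents.shape[0]
    t = torch.randint(0, num_timesteps, (b,), device=video_latents.device)

    # A simple linear beta schedule (illustrative; real schedules vary).
    betas = torch.linspace(1e-4, 0.02, num_timesteps, device=video_latents.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1, 1, 1)

    noise = torch.randn_like(video_latents)
    noisy = alpha_bar.sqrt() * video_latents + (1 - alpha_bar).sqrt() * noise

    # The model sees the noisy latents, the timestep, and the unified condition tokens.
    pred_noise = model(noisy, t, condition_tokens)
    return F.mse_loss(pred_noise, noise)
```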

To effectively evaluate FullDiT and provide a fair comparison against existing methods, and in the absence of any other apposite benchmark, the authors introduced FullBench, a curated benchmark suite consisting of 1,400 distinct test cases.

A data explorer instance for the new FullBench benchmark. Source: https://huggingface.co/datasets/KwaiVGI/FullBench

Each data point provided ground truth annotations for various conditioning signals, including camera motion, identity, and depth.

Metrics

The authors evaluated FullDiT using ten metrics covering five main aspects of performance: text alignment, camera control, identity similarity, depth accuracy, and general video quality.

Text alignment was measured using CLIP similarity, while camera control was assessed through rotation error (RotErr), translation error (TransErr), and camera motion consistency (CamMC), following the approach of CamI2V (in the CameraCtrl project).

Identity similarity was evaluated using DINO-I and CLIP-I, and depth control accuracy was quantified using Mean Absolute Error (MAE).

Video quality was judged with three metrics from MiraData: frame-level CLIP similarity for smoothness; optical flow-based motion distance for dynamics; and LAION-Aesthetic scores for visual appeal.
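Two of the simpler measures are easy to illustrate. The sketch below approximates the described smoothness metric (frame-to-frame CLIP similarity) and the depth MAE; it is my own rendering of these measures, not the authors' evaluation code:

```python
import torch
import torch.nn.functional as F

def clip_smoothness(frame_embeddings: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity between CLIP embeddings of adjacent frames.
    frame_embeddings: (num_frames, embed_dim), e.g. from a CLIP image encoder."""
    a = F.normalize(frame_embeddings[:-1], dim=-1)
    b = F.normalize(frame_embeddings[1:], dim=-1)
    return (a * b).sum(dim=-1).mean()

def depth_mae(pred_depth: torch.Tensor, gt_depth: torch.Tensor) -> torch.Tensor:
    """Mean Absolute Error between depth estimated from the generated video and
    the ground-truth depth, both shaped (frames, H, W) on a comparable scale."""
    return (pred_depth - gt_depth).abs().mean()
```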

Training

The authors trained FullDiT using an internal (undisclosed) text-to-video diffusion model containing roughly one billion parameters. They intentionally chose a modest parameter size to maintain fairness in comparisons with prior methods and ensure reproducibility.

Since training videos differed in length and resolution, the authors standardized each batch by resizing and padding videos to a common resolution, sampling 77 frames per sequence, and applying attention and loss masks to optimize training effectiveness.
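A hedged sketch of that kind of batch preparation follows; the target resolution and the mask convention are illustrative assumptions. Clips shorter than 77 frames are padded, and a boolean mask records which frames are real so that attention and the loss can ignore the padding:

```python
import torch
import torch.nn.functional as F

TARGET_FRAMES = 77
TARGET_H, TARGET_W = 384, 672   # illustrative; matches the stated inference resolution

def standardize_clip(video: torch.Tensor):
    """video: (frames, channels, H, W) with variable length and resolution.
    Returns a (77, C, 384, 672) tensor plus a boolean mask of valid frames."""
    # Resize spatially to the target resolution (frames act as the batch dimension).
    video = F.interpolate(video, size=(TARGET_H, TARGET_W),
                          mode="bilinear", align_corners=False)

    f = video.shape[0]
    if f >= TARGET_FRAMES:
        video = video[:TARGET_FRAMES]
        mask = torch.ones(TARGET_FRAMES, dtype=torch.bool)
    else:
        pad = video.new_zeros(TARGET_FRAMES - f, *video.shape[1:])
        video = torch.cat([video, pad], dim=0)
        mask = torch.cat([torch.ones(f, dtype=torch.bool),
                          torch.zeros(TARGET_FRAMES - f, dtype=torch.bool)])
    return video, mask  # the mask is reused for attention masking and for zeroing the loss
```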

The Adam optimizer was used at a learning rate of 1×10⁻⁵ across a cluster of 64 NVIDIA H800 GPUs, for a combined total of 5,120GB of VRAM (consider that in the enthusiast synthesis communities, 24GB on an RTX 3090 is still considered a luxurious standard).

The model was trained for around 32,000 steps, incorporating up to three identities per video, along with 20 frames of camera conditions and 21 frames of depth conditions, both evenly sampled from the total 77 frames.

For inference, the model generated videos at a resolution of 384×672 pixels (roughly five seconds at 15 frames per second) with 50 diffusion inference steps and a classifier-free guidance scale of five.
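The classifier-free guidance scale of five combines a conditioned and an unconditioned prediction at every denoising step. A minimal sketch of that standard combination (generic CFG, not FullDiT-specific code):

```python
def classifier_free_guidance(model, noisy_latents, t, cond_tokens, null_tokens, scale=5.0):
    """Standard classifier-free guidance: push the conditioned prediction
    away from the unconditioned one by the guidance scale."""
    eps_cond = model(noisy_latents, t, cond_tokens)
    eps_uncond = model(noisy_latents, t, null_tokens)
    return eps_uncond + scale * (eps_cond - eps_uncond)
```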

Prior Methods

For camera-to-video evaluation, the authors compared FullDiT against MotionCtrl, CameraCtrl, and CamI2V, with all models trained using the RealEstate10K dataset to ensure consistency and fairness.

In identity-conditioned generation, since no comparable open-source multi-identity models were available, the model was benchmarked against the 1B-parameter ConceptMaster model, using the same training data and architecture.

For depth-to-video tasks, comparisons were made with Ctrl-Adapter and ControlVideo.

Quantitative results for single-task video generation. FullDiT was compared to MotionCtrl, CameraCtrl, and CamI2V for camera-to-video generation; ConceptMaster (1B parameter version) for identity-to-video; and Ctrl-Adapter and ControlVideo for depth-to-video. All models were evaluated using their default settings. For consistency, 16 frames were uniformly sampled from each method, matching the output length of prior models.

The results indicate that FullDiT, despite handling multiple conditioning signals simultaneously, achieved state-of-the-art performance in metrics related to text, camera motion, identity, and depth controls.

In overall quality metrics, the system generally outperformed other methods, although its smoothness was slightly lower than ConceptMaster’s. Here the authors comment:

‘The smoothness of FullDiT is slightly lower than that of ConceptMaster since the calculation of smoothness is based on CLIP similarity between adjacent frames. As FullDiT exhibits significantly greater dynamics compared to ConceptMaster, the smoothness metric is impacted by the large variations between adjacent frames.

‘For the aesthetic score, since the rating model favors images in painting style and ControlVideo typically generates videos in this style, it achieves a high score in aesthetics.’

Regarding the qualitative comparison, it might be preferable to refer to the sample videos at the FullDiT project site, since the PDF examples are inevitably static (and also too large to entirely reproduce here).

The first section of the qualitative results in the PDF. Please refer to the source paper for the additional examples, which are too extensive to reproduce here.

The authors comment:

‘FullDiT demonstrates superior identity preservation and generates videos with better dynamics and visual quality compared to [ConceptMaster]. Since ConceptMaster and FullDiT are trained on the same backbone, this highlights the effectiveness of condition injection with full attention.

‘…The [other] results demonstrate the superior controllability and generation quality of FullDiT compared to existing depth-to-video and camera-to-video methods.’

A section of the PDF’s examples of FullDiT’s output with multiple signals. Please refer to the source paper and the project site for additional examples.

Conclusion

Though FullDiT is an exciting foray into a more full-featured type of video foundation model, one has to wonder if demand for ControlNet-style instrumentalities will ever justify implementing such features at scale, at least for FOSS projects, which would struggle to obtain the enormous amount of GPU processing power necessary, without commercial backing.

The primary challenge is that using systems such as Depth and Pose generally requires non-trivial familiarity with relatively complex user interfaces such as ComfyUI. Therefore a functional FOSS model of this kind seems most likely to emerge from a cadre of smaller VFX companies that lack the money (or the will, given that such systems are quickly made obsolete by model upgrades) to curate and train such a model behind closed doors.

On the other hand, API-driven ‘rent-an-AI’ systems may be well-motivated to develop simpler and more user-friendly interpretive methods for models into which ancillary control systems have been directly trained.

Click to play. Depth+Text controls imposed on a video generation using FullDiT.

 

* The authors do not specify any known base model (e.g., SDXL).

First published Thursday, March 27, 2025
