🐏 EFM3D: 3D Ego-Foundation 🐏
👉#META presents EFM3D, the first benchmark for 3D object detection and surface regression on high-quality annotated egocentric data from Project Aria. Datasets & code released💙
👉Review https://t.ly/cDJv6
👉Paper arxiv.org/pdf/2406.10224
👉Project www.projectaria.com/datasets/aeo/
👉Repo github.com/facebookresearch/efm3d
🔥 CoTracker3 by #META is out! 🔥
👉#Meta (+VGG Oxford) unveils CoTracker3, a new tracker that outperforms the previous SOTA by a large margin using only 0.1% of the training data 🤯🤯🤯
👉Review https://t.ly/TcRIv
👉Paper arxiv.org/pdf/2410.11831
👉Project cotracker3.github.io/
👉Code github.com/facebookresearch/co-tracker
☀️ Universal Relightable Avatars ☀️
👉#Meta unveils URAvatar, photorealistic & relightable avatars built from a phone scan with unknown illumination. Stunning results!
👉Review https://t.ly/U-ESX
👉Paper arxiv.org/pdf/2410.24223
👉Project junxuan-li.github.io/urgca-website
❤️🔥 Uncommon object in #3D ❤️🔥
👉#META releases uCO3D, a new object-centric dataset for 3D AI: the largest publicly available collection of HD object videos with 3D annotations and full 360° coverage. Code & data under CC BY 4.0💙
👉Review https://t.ly/Z_tvA
👉Paper https://arxiv.org/pdf/2501.07574
👉Project https://uco3d.github.io/
👉Repo github.com/facebookresearch/uco3d
☀️ Relightable Full-Body Avatars ☀️
👉#Meta unveils the first approach to jointly model the relightable appearance of the body, face, and hands of drivable avatars.
👉Review https://t.ly/kx9gf
👉Paper arxiv.org/pdf/2501.14726
👉Project neuralbodies.github.io/RFGCA
🔥 VideoJAM: #META's Video-Model (SOTA) 🔥
👉#META's VideoJAM: the new SOTA (by a large margin) in motion coherence for video generation, well ahead of Sora! It instills a strong motion prior into any video-gen model. Impressive results, but no code announced🥲
👉Review https://shorturl.at/id7Bt
👉Paper https://arxiv.org/pdf/2502.02492
👉Project https://hila-chefer.github.io/videojam-paper.github.io/
🤖 META Human-Robot 🤖
👉#META presents PARTNR, a novel benchmark for Planning And Reasoning Tasks in humaN-Robot collaboration. The largest benchmark of its kind: 100,000+ natural-language tasks spanning 60 houses and 5,819 unique objects. Code & data (🤗) under MIT💙
👉Review https://t.ly/zcN0K
👉Paper arxiv.org/pdf/2411.00081
👉Repo github.com/facebookresearch/partnr-planner
🤗Data huggingface.co/datasets/ai-habitat/partnr_episodes
🖲️ VGG Transformer 🖲️
👉VGGT by VGG & #META (#CVPR2025) is a feed-forward neural network that directly infers all key 3D attributes of a scene within seconds. Code released💙
👉Review https://t.ly/WoWXL
👉Paper https://arxiv.org/pdf/2503.11651
👉Project https://vgg-t.github.io/
👉Code github.com/facebookresearch/vggt
🦖 DINOv3 is out 🦖
👉#Meta unveils DINOv3! A novel foundation model that outperforms the previous SOTA across computer vision tasks. Code & weights released under the DINOv3 License💙
👉Review https://t.ly/-S3ZL
👉Paper https://t.ly/ervOT
👉Project https://lnkd.in/dHFf3esd
👉Repo https://lnkd.in/dPxhDxAq
🤗HF https://lnkd.in/dWGudY2i
🫔ATLAS: SOTA Human Model🫔
👉#META presents ATLAS, a novel high-fidelity body model learned from 600K high-resolution scans captured with 240 synchronized cameras. Code announced, to be released💙
👉Review https://t.ly/0hHud
👉Paper arxiv.org/pdf/2508.15767
👉Project jindapark.github.io/projects/atlas/
👉Repo TBA
❤️🔥PHD: Personalized 3D Humans❤️🔥
👉ETH & #Meta unveil PHD, a novel approach to personalized 3D human mesh recovery (HMR) and body fitting that leverages user-specific shape information. Code & models to be released💙
👉Review https://t.ly/IeRhH
👉Paper https://arxiv.org/pdf/2508.21257
👉Project https://phd-pose.github.io/
👉Repo TBA