💃Outfit Anyone: Ultra-HQ VTO💃
👉Alibaba unveils Outfit Anyone: a two-stream conditional diffusion model that adeptly handles garment deformation for more lifelike virtual try-on (VTO) results. Extra: Outfit Anyone + Animate Anyone for outfit + motion generation of any character. NO CODE / NO PAPER / DEMO AVAILABLE :)
👉Review https://t.ly/o6UR9
👉Demo https://lnkd.in/dpQYdXhc
👉Repo (empty) https://lnkd.in/dBsNST6r
🔥 #AIwithPapers: we are 8k+ 🔥
👉 After flirting with #ChatGPT for months, you're back in love with this channel. I felt bad, but I forgive you 🧡
😈 Hey Telegram Premium Subscribers, what about boosting us? Click: https://xn--r1a.website/AI_DeepLearning?boost
😈 Invite -> https://xn--r1a.website/AI_DeepLearning
🧊 Depth Conditioning 🧊
👉LooseControl: generalized depth conditioning to steer the generative image modeling process. Layout control via scene boundaries and #3D box control via object locations (approximate bounding boxes). Toy conditioning sketch after the links.
👉Review https://t.ly/9y72m
👉Paper https://arxiv.org/pdf/2312.03079.pdf
👉Project https://shariqfarooq123.github.io/loose-control/
👉Repo https://github.com/shariqfarooq123/LooseControl
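👉A rough idea of the box-style conditioning, sketched with the stock depth ControlNet from diffusers as a stand-in. LooseControl fine-tunes its own checkpoint to tolerate loose depth, so the box-to-depth rasterizer and the example layout below are illustrative assumptions, not the authors' pipeline:
```python
# Hypothetical sketch: box-style depth conditioning with a stock depth
# ControlNet as a stand-in (LooseControl fine-tunes its own checkpoint).
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

def boxes_to_depth(boxes, size=(512, 512)):
    """Rasterize (x0, y0, x1, y1, depth) boxes into a coarse depth map;
    depth in [0, 1], nearer = brighter, matching the depth-map convention."""
    depth = np.zeros(size, dtype=np.float32)
    for x0, y0, x1, y1, d in boxes:
        depth[y0:y1, x0:x1] = np.maximum(depth[y0:y1, x0:x1], d)
    return Image.fromarray((depth * 255).astype(np.uint8)).convert("RGB")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# Two approximate boxes: a near sofa, a far cabinet (made-up layout).
layout = boxes_to_depth([(60, 300, 300, 480, 0.9), (340, 200, 480, 400, 0.4)])
image = pipe("a cozy living room", image=layout,
             num_inference_steps=30).images[0]
image.save("layout_conditioned.png")
```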
🖲️ Amodal Tracking Any Object 🖲️
👉"Amodal tracking": inferring complete object boundaries, even when certain portions are occluded. New benchmark & approach, 2x better than SOTA in people tracking 🔥
👉Review https://t.ly/Rc6Ku
👉Paper https://lnkd.in/d39rFYT4
👉Project https://lnkd.in/d7bkEcni
👉Repo (empty) https://lnkd.in/dTsNKdfz
🚿 Event-Cam (1000 fps) Hands 🚿
👉Ev2Hands, the first method for the 3D reconstruction of two interacting hands from a single event camera. Code available. A generic event-rasterization sketch follows the links.
👉Review https://t.ly/YpQpX
👉Paper arxiv.org/pdf/2312.14157.pdf
👉Project 4dqv.mpi-inf.mpg.de/Ev2Hands
👉Repo github.com/Chris10M/Ev2Hands
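👉Context for the sketch below: event cameras emit sparse (x, y, t, polarity) tuples rather than frames, so methods in this space first rasterize events into tensors. Ev2Hands defines its own event representation (see the paper); this minimal two-polarity event frame, with a DAVIS346-like resolution, is only a generic illustration:
```python
import numpy as np

def events_to_frame(events, height=260, width=346):
    """Accumulate (x, y, t, p) events into a 2-channel frame, one channel
    per polarity (p in {-1, +1}). Resolution is a DAVIS346-like assumption."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = (events[:, 3] > 0).astype(int)    # 0 = negative, 1 = positive polarity
    np.add.at(frame, (p, y, x), 1.0)      # count events per pixel & polarity
    return frame / max(frame.max(), 1.0)  # roughly fixed input range

# Toy usage: 1,000 random events in a 10 ms window.
rng = np.random.default_rng(0)
ev = np.stack([rng.integers(0, 346, 1000), rng.integers(0, 260, 1000),
               rng.uniform(0, 0.01, 1000), rng.choice([-1, 1], 1000)], axis=1)
print(events_to_frame(ev).shape)  # (2, 260, 346)
```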
🎄UniSDF: Unifying Neural Representations🎄
👉UniSDF: novel general-purpose 3D reconstruction of large, complex scenes with reflections. SOTA on the DTU, Shiny Blender, Mip-NeRF 360 and Ref-NeRF datasets. A toy sketch of the core blending idea follows the links.
👉Review https://t.ly/2QEul
👉Paper https://arxiv.org/pdf/2312.13285.pdf
👉Project https://fangjinhuawang.github.io/UniSDF/
👉Repo: No code :(
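👉As I read the paper, the unifying trick is a learned per-point weight that blends two radiance parameterizations, one conditioned on the viewing direction (good for matte surfaces) and one on the reflected direction (good for reflections). A toy PyTorch sketch of that blend; every module name and size here is invented for illustration:
```python
import torch
import torch.nn as nn

class BlendedRadiance(nn.Module):
    """Toy blend of two radiance branches: one fed the viewing direction,
    one fed the reflected direction, mixed by a learned weight field."""
    def __init__(self, feat_dim=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(feat_dim + 3, 64), nn.ReLU(),
                                 nn.Linear(64, 3), nn.Sigmoid())
        self.cam_branch = mlp()  # handles matte, view-dependent shading
        self.ref_branch = mlp()  # handles mirror-like reflections
        self.weight = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                    nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, feat, view_dir, normal):
        # Reflect the view direction about the surface normal.
        ref_dir = view_dir - 2.0 * (view_dir * normal).sum(-1, keepdim=True) * normal
        c_cam = self.cam_branch(torch.cat([feat, view_dir], dim=-1))
        c_ref = self.ref_branch(torch.cat([feat, ref_dir], dim=-1))
        w = self.weight(feat)  # per-point mixing weight in [0, 1]
        return w * c_ref + (1.0 - w) * c_cam

rgb = BlendedRadiance()(torch.randn(8, 64), torch.randn(8, 3), torch.randn(8, 3))
print(rgb.shape)  # torch.Size([8, 3])
```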
🪮HAAR: Text-Driven Generative Hairstyles🪮
👉 HAAR: new strand-based generative model for #3D human hairstyles driven by textual input.
👉Review https://t.ly/L38iD
👉Project https://haar.is.tue.mpg.de/
👉Paper https://arxiv.org/pdf/2312.11666.pdf
👉Repo coming
🪲UniRef++: Segment Every Reference🪲
👉 UniRef++ is a unified model for referring image segmentation (RIS), few-shot segmentation (FSS), referring video object segmentation (RVOS) & video object segmentation (VOS). Code available!
👉Review https://t.ly/OxtOx
👉Paper https://lnkd.in/eTrmDTK3
👉Repo https://lnkd.in/etfTm4Wq
🈚 Seeing Through Occlusions 🈚
👉Novel neural spline fields (NSF) for seeing through occlusions, suppressing reflections & removing shadows. Toy layer-separation sketch after the links.
👉Review https://t.ly/5jcIG
👉Project https://light.princeton.edu/publication/nsf
👉Paper https://arxiv.org/pdf/2312.14235.pdf
👉Repo https://github.com/princeton-computational-imaging/NSF
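👉The objective behind the sketch below is layer separation: explain the observation as an alpha-composite of a transmission layer and an obstruction layer, fitted jointly. The actual method fits neural spline fields across a burst and resolves the ambiguity with parallax; this single-image toy only illustrates the compositing loss:
```python
import torch

H, W = 64, 64
obs = torch.rand(3, H, W)  # stand-in for an observed frame

# Learnable layers: transmission RGB, obstruction RGB, obstruction alpha.
trans = torch.rand(3, H, W, requires_grad=True)
obst = torch.rand(3, H, W, requires_grad=True)
alpha_logit = torch.zeros(1, H, W, requires_grad=True)

opt = torch.optim.Adam([trans, obst, alpha_logit], lr=1e-2)
for step in range(200):
    a = torch.sigmoid(alpha_logit)
    recon = a * obst + (1 - a) * trans  # alpha compositing
    # Reconstruction loss + sparsity prior keeping the occluder small.
    loss = (recon - obs).pow(2).mean() + 1e-3 * a.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# `trans` is the recovered occlusion-free layer. A single frame is
# ill-posed; the paper resolves it with burst parallax and motion models.
```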
👻 Avatar Behind Occlusions 👻
👉Neural rendering for occluded in-the-wild monocular videos, decoupling scenes into occlusion, human, and background.
👉Review https://t.ly/8q__B
👉Paper https://arxiv.org/pdf/2401.00431.pdf
👉Project https://cs.stanford.edu/~xtiange/projects/wild2avatar
🕍 En3D: Generative 3D Humans 🕍
👉#Alibaba unveils En3D: a zero-shot generative scheme for sculpting HQ 3D human avatars, capable of producing visually realistic, geometrically accurate and content-wise diverse 3D humans without relying on pre-existing 3D or 2D assets.
👉Review https://t.ly/nGmDK
👉Project menyifang.github.io/projects/En3D/index.html
👉Paper https://arxiv.org/pdf/2401.01173.pdf
👉Repo (soon?) https://github.com/menyifang/En3D
🐤 MagicVideo-V2 announced! 🐤
👉#Bytedance announces a novel multi-stage pipeline capable of generating high-aesthetic videos from textual descriptions.
👉Review https://t.ly/zIq4v
👉Project https://lnkd.in/dKUrJPJd
👉Paper https://lnkd.in/dixnN-kU
🔥 #6D Foundation Pose 🔥
👉#Nvidia unveils FoundationPose, a novel (and unified) foundation model for 6D object pose estimation and tracking.
👉Review https://t.ly/HGd4h
👉Project https://lnkd.in/dPcnBKWm
👉Paper https://lnkd.in/dixn_iHZ
👉Code coming 🩷
🃏ReplaceAnything: demo is out!🃏
👉ReplaceAnything: ultra-high-quality content replacement. The ultimate #AI solution for human, clothing & background replacement, reshaping the e-commerce experience for vendors.
👉Review https://t.ly/FMyvf
👉Project https://lnkd.in/dcyZvP2b
👉ModelScope https://lnkd.in/dU4x4nE6
👉Hugging Face https://lnkd.in/dn3uXWgd
👉Report (empty) https://lnkd.in/dcuGXd6c
👉Paper coming?
🥛 Transparent Object Tracking 🥛
👉Trans2k: transparent object tracking dataset of 2,000+ sequences with 100,000+ images, annotated with bounding boxes & segmentation masks.
👉Review https://t.ly/mEI6O
👉Paper https://lnkd.in/dsudY3DB
👉Project https://lnkd.in/d48SSJJ3
👉TOB https://lnkd.in/dykBUNfC
💊💊 AGNOSTIC Object Counting 💊💊
👉PseCo: SAM segments all possible objects as mask proposals & CLIP classifies the proposals to obtain accurate object counts. The new SOTA in both few-shot & zero-shot object counting/detection. A hedged sketch of the two-stage idea follows the links.
👉Review https://t.ly/e4iza
👉Paper https://lnkd.in/dbzMXKWG
👉Repo https://lnkd.in/db9Q9Pse
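👉A heavily simplified sketch of the two-stage recipe, glued together from segment-anything and OpenAI CLIP: SAM proposes class-agnostic masks, CLIP scores each crop against a text prompt, matches are counted. The real PseCo trains point-level proposals and a classifier, so the prompt template and threshold below are assumptions:
```python
# Hedged glue code: SAM proposals + CLIP scoring, not the authors' PseCo.
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

device = "cuda" if torch.cuda.is_available() else "cpu"
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
mask_gen = SamAutomaticMaskGenerator(sam)
clip_model, preprocess = clip.load("ViT-B/32", device=device)

def count_objects(image_path, class_name, threshold=0.25):
    image = np.array(Image.open(image_path).convert("RGB"))
    proposals = mask_gen.generate(image)  # class-agnostic mask proposals
    text = clip.tokenize([f"a photo of a {class_name}"]).to(device)
    count = 0
    with torch.no_grad():
        t = clip_model.encode_text(text)
        t = t / t.norm(dim=-1, keepdim=True)
        for prop in proposals:
            x, y, w, h = map(int, prop["bbox"])  # XYWH box of the proposal
            if w == 0 or h == 0:
                continue
            crop = preprocess(Image.fromarray(image[y:y + h, x:x + w]))
            i = clip_model.encode_image(crop.unsqueeze(0).to(device))
            i = i / i.norm(dim=-1, keepdim=True)
            if (i @ t.T).item() > threshold:  # cosine-similarity match
                count += 1
    return count

print(count_objects("shelf.jpg", "apple"))
```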
💥 Announcing #Py4Ai Conference💥
👉 Super proud to unveil #Py4AI, the newest conference dedicated to exploring the depths of Python & AI. Py4AI is a 1-day free event for Python and Artificial Intelligence developers.
𝐓𝐡𝐞 𝐟𝐢𝐫𝐬𝐭 𝐛𝐚𝐭𝐜𝐡 𝐨𝐟 𝐬𝐩𝐞𝐚𝐤𝐞𝐫𝐬:
🚀Merve Noyan | #HuggingFace 🤗
🚀Gabriele Lombardi | ARGO Vision
🚀Amanda Cercas Curry | Uni. Bocconi
🚀Piero Savastano | Cheshire Cat AI
🚀Francesco Zuppichini | Zurich Insurance
🚀Andrea Palladino, PhD | Sr. Data Scientist
𝐄𝐯𝐞𝐧𝐭 𝐃𝐞𝐭𝐚𝐢𝐥𝐬:
✅16th March 2024…
👉 More: https://www.linkedin.com/posts/visionarynet_py4ai-py4ai-python-activity-7152928716988243968-pOUn?utm_source=share&utm_medium=member_desktop
💃Timeline Text-Driven Humans💃
👉Novel challenge: timeline control for text-driven motion synthesis of 3D Humans.
👉Review https://t.ly/HLm-N
👉Paper https://lnkd.in/esaR_M_9
👉Project https://lnkd.in/epCZDvFW
👉Repo coming
AI with Papers - Artificial Intelligence & Deep Learning
🖲️ Amodal Tracking Any Object 🖲️ 👉"Amodal tracking": inferring complete object boundaries, even when certain portions are occluded. New benchmark & approach, 2x better than SOTA in people tracking 🔥 👉Review https://t.ly/Rc6Ku 👉Paper https://lnkd.in/d39rFYT4…
🔥🔥 Code is out 🔥🔥
Check the comments for the links ;)
🫒 AlphaGeometry: Olympiad-level AI 🫒
👉 Theorem prover for Euclidean plane geometry that sidesteps the need for human demonstrations by synthesizing millions of theorems and proofs across different levels of complexity 🤯
👉Review https://t.ly/2-Z7C
👉Paper https://lnkd.in/g3QkqwCE
👉Blog https://lnkd.in/ge-mpM7q
👉Repo https://lnkd.in/gHjwks_9
🦠 XINC: Pixels to Neurons 🦠
👉eXplaining the Implicit Neural Canvas (XINC), from the University of Maryland, is a unified framework for explaining properties of INRs by examining the strength of each neuron's contribution to each output pixel. A toy last-layer example follows the links.
👉Review https://t.ly/wwAmz
👉Paper arxiv.org/pdf/2401.10217.pdf
👉Project namithap10.github.io/xinc
👉Repo github.com/namithap10/xinc
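👉For an MLP-based INR, one simple notion of a neuron's contribution to a pixel is its activation times its outgoing weight in the output layer: the pixel value is exactly the sum of those products plus a bias. The paper builds much richer contribution maps (including for convolutional INRs); this toy check covers only that last-layer case:
```python
import torch
import torch.nn as nn

# Toy INR: (x, y) -> gray value, one hidden layer (sizes are arbitrary).
hidden = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
head = nn.Linear(16, 1)

xy = torch.tensor([[0.25, 0.75]])  # one query pixel coordinate
act = hidden(xy)                   # hidden activations, shape (1, 16)
pixel = head(act)                  # predicted pixel value

# Neuron i's contribution to this pixel: activation_i * outgoing weight_i.
contrib = act * head.weight        # shape (1, 16)
# Sanity check: contributions + bias reconstruct the pixel exactly.
assert torch.allclose(contrib.sum(-1) + head.bias, pixel.squeeze(-1), atol=1e-6)
print(contrib.squeeze())           # one entry of a per-neuron contribution map
```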