Commit

update 3DV

YuliangXiu committed Oct 18, 2023
2 parents 492e78c + babced2 commit 7148939
Showing 2 changed files with 16 additions and 18 deletions.
README.md (6 additions, 8 deletions)

@@ -18,7 +18,7 @@
 <br>
 * Equal contribution
 </p>
-<h2 align="center">arXiv 2023</h2>
+<h2 align="center">3DV 2024</h2>
 <div align="center">
 <video autoplay loop muted src="https://github.com/huangyangyi/TeCH/assets/7944350/f8fc55ed-9cbe-4b5f-bd1d-237396360713" type=video/mp4>
 </video>
@@ -43,12 +43,10 @@ TeCH considers image-based reconstruction as a conditional generation task, taki
 ## Citation
 
 ```bibtex
-@misc{huang2023tech,
-title={TeCH: Text-guided Reconstruction of Lifelike Clothed Humans},
-author={Yangyi Huang and Hongwei Yi and Yuliang Xiu and Tingting Liao and Jiaxiang Tang and Deng Cai and Justus Thies},
-year={2023},
-eprint={2308.08545},
-archivePrefix={arXiv},
-primaryClass={cs.CV}
+@inproceedings{huang2023tech,
+title={{TeCH: Text-guided Reconstruction of Lifelike Clothed Humans}},
+author={Huang, Yangyi and Yi, Hongwei and Xiu, Yuliang and Liao, Tingting and Tang, Jiaxiang and Cai, Deng and Thies, Justus},
+booktitle={International Conference on 3D Vision (3DV)},
+year={2024}
 }
 ```
docs/index.html (10 additions, 10 deletions)

@@ -291,19 +291,19 @@ <h2 class="title is-3 is-centered has-text-centered">
 <img src="static/img/TeCH-Method.png" />
 <div class="content has-text-justified" style="padding-top: 15px">
 <i>TeCH</i> takes an image $\mathcal{I}$ of a human as input. Text
-guidance is constructed through \textbf{(a)} using garment parsing
+guidance is constructed through $\textbf{(a)}$ using garment parsing
 model (Segformer) and VQA model (BLIP) to parse the human attributes
-$A$ with pre-defined problems $Q$, and \textbf{(b)} embedding with
+$A$ with pre-defined problems $Q$, and $\textbf{(b)}$ embedding with
 subject-specific appearance into DreamBooth $\mathcal{D'}$ as unique
 token $[V]$. Next, <i>TeCH</i> represents the 3D clothed human with
-\textbf{(c)} \smplx initialized hybrid \dmtet, and optimize both
+$\textbf{(c)}$ SMPL-X initialized hybrid DMTet, and optimize both
 geometry and texture using $\mathcal{L}_\text{SDS}$ guided by prompt
 $P=[V]+P_\text{VQA}(A)$. During the optimization,
 $\mathcal{L}_\text{recon}$ is introduced to ensure input view
 consistency, $\mathcal{L}_\text{CD}$ is to enforce the color
 consistency between different views, and $\mathcal{L}_\text{normal}$
-serves as surface regularizer. Finally, the extracted high-quality
-textured meshes \textbf{(d)} are ready to be used in various
+serves as a surface regularizer. Finally, the extracted high-quality
+textured meshes $\textbf{(d)}$ are ready to be used in various
 downstream applications.
 </div>
 </div>
@@ -316,14 +316,14 @@ <h2 class="title is-3 is-centered has-text-centered">
 <div class="column is-four-fifths">
 <h2 class="title is-3">Qualitative Results</h2>
 <h2 class="title is-4">
-Comparison with SOTA single-image human recontruction methods
+Comparison with SOTA single-image human reconstruction methods
 </h2>
 <div class="content has-text-justified">
 <p>
-We compare TeCH with baselines method PIFu, PaMIR and PHORHUM
-qualitatively on in-the-wild images from SHHQ dataset. our
+We compare TeCH with baseline methods, PIFu, PaMIR and PHORHUM
+qualitatively on in-the-wild images from the SHHQ dataset. our
 training-data-free one-shot method generalizes well on
-real-world human images and creates rich details for body
+real-world human images and creates rich details for the body
 textures, such as patterns on clothes and shoes, tattoos on the
 skin, and details of face and hair. While PIFu and PaMIR produce
 blurry results, limited by the distribution gap between training
@@ -713,7 +713,7 @@ <h2 class="title is-3">Acknowledgments &amp; Disclosure</h2>
 </div>
 <h2 class="title">BibTeX</h2>
 <pre><code>
-@inproceedings{huang2023tech,
+@inproceedings{huang2024tech,
 title={{TeCH: Text-guided Reconstruction of Lifelike Clothed Humans}},
 author={Huang, Yangyi and Yi, Hongwei and Xiu, Yuliang and Liao, Tingting and Tang, Jiaxiang and Cai, Deng and Thies, Justus},
 booktitle={International Conference on 3D Vision (3DV)},
