<!-- # DepthMaster: Taming Diffusion Models for Monocular Depth Estimation


This repository represents the official implementation of the paper titled "DepthMaster: Taming Diffusion Models for Monocular Depth Estimation". -->

<!-- [![Website](doc/badges/badge-website.svg)](https://marigoldmonodepth.github.io)
[![Paper](https://img.shields.io/badge/arXiv-PDF-b31b1b)](https://arxiv.org/abs/2312.02145) -->

<!-- [![License](https://img.shields.io/badge/License-Apache--2.0-929292)](https://www.apache.org/licenses/LICENSE-2.0) -->

<h1 align="center"><strong>DepthMaster: Taming Diffusion Models for Monocular Depth Estimation</strong></h1>
    <p align="center">
        <a href="https://indu1ge.github.io/ziyangsong">Ziyang Song*</a>,  
        <a href="https://orcid.org/0009-0001-6677-0572">Zerong Wang*</a>,
        <a href="https://orcid.org/0000-0001-7817-0665">Bo Li</a>,
        <a href="https://orcid.org/0009-0007-1175-5918">Hao Zhang</a>,
        <a href="https://ruijiezhu94.github.io/ruijiezhu/">Ruijie Zhu</a>,
        <a href="https://orcid.org/0009-0004-3280-8490">Li Liu</a>,
        <a href="https://pengtaojiang.github.io/">Peng-Tao Jiang†</a>,
        <a href="http://staff.ustc.edu.cn/~tzzhang/">Tianzhu Zhang†</a>,
        <br>
        *Equal Contribution, †Corresponding Author
        <br>
        University of Science and Technology of China, vivo Mobile Communication Co., Ltd.
        <br>
        <b>arXiv 2025</b>
</p>
<!-- [Ziyang Song*](https://indu1ge.github.io/ziyangsong),
[Zerong Wang*](),
[Bo Li](https://orcid.org/0000-0001-7817-0665),
[Hao Zhang](https://orcid.org/0009-0007-1175-5918),
[Ruijie Zhu](https://ruijiezhu94.github.io/ruijiezhu/),
[Li Liu](https://orcid.org/0009-0004-3280-8490)
[Tianzhu Zhang](http://staff.ustc.edu.cn/~tzzhang/)
[Peng-Tao Jiang](https://pengtaojiang.github.io/) -->

<div align="center">
 <a href='https://arxiv.org/abs/2501.02576'><img src='https://img.shields.io/badge/Paper-arXiv-red'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<!-- <a href='https://arxiv.org/abs/[]'><img src='https://img.shields.io/badge/arXiv-[]-b31b1b.svg'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; -->
 <a href='https://indu1ge.github.io/DepthMaster_page/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href='https://github.com/indu1ge/DepthMaster'><img src='https://img.shields.io/badge/GitHub-Repository-blue?logo=github'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <a href='https://www.apache.org/licenses/LICENSE-2.0'><img src='https://img.shields.io/badge/License-Apache--2.0-929292'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
 <!-- <a href='https://paperswithcode.com/sota/unsupervised-monocular-depth-estimation-on-7?p=ec-depth-exploring-the-consistency-of-self'><img src='https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/ec-depth-exploring-the-consistency-of-self/unsupervised-monocular-depth-estimation-on-7'></a> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; -->
</div>


<!-- We present Marigold, a diffusion model, and associated fine-tuning protocol for monocular depth estimation. Its core principle is to leverage the rich visual knowledge stored in modern generative image models. Our model, derived from Stable Diffusion and fine-tuned with synthetic data, can zero-shot transfer to unseen data, offering state-of-the-art monocular depth estimation results. -->


![teaser](assets/framework.png)

<!-- >We present DepthMaster, a tamed single-step diffusion model designed to enhance the generalization and detail preservation abilities of depth estimation models. Through feature alignment, we effectively prevent the overfitting to texture details. By adaptively enhance  -->
>We present DepthMaster, a tamed single-step diffusion model that adapts the generative features of diffusion models to the discriminative depth estimation task. We introduce a Feature Alignment module to mitigate overfitting to texture and a Fourier Enhancement module to refine fine-grained details. DepthMaster achieves state-of-the-art zero-shot performance and superior detail preservation, surpassing other diffusion-based methods across various datasets.


## 🎓 Citation

Please cite our paper:

```bibtex
@article{song2025depthmaster,
  title={DepthMaster: Taming Diffusion Models for Monocular Depth Estimation},
  author={Song, Ziyang and Wang, Zerong and Li, Bo and Zhang, Hao and Zhu, Ruijie and Liu, Li and Jiang, Peng-Tao and Zhang, Tianzhu},
  journal={arXiv preprint arXiv:2501.02576},
  year={2025}
}
```

## Acknowledgements

The code is based on [Marigold](https://github.com/prs-eth/Marigold).

## 🎫 License

This work is licensed under the Apache License, Version 2.0 (as defined in the [LICENSE](LICENSE.txt)).

By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE.txt).

[![License](https://img.shields.io/badge/License-Apache--2.0-929292)](https://www.apache.org/licenses/LICENSE-2.0)