Jitesh Jain committed
Commit: ca96ab1
Parent(s): f206f62

Fix text

Files changed:
- Dockerfile (+1, -4)
- gradio_app.py (+4, -4)
Dockerfile CHANGED

@@ -30,18 +30,15 @@ RUN chmod -R 777 $WORKDIR
 
 COPY requirements.txt $WORKDIR/requirements.txt
 RUN pip install --no-cache-dir --upgrade -r $WORKDIR/requirements.txt
+RUN pip install ninja
 
 COPY . .
 
 ARG TORCH_CUDA_ARCH_LIST=7.5+PTX
 
-RUN pip install ninja
-
 USER root
 RUN chown -R user:user $HOME
 RUN chmod -R 777 $HOME
-RUN chown -R user:user $WORKDIR
-RUN chmod -R 777 $WORKDIR
 
 USER user
 RUN ln -s $WORKDIR/oneformer/modeling/pixel_decoder/ops/ $WORKDIR/ && ls && cd ops/ && FORCE_CUDA=1 python setup.py build --build-base=$WORKDIR/ install --user && cd ..
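The Dockerfile change moves `RUN pip install ninja` ahead of `COPY . .`, so the ninja install is cached independently of source edits, and drops the now-redundant `chown`/`chmod` of `$WORKDIR` (the hunk context shows those permissions are already set earlier in the file). The final `RUN` line compiles OneFormer's multi-scale deformable attention CUDA ops in place. As a minimal sanity-check sketch, the snippet below assumes the ops package installs an extension module named `MultiScaleDeformableAttention`; that name is a convention of this ops package and is not shown anywhere in the diff.

# check_ops.py -- sketch only; run inside the built image to confirm the CUDA extension
# compiled from oneformer/modeling/pixel_decoder/ops/ by the final RUN line is importable.
# The module name MultiScaleDeformableAttention is an assumption, not taken from this diff.
import torch

try:
    import MultiScaleDeformableAttention as MSDA
    print("ops extension found:", MSDA.__file__)
except ImportError as err:
    print("ops extension missing; rebuild the ops with FORCE_CUDA=1:", err)

print("CUDA available at runtime:", torch.cuda.is_available())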
gradio_app.py CHANGED

@@ -187,12 +187,12 @@ def segment(path, task, dataset, backbone):
 
 title = "OneFormer: One Transformer to Rule Universal Image Segmentation"
 
-description = "<p font-size: 16px; font-weight: w600; text-align: center'> <a href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a href='https://arxiv.org/abs/2211.06220' target='_blank'>ArXiv Paper</a> | <a href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github Repo</a></p>" \
-+ "<p font-size: 12px; text-align: center' margin: 10px font-weight: w300; text-align: center'> <a href='https://chrisjuniorli.github.io/' target='_blank'>Jiachen Li<sup>*</sup></a> <a href='https://www.linkedin.com/in/mtchiu/' target='_blank'>MangTik Chiu<sup>*</sup></a> <a href='https://alihassanijr.com/' target='_blank'>Ali Hassani</a> <a href='https://www.linkedin.com/in/nukich74/' target='_blank'>Nikita Orlov</a> <a href='https://www.humphreyshi.com/home' target='_blank'>Humphrey Shi</a></p>" \
-+ "<p text-align: center; font-size: 14px; font-weight: w300;'> \
+description = "<p style='font-size: 16px; font-weight: w600; text-align: center'> <a href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a href='https://arxiv.org/abs/2211.06220' target='_blank'>ArXiv Paper</a> | <a href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github Repo</a></p>" \
++ "<p style='font-size: 12px; text-align: center' margin: 10px font-weight: w300; text-align: center'> <a href='https://praeclarumjj3.github.io/' target='_blank'>Jitesh Jain<sup>*</sup></a> <a href='https://chrisjuniorli.github.io/' target='_blank'>Jiachen Li<sup>*</sup></a> <a href='https://www.linkedin.com/in/mtchiu/' target='_blank'>MangTik Chiu<sup>*</sup></a> <a href='https://alihassanijr.com/' target='_blank'>Ali Hassani</a> <a href='https://www.linkedin.com/in/nukich74/' target='_blank'>Nikita Orlov</a> <a href='https://www.humphreyshi.com/home' target='_blank'>Humphrey Shi</a></p>" \
++ "<p style='text-align: center; font-size: 14px; font-weight: w300;'> \
 OneFormer is the first multi-task universal image segmentation framework based on transformers. Our single OneFormer model achieves state-of-the-art performance across all three segmentation tasks with a single task-conditioned joint training process. OneFormer uses a task token to condition the model on the task in focus, making our architecture task-guided for training, and task-dynamic for inference, all with a single model. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.\
 </p>" \
-+ "<p text-align: center; font-size: 14px; font-weight: w300;'> [Note: Inference on CPU may take upto 2 minutes. On a single RTX A6000 GPU, OneFormer is able to inference at more than 15 FPS.</p>"
++ "<p style='text-align: center; font-size: 14px; font-weight: w300;'> [Note: Inference on CPU may take upto 2 minutes. On a single RTX A6000 GPU, OneFormer is able to inference at more than 15 FPS.</p>"
 
 # description = "<p style='color: #E0B941; font-size: 16px; font-weight: w600; text-align: center'> <a style='color: #E0B941;' href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a style='color: #E0B941;' href='https://arxiv.org/abs/2211.06220' target='_blank'>OneFormer: One Transformer to Rule Universal Image Segmentation</a> | <a style='color: #E0B941;' href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github</a></p>" \
 # + "<p style='color:royalblue; margin: 10px; font-size: 16px; font-weight: w400;'> \