Update README.md
README.md CHANGED
@@ -409,10 +409,10 @@ Many repositories now support fine-tuning of the InternVL series models, includi

### LMDeploy

-LMDeploy is a toolkit for compressing, deploying, and serving
+LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.

```sh
-pip install lmdeploy>=0.
+pip install lmdeploy>=0.6.4
```

LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
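For reference, the pipeline mentioned in the hunk above is LMDeploy's `pipeline` API. The sketch below is illustrative rather than part of this change; it assumes the `lmdeploy` version installed above, and the prompt and image URL are placeholders.

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Load the same checkpoint used in the serving example further down.
pipe = pipeline('OpenGVLab/Mini-InternVL-Chat-2B-V1-5')

# Placeholder image; load_image accepts a local path or a URL.
image = load_image('https://example.com/sample.jpg')

# A (prompt, image) tuple runs a single multi-modal query,
# mirroring the text-only LLM pipeline.
response = pipe(('describe this image', image))
print(response.text)
```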
@@ -501,7 +501,7 @@ print(sess.response.text)

LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:

```shell
-lmdeploy serve api_server OpenGVLab/Mini-InternVL-Chat-2B-V1-5 --
+lmdeploy serve api_server OpenGVLab/Mini-InternVL-Chat-2B-V1-5 --server-port 23333
```

To use the OpenAI-style interface, you need to install OpenAI:
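For reference, once the `api_server` above is running and the `openai` package is installed (`pip install openai`), an OpenAI-style client call could look like the sketch below. This is illustrative rather than part of this change; the base URL follows the `--server-port 23333` flag, the API key is an arbitrary string for a local server, and the image URL is a placeholder.

```python
from openai import OpenAI

# The key can be any non-empty string for a local deployment;
# the base URL matches the --server-port 23333 flag used above.
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')

# The server lists the deployed checkpoint under /v1/models.
model_name = client.models.list().data[0].id

# Placeholder image URL; the OpenAI-style payload mixes text and image parts.
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'Describe this image.'},
            {'type': 'image_url', 'image_url': {'url': 'https://example.com/sample.jpg'}},
        ],
    }],
    temperature=0.8,
)
print(response.choices[0].message.content)
```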