---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Dream-Coder-v0-Instruct-7B

This is the Dream-Coder-v0-Instruct-7B model with joint sampling enabled. Please refer to the paper below for details.

- **arXiv:** https://www.arxiv.org/pdf/2509.22738

## How to use

Here is a simple script for running the model. Setting the `use_adjust` flag to `False` generates from the base diffusion LM with naive parallel sampling.


```python
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "pbansal/Dream-Coder-v0-Instruct-7B-Adjust"
model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.to("cuda").eval()
use_adjust = True  # set to False to sample from the base diffusion LM with naive parallel sampling
messages = [
    {"role": "user", "content": "Write a quick sort algorithm."}
]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", return_dict=True, add_generation_prompt=True
)
input_ids = inputs.input_ids.to(device="cuda")
attention_mask = inputs.attention_mask.to(device="cuda")

output = model.diffusion_generate(
    input_ids,
    attention_mask=attention_mask,
    max_new_tokens=768,
    output_history=True,
    return_dict_in_generate=True,
    steps=768,              # number of diffusion denoising steps
    temperature=0.1,
    top_p=0.95,
    alg="entropy",          # entropy-based order for unmasking tokens
    alg_temp=0.,
    use_adjust=use_adjust,  # enable joint sampling
)

generations = [
    tokenizer.decode(g[len(p):].tolist())  # strip the prompt tokens before decoding
    for p, g in zip(input_ids, output.sequences)
]

print(generations[0].split(tokenizer.eos_token)[0]) 
```
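
To compare joint sampling against naive parallel sampling from the base diffusion LM, the same call can simply be repeated with the flag toggled. The sketch below is a minimal illustration that reuses the objects defined above; the helper name `generate_once` is ours, and the generation settings just mirror the script.

```python
def generate_once(use_adjust: bool) -> str:
    # Rerun diffusion generation with joint sampling toggled on or off,
    # reusing model, tokenizer, input_ids and attention_mask from the script above.
    out = model.diffusion_generate(
        input_ids,
        attention_mask=attention_mask,
        max_new_tokens=768,
        return_dict_in_generate=True,
        steps=768,
        temperature=0.1,
        top_p=0.95,
        alg="entropy",
        alg_temp=0.,
        use_adjust=use_adjust,
    )
    # Strip the prompt, decode, and cut at the first EOS token.
    text = tokenizer.decode(out.sequences[0][input_ids.shape[1]:].tolist())
    return text.split(tokenizer.eos_token)[0]

print(generate_once(True))   # joint sampling
print(generate_once(False))  # naive parallel sampling from the base diffusion LM
```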