## PRISMA: 16-Bit Temporal Introspection Mechanism - Implementation Specification

**Architecture Overview**: PRISMA adds a cross-step feedback loop in which model uncertainty from the *previous* forward pass modulates the input embeddings of the *current* step. This enables introspective behavior without modifying any internal transformer layers.

---

### **Core Components to Add**

```python
# 1. In your model class (e.g., LlamaModel, MistralModel)
self.uncertainty_embeddings = nn.Embedding(65536, hidden_dim)  # 16-bit codes
self.register_buffer('prev_uncertainty_code', None)  # [batch, prev_seq_len]
```

---

### **Initialization Details**

- **Embedding Table**: Initialize weights from N(0, σ²) with σ = `config.initializer_range` (typically 0.02)
- **Buffer**: `prev_uncertainty_code` starts as `None` and is lazily initialized on the first forward pass
- **Device/Dtype**: The buffer lands on the model's device when it is first assigned; make sure `uncertainty_embeddings` uses the same dtype as the rest of the model (typically bfloat16)

---

### **Forward Pass Modifications (Input Side)**

**Location**: *Immediately after the input embedding lookup, before the transformer layers*

```python
# Pseudocode for model forward()
def forward(self, input_ids, inputs_embeds=None, ...):
    if inputs_embeds is None:
        inputs_embeds = self.embed_tokens(input_ids)

    # === PRISMA INJECTION POINT ===
    batch_size, seq_len = inputs_embeds.shape[:2]

    # Handle uncertainty state initialization
    if self.prev_uncertainty_code is None or self.prev_uncertainty_code.shape[0] != batch_size:
        # First pass or batch size changed: use neutral uncertainty
        uncertainty_code = torch.full(
            (batch_size, seq_len),
            32768,  # N/2 = neutral
            dtype=torch.long,
            device=inputs_embeds.device
        )
    else:
        # Pad or truncate to match the current sequence length
        prev_len = self.prev_uncertainty_code.shape[1]
        if prev_len < seq_len:
            padding = torch.full(
                (batch_size, seq_len - prev_len),
                32768,
                dtype=torch.long,
                device=inputs_embeds.device
            )
            uncertainty_code = torch.cat([self.prev_uncertainty_code, padding], dim=1)
        else:
            uncertainty_code = self.prev_uncertainty_code[:, :seq_len]

    # Look up and shift embeddings (position t gets uncertainty from t-1)
    uncertainty_embeds = self.uncertainty_embeddings(uncertainty_code)  # [B, S, D]
    uncertainty_shifted = F.pad(
        uncertainty_embeds[:, :-1, :],
        (0, 0, 1, 0),
        value=0.0
    )  # First position gets zeros

    # Inject into the main embeddings
    inputs_embeds = inputs_embeds + uncertainty_shifted
    # === END PRISMA INJECTION ===

    # Proceed to the transformer layers as normal
    hidden_states = self.layers(inputs_embeds, ...)
    return hidden_states
```
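The one-step shift is the heart of the mechanism. As a quick illustration (a toy tensor, not part of the spec), the `F.pad` call above moves each position's uncertainty embedding one step to the right and zeroes position 0:

```python
import torch
import torch.nn.functional as F

# Toy check of the shift: [B=1, S=4, D=1] with "embeddings" 1..4
u = torch.arange(1.0, 5.0).view(1, 4, 1)
shifted = F.pad(u[:, :-1, :], (0, 0, 1, 0), value=0.0)
print(shifted.squeeze(-1))  # tensor([[0., 1., 2., 3.]]) -> position t sees the value from t-1
```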
---

### **Forward Pass Modifications (Output Side)**

**Location**: *In your CausalLM class (e.g., LlamaForCausalLM), after computing logits*

```python
# Pseudocode for CausalLM forward()
def forward(self, ..., labels=None, return_dict=True):
    outputs = self.model(...)
    hidden_states = outputs.last_hidden_state
    logits = self.lm_head(hidden_states)

    # === PRISMA UNCERTAINTY COMPUTATION ===
    # Runs in both training and inference
    with torch.no_grad():  # No gradient flows into the uncertainty mechanism
        probs = logits.detach().softmax(dim=-1)  # [B, S, V]

        # Compute entropy per position
        log_probs = torch.log(probs.clamp(min=1e-9))
        entropy = -(probs * log_probs).sum(dim=-1)  # [B, S]

        # Normalize by the entropy of a uniform distribution
        max_entropy = math.log(probs.size(-1))
        entropy_norm = (entropy / max_entropy).clamp(0.0, 1.0)

        # Quantize to 16-bit integer codes in [0, 65535]
        self.model.prev_uncertainty_code = (
            entropy_norm * 65535
        ).long().clamp(0, 65535)
    # === END PRISMA COMPUTATION ===

    # Compute loss and return outputs as normal
    loss = None
    if labels is not None:
        loss = self.loss_function(logits, labels)

    return CausalLMOutputWithPast(
        loss=loss,
        logits=logits,
        past_key_values=outputs.past_key_values,
    )
```

---

### **Generation Loop Integration**

**Required**: Reset the uncertainty state between generation runs

```python
# Add this method to your CausalLM class
def reset_uncertainty(self):
    """Call before each new generation to clear the uncertainty state"""
    self.model.prev_uncertainty_code = None

# In your generation code:
model.reset_uncertainty()  # Essential!
outputs = model.generate(**inputs)
```

---

### **Key Implementation Notes for Arbitrary Models**

| Model Type | Integration Points |
|------------|-------------------|
| **Standard Decoder (Llama, Mistral)** | Inject in `forward()` after `self.embed_tokens()`; compute uncertainty in `ForCausalLM.forward()` |
| **Encoder-Decoder (T5)** | Inject in the decoder embedding; compute uncertainty from decoder output logits |
| **Vision-Language (LLaVA, DeepSeek-VL)** | Inject *after* multimodal projections; ensure `prev_uncertainty_code` tracks *text token positions only* (a masked-injection sketch appears after step 2 below) |
| **MoE Models (Mixtral)** | Inject before expert routing; uncertainty overhead is negligible compared to MoE computation |

---

### **Edge Cases & State Management**

1. **Dynamic Sequence Lengths**: The padding/truncation logic keeps `prev_uncertainty_code` matched to the current `seq_len`
2. **Batch Size Changes**: When the batch size changes mid-generation, reinitialize with neutral codes
3. **KV Cache**: `prev_uncertainty_code` does *not* participate in the KV cache; it is purely a side channel
4. **Gradient Checkpointing**: The mechanism is checkpointing-safe; the embeddings are simply recomputed during the backward pass
5. **Multi-GPU**: `uncertainty_embeddings` is an ordinary model parameter and is sharded automatically; `prev_uncertainty_code` stays on the same device as the model

---

### **Performance Characteristics**

| Component | Parameters | FLOPs | Memory | Latency |
|-----------|------------|-------|--------|---------|
| Uncertainty Embeddings | `65,536 × hidden_dim` | 0 | ~268 MB in bf16 (d = 2048) | Negligible |
| Entropy Computation | 0 | `O(B×S×V)` | O(1) | <0.1 ms |
| Embedding Addition | 0 | `O(B×S×D)` | O(1) | <0.01 ms |

**Total Overhead**: <1% additional compute; ~134M additional parameters at d = 2048 (roughly 2% of a 7B-class model)

---

### **Theoretical Intuition**

PRISMA transforms autoregressive generation from a **memoryless process** P(y_t | x, y_{<t}) into one that also conditions on the model's own prior uncertainty, P(y_t | x, y_{<t}, u_{t-1}), where u_{t-1} is the quantized entropy code produced at step t-1. Each step therefore begins with an explicit signal about how confident the model just was, and the network can learn to exploit that signal.

---

The remaining steps restate the integration in condensed form; the embedding table and state buffer from Core Components appear here as `self.uncertainty_embed` (with `self.n_levels = 65536` codes) and `self.prev_uncertainty_code`.

## 2. Inject uncertainty into input embeddings

**Where:** model `forward`, after the token embedding lookup

```python
prev = self.prev_uncertainty_code
B, T = inputs_embeds.shape[:2]
if prev is None or prev.shape[0] != B:
    code = torch.full((B, T), self.n_levels // 2, dtype=torch.long, device=inputs_embeds.device)
elif prev.shape[1] >= T:
    code = prev[:, :T]
else:
    code = F.pad(prev, (0, T - prev.shape[1]), value=self.n_levels // 2)
u = self.uncertainty_embed(code)
u = F.pad(u[:, :-1], (0, 0, 1, 0))  # shift right: position i gets uncertainty from i-1
inputs_embeds = inputs_embeds + u
```
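For vision-language models (see the integration table above), the same injection can be restricted to text positions. The sketch below is illustrative only: `text_mask` and the function name are assumptions rather than part of any existing API, and the attribute names would need adapting to your model.

```python
import torch
import torch.nn.functional as F

def inject_text_only_uncertainty(inputs_embeds: torch.Tensor,
                                 uncertainty_embeds: torch.Tensor,
                                 text_mask: torch.Tensor) -> torch.Tensor:
    """Add previous-step uncertainty embeddings at text token positions only.

    inputs_embeds:      [B, S, D] merged text + image embeddings (post-projection)
    uncertainty_embeds: [B, S, D] looked-up uncertainty embeddings
    text_mask:          [B, S] bool, True where the position is a text token
    """
    u = F.pad(uncertainty_embeds[:, :-1, :], (0, 0, 1, 0))  # shift right: t gets the code from t-1
    u = u * text_mask.unsqueeze(-1).to(u.dtype)             # zero out image/vision positions
    return inputs_embeds + u
```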
## 3. Compute uncertainty from logits

**Where:** LM head `forward`, after the logits are computed

Note: if your buffer lives on an inner model (e.g., `self.model`), update `self.model.prev_uncertainty_code` instead.

```python
with torch.no_grad():
    probs = logits.softmax(dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-9))).sum(dim=-1)
    entropy = entropy / math.log(probs.size(-1))  # normalize to [0, 1]
    self.prev_uncertainty_code = (
        entropy * (self.n_levels - 1)
    ).long().clamp(0, self.n_levels - 1)
```

---

## 4. Reset hook

```python
def reset_uncertainty(self):
    self.prev_uncertainty_code = None
```

Call this before each new generation or when switching between unrelated sequences.

---

## 5. Generation rules

- **Do NOT** clear `prev_uncertainty_code` between decoding steps within a sequence
- **DO** clear it between unrelated sequences or batches

---

## Why this works

The uncertainty signal rides along in the residual stream from the first layer. Early on, it competes with stronger signals: token semantics, position, attention patterns. But because it correlates with prediction difficulty, the model learns to preserve it. By roughly two-thirds of the way through the network, the signal has accumulated enough relative strength to influence decisions explicitly. The model doesn't just *feel* uncertain; it can *act* on uncertainty: hedge, qualify, or change course.

You don't inject at a specific layer because you don't know where introspection should live. The model discovers that for itself. You just ensure the information is present from the start.

---

## What to watch for

- **Ablation test:** Zero out the uncertainty injection and measure the change in perplexity (a sketch follows this section). If removing it hurts, the signal is being used.
- **Attention probe:** Check whether high-uncertainty positions receive more attention in later layers.
- **Behavioral test:** Does the model hedge more after high-entropy predictions? Does it recover better from mistakes?

If uncertainty is truly integrated, the model's *behavior* will reflect its confidence, not because you trained it to say "I'm not sure," but because knowing its own uncertainty became useful for prediction.
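A minimal sketch of the ablation test, assuming the attribute layout from the detailed spec above (`model.model.uncertainty_embeddings`, `model.reset_uncertainty()`) and a `batch` dict containing `input_ids` and `attention_mask`; adapt the attribute paths to your integration.

```python
import torch

@torch.no_grad()
def ablation_perplexity(model, batch):
    """Compare perplexity with the uncertainty injection enabled vs. zeroed out."""
    model.eval()

    model.reset_uncertainty()
    loss_on = model(**batch, labels=batch["input_ids"]).loss

    weight = model.model.uncertainty_embeddings.weight
    saved = weight.clone()
    weight.zero_()                    # neutralize the injection
    model.reset_uncertainty()
    loss_off = model(**batch, labels=batch["input_ids"]).loss
    weight.copy_(saved)               # restore the trained table

    return torch.exp(loss_on).item(), torch.exp(loss_off).item()
```

If the zeroed run shows noticeably worse perplexity, the model is relying on the injected signal rather than ignoring it.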