# ReaKase-8B

ReaKase-8B is the model introduced in the paper *ReaKase-8B: Legal Case Retrieval via Knowledge and Reasoning Representations with LLMs*. More information is available on arXiv and GitHub.

## Example Usage

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("AnnaStudy/ReaKase-8B", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AnnaStudy/ReaKase-8B")

# A legal case text formatted with its key components
case_txt = "The following contains key components of a legal case. Legal facts..."

tokenized = tokenizer(case_txt, return_tensors='pt', padding=True, truncation=True, max_length=2048).to(model.device)

# The case embedding is the hidden state of the last token
with torch.no_grad():
    outputs = model(**tokenized)
case_embedding = outputs.last_hidden_state[:, -1]
```
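
For retrieval, a query case embedding can be scored against candidate case embeddings. The following is a minimal sketch assuming cosine similarity as the scoring function; the `encode` helper and the candidate texts are illustrative only and reuse the `model` and `tokenizer` loaded above.

```python
import torch
import torch.nn.functional as F

def encode(text: str) -> torch.Tensor:
    # Hypothetical helper wrapping the steps above: tokenize, forward pass, last-token hidden state
    tokenized = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=2048).to(model.device)
    with torch.no_grad():
        outputs = model(**tokenized)
    return outputs.last_hidden_state[:, -1]

query_emb = encode("The following contains key components of a legal case. Legal facts...")
candidate_embs = torch.cat([encode(c) for c in ["Candidate case text 1...", "Candidate case text 2..."]])

# Rank candidate cases by cosine similarity to the query case
scores = F.cosine_similarity(query_emb, candidate_embs)
ranking = scores.argsort(descending=True)
```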

## Base Model

ReaKase-8B is finetuned from Qwen3-Embedding-8B, which provides the underlying semantic representation capability.

Reference: Qwen/Qwen3-Embedding-8B

## Cite

If you find this repository useful, please cite:

```bibtex
@article{ReaKase-8B,
  author  = {Yanran Tang and Ruihong Qiu and Xue Li and Zi Huang},
  title   = {ReaKase-8B: Legal Case Retrieval via Knowledge and Reasoning Representations with LLMs},
  journal = {CoRR},
  volume  = {abs/2510.26178},
  year    = {2025}
}
```
