Qwen3 0.6B Fine-Tuned for Search Query Generation

This model is a fine-tuned version of the Qwen3 0.6B model, designed to generate relevant search queries based on user inputs and conversational context. It's particularly useful for enhancing search engine query suggestion systems, chatbots, and virtual assistants.

Model Details

  • Base Model: Qwen3 0.6B (0.6B parameters, BF16 safetensors)
  • Fine-Tuning Dataset: Custom dataset consisting of input-output pairs where the model learns to generate a list of search queries based on a given input and previous conversation.
  • Training Framework: Fine-tuned using Hugging Face's transformers and datasets libraries.
  • Inference Framework: Compatible with Hugging Face's transformers library for easy integration into applications.

Intended Use

This model is intended for applications that require generating search queries from user inputs, such as:

  • Search Engine Query Suggestions: Enhancing search engines by providing more relevant query suggestions.
  • Chatbots and Virtual Assistants: Enabling chatbots to suggest relevant search queries based on user conversations.
  • Content Discovery Systems: Improving content recommendation systems by generating search queries that lead to relevant content.

Example

Input: Generate a list of search queries. Input Query: "What are the benefits of that for children?"
Previous conversation: ["I'm thinking of enrolling my child in music lessons.", "They are interested in piano."]

Output:

  • benefits of music lessons for children
  • advantages of learning piano for kids
  • music education impact on child development
  • child learning piano benefits
  • academic benefits of music education
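The prompt format above can be assembled programmatically. This is a minimal sketch assuming the plain-text template shown in the example; `build_prompt` is a hypothetical helper, and the exact template should match whatever format was used during fine-tuning.

```python
def build_prompt(query: str, history: list[str]) -> str:
    """Assemble the prompt in the format shown in the example above
    (assumed format; adjust to your fine-tuning template)."""
    return (
        "Generate a list of search queries. "
        f"Input Query: \"{query}\"\n"
        f"Previous conversation: {history!r}"
    )

prompt = build_prompt(
    "What are the benefits of that for children?",
    ["I'm thinking of enrolling my child in music lessons.",
     "They are interested in piano."],
)
print(prompt)
```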

Model Usage

To use this model for generating search queries:

  1. Install Required Libraries:

    pip install transformers torch

  2. Load the Model and Tokenizer:

    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "MEGHT/qwen3-finetuned-search"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

  3. Generate Search Queries:

    prompt = (
        "Generate a list of search queries. "
        "Input Query: 'How can I teach them about it?'\n"
        "Previous conversation: ['My kids are asking about money.', "
        "'They want to know how to save.']"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
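The decoded output can then be split into individual queries. This is a small sketch assuming the model emits one query per line, optionally with a bullet prefix, as in the example above; the exact output format may vary.

```python
def parse_queries(decoded: str) -> list[str]:
    """Split decoded model output into individual query strings.
    Assumes one query per line, optionally prefixed with a bullet."""
    queries = []
    for line in decoded.splitlines():
        line = line.strip().lstrip("-*• ").strip()
        if line:
            queries.append(line)
    return queries

sample = (
    "• benefits of music lessons for children\n"
    "• advantages of learning piano for kids"
)
print(parse_queries(sample))
```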

Training Details

  • Dataset: Custom dataset of input-output pairs for search query generation.
  • Fine-Tuning Parameters:
    • Epochs: 3
    • Batch Size: 16
    • Learning Rate: 5e-5
    • Optimizer: AdamW
    • Scheduler: Linear warmup with 10% warmup ratio
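With a linear schedule and a 10% warmup ratio, the warmup length depends on the total number of optimizer steps. The sketch below shows that arithmetic using the hyperparameters listed above; the dataset size is a hypothetical placeholder, since the card does not state it.

```python
# Hyperparameters from the card above.
hparams = {
    "epochs": 3,
    "batch_size": 16,
    "learning_rate": 5e-5,
    "warmup_ratio": 0.10,
}

num_examples = 10_000  # hypothetical dataset size (not stated in the card)
steps_per_epoch = num_examples // hparams["batch_size"]
total_steps = steps_per_epoch * hparams["epochs"]
warmup_steps = int(total_steps * hparams["warmup_ratio"])
print(total_steps, warmup_steps)  # -> 1875 187
```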

Evaluation

  • Perplexity: 12.5
  • BLEU Score: 0.35
  • ROUGE-L: 0.45

These metrics suggest that the model generates coherent and relevant search queries from inputs and conversational context.

Limitations

  • Context Length: Maximum of 1024 tokens; long conversations may be truncated.
  • Domain Specificity: May not perform well on unseen domains.
  • Biases: Model may inherit biases from training data.
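Given the 1024-token context limit, long conversations should be truncated before prompting. The sketch below is a hypothetical helper that keeps only the most recent turns; it uses a character budget as a rough proxy, and a tokenizer should be used for exact counts in production.

```python
def truncate_history(turns: list[str], max_chars: int = 2000) -> list[str]:
    """Keep only the most recent conversation turns whose combined
    length fits the budget. A rough character-based proxy for the
    1024-token context limit."""
    kept = []
    total = 0
    for turn in reversed(turns):  # walk from newest to oldest
        if total + len(turn) > max_chars:
            break
        kept.append(turn)
        total += len(turn)
    return list(reversed(kept))  # restore chronological order

print(truncate_history(["a" * 1500, "b" * 300, "c" * 400], max_chars=1000))
```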

License

Apache 2.0 License

Citation

    @misc{qwen3_0.6b_finetuned_search,
      author = {MEGHT},
      title  = {Qwen3 0.6B Fine-Tuned for Search Query Generation},
      year   = {2025},
      url    = {https://huggingface.co/MEGHT/qwen3-finetuned-search}
    }

Acknowledgements

Thanks to the Hugging Face team for the transformers and datasets libraries.

Contact

For questions or feedback, contact MEGHT.
