Improve dataset card: Add paper link, project page, task category, and sample usage

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +90 -3
README.md CHANGED
@@ -1,10 +1,97 @@
  ---
- license: cc0-1.0
  language:
  - en
  tags:
  - arxiv
  ---
- Contains the entire arXiv metadata dataset, including 768-dimensional embeddings of all abstracts generated using sentence-transformer/all-MiniLM-L6-v2.
 
- This database is an artifact of the Citegeist project ([Github](https://github.com/chenneking/citegeist))
  ---
  language:
  - en
+ license: cc0-1.0
  tags:
  - arxiv
+ task_categories:
+ - text-retrieval
  ---
 
 
+ # Citegeist: Automated Generation of Related Work Analysis on the arXiv Corpus
+
+ This repository contains the entire arXiv metadata dataset, including 768-dimensional embeddings of all abstracts generated using `sentence-transformers/all-MiniLM-L6-v2`. This dataset is a core artifact of the [Citegeist project](https://github.com/chenneking/citegeist).
+
+ The dataset and associated project were presented in the paper [Citegeist: Automated Generation of Related Work Analysis on the arXiv Corpus](https://huggingface.co/papers/2503.23229).
+
+ Project Page: [https://citegeist.org](https://citegeist.org)
+ Code: [https://github.com/chenneking/citegeist](https://github.com/chenneking/citegeist)
+
+ ## Sample Usage
+
+ To generate a related work section for a given abstract, use the `citegeist` library's `Generator` class.
+
+ First, install the package:
+ ```bash
+ pip install citegeist-arxiv
+ ```
+
+ ### Option 1: Using the Hosted Milvus Database
+
+ As of March 2025, a hosted Milvus database is provided. You can use it by setting the following environment variables:
+
+ ```python
+ import os
+ from citegeist import Generator
+
+ os.environ["MILVUS_URI"] = "http://49.12.219.90:19530"
+ os.environ["MILVUS_TOKEN"] = "citegeist:citegeist"
+
+ # Set your API key for the chosen LLM provider (e.g., for Gemini)
+ os.environ["GEMINI_API_KEY"] = "YOUR_GEMINI_API_KEY"
+
+ generator = Generator(
+     llm_provider="gemini",  # Choice of: "azure" (OpenAI Studio), "anthropic", "gemini", "mistral", and "openai"
+     api_key=os.environ.get("GEMINI_API_KEY"),  # API key for the chosen provider
+     model_name="gemini-2.0-flash",  # Any model the chosen provider supports
+     database_uri=os.environ.get("MILVUS_URI"),  # Path (local) or URL (remote) for the Milvus DB connection
+     database_token=os.environ.get("MILVUS_TOKEN"),  # Optionally, also set the access token
+ )
+
+ # Define the input abstract and the breadth (5-20), depth (1-5), and diversity (0.0-1.0) parameters.
+ abstract = "Large Language Models provide significant new opportunities for the generation of high-quality written works. However, their employment in the research community is inhibited by their tendency to hallucinate invalid sources and lack of direct access to a knowledge base of relevant scientific articles."
+ breadth = 10
+ depth = 2
+ diversity = 0.0
+
+ related_works, citations = generator.generate_related_work(abstract, breadth, depth, diversity)
+
+ print("Generated Related Works:\n", related_works)
+ print("\nCitations:\n", citations)
+ ```
+
+ ### Option 2: Running the Milvus Database Locally
+
+ Alternatively, you can run the Milvus database locally. Download [database.db](https://huggingface.co/datasets/chenneking/citegeist-milvus-db/blob/main/database.db) and provide its absolute path as the value for `MILVUS_URI`. You do not need to set `MILVUS_TOKEN` for this local setup.
+
+ ```python
+ import os
+ from citegeist import Generator
+
+ # Replace <path_to_database.db_goes_here> with the actual absolute path to your database.db file
+ os.environ["MILVUS_URI"] = "<path_to_database.db_goes_here>"
+
+ # Set your API key for the chosen LLM provider (e.g., for Gemini)
+ os.environ["GEMINI_API_KEY"] = "YOUR_GEMINI_API_KEY"
+
+ generator = Generator(
+     llm_provider="gemini",  # Choice of: "azure" (OpenAI Studio), "anthropic", "gemini", "mistral", and "openai"
+     api_key=os.environ.get("GEMINI_API_KEY"),  # API key for the chosen provider
+     model_name="gemini-2.0-flash",  # Any model the chosen provider supports
+     database_uri=os.environ.get("MILVUS_URI"),  # Absolute path to the local database.db file
+ )
+
+ # Define the input abstract and the breadth (5-20), depth (1-5), and diversity (0.0-1.0) parameters.
+ abstract = "Large Language Models provide significant new opportunities for the generation of high-quality written works. However, their employment in the research community is inhibited by their tendency to hallucinate invalid sources and lack of direct access to a knowledge base of relevant scientific articles."
+ breadth = 10
+ depth = 2
+ diversity = 0.0
+
+ related_works, citations = generator.generate_related_work(abstract, breadth, depth, diversity)
+
+ print("Generated Related Works:\n", related_works)
+ print("\nCitations:\n", citations)
+ ```
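
A side note for readers of this card: the retrieval behind `generate_related_work` ultimately rests on nearest-neighbor search over the abstract embeddings stored in the Milvus database. The sketch below illustrates just that core step — a cosine-similarity top-k lookup — using toy random vectors as stand-ins for the dataset's real 768-dimensional embeddings; it is not part of the `citegeist` API.

```python
import numpy as np

def top_k_abstracts(query_vec, abstract_vecs, k=3):
    """Rank abstract embeddings by cosine similarity to a query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    m = abstract_vecs / np.linalg.norm(abstract_vecs, axis=1, keepdims=True)
    sims = m @ q                      # cosine similarity of every abstract to the query
    idx = np.argsort(-sims)[:k]       # indices of the k most similar abstracts
    return idx, sims[idx]

# Toy 32-dimensional stand-ins for the real 768-dim abstract embeddings.
rng = np.random.default_rng(0)
abstract_vecs = rng.normal(size=(100, 32))
# A query that is a near-duplicate of abstract 42, so it should rank first.
query_vec = abstract_vecs[42] + 0.01 * rng.normal(size=32)

idx, sims = top_k_abstracts(query_vec, abstract_vecs, k=3)
print(idx[0])  # 42
```

In the real pipeline the query vector would come from embedding the input abstract with the same sentence-transformers model, and Milvus performs this search server-side.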