---
license: cc0-1.0
datasets:
- Navanjana/Gutenberg_books
- aisuko/simple_english_wikipedia
- stas/openwebtext-10k
- RaiBP/openwebtext2-first-30-chunks-lang-detect-raw-output
- lucadiliello/bookcorpusopen
- deepmind/pg19
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- Self
model-index:
- name: AaI 1111
  results:
  - task:
      type: text-classification
      name: Multiple Choice
    dataset:
      name: ai2_arc
      type: ai2_arc
      config: ARC-Easy
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 17.85
---

## Safety Concerns

This model has not undergone any safety tuning. We are not responsible for any damage it may cause.

## AaI Introduction

AaI is a model made entirely from scratch by 16dvnk on his NVIDIA GeForce RTX 4080 Laptop GPU. He trained it for 11 hours straight and, after some tuning, produced this model. He says the process was a pain and took a lot of effort. He named it AaI rather than AAI or other variations because he finds the alternatives an “eyesore”.

## Architecture

The model uses a generative pre-trained transformer (GPT) architecture, i.e. a decoder-only transformer trained autoregressively on next-token prediction.
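As a GPT-style model, AaI generates text autoregressively: each step scores the vocabulary given the tokens so far and appends the chosen token. A minimal greedy-decoding sketch (the `logits_fn` stub is a hypothetical stand-in for the real model, not AaI's actual API):

```python
def generate_greedy(logits_fn, prompt_ids, max_new_tokens, eos_id=None):
    """Greedy autoregressive decoding: always pick the highest-scoring token."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = logits_fn(ids)  # one score per vocabulary entry
        next_id = max(range(len(logits)), key=logits.__getitem__)
        ids.append(next_id)
        if next_id == eos_id:  # stop early on end-of-sequence
            break
    return ids
```

Real decoders typically layer sampling, temperature, or top-k filtering on top of this loop.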

## Technical Specifications

| AaI Specs              | Details                                  |
|------------------------|----------------------------------------|
| Creator                | 16dvnk                                 |
| Hardware               | NVIDIA GeForce RTX 4080 Laptop GPU     |
| Training Duration      | 21 hours                               |
| Framework              | PyTorch                                |
| Parameter Count        | 14 million                             |
| Model Type             | Generative pre-trained transformer     |
| Initial Training Year  | 2025                                   |
| Stable Release Status  | No stable release as of December 2025  |

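The 14 million figure is in line with what a small GPT configuration yields. A rough parameter-count sketch (the hyperparameters below are hypothetical, chosen only to show that ~14M is plausible; they are not AaI's published configuration):

```python
def gpt_param_count(vocab_size, context_len, d_model, n_layers):
    """Approximate parameter count of a GPT with tied input/output embeddings."""
    # token and positional embedding tables
    emb = vocab_size * d_model + context_len * d_model
    # attention: Q, K, V and output projections (weights + biases)
    attn = 4 * d_model * d_model + 4 * d_model
    # MLP: d -> 4d -> d, with biases
    mlp = 8 * d_model * d_model + 4 * d_model + d_model
    # two LayerNorms per block (scale + bias each)
    ln = 2 * 2 * d_model
    # final LayerNorm; the output head reuses the embedding matrix
    return emb + n_layers * (attn + mlp + ln) + 2 * d_model

# e.g. a hypothetical 32k-vocab, 512-context, 8-layer, 256-dim model
# lands near 14.6M parameters
```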
## Evaluation Results

The model was evaluated on the **ARC-Easy** and **AaI-sbench** benchmarks (test splits).

| Dataset    | Split | Metric   | Value  |
|------------|-------|----------|--------|
| ARC-Easy   | test  | Accuracy | 17.85% |
| AaI-sbench | test  | Accuracy | 60.00% |
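The accuracy reported above is simply the percentage of multiple-choice questions answered correctly; a minimal sketch of the metric:

```python
def accuracy(predictions, references):
    """Percentage of examples whose predicted choice matches the reference."""
    if len(predictions) != len(references):
        raise ValueError("prediction/reference length mismatch")
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)
```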

## Notes

- All current releases have 14M parameters, which is considered small.
- The model was trained using PyTorch.
- As of December 2025, there is no stable release of AaI.