bihan3876 committed on
Commit e310878 · 1 Parent(s): 9f13643

Add Korean secondhand market AI classification models with Git LFS


- 70.61% accuracy ensemble model (7 models)
- Python FastAPI server for image classification
- Support 5 categories: 가구, 생활용품, 전자기기_도서, 취미_게임, 패션_뷰티
- Total model size: 349MB (using Git LFS for .keras files)

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.keras filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,52 @@
+ # Korean Secondhand Market Classifier
+
+ ## Model Overview
+
+ AI model for automatic categorization of Korean secondhand market product images.
+ **70.61% accuracy** achieved with a 7-model ensemble system.
+
+ ## Supported Categories
+
+ 1. **가구** (Furniture) - beds, sofas, desks, chairs
+ 2. **생활용품** (Household items) - kitchenware, cleaning supplies, storage
+ 3. **전자기기_도서** (Electronics/Books) - smartphones, laptops, books, e-books
+ 4. **취미_게임** (Hobbies/Games) - game consoles, board games, sports equipment
+ 5. **패션_뷰티** (Fashion/Beauty) - clothing, shoes, cosmetics, accessories
+
+ ## Performance
+
+ - **Ensemble Accuracy**: 70.61%
+ - **Individual Models**: 7 models (EfficientNet, ResNet50V2, DenseNet, etc.), combined by soft voting as sketched below
+ - **Input Size**: 224x224 RGB images
+
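+ The ensemble combines the per-model softmax outputs by soft voting (a plain average), as implemented in `ensemble_predict` in `api_server.py`. A minimal standalone sketch of that step (the per-model outputs here are placeholders, not real model results):
+
+ ```python
+ import numpy as np
+
+ # Placeholder softmax outputs from 3 of the 7 models for one image
+ predictions = [
+     np.array([0.6, 0.1, 0.1, 0.1, 0.1]),  # e.g. EfficientNetB0
+     np.array([0.5, 0.2, 0.1, 0.1, 0.1]),  # e.g. ResNet50V2
+     np.array([0.4, 0.3, 0.1, 0.1, 0.1]),  # e.g. DenseNet121
+ ]
+
+ # Soft voting: average the class probabilities, then take the argmax
+ ensemble_pred = np.mean(predictions, axis=0)
+ predicted_idx = int(np.argmax(ensemble_pred))  # -> 0 (the first class)
+ ```
+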
+ ## Usage
+
+ ```python
+ # Install dependencies first (shell command):
+ #   pip install fastapi uvicorn tensorflow pillow huggingface_hub
+
+ # Download the repository
+ from huggingface_hub import snapshot_download
+ repo_path = snapshot_download("bihan3876/my_model")
+
+ # Run the API server
+ import subprocess
+ subprocess.run(["python", f"{repo_path}/api_server.py"])
+ ```
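+
+ Once the server is running, `POST /predict` accepts a multipart image upload in a form field named `file`. A minimal client sketch (assumes the `requests` package; the image path is a placeholder):
+
+ ```python
+ import requests
+
+ # Hypothetical local image; any JPEG/PNG works
+ with open("sofa.jpg", "rb") as f:
+     resp = requests.post(
+         "http://localhost:8000/predict",
+         files={"file": ("sofa.jpg", f, "image/jpeg")},
+     )
+
+ result = resp.json()
+ print(result["data"]["predicted_class"], result["data"]["confidence"])
+ ```
+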
+ ## File Structure
+
+ ```
+ models/
+ ├── ensemble/                  # Ensemble models (349MB)
+ │   ├── EfficientNetB0_best.keras
+ │   ├── ResNet50V2_best.keras
+ │   └── ... (7 models)
+ └── serving/                   # Serving models
+     ├── model_optimized.tflite # 24MB
+     └── TensorFlowLiteInferenceService.java
+ ```
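+
+ The TFLite serving model can also be run directly, without the FastAPI server, via `tf.lite.Interpreter` (the same loading path as `load_lightweight_model` in `api_server.py`). A minimal inference sketch, assuming a float32 input tensor and the same 224x224 RGB, divide-by-255 preprocessing (the image path is a placeholder):
+
+ ```python
+ import numpy as np
+ import tensorflow as tf
+ from PIL import Image
+
+ interpreter = tf.lite.Interpreter(model_path="models/serving/model_optimized.tflite")
+ interpreter.allocate_tensors()
+ inp = interpreter.get_input_details()[0]
+ out = interpreter.get_output_details()[0]
+
+ # Preprocess exactly as the API server does: RGB, 224x224, /255, batch dim
+ image = Image.open("sofa.jpg").convert("RGB").resize((224, 224))
+ x = np.expand_dims(np.array(image, dtype=np.float32) / 255.0, axis=0)
+
+ interpreter.set_tensor(inp["index"], x)
+ interpreter.invoke()
+ probs = interpreter.get_tensor(out["index"])[0]
+ print(int(np.argmax(probs)))  # index into the 5 class names
+ ```
+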
+ ## License
+
+ MIT License
api_server.py ADDED
@@ -0,0 +1,416 @@
+ # Role: API server that serves the trained models as a real service
+ # POST /predict - image classification
+ # GET /health - server health check
+ # The ensemble is required to reach 70.61% accuracy
+ # This is only possible through the Python API server
+ from fastapi import FastAPI, UploadFile, File, HTTPException
+ from fastapi.middleware.cors import CORSMiddleware
+ from fastapi.responses import JSONResponse
+ import uvicorn
+ import numpy as np
+ import tensorflow as tf
+ from tensorflow.keras.models import load_model
+ from PIL import Image
+ import io
+ import os
+ from datetime import datetime
+ import json
+ from typing import List, Dict, Any
+ import logging
+
+ # Logging setup
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ # Configuration
+ MODEL_PATH = "models/mega_ensemble_80"  # Local model path
+ HUGGINGFACE_REPO = "bihan3876/my_model"  # Hugging Face repository
+ CLASS_NAMES = ["가구", "생활용품", "전자기기_도서", "취미_게임", "패션_뷰티"]
+ IMG_SIZE = (224, 224)
+
+ # Whether to use the Hugging Face Hub
+ USE_HUGGINGFACE = os.getenv("USE_HUGGINGFACE", "false").lower() == "true"
+
+ # Whether to use lightweight mode (TensorFlow Lite model)
+ USE_LIGHTWEIGHT = os.getenv("USE_LIGHTWEIGHT", "false").lower() == "true"
+ LIGHTWEIGHT_MODEL_PATH = "models/serving/model_optimized.tflite"
+
+ # Create the FastAPI app
+ app = FastAPI(
+     title="AI Product Classification API",
+     description="Automatic classification of secondhand-market products with a 70.61%-accuracy AI model",
+     version="1.0.0"
+ )
+
+ # CORS settings
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ # Global state
+ models = {}
+ model_info = {}
+
+ def download_from_huggingface():
+     """Download models from the Hugging Face Hub"""
+     try:
+         from huggingface_hub import snapshot_download
+
+         logger.info(f"Downloading models from Hugging Face: {HUGGINGFACE_REPO}")
+
+         # Download the repository snapshot (ensemble models, 349MB)
+         local_dir = snapshot_download(
+             repo_id=HUGGINGFACE_REPO,
+             cache_dir="./cache",
+             local_dir="./models_hf"
+         )
+
+         logger.info(f"Model download complete: {local_dir}")
+         return local_dir
+
+     except ImportError:
+         logger.error("The huggingface_hub package is not installed.")
+         logger.error("Install it with: pip install huggingface_hub")
+         return None
+     except Exception as e:
+         logger.error(f"Hugging Face download failed: {e}")
+         return None
+
+ def load_lightweight_model():
+     """Load the lightweight TensorFlow Lite model"""
+     global models, model_info
+
+     try:
+         from huggingface_hub import snapshot_download
+
+         # TensorFlow Lite model path
+         if USE_HUGGINGFACE:
+             # Download only the lightweight model from Hugging Face (24MB)
+             local_dir = snapshot_download(
+                 repo_id=HUGGINGFACE_REPO,
+                 cache_dir="./cache",
+                 local_dir="./models_hf",
+                 allow_patterns=["models/serving/model_optimized.tflite"]
+             )
+             tflite_path = os.path.join(local_dir, "models", "serving", "model_optimized.tflite")
+         else:
+             tflite_path = LIGHTWEIGHT_MODEL_PATH
+
+         if not os.path.exists(tflite_path):
+             raise FileNotFoundError(f"TensorFlow Lite model not found: {tflite_path}")
+
+         # Load the TensorFlow Lite interpreter
+         interpreter = tf.lite.Interpreter(model_path=tflite_path)
+         interpreter.allocate_tensors()
+
+         models["tflite"] = interpreter
+
+         model_info = {
+             "total_models": 1,
+             "model_names": ["TensorFlow_Lite"],
+             "accuracy": 62.0,  # estimated accuracy
+             "classes": CLASS_NAMES,
+             "input_shape": [224, 224, 3],
+             "ensemble_method": "single_model"
+         }
+
+         logger.info("🚀 Lightweight model loaded: TensorFlow Lite")
+         return True
+
+     except Exception as e:
+         logger.error(f"❌ Failed to load lightweight model: {e}")
+         return False
+
+ def load_ensemble_models():
+     """Load the ensemble models"""
+     global models, model_info
+
+     try:
+         # Lightweight mode
+         if USE_LIGHTWEIGHT:
+             return load_lightweight_model()
+
+         # Download models when using the Hugging Face Hub
+         if USE_HUGGINGFACE:
+             hf_path = download_from_huggingface()
+             if hf_path:
+                 model_path = os.path.join(hf_path, "models", "ensemble")
+             else:
+                 logger.warning("Hugging Face download failed, falling back to local models")
+                 model_path = MODEL_PATH
+         else:
+             model_path = MODEL_PATH
+
+         logger.info(f"Loading models from: {model_path}")
+
+         if not os.path.exists(model_path):
+             raise FileNotFoundError(f"Model path does not exist: {model_path}")
+
+         # Find model files
+         model_files = [f for f in os.listdir(model_path) if f.endswith('.keras')]
+
+         if not model_files:
+             raise FileNotFoundError("No model files found")
+
+         # Load each model
+         for model_file in model_files:
+             model_name = model_file.replace('.keras', '').replace('_best', '')
+             model_file_path = os.path.join(model_path, model_file)
+
+             try:
+                 model = load_model(model_file_path)
+                 models[model_name] = model
+                 logger.info(f"✅ {model_name} loaded")
+             except Exception as e:
+                 logger.warning(f"⚠️ {model_name} failed to load: {e}")
+
+         if not models:
+             raise RuntimeError("No models were loaded")
+
+         # Set the model info
+         model_info = {
+             "total_models": len(models),
+             "model_names": list(models.keys()),
+             "accuracy": 70.61,
+             "classes": CLASS_NAMES,
+             "input_shape": [224, 224, 3],
+             "ensemble_method": "soft_voting"
+         }
+
+         logger.info(f"🎯 Ensemble loaded: {len(models)} models")
+         return True
+
+     except Exception as e:
+         logger.error(f"❌ Model loading failed: {e}")
+         return False
+
+ def preprocess_image(image_bytes: bytes) -> np.ndarray:
+     """Preprocess an image"""
+     try:
+         # Open the image with PIL
+         image = Image.open(io.BytesIO(image_bytes))
+
+         # Convert to RGB
+         if image.mode != 'RGB':
+             image = image.convert('RGB')
+
+         # Resize
+         image = image.resize(IMG_SIZE)
+
+         # Convert to a numpy array and normalize
+         image_array = np.array(image) / 255.0
+
+         # Add a batch dimension
+         image_array = np.expand_dims(image_array, axis=0)
+
+         return image_array
+
+     except Exception as e:
+         raise HTTPException(status_code=400, detail=f"Image preprocessing failed: {str(e)}")
+
+ def ensemble_predict(image_array: np.ndarray) -> Dict[str, Any]:
+     """Ensemble prediction"""
+     try:
+         predictions = []
+
+         # Predict with each model
+         for model_name, model in models.items():
+             pred = model.predict(image_array, verbose=0)
+             predictions.append(pred[0])
+
+         # Soft voting (average)
+         ensemble_pred = np.mean(predictions, axis=0)
+
+         # Process the result
+         predicted_class_idx = np.argmax(ensemble_pred)
+         predicted_class = CLASS_NAMES[predicted_class_idx]
+         confidence = float(ensemble_pred[predicted_class_idx])
+
+         # Per-class probabilities
+         probabilities = {
+             CLASS_NAMES[i]: float(ensemble_pred[i])
+             for i in range(len(CLASS_NAMES))
+         }
+
+         return {
+             "predicted_class": predicted_class,
+             "confidence": confidence,
+             "probabilities": probabilities,
+             "timestamp": datetime.now().isoformat()
+         }
+
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=f"Prediction failed: {str(e)}")
+
+ @app.on_event("startup")
+ async def startup_event():
+     """Load the models on server startup"""
+     logger.info("🚀 AI classification server starting...")
+
+     if not load_ensemble_models():
+         logger.error("❌ Cannot start server: model loading failed")
+         raise RuntimeError("Model loading failed")
+
+     logger.info("✅ AI classification server ready!")
+
+ @app.get("/")
+ async def root():
+     """Root endpoint"""
+     return {
+         "message": "AI Product Classification API",
+         "version": "1.0.0",
+         "accuracy": "70.61%",
+         "docs": "/docs"
+     }
+
+ @app.get("/health")
+ async def health_check():
+     """Health check"""
+     return {
+         "status": "healthy",
+         "models_loaded": len(models),
+         "timestamp": datetime.now().isoformat()
+     }
+
+ @app.get("/model-info")
+ async def get_model_info():
+     """Get the model info"""
+     return {
+         "success": True,
+         "data": model_info
+     }
+
+ @app.get("/classes")
+ async def get_classes():
+     """List the supported classes"""
+     return {
+         "success": True,
+         "classes": CLASS_NAMES,
+         "total_classes": len(CLASS_NAMES)
+     }
+
+ @app.post("/predict")
+ async def predict_image(
+     file: UploadFile = File(...),
+     return_probabilities: bool = True
+ ):
+     """Classify a single image"""
+     try:
+         # Validate the file
+         if not file.content_type.startswith('image/'):
+             raise HTTPException(status_code=400, detail="Only image files can be uploaded")
+
+         # Read the image
+         image_bytes = await file.read()
+
+         # Preprocess
+         image_array = preprocess_image(image_bytes)
+
+         # Predict
+         result = ensemble_predict(image_array)
+
+         # Build the response
+         response_data = {
+             "predicted_class": result["predicted_class"],
+             "confidence": result["confidence"],
+             "timestamp": result["timestamp"]
+         }
+
+         if return_probabilities:
+             response_data["probabilities"] = result["probabilities"]
+
+         return {
+             "success": True,
+             "data": response_data,
+             "message": "Classification complete"
+         }
+
+     except HTTPException:
+         raise
+     except Exception as e:
+         logger.error(f"Prediction error: {e}")
+         return JSONResponse(
+             status_code=500,
+             content={
+                 "success": False,
+                 "message": f"Prediction failed: {str(e)}"
+             }
+         )
+
+ @app.post("/batch-predict")
+ async def batch_predict_images(
+     files: List[UploadFile] = File(...),
+     return_probabilities: bool = True
+ ):
+     """Classify a batch of images"""
+     try:
+         results = []
+
+         for file in files:
+             try:
+                 if not file.content_type.startswith('image/'):
+                     results.append({
+                         "filename": file.filename,
+                         "success": False,
+                         "message": "Not an image file"
+                     })
+                     continue
+
+                 # Process the image
+                 image_bytes = await file.read()
+                 image_array = preprocess_image(image_bytes)
+                 result = ensemble_predict(image_array)
+
+                 # Append the result
+                 batch_result = {
+                     "filename": file.filename,
+                     "success": True,
+                     "predicted_class": result["predicted_class"],
+                     "confidence": result["confidence"]
+                 }
+
+                 if return_probabilities:
+                     batch_result["probabilities"] = result["probabilities"]
+
+                 results.append(batch_result)
+
+             except Exception as e:
+                 results.append({
+                     "filename": file.filename,
+                     "success": False,
+                     "message": str(e)
+                 })
+
+         return {
+             "success": True,
+             "total_files": len(files),
+             "results": results,
+             "timestamp": datetime.now().isoformat()
+         }
+
+     except Exception as e:
+         logger.error(f"Batch prediction error: {e}")
+         return JSONResponse(
+             status_code=500,
+             content={
+                 "success": False,
+                 "message": f"Batch prediction failed: {str(e)}"
+             }
+         )
+
+ if __name__ == "__main__":
+     print("🚀 Starting the AI product classification server!")
+     print("📊 Model: 70.61% mega ensemble")
+     print("🌐 API docs: http://localhost:8000/docs")
+     print("🔍 Health check: http://localhost:8000/health")
+
+     uvicorn.run(
+         app,
+         host="0.0.0.0",
+         port=8000,
+         log_level="info"
+     )
config.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "model_name": "secondhand-market-classifier",
+   "version": "1.0.0",
+   "architecture": "ensemble",
+   "accuracy": 70.61,
+   "num_models": 7,
+   "input_shape": [
+     224,
+     224,
+     3
+   ],
+   "num_classes": 5,
+   "class_names": [
+     "가구",
+     "생활용품",
+     "전자기기_도서",
+     "취미_게임",
+     "패션_뷰티"
+   ],
+   "preprocessing": {
+     "resize": [
+       224,
+       224
+     ],
+     "normalize": "divide_by_255",
+     "color_mode": "RGB"
+   },
+   "training_info": {
+     "dataset_size": 3346,
+     "training_method": "transfer_learning",
+     "data_augmentation": true,
+     "cross_validation": "k_fold",
+     "optimizer": "AdamW",
+     "loss_function": "categorical_crossentropy"
+   },
+   "created_at": "2025-06-23T13:34:19.126875",
+   "framework": "tensorflow",
+   "python_version": "3.9+",
+   "dependencies": [
+     "tensorflow>=2.10.0",
+     "fastapi>=0.68.0",
+     "uvicorn>=0.15.0",
+     "pillow>=8.0.0",
+     "numpy>=1.21.0"
+   ]
+ }
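
The `preprocessing` block in this config mirrors what `preprocess_image` in `api_server.py` does. A minimal sketch of a client that derives its preprocessing from `config.json` (file paths are placeholders):

```python
import json

import numpy as np
from PIL import Image

with open("config.json") as f:
    cfg = json.load(f)

# Apply the config-specified color mode, size, and normalization
image = Image.open("sofa.jpg").convert(cfg["preprocessing"]["color_mode"])
image = image.resize(tuple(cfg["preprocessing"]["resize"]))

x = np.array(image, dtype=np.float32)
if cfg["preprocessing"]["normalize"] == "divide_by_255":
    x /= 255.0
x = np.expand_dims(x, axis=0)  # batch dimension, shape (1, 224, 224, 3)
```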
models/ensemble/DenseNet121_best.keras ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b336d545ccf034d620529af8bf532168e90790e1d5c2cbb16fcc0ff03e4af58f
+ size 37601837
models/ensemble/EfficientNetB0_best.keras ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ff61f9ccb6d8b131b098d9fd827168d2df2ba33ba1f2c697904171d1f9dfba2
+ size 21417453
models/ensemble/EfficientNetB1_best.keras ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f79ee2a1134c867ebbf5986c4da77addc51ba6b70220525adb94d3a62183ffc
+ size 37008157
models/ensemble/EfficientNetB2_best.keras ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbf3e5616049e8a1145523bea891d5f3401785c0568627320723ef1c7801e1b9
+ size 42567909
models/ensemble/InceptionV3_best.keras ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4fb7e15de874bd526ff5e0ab8b2c55e299fb42ea268b7164a402dba5144bd38
+ size 102538020
models/ensemble/MobileNetV3Large_best.keras ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82168d023fa53ec8ef76c03806bd908be813ab379875bf29004c105ee81d96c7
+ size 16063328
models/ensemble/ResNet50V2_best.keras ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:355d47d1f1eab0878ec949c6b56e835c40dca34af7570be1389dd6a76dc6aaef
+ size 109168256