<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta name="description" content="A comprehensive, step-by-step guide to running the GLM-4.6 AI model">
  <meta name="keywords" content="GLM-4.6, AI, artificial intelligence, language model, tutorial, guide">
  <meta name="author" content="AI Tutorial Hub">
  <title>GLM-4.6 AI Model - A Comprehensive Setup Guide</title>
  <link rel="stylesheet" href="assets/css/styles.css">
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700;800&family=JetBrains+Mono:wght@400;500;600&display=swap" rel="stylesheet">
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css">
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/themes/prism-tomorrow.min.css">
</head>
<body>
  <header class="header">
    <nav class="navbar">
      <div class="nav-container">
        <div class="nav-logo">
          <i class="fas fa-brain"></i>
          <span>GLM-4.6 Guide</span>
        </div>
        <ul class="nav-menu">
          <li class="nav-item">
            <a href="#home" class="nav-link">Home</a>
          </li>
          <li class="nav-item">
            <a href="#requirements" class="nav-link">Requirements</a>
          </li>
          <li class="nav-item">
            <a href="#tutorial" class="nav-link">Tutorial</a>
          </li>
          <li class="nav-item">
            <a href="#examples" class="nav-link">Examples</a>
          </li>
          <li class="nav-item">
            <a href="#troubleshooting" class="nav-link">Troubleshooting</a>
          </li>
          <li class="nav-item">
            <button class="theme-toggle" id="themeToggle" aria-label="Toggle theme">
              <i class="fas fa-moon"></i>
            </button>
          </li>
        </ul>
        <div class="hamburger">
          <span class="bar"></span>
          <span class="bar"></span>
          <span class="bar"></span>
        </div>
      </div>
    </nav>
  </header>
  <main>
    <section id="home" class="hero">
      <div class="hero-bg">
        <div class="gradient-orb orb-1"></div>
        <div class="gradient-orb orb-2"></div>
        <div class="gradient-orb orb-3"></div>
      </div>
      <div class="hero-container">
        <div class="hero-content">
          <div class="hero-badge">
            <i class="fas fa-rocket"></i>
            <span>A Modern AI Model</span>
          </div>
          <h1 class="hero-title">
            Run the <span class="gradient-text">GLM-4.6</span> AI Model
          </h1>
          <p class="hero-subtitle">
            A comprehensive step-by-step guide to configuring and running
            the latest GLM-4.6 language model from Zhipu AI
          </p>
          <div class="hero-buttons">
            <a href="#tutorial" class="btn btn-primary">
              <i class="fas fa-play"></i>
              Start the Tutorial
            </a>
            <a href="#requirements" class="btn btn-secondary">
              <i class="fas fa-list-check"></i>
              Check the Requirements
            </a>
          </div>
          <div class="hero-stats">
            <div class="stat">
              <span class="stat-number">4.6</span>
              <span class="stat-label">Model Version</span>
            </div>
            <div class="stat">
              <span class="stat-number">128K</span>
              <span class="stat-label">Context</span>
            </div>
            <div class="stat">
              <span class="stat-number">10+</span>
              <span class="stat-label">Languages</span>
            </div>
          </div>
        </div>
        <div class="hero-visual">
          <div class="ai-card">
            <div class="card-header">
              <div class="card-dots">
                <span></span>
                <span></span>
                <span></span>
              </div>
              <span class="card-title">GLM-4.6.py</span>
            </div>
            <div class="card-content">
              <pre><code class="language-python">from transformers import pipeline

# Load the GLM chat model
chat = pipeline(
    "text-generation",
    model="THUDM/glm-4-9b-chat",
    trust_remote_code=True
)

# Generate a response
response = chat("Hello, GLM-4.6!")
print(response)</code></pre>
            </div>
          </div>
        </div>
      </div>
    </section>
    <section id="requirements" class="requirements">
      <div class="container">
        <div class="section-header">
          <h2 class="section-title">System Requirements</h2>
          <p class="section-subtitle">Make sure your system meets the minimum requirements</p>
        </div>
        <div class="requirements-grid">
          <div class="requirement-card">
            <div class="req-icon">
              <i class="fas fa-microchip"></i>
            </div>
            <h3>Processor</h3>
            <ul class="req-list">
              <li>Minimum: Intel i5 / AMD Ryzen 5</li>
              <li>Recommended: Intel i7 / AMD Ryzen 7</li>
              <li>Helpful: AVX2 support</li>
            </ul>
          </div>
          <div class="requirement-card">
            <div class="req-icon">
              <i class="fas fa-memory"></i>
            </div>
            <h3>RAM</h3>
            <ul class="req-list">
              <li>Minimum: 16 GB RAM</li>
              <li>Recommended: 32 GB RAM</li>
              <li>Optimal: 64 GB RAM</li>
            </ul>
          </div>
          <div class="requirement-card">
            <div class="req-icon">
              <i class="fas fa-hdd"></i>
            </div>
            <h3>Storage</h3>
            <ul class="req-list">
              <li>Minimum: 50 GB of free space</li>
              <li>Recommended: NVMe SSD</li>
              <li>Format: ext4 / NTFS</li>
            </ul>
          </div>
          <div class="requirement-card">
            <div class="req-icon">
              <i class="fas fa-desktop"></i>
            </div>
            <h3>Graphics Card</h3>
            <ul class="req-list">
              <li>Minimum: GTX 1660 (6 GB)</li>
              <li>Recommended: RTX 3060 (12 GB)</li>
              <li>Optimal: RTX 4090 (24 GB)</li>
            </ul>
          </div>
        </div>
        <div class="software-requirements">
          <h3>Software Requirements</h3>
          <div class="soft-grid">
            <div class="soft-item">
              <i class="fab fa-python"></i>
              <span>Python 3.8+</span>
            </div>
            <div class="soft-item">
              <i class="fab fa-ubuntu"></i>
              <span>Linux / Windows 10+</span>
            </div>
            <div class="soft-item">
              <i class="fab fa-docker"></i>
              <span>Docker (optional)</span>
            </div>
            <div class="soft-item">
              <i class="fas fa-code-branch"></i>
              <span>Git</span>
            </div>
          </div>
        </div>
      </div>
    </section>
    <section id="tutorial" class="tutorial">
      <div class="container">
        <div class="section-header">
          <h2 class="section-title">Step by Step</h2>
          <p class="section-subtitle">Follow the instructions below to get GLM-4.6 up and running</p>
        </div>
        <div class="tutorial-steps">
          <div class="step" data-step="1">
            <div class="step-header">
              <div class="step-number">
                <span>1</span>
              </div>
              <div class="step-content">
                <h3>Install Python and a Virtual Environment</h3>
                <p>The first step is to prepare the Python environment</p>
              </div>
            </div>
            <div class="step-details">
              <div class="code-block">
                <div class="code-header">
                  <span>Terminal</span>
                  <button class="copy-btn" data-copy="install-python">
                    <i class="fas fa-copy"></i>
                  </button>
                </div>
                <pre><code class="language-bash" id="install-python"># Check the Python version
python --version

# Create a virtual environment
python -m venv glm-env

# Activate the environment (Windows)
glm-env\Scripts\activate

# Activate the environment (Linux/Mac)
source glm-env/bin/activate</code></pre>
              </div>
            </div>
          </div>
          <div class="step" data-step="2">
            <div class="step-header">
              <div class="step-number">
                <span>2</span>
              </div>
              <div class="step-content">
                <h3>Install the Libraries</h3>
                <p>Install the required packages with pip</p>
              </div>
            </div>
            <div class="step-details">
              <div class="code-block">
                <div class="code-header">
                  <span>Terminal</span>
                  <button class="copy-btn" data-copy="install-deps">
                    <i class="fas fa-copy"></i>
                  </button>
                </div>
                <pre><code class="language-bash" id="install-deps"># Install the core libraries
pip install torch torchvision torchaudio
pip install transformers
pip install accelerate
pip install bitsandbytes
pip install sentencepiece

# Install additional tools
pip install gradio
pip install streamlit</code></pre>
              </div>
            </div>
          </div>
          <div class="step" data-step="3">
            <div class="step-header">
              <div class="step-number">
                <span>3</span>
              </div>
              <div class="step-content">
                <h3>Download the Model</h3>
                <p>Download the GLM-4.6 model from Hugging Face</p>
              </div>
            </div>
            <div class="step-details">
              <div class="code-block">
                <div class="code-header">
                  <span>Python</span>
                  <button class="copy-btn" data-copy="download-model">
                    <i class="fas fa-copy"></i>
                  </button>
                </div>
                <pre><code class="language-python" id="download-model">from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Model configuration
model_name = "THUDM/glm-4-9b-chat"

# Download the tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True
)

# Download the model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)</code></pre>
              </div>
            </div>
          </div>
          <div class="step" data-step="4">
            <div class="step-header">
              <div class="step-number">
                <span>4</span>
              </div>
              <div class="step-content">
                <h3>Configure and Run</h3>
                <p>Configure the parameters and run the model</p>
              </div>
            </div>
            <div class="step-details">
              <div class="code-block">
                <div class="code-header">
                  <span>Python</span>
                  <button class="copy-btn" data-copy="run-model">
                    <i class="fas fa-copy"></i>
                  </button>
                </div>
                <pre><code class="language-python" id="run-model">def generate_response(prompt, max_new_tokens=512):
    # Tokenize the input
    inputs = tokenizer(
        prompt,
        return_tensors="pt",
        padding=True,
        truncation=True
    ).to(model.device)

    # Generate a response
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=0.7,
            top_p=0.9,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id
        )

    # Decode only the newly generated tokens (skip the echoed prompt)
    response = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True
    )
    return response

# Test it
prompt = "Hello! How are you?"
response = generate_response(prompt)
print(response)</code></pre>
              </div>
            </div>
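            <div class="step-details">
              <p>GLM chat checkpoints ship with a chat template, and prompts usually work better when passed through it rather than as raw text. A minimal sketch, assuming the <code>tokenizer</code> and <code>model</code> from step 3 are already loaded:</p>
              <div class="code-block">
                <div class="code-header">
                  <span>Python</span>
                </div>
                <pre><code class="language-python"># Format the conversation with the model's chat template
messages = [
    {"role": "user", "content": "Hello! How are you?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens
print(tokenizer.decode(
    outputs[0][input_ids.shape[1]:],
    skip_special_tokens=True
))</code></pre>
              </div>
            </div>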
          </div>
          <div class="step" data-step="5">
            <div class="step-header">
              <div class="step-number">
                <span>5</span>
              </div>
              <div class="step-content">
                <h3>Build a User Interface</h3>
                <p>Create a simple interface with Gradio</p>
              </div>
            </div>
            <div class="step-details">
              <div class="code-block">
                <div class="code-header">
                  <span>Python</span>
                  <button class="copy-btn" data-copy="create-ui">
                    <i class="fas fa-copy"></i>
                  </button>
                </div>
                <pre><code class="language-python" id="create-ui">import gradio as gr

def chat_interface(message, history):
    response = generate_response(message)
    return response

# Create the Gradio interface
demo = gr.ChatInterface(
    fn=chat_interface,
    title="GLM-4.6 Chat",
    description="Chat with the GLM-4.6 model",
    examples=[
        "How does artificial intelligence work?",
        "Write a short poem about spring",
        "Explain the theory of relativity in simple terms"
    ]
)

# Launch the interface
if __name__ == "__main__":
    demo.launch(share=True)</code></pre>
              </div>
            </div>
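            <div class="step-details">
              <p>Step 2 also installed Streamlit, which the tutorial does not otherwise use. As an alternative front end, a minimal sketch, assuming the <code>generate_response</code> helper from step 4 is defined in the same script (save it as <code>app.py</code> and run <code>streamlit run app.py</code>):</p>
              <div class="code-block">
                <div class="code-header">
                  <span>Python</span>
                </div>
                <pre><code class="language-python">import streamlit as st

st.title("GLM-4.6 Chat")

# Keep the conversation in session state across reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask GLM-4.6..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    # generate_response comes from step 4
    reply = generate_response(prompt)
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)</code></pre>
              </div>
            </div>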
          </div>
        </div>
      </div>
    </section>
    <section id="examples" class="examples">
      <div class="container">
        <div class="section-header">
          <h2 class="section-title">Usage Examples</h2>
          <p class="section-subtitle">Illustrated examples of what GLM-4.6 can do</p>
        </div>
        <div class="examples-grid">
          <div class="example-card">
            <div class="example-header">
              <i class="fas fa-comments"></i>
              <h3>Chatbot</h3>
            </div>
            <div class="example-content">
              <p>Build an intelligent conversational assistant</p>
              <div class="example-tags">
                <span class="tag">NLP</span>
                <span class="tag">Chat</span>
                <span class="tag">AI</span>
              </div>
            </div>
          </div>
          <div class="example-card">
            <div class="example-header">
              <i class="fas fa-language"></i>
              <h3>Translation</h3>
            </div>
            <div class="example-content">
              <p>Translate text between 10+ languages</p>
              <div class="example-tags">
                <span class="tag">Translate</span>
                <span class="tag">Multi-lang</span>
              </div>
            </div>
          </div>
          <div class="example-card">
            <div class="example-header">
              <i class="fas fa-pen-fancy"></i>
              <h3>Text Generation</h3>
            </div>
            <div class="example-content">
              <p>Write articles, emails, and marketing copy</p>
              <div class="example-tags">
                <span class="tag">Content</span>
                <span class="tag">Writing</span>
              </div>
            </div>
          </div>
          <div class="example-card">
            <div class="example-header">
              <i class="fas fa-code"></i>
              <h3>Coding Assistant</h3>
            </div>
            <div class="example-content">
              <p>Help with writing and debugging code</p>
              <div class="example-tags">
                <span class="tag">Code</span>
                <span class="tag">Dev</span>
              </div>
            </div>
          </div>
        </div>
      </div>
    </section>
    <section id="troubleshooting" class="troubleshooting">
      <div class="container">
        <div class="section-header">
          <h2 class="section-title">Troubleshooting</h2>
          <p class="section-subtitle">Solutions to the most common problems</p>
        </div>
        <div class="faq-list">
          <div class="faq-item">
            <div class="faq-question">
              <h3>Not enough VRAM</h3>
              <i class="fas fa-chevron-down"></i>
            </div>
            <div class="faq-answer">
              <p>Solutions:</p>
              <ul>
                <li>Use 8-bit or 4-bit quantization</li>
                <li>Reduce the batch size</li>
                <li>Use a smaller model variant</li>
                <li>Consider CPU inference</li>
              </ul>
              <div class="code-snippet">
                <pre><code class="language-python"># 8-bit quantization
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="auto"
)</code></pre>
              </div>
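              <p>The list above also mentions 4-bit quantization. A sketch using <code>BitsAndBytesConfig</code>, the interface newer versions of transformers recommend for quantized loading (it assumes <code>model_name</code> from step 3):</p>
              <div class="code-snippet">
                <pre><code class="language-python">from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit quantization cuts VRAM use roughly in half again vs 8-bit
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True
)</code></pre>
              </div>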
            </div>
          </div>
          <div class="faq-item">
            <div class="faq-question">
              <h3>Slow response generation</h3>
              <i class="fas fa-chevron-down"></i>
            </div>
            <div class="faq-answer">
              <p>Optimizations:</p>
              <ul>
                <li>Use Flash Attention</li>
                <li>Increase max_length only when necessary</li>
                <li>U