Piotr
piotr-ai
21 followers · 9 following
AI & ML interests
None yet
Recent Activity
reacted to Kseniase's post with 👍 · about 24 hours ago
15 Outstanding Research Papers from NeurIPS 2025

NeurIPS 2025, as a premier annual event in machine learning and computational neuroscience, tackles major topics like the future of AI, current research, and the most difficult challenges. While we're not attending this year, we're closely following the updates, and today we've pulled together a quick, easy-to-digest roundup of a few standout papers so you can jump in without getting overwhelmed.

Here is a list of 15 papers from NeurIPS 2025, including 8 top research papers that received awards, along with 7 others that caught our attention:

1. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
→ https://neurips.cc/virtual/2025/loc/san-diego/test-of-time/128328
Test of Time Award winner. Introduces the RPN, a small convnet that predicts objectness and boxes on shared features, enabling Faster R-CNN to share computation and run at around 5 fps on a GPU.

2. Artificial Hivemind: The Open-Ended Homogeneity of LMs (and Beyond)
→ https://neurips.cc/virtual/2025/loc/san-diego/poster/121421
Releases a huge open-ended prompt dataset, showing that LLMs often fall into an "artificial hivemind" – generating surprisingly similar answers – and measuring diversity collapse.

3. Optimal Mistake Bounds for Transductive Online Learning
→ https://neurips.cc/virtual/2025/loc/san-diego/poster/119098
Settles a 30-year-old question by showing how much unlabeled data helps in online learning – it gives a precise quadratic advantage with tight matching bounds.

4. Gated Attention for LLMs: Non-linearity, Sparsity, and Attention-Sink-Free
→ https://neurips.cc/virtual/2025/loc/san-diego/poster/120216
Demonstrates how gating actually affects attention: a simple sigmoid gate after Scaled Dot-Product Attention (SDPA) boosts performance, stability, and long-context behavior by adding useful nonlinearity and sparse modulation.

Read further below ⬇️

Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe
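The gating idea in item 4 can be sketched minimally in NumPy: compute standard scaled dot-product attention, then modulate its output elementwise with a sigmoid gate. The gate's conditioning on the query and the parameters `w_gate`/`b_gate` are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_sdpa(q, k, v, w_gate, b_gate):
    """Scaled dot-product attention followed by an elementwise sigmoid gate."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)  # (seq, seq) attention logits
    attn_out = softmax(scores) @ v                # standard SDPA output
    # Hypothetical gate placement: sigmoid gate conditioned on the query,
    # applied elementwise to the attention output (values in (0, 1))
    gate = sigmoid(q @ w_gate + b_gate)
    return attn_out * gate
```

Because each gate value lies in (0, 1), the gate can only attenuate the SDPA output, which is one way such modulation introduces sparsity.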
liked a model · 5 days ago
microsoft/VibeVoice-Realtime-0.5B
published a model · 12 days ago
piotr-ai/polanka_3.6b_exp_WIP_251127_gguf
Organizations
None yet
piotr-ai's datasets (1)
piotr-ai/gpt-oss20b-samples_deduplicated
Viewer · Updated Aug 12 · 183k · 51