we really all are going to make it, aren’t we? 2×3090 setup.

By AI Maestro · May 14, 2026 · 1 min read
A British AI enthusiast has reported significant progress in running large language models (LLMs) locally on consumer hardware: a dual-RTX-3090 setup. The user, who had been running models under Windows Subsystem for Linux (WSL2), switched to Ubuntu as a dual boot alongside their existing system. The change brought a substantial improvement: over 4,000 tokens per second of prompt processing (PP) and 113 tokens per second of token generation (TG), without needing NVLink. Under WSL2 they had been getting roughly 30 TG/s and 400 PP/s.
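
Throughput figures like these are simply token counts divided by elapsed wall-clock time for each phase of a request. A minimal sketch (the token counts and timings below are illustrative, chosen to land near the reported numbers, not taken from the post):

```python
def tokens_per_second(num_tokens: int, elapsed_s: float) -> float:
    """Throughput in tokens/s for one phase (prompt processing or generation)."""
    return num_tokens / elapsed_s

# Hypothetical measurements in the ballpark of the reported results:
pp = tokens_per_second(8192, 2.0)   # 8192-token prompt ingested in 2.0 s
tg = tokens_per_second(512, 4.53)   # 512 tokens generated in 4.53 s
print(f"prompt processing: {pp:.0f} t/s, generation: {tg:.1f} t/s")
```

Prompt processing is batched and compute-bound, while generation emits one token at a time and is memory-bandwidth-bound, which is why the two rates differ by an order of magnitude.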

This upgrade is noteworthy because it demonstrates that consumer hardware can run LLMs efficiently enough to be a viable alternative to cloud-based services. The user is excited about the potential of smaller models like Qwen 3, which fit within the setup's 48GB of VRAM, to deliver impressive performance and utility, and suggests such models might reach frontier-level intelligence within the next year.
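
Whether a model fits in 48GB (2×24GB) comes down to a back-of-the-envelope calculation: parameter count times bits per weight, plus headroom for the KV cache. A rough sketch (the 32B parameter count and ~4.5 effective bits/weight for a 4-bit quantization are illustrative assumptions, not figures from the post):

```python
def weights_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB for a quantized model.

    bits_per_weight is the *effective* rate: 4-bit quantization schemes
    typically cost a bit more than 4 due to scales and metadata.
    """
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# Assumed example: a 32B-parameter model at ~4.5 effective bits/weight
# fits comfortably in 48 GB of VRAM, leaving room for the KV cache.
print(f"{weights_gib(32, 4.5):.1f} GiB")  # ≈ 16.8 GiB
```

The remaining VRAM budget goes to the KV cache, which grows with context length, so long-context use eats into the apparent headroom.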

  • Running LLMs locally is now feasible on consumer hardware.
  • Smaller models like Qwen could match or exceed cloud-based services in efficiency and cost.
  • This development opens new possibilities for AI research, deployment, and privacy.


Originally published at reddit.com. Curated by AI Maestro.
