- A British Reddit user benchmarked their NVIDIA RTX 5090 GPU, measuring prompt processing and token-generation throughput at different power limits.
- The power limit was swept from 400 W to 600 W in 25 W increments. The highest observed draw was 592 W at the 600 W setting, and even with no cap applied the card stabilized around 580 W.
- The user noted a marked difference from their previous RTX 4090 setup, whose peak readings were often higher by about 10-12 W. The post suggests that newer cards like the RTX 50 series may show more pronounced power spikes under otherwise similar conditions.
- The benchmark showed prompt processing to be noticeably more sensitive to the power limit than token generation, which scaled nearly linearly across these settings.
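The sweep described above can be sketched as a short analysis script. The throughput figures below are hypothetical stand-ins, not the original poster's measurements; on a real system the limit at each step would be applied externally with `nvidia-smi -pl <watts>` (requires root) before running the inference benchmark.

```python
# Hypothetical power-limit sweep (illustrative numbers only, not the
# Redditor's actual data). Each limit would be applied externally with:
#   sudo nvidia-smi -pl <watts>
limits_w = list(range(400, 625, 25))  # 400 W .. 600 W in 25 W steps

# Assumed tokens/sec at each limit: token generation scales ~linearly with
# the cap, while prompt processing flattens out near the top of the range.
gen_tps = [95 + 0.10 * (w - 400) for w in limits_w]               # ~linear
prompt_tps = [800, 900, 980, 1040, 1080, 1105, 1120, 1128, 1132]  # saturating

def marginal_gain(series, step_w=25):
    """Throughput gained per extra watt across each 25 W step."""
    return [(b - a) / step_w for a, b in zip(series, series[1:])]

gen_gain = marginal_gain(gen_tps)
prompt_gain = marginal_gain(prompt_tps)

# Near-linear generation: the per-watt gain stays flat across the sweep.
# Limit-sensitive prompt processing: the per-watt gain shrinks as the cap rises.
print(f"gen gain (t/s per W):    {[round(g, 3) for g in gen_gain]}")
print(f"prompt gain (t/s per W): {[round(g, 3) for g in prompt_gain]}")
```

Printing the marginal gains makes the post's observation concrete: a flat per-watt gain means a linear curve, while a shrinking one means diminishing returns from raising the limit.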
Originally published at reddit.com. Curated by AI Maestro.

![[Benchmark] 5090RTX: Prompt Parsing, Token Generation and Power Level](https://ai-maestro.online/wp-content/uploads/2026/05/benchmark-5090rtx-promt-parsing-token-generation-and-power--1024x576.jpg)