GPU Core Clock stuck at high speed after Stress GPU test


raylau


Whenever I run the System Stability Test with the Stress GPU(s) option enabled, disabling that option afterwards leaves my GPU stuck at a high clock speed (1005 MHz instead of its 324 MHz idle clock), even though the GPU test has stopped running.

 

The only way to fix this is to go into Device Manager and disable, then re-enable, the card.  It will still come back at 1005 MHz but will settle back down to 324 MHz within a few seconds.  (Or, of course, reboot.)

 

I have an EVGA GTX 780 Ti SC ACX (03G-P4-2884-KR) running under Windows 8.1 Pro.


Thanks for the info. Alas, I don't think I know enough about AIDA64's GPU test to properly report this to NVIDIA. Other GPU tests (FurMark, Heaven, Valley, 3DMark) don't show this behavior, and neither does ending a Folding@home session, so it isn't specific to true 3D versus GPU-compute-only loads.

Well, I suppose it is what it is, but if the issue also shows up with your lab's NVIDIA cards, perhaps you could help let NVIDIA know?

Other than this, and an occasional access violation when changing preferences (seen 3 or 4 times but not reproducible even semi-reliably, or I would report it), AIDA64 is one of the most stable and useful hardware info tools available!



We've done a few test runs with various GeForce cards, and it seems the GPU clock gets stuck at 3D rates only when you keep the AIDA64 System Stability Test running and just untick the GPU subtest.  That keeps the OpenCL context open, even though no OpenCL kernels (compute tasks) are executing.  In that case ForceWare keeps the GPU running at high clock speeds for whatever reason, and only lets the clocks drop down to power-saving (low) rates when you press the Stop button on the System Stability Test window.  That's because pressing the Stop button makes AIDA64 close and invalidate the OpenCL context.

While AMD Catalyst drivers vary GPU clocks based on actual GPU load (utilization), ForceWare drivers seem to key off open OpenCL/Direct3D/OpenGL contexts.  That's just a different approach, although quite frankly we would prefer all drivers to work by GPU utilization :)
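If you want to watch this happen yourself, you can poll the current core clock from the command line while ticking and unticking the GPU subtest. Below is a minimal sketch that does this with NVIDIA's `nvidia-smi` tool; it assumes `nvidia-smi` is on your PATH, and the `parse_mhz` helper and its fallback demo are my own illustrative additions, not part of AIDA64 or the driver.

```python
import re
import shutil
import subprocess

def parse_mhz(text: str) -> int:
    """Pull the numeric MHz value out of a line like '324 MHz'."""
    match = re.search(r"(\d+)\s*MHz", text)
    if not match:
        raise ValueError(f"no MHz value found in {text!r}")
    return int(match.group(1))

def current_core_clock() -> int:
    """Ask nvidia-smi for the current graphics (core) clock of GPU 0."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=clocks.current.graphics",
         "--format=csv,noheader"],
        text=True,
    )
    return parse_mhz(out)

if __name__ == "__main__":
    if shutil.which("nvidia-smi"):
        # Run this in a loop (or under `watch`) while toggling the GPU subtest.
        print(f"GPU core clock: {current_core_clock()} MHz")
    else:
        # No NVIDIA driver available; just demonstrate the parsing.
        print(f"parsed: {parse_mhz('1005 MHz')} MHz")
```

On an affected setup you would expect the reported clock to stay at 3D rates (e.g. 1005 MHz) after unticking the subtest, and only fall back to idle (e.g. 324 MHz) once Stop is pressed and the OpenCL context is released.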

