September 26th, 2010, 02:31 AM #1
Join Date: Sep 2010
Graphics card memory clock half of what it should be
Hello, and thanks in advance for your help.
I have an NVIDIA EVGA GT 240 SuperClocked (512 MB GDDR5) graphics card, and for some reason its memory clock registers as only half as fast in GPU-Z, and the card is noticeably slower than every other source I've checked says it should be. This is a brand new card, but as far as I know this problem has always been present.
Whenever I increase the memory clock using the included OC software, Precision, the memory clock increases, but only by half the stated amount, making me think the GDDR5 memory somehow became SDR. Trying to bring the clock up to double what it should be (so that it evens out to stock speeds) only results in a BSOD.
September 26th, 2010, 02:36 AM #2
The number you're seeing is actually correct. Multiply it by 2 and you'll have the proper number you're looking for.
So if a monitoring program says it's running at, say, 333 MHz, take it ×2 and you'll have 666 MHz.
Google tells me the stock memory clock on that card is 1800 MHz, so in a monitoring program you're probably only seeing 900 MHz. Right?
If so, it's correct and nothing to worry about.
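The doubling described above can be sketched in a few lines of Python (the function name is just for illustration): monitoring tools report the real clock, while spec sheets quote the effective data rate, which is the real clock times the transfers per cycle.

```python
# Monitoring tools (GPU-Z etc.) report the real memory clock; spec sheets
# quote the effective rate = real clock x transfers per clock cycle.

def effective_clock_mhz(reported_mhz: float, transfers_per_cycle: int = 2) -> float:
    """Effective (marketing) clock from the clock a monitoring tool reports."""
    return reported_mhz * transfers_per_cycle

# Double data rate memory transfers twice per clock:
print(effective_clock_mhz(333))  # 666.0
print(effective_clock_mhz(900))  # 1800.0
```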
September 26th, 2010, 03:13 AM #3
Join Date: Sep 2010
Well, here's the issue I've seen: using the same version of GPU-Z and comparing against a screenshot of the same model of card (the one on Legit Reviews, to be exact), the only differences are the memory clock (900 MHz on mine versus 1800 on theirs) and the bandwidth (about 52 GB/s compared to their 115 GB/s).
September 26th, 2010, 04:44 AM #4
Um... 115 GB/s of bandwidth on a GT 240... I think not.
They're not getting anywhere close to that kind of speed. Only the GTX 480 gets close (it actually exceeds that; the GTX 470 only does about 134 GB/s).
GT 240 GDDR3 vs GT 240 GDDR5 Video Card Comparison - GPUReview.com
These numbers aren't exact, but should be enough to go on.
Keep in mind, those numbers are only the bandwidth of the path from GPU to memory, and not indicative of the card's overall performance.
My GTX 285 gets 160 GB/s of bandwidth, and it's slower than a GTX 470, which only has around 134 GB/s.
The main reason is that my GTX 285 has a 512-bit memory bus, while the GTX 470 only has a 320-bit bus (the GTX 480 is 384-bit).
My memory clock is 1250 MHz, or 2500 MHz effective: 512 bits × 2500 MHz = 1,280,000 Mb/s, divided by 8 bits per byte = 160,000 MB/s = 160 GB/s.
The GTX 470 is 1674 MHz, or 3348 MHz effective: 320 bits × 3348 MHz = 1,071,360 Mb/s, divided by 8 bits per byte = 133,920 MB/s ≈ 134 GB/s.
GT 240 = 1700 MHz (rough estimate; some cards are more, some are less).
For this instance we'll use your card's numbers: a 900 MHz actual clock comes out to 1800 MHz double-pumped, but the effective data rate is 3600 MT/s. GDDR5, despite the name, transfers data four times per clock (unlike GDDR3/GDDR4, which are double data rate), along with a host of other enhancements. So the actual clock is 900 MHz, with a quad data rate giving 3600 MT/s effective.
Your GT 240: 3600 MT/s × 128 bits = 460,800 Mb/s, divided by 8 bits per byte = 57,600 MB/s = 57.6 GB/s of bandwidth.
So 52-57 GB/s of bandwidth is about right for your card.
I know nV News has a quick overview of the card, and it shows 115 GB/s as the bandwidth, but they fudged the calculation: they should have divided by 8 (8 bits per byte) instead of 4.
1800 MHz, 3600 MT/s effective, × 128 bits = 460,800; divided by 4 gives their 115,200 figure.
It should have been 8, because multiplying the bus width by the data rate yields a total in bits; to get bytes you divide by 8, since there are 8 bits in every byte. So their math was wrong in that review, found here: nV News - EVGA GeForce GT 240 SuperClocked Review - Page 1 of 7
It's also possible they multiplied only 1800 MHz by 128 bits and then divided by 2, thinking "since it's DDR (double data rate), we'll divide by 2."
Either way, that review's bandwidth figure is off, whether they got the number from their own math or from another source. (The funny thing is they got the math right on the 9800 GT they listed there.)
Bottom line is that your card should be in the 50-60 GB/s memory bandwidth range, depending on how much OC there is on the memory.
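The arithmetic above boils down to one formula, sketched here in Python (function and variable names are just for illustration): bandwidth in GB/s = bus width in bits × effective data rate in MT/s ÷ 8 bits per byte ÷ 1000.

```python
# Peak memory bandwidth from bus width and effective data rate,
# following the arithmetic in the post above.

def bandwidth_gbs(bus_width_bits: int, effective_mt_s: float) -> float:
    """Peak memory bandwidth in GB/s."""
    megabits_per_second = bus_width_bits * effective_mt_s  # total bits moved per us
    return megabits_per_second / 8 / 1000  # 8 bits per byte, 1000 MB per GB

print(bandwidth_gbs(512, 2500))  # GTX 285:          160.0
print(bandwidth_gbs(320, 3348))  # GTX 470:          133.92
print(bandwidth_gbs(128, 3600))  # GT 240 (GDDR5):   57.6
```

Dividing by 4 instead of 8 in the last line would reproduce the review's erroneous 115.2 GB/s figure.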
September 26th, 2010, 12:22 PM #5
Join Date: Sep 2010
If it was merely a clerical error, then why did their card do significantly better on the same benchmarks I ran, even though mine was at a smaller resolution? Same issue with games. I'd like to think that the people benchmarking these cards aren't just doing calculations and are actually running the utilities they say they are. Sure, 26 fps at 1920×1200 with 0x antialiasing isn't THAT fast, but I only get around 9 fps at 1280×1024. Now, comparing the relative load temps (mine never got above 45 degrees with only a stock cooler; theirs went all the way up into the 75-degree range, and I know 75 degrees is a rather hot operating temperature, but that's the point of the benchmark), can I conclude that my monitor is the limiting factor here, or is there some kind of speed muzzle nVidia slipped into the drivers? (I've tried three different driver versions.)
September 26th, 2010, 12:40 PM #6
Join Date: Jun 2009
EVGA Precision should be able to tell you pretty precisely what you have.
September 26th, 2010, 12:49 PM #7
The OP has yet to mention the rest of his system specs. For all we know, the benchmarks he's reading could have been done on an OC'd i7 with a 9500 GT and 12 GB of RAM, which may be WAY different from what he's actually working with.
You need to post the FULL SYSTEM specs so we have a clue what we're trying to troubleshoot here. The benchmarks he's looking at would also be helpful.
September 26th, 2010, 12:53 PM #8
I PM'd Karma to see if he can chime in on this.
I tried to answer the memory bandwidth concerns, but it now seems this thread has evolved into something more than what was originally stated in the first and second posts from the original poster.
Until more specific, detailed info is provided, our answers will only be guesses, shots in the dark, vague at best.
It would help to list the actual reviews, with links, so we have a reference for what you're talking about.
Also post YOUR complete hardware specs: CPU, motherboard, RAM, operating system, the video driver version you're using, the versions of the benchmarks you're running, the benchmarks themselves, any other software you're using for overclocking or temp monitoring, the games being used as benchmarks, etc.
Monitor specs couldn't hurt either.
What are your power supply specs?
With that said, be aware that the systems those reviews used could simply have better overall hardware than what you have now, yielding better results in the same benchmarks.
Until we know exactly what you have in comparison to the reviews (another reason it'd be nice to link them here), it could be something as simple as a CPU bottlenecking the card, but until we know what you're running we can't answer anything with any certainty.
I already PM'd Karma to see if he could chime in again and maybe offer some ideas or solutions, but in the meantime I'd recommend posting any pertinent info that may help with the problem.
Edit: Thanks KK, I must have been typing out this reply when you read the PM and posted.
September 26th, 2010, 01:08 PM #9
Join Date: Sep 2010
My system is an AMD Phenom X4 quad core at 2.75 GHz on an ASUS M3A78-EM mobo, with a 680-watt BStar power supply, unfortunately a piddling 2 GB of DDR2 RAM (more is being shipped, though; hopefully that will improve performance), and a somewhat old and obsolete 17-inch HP vs17d LCD monitor, running Windows 7 Ultimate build 7600 with driver versions varying, most recently ForceWare 258.96. The benchmark in particular I'm referring to is FurMark:
[GPU Tool] FurMark 1.8.2 Available - 3D Tech News, Pixel Hacking, Data Visualization and 3D Programming - Geeks3D.com and I'm comparing my stats to the ones in
EVGA GeForce GT 240 SuperClocked Video Card Review - EVGA GT 240 Overclocking - Legit Reviews
September 26th, 2010, 02:00 PM #10
Alright, a couple things I want to point out.
1. That review used year-old drivers. Yes, nvidia has released tons of newer sets, but the newer sets are optimized for the newer cards, not necessarily the 9xxx series. What I mean is, when I installed the newest drivers from nvidia a few weeks ago, I took a performance hit coming from older drivers. So that could be one of the differences. (I have a GTS 250/9800 GTX+.)
2. The test system in that review is an i7 975, 12 GB of DDR3 1866, and a P6T Deluxe motherboard, which frankly is WAY faster than your current system. So that's another difference right there. (You could argue it wouldn't make THAT much of a difference, but I honestly think it does.)
3. Since that review is a year old, the benchmark program itself could have been updated to test the newer GPUs. So it's hard to say whether you're actually running the exact same version/copy of the benchmark.
4. You listed your system specs, which is great. But which exact AMD quad core do you have? An older Phenom, or a Phenom II? That will make a difference right there as well.
So here's what I would do. First, get an older set of drivers; when switching drivers, make sure you use a driver cleaner between uninstalling and reinstalling. Next, check whether there's a BIOS update for your motherboard. You never know if there was an issue with PCIe 2.0 cards on that particular BIOS, so it wouldn't hurt to check.
Also, if you have a multimeter, run some checks on the output of your power supply. Honestly, I've never heard of that brand, and I really don't trust "off brand" PSUs. You never know what voltage, ripple, etc. you're getting out of them. So if you can run some tests to verify it's actually producing enough to power the system, that would be helpful.
Try those and see if it makes any difference.
September 26th, 2010, 03:01 PM #11
I noticed the FurMark linked in this thread is 1.8.2, while the one in the review's screenshots is 1.7.0.
Here's the 1.7.0 Download: FurMark 1.7.0: The Bad Boy of Graphics Cards Utilities is Back! - 3D Tech News, Pixel Hacking, Data Visualization and 3D Programming - Geeks3D.com
But that review even shows at least two versions of GPU-Z, 0.3.6 and 0.3.7,
so I suspect some of the review was cut and pasted from another source as well.