Dave tests a high-end NVIDIA GTX 970 video card for accelerated CUDA GPU video rendering with Sony Movie Studio (Vegas) and also Adobe Premiere.
Will it work for the new high frame rate video?
Will it work at all?
How about a Radeon HD7850 and OpenCL?
Does the codec matter?
Does hard drive or SSD performance matter?
This also provides some insight into how Dave edits and renders his videos.
NOTE: This is mostly real-time trial-and-error commentary while testing video rendering speed. If you find this stuff boring then DON'T WATCH IT.
There is no electronics content here, but plenty of main channel viewers might find it interesting.
Forum: http://www.eevblog.com/forum/blog/eevblog-698-gpu-video-rendering/
EEVblog Main Web Site: http://www.eevblog.com
The 2nd EEVblog Channel: http://www.youtube.com/EEVblog2
Support the EEVblog through Patreon!
http://www.patreon.com/eevblog
EEVblog Amazon Store (Dave gets a cut):
http://astore.amazon.com/eevblogstore-20
Donations:
http://www.eevblog.com/donations/
Projects:
http://www.eevblog.com/projects/
Electronics Info Wiki:
http://www.eevblog.com/wiki/
Hi. This is hopefully going to be a relatively short video. I just wanted to test the video rendering performance of a new graphics card, a GTX 970, that I'm going to install in my video rendering machine here. Now, I've recently changed over to 50 and 60 frames per second for the video I'm creating to upload to YouTube, but I haven't always done this.
This is only fairly recent. I had actually always shot at 25 frames per second, because the PAL video cameras you buy here in Australia are 25 frames per second, not 30 frames per second, and rendering was very, very quick even without GPU acceleration. Let me show you. Here's an old clip from my Canon HF G30 camera, shot at 25 frames per second, and you can see my project here is 25 frames per second.
So I just want to show you what my rendering times used to be like for my videos beforehand; it was very quick. So if I go in here (sorry, wrong window), for the sake of rendering speed I was actually using the XDCAM format. I've got that down here, already set up for 25 frames per second and 35 megabits per second variable bitrate. That's because I use a two-step video rendering process: I use HandBrake as a second step.
But anyway, what I'm concerned with is getting decent video rendering speed out of Sony Movie Studio, which is what I'm using; this is Sony Movie Studio Platinum 13. Really, I believe there is essentially no difference between Sony Vegas and Movie Studio apart from some more professional features, and in terms of video rendering speed it's exactly the same as Sony Vegas.
So here we go: if I just do a test here and render this (it's a one minute clip, precisely), you'll notice that it is very quick. Look, it's churning this out; it's going to do it at about twice real time.
That's a metric used in video rendering: is it real time? If I've got a one minute clip and it takes one minute to render, that's generally regarded as pretty good. Now, what I'm using here is an Intel Core i7 processor; I'll show you that in a second. But look, there you go.
It is twice real time. It took basically just over 30 seconds to render that video, 33 seconds, but let's just say twice real time there.
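To put a number on that "times real time" figure, here's the arithmetic spelled out as a minimal sketch (the 60 second clip length and 33 second render time are the figures quoted above; nothing here is measured independently):

```python
def realtime_factor(clip_seconds: float, render_seconds: float) -> float:
    """How many times faster than real time a render ran.
    1.0 means the render took exactly as long as the clip itself."""
    return clip_seconds / render_seconds

# Figures quoted in the video: a 60 s clip rendered in roughly 33 s.
print(realtime_factor(60, 33))  # ~1.8, i.e. roughly "twice real time"
```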
So that was pretty quick; that's what I was used to. Then I do a second step in HandBrake to get the file size down before I upload to YouTube, without dropping any video quality, because the file I just rendered would be a very large video. I can't archive that or upload it to YouTube as-is; it just takes too long and uses too much bandwidth. So it was very quick, and I was used to twice real time.
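For anyone curious what that second HandBrake pass could look like, here's an illustrative sketch driving HandBrakeCLI from a script. The file names, the x264 encoder choice and the quality value are my own assumptions for illustration, not Dave's actual settings:

```python
import subprocess

# Hypothetical second pass: shrink the large file rendered by Movie Studio
# into a smaller x264 file at constant quality before uploading to YouTube.
# Paths and the -q value are placeholders, not Dave's real workflow settings.
subprocess.run([
    "HandBrakeCLI",
    "-i", "render_master.mp4",     # large intermediate file from Movie Studio
    "-o", "upload_to_youtube.mp4",
    "-e", "x264",                  # H.264 via the x264 software encoder
    "-q", "22",                    # constant quality (lower = better quality)
], check=True)
```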
And my projects (I can probably show you a detailed project later) are usually very simple. They've got not much in the way of editing and stuff; all I do is basically trim the start and the end of the video, and maybe there might be a text overlay or something like that, but there's no real graphics, effects or anything like that in my videos. So they're very undemanding for video editing software like Sony Movie Studio here. And for those wondering why I don't use Adobe or something like that: Sony Movie Studio works for me, it works for my workflow, I just like the way it works.
I've benchmarked it, and it was actually faster than Adobe. So I've been using it for a long time and it works well for me. Yes, I have tried Adobe and almost every video editing software out there; I don't like them. Yes, Sony has its drawbacks too, but it seems to be the best of the bunch for me. I might do a separate video on why that's the case and how I edit videos and things like that. Anyway, very, very quick, there you go. And for those wondering, yes, that was done without GPU acceleration, without using my video card, so just the CPU itself.
Now if I go over here and do it again, I can't use that particular codec with GPU acceleration, so I'll choose one that does. Everyone raves about GPU acceleration; that's what we're going to test today.
Let me just do a very quick test here. Here we go: render using GPU. Okay, so I'm going to render using my GPU over here. Yes, CUDA is available, so my NVIDIA card is all supported there, everything's hunky-dory. So now I'm using this Sony AVC format, and once again I've got the High Profile codec and all the requisite stuff at 25 frames per second, and I'm doing that at, say, 16 megabits, which was typical back in the day because I would shoot at 18 megabits. Now I shoot at a much higher data rate because I'm using 50 or 60 frames per second.
Here we go, so we can just render that and run a test on it. Overwrite existing file? Yep. And you'll notice that it is much, much slower. This is why I use the Sony XDCAM intermediate format.
This one's going to take probably around about three times as long. I won't actually wait for it to finish; this estimate is going to be relatively accurate. Okay, so three times as long for exactly the same clip.
And if I go in here and do it again, here we go, I do that without the GPU: render using CPU only. Okay, let's try that with just the CPU. Here we go, it's actually going to be slightly quicker. There you go, my CPU is actually faster at video rendering.
In my case, with my particular files and my setup, using this codec (the Sony AVC), it is actually faster to use my CPU than my video card. So why is the CPU faster than the GPU? Well, it's pretty easy: my CPU is a pretty decent one. I built this machine maybe a year ago, and it's got an Intel Core i7-3770K running at 3.5 GHz, it's got 16 gigs of memory,
and all the rest of it, right? So it's a pretty damn decent CPU for video rendering. Now if we go over to the graphics card, this is why I'm not getting huge performance using the GPU: I've only got an NVIDIA GeForce GTX 650 graphics card, and this is what I wanted to upgrade. You can see that a basic industry benchmark figure for the 650 is only 1835, which is really not terrific by modern standards. That's why the CPU is quicker. But the new card, the one I'm going to install in a minute, is a GTX 970. That one scores 8635 on the same benchmark, and it is one of the fastest graphics cards on the market at the moment. Not quite the best, but pretty darn good, so really we should have no complaints over that one at all.
And if we compare them: the GTX 650 has 384 CUDA cores and the 970 has 1664. It's got a boost clock mode, a higher memory clock (7 Gbps compared to 5 Gbps), it's got 4 GB of RAM, and it's got 224 GB/s of memory bandwidth compared to 80 GB/s. So a significantly faster graphics card; let's see if it makes a huge difference.
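Just to put those published numbers side by side, here's a quick illustrative comparison using the figures quoted in the video (these are spec-sheet and benchmark numbers, not anything measured here):

```python
# Published figures quoted in the video for the old and new cards.
gtx650 = {"cuda_cores": 384,  "mem_bandwidth_GBps": 80,  "passmark_score": 1835}
gtx970 = {"cuda_cores": 1664, "mem_bandwidth_GBps": 224, "passmark_score": 8635}

# On paper the 970 should be several times the card the 650 was.
for key in gtx650:
    print(f"{key}: {gtx970[key] / gtx650[key]:.1f}x")
# cuda_cores: 4.3x, mem_bandwidth_GBps: 2.8x, passmark_score: 4.7x
```

Which lines up with the "four or five times quicker" expectation mentioned a bit further on.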
But what I want to do now is show you how slow my machine has become now that I'm rendering 50 and 60 frames per second video. So here is my latest mailbag video, and here's a clip from it, once again precisely one minute long so we've got a decent benchmark. It was shot at 59.94 frames per second at, I think, 35 megabits or something like that, so it's a much higher bitrate as well; my previous videos were all shot at 17 megabits per second.
So if I go here, I actually can't use the AVCHD or the XDCAM format anymore because it doesn't support 60 frames per second. Hang on, no, yes it does. Sorry, I got confused. It does, but only at 1280 by 720. There you go. So I've had to ditch that, and I've now had to go to this Sony AVC codec. And by the way, no, the MainConcept codec is even slower than the Sony codec, but hey, it might not be once I install the new card. The Sony one supports the NVIDIA CUDA cores, though.
So let's go in here. I've now got a 60 frames per second project, there we go, and I'll put it at a higher bitrate.
It's at 26 megabits, but it's all essentially the same. So here we go, let's render the same video out to the same length, one minute, but at 60 frames per second and 26 megabits, CPU only.
So I'm not actually using my graphics card, because we've already established that the CPU is faster than the graphics card. Okay, so let's render. My one-minute video took 30 seconds to render before, and now it's going to take, hmm, around about three minutes. Wow. So that is six times slower, just because I've switched to 60 frames per second and I've been forced to change my codec. Killer. And if you don't believe me that the GPU doesn't accelerate that, then let's render with the GPU. It should take a little bit longer than three minutes to actually render that one-minute clip. Yep, see, it's gone up.
It's only slightly slower, but basically pretty much on par with the CPU, very similar to what we saw before. And here we go, just as an absolute benchmark (we'll reuse this project when I install my new video card): this was using GPU acceleration, and it took three minutes and 27 seconds to render that one-minute video, with no fancy editing stuff, no fancy transitions or anything else, just raw video from the camera.
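Spelling out that before-and-after comparison with the numbers quoted above (again, just the arithmetic, assuming the 33 second and 3:27 figures):

```python
clip_s = 60                   # one-minute test clip in both cases
old_render_s = 33             # 25 fps XDCAM render, CPU only
new_render_s = 3 * 60 + 27    # 60 fps Sony AVC render, "GPU accelerated"

print(new_render_s / old_render_s)   # ~6.3x slower overall
print(clip_s / old_render_s)         # ~1.8x real time before
print(clip_s / new_render_s)         # ~0.29x real time now
```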
Let's see how the new video card does. So here we go: I've got the new graphics card installed, and that wasn't too painful at all. I just downloaded the latest driver, installed that, and everything's working just fine. Here it is, the NVIDIA GeForce GTX 970 with 4 GB of RAM.
So let's give this a burl and see if it makes a difference. Remember, it was like three minutes thirty or something last time. So let's try it: we've got exactly the same project, exactly the same configuration,
we're using the GPU, it's 60 frames per second, yep, let's go render. And here we go. It's not looking great, is it? Nope. Look at that.
Three minutes? Three minutes? No, that is a complete fail. That is a $500 card, one of the highest-end video cards you can get; not quite the top, but jeez, it's pretty damn near the top. It is, I think, about four or five times quicker than my previous GTX card, and it does not work at all. Is there something wrong in my settings? I don't like that at all; that's taking forever. And yeah, there you go.
That makes sense: check GPU, "No GPU available", even though if I go up into the preferences, look, it's there, GTX 970, but it doesn't let me use it. Maybe I have to repair it. Let me try that.
Well, there you go. As it turns out, this top-of-the-line NVIDIA graphics card has been a complete waste of time and money. I just cannot get this thing to work. Well, kind of, okay:
I originally thought it was the graphics card, so I searched all the drivers, I searched the net, and everyone seemed to be having problems with this new Maxwell chipset, which is in the 970 and the 980, and also in the GTX 750. These specific 970 and 980 chips are so new that they need new special drivers, all that sort of stuff. I tried various different drivers, the one that came in the box and the latest one I downloaded, and all sorts of things; I tried installing all sorts of CUDA stuff, and it made absolutely no difference at all.
I was eventually able to go in here under the MainConcept codec and actually get "CUDA is available" there, but I could not get CUDA availability on the Sony AVC codec that I want to use. It's just not possible. Either Sony doesn't support it, or some combination of the new CUDA drivers in the NVIDIA driver set isn't backward compatible with the existing support that's in Sony. And yes, if you look at Sony's official list of what cards it supports, it basically just says anything CUDA-capable newer than some old GTX 400 series card should work just fine. Well, no, it ain't that easy, I'm here to tell you; it's basically pot luck, and I've had no end of problems with stuff like this. Anyway, what I've done is I've ditched that card.
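One useful thing to separate when you hit a "No GPU available" message like that is whether the NVIDIA driver itself can see the card (it clearly could here, since the card showed up in the preferences) versus whether the application's CUDA renderer accepts it. A minimal sketch of that first check, assuming the NVIDIA driver and its nvidia-smi tool are installed:

```python
import subprocess

# Ask the NVIDIA driver which GPUs it can see. If the card is listed here
# but the editor still reports "no GPU available", the problem is the
# application's CUDA support (codec / driver-version combination),
# not the card or the driver installation itself.
result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
print(result.stdout)  # e.g. "GPU 0: GeForce GTX 970 (UUID: GPU-...)"
```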
Okay, I couldn't get the CUDA support, so really it was a complete waste of time. So what I've done is I've gone back and actually installed an old, um, ATI card; well, it's AMD now and all that,
but it's a Radeon HD 7850, and it's not a bad graphics card at all. Look, it's got a benchmark score of about 3,700, so it's more than twice as quick as the previous GTX 650 I had in there. I originally had this graphics card installed in my machine, but I was getting all sorts of video tearing up the top and all sorts of issues, and it was just a pain in the arse, so I ditched it. Well, now I'm back on it with the latest drivers available, and if we check it out, if we go in here, we'll see that we've actually got it available, but not as CUDA.
We have it available as, here we go, "render using GPU". It actually uses the OpenCL GPU interface instead of CUDA, because CUDA is NVIDIA-specific; but as you saw before, it does offer CUDA when you choose the MainConcept codecs, so it's rather unusual. Anyway, we now have OpenCL available and I can actually run this thing, so let's go,
let's run it. And bingo, that's looking reasonably quick there, that update rate. But ultimately it's not much quicker than the GTX 650 that we had before; really, that's going to take like two and a half minutes.
Not that great. If we cancel that, well, we've seen it before: it's going to be pretty much identical to the CPU-only render, so you know.
Look, it's not helping, even with that sort of mid-range graphics card with, as I said, a PassMark score of about 3,700, which is sort of mid-range. And here's the one we tried, this GTX 970, right up here with 8,600-odd. It's a shame we couldn't get the damn thing working, but it just goes to show you that this video rendering stuff is tricky business. It depends on the card you've got, the driver, the OS, what version of driver you've got, whether you're using CUDA or OpenCL, what video editing software you're using, what codec you're going to be choosing, and all sorts of things. People make it out that this sort of thing is trivial: just whack in a high-end video card and Bob's your uncle, she'll be right, you'll scream along with your video editing. No. It's also your source material,
what frame rate you're outputting to, whether or not you're doing any resampling of your video, all sorts of stuff, let alone the effects and everything else. And yes, by the way, I did actually try Adobe Premiere. I do have it installed on this machine, and I tried it with that card, and it's still slower than my original setup, so it's not any quicker at all. So please, Adobe fanboys, don't come in and say "just switch to Adobe Premiere, it screams on my machine."
Yeah, well, your machine is not my machine and your requirements aren't my requirements, so it's entirely setup-specific, as we saw. Anyway, I was hoping that this would work, but it turned into a complete failure. So that's it, a 10 or 15 minute video of me waffling on looking at video rendering. I was hoping I'd fixed my issue by putting in a top-of-the-line video card, and no, fail. Anyway, there you go.
That's the intricacies of all this sort of stuff. One thing I haven't actually mentioned yet is that the figures we've seen here aren't quite as fast as they could be, because I'm screen capturing this in the background. So the machine has to do two things: all that video rendering, which is massively processing-intensive, and the video capture as well. I'm using the Debut video capture software from NCH, if you're wondering; that's an Australian company.
It works really well and it's pretty cheap. This is what I use to screen capture stuff, and it's doing it at 30 frames per second. I've actually done some tests on this with the AMD HD 7850, and it turns out that even with the capture turned off, it's still a little bit slower with the OpenCL GPU enabled.
Once again, CPU rendering is actually slightly quicker for this particular test video, using this particular source, etc., etc. So there you go. Yep, I just can't win with these graphics cards, and I don't want to fork out for another high-end Radeon now; geez, it's getting ridiculous.
So it goes to show the sort of complex requirements of a professional video blogger like myself who does this daily. I'm editing and producing several videos a week, so render time, productivity, workflow and everything else really matters. And I do have to do some separate videos on this too; I'll show you exactly how I edit things, then render, then transcode and do that sort of stuff,
and how my workflow actually works. But just changing from 25 to 50 frames per second made a hell of a difference. And it's not just 50 frames per second: I'm now using a combination of two cameras, one does 50 and one does 60, just because that's what those particular cameras do. One saves in MPEG-4 format, one saves AVCHD, slightly different flavours there, they render differently and at different bit rates, and all sorts of stuff; it can get really, really complex. And by the way, for those wondering, no, the hard drives don't make any difference whatsoever. That's another myth that goes around. For general editing like this, the bit rates are not high enough for the hard drives to have an impact,
so just switching to solid-state drives is not going to increase your throughput at all. It's all about the bit rate, and what we've got here is rendering at a high bitrate like 25 megabits per second,
for example. That's megabits, not megabytes, and on modern hard drives with SATA interfaces, the read and write speeds are more than fast enough to keep up with rendering in real time on a machine like this, no problems whatsoever.
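To put that claim in rough numbers (the ~100 MB/s hard drive figure below is a ballpark for a modern 7200 RPM drive's sequential throughput, not a measurement from this machine):

```python
video_bitrate_Mbps = 25                  # megabits per second, as quoted
video_MBps = video_bitrate_Mbps / 8      # ~3.1 megabytes per second written

hdd_sequential_MBps = 100                # rough ballpark for a 7200 RPM drive

print(video_MBps)                        # ~3.1 MB/s
print(hdd_sequential_MBps / video_MBps)  # ~32x headroom, so the drive is not the bottleneck
```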
And trust me, I've actually tried it: I've tried writing to solid-state drives while reading from regular hard drives, and reading and writing from the same hard drive, and it makes no difference to the render time whatsoever. So unless you're doing really extreme stuff, solid-state drives aren't going to increase your performance. But yes, I do have a solid-state drive in this machine as my boot drive,
and I do all of my video rendering on a regular 7200 RPM secondary drive. Even a 5400 RPM, really bottom-end consumer-grade hard drive these days is more than good enough to keep up with these sorts of bit rates at, you know, 20 or 30 megabits per second, no problems whatsoever. And there are no doubt going to be some people out there who say I'm completely wasting my time trying to do GPU rendering anyway, because it's a complete pile of garbage and it actually produces inferior quality video,
and well, I can neither confirm nor deny that, really, because I haven't bothered to critically review the video quality of a GPU-rendered output compared to a CPU-rendered output. But some people claim there are significant differences, and I'm pretty sure there are with Intel Quick Sync, for example, which can be really, really fast at rendering, but apparently the video quality is not that great. So anyway, these are the trials and tribulations of trying to do GPU rendering. It may work for some people, and well, good on you, but it's never worked for me, and even when I bought a high-end card it still doesn't work.
Murphy's going to get me every time, so it looks like I'm just going to have to go back and rely on CPU rendering. Oh well, can't win them all. Catch you next time.
With no color correction, no effects, no transforms, nothing whatsoever except trimming, your main load is basically decoding the input and encoding the output. GPUs are floating-point masters but can't decode or encode; encoding is a bit-fiddling job, which is what CPUs are for. GPUs simply can't do that.
I GOT MYSELF A GTX 1050TI 4GB AND IT'S NOT THAT MUCH DIFFERENT FROM MY CPU.
RUNNING AN I5 4TH GEN WITH 8GB RAM AND AN SSD
hey, i have a GTX 970 but Vegas keeps saying no GPU available, can u help me plz?
I rate this video 3.5/4
Sony Movie Studio Platinum is great and works really well! But they have not been keeping up with new graphics cards. By far, OpenCL will work better than CUDA in this. And the fastest you can get is the high-end AMD Radeon HD 6900 series; anything newer than that will be slower and a waste of money 🙁 The fastest NVIDIA card for Sony Movie Studio Platinum is the high-end GTX 500 series, and they are fast because they did good OpenCL back then.
Would be interesting to see how the GTX 970 would work for you today using DaVinci 12.x instead of Sony…
lool Sony sucks. If you want to use your GPU for rendering to help out, use Adobe Premiere Pro CC 2015; I use that all the time and my times are cut down a lot. Using Sony with GPU, a 6 minute 30 second video took over 10 minutes to render at 30fps; using Adobe with GPU it only took 3 minutes to render.
How about the timeline editing? Did that become smoother with a high end card?
I found out that Sony's AVC renderer does not actually use GTX 9xx cards, only 7xx and lower, sadly. That's why you didn't see a legitimate improvement by selecting GPU over CPU.
Everyone needs to stop using "Gaming Cards" for Video Editing / Rendering…..
Use Nvidia Quadro cards (workstation graphics) which are actually designed for Video and/or 3D Editing and Rendering.
This is what all professional companies are using (and yes they do have cheaper low end cards for the average person)
I have Sony Vegas 13, an Intel i7 3770K, 16 gigs of RAM, and 2 GTX 970 4 gig cards in SLI, and it takes me 20 mins to render a 7 min video at 1080p. Am I doing something wrong?
I also have a GTX970 and I want to use it for rendering videos, but for me, it says that there is no GPU available. I have all the latest drivers.
The problem is Movie Studio does not support newer graphics cards when it comes to CUDA rendering; I think it's supported up to the GTX 580. To get your GPU to render fast you have to go with software that takes really good advantage of it: Premiere for one GPU, and if you have more you can play around with DaVinci Resolve. Saying that CUDA is not good for rendering is not correct; I have been using CUDA for rendering in Premiere for 4 years now and it actually performs better than my two Xeon CPUs at work.
Did this guy just buy a (former) flagship GPU to render video?? You see, there is a difference between a gaming GPU and a rendering GPU; a gaming GPU is made for 3D sampling and coding!!!!
It bothers me that people use Windows Basic and not Aero.
u can use MainConcept AVC with CUDA
CUDA isn't meant for encoding, it's meant for physics. The CPU is for rendering video; get a better CPU, or use QuickSync, that's on the CPU and it has HW encoding built in.
Thanks for the video Dave, I just encountered this issue. I recently bought a 980 Ti card for gaming and picked up Sony Studio 13, and spent the whole evening trying to get it to work with no luck. Then I found your video and you're mirroring my experience exactly.
I'm personally a big fan of the Intel QuickSync (noticed you didn't try it in this video)
It's pretty amazing in a pinch. For instance encoding videos on an ultrabook or cheap Core i3 laptop. Even on my main system QuickSync encodes faster than OpenCL with my dedicated AMD card, and I know there is supposed to be a loss in quality but I can't see it…
btw it's that Sony Vegas doesn't support any cards that are 600 series or higher due to a change in architecture; my 570 is way faster than my new 970 🙁
Did he seriously say that he paid $500 for a GTX 970??? Holy bananas! I paid $370 for mine and it is one of the more expensive 970 models available…
I think this is already solved, but in any case:
I had the same problem. I contacted Sony support. They said any driver newer than 340 is not supported for GPU encoding, and no Maxwell GPUs are supported (nor working).
edit: Since I updated from a 580 even my preview bugs out …
The bottleneck could be the RAM, or the motherboard.
The problem seems to be twofold…
1: The obvious one. Someone assumed that Kepler code would work on Maxwell… someone was wrong… Well, this will probably be fixed in a month or two.
2: There seems to be some transcoding issue (for the actual code, not the encoding). CUDA (and to some extent OpenCL) code doesn't run as efficiently as x86 code does. The actual processing power is 10-20 times faster, but you might only get 2-3 times.
This seems to be because the code is first written for an x86 compiler, then ported to the OpenCL or CUDA compilers very badly. The programmers are simply not used to it, and in most cases they don't know how to write optimized code. It's a lot more complicated than switching from ARM to x86… Will the programmers ever get used to it? I would say probably not.
Also, the compilers and transcoders are not as good as they are on x86… Few people remember it now, but back in the 90s most x86 compilers were quite bad. Because of that, a lot of programmers used assembler for the innermost core of the game and built the game around it with normal C code. Porting games was also very hard.
Ironically, when GPUs became more common this changed quite rapidly. Partly the compilers got a lot better, partly a lot of stuff got moved into the GPU drivers. Also, the libs got a lot better…
Nowadays the GPU compilers are about as good as the CPU compilers were back in the 80s. Sure, if you know exactly what you are doing, you can probably trick them into an efficient result. But most codec coders are not GPU experts.
Where I have seen hard drives affecting video processing is with raw MiniDV footage a few years ago… The hard drive could keep up, but only if the data was not fragmented… Drives have become significantly faster since then, and modern codecs are a lot better than the one used for MiniDV.
On GPU rendering: even if it takes just as long, it does have the advantage that the PC is probably still usable for something else, while CPU rendering has a larger impact on general PC performance…
The 970 might start working later once all the driver issues have been sorted out?
Looking at gaming benchmarks numbers generally does not translate well to GPGPU performance. AMD cards tend to outperform Nvidia cards in GPGPU, but not necessarily in gaming performance. Even if you could get CUDA working on the new top of the line Nvidia card, the OpenCL performance on an AMD card is likely to be better despite not being at the top of the gaming benchmark charts.
For me Vegas GPU rendering didn't make any difference in rendering time compared to cpu only. Interesting…
Bandwidth? lol, in Romania that doesn't exist.