/3/ - 3DCG


Anonymous 07/08/14(Tue)13:44 UTC+1 No.430326 Report

Why did Iray performance go down so much on the 680 and 780 compared to the 580?

I want to give Nvidia more money, I really do. But I can't afford their workstation cards, and so far 580s are second only to Titans. Why is this?

http://www.migenius.com/products/nvidia-iray/iray-benchmarks
>>
Anonymous 07/08/14(Tue)14:08 UTC+1 No.430327 Report

>>430326
Because the architecture change was big enough that performance was degraded until one of the more recent versions of Iray took the Kepler architecture into account. They perform better now.
>>
Anonymous 07/08/14(Tue)14:36 UTC+1 No.430331 Report

>>430327
Will 800 series cards be worth it? Or will we go through this teething process again?
>>
Anonymous 07/08/14(Tue)14:38 UTC+1 No.430334 Report

>>430327
>>430331
I mean, Nvidia owns Mental Images (Iray), so it shouldn't come as any surprise to them what changes were made in the new hardware. And surely they would be developing for the newer hardware far ahead of time.
>>
Anonymous 07/08/14(Tue)15:23 UTC+1 No.430340 Report

Don't use proprietary software. Don't use Nvidia's CUDA shit. Use the open standard that is OpenCL, or use compute shaders in OpenGL.
>>
Anonymous 07/09/14(Wed)01:12 UTC+1 No.430409 Report

>>430340
when OpenCL gets its shit together and accelerates video rendering, then I will say go OpenCL... till then OpenCL is just a pipe dream to me.

>>430326
Because those are gaming cards.
The 580 was the last card in the gaming line that they didn't strip down; going from the 580 to the 680 they stripped out a fuckload of shit that made GPGPU better but had no effect on games.

The reason the Titan was still good is because, if I remember right, it's a Tesla-based GPU with gaming drivers.

You can see the difference in GPGPU benchmarks.
>>
Anonymous 07/09/14(Wed)01:25 UTC+1 No.430411 Report

>>430334
The release cycle of Iray is yearly, to coincide with Autodesk products for the most part. There are small updates that come with the Mentalray minor version changes, but in general it's yearly.

>>430340
There's nothing wrong with proprietary software; people against it merely think they are entitled to other people's work for free.

Proprietary software has the benefit of being programmed closer to the metal and fully optimized for one type of architecture, instead of using a bunch of abstraction layers and generalized functions built around regular unified shaders.

Like it or not, Nvidia is leading the general compute industry in full force; they are the ones innovating in that sector. Buy an Nvidia card next time and you won't be such a crybaby.
>>
Anonymous 07/09/14(Wed)01:28 UTC+1 No.430412 Report

>>430409
>680 they stripped out a fuckload of shit that made gpgpu better
Except not... The only real thing they stripped away was double precision performance, so that they could focus the chips on gaming and consumer GPGPU functionality. Most industry sectors doing GPGPU work generally need double precision though.

Almost all the benchmarks you see for the 6xx series were done before the software was updated to take advantage of the new architecture, and before the drivers matured. Your CUDA launch parameters need to be altered quite a bit when transitioning from Fermi to Kepler, and there are a bunch of new optimizations that require new code to take advantage of.
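
To make that concrete (just a rough sketch of the idea, nothing from Iray's actual code): launch parameters that were hard-coded for Fermi can leave a Kepler SMX half idle, so they have to be re-derived from the device properties, something like this:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void shadeSamples(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 0.0f;  // stand-in for per-sample shading work
}

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // Fermi-era code often hard-coded the block size; querying the device lets
    // the same code saturate a Kepler SMX (2048 resident threads) as well as a
    // Fermi SM (1536). The 256/192 choice here is purely illustrative.
    int block = (prop.major >= 3) ? 256 : 192;
    int blocksPerSM = prop.maxThreadsPerMultiProcessor / block;
    int grid = prop.multiProcessorCount * blocksPerSM;
    int n = grid * block;

    float *d_out;
    cudaMalloc(&d_out, n * sizeof(float));
    shadeSamples<<<grid, block>>>(d_out, n);
    cudaDeviceSynchronize();

    printf("%s: %d blocks x %d threads\n", prop.name, grid, block);
    cudaFree(d_out);
    return 0;
}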
>>
Anonymous 07/09/14(Wed)01:36 UTC+1 No.430414 Report

>>430412
>CUDA this
>CUDA that

again, use Compute shaders in OpenGL. Deploy absolutely fucking everywhere.
>>
Anonymous 07/09/14(Wed)01:43 UTC+1 No.430415 Report

>>430414
Again, OpenGL uses the basic capabilities of the unified shader functions that both AMD and Nvidia share, so it performs poorly. The fact that you're not even saying OpenCL shows that you know shit all about this. CUDA isn't just a programming extension for Nvidia GPUs; CUDA is also Nvidia's unique architecture, which contains many CPU-like functions and components that AMD's GPUs absolutely do not have. Read a white paper on the Kepler architecture (though I doubt you know enough about processors to make any sense of it).

Program for performance first, not maximum hardware coverage. If people want to use shittier hardware (AMD), that's their fault. I'll continue to support a company that's actually making innovative hardware changes and putting pressure on Intel. AMD has no interest in phasing out CPUs, due to it being a core part of their business.
>>
Anonymous 07/09/14(Wed)01:46 UTC+1 No.430418 Report

>>430415
>Program for performance first, not maximum hardware coverage

that's how you go belly up, anon.

>If people want to use shittier hardware (AMD), that's their fault.

>even thinking this way
>>
Anonymous 07/09/14(Wed)02:03 UTC+1 No.430419 Report

>>430418
>that's how you go belly up, anon.
Nvidia's large majority share of the gaming GPU market, and its nearly 100% use over AMD in industry fields, say otherwise.
>>
Anonymous 07/09/14(Wed)02:06 UTC+1 No.430420 Report

>>430419
No one gives a shit about performance. If we did, we'd all have man caves with GTX 780s, anon. Instead, almost all of us game on mobile or shitty consoles. You need to swallow your pride and get over hardware.

>waiting
>>
Anonymous 07/09/14(Wed)02:35 UTC+1 No.430424 Report

>>430420
There is a move back to the PC... in case you didn't know.

>>430419
It's less than you think, and it's shifting in favor of AMD on the gaming side, while not really changing much on the professional side.

On the pro side, however, it's less about the performance of the card and more about stability... and AMD offers more for less, and when a computer is built to spec it will not crash because of the GPU; that goes for either company.
>>
Anonymous 07/09/14(Wed)02:41 UTC+1 No.430425 Report

>>430424
There's always a minority doing something. But PC is dead as shit. The Witcher 99 won't save it.
>>
WATasFUCK 07/09/14(Wed)03:33 UTC+1 No.430435 Report

>>430415
>>430418
Depending on the situation, you may cripple your software by not programming for a wide enough array of hardware. But if everybody were still programming to run on the original Game Boy, we would be nowhere.
>>
Anonymous 07/09/14(Wed)03:52 UTC+1 No.430440 Report

>>430425
The PC market share for games has been going up more and more every year for a while... don't know what else you want me to say... kids use Fisher-Price fake workbenches, and adults use real tools.
Kids play on consoles, and adults play on PC.
The only reason an adult would use a console is because of exclusives anyway... and those are slowly being chipped away too.
>>
Anonymous 07/09/14(Wed)04:18 UTC+1 No.430445 Report

>>430440
If some fatass wants a man cave with SLI 780 Tis, more power to him. But that fatass needs to recognize that he's the minority that "takes gaming seriously".
>>
Anonymous 07/09/14(Wed)10:58 UTC+1 No.430476 Report

>>430445
god damn you are retarded.
>>
Anonymous 07/09/14(Wed)11:06 UTC+1 No.430479 Report

Isn't Iray meant for professional cards, not consumer cards like the 500s, 600s, 700s, etc.?
>>
Anonymous 07/09/14(Wed)11:37 UTC+1 No.430480 Report

>>430479
Yes, but if it's all you can afford then it's what you have to make do with.
>>
Anonymous 07/09/14(Wed)11:44 UTC+1 No.430481 Report

>>430415
>Program for performance first, not maximum hardware coverage. If people want to use shittier hardware (AMD), that's their fault. I'll continue to support a company that's actually making innovative hardware changes and putting pressure on Intel. AMD has no interest in phasing out CPUs, due to it being a core part of their business.
Isn't this how rendering speeds hit a brick wall in the first place though?
>Intel forces AMD out of the high-end market, uses their x86 monopoly to drive Xeon prices higher. Price per CPU core now increases with every die shrink, as datacenters don't care much about anything besides power use.

>Guys, these $4,000-per-socket Xeons are too expensive to render on, why don't we use GPUs? They will sell us 300W of processors for $500!

>Then proceed to make everything dependent on one GPU brand because it was slightly easier at first.

What could possibly go wrong? Why is the 3D software industry so terrible in general? Bitcoin miners already have their own custom ASICs. I imagine some sort of dedicated 3D raytracing ASIC would destroy everything in performance, but of course this will never happen, as the 3D industry would rather squabble over everything like babies, making it impossible for any common architecture to ever be created.
>>
Anonymous 07/09/14(Wed)13:33 UTC+1 No.430488 Report

>>430481
Ray tracing ASICs? Fuck yeah, somebody needs to make this happen.
>>
Anonymous 07/09/14(Wed)14:37 UTC+1 No.430495 Report

>>430481
Modern GPUs are basically ray-trace ASICs. A chip dedicated solely to ray-tracing would only perform slightly better.
>>
Anonymous 07/09/14(Wed)15:32 UTC+1 No.430499 Report

>>430495
Actually, they're meant for rasterization. You will never raytrace anything well on a GPU without massively improved memory capabilities.
>>
Anonymous 07/09/14(Wed)16:31 UTC+1 No.430509 Report

>>430481
The problem is how much and how fast this shit changes; a minor bugfix would fuck you so hard it's not even funny. That's why it's better to go general-purpose over dedicated hardware.

The reason coin miners got their ASICs is that the algorithm is known and there is only one way to do it; they aren't upgrading shit or making revisions.

It's possible that at the end game of renderers, when everything is hammered out and you've got a setting for photoreal, then you may, just MAY, have an ASIC... but if you deviate even a little in the settings, it's no longer usable.
>>
Anonymous 07/09/14(Wed)21:57 UTC+1 No.430557 Report

>>430499
>actually they're meant for rasterization
Aaaand you clearly know little about processor architecture. Just because they started out suited for rasterization doesn't make them bad at ray-tracing; they are similar computation problems: massive amounts of data that need to be processed in parallel and referenced by other functions. And I never said they were "meant for raytracing". GPUs these days (at least Nvidia's) are literally just hundreds of tiny CPU cores; Nvidia's GPUs have an L1 and L2 cache, and since Fermi they have been able to launch many kernels concurrently (rough sketch of that at the end of this post). Modern GPU cores are incredibly well suited to ray-tracing: hundreds of lower-powered CPU cores capable of branching logic, blasting rays into the scene.

It's pretty much how you'd make a ray-tracing ASIC. In fact, there was a company that already tried to make one, called Caustic, but it wasn't all that much better than the GPUs, so it hasn't really taken off.
http://www.extremetech.com/extreme/161074-the-future-of-ray-tracing-reviewed-caustics-r2500-accelerator-finally-moves-us-towards-real-time-ray-tracing
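
And since someone will ask what "launch many kernels concurrently" means in practice, here's a very rough sketch (toy code, not Iray and not anything Caustic shipped): since Fermi, independent kernels issued on separate CUDA streams are allowed to overlap on the device.

#include <cuda_runtime.h>

__global__ void traceBundle(float *results, int offset, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        results[offset + i] = 1.0f;  // stand-in for per-ray work
}

int main()
{
    const int bundles = 4;
    const int raysPerBundle = 1 << 16;

    float *d_results;
    cudaMalloc(&d_results, bundles * raysPerBundle * sizeof(float));

    cudaStream_t streams[bundles];
    for (int b = 0; b < bundles; ++b) {
        cudaStreamCreate(&streams[b]);
        // Each ray bundle gets its own stream; on Fermi and later, independent
        // launches like these may execute concurrently instead of strictly in order.
        traceBundle<<<raysPerBundle / 256, 256, 0, streams[b]>>>(
            d_results, b * raysPerBundle, raysPerBundle);
    }
    for (int b = 0; b < bundles; ++b) {
        cudaStreamSynchronize(streams[b]);
        cudaStreamDestroy(streams[b]);
    }
    cudaFree(d_results);
    return 0;
}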
>>
Anonymous 07/09/14(Wed)22:10 UTC+1 No.430560 Report

>>430557
>Modern GPU cores are incredibly well suited to ray-tracing: hundreds of lower-powered CPU cores capable of branching logic, blasting rays into the scene.

You seem to think this is a big accomplishment. To "trace a ray" all you have to do is P2 - P1. If you want to wow me, show me something original for once, not 80s tech.
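
For anyone who doubts how trivial the per-ray math is, here's that P2 - P1 spelled out as a toy sphere test (not from any real renderer); the hard part was never this arithmetic, it's doing billions of these against real scene data with incoherent memory access.

#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

__host__ __device__ Vec3 sub(Vec3 a, Vec3 b)
{
    Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

__host__ __device__ float dot(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns the distance along the ray to the first sphere hit, or -1 on a miss.
__host__ __device__ float hitSphere(Vec3 p1, Vec3 p2, Vec3 center, float radius)
{
    Vec3 d = sub(p2, p1);                          // the famous P2 - P1
    float len = sqrtf(dot(d, d));
    Vec3 dir = { d.x / len, d.y / len, d.z / len };

    Vec3 oc = sub(p1, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    return disc < 0.0f ? -1.0f : -b - sqrtf(disc);
}

int main()
{
    Vec3 eye = {0, 0, 0}, target = {0, 0, -1}, center = {0, 0, -5};
    printf("hit t = %f\n", hitSphere(eye, target, center, 1.0f));  // prints 4.0
    return 0;
}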

And this Caustic stuff? Pathetic! Nobody gives a shit about that. If you want to learn how to lose money, look at them! Look at how badly they came off on YouTube. FFS.
>>
Anonymous 07/09/14(Wed)22:46 UTC+1 No.430564 Report

>>430560
>Doesn't even read
>Thinks I'm arguing for Caustic.
>Thinks I'm touting ray-tracing calculations as something complicated, when I'm in fact arguing the reverse
>>
Anonymous 07/09/14(Wed)22:56 UTC+1 No.430566 Report

>>430564
>I read it all, and you come off as presenting raytracing on a GPU as an accomplishment. I know this because of the words you used and the cadence of the sentences.
>>
Anonymous 07/09/14(Wed)23:15 UTC+1 No.430569 Report

>>430566
Nope. Your grasp of the English language is merely piss-poor. I was arguing against another idiot who was claiming that an ASIC made specifically for ray-tracing would be much better than doing it on a GPU, plain and simple. Not that it was some massive achievement that GPUs can do it, dumbass.
>>
Anonymous 07/09/14(Wed)23:17 UTC+1 No.430570 Report

>>430568
Everybody knows about the failure of raytracing cards. That you even had to post about that says something about you - mainly why you're even willing to answer such bad questions.

If you want to say something, say it the first time. Don't be 10 posts in, calling other anons idiots and insulting them because you can't write well enough to get your feewings across.

I'm out to get the gf.
>>
Anonymous 07/09/14(Wed)23:45 UTC+1 No.430571 Report

>>430570
It's called a conversation; I guess you're not that used to them, though.

>Im out to get the gf.
She's just an arm's length away; you don't need to go anywhere.
>>
Anonymous 07/11/14(Fri)22:41 UTC+1 No.430883 Report

So did they already add support for 800 series cards with the Maxwell architecture? Or will that be next year?
>>
Anonymous 07/12/14(Sat)03:08 UTC+1 No.430935 Report

>>430883
8xx isn't even out yet, so no. But yes, you'll have to wait until the next Autodesk release cycle, unless they release an updated Mentalray version in a Fall service pack.
>>
WATasFUCK 07/12/14(Sat)13:44 UTC+1 No.430987 Report

>>430935
But if it's based on the architecture the card uses, then it would have already been updated, right? Because the Maxwell microarchitecture has already been used in the 750 and 750 Ti.
>>
Anonymous 07/14/14(Mon)00:36 UTC+1 No.431250 Report

>>430987
Nobody can really know the effect of the more powerful chips, or whether they will be properly utilised. If anybody has more info, feel free to correct me.
>>
Anonymous 07/14/14(Mon)09:25 UTC+1 No.431298 Report

>>430987
Maxwell cards should perform pretty well with the current Iray, as the architecture isn't really changing much, just some minor improvements (though the real powerhouse of Maxwell will come in the 20nm versions later this year).
http://videocardz.com/49557/exclusive-nvidia-maxwell-gm107-architecture-unveiled
>>
Anonymous 07/27/14(Sun)07:55 UTC+1 No.433450 Report

Who actually uses Iray though? Vray RT is a better unbiased renderer anyway.
>>
Anonymous 07/27/14(Sun)10:48 UTC+1 No.433464 Report

>>433450
VRayRT can't achieve the same level of quality and speed. Nor does it have as many capabilities.

Take a browse through the Iray blog where the devs post about various stuff as they work on it.
http://blog.irayrender.com/
>>
Anonymous 07/27/14(Sun)19:47 UTC+1 No.433513 Report

>>430326
They're artificially limited so that you'll buy a Quadro instead.
>>
Anonymous 07/27/14(Sun)20:23 UTC+1 No.433521 Report

>>433513
They're not artificially limited. You're thinking of the driver limitations for OpenGL acceleration of certain software apps, as well as CAD acceleration. The 5xx/6xx series are not software limited for certain CUDA tasks.
>>
Anonymous 07/28/14(Mon)09:20 UTC+1 No.433582 Report

>>433521
Could you elaborate on what those tasks are?
>>
Anonymous 07/28/14(Mon)09:24 UTC+1 No.433583 Report

>>433464
Can you even bake textures with Iray yet?
>>
Anonymous 07/28/14(Mon)09:27 UTC+1 No.433584 Report

>>433582
That last line sounds a bit confusing, but I was saying that it's not limited for any CUDA tasks.
>>
Anonymous 07/28/14(Mon)09:32 UTC+1 No.433585 Report

>>433583
Why would you bake textures with it? Maya already has Transfer Maps for that, as well as Turtle and Mentalray if you fancy it. Iray is a part of Mentalray. Iray can output render passes though, something most other unbiased renderers don't do. And the light path expression system adds further "biased-like" control. It will also do light diffusion/diffraction and reflective caustics.
>>
Anonymous 07/28/14(Mon)11:35 UTC+1 No.433600 Report

>>430340
I had to double-check to see if I was on /g/.
You should go there, you'd fit nicely.
>>
Anonymous 07/31/14(Thu)22:20 UTC+1 No.434111 Report

>>433600
This is basically a more autistic /g/.

Blender instead of Gentoo, though.
>>
Anonymous 08/01/14(Fri)05:31 UTC+1 No.434163 Report

>>433521
>They're not artificially limited.
Isn't there a guide to turn a 680, 690, or Titan into an expensive Quadro card?
>>
Anonymous 08/01/14(Fri)06:01 UTC+1 No.434166 Report

>>434163
I said they aren't artificially limited for CUDA tasks. The Quadro series comes with special drivers that have acceleration for various OpenGL and CAD software. So if you're still using the old viewports in Max or Maya, or doing CAD work, then it's worth doing; otherwise there is no point. It will bring you no speed boost for any scientific calculations you're doing through CUDA, and the gaming performance will actually be worse.

People in professional fields who buy Quadro cards these days typically buy them for the increased memory capacity or for the CAD acceleration.
>>
Anonymous 08/01/14(Fri)07:12 UTC+1 No.434173 Report

>>434111
>implying
We don't have any cult obsession over Blender, much less over any free software.
>>
Anonymous 08/01/14(Fri)12:53 UTC+1 No.434213 Report

>>434163
How does that work?

Running 2 580s, and I'd love to squeeze a bit more out of them now that they're a bit long in the tooth.
>>
Anonymous 08/01/14(Fri)14:50 UTC+1 No.434226 Report

So how does Iray allocate tasks to both the CPU and GPU? From what I read in the manual, it can run in a hybrid setting, but it doesn't say much else. Sorry in advance if I skipped something.
>>
Anonymous 08/01/14(Fri)18:44 UTC+1 No.434252 Report

>>434226
It loads a copy of the scene into the memory of each device and simply has them all work on different parts of the scene I believe. If you have the architectural sampler on, a pre-computation will also be done to determine the bright areas in a scene and general light directions to help speed up the ray convergence and reduce fireflies (it's a form of MLT).
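
Conceptually it's something like this toy multi-GPU sketch (definitely not Iray's actual scheduler, and the CPU side isn't shown): every device gets its own full copy of the scene and is handed a different slice of image tiles, so only the framebuffer has to be merged at the end.

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void renderTiles(const float *scene, float *tiles, int firstTile, int tileCount)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t < tileCount)
        tiles[t] = scene[0] + (float)(firstTile + t);  // stand-in for real shading
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount == 0) { printf("no CUDA devices\n"); return 1; }

    const int sceneFloats = 1 << 20;
    const int totalTiles = 1024;
    std::vector<float> scene(sceneFloats, 1.0f);
    int tilesPerDevice = totalTiles / deviceCount;

    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);

        // Each device gets its own complete copy of the scene...
        float *d_scene, *d_tiles;
        cudaMalloc(&d_scene, sceneFloats * sizeof(float));
        cudaMalloc(&d_tiles, tilesPerDevice * sizeof(float));
        cudaMemcpy(d_scene, scene.data(), sceneFloats * sizeof(float), cudaMemcpyHostToDevice);

        // ...but is assigned a different range of image tiles to work on.
        // (Reading the tiles back and merging them is omitted here.)
        renderTiles<<<(tilesPerDevice + 255) / 256, 256>>>(
            d_scene, d_tiles, d * tilesPerDevice, tilesPerDevice);
    }
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
    }
    return 0;
}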
>>
Anonymous 08/01/14(Fri)21:02 UTC+1 No.434263 Report

>>434213
http://forums.guru3d.com/showthread.php?t=390968

Read about it there; for most cards you can simply install the Quadro drivers with an INF file mod.
>>
Anonymous 08/01/14(Fri)21:05 UTC+1 No.434264 Report

>>434263
On some cards you get a performance increase, on some you don't; the easiest way to find out is to just try.
>>
Anonymous 08/06/14(Wed)10:26 UTC+1 No.434977 Report

>>434264
How risky is it? I don't want to let the smoke out.
>>
Anonymous 08/06/14(Wed)11:08 UTC+1 No.434981 Report

>>434977
The only time you can screw over your graphics card is if you actually flash new firmware that fucks with the voltage regulation, clocks, or cooling. These are just drivers.
>>
Anonymous 08/10/14(Sun)11:06 UTC+1 No.435612 Report

>>434981
Would I be better off with two 690s with that mod to make them look like Quadros, or should I just wait for the 880s to come out? I only ask because I have a friend who is selling two 690s, but it would use my whole GPU budget for my new PC.