• Welcome to the Community Forums at HiveWire 3D! Please note that the user name you choose for our forum will be displayed to the public. Our store closed as of January 4, 2021. You can find HiveWire 3D and Lisa's Botanicals products, as well as many of our Contributing Artists, at Renderosity. This thread lists where many are now selling their products. Renderosity is generously gifting products that were purchased at HiveWire 3D and are now sold at their store into customer accounts. This is not an overnight process, so please be patient if you have already emailed them about this. If you have NOT emailed them, please see the 2nd post in this thread for instructions on what you need to do.

Poser 11.3 now available

Hornet3d

Wise
Indeed this is an old issue, and - surprise! - it has nothing to do with Poser or Superfly. Branched Path Tracing seems to have been sabotaged by nVidia for years, apparently as a means to keep Blender3D Cycles from competing with I-ray. They do this by undermining the GeForce driver. I know it sounds like I am speculating here, but the evidence is rather compromising for nVidia. In a nutshell: using the same GeForce drivers, BPT works in I-ray, but not in competing rendering engines that rely on nVidia drivers. With nVidia's I-ray it works as it should, but with Cycles and Superfly it tends to crash your computer.

As a result, you will see a warning in the Poser render settings whenever you enable BPT in GPU mode. Look for the little blue "i" icon at the upper right corner when you are in GPU mode with BPT enabled. If you hover your mouse over it, it will advise you to disable BPT in GPU mode because it can cause "instabilities". In my experience, it can go as far as crashing Windows 10 into a BSOD; in one case it even corrupted Windows and I couldn't boot up anymore.

That was years ago, and the issue is not as severe now. At worst it will crash the render, but not Poser or Windows, which is good. It's funny that nVidia "could not fix" this driver issue, which undermines ONLY the competition, no matter how many years have passed. nVidia considers this an "unresolved bug" in their own GeForce drivers, one that happens to benefit them and undermine the competition. A little too convenient if you ask me.

As for BPT, it scales up render multi-threading on the GPU, which should speed up your render considerably. As a result, you can use MUCH lower pixel samples when BPT is enabled. Otherwise we have to bump pixel samples up to MUCH higher values, which increases render times exponentially. For example, I can produce a noise-free 900x1200 render with as little as 7 pixel samples with BPT enabled, and it finishes in a couple of minutes. With BPT off, the same render requires as much as 40~50 pixel samples to produce the same result, and render times take as long as 10~15 minutes.

The thing about pixel samples is that it is a render thread multiplier. Number of pixel samples vs number of primary rays:
1 ps = 1 pr
2 ps = 4 pr
3 ps = 9 pr
4 ps = 16 pr
...
7 ps = 49 pr

As you can see, increasing pixel samples by one produces a quadratic increase in the number of rays cast. When BPT is enabled, these rays are computed in parallel; otherwise they are calculated in series, one at a time, which kills render performance when using the GPU.
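The pixel samples vs primary rays table above can be sketched in a few lines of Python (the numbers come straight from the table; only the function name is mine):

```python
# Primary rays grow with the square of the pixel-sample setting.
def primary_rays(pixel_samples):
    """Number of primary rays cast per pixel for a given pixel-sample count."""
    return pixel_samples ** 2

for ps in (1, 2, 3, 4, 7):
    print(f"{ps} ps = {primary_rays(ps)} pr")
```

This is why going from 7 to 50 pixel samples is such a jump: the ray count grows from 49 to 2500 per pixel.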

Pixel samples are tied to the bucket size, so one thing affects the other - they work together. In a nutshell, if the bucket size is smaller than the largest image dimension (width or height), this will increase the number of required threads, which increases render time. However, bucket size is limited by how much VRAM you have available in your video card - you cannot just bump it at will. You have to try to make it as large as possible, before you run out of memory. This will produce the fastest possible render, but that requires using BPT in GPU mode.

That said, we can see how nVidia has something to gain by limiting us from using BPT in GPU mode, because they make the GPU hardware and also the driver that enables each feature. It's freely available in I-ray, but limited and even dangerous in competing rendering engines like Cycles and Superfly. Even if nVidia is not sabotaging the competition on purpose, it's darn too convenient that they "can't fix" this "bug" after all these years. It's too convenient that this driver "bug" only affects the competition. They could fix everything else, just not this.

Thanks for that, it would explain a lot. I did notice the little blue 'i' icon, but by then I had already found out how unstable it was. Out of 10 attempts, five crashed. One was my fault: I raised the bucket size too high, and it rendered two buckets, then just stopped, dropping back to a grey screen. Two more did the same thing even though the bucket size was the default. One froze Poser and the last one crashed Poser, so it is far too unstable for me to use, particularly when I am not really saving render time.

I stopped using BPT early on when the render times were so long, and have not used it for years. From my short play with it, I am not encouraged to use it even with the CPU doing the render.
 

Miss B

Drawing Life 1 Pixel at a Time
CV-BEE
I haven't used BPT in a long time either, and I'm always happy with how my renders turn out.
 

Ken1171

Esteemed
Contributing Artist
Out of 10 attempts, five crashed. One was my fault: I raised the bucket size too high, and it rendered two buckets, then just stopped, dropping back to a grey screen.

Oh yes, bucket size has to be adjusted to make sure you will have enough memory to render in GPU mode when using BPT. That's because if the VRAM doesn't have enough space to hold all the textures from your scene, the whole thing will crash. I have been successfully using Superfly GPU+BPT mode, and it's mostly about keeping an eye on bucket size. The higher the pixel samples, the lower the bucket size. If we try to raise both, it will run out of VRAM and crash. As it is now, that seems to be the only thing that can make GPU+BPT crash, so it's not as unstable as it used to be. At least in my experience, GPU+BPT provides the best Superfly performance I have had so far. Without BPT, we have to use much higher pixel samples, which takes longer to render.

Of course, all this assumes your video card has at least 6GB of VRAM. With less than that, the bucket size will have to be so small that renders will take a long time with or without GPU mode.

I haven't used BPT in a long time either, and I'm always happy with how my renders turn out.

I would just like to mention that BPT only affects how ray casting is calculated. It has no effect on image quality - renders will look the same with or without it. What BPT does is calculate ray casting in parallel, as opposed to each process having to wait for the previous one to complete before it can start. It can improve performance, but only if the parameters are set in its favor (pixel samples vs bucket size), and assuming the video card has lots of CUDA streaming processors and at least 6GB of VRAM. Lastly, the default bucket size was meant for CPU renders, and it's too small to take advantage of GPU rendering. How large it can be set depends on how much memory the current scene takes, and how much you have available on your video card. If set properly, the render speed gains are noticeable.
 

Miss B

Drawing Life 1 Pixel at a Time
CV-BEE
Well, now that PP 11.3 will recognize my GTX 1660 Ti GPU, I'll have to try it, and see what I get. I usually time renders too, so that will also be something for me to check out.
 

Ken1171

Esteemed
Contributing Artist
I have an old GTX 980 Ti, so you should have MUCH better GPU+BPT performance than me - possibly many times over. You just have to strike a balance between pixel samples and bucket size to optimize the VRAM load. You know you are doing well when the GPU is at full processing capacity during renders, instead of showing little sparse peaks of usage. You can see this visually in the Windows Task Manager, on the Performance tab (the 2nd one). If you have a dedicated GPU (I know you do), the last graph at the bottom is the GPU usage.
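Besides Task Manager, the same check can be scripted with NVIDIA's `nvidia-smi` command-line tool. A small Python wrapper, as a sketch assuming the NVIDIA driver is installed and `nvidia-smi` is on the PATH:

```python
import subprocess

def gpu_utilization():
    """Query current GPU utilization (%) via nvidia-smi; None if unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        )
        # First line is the first GPU's utilization as a bare integer.
        return int(out.stdout.strip().splitlines()[0])
    except (OSError, subprocess.CalledProcessError, ValueError, IndexError):
        return None  # no NVIDIA GPU, or driver tools not installed

print(gpu_utilization())
```

Polling this in a loop during a render gives the same "full capacity vs sparse peaks" picture as the Task Manager graph.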
 

Miss B

Drawing Life 1 Pixel at a Time
CV-BEE
Yes, I've used the Task Manager before to check things while rendering. I usually use a small bucket size, but I know there was a thread on the old SM Poser Forum where someone, I don't recall who, stated the best way is to check whether it divides evenly into the size of your render. IOW, if your render size is 1600x1600, you could use 50 or 100 as the bucket size, or the final buckets being rendered won't be full-size areas of the render.

That said, I've seen comments saying we should use bucket sizes such as 16, 32, 64, 128, 256, etc. I'm not really sure which is the better option, as I've tried both, but it's been a while since I played with testing this, and I don't really remember which method worked better for me. Also, I got this puppy around Thanksgiving, and the old laptop was 8 years old, so I'm sure I would get different results if I tested again. Oh, and I didn't have a GPU on the old laptop. It was one of the things on my list of hardware requirements when I got this one, but the old one had everything else on the list, so I went with it.
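The "divides evenly" rule mentioned above is easy to check programmatically. A small helper (function name and range limits are mine, chosen for illustration) that lists the bucket sizes dividing a render dimension with no remainder:

```python
def even_bucket_sizes(dimension, low=16, high=512):
    """Bucket sizes in [low, high] that divide the render dimension evenly."""
    return [b for b in range(low, high + 1) if dimension % b == 0]

# For a 1600px dimension, 50 and 100 are among the candidates.
print(even_bucket_sizes(1600))
```

Powers of two like 16, 32, and 64 show up in the list for 1600 as well, so for some render sizes the two rules of thumb overlap.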
 

Ken1171

Esteemed
Contributing Artist
Ideally, bucket size should match the larger dimension of your render (width or height). That will make the entire image render in VRAM at once, which is the optimal case for GPU rendering. This is only possible if your video card has enough VRAM to fit the entire scene at once; if not, the render will crash. So the compromise is to get the bucket size as close as you can to the larger render dimension. That will render the whole thing in a single bucket with no overhead - in other words, it won't use buckets at all. And if you enable BPT on top of that, it will multi-thread the whole thing using the GPU's multiple CUDA streaming processors. That's the ultimate rendering performance you can get. One added advantage of BPT is that you can use lower pixel samples, which uses less VRAM! ^___^

However, the larger the render size, the less VRAM you have left for BPT, so there must be a balance between the two. It depends mostly on what is present in your scene - mainly large textures and heavy geometry (like long mesh hair).
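The rule of thumb above - start from the larger render dimension, then back off to whatever VRAM allows - can be sketched as a tiny helper. The VRAM cap here is a hypothetical input, not Superfly's actual memory accounting:

```python
def choose_bucket_size(width, height, max_bucket_from_vram):
    """Ideal bucket = the larger render dimension; clamp to what VRAM allows."""
    ideal = max(width, height)
    return min(ideal, max_bucket_from_vram)

# 900x1200 render where (hypothetically) VRAM only allows buckets up to 1024:
print(choose_bucket_size(900, 1200, 1024))  # clamped to 1024
```

In practice the VRAM cap has to be found by trial and error: raise the bucket size until the render crashes, then back off.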
 

Miss B

Drawing Life 1 Pixel at a Time
CV-BEE
Yes, strand hair has always been an issue for me, though I have to say, not with this laptop. I had purchased Neftis' Sharon Hair Style for V4 years ago, but couldn't use it until recently utilizing the V4 Scalp morph provided with La Femme. It rendered so fast I was totally amazed.

Then I also have RedzStudios' The Business Hair for Dawn and Dusk, but it's very short, and it rendered fast too. I haven't tried long strand hair, so I don't know how long that would take.
 

Ken1171

Esteemed
Contributing Artist
It's not so much the hair itself, but the transparency maps on it that drag the render. Longer hair will have more transparency in the image, which is why I mentioned long hair. If most of it is behind the figure, it won't have much of an impact, but if it has long parts in front, that can be a factor. Nonetheless, you have a video card much newer than mine, so maybe it won't even feel it. ::)
 

Hornet3d

Wise
It's not so much the hair itself, but the transparency maps on it that drag the render. Longer hair will have more transparency in the image, which is why I mentioned long hair. If most of it is behind the figure, it won't have much of an impact, but if it has long parts in front, that can be a factor. Nonetheless, you have a video card much newer than mine, so maybe it won't even feel it. ::)


Over at Rendo, Ghostship2 suggested bumping up the transparency bounces to rectify the hair problem I had when using BPT, and it did indeed clear the problem, so I can use it with a CPU render.
 

Hornet3d

Wise
Now here's a surprise: here is the completed render using BPT at default settings, with the exception of the changes to the transparency bounces suggested by Ghostship2. A side by side comparison with the same render in 11.2 shows no real change in detail. The one difference is that the Poser 11.2 render did not use BPT and had samples set to 50; its render time was a few hours, whereas the 11.3 render took slightly over 20 mins, both using the CPU.

Avenza hair BPT Render RE.jpg
 

Ken1171

Esteemed
Contributing Artist
@Hornet3d There shouldn't be any visual differences in your renders from 11.2 or 11.3. If you have a Turing architecture GPU, you should have a faster render in 11.3, because RTX is now supported in Superfly. Over here I have an older GTX, so nothing has changed for me. Bumping up transparency bounces may improve hair rendering, but it will also slow it down.

If you are getting faster renders with the CPU, it probably means one of three things: your GPU is not very powerful, it doesn't support CUDA acceleration, or the parameters are less than optimal. I have discussed some of this last one above.
 

Hornet3d

Wise
The graphics card is an EVGA GTX 1080 Ti SC with 11GB of RAM, which I intended to use for Poser, but it is a non-starter in both Poser 11.2 and 11.3. A shame really, because it was not a cheap card when I purchased it about 18 months ago. I could get BPT working faster with the CPU than my normal setups, but the hair was blocky and unsightly. I know that increasing transparency bounces increases the time, but it is still faster than what I was using before, so it is a fair trade-off for a better looking render.
 

Ken1171

Esteemed
Contributing Artist
Are you saying the GTX 1080 doesn't work with Poser 11? I thought only the RTX hardware was not supported (until now).
 

Miss B

Drawing Life 1 Pixel at a Time
CV-BEE
Hmmm, Hornet said his graphics card is a GTX 1080 Ti SC. I'm not sure what the SC stands for. I recall a post by one of the regulars at Renderosity saying that the GTX 10xx, or older, are fine, and the GTX 16xx are the cards that have the Turing architecture like the RTX cards.

I wasn't aware of that when I ordered the new laptop, so I thought only the RTX 2XXX cards were the ones with Turing. I specifically told the sales rep I was talking to that I didn't want the RTX video card this laptop usually has installed, and why, so he said I could have the GTX 1660 Ti. Obviously he wasn't aware that the newer GTX cards have the same architecture, or he would've mentioned it.
 

Ken1171

Esteemed
Contributing Artist
I think "SC" means "super-clocked", which only changes the default clock speed. It's still a GTX 1080.
 

Miss B

Drawing Life 1 Pixel at a Time
CV-BEE
Oh, OK that makes sense. They always have letter code(s) after different hardware model numbers, and I never know what they mean.
 

Hornet3d

Wise
Here is the blurb presented at the time -

Card is EVGA GeForce GTX 1080 Ti SC Black Edition GAMING, 11G-P4-6393-KR, 11GB GDDR5X, iCX Cooler & LED.

The EVGA GeForce GTX 1080 Ti uses NVIDIA's new flagship gaming GPU, based on the NVIDIA Pascal architecture. The latest addition to the ultimate gaming platform, this card is packed with extreme gaming horsepower, next-gen 11 Gbps GDDR5X memory, and a massive 11 GB frame buffer.

SPECIFICATIONS
Base Clock: 1556 MHZ
Boost Clock: 1670 MHz
Memory Clock: 11016 MHz Effective
CUDA Cores: 3584
Bus Type: PCIe 3.0
Memory Detail: 11264MB GDDR5X
Memory Bit Width: 352 Bit
Memory Speed: 0.18ns
Memory Bandwidth: 484.4 GB/s

It really depends on what you call 'works'. Does it render? Yes, it does, but I expected it to be faster than rendering with the CPU, and it is not. Admittedly the CPU in question runs 32 threads, but the graphics card is also less stable and more likely to abort the render before completion, and it has the added difficulty that it locks my computer up. Move the mouse and the cursor will respond a minute or so later, if you are lucky. For me that would be acceptable for renders of 30 minutes or so, but the card takes hours, so that limits me to overnight renders with no guarantee that there will be a completed render at the end of it.

I can't remember exactly what I paid for it, but I seem to remember it was in the £600 to £700 price bracket, so not what I regarded as cheap at the time.
 

Alisa

RETIRED HW3D QAV Director (QAV Queen Bee)
Staff member
QAV-BEE
Hey, guys - putting this here in hopes that someone here has updated (I haven't) and has Netherworks' Thumbnail Designer and can verify this and maybe have some thoughts as to how to fix it, before I try to contact Joe.

Got a report from someone who is using Thumbnail Designer on a Mac. It worked fine in PP 11.2 but is NOT working in PP 11.3. They also note that they don't have a "Drive D", which is being called out.

Here is their error message:

Error msg:
Traceback (most recent call last):
File "/Applications/Poser 11/Runtime/Python/poserScripts/ScriptsMenu/Netherworks/Thumbnail Designer/+Thumbnail Designer.py", line 32, in <module>
TD_Main()
File "D:\Poser Pro 2012\Runtime\Python\poserScripts\Netherworks\Compiler\Uncompiled Files\_thumbdesigner.py", line 2095, in TD_Main
File "D:\Poser Pro 2012\Runtime\Python\poserScripts\Netherworks\Compiler\Uncompiled Files\_thumbdesigner.py", line 153, in __init__
File "D:\Poser Pro 2012\Runtime\Python\poserScripts\Netherworks\Compiler\Uncompiled Files\_thumbdesigner.py", line 731, in TD_GenPreview
File "/Applications/Poser 11/Poser.app/Contents/Resources/site-packages/PIL/Image.py", line 679, in convert
self.load()
File "/Applications/Poser 11/Poser.app/Contents/Resources/site-packages/PIL/ImageFile.py", line 189, in load
d = Image._getdecoder(self.mode, d, a, self.decoderconfig)
File "/Applications/Poser 11/Poser.app/Contents/Resources/site-packages/PIL/Image.py", line 385, in _getdecoder
raise IOError("decoder %s not available" % decoder_name)
IOError: decoder zip not available
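For what it's worth, `IOError: decoder zip not available` is PIL's way of saying its zlib (PNG) codec wasn't compiled into that build. On a reasonably recent Pillow install you can check this with the `features` module; Poser's bundled PIL may be too old to have it, so treat this as a sketch for diagnosing outside Poser:

```python
from PIL import features

# True when the "zip" (zlib/PNG) codec was compiled into this PIL/Pillow build.
print("zlib codec available:", features.check("zlib"))
```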

Any thoughts?

Thanks!!
 

Miss B

Drawing Life 1 Pixel at a Time
CV-BEE
That's not the only one of Joe's older scripts that isn't working. I have 4 I use every day, and my favorite is not working; when I try to close and reopen Poser, it doesn't get past the opening Splash Screen. I uninstalled 11.3 so many times before I finally figured out what the problem was. The other 3 load with Poser (which is how I have them set up), but his Slim Parameters Panels isn't, and I'm getting very aggravated.

I even have a script SnarlyGribbly set up for older scripts which don't seem to be working well in 11.3, but it's not working on this one, so I'm not sure it will work on all the scripts folks are having issues with. Unfortunately, I don't recall where I downloaded it from. Someone had posted a link on Renderosity's forum, but I don't recall who, so it'll be a while before I can find that link again. If I find it, I'll let you know.

Oh, and mine also has that D:\Poser Pro 2012, so that's probably the Poser version Joe was using when he wrote them.
 