
New uses and methods with Self-Trained AI

KageRyu

Lost Mad Soul
Contributing Artist
I had thought there was already a thread discussing AI theory and use, but I cannot seem to find it now. I found the following video very interesting in its theories and applications, and in its discussion of the experimentation and work needed not only to custom-train the AI to get the results they wanted, but also the post work to create a unified look. Of course, since it involves using AI to make animation, there are lots of "reaction" videos calling this offensive, lazy, worthless, etc., but to me it is clear that the makers of many of those videos either did not watch this video or did not understand it well. I have yet to watch the animated clip they made and discuss it here, but this makes me wish I had access to the diffusion tools (none of my machines are able to run them). To convert my videos to toon style I still need to do it with batch scripts in a graphics processor (and I lost the scripts I had been making in that HD corruption last year).
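Not the lost scripts themselves, obviously, but a minimal sketch of that kind of batch toon pass in Python, assuming the frames have already been extracted to a folder and using OpenCV's stylization filter purely as a stand-in for whatever filter chain the original scripts used:

import cv2
from pathlib import Path

in_dir = Path("frames_in")      # hypothetical folder of frames already extracted from the video
out_dir = Path("frames_out")
out_dir.mkdir(exist_ok=True)

for frame in sorted(in_dir.glob("*.png")):
    img = cv2.imread(str(frame))
    # edge-preserving non-photorealistic pass; sigma_s/sigma_r trade smoothing against retained detail
    toon = cv2.stylization(img, sigma_s=60, sigma_r=0.45)
    cv2.imwrite(str(out_dir / frame.name), toon)

The processed frames would then be reassembled into a video with something like ffmpeg.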

 
Thanks KageRyu, that's certainly an interesting workflow they have there. The ability to do 'stable' eyes, and keep their expressiveness from frame to frame, would be the important part. For making comics rather than animation, the 'believability' and 'human-ness' of expressions would also be key.

Hopefully we'll soon have a Stable Diffusion plugin connector for PostFX in Poser. Send PostFX a quick basic render made with Poser's real-time comic-book mode (in Poser 11 through 13), and get back a professional "slick anime" AI makeover of the image, giving character-consistent AI makeovers for Poser renders. How that would run for animation rather than comics is unknown, in terms of temporal stability (i.e. flicker and wobble between frames).
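No such connector exists yet as far as I know, but the image-to-image step it would wrap is already easy to drive from Python. A minimal sketch using the Hugging Face diffusers library, where the file names, sizes and prompt are just placeholders:

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# load a Stable Diffusion checkpoint for image-to-image work
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# the quick comic-book-mode render exported from Poser (placeholder file name)
init_image = Image.open("poser_comic_render.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="slick anime style, clean lineart, consistent character design",
    image=init_image,
    strength=0.5,        # lower values keep more of the original render, which helps consistency
    guidance_scale=7.5,
).images[0]
result.save("anime_makeover.png")

For animation you would run that once per frame, which is exactly where the temporal-stability problem shows up.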

In other posts you say you have an old workstation and you seem to use Poser. If your machine is an HP Z600 with its original drivers, the ideal for 'old Poser', there should be no problem putting an RTX 3060 12GB card in there, and thus getting Stable Diffusion up and running with something like the InvokeAI package. The only other thing you'd need would be a '10cm PCI Express PCIe 6 Pin to 8 Pin Graphics Card Power Adapter Cable' to connect the PSU to the card.
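Once the card is in and the NVIDIA drivers are installed, a quick way to confirm that PyTorch (which InvokeAI is built on) can actually see it:

import torch

# sanity check before pointing InvokeAI (or any Stable Diffusion front end) at the GPU
print(torch.cuda.is_available())                  # should print True with working NVIDIA drivers
print(torch.cuda.get_device_name(0))              # e.g. "NVIDIA GeForce RTX 3060"
print(torch.cuda.get_device_properties(0).total_memory // 2**20, "MiB of VRAM")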
 