• Welcome to the Community Forums at HiveWire 3D! Please note that the user name you choose for our forum will be displayed to the public. Our store was closed as of January 4, 2021. You can find HiveWire 3D and Lisa's Botanicals products, as well as many of our Contributing Artists, at Renderosity. This thread lists where many are now selling their products. Renderosity is generously putting products which were purchased at HiveWire 3D and are now sold at their store into customer accounts by gifting them. This is not an overnight process, so please be patient if you have already emailed them about this. If you have NOT emailed them, please see the 2nd post in this thread for instructions on what you need to do.

I Just Wanted to Post an Image Thread

Ken1171

Esteemed
Contributing Artist
That looks powerful.

It is indeed, but I still have to get the hang of it. Had to redo the hands multiple times (an SD thing), and it confuses the tail with all sorts of things. I have hopes to figure these things out with more practice. ^^
 

Ken1171

Esteemed
Contributing Artist
ControlNet. Very powerful indeed, and very easy to train. Makes my old 3D-program-to-Stable-Diffusion render pipeline a little easier, as it does a better job of depth maps. Canny edge detection along with the depth mapping is very powerful indeed. Throw in openpose and openpose_hand (which still isn't really great) and there is no more rolling of the dice; a person is quite able to control the output. I've posted workflows for these tools but am loath to give a URL, as it is a competing site to HiveWire and I don't want to fend off the rabid mob of anti-AI people that cruise Rendo. It doesn't have to be Poser, Blender, DS, MakeHuman, or anything else specific to start with, so the fanboys/gurlz for any given program can each take a whack at the AI user with great joy. Ah, the glory of social media and the ability to be a troll without hiding under a bridge.
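The depth-map half of that pipeline is mostly a normalization problem: ControlNet's depth model wants an 8-bit grayscale image where near surfaces are bright, so a z-buffer exported from a 3D render has to be inverted and rescaled first. A minimal sketch of that step (function name and the near/far values are illustrative, not from any particular tool):

```python
import numpy as np

def depth_to_control_image(zbuffer, near=0.1, far=100.0):
    """Normalize a raw z-buffer from a 3D render into the 8-bit
    grayscale image a depth ControlNet expects (near = bright,
    far = dark, matching MiDaS-style inverse depth)."""
    z = np.clip(zbuffer, near, far)
    inv = 1.0 / z                      # inverse depth: closer -> larger
    inv = (inv - inv.min()) / (inv.max() - inv.min() + 1e-8)
    return (inv * 255).astype(np.uint8)

# Toy z-buffer: a "close" square on a distant background.
zbuf = np.full((64, 64), 50.0)
zbuf[16:48, 16:48] = 2.0
ctrl = depth_to_control_image(zbuf)
```

The resulting array can be saved as a PNG and fed to the depth preprocessor slot instead of letting the extension estimate depth from a flat render.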

Someone suggested ControlNet at Facebook last night, AFTER I posted this image. I have played with it for a few hours, but it couldn't figure out what to do with tails. It confuses them with extra arms, legs, or even furniture frames. It couldn't figure out the pose because the tail was confusing it. I need to practice more to see how to handle the different models and parameters.

Yeah, I've gotten my share of trolling from anti-AI advocates at both Rendo and FB. They come in lynch mobs, attacking artists in large groups with all sorts of personal attacks, and sometimes claiming absurd things about AI, showing they don't understand what they are attacking. It's OK to oppose something, but they have to have a valid argument. Most of what they claim is simply not true - but have you tried having a rational conversation with a lynch mob?
 

parkdalegardener

Adventurous
Someone suggested ControlNet at Facebook last night, AFTER I posted this image. I have played with it for a few hours, but it couldn't figure out what to do with tails. It confuses them with extra arms, legs, or even furniture frames. It couldn't figure out the pose because the tail was confusing it. I need to practice more to see how to handle the different models and parameters.

Yeah, I've gotten my share of trolling from anti-AI advocates at both Rendo and FB. They come in lynch mobs, attacking artists in large groups with all sorts of personal attacks, and sometimes claiming absurd things about AI, showing they don't understand what they are attacking. It's OK to oppose something, but they have to have a valid argument. Most of what they claim is simply not true - but have you tried having a rational conversation with a lynch mob?
Get a handle on ControlNet. Pose control now includes hands. Perfect? No. Damn good? You bet. The ability to use these networks on any 1.4 or 1.5 model makes the ControlNet framework very powerful. Multiple-figure interaction is possible using mannequins from your 3D modeling software of choice for the pose control. No model merging necessary; it does it on the fly. Non-destructive. SD 2.x support shortly. Tencent has just released their version of ControlNet training yesterday. Seems to use similar methods, though they also say it should recognize animal poses. Haven't had a chance to run it yet. I suspect that Rigify in Blender or skeleton figures in Poser/DS would work well with this.
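The "no model merging, on the fly" point is the heart of the ControlNet design: the trained control branch emits residuals that are simply added to the frozen base UNet's activations through zero-initialized convolutions, so any SD 1.4/1.5 checkpoint works unchanged. A toy numpy sketch of that idea (shapes and the scalar gate are illustrative stand-ins for the real zero-conv layers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base-model feature map (stand-in for one UNet block's activations).
base_features = rng.standard_normal((1, 8, 16, 16))

# ControlNet branch output for the same block, gated by a "zero conv":
# at the start of training the gate is zero, so the control branch is a
# no-op and the base model's behavior is untouched.
control_residual = rng.standard_normal((1, 8, 16, 16))
zero_conv_gate = 0.0          # grows away from 0 as training proceeds

combined = base_features + zero_conv_gate * control_residual

# After training the gate is nonzero, and the residual steers the output
# without ever modifying (or merging into) the base checkpoint's weights.
trained_gate = 0.7
steered = base_features + trained_gate * control_residual
```

That additive structure is why swapping the underlying 1.5-family checkpoint under the same control net just works: the residuals never touch the base weights.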

I have selective deafness. My hearing fails a bit when the antagonism rises. With the addition of tools coming at the rate they are, none of the bogus reasoning and pseudoscience will really have a leg to stand upon in the near future.

Tools are tools. Using the best tool available doesn't make you a master at a job. Having a gas powered nail gun doesn't make you a professional roofer. It just makes it faster to put more holes in the roof. Knowledge and experience make one a professional and the tools used just enhance the effectiveness of the workflow. Generative AI isn't any different. Makes a lot of holes in the roof before the inexperienced person wielding it actually hits a rafter.
 

parkdalegardener

Adventurous
And no sooner do I finish typing the above than T2I-Adapter support from TencentARC has been added into ControlNet, along with Tencent's sketch adapter. Guess I'm going to be busy this weekend.
 

Ken1171

Esteemed
Contributing Artist
Get a handle on ControlNet.

Yes, I plan to. I have been busy with Python, creating new automations for Poser, so I have to find some time to keep an eye on AI because it keeps pouring out innovation faster than I can follow. :)

With the addition of tools coming at the rate they are, none of the bogus reasoning and pseudoscience will really have a leg to stand upon in the near future.

Like you said, tools are tools. AI-haters claim "there is no human involved in the image creation", only showing they never even tried to use it before criticizing. They look at the final image, thinking the AI did everything perfectly on the first try - yeah, I wish! Maybe that would be true if I wasn't trying to achieve something specific. It sounds pretty much like how people criticized Poser 20 years ago, claiming there was a "magic button" that creates art. They also claimed it took no effort to 3D render. That's what happens when people criticize without trying, or understanding what's under the hood and what it takes to produce good results. Before 3D, it was photography. Same old.
 

parkdalegardener

Adventurous
Awh, now I had to look for the Make Art button. It's gone!! I hadn't thought about it in so long I forgot that joke was in Poser.

As for animation, I suspect that with a little work BVH can transfer to pose control. When I first started working with OpenCV a couple of years ago, I was working with pose estimation, joining that with Kinect camera skeletons and depth mapping. The tools are now more advanced, and opencv-python has advanced quite a bit as well.
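Whatever the source of the joints (OpenCV pose estimation, a Kinect skeleton, or a BVH rig), the bridge to ControlNet's pose model is the same: rasterize the 2D joint positions into a stick-figure control image. A simplified sketch with a made-up three-joint "arm" (the real openpose format uses a fixed joint set and color-coded limbs, which this ignores):

```python
import numpy as np

def draw_skeleton(keypoints, bones, size=64):
    """Rasterize 2D joint positions into a grayscale stick-figure
    image, the rough shape of an openpose-style control map."""
    img = np.zeros((size, size), dtype=np.uint8)
    for a, b in bones:                      # draw each bone as a line
        (x0, y0), (x1, y1) = keypoints[a], keypoints[b]
        for t in np.linspace(0.0, 1.0, size):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            img[y, x] = 255
    return img

# Tiny 3-joint "arm": shoulder -> elbow -> wrist.
joints = {"shoulder": (10, 10), "elbow": (30, 30), "wrist": (50, 20)}
bones = [("shoulder", "elbow"), ("elbow", "wrist")]
ctrl = draw_skeleton(joints, bones)
```

For animation, running this per frame over a keypoint stream gives a consistent sequence of control images, which is exactly the kind of input the pose ControlNet consumes.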

Three of Tencent's nets are now functioning with ControlNet YAMLs and a Gradio interface. It's really hard to keep up, indeed. No sooner do I finish reading a white paper than someone's already jumped on it.
 

Doug Hunter

Busy Bee
Contributing Artist
A render from LaCyborg LaFemme Set2
a "second skin" on LaFemme with displacement mapping

LaCyborg 02.jpg
 
Awh, now I had to look for the Make Art button. It's gone!! I hadn't thought about it in so long I forgot that joke was in Poser.

As for animation, I suspect that with a little work BVH can transfer to pose control. When I first started working with OpenCV a couple of years ago, I was working with pose estimation, joining that with Kinect camera skeletons and depth mapping. The tools are now more advanced, and opencv-python has advanced quite a bit as well.

Three of Tencent's nets are now functioning with ControlNet YAMLs and a Gradio interface. It's really hard to keep up, indeed. No sooner do I finish reading a white paper than someone's already jumped on it.
Well, Plask.ai has a free tier, and I have not found it bad for BVH with legacy figures in DAZ Studio. Genesis+ just gets worse, foot-sliding-wise, with each generation, whatever the BVH format or conversion method; in DAZ Studio at least. They work OK in iClone, Unreal, etc.
I use batch render with the Visions of Chaos implementation of A1111. I added ControlNet yesterday, but that doesn't seem to work with batch render, so it's not really useful for animation. Grab the smaller models; download the Nerdy Rodent links on his video, as they are as good as the original ones Aientrepeneur links, which is a whopping 45 GB.
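Batch requests to an A1111 instance go through its `/sdapi/v1/txt2img` endpoint, with ControlNet riding along via the extension's `alwayson_scripts` hook. A hedged sketch of the payload (the core field names are from the public web UI API; the ControlNet arg names vary by extension version, so treat those, and the model name, as assumptions to check against your install):

```python
import json

# Typical fields of an A1111 /sdapi/v1/txt2img request.  The ControlNet
# block below follows the extension's "alwayson_scripts" convention;
# exact arg keys differ between extension versions, so verify locally.
payload = {
    "prompt": "portrait of a cyborg, studio lighting",
    "negative_prompt": "blurry, extra limbs",
    "steps": 25,
    "batch_size": 4,                     # batch render in one request
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "ControlNet": {
            "args": [{
                "module": "canny",               # preprocessor
                "model": "control_sd15_canny",   # assumed model name
                "weight": 1.0,
            }]
        }
    },
}

body = json.dumps(payload)
# To actually submit, assuming a local server started with --api:
# import requests
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", data=body)
```

Driving the endpoint from a script like this is one way around a front end whose batch mode and ControlNet don't cooperate: the loop over frames lives in your own code.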
 

parkdalegardener

Adventurous
You misunderstand what I am saying about BVH files. I am not talking about using them to manipulate 3D figures. I am talking about using BVH directly in generative AI. The pose estimation in computer vision applications outputs what is essentially a BVH skeleton. When you do animation using i2i, consistency only exists at very low denoising levels, which makes for very little change from the input video. Over the weekend (I have to go out shortly, so I haven't started yet) I want to see if I can use BVH files directly, or with a little re-ordering, as the input to guide the diffusion.
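A BVH file is plain text: a HIERARCHY section of nested joints with OFFSET and CHANNELS lines, then a MOTION section with one row of channel values per frame. So pulling the skeleton out as a pose source is mostly string handling. A minimal sketch over a tiny embedded file (a real parser must also track the brace-nested hierarchy and "End Site" blocks, which this skips):

```python
# Minimal BVH skim: collect joint names and channel counts so the
# MOTION rows can be sliced per joint.
sample_bvh = """HIERARCHY
ROOT Hips
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0.0 5.0 0.0
    CHANNELS 3 Zrotation Xrotation Yrotation
  }
}
MOTION
Frames: 1
Frame Time: 0.0333
0.0 90.0 0.0 0.0 0.0 0.0 10.0 0.0 0.0
"""

def skim_bvh(text):
    joints, frames, name = [], [], None
    lines = iter(text.splitlines())
    for line in lines:
        tok = line.split()
        if tok and tok[0] in ("ROOT", "JOINT"):
            name = tok[1]
        elif tok and tok[0] == "CHANNELS":
            joints.append((name, int(tok[1])))
        elif tok and tok[0] == "Frame" and tok[1] == "Time:":
            # everything after "Frame Time:" is per-frame channel data
            frames = [[float(v) for v in l.split()] for l in lines]
    return joints, frames

joints, frames = skim_bvh(sample_bvh)
```

Each frame row maps onto the joints in declaration order (here 6 channels for Hips, then 3 for Spine), which is the "re-ordering" that would be needed to turn the rows into whatever layout a pose-guided diffusion input wants.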

The batch processing limitation is not an issue if you run locally. I have been running batches since I installed the Nets. I haven't had the time to batch the Tencent implementations for sketch input yet but I have no problem running their Nets on my machines.

I know both the video presenters you mention, but by the time they present something I've already read the white paper and am working with it locally. After all, both of them predicted the death of AI with the lawsuits and again stated its demise with the release of 2 and 2.1. I haven't looked at their channels in a couple of weeks, though, so I don't know how they are doing right now. Both have good info.
 
You misunderstand what I am saying about BVH files. I am not talking about using them to manipulate 3D figures. I am talking about using BVH directly in generative AI. The pose estimation in computer vision applications outputs what is essentially a BVH skeleton. When you do animation using i2i, consistency only exists at very low denoising levels, which makes for very little change from the input video. Over the weekend (I have to go out shortly, so I haven't started yet) I want to see if I can use BVH files directly, or with a little re-ordering, as the input to guide the diffusion.

The batch processing limitation is not an issue if you run locally. I have been running batches since I installed the Nets. I haven't had the time to batch the Tencent implementations for sketch input yet but I have no problem running their Nets on my machines.

I know both the video presenters you mention, but by the time they present something I've already read the white paper and am working with it locally. After all, both of them predicted the death of AI with the lawsuits and again stated its demise with the release of 2 and 2.1. I haven't looked at their channels in a couple of weeks, though, so I don't know how they are doing right now. Both have good info.
You mean like ControlNet OpenPose?
For BVH, use an image-series render.
And for batch rendering in Automatic1111, check this in settings or you will get an error.
 

Attachments

  • 00035-3247720887.png (134.1 KB)
  • 00031-3876060701.jpg (207.4 KB)
  • 00039-350641213.jpg (152.3 KB)
  • 00040-3823326259.jpg (197.8 KB)
  • Capture.JPG (138.2 KB)

Roberta

Eager
I had a lot of fun with the SuperFly materials, pairing them with the primitive props featured in Poser 11 Pro and with other props in my runtime. The pants are also in Poser (but they were for LaFemme); I fitted them perfectly, and so did a shirt that was part of an old package for V4. The collar is free from Rosemaryr at ShareCG.
Dawn-Diva (morphs and textures from my RA Scary BabyLuna and Scary Diva) looks comfortable with him.
The original render was 3153 x 3928. Thanks for looking! :)
Diva_Carnevale2023_Web.jpg
 

Hornet3d

Wise
Caoimhe stopped at the first sign of smoke, and what there was looked far too innocent to be anything serious. Experience told her not to make any assumptions; the forest floor could have been smouldering for hours, or it could be the sign of something that had only just happened. The scan showed no significant heat source, but the chemical readout of air composition told another story.

Forest Fire HW.jpg
 