
I Just Wanted to Post an Image Thread

parkdalegardener

Adventurous
By default Stable Diffusion adds an invisible watermark to the generated images; it's really just a metadata tag, nothing visible on the image itself. It also has self-imposed censorship of R-rated images. However, since this is open source, all of that can be disabled if you know your way around Python. In my local version, I have disabled all of it.

Here's a macro photograph of a very small robot insect, done with Stable Diffusion. :D

[Attachment 75646]
Actually, the "print" in the lower left-hand corner is not an invisible watermark. It is most probably from the training data. I would suspect that most of the "english cottage" ground truths the model is trained on come from public domain woodcut images, and a lot of those, when published, had labeling in the lower left. The same training bias is visible using your prompt above: I generated 20 images with it, and 18 of the results show the same orientation of the "insect" as your own, despite there being no orientation information in the prompt. While fun to mess with, I see the man behind the curtain. It takes away some of the fun, as I find myself testing its training to see the bias instead of being amazed by the results. Like all who download the code, I am aware of the NSFW lock in the code and really don't care to disable it. Like I said, I agreed not to make questionable images when I installed the software.
 

Attachments

  • 20220904070910_1884061200.png (436.3 KB)
  • 20220904071007_3160200055.png (278.9 KB)
  • 20220904071105_3889884801.png (326.1 KB)
  • 20220904071514_2650622297.png (323.1 KB)

Ken1171

Esteemed
Contributing Artist
Actually, the "print" in the lower left-hand corner is not an invisible watermark. It is most probably from the training data.

That's not what I meant. The Python script in Stable Diffusion inserts a tag into the image to tell other programs that it was generated by AI. They call it an "invisible watermark", and that's what I was referring to. That can also be removed. The embedded tag even includes the text prompt used to create the image.
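For anyone curious how that works in practice: the reference release stamps the pixels with the invisible-watermark package, while most front ends also write the prompt and settings into a PNG text chunk. Here's a minimal Pillow-only sketch of the text-chunk part; the key name "parameters" and the example prompt are assumptions, since different UIs use different keys.

Code:
# Minimal sketch: stash a prompt in a PNG text chunk and read it back.
# The key name "parameters" is an assumption; front ends differ.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

meta = PngInfo()
meta.add_text("parameters", "macro photography of a tiny robot insect, seed 1884061200")

img = Image.open("20220904070910_1884061200.png")  # any generated PNG
img.save("tagged.png", pnginfo=meta)

# Reading the tag back from any PNG that carries one:
print(Image.open("tagged.png").text.get("parameters"))

Stripping the tag is just as easy: re-save the image without passing pnginfo and the text chunk is gone.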

Like all who download the code, I am aware of the NSFW lock in the code and really don't care to disable it. Like I said, I agreed not to make questionable images when I installed the software.

Sure, just remember the NSFW filter only matters for images you will post online. It's there as liability protection against legal issues from TOS violations at 3rd-party sites. That's what the "questionable images" wording refers to.
 

parkdalegardener

Adventurous
Yeah; I'm not worried about that. I find it a fun thing to play with, as I like to figure out the training bias. Seed values, for instance, appear to be a predictor of output style. Image size also seems to reveal the training data, as does image orientation. It's really quite fast on my card, so I can kick out fast iterations of an expression. Punctuation is quite interesting when it comes to fine-tuning an image, though capitalization appears not to matter. There appears to be no difference between "Cat" and "cat" in a prompt. Below are all from the exact same words and seed, with punctuation being the only tuning.
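For anyone who wants to reproduce this kind of test, here's a rough sketch of the idea using the Hugging Face diffusers pipeline rather than a local script (the model ID and prompts are just placeholders): fix the seed, vary only the punctuation, and compare the outputs.

Code:
# Rough sketch (assumes the diffusers package and the v1.4 weights are available).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "a cat sitting on a stone wall, english cottage garden",
    "a cat, sitting on a stone wall. english cottage garden",
]

for i, prompt in enumerate(prompts):
    # Re-seeding with the same value each time isolates punctuation as the only variable.
    generator = torch.Generator("cuda").manual_seed(600)
    pipe(prompt, generator=generator).images[0].save(f"punctuation_test_{i}.png")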
 

Attachments

  • 20220904162916_600.png (469.3 KB)
  • 20220904162951_600.png (460.7 KB)
  • 20220904163057_600.png (487 KB)
  • 20220904163547_600.png (479.2 KB)
  • 20220904163812_600.png (454.8 KB)

Ken1171

Esteemed
Contributing Artist
Punctuation is quite interesting when it comes to fine-tuning an image, though capitalization appears not to matter.

That is correct. Capitalization is ignored, but punctuation affects the results because it can change the meaning of a sentence in natural language interpretation. There are several examples of sentences where just moving a comma changes the meaning significantly. The order of words is also important, and words can get more weight depending on where they are in the sentence. Those are good things to know when typing prompts. :)
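A quick way to see why capitalization drops out: the CLIP text encoder that SD v1.x uses lower-cases prompts during tokenization, while punctuation survives as extra tokens. A small sketch with the transformers tokenizer (the checkpoint name is an assumption):

Code:
# Sketch: compare token IDs for capitalized vs. punctuated prompts.
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

# Lower-casing happens inside the tokenizer, so these should match:
print(tok("Cat").input_ids == tok("cat").input_ids)

# Punctuation adds tokens, so these differ (and so do the resulting embeddings):
print(tok("a cat, on a wall").input_ids)
print(tok("a cat on a wall").input_ids)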
 

robert952

Brilliant
I'll stop flooding the thread with these though. When I grok the training bias enough I'll post something interesting.
May I suggest a separate thread for this? When Mandelbulb3D and Apophysis first came out, I didn't have the computer power to really run them, so I didn't follow their development too much. But the concept intrigued me. That's part of why I got into 3D stuff when I came across DAZ Studio (some time just before V2.0).

I'd follow a thread on this. Consider the other programs that generate landscapes and bring in other 3D models (Flowscape for example).

Speaking of CGI (and a bit OT), I just watched the first two episodes of The Rings of Power on Prime. Great visual experience.
 

parkdalegardener

Adventurous
robert, my interest in this may be temporary as I am currently trapped here recovering from a car accident. It passes the time and is easier for me right now than Poser, which is my usual poison of choice; as my movement is drastically reduced for the foreseeable future.

As for Rings of Power; I'm afraid I haven't seen it and am not likely to. Retired for years on a small fixed income means no Prime, no Amazon deliveries either for that matter. My friends in "The Industry" are somewhat unhappy at the amount of effects and the pace they were produced at for RoP but are happier at the outsourced pay rate than those that worked on House of the Dragon. I am also told that the effects work on HotD was sent in at a quality many are unhappy with due to the schedule but that is all hearsay till I see for myself.
 

Hornet3d

Wise
robert, my interest in this may be temporary as I am currently trapped here recovering from a car accident. It passes the time and is easier for me right now than Poser, which is my usual poison of choice; as my movement is drastically reduced for the foreseeable future.

As for Rings of Power; I'm afraid I haven't seen it and am not likely to. Retired for years on a small fixed income means no Prime, no Amazon deliveries either for that matter. My friends in "The Industry" are somewhat unhappy at the amount of effects and the pace they were produced at for RoP but are happier at the outsourced pay rate than those that worked on House of the Dragon. I am also told that the effects work on HotD was sent in at a quality many are unhappy with due to the schedule but that is all hearsay till I see for myself.
Sorry to hear about your car accident. I hope your recovery is a full one, achieved in double-quick time.
 

Miss B

Drawing Life 1 Pixel at a Time
CV-BEE
Sending good wishes your way, PDG! I also hope you recover sooner rather than later.
 

Pendraia

Sage
Contributing Artist
robert, my interest in this may be temporary as I am currently trapped here recovering from a car accident. It passes the time and is easier for me right now than Poser, which is my usual poison of choice; as my movement is drastically reduced for the foreseeable future.

As for Rings of Power; I'm afraid I haven't seen it and am not likely to. Retired for years on a small fixed income means no Prime, no Amazon deliveries either for that matter. My friends in "The Industry" are somewhat unhappy at the amount of effects and the pace they were produced at for RoP but are happier at the outsourced pay rate than those that worked on House of the Dragon. I am also told that the effects work on HotD was sent in at a quality many are unhappy with due to the schedule but that is all hearsay till I see for myself.

Hope you're feeling better soon.
 

parkdalegardener

Adventurous
Thank you, Kera. They had to replace the rods again.

So, with that being said: yes, I have rods in my back. The folk over on PFD, like Kerya, have known that for years. It got me thinking and gave me a prompt to work with. Using the fact that SD is trained mostly on square images, a little twist or two will get a double result on rectangular images. It also gives heads growing out of heads, but that was another experiment. Think of this as a little pdg self-portrait remembrance of before and after.
 

Attachments

  • 20220906141533_2177880885.png (453.8 KB)

Ken1171

Esteemed
Contributing Artist
Using the fact that SD is trained mostly on square images, a little twist or two will get a double result on rectangular images.

I have noticed this too. When I use portrait or landscape aspect ratios, there is a good chance SD will duplicate the subject, or worse, parts of the subject, like 2 torsos or 2 heads. If all the training images were square, it doesn't know how to handle the extended space because it has never seen it. However, I think all AIs were trained with square images, so if other AIs don't duplicate things, then there may be a way around this.
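It's easy to see the effect side by side. A rough sketch using the diffusers pipeline again (the model ID, prompt, and seed here are just placeholders): render the same prompt and seed once square and once in landscape, and compare.

Code:
# Rough sketch: compare a square render against a landscape one with
# everything else held constant (both sizes are multiples of 64).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "full body portrait of a woman standing in a garden"
for w, h in [(512, 512), (704, 384)]:
    generator = torch.Generator("cuda").manual_seed(1234)
    image = pipe(prompt, width=w, height=h, generator=generator).images[0]
    image.save(f"aspect_test_{w}x{h}.png")  # the wide one often doubles the subject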
 

parkdalegardener

Adventurous
I have noticed this too. When I use portrait or landscape aspect ratios, there is a good chance SD will duplicate the subject, or worse, parts of the subject, like 2 torsos or 2 heads. If all the training images were square, it doesn't know how to handle the extended space because it has never seen it. However, I think all AIs were trained with square images, so if other AIs don't duplicate things, then there may be a way around this.
I've trained numerous models for specific recognition tasks. The training images do not have to be a certain shape or size, but they are converted to a similar size of the trainer's choosing during training. In short, an image is converted to greyscale, resized to a standard training size, hashed, manipulated, and hashed again. Those last two steps are repeated as often as the trainer requires (the training epochs). These results build the database.
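As a rough illustration of that convert-and-resize step (the library and the 224x224 target here are my own choices for the sketch, not anything tied to SD):

Code:
# Sketch of a typical recognition-training preprocessing step using torchvision.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # convert to greyscale
    transforms.Resize((224, 224)),                # resize to a standard training size
    transforms.ToTensor(),                        # hand off to the feature/"hashing" steps downstream
])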

SD goes in the opposite direction. It builds the requested image by changing textual info into a hash and using that hash to refine noise into an image containing all the hash info requested. Its method couldn't care less what the output size is; it will create image data to fill whatever dimensions the user asks for. That is where the rub lies: step two of training, converting images to a standard size for manipulation. SD treats its output as a square to match the training data, and that's why prompt content gets duplicated in the output. SD just tries to match the hash data in the query to the hash data in its database (the SD model).

SD could be used to build out around an image, creating the missing part of a butterfly's wing, for instance, where the original photograph of the insect on a flower had part of a wing cut off.

We are getting way into the weeds on this one for a thread that is basically a showoff thread. I agree with robert. We should probably start a different thread about this stuff.
 

Ken1171

Esteemed
Contributing Artist
The documentation claims SD requires dimensions to be multiples of 64, so it doesn't have to be square, but it also can't be any arbitrary size. Fair enough. The dimensions are also capped by how much VRAM we have, but we can upscale afterwards. However, if I want to make a 2K image, that's a problem because 1440 is not a multiple of 64. Maybe this can be compensated for with outpainting, which is yet another cool thing SD can do.
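A couple of lines of Python will snap a target size to the nearest multiple of 64 (a quick sketch; 2560x1440 is the 2K case mentioned above):

Code:
# Snap a requested render size to the nearest multiple of 64.
def snap_to_64(value: int) -> int:
    return max(64, round(value / 64) * 64)

for w, h in [(2560, 1440), (704, 384), (512, 512)]:
    print((w, h), "->", (snap_to_64(w), snap_to_64(h)))
# (2560, 1440) -> (2560, 1408); the other two are already multiples of 64.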
 

parkdalegardener

Adventurous
Documentation? RTFM? You have to be kidding. I barely admit to being able to read at all. I'm the first person in my family who even finished high school.

I picked an arbitrary image ratio to work with. 704x384 or 512x512. I have no idea how much VRAM is on the system. I know I'm supposed to need a 30-series card, and this machine is a 2070. I have another box with a 3080 and twice the system RAM this machine has, but that machine isn't for development and messing around.

As mentioned above in a late response to RAMWolff, I use Video2X for upscaling. It's open source; predominantly it's for upscaling anime and Ghibli, but since it has the Vulkan "real" AI model as well as the others, it does stills in moments. It will use all the GPU cycles you throw at it for the encoding, but it is still a Win32 app. Again, I use all this stuff on "under spec" equipment, and have yet to even have my fans turn on.

Whoops, gotta post an image. That is the purpose of the thread... There we go.
 

Attachments

  • 20220906135945_2720086599.png (460.9 KB)
  • 20220906140807_2884479082.png (427.7 KB)

Ken1171

Esteemed
Contributing Artist
I picked an arbitrary image ratio to work with. 704x384 or 512x512.

Wow, all 4 dimensions are precise multiples of 64, and you said you picked them at random? You are very lucky! ^____^ I have found a WebUI version that automatically picks multiples of 64 when setting the dimensions, so I can drop the calculator when using it.

Looks like they have a new model, version 1.5, cooking up on the web version. It's still in beta, and depending on how testing goes, they might release it to the public in a week or two. It is meant to improve faces and hands, but in its current state it's still pretty bad, which is just the stuff we were mentioning above. As expected, they are working on it, and I am surprised by how fast it's developing. A lot happens in just a week.

Alright, let's post a picture. I like this little bugger. It's macro photography again. With the current model 1.4, SD still has trouble counting limbs and figuring out where they attach.

RobotInsects_06.jpg
 