What can Stable Diffusion actually do that Midjourney and DALL-E cannot?
I have been using Midjourney for a while and it does what I need for most things, but I keep reading about people doing things with Stable Diffusion that just do not seem possible with the subscription tools. Things like training it on your own images to get a consistent character or style, using ControlNet to guide the composition from a pose or sketch, or running it on your own machine so you have complete control over the output.
I am a graphic designer, so I am not just a casual user. I genuinely want to understand what the ceiling looks like if you invest the time to learn SD properly. Is the gap between what SD and Midjourney can do as large as the enthusiast community makes it seem, or is a lot of that just the appeal of tinkering for its own sake?
Specifically, I would love to know about the practical workflow for training a LoRA on a specific style or subject, and whether the results are consistent enough to use in professional work. I have seen some impressive demos, but demos are always cherry-picked. What does the average result look like after a reasonable amount of training time?