Midjourney V6 renders text inside images accurately enough that I use it for typography concepts now
This is a narrow post about a specific capability that changed how I use Midjourney professionally.
I do illustration and visual design work. A recurring brief is concept work for clients who want to see how text will look integrated into an illustration before committing to a final direction: type treatment, how words sit in a composition, the relationship between illustration and lettering. That used to be a manual concepting step, because AI image generators rendered text as garbage: distorted letterforms that were useless as type concepts.
Midjourney V6 handles text rendering with enough accuracy to be usable for concept purposes. It is not ready for finished typography, and it is still not perfect, but for showing a client "here is roughly how this headline treatment could integrate into this illustration style," the output is now legible and compositionally useful.
The Style Reference parameter is the other half of the combination I use most. I establish a visual style from a reference image with --sref, then generate multiple compositions with text integrated. The client can respond to real visual concepts rather than trying to imagine the combination from a description.
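As a rough sketch of what that combination looks like in practice (the subject matter and reference URL here are made up for illustration), putting the words you want rendered in quotation marks and attaching the style reference with --sref:

```
/imagine prompt: hand-lettered headline "Spring Market" integrated into a
botanical poster illustration --sref https://example.com/style-ref.jpg --v 6
```

The quoted string is the part V6 attempts to render as legible type; the rest of the prompt describes the composition the lettering sits in, and the reference image carries the visual style.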
The inpainting editor lets me fix the specific letterforms that go wrong in a generation without regenerating the whole composition: select the distorted letter and regenerate just that area. For a concept image, that gets you to something presentable without starting over.
The Community Gallery is useful for understanding which prompting approaches produce the cleanest text rendering before you try them on a real brief.