Luma AI Dream Machine does video-to-video style transfer, and it is the feature I keep coming back to
Most of what gets written about AI video generation focuses on text-to-video, which makes sense because it is the most accessible entry point. But the feature in Luma AI Dream Machine that I find most useful in practice is video-to-video modification, specifically the ability to take an existing clip and completely transform its visual style.
The clearest example of why this matters: I had footage shot on a phone that I wanted to look like 3D animation for a project. I uploaded the clip, described the style I wanted, and it generated a version that matched. The motion and composition of the original were preserved, but the visual language was entirely different. That kind of transformation used to require specialist software and skills that most people do not have.
For text-to-video, the quality is consistently good when you write detailed prompts. The character consistency feature lets you maintain a specific person or character across different scenes, which is the piece most AI video tools struggle with.
The camera controls are precise: panning, zooming, tilting, and combinations of those movements, all directed explicitly rather than left to chance. If you have used other AI video tools where the camera movement is essentially random, this level of control is a meaningful upgrade.