Spent two weeks testing Sora properly: an honest account of where it stands and where it falls short
I have been testing AI video generators systematically, and since Sora gets a lot of attention I wanted to give it a fair, thorough go rather than generating a few clips and forming an opinion. Two weeks and a lot of prompts later, here is what I actually found.
The realistic video generation is genuinely impressive for specific types of content: complex scenes with multiple elements, particular kinds of motion, accurate background detail. When it works well, it produces clips that are more cinematic than most other tools manage at this quality level, and the gap between a good Sora output and a good output from another tool is visible.
The Personal Cameos feature lets you create a digital version of your own likeness through a face-capture process in the mobile app; the result can then appear in generated scenes. It sounds like a gimmick but is actually useful if you want to create content featuring yourself without being on camera.
The community discovery feed, where you can browse and recreate videos other users have generated, is genuinely useful for understanding which kinds of prompts produce good results before you spend credits finding out yourself. More tools should do this.
Where it struggles: consistency across longer content, hands and fine details in close-up shots, and prompts that involve complex physical interactions. These are known limitations across AI video generally, and Sora is not exempt from them.
Prompt-based editing after generation lets you refine a result without starting from scratch, which saves real time in the iteration loop.