When OpenAI published a demo of Sora back in February, the hype train made its way across all the major tech blogs, carrying news of Sora’s potential. After months of anticipation, on the 9th of December, OpenAI finally released it to ChatGPT users on paid plans (but only in select regions).
I happened to catch wind of the release that same day. Unfortunately, I didn't catch it early enough: by the time I went to try it, OpenAI's servers were so overloaded with Sora frenzy that they had to temporarily block access to new users. I then forgot about it for a couple of weeks, but it came back onto my radar while I was putting together the latest issue of our Bizarro Devs newsletter. I took it as a sign that it was time to finally jump in and take Sora for a test spin.

If you haven't tried Sora yet (or are in a region where it's not currently available) and you're curious about my results, then keep reading. I'll begin with an overview of Sora's usage limits, its UI/UX, and the different ways you can make a video with it. Then I'll share samples of the videos I made – both good and bad – and review some of Sora's additional features that you can use to edit your videos.