Movie Masher supports a range of network architectures for different use cases and performance requirements. The blue-colored components in the diagrams below represent either an instance of the Server AMI running in EC2, or any other web server with the Movie Masher SDK installed. The green-colored components represent instances of the Transcoder.
In this approach, users interact only with your server, which passes transcoding operations to a single Transcoder instance through its REST interface. Since media files reside locally on your server, it must be configured to accept uploads and to handle a significant amount of download traffic.
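The server-to-Transcoder handoff can be sketched as a plain HTTP POST. The endpoint path and payload fields below are illustrative assumptions, not the Transcoder's actual REST interface; consult its API documentation for the real parameters.

```python
import json
import urllib.request

# Hypothetical endpoint; the Transcoder's REST interface defines
# its own paths and parameters.
TRANSCODER_URL = "http://transcoder.internal/job"

def build_transcode_request(source, target, preset):
    """Build (but do not yet send) an HTTP POST for a transcoding job."""
    body = json.dumps({
        "source": source,   # path of the uploaded media on this server
        "target": target,   # where the rendered file should be written
        "preset": preset,   # assumed output-format identifier
    }).encode("utf-8")
    return urllib.request.Request(
        TRANSCODER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending is then a one-liner:
# with urllib.request.urlopen(build_transcode_request(...)) as resp:
#     job = json.load(resp)
```

Because there is only one Transcoder in this configuration, every request built this way lands in the same queue behind any jobs already in flight.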
With this configuration, all user requests involving files are handled by S3, including uploads. Media files never reside on your server, which reduces its load, and transcoding operations are faster as well, since transfers between EC2 and S3 are the quickest available. There is still just one Transcoder, however, so users must wait their turn as uploads and renders are processed sequentially through its FIFO queue.
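Letting users upload straight to S3 typically means handing the browser a time-limited presigned URL. The AWS SDKs generate these for you (for example, boto3's `generate_presigned_url`); the stdlib-only sketch below shows the Signature Version 4 query-string signing that such a call performs under the hood. The bucket name and key are placeholders, and a production system would use the SDK rather than hand-rolled signing.

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_s3_put(bucket, key, access_key, secret_key,
                   region="us-east-1", expires=3600):
    """Return a time-limited URL a client can PUT a file to directly."""
    host = f"{bucket}.s3.amazonaws.com"
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    query = urllib.parse.urlencode(sorted({
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }.items()))
    canonical = "\n".join([
        "PUT", f"/{key}", query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    # Derive the signing key: date -> region -> service -> request.
    sig_key = hmac.new(("AWS4" + secret_key).encode(),
                       datestamp.encode(), hashlib.sha256).digest()
    for part in (region, "s3", "aws4_request"):
        sig_key = hmac.new(sig_key, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(sig_key, to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{query}&X-Amz-Signature={signature}"
```

The URL embeds the credential scope and expiry, so the server never touches the file bytes: the client PUTs directly to S3, and only the resulting key is passed along for transcoding.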
This final approach also utilizes S3, but adds SQS and multiple instances of the Transcoder running in parallel. Transcoding tasks are sent to SQS instead of directly through REST, so they can be doled out to whichever instance is not currently busy. The number of instances in the pool can also be configured to adjust automatically as demand changes, making this the most scalable approach during demand spikes.
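The dispatch pattern here is a shared queue feeding a pool of competing consumers. With real AWS, the producer calls SQS `send_message` and each Transcoder instance polls with `receive_message`; the stdlib sketch below models the same behavior locally, with each thread standing in for one Transcoder instance.

```python
import queue
import threading

def run_pool(jobs, instances=3):
    """Fan jobs out to whichever simulated instance is free."""
    q = queue.Queue()
    done = []
    lock = threading.Lock()

    def worker(name):
        # Each worker pulls the next available job, like a Transcoder
        # instance polling SQS, until the queue is drained.
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return
            # A real instance would transcode here; we just record it.
            with lock:
                done.append((name, job))

    for job in jobs:
        q.put(job)
    threads = [threading.Thread(target=worker, args=(f"transcoder-{i}",))
               for i in range(instances)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

Because every instance competes for the next message, a long render on one instance no longer blocks the jobs behind it, which is exactly what the single-Transcoder FIFO configurations cannot offer.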