// Global Video Cluster

A client was using Flash over HTTP to deliver videos. The setup lacked failover and scalability, and it suffered from excessive bandwidth usage, high costs, and poor performance for users on low-bandwidth connections or far from the server.

The HTTP/PHP solution ran on a single server in Australia, which meant downtime and scaling problems. Fast-forwarding and rewinding re-downloaded the remainder of the video each time; because these were training videos that users scrubbed through repeatedly, each video was downloaded more than four times on average.

Bandwidth out of Australia was expensive. Response times were good for Australian customers, but UK customers struggled with high latency and poor performance.

Because the videos were served as fixed files over HTTP, the video simply kept pausing for users on slower connections.

Content security was very poor: it was trivial to locate the URLs and download the entire video library.

We helped redesign the system as a cluster of video servers. The new design included:
– servers in the UK, on Amazon EC2, and in Australia
– a redesigned Flash player that probed all available video servers and connected to the closest, least loaded one (see the selection sketch after this list)
– video servers that reduced content quality in real time for slower connections (sketched below)
– video servers that used access tokens and a non-HTTP streaming protocol to protect content (sketched below)
– buffering of only 1-5 seconds of video, so seeking within a file was far less wasteful
– watermarking of the video content by the video server
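
As a rough illustration of the player's server-selection step, the sketch below probes each server and picks the closest, least loaded one. The hostnames, the /status endpoint, and its JSON fields are assumptions made for this example (and the real player was written in Flash, not Python).

    # Sketch of the player-side selection logic, assuming each video server
    # exposes a lightweight /status endpoint reporting its current load.
    import json
    import time
    import urllib.request

    SERVERS = [                                  # hypothetical hostnames
        "video-au.example.com",
        "video-uk.example.com",
        "video-ec2.example.com",
    ]

    def probe(host, timeout=2.0):
        """Return (round_trip_seconds, reported_load) or None if unreachable."""
        url = f"https://{host}/status"           # assumed status endpoint
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status = json.load(resp)
            return time.monotonic() - start, float(status.get("load", 0.0))
        except (OSError, ValueError):
            return None                          # treat the server as down

    def pick_server(servers=SERVERS):
        """Prefer the lowest round-trip time, breaking ties by reported load."""
        candidates = []
        for host in servers:
            result = probe(host)
            if result is not None:
                rtt, load = result
                candidates.append((rtt, load, host))
        if not candidates:
            raise RuntimeError("no video servers reachable")
        return min(candidates)[2]                # smallest rtt, then load

    if __name__ == "__main__":
        print("streaming from", pick_server())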
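
The real-time quality reduction can be pictured as choosing a rendition from a bitrate ladder based on the client's measured throughput and re-encoding on the fly. The ladder values and the ffmpeg-based command below are illustrative assumptions, not the original implementation.

    # Sketch of the server-side quality step-down, assuming the server has
    # measured the client's sustained throughput in kbit/s.
    # Ladder entries: (minimum throughput kbit/s, target video bitrate, height)
    BITRATE_LADDER = [
        (2000, "1500k", 720),
        (1000, "800k", 480),
        (500, "400k", 360),
        (0, "200k", 240),
    ]

    def choose_rendition(measured_kbps):
        """Return the highest rendition the measured throughput can sustain."""
        for min_kbps, bitrate, height in BITRATE_LADDER:
            if measured_kbps >= min_kbps:
                return bitrate, height
        return BITRATE_LADDER[-1][1:]            # fallback: lowest rendition

    def transcode_command(source, measured_kbps):
        """Build an ffmpeg command that scales and re-encodes for the client."""
        bitrate, height = choose_rendition(measured_kbps)
        return [
            "ffmpeg", "-i", source,
            "-vf", f"scale=-2:{height}",         # keep aspect ratio, even width
            "-b:v", bitrate,
            "-f", "flv", "pipe:1",               # stream FLV to stdout
        ]

    if __name__ == "__main__":
        print(" ".join(transcode_command("training.mp4", 750.0)))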
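
The token protection can be sketched as a shared-secret HMAC scheme: the application issues a short-lived token for a given video and user, and the video server verifies it before serving the stream. The token format and names below are assumptions for illustration.

    # Sketch of token-based access control, assuming the application server
    # and the video servers share a secret.
    import hashlib
    import hmac
    import time

    SHARED_SECRET = b"replace-with-a-real-secret"     # assumed shared secret

    def issue_token(video_id, user_id, ttl=300):
        """Return 'expiry:signature', valid for ttl seconds for one video/user."""
        expiry = int(time.time()) + ttl
        message = f"{video_id}:{user_id}:{expiry}".encode()
        signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
        return f"{expiry}:{signature}"

    def verify_token(video_id, user_id, token):
        """Video-server side: reject expired or forged tokens."""
        try:
            expiry_text, signature = token.split(":", 1)
            expiry = int(expiry_text)
        except ValueError:
            return False
        if expiry < time.time():
            return False
        message = f"{video_id}:{user_id}:{expiry}".encode()
        expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(signature, expected)

    if __name__ == "__main__":
        token = issue_token("intro-course-01", "user42")
        assert verify_token("intro-course-01", "user42", token)
        assert not verify_token("other-video", "user42", token)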

The outcomes of the project:
– all users connected to geographically closer servers, so they received better quality and faster-starting videos
– a global cluster with failover: if any one video server went down, users connected to the next closest server
– reduced cost through cheaper bandwidth and lower overall bandwidth usage
– better delivery quality for end users with poor connections
– better security for the content

The project was designed and put into beta in 2 weeks, and was fully deployed in under a month.