How's that there cloudy thing working out for you? Sure, you get flexible, elastic infrastructure at a pretty good price, but what about your data transfer costs? The same question applies to "traditional" hosted apps; data transfer costs can mount up quickly for large client populations.
If your online clients do any kind of frequent or quasi-real-time data retrieval, such as regularly polling your servers for updates, then you're probably getting used to writing pretty large checks.
The problem is that every "pull" by a client (open a socket, request data, close the socket) doesn't necessarily return an update, so you waste bandwidth (and therefore money) on every "pull" that finds no new content.
So how often should you "pull"? You can poll the server very frequently, in which case a large percentage of update requests will return no data, or you can poll less often at the risk of the client getting updates late. If your objective is to provide, say, real-time breaking news headlines, then to be the top news provider you'll probably design the client-side code to "pull" headlines from your server very frequently to keep your edge over other news channels. But the more frequent the "pulls" that find no update, the more bandwidth is wasted.
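To see just how much of a frequent-polling budget goes nowhere, here's a minimal sketch (the poll interval and update schedule are made up for illustration) that simulates a fixed-interval "pull" client and counts the requests that come back empty:

```python
def simulate_polling(update_ticks, total_polls):
    """Simulate a fixed-interval 'pull' client.

    update_ticks: poll indices at which the server actually has new data
    (a hypothetical schedule for this example).
    Returns (updates_received, wasted_polls)."""
    received = [t for t in range(total_polls) if t in update_ticks]
    wasted = total_polls - len(received)  # polls that returned no content
    return received, wasted

# 100 polls, but the server only had news 3 times:
received, wasted = simulate_polling({10, 40, 90}, 100)
print(f"{wasted} of 100 polls wasted")  # → 97 of 100 polls wasted
```

Ninety-seven percent of the requests (and the bandwidth bill that goes with them) bought you nothing.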
Is there a better way? Yep. Rather than "pulling" data from servers, how about "pushing" it to clients? When you use a "push" model, the client-side code opens a connection to your server and then waits either to be told there's an update to fetch or to receive the updated content directly.
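Conceptually, the push model looks like this minimal sketch, where a thread-safe queue stands in for the long-lived socket (the server and client functions here are illustrative stand-ins, not any particular vendor's API):

```python
import queue
import threading
import time

def push_server(channel, updates):
    """Server side: pushes each update down the open connection as it occurs."""
    for update in updates:
        time.sleep(0.01)   # updates arrive at unpredictable times
        channel.put(update)
    channel.put(None)      # sentinel: server closes the connection

def push_client(channel):
    """Client side: opens one connection and blocks until data is pushed.

    Every wake-up carries a real update -- no wasted requests."""
    received = []
    while (msg := channel.get()) is not None:
        received.append(msg)
    return received

channel = queue.Queue()    # stands in for the persistent socket
threading.Thread(target=push_server,
                 args=(channel, ["headline-1", "headline-2"])).start()
print(push_client(channel))  # → ['headline-1', 'headline-2']
```

The client spends its time blocked and idle rather than hammering the server, which is exactly where the bandwidth savings come from.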
This push model is something we see built into many apps on Apple platforms for notifications and conceptually it's pretty simple. In practice, building a push infrastructure that can operate globally with low latency is a serious engineering task, but a Portuguese company, Realtime, has solved this problem.
Realtime offers the Realtime Messaging System & Framework, which consists of the Open Real-time Connectivity (ORTC) framework (a cloud-based many-to-many messaging service) and the "eXtensible RealTime Multiplatform Language" (xRTML).
The ORTC network is Realtime's secret sauce, delivering low-latency messages simultaneously to millions of clients. (The company's homepage includes a demo, built with ORTC and xRTML, that shows the network's live performance. At the time of this writing it is handling 706,506 messages per second and has serviced 165,668,475 connections in the last 24 hours.)
The cost of using Realtime is very reasonable (and compared with a "pull" system, it's really cheap): 0.7 cents per visit (that's a client connection of 30 minutes or less) and $1 for each additional 1,000,000 messages after your first 1,000,000, which are free.
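To put those numbers in perspective, here's a back-of-the-envelope estimate from the pricing quoted above (the example traffic figures are made up, and I'm assuming a partial block of extra messages is billed as a full one):

```python
def estimated_bill(visits, messages):
    """Rough bill from the quoted pricing:
    $0.007 per visit (one client connection of 30 minutes or less),
    first 1,000,000 messages free, then $1 per additional 1,000,000.
    Assumption: a partial block of 1,000,000 extra messages rounds up."""
    visit_cost = visits * 0.007
    billable = max(0, messages - 1_000_000)
    message_cost = -(-billable // 1_000_000)  # ceiling division, $1 per block
    return visit_cost + message_cost

# e.g. 100,000 visits and 5,000,000 messages:
print(estimated_bill(100_000, 5_000_000))  # → 704.0
```

That's $700 for the visits and $4 for the extra messages; under the free first million, the message cost disappears entirely.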
If you're building a Web application that requires unpredictable data updates, will support millions of connections, and operates globally, this is the technology to use rather than an Ajax-based approach, unless you really want to go broke. Realtime's xRTML and ORTC system gets a Gearhead rating of 5 out of 5. Now that cloudy thing will not only work better, it'll cost a whole lot less.
Gibbs is cloud watching in Ventura, Calif. Your observations to firstname.lastname@example.org and follow him on Twitter and App.net (@quistuipater) and on Facebook (quistuipater).
Read more about software in Network World's Software section.