Saturday, December 10, 2005

How Do We Solve the Offline Problem?

Rich web applications like GMail, Flickr, Kiko, or Basecamp are very useful for organizing your life online. But what happens when you have no Internet connection? How do you get to your information?

There are two schools of thought about this problem:

Some say that it will be easier to wire the entire planet than to make everything work offline. There is plenty of evidence that ubiquitous online access may become reality: you can check your e-mail from almost anywhere in Europe these days, and you can even surf the web while on a plane. Google is deploying Wi-Fi in its hometown.

Joel once wrote that Excel sales really took off when version 4.0 made it easy for people to save files back to Lotus 1-2-3. In the same way, I believe that online apps will really take off when it's possible to go back to viewing stuff offline.

Let's start by thinking about what we'd need to implement (Adam Bosworth also has some good thoughts on this topic):
  • Caching: this amounts to keeping a copy of your online data locally - either all of it or just some fragment.
  • Presentation: When you click around in a web app's interface, this generates requests to the server. You'd want to reproduce the online UI reasonably well offline, possibly by simulating responses from a server that isn't actually reachable.
  • Synchronizing offline and online data: When you modify data offline, you'll need to store these operations until the next time you can "call home" and talk to the server. For example, in GMail, this would amount to having "Unsent Messages" stowed away somewhere.
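The synchronization bullet can be sketched as a simple operation queue: record each change while offline, then replay everything the next time you can "call home." This is a minimal sketch in modern JavaScript; all names here (`OfflineQueue`, `record`, `flush`) are illustrative, not from any real library, and real persistence would need browser storage rather than an in-memory array.

```javascript
// Minimal sketch of an offline operation queue. Hypothetical names;
// a real app would persist `pending` to disk or browser storage.
class OfflineQueue {
  constructor() {
    this.pending = []; // operations recorded while offline
  }

  // Record an operation instead of sending it to the server immediately.
  record(op) {
    this.pending.push(op);
  }

  // Back online: replay the queued operations in order.
  // `send` is a caller-supplied function that talks to the server.
  flush(send) {
    const sent = [];
    while (this.pending.length > 0) {
      const op = this.pending.shift();
      send(op);
      sent.push(op);
    }
    return sent;
  }
}

// Example: GMail-style "Unsent Messages".
const queue = new OfflineQueue();
queue.record({ type: "sendMail", to: "a@example.com", body: "hi" });
queue.record({ type: "archive", messageId: 42 });

const delivered = [];
queue.flush(op => delivered.push(op)); // replay both queued operations
```

The hard part, which this sketch skips entirely, is conflict resolution: what happens when the same data was also modified on the server while you were offline.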
There are three possible architectures:

  1. Do everything in JavaScript. This could be very hard, especially since the only permanent information JavaScript can modify on your computer is cookies. (Is this right?)
    We could make JavaScript powerful enough to handle the entire application, but since the language is designed by committee, that may take some time. Also, I'm not even sure JavaScript should be that powerful.
  2. Through a browser plug-in mechanism, control the application or simulate the presence of a server. This is a better choice, but requires a separate solution for each kind of browser.
  3. Install a local server on the system that serves up the UI. This may be the easiest option. I'm sure you could even train users to go to "" when they have an online connection, but click on "Start Kiko server" first when they don't. Downside: Installing software? Isn't that the opposite of what you want to achieve with a web app?
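The core of options 2 and 3 is the same: something answers requests from the cache instead of the network. That part is simple enough to sketch as a pure request handler; `serveFromCache` and the cache layout are hypothetical names for illustration, and any small HTTP server could wrap this handler to give the "Start Kiko server" workflow described above.

```javascript
// Minimal sketch of the "local server" idea: answer each request with
// the cached copy of that page. All names here are illustrative.
function serveFromCache(cache, path) {
  if (Object.prototype.hasOwnProperty.call(cache, path)) {
    return { status: 200, body: cache[path] };
  }
  return { status: 404, body: "Not cached; reconnect to fetch this page." };
}

// The cache would be filled while online; here it's just a plain object.
const cache = {
  "/inbox": "<html>...your inbox, as last seen online...</html>",
};

const hit = serveFromCache(cache, "/inbox");     // cached page
const miss = serveFromCache(cache, "/settings"); // never visited offline
```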
In closing, I'm sure we'll soon see online apps that have offline functionality. Hopefully, some standard programming techniques and tools will emerge. Maybe there will even be an "AJAX"-like moniker for all this. I believe "Web 3.0" is still up for grabs.


F said...

While I like your post, I have to say I disagree: I think most of these problems would not arise if companies weren't so fond of showing ads (not that I blame them; I fully understand it, and that's why I refuse to use an ad-blocker). My point is:

I divide my applications (web and non-web) into two categories:

1. stuff that I frequently use (such as email, contacts, calendar, and the like)
2. stuff I use only rarely or sporadically

For the stuff I frequently use (1), I like to have my own application, e.g. my email client. While I admit that Gmail's interface is really great, I don't like having to adjust to different email layouts just to check my email. Since I'm using IMAP, this doesn't rule out checking my email in a web browser on other computers. With well-tested, mature desktop applications, I don't have to readapt every time I choose a different provider. In addition, these programs benefit greatly from sleek protocols that only have to transmit the data, not the form and style of the user interface.

I admit that apart from email, the situation is difficult, as there is currently limited support for stuff like this. (I'm still trying to implement a solution for my addressbook.. if only I had the time.) Also, the desktop software has to be programmed well enough to support offline use (don't laugh; I've been pretty annoyed to find that Thunderbird's support for IMAP caching is, say, a pain in the ass).

My other problem with this approach is that it harms the next big thing I'm hoping will evolve shortly: affordable mobile computing, say, checking email on my PDA. Any PDA of your choice will probably choke on the highly evolved JavaScript local-proxy-caching routines you are proposing. What these devices need are mature, tested, and common protocols for low-bandwidth transmission of data. I don't want to pay 5 cents just to transmit the Gmail or Kiko logo.

So what I say is: before trying to establish power-hungry JavaScript plug-ins for my browser, develop and support some well-designed protocols to connect to these services with your client of choice. Or even better: use the protocols that have been available for decades. As an added benefit, they're guaranteed to work on my PDA without a Pentium IV and a DSL connection attached to it, since they were developed when these things were but shiny dreams in some scientist's good night's sleep.

Oh, and when talking about (2), things I rarely use: I don't really think it makes sense to cache those. (It would probably be enough if my browser could just redisplay those pages without accessing the Internet; no local server required.)


Michael Neale said...

I think the third option is viable. Let's call it a "Weblet"; I think IBM coined that phrase, but I like it.

I wrote a tiny applet yonks ago that was a minimalist HTTP server. It's easy to do, and you can make it a headless applet on a page. Digitally signing the applet means that it can be persistent and access the disk, within reason. All requests go to the applet "web server", of course.

Anonymous said...

It has a desktop version with a local server

Julien Couvreur said...

You might be interested in a prototype I wrote of an online/offline wiki, "Take It With You" Wiki (TiwyWiki):