If you ever need to write a single-page application (SPA) that has to support offline operation, you will find that the generally agreed upon solution these days is a Service Worker. While a Service Worker can certainly provide offline support, I find that it works well only for a certain class of applications: ones that primarily display information that is easy to cache statically. If your application does not fall into that class, I argue that a Service Worker introduces additional complexity because of the way it works. In this article, I want to present a different approach to offline support in applications, one that also enables more complex interactions.
The techniques presented in this post are backed by experiences with a real application, used by many people every day.
The application
The application in question is used to record data in the field, where mobile reception might be spotty or non-existent. When online, recorded values are sent directly to the server and then shown in the application, based on the server response. When offline, however, users still need to record data and see their local entries, so any applied changes need to be captured and displayed. Thus we need to support both writes of new data and edits to existing data while offline.
When the application is back online, any captured requests need to be transmitted so the central back end is up to date. The user is kept informed about the connection state and any synchronization activities running in the background. As detailed below, this was one of the factors that made us avoid the Service Worker route.
We’ve created a minimal sample application which illustrates the basic approach. Much like the original, this example is an Angular application using Redux for state management (via NgRx).
Note that the choice of SPA framework is largely irrelevant here: You could implement the same in React, Vue, native Web Components or any other solution you prefer. As hinted in the title of this article, we primarily lean on Redux for the heavy lifting.
Service Worker
As you may already know, a Service Worker is basically a reverse proxy inside your browser: it can respond to certain events triggered by the page, such as network fetches. While being scoped to a specific URL (and thus loosely coupled to an application), the Service Worker runs in a separate context and is concerned with lower-level technical concepts.
This means that inside your Service Worker you don’t get to operate on the normal business entities of your application. Instead, you have to peel back the abstraction of your API, understand how entities map to specific request URLs, and interpret the different HTTP verbs to react appropriately to the intended operation.
For static information, this is relatively easy to achieve: what you need is an understanding of the different resources of your application and the URLs they are bound to. This can be determined statically at build time, so all content can be written into a manifest (or directly into a Service Worker implementation). This content can then be fetched and cached when the Service Worker is installed. On a GET request to any of those resources, you serve up the cached content and call it done.
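Boiled down to code, that classic setup might look like the following sketch; the cache name and asset list are placeholders for whatever your build produces, and the file assumes TypeScript’s webworker lib:

```typescript
// sw.ts – minimal cache-first Service Worker sketch; CACHE_NAME and
// ASSETS are placeholders for your actual build output
declare const self: ServiceWorkerGlobalScope;

const CACHE_NAME = 'static-v1';
const ASSETS = ['/', '/index.html', '/main.js', '/styles.css'];

self.addEventListener('install', (event: ExtendableEvent) => {
  // Pre-cache the statically known resources at installation time
  event.waitUntil(caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event: FetchEvent) => {
  if (event.request.method !== 'GET') return;
  // Serve from the cache first, fall back to the network
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```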
This cache-first setup is, more or less, what gets presented as the classic use case for Service Worker. But even that hides more complexity: there is a life cycle you have to care about when you make updates to your site, since then you also have to update your caches. As you can see in the MDN documentation, being able to update depends not only on your ability to get all the fresh resources, but also on the users’ behavior (as they might still be running clients bound to an old Service Worker version). All that makes even a ‘simple’ update a topic to think about.
Also, we’ve already mentioned caching, famously one of the two hard Computer Science problems. For all your content you have to decide what is an appropriate caching strategy. This talk might serve to highlight how many things you have to think of even for seemingly simple cases.
For an application like ours, this gets considerably more complex: we’d have to implement offline handlers for all API routes and all valid HTTP verbs again in the Service Worker (I say again, because we obviously need to have them within the application too).
We’d have to duplicate the domain knowledge of the structure of the entities exchanged in the requests in order to provide appropriate responses (since there is no answer we could preload and return). On top of that, we’d have to implement some kind of protocol between the Service Worker and the application to exchange information about the current connection state and stored requests, to allow background syncing and to keep the user informed.
This seemed like a pretty tall order – and given that we already had a structure in our application that allowed us to achieve the same result with higher consistency and with the mechanisms the front-end architecture already provided, we chose to not use Service Worker.
Our usage of Redux
We employ the idea of a normalized state in our application to avoid redundancies and keep the concepts in the web application the same as in the back end. With that idea, our application is structured in a simple stack:
- Components are used to build the user interface. They understand which data they need and obtain it by subscribing to specific slices of the Redux store. In addition, they trigger a load of their data whenever they are displayed (by dispatching an appropriate `load` action to the store).
- The Redux store provides the internal API to load, create, update and delete entities, as well as additional functions for filtering, sorting and selecting data. It is the one place that defines the business-object model of the application.
- Angular services are used from within the store to encapsulate HTTP communication with the back end. Any such communication goes through a service, so usually there is one class per API route / business entity to handle all the necessary cases.
With this setup, we ensure that the Redux store is the sole source of state for the application and that all interactions with the network are triggered from the store as well. Components do not need to have an understanding of the HTTP API, nor of the connectivity state of the application.
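To make that stack concrete, here is a minimal sketch of the component layer; the `Entry` type, the `loadEntries` action and the selector are invented for illustration:

```typescript
// entries.component.ts – sketch of a component wired to the store
import { Component, OnInit } from '@angular/core';
import { createAction, createFeatureSelector, Store } from '@ngrx/store';
import { Observable } from 'rxjs';

// Hypothetical entity, action and selector for this sketch
interface Entry { id: string; name: string; }
const loadEntries = createAction('[Entries] Load');
const selectEntries = createFeatureSelector<Entry[]>('entries');

@Component({
  selector: 'app-entries',
  template: `<ul><li *ngFor="let entry of entries$ | async">{{ entry.name }}</li></ul>`,
})
export class EntriesComponent implements OnInit {
  // Subscribe only to the slice of state this component needs
  entries$: Observable<Entry[]> = this.store.select(selectEntries);

  constructor(private store: Store) {}

  ngOnInit(): void {
    // Trigger a load of the data whenever the component is displayed
    this.store.dispatch(loadEntries());
  }
}
```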
Adding offline support
This setup also enables us to support offline cases with a single generic approach.
We implement an Angular Interceptor (though you could simply use error handling for `fetch`) that acts on HTTP errors: on any error it determines whether the device is offline. In case there is no connectivity, we throw a specific `OfflineError` that we can handle separately within services.
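A sketch of such an interceptor might look like the following; `OfflineError` is our own error class, and the `navigator.onLine` check is a simplification of real connectivity detection:

```typescript
// offline.interceptor.ts – sketch of an interceptor that surfaces
// connectivity problems as a dedicated error type
import { Injectable } from '@angular/core';
import {
  HttpErrorResponse, HttpEvent, HttpHandler, HttpInterceptor, HttpRequest,
} from '@angular/common/http';
import { Observable, throwError } from 'rxjs';
import { catchError } from 'rxjs/operators';

export class OfflineError extends Error {
  constructor(public readonly request: HttpRequest<unknown>) {
    super('Device is offline');
  }
}

@Injectable()
export class OfflineInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    return next.handle(req).pipe(
      catchError((err: HttpErrorResponse) => {
        // If the request failed while the device is offline, surface a
        // dedicated error type that services can handle explicitly
        if (!navigator.onLine) {
          return throwError(() => new OfflineError(req));
        }
        return throwError(() => err);
      })
    );
  }
}
```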
We also implement a generic `OfflineService` that encapsulates the handling of requests failing for missing connectivity. What we want in this case are two things:
- To store the original request so it can be resent later on.
- To return a value for the Redux store that the reducer can integrate properly. This means the values of the returned object need to reflect the user’s intention (i.e. a new entry should be added, a modification should be applied to the right entry etc.).
For storing requests, we reuse the existing mechanism for state: We simply take the request and add it to a specific slice of the Redux store for later processing, when the connection status changes.
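Sketched in code, the `OfflineService` could look roughly like this; the `requestQueued` action and the `StoredRequest` shape are assumptions standing in for our actual queue slice:

```typescript
// offline.service.ts – sketch; StoredRequest and requestQueued are
// hypothetical names for the request-queue slice of the store
import { Injectable } from '@angular/core';
import { createAction, props, Store } from '@ngrx/store';
import { Observable, of, throwError } from 'rxjs';
import { OfflineError } from './offline.interceptor';

export interface StoredRequest { method: string; url: string; body: unknown; }
export const requestQueued = createAction(
  '[Offline] Request Queued', props<{ request: StoredRequest }>()
);

@Injectable({ providedIn: 'root' })
export class OfflineService {
  constructor(private store: Store) {}

  // If the error signals missing connectivity, queue the original
  // request for later replay and return the caller's default value;
  // otherwise rethrow the error untouched.
  handle<T>(error: unknown, defaultValue: T): Observable<T> {
    if (error instanceof OfflineError) {
      const { request } = error;
      this.store.dispatch(requestQueued({
        request: { method: request.method, url: request.urlWithParams, body: request.body },
      }));
      // The reducer can integrate this value as if it came from the server
      return of(defaultValue);
    }
    return throwError(() => error);
  }
}
```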
Last but not least, we add error handling in each service. For each call, a dedicated `catchError` clause is added which defers the error to the `OfflineService` for handling. But the service also provides the necessary default value to return to the store in case we are offline. Since the service has an understanding of the respective domain objects and is provided with the data from the UI, it is rather easy to provide the right default value for the specific operation the user wants to perform.
This way, even the Redux store has no need to understand the connection state of our device: not only does it always get a response from a service call, it always gets the right response for the user’s action. For us, this means we can transparently handle connection problems on the one layer that has to understand HTTP. And even better: we don’t need to duplicate domain knowledge, but simply use it from the piece of code that needs to have it anyway. 🎉
If you lay it out on a timeline, this is how data flows through the application when it is online:
And this is how things change in case there is no connectivity:
The little details
This solution works perfectly fine, but of course it glosses over a couple of details that need solving when you want to use this in a real application.
First and foremost, we have to think about the synchronization itself. In our case, things were easy, as we could simply implement last write wins, i.e. overwrite whatever the state on the back end was. This of course is highly dependent on your business problem and you might want to implement some more complex solutions, perhaps even consider CRDTs (to learn more about them, you can listen to this (German) podcast episode).
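For last write wins, replaying the queue can be as simple as resending the stored requests in order once connectivity returns. Here is a hedged sketch, with the selector and the replay action invented for illustration:

```typescript
// sync.service.ts – sketch of a last-write-wins replay; the selector
// and requestReplayed action are hypothetical names
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { createAction, createFeatureSelector, props, Store } from '@ngrx/store';
import { firstValueFrom } from 'rxjs';
import { StoredRequest } from './offline.service';

const selectQueuedRequests = createFeatureSelector<StoredRequest[]>('offlineQueue');
const requestReplayed = createAction(
  '[Offline] Request Replayed', props<{ request: StoredRequest }>()
);

@Injectable({ providedIn: 'root' })
export class SyncService {
  constructor(private http: HttpClient, private store: Store) {
    // Replay the queue whenever the browser reports connectivity again
    window.addEventListener('online', () => void this.replay());
  }

  private async replay(): Promise<void> {
    const queued = await firstValueFrom(this.store.select(selectQueuedRequests));
    for (const request of queued) {
      // Last write wins: resend in order and overwrite the server state;
      // conflict handling would go here if your domain needs it
      await firstValueFrom(this.http.request(request.method, request.url, { body: request.body }));
      this.store.dispatch(requestReplayed({ request }));
    }
  }
}
```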
Then there is the implementation of the services: when you make changes online, you always have to wait for the network round trip before they show up, while in the offline case changes are visible immediately. We could implement both paths the same way, to keep the UX the same in both cases.
In the real implementation, we’ve also added some more code to maintain stored offline requests: In case a user edits the same entity multiple times, or deletes a fresh offline entry, we wanted the stored requests to reflect the end result and, in the latter case, not even send a request at all when coming back online. We’ve also implemented a temporary ID mechanism for creating entities while offline, as in our system canonical IDs were assigned by the back end, but we of course needed to be able to identify those entities to allow changing them.
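The temporary ID mechanism can be kept quite small; the following sketch assumes a `tmp-` prefix and a rewrite step once the back end has assigned a canonical ID, both of which are illustrative choices rather than our exact implementation:

```typescript
// temp-id.ts – sketch; the tmp- prefix and the URL rewrite are
// assumptions for illustration
export const isTemporaryId = (id: string): boolean => id.startsWith('tmp-');

export const newTemporaryId = (): string =>
  `tmp-${Date.now()}-${Math.random().toString(36).slice(2)}`;

// Once the back end has assigned a canonical ID during replay, rewrite
// any queued requests that still reference the temporary one.
export function rewriteQueuedIds<T extends { url: string }>(
  queue: T[], tempId: string, canonicalId: string
): T[] {
  return queue.map((request) => ({
    ...request,
    url: request.url.split(tempId).join(canonicalId),
  }));
}
```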
You also might remember that I’d said all components trigger a fetch of their data when they are displayed. While for some information that is good, there are of course cases where this just means additional, unnecessary network load (as the data does not change frequently). So we’ve also implemented a caching scheme, based on the organization of the store, that allows us to decide whether we want to fetch data or simply return the existing contents of the store.
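At its core, that scheme boils down to a staleness check per store slice; in this sketch the `loadedAt` field and the five-minute threshold are assumptions for illustration:

```typescript
// staleness.ts – sketch of the fetch-or-reuse decision; loadedAt and
// the maxAgeMs default are illustrative assumptions
export interface CachedSlice { loadedAt: number | null; }

export function needsFetch(slice: CachedSlice, maxAgeMs = 5 * 60_000): boolean {
  // Fetch if the slice was never loaded, or if its data is older than
  // the configured threshold; otherwise reuse the store contents
  return slice.loadedAt === null || Date.now() - slice.loadedAt > maxAgeMs;
}
```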
The good thing is that this functionality (and all the other ones I’ve mentioned before) can be implemented fairly easily and usually in a single place, since the overall structure of the application sets up simple control points to achieve additional functionality.
Acknowledgements & Credits: My thanks go out to m, FND and falk for reviewing earlier versions of this post and helping me make it better. Your inputs are appreciated. The title photo is by Jeremy Bezanger on Unsplash