Improve your application performance with multi-level caching
Speed up your application with multi-level caching
In my previous article, 4 ways to update your cache, I briefly explained different strategies for updating a cache. In this article, I will explain how to improve your application's performance with multi-level caching, which covers:
- Client Caching
- CDN Caching
- Web Server Caching
- Database Caching
- Application Caching
- Database Query Level Caching
- Object Level Caching
Client Caching
Caches can be located on the client side (in the OS or browser). When the browser requests content, it first checks its local cache: if the content was previously cached and is still fresh, the browser bypasses the server and loads the content directly from its cache; otherwise, it retrieves the content from the web server.
Cached content is considered stale once it has expired; at that point the browser must fetch or revalidate it from the server again.
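Here is a minimal sketch of the server-side headers that drive client caching, using only Python's standard library. The content, port, and one-hour max-age are illustrative assumptions, not a production recipe.

```python
# A toy HTTP server that sets Cache-Control and ETag headers so the browser
# can cache the response and revalidate it cheaply on later visits.
from http.server import BaseHTTPRequestHandler, HTTPServer
import hashlib

CONTENT = b"<html><body>Hello, cached world!</body></html>"
ETAG = '"%s"' % hashlib.sha256(CONTENT).hexdigest()  # fingerprint of the content

class CachingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # If the browser already holds this exact version, let it reuse its cache.
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(CONTENT)))
        # Tell the browser it may serve this response from its cache for 1 hour.
        self.send_header("Cache-Control", "public, max-age=3600")
        self.send_header("ETag", ETAG)
        self.end_headers()
        self.wfile.write(CONTENT)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CachingHandler).serve_forever()
```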
CDN Caching
CDNs are considered a type of cache. A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user. Generally, static files such as HTML/CSS/JS, photos, and videos are served from a CDN, although some CDNs, such as Amazon CloudFront, support dynamic content. The site's DNS resolution tells clients which server to contact.
Serving content from CDNs can significantly improve performance in two ways:
- Users receive content at data centers close to them
- Your servers do not have to serve requests that the CDN fulfils
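One common way to make static assets CDN-friendly is to put a content hash in the URL, so the CDN (and browsers) can cache each version aggressively while a new deploy simply produces a new URL. Here is a minimal sketch; the CDN domain and asset path are hypothetical.

```python
# Generate a versioned asset URL that points at a (hypothetical) CDN domain.
import hashlib
from pathlib import Path

CDN_BASE = "https://cdn.example.com"  # assumption: your CDN distribution domain

def cdn_url(local_path: str) -> str:
    # Hash the file contents so the URL changes whenever the asset changes.
    digest = hashlib.sha256(Path(local_path).read_bytes()).hexdigest()[:12]
    name = Path(local_path)
    return f"{CDN_BASE}/{name.stem}.{digest}{name.suffix}"

# e.g. cdn_url("static/app.js") might return
# "https://cdn.example.com/app.12ab34cd56ef.js"
```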
Web Server Caching
Reverse proxies and caches such as Varnish can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.
Reverse Proxy
A reverse proxy is a web server that centralizes internal services and provides unified interfaces to the public. Requests from clients are forwarded to a server that can fulfil them, and the reverse proxy then returns that server's response to the client. In the context of caching, a reverse proxy can help with:
- Caching - Return the response for cached requests
- Static content - Serve static content directly, such as HTML/CSS/JS, photos, and videos
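To make the idea concrete, here is a toy caching reverse proxy written in pure Python. In production this role is played by Varnish, nginx, and similar servers; the backend address and in-memory cache here are illustrative assumptions, and headers are not forwarded.

```python
# A toy reverse proxy that caches GET responses in memory by request path.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://localhost:9000"  # assumption: the application server behind the proxy
CACHE = {}                         # path -> cached response body

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CACHE.get(self.path)
        if body is None:
            # Cache miss: forward the request to the backend and remember the body.
            with urlopen(BACKEND + self.path) as upstream:
                body = upstream.read()
            CACHE[self.path] = body
        # Cache hit (or freshly filled): answer without touching the backend again.
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()
```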
Database Caching
The speed and throughput of your database can be the most impactful factor for overall application performance. Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.
A database cache supplements your primary database by removing unnecessary pressure on it, typically in the form of frequently accessed read data.
Some databases, such as Amazon Aurora, offer an integrated cache that is managed within the database engine and has built-in write-through capabilities. When the underlying data changes in a database table, the database updates its cache automatically; nothing is required within the application tier to leverage it. Where integrated caches fall short is in their size and capabilities: they are typically limited to the memory allocated to the cache by the database instance and cannot be leveraged for other purposes, such as sharing data with other instances.
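As a small illustration of the database's own caching layer, here is a sketch of inspecting (not tuning) the relevant setting, assuming a PostgreSQL instance and the psycopg2 driver; the connection details are placeholders.

```python
# Inspect PostgreSQL's main in-memory page cache setting, shared_buffers.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="app", user="app", password="secret")
with conn.cursor() as cur:
    cur.execute("SHOW shared_buffers;")
    print("shared_buffers =", cur.fetchone()[0])
conn.close()
```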
Application Caching
In-memory caches such as Memcached and Redis are key-value stores that sit between your application and your data storage. Since the data is held in RAM, access is much faster than in typical databases, where data is stored on disk. RAM is more limited than disk, so cache eviction algorithms such as least recently used (LRU) help evict 'cold' entries and keep 'hot' data in RAM.
Redis has the following additional features:
- Persistence option
- Built-in data structures such as sorted sets and lists
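Here is a minimal cache-aside sketch using Redis, assuming a local Redis server and the redis-py client (pip install redis). The load_user_from_db function is a hypothetical stand-in for your real data access layer.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def load_user_from_db(user_id):
    # Placeholder for a real database query.
    return {"id": user_id, "name": "Ada"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: served from RAM
    user = load_user_from_db(user_id)      # cache miss: fall through to the database
    r.set(key, json.dumps(user), ex=300)   # keep it 'hot' for 5 minutes
    return user
```

If Redis is configured with maxmemory and maxmemory-policy allkeys-lru, cold keys are evicted automatically once memory fills up, which pairs naturally with the LRU behaviour described above.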
There are multiple levels at which you can cache, and they fall into two general categories: database queries and objects:
- Row level
- Query-level
- Fully-formed serializable objects
- Fully-rendered HTML
Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.
Database Query Level Caching
Whenever you query the database, hash the query as a key and store the result in the cache. This approach suffers from expiration issues:
- It is hard to delete a cached result generated by a complex query
- If one piece of data changes, such as a table cell, you need to delete all cached queries that might include the changed cell
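The following sketch shows query-level caching: the SQL text is hashed into a cache key and the result set is stored alongside it. The run_query function is a hypothetical stand-in for your database driver, and the Redis client is assumed as in the earlier example.

```python
import hashlib
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def run_query(sql):
    # Placeholder for executing the query against the real database.
    return [{"id": 1, "total": 42}]

def cached_query(sql, ttl=60):
    # Hash the query text so it can be used as a cache key.
    key = "query:" + hashlib.sha256(sql.encode()).hexdigest()
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = run_query(sql)
    r.set(key, json.dumps(rows), ex=ttl)
    return rows

# If any row in the underlying table changes, every cached query touching that
# table may now be stale -- which is the expiration problem described above.
```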
Object Level Caching
See your data as an object, just as you do in your application code. Have your application assemble the dataset from the database into a class instance or a data structure:
- Remove the object from cache if its underlying data has changed
- Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
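Here is a minimal sketch of object-level caching: the application assembles a full object, serializes it, and caches it as one unit, then removes it from the cache when the underlying data changes. The Product class and its placeholder loader are hypothetical; the Redis client is assumed as above.

```python
import json
import redis
from dataclasses import dataclass, asdict

r = redis.Redis(host="localhost", port=6379, db=0)

@dataclass
class Product:
    id: int
    name: str
    price: float

def get_product(product_id):
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return Product(**json.loads(cached))                         # cache hit
    product = Product(id=product_id, name="Widget", price=9.99)      # placeholder DB load
    r.set(key, json.dumps(asdict(product)), ex=600)                  # cache the whole object
    return product

def update_product(product):
    # ... write the new values to the database, then invalidate the cached object
    r.delete(f"product:{product.id}")
```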
Suggestions for what to cache:
- User sessions
- Fully rendered web pages
- Activity streams
- User graph data