Figure 2 Get Method in the ContactManager Class
public async Task<Contact> Get(Guid id)
{
  IDatabase cache = cacheContext.GetDatabase();
  var value = cache.HashGet(cacheKeyName, id.ToString());

  // Return the entry found in cache, if any
  // HashGet returns a null RedisValue if no entry is found
  if (!value.IsNull)
  {
    return JsonConvert.DeserializeObject<Contact>(value.ToString());
  }

  // Nothing found in cache, read from database
  Contact contact = databaseContext.Contacts.Find(id);

  // Store in cache for next use
  if (contact != null)
  {
    HashEntry entry = new HashEntry(
      name: id.ToString(),
      value: JsonConvert.SerializeObject(contact));
    await cache.HashSetAsync(cacheKeyName, new[] { entry });
  }

  return contact;
}
From the cache context, which identifies a connection to Redis Cache, you'll obtain a reference to the data storage inside Redis itself through the GetDatabase method. The returned IDatabase object is a wrapper around the Redis cache commands. Specifically, the HashGet method executes the HGET command (bit.ly/2pM0O00) to retrieve the object stored against the specified key (the object ID). The HGET command returns the value identified by a unique key in a named hash collection, if it exists, or a null value otherwise. As the cache key, you can use the object's ID (a GUID), consistent with the same ID stored at the database level and managed by Entity Framework.
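For reference, here's a minimal sketch of what such a cache context might look like when built on the StackExchange.Redis client. The CacheContext class name and the placeholder connection string are assumptions for illustration and aren't part of the article's sample:

// Sketch of a cache context exposing GetDatabase (assumed shape, for illustration only)
using System;
using StackExchange.Redis;

public class CacheContext
{
  // A single ConnectionMultiplexer is created lazily and shared across requests
  private static readonly Lazy<ConnectionMultiplexer> connection =
    new Lazy<ConnectionMultiplexer>(() =>
      ConnectionMultiplexer.Connect(
        "<your-cache>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False"));

  public IDatabase GetDatabase()
  {
    return connection.Value.GetDatabase();
  }
}

With a context shaped like this, cacheContext.GetDatabase() in Figure 2 returns the IDatabase wrapper used for the hash operations.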
If an object is found at the indicated key, it's deserialized from JSON into an instance of the Contact model. Otherwise, the object is loaded from the database, using the Entity Framework Find by ID, and then stored in cache for future use. The HashSet method, and more precisely its async variant, is used for storing a JSON serialized version of the Contact object.
Similar to this approach, the other CRUD methods are implemented around the HashSet method for creating and updating objects in Redis Cache, and the HashDelete method for removing them.
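As an illustration, a delete operation might look like the following sketch, written in the same style as the Get method in Figure 2; the actual implementation is in the code download and might differ in its details:

// Sketch of a Delete method built around HashDeleteAsync (illustrative only)
public async Task Delete(Guid id)
{
  // Remove the entry from the Redis hash, if present
  IDatabase cache = cacheContext.GetDatabase();
  await cache.HashDeleteAsync(cacheKeyName, id.ToString());

  // Remove the entity from persistent storage through Entity Framework
  Contact contact = databaseContext.Contacts.Find(id);
  if (contact != null)
  {
    databaseContext.Contacts.Remove(contact);
    databaseContext.SaveChanges();
  }
}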
The complete source code is available in the associated code download at bit.ly/2qkV65u.
Cache Design Patterns
A cache typically contains objects that are used most frequently, in order to serve them back to the client without the overhead of retrieving information from persistent storage, such as a database. A typical workflow for reading objects from a cache consists of three steps, shown in Figure 3:
1. A request for the object is initiated by the client application to the server.
2. The server checks whether the object is already available in cache and, if so, returns the object immediately to the client as part of its response.
3. If not, the object is retrieved from the persistent storage and then returned to the client as in Step 2.
In both cases, the object is serialized for submission over the network. At the cache level, the object might already be stored in serialized format, to optimize the retrieval process.
You should note that this is an intentionally simplified process. You might see additional complexity if you check for cache expiration based on time, dependent resources and so on.
Figure 3 Level 1 Cache Workflow
This configuration is typically called a Level 1 (L1) cache, as it contains one level of cache only. L1 caches are normally used for Session and Application state management. Although effective, this approach isn't optimal for applications that move large quantities of data across multiple geographies, which is the scenario we want to optimize. First of all, large data requires large caches to be effective, which in turn are memory-intensive and thus require expensive servers with a large allocation of volatile memory. In addition, syncing nodes across regions implies large data transfers, which, again, are expensive and introduce delays in the availability of the information in the subordinate nodes.
A more efficient approach to caching objects in data-intensive applications is to introduce a Level 2 (L2) cache architecture, with a first cache, smaller in size, that contains the most frequently accessed objects in the larger dataset, and a second cache, larger in size, containing the remaining objects. When an object isn't found in the first-level cache, it's retrieved from the second level and eventually refreshed periodically from the persistent storage. In a geographically
Figure 4 Level 2 Cache Workflow
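In code, the read path of such a two-level cache might look like the following sketch. The l1Cache and l2Cache wrappers, with their GetAsync and SetAsync methods, are hypothetical and not part of the article's sample, which implements a single cache level:

// Hypothetical two-level read: check L1 first, then L2, then the persistent storage
public async Task<Contact> Get(Guid id)
{
  // L1: small cache holding the most frequently accessed objects
  Contact contact = await l1Cache.GetAsync<Contact>(id);
  if (contact != null)
    return contact;

  // L2: larger cache holding the remaining objects
  contact = await l2Cache.GetAsync<Contact>(id);
  if (contact == null)
  {
    // Fall back to the database and refresh L2
    contact = databaseContext.Contacts.Find(id);
    if (contact != null)
      await l2Cache.SetAsync(id, contact);
  }

  // Promote the object to L1 for subsequent reads
  if (contact != null)
    await l1Cache.SetAsync(id, contact);

  return contact;
}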