Problem

Different applications consume the APIs, but the backend has no idea which application is consuming them or where the request is coming from.

...

To avoid querying the datastore on every request, we can cache custom entities in-memory on the node. Frequent entity lookups then happen in-memory instead of triggering a datastore query each time, which is much faster and more reliable than querying the datastore (especially under heavy load).


A. Lazy caching: Cache custom entities in-memory on the node on the first request, so that frequent entity lookups don't trigger a datastore query every time (only the first time), but happen in-memory. On the first request by a consumer, the App IDs associated with that consumer are retrieved from the datastore and cached; if there are no App IDs associated, an empty App ID array is cached.

By doing so, it doesn't matter how many requests the consumer makes: after the first request, every lookup is done in-memory without querying the datastore.

  • Pros
    • The cache stays small, since only records that are actually in use are cached.
    • The object is loaded from the datastore only once per consumer.
  • Cons
    • The first request by each consumer hits the datastore.

We can use the following Kong utility method to get the App IDs mapped to a consumer:

Method: value = cache.get_or_set(key, function)
Description: A utility method that retrieves the object stored at the specified key. If the object is nil, the passed function is executed and its return value is used to store the object at that key. This effectively makes sure that the object is only loaded from the datastore one time, since every other invocation will load it from the in-memory cache.


The cache will be stored as key-value pairs, where the key is 'appid.'..consumer_id and the value is the array of App IDs for that consumer.
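For illustration, a minimal sketch of this lazy lookup in the style of an old Kong 0.x plugin. The appids DAO handle (singletons.dao.appids), its find_all filter, and the row field names are assumptions about the custom entity, not confirmed APIs; only cache.get_or_set and the 'appid.'..consumer_id key come from this document.

Code Block
languagelua
-- access handler sketch: lazily load and cache a consumer's App IDs
local cache = require "kong.tools.database_cache"
local singletons = require "kong.singletons"

local function load_app_ids(consumer_id)
  -- First call per consumer queries the datastore; every later call
  -- is answered from the in-memory cache under 'appid.'..consumer_id.
  return cache.get_or_set("appid."..consumer_id, function()
    -- 'appids' is the plugin's custom table (assumed DAO name/filter)
    local rows, err = singletons.dao.appids:find_all {consumer_id = consumer_id}
    if err then
      ngx.log(ngx.ERR, "failed to load App IDs: ", tostring(err))
      return nil
    end

    local app_ids = {}
    for _, row in ipairs(rows) do
      app_ids[#app_ids + 1] = row.app_id
    end
    -- an empty array is cached when the consumer has no App IDs yet
    return app_ids
  end)
end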

...


B. Full table cache on Kong service start:
Cache all the records that exist in the table on every node immediately after the Kong service starts, so entity lookups don't trigger a datastore query every time (a sketch of this warm-up follows the pros and cons below).

  • Pros
    • No datastore lookup except the initial one, which helps reduce latency in request/response time.
  • Cons
    • If the table is large, the caching and invalidation process will take time.
    • Every node will query the datastore after the service starts, which puts load on the datastore for some time.
    • All records are cached even if they are not in use, which unnecessarily increases the cache size on every node.
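A hedged sketch of that warm-up, again in old Kong 0.x plugin style, assuming it is invoked from the plugin handler's init_worker phase (which runs once per worker after the service starts). The singletons.dao.appids:find_all() call and the row field names are assumptions about the custom entity.

Code Block
languagelua
-- handler sketch: warm the App ID cache when each worker starts
local cache = require "kong.tools.database_cache"
local singletons = require "kong.singletons"

local function warm_appid_cache()
  local rows, err = singletons.dao.appids:find_all()  -- load every record (assumed DAO call)
  if err then
    ngx.log(ngx.ERR, "failed to warm App ID cache: ", tostring(err))
    return
  end

  -- group App IDs by consumer so each consumer gets one cache entry
  local by_consumer = {}
  for _, row in ipairs(rows) do
    local ids = by_consumer[row.consumer_id] or {}
    ids[#ids + 1] = row.app_id
    by_consumer[row.consumer_id] = ids
  end

  -- store each group under the same 'appid.'..consumer_id key used above
  for consumer_id, app_ids in pairs(by_consumer) do
    cache.get_or_set("appid."..consumer_id, function()
      return app_ids
    end)
  end
end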


Cache Invalidation

Every time a particular record is updated or deleted in the datastore, cache invalidation can be performed: we explicitly remove the cached record from the cache to avoid an inconsistent state between the datastore and the cache itself. Removing it from the in-memory cache will trigger the system to query the datastore again and re-cache that record. Caching is keyed by consumer ID, so whenever a particular consumer's record is updated, only that record is invalidated and re-cached.

...

entity is being created/updated/deleted in the datastore, Kong broadcasts the datastore operation to all the nodes, telling them what command has been executed and what entity has been affected by it.

...

 

We can listen to these events and respond with the appropriate action, so that when a cached entity is modified in the datastore, we can explicitly remove it from the cache to avoid having an inconsistent state between the datastore and the cache itself. Removing it from the in-memory cache will trigger the system to query the datastore again and re-cache the entity.

The events that Kong propagates are:


  • ENTITY_CREATED – when any entity is being created.
  • ENTITY_UPDATED – when any entity is being updated.
  • ENTITY_DELETED – when any entity is being deleted.


In order to listen to these events, we need to implement the hooks.lua file and distribute it with our plugin, for example:

...

Code Block
languagelua
-- hooks.lua

local events = require "kong.core.events"
local cache = require "kong.tools.database_cache"

-- Invalidate a consumer's cached App IDs when its appids record is updated
local function invalidate_on_update(message_t)
  if message_t.collection == "appids" then
    cache.delete("appid."..message_t.old_entity.consumer_id)
  end
end

-- Invalidate a consumer's cached App IDs when its appids record is deleted
local function invalidate_on_delete(message_t)
  if message_t.collection == "appids" then
    cache.delete("appid."..message_t.entity.consumer_id)
  end
end

return {
  [events.TYPES.ENTITY_UPDATED] = function(message_t)
    invalidate_on_update(message_t)
  end,
  [events.TYPES.ENTITY_DELETED] = function(message_t)
    invalidate_on_delete(message_t)
  end
}


In the example above, the plugin listens to the ENTITY_UPDATED and ENTITY_DELETED events and responds by invoking the appropriate function. The message_t table contains the event properties.
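For reference, an illustrative (hypothetical) shape of message_t for an ENTITY_UPDATED event on the custom appids collection; only the fields actually used in hooks.lua above are shown, and all values are made up.

Code Block
languagelua
-- Hypothetical ENTITY_UPDATED payload for the custom 'appids' collection
local message_t = {
  collection = "appids",                     -- table affected by the command
  entity = {                                 -- record after the update
    consumer_id = "8a1f0a52-hypothetical-id",
    app_id = "mobile-app"
  },
  old_entity = {                             -- record before the update
    consumer_id = "8a1f0a52-hypothetical-id",
    app_id = "legacy-app"
  }
}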


Implementation 

Database table structure 

...