Currently it is only possible to set expiration on top-level keys, not on collection elements.
I have a design problem with that.
Let's imagine we store session properties in a hash.
A session expires after some time, but we have to know how many sessions are still alive.
So I created another collection to store the list of active sessions, where
the token is the key and the key of the session hash is the value.
Now, when my session expires, its token is still present in the active collection,
and I have no possibility to remove it.
Solutions:
1) Allow expiration of individual Hash/Set/List fields. I would simply set the expiration of the session token to the same value as the main session hash.
2) Expiration triggers - when a key expires, some group of commands is executed (maybe a Lua script).
3) Publish an expiration message, containing the expired key, to a previously defined channel. Using drivers, I could implement my own triggers.
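For what it's worth, something close to option 3 does exist in Redis 2.8+ as keyspace notifications: Redis can publish expired-key events to a well-known channel. A minimal sketch in node, assuming the `redis` npm package (node-redis v4+) and my own made-up key naming (`session:<token>` hashes plus an `active` hash of tokens); the wiring function is defined but not invoked here, since it needs a live server:

```javascript
// Pure helper: map an expired session key like "session:abc123" to its token.
// The "session:" prefix is an assumption for illustration, not a Redis convention.
function tokenFromExpiredKey(expiredKey) {
  const [prefix, token] = expiredKey.split(':');
  return prefix === 'session' && token ? token : null;
}

// Wiring sketch (requires the `redis` npm package and a running server,
// so it is only defined here, never called):
async function listenForExpirations() {
  const { createClient } = require('redis');
  const client = createClient();
  const subscriber = client.duplicate();
  await client.connect();
  await subscriber.connect();
  // enable keyevent notifications for expired keys ("E" = keyevent, "x" = expired)
  await client.configSet('notify-keyspace-events', 'Ex');
  // Redis publishes expired keys on __keyevent@<db>__:expired
  await subscriber.subscribe('__keyevent@0__:expired', async (expiredKey) => {
    const token = tokenFromExpiredKey(expiredKey);
    if (token) await client.hDel('active', token); // drop the stale token
  });
}
```

Note that these notifications are fire-and-forget pub/sub: if no subscriber is listening when the key expires, the event is lost, so a periodic cleanup sweep is still a sensible safety net.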
Hi, this will not be implemented by original design:
The reasoning is much more complex and is a lot biased by personal feelings, preference and sensibility so there is no objective way I can show you this is better ;)
Closing. Thanks for reporting.
Is there a best practice for approaching this?
I have come up with a pretty good work-around solution for this "expire redis hash values by key" problem.
Essentially you store the entries with timestamps as the field keys, or embedded into the keys. Note that Redis stores keys as strings, so you'll have to .toString() / parseInt() the keys to do comparisons. Using this scheme, you make a set of hashes with your relevant variables in the hash names, and each one holds timestamp/value pairs.
For example, in nodejs:
var Redis = require('then-redis');
var _ = require('underscore');
var redClient = Redis.createClient('tcp://localhost:6379');

var now = new Date();
var nowSeconds = Math.round(now.getTime() / 1000);
var nowSecondString = nowSeconds.toString();

// thisCategory / thisSubcategory / thisFrame / myReleventValueAtThisTime
// are placeholders for your own variables.
var myHashTableName = thisCategory + ":" + thisSubcategory + ":" + thisFrame;
var thisEntry = {};
thisEntry[nowSecondString] = myReleventValueAtThisTime;

// add newest entry to the hash
redClient.hmset(myHashTableName, thisEntry).then(function () {
  // get all entries from the hash
  redClient.hgetall(myHashTableName).then(function (result) {
    console.log("got existing entries from: " + myHashTableName + " hashTable.");
    var myCurrentValidValues = [];
    if (result) {
      for (var i in result) {
        // test entries for being too old, in seconds
        if (parseFloat(i) < nowSeconds - 180) {
          // this entry is > 3 minutes old; delete it
          redClient.hdel(myHashTableName, i);
        } else {
          var thisValidKeyValPair = {};
          // our values happen to be floats
          thisValidKeyValPair[i] = parseFloat(result[i]);
          myCurrentValidValues.push(thisValidKeyValPair);
        }
      }
      // now work with the value array, which no longer includes "stale" entries

      // example: sort by value, descending
      myCurrentValidValues.sort(function (a, b) {
        return _.values(b)[0] - _.values(a)[0];
      });

      // example: sort by timestamp, oldest to newest (ascending)
      myCurrentValidValues.sort(function (a, b) {
        return parseFloat(_.keys(a)[0]) - parseFloat(_.keys(b)[0]);
      });
    }
  });
});
Note that just deleting the old entries as you go, before you do any critical analysis on the values, is far more computationally efficient than having Redis check the timeout on every field of every hash in every DB every second. So it's not a bad solution.
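That lazy-expiry step can be distilled into a pure function (the name and shape here are mine, for illustration): given a hash's entries, it splits them into live values to keep and stale field names to pass to HDEL.

```javascript
// Partition { timestampString: value } entries into live entries and stale
// field names, given the current time and a maximum age, both in seconds.
function partitionByAge(entries, nowSeconds, maxAgeSeconds) {
  const live = {};
  const stale = [];
  for (const field of Object.keys(entries)) {
    if (parseFloat(field) < nowSeconds - maxAgeSeconds) {
      stale.push(field); // candidate for HDEL
    } else {
      live[field] = entries[field];
    }
  }
  return { live, stale };
}
```

Keeping the filter pure like this makes the cutoff logic easy to unit-test without a Redis connection; the caller then issues one HDEL per entry in `stale`.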