Redis is an open source, BSD licensed, advanced key-value cache and store.
Connecting to a specific host and port, which is useful if you are running multiple instances locally:
redis-cli -h 192.168.33.81 -p 6380
Web Servers - Redis:
rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm
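With the release packages in place, Redis itself can be installed from the Remi repository. The repo names below are assumptions based on the remi-release/epel-release packages installed above:

```shell
# Install Redis from the Remi repo (falls back to EPEL for dependencies).
yum --enablerepo=remi,epel install -y redis
```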
's/save 900 1/#save 900 1/'
's/save 300 10/#save 300 10/'
's/save 60 10000/#save 60 10000/'
's/# slaveof <masterip> <masterport>/slaveof 188.8.131.52 6379/'
's/# unixsocket \/tmp\/redis.sock/unixsocket \/var\/tmp\/redis.sock/'
's/# unixsocketperm 755/unixsocketperm 777/'
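The substitutions above can be applied in a single sed pass. On the server the target would be the installed config file (/etc/redis.conf under the Remi package layout, an assumption); the sketch below runs against a scratch copy so the effect can be checked safely:

```shell
# Demo on a scratch copy; on the server, point CONF at /etc/redis.conf.
CONF=redis.conf.demo
cat > "$CONF" <<'EOF'
save 900 1
save 300 10
save 60 10000
# slaveof <masterip> <masterport>
# unixsocket /tmp/redis.sock
# unixsocketperm 755
EOF
# Comment out the RDB snapshots, point the slave at the master,
# and move/open up the unix socket.
sed -i \
  -e 's/save 900 1/#save 900 1/' \
  -e 's/save 300 10/#save 300 10/' \
  -e 's/save 60 10000/#save 60 10000/' \
  -e 's|# slaveof <masterip> <masterport>|slaveof 188.8.131.52 6379|' \
  -e 's|# unixsocket /tmp/redis.sock|unixsocket /var/tmp/redis.sock|' \
  -e 's/# unixsocketperm 755/unixsocketperm 777/' \
  "$CONF"
```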
chkconfig --level 2345 redis on
- Is Redis installed?
- Is Redis running?
- Has the Firewall port been opened up?
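Each item on the checklist can be verified from the command line. The host is the one used earlier; the package and service name `redis` are assumptions based on the Remi package:

```shell
rpm -q redis                       # installed?
service redis status               # running?
redis-cli -h 192.168.33.81 ping    # reachable through the firewall? expect PONG
iptables -L -n | grep 6379         # is the Redis port explicitly allowed?
```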
Web Servers - PHP Redis:
git clone git:
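Once cloned (the directory name `phpredis` and the ini path are assumptions that vary by distro), the extension builds like any PECL-style module:

```shell
cd phpredis
phpize
./configure
make && make install
# Enable the extension for PHP.
echo 'extension=redis.so' > /etc/php.d/redis.ini
```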
In the set-up below, we have an instance of Redis on each virtual machine. Cache1 acts as the master, and all writes go to the master, as specified in settings.php. Web1 has a direct connection to cache1 so that it always reads the most up-to-date records when warming the cache, and so that it regenerates records if they do not exist in the master instance.
Cache2 has a dual function: it can be a slave of cache1 or a master instance, depending on what is happening within the system. For example, if drush cc all needs to be run, we can disconnect cache2 from cache1, making it a master. By doing this, the caches remain untouched while cache1 is updated. Once cache1 has been updated, cache2 can simply be reconnected to it, and a sync between them begins which propagates down to the web server. In theory, this means that the web server always has warm cache records.
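The switch described above is a single command in each direction on cache2; the hostnames are the ones used in this set-up and are assumptions:

```shell
# Promote cache2 to a stand-alone master before running drush cc all:
redis-cli -h cache2 slaveof no one

# ... clear caches / update cache1 ...

# Reattach cache2; it resyncs from cache1 and replication
# propagates down to the web server:
redis-cli -h cache2 slaveof cache1 6379
```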
The Drupal 7 Redis module's include files are supposed to contain no Drupal code, meaning they should work across Drupal versions if included directly. At the time of trying the module out, this was no longer the case: it was necessary to load the Database class, which required the Drupal 7 database backport module. Also, in order to use Redis for caching, it was necessary to use the D7-to-D6 backport module.
A further change was made to the module in order to support a master/slave set-up in the sense that you write to the master and read from the slave; I could not see how the existing code could support that feature. It looks like some changes are being thought about in order to support things like clustering, but that will come in the future.
Investigate: API-32 - Designing a failover for Memcache CLOSED
Go live: API-40 - Redis ready to go live CLOSED