It was only finding the first two avatars and then thinking it was done.
I'm not entirely sure why it was doing that.
I think maybe all the cloning made it forget where it was or something.
Either way, it seems to work now and uses noticeably less memory.
All breakout queues we're going to listen to now need to be explicitly listed in $config['queue']['breakout'].
Until XMPP is moved to the component model, this setting will let the individual processes work with their own queues:
$config['queue']['breakout'][] = 'xmpp/xmppout/' . $config['site']['nickname'];
- Multiplexing queues into groups and across multiple sites.
- Sharing vs breakout configurable per site and per queue via $config['queue']['breakout']
- Detect how many times a message is redelivered; discard it if it has killed too many daemons
- count configurable with $config['queue']['max_retries']
- can dump discarded items to files in $config['queue']['dead_letter_dir'] (see the config sketch after this list)
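A hedged example of how these settings might be combined in config.php; the setting names come from the notes above, but the specific values and the dead-letter path are illustrative assumptions:

// breakout queues must be listed explicitly (per site, per queue)
$config['queue']['breakout'][] = 'xmpp/xmppout/' . $config['site']['nickname'];
// give up on a message after it has been redelivered this many times (assumed value)
$config['queue']['max_retries'] = 10;
// where discarded messages get dumped for later inspection (assumed path)
$config['queue']['dead_letter_dir'] = '/var/spool/statusnet/dead-letters';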
Queue daemon memory & resource leak fixes:
- avoid unnecessary reconnections to the memcached server (switch persistent connections back in on second initialization, assuming it's a child process; see the sketch after this list)
- monkey-patch for leaky .ini loads in DB_DataObject::databaseStructure(); was leaking about 200KB per active switch
- applied leak fixes to Status_network as well, using an intermediate base class Safe_DataObject for both it and Memcache_DataObject
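A minimal sketch of the reconnect-avoidance idea from the first item above, assuming a plain Memcache backend; the class and structure here are illustrative, not the actual StatusNet code:

// first initialization (parent process) opens a regular connection; a second
// initialization is assumed to be a forked child, which reuses a persistent
// connection instead of opening another socket to the memcached server
class CacheConnectionSketch
{
    private static $initCount = 0;

    public function connect($host = 'localhost', $port = 11211)
    {
        self::$initCount++;
        $conn = new Memcache();
        if (self::$initCount > 1) {
            $conn->pconnect($host, $port); // assumed child process: persistent connection
        } else {
            $conn->connect($host, $port);  // parent process: regular connection
        }
        return $conn;
    }
}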
Misc queue fixes:
- correct handling of child processes exiting due to signal termination instead of regular exit
- shut down instead of entering an infinite respawn loop if we're already past the soft memory limit at startup (see the sketch after this list)
- added --all option for xmppdaemon; still opens one XMPP connection per site that has XMPP active
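A sketch of the startup check behind the soft-memory-limit item above; the limit value and message are assumptions, not the real code:

// if we're already past the soft memory limit when the daemon starts, shut
// down cleanly rather than respawning children forever
$softLimit = 64 * 1024 * 1024; // assumed soft limit; the real one is configurable
if (memory_get_usage(true) > $softLimit) {
    error_log('Already past soft memory limit at startup; shutting down.');
    exit(1);
}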
Cache updates:
- add Cache::increment() method with native support for memcached atomic increment (see the sketch after this list)
* skip unnecessary unsubscribes on graceful shutdown -- takes a long time with many queues and slows down our restarts when hitting the graceful memory limit
* fix control channel (was broken when we switched to support multiple queue servers)
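A hedged sketch of the idea behind Cache::increment(): use memcached's native atomic increment where available, with a non-atomic fallback. The function below is illustrative, not the actual implementation:

function cacheIncrementSketch(Memcache $mc, $key, $step = 1)
{
    // native memcached increment is atomic on the server side
    $result = $mc->increment($key, $step);
    if ($result === false) {
        // missing key or backend without native support: emulate it (not atomic)
        $value = (int)$mc->get($key) + $step;
        $mc->set($key, $value);
        $result = $value;
    }
    return $result;
}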
May miss keys other than the given or primary key, but should work for a lot of common cases where a bad entry has been removed from the DB but lingers in the cache.
No change in efficiency for the common case where nothing's been deleted: we do the same bulk fetch of just the notices we think we'll need as before, then if we come up short we keep checking one by one until we've filled up to our $limit (sketched below).
This can leave us with overlap between pages, but we already have that when new messages come in between clicks; it seems the lesser of evils versus not getting a 'before' button.
A more permanent fix will be to switch timeline paging in the UI to use notice IDs.
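An illustrative sketch of that fill-up strategy; $fetchMany and $fetchOne stand in for the real bulk and single lookups, which aren't shown here:

function fillToLimit(array $candidateIds, $limit, $fetchMany, $fetchOne)
{
    // same bulk fetch as before: just the items we think we'll need
    $items = array_values(array_filter($fetchMany(array_slice($candidateIds, 0, $limit))));

    // if bad entries lingered in cache but are gone from the DB, we came up
    // short; keep checking further candidates one by one until we hit $limit
    $next = $limit;
    while (count($items) < $limit && $next < count($candidateIds)) {
        $item = $fetchOne($candidateIds[$next++]);
        if ($item) {
            $items[] = $item;
        }
    }
    return $items;
}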
Defaults to looking at only the last 90 days of activity; this can be adjusted up or down:
$config['tag']['cutoff'] = 86400 * 90;
$config['popular']['cutoff'] = 86400 * 90;
Per-user and per-group tag clouds do not use the cutoff (and it doesn't help with indexing on them).