class Item
  class ProxyArray < Array
    # Use proxies for item HTML, too (not just item JSON). Some rough
    # benchmarking on my dev box (cache_classes on, many items), in ms per
    # request:
    #
    #   No proxies:
    #     Fresh JSON:  175, 90, 90, 93, 82, 88, 158, 150, 85, 167 = 117.8
    #     Cached JSON: (none)
    #     Fresh HTML:  371, 327, 355, 328, 322, 346 = 341.5
    #     Cached HTML: 173, 123, 175, 187, 171, 179 = 168
    #   Proxies:
    #     Fresh JSON:  175, 183, 269, 219, 195, 178 = 203.17
    #     Cached JSON: 88, 70, 89, 162, 80, 77 = 94.3
    #     Fresh HTML:  494, 381, 350, 334, 451, 372 = 397
    #     Cached HTML: 176, 170, 104, 101, 111, 116 = 129.7
    #
    # So the proxy overhead is significant, but the gains when cached (which
    # should be all the time, since we currently have zero evictions) are
    # definitely worth it. Worth shipping, and probably worth some future
    # effort to reduce the overhead.
    #
    # For reference: on production, items#index was consistently averaging
    # 73-74ms when healthy, and 82ms when pets#index was busier than usual.
    # This will probably perform noticeably worse at first (for JSON, anyway,
    # since the HTML is already mostly cached), so it may be worth briefly
    # warming the cache after deploying.

    # TODO: do we really need to include translations? The search documents
    # know the proper name for each locale, so proxies can tell their
    # parent items what their names are and save the query entirely.
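    #
    # A rough sketch of that idea (the names here are hypothetical, not
    # implemented in this file): a search hit already knows the item's
    # localized name, so the caller could hand it to the proxy up front:
    #
    #   proxies = Item::ProxyArray.new(search_hits.map(&:id))
    #   proxies.zip(search_hits).each do |proxy, hit|
    #     proxy.set_known_name(hit.name)  # hypothetical setter on Proxy
    #   end
    #
    # and the :translations includes below could then be dropped.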
    SCOPES = HashWithIndifferentAccess.new({
      method: {
        as_json: Item.includes(:translations),
      },
      partial: {
        item_link_partial: Item.includes(:translations)
      }
    })

    def initialize(items_or_ids)
      self.replace(items_or_ids.map { |item_or_id| Proxy.new(item_or_id) })
    end
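
    # The Proxy class itself isn't defined in this file. From how it's used
    # below, prepare assumes roughly this interface (a sketch of the expected
    # shape, not the actual implementation):
    #
    #   class Item::Proxy
    #     def initialize(item_or_id); end               # wrap an Item or an id
    #     def id; end                                   # the wrapped item's id
    #     def item=(item); end                          # accept a loaded Item on a cache miss
    #     def fragment_key(type, name); end             # cache key for one kind of output
    #     def set_known_output(type, name, value); end  # record a value found in the cache
    #   end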

    def prepare_method(name)
      prepare(:method, name)
    end

    def prepare_partial(name)
      prepare(:partial, name)
    end
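
    # Intended flow, roughly (the calling code here is an assumption; it's
    # not shown in this file):
    #
    #   proxies = Item::ProxyArray.new(item_ids)
    #   proxies.prepare_partial(:item_link_partial)  # bulk cache read + bulk DB load for misses
    #   proxies.each { |proxy| ... }                 # render each proxy's cached or loaded output
    #
    # prepare_method(:as_json) works the same way for the JSON representation.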

    private

    def prepare(type, name)
      item_scope = SCOPES[type][name]
      raise "unexpected #{type} #{name.inspect}" unless item_scope

      # Try to read all values from the cache in one go, setting the proxy
      # values as we go along. Delete successfully set proxies, so that
      # everything left in proxies_by_key in the end is known to be a miss.
      proxies_by_key = {}
      self.each { |p| proxies_by_key[p.fragment_key(type, name)] = p }
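      # Note: Rails.cache.read_multi only returns the keys it actually found,
      # so deleting each returned key leaves exactly the misses behind.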
      Rails.cache.read_multi(*proxies_by_key.keys).each { |k, v|
        proxies_by_key.delete(k).set_known_output(type, name, v)
      }

      missed_proxies = proxies_by_key.values
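      # index_by (from ActiveSupport) builds an id => proxy hash, so each item
      # we load below can be handed back to the proxy that's waiting for it.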
      missed_proxies_by_id = missed_proxies.index_by(&:id)

      item_scope.find(missed_proxies_by_id.keys).each do |item|
        missed_proxies_by_id[item.id].item = item
      end
    end
  end
end