impress/app/models/item/proxy.rb
Matchu 9e3cac82ec use proxies for item html, too
Some rough benchmarking on my box (dev environment with class caching enabled, many items; times in ms):

No proxies:
    Fresh JSON:  175,  90,  90,  93,  82, 88, 158, 150, 85, 167 = 117.8
    Cached JSON: (none)
    Fresh HTML:  371, 327, 355, 328, 322, 346 = 341.5
    Cached HTML: 173, 123, 175, 187, 171, 179 = 168

Proxies:
    Fresh JSON:  175, 183, 269, 219, 195, 178 = 203.17
    Cached JSON:  88,  70,  89, 162,  80,  77 = 94.3
    Fresh HTML:  494, 381, 350, 334, 451, 372 = 397
    Cached HTML: 176, 170, 104, 101, 111, 116 = 129.7

So the proxy overhead is significant, but the gains when cached (which should
be all the time, since we currently have 0 evictions) are definitely worth
it. Worth pushing, and probably worth some future effort to reduce the
overhead.

On production (again, rough numbers), items#index was consistently averaging
73-74ms when healthy, and 82ms when pets#index was noisier than usual; that's
just for reference. This will probably perform significantly worse at first
(in JSON, anyway, since the HTML is already mostly cached), so it might be
worth briefly warming the cache after pushing (sketch below).
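
A rough warm-up sketch, assuming a rails runner one-off; Item::Proxy and its
as_json caching are from this file, but the find_each sweep and the locale
handling are just one way to do the warming:

    # Warm the JSON fragment cache for every item in the current locale.
    Item.find_each do |item|
      proxy = Item::Proxy.new(item.id)
      proxy.item = item   # hand over the loaded record so as_json skips the lookup
      proxy.as_json       # writes through Rails.cache.fetch if the key is cold
    end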
2013-06-26 23:50:19 -07:00

class Item
  class Proxy
    include FragmentLocalization

    attr_reader :id
    attr_writer :item, :owned, :wanted

    delegate :description, :name, :nc?, :thumbnail_url, to: :item

    def initialize(id)
      @id = id
      @known_outputs = {method: {}, partial: {}}
    end

    def as_json(options={})
      cache_method(:as_json)
    end

    def cached?(type, name)
      # TODO: is there a way to cache nil? Right now we treat it as a miss.
      # We eagerly read the cache rather than just check whether the value
      # exists, which will usually cut down on cache requests.
      @known_outputs[type][name] ||= Rails.cache.read(fragment_key(type, name))
      !@known_outputs[type][name].nil?
    end

    def owned?
      @owned
    end

    def to_partial_path
      # HACK: could break without warning!
      Item._to_partial_path
    end

    def wanted?
      @wanted
    end

    private

    def cache_method(method_name, &block)
      # Two layers of cache: a local copy, in case the method is called again,
      # and then the Rails cache, before we hit the actual method call.
      @known_outputs[:method][method_name] ||= begin
        key = fragment_key(:method, method_name)
        Rails.cache.fetch(key) { item.send(method_name) }
      end
    end

    def item
      @item ||= Item.find(@id)
    end

    def fragment_key(type, name)
      # Rails prefixes view fragment cache keys with "views/", so partial
      # fragments need that prefix to match; method results use the bare key.
      prefix = type == :partial ? 'views/' : ''
      base = localize_fragment_key("items/#{@id}##{name}", I18n.locale)
      prefix + base
    end
  end
end
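
For reference (not part of this file), a caller might use these proxies
roughly like so; item_ids and the bulk-load step are illustrative
assumptions, not code from the app's controllers:

    # Hypothetical caller: serve item JSON, touching the items table only
    # for proxies whose fragment isn't already in the Rails cache.
    proxies = item_ids.map { |id| Item::Proxy.new(id) }

    # cached? eagerly reads each hit into the proxy's local memo, so the
    # later as_json call won't issue a second cache request for it.
    misses = proxies.reject { |proxy| proxy.cached?(:method, :as_json) }

    # Bulk-load only the missing items and hand each one to its proxy.
    by_id = misses.index_by(&:id)
    Item.where(id: by_id.keys).each { |item| by_id[item.id].item = item }

    proxies.map(&:as_json)  # cache hits never call Item.find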