I've been doing this manually via email for a long time,
since building new stuff in the logged-in world was a pain in the old env.
But now here we are! Finally, finally :)
The "fits:8-bit-chomby" search filter was being read as color=8, species=bit.
Now, we split from the right-hand side of the filter instead.
Still a problem for anyone who explicitly types the Spanish/Portuguese
ordering of "fits:chomby-8-bits", but I'm okay with this cheap fix, since
I bet literally nobody has done that in the past month, if ever :P
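In code, the fix is roughly this (a sketch of the idea, not the literal implementation):

    # Split on the *last* hyphen, so multi-word colors stay intact:
    color, _sep, species = "8-bit-chomby".rpartition("-")
    color   # => "8-bit"
    species # => "chomby"

    # The old left-side split was what got it wrong:
    "8-bit-chomby".split("-", 2) # => ["8", "bit-chomby"]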
In particular, outfit_id == 0 would cause outfit_id? to
return false, so it wouldn't run the outfit presence
validation, so /donations/features would try to load
outfit #0 and fail.
Also, we now use flash[:alert] instead of flash[:error] when outfit_id
is bad.
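For the curious, this is a Rails gotcha: attribute query methods treat 0 as blank for numeric columns. A sketch of the hole (the model and guard names here are assumptions for illustration):

    donation.outfit_id = 0
    donation.outfit_id?     # => false! so a guard like `if: :outfit_id?`
                            #    skips the presence validation entirely
    donation.outfit_id.nil? # => false; checking nil-ness directly avoids this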
Mostly this was because of Mac's bug where you, in Firefox:
1. Load a real pet with the default appearance (probs Happy Male) into the wardrobe
2. Use a search query containing ":"
3. See the pet biology vanish before your eyes!
I observed that this only happened in cases where the biology stuff in the URL
wasn't replaced by a state number, so I figured it'd probably be good to do
that anyway, since biology fields are annoying, and for some reason it seemed
to fix the bug. (Something to do with query parsing and stupid internal state
issues, probably. Ugh. One of these days, I'll re-rewrite all this :P)
Turns out we need to assign closeted to the actual items, not
the item proxies, since the items are what we check against. (I
would've thought they'd be backed by the same instance of
the item anyway, but, whatever. The fix works :P)
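Roughly the shape of the gotcha and fix (hypothetical names; the point is that writes to the proxy don't reach the record we check later):

    proxies.each { |proxy| proxy.closeted = true }      # lost: stays on the proxy
    proxies.each { |proxy| proxy.item.closeted = true } # sticks: set on the item itself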
It turns out that some pets for seemingly nonstandard colors have the
standard body type anyway, and vice-versa. This implies that we should
stop relying on a color's standardness, but, for the time being, we've
just revised the prediction model:
Old model:
* If I see a body_id, I find the corresponding color_ids, and it's wearable
by all pet types with those color_ids.
New model (sketched in code after this list):
* If I see a body_id,
  * If it also belongs to a basic pet type, it's a standard body ID.
    * It therefore fits all pet types of standard color (if there's
      more than one body ID modeled already). (Not really, because of
      weird exceptions like Orange Chia. Should that be standard or not?)
  * If it doesn't also belong to a basic pet type, it's a nonstandard
    body ID.
    * It therefore only belongs to one color, and therefore the item
      fits all pet types of the same color.
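Here's the promised sketch of the new model. (Names are assumptions: basic?/standard? flags on Color, and PetType rows keyed by species, color, and body ID. Not the literal implementation.)

    def predicted_pet_types(body_id)
      pet_types = PetType.where(body_id: body_id).includes(:color)
      if pet_types.any? { |pt| pt.color.basic? }
        # Standard body ID: predict all pet types of standard color.
        PetType.joins(:color).where(colors: { standard: true })
      else
        # Nonstandard body ID: it belongs to only one color, so predict
        # all pet types of that same color.
        PetType.where(color_id: pet_types.first.color_id)
      end
    end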
We used get_multi when preparing the proxies to decide which to
load from the database, but then sent multiple get requests to
Memcache to re-fetch the same data we'd already gotten from that
get_multi. Silly! Now we just use the data that's already stored
on the proxy.
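Something like this (a sketch; the accessor names are assumed):

    cached = Rails.cache.read_multi(*proxies.map(&:cache_key))
    proxies.each do |proxy|
      proxy.cached_json = cached[proxy.cache_key] # reuse the get_multi result
    end
    uncached = proxies.select { |proxy| proxy.cached_json.nil? }
    # ...then load only `uncached` from the database.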
Right now we're spending too much time expiring cache keys when
getting contributions. The longer-term fix is to move it to a
background task, but it's good to restrict deletions only to usable
locales rather than all the ones that Rails theoretically supports.
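A sketch of the restriction (the cache key format is made up for illustration):

    # I18n.available_locales is our configured list, much shorter than
    # the full set of locales Rails ships translations for.
    I18n.available_locales.each do |locale|
      Rails.cache.delete("users/#{user.id}/contributions/#{locale}")
    end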
Fun bug! If you edit an outfit, but the outfit loads before the
closet items do, then we clone the outfit to give it its new
identity, and the clone misses out on the original's item load
callbacks. Now we have a cheap hack to forward item load data to the
outfit's clones. Hooray! Hope this doesn't break tons of things!
That is, Neopets.com will raise an error when you try to equip a
Kyrii Mage Cape to a pet who's already wearing Ceremonial Shenkuu
Warrior Armour, since the armor restricts the Collar zone which
the cape occupies. DTI, however, would just hide the Collar zone,
as if it were biology. Now DTI will unwear the armor
when you wear the cape, and vice-versa (despite the restriction
relationship being one-directional).
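The conflict check is conceptually something like this (a sketch; these accessors are assumed, and the real logic lives in the wardrobe code):

    # The restriction is one-directional in the data, so check both
    # directions before unwearing the conflicting item.
    def conflicts?(item_a, item_b)
      (item_a.occupied_zone_ids & item_b.restricted_zone_ids).any? ||
        (item_b.occupied_zone_ids & item_a.restricted_zone_ids).any?
    end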
Some lame benchmarking on my box (dev, cache classes on, many items; times in ms):

No proxies:
  Fresh JSON: 175, 90, 90, 93, 82, 88, 158, 150, 85, 167 = 117.8
  Cached JSON: (none)
  Fresh HTML: 371, 327, 355, 328, 322, 346 = 341.5
  Cached HTML: 173, 123, 175, 187, 171, 179 = 168

Proxies:
  Fresh JSON: 175, 183, 269, 219, 195, 178 = 203.17
  Cached JSON: 88, 70, 89, 162, 80, 77 = 94.3
  Fresh HTML: 494, 381, 350, 334, 451, 372 = 397
  Cached HTML: 176, 170, 104, 101, 111, 116 = 129.7
So, overhead is significant, but the gains when cached (and that should be
all the time, since we currently have 0 evictions) are definitely worth
it. Worth pushing, and probably putting some future effort into reducing
overhead.
On production (again, lame), items#index was consistently averaging
73-74ms when super healthy, and 82ms when pets#index was being louder
than usual. That's just for reference. This will probably perform
significantly worse at first (in JSON, anyway, since HTML is already
mostly cached), so it might be worth briefly warming the cache after
pushing.
That is, once we get our list of IDs from the search engine, only
fetch records whose JSON we don't already have cached.
It's simpler here to use as_json, but it'd probably be even faster
if I figure out how to serve a plain JSON string from a Rails
controller. In the meantime, requests of entirely cached items
are coming in at about 85ms on average on my box (dev, cache
classes, many items), about 10ms better than the last
iteration.
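The flow, roughly (the cache key format is made up for illustration):

    keys = ids.map { |id| "items/#{id}/json" }
    cached = Rails.cache.read_multi(*keys)
    missing_ids = ids.reject { |id| cached.key?("items/#{id}/json") }
    Item.where(id: missing_ids).each do |item|
      json = item.as_json
      Rails.cache.write("items/#{item.id}/json", json)
      cached["items/#{item.id}/json"] = json
    end
    # `cached` now has JSON for every requested item.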
Specifically, we were running a find_or_initialize_by for all 50
hangers, which isn't great. Collation logic is more complicated this
way, but query count is way lower.
Additionally, compare against hanger.list_id instead of hanger.list,
because hanger.list will fire a query when list_id is non-nil, when
the ID itself already tells us everything we needed to know, anyway.
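That is (names assumed for illustration):

    hanger.list == target_list        # may fire a ClosetList query to load .list
    hanger.list_id == target_list_id  # same answer, zero queries; nil-safe too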
Bug report that this resolves:
...However, when I was using the "Import from SDB" tool just a few
minutes ago, it ended up adding EVERY neocash item into the "Not
In A List" section, regardless of whether I already had that item
imported into my "Your Items". So, basically... I had duplicates of
everything and it would not allow me to move them around into
separate categories or anything. I know that every other time I've
used the import tool, it would only add NEW items that are not
currently already in my lists yet.
Most of the reasoning is documented in the big comment. In short, we tried
to solve the problem with caching, but the caching should hardly be necessary
now that the bottleneck should be fixed. We'll see on production if it
actually solves the whole problem, but I've confirmed in the console that
redefining this function makes random_basic_per_species (as called during
rendering) a ton faster. And this way we keep our randomness, woo!
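For reference, the redefinition is shaped something like this (a hypothetical sketch; the real version and its reasoning are in the big comment):

    # One query for all basic pet types, then sample in Ruby: keeps the
    # randomness without an expensive ORDER BY RAND() per species.
    # (Color.basic is an assumed scope for the basic colors.)
    def self.random_basic_per_species(species_ids)
      PetType.where(species_id: species_ids, color_id: Color.basic.select(:id)).
        group_by(&:species_id).
        map { |_species_id, pet_types| pet_types.sample }
    end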