Commit graph

311 commits

SHA1 Message Date
d7af6cfd4a populate occupies/restricts selects 2014-04-02 20:26:53 -05:00
f4c435c3cd handle user filters 2014-04-02 10:32:13 -05:00
1d11cf6edc better handling of i18n and labels and resource filters and junk 2014-04-02 10:32:13 -05:00
170b7fa6f5 can search items with a form-based query instead of text-based 2014-04-02 10:32:13 -05:00
a326f09eda lolwhoops, measure prank-funniness in PST 2014-04-01 19:10:44 -05:00
f9fa3eb596 prank color artist credit 2014-03-31 21:05:28 -05:00
6e80c228c1 include prank message on wardrobe page 2014-03-30 22:37:33 -05:00
32bab89ed4 add prank messages to outfits#show 2014-03-28 15:15:04 -05:00
8e93d603fa list prank colors as fake on the homepage, unless pranks are funny today 2014-03-27 22:44:18 -05:00
b583254397 create colors from rake 2014-03-27 22:28:48 -05:00
03c76fe882 Update missing body ID prediction to handle, say, the Maraquan Mynci.
It turns out that some pets for seemingly nonstandard colors have the
standard body type anyway, and vice-versa. This implies that we should
stop relying on a color's standardness, but, for the time being, we've
just revised the prediction model:

Old model:
    * If I see a body_id, I find the corresponding color_ids, and it's wearable
      by all pet types with those color_ids.

New model:
    * If I see a body_id,
        * If it also belongs to a basic pet type, it's a standard body ID.
            * It therefore fits all pet types of standard color (if there's
              more than one body ID modeled already). (Not really,
              because of weird exceptions like Orange Chia. Should that be
              standard or not?)
        * If it doesn't also belong to a basic pet type, it's a nonstandard
          body ID.
            * It therefore only belongs to one color, and therefore the item
              fits all pet types of the same color.
2014-01-20 15:29:01 -06:00
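As a rough Ruby sketch of the new model above (the PetType and Color names and their basic/standard flags are assumptions for illustration, not confirmed against the actual codebase):

    # Guess which pet types an item with this body_id fits, per the new model.
    def predicted_pet_types_for(body_id)
      pet_types = PetType.where(body_id: body_id)
      if pet_types.joins(:color).exists?(colors: { basic: true })
        # Standard body ID: assume it fits every standard-color pet type
        # (with known exceptions like Orange Chia, as noted above).
        PetType.joins(:color).where(colors: { standard: true })
      else
        # Nonstandard body ID: it belongs to one color, so the item fits
        # all pet types of that same color.
        PetType.where(color_id: pet_types.select(:color_id))
      end
    end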
b2fca6b6c1 closet hangers index uses neopets connections dropdown 2014-01-18 22:50:14 -06:00
72b174c9b3 store all neopets usernames for logged-in users, but breaks closet_hangers#index 2014-01-18 21:55:01 -06:00
8288b8a10d username form, backed by localstorage for guests; not yet backed by db for logged-in users 2014-01-17 11:12:56 -06:00
99b2acd419 attach body id to newest unmodeled item species names 2014-01-10 16:25:03 -05:00
9a4e114964 oh yum, this is really starting to come together :) 2014-01-10 16:25:02 -05:00
7c6e607612 basic neopia api integration 2014-01-10 16:25:02 -05:00
4a49ad2fe8 oh poo, didn't commit these properly with the closet hanger caching :( 2013-12-27 21:48:38 -05:00
b6247fa22f prepare partials for closet_hangers#index, too 2013-12-27 21:48:28 -05:00
1ce32e5867 Use item proxies better for items#index?format=html :D
We used get_multi when preparing the proxies to decide which to
load from the database, but then sent multiple get requests to
Memcache to re-fetch the same data that get_multi had already returned. Silly!
Use the data that's already stored on the proxy anyway.
2013-12-27 21:11:03 -05:00
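A hedged sketch of the fix, assuming a hypothetical Item::Proxy that can hold on to whatever the bulk read returned (the key format and accessor names are made up for illustration):

    # One bulk cache read, then keep the results on the proxies so nothing
    # re-fetches the same keys from Memcache later in the request.
    keys   = item_ids.map { |id| "items/#{id}" }
    cached = Rails.cache.read_multi(*keys)

    proxies = item_ids.map do |id|
      proxy = Item::Proxy.new(id)
      proxy.cached_data = cached["items/#{id}"]
      proxy
    end

    # Only proxies with no cached data need a trip to the database.
    ids_to_load = proxies.select { |p| p.cached_data.nil? }.map(&:id)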
6b340f906e Cache trade info on items#show, finally! I think it's the performance culprit. 2013-12-27 14:49:46 -05:00
cdffcfbcfd TIL item proxies can read from the cache in bulk 2013-12-09 01:15:57 -06:00
728ff60c5f move item cache sweeping and flex syncing to background tasks 2013-12-09 00:12:05 -06:00
4144b4dc74 only send cache deletions for usable locales
Right now we're spending too much time expiring cache keys when
getting contributions. The longer-term fix is to move it to a
background task, but it's good to restrict deletions only to usable
locales rather than all the ones that Rails theoretically supports.
2013-12-08 23:44:25 -06:00
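A minimal sketch of the restricted expiry, assuming a hypothetical per-locale cache key format; in the real app the usable locale list is whatever the site actually serves:

    # Expire the per-locale keys only for locales we actually serve, instead
    # of every locale Rails theoretically supports.
    USABLE_LOCALES = I18n.available_locales

    def expire_contribution_caches(item_id)
      USABLE_LOCALES.each do |locale|
        Rails.cache.delete("items/#{item_id}/link/#{locale}")
      end
    end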
f07996d762 cache pet images on items#show, in case that's what's being a super-slow jerkface 2013-12-05 15:22:43 -06:00
cc7ac363dd WIP commit for speeding up item show pages 2013-12-05 13:27:56 -06:00
2b870cf91b add pet state replacement task 2013-11-30 20:33:48 -05:00
0cb7fc87df include zones_restrict in item selector when mall spidering, to avoid flex_source errors 2013-10-08 14:42:46 -05:00
019303031b choose list when importing from pets 2013-08-17 12:07:04 -04:00
e48d00294d fix silly closet hanger merge bug involving flex 2013-07-28 23:30:29 -07:00
082119afe1 fix some mall spider bugs, including not having all the attributes it needed for search indexing 2013-07-09 21:00:36 -07:00
9bd49aa85d first step in repairing mall spider 2013-07-09 20:01:55 -07:00
72c59f0b68 if there's only one item search result, redirect to it 2013-07-09 19:54:22 -07:00
4c208c9ac3 instead of returning an empty item list on contradiction, return an empty proxy collection 2013-07-03 18:17:16 -07:00
5e60795f31 Oops, delegate Item::Proxy#to_param to the item, or we get bad links. 2013-06-27 10:47:02 -07:00
5b9394ce82 oops - don't cache as_json's owned/wanted, but instead have the proxy override 2013-06-27 00:10:55 -07:00
bf697cef7b expire item#as_json when updated 2013-06-27 00:00:37 -07:00
9e3cac82ec use proxies for item html, too
Some lame benchmarking on my box, dev, cache classes, many items:

No proxies:
    Fresh JSON:  175,  90,  90,  93,  82, 88, 158, 150, 85, 167 = 117.8
    Cached JSON: (none)
    Fresh HTML:  371, 327, 355, 328, 322, 346 = 341.5
    Cached HTML: 173, 123, 175, 187, 171, 179 = 168

Proxies:
    Fresh JSON:  175, 183, 269, 219, 195, 178 = 203.17
    Cached JSON:  88,  70,  89, 162,  80,  77 = 94.3
    Fresh HTML:  494, 381, 350, 334, 451, 372 = 397
    Cached HTML: 176, 170, 104, 101, 111, 116 = 129.7

So, overhead is significant, but the gains when cached (and that should be
all the time, since we currently have 0 evictions) are definitely worth
it. Worth pushing, and probably putting some future effort into reducing
overhead.

On production (again, lame), items#index was consistently averaging
73-74ms when super healthy, and 82ms when pets#index was being louder
than usual. For reference is all. This will probably perform
significantly worse at first (in JSON, anyway, since HTML is already
mostly cached), so it might be worth briefly warming the cache after
pushing.
2013-06-26 23:50:19 -07:00
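For illustration, a minimal sketch of the kind of per-item HTML caching the proxies lean on here; the key format and partial name are assumptions, not the app's real ones:

    # Serve an item's rendered HTML fragment from the cache when we can, so
    # a proxy only has to load the real record on a cache miss.
    def cached_item_html(proxy, locale)
      Rails.cache.fetch("items/#{proxy.id}/partial/#{locale}") do
        render_to_string(partial: "items/item", locals: { item: proxy.item })
      end
    end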
e42de795dd Use item proxies for JSON caching
That is, once we get our list of IDs from the search engine, only
fetch records whose JSON we don't already have cached.

It's simpler here to use as_json, but it'd probably be even faster
if I figure out how to serve a plain JSON string from a Rails
controller. In the meantime, requests of entirely cached items
are coming in at about 85ms on average on my box (dev, cache
classes, many items), about 10ms better than the last
iteration.
2013-06-26 23:01:12 -07:00
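A minimal sketch of that flow, under the assumption of a simple per-item JSON cache key (not necessarily the app's actual key format):

    # After search returns item IDs, read all the cached JSON in one round
    # trip and only hit the database for the misses.
    def items_json(item_ids)
      keys   = item_ids.map { |id| "items/#{id}/json" }
      cached = Rails.cache.read_multi(*keys)

      missing_ids = item_ids.reject { |id| cached.key?("items/#{id}/json") }
      Item.where(id: missing_ids).each do |item|
        json = item.as_json
        Rails.cache.write("items/#{item.id}/json", json)
        cached["items/#{item.id}/json"] = json
      end

      item_ids.map { |id| cached["items/#{id}/json"] }
    end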
6984201990 dev util method to manually change SWF asset body ID 2013-06-26 20:08:19 -07:00
b93dbb8e49 Remove redundant queries when importing closet pages
Specifically, we were running a find_or_initialize_by for all 50
hangers, which isn't great. Collation logic is more complicated this
way, but query count is way lower.

Additionally, compare against hanger.list_id instead of hanger.list,
because hanger.list will fire a query whenever list_id is non-nil, and
the ID alone (nil or not) already tells us everything we need to know.
2013-06-26 00:10:52 -07:00
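A hedged sketch of the collation approach, with guessed model and attribute names (closet_hangers, item_id, list_id, quantity) that may not match the real schema:

    # Load the user's existing hangers for this page in one query and collate
    # in memory, instead of a find_or_initialize_by per imported hanger.
    existing = user.closet_hangers.where(item_id: page_item_ids)
                   .index_by { |h| [h.item_id, h.list_id] }
    # Keyed on h.list_id rather than h.list: reading the association would
    # fire a query whenever list_id is non-nil, and the raw ID is enough.

    page_items.each do |item_id, quantity, list_id|
      hanger = existing[[item_id, list_id]] ||
               user.closet_hangers.build(item_id: item_id, list_id: list_id)
      hanger.quantity = quantity
      hanger.save! if hanger.changed?
    end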
a7574f0864 Don't add duplicate hangers now that closet import can specify a list
Bug report that this resolves:

...However, when I was using the "Import from SDB" tool just a few
minutes ago, it ended up adding EVERY neocash item into the "Not
In A List" section, regardless if I already had that item imported
into my "Your Items". So, basically.. I had duplicates of
everything and it would not allow me to move them around into
separate categories or anything. I know that every other time I've
used the import tool, it would only add NEW items that are not
currently already in my lists yet.
2013-06-25 23:40:02 -07:00
fb219f82e8 sigh, add another special color description format 2013-06-23 22:58:17 -07:00
3c127569fe stop caching item preview species images, and fix the bad query instead
Most of the reasoning is documented in the big comment. In short, we tried
to solve the problem with caching, but the caching should hardly be necessary
now that the bottleneck should be fixed. We'll see on production if it
actually solves the whole problem, but I've confirmed in the console that
redefining this function makes random_basic_per_species (as called during
rendering) a ton faster. And this way we keep our randomness, woo!
2013-06-23 22:35:27 -07:00
d132567931 move closet-hanger-destroy form to JS 2013-06-22 15:45:59 -07:00
0d348d6971 oops: sweep localized item link caches 2013-06-07 13:26:51 -07:00
2501cb5667 fix null zone ID bug
TNT has started serving half-removed Corridor of Chance effects:
it has the asset ID and URL and all, but the zone ID is blank.
RocketAMF has patched the empty key bug, and now we ignore assets
associated with empty keys.
2013-05-23 18:48:19 -07:00
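A rough sketch of the guard, with illustrative hash keys and attributes rather than the real parsed AMF structure:

    # When building SWF asset records from the parsed pet data, skip any
    # asset whose zone ID came back blank (the half-removed effects).
    assets_data.each do |data|
      next if data[:zone_id].blank?
      SwfAsset.find_or_initialize_by(remote_id: data[:remote_id])
              .update(zone_id: data[:zone_id], url: data[:url])
    end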
bf528b06d2 label pet states as glitched, send to bottom of emotion order 2013-04-27 10:21:51 -05:00
3c91f0cde0 import items to a specific list 2013-04-09 15:50:33 -05:00
9d3acf660c in item queries, ignore name filters that are too small or too large 2013-03-29 17:05:14 -05:00