this is the last one to get parity with current modeling, I think?? I'm gonna add one more feature though: removing no-longer-used assets from the item
Oops, when building the Support tool to label pet appearances, I didn't realize that there's also a boolean `labeled` field that needs to be true for labeled appearances. Without it, the old app shows the appearance as "Unlabeled".
I also ran this query to fix the rows we'd incorrectly written:
```
mysql> UPDATE pet_states SET labeled = 1 WHERE mood_id IS NOT NULL;
Query OK, 158 rows affected (0.14 sec)
Rows matched: 19640 Changed: 158 Warnings: 0
```
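Going forward, the Support write itself needs to set the flag too. Here's a minimal sketch of that, assuming a helper roughly like this (the function shape and the `id` column are my assumptions; `pet_states`, `mood_id`, and `labeled` come from the query above):
```
// Hypothetical sketch: when Support labels a pet appearance, set `labeled`
// alongside `mood_id`, so the old app stops showing it as "Unlabeled".
async function labelPetAppearance(db, petStateId, moodId) {
  await db.execute(
    `UPDATE pet_states SET mood_id = ?, labeled = 1 WHERE id = ?`,
    [moodId, petStateId]
  );
}
```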
This is mostly because I want to chain the relationship saves after both items and assets save, and I want to be able to specify that a bit more precisely, rather than the layers-of-awaits we were building up.
yeah, I had unified Pet into Outfit, but now I think that was overly clever… 😅
Here, I define a new Pet type, which keeps some of Outfit's fields, plus the deprecated fields for now.
I did this because I want petAppearance to work, for UC testing!
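For flavor, here's a rough sketch of the shape I mean, not the actual schema; aside from `petAppearance`, the field names and deprecation reasons are placeholders:
```
// Sketch only: a standalone Pet type alongside Outfit, keeping deprecated
// fields around for now.
const { gql } = require("apollo-server");

const typeDefs = gql`
  type Pet {
    id: ID!
    name: String!

    # The field I actually need working right now, for testing:
    petAppearance: PetAppearance!

    # Deprecated fields, kept for now:
    species: Species @deprecated(reason: "Use petAppearance instead.")
    color: Color @deprecated(reason: "Use petAppearance instead.")
  }
`;
```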
We download the schema from prod and omit real data, but I didn't notice that we were still pulling the `AUTO_INCREMENT` counter metadata for IDs! Now, we scrub that from the schema file we save.
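The scrub itself is tiny; here's a sketch, where the file path is a placeholder:
```
// mysqldump emits table options like `) ENGINE=InnoDB AUTO_INCREMENT=19641 ...`;
// strip the counter so the saved schema file doesn't leak or churn with data.
const fs = require("fs");

function scrubAutoIncrement(schemaSql) {
  return schemaSql.replace(/\s*AUTO_INCREMENT=\d+/g, "");
}

const schemaPath = "scripts/db-schema.sql"; // placeholder path
fs.writeFileSync(schemaPath, scrubAutoIncrement(fs.readFileSync(schemaPath, "utf8")));
```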
This wasn't actually super helpful to read anyway, and I think it was causing us to hit rate limits.
We can maybe add back a limited version that adds path context for _where_ a span happened in the GQL tree, but that's typically been pretty intuitive so far.
Boom, now we can also run a clean MySQL test db on each test that wants it :)
the test I wrote as a sample is currently marked `it.skip` because it's not passing yet!
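Here's a sketch of the shape, with totally hypothetical helper names, including the kind of skipped sample test I mean:
```
// Hypothetical helpers: `useTestDb` registers beforeEach/afterEach hooks that
// build a clean MySQL database from the schema file for each test that asks.
const { useTestDb } = require("./setup-mysql-test-db");

describe("pet_states labeling", () => {
  const getDb = useTestDb();

  // Skipped for now, like the sample test above: not passing yet!
  it.skip("marks labeled appearances as labeled", async () => {
    const db = getDb();
    await db.execute(
      `INSERT INTO pet_states (id, mood_id, labeled) VALUES (1, 2, 1)`
    );
    const [rows] = await db.execute(`SELECT labeled FROM pet_states WHERE id = 1`);
    expect(rows[0].labeled).toBe(1);
  });
});
```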
This updates the MySQL procedure to get the important special colors, but keeps the GQL behavior the same by still filtering down to just Blue. It's an incremental step before changing the behavior, to make sure I've gotten it right so far!
Snapshots significantly updated, but, from scanning it, I think that's expected changes from actual modeling progress. Hooray!
I'm using my first ever MySQL Stored Procedure for clever cleverness in caching the modeling query!
I realized that checking for the latest contribution timestamp is a pretty reliable way of deciding when modeling data was last updated at all. If that timestamp hasn't changed, we can reuse the results!
I figured that, because query roundtrips are a bottleneck in this environment, I didn't want to make that query separately. So, I built a MySQL procedure to do the check on the database side!
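Here's a sketch of the caller side; the procedure name and result shape are assumptions, but the idea is that the db does the timestamp check and only returns the expensive rows when something changed:
```
// Module-level cache: results plus the contribution timestamp they were built at.
let cachedRows = null;
let cachedTimestamp = null;

async function loadModelingData(db) {
  // One roundtrip: the procedure compares MAX(contribution timestamp) to ours,
  // and only includes the modeling result set if the data actually changed.
  const [resultSets] = await db.query(`CALL GetModelingData(?)`, [cachedTimestamp]);

  const latestTimestamp = resultSets[0][0].latest_timestamp;
  const freshRows = resultSets.length > 2 ? resultSets[1] : null;

  if (freshRows != null) {
    cachedRows = freshRows;
    cachedTimestamp = latestTimestamp;
  }

  return cachedRows;
}
```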
Okay, we handle the new pages correctly! Still some weird bugs when you send requests close together? Probably wise to migrate to Apollo's new way of doing this.
Been bothered by this for a while!
My hope is that this isn't a notable performance hit: we were already walking the table and doing string ops anyway, so I can't imagine this adds much on the margin, when the main bottleneck was probably reads. And the perf should be identical for simple single-word queries anyway. But we'll see how it feels!
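To make the "single-word queries" note concrete, here's a hypothetical sketch of the kind of per-word matching I mean (names and shape are mine, not the real implementation):
```
// Every word in the search query has to appear somewhere in the item name.
// A single-word query degenerates to one substring check, which is why the
// perf there should be unchanged.
function itemNameMatches(itemName, searchQuery) {
  const name = itemName.toLowerCase();
  const words = searchQuery.toLowerCase().trim().split(/\s+/);
  return words.every((word) => name.includes(word));
}
```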
Previously, if you switched species/color such that one of your items was no longer compatible, we _would_ still apply its zone restrictions to the visible layer set.
In this change, we fix that server-side, since I think it makes the most sense for an empty appearance to be truly empty!
This is in preparation for hiding bio zone restrictions but showing item zone restrictions!
I also refactor the build-cached-data script substantially, to run GraphQL against the server instead of a custom query.
okay so the PetAppearance restrictions are stored on the asset, because that's how they're defined on Neopets.com too
but I think that's a confusing API, so here I define `PetAppearance.restrictedZones`, which just maps over the layers and aggregates the zones server-side, same as we would have done on the client
I think that's much easier to understand than having the layer carry the field but needing to know that item restrictions _don't_ work that way, you know?
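A sketch of the aggregation, with assumed loader and field names:
```
// The restrictions still live on each asset/layer; the new PetAppearance field
// just collects them server-side, same as the client used to.
const resolvers = {
  PetAppearance: {
    restrictedZones: async ({ id }, _args, { petSwfAssetLoader }) => {
      const layers = await petSwfAssetLoader.load(id); // hypothetical loader
      const zoneIds = new Set(
        layers.flatMap((layer) => layer.restrictedZoneIds) // hypothetical field
      );
      return [...zoneIds].map((zoneId) => ({ id: zoneId }));
    },
  },
};
```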
Previously, when changing a pet's color, we would refresh the items panel and send a new network request for the item appearances, even though they're all the same. This is because item appearance data is queried by species/color, for ease of specification.
But! Item appearances are _cached_ by body ID. So, if this is a standard color, it's not hard to look in the cache for the standard color's body ID!
Now, most color changes are faster and don't flicker the item panel anymore. We do still refresh the panel and send the requests for color changes that _do_ matter though, like standard <-> mutant!
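The check is roughly this shape (all names here are mine):
```
// Before refetching item appearances on a species/color change, see whether
// the new pair maps to the same body ID we already have. Standard colors all
// share one body per species, so most changes can reuse the cache.
function canReuseItemAppearances({ oldBodyId, newSpeciesId, newColorId, getBodyId }) {
  const newBodyId = getBodyId(newSpeciesId, newColorId); // e.g. a cache lookup
  if (newBodyId == null) {
    return false; // unknown body (e.g. a color we haven't cached): refetch
  }
  return newBodyId === oldBodyId;
}
```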
ahh, in a recent change I made glitched states valid for canonical poses, but didn't make the corresponding change here! This meant that, I think, the PosePicker showed them, but other ways of getting to them didn't work; notably, the Candy Acara (whose appearances are 100% marked glitched) was no longer pickable at all.
I noticed in prod that the Vercel edge cache can show old data in the Support tool right after you edit it and reload the page, which is super confusing!
In this change, we stop caching the endpoint we use for Support tools, so that the Support tools always feel real-time and trustworthy. (The standard pose picker might still be cached, so it could be a bit confusing for that to be out of sync, but at least you can toggle into Support mode and see that your changes happened _there_, so you don't panic that they're _gone_.)
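The mechanism is just cache headers; here's a minimal sketch, assuming the Support data goes through its own API route (handler and loader names are mine):
```
// `no-store` keeps the Vercel edge cache from ever serving stale Support data.
export default async function handler(req, res) {
  res.setHeader("Cache-Control", "no-store");
  const data = await loadSupportData(req.query); // hypothetical loader
  res.json(data);
}
```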
Previously, I was filtering out glitched appearances from the canonical ones.
But now, I'm thinking it's better to serve glitched ones than no data for a pose at all.
I'm inspired by the case of the Candy Acara, which has _only_ glitched appearances in our db, and I'd like to mark them for reference—but then the site would treat it as no data at all.
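The fallback is roughly this (field names assumed, and the tiebreaker is just for illustration):
```
// Prefer an un-glitched appearance for the pose, but fall back to a glitched
// one rather than returning nothing, so e.g. the Candy Acara still renders.
function pickCanonicalAppearance(appearancesForPose) {
  const unGlitched = appearancesForPose.filter((a) => !a.glitched);
  const candidates = unGlitched.length > 0 ? unGlitched : appearancesForPose;
  const sorted = [...candidates].sort((a, b) => Number(a.id) - Number(b.id));
  return sorted[0] ?? null;
}
```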
Still just read-only stuff, but now you can look at all the different poses we have for a species/color!
Soon I'll make the pose/glitched stuff editable :3
Some sizable refactors here to add the ability to specify appearance ID as well as pose… most of the app still doesn't use it; it's mostly just a lil extra logic to make it win if it's available!
(The rationale for making it an override, rather than always tracking appearance ID, is that it gets really inconvenient in practice to _wait_ on looking up the appearance ID in order to start loading various queries. Species/color/pose is a more intuitive key, and works better and faster when the canonical appearance is what you want!)
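The "win if it's available" logic is basically this (names are mine):
```
// Species/color/pose stays the primary key; appearanceId simply overrides it
// when present, without anything needing to wait on looking it up first.
function buildPetAppearanceQuery({ speciesId, colorId, pose, appearanceId }) {
  if (appearanceId != null) {
    return { queryName: "petAppearanceById", variables: { appearanceId } };
  }
  return { queryName: "petAppearance", variables: { speciesId, colorId, pose } };
}
```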
This is in support of a caching fix for a hack tool coming next! Without this, the hack tool's change to ItemAppearance restricted zones would make other ItemAppearance fields go missing (because the tool didn't also specify them), so the query would re-execute over the network to find the missing fields we'd overwritten with nothingness, which would undo the local hack change.
Previously, we were using a custom-y `id` field to help Apollo cross-reference `petAppearance` queries with the results from bulk `petAppearances` queries. Now, instead, we deprecate `petStateId`, and start using `id` to have the same stable value!
This is in anticipation of pet appearance support tools: a stable ID will make it easier to edit them, esp changing their pose (which would otherwise have changed the ID!)
Huh, some 8-bit species are broken and use the standard body ID!
This was causing our body name query to prioritize 8-bit for standard assets, since it's the alphabetically-first compatible color; but 8-bit isn't marked as a standard color, so the function kept the body labeled "8-bit".
This should fix it and show "Standard Draik" when deleting an asset off the standard draik body!
In practice I saw that this doesn't actually tell you what you _really_ want to know about where the change happened! You want to know it was broken on the Acara or w/e.
In this change, we cache the zones table as part of the JS build process. This keeps the database as our source of truth, while aggressively caching the data at deploy time.
See the new README for some rationale!
I tested this by pulling up dev Honeycomb, and observing that we no longer run db queries to `zones` in the new traces for the wardrobe page. (It's a good thing we did it this way, because I noticed some code in the server that was still loading the zone anyway, and fixed it here!)
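For reference, the build step is roughly this shape (paths and helper names are assumptions):
```
// Dump the zones table to a JSON file at build time, so the server can import
// it instead of querying the db per request, while the db stays the source of
// truth.
const fs = require("fs");
const connectToDb = require("../src/server/db"); // hypothetical module

async function buildCachedZones() {
  const db = await connectToDb();
  const [rows] = await db.query(`SELECT * FROM zones ORDER BY id`);
  fs.writeFileSync(
    "src/server/cached-data/zones.json", // hypothetical path
    JSON.stringify(rows, null, 2)
  );
  await db.end();
}

buildCachedZones().catch((error) => {
  console.error(error);
  process.exit(1);
});
```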
This reverts commit 0f7ab9d10e.
The Production Vercel deploys don't seem to like how I did this build trick, even though the Preview deploys seem fine with it 🤔 Reverting for now, sent a message to Vercel support.
Here's just some simple caching: we try to load the asset manifest from the db with the rest of the asset. If it's not present, we load it via HTTP, and write it to the database.
I might try to do a bulk write of manifests at some point, too.
This is because I noticed that one of the main bottlenecks in most of the endpoints now (and definitely the highest-variance one) was loading from images.neopets.com.
Another approach I considered was using HTTP/2 to load the manifests, because it kinda looks like the server is refusing to open all these sockets at once and effectively does the requests in waves? But images.neopets.com doesn't support HTTP/2 right now anyway, so oh well! (And that would probably still have left us at ~250ms of HTTP time, down from ~600–700ms. Also, why is network out of Vercel so slow? :p)
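For reference, the read-through cache described above is roughly this shape (the table and column names are assumptions, as is the URL helper):
```
const fetch = require("node-fetch");

// Use the manifest stored with the asset row if we have it; otherwise fetch it
// from images.neopets.com once, and write it back for next time.
async function loadManifest(db, asset) {
  if (asset.manifest != null) {
    return JSON.parse(asset.manifest);
  }

  const res = await fetch(manifestUrlForAsset(asset)); // hypothetical helper
  const manifestText = await res.text();

  await db.execute(`UPDATE swf_assets SET manifest = ? WHERE id = ?`, [
    manifestText,
    asset.id,
  ]);

  return JSON.parse(manifestText);
}
```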
I noticed that, while looking up zone data from the db is near instant when you're on the same box, it's like 300ms here!
In this change, we start downloading zone data into the build process. That way, we can have a very fast and practically-up-to-date cache (I'm not sure I've changed it in many years), while being confident that it's in sync with the database source of truth (for things like join queries).
Oops, of course, we weren't actually taking proper advantage of the dataloader here! The queries got over-complicated, but more importantly, subsequent requests to the same loader would re-submit the query!
I noticed it in the SearchPanel operation, in this Honeycomb trace:
https://ui.honeycomb.io/openneo/datasets/dress-to-impress--2020-/trace/aMuhsTjQFZY
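The DataLoader-shaped version looks roughly like this (the zones table here is just for illustration):
```
// Batch all the IDs requested in one tick into a single IN query, and let the
// per-request cache dedupe repeat loads instead of re-submitting the query.
const DataLoader = require("dataloader");

const buildZoneLoader = (db) =>
  new DataLoader(async (ids) => {
    const placeholders = ids.map(() => "?").join(", ");
    const [rows] = await db.query(
      `SELECT * FROM zones WHERE id IN (${placeholders})`,
      ids
    );
    const rowsById = new Map(rows.map((row) => [String(row.id), row]));
    // DataLoader requires results in the same order as the requested keys.
    return ids.map((id) => rowsById.get(String(id)) ?? null);
  });
```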
We got bit by the "can't run anything after the response finishes" thing
so I'm just forcing the response to wait for Honeycomb submit to finish
I hope this isn't just awful for perf lol, but the PUTs to Honeycomb seem fast?
I skipped this in past runs because I had a hard time getting consistent results… but they seem to be behaving now?
It really seemed like there were some races on certain query orders… maybe there still is, but my more-reliable connection today is making them resolve in a more consistent order?
Anyway if I see goofs again, I'll consider adding a snapshot matcher that isn't picky about query order 🤔
oof, got "too many connections" from mysql, this is probably gonna be a scaling issue in time… for now, stop requesting a pool of 5, even on dev lolol, and just go with a single connection per instance
Note that there's a bug when switching back to the null case… when I look in the Apollo dev tools, it's definitely getting set in the cache correctly at the right time… but the query isn't updating for some reason? I'm hoping it's an Apollo bug that will fix itself someday with an upgrade!