Ahh, in a recent change I made glitched states valid for canonical poses, but didn't make the corresponding change here! I think this meant that the PosePicker showed them, but other ways of getting to them didn't work: notably, the Candy Acara (who is 100% marked glitched) was no longer pickable at all.
Oops, opening the dropdown would auto-focus the left arrow button, because it's now the first focusable element! Make it explicit that we want to focus the dropdown instead.
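Roughly what the fix looks like, assuming Chakra UI's Popover and a simplified component structure (the real markup differs): point `initialFocusRef` at the dropdown, so opening the picker focuses it rather than whatever happens to be first in tab order.

```js
import React from "react";
import {
  Popover,
  PopoverTrigger,
  PopoverContent,
  Button,
  Select,
} from "@chakra-ui/react";

function PosePickerSketch() {
  const dropdownRef = React.useRef(null);

  return (
    <Popover initialFocusRef={dropdownRef}>
      <PopoverTrigger>
        <Button>Pose</Button>
      </PopoverTrigger>
      <PopoverContent>
        {/* ...left/right arrow buttons, etc... */}
        <Select ref={dropdownRef}>{/* pose options */}</Select>
      </PopoverContent>
    </Popover>
  );
}
```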
I noticed in prod that the Vercel edge cache can show old data in the Support tool right after you edit it and reload the page, which is super confusing!
In this change, we stop caching the endpoint we use for Support tools, so that the Support tools always feel real-time and trustworthy. (The standard pose picker might still be cached, so it could be a bit confusing for that to be out of sync, but at least you can toggle into Support mode and see that your changes happened _there_, so you don't panic that they're _gone_.)
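As a sketch of the approach (assuming a Vercel-style serverless handler; the real endpoint and handler shape may differ), the key move is just sending a `Cache-Control` header that tells the edge not to cache:

```js
export default async function handle(req, res) {
  // Don't let the Vercel edge cache store this response: Support tools
  // should always reflect the latest edits immediately.
  res.setHeader("Cache-Control", "no-store");

  const data = await loadSupportData(req.query); // hypothetical loader
  return res.json(data);
}
```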
Previously, I was filtering out glitched appearances from the canonical ones.
But now, I'm thinking it's better to serve glitched ones than no data for a pose at all.
I'm inspired by the case of the Candy Acara, which has _only_ glitched appearances in our db. I'd like to keep them marked as glitched for reference, but then the site would treat it as having no data at all.
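Here's a minimal sketch of the new fallback behavior (function and field names are illustrative, not the real resolver):

```js
function getCanonicalAppearance(appearancesForPose) {
  const unglitched = appearancesForPose.filter((a) => !a.isGlitched);
  if (unglitched.length > 0) {
    return unglitched[0];
  }

  // e.g. the Candy Acara: every appearance is marked glitched, so serving
  // a glitched one beats serving no data at all.
  return appearancesForPose[0] ?? null;
}
```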
Easier to move between appearances quickly! I'm adding this in anticipation of a use case of rapidly labeling Unknown appearances—I want it to be easy to label one and go to the next!
For most users, I want to hide the pose picker if there's not actually anything to pick from.
But I want Support users to be able to open it and use Support mode, to inspect and label Unknown poses!
In this change, I add new UI: a question-mark pose picker button, and a note explaining the empty picker; both are only shown for Support users.
I also made a new `useSupport` hook, to replace `useSupportSecret`, now that I have a use case for "is support user?" that doesn't require the secret. Will migrate the rest!
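A minimal sketch of what `useSupport` could look like (the real hook's internals and import paths may differ), assuming the Support secret is still what gates Support access:

```js
import useSupportSecret from "./useSupportSecret"; // hypothetical path

export default function useSupport() {
  const supportSecret = useSupportSecret();
  return {
    // "Is this a Support user?" without requiring callers to handle the
    // secret itself.
    isSupportUser: supportSecret != null,
    supportSecret,
  };
}
```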
Previously, toggling between the two PosePicker modes could dramatically disrupt the layout, because Popover didn't know to re-measure itself.
In this change, we add a hacky workaround to simulate a window resize event in moments when we know the content is changing. (The realistic ideal solution would still have these manual triggers, but would use an official API in Chakra to notify the Popper instance directly.)
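The workaround itself is tiny; roughly (names illustrative):

```js
// Called in the moments we know the Popover's content is about to change
// size, e.g. when toggling PosePicker modes. Simulating a window resize
// nudges Popper into re-measuring and re-positioning.
function notifyPopoverContentChanged() {
  window.dispatchEvent(new Event("resize"));
}
```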
Still just read-only stuff, but now you can look at all the different poses we have for a species/color!
Soon I'll make the pose/glitched stuff editable :3
Some sizable refactors here to add the ability to specify appearance ID as well as pose… most of the app still doesn't use it, it's mostly just lil extra logic to make it win if it's available!
(The rationale for making it an override, rather than always tracking appearance ID, is that it gets really inconvenient in practice to _wait_ on looking up the appearance ID in order to start loading various queries. Species/color/pose is a more intuitive key, and works better and faster when the canonical appearance is what you want!)
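A sketch of the "appearance ID wins if it's available" logic (names illustrative, not the real outfit state code):

```js
function getPetAppearanceQueryVariables(outfitState) {
  const { speciesId, colorId, pose, appearanceId } = outfitState;

  if (appearanceId != null) {
    // Explicit override: load this exact appearance.
    return { appearanceId };
  }

  // Default: the canonical appearance for this species/color/pose, which
  // we can start loading immediately, without waiting on any ID lookup.
  return { speciesId, colorId, pose };
}
```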
similar to the layer zoning tools I just rolled out!
not thrilled about the outfit state hacks here, bc of how we cache the zone restrictions on the appearance rather than the item, but oh well! this escape hatch is pretty easy and solid, and it's a cleanup for another day
Also did a code split here, now that this file is getting larger, to only load this for support users. I don't actually care about restricting console stuff to support users (I'd honestly rather not), but saving the bytes is worth it I think, since support mode is pretty easy to enter when we need to
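The code split is the usual `React.lazy` pattern; roughly (component names are illustrative):

```js
import React from "react";

// Only fetched when we actually render it, i.e. for Support users.
const WardrobeDevHacks = React.lazy(() => import("./WardrobeDevHacks"));

function WardrobePageSketch({ isSupportUser }) {
  return (
    <>
      {/* ...the usual wardrobe UI... */}
      {isSupportUser && (
        <React.Suspense fallback={null}>
          <WardrobeDevHacks />
        </React.Suspense>
      )}
    </>
  );
}
```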
This is to fix a caching issue for a hack tool coming next! Without this, the change to ItemAppearance restricted zones would make other ItemAppearance fields go missing (bc our hack tool didn't also specify them), so the query would re-execute over the network to find the missing fields we'd overwritten with nothingness, which would undo the local hack change.
A dev tool to preview what an item would look like in a different zone!
To use, open the browser console, and type `DTIHackLayerZone(layerId, zoneIdOrName)`, but replace `layerId` with the layer's DTI ID, and `zoneIdOrName` with either the zone ID or its name in quotes.
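For example (these IDs are made up, just to show the shape of the call):

```js
// Preview layer 12345 as if it were in the "Background" zone, by name:
DTIHackLayerZone(12345, "Background");

// Or by zone ID:
DTIHackLayerZone(12345, 3);
```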
Previously, we were using a custom-y `id` field to help Apollo cross-reference `petAppearance` queries with the results from bulk `petAppearances` queries. Now, instead, we deprecate `petStateId`, and start using `id` to have the same stable value!
This is in anticipation of pet appearance support tools: a stable ID will make it easier to edit them, esp changing their pose (which would otherwise have changed the ID!)
Previously, we would load all `petAppearances` in `PosePicker`, and use cache keys to instantly find it again as a single `petAppearance` in `OutfitPreview` after switching poses.
In this change, we instead have `PosePicker` explicitly load all 6 poses as separate `petAppearance` queries. This simplifies cache sharing between the two components' queries: Apollo can do it automatically, because they were queried the same way in the first place.
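Roughly the shape of it (field and argument names are approximate, not the real schema):

```js
import { gql, useQuery } from "@apollo/client";

const PET_APPEARANCE_FOR_POSE = gql`
  query PetAppearanceForPose($speciesId: ID!, $colorId: ID!, $pose: Pose!) {
    petAppearance(speciesId: $speciesId, colorId: $colorId, pose: $pose) {
      id
      # ...the same fields OutfitPreview requests...
    }
  }
`;

function usePoseAppearance(speciesId, colorId, pose) {
  return useQuery(PET_APPEARANCE_FOR_POSE, {
    variables: { speciesId, colorId, pose },
  });
}

// PosePicker calls this once per pose (six fixed calls, so the rules of
// hooks are happy); OutfitPreview issues the same query for the selected
// pose, and gets a cache hit instead of a network round-trip.
```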
I'm doing this in preparation for changing the `id` field of `PetAppearance`, to become `petStateId`. This will help me build pet appearance support tools, by giving the appearances stable identifiers that won't be affected by editing which pose an appearance is for!
Huh, some 8-bit species are broken and use the standard body ID!
This was causing our body name query to prioritize 8-bit for standard assets, because it's the alphabetically-first compatible color; but 8-bit isn't marked standard, so the function kept the body labeled as 8-bit.
This should fix it and show "Standard Draik" when deleting an asset off the standard Draik body!
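One way the fix could look (names illustrative; the real query logic may differ): prefer colors marked standard when choosing which color to name the body after, instead of taking the alphabetically-first one unconditionally.

```js
function getBodyNameColor(compatibleColors) {
  const sorted = [...compatibleColors].sort((a, b) =>
    a.name.localeCompare(b.name)
  );
  // Prefer a standard color ("Standard Draik"), and only fall back to the
  // alphabetically-first compatible color (e.g. 8-bit) if none is standard.
  return sorted.find((color) => color.isStandard) ?? sorted[0];
}
```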
In practice I saw that this doesn't actually tell you what you _really_ want to know about where the change happened! You want to know it was broken on the Acara or w/e.
Dice reported this, thank you!
My mistake here was that `loadImage` _does_ reject when the image fails to load… but it ends up rejecting with `undefined`, since I forgot to pass the error along from `onerror` to `reject`!
So we would cancel stuff, but then store `undefined` as our error in state, which our component interprets as no-error.
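The fix, simplified (the real `loadImage` has more going on, like cancellation):

```js
function loadImage(src) {
  return new Promise((resolve, reject) => {
    const image = new Image();
    image.onload = () => resolve(image);
    // Previously this was effectively `image.onerror = () => reject()`, so
    // the component saw `undefined` in state and treated it as "no error".
    image.onerror = (e) => reject(e);
    image.src = src;
  });
}
```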
I tested this by using Firefox DevTools request blocking!
On the homepage, I want to keep the ability to enter invalid species/color pairs, so that you can say "Alien, Aisha" instead of having to pick the Aisha first.
But on the wardrobe page, we were rejecting invalid state changes anyway, so I decided to remove invalid color options from the list. I also added the ability to still switch to any species, automatically resetting to a basic color if needed to match.
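A sketch of the species-switch behavior (names and the exact notion of "basic" are illustrative):

```js
const BASIC_COLOR_NAMES = ["Blue", "Green", "Red", "Yellow"];

function getColorForNewSpecies(currentColor, validColorsForSpecies) {
  const stillValid = validColorsForSpecies.some(
    (c) => c.id === currentColor.id
  );
  if (stillValid) {
    return currentColor;
  }

  // The current color isn't valid for this species, so automatically
  // reset to a basic color to match.
  return (
    validColorsForSpecies.find((c) => BASIC_COLOR_NAMES.includes(c.name)) ??
    validColorsForSpecies[0]
  );
}
```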
Oops, our loading state logic was eating the error case! I'm not sure exactly where the gap was happening, but I've rewritten the states to be a bit more foolproof, since I think that first condition was confusing.
Ahaha I fucked up a bit! I was indexing into the array of cached zones, instead of looking up by ID. This meant that all zone names were wrong, and some search results weren't loading bc there was no zone data!
I made a fix here, and also added some fallback values, so that if there's an issue in the future we can at least fall back more gracefully than the infinite-spinner case we had here.
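In code terms, the bug and fix look roughly like this (data shapes illustrative):

```js
// Before: array position isn't the same thing as zone ID, so names were
// wrong, and some zones came back undefined entirely.
const zoneBuggy = cachedZones[zoneId];

// After: look up by ID, with a fallback so a future data gap degrades
// gracefully instead of leaving an infinite spinner.
const zone = cachedZones.find((z) => String(z.id) === String(zoneId)) ?? {
  id: zoneId,
  label: `Unknown zone ${zoneId}`,
};
```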
In this change, we cache the zones table as part of the JS build process. This keeps the database as our source of truth, while aggressively caching the data at deploy time.
See the new README for some rationale!
I tested this by pulling up dev Honeycomb, and observing that we no longer run db queries to `zones` in the new traces for the wardrobe page. (It's a good thing we did it this way, because I noticed some code in the server that was still loading the zone anyway, and fixed it here!)
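As a sketch of the build step (file paths, schema details, and the script shape are approximate): dump the `zones` table to a JSON file at build time, so the server reads zone data with no database round-trip, while the database stays the source of truth.

```js
const fs = require("fs");
const mysql = require("mysql2/promise");

async function cacheZonesAtBuildTime() {
  const db = await mysql.createConnection(process.env.DB_URL);
  try {
    const [zones] = await db.query("SELECT * FROM zones");
    fs.writeFileSync(
      "src/server/cached-data/zones.json",
      JSON.stringify(zones, null, 2)
    );
  } finally {
    await db.end();
  }
}

cacheZonesAtBuildTime().catch((e) => {
  console.error(e);
  process.exit(1);
});
```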
I was finding the script too slow running on my local machine, because the SQL round-trip times were slow, and with one connection they were essentially a serial bottleneck, not taking much advantage of our concurrency.
Here, I instead add a `--dump` option, which outputs SQL to stdout. I then uploaded the resulting SQL to the DTI box, and ran it up there. Doing the network part fast on my machine, and the SQL part fast on the cloud machine!
I first considered uploading this script to the cloud machine, but it's an old Ubuntu and I couldn't figure out how to install a recent NodeJS onto it 🙃
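Roughly how the `--dump` branch works (script structure, table, and column names are approximate): emit escaped SQL to stdout instead of executing it, so something like `node the-script.js --dump > manifests.sql` (hypothetical names) produces a file you can run with `mysql` directly on the server.

```js
const { format } = require("mysql2");

const shouldDump = process.argv.includes("--dump");

async function saveManifest(db, assetId, manifestJson) {
  const sql = "UPDATE swf_assets SET manifest = ? WHERE id = ?;";
  if (shouldDump) {
    // Print runnable, escaped SQL rather than running it over the slow
    // remote connection.
    console.log(format(sql, [manifestJson, assetId]));
  } else {
    await db.execute(sql, [manifestJson, assetId]);
  }
}
```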
This reverts commit 0f7ab9d10e.
The Production Vercel deploys don't seem to like how I did this build trick, even though the Preview deploys seem fine with it 🤔 Reverting for now, sent a message to Vercel support.
Just gonna bulk load all those manifests into the db, and then that should make most loads notably faster by removing the net request! 🤞
We'll still load manifests inline sometimes, but only the first time anyone pulls up the layer in impress-2020. After that, it should be cached forever!
Here's just some simple caching: we try to load the asset manifest from the db with the rest of the asset. If it's not present, we load it via HTTP, and write it to the database.
I might try to do a bulk write of manifests at some point, too.
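The caching is a simple read-through pattern; roughly (function, table, and column names are approximate):

```js
const fetch = require("node-fetch"); // or global fetch on newer Node

async function loadManifest(db, asset) {
  // Fast path: we've already stored the manifest alongside the asset.
  if (asset.manifest != null) {
    return JSON.parse(asset.manifest);
  }

  // First time anyone's pulled up this layer here: fetch it over HTTP...
  const res = await fetch(manifestUrlForAsset(asset)); // hypothetical helper
  const manifestJson = await res.text();

  // ...then write it back to the db, so we never pay this cost again.
  await db.execute("UPDATE swf_assets SET manifest = ? WHERE id = ?", [
    manifestJson,
    asset.id,
  ]);

  return JSON.parse(manifestJson);
}
```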
This is because I noticed that one of the main bottlenecks in most of the endpoints now (and definitely the highest-variance) was loading from images.neopets.com.
Another approach I considered was using HTTP/2 to load the manifests, because it kinda looks like the server is refusing to open all these sockets at once and effectively does the requests in waves? But images.neopets.com doesn't support HTTP/2 right now anyway, so oh well! (And that would have probably cut us down to ~250ms of HTTP time still, instead of ~600–700. Also, why is network out of Vercel so slow? :p)
I noticed that, while looking up zone data from the db is near instant when you're on the same box, it's like 300ms here!
In this change, we start downloading zone data into the build process. That way, we can have a very fast and practically-up-to-date cache (I'm not sure I've changed it in many years), while being confident that it's in sync with the database source of truth (for things like join queries).
Oops, of course, we weren't actually taking proper advantage of the dataloader here! The queries got over-complicated, but more importantly, subsequent requests to the same loader would re-submit the query!
I noticed it in the SearchPanel operation, in this Honeycomb trace:
https://ui.honeycomb.io/openneo/datasets/dress-to-impress--2020-/trace/aMuhsTjQFZY
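For reference, the corrected pattern looks roughly like this (the real loader batches a different query; this is just the shape): one batch function keyed by ID, and DataLoader de-duplicates and caches `.load()` calls within the request instead of us re-submitting the query.

```js
const DataLoader = require("dataloader");

const buildItemLoader = (db) =>
  new DataLoader(async (ids) => {
    const placeholders = ids.map(() => "?").join(", ");
    const [rows] = await db.query(
      `SELECT * FROM items WHERE id IN (${placeholders})`,
      ids
    );

    // DataLoader expects results in the same order as the requested IDs.
    const rowsById = new Map(rows.map((row) => [String(row.id), row]));
    return ids.map((id) => rowsById.get(String(id)) ?? null);
  });
```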