In this change, we cache the zones table as part of the JS build process. This keeps the database as our source of truth, while aggressively caching the data at deploy time.
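Roughly, the build step can look like this (a sketch only: the script path, output location, and `DATABASE_URL` env var are assumptions, not necessarily what we actually use):

```js
// scripts/build-cached-data.js (hypothetical path): dump the zones table
// to a JSON file at build time, so the app can import it instead of
// querying the database on every request.
const fs = require("fs").promises;
const mysql = require("mysql2/promise");

async function main() {
  const db = await mysql.createConnection(process.env.DATABASE_URL);
  const [zones] = await db.query(`SELECT * FROM zones ORDER BY id`);
  await fs.writeFile(
    "src/server/cached-data/zones.json",
    JSON.stringify(zones, null, 2)
  );
  await db.end();
  console.info(`Cached ${zones.length} zones`);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```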
See the new README for some rationale!
I tested this by pulling up dev Honeycomb and observing that we no longer run db queries to `zones` in the new traces for the wardrobe page. (It's a good thing we did it this way, because I noticed some code in the server that was still loading the zone anyway, and fixed it here!)
This reverts commit 0f7ab9d10e.
The Production Vercel deploys don't seem to like how I did this build trick, even though the Preview deploys seem fine with it 🤔 Reverting for now; I've sent a message to Vercel support.
Here's just some simple caching: we try to load the asset manifest from the db along with the rest of the asset. If it's not present, we load it via HTTP and write it to the database.
I might try to do a bulk write of manifests at some point, too.
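The flow looks roughly like this (a hedged sketch: the `swf_assets` column name, the `manifestUrl` field, and the function shape are my illustration, not the exact code):

```js
const fetch = require("node-fetch"); // or a built-in fetch, where available

async function loadManifest(db, asset) {
  // If we already cached this manifest in the db, use it directly.
  if (asset.manifest != null) {
    return JSON.parse(asset.manifest);
  }

  // Otherwise, load it over HTTP from images.neopets.com...
  const res = await fetch(asset.manifestUrl);
  if (!res.ok) {
    throw new Error(`manifest request failed: ${res.status}`);
  }
  const manifest = await res.json();

  // ...then write it back, so the next request can skip the HTTP hop.
  // (A bulk write would batch these UPDATEs instead of running them
  // one at a time.)
  await db.execute(`UPDATE swf_assets SET manifest = ? WHERE id = ?`, [
    JSON.stringify(manifest),
    asset.id,
  ]);

  return manifest;
}
```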
I made this change because I noticed that one of the main bottlenecks in most of the endpoints right now (and definitely the highest-variance one) is loading from images.neopets.com.
Another approach I considered was using HTTP/2 to load the manifests, because it kinda looks like the server refuses to open all these sockets at once and effectively does the requests in waves? But images.neopets.com doesn't support HTTP/2 right now anyway, so oh well! (And that would probably still have left us with ~250ms of HTTP time, instead of ~600–700ms. Also, why is network out of Vercel so slow? :p)
I noticed that, while looking up zone data from the db is near-instant when you're on the same box, it's like 300ms here!
In this change, we start downloading zone data into the build process. That way, we get a very fast and practically-up-to-date cache (I don't think the zone data has changed in years), while staying confident that it's in sync with the database source of truth (for things like join queries).
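On the server side, reading from that cache can be as simple as this (a sketch; the file path and lookup helper are illustrative):

```js
// Load the zones the build step wrote, once, at module load time.
const cachedZones = require("./cached-data/zones.json");

const zonesById = new Map(cachedZones.map((zone) => [zone.id, zone]));

function getZone(id) {
  const zone = zonesById.get(id);
  if (zone == null) {
    throw new Error(`unknown zone id: ${id}`);
  }
  return zone;
}
```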
Oops, of course, we weren't actually taking proper advantage of the dataloader here! The queries got over-complicated, but more importantly, subsequent requests to the same loader would re-submit the query!
I noticed it in the SearchPanel operation, in this Honeycomb trace:
https://ui.honeycomb.io/openneo/datasets/dress-to-impress--2020-/trace/aMuhsTjQFZY
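For reference, the pattern we want from dataloader looks something like this (a sketch, not the exact loader from this codebase): the batch function receives all the ids requested in a tick and issues a single query, and the loader memoizes, so later `.load(id)` calls on the same instance reuse the result instead of re-submitting.

```js
const DataLoader = require("dataloader");

const buildItemLoader = (db) =>
  new DataLoader(async (ids) => {
    // One query for the whole batch, instead of one per id.
    const placeholders = ids.map(() => "?").join(", ");
    const [rows] = await db.query(
      `SELECT * FROM items WHERE id IN (${placeholders})`,
      ids
    );

    // DataLoader requires results in the same order as the requested keys.
    const rowsById = new Map(rows.map((row) => [row.id, row]));
    return ids.map((id) => rowsById.get(id) || null);
  });
```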
We got bit by the "can't run anything after the response finishes" constraint, so I'm just forcing the response to wait for the Honeycomb submit to finish. I hope this isn't, like, just awful for perf lol, but puts to Honeycomb seem fast?
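The shape of the workaround is roughly this (a sketch: the handler and the `runGraphQLRequest` / `flushHoneycombEvents` helpers are illustrative stand-ins for our actual app code):

```js
const handler = async (req, res) => {
  const result = await runGraphQLRequest(req); // stand-in for our app logic

  // In serverless environments, anything scheduled after the response is
  // sent may never run, so flush telemetry *before* responding. This adds
  // the flush time to our latency, but submits to Honeycomb seem fast.
  await flushHoneycombEvents(); // stand-in for the actual submit call

  res.status(200).json(result);
};
```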
I skipped this in past runs because I had a hard time getting consistent results… but they seem to be behaving now?
It really seemed like there were some races in certain query orders… maybe there still are, but my more-reliable connection today is making them resolve in a more consistent order?
Anyway, if I see goofs again, I'll consider adding a snapshot matcher that isn't picky about query order 🤔
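If I do, it might look something like this (purely a sketch of something that doesn't exist yet; `runQuery` and `capturedDbQueries` are hypothetical test helpers):

```js
// Sort the captured queries before snapshotting, so a nondeterministic
// execution order can't flake the snapshot.
const sortQueries = (queries) =>
  [...queries].sort((a, b) =>
    JSON.stringify(a).localeCompare(JSON.stringify(b))
  );

it("loads the wardrobe page", async () => {
  await runQuery(WARDROBE_PAGE_QUERY); // hypothetical test helper
  expect(sortQueries(capturedDbQueries())).toMatchSnapshot();
});
```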
oof, got "too many connections" from mysql, this is probably gonna be a scaling issue in time… for now, stop requesting a pool of 5, even on dev lolol, and just go with a single connection per instance
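Concretely, the change is roughly from a pool to a single lazily-opened connection (a sketch; the config names are illustrative):

```js
const mysql = require("mysql2/promise");

// Before (sketch): a pool of up to 5 connections per instance.
// const pool = mysql.createPool({ ...dbConfig, connectionLimit: 5 });

// After: one connection per instance, opened lazily and reused.
let dbPromise = null;
function getDb() {
  if (dbPromise == null) {
    dbPromise = mysql.createConnection(process.env.DATABASE_URL);
  }
  return dbPromise;
}
```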
Note that there's a bug when switching back to the null case… when I look in the Apollo dev tools, the cache is definitely getting set correctly at the right time… but the query isn't updating for some reason? I'm hoping it's an Apollo bug that will fix itself someday with an upgrade!