Hm, okay, so the documented way to not instrument anything doesn't actually stop them from patching `Module._load`. But this undocumented option sure does! So, woo, let's try it! lol
Huh, well, I can't figure out what in our production env stopped working with Honeycomb's automatic instrumentation… so, oh well! Let's try disabling it for now and see if it works.
This means our Honeycomb logs will no longer include _super helpful_ visualizations of how HTTP requests and MySQL queries create a request dependency waterfall… but I haven't opened Honeycomb in a while, and this bug is blocking all of prod, so if this fixes the site then I'm okay with that as a stopgap!
Btw the error message was:
```
Unhandled rejection: TypeError: Cannot read property 'id' of undefined
    at exports.instrumentLoad (/var/task/node_modules/honeycomb-beeline/lib/instrumentation.js:80:14)
    at Function._load (/var/task/node_modules/honeycomb-beeline/lib/instrumentation.js:164:16)
    at ModuleWrap.<anonymous> (internal/modules/esm/translators.js:199:29)
    at ModuleJob.run (internal/modules/esm/module_job.js:169:25)
    at Loader.import (internal/modules/esm/loader.js:177:24)
```
Oh also, this is the first time eslint has looked at `scripts/build-cached-data.js` I guess, so I fixed some lint errors in there.
Hope it actually work-works lol
Did some refactors in `useOutfitState` to support the new reset action we do after auto-saving, in case the server tweaked things like the name.
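For flavor, here's roughly what I mean by a reset action, as a sketch against a useReducer-style state (the action and field names here are illustrative, not the real `useOutfitState` internals):

```js
// Sketch only: illustrative action/field names, not the real useOutfitState reducer.
function outfitStateReducer(state, action) {
  switch (action.type) {
    // ...other actions elided...
    case "resetToSavedOutfitData":
      // After auto-saving, the server is the source of truth: it may have
      // tweaked fields like `name`, so replace our local copy with its value.
      return { ...state, name: action.savedOutfitData.name };
    default:
      return state;
  }
}
```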
I wanted the ability to clear out closet list text for Support users, and figured I should just build the UI for end users too, and grant Support users the same access!
I narrowed down the problem to the fact that we were joining pet types against assets, and *then* running GROUP BY and DISTINCT and everything. Assets × compatible species/color pairs is a LOT of rows!
Here, we instead get all the relevant body IDs first, and *then* match them against pet types—which we fetch in one batch to match body to canonical species/color.
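Here's the rough shape of the change, as a sketch (assuming a mysql2-style promise connection; the table and column names are simplified, not the exact production query):

```js
// Sketch: simplified table/column names, not the exact production query.
async function getCompatibleBodyIds(db, itemId) {
  // Step 1: get just the distinct body IDs for this item's assets. No pet type
  // join yet, so the DISTINCT runs over a relatively small set of rows.
  const [bodyRows] = await db.query(
    `SELECT DISTINCT sa.body_id
       FROM swf_assets sa
       INNER JOIN parents_swf_assets psa ON psa.swf_asset_id = sa.id
       WHERE psa.parent_type = "Item" AND psa.parent_id = ?`,
    [itemId]
  );
  const bodyIds = bodyRows.map((row) => row.body_id);
  if (bodyIds.length === 0) {
    return [];
  }

  // Step 2: match those body IDs against pet types in one batch, so we can
  // pick a canonical species/color for each body.
  const [petTypeRows] = await db.query(
    `SELECT body_id, species_id, color_id
       FROM pet_types
       WHERE body_id IN (?)
       ORDER BY species_id ASC, color_id ASC`,
    [bodyIds]
  );
  return petTypeRows;
}
```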
I'm also trashing the weird caching mechanism we did here, because in practice it doesn't seem reliable anyway. If anything, I'd want to look at stronger CDN caching. (I made a small improvement to the caching annotation, but ultimately it still doesn't matter, because this query uses logged-in stuff and always comes out max-age=0 anyway.)
I already had a script for this lying around, and adapted it a tiny bit to the repository!
Part of me thought about building it in as a support tool. I might've if:
- this CLI didn't already exist
- we already had tighter permissioning (this is pretty high stakes!!)
I'm not getting cron success _or_ cron failure emails for running this script on our Linode box. I was getting failure emails back when I had the command wrong, though.
My hypothesis is that the script output is too long to email, because of some limit somewhere along the way. I'll update the cron job to use `--skip-unchanged`, in hopes that it helps me get the emails! (I'm not suuure it's running, is the thing... though hey, here's a way to check: as of now, 512,624 of 521,896 assets are converted. If that changes eventually without a manual script run, then the cron is working!)
I want to start running this on a regular cron, and making the script faster (stop sending redundant queries) and clearer (report the # actually updated) is super useful for that!
Originally, this was sorta a cache warmup script: we wanted to fill in manifests that we hadn't checked for yet.
But now, I want to _also_ check previous cache misses, that we stored in the db as an empty string. Maybe it's been converted now!
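In query terms, the change to which rows we consider is roughly this (a sketch, with simplified column names; `db` is assumed to be a mysql2-style promise connection):

```js
// Before: only warm up manifests we had never checked for at all (NULL).
// After: also re-check previous cache misses, which we stored as "".
async function getAssetsNeedingManifests(db) {
  const [rows] = await db.query(
    `SELECT id, url FROM swf_assets
       WHERE manifest IS NULL OR manifest = ""`
  );
  return rows;
}
```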
We download the schema from prod, and omit real data, but I didn't notice that we were still pulling the AUTO_INCREMENT counter metadata for the ID columns! Now, we scrub that from the schema file we save.
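The scrub itself is tiny; something like this (a sketch of the idea, not the literal code):

```js
// mysqldump writes table options like `) ENGINE=InnoDB AUTO_INCREMENT=123456 ...`
// into the schema dump. We don't want prod's counter values in the saved schema
// file, so strip that clause out before writing it.
const scrubbedSchema = rawSchema.replace(/ AUTO_INCREMENT=\d+/g, "");
```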
Boom, now we can also run a clean MySQL test db on each test that wants it :)
The test I wrote as a sample is currently marked `it.skip` because it's not passing yet!
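Roughly what the per-test setup looks like, as a sketch (assuming Jest and mysql2; the connection details and schema path here are made up, and we assume the dump includes `DROP TABLE IF EXISTS` statements):

```js
// Sketch: made-up connection details and schema path; the real setup lives in
// our test helpers.
const fs = require("fs");
const mysql = require("mysql2/promise");

async function createCleanMysqlTestDb() {
  const db = await mysql.createConnection({
    host: "localhost",
    user: "impress_2020_test",
    database: "impress_2020_test",
    multipleStatements: true, // so we can run the whole schema file at once
  });

  // Rebuild the schema from the scrubbed dump, so each test that wants a db
  // starts from a known-empty state.
  const schema = fs.readFileSync("path/to/schema-dump.sql", "utf8");
  await db.query(schema);

  return db;
}
```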
This is in preparation for hiding bio zone restrictions but showing item zone restrictions!
I also refactor the `build-cached-data` script substantially, to run GraphQL against the server instead of a custom query.
In this change, we cache the zones table as part of the JS build process. This keeps the database as our source of truth, while aggressively caching the data at deploy time.
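Here's roughly the shape of the build step now (a sketch; the query fields and output path are illustrative, and the real code lives in `scripts/build-cached-data.js`):

```js
// Sketch: illustrative field names and output path. The real script queries our
// own GraphQL server at build time and writes the result out as cached data,
// so the database stays the source of truth.
const fs = require("fs");
// (Assumes a Node version with global fetch, or swap in a fetch polyfill.)

async function buildCachedZones() {
  const res = await fetch("http://localhost:3000/api/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `{ allZones { id label depth } }`, // hypothetical query shape
    }),
  });
  const { data } = await res.json();

  fs.writeFileSync(
    "src/app/cached-data/zones.json", // illustrative path
    JSON.stringify(data.allZones, null, 2)
  );
}
```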
See the new README for some rationale!
I tested this by pulling up dev Honeycomb, and observing that we no longer run db queries to `zones` in the new traces for the wardrobe page. (It's a good thing we did it this way, because I noticed some code in the server that was still loading the zone anyway, and fixed it here!)
I was finding the script too slow running on my local machine, because the SQL round-trips were too slow, and with one connection they were essentially a serial bottleneck, not taking much advantage of our concurrency.
Here, I instead add a `--dump` option, which outputs SQL to stdout. I then uploaded the resulting SQL to the DTI box, and ran it up there. Doing the network part fast on my machine, and the SQL part fast on the cloud machine!
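The gist of `--dump`, as a sketch (flag handling simplified; the helper and column names are made up):

```js
// Sketch: made-up helper and column names. Instead of executing each UPDATE
// against the remote db (one slow round-trip at a time), --dump prints the
// statements so they can be piped into mysql on a box next to the database.
const mysql = require("mysql2");

const shouldDump = process.argv.includes("--dump");

async function saveManifest(db, assetId, manifestJson) {
  const sql = `UPDATE swf_assets SET manifest = ? WHERE id = ?;`;
  if (shouldDump) {
    // mysql.format escapes the values, so the printed SQL is safe to run as-is.
    console.log(mysql.format(sql, [manifestJson, assetId]));
  } else {
    await db.query(sql, [manifestJson, assetId]);
  }
}
```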
I first considered uploading this script to the cloud machine, but it's an old Ubuntu and I couldn't figure out how to install a recent NodeJS onto it 🙃
This reverts commit 0f7ab9d10e.
The Production Vercel deploys don't seem to like how I did this build trick, even though the Preview deploys seem fine with it 🤔 Reverting for now, sent a message to Vercel support.
Just gonna bulk load all those manifests into the db, and then that should make most loads notably faster by removing the net request! 🤞
We'll still load manifests inline sometimes, but only the first time anyone pulls up the layer in impress-2020. After that, it should be cached forever!
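The bulk load itself is nothing fancy; here's a sketch of the kind of batching I mean (helper and column names illustrative, not necessarily how the real script writes it):

```js
// Sketch: illustrative batching. Write all the manifests we've already fetched
// into the db, in a few big round-trips, so most layer loads can skip the
// network request entirely.
const _ = require("lodash");
const mysql = require("mysql2");

async function bulkSaveManifests(db, manifests) {
  // manifests: Array of { assetId, manifestJson }
  for (const batch of _.chunk(manifests, 1000)) {
    // One round-trip per batch: send the whole batch of UPDATEs at once.
    // (Requires `multipleStatements: true` on the connection.)
    const sql = batch
      .map(({ assetId, manifestJson }) =>
        mysql.format(`UPDATE swf_assets SET manifest = ? WHERE id = ?;`, [
          manifestJson,
          assetId,
        ])
      )
      .join("\n");
    await db.query(sql);
  }
}
```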