Commit graph

66 commits

571949b066 Increase timeout in archive SQL queries
One problem I've run into with the archive task is that sometimes the queries time out. My hunch is that some of the assets have, like, weirdly big manifest files that get transferred as surprisingly big text payloads.

Anyway, I'm increasing the timeout to 20s, which is big, but big feels good for a script that doesn't run often and where failing is not great news!

I'm also idly considering whether I wanna finally put in the work to do a bulk S3 uploader sometime, because the current version, which iterates over multiple `aws s3 cp` calls, is just real slow. I think that's because it establishes a new connection each time, and that operation is maybe surprisingly expensive? And the CLI doesn't have a way to do multiple uploads any more precisely than "sync up this whole folder", which is slow when the folder contains a lot of stuff you _know_ doesn't need to go up there.
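
For reference, here's roughly what a per-query timeout looks like with the Node `mysql` package's query options. This is a sketch, not necessarily exactly how our code wires it up:

```
// Sketch: bump a single query's timeout to 20s via per-query options.
// The table name is real; the connection config and query are illustrative.
const mysql = require("mysql");
const connection = mysql.createConnection({
  host: "localhost",
  database: "openneo_impress",
});

connection.query(
  { sql: "SELECT id, manifest FROM swf_assets", timeout: 20000 },
  (error, rows) => {
    if (error) throw error;
    console.log(`Fetched ${rows.length} rows`);
  }
);
```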
2023-06-12 10:56:40 -07:00
7d64709378 Update public data
Just ran `yarn db:export:public-data`, tada!
2022-12-08 13:34:40 -08:00
b84c8ba34e Script to set up dev db with public DTI data
Now, someone with production DB access can run `yarn db:export:public-data` to create `public-data-constants.sql` and `public-data-from-modeling.sql`.

Then, someone setting up their dev database can run `yarn db:setup-dev:full` and get all the wearables data imported right into their dev database!

I'm noticing just how poorly I'm keeping up with my own goals for finishing up DTI, and wondering if now is a good time to circle back to some old offers for code contributions I got last year… I also just figure that making this app Possible To Run with a backup of the basic public database is, like, a pretty handy thing to have for archival's sake imo.

Note that, for this change, we also set up Git LFS (Large File Storage). GitHub should be automatically compatible with this! It's a way to keep the whole 30MB database dump out of the repository history, and in a secondary filestore instead, because Git's core algorithms aren't really built to handle large blobs of data very well. Users setting up their dev environment will therefore also need to have Git LFS installed for this script to work! (Otherwise, they'll see a "pointer" file in `public-data-from-modeling.sql.gz` that contains some metadata about the file state, but not the data itself.)
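
If you're setting up from scratch, the Git LFS dance is the standard one, something like this (the tracked pattern here is my guess at what we'd use):

```
# One-time setup on your machine: install the LFS hooks for your user.
git lfs install

# In the repo: mark the big SQL dumps as LFS-tracked (writes .gitattributes).
git lfs track "*.sql.gz"

# If you cloned before installing LFS and only have pointer files on disk,
# this fetches the real file contents:
git lfs pull
```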
2022-11-13 07:03:44 -08:00
88511d3dc6 Implement archive:upload:delta
Ok great, we can now run the delta archive process!

It'd be nice to get this running on cron on the impress-2020 server, to a temporary folder? I *do* want to keep remembering to run something regularly on my personal machine too though, to keep my own copy up-to-date…
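
If I do set that up, the crontab entry would be something bland like this (the paths and schedule here are placeholders, just sketching the shape):

```
# Hypothetical sketch: run the delta archive weekly, logging output.
0 4 * * 0  cd /home/impress/impress-2020 && yarn archive:upload:delta >> /tmp/archive-delta.log 2>&1
```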
2022-11-05 02:15:31 -07:00
12b5a56694 Fix bug in archive:create scripts
That's what I get for not fully testing, lmao! But right: paths in shell scripts are relative to the working directory, and if I want paths relative to the script itself, I gotta use `dirname`!
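
For posterity, the fix pattern is the classic one-liner near the top of the script (sketched here; the real script might apply it per-path instead):

```
#!/usr/bin/env bash
# Resolve paths relative to this script's own directory, not the caller's
# working directory.
cd "$(dirname "$0")"
```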
2022-10-24 16:56:06 -07:00
8dee9ddbed Refactor archive scripts to prepare/create/upload
Sat down and thought about the structure here and how to make the full/delta stuff make more sense together! Here's what I came up with!

In both full and delta archiving, we prepare the manifest, we create the local archive, then we upload it to remote.
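
In package.json terms, I picture it shaking out to something like this (the script paths and most names here are my own sketch, not the real ones):

```
{
  "scripts": {
    "archive:prepare": "node scripts/archive/prepare.js",
    "archive:create": "yarn archive:prepare && bash scripts/archive/create.sh",
    "archive:upload": "bash scripts/archive/upload.sh",
    "archive:upload:delta": "bash scripts/archive/upload-delta.sh"
  }
}
```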
2022-10-13 16:07:12 -07:00
35713069fa Delta version of archive scripts
I like running the full `archive:create` to help us be _confident_ we've got the whole darn thing, but it takes multiple days to run on my machine and its slow HDD, which… I'm willing to do _sometimes_, but not frequently.

But if we had a version of the script that ran faster, and only on URLs we still _need_, we could run that more regularly and keep our live archive relatively up-to-date. This would enable us to build reliable fallback infra for when images.neopets.com isn't responding (like today lol)!

Anyway, I stopped early in this process because images.neopets.com is bad today, which means I can't really run updates today, lol :p But the delta-ing stuff seems to work! It takes closer to 30min to sync the full state from the live archive, which is, y'know, still slow, but a MUCH faster process than multiple days, lol
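
Conceptually, the delta-ing is just a set difference between "URLs we want" and "files we already have", which shell tools handle nicely (the file names here are illustrative, not the script's real ones):

```
# Sketch: which URLs do we still need? (all URLs) minus (already archived).
sort urls-cache.txt > want.sorted.txt
sort urls-already-archived.txt > have.sorted.txt

# comm -23 prints lines that appear only in the first file.
comm -23 want.sorted.txt have.sorted.txt > urls-delta.txt
```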
2022-10-13 15:08:29 -07:00
32dd0474f2 Better logging output for model-needed-items
Print out the image hash for easier debugging (can look up the custom data ourselves to check it), and also fix a bug with retries not carrying `contextString` through, oops!
2022-10-12 11:55:29 -07:00
e8d7f6678d Auto-modeling script??
It seems to be working!! How exciting!! I'm just letting it run on stuff now :3

One important issue is that Classic DTI doesn't show images for items modeled this way, because we don't download the SWFs for it. But I wanna update it to stop using AWS anyway and do the same stuff 2020 does, I think we can do that pretty sneakily!
2022-10-11 11:13:10 -07:00
b9b0db8b3a Some archive:create tweaks
I'm looking into what it would take to update the archive on a regular basis. The commands right now *are* pretty good at avoiding duplicate work… but the S3 upload still seems like it's taking very long even to just validate what's in the archive already. We might have to build our own little cache rather than using `aws s3 sync`, if we want faster incremental updates?
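
If we do build our own little cache, I imagine it's as simple as a text file of already-uploaded paths, so we only `aws s3 cp` the new stuff. A hypothetical sketch (the bucket and file names are placeholders):

```
# DIY upload cache, instead of re-validating everything with `aws s3 sync`:
touch uploaded.txt
while read -r path; do
  # Skip anything we've already recorded as uploaded.
  grep -qxF "$path" uploaded.txt && continue
  aws s3 cp "archive/$path" "s3://example-archive-bucket/$path" \
    && echo "$path" >> uploaded.txt
done < manifest.txt
```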

Here, I make a few quality-of-life changes to add an `archive:create` command that runs everything in a straight line. That way, I can let it run and see how much wall-time it takes, to be able to decide whether speeding it up feels necessary. (vs whether it's a few-hours task I can just set a reminder to manually run every week or something)
2022-10-02 07:08:40 -07:00
07e2c0f7b1 Add the /donate page
Just doing some house-cleaning on easy pages that need to be converted before DTI Classic can retire!
2022-09-25 08:05:38 -07:00
50e04385e3 Account creation is ready!
Yay it seems to be working, owo! Added the actual database work, and some additional uniqueness checking.
2022-09-13 21:11:48 -07:00
4c1bda274a Convert openneo_id.users to InnoDB
I'm working on account creation, and got tripped up by transactions not working correctly, because MyISAM tables simply don't support them! Took too long to debug this lol :p

Anyway, I made the change in the prod db, and then re-downloaded the schema files to here.
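
The conversion itself is a MySQL one-liner:

```
-- Convert the table to InnoDB so transactions actually work:
ALTER TABLE openneo_id.users ENGINE = InnoDB;
```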
2022-09-13 21:07:36 -07:00
80038ed9c4 Add openneo_id to the dev database
Specifically, we rename the dev database to `openneo_impress` for consistency with the main app, and create a second schema file for `openneo_id`, so we can do local account creation.
2022-09-13 21:06:35 -07:00
69d96c0b72 Update dev mysql schema
I took away the `--column-statistics` option because the mysqldump version I'm using from MariaDB doesn't seem to support it. Whatever!
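
Concretely, the change to the dump command is just dropping the flag (the other flags here are my guess at the usual schema-dump incantation):

```
# Before: fails under MariaDB's mysqldump, which doesn't know this flag.
mysqldump --no-data --column-statistics=0 openneo_impress > schema.sql

# After:
mysqldump --no-data openneo_impress > schema.sql
```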
2022-09-13 00:00:54 -07:00
d9ca07c9b0 Use wget for archive:create:download-urls
Hey this is an exciting development! A list of URLs, that we want to clone onto our hard drive, turns out to be something `wget` is already very good at!

Originally I used `wget`'s `--input-file` option to process the `urls-cache.txt` file, but then I learned how to parallelize it from this StackOverflow answer: https://stackoverflow.com/a/11850469/107415. (Following the guidance in the comments, I removed `-n 1`, to avoid the overhead of extra processes and allow `wget` instances to keep using shared connections over time. Idk why it was in there, maybe the author didn't know `wget` accepts multiple args?)

Anyway yeah, it's working great, except for the weird images.neopets.com downtime! 😅 Specifically I'm noticing that all the item thumbnail images came back really fast, but the customization images are taking for-EV-er. I wonder if that's just a caching thing, or if there's a different backing server for them that's responding much more slowly? Who's to say!

In any case, I'm keeping the timeout in this script pretty low (10 seconds), and just letting failures fail. We can try re-running it again sometime when the downtime is resolved or the cache is warmed up.
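
Roughly the shape of the parallelized download, then (the exact flags are illustrative; the real script may differ):

```
# 8 parallel wget workers, each taking a batch of URLs as args.
# --force-directories mirrors each URL's path onto the local filesystem;
# --timeout=10 keeps the flaky-server failures quick.
xargs --max-procs=8 --max-args=100 \
  wget --timeout=10 --tries=1 --force-directories \
  < urls-cache.txt
```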
2022-09-12 21:47:28 -07:00
ea8715cd90 Sanitize URLs saved by archive:create:list-urls
Especially in our item thumbnails, there's a lot of messiness about what the URL protocol is. There are also some SWF assets whose "URLs" are just saved as paths.

In this change, we start running all our outputted URLs through a `sanitizeUrl` function, which tries to massage each one into an `https://images.neopets.com` URL, and warns if it cannot.

This also warns on some intentionally-different URLs, like our April Fools prank item lol

Anyway, I love functions like this, because the warnings always help me discover the data problems! I wasn't aware of the path-only SWF URLs, for example, until this script started warning about the URL parse errors!
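
The gist of the function, as a sketch (illustrative, not the exact implementation):

```
// Sketch: massage a URL onto https://images.neopets.com, warning when we can't.
function sanitizeUrl(url) {
  if (url.startsWith("//")) url = "https:" + url; // protocol-relative
  if (url.startsWith("/")) url = "https://images.neopets.com" + url; // bare path
  url = url.replace(/^http:\/\//, "https://");
  try {
    const parsed = new URL(url);
    if (parsed.origin !== "https://images.neopets.com") {
      console.warn(`Unexpected URL origin: ${url}`);
    }
    return parsed.toString();
  } catch (error) {
    console.warn(`Could not parse URL: ${url}`);
    return null;
  }
}
```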
2022-09-12 20:52:45 -07:00
ef9958c11e Add asset URLs to archive:create:list-urls script
Here, we read URLs out from the swf_assets table, including SWFs, manifests, and everything referenced by the manifests.

There are a few data-polishing tricks we needed to do to get this to work! Most notably, newer manifests reference themselves, but older ones don't; so we try to infer the manifest URL from the other URLs. (Our database caches the manifest content, but not the manifest URL it came from.)
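
As a loose illustration of the inference idea (the real naming convention the script relies on may well differ; this is just the shape of it):

```
// Sketch: guess a manifest URL from a sibling asset URL, assuming the
// manifest sits alongside the asset. The actual convention is an assumption!
function inferManifestUrl(assetUrl) {
  return assetUrl.replace(/\/[^/]+$/, "/manifest.json");
}

console.log(inferManifestUrl("https://images.neopets.com/cp/items/123/foo.swf"));
// => https://images.neopets.com/cp/items/123/manifest.json
```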
2022-09-12 17:26:11 -07:00
3ce895d52f Start building archive:create:list-urls script
Working on making an images.neopets.com mirror, just in case! To start, I'm extracting all the URLs we need to back up; then I'll make a separate script whose job is to mirror all of the URLs in the list.
2022-09-12 15:53:22 -07:00
4c9dbf91fb Use latest ~owls NC trade values API
They're moving away from the bulk endpoint to individual item data lookups, so we're updating to match!
2022-09-04 01:35:05 -07:00
25092b865a Add Delete button for outfits
Hey finally! I got in the mood and did it, after… a year? idk lol

The button should only appear for outfits that are already saved and owned by you. And the server enforces it!

I also added a new util function to give actually useful error messages when the GraphQL server throws an error. Might be wise to use this in more places where we're currently just using `error.message`!
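
The util is along these lines, assuming Apollo-style error objects (a sketch, not the exact code):

```
// Sketch: prefer the server's GraphQL error messages over the generic
// `error.message`, and call out network errors separately.
function getGraphQLErrorMessage(error) {
  const messages = (error.graphQLErrors || []).map((e) => e.message);
  if (messages.length > 0) {
    return messages.join("; ");
  }
  if (error.networkError) {
    return `Network error: ${error.networkError.message}`;
  }
  return error.message;
}
```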
2022-08-15 20:23:17 -07:00
c1eef6222b Oops, fix db pooling for scripts
Right, ok, `db.close()` needs to be `db.end()` now.

This probably didn't break the user-syncing cron job though, because that doesn't automatically update, I think? So it should still be comfortably running an older version of the code that works just fine.
2022-02-03 16:14:40 -08:00
19f1ec092e Turn on Honeycomb instrumentation again
Well, instrumentation seems to be working fine again! The bug we ran into during commit e5081dab7e is gone. Cool!

I want to be able to see what's making the new box slow. My hypothesis was (and it seems to be right) that communication with the database on the Classic DTI server is slow.

But now that they're on the same Linode account and region, I think I can set up a private VLAN to make them muuuch faster. We'll try it out!
2021-11-26 23:41:22 -08:00
eb2eb241ba Use the first Auth0 connection instead of prompt
Oh right, we have another prompt we need to not prompt for, lol :p

Here, I just assume that there's one database connection on the Auth0 account. If that's not true, the script will error—not because this is a fundamentally unresolvable problem, but because I don't want to write code for configuring a situation that doesn't exist yet :p
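
In code terms, the assumption is roughly this (client setup elided; sketched against the `auth0` Node SDK's `getConnections`, with the details illustrative):

```
// Sketch: fetch the account's database connections, and insist there's
// exactly one, erroring rather than guessing.
const connections = await auth0.getConnections({ strategy: "auth0" });
if (connections.length !== 1) {
  throw new Error(
    `expected exactly 1 database connection, but found ${connections.length}`
  );
}
const connection = connections[0];
```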
2021-10-02 01:01:29 -07:00
357061221f Use env vars for MySQL in export-users-to-auth0
Huh, oops, there are a _few_ reasons the user sync cron job hasn't been running correctly.

I fixed some of the config in prod, but then discovered one more issue: the script prompts for an admin database password, so of _course_ it can't auto-run, lol.

Instead, I've now created an `impress2020-util` account, with just a few permissions (but specifically the `openneo_id.users` permission that I _don't_ give the app!), and added the username and password to the secret .env file, both locally and in prod. (In this case, prod means the Linode VPS, not Vercel, because that's where our cron runs.)
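
On the database side, that account is roughly this (the exact privileges and host are assumptions; the real password lives only in the .env files):

```
-- Sketch: a narrowly-scoped MySQL account for the cron job.
CREATE USER 'impress2020-util'@'%' IDENTIFIED BY 'REDACTED';
GRANT SELECT ON openneo_id.users TO 'impress2020-util'@'%';
```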
2021-10-02 00:57:39 -07:00
ba8e4d8aa7 Trickier disabling honeycomb instrumentation
Hm, okay, so the documented way to not instrument anything doesn't actually stop them from patching `Module._load`. But this undocumented option sure does! So, woo, let's try it! lol
2021-08-08 00:23:57 -07:00
e5081dab7e Disable honeycomb auto instrumentation
Huh, well, I can't figure out what in our production env stopped working with Honeycomb's automatic instrumentation… so, oh well! Let's try disabling it for now and see if it works.

This means our Honeycomb logs will no longer include _super helpful_ visualizations of how HTTP requests and MySQL queries create a request dependency waterfall… but I haven't opened Honeycomb in a while, and this bug is blocking all of prod, so if this fixes the site then I'm okay with that as a stopgap!

Btw the error message was:
```
Unhandled rejection: TypeError: Cannot read property 'id' of undefined
    at exports.instrumentLoad (/var/task/node_modules/honeycomb-beeline/lib/instrumentation.js:80:14)
    at Function._load (/var/task/node_modules/honeycomb-beeline/lib/instrumentation.js:164:16)
    at ModuleWrap.<anonymous> (internal/modules/esm/translators.js:199:29)
    at ModuleJob.run (internal/modules/esm/module_job.js:169:25)
    at Loader.import (internal/modules/esm/loader.js:177:24)
```

Oh also, this is the first time eslint has looked at scripts/build-cached-data.js I guess, so I fixed some lint errors in there.
2021-08-08 00:14:55 -07:00
617ffd9a38 Add --upsert option to auth0 script 2021-06-16 08:24:59 -07:00
1d97988383 Finish outfit auto-saving!
Hope it actually work-works lol

Did some refactors in useOutfitState to support the new reset action we do after auto-saving, in case the server tweaked things like the name.
2021-05-04 12:33:13 -07:00
02228f533a Add idempotency comment to auth0 script 2021-03-31 16:47:39 -07:00
62952b80dd Can edit closet list text, incl as Support
I wanted the ability to clear out closet list text for Support users, and figured I should just build the UI for end users too, and grant Support users the same access!
2021-03-23 17:48:11 -07:00
0e8e50b054 Simpler, faster modeling query
I narrowed down the problem to the fact that we were joining in pet types against assets, and *then* running GROUP and DISTINCT and everything. Assets x compatible species/color pairs is a LOT of rows!

Here, we instead get all the relevant body IDs first, and *then* match them against pet types—which we fetch in one batch to match body to canonical species/color.

I'm also trashing the weird caching mechanism we did here, because in practice it doesn't seem reliable anyway. If anything, I'd want to look at stronger CDN caching. (I made a small improvement to the caching annotation, but ultimately it still doesn't matter, because this query uses logged-in stuff and always comes out max-age=0 anyway.)
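
Schematically, the query went from one big join to two small steps. A sketch, with my best guess at the schema names (treat them as illustrative):

```
-- Step 1: just the body IDs relevant to the assets in question,
-- with no pet_types join yet.
SELECT DISTINCT sa.body_id
  FROM swf_assets sa
  INNER JOIN parents_swf_assets psa ON psa.swf_asset_id = sa.id
  WHERE psa.parent_type = 'Item' AND psa.parent_id = 12345;

-- Step 2: match those bodies to canonical species/color pairs, in one batch.
SELECT body_id, species_id, color_id
  FROM pet_types
  WHERE body_id IN (1, 2, 3);
```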
2021-03-18 13:02:06 -07:00
cfbb23d9ff Keep track of when manifest was last cached
Can use this later to make sure we re-run them on a regular interval or something!
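
Presumably this is just a timestamp column next to the cached manifest content (the column name here is my guess):

```
-- Sketch: record when we last fetched each asset's manifest.
ALTER TABLE swf_assets
  ADD COLUMN manifest_cached_at TIMESTAMP NULL DEFAULT NULL;
```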
2021-03-11 02:23:40 -08:00
197c623426 delete-user.js script
I already had a script for this lying around, and adapted it a tiny bit for the repository!

Part of me thought about building it in as a support tool. I might've if:
- this CLI didn't already exist
- we already had tighter permissioning (this is pretty high stakes!!)
2021-03-10 05:19:51 -08:00
2f36f8a0e8 Export individual user to auth0 2021-03-10 05:18:31 -08:00
2164f06021 Add --resync-existing to cache-asset-manifests
Wowie, looks like all the SVG asset manifests changed format lol! Running this now to update them all. There's a lot of them!
2021-01-21 15:14:23 -08:00
79258e11b9 Add --skip-unchanged flag to cache-asset-manifests
I'm not getting cron success _or_ cron failure emails for running this script on our Linode box. I was getting failures back when I had the command wrong, though.

My hypothesis is that the script output is too long to email, because of some limit somewhere along the way. I'll update the cron job to use `--skip-unchanged`, in hopes that it helps me get the emails! (I'm not suuure it's running, is the thing... though hey, here's a way to check: as of now, 512,624 of 521,896 assets are converted. If that changes eventually without a manual script run, then the cron is working!)
2021-01-21 10:38:47 -08:00
6da2ddb453 Perf & feedback upgrades to cache-asset-manifests
I want to start running this on a regular cron, and making the script faster (by not sending redundant queries) and its output clearer (reporting the # actually updated) is super useful for that!
2021-01-20 09:49:05 -08:00
24f29173bb only sync recent users to auth0 2021-01-16 11:08:12 -08:00
6168f98b8e include previous misses in cache-asset-manifests
Originally, this was sorta a cache warmup script: we wanted to fill in manifests that we hadn't checked for yet.

But now, I want to _also_ check previous cache misses, that we stored in the db as an empty string. Maybe it's been converted now!
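
So the query's scope widens from "never checked" to "never checked, or checked and missed" (column names are illustrative):

```
-- Before (sketch): only assets we've never checked at all.
SELECT id FROM swf_assets WHERE manifest IS NULL;

-- After (sketch): also re-check previous misses, stored as empty string.
SELECT id FROM swf_assets WHERE manifest IS NULL OR manifest = '';
```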
2020-12-28 14:04:28 -08:00
fd864ab8ec fix cache-asset-manifests script
Just wanted to run it and see if much has been converted since we last checked!
2020-12-28 14:00:11 -08:00
7c9313f4a6 support tool to edit usernames 2020-11-18 07:42:40 -08:00
53d399f46b add neomail username to user GQL 2020-10-23 22:55:13 -07:00
3a20deba09 can remove owned/wanted items from item page 2020-10-23 22:43:56 -07:00
57889a3a88 can add own/wanted items from item page
the buttons work now! but only when adding 😅 remove comes next!
2020-10-22 21:20:49 -07:00
6c97c15979 add closet_lists to dev schema too 2020-10-22 20:32:02 -07:00
0dcbe6fc2d Add users and closet_hangers to local db schema 2020-10-22 19:53:32 -07:00
99e6480486 add logging to modeling
my hope is that, if we fuck things up, this will make it clear 😅
2020-10-06 07:06:19 -07:00
df2d814c13 enable running against a local dev database
had to add some missing tables, but it seems to work! (some known errors though, from assumptions we make, e.g. Blue Acaras existing)
2020-10-06 06:18:19 -07:00
41e70ba8d0 finish modeling full pet appearance 2020-09-25 03:29:02 -07:00