We're in the process of migrating away from translating these records,
because Neopets hasn't supported non-English languages in many years,
and it'll simplify our code and database lookups.
In Main DTI, we already wrote code to copy these fields onto the main
records and keep them in sync for now; once DTI 2020 isn't referencing
them anymore, it should be safe for the main app to drop the translation
tables altogether.
Note that some Prettier changes got mixed in here and that's fine!
I also wasn't suuuper careful testing these: most of them seem to be
trivially testable by just loading the homepage or doing a few basic
wardrobe actions, and the others are in Discord support log actions
that aren't enabled in development mode. So I'm just like… ehh, I'll do
a couple support actions after deploy and see that they don't crash!
Lol I guess we've probably just been having intermittent CORS issues
forever, oops. I hope the cache has been mostly warmed on the right
thing! But today I checked and the entire species/color picker on the
Rails app wasn't working for CORS reasons, so like. Yeah oof.
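(For reference, here's roughly the shape of the fix, as a sketch: a Next.js API route that sets the CORS headers itself. The route name and the wide-open origin policy are placeholders, not necessarily what we actually ship.)

```js
// pages/api/someEndpoint.js — hypothetical route name, for illustration only
export default async function handler(req, res) {
  // Let the Classic DTI Rails app call this API from its own origin.
  // NOTE: the allow-all origin policy here is an assumption for the sketch.
  res.setHeader("Access-Control-Allow-Origin", "*");

  if (req.method === "OPTIONS") {
    // Answer the CORS preflight request without doing any real work.
    return res.status(204).end();
  }

  res.status(200).json({ ok: true });
}
```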
Without this, 150x150 and 300x300 outfit thumbnails would fail to render new item layers where we didn't have an AWS image layer. Now, they correctly render the new stuff!
I tested this with the new "Spooky Stitches Markings" on the Grarrl, which has a blank image in AWS, but works correctly in the new code by loading the image from neopets.com!
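(Roughly the shape of the fallback, as a sketch; the field names below are made up for illustration and aren't the real layer schema.)

```js
// Hypothetical helper: prefer the pre-converted PNG we uploaded to AWS, if we
// have one, and otherwise fall back to the layer's original image on
// images.neopets.com, so newly-modeled items still render.
function getBestImageUrlForLayer(layer) {
  if (layer.awsImageUrl != null) {
    return layer.awsImageUrl;
  }
  return layer.neopetsImageUrl;
}

// The 150x150 / 300x300 thumbnail renderer would map each visible layer
// through this helper before drawing it onto the canvas.
```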
Okay, this is gonna be a drop-in new backend for impress-asset-images.openneo.net, to enable Classic DTI to use the same images as DTI 2020!
This will enable us to stop generating images and uploading them to S3 just for Classic's sake, so we can turn those background processes off! And the new modeling script skips that anyway, so this is an important compatibility step for the new data that went out today!
We're gonna update impress-asset-images.openneo.net to perform redirects and stuff, so Classic DTI can start using the same images that DTI 2020 does.
That should enable us to stop relying on AWS for images, which is important because the new modeling script breaks that anyway :p but this will also let us turn off the image converters that run in the background all the time, and I'm excited for that too!
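(Here's a sketch of what "perform redirects and stuff" could look like, assuming an Express-style server; the route pattern and URL format are placeholders, not the real ones.)

```js
const express = require("express");
const app = express();

// Placeholder lookup: the real backend would resolve the image URL that
// DTI 2020 serves for this asset. This URL format is NOT the real one.
async function getImageUrlForAsset(assetId, size) {
  return `https://example.com/asset-images/${assetId}-${size}.png`;
}

// Hypothetical route shape: Classic DTI asks for an asset image by ID and
// size, and we redirect it to wherever DTI 2020's copy of that image lives.
app.get("/assets/:assetId/:size", async (req, res) => {
  const { assetId, size } = req.params;
  const imageUrl = await getImageUrlForAsset(assetId, size);
  if (imageUrl == null) {
    return res.status(404).send("Asset image not found");
  }
  // Redirect rather than proxy, so we never serve the image bytes ourselves.
  res.redirect(302, imageUrl);
});

app.listen(process.env.PORT || 3000);
```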
Oops, Next.js has built-in request body parsing that happens automatically. So it was giving us a `req.body` string, and our code to read in the body and put it in a buffer was waiting forever!
Thankfully, it's much easier than I expected to turn that behavior off for just one route. Now it works like before, so our existing code works again, ta da!
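(For the record, this is the documented Next.js escape hatch: export a `config` with `bodyParser: false` from the route. Sketch below, with a made-up route name; the handler just shows the raw-buffer reading pattern.)

```js
// pages/api/someUpload.js — hypothetical route name
export const config = {
  api: {
    // Turn off Next.js's built-in body parsing for this one route, so the
    // request arrives as a raw stream again instead of a pre-parsed string.
    bodyParser: false,
  },
};

export default async function handler(req, res) {
  // With bodyParser off, we can read the body into a buffer ourselves,
  // like our existing code expects.
  const chunks = [];
  for await (const chunk of req) {
    chunks.push(chunk);
  }
  const body = Buffer.concat(chunks);

  res.status(200).json({ receivedBytes: body.length });
}
```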
Before, we were using the ContentType from the S3 object, which was unreliable. This change helps us behave better for query-string files!
We also add a filename via Content-Disposition, for the files that auto-download and for the Save As case. Idk if this is super important exactly, but I feel like it'll be a lifesaver for anyone using this to get at a specific file for their own reference at any point! And it just seems polite lol
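(Sketch of the header logic, assuming we infer the type from the file extension with something like the `mime-types` package; the helper name is made up.)

```js
const path = require("path");
const mime = require("mime-types"); // assumption: any extension→MIME lookup would do

function setDownloadHeaders(res, filePath) {
  // Infer Content-Type from the file extension, instead of trusting the
  // ContentType stored on the S3 object.
  const contentType = mime.lookup(filePath) || "application/octet-stream";
  res.setHeader("Content-Type", contentType);

  // Give the file a sensible download name via Content-Disposition,
  // stripping any query string baked into the archived path.
  const filename = path.basename(filePath.split("?")[0]);
  res.setHeader("Content-Disposition", `inline; filename="${filename}"`);
}
```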
Okay so the funny thing is that my upload script is clearly like *super* not working lol: it's been running for more than an hour now and still hasn't finished listing the files. So there's only actually a handful of files to test with here, from the `archive:create:upload-test` script!
But anyway, uhh once the archive is actually uploaded, this is a way to read it back! Mainly as a way to assure me that it's all saved correctly, but also as a potential backup for images.neopets.com if it goes down again sometime.
There are two Neopets NC items named Butterfly Dress! ~owls disambiguates them by calling one of them "Butterfly Dress (from Faerie Festival event)".
It'd be a bit more robust to cooperate with ~owls to get item IDs served up in this case, but it's not a big deal, esp. for only this one case, so like… this is fine!
Hey nice it looks like it's working! :3 "Bright Speckled Parasol" is a nice test case, it has a long text string! And when the NC value is not included in the ~owls list, we indeed don't show the badge!
This isn't a partnership we've actually talked through with the team, I'm just validating whether we could reuse our Waka code if it were to come up! and playing with it for fun 😊
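(If it helps future-me: here's a sketch of one way the name matching could go. This isn't necessarily how the code actually handles it; the data shape is an assumption.)

```js
// Look up an item's ~owls value by name. Check the exact name first, then fall
// back to a disambiguated form like "Butterfly Dress (from Faerie Festival event)".
function findOwlsValueByName(owlsValuesByName, itemName) {
  if (owlsValuesByName.has(itemName)) {
    return owlsValuesByName.get(itemName);
  }
  for (const [name, value] of owlsValuesByName) {
    if (name.startsWith(`${itemName} (`)) {
      return value;
    }
  }
  return null;
}
```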
Oops, the new Juppie Swirl color has ID 114, and there's no released color #113 yet. But our `/api/validPetPoses` code, when deciding how large to make the byte array, uses the _number_ of colors in the database.
This meant that, when Juppie Swirl was released, there wasn't a 114th slot allocated, and the loop stopped at color ID #113—so the new Juppie Swirl color #114 wasn't included in the results. This made it impossible to select Juppie Swirl as a starting color. (You could, however, model a Juppie Swirl Chia, and the wardrobe would load it successfully; and you would see the color/species picker with the correct options selected, but in a red "invalid" state.)
Now, we instead use the largest ID in the database to determine the size of the array. This means Juppie Swirl is now included correctly!
There would be network perf implications if the color IDs were a sparser space, but it's dense enough to be totally fine in practice. (But let's not release an April Fools color #9999 or anything!)
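(Sketch of the sizing change, with made-up sample data; the real code builds the valid-poses byte array per (species, color) pair.)

```js
// Note the gap: there's no color #113, but Juppie Swirl is #114.
const colors = [{ id: 1 }, { id: 2 }, { id: 112 }, { id: 114 }];
const numSpecies = 55; // illustrative count, not the real number

// Before: sizing by the *number* of colors left no slot for color #114.
// const numColors = colors.length; // => too small

// After: sizing by the *largest* color ID keeps a slot for every ID, even
// when some IDs below it are unreleased.
const numColors = Math.max(...colors.map((c) => c.id)); // => 114

// e.g. one byte per (species, color) pair:
const buffer = Buffer.alloc(numSpecies * numColors);
console.log(buffer.length); // 55 * 114 = 6270
```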
Well, instrumentation seems to be working fine again! The bug we ran into during commit e5081dab7e is gone. Cool!
I want to be able to see what's making the new box slow. My hypothesis was (and it seems to be right) that communication with the database on the Classic DTI server is slow.
But now that they're on the same Linode account and region, I think I can set up a private VLAN to make them muuuch faster. We'll try it out!
Yeah ok, let's just run one browser instance and one pool.
I feel like I observed that, when I killed Chromium in prod, pm2 noticed the abrupt loss of a child process and restarted the whole app process? Which is rad? So maybe let's just try relying on that and see how it goes.
We used Playwright in the first place to try to work around a Vercel deploy issue, and I'm not sure it really ended up mattering lol :p
But yeah, I'm putting the new Puppeteer code through the same prod stress test, and it just doesn't seem to be getting into the same broken state that Playwright was. I'm guessing it's just that Puppeteer has more investment in edge-case handling? (There's also the fact that we're no longer running things as root, which could have been a fucky problem, too?)
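(The one-browser, one-pool idea, sketched with Puppeteer; the details of how we hand pages out are simplified here.)

```js
const puppeteer = require("puppeteer");

// Share one browser instance across the whole app process. If Chromium dies,
// the process can crash and pm2 restarts the whole app, per the note above.
let browserPromise = null;
function getBrowser() {
  if (browserPromise == null) {
    browserPromise = puppeteer.launch({ headless: true });
  }
  return browserPromise;
}

// Each caller borrows a fresh page (tab) and closes it when done.
async function withPage(callback) {
  const browser = await getBrowser();
  const page = await browser.newPage();
  try {
    return await callback(page);
  } finally {
    await page.close();
  }
}

module.exports = { withPage };
```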
The motivation is that I want VERCEL_URL and local net requests outta here :p and we were doing some cutesiness with leveraging the CDN cache to back the GQL fields. No more of that, folks! lol
Previously we were using HTTP queries to keep individual function bundle sizes small, but that doesn't matter in a server where all the code is shared!
The immediate motivation is that I want /api/outfitImage requesting against the same server, not impress-2020.openneo.net. For other stuff I'm probably gonna fix this by replacing VERCEL_URL with something else, but here I figured this was a change worth making anyway.
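(Sketch of the in-process approach, assuming a graphql-js schema is importable somewhere; the module path and helper name are placeholders.)

```js
const { graphql } = require("graphql");
const { schema } = require("./graphql-schema"); // hypothetical module path

// Run a GraphQL query against the schema directly, instead of fetching
// https://${VERCEL_URL}/api/graphql over the network.
async function runQueryLocally(query, variables) {
  const result = await graphql({
    schema,
    source: query,
    variableValues: variables,
  });
  if (result.errors) {
    throw new Error(`GraphQL query failed: ${result.errors[0].message}`);
  }
  return result.data;
}
```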
Now that we're not on Vercel's AWS Lambda deployment, we can switch to something a bit more standard!
I also bumped up our version of Playwright, because, hey, why not?
Getting the package list was a bit tricky, but we got there! Left a comment to explain where it's from.
I noticed this when Playwright was trying to draw cute ASCII art and it wasn't showing up right! Not a big deal, but it's a bit more correct to do this, so let's do it!
Things seemed to mostly work at first glance! I haven't tested outfitPageSSR, because we'll need to redo it anyway; and the outfit image routes aren't working anymore (they were configured in vercel.json, which plain Next.js doesn't read).