Commit graph

1485 commits

9a54694e31 Remove Metaverse license notice
It's covered in the Terms now, and I don't think they can reasonably claim they didn't know, lol
2022-10-14 17:44:26 -07:00
8b3c256a5c Time out manifest requests after 2sec
We do a thing where we sometimes proactively update an appearance layer's manifest from images.neopets.com when it's been a while since the last time, _during_ user requests.

But when images.neopets.com is being slow, this makes our API requests about appearances super slow, too!

In this change, we add a 2-second timeout to those requests. That should be plenty for when images.neopets.com is in a good mood, but also give up fast enough for the site to not feel miserable lol :p (especially when the "Use DTI's image archive" option is on!)
2022-10-13 17:00:21 -07:00
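For illustration, a minimal sketch of the 2-second timeout described in the commit above, using `AbortController`; the function name and the manifest handling are assumptions, not the actual code:

```ts
// Sketch: give up on the manifest request if images.neopets.com takes longer
// than 2 seconds. `manifestUrl` and the JSON handling are assumptions.
async function loadManifestWithTimeout(manifestUrl: string): Promise<unknown> {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 2000);
  try {
    const res = await fetch(manifestUrl, { signal: controller.signal });
    if (!res.ok) {
      return null;
    }
    return await res.json();
  } catch (error) {
    // Treat a timeout (or any network hiccup) as "no manifest update this time".
    return null;
  } finally {
    clearTimeout(timeout);
  }
}
```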
4619e86ae0 Add "Use DTI's image archive" option
just a lil thing for people to turn on if it gets truly miserable again!!
2022-10-13 16:44:20 -07:00
8dee9ddbed Refactor archive scripts to prepare/create/upload
Sat down and thought about the structure here and how to make the full/delta stuff make more sense together! Here's what I came up with!

In both full and delta archiving, we prepare the manifest, we create the local archive, then we upload it to remote.
2022-10-13 16:07:12 -07:00
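As a rough sketch of that shared structure (every name here is a hypothetical placeholder, not the actual script code):

```ts
// Sketch: both full and delta archiving go through the same three phases.
type ArchiveManifest = { urls: string[] };

async function prepareManifest(mode: "full" | "delta"): Promise<ArchiveManifest> {
  // Full: every known images.neopets.com URL. Delta: only what's still missing.
  console.log(`Preparing ${mode} manifest…`);
  return { urls: [] };
}

async function createLocalArchive(manifest: ArchiveManifest): Promise<void> {
  console.log(`Downloading ${manifest.urls.length} files into the local archive…`);
}

async function uploadArchiveToRemote(manifest: ArchiveManifest): Promise<void> {
  console.log(`Uploading ${manifest.urls.length} files to remote storage…`);
}

async function runArchivePipeline(mode: "full" | "delta") {
  const manifest = await prepareManifest(mode);
  await createLocalArchive(manifest);
  await uploadArchiveToRemote(manifest);
}
```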
35713069fa Delta version of archive scripts
I like running the full `archive:create` to help us be _confident_ we've got the whole darn thing, but it takes multiple days to run on my machine and its slow HDD, which… I'm willing to do _sometimes_, but not frequently.

But if we had a version of the script that ran faster, and only on URLs we still _need_, we could run that more regularly and keep our live archive relatively up-to-date. This would enable us to build reliable fallback infra for when images.neopets.com isn't responding (like today lol)!

Anyway, I stopped early in this process because images.neopets.com is bad today, which means I can't really run updates today, lol :p but the delta-ing stuff seems to work, and takes closer to 30min to get the full state from the live archive, which is, y'know, still slow, but will make for a MUCH faster process than multiple days, lol
2022-10-13 15:08:29 -07:00
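A minimal sketch of the delta idea: only work on the difference between the URLs we need and the URLs already in the live archive (names are hypothetical):

```ts
// Sketch: the "delta" is just the set difference between what we need and
// what the live archive already has.
function computeDeltaUrls(neededUrls: string[], alreadyArchivedUrls: string[]): string[] {
  const archived = new Set(alreadyArchivedUrls);
  return neededUrls.filter((url) => !archived.has(url));
}
```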
861f3ab881 Fix bug in /api/readFromArchive
Well, two bugs: one with URL encoding, and another minor one where I forgot to return after ending the request with 404, oops lol :p
2022-10-13 14:02:37 -07:00
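The missing-return bug is a common Next.js API route gotcha; a generic sketch of the pattern, not the actual handler:

```ts
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const url = typeof req.query.url === "string" ? req.query.url : null;
  if (url == null) {
    res.status(404).send("Not found");
    return; // the missing return: without it, we fall through and try to respond twice
  }
  // …look up the archived copy of `url` (being careful about URL encoding!)…
  res.status(200).send(`Would serve the archived copy of ${url}`);
}
```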
5e6939309a Replace imageUrl with imageUrlV2 in-app
Should be a smooth drop-in replacement, we give the field an alias `imageUrl` in the query, so the rest of the app is none the wiser!

I didn't test the layer upload cache invalidation, but it seems pretty obvious to me, so ehh I'm just shipping it lmao
2022-10-12 12:41:27 -07:00
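The alias trick looks roughly like this in the query; the fragment and type names here are hypothetical, the aliasing itself is the real point:

```ts
import { gql } from "@apollo/client";

// Sketch: alias `imageUrlV2` back to the old `imageUrl` name, so components
// reading `layer.imageUrl` from the cache don't need to change.
const appearanceLayerFragment = gql`
  fragment AppearanceLayerFields on AppearanceLayer {
    id
    imageUrl: imageUrlV2
  }
`;
```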
7bef1f3b9a Use imageUrlV2 in outfit thumbnails
Without this, 150x150 and 300x300 outfit thumbnails would fail to render new item layers where we didn't have an AWS image layer. Now, they correctly render the new stuff!

I tested this with the new "Spooky Stitches Markings" on the Grarrl, which has a blank image in AWS, but works correctly in the new code by loading the image from neopets.com!
2022-10-12 12:37:07 -07:00
3428254318 Simplify modeling output when there's no items
This'll both hide sections that are empty (which just wasn't plausible for a long time), and print a happy lil message if there's no sections to show at all!
2022-10-12 12:09:15 -07:00
0a99668f00 Remove Modeling link from global header
We seem to have everything modeled now, and we have automatic modeling, so like… this is not a useful link for the general public anymore!

Instead, we'll just keep it a secret for us to check on the state of things!
2022-10-12 12:03:53 -07:00
1b0a6c8385 Mention automatic modeling on the homepage 2022-10-12 11:57:28 -07:00
32dd0474f2 Better logging output for model-needed-items
Print out the image hash for easier debugging (can look up the custom data ourselves to check it), and also fix a bug with retries not carrying `contextString` through, oops!
2022-10-12 11:55:29 -07:00
d591eabd0a Add modeling cron job to deploy-setup
This should run it every 10 minutes! Wowie, cron config on the new box is easy! :3
2022-10-11 12:54:02 -07:00
a1844f76e0 Add /api/assetImageRedirect
Okay, this is gonna be a drop-in new backend for impress-asset-images.openneo.net, to enable Classic DTI to use the same images as DTI 2020!

This will enable us to stop generating images and uploading them to S3 just for Classic's sake, so we can turn those background processes off! And the new modeling script skips that anyway, so this is an important compatibility step for the new data that went out today!
2022-10-11 12:21:14 -07:00
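A rough sketch of what a redirect endpoint like this could look like as a Next.js API route; the query parameter and the lookup are assumptions, not the actual implementation:

```ts
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const assetId = typeof req.query.assetId === "string" ? req.query.assetId : null;
  if (assetId == null) {
    res.status(400).send("assetId query param is required");
    return;
  }

  const imageUrl = await lookUpImageUrlForAsset(assetId);
  if (imageUrl == null) {
    res.status(404).send(`No image found for asset ${assetId}`);
    return;
  }

  // Temporary redirect, so we stay free to change where the images live later.
  res.redirect(302, imageUrl);
}

// Hypothetical helper: in the real app this would come from the database or
// the GraphQL layer, not a stub.
async function lookUpImageUrlForAsset(assetId: string): Promise<string | null> {
  console.log(`Looking up the current image for asset ${assetId}…`);
  return null;
}
```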
29d9d498bf Use aws.impress-asset-images.openneo.net
We're gonna update impress-asset-images.openneo.net to perform redirects and stuff, so Classic DTI can start using the same images that DTI 2020 does.

That should enable us to stop relying on AWS for images, which is important because the new modeling script breaks that anyway :p but this will also let us turn off the image converters that run in the background all the time, and I'm excited for that too!
2022-10-11 12:19:39 -07:00
e8d7f6678d Auto-modeling script??
It seems to be working!! How exciting!! I'm just letting it run on stuff now :3

One important issue is that Classic DTI doesn't show images for items modeled this way, because we don't download the SWFs for it. But I wanna update it to stop using AWS anyway and do the same stuff 2020 does, I think we can do that pretty sneakily!
2022-10-11 11:13:10 -07:00
052cc242e4 Modeling page performance fix
Ok so, I kinda assumed that the query engine would only compute `all_species_ids_for_this_color` on the rows we actually returned, and it's a fast subquery so it's fine. But that was wrong! I think the query engine was computing that for _every_ item, and _then_ filtering out stuff with `HAVING`. Which makes sense, because the `HAVING` clause references it, so computing it makes sense!

In this change, we inline the subquery, so it only gets called if the other conditions in the `HAVING` clause don't fail first. That way, it only gets run when needed, and the query runs like 2x faster (~30sec instead of ~60sec), which gets us back inside some timeouts that were triggering around 1 minute and making the page fail.

However, this meant we no longer return `all_species_ids_for_this_color`, which we actually use to determine which species are _left_ to model for! So now, we have a loader that also basically runs the same query as that condition subquery.

A reasonable question would be, at this point, is the `HAVING` clause a good idea? would it be simpler to do the filtering in JS?

and I think it might be simpler, but I would guess noticeably worse performance, because I think we really do filter out a _lot_ of results with that `HAVING` clause—like basically all items, right? So to filter on the JS side, we'd be transferring data for all items over the wire, which… like, that's not even the worst dealbreaker, but it would certainly be noticed. This hypothesis could be wrong, but it's enough of a reason for me to not bother pursuing the refactor!
2022-10-10 20:15:16 -07:00
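Roughly the shape of the inlining change, written out as query strings; the table, column, and condition names are stand-ins, not the real schema:

```ts
// Before: the subquery is a selected column, so the engine computes it for
// every item, because the HAVING clause refers to it by name.
const before = `
  SELECT items.id, items.needs_modeling,
    (SELECT GROUP_CONCAT(species_id)
       FROM pet_appearances
      WHERE pet_appearances.item_id = items.id) AS all_species_ids_for_this_color
  FROM items
  HAVING needs_modeling = 1 AND all_species_ids_for_this_color IS NOT NULL
`;

// After: inline the subquery into HAVING, so short-circuit evaluation skips it
// whenever the first condition already fails. The species IDs we still need
// for the UI come from a separate loader running basically the same subquery.
const after = `
  SELECT items.id, items.needs_modeling
  FROM items
  HAVING needs_modeling = 1 AND
    (SELECT GROUP_CONCAT(species_id)
       FROM pet_appearances
      WHERE pet_appearances.item_id = items.id) IS NOT NULL
`;
```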
b9b0db8b3a Some archive:create tweaks
I'm looking into what it would take to update the archive on a regular basis. The commands right now *are* pretty good at avoiding duplicate work… but the S3 upload still seems like it's taking very long even to just validate what's in the archive already. We might have to build our own little cache rather than using `aws s3 sync`, if we want faster incremental updates?

Here, I make a few quality-of-life changes to add an `archive:create` command that runs everything in a straight line. That way, I can let it run and see how much wall-time it takes, to be able to decide whether speeding it up feels necessary. (vs whether it's a few-hours task I can just set a reminder to manually run every week or something)
2022-10-02 07:08:40 -07:00
07e2c0f7b1 Add the /donate page
Just doing some house-cleaning on easy pages that need to be converted before DTI Classic can retire!
2022-09-25 08:05:38 -07:00
2b486ea218 New terms of use page
Remind me to link Classic DTI to this too tbh
2022-09-25 06:00:59 -07:00
1619c8f7bf Add page title to Privacy Policy 2022-09-25 05:07:26 -07:00
34ceb6f5b4 Fix infinite-hang bug in /api/uploadLayerImage
Oops, Next.js has built-in request body parsing that happens automatically. So it was giving us a `req.body` string, and our code to read in the body and put it in a buffer was waiting forever!

Thankfully, it's much easier than I expected to turn that behavior off for just one route. Now it works like before, so our existing code works again, ta da!
2022-09-24 23:10:34 -07:00
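For reference, the per-route opt-out in Next.js is the route's `config` export; a rough sketch of the shape (the handler body here is illustrative, not the actual upload code):

```ts
import type { NextApiRequest, NextApiResponse } from "next";

// Turn off Next.js's built-in body parsing for just this route, so we can read
// the raw request stream ourselves.
export const config = {
  api: {
    bodyParser: false,
  },
};

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // With bodyParser off, `req` is a readable stream again, so buffering it
  // manually works like before.
  const chunks: Buffer[] = [];
  for await (const chunk of req) {
    chunks.push(Buffer.from(chunk));
  }
  const body = Buffer.concat(chunks);
  res.status(200).json({ receivedBytes: body.length });
}
```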
832747b7a8 Set Content-Type and filename in readFromArchive
Before, we were using the ContentType from the S3 object, which was unreliable. This helps us behave better for query-string files!

We also add a filename via Content-Disposition for the files that auto-download and for the Save As case. Idk if this is super important exactly, but I feel like it'll be a lifesaver for anyone using this to get at a specific file for their own reference at any point! and it just seems polite lol
2022-09-24 22:23:45 -07:00
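A sketch of the header-setting described above; the extension-to-type mapping and the filename handling are assumptions, not the actual implementation:

```ts
import type { NextApiResponse } from "next";
import * as path from "node:path";

const CONTENT_TYPES: Record<string, string> = {
  ".png": "image/png",
  ".gif": "image/gif",
  ".svg": "image/svg+xml",
  ".js": "text/javascript",
  ".json": "application/json",
};

function setDownloadHeaders(res: NextApiResponse, archivedUrl: string) {
  // Derive the type from the archived URL itself, instead of trusting the
  // ContentType stored on the S3 object.
  const { pathname } = new URL(archivedUrl);
  const extension = path.extname(pathname).toLowerCase();
  const contentType = CONTENT_TYPES[extension] ?? "application/octet-stream";
  const filename = path.basename(pathname);

  res.setHeader("Content-Type", contentType);
  // `inline` keeps images viewable in the browser, but still suggests a
  // filename for auto-downloads and Save As.
  res.setHeader("Content-Disposition", `inline; filename="${filename}"`);
}
```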
ae568072f8 Use AWS CLI for archive upload instead of s3cmd
Okay there we go, I was following Linode's guidance and ended up using a non-Amazon S3 client, but it turns out you can get the official Amazon AWS client to play with private services too, and it doesn't do the thing s3cmd does of trying to list every single file before doing anything 😅

This command _is_ doing weird stall-outs here and there, but is mostly just chugging along. It's not exactly fast, I imagine it'll take some time, but the fact that it's like. working. is huge lmao
2022-09-24 13:18:40 -07:00
6d86e3e2a9 /api/readFromArchive to serve a backed up image
Okay so the funny thing is that my upload script is clearly like *super* not working lol, it's been running more than an hour now and still hasn't finished listing the files. So there's only actually a handful of files to test with here, from the `archive:create:upload-test` script!

But anyway, uhh once the archive is actually uploaded, this is a way to read it back! Mainly as a way to assure me that it's all saved correctly, but also as a potential backup for images.neopets.com if it goes down again sometime.
2022-09-24 12:44:13 -07:00
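Once the archive is uploaded, reading one file back from S3-compatible storage with the AWS SDK might look roughly like this; the endpoint, bucket name, and key layout are assumptions:

```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import type { Readable } from "node:stream";

// Credentials come from the environment; the endpoint points at Linode Object
// Storage rather than Amazon S3.
const storage = new S3Client({
  region: "us-southeast-1",
  endpoint: "https://us-southeast-1.linodeobjects.com",
});

async function readArchivedFile(key: string): Promise<Buffer> {
  const response = await storage.send(
    new GetObjectCommand({ Bucket: "dti-archive", Key: key }),
  );
  if (response.Body == null) {
    throw new Error(`Archived file not found: ${key}`);
  }
  // In Node, `Body` is a readable stream; buffer it up to send to the client.
  const body = response.Body as Readable;
  const chunks: Buffer[] = [];
  for await (const chunk of body) {
    chunks.push(Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}
```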
38cb22980e Add archive:create:upload script
This will upload to our remote storage! We're using `s3cmd`, but our storage isn't actually Amazon S3, it's Linode Object Storage, which has an S3-compatible API. (And it's where our VPSes already are, and its pricing model is very generous for our relatively small scale of data.)

I haven't _really_ tested this exactly yet, because while `archive:create:upload-test` works great and uploads the 3 targeted files successfully… running the big one takes a very long time to even _enumerate_ all the files on my machine. (This makes sense, because I'm keeping the ~100GB archive on my HDD, which is not a fast disk!)

So I'm pushing ahead even though the script is untested, because I wanna work on other stuff too!
2022-09-24 11:01:48 -07:00
d2db7e94a3 Refactor package.json scripts a tiny bit
We extract a `run-script` command that contains the big `ts-node` incantation!
2022-09-24 10:33:16 -07:00
681eb5cdc5 Add dotenv to archive:create:download-urls
That way, I can specify `ARCHIVE_DIR` in my .env file, instead of having to remember to specify it every time!
2022-09-22 22:08:16 -07:00
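A minimal sketch of the dotenv setup; the error message wording is mine, not the script's:

```ts
import "dotenv/config";

const archiveDir = process.env.ARCHIVE_DIR;
if (!archiveDir) {
  throw new Error("Please specify ARCHIVE_DIR, either in .env or on the command line");
}

console.log(`Saving downloaded files to ${archiveDir}`);
```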
773ec8974f petOnNeopetsDotCom GQL improvements
We now support returning `null` from `petAppearance` when a pet genuinely has no customization data.

We also deprecate some old fields, and update our own call site to match.
2022-09-17 22:07:34 -07:00
d058f46906 Update Privacy Policy a bit
Remove the references to Auth0 unless you switched back to using it; and remove the references to Vercel and mention Linode instead.
2022-09-15 05:05:13 -07:00
4c343aee3e Stop doing the weird login bouncy bug
The item page got into a weird situation where this setting seemed to cause loading things _about_ `currentUser` to make the `useCurrentUser` GQL query flake out and say it was loading. This would make the layout bounce around a bunch while it tried to decide whether to show you the buttons or not.

I still don't love the UX of not having any loading state after login, but like… eh it's certainly better lmao
2022-09-15 04:59:32 -07:00
193e993095 A very hacky hack for SSR cache merges
Ok! I think I got it! It's very very nasty tho lmao! But this will merge in the new SSR-provided data before the new page can render, instead of having it sometimes make redundant network requests & show loading spinners in the meantime for data that Next.js already fulfilled for it.

Nasty nasty lil trick. But it seems to be working! Let's see how it does lmao
2022-09-15 04:45:44 -07:00
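The general shape of that trick, along the lines of the official Next.js + Apollo examples, might look something like this; the `initialCacheState` prop name matches the "stop clobbering the Apollo client" commit, and everything else is an assumption:

```ts
import { useMemo, useState } from "react";
import { ApolloClient, InMemoryCache } from "@apollo/client";
import type { NormalizedCacheObject } from "@apollo/client";

function useApolloClientWithSSRState(initialCacheState: NormalizedCacheObject) {
  // One client for the lifetime of the app, so navigation doesn't rebuild it
  // and dump its cache.
  const [client] = useState(
    () => new ApolloClient({ uri: "/api/graphql", cache: new InMemoryCache() }),
  );

  // The nasty bit: merge the new page's SSR data into the cache *during*
  // render, before the page component reads from it.
  useMemo(() => {
    const existingCache = client.cache.extract();
    client.cache.restore({ ...existingCache, ...initialCacheState });
  }, [client, initialCacheState]);

  return client;
}
```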
0176792cb9 Delete usePageTitle
We got rid of all its call sites, hooray!
2022-09-15 04:04:13 -07:00
16c9e1a25d Simplify page title & SSR for saved outfit
Now we can just use our usual pattern: preload some GraphQL data, and render the title and such in the page component itself! Whew!
2022-09-15 04:03:51 -07:00
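As a rough illustration of the "render the title in the page component" pattern, using `next/head` (this is a guess at the shape, and the query and field names are assumptions):

```tsx
import Head from "next/head";
import { gql, useQuery } from "@apollo/client";

const OUTFIT_PAGE_QUERY = gql`
  query SavedOutfitPageTitle($id: ID!) {
    outfit(id: $id) {
      id
      name
    }
  }
`;

export default function SavedOutfitPage({ outfitId }: { outfitId: string }) {
  const { data } = useQuery(OUTFIT_PAGE_QUERY, { variables: { id: outfitId } });
  const outfitName = data?.outfit?.name;

  return (
    <>
      <Head>
        <title>{outfitName ? `${outfitName} | Dress to Impress` : "Dress to Impress"}</title>
      </Head>
      {/* …the rest of the outfit page… */}
    </>
  );
}
```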
e42f39f49b Use new page title syntax for list pages
I'm not gonna do SSR here, the pages aren't really designed for partial loading state yet. It could be done, but I'm too sleepy! And it's too much refactor at once.
2022-09-15 03:38:42 -07:00
70fc8cdefe Use new page title syntax for home page
In this case it's more that the homepage doesn't need to do a reset anymore; our titles were always a bit too mutate-y 😅
2022-09-15 03:31:51 -07:00
73e7c8e8a3 Use new page title syntax for modeling 2022-09-15 03:31:00 -07:00
72211ae95a SSR for item trades page
I'm just moving through and using the new page title syntax, and getting some SSR in while I'm at it uwu
2022-09-15 03:30:14 -07:00
544a158f66 Oops, stop clobbering the Apollo client on nav
Ahhh right, a new `initialCacheState` value comes in on every navigation, so if our memoized Apollo client depends on that value, then it's gonna keep getting reset, and thereby dumping everything out of its cache. Rude.

This solution is clearly incomplete, the ideal would be to merge the SSR'd data into the cache each time. But it should be fine in practice I think, we already have good coverage of preloading stuff via GraphQL anyway!
2022-09-15 03:29:07 -07:00
c5bd2695f6 A bit of SSR for the item page
Always been a bit annoyed to have even the item name load in so weird and slow 😅 this fixes it to come in much faster!

This also allows us to SSR the item name in the page title, since we've put it in the GraphQL cache at SSR time!
2022-09-15 03:05:14 -07:00
2887d952de Fix /outfits/new init + add more SSR
Whew, setting up a cute GraphQL SSR system! I feel like it strikes a good balance of not having actually too many moving parts, though it's still a bit extensive for the problem we're solving 😅

Anyway, by doing SSR at _all_, we solve the problem where Next's "Automatic Static Optimization" was causing problems by setting the outfit state to the default at the start of the page load.

So I figured, why not try to SSR things _good_?

Now, when you navigate to the /outfits/new page, Next.js will go get the necessary GraphQL data to show the image before even putting the page into view. This makes the image show up all snappy-like! (when images.neopets.com is behaving :p)

We could do this with the stuff in the items panel too, but it's a tiny bit more annoying in the code right now, so I'm just gonna not worry about it and see how this performs in practice!

This change _doesn't_ include making the images actually show up before JS loads in, I assume because our JS code tries to validate that the images have loaded before fading them in on the page. Idk if we want to do something smarter there for the SSR case, to try to get them loading in faster!
2022-09-15 02:46:14 -07:00
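A sketch of the general `getServerSideProps` + Apollo prefetch shape being described; the query, the endpoint, and the variable handling are assumptions, not the actual code:

```ts
import type { GetServerSideProps } from "next";
import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

// Hypothetical query: just enough outfit data to render the image.
const OUTFIT_PAGE_SSR_QUERY = gql`
  query OutfitPageSSR($speciesId: ID!, $colorId: ID!) {
    petAppearance(speciesId: $speciesId, colorId: $colorId) {
      id
    }
  }
`;

export const getServerSideProps: GetServerSideProps = async ({ query }) => {
  const client = new ApolloClient({
    uri: process.env.GRAPHQL_ENDPOINT, // assumption: wherever the GraphQL API lives
    cache: new InMemoryCache(),
    ssrMode: true,
  });

  // Run the query on the server, so its results are in the cache before the
  // page ever renders on the client.
  await client.query({
    query: OUTFIT_PAGE_SSR_QUERY,
    variables: { speciesId: query.species, colorId: query.color },
  });

  // The page's Apollo client merges this into its own cache before first render.
  return { props: { initialCacheState: client.cache.extract() } };
};
```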
1163d41d32 Uninstall react-router-dom
Begone!!
2022-09-15 00:44:33 -07:00
38170bfbb2 Migrate home page to Next.js routing
I think that means we're done? :3 Gonna uninstall react-router-dom next.
2022-09-15 00:43:05 -07:00
eb602556bf [WIP] Migrate outfit page, with known bug
Okay so there's a bug here where navigating directly to /outfits/new?species=X&color=Y will reset to a Blue Acara, because Next.js statically renders the Blue Acara on build, and then rehydrates a Blue Acara on load, and then updates the real page query in—and our state management for outfits doesn't *listen* to URL changes, it only *emits* them.

It'd be good to consider like… changing that? It's tricky because our state model is… not simple, when you consider that we have both local state and URL state and saved-outfit state in play. But it could be done! But there might be another option too. I'll take a look at this after moving the home page, which will give me the chance to see what the experience navigating in from there is like!
2022-09-15 00:27:49 -07:00
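One possible shape for the "listen to URL changes too" idea, using `next/router`; the `applyOutfitUrlState` callback is hypothetical:

```ts
import { useEffect } from "react";
import { useRouter } from "next/router";

function useOutfitStateFromUrl(
  applyOutfitUrlState: (params: { species?: string; color?: string }) => void,
) {
  const router = useRouter();

  useEffect(() => {
    // `isReady` flips to true once Next.js has the real query params, rather
    // than the statically-rendered defaults.
    if (!router.isReady) {
      return;
    }
    const species = typeof router.query.species === "string" ? router.query.species : undefined;
    const color = typeof router.query.color === "string" ? router.query.color : undefined;
    applyOutfitUrlState({ species, color });
  }, [router.isReady, router.query.species, router.query.color, applyOutfitUrlState]);
}
```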
5d28c36e8a [WIP] Migrate single-list page to Next.js routing
Hey we're getting real close! :3

I accepted a small bug here where clicking the breadcrumb to "Items X wants" won't scroll down if the page isn't loaded yet. (e.g. you landed on this list page first). If you came *from* the lists index page though, then when you go back your stuff will be there already, so you should be fine. (It might also happen if the page loads fast enough, which in prod it might do?)

Just gonna leave it for now, because the workaround would be a lot! (have the page re-check the anchor once it's done loading)
2022-09-14 23:18:13 -07:00
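A sketch of the workaround mentioned above (re-check the anchor once loading finishes); the `isLoading` flag stands in for whatever loading state the page already has:

```ts
import { useEffect } from "react";

function useScrollToAnchorOnceLoaded(isLoading: boolean) {
  useEffect(() => {
    if (isLoading) {
      return;
    }
    const anchorId = window.location.hash.replace(/^#/, "");
    if (!anchorId) {
      return;
    }
    // Now that the content exists on the page, the anchor target is findable.
    document.getElementById(anchorId)?.scrollIntoView();
  }, [isLoading]);
}
```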
e1ebb0eb9a [WIP] Migrate trade lists page to Next.js routing
We're getting so close! :3

There were some shared components in `UserItemListPage` that needed updating too, even though the rest of the page isn't migrated yet.
2022-09-14 23:04:58 -07:00
750ca208f1 [WIP] Migrate item trade pages to Next.js routing
One little tricky thing here was moving the `[itemId].tsx` page into the folder as `index.tsx`! Because we didn't have subpages before but now we do!
2022-09-14 22:56:45 -07:00
17a7e2de81 [WIP] Refactor to renderWithLayout function
Okay, when I saw the recipe in the Next.js docs with `getLayout`, I was like "psh this API is so confusing, this should just be a component"

anyway now we see why it wasn't a component: the _whole point_ of it was to circumvent the usual React diffing algorithm's belief that two different components _can't_ ever share UI. But here we were, making different `layoutComponent`s that were meant to share UI, lol!

Anyway, if you just _return JSX in a function_, the React diffing algorithm never sees that it came from a different place, so it's generous when diffing them. Neat!

But I still changed the recipe's `getLayout` name to `renderWithLayout`, because it just confused me so much at first lol, I thought it was going to like, return a layout function? This is much clearer verbing to me imo
2022-09-14 22:50:56 -07:00
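For reference, this is roughly the per-page-layouts recipe from the Next.js docs, with `getLayout` renamed to `renderWithLayout`; the component and type names here are illustrative:

```tsx
// pages/_app.tsx (sketch)
import type { AppProps } from "next/app";
import type { NextPage } from "next";
import type { ReactElement, ReactNode } from "react";

type NextPageWithLayout = NextPage & {
  renderWithLayout?: (page: ReactElement) => ReactNode;
};

type AppPropsWithLayout = AppProps & {
  Component: NextPageWithLayout;
};

export default function App({ Component, pageProps }: AppPropsWithLayout) {
  // Default: render the page as-is. Pages that want a shared layout assign
  // their own `renderWithLayout` function, which returns JSX wrapping the page
  // in the layout, so React's diffing still sees the same layout across pages.
  const renderWithLayout =
    Component.renderWithLayout ?? ((page: ReactElement) => page);
  return renderWithLayout(<Component {...pageProps} />);
}
```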
f1cfd1ac8f [WIP] Migrate /items/search to Next.js routing
Okay I actually screwed up the layouts thing a bit! Because right, they need to *share* a LayoutComponent in order to share the UI across the pages. This gets a bit tricky with wanting to change the margin, too. I'll address this with an upcoming refactor!
2022-09-14 22:44:48 -07:00
58edba6983 [WIP] Remove localStorage SSR error
Thankfully this wasn't a crasher since it was already in a try/catch, but it was logging a failure that didn't need to be logged! If `localStorage` isn't available (e.g. SSR), just use the initial value.
2022-09-14 22:35:33 -07:00
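A sketch of the fix described: if `localStorage` isn't available (e.g. during SSR), just fall back to the initial value without logging; the hook name and shape are illustrative, not the actual code:

```ts
import { useState } from "react";

function useLocalStorageValue<T>(key: string, initialValue: T): T {
  const [value] = useState<T>(() => {
    if (typeof window === "undefined") {
      // Server-side render: there's no localStorage here, and that's fine.
      return initialValue;
    }
    try {
      const storedJson = window.localStorage.getItem(key);
      return storedJson != null ? (JSON.parse(storedJson) as T) : initialValue;
    } catch (error) {
      // A genuinely unexpected failure is still worth logging.
      console.error(error);
      return initialValue;
    }
  });
  return value;
}
```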