Finally playing with this, now that we've been doing paginated search results in the main element! Let's see how it goes 😳
I made a thing to make the pagination toolbar smaller (might want to do that on the mobile view too?), and also to put the search suggestions in a popover floating at the top of the search box.
I tried to do this earlier, but the caching problem from the previous commit (where we weren't including `id` for the search result in the GQL query) was causing it to do a like, infinite loop thing, where the preload results would cache-invalidate the current results, and so the 3 queries would just fight for which one's in the cache?
But now that caching is working, this is working too! Makes it all feel a lot snappier :3
Apollo Client is pretty darn reliant on an `id` field for effective caching, more often than you'd think!
Before this change, navigating back to a page you'd already loaded would cause it to reload. After this change, it no longer does, and serves the page from cache instead!
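For reference, the shape of the fix is roughly this—field names here are illustrative, not the real schema—the important part is that `id` is in the selection, so Apollo can normalize each result into its own cache entry:

```js
import { gql } from "@apollo/client";

// Illustrative query, not the real one: with `id` included, Apollo Client can
// normalize each item into its own cache entry, so the preloaded next page and
// the current page share objects instead of invalidating each other.
const SEARCH_RESULTS_QUERY = gql`
  query SearchResults($query: String!, $offset: Int!, $limit: Int!) {
    itemSearch(query: $query, offset: $offset, limit: $limit) {
      items {
        id # <- the magic ingredient for stable caching
        name
        thumbnailUrl
      }
    }
  }
`;
```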
We also didn't need the query one, because we now `key` the `SearchResults` by the query, so the container becomes empty-then-full-again, which resets scroll back to top.
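The `key` trick looks roughly like this (a sketch, not the actual component):

```js
// When `query` changes, React unmounts and remounts SearchResults, which
// empties the container and naturally resets the scroll position to the top.
function SearchPanel({ query }) {
  return <SearchResults key={query} query={query} />;
}
```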
idk this has been a long-time popular request, so I'm just gonna like. throw it all the way out there. and see what people think of it
I'm a bit worried it might change up the mobile experience too much? But like. let's find out!
My intention is to move this out of PaginationToolbar entirely, so that it becomes a component we can reuse in a non-URL-state setting. (I'm looking at using pagination for the wardrobe item search is why!)
We do a thing where we sometimes proactively update an appearance layer's manifest from images.neopets.com when it's been a while since the last time, _during_ user requests.
But when images.neopets.com is being slow, this makes our API requests about appearances super slow, too!
In this change, we add a 2-second timeout to those requests. That should be plenty for when images.neopets.com is in a good mood, but gives up fast enough for the site to not feel miserable lol :p (especially when the "Use DTI's image archive" option is on!)
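The gist of it, as a sketch (the function name and the real URL handling are assumptions, the actual code lives elsewhere):

```js
import fetch from "node-fetch";

// Abort the manifest request after 2 seconds, and treat a timeout as
// "no update this time" rather than as an error.
async function loadManifestWithTimeout(manifestUrl, timeoutMs = 2000) {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(manifestUrl, { signal: controller.signal });
    if (!res.ok) return null;
    return await res.json();
  } catch (error) {
    // AbortError means images.neopets.com is being slow: just skip the update.
    if (error.name === "AbortError") return null;
    throw error;
  } finally {
    clearTimeout(timeout);
  }
}
```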
Should be a smooth drop-in replacement, we give the field an alias `imageUrl` in the query, so the rest of the app is none the wiser!
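Something like this, with made-up field names just to show the alias trick:

```js
import { gql } from "@apollo/client";

// Hypothetical fragment: the server-side field is new, but aliasing it as
// `imageUrl` means components keep reading `layer.imageUrl` like always.
const appearanceLayerFragment = gql`
  fragment AppearanceLayerForImage on AppearanceLayer {
    id
    imageUrl: imageUrlV2(size: SIZE_600)
  }
`;
```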
I didn't test the layer upload cache invalidation, but it seems pretty obvious to me, so ehh I'm just shipping it lmao
This'll both hide sections that are empty (which just wasn't plausible for a long time), and print a happy lil message if there's no sections to show at all!
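The logic is basically this (made-up names, but you get the idea):

```js
// Loose sketch: skip sections with no items, and show a happy little message
// when every section turned out to be empty.
function ItemListSections({ sections }) {
  const visibleSections = sections.filter((section) => section.items.length > 0);

  if (visibleSections.length === 0) {
    return <p>Nothing to show here yet, but I bet there will be soon! 💖</p>;
  }

  return visibleSections.map((section) => (
    <section key={section.id}>
      <h2>{section.title}</h2>
      <ul>
        {section.items.map((item) => (
          <li key={item.id}>{item.name}</li>
        ))}
      </ul>
    </section>
  ));
}
```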
We seem to have everything modeled now, and we have automatic modeling, so like… this is not a useful link for the general public anymore!
Instead, we'll just keep it a secret for us to check on the state of things!
Okay, this is gonna be a drop-in new backend for impress-asset-images.openneo.net, to enable Classic DTI to use the same images as DTI 2020!
This will enable us to stop generating images and uploading them to S3 just for Classic's sake, so we can turn those background processes off! And the new modeling script skips that anyway, so this is an important compatibility step for the new data that went out today!
We're gonna update impress-asset-images.openneo.net to perform redirects and stuff, so Classic DTI can start using the same images that DTI 2020 does.
That should enable us to stop relying on AWS for images, which is important because the new modeling script breaks that anyway :p but this will also let us turn off the image converters that run in the background all the time, and I'm excited for that too!
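The rough shape I have in mind—the route format, lookup helper, and URLs here are guesses for illustration, not the real ones:

```js
import express from "express";

const app = express();

// Placeholder lookup: in reality this would resolve the asset's current image
// URL from the same data DTI 2020 uses.
async function lookUpImageUrl(assetId, size) {
  return `https://example.com/asset-images/${assetId}/${size}.png`;
}

// Redirect old impress-asset-images requests to wherever DTI 2020 already
// serves the equivalent image from.
app.get("/assets/:assetId/:size", async (req, res) => {
  const { assetId, size } = req.params;
  const imageUrl = await lookUpImageUrl(assetId, size);
  if (imageUrl == null) {
    return res.status(404).send("Image not found");
  }
  return res.redirect(302, imageUrl);
});

app.listen(3000);
```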
It seems to be working!! How exciting!! I'm just letting it run on stuff now :3
One important issue is that Classic DTI doesn't show images for items modeled this way, because we don't download the SWFs for it. But I wanna update it to stop using AWS anyway and do the same stuff 2020 does, I think we can do that pretty sneakily!
Ok so, I kinda assumed that the query engine would only compute `all_species_ids_for_this_color` for the rows we actually returned, and it's a fast subquery, so it'd be fine. But that was wrong! I think the query engine was computing it for _every_ item, and _then_ filtering stuff out with `HAVING`. Which makes sense from its perspective: the `HAVING` clause references it, so it has to compute it before it can filter!
In this change, we inline the subquery, so it only gets called if the other conditions in the `HAVING` clause don't fail first. That way, it only gets run when needed, and the query runs like 2x faster (~30sec instead of ~60sec), which gets us back inside some timeouts that were triggering around 1 minute and making the page fail.
However, this meant we no longer return `all_species_ids_for_this_color`, which we actually use to determine which species are _left_ to model for! So now, we have a loader that also basically runs the same query as that condition subquery.
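Here's the shape of the change, with an illustrative schema rather than our real tables (and a simple count standing in for the real `all_species_ids_for_this_color` column):

```js
// Before: the correlated subquery is a selected column, so it runs for every
// grouped row, and only *then* does HAVING filter most of them out.
const beforeQuery = `
  SELECT items.id, modelings.color_id,
    COUNT(DISTINCT modelings.species_id) AS modeled_species_count,
    (SELECT COUNT(*) FROM color_species
       WHERE color_species.color_id = modelings.color_id
    ) AS all_species_count_for_this_color
  FROM items
    INNER JOIN modelings ON modelings.item_id = items.id
  GROUP BY items.id, modelings.color_id
  HAVING modeled_species_count < ?  -- cheap stand-in for the query's other checks
     AND modeled_species_count < all_species_count_for_this_color;
`;

// After: same filtering, but the subquery is inlined into HAVING, after the
// cheap condition, so it only gets evaluated for rows that survive it.
const afterQuery = `
  SELECT items.id, modelings.color_id,
    COUNT(DISTINCT modelings.species_id) AS modeled_species_count
  FROM items
    INNER JOIN modelings ON modelings.item_id = items.id
  GROUP BY items.id, modelings.color_id
  HAVING modeled_species_count < ?  -- cheap stand-in for the query's other checks
     AND modeled_species_count <
       (SELECT COUNT(*) FROM color_species
          WHERE color_species.color_id = modelings.color_id);
`;
```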
A reasonable question would be, at this point, is the `HAVING` clause a good idea? would it be simpler to do the filtering in JS?
and I think it might be simpler, but I would guess noticeably worse performance, because I think we really do filter out a _lot_ of results with that `HAVING` clause—like, basically all items, right? So to filter on the JS side, we'd be transferring data for all items over the wire, which… like, that's not even the worst dealbreaker, but it would certainly be noticed. This hypothesis could be wrong, but it's enough of a reason for me to not bother pursuing the refactor!
We now support returning `null` from `petAppearance` when a pet genuinely has no customization data.
We also deprecate some old fields, and update our own call site to match.
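In schema terms, the change is roughly this (example fields, not the exact diff):

```js
// Example-only SDL: `petAppearance` becomes nullable for pets with no
// customization data, and an old field gets the @deprecated nudge so tooling
// points callers toward the replacement.
const typeDefs = `
  type Query {
    # Now nullable: a pet can genuinely have no customization data.
    petAppearance(speciesId: ID!, colorId: ID!, pose: Pose!): PetAppearance

    petAppearanceById(id: ID!): PetAppearance
      @deprecated(reason: "Example deprecation: use petAppearance instead.")
  }
`;
```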
The item page got into a weird situation where this setting seemed to cause loading things _about_ `currentUser` to make the `useCurrentUser` GQL query flake out and say it was loading. This would make the layout bounce around a bunch while it tried to decide whether to show you the buttons or not.
I still don't love the UX of not having any loading state after login, but like… eh it's certainly better lmao
I'm not gonna do SSR here, the pages aren't really designed for partial loading state yet. It could be done, but I'm too sleepy! And it's too much refactor at once.
Always been a bit annoyed to have even the item name load in so weird and slow 😅 this fixes it to come in much faster!
This also allows us to SSR the item name in the page title, since we've put it in the GraphQL cache at SSR time!
Whew, setting up a cute GraphQL SSR system! I feel like it strikes a good balance of not having actually too many moving parts, though it's still a bit extensive for the problem we're solving 😅
Anyway, by doing SSR at _all_, we solve the problem where Next's "Automatic Static Optimization" was causing problems by setting the outfit state to the default at the start of the page load.
So I figured, why not try to SSR things _good_?
Now, when you navigate to the /outfits/new page, Next.js will go get the necessary GraphQL data to show the image before even putting the page into view. This makes the image show up all snappy-like! (when images.neopets.com is behaving :p)
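One way to picture the wiring, as a sketch with an assumed `getApolloClient` helper and an abbreviated query (the real setup has more moving parts):

```js
import { gql } from "@apollo/client";

export async function getServerSideProps(context) {
  const client = getApolloClient(); // assumed helper that builds an SSR-side client

  // Run the outfit's GraphQL query on the server, so the data is already in the
  // cache when the page hydrates and the image can render right away.
  await client.query({
    query: gql`
      query OutfitPageSSR($speciesId: ID!, $colorId: ID!) {
        petAppearance(speciesId: $speciesId, colorId: $colorId) {
          id
          layers {
            id
            imageUrl
          }
        }
      }
    `,
    variables: {
      speciesId: context.query.species,
      colorId: context.query.color,
    },
  });

  // The page's ApolloProvider can restore this cache state on the client.
  return { props: { graphqlState: client.cache.extract() } };
}
```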
We could do this with the stuff in the items panel too, but it's a tiny bit more annoying in the code right now, so I'm just gonna not worry about it and see how this performs in practice!
This change _doesn't_ include making the images actually show up before JS loads in, I assume because our JS code tries to validate that the images have loaded before fading them in on the page. Idk if we want to do something smarter there for the SSR case, to try to get them loading in faster!
Okay so there's a bug here where navigating directly to /outfits/new?species=X&color=Y will reset to a Blue Acara, because Next.js statically renders the Blue Acara on build, and then rehydrates a Blue Acara on load, and only then swaps the real page query in—and our state management for outfits doesn't *listen* to URL changes, it only *emits* them.
It'd be good to consider like… changing that? It's tricky because our state model is… not simple, when you consider that we have both local state and URL state and saved-outfit state in play. But it could be done! But there might be another option too. I'll take a look at this after moving the home page, which will give me the chance to see what the experience navigating in from there is like!
Hey we're getting real close! :3
I accepted a small bug here where clicking the breadcrumb to "Items X wants" won't scroll down if the page isn't loaded yet (e.g. you landed on this list page first). If you came *from* the lists index page though, then when you go back your stuff will be there already, so you should be fine. (The scroll might also still work if the page loads fast enough, which in prod it might?)
Just gonna leave it for now, because the workaround would be a lot! (have the page re-check the anchor once it's done loading)
We're getting so close! :3
There were some shared components in `UserItemListPage` that needed updating too, even though the rest of the page isn't migrated yet.
Okay I actually screwed up the layouts thing a bit! Because right, they need to *share* a LayoutComponent in order to share the UI across the pages. This gets a bit tricky with wanting to change the margin, too. I'll address this with an upcoming refactor!
Thankfully this wasn't a crasher since it was already in a try/catch, but it was logging a failure that didn't need to be logged! If `localStorage` isn't available (e.g. SSR), just use the initial value.
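The fix is basically this shape (simplified from the real hook):

```js
// If `localStorage` isn't available, as during SSR, quietly fall back to the
// initial value instead of logging the thrown error.
function readStoredValue(key, initialValue) {
  if (typeof localStorage === "undefined") {
    return initialValue; // e.g. during SSR: not an error, just not available!
  }
  try {
    const item = localStorage.getItem(key);
    return item != null ? JSON.parse(item) : initialValue;
  } catch (error) {
    console.error(`Error reading localStorage key "${key}":`, error);
    return initialValue;
  }
}
```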
The tricky part here was that `returnPartialData` seems to behave differently during SSR. On the page itself, this seems to cause us to always get back at least an empty object, but in SSR we can sometimes get null—which means that a LOT of code that expects the item object to exist while in loading state gets thrown off.
To keep this situation maximally clear, I added a bunch of null handling with `?.` to `ItemPageLayout`. An alternative would have been to check for null and substitute an empty object, but this feels more resilient and more true to the situation.
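So the pattern throughout `ItemPageLayout` is roughly this (prop and component names illustrative):

```js
import { Heading, Skeleton } from "@chakra-ui/react";

// `item` can be null during SSR when `returnPartialData` gives us nothing yet,
// so every read goes through `?.` and falls back to a loading shape.
function ItemPageHeading({ item }) {
  return (
    <Heading>{item?.name ?? <Skeleton width="12em">…</Skeleton>}</Heading>
  );
}
```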
The search bar here is a bit tricky, but is pretty straightforwardly adapted from how we did the layouts in App.js. Fingers crossed that it works as smoothly as expected when the search page is migrated too! (Right now typing in there is all messy because it hops over to the fallback route and does its whole separate thing.)