Hmm, the item page in prod is slower than it is in dev? In dev, most items are satisfied by the preloading in ItemPagePreview, but in prod, those same items need to send a separate OutfitItemsAppearance query _way_ after (which, I think just due to queueing, has to wait for all the other items to load, too).
There's an obvious issue in the case of all the Maraquan items lately, because we just don't do the clever cache lookups for non-standard colors at all. But I don't understand why even standard items like the 17th Birthday Party Hat are struggling!
These are just some simple debug statements, hopefully they'll tell us something about the basics of what's happening!
I didn't want to use the word "basic", since "basic colors" generally means like Blue, Red, Green, Yellow… but it was the only one that fit in the space lol
I tried a lot of variations on "Fits standard pets" and similar, and couldn't get them to work well
Just a little display bug on the homepage. For an item like the "Evil Coconut Half Mask", which was specifically drawn for the standard _and_ major special colors, our previous logic would have said "Baby only" or "Maraquan only" or whatever special color it happened to find first.
Now, we only show e.g. "Baby only" if the item _doesn't_ also fit standard pets.
Note that the Maraquan case is tricky, because the Blue Mynci can also wear Maraquan items lol! For this reason, we check for two standard bodies before declaring that it's meant for standard pets.
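For my own reference, the shape of the check is roughly this sketch (the `itemAppearances` / `standardBodyIds` shapes here are illustrative, not the real data model):

```js
// Rough sketch only: data shapes are illustrative.
function fitsStandardPets(itemAppearances, standardBodyIds) {
  const bodyIds = itemAppearances.map((appearance) => appearance.body.id);
  const numStandardBodies = bodyIds.filter((id) =>
    standardBodyIds.has(id)
  ).length;
  // One standard body isn't enough, because the Blue Mynci can wear Maraquan
  // items; require at least two before calling it a standard-pets item.
  return numStandardBodies >= 2;
}
```

If that comes back false, we label it by the special color it was drawn for, like "Baby only".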
I wasn't sure how to fill the space for items that are fully modeled, then realized some basic at-a-glance "who does this fit" would help!
The load time isn't great; I think I need to break out that dependent subquery, but maybe the stale-while-revalidate will cover it well enough at first.
Add a skeleton stripe for the modeling data! Won't show up in most cases because we load fast, but it helps things a lot when it does. (Also, will we keep loading fast with the cache changes on this query?)
To make this fast, I had to tweak the GraphQL resolver a bit to run a filtered version of the query for `newestItems` instead of scanning the full database! But yeah, looking good!
I think I'm gonna want to swap out "Fully modeled" for some insight about who it fits
Okay cool, I noticed that "A Warm Winters Night Background" sometimes animates when other things are playing, but the animations aren't _detected_. (Huh, I actually thought we just didn't schedule ticks in that case? But maybe I'm missing something.)
Anyway, some movies don't use the built-in frames construct to animate, and instead use tweens that hook into the timeline and mutate the stage. Okay! Now we detect those.
This _did_ enable the Play/Pause button on some items that don't actually animate in practice, like the "#1 Fan Room Background", which seems to have an animated string of lights in the corner that got layered incorrectly. Maybe we should add a new glitch type, to flag movies that don't actually animate?
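For reference, the new check is shaped something like this sketch. I'm assuming the CreateJS timeline exposes its tween list (written here as `timeline.tweens`, which may vary by TweenJS version), so treat the details as approximate:

```js
// Sketch only: `timeline.tweens` is an assumption about the TweenJS API.
function hasAnimations(movieClip) {
  return (
    // Some animations are expressed as multiple frames…
    movieClip.totalFrames > 1 ||
    // …some are expressed as tweens that hook into the timeline and mutate
    // the stage, even on a single "frame"…
    (movieClip.timeline?.tweens?.length ?? 0) > 0 ||
    // …and either kind can be nested inside a child movie clip.
    movieClip.children.some(
      (child) => child.children != null && hasAnimations(child)
    )
  );
}
```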
Doing this for two reasons! One is that I want the movie layer component to be a bit thinner in general - I think we might even want to move the fallback image logic out, too.
The second is that I want to use the `onError` for something else soon!
Huh, weird. So I reversed the manifest, because you want to get the *last* movie. And I figured that semantic probably extended to PNGs and SVGs too?
But actually, PNGs sometimes have *other* PNGs in the manifest that aren't the relevant asset at all, and are just reference art.
Again, I'm really not sure what the underlying semantic is here? Does the Neopets customizer just display them all, and for the items with this problem, they happen to layer in a way that's not broken?? I would really like to not do that, and I would really like to know the real semantic, but I can't find it >.>
So um, I'm going ahead and using the best semantic that solves the problems I know about? Which is, use the last movie, and use the first PNG. Fingers crossed lol!
I also didn't test this change extensively, because I'm on a train lol
I'm just trusting that this push will be better than what we had before. I tested it on the Dandan MME, which has two JS files, and it took the latter; and the Pathway of Petals Background, which has two PNG files, and it took the former. Success? 😬🤞
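For posterity, the rule I landed on looks roughly like this (the manifest shape here is illustrative, not the real parsing code):

```js
// Sketch only: assume a manifest shaped like { assets: [{ format, url }] }.
function pickAssetsFromManifest(manifest) {
  const movies = manifest.assets.filter((asset) => asset.format === "js");
  const images = manifest.assets.filter((asset) => asset.format === "png");
  return {
    // Use the *last* movie: that's the one that worked in the cases I tested,
    // like the Dandan MME's two JS files.
    movie: movies[movies.length - 1] ?? null,
    // But use the *first* PNG: later PNGs are sometimes just reference art,
    // like the Pathway of Petals Background's two PNG files.
    image: images[0] ?? null,
  };
}
```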
Oops, my inbox was getting full of uncaught promise rejections of `loadImage`!
I'm pretty sure they're caused when multiple images in a movie fail to load (e.g. network problems), but we fail to cancel them. So, the first failure would be caught as a part of `Promise.all` in `loadMovieLibrary`, but then subsequent failures wouldn't be caught by anything, and would propagate up to the console and to Sentry as uncaught errors.
In this change, we make a number of improvements to cancellation. The most relevant change for this bug is that `loadMovieLibrary` will now automatically cancel all resource promises when it throws an error! But this improved robustness also enabled us to finally offer a simple `cancel()` method on movie library promises, which we now avail ourselves of at call sites, too.
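The core of the fix is shaped like this sketch (the `loadScript` / `loadImage` helpers here are simplified stand-ins for the real resource loaders, each returning a promise with a `cancel()` method):

```js
// Sketch only: simplified from the real loaders.
async function loadMovieLibrary(librarySrc, imageSrcs) {
  const resourcePromises = [
    loadScript(librarySrc),
    ...imageSrcs.map((src) => loadImage(src)),
  ];
  try {
    const [library, ...images] = await Promise.all(resourcePromises);
    return { library, images };
  } catch (error) {
    // If any resource fails, cancel the rest, and swallow their own
    // rejections so they don't surface later as uncaught errors.
    for (const promise of resourcePromises) {
      promise.cancel();
      promise.catch(() => {});
    }
    throw error;
  }
}
```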
Experiment! Let's see if them being more prominent like this is helpful or annoying 😅
I think this is clunkier in the HTML5 Green Happy Path, but worth it for bringing attention in the error cases.
But I feel like we might tweak this over time!
Huh, so it turns out sometimes the manifest will include old broken conversion attempts!
This fixed the "MiniMME18-S2c: Holomorphic Foliage and Dandan Set", the "Electric Dress" on various species (incl. Aisha), and yeah!
What an interesting discovery 😂
This is because I want to try adding a search footer to the two-column layout, like in Classic DTI—and so I want more screens that _can_ support two-column layout to use it.
Right, cool, yes, this is the thing about partial data; you need to define the loading condition as "relevant data is missing, _and_ loading is still happening".
Ah right okay, when the `ItemSearchResultV2` doesn't have an `id`, Apollo Cache isn't quite so strong about caching conflicting-y fields, like the different parameterizations of `items`.
With this change, we give the search result object an ID, which helps Apollo cache more confidently!
It's just a serialization of the relevant search fields 😅
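Concretely, building the ID on the server is just something like this (the parameter list here is illustrative):

```js
// Sketch only: the real resolver has more search parameters; the point is
// that the ID is a stable serialization of the search, so Apollo can
// normalize the result object and key its paginated `items` fields to it.
const resolvers = {
  Query: {
    itemSearchV2: (_, { query, itemKind, zoneIds }) => {
      const id = JSON.stringify({ query, itemKind, zoneIds });
      return { id, query, itemKind, zoneIds };
    },
  },
};
```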
That's the last itemSearch call site! I'll probably keep it up for other clients for a while though, esp since it doesn't depend on any additional loaders or anything; it's pretty small overall.
Updated the comments to reflect this, and also remembered to make them real docstrings lol!
Now, when you click Prev/Next, we show the page number while the items load, rather than blink it in and out!
This is because we're using itemSearchV2, which makes `numTotalItems` cacheable separately from the paginated `items`. Apollo Cache pretty much does this with zero config, we just have to ask for `returnPartialData`!
The main change is that we restructure the query, so that only the parts that are actually affected by pagination depend on those variables!
This will enable the Apollo Cache to trivially cache and show `numTotalItems` while waiting for other pages to load.
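Roughly, the restructured query looks like this sketch (field names are approximate), plus `returnPartialData: true` on the `useQuery` call:

```js
// Sketch only: field names are approximate. `numTotalItems` takes no
// arguments, so the cache shares it across pages, while `items(offset,
// limit)` is cached per pagination arguments.
const SEARCH_QUERY = gql`
  query SearchPanel($query: String!, $offset: Int!, $limit: Int!) {
    itemSearchV2(query: $query) {
      id
      numTotalItems
      items(offset: $offset, limit: $limit) {
        id
        name
        thumbnailUrl
      }
    }
  }
`;

// …then, inside the component:
const { loading, data } = useQuery(SEARCH_QUERY, {
  variables: { query, offset, limit },
  // Let Apollo return cached fields (like numTotalItems) immediately, even
  // while the new page of `items` is still loading.
  returnPartialData: true,
});
```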
I'm gonna make this a bit more powerful later, but just for now, the text "Page 1 of 27" shows up!
I also don't like that the page number has to blink out while we load the new stuff; there are multiple solutions, but tbh I think the Apollo Cache should be the one to handle this, and that we can do it by refactoring the query structure a bit!
I'm seeing uncaught promise rejections in `loadImage`? It's hard to know exactly where they're actually coming from; those _should_ be caught?
My guess is that it's coming from canceled images, which are throwing errors even after loading? I don't totally understand how, because looking back, I don't think the `cancel` method was actually called???
Anyway, I fixed it so cancel actually _is_ called, and so we don't throw errors when a canceled image _correctly_ fails to load.
This should be more robust either way, but hopefully it also stops the flow of errors?
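The gist of the new `loadImage` behavior, as a simplified sketch:

```js
// Sketch only, simplified from the real thing: once canceled, the inevitable
// load failure is expected, so we just never settle the promise instead of
// rejecting it.
function loadImage(src) {
  const image = new Image();
  let canceled = false;

  const promise = new Promise((resolve, reject) => {
    image.onload = () => resolve(image);
    image.onerror = () => {
      if (!canceled) {
        reject(new Error(`Failed to load image: ${src}`));
      }
    };
    image.src = src;
  });

  promise.cancel = () => {
    canceled = true;
    image.src = ""; // Abort the in-flight request.
  };

  return promise;
}
```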
Right, when there are zero layers, we shouldn't say we're loading!
This is a consequence of the HACK below. If we didn't short-circuit the effect when length == 0, then we would go through and successfully load 0 layers.
Movies often have a lot of assets, which are more likely to be cache misses, and take script time to render! So the time until the user sees something is often huge.
Here, we start loading our PNG image at the same time. This increases the total download size, but even on slow connections, it's generally worth it as a _sharp_ improvement in time until you get to see something!
One noteworthy UI weakness here is that we don't show _any_ loading indicator while the image is visible and the movie is still loading. This makes sense from a practical standpoint, but could be a problem when a movie takes a particularly long time. I also want to be cognizant of whether the blink-of-content ever gets annoying! (We could make it fade out 🤔)
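The arrangement is roughly this sketch (the hook and component names here are illustrative):

```js
// Sketch only: names are illustrative. The PNG renders right away, and the
// movie swaps in once its (much heavier) library finishes loading.
function ItemLayer({ layer }) {
  const { movieClip, loading } = useLayerMovie(layer); // starts loading immediately
  if (movieClip && !loading) {
    return <OutfitMovieLayer movieClip={movieClip} />;
  }
  // Until the movie is ready, the static PNG gives you something to look at.
  return <img src={layer.imageUrl} alt="" />;
}
```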
In my last change, I didn't try to change the APIs too much, and kept the concept of `crossOrigin` running through `getBestImageUrlForLayer`.
Now, I've moved the `safeImageUrl` call _outside_ `getBestImageUrlForLayer`, by putting it at the call site: We now call `safeImageUrl` from `loadImage` (which needs to know the `crossOrigin` flag anyway!), and at the `img` tag call site.
This simplifies all of the call sites a lot, I think!
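So a call site now reads roughly like this (sketch; exact signatures aside):

```js
// Sketch only: the "best image URL" helper now returns the raw URL…
const rawUrl = getBestImageUrlForLayer(layer);

// …and the proxying happens where the image is actually used: inside
// `loadImage` (which needs the CORS flag anyway, and calls safeImageUrl
// internally)…
const imagePromise = loadImage(rawUrl, { crossOrigin: "anonymous" });

// …or at the <img> tag call site:
// <img src={safeImageUrl(rawUrl)} alt="" />
```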
I've noticed that our Fastly proxy adds a surprising amount of latency on cache misses (500-1000ms). And, while our overall hit ratio of 80% is pretty good, most misses happen at inopportune times, like loading items from search.
But now that the Neopets CDN supports HTTPS, we can safely switch back to theirs for *most* image loads. (Some features, like downloads and movies, still require CORS headers, which our proxy is still responsible for adding.)
This forgoes some minor performance wins (like how the Download button now requires separate network requests), and some potential filesize reduction opportunities (like Fastly's auto-gzip, which we use for SVGs today, and eventually their Image Optimizer for assets), in exchange for lower latency. We could still potentially do something more powerful for low-power connections someday… but for now, with the cache-miss latency being *so* heavy, this seems like the clear win for almost certainly *all* users today.
I have a hunch that people aren't finding the Download button! I'm not 100% sure what to do about that, but to start, I want right-clicking the image to give you a hint about it 😅
Woo, it's the big UI experiment! Let's see how it plays for folks 😅
Scrolling through a big lists page right now, I think this is a _huge_ improvement: I can get a sense of the lists and what's in them fast, see matches fast, and dive in quickly (with no extra load time) when I want more. I'm pleased tbh!
Pulled MarkdownAndSafeHTML into a shared component, and use it on the single list page now too!
I also simplified some of the logic for the item list, because I figure we'll have to give the trade matching stuff its own pass, y'know?
Oops, I pulled `currentUserId` from the wrong place, so it was always acting as if you're logged in! Now, you can see the list page for your own private list!
I'm not sure real item data "should" ever do this? Our "Written Word Shower" had a blank description, but I think that was an error on our end.
Anyway, it's clearer than showing infinite loading, so!
Oops, okay, I guess I didn't test the new preview centering stuff with 1200x1200 images, like the Usul's Damask Markings.
Now, I apply a max size to the whole-ass container, and make the parent responsible for centering it.
So I finally started looking into the race condition that makes item previews sometimes fail to load, and as expected, it was that we were trying to load the movie before CreateJS had necessarily loaded. Usually the timing worked out, esp after a reload, but not under certain circumstances!
Anyway, I've been wanting for a while to just bundle the CreateJS scripts instead. That'll help us more eagerly load them when we need them, and not depend on external CDNs, and remove a bunch of loading state!
So yeah, I had to learn how the `easeljs` and `tweenjs` NPM packages did their bundling, and how to use `imports-loader` to let them just register straight onto `window`! But we got there and it's pretty nice tbh!
Woof, the "Swirl of Power Effect" item tanks my CPU waaay too much
(I bet it's those 7000x7000 PNGs lolol 😬)
Anyway, before thinking about optimizing specific issues, I'm just adding this emergency switch: if we detect FPS < 2 on any layer, we just pause the whole outfit, until the user decides to unpause.
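The detection is basically an FPS meter: in the real thing it's per layer, but the sketch below shows the general idea (the 2 FPS threshold is the only real specific; the rest is illustrative):

```js
// Sketch only: measure FPS over a window, and trip the kill switch if it
// drops below 2. `onLowFps` should be a stable callback (e.g. useCallback)
// that pauses the whole outfit until the user unpauses.
function useLowFpsKillSwitch(onLowFps) {
  React.useEffect(() => {
    let frameCount = 0;
    let windowStart = performance.now();
    let requestId;

    const tick = () => {
      frameCount++;
      const elapsedMs = performance.now() - windowStart;
      if (elapsedMs >= 2000) {
        const fps = frameCount / (elapsedMs / 1000);
        if (fps < 2) {
          onLowFps(fps);
          return; // Stop measuring once we've tripped the switch.
        }
        frameCount = 0;
        windowStart = performance.now();
      }
      requestId = requestAnimationFrame(tick);
    };

    requestId = requestAnimationFrame(tick);
    return () => cancelAnimationFrame(requestId);
  }, [onLowFps]);
}
```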
Oh hey, turns out I was missing a step in the movie clip stuff! Images aren't just for sprite sheets; sometimes a movie wants the raw images, too!
Here, we register them, uwu
Items like "Swirl of Power Effect" and probably others work correctly now!
That said, Swirl of Power takes WAY too much CPU lmao, I want to maybe add some kind of automatic kill switch lol
There are a couple spots where we parse SWF URLs to get the ID out! Most visibly, our Support tools were crashing on it. And internally, manifest loading wasn't working. (I'm not sure if this got caught or if it caused crashes in user space? I didn't see them when wearing a failing item)
Anyway, fixed now!
This is because it's the terminology I'm using elsewhere ("Items" and "Closet" are too overloaded in the UI), and because I want to start putting specific lists at like `/user/:userId/lists/:listId`!
I also created a redirect from the old URL, and from the DTI Classic variant of the URL.
Ah right, React state batching doesn't always work how I expect it to. The separate state caused the hook to return and cache `{loading: false, error: null, data: null}`, and then on a _later_ tick the data value showed up, but only _after_ the response was already cached!
This broke a bunch of species/color picker stuff; now it's fixed!
Okay, so getting the initial render down time for these faces is annoying, though I might come back to it…
But actually, the _worst_ part isn't the _initial_ render, which just kinda gets processed as part of the page navigation, right?
The _worst_ part is that we render it slowly _twice_: once on page load, as we send the `useAllValidPetPoses` fetch request; and then again when the fetch request ~instantly comes back from the network cache.
The fact that this requires a double-render, instead of just rendering with the cached valids data in the first place (like how our GraphQL client does), causes a second and highly-visible render of a slow-to-render UI!
So, here we update `useAllValidPetPoses` to cache its response in JS memory, similar in principle to how Apollo Client does. That way, we can return the valids instantly on the first render, if you already loaded them from the homepage or the wardrobe page or another item page!
This is a pretty easy change, that makes re-renders faster when something about the item preview state changes!
That said, the initial render is still pretty slow, too, and that's the one that's bothering me more lol
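The caching itself is just a module-level variable, like this sketch (the endpoint path and response handling here are illustrative):

```js
// Sketch only: keep the response in JS memory, so later mounts can render
// with the valids on their very first pass, with no double-render.
let cachedValidsBuffer = null;

function useAllValidPetPoses() {
  const [valids, setValids] = React.useState(cachedValidsBuffer);

  React.useEffect(() => {
    if (cachedValidsBuffer != null) {
      return; // Already cached: we rendered with it immediately.
    }
    let canceled = false;
    fetch("/api/validPetPoses")
      .then((res) => res.arrayBuffer())
      .then((buffer) => {
        cachedValidsBuffer = buffer;
        if (!canceled) {
          setValids(buffer);
        }
      });
    return () => {
      canceled = true;
    };
  }, []);

  return { loading: valids == null, valids };
}
```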
Ah oops, because I forgot to set `hiResMode` here in the image preloader, we would preload the PNG, and _then_ load the SVG separately.
This doubled the effective image loading time in Hi-Res Mode!
Now, the image preloader respects hi-res mode, and will preload the SVG in the SVG case, and the PNG in the PNG case.
If you're not in hi-res mode, then you don't care about broken SVGs, because you wouldn't have seen them anyway!
We also update the message to reference Hi-Res Mode.
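The fix itself is tiny, roughly this (names illustrative):

```js
// Sketch only: preload the same URL the real renderer will use, so we don't
// preload the PNG and then fetch the SVG separately in Hi-Res Mode.
function getPreloadUrlForLayer(layer, { hiResMode }) {
  return hiResMode && layer.svgUrl ? layer.svgUrl : layer.imageUrl;
}
```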
We're just seeing too many glitchy SVGs for my taste, esp since TNT seems to just be using PNGs for now?
This change switches us to PNGs by default, with the option to use SVGs as a new "Hi-Res Mode" setting.
This is our first ever setting, wow!
I'm also envisioning that like, if we get Fastly Image Optimizer set up, this could be a way to tune the quality of the incoming images.
We could also consider a setting to turn off animations altogether—like, just download the PNG instead of the movie, whereas right now we download the movie on the assumption that you might play it at any time.
The way we were checking for UC compatibility issues was also triggering while the appearance was still loading, when items didn't have any appearance layers yet!
Now, we check for loading before testing for that glitch.
Oh right, adding user data to this query makes it uncacheable!
Split the query into the main public data, which will cache; and the user data, which will load in later.
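Shape of the split, as a sketch (query and field names are approximate):

```js
// Sketch only: the public query has no user-specific fields, so its response
// can be cached; the user data loads in separately afterward.
const PUBLIC_ITEM_QUERY = gql`
  query ItemPagePublic($itemId: ID!) {
    item(id: $itemId) {
      id
      name
      description
      thumbnailUrl
    }
  }
`;

const USER_ITEM_QUERY = gql`
  query ItemPageUserData($itemId: ID!) {
    item(id: $itemId) {
      id
      currentUserOwnsThis
      currentUserWantsThis
    }
  }
`;
```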
I refactor the hide-badge thing into 3 "trade matching modes". Then, the logic for whether to hide specific badges moves into the component, and we use that same flag to decide whether to show the big word "match"!
Boom, cute owns/wants badges on "Latest items", and the item search page, and trade matches!
I'm gonna add some additional flair to the trade match case, too!
This has been bugging me for a while lol, the background was leaking out of the corners!
I had applied the styles to the `InputGroup` because I didn't realize how Chakra implements this… I had assumed that the left/right elements wouldn't also get the background.
But it turns out, `InputGroup` positions the left/right elements with `position: absolute`, and uses padding to create visual space for them inside the `Input` below! So, this works perfectly!
Copied styles from the similar layout in the bulk converter tool! The status will flush to the right of the field header on desktop, and move below the input on mobile.
This was a known oversight, that I've finally fixed because I realized this subquery probably would be just fine lol!
Now, instead of removing rows with _all_ species modeled, we remove rows with all species _for that color_ modeled.
This leaves the rest of the modeling list unchanged, but removes 10 Maraquan items that were done modeling but still on the list:
- Dyeworks Coral: Maraquan White Beaded Gown
- Dyeworks Green: Maraquan White Beaded Gown
- Dyeworks Lavender: Maraquan White Beaded Gown
- Dyeworks Purple: Maraquan Wig with Negg Accessory
- Dyeworks Lavender: Maraquan Sea Blue Gown
- Dyeworks Pink: Maraquan Sea Blue Gown
- Dyeworks Silver: Maraquan Sea Blue Gown
- Maraquan White Lace Gown
- Underwater Maraquan Markings
(I also went into the database and marked the "Maraquan Ocean Blue Contacts" with `modeling_status_hint = "done"`, because it's not compatible with Lutari.)
Oops, right, I forgot for a while that GraphQL fields have a special syntax for docstrings, and it's not just comments! This will help stuff show up in our GraphQL Playground API docs correctly 🥰
I'm not sure which image url is better to return from stuff like this, and I don't actually have a use case for it anymore, so let's just clear it out until we need something like it!
Oops, we get a _lot_ of outfit image requests, and it's pushing the limits of our free Honeycomb plan! But I don't really need all that much detail, because there's so many.
So, here we apply sampling! `api/outfitImage` is getting a 1/10 rate; and for GraphQL, `ApiOutfitImage` is getting 1/10, and `SearchPanel` is getting 1/5.
I had to add a `addTraceContext` call, to give all the child events awareness of what operation they're being called in, too!
I haven't actually tested that this is working-working, just that the endpoints still return good data. We'll see how it shakes out in prod!
But I did add `console.log(sampleRate, shouldSample, data);` to the `samplerHook` briefly, to see the data flow through, and I reloaded a `SearchPanel` request a few times and observed what looked like roughly a 20% sample rate.
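For reference, the sampling setup is shaped roughly like this sketch. The `samplerHook` option and `addTraceContext` are real beeline APIs, but the event field names I'm matching on here are assumptions, so double-check them against your own events:

```js
// Sketch only: field names on `data` are assumptions, not verified.
const beeline = require("honeycomb-beeline")({
  writeKey: process.env.HONEYCOMB_WRITE_KEY,
  dataset: "dress-to-impress", // illustrative dataset name
  samplerHook: (data) => {
    let sampleRate = 1;
    if (data["name"] === "api/outfitImage") {
      sampleRate = 10;
    } else if (data["app.operation_name"] === "ApiOutfitImage") {
      sampleRate = 10;
    } else if (data["app.operation_name"] === "SearchPanel") {
      sampleRate = 5;
    }
    const shouldSample = Math.random() < 1 / sampleRate;
    return { shouldSample, sampleRate };
  },
});

// …and somewhere in the GraphQL request handler, so child events know which
// operation they belong to (field name is an assumption, too):
// beeline.addTraceContext({ operation_name: operationName });
```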
Oops, I wasn't requesting `bodyId` for item layers, so the check for `layer.bodyId !== "0"` was always true—because it was always `undefined`, even when it should have been `"0"`.
This wasn't an issue on the client, because the client _does_ request `bodyId` for caching item appearances between pets of the same body, and I didn't realize that it needs to be part of this fragment too!
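The fix is just adding the field to the fragment, something like this (other fields elided; fragment name illustrative):

```js
// Sketch only: the important part is that `bodyId` is actually requested, so
// the `layer.bodyId !== "0"` check compares against a real value instead of
// `undefined`.
const itemAppearanceFragment = gql`
  fragment ItemAppearanceForOutfitImage on ItemAppearance {
    layers {
      id
      bodyId
      imageUrl
    }
  }
`;
```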
Mm right, when we first render the output, `imageUrl` is `undefined`, so the output textbox renders with `value={undefined}`, which is an "uncontrolled" component that the DOM is free to change.
In practice, this isn't an issue because the textbox has `isReadOnly`, so the user can't _actually_ change it. But it's still a good idea for consistency and clarity to use an empty string instead of `undefined`, and it removes warning spam from my console!
Oops: previously, MajorErrorMessage was willing to shrink the cute Grundo Programmer icon to make room for error messages with long, unbreakable words.
Here, we switch `1fr` for `minmax(0, 1fr)`, which allows the text zone to get smaller. (`1fr` is short for `minmax(auto, 1fr)`, which isn't capable of shrinking smaller than the natural value.)
Now, the error text is more willing to shrink (by word-wrapping) than the image is (by scaling down). Success!
Oops, my cute API idea for `speciesPickerProps` breaks `React.memo`, of course!
We could fix this by having the caller memoize the `speciesPickerProps` object, but that's too unusual and error-prone. We could also fix this by writing a custom function for `React.memo` to determine whether props match, but that seems like overkill when there's only one actual prop we're using here in practice.
So yeah, I've updated `SpeciesColorPicker` to just accept `speciesTestId` and `colorTestId` props instead!
Note that actually this component _is_ still re-rendering too often, because of a Chakra bug I just discovered and reported! So this change won't immediately improve performance, but it should stop re-rendering too often once we _also_ upgrade Chakra after this bug is fixed. https://github.com/chakra-ui/chakra-ui/issues/4080
Not using this anywhere in-app yet! But might swap it into the user outfits page, and use it to server-side-render social sharing meta tags!
Also eyeing this as a way to replace our nearly 1TB of outfit image S3 storage, and save $20/mo…
This folder will include code shared by both the client-side app and the server!
The server isn't using it yet, but it will in a new API endpoint soon! I'm doing this in a separate commit to avoid lumping all the import-change noise into that commit.
I'm doing this in preparation for an API endpoint to build outfit images by ID. It'll need the same logic to decide which layers are visible, and the same GQL fragments to load the relevant data!
Oh right, labeling PB items as NP is confusing! Here, we add a "PB" case to the lil badge on the corner of the item thumbnail, in item search page & homepage Newest Items.
Oops, we did an in-place sort on the search variables we passed to Apollo! This meant that Apollo's first read of the variables wouldn't match later reads, so it would always decide the variables had changed, causing an infinite re-render loop.
Remember to copy existing arrays before sorting! 😅
Incidentally, this only happened for Markings, by coincidence: it's the only (I think) searchable zone label with multiple zone IDs that don't sort the same alphabetically as they do numerically. This `.sort()` sorts them alphabetically, whereas they come in numerical order in `allZones`, because that's the order the GQL server returns them in `build-cached-data.js`.
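The one-line moral of the story, as a sketch (variable names illustrative):

```js
// Sort a copy, not the array we hand to Apollo as a query variable!
const sortedZoneIds = [...zoneIds].sort(); // ✅ copy first
// const sortedZoneIds = zoneIds.sort();   // ❌ mutates in place, so Apollo
//                                         //    sees "changed" variables on
//                                         //    every render → infinite loop
```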
There have been usability problems with this search filter UI, and I think they mostly come down to people accidentally selecting filters when they don't mean to—sometimes pressing Enter to indicate that they're done typing, but accidentally selecting something.
Here, we remove that behavior, and additionally add a new behavior to clear the suggestions on pressing Enter.
We've been serving images directly from `impress-asset-images.s3.amazonaws.com` for a long time. While they're served with long-lasting HTTP cache headers, and the app requests them with the `updated_at` timestamp in the query string, each GET request still executes a full S3 read operation to get the latest version.
In the past, this was only relevant to users on Image Mode, not Flash Mode. But now that everyone's on Image Mode, this matters a lot more!
Now, we've configured a Fastly host at `impress-asset-images.openneo.net`, to sit in front of our S3 bucket. This should dramatically reduce the GET requests to S3 itself, as our cache warms up and gains copies of the most common asset PNGs.
That said, I'm not sure how much actual cost impact this change will have. Our AWS console isn't configured to differentiate cost by bucket yet—I've started this process, but it might take a few days to propagate. All I know is that our current costs are $35/mo data transfer + $20/mo storage, and that outfit images are responsible for most of the storage cost. I hypothesize that `impress-asset-images` is responsible for most of the reads and data transfers, but I'm not sure!
In the future, I think we'll be able to bring our AWS costs to near-zero, by:
- Obsoleting `impress-asset-images`, by using the official Neopets PNGs instead, after the HTML5 conversion completes.
- Obsoleting `impress-outfit-images`, by using a Node endpoint to generate the images, fronted by a CDN cache. (We'd transfer the actual data to a long-term storage backup, and replace the S3 objects with redirects, so that old S3 URLs still work.)
I hope this change covers a big slice of the costs in the meantime, though! 🤞
(Note: I'll be deploying this on a bit of a delay, because I want to see the DNS propagate across the globe before flipping to a new domain!)
Boom, pulled some stuff out into util! Now we also have error boundaries in both panels of the wardrobe page.
You can test this one by visiting `/outfits/new?send-test-error-for-sentry`, just like on the home page! Util component for fake errors owo
Previously, we would tear down to a blank white page. Now, for errors within most page content, we show a cute error message with a grundo programmer!
To test, visit `/?send-test-error-for-sentry`, which will trigger an intentional render error on the home page.
Note that this does _not_ cover pages that don't use PageLayout, namely the wardrobe page! I'll want to add other boundaries there…
Fixed crashes on clearing the search box. I really thought I fixed this when I refactored `forceReset` to be a function, but I guess I missed it when I went back-and-forth deciding whether to actually do the refactor!
Before this, if you made a change while the outfit was auto-saving, it would reset your changes back and forth in an infinite loop, oops!
This was because the response from the save would reset the outfit state to match, but the _debounced_ outfit state would still show the user's changes, so we'd trigger another save. And then the same thing would happen in reverse, and back and forth again!
Oops, it was possible after saving an outfit to get into a state where we would show the `<OutfitThumbnailIfCached />` behind the outfit even after it was saved, and then removing items would look weird until auto-saving caught up.
We had used the `backdrop` property because we wanted smoother partial load-ins, but for now I'm just fixing this by switching it to `placeholder`, which already has the right loading-only behavior.
This was also the only call site for `backdrop`, so I've removed it!
Oops, the sequence here was:
1) Save a new outfit
2) The debounced outfit state still contains id=null, which doesn't match the saved outfit, which triggers an auto-save
3) And now again, the debounced outfit state contains the _previous_ saved outfit ID, but the saved outfit has a _new_ ID, so we save the _previous_ outfit again
and back and forth forever.
Right, ok, simple change: if the saved outfit ID changes, reset the debounced state immediately, so it can't even be out of sync in the first place! (I also considered checking it in the condition, but I didn't really understand what the timing properties of being out of sync due to debouncing would be, and it didn't seem to represent the reality I want.)
Hope it actually work-works lol
Did some refactors in useOutfitState to support the new reset action we do after auto-saving, in case the server tweaked things like the name.
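The shape of the fix, as a sketch (our `useDebounce` options are paraphrased here, so don't take the exact signature literally):

```js
// Sketch only: the real useDebounce API may differ. The idea is that the
// debounced value snaps to the latest value the moment the saved outfit ID
// changes, so it can never trigger a save against a stale (or null) ID.
const debouncedOutfitState = useDebounce(outfitState, 2000, {
  forceReset: (debouncedValue, latestValue) =>
    debouncedValue.id !== latestValue.id,
});
```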
Note that we implemented the actual horn behavior described in the message, simply by marking the yellow horn appearance glitched for Fem, but not for Masc! Also, we don't have a yellow-horn Sick Masc model, so it's blue too.
It wouldn't open, because I'd set `isLazy` on the popover, so opening the modal would close and UNMOUNT the popover, which unmounted the modal!
Now, we use the new `lazyBehavior` prop to keep it mounted _after_ the first time it opens. This is why I needed to upgrade Chakra!
Oops, I made a recent change to automatically add `appearanceId` to the outfit state when you open the Support pose picker, to avoid navigation issues.
But I didn't realize this happened _silently_ when you open the page as a Support user, because the Popover preloads!
Now, the Popover doesn't preload its content. This is probably better for normal users too; with its 6 previews, the PosePicker UI is a bit heavier than I'd really like!
Oops, our "items to reconsider" feature was preventing unwearing/removing items you're already wearing!
This feature helps you try stuff in Search, without disrupting your outfit. e.g. if you try on a new Background, then change your mind and unwear it, then we reapply whatever old Background you had on the outfit before.
But this made it impossible to remove your _current_ background from the search page if you went back and searched for it again, because we would remove it and then reconsider and reapply it 😅
Now we, um, stop that!
Huh, dunno when I regressed this! Or maybe I never did it for search results, just the main items page? But we're needlessly re-rendering the entire search results list when you wear/unwear something, because `onRemove` always changes, and that breaks the `React.useMemo` on `Item`.
Now, we cache the `onRemove` callback with `React.useCallback`, so perf is much happier!
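It's the classic fix (the dispatch call here is illustrative):

```js
// Keep `onRemove` referentially stable across renders, so the memoized
// <Item /> components in the search results list don't all re-render when
// you wear/unwear something.
const onRemove = React.useCallback(
  (itemId) => dispatchToOutfit({ type: "unwearItem", itemId }),
  [dispatchToOutfit]
);
```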
Oops, we extracted Support fields out from the default `appearanceLayerFragment`!
This was causing the page to silently fail to show any changes, because `layer.remoteId` was evaluating to `undefined` rather than one of the ID numbers in the range.
Here, I've added `remoteId` explicitly, because we use it directly, and also the Support fields, because that's what the layer Support UI needs!
Oops, making changes in PosePickerSupport would sometimes trigger a re-fetch in PosePicker.
Specifically, PosePicker needs some fields that PosePickerSupport doesn't, so changing the canonical poses causes PosePicker to ask for stuff again—which will probably serve a SWR'd cached version that doesn't reflect the Support changes!
Here, we update the PosePickerSupport query to prefetch all the fields the PosePicker _would_ want for any of these poses. That way, if we swap in a new one as the canonical appearance for a pose, there's no refetch needed, and therefore no risk of hitting a stale cache.
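The gist, as a sketch (query, fragment, and field names are illustrative):

```js
// Sketch only: by spreading the same fragment PosePicker uses into the
// Support query, every appearance we might promote to "canonical" is already
// in the cache with all the fields PosePicker will read, so no refetch.
const POSE_PICKER_SUPPORT_QUERY = gql`
  query PosePickerSupport($speciesId: ID!, $colorId: ID!) {
    petAppearances(speciesId: $speciesId, colorId: $colorId) {
      id
      pose
      isGlitched
      ...PetAppearanceForPosePicker
    }
  }
  ${petAppearanceForPosePickerFragment}
`;
```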