Oops, right, I meant to use the new `impress-outfit-images.openneo.net` host for this! It works just fine from `impress-2020.openneo.net` as the backing source right now, but I want these semi-permanent URLs to be a bit more decoupled.
This is mostly for decoupling's sake: I want pretty outfit URLs that aren't dependent on the `/api/outfitImage` path structure, so that we can serve them as ~permanent pretty URLs that don't actually indicate where or how the image is stored.
This reverts commit a0121f09ea.
I didn't realize that VERCEL_URL wouldn't use the canonical URL. And thinking further about the Open Graph semantics, I think having just one canonical host makes more sense, even if it makes testing a bit more annoying in some cases.
Right, the routes are a bit confusing: `/outfits/new` matches `/outfits/:id`, but it shouldn't.
I fix this at the routing level, by creating a more specific `/outfits/new` route that matches first, and just serves `index.html` right from the CDN.
To check that it's exact, I check for either a `?` or the end-of-string after `new`. That way, `/outfits/newish` correctly redirects to the SSR. (This doesn't actually matter a lot, because outfit IDs are only ever numbers and therefore can't start with "new"? But I figure it's better to have an unambiguous semantic than to leave random things relying on outfit ID format assumptions.)
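Here's the matching semantic I'm going for, sketched as a regex (this illustrates the rule, not the literal routing config; the query string in the test is made up):

```js
// "/outfits/new" counts as exact only if followed by "?" or end-of-string.
const NEW_OUTFIT_ROUTE = /^\/outfits\/new($|\?)/;

NEW_OUTFIT_ROUTE.test("/outfits/new"); // true → serve index.html from the CDN
NEW_OUTFIT_ROUTE.test("/outfits/new?species=1"); // true, query string is fine
NEW_OUTFIT_ROUTE.test("/outfits/newish"); // false → goes to the SSR route
NEW_OUTFIT_ROUTE.test("/outfits/12345"); // false → SSR at /outfits/:id
```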
Ah right, this is how we get the deployed API code to know its own hostname! This way, when we do a preview deploy, /api/outfitImage will use the preview version of the GraphQL endpoint, too.
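For reference, a minimal sketch of the idea, assuming the standard `VERCEL_URL` environment variable (the `/api/graphql` path and fallback port are illustrative):

```js
// VERCEL_URL is the deployment's own hostname (no protocol), injected by
// Vercel at build/runtime, so preview deploys point at their own endpoint.
const graphqlUrl = process.env.VERCEL_URL
  ? `https://${process.env.VERCEL_URL}/api/graphql`
  : "http://localhost:3000/api/graphql";
```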
Hmm, the built copy of the HTML isn't deployed with the API function on Vercel. Ok! Let's just do the same trick as we did for the dev server, and make an HTTP request.
This is fine enough for perf because we're caching this result locally to the function, plus it should be a fast CDN-cached response anyway.
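Roughly the shape of the trick (function and variable names are illustrative, not the actual code):

```js
import fetch from "node-fetch"; // or the global fetch, in newer runtimes

// Cache the built index.html in the function instance, so repeat
// invocations skip the HTTP round-trip entirely.
let cachedIndexHtml = null;

async function loadIndexHtml(origin) {
  if (cachedIndexHtml == null) {
    const res = await fetch(`${origin}/index.html`);
    if (!res.ok) {
      throw new Error(`loading index.html failed: ${res.status}`);
    }
    cachedIndexHtml = await res.text();
  }
  return cachedIndexHtml;
}
```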
Size upgrade, wowie! (I figure sharing services probably prefer a higher-resolution image; they're not very big, and the services can scale them down for their specific use case & device.)
Meta tags are a bit tricky in apps built with `create-react-app`! While some bots like Google are able to render the full page when crawling, not all bots are. Most will just see the empty-ish index.html that would normally load up the application.
But we want outfit sharing to work! And be cool! And use our new outfit thumbnails!
In this change, we add a new server-side rendering API route to handle `/outfits/:id`.
It's very weak server-side rendering: it just loads index.html, and makes a few small tweaks inside the `<head>` tag. But it should be enough for sharing to work in clients that support the basics of Open Graph, which I think most major providers respect! (I know Twitter has their own tags, but it also respects the basics of OG, so let's see whether there's anything we end up _wanting_ to tweak or not!)
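Something like this, sketched (the exact tags and helper names are illustrative, not the real implementation):

```js
// Escape text for safe insertion into HTML attributes. (Tiny helper for
// this sketch; the real code may differ.)
const escapeHtml = (s) =>
  s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/"/g, "&quot;");

// Take the built index.html as a string, and splice Open Graph tags
// into its <head>.
function renderOutfitPage(indexHtml, outfit) {
  const metaTags = `
    <meta property="og:title" content="${escapeHtml(outfit.name)}" />
    <meta property="og:image" content="${escapeHtml(outfit.imageUrl)}" />
    <meta property="og:url" content="${escapeHtml(outfit.url)}" />
  `;
  return indexHtml.replace("</head>", `${metaTags}</head>`);
}
```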
Not using this anywhere in-app yet! But might swap it into the user outfits page, and use it to server-side-render social sharing meta tags!
Also eyeing this as a way to replace our nearly 1TB of outfit image S3 storage, and save $20/mo…
This folder will include code shared by both the client-side app and the server!
The server isn't using it yet, but it will in a new API endpoint soon! I'm doing this in a separate commit to avoid lumping all the import-change noise into that commit.
I'm doing this in preparation for an API endpoint to build outfit images by ID. It'll need the same logic to decide which layers are visible, and the same GQL fragments to load the relevant data!
Oh right, labeling PB items as NP is confusing! Here, we add a "PB" case to the lil badge on the corner of the item thumbnail, on the item search page & homepage Newest Items.
Oops, I forgot to grep outside `src` for this, lol! I'll keep the S3 domain support for now, because it's still fine to accept and some clients might be on old code, whatever!
Idk, I was surprised to notice that /api/validPetPoses wasn't cached by the Vercel CDN… but I guess that makes sense: it's only allowed to be an hour old, and client caches hold onto it, so the CDN doesn't have much reason to bother.
But in practice, that means users loading the site are pretty _likely_ to see /api/validPetPoses taking its full load time, which can be up to 500ms.
So, I'm offering this additional cache hint, to be willing to serve /api/validPetPoses responses up to a week old while reloading the latest data in the background. My hope is that the cache algorithm is much more excited about holding onto that, and that it makes the 500ms delay very _rare_ in practice!
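Concretely, the hint is a `stale-while-revalidate` directive; here's a sketch with the one-hour/one-week numbers from above, in a Vercel API route:

```js
// Fresh for an hour, but the CDN may serve a copy up to a week old while
// it refetches the latest data in the background.
res.setHeader(
  "Cache-Control",
  "public, max-age=3600, stale-while-revalidate=604800"
);
```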
Oops, we did an in-place sort on the search variables we passed to Apollo! This meant that Apollo's first read of the variables wouldn't match later reads, so it would always decide the variables had changed, causing an infinite re-render loop.
Remember to copy existing arrays before sorting! 😅
Incidentally, this only happened for Markings, by coincidence: it's (I think) the only searchable zone label with multiple zone IDs whose alphabetical order differs from their numerical order. This `.sort()` sorts them alphabetically, whereas they come in numerical order in `allZones`, because that's the order the GQL server returns them in `build-cached-data.js`.
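The fix is the usual one (variable name illustrative):

```js
// Copy before sorting, so the variables object Apollo saw the first time
// is never mutated in place.
const sortedZoneIds = [...zoneIds].sort(); // not zoneIds.sort()!
```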
There have been usability problems with this search filter UI, and I think they mostly come down to people accidentally selecting filters when they don't mean to: often pressing Enter to indicate that they're done typing, and accidentally selecting a suggestion instead.
Here, we remove that select-on-Enter behavior, and additionally add a new behavior: clear the suggestions when Enter is pressed.
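In spirit, the new Enter handling looks something like this (component, prop, and handler names are illustrative, not the actual code):

```js
function SearchSuggestionsInput({ setSuggestions, ...props }) {
  const handleKeyDown = (e) => {
    if (e.key === "Enter") {
      e.preventDefault(); // don't auto-select the highlighted suggestion
      setSuggestions([]); // treat Enter as "I'm done typing": close the list
    }
  };
  return <input onKeyDown={handleKeyDown} {...props} />;
}
```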
We've been serving images directly from `impress-asset-images.s3.amazonaws.com` for a long time. While they serve with long-lasting HTTP cache headers, and the app requests them with the `updated_at` timestamp in the query string, each GET request still executes a full S3 GetObject operation to fetch the latest version.
In the past, this was only relevant to users on Image Mode, not Flash Mode. But now that everyone's on Image Mode, this matters a lot more!
Now, we've configured a Fastly host at `impress-asset-images.openneo.net`, to sit in front of our S3 bucket. This should dramatically reduce the GET requests to S3 itself, as our cache warms up and gains copies of the most common asset PNGs.
That said, I'm not sure how much actual cost impact this change will have. Our AWS console isn't configured to differentiate cost by bucket yet—I've started this process, but it might take a few days to propagate. All I know is that our current costs are $35/mo data transfer + $20/mo storage, and that outfit images are responsible for most of the storage cost. I hypothesize that `impress-asset-images` is responsible for most of the reads and data transfers, but I'm not sure!
In the future, I think we'll be able to bring our AWS costs to near zero, by:
- Obsoleting `impress-asset-images`: use the official Neopets PNGs instead, after the HTML5 conversion completes.
- Obsoleting `impress-outfit-images`: use a Node endpoint to generate the images, fronted by a CDN cache. (We'd transfer the actual data to a long-term storage backup, and replace the S3 objects with redirects, so that old S3 URLs will still work.)
I hope this covers a big slice of the costs, though! 🤞
(Note: I'll be deploying this on a bit of a delay, because I want to see the DNS propagate across the globe before flipping to a new domain!)
Boom, pulled some stuff out into util! Now we also have error boundaries in both panels of the wardrobe page.
You can test this one by visiting `/outfits/new?send-test-error-for-sentry`, just like on the home page! Util component for fake errors owo
Previously, we would tear down to a blank white page. Now, for errors within most page content, we show a cute error message with a grundo programmer!
To test, visit `/?send-test-error-for-sentry`, which will trigger an intentional render error on the home page.
Note that this does _not_ cover pages that don't use PageLayout, namely the wardrobe page! I'll want to add other boundaries there…
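For reference, a minimal sketch of the pattern, with illustrative component names (not necessarily the real ones):

```js
import React from "react";

// An error boundary that swaps crashed children for a friendly message.
class MajorErrorMessage extends React.Component {
  state = { error: null };

  static getDerivedStateFromError(error) {
    return { error };
  }

  render() {
    if (this.state.error) {
      return <p>Ah dang, we broke it 😖 Try reloading the page?</p>;
    }
    return this.props.children;
  }
}

// Throws on purpose when the test query param is present, so we can see
// the boundary (and Sentry reporting) in action.
function TestErrorSender() {
  if (window.location.search.includes("send-test-error-for-sentry")) {
    throw new Error("Test error, sent on purpose!");
  }
  return null;
}
```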
Crashes on clearing the search box. I really thought I fixed this when I refactored `forceReset` to be a function, but I guess I missed it when I went back-and-forth deciding whether to actually do the refactor!
I've got it in my IDE too, but I want more safeguards lol, I'm tired of goofing up the pushes :p
I don't allow warnings here, but I still configure them as warnings rather than errors, because I don't want them to block the build, but I _do_ want them to block commit.
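One way to wire that up, sketched (I'm not promising this is the exact config): a lint-staged rule that fails the pre-commit hook on any warning, via ESLint's `--max-warnings` flag:

```js
// .lintstagedrc.js — run ESLint on staged files, failing on any warning.
module.exports = {
  "src/**/*.js": "eslint --max-warnings=0",
};
```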
Before this, if you made a change while the outfit was auto-saving, it would reset your changes back and forth in an infinite loop, oops!
This was because the response from the save would reset the outfit state to match, but the _debounced_ outfit state would still show the user's changes, so we'd trigger another save. And then the same thing would happen in reverse, and back and forth again!
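The shape of the bug, roughly (hook and helper names here are illustrative):

```js
// `outfitState` resets when the save response arrives, but the trailing
// `debouncedOutfitState` still holds the user's newer edits, so this
// effect sees a mismatch and saves again, and the two chase each other.
const debouncedOutfitState = useDebounce(outfitState, 2000);

React.useEffect(() => {
  if (!statesMatch(debouncedOutfitState, savedOutfitState)) {
    saveOutfit(debouncedOutfitState); // …whose response resets outfitState
  }
}, [debouncedOutfitState, savedOutfitState]);
```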
Lol this is mostly to stop me from accidentally launching it too small 😅
also there seemed to be subtle pixel shifts from some of this? idk still flakier than I want I guess, but hopefully it sticks a bit better now with the new window size hints…
Oops, after saving an outfit, it was possible to get into a state where we'd keep showing the `<OutfitThumbnailIfCached />` behind the outfit even though it was already saved, and then removing items would look weird until auto-saving caught up.
We had used the `backdrop` property because we wanted smoother partial load-ins, but for now I'm just fixing this by switching it to `placeholder`, which already has the right loading-only behavior.
This was also the only call site for `backdrop`, so I've removed it!
Oops, the sequence here was:
1) Save a new outfit
2) The debounced outfit state still contains id=null, which doesn't match the saved outfit, which triggers an auto-save
3) And now again, the debounced outfit state contains the _previous_ saved outfit ID, but the saved outfit has a _new_ ID, so we save the _previous_ outfit again
and back and forth forever.
Right, ok, simple change: if the saved outfit ID changes, reset the debounced state immediately, so it can't even be out of sync in the first place! (I also considered checking the ID in the save condition, but I didn't really understand what the timing properties of being out of sync due to debouncing would be, and it didn't seem to represent the reality I want.)
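Sketched, using our debounce hook's `forceReset` option from the earlier search-box commit (the exact signature here is illustrative, from memory):

```js
const debouncedOutfitState = useDebounce(outfitState, 2000, {
  // Snap to the latest value immediately whenever the saved outfit ID
  // changes, so the debounced copy can never point at the previous outfit.
  forceReset: (debouncedValue, latestValue) =>
    debouncedValue.id !== latestValue.id,
});
```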
Hope it actually work-works lol
Did some refactors in useOutfitState to support the new reset action we do after auto-saving, in case the server tweaked things like the name.
Note that we implemented the actual horn behavior described in the message simply by marking the yellow horn appearance glitched for Fem, but not for Masc! Also, we don't have a yellow-horn Sick Masc model, so it's blue too.
It wouldn't open, because I'd set `isLazy` on the popover, so opening the modal would close and UNMOUNT the popover, which unmounted the modal!
Now, we use the new `lazyBehavior` prop to keep it mounted _after_ the first time it opens. This is why I needed to upgrade Chakra!
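The gist, with Chakra's lazy popover props (the component structure here is illustrative):

```js
import { Popover, PopoverContent, PopoverTrigger } from "@chakra-ui/react";

// "keepMounted" still defers rendering until the first open, but keeps the
// content mounted afterward, so closing the popover no longer unmounts the
// modal living inside it.
function SaveSuccessPopover({ trigger, children }) {
  return (
    <Popover isLazy lazyBehavior="keepMounted">
      <PopoverTrigger>{trigger}</PopoverTrigger>
      <PopoverContent>{children}</PopoverContent>
    </Popover>
  );
}
```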