Okay so one of the trickiest parts of login is done! 🤞 and now we need to make it actually show up in the UI. (and also pressure-test the security a bit, I've only really checked the happy path!)
Thinking about longevity, I think I wanna cut Auth0 loose, and just go back to using our own auth.
I had figured at the time that I didn't want to integrate with OpenNeo ID's whole mess, and I didn't want to write a whole new auth system, so Auth0 seemed to make things easier.
But now, it's just kinda a lot to be carrying along an external service as a dependency for login, especially when we've got all the stuff in the database right here. I wanna remove architecture pieces! Get it outta here!
And I'll finally build account creation from the 2020 site while I'm at it, which seemed like it was gonna be a bit of a pain with Auth0 and syncing anyway. (I think at the time I was a bit more optimistic about a full transfer from one system to another, but that's much further off than I realized, and this path will be much better for keeping things in sync.)
Just a cute little way to let us preview it without having to spin up a separate instance of the app or use a feature flag system!
This means we can safely merge and push this to production, without worrying about leaking the feature before the ~owls team signs off.
Hey nice it looks like it's working! :3 "Bright Speckled Parasol" is a nice test case, it has a long text string! And when the NC value is not included in the ~owls list, we indeed don't show the badge!
Hey finally! I got in the mood and did it, after… a year? idk lol
The button should only appear for outfits that are already saved, that are owned by you. And the server enforces it!
I also added a new util function to give actually useful error messages when the GraphQL server throws an error. Might be wise to use this in more places where we're currently just using `error.message`!
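For flavor, a minimal sketch of what I mean, assuming Apollo Client's `ApolloError` shape (the helper name `getGraphQLErrorMessage` is hypothetical, not the actual code):

```js
// Hypothetical sketch of the util, not the exact code that shipped!
function getGraphQLErrorMessage(error) {
  // Prefer the server's own error messages, if the GraphQL layer sent any.
  if (error.graphQLErrors && error.graphQLErrors.length > 0) {
    return error.graphQLErrors.map((e) => e.message).join("; ");
  }
  // Otherwise, fall back to the network error, or the generic message.
  return (error.networkError && error.networkError.message) || error.message;
}
```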
Uhhh I guess I never added the check that the outfit you're editing is your own? Embarrassing.
I don't have any reason to believe anyone abused this, but 😬! Good to have fixed now!
Been getting a lot of errors I *think* from folks trying to add OWLS Pricer to Impress 2020 even though it doesn't work here! Reasonable to have happen though! I thought Sentry knew to ignore those, but I guess it doesn't?
In this change, we add some filtering to ignore errors triggered by extensions. This should keep them out of our inbox!
I wasn't able to test this very robustly locally. I'm mostly just crossing fingers!
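For reference, the filtering is along these lines, using Sentry's built-in `denyUrls` option (the exact patterns here are an approximation, not necessarily what shipped):

```js
import * as Sentry from "@sentry/react";

Sentry.init({
  dsn: SENTRY_DSN,
  // Drop errors whose stack frames come from browser extensions, like
  // OWLS Pricer trying (and failing) to run on Impress 2020.
  denyUrls: [
    /^chrome-extension:\/\//i,
    /^moz-extension:\/\//i,
    /^safari-extension:\/\//i,
  ],
});
```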
Tbh I didn't even really validate these changes, or confirm that the current codepaths are actually broken; they just seem like clear drop-in upgrades now that HTTPS works and HTTP requests are redirected. Simplify!
Right now it returns 50 rows; each item that needs modeling returns 1–4 rows, usually 1. So a limit of 200 should be pretty safe, while also creating a release valve if there's another future bug: it'll just have the problem of returning too few items, instead of the problem of crashing everything! 😅
Neopets released a new Maraquan Koi, and it revealed a mistake in our modeling query! We already knew that the Maraquan Mynci was actually the same body type as the standard Mynci colors, but now the Koi is the same way, and because there's _two_ such species, the query started reacting by assuming that a _bunch_ of items that fit both the standard Mynci and standard Koi (which is a LOT of items!!) should also fit all _Maraquan_ pets, because it fits both the Maraquan Mynci and Maraquan Koi too. (Whereas previously, that part of the query would say "oh, it just fits the Maraquan Mynci, we don't need to assume it fits ALL maraquan pets, that's probably just species-specific.")
so yeah! This change should help the query ignore Maraquan species that have the same body type as standard species. That's fine to essentially treat them like they don't exist, because we won't lose out on any modeling that way: the standard models will cover the Maraquan versions for those two species!
Ah okay, pools support `query` and `execute` the same way connection objects do (as a shorthand for acquiring, querying, and releasing), but they don't have the same helpers for transactions. Makes sense: you need those queries to all go to the same connection, and an API where you just call methods on the pool object can't tell which queries belong to the same transaction!
Now, we have our transaction code explicitly acquire a connection to use for the duration of the transaction.
An alternative considered would have been to have `connectToDb` acquire a connection, and then release it at the end of the GraphQL request. That would have made app code simpler, but added a lot of additional potential surprise failure points to the infra imo (e.g. what if we're misunderstanding the GraphQL codepath and the connection never gets released? whereas here it's relatively easy to audit that there's a `finally` in the right spot.)
This should both fix cases where the connection closes for various reasons, by having the pool reconnect; and also should be a second way of solving some of the blocking issues we were having with large queries, by letting faster queries use parallel connections.
Idk what a reasonable number is, 10 seems to be what various guides are saying? Might tune it down if it ends up pushing various connection limits? (We could also constrain it on dev specifically, if that matters.)
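For the record, the shape of the transaction code is roughly this, assuming the `mysql2/promise` API (the `runTransaction` wrapper name and connection details are placeholders):

```js
import mysql from "mysql2/promise";

// 10 connections is what the guides suggest; we might tune it down later
// if it ends up pushing connection limits.
const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: "openneo_impress",
  connectionLimit: 10,
});

// Acquire one connection for the whole transaction, so every query goes
// to the same place, and release it no matter what happens.
async function runTransaction(callback) {
  const connection = await pool.getConnection();
  try {
    await connection.beginTransaction();
    const result = await callback(connection);
    await connection.commit();
    return result;
  } catch (error) {
    await connection.rollback();
    throw error;
  } finally {
    connection.release();
  }
}
```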
I hypothesize that loading people's full trade lists more often than necessary is part of the cause of the recent mega slowdown!
My hypothesis is that we're clogging up the MySQL connection socket with a ton of data, which blocks all other queries until the big ones come through and parse out. (I haven't actually validated my assumption that MySQL connections send query results in serial like that, but it makes sense to me, and fits what I've been seeing.)
There's more places we could potentially optimize, like the trade list page itself… (we currently aggressively load everything when we could limit it and load the rest on the followup pages, or even paginate the followup pages…)
…but my hope is that this helps enough, by relieving the load on the homepage (latest items) and on item searches!
This is a bit hacky, but I want to ship and I'm not in a mood for a refactor :P
Before this change, you could see a bug by doing the following:
1. Click "I own this" to own an item.
2. Click "Add a list" and add it to a list.
3. Click "I own this" to un-own the item. (This deletes it from all lists.)
4. Observe that the "Add a list" dropdown disappears.
5. Click "I own this" to own it again.
6. Observe that, before this change, the dropdown would reappear, but incorrectly say it was still in the old list. After this change, it appears with the blank "Add to list", as intended.
Oops, I used the wrong property to control the checkbox state! This made it an uncontrolled component. It would always start unchecked when the page loads, regardless of actual own/want state, and then toggle based on physical clicks.
This meant that things generally worked correctly if you didn't own/want the item when you first loaded the page; but if you already did, then you would click once and send an *add* mutation instead of a remove; and then click again and be able to remove.
Now, removes only take one click!
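For illustration, the usual shape of this bug in React; the component, prop, and handler names here are stand-ins, not our actual code:

```jsx
// Buggy: without `checked`, React treats this as an *uncontrolled* input:
// it starts unchecked regardless of real state, and only tracks clicks.
<Checkbox onChange={onToggleOwnsItem} />;

// Fixed: `checked` makes it a controlled component that always reflects
// the real own/want state from our data.
<Checkbox checked={currentUserOwnsThis} onChange={onToggleOwnsItem} />;
```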
Oooh this feature is feeling very nice :) :) We hid "not in a list" pretty smoothly I think!
A known bug: If you have the item in a list, then click the big colorful button, it will remove the item from *all* lists; and then if you click it again, it will add it to Not in a List. But! The UI will still show the lists it was in before, because we haven't updated the client cache. (It's not that bad in the middle state though, because the list dropdown stuff gets hidden.)
My cute keybind to quickly wrap stuff in <Box> wasn't working in this file, and I figured out from deleting stuff and narrowing down that it was this comment. I guess Emmet's JSX parser doesn't like a comment being there! I moved it up a bit instead.
Ah hm, not sure why this only showed up once I tried a prod deploy, but I declared `hiResMode` twice in there, because we had already fixed this bug for item layers but not pet layers!
In this change, I fix the duplicate `hiResMode` declaration, and update the new pet layers message to match the item layers message.
Give the grid a fixed size, have the list name stuff get ellipsis when it's too long, and try to show all list names (which will almost certainly be too long for the space) to give a better hint of what's in there.
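The truncation bit can be as simple as this sketch, assuming Chakra's `isTruncated` helper prop:

```jsx
// `isTruncated` applies overflow: hidden, text-overflow: ellipsis, and
// white-space: nowrap in one go.
<Box isTruncated>{listNames.join(", ")}</Box>
```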
I don't think this is actually relevant in-app right now, but I figured sending it is More Correct, and is likely to prevent future bugs if anything (and prevent future question about why we're _not_ sending it).
I also removed the `maxAge: 0` on `currentUser`, now that I've updated Fastly to no longer default to 5-minute caching when no cache time is specified. I can see why that's a reasonable default for Fastly, but we've been pretty careful about specifying Cache-Control headers when relevant, so the extra caching is mostly incorrect.
We had previously configured the client to not bother to try a GET request for GraphQL queries, and just jump straight to POST instead, because the `vercel dev` server for create-react-app reloaded the backend code for every request anyway, which doubled the dev response time.
The Next.js server is more efficient than this, and keeps state in memory between requests, so GET requests now work similarly in dev as on prod! (i.e. the hashed GET fails the first time, the client falls back to POST to register the query, and subsequent GETs succeed)
In this change, we remove the code to skip `createPersistedQueryLink` in development, and instead always call it. We simplify the code accordingly, too.
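After the change, the link setup is roughly this shape (the URI is a placeholder):

```js
import { createHttpLink } from "@apollo/client";
import { createPersistedQueryLink } from "@apollo/client/link/persisted-queries";
import { sha256 } from "crypto-hash";

// Always hash queries and try a GET first; if the server doesn't know the
// hash yet, Apollo quietly falls back to a POST with the full query text.
const link = createPersistedQueryLink({
  sha256,
  useGETForHashedQueries: true,
}).concat(createHttpLink({ uri: "/api/graphql" }));
```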
If the user is searching for things they own or want, make sure we don't CDN cache it!
For many queries, this is taken care of in practice, because the search result includes `currentUserOwnsThis` and `currentUserWantsThis`. But I noticed in testing that, if the search result has no items, so those fields aren't actually part of the _response_, then the private header doesn't get set. So this mainly makes sure we don't accidentally cache an empty result from a user who didn't have anything they owned/wanted yet!
Some queries, like on `/your-outfits`, had the cache hint `max-age=0, private` set. In this case, our cache code sent no cache header, on the assumption that no header would result in no caching.
This was true on Vercel, but isn't true on our new Fastly setup! (Which makes sense, Vercel was a bit more aggressive here I think.)
This was causing an arbitrary user's data to be cached by Fastly as the result for `/your-outfits`. (We found this bug before launching the Fastly cache though, don't worry! No actual user data leaked!)
Now, as of this change, the `/your-outfits` query correctly sends a header of `Cache-Control: max-age=0, private`. This directs Fastly not to cache the result.
To fix this, we made a change to our HTTP header code, which is forked from Apollo's stuff.
Comments explain most of this! Vercel changed around the Cache-Control headers a bit to always essentially apply max-age:0 when scope:PRIVATE was true.
I'm noticing this isn't *fully* working yet though, because we're not getting a `Cache-Control: private` header, we're just getting no header at all. Fastly might aggressively choose to cache it anyway with etag stuff! I bet that's the fault of our caching middleware plugin thing, so I'll check on that!
Hmm, I see, Vercel chews on Cache-Control headers a bit more than I'm used to, so anything marked `scope: PRIVATE` would not be cached at all.
But on a more standard server, this was coming out as privately cacheable, but for an actual amount of time (1 hour in the homepage case), because of the `maxAge` on other fields. That meant the device browser cache would hold onto the result, and not always reflect Own/Want changes upon page reload.
In this change, we set `maxAge: 0`, because we want this field to be very responsive. I also left `scope: PRIVATE`, even though I think it doesn't really matter if we're saying the field isn't cacheable anyway, because I want to set the precedent that `currentUser` fields need it, to avoid a potential gotcha if someone creates a cacheable `currentUser` field in the future. (That's important to be careful with though, because is it even okay for logouts to not clear it? TODO: Can we clear the private HTTP cache somehow? I guess we would need to include the current user ID in the URL?)
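In resolver terms, the hint looks something like this, assuming Apollo Server's `cacheControl` API (the surrounding resolver details are stand-ins):

```js
const resolvers = {
  Query: {
    currentUser: (_, __, { currentUserId, db }, info) => {
      // Never cache, and say PRIVATE anyway, to set the precedent that
      // currentUser fields need it even when maxAge is 0.
      info.cacheControl.setCacheHint({ maxAge: 0, scope: "PRIVATE" });
      return currentUserId != null ? loadUser(db, currentUserId) : null;
    },
  },
};
```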
The motivation is that I want VERCEL_URL and local net requests outta here :p and we were doing some cutesiness with leveraging the CDN cache to back the GQL fields. No more of that, folks! lol
There are some places where we use <img> tags where I think it's actually just the right thing to do.
`next/image` is good for image optimization, but I don't think it's worth the proxying for Neopets art images that don't actually always have a higher-res version to begin with.
Idk, maybe the species faces could be a decent choice, but right now we're solving it with `srcSet`, and that's fine. Doesn't seem worth migrating, let's just move on with our lives! 😅
I'm still leaving the lint rule on though, because I think it's a helpful reminder. (I don't think it catches `<Box as="img" />` though, which is a shame, because that would be our natural default in this app!)
Yeah, cool, now we use the `next/image` tag, and our images are showing up again!
There's still lint errors for using bare img tags in some places, but I'm not sure I really care…
This was a fun journey! Turns out Next 12 is using a new faster JS compiler called SWC, which had a compiler bug that triggered here!
The incorrect looping behavior caused `libraryUrl` to sometimes be `null` by the time the movie promise completes, because `layer` was set to whatever the *last* layer in the list had been. https://github.com/swc-project/swc/issues/2624
Anyway, turns out this code has been through a few refactors, and the `async` function wrapper is extraneous now! So I've just deleted it and inlined its code. Ta da! lol
Tweaked some of the default Next.js rules, fixed lint-staged for `next lint`, made a few small easy lint fixes. Feels good!
Note that using the `dirs` option in `next.config.js` was causing `lint-staged` to lint _everything_. That's why I edited `yarn lint` to specify the dirs instead: that way, that command will lint all those dirs, but they won't get included in invocations with `--file`.
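So the script ends up shaped like this (the exact dir list here is an assumption):

```json
{
  "scripts": {
    "lint": "next lint --dir pages --dir src"
  }
}
```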
There are still a few lint errors left after this commit, because our <img> tags aren't working (@next/next/no-img-element). I'll fix those when we figure out what's wrong with images!
I'm interested in ejecting from Vercel, so I'm trying to get off their proprietary-ish create-react-app + Vercel API thing, and onto Nextjs, which is very similar in shape, but more portable.
I had to disable `craCompat` in `next.config.js` to stop us from crashing on their webpack config, see https://github.com/vercel/next.js/discussions/25858#discussioncomment-1573822
The frontend seems to work at a basic level, but network requests fail, and images don't seem to be working. I'll work on those next!
Note that this commit was forced through despite failing lint checks. We'll need to fix that up too!
Also, after the codemod, I moved `src/pages` to the more canonical location `pages`. Lint tooling seemed surprised to not find a `pages` directory, and I didn't see a config that was making it work correctly in the other location, so I figured it's that Next is willing to check `pages` or `src/pages`? But this is more canonical so yeah!
Even in production, scrolling is a bit slow! This will preload the pagination one click ahead.
There is a bit of a perf downside, in that if you click through the pages too fast, you'll trigger _extra_ requests. I think that's a net win though, and I'm not gonna try to get cleverer than this right now.
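The preload is just a fire-and-forget Apollo query for the next page's variables, something like this (names are stand-ins):

```js
import React from "react";
import { useApolloClient } from "@apollo/client";

const client = useApolloClient();

// As soon as the current page renders, warm the cache for the next one.
React.useEffect(() => {
  client.query({
    query: ITEM_SEARCH_QUERY,
    variables: { query: searchQuery, offset: currentOffset + PER_PAGE },
  });
}, [client, searchQuery, currentOffset]);
```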
My main inspiration for doing this is actually our potentially-huge upcoming Vercel bill lol
From inspecting my Honeycomb dashboard, it looks like the main offender for backend CPU time usage is outfit images. And it looks like they come in big spikes, of lots of low usage and then suddenly 1,000 requests in one minute.
My suspicion is that this is from users with many saved outfits loading their outfit page, which previously would show all of them at once.
We do have `loading="lazy"` set, but not all browsers support that yet, and I've had trouble pinning down the exact behavior anyway!
Anyway, paginating makes for a better experience for those huge-list users anyway. We've been meaning to do it, so here we go!
My hope is that this drastically decreases backend CPU hours immediately 🤞 If not, we'll need to investigate in more detail where these outfit image requests are actually coming from!
Note that I added the pagination to the existing `outfits` GraphQL endpoint, rather than creating a new one. I felt comfortable doing this because it requires login anyway, so I'm confident that other clients aren't using it; and because, while this kind of thing often creates a risk of problems with frontend and backend code getting out of sync, I think someone running old frontend code will just see only their first 30 outfits (but no pagination toolbar), and get confused and refresh the page, at which point they'll see all of them. (And I actually _prefer_ that slightly confusing UX, to avoid getting more giant spikes of outfit image requests, lol :p)
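Schema-wise, it's roughly this shape; the field location and argument names are my guesses, not necessarily what shipped:

```js
import { gql } from "apollo-server";

const typeDefs = gql`
  type User {
    # Old clients that don't pass these args just get the first page,
    # with no pagination toolbar.
    outfits(limit: Int, offset: Int): [Outfit!]!
  }
`;
```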
This is a minor change to clear a console warning, and make intended behavior clearer! You're not supposed to pass `null` as a select value, because it's ambiguous about whether you're looking for the first option or to make this an "uncontrolled component".
Here, I now provide a fallback value, which is an explicit string for the placeholder option. I made the string very explicit, to aid in debugging if it somehow leaks out from where it's supposed to be! (But I also added gating in the `onChange` event, just to be extra sure.)
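Roughly like this (the constant, handler, and copy are stand-ins):

```jsx
const PLACEHOLDER_VALUE = "SENTINEL_VALUE_PLEASE_DO_NOT_SUBMIT";

<Select
  value={selectedValue ?? PLACEHOLDER_VALUE}
  onChange={(e) => {
    // Extra gating: the placeholder should never escape this component,
    // but let's be extra sure it can't trigger a real change.
    if (e.target.value !== PLACEHOLDER_VALUE) {
      onChange(e.target.value);
    }
  }}
>
  <option value={PLACEHOLDER_VALUE} disabled>
    Choose an option…
  </option>
  {/* …the real options… */}
</Select>;
```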
Huh. `flexAlign` isn't a real Chakra style prop, because it's not a real CSS style. I wonder if I meant `alignItems`? Anyway, this was getting passed down to the DOM element and triggering a console warning. Removed!
Oops, our cutesy feature to show an outfit thumbnail as a placeholder while the rest of the data is loading was making spurious requests!
I put the `skip` in the wrong place 😅
This caused a request to https://outfits.openneo-assets.net/outfits/null/v/NaN/300.png, which would return a 500.
The user wouldn't see anything, because the image wouldn't show because it failed. But it's a mistake, and it's sending extra requests from the client and to the server, and it's a good one to fix!
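The fix is basically just moving the `skip` onto the right `useQuery` call, something like (query and variable names are stand-ins):

```js
// Don't fire the thumbnail query until we actually have an outfit to ask
// about; otherwise the URL comes out as .../outfits/null/v/NaN/300.png.
const { data } = useQuery(OUTFIT_THUMBNAIL_QUERY, {
  variables: { outfitId },
  skip: outfitId == null,
});
```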
Hmm, right, okay, we *generally* should have all users imported to Auth0, but this can fail if the cron job is behind or Auth0 rejected the data (e.g. user data in a format it doesn't support).
Previously, this would apply the name change in the database, but return Auth0's "The user does not exist." error to the GraphQL client, making it look like the update fully failed.
In this change, we handle that case differently: when the Auth0 update fails with a 404, we proceed but log a warning; and when Auth0 fails with an unexpected error, we roll back the database change in addition to raising the error to the client, to keep the behavior obvious and consistent.
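Sketched out, the flow is something like this; `updateAuth0User` and the error shape are stand-ins for the real Auth0 client call:

```js
// Apply the change in our database first.
await db.execute(`UPDATE users SET name = ? WHERE id = ?`, [newName, userId]);

try {
  await updateAuth0User(userId, { name: newName });
} catch (error) {
  if (error.statusCode === 404) {
    // The user never made it into Auth0 (cron job behind, or data it
    // rejected): keep our change, but leave a trail for debugging.
    console.warn(`User ${userId} not found in Auth0, skipping name sync`);
  } else {
    // Unexpected failure: roll back our change so the two systems don't
    // silently drift apart, then surface the error to the client.
    await db.execute(`UPDATE users SET name = ? WHERE id = ?`, [oldName, userId]);
    throw error;
  }
}
```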
Oh oops, I missed this path change when I changed the route to `/user/:id/lists`! This caused searching by email to redirect to the homepage, but with a valid URL in the address bar; and refreshing the page would hit the redirect defined in `vercel.json`, redirect to the new route, and load the correct page.
Fixed!
Like, the little magnifying glass in the "Search all items" field: you can click it to get taken to the _big_ search page with the autocomplete filters and stuff.
Did some stuff in here for parsing the default list ID too. We skipped that when making the new list index page, but now maybe you could reasonably link to the default list? 🤔 not sure it's a huge deal though
I noticed someone using `<pre>` for styling, and thought, sure why not!
I haven't added support for the code block indent thing, and I think that's probably fine?
A lot of DTI lists use old URLs to anchor-link between lists! Here, we rewrite those URLs to match what DTI 2020 expects, so that they actually correctly jump you across the page and aren't filtered out!
The old URLs were glitchy because we weren't escaping the `layerUrls` param… and this will let us take better advantage of the same shared caching as other stuff!
Whoops, `Promise.race` isn't quite what I wanted here. This meant that, if the image promise _fails_ before the movie _succeeds_, the outfit would crash even though it doesn't need to. (And this was happening too often, due to a bug in /api/assetImage!)
Now, we accept whichever _successful_ result loads first, or reject if they _both_ fail.
I tested this by having /api/assetImage always throw, and confirmed that it crashed the outfit before this change, and no longer does after this change!
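The helper we want is essentially `Promise.any`, hand-rolled here as a sketch (the name `firstSuccessOf` is mine):

```js
// Resolve with whichever promise succeeds first; only reject if *every*
// one of them fails.
function firstSuccessOf(promises) {
  return new Promise((resolve, reject) => {
    const errors = [];
    let failureCount = 0;
    promises.forEach((promise, index) => {
      Promise.resolve(promise).then(resolve, (error) => {
        errors[index] = error;
        failureCount++;
        if (failureCount === promises.length) {
          reject(errors);
        }
      });
    });
  });
}

// Usage: the outfit only crashes if the image *and* the movie both fail.
const firstResult = await firstSuccessOf([imagePromise, moviePromise]);
```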
We update /api/assetImage to accept size as a parameter (I make it mandatory to push people into HTTP caching happy paths), and we update the GraphQL thing to use it in those cases too!
This also means that, if these images seem to go well, we could swap Classic DTI over to them… I want to turn off those RAM-heavy image converters on the VPS lol
In this change, we start using our new API endpoint for movie image URLs, instead of the Classic DTI image.
This should make the little fade-in phase for certain movies a little bit less jarring (the part where we preload the image before the movie loads), though I suppose that won't necessarily load as fast until it gets into the cache the first time lol. (A good reason to maybe put a more long-lived cache like Fastly in front of this stuff long-term?)
Not doing it for the smaller image sizes yet, I'm a bit worried that I don't 100% know how to teach /api/assetImage to resize without tipping over the function limit…
…oh! I should have the webpage render at different sizes! Yeah that's a great idea lol
Marking this glitch on the Yellow Lutari head today, and oops there isn't UI copy for it yet! Added!
Also fixed some bugs in here, like old text about the position of the pose picker relative to the glitch badge; and I noticed while debugging that `layerUsesHTML5` returns a truthy string instead of a boolean, which seems error-prone!
Hmm, the item page in prod is slower than it is in dev? In dev, most items are satisfied by the preloading in ItemPagePreview, but in prod, those same items need to send a separate OutfitItemsAppearance query _way_ after (which, I think just due to queueing, has to wait for all the other item queries too).
There's an obvious issue in the case of all the Maraquan items lately, because we just don't do the clever cache lookups for non-standard colors at all. But I don't understand why even standard items like the 17th Birthday Party Hat are struggling!
These are just some simple debug statements, hopefully they'll tell us something about the basics of what's happening!
I didn't want to use the word "basic", since "basic colors" generally means like Blue, Red, Green, Yellow… but it was the only one that fit in the space lol
I tried a lot of stuff with "Fits standard pets" and stuff and couldn't get it to work well
Just a little display bug on the homepage. For an item like the "Evil Coconut Half Mask", which was specifically drawn for the standard _and_ major special colors, our previous logic would have said "Baby only" or "Maraquan only" or whatever special color it happened to find first.
Now, we only show the case "Baby only" if it _doesn't_ fit standard pets too.
Note that the Maraquan case is tricky, because the Blue Mynci can also wear Maraquan items lol! For this reason, we check for two standard bodies before declaring that it's meant for standard pets.
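The rule of thumb ends up something like this sketch (names are stand-ins):

```js
// Only say "Baby only" etc. if the item *doesn't* fit standard pets. And
// because the Blue Mynci can wear Maraquan items, one standard body isn't
// enough evidence: we require two before declaring it standard-compatible.
function itemFitsStandardPets(compatibleBodyIds, standardBodyIds) {
  const numStandardBodies = compatibleBodyIds.filter((id) =>
    standardBodyIds.has(id)
  ).length;
  return numStandardBodies >= 2;
}
```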
I wasn't sure how to fill the space for items that are fully modeled, then realized some basic at-a-glance "who does this fit" would help!
The load time isn't great, I think I need to break out that dependent subquery, but maybe the stale-while-revalidate will cover it well enough at first.
Add a skeleton stripe for the modeling data! Won't show up in most cases because we load fast, but it helps things a lot when it does. (Also, will we keep loading fast with the cache changes on this query?)
To make this fast, I had to tweak the GraphQL resolver a bit to run a filtered version of the query for `newestItems` instead of scanning the full database! But yeah, looking good!
I think I'm gonna want to swap out "Fully modeled" for some insight about who it fits
Okay cool, I noticed that "A Warm Winters Night Background" sometimes animates when other things are playing, but the animations aren't _detected_. (Huh, I actually thought we just didn't schedule ticks in that case? But maybe I'm missing something.)
Anyway, some movies don't use the built-in frames construct to animate, and instead use tweens that hook into the timeline and mutate the stage. Okay! Now we detect those.
This _did_ enable the Play/Pause button on some items that don't actually animate in practice, like the "#1 Fan Room Background", which seems to have an animated string of lights in the corner that got layered incorrectly. Maybe we should add a new glitch type, to flag movies that don't actually animate?
Doing this for two reasons! One is that I want the movie layer component to be a bit thinner in general - I think we might even want to move the fallback image logic out, too.
The second is that I want the onError for something else soon!
Huh, weird. So I reversed the manifest, because you want to get the *last* movie. And I figured that semantic probably extended to PNGs and SVGs too?
But actually, PNGs sometimes have *other* PNGs in the manifest that aren't the relevant asset at all, and are just reference art.
Again, I'm really not sure what the underlying semantic is here? Does the Neopets customizer just display them all, and for the items with this problem, they happen to layer in a way that's not broken?? I would really like to not do that, and I would really like to know the real semantic, but I can't find it >.>
So um, I'm going ahead and using the best semantic that solves the problems I know about? Which is, use the last movie, and use the first PNG. Fingers crossed lol!
I also didn't test this change extensively, because I'm on a train lol
I'm just trusting that this push will be better than what we had before. I tested it on the Dandan MME, which has two JS files, and it took the latter; and the Pathway of Petals Background, which has two PNG files, and it took the former. Success? 😬🤞
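So the selection logic amounts to something like this; the manifest shape here is a guess at the real structure:

```js
const jsAssets = manifest.assets.filter((a) => a.url.endsWith(".js"));
const pngAssets = manifest.assets.filter((a) => a.url.endsWith(".png"));

// Use the *last* movie (earlier ones can be old broken conversions), and
// the *first* PNG (later ones can be unrelated reference art). 🤞
const movieLibraryUrl = jsAssets[jsAssets.length - 1]?.url;
const imageUrl = pngAssets[0]?.url;
```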
Oops, my inbox was getting full of uncaught promise rejections of `loadImage`!
I'm pretty sure they're caused when multiple images in a movie fail to load (e.g. network problems), but we fail to cancel them. So, the first failure would be caught as a part of `Promise.all` in `loadMovieLibrary`, but then subsequent failures wouldn't be caught by anything, and would propagate up to the console and to Sentry as uncaught errors.
In this change, we make a number of improvements to cancellation. The most relevant change for this bug is that `loadMovieLibrary` will now automatically cancel all resource promises when it throws an error! But this improved robustness also enabled us to finally offer a simple `cancel()` method on movie library promises, which we now avail ourselves of at call sites, too.
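A minimal sketch of the shape of this, not the real implementation:

```js
function loadImage(url) {
  const image = new Image();
  const promise = new Promise((resolve, reject) => {
    image.onload = () => resolve(image);
    image.onerror = () => reject(new Error(`Failed to load image: ${url}`));
    image.src = url;
  });
  promise.cancel = () => {
    // Clearing `src` aborts the request in most browsers. The rejection
    // this triggers is *expected*, so callers shouldn't report it.
    image.src = "";
  };
  return promise;
}

async function loadMovieLibrary(imageUrls) {
  const promises = imageUrls.map(loadImage);
  try {
    return await Promise.all(promises);
  } catch (error) {
    // One resource failed: cancel the others, and swallow their expected
    // rejections, so they don't bubble up as uncaught errors.
    for (const promise of promises) {
      promise.cancel();
      promise.catch(() => {});
    }
    throw error;
  }
}
```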
Experiment! Let's see if them being more prominent like this is helpful or annoying 😅
I think this is clunkier in the HTML5 Green Happy Path, but worth it for bringing attention in the error cases.
But I feel like we might tweak this over time!
Huh, so it turns out sometimes the manifest will include old broken conversion attempts!
This fixed the "MiniMME18-S2c: Holomorphic Foliage and Dandan Set", the "Electric Dress" on various species (incl. Aisha), and yeah!
What an interesting discovery 😂
This is because I want to try adding a search footer to the two-column layout, like in Classic DTI—and so I want more screens that _can_ support two-column layout to use it.
Right, cool, yes, this is the thing about partial data; you need to define the loading condition as "relevant data is missing, _and_ loading is still happening".
Ah right okay, when the `ItemSearchResultV2` doesn't have an `id`, Apollo Cache isn't quite so strong about caching conflicting-y fields, like the different parameterizations of `items`.
With this change, we give the search result object an ID, which helps Apollo cache more confidently!
It's just a serialization of the relevant search fields 😅
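Concretely, something like this in the resolvers; the exact fields are my guesses at "the relevant search fields":

```js
const resolvers = {
  ItemSearchResultV2: {
    // A stable ID, derived from the parameters that define the search.
    id: ({ query, itemKind, zoneIds }) =>
      JSON.stringify({ query, itemKind, zoneIds }),
  },
};
```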
That's the last itemSearch call site! I'll probably keep it up for other clients for a while though, esp since it doesn't depend on any additional loaders or anything, it's pretty small overall
Updated the comments to reflect this, and also remembered to make them real docstrings lol!
Now, when you click Prev/Next, we show the page number while the items load, rather than blink it in and out!
This is because we're using itemSearchV2, which makes `numTotalItems` cacheable separately from the paginated `items`. Apollo Cache pretty much does this with zero config, we just have to ask for `returnPartialData`!
The main change is that we restructure the query, so that only the parts that are actually affected by pagination depend on those variables!
This will enable the Apollo Cache to trivially cache and show `numTotalItems` while waiting for other pages to load.
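The restructured query looks roughly like this (field names are approximations), plus `returnPartialData` on the client:

```js
import { gql, useQuery } from "@apollo/client";

const SEARCH_RESULTS_QUERY = gql`
  query SearchResults($query: String!, $offset: Int!, $limit: Int!) {
    itemSearchV2(query: $query) {
      id
      numTotalItems # not affected by pagination, so it caches separately
      items(offset: $offset, limit: $limit) {
        id
        name
        thumbnailUrl
      }
    }
  }
`;

const { loading, data } = useQuery(SEARCH_RESULTS_QUERY, {
  variables: { query, offset, limit },
  // Hand us the cached `numTotalItems` even while the new page of
  // `items` is still loading, so the page number doesn't blink out.
  returnPartialData: true,
});
```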
I'm gonna make this a bit more powerful later, but just for now, the text "Page 1 of 27" shows up!
I also don't like that the page number has to blink out while we load the new stuff; there are multiple solutions, but tbh I think the Apollo Cache should be the one to handle this, and that we can do it by refactoring the query structure a bit!
I'm seeing uncaught promise rejections in `loadImage`? It's hard to know exactly where it's actually coming from, those _should_ be caught?
My guess is that it's coming from canceled images, which are throwing errors even after loading? I don't totally understand how, because looking back, I don't think the `cancel` method was actually called???
Anyway, I fixed it so cancel actually _is_ called, and that we don't throw errors when the canceled image _correctly_ fails to load.
This should be more robust either way, but hopefully it also stops the flow of errors?
Right, when there are zero layers, we shouldn't say we're loading!
This is a consequence of the HACK below. If we didn't short-circuit the effect when length == 0, then we would go through and successfully load 0 layers.
Movies often have a lot of assets, which are more likely to be cache misses, and take script time to render! So the time until the user sees something is often huge.
Here, we start loading our PNG image at the same time. This is a filesize loading increase, but even in slow connections, it's generally worth it as a _sharp_ improvement in time until you get to see something!
One noteworthy UI weakness here is that we don't show _any_ loading indicator while the image is visible and the movie is still loading. This makes sense from a practical standpoint, but could be a problem when a movie takes a particularly long amount of time. I also want to be cognizant of whether the blink-of-content ever gets annoying! (We could make it fade out 🤔)
In my last change, I didn't try to change the APIs too much, and kept the concept of `crossOrigin` running through `getBestImageUrlForLayer`.
Now, I've moved the `safeImageUrl` call _outside_ `getBestImageUrlForLayer`, by putting it at the call site: We now call `safeImageUrl` from `loadImage` (which needs to know the `crossOrigin` flag anyway!), and at the `img` tag call site.
This simplifies all of the call sites a lot, I think!
I've noticed that our Fastly proxy adds a surprising amount of latency on cache misses (500-1000ms). And, while our overall hit ratio of 80% is pretty good, most misses happen at inopportune times, like loading items from search.
But now that the Neopets CDN supports HTTPS, we can safely switch back to theirs for *most* image loads. (Some features, like downloads and movies, still require CORS headers, which our proxy is still responsible for adding.)
This forgoes some minor performance wins (like the Download button now requiring separate network requests), and some potential filesize reduction opportunities (like Fastly's auto-gzip, which we use today for SVGs, and eventually their Image Optimizer for assets), in order to decrease latency. We could still potentially do something more powerful for low-power connections someday… but for now, with the cache miss latency being *so* heavy, this seems like the clear win for almost certainly *all* users today.
I have a hunch that people aren't finding the Download button! I'm not 100% sure what to do about that, but to start, I want right-clicking the image to give you a hint about it 😅
Woo, it's the big UI experiment! Let's see how it plays for folks 😅
Scrolling through a big lists page right now, I think this is a _huge_ improvement, I can get a sense of the lists and what's in them fast, and see matches fast, and dive quickly and with no extra load time when I want more. I'm pleased tbh!
Pulled MarkdownAndSafeHTML into a shared component, and use it on the single list page now too!
I also simplified some of the logic for the item list, because I figure we'll have to give the trade matching stuff its own pass, y'know?
Oops, I pulled `currentUserId` from the wrong place, so it was always acting as if you're logged in! Now, you can see the list page for your own private list!
I'm not sure real item data "should" ever do this? Our "Written Word Shower" had a blank description, but I think that was an error on our end.
Anyway, it's clearer than showing infinite loading, so!
Oops, okay, I guess I didn't test the new preview centering stuff with 1200x1200 images, like the Usul's Damask Markings.
Now, I apply a max size to the whole-ass container, and make the parent responsible for centering it.
So I finally started looking into the race condition that makes item previews sometimes fail to load, and as expected, it was that we were trying to load the movie before CreateJS had necessarily loaded. Usually the timing worked out, esp after a reload, but not under certain circumstances!
Anyway, I've been wanting for a while to just bundle them instead. That'll help us more eagerly load them when we need them, and not depend on external CDNs, and remove a bunch of loading state!
So yeah, I had to learn how the `easeljs` and `tweenjs` NPM packages did their bundling, and how to use `imports-loader` to let them just register straight onto `window`! But we got there and it's pretty nice tbh!
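The webpack wiring ended up something like this; the paths and options here are from memory, so treat them as an assumption:

```js
// next.config.js
module.exports = {
  webpack(config) {
    config.module.rules.push({
      test: /[\\/]node_modules[\\/](easeljs|tweenjs)[\\/]/,
      loader: "imports-loader",
      // Run the library code with `window` as its context, so it
      // registers the `createjs` globals like the CDN script tags did.
      options: { wrapper: "window" },
    });
    return config;
  },
};
```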
Woof, the "Swirl of Power Effect" item tanks my CPU waaay too much
(I bet it's those 7000x7000 PNGs lolol 😬)
Anyway, before thinking about optimizing specific issues, I'm just adding this emergency switch: if we detect FPS < 2 on any layer, we just pause the whole outfit, until the user decides to unpause.
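The detection can lean on EaselJS's built-in FPS measurement, roughly like this (the handler name is a stand-in):

```js
// On every tick, check the measured frame rate; if the movie is tanking
// the CPU, pause the whole outfit until the user opts back in.
createjs.Ticker.addEventListener("tick", () => {
  if (createjs.Ticker.getMeasuredFPS() < 2) {
    onLowFps();
  }
});
```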
Oh hey, turns out I was missing a step in movie clip stuff! Images aren't just for sprite sheets; sometimes the movie just wants the raw images, too!
Here, we register them, uwu
Items like "Swirl of Power Effect" and probably others work correctly now!
That said, Swirl of Power takes WAY too much CPU lmao, I want to maybe add some kind of automatic kill switch lol
There are a couple spots where we parse SWF URLs to get the ID out! Most visibly, our Support tools were crashing on it. And internally, manifest loading wasn't working. (I'm not sure if this got caught or if it caused crashes in user space? I didn't see them when wearing a failing item)
Anyway, fixed now!
This is because it's the terminology I'm using elsewhere ("Items" and "Closet" are too overloaded in the UI), and because I want to start putting specific lists at like `/user/:userId/lists/:listId`!
I also create a redirect from the old URL, and also from the DTI Classic variant of the URL
Ah right, React state batching doesn't always work how I expect it to. The separate state caused the hook to return and cache `{loading: false, error: null, data: null}`, and then on a _later_ tick the data value showed up, but only _after_ the response was already cached!
This broke a bunch of species/color picker stuff; now it's fixed!
Okay, so getting the initial render down time for these faces is annoying, though I might come back to it…
But actually, the _worst_ part isn't the _initial_ render, which just kinda gets processed as part of the page navigation, right?
The _worst_ part is that we render it slowly _twice_: once on page load, as we send the `useAllValidPetPoses` fetch request; and then again when the fetch request ~instantly comes back from the network cache.
The fact that this requires a double-render, instead of just rendering with the cached valids data in the first place (like how our GraphQL client does), causes a second and highly-visible render of a slow-to-render UI!
So, here we update `useAllValidPetPoses` to cache its response in JS memory, similar in principle to how Apollo Client does. That way, we can return the valids instantly on the first render, if you already loaded them from the homepage or the wardrobe page or another item page!
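The cache is just a module-level variable, in the spirit of this sketch (the URL and data shape are stand-ins):

```js
import React from "react";

// Module-level cache: survives across component mounts, so later pages
// can render with the valids data on their very first render.
let cachedValidsBuffer = null;

function useAllValidPetPoses() {
  const [buffer, setBuffer] = React.useState(cachedValidsBuffer);

  React.useEffect(() => {
    if (cachedValidsBuffer != null) return; // already warm!
    fetch("/api/validPetPoses")
      .then((res) => res.arrayBuffer())
      .then((loadedBuffer) => {
        cachedValidsBuffer = loadedBuffer;
        setBuffer(loadedBuffer);
      });
  }, []);

  return buffer;
}
```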
This is a pretty easy change, that makes re-renders faster when something about the item preview state changes!
That said, the initial render is still pretty slow, too, and that's the one that's bothering me more lol
Ah oops, because I forgot to set `hiResMode` here in the image preloader, we would preload the PNG, and _then_ load the SVG separately.
This doubled the effective image loading time in Hi-Res Mode!
Now, the image preloader respects hi-res mode, and will preload the SVG in the SVG case, and the PNG in the PNG case.
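In sketch form (names are stand-ins):

```js
// Preload the same URL the real renderer will use, so the preload
// actually warms the right cache entry.
function getBestImageUrlForPreload(layer, { hiResMode }) {
  return hiResMode && layer.svgUrl ? layer.svgUrl : layer.imageUrl;
}
```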
If you're not in hi-res mode, then you don't care about broken SVGs, because you wouldn't have seen them anyway!
We also update the message to reference Hi-Res Mode.
We're just having too many glitchy SVGs for my taste, esp since TNT seems to just be using PNGs for now?
This change switches users to PNGs by default, with the option to use SVGs via a new "hi-res mode" setting.
This is our first ever setting, wow!
I'm also envisioning that like, if we get Fastly Image Optimizer set up, this could be a way to tune the quality of the incoming images.
We could also consider a setting to turn off animations altogether—like, just download the PNG instead of the movie, whereas right now we download the movie on the assumption that you might play it at any time.
The way we were checking for UC compatibility issues was also triggering while the appearance was still loading, so items didn't have any appearance layers yet!
Now, we check for loading before testing for that glitch.
Oh right, adding user data to this query makes it uncacheable!
Split the query into the main public data, which will cache; and the user data, which will load in later.
I refactor the hide-badge thing into 3 "trade matching modes". Then, the logic for whether to hide specific badges moves into the component, and we use that same flag to decide whether to show the big word "match"!
Boom, cute owns/wants badges on "Latest items", and the item search page, and trade matches!
I'm gonna add some additional flair to the trade match case, too!