I've noticed that our Fastly proxy adds a surprising amount of latency on cache misses (500-1000ms). And while our overall hit ratio of 80% is pretty good, misses tend to happen at inopportune times, like when loading items from search.
But now that the Neopets CDN supports HTTPS, we can safely switch back to theirs for *most* image loads. (Some features, like downloads and movies, still require CORS headers, which our proxy is still responsible for adding.)
This forgoes some minor performance wins (like the Download button now requiring separate network requests), and some potential filesize reduction opportunities (like Fastly's auto-gzip, which we use today for SVGs, and their Image Optimizer, which we might have eventually used for assets), in order to decrease latency. We could still potentially do something more powerful for low-power connections someday… but for now, with the cache miss latency being *so* heavy, this seems like the clear win for almost certainly *all* users today.
So I finally started looking into the race condition that makes item previews sometimes fail to load, and as expected, it was that we were trying to load the movie before CreateJS had necessarily loaded. Usually the timing worked out, especially after a reload, but under certain circumstances the movie would lose the race!
Anyway, I've been wanting for a while to just bundle the CreateJS scripts instead. That'll help us load them more eagerly when we need them, not depend on external CDNs, and remove a bunch of loading state!
So yeah, I had to learn how the `easeljs` and `tweenjs` NPM packages did their bundling, and how to use `imports-loader` to let them just register straight onto `window`! But we got there and it's pretty nice tbh!
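For reference, here's a minimal sketch of the kind of incantation this takes. The `wrapper=window` option is real `imports-loader` (v1+) inline syntax, but the exact `lib/` paths inside the packages are my assumption:

```js
// The packages' prebuilt files attach `createjs` to `this`, so we use
// imports-loader's `wrapper` option to invoke them with `window` as `this`.
// (The lib paths here are assumptions, not necessarily our exact imports!)
import "imports-loader?wrapper=window!easeljs/lib/easeljs";
import "imports-loader?wrapper=window!tweenjs/lib/tweenjs";

// After that, the globals are there, just like with the CDN <script> tags:
console.log(typeof window.createjs.Stage); // "function"
```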
Woof, the "Swirl of Power Effect" item tanks my CPU waaay too much
(I bet it's those 7000x7000 PNGs lolol 😬)
Anyway, before thinking about optimizing specific issues, I'm just adding this emergency switch: if we detect FPS < 2 on any layer, we just pause the whole outfit, until the user decides to unpause.
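Here's a rough sketch of the detection; `pauseEntireOutfit` is a hypothetical name standing in for however we flip the outfit-wide pause state:

```js
// Each animated layer measures the gap between its frames. If any layer
// dips below the threshold, we pause everything until the user unpauses.
const PAUSE_BELOW_FPS = 2;

let lastFrameTime = performance.now();

function onLayerFrame() {
  const now = performance.now();
  const fps = 1000 / (now - lastFrameTime);
  lastFrameTime = now;

  if (fps < PAUSE_BELOW_FPS) {
    pauseEntireOutfit(); // hypothetical outfit-wide pause handler
  }
}
```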
Oh hey, turns out I was missing a step in the movie clip stuff! Libraries don't use images only as sprite sheets; sometimes they want the raw images themselves, too!
Here, we register them, uwu
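Here's the shape of the fix, sketched from Adobe Animate's standard HTML5 Canvas publish template; names like `loadedImages` and `compositionImages` are illustrative stand-ins for our loader's actual variables:

```js
// Sprite sheet atlases get wrapped in SpriteSheet instances, keyed by name.
// (We were already doing this part.)
const spriteSheets = {};
for (const metadata of library.ssMetadata) {
  spriteSheets[metadata.name] = new createjs.SpriteSheet({
    images: [loadedImages[metadata.name]],
    frames: metadata.frames,
  });
}

// But images that *aren't* sprite sheets need to be registered directly on
// the composition's images object, so Bitmaps can look them up by id.
// (This was the missing step!)
for (const { id } of library.properties.manifest) {
  if (!(id in spriteSheets)) {
    compositionImages[id] = loadedImages[id];
  }
}
```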
Items like "Swirl of Power Effect" and probably others work correctly now!
That said, Swirl of Power takes WAY too much CPU lmao, I want to maybe add some kind of automatic kill switch lol
The layer preloader already takes advantage of, and primes, the HTTP cache.
But we still do duplicate work on every OutfitPreview render: re-executing movie clip libraries, and creating a movie clip to test whether it's animated. The former has a nontrivial cost, and the latter is often a large one. This can make even basic outfit changes slow, even when there's no change to the movie clip layers and the player is paused!
Here, we add an LRU cache for movie clip libraries, and for the answer to "is it animated?". This should speed up a number of places where we would reload the movie (including toggling an item off and back on), and various changes that were unnecessarily triggering full movie clip rebuilds.
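The gist, sketched with the `lru-cache` package (I'm not asserting that's the exact dependency, and the helpers like `executeMovieLibrary` are hypothetical names):

```js
import LRU from "lru-cache";

// Keep recent libraries around, so re-renders don't re-execute the scripts.
const movieLibraryCache = new LRU({ max: 10 });

async function loadMovieLibrary(librarySrc) {
  if (!movieLibraryCache.has(librarySrc)) {
    // Hypothetical helper: fetch the script and evaluate it for its library.
    movieLibraryCache.set(librarySrc, await executeMovieLibrary(librarySrc));
  }
  return movieLibraryCache.get(librarySrc);
}

// Cache "is it animated?" too, so we don't build a throwaway movie clip
// just to answer the same question on every render.
const isAnimatedCache = new LRU({ max: 50 });

function getIsAnimated(librarySrc, library) {
  if (!isAnimatedCache.has(librarySrc)) {
    const movieClip = buildMovieClip(library); // hypothetical helper
    isAnimatedCache.set(librarySrc, hasAnimations(movieClip)); // ditto
  }
  return isAnimatedCache.get(librarySrc);
}
```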
We _aren't_ solving here for the fact that toggling an animated item requires rebuilding the movie clip. That could conceivably be cached too, but with some state management trickiness, because ideally each context where the clip is shown should get its own separate clip. Imo not yet worth the effort! (Esp because I think users understand that toggling an animated item can be slow, whereas this was affecting _other_ actions way too much.)
I think what's happening in Sentry error IMPRESS-2020-1F is that mobile devices are running out of memory, so `canvas.getContext("2d")` returns null.
Now, we have a UI affordance to let you know when this is probably what's happening!
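The detection itself is simple; roughly this shape, with a hypothetical callback name for showing the affordance:

```js
function getContextOrWarn(canvas, onProbablyOutOfMemory) {
  const context = canvas.getContext("2d");
  if (context == null) {
    // This seems to happen when the device is out of memory. Rather than
    // crash, surface a friendly explanation in the UI.
    onProbablyOutOfMemory(); // hypothetical callback
    return null;
  }
  return context;
}
```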
Also, when researching this, I learned about a Safari bug where you need to manually garbage-collect your own canvas data. It's possible that Safari users have been having particular trouble with memory leaks over long sessions? I'm not sure, but it seems like a good idea to add this small garbage-collection code!
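The workaround itself is tiny: before letting go of a canvas, zero out its backing buffer so Safari actually releases the memory. In a React component, that looks something like this sketch (the component is illustrative, not our actual one):

```jsx
import React from "react";

function MovieLayerCanvas(props) {
  const canvasRef = React.useRef(null);

  React.useEffect(() => {
    const canvas = canvasRef.current;
    return () => {
      // Safari workaround: manually zero out the canvas's backing buffer on
      // unmount, instead of trusting the browser to reclaim it.
      canvas.width = 0;
      canvas.height = 0;
    };
  }, []);

  return <canvas ref={canvasRef} {...props} />;
}
```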
I'm getting some vague errors in Sentry about `canvas.getContext` returning null? Weird. (IMPRESS-2020-1F)
I'm not sure what that's about, so I don't want to stop sending it to Sentry. But I do want to make sure we handle this kind of error gracefully! (While I don't think this one was, in the future an error like this _could_ be caused by errors in Neopets movie clip JS, and I don't want our app to start messing up because of it!)
Here, we make sure to log the error to the console with more detail (the library URL), show feedback to the user, and only log the error once per clip (so that animated ones don't, like, send a bunch).
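A sketch of the once-per-clip guard; imagine it living inside the movie layer component, where `librarySrc` and `Sentry` are in scope (the state setter name is made up):

```js
const hasLoggedRenderError = React.useRef(false);

function onMovieClipRenderError(error) {
  setHasRenderError(true); // hypothetical state that shows the user feedback

  // Animated clips can throw on every frame, so only report the first one.
  if (hasLoggedRenderError.current) return;
  hasLoggedRenderError.current = true;

  console.error(`Error rendering movie clip ${librarySrc}`, error);
  Sentry.captureException(error, { extra: { librarySrc } });
}
```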
"Beautiful Green Painting Background" wasn't loading! https://impress-2020.openneo.net/items/75594
```
Error building movie clips Error: Expected JS movie library
http://images.neopets.com/cp/items/data/000/000/491/491273_31368b3745/491273_2_HTML5%20Canvas.js
to contain a constructor named _491273_2_HTML5%20Canvas, but it did not:
ssMetadata,Bitmap3,Bitmap5,CachedTexturedBitmap_4183,CachedTexturedBitmap_4184,CachedTexturedBitmap_4185,CachedTexturedBitmap_4186,CachedTexturedBitmap_4187,Symbol20,Symbol8,Symbol4,Symbol7,Symbol2,Symbol1,Symbol9,Symbol2copy,Symbol2_1,_491273_2_HTML5Canvas,properties,Stage
```
We already had code to strip out spaces, but not encoded spaces like %20. Now, we decode the URL first, so that space-stripping will work even if it was encoded.
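Roughly, the fix looks like this (the helper name is made up; the real code lives wherever we derive constructor names from library URLs):

```js
// Animate names the library's constructor after the source file name, with
// spaces stripped: ".../491273_2_HTML5%20Canvas.js" → "_491273_2_HTML5Canvas".
function constructorNameForLibraryUrl(libraryUrl) {
  const fileName = decodeURI(libraryUrl).split("/").pop(); // decode %20 → " " first!
  const baseName = fileName.replace(/\.js$/, "");
  return "_" + baseName.replace(/ /g, ""); // now space-stripping catches them all
}
```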
So I broke the Download button when we switched to impress-2020.openneo.net, because I forgot to update the Amazon S3 config.
But in addition to that, I'm making some code changes here to make downloads faster: we now use exactly the same URL and crossOrigin configuration for the <img> tag on the page and for the image that the Download button requests, which ensures the browser can use the cached copy instead of loading everything fresh. (There were two main changes: 1. always load the PNGs instead of the SVG, which doesn't matter for quality since we're rendering a 600x600 bitmap anyway, but is good for caching; and 2. send `crossOrigin` on the <img> tag, which isn't necessary there, but is necessary for Download, and having them match means we can use the cached copy.)
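Sketched out, the invariant looks like this (component and helper names are illustrative, not our actual code):

```jsx
// The on-page <img> and the Download button's loader must agree on both the
// URL and the crossOrigin mode, or the browser treats them as separate cache
// entries and re-downloads everything.
function OutfitLayerImage({ src }) {
  return <img src={src} crossOrigin="anonymous" alt="" />;
}

function loadImageForDownload(src) {
  return new Promise((resolve, reject) => {
    const image = new Image();
    image.crossOrigin = "anonymous"; // matches the <img> tag above
    image.onload = () => resolve(image);
    image.onerror = () => reject(new Error(`Failed to load ${src}`));
    image.src = src; // same URL as the <img> tag, too
  });
}
```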
Previously I tried to be clever and pre-optimize by putting all the layers onto one canvas… I think this probably helped by batching their paints, but it made fades less smooth by not taking advantage of native CSS transitions, and it made us dip into JS way more often than necessary.
Here, I take the simpler approach: just layers of <img> and <canvas> tags, with each animated layer on its own canvas, with its own `setInterval` timer to manage its framerate, letting the browser handle transitions and compositing.
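In sketch form (heavily simplified, with illustrative names; the real components also manage loading, sizing, and pause state):

```jsx
function OutfitLayers({ layers }) {
  // Stacked <img> and <canvas> elements: the browser composites them, and
  // CSS transitions handle the fades natively.
  return (
    <div className="outfit-layers">
      {layers.map((layer) =>
        layer.isAnimated ? (
          <OutfitMovieLayer key={layer.id} layer={layer} />
        ) : (
          <img key={layer.id} src={layer.imageUrl} alt="" />
        )
      )}
    </div>
  );
}

function OutfitMovieLayer({ layer }) {
  const canvasRef = React.useRef(null);

  React.useEffect(() => {
    const stage = buildMovieStage(canvasRef.current, layer); // hypothetical helper
    // Each animated layer gets its own timer, at its own framerate.
    const intervalId = setInterval(() => stage.update(), 1000 / layer.fps);
    return () => clearInterval(intervalId);
  }, [layer]);

  return <canvas ref={canvasRef} width={600} height={600} />;
}
```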
I have a suspicion that batching the paints could help performance more, but honestly, maybe that batching is already happening somehow, because things look pretty great in my big-screen stress test now; so if it _is_ relevant, I want to wait and see after testing on low-power devices.