That is, if you're browsing a trade list and think "oh, actually, I _do_ want that!", click through to the item page to mark it, then click Back, we'll now update the match indicators on the trade list page to reflect that it's a match.
This was just a matter of simplifying the GraphQL query; I think the `currentUserOwnsThis` and `currentUserWantsThis` fields just didn't exist yet when this page was first built?
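For context, here's a minimal sketch of roughly what the simplified query can look like, assuming Apollo's `gql` tag; the query and type names here are illustrative, but the two fields are the real ones:

```js
import { gql } from "@apollo/client";

// Hypothetical query shape: we ask for the current user's relationship to
// each item inline, so Apollo's normalized cache can update these flags
// when the item page marks the item as owned/wanted.
const tradeListPageQuery = gql`
  query TradeListPage($listId: ID!) {
    closetList(id: $listId) {
      items {
        id
        name
        currentUserOwnsThis
        currentUserWantsThis
      }
    }
  }
`;
```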
We _don't_ yet update your _own_ trade list, if you click through to an item to remove it or something like that. The cache update function isn't too tricky, but it's a bit verbose to implement in Apollo, so I'm not bothering right now!
Oops, the <Wrap> component is nice, but it uses React.Children to apply margin to its _direct_ children, and our badges are not always direct children! (See the new `ZoneBadgeList`.)
I poked my head into how `Wrap` works, and it's honestly pretty simple, so I've applied the same styles manually. Ta da!
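Here's a minimal sketch of that trick, with a hypothetical 0.5rem spacing: the container's negative margin cancels out the extra space the children's margins add at the edges, and because the margin lives on the badges themselves, it works even when they're nested inside another component:

```js
import React from "react";

// The flex container's negative margin absorbs the outer edge of the
// children's margins, so the list lines up flush with its surroundings.
function BadgeListContainer({ children }) {
  return (
    <div style={{ display: "flex", flexWrap: "wrap", margin: "-0.25rem" }}>
      {children}
    </div>
  );
}

// Each badge carries its own margin, so it doesn't matter whether it's a
// direct child of the container or wrapped in something like ZoneBadgeList.
function Badge({ children }) {
  return <span style={{ margin: "0.25rem" }}>{children}</span>;
}
```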
Not sure why movie clip building was failing! But it happened outside our try/catch, so it left us in an infinite spinner state.
The repro item is the Spring Topiary Garden Background!
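The fix is roughly this shape (the names here are hypothetical, just to show the structure): the movie clip build moves inside the try, so a failure lands us in the error state instead of spinning forever:

```js
// Hypothetical sketch: previously, buildMovieClip() ran after the
// try/catch, so its errors escaped our error handling entirely.
async function loadMovieLayer(layer) {
  try {
    const library = await loadMovieLibrary(layer.canvasMovieLibraryUrl);
    const movieClip = buildMovieClip(library); // now inside the try!
    return { status: "loaded", movieClip };
  } catch (e) {
    console.error("Error loading movie layer", e);
    return { status: "error", error: e };
  }
}
```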
The AMFPHP gateway's json.php endpoint has always had a problem parsing pets whose names start with digits… I've dug into it before, and checked again today, and there really is just no way around it: d584b58e95/core/json/app/Actions.php (L43)
And there aren't any reliable AMFPHP Node libraries out there to make the actual native AMF call.
Buuuut! In today's investigation, I noticed the xmlrpc.php endpoint for the first time. And, wouldn't you know it, there's _great_ reliability for something as enterprise-standard as that!
So here, I've switched over to using an xmlrpc client library, which simplifies our calling code _and_ makes number pets work correctly 😁 I wouldn't have done it just for the simplification; I think bringing in a library is net more complexity… but getting this finally right is a big relief.
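Here's a minimal sketch of the new calling code, assuming the `xmlrpc` npm package; the endpoint path and service/method names are illustrative:

```js
const xmlrpc = require("xmlrpc");

// Hypothetical endpoint path and method name, to show the shape of the call.
const client = xmlrpc.createClient({
  host: "www.neopets.com",
  port: 80,
  path: "/amfphp/xmlrpc.php",
});

function getPetData(petName) {
  return new Promise((resolve, reject) => {
    client.methodCall(
      "CustomPetService.getViewerData",
      [petName],
      (error, value) => (error ? reject(error) : resolve(value))
    );
  });
}
```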
When you navigated directly to ItemPage, the new `safeImageUrl` function would crash during the loading state, because it was trying to safe-ify `undefined`.
Now, I've just made `safeImageUrl` more resilient to that particular kind of unexpected input, by passing null-y values through unchanged.
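The guard itself is simple; a minimal sketch, with the actual URL-rewriting logic simplified away:

```js
function safeImageUrl(urlString) {
  // Pass null-y values through unchanged, so callers can use this during
  // the loading state, before the real URL has arrived.
  if (urlString == null) {
    return urlString;
  }
  // ...otherwise, rewrite the URL to a safe HTTPS equivalent, as before.
  // (Simplified here; the real rewriting is more involved.)
  return urlString.replace(/^http:/, "https:");
}
```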
When we decided to start out with /api/assetProxy, we didn't know how much the load would be in practice, so we just went ahead and tried it! Turns out, it was too high, and Vercel shut down our deployment 😅
Now, we've off-loaded this to a Fastly CDN proxy, which should run even faster and more efficiently, without adding pressure to Vercel's servers or pushing up our usage numbers! And I suspect we're gonna stay comfortably in Fastly's free tier :) but we'll see!
(Though, as always, if Neopets can finally upgrade their own stuff to HTTPS, we'll get to tear down this whole proxy altogether!)
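The client-side change amounts to pointing image URLs at the proxy's hostname, instead of routing them through our own API route; a minimal sketch, with a hypothetical proxy hostname:

```js
// Hypothetical hostname: the real one is whatever domain we point at the
// Fastly service that proxies and caches images.neopets.com over HTTPS.
function proxiedImageUrl(urlString) {
  const url = new URL(urlString);
  if (url.host === "images.neopets.com") {
    url.protocol = "https:";
    url.host = "images.neopets-proxy.example.com";
  }
  return url.toString();
}
```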
I guess something got more picky about the loading sequencing: the fade-in animation was happening faster than the cached image could load. Now, we explicitly wait for the image to load (even though we know it's probably cached) before fading it in.
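Concretely, the fix is shaped something like this (hypothetical names), waiting on the load event even when we expect an instant cache hit:

```js
// If the image is already loaded (e.g. from cache), `complete` is true and
// we can fade in immediately; otherwise, wait for the load event first.
async function fadeInWhenLoaded(imgEl) {
  if (!imgEl.complete) {
    await new Promise((resolve, reject) => {
      imgEl.addEventListener("load", resolve, { once: true });
      imgEl.addEventListener("error", reject, { once: true });
    });
  }
  imgEl.classList.add("fade-in"); // kicks off the CSS opacity transition
}
```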
I noticed that, if you're _reading_ the beta callout, it's obviously a feedback link, but it's easy to glaze over "Tell us what you think". Here, I've added the word "feedback" to make it stand out when scanning the page, while adding "Got ideas?" to keep it feeling colloquial.
Oops, our movie layer promises don't have a .cancel() method, so calling it crashed our error handler. Now, when there's an error loading a layer and there are HTML5 layers visible, we'll correctly show the "Could not load preview. Try again?" message.
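The guard in the error handler is just a type check; a minimal sketch with hypothetical names:

```js
// Some of our layer promises are cancelable (they expose .cancel()), and
// some aren't, like the movie layer ones, so we check before calling.
for (const promise of layerPromises) {
  if (typeof promise.cancel === "function") {
    promise.cancel();
  }
}
```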
So I broke the Download button when we switched to impress-2020.openneo.net: I forgot to update the Amazon S3 config.
But in addition to that, I'm making some code changes here to make downloads faster: we now use exactly the same URL and crossOrigin configuration for both the <img> tag on the page and the image the Download button requests, which lets the download reuse the cached copy instead of fetching anew. (There were two main mismatches: 1. the download always loaded the PNGs instead of the SVG, which doesn't matter for quality if we're rendering a 600x600 bitmap anyway, but matching is better for caching; and 2. we now send `crossOrigin` on the <img> tag, which isn't necessary there, but is necessary for Download, and having them match means we can use the cached copy.)
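A minimal sketch of the sharing trick, with hypothetical names: both the on-page <img> and the Download button's loader derive from one source of truth, so the browser's cache entry (which is keyed partly by CORS mode) can serve both:

```js
// Hypothetical helper: one place decides the URL and crossOrigin settings.
function getBestImageProps(layer) {
  return { src: layer.svgUrl ?? layer.pngUrl, crossOrigin: "anonymous" };
}

// On the page: <img {...getBestImageProps(layer)} />
// In the Download handler, load the exact same image the page used:
function loadImageForDownload(layer) {
  const { src, crossOrigin } = getBestImageProps(layer);
  const image = new Image();
  image.crossOrigin = crossOrigin; // must be set before src!
  image.src = src;
  return new Promise((resolve, reject) => {
    image.onload = () => resolve(image);
    image.onerror = reject;
  });
}
```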
Going through matchu@openneo.net was hitting a spam filter on that email host's redirect… Gmail seems to be more okay with it, so I'm sending straight there, and crossing my fingers 🤞
Oops, I shipped with the images.neopets.com TODO undone! Also, the white background was intersecting with the close X for the feedback form.
In this change, we move the xwee image into our bundle instead of depending on images.neopets.com, and we edit it to have a transparent background, which looks nicer for dark mode. (And we do a `srcset`!)
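Bundling it looks roughly like this, assuming our build setup's usual image imports; the file names here are illustrative:

```js
import React from "react";
import xweeImg from "../images/xwee.png";
import xweeImg2x from "../images/xwee@2x.png";

// The build hashes and serves these files itself, so we no longer depend
// on images.neopets.com, and srcSet lets the browser pick a resolution.
function FeedbackXwee() {
  return (
    <img
      src={xweeImg}
      srcSet={`${xweeImg} 1x, ${xweeImg2x} 2x`}
      alt="Smiling Xweetok"
    />
  );
}
```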
Previously I tried to be clever and pre-optimize by putting all the layers onto one canvas… I think this probably helped by batching their paints, but it made fades less smooth by not taking advantage of native CSS transitions, and it made us dip into JS way more often than necessary.
Here, I take the simpler approach: just layers of <img> and <canvas> tags, with each animated layer on its own canvas, letting the browser handle transitions and compositing, and with separate `setInterval` timers to manage each layer's framerate.
I have a suspicion that batching the paints could help performance more, but honestly, maybe that batching is already happening somehow, because things look pretty great in my big-screen stress test now. So if it _is_ relevant, I want to wait and see after testing on low-power devices.
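For reference, the per-layer loop is shaped something like this (hypothetical names): each animated layer owns one canvas and one timer, and the browser composites the stack:

```js
// One canvas and one timer per animated layer; static layers are plain
// <img> tags stacked in the same place, with no JS involved at all.
function startMovieLayer(canvas, movieClip, fps) {
  const ctx = canvas.getContext("2d");
  const intervalId = setInterval(() => {
    movieClip.advance(); // step this layer's animation one frame
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    movieClip.draw(ctx); // paint just this layer onto its own canvas
  }, 1000 / fps);
  return () => clearInterval(intervalId); // cleanup, e.g. on unmount
}
```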