Specifically, we rename the dev database to `openneo_impress` for consistency with the main app, and create a second schema file for `openneo_id`, so we can do local account creation.
If you want the prod db, you now use `yarn local-prod`. Just to encourage myself to stick to the dev db a bit more, now that I've got it really set up properly on my machine!
Hey this is an exciting development! A list of URLs, that we want to clone onto our hard drive, turns out to be something `wget` is already very good at!
Originally I used `wget`'s `--input-file` option to process the `urls-cache.txt` file, but then I learned how to parallelize it from this StackOverflow answer: https://stackoverflow.com/a/11850469/107415. (Following the guidance in the comments, I removed `-n 1`, to avoid the overhead of extra processes and allow `wget` instances to keep using shared connections over time. Idk why it was in there, maybe the author didn't know `wget` accepts multiple args?)
Anyway yeah, it's working great, except for the weird images.neopets.com downtime! 😅 Specifically I'm noticing that all the item thumbnail images came back really fast, but the customization images are taking for-EV-er. I wonder if that's just caching behavior, or if there's a different backing server for them that's responding much more slowly? Who's to say!
In any case, I'm keeping the timeout in this script pretty low (10 seconds), and just letting failures fail. We can try re-running it again sometime when the downtime is resolved or the cache is warmed up.
Especially in our item thumbnails, there's a lot of messiness about what the URL protocol is. There are also some SWF assets whose "URLs" are just saved as paths.
In this change, we start processing all our outputted URLs through a `sanitizeUrl` function, which tries to massage each one into an `https://images.neopets.com` URL, and warns if it can't.
This also warns on some intentionally-different URLs, like our April Fools prank item lol
Anyway, I love functions like this, because the warnings always help me discover the data problems! I wasn't aware of the path-only SWF URLs, for example, until this script started warning about the URL parse errors!
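For flavor, here's roughly the kind of thing `sanitizeUrl` does (a sketch, not the exact code; the warning message and branch order are made up):

```js
// Sketch: normalize protocol-relative and http URLs, resolve path-only SWF
// "URLs" against images.neopets.com, and warn on anything we don't recognize.
function sanitizeUrl(url) {
  if (url.startsWith("//")) {
    url = "https:" + url;
  } else if (url.startsWith("http://")) {
    url = "https://" + url.slice("http://".length);
  } else if (url.startsWith("/")) {
    // Some old SWF asset "URLs" are just saved as paths.
    url = "https://images.neopets.com" + url;
  }

  if (!url.startsWith("https://images.neopets.com/")) {
    console.warn(`Unexpected URL, skipping: ${url}`);
    return null;
  }

  return url;
}
```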
Here, we read URLs out from the swf_assets table, including SWFs, manifests, and everything referenced by the manifests.
There are a few data-polishing tricks we needed to do to get this to work! Most notably, newer manifests reference themselves, but older ones don't; so we try to infer the manifest URL from the other URLs. (Our database caches the manifest content, but not the manifest URL it came from.)
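The inference is roughly like this (a sketch only: the helper name, and the assumption that the manifest sits in the same folder as its assets under a `manifest.json` filename, are illustrative):

```js
// Sketch: guess the manifest URL from the assets it references, assuming it
// lives alongside them.
function inferManifestUrl(assetUrls) {
  for (const url of assetUrls) {
    if (url.startsWith("https://images.neopets.com/")) {
      // Take the asset's folder, and guess the manifest sits next to it.
      return url.slice(0, url.lastIndexOf("/") + 1) + "manifest.json";
    }
  }
  return null; // Couldn't infer one; skip it and warn, probably!
}
```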
Just working on making an images.neopets.com mirror, just in case! To start, I'm extracting all the URLs we need to back up; and then I'll make a separate script whose job is to mirror all of the URLs in the list.
I decided that `setAuthToken` was the more appropriate server-level API, and that letting the login/logout mutations engage with the auth library made more sense to me!
Oh yeah, ok, we don't actually want to evict `currentUser` from the Apollo cache on *any* login mutation. That's both inefficient, and puts our navbar in a loading state that hides the login button and thereby unmounts the login modal, oops!
hey it's been a while! lol
I replaced the friendly long-time Feedback Xwee with a cute chef Kiko, to mix things up a bit and help people notice that the box changed!
I forgot that we sometimes use the Apollo server in a context where `req` and `res` aren't present! (namely in our `build-cached-data` script.)
In this change, we update the DTI-Auth-Mode HTTP header check to be cognizant that the request might be absent!
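The fix is roughly this shape (a sketch; the function name and the default mode value here are just illustrative):

```js
// Sketch: in contexts like the build-cached-data script, there's no HTTP
// request at all, so fall back to the normal auth mode.
function getAuthModeForRequest(req) {
  if (!req) {
    return "auth0";
  }
  // Node lowercases incoming header names.
  return req.headers["dti-auth-mode"] || "auth0";
}
```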
There was a bug in the new db auth method where `useRequireLogin` was expecting Auth0 logins to work, so it would get caught in an infinite redirect loop.
Rather than trying to figure out how to make `useRequireLogin` work with the new modal UI, I figured we can just delete it (since we only ended up using it once anyway), and add a little message if you happen to end up on the page while logged out. Easy peasy!
Hey hey, logging out works! The server side of this was easy, but I made a few refactors to support it well on the client, like `useLoginActions` becoming just `useLogout` lol, and updating how the nav menu chooses between buttons vs menu because I wanted `<LogoutButton />` to contain some state.
We also did good Apollo cache stuff to update the page after you log in or out! I think some fields that don't derive from `User`, like `Item.currentUserOwnsThis`, are gonna fail to update until you reload the page but like that's fine idk :p
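The cache-updating part is roughly this kind of thing (a sketch, not the exact code; the mutation shape is made up):

```js
import { gql, useMutation } from "@apollo/client";

const LogoutMutation = gql`
  mutation LogoutButton_Logout {
    logout # (illustrative shape; the real mutation's fields may differ)
  }
`;

function useLogout() {
  return useMutation(LogoutMutation, {
    update: (cache) => {
      // Evict the cached current user, so queries watching it re-render in
      // the logged-out state. (Like the note above says, fields that don't
      // derive from User won't update until a reload, and that's okay!)
      cache.evict({ fieldName: "currentUser" });
      cache.gc();
    },
  });
}
```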
There's a known bug where logging out on the Your Outfits page turns into an infinite loop situation, because it's trying to do Auth0 stuff but the login keeps failing to have any effect because we're in db mode! I'll fix that next.
Yeah cool, the login button seems to work now? And subsequent requests serve user data correctly based on that, and let you edit stuff.
I also tested the following attacks:
- Using the wrong password indeed fails! lol basic one
- Changing the userId or createdAt fields in the cookie causes the auth token to be rejected for an invalid signature.
Tbh that's all that comes to mind… like, you either attack us by tricking the login itself into giving you a token when it shouldn't, or you attack us by tricking the subsequent requests into accepting a token when it shouldn't. Seems like we're covered? 😳🤞
Still need to add logout, but yeah, this is… looking surprisingly close to feature parity with our Auth0 integration already lmao. Maybe it'll be ready to launch sooner than expected?
Right, I had that idea while writing the comment, then forgot to actually do it lmao
This is important for session expiration: we don't want you to be able to hold onto an old cookie for an account that you should be locked out of. Updating the `createdAt` value requires a new signature, so the client can't forge when this token was created, so we can be confident in our ability to expire them.
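For posterity, the idea is roughly this (a sketch, not our exact token format; the function names, the HMAC choice, and the field handling here are all illustrative):

```js
import crypto from "crypto";

// Sketch: the payload is signed, so tampering with userId *or* createdAt
// invalidates the signature.
function signAuthToken({ userId, createdAt }, secret) {
  const payload = JSON.stringify({ userId, createdAt });
  const signature = crypto
    .createHmac("sha256", secret)
    .update(payload)
    .digest("hex");
  return { userId, createdAt, signature };
}

function readAuthToken({ userId, createdAt, signature }, secret, maxAgeMs) {
  const expectedSignature = crypto
    .createHmac("sha256", secret)
    .update(JSON.stringify({ userId, createdAt }))
    .digest("hex");
  const valid =
    signature.length === expectedSignature.length &&
    crypto.timingSafeEqual(
      Buffer.from(signature),
      Buffer.from(expectedSignature),
    );
  if (!valid) {
    return null; // invalid signature: someone edited the fields!
  }
  if (Date.now() - new Date(createdAt).getTime() > maxAgeMs) {
    return null; // token is too old: time to log in again!
  }
  return { userId };
}
```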
Okay so one of the trickiest parts of login is done! 🤞 and now we need to make it actually show up in the UI. (and also pressure-test the security a bit, I've only really checked the happy path!)
This is for like, build targeting based on features, I think. A thing in Next popped up and asked me to update it via `npx browserslist@latest --update-db`, so I did!
Thinking about longevity, I think I wanna cut Auth0 loose, and just go back to using our own auth.
I had figured at the time that I didn't want to integrate with OpenNeo ID's whole mess, and I didn't want to write a whole new auth system, so Auth0 seemed to make things easier.
But now, it's just kinda a lot to be carrying along an external service as a dependency for login, especially when we've got all the stuff in the database right here. I wanna remove architecture pieces! Get it outta here!
And I'll finally build account creation from the 2020 site while I'm at it, which seemed like it was gonna be a bit of a pain with Auth0 and syncing anyway. (I think at the time I was a bit more optimistic about a full transfer from one system to another, but that's much further off than I realized, and this path will be much better for keeping things in sync.)
There are two Neopets NC items named Butterfly Dress! ~owls disambiguates them by calling one of them "Butterfly Dress (from Faerie Festival event)".
It'd be a bit more robust to cooperate with ~owls to get item IDs served up in this case, but it's not a big deal esp. for only this one case, so like… this is fine!
Ok I think this oughta do it! A bug we used to have was that, if you deployed 5 times in a row without changes to node_modules, then all 5 versions would reference `node_modules` from 6 deploys ago, which would then get "cleaned up". And so then the app would have all its deps disappear!
There's still a bug here, noted in the comments. That one sounds harder to solve, and suggests a refactor for how to do build caching more robustly, and… idk I think this cheat might just not be worth it. Idk! But I'm keeping it for the moment for convenience.
Just a cute little way to let us preview it without having to spin up a separate instance of the app or use a feature flag system!
This means we can safely merge and push this to production, without worrying about leaking the feature before the ~owls team signs off.
Hey nice it looks like it's working! :3 "Bright Speckled Parasol" is a nice test case, it has a long text string! And when the NC value is not included in the ~owls list, we indeed don't show the badge!
Hey finally! I got in the mood and did it, after… a year? idk lol
The button should only appear for outfits that are already saved and owned by you. And the server enforces it!
I also added a new util function to give actually useful error messages when the GraphQL server throws an error. Might be wise to use this in more places where we're currently just using `error.message`!
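The util is along these lines (a sketch; the name and fallback behavior here are illustrative):

```js
// Sketch: prefer the server's GraphQL error messages over Apollo's generic
// `error.message`, falling back if there aren't any.
function getGraphQLErrorMessage(error, fallbackMessage = "Something went wrong") {
  const messages = (error?.graphQLErrors ?? []).map((e) => e.message);
  if (messages.length > 0) {
    return messages.join("\n");
  }
  return error?.message || fallbackMessage;
}
```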
Uhhh I guess I never added the check that the outfit you're editing is your own? Embarrassing.
I don't have any reason to believe anyone abused this, but 😬! Good to have fixed now!
This isn't a partnership we've actually talked through with the team, I'm just validating whether we could reuse our Waka code if it were to come up! and playing with it for fun 😊
Been getting a lot of errors I *think* from folks trying to add OWLS Pricer to Impress 2020 even though it doesn't work here! Reasonable to have happen though! I thought Sentry knew to ignore those, but I guess it doesn't?
In this change, we add some filtering to ignore errors triggered by extensions. This should keep them out of our inbox!
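The filtering is roughly like this (a sketch; the exact Sentry package, DSN variable, and patterns here are illustrative, not necessarily what this change uses):

```js
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // Don't report errors whose frames come from extension code rather than
  // our own bundle.
  denyUrls: [
    /^chrome-extension:\/\//i,
    /^moz-extension:\/\//i,
    /^safari-(web-)?extension:\/\//i,
  ],
});
```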
I wasn't able to test this very robustly locally. I'm mostly just crossing fingers!
Tbh I didn't even really validate these changes, or that the codepaths right now aren't working, they just seem like clear drop-in upgrades now that HTTPS works and HTTP requests are redirected. Simplify!
Right now it returns 50 rows; each item that needs modeling returns 1–4 rows, usually 1. So a limit of 200 shouldn't be too dangerous, while also creating a release valve if there's another future bug: it'll just have the problem of returning too few items, instead of the problem of crashing everything! 😅
Neopets released a new Maraquan Koi, and it revealed a mistake in our modeling query! We already knew that the Maraquan Mynci was actually the same body type as the standard Mynci colors, but now the Koi is the same way, and because there's _two_ such species, the query started reacting by assuming that a _bunch_ of items that fit both the standard Mynci and standard Koi (which is a LOT of items!!) should also fit all _Maraquan_ pets, because they fit both the Maraquan Mynci and Maraquan Koi too. (Whereas previously, that part of the query would say "oh, it just fits the Maraquan Mynci, we don't need to assume it fits ALL Maraquan pets, that's probably just species-specific.")
so yeah! This change should help the query ignore Maraquan species that have the same body type as standard species. It's fine to essentially treat them like they don't exist, because we won't lose out on any modeling that way: the standard models will cover the Maraquan versions for those two species!
Oops, the new Juppie Swirl color has ID 114, and there's no released color #113 yet. But our `/api/validPetPoses` code, when deciding how large to make the byte array, uses the _number_ of colors in the database.
This meant that, when Juppie Swirl was released, there wasn't a 114th slot allocated, and the loop stopped at color ID #113—so the new Juppie Swirl color #114 wasn't included in the results. This made it impossible to select Juppie Swirl as a starting color. (You could, however, model a Juppie Swirl Chia, and the wardrobe would load it successfully; and you would see the color/species picker with the correct options selected, but in a red "invalid" state.)
Now, we instead use the largest ID in the database to determine the size of the array. This means Juppie Swirl is now included correctly!
There would be network perf implications if the color IDs were a sparser space, but it's dense enough to be totally fine in practice. (But let's not release an April Fools color #9999 or anything!)
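The fix is roughly this shape (a sketch; the table and column names, and the db wrapper, are illustrative here):

```js
// Sketch: size by the *largest* color ID, not COUNT(*). Gaps like the
// unreleased #113 mean the count can be smaller than the highest ID
// (Juppie Swirl is #114!), which used to push the newest color off the end.
async function getNumColors(db) {
  const [[row]] = await db.query(`SELECT MAX(id) AS maxColorId FROM colors`);
  return row.maxColorId;
}
```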
Right, ok, `db.close()` needs to be `db.end()` now.
This probably didn't break the user-syncing cron job though, because that doesn't automatically update, I think? So it should still be comfortably running an older version of the code, which should still work just fine.
Ah okay, pools support `query` and `execute` the same way connection objects do (as a shorthand for acquiring, querying, and releasing), but they don't have the same helpers for transactions. Makes sense: you need those queries to go to the same connection, and an API where you just call it against the pool object can't tell that it's part of the same thing!
Now, we have our transaction code explicitly acquire a connection to use for the duration of the transaction.
An alternative considered would have been to have `connectToDb` acquire a connection, and then release it at the end of the GraphQL request. That would have made app code simpler, but added a lot of additional potential surprise failure points to the infra imo (e.g. what if we're misunderstanding the GraphQL codepath and the connection never gets released? whereas here it's relatively easy to audit that there's a `finally` in the right spot.)
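The new helper is roughly this shape (a sketch; the names are illustrative, but this is the gist of the acquire/commit/rollback/release dance):

```js
// Sketch: transactions need all their queries on the *same* connection, so we
// explicitly check one out of the pool for the duration.
async function runInTransaction(pool, callback) {
  const connection = await pool.getConnection();
  try {
    await connection.beginTransaction();
    const result = await callback(connection);
    await connection.commit();
    return result;
  } catch (error) {
    await connection.rollback();
    throw error;
  } finally {
    // This is the `finally` that's easy to audit: the connection always goes
    // back to the pool, even if the transaction fails.
    connection.release();
  }
}
```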