Oh yeah, ok, we don't actually want to evict `currentUser` from the Apollo cache on *any* login mutation. That's both inefficient, and puts our navbar in a loading state that hides the login button and thereby unmounts the login modal, oops!
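For posterity, a minimal sketch of the direction I mean (with hypothetical query/mutation names, not our exact code): write the logged-in user straight into the cache from the mutation's `update` callback, instead of evicting and refetching, so the navbar never enters a loading state:

```js
import { gql, useMutation } from "@apollo/client";

const CURRENT_USER_QUERY = gql`
  query CurrentUser {
    currentUser {
      id
      username
    }
  }
`;

const LOGIN_MUTATION = gql`
  mutation Login($username: String!, $password: String!) {
    login(username: $username, password: $password) {
      id
      username
    }
  }
`;

function useLogin() {
  return useMutation(LOGIN_MUTATION, {
    update(cache, { data }) {
      // No eviction, no loading state: just tell the cache who's logged in.
      cache.writeQuery({
        query: CURRENT_USER_QUERY,
        data: { currentUser: data.login },
      });
    },
  });
}
```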
hey it's been a while! lol
I replaced the friendly long-time Feedback Xwee with a cute chef Kiko, to mix things up a bit and help people notice that the box changed!
I forgot that we sometimes use the Apollo server in a context where `req` and `res` aren't present! (namely in our `build-cached-data` script.)
In this change, we update the DTI-Auth-Mode HTTP header check to be cognizant that the request might be absent!
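Something like this, roughly (hypothetical names, not the literal code):

```js
// In scripts like `build-cached-data`, there's no HTTP request at all,
// so we have to check for `req` before reading headers.
function getAuthMode({ req }) {
  // `req` is absent when we're running outside an HTTP context.
  const headerValue = req?.headers["dti-auth-mode"];
  return headerValue ?? "auth0"; // assumed default, for illustration
}
```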
There was a bug in the new db auth method where `useRequireLogin` was expecting Auth0 logins to work, so it would get caught in an infinite redirect loop.
Rather than trying to figure out how to make `useRequireLogin` work with the new modal UI, I figured we can just delete it (since we only ended up using it once anyway), and add a little message if you happen to end up on the page while logged out. Easy peasy!
Hey hey, logging out works! The server side of this was easy, but I made a few refactors to support it well on the client, like `useLoginActions` becoming just `useLogout` lol, and updating how the nav menu chooses between buttons vs menu because I wanted `<LogoutButton />` to contain some state.
We also did good Apollo cache stuff to update the page after you log in or out! I think some fields that don't derive from `User`, like `Item.currentUserOwnsThis`, are gonna fail to update until you reload the page but like that's fine idk :p
There's a known bug where logging out on the Your Outfits page turns into an infinite loop situation, because it's trying to do Auth0 stuff but the login keeps failing to have any effect because we're in db mode! I'll fix that next.
Yeah cool, the login button seems to work now? And subsequent requests serve user data correctly based on that, and let you edit stuff.
I also tested the following attacks:
- Using the wrong password indeed fails! lol basic one
- Changing the userId or createdAt fields in the cookie causes the auth token to be rejected for an invalid signature.
Tbh that's all that comes to mind… like, you either attack us by tricking the login itself into giving you a token when it shouldn't, or you attack us by tricking the subsequent requests into accepting a token when they shouldn't. Seems like we're covered? 😳🤞
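For reference, here's a sketch of the signature scheme I'm describing (not our literal implementation): the HMAC covers both fields, so tampering with either one breaks verification:

```js
const crypto = require("crypto");

function signAuthToken({ userId, createdAt }, secret) {
  const payload = JSON.stringify({ userId, createdAt });
  const signature = crypto
    .createHmac("sha256", secret)
    .update(payload)
    .digest("hex");
  return { payload, signature };
}

function verifyAuthToken({ payload, signature }, secret) {
  const expected = crypto
    .createHmac("sha256", secret)
    .update(payload)
    .digest("hex");
  // timingSafeEqual avoids leaking info via comparison timing.
  return (
    expected.length === signature.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signature))
  );
}
```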
Still need to add logout, but yeah, this is… looking surprisingly feature-parity with our Auth0 integration already lmao. Maybe it'll be ready to launch sooner than expected?
Right, I had that idea while writing the comment, then forgot to actually do it lmao
This is important for session expiration: we don't want you to be able to hold onto an old cookie for an account that you should be locked out of. Updating the `createdAt` value requires a new signature, so the client can't forge when this token was created, so we can be confident in our ability to expire them.
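Concretely, the check can be as simple as this sketch (the max age is a made-up number for illustration):

```js
// Since `createdAt` is covered by the signature, we can trust it when
// deciding whether the token is too old.
const MAX_TOKEN_AGE_MS = 1000 * 60 * 60 * 24 * 30; // hypothetical: 30 days

function isTokenExpired({ createdAt }) {
  return Date.now() - new Date(createdAt).getTime() > MAX_TOKEN_AGE_MS;
}
```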
Okay so one of the trickiest parts of login is done! 🤞 and now we need to make it actually show up in the UI. (and also pressure-test the security a bit, I've only really checked the happy path!)
This is for like, build targeting based on features, I think. A thing in Next popped up and asked me to update it via `npx browserslist@latest --update-db`, so I did!
Thinking about longevity, I think I wanna cut Auth0 loose, and just go back to using our own auth.
I had figured at the time that I didn't want to integrate with OpenNeo ID's whole mess, and I didn't want to write a whole new auth system, so Auth0 seemed to make things easier.
But now, it's just kinda a lot to be carrying along an external service as a dependency for login, especially when we've got all the stuff in the database right here. I wanna remove architecture pieces! Get it outta here!
And I'll finally build account creation from the 2020 site while I'm at it, which seemed like it was gonna be a bit of a pain with Auth0 and syncing anyway. (I think at the time I was a bit more optimistic about a full transfer from one system to another, but that's much further off than I realized, and this path will be much better for keeping things in sync.)
There are two Neopets NC items named Butterfly Dress! ~owls disambiguates them by calling one of them "Butterfly Dress (from Faerie Festival event)".
It'd be a bit more robust to cooperate with ~owls to get item IDs served up in this case, but it's not a big deal esp. for only this one case, so like… this is fine!
Ok I think this oughta do it! A bug we used to have was that, if you deployed 5 times in a row without changes to node_modules, then all 5 versions would reference `node_modules` from 6 deploys ago, which would then get "cleaned up". And so then the app would have all its deps disappear!
There's still a bug here, noted in the comments. That one sounds harder to solve, and suggests a refactor for how to do build caching more robustly, and… idk I think this cheat might just not be worth it. Idk! But I'm keeping it for the moment for convenience.
Just a cute little way to let us preview it without having to spin up a separate instance of the app or use a feature flag system!
This means we can safely merge and push this to production, without worrying about leaking the feature before the ~owls team signs off.
Hey nice it looks like it's working! :3 "Bright Speckled Parasol" is a nice test case, it has a long text string! And when the NC value is not included in the ~owls list, we indeed don't show the badge!
Hey finally! I got in the mood and did it, after… a year? idk lol
The button should only appear for outfits that are already saved and owned by you. And the server enforces it!
I also added a new util function to give actually useful error messages when the GraphQL server throws an error. Might be wise to use this in more places where we're currently just using `error.message`!
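It's roughly this shape (hypothetical name, simplified): Apollo's `error.message` often buries the details, so we dig into `graphQLErrors` and `networkError` instead.

```js
function getGraphQLErrorMessage(error) {
  // Prefer the actual server-side error messages, if we have them.
  if (error.graphQLErrors?.length > 0) {
    return error.graphQLErrors.map((e) => e.message).join("; ");
  }
  if (error.networkError) {
    return `Network error: ${error.networkError.message}`;
  }
  return error.message;
}
```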
Uhhh I guess I never added the check that the outfit you're editing is your own? Embarrassing.
I don't have any reason to believe anyone abused this, but 😬! Good to have fixed now!
This isn't a partnership we've actually talked through with the team, I'm just validating whether we could reuse our Waka code if it were to come up! and playing with it for fun 😊
Been getting a lot of errors, I *think* from folks trying to add OWLS Pricer to Impress 2020, even though it doesn't work here! Reasonable to have happen though! I thought Sentry knew to ignore those, but I guess it doesn't?
In this change, we add some filtering to ignore errors triggered by extensions. This should keep them out of our inbox!
I wasn't able to test this very robustly locally. I'm mostly just crossing fingers!
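For the record, the filtering is roughly this shape, assuming a reasonably recent `@sentry/browser` (the exact option names are the part I'm crossing fingers on):

```js
import * as Sentry from "@sentry/browser";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  // Drop events whose stack frames come from extension URLs,
  // which is where these errors originate.
  denyUrls: [
    /extensions\//i,
    /^chrome-extension:\/\//i,
    /^moz-extension:\/\//i,
  ],
});
```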
Tbh I didn't even really validate these changes, or that the codepaths right now aren't working, they just seem like clear drop-in upgrades now that HTTPS works and HTTP requests are redirected. Simplify!
Right now it returns 50 rows; each item that needs modeling returns 1–4 rows, usually 1. So a limit of 200 should be pretty safe, while also creating a release valve if there's another future bug: it'll just have the problem of returning too few items, instead of the problem of crashing everything! 😅
Neopets released a new Maraquan Koi, and it revealed a mistake in our modeling query! We already knew that the Maraquan Mynci is actually the same body type as the standard Mynci colors, and now the Koi is the same way. Because there are _two_ such species, the query started assuming that a _bunch_ of items that fit both the standard Mynci and standard Koi (which is a LOT of items!!) should also fit all _Maraquan_ pets, since they fit both the Maraquan Mynci and Maraquan Koi too. (Whereas previously, that part of the query would say "oh, it just fits the Maraquan Mynci, we don't need to assume it fits ALL Maraquan pets, that's probably just species-specific.")
so yeah! This change should help the query ignore Maraquan species that have the same body type as standard species. It's fine to essentially treat them like they don't exist: we won't lose out on any modeling that way, because the standard models will cover the Maraquan versions for those two species!
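The shape of it is roughly this, as a sketch (the schema and column names here are hypothetical, not our actual query):

```js
async function getMaraquanPetTypesForModeling(db) {
  // Skip Maraquan pet types whose body is shared with a standard-color
  // pet type, since standard modeling already covers those.
  const [rows] = await db.query(
    `SELECT * FROM pet_types
       WHERE color_id = (SELECT id FROM colors WHERE name = "Maraquan")
         AND body_id NOT IN (
           SELECT body_id FROM pet_types
             WHERE color_id IN (SELECT id FROM colors WHERE standard = 1)
         )`
  );
  return rows;
}
```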
Oops, the new Juppie Swirl color has ID 114, and there's no released color #113 yet. But our `/api/validPetPoses` code, when deciding how large to make the byte array, uses the _number_ of colors in the database.
This meant that, when Juppie Swirl was released, there wasn't a 114th slot allocated, and the loop stopped at color ID #113—so the new Juppie Swirl color #114 wasn't included in the results. This made it impossible to select Juppie Swirl as a starting color. (You could, however, model a Juppie Swirl Chia, and the wardrobe would load it successfully; and you would see the color/species picker with the correct options selected, but in a red "invalid" state.)
Now, we instead use the largest ID in the database to determine the size of the array. This means Juppie Swirl is now included correctly!
There would be network perf implications if the color IDs were a sparser space, but it's dense enough to be totally fine in practice. (But let's not release an April Fools color #9999 or anything!)
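i.e. something like this sketch (names simplified): size the array by the largest color ID, not the number of rows, so gaps like the missing #113 can't push later colors out of range.

```js
async function getNumColorSlots(db) {
  // Before: SELECT COUNT(*) FROM colors, which breaks when IDs have gaps.
  const [rows] = await db.query(`SELECT MAX(id) AS maxId FROM colors`);
  return rows[0].maxId;
}
```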
Right, ok, `db.close()` needs to be `db.end()` now.
This probably didn't break the user-syncing cron job though, because I don't think that updates automatically? So it should still be comfortably running an older version of the code, which should still work just fine.
Ah okay, pools support `query` and `execute` the same way connection objects do (as a shorthand for acquiring, querying, and releasing), but they don't have the same helpers for transactions. Makes sense: you need all of a transaction's queries to go to the same connection, and an API where you just call methods on the pool object can't tell which queries belong together!
Now, we have our transaction code explicitly acquire a connection to use for the duration of the transaction.
An alternative considered would have been to have `connectToDb` acquire a connection, and then release it at the end of the GraphQL request. That would have made app code simpler, but added a lot of additional potential surprise failure points to the infra imo (e.g. what if we're misunderstanding the GraphQL codepath and the connection never gets released? whereas here it's relatively easy to audit that there's a `finally` in the right spot.)
This should both fix cases where the connection closes for various reasons (by having the pool reconnect), and give us a second way of solving some of the blocking issues we were having with large queries (by letting faster queries use parallel connections).
Idk what a reasonable number is, 10 seems to be what various guides are saying? Might tune it down if it ends up pushing various connection limits? (We could also constrain it on dev specifically, if that matters.)
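For reference, the rough shape of the pool + transaction setup, as a sketch with made-up config values (using mysql2/promise):

```js
const mysql = require("mysql2/promise");

const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  connectionLimit: 10, // the "10 seems to be what guides say" number
});

async function inTransaction(callback) {
  // All queries in the transaction must share one connection, so we
  // acquire it explicitly instead of querying the pool directly.
  const connection = await pool.getConnection();
  try {
    await connection.beginTransaction();
    const result = await callback(connection);
    await connection.commit();
    return result;
  } catch (error) {
    await connection.rollback();
    throw error;
  } finally {
    // The `finally` here is the easy-to-audit release point.
    connection.release();
  }
}
```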
Uhh hmm, I don't remember when we removed it from package.json. I guess maybe I thought it was unused and didn't look carefully enough? Anyway, this fixes the export-users-to-auth0 script, which was crashing because yargs wasn't installed, oops!
I hypothesize that loading people's full trade lists more often than necessary is part of the cause of the recent mega slowdown!
My hypothesis is that we're clogging up the MySQL connection socket with a ton of data, which blocks all other queries until the big ones come through and parse out. (I haven't actually validated my assumption that MySQL connections send query results in serial like that, but it makes sense to me, and fits what I've been seeing.)
There's more places we could potentially optimize, like the trade list page itself… (we currently aggressively load everything when we could limit it and load the rest on the followup pages, or even paginate the followup pages…)
…but my hope is that this helps enough, by relieving the load on the homepage (latest items) and on item searches!
This is a bit hacky, but I want to ship and I'm not in a mood for a refactor :P
Before this change, you could see a bug by doing the following:
1. Click "I own this" to own an item.
2. Click "Add a list" and add it to a list.
3. Click "I own this" to un-own the item. (This deletes it from all lists.)
4. Observe that the "Add a list" dropdown disappears.
5. Click "I own this" to own it again.
6. Observe that, before this change, the dropdown would reappear, but incorrectly say it was still in the old list. After this change, it appears with the blank "Add to list", as intended.
Oops, I used the wrong property to control the checkbox state! This made it an uncontrolled component. It would always start unchecked when the page loads, regardless of actual own/want state, and then toggle based on physical clicks.
This meant that things generally worked correctly if you didn't own/want the item when you first loaded the page; but if you already did, then you would click once and send an *add* mutation instead of a remove; and then click again and be able to remove.
Now, removes only take one click!
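For posterity, a sketch of the controlled-vs-uncontrolled distinction (not our exact component, and the "wrong property" is my guess here, since `defaultChecked` is the classic way to accidentally make a checkbox uncontrolled):

```js
import React from "react";

function OwnsThisCheckbox({ ownsThis, onToggle }) {
  return (
    <input
      type="checkbox"
      // The bug was something like defaultChecked={ownsThis}: that only
      // sets the initial state, so the box always started unchecked and
      // drifted from the real own/want data. `checked` keeps it in sync.
      checked={ownsThis}
      onChange={(e) => onToggle(e.target.checked)}
    />
  );
}
```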
Oooh this feature is feeling very nice :) :) We hid "not in a list" pretty smoothly I think!
A known bug: If you have the item in a list, then click the big colorful button, it will remove the item from *all* lists; and then if you click it again, it will add it to Not in a List. But! The UI will still show the lists it was in before, because we haven't updated the client cache. (It's not that bad in the middle state though, because the list dropdown stuff gets hidden.)
My cute keybind to quickly wrap stuff in <Box> wasn't working in this file, and I figured out from deleting stuff and narrowing down that it was this comment. I guess Emmet's JSX parser doesn't like a comment being there! I moved it up a bit instead.
Ah hm, not sure why this only showed up once I tried a prod deploy, but I declared `hiResMode` twice in there, because we already had fixed this bug for item layers but not pet layers!
In this change, I fix the duplicate `hiResMode` declaration, and update the new pet layers message to match the item layers message.
Well, instrumentation seems to be working fine again! The bug we ran into during commit e5081dab7e is gone. Cool!
I want to be able to see what's making the new box slow. My hypothesis was (and it seems to be right) that communication with the database on the Classic DTI server is slow.
But now that they're on the same Linode account and region, I think I can set up a private VLAN to make them muuuch faster. We'll try it out!
Give the grid a fixed size, have the list names get ellipsis when they're too long, and try to show all the list names (which will almost certainly be too long for the space) to give a better hint of what's in there.
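Roughly this kind of thing, as a sketch (assuming Chakra-style props; the component shape is hypothetical):

```js
import React from "react";
import { Box } from "@chakra-ui/react";

function ListNamesPreview({ listNames }) {
  return (
    <Box
      maxWidth="100%"
      overflow="hidden"
      whiteSpace="nowrap"
      textOverflow="ellipsis"
    >
      {/* Show everything; the overflow will ellipsize gracefully. */}
      {listNames.join(", ")}
    </Box>
  );
}
```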