I'm not doing this thoroughly enough for it to matter (e.g. the rsynced deploy versions don't get the group permissions set on them).
I think doing this *right* (to be extensible to additional users) is too much complexity to be worth it, and doing it halfway is more confusing than helpful.
I did this because I was anticipating multi-user permissions to be a bit of an issue for things like granting the web server permission to access the source code. But it turns out, since we're running with pm2, it's all working just fine!
Okay well, we added monit to solve a problem that I coincidentally solved within an hour of getting monit working lol!
This also enables us to remove the pm2 pid file, which we were only using to allow monit to track the pm2 app.
Okay huh, while digging a bit into another issue, I found what was wrong with our config and pm2's built-in monitoring! You can't start the app with `yarn start`, because the yarn wrapper process breaks pm2's ability to look inside and see what the app is actually doing.
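Here's roughly the shape of the fix, as a sketch of a pm2 `ecosystem.config.js` (the real name, paths, and options may differ): point pm2 directly at the Next.js binary, instead of at `yarn start`.

```js
// Sketch only: the actual app name, cwd, and env will differ.
module.exports = {
  apps: [
    {
      name: "impress-2020",
      // Run the Next.js server directly, rather than through `yarn start`,
      // so pm2 is monitoring the real server process instead of a wrapper.
      script: "node_modules/.bin/next",
      args: "start",
      cwd: "/srv/impress-2020/current", // hypothetical deploy path
    },
  ],
};
```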
I also removed the compiler flag from the `start` script in `package.json`, because I think it's redundant? There's no compilation to be done on a live server.
I think I might remove monit after this? It's nice extra resilience in a sense, but it feels like extra complexity when it's doing the job `pm2` is supposed to do. (And tbh I've almost never heard of nginx crashing, and if it does it's probably a scenario worth investigating by hand.)
Oops, I didn't understand the different multiline string formats in YAML! I was using a folded style, which collapses newlines into normal spaces.
I think it didn't actually matter in this context anyway, because more-indented lines are an exception to that folding behavior. What a weird behavior!
Anyway, uhh, yeah, I'll use the simpler literal multiline string format now 😅 for consistency and clarity!
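For future reference, the two YAML block scalar styles in question (this is standard YAML, nothing specific to our setup):

```yaml
# Folded style: newlines get collapsed into spaces (this is what bit me).
folded: >
  first line
  second line

# Literal style: newlines are kept exactly as written.
literal: |
  first line
  second line
```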
When I woke up this morning, the app had crashed because the mysql connection was closed!
I'm not sure why that caused a _crash_, or why pm2 didn't pick up on it and kept reporting the process as online. (Maybe the process was still running, but the server inside it had stopped?) Those could be good to investigate…
…but rather than diving too far into the details, it's better to just address the high-level problem: if the app goes down for unexpected reasons, I want it back up!! lol
In this change, we add `monit`, a solid system for monitoring processes (including checking for behavior, like responding to net requests), and configure it to watch the app process and the nginx process.
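Here's a rough sketch of the monit config; the pid file paths and ports are illustrative rather than copied from the real file:

```
# Hypothetical paths; the real config may differ.
check process nginx with pidfile /var/run/nginx.pid
  start program = "/bin/systemctl start nginx"
  stop program  = "/bin/systemctl stop nginx"
  if failed port 80 protocol http then restart

check process impress-2020 with pidfile /home/deploy/impress-2020.pid
  start program = "/usr/bin/pm2 start impress-2020"
  stop program  = "/usr/bin/pm2 stop impress-2020"
  if failed port 3000 protocol http then restart
```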
To test, you can run `pm2 stop impress-2020`, or `systemctl stop nginx`, to see that Monit brings them back up within seconds!
This does add some potential surprise if you're _trying_ to take the processes down. The easiest way is to send the stop command through monit, like `monit stop nginx`. This will disable monitoring until you start it again through monit, I think? (You can also disable/enable monitoring as a direct command, regardless of app state.)
You can see how, instead of the default experience where certbot edits your nginx config for you, I've referenced the certificates in the config from the start, and set up certbot to just generate them!
Also, I learned about certbot's non-interactive mode! At first I had written this with the Ansible `expect` module lol :p
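The non-interactive invocation is roughly this; the domain, email, and webroot path are placeholders, and I'm not promising `--webroot` is the exact authenticator plugin we use:

```sh
certbot certonly --webroot -w /var/www/letsencrypt \
  --non-interactive --agree-tos \
  --email admin@example.com \
  -d impress-2020.example.com
```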
I was still copying over the old create-react-app build folder, so the app was crashing on the remote server because the build it actually needed wasn't there!
Anyway woo it looks like the app is at least running now, `curl localhost:3000` shows results! >:3
I was sorting by version name, which _was_ reliable until I changed my version name format, and then a deploy cleaned up the version it had just deployed because it no longer sorted to the end!
Rather than be dependent on a stable version name format, and set myself up for more surprises, I'm going to instead use the folder's modified time. While there could be surprising ways for folder timestamps to be updated, I expect it to be _more_ reliable overall.
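Something in the spirit of this shell sketch (made-up path and keep-count; the real logic lives in the deploy tooling):

```sh
# Keep the 5 most recently modified version folders, delete the rest.
ls -1dt /srv/impress-2020/versions/*/ | tail -n +6 | xargs -r rm -rf
```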
Huh, I was previously using ISO timestamps, but it turns out Yarn doesn't handle binary paths very reliably when your folder names contain colons 😳 so `canvas` was failing with `node-gyp not found`.
I've documented it in this self-answered Stack Overflow question, so hopefully the next person searching will find my answer! https://stackoverflow.com/q/69819597/107415
Oh neat, when trying `yarn build && yarn start` locally, I got a message about installing Sharp for better image optimization performance in production.
It mentions that this isn't relevant for Vercel, where it's auto-added. But it's good to get on it now anyway!
I had stopped paying attention to `vercel.json`, because part of my intent is to perhaps move off Vercel, but right… we're still on there for now, and we still want our API functions to run!
There are some places where we use `<img>` tags, and I think that's actually just the right thing to do.
`next/image` is good for image optimization, but I don't think it's worth the proxying for Neopets art images that don't actually always have a higher-res version to begin with.
Idk, maybe the species faces could be a decent choice, but right now we're solving it with `srcSet`, and that's fine. Doesn't seem worth migrating, let's just move on with our lives! 😅
I'm still leaving the lint rule on though, because I think it's a helpful reminder. (I don't think it catches `<Box as="img" />` though, which is a shame, because that would be our natural default in this app!)
Yeah, cool, now we use the `next/image` tag, and our images are showing up again!
There are still lint errors for using bare `<img>` tags in some places, but I'm not sure I really care…
This was a fun journey! Turns out Next 12 is using a new faster JS compiler called SWC, which had a compiler bug that triggered here!
The incorrect looping behavior caused `libraryUrl` to sometimes be `null` by the time the movie promise completed, because `layer` was set to whatever the _last_ layer in the list had been. https://github.com/swc-project/swc/issues/2624
Anyway, turns out this code has been through a few refactors, and the `async` function wrapper is extraneous now! So I've just deleted it and inlined its code. Ta da! lol
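For posterity, here's an illustrative sketch of the kind of pattern that gets hit by this, not the actual movie-loading code:

```js
// Each iteration kicks off async work that reads `layer` later. Correct JS
// gives each iteration its own `layer` binding; the miscompiled output
// behaved as if the binding were shared, so late callbacks saw the last
// layer (and its `libraryUrl`, which could be null).
const layers = [{ libraryUrl: "a.js" }, { libraryUrl: "b.js" }];
const promises = [];
for (const layer of layers) {
  promises.push(
    (async () => {
      await new Promise((resolve) => setTimeout(resolve, 10)); // pretend to load a movie
      return layer.libraryUrl;
    })()
  );
}
Promise.all(promises).then(console.log); // correct output: ["a.js", "b.js"]
```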
Things seemed to mostly work at first glance! I haven't tested `outfitPageSSR`, though, because we'll need to redo it; and the outfit image routes aren't working anymore (`vercel.json` isn't how Next.js routing works).
Tweaked some of the default Next.js rules, fixed lint-staged for `next lint`, made a few small easy lint fixes. Feels good!
Note that using the `dirs` option in `next.config.js` was causing `lint-staged` to lint _everything_. That's why I edited `yarn lint` to specify the dirs instead: that way, that command will lint all those dirs, but they won't get included in invocations with `--file`.
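Concretely, the script ends up looking something like this; the exact directory list here is illustrative rather than copied from our `package.json`:

```json
{
  "scripts": {
    "lint": "next lint --dir src --dir pages --dir api"
  }
}
```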
There are still a few lint errors left after this commit, because we still have `<img>` tags that `@next/next/no-img-element` flags (and those images aren't even working right now). I'll fix those when we figure out what's wrong with images!
I'm interested in ejecting from Vercel, so I'm trying to get off their proprietary-ish create-react-app + Vercel API functions setup, and onto Next.js, which is very similar in shape, but more portable.
I had to disable `craCompat` in `next.config.js` to stop us from crashing on their webpack config, see https://github.com/vercel/next.js/discussions/25858#discussioncomment-1573822
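In `next.config.js`, that looks something like this; I'm sketching the shape from memory, so treat it as illustrative:

```js
// next.config.js (sketch)
module.exports = {
  experimental: {
    // The cra-to-next codemod turns this on, but it crashed for us on the
    // CRA-style webpack config, so it's disabled now:
    craCompat: false,
  },
};
```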
The frontend seems to work at a basic level, but network requests fail, and images don't seem to be working. I'll work on those next!
Note that this commit was forced through despite failing lint checks. We'll need to fix that up too!
Also, after the codemod, I moved `src/pages` to the more canonical location `pages`. The lint tooling seemed surprised not to find a `pages` directory, and I didn't see any config that was making the other location work, so I figure Next is just willing to check either `pages` or `src/pages`. But `pages` is more canonical, so yeah!
Even in production, scrolling is a bit slow! This will preload the pagination one click ahead.
There is a bit of a perf downside, in that if you click through the pages too fast, you'll trigger _extra_ requests. I think that's a net win though, and I'm not gonna try to get cleverer than this right now.
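Here's a sketch of the approach, assuming an Apollo-style `useQuery` with offset pagination; the query shape, field names, and page size below are placeholders rather than the real ones:

```js
import { gql, useQuery } from "@apollo/client";

// Placeholder query and page size; the real names differ.
const OUTFITS_QUERY = gql`
  query Outfits($offset: Int!, $limit: Int!) {
    currentUser {
      id
      outfits(offset: $offset, limit: $limit) {
        id
        name
      }
    }
  }
`;
const PER_PAGE = 30;

function useOutfitsPage(offset) {
  const result = useQuery(OUTFITS_QUERY, {
    variables: { offset, limit: PER_PAGE },
  });
  // Also fire the query for the *next* page, so its data is already in the
  // cache by the time the user clicks ahead.
  useQuery(OUTFITS_QUERY, {
    variables: { offset: offset + PER_PAGE, limit: PER_PAGE },
  });
  return result;
}
```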
My main inspiration for doing this is actually our potentially-huge upcoming Vercel bill lol
From inspecting my Honeycomb dashboard, it looks like the main offender for backend CPU time is outfit images. And they seem to come in big spikes: long stretches of low usage, then suddenly 1,000 requests in one minute.
My suspicion is that this is from users with many saved outfits loading their outfit page, which previously would show all of them at once.
We do have `loading="lazy"` set, but not all browsers support that yet, and I've had trouble pinning down the exact behavior anyway!
Besides, paginating makes for a better experience for those huge-list users anyway. We've been meaning to do it, so here we go!
My hope is that this drastically decreases backend CPU hours immediately 🤞 If not, we'll need to investigate in more detail where these outfit image requests are actually coming from!
Note that I added the pagination to the existing `outfits` GraphQL endpoint, rather than creating a new one. I felt comfortable doing this because it requires login anyway, so I'm confident that other clients aren't using it. And while this kind of thing often creates a risk of frontend and backend code getting out of sync, I think someone running old frontend code will just see only their first 30 outfits (but no pagination toolbar), get confused, and refresh the page, at which point they'll see all of them. (And I actually _prefer_ that slightly confusing UX, to avoid getting more giant spikes of outfit image requests, lol :p)
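On the backend, the pagination is basically just threading `limit`/`offset` through to the database query; here's a sketch with made-up table and field names, not the literal resolver:

```js
// GraphQL resolver sketch (names are illustrative).
const resolvers = {
  User: {
    outfits: async (user, { limit = 30, offset = 0 }, { db }) => {
      const [rows] = await db.query(
        `SELECT * FROM outfits WHERE user_id = ?
         ORDER BY updated_at DESC LIMIT ? OFFSET ?`,
        [user.id, limit, offset]
      );
      return rows;
    },
  },
};
```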
This is a minor change to clear a console warning, and make the intended behavior clearer! You're not supposed to pass `null` as a select value, because it's ambiguous whether you mean to select the first option or to make this an "uncontrolled component".
Here, I now provide a fallback value, which is an explicit string for the placeholder option. I made the string very explicit, to aid in debugging if it somehow leaks out from where it's supposed to be! (But I also added gating in the `onChange` event, just to be extra sure.)
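The pattern now looks roughly like this; a sketch with placeholder component and prop names, assuming Chakra's `Select`:

```jsx
import React from "react";
import { Select } from "@chakra-ui/react";

// A deliberately loud placeholder value, so it's easy to spot if it ever leaks.
const PLACEHOLDER_VALUE = "SHOW-PLACEHOLDER-DO-NOT-SUBMIT";

function ExampleSelect({ selectedId, options, onSelect }) {
  return (
    <Select
      value={selectedId ?? PLACEHOLDER_VALUE}
      onChange={(e) => {
        // Extra gating: ignore the placeholder if it somehow gets chosen.
        if (e.target.value !== PLACEHOLDER_VALUE) {
          onSelect(e.target.value);
        }
      }}
    >
      <option value={PLACEHOLDER_VALUE}>Choose one…</option>
      {options.map((o) => (
        <option key={o.id} value={o.id}>
          {o.label}
        </option>
      ))}
    </Select>
  );
}
```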
Huh. `flexAlign` isn't a real Chakra style prop, because it's not a real CSS style. I wonder if I meant `alignItems`? Anyway, this was getting passed down to the DOM element and triggering a console warning. Removed!
Oops, our cutesy feature to show an outfit thumbnail as a placeholder while the rest of the data is loading was making spurious requests!
I put the `skip` in the wrong place 😅
This caused a request to https://outfits.openneo-assets.net/outfits/null/v/NaN/300.png, which would return a 500.
The user wouldn't see anything, because the failed image just wouldn't show. But it's still a mistake, it sends extra requests from the client and to the server, and it's a good one to fix!
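For context, Apollo's `skip` option is the mechanism here; a sketch with assumed names, just to show the shape of the fix:

```js
import { gql, useQuery } from "@apollo/client";

// Placeholder query; the real one returns whatever the thumbnail needs.
const OUTFIT_THUMBNAIL_QUERY = gql`
  query OutfitThumbnail($outfitId: ID!) {
    outfit(id: $outfitId) {
      id
      updatedAt
    }
  }
`;

function useOutfitThumbnail(outfitId) {
  // Skip the query entirely until we actually have an outfit ID, so we never
  // end up building a URL like .../outfits/null/v/NaN/300.png.
  return useQuery(OUTFIT_THUMBNAIL_QUERY, {
    variables: { outfitId },
    skip: outfitId == null,
  });
}
```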
Hmm, right, okay, we *generally* should have all users imported to Auth0, but this can fail if the cron job is behind or Auth0 rejected the data (e.g. user data in a format it doesn't support).
Previously, this would apply the name change in the database, but return Auth0's "The user does not exist." error to the GraphQL client, making it look like the update fully failed.
In this change, we handle that case differently: when the Auth0 update fails with a 404, we proceed but log a warning; and when Auth0 fails with an unexpected error, we roll back the database change in addition to raising the error to the client, to keep the behavior obvious and consistent.
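Here's a sketch of the new flow, with made-up helper names; the real code is organized differently, but this is the gist:

```js
// Sketch only: `setNameInDb` and `auth0` stand in for the real helpers/client.
async function updateUsername({ db, auth0, auth0UserId, userId, newName }) {
  const previousName = await setNameInDb(db, userId, newName);
  try {
    // Plus whatever other params Auth0 needs for a username update.
    await auth0.updateUser({ id: auth0UserId }, { username: newName });
  } catch (error) {
    // Assuming the error exposes an HTTP status code.
    if (error.statusCode === 404) {
      // The user was never imported into Auth0 (cron behind, or data rejected):
      // keep the database change, but log a warning so we can follow up.
      console.warn(`User ${userId} not found in Auth0; skipped Auth0 update`);
    } else {
      // Unexpected Auth0 failure: roll back the database change, then re-raise,
      // so the client sees a failure that matches what actually happened.
      await setNameInDb(db, userId, previousName);
      throw error;
    }
  }
}
```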
Oh oops, I missed this path change when I changed the route to `/user/:id/lists`! This caused searching by email to redirect to the homepage, but with a valid URL in the address bar; and refreshing the page would hit the redirect defined in `vercel.json`, redirect to the new route, and load the correct page.
Fixed!
Oh right, we have another prompt we need to not prompt for, lol :p
Here, I just assume that there's one database connection on the Auth0 account. If that's not true, the script will error—not because this is a fundamentally unresolvable problem, but because I don't want to write code for configuring a situation that doesn't exist yet :p
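The check itself is simple; something like this, assuming the node-auth0 management client (the exact method name might differ in our version):

```js
// Bail out loudly if the "exactly one database connection" assumption ever
// stops being true, rather than guessing which connection to use.
const connections = await auth0.getConnections({ strategy: "auth0" });
if (connections.length !== 1) {
  throw new Error(
    `expected exactly 1 database connection on the Auth0 account, ` +
      `but found ${connections.length}; please update the script to choose one`
  );
}
const connection = connections[0];
```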
Huh, oops, there are a _few_ reasons the user sync cron job hasn't been running correctly.
I fixed some of the config in prod, but then discovered one more issue: the script prompts for an admin database password, so of _course_ it can't auto-run, lol.
Instead, I've now created an `impress2020-util` account, with just a few permissions (notably the `openneo_id.users` permission that I _don't_ give the app!), and added the username and password to the secret `.env` file, both locally and in prod. (In this case, prod means the Linode VPS, not Vercel, because that's where our cron runs.)
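The grants are roughly of this shape; this is a sketch, not the literal statements I ran:

```sql
-- Sketch of the idea, not the exact privilege list:
CREATE USER 'impress2020-util'@'localhost' IDENTIFIED BY '<password from .env>';
GRANT SELECT ON openneo_id.users TO 'impress2020-util'@'localhost';
-- ...plus whatever few other permissions the util scripts need.
```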