Commit graph

15 commits

ead0003397 Add custom 502 error page, for when the app goes down but nginx is up 2024-02-19 13:19:31 -08:00
aa108190b6 Oops, only redirect to maintenance.html internally
Oh I see, if I start with a slash, then it's interpreted as a reference
to a file; whereas if I don't, it's interpreted as a URL redirect. Ok!
2024-02-19 11:18:28 -08:00
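For reference, the distinction these two commits describe is how nginx's `error_page` directive treats its argument: a path starting with `/` is handled as an internal redirect that nginx serves itself, while a full URL makes the client follow a redirect. A minimal sketch, with paths assumed rather than copied from the repo:

```nginx
server {
    # ...existing proxy config...

    # A URI starting with "/" is served internally: nginx sends the file
    # itself and keeps the original 502 status, with no client-visible redirect.
    error_page 502 /maintenance.html;

    location = /maintenance.html {
        root /srv/impress/current/public;  # assumed location of the static page
        internal;  # only reachable via error_page, not by requesting the URL directly
    }

    # A full URL here would instead send the client a 302 redirect:
    # error_page 502 https://example.com/maintenance.html;
}
```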
7c36ba81e5 Minor change to explanation text in authorized-ssh-keys.txt 2024-02-19 11:12:40 -08:00
974aaa48ff Add maintenance.html page 2024-02-19 09:45:45 -08:00
e9b0fa0779 Future-proof our nginx config for IPv6
Today I learned that nginx requires a special invocation to listen to
IPv6 addresses as well as IPv4. On some of my other projects, this was
causing Let's Encrypt certificate renewal to fail, because Let's
Encrypt prefers to connect over IPv6 when an AAAA record is present, so
its challenges were always returning 404, because nginx wasn't
listening on IPv6.

This shouldn't be affecting impress in production, because we don't
have an AAAA record right now. But I'm just making this change in all
my projects, to make sure this doesn't bite me in the future!
2024-02-13 08:52:45 -08:00
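A sketch of what that "special invocation" looks like in an nginx server block (the ports and ssl specifics are illustrative, not copied from the repo):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;  # without this extra line, nginx only binds the IPv4 address

    # The port-80 server block that answers the ACME HTTP-01 challenges needs
    # the same pair, or requests arriving over IPv6 never reach it:
    # listen 80;
    # listen [::]:80;

    # ...
}
```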
76af587e7c Replace puma server with falcon
Been wanting this for a while in theory, gonna actually do it now!

The motivation is that I want to turn up the timeout for loading pets,
because the Neopets endpoints are slower today with the NC UC release -
but I can already predict that under our current architecture that will
be a problem, because it'll block up our request queue!

Falcon uses Ruby's relatively-new async system to *not* have requests
block on upstream requests, and my understanding is that this behavior
is plug-and-play. Let's see how it goes!
2024-01-23 21:55:26 -08:00
2b382d95fb Update my desktop SSH key
I did a pretty thorough reset of my desktop machine, and rather than go
spelunking for the same private key, I just rolled it over to a new
one. Let's set it up!
2024-01-14 03:07:12 -08:00
91eb2f7752 Kill the app at high RAM, instead of trying to throttle it first
Well, sitting at the `MemoryHigh` limit still grinds the app to a halt
anyway, lmao. I guess it's a feature designed for well-behaved processes
and not for outright leaking ones?

Let's try just having systemd basically reset the app regularly when the
RAM hits a certain threshold. I think that's what this config will do;
we'll find out!
2023-10-27 17:03:08 -07:00
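In systemd-unit terms, "kill instead of throttle" can be expressed roughly like the sketch below. The directives are real systemd options, but the exact threshold and the presence of `Restart=` here are assumptions, not copied from the unit file:

```ini
[Service]
# No MemoryHigh= line, so systemd doesn't try to reclaim/throttle first.
# MemoryMax= is a hard cap; crossing it gets the service OOM-killed...
MemoryMax=80%
# ...and a restart policy brings it straight back up, which is the
# "basically reset the app regularly" behavior described above.
Restart=always
```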
af705f1be0 Tighten the RAM limit bounds on the production impress service
Lol ok, as I had kinda predicted, the memory bounds I set last time
were not tight enough, and it stalled out again! (It was at 75% and
fully just not working.)

Let's try this tighter bound instead!
2023-10-27 10:32:33 -07:00
06258b1dd5 Upgrade puma in the initial-placeholder app, to satisfy Dependabot
So, Dependabot correctly reported that this version of puma is
vulnerable, which I fixed in the main app already—but I didn't notice we
also use that version in this cute tiny placeholder app we use early in
the deployment process.

There's not a real security need to upgrade this, as this placeholder
app has no access to useful data when it is run, but I think it's better
to resolve this by fixing it than by silencing Dependabot! May as well!
2023-10-26 14:48:21 -07:00
271d477110 Add RAM constraints to impress service in production
I just restarted the impress app in production! First I logged in to see
why it wasn't responding, and I saw that there was almost no free RAM
left, and that the Rails app had grown to eat it all up!

So in this change, we set a memory limit: if the impress app is taking
up more than 75% of the machine's RAM, systemd will try to shrink it
down; if it can't, then it will kill the app at 80%.

I'm not totally sure whether these bounds are tight enough? I didn't
look closely enough at the numbers to see what the app's actual usage
was according to systemctl at the time (`sudo systemctl status
impress`), so my hope is that this is enough. But if we run into a memory
leak crash like that again (because it turns out that even sitting at 75%
RAM freezes the machine when running alongside its other processes), we
can decrease these numbers!

I also don't know the nature of the memory leak, and that could be worth
investigating—the app pretty cleanly fits into ~500–600MB when it starts
up, but then does seem to slowly but steadily grow. If it could be kept
at that size, it's possible we could downgrade the server and save some
costs—but that's a question for another day, since making sure we handle
memory leaks when they *do* happen is a more important robustness fix!
2023-10-26 13:52:44 -07:00
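That description maps onto two real systemd directives; a sketch using the 75%/80% figures from the message:

```ini
[Service]
# Soft limit: above this, systemd aggressively reclaims memory from the
# service and throttles it, trying to shrink it back down.
MemoryHigh=75%
# Hard limit: if that doesn't work, the service is OOM-killed at this point.
MemoryMax=80%
```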
024041e591 Configure nginx to send pre-gzipped files to the client
Rails already creates little pre-gzipped `.gz` copies of all our assets
in the `public/assets` directory when we build. This configures nginx to
send those when available!

We weren't doing *any* gzip stuff before, so this helps a lot with those
bigger JS files, like the `wardrobe-2020` stuff. It's now at ~0.5MB with
compression, which is still a bit big, but nowhere near as offensive as
the 4.5MB pre-anything, or 1.5MB post-minification, lol.
2023-10-25 15:44:01 -07:00
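The nginx side of this is essentially one directive from the gzip_static module; a sketch, with the location and root assumed rather than copied from the config:

```nginx
location /assets/ {
    root /srv/impress/current/public;  # assumed; wherever the Rails public/ dir lives

    # If foo.js.gz exists next to foo.js, send the .gz file as-is with
    # Content-Encoding: gzip, instead of compressing (or not) on the fly.
    gzip_static on;
}
```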
44141ce165 Extract some files out of the deploy script
Okay, there's enough going on in here now that I don't like it inline
anymore! Welcome to `files`!
2023-10-25 15:41:16 -07:00
22e3f4240a Update most URLs to use HTTPS
I noticed we didn't have the little lock icon in the browser, and yeah
huh there's a lot of `http://` still floating around! Let's fix that!
2023-10-25 15:22:57 -07:00
3dd5d26332 Create setup.yml deploy script
Yay it's working! We set up the box, install Ruby, upload a placeholder app, set it up as a service, and get it hooked up to nginx!

Next, we'll add the script to upload the latest version of the site. We just need to slot it into `/srv/impress/current`, run `bundle install`, and that should basically be that! (Oh, and we need to compile production assets—I wonder if it's useful to do that on the dev machine instead of on the target? That might save us from needing to install Node. Or maybe we'll have to anyway!)
2023-10-23 19:05:09 -07:00
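Assuming setup.yml is an Ansible playbook, the follow-up step sketched in that last paragraph might look roughly like this. The module names are real Ansible modules, the `/srv/impress/current` path and `impress` service name come from the messages above, and everything else is hypothetical:

```yaml
# Hypothetical deploy tasks, not the actual setup.yml
- name: Upload the latest version of the site
  ansible.builtin.copy:
    src: ../
    dest: /srv/impress/current

- name: Install gems
  ansible.builtin.command:
    cmd: bundle install
    chdir: /srv/impress/current
  # (asset compilation would slot in around here, or happen on the dev
  # machine beforehand, as the message above wonders)

- name: Restart the app service
  ansible.builtin.systemd:
    name: impress
    state: restarted
  become: true
```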