forked from OpenNeo/impress
Commit graph

6 commits

472ae645a0 Finish migrating to Ruby 3.3.0
As the comment in `deploy.yml` explains, this was a multi-step process,
but it went very smoothly as planned, hooray!!

I noticed again while making this change that Bundler doesn't seem to
be availing itself of the checked-in dependencies in `vendor/cache`. I
think I know the fix for this, I'll toss it into an upcoming change and
see if it works!
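
For reference, and not necessarily the fix alluded to above, a couple of Bundler settings commonly determine whether `vendor/cache` actually gets used. A hypothetical sketch:

    # Hedged sketch only, not the actual upcoming change.
    # Have `bundle cache` vendor git- and path-sourced gems too:
    bundle config set cache_all true
    bundle cache
    # Install strictly from vendor/cache, without touching rubygems.org:
    bundle install --local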
2024-02-22 12:05:02 -08:00
76af587e7c Replace puma server with falcon
Been wanting this for a while in theory, gonna actually do it now!

The motivation is that I want to turn up the timeout for loading pets,
because the Neopets endpoints are slower today with the NC UC release -
but I can already predict that under our current architecture that will
be a problem, because it'll block up our request queue!

Falcon uses Ruby's relatively-new async system to *not* have requests
block on upstream requests, and my understanding is that this behavior
is plug-and-play. Let's see how it goes!
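
As a hypothetical sketch of what that buys us (none of this code is from the change; the URL and timeout numbers are made up): under Falcon, each request should run in its own fiber, so a long wait on a slow Neopets response parks just that one fiber instead of tying up the whole request queue.

    # Hypothetical illustration of turning up an upstream timeout.
    # Under Falcon, this blocking-looking Net::HTTP wait should suspend only
    # the current request's fiber, so other requests keep being served.
    require "net/http"

    def fetch_pet_data(pet_name)
      # Placeholder URL; the real endpoint and params live elsewhere in the app.
      uri = URI("https://example-neopets-endpoint.test/pets/#{pet_name}")
      Net::HTTP.start(uri.host, uri.port, use_ssl: true,
                      open_timeout: 5, read_timeout: 30) do |http|
        http.get(uri.request_uri)
      end
    end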
2024-01-23 21:55:26 -08:00
91eb2f7752 Kill the app at high RAM, instead of trying to throttle it first
Well, sitting at the `MemoryHigh` limit still grinds the app to a halt
anyway, lmao. I guess it's a feature designed for well-behaved processes
and not for outright leaking ones?

Let's try just having systemd basically reset the app regularly when the
RAM hits a certain threshold. I think that's what this config will do,
we'll find out!
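
My guess at the shape of that config, with made-up numbers (a sketch, not the real unit file): drop the throttle-oriented `MemoryHigh` behavior and lean on `MemoryMax`, which is a hard cap where the service gets OOM-killed, plus `Restart=` so it comes straight back up.

    # Hypothetical drop-in for the impress service; values are assumptions.
    [Service]
    # No more soft/throttle limit:
    MemoryHigh=infinity
    # Hard cap: crossing this gets the service OOM-killed...
    MemoryMax=75%
    # ...and then restarted, which is the "reset the app regularly" part.
    Restart=always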
2023-10-27 17:03:08 -07:00
af705f1be0 Tighten the RAM limit bounds on the production impress service
Lol ok, as I had kinda predicted, the memory bounds I set last time
were not tight enough, and it stalled out again! (It was at 75% and
fully just not working.)

Let's try this tighter bound instead!
2023-10-27 10:32:33 -07:00
271d477110 Add RAM constraints to impress service in production
I just restarted the impress app in production! First I logged in to see
why it wasn't responding, and I saw that there was almost no free RAM
left, and that the Rails app had grown to eat it all up!

So in this change, we set memory limits: if the impress app is taking up
more than 75% of the machine's RAM, systemd will try to shrink it down;
if it can't, then it will kill the app at 80%.
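
In systemd terms, that presumably looks something like this drop-in (the percentages are the ones described above; the exact file layout is my assumption): `MemoryHigh` is the soft limit where the kernel starts reclaiming and throttling, and `MemoryMax` is the hard limit where the service gets OOM-killed.

    # Sketch of the settings being described; layout assumed, numbers from above.
    [Service]
    # Soft limit: above 75% of RAM, throttle the app and reclaim memory aggressively.
    MemoryHigh=75%
    # Hard limit: at 80% of RAM, the OOM killer takes the service down.
    MemoryMax=80%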

I'm not totally sure whether these bounds are tight enough? I didn't
look closely enough at the numbers to see what the app's actual usage
was according to systemctl at the time (`sudo systemctl status
impress`), so my hope is that this is enough. But if we run into a
memory-leak crash like that again, because it turns out that even
sitting at 75% RAM freezes the machine when running alongside its other
processes, we can decrease these numbers!

I also don't know the nature of the memory leak, and that could be worth
investigating—the app pretty cleanly fits into ~500–600MB when it starts
up, but then does seem to slowly but steadily grow. If it could be kept
at that size, it's possible we could downgrade the server and save some
costs—but that's a question for another day, since making sure we handle
memory leaks when they *do* happen is a more important robustness fix!
2023-10-26 13:52:44 -07:00
44141ce165 Extract some files out of the deploy script
Okay, there's enough going on in here now that I don't like it inline
anymore! Welcome to `files`!
2023-10-25 15:41:16 -07:00