forked from OpenNeo/impress
Commit graph

1663 commits

d2de971a60 Delete more rake tasks
I tried to port the Rainbow Pool ones forward, but ran into issues with the
service that uses browser-specific stuff to check that traffic is valid :/

Incidentally, those were the only places we were using `rest-client`.
Goodbye!
2023-11-10 18:59:46 -08:00
93511b3d51 Delete unused rake tasks 2023-11-10 18:04:24 -08:00
eb4a3ce0d9 More gracefully handle batches that fail to save
I noticed a thing with like, an asset that I think referenced an item that
doesn't exist, which caused an error in the `body_specific?` validation
step?

Tbh that validation step needs to be fixed up in a number of ways, but I'm
scared to, since it's hard to know what will break modeling lol.

But in any case, more graceful handling is nice! If something happens,
I'd rather leave it null and try again later than have the job crash!
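Something in the spirit of this sketch, I'd imagine (the variable and model
names here are stand-ins, not the actual job code):

```ruby
# Hypothetical sketch of the "leave it null and try again later" idea; the
# real batch-saving code differs.
swf_assets.each do |swf_asset|
  begin
    swf_asset.save!
  rescue StandardError => error
    Rails.logger.warn(
      "Couldn't save SwfAsset #{swf_asset.id}: #{error.message} " \
      "(leaving its new fields null, to try again in a later run)"
    )
  end
end
```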
2023-11-10 17:42:56 -08:00
80bd229bc6 Clarify an error message in swf_assets:manifests task
It's not just that none of them were 200 OK, it's that they were all 404.
In the event that something returns not-200 and not-404, we immediately
abort, so we shouldn't get to this case unless they were all 404!
2023-11-10 17:27:35 -08:00
dc22a458bf Move manifest backfill to swf_assets:manifests task
Okay, I've simplified the migration to *just* add the column, and
instead added a task to find assets without manifest URLs and backfill
them.

Performance is a lot better now, using the `async-http` library, which
as I understand it supports both persistent connections when invoked
like this, and maybe also HTTP/2 multiplexing?? (Though I'm not
actually sure images.neopets.com supports that, lol.)

I'm not sure about the number of concurrent tasks I picked here: 100
seems okay for an internet thing and for such small requests, but I
worry that the CDN is gonna get annoyed or something. Well, we'll see!
This task is very resumable if it turns out we get frozen out or
something.
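
For reference, the general shape of a backfill like this with the `async` /
`async-http` gems is roughly the sketch below; the helper name and what we do
with each response are stand-ins, not the actual task:

```ruby
require "async"
require "async/barrier"
require "async/semaphore"
require "async/http/internet"

# Hypothetical sketch: fetch each candidate manifest with at most 100 requests
# in flight, reusing one client so connections can be kept alive.
def backfill_manifests(assets)
  Async do
    internet = Async::HTTP::Internet.new
    barrier = Async::Barrier.new
    semaphore = Async::Semaphore.new(100, parent: barrier)

    assets.each do |asset|
      semaphore.async do
        url = guess_manifest_url(asset) # hypothetical helper
        response = internet.get(url)
        asset.update!(manifest_url: url) if response.status == 200
      ensure
        response&.finish
      end
    end

    barrier.wait
  ensure
    internet&.close
  end
end
```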
2023-11-10 16:52:50 -08:00
d35b10c1b8 Oops, fix validation bug for SwfAsset
Oh right, saving an asset that doesn't have a manifest will crash this
validation step!
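
My guess is the fix is a nil guard of roughly this shape (the validation and
column names are stand-ins):

```ruby
# Hypothetical sketch: only run the manifest-dependent check when there
# actually is a manifest URL, instead of crashing on nil.
validate :manifest_is_well_formed, if: -> { manifest_url.present? }
```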
2023-11-10 16:16:32 -08:00
96998643b5 Add manifest_url to swf_assets table
Ok so, impress-2020 guesses the manifest URL every time based on common
URL patterns. But the right way to do this is to read it from the
modeling data! But also, we don't have a great way to get the modeling
data directly. (Though as I write this, I guess we do have that
auto-modeling trick we use in the DTI 2020 codebase, I wonder if that
could work for this too?)

So anyway, in this change, we update the modeling code to save the
manifest URL, and also the migration includes a big block that attempts
to run impress-2020's manifest-guessing logic for every asset and save
the result!
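
The column-adding half of that is tiny; something in the spirit of this sketch
(the migration class name and Rails version tag are guesses):

```ruby
class AddManifestUrlToSwfAssets < ActiveRecord::Migration[7.0]
  def change
    add_column :swf_assets, :manifest_url, :string
    # (The real migration also had the big manifest-guessing backfill block
    # described above; a later commit moves that into a rake task instead.)
  end
end
```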

It's uhh. Not fast. It runs at about 1 asset per second (a lot of these
aren't cache hits), and sometimes stalls out. And we have >600k assets,
so the estimated wall time is uhh. Seven days?

I think there's something we could do here around like, concurrent
execution? Though tbqh with the nature of the slowness being seemingly
about hitting the slow underlying images.neopets.com server, I don't
actually have a lot of faith that concurrency would actually be faster?

I also think it could be sensible to like… extract this from the
migration, and run it as a script to infer missing manifest URLs. That
would be easier to run in chunks and resume if something goes wrong.
Cuz like, I think my reasoning here was that backfilling this data was
part of the migration process… but the thing is, this migration can't
reliably get a manifest for everything (both cuz it depends on an
external service and cuz not everything has one), so it's a perfectly
valid migration to just leave the column as null for all the rows to
start, and fill this in later. I wish I'd written it like that!

But anyway, I'm just running this for now, and taking a break for the
night. Maybe later I'll come around and extract this into a separate
task to just try this on all assets missing manifests instead!
2023-11-09 21:42:51 -08:00
3a963c7d25 Fix old .scoped -> .all Rails upgrade bug
Ahh, I guess I missed these; I think it's because they're maybe not
actually used in the app? cuz they're all default values that are
overridden at the actual call sites. But I ran into it when running
`Pet.load` in the console, and yeah, let's just fix 'em up!
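
For reference, the kind of change this is, at a hypothetical call site (not
the app's actual code):

```ruby
# Before (Rails 3 era): `.scoped` returned the model's base relation.
def self.from_name(name, scope = Pet.scoped)
  scope.where(name: name).first
end

# After: Rails 4+ removed `.scoped`; `.all` is the equivalent now.
def self.from_name(name, scope = Pet.all)
  scope.where(name: name).first
end
```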
2023-11-09 21:35:42 -08:00
c9dd0f7376 Remove "Import from pets" page
This hasn't worked for a while, and I don't know an API off the top of
my head to drop in for it. Let's just delete it for now, and revisit it
later if we want to!
2023-11-06 13:06:18 -08:00
e9034fb085 Fix petpage etc import
Dang, I'm really wishing I'd opened this sooner cuz I didn't realize it
would be THIS easy!!

The bug was that the `t` method started taking Ruby keyword params
instead of a hash object for `options`, so the syntax changed.
Womp womp!
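
Roughly this kind of change, with made-up keys and options:

```ruby
# Before: passing translation options as a positional hash, which newer
# I18n/Ruby versions no longer accept here.
t('.default_list_name', { :username => user.name })

# After: pass them as keyword arguments (or splat a hash with **options).
t('.default_list_name', username: user.name)
```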
2023-11-06 12:59:28 -08:00
562cc33045 Fix closet list petpage export
I uhhh don't know why this ever worked at all lmao. Simple fix, I'd
been avoiding it for a while assuming it was worse lol!
2023-11-06 12:55:03 -08:00
47ea796af4 Fix outfit saving infinite loop in error case
Really don't know why this wasn't a problem with Apollo (or was it??),
but yeah, don't save when there's a save error!! Then we reset the
mutation state when the outfit state changes.
2023-11-06 12:54:23 -08:00
f8bcf5a0de Don't save the outfit while it's already saving
It's weird to be reading this code and be like. was this not always an
issue? Maybe something in Apollo prevented this? Did we use optimistic
UI or something? Idk?

There's still an issue with it infinitely retrying in an error state
though.
2023-11-06 12:38:38 -08:00
18ff22f211 Add Sentry to Rails
Now we're tracking both JS and Rails errors, phew!
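
For reference, the Rails side of sentry-ruby/sentry-rails is typically just an
initializer along these lines (the DSN env var name and sample rate here are
stand-ins):

```ruby
# config/initializers/sentry.rb (hypothetical path, typical setup)
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]
  config.breadcrumbs_logger = [:active_support_logger, :http_logger]
  config.traces_sample_rate = 0.1 # sample a fraction of requests for tracing
end
```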
2023-11-06 12:37:40 -08:00
4ffd85ade0 Add mini-survey announcement
Just sharing this out to gather info, since this might be coming kinda
soon!

I also moved the announcement higher up in the template, because it
was getting broken on the user lists page, which uses floats quite a bit
for the site header—and tbh I feel like this is better anyway lol.
2023-11-06 12:12:25 -08:00
2126cefdd4 Add posted-at date to Pardon Our Dust page
Just to like. clarify how up-to-date it is and isn't!
2023-11-06 11:54:56 -08:00
57e262a884 Remove unused initialCacheState code from wardrobe-2020
In the impress-2020 app, we use this to prepopulate certain GraphQL
data into the Apollo cache when SSR'ing a page. We don't do that here,
so, goodbye!
2023-11-03 18:16:57 -07:00
a18ffb22a7 Remove the item page drawer, just link to the item page instead
The wardrobe-2020 app had a cute drawer that embeds the item page, but
honestly I don't think it was that valuable, and especially not when it
means we have to basically maintain two item pages lol. Let's decrease
the surface area!
2023-11-03 16:56:51 -07:00
a2feee2d9b Add support for is_manually_nc
A really really simple change! It works on the item page, the item
index page, item search, the homepage, and the item lists page.

The main reason I avoided this for so long (even before modernizing the
Rails app) was that the ElasticSearch stuff felt like it made it messy?
But now it's pretty simple, and it works in search already cuz I did
that when I implemented item search, so, nice!
2023-11-03 16:27:39 -07:00
5dcb1dedb4 Add Owls values to the item page
Eyy it's time!! Long-requested, finally here lol
2023-11-03 16:20:02 -07:00
494f82601f Set up eslint for wardrobe-2020
Ok cool, I have just not been running any of this since moving out of
impress-2020, but now that we're doing serious JS work in here it's time
to turn it back on!!

1. Install eslint and the plugins we use
2. Set up a `yarn lint` command
3. Set up a git hook via husky to lint on pre-commit
4. Fix/disable all the lint errors!
2023-11-02 18:11:07 -07:00
629706a182 Use the main app for outfit deletion, too 2023-11-02 17:39:26 -07:00
d32c6459b0 Normalize outfit data as we load it into wardrobe-2020
Rather than letting the fact that the server API models outfits a bit
differently (underscore keys, integer IDs for things), I'd rather
convert it to the familiar field names and expected types!
2023-11-02 17:12:59 -07:00
1830689a19 Fix bin/dev to use the right settings in development
Oh yeah, I didn't catch that `bin/dev` calls the `Procfile` which in
turn calls `yarn build --watch`, which leaves out the important
`--public-path=/dev-assets` argument!

Calling `yarn dev` instead keeps us consistent!
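
i.e. the dev Procfile ends up looking something like this sketch (exact
contents are a guess):

```
web: bin/rails server
js: yarn dev  # unlike plain `yarn build --watch`, includes --public-path=/dev-assets
```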
2023-11-02 16:54:39 -07:00
7a3aa609ba Use the main app for outfit saving, not impress-2020
This came in a few parts!
1. Add meta tags to let us know we're logged in.
2. Install React Query, which has the data-loading sensibilities I like
   about Apollo without the GraphQL that has honestly been a drag.
3. Replace the outfit-loading and outfit-saving calls with API calls to
   the main app.
4. Update the main app's API calls to use our more flexible data
   constructs like "pose".

Would've loved to do this more incrementally, but it's hard to! You
can't split out outfit-loading and outfit-saving, or auth from any of
that, or the state gets all out-of-sorts.

Still, this is a good nugget we've pulled out all-in-all, and one that
people have been asking for! Can maybe look to logged-in item search
soon too, for own/want data?
2023-11-02 16:54:35 -07:00
52e7456987 Upgrade to React 18.2.0
Poked at the app and saw no breakages, I'm reasonably satisfied! I think
the 17 -> 18 migration is designed to be pretty backwards-compatible,
especially since I didn't bother to do the `createRoot` stuff yet, which
is where more of the breaking things happen.
2023-11-01 20:15:30 -07:00
1ab3f21917 Finally remove unused auth0 code
Moreover, this code is in a bit of a flimsy state anyway: it'll kinda
work if you're logged in at impress-2020.openneo.net, but that wasn't
intentionally engineered, and I'm not sure exactly what circumstances
cause those cookies to send vs not send?

My intent is to clean up to replace all this with login code that
references the main app instead… this'll require swapping out some
endpoints to refer to what the main app has, but I'm hoping that's not
so tricky to do, since the main app does offer basically all of this
functionality already anyway. (That's not to say it's simple, but I
think it's a migration in the right direction!)
2023-11-01 15:26:53 -07:00
7948974949 Do preloading manually on user list pages, to reduce memory usage
I used the new profiler tools on this page, and noticed a lot of
allocations in the Globalize library, which we use for translating
database records. I realized that we were loading all of the fields of
not just all of the items on the page, but all of their translation
records in all locales! We used to scrape data for lots of languages, so
that can be quite a lot!

Unfortunately, Rails's `includes` method to efficiently preload related
records always loads all fields, and simply can't be overridden.

So, in this change we write manual preloading code, to identify the
records we need, load them in big bulk queries, and assign them back to
the appropriate associations. Basically just what `includes` does, but
written out a bit more, to give us the chance to specify SELECT and
WHERE clauses!
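
In spirit, the manual version is something like this sketch (the model and
column names are illustrative, not the app's real ones):

```ruby
# Hypothetical sketch of manual preloading: one bulk query per association,
# restricted to the columns and locale we actually need, then assigned back.
items = Item.where(id: item_ids).select(:id, :thumbnail_url)

translations = Item::Translation.
  where(item_id: items.map(&:id), locale: I18n.locale).
  select(:id, :item_id, :name, :description)

translations_by_item_id = translations.group_by(&:item_id)

items.each do |item|
  # Setting the target also marks the association as loaded, so later
  # `item.translations` calls don't trigger N+1 queries.
  item.association(:translations).target = translations_by_item_id[item.id] || []
end
```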
2023-10-27 19:42:02 -07:00
c496e33c37 Add mini profiler to each page
It shows up in development always, and if you're logged in as Me
Specifically in production!
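
Assuming this is the rack-mini-profiler gem, that gating usually looks
something like this sketch (the "me specifically" check is a stand-in):

```ruby
# Hypothetical sketch: in production, only show the profiler to requests we
# explicitly authorize; in development it stays on for everyone.
if Rails.env.production?
  Rack::MiniProfiler.config.authorization_mode = :allow_authorized
end

# Then per-request, e.g. in a before_action:
Rack::MiniProfiler.authorize_request if current_user&.id == OWNER_USER_ID
```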

I'm using this to poke at memory usage for pages that seem suspicious.
I don't know why our app reliably grows so large in RAM, but my hunch is
that maybe there are some pages that just use a truly large amount to
begin with - and I've learned Ruby doesn't release memory back after
it's GC'd, it just grows the process and keeps the free space to itself
in its own heap!

So I'm just eyeing pages that I know *can* have a lot going on, and
seeing what I find!
2023-10-27 19:38:49 -07:00
2d1443da4f Add .env.* to gitignore
Just a little affordance to help me connect to the production database
to try to reproduce the memory issues locally!
2023-10-27 17:44:12 -07:00
91eb2f7752 Kill the app at high RAM, instead of trying to throttle it first
Well, sitting at the `MemoryHigh` limit still grinds the app to a halt
anyway, lmao. I guess it's a feature designed for well-behaved processes
and not for outright leaking ones?

Let's try just having systemd basically reset the app regularly when the
RAM hits a certain threshold. I think that's what this config will do,
we'll find out!
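
My reading of "this config" (a guess; the commit doesn't show it) is swapping
the soft MemoryHigh throttle for a hard MemoryMax, so systemd kills the
service outright and its restart policy brings it back fresh:

```ini
# Hypothetical sketch of the unit's resource settings (actual values unknown):
[Service]
MemoryMax=75%    # hard cap: exceeding this gets the service killed
Restart=always   # ...and then systemd starts it again with a clean slate
```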
2023-10-27 17:03:08 -07:00
af705f1be0 Tighten the RAM limit bounds on the production impress service
Lol ok, as I had kinda predicted, the memory bounds I set last time
were not tight enough, and it stalled out again! (It was at 75% and
fully just not working.)

Let's try this tighter bound instead!
2023-10-27 10:32:33 -07:00
06258b1dd5 Upgrade puma in the initial-placeholder app, to satisfy Dependabot
So, Dependabot correctly reported that this version of puma is
vulnerable, which I fixed in the main app already—but I didn't notice we
also use that version in this cute tiny placeholder app we use early in
the deployment process.

There's not a real security need to upgrade this, as this placeholder
app has no access to useful data when it is run, but I think it's better
to resolve this by fixing it than by silencing Dependabot! May as well!
2023-10-26 14:48:21 -07:00
556d50c4ed Merge branch 'devcontainer' 2023-10-26 14:42:30 -07:00
421f11143d Build wardrobe-2020 JS during dev container setup
Oh yeah right, let's `yarn install` and build as part of the `post-create.sh` script!

I didn't have a command to do a one-time dev build of the assets yet (just one-time prod, or dev with reloading), so I added `yarn build:dev`!
2023-10-26 21:39:42 +00:00
6bbef7e75e Move the host: db part of the dev container into its environment vars
Okay, so I've kept `database.yml` using the new username and password
`impress_dev`, cuz I like that it helps clarify that it's dev stuff and
is impossible to confuse with prod. (I updated my local setup to match!)

But hardcoding `host: db` into here breaks my local setup where the
database _isn't_ at the hostname `db`. So I add a way for new optional
database URL environment variables to get merged in with these settings,
and then configured the dev container to use that—and just in the most
limited override possible, to avoid duplicating stuff we don't need to.
(I could've just used the same names `DATABASE_URL_{PRIMARY,OPENNEO_ID}`
for this, but idk, I think it's confusing to have the same one for both
dev and prod, even though Rails _does_ basically do this; see below.)

Normally, the environment variable `DATABASE_URL` just _does_ this, and
you don't need to include a `url` key at all to get this behavior. But
since we've got the legacy two-database thing going on, we do this
instead! If we were to merge the `openneo_id.users` table into the
primary database, we could simplify this!
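
The mechanism is that Rails merges a `url:` key with its sibling keys in
`database.yml`, so the shape of the idea is roughly this sketch (the env var
name, adapter, and database name are stand-ins, not the real config):

```yaml
development:
  primary:
    adapter: mysql2
    database: openneo_impress_dev
    username: impress_dev
    password: impress_dev
    # Optional override: when set (the dev container sets it to point the
    # host at `db`), this URL is merged in with the settings above.
    url: <%= ENV["DEV_DATABASE_URL_PRIMARY"] %>
```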
2023-10-26 14:20:15 -07:00
271d477110 Add RAM constraints to impress service in production
I just restarted the impress app in production! First I logged in to see
why it wasn't responding, and I saw that there was almost no free RAM
left, and that the Rails app had grown to eat it all up!

So in this change, we set a memory limit: if the impress app is taking
up more than 75% of the machine's RAM, systemd will try to shrink it
down; if it can't, then it will kill the app at 80%.
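
In systemd terms, that maps to something like this (a sketch; the real unit
file may differ):

```ini
[Service]
MemoryHigh=75%   # soft limit: systemd pressures the service to shrink
MemoryMax=80%    # hard limit: past this, the service gets killed
```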

I'm not totally sure whether these bounds are tight enough? I didn't
look closely enough at the numbers to see what the app's actual usage
was according to systemctl at the time (`sudo systemctl status
impress`), so my hope is this is enough. But if we run into a memory
leak crash like that again, because it turns out even existing at 75%
RAM freezes the machine when running alongside its other processes, we
can decrease these numbers!

I also don't know the nature of the memory leak, and that could be worth
investigating—the app pretty cleanly fits into ~500–600MB when it starts
up, but then does seem to slowly but steadily grow. If it could be kept
at that size, it's possible we could downgrade the server and save some
costs—but that's a question for another day, since making sure we handle
memory leaks when they *do* happen is a more important robustness fix!
2023-10-26 13:52:44 -07:00
e022e8dbfb Oops, fix bugs in dev container setup!
I missed two things:
1) `rake` wasn't available in the path (surprising but ok!), so I replaced it with the more modern and more portable `bin/rails` invocation.
2) Without specifying a `host`, Rails was trying to connect with the database over a socket instead of over a port. Here, we tell it where to actually connect!

I think I'll need to make some tweaks here, since this isn't compatible with my _own_ local setup—maybe I'll revert `database.yml`, and have the dev container use the `DATABASE_URL` environment variable to override it?

But whatever, this is working for now, and that's exciting to me!

Note: On my machine, the install step hangs for a loooong time, to the point where I usually give up. On Codespaces, it also took a while at the same step (`Installing devise-encryptable`), but eventually got through it. Maybe on my machine it would work if I'm more patient? Idk! But it's good to see it working on something!!
2023-10-26 00:04:20 +00:00
793c2c0451 Merge branch 'main' into devcontainer 2023-10-25 16:47:23 -07:00
805521c289 Oops, needs to be a README.md file! 2023-10-25 16:31:41 -07:00
dfbece6e71 Slightly more up-to-date README
Basically empty, but no longer refers to old and unhelpful things, at
least!

I'll say more once we have dev environment instructions ready to go.
2023-10-25 16:29:29 -07:00
3243a0fdd9 Remove unused Javascript utility libraries
Some of these just didn't have call sites anymore; the HTML5 shim still
did, but that URL is literally broken now lmao. Goodbye!
2023-10-25 16:24:50 -07:00
8ae1370cc9 Remove unused neopia_host config variable
I cleared the references to the Neopia server (old thing responsible
for pet loading without blocking the Rails app) out of here a while ago,
but didn't clear out this config value!
2023-10-25 16:20:28 -07:00
e60c9d6efe Delete unused tmp/restart.txt file
This also clears the `tmp` directory out of the initial repository's
state! Rails should auto-generate it when it wants to make temporary
stuff.

`touch tmp/restart.txt` is the way to restart a Rails app deployed with
Phusion Passenger. I'm, uhh, not sure how a copy of it ever made its way
into the codebase, maybe I was confused about how to restart my local
app?
2023-10-25 16:18:03 -07:00
814f8e2af7 Delete unused LocalImpressHost & RemoteImpressHost config vars
These were cute stuff from long ago! They have no call sites now! Bye!
2023-10-25 16:13:47 -07:00
be023a075e Delete unused doc folder
I guess this is where it goes if you want to generate RDoc stuff. Ok!
If I do that someday, I'll put it here, lol!
2023-10-25 16:12:49 -07:00
13371e3cf2 Remove unused automated testing files & gems
Look, I'll be real, I have literally not run these automated tests in
probably like a whole decade. Most of these files are empty, the ones
that aren't seem basically trivial, and I bet half of it would fail
anyway.

If I wanted to do real automated testing, I would basically want to
start from scratch anyway, and apply coverage I can trust to the areas
I actually care about.

Until then, I feel like these gems and files are mostly just clutter,
and I don't like them being One More Barrier To Entry. Goodbye, unused
complexity!
2023-10-25 16:09:01 -07:00
ca50937cb1 Delete unused script directory
Huh, is this the ancient version of `bin`? Goodbye!
2023-10-25 16:05:53 -07:00
b756ae023e Use a hardcoded SECRET_TOKEN, in development only
Oh right, we intentionally fail if there's no SECRET_TOKEN provided, but
that's not really useful for development!

Here, we add a SECRET_TOKEN only used in development - which doesn't
need to be secret, because it doesn't guard actual user sessions!

In production, the behavior is unchanged.
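
A minimal sketch of that kind of conditional (the config key, placement, and
fallback value are all guesses, not the app's actual code):

```ruby
# Hypothetical sketch; the real app may wire this up differently.
Rails.application.config.secret_key_base =
  if Rails.env.development?
    # Not actually secret; development sessions don't guard real user data.
    "insecure-development-only-secret-token"
  else
    # Production behavior unchanged: fail loudly if no real secret is set.
    ENV.fetch("SECRET_TOKEN")
  end
```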
2023-10-25 15:54:19 -07:00
024041e591 Configure nginx to send pre-gzipped files to the client
Rails already creates little pre-gzipped `.gz` copies of all our assets
in the `public/assets` directory when we build. This configures nginx to
send those when available!

We weren't doing *any* gzip stuff before, so this helps a lot with those
bigger JS files, like the `wardrobe-2020` stuff. It's now at ~0.5MB with
compression, which is still a bit big, but nowhere near as offensive as
the 4.5MB pre-anything, or 1.5MB post-minification, lol.
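
On the nginx side this is essentially the `gzip_static` module; the relevant
bit looks something like this sketch (the surrounding config and paths are
assumptions):

```nginx
location /assets/ {
  gzip_static on;             # serve the precompiled .gz files Rails built
  expires     max;            # fingerprinted assets can be cached forever
  add_header  Cache-Control public;
}
```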
2023-10-25 15:44:01 -07:00