And now there's a `rails neopets:import` task you can call to do them all
at once!
I'm gonna do some other stuff here too to make `neopets:import` easier
to call all in one go, like prompting for the Neologin cookie just
once at the start.
Note that this changes the cron setup, so you gotta run
`bin/deploy:setup` after this deploys!
Just getting this stuff out of Pet, in part because I want to start
being able to unit test modeling, and that will require stubbing out
what this service returns!
Just a bit more clarity of grouping! I'm also thinking about extracting
modeling APIs into a service file like this too, in which case I think
this would help clarify what it is.
See comment for details! I wonder if other items have been affected by
this in the past. I think probably what happened before was that we
successfully created this item, but failed to create the *translation*,
so when migrating over the Patchwork Staff all its translated fields
were empty? (That's what I found looking in the database today.)
But yeah, thankfully our crash logging at health.openneo.net gave me
the name of a pet someone was trying to model, and so I was able to
find the bug and fix it!
Used to have something like this long ago; now here's the latest version!
This task can't run autonomously, it needs the human user to provide a
neologin cookie value. So, no cron for us! But we're cleaning up *years*
of lil guys in one swoop now :3
I upgraded our local MariaDB for compatibility with the latest server
dumps (https://mariadb.org/mariadb-dump-file-compatibility-change/),
and I thiiiink what I'm seeing is that, also in this version of
MariaDB, the default value for the `ssl` option is `true`? That is,
command-line clients will try to connect over SSL by default—which
isn't generally supported on development servers, where this task runs.
I could probably fix this with a change to my local config? But I can't
really picture a scenario where setting this option in the task would be
*wrong*, and I *can* see it saving future people time if they're working
in a similar environment. So, let's just set it!
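For illustration, here's a minimal sketch of the idea, assuming the task shells out to the MariaDB `mysql` client (the database name is a placeholder):

```
# `--skip-ssl` is the MariaDB client's spelling for turning the boolean
# `ssl` option back off, so the connection won't attempt TLS.
database = "openneo_impress_development" # placeholder
system("mysql", "--skip-ssl", database, exception: true)
```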
Oh this was a fun little dev environment bug: I ran `public_data:pull`
on my laptop before migrating my database, so the `items` table pulled
as the latest production version, which included the migrations, but
they hadn't been marked as "run" yet.
So Rails was still telling me I needed to run them, but the migrations
themselves were crashing, with stuff like "there's already a column
with this name!"
This change ensures that `public_data:pull` won't run until migrations
are done, to prevent silly accidents like that.
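Here's roughly the shape of that guard, using Rails's built-in pending-migration check (the task body itself is elided):

```
task "public_data:pull" => :environment do
  # Raises ActiveRecord::PendingMigrationError if any migrations haven't
  # been run yet, so the pull can't clobber a half-migrated database.
  ActiveRecord::Migration.check_pending!

  # ... download latest.sql.gz and import it ...
end
```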
Oh whoops, I was symlinking to the *full* path of the latest dump, which
includes the site version directory in it. This meant that, if 5 new
versions of the app were deployed since the most recent public data
commit (and so that version's directory got deleted), the symlink would
break.
In this change, we just symlink to the filename, which behaves as a
relative path and should be completely resilient to deploys changing
where these files ostensibly live!!
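In other words, something like this (paths are placeholders):

```
# Before: an absolute path that includes the deploy version directory,
# which gets cleaned up after enough deploys.
latest = "/srv/impress/versions/abc123/public-data/2024-01-01.sql.gz" # placeholder

# After: just the filename, resolved relative to the symlink's own
# directory, so deploys can't break it.
File.symlink(File.basename(latest), "latest.sql.gz")
```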
Currently we only load the homepage, so there's only actually one
wearable item to sync up! But here's the task to do it!
To do this, we also created the backing model NCMallRecord, where we'll
save the current NC Mall state!
Huh, curious, I think what I'm seeing is: on my development machine,
`File.exist?` returns true for symlinks, but, on our production
machine, `File.exist?` returns false for symlinks.
I imagine this is a difference in the implementation of the underlying
system calls? Curious!
This new check should work more reliably across platforms. I considered
checking both `exist?` and `symlink?`, but decided that, in the
unexpected case that `latest.sql.gz` exists but is an actual file
instead of a symlink like we expect, it's probably best to avoid
overwriting it anyway, and a crash on the `symlink` attempt is a
reasonable way to do that.
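A minimal sketch of the new check (filenames are placeholders):

```
# `File.symlink?` asks about the link itself rather than its target, so it
# answers consistently whether or not the target still resolves.
File.unlink("latest.sql.gz") if File.symlink?("latest.sql.gz")

# If a plain file is somehow sitting at this path instead, this raises
# Errno::EEXIST, which is the "crash rather than overwrite" behavior
# described above.
File.symlink("2024-01-01.sql.gz", "latest.sql.gz")
```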
In newer versions of MySQL, `mysqldump`'s default behavior requires
accessing some privileged `INFORMATION_SCHEMA` tables, which requires
the global `PROCESS` permission.
Rather than require that, we can just skip this step, by adding the
`--no-tablespaces` argument. This was the guidance I found when looking
up this issue! https://dba.stackexchange.com/a/274460/289961
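Concretely, that means an invocation along these lines (assuming a shell-out; the database name and other arguments are illustrative):

```
# Skips the tablespace metadata that newer MySQL gates behind the global
# PROCESS privilege.
system("mysqldump", "--no-tablespaces", "openneo_impress", exception: true)
```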
Right, I didn't totally connect the dots that there are some OpenID
features in the mix here for how we expect to identify the user once
they authenticate. It requires looking up the provider's public key,
and validating the JWT they sent us. This gem does all that for us!
I don't actually know what a real NeoPass `id_token` looks like yet?
But I'll fill in some placeholder stuff for now, and use that for
initializing the account!
In this change, we wire up a new NeoPass OAuth2 strategy for OmniAuth,
and hook up the "Log in with NeoPass" button to use it!
The authentication currently fails with `invalid_credentials`, and
shows the `owo` response we hardcoded into the NeoPass server's token
response. We need to finally follow up on the little `TODO` written in
there!
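For context, OmniAuth wiring like this generally looks something like the sketch below; the strategy options and env var names are assumptions, not the real config:

```
# config/initializers/omniauth.rb (hypothetical)
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :neopass,
    ENV["NEOPASS_CLIENT_ID"],
    ENV["NEOPASS_CLIENT_SECRET"],
    scope: "openid"
end
```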
I'm starting to port over the functionality that was previously just,
me running `yarn db:export:public-data` in `impress-2020` and
committing it to Git LFS every time.
My immediate motivation is that the `impress-2020` git repository is
getting weirdly large?? Idk how these 40MB files have blown up to a
solid 16GB of Git LFS data (we don't have THAT many!!!), but I guess
there's something about Git LFS's architecture and disk usage that I'm
not understanding.
So, let's move to a simpler system in which we don't bind the public
data to the codebase, but instead just regularly dump it in production
and make it available for download.
This change adds the `rails public_data:commit` task, which, when run in
production, will make the latest dump available at
`https://impress.openneo.net/public-data/latest.sql.gz`, and will also
store a running log of previous dumps, viewable at
`https://impress.openneo.net/public-data/`.
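The rough shape of the task, with paths and names as placeholders rather than the real implementation:

```
namespace :public_data do
  task commit: :environment do
    dir = "/srv/impress/shared/public-data" # placeholder location
    filename = Time.now.utc.strftime("%Y-%m-%dT%H-%M-%S") + ".sql.gz"

    # Dump and compress in one pipeline. (Presumably the real task dumps
    # only the public tables; this sketch just dumps everything.)
    system("mysqldump openneo_impress | gzip > #{File.join(dir, filename)}",
      exception: true)

    # Point latest.sql.gz at the newest dump.
    link = File.join(dir, "latest.sql.gz")
    File.unlink(link) if File.symlink?(link)
    File.symlink(filename, link)
  end
end
```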
Things left to do:
1. Create a `rails public_data:pull` task, to download `latest.sql.gz`
and import it into the local development database.
2. Set up a cron job to dump this out regularly, idk maybe weekly? That
will grow, but not very fast (about 2GB per year), and we can add
logic to rotate out old ones if it starts to grow too far. (If we
wanted to get really intricate, we could do like, daily for the past
week, then weekly for the past 3 months, then monthly for the past
year, idk. There must be tools that do this!)
A few pieces here:
1. Convert all tables to `utf8mb4`+`utf8mb4_unicode_520_ci` strings.
2. Configure that as the server's default.
3. Configure the Rails database connection to use this encoding too.
Came together pretty well, whew! This has been a LONG time coming,
`latin1` is NOT a good charset for the year 2024!
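Roughly what step 1 looks like per table (the table name is just an example):

```
class ConvertItemsToUtf8mb4 < ActiveRecord::Migration[7.0]
  def up
    # Rewrites the table's columns and default charset in one statement.
    execute <<~SQL
      ALTER TABLE items
        CONVERT TO CHARACTER SET utf8mb4
        COLLATE utf8mb4_unicode_520_ci
    SQL
  end
end
```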
This was a bit tricky! When I initially turned it on, running
`rails swf_assets:manifests:load` would trigger database errors of "oh
no we can't get a connection from the pool!", because too many records
were trying to save at once.
So now, we give ourselves the ability to say `save_changes: false`, and
then save them all in one batch after! That way, we're still saving by
default in the edge cases where we're downloading and saving a manifest
on the fly, but batching them in cases where we're likely to be dealing
with a lot of them!
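A hypothetical sketch of the two modes (method names assumed):

```
# Default: download and save immediately, for one-off loads.
asset.load_manifest

# Batch mode: load concurrently without saving, then write everything in
# one pass afterward, so the concurrent tasks don't each hold a connection
# from the pool.
assets.each { |asset| asset.load_manifest(save_changes: false) }
SwfAsset.transaction { assets.each(&:save!) }
```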
Now we're *really* duplicating Impress 2020's system lol, but I
need a way to not keep trying to load manifests that are actually 404,
which are surprisingly plentiful!
This doesn't actually stop us from loading anything yet, it just tracks
the timestamps and the HTTP status! But next I'll add logic to skip
when it was 4xx recently.
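One possible shape for that upcoming skip logic (column names are guesses):

```
class SwfAsset < ApplicationRecord
  # Worth retrying if we've never checked, the last response wasn't a 4xx,
  # or the last check is stale enough to deserve another look.
  scope :manifest_worth_retrying, -> {
    where(
      "manifest_status_code IS NULL " \
      "OR manifest_status_code NOT BETWEEN 400 AND 499 " \
      "OR manifest_checked_at < ?",
      1.month.ago,
    )
  }
end
```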
Doing that sweet, sweet backfill!! It's not exactly *fast*, since
there's about 570k records to work through, but it's pretty good all
things considered! Thanks, surprisingly-reusable async code!
I'm not sure where these duplicate records have been coming from over
the years (I checked the timestamps and it's been happening
occasionally since 2013 up to late last year, there were ~1,600
instances), but for now let's just get rid of them!
This is related to the issues we've been addressing lately, where some
biology assets have manifests but no PNG specified in them. The older
copies of those assets would have our generated PNG as a fallback, but
the newer copies would get served as part of the pet appearance *in
addition to* the older copies. And the newer copies would be marked as
having no DTI-generated image, which our system wasn't always able to
handle.
We've primarily been addressing this by leaning into more graceful
failure modes of skipping certain layers, but… these layers *shouldn't
be here*, and are cluttering up support tools and such; let's be rid of
them!
I ran this today seemingly without issue, but I kept a backup of the
`yarn db:export:public-data` task in `impress-2020`, to be able to check
and roll back if we discover a mistake.
One last note: the `ORDER BY` clause in the `GROUP_CONCAT` call was a
late addition, *after* I ran this in production. Scanning the console
output, it seems like ordering by ID was MySQL's default behavior here
anyway (makes sense!), so I'm not gonna bother to roll back and re-run,
but I think specifying this is helpful to ensure we're not depending on
unspecified behavior and to be really clear about our intentions of
which record to keep (the one with the smallest DTI ID number).
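For the record, the query was approximately this shape; the table and column names below are my guesses at the join table involved, not the real ones:

```
rows = ActiveRecord::Base.connection.select_rows(<<~SQL)
  SELECT GROUP_CONCAT(id ORDER BY id) AS ids
  FROM parents_swf_assets
  GROUP BY parent_type, parent_id, swf_asset_id
  HAVING COUNT(*) > 1
SQL

rows.each do |(ids)|
  _keep_id, *extra_ids = ids.split(",")
  # Keep the record with the smallest DTI ID; delete the rest.
  ParentSwfAssetRelationship.where(id: extra_ids).delete_all
end
```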
Something in the Rails loader doesn't like that I have both a gem and
a lib folder named `RocketAMF`, I think? It'll often work for the first
pet load request, then on subsequent ones say `Envelope` is not
defined, I'm guessing because it scrapped the gem's module in favor of
mine?
Idk, let's just simplify all this by making our own module. I feel like
this old lib could use an overhaul and simplification anyway, but this
will do for now!
The models folder is a bit confusingly large, and these are more like
mixins that kinda clutter it. Push them off into `lib`, I think!
I think they used to be in models mainly because Rails used to handle
`lib` differently with autoloading, and it made for a worse dev
experience. Now it's all the same, though!
At this point, I've gone through all the assets, and the only ones
without manifests are:
1. The ones that truly have no manifest yet (that we know of)
2. The ones where execution happened to time out
I think the 5-second timeout is a very reasonable default for starting
the backfill, in a way that prioritizes moving forward; but now that we
have most things, I'd rather be able to re-run it with a more generous
timeout. So here we are!
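A sketch of how the more generous timeout might get passed in (the argument handling is an assumption):

```
task "swf_assets:manifests:load", [:timeout] => :environment do |_task, args|
  # Default to the 5-second timeout that served the initial backfill well.
  timeout = (args[:timeout] || 5).to_f

  # ... load manifests, abandoning any request that exceeds `timeout` ...
end
```

Invoked like `rails "swf_assets:manifests:load[30]"` for a 30-second timeout.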
I tried to port the Rainbow Pool ones forward, but ran into issues with the
service that uses browser-specific stuff to check that traffic is valid :/
Incidentally, those were the only places we were using `rest-client`.
Goodbye!
I noticed a thing with like, an asset that I think referenced an item that
doesn't exist, which caused an error in the `body_specific?` validation
step?
Tbh that validation step needs fixing up in a number of ways, but I'm
scared to, since it's hard to know what will break modeling lol.
But in any case, more graceful handling is nice! If something happens,
I'd rather leave it null and try again later than have the job crash!
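The general shape of that graceful path (the names and the exact field involved are assumptions):

```
begin
  asset.body_id = asset.body_specific? ? pet.body_id : 0
rescue StandardError => error
  Rails.logger.warn "Couldn't run body_specific? check: #{error.message}"
  asset.body_id = nil # leave it null; a later modeling attempt can fill it in
end
```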
It's not just that none of them were 200 OK, it's that they were all 404.
In the event that something returns a status that's neither 200 nor 404,
we immediately abort, so we shouldn't get to this case unless they were
all 404!
Okay, I've simplified the migration to *just* add the column, and
instead added a task to find assets without manifest URLs and backfill
them.
Performance is a lot better now, using the `async-http` library, which
as I understand it supports both persistent connections when invoked
like this, and maybe also HTTP/2 multiplexing?? (Though I'm not
actually sure images.neopets.com supports that lol)
I'm not sure about the number of concurrent tasks I picked here, 100
seems okay for an internet thing and for such small requests, but I
worry that the CDN is gonna get annoyed or something. Well, we'll see!
This task is very resumable if it turns out we get frozen out or
something.
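As a sketch, concurrency-limited fetching with `async-http` tends to look like this (`manifest_urls` and `process` are hypothetical stand-ins):

```
require "async"
require "async/http/internet"
require "async/semaphore"

Async do
  internet = Async::HTTP::Internet.new
  semaphore = Async::Semaphore.new(100) # at most 100 requests in flight

  manifest_urls.each do |url|
    semaphore.async do
      response = internet.get(url)
      process(url, response.status, response.read) # hypothetical handler
    ensure
      response&.close
    end
  end
ensure
  internet.close
end
```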
Okay, right: if we're just using www.neopets.com (like we are for now), requests to http://www.neopets.com fail, because they trigger a redirect that we don't follow.
So here I 1) change the default to HTTPS, and 2) add HTTPS support to our little RocketAMF lib.
This removes login/logout/session logic for integrating with OpenNeo ID, replacing them with stubs that just redirect to `/?TODO` when you click login, and helpers that act as if you're not logged in.
This gives us a clean slate to plug in new Devise logic to integrate with the `openneo_id` database directly!
Some tricks required here to get the dependencies to work out, but we got it!!
Oh also, we move away from the rbenv in Ubuntu's package manager, because it doesn't support more recent Rubies like 2.4.10.
I don't think these work anymore, and our volunteers get new items into the db fast anyway; Impress 2020 is doing better spidering these days. And then we get to remove the cron-job gem, `whenever`!
Yay, we've deleted all our background tasks!
We'll probably want to replace some of the basic functionality like certain caching? But we can deal with that as we run into it.
The direct motivation here was a seeming version conflict between Rails 4.2's rack dependency and latest Resque's rack dependency... but this is just nice complexity elimination regardless, we want this anyway :3
NOTE: This doesn't boot yet! There's something changed in the `devise` API that we'll need to fix!
```
/vagrant/config/initializers/devise.rb:46:in `block in <top (required)>': undefined method `encryptor=' for Devise:Module (NoMethodError)
```
But yeah, we navigated the gem upgrades, and also I ran `rake rails:update` and hand-processed the suggestions it had for our config files.