Commit graph

64 commits

Author SHA1 Message Date
86205c5e44 Create new DTIRequests.{get,post} helpers for HTTP requests
Instead of using `Async::HTTP::Internet` directly, and always applying
our `User-Agent` header manually, let's build a helper for it!
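
A minimal sketch of what such a helper might look like, assuming the `async-http` gem's `Async::HTTP::Internet` (the header value and exact signatures here are assumptions, not the real code):

```ruby
require "async/http/internet"

module DTIRequests
  USER_AGENT = "Dress to Impress (hypothetical value)"

  # Call these inside an Async/Sync block; the caller reads and closes the response.
  def self.get(url, headers = [])
    Async::HTTP::Internet.new.get(url, [["user-agent", USER_AGENT], *headers])
  end

  def self.post(url, headers = [], body = nil)
    Async::HTTP::Internet.new.post(url, [["user-agent", USER_AGENT], *headers], body)
  end
end
```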
2024-12-16 14:12:19 -08:00
1d4771ecc5 Create new DTIRequests.load_many helper, to make parallel requests
Just a wrapper for the barrier/semaphore thing we're doing constantly!

I only applied it in the importer rake tasks for now. There are other
call sites to move over to it, too!
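
A hypothetical sketch of such a wrapper, using the `async` gem's `Barrier` and `Semaphore` (the signature and default limit are guesses, not the real interface):

```ruby
require "async"
require "async/barrier"
require "async/semaphore"

module DTIRequests
  def self.load_many(max_at_once: 10)
    Sync do
      barrier = Async::Barrier.new
      semaphore = Async::Semaphore.new(max_at_once, parent: barrier)
      yield semaphore  # the caller queues work with semaphore.async { ... }
      barrier.wait     # then we wait for everything queued to finish
    end
  end
end

# Possible usage in an importer task:
#   DTIRequests.load_many do |semaphore|
#     urls.each { |url| semaphore.async { DTIRequests.get(url) } }
#   end
```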
2024-12-16 13:16:03 -08:00
ed5b62e161 Use PetType's created_at to predict who an item might be compatible with
This is a basic attempt at the Vandagyre logic, but also things like
"Maraquan items released before the Maraquan X was released"!

I also added a new task, `rails items:update_cached_fields`, which needs
to be run after this change, because it affects the value of
`Item#predicted_fully_modeled?`.

Eyeballing the updated search results for `-is:modeled`, this feels
pretty close? I'm guessing it's not perfect (e.g. maybe a pet type that
wasn't modeled until late in its existence, or some items that just
never did fit a certain pet), but it feels pretty good.

I also know we had the "modeling hints" override in Impress 2020, which
we aren't reading yet. We should probably take that into account here
too!
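
A very rough sketch of the `created_at` heuristic itself (the method name and exact comparison are assumptions, not the real code): an item is only predicted to possibly fit pet types that already existed when the item was released.

```ruby
# Hypothetical illustration only: Vandagyre-style logic, where a pet type
# created after the item's release is predicted incompatible.
class Item < ApplicationRecord
  def predicted_compatible_with?(pet_type)
    pet_type.created_at <= created_at
  end
end
```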
2024-11-19 16:41:50 -08:00
ec0b8d9cb9 Only prompt for Neologin cookie once when running rails neopets:import 2024-11-16 12:11:13 -08:00
a57b3629db Refactor Neopets import tasks all into a neopets:import namespace
and with a `rails neopets:import` task you can call to do them all at
once!
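
Roughly the shape an umbrella task like that takes in Rake (the individual importer names below are illustrative, not the actual task list):

```ruby
namespace :neopets do
  desc "Run all the Neopets importers in one go"
  task :import => [
    "neopets:import:rainbow_pool",   # example subtask names, not the real ones
    "neopets:import:styling_studio",
  ]
end
```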

I'm gonna do some other stuff here too to make `neopets:import` easier
to call all in one go, like prompting for the Neologin cookie just
once at the start.

Note that this changes the cron setup, so you gotta run
`bin/deploy:setup` after this deploys!
2024-11-16 11:58:43 -08:00
1ad3ea8f96 Add rails alt_styles:import task to import info from Styling Studio
Yay whew! Magic time!
2024-11-14 19:03:44 -08:00
881e63cfbd Output JSON from rails pets:load[pet_name]
Gonna use this for making mock data for automatic testing!
2024-10-21 14:03:56 -07:00
e36e273d50 Extract Neopets::CustomPets service from the Pet class
Just getting this stuff out of Pet, in part because I want to start
being able to unit test modeling, and that will require stubbing out
what this service returns!
2024-10-18 17:40:31 -07:00
acb52cb870 Move NCMall and NeoPass services into a Neopets module
Just a bit more clarity of grouping! I'm also thinking about extracting
modeling APIs into a service file like this too, in which case I think
this would help clarify what it is.
2024-10-18 17:27:15 -07:00
fdf1f31867 Add pets:find task to look up pets of a given color/species 2024-09-13 18:59:17 -07:00
c7b0ec71ef Add pet_types:guess task to guess poses for Invisible etc pets 2024-09-13 18:12:28 -07:00
620e59f3ed Add rails rainbow_pool:import task, to get clean image hashes for pets
We used to have something like this long ago; now here's the latest
version!

This task can't run autonomously; it needs the human user to provide a
Neologin cookie value. So, no cron for us! But we're cleaning up *years*
of lil guys in one swoop now :3
2024-09-07 12:51:59 -07:00
d79b6b6c33 Ensure rails public_data:pull doesn't use SSL for localhost
I upgraded our local MariaDB for compatibility with the latest server
dumps (https://mariadb.org/mariadb-dump-file-compatibility-change/),
and I thiiiink what I'm seeing is that, also in this version of
MariaDB, the default value for the `ssl` option is `true`? That is,
command-line clients will try to connect over SSL by default—which
isn't generally supported on development servers, where this task runs.

I could probably fix this with a change to my local config? But I can't
really picture a scenario where setting this option in the task would be
*wrong*, and I *can* see it saving future people time if they're working
in a similar environment. So, let's just set it!
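
For illustration, the kind of thing the task might do; the flag here is an assumption about which client option applies (MariaDB's command-line client accepts `--skip-ssl`, while MySQL's spells it `--ssl-mode=DISABLED`), and the database name is a placeholder:

```ruby
# Hypothetical sketch; the real task's flag and database name may differ.
db_host = ENV.fetch("DB_HOST", "localhost")
ssl_args = db_host == "localhost" ? ["--skip-ssl"] : []
# The task would feed the downloaded dump into this client invocation.
sh "mysql", *ssl_args, "--host=#{db_host}", "openneo_impress_development"
```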
2024-07-07 17:21:54 -07:00
4d24a9577f Only run public_data:pull if there are no pending migrations
Oh, this was a fun little dev environment bug: I ran `public_data:pull`
on my laptop before migrating my database, so the `items` table came in
at its latest production version, which already included the migrations'
changes, but the migrations hadn't been marked as "run" locally yet.

So Rails was still telling me I needed to run them, but the migrations
themselves were crashing, with stuff like "there's already a column
with this name!"

This change ensures that `public_data:pull` won't run until migrations
are done, to prevent silly accidents like that.
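
The guard itself can be as small as one line; a sketch, assuming the standard Rails helper (the real task may check this differently):

```ruby
task "public_data:pull" => :environment do
  # Raises ActiveRecord::PendingMigrationError if any migrations haven't run yet.
  ActiveRecord::Migration.check_pending!
  # ... then download and import the dump as usual ...
end
```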
2024-06-18 14:52:54 -07:00
d3d0cda81f Oops, fix symlink for /public-data/latest.sql.gz
Oh whoops, I was symlinking to the *full* path of the latest dump,
which includes the site version directory in it. This meant that, if 5
new versions of the app were deployed since the most recent public data
commit (and so that version's directory got deleted), the symlink would
break.

In this change, we just symlink to the filename, which behaves as a
relative path and should be completely resilient to deploys changing
where these files ostensibly live!!
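
An illustration of the difference, with example paths and filenames (not the real ones):

```ruby
Dir.chdir("/path/to/public-data") do
  # Before: an absolute path into a specific release directory, which a
  # later deploy can delete out from under the link.
  # File.symlink("/srv/impress/releases/123/public-data/2024-05-29.sql.gz", "latest.sql.gz")

  # After: just the filename, resolved relative to the symlink's own directory.
  File.symlink("2024-05-29.sql.gz", "latest.sql.gz")
end
```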
2024-05-29 19:01:23 -07:00
46d3325144 Load *all* NC Mall pages in nc_mall:sync
Ta da! Now I can run this and pull 481 records into our database, and
then turn around and run it again and have them all correctly say
"skipped"!
2024-05-10 17:39:40 -07:00
b6e18e10a5 Add bare-bones rails nc_mall:sync task, incl. NCMallRecord model
Currently we only load the homepage, so there's only actually one
wearable item to sync up! But here's the task to do it!

To do this, we also created the backing model NCMallRecord, where we'll
save the current NC Mall state!
2024-05-07 17:40:14 -07:00
c751173c52 Fix public_data:commit's symlinking on some platforms
Huh, curious, I think what I'm seeing is: on my development machine,
`File.exist?` returns true for symlinks, but, on our production
machine, `File.exist?` returns false for symlinks.

I imagine this is a difference in the implementation of the underlying
system calls? Curious!

This new check should work more reliably across platforms. I considered
checking both `exist?` and `symlink?`, but decided that, in the
unexpected case that `latest.sql.gz` exists but is an actual file
instead of a symlink like we expect, it's probably best to avoid
overwriting it anyway, and a crash on the `symlink` attempt is a
reasonable way to do that.
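
A sketch of the check in question (filenames are examples): `File.symlink?` asks about the link itself, while `File.exist?` asks about whatever the link points to.

```ruby
File.unlink("latest.sql.gz") if File.symlink?("latest.sql.gz")
File.symlink("2024-05-02.sql.gz", "latest.sql.gz")
# If a plain file is unexpectedly sitting at latest.sql.gz, we fall through
# and let File.symlink raise rather than overwrite it.
```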
2024-05-02 13:10:30 -07:00
7c09b76b5e Require fewer db privileges to run public_data:commit
In newer versions of MySQL, `mysqldump`'s default behavior requires
accessing some privileged `INFORMATION_SCHEMA` tables, which requires
the global `PROCESS` privilege.

Rather than require that, we can just skip dumping tablespace info by
adding the `--no-tablespaces` argument. This was the guidance I found
when looking up this issue! https://dba.stackexchange.com/a/274460/289961
2024-05-02 13:06:27 -07:00
98dd9ec782 Create rails public_data:pull task, to load up the latest public data
Yay, it works! Easy peasy! Love this way of integrating shell and Ruby,
it's cute!
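
The shell-and-Ruby combination might look something like this (the database name and exact commands are assumptions; the URL is the one the `public_data:commit` task publishes):

```ruby
task "public_data:pull" => :environment do
  url = "https://impress.openneo.net/public-data/latest.sql.gz"
  # `sh` runs through the shell, so we can pipe straight into the mysql client.
  sh "curl -fsSL #{url} | gunzip | mysql openneo_impress_development"
end
```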
2024-03-01 13:18:58 -08:00
8dc11f9940 Create rails public_data:commit task, to share public data dumps
I'm starting to port over the functionality that was previously just me
running `yarn db:export:public-data` in `impress-2020` and committing
the result to Git LFS every time.

My immediate motivation is that the `impress-2020` git repository is
getting weirdly large?? Idk how these 40MB files have blown up to a
solid 16GB of Git LFS data (we don't have THAT many!!!), but I guess
there's something about Git LFS's architecture and disk usage that I'm
not understanding.

So, let's move to a simpler system in which we don't bind the public
data to the codebase, but instead just regularly dump it in production
and make it available for download.

This change adds the `rails public_data:commit` task, which, when run
in production, will make the latest dump available at
`https://impress.openneo.net/public-data/latest.sql.gz`, and will also
store a running log of previous dumps, viewable at
`https://impress.openneo.net/public-data/`.
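
A rough sketch of the shape of that task (directory layout, database name, and the exact dump command are assumptions):

```ruby
task "public_data:commit" => :environment do
  dir = Rails.root.join("public/public-data")  # served at impress.openneo.net/public-data/
  filename = "#{Date.today.iso8601}.sql.gz"    # dated files double as the running log
  sh "mysqldump --no-tablespaces openneo_impress | gzip > #{dir.join(filename)}"
  sh "ln -sf #{filename} #{dir.join('latest.sql.gz')}"  # refresh the `latest` pointer
end
```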

Things left to do:
1. Create a `rails public_data:pull` task, to download `latest.sql.gz`
   and import it into the local development database.
2. Set up a cron job to dump this out regularly, idk maybe weekly? That
   will grow, but not very fast (about 2GB per year), and we can add
   logic to rotate out old ones if it starts to grow too far. (If we
   wanted to get really intricate, we could do like, daily for the past
   week, then weekly for the past 3 months, then monthly for the past
   year, idk. There must be tools that do this!)
2024-02-29 14:30:33 -08:00
ec6dca1c16 Improve Unicode support, emojis don't crash us anymore lol!
A few pieces here:

1. Convert all tables to `utf8mb4`+`utf8mb4_unicode_520_ci` strings.
2. Configure that as the server's default.
3. Configure the Rails database connection to use this encoding too.

Came together pretty well, whew! This has been a LONG time coming,
`latin1` is NOT a good charset for the year 2024!
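
For reference, step 1 as it might look in a migration, and step 3 as the matching `config/database.yml` settings (the table name and migration version bracket are examples; the real change covered every table):

```ruby
class ConvertItemsToUtf8mb4 < ActiveRecord::Migration[7.1]
  def up
    execute <<~SQL
      ALTER TABLE items
        CONVERT TO CHARACTER SET utf8mb4
        COLLATE utf8mb4_unicode_520_ci;
    SQL
  end
end

# config/database.yml (step 3), roughly:
#   encoding: utf8mb4
#   collation: utf8mb4_unicode_520_ci
```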
2024-02-28 18:54:27 -08:00
2cac048158 Save manifest load info when preloading them, too
This was a bit tricky! When I initially turned it on, running
`rails swf_assets:manifests:load` would trigger database errors of "oh
no, we can't get a connection from the pool!", because too many records
were trying to save at once.

So now, we give ourselves the ability to say `save_changes: false`, and
then save them all in one batch after! That way, we're still saving by
default in the edge cases where we're downloading and saving a manifest
on the fly, but batching them in cases where we're likely to be dealing
with a lot of them!
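
Roughly the shape of the batching idea (the method and column names here are assumptions, not the real interface):

```ruby
assets = SwfAsset.where(manifest: nil).limit(1000)                # hypothetical scope
assets.each { |asset| asset.load_manifest(save_changes: false) }  # fetch + assign, no write
SwfAsset.transaction { assets.each(&:save!) }                     # one batch of writes at the end
```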
2024-02-25 16:02:36 -08:00
a684c915a9 Track when manifest was last loaded, and what status it returned
Now we're *really* duplicating Impress 2020's system lol, but I need a
way to stop retrying manifests that are actually 404, which are
surprisingly plentiful!

This doesn't actually stop us from loading anything yet, it just tracks
the timestamps and the HTTP status! But next I'll add logic to skip
when it was 4xx recently.
2024-02-25 15:35:04 -08:00
992954ce89 Create swf_assets:manifests:load task to save all manifest files
Doing that sweet, sweet backfill!! It's not exactly *fast*, since
there are about 570k records to work through, but it's pretty good all
things considered! Thanks, surprisingly-reusable async code!
2024-02-23 14:06:49 -08:00
f6cece9a59 Fix inconsistent indentation in swf_assets.rake
My editor now flags this stuff better, thank you editor!
2024-02-23 13:12:21 -08:00
95949da6f9 Create swf_assets:remove_duplicates task
I'm not sure where these duplicate records have been coming from over
the years (I checked the timestamps, and it's been happening
occasionally from 2013 up through late last year; there were ~1,600
instances), but for now let's just get rid of them!

This is related to the issues we've been addressing lately, where some
biology assets have manifests but no PNG specified in them: the older
copies of the assets would have our generated PNG as a fallback, but the
newer copies would get served as part of the pet appearance *in addition
to* the older copies. Those newer copies would be marked as having no
DTI-generated image, which our system wasn't always able to handle.

We've primarily been addressing this by leaning into more graceful
failure modes of skipping certain layers, but… these layers *shouldn't
be here*, and are cluttering up support tools and such; let's be rid of
them!

I ran this today seemingly without issue, but I kept a backup made with
the `yarn db:export:public-data` task in `impress-2020`, to be able to
check and roll back if we discover a mistake.

One last note: the `ORDER BY` clause in the `GROUP_CONCAT` call was a
late addition, *after* I ran this in production. Scanning the console
output, it seems like ordering by ID was MySQL's default behavior here
anyway (makes sense!), so I'm not gonna bother to roll back and re-run,
but I think specifying this is helpful to ensure we're not depending on
unspecified behavior, and to be really clear about which record we
intend to keep (the one with the smallest DTI ID number).
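
For context, a sketch of the query shape being described (the grouping columns are illustrative, not the actual duplicate-detection key):

```ruby
rows = ActiveRecord::Base.connection.select_rows(<<~SQL)
  SELECT GROUP_CONCAT(id ORDER BY id) AS ids
  FROM swf_assets
  GROUP BY type, remote_id
  HAVING COUNT(*) > 1
SQL
# In each group, the first (smallest) ID is the record to keep; the rest
# are the duplicates to delete.
```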
2024-02-09 09:53:41 -08:00
1057fdd3a9 Add rake task to load pet data
Just a quick lil shortcut to look up a pet, I've wanted this recently!
2024-01-24 00:51:37 -08:00
0845881aba Add TIMEOUT parameter to swf_assets:manifests task
At this point, I've gone through all the assets, and the only ones
without manifests are:

1. The ones that truly have no manifest yet (that we know of)
2. The ones where execution happened to time out

I think the 5-second timeout is a very reasonable default for starting
the backfill, in a way that prioritizes moving forward; but now that we
have most things, I'd rather be able to re-run it with a more generous
timeout. So here we are!
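
One common way to wire up a parameter like that, as a guess at the mechanism (the real task may parse it differently):

```ruby
timeout = (ENV["TIMEOUT"] || 5).to_f  # seconds; 5 stays the default
# e.g. invoked as: TIMEOUT=30 rails swf_assets:manifests
```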
2023-11-11 11:04:53 -08:00
d2de971a60 Delete more rake tasks
I tried to port the Rainbow Pool ones forward, but ran into issues with the
service that uses browser-specific stuff to check that traffic is valid :/

Incidentally, those were the only places we were using `rest-client`.
Goodbye!
2023-11-10 18:59:46 -08:00
93511b3d51 Delete unused rake tasks 2023-11-10 18:04:24 -08:00
eb4a3ce0d9 More gracefully handle batches that fail to save
I noticed a thing with, like, an asset that I think referenced an item
that doesn't exist, which caused an error in the `body_specific?`
validation step?

Tbh that validation step needs to be fixed up in a number of ways, but
I'm scared to, since it's hard to know what will break modeling lol.

But in any case, more graceful handling is nice! If something happens,
I'd rather leave it null and try again later than have the job crash!
2023-11-10 17:42:56 -08:00
80bd229bc6 Clarify an error message in swf_assets:manifests task
It's not just that none of them were 200 OK, it's that they were all 404.
In the event that something returns not-200 and not-404, we immediately
abort, so we shouldn't get to this case unless they were all 404!
2023-11-10 17:27:35 -08:00
dc22a458bf Move manifest backfill to swf_assets:manifests task
Okay, I've simplified the migration to *just* add the column, and
instead added a task to find assets without manifest URLs and backfill
them.

Performance is a lot better now, using the `async-http` library, which
as I understand it supports both persistent connections when invoked
like this, and maybe also HTTP/2 multiplexing?? (Though I'm not
actually sure images.neopets.com does lol)

I'm not sure about the number of concurrent tasks I picked here; 100
seems okay for an internet thing and for such small requests, but I
worry that the CDN is gonna get annoyed or something. Well, we'll see!
This task is very resumable if it turns out we get frozen out or
something.
2023-11-10 16:52:50 -08:00
22e3f4240a Update most URLs to use HTTPS
I noticed we didn't have the little lock icon in the browser, and yeah
huh there's a lot of `http://` still floating around! Let's fix that!
2023-10-25 15:22:57 -07:00
fd263ea82f Remove mall spider cron jobs
I don't think these work anymore, and our volunteers get new items into the db fast anyway; Impress 2020 is doing better spidering these days. And then we get to remove the `whenever` cron job gem!
2023-10-23 19:05:05 -07:00
1195a6190b Uninstall resque
Yay, we've deleted all our background tasks!

We'll probably want to replace some of the basic functionality like certain caching? But we can deal with that as we run into it.

The direct motivation here was a seeming version conflict between Rails 4.2's rack dependency and latest Resque's rack dependency... but this is just nice complexity elimination regardless, we want this anyway :3
2023-10-23 19:05:04 -07:00
8ea74b737e Remove outfit image saving
This has already been moved to Impress 2020 too, so we can delete all the image generation and saving!
2023-10-23 19:05:04 -07:00
72a08901c8 Upgrade to Ruby 2.2.4, Rails 4.0.13
NOTE: This doesn't boot yet! Something has changed in the `devise` API that we'll need to fix!

```
/vagrant/config/initializers/devise.rb:46:in `block in <top (required)>': undefined method `encryptor=' for Devise:Module (NoMethodError)
```

But yeah, we navigated the gem upgrades, and also I ran `rake rails:update` and hand-processed the suggestions it had for our config files.
2023-10-23 19:05:02 -07:00
d8038f2fbf prefer scraped rainbow pool images over pet images 2015-09-05 18:48:41 +00:00
8ccc2fd741 set the right asset id in the rake task :P 2014-03-28 00:12:04 -05:00
e62d52bbd4 use body zone (15) instead of static so that most items look good 2014-03-27 22:28:48 -05:00
173c1eab5d fit the asset ID in the 8-char image_hash limit 2014-03-27 22:28:48 -05:00
58fdb3d6ac rake task to quickly create prank pet types 2014-03-27 22:28:48 -05:00
b583254397 create colors from rake 2014-03-27 22:28:48 -05:00
2b870cf91b add pet state replacement task 2013-11-30 20:33:48 -05:00
6c3ff09f5d make en-MEEP translatable - because sorting by translations doesn't work well with fallbacks 2013-01-27 20:43:08 -06:00
9c37e894f7 translate species/color 2013-01-26 10:34:48 -06:00
361b5df256 translate zones 2013-01-26 09:54:29 -06:00
1571a10500 shoot, included in the wrong spot. this is an rspec issue that's hard to test :( 2013-01-02 23:58:25 -05:00