In this change, instead of *always* inferring the Dyeworks base item
from the item name at runtime, we now have a database field that tracks
it, and auto-populates whenever an item *seems* to need a Dyeworks base
item but doesn't have one yet.
This will enable us to set the base item manually in cases where it
can't be inferred, and load Dyeworks base items for the Item Getting
Guide in one query with `includes(:dyeworks_base_item)`.
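Here's roughly the shape of it, as a sketch only: the column name, callback wiring, and name-parsing regex below are my stand-ins, not necessarily the real code.

```ruby
class AddDyeworksBaseItemIdToItems < ActiveRecord::Migration[7.1]
  def change
    add_column :items, :dyeworks_base_item_id, :integer
    add_index :items, :dyeworks_base_item_id
  end
end

class Item < ApplicationRecord
  belongs_to :dyeworks_base_item, class_name: "Item", optional: true

  before_validation :infer_dyeworks_base_item

  private

  # Items named like "Dyeworks Purple: Fancy Dress" get their base item
  # inferred from the part after the colon, but only if none is set yet,
  # so a manually-assigned base item always wins.
  def infer_dyeworks_base_item
    return if dyeworks_base_item.present?
    match = name&.match(/\ADyeworks [^:]+: (?<base_name>.+)\z/)
    self.dyeworks_base_item = Item.find_by(name: match[:base_name]) if match
  end
end
```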
This migration does a bit more of the fix-em-up scripting work *in* the
migration itself than I usually do, mainly because there's so much in
this one that I think being extra-explicit is useful. We make sure to
do it gracefully though!
I'm kinda surprised we didn't have this already, huh?? I guess in
most searches, the difference isn't very noticeable, because we don't
have a lot of item names to sort anyway? But we do this *so often* that
I think an index will certainly help! Let's add it!
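For reference, the whole migration is basically just this (assuming the column is `items.name`):

```ruby
class AddNameIndexToItems < ActiveRecord::Migration[7.1]
  def change
    # We sort and filter by item name constantly, so give MySQL an index for it.
    add_index :items, :name
  end
end
```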
Currently we only load the homepage, so there's only actually one
wearable item to sync up! But here's the task to do it!
To do this, we also created the backing model NCMallRecord, where we'll
save the current NC Mall state!
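Roughly the shape of the task, as a sketch: `NCMall.load_home_page` and the `NCMallRecord` fields here are placeholders for whatever the real interface ends up being.

```ruby
# lib/tasks/nc_mall.rake (sketch)
namespace :nc_mall do
  desc "Sync the current NC Mall state into NCMallRecord rows"
  task sync: :environment do
    # Hypothetical helper that fetches the NC Mall homepage's item listings.
    NCMall.load_home_page[:items].each do |listing|
      record = NCMallRecord.find_or_initialize_by(item_id: listing[:item_id])
      record.update!(
        price: listing[:price],
        active_from: listing[:active_from],
        active_until: listing[:active_until],
      )
    end
  end
end
```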
Idk how we got into this state, or if it's environment-dependent or
MySQL-version-dependent or what, but when I set up the dev environment on
my macOS machine, MySQL complains that `TEXT` columns can't have default
values.
Well, in that case, let's just have it be a non-nullable field, and add
a note to our code that missing fields *can* cause item saving to fail!
(This was always true, but I'm just extra-noting it because it's
becoming *more* true.)
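So the column ends up looking something like this (the column name is just an example):

```ruby
class MakeItemDescriptionRequired < ActiveRecord::Migration[7.1]
  def change
    # MySQL here refuses a default value on a TEXT column, so: no default, but required.
    change_column_default :items, :description, from: "", to: nil
    change_column_null :items, :description, false
  end
end
```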
Simple enough to start! If `shadowbanned: true` gets set on a user,
then we show a 404 instead of the actual list page, *unless* you're
logged in as that user, or coming from a known IP of that user.
This isn't a very strong mechanism! Just something to hopefully
increase the costs of messing around with list spam.
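The check is basically just this, sketched out; the `known_ips` lookup is a stand-in for however we actually track a user's IPs.

```ruby
# In the closet lists controller, roughly:
def show
  @user = User.find(params[:user_id])

  if @user.shadowbanned? && !viewer_can_see_shadowbanned_user?(@user)
    raise ActiveRecord::RecordNotFound # renders our normal 404 page
  end

  # ...then render the list like usual...
end

private

def viewer_can_see_shadowbanned_user?(user)
  current_user == user || user.known_ips.include?(request.remote_ip)
end
```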
Ahh wow, we're finally at the point of having enough outfits to need to
increase this ID field!
I got an alert this morning that queries were failing with the MySQL error
"id out of range". I went and found that creating a new outfit still works if
it's empty, but fails once you try to add an item to it. So, I assume this is
the failing column!
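The fix I ran was essentially this, assuming the failing column is the join table's own `id`, which was still a 4-byte INT:

```ruby
class WidenItemOutfitRelationshipIds < ActiveRecord::Migration[7.1]
  def up
    # A signed INT primary key tops out around 2.1 billion; switch to BIGINT.
    change_column :item_outfit_relationships, :id, :bigint,
      null: false, auto_increment: true
  end

  def down
    change_column :item_outfit_relationships, :id, :integer,
      null: false, auto_increment: true
  end
end
```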
Note that I'm writing this from my emergency Steam Deck workstation, and that
I didn't quite get MySQL set up on it yet (I just yolo ran this in
production), so we don't have the corresponding changes to `db/schema.rb` yet.
This might require another commit from my normal workstation later to
complete this change!
A few pieces here:
1. Convert all tables to `utf8mb4`+`utf8mb4_unicode_520_ci` strings.
2. Configure that as the server's default.
3. Configure the Rails database connection to use this encoding too.
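Piece 1 looks roughly like this for each table; pieces 2 and 3 are the server's `character-set-server`/`collation-server` settings and the `encoding`/`collation` keys in `config/database.yml`, respectively.

```ruby
# Sketch, one table shown:
class ConvertItemsToUtf8mb4 < ActiveRecord::Migration[7.1]
  def up
    # Convert the table's charset/collation, re-encoding existing string columns.
    execute <<~SQL
      ALTER TABLE items
        CONVERT TO CHARACTER SET utf8mb4
        COLLATE utf8mb4_unicode_520_ci
    SQL
  end
end
```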
Came together pretty well, whew! This has been a LONG time coming,
`latin1` is NOT a good charset for the year 2024!
I considered this at first, but decided to keep it simple until it
turned out to matter. Oops, it already matters, lol!
I want the item search code to be able to easily tell if the series
name is real or a placeholder, so we can decide whether to build the
filter text in `fits:$series-$color-$species` form or
`fits:alt-style-$id` form.
So in this change, we keep it that `AltStyle#series_name` returns the
placeholder string if none is set, but callers can explicitly ask
whether it's a real series name or not. Will use this in our next
change!
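Something like this, as a sketch (the placeholder value itself is just illustrative):

```ruby
class AltStyle < ApplicationRecord
  PLACEHOLDER_SERIES_NAME = "Nostalgic" # example placeholder, not necessarily the real one

  def series_name
    self[:series_name].presence || PLACEHOLDER_SERIES_NAME
  end

  def real_series_name?
    self[:series_name].present?
  end
end
```

Then item search can branch on `real_series_name?` to pick between the `fits:$series-$color-$species` and `fits:alt-style-$id` filter forms.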
Previously we did this hackily by comparing the ID to a hardcoded list
of IDs, but I think putting this in the database is clearer and more
robust, and it should also help with our upcoming item search stuff
that will filter by it!
Now we're *really* duplicating Impress 2020's system lol, but I
need a way to not keep trying to load manifests that are actually 404,
which are surprisingly plentiful!
This doesn't actually stop us from loading anything yet, it just tracks
the timestamps and the HTTP status! But next I'll add logic to skip
when it was 4xx recently.
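The new tracking is just a couple of columns plus a little recording helper, roughly (column and method names are guesses):

```ruby
class AddManifestLoadTrackingToSwfAssets < ActiveRecord::Migration[7.1]
  def change
    add_column :swf_assets, :manifest_loaded_at, :datetime
    add_column :swf_assets, :manifest_status_code, :integer
  end
end

class SwfAsset < ApplicationRecord
  # Call this after every manifest request, whether it succeeded or 404ed.
  def record_manifest_response!(status_code)
    update!(manifest_loaded_at: Time.current, manifest_status_code: status_code)
  end
end
```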
Okay cool, we're successfully migrated off translations, we can delete
the table now!
I'm not worried about backing up this data as such, because the
impress-2020 repo has a bunch of this data in its
`public-data-from-modeling.sql.gz` file history. Safe to remove from
the live app!
Like with Species, Color, and Zone, we're moving the translation data
directly onto the model, and just using English. This will simplify
some of our queries a lot (way fewer joins!), and it's what Neopets
does now anyway, and I have a secret hope that removing the complexity
along the codepath for `item.name` might help speed up large item lists
if we're lucky?? 🤞
Anyway, this is the first step, performing the migration to copy the
data onto the `items` table, making sure to keep them in sync for the
2020 app for now!
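The copy itself is basically one UPDATE joined against the `globalize` table, something like this (sketch; only the copy step and only `name` shown):

```ruby
class CopyNameToItems < ActiveRecord::Migration[7.1]
  def up
    add_column :items, :name, :string

    # Copy the English translation onto the items table directly.
    execute <<~SQL
      UPDATE items
      INNER JOIN item_translations
        ON item_translations.item_id = items.id
        AND item_translations.locale = 'en'
      SET items.name = item_translations.name
    SQL
  end
end
```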
They're no longer referenced in either app! Begone!
(The translated values are still available in the DTI 2020 repository's
history, under `scripts/db`!)
A little architecture trick here! DTI 2020 authorizes support staff
requests by means of a secret token, instead of user account stuff. And
our support tools still all call DTI 2020 APIs.
So here, we bridge the gap: we copy DTI 2020's support secret to this
app's environment variables (I needed to update
`deploy/files/production.env` and run `bin/deploy:setup` for this!),
then users with the new `support_secret` flag have it added to their
HTML documents in the meta tags. Then, the JS reads the meta tag.
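On the Rails side, the meta tag comes from a little helper along these lines (the helper name and env var name here are stand-ins); the JS then just reads that tag's `content` attribute.

```ruby
module ApplicationHelper
  # Emits the support secret for support users only, so their JS can send it
  # along with requests to the DTI 2020 support APIs.
  def support_secret_meta_tag
    return unless current_user&.support_secret?
    tag.meta name: "support-secret", content: ENV["SUPPORT_SECRET"]
  end
end
```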
I also fixed an issue in the `deploy/setup.yml` playbook, where I had
temporarily commented some stuff out to skip steps one time, and forgot
to uncomment them afterward, oops lol!
Like in 0dca538, this is preliminary work for being able to drop the
`zone_translations` table! We're copying the field over first, to be
able to migrate DTI 2020 safely before dropping anything.
Non-English languages haven't been supported on Neopets for a while, so
I'd like to remove this extra cross-cutting complexity, especially
since it's now inconsistent with the real site anyway!
The main motivation is that I'd like to do this for items too, because
I have a hunch that all the complexity of `globalize` to read
`item.name` is part of what's making large user lists so slow to
render: lots of little objects getting created down the stack, and
needing to be garbage-collected later.
I'm not sure that's why! But I figure removing this complexity is a
simplicity win anyway, so let's do it!
Note that this doesn't *finish* the migration, it just starts it! The
`Species::Translation` and `Color::Translation` models still exist, and
still have their data, and not all references to them are scrubbed yet.
I especially don't want to delete the backing tables until both DTI and
DTI 2020 are ready for it!
So this change will someday be paired with another change to actually
drop the tables - after backing up the data for future records just in
case, of course!
In impress-2020, we do a big slow query to figure out which users have
been active in trades recently. Now, we cache that timestamp on the
User model.
This won't have any immediate effect; it's to clear the way for Classic
DTI to receive the better trade ratios feature people like from 2020.
I also added some unit testing infra (because I finally wanted it!) for
all the ways you can trigger this timestamp lol
Note too that this is a bit of an unusually complex migration, but my
hope is that the batching and query structure and such helps it run
surprisingly fast! 🤞
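The backfill looks roughly like this; the exact definition of "trade activity" is more involved in the real thing, but the batching shape is the point:

```ruby
class AddLastTradeActivityAtToUsers < ActiveRecord::Migration[7.1]
  def up
    add_column :users, :last_trade_activity_at, :datetime

    # Backfill in batches, so we never hold one giant UPDATE open.
    User.in_batches(of: 1000) do |users|
      users.update_all(<<~SQL)
        last_trade_activity_at = (
          SELECT MAX(closet_hangers.updated_at)
          FROM closet_hangers
          WHERE closet_hangers.user_id = users.id
        )
      SQL
    end
  end
end
```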
We haven't used the mall spider in this app in forever (I guess we even
deleted the code at some point?), but there was some vestigial stuff
left. Goodbye!
There was a static page explaining it, which we no longer link to; and
there was an unused field in the User model for who was a beta tester
for it. Goodbye!
Okay, I've simplified the migration to *just* add the column, and
instead added a task to find assets without manifest URLs and backfill
them.
Performance is a lot better now, using the `async-http` library, which
as I understand it supports both persistent connections when invoked
like this, and maybe also HTTP/2 multiplexing?? (Though I'm not
actually sure images.neopets.com does lol)
I'm not sure about the number of concurrent tasks I picked here; 100
seems okay for an internet thing and for such small requests, but I
worry that the CDN is gonna get annoyed or something. Well, we'll see!
This task is very resumable if it turns out we get frozen out or
something.
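Here's roughly the structure, for the record; `possible_manifest_url` is a stand-in for the actual URL-guessing, and the rest is the standard `async` barrier-plus-semaphore pattern capping us at 100 requests in flight.

```ruby
require "async"
require "async/barrier"
require "async/semaphore"
require "async/http/internet"

namespace :swf_assets do
  desc "Backfill manifest URLs for assets that don't have one yet"
  task backfill_manifest_urls: :environment do
    Sync do
      internet = Async::HTTP::Internet.new
      barrier = Async::Barrier.new
      semaphore = Async::Semaphore.new(100, parent: barrier)

      SwfAsset.where(manifest_url: nil).find_each do |asset|
        semaphore.async do
          url = asset.possible_manifest_url # hypothetical URL-guessing helper
          response = internet.head(url)
          asset.update!(manifest_url: url) if response.success?
        ensure
          response&.close
        end
      end

      barrier.wait
    ensure
      internet&.close
    end
  end
end
```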
Ok so, impress-2020 guesses the manifest URL every time based on common
URL patterns. But the right way to do this is to read it from the
modeling data! But also, we don't have a great way to get the modeling
data directly. (Though as I write this, I guess we do have that
auto-modeling trick we use in the DTI 2020 codebase, I wonder if that
could work for this too?)
So anyway, in this change, we update the modeling code to save the
manifest URL, and also the migration includes a big block that attempts
to run impress-2020's manifest-guessing logic for every asset and save
the result!
It's uhh. Not fast. It runs at about 1 asset per second (a lot of these
aren't cache hits), and sometimes stalls out. And we have >600k assets,
so the estimated wall time is uhh. Seven days?
I think there's something we could do here around like, concurrent
execution? Though tbqh with the nature of the slowness being seemingly
about hitting the slow underlying images.neopets.com server, I don't
actually have a lot of faith that concurrency would actually be faster?
I also think it could be sensible to like… extract this from the
migration, and run it as a script to infer missing manifest URLs. That
would be easier to run in chunks and resume if something goes wrong.
Cuz like, I think my reasoning here was that backfilling this data was
part of the migration process… but the thing is, this migration can't
reliably get a manifest for everything (both cuz it depends on an
external service and cuz not everything has one), so it's a perfectly
valid migration to just leave the column as null for all the rows to
start, and fill this in later. I wish I'd written it like that!
But anyway, I'm just running this for now, and taking a break for the
night. Maybe later I'll come around and extract this into a separate
task to just try this on all assets missing manifests instead!
Idk if this used to be different or what, but it looks like the current
behavior is: if you delete a closet list, it'll leave the hangers
present, but Classic DTI wouldn't show them anywhere, and Impress 2020
(until recently) would crash about it.
Now, we use `dependent: :destroy` to delete the hangers when you delete
the list (which I think makes sense, and is different than what I
decided in the past but that's ok, and is what the current behavior
*looks* like to people!), and we add a migration that deletes orphaned
hangers.
The migration also outputs the deleted hangers as JSON, for us to hold
onto in case we made a mistake! I'm also backing up the database in
advance of running this migration, just in case we gotta roll back HARD!
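In code terms it's about this much (sketched; model and association names from memory):

```ruby
class ClosetList < ApplicationRecord
  # Deleting a list now deletes its hangers too, instead of orphaning them.
  has_many :hangers, class_name: "ClosetHanger", foreign_key: "list_id",
    dependent: :destroy
end

class DeleteOrphanedClosetHangers < ActiveRecord::Migration[7.1]
  def up
    orphans = ClosetHanger.where.not(list_id: nil)
                          .where.not(list_id: ClosetList.select(:id))

    # Print the doomed records as JSON first, so we can restore them if needed.
    puts orphans.to_json

    orphans.delete_all
  end
end
```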
Oh, I didn't realize the lowest version Rails had for this is 4.2. I wish running `rake db:migrate` checked this, but I'm running into it on another branch when I try to create a *new* migration, which for some reason leads it to inspect the old migrations and notice the issue. Weird!
I'm not sure it's literally true that they were all built against Rails 3.2, but that's what it was at before we upgraded, and like. that's probably fine
Okay, fine, finally making this controllable from the db without requiring a deploy :P Setting this new field will cause `item.special_color` to return the corresponding color. This mainly affects what we show on the item page, and what colors we request for modeling on the homepage.
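As a sketch of the behavior (the column and fallback method names here are just illustrative):

```ruby
class Item < ApplicationRecord
  belongs_to :manual_special_color, class_name: "Color", optional: true

  # Prefer the explicitly-set color from the database; otherwise fall back
  # to the old guess-from-the-item-name behavior.
  def special_color
    manual_special_color || infer_special_color_from_name
  end
end
```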