This has just been absent for too long! We've lost a lot of data about
when poses were first modeled, which is a shame.
But I want this in now, because I was just doing caching on
/rainbow-pool.json, and realized that _labeling_ poses is another way
pet states can change, not just be created!
So we need an `updated_at` field, to be able to quickly detect edits
that require us to invalidate the cache on
`PetState.all_supported_poses`. I'll do that next!
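For the record, the migration itself is basically just standard Rails timestamps on `pet_states` — a sketch, not necessarily the exact code, and existing rows just keep NULL since we don't know their history:
```rb
# Sketch: add created_at/updated_at to pet_states, nullable so existing rows
# (whose history we've lost) can stay NULL.
class AddTimestampsToPetStates < ActiveRecord::Migration[7.0]
  def change
    add_timestamps :pet_states, null: true
  end
end
```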
This is the first part of a change to improve search performance, by
caching occupied zone IDs and supported body IDs onto the Item record
itself, instead of always doing joins with `SwfAsset`.
It's unfortunate, because part of the power of SQL is joins! But doing
joins with big tables, in ways that can't take advantage of indexes the
way we'd like, is… slow.
It's possible there's something I'm misunderstanding about SQL
optimization, and this _could_ be done with query optimization or
indexes instead of duplicating data like this? This complexity carries
the risk of data getting out of sync in unforeseen ways. But this is
what I know how to do, and it seems to be working, so! Okay!
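To give a sense of the shape of it, here's a hypothetical sketch — the column names (`cached_occupied_zone_ids`, `cached_compatible_body_ids`), the JSON serialization, and the sync method are all illustrative, not the exact code in this change:
```rb
class Item < ApplicationRecord
  # Denormalized copies of data that canonically lives on SwfAsset.
  # (The `coder: JSON` form of `serialize` assumes a recent Rails.)
  serialize :cached_occupied_zone_ids, coder: JSON
  serialize :cached_compatible_body_ids, coder: JSON

  # Recompute the cached fields from the canonical SwfAsset data, e.g. after
  # modeling adds or changes assets for this item.
  def update_cached_fields!
    update!(
      cached_occupied_zone_ids: swf_assets.distinct.pluck(:zone_id),
      cached_compatible_body_ids: swf_assets.distinct.pluck(:body_id),
    )
  end
end
```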
This hasn't worked for a while anyway! Let's remove the bits of code
where we deal with it, and the database field that signals it. (We also
make a corresponding change in Impress 2020, so it doesn't crash trying
to query based on the `prank` column.)
I also ran this snippet to clear out all the Nebula stuff in the db:
```rb
Color.transaction do
  nebula = Color.where(prank: true).find_by_name("Nebula")
  nebula.pet_types.includes(pet_states: :swf_assets).each do |pet_type|
    pet_type.pet_states.each do |pet_state|
      pet_state.parent_swf_asset_relationships.each do |psa|
        psa.swf_asset.destroy!
        psa.destroy!
      end
      pet_state.destroy!
    end
    pet_type.destroy!
  end
  nebula.destroy!
end
```
In this change, instead of *always* inferring the Dyeworks base item
from the item name at runtime, we now have a database field that tracks
it, and auto-populates whenever an item *seems* to need a Dyeworks base
item but doesn't have one yet.
This will enable us to set the base item manually in cases where it
can't be inferred, and load Dyeworks base items for the Item Getting
Guide in one query with `includes(:dyeworks_base_item)`.
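Roughly the shape of it, as a sketch — the `dyeworks_base_item_id` column name and the inference logic here are simplified and illustrative:
```rb
class Item < ApplicationRecord
  belongs_to :dyeworks_base_item, class_name: "Item", optional: true

  # If the name looks like "Dyeworks Purple: Some Item", try to find and save
  # the base item ("Some Item") — unless one is already set.
  def infer_dyeworks_base_item!
    return if dyeworks_base_item.present?
    match = name.match(/\ADyeworks [^:]+: (?<base_name>.+)\z/)
    return if match.nil?
    update!(dyeworks_base_item: Item.find_by(name: match[:base_name]))
  end
end
```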
This migration does a bit more of the fix-em-up scripting work *in* the
migration itself than I usually do, mainly because there's so much in
this one that I think being extra-explicit is useful. We make sure to
do it gracefully though!
I am. Kinda surprised we didn't have this already, huh?? I guess in
most searches, the difference isn't very noticeable, because we don't
have a lot of item names to sort anyway? But we do this *so often* that
I think an index will certainly help! Let's add it!
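The change itself is about as small as migrations get — something like this, assuming item names live on `items` at this point (sketch):
```rb
class AddNameIndexToItems < ActiveRecord::Migration[7.0]
  def change
    add_index :items, :name
  end
end
```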
Currently we only load the homepage, so there's only actually one
wearable item to sync up! But here's the task to do it!
To do this, we also created the backing model `NCMallRecord`, where we'll
save the current NC Mall state!
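As a loose sketch of the backing table — the column names here are illustrative, just enough to record what's in the mall and for how much:
```rb
class CreateNCMallRecords < ActiveRecord::Migration[7.0]
  def change
    create_table :nc_mall_records do |t|
      t.references :item, null: false, index: { unique: true }
      t.integer :current_price, null: false
      t.timestamps
    end
  end
end
```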
Idk how we got into this state, or if it's environment-dependent or
MySQL-version-dependent or what, but while setting up the dev
environment on my macOS machine, MySQL complains that `TEXT` columns
can't have default values.
Well, in that case, let's just have it be a non-nullable field, and add
a note to our code that missing fields *can* cause item saving to fail!
(This was always true, but I'm just extra-noting it because it's
becoming *more* true.)
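So the column ends up in roughly this shape — required, but with no database default (sketch; the column name here is illustrative):
```rb
class MakeItemFieldNonNullable < ActiveRecord::Migration[7.0]
  def change
    # NOT NULL, but no DEFAULT, since MySQL is refusing one on TEXT columns.
    change_column_null :items, :description, false
  end
end
```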
Simple enough to start! If `shadowbanned: true` gets set on a user,
then we show a 404 instead of the actual list page, *unless* you're
logged in as that user, or coming from a known IP of that user.
This isn't a very strong mechanism! Just something to hopefully
increase the costs of messing around with list spam.
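A sketch of the check — the controller and the `known_ip_for?` helper are made-up names, just illustrating the idea:
```rb
class ClosetListsController < ApplicationController
  def show
    @list = ClosetList.find(params[:id])
    owner = @list.user
    # Shadowbanned users' lists 404 for everyone except the owner themselves
    # (or someone coming from one of their known IPs).
    if owner.shadowbanned? && owner != current_user && !known_ip_for?(owner)
      raise ActiveRecord::RecordNotFound
    end
  end
end
```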
Commit 07617fa34f was an emergency commit
while I was on a trip, away from my usual workstation! I was able to
write the migration and run it against production, but I didn't have
the dev server fully set up, so I wasn't able to run the migration in
development, which is when Rails updates `schema.rb`.
Now I'm home and can run it, easy peasy! `rails db:migrate`
A few pieces here:
1. Convert all tables to `utf8mb4`+`utf8mb4_unicode_520_ci` strings.
2. Configure that as the server's default.
3. Configure the Rails database connection to use this encoding too.
Came together pretty well, whew! This has been a LONG time coming,
`latin1` is NOT a good charset for the year 2024!
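For reference, step 1 boils down to one statement like this per table (sketch, using `items` as the example), and step 3 is roughly setting `encoding: utf8mb4` and `collation: utf8mb4_unicode_520_ci` on the `mysql2` connection in `config/database.yml`:
```rb
class ConvertTablesToUtf8mb4 < ActiveRecord::Migration[7.0]
  def up
    # Step 1: convert each table's charset and collation in place.
    execute <<~SQL
      ALTER TABLE items
        CONVERT TO CHARACTER SET utf8mb4
        COLLATE utf8mb4_unicode_520_ci;
    SQL
  end
end
```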
Yeah what the heck, why is this here, we have the migration to drop it
and it's already dropped in production!
Y'know, sometimes I goof a migration and it sets things in a weird
state in development, maybe we didn't recover correctly from that or
something? But idk how we would have goofed this one. Whatever! I've
manually dropped it from my development machine, and it was already
correctly dropped on production, so, go figure!
This reverts commit cc339672b1.
Oh, wait, no, the state the schema file was in *was* correct. I'm not
sure why… huh ok, I need to debug my local state.
Did we end up behind on this migration in development? Idk! Weird! I'm
doing other migration stuff now and noticing this change slipping in
and I'm like. Huh! You should've already been there!
I considered this at first, but decided to keep it simple until it
turned out to matter. Oops, it already matters, lol!
I want the item search code to be able to easily tell if the series
name is real or a placeholder, so we can decide whether to build the
filter text in `fits:$series-$color-$species` form or
`fits:alt-style-$id` form.
So in this change, we keep it that `AltStyle#series_name` returns the
placeholder string if none is set, but callers can explicitly ask
whether it's a real series name or not. Will use this in our next
change!
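Concretely, something in this shape — the placeholder string and method names here are illustrative, not necessarily the real ones:
```rb
class AltStyle < ApplicationRecord
  PLACEHOLDER_SERIES_NAME = "<Unknown series>"

  # Returns the real series name if one is set, or a placeholder if not.
  def series_name
    self[:series_name].presence || PLACEHOLDER_SERIES_NAME
  end

  # Lets callers (like item search) check which kind of name they got, so they
  # can pick between the fits:$series-$color-$species and fits:alt-style-$id
  # filter forms.
  def real_series_name?
    self[:series_name].present?
  end
end
```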
Previously we did this hackily by comparing the ID to a hardcoded list
of IDs, but I think putting this in the database is clearer and more
robust, and it should also help with our upcoming item search stuff
that will filter by it!
Now we're *really* duplicating Impress 2020's system lol, but I
need a way to not keep trying to load manifests that are actually 404,
which are surprisingly plentiful!
This doesn't actually stop us from loading anything yet, it just tracks
the timestamps and the HTTP status! But next I'll add logic to skip
when it was 4xx recently.
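The skip logic I have in mind for the next change will look something like this — a sketch, with `manifest_loaded_at` and `manifest_status_code` as illustrative column names:
```rb
class SwfAsset < ApplicationRecord
  RETRY_4XX_AFTER = 1.month

  # Should we bother re-requesting the manifest right now?
  def should_load_manifest?
    return true if manifest_loaded_at.nil?              # never tried yet
    return true unless (400..499).cover?(manifest_status_code)
    manifest_loaded_at < RETRY_4XX_AFTER.ago             # 4xx, but a while ago
  end
end
```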
Okay cool, we're successfully migrated off translations, we can delete
the table now!
I'm not worried about backing up this data as such, because the
impress-2020 repo has a bunch of this data in its
`public-data-from-modeling.sql.gz` file history. Safe to remove from
the live app!
Like with Species, Color, and Zone, we're moving the translation data
directly onto the model, and just using English. This will simplify
some of our queries a lot (way fewer joins!), and it's what Neopets
does now anyway, and I have a secret hope that removing the complexity
along the codepath for `item.name` might help speed up large item lists
if we're lucky?? 🤞
Anyway, this is the first step, performing the migration to copy the
data onto the `items` table, making sure to keep them in sync for the
2020 app for now!
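The copy itself boils down to something like this — a sketch assuming the usual globalize table layout, and showing just the `name` column:
```rb
class CopyNameOntoItems < ActiveRecord::Migration[7.0]
  def up
    add_column :items, :name, :string
    execute <<~SQL
      UPDATE items
      INNER JOIN item_translations
        ON item_translations.item_id = items.id
        AND item_translations.locale = 'en'
      SET items.name = item_translations.name;
    SQL
  end
end
```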
They're no longer referenced in either app! Begone!
(The translated values are still available in the DTI 2020 repository's
history, under `scripts/db`!)
A little architecture trick here! DTI 2020 authorizes support staff
requests by means of a secret token, instead of user account stuff. And
our support tools still all call DTI 2020 APIs.
So here, we bridge the gap: we copy DTI 2020's support secret to this
app's environment variables (I needed to update
`deploy/files/production.env` and run `bin/deploy:setup` for this!),
then users with the new `support_secret` flag have it added to their
HTML documents in the meta tags. Then, the JS reads the meta tag.
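The Rails side of that is roughly a helper like this (the env var name, the helper name, and treating `support_secret` as a boolean flag are all my shorthand here, not necessarily the real code), and then the JS just does a `querySelector` for that meta tag and reads its `content`:
```rb
module ApplicationHelper
  # Only users with the support flag get the secret embedded in their <head>.
  def support_secret_meta_tag
    return unless current_user&.support_secret?
    tag.meta name: "impress-2020-support-secret",
             content: ENV["IMPRESS_2020_SUPPORT_SECRET"]
  end
end
```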
I also fixed an issue in the `deploy/setup.yml` playbook, where I had
temporarily commented some stuff out to skip steps one time, and forgot
to uncomment them after, oops lol!
Like in 0dca538, this is preliminary work for being able to drop the
`zone_translations` table! We're copying the field over first, to be
able to migrate DTI 2020 safely before dropping anything.
Non-English languages haven't been supported on Neopets for a while, so
I'd like to remove this extra cross-cutting complexity, especially
since it's now inconsistent with the real site anyway!
The main motivation is that I'd like to do this for items too, because
I have a hunch that all the complexity of `globalize` to read
`item.name` is part of what's making large user lists so slow to
render: lots of little objects getting created down the stack, and
needing to be garbage-collected later.
I'm not sure that's why! But I figure removing this complexity is a
simplicity win anyway, so let's do it!
Note that this doesn't *finish* the migration, it just starts it! The
`Species::Translation` and `Color::Translation` models still exist, and
still have their data, and not all references to them are scrubbed yet.
I especially don't want to delete the backing tables until both DTI and
DTI 2020 are ready for it!
So this change will someday be paired with another change to actually
drop the tables - after backing up the data for future records just in
case, of course!
In impress-2020, we do a big slow query to figure out which users have
been active in trades recently. Now, we cache that timestamp on the
User model.
This won't have any immediate effect; it's to clear the way for Classic
DTI to receive the better trade ratios feature people like from 2020.
I also added some unit testing infra, because I finally wanted it, for
all the ways you can trigger this timestamp lol.
Note too that this is a bit of an unusually complex migration, but my
hope is that the batching and query structure and such helps it run
surprisingly fast! 🤞
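To show what I mean by the batching, here's a rough sketch — the column name, and the pretense that trade activity comes straight from `closet_hangers`, are both simplifications, not the real migration:
```rb
class BackfillLastTradeActivityAt < ActiveRecord::Migration[7.0]
  def up
    User.in_batches(of: 1_000) do |batch|
      # One bulk UPDATE per batch of users, with a subquery for each user's
      # most recent trade-related activity, instead of N+1 Ruby-side saves.
      execute <<~SQL
        UPDATE users
        SET last_trade_activity_at = (
          SELECT MAX(closet_hangers.updated_at)
          FROM closet_hangers
          WHERE closet_hangers.user_id = users.id
        )
        WHERE users.id IN (#{batch.ids.join(", ")});
      SQL
    end
  end
end
```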
We haven't used the mall spider in this app in forever (I guess we even
deleted the code at some point?), but there was some vestigial stuff
left. Goodbye!
There was a static page explaining it, which we no longer link to; and
there was an unused field in the User model for who was a beta tester
for it. Goodbye!
Ok so, impress-2020 guesses the manifest URL every time based on common
URL patterns. But the right way to do this is to read it from the
modeling data! But also, we don't have a great way to get the modeling
data directly. (Though as I write this, I guess we do have that
auto-modeling trick we use in the DTI 2020 codebase, I wonder if that
could work for this too?)
So anyway, in this change, we update the modeling code to save the
manifest URL, and also the migration includes a big block that attempts
to run impress-2020's manifest-guessing logic for every asset and save
the result!
It's uhh. Not fast. It runs at about 1 asset per second (a lot of these
aren't cache hits), and sometimes stalls out. And we have >600k assets,
so the estimated wall time is uhh. Seven days?
I think there's something we could do here around like, concurrent
execution? Though tbqh with the nature of the slowness being seemingly
about hitting the slow underlying images.neopets.com server, I don't
actually have a lot of faith that concurrency would actually be faster?
I also think it could be sensible to like… extract this from the
migration, and run it as a script to infer missing manifest URLs. That
would be easier to run in chunks and resume if something goes wrong.
Cuz like, I think my reasoning here was that backfilling this data was
part of the migration process… but the thing is, this migration can't
reliably get a manifest for everything (both cuz it depends on an
external service and cuz not everything has one), so it's a perfectly
valid migration to just leave the column as null for all the rows to
start, and fill this in later. I wish I'd written it like that!
But anyway, I'm just running this for now, and taking a break for the
night. Maybe later I'll come around and extract this into a separate
task to just try this on all assets missing manifests instead!
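If I do extract it, I imagine the task looking roughly like this — the task name, the `manifest_url` column, and the `infer_manifest_url` helper are all placeholders for whatever the real code ends up being:
```rb
namespace :swf_assets do
  desc "Try to infer manifest URLs for assets that don't have one yet"
  task infer_missing_manifest_urls: :environment do
    scope = SwfAsset.where(manifest_url: nil)
    puts "#{scope.count} assets are missing a manifest URL"
    scope.find_each do |swf_asset|
      url = swf_asset.infer_manifest_url # hypothetical guessing helper
      swf_asset.update!(manifest_url: url) if url.present?
    rescue StandardError => e
      # Log and keep going, so one bad asset doesn't stall the whole run.
      puts "Failed for asset #{swf_asset.id}: #{e.message}"
    end
  end
end
```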
Idk if this used to be different or what, but it looks like the current
behavior is: if you delete a closet list, it'll leave the hangers
present; Classic DTI would not show them anywhere, but Impress 2020
(until recently) would crash about it.
Now, we use `dependent: :destroy` to delete the hangers when you delete
the list (which I think makes sense, and is different than what I
decided in the past but that's ok, and is what the current behavior
*looks* like to people!), and we add a migration that deletes orphaned
hangers.
The migration also outputs the deleted hangers as JSON, for us to hold
onto in case we made a mistake! I'm also backing up the database in
advance of running this migration, just in case we gotta roll back HARD!
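Sketched out, both pieces look something like this — association and column names are illustrative:
```rb
class ClosetList < ApplicationRecord
  # Deleting a list now deletes its hangers too, matching what the UI implied.
  has_many :hangers, class_name: "ClosetHanger", dependent: :destroy
end

# And the data migration boils down to: find hangers that point at a list that
# no longer exists, print them as JSON to hold onto, then delete them.
orphaned = ClosetHanger.where.not(list_id: nil).where.missing(:list)
puts orphaned.to_json
orphaned.destroy_all
```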
Whew! Seems like a pretty clean one? Ran `rails app:upgrade` and stuff, and made some corrections to keyword arguments for `translate` calls. There might be more such problems elsewhere? But that's hard to search for, and we'll have to see.
Okay, fine, finally making this controllable from the db without requiring a deploy :P Setting this new field will cause `item.special_color` to return the corresponding color. This mainly affects what we show on the item page, and what colors we request for modeling on the homepage.
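Roughly the shape of it — `manual_special_color` is my guess at a field name here, and `special_color_from_name` stands in for the old name-based inference:
```rb
class Item < ApplicationRecord
  belongs_to :manual_special_color, class_name: "Color", optional: true

  # Prefer the manually-set color from the DB; otherwise fall back to
  # inferring it from the item name like before.
  def special_color
    manual_special_color || special_color_from_name
  end
end
```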