Okay so, when we reverted a buncha stuff in e3d196f, it was in response
to a bug where item modeling data was getting deleted. I was tired, and
just took the big simple hammer of reverting all the modeling refactors.
Here, we reintroduce *some* of them: the biology-related ones from
before the item bug. Tests still pass, and in fact I can un-pend some of them!
I might also try to reapply the change where we extract it all into a
new file, but without the item parts.
```shell
git cherry-pick --no-commit 13ceec8fcc
git cherry-pick --no-commit f81415d327
git cherry-pick --no-commit c03e7446e3
git cherry-pick --no-commit 52ca41dbff
```
Because we ended up with such a big error, and it doesn't have an easy
fix, I'm wrapping up today by reverting the entire set of refactors
we've done lately, so modeling in production can continue while we
improve this code further over time.
I generated this commit by hand-picking the recent refactor-y commits,
running `git revert --no-commit <hash>` on them in reverse order, then
manually updating `pet_spec.rb` to reflect the state of the code: it
still passes the most important behavioral tests, but no longer covers
one of the annoyances I *did* fix in the new code.
```shell
git revert --no-commit 48c1a58df9
git revert --no-commit 42e7eabdd8
git revert --no-commit d82c7f817a
git revert --no-commit 5264947608
git revert --no-commit 90407403ba
git revert --no-commit 242b85470d
git revert --no-commit 9eaee4a2d4
git revert --no-commit 52ca41dbff
git revert --no-commit c03e7446e3
git revert --no-commit f81415d327
git revert --no-commit 13ceec8fcc
```
This bug never made it into production, I think; it was a consequence of
how I refactored things in the recent changes? I think??
But yeah, here I refactor how we manage `SwfAsset#body_id`, to be a bit
more explicit about when and how it can change, instead of the weird
callbacks that tbqh have bitten us too often…
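To give a sense of what "more explicit" means, here's a rough sketch, not the actual diff: the method name is made up, and it assumes the existing convention that `body_id` of 0 means "fits all bodies".
```rb
class SwfAsset < ApplicationRecord
  # Sketch: instead of a callback quietly rewriting body_id, the modeling
  # code calls a named method when it sees this asset on a pet.
  def note_modeled_on_body!(new_body_id)
    return if body_id == new_body_id
    if body_id.present?
      update!(body_id: 0) # seen on two different bodies: treat as fits-all
    else
      update!(body_id: new_body_id)
    end
  end
end
```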
Ah right, the callbacks in `ParentSwfAssetRelationship` don't get
called when Rails does automatic join-model management stuff. We need
the `Item` to call its `update_cached_fields` callback itself, too!
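To sketch the gotcha (the association wiring and callback body here are illustrative, not the real code): when Rails manages `has_many :through` join records directly, the join model's callbacks can be skipped, so the `Item` side recomputes its cached fields itself.
```rb
class Item < ApplicationRecord
  has_many :parent_swf_asset_relationships, as: :parent
  has_many :swf_assets, through: :parent_swf_asset_relationships

  # When Rails manages the join records itself (e.g. `item.swf_assets = [...]`),
  # ParentSwfAssetRelationship's callbacks may never run, so the Item also
  # recomputes its cached fields after its own saves.
  after_save :update_cached_fields

  def update_cached_fields
    # ...recompute denormalized columns from the current swf_assets...
  end
end
```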
While fixing this, I found a new bug in how we infer `body_id` for
assets that fit all pets. Fixing that next!
Hmm, I think I made a mistake on `modeling_snapshot.rb:69`: I'm
assigning the *entire* `item.swf_assets` relation to *just* the assets
from this new model, which breaks all of the item's other connections.
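In other words, the bug is roughly this shape (a simplified sketch, not the actual snapshot code):
```rb
# Buggy: `=` replaces *every* asset link on the item with just this model's
# assets, severing the item's connections for all the other bodies.
item.swf_assets = new_model_assets

# Intended: only *add* the new assets, leaving the existing links alone.
item.swf_assets |= new_model_assets
```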
First, I'm disabling modeling. Then, I'll restore a backup. Then, I'll
write tests for that case, and fix it up!
If you check this box, it'll keep you in a mode where saving an alt
style redirects you to the *next* one that needs labeling, until
they're all done. Useful for big drops!
Catch missing fields in validation before sending the record to the DB,
and skip the Dyeworks stuff if the name is missing.
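Roughly, the shape of this change (the validated columns and the Dyeworks pattern here are illustrative, not the real ones):
```rb
class Item < ApplicationRecord
  # Catch missing fields at validation time, instead of letting the DB reject
  # the row, or letting a nil name reach the Dyeworks parsing below.
  validates :name, :description, :thumbnail_url, presence: true

  def dyeworks_base_name
    return nil if name.blank? # skip the Dyeworks logic without a name
    match = name.match(/\ADyeworks \w+: (?<base>.+)\z/) # pattern is illustrative
    match && match[:base]
  end
end
```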
I ran into this looking into `test/trade_activity_test.rb`, which fails
right now because we try to create a boring placeholder item with
minimal fields, and Dyeworks can't call `match` on its missing name!
Now, the test fails with a more helpful error about the item being
invalid. Next, I'll fix that!
Just getting this stuff out of Pet, in part because I want to start
being able to unit test modeling, and that will require stubbing out
what this service returns!
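The payoff I'm imagining looks something like this (the module and method names are hypothetical, just to show the stubbing story):
```rb
# A service wrapping the "fetch pet appearance data" call, so modeling specs
# can stub it instead of hitting the real Neopets endpoints.
module Neopets
  module CustomPets
    def self.fetch_viewer_data(pet_name)
      # ...perform the HTTP/AMF request here...
    end
  end
end

# Then, in a spec:
#   allow(Neopets::CustomPets).to receive(:fetch_viewer_data).
#     and_return(saved_viewer_data_fixture)
```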
Just a bit more clarity of grouping! I'm also thinking about extracting
modeling APIs into a service file like this too, in which case I think
this would help clarify what it is.
They're not all Nostalgic anymore! Oh, how the times have changed!
This way, new ones will appear as "<New?>", until support staff come in
and label them (with our cool new tools!)
Whoops, I didn't realize this change I made to validation for the alt
style editing form was goofing up alt style modeling!
The trick is, the validation was happening before the `before_create`
hook. Now I've reformulated these as `before_validation` hooks, so
we're not rejecting new alt styles for having no thumbnail!
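The shape of the fix, with an illustrative callback (the actual field being inferred and its default may differ):
```rb
class AltStyle < ApplicationRecord
  validates :thumbnail_url, presence: true

  # Previously a `before_create`, which ran *after* validation had already
  # rejected the record for its missing thumbnail. `before_validation` fills
  # in the default first, so modeling can create new alt styles again.
  before_validation :infer_thumbnail_url, on: :create

  private

  def infer_thumbnail_url
    self.thumbnail_url ||= default_thumbnail_url # hypothetical helper
  end
end
```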
Oh huh, I guess most of the new items we had when I rewrote this were
Maraquan, and I didn't test enough on standard species-specific items.
Before this change, partially-modeled items for standard pets would
appear as fully modeled, because the presence of the "nonstandard"
color Orange (because of the Orange Chia) meant that the "standard" key
didn't actually have any unique bodies (it was all `["standard", 47]`).
Here, I take my own comments' advice and move away from the standard
label as part of the logic. Instead, we look first for nonstandard
colors with unique bodies, which we'll call out as modelable; and then
check whether there are any basic bodies *not* covered by those special
colors.
That way, compatibility with the Maraquan Acara (a unique body) means
we'll treat Maraquan as a modelable color; and then we'll ignore the
basic bodies, even though it *does* fit the basic Mynci, because there
aren't any compatible basic bodies that aren't *also* Maraquan bodies.
This also means that compatibility with both the Blue Acara and Orange
Acara does *not* preclude a normal item from needing basic pets for
models: because, while Orange is a slightly nonstandard color in the
case of Chia, it doesn't have its own unique body for this item, so we
ignore it when predicting compatibility with basic colors.
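Here's the gist as rough Ruby (the method names on `Item` and the colors are made up for illustration; the real implementation differs):
```rb
def modelable_summary(item)
  # Step 1: nonstandard colors that contribute at least one body unique to
  # that color (e.g. Maraquan Acara) each count as modelable.
  special_colors = item.compatible_colors.select do |color|
    color.nonstandard? && color.body_ids.any? { |id| unique_to_color?(id, color) }
  end

  # Step 2: basic bodies only still need models if they're *not* already
  # covered by one of those special colors (so the Maraquan Mynci sharing the
  # basic Mynci body doesn't make us ask for basic pets too).
  covered_body_ids = special_colors.flat_map(&:body_ids)
  uncovered_basics = item.compatible_basic_body_ids - covered_body_ids

  { special_colors: special_colors, needs_basic_models: uncovered_basics.any? }
end
```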
This is because labeling poses with the Support tools *should*
invalidate the `PetState.all_supported_poses` cache! But the previous
cache key would only invalidate when a new pet state is *added*, not
when one is *edited*.
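A minimal sketch of the kind of cache key change this implies (assuming a `Rails.cache`-style fetch; the real code may key this differently):
```rb
def self.all_supported_poses
  # Including the newest `updated_at` means *editing* a pet state (e.g.
  # labeling its pose) busts the cache, not just adding a new one.
  cache_key = ["PetState.all_supported_poses", count, maximum(:updated_at)]
  Rails.cache.fetch(cache_key) do
    # ...compute the supported poses from labeled pet states...
  end
end
```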
This clocks in a bit bigger than what Impress 2020 does in terms of
binary encoding (with gzip it's at 11K instead of 4K), but I'm okay
with that for the simplicity win.
Gonna try to swap this in for where we're still using Impress 2020 for
the species/color picker in the outfit editor!
Before this change, a fully-modeled item (Dyeworks Burgundy: Gown of
the Night) was displaying as still needing the Chia. This was because
looking for "standard" body IDs like this caught up some of the weird
Chia bodies.
I think there's probably something here where we need to, like, relabel
certain colors? But honestly, the better version of this would probably
be to lean more into the `basic` label in this logic.
But hey, that's a refactor for another time. I gotta go eat!
Noticing a lot of Maraquan items on the homepage today, and they're
doing that thing of expecting standard body types to be relevant too,
because I think we wrote this logic before the Maraquan Mynci ended up
having the same standard Mynci body? (Maybe? Or maybe I'm being
ahistorical and I just wrote this wrong to begin with lol)
In any case, this is more accurate, and I think I'm also maybe
incidentally noticing that it's running faster, at least in my brief
before/after production testing? (There are *more* queries, like 100! But
many of them are *very* fast lookups, coming in at under 1ms—and also a
lot of them are dupes being served by Rails's request-scoped query
cache.)
I have some other changes planned too, but these are some easy ones. I
also turn back on this stuff in development, in hopes that my changes
can make these queries fast enough to not be a big deal anymore!
This is the second part of the previous change `efda6d74`, in which we
switch out the item search query conditions!
This was a two-parter to ease deployment: first deploy the change with
the migration, then *run* the migration (because it's an unusually slow
one), then deploy this change that actually uses it.
Another way to approach this would've been to deploy it all in one
commit, but not set it as `current` until we had run the migration.
That would have been a reasonable approach too!
This is the first part of a change to improve search performance, by
caching occupied zone IDs and supported body IDs onto the Item record
itself, instead of always doing joins with `SwfAsset`.
It's unfortunate, because part of the power of SQL is joins! But doing
joins with big tables, in ways that can't take advantage of indexes in
the same ways as we often want to, is… slow.
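Concretely, the caching looks something like this in spirit (column names are my shorthand from the prose; the actual migration and serialization may differ):
```rb
class AddCachedFieldsToItems < ActiveRecord::Migration[7.1]
  def change
    # Denormalized copies of data we previously joined `swf_assets` to get,
    # so item search can filter on `items` alone.
    add_column :items, :cached_occupied_zone_ids, :string
    add_column :items, :cached_compatible_body_ids, :string
  end
end
```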
It's possible there's something I'm misunderstanding about SQL
optimization, and this _could_ be done with query optimization or
indexes instead of duplicating data like this? This complexity carries
the risk of data getting out of sync in unforeseen ways. But this is
what I know how to do, and it seems to be working, so! Okay!
This hasn't worked for a while anyway! Let's remove the bits of code
where we deal with it, and the database field that signals it. (We also
make a corresponding change in Impress 2020, so it doesn't crash trying
to query based on the `prank` column.)
I also ran this snippet to clear out all the Nebula stuff in the db:
```rb
Color.transaction do
  nebula = Color.where(prank: true).find_by_name("Nebula")
  nebula.pet_types.includes(pet_states: :swf_assets).each do |pet_type|
    pet_type.pet_states.each do |pet_state|
      pet_state.parent_swf_asset_relationships.each do |psa|
        psa.swf_asset.destroy!
        psa.destroy!
      end
      pet_state.destroy!
    end
    pet_type.destroy!
  end
  nebula.destroy!
end
```