Realizing that, with the keyword argument spread syntax, I don't need
to do merging, I can just spread at the right place!
My rationale for the ordering here is: if the caller theoretically tried
to override the builder (even though I don't see why), I think we would
want to respect that. Whereas the `class` argument should be overridden
because we're safely *merging* our `.support-form` class into it.
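Here's a minimal sketch of the idea, with illustrative names (not the exact helper or builder names in the app):
```rb
def support_form_with(**options, &block)
  # Merge our class into whatever the caller passed, rather than clobbering it.
  merged_class = ["support-form", options.delete(:class)].compact.join(" ")

  # Spread the caller's options *after* our defaults, so something like a
  # custom `builder:` wins if they really want it—but pass `class:` last,
  # since we've already merged theirs in above.
  form_with(builder: SupportFormBuilder, **options, class: merged_class, &block)
end
```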
Why? I want to reuse this for unlabeled pet styles! (That's been the
immediate motivation for this refactor, but I also just like that
it'll make support forms easier to build.)
I think helpers are fine for the simpler ones that are basically *just*
wrapper tags, but once it starts getting into `concat`, I think that's
too unfamiliar of a syntax for developers; let's bail into our usual
templating system!
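To illustrate what I mean (hypothetical helper, not real code from the app): a plain wrapper-tag helper reads fine, but sibling elements mean reaching for `concat`:
```rb
# Fine as a helper: it's basically just a wrapper tag.
def support_form_section(&block)
  tag.section(class: "support-form-section", &block)
end

# Less fine: once there are sibling elements, we need `concat` and `capture`,
# which reads a lot less like the HTML it produces. At that point, a partial
# template is friendlier.
def support_form_section(title, &block)
  tag.section(class: "support-form-section") do
    concat tag.h3(title)
    concat capture(&block)
  end
end
```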
I'm not sure about putting them in `application/support_form` like this.
That's cute for one-offs like `application/hanger_spinner`, because
`render partial: "hanger_spinner"` assumes the `application` view folder
by default, but that doesn't work once it's nested: it looks for a
`views/support_form` folder.
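For the record, the lookup behavior I mean (the nested partial name here is just an example):
```rb
# Fine: a bare partial name falls back to `app/views/application/`, so this
# finds `app/views/application/_hanger_spinner.html.erb` from any view.
render partial: "hanger_spinner"

# Not fine: a nested name is treated as an explicit folder prefix, so this
# looks for `app/views/support_form/_field.html.erb`, not
# `app/views/application/support_form/_field.html.erb`.
render partial: "support_form/field"
```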
I think maybe it could soon be time to bail from the strict "view
folders belong to controllers" thing, similar to how we did for
`SupportFormHelper`, and add a `components` folder or similar? Idk, not
sure yet!
Ah right, `> label` doesn't work with how Rails will wrap broken labels
and inputs each in a `.field_with_errors` element. Fixed, and added
some basic coloring!
Instead of hand-rolling HTML, this offers helpers like `f.field`, to
help ensure the HTML is consistent, and to keep the templates more
focused on the unique form elements.
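Roughly the shape of it, heavily simplified (the real builder surely differs):
```rb
class SupportFormBuilder < ActionView::Helpers::FormBuilder
  # Wraps a label and input in our standard field markup, so templates can
  # write `f.field :some_attribute` instead of hand-rolling the wrapper HTML.
  def field(method, **options)
    @template.tag.div(class: "field") do
      label(method) + text_field(method, **options)
    end
  end
end
```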
Most notable change here is extracting the pose option bubbles into a
`data-type="radio-grid"`, and pulling that into the `.support-form`
CSS. My rationale is that, unlike most fields, this field benefits from
being 100%-width, and I don't want to specify that as an override if I
can avoid it, because that's fragile-y.
Instead, I extract this into a generic type of field that
`.support-form` can use (it feels pretty reusable anyway!), and require
the caller to specify how many columns they want as `--num-columns`.
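In markup terms, the field container ends up something like this (sketched with Rails's `tag` helper; the real template may differ):
```rb
tag.div(data: { type: "radio-grid" }, style: "--num-columns: 4") do
  # …one label + radio button per option goes here…
end
```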
Specifically, I'm going for a more-vertical layout, cuz I want to bring
PetState over to it, and the weird grid situation wasn't gonna fit the
big pose label radios.
There's still plenty left, but we have 213 we "manually" marked as
"done" (I think I ran a batch job on everything Chips told me was on
the page and already done), and that should help a lot!
This keeps causing missing-attribute crashes when I change things, and
I don't think the performance benefit is a big deal for how the page
currently runs, esp as we keep gathering more attributes? I feel like
`description` is the main "large" one we're omitting, and like. Shrug!
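The failure mode, for context (scope and column names made up for illustration):
```rb
class Item < ApplicationRecord
  # We'd been limiting which columns get loaded, something like:
  scope :for_item_list, -> { select(:id, :name, :thumbnail_url) }
end

# …which means any later access to an omitted column blows up:
Item.for_item_list.first.description
# => raises ActiveModel::MissingAttributeError
```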
Been running into the item "Hanging Plushie Grundo Background Item" not
being modelable, because TNT seem to have left its description blank!
Let's be less picky about what data we take in, but keep the intention
of these validations: to ensure that *we* don't make a mistake and
forget a field when importing items!
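One way to express that intent (not necessarily the exact validation used here): require that the importer *provided* the field, while tolerating it being blank.
```rb
class Item < ApplicationRecord
  # Blank is okay (TNT sometimes leaves descriptions empty), but nil means
  # our importer forgot to set the field at all, which we still want to catch.
  validate :description_was_provided

  private

  def description_was_provided
    errors.add(:description, "must be provided (but may be blank)") if description.nil?
  end
end
```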
This is a basic attempt at the Vandagyre logic, but it also covers things
like "Maraquan items released before the Maraquan X was released"!
I also added a new task, `rails items:update_cached_fields`, which needs
to be run after this change, because it affects the value of
`Item#predicted_fully_modeled?`.
Eyeballing the updated search results for `-is:modeled`, this feels
pretty close? I'm guessing it's not perfect (e.g. maybe a pet type we
got modeled late into its existence, or some items that just never did
fit a certain pet), but feels pretty good.
I also know we had the "modeling hints" override in Impress 2020, which
we aren't reading yet. We should probably take that into account here
too!
We're now caching `predicted_fully_modeled?` on the database record, so
we can query by it in the database!
I'm moving on from the model I did in Impress 2020, of writing really
big fancy single-source-of-truth queries based on the assets themselves.
I see the merit of that in terms of theoretical reliability, but in
practice I think it will be *more* reliable to have one *in-code*
definition of modeling status (which we need anyway for generating the
homepage modeling requests), and just save that in a queryable way.
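The general shape of it, with paraphrased column and method names:
```rb
class Item < ApplicationRecord
  # `predicted_fully_modeled?` stays a plain-Ruby definition (we need it
  # anyway for the homepage modeling requests); we just mirror its value
  # into a column so search can query it.
  def update_cached_fields
    update!(cached_predicted_fully_modeled: predicted_fully_modeled?)
  end

  scope :is_modeled, -> { where(cached_predicted_fully_modeled: true) }
end
```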
In our tests, I discovered an unexpected behavior where calling
`item.swf_assets << swf_asset` wasn't updating computed fields
correctly.
This isn't something we actually do in-app; I think the modeling system
just happened to trigger the callbacks in a way that still worked fine?
But I think this is a good idea for reliability, since caching is such
a notoriously difficult thing to get right anyway! And it makes our
tests simpler and clearer.
Specifically, `compatible_body_ids` references `swf_assets`, which, I'm
kinda surprised, *doesn't* include the newly-added asset yet when the
`ParentSwfAssetRelationship.after_save` hook runs while calling
`item.swf_assets << swf_asset`. Reloading it fixes this!
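In terms of the earlier sketch, the fix amounts to reloading before the recompute (again, names paraphrased):
```rb
class Item < ApplicationRecord
  def update_cached_fields
    # When `item.swf_assets << swf_asset` fires ParentSwfAssetRelationship's
    # after_save, the in-memory `swf_assets` collection doesn't yet include
    # the new asset—so reload before computing anything derived from it.
    swf_assets.reload
    update!(
      cached_compatible_body_ids: compatible_body_ids,
      cached_predicted_fully_modeled: predicted_fully_modeled?,
    )
  end
end
```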
I'm grouping some shared behaviors to pull into the different cases, so
that we can check the behaviors of a fully-modeled item vs a
not-fully-modeled item in *all* of the relevant cases.
Specifically, I'm planning to add `is:modeled` search filters, and
creating pending placeholder tests for them!
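Something like this, structurally (the filter examples are just body-less placeholders for now, and the including context is assumed to define `item` via `let`):
```rb
RSpec.shared_examples "a fully modeled item" do
  it "is predicted fully modeled" do
    expect(item.predicted_fully_modeled?).to be(true)
  end

  # Body-less examples are automatically reported as pending placeholders.
  it "matches the is:modeled search filter"
  it "does not match the -is:modeled search filter"
end

# Then each relevant case can just declare:
#   it_behaves_like "a fully modeled item"
```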
This doesn't generally happen, but did the other day when I rolled back
some of the database's SWF asset records but kept the items—and it was
a bit confusing that the homepage marked them as fully modeled!
The main thing is that I was getting "RequireNotFound" warnings for
`require 'rails_helper'`, because the LSP seems unaware of how RSpec
offers `spec/` as a root for requires.
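Concretely, it's the difference between these two (the number of `..`s depends on where the spec file lives):
```rb
# Before: relies on RSpec putting `spec/` on the load path, which the LSP
# doesn't know about.
require 'rails_helper'

# After: an explicit path the LSP can follow.
require_relative '../rails_helper'
```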
I think the `require_relative` is clearer anyway, and I'm decently
satisfied with it. And if I decide it's too ugly, we can try
something else in the Solargraph config or something sometime!
There's also a `rails neopets:import` task you can call to do them all
at once!
I'm gonna do some other stuff here too to make `neopets:import` easier
to call all in one go, like prompting for the Neologin cookie just
once at the start.
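The umbrella task is roughly this shape (the sub-task names and the env var are made up; the one-time cookie prompt is the part I want to hoist to the top):
```rb
namespace :neopets do
  desc "Run all the Neopets import tasks in one go"
  task import: :environment do
    # Ask for the Neologin cookie once, up front, so the sub-tasks can all
    # read it from the environment instead of each prompting separately.
    unless ENV["NEOLOGIN_COOKIE"]
      print "Neologin cookie: "
      ENV["NEOLOGIN_COOKIE"] = $stdin.gets.strip
    end

    %w[neopets:import:rainbow_pool neopets:import:styles].each do |name|
      Rake::Task[name].invoke
    end
  end
end
```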
Note that this changes the cron setup, so you gotta run
`bin/deploy:setup` after this deploys!
I'm running into this with the automated tests, where I think the
fixtures sometimes use large auto-generated IDs?
But the point is, our tables generally use Rails's default `:integer`
size for its IDs, and then columns that reference them are *smaller*,
which is… not correct stuff, y'know?
So I figure, let's just expand the columns. We don't have enough data
that being real picky about the integer sizes matters, so let's keep it
simple and more obviously correct.
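Per table, the migration is basically just widening the reference columns back to Rails's default 4-byte integer (column choices and the migration version tag here are illustrative):
```rb
class WidenReferenceIdColumns < ActiveRecord::Migration[7.1]
  def up
    # These were smaller int types than the primary keys they point at;
    # plain `:integer` matches Rails's default 4-byte ID size.
    change_column :parents_swf_assets, :swf_asset_id, :integer
    change_column :parents_swf_assets, :parent_id, :integer
  end
end
```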
Oh huh, when doing Rainbow Pool stuff, I put the ordering in the wrong
place! It's a sensible ordering for the Rainbow Pool page, but not so
much for the JSON view!
This is currently crashing the Rainbow Pool when the Anniversary Techo
would appear, because the asset seems to be missing? The SWF doesn't
seem to exist, nor does its manifest.
Oh right, yeah, we like to do things gracefully around here when
there's no corresponding color/species record yet!
Paying more attention to this, I'm thinking like… it could be a cool
idea to, in modeling, *create* the new color/species record, and just
not have all the attributes filled in yet? Especially now that we're
less dependent on attributes like `standard` to be set for correct
functioning.
But for now, we follow the same strategy we do elsewhere in the app: a
pet type can have `color_id` and `species_id` that don't correspond to
a real record, and we cover over that smoothly.
Oh dang, we're on color #120 now, and it looks like our maximum value is
127. Let's expand that!
I noticed this because I'm writing tests for some stuff, and used "456"
as a placeholder ID number, and it just fully did not work, and I'm
like. Oh.
Huh, I guess when I reapplied my refactors to the modeling-disabling code
the other day, I didn't notice that I turned it off in production. And I
guess I didn't deploy this at the time cuz it's just refactors, but
when I deployed other changes yesterday this came with it. Whoops!
I only now thought through that I can scrape these instead of entering
them manually, similar to how we did our Rainbow Pool scraper… hooray!
I'm actually writing tests for stuff too, wowie!
This change was modified a bit after cherry-picking, to no longer
include the broken changes to item modeling in 9eaee4a.
(cherry picked from commit 90407403ba)
Okay so, when we reverted a buncha stuff in e3d196f, it was in response
to a bug where item modeling data was getting deleted. And I was tired,
and just took a big simple hammer to it of reverting all the modeling
refactors.
Here, we reintroduce *some* of them: the biology ones before the item
bug. And tests still pass, and in fact I can un-pending some of them!
I might also try to reapply the change where we extract it all into a
new file, but without the item parts.
```shell
git cherry-pick --no-commit 13ceec8fcc
git cherry-pick --no-commit f81415d327
git cherry-pick --no-commit c03e7446e3
git cherry-pick --no-commit 52ca41dbff
```
Also, while we're here! To restore the lost data, I:
1. Downloaded this scheduled public data backup, which was taken
thankfully the day before we updated modeling code!
https://impress.openneo.net/public-data/2024-11-03T08_15_02Z-scheduled.sql.gz
2. Trimmed it just to the section about the `parents_swf_assets` table:
dropping it, then rebuilding it from scratch.
3. Ran this modified backup SQL dump on the production server.
4. Ran the code from `db/migrate/20241001052510_add_cached_fields_to_items.rb`
to bring items' cached fields back into the correct state.
I also had to fix some errors in the item data that prevented some
items from passing the latest validations:
```rb
Item.where(rarity: "").update_all(rarity: "???")
Item.where(description: "").update_all(description: "???")
Item.where(zones_restrict: "").update_all(zones_restrict: "0000000000000000000000000000000000000000000000000000")
```
Because we ended up with such a big error, and it doesn't have an easy
fix, I'm wrapping up today by reverting the entire set of refactors
we've done lately, so modeling in production can continue while we
improve this code further over time.
I generated this commit by hand-picking the refactor-y commits
recently, running `git revert --no-commit <hash>` in reverse order,
then manually updating `pet_spec.rb` to reflect the state of the code:
passing the most important behavioral tests, but no longer passing the
tests for one of the kinds of annoyances I *did* fix in the new code.
```shell
git revert --no-commit 48c1a58df9
git revert --no-commit 42e7eabdd8
git revert --no-commit d82c7f817a
git revert --no-commit 5264947608
git revert --no-commit 90407403ba
git revert --no-commit 242b85470d
git revert --no-commit 9eaee4a2d4
git revert --no-commit 52ca41dbff
git revert --no-commit c03e7446e3
git revert --no-commit f81415d327
git revert --no-commit 13ceec8fcc
```