This wasn't causing big problems, because we made resilient choices in a
lot of little places… but it's confusing as a potential state, and the
Styles chooser wasn't selecting the "Default" option after you switch,
which is what tipped us off!
Yay, it works(*)! But there are two major missing pieces:
- Outfit saving doesn't persist it at all
- Item compatibility is unaffected: items will still appear in search
and in the preview, even when they don't fit anymore.
To help with space, I'm just showing the word "Nostalgic" (or "???" if
it's from a series we don't recognize; the recognized series are
hardcoded by ID), and trusting that from context it will be obvious that
it's the "Nostalgic Faerie" case or whatever. (Moreover, in both the
button and the select, we're omitting the species name, for similar
reasons!)
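For reference, the labeling logic is roughly this shape (a minimal
sketch; the function and field names are hypothetical, and the real
mapping of style IDs to series words lives in the component):

```js
// Sketch only: a hardcoded map from alt style ID to its series word.
// (Real IDs omitted; this just shows the shape of the logic.)
const SERIES_WORD_BY_STYLE_ID = new Map([
  // [styleId, "Nostalgic"], …
]);

function shortStyleLabel(altStyle) {
  // Show just the series word ("Nostalgic"), or "???" if we don't
  // recognize the series, and trust context to make the full name
  // ("Nostalgic Faerie", etc) obvious. We omit the species name, too.
  return SERIES_WORD_BY_STYLE_ID.get(altStyle.id) ?? "???";
}
```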
Note that this _still_ doesn't actually apply the style to the outfit
whatsoever; this is all just local state as we continue to play with UI
concepts. Actually applying it is probably next! (Though there are a
couple more UI things I want to do first, like some affordances to
clarify that a Style is applied and that Expression changes won't work.)
Oh right, React Query's API is slightly different; fixed! (Previously,
this would cause the PosePicker to show before all the data was ready,
so alt styles would sometimes pop in after the popover was already
open.)
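For posterity, the shape of the fix is roughly this (a sketch using
React Query's `isLoading` flag; the import path depends on the React
Query version, and the query keys and fetchers here are hypothetical
stand-ins):

```js
import { useQuery } from "@tanstack/react-query";

// Hypothetical fetchers, just to keep the sketch self-contained.
const loadPoses = (speciesId, colorId) =>
  fetch(`/api/poses?species=${speciesId}&color=${colorId}`).then((r) => r.json());
const loadAltStyles = (speciesId) =>
  fetch(`/api/alt-styles?species=${speciesId}`).then((r) => r.json());

function usePosePickerData(speciesId, colorId) {
  const poses = useQuery({
    queryKey: ["poses", speciesId, colorId],
    queryFn: () => loadPoses(speciesId, colorId),
  });
  const altStyles = useQuery({
    queryKey: ["altStyles", speciesId],
    queryFn: () => loadAltStyles(speciesId),
  });

  // Only report "ready" once *all* the queries have finished, so alt
  // styles can't pop in after the popover is already open.
  return {
    isLoading: poses.isLoading || altStyles.isLoading,
    poses: poses.data,
    altStyles: altStyles.data,
  };
}
```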
On small screens, the PosePicker opens down, and we put the tabs on the
top, to be near the button.
On large screens, the PosePicker opens up, and we put the tabs on the
bottom, to be near the button.
Previously, we always set `placement="bottom-end"`, which behaved as
written on small screens; on large screens, there wasn't space to open
downward, so it would open upward instead.
Now, we set the placement explicitly based on a media breakpoint, and
we change the `flexDirection` of the tabs container on the same media
breakpoint.
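In code, that looks roughly like this (a sketch assuming a Chakra-style
`Popover` with a `placement` prop and the `useBreakpointValue` hook; the
actual PosePicker structure and breakpoint differ):

```jsx
import {
  Button,
  Flex,
  Popover,
  PopoverContent,
  PopoverTrigger,
  useBreakpointValue,
} from "@chakra-ui/react";

function PosePickerPopover({ tabs, body }) {
  // Small screens: open downward, tabs on top (near the button).
  // Large screens: open upward, tabs on the bottom (near the button).
  // (The `lg` breakpoint is a stand-in for whatever breakpoint we use.)
  const placement = useBreakpointValue({ base: "bottom-end", lg: "top-end" });
  const flexDirection = useBreakpointValue({ base: "column", lg: "column-reverse" });

  return (
    <Popover placement={placement}>
      <PopoverTrigger>
        <Button>Pose</Button>
      </PopoverTrigger>
      <PopoverContent>
        <Flex direction={flexDirection}>
          {tabs}
          {body}
        </Flex>
      </PopoverContent>
    </Popover>
  );
}
```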
I'm playing with using text to call more attention to this button, but
I'm not altogether pleased with the design yet. I'll leave it there for
me and Support users, but hide it for most people until we've got a
more complete concept.
That way, I can stop being on a branch and work on main, and deploy
stuff to preview it live, without having to share it with everyone just
yet! (This was the motivation for finally adding Support tooling to main
DTI lol!)
A little architecture trick here! DTI 2020 authorizes support staff
requests by means of a secret token, instead of user account stuff. And
our support tools still all call DTI 2020 APIs.
So here, we bridge the gap: we copy DTI 2020's support secret into this
app's environment variables (I needed to update
`deploy/files/production.env` and run `bin/deploy:setup` for this!), and
then users with the new `support_secret` flag get it added to their HTML
documents in the meta tags. The JS then reads the meta tag.
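On the JS side, that can be as simple as reading the meta tag and
attaching the secret to DTI 2020 requests, something like this sketch
(the meta tag name, header name, and origin value here are stand-ins,
not necessarily the real ones):

```js
// Hypothetical names throughout; the real meta tag, header, and origin
// may differ.
const IMPRESS_2020_ORIGIN = "https://impress-2020.example.com";

function getSupportSecret() {
  // The server only includes this meta tag for users with the
  // `support_secret` flag, so for everyone else this returns undefined.
  return document
    .querySelector('meta[name="impress-2020-support-secret"]')
    ?.getAttribute("content");
}

async function callImpress2020Api(path, options = {}) {
  const headers = { ...options.headers };
  const supportSecret = getSupportSecret();
  if (supportSecret) {
    // DTI 2020 authorizes support staff requests with this shared secret.
    headers["DTI-Support-Secret"] = supportSecret;
  }
  return fetch(`${IMPRESS_2020_ORIGIN}${path}`, { ...options, headers });
}
```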
I also fixed an issue in the `deploy/setup.yml` playbook, where I had
temporarily commented some stuff out to skip steps one time, and forgot
to uncomment them afterward, oops lol!
To activate this, I created a `.env.development` file in my project
root, with the following content:
```env
IMPRESS_2020_ORIGIN=http://localhost:4000
```
Then, I started impress-2020 with `yarn dev --port=4000`.
Now, the app loads from there, hooray!! It even fixes that obnoxious
pet state ID bug that happens when you run against the production db, lol.
Something in the Rails loader doesn't like that I have both a gem and
a lib folder named `RocketAMF`, I think? It'll often work for the first
pet load request, then on subsequent ones it says `Envelope` is not
defined, I'm guessing because the loader scrapped the gem's module in
favor of mine?
Idk, let's just simplify all this by making our own module. I feel like
this old lib could use an overhaul and simplification anyway, but this
will do for now!
Idk what it's doing, but what I do know is that the Falcon process is
weirdly slow to restart on deploy. My hope was that this would fix it,
since the supervisor was maybe blocking process exits? Idk, something to
look into. I don't think this fixed it, but I also don't think it broke
anything, and I think systemd is doing a fine job of monitoring already?
idk