Because we ended up with such a big error, and it doesn't have an easy
fix, I'm wrapping up today by reverting the entire set of refactors
we've done lately, so modeling in production can continue while we
improve this code further over time.
I generated this commit by hand-picking the recent refactor-y commits,
running `git revert --no-commit <hash>` on each in reverse order, then
manually updating `pet_spec.rb` to reflect the state of the code: it
still passes the most important behavioral tests, but no longer covers
one of the annoyances I *did* fix in the new code.
```shell
git revert --no-commit 48c1a58df9
git revert --no-commit 42e7eabdd8
git revert --no-commit d82c7f817a
git revert --no-commit 5264947608
git revert --no-commit 90407403ba
git revert --no-commit 242b85470d
git revert --no-commit 9eaee4a2d4
git revert --no-commit 52ca41dbff
git revert --no-commit c03e7446e3
git revert --no-commit f81415d327
git revert --no-commit 13ceec8fcc
```
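(To finish up, since each revert was staged with `--no-commit`, the
whole set gets wrapped into a single commit; the message here is
hypothetical:)
```shell
# Commit all the staged reverts as one commit.
git commit -m "Revert recent modeling refactors"
```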
Hmm, I think I made a mistake on `modeling_snapshot.rb:69`: I'm
assigning the *entire* `item.swf_assets` relation to *just* the assets
from this new modeling, which breaks all the item's other connections.
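For illustration, the shape of the bug is something like this (the
variable name is hypothetical, and the real code in
`modeling_snapshot.rb` differs):
```ruby
# Buggy: assigning the relation *replaces* all of the item's asset
# links, deleting the ones that belong to other pet types.
item.swf_assets = assets_for_new_model

# Intended: append the new model's assets, keeping the existing links.
item.swf_assets << assets_for_new_model
```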
First, I'm disabling modeling. Then, I'll restore a backup. Then, I'll
write tests for that case, and fix it up!
Just a bit more clarity in the grouping! I'm also thinking about
extracting the modeling APIs into a service file like this, in which
case I think this grouping would help clarify what it is.
This hasn't worked for a while anyway! Let's remove the bits of code
where we deal with it, and the database field that signals it. (We also
make a corresponding change in Impress 2020, so it doesn't crash trying
to query based on the `prank` column.)
I also ran this snippet to clear out all the Nebula stuff in the db:
```rb
Color.transaction do
  nebula = Color.where(prank: true).find_by_name("Nebula")
  nebula.pet_types.includes(pet_states: :swf_assets).each do |pet_type|
    pet_type.pet_states.each do |pet_state|
      pet_state.parent_swf_asset_relationships.each do |psa|
        psa.swf_asset.destroy!
        psa.destroy!
      end
      pet_state.destroy!
    end
    pet_type.destroy!
  end
  nebula.destroy!
end
```
I'm experimenting with a Rainbow Pool-ish UI, mainly as a support tool
for exploring and labeling poses—but one we can probably just show to
real users too!
Right now, I just use pet type images as a placeholder, and I polished
up some of the `pet_type_image` API. But we're probably gonna drop
these for a full outfit viewer, now that I think of it.
Uhh I guess when I half-removed a feature from the translations list (I
don't remember when?), it left two different dictionaries labeled
`neopets_page_import_tasks.new`, and the second one overwrote the
first. Oops! Yikes!
By removing these, the translations *above* them actually get to apply
to the page correctly. Before this change, the page just showed the
translation keys as placeholders, womp womp.
Just for consistency with the other features we're not using, we turn
off ActionCable when loading the app. I just removed
`config/cable.yml`, so I figure, let's not load a feature without the
config file it expects! (even though that didn't seem to bother it)
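(For reference, the standard Rails way to do this is to require each
framework's railtie individually instead of `rails/all`; a sketch,
which may not match our `application.rb` exactly:)
```ruby
# config/application.rb
# Require only the frameworks we use, instead of `require "rails/all"`.
require "rails"
require "active_record/railtie"
require "action_controller/railtie"
require "action_view/railtie"
require "action_mailer/railtie"
# require "action_cable/engine"    # disabled: no config/cable.yml anymore
# require "active_storage/engine"  # disabled: not using ActiveStorage
```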
We're not using the ActionCable or ActiveStorage Rails features in this
app, so we can clear out these default config files. If we need them
later, it's not hard to re-find / re-generate them!
Our production data now contains basic hashes for all species/color
combinations, and it's easy enough for a dev copy of the site to get
them too by running `rails public_data:pull`. So, I think it's time to
retire this hardcoded set, and get one more file out of our codebase!
The silly motivation is that I wanted to remove `.prettierignore`,
which just exists to omit that one folder from `npm run format`. But it
also seems like this is the standard place to put them—a standard
created long after we first set this up lol
I forget what this was for, I think part of it was for managing item
names in different languages, and the "private" locale thing was
probably for WIP locales? But yeah, not used, delete!
Okay cool, so this was an error that was happening *only* when building
assets for production: Sass's CSS minifier isn't familiar with all
modern CSS syntax (I think that's the issue?), and so it errors on
things that are actually totally okay.
I had previously worked around this in `swf_assets/show.css` with an
equivalent syntax that Sass recognized. But in this latest case with
the new `fonts.css.erb`, it was upset about the `src` list for the
fonts, and I don't know a workaround for that.
So, let's just disable Sass's CSS minification for now. I imagine the
difference isn't huge when CSS compresses just fine with gzip anyway?
(Most of what you can "minify" in CSS is whitespace, and that largely
seems silly to me when gzip is running.)
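(Concretely, this is a one-line change, assuming the usual sassc-rails
setup where `css_compressor` defaults to `:sass`:)
```ruby
# config/environments/production.rb
# Turn off Sass's CSS minification; gzip still compresses the output.
config.assets.css_compressor = nil
```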
I think this has just been broken for a long time? And I don't think
it's very useful 15 years later: our problem *used* to be giant gaps
in our library, but that isn't really our data problem anymore.
Not using this on the item page preview yet, but we will!
I like this approach over e.g. a web component specifically for the
sandboxing: while I don't exactly *distrust* JS that we're loading from
Neopets.com, I don't like the idea of *any* part of the site that
executes arbitrary JS unsafely at runtime, even if we theoretically
trust where it theoretically came from. I don't want any failure
upstream to have effects on us!
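The rough idea, as a hypothetical sketch (`swf_asset_path` and the
attributes here are illustrative, not the actual view code): the
embedded page gets `sandbox="allow-scripts"` but *not*
`allow-same-origin`, so its scripts run in an opaque origin that can't
touch our cookies or DOM.
```erb
<iframe
  src="<%= swf_asset_path(@swf_asset) %>"
  sandbox="allow-scripts"
></iframe>
```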
I copied basically all of the JS from a related project
`impress-media-server` that I had spun up at one point, to investigate
similar embed techniques. Easy peasy drop-in-squeezy!
This doesn't affect a lot right now, it was previously defaulting to
UTC I think? And I've confirmed that, while timestamps are stored in
the database as UTC, they're not interpreted any differently with this
setting. (Or, rather, it's loaded as a `DateTime` object for the same
moment in time, but in the NST time zone. Good!)
But this feels like a more useful default for displaying things for
development etc, and moreover, I'm working on some logic for things
like "when do limited-time Dyeworks items expire exactly?", and that
logic is made *much* simpler if we can just compare dates in NST by
default, rather than fudging around with the zones on them and
figuring out the correct midnight!
(Although, as I type this, I think I maybe have thought of an easier
way to do it? So maybe this change won't actually be necessary for
that, but it still feels like a more sensible default for us
regardless!)
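(For reference, the setting in question is the Rails application time
zone; a sketch, assuming NST maps to US Pacific time as usual:)
```ruby
# config/application.rb
# NST (Neopets Standard Time) tracks US Pacific time.
config.time_zone = "Pacific Time (US & Canada)"
```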
There have been some stability issues with our self-hosted
health.openneo.net service lately. I just upgraded to a new version
today, so my hope is that today's weird slowness is that it's doing the
expected GlitchTip 4.0 upgrade data migration process designed to save
disk space, and hopefully it will take care of itself by the end of the
day?
But just to help out, I'm disabling this feature to remove pressure on
the GlitchTip service. We don't actually use performance trace analysis
irl, and so it's just taking up disk space and processing power. If we
end up wanting to dig into something in the future, we can turn it back
on! (My hope had been that a sample rate of 5% was small enough to be a
good hedge to have "enough" data just in case something comes up, but
idk, I don't actually have a great sense of how much pressure even 5%
of our total volume is, and I'd rather just be certain it's out of the
picture altogether, at least while working on this.)
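(With the Sentry-compatible SDK we use for GlitchTip, this is a
one-line change; a sketch, assuming the usual initializer layout:)
```ruby
# config/initializers/sentry.rb
Sentry.init do |config|
  # Was 0.05 (5% of requests); 0.0 disables performance tracing.
  config.traces_sample_rate = 0.0
end
```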
This doesn't connect to anything yet, I'm just doing the beginnings of
loading NC Mall item data!
My intent is to run this regularly to keep our own NC info in the
database too, primarily for use in the Item Getting Guide. (Could be
useful to surface in other places too though!) This will help us split
items into those that can be one-click purchased with the NC Mall
integration, vs NC items that need to be acquired by other means.
TNT requested that we figure out ways to connect the dots between
people's intentions on DTI to their purchases in the NC Mall.
But rather than just slam ad links everywhere, our plan is to design an
actually useful feature about it: the "Item Getting Guide". It'll break
down items by how you can actually get them (NP economy, NC Mall,
retired NC, Dyeworks, etc), and we're planning some cute actions you
can take, like shortcuts for getting them onto trade wishlists or into
your NC Mall cart.
This is just a little demo version of the page, just breaking down
items specified in the URL into NC/NP/PB! Later we'll do a more granular
breakdown than this, with more info and actions—and we'll also like,
link to it at all, which isn't the case yet! (The main way we expect
people to get here is by a "Get these items" button we'll add to the
outfit editor, but there might be other paths, too.)
I'm doing some back-and-forth on the contract between me and TNT, and
they proposed this amendment to the copyright text in the Fan Site
Agreement. Implemented now!
Ah right, now that you no longer need to provide this secret value as a
query param or a cookie in order to see NeoPass stuff, we can safely
delete it! Goodbye! 👋
Yay, we got the API endpoint for this! The `linkage` scope is the key.
Rather than pulling back the specific fallback behavior we had written
for usernames before, which was slightly different and involved
appending `neopass` in there too (e.g. `matchu-neopass-1234`), I
figured let's just use a lot of the same logic, and just use the
preferred name as the base name. (I figure the `neopass` suffix isn't
that useful anyway, `matchu-1234` kinda looks better tbh! And it's all
fallback stuff that I expect serious users to replace, anyway.)
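(The shape of the new fallback is roughly this; the helper and field
names here are hypothetical, not the actual code:)
```ruby
# Build a fallback username from the NeoPass preferred name, appending
# random digits until we find one that's not taken.
def fallback_username(preferred_name)
  base = preferred_name.parameterize
  candidate = "#{base}-#{rand(1000..9999)}"
  candidate = "#{base}-#{rand(1000..9999)}" while User.exists?(name: candidate)
  candidate
end
```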
I'm getting ready to add handling for "what if you don't *have* a
current password??", so it seems like the right way to do that is to
just eject the controller and start customizing!
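("Ejecting" here means generating local copies of Devise's controllers
to customize, which is Devise's standard workflow; a sketch, with the
scope name assumed:)
```shell
rails generate devise:controllers users -c=registrations
```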
Hacky and inconvenient, but it works!
I want this primarily to enable me to live-debug what info we're
getting back in the auth token. In production right now, the flow with
NeoPass succeeds, but we fail to create the account, and my production
error logs say it's because the username field is too long. I had hoped
it would just be the Neopets username, but now that I've poked at
NeoPass itself a bit, I'm realizing it won't be that simple.
So, we'll use this to investigate!
Ahh okay, the NeoPass spec says to pass the `client_id` and
`client_secret` as POST form data when exchanging the code for the
access token, but the default behavior of our client is to pass it as
an `Authorization` header instead.
In this change, we set an option to change that behavior, and also add
a lot of comments about this and the other options!
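(In the `oauth2` gem this is the `auth_scheme` client option; a sketch
of just the relevant strategy setting, with everything else elided:)
```ruby
# Default is :basic_auth (the Authorization header); :request_body
# sends client_id/client_secret as POST form data instead.
option :client_options, {
  auth_scheme: :request_body,
}
```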
Wowie, it's starting to happen! :3
When you run this in production, though, you get back the auth failure
message, and the OmniAuth logs say the server returned the following:
> invalid_client: Client authentication failed (e.g., unknown client,
> no client authentication included, or unsupported authentication
> method). The OAuth 2.0 Client supports client authentication method
> 'client_secret_post', but method 'client_secret_basic' was requested.
> You must configure the OAuth 2.0 client's
> 'token_endpoint_auth_method' value to accept 'client_secret_basic'.
I'll add a fix for this in the next commit, with some explanations as
to why!
I don't link to this in the footer or anything yet, but TNT asked for
it as part of the NeoPass setup, so I currently just redirect to the
outdated Impress 2020 privacy policy. (It's outdated in the sense that
we share *less* data with our third party services now, e.g. we moved
our analytics and error tracking onto our own machines!)
Ah right, I went and checked the Devise source code, and the default
implementation for `password_required?` is a bit trickier than I
expected:
```ruby
def password_required?
  !persisted? || !password.nil? || !password_confirmation.nil?
end
```
Looks like `super` does a good enough job here, though! (I'm actually
kinda surprised, I wasn't sure how Ruby's `super` rules worked, and
this isn't a subclass thing—or maybe it is, maybe the `devise` method
adds a mixin? Idk! But it does what I expect, so, great!)
So now, we require the password if 1) Devise doesn't see a UI reason
not to, *and* 2) the user isn't using OmniAuth (i.e. NeoPass).
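(So the override is roughly this; `uses_omniauth?` is a stand-in name,
not the actual helper:)
```ruby
def password_required?
  super && !uses_omniauth?
end
```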
This had caused a bug where it was impossible to use the Settings page
*without* changing your password! (The form says it's okay to leave it
blank, which stopped being true! But now it's fixed!)
Right, I didn't totally connect the dots that there are some OpenID
features in the mix here for how we expect to identify the user once
they authenticate. It requires looking up the provider's public key,
and validating the JWT they sent us. This gem does all that for us!
I don't actually know what a real NeoPass `id_token` looks like yet?
But I'll fill in some placeholder stuff for now, and use that for
initializing the account!
In this change, we wire up a new NeoPass OAuth2 strategy for OmniAuth,
and hook up the "Log in with NeoPass" button to use it!
The authentication currently fails with `invalid_credentials`, and
shows the `owo` response we hardcoded into the NeoPass server's token
response. We need to finally follow up on the little `TODO` written in
there!
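(A sketch of the kind of wiring this involves; the class body, file
path, and env var names here are illustrative, and the real strategy
has more to it:)
```ruby
# app/lib/omniauth/strategies/neopass.rb (hypothetical path)
require "omniauth-oauth2"

module OmniAuth
  module Strategies
    # A minimal OmniAuth OAuth2 strategy subclass for NeoPass.
    class NeoPass < OmniAuth::Strategies::OAuth2
      option :name, :neopass
    end
  end
end

# config/initializers/devise.rb
config.omniauth :neopass, ENV["NEOPASS_CLIENT_ID"],
  ENV["NEOPASS_CLIENT_SECRET"]
```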