I don't link to this in the footer or anything yet, but TNT asked for
it as part of the NeoPass setup, so I currently just redirect to the
outdated Impress 2020 privacy policy. (It's outdated in the sense that
we share *less* data with our third-party services now, e.g. we moved
our analytics and error tracking onto our own machines!)
Ah right, I went and checked the Devise source code, and the default
implementation for `password_required?` is a bit trickier than I
expected:
```ruby
def password_required?
  !persisted? || !password.nil? || !password_confirmation.nil?
end
```
Looks like `super` does a good enough job here, though! (I was actually
kinda surprised, since this isn't a literal subclass situation, but it
turns out the `devise` macro mixes modules into the model, so `super`
finds Devise's default `password_required?` up the ancestor chain. It
does what I expect, so, great!)
So now, we require the password if 1) Devise doesn't see a UI reason
not to, *and* 2) the user isn't using OmniAuth (i.e. NeoPass).
This had caused a bug where it was impossible to use the Settings page
*without* changing your password! (The form says it's okay to leave it
blank, which stopped being true! But now it's fixed!)
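Concretely, the override ends up being something like this (just a sketch! the `provider` check is my stand-in for "is this an OmniAuth/NeoPass user", not necessarily how we actually detect it):

```ruby
# app/models/user.rb (sketch): require a password only if Devise's default
# logic says so AND the account isn't an OmniAuth (NeoPass) one.
def password_required?
  super && provider.blank?
end
```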
Right, I didn't totally connect the dots that there are some OpenID
features in the mix here for how we expect to identify the user once
they authenticate. It requires looking up the provider's public key,
and validating the JWT they sent us. This gem does all that for us!
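For a sense of what that validation involves, here's a rough sketch of doing it by hand with the `jwt` gem (the issuer, client ID, and variable names here are all placeholders; the gem we're actually using wraps this up for us):

```ruby
require "jwt"

# Sketch only: check the id_token's signature against the provider's
# published keys (JWKS), plus its issuer and audience claims.
jwks = JWT::JWK::Set.new(provider_jwks_hash) # fetched from the provider's JWKS endpoint
payload, _header = JWT.decode(
  id_token, nil, true,
  algorithms: ["RS256"],
  jwks: jwks,
  verify_iss: true, iss: "https://example-neopass-issuer",
  verify_aud: true, aud: "our-client-id",
)
payload["sub"] # the stable ID we'd use to find or create the matching account
```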
I don't actually know what a real NeoPass `id_token` looks like yet?
But I'll fill in some placeholder stuff for now, and use that for
initializing the account!
In this change, we wire up a new NeoPass OAuth2 strategy for OmniAuth,
and hook up the "Log in with NeoPass" button to use it!
The authentication currently fails with `invalid_credentials`, and
shows the `owo` response we hardcoded into the NeoPass server's token
response. We need to finally follow up on the little `TODO` written in
there!
If you pass `?neopass=1` (or a secret value in production), you can see
the "Log in with NeoPass" button, which currently takes you to
OmniAuth's "developer" login page, where you can specify a name and
email and be redirected back. (All placeholder UI!)
We're gonna strip the whole developer strategy out pretty fast and
replace it with one that uses our NeoPass test server. This is just me
checking my understanding of the wiring!
This is setting us up for NeoPass, but first we're just gonna try stuff
with the "developer" strategy that's built in for testing, rather than
using the NeoPass dev server!
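For reference, the temporary wiring is roughly this (a sketch; I'm assuming the usual Devise + OmniAuth hookup, and the `fields` are just illustrative):

```ruby
# config/initializers/devise.rb (sketch)
Devise.setup do |config|
  # OmniAuth's built-in "developer" strategy just shows a form asking for a
  # name and email, then redirects back. No real authentication, testing only!
  config.omniauth :developer, fields: [:name, :email], uid_field: :email
end
```

(With Devise, the model would also declare `:omniauthable` so the callback routes get generated.)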
Turbo expects this to exist, for features we don't use! I didn't bother
to check if Turbo has a way to turn this off too, I just said fine lol
Turbo worked in development without this, because Rails only autoloads
files as they're needed there; whereas in production, eager loading
everything at boot blocked the app from starting up. But
you can see the error in development by running `rails zeitwerk:check`,
which attempts to preload everything to make sure it all works, and you
get this:
```
NameError: uninitialized constant ActionCable (NameError)
class Turbo::StreamsChannel < ActionCable::Channel::Base
```
Fixed now!
Someone requested this in Discord, and I figured why not! I'm still
planning to move stuff away from Impress 2020 over time, I just figure
we may as well have them more linked while this is still The Reality.
I'm starting to port over the functionality that was previously just,
me running `yarn db:export:public-data` in `impress-2020` and
committing it to Git LFS every time.
My immediate motivation is that the `impress-2020` git repository is
getting weirdly large?? Idk how these 40MB files have blown up to a
solid 16GB of Git LFS data (we don't have THAT many!!!), but I guess
there's something about Git LFS's architecture and disk usage that I'm
not understanding.
So, let's move to a simpler system in which we don't bind the public
data to the codebase, but instead just regularly dump it in production
and make it available for download.
This change adds the `rails public_data:commit` task, which, when run
in production, will make the latest dump available at
`https://impress.openneo.net/public-data/latest.sql.gz`, and will also
store a running log of previous dumps, viewable at
`https://impress.openneo.net/public-data/`.
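The task itself is shaped roughly like this (a simplified sketch; the real table list, paths, and shell invocation will differ):

```ruby
# lib/tasks/public_data.rake (sketch)
namespace :public_data do
  desc "Dump the public data tables, and publish them for download"
  task commit: :environment do
    dest_dir = Rails.root / "public" / "public-data"
    FileUtils.mkdir_p(dest_dir)

    filename = Time.now.utc.strftime("%Y-%m-%dT%H%M%SZ") + ".sql.gz"
    db = ActiveRecord::Base.connection_db_config.configuration_hash
    tables = %w[colors species zones items swf_assets] # illustrative subset!

    # Dump just the public tables, gzip them, then point `latest.sql.gz` at
    # the newest dump. The directory listing doubles as the running log.
    sh "mysqldump -u#{db[:username]} -p#{db[:password]} #{db[:database]} " \
       "#{tables.join(" ")} | gzip > #{dest_dir + filename}"
    FileUtils.ln_sf(dest_dir + filename, dest_dir + "latest.sql.gz")
  end
end
```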
Things left to do:
1. Create a `rails public_data:pull` task, to download `latest.sql.gz`
and import it into the local development database.
2. Set up a cron job to dump this out regularly, idk maybe weekly? That
will grow, but not very fast (about 2GB per year), and we can add
logic to rotate out old ones if it starts to grow too far. (If we
wanted to get really intricate, we could do like, daily for the past
week, then weekly for the past 3 months, then monthly for the past
year, idk. There must be tools that do this!)
A few pieces here:
1. Convert all tables to `utf8mb4`+`utf8mb4_unicode_520_ci` strings.
2. Configure that as the server's default.
3. Configure the Rails database connection to use this encoding too.
Came together pretty well, whew! This has been a LONG time coming,
`latin1` is NOT a good charset for the year 2024!
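For reference, the conversion step is basically a bunch of `ALTER TABLE`s. A rough sketch (the table list and migration version here are illustrative, not the real ones):

```ruby
class ConvertTablesToUtf8mb4 < ActiveRecord::Migration[7.1]
  def up
    %w[items users pet_types].each do |table| # (illustrative subset!)
      execute <<~SQL
        ALTER TABLE #{table}
          CONVERT TO CHARACTER SET utf8mb4
          COLLATE utf8mb4_unicode_520_ci
      SQL
    end
  end
end
```

And on the Rails side, the connection config gets `encoding: utf8mb4` and `collation: utf8mb4_unicode_520_ci` in `database.yml`.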
Trying something new and lightweight and more data-controlled!
I also turned down the sample rate for the performance traces feature,
because we hardly use it right now, and Sentry is always getting mad at
us for vastly exceeding our free plan quota—and like, we're not on
Sentry anymore so I imagine we have more wiggle room with that, but I
figure let's turn down the volume anyway, until we decide we want it.
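For the record, the knob in question is the usual one in the Sentry SDK initializer (assuming we keep using the sentry-ruby SDK pointed at our self-hosted thing; the exact number here is made up):

```ruby
# config/initializers/sentry.rb (sketch)
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]
  config.traces_sample_rate = 0.05 # way down; we barely look at traces right now
end
```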
Previously, passing in `fits:blue` would cause a crash, because the
`species_name` part of the split would be `nil`, oops!
In this change, we use a regex for more explicitness about the pattern
we're trying to match. We'll also add more cases next! (You'll note the
error message mentions `fits:nostalgic-faerie-draik`, which isn't
actually possible yet, but will be!)
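To illustrate the shape of it (a simplified sketch, not the actual pattern or error handling in the code):

```ruby
# Sketch: match `fits:<color>-<species>` explicitly, instead of blindly
# splitting on "-" and hoping both halves exist.
FITS_PATTERN = /\A(?<color>[a-z]+)-(?<species>[a-z]+)\z/

def parse_fits(value)
  match = FITS_PATTERN.match(value)
  raise ArgumentError, "We don't understand fits:#{value}" if match.nil?
  [match[:color], match[:species]]
end

parse_fits("blue-acara") # => ["blue", "acara"]
parse_fits("blue")       # raises a clear error, instead of crashing on a nil species
```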
I built this API endpoint in anticipation of a change I never actually
made! I'll just remove it for now, leaning toward cleanuppery over
holding onto something I'm not sure about.
The Neopets Media Archive is a service that mirrors `images.neopets.com`
over time! Right now we're starting by just loading manifests, and
using them to replace the hacks we used for determining the Alt Style
PNG and SVG URLs; but with time, I want to load *all* customization
media files, to have our own secondary file source that isn't dependent
on Neopets to always be up.
Impress 2020 already caches manifest files, but this strategy is
different in two ways:
1. We're using the filesystem rather than a database column. (That is,
manifest data is kinda duplicated in the system right now!) This is
because I intend to go in a more file-y way long-term anyway, to
load more than just the manifests.
2. Impress 2020 guesses at the manifest URLs by pattern, and reloads
them on a regular basis. Instead, we use the modeling system: when
TNT changes the URL of a manifest by appending a new `?v=` query
string to it, this system will consider it a new URL, and will load
the new copy accordingly.
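Here's a hand-wavy sketch of the file-y approach, just to give a sense of the shape (the paths and method names are made up, not the actual implementation):

```ruby
require "json"
require "net/http"

# Sketch: store each manifest on disk under a path derived from its
# images.neopets.com URL, and only fetch it if we don't already have it.
def archive_path_for(url)
  uri = URI.parse(url)
  Rails.root / "tmp" / "neopets-media-archive" / uri.host / uri.path.delete_prefix("/")
end

def load_manifest(url)
  path = archive_path_for(url)
  return JSON.parse(path.read) if path.exist? # already archived locally

  content = Net::HTTP.get(URI.parse(url)) # otherwise, fetch and save it
  path.dirname.mkpath
  path.write(content)
  JSON.parse(content)
end
```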
Fun fact, I actually have been prototyping some of this stuff in a side
project I'd named `impress-media-server`! It's a little Sinatra app
that indeed *does* save all the files needed for customization, and can
generate lightweight lil preview iframes and images pretty easily. I
had initially been planning this as a separate service, but after
thinking over the arch a bit, I think it'll go smoother to just give
the main app all the same access and awareness—and I wrote it all in
Ruby and plain HTML/JS/CSS, so it should be pretty easy to port over
bit-by-bit!
Anyway, only Alt Styles use this for now, but my motivation is to be
able to use more-correct asset URL logic to be able to finally swap
over wardrobe-2020's item search to impress.openneo.net's item search
API endpoint—which will get "Items You Own" searches working again, and
whittle down one of the last big things Impress 2020 can do that the
main app can't. Let's see how it goes!
I've moved the support secret into the encrypted credentials file, and
moved the origin into a top-level custom config value in the
environment files, with different defaults per environment but still
the ability to override it. (I don't use this, but it feels polite to
not actually *demand* that people use port 4000, y'know?)
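Concretely, it ends up looking something like this (a sketch; the config name is my guess):

```ruby
# config/environments/development.rb (sketch)
config.impress_2020_origin =
  ENV.fetch("IMPRESS_2020_ORIGIN", "http://localhost:4000")
```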
There's a bit happening behind the scenes of this change. Previously,
we kept a `SECRET_TOKEN` environment variable in `production.env`, and
used a `secret_token.rb` initializer to wire it up as the
`secret_key_base`.
In this change, we move to Rails's new-ish (two years old :p) encrypted
credentials system. Now, we set a `RAILS_MASTER_KEY` environment
variable in the deployed `production.env` instead (and in our local
`.env.production` in the project root for managing it), and we can run
`rails credentials:edit` to open the encrypted file in a text editor.
Inside, the content is just:
```yml
secret_key_base: "<OUR_SECRET_KEY>"
```
This indirection doesn't exactly do much for us functionally; it's just
the more standard way of achieving what our `secret_token.rb` situation
was achieving.
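For the record, anything in that file is readable through the standard credentials API:

```ruby
Rails.application.credentials.secret_key_base # => "<OUR_SECRET_KEY>"
```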
We could also migrate other secrets into there, and I just might! That
would simplify duplication between `/deploy/files/production.env` and
`/.env.production`, at any rate! The main notable one is
`MATCHU_EMAIL_PASSWORD` for sending auth emails from
`matchu@openneo.net` (and there's also a Stripe token that we don't
actually use in the app these days, those codepaths are old bones). Oh
and there's also the `IMPRESS_2020_SUPPORT_SECRET`!
Anyway, the motivation for this was to remove the warning when starting
the app that Devise is trying to use the deprecated
`Rails.application.secrets` method. I was expecting to have to do
[the workaround shared here](https://github.com/heartcombo/devise/issues/5644#issuecomment-1804626431),
but it turns out whatever default behavior Devise does under the hood
is happy enough with our new decision to use the credentials file, and
the deprecation warning is gone! Ok neat!
Mostly this is just me testing out what it would look like to
modularize the app more… I've noticed that some concerns, like
fundraising, are just not relevant to most of the app, and being able
to lock them away inside subfolders feels like it'll help tidy up
long folder lists.
Notably, I haven't touched the models case yet, because I worry that
might be a bit more complex, whereas everything else seems pretty
well-isolated? We'll try it out!
I'm not really using this lately, and it _only_ creates vulnerability
surface area when not in use; so, while I'm pretty sure we locked this
down correctly to only admin accounts, I'm disabling it just as good
practice. We can add this back later if we need it again!
A little architecture trick here! DTI 2020 authorizes support staff
requests by means of a secret token, instead of user account stuff. And
our support tools still all call DTI 2020 APIs.
So here, we bridge the gap: we copy DTI 2020's support secret to this
app's environment variables (I needed to update
`deploy/files/production.env` and run `bin/deploy:setup` for this!),
then users with the new `support_secret` flag have it added to their
HTML documents in the meta tags. Then, the JS reads the meta tag.
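The shape of it is roughly this (a sketch; the helper name is made up, and the meta tag name is just illustrative):

```ruby
# A view helper sketch: emit the secret as a meta tag, but only for users
# with the support_secret flag.
def impress_2020_support_secret_meta_tag(user)
  return unless user&.support_secret?
  tag.meta name: "impress-2020-support-secret",
           content: ENV["IMPRESS_2020_SUPPORT_SECRET"]
end
```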
I also fixed an issue in the `deploy/setup.yml` playbook, where I had
temporarily commented some stuff out to skip steps one time, and forgot
to uncomment them after oops lol!
To activate this, I created a `.env.development` file in my project
root, with the following content:
```env
IMPRESS_2020_ORIGIN=http://localhost:4000
```
Then, I started impress-2020 with `yarn dev --port=4000`.
Now, the app loads from there, hooray!! It even fixes that obnoxious
pet state ID bug that happens when you run against the production db lol
Been wanting this for a while in theory, gonna actually do it now!
The motivation is that I want to turn up the timeout for loading pets,
because the Neopets endpoints are slower today with the NC UC release -
but I can already predict that under our current architecture that will
be a problem, because it'll block up our request queue!
Falcon uses Ruby's relatively-new async system to *not* have requests
block on upstream requests, and my understanding is that this behavior
is plug-and-play. Let's see how it goes!
Oh yeah, a long-standing limitation. Good thing we're better at stuff
now!
This is also probably the real cause of the weird number of slight
discrepancies between main DTI and DTI 2020 when I eyeballed stuff, lol.
Oh, well, that and the missing default-lists. A bit messy!
In impress-2020, we do a big slow query to figure out which users have
been active in trades recently. Now, we cache that timestamp on the
User model.
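The shape of the caching is roughly this (a sketch; the column and callback names are my stand-ins, and the real thing hooks into more models than just this one):

```ruby
# Sketch: whenever a trade-relevant record changes, bump the cached timestamp
# on its owner.
class ClosetHanger < ApplicationRecord
  belongs_to :user
  after_save :touch_user_trade_activity

  private

  def touch_user_trade_activity
    user.touch(:last_trade_activity_at)
  end
end
```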
This won't have any immediate effect; it's to clear the way for Classic
DTI to receive the better trade ratios feature people like from 2020.
I also added some unit testing infra because I finally wanted it! for
all the ways you can trigger this timestamp lol
Note too that this is a bit of an unusually complex migration, but my
hope is that the batching and query structure and such helps it run
surprisingly fast! 🤞
So this was a slightly wrong error message; what was happening was:
1. Trying to load the image hash for this pet, by looking it up at
https://pets.neopets.com/cpn/PET_NAME/1/1.png and seeing what URL it
redirects to.
2. But pets.neopets.com was rejecting our User-Agent string, which
would've been just "Ruby", since we hadn't set it otherwise. I guess
that's an explicitly banned string?
I also found that the kind of more-helpful User-Agent string I like to
write was being rejected, and I could only get it to accept something
very simple? So that's what we're using now, I guess!!
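Roughly, the lookup looks like this now (a sketch; the exact User-Agent value and HTTP client details are assumptions):

```ruby
require "net/http"

# Ask for the pet's image and see where it redirects, sending a simple
# User-Agent string that pets.neopets.com will actually accept.
uri = URI("https://pets.neopets.com/cpn/#{pet_name}/1/1.png")
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  request = Net::HTTP::Get.new(uri)
  request["User-Agent"] = "DTI" # fancier, more-helpful strings kept getting rejected!
  http.request(request)
end
redirect_url = response["Location"] # the image hash is embedded in this URL
```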