This helped me debug a thing in the upcoming change! It lets you drop a
`debugger` line into the app, then run `rdbg --attach` in another
terminal to get into a debug session. Neat!
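Roughly, the flow looks like this (a sketch with a made-up controller; depending on how the server is started, you may also need the `debug` gem's remote/open mode enabled):
```ruby
# Hypothetical controller, just to show where the `debugger` line goes.
# Assumes the `debug` gem is available, as it is by default in recent Rails.
class OutfitsController < ApplicationController
  def show
    debugger # pauses here; then `rdbg --attach` from another terminal
             # connects to the waiting session so you can poke at state

    @outfit = Outfit.find(params[:id])
  end
end
```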
The main thing is that I was getting "RequireNotFound" warnings for
`require 'rails_helper'`, because the LSP seems unaware of how RSpec
offers `spec/` as a root for requires.
I think the `require_relative` is clearer anyway; I'm decently
satisfied with it. And if I decide it's too ugly, we can try
something else in the Solargraph config or something sometime!
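For the record, the change is basically this (with a made-up spec path; the number of `..`s depends on how deeply the spec is nested):
```ruby
# spec/models/item_spec.rb (hypothetical file, for illustration)

# Before: relied on RSpec adding spec/ to the load path, which the language
# server doesn't know about, hence the "RequireNotFound" warning:
#   require 'rails_helper'

# After: resolve the path relative to this file, which the LSP can follow:
require_relative '../rails_helper'
```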
I only now thought through that I can scrape these instead of entering
them manually, similar to how we did our Rainbow Pool scraper… hooray!
I'm actually writing tests for stuff too, wowie!
This is a transitional gem to help with upgrading from old versions of
Rails: it provides a deprecated feature that Rails removed.
I audited and I *think* we only used it in one place, and that this one
place doesn't even use any of its functionality for styling or
scripting? So, begone!
No pressing reason, I'm just doing upgrades today, and noticed a new
version is out, and scrolled the patch notes and there are no obvious
breaking changes for my purposes, so. Up we go!
When playing with a Rainbow Pool syncing task, I noticed that error
handling wasn't working correctly for requests using `async-http`: if
the block raised an error, the `Sync` block would never return.
My suspicion is that this is because we were never reading or releasing
the response body.
In this change, I upgrade all the relevant gems for good measure, and
switch to using the response object yielded by the _block_, so we can
know it's being resource-managed correctly. Now, failures raise errors
as expected!
(I tested all these relevant service calls, too!)
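For reference, the shape of the new calls is roughly this (a sketch with a made-up URL, not one of the actual service calls):
```ruby
require "async"
require "async/http/internet"

Sync do
  internet = Async::HTTP::Internet.new

  # Passing a block means async-http closes the response for us when the
  # block exits, even if it raises. So failures now propagate out of the
  # Sync block instead of leaving an unread body hanging around.
  internet.get("https://images.neopets.com/some/manifest.json") do |response|
    raise "Unexpected status: #{response.status}" unless response.success?
    response.read
  end
ensure
  internet&.close
end
```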
For the gems, I mostly just ran `bundle update`. The exception was
`httparty`: the latest Ruby throws a deprecation warning about its use
of the `csv` stdlib, and the latest `httparty` version resolves that.
One other little thing: this is on my new Fedora workstation, and I had
to deal with a known bug where the `sassc` gem compiles a `libsass.so`
file, but saves it in the wrong place somehow.
Here's the known bug, and the comment that helped me:
https://github.com/sass/sassc-ruby/issues/146#issuecomment-2028974524
And here's what I ran to get it into the right place:
```shell
ln -s ~/.local/share/gem/ruby/3.3.0/extensions/aarch64-linux/3.3.0/sassc-2.4.0/sassc/libsass.so \
~/.local/share/gem/ruby/3.3.0/gems/sassc-2.4.0/lib/sassc/libsass.so
```
This thing about `libsass` isn't reflected in the code changes anywhere
in this commit! I'm just mentioning it so that it's literally written
down anywhere. (I did try other comments' advice to use an older
version of `sassc` first, but I ran into compilation errors, so figured
this machine-side hack was better than untangling that mess.)
This is a Ruby language server that integrates with my editor! Static
analysis of Ruby and Rails is pretty tricky, but it's working and I
think that's neat!!
Oh huh, I guess we used to use this for automated testing, but since
then I've moved the test database to just be in MySQL like everything
else, so I think we don't need this adapter anymore! Goodbye!
Right, I didn't totally connect the dots that there's some OpenID
features in the mix here for how we expect to identify the user once
they authenticate. It requires looking up the provider's public key,
and validating the JWT they sent us. This gem does all that for us!
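I won't pretend to know the gem's internals, but the check it handles for us is roughly what you'd do by hand with the `jwt` gem; here's a sketch, with placeholder URLs and client ID:
```ruby
require "jwt"
require "json"
require "net/http"

# Placeholder endpoints; the real NeoPass issuer and JWKS URL will differ.
issuer = "https://neopass.example.com"
jwks = JSON.parse(
  Net::HTTP.get(URI("#{issuer}/.well-known/jwks.json")),
  symbolize_names: true,
)

# `id_token` is the raw JWT string from the provider's token response.
# Verify its signature against the provider's published keys, plus the
# standard issuer and audience claims.
payload, _header = JWT.decode(
  id_token, nil, true,
  algorithms: ["RS256"],
  jwks: jwks,
  iss: issuer, verify_iss: true,
  aud: "our-neopass-client-id", verify_aud: true,
)
```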
I don't actually know what a real NeoPass `id_token` looks like yet?
But I'll fill in some placeholder stuff for now, and use that for
initializing the account!
In this change, we wire up a new NeoPass OAuth2 strategy for OmniAuth,
and hook up the "Log in with NeoPass" button to use it!
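The wiring is roughly this shape (a sketch; the strategy name, option names, and env vars are stand-ins rather than the literal code):
```ruby
# config/initializers/omniauth.rb (sketch)
Rails.application.config.middleware.use OmniAuth::Builder do
  # A custom OAuth2-based strategy for NeoPass; the real client options
  # point at the NeoPass dev server's authorize/token endpoints.
  provider :neopass,
    ENV["NEOPASS_CLIENT_ID"],
    ENV["NEOPASS_CLIENT_SECRET"],
    client_options: { site: "https://neopass-dev.example.com" }
end
```
The "Log in with NeoPass" button then just points at OmniAuth's standard `/auth/neopass` path.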
The authentication currently fails with `invalid_credentials`, and
shows the `owo` response we hardcoded into the NeoPass server's token
response. We need to finally follow up on the little `TODO` written in
there!
This is setting us up for NeoPass, but first we're just gonna try stuff
with the "developer" strategy that's built in for testing, rather than
using the NeoPass dev server!
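The developer strategy is just a couple lines to turn on, something like:
```ruby
# config/initializers/omniauth.rb (sketch)
Rails.application.config.middleware.use OmniAuth::Builder do
  # OmniAuth's built-in "developer" strategy: a fake login form with no
  # real authentication behind it, for local testing only.
  provider :developer unless Rails.env.production?
end
```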
Oh right, we don't have Rails UJS going on anymore, which is what
handled the confirmation prompts for deleting lists. Turbo is the more
standard modern solution to that, and should speed up certain
pageloads, so let's do it!
Here I install the `turbo-rails` gem, then run `rails turbo:install` to
install the `@hotwired/turbo-rails` npm package. Then I move the
`application.js` that runs on all pages but the outfit editor into our
section of JS that gets run through the bundler, and add Turbo to it.
I had to fix a couple tricky things:
1. The outfit editor page doesn't play nice with being swapped into the
document, so I make it require a full page reload instead.
2. Prefetching the Sign In link can cause the wrong `return_to` address
to be written to the `session`. (It's a GET request that does, ever
so slightly, take its own actions, oops!) As a simple hacky answer,
we disallow prefetching on that link (see the sketch below).
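Concretely, those two fixes boil down to something like this, in Rails view-helper terms (the path helper and page names here are assumptions):
```ruby
# 1. In the outfit editor's <head>: tell Turbo this page always needs a
#    full page load, instead of being swapped into the current document.
tag.meta(name: "turbo-visit-control", content: "reload")

# 2. On the Sign In link: opt out of prefetching, since its GET writes a
#    return_to value into the session as a side effect.
link_to "Sign in", login_path, data: { turbo_prefetch: false }
```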
Haven't fixed up the UJS stuff for confirm prompts to use Turbo yet,
that's next!
I'm starting to port over the functionality that was previously just,
me running `yarn db:export:public-data` in `impress-2020` and
committing it to Git LFS every time.
My immediate motivation is that the `impress-2020` git repository is
getting weirdly large?? Idk how these 40MB files have blown up to a
solid 16GB of Git LFS data (we don't have THAT many!!!), but I guess
there's something about Git LFS's architecture and disk usage that I'm
not understanding.
So, let's move to a simpler system in which we don't bind the public
data to the codebase, but instead just regularly dump it in production
and make it available for download.
This change adds the `rails public_data:commit` task, which, when run
in production, will make the latest dump available at
`https://impress.openneo.net/public-data/latest.sql.gz`, and will also
store a running log of previous dumps, viewable at
`https://impress.openneo.net/public-data/`.
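The task itself is basically a `mysqldump` wrapped in a rake task, along these lines (a sketch, not the actual code; the table list and paths are placeholders):
```ruby
# lib/tasks/public_data.rake (sketch)
require "fileutils"

namespace :public_data do
  desc "Dump the public data and publish it at public-data/latest.sql.gz"
  task commit: :environment do
    db = ActiveRecord::Base.connection_db_config.configuration_hash
    dest_dir = Rails.root.join("public/public-data")
    FileUtils.mkdir_p(dest_dir)

    dump_path = dest_dir.join("#{Time.now.utc.strftime('%Y-%m-%d-%H%M%S')}.sql.gz")

    # Dump only the public tables (placeholder list!), gzipped.
    sh "mysqldump --host=#{db[:host]} --user=#{db[:username]} " \
       "--password=#{db[:password]} #{db[:database]} items species colors " \
       "| gzip > #{dump_path}"

    # Point latest.sql.gz at the newest dump; the directory listing serves
    # as the running log of previous dumps.
    FileUtils.ln_sf(dump_path, dest_dir.join("latest.sql.gz"))
  end
end
```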
Things left to do:
1. Create a `rails public_data:pull` task, to download `latest.sql.gz`
and import it into the local development database.
2. Set up a cron job to dump this out regularly, idk maybe weekly? That
will grow, but not very fast (about 2GB per year), and we can add
logic to rotate out old ones if it starts to grow too far. (If we
wanted to get really intricate, we could do like, daily for the past
week, then weekly for the past 3 months, then monthly for the past
year, idk. There must be tools that do this!)
As the comment in `deploy.yml` explains, this was a multi-step process,
but it went very smoothly as planned, hooray!!
I noticed again while making this change that Bundler doesn't seem to
be availing itself of the checked-in dependencies in `vendor/cache`. I
think I know the fix for this, I'll toss it into an upcoming change and
see if it works!
I also put in a manual bump for `falcon`!
The motivation is that I'm working on a Ruby 3.3.0 upgrade in another
branch, and I'm getting deprecation warnings from the `async` gem,
which I think are resolved in the latest version, so I figure, hey,
good time for an update!
Been wanting this for a while in theory, gonna actually do it now!
The motivation is that I want to turn up the timeout for loading pets,
because the Neopets endpoints are slower today with the NC UC release -
but I can already predict that under our current architecture that will
be a problem, because it'll block up our request queue!
Falcon uses Ruby's relatively-new async system to *not* have requests
block on upstream requests, and my understanding is that this behavior
is plug-and-play. Let's see how it goes!
In impress-2020, we do a big slow query to figure out which users have
been active in trades recently. Now, we cache that timestamp on the
User model.
This won't have any immediate effect; it's to clear the way for Classic
DTI to receive the better trade ratios feature people like from 2020.
I also added some unit testing infra, because I finally wanted it! It
covers all the ways you can trigger this timestamp lol
Note too that this is a bit of an unusually complex migration, but my
hope is that the batching and query structure and such helps it run
surprisingly fast! 🤞
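For flavor, the batching pattern is along these lines (a sketch; the column name and the trade-activity query are stand-ins for whatever the real migration computes):
```ruby
class AddLastTradeActivityAtToUsers < ActiveRecord::Migration[7.1]
  def up
    add_column :users, :last_trade_activity_at, :datetime

    # Backfill in batches, so we never hold one giant transaction open and
    # each UPDATE only touches a bounded slice of the table.
    User.in_batches(of: 1_000) do |batch|
      batch.update_all(<<~SQL)
        last_trade_activity_at = (
          SELECT MAX(trades.updated_at) FROM trades
          WHERE trades.user_id = users.id
        )
      SQL
    end
  end

  def down
    remove_column :users, :last_trade_activity_at
  end
end
```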
I tried to port the Rainbow Pool ones forward, but ran into issues with the
service that uses browser-specific stuff to check that traffic is valid :/
Incidentally, those were the only places we were using `rest-client`.
Goodbye!
Okay, I've simplified the migration to *just* add the column, and
instead added a task to find assets without manifest URLs and backfill
them.
Performance is a lot better now, using the `async-http` library, which
as I understand it supports both persistent connections when invoked
like this, and maybe also HTTP/2 multiplexing?? (Though I'm not
actually sure images.neopets.com supports HTTP/2 lol)
I'm not sure about the number of concurrent tasks I picked here, 100
seems okay for an internet thing and for such small requests, but I
worry that the CDN is gonna get annoyed or something. Well, we'll see!
This task is very resumable if it turns out we get frozen out or
something.
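The concurrency pattern, roughly (a sketch; the model and the way we guess manifest URLs are placeholders, but the limit of 100 matches what I described):
```ruby
require "async"
require "async/barrier"
require "async/semaphore"
require "async/http/internet"

Sync do
  internet = Async::HTTP::Internet.new
  barrier = Async::Barrier.new
  semaphore = Async::Semaphore.new(100, parent: barrier) # at most 100 in flight

  SwfAsset.where(manifest_url: nil).find_each do |asset|
    semaphore.async do
      candidate_url = asset.guess_manifest_url # hypothetical helper
      internet.head(candidate_url) do |response|
        asset.update!(manifest_url: candidate_url) if response.success?
      end
    end
  end

  barrier.wait # wait for all the in-flight checks to finish
ensure
  internet&.close
end
```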
It always shows up in development, and in production only if you're
logged in as Me Specifically!
I'm using this to poke at memory usage for pages that seem suspicious.
I don't know why our app reliably grows so large in RAM, but my hunch is
that maybe there are some pages that just use a truly large amount to
begin with - and I've learned Ruby doesn't release memory back to the
OS after it's GC'd; it just grows the process and keeps the free space
for itself in its own heap!
So I'm just eyeing pages that I know *can* have a lot going on, and
seeing what I find!
Look, I'll be real, I have literally not run these automated tests in
probably like a whole decade. Most of these files are empty, the ones
that aren't seem basically trivial, and I bet half of it would fail
anyway.
If I wanted to do real automated testing, I would basically want to
start from scratch anyway, and apply coverage I can trust to the areas
I actually care about.
Until then, I feel like these gems and files are mostly just clutter,
and I don't like them being One More Barrier To Entry. Goodbye, unused
complexity!
The usual stuff! Installed the new gem and its new deps, ran
`bin/rails app:update` and did my best to manually merge the dev/prod
config files with the new canonical defaults, deleted some migrations I
don't think are relevant to us, and yeah!
Also, Rails 7.1 seems to need `libyaml-dev` installed, so I added that
to the `deploy/setup.yml` playbook!
One thing to note is that, while I was here, I turned on some settings
relating to our use of SSL that technically weren't on before. This
should be fine and helpful? But if stuff breaks, well, check those!
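Specifically, I mean flags along these lines in `config/environments/production.rb` (from memory of the 7.1 defaults, so double-check against the real diff):
```ruby
# Redirect HTTP to HTTPS, mark cookies as secure, and send HSTS headers.
config.force_ssl = true

# Trust that the reverse proxy in front of us has already terminated SSL.
config.assume_ssl = true
```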
This required a buncha fixes to how SASS scoping works! I needed to add a bunch of imports for stuff that previously read constants, mixins, etc. from the global scope, because it used to be imported *after* those definitions.
There's clearly a lot of refactor opportunity here, but I'm not gonna worry about it!!
I wasn't sure what we were actually using it for, turns out it was mostly polyfills for CSS features that are very standard now!
I didn't audit these changes very carefully tbqh because they seemed pretty simple? Fingers crossed!
I did some refactoring while here too, of pulling the deploy scripts out of `package.json` and into `bin`, to be a bit more canonically Rails-y. (idk how canonical the colon thing is but, probably fine??)
I don't know enough about our caching situation to know where memcache performs meaningfully better than Rails's in-memory cache. Let's delete it for now and see if there's a problem, to simplify the deploy environment!
We add `jsbundling-rails` to get esbuild running in the app, and then we copy-paste the files we need from impress-2020 into here!
I stopped at the point where it was building successfully, but it's not running correctly: it's not sure about `process.env` in `next`, and I think the right next step is to delete the NextJS deps altogether and use React Router instead.
A lot of rough edges here (e.g. no styles on the flash messages), but it's working and that's good!!
I tested this by temporarily switching to the production database and logging in as matchu!
Still missing a lot of big features too, like registration, password resets, settings page, etc.
This removes login/logout/session logic for integrating with OpenNeo ID, replacing them with stubs that just redirect to `/?TODO` when you click login, and helpers that act as if you're not logged in.
This gives us a clean slate to plug in new Devise logic to integrate with the `openneo_id` database directly!
Whew! Seems like a pretty clean one? Ran `rails app:update` and stuff, and made some corrections to keyword arguments for `translate` calls. There might be more such problems elsewhere? But that's hard to search for, and we'll have to see.
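The `translate` fixes were the usual keyword-arguments dance, a pattern like this (a sketch, not the literal call sites):
```ruby
# Before: passing an options hash as a plain positional argument, which
# newer I18n/Ruby no longer quietly treats as keyword arguments:
#   t("items.show.title", interpolations)

# After (in a view or controller): splat it explicitly as keywords:
t("items.show.title", **interpolations)
```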
Hey nice! We have to add webrick now because it's not included in Ruby 3, but hey just drop it right back in.
Idk how to choose between this or puma or whatever, but in the absence of a specific reason let's just pick the one whose name I know best.
This one was pretty straightforward yaay! Main thing was the change from `render file` to `render template` in a couple places, oh and a thing with complex `order()` clauses.
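For posterity, the two kinds of fixes look roughly like this (I'm guessing the `order()` thing is the usual `Arel.sql` wrapping; names are illustrative):
```ruby
# In controllers: `render file:` no longer goes through the template
# handlers, so template renders become `render template:`:
render template: "items/show" # was: render file: "items/show"

# And raw SQL inside order() has to be explicitly marked as safe, rather
# than passed as a bare string:
Item.order(Arel.sql("FIELD(id, 3, 1, 2)"))
```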