We're now all-in on impress.openneo.net for this box!
One little wrinkle is that certbot was initially upset that I had
already uploaded the copy-pasted certs from the other box to here, at
the file path it expected to get to manage. So, I moved those to
`/srv/impress/shared/temp-certs`, and changed the nginx config
accordingly; and then deleted the original and let certbot control it!
The usual stuff! Installed the new gem and its new deps, ran
`bin/rails app:update` and did my best to manually merge the dev/prod
config files with the new canonical defaults, deleted some migrations I
don't think are relevant to us, and yeah!
Also, Rails 7.1 seems to need `libyaml-dev` installed, so I added that
to the `deploy/setup.yml` playbook!
One thing to note is that, while I was here, I turned on some settings
relating to our use of SSL that technically weren't on before. This
should be fine and helpful? But if stuff breaks, well, check those!
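If it helps future-me: I believe the flags in question are the usual
production SSL ones, something like this sketch from memory (not the
exact diff, so treat the specific lines as assumptions):
```ruby
# config/environments/production.rb — roughly the kind of flags I mean.
config.force_ssl = true   # redirect HTTP -> HTTPS, secure cookies, HSTS
config.assume_ssl = true  # nginx terminates SSL for us, so treat requests as HTTPS
```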
We used to do this for weird clever caching tricks that I don't think
were actually very effective. We stopped using this a few months ago,
and now I'm finally cleaning up this supporting code!
Huh, Arel can *sometimes* handle a bare attribute standing in for
"X is true" in a condition, but sometimes gets upset about it. I guess
this changed in Rails at some point since we wrote this pretty recently?
Specifically, item search would crash on "is:nc" (but *not* "is:np"),
saying:
```
undefined method `fetch_attribute' for #<struct Arel::Attributes::Attribute relation=#<Arel::Table:0x0000000109a67110 @name="items", @klass=Item(…), @type_caster=#<ActiveRecord::TypeCaster::Map:0x0000000109a66e90 @klass=Item(…)>, @table_alias=nil>, name="is_manually_nc">
```
The traceback was a bit misleading (it happened at the part where we
merge all the scopes together), but it hinted to me that something was
passing an attribute in a place where Arel expected a conditional. So I
converted the attribute in the `is_nc` scope to a conditional, and made
the matching change in `is_np`, and that fixed it! Ok phew!
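For the record, the shape of the fix was roughly this. (A minimal
sketch, not the actual scopes—the real ones surely involve more
conditions—with the attribute name taken from the error message.)
```ruby
# Before: a bare Arel attribute as the condition. This builds fine, but
# crashes later when the scopes get merged together.
scope :is_nc, -> { where(arel_table[:is_manually_nc]) }

# After: an explicit conditional, which `merge` is happy with.
scope :is_nc, -> { where(arel_table[:is_manually_nc].eq(true)) }
scope :is_np, -> { where(arel_table[:is_manually_nc].eq(false)) }
```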
The URL anchors were getting like. double-encoded? The `closet[]` part
was encoding as `closet%255B%255D`. Maybe a thing in Rails, where you
need to mark them `html_safe` to insert them in a URL like that?
Well anyway, those URLs are redundant now, I just have it link straight
to the same outfit page as the big link!
Now, like in DTI 2020, opening an outfit will go straight to the editor.
I'm not 100% on whether this is actually like. the superior behavior?
But I think it's good enough, and it's what the wardrobe-2020 code
expects, so let's just roll with it for now!
Ohh I see, I made a mistake converting this from Next.js routing. It's
not that we had a URL search parameter named `outfitId`; it's that if
you were coming from the `/outfits/:outfitId` route, it would use that!
I still haven't gotten the rest of the site to point that route to this
page, but I'll do that in a later change.
Notable things:
- We used to have the parameters in the hash (`#`) part of the URL.
- We used to use the key `outfit=123` instead of `outfitId=123`.
In this change, we add backwards-compatibility for these things, while
still keeping the latest behavior too, with no change to the URLs we
generate!
Looks like the version of Prettier I just installed is v3, whereas our
last run in the impress-2020 repo was with v2. I don't think we had any
special config in that project, I think these are just changes to
Prettier's defaults, and I'm comfortable accepting them! (Mostly seems
like a lot of trailing commas.)
I'm trying out a new editor setup, and it noticed that Prettier isn't
obviously installed! I think it makes sense to put it in dev deps, even
if there's not a direct hook calling it—though tbh maybe I should add it
to `yarn dev` somehow?
Now, if I run `sudo -i -u impress` on the production server, it opens a
login bash shell, with all of the app's environment variables exported,
starting in `/srv/impress`.
This will let me quickly `cd current; bin/rails console` to start poking
at whatever needs poked!
Idk if this used to be different or what, but it looks like the current
behavior is: if you delete a closet list, it leaves the hangers in
place; Classic DTI just wouldn't show them anywhere, but Impress 2020
(until recently) would crash on them.
Now, we use `dependent: :destroy` to delete the hangers when you delete
the list (which I think makes sense; it's different than what I decided
in the past, but that's ok, and it matches what the current behavior
*looks* like to people!), and we add a migration that deletes orphaned
hangers.
The migration also outputs the deleted hangers as JSON, for us to hold
onto in case we made a mistake! I'm also backing up the database in
advance of running this migration, just in case we gotta roll back HARD!
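A sketch of the shape of this change—the model, association, and column
names here are assumptions, not the exact code:
```ruby
# Sketch only: names are assumptions, not the real schema.
class ClosetList < ApplicationRecord
  has_many :hangers, class_name: "ClosetHanger", dependent: :destroy
end

# And the cleanup migration, roughly: find hangers whose list no longer
# exists, print them as JSON so we have a record, then delete them.
class DeleteOrphanedClosetHangers < ActiveRecord::Migration[7.1]
  def up
    orphaned = ClosetHanger.where.not(list_id: nil)
                           .where.not(list_id: ClosetList.select(:id))
    puts orphaned.to_json
    orphaned.destroy_all
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```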
This is an important workflow for people doing art stuff, I'm told! They used to use the Classic DTI broken image UI for this, but now that that's uhh Fully Gone, let's add this more explicitly!
Ah right, the CSS reset only applies in the ScopedCSSReset container, which doesn't work for elements portaled out with the <Portal> component (which a LOT of Chakra components use for things like tooltips etc).
Here, we take advantage of <Portal> having a hardcoded classname .chakra-portal, and apply the reset to those elements too!
This was used by the Neopia server to send us the modeling data it requested out-of-band. But now we do all our modeling requests back in-app again, so we don't need this!
Okay, idk if this process has even been working for a while now anyway; I don't think Neopets translates item names anymore?
And it's crashing when I try to model stuff now, so like. yeah ok I'm fine with just skipping this, it's a shame to lose out on potential data going forward but *I think there just isn't data to get anyway*
I think we used this for both conversion to image, and also for CORS stuff when rendering Flash-based previews… let's trash it, I don't want to be growing our hard drive with files I don't think we use anymore!
If I'm wrong and it turns out we do use them for something, then like. hey I'm sure we'll find out soon enough, and it's a very recoverable operation.
I hope this doesn't cause problems! But yeah, with Puma doing threading, and maybe switching to Falcon someday to get even better concurrency properties, I feel like this will probably be fine?
And it makes the UX a loootttt better, to be back in the world where all these forms just work, whew.
Oh okay, I was misinterpreting the error: it was that our NEOPETS_URL_ORIGIN secret value isn't the real Neopets.com IP address anymore, so amfphp requests were just plain *always* failing in production. Oops!
I've removed that environment variable from our production config, and now modeling is working in the bulk thing!
Also I'm noticing that we're using puma these days, which does good threading stuff. I think there might be merit to switching over to Falcon because of just how async-y our stuff is, but having 5 threads going is honestly probably good enough that I don't need to worry too much about mutual blocking, and could probably just write stuff to get Neopia out of the picture like *right now*. Neat!
Okay so… I'm worried about this because of Rails' whole single-threaded situation, which doesn't really let it handle blocking on external network requests very well.
Ultimately I think we're gonna have to do a clever thing but idk quite what?
I should look into whether like, puma + the new async stuff can enable Rails to be more tolerable about this, and handle a few requests at once, instead of having to have the Neopia server doing it. (Right now, the Neopia server isn't really doing its job quite right, because it depends on the Rails app being *local* to send stuff to it.)
But for now, let's just extend the timeout, cuz it's basically always getting hit in production—because there's currently no other way to do modeling, oops lol
I'm not sure why this was causing problems? especially why *now*? But I was seeing errors in systemctl of it trying to parse this comment as an environment variable soooo ok!
Could just be an intermittent thing where like, a byte got dropped last time we transferred this file or something? But whatever, this fixed it, and the new comment placement is more reasonable anyway!
Just find_all_by's that I never cleaned up
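That is, the old Rails 2-era dynamic finders become plain `where` calls; the attribute names here are just for illustration:
```ruby
# Before: dynamic finder, removed in modern Rails.
hangers = ClosetHanger.find_all_by_user_id(user.id)

# After: equivalent query with `where`.
hangers = ClosetHanger.where(user_id: user.id)
```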
Oddly enough, I still got a "neopets seems down" message out of this, idk if that's an actual bug or just sluggishness rn
Okay, right, if we're just using www.neopets.com (like we are for now), it fails on http://www.neopets.com because it triggers a redirect that we don't follow.
So here I 1) change the default to HTTPS, and 2) add HTTPS support to our little RocketAMF lib
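I won't reproduce the actual lib change here, but the gist of "add HTTPS support" is just making the HTTP client SSL-aware based on the URL scheme—something like this sketch (the method name and surrounding details are assumptions, not the lib's real API):
```ruby
require "net/http"
require "uri"

# Sketch only: the real RocketAMF helper looks different; this just shows
# the use_ssl change that HTTPS support boils down to.
def post_amf(url, encoded_request)
  uri = URI.parse(url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = (uri.scheme == "https") # previously never enabled, so
                                         # https:// gateways couldn't work
  request = Net::HTTP::Post.new(uri.request_uri,
                                "Content-Type" => "application/x-amf")
  request.body = encoded_request
  http.request(request)
end
```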
Just cleaning up a bit! I'm sure there's more to remove, these were just some clear candidates: old wardrobe code, and stuff in `public` that I just fully don't recognize and don't think is doing anything? (We'll find out if something crashes though lol!)
Looking back at this now I'm just like. Oh right, of course, we don't have passwordless access to *become root*, so of course Ansible's strategy of becoming root and then running the playbook step was failing!
Oops, this was causing the page to render in a weird zoomed-out way on mobile!
Note that, for most of the site, we intentionally haven't added this tag yet, because most of our pages aren't especially responsively-designed; so we _want_ the device's best-guess rendering of a non-responsive page, rather than trying to enforce something.
Oh right, Rails does its own terser minification step, so using esbuild's minifier is just running two minifiers, which is just asking for trouble!
For some reason, running it this time on the non-Vagrant box, terser was crashing trying to read something in the esbuild-minified item-page.js. Now that esbuild doesn't minify, that crash goes away, and the output still gets minified by the end!
I do notice though that --minify does some other stuff in esbuild that I forget all of what it is. Oh well, not gonna worry about it for now!
This was necessary when we were running old Rubies that I couldn't build on macOS, but now we're on standard modern stuff, so I'm not gonna leave around a config that we no longer use and keep updated!
Looking at the docs, I think what changed is that `throttled_responder` gets the request as an argument instead of the `env`? And has the same return type for the lambda as before?
So uhhh I don't remember how to test this, but uhh it's not crashing when the server starts anymore, and I feel like the most likely problem here would be that you get a 500 instead of a useful response in the rate limit case, so like. ehh I'll just leave it be!
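For reference, the updated config is roughly this shape (the response body text here is made up, not our actual copy):
```ruby
# Rack::Attack's newer API: the responder receives the request object
# instead of the raw Rack env, and still returns a Rack response triple.
Rack::Attack.throttled_responder = lambda do |request|
  [429, { "Content-Type" => "text/plain" }, ["Too many requests! Try again soon.\n"]]
end
```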
Oh right, since we've told Rails that in development the assets path is `/dev-assets`, but the JS scripts don't know that, they're still sending requests to `/assets/thing.svg` or whatever, which is returning the prebuilt production asset if present, or nothing if not. Fixed!
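(For context, I believe the dev setting in question is the standard Sprockets prefix option, roughly like so:)
```ruby
# config/environments/development.rb — moves dev assets off of /assets,
# so they can't collide with prebuilt production assets.
config.assets.prefix = "/dev-assets"
```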
This required a buncha fixes to how SASS scoping works! Needed to add a bunch of imports for stuff that previously would get read from the global scope by being imported *after* the constants and mixins etc.
There's clearly a lot of refactor opportunity here, but I'm not gonna worry about it!!