Idk why, but unlike my previous experience with Rails devcontainers, this time the setup process is running wildly slowly?
Might just be a transient issue on my machine, one that a restart and trying again another time would fix? Or it could be something about the MySQL image that doesn't run great in this context?
In any case, I'm just gonna set this down for now!
The URL anchors were getting like. double-encoded? The `closet[]` part
was encoding as `closet%255B%255D`. Maybe it's a Rails thing, where you
need to mark them `html_safe` to insert them into a URL like that?
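For the record, that double-encoded form is exactly what you get from escaping twice; a quick repro in plain Ruby:

```ruby
require "cgi"

CGI.escape("closet[]")             # => "closet%5B%5D"   (encoded once)
CGI.escape(CGI.escape("closet[]")) # => "closet%255B%255D" (the "%" itself gets re-encoded)
```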
Well anyway, those URLs are redundant now: I just have it link straight
to the same outfit page as the big link!
Now, like in DTI 2020, opening an outfit will go straight to the editor.
I'm not 100% on whether this is actually like. the superior behavior?
But I think it's good enough, and it's what the wardrobe-2020 code
expects, so let's just roll with it for now!
Ohh I see, I made a mistake converting this from Next.js routing. It's
not that we had a URL search parameter named `outfitId`; it's that if
you were coming from the `/outfits/:outfitId` route, it would use that!
I still haven't gotten the rest of the site to point that route to this
page, but I'll do that in a later change.
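For context, a sketch of the route I mean, in Rails terms (the controller and param names here are placeholders, not necessarily what's in our routes file):

```ruby
# config/routes.rb (sketch): the outfit ID comes from the path,
# not from a `?outfitId=` query param.
get "/outfits/:id", to: "outfits#show", as: :outfit
```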
Notable things:
- We used to have the parameters in the hash (`#`) part of the URL.
- We used to use the key `outfit=123` instead of `outfitId=123`.
In this change, we add backwards compatibility for both of these, while
still keeping the latest behavior, with no change to the URLs we
generate!
Looks like the version of Prettier I just installed is v3, whereas our
last run in the impress-2020 repo was with v2. I don't think we had any
special config in that project; I think these are just changes to
Prettier's defaults, and I'm comfortable accepting them! (Mostly seems
like a lot of trailing commas; Prettier 3 changed the `trailingComma`
default from `"es5"` to `"all"`.)
I'm trying out a new editor setup, and it noticed that Prettier isn't
obviously installed! I think it makes sense to put it in dev deps, even
if there's not a direct hook calling it—though tbh maybe I should add it
to `yarn dev` somehow?
Now, if I run `sudo -i -u impress` on the production server, it opens a
login bash shell, with all of the app's environment variables exported
and the working directory set straight to `/srv/impress`.
This will let me quickly `cd current; bin/rails console` to start poking
at whatever needs poked!
Idk if this used to be different or what, but it looks like the current
behavior is: if you delete a closet list, it'll leave the hangers
present. Classic DTI would just not show them anywhere, but Impress 2020
(until recently) would crash about it.
Now, we use `dependent: :destroy` to delete the hangers when you delete
the list (which I think makes sense; it's different than what I decided
in the past, but that's ok, and it matches what the current behavior
*looks* like to people!), and we add a migration that deletes orphaned
hangers.
The migration also outputs the deleted hangers as JSON, for us to hold
onto in case we made a mistake! I'm also backing up the database in
advance of running this migration, just in case we gotta roll back HARD!
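Roughly what that looks like, as a sketch; the model/association names here (`ClosetList`, `ClosetHanger`, `hangers`, `list`, `list_id`) are my stand-ins for our actual schema:

```ruby
class ClosetList < ApplicationRecord
  # Deleting a list now destroys its hangers too, matching what the UI implies.
  has_many :hangers, class_name: "ClosetHanger", foreign_key: "list_id",
    dependent: :destroy
end

class ClosetHanger < ApplicationRecord
  belongs_to :list, class_name: "ClosetList", optional: true
end

class DeleteOrphanedClosetHangers < ActiveRecord::Migration[7.0]
  def up
    # Hangers that reference a list that no longer exists.
    orphaned = ClosetHanger.where.not(list_id: nil).where.missing(:list)
    puts orphaned.to_json # hold onto this output, in case we made a mistake!
    orphaned.delete_all
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```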
This is an important workflow for people doing art stuff, I'm told! They used to use Classic DTI's broken-image UI for this, but now that that's uhh Fully Gone, let's add this more explicitly!
Ah right, the CSS reset only applies in the ScopedCSSReset container, which doesn't work for elements portaled out with the `<Portal>` component (which a LOT of Chakra components use for things like tooltips etc).
Here, we take advantage of `<Portal>` having a hardcoded classname, `.chakra-portal`, and apply the reset to that too!
This was used by the Neopia server to send us the modeling data it requested out-of-band. But now we do all our modeling requests back in-app again, so we don't need this!
Okay, idk if this process has even been working for a while anyway; I don't think Neopets translates item names anymore?
And it's crashing when I try to model stuff now, so like. yeah ok, I'm fine with just skipping this. It's a shame to lose out on potential data going forward, but *I think there just isn't data to get anyway*.
I think we used this both for conversion to images, and also for CORS stuff when rendering Flash-based previews… let's trash it, I don't want to keep growing our hard drive with files I don't think we use anymore!
If I'm wrong and it turns out we do use them for something, then like. hey, I'm sure we'll find out soon enough, and it's a very recoverable operation.
I hope this doesn't cause problems! But yeah, with Puma doing threading, and maybe switching to Falcon someday to get even better concurrency properties, I feel like this will probably be fine?
And it makes the UX a loootttt better, to be back in the world where all these forms just work, whew.
Oh okay, I was misinterpreting the error: it was that our NEOPETS_URL_ORIGIN secret value isn't the real Neopets.com IP address anymore, so amfphp requests were just plain *always* failing in production. Oops!
I've removed that environment variable from our production config, and now modeling is working in the bulk thing!
Also I'm noticing that we're using Puma these days, which does good threading stuff. I think there might be merit to switching over to Falcon because of just how async-y our stuff is, but having 5 threads going is honestly probably good enough that I don't need to worry too much about mutual blocking, and I could probably just write stuff to get Neopia out of the picture like *right now*. Neat!
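For reference, those 5 threads come from the stock Rails Puma config; ours should look something like this:

```ruby
# config/puma.rb — the default Rails template: 5 threads per process,
# unless overridden by RAILS_MAX_THREADS.
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count
```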
Okay so… I'm worried about this because of Rails's whole single-threaded situation, which doesn't really let it handle blocking on external network requests very well.
Ultimately I think we're gonna have to do a clever thing, but idk quite what?
I should look into whether like, Puma + the new async stuff can enable Rails to be more tolerant about this, and handle a few requests at once, instead of having to have the Neopia server do it. (Right now, the Neopia server isn't really doing its job quite right, because it depends on the Rails app being *local* to send stuff to it.)
But for now, let's just extend the timeout, cuz it's basically always getting hit in production—because there's currently no other way to do modeling, oops lol
I'm not sure why this was causing problems, or especially why *now*? But I was seeing errors in systemctl about it trying to parse this comment as an environment variable, soooo ok!
Could just be an intermittent thing where like, a byte got dropped last time we transferred this file or something? But whatever, this has fixed it, and it's also a more reasonable comment placement!
Just `find_all_by`'s that I never cleaned up.
Oddly enough, I still got a "neopets seems down" message out of this; idk if that's an actual bug or just sluggishness rn.
Okay, right, if we're just using www.neopets.com (like we are for now), it fails on http://www.neopets.com because it triggers a redirect that we don't follow.
So here I 1) change the default to HTTPS, and 2) add HTTPS support to our little RocketAMF lib.
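A sketch of what (2) amounts to, assuming the lib talks Net::HTTP under the hood; the real wiring in our vendored RocketAMF (and the gateway path here) may differ:

```ruby
require "net/http"
require "uri"

# Hypothetical example: respect the URI's scheme instead of assuming plain HTTP.
uri = URI("https://www.neopets.com/amfphp/gateway.php") # path is illustrative
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = (uri.scheme == "https") # previously we always spoke plain HTTP
```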
Just cleaning up a bit! I'm sure there's more to remove; these were just some clear candidates: old wardrobe code, and stuff in `public` that I just fully don't recognize and don't think is doing anything? (We'll find out if something crashes, though lol!)
Looking back at this now I'm just like. Oh right, of course, we don't have passwordless access to *become root*, so of course Ansible's strategy of becoming root and then running the playbook step was failing!
Oops, this was causing the page to render in a weird zoomed-out way on mobile!
Note that, for most of the site, we intentionally haven't added this tag (the viewport `<meta>` tag) yet, because most of our pages aren't especially responsively designed; so we _want_ the device's best attempt to work with that, rather than trying to enforce something.
Oh right, Rails does its own terser minification step, so using esbuild's minifier too means running two minifiers, which is just asking for trouble!
For some reason, running it this time on the non-Vagrant box, terser was crashing trying to read something in the minified item-page.js. Now that we don't minify in esbuild, the crash is gone, and the output still gets minified by the end!
I do notice that `--minify` enables some other stuff in esbuild too, and I forget exactly what. Oh well, not gonna worry about it for now!
This was necessary when we were running old Rubies that I couldn't build on macOS, but now we're on standard modern stuff, so I'm not gonna leave around a config that we no longer use and would have to keep updated!
Looking at the docs, I think what changed is that `throttled_responder` gets the request as an argument instead of the `env`? And has the same return type for the lambda as before?
So uhhh I don't remember how to test this, but uhh it's not crashing when the server starts anymore, and I feel like the most likely problem here would be that you get a 500 instead of a useful response in the rate limit case, so like. ehh I'll just leave it be!
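For reference, that new shape, as I understand it from the rack-attack docs:

```ruby
# Rack::Attack 6+: the responder receives the request (not the raw env),
# and still returns a normal Rack response triple.
Rack::Attack.throttled_responder = lambda do |request|
  [429, { "Content-Type" => "text/plain" }, ["Too many requests! Try again later.\n"]]
end
```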
Oh right: we've told Rails that in development the assets path is `/dev-assets`, but the JS scripts don't know that, so they're still sending requests to `/assets/thing.svg` or whatever, which returns the prebuilt production asset if present, or nothing if not. Fixed!
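The Rails side of that, for context (a sketch of the setting in question):

```ruby
# config/environments/development.rb: serve dev-compiled assets from a
# separate prefix, so they can't collide with prebuilt /assets files.
config.assets.prefix = "/dev-assets"
```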
This required a buncha fixes to how our Sass scoping works! I needed to add a bunch of imports for stuff that previously got read from the global scope, by virtue of being imported *after* the constants and mixins etc.
There's clearly a lot of refactor opportunity here, but I'm not gonna worry about it!!
I wasn't sure what we were actually using it for; turns out it was mostly polyfills for CSS features that are very standard now!
I didn't audit these changes very carefully tbqh because they seemed pretty simple? Fingers crossed!
Soo I think the reason that adding `.digested` to the filenames (like `jsbundling-rails` says to) doesn't work is that we're on an old version of Sprockets, for compass-rails's sake?
I'm gonna investigate what Compass actually does for us, and see if we can delete it.
This feels a bit weird on the idioms, but ehh I think it's fine… mostly just, the `assets:precompile` task is gonna call `yarn build` here, so `yarn build` needs to use production settings; but we also don't want quite all of those settings on in development, so there's a base task that both the prod `build` and `dev` call.
Uhhh I think I must have made a mistake here where like… I left this in the service file for a while, then accidentally deleted it from the Ansible playbook but not the live server? I had tested with it, then tested again without it and thought it wasn't necessary, but it turns out to have been necessary after all, I guess? Ok!
This instructs Rails's ExecJS library to not bother looking for Node or something similar, because the app doesn't actually need to run any JS, even though the `react-rails` library (?) seems to be pretty eager about the possibility that we'll need to server-side-render stuff. (We should consider whether we want to though tbh? But… idk that would be a pretty different arch than what we've done with `jsbundling-rails` so like. idk whatever)
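One way to spell that, assuming ExecJS's standard `EXECJS_RUNTIME` environment-variable hook; this is a sketch, and since ExecJS picks its runtime when the gem loads, the variable has to be set before boot (e.g. in the service's environment):

```ruby
# Sketch: ExecJS checks EXECJS_RUNTIME when autodetecting a runtime, and
# "Disabled" tells it not to go looking for Node at all. Equivalent to running:
#   EXECJS_RUNTIME=Disabled bin/rails server
ENV["EXECJS_RUNTIME"] ||= "Disabled" # e.g. near the top of config/boot.rb
```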
Mm, something in Rails was getting upset when working with session cookies because the `Host` header was `127.0.0.1:3000` instead of `beta.impress.openneo.net`. I only saw this log entry on important actions like login, so my hope is that this is why login is failing??
I was intentionally omitting these to start, because I didn't understand them well and didn't want to add things I didn't understand. But now I've checked in on them more and they seem standard and reasonable. Ok!
```
HTTP Origin header (https://beta.impress.openneo.net) didn't match request.base_url (http://127.0.0.1:3000)
```
Source: https://stackoverflow.com/a/73198861/107415
I did some refactoring while I was here too: pulling the deploy scripts out of `package.json` and into `bin`, to be a bit more canonically Rails-y. (idk how canonical the colon thing is, but probably fine??)