Oh okay, I was misinterpreting the error: it was that our NEOPETS_URL_ORIGIN secret value isn't the real Neopets.com IP address anymore, so amfphp requests were just plain *always* failing in production. Oops!
I've removed that environment variable from our production config, and now modeling is working in the bulk thing!
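(For context, the shape of the thing, roughly sketched; the constant name here is made up for illustration, and the fallback is just the public hostname we're using now:)

```ruby
# Sketch of the idea, not the exact code: AMF requests go to an origin that a
# secret env var can override, falling back to the public hostname when the
# var isn't set (which is what we're doing now that the secret is gone).
NEOPETS_AMF_ORIGIN = ENV.fetch("NEOPETS_URL_ORIGIN", "https://www.neopets.com")
```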
Also I'm noticing that we're using puma these days, which does good threading stuff. I think there might be merit to switching over to Falcon because of just how async-y our stuff is, but having 5 threads going is honestly probably good enough that I don't need to worry too much about mutual blocking, and could probably just write stuff to get Neopia out of the picture like *right now*. Neat!
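(For reference, the kind of threading config I mean; this is roughly the stock Rails-generated `config/puma.rb`, not necessarily ours verbatim:)

```ruby
# config/puma.rb (sketch): a small pool of threads per process, so a slow
# blocking call (like an AMF request to Neopets) only ties up one thread.
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count

port ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }
```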
Okay so… I'm worried about this because of Rails' whole single-threaded situation, which doesn't really let it handle blocking on external network requests very well.
Ultimately I think we're gonna have to do a clever thing but idk quite what?
I should look into whether, like, puma + the new async stuff can enable Rails to be more tolerant of this, and handle a few requests at once, instead of having to have the Neopia server doing it. (Right now, the Neopia server isn't really doing its job quite right, because it depends on the Rails app being *local* to send stuff to it.)
But for now, let's just extend the timeout, cuz it's basically always getting hit in production—because there's currently no other way to do modeling, oops lol
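(The change itself is just bumping the timeout numbers on the outgoing AMF request; a rough sketch of the shape, assuming it goes through Net::HTTP under the hood, with illustrative values rather than the committed ones:)

```ruby
require "net/http"

# Sketch only: the real call lives in our RocketAMF wrapper, and these numbers
# and the gateway path are illustrative, not the exact committed values.
uri = URI("https://www.neopets.com/amfphp/gateway.php")
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = uri.scheme == "https"
http.open_timeout = 10 # seconds to wait for the connection to open
http.read_timeout = 60 # bumped way up, since these calls are slow in production
```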
I'm not sure why this was causing problems, especially why *now*? But I was seeing errors in systemctl about it trying to parse this comment as an environment variable, soooo ok!
Could just be an intermittent thing where, like, a byte got dropped last time we transferred this file or something? But whatever, this fixed it, and it's a reasonable place for the comment anyway!
Just `find_all_by`'s that I never cleaned up.
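(For future me: these are the old Rails dynamic finders, which went away around Rails 4.1; the modern spelling is just `where`. The model and column below are only for illustration:)

```ruby
# Old Rails 3-era dynamic finder (no longer available in modern Rails):
#   items = Item.find_all_by_name("Blue Hat")

# Modern equivalent:
items = Item.where(name: "Blue Hat")
```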
Oddly enough, I still got a "neopets seems down" message out of this, idk if that's an actual bug or just sluggishness rn
Okay, right, if we're just using www.neopets.com (like we are for now), it fails on http://www.neopets.com, because that triggers a redirect that we don't follow.
So here I 1) change the default to HTTPS, and 2) add HTTPS support to our little RocketAMF lib
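(Not the exact diff, but the gist of part 2 is the usual Net::HTTP dance, assuming that's what the lib uses under the hood; `amf_connection` is just a name I'm using for the sketch:)

```ruby
require "net/http"
require "uri"

# Sketch: build the gateway connection from its URL, turning on TLS when the
# scheme is https, so https://www.neopets.com works without hitting the
# unfollowed http -> https redirect.
def amf_connection(gateway_url)
  uri = URI.parse(gateway_url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = (uri.scheme == "https")
  http
end
```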
Just cleaning up a bit! I'm sure there's more to remove, these were just some clear candidates: old wardrobe code, and stuff in `public` that I just fully don't recognize and don't think is doing anything? (We'll find out if something crashes though lol!)
Looking back at this now, I'm just like: oh right, of course, we don't have passwordless access to *become root*, so of course Ansible's strategy of becoming root and then running the playbook step was failing!
Oops, this was causing the page to render in a weird zoomed-out way on mobile!
Note that, for most of the site, we intentionally haven't added this tag yet, because most of our pages aren't especially responsively designed; so we _want_ the device's best attempt at handling that, rather than trying to enforce something.
Oh right, Rails does its own terser minification step, so using esbuild's minifier too means running two minifiers, which is just asking for trouble!
For some reason, running it this time on the non-Vagrant box, terser was crashing trying to read something in the minified item-page.js. Now that esbuild doesn't minify, that crash is gone, and the output still ends up minified by the end!
I do notice, though, that `--minify` does some other stuff in esbuild too, and I forget exactly what all of it is. Oh well, not gonna worry about it for now!
This was necessary when we were running old Rubies that I couldn't build on macOS, but now we're on standard modern stuff, so I'm not gonna leave around a config that we no longer use or keep updated!
Looking at the docs, I think what changed is that `throttled_responder` gets the request as an argument instead of the `env`, and the lambda has the same return type as before?
So uhhh I don't remember how to test this, but uhh it's not crashing when the server starts anymore, and I feel like the most likely problem here would be that you get a 500 instead of a useful response in the rate limit case, so like, ehh, I'll just leave it be!
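(For posterity, my understanding of the new shape, per the Rack::Attack docs; this is the idiom, not necessarily our exact initializer:)

```ruby
# config/initializers/rack_attack.rb (sketch)
# Old API: Rack::Attack.throttled_response = lambda { |env| ... }
# New API: the lambda gets the request object instead of the raw env,
# but still returns a normal Rack response triple.
Rack::Attack.throttled_responder = lambda do |request|
  match_data = request.env["rack.attack.match_data"] || {}
  [
    429,
    { "Content-Type" => "text/plain",
      "Retry-After" => match_data[:period].to_s },
    ["Too many requests. Please try again later.\n"],
  ]
end
```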
Oh right: we've told Rails that in development the assets path is `/dev-assets`, but the JS scripts don't know that, so they're still sending requests to `/assets/thing.svg` or whatever, which returns the prebuilt production asset if present, or nothing if not. Fixed!
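(The "we've told Rails" part is the Sprockets prefix setting, something like the following in `config/environments/development.rb`; the actual fix here was teaching the JS side about that same prefix:)

```ruby
# config/environments/development.rb (sketch of the relevant setting)
Rails.application.configure do
  # Serve development assets from /dev-assets, so they can't get shadowed by
  # prebuilt production assets sitting in public/assets.
  config.assets.prefix = "/dev-assets"
end
```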
This required a buncha fixes to how SASS scoping works! I needed to add a bunch of imports for stuff that previously got read from the global scope, by virtue of being imported *after* the constants and mixins etc.
There's clearly a lot of refactor opportunity here, but I'm not gonna worry about it!!
I wasn't sure what we were actually using it for; turns out it was mostly polyfills for CSS features that are very standard now!
I didn't audit these changes very carefully tbqh because they seemed pretty simple? Fingers crossed!
Soo I think the reason that adding `.digested` to the filenames (like `jsbundling-rails` says to) doesn't work is that we're on an old version of Sprockets, for compass-rails's sake?
I'm gonna investigate what Compass actually does for us, and see if we can delete it.
This feels a bit weird, idiom-wise, but ehh I think it's fine… mostly just: the `assets:precompile` task is gonna call `yarn build` here, so `yarn build` needs to use production settings; but we also don't want quite all of those settings on in development, so there's a base task that both the prod `build` and `dev` call.
Uhhh I think I made a mistake here where, like… I left this in the service file for a while, then accidentally deleted it from the Ansible playbook but not the live server? I had tested with it, then tested again without it and thought it wasn't necessary, but it turns out it was necessary after all, I guess? Ok!
This instructs the ExecJS library (which Rails pulls in) to not bother looking for Node or anything similar, because the app doesn't actually need to run any JS, even though the `react-rails` library (?) seems to be pretty eager about the possibility that we'll need to server-side-render stuff. (We should consider whether we want to, though, tbh? But… idk, that would be a pretty different arch than what we've done with `jsbundling-rails`, so like, idk, whatever.)
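(If I'm remembering the knob right, it's ExecJS's runtime selection, which you can pin before anything loads; sketch below, and the `Disabled` value is the part I'd double-check against the execjs docs:)

```ruby
# Sketch: tell ExecJS not to hunt for Node (or any other JS runtime), since we
# never actually execute JS server-side. Needs to happen early, e.g. in
# config/boot.rb, before ExecJS picks a runtime.
ENV["EXECJS_RUNTIME"] ||= "Disabled"
```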
Mm, something in Rails was getting upset when working with session cookies because the `Host` header was `127.0.0.1:3000` instead of `beta.impress.openneo.net`. I only saw this log entry on important actions like login, so my hope is that this is why login is failing??
I was intentionally omitting these to start, because I didn't understand them well and didn't want to add things I didn't understand. But now I've checked in on them more and they seem standard and reasonable. Ok!
```
HTTP Origin header (https://beta.impress.openneo.net) didn't match request.base_url (http://127.0.0.1:3000)
```
Source: https://stackoverflow.com/a/73198861/107415
I did some refactoring while I was here too: pulling the deploy scripts out of `package.json` and into `bin`, to be a bit more canonically Rails-y. (idk how canonical the colon thing is, but probably fine??)
I don't know enough about our caching situation to know where memcache performs meaningfully better than Rails's in-memory cache. Let's delete it for now and see if there's a problem, to simplify the deploy environment!
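(Concretely, this means leaning on Rails's built-in store instead of the memcache client, something in this spirit in `config/environments/production.rb`; the size cap is illustrative:)

```ruby
# config/environments/production.rb (sketch)
Rails.application.configure do
  # Per-process in-memory cache instead of a shared memcached box.
  # Simpler to deploy; the tradeoff is that each app process caches separately.
  config.cache_store = :memory_store, { size: 64.megabytes }
end
```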
Okay, this is much simpler than the impress-2020 version, where we symlinked `node_modules` and stuff; Bundler is just a lot better at this lol
Right now, the app is failing to start because we don't install Node; I wasn't sure whether we'd need to, and whether I was gonna precompile the assets, etc.
Though now that I say that out loud, I guess part of the issue might be that I'm not sure the app is running with `RAILS_ENV=production`; I wonder if it still wants Node in that case?? I'll flip that switch in the service file now, then commit to save my place for the day, then try starting the app again sometime and see what it says!
Yay it's working! We set up the box, install Ruby, upload a placeholder app, set it up as a service, and get it hooked up to nginx!
Next, we'll add the script to upload the latest version of the site. We just need to slot it into `/srv/impress/current`, run `bundle install`, and that should basically be that! (Oh, and we need to compile production assets—I wonder if it's useful to do that on the dev machine instead of on the target? That might save us from needing to install Node. Or maybe we'll have to anyway!)
Eyyy tasty! There were some issues with styles conflicting with the main app, but I think we got it!
Scoping Chakra's CSS reset was a big deal, so it doesn't accidentally overwrite the app's own styles lol, and we had to solve a specificity problem for that. Thanks Aria for the `:where` tip!! <3
I don't think we ever had a specific reason why we didn't use the router for this? Not one that I wrote down, anyway. Let's just switch it over and see what happens!
I mainly did this while misdiagnosing the page reload problem (the one fixed in c162864), but it seems like a good idea to try out anyway!
This, I think, is why the page was reloading when you tried to do an item search? The failed import was triggering our "hey, maybe this is an old module URL that got deleted" code?
We add `jsbundling-rails` to get esbuild running in the app, and then we copy-paste the files we need from impress-2020 into here!
I stopped at the point where it was building successfully, but it's not running correctly: it's not sure about `process.env` in `next`, and I think the right next step is to delete the NextJS deps altogether and use React Router instead.