lol, again, this is hard to test, so I hope this didn't break it all!! Though tbh I feel like we removed this feature or something anyway? idk, it had already stopped working in some way.
Tbh I'm not 100% sure this is a fix: I'm not sure what `haml_concat` was doing here, and the page is still crashing, so it's hard to say. But fingers crossed!
Idk why, but when the `select` was the first thing in the query chain, it was getting ignored. I wonder if there's something about the `object_assets` scope that I'm not understanding that overwrites it? Or the `joins`? But whatever, this works; I'm not worried about it for now!
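For posterity, the shape of the workaround is roughly this (a sketch; the join association name here is hypothetical):
```
# Before: with `select` first, it seemed to get clobbered downstream:
SwfAsset.select('DISTINCT swf_assets.*').object_assets.joins(:parent_swf_asset_relationships)

# After: chaining `select` at the end survives, for whatever reason:
SwfAsset.object_assets.joins(:parent_swf_asset_relationships).select('DISTINCT swf_assets.*')
```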
The controller was like "oh yeah, we have that cached" (from previous renders of the app on Rails 3, I think?), but the view disagreed, bc it was appending a template digest to the cache key. That's a smart feature, but it's not compatible with how we skip queries in the controller, so disable it for now!
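For reference, Rails 4's fragment-cache helper has an escape hatch for exactly this; a sketch in Haml (the cache key and partial name here are hypothetical, our actual call may differ):
```
-# `skip_digest: true` keeps the template digest out of the cache key, so the
-# controller and view agree on what the key looks like:
- cache [@item, 'info'], skip_digest: true do
  = render 'info'
```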
Rails 4 removed `attr_accessible`, and we should move away from it too (I think we'll need to by Rails 5?), but for now we can install this and move on!
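A minimal sketch of the Gemfile change, assuming the shim here is the official `protected_attributes` gem (my assumption, not confirmed by this commit):
```
# Restores `attr_accessible` on Rails 4. (It won't carry us past Rails 5,
# which is why we'll eventually need to move to Strong Parameters.)
gem 'protected_attributes'
```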
We'll need to replace the item search query stuff with direct MySQL queries, but that's not ready yet bc the app still isn't booting, so we're committing this in a known broken state for now!
I haven't logged into New Relic in a billion years; let's just stop sending them stuff.
(This is a precursor to an attempt to delete the flex stuff too, and replace our Elasticsearch queries with direct MySQL queries like Impress 2020 does, but that'll be more work!)
I guess the APIs changed here, but these were placeholder settings we weren't actually using anyway (cuz we use the OpenNeo ID integration), so I just commented them out and it seems fine for now!
NOTE: This doesn't boot yet! Something changed in the `devise` API that we'll need to fix:
```
/vagrant/config/initializers/devise.rb:46:in `block in <top (required)>': undefined method `encryptor=' for Devise:Module (NoMethodError)
```
But yeah, we navigated the gem upgrades, and I also ran `rake rails:update` and hand-processed the suggestions it had for our config files.
Rather than figure out how to upgrade the Stripe gem to be compatible with future Rails, I'd rather just delete the references, since it's currently unused.
I'm not so bold as to go in and fully trash all our donation code; I just want to ensure we're not sending people down broken codepaths, and that if they reach them, the error messages are clear enough.
I'm just giving the app a very quick scan on critical pages. It's possible I'm missing some issues on paths that are harder to test rn, like openneo_id auth, but I'll check in on that later, I think?
These are necessary for installing some of our gems!
Note the tricky bit where we need an older OpenSSL package when building Ruby 1.9.3, but need to uninstall it before installing `libmysqlclient-dev`, which requires a more recent version of `libssl-dev`. I thiiiink this is safe to do, but we'll find out!
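In Vagrantfile terms (Vagrantfiles are Ruby), the ordering looks roughly like this; a hypothetical sketch, with approximate package names and the Ruby build step elided:
```
Vagrant.configure('2') do |config|
  config.vm.box = 'ubuntu/bionic64'
  config.vm.provision 'shell', inline: <<-SHELL
    apt-get install -y libssl1.0-dev   # old OpenSSL, so Ruby 1.9.3 can build
    # ...build the patched Ruby 1.9.3 here...
    apt-get remove -y libssl1.0-dev    # conflicts with the newer libssl-dev
    apt-get install -y libssl-dev libmysqlclient-dev
  SHELL
end
```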
None of these are private repos, so there's no reason to use the authenticated git protocol to download them. (I guess this used to work because I had GitHub creds set up on the machine that was running the app, whereas right now it's running in Vagrant, so yeah, it makes sense that it wasn't an issue before!)
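The Gemfile shape of the change, roughly (`example` is a stand-in name):
```
# Before: SSH-style git URL, which needs credentials on the machine:
# gem 'example', git: 'git@github.com:openneo/example.git'

# After: plain HTTPS, which works anonymously for public repos:
gem 'example', git: 'https://github.com/openneo/example.git'
```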
I'm pretttty sure we fully do not need these: they were an attempt to solve the "contacting neopets.com is slow" problem, which we now solve by having other processes that are better at concurrency handle that request.
The intent is to set up for an upgrade of Ruby and Rails to the modern versions, but I want to start by having a stable running copy that we can incrementally pull up to new versions of things!
And it turns out getting Ruby 1.9.3 to build on modern platforms is hard! I started by trying on macOS and just couldn't get there, the instructions I found for workarounds didn't seem to work anymore.
So the solution I landed on was to set up an Ubuntu VM, and follow some instructions from https://stackoverflow.com/q/51986932 to patch Ruby to work with the version of OpenSSL we have access to!
And it was enough of a challenge that I figured that, rather than setting up the Vagrantfile elsewhere, it would be helpful documentation to keep it here, even if we scrap the Vagrantfile etc. later once we're in a new stable environment.
We set up `impress-asset-images.openneo.net` to redirect to the right asset, without needing to depend on AWS anymore for HTML5-converted items!
Our quick fix for this: always serve `has_image: true` to the frontend, so it always tries to use the image, regardless of whether we've marked it as converted in the database. (We've turned off the converters too!)
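A minimal sketch of the quick fix (the serialization details here are hypothetical): report every asset as converted, since the new host can serve any of them now.
```
class SwfAsset < ActiveRecord::Base
  def as_json(options = {})
    super(options).merge(has_image: true)  # ignore the stale DB flag
  end
end
```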
Oh, yeah, shit, okay, when we set `self.url` like that, it's supposed to be the _canonical_ URL for the SWF, not our proxied one—this is the URL that's gonna go in the database.
We do proxying late in the process, like when we're actually setting up to download something, but for just referencing where the asset lives, we use `images.neopets.com`.
In this change, we revert the use of `NEOPETS_IMAGES_URL_ORIGIN`, but we _do_ update this to `https` for good measure. (We currently have both HTTP and HTTPS URLs in the database; I guess neopets.com started serving different URLs at some point, and HTTPS is probably the future! Anything interpreting these URLs will need to handle both cases anyway, unless we do some kind of migration update situation thing.)
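Roughly, the rule we're enforcing (method names here are hypothetical): `url` stays canonical in the DB, and the proxy origin only gets swapped in when we actually fetch.
```
def canonical_url(path)
  "https://images.neopets.com/#{path}"  # what we persist in `url`
end

def download_url  # proxying happens late, only at fetch time
  origin = ENV['NEOPETS_IMAGES_URL_ORIGIN'] || 'https://images.neopets.com'
  url.sub('https://images.neopets.com', origin)
end
```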
We're migrating the incorrect assets with the following query (with the limit changed to match the number we currently see in the DB, just as a safety check):
```
UPDATE swf_assets
  SET url = REPLACE(url, 'http://images.neopets-asset-proxy.openneo.net', 'https://images.neopets.com')
  WHERE url LIKE 'http://images.neopets-asset-proxy.openneo.net%'
  ORDER BY id
  LIMIT 2000;
```
Okay, like in the previous commit, we're dealing with forced HTTPS, on a server whose TLS setup our old dependencies can't negotiate. And this time, I don't think there's a secret origin server that will accept `http://` requests for us.
Thankfully, we have the perfect hack in our back pocket: our own pre-existing images.neopets.com proxy server! I set the following in our secret `.env` file, and now we're good:
```
NEOPETS_IMAGES_URL_ORIGIN=http://images.neopets-asset-proxy.openneo.net
```
Oops, neopets.com finally stopped accepting `http://` connections, so our AMFPHP requests stopped working! And our current dependencies make it hard to make modern HTTPS requests :(
Instead, we're doing this quick-fix: we have a connection who knows the internal address for the Neopets origin server behind their CDN, which *does* still accept `http://` requests!
So, when `NEOPETS_URL_ORIGIN` is specified in the secret `.env` file (not committed to the repository), we'll use it instead of `http://www.neopets.com`. We still keep that default in the code as a fallback, though: partly to be a bit less surprising to some theoretical future dev, so they can see the real error message, and partly to self-document what that value is semantically doing! (The documentation angle is more of why it's there; I don't actually expect anyone in the future to run the code and hit the fallback.)
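The whole fallback is about this simple, roughly (the constant name is from this commit; the exact code may differ):
```
NEOPETS_URL_ORIGIN = ENV.fetch('NEOPETS_URL_ORIGIN', 'http://www.neopets.com')
```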
Okay, this one was weird: the reference to the Parallel gem in `pet.rb` just… stopped working? Is that some weird downstream consequence of something we changed today, or has it been broken for a long time and we just never ran that codepath? Seems… odd if we hadn't? But ok?
In any case, upgrading the gem seemed to fix whatever was causing it to not load. Ok!
There's a bug on Neopets.com that breaks links and images for *.openneo.net, on petpages specifically.
So, we've registered a new domain, and we're using that to serve outfit images now.
I'm a bit hesitant to add a new domain name to our like, permanent URL surface area, lol… but I'm not hearing back from TNT, and I already closed the doors on S3, so… here we are, whatever 😅
TNT started using HTTPS URLs! And our old Ruby version (lol 😬) still requires explicit invocation to perform SSL during a request, so requests were failing!
Now, we explicitly build the `Net::HTTP` object and turn on `use_ssl` when it's an HTTPS URL! (The shorthand invocation didn't seem to have an option for this, that I could find!)
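It's roughly this pattern (a simplified sketch; the real code is in the AMFPHP request path, and the URL here is just a stand-in):
```
require 'net/http'
require 'uri'

uri = URI.parse('https://www.neopets.com/')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = (uri.scheme == 'https')  # the shorthand helpers didn't expose this
response = http.request(Net::HTTP::Get.new(uri.request_uri))
```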
Here, we turn off the hooks that enqueue outfit image updates, and we disconnect the `OutfitImageUploader` that manages uploaded S3 URLs, instead replacing it with an `image` method that simulates the same basic API.
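A hedged sketch of the shim (the URL structure here is made up, see the commit for the real one): quack like the old uploader's API, but point at the Impress 2020 endpoint instead of S3.
```
require 'ostruct'

class Outfit < ActiveRecord::Base
  def image
    OpenStruct.new(
      url: "https://impress-2020.openneo.net/outfits/#{id}/v/#{updated_at.to_i}/600.png"
    )
  end
end
```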
This should cause _all_ views on Classic DTI to use the new outfit URLs. Some notable examples:
- The user's Outfits page
- The donations page
- The outfit page, and its sharing metadata
I hope I didn't miss anything in the views that will make stuff crash! I tested the new model code in the Rails console, and checked it against the invocations I noticed when searching the codebase for `outfit.image` 🤞
Oops, right, I meant to use the new `impress-outfit-images.openneo.net` host for this! It works just fine from `impress-2020.openneo.net` as the backing source right now, but I want these semi-permanent URLs to be a bit more decoupled.
As part of our project to get off S3 and dramatically reduce costs, we're gonna start serving outfit images that Impress 2020 generates, fronted by Vercel's CDN cache! This should hopefully be just as fast in practice, without requiring an S3 storage cost. (Outfits whose thumbnails are pretty much unused will be evicted from the cache, or never stored in the first place—and regenerated back into the cache on-demand if needed.)
One important note: the image at a given URL will no longer be guaranteed to auto-update to reflect changes to the outfit, because we're including `updated_at` in the URL for caching. (It also isn't guaranteed to _not_ auto-update, though 😅) Our hope is that people aren't using it for that use case so much! If they are, though, we have some ways we could build live URLs without putting too much pressure on image generation, e.g. redirects 🤔
This change does _not_ disable actual outfit generation, because I want to keep that running until we see these new URLs succeed for folks. Gonna wait a bit and see if we get bug reports on them! Then, if all goes well, we'll stop enqueueing outfit image jobs altogether, and maybe wind down some of the infrastructure accordingly.
We've been serving images directly from `impress-asset-images.s3.amazonaws.com` for a long time. While they're served with long-lasting HTTP cache headers, and the app requests them with the `updated_at` timestamp in the query string, each GET request still executes a full S3 GetObject operation to fetch the latest version.
In the past, this was only relevant to users on Image Mode, not Flash Mode. But now that everyone's on Image Mode, this matters a lot more!
Now, we've configured a Fastly host at `impress-asset-images.openneo.net`, to sit in front of our S3 bucket. This should dramatically reduce the GET requests to S3 itself, as our cache warms up and gains copies of the most common asset PNGs.
That said, I'm not sure how much actual cost impact this change will have. Our AWS console isn't configured to differentiate cost by bucket yet—I've started this process, but it might take a few days to propagate. All I know is that our current costs are $35/mo data transfer + $20/mo storage, and that outfit images are responsible for most of the storage cost. I hypothesize that `impress-asset-images` is responsible for most of the reads and data transfers, but I'm not sure!
In the future, I think we'll be able to bring our AWS costs to near-zero, by:
- Obsoleting `impress-asset-images`, by using the official Neopets PNGs instead, after the HTML5 conversion completes.
- Obsoleting `impress-outfit-images`, by using a Node endpoint to generate the images, fronted by a CDN cache. (We'd transfer the actual data to a long-term storage backup, and replace the S3 objects with redirects, so that old S3 URLs still work.)
I hope this covers a big slice of the costs, though! 🤞
(Note: I'll be deploying this on a bit of a delay, because I want to see the DNS propagate across the globe before flipping to a new domain!)
Oops: if you saved a `SwfAsset` outside of the modeling code, the `item` field would be empty, and so the `item.body_specific?` check would never run.
This would trigger even when you just reported a broken image!
Now, we always run the SQL query to check for that flag.
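The shape of the fix, roughly (the association and column names here are hypothetical): query the flag directly, instead of trusting `item` to be set on this instance.
```
def body_specific?
  zone_is_body_specific? ||
    Item.where(id: item_id, explicitly_body_specific: true).exists?
end
```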
Okay so, userlookup stuff hasn't worked in years, because it requires a login now.
But apparently, somewhere recently, the code inside our `neopets` gem started hard crashing, because of assumptions we made about the document we'd get back.
I'm not sure why it only recently started crashing? Or whether I'm even right about that?
But anyway, I'm just doing the easiest, safest (🤞🏻) change possible: being more generous with the errors we swallow.
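Roughly the shape of the change (the gem's API here is paraphrased, not exact): rescue parse errors from the document too, not just network errors, and treat them all as "no data".
```
begin
  hash = Neopets::User.new(username).lookup_hash  # hypothetical API
rescue StandardError => e
  Rails.logger.warn "userlookup failed for #{username}: #{e.message}"
  hash = nil
end
```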
Test Plan:
Deploy and cross fingers.
Okay, fine, finally making this controllable from the DB without requiring a deploy :P Setting this new field will cause `item.special_color` to return the corresponding color. This mainly affects what we show on the item page, and which colors we request for modeling on the homepage.
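Something like this, roughly (the column and helper names here are hypothetical):
```
def special_color
  manual_special_color || special_color_from_name  # the new DB field wins
end

def manual_special_color
  Color.find_by_id(manual_special_color_id) if manual_special_color_id?
end
```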
We recently flipped the switch for various hosts to force HTTPS, yay! This includes `neopia.openneo.net`.
However, I forgot to change the URL scheme in this file. This meant that the form submit from the homepage would go to `http://neopia.openneo.net/`, then redirect to `https://neopia.openneo.net/`, and only certain browsers would preserve the form data across that redirect. This change should fix that!
Note: This probably breaks the dev environment, where we don't have a cert for `https://neopia.dev.openneo.net`. I'll fix that some other time!
Interestingly, these items *are* correctly detecting their special color on the homepage for model progress. So, we *do* have the ability to detect this. But I don't have good item data locally, so this would be hard to test; I'm just gonna go with the cheap solution again, sorry XP