Right, previously we were querying "has *at least one asset* that is not in zone X" instead of "has NO assets that are in zone X".
I don't know a fast way to query for that, this will have to do for now!
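For reference, the distinction looks roughly like this in ActiveRecord terms (the `Item`/`SwfAsset` names and the `swf_assets` association are assumptions, and the subquery version is the "correct but probably not fast" shape):

```
# Wrong: "has at least one asset that is NOT in zone X". An item with any
# asset in some other zone still matches, even if it also has one in zone X.
Item.joins(:swf_assets).where.not(swf_assets: { zone_id: zone_id })

# Intended: "has NO assets that are in zone X". Exclude any item that has
# at least one asset in that zone. (Subquery, so probably not the fast way!)
Item.where.not(
  id: Item.joins(:swf_assets)
          .where(swf_assets: { zone_id: zone_id })
          .select("items.id")
)
```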
Not doing the tricks with `is_positive` anymore; instead, we just call different functions altogether at the call site.
Also, instead of classes, I feel like it's a lot more concise to write these as class methods that create certain instances of a trivial `Filter` data class. Without the tricks of `is_positive` in play, the value of full classes goes way down imo.
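Something like this shape, loosely (the filter names and fields here are illustrative, not the actual code):

```
# A trivial data class, plus class methods that build specific instances.
Filter = Struct.new(:key, :value) do
  def self.occupied_by_zone(zone_id)
    new(:occupied_by_zone, zone_id)
  end

  # Instead of one constructor with an `is_positive` flag, the negative case
  # is just a different function to call at the call site.
  def self.not_occupied_by_zone(zone_id)
    new(:not_occupied_by_zone, zone_id)
  end
end
```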
Ohh ok, without this change all of our `scope`s were evaluating their argument right away and fetching _all_ matching records at class-load time, instead of waiting to actually be called. This led to bugs like `pet_type.as_json` returning ALL pet states in the whole db, because the `PetState.emotion_order` scope was being treated as a single predefined query, rather than a query fragment to merge into the current context.
This also explains what happened in 724ed83: that's why things before the scope in the query were being ignored.
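The fix, roughly (the actual scope body is just illustrative here):

```
# Before: the body is evaluated once at class-load time, so the scope acts
# like a fixed, standalone query over every record.
scope :emotion_order, order("emotion ASC")

# After: the lambda defers evaluation until the scope is called, so it's a
# query fragment that merges into whatever relation it's chained onto.
scope :emotion_order, -> { order("emotion ASC") }
```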
lol again this is hard to test so uhh I hope this didn't break it all!! though tbh I feel like we removed this feature or something anyway? idk it stopped working in some way
Tbh I'm not 100% sure this is a fix, I'm not sure what `haml_concat` was doing here, and the page is still crashing so it's hard to say. But fingers crossed!
Idk why, but when the `select` was the first thing in the query, it was getting ignored. I wonder if there's something about the `object_assets` scope that I'm not understanding that's overwriting it? Or the `joins`? But whatever, this works, I'm not worried about it for now!
The controller was like "oh yeah we have that cached" (from previous renders of the app on Rails 3 I think?), but the view disagreed, bc it was appending a template digest to the cache key. That's a smart feature, but not compatible with how we skip queries in the controller, so disable it for now!
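If the route here is opting out per call site, the Rails 4 `cache` view helper does take an option for it (the cache key below is illustrative):

```
# Skip mixing the template digest into the fragment cache key, so the
# controller's idea of the key matches the view's again.
cache [pet_type, "basic_info"], skip_digest: true do
  # ... render the cached fragment ...
end
```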
Rails 4 removed `attr_accessible`, and we should move away from it too (I think we'll need to by Rails 5?), but for now we can install this and move on!
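Assuming "this" is the usual `protected_attributes` shim gem (the official extraction of this behavior), it's a one-line Gemfile addition:

```
# Gemfile: keeps attr_accessible / attr_protected working on Rails 4,
# buying time until we migrate to strong parameters.
gem 'protected_attributes'
```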
We'll need to replace the item search query stuff with direct MySQL queries, but that's not ready yet bc the app still isn't booting, so we're committing this in a known broken state for now!
I haven't logged into newrelic in a billion years, let's just stop sending them stuff
(This is a precursor to an attempt to delete flex stuff too and replace our elasticsearch stuff with direct mysql queries like Impress 2020 does, but that'll be more work!)
I guess the APIs changed here, but these were placeholder settings we weren't actually using anyway (cuz we use the OpenNeo ID integration), so I just commented them out and it seems fine for now!
NOTE: This doesn't boot yet! There's something changed in the `devise` API that we'll need to fix!
```
/vagrant/config/initializers/devise.rb:46:in `block in <top (required)>': undefined method `encryptor=' for Devise:Module (NoMethodError)
```
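For later: the likely directions for that fix (not applied yet, just a sketch). Newer Devise moved the encryptor setting out of core, so we either drop the `config.encryptor = ...` line from `config/initializers/devise.rb` if we don't actually rely on a legacy encryptor, or pull in the add-on gem that restores it:

```
# Gemfile: restores `config.encryptor=` and the legacy encryptors that
# Devise extracted out of core.
gem 'devise-encryptable'
```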
But yeah, we navigated the gem upgrades, and also I ran `rake rails:update` and hand-processed the suggestions it had for our config files.
Rather than figure out how to upgrade the Stripe gem to be compatible with future Rails, I'd rather just delete the references, since it's currently unused.
I'm not so bold as to go in and fully trash all our donation code; I just want to ensure we're not sending people down broken codepaths, and that if they reach them, the error messages are clear enough.
I'm just giving the app a very quick scan on critical pages, it's possible I'm missing some issues on paths that are harder to test rn like openneo_id auth, but I'll check in on that later I think?
These are necessary for installing some of our gems!
Note the tricky bit where we need an older OpenSSL package when building Ruby 1.9.3, but need to uninstall it before installing `libmysqlclient-dev`, which requires a more recent version of `libssl-dev`. I thiiiink this is safe to do, but we'll find out!
None of these are private repos, so there's no reason to use the authenticated git protocol to download the stuff. (I guess this used to work because I had github creds set up on the machine that was running the app, whereas right now it's running in Vagrant, so yeah makes sense that it wasn't an issue before!)
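The Gemfile change is basically this shape (gem name and repo are placeholders):

```
# Before: the authenticated git protocol, which needs GitHub credentials on
# whatever machine runs `bundle install`.
# gem 'some_gem', git: 'git@github.com:example/some_gem.git'

# After: plain HTTPS, which works anonymously for a public repo.
gem 'some_gem', git: 'https://github.com/example/some_gem.git'
```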
I'm pretttty sure we fully do not need these; they were an attempt to solve the "contacting neopets.com is slow" problem, which we now solve by having other processes that are better at concurrency handle that request.
The intent is to set up for an upgrade of Ruby and Rails to the modern versions, but I want to start by having a stable running copy that we can incrementally pull up to new versions of things!
And it turns out getting Ruby 1.9.3 to build on modern platforms is hard! I started by trying on macOS and just couldn't get there; the workaround instructions I found didn't seem to work anymore.
So the solution I landed on was to set up an Ubuntu VM, and follow some instructions from https://stackoverflow.com/q/51986932 to patch Ruby to work with the version of OpenSSL we have access to!
And it was enough of a challenge that I figured, rather than setting up the Vagrantfile elsewhere, it would be helpful documentation to do it here, even if we scrap the Vagrantfile etc. later once we're in a new stable environment.
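Roughly the shape of it (the box name and provisioning details here are illustrative, not necessarily exactly what's committed):

```
# Vagrantfile: an Ubuntu VM where we can build Ruby 1.9.3 with the OpenSSL
# patch from the Stack Overflow answer linked above.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y build-essential
    # ... install ruby-build, apply the OpenSSL patch, build Ruby 1.9.3 ...
  SHELL
end
```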
We set up `impress-asset-images.openneo.net` to redirect to the right asset, without needing to depend on AWS anymore for HTML5-converted items!
Our quick fix for this: always serve `has_image: true` to the frontend, so it always tries to use the image, regardless of whether we've marked it as converted in the database. (We've turned off the converters too!)
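The quick fix is basically a one-liner wherever `has_image` gets read (the exact model/serializer location is an assumption):

```
# Previously this reflected a "converted" flag in the database; now we just
# always say yes, so the frontend always tries the image URL.
def has_image?
  true
end
```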
Oh, yeah, shit, okay, when we set `self.url` like that, it's supposed to be the _canonical_ URL for the SWF, not our proxied one—this is the URL that's gonna go in the database.
We do proxying late in the process, like when we're actually setting up to download something, but for just referencing where the asset lives, we use `images.neopets.com`.
In this change, we revert the use of `NEOPETS_IMAGES_URL_ORIGIN` for the stored URL, but we _do_ update it to `https` for good measure. (We currently have both HTTP and HTTPS URLs in the database; I guess neopets.com started serving different URLs at some point, and HTTPS is probably the future! Anything interpreting these URLs will need to handle both cases anyway, unless we do some kind of migration update situation thing.)
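So the shape of things is roughly this (method names are illustrative, not the exact model code):

```
class SwfAsset < ActiveRecord::Base
  # `url` stays canonical: "https://images.neopets.com/...". That's what gets
  # written to the database.

  # The proxy origin only gets swapped in late, when we're actually about to
  # download the asset.
  def proxied_download_url
    origin = ENV['NEOPETS_IMAGES_URL_ORIGIN'] || 'https://images.neopets.com'
    url.sub(%r{\Ahttps?://images\.neopets\.com}, origin)
  end
end
```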
We're migrating the incorrect assets with the following query (with the limit changed to match the number we currently see in the DB, just as a safety check):
```
UPDATE swf_assets SET url = REPLACE(url, 'http://images.neopets-asset-proxy.openneo.net', 'https://images.neopets.com') WHERE url LIKE 'http://images.neopets-asset-proxy.openneo.net%' ORDER BY id LIMIT 2000;
```
Okay, like in the previous commit, we're dealing with forced HTTPS, on a server that isn't going to cooperate with our dependencies' HTTPS version. And this time, I don't think there's a secret origin server that will accept `http://` requests for us.
Thankfully, we have the perfect hack in our back pocket: our own pre-existing images.neopets.com proxy server! I set the following in our secret `.env` file, and now we're good:
```
NEOPETS_IMAGES_URL_ORIGIN=http://images.neopets-asset-proxy.openneo.net
```
Oops, neopets.com finally stopped accepting `http://` connections, so our AMFPHP requests stopped working! And our current dependencies make it hard to make modern HTTPS requests :(
Instead, we're doing this quick-fix: we have a connection who knows the internal address for the Neopets origin server behind their CDN, which *does* still accept `http://` requests!
So, when `NEOPETS_URL_ORIGIN` is specified in the secret `.env` file (not committed to the repository), we'll use it instead of `http://www.neopets.com`. We still keep that default in the code as a fallback, partly so a theoretical future dev is a bit less surprised and can see the real error message, and partly to self-document what that value is semantically doing! (The documentation angle is more of a reason for it being there than an actual expectation that anyone in the future will run the code and hit the fallback.)
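Concretely, the shape is something like this (the constant name is a guess):

```
# Use the secret origin server if configured; otherwise fall back to the real
# hostname, mostly to self-document what this value means. (The fallback will
# fail now that neopets.com requires HTTPS, but at least the error is honest.)
NEOPETS_URL_ORIGIN = ENV['NEOPETS_URL_ORIGIN'] || 'http://www.neopets.com'
```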
Okay this one was weird, the reference to the Parallel gem in `pet.rb` just, stopped working? Is that some weird downstream consequence of something we changed today, or has it just been broken for a long time and we just never ran that codepath? Seems… odd if we hadn't? But ok?
In any case, upgrading the gem seemed to fix whatever was causing it to not load in for whatever reason. Ok!
There's a bug on Neopets.com that breaks links and images for *.openneo.net, on petpages specifically.
So, we've registered a new domain, and we're using that to serve outfit images now.
I'm a bit hesitant to add a new domain name to our like, permanent URL surface area, lol… but I'm not hearing back from TNT, and I already closed the doors on S3, so… here we are, whatever 😅
TNT started using HTTPS URLs! And our old Ruby version (lol 😬) still requires explicit invocation to perform SSL during a request, so requests were failing!
Now, we explicitly build the `Net::HTTP` object and turn on `use_ssl` when it's an HTTPS URL! (The shorthand invocation didn't seem to have an option for this, that I could find!)
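Roughly the shape of the new request code (simplified; the actual AMFPHP request details are omitted):

```
require 'net/https'  # net/http plus OpenSSL support, works on old Rubies too
require 'uri'

uri = URI.parse(url)
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = (uri.scheme == 'https')  # explicitly turn on SSL for https URLs
response = http.request(Net::HTTP::Get.new(uri.request_uri))
```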