At one point we piloted a "Camo" service to proxy HTTPS image URLs for us, but it doesn't exist anymore.
We already have proxies and stuff for this, so I left `Image` as a placeholder, but it's not working yet!
This also deletes our final reference to the Addressable gem, so we can remove it!
I don't think these work anymore, and our volunteers get new items into the db fast anyway; Impress 2020 is doing better spidering these days. And then we get to remove the `whenever` cron gem!
Yay, we've deleted all our background tasks!
We'll probably want to replace some of the basic functionality like certain caching? But we can deal with that as we run into it.
The direct motivation here was a seeming version conflict between Rails 4.2's rack dependency and the latest Resque's rack dependency... but this is just nice complexity elimination regardless; we want this anyway :3
We've already swapped out the backend for this stuff to Impress 2020, so the Resque task and the broken image report UI aren't actually relevant anymore. Delete them!
This helps us delete Resque soon too.
Rails 4 removed `attr_accessible`, and we should move away from it (I think we'll need to by Rails 5?), but for now we can install this and move on!
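For future us: moving off `attr_accessible` means whitelisting attributes in the controller instead of the model. A minimal sketch of the Rails 4 strong-parameters pattern, using a hypothetical `Item` controller rather than code from this app:

```
# Hypothetical example, not our actual controller: strong parameters move the
# whitelist from the model (attr_accessible) into the controller.
class ItemsController < ApplicationController
  def update
    @item = Item.find(params[:id])
    @item.update(item_params)
    redirect_to @item
  end

  private

  # Replaces `attr_accessible :name, :description` on the model.
  def item_params
    params.require(:item).permit(:name, :description)
  end
end
```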
We'll need to replace the item search query stuff with direct MySQL queries, but that's not ready yet because the app still isn't booting, so we're committing this in a known broken state for now!
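When we do get to it, the replacement will probably be in the spirit of plain ActiveRecord lookups against MySQL; a very rough sketch with assumed model and column names, not the real query:

```
# Rough sketch (assumed schema): one LIKE per word, ANDed together, so every
# word has to match somewhere in the item name. (Wildcards in user input
# aren't escaped here; the real version should handle that.)
def search_items_by_name(words)
  words.reduce(Item.all) do |scope, word|
    scope.where("items.name LIKE ?", "%#{word}%")
  end.order(:name)
end
```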
I haven't logged into New Relic in a billion years; let's just stop sending them stuff
(This is a precursor to an attempt to delete flex stuff too and replace our Elasticsearch stuff with direct MySQL queries like Impress 2020 does, but that'll be more work!)
NOTE: This doesn't boot yet! Something changed in the `devise` API that we'll need to fix!
```
/vagrant/config/initializers/devise.rb:46:in `block in <top (required)>': undefined method `encryptor=' for Devise:Module (NoMethodError)
```
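Noting the likely culprit while it's fresh: `encryptor=` only exists when the `devise-encryptable` extension is loaded, so the fix is probably one of the two options sketched here (assuming we don't actually need a legacy encryptor):

```
# config/initializers/devise.rb — sketch of the offending area, not verbatim.

# This setter is no longer part of core Devise:
# config.encryptor = :restful_authentication_sha1

# Likely fixes:
#   1. Delete the `config.encryptor = ...` line and let Devise use its
#      default bcrypt hashing, or
#   2. Add `gem 'devise-encryptable'` to the Gemfile, which brings
#      `config.encryptor=` back for legacy encryptors.
```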
But yeah, we navigated the gem upgrades, and also I ran `rake rails:update` and hand-processed the suggestions it had for our config files.
Rather than figure out how to upgrade the Stripe gem to be compatible with future Rails, I'd prefer to just delete the references, since it's currently unused.
I'm not so bold as to go in and fully trash all our donation code; I just want to ensure we're not sending people down broken codepaths, and that if they reach them, the error messages are clear enough.
I'm just giving the app a very quick scan on critical pages; it's possible I'm missing some issues on paths that are harder to test right now, like openneo_id auth, but I'll check in on that later, I think?
This commit will require a few changes to the deployment process:
* store AWS credentials in AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars
* store OpenNeo Auth credentials in OPENNEO_AUTH_APP, OPENNEO_AUTH_SERVER, and OPENNEO_AUTH_SECRET env vars
STRIPE_SECRET_KEY and STRIPE_PUBLISHABLE_KEY are now required, too; better have a good .env.
You can always put in dummy values if you have no credentials, though; they only get called up if there's a transaction.
SECRET_TOKEN is now an env var, too, but the production deploy process currently replaces config/initializers/secret_token.rb anyway; this might be the way to go later, though.
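In sketch form, the initializer side of this looks roughly like the following; file names are illustrative, and the AWS piece assumes the v1 aws-sdk gem:

```
# config/initializers/stripe.rb (sketch)
Stripe.api_key = ENV['STRIPE_SECRET_KEY']

# config/initializers/aws.rb (sketch; assumes the v1 aws-sdk gem)
AWS.config(
  access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

# config/initializers/secret_token.rb (sketch; production currently
# overwrites this file on deploy anyway)
Rails.application.config.secret_token = ENV['SECRET_TOKEN']
```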
TNT has started serving half-removed Corridor of Chance effects:
they have the asset ID and URL and all, but the zone ID is blank.
RocketAMF has patched the empty key bug, and now we ignore assets
associated with empty keys.
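On our side, the guard amounts to something like this (variable names are illustrative, not the actual parser code):

```
# Illustrative sketch: now that RocketAMF can parse hashes with empty keys,
# just drop any asset entry whose key came back blank.
usable_assets = raw_assets.reject { |key, _asset| key.blank? }
```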
A few key changes:
* Don't reload the whole pet 8 times!! Sooo many bad things
happen, including redundant lookups of everything else and
too many item saves and reindexes. Instead, fetch the item
data, apply it to the items, and then save the items (once
each!)
* Updated my branch of globalize3 to be even better at avoiding
redundant queries when saving. Woo.
* Last realization: wrapping all the item saves in a single
transaction works wonders. COMMIT seems to have high overhead,
so doing only one took it from 50ms * 10 or whatever to 60ms.
Good stuff.
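That last bit boils down to roughly this shape (a simplified sketch, not the actual method):

```
# Simplified sketch: one COMMIT for the whole batch instead of one per item.
Item.transaction do
  items.each(&:save!)
end
```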
Confirmed features:
* Output (retrieval, sorting, etc.)
* Name (positive and negative, but new behavior)
* Flags (positive and negative)
Planned features:
* user:owns, user:wants
Known issues:
* Sets are broken
  * Don't render properly
  * Shouldn't actually be done as joined sets, anyway, since
    we actually want (set1_zone1 OR set1_zone2) AND
    (set2_zone1 OR set2_zone2), which will require breaking
    it into multiple terms queries (see the sketch after this list).
* Name has regressed: ignores phrases, doesn't require *all*
words. While we're breaking sets into multiple queries,
maybe we'll do something similar for name. In fact, we
really kinda have to if we're gonna keep sorting by name,
since "straw hat" returns all hats. Eww.
Use the ImageMagick flatten command to generate the output all at
once instead of compositing each layer individually, and download
the layers in parallel. On my box, saving roopal27 five times took
a total of 30 seconds before, whereas now it takes 7 seconds. I
expect it to be even better on the production box, where latency
is even lower.
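Roughly the shape of it, as a standalone sketch rather than the actual service code (the method name and download helper here are hypothetical):

```
require 'net/http'
require 'uri'
require 'tempfile'

# Hypothetical sketch: fetch every layer in its own thread, then hand the
# whole stack to ImageMagick once and let -flatten composite it in one pass.
def render_outfit(layer_urls, output_path)
  files = layer_urls.map { |url|
    Thread.new do
      file = Tempfile.new(['layer', '.png'])
      file.binmode
      file.write(Net::HTTP.get(URI.parse(url)))
      file.close
      file
    end
  }.map(&:value)

  # Layers are listed bottom-to-top; -flatten merges them onto one canvas.
  system('convert', *files.map(&:path), '-flatten', output_path)
ensure
  files.each(&:unlink) if files
end
```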
Lots of scary bugs were being caused by the fact that the possibly-duplicate Neopets ID
was being treated as an SWF's real primary key, meaning that a save meant for object SWF
number 123 could be saved to biology SWF number 123. Which is awful.
This update gives SWFs their own unique internal ID numbers. All external lookups still use
the remote ID and the type, meaning that the client side remains totally unchanged (phew).
However, all database relationships with SWFs use the new ID numbers, making everything
cleaner. Yay.
There are probably a few places where it would be appropriate to optimize certain lookups
that still depend on remote ID and type. Whatever. Today's goal was to remove crazy
glitches that have been floating around like mad. And I think that goal has been met.
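In sketch form, with model and column names as guesses rather than what's actually in the schema:

```
# Sketch (names assumed): external lookups still resolve an SWF by asset type
# plus the Neopets-side remote ID, while internal associations now join on the
# new surrogate primary key.
class SwfAsset < ActiveRecord::Base
  self.inheritance_column = nil  # `type` here means object/biology, not STI

  scope :remote, lambda { |type, remote_id| where(type: type, remote_id: remote_id) }
end

# Client-facing lookup keeps the same inputs as before:
asset = SwfAsset.remote('object', 123).first

# ...but anything joining to SWFs now uses asset.id, the unique internal ID,
# so an "object" SWF can never collide with a "biology" SWF that happens to
# share remote ID 123.
```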