Previously, passing in `fits:blue` would cause a crash, because the
`species_name` part of the split would be `nil`, oops!
In this change, we use a regex for more explicitness about the pattern
we're trying to match. We'll also add more cases next! (You'll note the
error message mentions `fits:nostalgic-faerie-draik`, which isn't
actually possible yet, but will be!)
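As a minimal sketch of what I mean (assuming a `color-species` format like
`fits:blue-acara`; the constant and method names here are just for
illustration, and the real code handles more than this):

```ruby
# Match `fits:<color>-<species>` explicitly, so a value like "blue" with
# no species part simply doesn't match, instead of crashing on a nil
# piece of a split.
FITS_PATTERN = /\A(?<color>[a-z ]+)-(?<species>[a-z]+)\z/i

def parse_fits(value)
  match = FITS_PATTERN.match(value)
  return nil if match.nil? # `fits:blue` falls through gracefully now
  { color_name: match[:color], species_name: match[:species] }
end
```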
Previously, the query wouldn't fill into the search box or page title
if e.g. parsing had failed. Now it does!
I'm not sure why the rescue strategy we previously had here doesn't
work anymore (I'm sure it must've worked at some point?), but this is
simpler anyway, let's go!
I think this is a bit clearer and lets us clean up some of the syntax a
bit (don't need to always say `filters <<`), and also it will let us
use `return`, which I'm interested in for my next change!
Right, fitting isn't just `body_id` = this one, it's also `body_id = 0`!
Anyway, doing this query on its own is still deathly slow, I wonder if
the idea I had about left joins (back when I was still working in a
Rails version that didn't support it lol) could help! Might poke at
that a smidge.
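For reference, the shape of the fix is roughly this (a hedged sketch, not
the exact scope in the codebase):

```ruby
# An asset "fits" a body if it's tied to that specific body, or to body
# 0, which (as I understand it) is the fits-everyone case.
scope :fits_body, ->(body_id) { where(body_id: [body_id, 0]) }
```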
Oh right, `imageUrl` is the name of the field relative to what the app
expects, but under the hood `useOutfitAppearance` actually makes that
an alias for `imageUrlV2(idealSize: SIZE_600)`.
So we need to cache it as the same field with the same params, rather
than as just plain `imageUrl`!
This fixes the bug where wearing an item from search would require a
network round-trip and visually remove all items in the meantime.
(Also, none of this issue was visible to most users, because item
search is still feature-flagged onto the old GQL one for most people!)
This makes clicking on search results in the new mode actually work! It
correctly adds it to the outfit, and removes other items.
The thing that's behaving strangely is that, when you add the item, we
visually remove all items until we can finish a fresh network request
for what they should all look like. This probably means that the cache
lookup for `useOutfitAppearance` is not as satisfied with what we cache
here as `findItemConflicts` is? Something to investigate!
It'd be nice to customize the message a bit, but this should be rare
and I'd prefer the simplicity of just going with the default text.
I ran into this when I made a mistake in how I process the return value
of search results, so React Query caught and raised the error via
React, as intended! And I was annoyed that it wasn't logged anywhere,
so that's my motivation for this change—but also, the old message is
pretty meh and has some layout problems anyway.
I feel like this was part of `will_paginate` back before the Rails
community had figured out what belongs in a model?
But yeah, a default per-page value for search results does not belong
here. And I don't think anything references it anymore, because we pass
`per_page` to the `paginate` call in `ItemsController` explicitly! So,
goodbye!
First off, I think our code has converged on a convention of gracefully
returning `nil` for manifest-less situations, so we can do that instead
of raising! And then that lets us just simplify this check to whether
`manifest` is present, instead of `manifest_url`, so we stop crashing
in cases where we get to this point in the code and there's a manifest
URL but not a manifest.
This was a bit tricky! When I initially turned it on, running
`rails swf_assets:manifests:load` would trigger database errors of "oh
no we can't get a connection from the pool!", because too many records
were trying to concurrently save at once.
So now, we give ourselves the ability to say `save_changes: false`, and
then save them all in one batch after! That way, we're still saving by
default in the edge cases where we're downloading and saving a manifest
on the fly, but batching them in cases where we're likely to be dealing
with a lot of them!
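Roughly the pattern, with `load_manifest!` as a stand-in name for the real
method:

```ruby
# Preload each manifest without saving, then persist them in one batch,
# instead of letting every record grab its own connection concurrently.
assets.each { |asset| asset.load_manifest!(save_changes: false) }
SwfAsset.transaction do
  assets.each(&:save!)
end
```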
Now we're *really* duplicating Impress 2020's system lol, but I
need a way to not keep trying to load manifests that are actually 404,
which are surprisingly plentiful!
This doesn't actually stop us from loading anything yet, it just tracks
the timestamps and the HTTP status! But next I'll add logic to skip
when it was 4xx recently.
This is not only unnecessary now, it also caused a bug in the new search
stuff where searching by zone would pass an extra `locale` argument to
a filter that doesn't need it!
Idk when this regressed exactly, but probably people didn't super
notice because I don't think it's a very common thing to type directly
into the Infinite Closet search box! (It used to be crucial to the old
wardrobe app.)
But I'm using it in the wardrobe app again now, so, fixed!
For now, I'm doing it with a secret feature flag, since I want to be
committing but it isn't all quite working yet!
Search works right, and the appearance data is getting returned, but I
don't have the Apollo Cache integrations yet, which we rely on more
than I remembered!
Also, alt styles will crash it for now!
`is:np` now means "is not NC and is not PB".
Note that it might be good to make NC and PB explicitly mutually
exclusive too? It would complicate queries though, and not matter in
most cases… the Burlap Usul Bow is the only item that we currently
return for `is:pb is:nc`, which is probably because of a rarity issue?
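In other words, as a wee sketch of the semantics (not the actual query
code; `nc?` and `pb?` are stand-ins for however those checks are really
implemented):

```ruby
# `is:np` is defined purely as the negation of the other two categories.
def np?
  !nc? && !pb?
end
```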
Adding new functionality to the item search JSON endpoint, and adding
an adapter layer to match the GQL format!
Hopefully this will be pretty drop-in-able, we'll see!
Clearing the way to be able to delete the announcement banner, which is
currently the only link!
I feel like there's room to redo the site layout to find a place to
more properly link to this from, but I don't have one yet! And this is
enough of a niche reference that I think this is good enough?
A bit of a hack, because the thing triggering it was also a bit of a
hack? I feel like there's something we gotta do with refactoring how
our multiple concepts of state are managed… but in any case! This seems
to keep basic outfit-loading working, while no longer getting us
trapped in autosave loops!
Here's how I reproduced the bug:
1. Open a saved outfit.
2. Set the browser devtools to apply a latency of 5sec to all requests.
3. Add an item to the outfit, and wait for the autosave to start.
4. While it still says "Saving", remove the item again.
5. Watch how, when that first autosave request completes, the item is
re-applied to the outfit, then autosave gets stuck looping forever.
The issue was that, when an outfit finishes saving, the change in
outfit data was triggering this effect in `useOutfitState` that was
*meant* to *initialize* local state from the saved outfit, not to keep
them in sync all the time. (In general, when saved outfit data comes
back from the server, we don't want to use it to "fix" local outfit
state in the case of discrepancies, because the most common source of
discrepancy will be the user having made further changes!)
But anyway, one thing I didn't realize is that we *were* depending on
this hacky hook to do more than I thought: it was responsible for
syncing `id` and `appearanceId` to the local state after saving the
outfit. So, I replaced the `rename` action dispatch here with a new
action that explicitly sets all fields the server is responsible for!
Ah yeah, if you're not on the wardrobe page (so we don't need the Alt
Styles UI), and the outfit's `altStyleId` is null (as is the case for
the item preview page), then there's no need to load the alt styles for
that species.
So before this change, going to `/items/123` would include an XHR
request to `/species/<id>/alt-styles.json`, which would not be used for
anything. After this change, that request is no longer sent. Hooray!
The alt styles controller is the one place we use this right now, but
I'm planning to generalize this to loading appearances during item
search, too!
I also add more `only` fields to the alt styles `as_json` call, because
idk it feels like good practice to both 1) say what we need in this
endpoint, rather than rely on default behavior upstream, and 2)
avoid leaking fields we didn't realize were on there. (And also to
preserve bandwidth, too!)
I think there are no call sites for these anymore, so now I can start
repurposing these methods for the new API endpoints I'm planning! :3
Now, `SwfAsset#image_url` approximately matches Impress 2020 logic: use
the thumbnail PNG from the manifest if one exists, or the Impress 2020
converter for canvas movies, or the old AWS copy generated by gnash if
necessary, or return nil.
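Something like this, in spirit (the helper names are approximations of the
real ones):

```ruby
def image_url
  # 1. Prefer the thumbnail PNG listed in the manifest, if there is one.
  return manifest_thumbnail_png_url if manifest_thumbnail_png_url.present?
  # 2. For canvas movies, fall back to the Impress 2020 converter.
  return impress_2020_movie_image_url if canvas_movie?
  # 3. Otherwise, use the old gnash-generated AWS copy, if we have it.
  return legacy_aws_image_url if legacy_aws_image_url.present?
  # 4. And if none of those exist, there's just no image.
  nil
end
```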
I built this API endpoint in anticipation of a change I never actually
made! I'll just remove it for now, leaning toward cleanuppery over
holding onto something I'm not sure about.
I think this used to be used in an API endpoint we've now deleted? I'm
just cleaning up call sites because I intend to refactor the `urls`
method and stuff, so I'm removing cruft that would complicate it!
I'm not certain-certain this is unused, but I did a global search for
`\bimages\b` in the codebase, and didn't find anything that looked like
a match to me!
Doing that sweet, sweet backfill!! It's not exactly *fast*, since
there's about 570k records to work through, but it's pretty good all
things considered! Thanks, surprisingly-reusable async code!
I'm gonna also use this for a task to try to warm up *all* the
manifests in the database! But to start, just a simple one, to prepare
the alt styles page quickly on first run. (This doesn't really matter
in production now that I've already visited the page once, but it helps
when resetting things in dev, and I think it's more about establishing
the pattern!)
The Neopets Media Archive is a service that mirrors `images.neopets.com`
over time! Right now we're starting by just loading manifests, and
using them to replace the hacks we used for determining the Alt Style
PNG and SVG URLs; but with time, I want to load *all* customization
media files, to have our own secondary file source that isn't dependent
on Neopets to always be up.
Impress 2020 already caches manifest files, but this strategy is
different in two ways:
1. We're using the filesystem rather than a database column. (That is,
manifest data is kinda duplicated in the system right now!) This is
because I intend to go in a more file-y way long-term anyway, to
load more than just the manifests.
2. Impress 2020 guesses at the manifest URLs by pattern, and reloads
them on a regular basis. Instead, we use the modeling system: when
TNT changes the URL of a manifest by appending a new `?v=` query
string to it, this system will consider it a new URL, and will load
the new copy accordingly.
Fun fact, I actually have been prototyping some of this stuff in a side
project I'd named `impress-media-server`! It's a little Sinatra app
that indeed *does* save all the files needed for customization, and can
generate lightweight lil preview iframes and images pretty easily. I
had initially been planning this as a separate service, but after
thinking over the arch a bit, I think it'll go smoother to just give
the main app all the same access and awareness—and I wrote it all in
Ruby and plain HTML/JS/CSS, so it should be pretty easy to port over
bit-by-bit!
Anyway, only Alt Styles use this for now, but my motivation is to be
able to use more-correct asset URL logic to be able to finally swap
over wardrobe-2020's item search to impress.openneo.net's item search
API endpoint—which will get "Items You Own" searches working again, and
whittle down one of the last big things Impress 2020 can do that the
main app can't. Let's see how it goes!
Preparing to finally move wardrobe-2020's item search to use the main
app's API endpoints instead!
One blocker I forgot about here: Impress 2020 has actual support for
knowing an item's true appearance, like by reading the manifest and
stuff, that we haven't really ported over. I feel like maybe I should
pause and work on the changes to manifest-archiving that I'd been
planning anyway? I'll think about it.
I changed the type of this tag without realizing the JS references it
by both class and tag name (`div`)!
I think at the time this was a perf suggestion for jQuery, because the
best way to query by class name was to query by tag first then filter?
It's possible our jQuery still does this, but I don't imagine it's very
relevant today, so I'll just remove the tag part of the selector
instead, to better guard against similar bugs in the future.
I've moved the support secret into the encrypted credentials file, and
moved the origin into a top-level custom config value in the
environment files, with different defaults per environment but still
the ability to override it. (I don't use this, but it feels polite to
not actually *demand* that people use port 4000, y'know?)
Okay, so I still don't know why rendering is just so slow (though
migrating away from item translations did help!), but I can at least
cache entire closet lists as a basic measure.
That way, the first user to see the latest version of a closet list
will still need just as much time to load it… but only for the lists
that have changed since last time (rather than re-rendering the full
page every time), and then subsequent users get to reuse it too!
Should help a lot for high-traffic lists, which incidentally are likely
to be the big ones belonging to highly active traders!
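In plain-Ruby terms, the caching idea is roughly this (the real thing is a
view-level fragment cache; key details here are illustrative):

```ruby
# Re-render a list's HTML only when the list has changed; otherwise
# reuse whatever a previous request already rendered.
html = Rails.cache.fetch(["closet-list", closet_list.id, closet_list.updated_at]) do
  render_closet_list_html(closet_list) # stand-in for the partial render
end
```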
One big change we needed to make was to extract the `user-owns` and
`user-wants` classes (which we use for trade matches for *the user
viewing the list right now*) out of the cached HTML, and apply them
after with Javascript instead. I always dislike moving stuff to JS, but
the wins here seem. truly very very good, all things considered!
From an era when we didn't have that! Now we do!
(My motivation is that I'm trying to add new JS to this page and errors
in stickUp are crashing the page early, womp womp!)
This one is important: I didn't notice that this way of setting
attributes won't write to both tables! `name` would only be written to
the translation table (which crashes the save), and the other fields
would only be written to the main table. Fixed! (I don't like the
super-dynamic way this code was written before, anyway.)
Missed this at first - now that the `name` field is just a normal field
and is always English, it's now an error to provide the locale to it as
a parameter, like we used to for the translated version of the field!
Like with Species, Color, and Zone, we're moving the translation data
directly onto the model, and just using English. This will simplify
some of our queries a lot (way fewer joins!), and it's what Neopets
does now anyway, and I have a secret hope that removing the complexity
along the codepath for `item.name` might help speed up large item lists
if we're lucky?? 🤞
Anyway, this is the first step, performing the migration to copy the
data onto the `items` table, making sure to keep them in sync for the
2020 app for now!
I think this was to explain why `order` wasn't part of this query, and
we probably used to sort in the controller? But now that the item search
module takes care of all that, this comment is just confusing to keep
around imo!
Impress 2020 has had this for a while, I've wanted it for reference on
occasion, let's bring it in!
Very similar logic, and Ruby & Rails's date affordances are super
helpful for simplifying how to express it!
The homepage used to point to old projects that don't work anymore
anyway! This is the only project that stuck, so just redirect here!
We also remove the openneo.net link from the footer, because there's
nothing useful to say there anymore!
It hasn't been updated in a long time, let's just be rid of it!
It's possible I'll replace it with another blog sometime if we get the
chance to do more development work, it could be a useful way to improve
communication—but not yet!
I think I cleared this from the outfits/new template a while ago, but
never cleaned up this file, because I was too anxious that I was
correctly identifying all its call sites. But now I'm more confident!
At least, they seem unused to me on a quick audit! The scriptaculous
stuff has long been replaced by jQuery UI equivalents. (Wow, so many
generations of libraries! lol)
Mostly this is just me testing out what it would look like to
modularize the app more… I've noticed that some concerns, like
fundraising, are just not relevant to most of the app, and being able
to lock them away inside subfolders feels like it'll help tidy up
long folder lists.
Notably, I haven't touched the models case yet, because I worry that
might be a bit more complex, whereas everything else seems pretty
well-isolated? We'll try it out!
Tbh I'm not sure `special_color` is actually used anywhere? It used to
be how we decide what to show in the previewer on the item page, but
that's been replaced with the 2020 logic, so idk…
But in any case, I noticed that the description doesn't match the
pattern we have, so here's the fix!
I looked at this and was like. "ok literally what is
`nonstandard_colors` trying to do"
Reading it again now, I'm realizing the idea is that it probably runs
two queries: one to get nonstandard colors, then depends on
ActiveRecord to implicitly convert the relation to an array and then to
IDs for the second query? Instead of doing a join??
Idk, it's unused, so trash it!
This used to be the behavior, and the site has plenty of graceful
fallbacks for it, I just forgot this one when doing Rails upgrades!
Note that the impress-2020 stuff is *not* as graceful about this, so
the wardrobe page won't show the pet until the color is in the DB. Ah
well, still an improvement!
I thought this refactor of this change was working, but actually it was
just failing to build the JS lmao. Here's a version with correct syntax!
😅
Is there a syntax for this kind of thing that I'm just forgetting? Idk,
oh well!
Okay right, the wardrobe-2020 app treats `state` as a bit of an
override thing, and `pose` is the main canonical field for how a pet
looks. We were missing a few pieces here:
1. After loading a pet, we weren't including the `pose` field in the
initial query string for the wardrobe URL, but we _were_ including
the `state` field, so the outfit would get set up with a conflicting
pet state ID vs pose.
2. When saving an outfit, we weren't taking the `state` field into
account at all. This could cause the saved outfit to not quite match
how it actually looked in-app, because the default pet state for
that species/color/pose trio could be different; and regardless, the
outfit state would come back with `appearanceId` set to `null`,
which wouldn't match the local outfit state, which would trigger an
infinite loop.
Here, we complete the round-trip of the `state` field, from pet loading
to outfit saving to the outfit data that comes back after saving!
Now that DTI 2020 has been deployed without references to the
translations tables, we can stop keeping them in sync!
Next step is to drop the tables and be done with them altogether! (I
have a backup of the public data for this too, as does this repo!)
This happens on the Baby Kougra, where for most poses half of the
assets have a manifest that includes an SVG but no PNG. Skip 'em!
I considered adding a glitch tag for this, but idk I think we can do
that once we're aware of an actual case where this causes visible
issues.
On the small-screen layout, the popover goes down and covers the item
list, which isn't a big deal in context; so I want it to be basically
as tall as it can be without being unwieldy, to give more info.
But on the large-screen layout, it doesn't take long at all for the
popover to start intersecting the pet preview, because it *has* to go
up and cover the pet preview. So, I'm much more reserved about how much
vertical space I'm willing to give it!
(I also considered sending the popover off to the *right* of the button,
to cover the item list, but it felt *way* too weird imo! Especially in
the expressions case.)
The intent of this glitch message was that, when UC or Invisible pets
hide an item because of a zones-restrict thing, it would still show up
in the items panel as fitting a certain zone, whereas it should have
been in the "Incompatible" section and having none of its zones applied.
But the previously implementation would like, show this message even for
items that _were_ correctly marked as Incompatible? And that the server
returned no layers for, because it doesn't fit this body type to begin
with? (e.g. put a Grarrl hat on a Grarrl, then switch to Acara, and the
Grarrl hat is marked Incompatible—but would also show this confusing
message; or similarly with switching to Alt Styles)
So, when the server just returned no layers for this item to begin with,
don't show this message!
Now that we're tracking tab state ourselves, it's pretty easy to just
pass the `initialFocusRef` to the right place instead of to both!
This helps switching between the tabs feel a lot smoother, because we
don't have to re-render and fade-in all the poses again.
I'm keeping it in the same place for now rather than trying to fold it
into Styles, because I think that's net-less-confusing (since Styles
work pretty differently, e.g. different color requirements), and
certainly less work either way lol!
For Alt Style outfits, it's useful to call special attention to the Alt
Style feature as the likely cause of incompatibilities.
(Incompatibilities previously were most often caused by choosing a
species-specific item, then switching to another species. We generally
make it hard to enter this state, by hiding incompatible items during
search.)
Now that the pose picker button can have text content too, things were
getting pretty cramped horizontally on smaller screens! Now we use the
`sm` size when it's small-time.
I'm not planning to port full Alt Style support over to the 2020
frontend, I really am winding that down, but adding a couple lil API
parameters is *by far* the easiest way to get Alt Styles working in
the main app because of how it calls the 2020 API. So here we are, just
using new parameters for DTI 2020 that I'm gonna deploy to impress-2020
first!
Specifically, I was looking at the new "Stormy Cloud Kacheek" items,
and was surprised to find that, in the outfit editor, they all get
grouped under "Markings" (and therefore the UI treats them as
mutually-exclusive via hidden radio button and only bolds one at a
time), but they aren't actually conflicting because they occupy
different zones named "Markings".
In this change, we make the zone groups actually just be *by zone*
rather than jumbling all of the zones with the same label together; but
in most cases, we still keep the same simplified display. In the case
of the "Stormy Cloud Kacheek" items though, we now get a few groups:
`Glasses`, `Markings (#6)`, and `Markings (#16)`. Glasses is chosen
by coincidence because it's the first zone label for that item
alphabetically (even though that item also occupies a third "Markings"
zone), and then the other two know to disambiguate from each other.
There's an opportunity here to cheat things further, like to
*intentionally* select items like "Glasses" that are less ambiguous
when possible. I'm not aware of enough other cases like this for that
to really matter, though, so I'm just leaving it as-is!
I tested this a *bit* on other outfits, and everything looked fine at
a glance, so I'm just moving forward—but I'll make an announcement to
ask people to help take a look!
Two motivations here:
1. Unconverted pets should no longer exist on Neopets.com (and we
especially don't expect new ones), so this logic helps no one.
2. The Baby Pteri keeps getting overridden to be marked as Unconverted
because it has only one asset, which is incorrect.
A decent heuristic for a bygone era, goodbye!
This wasn't causing big problems because we made resilient little
choices in a lot of little places… but it's confusing as a potential
state, and the Styles chooser wasn't selecting the "Default" option
after you switch—which is what tipped us off!
Yay it works(*)! But two major missing pieces:
- Outfit saving doesn't persist it at all
- Item compatibility is unaffected: items will still appear in search
and in the preview, even when they don't fit anymore.
To help with space, I'm just showing the word "Nostalgic" (or "???" if
it's from a series we don't recognize, this is hardcoded by ID), and
trusting that from context it will be obvious that it's the "Nostalgic
Faerie" case or whatever. (Moreover, in both the button and the select
we're omitting the species name, by similar reasoning!)
Note that this _still_ doesn't actually apply the style to the outfit
whatsoever; this is all just local state as we're continuing to play
with UI concepts. Actually applying it is probably next though! (Though
there's a couple more UI things I want to do, like some affordances to
clarify that a Style is applied and that Expression changes won't work.)
Oh right, React Query's API is slightly different, fixed! (Previously,
this would cause the PosePicker to show before all the data was ready,
so alt styles would sometimes pop in after the popover was already
open.)
On small screens, the PosePicker opens down, and we put the tabs on the
top, to be near the button.
On large screens, the PosePicker opens up, and we put the tabs on the
bottom, to be near the button.
Previously, we always set `placement="bottom-end"`, which on small
screens behaved as written, and on large screens there would not be
space to open downward so it would open upward instead.
Now, we set the placement explicitly based on a media breakpoint, and
we change the `flexDirection` of the tabs container on the same media
breakpoint.
I'm playing with using text to call more attention to this button, but
I'm not altogether pleased with the design yet. I'll leave it there for
me and Support users, but hide it for most people until we've got a
more complete concept.
That way, I can stop being on a branch and be working on main, and
deploy stuff to preview live, without having to share it with everyone
just yet! (This was the motivation for finally adding Support tooling
to main DTI lol!)
A little architecture trick here! DTI 2020 authorizes support staff
requests by means of a secret token, instead of user account stuff. And
our support tools still all call DTI 2020 APIs.
So here, we bridge the gap: we copy DTI 2020's support secret to this
app's environment variables (I needed to update
`deploy/files/production.env` and run `bin/deploy:setup` for this!),
then users with the new `support_secret` flag have it added to their
HTML documents in the meta tags. Then, the JS reads the meta tag.
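The meta-tag half of that bridge looks roughly like this (the helper name,
the flag check, and the env var name are approximations, not the exact
code):

```ruby
# In a view helper: only users with the support_secret flag get the
# secret embedded in their page, for the JS to read and attach to
# DTI 2020 API requests.
def support_secret_meta_tag
  return unless current_user&.support_secret?
  tag.meta name: "impress-2020-support-secret",
           content: ENV["IMPRESS_2020_SUPPORT_SECRET"]
end
```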
I also fixed an issue in the `deploy/setup.yml` playbook, where I had
temporarily commented some stuff out to skip steps one time, and forgot
to uncomment them after oops lol!
To activate this, I created a `.env.development` file in my project
root, with the following content:
```env
IMPRESS_2020_ORIGIN=http://localhost:4000
```
Then, I started impress-2020 with `yarn dev --port=4000`.
Now, the app loads from there, hooray!! It even fixes that obnoxious
pet state ID bug that happens when you run against the production db lol
Something in the Rails loader doesn't like that I have both a gem and
a lib folder named `RocketAMF`, I think? It'll often work for the first
pet load request, then on subsequent ones say `Envelope` is not
defined, I'm guessing because it scrapped the gem's module in favor of
mine?
Idk, let's just simplify all this by making our own module. I feel like
this old lib could use an overhaul and simplification anyway, but this
will do for now!
Like in 0dca538, this is preliminary work for being able to drop the
`zone_translations` table! We're copying the field over first, to be
able to migrate DTI 2020 safely before dropping anything.
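As a sketch, the "copy the field over first" migration is shaped something
like this (the column names and the migration version tag are assumptions):

```ruby
class CopyZoneLabelsFromTranslations < ActiveRecord::Migration[7.0]
  def up
    add_column :zones, :label, :string
    # Backfill from the English translation rows, so both apps can read
    # the same value while DTI 2020 still uses zone_translations.
    execute <<~SQL
      UPDATE zones
      INNER JOIN zone_translations
        ON zone_translations.zone_id = zones.id
        AND zone_translations.locale = 'en'
      SET zones.label = zone_translations.label
    SQL
  end

  def down
    remove_column :zones, :label
  end
end
```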
Non-English languages haven't been supported on Neopets for a while, so
I'd like to remove this extra cross-cutting complexity, especially
since it's now inconsistent with the real site anyway!
The main motivation is that I'd like to do this for items too, because
I have a hunch that all the complexity of `globalize` to read
`item.name` is part of what's making large user lists so slow to
render: lots of little objects getting created down the stack, and
needing to be garbage-collected later.
I'm not sure that's why! But I figure removing this complexity is a
simplicity win anyway, so let's do it!
Note that this doesn't *finish* the migration, it just starts it! The
`Species::Translation` and `Color::Translation` models still exist, and
still have their data, and not all references to them are scrubbed yet.
I especially don't want to delete the backing tables until both DTI and
DTI 2020 are ready for it!
So this change will someday be paired with another change to actually
drop the tables - after backing up the data for future records just in
case, of course!
Using good ol'-fashioned cookies! The JS sets it, and then Rails reads
it on pageload. That way, there's no flash of content for it to load in
after JS loads.
If your first wanted list was created before your first owned list,
then `false` would come before `true` in the keys of
`current_user_lists`.
I fixed this to be more consistent at the model level, because who
likes unpredictable behavior? But I also hardcoded downstream in the
view that true should come before false, because that's a UI concern
that I want encoded in the view regardless of what's upstream.
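The view half is as simple as iterating in a fixed order rather than
trusting the hash (a tiny sketch):

```ruby
# Owned lists (true) always render before wanted lists (false),
# no matter what order the keys came back in.
[true, false].each do |owned|
  lists = current_user_lists[owned] || []
  # ...render this group of lists...
end
```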
Oh right, I just remembered how tabs work, they join with the page when
selected! lol
Something about this still feels off to me, but idk, I find it charming
and I wanna just let it rock.
They can just ellipsize, that's fine, the full name is not essential
when you're just scanning the list; you're gonna pop it open and see
the full name during contact anyway.
It was a bit tricky to figure out the right API for this, since I'm
looking ahead to the possibility of splitting these across multiple
pages with more detail, like we do in DTI 2020.
What I like about this API is that the caller gets to apply, or not
apply, whatever scopes they want to the underlying hanger set (like
`includes` or `order`), without violating the usual syntax by e.g.
passing it as a parameter to a method.
I made two mistakes here! One is that you need to round up subpixel
line heights, and the other is that `clientHeight` is what I want, I
shouldn't include the margin!
I guess I deleted this a while ago without really noticing… I think I'd
at some point like to replace this with like, the DTI 2020 improved
table layout thing, but I figured this would be pretty quick to throw
in and make the page not feel like a pain to use lmao
Oh yeah, a long-standing limitation. Good thing we're better at stuff
now!
This is also probably the real cause of the weird number of slight
discrepancies between main DTI and DTI 2020 when I eyeballed stuff lol.
Oh well, that and the missing default-lists. A bit messy!
Ahh I see, the way we got away with not having a `trading` scope before
was a weird metaprogramming `{owned/wanted}_trading` situation. Okay,
let's trash that in favor of our new stuff! And that helps us do the
queries in bulk too, which is nice.
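The new shape, as a hedged sketch (`TRADING` is a stand-in for the real
trading-visibility condition, which actually involves the list / user
settings rather than a simple column):

```ruby
# One shared `trading` scope, instead of the old owned_trading /
# wanted_trading metaprogramming.
scope :owned,   -> { where(owned: true) }
scope :wanted,  -> { where(owned: false) }
scope :trading, -> { where(visibility: TRADING) }

# Call sites can now compose and batch: user.closet_hangers.owned.trading
```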
In impress-2020, we do a big slow query to figure out which users have
been active in trades recently. Now, we cache that timestamp on the
User model.
This won't have any immediate effect; it's to clear the way for Classic
DTI to receive the better trade ratios feature people like from 2020.
I also added some unit testing infra because I finally wanted it! for
all the ways you can trigger this timestamp lol
Note too that this is a bit of an unusually complex migration, but my
hope is that the batching and query structure and such helps it run
surprisingly fast! 🤞
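The cached-timestamp idea, as a sketch (the column and predicate names are
mine, not necessarily the real ones):

```ruby
class ClosetHanger < ApplicationRecord
  belongs_to :user
  # Whenever a trade-relevant change happens, stamp the user, so the
  # "active traders" check becomes a simple indexed comparison.
  after_save :touch_user_trade_activity, if: :affects_trading?

  private

  def touch_user_trade_activity
    user.update_column(:last_trade_activity_at, Time.current)
  end
end
```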
Poking at the site, I didn't love that clicking any of the NC items on
the homepage took a few seconds to load cuz of the OWLS request taking
a couple seconds.
If the request were faster to load I might decide otherwise, but for
now, let's just keep that cache long.
Another approach we could take would be to ask for an endpoint that
returns all the values at once, so we could cache and update them all
together, instead of having it all split into separate cache keys!
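For now it's just a long-lived cache entry per item, something like this
(the cache key, TTL, and fetch helper are illustrative):

```ruby
# Cache each OWLS value for a long time, since the upstream request is
# what's slow.
def owls_value_for(item_name)
  Rails.cache.fetch("owls/#{item_name}", expires_in: 1.month) do
    fetch_owls_value_from_api(item_name) # stand-in for the slow HTTP call
  end
end
```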
So this was a slightly wrong error message; what was actually happening was:
1. We try to load the image hash for this pet, by looking it up at
https://pets.neopets.com/cpn/PET_NAME/1/1.png and seeing what URL it
redirects to.
2. But pets.neopets.com was rejecting our User-Agent string, which
would've been just "Ruby", since we hadn't set it otherwise. I guess
that's an explicitly banned string?
I also found that the kind of more-helpful User-Agent string I like to
write was being rejected, and I could only get it to accept something
very simple? So that's what we're using now, I guess!!
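For reference, setting the User-Agent is just this kind of thing (the exact
string we landed on and the HTTP client wrapper are approximations):

```ruby
require "net/http"

pet_name = "somepet" # example
uri = URI("https://pets.neopets.com/cpn/#{pet_name}/1/1.png")
request = Net::HTTP::Get.new(uri)
request["User-Agent"] = "DressToImpress" # something very simple!
response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |http|
  http.request(request)
end
image_hash_url = response["Location"] # the redirect target is what we want
```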
Building toward replacing more of the 2020 data sources! I think this is
an endpoint that benefits from bulk loading, esp with the way the item
page previews work. I also like taking the concept of "canonical" out of
the GQL interface, and instead just loading for each of the 50 species
and letting the client decide. (And then it can fast-swap between them!)
The models folder is a bit confusingly large, these are more mixins and
kinda clutter it. Push them off into `lib`, I think!
I think they used to be in models mainly because Rails used to handle
`lib` differently with autoloading, and it made for a worse dev
experience. Now it's all the same, though!
Self-hosted Plausible instance! I have need of usage numbers again,
after a good few years of just not using it; but I don't want to send
the data to Google, and I enjoy self-hosting things, so here we have it!
We haven't used the mall spider in this app in forever (I guess we even
deleted the code at some point?), but there was some vestigial stuff
left. Goodbye!
There was a static page explaining it, which we no longer link to; and
there was an unused field in the User model for who was a beta tester
for it. Goodbye!
Just moving more stuff over! I modernized Item's `as_json` method while
I was here. (Note that I removed the NC/own/want fields, because I
think the only other place this method is still called from is the
quick-add feature on the closet lists page, and I think it doesn't use
these fields to do anything: updating the page is basically a full-page
reload, done sneakily.)
There was a time when I used an old proxy server to try to fix mixed
content issues, and I eventually removed it but never took the tendrils
out from the code.
We probably _should_ figure out how to secure these URLs! But until
then, we may as well simplify the code.
Ok progress! We've moved the info about what bodies this fits, and
what zones it occupies, into a Rails API endpoint that we now load at
the same time as the other data!
We'll keep moving more over, too!
I changed my mind again! At first I wanted to make the special case
clearer, and to be able to more strongly assert that the species is
not null. But now I'm like… eh, there's code that references `body.id`
that has no reason _not_ to work in the all-bodies case… let's just
keep the types more consistent, I think.
This is more similar to what impress-2020 does, I was working on the
wardrobe-2020 code and took some inspiration!
The body has an ID and a species, or is the string "all".
Preparing a better endpoint for wardrobe-2020 to use! I deleted the
now-unused swf_assets#index endpoint, and replaced it with an
"appearances" concept that isn't exactly reflected in the database
models but is a _lot_ easier for clients to work with imo.
Note that this was a big part of the motivation for the recent
`manifest_url` work—in this draft, I'm probably gonna have the client
request the manifest, rather than use impress-2020's trick of caching
it in the database! There's a bit of a perf penalty, but I think that's
a simpler starting point, and I have a hunch I'll be able to make up
the perf difference once we have the impress-media-server managing more
of these responsibilities.
Ok so, impress-2020 guesses the manifest URL every time based on common
URL patterns. But the right way to do this is to read it from the
modeling data! But also, we don't have a great way to get the modeling
data directly. (Though as I write this, I guess we do have that
auto-modeling trick we use in the DTI 2020 codebase, I wonder if that
could work for this too?)
So anyway, in this change, we update the modeling code to save the
manifest URL, and also the migration includes a big block that attempts
to run impress-2020's manifest-guessing logic for every asset and save
the result!
It's uhh. Not fast. It runs at about 1 asset per second (a lot of these
aren't cache hits), and sometimes stalls out. And we have >600k assets,
so the estimated wall time is uhh. Seven days?
I think there's something we could do here around like, concurrent
execution? Though tbqh with the nature of the slowness being seemingly
about hitting the slow underlying images.neopets.com server, I don't
actually have a lot of faith that concurrency would actually be faster?
I also think it could be sensible to like… extract this from the
migration, and run it as a script to infer missing manifest URLs. That
would be easier to run in chunks and resume if something goes wrong.
Cuz like, I think my reasoning here was that backfilling this data was
part of the migration process… but the thing is, this migration can't
reliably get a manifest for everything (both cuz it depends on an
external service and cuz not everything has one), so it's a perfectly
valid migration to just leave the column as null for all the rows to
start, and fill this in later. I wish I'd written it like that!
But anyway, I'm just running this for now, and taking a break for the
night. Maybe later I'll come around and extract this into a separate
task to just try this on all assets missing manifests instead!
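If I do extract it, it'd probably look something like this (the task and
method names here are hypothetical):

```ruby
namespace :swf_assets do
  desc "Try to infer manifest URLs for assets that don't have one yet"
  task infer_manifest_urls: :environment do
    SwfAsset.where(manifest_url: nil).find_each(batch_size: 100) do |asset|
      asset.infer_manifest_url! # stand-in for the guessing logic
    rescue StandardError => error
      puts "Asset #{asset.id} failed: #{error.message}"
    end
  end
end
```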
Ahh, I guess I missed these; I think it's because they're maybe not
actually used in the app? cuz they're all default values that are
overridden at the actual call sites. But I ran into it when running
`Pet.load` in the
console, and yeah let's just fix 'em up!
This hasn't worked for a while, and I don't know an API off the top of
my head to drop in for it. Let's just delete it for now, and revisit it
later if we want to!
Dang, I'm really wishing I'd opened this sooner cuz I didn't realize it
would be THIS easy!!
The bug was that the `t` method started taking Ruby keyword params
instead of a hash object for `options`, so the syntax changed.
Womp womp!
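Concretely, the change is just in how the options get passed (the key and
options here are made up, in view/helper context):

```ruby
options = { name: "Matchu" }
# Before: t(".greeting", options)    # options as one hash argument
# After:
t(".greeting", **options)            # options splatted as keywords
```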
Really don't know why this wasn't a problem with Apollo (or was it??),
but yeah, don't save when there's a save error!! Then we reset the
mutation state when the outfit state changes.
It's weird to be reading this code and be like. was this not always an
issue? Maybe something in Apollo prevented this? Did we use optimistic
UI or something? Idk?
There's still an issue with it infinitely retrying in an error state
though.
Just sharing this out to gather info, since this might be coming kinda
soon!
I also moved the announcement higher up in the template, because it
gets broken on the user lists page which uses floats quite a bit for
the site header—and tbh I feel like this is better anyway lol.
In the impress-2020 app, we use this to prepopulate certain GraphQL
data into the Apollo cache when SSR'ing a page. We don't do that here,
so, goodbye!
The wardrobe-2020 app had a cute drawer that embeds the item page, but
honestly I don't think it was that valuable, and especially not when it
means we have to basically maintain two item pages lol. Let's decrease
the surface area!
A really really simple change! It works on the item page, the item
index page, item search, the homepage, and the item lists page.
The main reason I avoided this for so long (even before modernizing the
Rails app) was that the ElasticSearch stuff felt like it made it messy?
But now it's pretty simple, and it works in search already cuz I did
that when I implemented item search, so, nice!
Ok cool, I have just not been running any of this since moving out of
impress-2020, but now that we're doing serious JS work in here it's time
to turn it back on!!
1. Install eslint and the plugins we use
2. Set up a `yarn lint` command
3. Set up a git hook via husky to lint on pre-commit
4. Fix/disable all the lint errors!
The server API models outfits a bit differently (underscore keys,
integer IDs for things), but rather than letting that leak into the rest
of the app, I convert it to the familiar field names and expected types!
This came in a few parts!
1. Add meta tags to let us know we're logged in.
2. Install React Query, which has the data-loading sensibilities I like
about Apollo without the GraphQL that has honestly been a drag.
3. Replace the outfit-loading and outfit-saving calls with API calls to
the main app.
4. Update the main app's API calls to use our more flexible data
constructs like "pose".
Would've loved to do this more incrementally, but it's hard to! You
can't split out outfit-loading and outfit-saving, or auth from any of
that, or the state gets all out-of-sorts.
Still, this is a good nugget we've pulled out all-in-all, and one that
people have been asking for! Can maybe look to logged-in item search
soon too, for own/want data?
Moreover, this code is in a bit of a flimsy state anyway: it'll kinda
work if you're logged in at impress-2020.openneo.net, but that wasn't
intentionally engineered, and I'm not sure exactly what circumstances
cause those cookies to send vs not send?
My intent is to clean up to replace all this with login code that
references the main app instead… this'll require swapping out some
endpoints to refer to what the main app has, but I'm hoping that's not
so tricky to do, since the main app does offer basically all of this
functionality already anyway. (That's not to say it's simple, but I
think it's a migration in the right direction!)
I used the new profiler tools on this page, and noticed a lot of
allocations in the Globalize library, which we use for translating
database records. I realized that we were loading all of the fields of
not just all of the items on the page, but all of their translation
records in all locales! We used to scrape data for lots of languages, so
that can be quite a lot!
Unfortunately, Rails's `includes` method to efficiently preload related
records always loads all fields, and simply can't be overridden.
So, in this change we write manual preloading code, to identify the
records we need, load them in big bulk queries, and assign them back to
the appropriate associations. Basically just what `includes` does, but
written out a bit more, to give us the chance to specify SELECT and
WHERE clauses!
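The shape of the manual preloading is something like this (it leans on
ActiveRecord association internals, and the exact fields differ, but it
shows the idea):

```ruby
items = Item.where(id: item_ids).to_a

# One bulk query, with only the locale and fields we actually need.
translations_by_item_id =
  Item::Translation.where(item_id: items.map(&:id), locale: "en")
                   .select(:id, :item_id, :locale, :name)
                   .group_by(&:item_id)

# Assign them back, so `item.translations` doesn't trigger new queries.
items.each do |item|
  association = item.association(:translations)
  association.target = translations_by_item_id[item.id] || []
  association.loaded!
end
```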
It shows up in development always, and if you're logged in as Me
Specifically in production!
I'm using this to poke at memory usage for pages that seem suspicious.
I don't know why our app reliably grows so large in RAM, but my hunch is
that maybe there are some pages that just use a truly large amount to
begin with - and I've learned Ruby doesn't release memory back after
it's GC'd, it just grows the process and keeps the free space to itself
in its own heap!
So I'm just eyeing pages that I know *can* have a lot going on, and
seeing what I find!
We used to do this for weird clever caching tricks that I don't think
were actually very effective. We stopped using this a few months ago,
and now I'm finally cleaning up this supporting code!
Huh, Arel can *sometimes* handle just having an attribute stand in as
"X is true" in a condition, but sometimes gets upset about it. I guess
this changed in Rails since we recently wrote this?
Specifically, item search would crash on "is:nc" (but *not* "is:np"),
saying:
```
undefined method `fetch_attribute' for #<struct Arel::Attributes::Attribute relation=#<Arel::Table:0x0000000109a67110 @name="items", @klass=Item(…), @type_caster=#<ActiveRecord::TypeCaster::Map:0x0000000109a66e90 @klass=Item(…)>, @table_alias=nil>, name="is_manually_nc">
```
The traceback was a bit misleading (it happened at the part where we
merge all the scopes together), but that hinted to me that it was
working with an attribute in a place where it expected a conditional. So I
converted the attribute in the `is_nc` scope to a conditional, and made
the matching change in `is_np`, and that fixed it! Ok phew!
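The fix, roughly (ignoring whatever else the real scopes check):

```ruby
# Before: handing Arel a bare attribute, which newer Rails won't treat
# as a condition when merging scopes.
scope :is_nc, -> { where(arel_table[:is_manually_nc]) }

# After: make it an explicit conditional.
scope :is_nc, -> { where(arel_table[:is_manually_nc].eq(true)) }
```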
The URL anchors were getting like. double-encoded? The `closet[]` part
was encoding as `closet%255B%255D`. Maybe a thing in Rails, where you
need to mark them `html_safe` to insert them in a URL like that?
Well anyway, those URLs are redundant now, I just have it link straight
to the same outfit page as the big link!
Now, like in DTI 2020, opening an outfit will go straight to the editor.
I'm not 100% on whether this is actually like. the superior behavior?
But I think it's good enough, and it's what the wardrobe-2020 code
expects, so let's just roll with it for now!
Ohh I see, I made a mistake converting this from Next.js routing. It's
not that we had a URL search parameter named `outfitId`; it's that if
you were coming from the `/outfits/:outfitId` route, it would use that!
I still haven't gotten the rest of the site to point that route to this
page, but I'll do that in a later change.
Notable things:
- We used to have the parameters in the hash (`#`) part of the URL.
- We used to use the key `outfit=123` instead of `outfitId=123`.
In this change, we add backwards-compatibility for these things, while
still keeping the latest behavior too, with no change to the URLs we
generate!
Looks like the version of Prettier I just installed is v3, whereas our
last run in the impress-2020 repo was with v2. I don't think we had any
special config in that project, I think these are just changes to
Prettier's defaults, and I'm comfortable accepting them! (Mostly seems
like a lot of trailing commas.)
Idk if this used to be different or what, but it looks like the current
behavior is: if you delete a closet list, it'll leave the hangers
present; Classic DTI would not show them anywhere, but Impress 2020
(until recently) would crash about it.
Now, we use `dependent: :destroy` to delete the hangers when you delete
the list (which I think makes sense, and is different than what I
decided in the past but that's ok, and is what the current behavior
*looks* like to people!), and we add a migration that deletes orphaned
hangers.
The migration also outputs the deleted hangers as JSON, for us to hold
onto in case we made a mistake! I'm also backing up the database in
advance of running this migration, just in case we gotta roll back HARD!
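Roughly the two pieces (names, the migration version tag, and the
association details are approximate):

```ruby
# In the model: deleting a list now deletes its hangers too.
class ClosetList < ApplicationRecord
  has_many :hangers, class_name: "ClosetHanger", dependent: :destroy
end

# In the migration: clean up hangers already orphaned by past deletions,
# printing them first so we have a record if we made a mistake.
class DeleteOrphanedClosetHangers < ActiveRecord::Migration[7.0]
  def up
    orphans = ClosetHanger.where.not(list_id: nil).where.missing(:list)
    puts orphans.to_json
    orphans.delete_all
  end
end
```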
This is an important workflow for people doing art stuff, I'm told! They used to use the Classic DTI broken image UI for this, but now that that's uhh Fully Gone, let's add this more explicitly!
Ah right, the CSS reset only applies in the ScopedCSSReset container, which doesn't work for elements portaled out with the <Portal> component (which a LOT of Chakra components use for things like tooltips etc).
Here, we take advantage of <Portal> having a hardcoded classname .chakra-portal, and applying it to them too!
This was used by the Neopia server to send us the modeling data it requested out-of-band. But now we do all our modeling requests back in-app again, so we don't need this!
Okay, this is a process that idk if it's even been working for a while anyway, I don't think Neopets translates item names anymore?
And it's crashing when I try to model stuff now, so like. yeah ok I'm fine with just skipping this, it's a shame to lose out on potential data going forward but *I think there just isn't data to get anyway*
I think we used this for both conversion to image, and also for CORS stuff when rendering Flash-based previews… let's trash it, I don't want to be growing our hard drive with files I don't think we use anymore!
If I'm wrong and it turns out we do use them for something, then like. hey I'm sure we'll find out soon enough, and it's a very recoverable operation.
I hope this doesn't cause problems! But yeah, with Puma doing threading, and maybe switching to Falcon someday to get even better concurrency properties, I feel like this will probably be fine?
And it makes the UX a loootttt better, to be back in the world where all these forms just work, whew.
Oh okay, I was misinterpreting the error: it was that our NEOPETS_URL_ORIGIN secret value isn't the real Neopets.com IP address anymore, so amfphp requests were just plain *always* failing in production. Oops!
I've removed that environment variable from our production config, and now modeling is working in the bulk thing!
Also I'm noticing that we're using puma these days, which does good threading stuff. I think there might be merit to switching over to Falcon because of just how async-y our stuff is, but having 5 threads going is honestly probably good enough that I don't need to worry too much about mutual blocking, and could probably just write stuff to get Neopia out of the picture like *right now*. Neat!
Okay so… I'm worried about this because of Rails' whole single-threaded situation, which doesn't really let it handle blocking on external network requests very well.
Ultimately I think we're gonna have to do a clever thing but idk quite what?
I should look into whether like, puma + the new async stuff can enable Rails to be more tolerable about this, and handle a few requests at once, instead of having to have the Neopia server doing it. (Right now, the Neopia server isn't really doing its job quite right, because it depends on the Rails app being *local* to send stuff to it.)
But for now, let's just extend the timeout, cuz it's basically always getting hit in production—because there's currently no other way to do modeling, oops lol
Just find_all_by's that I never cleaned up
Oddly enough, I still got a "neopets seems down" message out of this, idk if that's an actual bug or just sluggishness rn
Okay, right, if we're just using www.neopets.com (like we are for now), it fails on http://www.neopets.com because it triggers a redirect that we don't follow.
So here I 1) change the default to HTTPS, and 2) add HTTPS support to our little RocketAMF lib
Just cleaning up a bit! I'm sure there's more to remove, these were just some clear candidates: old wardrobe code, and stuff in `public` that I just fully don't recognize and don't think is doing anything? (We'll find out if something crashes though lol!)
Oops, this was causing the page to render in a weird zoomed-out way on mobile!
Note that, for most of the site, we intentionally haven't added this tag yet because most of our pages aren't especially responsively-designed; so we _want_ the device's best attempt to work with that, rather than trying to enforce something.
This required a buncha fixes to how SASS scoping works! Needed to add a bunch of imports for stuff that previously would get read from the global scope by being imported *after* the constants and mixins etc.
There's clearly a lot of refactor opportunity here, but I'm not gonna worry about it!!
I wasn't sure what we were actually using it for, turns out it was mostly polyfills for CSS features that are very standard now!
I didn't audit these changes very carefully tbqh because they seemed pretty simple? Fingers crossed!
Eyyy tasty! There were some issues with conflicting styles with the main app, but I think we got it!
Scoping Chakra's CSS reset was a big deal to not accidentally overwrite the app's own styles lol, and we had to solve a specificity problem for that, thanks Aria for the :where tip!! <3
We never had a specific reason why we didn't use the router for this I don't think? Not that I wrote down anyway. Let's just switch it over and see what happens!
I mainly did this as a misdiagnosis of the page reload problem fixed in c162864, but it seems like a good idea to try out anyway!
This I think is why the page was reloading when you try to item search? The failed import was triggering our "hey maybe this is an old module URL that got deleted" code?
We add jsbuilding-rails to get esbuild running in the app, and then we copy-paste the files we need from impress-2020 into here!
I stopped at the point where it was building successfully, but it's not running correctly: it's not sure about `process.env` in `next`, and I think the right next step is to delete the NextJS deps altogether and use React Router instead.
Nice, just turning it on seemed to do all we need for now!
Fair questions to be asked about like, should you be able to look up by username instead of email? But like idk, this feels simpler *and* more solid, to give you feedback on if it's the right email.
In the login case, we save the `return_to` parameter in the session, because login can be a multi-step process.
In the logout case, we just read it directly from the form params.
Note that you *could* end up in a weird scenario where an old return_to value sticks around for a bit? But we have the sense to delete it when we use it on a successful sign-in, and most links to the login page come with a `return_to` param which should reset it. So, you'd have to 1) have started but not finished a sign-in, 2) during the same session, and 3) get to the login page by an unusual means.
Probably fine!
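As a sketch, the two paths look something like this (in practice this hooks into Devise's controllers, so the real code is spread out a bit differently):

```ruby
# Login is multi-step, so stash return_to in the session when the login
# page is first rendered...
def new
  session[:return_to] = params[:return_to] if params[:return_to].present?
  super
end

# ...and consume it once sign-in actually succeeds.
def after_sign_in_path_for(resource)
  session.delete(:return_to) || super
end

# Logout is a single form POST, so just read the param directly.
def after_sign_out_path_for(resource_or_scope)
  params[:return_to].presence || super
end
```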
This is a bit more standard, and has the bonus of being compatible with Devise, which is using `flash[:notice]` and so its flashes were coming out unstyled, oops!
Hey nice!!
Note that I removed an account delete button from the settings page. You can still send a DELETE request to the right endpoint to do it, but it's not gonna delete all the associated records, and I wanna think a bit about how to handle that better before exposing that button.
I noticed this was blocking changes to your default list visibility, bc the contact Neopets connection can't be empty, so I fixed that!
And then I just decided to scroll through every `belongs_to` relationship and add optional to the ones that jumped out at me lol
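The visibility fix was basically just this kind of thing (the association name is approximate):

```ruby
class User < ApplicationRecord
  # Rails 5+ validates that belongs_to targets exist by default; this
  # association is legitimately blank for lots of users, so allow it.
  belongs_to :contact_neopets_connection, optional: true
end
```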
A lot of rough edges here (e.g. no styles on the flash messages), but it's working and that's good!!
I tested this by temporarily switching to the production database and logging in as matchu!
Still missing a lot of big features too, like registration, password resets, settings page, etc.
This removes login/logout/session logic for integrating with OpenNeo ID, replacing them with stubs that just redirect to `/?TODO` when you click login, and helpers that act as if you're not logged in.
This gives us a clean slate to plug in new Devise logic to integrate with the `openneo_id` database directly!
No user-facing functionality here yet, just configuring the database connection to work with openneo_id records.
This is a first step in integrating Devise stuff into this app instead of connecting with a weird second app.
My basic testing for this was to temporarily connect to production `openneo_id`, and see `AuthUser.first` correctly return a user!
I had added this many Rails versions ago during the recent upgrade process, because it was in latest Rails but not in the version of Rails I was using when replacing Elasticsearch with MySQL queries. We can remove it now!
lmao I keep forgetting things! Note that the negative case of this filter, like the negative case of `fits`, is currently broken because Rails changed the default SQL mode and I didn't notice! We'll need to add a `database.yml` file and set `sql_mode: TRADITIONAL`.
Whew! Seems like a pretty clean one? Ran `rails app:upgrade` and stuff, and made some corrections to keyword arguments for `translate` calls. There might be more such problems elsewhere? But that's hard to search for, and we'll have to see.
This one was pretty straightforward yaay! Main thing was the change from `render file` to `render template` in a couple places, oh and a thing with complex `order()` clauses.
I ran `rails zeitwerk:check`, which eager-loads the app, and it found two problems: `closet_group.rb` doesn't define `ClosetGroup` (cuz it's empty), and I left in a reference to a cache sweeper observer oops. Goodbye!
Rails 5 added new validation on `belongs_to` to ensure the corresponding record exists. In the case of moving to the null list, this shouldn't trigger!
I wish we could flag that specifically `nil` is okay, but other values should be validated? But oh well, this is fine!
Ok so weird little situation, usually Arel will accept an attribute as a param to `order()`, but not when it's in a very specific situation of all of the following:
`Item.joins(:translations).includes(:translations).limit(30).order(Item::Translation.arel_table[:name])`
For some reason, it's all like "hey I can't call `to_sql` on an attribute!", but only in the scenario where all 3 of those other things are present. Weird!
Anyway, explicitly saying `.asc` fixes this. Ok!
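So the working version of that call is just:

```ruby
Item.joins(:translations).includes(:translations).limit(30)
    .order(Item::Translation.arel_table[:name].asc)
```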
Some important little upgrades but mostly straightforward!
Note that there's still a known issue where item searches crash, I was hoping that this was a bug in Rails 4.2 that would be fixed on upgrading to 5, but nope, oh well!
Also uhh I just got a bit silly and didn't actually mean to go all the way to 5.2 in one go, I had meant to start at 5.0… but tbh the 5.1 and 5.2 changes seem small, and this seems to be working, so. Yeah ok let's roll!
Some tricks required here to get the dependencies to work out, but we got it!!
Oh also, we move away from the rbenv in Ubuntu's package manager, because it doesn't support more recent Rubies like 2.4.10.
This labeling technique hasn't worked in a long time bc it requires being logged in. These days we just manually label them with the 2020 support tools I think!
Clearing out the Neopets gem should help us manage some gem dep conflicts in the 4.2 upgrade too (I think the nokogiri one gets tricky?)
At one point we piloted a "Camo" service to proxy HTTPS image urls for us, but it doesn't exist anymore.
We already have proxies and stuff for this, so I left `Image` as a placeholder, but it's not working yet!
This also deletes our final reference to the Addressable gem, so we can remove it!
I don't think these work anymore, and our volunteers get new items into the db fast anyway; Impress 2020 is doing better spidering these days. And then we get to remove the `whenever` cron job gem!
Using `s3_path` and stuff made it sound like we were still referencing the original Amazon S3 images - but actually our new asset proxy just uses the same path structure, and we didn't change anything about it.
Oh also I deleted an `after_conversion` method that isn't used anymore, forgot about that!
We've already swapped out the backend for this stuff to Impress 2020, so the resque task and the broken image report UI aren't actually relevant anymore. Delete them!
This helps us delete Resque soon too.
Idk this one might actually be a bit of a pain to load? But I'd want to optimize it differently anyway, and there's overhauls we're already planning to do here.
Huh! This cache key seemed to only be referenced in checks and expirations, but was never actually used! So I guess we've been loading the modeling predictions every time for a while huh??
We'll get smarter about that someday, but anyway, that lets us delete our Item resque tasks and ItemObserver!
Again, I'm just not convinced of the perf benefit here, and removing it lets us delete a whole piece of infra. We can improve it another time if it turns out to be useful!
Just removing some caching and the expiration of it! There's still more superfluous(?) caching on the item page to audit, but these seem a bit more sensible about avoiding loading extra data.
In the interest of clearing out Resque, I'm just gonna remove a lot of our more complex caching stuff, and we can do a perf pass for things like big item list pages once everything's upgraded. (I'm hopeful that the upgrades themselves improve perf; and if not, that some improved sensibilities 10 years later can find simpler approaches.)
We uninstalled Flex, our Elasticsearch gem, to replace item search with direct DB queries; but I forgot these calls, oops!
I also kinda want to see about deleting the resque tasks altogether, since I'm not sure how to get Resque installed on latest rails bc there seems to be a conflict over the version of Rack? And it'd be nice to get rid of the complexity if we can.
Back in the day, `all` would immediately load up a query into an array, but now I think it's an alias for what `scoped` used to be: a relation that contains everything.
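In other words (just an illustration of the behavior change, not code from this commit):
```ruby
# Rails 3: `Item.all` loaded every item into an Array right away.
# Now: it's a lazy ActiveRecord::Relation, like the old `scoped`: you can
# keep chaining onto it, and the SQL only runs when you actually iterate.
items = Item.all.limit(30)
items.each { |item| puts item.id }
```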
I want to test some logged-in stuff, but the whole openneo_id app is a mess to integrate with (and I want to eliminate it down the line anyway), so here's a simple hacky thing that just gets you into a test user for development!
Not being a subquery is better! I realized later that a LEFT JOIN would probably do it even betterer? with like `HAVING count(x) = 0`? but the `left_outer_joins` method doesn't seem to be in Rails 4, and I don't want to do stringy joins, so this is fine for now!
Right, previously we were querying "has *at least one asset* that is not in zone X" instead of "has NO assets that are in zone X".
I don't know a fast way to query for that, this will have to do for now!
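For future reference, the LEFT JOIN idea would look something like this once we're on a Rails with `left_outer_joins`; the association and column names (and the `zone_id` variable) are assumptions about our schema, not tested code:
```ruby
# "Has NO assets in zone X": left-join all of the item's assets, then keep
# only items where zero of the joined rows are in that zone. Items with no
# assets at all still count, since COUNT ignores the NULLs from the LEFT JOIN.
Item.left_outer_joins(:swf_assets).
  group("items.id").
  having("COUNT(CASE WHEN swf_assets.zone_id = ? THEN 1 END) = 0", zone_id)
```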
Not doing the tricks with `is_positive` anymore, instead just calling different functions altogether at the call site.
Also, instead of classes, I feel like it's a lot more concise to write these as class methods that create instances of a trivial `Filter` data class. Without the tricks of `is_positive` in play, the value of classes goes way down imo.
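Roughly the shape I mean; the names here are illustrative, not the actual code:
```ruby
# A trivial data class instead of a class hierarchy...
Filter = Struct.new(:conditions, :params) do
  def apply(scope)
    scope.where(conditions, *params)
  end
end

# ...plus class methods that just build instances of it.
class Item < ActiveRecord::Base
  def self.occupies_filter(zone_ids)
    Filter.new("swf_assets.zone_id IN (?)", [zone_ids])
  end
end
```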
Ohh ok, without this change all of our `scope`s were just immediately evaluating the argument and fetching _all_ such matching records immediately, instead of waiting to actually be called. This led to bugs like `pet_type.as_json` returning ALL pet states in the whole db, because the `PetState.emotion_order` scope was being treated as a single predefined query, rather than a query fragment to merge into the current context.
This also explains what happened in 724ed83: that's why things before the scope in the query were being ignored.
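Assuming the fix here is the usual one (wrapping the scope bodies in lambdas), the shape is roughly this; the ordering column is illustrative:
```ruby
# Evaluated once at class-load time, so Rails treats it as a single
# predefined query rather than a fragment to merge into the current context:
scope :emotion_order, order(:mood_id)

# Lazy: the block runs against whatever relation it's actually chained onto.
scope :emotion_order, -> { order(:mood_id) }
```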
lol again this is hard to test so uhh I hope this didn't break it all!! though tbh I feel like we removed this feature or something anyway? idk it stopped working in some way
Tbh I'm not 100% sure this is a fix, I'm not sure what `haml_concat` was doing here, and the page is still crashing so it's hard to say. But fingers crossed!
Idk why, but when the `select` was the first thing in the query, it was getting ignored. I wonder if there's something about the `object_assets` scope that I'm not understanding that's overwriting it? Or the `joins`? But whatever, this works, I'm not worried about it for now!
The controller was like "oh yeah we have that cached" (from previous renders of the app on Rails 3 I think?), but the view disagreed, bc it was appending a template digest to the cache key. That's a smart feature, but not compatible with how we skip queries in the controller, so disable it for now!
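For the record, one way to express "disable it" (a sketch, not necessarily the exact change; `cache_key` is a stand-in): the `cache` view helper accepts `skip_digest: true`, so the key the view writes matches the plain key the controller already checked.
```ruby
cache(cache_key, skip_digest: true) do
  # render the expensive fragment...
end
```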
We'll need to replace the item search query stuff with direct MySQL queries, but that's not ready yet bc the app still isn't booting, so we're committing this in a known broken state for now!
Rather than figure out how to upgrade the Stripe gem to be compatible with future Rails, I'd rather just delete the references, since it's currently unused.
I'm not so bold as to go in and fully trash all our donation code; I just want to ensure we're not sending people down broken codepaths, and that if they reach them, the error messages are clear enough.
We set up `impress-asset-images.openneo.net` to redirect to the right asset, without needing to depend on AWS anymore for HTML5-converted items!
Our quick fix for this: always serve `has_image: true` to the frontend, so it always tries to use the image, regardless of whether we've marked it as converted in the database. (We've turned off the converters too!)
Oh, yeah, shit, okay, when we set `self.url` like that, it's supposed to be the _canonical_ URL for the SWF, not our proxied one—this is the URL that's gonna go in the database.
We do proxying late in the process, like when we're actually setting up to download something, but for just referencing where the asset lives, we use `images.neopets.com`.
In this change, we revert the use of `NEOPETS_IMAGES_URL_ORIGIN`, but we _do_ update this to `https` for good measure. (We currently have both HTTP and HTTPS urls in the database, I guess neopets.com started serving different URLs at some point, this is probably the future! And anything interpreting these URLs will need to handle both cases anyway, unless we do some kind of migration update situation thing.)
We're migrating the incorrect assets with the following query (with the limit changed to match the number we currently see in the DB, just as a safety check):
```
UPDATE swf_assets
SET url = REPLACE(url, 'http://images.neopets-asset-proxy.openneo.net', 'https://images.neopets.com')
WHERE url LIKE 'http://images.neopets-asset-proxy.openneo.net%'
ORDER BY id
LIMIT 2000;
```
Okay, like in the previous commit, we're dealing with forced HTTPS, on a server that isn't going to cooperate with our dependencies' HTTPS version. And this time, I don't think there's a secret origin server that will accept `http://` requests for us.
Thankfully, we have the perfect hack in our back pocket: our own pre-existing images.neopets.com proxy server! I set the following in our secret `.env` file, and now we're good:
```
NEOPETS_IMAGES_URL_ORIGIN=http://images.neopets-asset-proxy.openneo.net
```
Oops, neopets.com finally stopped accepting `http://` connections, so our AMFPHP requests stopped working! And our current dependencies make it hard to make modern HTTPS requests :(
Instead, we're doing this quick-fix: we have a connection who knows the internal address for the Neopets origin server behind their CDN, which *does* still accept `http://` requests!
So, when `NEOPETS_URL_ORIGIN` is specified in the secret `.env` file (not committed to the repository), we'll use it instead of `http://www.neopets.com`. However, we still keep that default in the code as a fallback, partly so a theoretical future dev without the `.env` value can see the real error message, and partly to self-document what that value semantically means. (The documentation angle is the bigger reason; I don't actually expect anyone in the future to run the code and hit the fallback.)
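The fallback logic itself is just the usual `ENV` dance, something like this (variable names beyond the env var are illustrative):
```ruby
origin = ENV.fetch("NEOPETS_URL_ORIGIN", "http://www.neopets.com")
# ...and requests get built against `origin` instead of a hardcoded host.
```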
There's a bug on Neopets.com that breaks links and images for *.openneo.net, on petpages specifically.
So, we've registered a new domain, and we're using that to serve outfit images now.
I'm a bit hesitant to add a new domain name to our like, permanent URL surface area, lol… but I'm not hearing back from TNT, and I already closed the doors on S3, so… here we are, whatever 😅
TNT started using HTTPS URLs! And our old Ruby version (lol 😬) still requires explicit invocation to perform SSL during a request, so requests were failing!
Now, we explicitly build the `Net::HTTP` object ourselves, and turn on `use_ssl` if it's an HTTPS URL! (The shorthand invocation didn't seem to have an option for this, that I could find!)
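Roughly the shape of that fix (illustrative, not the exact code; `request_url` is a stand-in for whatever we're fetching):
```ruby
require "net/http"
require "uri"

uri = URI.parse(request_url)
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = (uri.scheme == "https")  # the piece the shorthand form couldn't do
response = http.request(Net::HTTP::Get.new(uri.request_uri))
```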
Here, we turn off the hooks that enqueue outfit image updates, and we disconnect the `OutfitImageUploader` that manages uploaded S3 URLs, instead replacing it with an `image` method that simulates the same basic API.
This should cause _all_ views on Classic DTI to use the new outfit URLs. Some notable examples:
- The user's Outfits page
- The donations page
- The outfit page, and its sharing metadata
I hope I didn't miss anything in the views that will make this crash stuff! I tested the new model code in the Rails console, and checked it against invocations that I noticed when searching the codebase for `outfit.image` 🤞
Oops, right, I meant to use the new `impress-outfit-images.openneo.net` host for this! It works just fine from `impress-2020.openneo.net` as the backing source right now, but I want these semi-permanent URLs to be a bit more decoupled.
As part of our project to get off S3 and dramatically reduce costs, we're gonna start serving outfit images that Impress 2020 generates, fronted by Vercel's CDN cache! This should hopefully be just as fast in practice, without requiring an S3 storage cost. (Outfits whose thumbnails are pretty much unused will be evicted from the cache, or never stored in the first place—and regenerated back into the cache on-demand if needed.)
One important note is that the image at the URL will no longer be guaranteed to auto-update to reflect the changes to the outfit, because we're including `updated_at` in the URL for caching. (It also isn't guaranteed to _not_ auto-update, though 😅) Our hope is that people aren't using it for that use case so much! If so, though, we have some ways we could build live URLs without putting too much pressure on image generation, e.g. redirects 🤔
This change does _not_ disable actual outfit generation, because I want to keep that running until we see these new URLs succeed for folks. Gonna wait a bit and see if we get bug reports on them! Then, if all goes well, we'll stop enqueueing outfit image jobs altogether, and maybe wind down some of the infrastructure accordingly.
Oops, if you saved `SwfAsset` outside of modeling code, the `item` field would be empty, so the `item.body_specific?` check wouldn't run.
This would trigger when you even just report a broken image!
Now, we always run the SQL query to check for that flag.
Okay so, userlookup stuff hasn't worked in years, because it requires a login now.
But apparently, somewhere recently, the code inside our `neopets` gem started hard crashing, because of assumptions we made about the document we'd get back.
I'm not sure why it only recently started crashing? or if I'm even necessarily right about that?
But anyway, I'm just doing the easiest safest (🤞🏻) change possible: being more generous with the errors we swallow.
Test Plan:
Deploy and cross fingers.
Okay, fine, finally making this controllable from the db without requiring a deploy :P Setting this new field will cause `item.special_color` to return the corresponding color. This mainly affects what we show on the item page, and what colors we request for modeling on the homepage.
We recently flipped the switch for various hosts to force HTTPS, yay! This includes `neopia.openneo.net`.
However, I forgot to change the URL scheme in this file. This meant that the form submit from the homepage would go to `http://neopia.openneo.net/`, then redirect to `https://neopia.openneo.net/`, but only preserve the form data in certain browsers. This change should fix that!
Note: This probably breaks the dev environment, where we don't have a cert for `https://neopia.dev.openneo.net`. I'll fix that some other time!
Interestingly, these items *are* correctly detecting their special color on the homepage for model progress. So, we *do* have the ability to detect this. But I don't have good item data locally, so it would be hard to test this, so I'm just gonna go with the cheap solution again, sorry XP
In bfd825d, we refactored the "is item body-specific?" check. In the process, we dropped the check for the manual override flag, `explicitly_body_specific?`. Not sure if it was an accident or if I was just _so_ confident that it was gonna work :P In any case, re-add the check!
Okay, surprise, the bug was unrelated to Camo config (though I'm glad I cleaned that up anyway :P). We now, at a low level, serve a placeholder image for an item's thumbnail URL if, for some reason, we don't have a good thumbnail URL on hand.
One time I did a thing called Camo to try to get our HTTPS pages working, because images.neopets.com not supporting HTTPS is crazy >_> I've disabled it these days, but it had debug behavior to append `?NO_CAMO_CONFIG` to all proxied URLs when Camo was not configured.
When an item had no thumbnail URL for some reason (mall spider needs fixing, maybe?), this caused Rails to try to map that empty string into the path `/assets/?NO_CAMO_CONFIG`, which made Rails complain that it was trying to load an asset that doesn't exist. This is probably a sign that using `image_tag` for URLs that *should* be external URLs, but aren't strictly *guaranteed* to be, is unwise - but, for now, I've just disabled that behavior. I hope Rails has a better escape hatch for the empty string :P
Ooh, this one was nasty, and only one symptom ever got noticed:
1. Pick "Occupies: Collar" in Advanced Search. You get the text query "occupies:necklace".
2. And, if you try to do "occupies:collar" even in text-based search, you *also* get the results from "occupies:necklace" mixed in with the correct results.
The trick is that, in Spanish, zone 24 (necklace) is named "collar", as is zone 27 (collar). Not sure what to do for Spanish, but this issue also leaked into English: we really don't want English to return results for Spanish-named zones.
This is a tricky problem, though, because it'd be nice for es users to be able to type "occupies:hat". I think we'll have to do the quick fix for now, though, and just only interpret the query in the current locale.
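A hypothetical sketch of that quick fix; the `Zone::Translation` model, `label` column, and `query_value` variable are assumptions, based on how the other translated models look:
```ruby
# Only match zone names in the requester's current locale, so an English
# "occupies:collar" can no longer match the Spanish name for the necklace zone.
zone_ids = Zone::Translation.
  where(locale: I18n.locale.to_s, label: query_value).
  pluck(:zone_id)
```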
Turns out ~22% of our users initially land on a trade list. We like to keep the campaign off the pages where space is at a premium, so we try to whitelist it to major landing pages in order to avoid accidentally creating a bad experience on some page :)
I've been doing this manually via email for a long time, since building new stuff in the logged-in world was a pain in the old env. But now here we are! Finally, finally :)