Commit graph

1518 commits

Author SHA1 Message Date
571949b066 Increase timeout in archive SQL queries
One problem I run into with the archive task is that sometimes the queries time out? My hunch is that maybe some of the assets have like, weirdly big manifest files that are being transferred as surprisingly big text files?

Anyway, I'm increasing the timeout to 20s, which is big, but big feels good for a script that doesn't run often and where failing is not great news!

I'm also idly considering whether I wanna finally put in the work to do a bulk S3 uploader sometime, because the current version that iterates over multiple `aws s3 cp` calls is just real slow, I think because it establishes a new connection each time, and that operation is maybe surprisingly expensive? And the CLI doesn't have a way to do multiple uploads any more precisely than "sync up this whole folder", which is slow when the folder contains a lot of stuff you _know_ you don't need to send up there.
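
For reference, here's roughly what that 20s knob looks like, as a sketch assuming the plain `mysql` Node driver (the archive task's actual db setup might be wired differently, and the query here is a made-up stand-in):

```js
const mysql = require("mysql");

// The timeout is per-query, in milliseconds, so "20s" is just 20000 here.
// (The table/column names below are placeholders, not the real archive query!)
const connection = mysql.createConnection(process.env.DATABASE_URL);
connection.query(
  { sql: "SELECT manifest_url FROM assets", timeout: 20000 },
  (error, rows) => {
    if (error) throw error;
    console.log(`Found ${rows.length} manifest URLs`);
  }
);
connection.end();
```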
2023-06-12 10:56:40 -07:00
f54ea61854 Merge branch 'state-of-dti-2022' into main 2023-03-23 21:54:53 -07:00
c28475fcdc Add a big callout link about the State of DTI message 2023-02-04 23:59:18 -08:00
e5e368fb65 More honest language in the homepage "beta" module 2023-02-04 23:49:25 -08:00
4287f12912 Update "State of DTI" message copy & URL 2023-02-04 23:48:55 -08:00
f2832a3bde Logic & comment updates to #30
Tested this out and compared to Dice's other work in Neobot and I think the condition should be the other way around, as it is here? (I found myself starting to write the explanatory comment, and realizing it wasn't making sense, then going heyyy wait a minute lol)
2022-12-22 18:58:49 -08:00
60d2f9a014 Merge branch 'main' of github.com:matchu/impress-2020 into main 2022-12-22 18:49:29 -08:00
82ecf9bbd3 Merge pull request #30 from diceroll123/patch-4
Tweak layer algorithm for bio overrides
2022-12-22 18:49:17 -08:00
d80522ef6d Update SSH key for laptop
I did a factory-reset on this laptop, so the old key is gone forever! Here's the new one!
2022-12-22 18:47:11 -08:00
bf2745f9f1 Oops, remove leftover debug code 2022-12-22 18:44:47 -08:00
722f6b55a5 Tweak layer algorithm for bio overrides 2022-12-12 08:50:38 -05:00
14c6e5a8be State of DTI 2022 letter
not linked anywhere yet, just the page itself created!
2022-12-12 01:46:27 -08:00
5f7a17b244 Fix typo in terms page title 2022-12-12 01:26:10 -08:00
7d64709378 Update public data
Just ran `yarn db:export:public-data`, tada!
2022-12-08 13:34:40 -08:00
d37b7fab2f Add installation guide to README
I tried it in my lil Ubuntu WSL box and hey, it worked great! Neat!!

Pretty neat to have just sat down and fixed up the dev environment for other people?? I'll see about what it would take to actually invite people in…
2022-11-13 09:09:36 -08:00
b84c8ba34e Script to set up dev db with public DTI data
Now, someone with production DB access can run `yarn db:export:public-data` to create `public-data-constants.sql` and `public-data-from-modeling.sql`.

Then, someone setting up their dev database can run `yarn db:setup-dev:full` and get all the wearables data imported right into their dev database!

I'm noticing just how poorly I'm keeping up with my own goals for finishing up DTI, and wondering if now is a good time to circle back to some old offers for code contributions I got last year… I also just figure that making this app Possible To Run with a backup of the basic public database is like. a pretty handy thing to have for archival's sake imo

Note that, for this change, we also set up Git LFS (Large File Storage). It's a way to not write the whole 30MB database dump into the repository history, and instead keep it in a secondary filestore, because Git's core algorithms aren't really built to handle large blobs of data very well. GitHub should be automatically compatible with this! Users setting up their dev environment will therefore also need to have Git LFS installed for this script to work. (Otherwise, they'll see a "pointer" file in `public-data-from-modeling.sql.gz` that contains some metadata about the file state but not the data itself.)
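
(Quick aside for future dev-environment-setter-uppers: the way LFS knows which files to intercept is a line in `.gitattributes` like `*.sql.gz filter=lfs diff=lfs merge=lfs -text`, which `git lfs track "*.sql.gz"` writes for you, and you need to have run `git lfs install` once on your machine so the filters actually kick in.)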
2022-11-13 07:03:44 -08:00
88511d3dc6 Implement archive:upload:delta
Ok great, we can now run the delta archive process!

It'd be nice to get this running on cron on the impress-2020 server, to a temporary folder? I *do* want to be remembering to run something regularly on my personal machine too though, to keep my own copy up-to-date…
2022-11-05 02:15:31 -07:00
12b5a56694 Fix bug in archive:create scripts
That's what I get for not fully testing lmao! But right, paths in shell scripts are relative to the working directory, and if I want to be relative to the script I gotta use dirname!
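
(For my own future reference: the usual fix is something like `cd "$(dirname "$0")"` at the top of the script, or prefixing paths with `"$(dirname "$0")"`, so they resolve relative to where the script lives instead of wherever it happened to get invoked from.)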
2022-10-24 16:56:06 -07:00
22784555d9 Use small pagination toolbar in wardrobe
I thought about only doing this via media query, but tbh I think this looks better at the larger sizes too?
2022-10-14 20:45:28 -07:00
edce95ec3b Oops, also resize pagination toolbar dropdown 2022-10-14 20:44:57 -07:00
3dfc53cda1 Add basic search results to footer
It looks bad, and only shows page 1, but it does stuff at all! :0
2022-10-14 20:40:06 -07:00
a83e3a9e0b Share search query state between panel and footer 2022-10-14 20:29:57 -07:00
c564400223 Oops, don't mess up mobile with search footer! 2022-10-14 20:26:07 -07:00
589e6c35b4 Hide other search bar when search footer is visible 2022-10-14 20:24:26 -07:00
eb18fd54b6 SearchFooter visual tweaks
Finally playing with this, now that we've been doing paginated search results in the main element! Let's see how it goes 😳

I made a thing to make the pagination toolbar smaller (might want to do that on the mobile view too?), and also to put the search suggestions in a popover floating at the top of the search box.
2022-10-14 20:16:06 -07:00
78d8cf3739 Preload the prev/next item search pages
I tried to do this earlier, but the caching problem from the previous commit (where we weren't including `id` for the search result in the GQL query) was causing it to do a like, infinite loop thing, where the preload results would cache-invalidate the current results, and so the 3 queries would just fight for which one's in the cache?

But now that caching is working, this is working too! Makes it all feel a lot snappier :3
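
The preloading itself is nothing fancy, something in the spirit of this sketch (the hook shape, query document, and variable names are made up for illustration, not the app's real ones):

```js
import React from "react";
import { useApolloClient } from "@apollo/client";

// Once the current page has loaded, quietly run the same search query for the
// neighboring pages, so Apollo's cache already has them when the user clicks
// Next or Prev.
function usePreloadAdjacentPages(searchQueryDocument, baseVariables, currentOffset, perPage) {
  const client = useApolloClient();

  React.useEffect(() => {
    for (const offset of [currentOffset - perPage, currentOffset + perPage]) {
      if (offset >= 0) {
        client.query({
          query: searchQueryDocument,
          variables: { ...baseVariables, offset },
        });
      }
    }
  }, [client, searchQueryDocument, baseVariables, currentOffset, perPage]);
}
```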
2022-10-14 19:41:52 -07:00
2ee48250da Fix caching for search result pages
Apollo Client is pretty darn reliant on an `id` field for effective caching, more often than you'd think!

Before this change, navigating back to a page you'd already loaded would cause it to reload. After this change, it no longer does, and serves the page from cache instead!
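
For posterity, the shape of the fix, as a sketch (field names here are illustrative, not the app's exact schema): Apollo's `InMemoryCache` normalizes objects by `__typename` + `id`, so once each search result includes `id`, revisiting a page can be served straight from the cache.

```js
import { gql } from "@apollo/client";

const SEARCH_RESULTS_QUERY = gql`
  query SearchResults($query: String!, $offset: Int!) {
    itemSearch(query: $query, offset: $offset) {
      items {
        id # <- this little guy is what makes the caching work!
        name
        thumbnailUrl
      }
    }
  }
`;
```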
2022-10-14 19:38:26 -07:00
a9aeab12a3 Scroll to top when item search page number changes
We also didn't need one for query changes, because we now `key` the `SearchResults` by the query, so the container becomes empty-then-full-again, which resets scroll back to top.
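
The page-number one is basically just this, as a sketch (ref and argument names made up):

```js
import React from "react";

// When the page number changes, scroll the results container back to the top.
function useScrollToTopOnPageChange(scrollContainerRef, currentPageNumber) {
  React.useEffect(() => {
    scrollContainerRef.current?.scrollTo({ top: 0 });
  }, [scrollContainerRef, currentPageNumber]);
}
```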
2022-10-14 19:33:45 -07:00
45b0aba736 Fix comment 2022-10-14 19:27:05 -07:00
6be5dd74d3 Homepage update: item search pagination 2022-10-14 19:21:55 -07:00
4cdfff03d1 Use pagination instead of infinite scrolling
idk this has been a long-time popular request, so I'm just gonna like. throw it all the way out there. and see what people think of it

I'm a bit worried it might change up the mobile experience too much? But like. let's find out!
2022-10-14 19:19:56 -07:00
ca58bc1be3 Refactor URL routing out of PaginationToolbar
in preparation for PaginationToolbar being able to support other kinds of state!
2022-10-14 18:27:44 -07:00
98e89e4302 Refactor router pagination into a hook
My intention is to move this out of PaginationToolbar entirely, so that it becomes a component we can reuse in a non-URL-state setting. (I'm looking at using pagination for the wardrobe item search is why!)
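
Something in this spirit is what I mean, roughly (a sketch, names and exact API made up, and the real hook presumably talks to the router directly rather than taking it as arguments):

```js
import React from "react";

// Read the current offset out of the URL's query string, and hand back a
// setter the PaginationToolbar can call without knowing about the router.
function usePaginationFromUrl(search, navigate, perPage = 30) {
  const params = new URLSearchParams(search);
  const currentOffset = Number(params.get("offset")) || 0;

  const goToOffset = React.useCallback(
    (newOffset) => {
      const newParams = new URLSearchParams(search);
      newParams.set("offset", String(newOffset));
      navigate("?" + newParams.toString());
    },
    [search, navigate]
  );

  return { currentOffset, perPage, goToOffset };
}
```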
2022-10-14 18:01:31 -07:00
9a54694e31 Remove Metaverse license notice
It's covered in the Terms now, and I don't think they can reasonably claim they didn't know, lol
2022-10-14 17:44:26 -07:00
8b3c256a5c Time out manifest requests after 2sec
We do a thing where we sometimes proactively update an appearance layer's manifest from images.neopets.com when it's been a while since the last time, _during_ user requests.

But when images.neopets.com is being slow, this makes our API requests about appearances super slow, too!

In this change, we add a 2-second timeout to those requests. That should be plenty for when images.neopets.com is in a good mood, but also give up fast enough for the site to not feel miserable lol :p (especially when the new "Use DTI's image archive" option is on!)
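
The timeout itself is the standard AbortController dance, something like this sketch (assuming a fetch-style client; the real code's shape may differ a bit):

```js
// Bail out of the manifest request if images.neopets.com is having a bad day.
async function loadManifestWithTimeout(manifestUrl) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), 2000);
  try {
    const response = await fetch(manifestUrl, { signal: controller.signal });
    return await response.json();
  } catch (error) {
    // On timeout (or other failure), just skip the proactive update this time.
    return null;
  } finally {
    clearTimeout(timeoutId);
  }
}
```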
2022-10-13 17:00:21 -07:00
4619e86ae0 Add "Use DTI's image archive" option
just a lil thing for people to turn on if it gets truly miserable again!!
2022-10-13 16:44:20 -07:00
8dee9ddbed Refactor archive scripts to prepare/create/upload
Sat down and thought about the structure here and how to make the full/delta stuff make more sense together! Here's what I came up with!

In both full and delta archiving, we prepare the manifest, we create the local archive, then we upload it to remote.
2022-10-13 16:07:12 -07:00
35713069fa Delta version of archive scripts
I like running the full `archive:create` to help us be _confident_ we've got the whole darn thing, but it takes multiple days to run on my machine and its slow HDD, which… I'm willing to do _sometimes_, but not frequently.

But if we had a version of the script that ran faster, and only on URLs we still _need_, we could run that more regularly and keep our live archive relatively up-to-date. This would enable us to build reliable fallback infra for when images.neopets.com isn't responding (like today lol)!

Anyway, I stopped early in this process because images.neopets.com is bad today, which means I can't really run updates today, lol :p but the delta-ing stuff seems to work, and takes closer to 30min to get the full state from the live archive, which is, y'know, still slow, but will make for a MUCH faster process than multiple days, lol
2022-10-13 15:08:29 -07:00
861f3ab881 Fix bug in /api/readFromArchive
Well, two bugs: one with URL encoding, and another minor one where I forgot to return after ending the request with 404, oops lol :p
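
The second one, in miniature (handler and helper names here are hypothetical, not the real route):

```js
// (hypothetical stand-in for the real archive lookup)
async function fetchFromArchiveBucket(url) { /* ... */ return null; }

async function handler(req, res) {
  const archivedBody = await fetchFromArchiveBucket(req.query.url);
  if (archivedBody == null) {
    res.status(404).send("Not found in archive");
    return; // <- the line I forgot! Without it, we fall through and try to respond again.
  }
  res.status(200).send(archivedBody);
}
```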
2022-10-13 14:02:37 -07:00
5e6939309a Replace imageUrl with imageUrlV2 in-app
Should be a smooth drop-in replacement: we give the field an alias `imageUrl` in the query, so the rest of the app is none the wiser!

I didn't test the layer upload cache invalidation, but it seems pretty obvious to me, so ehh I'm just shipping it lmao
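
Roughly what the aliasing looks like, as a sketch (the fragment around it is illustrative, not copied from the app):

```js
import { gql } from "@apollo/client";

// The client asks for `imageUrlV2`, but receives it under the old `imageUrl`
// key, so callers don't have to change.
const APPEARANCE_LAYER_FRAGMENT = gql`
  fragment AppearanceLayerForImage on AppearanceLayer {
    id
    imageUrl: imageUrlV2
  }
`;
```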
2022-10-12 12:41:27 -07:00
7bef1f3b9a Use imageUrlV2 in outfit thumbnails
Without this, 150x150 and 300x300 outfit thumbnails would fail to render new item layers where we didn't have an AWS image layer. Now, they correctly render the new stuff!

I tested this with the new "Spooky Stitches Markings" on the Grarrl, which has a blank image in AWS, but works correctly in the new code by loading the image from neopets.com!
2022-10-12 12:37:07 -07:00
3428254318 Simplify modeling output when there's no items
This'll both hide sections that are empty (which just wasn't plausible for a long time), and print a happy lil message if there's no sections to show at all!
2022-10-12 12:09:15 -07:00
0a99668f00 Remove Modeling link from global header
We seem to have everything modeled now, and we have automatic modeling, so like… this is not a useful link for the general public anymore!

Instead, we'll just keep it a secret for us to check on the state of things!
2022-10-12 12:03:53 -07:00
1b0a6c8385 Mention automatic modeling on the homepage 2022-10-12 11:57:28 -07:00
32dd0474f2 Better logging output for model-needed-items
Print out the image hash for easier debugging (can look up the custom data ourselves to check it), and also fix a bug with retries not carrying `contextString` through, oops!
2022-10-12 11:55:29 -07:00
d591eabd0a Add modeling cron job to deploy-setup
This should run it every 10 minutes! Wowie, cron config on the new box is easy! :3
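
(For the curious: "every 10 minutes" in crontab-speak is an entry starting with `*/10 * * * *`, followed by the command to run.)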
2022-10-11 12:54:02 -07:00
a1844f76e0 Add /api/assetImageRedirect
Okay, this is gonna be a drop-in new backend for impress-asset-images.openneo.net, to enable Classic DTI to use the same images as DTI 2020!

This will enable us to stop generating images and uploading them to S3 just for Classic's sake, so we can turn those background processes off! And the new modeling script skips that anyway, so this is an important compatibility step for the new data that went out today!
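
Schematically, the whole endpoint is basically a lookup plus a redirect, something like this sketch (parameter names and the lookup are guesses, not the real implementation):

```js
// (hypothetical stand-in for however we actually resolve the image URL)
async function getImageUrlForAsset(assetId, size) { /* ... */ return null; }

export default async function handler(req, res) {
  const { assetId, size } = req.query; // hypothetical parameters
  const imageUrl = await getImageUrlForAsset(assetId, size);
  if (imageUrl == null) {
    res.status(404).send("Asset image not found");
    return;
  }
  res.redirect(302, imageUrl);
}
```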
2022-10-11 12:21:14 -07:00
29d9d498bf Use aws.impress-asset-images.openneo.net
We're gonna update impress-asset-images.openneo.net to perform redirects and stuff, so Classic DTI can start using the same images that DTI 2020 does.

That should enable us to stop relying on AWS for images, which is important because the new modeling script breaks that anyway :p but this will also let us turn off the image converters that run in the background all the time, and I'm excited for that too!
2022-10-11 12:19:39 -07:00
e8d7f6678d Auto-modeling script??
It seems to be working!! How exciting!! I'm just letting it run on stuff now :3

One important issue is that Classic DTI doesn't show images for items modeled this way, because we don't download the SWFs for it. But I wanna update it to stop using AWS anyway and do the same stuff 2020 does, I think we can do that pretty sneakily!
2022-10-11 11:13:10 -07:00
052cc242e4 Modeling page performance fix
Ok so, I kinda assumed that the query engine would only compute `all_species_ids_for_this_color` for the rows we actually returned, and it's a fast subquery, so it's fine. But that was wrong! I think the query engine was computing that for _every_ item, and _then_ filtering stuff out with `HAVING`. Which makes sense, because the `HAVING` clause references it, so it has to be computed!

In this change, we inline the subquery, so it only gets called if the other conditions in the `HAVING` clause don't fail first. That way, it only gets run when needed, and the query runs like 2x faster (~30sec instead of ~60sec), which gets us back inside some timeouts that were triggering around 1 minute and making the page fail.

However, this meant we no longer return `all_species_ids_for_this_color`, which we actually use to determine which species are _left_ to model for! So now, we have a loader that also basically runs the same query as that condition subquery.

A reasonable question at this point would be: is the `HAVING` clause even a good idea? Would it be simpler to do the filtering in JS?

and I think it might be simpler, but I would guess noticeably worse performance, because I think we really do filter out a _lot_ of results with that `HAVING` clause, like basically all items, right? So to filter on the JS side, we'd be transferring data for all items over the wire, which… like, that's not even the worst dealbreaker, but it would certainly be noticed. This hypothesis could be wrong, but it's enough of a reason for me to not bother pursuing the refactor!
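
To make the inlining concrete, here's a schematic before/after (table names, join details, and the exact conditions are stand-ins, not the real query):

```js
// Before: the correlated subquery is a selected column, so (per the hunch
// above) it gets computed for every item, even ones HAVING will throw away.
const before = `
  SELECT items.id,
    COUNT(DISTINCT swf_assets.species_id) AS modeled_species_count,
    (SELECT GROUP_CONCAT(species_id) FROM color_species_pairs
      WHERE color_id = ?) AS all_species_ids_for_this_color
  FROM items INNER JOIN swf_assets ON /* ... */
  GROUP BY items.id
  HAVING modeled_species_count < ? AND all_species_ids_for_this_color IS NOT NULL
`;

// After: the subquery lives inside HAVING, behind the cheap check, so the AND
// can short-circuit past it for the (vast majority of) fully-modeled items.
const after = `
  SELECT items.id,
    COUNT(DISTINCT swf_assets.species_id) AS modeled_species_count
  FROM items INNER JOIN swf_assets ON /* ... */
  GROUP BY items.id
  HAVING modeled_species_count < ?
    AND (SELECT GROUP_CONCAT(species_id) FROM color_species_pairs
          WHERE color_id = ?) IS NOT NULL
`;
```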
2022-10-10 20:15:16 -07:00