# impress/app/models/swf_asset.rb (forked from OpenNeo/impress)

require 'fileutils'
require 'uri'

class SwfAsset < ApplicationRecord
  # We use the `type` column to mean something other than what Rails means!
  self.inheritance_column = nil

  IMAGE_SIZES = {
    :small => [150, 150],
    :medium => [300, 300],
    :large => [600, 600]
  }

  belongs_to :zone
  has_many :parent_swf_asset_relationships

  scope :includes_depth, -> { includes(:zone) }

  before_validation :normalize_manifest_url, if: :manifest_url?

  def swf_image_dir
    @swf_image_dir ||= Rails.root.join('tmp', 'asset_images_before_upload', self.id.to_s)
  end

  def swf_image_path(size)
    swf_image_dir.join("#{size.join 'x'}.png")
  end

  PARTITION_COUNT = 3
  PARTITION_DIGITS = 3
  PARTITION_ID_LENGTH = PARTITION_COUNT * PARTITION_DIGITS

  def partition_path
    (remote_id / 10**PARTITION_DIGITS).to_s.rjust(PARTITION_ID_LENGTH, '0').tap do |id_str|
      PARTITION_COUNT.times do |n|
        id_str.insert(PARTITION_ID_LENGTH - (n * PARTITION_DIGITS), '/')
      end
    end
  end

  def image_version
    converted_at.to_i
  end

  def image_url(size=IMAGE_SIZES[:large])
    host = ASSET_HOSTS[:swf_asset_images]
    size_key = size.join('x')
    image_dir = "#{self['type']}/#{partition_path}#{self.remote_id}"
    "https://#{host}/#{image_dir}/#{size_key}.png?#{image_version}"
  end

  def images
    IMAGE_SIZES.values.map { |size| {:size => size, :url => image_url(size)} }
  end
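
  # Illustrative example (hypothetical remote_id, not from the source): for
  # remote_id 123456789, partition_path returns "000/123/456/" (note the
  # trailing slash), so image_url above builds a path like
  # "object/000/123/456/123456789/600x600.png" under the configured asset host.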

  attr_accessor :item

  has_one :contribution, :as => :contributed, :inverse_of => :contributed
  has_many :parent_swf_asset_relationships

  delegate :depth, :to => :zone

  def self.body_ids_fitting_standard
    @body_ids_fitting_standard ||= PetType.standard_body_ids + [0]
  end

  scope :fitting_body_id, ->(body_id) {
    where(arel_table[:body_id].in([body_id, 0]))
  }

  scope :fitting_standard_body_ids, -> {
    where(arel_table[:body_id].in(body_ids_fitting_standard))
  }

  scope :fitting_color, ->(color) {
    body_ids = PetType.select(:body_id).where(:color_id => color.id).map(&:body_id)
    body_ids << 0
    where(arel_table[:body_id].in(body_ids))
  }

  scope :biology_assets, -> { where(:type => PetState::SwfAssetType) }
  scope :object_assets, -> { where(:type => Item::SwfAssetType) }

  scope :for_item_ids, ->(item_ids) {
    joins(:parent_swf_asset_relationships).
      where(ParentSwfAssetRelationship.arel_table[:parent_id].in(item_ids))
  }

  scope :with_parent_ids, -> {
    select('swf_assets.*, parents_swf_assets.parent_id')
  }

  # To manually change the body ID without triggering the usual change to 0,
  # use this override method.
  def override_body_id(new_body_id)
    @body_id_overridden = true
    self.body_id = new_body_id
  end
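
  # For example (hypothetical values): swf_asset.override_body_id(93) followed
  # by save keeps body_id at 93, because @body_id_overridden prevents the
  # before_save hook below from resetting it to 0.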

  def as_json(options={})
    super({
      only: [:id, :known_glitches],
      methods: [:zone, :restricted_zones, :urls]
    }.merge(options))
  end

  def urls
    {
      swf: url,
      png: image_url,
      manifest: manifest_url,
    }
  end

  MANIFEST_PATTERN = %r{^https://images.neopets.com/(?<prefix>.+)/(?<id>[0-9]+)(?<hash_part>_[^/]+)?/manifest\.json}

  def html5_image_url
    return nil if manifest_url.nil?
    # HACK: Just assuming all of these were well-formed by the same process,
    # and infer the image URL from the manifest URL! But strictly speaking we
    # should be reading the manifest to check!
    match = manifest_url.match(MANIFEST_PATTERN)
    return nil if match.nil?
    "https://images.neopets.com/#{match[:prefix]}/" +
      "#{match[:id]}#{match[:hash_part]}/#{match[:id]}.png"
  end

  def html5_svg_url
    return nil if manifest_url.nil?
    # HACK: Just assuming all of these were well-formed by the same process,
    # and infer the image URL from the manifest URL! But strictly speaking we
    # should be reading the manifest to check!
    match = manifest_url.match(MANIFEST_PATTERN)
    return nil if match.nil?
    "https://images.neopets.com/#{match[:prefix]}/" +
      "#{match[:id]}#{match[:hash_part]}/#{match[:id]}.svg"
  end
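
  # Illustrative example (hypothetical URL, for demonstration only): a
  # manifest_url such as
  # "https://images.neopets.com/cp/items/data/000/000/123/123456_abc123/manifest.json"
  # would yield an html5_image_url ending in "123456_abc123/123456.png" and an
  # html5_svg_url ending in "123456_abc123/123456.svg".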

  def known_glitches
    self[:known_glitches].split(',')
  end

  def known_glitches=(new_known_glitches)
    if new_known_glitches.is_a? Array
      new_known_glitches = new_known_glitches.join(',')
    end
    self[:known_glitches] = new_known_glitches
  end

  def restricted_zone_ids
    [].tap do |ids|
      zones_restrict.chars.each_with_index do |bit, index|
        ids << index + 1 if bit == "1"
      end
    end
  end

  def restricted_zones
    Zone.where(id: restricted_zone_ids)
  end
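
  # For instance, a zones_restrict bitstring of "0100" restricts zone 2 only:
  # the character at (0-indexed) position N marks zone ID N + 1 as restricted
  # when it is "1".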

  def body_specific?
    self.zone.type_id < 3 || item_is_body_specific?
  end

  def item_is_body_specific?
    # Get items that we're already bound to in the database, and
    # also the one passed to us from the current modeling operation,
    # if any.
    #
    # NOTE: I know this has perf impact... it would be better for
    # modeling to preload this probably? But oh well!
    items = parent_swf_asset_relationships.includes(:parent).
      where(parent_type: "Item").map { |r| r.parent }
    items << item if item

    # Return whether any of them is known to be body-specific.
    # This ensures that we always respect the explicitly_body_specific flag!
    return items.any? { |i| i.body_specific? }
  end

  def origin_pet_type=(pet_type)
    self.body_id = pet_type.body_id
  end

  def origin_biology_data=(data)
    Rails.logger.debug("my biology data is: #{data.inspect}")
    self.type = 'biology'
    self.zone_id = data[:zone_id].to_i
    self.url = data[:asset_url]
    self.zones_restrict = data[:zones_restrict]
    self.manifest_url = data[:manifest]
  end

  def origin_object_data=(data)
    Rails.logger.debug("my object data is: #{data.inspect}")
    self.type = 'object'
    self.zone_id = data[:zone_id].to_i
    self.url = data[:asset_url]
    self.zones_restrict = ""
    self.manifest_url = data[:manifest]
  end

  def normalize_manifest_url
    parsed_manifest_url = Addressable::URI.parse(manifest_url)
    parsed_manifest_url.scheme = "https"
    self.manifest_url = parsed_manifest_url.to_s
  end

  def self.from_biology_data(body_id, data)
    remote_id = data[:part_id].to_i
    swf_asset = SwfAsset.find_or_initialize_by type: 'biology',
      remote_id: remote_id
    swf_asset.body_id = body_id
    swf_asset.origin_biology_data = data
    swf_asset
  end
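
  # Minimal usage sketch (hypothetical values; real modeling data comes from
  # the pet customization data this model ingests):
  #   SwfAsset.from_biology_data(93, {
  #     part_id: "123456", zone_id: "15",
  #     asset_url: "https://images.neopets.com/cp/bio/swf/...",
  #     zones_restrict: "", manifest: "https://images.neopets.com/.../manifest.json"
  #   }).save!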

  def self.from_wardrobe_link_params(ids)
    where((
      arel_table[:remote_id].in(ids[:biology]).and(arel_table[:type].eq('biology'))
    ).or(
      arel_table[:remote_id].in(ids[:object]).and(arel_table[:type].eq('object'))
    ))
  end

  before_save do
    # If an asset body ID changes, that means more than one body ID has been
    # linked to it, meaning that it's probably wearable by all bodies.
    self.body_id = 0 if !@body_id_overridden &&
      (!self.body_specific? || (!self.new_record? && self.body_id_changed?))
  end

  class DownloadError < Exception; end
end