forked from OpenNeo/impress
impress/app/controllers/application_controller.rb


Add handlers for requests that were stopped during the reboot process

According to our GlitchTip error tracker, every time we deploy, a couple instances of `Async::Stop` and `Async::Container::Terminate` come in, presumably because:

1. systemd sends a STOP signal to the `falcon host` process.
2. `falcon host` gives the in-progress requests some time to finish up.
3. Sometimes some requests take too long, and so something happens (either a timer in Falcon or a KILL signal from systemd, not sure!) that leads the ongoing requests to finally be terminated by raising an `Async::Stop` or `Async::Container::Terminate`. (I'm not sure when each happens, and maybe they happen at different points in the process? Maybe one happens for the actual long-running ones, vs the other happens if more requests come in during the meantime but get caught in the spin-down process?)
4. Rails bubbles up the errors, our Sentry library notices them and sends them to GlitchTip, the user presumably receives the generic 500 error, and the app can finally close down gracefully.

It's hard for me to validate that this is *exactly* what's happening here or that my mitigation makes sense, but my logic here is basically: if these exceptions are bubbling up as "uncaught exceptions" and spamming up our error log, then the best solution would be to catch them! So in this change, we add an error handler for these two error classes, which hopefully will 1) give users a better experience when this happens, and 2) no longer send these errors to our logging 🤞❗️

That strange phenomenon where the best way to get a noisy bug out of your logs is to fix it lmao
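For context on the first of those exception classes: in the async gem, stopping a task from the outside raises `Async::Stop` inside it. A minimal standalone sketch of that behavior, not part of this commit, assuming async 2.x (where Kernel#sleep is fiber-aware inside a reactor):

require 'async'

Async do |parent|
  request = parent.async do
    sleep 10 # stand-in for a slow request handler
  rescue Async::Stop
    puts "request was stopped mid-flight"
  end

  # Stop the task from outside: roughly what happens to in-flight
  # requests when the host spins down.
  request.stop
end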
require 'async'
require 'async/container'

class ApplicationController < ActionController::Base
  include FragmentLocalization

  protect_from_forgery

  helper_method :current_user, :user_signed_in?

  before_action :set_locale

  before_action :configure_permitted_parameters, if: :devise_controller?

  before_action :save_return_to_path,
    if: ->(c) { c.controller_name == 'sessions' && c.action_name == 'new' }

  # Enable profiling tools if logged in as admin.
  before_action do
    if current_user && current_user.admin?
      Rack::MiniProfiler.authorize_request
    end
  end

  class AccessDenied < StandardError; end

  rescue_from AccessDenied, with: :on_access_denied

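  # These two exceptions get raised into in-flight requests when the server
  # is told to shut down mid-deploy (see the commit message above). Rescue
  # them so the user gets a friendlier page than the generic 500, and so
  # they stop spamming the error tracker.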
  rescue_from Async::Stop, Async::Container::Terminate,
    with: :on_request_stopped

  def authenticate_user!
    redirect_to(new_auth_user_session_path) unless user_signed_in?
  end

  def authorize_user!
    raise AccessDenied unless user_signed_in? && current_user.id == params[:user_id].to_i
  end

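  # Look up the local User record matching the signed-in auth user by its
  # remote ID. (auth_user_signed_in? and current_auth_user are presumably
  # provided by the auth layer elsewhere in the app.)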
  def current_user
    if auth_user_signed_in?
      User.where(remote_id: current_auth_user.id).first
    else
      nil
    end
  end

  def user_signed_in?
    auth_user_signed_in?
  end

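  # Locale resolution order: explicit ?locale param, then the locale cookie,
  # then the best Accept-Language match, then the app default.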
  def infer_locale
    return params[:locale] if valid_locale?(params[:locale])
    return cookies[:locale] if valid_locale?(cookies[:locale])
    Rails.logger.debug "Preferred languages: #{http_accept_language.user_preferred_languages}"
    http_accept_language.language_region_compatible_from(I18n.public_locales.map(&:to_s)) ||
      I18n.default_locale
  end

  def not_found(record_name='record')
    raise ActionController::RoutingError.new("#{record_name} not found")
  end

  def on_access_denied
    render file: 'public/403.html', layout: false, status: :forbidden
  end

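  # Handler for Async::Stop / Async::Container::Terminate: show a static
  # page (public/stopped.html) instead of the generic 500 error page.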
  def on_request_stopped
    render file: 'public/stopped.html', layout: false,
      status: :internal_server_error
  end

  def redirect_back!(default=:back)
    redirect_to(params[:return_to] || default)
  end

  def set_locale
    I18n.locale = infer_locale || I18n.default_locale
  end

  def valid_locale?(locale)
    locale && I18n.usable_locales.include?(locale.to_sym)
  end

  def configure_permitted_parameters
    # Devise will automatically permit the authentication key (username) and
    # the password, but we need to let the email field through ourselves.
    devise_parameter_sanitizer.permit(:sign_up, keys: [:email])
    devise_parameter_sanitizer.permit(:account_update, keys: [:email])
  end

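  # Sign-in return flow: when the sign-in page is rendered with a ?return_to
  # param, stash it in the session, then redirect back there after
  # authentication (falling back to the home page).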
  def save_return_to_path
    if params[:return_to]
      Rails.logger.debug "Saving return_to path: #{params[:return_to].inspect}"
      session[:devise_return_to] = params[:return_to]
    end
  end

  def after_sign_in_path_for(user)
    return_to = session.delete(:devise_return_to)
    Rails.logger.debug "Using return_to path: #{return_to.inspect}"
    return_to || root_path
  end

  def after_sign_out_path_for(user)
    return_to = params[:return_to]
    Rails.logger.debug "Using return_to path: #{return_to.inspect}"
    return_to || root_path
  end
end