Add handlers for requests that were stopped during the reboot process

According to our GlitchTip error tracker, every time we deploy, a
couple of instances of `Async::Stop` and `Async::Container::Terminate`
come in, presumably because:

1. systemd sends a stop signal (SIGTERM) to the `falcon host` process.
2. `falcon host` gives the in-progress requests some time to finish up.
3. Sometimes some requests take too long, so something happens (either
   a timer in Falcon or a KILL signal from systemd, I'm not sure!) that
   finally terminates the ongoing requests by raising `Async::Stop` or
   `Async::Container::Terminate`. (I'm not sure when each one happens;
   maybe they're raised at different points in the process? Maybe one
   is for the actual long-running requests, and the other is for
   requests that arrive in the meantime and get caught in the
   spin-down?)
4. Rails bubbles up the errors, our Sentry library notices them and
   sends them to GlitchTip, the user presumably receives the generic
   500 error, and the app can finally shut down gracefully.

It's hard for me to validate that this is *exactly* what's happening
here, or that my mitigation makes sense, but my logic is basically: if
these exceptions are bubbling up as "uncaught exceptions" and spamming
up our error log, then the best solution is to catch them!

So in this change, we add an error handler for these two error classes,
which hopefully will 1) give users a better experience when this
happens, and 2) stop sending these errors to our logging 🤞❗️

That strange phenomenon where the best way to get a noisy bug out of
your logs is to fix it lmao

2024-02-28 13:50:13 -08:00
require 'async'
require 'async/container'
class ApplicationController < ActionController::Base
  protect_from_forgery

  helper_method :current_user, :user_signed_in?

  before_action :set_locale
  before_action :configure_permitted_parameters, if: :devise_controller?
  before_action :save_return_to_path,
                if: ->(c) { c.controller_name == 'sessions' && c.action_name == 'new' }

  # Enable profiling tools if logged in as admin.
  before_action do
    if current_user && current_user.admin?
      Rack::MiniProfiler.authorize_request
    end
  end

  class AccessDenied < StandardError; end

  rescue_from AccessDenied, with: :on_access_denied
  rescue_from Async::Stop, Async::Container::Terminate,
              with: :on_request_stopped
  rescue_from ActiveRecord::ConnectionTimeoutError, with: :on_db_timeout

  def authenticate_user!
    redirect_to(new_auth_user_session_path) unless user_signed_in?
  end

  def authorize_user!
    raise AccessDenied unless user_signed_in? && current_user.id == params[:user_id].to_i
  end

  def current_user
    if auth_user_signed_in?
      User.where(remote_id: current_auth_user.id).first
    else
      nil
    end
  end

  def user_signed_in?
    auth_user_signed_in?
  end

  def infer_locale
    return params[:locale] if valid_locale?(params[:locale])
    return cookies[:locale] if valid_locale?(cookies[:locale])

    Rails.logger.debug "Preferred languages: #{http_accept_language.user_preferred_languages}"
    http_accept_language.language_region_compatible_from(I18n.available_locales.map(&:to_s)) ||
      I18n.default_locale
  end

  def not_found(record_name='record')
    raise ActionController::RoutingError.new("#{record_name} not found")
  end

  def on_access_denied
    render file: 'public/403.html', layout: false, status: :forbidden
  end

  def on_request_stopped
    render file: 'public/stopped.html', layout: false,
           status: :internal_server_error
  end

  def on_db_timeout
    render file: 'public/503.html', layout: false,
           status: :service_unavailable
  end

  def redirect_back!(default=:back)
    redirect_to(params[:return_to] || default)
  end

  def set_locale
    I18n.locale = infer_locale || I18n.default_locale
  end

  def valid_locale?(locale)
    locale && I18n.available_locales.include?(locale.to_sym)
  end

  def configure_permitted_parameters
    # Devise will automatically permit the authentication key (username) and
    # the password, but we need to let the email field through ourselves.
    devise_parameter_sanitizer.permit(:sign_up, keys: [:email])
    devise_parameter_sanitizer.permit(:account_update, keys: [:email])
  end

  def save_return_to_path
    if params[:return_to]
      Rails.logger.debug "Saving return_to path: #{params[:return_to].inspect}"
      session[:devise_return_to] = params[:return_to]
    end
  end

  def after_sign_in_path_for(user)
    return_to = session.delete(:devise_return_to)
    Rails.logger.debug "Using return_to path: #{return_to.inspect}"
    return_to || root_path
  end

  def after_sign_out_path_for(user)
    return_to = params[:return_to]
    Rails.logger.debug "Using return_to path: #{return_to.inspect}"
    return_to || root_path
  end
end