
falcon hosting on heroku using falcon.rb

Open semdinsp opened this issue 4 years ago • 20 comments

I was a bit confused by the documentation, so I wondered if you could bless the following. My confusion: production should only use the `host` command (and not `serve`), but there is no clear example `falcon.rb` file. I have attached a sample file that works below.

a) Deployment should use `falcon host` (and NOT `falcon serve`). I have used `serve` successfully for about a year but thought I should be up to date. So my Heroku Procfile now looks like:

web: bundle exec falcon host 
# old was  web: bundle exec falcon serve -b http://0.0.0.0:${PORT:-3000} --count 3

my falcon.rb file in app directory looks like

#!/usr/bin/env -S falcon host
# frozen_string_literal: true

require 'etc'

module Async::Container
  def self.processor_count
    # cast to Integer so a FALCON_WORKERS string from the environment works
    Integer(ENV.fetch('FALCON_WORKERS') { Etc.nprocessors })
  rescue
    2
  end
end
puts "falcon processor count #{Async::Container.processor_count} cores: #{Etc.nprocessors} set FALCON_WORKERS env to override"
# Above courtesy of Daniel Evans
load :rack, :lets_encrypt_tls, :supervisor
myport = 3000
myport = ENV['PORT'] unless ENV['PORT'].nil? || ENV['PORT'].empty?
# puts "port set to [#{myport}] #{ENV.inspect}"
puts "port set to [#{myport}]"
hostname = File.basename(__dir__)
rack hostname, :lets_encrypt_tls do
  cache true
  # endpoint Async::HTTP::Endpoint.parse("http://localhost:#{myport}").with(protocol: Async::HTTP::Protocol::HTTP2)
  endpoint Async::HTTP::Endpoint.parse("http://0.0.0.0:#{myport}").with(protocol: Async::HTTP::Protocol::HTTP11)
end
supervisor

THE ABOVE SEEMS TO WORK. Can you bless it? Your software has been plug and play for me, with the exception of async-postgres.

Question: how can I limit the number of servers spun up using the `host` command (the old count option in `serve`)? I was running out of memory in another Heroku deployment, and changing count to 2 fixed it. this pull request fixes this

thank you.

semdinsp avatar Jun 04 '20 13:06 semdinsp

I will check this tomorrow.

ioquatix avatar Jun 04 '20 14:06 ioquatix

Sorry, I lost track of this issue. I'll take a look at my earliest convenience.

ioquatix avatar Jun 11 '20 14:06 ioquatix

Firstly, we provide ASYNC_CONTAINER_PROCESSOR_COUNT to override the default processor count, until we have a better solution for this in the falcon.rb configuration.

The rest looks fine but it could be a bit tidier. I'll try to make some canonical example of how to do it.
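The precedence described above can be sketched in plain Ruby. This is an illustrative sketch only: the helper name `effective_worker_count` is hypothetical, not part of Falcon's API; an explicit ASYNC_CONTAINER_PROCESSOR_COUNT wins, otherwise the CPU core count is used.

```ruby
require 'etc'

# Hypothetical helper illustrating the override: an explicit
# ASYNC_CONTAINER_PROCESSOR_COUNT takes precedence, otherwise
# fall back to the number of CPU cores on the machine.
def effective_worker_count(env = ENV)
  Integer(env.fetch('ASYNC_CONTAINER_PROCESSOR_COUNT') { Etc.nprocessors })
end

puts effective_worker_count({ 'ASYNC_CONTAINER_PROCESSOR_COUNT' => '2' })  # prints 2
```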

ioquatix avatar Jun 11 '20 23:06 ioquatix

Also, looking to try out Falcon on Heroku, so a good example config would be great.

danmayer avatar Jan 02 '21 02:01 danmayer

Here is a sample which has been in production for over a year; any questions, let me know. It may not be updated for the most recent version, so your mileage may vary...


semdinsp avatar Jan 02 '21 03:01 semdinsp

Sorry, attaching the Procfile as well..


semdinsp avatar Jan 02 '21 03:01 semdinsp

Awesome, thanks @semdinsp, wow on the fast response. I think the attachments aren't working, or at least I can't see them on the GitHub issue comments.

danmayer avatar Jan 02 '21 03:01 danmayer

They are short, so I have sent them embedded. Note this falcon.rb does NOT take advantage of the latest env variables; it is currently running well on Heroku for multiple apps/code bases.

# Procfile
web: bundle exec falcon host

# falcon.rb
#!/usr/bin/env -S falcon host
# frozen_string_literal: true

require 'etc'

module Async::Container
  def self.processor_count
    # was: ENV.fetch('FALCON_WORKERS') { Etc.nprocessors }
    # cast to Integer so a FALCON_WORKERS string from the environment works
    Integer(ENV.fetch('FALCON_WORKERS') { 2 })
  rescue
    2
  end
end

puts "falcon processor count #{Async::Container.processor_count} cores: #{Etc.nprocessors} set FALCON_WORKERS env to override"

load :rack, :lets_encrypt_tls, :supervisor

myport = 3000
myport = ENV['PORT'] unless ENV['PORT'].nil? || ENV['PORT'].empty?
# puts "port set to [#{myport}] #{ENV.inspect}"
puts "port set to [#{myport}]"

hostname = File.basename(__dir__)

rack hostname, :lets_encrypt_tls do
  cache true
  # endpoint Async::HTTP::Endpoint.parse("http://localhost:#{myport}").with(protocol: Async::HTTP::Protocol::HTTP2)
  endpoint Async::HTTP::Endpoint.parse("http://0.0.0.0:#{myport}").with(protocol: Async::HTTP::Protocol::HTTP11)
end

supervisor

semdinsp avatar Jan 02 '21 03:01 semdinsp

Thanks, this is really helpful

danmayer avatar Jan 02 '21 04:01 danmayer

You can use ASYNC_CONTAINER_PROCESSOR_COUNT instead of the FALCON_WORKERS monkey patch.

ioquatix avatar Jan 06 '21 03:01 ioquatix

Thank you for this! I'd be curious to hear more details about this config, as I'm not sure what the various parts are doing. For example, what is the lets_encrypt_tls bit doing? I thought I might be able to omit it since I'm using Heroku's SSL, but it appears to be necessary. Here's the config from above but with formatting for future readers:

ENV:

ASYNC_CONTAINER_PROCESSOR_COUNT=8

Procfile:

web: bundle exec falcon host

falcon.rb:

#!/usr/bin/env -S falcon host
# frozen_string_literal: true

load :rack, :lets_encrypt_tls, :supervisor

hostname = File.basename(__dir__)
port = ENV["PORT"] || 3000
rack hostname, :lets_encrypt_tls do
  cache true
  endpoint Async::HTTP::Endpoint.parse("http://0.0.0.0:#{port}").with(protocol: Async::HTTP::Protocol::HTTP11)
end

supervisor

Any additional info would be much appreciated, but thank you in any case for the pointers already!

trevorturk avatar Feb 19 '21 21:02 trevorturk

Sorry, I just grabbed the code to get it to work. I did not look into that `load :rack` line, but it is an interesting question. I will look into it a bit.

As a side note, I fiddled with the Async protocols as I wanted to run HTTP2, but then I found out that Heroku was not supporting HTTP2. However, if you don't use Heroku, setting the protocol to HTTP2 works in this configuration and seems to be robust (at least in my testing).


semdinsp avatar Feb 22 '21 05:02 semdinsp

I chatted with @ioquatix about this a bit more. I'm not 100% sure I have this correct, and I'm not sure where best to document it, but I think this should serve as a viable Heroku config with preloading for a typical Rails app:

ENV:

ASYNC_CONTAINER_PROCESSOR_COUNT=8

Procfile:

web: bundle exec ./falcon.rb

preload.rb:

require_relative "config/environment"

falcon.rb:

#!/usr/bin/env -S falcon host
# frozen_string_literal: true

load :rack

hostname = File.basename(__dir__)
port = ENV["PORT"] || 3000

rack hostname do
  append preload "preload.rb"
  endpoint Async::HTTP::Endpoint.parse("http://0.0.0.0:#{port}").with(protocol: Async::HTTP::Protocol::HTTP11)
end

trevorturk avatar Apr 26 '21 19:04 trevorturk

ASYNC_CONTAINER_PROCESSOR_COUNT=8

Strictly speaking, you should not need this; it will automatically scale depending on the number of CPU cores.

ioquatix avatar Apr 30 '21 23:04 ioquatix

Ah, I was thinking this is how to control memory use... is there another way? (Generally, Rails apps seem to be memory constrained on Heroku in my experience.)

trevorturk avatar May 01 '21 04:05 trevorturk

(Generally, Rails apps seem to be memory constrained on Heroku in my experience.)

I totally agree with this comment. In my experience I need to tune down the number of worker processes to limit memory use on Heroku for the $7 dyno class, and in general for any size dyno.

I think it is interesting and a bit non-intuitive that we use CPU count as a way to control memory usage. Obviously we don't need to change the name of the variable; I just wanted to write a post so that if newbies were having a problem with memory on Heroku with Rails, they might see this comment.
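The trade-off being discussed can be made concrete with a small sketch: choose the worker count from a memory budget rather than from CPU cores. All numbers and the helper name `workers_for` are illustrative assumptions, not Falcon or Heroku APIs.

```ruby
# Hypothetical sizing helper: the worker count is capped by both the
# memory budget (dyno size / per-worker footprint) and the CPU count.
def workers_for(dyno_memory_mb, per_worker_mb, cpu_count)
  by_memory = [dyno_memory_mb / per_worker_mb, 1].max
  [by_memory, cpu_count].min
end

puts workers_for(512, 250, 8)  # a 512 MB dyno fits 2 workers at ~250 MB each
```

On a small dyno the memory budget, not the CPU count, ends up being the binding constraint, which is why tuning the worker count down fixes out-of-memory errors.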


semdinsp avatar May 01 '21 05:05 semdinsp

Yes, I'm planning to work with @ioquatix on this a bit more and we hope to provide some docs to help explain the trade offs etc. I just wanted to post what I had so far on this thread since it'll likely be a few weeks before I can get back into this.

I'm sure lots of people will be like me, starting from their experience with Puma, so I'd imagine people will be looking for guidance on how to port things over to Falcon, and how to optimize for Heroku etc.

trevorturk avatar May 01 '21 16:05 trevorturk

This was recently promoted to an environment configuration: https://socketry.github.io/falcon/source/Falcon/Environments/application/index.html#Falcon%3A%3AEnvironments.application%23count

ioquatix avatar May 11 '21 02:05 ioquatix

@ioquatix what should the database pool size be? For Puma you would do something like:

  on_worker_boot do
    config = ActiveRecord::Base.configurations[rails_env]
    config["pool"] = puma_max_threads
    ActiveRecord::Base.establish_connection
  end

and `puma_max_threads` is usually 5.

sebyx07 avatar May 15 '21 06:05 sebyx07

@sebyx07 I recently ran into the same task: how to do something once a worker is forked. In my case the answer was to add the next line to config.ru:

require_relative 'after_fork'

I use a preload script as well, which simply loads the whole app (Sinatra in my case). That loading initiates connections which I want to close and reconnect in the fork. Take a look at https://github.com/rubyapi/rubyapi; they have a preload which loads the Rails application before it starts Rack, so anything added to config.ru should be equivalent to on_worker_boot.
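A minimal plain-Ruby illustration (no Rails, Sinatra, or Falcon involved) of why code required from config.ru behaves like on_worker_boot: each forked worker process re-executes that setup code independently.

```ruby
# Spawn three "workers"; each one runs its own copy of the setup code,
# which is why per-worker work like reopening database connections
# belongs in a file required from config.ru.
readers = 3.times.map do
  reader, writer = IO.pipe
  fork do
    reader.close
    # stand-in for "establish_connection" running inside one worker
    writer.write(Process.pid.to_s)
    writer.close
    exit!(0)
  end
  writer.close
  reader
end

pids = readers.map(&:read)
Process.waitall
puts pids.uniq.length  # three distinct worker pids
```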

The pool size should not really depend on the number of connections you will have in Falcon; it's more about how many max concurrent transactions per fork your database can handle, so keep it as it was before.
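That advice can be sketched as a small helper: size the per-process pool from the expected concurrency, independent of Falcon. The env variable names here (DB_POOL, RAILS_MAX_THREADS) are common Rails conventions, not Falcon settings, and the helper name is hypothetical.

```ruby
# Hypothetical pool sizing: an explicit DB_POOL wins, otherwise reuse
# the thread-count convention, otherwise fall back to Rails's default of 5.
def db_pool_size(env = ENV)
  Integer(env.fetch('DB_POOL') { env.fetch('RAILS_MAX_THREADS', 5) })
end

puts db_pool_size({})  # defaults to 5
```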

troex avatar Jul 06 '21 09:07 troex