I am trying to use Cuprite to test my scrapers, so I have a base test that first checks that it can connect to my browserless instance via `url`, and then checks that the proxy is applied. The "Proxy connected" test is currently failing on purpose so I can capture a screenshot of the IP in use. With this setup, the proxy is not applied at all.

Below is an example of my scraper using Ferrum, where the proxy is applied successfully.

Recreation: `docker run -p 3000:3000 browserless/chrome`, then `BROWSERLESS_URL=http://localhost:3000`.
cuprite_helper.rb
```ruby
# frozen_string_literal: true

require 'capybara/cuprite'

# Default settings
Capybara.default_driver = :browserless
Capybara.javascript_driver = :browserless

# Main browserless driver
Capybara.register_driver(:browserless) do |app|
  driver_options = {
    url: ENV.fetch('BROWSERLESS_URL', nil),
    browser_options: { 'no-sandbox': nil }
  }

  Capybara::Cuprite::Driver.new(app, driver_options).tap do |driver|
    # DOES NOT WORK
    #
    # driver.set_proxy(ENV.fetch('BROWSERLESS_PROXY_HOST', nil),
    #                  ENV.fetch('BROWSERLESS_PROXY_PORT', nil),
    #                  ENV.fetch('BROWSERLESS_PROXY_USERNAME', nil),
    #                  ENV.fetch('BROWSERLESS_PROXY_PASSWORD', nil),
    #                  Capybara.app_host)
  end
end

# Helper to run a block of code against a URL that does not
# point to the current project.
# This is useful for testing external websites.
# Settings are restored via `ensure`, even if the block raises.
# @param [String] root_url The root URL of the external site
def external(root_url)
  # Unset default Capybara settings for the block
  previous_app_host = Capybara.app_host
  Capybara.always_include_port = false
  Capybara.app_host = root_url
  yield
ensure
  # Restore default Capybara settings
  Capybara.app_host = previous_app_host
  Capybara.always_include_port = true
end
```
browser_less_test.rb
```ruby
# frozen_string_literal: true

require 'application_system_test_case'

class BrowserLessTest < ApplicationSystemTestCase
  include Capybara::DSL

  test 'Remote browser connected' do
    assert_not page.driver.browser.nil?
  end

  test 'Proxy connected' do
    external 'https://whatismyipaddress.com' do
      # DOES NOT WORK
      page.driver.set_proxy(ENV.fetch('BROWSERLESS_PROXY_HOST', nil),
                            ENV.fetch('BROWSERLESS_PROXY_PORT', nil),
                            ENV.fetch('BROWSERLESS_PROXY_USERNAME', nil),
                            ENV.fetch('BROWSERLESS_PROXY_PASSWORD', nil),
                            Capybara.app_host)
      visit '/'
      sleep 10

      assert page.has_content?('you passed')
    end
  end
end
```
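One workaround sketch, under the assumption that Cuprite forwards its remaining driver options to `Ferrum::Browser` (which, as the working Ferrum example shows, accepts a `:proxy` hash): supply the proxy at registration time instead of calling `set_proxy`. The env-reading helper below is plain Ruby; the registration itself is left commented out because it needs `capybara/cuprite` and a live browserless instance.

```ruby
# Builds a Ferrum-style proxy hash from the environment.
def browserless_proxy
  {
    host: ENV.fetch('BROWSERLESS_PROXY_HOST', nil),
    port: ENV.fetch('BROWSERLESS_PROXY_PORT', nil),
    user: ENV.fetch('BROWSERLESS_PROXY_USERNAME', nil),
    password: ENV.fetch('BROWSERLESS_PROXY_PASSWORD', nil)
  }
end

# Hypothetical registration (untested assumption: Cuprite passes
# the :proxy option through to Ferrum::Browser):
#
# Capybara.register_driver(:browserless_proxied) do |app|
#   Capybara::Cuprite::Driver.new(
#     app,
#     url: ENV.fetch('BROWSERLESS_URL', nil),
#     proxy: browserless_proxy,
#     browser_options: { 'no-sandbox': nil }
#   )
# end
```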
Scraper example (working proxy)
```ruby
# frozen_string_literal: true

module ScrapServices
  class MyClass
    def initialize
      @browser = Ferrum::Browser.new(browser_options)
    end

    def scrap
      @browser.go_to('/')
      cleanup
    end

    private

    # Browser options for connecting to a remote Chrome instance.
    def browser_options
      {
        url: ENV.fetch('BROWSERLESS_URL', nil),
        base_url: 'https://whatismyipaddress.com',
        proxy: browser_proxy
      }
    end

    # Proxy options for the browser instance
    def browser_proxy
      {
        host: ENV.fetch('BROWSERLESS_PROXY_HOST', nil),
        port: ENV.fetch('BROWSERLESS_PROXY_PORT', nil),
        user: ENV.fetch('BROWSERLESS_PROXY_USERNAME', nil),
        password: ENV.fetch('BROWSERLESS_PROXY_PASSWORD', nil)
      }
    end

    # Quits the browser
    def cleanup
      @browser.reset
      @browser.quit
    end
  end
end
```