Python requests-html: trying to load all the information rendered by JavaScript
2021-06-19
Rather than visiting this site that offers free proxies by hand, I want to scrape the information and then filter it. I tried to do this with requests-html, following a tutorial and reading the library docs, but so far it hasn't worked: when I run it, it just prints []. This is the code I have so far; I'm trying to scrape the part of the page that holds the IPs.
import requests
from bs4 import BeautifulSoup
from requests_html import HTMLSession
# create an HTML Session object
session = HTMLSession()
# Use the object above to connect to needed webpage
resp = session.get("https://advanced.name/freeproxy")
# Run JavaScript code on webpage
resp.html.render()
port = resp.html.find("data-ip")
print(port)
2 Answers
You need to add a sleep time in render():
from requests_html import HTMLSession
session = HTMLSession()
url = "https://advanced.name/freeproxy"
r = session.get(url)
r.html.render(sleep=2)
ips = r.html.find('tr > td:nth-child(2)')
ports = r.html.find('tr > td:nth-child(3)')
for ip, port in zip(ips, ports):
    print(ip.text + ":" + port.text)
Output:
186.96.117.28:9991
181.209.106.196:3128
181.209.86.210:999
115.77.191.25:9090
177.52.221.166:999
49.232.118.212:3128
45.235.110.66:53281
177.155.215.89:8080
191.242.230.135:8080
45.167.95.184:8085
170.83.76.73:999
142.44.148.56:8080
103.139.194.69:8080
102.134.123.167:8080
45.167.23.30:999
45.224.150.155:999
103.138.41.132:8080
170.239.180.58:999
103.160.56.16:8080
210.18.133.71:8080
185.179.30.130:8080
190.61.90.141:999
187.188.200.2:999
42.194.212.250:8081
88.157.181.42:8080
31.40.135.67:31113
218.60.8.99:3129
104.238.195.10:80
45.189.252.40:999
190.52.129.39:8080
103.151.226.133:8080
178.205.254.106:8080
186.233.186.60:8080
201.222.44.58:999
175.103.35.2:3888
177.21.237.100:8080
113.20.31.24:8080
190.108.93.82:999
158.140.162.70:80
36.75.246.41:80
190.120.252.245:999
167.172.180.46:42580
188.133.137.9:8081
191.234.166.244:80
47.101.59.76:8888
178.32.129.31:3128
202.142.189.21:8080
185.190.38.14:8080
203.75.190.21:80
222.74.202.229:80
223.82.106.253:3128
3.221.105.1:80
3.219.153.200:80
62.33.207.196:80
178.63.17.151:3128
111.90.179.74:8080
14.97.2.108:80
120.197.179.166:8080
68.15.147.8:48678
183.215.206.39:55443
221.6.201.74:9999
18.224.59.63:3128
61.153.251.150:22222
184.180.90.226:8080
162.243.161.166:80
103.148.195.37:4153
18.236.151.253:80
81.19.0.134:3128
78.47.104.35:3128
71.172.1.52:8080
65.184.156.234:52981
199.192.126.211:8080
125.99.106.250:3128
69.163.162.222:37926
173.236.176.67:17838
184.155.36.194:8080
216.75.113.182:39602
107.150.37.82:3128
159.65.171.69:80
45.43.19.140:33533
104.215.127.197:80
124.41.211.211:57258
103.216.82.20:6667
74.143.245.221:80
124.71.162.246:808
103.79.96.173:4153
47.57.188.208:80
69.163.166.126:37926
41.33.66.241:1080
131.0.87.225:52017
50.242.100.89:32100
103.78.27.49:4145
220.163.129.150:808
193.164.94.244:4153
203.215.181.219:36342
202.168.147.189:34493
106.52.10.171:9999
51.222.21.95:32768
122.155.165.191:3128
218.16.62.152:3128
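The question also mentions wanting to filter the scraped proxies. As a follow-on sketch (not part of the original answer), once the ip:port strings are collected they can be validated and filtered with the standard library alone; the sample entries below are taken from the output above, plus one deliberately malformed one:

```python
import ipaddress

# A few entries from the scraped output above, plus one bad entry
proxies = ["186.96.117.28:9991", "104.238.195.10:80", "not-an-ip:80"]

def is_valid_proxy(entry):
    """Accept only well-formed ip:port strings."""
    host, sep, port = entry.rpartition(":")
    if not sep or not port.isdigit():
        return False
    try:
        ipaddress.ip_address(host)   # raises ValueError for malformed IPs
    except ValueError:
        return False
    return 0 < int(port) < 65536

valid = [p for p in proxies if is_valid_proxy(p)]
print(valid)                                      # the malformed entry is dropped
port_80 = [p for p in valid if p.endswith(":80")] # example filter: default HTTP port
print(port_80)
```

The same pattern extends to whatever filter criteria you need (port ranges, subnets via `ipaddress.ip_network`, and so on).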
Dorian Massoulier
2021-06-19
This page uses JavaScript to detect bots/scripts, and it seems to work, because it blocks your code. You may need something more.

If you check the requests-html repo, you will see it hasn't been updated in over a year.

I can get it using Selenium:
from selenium import webdriver
url = "https://advanced.name/freeproxy"
#driver = webdriver.Firefox()
driver = webdriver.Chrome()
driver.get(url)
all_ips = driver.find_elements_by_xpath('//td[@data-ip]')
all_ports = driver.find_elements_by_xpath('//td[@data-port]')
for ip, port in zip(all_ips, all_ports):
    print(ip.text, port.text)
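An aside not in the original answer: the `find_elements_by_xpath` helpers used here were removed in Selenium 4.3; the replacement is `find_elements` with a `By` locator and the same XPath string. A minimal sketch (the driver calls stay commented out, since they need a live browser):

```python
# Guarded import so the sketch runs even without selenium installed;
# in selenium, By.XPATH is simply the locator-strategy string "xpath".
try:
    from selenium.webdriver.common.by import By
except ImportError:
    class By:
        XPATH = "xpath"

# Modern equivalents of the calls in the answer above (need a live driver):
# all_ips = driver.find_elements(By.XPATH, '//td[@data-ip]')
# all_ports = driver.find_elements(By.XPATH, '//td[@data-port]')
print(By.XPATH)
```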
EDIT:

To read the next pages you can either:

- use a for loop and a URL with the page number, but then you need to know how many pages there are:

from selenium import webdriver

#driver = webdriver.Firefox()
driver = webdriver.Chrome()

url = "https://advanced.name/freeproxy?ddexp4attempt=1&page="

for page in range(15):
    print('--- page', page, '---')
    driver.get(url + str(page))

    all_ips = driver.find_elements_by_xpath('//td[@data-ip]')
    all_ports = driver.find_elements_by_xpath('//td[@data-port]')

    for ip, port in zip(all_ips, all_ports):
        print(ip.text, port.text)

- use while and click the link to the next page, so you don't have to know how many pages there are:

from selenium import webdriver

#driver = webdriver.Firefox()
driver = webdriver.Chrome()

url = "https://advanced.name/freeproxy"
driver.get(url)

while True:
    print('--- page ---')

    all_ips = driver.find_elements_by_xpath('//td[@data-ip]')
    all_ports = driver.find_elements_by_xpath('//td[@data-port]')

    for ip, port in zip(all_ips, all_ports):
        print(ip.text, port.text)

    try:
        # go to the next page
        link_to_next_page = driver.find_element_by_link_text('»')
        link_to_next_page.click()
    except:
        # exit the loop when there are no more pages
        break
furas
2021-06-19