Scrape .aspx Form With Python
I'm trying to scrape https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx, which on paper seems like an easy task, with plenty of resources in other SO questions.
Solution 1:
I was able to solve this problem by handling the __VIEWSTATE values with more care. In an ASPX form, the page uses __VIEWSTATE to encode the state of the page (i.e. which options of the form the user has already selected, or in our case requested) and to validate the next request.
In this case:

1. Make an initial request to get all the form fields, store those in the payload, and add my first selection by updating the dictionary.
2. Make a second request with an updated __VIEWSTATE value (the refresh step is sketched below), and add more options to my request.
3. Same as 2., but adding the final option.
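Between requests the hidden state has to be re-read from the previous response. A minimal sketch of that refresh step, assuming (as on this page) that the hidden fields all have names starting with a double underscore; collect_state is just a name I'm using here:

from bs4 import BeautifulSoup

def collect_state(html):
    """Collect the ASP.NET hidden fields (__VIEWSTATE, __EVENTVALIDATION, ...)
    from a response body so they can be echoed back in the next POST."""
    soup = BeautifulSoup(html, 'html5lib')
    return {
        tag['name']: tag.get('value', '')
        for tag in soup.select('input[name^=__]')
    }

# usage: refresh the payload after every round trip
# payload.update(collect_state(response.text))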
This gives me the same HTML body I get when I make the request from the browser, but it still does not show me the data, nor does it let me download the files as part of the body of the last request. This problem can probably be handled with selenium, but I haven't been successful with it either. This question on SO describes my problem.
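For completeness, this is roughly how I would try it with selenium. This is an untested sketch; the element ids are my guesses derived from the ctl00$MainContent$... field names (ASP.NET usually renders them with underscores instead of dollar signs), and I'm assuming the report list is rendered as a <select>:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select, WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx')
wait = WebDriverWait(driver, 10)

# element ids below are assumed from the form field names
wait.until(EC.element_to_be_clickable(
    (By.ID, 'ctl00_MainContent_rdoCommoditySystem_2'))).click()  # ELEC

# each selection triggers an ASP.NET postback, so wait for the next control
report = wait.until(EC.presence_of_element_located(
    (By.ID, 'ctl00_MainContent_lbReportName')))
Select(report).select_by_value('171')

wait.until(EC.element_to_be_clickable(
    (By.ID, 'ctl00_MainContent_btnView'))).click()
print(driver.page_source)
driver.quit()

The full requests attempt follows: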
import requests
from bs4 import BeautifulSoup

url = 'https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx'

with requests.Session() as s:
    s.headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36",
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Referer": "https://apps.neb-one.gc.ca/CommodityStatistics/Statistics.aspx",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.9"
    }

    # Initial GET: collect the form fields and the ASP.NET hidden state
    response = s.get(url)
    soup = BeautifulSoup(response.content, 'html5lib')

    data = {
        tag['name']: tag['value']
        for tag in soup.select('input[name^=ctl00]') if tag.get('value')
    }
    state = {
        tag['name']: tag['value']
        for tag in soup.select('input[name^=__]')
    }

    payload = data.copy()
    payload.update(state)

    # First postback: pick the commodity system (ELEC)
    payload.update({
        "ctl00$MainContent$rdoCommoditySystem": "ELEC",
        "ctl00$MainContent$lbReportName": '76',
        "ctl00$MainContent$rdoReportFormat": 'PDF',
        "ctl00$MainContent$ddlStartYear": "2008",
        "__EVENTTARGET": "ctl00$MainContent$rdoCommoditySystem$2"
    })
    print(payload['__EVENTTARGET'])
    print(payload['__VIEWSTATE'][-20:])

    response = s.post(url, data=payload, allow_redirects=True)
    soup = BeautifulSoup(response.content, 'html5lib')

    # Refresh the hidden state returned by the first postback
    state = {
        tag['name']: tag['value']
        for tag in soup.select('input[name^=__]')
    }
    payload.pop("ctl00$MainContent$ddlStartYear")
    payload.update(state)

    # Second postback: pick the report
    payload.update({
        "__EVENTTARGET": "ctl00$MainContent$lbReportName",
        "ctl00$MainContent$lbReportName": "171",
        "ctl00$MainContent$ddlFrom": "01/12/2018 12:00:00 AM"
    })
    print(payload['__EVENTTARGET'])
    print(payload['__VIEWSTATE'][-20:])

    response = s.post(url, data=payload, allow_redirects=True)
    soup = BeautifulSoup(response.content, 'html5lib')

    # Refresh the hidden state again before the final request
    state = {
        tag['name']: tag['value']
        for tag in soup.select('input[name^=__]')
    }
    payload.update(state)

    # Final postback: set the remaining options and ask for the report
    payload.update({
        "ctl00$MainContent$ddlFrom": "01/10/1990 12:00:00 AM",
        "ctl00$MainContent$rdoReportFormat": "HTML",
        "ctl00$MainContent$btnView": "View"
    })
    print(payload['__VIEWSTATE'])

    response = s.post(url, data=payload, allow_redirects=True)
    print(response.text)
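If the final request does ever come back with the report, the body should be either an HTML table or a PDF depending on rdoReportFormat. A hedged sketch of how I would persist each case, continuing from the last response above (pandas.read_html is only one option for the HTML variant, and the file names are mine):

import io
import pandas as pd

content_type = response.headers.get('Content-Type', '')
if 'pdf' in content_type.lower():
    # PDF report: write the raw bytes to disk
    with open('report.pdf', 'wb') as f:
        f.write(response.content)
else:
    # HTML report: pull any tables straight into DataFrames
    tables = pd.read_html(io.StringIO(response.text))
    for i, df in enumerate(tables):
        df.to_csv(f'report_table_{i}.csv', index=False)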