Guide: submitting a button with Python requests

I logged in to this website by sending login and password through POST.

I don't need to pass all the form data, just to click the submit button labeled "zapisz zmiany".

Is it possible with requests or scrapy libraries?

asked Oct 30, 2019 at 7:57


You can try to use Scrapy's FormRequest.from_response() (but there is no guarantee that it will work, because your page may use JavaScript):

yield scrapy.FormRequest.from_response(
    response=response,
    formname="your_form_name",  # you can get it from DevTools in your browser
    formdata={
        "form_field_1": "value_1",  # get fields/values from the Network tab in DevTools (after you submit the form)
    },
    callback=self.parse_form,  # the method that will process the response
)
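The same submit can also be reproduced with plain requests: fetch the page, collect every field of the form (hidden inputs often carry tokens the server checks), override the ones you care about, and POST the lot back. A minimal sketch using only the standard-library parser; the markup, field names, and URL below are hypothetical stand-ins for the real page:

```python
from html.parser import HTMLParser

class FormFieldCollector(HTMLParser):
    """Collect name/value pairs from <input> tags in a page."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if "name" in a:
                self.fields[a["name"]] = a.get("value", "")

# Sample markup standing in for the real settings page (hypothetical).
html = """
<form action="/settings" method="post">
  <input type="hidden" name="csrf_token" value="abc123">
  <input type="text" name="display_name" value="">
  <input type="submit" name="save" value="zapisz zmiany">
</form>
"""

parser = FormFieldCollector()
parser.feed(html)
data = parser.fields
data["display_name"] = "new name"  # override only the field you want to change
print(data)
```

With a logged-in `requests.Session` `s`, the actual submit would then be `s.post(action_url, data=data)`; that call is left out here because it needs the real site.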

answered Oct 30, 2019 at 9:34

gangabass


I have this little website where I want to fill in a form with the requests library. The problem is I can't get to the next page when filling in the form data and hitting the button (pressing Enter does not work).

The important thing is I can't do it via a clicking bot of some kind; this needs to run without graphics.

info = {'name': 'JohnJohn',
        'message': 'XXX',
        'sign': 'XXX',
        'step': '1'}

The first three entries (name, message, sign) are the text areas, and step is, I think, the button.

r = requests.get(url)
r = requests.post(url, data=info)

print(r.text)

The form data looks like this when I send a request manually via Chrome:

  • name: JohnJohn
  • message: XXX
  • sign: XXX
  • step: 1

The button element's visible label is:

    Wyślij
The next page, if I do this manually, has the same address.
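One thing worth checking: `requests.get` and `requests.post` above are independent calls, so any cookie the server sets on the GET is not sent with the POST. A `requests.Session` shares one cookie jar across both. The sketch below builds the POST offline (no network) just to show that a session cookie really does ride along; the URL and cookie name are made up:

```python
import requests

url = "https://example.com/form"  # hypothetical form URL
info = {'name': 'JohnJohn', 'message': 'XXX', 'sign': 'XXX', 'step': '1'}

session = requests.Session()
# In real use, session.get(url) would fill this jar from the Set-Cookie header;
# here we plant a cookie by hand to stay offline.
session.cookies.set("PHPSESSID", "demo")

# prepare_request() runs the same merging logic session.post() would use.
prepared = session.prepare_request(requests.Request("POST", url, data=info))
print(prepared.headers.get("Cookie"))  # the session cookie is attached
print(prepared.body)                   # urlencoded form data
```

Replacing the two bare calls with `session.get(url)` followed by `session.post(url, data=info)` is often all it takes.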

Hi there!

I was wondering how exactly I would go about using Python's requests library to click buttons on a webpage. After I click a button, I want to be able to access the page that I am redirected to.

As an example, if I wanted to access r/learnpython's homepage and then click the "Create Post" button to have the webpage redirect me to the page that lets me create a post, how would I use requests to do that?

r = requests.get('//www.reddit.com/r/learnpython/new/')

Keep in mind that I do not want to go to the URL "//www.reddit.com/r/learnpython/submit" directly. I want to be able to actively press the "Create Post" button in this example. I also don't want to use selenium or any other solution that requires me to have the browser's GUI open.

I know this is basic but I can't for the life of me find a post that gives me a clear answer to this question. Thanks for your time!!!
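requests has no concept of clicking; the closest equivalent is to load the page, find the URL (or form) behind the button, and request that target yourself, so the destination is still discovered from the page rather than hardcoded. A sketch with the standard-library parser; the class name and markup are hypothetical, and real reddit pages are far messier than this:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ButtonFinder(HTMLParser):
    """Find the link target behind a button-styled anchor by its class."""
    def __init__(self):
        super().__init__()
        self.target = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Hypothetical markup: the button is an <a> with class "create-post".
        if tag == "a" and "create-post" in (a.get("class") or ""):
            self.target = a.get("href")

page_url = "https://www.reddit.com/r/learnpython/new/"
# Stand-in for the HTML a requests.get(page_url) would return:
html = '<a class="create-post" href="/r/learnpython/submit">Create Post</a>'

finder = ButtonFinder()
finder.feed(html)
next_url = urljoin(page_url, finder.target)  # resolve relative href
print(next_url)  # the URL a second requests.get() would fetch
```

The "click" is then just `requests.get(next_url)`; if the button sits inside a `<form>`, you would collect its fields and POST to the form's `action` instead.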

TL;DR: I don't think this functionality is available yet with requests-HTML, as the script argument is used to scrape static HTML only (and not to run JS on a dynamic page). Seems like Selenium is the only option (if the button click changes existing data on the page instead of opening a new page, and the handler for the click lives in the backend)?

When I try to put JS in the script I get a pyppeteer error.

For example, I have the following code:

script = """
() => {
    let button = document.getElementsByClassName("course-title")[0];
    button.click();
}
"""
r.html.render(sleep=5, timeout=10000, script=script, keep_page=True)

and I get the error:
pyppeteer.errors.ElementHandleError: Evaluation failed: TypeError: Cannot read property 'click' of undefined
at pyppeteer_evaluation_script:4:20

I thought that maybe the page content wasn't rendering properly, so right above where script was declared I wrote:

with open('response.txt', 'w', encoding='utf-8') as f:
    f.write(r.text)

to dump the HTML, and the element did exist on the page.

I get a similar error when trying to use jQuery (the website has jQuery enabled).
Code:
script = """
[] => {
$[document].ready[function[] {
$[".course-title"][0].click[];
}]
}
"""
response = r.html.render[sleep=5, timeout=10000, script=script, keep_page=True]

Error:
response is None

So then I wanted to check if js was even working and wrote:
script = """
[] => {
return 1+1;
}
"""
response = r.html.render[sleep=5, timeout=10000, script=script, keep_page=True]
print[response]

Output:
2

This means that the role of passing a script to the render function is not to run the script on the live page, but rather to run a script against the static HTML content of the page. Otherwise, the response would be an object (a requests.Response() object with a .text attribute, a .json() method, etc.). Furthermore, in the case where a button click edits content on the page and the handler for the click is in the backend (which is the only reason you would have to click a button in the first place; otherwise just look at the URL endpoint, or at what the click handler does), not being able to get the HTML response makes this useless.
This matches up with the documentation for requests-HTML, where the only time they mention render(script=script) is under the line "You can also render JavaScript pages without Requests:". In this section, the JS is simply run against an HTML string (and this JS is just for getting information out of the HTML), and the return value of render is just the return value of the JS code that was in script.

How do you simulate a button click in Python?


  1. Press F12 to open DevTools and select the Network tab.
  2. Click the "Display more examples" button.
  3. Find the last request ("bst-query-service").
  4. Right-click it and select Copy > Copy as cURL (cmd).
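The copied cURL command then translates directly into a requests call. The sketch below rebuilds such a request offline so you can inspect it before sending; every value is a placeholder for what the real copied command would contain:

```python
import requests

# Placeholders: paste the real URL, headers, and payload from the copied cURL.
url = "https://example.com/bst-query-service"
headers = {"X-Requested-With": "XMLHttpRequest"}
payload = {"query": "display more examples"}

# prepare() builds the request without sending it, so it can be inspected.
req = requests.Request("POST", url, headers=headers, json=payload)
prepared = req.prepare()
print(prepared.method, prepared.url)
print(prepared.body)  # the JSON body that would be sent
```

Firing it for real would be `requests.Session().send(prepared)`, or simply `requests.post(url, headers=headers, json=payload)`.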

How do you press a button on a website using Python?

We can find the button on the web page by using methods like find_element_by_class_name(), find_element_by_name(), find_element_by_id(), etc.; then, after finding the button/element, we can click it using the click() method. This clicks the button, and a popup will be shown.
