Python 3 Web Crawler in Action (37): Scraping Dynamically Rendered Pages with Selenium

In the previous chapter, we learned how to analyze and scrape Ajax requests, which is one case of pages rendered dynamically by JavaScript. By analyzing the Ajax interface directly, we can still use Requests or urllib to fetch the data.

However, JavaScript renders many more kinds of pages than just Ajax ones. Take China Youth Network ( http://news.youth.cn/gn/ ) for example: its pagination is generated by JavaScript rather than present in the original HTML, and it involves no Ajax requests at all. The official ECharts examples ( http://echarts.baidu.com/demo ...) are similar: the charts are produced by JavaScript calculations. Then there are pages like Taobao: even though the data does come from Ajax, the Ajax interfaces carry many encrypted parameters, so it is hard to work out their rules and hard to scrape the data by analyzing Ajax directly.

However, the data still has to be scraped. To solve these problems, we can simply drive a real browser, so that whatever the browser sees is what we scrape; in other words, visible means crawlable. This way we obtain the final JavaScript-rendered result directly from the browser, without caring what algorithm the page uses to render itself or what parameters its Ajax interfaces take. As long as we can see it in the browser, we can scrape it.

There are many libraries in Python that simulate browser operation, such as Selenium, Splash, PyV8, and Ghost. In this chapter, we introduce the usage of Selenium and Splash, so that we no longer need to worry about dynamically rendered pages.

Use of Selenium

Selenium is an automated testing tool that lets us drive the browser to perform specific actions, such as clicking and pulling down, and also lets us obtain the source code of the page the browser is currently rendering, so that what is visible is crawlable. It is an effective way to scrape pages rendered dynamically by JavaScript, so let's take a look at what it can do.

1. Preparations

This section uses Chrome as an example to illustrate the use of Selenium. Before starting this section, make sure you have installed the Chrome browser correctly and configured the ChromeDriver. You also need to correctly install the Selenium library for Python. You can refer to the installation and configuration instructions in Chapter 1 for the detailed process.

2. Basic Use

Once everything is ready, let's take a first look at what Selenium can do with an example:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

browser = webdriver.Chrome()
try:
    browser.get('https://www.baidu.com')
    input = browser.find_element_by_id('kw')
    input.send_keys('Python')
    input.send_keys(Keys.ENTER)
    wait = WebDriverWait(browser, 10)
    wait.until(EC.presence_of_element_located((By.ID, 'content_left')))
    print(browser.current_url)
    print(browser.get_cookies())
    print(browser.page_source)
finally:
    browser.close()

After running the code, a Chrome browser pops up automatically. The browser first navigates to Baidu, then enters Python in the search box and searches, jumping to the results page. After the search results load, the console outputs the current URL, the current cookies, and the page source code, as shown in Figure 7-1:

Figure 7-1 Running Results
You can see that the URL, cookies, and source code we get are the real content in the browser.
So once we use Selenium to drive the browser to load a web page, we can directly get the JavaScript-rendered result, no matter what encryption is used.
Let's take a closer look at the use of Selenium.

3. Declare Browser Objects

Selenium supports a wide range of browsers, such as Chrome, Firefox, Edge, and mobile browsers on Android and BlackBerry, as well as PhantomJS, a headless browser.
We can initialize it as follows:

from selenium import webdriver

browser = webdriver.Chrome()
browser = webdriver.Firefox()
browser = webdriver.Edge()
browser = webdriver.PhantomJS()
browser = webdriver.Safari()

This completes the initialization and assigns the browser object to the variable browser. Next, all we have to do is call methods on browser to have it perform various actions that simulate browser operations.

4. Access Pages

We can request a web page with the get() method, passing the URL as a parameter. For example, here we use get() to visit Taobao and print out its source code:

from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.taobao.com')
print(browser.page_source)
browser.close()

After running it, Chrome pops up, visits Taobao automatically, the console outputs the source code of the Taobao page, and then the browser closes.
With these simple lines of code, we can drive the browser and get the source code of the web page, which is very convenient.

5. Find Nodes

Selenium can drive the browser to complete various operations, such as filling out forms and simulating clicks. For example, to type text into an input box, we first need to know where that input box is. Selenium therefore provides a series of node-finding methods, which we can use to get the desired nodes and then perform actions on them or extract information in the next step.

Single Node

For example, suppose we want to extract the search box node from the Taobao page. First look at its source code, as shown in Figure 7-2:

Figure 7-2 Source Code
You can see that its id is q and its name is also q, along with many other attributes. So there are many ways to get it, such as find_element_by_name() based on the name value, find_element_by_id() based on the id, and also XPath, CSS selectors, and so on.
Let's do it in code:

from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.taobao.com')
input_first = browser.find_element_by_id('q')
input_second = browser.find_element_by_css_selector('#q')
input_third = browser.find_element_by_xpath('//*[@id="q"]')
print(input_first, input_second, input_third)
browser.close()

Here we use three ways to get input boxes, based on ID, CSS Selector, and XPath, which return exactly the same results.
Run result:

<selenium.webdriver.remote.webelement.WebElement (session="764d5dc3113b4c60c143c4d69f91e60d", element="43e07ede-d084-474e-a03c-20451c4a4f51")>
<selenium.webdriver.remote.webelement.WebElement (session="764d5dc3113b4c60c143c4d69f91e60d", element="43e07ede-d084-474e-a03c-20451c4a4f51")>
<selenium.webdriver.remote.webelement.WebElement (session="764d5dc3113b4c60c143c4d69f91e60d", element="43e07ede-d084-474e-a03c-20451c4a4f51")>

You can see that all three nodes are WebElement types and are identical.
Here are all the ways to get a single node:

find_element_by_id
find_element_by_name
find_element_by_xpath
find_element_by_link_text
find_element_by_partial_link_text
find_element_by_tag_name
find_element_by_class_name
find_element_by_css_selector

Selenium also provides a generic find_element() method, which takes two parameters: the find method By and the value. It is effectively the generic version of the find_element_by_*() functions; for example, find_element_by_id(id) is equivalent to find_element(By.ID, id). The results are identical.
Let's do it in code:

from selenium import webdriver
from selenium.webdriver.common.by import By

browser = webdriver.Chrome()
browser.get('https://www.taobao.com')
input_first = browser.find_element(By.ID, 'q')
print(input_first)
browser.close()

This way of finding is exactly equivalent to the functions listed above, but the parameters are more flexible.
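To see why the two forms are equivalent, here is a minimal stand-in sketch (it runs without a real browser). StubBrowser is hypothetical, used only for illustration; a real WebDriver would query the DOM and return WebElement objects instead of tuples:

```python
# Sketch: the find_element_by_* shortcuts reduce to the generic
# find_element(by, value) call. StubBrowser is a hypothetical stand-in.
class StubBrowser:
    def find_element(self, by, value):
        # A real driver would search the DOM here.
        return (by, value)

    def find_element_by_id(self, value):
        return self.find_element('id', value)

    def find_element_by_css_selector(self, value):
        return self.find_element('css selector', value)

browser = StubBrowser()
print(browser.find_element_by_id('q'))   # ('id', 'q')
print(browser.find_element('id', 'q'))   # ('id', 'q'), the same result
```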

Multiple Nodes

If there is only one target in the page, find_element() works fine; if there are multiple matching nodes, find_element() returns only the first one. To get all the nodes that meet the criteria, we need the find_elements() method instead. Note the extra s in the name.
For example, here we look for all entries of the left navigation bar of Taobao, as shown in Figure 7-3:

Figure 7-3 Navigation Bar
You can do this:

from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.taobao.com')
lis = browser.find_elements_by_css_selector('.service-bd li')
print(lis)
browser.close()

Run result:

[<selenium.webdriver.remote.webelement.WebElement (session="de61fc129db58e372b39e6a56452fcdf", element="f460bc2b-04a7-4659-a08f-e39026f44f78")>, <selenium.webdriver.remote.webelement.WebElement (session="de61fc129db58e372b39e6a56452fcdf", element="64d9673d-967e-4c45-bb09-60b6efc5ee2b")>, ···]

The output is simplified here, with the middle omitted.
You can see that the result becomes a list, with each item in the list a WebElement.
That is, if we use the find_element() method, we can only get the first node that matches, and the result is the WebElement type. If we use the find_elements() method, the result is the list type, and each node of the list is the WebElement type.
The list of functions is as follows:

find_elements_by_id
find_elements_by_name
find_elements_by_xpath
find_elements_by_link_text
find_elements_by_partial_link_text
find_elements_by_tag_name
find_elements_by_class_name
find_elements_by_css_selector

Of course, as we just did, we can also choose directly by the find_elements() method, so we can also write as follows:

lis = browser.find_elements(By.CSS_SELECTOR, '.service-bd li')

The results are identical.

6. Node Interaction

Selenium can drive the browser to perform actions; that is, we can have the browser simulate operations. The most common ones are: entering text with the send_keys() method, clearing text with the clear() method, and clicking buttons with the click() method.
Let's take a look at an example:

from selenium import webdriver
import time

browser = webdriver.Chrome()
browser.get('https://www.taobao.com')
input = browser.find_element_by_id('q')
input.send_keys('iPhone')
time.sleep(1)
input.clear()
input.send_keys('iPad')
button = browser.find_element_by_class_name('btn-search')
button.click()

Here we first drive the browser to open Taobao, use find_element_by_id() to get the input box, use send_keys() to enter the text iPhone, wait one second, clear the input box with clear(), call send_keys() again to enter iPad, then get the search button with find_element_by_class_name(), and finally call click() to perform the search.
With the methods above we have covered the common node operations; more operations can be found in the official interaction documentation: http://selenium-python.readth....
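The clear-before-typing step matters because send_keys() appends to whatever is already in the box rather than replacing it. A stub input element (hypothetical, so the example runs without a browser) makes the difference visible:

```python
# Sketch: why clear() is needed before typing new text.
class StubInput:
    def __init__(self):
        self.value = ''

    def send_keys(self, text):
        self.value += text  # appends, like typing into a real input box

    def clear(self):
        self.value = ''

box = StubInput()
box.send_keys('iPhone')
box.send_keys('iPad')
print(box.value)  # iPhoneiPad, keystrokes pile up without clear()
box.clear()
box.send_keys('iPad')
print(box.value)  # iPad
```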

7. Action Chain

In the examples above, all interactions were performed on a specific node: for an input box we called its text-entry and clearing methods, and for a button we called its click method. But some operations have no specific target object, such as mouse dragging or keyboard presses. For these we use another way of execution: the action chain.
For example, to drag a node from one place to another, we can do it in code like this:

from selenium import webdriver
from selenium.webdriver import ActionChains

browser = webdriver.Chrome()
url = 'http://www.runoob.com/try/try.php?filename=jqueryui-api-droppable'
browser.get(url)
browser.switch_to.frame('iframeResult')
source = browser.find_element_by_css_selector('#draggable')
target = browser.find_element_by_css_selector('#droppable')
actions = ActionChains(browser)
actions.drag_and_drop(source, target)
actions.perform()

First we open the drag-and-drop example page, select the node to be dragged and the drop target, declare an ActionChains object and assign it to the actions variable, call the drag_and_drop() method of actions, and then call perform() to execute the action. This completes the drag operation, as shown in Figures 7-4 and 7-5:

Figure 7-4 Page before dragging

Figure 7-5 Page after dragging
The two figures above show the results before and after dragging.
More action chain operations can be found in the official Action Chains documentation: http://selenium-python.readth....

8. Executing JavaScript

For some operations the Selenium API is not provided, such as dragging the scroll bar. In that case we can simulate them by running JavaScript directly with the execute_script() method, as follows:

from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.zhihu.com/explore')
browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')
browser.execute_script('alert("To Bottom")')

Here we use the execute_script() method to scroll the page to the bottom and then pop up an alert box.
So with this method, essentially any functionality the API does not provide can be achieved by executing JavaScript.
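On pages that lazy-load content as you scroll, a single jump to the bottom may skip loads, so scrolling in steps is a common trick. Here is a hypothetical helper (my own sketch, not part of Selenium) that only builds the JavaScript strings; executing each one would be done with browser.execute_script(script) on a real driver:

```python
# Sketch: prepare window.scrollTo calls for scrolling a page in steps.
def scroll_scripts(steps):
    """Return one window.scrollTo call per step, ending at the page bottom."""
    scripts = []
    for i in range(1, steps + 1):
        fraction = i / steps
        scripts.append(
            'window.scrollTo(0, document.body.scrollHeight * %s)' % fraction
        )
    return scripts

for script in scroll_scripts(4):
    print(script)
    # with a real driver: browser.execute_script(script); time.sleep(0.5)
```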

9. Get Node Information

We have previously said that the source code of a Web page can be obtained by using the page_source attribute, and after obtaining the source code, information can be extracted using parsing libraries such as Regular, BeautifulSoup, PyQuery, and so on.
However, since Selenium already provides methods for selecting nodes and returns WebElement objects, it also has methods and attributes to extract node information directly, such as attribute values and text. This way we can extract information without parsing the source code, which is very convenient.
Now let's see how we can get node information.

Get Attribute

We can use the get_attribute() method to get an attribute of a node; the precondition is that the node has been selected first.
Let's take a look at an example:

from selenium import webdriver
from selenium.webdriver import ActionChains

browser = webdriver.Chrome()
url = 'https://www.zhihu.com/explore'
browser.get(url)
logo = browser.find_element_by_id('zh-top-link-logo')
print(logo)
print(logo.get_attribute('class'))

After running, the program drives the browser to open the Zhihu page, gets the Zhihu logo node, and prints its class attribute.
Console output:

<selenium.webdriver.remote.webelement.WebElement (session="e08c0f28d7f44d75ccd50df6bb676104", element="0.7236390660048155-1")>
zu-top-link-logo

By passing the name of the attribute we want to get_attribute(), we can get its value.

Get Text Value

Every WebElement node has a text attribute, which we can access directly to get the text inside the node. It is roughly equivalent to BeautifulSoup's get_text() method or PyQuery's text() method.
Let's take a look at an example:

from selenium import webdriver

browser = webdriver.Chrome()
url = 'https://www.zhihu.com/explore'
browser.get(url)
input = browser.find_element_by_class_name('zu-top-add-question')
print(input.text)

Here we again open the Zhihu page, get the Ask Question button node, and print its text value.
Console output:

提问

Get ID, Location, Tag Name, Size

A WebElement node also has other useful attributes: the id attribute gets the node's internal id, location gets its relative position in the page, tag_name gets the tag name, and size gets the node's size, that is, its width and height.
Let's take a look at an example:

from selenium import webdriver

browser = webdriver.Chrome()
url = 'https://www.zhihu.com/explore'
browser.get(url)
input = browser.find_element_by_class_name('zu-top-add-question')
print(input.id)
print(input.location)
print(input.tag_name)
print(input.size)

Here we first get the Question Button node, and then call its id, location, tag_name, size attributes to get the corresponding attribute values.
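One practical use of location and size is computing the center point of a node, for example as a click target. The helper below is my own sketch (not a Selenium API); the dictionaries mirror the shape Selenium returns for these attributes, with sample values chosen for illustration:

```python
# Sketch: compute a node's center point from its location and size.
def center(location, size):
    return {'x': location['x'] + size['width'] / 2,
            'y': location['y'] + size['height'] / 2}

# Values of the sort input.location and input.size might return:
loc = {'x': 759, 'y': 7}
dim = {'width': 66, 'height': 32}
print(center(loc, dim))  # {'x': 792.0, 'y': 23.0}
```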

10. Switch Frame

There is a kind of node in web pages called an iframe, or sub-frame; it is equivalent to a page embedded within the page, and its structure is exactly like that of an ordinary web page. When Selenium opens a page, it operates in the parent frame by default, and if the page contains child frames it cannot get nodes inside them. At that point we need to switch frames with the switch_to.frame() method.
Let's start with an example:

import time
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

browser = webdriver.Chrome()
url = 'http://www.runoob.com/try/try.php?filename=jqueryui-api-droppable'
browser.get(url)
browser.switch_to.frame('iframeResult')
try:
    logo = browser.find_element_by_class_name('logo')
except NoSuchElementException:
    print('NO LOGO')
browser.switch_to.parent_frame()
logo = browser.find_element_by_class_name('logo')
print(logo)
print(logo.text)

Console Output:

NO LOGO
<selenium.webdriver.remote.webelement.WebElement (session="4bb8ac03ced4ecbdefef03ffdc0e4ccd", element="0.13792611320464965-2")>
RUNOOB.COM

Here we again use the drag-and-drop demo page from the action chain section. First we switch into the child frame with switch_to.frame() and try to get the logo node, which exists only in the parent frame. It cannot be found, so a NoSuchElementException is thrown; we catch it and print NO LOGO. Then we switch back to the parent frame with switch_to.parent_frame() and retrieve the node again, this time successfully.
So, when a page contains child frames and we want to get nodes inside one of them, we must first call switch_to.frame() to switch into the corresponding frame.
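Forgetting to switch back to the parent frame is a common slip, so it can help to wrap the switch in a context manager that always restores the parent frame, even on errors. This is my own sketch of the pattern, shown against a hypothetical stub browser so it runs without Selenium; with a real driver the same pattern applies unchanged:

```python
# Sketch: a context manager that enters a frame and always switches back.
from contextlib import contextmanager

@contextmanager
def frame(browser, name):
    browser.switch_to.frame(name)
    try:
        yield
    finally:
        browser.switch_to.parent_frame()

class StubSwitchTo:
    def __init__(self, log):
        self.log = log

    def frame(self, name):
        self.log.append('enter:' + name)

    def parent_frame(self):
        self.log.append('back-to-parent')

class StubBrowser:
    def __init__(self):
        self.log = []
        self.switch_to = StubSwitchTo(self.log)

browser = StubBrowser()
with frame(browser, 'iframeResult'):
    pass  # find nodes inside the child frame here
print(browser.log)  # ['enter:iframeResult', 'back-to-parent']
```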

11. Delayed Waiting

In Selenium, the get() method returns once the page frame has finished loading. If we grab page_source at that moment, it may not be the fully loaded page: some pages fire additional Ajax requests, so the data may not yet be in the source. We therefore need to wait a little to make sure the nodes have loaded.
There are two ways to wait here, implicit waiting and explicit waiting.

Implicit Wait

With implicit waiting, if Selenium does not find a node in the DOM, it keeps waiting, and throws an exception only when the node is still missing after the set time has passed. In other words, when a node does not appear immediately, implicit waiting retries the DOM lookup for a period of time; the default time is 0.
Let's take a look at an example:

from selenium import webdriver

browser = webdriver.Chrome()
browser.implicitly_wait(10)
browser.get('https://www.zhihu.com/explore')
input = browser.find_element_by_class_name('zu-top-add-question')
print(input)

Here we implement implicit wait using the implicitly_wait() method.

Explicit Wait

The effect of implicit waiting is actually not that good, because it only specifies a fixed time, while the page load time depends on network conditions.
So here is a more appropriate method, the explicit wait: we specify the node to look for and a maximum wait time. If the node loads within that time, the node is returned; if it still has not loaded when the time is up, a timeout exception is thrown.
Let's take a look at an example:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('https://www.taobao.com/')
wait = WebDriverWait(browser, 10)
input = wait.until(EC.presence_of_element_located((By.ID, 'q')))
button = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, '.btn-search')))
print(input, button)

Here we first introduce WebDriverWait and construct it with the browser object and a maximum wait time. We then call its until() method, passing in the expected condition (expected_conditions) to wait for. For example, here we pass presence_of_element_located, meaning the node has appeared; its parameter is a locating tuple, (By.ID, 'q') for the search box.
The effect is that if the node with ID q, that is, the search box, loads within 10 seconds, that node is returned; if it still has not loaded after 10 seconds, an exception is thrown.
For the button we use a different wait condition, element_to_be_clickable. We look for the button with the CSS selector .btn-search; if within 10 seconds it is clickable, that is, loaded successfully, the button node is returned; if after 10 seconds it is still not clickable, an exception is thrown.
Run the code; with a good network connection it loads successfully.
Console Output:

<selenium.webdriver.remote.webelement.WebElement (session="07dd2fbc2d5b1ce40e82b9754aba8fa8", element="0.5642646294074107-1")>
<selenium.webdriver.remote.webelement.WebElement (session="07dd2fbc2d5b1ce40e82b9754aba8fa8", element="0.5642646294074107-2")>

You can see that the console successfully output two nodes, both of WebElement type.
If there is a problem with the network and the load does not succeed within 10 seconds, then a TimeoutException is thrown and the console output is as follows:

TimeoutException Traceback (most recent call last)
<ipython-input-4-f3d73973b223> in <module>()
      7 browser.get('https://www.taobao.com/')
      8 wait = WebDriverWait(browser, 10)
----> 9 input = wait.until(EC.presence_of_element_located((By.ID, 'q')))

There are many more wait conditions, such as checking the title content or whether a node contains some text. All of them are listed here:

title_is: the title equals the given value
title_contains: the title contains the given value
presence_of_element_located: the node is loaded; pass in a locating tuple, such as (By.ID, 'p')
visibility_of_element_located: the node is visible; pass in a locating tuple
visibility_of: the node is visible; pass in a node object
presence_of_all_elements_located: all matching nodes are loaded
text_to_be_present_in_element: a node's text contains the given text
text_to_be_present_in_element_value: a node's value contains the given text
frame_to_be_available_and_switch_to_it: the frame is loaded; switch to it
invisibility_of_element_located: the node is not visible
element_to_be_clickable: the node is clickable
staleness_of: whether a node has been removed from the DOM, for example because the page refreshed
element_to_be_selected: the node is selected; pass in a node object
element_located_to_be_selected: the node is selected; pass in a locating tuple
element_selection_state_to_be: pass in a node object and a state; returns True if they match, otherwise False
element_located_selection_state_to_be: pass in a locating tuple and a state; returns True if they match, otherwise False
alert_is_present: an alert is present

More detailed parameters and usage descriptions of wait conditions can be found in the official documentation: http://selenium-python.readth....

12. Forward and Backward

We normally use the browser's forward and back functions; Selenium can do this too, with back() to go back and forward() to go forward.
Let's take a look at an example:

import time
from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.baidu.com/')
browser.get('https://www.taobao.com/')
browser.get('https://www.python.org/')
browser.back()
time.sleep(1)
browser.forward()
browser.close()

Here we visit three pages in a row, then call back() to go back to the second page, and call forward() to go forward to the third page again.

13. Cookies

Cookies can also be easily manipulated using Selenium, such as getting, adding, deleting, and so on.
Let's take another example:

from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.zhihu.com/explore')
print(browser.get_cookies())
browser.add_cookie({'name': 'name', 'domain': 'www.zhihu.com', 'value': 'germey'})
print(browser.get_cookies())
browser.delete_all_cookies()
print(browser.get_cookies())

First we visit Zhihu; after it loads, the browser has actually generated cookies. We call get_cookies() to get all of them, then add a cookie by passing in a dictionary with name, domain, value, and so on. Next we get all the cookies again and can see one more entry. Finally we call delete_all_cookies() to delete all cookies and query again; this time the result is empty.
Console Output:

[{'secure': False, 'value': '"NGM0ZTM5NDAwMWEyNDQwNDk5ODlkZWY3OTkxY2I0NDY=|1491604091|236e34290a6f407bfbb517888849ea509ac366d0"', 'domain': '.zhihu.com', 'path': '/', 'httpOnly': False, 'name': 'l_cap_id', 'expiry': 1494196091.403418}]
[{'secure': False, 'value': 'germey', 'domain': '.www.zhihu.com', 'path': '/', 'httpOnly': False, 'name': 'name'}, {'secure': False, 'value': '"NGM0ZTM5NDAwMWEyNDQwNDk5ODlkZWY3OTkxY2I0NDY=|1491604091|236e34290a6f407bfbb517888849ea509ac366d0"', 'domain': '.zhihu.com', 'path': '/', 'httpOnly': False, 'name': 'l_cap_id', 'expiry': 1494196091.403418}]
[]

Cookies are also handy to operate using the above methods.
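A common follow-up step is collapsing the list of cookie dicts from get_cookies() into a simple name-to-value mapping, for example to reuse the logged-in session with Requests. The helper below is my own sketch; the sample data mimics the cookie format shown in the output above:

```python
# Sketch: turn Selenium's list of cookie dicts into a name -> value dict.
def cookies_to_dict(cookie_list):
    return {c['name']: c['value'] for c in cookie_list}

sample = [
    {'name': 'l_cap_id', 'value': 'abc', 'domain': '.zhihu.com', 'path': '/'},
    {'name': 'name', 'value': 'germey', 'domain': '.www.zhihu.com', 'path': '/'},
]
print(cookies_to_dict(sample))  # {'l_cap_id': 'abc', 'name': 'germey'}
```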

14. Tab Management

We often open tabs when browsing, and tabs can be manipulated in Selenium as well.

import time
from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.baidu.com')
browser.execute_script('window.open()')
print(browser.window_handles)
browser.switch_to_window(browser.window_handles[1])
browser.get('https://www.taobao.com')
time.sleep(1)
browser.switch_to_window(browser.window_handles[0])
browser.get('https://python.org')

Console Output:

['CDwindow-4f58e3a7-7167-4587-bedf-9cd8c867f435', 'CDwindow-6e05f076-6d77-453a-a36c-32baacc447df']

First we visit Baidu, then call execute_script(), passing in the JavaScript statement window.open() to open a new tab. Next, to switch to that tab, we use the window_handles property to get the handles of all currently open tabs, returned as a list. To switch tabs we simply call the switch_to_window() method with the target tab's handle. Here we pass the handle of the second tab, jump to it, and open a new page there; then we switch back to the first tab by calling switch_to_window() again and do something else there.
This allows us to manage tabs.
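When a new tab opens, its handle is simply the one that was not in window_handles before. Here is a small hypothetical helper of my own to pick it out, shown with plain strings standing in for the handle IDs Selenium returns:

```python
# Sketch: find the handle added between two snapshots of window_handles.
def new_handle(before, after):
    added = [h for h in after if h not in before]
    return added[0] if added else None

before = ['CDwindow-aaa']
after = ['CDwindow-aaa', 'CDwindow-bbb']
print(new_handle(before, after))  # CDwindow-bbb
```

With a real driver, you would snapshot browser.window_handles before and after opening the tab, then pass the result to switch_to_window().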

15. Exception Handling

In the process of using Selenium, you will inevitably encounter some exceptions, such as timeout, node not found, etc. Once such errors occur, the program will not continue to run, so exception handling is very important in the program.
Here we can use the try except statement to catch exceptions.
Let's first demonstrate a node-not-found exception, as in the following example:

from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.baidu.com')
browser.find_element_by_id('hello')

Here we open the Baidu page and try to select a node that does not exist, so we will encounter an exception.
After running, the console output is as follows:

NoSuchElementException Traceback (most recent call last)
<ipython-input-23-978945848a1b> in <module>()
      3 browser = webdriver.Chrome()
      4 browser.get('https://www.baidu.com')
----> 5 browser.find_element_by_id('hello')

You can see that a NoSuchElementException is thrown; this is the usual exception when a node is not found. To prevent the program from being interrupted by such exceptions, we need to catch them:

from selenium import webdriver
from selenium.common.exceptions import TimeoutException, NoSuchElementException

browser = webdriver.Chrome()
try:
    browser.get('https://www.baidu.com')
except TimeoutException:
    print('Time Out')
try:
    browser.find_element_by_id('hello')
except NoSuchElementException:
    print('No Element')
finally:
    browser.close()

As shown above, we use try except to catch various exceptions: for instance, we wrap the find_element_by_id() node lookup to catch NoSuchElementException, so that once such an error occurs it is handled and the program is not interrupted.
Console Output:
No Element
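Building on this pattern, lookups that fail transiently (a node that appears a moment later) are often wrapped in a small retry helper. This is a sketch of my own, not a Selenium API; the exception type, attempt count, and delay are illustrative:

```python
# Sketch: retry a flaky callable a few times before giving up.
import time

def retry(func, exceptions, attempts=3, delay=0.01):
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except exceptions:
            if attempt == attempts:
                raise  # out of attempts: re-raise the last exception
            time.sleep(delay)

calls = {'n': 0}
def flaky_lookup():
    calls['n'] += 1
    if calls['n'] < 3:
        raise LookupError('node not found yet')
    return 'node'

print(retry(flaky_lookup, LookupError))  # node
```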
More exception classes can be found in the official documentation: http://selenium-python.readth ... When an exception occurs, we just catch it in the same way.

16. Conclusion

With Selenium, handling JavaScript-rendered pages is no longer difficult.


Posted on Tue, 06 Aug 2019 15:16:34 -0700 by wolfrock