
Automate Your Tweets with Selenium

Introduction

In the previous post, we discussed how to start web scraping with the requests and lxml libraries, and we summarized two limitations of this approach:

  • Time & effort required to chain all the requests for some complicated operations such as user authentication
  • Triggering a button click or calling JavaScript code is not possible from the HTML response

To solve these two issues, I recommended the selenium package. If you have read that post, you may remember that we can use selenium to simulate human actions such as opening a URL in the browser or triggering a button click on the web page.

In this post, I will demonstrate how to use selenium to automatically log in to a Twitter account, then view and post tweets. The same approach can be applied to your own web scraping projects.

Prerequisites

In order to use selenium to launch a browser, you will need to download a web driver for the browser you are using. You can find all the supported browsers as well as the download links in the Selenium documentation.

For the code examples below, I will use Chrome version 86 and download the driver that supports this version. For simplicity, I will save chromedriver.exe in my current code directory.

Besides the driver file, you will also need to install selenium in your working environment. Below is the pip command for installation of the latest version:

pip install --upgrade selenium

Let’s also import all the modules at the beginning of our code. Explanation will be given later where these modules are used:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as ec

With the above ready, let’s dive into our code example.

Log in to a Twitter account with Selenium

Just like a human user in the browser, Selenium does not allow you to interact with invisible elements; you will encounter an ElementNotVisibleException when trying to access an element that is not fully loaded or not in view. A good practice is therefore to always maximize the browser window so that most of the information you need is visible and interactable.

To maximize the browser upon launching, you can set --start-maximized in the Chrome options as per below:

chromeOptions = Options()
chromeOptions.add_argument("--start-maximized")

(You can also launch the browser first and later call the maximize_window function to maximize it, as sketched below.)
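A minimal sketch of that alternative, assuming the driver object has already been created as shown in the next step:

# Alternative to --start-maximized: maximize after the browser has been launched
driver.maximize_window()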

These Chrome options shall be passed into the web driver constructor when it is initialized. We also need to specify the path of the driver executable; in our case, it is in the current directory.

driver = webdriver.Chrome(executable_path="chromedriver.exe", options=chromeOptions)

With the above code, a new Chrome browser will be launched. The web driver object has a get method which accepts a URL and opens it in the browser. The code below opens the Twitter login page in your browser:

twitter_url = "https://twitter.com/login"
driver.get(twitter_url)

As many factors affect how fast a web page loads, you may need to add delays at certain steps to make sure the current action has completed successfully before moving to the next one.

In Selenium, there are two types of waiting approaches: implicit wait and explicit wait. An implicit wait instructs the web driver to poll the DOM for up to a maximum amount of time when looking for any element, while an explicit wait checks the presence/visibility of a particular element periodically until the condition is met or the maximum waiting time is reached. Since an implicit wait applies to the entire lifecycle of the web driver, the explicit wait is relatively more flexible. Let's define an explicit wait with a maximum of 10 seconds for our example:

wait = WebDriverWait(driver, 10)
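For comparison, an implicit wait would be configured once on the driver itself and applies to every element lookup; a minimal sketch:

# Implicit wait: poll the DOM for up to 10 seconds on every element lookup
driver.implicitly_wait(10)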

Now, we shall follow what we discussed in the previous post to find a unique identifier for the login username and password fields. By inspecting the web page HTML, you can easily find the name attribute of the username and password fields. Below is the screenshot of the HTML structure for the username field:

(Screenshot: HTML structure of the username input field)

To locate the username element, we can use its name attribute. Let's also use the explicit wait so that we only proceed once the element is fully loaded and visible on the page:

username_input = wait.until(ec.visibility_of_element_located((By.NAME, "session[username_or_email]")))

Once we have located the username input field, we can type our login ID into it with the send_keys function as per below:

username_input.send_keys(username)

Note: you will need to replace the username/password variables with your own Twitter login credentials.
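To avoid hard-coding credentials in the script, one option is to read them from environment variables. Below is a sketch; the variable names TWITTER_USERNAME and TWITTER_PASSWORD are just examples and need to be set in your environment first:

import os

# Hypothetical environment variable names; export them before running the script
username = os.environ["TWITTER_USERNAME"]
password = os.environ["TWITTER_PASSWORD"]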

Similarly, we can locate our password field by its name and send in our password:

password_input = wait.until(ec.visibility_of_element_located((By.NAME, "session[password]")))
password_input.send_keys(password)

Once we have successfully set the values in these two fields, we can simulate a click on the login button:

  • First, we locate the login button by its attribute data-testid='LoginForm_Login_Button'
  • Then we call the WebElement click function to simulate how a user clicks the button

With the code below, you shall be able to log in to your Twitter account and view the tweets on your home screen:

login_button = wait.until(ec.visibility_of_element_located((By.XPATH, "//div[@data-testid='LoginForm_Login_Button']")))
login_button.click()

To showcase how to interact with the web page like a normal user, let's move on to the next example: searching Twitter posts by keyword.

Search Twitter posts by keywords

As before, we shall first locate the search input box by its data-testid attribute as per below:

search_input = wait.until(ec.visibility_of_element_located((By.XPATH, 
"//div/input[@data-testid='SearchBox_Search_Input']")))

As a normal user, I would key some keywords into the search box and hit ENTER to search. We can do the same from Selenium via the send_keys function. Let's first clear the input box and then send the keyword "ethereum" together with an ENTER key:

search_input.clear()
search_input.send_keys("ethereum" + Keys.ENTER)

Upon receiving the ENTER key event, you shall see the search results loading on the page. The next step is to extract the Twitter posts from the search results.

Below is the sample code that extracts all the text from the tweets and prints it as output:

tweet_divs = driver.find_elements_by_xpath("//div[@data-testid='tweet']")
for div in tweet_divs:
    spans = div.find_elements_by_xpath(".//div/span")
    tweets = ''.join([span.text for span in spans])
    print(tweets)

You shall see the output similar to below:

(Screenshot: sample output of the extracted tweet text)

With these plain text results, you may use some text processing tools to further analyze what people are discussing around this keyword.
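As a rough illustration of such an analysis, the sketch below counts the most frequent words, assuming you collect the extracted tweet texts into a list called all_tweets instead of just printing them:

from collections import Counter

# all_tweets is assumed to be a list of tweet text strings collected in the loop above
words = " ".join(all_tweets).lower().split()
print(Counter(words).most_common(10))  # 10 most frequent words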

Automatically post new tweets

Since we are able to search within Twitter, we shall also be able to post a new tweet with Selenium.

Let's first locate the text area shown below by its data-testid attribute:

(Screenshot: the tweet text area on the home page)

Below is the code to locate the span of the text area via its ancestor div:

tweet_text_span = driver.find_element_by_xpath("//div[@data-testid='tweetTextarea_0']/div/div/div/span")

Then we can send whatever text we want to tweet:

tweet_text_span.send_keys("Do you know we can tweet with selenium?")

Once the text is written into the span, the Tweet button will be enabled. You can locate the button and click it to submit the post:

tweet_button = wait.until(ec.visibility_of_element_located((By.XPATH, 
                                                           "//div[@data-testid='tweetButtonInline']")))
tweet_button.click()

Upon submission, you shall see a new post added to your timeline as per below:

(Screenshot: the new tweet posted on the timeline)

Move invisible elements into the visible view

There are always cases where you need to scroll up and down or left and right to view information on the web page. You also need to make sure your elements are in view before you can perform operations such as reading their attributes or clicking them.

To move the elements into the view, you can execute some JavaScript code to scroll to the element as per below:

who_to_follow = driver.find_element_by_xpath("//div/span[text() = 'Who to follow']")
driver.execute_script("arguments[0].scrollIntoView(true);", who_to_follow)

Hide your browser with headless mode

When you use Selenium for an automation or scraping job, you may not wish to see web pages jumping around in front of you. To have everything run quietly in the background, you can add the headless argument to the Chrome options before initializing the web driver:

chromeOptions.add_argument('--headless')

With this argument, the browser window will not be shown and everything will run in the background. It's a good idea to always test your code properly before enabling headless mode.
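Putting it together, a minimal sketch of a headless setup might look like the below. The --window-size argument is my own addition: since there is no visible window to maximize in headless mode, a fixed viewport size is commonly set instead:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chromeOptions = Options()
chromeOptions.add_argument("--headless")
chromeOptions.add_argument("--window-size=1920,1080")  # fixed viewport instead of --start-maximized
driver = webdriver.Chrome(executable_path="chromedriver.exe", options=chromeOptions)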

Conclusion

In this article, we have demonstrated how to use Selenium to automatically log in to a Twitter account and read or post tweets. We have also reviewed how to trigger JavaScript code with the Selenium web driver and run everything in the background. In a real project, you may not want to use this approach to scrape a website like Twitter, since it already provides developer accounts with full API access; this article is more to showcase the capabilities of the Selenium package.

With Selenium, dealing with complicated operations such as user authentication becomes much simpler, as everything is performed like a normal browser user. It also provides action chains to support all sorts of mouse actions such as hover-over or drag-and-drop (see the sketch below). You should consider it for your automation or web scraping project if your target website relies heavily on front-end JavaScript code.
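As a quick illustration of action chains, the sketch below hovers the mouse over an element; some_element stands for any WebElement you have already located:

from selenium.webdriver import ActionChains

# Move the mouse over an already-located element to trigger its hover behavior
ActionChains(driver).move_to_element(some_element).perform()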


Web Scraping From Scratch With 3 Simple Steps

Introduction

Web scraping or crawling refers to the technique of extracting information from a website and transforming it into structured data for later analysis. There are generally a few reasons you may need to implement a web scraping script to automate the data collection process:

  • There isn't any public API available for you to get data from the source sites
  • The information is updated from time to time (such as an exchange rate), so you cannot manage the collection manually
  • The final data you need is scattered across multiple sites; and so on

Before you decide to implement a scraping script, you will also need to make sure that you are not violating the terms of use for the data you are going to scrape; some sites do not allow scraping robots. This article is intended for educational purposes, to help you understand the overall process of web scraping, so we will assume you already know the implications of web scraping and the possible legal issues around how the data is used.

Scraping a website can sometimes be difficult, depending on how the target website is designed and where the data resides, but generally you can split the process into 3 steps. Let's walk through them one by one.

Understand the structure of your target website

As the first step, take a quick look at your target website to see how the front end interacts with the backend and how the data is populated on the web page. To keep our example simple, let's assume user authentication is not required and our target is to extract the price changes for the top 20 cryptocurrencies from CoinDesk for further analysis.

The first thing we shall do is to understand how this information is organized on the website. Below is the screenshot of the data presented on the web page:

(Screenshot: the CoinDesk 20 price table)

In the Chrome browser, if you right-click on the web page and inspect the HTML elements, you shall see that the entire data table is under <section class="cex-table">…</section>. You can verify this by hovering your mouse over this element; you will see a light blue overlay on the data table.

Next, you may want to inspect each text field on the page to further understand how the table header and records are arranged. For instance, when you check the “Asset” text field, you would see the below HTML structure:

<section class="cex-table">
	<section class="thead">
		<div>...</div>
		<div class="tr-wrapper">
			<div class="tr-left">
				<div class="tr">
					<div>...</div>
					<div style="flex:7" class="th">
						<span class="cell">
						<i class="sorting-icon">
						</i>
						<span class="cell-text">Asset</span>
						</span>
					</div>
				</div>
			</div>
		</div>
		...
	</section>
</section>

And similarly you can find the structure of the first row in the table body as per below:

<section class="tbody">
	<section class="tr-section">
		<a href="/price/bitcoin">
			<div class="tr-wrapper">
				<div class="tr-left">
					<div class="tr">
						<div style="flex:2" class="td">
							<span class="cell cell-rank">
							<strong>01</strong>
							</span>
						</div>
						<div style="flex:7" class="td">
							<span class="cell cell-asset">
							<img>...</img>
							<strong class="cell-asset-title">Bitcoin</strong>
							<span class="cell-asset-iso">BTC</span>
							</span>
						</div>
					</div>
				</div>
			</div>
		</a>
	</section>
</section>

You may notice that the majority of these HTML elements do not have an id or name attribute as a unique identifier, but the style sheet class ("class" attribute) is quite consistent for the same row of data. So in this case, we shall use the style sheet class as a reference to find our data elements.

Locate and parse the target data element with XPath

With this initial understanding of the HTML structure of our target website, we can start to find a way to locate the data elements programmatically.

For this demonstration, we will use the requests and lxml libraries to send the HTTP requests and parse the results. There are other packages for parsing the DOM, such as beautifulsoup, but personally I find XPath expressions more straightforward for locating an element, although the syntax may not be as intuitive as beautifulsoup's.

Below is the pip command if you do not have these two packages installed:

pip install requests
pip install lxml

Let’s import the packages and send a GET request to our target URL:

import requests
from lxml import html

target_url = "https://www.coindesk.com/coindesk20"
result = requests.get(target_url)

Our target URL does not require any parameters; in case you need to pass parameters, you can do so via the params argument as per below:

payload = {"q" : "bitcoin", "s" : "relevant"}
result = requests.get("https://www.coindesk.com/search", params=payload)

The result is a response object, which has a status_code attribute indicating whether a correct response has been returned from the target website. To simplify the code, let's assume we can always get a correct response, with the returned HTML available in string format from the text attribute.
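In a real script, you would normally verify the status before parsing; a minimal sketch:

# Raise an exception for 4xx/5xx responses instead of silently parsing an error page
result.raise_for_status()
print(result.status_code)  # 200 indicates a successful response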

We then pass our HTML string to lxml and use it to parse the DOM tree as per below:

tree = html.fromstring(result.text)

Now we come to the most important step: we need to use XPath syntax to locate the data elements we want and extract the data.

Since the id or name attributes are not available for these elements, we will need to use the style sheet classes to locate our data elements. To locate the table header, we need to perform the following:

  • Find the section tag with the style sheet class "cex-table" in the entire DOM
  • Find its child section node with the style sheet class "thead"
  • Further find its child div node with the style sheet class "tr-wrapper"

Below is how the syntax looks in XPath:

table_header = tree.xpath("//section[@class='cex-table']/section[@class='thead']/div[@class='tr-wrapper']")

It will scan through the entire DOM tree to find any elements matching this structure and return a list of matched nodes.

If everything goes well, the table_header list should contain only 1 element, which is the div with the "tr-wrapper" class. If it returns multiple nodes, you may need to recheck your path expression to see how you can fine-tune it to get only the unique node that you need.
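A quick sanity check (just a sketch) can make this assumption explicit:

# Expect exactly one wrapper div; fail fast if the page structure has changed
assert len(table_header) == 1, f"expected 1 node, got {len(table_header)}"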

From the wrapper div, there are still a few levels before we can reach the nodes with the text. But you may notice that all the data fields we need are under span tags with the class name "cell-text". So we can locate all these span tags by CSS class and extract their text with the text() function. Below is how it works as an XPath expression:

headers = table_header[0].xpath(".//span[@class='cell']/span[@class='cell-text']/text()")

Note that "." means to start from the current node, and "//" indicates that the following path expression is a relative path.

If you examine the headers now, you can see all the column headers are extracted into a list as per below:

['Asset',
 'Price',
 'Market Cap',
 'Total Exchange Volume',
 'Returns (24h)',
 'Total Supply',
 'Category',
 'Value Proposition',
 'Consensus Mechanism']

Let's continue to the table body. Following the same logic, we shall be able to locate the sections with class "tr-section" using the syntax below:

table_body = tree.xpath("//section[@class='cex-table']/section[@class='tbody']/section[@class='tr-section']")

This gives us all the row nodes in the table body. We can now loop through the rows to get their elements. We will again use the style sheet classes to locate our elements, but the "Asset" column actually contains a few child nodes with different classes, so we need to handle it separately from the rest of the columns. Below is the code to extract the data row by row and add it into a records list:

records = []
for row in table_body:    
    tokens = row.xpath(".//span[contains(@class, 'cell-asset-iso')]/text()")
    ranks = row.xpath(".//span[contains(@class, 'cell-rank')]/strong/text()")
    assets = row.xpath(".//span[contains(@class, 'cell-asset')]/strong/text()")
    spans = row.xpath(".//div[contains(@class,'tr-right-wrapper')]/div/span[contains(@class, 'cell')]")
    rest_cols = [span.text_content().strip() for span in spans]
    row_data = ranks + tokens + assets + rest_cols
    records.append(row_data)

Note that we are using "contains" in order to match nodes with classes like "cell cell-rank", and we use text_content() to extract all the text from the current node and its child nodes.

Occasionally you may find that the number of columns extracted does not tally with the original column headers because some header columns are merged or hidden, such as the ranking and token ticker columns above. So let's also give them the column names "Rank" and "Token":

column_header = ["Rank", "Token"] + headers

Save the scraping result

With both the header and data ready, we can easily load the data into pandas as per below:

import pandas as pd
df = pd.DataFrame(records, columns=column_header)

You can see the result in the pandas DataFrame below, which looks pretty good except that some formatting is needed to convert the amounts into proper numeric format.

(Screenshot: the scraped data loaded into a pandas DataFrame)
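As an illustration of that cleanup, the sketch below converts one column to numeric values; the column name "Price" and the "$1,234.56" string format are assumptions about the scraped data:

# Strip currency symbols and thousand separators, then convert to float
df["Price"] = pd.to_numeric(
    df["Price"].str.replace(r"[$,]", "", regex=True),
    errors="coerce",  # keep unparseable values as NaN instead of raising
)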

Or you can write the scraped data into a CSV file with the csv module:

import csv
with open("token_price.csv", "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(column_header)
    for row in records:
        writer.writerow(row)

Limitations & Constraints

In a real scraping project, you may encounter scenarios more complicated than directly getting the data from a single GET request, so it's better to understand the constraints/limitations of the approach described above.

  • Going through the authentication process can be time-consuming with requests

If your target website requires authentication before you can retrieve the data, you may need to create a session and send multiple POST/GET requests to the server in order to get yourself authorized. Depending on how complicated the authentication process is, you will need to understand which parameters to supply and how the requests are chained together. This process may take some time and effort. A rough sketch of what such a session-based login can look like is shown below.
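The sketch illustrates the general pattern only; the URL and form field names are made up and will differ for any real site:

import requests

session = requests.Session()
# Hypothetical login endpoint and form fields; inspect the real site to find the actual ones
login_payload = {"username": "your_user", "password": "your_password"}
session.post("https://example.com/login", data=login_payload)

# Subsequent requests reuse the cookies set during login
protected = session.get("https://example.com/protected-data")
print(protected.status_code)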

  • You cannot trigger JavaScript code to get your data

If your target website returns JavaScript code that populates the data, or you need to trigger some JavaScript function in order to have the data populated on the web page, you may find the requests package simply would not work.

For both scenarios, you may consider using selenium, which I have mentioned in one of my past posts. It has a headless mode where you can simulate user actions such as keying in user credentials or clicking buttons without actually showing the browser, and you can also execute JavaScript code to interact with the web page. The downside is that you will have to periodically upgrade your driver file to match the browser's version.

Conclusion

In this article, we have walked through a very basic example of scraping data with the requests and lxml packages, and we have also discussed a few limitations where you may start looking at alternatives such as selenium or even the scrapy framework in case you have more complicated scenarios to handle. No matter which libraries you choose, the fundamentals remain the same. I hope this article gives you some hints on how to start your web scraping journey.