Resources


Split or merge PDF files with 5 lines of Python code

There are many cases where you want to extract a particular page from a big PDF file or merge several PDF files into one. You can use a PDF editor tool to do this, but the split and merge functions are often not available in the free version, or the process becomes too tedious when there are many pages or files to handle. In this article, I will be sharing a simple solution to split or merge multiple PDF files with a few lines of Python code.

Prerequisite

We will be using a Python library called PyPDF2, so you will need to install this package in your working environment. Below is an example with pip:

pip install PyPDF2

Let’s get started

The PyPDF2 package has four major classes: PdfFileWriter, PdfFileReader, PdfFileMerger and PageObject, whose names are quite self-explanatory. If you need to do something more than splitting or merging PDF pages, you may want to check this document to find out more about what you can do with this library.

Split PDF file

When you want to extract a particular page from a PDF file and make it a separate PDF file, you can use PdfFileReader to read the original file, and then get a particular page by its page number (page numbers start from 0). With PdfFileWriter, you can use the addPage function to add the page into a new PDF object and save it.

Below is sample code that extracts the first page of file1.pdf and saves it as a separate PDF file named first_page.pdf:

from PyPDF2 import PdfFileWriter, PdfFileReader

# read the original file and take its first page (page numbers start from 0)
input_pdf = PdfFileReader("file1.pdf")
output = PdfFileWriter()
output.addPage(input_pdf.getPage(0))

# write the single-page PDF into a new file
with open("first_page.pdf", "wb") as output_stream:
    output.write(output_stream)

The input_pdf.getPage(0) call returns a PageObject, which lets you modify some attributes of the PDF page, such as rotating or scaling it. You may want to read more about it here.
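As a minimal sketch (using the same legacy PyPDF2 API as the rest of this article, with placeholder file names), rotating and scaling a page before saving it could look like this:

from PyPDF2 import PdfFileWriter, PdfFileReader

input_pdf = PdfFileReader("file1.pdf")
page = input_pdf.getPage(0)   # PageObject for the first page
page.rotateClockwise(90)      # rotate the page by 90 degrees
page.scaleBy(0.5)             # scale the page content down to 50%

output = PdfFileWriter()
output.addPage(page)
with open("rotated_page.pdf", "wb") as output_stream:
    output.write(output_stream)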

Merge PDF files

To merge multiple PDF files into one file, you can use PdfFileMerger. Although you can also do this with PdfFileWriter, PdfFileMerger is probably more straightforward when you do not need to edit the pages before merging them.

Below is sample code that uses the append function from PdfFileMerger to append multiple PDF files and write them into one PDF file named merged.pdf:

from PyPDF2 import PdfFileReader, PdfFileMerger

# read the two source files
pdf_file1 = PdfFileReader("file1.pdf")
pdf_file2 = PdfFileReader("file2.pdf")

# append them one after another and write the result to merged.pdf
output = PdfFileMerger()
output.append(pdf_file1)
output.append(pdf_file2)

with open("merged.pdf", "wb") as output_stream:
    output.write(output_stream)

If you do not want to include all pages from your original file, you can pass a tuple with the starting and ending page numbers as the pages argument of the append function, so that only the specified pages are added to the new PDF file.

The append function always adds the new pages at the end. If you want to control where the pages are inserted, use the merge function instead, which lets you specify the position at which the new pages are added.
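As a quick sketch, the below combines both ideas, reusing the file1.pdf and file2.pdf from earlier (the output file name is just a placeholder):

from PyPDF2 import PdfFileMerger

output = PdfFileMerger()
# append only the first two pages (pages 0 and 1) of file1.pdf
output.append("file1.pdf", pages=(0, 2))
# insert all pages of file2.pdf at position 1, i.e. between the two pages added above
output.merge(1, "file2.pdf")

with open("merged_partial.pdf", "wb") as output_stream:
    output.write(output_stream)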

Conclusion

The PyPDF2 package is a very handy toolkit for editing PDF files. In this article, we have reviewed how to make use of this library to split or merge PDF files with some sample code. You can modify the code to suit your needs and automate the task when you have many files or pages to process. There is also a pdfcat script included in this project folder, which allows you to split or merge PDF files from the command line; you may want to take a look if you only deal with one or two PDF files at a time.

In case you are interested in other topics related to Python automation, you may check here. Thanks for reading.

PyInstaller --icon and --upx-dir options

In the previous article, we discussed most of the commonly used options of the PyInstaller library. There are two more very useful options, but you may encounter some issues when using them for the first time. In this article, we will discuss the common issues with the PyInstaller --icon and --upx-dir options.

Customize the icon for your exe file with --icon

PyInstaller has the --icon option for specifying your own icon when creating the executable file. If this option is not given, the exe file is generated with the default icon as per below.

pyinstaller logo

You can use --icon followed by an image file name to let PyInstaller use your own icon. You may see errors when you try to use a normal image format as the icon; in that case, convert your image file into .ico format and run the command again.
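If you prefer to do the conversion in Python, one option is the Pillow library (this is my own suggestion, not something PyInstaller requires; the file names are placeholders):

# pip install pillow
from PIL import Image

# convert an ordinary image into the .ico format expected for icons
Image.open("superhero.png").save("superhero.ico")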

For demo purposes, I downloaded an icon from this website into my project folder to use for my app. With the below command, I get a new look for my exe file.

pyinstaller --onefile hello.py --name "SuperHero" --add-data "test.config;." --icon "superhero.ico" --clean

Below is how it looks when the new exe file is generated:

Pyinstaller generate exe with icon

Sometimes you may find that the icon does not change after you rebuild the executable file, even though the new icon is displayed under the "General" tab in the file properties. This is due to the Windows icon cache; you can try deleting the cache files from the below directory and then retry.

User\AppData\Local\Microsoft\Windows\Explorer\IconCacheToDelete

Alternatively, if you specify a new name for your exe file, you should see the new icon applied.


Reduce file size with the PyInstaller --upx-dir option

When you use a lot of libraries or resource files, your executable file can grow very big and become difficult to distribute. In this case, you can use UPX to compress your exe file.

You can download the UPX executable to your PC and pass its folder path as the value of the --upx-dir option. E.g.:

pyinstaller --onefile hello.py --name "SuperHero" --add-data "test.config;." --icon "superhero.ico" --upx-dir "c:\upx-3.96-win64" --clean

Sometimes you may find that even though there is no error when you build the executable file, there is a runtime error such as the one below, which shows that VCRUNTIME140.dll is either not designed to run on Windows or contains an error.

pyinstaller-VCRUNTIME140.dll-error

This issue is due to the dll files being modified during packing and compression. The workaround is to use the --upx-exclude option to exclude the particular dll files (no need to specify the path for the dll):

pyinstaller --onefile hello.py --name "SuperHero" --add-data "test.config;." --icon "superhero.ico" --upx-dir "c:\upx-3.96-win64" --upx-exclude "VCRUNTIME140.dll" --clean

Conclusion

Besides the issues discussed above, you may occasionally encounter other errors; you will need to check both your Python and PyInstaller versions to see if there are any compatibility issues. Also, not all Python libraries are supported by PyInstaller, so you may need to check this list to see whether you have used any library that is not supported.


Python split text with multiple delimiters

There are cases where you want to split a text that may use different symbols (delimiters) to separate its elements. For instance, if the given text is in csv or tsv format, each field can be separated by a comma (,) or a tab (\t), and you will need to write your code logic to support both delimiters. In this article, I will be sharing with you a few possible ways to split text with multiple delimiters in Python.

Checking if certain delimiter exists before splitting

If you are pretty sure the text will only contain one type of delimiter at a time, you can check which delimiter exists before splitting, e.g.:

text = 'field1,field2,field3,field4'
#or 
text = 'field1;field2;field3;field4'

You can write a one-liner that splits by comma if a comma exists, and otherwise splits by semicolon.

text.split(",") if text.find(",") > -1 else text.split(";")

But if there are many possible delimiters, or different delimiters can be mixed within the same text, writing this kind of if-else logic becomes very tedious. You might think of using the replace function (see the full list of string functions in this article) to turn all the different delimiters into a single one. It may work for your case, but it is far from an elegant solution.
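For illustration, the replace-based approach would look something like the below sketch, which quickly gets unwieldy as the number of delimiters grows:

text = "field1\tfield2,field3;field4 field5"
# normalize every possible delimiter to a comma, then split once
fields = text.replace("\t", ",").replace(";", ",").replace(" ", ",").split(",")
# ['field1', 'field2', 'field3', 'field4', 'field5']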

So for such cases, let's move on to the second option.

Using re to split text with multiple delimiters

In the regular expression module (re), there is a split function that allows you to split by a pattern. You can list all the possible delimiters separated by "|" so that the text is split on any of them in one pass.

For instance, the below will extract field1 to field5 into a list.

import re

text1 = "field1\tfield2,field3;field4 field5"
fields = re.split(r",|;|\s|\t", text1)

The result in fields will be a list with all the data fields we want:

['field1', 'field2', 'field3', 'field4', 'field5']

What if you also want to keep the delimiters in the list for later use (e.g. to reconstruct the text)? You can use a capture group () in the regular expression, so that the matched delimiters also show up in the result.

fields = re.split(r'(,|;|\s|\t)', text1)

Result of fields variable:

['field1', '\t', 'field2', ',', 'field3', ';', 'field4', ' ', 'field5']
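Since re.split with a capture group returns the fields and the delimiters alternately, joining the list gives you back the original text:

original = "".join(fields)
# original == text1 is True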

Conclusion

It is quite common that we need to write code to split text with multiple delimiters. There are possibly other ways to solve this problem, but so far re.split is still the most straightforward and efficient way.


Pandas tricks – pass multiple columns to lambda

Pandas is one of the most powerful tools for analyzing and manipulating data. In this article, I will be sharing with you the solution to a very common issue you may have faced when dealing with your data in pandas – how to pass multiple columns to a lambda or self-defined function.

Prerequisite

You will have to install pandas on your working environment:

pip install pandas

When dealing with data, you will often have a scenario where you want to calculate something based on the values of a few columns, and you may need a lambda or self-defined function to write the calculation logic. But how do you pass multiple columns to the lambda function as parameters?

Let me use a real-world example so that it is easier to understand the issue I am talking about. The below table shows part of the e-commerce delivery charges offered by some company; the delivery charge is determined by the package size (H+L+W), the package weight and the delivery mode you choose.

Size (cm/kg)               | 3 hours express | Next Day Delivery | Same Day Delivery
<60 CM (H+L+W) & MAX 1KG   | 12              | 8                 | 10
<80 CM (H+L+W) & MAX 5KG   | 15              | 9                 | 11
<100 CM (H+L+W) & MAX 8KG  | 17              | 11                | 13
<120 CM (H+L+W) & MAX 10KG | 19              | 14                | 16

Assuming we have the below order data and want to simulate the delivery charges, let's create the data in a pandas dataframe.

import pandas as pd

df = pd.DataFrame({
    "Order#" : ["1", "2", "3", "4"], 
    "Weight" : [5.0, 2.1, 8.1, 7.5], 
    "Package Size" : [80, 45, 110, 90],
    "Delivery Mode": ["Same Day", "Next Day", "Express", "Next Day"]})

If you view the dataframe in a Jupyter Notebook (you can sign up here to use it for free), you should see the data as per below.

Pandas pass multiple columns to lambda same data

Let’s also implement a calculate_rate function where we need to pass in the weight, package size, and delivery mode in order to calculate the delivery charges:

def calculate_rate(weight, package_size, delivery_mode):
    #set the charges as $20 since we do not have the complete rate card
    charges = 20
    if weight <=1 and package_size <60:
        if delivery_mode == "Express":
            charges = 12
        elif delivery_mode == "Next Day":
            charges = 8
        else:
            charges = 10
    elif weight <=5 and package_size <80:
        if delivery_mode == "Express":
            charges = 15
        elif delivery_mode == "Next Day":
            charges = 9
        else:
            charges = 11
    elif weight <=8 and package_size <100:
        if delivery_mode == "Express":
            charges = 17
        elif delivery_mode == "Next Day":
            charges = 11
        else:
            charges = 13
    return charges

Pass multiple columns to lambda

Here comes the most important part. You probably already know that a data frame has the apply function, which lets you apply a lambda or function to it. We will use the apply function, and there are a few ways to pass the columns to our calculate_rate function.

Option 1

We can select the columns involved in our calculation as a subset of the original data frame, and use the apply function on it.

In the apply function, we pass axis=1 to indicate that the x in the lambda represents a row, so we can unpack x with *x and pass its values to calculate_rate.

df["Delivery Charges"] = df[["Weight", "Package Size", "Delivery Mode"]].apply(lambda x : calculate_rate(*x), axis=1)

If we check the df again in Jupyter Notebook, you should see the new column "Delivery Charges" with the figures calculated based on the logic we defined in the calculate_rate function.

Pandas pass multiple columns to lambda

Option 2

If you do not want to take a subset of the data frame first, you can also apply the function directly to the original data frame. In this case, you select the columns inside the lambda before passing them to calculate_rate. Same as above, we need to specify axis=1 to indicate it applies to each row.

df["Delivery Charges"] = df.apply(lambda x : calculate_rate(x["Weight"], x["Package Size"], x["Delivery Mode"]), axis=1)

This will produce the same result as option 1. You can also use x.Weight instead of x["Weight"] when passing in the parameter.
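For instance, the same call can be written as below; note that columns whose names contain spaces, like "Package Size", still need the bracket notation:

df["Delivery Charges"] = df.apply(lambda x : calculate_rate(x.Weight, x["Package Size"], x["Delivery Mode"]), axis=1)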


Conclusion

The two options we discussed for passing multiple columns to a lambda are essentially the same; it is just a matter of applying the function to a subset or to the original data frame. I have not tested them on a large data set, so there might be some differences in performance, which you may want to check if you are dealing with a lot of data.
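If you want to check the performance yourself, a rough sketch using the standard timeit module could look like the below (the number of copies and repetitions are arbitrary):

import timeit

# build a larger sample by repeating the demo order data
big_df = pd.concat([df] * 10000, ignore_index=True)

def option1():
    return big_df[["Weight", "Package Size", "Delivery Mode"]].apply(lambda x: calculate_rate(*x), axis=1)

def option2():
    return big_df.apply(lambda x: calculate_rate(x["Weight"], x["Package Size"], x["Delivery Mode"]), axis=1)

print(timeit.timeit(option1, number=5))
print(timeit.timeit(option2, number=5))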

You may also be interested in some other articles related to pandas.



Pandas Tricks – Calculate Percentage Within Group

Pandas groupby is probably the most frequently used function whenever you need to analyse your data, as it is so powerful for summarizing and aggregating data. Often you still need to do some calculation on the summarized data, e.g. calculating the percentage vs the total within a certain category. In this article, I will be sharing with you some tricks to calculate percentages within groups of your data.

Prerequisite

You will need to install pandas if you have not yet done so:

pip install pandas
#or conda install pandas

I am going to use a real-world example to demonstrate the kind of problem we are trying to solve. The sample data I am using is from this link, and you can also download it and try it yourself.

Let’s first read the data from this sample file:

import pandas as pd

# You can also replace the below file path to the URL of the file
df = pd.read_excel(r"C:\Sample Sales Data.xlsx", sheet_name="Sheet")

The data will be loaded into a pandas dataframe, and you should see something as per below:

pandas tricks - calculate percentage within group

Let’s first calculate the sales amount for each transaction by multiplying the quantity and unit price columns.

df["Total Amount"] = df["Quantity"] * df["Price Per Unit"]

You can see the calculated result like below:

pandas tricks - calculate percentage within group

Calculate percentage within group

With the above details, you may want to group the data by sales person and the items they sold, so that you have an overall view of each person's performance. You can do this with the below:

#df.groupby(["Salesman","Item Desc"])["Total Amount"].sum()
df.groupby(["Salesman", "Item Desc"]).agg({"Total Amount" : "sum"})

And you will be able to see the total amount per sales person and item:

pandas tricks - calculate percentage within group

This is useful as you can see the total sales for each person and product within the given period.

Calculate the best performer

Now let's see how we can get the percentage of contribution to total revenue for each sales person, so that we can immediately see who is the best performer.

To achieve that, we first need to group and sum up the "Total Amount" by "Salesman", which we have already done previously.

df.groupby(["Salesman"]).agg({"Total Amount" : "sum"})

Then we calculate each sales amount against the total of the entire group. Here we take the "Total Amount" column as a subset of the grouped result, and use the apply function to calculate each value vs the total. Take note that the default value of axis is 0 for the apply function, so the lambda receives the whole column.

[["Total Amount"]].apply(lambda x: 100*x/x.sum())

With the above, we should be able to get the percentage of contribution to total sales for each sales person. Let's also sort the percentages from largest to smallest:

sort_values(by="Total Amount", ascending=False)

Let's put it all together and run the below in Jupyter Notebook:

df.groupby(["Salesman"])\
.agg({"Total Amount" : "sum"})[["Total Amount"]]\
.apply(lambda x: 100*x/x.sum())\
.sort_values(by="Total Amount", ascending=False)

You should see the below result with the sales contribution in descending order. (Do not be confused by the column name "Total Amount" – pandas keeps the original column name for the aggregated data; you can rename it to whatever you want later.)

pandas tricks - calculate percentage within group for salesman
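For example, to give the percentage column a more meaningful name (the name below is just an example), you can chain a rename at the end:

df.groupby(["Salesman"])\
.agg({"Total Amount" : "sum"})[["Total Amount"]]\
.apply(lambda x: 100*x/x.sum())\
.sort_values(by="Total Amount", ascending=False)\
.rename(columns={"Total Amount": "% of Total Sales"})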


Calculate the most popular products

Similarly, we can follow the same logic to find the most popular products. This time we want to summarize the sales by product, and calculate the percentage vs total for both "Quantity" and "Total Amount". We also want to sort the data in descending order for both fields, e.g.:

df.groupby(["Item Desc"])\
.agg({"Quantity": "sum", "Total Amount" : "sum"})[["Quantity", "Total Amount"]]\
.apply(lambda x: 100*x/x.sum())\
.sort_values(by=["Quantity","Total Amount"], ascending=[False,False])

This will produce the below result, which shows that "Whisky" is the most popular product in terms of quantity sold, but "Red Wine" contributes the most to total revenue, probably because of its higher unit price.

pandas tricks - calculate percentage within group for products


Calculate best sales by product for each sales person

What if we still want to understand, within each sales person's results, the percentage of sales of each product vs his/her total sales amount?

In this case, we first group by "Salesman" and "Item Desc" to get the total sales amount for each group. On top of that, we calculate the percentage within each "Salesman" group, which is achieved with groupby(level=0).apply(lambda x: 100*x/x.sum()).

Note: after grouping, the result becomes a multi-index dataframe, hence level=0 here refers to the top-level index, which is "Salesman" in our case.

df.groupby(["Salesman", "Item Desc"])\
.agg({"Total Amount" : "sum"})\
.groupby(level=0).apply(lambda x: 100*x/x.sum())\
.sort_values(by=["Salesman", "Item Desc","Total Amount"], ascending=[True, True, False])

You will be able to see the below result, which is already sorted by the percentage of sales contribution within each sales person.

pandas tricks - calculate percentage within group - for salesman and product


Conclusion

These are just some simple use cases of calculating percentages within groups using the pandas apply function; you may also be interested to see what else the apply function can do here.