

Python cache – must-read tips for code performance

Introduction

Most of us have run into scenarios where we need to implement some computationally expensive logic, such as recursive functions, or need to read from I/O or the network multiple times. These operations typically require more resources and longer CPU time, and can eventually cause performance issues if handled without care. In such cases, you should always pay special attention to them once you have completed the functional requirements, as the additional cost in resources and time may eventually lead to user experience issues. In this article, I will be sharing how we can make use of a cache mechanism (aka memoization) to improve code performance.

Prerequisites:

To follow the examples below, you will need to have the requests package installed in your working environment. You can use the below pip command to install it:

pip install requests

With this ready, let’s dive into the problem we are going to solve today.

As I mentioned before, computationally expensive logic such as recursive functions or reading from I/O or the network usually has a significant impact on runtime and is always a target for optimization. Let me illustrate with a specific example: assume we need to call some external API to get a rate:

import requests

def inquire_rate_online(dimension):
    # call the external API and return the echoed dimension value
    result = requests.get(f"https://postman-echo.com/get?dim={dimension}")
    if result.status_code == requests.codes.OK:
        data = result.json()
        return data["args"]["dim"]
    return ''

This function makes a call over the network and returns the result (for demo purposes, this API call just echoes back the input). If you provide this as a service to everybody, there is a high chance that different people will inquire about the rate with the same dimension value. In that case, you may wish to store the result somewhere after the first inquiry, so that subsequent inquiries can simply return the stored result rather than making the API call again. This sort of caching mechanism should speed up your code.

Implement cache with global dictionary

For the above example, the most straightforward way to implement a cache is to store the arguments and results in a dictionary, and check this dictionary every time to see whether the key exists before calling the external API. We can implement this logic in a separate function as per below:

cached_rate = {}
def cached_inquire(dim):
    # return the cached result if we have seen this dimension before
    if dim in cached_rate:
        print(f"cached value: {cached_rate[dim]}")
        return cached_rate[dim]
    # otherwise call the API and remember the result for next time
    cached_rate[dim] = inquire_rate_online(dim)
    print(f"result from online : {cached_rate[dim]}")
    return cached_rate[dim]

With this code, you cache the previous keys and results in the dictionary, so that subsequent calls are served directly from a dictionary lookup rather than an external API call. This should dramatically improve your code performance, since reading from a dictionary is much faster than making an API call over the network.

You can quickly test it from Jupyter Notebook with the %time magic:

%time cached_inquire(1)

The first time you call it, you will see the time taken is over 1 second (depending on network conditions):

result from online : 1
Wall time: 1.22 s

When calling it again with the same argument, we should expect our cached result to kick in:

%time cached_inquire(1)

You can see the total time taken dropped to 997 microseconds for this call, which is over 1,200 times faster than before:

cached value: 1
Wall time: 997 µs

With this additional global dictionary, we can see a big improvement in performance. But some people may be concerned about the additional memory used to hold these values in a dictionary, especially if the result is a huge object such as an image file or an array. Python has a separate module called weakref which addresses this problem.

Implement cache with weakref

Python provides weakref to allow creating weak references to objects, so that the garbage collector is free to destroy those objects whenever needed in order to reclaim their memory.

For demonstration purposes, let's modify our earlier code to return a Rate class instance as the inquiry result:

class Rate():
    def __init__(self, dim, amount):
        self.dim = dim
        self.amount = amount
    def __str__(self):
        return f"{self.dim} , {self.amount}"

def inquire_rate_online(dimension):
    result = requests.get(f"https://postman-echo.com/get?dim={dimension}")
    if result.status_code == requests.codes.OK:
        data = result.json()
        return Rate(float(data["args"]["dim"]), float(data["args"]["dim"]))
    return Rate(0.0,0.0)

And instead of a normal Python dictionary, we will be using WeakValueDictionary to hold weak references to the returned objects. Below is the updated code:

import weakref

wkrf_cached_rate = weakref.WeakValueDictionary()
def wkrf_cached_inquire(dim):
    if dim in wkrf_cached_rate:
        print(f"cached value: {wkrf_cached_rate[dim]}")
        return wkrf_cached_rate[dim]

    result = inquire_rate_online(dim)
    print(f"result from online : {result}")
    wkrf_cached_rate[dim] = result
    return wkrf_cached_rate[dim]

With the above changes, if you run wkrf_cached_inquire twice, you shall see a significant improvement in performance:

[Image: python weakref cache]

The dictionary does not hold the Rate instance itself, but rather a weak reference to it, so you do not have to worry about the extra memory used: the garbage collector will reclaim the object when needed, and the dictionary will be automatically updated with that particular entry removed. Subsequent calls will then go back to the external API, just like the first time.
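
To see this behaviour concretely, here is a minimal sketch (assuming CPython, where an object is reclaimed as soon as its last strong reference is dropped):

import weakref

class Rate:
    def __init__(self, dim, amount):
        self.dim = dim
        self.amount = amount

cache = weakref.WeakValueDictionary()
rate = Rate(1.0, 1.0)      # the strong reference keeps the object alive
cache[1.0] = rate
print(1.0 in cache)        # True - the entry is still reachable

del rate                   # drop the only strong reference
print(1.0 in cache)        # False - the entry disappears from the cache automatically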

If you stop reading here, you will miss the most important part of this article, because what we have gone through above works, but it is not perfect, due to the issues below:

  • In the example, we only have 1 argument for the inquire_rate_online function; things get tedious if you have more arguments, since all of them have to be combined into the key for the dictionary. In that case, re-implementing the caching as a decorator function would probably be easier (see the sketch after this list)
  • Sometimes you do not really want to let garbage collection decide which values stay cached longer than others; rather, you want your cache to follow a certain logic, for instance evicting entries based on how recently they were used, from the most recent calls to the least recent (aka least recently used)
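
As a reference, below is a minimal sketch of such a caching decorator; the simple_cache name and the key-building logic here are just illustrative assumptions rather than any standard API:

from functools import wraps

def simple_cache(func):
    stored = {}
    @wraps(func)
    def wrapper(*args, **kwargs):
        # build a hashable key from both positional and keyword arguments
        key = (args, tuple(sorted(kwargs.items())))
        if key not in stored:
            stored[key] = func(*args, **kwargs)
        return stored[key]
    return wrapper

@simple_cache
def inquire_rate_online(dimension):
    # ... the same network call as in the earlier example ...
    ...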

If the least recently used (LRU) cache mechanism makes sense for your use case, you should consider making use of the lru_cache decorator from the functools module, which will save you the effort of reinventing the wheel.

Cache with lru_cache

The lru_cache accepts two arguments:

  • maxsize – limits the size of the cache; when it is None, the cache can grow without bound
  • typed – when set to True, arguments of different types will be cached separately, e.g. inquire_rate_online(1) and inquire_rate_online(1.0) will be cached as different entries (see the small example after this list)

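As a quick illustration of the typed argument (using a hypothetical square function rather than our rate API):

from functools import lru_cache

@lru_cache(maxsize=128, typed=True)
def square(x):
    return x * x

square(2)      # miss - cached under the int key
square(2.0)    # miss again - the float argument gets its own entry
print(square.cache_info())   # CacheInfo(hits=0, misses=2, maxsize=128, currsize=2)
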
With this understanding of lru_cache, let's decorate our inquire_rate_online function to give it caching capability:

from functools import lru_cache

@lru_cache(maxsize=None)
def inquire_rate_online(dimension):
    result = requests.get(f"https://postman-echo.com/get?dim={dimension}")
    if result.status_code == requests.codes.OK:
        data = result.json()
        return Rate(float(data["args"]["dim"]), float(data["args"]["dim"]))
    return Rate(0.0,0.0)

If we re-run our inquire_rate_online twice, you can see the same performance improvement as previously:

[Image: Python cache with lru_cache]

And with this decorator, you can also see how the cache is being used. The hits field shows the number of calls that have been returned from cached results:

inquire_rate_online.cache_info()
#CacheInfo(hits=1, misses=1, maxsize=None, currsize=1)

Or you can manually clear all the cache to reset the hits and misses to 0:

inquire_rate_online.cache_clear()

Limitations:

Let’s also talk about the limitations of the solutions we discussed above:

  • The cache mechanism works best for deterministic functions, meaning that given the same set of inputs, the function always returns the same results. You would not benefit much from caching the result of a nondeterministic function, e.g.:

import random

def random_x(x):
    return x * random.randint(1, 1000)
  • For keyword arguments, if you swap the order of the keywords, the two calls will be cached as separate entries (see the small example after this list)
  • It only works for arguments that are hashable (immutable) data types
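
Here is a small sketch of the keyword-order behaviour, using a hypothetical add function:

from functools import lru_cache

@lru_cache(maxsize=None)
def add(a=0, b=0):
    return a + b

add(a=1, b=2)
add(b=2, a=1)               # the same logical call, but the keyword order differs
print(add.cache_info())     # misses=2 - the two calls are cached as separate entries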

Conclusion

In this article, we have discussed different ways of creating a cache to improve code performance whenever you have computationally expensive operations or heavy I/O or network reads. Although the lru_cache decorator provides an easy and clean solution for creating a cache, it is still better to understand the underlying implementation of caching before simply taking it and using it.

We also discussed the limitations of these solutions, which you may need to take note of before implementing them. Nevertheless, they will still help you in a lot of scenarios where you can make use of these methods to improve your code performance.


Pandas – split data into buckets with cut and qcut

If you do a lot of data analysis in your daily job, you may have encountered problems where you want to split data into buckets or groups based on certain criteria and then analyse the data within each group. For instance, you may want to check the popularity of your products or website within each age group, or understand what percentage of students fall into each score range. The most straightforward way might be to categorize your data based on conditions and then summarize the information, but this usually requires some additional effort to massage the data. In this article, I will be sharing with you a simple way to bin your data with the pandas cut and qcut functions.

Prerequisite

You will need to install the pandas package if you do not have it yet in your working environment. Below is the command to install pandas with pip:

pip install pandas

And let’s import the necessary packages and create some sample sales data for our later examples.

import pandas as pd
import numpy as np
df = pd.DataFrame({"Consignee" : ["Patrick", "Sara", "Randy", "John", "Patrick", "Joe"],
                   "Age" : [44, 51, 23, 30, 44, 39],
                  "Order Date" : pd.date_range(start='2020-08-01', end="2020-08-05", periods=6),
                  "Item Desc" : ["White Wine", "Whisky", "Red Wine", "Whisky", "Red Wine", "Champagne"],
                  "Price Per Unit": [10, 20, 30, 20, 30, 30], 
                  "Order Quantity" : [50, 60, 40, 20, 10, 50],
                  "Order Dimensions" : [0.52, 0.805, 0.48, 0.235,0.12, 0.58]})

With the above code, we can take a quick look at the data:

[Image: pandas split data into segments]

And let’s also calculate the total sales amount by multiplying the price per unit and the order quantity:

df["Total Amount"] = df["Price Per Unit"] * df["Order Quantity"]

Once this data is ready, let’s dive into the problems we are going to solve today.

split data into buckets by cut

If we would like to classify our customers into a few age groups and have an overall view of how much money each age group has spent on our product, how shall we do it? As I mentioned earlier, we are not going to apply some lambda function with conditions like: if the age is less than 30 then classify the customer as young, because this can easily drive you crazy when you have hundreds or thousands of groups to define. Instead, we will be using the powerful pandas cut function to achieve this.

The cut function has two mandatory arguments:

  • x – an array of values to be binned
  • bins – indicate how you want to bin your values

For instance, if you supply df["Age"] as the first argument and indicate bins as 2, you are telling pandas to split your age data into 2 equal-width groups. In our case, the minimum age value is 23 and the maximum age value is 51, so the first group will be from 23 to 23 + (51-23)/2 = 37, and the second group from 37 to 51. When you run the below code:

pd.cut(df["Age"],2)

You shall see the output similar to below:

[Image: pandas split data segment category]

Pandas has classified our age data into these two groups, and the output shows that the data type is a pandas category. This is very useful, as you can assign this category column back to the original data frame and do further analysis based on the categories from there.

Since we don’t want decimal points in the age bins, we can set precision=0, and we also want to label our age data into 3 groups as Young, Mid-Aged and Old.

Below is the code that we assign our binned age data into “Age Group” column:

df["Age Group"] = pd.cut(df["Age"],3, precision=0, labels=["Young","Mid-Aged","Old"])

If you examine the data again, you would see:

[Image: pandas split data into buckets - age group]

Pandas mapped our age data into 3 equal-width groups based on the min and max of the age values. But you may have noticed that age 44 has been classified as “Old”, which does not sound quite right. In this case, we want to give our own definition of young, mid-aged and old in the bins argument. Let’s delete the “Age Group” column and redo it as below:

df["Age Group"] = pd.cut(df["Age"],[20, 30, 50, 60], precision=0, labels=["Young","Mid-Aged","Old"])

With this list of bin edges, we are telling pandas to split our data into 3 groups – (20, 30], (30, 50] and (50, 60] – and label them as Young, Mid-Aged and Old respectively. (Here “(” means exclusive and “]” means inclusive.)

If we check the data again:

df[["Age", "Age Group"]]

You shall see the correct result, as we expected:

[Image: pandas split data into buckets - age groups with custom intervals]

Now with this additional column, you can easily find out how much each age group contributed to the total sales amount. For example:

df.groupby("Age Group").agg({"Total Amount": "sum"})[["Total Amount"]].apply(lambda x: 100*x/x.sum())

This would calculate the contribution % to the total sales amount within each group (more details from here):

[Image: pandas split data into buckets - cut age groups]

If you do not wish to have any intermediate data column (in our case, the “Age Group”) added to your data frame, you can directly pass the output of cut into the groupby function:

df.groupby(pd.cut(df["Age"],[20, 30, 50, 55], precision=0, labels=["Young","Mid-Aged","Old"])).agg({"Total Amount": "sum"})[["Total Amount"]].apply(lambda x: 100*x/x.sum())

The above code will produce the same result as previously.

There are times when you may want to define your bins with a start point and end point at a fixed interval, for instance, to understand the total sales amount for each 0.1 range of order dimensions.

For such case, we can make use of the arange function from numpy package, e.g.:

np.arange(0, 1, 0.1)

This gives us an array of values from 0 up to (but not including) 1 with an interval of 0.1, and we can supply it as the bins argument to the cut function:

df.groupby(pd.cut(df["Order Dimensions"],np.arange(0, 1, 0.1))).agg({"Total Amount": "sum"})

With the above code, pandas splits the order dimensions into chunks of 0.1 each, and then summarizes the sales amount for each of these ranges:

[Image: pandas split data into buckets - order dimensions]

Note that arange does not include the stop number 1, so if you wish to include 1, you may want to add an extra step to the stop number, e.g. np.arange(0, 1 + 0.1, 0.1). The cut function also has two arguments – right and include_lowest – to control whether the left and right edges are inclusive. E.g.:

df.groupby(pd.cut(df["Order Dimensions"],np.arange(0, 1 + 0.1, 0.1), right=False, include_lowest=True)).agg({"Total Amount": "sum"})

This makes the left edge inclusive and the right edge exclusive; the output will be similar to below:

[Image: pandas split data into buckets - order dimensions left inclusive]

cut vs qcut

Pandas also provides another function qcut, which helps to split your data based on quantiles (the cut points based on the distribution of the data). For instance, if you use qcut for the “Age” column:

pd.qcut(df["Age"],2, duplicates="drop")

You would see the age data has been split into two groups: (22.999, 41.5] and (41.5, 51.0].

[Image: pandas split data into buckets - age groups qcut]

If you examine the data inside each group:

pd.qcut(df["Age"],2, duplicates="drop").value_counts()

You would see qcut has split the total of 6 rows of age data equally into 2 groups, and the cut point is at 41.5:

[Image: pandas split data into buckets - age groups qcut - value_counts]
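
If you would like to see the actual cut points that qcut picked, you can pass retbins=True to get the bin edges back together with the binned data (a small sketch; the edge values in the comment are based on our sample ages):

binned, edges = pd.qcut(df["Age"], 2, retbins=True, duplicates="drop")
print(edges)   # array([23. , 41.5, 51. ]) - the median 41.5 is the cut point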

So if you would like to find out which 4 age groups (of roughly equal size) spent a similar amount of money on your product, you can do as below:

df.groupby(pd.qcut(df["Age"],4, duplicates="drop")).agg({"Total Amount" : "sum"})

And you would see that if we split our data into these 4 groups, the total sales amounts are relatively similar:

[Image: pandas split data into buckets - age groups qcut - sales amount]

Conclusion

In this article, we have reviewed the pandas cut and qcut functions, which we can use to split our data into buckets either by self-defined intervals or based on cut points from the data distribution.

Hope this gives you some hints when you are solving the problems similar to what we have discussed here.

 


Pandas – filtering records in 20 ways

Filtering records is quite a common operation when you process or analyze data with pandas; a lot of the time you will have to apply filters so that you can concentrate on the data you want. Pandas is so powerful and flexible that it provides plenty of ways to filter records, whether you want to filter by columns to focus on a subset of the data or filter based on certain conditions. In this article, we will be discussing the various ways of filtering records in pandas.

Prerequisite:

You will need to install the pandas package in order to follow the below examples. Below is the command to install pandas with pip:

pip install pandas

And I will be using the sample data from here, so you may also want to download a copy into your local machine to try out the later examples.

With the code below, we can get a quick view of how the sample data looks:

import pandas as pd
df = pd.read_excel(r"C:\Sample-Sales-Data.xlsx")
df.head(5)

Below is the output of the first 5 rows of data:

[Image: pandas filtering data]

Let’s get started with our examples.

Filtering records by label or index

Filtering by column name/index is the most straightforward way to get a subset of the data frame when you are only interested in a few columns rather than the full data frame. The syntax is to use df[[column1, ..., columnN]] to select only the specified columns. For instance, the below will get a subset of the data with only 2 columns – “Salesman” and “Item Desc”:

new_df = df[["Salesman","Item Desc"]]
new_df.head(5)

Output from the above would be:

[Image: pandas filtering data subset]

If you are pretty sure which rows you are looking for, you can use the df.loc function, which allows you to specify both the row and column labels to filter the records. You can pass in a list of row labels and column labels like below:

df.loc[[0,4], ["Salesman", "Item Desc"]]

And you would see the rows with index 0 and 4, and the columns labelled “Salesman” and “Item Desc”, selected as per the below output:

[Image: pandas filtering loc]

Or you can specify the label range with : to filter the records by a range:

df.loc[0:4, ["Salesman", "Item Desc"]]

You would see 5 rows (row index 0 to 4) selected as per below output:

[Image: pandas filtering loc with label range]

Note that we are currently using the default row index, which is an integer starting from 0, so it happens to be the same as the position of the rows. Let’s say you use Salesman as your index; then you will need to filter based on the index label (the value of Salesman), e.g.:

df.set_index("Salesman", inplace=True)
df.loc["Sara", ["Item Desc", "Order Quantity"]]

With the above code, you will be able to select all the records with Salesman as “Sara”:

[Image: pandas filtering loc with row label]

Filtering records by row/column position

Similarly, you can use the iloc function to achieve the same as what can be done with the loc function. The difference is that, for iloc, you pass in the integer positions for both rows and columns. E.g.:

df.iloc[[0,4,5,10],0:2]

The integers are the positions of the rows/columns, from 0 to length-1 of the axis. So the below output will be generated when you run the above code:

[Image: pandas filtering iloc function]

Filtering records by single condition

If you would like to filter the records based on a certain condition, for instance, the value of a particular column, you may have a few options to do the filtering based on what type of data you are dealing with. 

The eq and == work the same when you want to compare if the value matches:

flt_wine = df["Item Desc"].eq("White Wine")
df[flt_wine]

Or:

flt_wine = (df["Item Desc"] == "White Wine")
df[flt_wine]

Both will generate the below output:

[Image: pandas filtering equals condition]

If you run flt_wine alone, you will see the output is a Series of True/False values with their index. This is how the filter works: the data frame keeps the rows whose mask value is True and filters out the rows with a False value.

To get the data with the negation of a certain condition, you can use ~ before your condition statement as per below:

df[~flt_wine]
#or
df[~(df["Item Desc"] == "White Wine")]
#or
df[(df["Item Desc"] != "White Wine")]

This will return the data with “Item Desc” other than “White Wine”.

And for the string data type, you can also use str.contains to check whether the column contains a particular substring:

df[df["Item Desc"].str.contains("Wine")]

If you want to filter by matching multiple values, you can use isin with a list of values:

flt_wine = df["Item Desc"].isin(["White Wine", "Red Wine"])
df[flt_wine].head(5)

[Image: pandas filtering isin function]

And you can also use the data frame query function to achieve the same. But column labels with spaces in between will cause errors when using this function, so you will need to reformat your column headers a bit, such as by replacing spaces with underscores (refer to this article for more details). One quick way to do this is sketched below:
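
# replace spaces in the column labels with underscores so query() can reference them
df.columns = df.columns.str.replace(" ", "_")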

With this change to the column headers, you shall be able to run the below code with the same result as the isin method above:

df1 = df.query("Item_Desc in ('White Wine','Red Wine')")
df1.head(5)

There are other Series functions you can use to filter your records, such as isnull, isna, notna, notnull, between, etc. You may want to check the pandas Series documentation. Below is a quick sketch using notna and isna to filter on missing values (whether the sample data actually contains any missing values depends on the file):
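
# keep only rows where "Item Desc" and "Order Quantity" are both present
df_valid = df[df["Item Desc"].notna() & df["Order Quantity"].notna()]

# or the opposite - inspect the rows that do have a missing "Item Desc"
df_missing = df[df["Item Desc"].isna()]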

Filtering records by multiple conditions

When you need to filter by multiple conditions involving multiple columns, you can do something similar to what we have discussed above, using & or | to join the conditions.

For filtering records when both conditions are true:

flt_whisky_bulk_order = (df["Item Desc"] == "Whisky") & (df["Order Quantity"] >= 10)
df[flt_whisky_bulk_order]

The output would be :

[Image: pandas filtering and condition]

For filtering the records when either condition is true:

flt_high_value_order = (df["Item Desc"] == "Whisky") | (df["Price Per Unit"] >= 50) 
df[flt_high_value_order]

The output would be :

[Image: pandas filtering or condition]

Similarly, the above can be done with the data frame query function. Below is an example of an AND condition:

df1 = df.query("Item_Desc == 'Whisky' and Order_Quantity >= 10") 
df1.head(5)

Below is an example of an OR condition:

df1 = df.query("Item_Desc_ == 'Whisky' or Price_Per_Unit >= 10")
df1.head(5)

Filtering records by dataframe.filter

There is also another filter method which can be used to filter by the row or column label.

Below is an example that gets all the columns whose names start with the “Order” keyword:

df.filter(regex="Order*", axis=1)

You shall see the below output:

[Image: pandas filtering dataframe filter]

Similarly, when applying the filter to row labels, you can set axis=0:

df.set_index("Order Date", inplace=True)
df.filter(like="2020-06-21", axis=0)

[Image: pandas filtering dataframe filter 2]

Take note that the data frame filter function only works on the row or column labels, not on the values of any specific data series.

Conclusion

Filtering records is such a frequently used operation whenever you need to deal with data in pandas, and in this article we have discussed a lot of methods you can use under different scenarios. It may not cover everything you need, but hopefully it can solve 80% of your problems. There are other Series functions you may employ to filter your data, but you will probably find the syntax still falls under what we have summarized in this article.

If you are interested in other topics about pandas, you may refer to here.


Pandas format column headers

When using Pandas to deal with data from various sources, you will often see data headers in various formats: for instance, some people prefer upper case, some use lowercase or camel case. There are also different ways to join the words in a column label; spaces, hyphens and underscores are all commonly seen. This potentially causes problems when you want to reference a particular column, since pandas column labels are case sensitive, and you may get confused about the correct spelling. In such cases, you would need to format the column headers into a standard format before processing the data. This article will explain the different ways to format column headers.

Prerequisite:

You will need to install the pandas package in order to follow the below examples. Below is the command to install pandas with pip:

pip install pandas

With the package installed, let’s create a sample data set for our later use:

import pandas as pd
df = pd.DataFrame({"Salesman" : ["Patrick", "Sara", "Randy"],
                  "order date" : pd.date_range(start='2020-08-01', periods=3),
                  "Item Desc " : ["White Wine", "Whisky", "Red Wine"],
                  "Price Per-Unit": [10, 20, 30], 
                  "Order Quantity" : [50, 10, 40],
                  99: ["remak1", "remark2", "remark3"]})

You can preview your data set from a Jupyter Notebook; it would look similar to below:

[Image: pandas format column headers - data frame preview]

You probably wonder why someone would use a number as a column header, but it does happen in the real world for various reasons.

If you use df[99] or df.loc[0, 99], you are able to see the correct data, which means pandas is not confused about whether your column label is a string or numeric. For example, both of the following work on the sample data frame created above:
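
print(df[99])         # the whole column labelled with the integer 99
print(df.loc[0, 99])  # the value at row index 0, column 99 -> "remak1"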

[Image: pandas format column headers - accessing the numeric column]

But it can cause readability issues for humans and introduce errors, especially if you always assume column labels are strings and perform string operations on them.

Convert column header to string

So the first thing we probably want to do is to convert the column headers into strings. You can use the astype method to convert them:

df.columns = df.columns.astype("str")

A lot of pandas methods have an “inplace” parameter to apply the changes without creating new objects, but astype does not support “inplace”, so we will need to re-assign the formatted object back to df.columns.

Format column header with cases

If you would like to reference all columns in uppercase, you have at least three ways of doing it:

  • Use the str method from pandas Index object
  • Use the map method from pandas Index object
  • Use Python built-in map method

Below is the sample code for above 3 options:

#Index.str method
df.columns = df.columns.str.upper()

#Index.map method
df.columns = df.columns.map(str.upper)

#Python built-in map method
df.columns = map(str.upper, df.columns)

The column headers would be all converted to uppercase:

Index(['SALESMAN', 'ORDER DATE', 'ITEM DESC ', 'PRICE PER-UNIT',
       'ORDER QUANTITY', '99'],
      dtype='object')

Option 1 seems to be the most straightforward way, as long as the operations are supported by the str accessor, such as ljust, rjust, split, etc.

Similarly, you can convert column headers to lowercase with str.lower():

df.columns = df.columns.str.lower()

or title case with str.title if this is the format you wish to standardize across all data sources:

df.columns = df.columns.str.title()

Replace characters in column header

If you noticed, there is a space accidentally added at the end of my column header – “Item Desc ” – which will cause a key error if I use df[“Item Desc”] to access the column. To fix this, we can use str.strip to remove all the leading and trailing spaces:

df.columns = df.columns.str.strip()

But the spaces in between cannot be removed, so if you try to use df.Item Desc, it will give you an error. The best approach is to replace all the spaces and hyphens with underscores, so that you can use both the df[“Item_Desc”] and df.Item_Desc formats to reference the column. Below is how you can use a simple lambda function to replace the spaces and hyphens with underscores:

df.columns = df.columns.map(lambda x : x.replace("-", "_").replace(" ", "_"))
# Or
df.columns = map(lambda x : x.replace("-", "_").replace(" ", "_"), df.columns)

If you check again, the column header would be updated as per below:

Index(['Salesman', 'Order_Date', 'Item_Desc', 'Price_Per_Unit',
       'Order_Quantity', '99'],
      dtype='object')

Note that if you use df.columns.str.replace, you cannot simply chain a plain replace call onto it, because the first replace returns an Index object rather than a string; you need to go through the .str accessor again, as shown below.
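
# each str.replace returns an Index, so call .str again before the next replace
df.columns = df.columns.str.replace("-", "_").str.replace(" ", "_")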

Often you will see newline characters in the column headers; you can remove them with the replace method as per below:

df.columns = df.columns.str.replace("\n", "")

Add prefix or suffix to column header

With map and a lambda, you can also easily add a prefix or suffix to the column headers, e.g.:

#adding prefix with "Label_"
df.columns = df.columns.map(lambda x : "Label_" + x)

#adding suffix with "_Col"
df.columns = df.columns.map(lambda x : x + "_Col")

Use of rename method

If you find the existing column headers are not meaningful to you, you can manually rename multiple columns at one time with the data frame rename method as per below:

df.rename(columns={"Salesman" : "Sales Person", "Item Desc " : "Order Desc"}, inplace=True)

The rename method supports the inplace parameter, so you can immediately apply the changes to the original data frame.

[Image: pandas rename column name]

flatten multi index column

After you have aggregated your data with the groupby and agg functions, you may sometimes get a multi-index column header, for instance:

df_sum = df.groupby("Salesman").agg({"Order Quantity": ["mean", "sum"]})

When you calculate both mean and sum of the “Order Quantity” column at the same time, you will get the result similar to below:

[Image: python pandas format column header - multi-index column]

The column header becomes a multi-index header, so if you want to flatten it by joining the two levels into one, you can make use of a list comprehension as per below:

df_sum.columns = [' '.join(col) for col in df_sum.columns]

With the above, you would see the column headers changed from hierarchical to flat as per the below:

[Image: python pandas format column header - flattened multi-index column]

 

Conclusion

In this article, we have discussed a few options you can use to format column headers, such as using the str and map methods of the pandas Index object, and if you want something more than simple string operations, you can also pass in a lambda function. All these methods are not limited to column headers or row labels (Index objects); you can also use them to format your data series.

If you are interested in other topics about pandas, you may refer to here.

 


Split or merge PDF files with 5 lines of Python code

There are many cases where you want to extract a particular page from a big PDF file or merge PDF files into one, for various reasons. You can make use of some PDF editor tools to do this, but you may find the split or merge functions are usually not available in the free version, or that it is too tedious when there are just so many pages or files to be processed. In this article, I will be sharing a simple solution to split or merge multiple PDF files with a few lines of Python code.

Prerequisite

We will be using a Python library called PyPDF2, so you will need to install this package in your working environment. Below is an example with pip:

pip install PyPDF2

Let’s get started

The PyPDF2 package has 4 major classes – PdfFileWriter, PdfFileReader, PdfFileMerger and PageObject – which look quite self-explanatory from the class names themselves. If you need to do something more than splitting or merging PDF pages, you may want to check this document to find out more about what you can do with this library.

Split PDF file

When you want to extract a particular page from a PDF file and make it a separate PDF file, you can use PdfFileReader to read the original file, and then you will be able to get a particular page by its page number (page numbers start from 0). With PdfFileWriter, you can use the addPage function to add the PDF page into a new PDF object and save it.

Below is the sample code that extracts the first page of file1.pdf and saves it as a separate PDF file named first_page.pdf:

from PyPDF2 import PdfFileWriter, PdfFileReader
input_pdf = PdfFileReader("file1.pdf")
output = PdfFileWriter()
output.addPage(input_pdf.getPage(0))
with open("first_page.pdf", "wb") as output_stream:
    output.write(output_stream)

input_pdf.getPage(0) returns a PageObject, which allows you to modify some of the attributes of the PDF page, such as rotating or scaling it. You may want to learn more about it from here.

Merge PDF files

To merge multiple PDF files into one file, you can use PdfFileMerger. Although you can also do this with PdfFileWriter, PdfFileMerger is probably more straightforward when you do not need to edit the pages before merging them.

Below is the sample code, which uses the append function from PdfFileMerger to append multiple PDF files and write them into one PDF file named merged.pdf:

from PyPDF2 import PdfFileReader, PdfFileMerger
pdf_file1 = PdfFileReader("file1.pdf")
pdf_file2 = PdfFileReader("file2.pdf")
output = PdfFileMerger()
output.append(pdf_file1)
output.append(pdf_file2)

with open("merged.pdf", "wb") as output_stream:
    output.write(output_stream)

If you do not want to include all pages from your original file, you can specify a tuple with the starting and ending page numbers as the pages argument of the append function, so that only the pages specified will be added to the new PDF file. For example, something along the lines below should append only the first three pages (page numbers start at 0 and the end is exclusive):
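
output = PdfFileMerger()
output.append(pdf_file1, pages=(0, 3))   # pages 0, 1 and 2 only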

The append function always adds new pages at the end. In case you want to specify the position where the new pages should be inserted, you should use the merge function, which allows you to specify the page position in the output at which the new pages are added. For instance, a small sketch:
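
output = PdfFileMerger()
output.append(pdf_file1)        # pages of file1 go in first
output.merge(0, pdf_file2)      # insert all pages of file2 at the very beginning

with open("merged.pdf", "wb") as output_stream:
    output.write(output_stream)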

Conclusion

The PyPDF2 package is a very handy toolkit for editing PDF files. In this article, we have reviewed how we can make use of this library to split or merge PDF files with some sample code. You can modify this code to suit your needs in order to automate the task in case you have many files or pages to be processed. There is also a pdfcat script included in the project folder which allows you to split or merge PDF files by calling it from the command line. You may want to take a look at it in case you just deal with one or two PDF files each time.

In case you are interested in other topics related to Python automation, you may check here. Thanks for reading.