# Resources

## How to calculate date difference between rows in pandas

#### Problem:

You have some data with date (or numeric) columns. You already know you can use the - operator to calculate the difference between two columns directly, but now you would like to calculate the date difference between two consecutive rows.

For instance, you have some sample data for GPS tracking, with the start and end time at each location (latitude and longitude). You would like to calculate the time gap within each location or between two locations.

```
import pandas as pd
import numpy as np

# read the sample data (the file name is a placeholder - replace with your own path or URL)
df = pd.read_excel("gps_sample_data.xlsx")
```

For a quick view, you can see the sample data output as per below:
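Since the sample file itself is not shown here, a small illustrative frame (the column names Start Time and End Time are assumed to match the sample data) can stand in for it when trying the snippets below:

```
import pandas as pd

# illustrative stand-in for the sample GPS data; values are made up
df = pd.DataFrame({
    "Start Time": pd.to_datetime(["2020-06-01 08:00", "2020-06-01 08:30", "2020-06-01 09:10"]),
    "End Time":   pd.to_datetime(["2020-06-01 08:20", "2020-06-01 09:00", "2020-06-01 09:45"]),
})
print(df)
```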

#### Solutions:

##### Option 1: Using Series or Data Frame diff

The data frame diff function is the most straightforward way to compare the values between the current row and the previous rows. By default, it compares the current row with the previous row, and you can also specify the periods argument to compare the current row with the row periods positions earlier. To calculate the time gap of the start time between two consecutive rows:

`df["Start Time"].diff()`

You shall see the below output:

If you check the data type of the result, you may see it showing as dtype('<m8[ns]'), i.e. timedelta64[ns]; you can convert the result into integer minutes for easier reading.

In this case, you can use the below timedelta from numpy to convert the date difference into minutes:

`df["Start Time"].diff().apply(lambda x: x/np.timedelta64(1, 'm')).fillna(0).astype('int64')`

You shall see the below output:

You can also select multiple date columns as a data frame to apply the diff function.
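For example, with a two-column datetime frame (the same made-up data as above), diff compares each column against its own previous row:

```
import pandas as pd

df = pd.DataFrame({
    "Start Time": pd.to_datetime(["2020-06-01 08:00", "2020-06-01 08:30"]),
    "End Time":   pd.to_datetime(["2020-06-01 08:20", "2020-06-01 09:00"]),
})
# diff on a multi-column frame works column by column
delta = df[["Start Time", "End Time"]].diff()
print(delta)  # first row is NaT, second row holds the per-column gaps
```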

##### Option 2: Using Series or Data Frame shift with the - operator

The shift function allows us to move the values up/down or left/right by the given number of periods, depending on which axis you specify. You can imagine it works like shifting cells in Excel.

To calculate the difference across rows, you will need to shift the subtrahend column. With shift(1) the values move down one row, so below is how to calculate the difference between the current End Time and the Start Time from the previous row:

`df["End Time"] - df["Start Time"].shift(1)`

You shall see the below result:

If you want to calculate the difference for multiple date columns, you can use the data frame shift operation as per below. Take note that subtraction between two data frames aligns on column labels, so listing the columns in a different order on the two sides would not cross-subtract them:

`df[["End Time", "Start Time"]] - df[["End Time", "Start Time"]].shift(1)`

##### Option 3: Using data frame sub

The data frame sub function is self-explanatory by its name. You can apply the subtraction at row level or column level by specifying the axis argument. For our case, you can subtract the first row of the selected columns from every row of the data frame, which gives the elapsed time since the first record. Below is the code:

`(df.loc[:,["Start Time", "End Time"]].sub(df.loc[0,["Start Time", "End Time"]], axis='columns')/np.timedelta64(1, "m")).astype("int64")`

You can see the below output:

If you would like to calculate the gap between the current End Time and the previous row's Start Time, you can use the below:

`df["End Time"].sub(df["Start Time"].shift(1))`

It should produce the same result as previously when we used - with shift.

#### Conclusion:

Among the three options discussed above, using diff is the most straightforward approach, but note that it can only compare values within the same column. If you would like to calculate the difference between the End Time of one row and the Start Time of another, you will have to use sub or - with a shift operation. One more difference between diff and sub is that sub has a fill_value argument that substitutes missing values with a default, so you do not need another line of code to handle the NA values.
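To make the fill_value point concrete, here is a minimal sketch using a plain numeric Series (numbers keep the example simple, since a timedelta result may not accept an arbitrary numeric default):

```
import pandas as pd

s = pd.Series([10, 15, 21])
# plain subtraction with shift leaves NaN in the first row...
print(s - s.shift(1))
# ...while sub can substitute the missing value with a default before subtracting
print(s.sub(s.shift(1), fill_value=0).tolist())  # [10.0, 5.0, 6.0]
```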

## Pandas tricks – split one row of data into multiple rows

As a data scientist or analyst, you will need to spend a lot of time wrangling data from various sources so that you can have a standard data structure for your further analysis. There are cases where you get the raw data in some sort of summary view and you would need to split one row of data into multiple rows based on certain conditions, in order to do grouping and matching from different perspectives. In this article, we will be discussing a solution to this particular issue.

#### Prerequisites:

You will need to get pandas installed if you have not yet. Below is the pip command to install pandas:

`pip install pandas`

And let's import the necessary modules and use this sample data for our demonstration. You can download it into your local folder, or supply the URL directly to the pandas read_excel method:

```
import pandas as pd
import numpy as np

# read the sample order data (the file name is a placeholder - replace with your own path or URL)
df = pd.read_excel("order_sample_data.xlsx")
```

So if we do a quick view of the first 5 rows of the data with df.head(5), you would see the below output:

Assume this is data extracted from an eCommerce system where someone runs an online shop for footwear and apparel products, and the shop provides a free 7-day return for the items it sells. You can see that each row has the order information, when and by whom the delivery was performed, and, if the customer requested a return, when the item was returned and by which courier service provider. The data reflects the shop owner's view, and you may find it difficult to analyse from the courier service providers' perspective in the current format. So we shall do some transformation to make the format simpler for analysis.

#### Split one row of data into multiple rows

Now let’s say we would like to split this one row of data into 2 rows if there is a return happening, so that each row has the order info as well as the courier service info and we can easily do some analysis such as calculating the return rate for each product, courier service cost for each month by courier companies, etc.

The output format we would like to have is more transaction-based, so let's format our date columns and rename the delivery-related columns, so that they won't confuse us later when splitting the data.

```df["Delivery Date"] = pd.to_datetime(df["Delivery Date"]).dt.date
df["Return Date"] = pd.to_datetime(df["Return Date"]).dt.date

df.rename(columns={"Delivery Date" : "Transaction Date",
"Delivery Courier" : "Courier",
"Delivery Charges" : "Charges"}, inplace=True)```

And we add one more column as transaction type to indicate whether the record is for delivery or return. For now, we just assign it as “DELIVERY” for all records:

`df["Transaction Type"] = "DELIVERY"`

The rows we need to split are the ones with return info, so let’s create a filter by checking if return date is empty:

`flt_returned = ~df["Return Date"].isna()`

If you verify the filter with df[flt_returned], you shall see all rows with return info are selected as per below:

To split out the delivery and return info for these rows, we will need to perform the below steps:

• Duplicate the current 1 row into 2 rows
• Change the transaction type to “RETURN” for the second duplicated row
• Copy values of the Return Date, Return Courier, Return Charges to Transaction Date, Courier, Charges respectively

To duplicate these records, we use the data frame's index.repeat() to repeat each index twice, and then use the loc function to retrieve the rows for these repeated indexes. Below is the code to create the duplicate records for the rows with return info:

`d = df[flt_returned].loc[df[flt_returned].index.repeat(2),:].reset_index(drop=True)`

Next, let's save the duplicated row indexes into a variable, so that we can refer to them multiple times even after some data in the duplicated rows changes. We use the data frame duplicated function to flag the duplicated rows. For this function, the keep="first" argument marks the first occurrence as non-duplicate and the subsequent rows as duplicates, while keep="last" would mark the first row as the duplicate.

```idx_duplicate = d.duplicated(keep="first")
#the default value for keep argument is "first", so you can just use d.duplicated()```

With this idx_duplicate variable, we can directly update the transaction type for these rows to RETURN:

`d.loc[idx_duplicate,"Transaction Type"] = "RETURN"`

And next, we shall copy the return info into Transaction Date, Courier, Charges fields for these return records. You can either base on the transaction type value to select rows, or continue to use the idx_duplicate to identify the return records.

Below will copy values from Return Date, Return Courier, Return Charges to Transaction Date, Courier, Charges respectively:

```d.loc[idx_duplicate, ["Transaction Date", "Courier", "Charges"]] = d.loc[idx_duplicate,
["Return Date", "Return Courier","Return Charges"]].to_numpy()```

If you check the data now, you shall see for the return rows, the return info is already copied over:

(Note: you may want to check here to understand why to_numpy() is needed for swapping columns)

Finally, we need to combine the original rows which only has delivery info with the above processed data. Let’s also sort the values by order number and reset the index:

`new_df = pd.concat([d, df[~flt_returned]]).sort_values("Order#").reset_index(drop=True)`

Since the return-related columns are redundant now, we shall drop them to avoid confusion. Let's use the below code to drop all columns whose labels start with the "Return" keyword (the pattern is a regular expression, hence the ^ anchor):

`new_df.drop(new_df.filter(regex="^Return", axis=1).columns, axis=1, inplace=True)`

Once we have deleted the redundant columns, you shall see the below final result in new_df:

So we have successfully transformed our data from the shop owner's view to the courier companies' view; each delivery and return record is now an individual row.
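For reference, the whole flow above can be condensed into a self-contained sketch; the two-order frame below is made up to mirror the article's columns:

```
import pandas as pd

# hypothetical miniature of the order data used in this article
df = pd.DataFrame({
    "Order#": [1, 2],
    "Delivery Date": pd.to_datetime(["2020-08-01", "2020-08-02"]),
    "Delivery Courier": ["DHL", "FedEx"],
    "Return Date": [pd.NaT, pd.Timestamp("2020-08-05")],
    "Return Courier": [None, "DHL"],
})
df = df.rename(columns={"Delivery Date": "Transaction Date", "Delivery Courier": "Courier"})
df["Transaction Type"] = "DELIVERY"

flt_returned = ~df["Return Date"].isna()

# duplicate the rows with a return, mark the copy as RETURN and move the return info over
d = df[flt_returned].loc[df[flt_returned].index.repeat(2), :].reset_index(drop=True)
idx_duplicate = d.duplicated(keep="first")
d.loc[idx_duplicate, "Transaction Type"] = "RETURN"
d.loc[idx_duplicate, ["Transaction Date", "Courier"]] = d.loc[
    idx_duplicate, ["Return Date", "Return Courier"]].to_numpy()

new_df = (pd.concat([d, df[~flt_returned]])
            .sort_values("Order#").reset_index(drop=True))
new_df = new_df.drop(new_df.filter(regex="^Return", axis=1).columns, axis=1)
print(new_df)
```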

#### Conclusion

Data wrangling can sometimes be tough, depending on what kind of source data you get. In this article, we have gone through a solution to split one row of data into multiple rows, using the pandas index.repeat to duplicate the rows and the loc function to swap the values. There are other possible ways to handle this, so please do share your comments if you have a better idea.

## Why your lambda function does not work

#### Introduction

The lambda function in Python is designed to be a one-liner, throwaway function without even the need to assign a function name, so it is also known as an anonymous function. Compared to normal Python functions, you do not need to write the def and return keywords for a lambda function, and it can be defined right at the place where you need it, which makes your code more concise. In this article, we will be discussing some unexpected results you may have encountered when using lambda functions.

#### Basic usage of lambda

Let's cover some basics of the lambda function before we dive into the problems we are going to solve in this article.

Below is the syntax to define lambda function:

`lambda [arguments] : expression`

As you can see, a lambda function can be defined with or without arguments, and take note that it only accepts a single expression, not Python statements. The difference is that an expression can be evaluated into a value (or object), e.g. 2**2, while a statement like while True: cannot. You can think of an implicit "return" before the expression, so your expression must eventually compute into a value.
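For instance, a conditional expression is allowed inside a lambda because it evaluates to a value, while the commented statements would be syntax errors:

```
# a conditional *expression* evaluates to a value, so it is allowed
sign = lambda x: "positive" if x > 0 else "non-positive"
print(sign(2))   # positive
print(sign(-5))  # non-positive

# statements are not allowed - both of these would raise SyntaxError:
# f = lambda x: return x
# g = lambda x: while x > 0: x -= 1
```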

And here are some basic usage of lambda function:

```square = lambda x: x**2
print(square(4))
#Output: 16
cryptocurrencies = [('Bitcoin', 10948.52),('Ethereum', 381.41),('Tether', 1.00),
('XRP', 0.249940),
('Bitcoin Cash', 231.86),
('Binance Coin', 27.02),
('Litecoin', 48.20),
('EOS', 2.69),
('TRON', 0.027157),
('Neo', 24.29),
('Stellar', 0.077903),
('Huobi Token', 4.91)]

top5_by_name = sorted(cryptocurrencies, key=lambda token: token[0].lower())[0:5]
print(top5_by_name)
#Output: [('Binance Coin', 27.02), ('Bitcoin', 10948.52), ('Bitcoin Cash', 231.86), ('EOS', 2.69), ('Ethereum', 381.41)]

lowest = min(cryptocurrencies, key=lambda token: token[1])
print(lowest)
#Output: ('TRON', 0.027157)

highest = max(cryptocurrencies, key=lambda token: token[1])
print(highest)
#Output: ('Bitcoin', 10948.52)

highest_in_local_currency = lambda exchange_rate: highest[1] * exchange_rate
highest_sgd = highest_in_local_currency(1.38)
print(highest_sgd)
#Output: 15108.9576```

You can see that it is quite convenient when you just need a very short function to supply to another function that accepts a key=keyfunc argument, such as sorted, list.sort, min, max, heapq.nlargest, heapq.nsmallest, itertools.groupby and so on. The common thing about these use cases is that you do not need very complicated logic in the keyfunc (it can be written in one line) and you probably will not reuse it anywhere else. So it is the ideal scenario for a lambda function.

Now let's expand further on our previous example. Assuming the bitcoin price fluctuated a lot on Mon & Tue although it still dominated the market, you would like to convert the price into SGD in the below way:

```
highest = ('Bitcoin', 10948.52)
mon_highest = lambda exchange_rate: highest[1] * exchange_rate

highest = ('Bitcoin', 10000)
tue_highest = lambda exchange_rate: highest[1] * exchange_rate

print("Mon:", mon_highest(1.36))
print("Tue:", tue_highest(1.36))
```

You want to assign different values to the highest variable to calculate the price in another currency, but you would be surprised when checking the result: both lines print 13600.0.

Instead of scratching your head to figure out why it does not work, let's try another approach. I am going to create a list of converter functions, each of which is supposed to capture one cryptocurrency pair and calculate its price based on the exchange rate supplied. Then I loop through these functions and print out the converted values:

```
converters = [lambda exchange_rate: crypto[1] * exchange_rate for crypto in cryptocurrencies]
for c in converters:
    print(c(1.36))
```

I am expecting to see all the prices are converted into local currency based on the exchange rate 1.36, but when running the above code, it gives below result:

Every line shows the same number, 6.6776, which is the last token's price (Huobi Token, 4.91) multiplied by 1.36.

Same as the previous behaviour, only the last value was used in the lambda function. So why does it not work as intended when the lambda is used this way?

#### Runtime data binding

When people run into this issue, it is usually due to a fundamental misunderstanding of variable binding in Python functions. For Python functions, regardless of whether they are normal or lambda functions, free variables are bound at runtime, not at definition time. So for our first example, each lambda function simply reads whatever value the highest variable holds at the moment it is executed.

With this concept clear, you shall be able to understand the behaviour of the output from the above two examples: only the latest values at execution time were used in the lambda functions.
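The same late binding is easy to reproduce with a plain loop of lambdas:

```
# every lambda refers to the same variable i, which is looked up at call time
funcs = [lambda: i for i in range(3)]
# by the time the lambdas run, the loop has finished and i == 2
print([f() for f in funcs])  # [2, 2, 2]
```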

To fix this issue, we just need a minor change to our original code to pass in the variable in the function definition as default value to the argument. For instance, below is the fix for the first example:

```mon_highest = lambda exchange_rate, highest = highest: highest[1] * exchange_rate
tue_highest = lambda exchange_rate, highest = highest: highest[1] * exchange_rate```

Below is the fix for the second example:

`converters = [lambda exchange_rate, crypto = crypto: crypto[1] * exchange_rate for crypto in cryptocurrencies]`

You may wonder why we must use lambda in the above two examples; indeed, they do not necessarily require a lambda function. For the first example, since you need to call the function more than once, you should just use a normal function instead, and be more careful whenever the function reads any variable from outside its own scope.

And for the second example, it can be simply replaced with a list comprehension as per below:

`[crypto[1] * 1.36 for crypto in cryptocurrencies]`

#### Conclusion:

The lambda function provides convenience for writing tiny functions for one-time use, and it makes your code concise. But it is also highly restricted by its single expression: you cannot use multiple statements, exception handling, and so on. Whatever a lambda does, you can definitely use a normal function to replace it. The thing that matters most is readability, so you will need to evaluate whether your scenario is the right fit for a lambda, and bear in mind how variable binding works.

## Pandas – filtering records in 20 ways

Filtering records is a quite common operation when you process or analyze data with pandas; a lot of the time you will have to apply filters so that you can concentrate on the data you want. Pandas is so powerful and flexible that it provides plenty of ways to filter records, whether you want to filter columns to focus on a subset of the data or filter rows based on certain conditions. In this article, we will be discussing the various ways of filtering records in pandas.

#### Prerequisite:

You will need to install pandas package in order to follow the below examples. Below is the command to install pandas with pip:

`pip install pandas`

And I will be using the sample data from here, so you may also want to download a copy into your local machine to try out the later examples.

With the below codes, we can get a quick view of how the sample data looks like:

```
import pandas as pd

# read the sample sales data (the file name is a placeholder - replace with your own path or URL)
df = pd.read_excel("sales_sample_data.xlsx")
df.head(5)
```

Below is the output of the first 5 rows of data:

Let’s get started with our examples.

#### Filtering records by label or index

Filtering by column name/index is the most straightforward way to get a subset of the data frame in case you are only interested in a few columns of the data rather than the full data frame. The syntax is to use df[[column1,..columnN]] to filter only the specified columns. For instance, the below will get a subset of the data with only two columns, "Salesman" and "Item Desc":

`new_df = df[["Salesman", "Item Desc"]]`

Output from the above would be:

If you are pretty sure which are the rows you are looking for, you can use the df.loc function which allows you to specify both the row and column labels to filter the records. You can pass in a list of row labels and column labels like below:

`df.loc[[0,4], ["Salesman", "Item Desc"]]`

And you would see the row index 0 and 4, column label “Salesman” and “Item Desc” are selected as per below output:

Or you can specify the label range with : to filter the records by a range:

`df.loc[0:4, ["Salesman", "Item Desc"]]`

You would see 5 rows (row index 0 to 4) selected as per below output:

Note that currently we are using the default row index, which is an integer starting from 0, so it happens to be the same as the position of the rows. Let's say you have Salesman as your index; then you will need to filter based on the index label (the value of Salesman), e.g.:

```
df.set_index("Salesman", inplace=True)
df.loc["Sara", ["Item Desc", "Order Quantity"]]
```

With the above code, you will be able to select all the records with Salesman as “Sara”:

#### Filtering records by row/column position

Similarly, you can use the iloc function to achieve the same as what can be done with the loc function. The difference is that for iloc, you pass in the integer positions for both rows and columns. E.g.:

`df.iloc[[0,4,5,10],0:2]`

The integers are the position of the row/column from 0 to length-1 for the axis. So the below output will be generated when you run the above code:

#### Filtering records by single condition

If you would like to filter the records based on a certain condition, for instance, the value of a particular column, you may have a few options to do the filtering based on what type of data you are dealing with.

The eq and == work the same when you want to compare if the value matches:

```
flt_wine = df["Item Desc"].eq("White Wine")
df[flt_wine]
```

Or:

```
flt_wine = (df["Item Desc"] == "White Wine")
df[flt_wine]
```

Both will generate the below output:

If you run flt_wine alone, you will see the output is a series of True/False values with their index. This is how the filter works: the data frame keeps the rows whose mask value is True and drops the ones with False.
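You can see the mask itself with a tiny hypothetical frame:

```
import pandas as pd

# a made-up three-row frame just to show the boolean mask
df = pd.DataFrame({"Item Desc": ["White Wine", "Whisky", "White Wine"]})
flt_wine = df["Item Desc"].eq("White Wine")
print(flt_wine.tolist())            # [True, False, True]
print(df[flt_wine].index.tolist())  # [0, 2] - rows with a False mask are dropped
```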

To get the data with the negation of certain condition, you can use ~ before your condition statement as per below:

```
df[~flt_wine]
# or
df[~(df["Item Desc"] == "White Wine")]
# or
df[(df["Item Desc"] != "White Wine")]
```

This will return the data with “Item Desc” other than “White Wine”.

And for the string data type, you can also use str.contains to match whether the column contains a particular substring:

`df[df["Item Desc"].str.contains("Wine")]`

If you want to filter by matching multiple values, you can use isin with a list of values:

```
flt_wine = df["Item Desc"].isin(["White Wine", "Red Wine"])
df[flt_wine].head(5)
```

And you can also use the data frame query function to achieve the same. But column labels with spaces in between would cause errors when using this function, so you will need to reformat your column headers a bit, such as replacing spaces with underscores (refer to this article for more details).
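That reformatting step might look like the below (a minimal sketch with an assumed two-column frame):

```
import pandas as pd

# hypothetical frame mirroring the sample data's headers
df = pd.DataFrame({"Item Desc": ["White Wine", "Whisky"], "Order Quantity": [5, 12]})

# replace spaces with underscores so the labels are valid identifiers for query
df.columns = df.columns.str.replace(" ", "_")
print(df.columns.tolist())  # ['Item_Desc', 'Order_Quantity']
```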

With this change in the column header, you shall be able to run the below code with the same result as above isin method.

```
df1 = df.query("Item_Desc in ('White Wine','Red Wine')")
df1.head(5)
```

There are other Series functions you can use to filter your records, such as isnull, isna, notna, notnull, between, etc. You may want to check the pandas Series documentation.

#### Filtering records by multiple conditions

When you need to filter by multiple conditions involving multiple columns, you can join the conditions with & or |, similar to what we have discussed above.

For filtering records when both conditions are true:

```flt_whisky_bulk_order = (df["Item Desc"] == "Whisky") & (df["Order Quantity"] >= 10)
df[flt_whisky_bulk_order]```

The output would be :

For filtering the records when either condition is true:

```
flt_high_value_order = (df["Item Desc"] == "Whisky") | (df["Price Per Unit"] >= 50)
df[flt_high_value_order]
```

The output would be :

Similarly, the above can be done with data frame query function. Below is the example of AND condition:

```
df1 = df.query("Item_Desc == 'Whisky' and Order_Quantity >= 10")
df1.head(5)
```

Below is the example of OR condition:

```
df1 = df.query("Item_Desc == 'Whisky' or Price_Per_Unit >= 50")
df1.head(5)
```

#### Filtering records by dataframe.filter

There is also another filter method which can be used to filter by the row or column label.

Below is an example to get all the columns whose names start with the "Order" keyword. Note that filter takes a regular expression for the regex argument, so the pattern should be anchored with ^ (the pattern "Order*" would actually match "Orde" followed by zero or more "r"s):

`df.filter(regex="^Order", axis=1)`

You shall see the below output:

Similarly, when applying it to row labels, you can use axis=0:

```df.set_index("Order Date", inplace=True)
df.filter(like="2020-06-21", axis=0)```

Take note that the data frame filter function only works on the row or column labels, not on any specific data series.

#### Conclusion

Filtering records is such a frequently used operation whenever you need to deal with data in pandas, and in this article we have discussed many methods you can use under different scenarios. It may not cover everything you need, but hopefully it can solve 80% of your problems. There are other Series functions you may employ to filter your data, but the syntax will likely still fall under what we have summarized in this article.

If you are interested in other topics about pandas, you may refer to here.

## Pandas – formatting column headers

When using pandas to deal with data from various sources, you will often see data headers in various formats: for instance, some people prefer upper case, while others use lower case or camel case. There are also different ways of joining the words in a column label, with spaces, hyphens or underscores all commonly seen. This potentially causes problems when you want to reference a particular column, since pandas column labels are case sensitive, and you may get confused about the correct spelling. In such cases, you would need to format the column headers into a standard format before processing the data. This article explains the different ways to format column headers.

#### Prerequisite:

You will need to install pandas package in order to follow the below examples. Below is the command to install pandas with pip:

`pip install pandas`

With the package installed, let’s create a sample data set for our later use:

```
import pandas as pd
df = pd.DataFrame({"Salesman" : ["Patrick", "Sara", "Randy"],
                   "order date" : pd.date_range(start='2020-08-01', periods=3),
                   "Item Desc " : ["White Wine", "Whisky", "Red Wine"],
                   "Price Per-Unit": [10, 20, 30],
                   "Order Quantity" : [50, 10, 40],
                   99: ["remark1", "remark2", "remark3"]})
```

You can preview your data set from Jupyter Notebook, it would be similar to below:

You may wonder why someone would use a number as a column header, but it does happen in the real world for various reasons.

If you use df[99] or df.loc[0,99], you are able to see the correct data, which means it does not confuse pandas whether your column label is string or numeric.

But it can cause readability issues for humans and introduce errors, especially if you always assume column labels are strings and perform string operations on them.

#### Convert column header to string

So the first thing we probably want to do is to convert the column headers into strings. You can use the astype method to convert them:

`df.columns = df.columns.astype("str")`

A lot of pandas methods have “inplace” parameter to apply the changes without creating new objects, but astype does not support “inplace”, so we will need to re-assign the formatted object back to df.columns.

#### Format column header with cases

If you would like to reference all columns in uppercase, you can have at least three choices of doing it:

• Use the str method from pandas Index object
• Use the map method from pandas Index object
• Use Python built-in map method

Below is the sample code for above 3 options:

```#Index.str method
df.columns = df.columns.str.upper()

#Index.map method
df.columns = df.columns.map(str.upper)

#Python built-in map method
df.columns = map(str.upper, df.columns)

```

The column headers would be all converted to uppercase:

```Index(['SALESMAN', 'ORDER DATE', 'ITEM DESC ', 'PRICE PER-UNIT',
'ORDER QUANTITY', '99'],
dtype='object')```

Option 1 seems to be the most straightforward way, as long as the operations are supported by str, such as ljust, rjust, split, etc.

Similarly, you can convert column headers to lowercase with str.lower():

`df.columns = df.columns.str.lower()`

or title case with str.title() if that is the format you wish to standardize on across all data sources:

`df.columns = df.columns.str.title()`

#### Replace characters in column header

You may have noticed there is a space accidentally added to my column header – "Item Desc "; this will cause a KeyError if I use df["Item Desc"] to access the column. To fix this, we can use str.strip to remove all the leading and trailing spaces:

`df.columns = df.columns.str.strip()`

But the spaces between words cannot be removed this way, and if you try df.Item Desc, it will give you an error. The best way is to replace the spaces and hyphens with underscores, so that you can use both the df["Item_Desc"] and df.Item_Desc formats to reference the column. Below is how you can use a simple lambda function to replace the space and hyphen with underscore:

```df.columns = df.columns.map(lambda x : x.replace("-", "_").replace(" ", "_"))
# Or
df.columns = map(lambda x : x.replace("-", "_").replace(" ", "_"), df.columns)```

If you check again, the column header would be updated as per below:

```Index(['Salesman', 'Order_Date', 'Item_Desc', 'Price_Per_Unit',
'Order_Quantity', '99'],
dtype='object')```

Note that if you use df.columns.str.replace, you cannot chain the replace calls directly, as str.replace returns an Index object, not a string; you would need to repeat the .str accessor for each replacement, e.g. `df.columns.str.replace("-", "_").str.replace(" ", "_")`.

Often you will see newline characters in column headers; you can remove them with the replace method as per below:

`df.columns = df.columns.str.replace("\n", "")`

With the map and lambda, you can also easily add prefix or suffix to the column header, e.g.:

```
#adding prefix with "Label_"
df.columns = df.columns.map(lambda x : "Label_" + x)

#adding suffix with "_Col"
df.columns = df.columns.map(lambda x : x + "_Col")
```

#### Use of rename method

If you find certain column headers are not meaningful to you, you can manually rename multiple columns at one time with the data frame rename method as per below:

`df.rename(columns={"Salesman" : "Sales Person", "Item Desc " : "Order Desc"}, inplace=True)`

The rename method supports the inplace parameter, so you can immediately apply the changes to the original data frame.

#### Flatten multi index column

After you aggregated your data with groupby and agg function, you may sometimes get a multi index column header, for instance:

`df_sum = df.groupby("Salesman").agg({"Order Quantity": ["mean", "sum"]})`

When you calculate both mean and sum of the “Order Quantity” column at the same time, you will get the result similar to below:

The column header becomes a multi index header, so if you want to flatten it by joining the two levels into one, you can make use of a list comprehension as per below:

`df_sum.columns = [' '.join(col) for col in df_sum.columns]`

With the above, you would see column header changed from hierarchical to flattened as per the below:
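End to end, the flattening looks like this on a small assumed frame:

```
import pandas as pd

df = pd.DataFrame({"Salesman": ["Sara", "Sara", "Randy"],
                   "Order Quantity": [10, 20, 40]})
df_sum = df.groupby("Salesman").agg({"Order Quantity": ["mean", "sum"]})
print(df_sum.columns.tolist())  # [('Order Quantity', 'mean'), ('Order Quantity', 'sum')]

# join the two levels of each column tuple into a single flat label
df_sum.columns = [" ".join(col) for col in df_sum.columns]
print(df_sum.columns.tolist())  # ['Order Quantity mean', 'Order Quantity sum']
```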

#### Conclusion

In this article, we have discussed a few options you can use to format column headers, such as the str and map methods of the pandas Index object, and if you need more than simple string operations, you can also pass in a lambda function. All these methods are not limited to column headers or row labels (Index objects); you can also use them to format your data series.

If you are interested in other topics about pandas, you may refer to here.