This Pandas cheat sheet covers the basics of Pandas that you will need to get started on wrangling your data with Python.
It will guide you through the Pandas data structures and how to read and write data, select and drop indices or columns, sort and rank, retrieve basic information about the data structures you're working with, apply functions, and align data.
import pandas as pd
import numpy as np
Importing Data
The Pandas library offers a set of reader functions for importing data from a wide range of file formats; a short sketch follows the list.
From a CSV file – pd.read_csv(filename)
From a delimited text file (like TSV) – pd.read_table(filename)
From an Excel file – pd.read_excel(filename)
Read from a SQL table/database – pd.read_sql(query, connection_object)
Read from a JSON formatted string, URL or file – pd.read_json(json_string)
Parses an html URL, string or file and extracts tables to a list of dataframes – pd.read_html(url)
Takes the contents of your clipboard and passes it to read_table() – pd.read_clipboard()
From a dict, keys for column names, values for data as lists – pd.DataFrame(dict)
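A minimal sketch of two of these readers in action; the file name 'sales.csv' and the dict contents below are made-up placeholders.

import pandas as pd

# Read a CSV file into a DataFrame (hypothetical file name)
data = pd.read_csv('sales.csv')

# Build a DataFrame from a dict: keys become column names, list values become column data
df = pd.DataFrame({'city': ['Pune', 'Delhi'], 'sales': [250, 300]})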
Exporting Data
A list of writer functions that are useful for exporting data to a file; a short sketch follows the list.
Write to a CSV file – data.to_csv(filename)
Write to an Excel file – data.to_excel(filename)
Write to a SQL table – data.to_sql(table_name, connection_object)
Write to a file in JSON format – data.to_json(filename)
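A quick sketch of two of the writers, assuming df is a DataFrame already in memory; the output file names are placeholders.

# Write the DataFrame to CSV (without the index) and to JSON
df.to_csv('output.csv', index=False)
df.to_json('output.json')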
Viewing/Inspecting Data
First n rows of the DataFrame – data.head(n)
Last n rows of the DataFrame – data.tail(n)
Number of rows and columns – data.shape
Index, Datatype and Memory information – data.info()
Summary statistics for numerical columns – data.describe()
View unique values and counts – s.value_counts(dropna=False)
Unique values and counts for all columns – data.apply(pd.Series.value_counts)
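A small example of the inspection helpers, using a made-up DataFrame with a missing value.

df = pd.DataFrame({'col1': [1, 2, 2, None], 'col2': [10.0, 20.0, 20.0, 40.0]})
print(df.shape)                                # (4, 2): rows and columns
print(df.head(2))                              # first two rows
print(df.describe())                           # summary statistics for numeric columns
print(df['col1'].value_counts(dropna=False))   # counts per value, including NaN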
Create Test Objects
15 rows and 5 columns of random floats – pd.DataFrame(np.random.rand(15,5))
Create a series from an iterable my_list – pd.Series(my_list)
Add a date index – data.index = pd.date_range('2005/4/30', periods=data.shape[0])
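Putting the test-object helpers together; the start date is arbitrary.

import numpy as np
import pandas as pd

# 15 rows and 5 columns of random floats in [0, 1)
data = pd.DataFrame(np.random.rand(15, 5))

# Attach a daily DatetimeIndex, one entry per row
data.index = pd.date_range('2005-04-30', periods=data.shape[0])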
Selection
Pandas lets you select data both by position and by label; a short example follows the list.
Returns the column with label col as a Series – data[col]
Returns columns as a new DataFrame – data[[col1, col2]]
Selection by position – s.iloc[0]
Selection by index – s.loc['index_one']
First row – data.iloc[0,:]
First element of first column – data.iloc[0,0]
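A short sketch of label-based versus position-based selection on a made-up DataFrame.

data = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}, index=['x', 'y', 'z'])

data['a']           # column 'a' as a Series
data[['a', 'b']]    # both columns as a new DataFrame
data.iloc[0, :]     # first row, selected by position
data.loc['y']       # row labelled 'y', selected by label
data.iloc[0, 0]     # first element of the first column -> 1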
Data Cleaning
Rename columns – data.columns = ['x','y','z']
Checks for null values, returns a Boolean array – data.isnull()
Checks for non-null values, the opposite of data.isnull() – data.notnull()
Drop all rows that contain null values – data.dropna()
Drop all columns that contain null values – data.dropna(axis=1)
Drop all rows that have fewer than n non-null values – data.dropna(thresh=n)
Replace all null values with a – data.fillna(a)
Replace all null values with the mean – data.fillna(data.mean())
Convert the datatype of the series to float – data.astype(float)
Replace all values equal to 1 with 'one' – data.replace(1,'one')
Replace all 1s with 'one' and all 3s with 'three' – data.replace([1,3],['one','three'])
Mass renaming of columns – data.rename(columns=lambda x: x + 1)
Selective renaming – data.rename(columns={'old_name': 'new_name'})
Change the index – data.set_index('column_one')
Mass renaming of index – data.rename(index=lambda x: x + 1)
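A few of the cleaning steps applied to a made-up DataFrame with missing values.

data = pd.DataFrame({'x': [1.0, None, 3.0], 'y': [4.0, 5.0, None]})

data.isnull()                           # Boolean DataFrame marking missing values
data.dropna()                           # drop rows that contain any null
data.fillna(data.mean())                # replace nulls with each column's mean
data.rename(columns={'x': 'x_new'})     # selective column rename
data.replace([1.0, 3.0], ['one', 'three'])  # replace specific values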
Sort, Filter and Group-by
Pandas makes it easy to sort, filter and group DataFrames; a short example follows the list.
Rows where the column col is greater than 0.5 – data[data[col] > 0.5]
Rows where 0.7 > col > 0.5 – data[(data[col] > 0.5) & (data[col] < 0.7)]
Sort values by col1 in ascending order – data.sort_values(col1)
Sort values by col2 in descending order – data.sort_values(col2, ascending=False)
Sort values by col1 in ascending order then col2 in descending order – data.sort_values([col1,col2], ascending=[True,False])
Returns a groupby object for values from one column – data.groupby(col)
Returns groupby object for values from multiple columns – data.groupby([col1,col2])
Returns the mean of the values in col2, grouped by the values in col1 – data.groupby(col1)[col2].mean()
Create a pivot table that groups by col1 and calculates the mean of col2 and col3 – data.pivot_table(index=col1, values=[col2,col3], aggfunc='mean')
Find the average across all columns for every unique col1 group – data.groupby(col1).agg(np.mean)
Apply the function np.mean() across each column – data.apply(np.mean)
Apply the function np.max() across each row – data.apply(np.max,axis=1)
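A compact example of filtering, sorting and grouping on a made-up DataFrame.

data = pd.DataFrame({'team': ['a', 'a', 'b'], 'score': [0.4, 0.8, 0.6]})

data[data['score'] > 0.5]                      # rows where score is greater than 0.5
data.sort_values('score', ascending=False)     # sort by score, descending
data.groupby('team')['score'].mean()           # mean score per team
data.pivot_table(index='team', values='score', aggfunc='mean')  # same result via a pivot table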
Join/Combine
Add the rows of data2 to the end of data1 (columns should be identical) – pd.concat([data1, data2]) (the older data1.append(data2) is deprecated)
Add the columns of data2 to the end of data1 (rows should be identical) – pd.concat([data1, data2], axis=1)
SQL-style join of the columns in data1 with the columns of data2 where the rows for col1 have identical values; how can be one of 'left', 'right', 'outer', 'inner' – data1.join(data2, on=col1, how='inner')
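A sketch of combining two made-up DataFrames; merge() is shown here as the column-based counterpart of join().

data1 = pd.DataFrame({'key': [1, 2], 'a': ['x', 'y']})
data2 = pd.DataFrame({'key': [2, 3], 'b': ['u', 'v']})

pd.concat([data1, data2])                    # stack rows; unmatched columns become NaN
pd.concat([data1, data2], axis=1)            # place the columns side by side
data1.merge(data2, on='key', how='inner')    # SQL-style inner join on 'key' (keeps key 2)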
Statistics
Summary statistics for numerical columns – data.describe()
Returns the mean of all columns – data.mean()
Returns the correlation between columns in a DataFrame – data.corr()
Returns the number of non-null values in each DataFrame column – data.count()
Returns the highest value in each column – data.max()
Returns the lowest value in each column – data.min()
Returns the median of each column – data.median()
Returns the standard deviation of each column – data.std()
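The statistics methods in one place, applied to a small made-up DataFrame.

data = pd.DataFrame({'price': [10.0, 12.5, 11.0], 'qty': [3, 5, 4]})

data.describe()   # count, mean, std, min, quartiles and max per numeric column
data.mean()       # mean of each column
data.corr()       # pairwise correlation between columns
data.count()      # non-null values per column
data.std()        # standard deviation of each column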