
#004 | Automate PDF data extraction: Build

Overview

I wrote a Python script that translates the PDF data extraction business logic into working code.

The script was tested on 71 pages of Custodian Statement PDFs covering a 10-month period (Jan to Oct 2024). Processing the PDFs took about 4 seconds - significantly quicker than doing it manually.


From what I see, the output looks correct and the code did not run into any errors.

Snapshots of the three CSV outputs are shown below. Note that sensitive data has been greyed out.

Snapshot 1: Stock Holdings


Snapshot 2: Fund Holdings


Snapshot 3: Cash Holdings


My workflow consisted of four broad steps: read the PDF documents, extract and filter the table and non-table data, create tabular data, and write the data to CSV files.

Below, I elaborate on how I translated the business logic of each step into Python code.

Step 1: Read PDF documents

I used pdfplumber's open() function.

# Open the PDF file
with pdfplumber.open(file_path) as pdf:

file_path is a variable that tells pdfplumber which file to open.
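
To make the structure concrete, the open-and-iterate pattern looks roughly like this (the file name below is just a placeholder):

import pdfplumber

file_path = "custodian_statement_jan2024.pdf"  # placeholder file name

with pdfplumber.open(file_path) as pdf:
    for page in pdf.pages:
        # Each Page object supports extract_text() and extract_tables(),
        # which are used in Steps 2.0 and 2.1
        text = page.extract_text()
        tables = page.extract_tables()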

Step 2.0: Extract & filter tables from each page

The extract_tables() function does the hard work of extracting all tables from each page.

Though I am not really familiar with the underlying logic, the function did a pretty good job. For example, the two snapshots below compare the extracted table with the original table in the PDF.

Snapshot A: Output from VS Code Terminal


Snapshot B: Table in PDF


I then needed to uniquely label each table, so that I could "pick and choose" data from specific tables later on.

The ideal option was to use each table's title. However, determining the title coordinates was beyond my capabilities.

As a workaround, I identified each table by concatenating the headers of the first three columns. For example, the Stock Holdings table in Snapshot B is labeled Stocks/ETFs\nNameExchangeQuantity.

⚠️ This approach has a serious drawback - the first three header names do not make every table sufficiently unique. Fortunately, this only affects tables that are irrelevant to the extraction.
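
The labelling logic, in simplified form, looks something like this (the variable names and the dictionary are illustrative, not necessarily what the script uses):

# Extract all tables detected on the current page
tables = page.extract_tables()

labelled_tables = {}
for table in tables:
    header_row = table[0]  # first row of the extracted table holds the headers
    # Concatenate the first three header cells to form the table's label,
    # e.g. "Stocks/ETFs\nNameExchangeQuantity"
    label = "".join(cell or "" for cell in header_row[:3])
    labelled_tables[label] = table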

Step 2.1: Extract, filter & transform non-table text

The specific values I needed - Account Number and Statement Date - were substrings on Page 1 of each PDF.

For example, "Account Number M1234567" contains account number "M1234567".


I used Python's re library and got ChatGPT to suggest suitable regular expressions ("regex"). Each regex locates the relevant string and captures the desired value in a group, which is retrieved with group(1).

Regex for Statement Date and Account Number strings

regex_date=r'Statement for \b([A-Za-z]{3}-\d{4})\b'
regex_acc_no=r'Account Number ([A-Za-z]\d{7})'

I next transformed the Statement Date into "yyyymmdd" format. This makes it easier to query and sort data.

from datetime import datetime
import calendar

if match_date:
    # Convert the captured string (e.g. "Oct-2024") to a date object
    date_obj = datetime.strptime(match_date.group(1), "%b-%Y")
    # Get the last day of the month
    last_day = calendar.monthrange(date_obj.year, date_obj.month)[1]
    # Replace the day with the last day of the month
    last_day_of_month = date_obj.replace(day=last_day)
    statement_date = last_day_of_month.strftime("%Y%m%d")

match_date is the match object returned when a string matching the regex is found.
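
The match objects come from searching the Page 1 text with the regexes, roughly along these lines (a simplified sketch; the variable names are illustrative):

import re

# Text extracted from Page 1 of the PDF
first_page_text = pdf.pages[0].extract_text()

match_date = re.search(regex_date, first_page_text)
match_acc_no = re.search(regex_acc_no, first_page_text)

if match_acc_no:
    account_number = match_acc_no.group(1)  # e.g. "M1234567"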

Step 3: Create tabular data

The hard yards - extracting the relevant datapoints - were pretty much done at this point.

Next, I used pandas' DataFrame() function to create tabular data from the output of Steps 2.0 and 2.1. I then dropped unnecessary columns and rows from the resulting dataframes.

The end result can then be easily written to a CSV or stored in a database.
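
In simplified form, this step looks something like the sketch below (the column and variable names are illustrative placeholders, not the actual names in the script):

import pandas as pd

# table_rows: one of the labelled tables from Step 2.0 (first row = headers)
df_stocks = pd.DataFrame(table_rows[1:], columns=table_rows[0])

# Attach the values extracted from the non-table text in Step 2.1
df_stocks["Account Number"] = account_number
df_stocks["Statement Date"] = statement_date

# Drop columns and rows that are not needed
df_stocks_selected = df_stocks.drop(columns=["Unused Column"], errors="ignore")
df_stocks_selected = df_stocks_selected.dropna(how="all")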

Step 4: Write data to CSV file

I used a write_to_csv() helper function to write each dataframe to a CSV file.

write_to_csv(df_cash_selected, file_cash_holdings)


df_cash_selected is the Cash Holdings dataframe while file_cash_holdings is the file name of the Cash Holdings CSV.
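
write_to_csv() is not a Python built-in - it is a small helper in the script. A minimal version, wrapping pandas' to_csv(), could look like this:

def write_to_csv(df, file_name):
    # Write the dataframe to a CSV file, omitting the pandas index column
    df.to_csv(file_name, index=False)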

➡️ I will write the data to a proper database once I have acquired some database know-how.

Next Steps

A working script is now in place to extract table and text data from the Custodian Statement PDF.

Before I proceed further, I will run some tests to see if the script is working as expected.

--Ends
