BeautifulSoup Cheat Sheet
by Shahid

Installation

pip install beautifulsoup4

Import Library

from bs4 import BeautifulSoup

Creating a BeautifulSoup Object

Parse an HTML string:
html = "<p>Example paragraph</p>"
soup = BeautifulSoup(html, 'html.parser')

Parse from a file:

with open("index.html") as file:
  soup = BeautifulSoup(file, 'html.parser')

BeautifulSoup Object Types

Tag

A Tag corresponds to an HTML or XML tag in the original document:

soup = BeautifulSoup('<p>Hello World</p>', 'html.parser')
p_tag = soup.p
p_tag.name # 'p'
p_tag.string # 'Hello World'

NavigableString

A NavigableString holds the text within a tag:

soup = BeautifulSoup('Hello World', 'html.parser')
text = soup.string
text # 'Hello World'
type(text) # bs4.element.NavigableString
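A NavigableString supports most str operations, and converting it with str() gives a plain string with no reference back to the parse tree. A minimal sketch:

```python
from bs4 import BeautifulSoup, NavigableString

soup = BeautifulSoup('<p>Hello World</p>', 'html.parser')
text = soup.p.string    # NavigableString, still linked to the tree

# str() makes a plain copy that no longer references the parse tree
plain = str(text)
print(isinstance(text, NavigableString))  # True
print(plain)                              # Hello World
```

Making a plain copy is useful when you store scraped text, since a NavigableString keeps the whole tree alive in memory.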

BeautifulSoup

The BeautifulSoup object represents the parsed document as a whole. It is the root of the tree:

soup = BeautifulSoup('<html>...</html>', 'html.parser')
soup.name # '[document]'
soup.head # <head> Tag element

Comment

Comments in HTML are also available as Comment objects:

<!-- This is a comment -->

import re
comment = soup.find(string=re.compile('This is'))
type(comment) # bs4.element.Comment

Searching the Parse Tree

By Name

HTML:

<div>
  <p>Paragraph 1</p>
  <p>Paragraph 2</p>
</div>
paragraphs = soup.find_all('p')
# [<p>Paragraph 1</p>, <p>Paragraph 2</p>]

By Attributes

HTML:

<div id="content">
  <p>Paragraph 1</p>
</div>

Python:

div = soup.find(id="content")
# <div id="content">...</div>


By Text
HTML:

<p>This is some text</p>

Python:

text = soup.find(string="This is some text")
# 'This is some text' (the matching NavigableString, not the tag)
p = soup.find('p', string="This is some text")
# <p>This is some text</p>
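Text matching also accepts a compiled regular expression, which helps when you only know part of the text. A small sketch:

```python
import re
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>Item 1</p><p>Item 2</p><p>Other</p>', 'html.parser')

# string= with a regex matches tags whose text fits the pattern
items = soup.find_all('p', string=re.compile(r'^Item'))
print([p.get_text() for p in items])  # ['Item 1', 'Item 2']
```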

Searching with CSS Selectors
CSS selectors provide a very powerful way to search for elements within a parsed document.

Some examples of CSS selector syntax:

By Tag Name
Select all <p> tags:

soup.select("p")

By ID
Select element with ID "main":

soup.select("#main")

By Class Name
Select elements with class "article":

soup.select(".article")

By Attribute
Select tags with a "data-category" attribute:

soup.select("[data-category]")

Descendant Combinator
Select paragraphs inside divs:

soup.select("div p")

Child Combinator
Select direct children paragraphs:

soup.select("div > p")

Adjacent Sibling
Select h2 after h1:

soup.select("h1 + h2")

General Sibling
Select h2 after any h1:

soup.select("h1 ~ h2")

By Text
Select elements containing text (using Soup Sieve's non-standard :-soup-contains() pseudo-class):

soup.select("p:-soup-contains('Some text')")

By Attribute Value
Select input with type submit:

soup.select("input[type='submit']")

Pseudo-classes
Select first paragraph:

soup.select("p:first-of-type")

Chaining
Select first article paragraph:

soup.select("article > p:nth-of-type(1)")
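Putting a few of these selectors together on a small document (the markup below is made up for illustration):

```python
from bs4 import BeautifulSoup

html = """
<article id="main">
  <h1>Title</h1>
  <p class="intro">First</p>
  <p>Second</p>
</article>
"""
soup = BeautifulSoup(html, 'html.parser')

print(soup.select_one('#main h1').get_text())              # Title
print([p.get_text() for p in soup.select('article > p')])  # ['First', 'Second']
print(soup.select_one('p.intro + p').get_text())           # Second
```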

Accessing Data
HTML:

<p class="content">Some text</p>

Python:

p = soup.find('p')
p.name # "p"
p.attrs # {"class": ["content"]}
p.string # "Some text"
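Attribute access works like a dictionary; .get() is the safe form, and multi-valued attributes such as class come back as lists. A quick sketch:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="/home" class="nav link">Home</a>', 'html.parser')
a = soup.a

print(a['href'])    # /home   (raises KeyError if the attribute is missing)
print(a.get('id'))  # None    (.get() returns None instead of raising)
print(a['class'])   # ['nav', 'link']  multi-valued attributes become lists
```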

The Power of find_all()
The find_all() method is one of the most useful and versatile searching methods in BeautifulSoup.

Returns All Matches
find_all() will find and return a list of all matching elements:

all_paras = soup.find_all('p')

This gives you all paragraphs on a page.

Flexible Queries
You can pass a wide range of queries to find_all():

  • Name - find_all('p')
  • Attributes - find_all('a', class_='external')
  • Text - find_all(string=re.compile('summary'))
  • Limit - find_all('p', limit=2)
  • And more!
Useful Features
Some useful things you can do with find_all():

  • Get a count - len(soup.find_all('p'))
  • Iterate through results - for p in soup.find_all('p'): ...
  • Convert to text - [p.get_text() for p in soup.find_all('p')]
  • Extract attributes - [a['href'] for a in soup.find_all('a')]
Why It's Useful
In summary, find_all() is useful because:

  • It returns all matching elements
  • It supports diverse and powerful queries
  • It makes extracting and processing results easy

Whenever you need a collection of elements from a parsed document, find_all() will likely be your go-to tool.
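The queries above can be sketched on a small made-up document:

```python
import re
from bs4 import BeautifulSoup

html = ('<a class="external" href="https://a.test">A</a>'
        '<a href="/b">B</a><p>summary here</p>')
soup = BeautifulSoup(html, 'html.parser')

print(len(soup.find_all('a')))  # 2, count by tag name
print([a['href'] for a in soup.find_all('a', class_='external')])
# ['https://a.test'], filter by attribute
print(soup.find_all(string=re.compile('summary')))
# ['summary here'], match by text
```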

Navigating Trees
Use properties like .parent, .children, .next_sibling and .previous_sibling to traverse up, down and sideways through related elements.
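A minimal sketch of the common navigation properties:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<div><p>One</p><p>Two</p></div>', 'html.parser')
first_p = soup.p

print(first_p.parent.name)                   # div   go up
print(first_p.next_sibling.get_text())       # Two   go sideways
print([c.name for c in soup.div.children])   # ['p', 'p']  go down
```

Note that in real markup, whitespace between tags also counts as a sibling node, so .next_sibling may be a NavigableString rather than a tag.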

Modifying the Parse Tree
BeautifulSoup provides several methods for editing and modifying the parsed document tree.

HTML:

<p>Original text</p>

Python:

p = soup.find('p')
p.string = "New text"

Edit Tag Names
Change an existing tag name:

tag = soup.find('span')
tag.name = 'div'

Edit Attributes
Add, modify or delete attributes of a tag:

tag['class'] = 'header' # set attribute
tag['id'] = 'main'
del tag['class'] # delete attribute

Edit Text
Change text of a tag:

tag.string = "New text"

Append text to a tag:

tag.append("Additional text")

Insert Tags
Insert a new tag:

new_tag = soup.new_tag("h1")
tag.insert_before(new_tag)

Delete Tags
Remove a tag entirely:

tag.extract()

Wrap/Unwrap Tags
Wrap another tag around:

tag.wrap(soup.new_tag('div'))

Unwrap its contents:

tag.unwrap()

Modifying the parse tree is very useful for cleaning up scraped data or extracting the parts you need.
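For example, a typical clean-up pass might combine several of the edits above (the markup is made up for illustration):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<div><span class="ad">Buy!</span><p>Keep me</p></div>',
                     'html.parser')

# decompose() is like extract(), but also destroys the removed tag
for ad in soup.find_all('span', class_='ad'):
    ad.decompose()

soup.div.name = 'section'  # rename the container
print(soup)  # <section><p>Keep me</p></section>
```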

Outputting HTML
Input HTML:

<p>Hello World</p>

Python:

print(soup.prettify())
# <p>
#  Hello World
# </p>

Integrating with Requests
Fetch a page:

import requests
res = requests.get("https://example.com")
soup = BeautifulSoup(res.text, 'html.parser')

Parsing Only Parts of a Document
When dealing with large documents, you may want to parse only a fragment rather than the whole thing. BeautifulSoup allows for this using SoupStrainers.
There are a few ways to parse only parts of a document:
By Tag Name
Parse only tags with a given name:

from bs4 import SoupStrainer

only_tables = SoupStrainer("table")
soup = BeautifulSoup(doc, 'html.parser', parse_only=only_tables)

This will parse only the <table> tags from the document. The same pattern works for any tag name:

only_divs = SoupStrainer("div")
soup = BeautifulSoup(doc, 'html.parser', parse_only=only_divs)

By Function
Pass a function to test if a tag should be parsed:

def is_short_string(string):
  return len(string) < 20

only_short_strings = SoupStrainer(string=is_short_string)
soup = BeautifulSoup(doc, 'html.parser', parse_only=only_short_strings)

This parses tags based on their text content.

By Attributes
Parse tags that contain specific attributes:

has_data_attr = SoupStrainer(attrs={"data-category": True})
soup = BeautifulSoup(doc, 'html.parser', parse_only=has_data_attr)

Multiple Conditions
You can combine a tag name with attributes in one strainer:

strainer = SoupStrainer("div", id="main")
soup = BeautifulSoup(doc, 'html.parser', parse_only=strainer)

This will parse only <div> tags whose id is "main".
Parsing only parts you need can help reduce memory usage and improve performance when scraping large documents.
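A quick sketch showing that elements outside the strainer never enter the tree:

```python
from bs4 import BeautifulSoup, SoupStrainer

doc = '<html><body><div id="main">Keep</div><p>Dropped</p></body></html>'
only_main = SoupStrainer('div', id='main')
soup = BeautifulSoup(doc, 'html.parser', parse_only=only_main)

print(soup)            # <div id="main">Keep</div>
print(soup.find('p'))  # None: the <p> was never parsed at all
```

Note that parse_only works with the html.parser and lxml builders, but not with html5lib, which always builds the full tree.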

Dealing with Encoding
When parsing documents, you may encounter encoding issues. Here are some ways to handle encoding:

Specify at Parse Time
Pass the from_encoding parameter when creating the BeautifulSoup object:

soup = BeautifulSoup(doc, 'html.parser', from_encoding='utf-8')

This tells the parser how to decode doc when it is passed in as a byte string.

Encode Tag Contents
You can encode the contents of a tag:

tag.string.encode("utf-8")

Use this when outputting tag strings.

Encode Entire Document
To encode the entire BeautifulSoup document:

soup.encode("utf-8")

This returns a byte string with the encoded document.
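A minimal sketch of what encode() returns:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>café</p>', 'html.parser')
encoded = soup.encode('utf-8')

print(type(encoded))  # <class 'bytes'>
print(encoded)        # b'<p>caf\xc3\xa9</p>'
```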

Pretty Print with Encoding
Pass an encoding to prettify() to get an encoded byte string:

soup.prettify(encoding="utf-8")

Unicode Dammit
BeautifulSoup's UnicodeDammit class can detect and convert incoming documents to Unicode:

from bs4 import UnicodeDammit
dammit = UnicodeDammit(doc)
markup = dammit.unicode_markup # the document decoded to Unicode
soup = BeautifulSoup(markup, 'html.parser')

This converts even poorly encoded documents to Unicode.

Properly handling encoding ensures your scraped data is decoded and output correctly when using BeautifulSoup.

1. find()

  • Purpose: Find the first occurrence of a tag.
  • Usage: soup.find('tag_name', attrs={attributes}, string=optional_text)
  • Example:

    first_div = soup.find('div')
    p_with_class = soup.find('p', class_='example')
    a_tag = soup.find('a', href='/home')
    

2. find_all()

  • Purpose: Find all occurrences of a tag.
  • Usage: soup.find_all('tag_name', {attributes}, limit=number)
  • Example:

    all_p_tags = soup.find_all('p')
    all_links = soup.find_all('a', class_='link')
    first_five_divs = soup.find_all('div', limit=5)
    

3. select()

  • Purpose: Find all tags matching a CSS selector.
  • Usage: soup.select('CSS_selector')
  • Example:

    divs_with_class = soup.select('div.example')
    links_in_divs = soup.select('div a')
    element_with_id = soup.select('#specific-id')
    

4. find_parents() / find_parent()

  • Purpose: Find parent(s) of a tag.
  • Usage: tag.find_parent('tag_name') or tag.find_parents('tag_name')
  • Example:

    parent_div = soup.find('span').find_parent('div')
    all_parents = soup.find('span').find_parents()
    

5. find_next_sibling() / find_previous_sibling()

  • Purpose: Find the next or previous sibling of a tag.
  • Usage: tag.find_next_sibling('tag_name') or tag.find_previous_sibling('tag_name')
  • Example:

    next_sibling = soup.find('div').find_next_sibling()
    prev_sibling = soup.find('div').find_previous_sibling()
    

6. find_all_next() / find_all_previous()

  • Purpose: Find all tags after or before a specific tag.
  • Usage: tag.find_all_next('tag_name') or tag.find_all_previous('tag_name')
  • Example:

    next_p_tags = soup.find('h1').find_all_next('p')
    previous_div_tags = soup.find('h2').find_all_previous('div')
    

7. select_one()

  • Purpose: Find the first element matching a CSS selector.
  • Usage: soup.select_one('CSS_selector')
  • Example:

    first_div_container = soup.select_one('div.container')
    first_link_in_main = soup.select_one('#main a')
    

8. find_next() / find_previous()

  • Purpose: Find the next or previous element in the document.
  • Usage: tag.find_next('tag_name') or tag.find_previous('tag_name')
  • Example:

    next_p_tag = soup.find('div').find_next('p')
    previous_div = soup.find('p').find_previous('div')
    

9. find_all(string=True)

  • Purpose: Find all occurrences of a specific string or text.
  • Usage: soup.find_all(string="text_to_find")
  • Example:

    python_mentions = soup.find_all(string="Python")
    programming_mentions = soup.find_all(string=lambda text: "Programming" in text)
    

10. find_all(True) (Find all tags)

  • Purpose: Find all tags in the document.
  • Usage: soup.find_all(True)
  • Example:

    all_tags = soup.find_all(True)
    

Example Use Cases

  • Find all links on a page:

    links = soup.find_all('a', href=True)
    for link in links:
        print(link['href'])
    
  • Find all headings (h1 to h6):

    headings = soup.find_all(['h1', 'h2', 'h3', 'h4', 'h5', 'h6'])
    for heading in headings:
        print(heading.get_text())
    
  • Extract text from a specific class using CSS selector:

    text_in_class = soup.select_one('.specific-class').get_text()
    
  • Find all images on a page:

    images = soup.find_all('img', src=True)
    for image in images:
        print(image['src'])
    

Notes

  • find() and find_all() are the go-to methods for finding elements based on tag names and attributes.
  • select() and select_one() are very powerful if you're comfortable with CSS selectors.
  • Navigational methods like find_next(), find_previous(), and find_parents() help when you need to traverse through sibling and parent tags.
  • find_all(string=True) is useful when searching for specific text rather than tags.

Additional Methods:

  • find_all(True) – Finds all tags in the document, useful when you want to iterate over everything.
  • get_text() – Extracts the text from a tag, stripping away HTML tags.

# Example of extracting text:
text = soup.find('p').get_text()
