
Mai Chi Bao

Posted on • Edited on • Originally published at notion.so


📈 TIL: 80/20 Principles

Today, I think I've stumbled upon something interesting about the 80/20 rule.

20% of effort determines 80% of productivity

Remember the 10,000-hour rule to become an expert? Turns out, you only need 20% of that time to become GOOD at most things. (About 2,000 hours, roughly 2 hours of consistent work per day for almost 3 years.)
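
If you like seeing the numbers, here's a quick back-of-envelope sketch (assuming the popular 10,000-hour estimate; the daily-hours figure is just my own illustration):

# Back-of-envelope check of the 20% figure, assuming the 10,000-hour estimate
expert_hours = 10_000
good_hours = 0.2 * expert_hours            # 20% -> 2,000 hours
hours_per_day = 2
days_needed = good_hours / hours_per_day   # 1,000 days
print(f"{good_hours:.0f} hours at {hours_per_day} h/day is about {days_needed / 365:.1f} years")
# -> 2000 hours at 2 h/day is about 2.7 years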

Let's take learning English as an example.


Grammar

English has a total of 12 tenses. But you only need to master 4 basic ones:

  • Simple Present
  • Simple Past
  • Simple Future
  • Present Perfect

Most English conversations, articles, and papers I've come across use just these tenses, and these 4 are enough to pass any speaking test. Even an everyday speaker can talk fluently without touching Present Perfect or Simple Past.

Vocabulary

Simplify your vocabulary learning. Some people categorize words into 3 types:

  • words for speaking (simple: do, eat, drink)
  • words for writing (formal: consequently, cutting-edge technology)
  • words just for exams (too difficult or specialized); no need to learn these if you won't use them!

A tip for coders: if you're unsure which 20% of vocabulary to learn, crawl some real data (Wikipedia, BBC articles, etc.) and learn words by frequency.

import re
from collections import Counter

import requests
from bs4 import BeautifulSoup

# Step 1: Fetch data from Wikipedia
url = "https://en.wikipedia.org/wiki/Main_Page"
response = requests.get(url, timeout=10)
response.raise_for_status()  # fail fast on a bad response instead of carrying None around

# Step 2: Extract text from the <p> tags in the HTML
soup = BeautifulSoup(response.text, "html.parser")
text_data = " ".join(p.get_text() for p in soup.find_all("p"))

# Step 3: Tokenize and calculate word frequencies
words = re.findall(r"\b\w+\b", text_data.lower())
word_frequencies = Counter(words)

# Step 4: Select the top 20% most frequent words
top_words_count = int(len(word_frequencies) * 0.2)
top_words = word_frequencies.most_common(top_words_count)

# Step 5: Display or save the list of top words
for word, frequency in top_words:
    print(f"{word}: {frequency}")
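
One caveat: on any real page, the most frequent words are function words like "the" and "of" that you already know. A small follow-up sketch (continuing from the snippet above; the tiny stop-word set is just my own sample, not from the original tip; a fuller list such as NLTK's English stop words would work better) filters those out before ranking:

# Hypothetical follow-up (continues from the snippet above): drop very common
# function words so the frequency list contains vocabulary worth studying.
from collections import Counter  # already imported above

# A tiny sample stop-word list; swap in a fuller one for real use.
stop_words = {
    "the", "of", "and", "a", "an", "to", "in", "is", "was", "it",
    "for", "on", "as", "with", "by", "that", "this", "are", "be",
}

filtered = Counter({w: c for w, c in word_frequencies.items() if w not in stop_words})

# Show the 20 most frequent remaining words
for word, frequency in filtered.most_common(20):
    print(f"{word}: {frequency}")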

Listening

Q: Why do I listen to English every night and still not understand anything?

A: Because 80% of the time, I'm just passively listening. The best way to learn is active listening:

Listen -> listen again -> read the script -> listen again

Listening skill also depends heavily on how you practice pronunciation.

Conclusion

Of course, this isn't a shortcut. Those 2,000 hours demand 100% focus. Don't let your phone or your surroundings distract you. Wishing you all success!


Top comments (1)

Mai Chi Bao •

2000 hours with full focus? That's the reality of real improvement. No hacks, just discipline. Challenge accepted!

