Libraries & hacking usages


Installation example

pip3 install requests
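
A quick sanity check, assuming the install targeted the same Python 3 environment, is to import the library and print its version from the interpreter:

import requests
print(requests.__version__)   # an ImportError here means pip installed into a different environment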

Extract links

#!/usr/bin/env python3

from bs4 import BeautifulSoup
import requests

# download the page and keep the response
html = requests.get('http://10.10.225.228:8000/')

# parse the HTML so BeautifulSoup can walk it ("lxml" is the parser backend)
soup = BeautifulSoup(html.text, "lxml")

# grab every <a> tag and print the ones that carry an href attribute
links = soup.find_all('a')
for link in links:
    if "href" in link.attrs:
        print(link["href"])
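
The hrefs printed above are often relative (/login, ./files, ...). A minimal variation of the same script, assuming the same target, resolves them into full URLs with urllib.parse.urljoin so they can be fed straight to another tool:

#!/usr/bin/env python3

from urllib.parse import urljoin
from bs4 import BeautifulSoup
import requests

base_url = 'http://10.10.225.228:8000/'
soup = BeautifulSoup(requests.get(base_url).text, "lxml")

for link in soup.find_all('a'):
    if "href" in link.attrs:
        # urljoin turns a relative href (e.g. /login) into an absolute URL
        print(urljoin(base_url, link["href"]))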

HTTP brute force

#!/usr/bin/env python3

import requests

# the step of 2 only tries the odd keys (1, 3, 5, 7, ...); drop it to try every value
for api_key in range(1, 100, 2):
    print(f"api_key {api_key}")
    html = requests.get(f'http://10.10.225.228:8000/api/{api_key}')
    print(html.text)
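
Printing every response gets noisy on larger ranges. A small sketch, assuming the same hypothetical /api/ endpoint and that invalid keys return 404, filters on the status code so only candidate keys are reported:

#!/usr/bin/env python3

import requests

for api_key in range(1, 100):
    r = requests.get(f'http://10.10.225.228:8000/api/{api_key}')
    if r.status_code != 404:
        # anything that is not a 404 is worth a closer look
        print(f"api_key {api_key} -> {r.status_code} ({len(r.text)} bytes)")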

Base64

import base64

msg = "Hello world"
# encode: str -> bytes -> base64 str
b64_str = base64.b64encode(msg.encode('ascii')).decode('ascii')   # SGVsbG8gd29ybGQ=
# decode: base64 str -> bytes -> original str
realmsg = base64.b64decode(b64_str).decode('ascii')               # Hello world
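
In practice you often only have the encoded blob (a captured token, a CTF string, ...), so the decode half is enough on its own. A minimal sketch, with a hypothetical captured value:

import base64

blob = "SGVsbG8gd29ybGQ="                       # hypothetical captured string
print(base64.b64decode(blob).decode('ascii'))   # -> Hello world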