Extracting & Collecting Paths, Parameters & Endpoints
Collects juicy URLs and scrapes URL parameters from the passive scan. Default sources: Web Archive, CommonCrawl, and URLScan.
    Regex matching with a DFA engine (awk, sed)
    Supports collecting URLs with multiple parameters for fuzzing
    Removes duplicate parameters (see the sketch after this list)
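As a rough illustration of the parameter deduplication step, the one-liner below (a minimal sketch, not Sudomy's actual implementation; urls.txt stands in for any list of collected URLs) keeps one URL per unique combination of path and parameter names:

    # Sketch: deduplicate URLs by host/path plus parameter names,
    # ignoring parameter values.
    sort -u urls.txt | awk -F'?' '{
        n = split($2, kv, "&")
        key = $1                      # start the key with scheme://host/path
        for (i = 1; i <= n; i++) {
            sub(/=.*/, "", kv[i])     # drop the value, keep the parameter name
            key = key "&" kv[i]
        }
        if (!seen[key]++) print       # emit only the first URL per key
    }' > unique_param_urls.txt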
The output consists of three files:
Passive_Collecting_URLParamter_Full.txt : the originally collected URL parameters, without parsing (original URLs with their parameter values)
Passive_Collecting_URLParamter_Uniq.txt : the collected URL parameters reduced to unique URLs, ready for fuzzing
Passive_Collecting_JuicyURL.txt : a large list of URLs associated with the domain, from which relative paths, endpoints, full URLs, and other interesting strings can be extracted (see the sketch below)
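For instance, the following one-liner (a sketch, assuming standard http(s) URLs, one per line) strips the scheme and host from each entry, leaving just the relative paths and endpoints:

    # Sketch: reduce each URL to its relative path plus query string.
    sed -E 's#^https?://[^/]+##' Passive_Collecting_JuicyURL.txt \
        | sort -u > endpoints.txt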
You can also filter the list by file type, for example JavaScript files, to hunt for sensitive data in JS files:

    cat Passive_Collecting_JuicyURL.txt | grep --colour=always -e "\.js"
Or filter for PDF documents:

    cat Passive_Collecting_JuicyURL.txt | grep --colour=always -e "\.pdf"
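Taking the JavaScript filter a step further, each matched file can be fetched and its body grepped for secret-looking keywords (a sketch, not a Sudomy feature; the keyword pattern is illustrative only):

    # Sketch: download each collected .js URL and flag responses that
    # contain common secret-looking keywords.
    grep -E '\.js($|\?)' Passive_Collecting_JuicyURL.txt | while read -r url; do
        if curl -sf "$url" | grep -qiE 'api[_-]?key|secret|token'; then
            echo "possible secret in: $url"
        fi
    done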
NOTE: Since the parameters are fetched from Web Archive, CommonCrawl, and URLScan data, the chance of false positives is high.
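One way to weed those out (a sketch, not built into Sudomy; it probes sequentially, so it is slow on large lists) is to keep only the URLs that still respond with a non-error HTTP status:

    # Sketch: keep only URLs answering with an HTTP status in 200-399.
    while read -r url; do
        code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
        [ "$code" -ge 200 ] && [ "$code" -lt 400 ] && echo "$url"
    done < Passive_Collecting_URLParamter_Uniq.txt > live_urls.txt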