# Second Order

Scans web applications for second-order subdomain takeover by crawling the app and collecting URLs (and other data) that match certain rules or respond in a certain way.

## Installation
### From binary
Download a prebuilt binary from the [releases page](https://github.com/mhmdiaa/second-order/releases/latest) and unzip it.

### From source
Go version 1.17 is recommended.
```
go install -v github.com/mhmdiaa/second-order@latest
```

### Docker
```
docker pull mhmdiaa/second-order
```
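
To run the containerized version against a target and keep the results on the host, directories can be mounted for the configuration and output. This is an illustrative sketch only: it assumes the image's entrypoint is the `second-order` binary, and the mount paths are arbitrary.
```
# Mount a local config directory and an output directory, then scan a target
docker run --rm \
  -v "$(pwd)/config:/config" \
  -v "$(pwd)/output:/output" \
  mhmdiaa/second-order -target https://example.com -config /config/takeover.json -output /output
```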

## Command line options
```
  -target string
        Target URL
  -config string
        Configuration file (default "config.json")
  -depth int
        Depth to crawl (default 1)
  -insecure
        Accept untrusted SSL/TLS certificates
  -output string
        Directory to save results in (default "output")
  -threads int
        Number of threads (default 10)
```

## Example
```
second-order -target https://example.com -config config.json -output example.com -threads 10
```

## Configuration File
**Example configuration files are in [config](/config/)**
- `Headers`: A map of headers that will be sent with every request.
- `LogQueries`: A map of tag-attribute queries that will be searched for in crawled pages. For example, `"a": "href"` means log every `href` attribute of every `a` tag.
- `LogURLRegex`: A list of regular expressions that will be matched against the URLs that are extracted using the queries in `LogQueries`; if left empty, all URLs will be logged.
- `ExcludedURLRegex`: A list of regular expressions whose matching URLs will not be accessed by the tool.
- `ExcludedStatusCodes`: A list of status codes; if any page responds with one of these, it will be excluded from the results of `LogNon200Queries`; if left empty, all non-200 pages' URLs will be logged.
- `LogNon200Queries`: A map of tag-attribute queries that will be searched for in crawled pages, and logged only if they contain a valid URL that doesn't return a `200` status code.
- `LogInline`: A list of tags whose inline content (between the opening and closing tags) will be logged, like `title` and `script`.
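
Putting these options together, a minimal configuration might look like the sketch below. It is illustrative only: the key names follow the list above, but the values are placeholders, so prefer the maintained files in [config](/config/).
```
{
    "Headers": {
        "User-Agent": "Second Order"
    },
    "LogQueries": {
        "a": "href",
        "script": "src"
    },
    "LogURLRegex": [],
    "LogNon200Queries": {
        "script": "src"
    },
    "ExcludedURLRegex": ["logout"],
    "ExcludedStatusCodes": [404],
    "LogInline": ["title", "script"]
}
```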

## Output
All results are saved in JSON files that record what data was found and where it was found.

- The results of `LogQueries` are saved in `attributes.json`
```
{
    "https://example.com/": {
        "input[name]": [
            "user",
            "id",
            "debug"
        ]
    }
}
```
- The results of `LogNon200Queries` are saved in `non-200-url-attributes.json`
```
{
    "https://example.com/": {
        "script[src]": [
            "https://cdn.old_abandoned_domain.com/app.js"
        ]
    }
}
```
- The results of `LogInline` are saved in `inline.json`
```
{
    "https://example.com/": {
        "title": [
            "Example - Home"
        ]
    },
    "https://example.com/login": {
        "title": [
            "Example - login"
        ]
    }
}
```
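
These URL-keyed JSON files are straightforward to post-process with standard tools. As a hypothetical example, a `jq` one-liner (assuming the default `output` directory) can flatten every logged `input` name in `attributes.json` into a deduplicated wordlist:
```
# Iterate every page entry, pull the "input[name]" arrays, and deduplicate
jq -r '.[]["input[name]"][]?' output/attributes.json | sort -u
```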

## Usage Ideas
This is a list of tips and ideas (not necessarily related to second-order subdomain takeover) on what to use Second Order for.

- Check for second-order subdomain takeover: [takeover.json](config/takeover.json); a stripped-down sketch follows this list. (Duh!)
- Collect inline and imported JS code: [javascript.json](config/javascript.json).
- Find where a target hosts static files: [cdn.json](config/cdn.json). (S3 buckets, anyone?)
- Collect `<input>` names to build a tailored parameter bruteforcing wordlist: [parameters.json](config/parameters.json).
- Feel free to contribute more ideas!
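
As an illustration of the takeover idea, a stripped-down configuration could focus on `LogNon200Queries` entries for tags that reference external resources. This is a sketch with placeholder values; the maintained file is [config/takeover.json](config/takeover.json).
```
{
    "LogNon200Queries": {
        "script": "src",
        "link": "href",
        "iframe": "src",
        "img": "src"
    }
}
```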

## References
https://shubs.io/high-frequency-security-bug-hunting-120-days-120-bugs/#secondorder

https://edoverflow.com/2017/broken-link-hijacking/
