The Ultimate Guide for BloodHound Community Edition (BHCE)
I’ve run into many interested hackers who want to learn how to use BloodHound but struggle to get started, and many of them end up lost and frustrated. Beyond this, in a pentest engagement we do the customer a complete disservice by not enumerating as many risks as possible.
This article covers setting up BloodHound, collecting data, analyzing that data, and providing value with it. We’ll use the GOADv2 forest as the sample environment for collection and analysis. I’ve already collected some sample data and placed it in a repository to ingest for analysis and practice.
Huge thanks to the SpecterOps team for creating BloodHound and giving it to the community!
The Why
Before we start, we need to understand the WHY. Active Directory is notoriously complex, and anything complex leaves plenty of room for error. AD is interesting in that most environments don’t have traditional CVE-based exploits available; instead, most escalation attacks rely on misconfigurations. Our job is to enumerate as many of these escalation scenarios as possible, define their risk, and present recommendations to reduce that risk to the lowest possible level.
If you want to get paid to do this work, you must be able to effectively present these risks in writing!
The Background
BloodHound was developed by SpecterOps as a way to visualize relationships between objects in AD. Because of the scale and complexity of most AD networks, manually auditing these relationships is a nightmare. Instead, the original BloodHound applied graph theory, backed by a Neo4j database, to visualize how an attacker can escalate between objects.
There are currently three versions of BloodHound you need to know about:
BloodHound Legacy: The original BloodHound, no longer supported. Built as an Electron application and somewhat complicated to set up.
BloodHound Community Edition: Released in August 2023, actively supported. Leverages docker compose to manage a set of containers, exceptionally easy to deploy. Smooth web application interface.
BloodHound Enterprise: Paid version of BloodHound for attack path management. The major difference is that this version is used for risk management and validation.
The Basics
There are a few different parts we need to be aware of. First, the BloodHound application itself is nothing more than a front-end to help visualize, present, and analyze data. We need to gather data about the environment with a collector and have it ingested into the application for analysis.
The Collector
We need to gather the data from the AD environment in order to feed it into BloodHound for analysis. There are two major collectors you need to know about:
SharpHound: The officially supported collector for BloodHound, written in C#. It must be run from a Windows machine that can reach the domain in order to collect the information.
BloodHound.py: A community-driven Python script used to collect AD data. It can be run from a Linux machine, such as a Raspberry Pi, which makes it an excellent fit for a pentesting dropbox.
It’s important to realize that, at the time of this writing, bloodhound.py does not natively support BloodHound-CE. You must use the bloodhound-ce branch of the bloodhound.py collector if you choose this route. We cannot mix legacy collectors with Community Edition collectors; doing so will cause the ingest to fail (and it’s frustrating!).
The Frontend
BloodHound itself is the web application used to interpret the data from the collector. This is the GUI we interact with to find risks and escalation paths. The frontend is only as good as the data it ingests from the collector.
The Ingest
Within the GUI frontend is the File Ingest. This parses the data gathered by the collector and loads it into the Neo4j database. Once parsed, the data is accessible to the GUI for analysis.
The API
One of the most exciting parts about BloodHound-CE is the HTTP API available to query data. This can help us automate and extract data quickly to prove value in a pentesting engagement.
Legacy BloodHound
We touched on the original, legacy version of BloodHound earlier, but this is critical to understand: the older collectors DO NOT WORK with the Community Edition of BloodHound. Anyone can still use the legacy version and it still works great; however, it will not be up to date on the latest threats. Since we’re proving value to customers, we need tools that can evaluate the risks most applicable to them.
Getting Started
Great, now that we know all about each part of the puzzle, we can get started by installing BloodHound-CE and collecting data for our analysis.
Start the Containers
Community Edition leverages docker compose to manage a set of containers. This makes standing up and managing BloodHound’s infrastructure remarkably simple, as everything is containerized within a docker network.
To start, all we need to do is download the docker-compose.yml file and instruct Docker to build the containers. There’s a simple one-liner provided by SpecterOps to begin:
# Recommended start for BHCE
curl -L https://ghst.ly/getbhce | docker compose -f - up
Note: If you’re using Kali, there’s a chance the installed docker version does not have the compose command. If this is the case, you will receive an error similar to the one below:
To fix this, we can install the docker-compose package from Kali’s repositories, then modify the command to run the containers.
# Install docker-compose for Kali
sudo apt install docker-compose -y
docker-compose -v
curl -L https://ghst.ly/getbhce | docker-compose -f - up
To hold onto the docker-compose.yml file for later use, we can download it to the current directory with the commands below. When the file is saved under this name, we don’t need to specify it when starting docker-compose.
# Download the docker-compose.yml file to disk, then launch
wget https://ghst.ly/getbhce -O docker-compose.yml
docker-compose up
Great! The containers have been started with the docker-compose command, and green is always a good color. Pay attention, though: the logs will contain an initial password we need in order to log into the system.
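If the password has already scrolled past, we can pull it back out of the logs. This is a sketch that assumes the default service name and log wording; check your compose file and version:

# Recover the randomly generated initial password from the logs
# (service name "bloodhound" and log wording are assumptions)
docker-compose logs bloodhound | grep -i "initial password"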
Copy this password and browse to http://localhost:8080 to log into the GUI. The user will be admin, with the initial password from the logs.
When you log in, you will be prompted to update the password. Choose one you will remember.
Now we’re in! But wait, there’s no data! So how do we actually get started with the analysis? Well, we need to download the collectors first!
Data Collection
We have a few ways to do this: the C# SharpHound.exe, PowerShell, or Python collectors can all gather this information.
To get a copy of the supported collectors, we can download them straight from the BHCE GUI. Click on the cog and then “Download Collectors”. This will open a page where we can download the collector.
We can also grab the latest version of the collector from the SharpHound Releases page:
Once unzipped, we can launch this collection tool on the remote host. Choose your favorite way to complete this, whether it’s within a beacon for inline-execute or an interactive RDP session.
C# Collector
Using SharpHound.exe is straightforward — we can simply execute it without any additional flags and it will happily gather the default information about the current domain with the current user.
# Collect default information using current account
.\SharpHound.exe
To collect all available information, we can specify the -c All flag. This gathers additional data such as ADCS, RDP, and DCOM information. However, in large environments collecting everything could easily overwhelm our machine; keep this in mind!
Personally, I like to collect everything and then encrypt the ZIP with a password, while also giving the files a prefix. While this data is available to all users in the domain, in the wrong hands it could prove to be sensitive information.
# Personal favorite collection mechanism
.\SharpHound.exe -c All --zippassword 'p@ssw0rd' --outputprefix 'NORTH'
Once completed, we’ll see that the zip file was created with our intended prefix as well. Note the number of objects; this can be a useful figure in a report to a customer.
Cross-Domain Collection
To collect data from other domains in the same forest, we’ll need a few additional flags. For instance, we’ll need to target the correct domain with the --domain flag. We’ll pivot to the sevenkingdoms.local domain next, while running as the hodor@north.sevenkingdoms.local user. The machine must be able to resolve the target domain in DNS for this to work.
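Before running the collector, it’s worth a quick sanity check that the host can actually resolve the target domain; a simple nslookup (my own habit, not an official prerequisite) does the trick:

# Confirm the target domain resolves before collecting
nslookup sevenkingdoms.local

With the name resolving, we can run the collection: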
# Cross-domain collection mechanism (relies on trust between domains)
.\SharpHound.exe -c All --domain sevenkingdoms.local --zippassword 'p@ssw0rd' --outputprefix 'SEVENKINGDOMS'
Collection as a Different User
In the event we’re in a forest but don’t have access to an account trusted on a separate domain, we can always launch SharpHound.exe from a shell spawned with the runas.exe command.
# Start a cmd shell as another user, then collect data
runas /netonly /user:khal.drogo@essos.local cmd
.\SharpHound.exe -c All --domain essos.local --zippassword 'p@ssw0rd' --outputprefix 'ESSOS'
Alternatively, we can specify the --ldapusername and --ldappassword flags to connect to the other domain. This does not require the runas.exe command to function.
# Gather cross-domain information using LDAP authentication
.\SharpHound.exe -c All --domain essos.local --ldapusername khal.drogo --ldappassword horse --zippassword 'p@ssw0rd' --outputprefix 'ESSOS'
Reflectively Loading SharpHound
Have you run into a scenario where SharpHound.exe is flagged as malicious by detections? In some scenarios, we can bypass these controls by reflectively loading the C# executable into memory and executing its entry point (ATT&CK ID T1620).
# Load the C# assembly into memory, add the desired flags, execute
$sh = [System.Reflection.Assembly]::Load([byte[]]([IO.File]::ReadAllBytes("C:\Temp\SharpHound.exe")));
$cmd = "-c All --zippassword 'p@ssw0rd' --outputprefix REFLECTED"
[Sharphound.Program]::Main($cmd.Split())
Note that you may need to add the --outputdirectory switch to ensure the output is saved to your desired location.
Python Collector
Next, we can use the bloodhound.py tool to gather this information as well. As noted earlier, the current bloodhound.py package in the Kali repositories supports Legacy BloodHound only; you will need the bloodhound-ce branch from the project’s GitHub.
# Gather information with the Kali-packaged Python collector (Legacy BloodHound only)
sudo apt install bloodhound.py
bloodhound-python -d north.sevenkingdoms.local -u hodor -p hodor -c All -op default_kali_bloodhoundpy --zip -ns 192.168.56.10
Since we’re interested in data to support BHCE, let’s focus on installing and using that branch instead. We can clone that particular branch directly from GitHub.
# Clone the CE branch of bloodhound.py and execute
git clone -b bloodhound-ce https://github.com/dirkjanm/BloodHound.py.git
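If you’d rather install the cloned branch directly, a pip install from the repo should work. This is my own sketch rather than the project’s documented workflow, so verify against the repo’s README:

# Install the CE branch's collector and dependencies (sketch; check the README)
cd BloodHound.py
pip install .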
If we don’t want to clobber any dependencies, we can always build a container to run this within. A Dockerfile is already present within the repo which we can use to build this.
# Build the docker container for bloodhound.py
cd BloodHound.py
docker build -t bloodhound .
# Run the container, mounting the current directory for output
docker run -v ${PWD}:/bloodhound-data -it bloodhound
# Inside the container, run the collector
bloodhound-python -d north.sevenkingdoms.local -u hodor -p hodor -c All -op ce_branch_bloodhoundpy --zip -ns 192.168.56.10
Data Ingest
Now that we’ve been able to collect data, we need to be able to use it. We need to upload it into the GUI, where it will be ingested into the Neo4j database. This is done by clicking on the cog button and then the Administration button.
At this point, we can click the UPLOAD FILE(S) button to upload our data. Note that we cannot upload a zip file, but we can select more than one JSON file at once.
The popup window that appears will allow us to drag and drop files to be ingested. Remember, we can’t upload the zip, but we can upload all of the JSON files that are extracted from the zip.
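Since only the JSON files can be uploaded, we first need to extract them from the collector’s zip. If you set a zip password as shown earlier, supply it with -P (the filename below is illustrative; SharpHound appends a timestamp to the prefix):

# Extract the JSON files from the password-protected collector zip
unzip -P 'p@ssw0rd' NORTH_*_BloodHound.zip -d north_json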
After clicking the upload button twice, we’ll be brought back to the ingest page. We can see that the status states it’s completed ingesting! We can continue to upload additional data for other domains in the forest as well.
Ingest Failures
One of the more frustrating pieces about BHCE is that there doesn’t appear to be any feedback when outdated data is uploaded for ingest. I believe this scenario in particular is why so many hackers learning the tool get frustrated and quit.
In this scenario, if we gather data with a Legacy BloodHound collector and import it into BHCE, the files may pass the initial upload check and be marked complete, yet never actually be ingested.
Browsing back to the Explore page, we see there’s still no data. This is where it’s easy to get frustrated; after all, we uploaded the data, right?
To find the cause, we need to jump into the logs with docker compose logs to see the error. In this case, they show an unmarshalling error for the uploaded data, which was collected with the bloodhound-python package from the Kali repos.
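A quick way to check for this failure mode is to search the application logs for errors right after an upload. This is a sketch; the service name and exact error text may differ between versions:

# Look for ingest errors after an upload (service name assumed)
docker compose logs bloodhound | grep -i "error"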
To avoid this, I recommend using the latest collectors built for BHCE. I also hope the BHCE GUI is updated to surface these unmarshalling errors, or at least to indicate that the ingest did not succeed.
Ingest via API
To ingest data through the API, we can read the docs. Personally, I don’t do this since I upload data through the GUI; however, it is supported.
Browsing the Data
Since you’ve made it this far, we can actually start to explore the data that was gathered! This allows us to find and understand the relationships between the objects in the forest and how they could be exploited. To get started, we can use the built-in queries to explore the data. This is done by clicking the CYPHER button, then the folder icon to open the searches.
Clicking on one of the searches, such as “All Domain Admins” will add a Cypher query to the search bar and look for matches in the database. The results will now be on the screen!
To get detailed information about an object, we can click on it to see its properties. This can help reveal additional info we could use about the account or within the domain.
Pathfinding
To find specific paths from one object to another, we can use the PATHFINDING button. In this case, we can ask how the user samwell.tarly can gain access to Domain Admins to identify how this user could exploit that path.
Edges
In this scenario, we can see that samwell.tarly has the GenericWrite, WriteOwner, and WriteDacl permissions over the STARKWALLPAPER GPO. If we don’t know how this could be exploited, we can click on the edge itself to open its properties in the right pane. These properties describe the edge, including how to abuse the permission from Windows or Linux machines.
How cool is that?? It tells us what these edges are and how we can use them to prove impact. The properties also include several excellent references, which are always worth reading to learn more about a specific abuse scenario.
Marking Objects as Owned
To mark an object as owned, we can right-click it in the GUI. This places a skull icon on the object, allowing us to perform additional queries based on owned objects. This helps us keep track of how we can maneuver within the environment as we continue to gain access.
To review objects marked as owned, we can click the “GROUP MANAGEMENT” button at the top of the page. You’ll notice the page is blank at first; we’ll need to click a few buttons to get the correct information.
Start by clicking the second button to select an entity to search through. In most cases, selecting “All Active Directory Domains” will be enough.
After, we can click on the top drop-down menu to select “OWNED”.
At this point, we’ll be able to see the account we marked as owned. This can be useful to help us keep track of objects as we gain access, as well as use in reporting to a customer.
Cypher Queries
While the pre-built searches help us see interesting items quickly, there are many times when we need to find something those searches don’t cover. To solve this, we can write our own queries against the Neo4j database. This lets us validate that a search operates the way we intend, after which we can save it as a custom search.
Personally, I feel the BHCE pre-built queries are missing some critical searches to help match owned objects against high-value targets. This is handled differently in BHCE than in Legacy and requires specific Cypher queries to achieve. The query to list all owned objects in BHCE is below:
MATCH (n) WHERE "owned" in n.system_tags RETURN n
This can be placed into the Cypher query panel, click search, and see all owned objects. Note that in the image below, the User and Computer objects are both owned, but do not have a path to each other from this search.
Here are a few useful queries I’ve used in the past that have helped me find misconfigurations:
// Find all Unconstrained Delegation from non-DCs
MATCH (c1:Computer)-[:MemberOf*1..]->(g:Group) WHERE g.objectid ENDS WITH '-516' WITH COLLECT(c1.name) AS domainControllers MATCH (c2 {unconstraineddelegation:true}) WHERE NOT c2.name IN domainControllers RETURN c2

// Find users with "pass" in their description
MATCH p = (d:Domain)-[r:Contains*1..]->(u:User) WHERE u.description =~ '(?i).*pass.*' RETURN p

// List all owned objects
MATCH (n) WHERE "owned" in n.system_tags RETURN n

// Find all paths from owned to Tier 0 objects (warning: can be heavy)
MATCH p = allShortestPaths((o)-[*1..]->(h)) WHERE 'owned' in o.system_tags AND 'admin_tier_0' in h.system_tags RETURN p
There are some fantastic resources on creating and understanding in-depth Cypher queries in the BloodHound docs if you’re interested in learning more:
Custom Searches
When we’re happy with how a search operates and want to save it for later use, we can save it with the “Save Query” button. This will populate the “Custom Searches” category within the same folder icon for later use.
However, at the moment there doesn’t appear to be a way to load custom queries from disk as in BloodHound Legacy. This has been raised in the BloodHound Slack a few times, with the suggestion to use the API instead.
Nevertheless, we can use the API docs to figure out what we need. Notably, there are a few endpoints we can use to read and create custom saved queries.
To actually upload these, we can use the curl command below to create one in the GUI. Note that you will need your JWT.
curl -X 'POST' \
'http://localhost:8080/api/v2/saved-queries' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {{token}}' \
-d '{"name":"New Query via API", "query":"CYPHER CODE HERE"}'
A single call like this is great for uploading one or two queries, but would be terrible for hundreds. So instead, let’s use our scripting skills; I’ll probably put a full version in a follow-up blog post 😁
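In the meantime, here’s a rough shell sketch of the idea: keep your queries in a queries.json file (an array of objects with name and query fields — a layout I’ve invented for this example) and loop them through the same endpoint with jq:

# Bulk-upload saved queries from queries.json (hypothetical file layout)
TOKEN='eyJhbGciOi...'   # your JWT from the login endpoint
jq -c '.[]' queries.json | while read -r q; do
  curl -s -X POST 'http://localhost:8080/api/v2/saved-queries' \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer ${TOKEN}" \
    -d "${q}"
done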
Neo4j Web Console
In the event we need to access the Neo4j web console directly, we can browse to http://localhost:7474. This is the web interface for the Neo4j database, and it allows us to run raw Cypher queries and review the data; for most scenarios it is unnecessary unless we need direct access to the database. Log into this application with the default neo4j:bloodhoundcommunityedition credentials from the docker-compose.yml file.
We can place the raw queries into the prompt and see the results, as well as all of the properties for each object returned.
This can help us create and debug custom queries for BHCE. Again, access to the Neo4j web console is generally unnecessary, but it’s good to have as a backup if needed!
Sample BHCE Data
I’ve had a hard time locating sample data for BHCE online. There seems to be a significant amount of sample data for legacy BloodHound, but it’s harder to find for BHCE. To solve this, I’ve created a repo of sample data from the GOADv2 range.
This data can be downloaded from the repository below:
Clearing Data
In most consulting environments, we’ll need to clear out BloodHound data to keep it separated between customers. As of the recent BloodHound 5.8.0 release, this can be accomplished entirely in the GUI.
Prior to version 5.8.0, we needed to remove the volume used by BloodHound to clear the data. I’ve left these instructions here in case someone needs them; this is documented in issue #107 for BHCE as well.
Let’s start by listing the volumes in use by docker first:
# List out docker volumes
docker volume ls
Within this data, we can see there are two volumes related to BloodHound.
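For reference, the relevant entries look roughly like this (illustrative output; the prefix depends on the directory the compose file was launched from, so yours may differ):

# Example output of docker volume ls
local     bloodhound_neo4j-data
local     bloodhound_postgres-data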
To remove this data and get a fresh instance of BloodHound data, we need to remove this volume. Note that the bloodhound_postgres-data volume is used for GUI and web application logins; we generally don’t want to remove it unless we want everything reset.
Let’s remove the Neo4j volume to reset the data. To simplify things, we can accomplish this in a single one-liner:
docker volume rm $(docker volume ls -q | grep neo4j-data)
Nuke Everything
If you want to nuke everything and pull a fresh copy of all containers, volumes, and configurations, follow these instructions. They assume you have the docker-compose.yml file in the working directory.
We need to first take the containers down and remove the volumes with the command below:
# Take down the containers and volumes
docker-compose down -v
After, we can pull a fresh copy of the containers with the pull command.
# Get the latest and greatest version of containers
docker-compose pull
Then we can start the containers as usual, and we should have the latest versions! Note that you will need to set the initial password again, since we removed the previous volumes.
# Start them back up!
docker-compose up
Stopping Containers
Stopping the containers is straightforward; we can just ask Docker to stop them. This stops the containers from serving the application.
# Stop the running containers
docker compose stop
Exposing to Others
In many consulting environments, we’ll want to share our BHCE instance so multiple testers can connect to one deployment. By default, BHCE is only exposed to localhost. To change this, we’ll need to modify the docker-compose.yml file.

By changing the line below within the bloodhound service, we can tell Docker Compose to expose the GUI on interfaces other than localhost. This is great if we’re planning to use a single server for many pentesters concurrently.
If we want to expose this on all interfaces, change the BLOODHOUND_HOST setting to 0.0.0.0. Note that this will expose it to the Internet if the server is Internet-accessible! It’s usually a better idea to bind it only to a VPN interface, such as a WireGuard interface, to limit exposure.
Accelerating
If you know me, you know that I live to accelerate processes and make things more gooder. So how do we leverage this to get things done faster?
AD-Miner
The AD-Miner toolset leverages the data within Neo4j to search for many known risks and escalation paths, then presents these findings within an HTML file with an overall score.
We can run this by first installing the AD-Miner tool via pipx, then providing the connection details for the Neo4j database. These credentials are stored in the docker-compose.yml file; by default, they are neo4j:bloodhoundcommunityedition. Note that this is the password for the Neo4j database, not for the BHCE GUI!
# Install via pipx
pipx install 'git+https://github.com/Mazars-Tech/AD_Miner.git'
# Create AD-Miner report using default BHCE credentials
AD-miner -u neo4j -p bloodhoundcommunityedition -cf GOAD
This can take a long time depending on the size of the environment; in some domains I’ve had to wait over two hours! Once it completes, the files will be in a folder named render_GOAD, based on the label provided with the -cf switch. We can find the HTML report in that folder.
How cool is that? We now have an interactive dashboard which brings the most critical misconfigurations to our attention. We can use all of this information to help customers and environments reduce risk and exposure.
AD-Miner is a fantastic tool to help provide additional analysis and enrichment of the BloodHound data; definitely check it out!
Uploading Saved-Queries
Since there are plenty of queries we know we want to explore that aren’t in the pre-built set, we can upload them via the API. Again, this is probably something I’ll write up later as a follow-up post 😁
Conclusions
While this was pretty comprehensive, hopefully it was helpful! I’ve seen far too many analysts either become frustrated by not knowing how to use this amazing tool, or fail to use it to its potential. Use it, learn it, and love it.
This certainly wouldn’t be possible without the development effort and technical excellence of the SpecterOps team! Huge thanks to the AD-Miner team from Mazars for their tool as well. This would have been exceptionally more difficult without the GOAD environment to analyze; thanks to Mayfly for that work!
If you’re not already in the BloodHound Slack, you should join! There’s always tons of great discussions and analysis.
Finally, if you’ve enjoyed this post, please consider pre-ordering my book I’ve co-authored, The Hack is Back: Techniques to Beat Hackers at Their Own Games. It’s set to release in August of 2024 😁
References
- BloodHound Community Edition: A New Era — Andy Robbins, Posts By SpecterOps Team Members
- BloodHound Feature Comparison — BloodHound Enterprise
- SharpHound Releases — BloodHoundAD/SharpHound (GitHub)
- Searching with Cypher — BloodHound Docs (bloodhoundenterprise.io)
- Mazars-Tech/AD_Miner — AD audit tool leveraging Cypher queries against the BloodHound graph database (GitHub)
- Orange-Cyberdefense/GOAD — Game of Active Directory (GitHub)