Prepared by:
HALBORN
Last Updated 09/12/2025
Date of Engagement: August 14th, 2025 - August 20th, 2025
100% of all REPORTED Findings have been addressed
All findings
17
Critical
2
High
7
Medium
7
Low
1
Informational
0
ZKCross engaged Halborn to conduct a security assessment of the off-chain components of their cross-chain bridge. Halborn was provided access to the testing environment and performed whitebox testing to identify and validate potential security vulnerabilities. The engagement was designed to identify vulnerabilities, validate security controls, and ensure the robustness of the bridge against both traditional application threats and bridge-specific business logic risks. Testing combined whitebox review with blackbox- and greybox-style techniques to balance coverage and depth, and all findings were documented and reported at the conclusion of the engagement.
The team at Halborn was provided a timeline for the engagement and assigned a full-time security engineer to verify the security of the assets in scope. The security engineer is an expert in web, mobile, blockchain, reconnaissance, discovery, and infrastructure penetration testing. The engineer conducted in-depth testing of transaction flows, API endpoints, and supporting infrastructure.
The assessment identified multiple vulnerabilities affecting transaction handling, withdrawal rights assignment, nonce-based authentication, event processing, token resolution, API design, and dependency management. Business-logic flaws such as duplicate processing of bridge transactions and missing verification before fund release were noted as particularly impactful. Additional weaknesses included unauthenticated initialization endpoints, lack of rate limits, cacheable HTTPS responses, and persistence of token prices without safeguards. Observations also highlighted risks from outdated dependencies and exposed secrets in code history.
The findings underscore the importance of remediating key logic and API weaknesses to improve resilience while maintaining the strong baseline already present in secure operational practices.
The client addressed all identified issues; one issue was partially resolved and will be completely addressed in future releases of the application.
The following repository was part of the scope:
Repository: https://github.com/zkCross-Network/zkcross_release_handler
Commit: 750bd14d295d23e2be655be070e8d0a8ade8216f
Branch: audit-approach-2
Halborn followed a whitebox methodology as per the scope and performed a combination of manual and automated security testing to balance efficiency, timeliness, practicality, and accuracy regarding the scope of the pentest. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance coverage of the infrastructure and can quickly identify flaws in it.
The assessment methodology covered a range of phases and employed various tools, including but not limited to the following:
- Mapping bridge workflows (lock → index → execute → release)
- Validating chain and token configuration
- Testing concurrency and race conditions in worker scheduling
- Verifying RPC reliability and alt-RPC fallback logic
- Assessing price feed caching and persistence
- Reviewing funder detection and withdrawal authorization flows
- Evaluating session, nonce, and authentication mechanisms in API endpoints
- Testing role assignment and liquidity wallet access controls
- Fuzzing endpoints for injection or misuse
- Dependency analysis for outdated or vulnerable third-party libraries
- Analysis for hardcoded credentials or API keys
Critical
2
High
7
Medium
7
Low
1
Informational
0
Security analysis | Risk level | Remediation Date |
---|---|---|
Race condition allows duplicate bridge transactions for same lockId | Critical | Solved - 09/12/2025 |
First depositor can gain withdraw rights | Critical | Solved - 09/11/2025 |
Missing re-verification before fund release | High | Solved - 09/11/2025 |
Nonce auth message is weak and volatile | High | Solved - 09/11/2025 |
Bridge Initialization Endpoint Accessible Without Authentication | High | Solved - 09/11/2025 |
Concurrency may cause resource exhaustion & duplicate worker processing | High | Solved - 09/11/2025 |
Alt-RPC verification fails open when secondary is unset | High | Solved - 09/11/2025 |
Price worker persists values without safeguards | High | Solved - 09/12/2025 |
API Endpoints Served Over Insecure HTTP | High | Solved - 09/11/2025 |
Event processing at head block may skip events on reorgs | Medium | Solved - 09/11/2025 |
Outdated third party dependencies introduce risk | Medium | Partially Solved - 09/11/2025 |
Token resolution inconsistencies can cause wrong contract selection | Medium | Solved - 09/11/2025 |
Hardcoded Secret in Git History | Medium | Solved - 09/11/2025 |
Lack of Rate Limits in API | Medium | Solved - 09/11/2025 |
Block sync pointer may update incorrectly, leading to race conditions | Medium | Solved - 09/11/2025 |
Risk Of Supply Chain Attack Due To Unpinned Dependencies | Medium | Solved - 09/11/2025 |
Cacheable HTTPS API Responses | Low | Solved - 09/11/2025 |
//
We observed that the bridge transaction creation logic did not enforce an atomic insert check on unique transaction identifiers such as lockId or lockHash. The create() method in BridgeTxnService directly inserted a new transaction record into the in-memory database without verifying whether an existing record for the same lock event was already present. This design allowed a race condition in which multiple concurrent workers processing the same blockchain event could each create separate transaction records for the same lockId. As the service operated in memory without locking or transactional guarantees, the duplication could result in multiple downstream processing attempts for the same blockchain lock event. This behavior introduced a double-processing risk that could lead to duplicate fund releases if off-chain controls were insufficient.
Code location:
/src/db/memory/services/BridgeTxnService.ts
async create(bridgeTxn: IBridgeTxn): Promise<IBridgeTxn> {
const id = this.db.generateId();
const now = new Date();
const newBridgeTxn: MemoryBridgeTxn = {
_id: id,
lockId: bridgeTxn.lockId,
// ...[SNIP]...
};
this.db.bridgeTxns.set(id, newBridgeTxn as any);
this.db.addBridgeTxnToIndexes(id, newBridgeTxn);
this.db.counters.bridgeTxn++;
}
Running a PoC script that calls create() twice for the same lockId produced two distinct rows with different _ids but the same lockId.
Before inserting, first check for an existing record keyed by a stable unique identifier such as lockId or (chain, txHash, logIndex). In memory, this can be as simple as if (await exists(lockId)) return;. For stronger guarantees, add an insert-if-absent primitive or a unique constraint in the backing store when moving beyond in-memory storage. This check must be performed under a thread-safe or single-process lock to prevent race conditions. In distributed deployments, the check should occur in a centralized store such as Redis or a transactional database with unique constraints to ensure consistency across processes. Implementing atomic deduplication aligns with industry best practices for idempotent processing in blockchain indexing systems and prevents double-release conditions.
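A minimal sketch of the insert-if-absent pattern follows; the map and types are illustrative stand-ins for the in-memory store, not the exact BridgeTxnService API.

// Minimal insert-if-absent sketch; names are illustrative.
interface IBridgeTxnLike {
  lockId: string;
}

const byLockId = new Map<string, IBridgeTxnLike>();

// Node.js executes this check-then-set synchronously on one thread, so it is
// atomic within a single process; multi-process deployments need a shared
// store instead (e.g., Redis SETNX or a database unique constraint).
export function createIfAbsent(txn: IBridgeTxnLike): IBridgeTxnLike | null {
  if (byLockId.has(txn.lockId)) return null; // duplicate lock event, skip
  byLockId.set(txn.lockId, txn);
  return txn;
}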
SOLVED: The ZKCross team resolved the race condition by adding a shared locking mechanism to prevent duplicate transactions on the same lockId.
//
It was observed that the “funder” is inferred from the first inbound transfer seen on an uninitialized admin wallet (for example, in detectEvmFunding() / detectStellarFunding()), and /api/system/withdraw authorizes a withdrawal if the recovered signer equals that inferred funder. In practice, anyone who dust-deposits first to the bridge admin wallet can become the recognized funder and later sign withdrawals. Making /api/system/start idempotent cements whichever sender won that race; it does not remove the authorization risk. In production, this exposes a front-run path where a random depositor gains withdraw authority.
Code Location: zkcross_release_handler/src/routes/system.ts
const funders = await Promise.all(
chains.map(async (n) => ({
n,
f: getEvmFunder(n) || (await detectEvmFunding(n)).funder,
}))
);
if (!funderSet.has(ethers.utils.getAddress(recovered).toLowerCase())) {
return res.status(403).json({ error: "unauthorized signer" });
}
await withdrawUsdcOnEvm(n, f);
await withdrawNativeOnEvm(n, f);
await withdrawStellarAll(stellarFunder);
Replace dynamic funder detection with an explicit allowlist of authorized funder addresses stored in secure configuration or protected on-chain via multisignature governance. This allowlist should be immutable without multi-party approval. The withdrawal nonce should be short-lived and tied to both signer and action. Remove dynamic detection logic from production deployments to prevent privilege escalation.
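A minimal sketch of the allowlist approach follows; the AUTHORIZED_FUNDERS environment variable and helper name are illustrative, not the team's implementation.

import { ethers } from "ethers";

// Hypothetical static allowlist loaded from secure configuration.
const AUTHORIZED_FUNDERS: Set<string> = new Set(
  (process.env.AUTHORIZED_FUNDERS ?? "")
    .split(",")
    .map((a) => a.trim())
    .filter(Boolean)
    .map((a) => ethers.utils.getAddress(a)) // normalize to checksummed form
);

// No dynamic detection: only pre-approved addresses may authorize withdrawals.
export function isAuthorizedFunder(recoveredSigner: string): boolean {
  return AUTHORIZED_FUNDERS.has(ethers.utils.getAddress(recoveredSigner));
}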
SOLVED: The ZKCross team resolved the issue as per the recommendations by removing dynamic funder detection and implementing an environment-based multisig system that requires all authorized funders’ signatures for withdrawals.
//
In src/executors/evm/index.ts, release transactions are signed and broadcast based on DB entries populated by indexer scans, without a final re-verification of the original lock event on-chain. Additionally, verifyEvmLogOnAlt() returns true if no secondary RPC is configured, meaning execution can proceed with only a single RPC source.
This leaves a gap where, if a reorg occurs between indexing and release, or if the RPC provider returns stale or malformed data, the executor may attempt to release funds for an event that no longer exists or was never finalized. Even if the contracts would still reject invalid releases, the system wastes gas, risks stuck transactions, and increases operational load.
/src/helpers/common/web3/index.ts
export async function verifyEvmLogOnAlt(
network: types.EvmChains,
txHash: string,
expectedAddress: string,
logIndex: number,
eventSigHash?: string
): Promise<boolean> {
try {
const { secondary } = getProviders(network);
if (!secondary) return true; // no alt configured
const rcpt = await secondary.getTransactionReceipt(txHash);
if (!rcpt || rcpt.status !== 1) return false;
const log = rcpt.logs?.[logIndex];
if (!log) return false;
if (log.address.toLowerCase() !== expectedAddress.toLowerCase())
return false;
if (eventSigHash) {
if (!log.topics || log.topics.length === 0) return false;
if ((log.topics[0] || "").toLowerCase() !== eventSigHash.toLowerCase())
return false;
}
return true;
} catch (_) {
return false;
}
}
Before signing, re-verify the source lock event against both primary and secondary RPCs, and enforce a confirmation depth (for example, latest - minConf). If no secondary RPC is available, fail closed instead of proceeding. This aligns with industry best practices for financial bridges and ensures fund releases are always tied to finalized, valid events.
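A minimal sketch of the confirmation-depth check, assuming ethers v5; minConf is an illustrative per-chain setting.

import { ethers } from "ethers";

// Require the lock event's block to be at least minConf blocks deep
// before the executor is allowed to sign a release.
export async function isConfirmed(
  provider: ethers.providers.Provider,
  txHash: string,
  minConf: number
): Promise<boolean> {
  const rcpt = await provider.getTransactionReceipt(txHash);
  if (!rcpt || rcpt.status !== 1) return false;
  const head = await provider.getBlockNumber();
  return head - rcpt.blockNumber + 1 >= minConf;
}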
SOLVED: The ZKCross team resolved the issue as per the recommendations by removing unsafe single-RPC fallback, enforcing dual-RPC verification at indexing, adding on-chain checks before release, and ensuring fund releases only occur after full re-verification across multiple RPCs.
//
We observed that the withdrawal authorization flow issues a single-use nonce and constructs a message containing only that nonce. The nonce is stored in a process-local in-memory Set with no expiry and no persistence across restarts or multiple instances. This design means nonces can be lost on restarts, may not validate across nodes, and the signed message is not bound to a signer, chain, contract, recipient, or amount. As a result, the system relies heavily on downstream checks and is more exposed to blind-sign phishing or replay in multi-instance deployments.
Code Location:
src/helpers/system/auth.ts
const nonceStore = new Set<string>();
export function buildAuthMessage(nonce: string): string {
return `Authorize withdrawal to funders. Nonce: ${nonce}`;
// ❗ message includes only the nonce, no scope/chain/amount binding
}
export function issueNonce() {
const nonce = `${Date.now()}-${randomBytes(8).toString("hex")}`;
nonceStore.add(nonce); // ❗ never expires
return { nonce, message: buildAuthMessage(nonce) };
}
export function validateAndConsumeNonce(nonce: string) {
if (!nonceStore.has(nonce)) return false;
nonceStore.delete(nonce); // single use, but only in this process
return true;
}
# Request
curl -s -X GET http://localhost:3000/api/system/withdraw/nonce
# Sample responses observed
{"nonce":"1755337707381-5c5c4064849da079","message":"Authorize withdrawal to funders. Nonce: 1755337707381-5c5c4064849da079"}
{"nonce":"1755337708057-c85e6daa966222f7","message":"Authorize withdrawal to funders. Nonce: 1755337708057-c85e6daa966222f7"}
{"nonce":"1755337708810-8f565b41c25037e8","message":"Authorize withdrawal to funders. Nonce: 1755337708810-8f565b41c25037e8"}
{"nonce":"1755337709431-491863a998f909a7","message":"Authorize withdrawal to funders. Nonce: 1755337709431-491863a998f909a7"}
Adopt EIP-712 typed-data for withdraw authorization so signatures are bound to domain and scope; chainId, verifying contract, action, recipient, token, amount, expiresAt, nonce. Persist issued nonces in a shared store with TTL to survive restarts and multi-instance deployments. This reduces replay/phishing risk and improves reliability in production.
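As a sketch of EIP-712 typed-data authorization using ethers v5; the domain fields and the Withdraw type below are illustrative, not the team's actual schema.

import { ethers } from "ethers";

// Hypothetical EIP-712 domain; all values are placeholders.
const domain = {
  name: "ZKCrossWithdraw",
  version: "1",
  chainId: 137,
  verifyingContract: "0x0000000000000000000000000000000000000001", // placeholder
};

// Signatures are bound to action, recipient, token, amount, expiry, and nonce.
const types = {
  Withdraw: [
    { name: "action", type: "string" },
    { name: "recipient", type: "address" },
    { name: "token", type: "address" },
    { name: "amount", type: "uint256" },
    { name: "expiresAt", type: "uint256" },
    { name: "nonce", type: "bytes32" },
  ],
};

// Client side (ethers v5): signer._signTypedData(domain, types, value)

// Server side: enforce expiry, then recover the signer from the typed data.
export function recoverWithdrawSigner(
  value: Record<string, any>,
  signature: string
): string {
  if (Date.now() / 1000 > Number(value.expiresAt)) {
    throw new Error("authorization expired");
  }
  return ethers.utils.verifyTypedData(domain, types, value, signature);
}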
SOLVED: The ZKCross team resolved the issue as per the recommendations by replacing the weak nonce scheme with an EIP-712 structured signature system. They also added a 5-minute expiry with automatic cleanup to prevent replay and ensure single-use validity.
//
It was observed that the backend exposes API endpoints that require no authentication or verification. For example, POST /api/system/start can start bridge operations like wallet setup, funder detection, and Stellar trustline creation, and /api/wallet/address reveals administrative wallet addresses. If these are exposed in a production environment, anyone able to reach the endpoint could trigger these operations simply by sending a POST request. This could cause unwanted funder assignments, unnecessary on-chain activity, or disruption to the bridge setup process.
We recommend protecting any endpoint that can initialize or reinitialize the bridge or that is critical for operations with strong authentication and restricting access to trusted operators only. Options include API keys, mutual TLS, or IP allowlists. In production, such endpoints should also be rate-limited and not exposed to the public internet.
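For illustration, a minimal API-key guard for an Express app; the x-api-key header name and SYSTEM_API_KEY variable are assumptions, not the team's implementation.

import crypto from "crypto";
import express from "express";

const app = express();

// Hypothetical API-key middleware for operator-only endpoints.
function requireApiKey(
  req: express.Request,
  res: express.Response,
  next: express.NextFunction
): void {
  const provided = req.header("x-api-key") ?? "";
  const expected = process.env.SYSTEM_API_KEY ?? "";
  // Constant-time comparison avoids timing side channels; lengths must
  // match before timingSafeEqual can be called.
  const ok =
    expected.length > 0 &&
    provided.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(provided), Buffer.from(expected));
  if (!ok) {
    res.status(401).json({ error: "unauthorized" });
    return;
  }
  next();
}

app.post("/api/system/start", requireApiKey, (_req, res) => {
  res.json({ started: true }); // placeholder handler
});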
SOLVED: The ZKCross team resolved the issue as per the recommendations by protecting sensitive endpoints with API key authentication and rate limiting, ensuring sensitive endpoints cannot be abused or accessed without authorization.
//
It was observed that in zkcross_release_handler/src/helpers/common/workerManager.ts, the MAX_CONCURRENT_WORKERS setting is only enforced inside a single call to processPendingTransactions(). This means that if several different parts of the system, such as listeners, scanners, or cron jobs, call this function at the same time, each one can start its own set of workers up to that limit. When these calls happen together, the total number of workers can grow far beyond the intended limit.
It was also noticed that the activeWorkers map is only updated after a worker is started, and there is no check before spawning a worker to see if one is already running for the same lockId. As a result, the same transaction can be handled by more than one worker at the same time. If the downstream checks do not catch this, the same transaction could be processed twice, and if any worker hangs while calling an RPC, it could lead to high memory or CPU use.
zkcross_release_handler/src/helpers/common/workerManager.ts
export const processPendingTransactions = async (
transactions: types.IBridgeTxn[]
): Promise<void> => {
const MAX_CONCURRENT_WORKERS = 5;
const chunks = [] as types.IBridgeTxn[][];
for (let i = 0; i < transactions.length; i += MAX_CONCURRENT_WORKERS) {
chunks.push(transactions.slice(i, i + MAX_CONCURRENT_WORKERS));
}
for (const chunk of chunks) {
await Promise.allSettled(
chunk.map((transaction) =>
spawnReleaseWorker(
transaction,
config.allChainsInfo[transaction.toNetwork]
)
)
);
}
};
// Worker reference stored here
activeWorkers.set(transaction.lockId, worker);
^^No check to see if activeWorkers already contains this lockId
const wm = require('../../build/helpers/common/workerManager');
wm.spawnReleaseWorker = async (transaction) => {
console.log(`Simulated worker spawned for lockId: ${transaction.lockId}`);
};
function tx(same = false) {
return { lockId: same ? 'SAME' : Math.random().toString(), toNetwork: 'poly', bridgeId: 0 };
}
(async () => {
console.log('=== Concurrency Test ===');
const tasks = [];
for (let i = 0; i < 10; i++) {
tasks.push(wm.processPendingTransactions([tx(true)])); // use the SAME lockId in every call
}
await Promise.allSettled(tasks);
console.log('Done — check for multiple "Simulated worker" lines for SAME lockId.');
})();
/**
concurrency reproduction script.
Purpose: Demonstrates how multiple calls to processPendingTransactions()
can spawn workers without a global concurrency limit or duplicate-worker check.
Why it's an issue: When called concurrently (e.g., by multiple listeners or jobs),
each call can start its own set of workers, leading to more total workers than intended.
This can cause CPU/memory spikes and duplicated transaction processing.
**/
Processing for SAME lockId
We recommend introducing a global concurrency limit that is shared across all calls to processPendingTransactions(). The system should check activeWorkers before starting a new worker for the same lockId and reserve the lockId before the worker is created. A timeout should also be added so that any worker that becomes stuck is cleaned up, which will reduce the chance of resource exhaustion and duplicated work. A minimal sketch follows.
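The sketch below shows a shared limiter with lockId reservation and a watchdog timeout, assuming a single Node.js process; all names and the 120-second timeout are illustrative.

// Global limiter shared by all call sites; caps total in-flight workers.
const MAX_CONCURRENT_WORKERS = 5;
let running = 0;
const reserved = new Set<string>();    // lockIds reserved or running
const waiters: Array<() => void> = []; // callers waiting for a free slot

async function acquireSlot(): Promise<void> {
  if (running < MAX_CONCURRENT_WORKERS) {
    running++;
    return;
  }
  await new Promise<void>((resolve) => waiters.push(resolve));
  running++;
}

function releaseSlot(): void {
  running--;
  waiters.shift()?.(); // wake one waiting caller, if any
}

export async function runReleaseWorker(
  lockId: string,
  work: () => Promise<void>
): Promise<void> {
  if (reserved.has(lockId)) return; // duplicate-worker guard
  reserved.add(lockId);             // reserve before spawning
  await acquireSlot();
  try {
    // Watchdog: a hung worker is abandoned after the timeout so the
    // slot and reservation are always freed.
    await Promise.race([
      work(),
      new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error(`worker timeout for ${lockId}`)), 120_000)
      ),
    ]);
  } finally {
    releaseSlot();
    reserved.delete(lockId);
  }
}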
SOLVED: The ZKCross team resolved the issue by introducing global concurrency limits, duplicate worker prevention, and memory-aware worker allocation with cleanup.
//
In verifyEvmLogOnAlt(), if no secondary RPC is configured, the function immediately returns true, effectively bypassing log verification.
This behavior weakens the intended defense against RPC inconsistency or shallow reorgs whenever altRpcUrl is absent in production configs. Operators may believe “alt verification is active,” but in practice it silently degrades to single-RPC trust.
src/helpers/common/web3/index.ts
export async function verifyEvmLogOnAlt(
network: types.EvmChains,
txHash: string,
expectedAddress: string,
logIndex: number,
eventSigHash?: string
): Promise<boolean> {
try {
const { secondary } = getProviders(network);
if (!secondary) return true; // <-- bypass if no alt RPC
const rcpt = await secondary.getTransactionReceipt(txHash);
if (!rcpt || rcpt.status !== 1) return false;
const log = rcpt.logs?.[logIndex];
if (!log) return false;
if (log.address.toLowerCase() !== expectedAddress.toLowerCase()) return false;
if (eventSigHash) {
if (!log.topics || log.topics.length === 0) return false;
if ((log.topics[0] || "").toLowerCase() !== eventSigHash.toLowerCase()) return false;
}
return true;
} catch (_) {
return false;
}
}
The verification flow should be changed to fail closed when an alternative RPC is not available in production. Event persistence should require a positive match on both providers, including checks for transaction status, log index, emitting address, and the expected event signature. Operational monitoring should alert when the secondary provider is unavailable so that indexing pauses rather than silently degrading to single-RPC verification.
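A fail-closed sketch of dual-provider verification, assuming ethers v5; names are illustrative and the per-provider checks are reduced to status and address for brevity.

import { ethers } from "ethers";

async function checkLog(
  provider: ethers.providers.Provider,
  txHash: string,
  expectedAddress: string,
  logIndex: number
): Promise<boolean> {
  const rcpt = await provider.getTransactionReceipt(txHash);
  if (!rcpt || rcpt.status !== 1) return false;
  const log = rcpt.logs?.[logIndex];
  return !!log && log.address.toLowerCase() === expectedAddress.toLowerCase();
}

export async function verifyOnBothProviders(
  primary: ethers.providers.Provider,
  secondary: ethers.providers.Provider | undefined,
  txHash: string,
  expectedAddress: string,
  logIndex: number
): Promise<boolean> {
  if (!secondary) return false; // fail closed: no alt RPC means no verification
  const [a, b] = await Promise.all([
    checkLog(primary, txHash, expectedAddress, logIndex),
    checkLog(secondary, txHash, expectedAddress, logIndex),
  ]);
  return a && b; // require a positive match on both providers
}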
SOLVED: The ZKCross team resolved the issue as per the recommendations by enforcing a fail-closed model with mandatory dual-RPC verification.
//
The price worker (src/helpers/common/workers/prices.ts) persists token prices both in Redis and to local disk without validation, TTLs, or sanity checks. It was observed that the coin price caching mechanism accepted and stored arbitrary values without validation. Because Redis writes are unconditional and lack TTLs, any trusted service or compromised price feed could poison the cache indefinitely. If downstream components (executors, release logic, or minimum output enforcement) consume these values, they may calculate incorrect payouts, bypass slippage limits, or release funds at a manipulated rate. This design essentially shifts the trust boundary entirely onto the external price feed without local safeguards, which is especially risky in a bridge context where the integrity of conversion rates directly controls fund flows.
Code Location:
try {
await redis.setCoinPricesCache(prices); // ❗ cached in Redis with no TTL
fs.writeFileSync("prices.json", JSON.stringify(prices)); // ❗ persisted locally
console.log(`Prices saved in redis`);
} catch (error: any) {
console.log(`Error while saving prices in redis: ${error.message}`);
}
And in src/redis/index.ts
export const setCoinPricesCache = async (
prices: types.PricesCache
): Promise<void> => {
miniRedis.hsetMany(types.CacheKeys.PRICES, prices); // ❗ unvalidated write
};
import * as redis from "../../src/redis";
(async () => {
await redis.setCoinPricesCache({ usdc: "1000" }); // poison price
const prices = await redis.getCoinPricesCache();
console.log("Fetched prices:", prices);
})();
If persistence is not required, remove or gate the fs.writeFileSync call behind a debug flag. Redis writes should enforce TTLs and include timestamps for freshness. Add validation and sanity checks on incoming price data, including fallback sources and stablecoin clamps. For production environments, avoid synchronous disk writes and use async logging or database persistence with explicit safeguards.
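A minimal sketch of a guarded write, assuming an ioredis client; the TTL, jump threshold, key scheme, and stablecoin clamp values are illustrative.

import Redis from "ioredis";

const redis = new Redis();

const PRICE_TTL_SECONDS = 300; // entries expire so stale data cannot persist
const MAX_REL_CHANGE = 0.2;    // reject >20% jumps versus the last stored value
const STABLECOINS = new Set(["usdc", "usdt"]);

export async function setCoinPriceGuarded(symbol: string, price: number): Promise<void> {
  if (!Number.isFinite(price) || price <= 0) throw new Error("invalid price");
  // Clamp stablecoins to a narrow band around $1.
  if (STABLECOINS.has(symbol) && Math.abs(price - 1) > 0.05) {
    throw new Error(`implausible stablecoin price for ${symbol}: ${price}`);
  }
  const key = `price:${symbol}`;
  const prev = await redis.get(key);
  if (prev !== null) {
    const prevPrice = JSON.parse(prev).price as number;
    if (Math.abs(price - prevPrice) / prevPrice > MAX_REL_CHANGE) {
      throw new Error(`rejected implausible price jump for ${symbol}`);
    }
  }
  // Store with a TTL and a freshness timestamp.
  await redis.set(key, JSON.stringify({ price, ts: Date.now() }), "EX", PRICE_TTL_SECONDS);
}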
SOLVED: The ZKCross team resolved the issue by improving how price data is stored. Validation checks and automatic expiry were added so that only fresh and trusted values remain in the cache, preventing stale or tampered data from persisting.
//
It was observed that the backend API was deployed over HTTP rather than HTTPS. This behavior was caused by the server being configured to expose its endpoints on port 3000 without enforcing TLS. As a result, all API communications including sensitive operations such as wallet address retrieval, system initialization, and withdrawal requests were transmitted in cleartext. This exposed the application to interception and manipulation of traffic by attackers positioned on the same network path, increasing the risk of credential theft and unauthorized transactions.
The API should be deployed with HTTPS enforced for all endpoints in order to protect the confidentiality and integrity of communications. TLS 1.2 or higher should be used with certificates issued by a trusted authority. Any requests made over HTTP should be redirected to HTTPS, and transport security headers such as Strict-Transport-Security should be configured to enforce secure connections. This ensures protection against network-level interception and prevents exposure of sensitive information in transit. The deployment configuration should be updated to serve traffic exclusively over HTTPS, aligning with modern security best practices.
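For illustration, a minimal HTTPS-enforcement middleware for an Express app; it assumes the app runs behind a proxy that sets X-Forwarded-Proto, and the header values are common defaults rather than the team's configuration.

import express from "express";

const app = express();

app.set("trust proxy", 1); // trust the first proxy hop for forwarded headers

app.use((req, res, next) => {
  if (req.headers["x-forwarded-proto"] === "http") {
    // Redirect any plain-HTTP request to its HTTPS equivalent.
    return res.redirect(301, `https://${req.headers.host}${req.url}`);
  }
  res.setHeader(
    "Strict-Transport-Security",
    "max-age=31536000; includeSubDomains"
  );
  next();
});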
SOLVED: The ZKCross team clarified that the insecure HTTP setup was only part of the local environment used during testing. In production, the API will be deployed strictly over HTTPS with TLS enforced, ensuring encrypted communication and proper transport security.
//
We noticed that in /src/helpers/common/web3/index.ts the helper getLatestBlockNumber() always returns the current chain head, and getLogs() queries directly up to that block. This means the system processes events from the most recent block without any confirmation buffer or overlap.
If a chain reorg occurs, previously processed events may disappear or change, leading to wasted executor work, inconsistent cursors, or replaying old events. Even if the on-chain contract ensures a lock hash cannot be released twice, the off-chain system can still miss events or see spurious ones, which could cause delays or make the bridge less reliable.
export const getLatestBlockNumber = async (
network: types.EvmChains
): Promise<number> => {
const provider = getProvider(network);
const blockNumber = await provider.getBlockNumber();
return blockNumber; // returns the chain head with no confirmation buffer
};
—————
export const getLogs = async (
network: types.EvmChains,
bridge: ethers.Contract,
filter: ethers.EventFilter,
fromBlock: number,
toBlock: number
): Promise<types.Log[]> => {
const interval = config.evmChainsInfo[network]!.eventPollingInterval;
const provider = bridge.provider;
const allLogs: types.Log[] = [];
for (let i = fromBlock; i < toBlock; i += interval) {
const logs = await provider.getLogs({
...filter,
fromBlock: i,
toBlock: i + interval, // currently fetches up to the latest block with no safety buffer and no recheck of recent blocks
});
allLogs.push(...logs);
}
return allLogs;
};
Introduce a per-chain confirmation buffer (for example, safeHead = latestBlock - N) and apply a small overlap window so that recent blocks are rescanned. This reduces the impact of shallow reorgs and ensures consistent event processing, even if contracts prevent duplicate releases.
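A minimal sketch of the confirmation buffer with an overlap window, assuming ethers v5; CONFIRMATIONS and OVERLAP are illustrative per-chain settings.

import { ethers } from "ethers";

const CONFIRMATIONS = 12; // ignore the newest N blocks
const OVERLAP = 5;        // rescan a few recent blocks each cycle

export async function getSafeScanRange(
  provider: ethers.providers.Provider,
  lastSynced: number
): Promise<{ fromBlock: number; toBlock: number } | null> {
  const head = await provider.getBlockNumber();
  const safeHead = head - CONFIRMATIONS;               // confirmed head
  const fromBlock = Math.max(0, lastSynced - OVERLAP); // overlap catches reorged logs
  if (safeHead < fromBlock) return null;               // nothing confirmed to scan yet
  return { fromBlock, toBlock: safeHead };
}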
SOLVED: The ZKCross team resolved the issue by applying a confirmation buffer before event processing. This ensures the system only processes confirmed blocks, mitigating reorg risks and preventing event loss or duplication.
//
We observed that the project relies on multiple outdated npm dependencies, including core libraries used in bridge execution and indexing. Outdated packages can expose the system to publicly known vulnerabilities or silent breaking changes in transitive dependencies. Because the bridge is a security-sensitive component handling fund releases, relying on unpatched packages significantly increases the attack surface.
# npm audit report
@babel/runtime <7.26.10
Severity: moderate
Babel has inefficient RegExp complexity in generated code with .replace when transpiling named capturing groups - https://github.com/advisories/GHSA-968p-4wvh-cqc8
fix available via `npm audit fix`
node_modules/@babel/runtime
axios <=0.29.0 || 1.0.0 - 1.8.1
Severity: high
Axios Cross-Site Request Forgery Vulnerability - https://github.com/advisories/GHSA-wf5p-g6vw-rhxx
axios Requests Vulnerable To Possible SSRF and Credential Leakage via Absolute URL - https://github.com/advisories/GHSA-jr5f-v2jv-69x6
axios Requests Vulnerable To Possible SSRF and Credential Leakage via Absolute URL - https://github.com/advisories/GHSA-jr5f-v2jv-69x6
fix available via `npm audit fix`
node_modules/axios
node_modules/tronweb/node_modules/axios
tronweb <=5.3.1
Depends on vulnerable versions of axios
node_modules/tronweb
@allbridge/bridge-core-sdk *
Depends on vulnerable versions of @solana/spl-token
Depends on vulnerable versions of tronweb
Depends on vulnerable versions of web3
node_modules/@allbridge/bridge-core-sdk
base-x <=3.0.10
Severity: high
Homograph attack allows Unicode lookalike characters to bypass validation. - https://github.com/advisories/GHSA-xq7p-g2vc-g82p
fix available via `npm audit fix`
node_modules/base-x
bigint-buffer *
Severity: high
bigint-buffer Vulnerable to Buffer Overflow via toBigIntLE() Function - https://github.com/advisories/GHSA-3gc7-fjrx-p6mg
fix available via `npm audit fix`
node_modules/bigint-buffer
@solana/buffer-layout-utils *
Depends on vulnerable versions of bigint-buffer
node_modules/@solana/buffer-layout-utils
@solana/spl-token >=0.2.0-alpha.0
Depends on vulnerable versions of @solana/buffer-layout-utils
node_modules/@solana/spl-token
@solana/web3.js 1.43.1 - 1.98.0
Depends on vulnerable versions of bigint-buffer
node_modules/@solana/web3.js
form-data >=4.0.0 <4.0.4 || <2.5.4
Severity: critical
form-data uses unsafe random function in form-data for choosing boundary - https://github.com/advisories/GHSA-fjxv-7rqg-78g4
form-data uses unsafe random function in form-data for choosing boundary - https://github.com/advisories/GHSA-fjxv-7rqg-78g4
fix available via `npm audit fix`
node_modules/form-data
node_modules/request/node_modules/form-data
request *
Depends on vulnerable versions of form-data
Depends on vulnerable versions of tough-cookie
node_modules/request
servify *
Depends on vulnerable versions of request
node_modules/servify
eth-lib <=0.1.29
Depends on vulnerable versions of servify
Depends on vulnerable versions of ws
node_modules/eth-lib
swarm-js >=0.1.36
Depends on vulnerable versions of eth-lib
Depends on vulnerable versions of tar
node_modules/swarm-js
web3-bzz *
Depends on vulnerable versions of swarm-js
node_modules/web3-bzz
web3 1.0.0-beta.1 - 3.0.0-rc.0
Depends on vulnerable versions of web3-bzz
node_modules/web3
mongoose 8.0.0-rc0 - 8.9.4
Severity: critical
Mongoose search injection vulnerability - https://github.com/advisories/GHSA-m7xq-9374-9rvx
Mongoose search injection vulnerability - https://github.com/advisories/GHSA-vg7j-7cwx-8wgw
fix available via `npm audit fix`
node_modules/mongoose
pbkdf2 <=3.1.2
Severity: critical
pbkdf2 silently disregards Uint8Array input, returning static keys - https://github.com/advisories/GHSA-v62p-rq8g-8h59
pbkdf2 returns predictable uninitialized/zero-filled memory for non-normalized or unimplemented algos - https://github.com/advisories/GHSA-h7cp-r72f-jxh6
fix available via `npm audit fix`
node_modules/pbkdf2
tar <6.2.1
Severity: moderate
Denial of service while parsing a tar file due to lack of folders count validation - https://github.com/advisories/GHSA-f5x3-32g6-xq36
fix available via `npm audit fix`
node_modules/tar
tough-cookie <4.1.3
Severity: moderate
tough-cookie Prototype Pollution vulnerability - https://github.com/advisories/GHSA-72xf-g2v4-qvf3
fix available via `npm audit fix`
node_modules/tough-cookie
ws 2.1.0 - 5.2.3
Severity: high
ws affected by a DoS when handling a request with many HTTP headers - https://github.com/advisories/GHSA-3h5v-q93c-6h6q
fix available via `npm audit fix`
node_modules/eth-lib/node_modules/ws
21 vulnerabilities (6 moderate, 11 high, 4 critical)
It is recommended to upgrade all outdated dependencies to their latest stable versions to prevent known vulnerabilities from being exploited. A regular dependency audit should be performed using npm audit, pnpm audit, or similar tools as part of the CI/CD pipeline. Dependencies should be updated as per the advisories provided by their maintainers.
PARTIALLY SOLVED: The ZKCross team addressed the issue by updating dependencies to their latest versions. All critical and high-severity vulnerabilities were fixed; only moderate issues remain, which will be addressed in future updates.
//
We observed that the bridge’s token resolution logic applies inconsistent normalization for both token symbols and contract addresses across the codebase. In some cases, input symbols are lowercased before comparison while the stored config is not. In others, raw string equality is used. Similarly, addresses are compared with == instead of being normalized to checksummed values. This inconsistency introduces scenarios where a token like USDC may be missed if stored as usdc, or where a valid non-checksummed address fails to resolve. In multi-token deployments, such mismatches could cause events to be ignored or mis-mapped, potentially leading to incorrect releases or skipped processing.
Code Location:
src/helpers/common/config/evmChains/index.ts
Asymmetric normalization (input lowercased, config not):
(token) => token.symbol === tokenSymbol.toLowerCase();
Unnormalized address comparison:
src/helpers/common/utils/bridge.ts
if (fromToken == token.address) { ... } // no checksum normalization
Mixed raw symbol checks:
src/helpers/common/config/nonEvmChains/stellarTokens.ts
if (token.symbol === symbol) { ... } // raw equality, case-sensitive
Event listener using asymmetric symbol normalization:
src/indexers/evmIndexer/listener.ts
const tokenConfig = allTokens.find(
(token) => token.symbol === tokenSymbol.toLowerCase()
);
All token resolution should be made deterministic by enforcing normalization at both storage and lookup. Token symbols should always be converted to lowercase before being stored and looked up, and contract addresses should always be passed through ethers.utils.getAddress() to ensure checksummed consistency. Equality checks should then only be performed on these normalized values. By ensuring consistent handling of symbols and addresses throughout the system, the bridge can avoid subtle mismatches that could otherwise result in missed events, misclassification of tokens, or incorrect execution flows.
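A minimal sketch of symmetric normalization, assuming ethers v5; TokenConfig and findToken are illustrative stand-ins for the config lookup paths cited above.

import { ethers } from "ethers";

interface TokenConfig {
  symbol: string;
  address: string;
}

function normalizeSymbol(symbol: string): string {
  return symbol.trim().toLowerCase();
}

function normalizeAddress(address: string): string {
  // getAddress() validates and returns the EIP-55 checksummed form.
  return ethers.utils.getAddress(address);
}

// Apply the same normalization at lookup time as at storage time.
export function findToken(
  allTokens: TokenConfig[],
  tokenSymbol: string,
  fromToken: string
): TokenConfig | undefined {
  return allTokens.find(
    (token) =>
      normalizeSymbol(token.symbol) === normalizeSymbol(tokenSymbol) &&
      normalizeAddress(token.address) === normalizeAddress(fromToken)
  );
}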
SOLVED: The ZKCross team resolved the issue by enforcing lowercase normalization for all token symbols and using checksummed address comparisons.
//
We observed that sensitive secrets were committed into version control and remain retrievable in the Git history. In commit 2ec271b2f4a63a5ad9022cbe785f8f99703af442, both an API key and a private key were present inside .env.example. Although later versions may remove or replace them, their presence in history exposes the project to supply chain and credential compromise. While the repository may currently be private, secrets committed to version control remain at risk of future exposure if the repository is ever made public, shared with external contractors, or accessed by unauthorized collaborators.
The exposed secrets must be rotated immediately. Secrets should never be hardcoded into application code or committed to version control; use secure environment variables or a secrets manager for configuration. Further, more robust controls over repository access should be implemented, and invite processes should be reviewed internally to avoid overexposing sensitive infrastructure to unnecessary collaborators and prevent future exposures.
SOLVED: The ZKCross team resolved the issue as per the recommendation, rotating all exposed secrets and enforcing secure environment variable management.
//
It was observed that the HTTP API accepted unlimited requests without throttling or quota enforcement. The behavior was caused by the absence of any rate-limiting middleware at the server initialization layer and on sensitive routes such as /system/start, /system/withdraw, and /wallet/address. As a result, automated request loops were processed continuously, which increased the risk of denial-of-service conditions, brute-force attempts against authorization flows, and resource exhaustion. The issue affected service availability and resilience under abuse.
Rate-limiting middleware should be implemented at the server initialization layer to enforce per-IP and global quotas on sensitive endpoints such as /system/start, /system/withdraw, and /wallet/address. Requests exceeding the defined thresholds must be automatically throttled or rejected.
SOLVED: The ZKCross team resolved the issue by implementing tiered rate-limiting middleware across sensitive endpoints.
//
We noticed that in src/indexers/evmIndexer/blockScanner.ts, the call await redis.setLastSyncBlockNumber(network, config.events.evm, toBlockNumber) ultimately calls miniRedis.set(...) in src/redis/evm.ts without checking whether the new block number is greater than the previous one. If multiple processes or a reorg cause this to be called with a smaller value, the cursor can be overwritten backwards, skipping unprocessed events or replaying old ones. Because miniRedis is in-memory only, any restart loses this state entirely. After a restart, the system relies on hardcoded bridgeDeployBlock values, which can lead to reprocessing a large number of old events and potential inconsistencies.
const redis = require('../../build/redis/evm');
(async () => {
console.log('=== Block Sync Pointer Test ===');
const network = 'polygon';
const eventType = 'Lock';
// Set higher block first
await redis.setLastSyncBlockNumber(network, eventType, 105);
console.log(`Set block to 105`);
// Set lower block to simulate regression
await redis.setLastSyncBlockNumber(network, eventType, 102);
console.log(`Set block to 102 (regression)`);
// Read back current value
const stored = await redis.getLastSyncBlockNumber(network, eventType);
console.log(`Stored block number is now: ${stored} (should NOT be less than 105)`);
console.log('If value regressed, missing guard logic confirmed.');
})();
/**
Minimal block sync pointer overwrite reproduction.
Purpose: Shows how setLastSyncBlockNumber() in src/redis/evm.ts can overwrite
the stored block cursor with a smaller value without validation.
Why it's an issue: If multiple indexers/processes call this concurrently
or during a chain reorg, the cursor can move backwards or skip forward,
causing unprocessed events to be skipped or old events to be reprocessed.
This can lead to data inconsistency, missed transactions, and replayed events.
**/
// Process A
setLastSyncBlockNumber("polygon", "Lock", 105);
// Process B
setLastSyncBlockNumber("polygon", "Lock", 102); // overwrites
Result: cursor = 102 (regressed)
We recommend adding logic to ensure that block numbers only move forward in a contiguous manner and that updates are atomic. Using a persistent Redis store with proper locking would help avoid race conditions. On startup, cross-check the stored cursor against the actual chain head to detect and recover from skipped ranges.
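A minimal forward-only cursor update, assuming an ioredis client; the key scheme is illustrative. The Lua script keeps the compare-and-set atomic across concurrent processes.

import Redis from "ioredis";

const redis = new Redis();

const ADVANCE_SCRIPT = `
  local cur = tonumber(redis.call('GET', KEYS[1]) or '-1')
  local nxt = tonumber(ARGV[1])
  if nxt > cur then
    redis.call('SET', KEYS[1], nxt)
    return nxt
  end
  return cur
`;

export async function advanceLastSyncBlock(
  network: string,
  eventType: string,
  toBlock: number
): Promise<number> {
  const key = `lastSync:${network}:${eventType}`;
  // Returns the stored cursor, which never moves backwards.
  return Number(await redis.eval(ADVANCE_SCRIPT, 1, key, toBlock));
}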
SOLVED: The ZKCross team resolved the issue as per the recommendations by adding safeguards so that block sync pointers only move forward, preventing regressions and ensuring consistent event processing.
//
The application depends on multiple third-party npm packages and uses loose semver ranges (^x.x.x) instead of exact pins. This allows automatic updates to minor and patch versions, which can introduce unreviewed or malicious code into production. Such weak version control creates exposure to supply chain attacks, as seen in the event-stream compromise in the Copay Bitcoin Wallet and the @solana/web3.js incident.
Pinning dependencies to an exact version (=x.x.x) is recommended to reduce the risk of inadvertently introducing a malicious version of a dependency in the future. This helps in mitigating supply chain attack risks, ensuring that updates are controlled and vetted before implementation.
SOLVED: The ZKCross team resolved the issue by updating and pinning dependency versions to secure releases.
//
It was observed that several API responses were returned without explicit cache-control headers, meaning intermediaries such as browsers or proxies could cache them. Some sensitive API calls, for example /system/withdraw/nonce, returned authorization messages that should have been treated as strictly non-cacheable. The absence of Cache-Control: no-store and Pragma: no-cache headers allowed the possibility that authorization payloads could be stored and later retrieved outside their intended lifecycle. This weakened the integrity of the withdrawal flow because cached nonces or responses may be replayed or reused.
Ensure that all sensitive endpoints include headers that prevent client-side and intermediary caching of sensitive responses. Specifically, the response should include:
• Cache-Control: no-store, no-cache, must-revalidate, private
• Pragma: no-cache
• Expires: 0
These headers enforce a strict no-cache policy, ensuring tokens are not unintentionally stored by browsers or proxies, and align with secure session management best practices.
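For illustration, a minimal Express middleware that sets these headers on the nonce endpoint discussed above; the handler payload is a placeholder.

import express from "express";

const app = express();

function noStore(
  _req: express.Request,
  res: express.Response,
  next: express.NextFunction
): void {
  res.setHeader("Cache-Control", "no-store, no-cache, must-revalidate, private");
  res.setHeader("Pragma", "no-cache");
  res.setHeader("Expires", "0");
  next();
}

app.get("/api/system/withdraw/nonce", noStore, (_req, res) => {
  res.json({ nonce: "..." }); // placeholder payload
});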
SOLVED: The ZKCross team resolved the issue by adding strict no-cache headers to sensitive API endpoints, ensuring that authorization system responses cannot be cached or reused.
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.