Prepared by:
HALBORN
Last Updated 06/10/2025
Date of Engagement: April 21st, 2025 - May 2nd, 2025
100% of all REPORTED Findings have been addressed
All findings: 16 (Critical: 1, High: 1, Medium: 7, Low: 5, Informational: 2)
Quex engaged Halborn to conduct a security assessment of their smart contracts from April 21st, 2025, to May 2nd, 2025. The assessment scope was limited to the smart contracts provided to the Halborn team. Commit hashes and additional details are available in the Scope section of this report.
The Halborn team dedicated 10 days to this engagement, assigning one full-time security engineer to evaluate the smart contracts' security.
The assigned security engineer is an expert in blockchain and smart contract security, with advanced skills in penetration testing, smart contract exploitation, and extensive knowledge of multiple blockchain protocols.
The objectives of this assessment were to:
Verify that the smart contract functions operate as intended.
Identify potential security vulnerabilities within the smart contracts.
In summary, Halborn identified several improvements to reduce the likelihood and impact of potential risks, which were mostly addressed by the Quex team. The main ones were:
Enforce certificate chain validation to maintain integrity and prevent bypasses.
Limit batch sizes to prevent unbounded loops and mitigate denial-of-service risks in PlatformCA revocation.
Deprecate or isolate outdated hardware to avoid compromise of the oracle network.
Disable debug modes in production environments to prevent memory extraction.
Prevent duplicate PCK registrations in TrustDomainFacet.
Invalidate stale Trust Domain (TD) quotes to mitigate replay attacks.
Implement alerts and safeguards to catch silent fund lock failures.
Ensure explicit cancellation mechanisms to avoid permanent fund locks.
Patch TEE_TCB_SVN counter leaks to enable effective revocation.
Harden signature verification against malleability risks.
Restrict contract call flows to prevent arbitrary executions.
Clear revoked PCKs to eliminate storage bloat.
Add comprehensive validation to critical functions.
Protect against data overwrite from hash collisions.
Validate upgrade events to prevent misleading off-chain monitoring.
Review admin logic to prevent unintended privilege escalation.
Halborn employed a combination of manual, semi-automated, and automated security testing to balance efficiency, timeliness, practicality, and accuracy within the scope of this assessment. Manual testing is essential for uncovering flaws in logic, process, and implementation, while automated techniques enhance code coverage and quickly identify deviations from security best practices. The following phases and tools were utilized throughout the assessment:
Research into the architecture and purpose of the smart contracts.
Manual code review and walkthrough of the smart contracts.
Manual assessment of critical Solidity variables and functions to identify potential vulnerability classes.
Manual testing using custom scripts.
Static security analysis of the scoped contracts and imported functions using Slither.
Local deployment and testing with Foundry & Hardhat.
EXPLOITABILITY METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Attack Origin (AO) | Arbitrary (AO:A) / Specific (AO:S) | 1 / 0.2 |
Attack Cost (AC) | Low (AC:L) / Medium (AC:M) / High (AC:H) | 1 / 0.67 / 0.33 |
Attack Complexity (AX) | Low (AX:L) / Medium (AX:M) / High (AX:H) | 1 / 0.67 / 0.33 |
IMPACT METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Confidentiality (C) | None (C:N) / Low (C:L) / Medium (C:M) / High (C:H) / Critical (C:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Integrity (I) | None (I:N) / Low (I:L) / Medium (I:M) / High (I:H) / Critical (I:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Availability (A) | None (A:N) / Low (A:L) / Medium (A:M) / High (A:H) / Critical (A:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Deposit (D) | None (D:N) / Low (D:L) / Medium (D:M) / High (D:H) / Critical (D:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Yield (Y) | None (Y:N) / Low (Y:L) / Medium (Y:M) / High (Y:H) / Critical (Y:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
SEVERITY COEFFICIENT | COEFFICIENT VALUE | NUMERICAL VALUE |
---|---|---|
Reversibility | None (R:N) / Partial (R:P) / Full (R:F) | 1 / 0.5 / 0.25 |
Scope | Changed (S:C) / Unchanged (S:U) | 1.25 / 1 |
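These metrics combine into a single severity score. The following is a reference sketch only; the notation (m_e for the exploitability values, m_I for the impact values, r for Reversibility, s for Scope) and the exact combination are assumptions based on Halborn's standard rating methodology, which is not restated in this section:

$$
E = \prod m_e, \qquad
I = \max(m_I) + \frac{\sum m_I - \max(m_I)}{4}, \qquad
C = r \cdot s, \qquad
S = \min\bigl(10,\; 10 \cdot E \cdot I \cdot C\bigr)
$$

The resulting score S maps to the severity ranges below.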
Severity | Score Value Range |
---|---|
Critical | 9 - 10 |
High | 7 - 8.9 |
Medium | 4.5 - 6.9 |
Low | 2 - 4.4 |
Informational | 0 - 1.9 |
Findings by severity: Critical: 1, High: 1, Medium: 7, Low: 5, Informational: 2
Security analysis | Risk level | Remediation Date |
---|---|---|
Trust Domain Validation Bypasses Certificate Chain Integrity Checks | Critical | Solved - 05/14/2025 |
Unbounded Loop Creates Permanent DoS Risk in Platform CA Revocation | High | Solved - 05/14/2025 |
Outdated Hardware Can Compromise Entire Oracle Network | Medium | Solved - 05/28/2025 |
Debug Mode Enables TD Memory Extraction | Medium | Solved - 05/28/2025 |
Duplicate PCK Registration in TrustDomainFacet | Medium | Solved - 05/14/2025 |
Replay Attacks Through Stale TD Quotes | Medium | Risk Accepted - 06/04/2025 |
Silent Failures Could Lock User Funds Forever | Medium | Risk Accepted - 06/04/2025 |
Permanent Fund Lock Without Request Cancellation | Medium | Solved - 05/16/2025 |
TEE_TCB_SVN Reference Counter Leak Prevents Permanent Revocation | Medium | Solved - 06/04/2025 |
Signature verification vulnerable to malleability | Low | Solved - 05/15/2025 |
Arbitrary contract call vulnerability through unrestricted flow creation | Low | Risk Accepted - 06/04/2025 |
Storage Bloat From Unremoved Revoked PCKs | Low | Solved - 05/14/2025 |
Critical Functions Lack Comprehensive Validations | Low | Solved - 05/15/2025 |
Data Overwrite Risk from Hash Collisions | Low | Risk Accepted - 06/04/2025 |
False Upgrade Events Can Mislead Off-Chain Monitoring | Informational | Acknowledged - 06/04/2025 |
QuexDiamond Can Get Unintended Admin Rights | Informational | Solved - 05/15/2025 |
// Trust Domain Validation Bypasses Certificate Chain Integrity Checks (Critical)
The isTDValid() function in TrustDomainFacet.sol performs only a minimal existence check instead of a comprehensive certificate chain validation. It verifies solely that a TD's REPORT_DATA1 field is non-zero, rather than validating the entire certificate chain.
function isTDValid(uint256 tdId) external view returns (bool) {
return TrustDomainStorage.tdLayout().tdQuotes[tdId].REPORT_DATA1 != 0;
}
This introduces a critical security vulnerability where:
Revoked Platform CAs and PCKs remain effective for existing TDs.
Expired certificates within the chain are not detected.
Compromised attestation chains can continue to operate undetected.
This flaw undermines the core security model of the protocol, which relies on complete certificate chain validation for each oracle message. Once a TD is registered, it remains "valid" regardless of any changes to the status of its certificate chain.
Add the following test case to TrustDomainFacet.test.ts:
it("halborn: poc_InsecureTDValidation_poc", async () => {
// =========== SETUP: Register certificates and create TD ============
// Track each step of the chain for demonstration purposes
console.log("\nSETTING UP THE COMPLETE CERTIFICATE CHAIN:");
// 1. Add Platform CA key (this inherits trust from the pre-deployed rootCA)
console.log("1. Adding Platform CA");
await testObject
.connect(owner)
.addPlatformCAKey(
platformCaCert.x,
platformCaCert.y,
platformCaCert.serial,
platformCaCert.notBefore,
platformCaCert.notAfter,
platformCaCert.extensions,
platformCaCert.r,
platformCaCert.s
);
// 2. Add PCK (signed by Platform CA)
console.log("2. Adding PCK signed by Platform CA");
await testObject
.connect(nonOwner)
.addPCK(
processorPckCert.x,
processorPckCert.y,
processorPckCert.serial,
processorPckCert.notBefore,
processorPckCert.notAfter,
processorPckCert.extensions,
processorPckCert.authority,
processorPckCert.r,
processorPckCert.s
);
// 3. Add QE (using PCK)
console.log("3. Adding QE validated by PCK");
// Use the helper to add QE which already knows how to deal with IDs
await ContractHelpers.TrustDomainFacet.addQE(diamond);
// 4. Add TD (validated by QE)
console.log("4. Adding TD validated by QE");
// Use the helper to add TD which returns the correct ID
const tdId = await ContractHelpers.TrustDomainFacet.addTD(diamond);
// Verify TD is initially valid
expect(await testObject.isTDValid(tdId)).to.be.true;
console.log(` TD (ID: ${tdId}) is initially valid`);
// =========== EXPLOIT: Demonstrate failures in validation ============
console.log("\nDEMONSTRATING VALIDATION VULNERABILITIES:");
// 1. Revoke PCK - but TD remains valid!
console.log("1. Revoking PCK that signed the QE");
await testObject.connect(owner).revokePCK(processorPckCert.authority, processorPckCert.serial);
// Check if PCK was actually revoked (zeroed out)
const pckAfterRevocation = await testObject.getPCK(processorPckCert.authority, processorPckCert.serial);
expect(pckAfterRevocation.x).to.eq(0n);
console.log(" PCK successfully revoked (x coordinate is now 0)");
// But TD validation still returns true!
const isValidAfterPCKRevocation = await testObject.isTDValid(tdId);
expect(isValidAfterPCKRevocation).to.be.true;
console.log(" VULNERABILITY: TD still reports as valid after its PCK was revoked!");
// 2. Revoke Platform CA - but TD remains valid!
console.log("\n2. Revoking Platform CA (should invalidate entire chain)");
await testObject.connect(owner).revokePlatformCA(platformCaCert.serial);
// Check if Platform CA was actually revoked
const platformCAAfterRevocation = await testObject.getPlatformCAKey(platformCaCert.serial);
expect(platformCAAfterRevocation.x).to.eq(0n);
console.log(" Platform CA successfully revoked (x coordinate is now 0)");
// But TD validation still returns true!
const isValidAfterPlatformCARevocation = await testObject.isTDValid(tdId);
expect(isValidAfterPlatformCARevocation).to.be.true;
console.log(" VULNERABILITY: TD still reports as valid after its Platform CA was revoked!");
});
Output:
This function checks if the TD exists in storage. It does NOT verify that:
The Platform CA is still valid (not revoked, not expired)
The PCK is still valid (not revoked, not expired)
The QE that validated the TD is still valid
The full attestation chain is intact
Implement comprehensive certificate chain validation as follows:
function isTDValid(uint256 tdId) external view returns (bool) {
TrustDomainStorage.TDLayout storage tdLayout = TrustDomainStorage.tdLayout();
// Verify TD existence
if (tdLayout.tdQuotes[tdId].REPORT_DATA1 == 0) {
return false;
}
// Retrieve QE ID associated with this TD
uint256 qeId = tdLayout.tdToQe[tdId];
// Validate QE
TrustDomainStorage.QELayout storage qeLayout = TrustDomainStorage.qeLayout();
QEReport memory qeReport = qeLayout.qeReports[qeId];
if (qeReport.MRENCLAVE.length == 0) {
return false;
}
// Retrieve PCK associated with this QE
TrustDomainStorage.QEAuthority memory qeAuthority = qeLayout.qeAuthorities[qeId];
uint256 platformSerial = qeAuthority.platformSerial;
uint256 pckSerial = qeAuthority.pckSerial;
// Validate PCK
TrustDomainStorage.CertificateLayout storage certLayout = TrustDomainStorage.certificateLayout();
ECKey memory pck = certLayout.processorPCKs[platformSerial][pckSerial];
if (pck.x == 0 || pck.y == 0) {
return false;
}
// Validate Platform CA
ECKey memory platformCA = certLayout.platformCAs[platformSerial];
if (platformCA.x == 0 || platformCA.y == 0) {
return false;
}
// Check certificate expiration
if (block.timestamp < platformCA.notBefore ||
block.timestamp > platformCA.notAfter ||
block.timestamp < pck.notBefore ||
block.timestamp > pck.notAfter) {
return false;
}
return true;
}
SOLVED: Added certificate expiration tracking in the tdValidityEnd mapping, and implemented both existence and time validation in isTDValid().
// Unbounded Loop Creates Permanent DoS Risk in Platform CA Revocation (High)
The revokePlatformCA function in TrustDomainFacet.sol contains an unbounded loop that iterates through all PCKs associated with a Platform CA:
function revokePlatformCA(uint256 serial) external onlyOwner {
TrustDomainStorage.CertificateLayout storage layout = TrustDomainStorage.certificateLayout();
delete layout.platformCAs[serial];
uint256 curr_len = layout.processorPCKSerials[serial].length;
while (curr_len > 0) {
uint256 pck_serial = layout.processorPCKSerials[serial][curr_len - 1];
delete layout.processorPCKs[serial][pck_serial];
layout.processorPCKSerials[serial].pop();
curr_len -= 1;
}
}
This creates a critical denial-of-service vulnerability, as an attacker can register enough PCKs to cause this function to exceed the block gas limit, permanently preventing the revocation of the Platform CA. Even if the issue of duplicate PCK registration is resolved, this vulnerability remains severe because:
An attacker only needs to register approximately 2,500 to 4,000 unique PCKs to exceed the block gas limit.
Although more challenging than using duplicates, acquiring enough unique PCKs is still feasible for advanced attackers.
The core problem is the unbounded loop itself, not solely the ease of PCK registration.
Once exploited, the Platform CA becomes permanently non-revocable.
This flaw undermines the entire protocol's security model by making it impossible to revoke a compromised Platform CA, thereby invalidating all downstream security checks.
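As a rough check of the numbers above (assuming the roughly 30M block gas limit also used in the PoC below, and the per-PCK deletion cost implied by the 2,500 to 4,000 figure):

$$
\text{max revocable PCKs} \approx \left\lfloor \frac{30{,}000{,}000}{g_{\text{PCK}}} \right\rfloor, \qquad
g_{\text{PCK}} \approx 7{,}500\text{--}12{,}000 \text{ gas} \;\Rightarrow\; \approx 2{,}500\text{--}4{,}000 \text{ PCKs}
$$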
Add the following test to TrustDomainFacet.test.ts:
it("poc_UnboundedLoopDOS_RevokePlatformCA", async () => {
// ======= SETUP: Register large number of PCKs to same Platform CA =======
console.log("\nSetting up the DOS attack scenario...");
// Start timing for PCK registration phase
const startRegistration = Date.now();
const numPCKs = 4970;
console.log(`Registering ${numPCKs} PCKs for the same Platform CA...`);
// Register multiple PCKs
for (let i = 0; i < numPCKs; i++) {
await testObject
.connect(nonOwner)
.addPCK(
processorPckCert.x,
processorPckCert.y,
processorPckCert.serial,
processorPckCert.notBefore,
processorPckCert.notAfter,
processorPckCert.extensions,
processorPckCert.authority,
processorPckCert.r,
processorPckCert.s
);
}
const registrationTime = Date.now() - startRegistration;
console.log(`Registration of ${numPCKs} PCKs completed in ${registrationTime}ms`);
// ======= EXPLOIT: Measure gas consumption in revokePlatformCA =======
console.log("\nAttempting to revoke Platform CA with many PCKs...");
// Measure time and gas consumption for revocation
const startRevocation = Date.now();
// Attempt to revoke the platform CA (this should process all PCKs)
const tx = await testObject.connect(owner).revokePlatformCA(platformCaCert.serial);
const receipt = await tx.wait();
const revocationTime = Date.now() - startRevocation;
// Calculate and display gas usage statistics
const gasUsed = receipt?.gasUsed || 0n;
console.log(`\nRevocation Results:`);
console.log(`- Time taken: ${revocationTime}ms`);
console.log(`- Gas used: ${gasUsed.toString()}`);
console.log(`- Gas per PCK: ~${Number(gasUsed) / numPCKs}`);
// ======= ANALYSIS: Calculate gas limit breach point =======
// Ethereum block gas limit is typically around 30M gas
const blockGasLimit = 30000000;
const estimatedMaxPCKs = Math.floor(blockGasLimit / (Number(gasUsed) / numPCKs));
console.log(`\nVulnerability Analysis:`);
console.log(`- Current Ethereum block gas limit: ~${blockGasLimit} gas`);
console.log(`- Estimated PCKs to breach block gas limit: ~${estimatedMaxPCKs}`);
console.log(
`- If an attacker registered ${estimatedMaxPCKs} PCKs, the revokePlatformCA function would exceed the block gas limit`
);
// Verify the PCKs were indeed revoked in our test case
const pckAfterRevocation = await testObject.getPCK(processorPckCert.authority, processorPckCert.serial);
expect(pckAfterRevocation.x).to.eq(0n);
});
Output:
The following output shows that the PCKs were registered successfully; however, revoking all of them in a single call risks exceeding the block gas limit in a real EVM deployment:
Implement a batch revocation mechanism to avoid unbounded loops:
function revokePlatformCABatch(uint256 serial, uint256 batchSize) external onlyOwner {
TrustDomainStorage.CertificateLayout storage layout = TrustDomainStorage.certificateLayout();
// Mark the Platform CA as revoked
if (layout.platformCAs[serial].x != 0) {
delete layout.platformCAs[serial];
emit PlatformCARevoked(serial);
}
// Process a batch of PCKs from the end of the array
uint256 remainingPCKs = layout.processorPCKSerials[serial].length;
uint256 toProcess = remainingPCKs < batchSize ? remainingPCKs : batchSize;
for (uint256 i = 0; i < toProcess; i++) {
// Get the serial number from the end of the array
uint256 pck_serial = layout.processorPCKSerials[serial][remainingPCKs - 1];
// Delete the PCK from the mapping
delete layout.processorPCKs[serial][pck_serial];
// Pop the last element
layout.processorPCKSerials[serial].pop();
// Update our counter
remainingPCKs--;
}
emit BatchPCKRevocationProcessed(serial, toProcess, remainingPCKs);
}
SOLVED: Completely removed the problematic processorPCKSerials arrays and replaced them with reference counting to enable clean manual deletion.
// Outdated Hardware Can Compromise Entire Oracle Network (Medium)
The QuoteVerifier does not validate security version numbers (CPUSVN, ISVSVN, TEE_TCB_SVN) against minimum acceptable thresholds. Intel regularly updates these values when releasing security patches for hardware vulnerabilities such as Spectre, Meltdown, and other side-channel attacks.
Impact:
Allows outdated and potentially vulnerable hardware to be trusted as valid oracles
Permits attestations from compromised environments to be accepted
Weakens the entire chain of trust by allowing the least secure components to participate
Implement proper validation checks as follows:
function ensureQEReportIsValid(QEReport memory qeReport, ...) internal view {
// Verify minimum security version numbers
if (!_isCPUSVNAcceptable(qeReport.CPUSVN)) {
revert("QE_CPUSVN_TooLow");
}
if (uint16(qeReport.ISVSVN) < MINIMUM_ISVSVN) {
revert("QE_ISVSVN_TooLow");
}
// Existing validation...
}
function ensureTDQuoteIsValid(TDQuote memory tdQuote, ...) internal view {
// Verify minimum TEE TCB SVN
if (!_isTEETCBSVNAcceptable(tdQuote.TEE_TCB_SVN)) {
revert("TD_TCB_SVN_TooLow");
}
// Existing validation...
}
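The helpers _isCPUSVNAcceptable and _isTEETCBSVNAcceptable referenced above are not part of the scoped code. A minimal sketch of one possible implementation is shown below, assuming SVNs are handled as 16-byte values and that a minimum threshold (minimumCpuSvn here) is configured by the contract owner; the storage layout and byte width are assumptions:

// Hypothetical sketch only: storage layout, byte width, and configuration flow are assumptions.
abstract contract SvnThresholdCheck {
    bytes16 internal minimumCpuSvn; // assumed to be updated by the owner when Intel ships security patches

    function _isCPUSVNAcceptable(bytes16 reportedCpuSvn) internal view returns (bool) {
        // Every one of the 16 SVN components must be at least the configured minimum.
        for (uint256 i = 0; i < 16; i++) {
            if (uint8(reportedCpuSvn[i]) < uint8(minimumCpuSvn[i])) {
                return false;
            }
        }
        return true;
    }
}

The same per-component comparison pattern would apply to TEE_TCB_SVN values.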
SOLVED: Added allowedCpuSvn and allowedTeeTcbSvn validation checks in the QE and TD verification functions.
// Debug Mode Enables TD Memory Extraction (Medium)
The QuoteVerifier::ensureTDQuoteIsValid function does not verify critical security attributes, such as whether debug mode is disabled in TDATTRIBUTES or whether the appropriate flags are set in XFAM. These flags are essential indicators of the enclave's security posture.
function ensureTDQuoteIsValid(
TDQuote memory tdQuote,
uint256 qeId,
uint256 x,
uint256 y,
bytes32 authenticationData,
uint256 r,
uint256 s
) internal view {
TrustDomainStorage.QELayout storage layout = TrustDomainStorage.qeLayout();
bytes32 qeReportData = sha256(bytes.concat(bytes32(x), bytes32(y), authenticationData));
if (layout.qeReports[qeId].REPORT_DATA1 != qeReportData) {
revert TDReport_InvalidQuote();
}
bytes memory tdHeader = bytes.concat(TD_HEADER_PREAMBLE, tdQuote.USER_DATA);
bytes memory quoteBody1 = bytes.concat(
tdQuote.TEE_TCB_SVN,
tdQuote.MRSEAM,
tdQuote.MRSIGNERSEAM,
tdQuote.SEAMATTRIBUTES,
tdQuote.TDATTRIBUTES,
tdQuote.XFAM,
tdQuote.MRTD,
tdQuote.MRCONFIGID
);
bytes memory quoteBody = bytes.concat(
quoteBody1,
tdQuote.MROWNER,
tdQuote.MROWNERCONFIG,
tdQuote.RTMR0,
tdQuote.RTMR1,
tdQuote.RTMR2,
tdQuote.RTMR3,
tdQuote.REPORT_DATA1,
tdQuote.REPORT_DATA2
);
bytes memory report = bytes.concat(tdHeader, quoteBody);
bytes32 hash = sha256(report);
if (!_verifySignatureAllowMalleability(hash, r, s, x, y)) {
revert TDReport_InvalidSignature();
}
}
Impact:
The function may accept Trust Domains (TDs) configured insecurely, such as those with debug mode enabled.
Debug-enabled TDs are susceptible to host inspection and memory introspection attacks.
This vulnerability could allow attackers with host access to extract sensitive secrets from the TD.
Implement checks for critical security attributes as follows:
function ensureTDQuoteIsValid(TDQuote memory tdQuote, ...) internal view {
// Verify that debug mode is disabled (bit 30 in ATTRIBUTES)
if ((uint64(tdQuote.TDATTRIBUTES) & 0x40000000) != 0) {
revert("TD_DebugMode_Enabled");
}
// Verify other required security flags
if ((uint64(tdQuote.XFAM) & REQUIRED_XFAM_FLAGS) != REQUIRED_XFAM_FLAGS) {
revert("TD_Insecure_Features");
}
// Continue with existing validation...
}
SOLVED: A new function, ensureTDIsNotInDebugMode, was introduced to validate that debug mode is disabled.
// Duplicate PCK Registration in TrustDomainFacet (Medium)
The addPCK function in TrustDomainFacet.sol does not verify whether a PCK serial number has already been registered for a given Platform CA. This oversight allows the same PCK to be registered multiple times, resulting in a storage bloat vulnerability:
function addPCK(...) external {
// Verification logic...
TrustDomainStorage.CertificateLayout storage layout = TrustDomainStorage.certificateLayout();
layout.processorPCKs[authority][serial] = ECKey(x, y, notBeforeTimestamp, notAfterTimestamp);
layout.processorPCKSerials[authority].push(serial); // No duplicate check before pushing
}
The function unconditionally appends the PCK serial to the processorPCKSerials array while updating the mapping. This creates an inconsistency where:
The mapping (processorPCKs) holds one entry per unique serial number.
The array (processorPCKSerials) grows with each registration, including duplicates.
This design flaw enables anyone to register the same PCK repeatedly, causing storage bloat and increasing gas costs for operations that iterate over the array.
Implement a duplicate check before adding the serial to the array:
mapping(uint256 => mapping(uint256 => bool)) private processorPCKExists;
function addPCK(...) external {
// Verification logic...
layout.processorPCKs[authority][serial] = ECKey(...);
// Only add to array if it does not already exist
if (!layout.processorPCKExists[authority][serial]) {
layout.processorPCKSerials[authority].push(serial);
layout.processorPCKExists[authority][serial] = true;
}
}
SOLVED: Although no explicit duplicate check was implemented, removing the processorPCKSerials arrays eliminates the storage bloat issue; overwriting mapping entries is now harmless.
// Replay Attacks Through Stale TD Quotes (Medium)
The QuoteVerifier::ensureTDQuoteIsValid function does not verify the temporal freshness of TD quotes, which allows potentially outdated attestations to be accepted. TEE quotes should have a defined validity period to prevent replay attacks.
Impact:
Enables replay of old attestations that may no longer reflect the current hardware state.
An enclave compromised after attestation could continue to use an outdated quote.
Violates the security model that assumes attestations represent the enclave's current state.
Ensure the TDQuote is not stale by implementing a validity check:
// Add a timestamp field to the TDQuote structure
struct TDQuote {
// Existing fields...
uint256 attestationTimestamp; // Timestamp when the quote was generated
}
function ensureTDQuoteIsValid(TDQuote memory tdQuote, ...) internal view {
// Verify the quote is not older than the maximum allowed age (e.g., 24 hours)
if (block.timestamp > tdQuote.attestationTimestamp + MAX_QUOTE_AGE) {
revert("TD_Quote_TooOld");
}
// Existing validation logic...
}
RISK ACCEPTED: The Quex team indicated that the TD Quote lifetime assumptions are kept minimal because hardware vendors do not explicitly define expiration periods.
// Silent Failures Could Lock User Funds Forever (Medium)
The contract uses a low-level call to transfer ETH without verifying the return value in the following functions:
QuexActionFacet::pushData
QuexActionFacet::createRequest
QuexActionFacet::fulfillRequest
Each transfer follows this pattern:
payable(quexMonetary.getTreasury()).call{value: quexFee}("");
This approach is problematic because the transaction will succeed even if the ETH transfer fails. This creates risks for the protocol's fee management system and may result in user funds being locked indefinitely.
Verify the return values of ETH transfers and revert the transaction if the transfer fails.
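A minimal sketch of this check is shown below; the helper name and error are illustrative and not part of the Quex codebase:

error TreasuryTransferFailed();

// Hypothetical helper: forward the Quex fee and revert if the low-level call reports
// failure, so a failed treasury transfer cannot pass silently.
function _payTreasury(address treasury, uint256 quexFee) internal {
    (bool success, ) = payable(treasury).call{value: quexFee}("");
    if (!success) {
        revert TreasuryTransferFailed();
    }
}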
RISK ACCEPTED: The Quex team has chosen to prioritize data delivery over fee collection, accepting the risk that treasury payments may fail.
// Permanent Fund Lock Without Request Cancellation (Medium)
The QuexActionFacet contract does not provide a mechanism for users to cancel requests and reclaim their funds if a request remains unfulfilled. Once a request is created via createRequest(), the only way to recover the funds is through successful fulfillment:
function createRequest(uint256 flowId) external payable nonReentrant returns (uint256 requestId) {
// ... validation and fee calculation ...
requestId = ++QuexActionStorage.layout().lastRequestId;
emit RequestCreated(requestId, flowId, flow.pool);
QuexActionStorage.layout().requests[requestId] =
QuexActionStorage.Request(flowId, quexFee, relayerPremium, oraclePoolFee);
// ... excess refund logic ...
return requestId;
}
However, there is no corresponding cancelRequest() function that would allow users to recover funds from unfulfilled requests. This exposes users to the risk of permanent fund loss in several scenarios:
User funds may become permanently locked if no oracle fulfills the request.
Network congestion, high gas fees, or oracle downtime can indefinitely lock user capital.
This results in a poor user experience with no recourse for failed requests.
It discourages users from creating requests due to the risk of losing funds.
Relayer premium calculations may become inaccurate during periods of high gas price volatility.
Implement a request cancellation mechanism that permits only the original request creator to cancel their request:
// Add request creator to the Request struct
struct Request {
uint256 flowId;
uint256 quexFee;
uint256 relayerPremium;
uint256 oraclePoolFee;
address creator; // Store the request creator
}
function createRequest(uint256 flowId) external payable nonReentrant returns (uint256 requestId) {
// ... existing code ...
QuexActionStorage.layout().requests[requestId] =
QuexActionStorage.Request(flowId, quexFee, relayerPremium, oraclePoolFee, msg.sender);
// ... existing code ...
}
function cancelRequest(uint256 requestId) external nonReentrant {
QuexActionStorage.Layout storage layout = QuexActionStorage.layout();
QuexActionStorage.Request memory request = layout.requests[requestId];
if (request.flowId == 0) {
revert Request_NotFound();
}
// Only the creator can cancel the request
require(msg.sender == request.creator, "Only request creator can cancel");
// Delete the request from storage
delete layout.requests[requestId];
// Return funds to the creator
uint256 refundAmount = request.relayerPremium + request.oraclePoolFee;
(bool success, ) = payable(msg.sender).call{value: refundAmount}("");
require(success, "Refund failed");
emit RequestCancelled(requestId, request.flowId, msg.sender);
}
SOLVED: The cancelRequest function was added to enable request creators to retrieve funds from stale requests.
// TEE_TCB_SVN Reference Counter Leak Prevents Permanent Revocation (Medium)
The revokeTD() function in TrustDomainFacet.sol fails to properly decrement the teeTcbSvnTDCounter mapping when revoking Trust Domains. This defect results in a permanent denial-of-service condition affecting the TEE_TCB_SVN revocation mechanism.
function revokeTD(uint256 tdId) external onlyOwner {
TrustDomainStorage.Layout storage layout = TrustDomainStorage.layout();
layout.tdCounterByQE[layout.tdToQe[tdId]]--;
delete layout.tdQuotes[tdId]; // ❌ TEE_TCB_SVN information lost here
delete layout.tdSignerAddress[tdId];
delete layout.tdToQe[tdId];
emit TDReportRevoked(tdId);
// ❌ Missing: layout.teeTcbSvnTDCounter[tdQuote.TEE_TCB_SVN]--;
}
When a Trust Domain (TD) is created with a specific TEE_TCB_SVN value and later revoked, that TEE version becomes permanently unrevokable via revokeTeeTcbSvn() because the reference counter is never decremented. This flaw prevents administrators from addressing vulnerabilities in TEE software versions, thereby undermining the protocol's security model.
Modify the revokeTD() function to retrieve the TEE_TCB_SVN value before deletion and properly decrement its reference counter:
function revokeTD(uint256 tdId) external onlyOwner {
TrustDomainStorage.Layout storage layout = TrustDomainStorage.layout();
// ✅ FIX: Retrieve TEE_TCB_SVN before deletion
TDQuote memory tdQuote = layout.tdQuotes[tdId];
require(tdQuote.REPORT_DATA1 != 0, "TD does not exist");
layout.tdCounterByQE[layout.tdToQe[tdId]]--;
// ✅ FIX: Decrement TEE_TCB_SVN reference counter
layout.teeTcbSvnTDCounter[tdQuote.TEE_TCB_SVN]--;
delete layout.tdQuotes[tdId];
delete layout.tdSignerAddress[tdId];
delete layout.tdToQe[tdId];
delete layout.tdValidityEnd[tdId]; // ✅ BONUS: Fix storage leak
emit TDReportRevoked(tdId);
}
SOLVED: The recommended fix was implemented by the Quex team.
// Signature verification vulnerable to malleability (Low)
The QuexActionFacet contract employs the raw ecrecover function for signature verification without implementing safeguards against signature malleability. This flaw allows multiple valid signatures to be generated for the same message, potentially undermining future security mechanisms that depend on signature uniqueness.
Within the _isSignatureValid function, which verifies oracle message signatures, the contract invokes Ethereum's native ecrecover function as follows:
return ecrecover(ethSignedMessageHash, signature.v, signature.r, signature.s) == tdAddress;
This implementation does not enforce signature canonicalization as mandated by EIP-2, which requires the s value to be in the lower half of the elliptic curve order. Due to the properties of elliptic curve cryptography, for any valid signature (r, s, v), there exists another valid signature with parameters (r, curve_order - s, flip(v)), where flip(v) toggles between 27 and 28.
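Concretely, for the secp256k1 curve order n, both of the following recover the same signer address:

$$
(r,\; s,\; v) \quad\text{and}\quad (r,\; n - s,\; v'), \qquad
v' = \begin{cases} 28 & \text{if } v = 27 \\ 27 & \text{if } v = 28 \end{cases}
$$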
This vulnerability may:
Compromise any future replay protection mechanisms that rely on signature uniqueness.
Allow the same signed message to be accepted with different signature representations.
Cause inconsistencies in systems that track or index data by signature.
Although this issue does not pose an immediate risk of fund theft, it deviates from best practices and could impact other protocol features that assume signature uniqueness.
In the following scenario, the signature malleability is shown:
function test_signatureMalleability() public {
// Get a deterministic private key and address for signing
uint256 privateKey = uint256(keccak256(abi.encodePacked("malleability_test_key")));
address signerAddress = vm.addr(privateKey);
// Use a custom TD ID
uint256 customTdId = 999;
// Mock trust domain checks for our custom signer
vm.mockCall(address(diamond), abi.encodeWithSelector(ITrustDomainRegistry.getTDSignerAddress.selector, customTdId), abi.encode(signerAddress));
vm.mockCall(address(diamond), abi.encodeWithSelector(ITrustDomainRegistry.isTDValid.selector, customTdId), abi.encode(true));
vm.mockCall(oraclePoolAddress, abi.encodeWithSelector(IOraclePool.isInPool.selector, customTdId), abi.encode(true));
// Create oracle message
OracleMessage memory message = OracleMessage(actionId, DataItem(vm.getBlockTimestamp(), 0, abi.encode(1)));
// Create message hash for signing
bytes memory messageBytes = abi.encode(message);
bytes32 messageHash = keccak256(messageBytes);
bytes32 ethSignedMessageHash = keccak256(abi.encodePacked("\x19Ethereum Signed Message:\n32", messageHash));
// Sign message with our key
(uint8 v, bytes32 r, bytes32 s) = vm.sign(privateKey, ethSignedMessageHash);
// The secp256k1 curve order N
uint256 N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141;
// Create two equivalent but different signatures
ETHSignature memory originalSig = ETHSignature(r, s, v);
ETHSignature memory malleableSig = ETHSignature(r, bytes32(N - uint256(s)), v == 27 ? 28 : 27);
// Log signature details
console.log("Original Signature:");
console.log("v:", uint256(originalSig.v));
console.log("r:", uint256(originalSig.r));
console.log("s:", uint256(originalSig.s));
console.log("\nMalleable Signature:");
console.log("v:", uint256(malleableSig.v));
console.log("r:", uint256(malleableSig.r));
console.log("s:", uint256(malleableSig.s));
console.log("\nMath check: s1 + s2 == curve_order?", uint256(originalSig.s) + uint256(malleableSig.s) == N);
// Try first signature
testObject.pushData{value: quexFee}(message, originalSig, flowId, customTdId);
// Try malleable signature
testObject.pushData{value: quexFee}(message, malleableSig, flowId, customTdId);
console.log("Both signatures were accepted");
}
Replace the current signature verification with OpenZeppelin's ECDSA library, which includes protection against signature malleability:
import "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";
function _isSignatureValid(OracleMessage memory oracleMessage, ETHSignature memory signature, address tdAddress) private pure returns (bool) {
bytes memory message = abi.encode(oracleMessage);
bytes32 messageHash = keccak256(message);
bytes32 ethSignedMessageHash = keccak256(abi.encodePacked("\x19Ethereum Signed Message:\n32", messageHash));
return ECDSA.recover(ethSignedMessageHash, signature.v, signature.r, signature.s) == tdAddress;
}
SOLVED: The contract now uses @solidstate/contracts/cryptography/ECDSA.sol to prevent signature malleability.
// Arbitrary contract call vulnerability through unrestricted flow creation (Low)
The Quex protocol contains an access control vulnerability that allows an attacker to execute arbitrary function calls on any contract. This issue stems from two related problems:
Unrestricted Flow Creation: The createFlow() function in FlowFacet.sol lacks proper access control:
struct Flow {
uint256 gasLimit;
uint256 actionId;
address pool;
address consumer;
bytes4 callback;
}
function createFlow(Flow memory flow) external returns (uint256 flowId) {
FlowStorage.Layout storage layout = FlowStorage.layout();
flowId = layout.lastFlowId + 1;
layout.flows[flowId] = flow;
layout.lastFlowId = flowId;
emit FlowAdded(flowId);
return flowId;
}
Dangerous Call to Arbitrary Consumer: The pushData() function performs an unchecked low-level call to the consumer address specified in the flow:
bytes memory payload = abi.encodeWithSelector(flow.callback, flowId, message.dataItem, IdType.FlowId);
(bool success, ) = flow.consumer.call{gas: flow.gasLimit}(payload);
This vulnerability enables an attacker to:
Create a malicious flow targeting any contract as the consumer with any function selector.
Invoke pushData() with a valid oracle message to trigger the arbitrary function call.
Potentially drain funds from the contract, manipulate the protocol state, or disrupt critical system operations.
In this scenario, an allowance for an ERC20 token that the contract may hold can be updated, putting assets at immediate risk:
function test_dangerousCallToArbitraryConsumer() public {
address attacker = address(makeAddr("attacker"));
vm.deal(attacker, pushFee);
// Give the contract some tokens
uint256 initialSupply = 1000 ether;
MockERC20 token = new MockERC20();
// Mint tokens to the test contract so we can transfer them
deal(address(token), address(testObject), 100 ether);
assertEq(token.balanceOf(address(testObject)), 100 ether, "Contract should have tokens");
// Create a malicious flow targeting the token's approve function
uint256 maliciousFlowId = uint256(uint160(attacker));
uint256 maliciousActionId = uint256(keccak256("malicious_action"));
// Get the selector for approve(address,uint256) function
bytes4 approveSelector = bytes4(keccak256("approve(address,uint256)"));
// Create flow targeting the token's approve function
Flow memory maliciousFlow = Flow(
1000000, // gasLimit
maliciousActionId, // actionId
oraclePoolAddress, // pool
address(token), // consumer (token contract)
approveSelector // callback to approve tokens
);
// Mock the getFlow call
vm.mockCall(address(diamond), abi.encodeWithSelector(IFlowRegistry.getFlow.selector, maliciousFlowId), abi.encode(maliciousFlow));
// TRICK #2: Craft a dataItem that will result in a large approval amount
// The first 32 bytes of dataItem's encoding will be interpreted as the amount
bytes memory amountBytes = abi.encode(type(uint256).max);
// Use the predefined TD_validInQuex_inOraclePool instead of td
TDTestData memory tdData = TD_validInQuex_inOraclePool;
OracleMessage memory message = OracleMessage(maliciousActionId, DataItem(vm.getBlockTimestamp(), 0, amountBytes));
ETHSignature memory signature = _signOracleMessage(message, tdData);
// Execute attack
vm.prank(attacker);
testObject.pushData{value: pushFee}(message, signature, maliciousFlowId, tdData.tdId);
// Verify the attack succeeded - check that the attacker has approval
assertGt(token.allowance(address(testObject), attacker), 0, "Attacker should have approval to spend contract's tokens");
}
Ensure that the oracle contract does not have access to token accounts or funds, preventing attackers from exploiting it to compromise assets.
RISK ACCEPTED: The Quex team considers unrestricted flow creation a feature and places responsibility on receiving contracts to perform validation.
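Since flow creation remains unrestricted, receiving contracts are expected to defend themselves. A minimal sketch of consumer-side validation is shown below; the contract, its state variables, and the callback signature are illustrative assumptions, not the actual Quex consumer interface:

// Hypothetical consumer: accept callbacks only from the Quex diamond and only for
// the flow this consumer registered for itself.
contract ExampleConsumer {
    address public immutable quexDiamond; // assumed Quex entry point address
    uint256 public expectedFlowId;        // recorded when the consumer creates its flow

    constructor(address _quexDiamond) {
        quexDiamond = _quexDiamond;
    }

    function registerFlow(uint256 flowId) external {
        // In practice this would be restricted to the consumer's owner.
        expectedFlowId = flowId;
    }

    function onOracleData(uint256 flowId, bytes calldata dataItem) external {
        require(msg.sender == quexDiamond, "caller is not Quex");
        require(flowId == expectedFlowId, "unexpected flow");
        // ... decode and process dataItem ...
    }
}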
// Storage Bloat From Unremoved Revoked PCKs (Low)
The TrustDomainFacet::revokePCK function deletes the PCK from the mapping but does not remove it from the processorPCKSerials array. Proper state management requires that mappings and arrays remain synchronized.
function revokePCK(uint256 platformSerial, uint256 pckSerial) external onlyOwner {
delete TrustDomainStorage.certificateLayout().processorPCKs[platformSerial][pckSerial];
}
Impact:
Revoked PCKs persist in the array, causing storage bloat.
Increased gas costs for operations that iterate over all PCKs.
Potential for inconsistent state representation and unexpected behavior.
Update the processorPCKSerials array to accurately reflect the current state.
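As an illustration of the recommendation (written against the pre-remediation storage layout, and assuming an additional pckSerialIndex mapping is introduced to track array positions), a swap-and-pop removal could look like this:

// Hypothetical sketch only: pckSerialIndex is an assumed addition to the storage layout.
function revokePCK(uint256 platformSerial, uint256 pckSerial) external onlyOwner {
    TrustDomainStorage.CertificateLayout storage layout = TrustDomainStorage.certificateLayout();
    delete layout.processorPCKs[platformSerial][pckSerial];

    uint256[] storage serials = layout.processorPCKSerials[platformSerial];
    if (serials.length == 0) {
        return;
    }
    uint256 index = layout.pckSerialIndex[platformSerial][pckSerial];
    uint256 lastIndex = serials.length - 1;
    if (index != lastIndex) {
        // Move the last serial into the vacated slot to keep the array compact.
        uint256 lastSerial = serials[lastIndex];
        serials[index] = lastSerial;
        layout.pckSerialIndex[platformSerial][lastSerial] = index;
    }
    serials.pop();
    delete layout.pckSerialIndex[platformSerial][pckSerial];
}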
SOLVED: Storage bloat was completely eliminated by removing the processorPCKSerials arrays and implementing proper reference counting.
// Critical Functions Lack Comprehensive Validations (Low)
Several critical setter functions and system flow creation processes lack adequate validation of input parameters:
QuexMonetaryFacet::setTreasury
TreasuryFacet::setTreasury
QuexAddressFacet::setQuexAddress
FlowFacet::createFlow
For instance, the createFlow function does not validate any of the flow parameters, as shown below:
function createFlow(Flow memory flow) external returns (uint256 flowId) {
FlowStorage.Layout storage layout = FlowStorage.layout();
flowId = layout.lastFlowId + 1;
layout.flows[flowId] = flow;
layout.lastFlowId = flowId;
emit FlowAdded(flowId);
return flowId;
}
This omission allows flows to be created with zero addresses for the pool and consumer, zero gas limits, and invalid callback signatures.
Impact:
Setting the treasury to the zero address permanently locks all protocol fees.
Creating flows with zero addresses disrupts the oracle system.
Zero gas limits cause all callbacks to fail.
Invalid callback signatures may render flows unusable.
Implement comprehensive validation for all critical input parameters.
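A minimal sketch for createFlow is shown below; the custom error names are illustrative assumptions:

error Flow_ZeroAddress();
error Flow_ZeroGasLimit();
error Flow_InvalidCallback();

// Hypothetical sketch: reject obviously invalid flow parameters before storing them.
function createFlow(Flow memory flow) external returns (uint256 flowId) {
    if (flow.pool == address(0) || flow.consumer == address(0)) {
        revert Flow_ZeroAddress();
    }
    if (flow.gasLimit == 0) {
        revert Flow_ZeroGasLimit();
    }
    if (flow.callback == bytes4(0)) {
        revert Flow_InvalidCallback();
    }
    FlowStorage.Layout storage layout = FlowStorage.layout();
    flowId = layout.lastFlowId + 1;
    layout.flows[flowId] = flow;
    layout.lastFlowId = flowId;
    emit FlowAdded(flowId);
    return flowId;
}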
SOLVED: The recommended mitigation has been implemented.
// Data Overwrite Risk from Hash Collisions (Low)
The functions within RequestActionFacet add data (Requests, Patches, Filters, Schemas, Actions) to storage mappings. The keys for these mappings (requestId, patchId, filterId, schemaId, actionId) are derived by hashing the input data using keccak256. These functions write the data (or their constituent IDs, in the case of actions) directly to the corresponding mapping slot without first verifying whether an entry already exists at that key. In the highly unlikely event of a natural keccak256 collision between two distinct valid inputs, adding the second input would overwrite the data associated with the first, resulting in state corruption where retrieving the ID returns unexpected data.
Before writing to storage, implement a check to confirm that the target mapping slot is empty (or that a key field within the stored struct holds its default value). This measure prevents accidental overwrites and makes collisions explicit.
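As an illustration only (the struct, its fields, and the storage names below are assumptions; the real layout differs), the guard could look like this:

error Action_AlreadyExists();

// Hypothetical sketch: refuse to overwrite a slot whose hash-derived key is already occupied.
function addAction(Action memory action) external returns (uint256 actionId) {
    actionId = uint256(keccak256(abi.encode(action)));
    RequestActionStorage.Layout storage layout = RequestActionStorage.layout();
    // Treat a non-default field of the stored struct as "slot already in use".
    if (layout.actions[actionId].requestId != 0) {
        revert Action_AlreadyExists();
    }
    layout.actions[actionId] = action;
    emit ActionAdded(actionId);
}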
RISK ACCEPTED: The Quex team stated that a keccak256 collision would compromise the entire Ethereum ecosystem, rendering additional checks unnecessary.
// False Upgrade Events Can Mislead Off-Chain Monitoring (Informational)
In the DiamondWritableInternal::_diamondCut function, the DiamondCut event is emitted before the initialization process:
// Event must be emitted before the initializer is called, in case the initializer triggers further diamond cuts
emit DiamondCut(facetCuts, target, data);
_initialize(target, data);
This behavior is explicitly noted in a comment, but it results in events being emitted even if the initialization fails.
Impact:
May cause inconsistent event logs if initialization does not complete successfully.
Reduces the reliability of event monitoring for tracking successful upgrades.
Can lead to complications with off-chain monitoring systems relying on these events.
Although this is a deliberate design choice, a safer practice would be to emit events only after state changes have been successfully applied.
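A sketch of that safer ordering is shown below; note that, per the upstream comment, the current order is intentional, and reordering changes how events are sequenced if the initializer itself performs further diamond cuts:

// Hypothetical reordering: run the initializer first, then emit, so the event
// is logged only after the upgrade's state changes have been applied.
_initialize(target, data);
emit DiamondCut(facetCuts, target, data);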
ACKNOWLEDGED: The Quex team has acknowledged this finding.
// QuexDiamond Can Get Unintended Admin Rights (Informational)
In the QuexDiamond::init function, the diamond immediately assigns the DefaultAdminRole to the deployer:
_grantRole(QuexRoles.DefaultAdminRole, msg.sender);
There is no validation of the deployer's identity, which may pose a risk if the contract is deployed via a factory or proxy pattern.
Impact:
If deployed through a factory, the factory contract receives admin privileges.
This can result in unintended access control if the deployment process is misunderstood.
Proxied deployments become more complex and prone to errors.
Introduce an address parameter to explicitly designate the owner:
function init(address _ownerAddress) external initializer {
require(_ownerAddress != address(0));
_setOwner(_ownerAddress);
_grantRole(QuexRoles.DefaultAdminRole, _ownerAddress);
//...
}
SOLVED: The recommended mitigation has been implemented.
Halborn employed automated testing techniques to enhance coverage of specific areas within the smart contracts under review. One of the primary tools utilized was Slither, a static analysis framework for Solidity. After successfully verifying and compiling the smart contracts in the repository into their ABI and binary formats, Slither was executed against the contracts. This tool performs static verification of mathematical relationships between Solidity variables to identify invalid or inconsistent usage of the contracts' APIs throughout the entire codebase.
The security team conducted a thorough review of the findings generated by the Slither static analysis tool. No significant issues were identified, as the detected findings were determined to be false positives.
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.