Ripple - Smart Contract Audit - Credentials


Prepared by:


HALBORN

Last Updated 06/27/2025

Date of Engagement: January 6th, 2025 - January 30th, 2025

Summary

100% of all REPORTED Findings have been addressed

All findings

8

Critical

0

High

0

Medium

2

Low

2

Informational

4


1. Introduction

Ripple engaged Halborn to conduct a security assessment on XRP Ledger (XRPL) feature amendments beginning on January 6th, 2025 and ending on January 30th, 2025, focusing on modifications to the codebase between commit 4c2e6a3 and commit cd9b5c9. The review specifically targets changes introduced between these commits, assuming the validity of the pre-existing logic and excluding verification of its security, e.g., input validation of previously existing parameters.


The Credentials feature introduced new transaction validation mechanisms and structures that allow credential-based authorization within the Ripple network. This feature enables accounts to issue, manage, and verify credentials for transaction approval, adding an additional layer of security and control.

2. Assessment Summary

The team at Halborn assigned a full-time security engineer to assess the security of the node. The security engineer is a blockchain and smart contract security expert with advanced penetration testing and smart contract hacking skills, and deep knowledge of multiple blockchain protocols.

The purpose of this assessment is to:

    • Ensure that the new features operate as intended.

    • Identify potential security issues with the new features.


In summary, Halborn identified several improvements to reduce the likelihood and impact of risks, most of which were acknowledged by the Ripple team. The main ones were the following:

    • Ensure that expired NFT offers and credentials are handled separately by using distinct error codes.

    • Explicitly reject transactions signed with a disabled Master Key, even if it is also set as the Regular Key.

    • Replace assertions with explicit error handling that operates in all build configurations.

    • Include a check that confirms whether the recipient has configured Deposit Auth before invoking verifyDepositPreauth.

3. Test Approach and Methodology

Halborn performed a combination of manual code review and automated security testing to balance efficiency, timeliness, practicality, and accuracy in regard to the scope of the node security assessment. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance coverage and can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the assessment:

    • Research into the architecture, purpose, and use of the platform.

    • Manual code review and walkthrough to identify any logic issues.

    • Graphing out functionality and contract logic/connectivity/functions.


4. RISK METHODOLOGY

Every vulnerability and issue observed by Halborn is ranked based on two sets of Metrics and a Severity Coefficient. This system is inspired by the industry standard Common Vulnerability Scoring System.
The two Metric sets are: Exploitability and Impact. Exploitability captures the ease and technical means by which vulnerabilities can be exploited and Impact describes the consequences of a successful exploit.
The Severity Coefficient is designed to further refine the accuracy of the ranking with two factors: Reversibility and Scope. These capture the impact of the vulnerability on the environment as well as the number of users and smart contracts affected.
The final score is a value between 0 and 10, rounded up to 1 decimal place, with 10 corresponding to the highest security risk. This provides an objective and accurate rating of the severity of security vulnerabilities in smart contracts.
The system is designed to assist in identifying and prioritizing vulnerabilities based on their level of risk to address the most critical issues in a timely manner.

4.1 EXPLOITABILITY

Attack Origin (AO):
Captures whether the attack requires compromising a specific account.
Attack Cost (AC):
Captures the cost of exploiting the vulnerability incurred by the attacker relative to sending a single transaction on the relevant blockchain. Includes but is not limited to financial and computational cost.
Attack Complexity (AX):
Describes the conditions beyond the attacker’s control that must exist in order to exploit the vulnerability. Includes but is not limited to macro situation, available third-party liquidity and regulatory challenges.
Metrics:
EXPLOITABILITY METRIC (m_e) | METRIC VALUE | NUMERICAL VALUE
Attack Origin (AO) | Arbitrary (AO:A) | 1
Attack Origin (AO) | Specific (AO:S) | 0.2
Attack Cost (AC) | Low (AC:L) | 1
Attack Cost (AC) | Medium (AC:M) | 0.67
Attack Cost (AC) | High (AC:H) | 0.33
Attack Complexity (AX) | Low (AX:L) | 1
Attack Complexity (AX) | Medium (AX:M) | 0.67
Attack Complexity (AX) | High (AX:H) | 0.33
Exploitability E is calculated using the following formula:

E = \prod m_e

4.2 IMPACT

Confidentiality (C):
Measures the impact to the confidentiality of the information resources managed by the contract due to a successfully exploited vulnerability. Confidentiality refers to limiting access to authorized users only.
Integrity (I):
Measures the impact to integrity of a successfully exploited vulnerability. Integrity refers to the trustworthiness and veracity of data stored and/or processed on-chain. Integrity impact directly affecting Deposit or Yield records is excluded.
Availability (A):
Measures the impact to the availability of the impacted component resulting from a successfully exploited vulnerability. This metric refers to smart contract features and functionality, not state. Availability impact directly affecting Deposit or Yield is excluded.
Deposit (D):
Measures the impact to the deposits made to the contract by either users or owners.
Yield (Y):
Measures the impact to the yield generated by the contract for either users or owners.
Metrics:
IMPACT METRIC (m_I) | METRIC VALUE | NUMERICAL VALUE
Confidentiality (C) | None (C:N) | 0
Confidentiality (C) | Low (C:L) | 0.25
Confidentiality (C) | Medium (C:M) | 0.5
Confidentiality (C) | High (C:H) | 0.75
Confidentiality (C) | Critical (C:C) | 1
Integrity (I) | None (I:N) | 0
Integrity (I) | Low (I:L) | 0.25
Integrity (I) | Medium (I:M) | 0.5
Integrity (I) | High (I:H) | 0.75
Integrity (I) | Critical (I:C) | 1
Availability (A) | None (A:N) | 0
Availability (A) | Low (A:L) | 0.25
Availability (A) | Medium (A:M) | 0.5
Availability (A) | High (A:H) | 0.75
Availability (A) | Critical (A:C) | 1
Deposit (D) | None (D:N) | 0
Deposit (D) | Low (D:L) | 0.25
Deposit (D) | Medium (D:M) | 0.5
Deposit (D) | High (D:H) | 0.75
Deposit (D) | Critical (D:C) | 1
Yield (Y) | None (Y:N) | 0
Yield (Y) | Low (Y:L) | 0.25
Yield (Y) | Medium (Y:M) | 0.5
Yield (Y) | High (Y:H) | 0.75
Yield (Y) | Critical (Y:C) | 1
Impact I is calculated using the following formula:

I = \max(m_I) + \frac{\sum m_I - \max(m_I)}{4}

4.3 SEVERITY COEFFICIENT

Reversibility (R):
Describes the share of the exploited vulnerability effects that can be reversed. For upgradeable contracts, assume the contract private key is available.
Scope (S):
Captures whether a vulnerability in one vulnerable contract impacts resources in other contracts.
Metrics:
SEVERITY COEFFICIENT (C) | COEFFICIENT VALUE | NUMERICAL VALUE
Reversibility (r) | None (R:N) | 1
Reversibility (r) | Partial (R:P) | 0.5
Reversibility (r) | Full (R:F) | 0.25
Scope (s) | Changed (S:C) | 1.25
Scope (s) | Unchanged (S:U) | 1
Severity Coefficient C is obtained by the following product:

C = r \cdot s

The Vulnerability Severity Score S is obtained by:

S = \min(10, E \cdot I \cdot C \cdot 10)

The score is rounded up to 1 decimal place.
Severity | Score Value Range
Critical | 9 - 10
High | 7 - 8.9
Medium | 4.5 - 6.9
Low | 2 - 4.4
Informational | 0 - 1.9
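
As a purely illustrative, hypothetical example (not a score assigned to any finding in this report), consider a vulnerability rated AO:A, AC:L, AX:M, C:N, I:M, A:L, D:N, Y:N, R:P, S:U:

E = 1 \cdot 1 \cdot 0.67 = 0.67
I = 0.5 + \frac{(0 + 0.5 + 0.25 + 0 + 0) - 0.5}{4} = 0.5625
C = 0.5 \cdot 1 = 0.5
S = \min(10, 0.67 \cdot 0.5625 \cdot 0.5 \cdot 10) \approx 1.9

After rounding up to one decimal place, the score of 1.9 falls in the Informational range.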

5. SCOPE

Files and Repository
(a) Repository: rippled
(b) Assessed Commit ID: cd9b5c9
(c) Items in scope:
  • src/libxrpl/protocol/ErrorCodes.cpp
  • src/libxrpl/protocol/Indexes.cpp
  • src/libxrpl/protocol/InnerObjectFormats.cpp
  • (additional in-scope files omitted)
Out-of-Scope: The review specifically targets changes introduced between commit 4c2e6a3 and commit cd9b5c9, assuming the validity of the pre-existing logic and excluding verification of its security, e.g., input validation of previously existing parameters.
Out-of-Scope: New features/implementations after the remediation commit IDs.

6. Assessment Summary & Findings Overview

Critical

0

High

0

Medium

2

Low

2

Informational

4

Security analysis | Risk level | Remediation Date
Incorrect handling of expired Credentials and NFT offers | Medium | Risk Accepted - 02/18/2025
Master Key persistence allows transactions despite being disabled | Medium | Risk Accepted - 01/28/2025
Assertions may lead to inconsistent behavior in production | Low | Risk Accepted - 02/18/2025
Inconsistent credential handling when Deposit Auth is not configured | Low | Risk Accepted - 02/18/2025
Incorrect error code for duplicate credential entries | Informational | Acknowledged - 03/16/2025
Misleading error handling for invalid ledger index values | Informational | Solved - 03/11/2025
Expiration validation in doApply rather than preclaim | Informational | Acknowledged - 02/18/2025
Inability to replace an expired credential due to duplicate checks | Informational | Acknowledged - 02/26/2025

7. Findings & Tech Details

7.1 Incorrect handling of expired Credentials and NFT offers


Medium

Description

In the Transactor::operator() function, the transaction processing logic treats expired NFT offers and expired credentials as the same condition (tecEXPIRED). The following assignments cause this issue:

bool const doNFTokenOffers = (result == tecEXPIRED);
bool const doCredentials = (result == tecEXPIRED);

Since both flags are set based on the same transaction result (tecEXPIRED), this could lead to cases where a transaction processing expired credentials unintentionally affects NFT offers, or vice versa.


While the Ripple team has clarified that expired credentials or NFT offers are only deleted if they are explicitly referenced in the transaction, this logic still poses a risk of unintended interactions, inconsistencies, or future regressions if transaction handling changes. Furthermore, the Ripple team has acknowledged a known bug that prevents expired NFT offers from being deleted, meaning the behavior may change once the bug is fixed.

Recommendation

It is recommended to ensure that expired NFT offers and credentials are handled separately by using distinct error codes, e.g.: tecEXPIRED_NFT and tecEXPIRED_CREDENTIAL.
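
A minimal sketch of the decoupled flags, assuming hypothetical tecEXPIRED_NFT and tecEXPIRED_CREDENTIAL result codes were added to the TER enumeration (they do not exist in the current codebase):

bool const doNFTokenOffers = (result == tecEXPIRED_NFT);       // hypothetical code
bool const doCredentials = (result == tecEXPIRED_CREDENTIAL);  // hypothetical code

With distinct codes, a transaction that fails due to an expired credential can no longer be interpreted as also requesting NFT offer cleanup, and vice versa.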

Remediation Comment

RISK ACCEPTED: The Ripple team accepted this risk and stated the following:

It can't happen because candidates for deletion are added by the transaction in doApply() to a special list. This code will always delete only what the transaction asks to delete.

References
src/xrpld/app/tx/detail/Transactor.cpp#L929-L930

7.2 Master Key persistence allows transactions despite being disabled


Medium

Description

An account that previously set its Regular Key equal to its Master Key before the activation of the fixMasterKeyAsRegularKey amendment can still sign transactions using the Master Key even after explicitly disabling it. This occurs because the Master Key remains valid through the Regular Key mechanism.


This issue affects all transaction types, as accounts that believe they have disabled their Master Key can still sign transactions unintentionally. For example, in credential management transactions, an account that has disabled its Master Key may still sign CredentialCreate, CredentialAccept, or CredentialDelete transactions, despite assuming it has removed its ability to authorize such actions.


Because Transactor::checkSingleSign() does not properly reject a disabled Master Key when it is also used as a Regular Key, affected accounts may unknowingly retain unauthorized signing capabilities:

    if (ctx.view.rules().enabled(fixMasterKeyAsRegularKey))
    {
        // Signed with regular key.
        if ((*sleAccount)[~sfRegularKey] == idSigner)
        {
            // ❌ Allows signing with RegularKey even 
            // if it's the disabled Master Key
            return tesSUCCESS;
        }

        // Signed with enabled master key.
        if (!isMasterDisabled && idAccount == idSigner)
        {
            return tesSUCCESS;
        }

        // Signed with disabled master key.
        if (isMasterDisabled && idAccount == idSigner)
        {
            return tefMASTER_DISABLED;
        }

        // Signed with any other key.
        return tefBAD_AUTH;
    }
Recommendation

It is recommended to modify Transactor::checkSingleSign to explicitly reject transactions signed with a disabled Master Key, even if it is also set as the Regular Key. The validation should ensure that once the Master Key is disabled (via lsfDisableMaster), it cannot be used for signing in any way, preventing unintended access through the Regular Key mechanism. However, to allow affected legacy accounts to recover, SetRegularKey transactions should be exempted from this restriction.
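
The following is a hedged sketch of the suggested ordering, reusing the identifiers from the excerpt above; the SetRegularKey exemption via ctx.tx.getTxnType() is an assumption about how the fix could be wired, not Ripple's implementation:

    if (ctx.view.rules().enabled(fixMasterKeyAsRegularKey))
    {
        bool const signedWithMaster = (idAccount == idSigner);

        // Reject a disabled Master Key up front, even if it is also set as
        // the Regular Key. SetRegularKey is exempted so that affected
        // legacy accounts can still rotate to a fresh Regular Key.
        if (isMasterDisabled && signedWithMaster &&
            ctx.tx.getTxnType() != ttREGULAR_KEY_SET)
            return tefMASTER_DISABLED;

        // Signed with the Regular Key (or the exempted legacy case above).
        if ((*sleAccount)[~sfRegularKey] == idSigner)
            return tesSUCCESS;

        // Signed with an enabled Master Key.
        if (!isMasterDisabled && signedWithMaster)
            return tesSUCCESS;

        // Signed with a disabled Master Key that is not the Regular Key
        // (on a SetRegularKey transaction), or with any other key.
        return signedWithMaster ? tefMASTER_DISABLED : tefBAD_AUTH;
    }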

Remediation Comment

RISK ACCEPTED: The Ripple team accepted this risk and stated the following:

Only 23 accounts out of approximately 6 million are affected by this issue, so the complexity of implementing the solution, including the necessary exception, may not be justified.

7.3 Assertions may lead to inconsistent behavior in production


Low

Description

The codebase extensively uses assert() statements, which are disabled in production builds when NDEBUG is defined. This can result in silent failures, undefined behavior, and inconsistent ledger states since conditions enforced during development may not be validated in production. Additionally, assertions are used in transaction processing, ledger operations, and security-sensitive functions, which can introduce unintended side effects when they are omitted.


For example, the following assertion ensures a pointer is not null, but in a production build, it would be skipped, potentially leading to a segmentation fault or undefined behavior:

TER
Transactor::consumeSeqProxy(SLE::pointer const& sleAccount)
{
    assert(sleAccount);
    SeqProxy const seqProx = ctx_.tx.getSeqProxy();
    if (seqProx.isSeq())
    {
        // Note that if this transaction is a TicketCreate,
        // then the transaction will modify the account root 
        // sfSequence yet again.
        sleAccount->setFieldU32(sfSequence, seqProx.value() + 1);
        return tesSUCCESS;
    }
    return ticketDelete(
        view(), account_, getTicketIndex(account_, seqProx), j_);
}
Recommendation

Assertions should be replaced with explicit error handling that operates in all build configurations. Instead of using assert(), apply conditionals with proper logging and error responses. For instance, use if (!condition) { return errorCode; } along with appropriate logging mechanisms to ensure that failures are captured and handled correctly.
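
For instance, a minimal sketch of the consumeSeqProxy entry check with the assertion replaced by a release-safe guard (the choice of tefINTERNAL and of the fatal log stream are assumptions for illustration):

if (!sleAccount)
{
    // Runs in both debug and release builds, unlike assert(sleAccount).
    JLOG(j_.fatal()) << "consumeSeqProxy: account root entry is missing.";
    return tefINTERNAL;
}
SeqProxy const seqProx = ctx_.tx.getSeqProxy();
// ... remainder of the function unchanged ...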

Remediation Comment

RISK ACCEPTED: The Ripple team accepted this risk and stated the following:

We don't know why these asserts are left in the code; maybe for some debug purpose. Their presence is useless; the code is constructed in such a way that those asserts will never fire (neither in debug nor in release builds).

References
src/libxrpl/protocol/ErrorCodes.cpp#L214
src/libxrpl/protocol/Indexes.cpp#L98, L138, L254, L272, L390
src/xrpld/app/tx/detail/applySteps.cpp#L160, L209, L225, L237, L284
src/xrpld/app/tx/detail/DeleteAccount.cpp#L348, L352, L380, L394
src/xrpld/app/tx/detail/InvariantCheck.cpp#L403
src/xrpld/app/tx/detail/PayChan.cpp#L153, L549
src/xrpld/app/tx/detail/Payment.cpp#L528
src/xrpld/app/tx/detail/Transactor.cpp#L371, L443, L457, L582, L583, L819, L834, L891, L947
src/xrpld/net/detail/RPCCall.cpp#L965
src/xrpld/rpc/detail/RPCHelpers.cpp#L547, L554, L559, L968, L981, L1087, L1121

7.4 Inconsistent credential handling when Deposit Auth is not configured


Low

Description

When Deposit Auth is not configured for a recipient, transactions that include credentials behave inconsistently across multiple functions. The functions DeleteAccount::doApply(), EscrowFinish::doApply(), PayChanClaim::doApply(), and Payment::doApply() validate if featureDepositAuth is enabled by checking ctx_.view().rules().enabled(featureDepositAuth) before invoking verifyDepositPreauth. However, these functions do not verify whether the recipient has actually configured Deposit Auth.


As a result:

  • If valid credentials are included, the transaction succeeds.

  • If expired or invalid credentials are included, the transaction fails.


This partial validation leads to unintended behavior, where credentials influence transaction outcomes even when Deposit Auth is not set by the recipient:

if (ctx_.view().rules().enabled(featureDepositAuth)) {
    // Called without confirming if Deposit Auth
    // is configured by the recipient
    verifyDepositPreauth(...); 
}
Recommendation

Update the functions DeleteAccount::doApply(), EscrowFinish::doApply(), PayChanClaim::doApply(), and Payment::doApply() to include a check that confirms whether the recipient has configured Deposit Auth before invoking verifyDepositPreauth. If Deposit Auth is not configured, credentials should be ignored entirely to ensure consistent transaction behavior.
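
A hedged sketch of the suggested gating, assuming the recipient's account root is available as sleDst in the surrounding doApply code (the argument list of verifyDepositPreauth is elided exactly as in the excerpt above):

if (ctx_.view().rules().enabled(featureDepositAuth) &&
    sleDst && (sleDst->getFlags() & lsfDepositAuth))
{
    // Credentials are only evaluated when the recipient has actually
    // enabled Deposit Authorization; otherwise they are ignored.
    verifyDepositPreauth(...);
}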

Remediation Comment

RISK ACCEPTED: The Ripple team accepted this risk and stated the following:

It's intentional, for a better user experience.

7.5 Incorrect error code for duplicate credential entries


Informational

Description

The authorized function in CredentialHelpers.cpp incorrectly returns tefINTERNAL when a transaction includes duplicate credential entries with the same (Issuer, CredentialType) pair. This error code is meant for internal failures but is being used here for a user-input validation failure, which can mislead users and make debugging difficult. The appropriate response should be tefBAD_LEDGER, which indicates an unexpected state in the ledger caused by invalid transaction data.


At the end of the function, the issue occurs when inserting credentials into the std::set, where a duplicate entry results in tefINTERNAL:

auto [it, ins] =
    sorted.emplace((*sleCred)[sfIssuer],
        (*sleCred)[sfCredentialType]);
if (!ins)
    return tefINTERNAL;
Recommendation

The function should return tefBAD_LEDGER instead of tefINTERNAL when detecting duplicate credential entries. This ensures that the error properly reflects a ledger state issue rather than an internal failure, improving debugging clarity and user feedback.
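
A minimal sketch of that change in the authorized helper described above:

auto [it, ins] =
    sorted.emplace((*sleCred)[sfIssuer],
        (*sleCred)[sfCredentialType]);
if (!ins)
    return tefBAD_LEDGER;  // duplicate (Issuer, CredentialType) pair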

Remediation Comment

ACKNOWLEDGED: The Ripple team acknowledged this issue and stated the following:

It returns tefINTERNAL because this function is only used in the apply stage, where the incoming credentials array has already been checked for duplicates. tefINTERNAL represents an impossible scenario that should never occur, so this behavior is expected and correct.

7.6 Misleading error handling for invalid ledger index values

//

Informational

Description

The jvParseLedger function in RPCCall.cpp fails to properly handle invalid ledger_index values, defaulting to 0 instead of rejecting them. When a user provides an out-of-range or malformed ledger index—such as -1 or 4294967297—the function attempts to convert it using beast::lexicalCast<std::uint32_t>. If the conversion fails, it silently assigns 0, which does not exist in the XRP Ledger:

    // TODO New routine for parsing ledger parameters,
    // other routines should standardize on this.
    static bool
    jvParseLedger(Json::Value& jvRequest,
        std::string const& strLedger)
    {
        if (strLedger == "current" || strLedger == "closed" ||
            strLedger == "validated")
        {
            jvRequest[jss::ledger_index] = strLedger;
        }
        else if (strLedger.length() == 64)
        {
            // YYY Could confirm this is a uint256.
            jvRequest[jss::ledger_hash] = strLedger;
        }
        else
        {
            jvRequest[jss::ledger_index] =
                beast::lexicalCast<std::uint32_t>(strLedger);
        }

        return true;
    }

This behavior results in misleading error messages when transactions or queries reference an invalid ledger, complicating debugging and user interactions. While this does not pose a direct security concern, it can lead to confusion and inefficient troubleshooting due to inaccurate error reporting.

Recommendation

It is recommended to explicitly validate ledger_index values, ensuring they are within the allowed range (starting from 1). If an invalid value is detected, return a descriptive error instead of defaulting to 0.
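
A hedged sketch of the final branch of jvParseLedger with explicit validation; it assumes beast::lexicalCastChecked as the checked counterpart of the cast in the excerpt, and surfacing the failure through the existing bool return value is likewise an assumption about how callers would report the error:

        else
        {
            std::uint32_t uLedgerIndex = 0;
            if (!beast::lexicalCastChecked(uLedgerIndex, strLedger) ||
                uLedgerIndex == 0)
            {
                // Reject malformed or out-of-range values such as -1 or
                // 4294967297 instead of silently defaulting to ledger 0.
                return false;
            }
            jvRequest[jss::ledger_index] = uLedgerIndex;
        }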

Remediation Comment

SOLVED: The Ripple team solved the issue in the specified PR.

7.7 Expiration validation in doApply rather than preclaim


Informational

Description

The code checks whether a credential is expired only in the doApply() phase of certain transactions (e.g., CredentialCreate::doApply and CredentialAccept::doApply), but does not verify expiration during preclaim. As a result, transactions involving an expired credential are allowed to pass preclaim, only to fail later in doApply. This inconsistency consumes validator and node resources unnecessarily and delays rejection of invalid transactions.


An attacker (or careless user) could submit numerous transactions referencing obviously expired credentials. Because no expiration check is done in preclaim, these transactions are placed in the nodes' mempools. Although each transaction will eventually fail in doApply, it still requires processing overhead, adds queue latency, and costs the transaction fee payer extra XRP.


Code Location

  • Expiration Check in CredentialAccept::doApply: the check happens only at the final stage.

if (checkExpired(sleCred, view().info().parentCloseTime))
{
    JLOG(j_.trace()) << "Credential is expired: " 
        << sleCred->getText();
    // delete expired credentials even if the transaction failed
    auto const err = credentials::deleteSLE(view(),
        sleCred, j_);
    return isTesSuccess(err) ? tecEXPIRED : err;
}

  • Expiration Check in CredentialCreate::doApply: the check happens only at the final stage.

if (closeTime > *optExp)
{
    JLOG(j_.trace()) << "Malformed transaction: "
                        "Expiration time is in the past.";
    return tecEXPIRED;
}
Recommendation

It is recommended to add an expiration check in preclaim for any transactions that depend on non-expired credentials. If the credential is expired, return tecEXPIRED (or an analogous code) immediately, preventing it from entering the transaction queue in the first place. This ensures prompt rejection of invalid transactions, saving network and node resources.
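
A hedged sketch of such an early check in a preclaim hook; subject, issuer, and credType stand for values already parsed from the transaction, and their exact field lookups are assumptions for illustration:

auto const sleCred = ctx.view.read(
    keylet::credential(subject, issuer, credType));
if (sleCred && checkExpired(sleCred, ctx.view.info().parentCloseTime))
    return tecEXPIRED;  // fail early; the expired object is still deleted in doApply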

Remediation Comment

ACKNOWLEDGED: The Ripple team acknowledged this issue and stated the following:

It is because of the deletion of expired credentials (which is not possible in preclaim).

7.8 Inability to replace an expired credential due to duplicate checks


Informational

Description

When creating a credential (CredentialCreate), the logic checks for duplicates of the same credential type on the same subject, returning tecDUPLICATE if one already exists. However, it does not check whether the existing credential is expired. Consequently, an expired credential with the same type blocks the creation of a newer, valid credential. This can lead to a situation in which the subject is effectively unable to refresh or replace an expired credential without manual credential deletion.

Code Location

  • In CredentialCreate: the function refuses to recreate a credential if it already exists, without checking whether the existing credential has expired.

if (ctx.view.exists(
    keylet::credential(subject, ctx.tx[sfAccount], credType)))
{
    JLOG(ctx.j.trace()) << "Credential already exists.";
    return tecDUPLICATE;
}
Recommendation

During credential creation, when a preexisting credential is detected, it is recommended to verify whether it is expired and, in that case, allow the new credential to overwrite or replace the old one without requiring explicit deletion.
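
A hedged sketch of how the duplicate check could tolerate an expired entry, reusing the identifiers from the excerpt above (whether the old object is replaced in place or deleted first in doApply is left open, as in the recommendation):

auto const credKeylet =
    keylet::credential(subject, ctx.tx[sfAccount], credType);
if (auto const sleOld = ctx.view.read(credKeylet))
{
    if (!checkExpired(sleOld, ctx.view.info().parentCloseTime))
    {
        JLOG(ctx.j.trace()) << "Credential already exists.";
        return tecDUPLICATE;
    }
    // Expired: allow the new credential to replace it instead of
    // requiring an explicit CredentialDelete first.
}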

Remediation Comment

ACKNOWLEDGED: The Ripple team acknowledged this issue and stated the following:

It is not recognized as a bug, as it is implemented as designed. However, it seems like a feature worth discussing, and we may return to it.

Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.

© Halborn 2025. All rights reserved.