Prepared by:
HALBORN
Last Updated 04/29/2025
Date of Engagement: October 15th, 2024 - December 13th, 2024
100% of all REPORTED Findings have been addressed
All findings: 30
Critical: 0
High: 1
Medium: 2
Low: 3
Informational: 24
Tangle Network engaged Halborn to perform a security assessment of their Rust codebase from October 15, 2024, to December 13, 2024. The assessment focused on the Substrate code, precompiles, and pallets in the provided GitHub repository at the specified commit hashes. More details can be found in the Scope section of this report.
The Halborn team was allocated 8 weeks and 3 days for the engagement and assigned two full-time security engineers to assess the security of the substrate pallets and the overall codebase. The security engineers are experts in blockchain and smart contract security, with advanced skills in penetration testing and smart contract auditing, as well as extensive knowledge of various blockchain protocols.
The purpose of this assessment is to:
Ensure that the codebase functions operate as intended, including properly implementing staking, pooling, reward mechanisms, and Ethereum compatibility features.
Identify potential security issues within the codebase, such as:
Cryptographic vulnerabilities (e.g., signature malleability, replay attacks, or misuse of hashing functions).
Logical inconsistencies in key functionalities such as staking, unbonding, and reward calculations.
Dependency-related risks from external crates and libraries.
Insufficient input validation or unchecked operations that may lead to panics, overflows, or security exploits.
Verify that the code adheres to best practices for blockchain security, such as preventing unauthorized access, maintaining data integrity, and ensuring predictable execution under edge cases and adverse conditions.
Assess the implementation of Ethereum precompiles, proxy types, and EVM integration to ensure proper validation, safe asset handling, and gas usage efficiency.
The Halborn team performed a combination of manual code review and automated security testing to ensure a comprehensive and efficient assessment of the Tangle codebase. This approach balanced efficiency, timeliness, practicality, and accuracy to address the assessment scope. Manual testing was used to uncover flaws in logic, process, and implementation, while automated tools were employed to quickly identify deviations from security best practices. The following phases and associated tools were used throughout the assessment:
Research on the architecture, purpose, and usage of the Tangle network.
Manual code walkthroughs to understand the design of key modules, including staking, pooling, reward distribution, and EVM precompiles, while identifying potential vulnerabilities.
Assessment of Rust functions and variables for arithmetic vulnerabilities, such as overflow/underflow or unsafe computations.
Audit of cryptographic protocols and primitives, including ECDSA signature recovery and Ethereum address mapping, to ensure compliance with industry standards and robustness against attacks.
Scanning Rust crates with Cargo Audit to identify outdated dependencies, known vulnerabilities, and insecure versions of third-party libraries.
Analyzing unsafe code usage with Cargo Geiger, ensuring minimal reliance on unsafe Rust features to reduce risks of memory safety vulnerabilities.
Validation of error handling and logging practices to prevent unintentional exposure of sensitive information or system behavior.
EXPLOITABILITY METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Attack Origin (AO) | Arbitrary (AO:A) | 1 |
 | Specific (AO:S) | 0.2 |
Attack Cost (AC) | Low (AC:L) | 1 |
 | Medium (AC:M) | 0.67 |
 | High (AC:H) | 0.33 |
Attack Complexity (AX) | Low (AX:L) | 1 |
 | Medium (AX:M) | 0.67 |
 | High (AX:H) | 0.33 |
IMPACT METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Confidentiality (C) | None (C:N) | 0 |
 | Low (C:L) | 0.25 |
 | Medium (C:M) | 0.5 |
 | High (C:H) | 0.75 |
 | Critical (C:C) | 1 |
Integrity (I) | None (I:N) | 0 |
 | Low (I:L) | 0.25 |
 | Medium (I:M) | 0.5 |
 | High (I:H) | 0.75 |
 | Critical (I:C) | 1 |
Availability (A) | None (A:N) | 0 |
 | Low (A:L) | 0.25 |
 | Medium (A:M) | 0.5 |
 | High (A:H) | 0.75 |
 | Critical (A:C) | 1 |
Deposit (D) | None (D:N) | 0 |
 | Low (D:L) | 0.25 |
 | Medium (D:M) | 0.5 |
 | High (D:H) | 0.75 |
 | Critical (D:C) | 1 |
Yield (Y) | None (Y:N) | 0 |
 | Low (Y:L) | 0.25 |
 | Medium (Y:M) | 0.5 |
 | High (Y:H) | 0.75 |
 | Critical (Y:C) | 1 |
SEVERITY COEFFICIENT | COEFFICIENT VALUE | NUMERICAL VALUE |
---|---|---|
Reversibility (R) | None (R:N) | 1 |
 | Partial (R:P) | 0.5 |
 | Full (R:F) | 0.25 |
Scope (S) | Changed (S:C) | 1.25 |
 | Unchanged (S:U) | 1 |
Severity | Score Value Range |
---|---|
Critical | 9 - 10 |
High | 7 - 8.9 |
Medium | 4.5 - 6.9 |
Low | 2 - 4.4 |
Informational | 0 - 1.9 |
Critical: 0
High: 1
Medium: 2
Low: 3
Informational: 24
Security analysis | Risk level | Remediation Date |
---|---|---|
Missing Pallet Functionalities In Corresponding Precompiles Makes It Impossible To Call Them Through EVM Contracts | High | Solved - 01/21/2025 |
Incorrect Implementation Of is_nominator In Staking Precompile | Medium | Solved - 01/20/2025 |
Unavailable Features Due to Discrepancies Between Precompile and EVM Interface | Medium | Solved - 01/21/2025 |
Miscalculated Selector Of The TransferNative Log | Low | Solved - 01/20/2025 |
Missing TransferNative Event in ERC-20 Interface Implementation | Low | Solved - 01/20/2025 |
Mismatch in Return Types Between Precompiles and EVM Interfaces | Low | Solved - 01/21/2025 |
Potential Overflow in Deposit Accumulation | Informational | Solved - 01/19/2025 |
Underflow in Stake Reduction | Informational | Solved - 01/19/2025 |
Missing Unstake Amount validation | Informational | Solved - 01/19/2025 |
Overflow in Operator Stake Addition | Informational | Solved - 01/19/2025 |
Ensure APY and Cap Are Within Acceptable Ranges | Informational | Solved - 01/19/2025 |
Missing Existing Stake Reduction Request Validation | Informational | Solved - 01/19/2025 |
Underflow in Gas Calculation | Informational | Solved - 01/19/2025 |
Potential Overflow in Total Supply Update in mint_claim | Informational | Solved - 01/19/2025 |
Overflow Risk in current_reward_counter Commission Calculation | Informational | Solved - 01/19/2025 |
Unhandled Results in verify_signature! Macro | Informational | Solved - 01/19/2025 |
Hardcoded Gas Limit and Inconsistent Error Handling in on_register_hook | Informational | Solved - 01/19/2025 |
Missing Era Validation in dispute Function | Informational | Not Applicable |
Insecure Handling of Invalid Scalar Decoding in decode | Informational | Solved - 01/19/2025 |
Possible Underflow on Balance Reduction | Informational | Solved - 01/19/2025 |
Use Distinct Error for Pending Leave Round | Informational | Solved - 01/19/2025 |
Presence Of Typos | Informational | Solved - 01/21/2025 |
Presence Of TODOs | Informational | Solved - 01/21/2025 |
Erroneous Documentation | Informational | Solved - 01/21/2025 |
Missing Precompiles in Tangle Workspace | Informational | Solved - 01/21/2025 |
Compilation Errors In `tangle-lst` Precompile | Informational | Solved - 11/07/2024 |
Improper Mocking And Testing Of The `tangle-lst` Precompile | Informational | Solved - 11/07/2024 |
Redundant Function In `services` Precompile | Informational | Solved - 12/19/2024 |
Redundant Operations in move_claim for Identical Addresses | Informational | Solved - 01/19/2025 |
Lack of Validation for Identity Element in deserialize Function | Informational | Solved - 01/19/2025 |
// Missing Pallet Functionalities In Corresponding Precompiles Makes It Impossible To Call Them Through EVM Contracts (High)
It was identified that the following pallet functions are not defined within the corresponding precompiles, despite not being restricted to root or forced origin access:
In the multi-asset-delegation precompile:
manage_asset_in_vault
In the tangle-lst precompile:
chill
bond_extra_other
set_commission
set_commission_max
set_commission_change_rate
claim_commission
adjust_pool_deposit
set_commission_claim_permission
In the services precompile:
update_price_targets
pre_register
In the vesting precompile:
merge_schedules
In the staking precompile:
validate
reap_stash
kick
chill_other
force_apply_min_commission
payout_stakers_by_page
update_payee
As a result, the excluded pallet features will be inaccessible from the EVM contracts, limiting their reach and discouraging interaction with these pallets through the dedicated precompiles.
It is recommended to implement the missing functions outlined above to ensure that each pallet function intended for invocation is accessible via the corresponding precompile.
SOLVED: The Tangle Network team solved the issue in the following commit IDs:
Regarding the multi-asset delegation precompile, the team has decided to remove all operator-related logic, stating "We explicitly removed all operator functions from precompile. We do not want all the pallets exposed through the precompile, we decided we don’t want operators registering from the EVM. So this is removed and the only functionality exposed is the depositing/delegation restaker logic".
// Incorrect Implementation Of is_nominator In Staking Precompile (Medium)
In the staking precompile, the implementation of the is_nominator function is incorrect because it retrieves the status of the given nominator account from the Validators map instead of the Nominators map:
#[precompile::public("isNominator(address)")]
#[precompile::public("is_nominator(address)")]
#[precompile::view]
fn is_nominator(handle: &mut impl PrecompileHandle, nominator: Address) -> EvmResult<bool> {
let nominator_account = Runtime::AddressMapping::into_account_id(nominator.0);
handle.record_cost(RuntimeHelper::<Runtime>::db_read_gas_cost())?;
let is_nominator = pallet_staking::Validators::<Runtime>::contains_key(nominator_account);
Ok(is_nominator)
}
This will cause an integrity issue, as the is_nominator function will incorrectly return whether the given account is a validator instead.
It is recommended to correct the is_nominator implementation by retrieving data from the Nominators map instead:
let is_nominator = pallet_staking::Nominators::<Runtime>::contains_key(nominator_account);
SOLVED: The Tangle Network team solved this issue in the specified commit id.
// Unavailable Features Due to Discrepancies Between Precompile and EVM Interface (Medium)
It was identified that the staking precompile and its corresponding EVM interface do not implement the same set of functions, due to the following reasons:
The eras_total_reward_points function is implemented within the precompile, but the corresponding EVM interface lacks the matching erasTotalRewardPoints function:
#[precompile::public("erasTotalRewardPoints(uint32)")]
#[precompile::public("eras_total_reward_points(uint32)")]
#[precompile::view]
fn eras_total_reward_points(
handle: &mut impl PrecompileHandle,
era_index: u32,
) -> EvmResult<u32> {
Although the isValidator function is defined in the staking EVM interface, the matching function is not implemented in the precompile itself:
/// @dev Check whether the specified address is a validator
/// @param stash the address that we want to confirm is a validator
/// @return A boolean confirming whether the address is a validator
function isValidator(address stash) external view returns (bool);
As a result, these precompile features will be inaccessible.
It is recommended to:
Add the missing erasTotalRewardPoints function to the staking EVM interface:
function erasTotalRewardPoints(uint32 eraIndex) external view returns (uint32);
Implement the is_validator function within the staking precompile, and configure it using the precompile::public macro to ensure it is called when the isValidator function is invoked through the EVM interface.
SOLVED: The Tangle Network team solved the issue by including the erasTotalRewardPoints function in StakingInterface and removing the is_validator function in the following commit IDs:
// Miscalculated Selector Of The TransferNative Log (Low)
The selector of the TransferNative log in the balances-erc20 precompile is miscalculated because it omits the sender's field type (address). It is currently defined as follows:
/// Solidity selector of the TransferNative log, which is the Keccak of the Log signature.
pub const SELECTOR_LOG_TRANSFER_NATIVE: [u8; 32] = keccak256!("TransferNative(bytes32,uint256)");
However, the TransferNative log emitted by the transfer_native function is structured as follows, indicating that it contains three parameters: the caller, the recipient, and the encoded transfer amount.
log3(
handle.context().address,
SELECTOR_LOG_TRANSFER_NATIVE,
handle.context().caller,
to,
solidity::encode_event_data(value),
)
.record(handle)?;
It is recommended to change the selector of the TransferNative log calculation as follows:
pub const SELECTOR_LOG_TRANSFER_NATIVE: [u8; 32] = keccak256!("TransferNative(address,bytes32,uint256)")
SOLVED: The Tangle Network team solved the issue in the specified commit id.
// Missing TransferNative Event in ERC-20 Interface Implementation (Low)
The TransferNative event is missing in the ERC20.sol interface within the balances-erc20 precompile.
It is recommended to add the missing TransferNative event to the aforementioned Solidity interface.
SOLVED: The Tangle Network team solved the issue in the specified commit id.
// Mismatch in Return Types Between Precompiles and EVM Interfaces (Low)
In the multi-asset-delegation, tangle-lst, and vesting precompiles, the EVM interface incorrectly specifies the return type of each function as uint8, even though these functions do not return any value.
For example, the join function in the TangleLst interface is defined as follows:
function join(uint256 amount, uint256 poolId) external returns (uint8);
However, the join function in the precompile has the following return type:
#[precompile::public("join(uint256,uint256)")]
fn join(handle: &mut impl PrecompileHandle, amount: U256, pool_id: U256) -> EvmResult {
In the staking precompile, some return types in the Staking EVM interface are uint256, which differs from the u128 type specified in the precompile:
function minNominatorBond() external view returns (uint256);
fn min_nominator_bond(handle: &mut impl PrecompileHandle) -> EvmResult<u128>
function minValidatorBond() external view returns (uint256);
fn min_validator_bond(handle: &mut impl PrecompileHandle) -> EvmResult<u128> {
function minActiveStake() external view returns (uint256);
fn min_active_stake(handle: &mut impl PrecompileHandle) -> EvmResult<u128>
function erasTotalStake(uint32 eraIndex) external view returns (uint256);
fn eras_total_stake(handle: &mut impl PrecompileHandle, era_index: u32) -> EvmResult<u128> {
It is recommended to correct the return types in the aforementioned Solidity interfaces. For example, the join function's signature can be updated as follows:
function join(uint256 amount, uint256 poolId) external;
SOLVED: The Tangle Network team solved the issue in the following commit IDs:
// Potential Overflow in Deposit Accumulation (Informational)
In the line metadata.deposits.entry(asset_id).and_modify(|e| *e += amount).or_insert(amount);, there is a potential overflow risk when adding the amount to the existing balance in deposits. If the amount added results in a balance exceeding the maximum value for BalanceOf<T>, it could cause an overflow, leading to incorrect values in the deposit storage.
// Update storage
Delegators::<T>::mutate(&who, |maybe_metadata| {
let metadata = maybe_metadata.get_or_insert_with(Default::default);
metadata.deposits.entry(asset_id).and_modify(|e| *e += amount).or_insert(amount); // @bug
});
To prevent overflow, use checked_add to safely add the deposit amount to the existing balance. This ensures that if the addition exceeds the allowable maximum, an error is triggered instead of an overflow. Add appropriate error handling to manage cases where the addition fails due to this limit.
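For illustration, a minimal sketch of an overflow-checked version is shown below; the try_mutate form and the DepositOverflow error variant are assumptions made for this example, not the pallet's actual identifiers.
// Sketch only: overflow-checked deposit accumulation. `Error::<T>::DepositOverflow`
// is a hypothetical variant, and `CheckedAdd` must be in scope for the balance type.
Delegators::<T>::try_mutate(&who, |maybe_metadata| -> DispatchResult {
    let metadata = maybe_metadata.get_or_insert_with(Default::default);
    let current = metadata.deposits.get(&asset_id).copied().unwrap_or_default();
    let updated = current.checked_add(&amount).ok_or(Error::<T>::DepositOverflow)?;
    metadata.deposits.insert(asset_id, updated);
    Ok(())
})?;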
SOLVED: The Tangle Network team has solved the issue by introducing the increase_deposited_amount function.
// Underflow in Stake Reduction (Informational)
The line operator.stake -= request.amount poses an underflow risk if request.amount exceeds the operator's current stake. Although the code intends to reduce the stake by a scheduled amount, if for any reason request.amount is higher than operator.stake, it would result in an underflow, potentially leading to an incorrect stake balance or an unintended panic.
operator.stake -= request.amount; // @bug
operator.request = None;
Operators::<T>::insert(who, operator);
#[test]
fn schedule_operator_unstake_halborn() {
new_test_ext().execute_with(|| {
let bond_amount = 10_000;
let unstake_amount = 15_000;
// Join operator first
assert_ok!(MultiAssetDelegation::join_operators(RuntimeOrigin::signed(1), bond_amount));
// Schedule unstake
assert_ok!(MultiAssetDelegation::schedule_operator_unstake(
RuntimeOrigin::signed(1),
unstake_amount
));
// Verify operator metadata
let operator_info = MultiAssetDelegation::operator_info(1).unwrap();
println!("{:?}", operator_info);
<CurrentRound<Test>>::put(15);
// assert_eq!(operator_info.request.unwrap().amount, unstake_amount);
assert_ok!(MultiAssetDelegation::execute_operator_unstake(RuntimeOrigin::signed(1)));
let operator_info = MultiAssetDelegation::operator_info(1).unwrap();
println!("{:?}", operator_info);
// // Verify event
// System::assert_has_event(RuntimeEvent::MultiAssetDelegation(
// Event::OperatorBondLessScheduled { who: 1, unstake_amount },
// ));
});
}
cargo test --release --package pallet-multi-asset-delegation --lib -- tests::operator::schedule_operator_unstake_halborn --exact --show-output
Use checked_sub when reducing operator.stake by request.amount. This will safely handle the subtraction and return an error if the requested reduction exceeds the current stake. Implement error handling to prevent further actions if the stake cannot be reduced by the requested amount.
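A minimal sketch of the checked variant is shown below; the InsufficientStake error variant is a hypothetical name used only for this example.
// Sketch only: underflow-safe stake reduction.
operator.stake = operator
    .stake
    .checked_sub(&request.amount)
    .ok_or(Error::<T>::InsufficientStake)?;
operator.request = None;
Operators::<T>::insert(who, operator);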
SOLVED: The Tangle Network team has solved the issue using checked_sub.
// Missing Unstake Amount Validation (Informational)
The process_schedule_operator_unstake function does not validate whether the unstake_amount is less than or equal to the operator's current stake. This could lead to scenarios where an unstake request exceeds the operator's available stake, which would cause issues during the execution of the unstake operation.
Add a check to ensure that unstake_amount does not exceed the operator's current stake before scheduling the unstake request.
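For illustration, such a guard could look like the following sketch; the InsufficientStake error variant is a hypothetical name.
// Sketch only: reject unstake requests that exceed the operator's current stake.
ensure!(unstake_amount <= operator.stake, Error::<T>::InsufficientStake);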
SOLVED: The Tangle Network team has solved the issue by checking the operator.stake value.
// Overflow in Operator Stake Addition (Informational)
In the line operator.stake += additional_bond, there is a risk of overflow when adding the additional_bond to the existing operator.stake. If the resulting value exceeds the maximum limit for BalanceOf<T>, it could lead to an overflow, causing an incorrect stake balance.
pub fn process_operator_bond_more(
who: &T::AccountId,
additional_bond: BalanceOf<T>,
) -> Result<(), DispatchError> {
let mut operator = Operators::<T>::get(who).ok_or(Error::<T>::NotAnOperator)?;
T::Currency::reserve(who, additional_bond)?;
operator.stake += additional_bond; // @bug
Operators::<T>::insert(who, operator);
Ok(())
}
Use checked_add to safely handle the addition of additional_bond to operator.stake. If the addition causes an overflow, return an error to prevent incorrect stake values. Implement appropriate error handling for cases where the bond addition exceeds the maximum allowed value.
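A minimal sketch of the checked addition is shown below; the StakeOverflow error variant is a hypothetical name, not part of the pallet.
// Sketch only: overflow-checked bond increase.
operator.stake = operator
    .stake
    .checked_add(&additional_bond)
    .ok_or(Error::<T>::StakeOverflow)?;
Operators::<T>::insert(who, operator);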
SOLVED: The Tangle Network team has solved the issue using checked_add.
// Ensure APY and Cap Are Within Acceptable Ranges (Informational)
The set_incentive_apy_and_cap function lacks validation checks for the apy (Annual Percentage Yield) and cap values being set for a specific asset vault. Without these checks, it's possible for invalid or unintended values to be set, which could lead to issues such as overly high or low APY values or unrealistic caps. This could result in economic risk for the system, either by over-incentivizing or under-incentivizing asset deposits, potentially causing financial instability.
/// Sets the APY and cap for a specific asset.
#[pallet::call_index(19)]
#[pallet::weight(Weight::from_parts(10_000, 0) + T::DbWeight::get().writes(1))]
pub fn set_incentive_apy_and_cap(
origin: OriginFor<T>,
vault_id: T::VaultId,
apy: sp_runtime::Percent,
cap: BalanceOf<T>,
) -> DispatchResult {
// Ensure that the origin is authorized
T::ForceOrigin::ensure_origin(origin)?;
// Initialize the reward config if not already initialized
RewardConfigStorage::<T>::mutate(|maybe_config| {
let mut config = maybe_config.take().unwrap_or_else(|| RewardConfig {
configs: BTreeMap::new(),
whitelisted_blueprint_ids: Vec::new(),
});
config.configs.insert(vault_id, RewardConfigForAssetVault { apy, cap }); // @bug
*maybe_config = Some(config);
});
// Emit an event
Self::deposit_event(Event::IncentiveAPYAndCapSet { vault_id, apy, cap });
Ok(())
}
Add validation logic to ensure that both apy and cap values are within predefined, acceptable ranges before they are stored. Define safe upper and lower bounds for apy and cap to prevent unintended values, and enforce these bounds in the code, either directly or by calling pre-defined validation functions. Additionally, add unit tests to confirm that only values within the safe range can be applied.
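For illustration only, such bounds could be enforced as in the sketch below; the MaxIncentiveApy and MaxIncentiveCap constants and the error variants are assumptions for the example, not values or names prescribed by the pallet.
// Sketch only: reject out-of-range values before touching storage.
ensure!(apy <= T::MaxIncentiveApy::get(), Error::<T>::InvalidApy);
ensure!(!cap.is_zero() && cap <= T::MaxIncentiveCap::get(), Error::<T>::InvalidCap);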
SOLVED: The Tangle Network team solved the issue.
// Missing Existing Stake Reduction Request Validation (Informational)
In the process_schedule_operator_unstake function, there is no check to ensure that an operator does not already have an active stake reduction request. If a new unstake request is scheduled while a previous one is still active, it may overwrite the previous request, leading to inconsistent or unintended behavior in managing stake reductions.
Add a validation check to ensure that no active stake reduction request already exists for the operator before scheduling a new one. If a request is active, return an appropriate error (e.g., Error::<T>::ExistingUnstakeRequest) to prevent overwriting existing requests. This will help ensure that only one unstake request can be processed at a time.
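A minimal sketch of that guard, reusing the ExistingUnstakeRequest error variant suggested above and the operator.request field shown in the earlier unstake snippet:
// Sketch only: refuse to schedule a new request while one is still pending.
ensure!(operator.request.is_none(), Error::<T>::ExistingUnstakeRequest);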
SOLVED: The Tangle Network team has solved the issue. The code now checks for the existence of an unstake request.
// Underflow in Gas Calculation (Informational)
In the gas function, the calculation self.gas_limit - self.used_gas - self.memory_gas may cause an underflow if used_gas or memory_gas exceeds the gas_limit.
Use saturating_sub to safely handle the subtraction in a way that prevents underflow. For example, (self.gas_limit.saturating_sub(self.used_gas)).saturating_sub(self.memory_gas). This will ensure that the function returns zero when the subtracted value would otherwise become negative, thereby avoiding any unintended panics or negative values.
SOLVED: The Tangle Network team solved this issue.
// Potential Overflow in Total Supply Update in mint_claim (Informational)
The mint_claim function mints a claim for an address to collect native tokens. It updates the Total<T> storage item to reflect the increased total supply by adding the value parameter. However, this operation does not handle potential overflow scenarios. Specifically:
The line <Total<T>>::mutate(|t| *t += value); increments the total supply without bounds. If the sum of the current total supply and value exceeds the maximum value for BalanceOf<T>, an overflow could occur.
If Total<T> overflows, it can cause incorrect accounting of the total supply, leading to inconsistencies in the token economics of the system.
Use checked_add to ensure safe addition.
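A minimal sketch of a checked update is shown below; the TotalSupplyOverflow error variant is a hypothetical name used only for this example.
// Sketch only: overflow-checked total-supply update.
<Total<T>>::try_mutate(|t| -> DispatchResult {
    *t = t.checked_add(&value).ok_or(Error::<T>::TotalSupplyOverflow)?;
    Ok(())
})?;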
SOLVED: The Tangle Network team has solved the issue by using saturating_add.
// Overflow Risk in current_reward_counter Commission Calculation (Informational)
In the current_reward_counter function, the calculation of new_pending_commission uses the formula:
let new_pending_commission = commission * current_payout_balance;
This operation multiplies the commission (of type Perbill) with current_payout_balance (of type BalanceOf<T>). Since Perbill represents a fixed-point fraction and BalanceOf<T> can be large, the multiplication may result in an overflow if current_payout_balance reaches a high value.
Furthermore, while subsequent calculations use saturating_sub and checked_from_rational to avoid unsafe operations, the initial multiplication does not explicitly account for overflow.
Replace the direct multiplication with checked_mul to ensure safety.
SOLVED: The Tangle Network team has solved the issue.
// Unhandled Results in verify_signature! Macro (Informational)
The verify function within the SchnorrRistretto255Precompile implementation utilizes the verify_signature! macro. However, the results from the verify_signature! macro are not handled.
The macro call does not return an explicitly used result; instead, the function proceeds to return Ok(false) regardless of the signature verification outcome. This unhandled behavior leads to incorrect functionality, as the verification result is ignored. Any dependent smart contracts or external applications relying on this precompile will receive inaccurate results, which could lead to security vulnerabilities or incorrect application logic.
Ensure the verify_signature! macro's results are captured and processed.
SOLVED: The Tangle Network team solved the issue.
// Hardcoded Gas Limit and Inconsistent Error Handling in on_register_hook (Informational)
Hardcoded Gas Limit: The function sets a static gas limit of 300,000 for the EVM call to onRegister. This value may be insufficient for complex contract logic, leading to failed executions. Insufficient gas can cause transaction failures, potentially interrupting operator registration.
Inconsistent Error Handling: In the match arm for BlueprintManager::Evm, errors during the EVM call (Self::evm_call) are returned correctly. However, the _ arm defaults to (true, Weight::zero()) without signaling an error. This behavior might allow unverified registrations to proceed erroneously. Returning (true, Weight::zero()) for unhandled cases introduces a security risk where invalid blueprints could bypass registration requirements.
Dynamic Gas Limit: Replace the hardcoded limit with a configurable value, or dynamically calculate it based on the expected complexity of the onRegister function.
Consistent Error Handling: In the _ arm, replace (true, Weight::zero()) with an explicit error like Err(Error::<T>::InvalidBlueprint) to enforce stricter checks.
SOLVED: The Tangle Network team increased the hardcoded limits.
// Missing Era Validation in dispute Function (Informational)
The dispute function allows a caller to dispute an UnappliedSlash for a given era and index. While the function includes checks for the dispute origin and properly removes the slash record, it does not validate whether the provided era falls within an acceptable or active range.
Lack of Era Range Check: The function does not ensure that the era parameter is within a valid range (e.g., the current or recent eras). This omission could allow disputes against slashes from outdated or irrelevant eras.
Introduce a range check to ensure the era is within an acceptable window relative to the current era.
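For illustration, a bounded window could be enforced roughly as in the sketch below; the EraProvider accessor, the DisputeWindow constant, and the EraOutOfRange variant are all hypothetical names, not items from the pallet.
// Sketch only: restrict disputes to a bounded window of recent eras.
let current_era = T::EraProvider::active_era();
ensure!(
    era <= current_era && era >= current_era.saturating_sub(T::DisputeWindow::get()),
    Error::<T>::EraOutOfRange
);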
NOT APPLICABLE: The Tangle Network team responded with the following: "not valid since only eras within unapplied_slashes will be considered, if an era is not applied yet it can be disputed".
// Insecure Handling of Invalid Scalar Decoding in decode (Informational)
The decode implementation for WrappedScalar defaults to returning Scalar::ZERO if the provided bytes are not valid canonical scalar bytes. This behavior is implemented using unwrap_or(Scalar::ZERO). While this prevents a panic in the decoding process, it introduces potential risks:
Silent Failure: Returning a zero scalar masks errors and could inadvertently allow invalid or malicious input to propagate through the system.
Security Risk: In cryptographic contexts, the use of a zero scalar may lead to unintended consequences, such as weakening the guarantees of a protocol or introducing vulnerabilities.
Return an Error for Invalid Scalars: Instead of defaulting to Scalar::ZERO, return a decoding error when from_canonical_bytes fails.
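A minimal sketch of the stricter decoding path is shown below; DecodeError is a placeholder for the implementation's own error type, and Option::from is used so the pattern covers both the Option and CtOption return types of from_canonical_bytes across curve25519-dalek versions.
// Sketch only: surface non-canonical bytes as an error instead of Scalar::ZERO.
let scalar: Scalar = Option::from(Scalar::from_canonical_bytes(bytes))
    .ok_or(DecodeError::NonCanonicalScalar)?;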
SOLVED: The Tangle Network team solved the issue.
// Possible Underflow on Balance Reduction (Informational)
When reducing the delegator's balance in the deposits, there is a potential for underflow at the line *balance -= amount. Although a check is performed to ensure the balance is sufficient before this line, the underflow risk still exists if concurrent updates or errors in the surrounding logic affect this balance unexpectedly.
ensure!(*balance >= amount, Error::<T>::InsufficientBalance);
// Reduce the balance in deposits
*balance -= amount; // @audit possible underflow
if *balance == Zero::zero() {
metadata.deposits.remove(&asset_id);
}
To prevent any possible underflow, apply checked_sub on the balance to reduce it in a safe, overflow-protected manner. Alternatively, use saturating_sub, which would ensure the balance does not go below zero, mitigating the risk of unintended underflow effects.
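A minimal sketch of the checked reduction, reusing the existing InsufficientBalance error and assuming it runs inside a closure that can propagate a DispatchError:
// Sketch only: underflow-safe reduction of the recorded deposit.
*balance = balance.checked_sub(&amount).ok_or(Error::<T>::InsufficientBalance)?;
if balance.is_zero() {
    metadata.deposits.remove(&asset_id);
}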
SOLVED: The Tangle Network team solved the issue.
// Use Distinct Error for Pending Leave Round (Informational)
The ensure! check for current_round >= leaving_round currently returns Error::<T>::NotLeavingRound if the leave operation is attempted before the required round is reached. This may not clearly communicate the specific cause of failure.
match operator.status {
OperatorStatus::Leaving(leaving_round) => {
ensure!(current_round >= leaving_round, Error::<T>::NotLeavingRound); // @bug
Define a new, distinct error variant, such as Error::<T>::RoundNotReached, to specifically handle cases where the current round has not yet reached the required leaving_round. This improves error clarity, allowing users to distinguish between different failure scenarios in the leave process and better understand the pending status of their leave operation.
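A minimal sketch using the RoundNotReached variant suggested above; the surrounding enum is abbreviated and shown only to illustrate where the variant would live.
// Sketch only: a dedicated variant for the "round not yet reached" case.
#[pallet::error]
pub enum Error<T> {
    // ... existing variants ...
    /// The operator is scheduled to leave, but `leaving_round` has not been reached yet.
    RoundNotReached,
}

// In the leave handler:
ensure!(current_round >= leaving_round, Error::<T>::RoundNotReached);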
SOLVED: The Tangle Network team has solved the issue.
// Presence Of Typos (Informational)
The codebase contains the following typographical errors:
precompiles/precompile-registry/src/lib.rs
#L95: asume
precompiles/batch/src/lib.rs
#L61: transfered
#L226: immediatly
precompiles/assets-erc20/src/mock.rs
#L205: dont
#L206: substracted
precompiles/assets-erc20/src/lib.rs
#L124: Allows to knows
It is recommended to correct the aforementioned typos as follows:
asume -> assume
transfered -> transferred
immediatly -> immediately
dont -> don't
substracted -> subtracted
knows -> know
SOLVED: The Tangle Network team solved the issue in the following commit IDs:
// Presence Of TODOs (Informational)
Open TODOs in the codebase may indicate unresolved architectural or programming issues, and are listed as follows:
precompiles/pallet-democracy/src/mock.rs
213: type OriginPrivilegeCmp = EqualPrivilegeOnly; // TODO : Simplest type, maybe there is better ?
precompiles/pallet-democracy/DemocracyInterface.sol
30: /// TODO This is supposed to be a vec. Let's save this one for later.
114: //TODO should we have an alternative `simpleSecond` where the upper bound is read from storage?
precompiles/precompile-registry/src/lib.rs
49: // (TODO make it more generic, maybe add a const generic on PrecompileRegistry type)
64: // (TODO make it more generic, maybe add a const generic on PrecompileRegistry type)
84: // (TODO make it more generic, maybe add a const generic on PrecompileRegistry type)
It is recommended to resolve the open TODOs.
SOLVED: The Tangle Network team solved the issue in the following commit IDs:
// Erroneous Documentation (Informational)
The codebase contains the following erroneous documentation:
References to Moonbeam instead of Tangle:
precompiles/tangle-lst/Cargo.toml
10:# Moonbeam
35:# Moonbeam
precompiles/services/Cargo.toml
10:# Moonbeam
43:# Moonbeam
precompiles/vesting/Cargo.toml
10:# Moonbeam
35:# Moonbeam
precompiles/pallet-democracy/Cargo.toml
11:# Moonbeam
34:# Moonbeam
precompiles/pallet-democracy/DemocracyInterface.sol
10:/// @author The Moonbeam Team
precompiles/assets-erc20/src/file.rs
2:// This file is part of Moonbeam.
4:// Moonbeam is free software: you can redistribute it and/or modify
9:// Moonbeam is distributed in the hope that it will be useful,
15:// along with Moonbeam. If not, see <http://www.gnu.org/licenses/>.
precompiles/balances-erc20/src/tests.rs
2:// This file is part of Moonbeam.
4:// Moonbeam is free software: you can redistribute it and/or modify
9:// Moonbeam is distributed in the hope that it will be useful,
15:// along with Moonbeam. If not, see <http://www.gnu.org/licenses/>.
precompiles/call-permit/CallPermit.sol
10:/// @author The Moonbeam Team
precompiles/multi-asset-delegation/Cargo.toml
10:# Moonbeam
35:# Moonbeam
precompiles/balances-erc20/src/eip2612.rs
2:// This file is part of Moonbeam.
4:// Moonbeam is free software: you can redistribute it and/or modify
9:// Moonbeam is distributed in the hope that it will be useful,
15:// along with Moonbeam. If not, see <http://www.gnu.org/licenses/>.
precompiles/balances-erc20/src/lib.rs
2:// This file is part of Moonbeam.
4:// Moonbeam is free software: you can redistribute it and/or modify
9:// Moonbeam is distributed in the hope that it will be useful,
15:// along with Moonbeam. If not, see <http://www.gnu.org/licenses/>.
precompiles/balances-erc20/Cargo.toml
11:# Moonbeam
37:# Moonbeam
precompiles/balances-erc20/Permit.sol
4:/// @author The Moonbeam Team
precompiles/balances-erc20/src/mock.rs
2:// This file is part of Moonbeam.
4:// Moonbeam is free software: you can redistribute it and/or modify
9:// Moonbeam is distributed in the hope that it will be useful,
15:// along with Moonbeam. If not, see <http://www.gnu.org/licenses/>.
precompiles/batch/Batch.sol
10:/// @author The Moonbeam Team
precompiles/batch/Cargo.toml
10:# Moonbeam
precompiles/call-permit/Cargo.toml
10:# Moonbeam
precompiles/staking/Cargo.toml
11:# Moonbeam
33:# Moonbeam
precompiles/assets-erc20/Cargo.toml
11:# Moonbeam
precompiles/assets-erc20/src/lib.rs
75:/// 1024-2047 Precompiles that are not in Ethereum Mainnet but are neither Moonbeam specific
76:/// 2048-4095 Moonbeam specific precompiles
precompiles/preimage/Preimage.sol
10:/// @author The Moonbeam Team
precompiles/proxy/Cargo.toml
10:# Moonbeam
33:# Moonbeam
precompiles/preimage/Cargo.toml
9:# Moonbeam
31:# Moonbeam
precompiles/precompile-registry/Cargo.toml
11:# Moonbeam
31:# Moonbeam
precompiles/precompile-registry/PrecompileRegistry.sol
4:/// @author The Moonbeam Team
precompiles/proxy/Proxy.sol
10:/// @author The Moonbeam Team
In the precompiles/multi-asset-delegation/src/lib.rs file, the convert_to_account_id function is documented as a 'Helper for converting from u8 to RewardDestination', but it actually returns an AccountId instead.
/// Helper for converting from u8 to RewardDestination
fn convert_to_account_id(payee: H256) -> EvmResult<Runtime::AccountId> {
The precompiles/proxy/src/lib.rs file contains a comment indicating an incorrect link:
// See: https://github.com/PureStake/sr-/issues/30
The precompiles/precompile-registry/src/lib.rs file contains the incomplete comment 'In the case of [missing context], the storage item that can be read is pallet_asset::Asset' on three occasions, specifically at lines 48, 63, and 83.
The documentation in the precompiles/tangle-lst/src/lib.rs file is incorrect, as it appears to be copied from the multi-asset-delegation precompile's documentation.
The description within the precompiles/tangle-lst/Cargo.toml manifest file is incorrect, as it refers to the multi-asset-delegation pallet instead of tangle-lst:
description = "A Precompile to make pallet-multi-asset-delegation calls encoding accessible to pallet-evm"
The documentation in the precompiles/vesting/src/lib.rs file contains some errors, as the word 'staking' was used instead of 'vesting' in the following instances:
4:// This file is part of pallet-evm-precompile-staking package, originally developed by Purestake
5:// Inc. Pallet-evm-precompile-staking package used in Tangle Network in terms of GPLv3.
32://! that includes the staking pallet. This allows the precompile to work with any runtime that
33://! includes the staking pallet and meets the other trait bounds required by the precompile.
It is recommended to correct the mentioned erroneous comments.
SOLVED: The Tangle Network team solved the issue in the following commit IDs:
// Missing Precompiles in Tangle Workspace (Informational)
Including multiple Rust crates in the same workspace allows for efficient management, shared dependencies, unified builds, and simplified version control. However, the following precompiles were not included within the Tangle workspace:
tangle-lst
staking
vesting
Below is a code snippet of the Cargo.toml file:
[workspace]
members = [
"primitives",
"primitives/crypto",
"primitives/rpc/*",
"primitives/ext",
"client/evm-tracing",
"client/rpc/*",
"client/rpc-core/*",
"node",
"runtime/testnet",
"runtime/mainnet",
"pallets/*",
"pallets/services/rpc",
"pallets/services/rpc/runtime-api",
"pallets/tangle-lst/benchmarking",
"frost",
"frost/frost-*",
"precompiles/pallet-democracy",
"precompiles/batch",
"precompiles/call-permit",
"precompiles/proxy",
"precompiles/preimage",
"precompiles/balances-erc20",
"precompiles/assets-erc20",
"precompiles/verify-ecdsa-secp256k1-signature",
"precompiles/verify-ecdsa-secp256r1-signature",
"precompiles/verify-ecdsa-stark-signature",
"precompiles/verify-schnorr-signatures",
"precompiles/verify-bls381-signature",
"precompiles/multi-asset-delegation",
"precompiles/services",
"tangle-subxt",
"evm-tracer",
]
It is recommended to incorporate the missing precompiles into the Tangle workspace.
SOLVED: The Tangle Network team solved the issue in the following commit IDs:
// Compilation Errors In `tangle-lst` Precompile (Informational)
The tangle-lst precompile contains the following compilation errors:
Reliance on the pallet_multi_asset_delegation configuration trait instead of the pallet_tangle_lst configuration trait:
type BalanceOf<Runtime> =
<<Runtime as pallet_multi_asset_delegation::Config>::Currency as Currency<
<Runtime as frame_system::Config>::AccountId,
>>::Balance;
Usage of an undefined variant in the BondExtra enum:
let extra = match extra_type {
0 => BondExtra::FreeBalance(extra),
1 => BondExtra::Rewards,
_ => return Err(revert("Invalid extra type")),
};
#[derive(Encode, Decode, Clone, Copy, Debug, PartialEq, Eq, TypeInfo)]
pub enum BondExtra<Balance> {
/// Take from the free balance.
FreeBalance(Balance),
}
Usage of the undefined function Self::convert_to_account_id on multiple occasions:
let call = pallet_tangle_lst::Call::<Runtime>::create { amount, root, nominator, bouncer };
let call = pallet_tangle_lst::Call::<Runtime>::create_with_pool_id {
amount,
root,
nominator,
bouncer,
pool_id,
};
#[precompile::public("setConfigs(uint256,uint256,uint32,uint32)")]
fn set_configs(
handle: &mut impl PrecompileHandle,
min_join_bond: U256,
min_create_bond: U256,
max_pools: u32,
global_max_commission: u32,
) -> EvmResult {
handle.record_cost(RuntimeHelper::<Runtime>::db_read_gas_cost())?;
ensure_root(handle)?;
...
}
Usage of the undeclared types ConfigOp and RawOrigin:
let call = pallet_tangle_lst::Call::<Runtime>::set_configs {
min_join_bond: min_join_bond.map(ConfigOp::Set).unwrap_or(ConfigOp::Noop),
min_create_bond: min_create_bond.map(ConfigOp::Set).unwrap_or(ConfigOp::Noop),
max_pools: max_pools.map(ConfigOp::Set).unwrap_or(ConfigOp::Noop),
global_max_commission: global_max_commission
.map(ConfigOp::Set)
.unwrap_or(ConfigOp::Noop),
};
RuntimeHelper::<Runtime>::try_dispatch(handle, RawOrigin::Root.into(), call)?;
It is recommended to correct the aforementioned compilation errors within the tangle-lst precompile.
SOLVED: The Tangle Network team solved the issue in the specified commit id.
// Improper Mocking And Testing Of The `tangle-lst` Precompile (Informational)
The tangle-lst precompile lacks an appropriate mock and tests, as they are copied from the multi-asset-delegation precompile.
It is recommended to implement the correct mock and tests for the tangle-lst precompile to ensure it functions as expected.
SOLVED: The Tangle Network team solved the issue in the specified commit id.
// Redundant Function In `services` Precompile (Informational)
The services precompile contains two functions with identical logic.
Below is the implementation of the terminate_service function:
#[precompile::public("terminateService(uint256)")]
fn terminate_service(handle: &mut impl PrecompileHandle, service_id: U256) -> EvmResult {
handle.record_cost(RuntimeHelper::<Runtime>::db_read_gas_cost())?;
let origin = Runtime::AddressMapping::into_account_id(handle.context().caller);
let service_id: u64 = service_id.as_u64();
let call = pallet_services::Call::<Runtime>::terminate { service_id };
RuntimeHelper::<Runtime>::try_dispatch(handle, Some(origin).into(), call)?;
Ok(())
}
Below is the implementation of the terminate function:
#[precompile::public("terminate(uint256)")]
fn terminate(handle: &mut impl PrecompileHandle, service_id: U256) -> EvmResult {
handle.record_cost(RuntimeHelper::<Runtime>::db_read_gas_cost())?;
let origin = Runtime::AddressMapping::into_account_id(handle.context().caller);
let service_id: u64 = service_id.as_u64();
let call = pallet_services::Call::<Runtime>::terminate { service_id };
RuntimeHelper::<Runtime>::try_dispatch(handle, Some(origin).into(), call)?;
Ok(())
}
However, the Services.sol interface only exposes the terminateService function, which is referenced by the terminate_service function. Therefore, there appears to be no reason to include the terminate function.
It is recommended to retain only one of the redundant functions.
SOLVED: The Tangle Network team solved the issue in the specified commit id.
// Redundant Operations in move_claim for Identical Addresses (Informational)
The move_claim function transfers claims, vesting, and signing data from an old MultiAddress to a new one. If the old and new addresses are identical, the function performs unnecessary storage operations by taking and reinserting data into the same location. This adds computational overhead and increases transaction weight without achieving any meaningful change.
Performing redundant storage operations when old and new are the same results in wasted resources and increased transaction fees.
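One way to avoid the redundant work, shown purely as a sketch, is to guard the call at the top of move_claim; the SameAddress error variant is hypothetical, and an early Ok(()) return would be an equally valid choice if the call should remain a silent no-op.
// Sketch only: avoid the redundant storage shuffle when `old` and `new` are identical.
ensure!(old != new, Error::<T>::SameAddress);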
SOLVED: The Tangle Network team solved the issue.
// Lack of Validation for Identity Element in deserialize Function (Informational)
The deserialize function for Ed25519Group does not validate against the deserialization of the identity element. This omission creates a potential risk where the identity element might be inadvertently accepted during deserialization. Although this is unlikely to result in immediate security issues, validating the identity element aligns with the FROST specification and serves as a defense-in-depth mechanism.
In other implementations, such as Zcash Foundation's FROST library, the identity element is explicitly rejected during deserialization to ensure compliance with cryptographic standards. Failing to enforce this validation may lead to unexpected behaviors in systems relying on this function.
Add Identity Validation: Incorporate a check to explicitly reject the identity element.
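A minimal, self-contained sketch of such a check using curve25519-dalek is shown below; the helper name and error string are illustrative only, and the crate's Identity trait is assumed to be available.
use curve25519_dalek::{edwards::EdwardsPoint, traits::Identity};

// Sketch only: reject the identity element after deserialization succeeds.
fn reject_identity(point: &EdwardsPoint) -> Result<(), &'static str> {
    if *point == EdwardsPoint::identity() {
        return Err("the identity element is not a valid group element");
    }
    Ok(())
}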
SOLVED: The Tangle Network team responded with the following: "we are using the latest stable version from zkcash repo, so don't believe this is applicable anymore".
The following table lists the outdated or unmaintained dependencies reported by Cargo Audit:
Crate | Version | Title | Date | ID | URL | Solution |
---|---|---|---|---|---|---|
rustls | 0.20.9 | | 2024-04-19 | RUSTSEC-2024-0336 | | Upgrade to >=0.23.5 OR >=0.22.4, <0.23.0 OR >=0.21.11, <0.22.0 |
sqlx | 0.7.4 | Binary Protocol Misinterpretation caused by Truncating or Overflowing Casts | 2024-08-15 | RUSTSEC-2024-0363 | | Upgrade to >=0.8.1 |
ansi_term | 0.12.1 | ansi_term is Unmaintained | 2021-08-18 | RUSTSEC-2021-0139 | | Unmaintained |
mach | 0.3.2 | mach is unmaintained | 2020-07-14 | RUSTSEC-2020-0168 | | Unmaintained |
parity-wasm | 0.45.0 | Crate | 2022-10-01 | RUSTSEC-2022-0061 | | Deprecated |
proc-macro-error | 1.0.4 | proc-macro-error is unmaintained | 2024-09-01 | RUSTSEC-2024-0370 | | Unmaintained |
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.