Prepared by:
HALBORN
Last Updated 08/26/2025
Date of Engagement: June 30th, 2025 - July 30th, 2025
100% of all REPORTED Findings have been addressed
All findings
19
Critical
0
High
0
Medium
1
Low
5
Informational
13
Oroswap engaged Halborn to conduct a security assessment on their smart contracts, beginning on June 30th, 2025 and ending on July 30th, 2025. The security assessment was scoped to the smart contracts provided to Halborn. Commit hashes and further details can be found in the Scope section of this report.
Halborn was provided with 4 weeks for this engagement and assigned 2 security engineers to review the security of the smart contracts in scope. The assigned engineers possess deep expertise in blockchain and smart contract security, including hands-on experience with multiple blockchain protocols.
The objectives of this assessment were to:
Identify potential security vulnerabilities within the smart contracts.
Ensure that the smart contracts function as intended.
In summary, Halborn identified several areas for improvement to reduce the likelihood and impact of security risks, most of which were addressed by the Oroswap team. The main recommendations were:
Restrict collect to an authorised role or enforce an internal minimum limit per asset.
Cap the number of future schedules per pool or token.
Apply length-prefix encoding to each AssetInfo.as_bytes().
Store new decimal registrations as pending and require explicit owner approval.
Enforce that governance_cut + second_receiver_cut + dev_fund_cut ≤ 100%.
Halborn performed a combination of manual and automated security testing to balance efficiency, timeliness, practicality, and accuracy with regard to the scope of the custom modules. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques help enhance coverage of structures and can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the assessment:
Research into architecture and purpose.
Static analysis of the scoped repository and imported functions.
Manual assessment to discover security vulnerabilities in the codebase.
Ensuring the correctness of the codebase.
Dynamic Analysis of files and modules in scope.
EXPLOITABILITY METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Attack Origin (AO) | Arbitrary (AO:A) / Specific (AO:S) | 1 / 0.2 |
Attack Cost (AC) | Low (AC:L) / Medium (AC:M) / High (AC:H) | 1 / 0.67 / 0.33 |
Attack Complexity (AX) | Low (AX:L) / Medium (AX:M) / High (AX:H) | 1 / 0.67 / 0.33 |
IMPACT METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Confidentiality (C) | None (C:N) / Low (C:L) / Medium (C:M) / High (C:H) / Critical (C:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Integrity (I) | None (I:N) / Low (I:L) / Medium (I:M) / High (I:H) / Critical (I:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Availability (A) | None (A:N) / Low (A:L) / Medium (A:M) / High (A:H) / Critical (A:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Deposit (D) | None (D:N) / Low (D:L) / Medium (D:M) / High (D:H) / Critical (D:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
Yield (Y) | None (Y:N) / Low (Y:L) / Medium (Y:M) / High (Y:H) / Critical (Y:C) | 0 / 0.25 / 0.5 / 0.75 / 1 |
SEVERITY COEFFICIENT | COEFFICIENT VALUE | NUMERICAL VALUE |
---|---|---|
Reversibility (R) | None (R:N) / Partial (R:P) / Full (R:F) | 1 / 0.5 / 0.25 |
Scope (S) | Changed (S:C) / Unchanged (S:U) | 1.25 / 1 |
Severity | Score Value Range |
---|---|
Critical | 9 - 10 |
High | 7 - 8.9 |
Medium | 4.5 - 6.9 |
Low | 2 - 4.4 |
Informational | 0 - 1.9 |
Critical
0
High
0
Medium
1
Low
5
Informational
13
Security analysis | Risk level | Remediation Date |
---|---|---|
Permissionless “Collect” enables fee-harvest griefing | Medium | Solved - 08/06/2025 |
Unbounded external-schedule spam could enable gas DoS | Low | Solved - 08/12/2025 |
Pair keys can collide | Low | Solved - 08/11/2025 |
Initial stake penalises first user | Low | Solved - 08/15/2025 |
Burning small xORO amounts can result in no redemption | Low | Solved - 08/15/2025 |
Over-allocation revert distribution due to wrong fees percentages | Low | Solved - 08/12/2025 |
Permissionless decimal spoofing | Informational | Acknowledged - 08/17/2025 |
Missing guard against stale pending operation | Informational | Solved - 08/25/2025 |
Owner can confiscate future reward tokens | Informational | Solved - 08/15/2025 |
Missing balance validation when bypassing amount check | Informational | Solved - 08/15/2025 |
Schedule-limit bypass via improper length check | Informational | Solved - 08/15/2025 |
Vesting schedules bypass validation when end_point is missing | Informational | Solved - 08/15/2025 |
Asset info deduplication ignores letter case | Informational | Solved - 08/11/2025 |
Formula deviation with reference contracts | Informational | Acknowledged - 08/17/2025 |
Missing or incomplete instantiate attributes | Informational | Solved - 08/12/2025 |
Unbounded pagination in query endpoints | Informational | Solved - 08/12/2025 |
Malicious admin can seize all fees | Informational | Solved - 08/12/2025 |
Vesting withdraw from active schedule leaves a single token unit in the schedule | Informational | Solved - 08/15/2025 |
Redundant branch after minimum fee enforcement in funds splitting | Informational | Solved - 08/25/2025 |
//
The collect entry point of the Maker contract can be invoked by anyone and accepts a caller-supplied input called limit, of type AssetWithLimit[]. If the caller sets an arbitrarily small limit, only minimal amounts of each token are swapped, while the full cooldown timer is still updated. An attacker could repeatedly front-run legitimate keepers, preventing effective fee conversion and distribution.
This can degrade APR and negatively impact user experience, effectively causing a denial-of-service (DoS) through reward starvation.
Code snippet of the collect function from the contracts/tokenomics/maker/contract.rs file:
fn collect(
    deps: DepsMut,
    env: Env,
    assets: Vec<AssetWithLimit>,
) -> Result<Response, ContractError> {
    let mut cfg = CONFIG.load(deps.storage)?;
    // Allowing collect only once per cooldown period
    LAST_COLLECT_TS.update(deps.storage, |last_ts| match cfg.collect_cooldown {
        Some(cd_period) if env.block.time.seconds() < last_ts + cd_period => {
            Err(ContractError::Cooldown {
                next_collect_ts: last_ts + cd_period,
            })
        }
        _ => Ok(env.block.time.seconds()),
    })?;
    let oro = cfg.oro_token.clone();
    // Check for duplicate assets
    let mut uniq = HashSet::new();
    if !assets
        .clone()
        .into_iter()
        .all(|a| uniq.insert(a.info.to_string()))
    {
        return Err(ContractError::DuplicatedAsset {});
    }
    // Swap all non ORO tokens
    let (mut response, bridge_assets) = swap_assets(
        deps.as_ref(),
        &env.contract.address,
        &cfg,
        assets.into_iter().filter(|a| a.info.ne(&oro)).collect(),
    )?;
It is recommended to restrict collect to an authorized role, or to enforce an internal minimum limit per asset and update the cooldown only when the swap amount exceeds that threshold.
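The authorization approach can be approximated with a simple allowlist check. The sketch below is illustrative only: plain strings stand in for on-chain addresses, and the names (ensure_authorized_keeper, authorized_keepers) are assumptions rather than the project's actual API.

```rust
/// Minimal sketch of a keeper allowlist; `authorized_keepers` is an assumed
/// config field and plain strings stand in for validated addresses.
fn ensure_authorized_keeper(sender: &str, authorized_keepers: &[String]) -> Result<(), String> {
    if authorized_keepers.iter().any(|k| k == sender) {
        Ok(())
    } else {
        // Reject callers that are not registered keepers, stopping the
        // front-running griefing described above.
        Err(format!("unauthorized keeper: {sender}"))
    }
}

fn main() {
    let keepers = vec!["keeper1".to_string()];
    assert!(ensure_authorized_keeper("keeper1", &keepers).is_ok());
    assert!(ensure_authorized_keeper("attacker", &keepers).is_err());
}
```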
SOLVED: The issue was fixed in the specified commit. The collect function now requires authorization through the authorized_keepers list.
//
In the Incentives contract, the ExecuteMsg::Incentivize message allows anyone to create an external reward schedule for a deposited LP token. This function has no token whitelist and charges a native-coin incentivization fee only when a new reward token is added to a pool for the first time. Once this initial fee is paid, the same (potentially worthless) token can be reused to add an unlimited number of schedules, as there is no hard cap on the number of schedules.
Each schedule is stored as a separate entry in the EXTERNAL_REWARD_SCHEDULES Map. Every call that interacts with the pool (deposit, withdraw, claim_rewards) triggers PoolInfo::update_rewards, which iterates over all these entries with .range(). Repeatedly flooding the pool with numerous short-term schedules can push the per-call gas cost beyond the block gas limit, causing transactions to revert and effectively freezing the pool. Recovering from this state requires an expensive, gas-intensive cleanup performed by the owner.
It is recommended to cap the number of future schedules per (pool, token) or make the incentivization fee scale with schedule count.
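The cap-based mitigation reduces to a bounds check before accepting a new schedule. The sketch below is a standalone model: MAX_FUTURE_SCHEDULES is an assumed constant and the existing-schedule count stands in for a storage query.

```rust
/// Assumed cap; the real limit would be a governance-chosen constant.
const MAX_FUTURE_SCHEDULES: usize = 24;

/// Reject new schedules for a (pool, token) pair once the cap is reached,
/// bounding the number of entries update_rewards must iterate over.
fn ensure_schedule_cap(existing_future_schedules: usize) -> Result<(), String> {
    if existing_future_schedules >= MAX_FUTURE_SCHEDULES {
        Err(format!(
            "schedule cap reached: {existing_future_schedules}/{MAX_FUTURE_SCHEDULES}"
        ))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(ensure_schedule_cap(0).is_ok());
    assert!(ensure_schedule_cap(MAX_FUTURE_SCHEDULES).is_err());
}
```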
SOLVED: The issue was fixed in the specified commit. The incentivization fee is now charged for every schedule instead of only when a new reward token is added, making spam attacks economically unfeasible by requiring payment for each schedule.
//
The Factory::state::pair_key function allows key collisions, which cause different denom pairs to share the same key. For example, the key derived from denom "abc" + denom "def" is identical to the key derived from denom "ab" + denom "cdef".
/// Calculates a pair key from the specified parameters in the `asset_infos` variable.
///
/// `asset_infos` is an array with multiple items of type [`AssetInfo`].
pub fn pair_key(asset_infos: &[AssetInfo], pair_type: &PairType) -> Vec<u8> {
    let mut key = asset_infos
        .iter()
        .map(AssetInfo::as_bytes)
        .sorted()
        .flatten()
        .copied()
        .collect::<Vec<u8>>();
    // Append pair type to the key
    key.extend_from_slice(pair_type.to_string().as_bytes());
    key
}
The pair_key helper is used in execute_create_pair, and a collision can make it impossible to create a pair when the newly derived key is already present in the PAIRS dictionary.
The following POC illustrates that different pairs of denoms can result in the same pair key:
/// POC: Demonstrates denom collision vulnerability
/// Different asset pairs can generate the same pair key due to concatenation without separators
#[test]
fn test_denom_collision_poc() {
    let pair_type = PairType::Xyk {};
    // Example 1: Classic collision case
    let assets_1 = [
        native_asset_info("a".to_string()),     // "a"
        native_asset_info("bcdef".to_string()), // "bcdef"
    ];
    let assets_2 = [
        native_asset_info("ab".to_string()),   // "ab"
        native_asset_info("cdef".to_string()), // "cdef"
    ];
    let assets_3 = [
        native_asset_info("abc".to_string()), // "abc"
        native_asset_info("def".to_string()), // "def"
    ];
    let key_1 = pair_key(&assets_1, &pair_type);
    let key_2 = pair_key(&assets_2, &pair_type);
    let key_3 = pair_key(&assets_3, &pair_type);
    // All three different asset pairs produce the same key: b"abcdef" + b"xyk"
    assert_eq!(key_1, key_2, "Collision detected: ['a', 'bcdef'] == ['ab', 'cdef']");
    assert_eq!(key_1, key_3, "Collision detected: ['a', 'bcdef'] == ['abc', 'def']");
    assert_eq!(key_2, key_3, "Collision detected: ['ab', 'cdef'] == ['abc', 'def']");
    // Verify the actual key content
    let expected_key = b"abcdefxyk".to_vec();
    assert_eq!(key_1, expected_key);
    // Example 2: Real-world style collision with crypto denoms
    let crypto_1 = [
        native_asset_info("uusd".to_string()),  // "uusd"
        native_asset_info("uluna".to_string()), // "uluna"
    ];
    let crypto_2 = [
        native_asset_info("uusdu".to_string()), // "uusdu"
        native_asset_info("luna".to_string()),  // "luna"
    ];
    let crypto_key_1 = pair_key(&crypto_1, &pair_type);
    let crypto_key_2 = pair_key(&crypto_2, &pair_type);
    // These should be different (this won't collide due to sorting)
    // But demonstrates the principle - after sorting: "luna" + "uusdu" vs "uluna" + "uusd"
    assert_ne!(crypto_key_1, crypto_key_2, "These particular denoms don't collide due to sorting");
    // Example 3: Collision that works even with sorting
    let sorted_1 = [
        native_asset_info("a".to_string()),
        native_asset_info("b".to_string()),
    ];
    let sorted_2 = [
        native_asset_info("".to_string()), // Empty string
        native_asset_info("ab".to_string()),
    ];
    let sorted_key_1 = pair_key(&sorted_1, &pair_type);
    let sorted_key_2 = pair_key(&sorted_2, &pair_type);
    // Both result in "ab" after concatenation
    assert_eq!(sorted_key_1, sorted_key_2, "Collision with empty denom: ['a', 'b'] == ['', 'ab']");
    println!("POC: Denom collision vulnerability demonstrated!");
    println!("Key 1 (a+bcdef): {:?}", String::from_utf8_lossy(&key_1));
    println!("Key 2 (ab+cdef): {:?}", String::from_utf8_lossy(&key_2));
    println!("Key 3 (abc+def): {:?}", String::from_utf8_lossy(&key_3));
}
The result shows that the test is passing, confirming the issue:
running 1 test
test state::tests::test_denom_collision_poc ... ok
successes:
---- state::tests::test_denom_collision_poc stdout ----
POC: Denom collision vulnerability demonstrated!
Key 1 (a+bcdef): "abcdefxyk"
Key 2 (ab+cdef): "abcdefxyk"
Key 3 (abc+def): "abcdefxyk"
successes:
state::tests::test_denom_collision_poc
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 18 filtered out; finished in 0.00s
The following POC illustrates that a collision in the key derived from the denoms makes it impossible to create the desired pair:
/// POC: Demonstrates DoS vulnerability through denom collision
/// This test shows how different asset pairs can collide and prevent pair creation
#[test]
fn test_dos_collision_poc() {
    let mut app = mock_app();
    let owner = Addr::unchecked("owner");
    // Initialize factory
    let factory_helper = FactoryHelper::init(&mut app, &owner);
    // Test Case: Classic collision with valid native tokens
    // These pairs will generate the same key due to concatenation without separators
    let collision_pairs = vec![
        // Pair 1: ["abcdef", "ghijkl"] -> "abcdefghijkl"
        vec![
            AssetInfo::NativeToken { denom: "abcdef".to_string() },
            AssetInfo::NativeToken { denom: "ghijkl".to_string() },
        ],
        // Pair 2: ["abcde", "fghijkl"] -> "abcdefghijkl" (after sorting)
        vec![
            AssetInfo::NativeToken { denom: "abcde".to_string() },
            AssetInfo::NativeToken { denom: "fghijkl".to_string() },
        ],
        // Pair 3: ["abcd", "efghijkl"] -> "abcdefghijkl" (after sorting)
        vec![
            AssetInfo::NativeToken { denom: "abcd".to_string() },
            AssetInfo::NativeToken { denom: "efghijkl".to_string() },
        ],
    ];
    let pair_type = PairType::Xyk {};
    // Create the first pair successfully
    let msg1 = oroswap::factory::ExecuteMsg::CreatePair {
        pair_type: pair_type.clone(),
        asset_infos: collision_pairs[0].clone(),
        init_params: None,
    };
    let result1 = app.execute_contract(
        owner.clone(),
        factory_helper.factory.clone(),
        &msg1,
        &[Coin {
            denom: "uzig".to_string(),
            amount: Uint128::new(1000),
        }],
    );
    println!("✅ First pair creation result: {:?}", result1);
    assert!(result1.is_ok(), "First pair should be created successfully");
    // Try to create the second pair - this should fail due to collision
    let msg2 = oroswap::factory::ExecuteMsg::CreatePair {
        pair_type: pair_type.clone(),
        asset_infos: collision_pairs[1].clone(),
        init_params: None,
    };
    let result2 = app.execute_contract(
        owner.clone(),
        factory_helper.factory.clone(),
        &msg2,
        &[Coin {
            denom: "uzig".to_string(),
            amount: Uint128::new(1000),
        }],
    );
    println!("❌ Second pair creation result: {:?}", result2);
    assert!(result2.is_err(), "Second pair should fail due to collision");
    // Try to create the third pair - this should also fail due to collision
    let msg3 = oroswap::factory::ExecuteMsg::CreatePair {
        pair_type: pair_type.clone(),
        asset_infos: collision_pairs[2].clone(),
        init_params: None,
    };
    let result3 = app.execute_contract(
        owner.clone(),
        factory_helper.factory.clone(),
        &msg3,
        &[Coin {
            denom: "uzig".to_string(),
            amount: Uint128::new(1000),
        }],
    );
    println!("❌ Third pair creation result: {:?}", result3);
    assert!(result3.is_err(), "Third pair should fail due to collision");
    // Test Case 2: Demonstrate the actual collision by showing the keys are identical
    use oroswap_factory::state::pair_key;
    let key1 = pair_key(&collision_pairs[0], &pair_type);
    let key2 = pair_key(&collision_pairs[1], &pair_type);
    let key3 = pair_key(&collision_pairs[2], &pair_type);
    println!("🔑 Generated keys:");
    println!("   Key 1 (abc+def): {:?}", String::from_utf8_lossy(&key1));
    println!("   Key 2 (abcd+ef): {:?}", String::from_utf8_lossy(&key2));
    println!("   Key 3 (abcde+f): {:?}", String::from_utf8_lossy(&key3));
}
The results show that the test passed, confirming that the pairs could not be created because of the collisions:
running 1 test
test test_dos_collision_poc ... ok
successes:
---- test_dos_collision_poc stdout ----
Pair key: [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 120, 121, 107]
Pair key: [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 120, 121, 107]
✅ First pair creation result: Ok(AppResponse { events: [Event { ty: "execute", attributes: [Attribute { key: "_contract_address", value: "contract1" }] }, Event { ty: "wasm", attributes: [Attribute { key: "_contract_address", value: "contract1" }, Attribute { key: "action", value: "create_pair" }, Attribute { key: "pair", value: "abcdef-ghijkl" }, Attribute { key: "pair_type", value: "xyk" }, Attribute { key: "pool_creation_fee", value: "1000" }, Attribute { key: "total_funds", value: "1000" }] }, Event { ty: "instantiate", attributes: [Attribute { key: "_contract_address", value: "contract2" }, Attribute { key: "code_id", value: "2" }] }, Event { ty: "wasm", attributes: [Attribute { key: "_contract_address", value: "contract2" }, Attribute { key: "action", value: "instantiate" }, Attribute { key: "asset_balances_tracking", value: "disabled" }, Attribute { key: "maker_fee_address", value: "owner" }, Attribute { key: "pool_creation_fee", value: "1000" }] }, Event { ty: "reply", attributes: [Attribute { key: "_contract_address", value: "contract2" }, Attribute { key: "mode", value: "handle_success" }] }, Event { ty: "wasm", attributes: [Attribute { key: "_contract_address", value: "contract2" }, Attribute { key: "lp_denom", value: "coin.contract2.oroswaplptoken" }] }, Event { ty: "transfer", attributes: [Attribute { key: "recipient", value: "owner" }, Attribute { key: "sender", value: "contract2" }, Attribute { key: "amount", value: "1000uzig" }] }, Event { ty: "reply", attributes: [Attribute { key: "_contract_address", value: "contract1" }, Attribute { key: "mode", value: "handle_success" }] }, Event { ty: "wasm", attributes: [Attribute { key: "_contract_address", value: "contract1" }, Attribute { key: "action", value: "register" }, Attribute { key: "pair_contract_addr", value: "contract2" }] }], data: None })
Pair key: [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 120, 121, 107]
❌ Second pair creation result: Err(Error executing WasmMsg:
sender: owner
Execute { contract_addr: "contract1", msg: {"create_pair":{"pair_type":{"xyk":{}},"asset_infos":[{"native_token":{"denom":"abcde"}},{"native_token":{"denom":"fghijkl"}}],"init_params":null}}, funds: [Coin { 1000 "uzig" }] }
Caused by:
Pair was already created
Pair key: [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 120, 121, 107]
❌ Third pair creation result: Err(Error executing WasmMsg:
sender: owner
Execute { contract_addr: "contract1", msg: {"create_pair":{"pair_type":{"xyk":{}},"asset_infos":[{"native_token":{"denom":"abcd"}},{"native_token":{"denom":"efghijkl"}}],"init_params":null}}, funds: [Coin { 1000 "uzig" }] }
Caused by:
Pair was already created
Pair key: [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 120, 121, 107]
Pair key: [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 120, 121, 107]
Pair key: [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 120, 121, 107]
🔑 Generated keys:
Key 1 (abc+def): "abcdefghijklxyk"
Key 2 (abcd+ef): "abcdefghijklxyk"
Key 3 (abcde+f): "abcdefghijklxyk"
successes:
test_dos_collision_poc
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.02s
It is recommended to apply length-prefix encoding to each AssetInfo.as_bytes() in Factory::state::pair_key prior to sorting and concatenation, to ensure unique key derivation and prevent denomination pair collisions.
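The effect of length-prefixing can be shown with a standalone model of the key derivation that works over plain denom strings. This is a sketch of the recommendation, not the project's actual fix, and pair_key_safe is a hypothetical helper name.

```rust
/// Length-prefixed key derivation over plain denoms: each component is
/// preceded by its 4-byte big-endian length, making byte boundaries
/// unambiguous so ["abc","def"] and ["ab","cdef"] no longer collide.
fn pair_key_safe(denoms: &[&str], pair_type: &str) -> Vec<u8> {
    let mut sorted: Vec<&str> = denoms.to_vec();
    sorted.sort_unstable();
    let mut key = Vec::new();
    for denom in sorted {
        key.extend_from_slice(&(denom.len() as u32).to_be_bytes());
        key.extend_from_slice(denom.as_bytes());
    }
    key.extend_from_slice(pair_type.as_bytes());
    key
}

fn main() {
    // Naive concatenation maps all three pairs to b"abcdefxyk";
    // length prefixes keep them distinct.
    let k1 = pair_key_safe(&["a", "bcdef"], "xyk");
    let k2 = pair_key_safe(&["ab", "cdef"], "xyk");
    let k3 = pair_key_safe(&["abc", "def"], "xyk");
    assert_ne!(k1, k2);
    assert_ne!(k1, k3);
    assert_ne!(k2, k3);
}
```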
SOLVED: The issue was fixed in the specified commit. The pair_key function now uses delimiters (\x01) between asset bytes and before the pair type to prevent key collisions between different denomination combinations.
//
In the Staking contract, the execute_enter function mints xORO tokens for the first staker but withholds a fixed MINIMUM_STAKE_AMOUNT of ORO as initial bootstrap liquidity. As a result, the first user receives fewer xORO tokens than the amount of ORO deposited, effectively subsidizing the pool's startup cost. This breaks the expected 1:1 minting ratio, which may discourage early participation and undermine user trust.
Code snippet of the execute_enter function from the contracts/tokenomics/staking/src/contract.rs file:
fn execute_enter(
    deps: DepsMut,
    env: Env,
    info: MessageInfo,
) -> Result<(Response, Coin), ContractError> {
    let config = CONFIG.load(deps.storage)?;
    // Ensure that the correct denom is sent. Sending zero tokens is prohibited on chain level
    let amount = must_pay(&info, &config.oro_denom)?;
    // Get the current deposits and shares held in the contract.
    // Amount sent along with the message already included. Subtract it from the total deposit
    let total_deposit = deps
        .querier
        .query_balance(&env.contract.address, &config.oro_denom)?
        .amount
        - amount;
    let total_shares = deps.querier.query_supply(&config.xoro_denom)?.amount;
    let mut messages: Vec<CosmosMsg> = vec![];
    let mint_amount = if total_shares.is_zero() || total_deposit.is_zero() {
        // There needs to be a minimum amount initially staked, thus the result
        // cannot be zero if the amount is not enough
        if amount.saturating_sub(MINIMUM_STAKE_AMOUNT).is_zero() {
            return Err(ContractError::MinimumStakeAmountError {});
        }
        // Mint the xORO tokens to ourselves if this is the first stake
        messages.push(
            MsgMint {
                sender: env.contract.address.to_string(),
                amount: Some(coin(MINIMUM_STAKE_AMOUNT.u128(), &config.xoro_denom).into()),
                mint_to_address: env.contract.address.to_string(),
            }
            .into(),
        );
        amount - MINIMUM_STAKE_AMOUNT
    } else {
It is recommended to initialize the pool during deployment. This can be achieved by either funding the instantiate step with the minimum ORO and minting the corresponding xORO in the denom-creation reply, or by implementing a one-time bootstrap_pool function that is callable only by the deployer before any regular staking actions are performed.
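The share math behind this recommendation can be modeled with plain integers. This is a sketch under stated assumptions, not the contract's code: BOOTSTRAP stands in for MINIMUM_STAKE_AMOUNT and mint_amount is a simplified version of the minting formula.

```rust
/// Simplified xORO mint math: 1:1 on an empty pool, otherwise proportional
/// to the existing deposit/share ratio (integer division, as on chain).
fn mint_amount(deposit: u128, total_deposit: u128, total_shares: u128) -> u128 {
    if total_shares == 0 || total_deposit == 0 {
        deposit
    } else {
        deposit * total_shares / total_deposit
    }
}

fn main() {
    const BOOTSTRAP: u128 = 1_000; // assumed minimum stake withheld today
    // Current behavior: the first staker funds the bootstrap out of pocket.
    let penalized = mint_amount(500_000, 0, 0) - BOOTSTRAP;
    assert_eq!(penalized, 499_000);
    // After bootstrapping at instantiation (BOOTSTRAP ORO backing
    // BOOTSTRAP xORO), the first real staker mints 1:1.
    let fair = mint_amount(500_000, BOOTSTRAP, BOOTSTRAP);
    assert_eq!(fair, 500_000);
}
```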
SOLVED: The issue was fixed in the specified commit. The contract now implements a bootstrap mechanism during instantiation that mints xORO tokens equal to the bootstrap amount, ensuring the pool is properly initialized before any user stakes and eliminating the need to withhold funds from the first staker.
//
In the Staking contract, the execute_leave function allows users to burn xORO tokens in exchange for ORO based on a proportional formula. However, if a user burns a very small amount—especially when total_shares exceeds total_deposit—the resulting return_amount can round down to zero. Since the burn is irrevocable, this results in a deterministic loss: the user permanently loses xORO without receiving any ORO in return. This breaks the implicit 1:1 redeemability assumption, potentially undermining user trust and damaging the perceived fairness and integrity of the staking mechanism.
Code snippet of the execute_leave function from the contracts/tokenomics/staking/src/contract.rs file:
fn execute_leave(
    deps: DepsMut,
    env: Env,
    info: MessageInfo,
    recipient: String,
) -> Result<Response, ContractError> {
    let config = CONFIG.load(deps.storage)?;
    // Ensure that the correct denom is sent. Sending zero tokens is prohibited on chain level
    let amount = must_pay(&info, &config.xoro_denom)?;
    // Get the current deposits and shares held in the contract
    let total_deposit = deps
        .querier
        .query_balance(&env.contract.address, &config.oro_denom)?
        .amount;
    let total_shares = deps.querier.query_supply(&config.xoro_denom)?.amount;
    // Calculate the amount of ORO to return based on the ratios of
    // deposit and shares
    let return_amount = amount.multiply_ratio(total_deposit, total_shares);
It is recommended to abort the transaction when return_amount is zero, mirroring the check already applied to mint_amount in execute_enter (e.g. if return_amount.is_zero() { return Err(ContractError::StakeAmountTooSmall {}); }).
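The rounding hazard and the recommended guard can be reproduced with plain integer arithmetic; this is an illustrative model (names hypothetical), not the contract's implementation.

```rust
/// Sketch of the recommended guard: multiply_ratio floors the result, so a
/// tiny burn can redeem zero ORO; abort instead of burning shares for nothing.
fn checked_return_amount(
    burn: u128,
    total_deposit: u128,
    total_shares: u128,
) -> Result<u128, String> {
    let return_amount = burn * total_deposit / total_shares; // floors like multiply_ratio
    if return_amount == 0 {
        return Err("StakeAmountTooSmall".to_string());
    }
    Ok(return_amount)
}

fn main() {
    // 10 shares back 5 ORO: burning 1 xORO floors to 0 and is rejected.
    assert!(checked_return_amount(1, 5, 10).is_err());
    // Burning 4 xORO redeems 4 * 5 / 10 = 2 ORO.
    assert_eq!(checked_return_amount(4, 5, 10), Ok(2));
}
```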
SOLVED: The issue was fixed in the specified commit. The execute_leave function now includes a check for return_amount.is_zero() that prevents users from losing xORO tokens without receiving any ORO, aborting the transaction when the calculated return amount would be zero.
//
In the distribute function, the dev-fund share is calculated from the original amount variable even though the governance fee has already been deducted from it. This can lead to two failure scenarios:
When staking_contract == None and governance_percent == 100%, the governance receives the entire balance. However, the code still attempts to send dev_share = amount * dev_fund_conf.share; this exceeds the contract's available balance after governance has taken its share. As a result, the transaction reverts, blocking all future distributions and effectively causing a permanent denial of service until parameters are adjusted.
When a staking contract is configured but the sum governance_percent + second_receiver_cut + dev_fund_conf.share exceeds 100%, the remaining balance of ORO (the native token) becomes insufficient. Depending on rounding, this can intermittently cause a revert, halting payouts unexpectedly.
Therefore, misconfigurations can unintentionally freeze reward distributions, impacting system availability and fairness.
Code snippet of the distribute function from the contracts/tokenomics/maker/src/contract.rs file:
let second_receiver_amount = if let Some(second_receiver_cfg) = &cfg.second_receiver_cfg {
    let amount = amount.multiply_ratio(
        Uint128::from(second_receiver_cfg.second_receiver_cut),
        Uint128::new(100),
    );
    if !amount.is_zero() {
        let asset = Asset {
            info: cfg.oro_token.clone(),
            amount,
        };
        result.push(SubMsg::new(
            asset.into_msg(second_receiver_cfg.second_fee_receiver.to_string())?,
        ))
    }
    amount
} else {
    Uint128::zero()
};
let governance_amount = if let Some(governance_contract) = &cfg.governance_contract {
    let amount = amount
        .checked_sub(second_receiver_amount)?
        .multiply_ratio(Uint128::from(cfg.governance_percent), Uint128::new(100));
    if !amount.is_zero() {
        result.push(SubMsg::new(build_send_msg(
            &Asset {
                info: cfg.oro_token.clone(),
                amount,
            },
            governance_contract.to_string(),
            None,
        )?))
    }
    amount
} else {
    Uint128::zero()
};
let dev_amount = if let Some(dev_fund_conf) = &cfg.dev_fund_conf {
    let dev_share = amount * dev_fund_conf.share;
    if !dev_share.is_zero() {
        // Swap ORO and process result in reply
        let pool = get_pool(
            &deps.querier,
            &cfg.factory_contract,
            &cfg.oro_token,
            &dev_fund_conf.asset_info,
        )?;
        let mut swap_msg = build_swap_msg(
            cfg.max_spread,
            &pool,
            &cfg.oro_token,
            Some(&dev_fund_conf.asset_info),
            dev_share,
        )?;
        swap_msg.reply_on = ReplyOn::Success;
        swap_msg.id = PROCESS_DEV_FUND_REPLY_ID;
        result.push(swap_msg);
    }
    dev_share
It is recommended to calculate the dev-fund share before the governance deduction (or on the remaining balance after it) and to enforce at configuration time that second_receiver_cut + governance_percent + dev_fund_conf.share ≤ 100%.
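The configuration-time invariant is a simple sum check. The sketch below models the three cuts as plain integer percentages for illustration (the contract uses Decimal shares; the function name is hypothetical).

```rust
/// Config-time invariant from the recommendation:
/// governance_percent + second_receiver_cut + dev_fund_share must not exceed 100.
fn validate_cuts(governance: u64, second_receiver: u64, dev_fund: u64) -> Result<(), String> {
    let total = governance
        .checked_add(second_receiver)
        .and_then(|s| s.checked_add(dev_fund))
        .ok_or("overflow")?;
    if total > 100 {
        Err(format!("cuts sum to {total}%, exceeding 100%"))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(validate_cuts(50, 30, 20).is_ok());
    // governance_percent == 100 with a nonzero dev share must be rejected,
    // matching the first failure scenario described above.
    assert!(validate_cuts(100, 0, 10).is_err());
}
```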
SOLVED: The issue was fixed in the specified commit. The distribute function now calculates the dev fund from the remaining balance after the governance and second receiver deductions, and the update_config function includes validation to ensure the total percentages do not exceed 100%, preventing over-allocation scenarios.
//
The register_decimals function allows any user to register a denom with an arbitrary (but within a valid range) number of decimals. An attacker who owns a single unit of a popular asset can register an incorrect precision (e.g., 3 instead of 6) and immediately recover the coins sent. Downstream contracts that rely on native_coin_registry will misinterpret the scaled balances, resulting in over- or under-crediting of funds.
Code snippet of the register_decimals function from the contracts/periphery/native_coin_registry/src/contract.rs file:
pub fn register_decimals(
    deps: DepsMut,
    info: MessageInfo,
    native_coins: Vec<(String, u8)>,
) -> Result<Response, ContractError> {
    let coins_map = info
        .funds
        .iter()
        .map(|coin| &coin.denom)
        .collect::<BTreeSet<_>>();
    for (denom, _) in &native_coins {
        coins_map
            .get(denom)
            .ok_or(ContractError::MustSendCoin(denom.clone()))?;
    }
    // Return the funds back to the sender
    let send_msg = BankMsg::Send {
        to_address: info.sender.to_string(),
        amount: info.funds,
    };
    inner_add(deps.storage, native_coins, Some(send_msg))
}
Code snippet of the inner_add function from the contracts/periphery/native_coin_registry/src/contract.rs file:
pub fn inner_add(
    storage: &mut dyn Storage,
    native_coins: Vec<(String, u8)>,
    maybe_send_msg: Option<BankMsg>,
) -> Result<Response, ContractError> {
    // Check for duplicate native coins
    let mut uniq = HashSet::new();
    if !native_coins.iter().all(|a| uniq.insert(&a.0)) {
        return Err(ContractError::DuplicateCoins {});
    }
    native_coins.iter().try_for_each(|(denom, decimals)| {
        ensure!(
            ALLOWED_DECIMALS.contains(decimals),
            ContractError::InvalidDecimals {
                denom: denom.clone(),
                decimals: *decimals,
            }
        );
        COINS_INFO
            .update(storage, denom.clone(), |v| match v {
                Some(_) if maybe_send_msg.is_some() => {
                    Err(ContractError::CoinAlreadyExists(denom.clone()))
                }
                _ => Ok(*decimals),
            })
            .map(|_| ())
    })?;
It is recommended to store new registrations as pending and require explicit owner / governance approval before they become active, or alternatively only allow the owner to make registrations.
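The pending-then-approve flow can be sketched as a two-phase registry. All names below (DecimalRegistry, propose, approve) are hypothetical; a plain HashMap stands in for contract storage.

```rust
use std::collections::HashMap;

/// Two-phase registration sketch: anyone may propose a denom's decimals,
/// but only owner approval moves it into the active registry.
struct DecimalRegistry {
    owner: String,
    pending: HashMap<String, u8>,
    active: HashMap<String, u8>,
}

impl DecimalRegistry {
    fn propose(&mut self, denom: &str, decimals: u8) -> Result<(), String> {
        if self.active.contains_key(denom) {
            return Err("coin already registered".into());
        }
        self.pending.insert(denom.to_string(), decimals);
        Ok(())
    }

    fn approve(&mut self, sender: &str, denom: &str) -> Result<(), String> {
        if sender != self.owner {
            return Err("unauthorized".into());
        }
        let decimals = self.pending.remove(denom).ok_or("no pending registration")?;
        self.active.insert(denom.to_string(), decimals);
        Ok(())
    }
}

fn main() {
    let mut reg = DecimalRegistry {
        owner: "owner".to_string(),
        pending: HashMap::new(),
        active: HashMap::new(),
    };
    reg.propose("uatom", 3).unwrap(); // attacker proposes a wrong precision
    assert!(reg.approve("attacker", "uatom").is_err()); // cannot self-approve
    assert!(reg.active.is_empty()); // nothing active until the owner signs off
    assert!(reg.approve("owner", "uatom").is_ok());
}
```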
ACKNOWLEDGED: The Oroswap team acknowledged this finding.
//
Although CosmWasm atomicity prevents partial state on failure, adding a defensive guard to ensure PENDING_LIQUIDITY is empty at the start of execute_create_pair_and_provide_liquidity hardens the contract against unforeseen edge cases. This prevents starting a new operation if a prior pending record still exists and clarifies the operational policy (no concurrent in-flight operations).
It is recommended to check PENDING_LIQUIDITY.may_load(...) at the very start of execute_create_pair_and_provide_liquidity and return an error (e.g., OperationInProgress) if a pending item exists, requiring the admin to call execute_emergency_recovery before accepting a new one.
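The guard reduces to an emptiness check on the pending record. In the sketch below an Option models the result of PENDING_LIQUIDITY.may_load, and the error text reuses the OperationInProgress name suggested above; everything else is illustrative.

```rust
/// Defensive guard sketch: refuse to start a new create-pair-and-provide
/// operation while a pending record still exists.
fn ensure_no_pending<T>(pending: Option<T>) -> Result<(), String> {
    match pending {
        Some(_) => Err("OperationInProgress: run emergency recovery first".to_string()),
        None => Ok(()),
    }
}

fn main() {
    assert!(ensure_no_pending::<()>(None).is_ok());
    assert!(ensure_no_pending(Some("stale record")).is_err());
}
```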
SOLVED: The Oroswap team solved this issue by adding the check for the PENDING_LIQUIDITY item.
//
The remove_reward_from_pool function from the Incentives contract grants the contract owner the ability to set bypass_upcoming_schedules = true. When this flag is enabled, the deregister_reward function deletes all upcoming schedules for the reward token but does not transfer any remaining reward balance to stakers or a designated receiver. The undistributed tokens remain locked within the Incentives contract indefinitely, granting the owner excessive privilege to eliminate future rewards and effectively seize users' funds.
Code snippet of the deregister_reward function from the contracts/tokenomics/incentives/src/state.rs file:
pub fn deregister_reward(
    &mut self,
    storage: &mut dyn Storage,
    lp_asset: &AssetInfo,
    reward_asset: &AssetInfo,
    bypass_upcoming_schedules: bool,
) -> Result<Uint128, ContractError> {
    let (pos, reward_info) = self
        .rewards
        .iter()
        .find_position(|reward| matches!(&reward.reward, RewardType::Ext { info, .. } if info == reward_asset))
        .ok_or_else(|| ContractError::RewardNotFound { pool: lp_asset.to_string(), reward: reward_asset.to_string() })?;
    self.rewards_to_remove.insert(
        reward_info.reward.clone(),
        (reward_info.index, reward_info.orphaned),
    );
    let reward_info = self.rewards.remove(pos);
    let next_update_ts = match &reward_info.reward {
        RewardType::Ext { next_update_ts, .. } => *next_update_ts,
        RewardType::Int(_) => unreachable!("Only external rewards can be deregistered"),
    };
    // Assume update_rewards() was called before
    let mut remaining = reward_info.rps
        * Decimal256::from_ratio(next_update_ts.saturating_sub(self.last_update_ts), 1u8);
    // Remove active schedule from state
    EXTERNAL_REWARD_SCHEDULES.remove(storage, (lp_asset, reward_asset, next_update_ts));
    // If there is too much spam in the state, we can bypass upcoming schedules
    if !bypass_upcoming_schedules {
        let schedules = EXTERNAL_REWARD_SCHEDULES
            .prefix((lp_asset, reward_asset))
            .range(
                storage,
                Some(Bound::exclusive(next_update_ts)),
                None,
                Order::Ascending,
            )
            .collect::<StdResult<Vec<_>>>()?;
        // Collect future rewards and remove future schedules from state
        let mut prev_time = next_update_ts;
        schedules
            .into_iter()
            .for_each(|(update_ts, period_reward_per_sec)| {
                if update_ts > next_update_ts {
                    remaining += period_reward_per_sec
                        * Decimal256::from_ratio(update_ts - prev_time, 1u8);
                    prev_time = update_ts;
                }
                EXTERNAL_REWARD_SCHEDULES.remove(storage, (lp_asset, reward_asset, update_ts));
            })
    }
    // Take orphaned rewards as well
    remaining += reward_info.orphaned;
    Ok(remaining.to_uint_floor().try_into()?)
}
It is recommended to always calculate and transfer the value of upcoming schedules to the designated receiver (or to revert if that value is non-zero) regardless of the bypass flag, or to restrict bypass_upcoming_schedules to an emergency guardian role with on-chain justification.
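As a sketch of the recommended behavior, the accrual of upcoming-schedule value can be decoupled from state pruning, so the bypass flag never changes the amount owed. All names and types below are simplified stand-ins, not the contract's actual API:

```rust
// Sketch of the recommended behavior: the payout for upcoming schedules is
// always accumulated, and `bypass_upcoming_schedules` only decides whether
// the schedule entries are pruned from state.
fn deregister_remaining(
    schedules: &mut Vec<(u64, u128)>, // (update_ts, reward_per_sec), ascending
    active_end_ts: u64,
    bypass_upcoming_schedules: bool,
) -> u128 {
    let mut remaining: u128 = 0;
    let mut prev_time = active_end_ts;
    // Always account for the value of future schedules
    for &(update_ts, rps) in schedules.iter() {
        if update_ts > active_end_ts {
            remaining += rps * u128::from(update_ts - prev_time);
            prev_time = update_ts;
        }
    }
    // The flag only skips the (potentially expensive) state removals;
    // it never changes the amount owed to the receiver
    if !bypass_upcoming_schedules {
        schedules.clear();
    }
    remaining
}
```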
SOLVED: The issue was fixed in the specified commit. The deregister_reward
function now always calculates upcoming schedule rewards regardless of the bypass_upcoming_schedules
flag, preventing fund seizure while the flag only controls whether schedules are removed from the state for performance reasons.
//
The fee_granter contract allows a grant to be created with bypass_amount_check = true without verifying that the contract actually holds the requested amount. This lets a grantee obtain a fee grant they cannot realistically spend, potentially causing failed transactions or a denial of service if users rely on the allowance.
Code snippet of grant function from contracts/periphery/fee_granter/src/contract.rs file:
fn grant(
    deps: DepsMut,
    env: Env,
    info: MessageInfo,
    grantee_contract: Addr,
    amount: Uint128,
    bypass_amount_check: bool,
) -> Result<Response, ContractError> {
    let config = CONFIG.load(deps.storage)?;
    if config.owner != info.sender && !config.admins.contains(&info.sender) {
        return Err(ContractError::Unauthorized {});
    }
    if !bypass_amount_check {
        let sent_amount = must_pay(&info, &config.gas_denom)?;
        if sent_amount != amount {
            return Err(ContractError::InvalidAmount {
                expected: amount,
                actual: sent_amount,
            });
        }
    }
    GRANTS.update(
        deps.storage,
        &grantee_contract,
        |existing| -> StdResult<_> {
            match existing {
                None => Ok(amount),
                Some(_) => Err(StdError::generic_err(format!(
                    "Grant already exists for {grantee_contract}",
                ))),
            }
        },
    )?;
    let allowance = BasicAllowance {
        spend_limit: vec![SdkCoin {
            denom: config.gas_denom,
            amount: amount.to_string(),
        }],
        expiration: None,
    };
    let grant_msg = MsgGrantAllowance {
        granter: env.contract.address.to_string(),
        grantee: grantee_contract.to_string(),
        allowance: Some(Any {
            type_url: BasicAllowance::TYPE_URL.to_string(),
            value: allowance.encode_to_vec(),
        }),
    };
    let msg = CosmosMsg::Stargate {
        type_url: MsgGrantAllowance::TYPE_URL.to_string(),
        value: grant_msg.encode_to_vec().into(),
    };
    Ok(Response::default().add_message(msg).add_attributes([
        ("action", "grant"),
        ("grantee_contract", grantee_contract.as_str()),
        ("amount", amount.to_string().as_str()),
    ]))
}
It is recommended to verify that the contract’s balance in gas_denom is ≥ amount prior to emitting MsgGrantAllowance. Any discrepancy should result in aborting the transaction.
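A minimal sketch of that precondition, assuming the contract's gas_denom balance has been queried beforehand (the helper name is hypothetical):

```rust
// Hypothetical helper: before emitting MsgGrantAllowance, the contract's
// gas_denom balance must cover the requested grant, regardless of the
// bypass_amount_check flag.
fn assert_grant_covered(contract_balance: u128, grant_amount: u128) -> Result<(), String> {
    if contract_balance < grant_amount {
        return Err(format!(
            "insufficient balance: have {contract_balance}, grant requires {grant_amount}"
        ));
    }
    Ok(())
}
```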
SOLVED: The issue was fixed in the specified commit. The grant function now always verifies that the contract's balance in gas_denom is sufficient to grant the requested amount, preventing the creation of grants that cannot be realistically spent regardless of the bypass_amount_check flag.
//
In the Vesting contract, the register_vesting_accounts function is intended to enforce a per-account limit of SCHEDULES_LIMIT (8) vesting schedules. However, the current check only verifies:
old_info.schedules.len() + 1 <= SCHEDULES_LIMIT
This logic assumes a single schedule is added per call and does not account for the number of schedules being submitted in the transaction. Consequently:
Bypass of the Intended Limit
An attacker with 7 existing schedules can submit a call containing n new schedules (the check 7 + 1 <= 8 still passes), ending with 7 + n schedules and exceeding the limit by n - 1 in a single transaction.
A new account can register n schedules in its initial call, effectively bypassing the schedule cap entirely.
Denial-of-Service Risk
Allowing unrestricted growth of schedules increases storage iteration costs. Queries and updates that loop over schedules may run out of gas or become prohibitively expensive, potentially leading to a denial of service.
Inaccurate Accounting
Clients and front-end interfaces rely on the schedule count to enforce user experience limits. Bypassing this check can result in unexpected behavior or UI failures.
It is recommended to enforce the true cap by validating
old_info.schedules.len() + vesting_account.schedules.len() <= SCHEDULES_LIMIT
and applying the same check when old_info does not exist, so that no transaction can register more than 8 schedules in total.
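The corrected cap check can be sketched as follows (the function and parameter names are illustrative, not the contract's):

```rust
const SCHEDULES_LIMIT: usize = 8;

// Sketch of the corrected cap check: both the existing and the newly
// submitted schedules are counted, so no call can push the total past
// the limit, whether or not the account already exists.
fn check_schedule_cap(existing: usize, submitted: usize) -> Result<(), String> {
    if existing + submitted > SCHEDULES_LIMIT {
        return Err(format!(
            "schedule limit exceeded: {existing} existing + {submitted} new > {SCHEDULES_LIMIT}"
        ));
    }
    Ok(())
}
```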
SOLVED: The issue was fixed in the specified commit. The register_vesting_accounts function now properly validates that the total number of schedules (existing + new) does not exceed SCHEDULES_LIMIT, preventing bypass of the intended schedule cap.
//
In the Vesting contract, the assert_vesting_schedules function only verifies schedules that include an end_point. A schedule registered without an end_point but with its start_point.time already in the past is considered fully unlocked at registration, effectively bypassing the intended vesting period.
Code snippet of assert_vesting_schedules function from contracts/tokenomics/vesting/src/contract.rs file:
fn assert_vesting_schedules(
    env: &Env,
    addr: &Addr,
    vesting_schedules: &[VestingSchedule],
) -> Result<(), ContractError> {
    for sch in vesting_schedules {
        if let Some(end_point) = &sch.end_point {
            if !(sch.start_point.time < end_point.time
                && end_point.time > env.block.time.seconds()
                && sch.start_point.amount < end_point.amount)
            {
                return Err(ContractError::VestingScheduleError(addr.to_string()));
            }
        }
    }
    Ok(())
}
It is recommended to enforce start_time ≥ block_time when end_point is absent.
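The added validation can be sketched as follows (simplified: times as u64 seconds, and an optional end time in place of the full end_point struct):

```rust
// Sketch of the recommended validation: closed schedules keep the
// existing checks, while open-ended schedules must start no earlier
// than the current block time (a past start would unlock immediately).
fn schedule_is_valid(start_time: u64, end_time: Option<u64>, block_time: u64) -> bool {
    match end_time {
        // Closed schedule: start before end, and end still in the future
        Some(end) => start_time < end && end > block_time,
        // Open-ended schedule: reject past start times
        None => start_time >= block_time,
    }
}
```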
SOLVED: The issue was fixed in the specified commit. The assert_vesting_schedules function now enforces start_time >= block_time when end_point is absent, preventing immediate unlocking of tokens with past start times.
//
The asset info holds information such as the token denom or the token contract address and is used to instantiate a pair. A uniqueness check ensures that the creator does not use the same token twice in the pair; however, this check is case-sensitive and can be bypassed if the creator supplies the same address once in lowercase and once in uppercase.
The asset info declaration, in packages/oroswap-core/src/asset.rs:
#[cw_serde]
#[derive(Hash, Eq)]
pub enum AssetInfo {
    /// Non-native Token
    Token { contract_addr: Addr },
    /// Native token
    NativeToken { denom: String },
}

pub(crate) fn check_asset_infos(
    api: &dyn Api,
    asset_infos: &[AssetInfo],
) -> Result<(), ContractError> {
    if !asset_infos.iter().all_unique() {
        return Err(ContractError::DoublingAssets {});
    }
    asset_infos
        .iter()
        .try_for_each(|asset_info| asset_info.check(api))
        .map_err(Into::into)
}
The following test shows that different cases bypass the unique check for the asset info:
fn test_asset_info_case_sensitivity_should_be_duplicate() {
    // Test that contract addresses with different cases SHOULD be considered duplicates
    // This test is expected to FAIL if the system doesn't treat different cases as duplicates
    let lowercase_addr = AssetInfo::cw20_unchecked("wasm1abc123def456");
    let uppercase_addr = AssetInfo::cw20_unchecked("WASM1ABC123DEF456");
    let mixed_case_addr = AssetInfo::cw20_unchecked("Wasm1Abc123Def456");

    // These SHOULD be equal (test will fail if they're not)
    assert_eq!(lowercase_addr, uppercase_addr, "Lowercase and uppercase addresses should be considered equal");
    assert_eq!(lowercase_addr, mixed_case_addr, "Lowercase and mixed case addresses should be considered equal");
    assert_eq!(uppercase_addr, mixed_case_addr, "Uppercase and mixed case addresses should be considered equal");

    // Test with the all_unique() method used in assert_coins_properly_sent
    let asset_infos = vec![
        Asset::cw20_unchecked("wasm1abc123def456", 1000u128),
        Asset::cw20_unchecked("WASM1ABC123DEF456", 2000u128),
    ];

    // This should FAIL because they should be considered duplicates
    assert!(!asset_infos.iter().map(|asset| &asset.info).all_unique(),
        "Contract addresses with different cases should be detected as duplicates");

    // Test with HashSet to ensure they have the same hash
    use std::collections::HashSet;
    let mut asset_set = HashSet::new();
    asset_set.insert(lowercase_addr.clone());
    asset_set.insert(uppercase_addr.clone());
    asset_set.insert(mixed_case_addr.clone());
    assert_eq!(asset_set.len(), 1, "All three addresses should be treated as the same address");
}
The test fails, confirming that differently cased variants of the same address bypass the uniqueness check.
It is recommended that the uniqueness check treats the different cases identically.
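A case-insensitive uniqueness check can be sketched as follows (plain strings stand in for AssetInfo identifiers):

```rust
use std::collections::HashSet;

// Sketch of a case-insensitive uniqueness check: identifiers are
// lowercased before comparison, so differently cased variants of the
// same address collide as intended.
fn all_unique_case_insensitive(assets: &[&str]) -> bool {
    let mut seen = HashSet::new();
    // `insert` returns false on a duplicate, making `all` fail
    assets.iter().all(|a| seen.insert(a.to_lowercase()))
}
```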
SOLVED: The issue was fixed in the specified commit. The asset info deduplication now uses case-insensitive comparison through the case_insensitive_eq method, preventing bypass of uniqueness checks when using different case variations of the same asset address or denom.
//
The Oroswap pair contract is inspired by the UniswapV2 contracts. There is a small discrepancy in the fee calculations that deviates from the original code and may cause confusion in downstream integrations.
In UniswapV2 pair contract, the fee is applied to the input amount before calculating the output, using the constant product formula with the fee-adjusted input:
// UniswapV2Router02.sol - getAmountOut function
function getAmountOut(uint amountIn, uint reserveIn, uint reserveOut) internal pure returns (uint amountOut) {
    require(amountIn > 0, 'UniswapV2Library: INSUFFICIENT_INPUT_AMOUNT');
    require(reserveIn > 0 && reserveOut > 0, 'UniswapV2Library: INSUFFICIENT_LIQUIDITY');
    uint amountInWithFee = amountIn.mul(997);
    uint numerator = amountInWithFee.mul(reserveOut);
    uint denominator = reserveIn.mul(1000).add(amountInWithFee);
    amountOut = numerator / denominator;
}
In contrast, the Oroswap pair contract calculates the swap amount first, then deducts the fee from the output:
// contract.rs - compute_swap function
pub fn compute_swap(
    offer_pool: Uint128,
    ask_pool: Uint128,
    offer_amount: Uint128,
    commission_rate: Decimal,
) -> StdResult<(Uint128, Uint128, Uint128)> {
    // Calculate output using constant product formula WITHOUT fee
    let cp: Uint256 = offer_pool * ask_pool;
    let return_amount: Uint256 = (Decimal256::from_ratio(ask_pool, 1u8)
        - Decimal256::from_ratio(cp, offer_pool + offer_amount))
        * Uint256::from(1u8);
    // Then calculate and deduct the fee
    let commission_amount: Uint256 = return_amount * commission_rate;
    let return_amount: Uint256 = return_amount - commission_amount;
    // Note: spread_amount is computed earlier in the original function
    // (elided in this excerpt)
    Ok((
        return_amount.try_into()?,
        spread_amount.try_into()?,
        commission_amount.try_into()?,
    ))
}
For a swap of 1 token with 0.3% fee and reserves of 100:100:
UniswapV2: Returns 0.987158034397061298 tokens
Oroswap: Returns 0.987128712871287129 tokens
Difference: ~0.003% (the fee rate scaled by the trade's share of the pool: 0.3% × ~1%)
This discrepancy means that:
Integration with existing UniswapV2 tooling may produce unexpected results.
Arbitrage bots calibrated for UniswapV2 math may calculate incorrect profits.
Price impact calculations will differ slightly from UniswapV2.
The constant product invariant (k) increases by different amounts after swaps.
Apply the fee on the input amount by computing amountInWithFee = offer_amount * (1 - commission_rate), then calculate return_amount using the constant product formula: (amountInWithFee * ask_pool) / (offer_pool + amountInWithFee). This approach aligns Oroswap swaps with the UniswapV2 mathematical model.
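The discrepancy can be reproduced with a small numerical sketch of the two fee models (f64 is used here for illustration only; the contracts operate on fixed-point Decimal types):

```rust
// Fee applied to the input before the constant-product formula (UniswapV2 model)
fn uniswap_v2_out(amount_in: f64, reserve_in: f64, reserve_out: f64, fee: f64) -> f64 {
    let in_with_fee = amount_in * (1.0 - fee);
    in_with_fee * reserve_out / (reserve_in + in_with_fee)
}

// Constant-product output computed first, fee deducted from the output (Oroswap model)
fn oroswap_out(amount_in: f64, reserve_in: f64, reserve_out: f64, fee: f64) -> f64 {
    let gross = reserve_out - reserve_in * reserve_out / (reserve_in + amount_in);
    gross * (1.0 - fee)
}
```

With reserves of 100:100, a 1-token input and a 0.3% fee, the first function yields ~0.987158 and the second ~0.987129, reproducing the figures above.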
ACKNOWLEDGED: The Oroswap team acknowledged this finding.
//
Several contract instantiate entry points currently emit an empty or inadequately attributed Response, omitting standard attributes such as action="instantiate" (and, where applicable, contract-specific fields like owner, oroswap_factory, or contract_version). Without these attributes:
On-chain traceability is compromised, as indexers and auditors cannot reliably link deployment transactions to their parameters.
Monitoring and alerting tools lose key hooks for filtering instantiation events.
Compliance audits become more laborious, since standard event logs are not emitted.
Although no direct fund loss is possible, the lack of observability degrades overall contract transparency and incident-response capabilities.
Affected Locations
contracts/router/src/contract.rs
contracts/tokenomics/incentives/src/instantiate.rs
contracts/tokenomics/vesting/contract.rs
It is recommended to modify each instantiate entry point to include meaningful attributes in the returned Response. This will ensure consistent on-chain events, improve filtering and traceability, and facilitate automated indexing, monitoring, and auditing.
SOLVED: The issue was fixed in the specified commit. All three contracts now include meaningful attributes in their instantiate functions.
//
Several query handlers across multiple contracts accept a user-supplied limit and immediately call .take(limit…) without enforcing any hard upper bound. This allows an attacker to request an excessively large page (e.g. u32::MAX), causing the node to spend excessive CPU/memory serializing results, potentially leading to service degradation or denial of service for other users.
Location
contracts/periphery/fee_granter/src/query.rs::grants_list
contracts/periphery/native_coin_registry/src/contract.rs::query_native_tokens
contracts/tokenomics/incentives/src/query.rs::list_pools
contracts/tokenomics/incentives/src/query.rs::query_blocked_tokens
As a best practice, it is recommended to always enforce a sensible, default maximum limit before iterating. For example:
const DEFAULT_LIMIT: u32 = 50;
const MAX_LIMIT: u32 = 100;
let limit = limit.unwrap_or(DEFAULT_LIMIT).min(MAX_LIMIT) as usize;
let items = storage.items().take(limit).collect::<Vec<_>>();
SOLVED: The issue was fixed in the specified commit. All query endpoints now enforce sensible maximum limits.
//
The maker contract is used to collect fees from the pools, via the collect function that anyone can call. An emergency feature allows the admin to declare pools that can be "seized" at any time, sending the funds to a predefined address.
The malicious owner can therefore:
Set any asset as "seizable" via UpdateSeizeConfig
Set themselves as the receiver
Call seize to transfer any funds to themselves
The attack scenario is as follows:
// 1. Owner sets up seize configuration
UpdateSeizeConfig {
    receiver: Some("owner_address"),
    seizable_assets: vec![AssetInfo::native("uusdc"), AssetInfo::native("uluna"), ...],
}
// 2. Owner calls seize to steal funds
Seize { assets: vec![AssetInfo::native("uusdc"), AssetInfo::native("uluna")] }
The seize function is also callable by anyone, even though the funds are directed to the receiver defined by the owner. Therefore, as long as the owner has marked an asset as seizable, anyone can call seize to grief the fee harvest instead of the regular collect flow.
Enforce strict access control on the UpdateSeizeConfig and seize functions by implementing an on-chain verification ensuring that only the designated admin address is authorized to invoke them, thereby preventing unauthorized asset seizures.
SOLVED: The issue was fixed in the specified commit. The Seize and UpdateSeizeConfig entry points now have proper access control.
//
The vesting contract allows the owner to prematurely close a schedule for any given user. However, when the owner attempts to withdraw the complete remaining amount, the amount >= amount_left check rejects the call, forcing the owner to leave at least a single unit in the schedule.
In withdraw_from_active_schedule:
let amount_left = end_point.amount.checked_sub(sch_unlocked_amount)?;
if amount >= amount_left {
    return Err(ContractError::NotEnoughTokens(amount_left));
}
Replace the amount >= amount_left condition in withdraw_from_active_schedule with amount > amount_left to allow withdrawals that match the remaining balance, thereby fully clearing the schedule.
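The off-by-one can be demonstrated in isolation (u128 stands in for Uint128; the function names are illustrative):

```rust
// Original comparison: rejects a withdrawal equal to the remaining
// balance, forcing one unit to stay behind.
fn can_withdraw_original(amount: u128, amount_left: u128) -> bool {
    !(amount >= amount_left)
}

// Fixed comparison: allows withdrawing exactly the remaining balance,
// fully clearing the schedule, while still rejecting over-withdrawals.
fn can_withdraw_fixed(amount: u128, amount_left: u128) -> bool {
    !(amount > amount_left)
}
```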
SOLVED: The issue was fixed in the specified commit. The withdraw_from_active_schedule function now uses amount > amount_left instead of amount >= amount_left, allowing withdrawals that match the remaining balance and fully clearing the schedule.
//
After enforcing the minimum pool-creation fee upfront, the subsequent branch if coin.amount >= pool_creation_fee during info.funds processing is unnecessary. This adds complexity without functional benefit, since the fee requirement is already guaranteed. Simplifying the split (extract fee, treat remainder as liquidity) improves readability and reduces the potential for future mistakes.
Code of execute_create_pair_and_provide_liquidity function from contracts/periphery/pool_initializer/src/contract.rs file:
// 5. Enforce minimum pool creation fee
let fee_denom = config.fee_denom.clone();
let fee_sent = info.funds.iter()
    .find(|c| c.denom == fee_denom)
    .map(|c| c.amount)
    .unwrap_or_default();
if fee_sent < config.pair_creation_fee {
    return Err(ContractError::InsufficientFundsForDenom { denom: fee_denom });
}

// Extract pool creation fee and keep the rest for liquidity
let mut factory_funds = vec![];
let mut liquidity_funds = vec![];
let mut cw20_messages = vec![];
for coin in &info.funds {
    if coin.denom == fee_denom {
        let pool_creation_fee = config.pair_creation_fee;
        if coin.amount >= pool_creation_fee {
            factory_funds.push(cosmwasm_std::Coin {
                denom: fee_denom.clone(),
                amount: pool_creation_fee,
            });
            // Keep the rest for liquidity
            let remaining = coin.amount - pool_creation_fee;
            if !remaining.is_zero() {
                liquidity_funds.push(cosmwasm_std::Coin {
                    denom: fee_denom.clone(),
                    amount: remaining,
                });
            }
        } else {
            // If less than pool creation fee, send all to factory
            factory_funds.push(coin.clone());
        }
    } else {
        // Non-fee tokens go to liquidity
        liquidity_funds.push(coin.clone());
    }
}
It is recommended to remove the redundant if coin.amount >= pool_creation_fee branch and directly split the fee from the fee_denom amount after the upfront fee check, treating the remainder as liquidity.
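The simplified split can be sketched as follows (plain (denom, amount) tuples stand in for Coin, and the denoms are illustrative). The sketch assumes the upfront check already guaranteed that the fee-denom amount covers the creation fee:

```rust
// Split funds after the upfront fee check: the fee goes to the factory,
// any surplus in the fee denom plus all other denoms become liquidity.
fn split_funds(
    funds: &[(&str, u128)],
    fee_denom: &str,
    creation_fee: u128,
) -> (Vec<(String, u128)>, Vec<(String, u128)>) {
    let mut factory_funds = Vec::new();
    let mut liquidity_funds = Vec::new();
    for &(denom, amount) in funds {
        if denom == fee_denom {
            // Upfront check guarantees amount >= creation_fee here
            factory_funds.push((denom.to_string(), creation_fee));
            let remaining = amount - creation_fee;
            if remaining > 0 {
                liquidity_funds.push((denom.to_string(), remaining));
            }
        } else {
            liquidity_funds.push((denom.to_string(), amount));
        }
    }
    (factory_funds, liquidity_funds)
}
```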
SOLVED: The Oroswap team solved this issue by removing the redundant code.
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.