Prepared by:
HALBORN
Last Updated 09/16/2025
Date of Engagement: August 26th, 2024 - September 20th, 2024
100% of all REPORTED Findings have been addressed
All findings
27
Critical
4
High
1
Medium
5
Low
12
Informational
5
Holonym engaged Halborn to perform a security assessment of their Rust codebase from September 2, 2024, to September 27, 2024. The assessment focused on the specific crates listed in the provided GitHub repository and included relevant commit hashes. More details can be found in the Scope section of this report.
The Halborn team was allocated four weeks for the engagement and assigned a full-time security engineer to assess the security of the crates and the overall codebase. The security engineer is an expert in blockchain and smart contract security, with advanced skills in penetration testing and smart contract auditing, as well as extensive knowledge of various blockchain protocols.
The purpose of this assessment is to:
Ensure that codebase functions operate as intended
Identify potential security issues within codebase
Halborn performed a combination of manual code review and automated security testing to balance efficiency, timeliness, practicality, and accuracy regarding the scope of the codebase assessment. While manual testing is recommended to uncover flaws in logic, process, and implementation, automated testing techniques can quickly identify items that do not follow security best practices. The following phases and associated tools were used throughout the term of the assessment:
Research on the architecture, purpose, and usage of the Human network.
Manual code reading and walkthroughs to gain an understanding of the overall design and potential vulnerabilities.
Manual assessment of critical Rust variables and functions to identify arithmetic-related vulnerabilities.
Testing for race conditions and thread safety in the Rust actor framework.
Audit of the DKG (Distributed Key Generation) cryptographic protocol to ensure robustness against potential attacks.
Review of libp2p configuration and security to validate network communication integrity.
Security testing of cryptographic primitives to ensure they meet industry standards.
Scanning Rust files for vulnerabilities using Cargo Audit, identifying outdated dependencies and known security issues.
Checking for unsafe code usage with Cargo Geiger to minimize risks associated with unsafe Rust features.
Analysis of node communication and message integrity to mitigate risks from malicious nodes.
Review of error handling and logging practices to ensure sensitive information is not exposed.
Testing for denial-of-service vulnerabilities and resilience against resource exhaustion attacks.
EXPLOITABILITY METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Attack Origin (AO) | Arbitrary (AO:A), Specific (AO:S) | 1, 0.2 |
Attack Cost (AC) | Low (AC:L), Medium (AC:M), High (AC:H) | 1, 0.67, 0.33 |
Attack Complexity (AX) | Low (AX:L), Medium (AX:M), High (AX:H) | 1, 0.67, 0.33 |
IMPACT METRIC | METRIC VALUE | NUMERICAL VALUE |
---|---|---|
Confidentiality (C) | None (C:N), Low (C:L), Medium (C:M), High (C:H), Critical (C:C) | 0, 0.25, 0.5, 0.75, 1 |
Integrity (I) | None (I:N), Low (I:L), Medium (I:M), High (I:H), Critical (I:C) | 0, 0.25, 0.5, 0.75, 1 |
Availability (A) | None (A:N), Low (A:L), Medium (A:M), High (A:H), Critical (A:C) | 0, 0.25, 0.5, 0.75, 1 |
Deposit (D) | None (D:N), Low (D:L), Medium (D:M), High (D:H), Critical (D:C) | 0, 0.25, 0.5, 0.75, 1 |
Yield (Y) | None (Y:N), Low (Y:L), Medium (Y:M), High (Y:H), Critical (Y:C) | 0, 0.25, 0.5, 0.75, 1 |
SEVERITY COEFFICIENT | COEFFICIENT VALUE | NUMERICAL VALUE |
---|---|---|
Reversibility (R) | None (R:N), Partial (R:P), Full (R:F) | 1, 0.5, 0.25 |
Scope (S) | Changed (S:C), Unchanged (S:U) | 1.25, 1 |
Severity | Score Value Range |
---|---|
Critical | 9 - 10 |
High | 7 - 8.9 |
Medium | 4.5 - 6.9 |
Low | 2 - 4.4 |
Informational | 0 - 1.9 |
Critical
4
High
1
Medium
5
Low
12
Informational
5
Security analysis | Risk level | Remediation Date |
---|---|---|
Unrestricted Growth of PubkeyShares HashMap Can Lead to Out-of-Memory (OOM) | Critical | Solved - 03/08/2025 |
Message::ForwardMulRequest Potential DDoS | Critical | Solved - 06/04/2025 |
Inadequate Validation of Messages in DKG Protocol | Critical | Solved - 03/08/2025 |
Deadlock in DKG | Critical | Solved - 03/08/2025 |
Lack of Validation for Threshold Parameters in update_threshold Function | High | Solved - 02/13/2025 |
Incorrect Threshold Check for Multiplication Verification in process_verification Function | Medium | Solved - 03/08/2025 |
Incomplete Election State Update and Lack of Error Handling in conduct_election Function | Medium | Not Applicable - 03/08/2025 |
Panic in Polynomial Generation from Seed | Medium | Solved - 06/11/2025 |
Out-of-Bounds Access Due to Empty Vectors | Medium | Not Applicable - 03/08/2025 |
Missing t,n Validation in Network Initialization | Medium | Solved - 03/08/2025 |
Non-Constant Time Cryptographic Operations In PointTrait | Low | Not Applicable - 03/08/2025 |
Missing Error Handling for encode() in process_verification Function | Low | Not Applicable - 03/08/2025 |
Lack of I/O Lock for File Operations in StoreKeyShares Handling | Low | Not Applicable - 03/08/2025 |
Missing Synchronization for Shared State in ElectionEngineState | Low | Not Applicable - 03/08/2025 |
Insecure RNG for Polynomial Coefficients | Low | Not Applicable - 03/08/2025 |
Sensitive Data Exposure Through Logging of Seed Value | Low | Solved - 10/10/2024 |
Misconfigurations in gossip Initialization | Low | Solved - 03/08/2025 |
Lack of Peer Blacklisting in Gossipsub Engine | Low | Solved - 03/08/2025 |
Unchecked Return Values in Request and Subscription Handlers | Low | Not Applicable - 03/08/2025 |
Insufficient Error Handling in handle_store_reshared_received_pubshare | Low | Solved - 06/04/2025 |
Missing Update of Total Nodes in add_node Function | Low | Not Applicable - 03/08/2025 |
Multiple Overflows in Polynomial Operations | Low | Not Applicable - 03/08/2025 |
Invalid Threshold Calculation in calculate_threshold Function | Informational | Not Applicable - 03/08/2025 |
Function Naming and State Update Issue in check_election_status | Informational | Not Applicable - 03/08/2025 |
Missing Degree Constraints in Polynomial Creation Function | Informational | Solved - 06/01/2025 |
Insecure Fallback to Local Random Number Generation | Informational | Not Applicable - 03/08/2025 |
Use of Non-Cryptographically Secure Random Number Generators in dkg nodes | Informational | Not Applicable - 03/08/2025 |
//
In the store_kset function, there is a potential risk of Out-of-Memory (OOM) issues due to unrestricted growth of the pubkey_shares data structure, which is based on a HashMap of type PubkeyShares. This issue arises because there is no explicit limit on how many key shares can be stored within this map or the received_group_public_keys list.
Additionally, pubkey_shares is of type PubkeyShares, which is defined as:
pub type PubkeyShares = HashMap<String, Vec<u8>>;
This HashMap stores public key shares with a String as the key (which may represent a node or peer) and a Vec<u8> (which holds the binary representation of the public keys) as the value. Unchecked addition of entries to this structure, combined with the fact that Vec<u8> can hold large amounts of data, increases the risk of memory exhaustion.
To mitigate this issue, several measures can be taken:
Set Size Limits: Introduce size limits on both pubkey_shares and received_group_public_keys.
Validate Before Insertion: Before inserting data into pubkey_shares or appending to received_group_public_keys, check that the size of these collections does not exceed the defined limits.
Log and Monitor: In addition to limiting the sizes, log events when limits are hit. This can help in debugging and alerting if the limits are reached too frequently.
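To illustrate the size-limit and pre-insertion checks recommended above, the following is a minimal sketch; the constants MAX_PUBKEY_SHARES and MAX_SHARE_BYTES and the StoreError type are hypothetical and not part of the audited codebase.
use std::collections::HashMap;

pub type PubkeyShares = HashMap<String, Vec<u8>>;

// Hypothetical limits; real values should match the expected quorum size
// and the encoded size of a single public key share.
const MAX_PUBKEY_SHARES: usize = 128;
const MAX_SHARE_BYTES: usize = 1024;

#[derive(Debug)]
pub enum StoreError {
    TooManyShares,
    ShareTooLarge,
}

/// Insert a share only if both the map size and the entry size stay within bounds.
fn store_share_bounded(
    shares: &mut PubkeyShares,
    peer: String,
    share: Vec<u8>,
) -> Result<(), StoreError> {
    if share.len() > MAX_SHARE_BYTES {
        return Err(StoreError::ShareTooLarge);
    }
    if !shares.contains_key(&peer) && shares.len() >= MAX_PUBKEY_SHARES {
        return Err(StoreError::TooManyShares);
    }
    shares.insert(peer, share);
    Ok(())
}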
SOLVED: The OOM risk in store_kset has been mitigated.
The function now:
Calls check_memory_usage to enforce per-entry size caps (100 bytes for pubkey_shares, 200 bytes for group_public_keys).
Accepts key sets only from peers already present in active_provers, so the total number of stored entries cannot grow beyond the configured quorum size n.
Adds each peer’s share only once and finalises when ready_nodes.len() == n, keeping received_group_public_keys bounded.
//
The current implementation of handling the ForwardMulRequest message and managing the Gossipsub engine's data storage presents several vulnerabilities that could be exploited to cause a Distributed Denial of Service (DDoS) attack. These vulnerabilities stem from unchecked return values, potential memory exhaustion, and a lack of constraints on data size.
Message::ForwardMulRequest(request_id, request) => {
    if let Some(peers) = msg.2 {
        for (peer_id, multi_address) in peers.into_iter() {
            info!("Forwarding oprf request to peer: {:?} , Multiaddress :{:?}", peer_id, multi_address);
            let status = self.swarm.dial(multi_address); // @audit unchecked return value
            info!("Connection status :{:?}", status);
            self.cached_data.entry(peer_id) // @audit critical, oom/dos bug
                .or_default()
                .push(Message::ForwardMulRequest(request_id.clone(), request.clone()));
        }
    }
}
1. Unchecked Return Value
In the Message::ForwardMulRequest handler, the return value from self.swarm.dial(multi_address) is not checked. This method attempts to initiate a connection to a peer, and the return value indicates whether the connection attempt succeeded or failed. Ignoring this value means that:
Failure to Handle Connection Issues: If the connection fails, there is no mechanism to handle or log the error. This could lead to a situation where failed connections are silently ignored, potentially resulting in cascading failures if retries or alternative actions are not performed.
2. Potential Memory Exhaustion
The cached_data field is a HashMap<PeerId, Vec<Message>> that stores messages for each peer. In the Message::ForwardMulRequest handler:
Unbounded Growth of cached_data: Each incoming ForwardMulRequest message can add new entries or extend existing entries in the cached_data map. If an attacker floods the system with numerous ForwardMulRequest messages, the size of the cached_data map can grow uncontrollably.
Out-of-Memory Risk: The unchecked growth of Vec<Message> for each PeerId can lead to excessive memory consumption. This can exhaust the system's resources, causing degraded performance or even crashes, particularly under a high-volume attack.
3. Denial of Service (DoS) Vector in RequestToNetwork
The RequestToNetwork struct includes a field extra_data of type Option<Vec<u8>>. This field is designed to store optional data but presents several risks:
Unbounded Data Size: Since extra_data is a Vec<u8>, it can grow arbitrarily large. If an attacker sends requests with large extra_data, this can lead to excessive memory usage or processing time, contributing to a DoS condition.
Chaining with cached_data Vulnerability: Combined with the potential unbounded growth of cached_data, large extra_data fields can exacerbate memory exhaustion. The system could be overwhelmed by both the sheer volume of messages and the size of the data within those messages.
#[derive(Debug, Clone, Serialize, Deserialize)]
// NOTE: If you modify this struct, please ensure you modify the validation logic for requests accordingly
pub struct RequestToNetwork /*<const N: usize, C: Curve<N>>*/ {
/// The method of the request
pub method: Method,
/// Encoded point to be multiplied
pub point: Vec<u8>, // C::Point,
/// Which epoch this request is from
pub epoch: u32,
/// Which request number this is from this user. Starts at 1 not 0 to ensure the first request is paid for.
pub request_per_user: u128,
/// Signature of the request. If valid, this can be used to recover the address of the signer
pub signature: Option<Signature>,
/// Extra optional data
pub extra_data: Option<Vec<u8>>, // @audit This is not used anywhere in the codebase
}
Handle Return Values: Implement error handling for connection attempts to ensure that failures are logged and managed.
Implement Limits and Rate Limiting: Introduce constraints on the size of cached_data and enforce rate limits on incoming messages to prevent flooding.
Validate and Restrict extra_data: Set size limits and perform validation on extra_data to avoid excessive resource consumption.
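To illustrate the first two recommendations, a simplified fragment of the handler body is sketched below; it is not the project's actual code, and MAX_CACHED_MESSAGES_PER_PEER is a hypothetical constant (in libp2p, Swarm::dial returns a Result, which the match below relies on).
// Hypothetical cap on how many messages may be cached per peer.
const MAX_CACHED_MESSAGES_PER_PEER: usize = 64;

// Inside the ForwardMulRequest handler loop:
match self.swarm.dial(multi_address) {
    Ok(()) => info!("Dialed peer {:?}", peer_id),
    Err(e) => {
        warn!("Failed to dial peer {:?}: {:?}", peer_id, e);
        continue; // do not cache messages for peers we could not reach
    }
}

let entry = self.cached_data.entry(peer_id).or_default();
if entry.len() >= MAX_CACHED_MESSAGES_PER_PEER {
    warn!("Dropping ForwardMulRequest: cache limit reached for peer");
} else {
    entry.push(Message::ForwardMulRequest(request_id.clone(), request.clone()));
}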
SOLVED: The Holonym team solved this finding.
if let Some(messages) = self.cached_data.get_mut(&peer_id) {
messages.push(Message::ForwardMulRequest(request_id.clone(), request.clone()));
} else {
self.cached_data.insert(peer_id.clone(), vec![Message::ForwardMulRequest(request_id.clone(), request.clone())]);
}
//
The process_message function lacks proper peer validation for messages related to the Distributed Key Generation (DKG) protocol. As a result, any node, regardless of whether it is part of the DKG group, can send critical DKG messages. This creates vulnerabilities where malicious nodes can corrupt the protocol by sending arbitrary public key shares or election details, potentially disrupting consensus and compromising DKG security.
Specific examples include:
Message::Round1Out / Message::ResharingRound1Out: Malicious nodes can send arbitrary public key shares in round one of the DKG process. Since there is no mapping between the sender's peer_id and the index (idx), these public keys are accepted without validation.
Message::Round2Out / Message::ResharingRound2Out: Similar to the round one messages, in round two, arbitrary public keys from unauthorized peers can be sent and processed. Again, there is no validation of whether the sender is an authorized DKG node.
Message::ElectedProvers / Message::ReElectedProvers: Any node, even those outside the quorum, can send election details or provers' public keys. This can lead to a disruption in the election process, allowing malicious nodes to inject incorrect group keys and outputs.
Peer Validation: Ensure that only nodes that are part of the DKG quorum are allowed to send DKG-related messages. Validate the peer_id of the sender to confirm they are an authorized participant in the DKG process.
Mapping peer_id to Index: For all DKG messages involving public keys or outputs (e.g., Round1Out, Round2Out), map the idx to the sender's peer_id. Reject messages where this mapping does not match.
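A minimal sketch of the peer_id-to-index check described above; the authorized_provers map and its types are hypothetical stand-ins for the project's actual data structures.
use std::collections::HashMap;

/// Returns true only if `peer_id` is an authorized DKG participant and the
/// index it claims in the message matches the index registered for it.
fn is_authorized_sender(
    authorized_provers: &HashMap<String, u128>, // peer_id -> DKG index (hypothetical)
    peer_id: &str,
    claimed_idx: u128,
) -> bool {
    matches!(authorized_provers.get(peer_id), Some(&idx) if idx == claimed_idx)
}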
SOLVED: The Holonym team solved the issue; the code now checks if the node is part of the tstar provers.
//
A vulnerability has been identified in the Distributed Key Generation (DKG) protocol, where malicious nodes can disrupt the process by withholding public key shares. This issue affects the following functions:
handle_round1_output
handle_resharing_round1_output
handle_store_received_pubshare
handle_store_reshared_received_pubshare
These functions wait for all public key shares from the participating nodes to be received. If a malicious node deliberately withholds its share or fails to send it, the process can stall indefinitely, leading to a deadlock. As a result, the DKG process cannot progress, making the protocol susceptible to DoS attacks.
Timeout Mechanism: Implement a timeout for receiving public shares. If a node fails to submit its share within the allotted time, initiate corrective actions such as retries or excluding the malicious node from the DKG process.
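To illustrate the timeout recommendation, a sketch using tokio::time::timeout is shown below; DKG_SHARE_TIMEOUT and wait_for_all_shares are hypothetical placeholders for the project's actual collection logic.
use std::time::Duration;
use tokio::time::timeout;

// Hypothetical timeout for collecting public key shares in a DKG round.
const DKG_SHARE_TIMEOUT: Duration = Duration::from_secs(30);

// Placeholder for the project's actual share-collection future.
async fn wait_for_all_shares() -> Vec<Vec<u8>> {
    Vec::new()
}

async fn collect_shares_with_timeout() -> Result<Vec<Vec<u8>>, String> {
    // If the deadline elapses, return an error instead of stalling the DKG;
    // the caller can then retry or exclude the non-responsive node.
    timeout(DKG_SHARE_TIMEOUT, wait_for_all_shares())
        .await
        .map_err(|_| "timed out waiting for public key shares".to_string())
}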
SOLVED: The Holonym team introduced DKG_RESHARING_WAIT_TIME and monitor_quorum_formation.
//
The update_threshold function updates the network parameters (n, t, and epsilon) based on the provided ElectionInfo without validating the values. This can introduce risks if the values provided are incorrect or malicious, such as setting a threshold (t) higher than the total number of provers (n), or using invalid values for epsilon. These scenarios could potentially lead to undefined behavior within the protocol.
Before updating the application state, add validation checks for the n, t, and epsilon values to ensure they are within acceptable ranges. For example, ensure that t (the threshold) is not greater than n (the total number of provers).
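A minimal sketch of such a validation helper, using plain parameter names and a string error for illustration; the actual ElectionInfo fields and error type may differ.
/// Validate threshold parameters before applying them to the network state.
fn validate_threshold_params(t: usize, n: usize) -> Result<(), String> {
    if n == 0 {
        return Err("total number of provers must be greater than zero".into());
    }
    if t == 0 || t > n {
        return Err(format!("invalid threshold: t = {t}, n = {n}"));
    }
    Ok(())
}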
SOLVED: The Holonym team fixed the issue in the specified commit.
//
In the process_verification function, there is a bug in how the threshold is compared against the number of nodes (self.num_nodes). The function currently checks if self.num_nodes > threshold, which incorrectly excludes the scenario where the number of nodes equals the threshold. This could lead to the function skipping critical verification steps if exactly the number of nodes required by the threshold is present.
Modify the threshold check to self.num_nodes >= threshold to ensure that the verification process proceeds when the number of nodes is equal to or greater than the required threshold. This adjustment ensures that verification is not inadvertently skipped when the exact threshold is met:
if self.num_nodes >= threshold {
// Proceed with verification
}
SOLVED: The Holonym team solved this finding.
//
The conduct_election function updates the election state to ElectionState::QuorumElected unconditionally, even if the election process fails or returns an empty array of provers. This can lead to incorrect state representation, where the node believes a quorum is elected despite potentially invalid or incomplete results. Additionally, the function does not provide any error handling, allowing it to continue execution even if no provers are successfully fetched from the fetch_provers_v2 function.
Implement a check to validate that the list of provers is non-empty before updating the state to ElectionState::QuorumElected.
Ensure that the system only updates the state once a valid list of provers is confirmed.
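A minimal sketch of the non-empty check, with simplified placeholder types standing in for the project's actual state and prover definitions.
#[derive(Debug)]
enum ElectionState {
    Pending,
    QuorumElected,
}

/// Only transition to QuorumElected when at least one prover was returned.
fn update_election_state(provers: &[String], state: &mut ElectionState) -> Result<(), String> {
    if provers.is_empty() {
        return Err("election returned no provers; state left unchanged".into());
    }
    *state = ElectionState::QuorumElected;
    Ok(())
}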
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The from_seed function generates a polynomial by repeatedly hashing a 32-byte seed, using Blake2b512 to derive polynomial coefficients. The function attempts to convert the hash output directly into a seed array using try_into().unwrap(). If the conversion from the hash output to the seed array fails, the function will panic, causing potential runtime errors.
#[test]
fn from_seed_panic() {
    let seed = [0u8; 32]; // Proper seed length
    let result = std::panic::catch_unwind(|| {
        Polynomial::<F256k1>::from_seed(&seed, 1);
    });
    assert!(result.is_err(), "Short seed should trigger a panic"); // Expect a panic
}
Safeguard Against Conversion Failures: Replace unwrap() with proper error handling to avoid panics and handle unexpected conversion failures gracefully.
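A sketch of performing the conversion without unwrap(); the PolynomialError type shown is a hypothetical stand-in for whatever error type the function returns.
// Hypothetical error type for illustration.
#[derive(Debug)]
enum PolynomialError {
    InvalidSeedLength,
}

/// Convert a hash output into a fixed-size seed without panicking.
fn seed_from_hash(hash_output: &[u8]) -> Result<[u8; 32], PolynomialError> {
    // Instead of hash_output.try_into().unwrap(), propagate the failure.
    hash_output
        .get(..32)
        .and_then(|s| <[u8; 32]>::try_from(s).ok())
        .ok_or(PolynomialError::InvalidSeedLength)
}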
SOLVED: The use of try_into().unwrap() has been eliminated.
//
The fetch_provers_v2 function can return an empty vector if no prover information is fetched successfully. This situation can lead to out-of-bounds access in the conduct_election function, where the code attempts to slice the vector starting from index 2. If fetch_provers_v2 returns an empty vector or a vector with fewer than 3 elements, slicing from index 2 can cause a panic.
Validation of Vector Length: Before performing operations that assume the vector has a certain minimum length, validate the length of the vector and handle cases where it is shorter. For example, check if the length of self.elected_provers is sufficient before slicing or accessing specific indices.
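A minimal sketch of a length-safe slice using the standard library's get method, which returns None instead of panicking; the function and variable names are illustrative.
fn provers_after_header(elected_provers: &[String]) -> &[String] {
    // `get(2..)` returns None when the vector has fewer than 2 elements,
    // so an empty slice is used instead of panicking like `&v[2..]` would.
    elected_provers.get(2..).unwrap_or(&[])
}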
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The Network::new function constructs a new Network instance by accepting parameters for the threshold t and the total number of nodes n. However, there is currently no validation to ensure that the values of t and n are within acceptable bounds. Specifically, the function should ensure that:
The threshold t is greater than 0.
The threshold t is less than the total number of nodes n.
The total number of nodes n is greater than 0.
The lack of these validations can lead to unintended behavior during the Distributed Key Generation (DKG) process.
pub fn new(t: usize, n: usize) -> Result<Self, Error> {
let set: HashMap<u128, RsaPublicKey> = HashMap::new();
Ok(Self {
threshold: t,
total_nodes: n,
public_keys: set,
})
}
Input Validation: Add explicit checks for the validity of t and n at the beginning of the Network::new function. If the parameters do not meet the required conditions, return an appropriate error.
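A sketch of Network::new with the suggested checks added, reusing the signature shown above; the Error::InvalidParameters variant is an assumption.
pub fn new(t: usize, n: usize) -> Result<Self, Error> {
    // Reject parameter combinations that would break the DKG protocol.
    if n == 0 || t == 0 || t >= n {
        return Err(Error::InvalidParameters); // hypothetical error variant
    }
    Ok(Self {
        threshold: t,
        total_nodes: n,
        public_keys: HashMap::new(),
    })
}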
SOLVED: The Holonym team solved the issue by adding more validations on t and n.
//
The code snippets provided for the AffinePoint implementation of the PointTrait<Scalar> interface expose two vulnerabilities: non-constant time addition and non-constant time scalar multiplication. In cryptographic applications, it is crucial that operations involving sensitive data, such as point addition and scalar multiplication on elliptic curves, are performed in constant time to prevent timing-based side-channel attacks.
The following functions are affected:
add_self
scalar_mul
Implement point addition and scalar multiplication operations in constant time to prevent timing side-channel attacks. Constant-time algorithms ensure that the execution time is independent of the input values, making it harder for attackers to deduce any sensitive information.
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
In the process_verification function, the reconstructed point is assigned to self.reconstructed_point and the transaction status is updated to TxState::Verified without validating whether the reconstructed point is correct. This lack of validation introduces the risk that an incorrect or invalid point could be marked as successfully verified.
Add a validation step to check the correctness of the reconstructed point before updating the transaction status to TxState::Verified. Ensure the reconstructed point passes all necessary checks (e.g., it lies on the curve or satisfies certain cryptographic properties). Only after successful validation should the transaction be marked as verified.
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
In the AppStateChangeMessage::StoreKeyShares(privkey_shares) case, the code writes the private key shares to a local file (private_keyshares.json) without using any I/O locking mechanism. This can lead to a race condition if multiple threads or processes attempt to access or modify the file simultaneously. As a result, the file could become corrupted or contain incomplete data.
Introduce file locking to ensure that only one process or thread can write to the private_keyshares.json file at a time.
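A sketch of exclusive file locking using the fs2 crate's FileExt extension trait; the crate choice and helper name are assumptions rather than the project's implementation.
use std::fs::OpenOptions;
use std::io::{self, Write};

use fs2::FileExt; // adds lock_exclusive()/unlock() to std::fs::File

fn write_key_shares_locked(path: &str, serialized_shares: &[u8]) -> io::Result<()> {
    let mut file = OpenOptions::new()
        .create(true)
        .write(true)
        .truncate(true)
        .open(path)?;

    // Block until an exclusive lock is acquired, so concurrent writers
    // cannot interleave and corrupt the file.
    file.lock_exclusive()?;
    let result = file.write_all(serialized_shares);
    file.unlock()?;
    result
}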
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The ElectionEngineState struct represents the mutable state for the ElectionEngineActor from the ractor framework in Rust. It contains fields such as current_state and elected_provers that are accessed and modified concurrently by different parts of the actor. Without proper synchronization, this can lead to data races and inconsistencies in the state, as multiple actors or tasks may interact with these fields simultaneously.
Add Mutex for Shared State: Use a Mutex or other synchronization mechanism to protect the current_state and elected_provers fields. This ensures that state modifications are performed atomically, preventing race conditions and ensuring consistency.
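A minimal sketch of wrapping the shared fields in Arc<Mutex<...>>; the field types are simplified placeholders, and whether this is needed depends on how ractor schedules access to the actor state.
use std::sync::{Arc, Mutex};

// Simplified placeholder types for illustration.
type ElectionState = String;
type Prover = String;

#[derive(Clone)]
struct ElectionEngineState {
    current_state: Arc<Mutex<ElectionState>>,
    elected_provers: Arc<Mutex<Vec<Prover>>>,
}

impl ElectionEngineState {
    fn set_state(&self, new_state: ElectionState) {
        // The lock guarantees that state transitions are applied atomically.
        *self.current_state.lock().expect("lock poisoned") = new_state;
    }
}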
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The random_polynomial and random_polynomials_with_secret functions create polynomials with random coefficients using F::random(&mut thread_rng()). The use of thread_rng() is inappropriate for cryptographic purposes because it is not a cryptographically secure random number generator (CSPRNG). For cryptographic applications, such as secure multiparty computation or secret sharing schemes, using a CSPRNG is essential to ensure the security and unpredictability of generated values.
Use a Cryptographically Secure Random Number Generator (CSPRNG): Replace thread_rng() with a CSPRNG to ensure that the random values used in polynomial coefficients are secure and suitable for cryptographic applications.
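A sketch of drawing coefficients from the operating system's RNG via rand's OsRng, assuming the field type follows the ff crate's Field::random interface; the helper name is illustrative.
use rand::rngs::OsRng;

// Assuming F implements ff::Field, whose `random` accepts any RngCore.
fn random_coefficients<F: ff::Field>(degree: usize) -> Vec<F> {
    // OsRng draws directly from the operating system's entropy source.
    (0..=degree).map(|_| F::random(&mut OsRng)).collect()
}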
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The fetch_secp256k1_keypair function logs the seed value used for generating cryptographic keys. Logging sensitive information like this seed can expose it to unauthorized parties, potentially compromising the security of the cryptographic operations that depend on it.
Avoid logging sensitive information such as cryptographic seeds or keys. Instead, ensure that such values are handled securely and only used in memory, without exposing them through logs or other outputs.
SOLVED: The Holonym team fixed the issue in the specified commit.
//
The MyBehaviour struct integrates several network protocol behaviors (gossipsub, kademlia, relay_client, and request_response_behaviour) in a peer-to-peer network environment. While the implementation sets up the core functionality for each protocol, several areas require improvements to enhance security and configuration robustness:
Hashing and Message Identification: The message_id_fn function uses DefaultHasher for generating unique message IDs. This hasher may not be secure for cryptographic purposes.
Gossipsub Configuration: The current configuration lacks peer blacklisting and a limit on the maximum number of messages per RPC. These are important for mitigating potential denial-of-service attacks.
Use a Stronger Hasher: Replace DefaultHasher with a cryptographically secure hasher, such as SHA-256, to ensure that message IDs are unique and collision-resistant.
let message_id_fn = |message: &gossipsub::Message| {
let mut hasher = Sha256::new();
hasher.update(&message.data);
gossipsub::MessageId::from(hex::encode(hasher.finalize()))
};
Enhance Gossipsub Configuration: Add peer blacklisting and set a maximum number of messages per RPC to improve network security and performance management.
let gossipsub_config = gossipsub::ConfigBuilder::default()
.heartbeat_interval(Duration::from_secs(HEART_BEAT_INTERVAL))
.validation_mode(gossipsub::ValidationMode::Strict)
.duplicate_cache_time(Duration::from_secs(DUPLICATE_CACHE_TIME))
.message_id_fn(message_id_fn)
.max_messages_per_rpc(100) // Example limit
.blacklist_peers(vec![]) // Optionally add blacklisted peers
.build()
.map_err(|msg| io::Error::new(io::ErrorKind::Other, msg))?;
SOLVED: The Holonym team solved this finding.
//
In the current implementation of the Gossipsub engine for the MyBehaviour struct, there is no mechanism to block or blacklist misbehaving peers.
Implement blacklisting or banning mechanisms. This can help in removing or isolating peers that exhibit malicious or disruptive behavior.
SOLVED: The Holonym team solved the issue by adding AddToBlacklist.
//
In two sections of the code, unchecked return values pose security and reliability risks:
Gossipsub Subscription: The return value of self.swarm.behaviour_mut().gossipsub.subscribe() is not verified. This could lead to silent failures in subscribing to topics, causing the node to miss messages in the network.
OPRF Request Response: The return value of self.swarm.behaviour_mut().request_response_behaviour.send_response() is also unchecked. Failure to handle this could lead to unacknowledged or lost OPRF responses, breaking the communication flow between nodes.
Ensure that the return values of both subscribe() and send_response() are checked for success or failure. Log appropriate errors and handle failures to maintain the reliability of the node's operations.
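An illustrative fragment of checking both return values; in libp2p, subscribe() returns a Result and send_response() returns the unsent response in its Err variant, though the logging shown here is only a sketch.
// Gossipsub subscription: log and surface the failure instead of ignoring it.
if let Err(e) = self.swarm.behaviour_mut().gossipsub.subscribe(&topic) {
    error!("Failed to subscribe to topic {:?}: {:?}", topic, e);
}

// OPRF request-response: Err carries back the response that could not be
// sent (e.g. the channel was already closed), so it can be logged or retried.
if let Err(unsent) = self
    .swarm
    .behaviour_mut()
    .request_response_behaviour
    .send_response(channel, response)
{
    error!("Failed to send OPRF response: {:?}", unsent);
}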
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The handle_store_reshared_received_pubshare function is responsible for processing and storing reshared public key shares during the resharing phase of the Distributed Key Generation (DKG) protocol. An error-handling issue arises from the function's handling of scenarios where a node is not part of the new quorum. The function currently returns Ok(()) without raising an error when it encounters invalid node participation.
#[tracing::instrument(name = "handle_store_reshared_received_pubshare", skip(myself,state), fields(pubshares = ?debug_key_shares(&pubshares)))]
fn handle_store_reshared_received_pubshare(
myself: ActorRef<DkgEngineMessage>,
state: &mut DkgStateWrapper,
// The index of the node that sent the public key shares
idx: u128,
// The public key shares for a particular node, organized by which key they are for
pubshares: HashMap<String, Vec<u8>>,
) -> Result<(), ActorProcessingErr> {
if let Some(state) = &mut state.0 {
// Skip storing keys from oneself
if state.idx == idx {
return Ok(());
}
// Ensure the current node is part of the new quorum
if !state.new_provers.contains_key(&state.peer_id) {
info!("Cannot handle round 2 output since {} not part of new quorum", state.idx);
return Ok(()); // @audit this should return an error not OK
}
// Verify the validity of the node by checking if the provided index matches any in the new quorum
let mut valid_node = false;
for node in state.new_provers.iter() {
if node.1 .1 as u128 == idx {
valid_node = true;
break;
}
}
if !valid_node {
info!("Cannot handle round 2 output since {} not part of new quorum", idx);
return Ok(()); // @audit this should return an error not OK
}
...
}
Ok(())
}
Error Handling: Replace the Ok(()) return statement with an appropriate error response when a node is not part of the new quorum. This can be achieved by returning an error variant that indicates the reason for the failure.
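A sketch of the suggested change; since ractor's ActorProcessingErr is a boxed error, a descriptive message can be converted into it directly (the message text is illustrative).
// Instead of silently succeeding when the sender is not in the new quorum:
if !valid_node {
    return Err(format!(
        "node index {} is not part of the new quorum; rejecting reshared pubshare",
        idx
    )
    .into()); // String converts into ractor's boxed ActorProcessingErr
}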
SOLVED: The Holonym team solved this finding.
//
The add_node function in the Network structure is designed to add a new node's public key to the public_keys HashMap. However, it currently lacks a mechanism to update the total_nodes count whenever a new node is added. This oversight could lead to inconsistencies in the state of the Network, especially during processes that rely on an accurate count of total nodes, such as distributed key generation (DKG) protocols.
Update Total Nodes Count: Modify the function to increment the total_nodes counter whenever a new node is successfully added. This can be accomplished by checking if the node ID already exists in the public_keys HashMap. If it does not, increment the total_nodes count.
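A sketch of the suggested add_node, reusing the field names from the Network::new snippet shown earlier in this report; the method signature is an assumption.
impl Network {
    /// Add a node's public key and keep total_nodes consistent with the map.
    pub fn add_node(&mut self, node_id: u128, public_key: RsaPublicKey) {
        // insert() returns None when the key was not present before.
        if self.public_keys.insert(node_id, public_key).is_none() {
            self.total_nodes += 1;
        }
    }
}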
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The implementation of polynomial operations in the provided code exhibits several potential overflow issues, particularly in the context of arithmetic operations involving coefficients and exponents. Such overflows can lead to unexpected behaviors, especially when working with cryptographic primitives or large datasets.
Overflow in verify_vss Function: The line exp *= index; can overflow if the product exceeds the maximum value of the underlying integer type. This can lead to incorrect calculations.
Overflow in eval Method: The pow function used here (x.pow([i as u64])) can overflow for large values of i. Although it is unlikely that usize would exceed u64, it is good practice to use checked arithmetic to prevent any unexpected overflow.
Overflow in add_same_deg Method: The b + a operation when summing coefficients can also overflow if the sum exceeds the maximum value for the data type of the coefficients.
Using Checked Arithmetic: To mitigate the overflow risks, use checked arithmetic provided by the Rust standard library. This allows you to safely handle cases where an overflow might occur.
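An illustrative use of checked arithmetic for the exponent update called out above, using u64 for simplicity; the real integer type may differ.
/// Multiply an exponent accumulator by an index, reporting overflow
/// instead of silently wrapping (as a release-mode `exp *= index` could).
fn step_exponent(exp: u64, index: u64) -> Result<u64, String> {
    exp.checked_mul(index)
        .ok_or_else(|| format!("exponent overflow: {exp} * {index}"))
}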
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The calculate_threshold function calculates a threshold value based on the number of provers (n) and a threshold percentage. However, if called with n = 0 or threshold_percentage = 0, the function may produce invalid results. Specifically, if n = 0, the result will always be 0, which is not a valid threshold in practical scenarios. Similarly, a threshold_percentage of 0 will also result in a threshold of 0, which is generally not meaningful.
Input Validation: Implement validation checks to ensure that n and threshold_percentage are greater than 0 before performing the calculation. This can prevent invalid or meaningless threshold values.
Error Handling: If invalid values are detected, return an appropriate error or handle the situation gracefully to avoid potential issues in downstream processes.
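A minimal sketch of the validated calculation; the ceiling-division rounding rule and string error are assumptions, since the original function body is not reproduced in this report.
/// Compute a threshold as a percentage of n, rejecting degenerate inputs.
fn calculate_threshold_checked(n: usize, threshold_percentage: usize) -> Result<usize, String> {
    if n == 0 || threshold_percentage == 0 {
        return Err("n and threshold_percentage must both be greater than zero".into());
    }
    // Ceiling division so that, e.g., 67% of 4 provers yields 3 rather than 2.
    Ok((n * threshold_percentage + 99) / 100)
}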
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The check_election_status function updates the current_state of the ElectionEngine to CollectedProposals and logs the updated state. However, the function name does not accurately reflect its operation, which is to update the state rather than merely check it. This can lead to confusion about the function's intent and behavior. Renaming the function to better align with its functionality will improve code clarity and maintainability.
Rename the Function: Change the function name from check_election_status to update_and_check_election_status to better reflect its purpose, which is to update the election state and then check or log the status.
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The random_polynomials_with_secret function generates a polynomial with random coefficients and sets a specific secret as the free term. However, there is no constraint or validation on the maximum degree of the polynomial. This could lead to issues if the degree exceeds practical limits or if there are constraints on the polynomial's degree in the application's context.
Add Degree Constraints or Documentation: Include a maximum degree parameter or validation to ensure that the polynomial's degree does not exceed practical or application-specific limits.
SOLVED: The Holonym team solved this finding.
if degree >= MAX_DEGREE {
return Err(PolynomialError::DegreeTooHigh);
}
//
In the current implementation of the TriggerReElection message in the ElectionEngineMessage, there is a fallback mechanism that defaults to insecure local random number generation if fetching or verifying randomness from the drand network fails.
When the drand randomness cannot be fetched or verified, the code falls back to generating a random number locally using rand::thread_rng().gen::<[u8; 32]>(). This is insecure because thread_rng randomness is not cryptographically secure.
Do Not Fall Back to Insecure Local Randomness: Instead of falling back to local randomness, consider aborting the process or retrying until a secure random number from drand is obtained, as in the sketch below.
Alternative Secure Sources: If a fallback is necessary, use a secure randomness source.
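A sketch of retrying the drand fetch a bounded number of times instead of falling back to local randomness; fetch_drand_randomness is a placeholder for the project's actual fetch-and-verify routine.
use std::time::Duration;

// Placeholder for the project's fetch-and-verify call against the drand network.
async fn fetch_drand_randomness() -> Result<[u8; 32], String> {
    Err("network unavailable".into())
}

/// Retry the beacon fetch a bounded number of times and abort on failure,
/// rather than silently substituting local randomness.
async fn secure_randomness_or_abort(max_attempts: u32) -> Result<[u8; 32], String> {
    for attempt in 1..=max_attempts {
        match fetch_drand_randomness().await {
            Ok(randomness) => return Ok(randomness),
            Err(e) => {
                tracing::warn!("drand fetch attempt {} failed: {}", attempt, e);
                tokio::time::sleep(Duration::from_secs(2)).await;
            }
        }
    }
    Err("unable to obtain verified drand randomness; aborting re-election".into())
}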
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
//
The new method of the Node structure is responsible for creating a new instance of a node with various cryptographic properties, including RSA keys and other shares. Currently, the implementation utilizes the standard rand::thread_rng() for generating random numbers, which is not suitable for cryptographic purposes.
It is crucial to replace the standard RNG with a CSPRNG. This can be achieved by using libraries that provide secure random number generation, such as rand::rngs::OsRng, which leverages the operating system's secure random number generator.
NOT APPLICABLE: The Holonym team considered this finding as not applicable.
Halborn used automated security scanners to assist with the detection of well-known security issues and vulnerabilities. Among the tools used was cargo audit, a security scanner for vulnerabilities reported to the RustSec Advisory Database. All vulnerabilities published for crates hosted on https://crates.io are stored in a repository named the RustSec Advisory Database. cargo audit scans Cargo.lock against this advisory database and reports the results in a human-readable form. Only security detections are in scope. All vulnerabilities shown here were already disclosed in the above report. However, to better assist the developers maintaining this code, the auditors are including the cargo audit output together with the dependency tree, so that the dependencies affected by unmaintained and vulnerable crates can be identified.
Crate | Version | Title | Date | ID | Severity | Solution | URL |
---|---|---|---|---|---|---|---|
rsa | 0.9.6 | Marvin Attack: potential key recovery through timing sidechannels | 2023-11-22 | RUSTSEC-2023-0071 | 5.9 (medium) | No fixed upgrade is available! | |
telemetry | 0.1.0 | misc::vec_with_size() can drop uninitialized memory if clone panics | 2021-02-17 | RUSTSEC-2021-0046 | 9.8 (critical) | No fixed upgrade is available! | |
aes-soft | 0.6.4 | | 2021-04-29 | RUSTSEC-2021-0060 | | | |
aesni | 0.10.0 | | 2021-04-29 | RUSTSEC-2021-0059 | | | |
cpuid-bool | 0.2.0 | | 2021-05-06 | RUSTSEC-2021-0064 | | | |
dotenv | 0.15.0 | dotenv is unmaintained | 2021-12-24 | RUSTSEC-2021-0141 | | | |
failure | 0.1.8 | failure is officially deprecated/unmaintained | 2020-05-02 | RUSTSEC-2020-0036 | 9.8 (critical) | | |
stdweb | 0.4.20 | stdweb is unmaintained | 2020-05-04 | RUSTSEC-2020-0056 | | | |
failure | 0.1.8 | Type confusion if | 2019-11-13 | RUSTSEC-2019-0036 | 9.8 (critical) | | |
Halborn strongly recommends conducting a follow-up assessment of the project either within six months or immediately following any material changes to the codebase, whichever comes first. This approach is crucial for maintaining the project’s integrity and addressing potential vulnerabilities introduced by code modifications.