Dev Q+A
This guide addresses key aspects of the project, from error handling and security to scalability and testing, so developers can understand Thrylos's technical foundation and priorities.
1. Error Handling
How are errors handled throughout the codebase?
The most common pattern observed throughout the codebase is the direct checking of errors immediately after a function call that can fail. When an error occurs, it's typically returned immediately to the caller. For example, in NewBlockchain().
Error handling uses Go’s native mechanism: functions return errors and the calling code checks them. To enhance error handling we are considering structured logging libraries such as Logrus or Zap, allowing for easier error analysis and monitoring.
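As a minimal sketch of this pattern (the Blockchain fields and the openDatabase helper below are simplified placeholders, not the actual constructor), the error is checked immediately after the call that can fail and returned with context:

```go
package main

import (
	"errors"
	"fmt"
)

// Blockchain and openDatabase are placeholders; the real types live in the core package.
type Blockchain struct{ dbPath string }

func openDatabase(path string) (string, error) {
	if path == "" {
		return "", errors.New("empty database path")
	}
	return path, nil
}

// NewBlockchain illustrates the pattern: call, check the error, return it with context.
func NewBlockchain(dbPath string) (*Blockchain, error) {
	db, err := openDatabase(dbPath)
	if err != nil {
		return nil, fmt.Errorf("failed to open blockchain database: %w", err)
	}
	return &Blockchain{dbPath: db}, nil
}

func main() {
	if _, err := NewBlockchain(""); err != nil {
		fmt.Println("node startup aborted:", err)
	}
}
```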
Are there specific error scenarios that need to be accounted for?
Not at the moment, but the code could always be improved by adding further error scenarios.
Are there any error cases that are not currently handled but should be?
The code could benefit from more detailed error messages, for example explaining why a transaction or block is invalid.
2. Security
How are cryptographic operations, such as transaction signing and verification, handled?
Cryptographic operations, including transaction signing and verification, are handled using RSA for digital signatures, SHA-256 for hashing, and secure serialisation methods, with a focus on ensuring the integrity and authenticity of transactions.
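As a minimal sketch of this flow (the transaction payload below is hypothetical, and in practice the keys would come from the wallet or keystore), signing and verifying a SHA-256 digest with RSA in Go looks like this:

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Generate an RSA keypair for the example.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Hash the serialised transaction and sign the SHA-256 digest.
	txBytes := []byte(`{"sender":"addr1","recipient":"addr2","amount":10}`)
	digest := sha256.Sum256(txBytes)
	signature, err := rsa.SignPKCS1v15(rand.Reader, priv, crypto.SHA256, digest[:])
	if err != nil {
		panic(err)
	}

	// Verification uses the public key and the same digest.
	err = rsa.VerifyPKCS1v15(&priv.PublicKey, crypto.SHA256, digest[:], signature)
	fmt.Println("signature valid:", err == nil)
}
```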
Are there any potential vulnerabilities in these processes?
Unit tests have been carried out, but security audits may be needed to look into unknown attack vectors.
Is there any input validation or sanitization implemented, especially in functions handling external data (e.g., HTTP requests)?
Input validation and sanitisation for external data, such as HTTP requests, are minimally implemented, focusing on basic checks and the presence of required parameters.
Some simple condition checks have been implemented, such as verifying whether a node has enough stake to vote or whether certain entities exist within the system.
Libraries such as ‘go-validator’ that come with built-in validation mechanisms could simplify some of these validation rules, reducing the risk of security vulnerabilities and data integrity issues.
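As a hedged illustration, the sketch below uses github.com/go-playground/validator/v10 (one library in the ‘go-validator’ family) to declare validation rules as struct tags; the request fields and rules are hypothetical, not the actual Thrylos API schema.

```go
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

// TransactionRequest models an incoming HTTP payload; the field names and
// rules are illustrative only.
type TransactionRequest struct {
	Sender    string `json:"sender"    validate:"required,alphanum"`
	Recipient string `json:"recipient" validate:"required,alphanum"`
	Amount    int64  `json:"amount"    validate:"required,gt=0"`
}

func main() {
	validate := validator.New()

	req := TransactionRequest{Sender: "addr1", Recipient: "", Amount: -5}
	if err := validate.Struct(req); err != nil {
		// Each failed rule is reported, replacing ad hoc if-statements.
		fmt.Println("invalid request:", err)
		return
	}
	fmt.Println("request accepted")
}
```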
3. Concurrency
Are there any concurrency issues in the code, especially in scenarios where multiple goroutines might access shared data?
Mutexes are in place to mitigate concurrency issues arising from shared data access.
How are race conditions prevented or mitigated?
Primarily through the use of mutexes, by locking and unlocking critical sections of code. Race conditions are mitigated through careful synchronisation of access to shared resources using mutexes, ensuring that only one goroutine can modify the data at a time.
Mutexes are used when accessing the blockchain:
When updating the blockchain with new blocks, reading the current state of the blockchain, or performing operations that require consistency (such as calculating the total stake or validating transactions against current UTXOs), mutexes ensure that these operations do not interfere with each other. This is important when multiple nodes might attempt to add blocks concurrently or when transactions are being verified and added to blocks in a multi-threaded environment.
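A minimal sketch of this locking pattern, with the Blockchain struct simplified relative to the actual core.Blockchain type:

```go
package main

import (
	"fmt"
	"sync"
)

// Blockchain holds shared state guarded by a mutex.
type Blockchain struct {
	mu     sync.Mutex
	blocks []string
}

// AddBlock locks the chain so concurrent goroutines cannot interleave writes.
func (bc *Blockchain) AddBlock(block string) {
	bc.mu.Lock()
	defer bc.mu.Unlock()
	bc.blocks = append(bc.blocks, block)
}

// Height also takes the lock so reads observe a consistent state.
func (bc *Blockchain) Height() int {
	bc.mu.Lock()
	defer bc.mu.Unlock()
	return len(bc.blocks)
}

func main() {
	bc := &Blockchain{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			bc.AddBlock(fmt.Sprintf("block-%d", n))
		}(i)
	}
	wg.Wait()
	fmt.Println("chain height:", bc.Height())
}
```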
Mutexes are used for UTXO set management:
UTXOs (unspent transaction outputs) define where each blockchain transaction starts and finishes. Adding new UTXOs, marking UTXOs as spent, and querying UTXOs for transaction validation are critical sections of code that require synchronisation. The UTXO set (‘UTXOs map[string][]*thrylos.UTXO’) is a shared resource accessed by various goroutines for transaction processing and needs to be protected by mutexes to prevent concurrent read/write issues.
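A sketch of a mutex-protected UTXO set, assuming a simplified UTXO struct in place of the generated thrylos.UTXO type:

```go
package main

import (
	"fmt"
	"sync"
)

// UTXO is a simplified stand-in for the generated thrylos.UTXO type.
type UTXO struct {
	TxID   string
	Index  int
	Amount int64
}

// UTXOSet guards the map with a read/write mutex so many readers can query
// concurrently while writers get exclusive access.
type UTXOSet struct {
	mu    sync.RWMutex
	utxos map[string][]*UTXO
}

func NewUTXOSet() *UTXOSet {
	return &UTXOSet{utxos: make(map[string][]*UTXO)}
}

// Add appends a new unspent output for an address (write lock).
func (s *UTXOSet) Add(address string, u *UTXO) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.utxos[address] = append(s.utxos[address], u)
}

// Balance sums the outputs for an address (read lock).
func (s *UTXOSet) Balance(address string) int64 {
	s.mu.RLock()
	defer s.mu.RUnlock()
	var total int64
	for _, u := range s.utxos[address] {
		total += u.Amount
	}
	return total
}

func main() {
	set := NewUTXOSet()
	set.Add("addr1", &UTXO{TxID: "tx0", Index: 0, Amount: 50})
	fmt.Println("addr1 balance:", set.Balance("addr1"))
}
```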
4. Performance
Have performance considerations been taken into account, especially in critical operations like blockchain synchronisation, transaction processing, and block validation?
These are addressed through the use of efficient data structures and algorithms, such as employing Merkle trees for compact transaction proofs and utilizing hashing for block integrity checks. Additionally, the transition to ed25519 for digital signatures significantly enhances performance in critical operations due to its faster signature generation and verification compared to traditional RSA cryptography.
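To illustrate the Merkle-tree idea, the generic sketch below builds a root by repeatedly hashing pairs of transaction hashes; it is not necessarily the exact tree layout Thrylos uses.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// merkleRoot hashes pairs of leaves level by level; an odd leaf is paired with itself.
func merkleRoot(txHashes [][]byte) []byte {
	if len(txHashes) == 0 {
		return nil
	}
	level := txHashes
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			left := level[i]
			right := left
			if i+1 < len(level) {
				right = level[i+1]
			}
			h := sha256.Sum256(append(append([]byte{}, left...), right...))
			next = append(next, h[:])
		}
		level = next
	}
	return level[0]
}

func main() {
	var hashes [][]byte
	for _, tx := range []string{"tx1", "tx2", "tx3"} {
		h := sha256.Sum256([]byte(tx))
		hashes = append(hashes, h[:])
	}
	fmt.Println("merkle root:", hex.EncodeToString(merkleRoot(hashes)))
}
```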
Are there any potential bottlenecks or areas for optimisation?
While significant performance enhancements have been achieved, potential bottlenecks remain in areas such as sequential processing, which could benefit from further optimization through concurrent handling and improved data access strategies. We have implemented caching for frequently used ed25519 public keys, significantly reducing latency in transaction processing by avoiding the computation of the public address from the public key each time it is needed. This optimization enhances system efficiency, particularly in scenarios where certain keys are used repeatedly.
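A sketch of what such a public key cache could look like; the cache key (address string) and loading callback are illustrative assumptions rather than the actual implementation.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"sync"
)

// keyCache memoises ed25519 public keys by address so the lookup/derivation
// does not have to be repeated for every transaction.
type keyCache struct {
	mu   sync.RWMutex
	keys map[string]ed25519.PublicKey
}

func newKeyCache() *keyCache {
	return &keyCache{keys: make(map[string]ed25519.PublicKey)}
}

// Get returns a cached key, or loads and stores it on a cache miss.
func (c *keyCache) Get(address string, load func() (ed25519.PublicKey, error)) (ed25519.PublicKey, error) {
	c.mu.RLock()
	if pk, ok := c.keys[address]; ok {
		c.mu.RUnlock()
		return pk, nil
	}
	c.mu.RUnlock()

	pk, err := load() // cache miss: fetch or derive the key once
	if err != nil {
		return nil, err
	}
	c.mu.Lock()
	c.keys[address] = pk
	c.mu.Unlock()
	return pk, nil
}

func main() {
	cache := newKeyCache()
	pub, _, _ := ed25519.GenerateKey(rand.Reader)
	got, _ := cache.Get("addr1", func() (ed25519.PublicKey, error) { return pub, nil })
	fmt.Println("cached key length:", len(got))
}
```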
Additionally, evaluating the performance impact of different cryptographic libraries could yield further improvements. The adoption of Protocol Buffers for transaction serialization has already facilitated quicker signing and verification: transaction data is converted into a compact byte format, which is then hashed, speeding up serialization and deserialization and allowing the system to handle transactions more efficiently.
Ongoing exploration of these and other optimization strategies will be critical in ensuring the scalability and responsiveness of the system as transaction volumes grow. Further optimizations might include exploring more advanced caching mechanisms, such as distributed caching for larger-scale systems, and considering parallel processing techniques to minimize the impact of sequential processing bottlenecks.
What is the current transaction per second (TPS) rate?
After running a test that measures the total time taken for processing and calculates the TPS by dividing the number of transactions by the total time in seconds, we found it successfully processed 1,000 transactions in approximately 0.087 seconds. This translates to a rate of approximately 11,468 transactions per second (TPS). Real-world TPS rates may vary, depending on network conditions and the complexity of transactions.
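A minimal sketch of how such a measurement is taken; processTransaction below is a placeholder for the real validation and state-update path.

```go
package main

import (
	"fmt"
	"time"
)

// processTransaction stands in for the real transaction processing work.
func processTransaction(i int) {
	_ = i * i // placeholder work
}

func main() {
	const numTx = 1000

	start := time.Now()
	for i := 0; i < numTx; i++ {
		processTransaction(i)
	}
	elapsed := time.Since(start)

	// TPS = number of transactions / elapsed time in seconds.
	tps := float64(numTx) / elapsed.Seconds()
	fmt.Printf("processed %d transactions in %s (%.2f TPS)\n", numTx, elapsed, tps)
}
```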
5. Network Communication
How are network requests handled? Are there any timeouts or retries configured for network operations?
Network requests for Thrylos are primarily handled using HTTP, without specific configuration of timeouts or retries for network operations. Further enhancements such as timeouts and retries will be taken into consideration as the system scales.
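As a hedged sketch of how timeouts and retries could be layered on top of the existing HTTP calls (the endpoint and retry policy below are illustrative, not current behaviour):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry wraps a GET request with a per-request timeout and a simple
// retry loop with linear backoff.
func fetchWithRetry(url string, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: 5 * time.Second} // per-request timeout

	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * 500 * time.Millisecond) // simple backoff
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	if _, err := fetchWithRetry("http://localhost:8080/status", 3); err != nil {
		fmt.Println("peer unreachable:", err)
	}
}
```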
Is there any support for secure communication (e.g., TLS) between nodes?
The current setup relies on HTTP for peer-to-peer interactions without direct transport layer security. We are exploring the adoption of HTTPS for enhanced security. Plans are underway to integrate SSL/TLS encryption, ensuring that all data exchanged between nodes is securely encrypted, thereby bolstering the overall security posture of the Thrylos blockchain.
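A minimal sketch of serving a node endpoint over TLS; the certificate and key paths are placeholders that would come from node configuration in practice.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	// ListenAndServeTLS encrypts all traffic between peers using the supplied
	// certificate and private key files.
	log.Fatal(http.ListenAndServeTLS(":8443", "node.crt", "node.key", mux))
}
```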
6. Blockchain Consensus
How is consensus achieved in the blockchain implementation? Are there any specific algorithms or mechanisms used?
Consensus is achieved through a proof-of-stake design with two main elements:
A voting mechanism, where validators vote on the validity of blocks.
Stake-based validation, where validators with more stake have a greater say in the consensus process.
How are validators selected, and what criteria are used for validation?
Validators are selected based on their stake in the blockchain. There is a minimum stake required to become a validator. At the moment, validators are chosen from among stakeholders who meet this minimum stake requirement.
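As a hedged sketch of stake-weighted selection (the minimum stake value and weighting rule below are illustrative assumptions, not the actual network parameters):

```go
package main

import (
	"fmt"
	"math/rand"
)

// Validator pairs an address with its stake.
type Validator struct {
	Address string
	Stake   int64
}

const minStake = 1000 // illustrative threshold

// selectValidator picks a validator with probability proportional to stake,
// considering only those meeting the minimum stake requirement.
func selectValidator(validators []Validator) (Validator, bool) {
	var eligible []Validator
	var total int64
	for _, v := range validators {
		if v.Stake >= minStake {
			eligible = append(eligible, v)
			total += v.Stake
		}
	}
	if total == 0 {
		return Validator{}, false
	}
	target := rand.Int63n(total)
	for _, v := range eligible {
		if target < v.Stake {
			return v, true
		}
		target -= v.Stake
	}
	return eligible[len(eligible)-1], true
}

func main() {
	vals := []Validator{{"addr1", 5000}, {"addr2", 1500}, {"addr3", 200}}
	if v, ok := selectValidator(vals); ok {
		fmt.Println("selected validator:", v.Address)
	}
}
```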
7. Data Integrity and Persistence
How is blockchain data persisted and ensured against corruption or tampering?
Blockchain data is persisted through serialisation and database storage, ensuring integrity against corruption or tampering through hashing techniques and block validation mechanisms.
Are there mechanisms in place to detect and handle forked chains or invalid blocks?
Yes, the Thrylos blockchain includes mechanisms to track and resolve forks and to validate blocks, ensuring only valid blocks are added to the chain and maintaining a single, continuous ledger.
8. Scalability
How does the system handle scalability concerns, especially in terms of increasing numbers of transactions and nodes?
By implementing sharding. This approach addresses scalability concerns by partitioning the blockchain network into smaller, manageable pieces (‘shards’), allowing transactions and block validations to be processed across different shards. This enables the network to scale effectively and accommodate a growing number of transactions and nodes without a proportional increase in processing time or resources.
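A hedged sketch of one way addresses could be mapped to shards; the shard count and hash-modulo rule are illustrative assumptions, not the actual Thrylos partitioning scheme.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// shardFor maps an address to a shard by hashing it and taking the result
// modulo the shard count.
func shardFor(address string, numShards uint32) uint32 {
	h := sha256.Sum256([]byte(address))
	return binary.BigEndian.Uint32(h[:4]) % numShards
}

func main() {
	const numShards = 4
	for _, addr := range []string{"addr1", "addr2", "addr3"} {
		fmt.Printf("%s -> shard %d\n", addr, shardFor(addr, numShards))
	}
}
```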
Are there any limitations or challenges related to scalability that need to be addressed?
Sharding introduces challenges such as cross-shard transaction management and data consistency that need careful addressing. For cross-shard transactions, the challenge is to prevent issues such as double-spending; for data consistency, the challenge is maintaining a consistent state across shards, which demands robust consensus mechanisms.
9. Code Maintainability and Extensibility
Is the codebase well-structured and modular?
Yes, the code is organised into separate packages and files, e.g. transaction.go for transaction operations and block.go for block-related functionality. All the code is documented, with comments above each part explaining what it does.
The ‘Core’ package is the main section of the application, containing blockchain.go, peer.go, and node.go. In the ‘shared’ package, BlockchainDBInterface defines a set of operations for interacting with the data store. Transaction.go lives in the shared package to make transaction-related functionality accessible across parts of the project.
Having the ‘UTXO’ struct in the ‘shared’ package allows for a clear separation of concerns, such as conversion between different data formats, for example when serialising to Protobufs.
How easy is it to extend the functionality or add new features to the system?
With the core blockchain logic separated from the ‘shared’ package, utilities like transaction handling and UTXO management in ‘shared’ are designed to be more generic and reusable, which makes it easier to extend functionality without modifying the core.
API documentation and developer guides are coming soon, but there are already guides on contributing and setting up a node within the codebase.
10. Testing
What testing strategies have been employed for ensuring the correctness and robustness of the code?
A comprehensive unit testing strategy has been adopted. This includes unit testing for core functionalities such as transaction signature verification, RSA key generation and usage, and the creation of the genesis block. These unit tests validate the individual components of the system under a variety of scenarios, ensuring each part performs as expected.
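A minimal example of the kind of signature-verification unit test described above (using ed25519 and an illustrative transaction payload, not the exact test in the repository):

```go
package core

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"testing"
)

// TestTransactionSignatureVerification checks that a valid signature verifies
// and a tampered transaction does not.
func TestTransactionSignatureVerification(t *testing.T) {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		t.Fatalf("key generation failed: %v", err)
	}

	txBytes := []byte(`{"sender":"addr1","recipient":"addr2","amount":10}`)
	digest := sha256.Sum256(txBytes)
	sig := ed25519.Sign(priv, digest[:])

	if !ed25519.Verify(pub, digest[:], sig) {
		t.Fatal("expected signature to verify")
	}

	// Tampered data must fail verification.
	tampered := sha256.Sum256([]byte(`{"sender":"addr1","recipient":"attacker","amount":10}`))
	if ed25519.Verify(pub, tampered[:], sig) {
		t.Fatal("expected tampered transaction to fail verification")
	}
}
```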
Are there unit tests, integration tests, or end-to-end tests covering critical functionalities?
Yes, the testing framework currently focuses on unit tests that cover critical functionalities of the blockchain, including transactions, signature verification and blockchain integrity checks. Third party audits will be done to uncover any further vulnerabilities not caught by unit testing.
While integration and end-to-end tests are essential for ensuring that different parts of the system work together seamlessly, the current emphasis is on unit testing, with further testing strategies to be developed as the project evolves.