

  • Formal Land: Building Trust in Decentralized App Development

    Smart contracts stand out as one of the most groundbreaking innovations to emerge from blockchain development, introduced with the launch of the Ethereum network in 2015. This advancement paved the way for decentralized finance, spawning a diverse array of decentralized instruments, platforms, and dApps that went on to transform the crypto space in the years that followed. However, the growing sophistication of hackers, coupled with contract vulnerability exploits, marked a watershed moment in the growth trajectory of smart contracts. It pushed many dApp teams into a phase of extreme caution, and into a race to develop or adopt robust formal verification tools to safeguard investments and, in some cases, lives. The need for formal verification extends beyond smart contract auditing to critical applications such as spacecraft software, medical devices, and aircraft, where human lives are at stake. According to the 2023 Global Web3 Security Report, AML Analysis & Crypto Regulatory Landscape from Beosin EagleEye, a leading blockchain security company, total losses from hacks, phishing scams, and rug pulls soared to a staggering $2.02 billion in 2023, with bugs in smart contracts accounting for more than 50% of those losses.

What is Formal Verification?

Formal verification is a technique from software engineering and computer science that offers a mathematical guarantee that a program will consistently meet its intended functionality, without unexpected bugs or erroneous behavior. In the words of Taraji P. Henson's iconic portrayal of Katherine Goble Johnson in "Hidden Figures": "It could be old math; something that looks at the problem numerically and not theoretically. Math is always dependable."
This sentiment captures the essence of formal verification: rigorous mathematical techniques are used to prove that a program will consistently adhere to its specification. By leveraging mathematical rigor and logic, formal verification provides confidence that the program will function correctly under all conditions, highlighting the reliability of mathematics in ensuring the integrity of complex systems.

In formal verification, proof assistants such as Coq are widely used. Coq provides a formal language for expressing mathematical assertions, definitions, and proofs, and checks their correctness using a logic based on type theory. Type theory forms the foundation for specifying and verifying correctness properties of programs by defining types that represent the structure and behavior of data and functions. In Coq, the type system allows developers to express both the shape of their data and the properties of how that data behaves and interacts within a program. By leveraging this type-theory-based logic, Coq enables developers to formally reason about and prove the correctness of their programs, ensuring adherence to specified properties and constraints.

For example, in a Rust-based token contract, developers can define a token type in Coq that represents the structure and associated properties of the token. By incorporating principles from Coq's type system into the design and implementation of smart contracts, developers can enhance reliability and security: the token's structure and properties, such as total_supply, are clearly defined, and the transfer method enforces constraints that prevent transfer amounts from exceeding the total supply. While Coq's capabilities enable developers to formally reason about and improve the correctness of smart contracts, it's crucial to recognize that trust and correctness play vital roles in blockchain-based systems.
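As a rough illustration of the token pattern described above, here is a minimal Rust sketch. All names (`Token`, `transfer`, `TransferError`) are hypothetical, invented for this example rather than taken from any real contract or from Coq-of-Rust itself; the point is simply that the structure makes `total_supply` explicit and that `transfer` rejects amounts exceeding it.

```rust
use std::collections::HashMap;

// Hypothetical token: structure and properties (total_supply) are explicit.
struct Token {
    total_supply: u128,
    balances: HashMap<String, u128>,
}

#[derive(Debug, PartialEq)]
enum TransferError {
    InsufficientBalance,
    ExceedsTotalSupply,
}

impl Token {
    fn new(owner: &str, total_supply: u128) -> Self {
        let mut balances = HashMap::new();
        balances.insert(owner.to_string(), total_supply);
        Token { total_supply, balances }
    }

    fn balance_of(&self, who: &str) -> u128 {
        *self.balances.get(who).unwrap_or(&0)
    }

    // The transfer method enforces the constraints described above: no
    // transfer may exceed the total supply or the sender's balance.
    fn transfer(&mut self, from: &str, to: &str, amount: u128) -> Result<(), TransferError> {
        if amount > self.total_supply {
            return Err(TransferError::ExceedsTotalSupply);
        }
        let from_balance = self.balance_of(from);
        if amount > from_balance {
            return Err(TransferError::InsufficientBalance);
        }
        self.balances.insert(from.to_string(), from_balance - amount);
        let to_balance = self.balance_of(to);
        self.balances.insert(to.to_string(), to_balance + amount);
        Ok(())
    }
}

fn main() {
    let mut token = Token::new("alice", 1_000);
    assert!(token.transfer("alice", "bob", 250).is_ok());
    assert_eq!(token.balance_of("alice"), 750);
    assert_eq!(token.balance_of("bob"), 250);
    // Invalid transfers are rejected rather than corrupting state.
    assert_eq!(token.transfer("alice", "bob", 2_000), Err(TransferError::ExceedsTotalSupply));
}
```

In a verified setting, the guards in `transfer` would become proof obligations: rather than merely checking them at runtime, one would prove in Coq that no execution path can ever violate them.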
Formal Verification with Coq-of-Rust: Introducing Formal Land

Supported by Aleph Zero's Ecosystem Funding Program (EFP), Formal Land has developed a formal verification tool for Rust code called "Coq-of-Rust". The tool aims to enable safer smart contracts by translating Rust programs into the battle-tested Coq proof assistant. While Rust's type system already provides strong guarantees that prevent whole classes of bugs present in languages like C or Python, testing remains necessary to verify the code's integrity. Testing alone, however, is incomplete: it cannot cover all execution scenarios. Formal verification replaces testing with mathematical reasoning about the code, effectively extending the type system without restricting developer expressivity. When a program has been formally verified, we can be mathematically certain that it will consistently adhere to its specification. This effectively removes all bugs, provided we possess a complete specification of the program's intended functionality and limitations.

The Importance of Formal Verification

A preemptive advantage is crucial in problem-solving: anticipating challenges before they arise enables proactive measures. This not only increases efficiency but also minimizes the impact of potential obstacles, ultimately leading to more effective problem resolution.

Identifying Potential Bugs Early: Formal verification is highly beneficial because it can preemptively identify all potential bugs by examining every conceivable execution scenario of a program. In critical domains like financial systems or applications involving human safety, a single bug can incur significant costs or, worse, lead to loss of life. Consider, for instance, the consequences of a software glitch in a medical device or an error in a financial transaction system.
Therefore, the ability to identify and rectify such issues beforehand through formal verification is paramount for ensuring reliability and safety in these contexts.

Promoting Clear Programming Constructs: Formal verification also incentivizes clear programming constructs by requiring code to be precise, unambiguous, and easy to reason about mathematically. This improves not only the quality of the program but also its maintainability and reliability over time. Clear programming constructs lead to better code readability, reduce debugging effort, and facilitate collaboration among developers.

Mitigating Challenges Associated with Scaling Projects: Formal verification can mitigate the challenges of scaling software projects by providing explicit specifications and proofs of correctness. Developers can change existing code confidently, knowing they won't inadvertently break other components or introduce security vulnerabilities. In large projects with numerous interconnected modules, for example, formal verification ensures that modifications to one module do not disrupt the functionality of others, thereby maintaining system integrity and security.

Streamlining Onboarding of New Developers: Formal specifications streamline onboarding by providing explicit documentation of the code's behavior. This reduces the risk of new developers inadvertently disrupting existing functionality and allows them to grasp how the system is intended to operate by reading the specifications. In projects with frequent turnover or expansion, formal specifications guide new developers, enabling them to work with greater confidence and understanding while minimizing the potential for errors or unintended changes to the codebase.
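The contrast between sampled testing and covering every execution scenario can be made concrete on a deliberately tiny input space. The sketch below uses a hypothetical `saturating_double` function (invented for this example) and checks its specification against all 256 possible `u8` inputs; formal verification achieves the same exhaustiveness symbolically for unbounded input spaces, where literal enumeration is impossible.

```rust
// Hypothetical function under verification: doubles a byte, saturating at the
// maximum instead of overflowing.
fn saturating_double(x: u8) -> u8 {
    x.checked_mul(2).unwrap_or(u8::MAX)
}

fn main() {
    // Exhaustively check the specification over every possible input.
    // A handful of unit tests might miss the boundary at 128; enumerating
    // all 256 cases cannot.
    for x in 0..=u8::MAX {
        let y = saturating_double(x);
        // Spec: the result is 2*x when that fits in a u8, otherwise u8::MAX.
        if (x as u16) * 2 <= u8::MAX as u16 {
            assert_eq!(y as u16, (x as u16) * 2);
        } else {
            assert_eq!(y, u8::MAX);
        }
    }
}
```

For a `u8` this loop is instant; for a `u64` argument, or a smart contract's full state, enumeration is hopeless, which is exactly where proof-based tools such as Coq take over.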
Conclusion

Formal Land's Coq-of-Rust exemplifies the transformative potential of formal verification in software development, offering cutting-edge services that guarantee bug-free solutions across critical domains such as databases, smart contracts, banking systems, and automotive technologies. The tool not only reduces the need for exhaustive manual testing but also instills confidence in the integrity of software solutions. As we continue to advance in the digital age, the success of tools like Coq-of-Rust underscores the importance of formal verification in building robust, dependable, and scalable software for projects and companies that need to safeguard their systems from malicious attacks. From verifying nodes in complex cryptocurrency systems to ensuring the reliability, safety, and maintainability of software projects, formal verification remains a cornerstone of modern software development practice.

  • Exploring Aleph Zero: Balancing Confidentiality and Compliance

    In the aftermath of the events of December 29, 2023, when nearly all available privacy tokens were delisted from OKX, the second-largest offshore exchange, another cryptocurrency exchange has followed suit. In what can be described as a crackdown on privacy-oriented blockchain networks, Binance, the world's leading cryptocurrency exchange, dropped a bombshell in the early hours of February 6: it announced the delisting, and suspension of corresponding deposits, of several tokens on February 20, including Monero, one of the pioneering privacy-centric blockchain networks in the cryptocurrency space. While it may not be the absolute first privacy blockchain, Monero is renowned for strong privacy features, including ring signatures, stealth addresses, and confidential transactions, which provide users with enhanced privacy and anonymity compared to many other cryptocurrencies.

This sudden move left privacy-centric blockchain enthusiasts and the wider crypto community bewildered and uncertain about the fate of other privacy-oriented blockchain networks. Monero (XMR), the so-called OG of privacy blockchains, has been abruptly cast into the shadows as the world's largest exchange chooses to delist it. As the saying goes, "When the mighty oak falls, the bushes tremble"; what can we make of this development? Is this indeed a crackdown on privacy-oriented blockchain networks, an attempt to sabotage the gains made in championing individual prerogative and resolving the dichotomy between compliance and confidentiality? In the face of uncertain regulatory demands, how can other privacy blockchain networks survive hostile delistings from prominent exchanges? And how does Aleph Zero fare in comparison to other privacy blockchains?
Confidentiality and Non-compliance are not Synonymous

Amid escalating regulatory pressure and a wave of hostile delistings from major exchanges worldwide, Aleph Zero has embarked on a path toward optimal blockchain privacy. This approach aims to overcome the limitations of the default design philosophy of conventional blockchains, notably pseudonymity, which offers only the lowest level of privacy. Aleph Zero's strategy integrates regulatory compliance into its framework while exerting influence across both enterprise-grade and end-user sectors.

In a tweet, Antoni Zolciak, co-founder of Aleph Zero, weighed in on the matter, responding to a post suggesting that if Monero could be delisted despite its robust fundamentals, other privacy-oriented tokens are susceptible to the same treatment. The co-founder called for calm, asserting that there are no conspiracies against privacy protocols:

"Confidentiality and lack of compliance are not synonymous. Zcash remains listed on Binance as it approaches compliance proactively. We share a similar approach at Aleph Zero and have integrated AML/CFT by Coinfirm/Lukka, and will be deploying a Zero-knowledge (ZK)-ID system via idOS."

In response to the public outcry across social media, Monero issued a public statement on Twitter explaining the facts behind the delisting:

"The delisting is happening because Binance is now requiring that deposits come from a publicly transparent address. Monero has used stealth addresses for ALL addresses since its launch in April 2014. Monero allows selective disclosure with view keys but not a transparent address."

In other words, Monero offers a feature called view keys that allows users to selectively disclose certain information about their transactions.
With view keys, users can grant others permission to view specific details of their transactions, such as transaction history or incoming funds, while keeping other information, like account balances or transaction amounts, private. However, Monero does not have transparent addresses: transactions are private by default, obscuring the sender, receiver, and transaction amount using advanced cryptographic techniques. So while Monero offers selective disclosure through view keys, the absence of transparent addresses makes it difficult to track individual addresses and transactions on the Monero network.

In contrast, Aleph Zero differs from Monero in its approach to privacy and regulatory compliance, potentially allowing it to avoid the fate of delisted privacy tokens. Aleph Zero positions itself as more than just a privacy coin: its network defaults to transparency, with confidentiality being optional and regulated by anti-money laundering (AML), countering the financing of terrorism (CFT), and identity products. By defaulting to transparency, Aleph Zero aims to guard against misuse by bad actors while still allowing users to maintain confidentiality when needed.

The integration between Aleph Zero and Lukka, the company that acquired Coinfirm in 2023, significantly upgrades transaction security and AML compliance within the Aleph Zero ecosystem. The collaboration introduces an on-chain monitoring system that analyzes network activity to detect potential misuse and illicit activity such as money laundering and fraud, providing valuable data for exchanges, institutions, and regulatory bodies and ensuring a safer, more secure, and compliant blockchain economy. Furthermore, Koinly's recent integration with Aleph Zero's native coin, AZERO, brings the network to the over one million users of Koinly's crypto tax management platform.
Beyond tax calculations, Koinly helps project capital gains to optimize tax strategies for the following year. Users can seamlessly import trades, preview potential gains, and download tax documents. With a presence in over 100 countries and trusted by 1.3 million users, Koinly is a leading tax reporting platform. It automatically syncs data from exchanges, wallets, and blockchain addresses, ensuring precise transaction records with accurate timestamps and market prices. The integration gives users comprehensive tools to manage their cryptocurrency taxes efficiently.

Conclusion

Although blockchain networks like Aleph Zero have strived to craft a framework that accommodates individual prerogatives while allowing for checks and balances by third-party blockchain analytics and AML solutions, providing a pathway for vetting the network for illicit activity and fraud, a tangible framework for regulatory compliance has yet to materialize. As a result, blockchain networks, whether private or public by design, must develop innovative cryptographic solutions that can satisfy regulators and meet compliance standards without sacrificing individual prerogatives.

Lukka, founded in 2014, is a global company headquartered in the United States, specializing in institutional data and software solutions for the most risk-mature businesses worldwide. With a focus on bridging the complexities of blockchain data in the global crypto ecosystem with traditional business and reporting needs, Lukka stands at the forefront of innovation in this rapidly evolving industry. Following its acquisition of Coinfirm in 2023, Lukka has integrated Coinfirm's solutions into its offerings, enhancing its enterprise data capabilities.
Coinfirm's product teams continue to support businesses globally by providing on-chain analytical solutions that address risk and compliance challenges in the blockchain and digital-asset space. Lukka's commitment to institutional standards is evident in its certifications and audits. With AICPA SOC 1 Type II and SOC 2 Type II audits, Lukka ensures data quality, accuracy and completeness of financial calculations, and effective management of technology operational risk. Lukka also holds ISO/IEC 27001 certification, indicating adherence to internationally recognized standards for information security management systems, and has undergone a NIST Cybersecurity Assessment, demonstrating its dedication to robust cybersecurity measures. Through these qualifications, Lukka solidifies its position as an industry leader, offering clients assurance of rigorous standards for data security, integrity, and operational excellence.

  • The Synergy between Zero-Knowledge Proofs (ZKPs) and WebAssembly (WASM)

    Introduction

Over time, the development of the digital landscape has been marked by technological breakthroughs that reshape our perception of what is possible. Blockchain is one such breakthrough, a revolutionary force transforming the digital landscape across several sectors. Another technology gaining momentum is Zero-Knowledge Proofs (ZKPs). Even before the rise of blockchain, ZKPs had a history of providing security and privacy in various industries, and the integration of zero-knowledge techniques with blockchain technology is expected to expand in the coming years.

WebAssembly (WASM) was introduced in 2015 and soon became one of the top web technologies. It is a portable binary instruction format that web browsers can execute securely and efficiently: high-level source code written in languages such as C, C++, and Rust is compiled to WASM bytecode that runs across a variety of applications. When WebAssembly emerged, there was much debate over whether it would replace JavaScript. Instead, it has established itself as a technology that complements JavaScript, integrating with it to ensure the smooth operation of applications on web pages.

The blockchain industry went wild over the integration of Zero-Knowledge Proofs with the EVM (the Ethereum Virtual Machine), an undoubtedly powerful combination that saw the development of multiple zkEVM rollups last year. However, that integration still has constraints. As dApps become increasingly popular, there is growing demand for security and privacy, and this is where WebAssembly (WASM) and Zero-Knowledge Proofs (ZKPs) come into play as a prominent combination.
The combination of WebAssembly (WASM) and Zero-Knowledge Proofs (ZKPs) represents a major development in privacy and security for the digital age, and this article explores the marriage between the two.

The Synergy of Zero-Knowledge Proofs (ZKPs) & WebAssembly (WASM)

ZKPs and WASM are two innovations paving the way for secure and scalable dApps. By combining the cryptographic security of ZKPs with the versatility of WASM, protocols and blockchains like Aleph Zero are building web3 applications that improve privacy and security in the digital landscape. The underlying runtime environment that allows ZKPs and WASM to work together smoothly is a virtual machine: the zkWASM VM.

But what is a virtual machine (VM)? Virtual machines are systems that act like physical machines and run on physical machines (referred to as hosts), but use software to execute programs. Just like physical machines (smartphones, PCs, and so on), virtual machines have their own operating systems. In the blockchain space, virtual machines execute smart contracts. Think of a zkWASM VM as a virtual machine, specific to a protocol or network, that marries the bytecode functionality of WASM with the security of zero-knowledge technology: a machine that executes WebAssembly code and supports ZKP generation. The virtual machine that WASM code runs inside acts as a barrier between the code and the underlying host hardware, creating a more secure environment for sensitive code execution by making it more difficult for malicious actors to exploit vulnerabilities.

Aleph Zero Implementing the Use of ZKPs and WASM

Aleph Zero uses the WASM virtual machine together with the ink! programming language, making it easy for developers to deploy solutions, from DeFi to gaming and beyond, that are scalable, private, and secure.
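To make the idea of "a machine that executes bytecode" concrete, here is a toy stack-machine interpreter in Rust. The three-instruction set is invented for this sketch and is vastly simpler than real WASM (and has no zero-knowledge component); the point it illustrates is portability: the same instruction sequence produces the same result on any host that implements the interpreter.

```rust
// A made-up three-instruction bytecode, far simpler than WASM's.
enum Op {
    Push(i64),
    Add,
    Mul,
}

// Execute a program on a stack machine; None signals a malformed program
// (e.g., popping from an empty stack).
fn run(program: &[Op]) -> Option<i64> {
    let mut stack = Vec::new();
    for op in program {
        match op {
            Op::Push(n) => stack.push(*n),
            Op::Add => {
                let b = stack.pop()?;
                let a = stack.pop()?;
                stack.push(a + b);
            }
            Op::Mul => {
                let b = stack.pop()?;
                let a = stack.pop()?;
                stack.push(a * b);
            }
        }
    }
    stack.pop()
}

fn main() {
    // (2 + 3) * 4 encoded as bytecode-like instructions.
    let program = [Op::Push(2), Op::Push(3), Op::Add, Op::Push(4), Op::Mul];
    assert_eq!(run(&program), Some(20));
    // A malformed program is rejected by the VM instead of crashing the host.
    assert_eq!(run(&[Op::Add]), None);
}
```

A real WASM runtime additionally validates and sandboxes the code, and a zkWASM VM goes further by producing a zero-knowledge proof that the execution trace was correct.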
In an article published last year, Aleph Zero explains its reasons for choosing to execute smart contracts with WASM and ink! instead of the more popular EVM. You can read about it here.

Capabilities of ZKPs and WASM

The integration of ZKPs and WASM enables a range of developments, including:

Private smart contracts: Smart contracts are self-executing programs triggered when predetermined conditions are met. Smart contracts built on public blockchains are transparent for any observer to see. Privacy concerns, however, have prompted the creation of private smart contracts, which aim to improve the confidentiality of the information they contain: parties can execute smart contracts and transactions without disclosing the details to the network or to other parties.

Increased security in dApps and authentication protocols: According to a report by IT Governance, a leading global provider of cyber security and privacy management solutions, there were over two thousand data breach incidents in 2023, with billions of records breached. The blockchain industry is not spared: millions have been stolen through security breaches in dApps, smart contracts, and authentication protocols (meant to facilitate the secure transfer of authentication data), leading people to question whether blockchain technology is truly as secure as it has been publicized to be.

Cross-platform compatibility: Native apps once dominated, but with the increasing demand for cross-platform desktop apps, developers are always on the lookout for innovative ways to build solutions. The emergence of WebAssembly ushered in a new era of cross-platform application development, making WASM applications compatible with multiple platforms.
Improved scalability: WebAssembly is capable of handling complex computations and can therefore be used to improve the performance and speed of applications. This makes it a viable platform for applications that handle large amounts of data or need to accommodate many users (for instance, blockchain dApps).

Use Cases of the Integration of ZKPs and WASM

The combination of zero-knowledge proofs and an efficient compilation target like WebAssembly creates many use cases for both projects and end users. Here are some of them.

Financial transactions: Privacy is essential for maintaining transaction confidentiality in blockchain-based systems and financial applications. ZKPs, combined with WebAssembly, can be applied to build solutions that protect financial transactions and preserve privacy: users can validate transactions without disclosing specific details, and using WebAssembly to carry out these verifications ensures fast and safe financial transactions across diverse platforms.

Decentralized identity management: ZKPs can be used in decentralized identity systems to prove ownership of certain information without disclosing sensitive data, while WASM can ensure seamless identity verification processes across platforms.

Supply chain management: We have previously discussed the role of zero-knowledge proofs in supply chain management. In an industry where information about products must be shared among multiple stakeholders, there is a persistent need for cross-platform-compatible applications. Decentralized apps that provide safe, privacy-preserving data sharing can be built for this purpose by integrating ZKPs with WebAssembly.

Healthcare: The combination of ZKPs and WebAssembly has enormous potential in healthcare. Because medical data is sensitive, privacy regulations must be adhered to.
Healthcare apps and software that use ZKPs can process confidential data in a way that keeps patient information private, facilitating collaboration between multiple entities. Integrating with WebAssembly ensures that these privacy-preserving processes can be carried out uniformly across different healthcare systems.

E-voting: ZKPs can improve the privacy of voting systems by enabling people to prove their eligibility without revealing their votes. With WebAssembly, the voting procedure can be carried out securely across multiple platforms.

Conclusion

Zero-Knowledge Proofs and WebAssembly are, without a doubt, a powerful pair. They are not without constraints, however: the computational overhead that can arise from the complexity of the protocols; developers not yet being very familiar with WASM, let alone with integrating it with ZKPs; and the challenge of balancing privacy against regulatory compliance, which varies by industry and sometimes by location. Still, ongoing advancements in cryptographic research and development give reason to anticipate solutions to these challenges. The synergy between ZKPs and WebAssembly opens up new possibilities for innovation and growth in the digital realm, from improving security in online apps to simplifying privacy-respecting transactions in numerous industries.

  • Aleph Zero as a Privacy-Enhancing Public Blockchain: Clarifying the Confusion

    Key Takeaways

Aleph Zero is often mistaken for either a purely public or a purely private blockchain. It is a privacy-enhancing public blockchain that allows any participant to build solutions that bring security and privacy to users' data. Aside from decentralization, one major difference between public and private blockchains is transparency; Aleph Zero promotes both decentralization and transparency. Aleph Zero also has a built-in privacy layer, called Liminal, that utilizes ZKPs and sMPC to create a multichain privacy framework.

There are differing theories about which type of blockchain Aleph Zero falls under. One group argues that Aleph Zero is a private blockchain, like Zcash. The other school of thought, shield in hand, counters that argument. How will you react when you read that the better explanation for these differing schools of thought is that "Aleph Zero is a privacy-enhancing public blockchain"? Confusing? Let's get into it.

What is Aleph Zero?

Aleph Zero is a layer-1 blockchain that provides infrastructure for developers to deploy dApps across various niches and use cases, including gaming, DeFi, the metaverse, security, bridges, and business applications. It is a layer-1 public (open-source) blockchain network that allows developers to create decentralized, scalable, secure, and privacy-focused solutions; it is in this sense that Aleph Zero is regarded as a privacy-enhancing blockchain. Since its development in 2018 and mainnet launch in 2021, Aleph Zero has seen remarkable adoption within the developer community, especially among those out to explore and build innovative projects. With over 40 projects live on the blockchain, Aleph Zero is a continuously growing ecosystem.

Aleph Zero as a Public Blockchain

Contrary to what you might have seen elsewhere, Aleph Zero is NOT a private blockchain; rather, it is a public blockchain.
A public blockchain, also called a permissionless blockchain, is a decentralized, open-source network that anyone can build and deploy on. On public blockchains, developers are not required to request permission to access the network, because it is open for anyone to participate in. Developers can opt in to join the consensus mechanism, validate transactions, and secure the network. This differs from the other class of blockchain, referred to as private or, most commonly, permissioned blockchains.

Features of Public (Permissionless) Blockchains

Decentralization: The core pillar of public blockchains is decentralization, a distributed way of delegating authority instead of concentrating it in a single central entity. The open nature of public blockchains allows participants from all parts of the world to come together and see to the growth and safety of the network.

Transparency: Like nightingales, blockchain experts have often sung this song. The very existence of decentralization paves the way for transparency among all participants. It is safe to say that permissionless blockchains bring with them a high level of transparency.

Immutability: Blockchains are immutable: transactions already recorded are permanent and cannot be edited or deleted.

Censorship resistance: Unlike private blockchains, where authority is concentrated in the hands of a few central bodies, public blockchains are resistant to censorship by centralized bodies.

Pseudonymity: Transactions recorded on the blockchain do not explicitly reveal users' private information, just their wallet addresses. Public blockchains thus offer pseudonymity. However, private blockchains are more effective at keeping users' identities and data private.
So, while public blockchains are an upgrade on centralized financial institutions in terms of user privacy, permissioned blockchains go further still. Looking at all of the features above, we can ascertain that Aleph Zero fits the picture of a "public or permissionless blockchain".

Aleph Zero as a Privacy-Enhancing Blockchain

"Aleph Zero is a privacy-focused blockchain": this statement has led to the school of thought that believes Aleph Zero to be a private blockchain. This section aims to demystify the actual context behind the statement. Calling Aleph Zero a "privacy-enhancing" or "privacy-focused" blockchain refers to its capacity to provide tools and infrastructure that let developers create dApps or projects that protect the privacy of users' data. Think of it this way: Aleph Zero is a tool developers can use to build solutions that answer the need for privacy of user data and identity, but that doesn't make the tool (Aleph Zero) itself private, because it is open for all developers to work with.

Aleph Zero has two distinguishing technologies that make it possible for developers to create privacy solutions, marking it as a privacy-enhancing blockchain: a native privacy layer called Liminal, and the Shielder.

Liminal

Liminal serves as Aleph Zero's native and interchain privacy layer. Over the years, we've heard how blockchain interoperability is the future, and we've seen many projects (bridges, interoperability protocols) built for one sole purpose: to enable seamless interactions between blockchains. All of these solutions address blockchain communication at the execution layer. Bitcoin can be transferred for use on the Ethereum chain; Solana can be borrowed via another blockchain. They hardly address the issue at the privacy level.
With Liminal, developers can write smart contracts on other blockchains and store their private state on Aleph Zero by integrating it. For developers who wish to build directly on the Aleph Zero blockchain, Liminal is available natively. Acknowledging the necessity for performance while maintaining stringent privacy for user data, Liminal employs two privacy-enhancing technologies (ZKPs and sMPC) to enable solutions that allow private inter- and intra-chain transactions. sMPC splits data into shares held across several computers, none of which can access the data without the cooperation of the others, while ZK-SNARKs let one party prove that a statement about hidden data is true without revealing the data itself. You can learn more about how Aleph Zero's Liminal utilizes this formidable pair (ZKPs and sMPC) to create a privacy-enhancing network here.

Shielder

Aleph Zero's Shielder is a feature of Liminal (hence, not a separate technology in its own right) that enables private transactions of PSP22 tokens on the blockchain network. The main objective of the Shielder is to mask the details of transactions that happen on-chain and keep them from the eyes of external observers. Like similar solutions, Aleph Zero's Shielder is used mainly by DeFi protocols.

Side note: PSP22 is a token standard for any fungible token that runs on blockchains based on the Substrate architecture and is constructed using WebAssembly (WASM) smart contracts. Aleph Zero's native token is a PSP22 token, so DeFi protocols and DEXs built on Aleph Zero can utilize this Shielder feature.

$AZERO: A Private or a Public Coin?

Aleph Zero's native coin is $AZERO, the currency that powers the blockchain ecosystem. It is a public coin that promotes pseudonymity of identities like every other public coin out there. $AZERO is an inflationary token whose supply isn't fixed, unlike deflationary cryptocurrencies such as Bitcoin.
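The secret-sharing idea behind sMPC can be illustrated with a minimal additive-sharing sketch in Python. This is a toy model of the general technique, not Liminal's actual protocol: a value is split into random shares so that all shares are needed to reconstruct it, while any incomplete subset looks uniformly random.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus (an assumption, not Liminal's)

def share_secret(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares modulo PRIME.
    Every share is needed to reconstruct; any strict subset of
    shares reveals nothing about the secret."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only the sum of ALL shares recovers the original value."""
    return sum(shares) % PRIME

shares = share_secret(42, 3)
assert reconstruct(shares) == 42
```

Real sMPC systems go further, computing on the shares without ever reassembling them, but the splitting step above captures why no single machine holding one share can learn the data on its own.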
There is an annual release of 30,000,000 tokens used for staking rewards: 90% of the release is distributed back to the validators and nominators, while the remaining 10% goes to the Aleph Zero Foundation's ecosystem treasury. The utility of the $AZERO token is as follows:

Staking: Aleph Zero uses the PoS consensus mechanism to validate transactions and ensure the security of the network. $AZERO is used to secure the network through staking. Stakers delegate their tokens to validators who, in turn, are responsible for running nodes on-chain, verifying transactions, and preventing malicious transactions from entering the blockchain. Both stakers and validators receive rewards for this.

Gas fees: As with any other blockchain, activities on Aleph Zero require the $AZERO token as payment for gas fees on transactions carried out on the network. Aleph Zero is widely known to have some of the cheapest gas fees, offering users close-to-zero gas fees on each transaction made within the network.

Payment: Most dApps on Aleph Zero utilize the $AZERO token for participating in activities.

Governance: Users who stake $AZERO can participate in various voting proposals that concern the ecosystem.

References
https://docs.alephzero.org/aleph-zero/shielder/introduction-informal
https://alephzero.org/utility-and-economics

  • CTRL + Hack + ZK: How to Build on the Telco Innovate Track Using Aleph Zero's Tools

On November 2, 2023, Aleph Zero made a groundbreaking announcement: Deutsche Telekom, a telecommunications giant, had established itself as a validator on Aleph Zero's Mainnet and Testnet. This marked the first instance of a large company joining the privacy-enhancing blockchain network, a clear testament to Aleph Zero's privacy-enabling capabilities and enterprise-grade scalability potential. Fast forward to January 2024: Deutsche Telekom made its debut at Aleph Zero's inaugural hackathon, CTRL + Hack + ZK. This event brought together a community of non-EVM network enthusiasts, industry experts, and highly skilled developers who captivated the audience with their knowledge. The technical expertise covered various subjects, including Telco/DePIN, DID, tooling and infrastructure, gaming, and others. During one of the workshop sessions at the CTRL + Hack + ZK hackathon, Tobias Jung, the workshop moderator from Deutsche Telekom, provided valuable insights into building on the "Telco Innovate Track" using tools from Aleph Zero. A Q&A session showcased how telcos can facilitate the utilization of decentralized technologies in areas such as decentralized IDs and wallets, Decentralized Physical Infrastructure Networks (DePIN), and others.

Deutsche Telekom's Role in the Web3 Space

1. Voice: communication at scale. Deutsche Telekom, at its core, is a telco company deeply rooted in providing infrastructure for communication at scale. For over 20 years, it has facilitated mobile phone communication through telecommunication towers, enabling users to connect with friends via voice calls. These towers are erected to facilitate the transmission of signals for various forms of communication, including mobile phone services.
Telco towers play a crucial role in expanding the coverage and reach of telecommunication networks by providing elevated platforms for antennas and equipment, enabling the transmission of voice and data signals across a specific geographic area.

2. Internet (Web2): information at scale. As a major player in the telecommunications industry, Deutsche Telekom has played a pivotal role in providing infrastructure for the Internet over the past two decades. From laying cables in the ground to connecting users to Web2, it has contributed significantly to information dissemination at scale. Laying cables involves the physical installation of various types of cables, including fiber-optic cables, to establish the infrastructure that supports communication networks. This process encompasses the deployment of cables along designated routes, connecting key points to create a network backbone. Fiber-optic cables, known for their ability to transmit data using light signals, are often utilized for high-speed and reliable communication.

3. Blockchains (Web3): value at scale. In the last four years, Deutsche Telekom has actively entered the space of public blockchain networks, offering infrastructure support for decentralized ecosystems. Initiatives like joining the Chainlink decentralized oracle network showcase its dedication to providing value at scale in Web3. Deutsche Telekom's involvement extends to proof-of-stake blockchains, with successful engagement in projects like Flow as well as some experience with Substrate through the Polkadot network. Telekom's recent venture into the Aleph Zero blockchain reflects its enthusiasm for exploring new horizons, particularly the privacy-enhancing features in smart contracts. While its focus has primarily been on the infrastructure layer, there is much excitement about the potential use cases in Web3.
Notably, Telekom has already garnered recognition and trust from eight blockchain networks with over 200 validators, safeguarding assets totaling over 60 million euros across Chainlink, Ethereum, Polkadot, Polygon, Energy Web, Celo, Flow, and other networks.

How Can Telcos Utilize Decentralized Technologies?

According to Tobias, there are five key areas where telcos can deploy the strength of decentralized technologies to improve customer relationships with Web3 models: using Web3 models to develop better products; simplifying customer interactions in retail and online via Web3; tracking, preventing, or offsetting carbon emissions through blockchain-based operations; onboarding the next 1 billion users; and establishing a participatory infrastructure that excels at crowd-sourcing.

Customer Engagement and Loyalty: A significant focus of Deutsche Telekom is on improving customer engagement and loyalty. The goal is to enhance interactions with customers, making them more attached to Deutsche Telekom, thereby incentivizing them to use telecom services and rewarding them for their loyalty. Looking into Web3 and innovative ownership models such as token-based concepts like NFTs and loyalty tokens, there is potential for exciting use cases. These approaches offer a fun and engaging way to spark customer interest and enthusiasm for Deutsche Telekom services, particularly through the creative implementation of loyalty tokens or NFTs.

Sustainability: Deutsche Telekom is strategically advancing its sustainability goals within the framework of its "Horizon" strategy, a key component of the "Capital Markets Day Ambition (2021-2024)" as outlined in the strategy and transformation section of the Deutsche Telekom company presentation 2023. Prioritizing sustainability, the company is actively addressing carbon emissions and exploring innovative Web3 solutions, collaborating with Celo to potentially tokenize carbon footprints.
With a firm commitment to becoming carbon-negative by 2030, Deutsche Telekom aims to offset more emissions than it generates, aligning with the forward-looking objectives encapsulated in "Horizon Three" - the phase dedicated to next-gen delivery for enabling growth. Despite challenges as one of Germany's major energy consumers, the company remains dedicated to achieving environmental responsibility and integrating sustainable practices into its operations, marking a proactive step toward leading as a digital telco with a strong emphasis on environmental stewardship.

Decentralized ID and Wallets: Deutsche Telekom is strategically focusing on integrating decentralized identity and wallets into Web3 adoption. Emphasizing the critical role of wallets in onboarding users, the ongoing development of its Web3 infrastructure, including the Aleph Zero network, aims to expose the company's customer base to the decentralized ecosystem. Notably, decentralized applications (dApps), particularly those centered on loyalty-based rewards and engaging games, are seen as potential tools to strengthen customer bonds. In the entertainment and media sector, leveraging strong partnerships, especially in sports sponsorship, offers avenues for enhancing customer experiences.

Network and Infrastructure: The speaker addressed the role of telcos in enabling the development of blockchain infrastructure, focusing on the practical aspects of building their own infrastructure layer. Responding to a question posed in the Q&A session of the workshop, the discussion delved into the decision-making process for choosing layer-one blockchain infrastructure. The choice of infrastructure depends on the specific use case, emphasizing an agnostic approach that considers a multi-chain future. Acknowledging the significance of the mobile phone number as a future identifier, the speaker highlighted the necessity of not being limited to one blockchain.
Conclusion

Deutsche Telekom, a telecommunications powerhouse with a global presence across 50 countries, boasts a workforce of over 205,000 and a customer base exceeding 240 million mobile and 21.4 million broadband users. Its extensive footprint encompasses fixed networks, broadband lines, TV, and internet services, serving 8.3 million TV customers, along with 4.1 million IPTV, satellite, and cable customers in Europe. In the corporate landscape, the company contributes significantly to the industry, generating a staggering revenue of 114.4 billion euros in 2022. Noteworthy is its subsidiary, Deutsche Telekom MMS, specializing in cloud infrastructure for robust blockchain networks. Tailored for high-profile networks like Ethereum or Polkadot, the hosted nodes play a pivotal role in transaction recording, verification, and validation, enhancing overall blockchain network security. This strategic amalgamation of global reach, technological prowess, and adaptability positions Deutsche Telekom as a key player in the integrated telecommunications landscape.

  • Defending Your Investments: A Comprehensive Guide to Spotting Crypto Honeypots

In the dynamic realm of cryptocurrency, investors face not only the promise of innovation but also the lurking dangers of scams and fraudulent schemes. Among these threats, honeypot scams stand out as deceptive traps designed to lure unsuspecting individuals into parting with their hard-earned money. Broadly speaking, a honeypot scam is a financial ruse wherein funds can be deposited, but withdrawal becomes an insurmountable challenge. In the crypto space, these scams manifest primarily as fake platforms and fake assets, each presenting unique challenges for investors. In this article, we will dissect the intricacies of honeypot scams, examining what they entail and equipping you with the tools to identify and avoid them.

What Constitutes a Crypto Honeypot Scam?

1. Fake Platforms

Fake platforms in honeypot scams are a straightforward yet effective method employed by scammers. These malevolent actors set up websites that mimic the appearance and functionality of legitimate centralized exchanges or investment platforms. The goal is to dupe individuals into depositing funds into what appears to be their "accounts." These fraudulent sites often boast scrolling price trackers, fabricated testimonials, and false partnerships, all meticulously designed to create an illusion of legitimacy. However, any funds deposited on these platforms serve only to line the scammer's pockets, as no genuine services or returns are provided.

2. Deceptive Assets

Honeypot scams involving fake assets add a layer of complexity to the deception. In this scenario, scammers create tokens or NFTs that can be purchased but are rendered unsellable. Typically, these scams orchestrate an intensive marketing push in the weeks leading up to the presale or mint of the asset. Once the asset goes live, the associated social media accounts vanish, leaving investors with tokens they cannot offload on any decentralized exchange (DEX).
This predicament arises through mechanisms such as setting the selling tax to 100% or incorporating a whitelist/ban-list function in the smart contract.

Spotting the Telltale Signs of a Honeypot

Identifying Fake Platforms

Fake platforms leave a trail of red flags that, when recognized, can shield potential victims from falling prey to the scam. Here are key indicators to watch for:

Unverifiable Claims: Any unverified claims about partnerships, audits, insurance, or investments should trigger alarms, prompting investors to scrutinize the project's legitimacy further.

No Social Media Presence: Legitimate projects actively engage on social media; the absence of such a presence is suspicious and raises questions about the project's credibility and outreach efforts.

Poor Language Quality: Spelling and grammatical errors on the site may signal a lack of professionalism typical of scams, underscoring the importance of thorough due diligence in evaluating project authenticity.

Unsolicited DMs: If you receive an investment offer via direct message from an unknown sender, exercise extreme caution; legitimate projects typically use official channels for communication, and unsolicited messages may indicate fraudulent intent.

Unreasonably High Returns: Be skeptical of investment or staking sites offering returns that seem too good to be true, as such promises often align with classic hallmarks of fraudulent schemes designed to lure unsuspecting investors.

Non-Functional Elements: If a significant portion of the website's features is non-functional, it may indicate a scam, emphasizing the importance of checking the site's functionality and legitimacy before engaging with it.

Recent Site Launch: Checking the site's launch date through an ICANN lookup is crucial; honeypot sites often emerge hastily and disappear just as quickly, highlighting the need for investors to assess the project's history and stability.
Guaranteed Returns with Zero Risk: Promises of guaranteed returns with no risk are classic hallmarks of fraudulent schemes, urging investors to critically evaluate the feasibility of such claims and exercise caution when encountering them.

Detecting Fake Assets

While spotting fake assets can be challenging, vigilant investors can still discern warning signs:

Low Softcap, High Hardcap Discrepancy: Unrealistic financial goals, such as an extremely low softcap paired with a high or non-existent hardcap, are red flags, signaling potential inconsistencies in the project's financial planning that warrant careful consideration.

Freshly Funded Wallets: If the wallet funding the smart contract was recently funded from an exchange or Tornado Cash, it may be a sign of a honeypot, emphasizing the need for investors to investigate the funding history of wallets associated with a project.

Unsolicited DM Recommendations: If someone recommends an asset via unsolicited DM, be cautious, as unsolicited advice can be a tactic used by scammers to lure unsuspecting individuals into fraudulent schemes.

Hidden Contract Address: Difficulty in finding the contract address raises suspicions; transparent projects readily provide this information, and its absence should prompt investors to question the project's transparency and legitimacy.

Ambitious Roadmaps: Projects promising an excessively ambitious roadmap may be attempting to over-promise and under-deliver, highlighting the importance of realistic expectations and thorough evaluation of a project's development plans.

Unverified Audits: Claims of a contract audit should be scrutinized; if the audit is unverifiable or appears fake, exercise caution, as genuine projects often provide transparent and verifiable details about their security audits.

Suspicious Social Media Metrics: Accounts that gain thousands of followers within weeks but lack genuine engagement may be fraudulent.
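The 100%-sell-tax trap mentioned earlier can be illustrated with a toy Python model. The class and names here are hypothetical, and real honeypots implement this logic inside the token's transfer function in a contract language such as Solidity; the sketch only shows why the buyer's money goes in but never comes out.

```python
class HoneypotToken:
    """Toy model of a 100%-sell-tax honeypot token (illustrative only,
    not real smart-contract code)."""
    SELL_TAX = 1.0  # 100% of the proceeds are confiscated on every sell

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def buy(self, wallet: str, amount: int) -> None:
        # Buying always succeeds, so on a DEX the token looks tradable...
        self.balances[wallet] = self.balances.get(wallet, 0) + amount

    def sell(self, wallet: str, amount: int) -> float:
        if self.balances.get(wallet, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[wallet] -= amount
        # ...but the sell path keeps everything: proceeds are always zero.
        return amount * (1 - self.SELL_TAX)

token = HoneypotToken()
token.buy("victim", 1_000)
assert token.sell("victim", 1_000) == 0.0  # tokens gone, nothing received
```

A ban-list variant behaves the same way, except the sell call simply reverts for any address not on the scammer's whitelist.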
Tools and Tactics for Defense

In the ever-evolving landscape of crypto scams, investors can employ various tools to bolster their defenses; platforms like TokenSniffer, Scamsniper, Honeypot.is, and RugScreen offer smart contract scanning services that help users find out what they're dealing with. Here's why these tools matter:

Preventive Measures: By utilizing these tools, investors can take proactive steps to avoid falling victim to honeypot scams. Early identification of suspicious characteristics allows for informed decision-making.

Risk Mitigation: Assessing the risk associated with a smart contract gives investors a clearer understanding of its potential vulnerabilities, enabling them to make more informed investment decisions.

Community Collaboration: Leveraging these tools fosters a collaborative approach within the crypto community. Investors contribute to a collective effort to identify and mitigate potential risks, creating a safer environment for all participants.

Advantages of Honeypots

In a peculiar twist, honeypots, despite their apparent drawbacks, come with a set of merits. These prove particularly valuable for developers of secure smart contracts, offering a live assessment of their performance and security. Unlike typical decoy wallets or instruments for indiscriminate spam, these honeypots involve intricate smart contracts devoid of blatant vulnerabilities. Instead, they serve as sophisticated mechanisms designed to absorb authentic attacks from hackers seeking weaknesses for illicit activities such as theft. A noteworthy benefit is the reduction in false alarms triggered by protection systems. Unlike traditional cybercrime detection setups prone to generating numerous false positives, honeypots keep such instances to a minimum. This is because regular users have no reason to interact with a honeypot, making any interaction a far more reliable threat signal.
Conclusion

In the ever-evolving landscape of cryptocurrencies, the specter of honeypot scams looms large, presenting a formidable challenge to unsuspecting investors. As we conclude this exploration into the depths of crypto deception, it becomes clear that vigilance and due diligence are paramount. As we move forward in the crypto space, one thing remains clear: trust is earned through verification. Whether evaluating the legitimacy of a project's partnerships, scrutinizing audits, or delving into smart contract details, a thorough and skeptical approach is the best defense against falling victim to honeypot scams. In the pursuit of a secure and trustworthy crypto ecosystem, investors play a pivotal role. By staying informed, questioning the status quo, and leveraging available tools, they contribute to the collective effort to make the crypto space safer for everyone. Ultimately, the lesson is clear - while the crypto world holds vast potential, it also demands discernment. The journey toward financial growth and security in the crypto realm is paved with due diligence, skepticism, and an unwavering commitment to safeguarding one's assets. As we bid farewell to this exploration, let these insights serve as a compass, guiding you through the intricate and often deceptive terrain of crypto investments.

  • CTRL + Hack + ZK: Limitless Compute - Access to a Decentralized Cloud using Acurast

In January 2024, Andreas Gasmann, a developer at Acurast, hosted a workshop for CTRL + Hack + ZK, the inaugural hackathon organized by Aleph Zero. Speaking expressly about the manifold problems associated with present cloud providers, the blockchain quadrilemma, and its negative impact on distributed applications, he presented Acurast, a Layer-1 blockchain that addresses these shortcomings with a novel decentralized and serverless approach. This article goes into depth about its solutions and how it solves these problems.

What is Cloud Computing?

Cloud computing is a technology that involves delivering various computing services over the internet. Users can access these services on demand, often paying only for the resources they consume. Examples of on-demand cloud services include "Infrastructure as a Service" (IaaS) offered by providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Additionally, there are "Platform as a Service" (PaaS) offerings such as Heroku and Google App Engine, enabling users to build, deploy, and manage applications without dealing with the underlying infrastructure. "Software as a Service" (SaaS) applications like Salesforce, Microsoft 365, and Dropbox are also notable, being delivered over the internet for easy access and usage. Cloud storage services, including Google Cloud Storage and Microsoft Azure Storage, provide scalable and on-demand storage solutions, exemplifying the flexibility and convenience inherent in cloud computing and enabling collaborative work environments.

Contrasting with the Decentralized, Serverless Cloud

The evolution from Web2 to Web3 signifies a pivotal shift. Web2-based cloud services rely on centralized entities, fostering concerns over data control and privacy. In contrast, Web3, embodied in decentralized, serverless cloud solutions, prioritizes user ownership and privacy, removing intermediaries.
This transition from Web2 to Web3 encapsulates a broader move from centralized trust and control to a democratized, permissionless, and user-centric internet experience. The serverless cloud, operating on a peer-to-peer network, ensures robust security and confidentiality through smart contracts, albeit facing challenges in scalability and in managing decentralized resources.

Spotlight on Cloud Provider Hurdles

Cloud providers contend with four challenging aspects of cloud computing:

Cost: Cloud services are often expensive due to centralized infrastructure and maintenance costs. The market is dominated by a few major players like Google, Microsoft, and others, creating a competitive landscape that can limit options for smaller businesses. This concentration can result in higher prices, as these major companies have significant control over the market. There is also the cost of cloud outages, which are a commonplace occurrence: a 2023 report from Parametrix Insurance - pioneers in monitoring and modeling cloud providers and cloud-based services - concluded that a 24-hour outage of mission-critical services in AWS us-east-1, the cloud region with the largest number of Fortune 500 companies relying on it, could cost $3.4 billion in direct revenue.

Transparency: The lack of transparency in pricing for cloud services is a common concern. Many traditional cloud providers have complex pricing models with various factors such as data transfer, storage, and compute resources, making it challenging for users to understand the true cost implications of their usage. In contrast, there is a growing demand for more transparent pricing models in decentralized and serverless services. Clear, straightforward pricing structures allow users to better anticipate and manage costs, fostering trust and enabling more informed decision-making.
Centralization: The centralization of cloud services, dominated by a few major U.S.-based providers, introduces concerns related to control, potential single points of failure, and legal jurisdiction. Reliance on a small number of providers may expose businesses to disruptions if a major provider experiences downtime. Moreover, the concentration of providers in a specific geographic location can subject users to the regulatory framework of that jurisdiction, potentially raising concerns about data privacy and security. This lack of geographic diversity limits options for businesses seeking alternatives that align with different legal and regulatory environments.

Confidentiality: Confidentiality concerns within cloud services involve worries about unauthorized data access or exposure. Users trust cloud providers to implement robust security measures, such as encryption, access controls, and secure authentication, to safeguard their sensitive information. The potential for insider threats within the cloud provider organization raises questions about who might access or misuse the data. Solutions like Acurast aim to address these concerns by providing a network whose trust assumptions align with blockchains' inherent permissionless ethos, offering low-cost computation with an added layer of effectiveness and confidentiality.

Limitless Computing with Acurast

Although blockchain development has been around for over a decade, one might more willingly volunteer to undertake the mythical task of slaying the Lernaean Hydra from Greek mythology, one of the twelve labors of Hercules, than to design a blockchain that is simultaneously secure, scalable, and decentralized while offering computational effectiveness - the ability to run complex decentralized computations at affordable costs - and an additional layer of confidentiality.
Acurast embraced innovation, tackling the quadrilemma with a decentralized and serverless strategy within its L1 Blockchain architecture network. This audacious move, coupled with a zero-trust security model, not only signifies a departure from conventional approaches but also promises robust benefits. Decentralization enhances resilience, minimizing the risk of a single point of failure and fostering scalability. The adoption of serverless architectures ensures automatic scaling, optimizing resource utilization and potentially lowering costs. This approach not only bolsters flexibility by offering a choice among decentralized platforms but also mitigates the risk of vendor lock-in. Improved accessibility is a natural outcome, as decentralized systems, by their nature, distribute services, resulting in reduced latency and heightened performance across diverse locations for end-users. Andreas Gasmann, a developer at Acurast, spoke on the transformative impact of new chips, particularly ARM 64-based chips found in mobile phones like the Google Pixel, and highlighted that these chips, with not just CPUs but also GPUs and dedicated machine learning chips, offer substantial computing power. Despite being six times cheaper than traditional server racks, mobile phones exhibit faster CPU benchmarks, and when factoring in additional chips, they can achieve an 80% higher performance. The energy efficiency of these newer chips is noteworthy, requiring about 50 times less energy, resulting in improved overall performance for the same price. A significant point emphasized was the ability to repurpose old cell phones for the Acurast network, allowing users to participate as processors, contributing compute power at a low cost, sometimes as low as $10. Anyone can join the network, with a simple three-minute onboarding process involving scanning a QR code on the website, making it accessible even to those without technical expertise. 
This approach has led to the formation of small device farms driving the Acurast backend. In comparison to a Google instance, running on the Acurast network is estimated to be around 30 times cheaper, coupled with the added benefit of confidential computing.

The Architecture of Acurast

Acurast's modular architecture, with distinct layers for consensus, execution, and application, exemplifies a strategic approach that not only champions a broader interoperability framework but also facilitates harmonious collaboration across Web3 and Web2 environments. The modular execution layer, a standout feature of Acurast, empowers the network to leverage secure hardware coprocessors, eliminating the need to trust third parties and enhancing overall security. This design emphasizes universal interoperability while prioritizing robustness and trustlessness at its core. Essentially, the consensus layer of Acurast has two cores: the purpose-built Orchestrator and a reputation engine. The Orchestrator in Acurast orchestrates consumer jobs, employing an end-to-end zero-trust job execution approach. A "job" refers to a task or computational process defined by a consumer within the Acurast system. It involves specifying details such as the destination for settling the job, selecting deployment templates, and choosing execution processors. These jobs can encompass a wide range of computational tasks, and the Acurast system manages the entire lifecycle of a job, from definition and deployment to completion. Acurast's job lifecycle comprises four essential phases. First, in the job registration phase, consumers define task details, select deployment templates, and specify execution processors, while payments and integration preferences are settled. Subsequently, in the job acknowledgment phase, processors fetch job details and move through states like MATCHED and ASSIGNED based on fulfillment capabilities.
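The state progression of this lifecycle, which the execution and fulfillment phases described next complete with a DONE state, can be sketched as a small state machine in Python. MATCHED, ASSIGNED, and DONE are state names taken from the text; REGISTERED and EXECUTING are illustrative labels for the other phases, not Acurast's documented identifiers.

```python
from enum import Enum, auto

class JobState(Enum):
    REGISTERED = auto()  # consumer has defined and paid for the job
    MATCHED = auto()     # a processor has been matched to it
    ASSIGNED = auto()    # the processor has accepted the assignment
    EXECUTING = auto()   # running inside the secure runtime
    DONE = auto()        # output delivered, fulfillment reported

# Allowed transitions in this simplified lifecycle
TRANSITIONS = {
    JobState.REGISTERED: {JobState.MATCHED},
    JobState.MATCHED: {JobState.ASSIGNED},
    JobState.ASSIGNED: {JobState.EXECUTING},
    JobState.EXECUTING: {JobState.DONE},
    JobState.DONE: set(),
}

class Job:
    def __init__(self) -> None:
        self.state = JobState.REGISTERED

    def advance(self, new_state: JobState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

job = Job()
for s in (JobState.MATCHED, JobState.ASSIGNED, JobState.EXECUTING, JobState.DONE):
    job.advance(s)
assert job.state is JobState.DONE
```

Modeling the lifecycle as explicit transitions makes the protocol's guarantee easy to see: a processor cannot report DONE without having passed through matching, assignment, and execution first.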
The third phase involves job execution, where the actual computational process occurs within secure runtimes like the Acurast Secure Hardware Runtime (ASHR). Finally, in the job fulfillment and reporting phase, the completed output is delivered to specified destinations, settling gas fees for cross-chain transactions. Processors report back to the Acurast consensus layer, entering the DONE state (which means job fulfillment was successful), and continuous reliability metrics are fed into the reputation engine to ensure the protocol's robustness. The reputation engine in Acurast plays a critical role in maintaining the integrity of the system by ensuring accurate updates to the reputation scores of processors and incentivizing honest behavior. It acts as a mechanism to evaluate and track the performance of processors based on their successful job fulfillments or reported failures. By continuously feeding in reliability metrics, particularly after job completion or failure, the reputation engine provides a dynamic way to gauge the trustworthiness of processors. This, in turn, creates a powerful incentive structure, encouraging processors to act honestly and efficiently in executing tasks within the Acurast ecosystem. Together, the four phases of the job lifecycle, the Orchestrator, and the reputation engine delineate a comprehensive and secure journey for computational tasks within the Acurast ecosystem.

Conclusion

Addressing the widely recognized challenges of centralized trust, seamless interoperability, and confidentiality in the execution layer within Web2 and Web3, Acurast presents a disruptive solution to the U.S. cloud monopoly. By leveraging mobile hardware, Acurast democratically decentralizes the cloud, offering everyone the opportunity to participate in this decentralized ecosystem using their mobile phones. This approach grants developers permissionless access to trustless, affordable, and confidential computing resources.
As a Layer-1 blockchain, Acurast introduces a novel serverless architecture, effectively turning the network into a decentralized, serverless cloud. This modular design not only mitigates the identified shortcomings but also enables seamless, native settlements across ecosystems, ultimately enhancing the efficiency of computation. Acurast emerges as a strong contender, reshaping the landscape of decentralized cloud computing.
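The job lifecycle described in this article - registration, acknowledgment (MATCHED/ASSIGNED), execution, and fulfillment (DONE) - can be sketched as a simple state machine. This is an illustrative model only; the actual states, transitions, and failure handling are defined by the Acurast protocol on-chain, and the job name below is hypothetical:

```python
from enum import Enum, auto

class JobState(Enum):
    REGISTERED = auto()   # consumer has defined and paid for the job
    MATCHED = auto()      # a processor's capabilities match the job spec
    ASSIGNED = auto()     # the processor has acknowledged the job
    DONE = auto()         # output delivered, fees settled
    FAILED = auto()       # reported failure, fed into the reputation engine

# Allowed transitions in this simplified lifecycle model.
TRANSITIONS = {
    JobState.REGISTERED: {JobState.MATCHED},
    JobState.MATCHED: {JobState.ASSIGNED},
    JobState.ASSIGNED: {JobState.DONE, JobState.FAILED},
}

class Job:
    def __init__(self, job_id: str):
        self.job_id = job_id
        self.state = JobState.REGISTERED

    def advance(self, new_state: JobState) -> None:
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

job = Job("example-computation-task")  # hypothetical job id
job.advance(JobState.MATCHED)
job.advance(JobState.ASSIGNED)
job.advance(JobState.DONE)
```

Modeling the lifecycle as explicit transitions mirrors why the protocol can feed clean reliability signals to the reputation engine: every job ends in exactly one terminal state.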

  • Understanding Privacy-Enhancing Technologies (PETs): Non-Cryptographic Based Solutions - Part 2

    In the first part of this series, we delved into some cryptographically based privacy-enhancing technologies (PETs), along with blockchains and companies that utilize them. In this second part, we'll be looking at privacy-enhancing technologies that are not cryptographic or don't involve encryption.

Communication Anonymizers

Communication anonymizers are tools used to keep online activities untraceable by hiding users' identifiable information stored on the computer or by replacing it with one-time identifiable information. The underlying concept behind anonymizers is the anonymous proxy server. Proxy servers are intermediary servers, or privacy shields, between the information that users consume from the internet and the data they share on the web, thereby providing security and privacy. Think of proxy servers as "bodyguards" that gatekeep the information passing between users and the websites they browse. These proxy servers hide users' IP addresses so that websites are unable to track their browsing histories.

How proxy servers work: a user requests to visit a website; the request goes to the proxy server, which passes it on to the site. The website responds with the requested information, which the proxy server then relays back to the user. This technique prevents the website from learning the IP address of the real user behind the request.

There are different types of proxy servers, each with its own function; however, anonymous proxy servers are our interest here because they work like communication anonymizers. Anonymizers conceal users' identities and data while they access the internet. There are various kinds of anonymizers, some of which are very familiar to us. A VPN, for instance, is a type of anonymizer because it establishes a secure connection over the internet from the user's device. Anonymizers are useful for preventing data theft and unwanted access to browsing histories.
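The relay step described above can be sketched as a function that rewrites a request before forwarding it. The request representation and field names here are hypothetical simplifications; a real anonymizing proxy operates on HTTP headers and TCP connections:

```python
import uuid

# Hypothetical, simplified request representation (a dict instead of a real
# HTTP request). A real proxy would rewrite headers and the TCP source address.
def anonymize_request(request: dict) -> dict:
    forwarded = dict(request)
    # The origin server sees the proxy's address, not the user's.
    forwarded["client_ip"] = "203.0.113.10"  # documentation-range IP standing in for the proxy
    # Strip identifying headers before forwarding.
    for header in ("Cookie", "User-Agent", "X-Forwarded-For"):
        forwarded.pop(header, None)
    # Replace any persistent identifier with a one-time identifier.
    forwarded["session_id"] = uuid.uuid4().hex
    return forwarded

original = {
    "client_ip": "198.51.100.7",
    "Cookie": "uid=42",
    "url": "https://example.com",
}
relayed = anonymize_request(original)
```

The website answering `relayed` can serve the content but cannot tie the request back to the original IP or a persistent cookie, which is the essence of a communication anonymizer.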
Federated Learning

This is a machine-learning technique that uses several decentralized data sets to train models for statistical analysis. When data is spread among decentralized servers rather than kept on a single server, data minimization is achieved, because the data that would normally sit on one server is distributed across many. In businesses, data minimization is very useful: the less data a company holds, the less harm a breach can do. Several model versions are trained and run locally on devices, rather than the data being fed into a central model. IoT applications are one area where federated learning is notably employed.

Trusted Execution Environments (TEEs) or Secure Enclaves

TEEs are non-cryptographic privacy-enhancing technologies. They are physical regions of a computer's main processor, isolated from the other parts of the central processing unit, where data and code are stored to prevent tampering by unauthorized parties. A TEE is also referred to as a secure enclave. The term "enclave" comes from a French word meaning "to enclose," which in turn derives from a Latin word for "key." From this, we can think of TEEs, or secure enclaves, as secured, locked black boxes. By design, TEEs allow encrypted data to be stored such that no one, not even the owners of the servers, can access the users' data inside them. Encrypted data is decrypted and computed on only within the TEE. TEEs aid the confidentiality and integrity protection of the code and data stored within them: unauthorized parties are unable to modify or replace the data, thereby preserving its privacy. TEEs have the advantage of a lower computation-time overhead than cryptographic techniques. However, because TEEs are hardware-dependent, they pose certain risks, as the hardware itself can be exploited.
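Returning to federated learning for a moment: its core step - averaging model parameters that were trained locally, instead of pooling the raw data - can be sketched as follows. This is a deliberate simplification (real systems such as federated averaging use weighted means, many clients, and often secure aggregation); the one-parameter linear model is purely illustrative:

```python
# Each client trains locally and shares only model parameters, never raw data.
def local_update(w, data, lr=0.1):
    # Toy "training": one gradient step for a 1-D linear model y = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights):
    # The server only ever sees parameters, achieving data minimization.
    return sum(client_weights) / len(client_weights)

clients = [
    [(1.0, 2.0), (2.0, 4.0)],   # client A's private data (roughly y = 2x)
    [(1.0, 2.1), (3.0, 6.3)],   # client B's private data
]
global_w = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(global_w, data) for data in clients]
    global_w = federated_average(updates)
# global_w converges near 2, without either client revealing its data points.
```

The privacy benefit comes from what is *not* transmitted: the coordinating server learns a consensus parameter, not the individual records behind it.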
With regard to use in blockchains, permissioned blockchains are better suited to employing TEEs than public blockchains. To avoid making this piece too lengthy, you can learn more about why this is so in this Medium article. An example of a blockchain that uses TEEs is Oasis Network.

Synthetic Data

Synthetic data generation is the process of taking an original dataset or data source and using it to create new, non-identifiable artificial data with comparable statistical characteristics. Maintaining the statistical qualities entails ensuring that an analyst working with a synthetic dataset reaches the same statistical conclusions as they would when working with the real (original) data. Artificially generated data can be produced by machine-learning algorithms.

Use Cases of Privacy-Enhancing Technologies (PETs)

By offering transparency, choice, and auditability within data systems, PETs have the potential to enable increased security and confidentiality of data across use cases and industries. Here are some highlighted use cases.

Financial institutions and DeFi protocols: This is one of the sectors where PETs are applied. Privacy-enhancing technologies are very useful in safeguarding the financial history and balances of users who wish to remain private, whether in centralized (traditional) finance or decentralized finance.

Data analysis: Data analysts work with large amounts of data; without data they can't work effectively, so they are often exposed to large datasets. This poses a high risk because, even when the analysts themselves are trustworthy, their computers can be hacked to steal users' data. To prevent this, certain privacy-enhancing technologies can be employed so that analysts can compute with data without being exposed to the sensitive content of the datasets.
AI/Machine Learning: With the use of AI and machine learning, the risk of privacy breaches can be minimized: personal data can be encrypted, human error reduced, and possible cybersecurity events detected. Conversely, PETs can be employed when training AI and machine-learning models, yielding privacy-preserving machine-learning techniques that help protect sensitive data.

Healthcare: The healthcare industry holds one of the largest stores of individuals' data and personal information (medical records included), making it a prime target for data breaches. Employing PETs is beneficial not just for hospitals and healthcare centers but also for patients.

In addition to PETs, other classes of technologies are designed to protect users' privacy. These technologies are considered complementary to PETs: Transparency Enhancing Technologies (TETs) and Intervenability Enhancing Technologies (IETs). Let's take a brief look at what they offer.

Transparency Enhancing Technologies (TETs)

While PETs are designed to minimize the amount of sensitive data accessible to third parties, TETs are tools and techniques designed to give users more insight into how their data and private information are utilized by organizations and online service providers, giving them control over how they share it. The idea is that when organizations are made accountable to their users for how they use their data, they are kept in check and pushed to safeguard that data.

Intervenability Enhancing Technologies (IETs)

As the name suggests, IETs are tools and techniques that give users the ability to intervene in data processing where their own data is concerned. Users can delete data, give or withdraw consent, choose which data to share, and so on, giving them better control over their data.

Challenges of Privacy Enhancing Technologies (PETs)

Complexity: The most effective PET solutions are complex to deploy and manage.
Expensive: Due to the heavy computation involved in some PET approaches, the hardware components required are usually not cheap.

Lack of expertise: Some companies do not have the internal capacity to integrate PETs into their systems, and outsourcing to third-party services contradicts the whole idea of keeping users' data private.

Compliance issues: Many PET solutions do not comply with the data-privacy laws set in place by regulatory bodies.

Privacy-enhancing technologies come in various forms: some are solutions that focus on hiding the data, some are approaches that change the original data, and others are solutions that focus on encrypting the data. In all of these, one goal is common - to ensure that users' data is protected both within and outside the blockchain.

  • Understanding Privacy-Enhancing Technologies (PETs): Cryptographic Solutions - Part 1

    Data has become a far more valuable resource for businesses in recent years. It facilitates swift decision-making for firms, increasing their chances of success. However, businesses also have to contend with an ever-growing array of data-risk issues. Hence, several measures are taken to ensure data protection. The ever-evolving exploration of technology has paved the way for more data theft by bad actors. On the other hand, it has also given developers more room to create solutions that counter these bad actors, giving organizations a range of options for protecting their consumers' data. The class of these solutions addressing the protection of users' privacy is referred to as "privacy-enhancing technologies."

What are Privacy-Enhancing Technologies?

Privacy-enhancing technologies (PETs) are technologies and approaches that allow data to remain private even while it is being computed on, thereby preserving sensitive data. They maximize data security by helping minimize the use of personal data (personally identifiable information, or PII), giving people more control over their data. These technologies may be software or hardware components, and they encompass all technologies that serve as fundamental building blocks of data security and privacy.

The main goals of PETs: Some PETs aim to allow users to choose which personal information to share with third parties like online service providers. It's no secret that these providers collect users' data and utilize it for several activities, including selling it. Some PETs aim to give users anonymity: their data is shared and utilized online, but the users remain private because they use anonymous or pseudonymous credentials. Some PETs employ cryptographic methods to ensure that users' data is kept private even while it is being processed. This piece focuses on this last category of PETs.
Data can be kept private through various means, such as encryption, pseudonymization, obfuscation or masking, opaqueness, and inaccessibility. As the world changes and adopts big tech, there is rapid acceptance of the huge potential of technology to solve problems across industries. Before now, sharing personal data with online service providers seemed like the only way to become a member of this "global village." However, now we know better. Now we know that these providers are not looking out for us. Now we know that our data is not safe and is at risk of being stolen. The emergence of privacy-enhancing technologies can be regarded as a much-needed breakthrough for data security. The next section of this article explains some privacy-enhancing technologies (PETs) that are based on cryptography. You'll also see examples of blockchains, blockchain-based applications, and protocols that use some of these PET solutions.

Types of Privacy-Enhancing Technologies (PETs): Cryptographic Techniques

Homomorphic Encryption: Typically, encrypted data can only be worked on after it has been decrypted. Say Alice wants to send Bob some information that she doesn't want any other eyes peeking into; she sends it as an encrypted file, and Bob can only access it after he has decrypted it...or so we thought. With homomorphic encryption, this file can be accessed and worked on even while it is still encrypted. The concept has been around for over three decades. Homomorphic encryption is an encryption technique that makes it possible to compute with encrypted data. The result of such computations is itself encrypted but can be decrypted thereafter.
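To make the Alice-and-Bob scenario concrete, here is a toy additively homomorphic scheme - textbook Paillier with deliberately tiny, insecure parameters. Real deployments use moduli of 2048 bits or more and audited libraries; this sketch only demonstrates the algebra:

```python
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic). Tiny primes for
# illustration only -- these parameters offer no real security.
p, q = 61, 53
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def encrypt(m, r):
    # r must be random and coprime with n; fixed here for reproducibility.
    assert 0 <= m < n and gcd(r, n) == 1
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

# Alice encrypts two values; Bob adds them WITHOUT ever decrypting:
c1 = encrypt(20, 17)
c2 = encrypt(22, 29)
c_sum = c1 * c2 % n2     # homomorphic addition = ciphertext multiplication
assert decrypt(c_sum) == 42
```

Bob only ever sees ciphertexts, yet the product of the two ciphertexts decrypts to the sum of the plaintexts - exactly the "compute on encrypted data" property the text describes (Paillier is partially homomorphic: addition only).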
This is an interesting model of how two parties can work together on a single piece of data, because it prevents the disclosure of private information to any peeking pairs of "online eyes" that might be on the lookout for an opportunity to steal or hack data. With homomorphic encryption, Alice can send Bob that piece of information in an encrypted file, Bob can compute with it while it stays encrypted, and he can even send it back to Alice still encrypted. This way, they have avoided exposing the data to any hacker who might have been watching online. There are three types of homomorphic encryption: Fully Homomorphic Encryption (FHE), Partially Homomorphic Encryption (PHE), and Somewhat Homomorphic Encryption (SHE). One downside of homomorphic encryption is that it is compute-intensive, requiring many heavy computations.

Zero-Knowledge Proofs (ZKPs): If you've been following CodeTavern, you should be familiar with zero-knowledge proofs by now. If you wish to learn more, check out this piece that discusses ZKPs at length. Zero-knowledge technology is a cryptographic mechanism for validating transactions on the blockchain. It involves two parties: the prover, who proves the truth of a statement, and the verifier, who checks the authenticity of that statement. The prover, so as not to reveal the content of the transactions, provides the verifier with nothing but a cryptographic proof - a summarized piece of the transactions and a computation that demonstrates knowledge of the secret. The verifier then issues a challenge to the prover to ascertain that the prover indeed knows the secret (in interactive ZKPs). Only when the verifier is convinced does the verifier validate the transactions and send them to the blockchain. Zero-knowledge proofs are employed in different industries today and have found wide usage in the Web3 industry, particularly in blockchain projects.
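The prover-commits, verifier-challenges, prover-responds dance described above can be illustrated with a toy Schnorr identification protocol, an interactive proof of knowledge of a discrete logarithm. The group parameters are tiny and insecure, chosen purely for readability; real systems use ~256-bit groups and non-interactive variants:

```python
import random

# Toy Schnorr identification protocol. g generates a subgroup of prime
# order q modulo p (2^11 = 2048 = 1 mod 23). Insecure toy parameters.
p, q, g = 23, 11, 2
x = 7                        # prover's secret (the "witness")
y = pow(g, x, p)             # public key: y = g^x mod p

def prove_and_verify():
    k = random.randrange(q)          # prover's fresh one-time nonce
    t = pow(g, k, p)                 # 1. commitment, sent to verifier
    c = random.randrange(q)          # 2. verifier's random challenge
    s = (k + c * x) % q              # 3. response; reveals nothing about x by itself
    # Verifier's check: g^s == t * y^c (mod p) holds iff prover knows x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

assert all(prove_and_verify() for _ in range(100))
```

The verifier learns that the prover knows `x` without ever seeing `x` - the fresh nonce `k` masks the secret in every response, which is precisely the "proof without disclosure" property the article describes.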
Some ZKP blockchain-specific solutions are zk-rollups, zk DEXs, privacy coins, and more. Examples of blockchain networks that utilize zero knowledge are Aleph Zero, Polygon zkEVM, Linea, zkSync Era, Taiko, Mina Protocol, Scroll, Loopring, StarkNet, etc.

Secure Multi-Party Computation (sMPC): This is a cryptographic technique that allows multiple parties (each with encrypted data) to work together on joint computational tasks without any party revealing its private data to the others. It is particularly effective in cases involving more than two parties computing values from multiple encrypted data sources. Each party contributes inputs without revealing its secret data; these inputs are used during computation to obtain results. The technique has found use cases in areas like e-voting, machine learning, private auctions, medical research, data analysis, blockchain, genetic testing, etc. The combination of distributed processing and encryption in sMPC can significantly improve data security and privacy. In summary, sMPC technology allows networks and protocols to protect "secrets" by breaking them into several parts, making it impossible for any single party to learn the underlying "truth." Examples of blockchain infrastructures and other distributed ledger technologies (DLTs) that utilize sMPC are Qredo Network, Partisia Blockchain, Aleph Zero, Nillion Network, Secret Network, Continuum DAO, IOTA MPC, Hedera, Oasis Network, etc. Blockchains like Aleph Zero utilize both ZKPs and sMPC to improve privacy. Learn more about Liminal (Aleph Zero's privacy-enhancing layer utilizing both ZKPs and sMPC) here.

Verifiable Credentials (VCs): Verifiable Credentials are cryptographically signed digital statements made by an issuer about a holder and presented to a verifier. Here's how it works. VCs involve three parties: the issuer, the holder, and the verifier. Say I wish to work for a company and I claim to possess a certain certificate necessary for the role I'm applying for.
This role requires me to share the certificate in question with the company, but I don't want to share the certificate itself because I don't want my personal data exposed (I'm not a fugitive, I promise). Instead, I go back to the university that issued the certificate and ask for a VC (a cryptographically signed statement) asserting that I indeed possess the certificate in question. The VC could state "Julia has a certificate from the University of Michigan," carrying a valid digital signature from the university but none of my personal information. With the VC, I can go back to the company and prove that my claim was indeed the truth. The university, in this case, is the issuer; I am the holder, about whom the claim is made; and the company I'm applying to is the verifier, because it needs to verify that the VC is valid. (Image source: Lastrust)

Verifiable Credentials are useful for verifying the truth of claims without disclosing private information about the individuals involved. They are mostly used in digital identity management. In an article published in 2021, Gartner - a company focused on delivering actionable, objective insights to businesses - predicted that by 2024, a "truly global, portable, decentralized identity standard will emerge in the market to address business, personal, social, and societal, and identity-invisible use cases." VCs are an example of such a standard. Examples of blockchain-based projects that use VCs are Civic, Sovrin, etc.

Differential Privacy: This is yet another cryptographic mechanism for enhancing privacy. It requires the intentional addition of a statistical "noise" layer to a dataset before computations are carried out. The purpose of this "noise" is to mask certain private data or personal information of individuals in the set, while being small enough not to distort the results produced. The results produced cannot reveal the particular information used to compute them.
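The noise-addition idea can be sketched with the classic Laplace mechanism for a counting query. This is a minimal illustration (a real system tracks a privacy budget across many queries); the salary figures and the `epsilon` value are arbitrary examples:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of the Laplace distribution centered at 0.
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    true_count = sum(1 for r in records if predicate(r))
    # A counting query changes by at most 1 when one person is added or
    # removed (sensitivity 1), so the Laplace noise scale is 1/epsilon.
    return true_count + laplace_noise(1 / epsilon)

salaries = [48_000, 52_000, 61_000, 75_000, 90_000, 120_000]
noisy = private_count(salaries, lambda s: s > 60_000)
# noisy hovers around the true answer (4), but any single individual's
# presence or absence is statistically masked by the noise.
```

Smaller `epsilon` means more noise and stronger privacy; the analyst still gets a usable aggregate, which matches the article's point that the noise masks individuals without destroying the result.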
This technique is mainly utilized in mathematics, data analysis, and statistics. Although differential privacy could find good use in blockchains to protect the privacy of data stored in them, based on research there is no standard record of any blockchain that currently utilizes it. Experts and researchers have, however, proposed techniques for incorporating differential privacy into blockchain layers in the future.

Enhanced Privacy ID (EPID): Like VCs, Enhanced Privacy ID (EPID) is a digital signature mechanism. Unlike conventional digital signature schemes, where every party has a unique public key for verifying transactions and a unique private key for signing and approving them, in EPID each party still has a unique private signature key, but there is one common public verification key linked to all the private keys in the system. EPID also involves three parties: the issuer, the member, and the verifier. Consider, for instance, an organization with 20 employees. Each employee is given an EPID private signing key that cryptographically attests to their status as an employee of the organization without disclosing their "real name" identity. That means there are 20 private keys; however, there is a single EPID public key common to all the employees. This public key can be used externally to verify the employees' membership and the authenticity of their signatures (say, to confirm they are telling the truth about their employment status) without disclosing their personal information. The issuer in this instance is the organization issuing the private keys; a member is an employee; and the verifier is the entity verifying the authenticity of a signature supposedly made on behalf of the organization. EPID allows hardware devices to be remotely authenticated while maintaining their privacy - a device doesn't have to reveal its identity to an outside party to demonstrate what kind of device it is.
EPID is also used to provide anonymous and untraceable signatures: even the issuer of the private keys cannot trace a signature back to the member who produced it. Intel Corporation introduced EPID in 2008 as its recommended algorithm for attestation of a trusted system and has since incorporated the scheme into its products.

Format-Preserving Encryption (FPE): Format-Preserving Encryption is a type of PET solution that allows data to be encrypted while retaining its original format. Plain text is turned into ciphertext that preserves the format of the underlying information (for example, a 16-digit card number encrypts to another 16-digit number) and cannot be understood without deciphering. FPE differs from homomorphic encryption in that the former encrypts data whilst maintaining its original format, whereas the latter is designed to allow computations on encrypted data.

Blinding: This is a PET solution that involves concealing sensitive data from third parties while still allowing them to compute on it. The sensitive data is hidden by multiplying it by a random number, and the random factor is later divided out of the result. Blinding is used in blind signatures, where a signer digitally signs a message without learning the content of that message.

Ring Signatures: Ring signatures are another kind of privacy-enhancing technology that can be used to safeguard data privacy. They are a type of cryptographic digital signature that enables a message to be signed on behalf of a group of users. Ring signatures work by forming a group of users, known as a ring, each of whom has a public key. When a ring signature is used to sign a transaction, it gives the impression that several users have joined forces to form a ring and are carrying out the transaction together - but nobody observing the transaction can identify the real signer.
To put it simply, it looks as though numerous users signed a single transaction instead of just one, much as opening a joint bank account requires signatures from multiple individuals. Ring signatures were introduced back in 2001, making them one of the earlier cryptographic privacy solutions, and they remain effective to date. They can be applied in e-voting systems and in identity-management applications. An example of a blockchain that uses ring signatures is Monero, which employs them as one of its transaction-privacy techniques. Ring-signature-based schemes have also been explored for other networks, such as Dash and Ethereum, to protect users' privacy. An upgraded version of ring signatures is "linkable ring signatures."

Conclusion

Data privacy is not just a trend or the "shiny new toy" that everyone is trying to play around with. Several solutions for protecting users' data privacy existed long before now. What we're experiencing is an evolution - an upgrade of previously used techniques and of their applications in blockchain. In the second part of this article, we'll discuss other privacy-enhancing technologies (PETs) that are non-cryptographic.

References:
- Privacy-enhancing technologies - Wikipedia
- Privacy on the Blockchain | Ethereum Foundation Blog

  • CTRL+Hack+ZK: Scaffolding your dApp with ink!athon - Part 2: Exploring the Boilerplate, the ink!athon Stack

    In Part 1 of our journey into decentralized app development with ink!athon, we explored the foundational principles of decentralized apps at the CTRL+Hack+ZK hackathon. From the vibrant workshop to a quick review of decentralized front-end and back-end components, Dennis Zoma, Co-founder of Scio Labs, laid the groundwork for building robust full-stack dApps. Now, as we venture into Part 2, our focus shifts to a deeper exploration of the boilerplate - the ink!athon stack itself!

The Stack of ink!athon - An Overview

The ink!athon stack is encapsulated in a monolithic repository (Monorepo) structure. The key characteristic is that it houses multiple related projects or components (in this case, the front-end and the smart contracts) in a single repository, facilitating easier code sharing and centralized management. According to Dennis, either PNPM or Yarn may serve as the package manager, although he clearly stated that in his experience he prefers the former, as it works better in a Monorepo. PNPM (short for "performant npm") is a fast, disk-efficient package manager for JavaScript.

Package managers and Monorepos serve distinct purposes in software development, but they interact, especially when managing dependencies in larger projects. Think of a package manager as tending a single tree in a forest, where each tree represents an individual project with its own set of roots (dependencies). A Monorepo, on the other hand, is like an interconnected ecosystem where multiple plants (projects) share the same soil (repository) and benefit from common resources (shared code and dependencies). Dependencies are external libraries, modules, or packages that a project relies on to function properly. These external components are essential for providing reusable code, additional functionality, or specific tooling.
Dependencies are typically managed by a package manager and are specified in a project's configuration. For example, in a Rust project using Cargo as the package manager, dependencies are declared in the Cargo.toml file. When you execute cargo build, Cargo fetches and builds the specified dependencies, enabling your project to incorporate code from those external sources.

How package managers and Monorepos relate

Dependency Management: Package managers like npm (Node Package Manager) for JavaScript or Cargo for Rust focus on individual project dependency management. In contrast, a Monorepo allows multiple projects to coexist in a single repository, streamlining efficient dependency sharing across projects.

Code Sharing: Package managers facilitate sharing and distributing code or libraries within a broader community. A Monorepo, on the other hand, promotes code sharing among the projects within the repository, enabling dependencies to be shared between projects for improved code reuse and maintenance.

Centralized Configuration: A Monorepo lets shared configuration - such as linting rules, build settings, and CI pipelines - live in one place, whereas with separate repositories each project maintains its own configuration through its package manager.

Development Workflow: Package managers concentrate on the workflow of individual projects, while a Monorepo facilitates a collective development workflow, simplifying the coordination of changes across interconnected projects.

While package managers are primarily concerned with individual project dependency management, a Monorepo provides a structured and efficient approach to managing dependencies and code sharing across multiple projects within a unified repository. Additionally, having Dockerfiles and deployment configurations in a Monorepo can simplify the process of managing and deploying multiple services.
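As a concrete illustration of the Cargo.toml declaration mentioned above, a minimal manifest for an ink! contract crate might look like the following (the crate name and version pins are illustrative examples, not taken from the ink!athon boilerplate):

```toml
# Illustrative Cargo.toml excerpt for an ink! contract crate.
[package]
name = "my_contract"       # hypothetical crate name
version = "0.1.0"
edition = "2021"

[dependencies]
ink = { version = "4", default-features = false }

[features]
default = ["std"]
std = ["ink/std"]          # std build for tests; no_std for on-chain Wasm
```

Running `cargo build` against a manifest like this fetches the declared `ink` dependency and compiles the crate with it, exactly as described above.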
Key characteristics of a Monorepo

Centralized Management: Changes to any part of the system can be managed within the same repository, simplifying collaboration, version control, and code sharing.

Atomic Changes: Developers can make atomic changes across multiple projects simultaneously, ensuring that changes to one component do not break others. An atomic change ensures that a set of modifications to code or components is treated as a single, cohesive operation. This is beneficial for consistency and reduces the risk of introducing errors, since all related changes are applied together.

Shared Dependencies: Dependencies, such as libraries and tools, can be shared among projects, reducing duplication and making it more straightforward to maintain and update common components.

Simplified Build and CI/CD Processes: Building, testing, and deploying the entire system, or subsets of it, can be streamlined in a Monorepo, simplifying continuous integration and continuous deployment (CI/CD).

Ease of Refactoring: Refactoring becomes more straightforward, as all related code is in one place, allowing developers to make consistent changes across the entire codebase.

The Stack of ink!athon - The Contract Side

The Monorepo contains two folders: one for the front-end code and another for the contracts. The /contract directory contains the necessary components for ink! smart contract development:

Rust Language: The Rust source code files for your smart contracts live in the /contract directory.

Cargo Package Manager: Cargo, Rust's package manager, is used here to manage dependencies, build the project, and handle various other aspects of Rust project management.

ink!: The ink!
smart contract language (an embedded domain-specific language in Rust) is part of the setup in the /contract directory. It is specifically designed for writing smart contracts on Polkadot- and Substrate-based blockchain platforms. Having these components within the /contract directory allows for a focused and organized structure, making it easier to manage and develop ink! smart contracts within the overall Monorepo.

There are also convenient, efficient scripts for quickly setting up various parts of the Substrate contracts development environment:

Substrate Contracts: a shorthand script for swiftly initiating Substrate contracts.

Contracts UI: a shorthand script for launching the user interface (UI) for the contracts.

Polkadot.js/apps: a shorthand script for spinning up Polkadot.js/apps, a tool for interacting with Substrate-based blockchains.

The /contract directory also ships with pre-deployed sample contracts that are already connected to the front end, although there is also the option to build and deploy them locally. This flexibility means developers can either use the existing deployed version or customize and deploy their own instances of the smart contracts. Overall, the provided shorthand scripts and pre-configurations streamline the process of setting up, deploying, and interacting with Substrate contracts and the associated components.

The Stack of ink!athon - The Front-end Side

On the front-end side, there's a Next.js application, which is essentially a React application. The notable aspect is that it doesn't extensively utilize Next.js API routes; instead, it is compatible with both the traditional pages directory structure and the newer app directory structure introduced in Next.js.

Pages Directory: The pages directory in the front-end serves as a building block for distinct parts of the smart contract interaction.
Each file within the pages directory corresponds to a specific route or functionality in the dApp, and these routes typically involve interactions with the underlying smart contract. For example, you might have pages representing different aspects of your dApp, such as viewing contract details, interacting with specific smart contract functions (updating a user's balance, transferring tokens, or modifying contract settings), or displaying information related to blockchain transactions. The pages directory thus helps organize the front-end codebase and aligns it with the different smart-contract functionalities of the dApp.

App Directory Structure: The app directory is the newer way of organizing application code introduced in recent versions of Next.js. Instead of having all the application code under pages, it is organized into an app directory. This makes the code more modular and easier to manage, allowing for better scalability as the project grows.

The dApp boilerplate offers a foundation with basic styling and pre-built components for the user interface, allowing developers flexibility in choosing their preferred styling approach, whether conventional CSS, utility-first frameworks like Tailwind CSS, styled-components, or any UI framework. According to Dennis Zoma, the Co-founder of AZERO.ID, the intention is to remain as unopinionated as possible, empowering developers to make styling decisions based on personal preference. This adaptability is particularly valuable in a hackathon scenario, where developers may opt for different styling tools to efficiently prototype and build their decentralized applications. Additionally, in ink!athon, Polkadot.js serves as a foundational layer for front-end functionality: it is utilized as a base layer to interact with Polkadot- and Substrate-based chains.
On top of this, the dApp leverages the ink!athon hook library, which acts as an abstraction layer that simplifies and streamlines interaction with smart contracts. The hook library provides essential functionality, such as managing contract instances and making those instances accessible across the entire project. Essentially, it abstracts away complexity, making it easier for developers to work with smart contracts, access their instances, and integrate blockchain functionality seamlessly throughout the project. This abstraction contributes to a more efficient and organized development process for decentralized applications.

Conclusion

ink!athon is a game-changer because it significantly reduces the amount of code you need to write. The difference between a lengthy function using raw Polkadot.js and a concise one using ink!athon is like turning pages of code into just a few lines. This efficiency isn't a one-time occurrence; it repeats across the entire project. If you have multiple components interacting with the blockchain, handling queries, and executing transactions, ink!athon pays off remarkably. With ink!athon, you import your contracts, metadata, and addresses just once, making them accessible everywhere through a hook. It provides shortcuts for many common tasks and offers constants for the various supported chains, streamlining the development process. Furthermore, it is battle-tested in production by AZERO.ID and hundreds of GitHub dependents.
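The "import your contracts once, access them everywhere" idea can be sketched outside of React with a plain registry. All names below are invented for illustration — the real ink!athon hook library wraps this pattern in React context and hooks rather than a bare map:

```typescript
// Hypothetical sketch of the "register contracts once, access them anywhere"
// pattern that a hook library provides. Names are invented for illustration.
type Deployment = { name: string; address: string; metadata: object };

const deployments = new Map<string, Deployment>();

// Called once at app start-up with each contract's name, address, and metadata.
function registerDeployment(d: Deployment): void {
  deployments.set(d.name, d);
}

// Stand-in for a hook-style accessor: any component can look a contract up
// by name instead of re-importing its address and metadata everywhere.
function getDeployment(name: string): Deployment {
  const d = deployments.get(name);
  if (d === undefined) throw new Error(`no deployment registered for "${name}"`);
  return d;
}

registerDeployment({ name: "greeter", address: "5Greeter...", metadata: {} });
const greeter = getDeployment("greeter");
console.log(greeter.address); // the same address, available project-wide
```

In a React app the lookup would additionally carry a live contract instance bound to the connected chain, which is where the real code savings come from.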

  • CTRL+Hack+ZK: Scaffolding your dApp with ink!athon - Part 1

In January 2024, Dennis Zoma (co-founder of Scio Labs) and the developers behind AZERO.ID, the official DNS (Domain Name Service) on Aleph Zero written in ink! smart contracts, spoke about building full-stack dApps using a dev tooling kit called 'ink!athon' at the CTRL+Hack+ZK hackathon organized by Aleph Zero.

Part 1: Building The Foundation

Why Frontend Matters

The event was a lively one: participants threw themselves into the coding workshop and left with eager anticipation for future gatherings. According to Dennis Zoma, a decentralized app, or dApp, operates on a blockchain network using both decentralized front-end and back-end components. The front end is the user interface (UI), while the back end involves the application's logic and data storage. Decentralization aims to distribute control and eliminate single points of failure, often achieved through smart contracts on the blockchain. How do the front end and back end contribute to decentralization? Here is how:

Front-end:
- Connecting the browser to the blockchain: establishing a direct connection between the user's browser and the blockchain decentralizes the interaction, reducing reliance on centralized servers.
- Interacting with the blockchain using a wallet extension: with a wallet extension, a user can interact with the blockchain directly from their browser, decentralizing the management of cryptocurrency wallets and transactions.
- Retrieving data from IPFS (InterPlanetary File System): retrieving data from IPFS instead of traditional centralized servers like AWS embraces a decentralized approach to storage.
IPFS operates seamlessly with the distributed nature of blockchains, allowing efficient data retrieval across multiple nodes. The censorship-resistant nature of IPFS contributes to the security of decentralized applications, while its content-addressing mechanism ensures data is immutable, which is crucial for the integrity of smart contracts.
- Using light clients: light clients allow a direct connection to the blockchain without relying on RPCs (Remote Procedure Calls) or centralized servers, decentralizing access to blockchain data.

These front-end methods contribute to decentralization by reducing reliance on central servers and promoting direct interaction with blockchain networks and decentralized storage solutions like IPFS.

Back-end:
- No reliance on traditional databases: departing from popular databases like Postgres or SQL eliminates the centralization inherent in traditional Web 2.0 setups. Traditional databases are typically centralized and at risk of single points of failure.
- Storage on the blockchain: unlike a centralized database, storing data on the blockchain itself decentralizes the storage infrastructure. In a blockchain, data is distributed across the nodes of a peer-to-peer network, reducing the risk of manipulation or data loss through centralized control.
- Managed through smart contracts or pallets: using smart contracts or pallets (in Substrate-based blockchains like Polkadot) to manage data introduces decentralized execution logic. Smart contracts are self-executing and run on a distributed network of nodes, ensuring that the rules governing the data are not controlled by a single entity.
- Indexed by an indexer network: a decentralized indexer network ensures that data retrieval and querying are not reliant on a centralized service, enhancing the overall decentralization of the system.
Indexing data on a network of indexers therefore further distributes the responsibility of maintaining and accessing data. By departing from traditional databases, storing data on the blockchain through smart contracts or pallets, and leveraging decentralized indexing networks, the back end becomes more resilient, more resistant to censorship, and less dependent on centralized infrastructure. Dennis Zoma (the co-founder of Scio Labs and AZERO.ID), acknowledging the broader scope of blockchain and crypto applications, emphasized the importance of not focusing solely on smart contracts and backend development but also prioritizing the front end. Recognizing that widespread adoption requires significantly better user experience (UX), he advocated for interfaces that are not only intuitive but aim for a "10x better UX" to onboard the masses, highlighting the inadequacy of hacky command-line interfaces and stressing the need for user-friendly applications. He likewise called for a "10x better DX" (developer experience) to onboard Substrate frontend developers. The commitment to addressing these challenges is evident in the ink!athon project, which aims to deliver superior UX and a smoother onboarding process for both end-users and front-end developers. "An unpopular opinion of mine is that crafting a good design and delving into frontend work demands a considerable investment of time compared to contract work. It's not that contracts are a breeze – far from it. Yet, when you factor in the entire process, from conceptualizing initial sketches to shaping the first UI frames, executing the frontend, engaging in feedback loops, and fine-tuning it based on client input, it becomes evident that the complexity extends beyond a mere contract audit" - Dennis Zoma

Introducing the ink!
dApp Tooling Kit

During the CTRL+Hack+ZK hackathon, Dennis Zoma and the developers at AZERO.ID unveiled their full-stack dApp boilerplate, ink!athon, giving the audience a peek under the hood to assist their development journey on Substrate systems, Aleph Zero, and ink!, a Rust-based smart contract programming language. There are four elements to ink!athon.

Contract level
- ink!: described as a Rust eDSL (embedded domain-specific language), ink! is a framework specifically crafted for writing efficient smart contracts on a blockchain. It provides abstractions and functionality tailored to the requirements of blockchain development.
- Cargo Contract: Cargo is the package manager for Rust, and cargo-contract is the associated command-line tool for managing and building smart contracts written in ink!. It defines a workflow for compiling and handling contracts within the Rust ecosystem.
- CLI (Command-Line Interface): the command-line interface lets developers interact with and control the deployment of smart contracts. Using the CLI, developers can execute various commands to deploy, manage, and test their smart contracts on the blockchain.
- PSP22 & PSP34 by Cardinal Cryptography: PSP22 and PSP34 serve as standards for fungible and non-fungible tokens on Aleph Zero, offering equivalents to ERC20 and ERC721 in the Ethereum ecosystem. These standards provide a structured framework for developers to deploy their tokens on the Aleph Zero blockchain. For easy deployment, the recently announced Cardinal Cryptography repositories, especially the readme files, offer comprehensive documentation with examples and guidelines, simplifying the process of creating and using fungible or non-fungible tokens within the Aleph Zero blockchain ecosystem.
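PSP22 itself is defined for ink!/Rust contracts, but the interface shape it standardizes (mirroring ERC20: total supply, balance lookup, transfer) can be sketched as a toy in-memory ledger in TypeScript. This is an illustration of the interface shape only, not the actual standard implementation:

```typescript
// Toy in-memory fungible-token ledger mirroring the PSP22/ERC20 interface
// shape (total_supply, balance_of, transfer). Illustration only — the real
// standard is implemented as an ink! smart contract in Rust.
class ToyPSP22 {
  private balances = new Map<string, bigint>();
  constructor(private readonly supply: bigint, deployer: string) {
    this.balances.set(deployer, supply); // mint the entire supply to the deployer
  }
  totalSupply(): bigint { return this.supply; }
  balanceOf(account: string): bigint { return this.balances.get(account) ?? 0n; }
  transfer(from: string, to: string, value: bigint): boolean {
    const fromBal = this.balanceOf(from);
    if (fromBal < value) return false; // insufficient balance: reject
    this.balances.set(from, fromBal - value);
    this.balances.set(to, this.balanceOf(to) + value);
    return true;
  }
}

const token = new ToyPSP22(1_000_000n, "alice");
token.transfer("alice", "bob", 250n);
console.log(token.balanceOf("bob")); // 250n
```

In the real standard the caller is the implicit `from` and failed transfers return typed errors rather than `false`, but the surface area is this small, which is what makes the standards easy to adopt.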
Together, the combination of ink!, cargo-contract, PSP22 & PSP34, and the CLI gives ink!athon a robust toolset for developing, compiling, and deploying Rust smart contracts on Substrate-based blockchains.

Node and development

On the node and deployment side, local development is facilitated by the option to deploy to the Aleph Zero testnet, where a faucet allows claiming Testnet Aleph Zero (TZERO) tokens for use. However, for efficient development cycles and quick iteration, the preference is to spawn a local substrate-contracts-node. This node is a streamlined, lightweight version of Substrate focused solely on smart contract functionality. This approach speeds up the development process, enabling rapid iteration and testing in a local environment before deploying to the broader Aleph Zero network. In addition to the local substrate-contracts-node, developers can also use Swanky, a more feature-complete tool, for deployment and contract interaction during development.

Deployment & interaction

For seamless contract testing and interaction in the terminal, Aleph Zero provides Drink!, a Rust library for testing ink! smart contracts without spinning up a node — a tool well suited to experimentation. cargo-contract is another valuable tool for deployment, while Contracts UI caters to those preferring a GUI: by uploading the contract metadata into the browser, Contracts UI produces an auto-generated interface for calling each function or running queries against the contract. This diverse set of tools caters to various preferences and workflows throughout the development lifecycle.

Front-end level

On the frontend level, the foundation is the Polkadot.js API, a powerful library for interacting with the Polkadot network.
However, its setup can be intricate and time-consuming, especially in the context of a hackathon. To address this challenge, an abstraction layer called ink!athon has been developed specifically for React frontends. This layer, built on top of the Polkadot.js API, streamlines the development process by providing a simplified interface. ink!athon acts as a bridge, enabling React developers to integrate Polkadot functionality into their applications without the complexities of using the Polkadot.js API directly. This approach not only improves accessibility but also enables efficient development in time-sensitive environments like hackathons. This concludes Part 1 of our exploration into decentralized app development with ink!athon. Stay tuned for Part 2, where we delve deeper into the boilerplate and the ink!athon stack itself.

Conclusion

CTRL+Hack+ZK, the inaugural hackathon organized by Aleph Zero and supported by major partners, is set to be a transformative event for developers and Web3 enthusiasts. Focused on Aleph Zero, a high-performance blockchain platform with cutting-edge privacy features, the event takes participants on a three-week journey featuring workshops, educational programming, and hands-on mentorship from the core development team and ecosystem partners. This hackathon provides a unique opportunity to connect with fellow developers, founders, and Web3 enthusiasts, fostering collaboration and sparking high-impact ideas. Participants will benefit from real-life mentorship, engaging workshops, and valuable resources, delving deep into use cases and promising integrations. The event also includes intense hacking sessions guided by Aleph Zero's experts, allowing developers to discover and build on the platform. As a highlight, participants will have the chance to present their ideas to prominent enterprises and venture capital firms keen on identifying the most promising projects in the blockchain space.
In collaboration with key partners such as Cardinal, Deutsche Telekom, buidl.so, WWVentures, arca, heartcore, Blockchain founders capital, stc, and generative vendors, CTRL+Hack+ZK promises to be a dynamic and impactful event at the forefront of blockchain innovation.

  • Basic Knowledge of Zero Knowledge Proofs: Voyaging Through the Concept of ZKPs

“Prover.” “Verifier.” “Without revealing.” “Data.” “Authenticity.” “Transactions.” Bet you see these terms scattered all over the internet in your search for “what is zero-knowledge tech?” to the point you get déjà vu over and over again. It is what it is. Zero-knowledge is what they say it is: a technique that allows one party (the prover) to cryptographically prove the truth of some data (say, a transaction) without revealing the content of that data, while another party (the verifier) verifies the authenticity of the data. “But can I get an easier explanation?” “Explain it to me like I’m sixteen.” Alright, here we go.

Zero Knowledge Like You’re Sixteen

Say I go to the shopping mall to shop for items. Excitedly, I waltz into the mall, card in my back pocket, ready to spend some money. Wheeling the shopping cart, I begin selecting items to purchase. Having filled my cart with the necessary items and goodies (of course), I head over to the checkout counter and wait my turn. Finally, my turn. I take out all the items I shopped for and place them on the counter. After the items are scanned and the prices summed up, I take out the card and hand it to the attendant. She does the swiping thing and gives me two receipts along with my card. The first receipt contains just the amount debited from the card. The second receipt contains a proper list of all the items I purchased along with their prices and, finally, the total amount debited. Suppose 10 minutes later a friend asks to borrow some money from me, all of which I had just spent a few minutes ago. I tell her that I’m out of money and have just spent the last of it. To prove to my friend that I’m not lying, I show her just the first receipt, which contains the amount I’ve just spent, but NOT the second receipt, which contains the details of the items I purchased with that amount.
In this case:
- Me ↔️ the prover, because I’m trying to prove a claim
- My friend ↔️ the verifier
- The data ↔️ the transaction made in the mall
- The proof ↔️ the first receipt
- The fact/truth of the data ↔️ the fact that I had just spent all my money
- The content/details of the data ↔️ the items I purchased (shown in the second receipt)
- Verifying the authenticity ↔️ my friend checking the date and time of the debit on the receipt

This story is, no doubt, lacking in its entirety, but it gives a basic idea of what zero-knowledge tech and proofs are. How so? My friend has zero knowledge of what I spent my money on, and I didn’t have to show her the details either, but I’ve proven (or so I hope) that I did indeed spend my money on something. For a more adequate explanation, you can check out the analogy used in the well-known story of the Ali Baba cave, published by Jean-Jacques Quisquater and others in 1990 in a paper titled “How to Explain Zero-Knowledge Protocols to Your Children.”

What is Zero-Knowledge Tech?

It is a privacy mechanism and a cryptographic technique for verifying transactions between two parties, whereby one party proves the authenticity of some data without disclosing its details, while the other party verifies that authenticity. The prover provides the verifier with nothing more than a cryptographic proof of the authenticity of the data, making sure not to reveal its content. The proof provided is what we call a “zero-knowledge proof,” or “ZKP” for short.

What are Zero-Knowledge Proofs (ZKPs)?

Source: Chainslab

Zero-knowledge proofs are cryptographic primitives generated by the prover and sent to the verifier to prove the correctness of a statement. The verifier checks the accuracy of these proofs and verifies them. Verifiers validate claims in different ways, which can include challenging the prover to perform a task that shows he truly knows the content of the statement, as he claims.
Zero-knowledge protocols depend on algorithms that take some data as input and output "true" or "false." This makes it possible for claims to be attested even without the full information present.

Criteria that a Zero-Knowledge Protocol Must Satisfy

A zero-knowledge protocol is a set of rules that the prover and verifier must adhere to during communication. The protocol is in place to ensure the accuracy of a statement without any private information being shared between the two parties. In essence, the protocol asks a prover to prove they have the right data even without physically presenting it to the verifier. A zero-knowledge protocol must satisfy three criteria to perform this role:
- Completeness: if the statement or claim made by the prover is true, the verifier will accept and verify the proof presented, assuming both the prover and the verifier are honest and adhere to the rules of the protocol.
- Soundness: if the statement or claim made by the prover is false (a dishonest prover), the verifier cannot be made to accept the proof provided. The zero-knowledge protocol cannot be tricked, so the prover cannot fool the verifier into believing or accepting an invalid claim.
- Zero-knowledge: beyond whether the statement is true or false, the verifier learns nothing about the content (the secret) of the claim. In other words, the verifier has "zero knowledge" of the statement in question.

General Applications of Zero-Knowledge Tech

Over the years, zero-knowledge tech has found usage in various sectors, even beyond blockchain.
Below are some of the industries in which zero-knowledge tech can be applied:
- Identity verification and data privacy
- Finance
- E-voting
- Education
- Machine learning
- Supply chain (you can learn more about the roles of zero-knowledge proofs in this article)
- Compliance
- Healthcare
- Cybersecurity
and many other industries.

Different Types of Zero-Knowledge Proofs

Different types of zero-knowledge proofs vary in the extent of communication they require, among other factors. Let's take an extensive look at two popular types of ZKPs: interactive and non-interactive ZKPs.

Interactive Zero-Knowledge Proofs (IZKPs)

As the name implies, this type of ZKP involves a series of interactions between the parties involved (the prover and the verifier). It is the first kind of zero-knowledge proof that was utilized. To ascertain that the prover isn't "bluffing" and that he knows the content of the claim he made, the verifier can challenge the prover with a task. The prover, in return, solves the challenge and sends it back to the verifier. The verifier, if not yet convinced, sets another challenge, which the prover answers. Thus, back-and-forth communication takes place. Throughout all of this, the prover is careful not to share the secret (content) of the statement.

Source: "Towards Data Science" Medium publication

Three elements make up IZKPs: the witness, the challenge, and the response.
- Witness: the secret or details of the statement that only the prover knows, but cannot share with the verifier, is termed the "witness." In practice, the prover is responsible for starting the proving process. He does this by picking a question he feels proves his knowledge of the witness, solving it, and sending it to the verifier along with the proof.
- Challenge: the verifier, if not satisfied, sets his own challenge and throws it to the prover.
- Response: the prover solves the challenge and relays the answer to the verifier.
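The witness–challenge–response loop can be made concrete with a toy Schnorr-style protocol, where the witness is a discrete logarithm. The parameters below are tiny and completely insecure — they exist purely to make the three elements visible in code:

```typescript
// Toy interactive proof of knowledge of a discrete log (Schnorr-style).
// Witness: x such that y = g^x mod p. Parameters are far too small to be
// secure — illustration only.
const p = 467n, g = 2n;
const x = 153n;                 // the prover's secret witness
const modPow = (b: bigint, e: bigint, m: bigint): bigint => {
  let r = 1n; b %= m;
  for (; e > 0n; e >>= 1n, b = (b * b) % m) if (e & 1n) r = (r * b) % m;
  return r;
};
const y = modPow(g, x, p);      // public statement: "I know log_g(y)"

// One round: prover commits, verifier challenges, prover responds.
const commit = (r: bigint) => modPow(g, r, p);                    // t = g^r
const respond = (r: bigint, c: bigint) => (r + c * x) % (p - 1n); // s = r + c·x
const verify = (t: bigint, c: bigint, s: bigint) =>
  modPow(g, s, p) === (t * modPow(y, c, p)) % p;                  // g^s ?= t·y^c

// Several rounds of back-and-forth; an honest prover passes every one,
// and the verifier never learns x itself.
let accepted = true;
for (const [r, c] of [[99n, 5n], [42n, 7n], [311n, 3n]] as [bigint, bigint][]) {
  accepted = accepted && verify(commit(r), c, respond(r, c));
}
console.log(accepted); // true
```

A cheating prover who does not know `x` cannot compute a valid `s` for a challenge he did not anticipate, which is exactly why the verifier repeats the round until his doubt is negligible.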
The process goes on until the verifier is certain, beyond reasonable doubt, that the prover knows the secret (the witness) as he claims to. The prover engages with each challenge but never discloses the witness.

ZK Set Membership (ZKSM)

Zero-Knowledge Set Membership (ZKSM) was released in 2018 by ING, a Dutch banking and financial group, and was launched mainly for use in the banking sector. ZKSM is a variation of proof used to show that a data value (which could be alphanumeric) is a member of a determined set, hence the term "set membership." This allows secret data to be validated as belonging to a dataset without revealing the data in question. ZKSM is, in fact, natively interactive but can be made non-interactive using the Fiat-Shamir heuristic, because it is preferable to avoid back-and-forth communication, especially in blockchain and DLT applications generally.

Zero-Knowledge Range Proofs (ZKRPs)

This is a subclass of ZKSM used to prove that a number or integer lies within a specified range or dataset of integers. This type of proof can only be used for numerical datasets. For instance, an employee looking to apply for a loan can prove to the bank that her salary is above $45,000 per annum without revealing the exact value. Both ZKSM and ZKRPs are termed specific ZKPs because they are used to prove specific data types.

Limitations of Interactive ZKPs
- The constant communication requires the prover and verifier to be online at the same time, which can be difficult to achieve.
- The back-and-forth communication makes the proving process slow and thus not very scalable.

Non-Interactive Zero-Knowledge Proofs (NIZKPs)

Unlike IZKPs, non-interactive zero-knowledge proofs don't require constant communication between the prover and verifier.
The prover performs the computation and sends the output to the verifier, who can verify it in a single step and be convinced that the prover indeed knows the witness, without any back-and-forth interaction. There is no challenge and response in non-interactive zero-knowledge proofs. With NIZKPs, there can be many verifiers, and any one of them can check the output. This makes them suitable for open-source blockchain infrastructures where many parties act as verifiers. ZK-SNARKs are classified under non-interactive zero-knowledge proofs.

ZK-SNARK

ZK-SNARK stands for "Zero-Knowledge Succinct Non-interactive ARgument of Knowledge." This refers to a kind of ZKP where a prover can prove knowledge of the secret (the witness) to the verifier without revealing it, and without the need for a series of exchanges between the parties involved.
- ZK stands for Zero-Knowledge.
- S stands for Succinct, referring to the fact that the proof is small in size (even smaller than the secret) and doesn't take much computation or time to verify.
- N stands for Non-interactive, implying the absence of any series of actual exchanges between the two parties (prover and verifier).
- ARK stands for ARgument of Knowledge, implying that a fake prover can hardly cheat the system, because the proof constitutes an argument for his supposed knowledge of the secret. The argument here refers to the computation the prover sends along with the proof, which is typically sufficient to establish knowledge of the information without extra communication. However, there is a tiny chance that a bad actor with unlimited computational power could fake knowledge of a claim and thus provide a malicious proof.
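The core non-interactive trick behind such arguments is the Fiat-Shamir heuristic mentioned earlier: replace the verifier's random challenge with a hash of the public transcript, so the prover can build the whole proof alone. It can be sketched with the same toy discrete-log statement (insecure parameters, illustration only):

```typescript
// Toy non-interactive proof via the Fiat-Shamir heuristic: the challenge c
// is derived by hashing the public values and the commitment, so no verifier
// interaction is needed. Insecure toy parameters — illustration only.
import { createHash } from "node:crypto";

const p = 467n, g = 2n, x = 153n; // x is the secret witness
const modPow = (b: bigint, e: bigint, m: bigint): bigint => {
  let r = 1n; b %= m;
  for (; e > 0n; e >>= 1n, b = (b * b) % m) if (e & 1n) r = (r * b) % m;
  return r;
};
const y = modPow(g, x, p); // public statement: y = g^x mod p

// Challenge = hash(g, y, t) reduced mod (p - 1): the "random" verifier
// question is now computed deterministically from the transcript.
const challenge = (t: bigint): bigint => {
  const h = createHash("sha256").update(`${g}|${y}|${t}`).digest("hex");
  return BigInt("0x" + h) % (p - 1n);
};

// The prover builds the whole proof alone and publishes (t, s).
const prove = (r: bigint): { t: bigint; s: bigint } => {
  const t = modPow(g, r, p);
  return { t, s: (r + challenge(t) * x) % (p - 1n) };
};

// Any verifier, at any later time, recomputes c and checks g^s == t·y^c.
const verify = ({ t, s }: { t: bigint; s: bigint }): boolean =>
  modPow(g, s, p) === (t * modPow(y, challenge(t), p)) % p;

const proof = prove(123n);
console.log(verify(proof)); // true, with no messages sent back to the prover
```

Because verification needs only the published proof and the public statement, any number of independent verifiers can check it — which is precisely the property open blockchains rely on.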
Zk-SNARKs, first widely applied by Zcash, are employed by blockchains that use shielded transactions and private smart contracts, because they allow those transactions to be validated without revealing the addresses or other shielded information, which would otherwise defeat the whole idea of shielding. For instance, Aleph Zero's DEX, Common, implements shielded pools to enhance privacy, and this feature is protected because Aleph Zero, as a blockchain, uses private smart contracts, making zk-SNARKs paramount. ZK-SNARKs are mostly utilized by blockchains that aim to solve the lack of privacy on blockchains.

ZK-STARK

ZK-STARK stands for "Zero-Knowledge Scalable Transparent ARgument of Knowledge." As with zk-SNARKs, zk-STARKs are zero-knowledge proofs in which a prover proves knowledge of a secret without disclosing it to the other party. Unlike zk-SNARKs, the proof size is much bigger.
- ZK stands for Zero-Knowledge.
- S stands for Scalable, describing its ability to increase blockchain scalability. With zk-STARKs, transactions can be computed and verified faster off the main chain and sent back to be added to the blocks of the main chain. This is especially useful for blockchains that process a small number of transactions per second.
- T stands for Transparent, implying suitability for public (open) blockchains and eliminating the need for a trusted setup.
- ARK stands for ARgument of Knowledge, suggesting that it is impossible to generate a zero-knowledge proof without access to the witness, or hidden information.

ZK-STARKs were first created by Eli Ben-Sasson, a professor and co-founder of StarkWare, a company that uses ZKPs to tackle two major problems blockchains face: scalability and privacy. ZK-STARKs are mostly utilized by blockchains that aim to solve blockchain scalability rather than privacy.
Bulletproofs

Bulletproofs are non-interactive zero-knowledge proofs that don't require a trusted setup and can be used to convince a verifier that an encrypted value lies within a given range without decrypting it or revealing any other information about the value. Bulletproofs are far smaller than zk-STARK proofs, though generally larger than zk-SNARKs, and they are more difficult and time-consuming to verify than zk-SNARKs. Bulletproofs, zk-SNARKs, and zk-STARKs are termed generic zero-knowledge proofs because they can be applied in various ways and to general data types. Many publications classify Multi-Party Computation (MPC) as a type of zero-knowledge proof. However, it is better described as a more advanced form of proving, because it involves more than two parties jointly solving a computational problem without any of them revealing their secrets to one another.

References
- Zero-knowledge proofs | ethereum.org
- Zero-Knowledge Set Membership whitepaper (ingwb.com)
- zkrangeproof README · thebalaa/zkrangeproof · GitHub
- Bulletproofs | Stanford Applied Crypto Group

