# Open Finance Revisited: Strengthening Data Governance with Cryptographic Privacy and Auditability

**By [Silence Laboratories](https://www.silencelaboratories.com/) and Partners**

> **Contributors:** Yashvanth Kondi<sup> 1 </sup>, Kush Kanwar<sup> 1 </sup>, Siddharth Shetty<sup> 2 </sup>, Anurag Arjun<sup> 1,3 </sup>, and Jay Prakash <sup> 1 </sup>
> [<sup> 1 </sup>:Silence Laboratories; <sup> 2 </sup>:Sahamati; <sup> 3 </sup>:Avail]

![cover _ 3](https://hackmd.io/_uploads/HJE-V8Lxyg.jpg)

:::info
:bulb: This document proposes design adaptations to enhance data privacy in Open Banking ecosystems using PETs, drawing from research initiatives by Silence Laboratories, collaborators at Sahamati, and other ecosystem contributors. Although the proposed privacy-preserving compute and economic models are application-agnostic within Open Banking, the Account Aggregator framework is showcased as an example, given its rapid adoption and the potential therein for improved data governance. Furthermore, we illustrate how the model can improve governance, enhance transparency, promote fair economics, and stimulate an influx of new business cases and revenue streams.
:::

## 1. Abstract

Open Banking is gaining significant traction globally, achieving key objectives such as increasing competition, promoting innovation, and enabling the creation of new services. At its core is the portability of customer data between financial institutions, which naturally extends into Open Finance and broader non-financial applications under Open Data. With data increasingly becoming a valuable asset, adoption and growth will be driven by trust in these ecosystems, making proactive discussions on privacy crucial in the design of such frameworks. This whitepaper evaluates the current challenges in Open Finance and the data economy, and proposes a revision of data governance toward consented data collaboration that extracts maximum social value via Multi-Party Computation (MPC), a cryptographic tool that provides privacy guarantees, facilitates secure collaboration, and empowers customers with true control over their data.

## 2. Open Finance: Background

- #### Power of Open Finance
Open Banking represents a paradigm shift in the financial services industry, enabling secure collaboration between financial institutions over customer data via APIs. Emerging as a response to the market-driven need for greater transparency and control over data, and to encourage healthy competition, Open Banking allows customer-consented sharing of banking data with non-banking financial institutions to promote affordable, convenient, quick and easily accessible services.
According to the McKinsey Global Institute<sup>[1]</sup>, the boost to the economy from broad adoption of open-data ecosystems could range from about 1 to 1.5 percent of GDP in 2030 in the European Union, the United Kingdom, and the United States, to as much as 4 to 5 percent in India.

- #### What Open Finance promises
For consumers, Open Banking democratises access to structured insights from their financial data, unlocks efficient, tailored and innovative services [Figure 1], improves financial inclusion, especially for underbanked populations, and assures secure frameworks for handling their sensitive financial information. For businesses, it promotes competition between banking and non-banking financial institutions and improves the quality of financial services through data-driven decision making.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/HyXPJKc1yx.png" alt="Image 6" width="440"/> <figcaption>Figure 1: Key services unlocked via Open Banking/Finance </figcaption> </figure> </div>

- #### Open Finance is working
Driven by the numerous benefits it offers, the Open Banking market is experiencing rapid growth, with a value of approximately $23.5 billion<sup>[2]</sup> in 2023, expected to rise at a CAGR of 22% over the next 8 years [Figure 2]. Also contributing to this growth are factors such as the rapid digitization of financial information, increasing mobile and internet penetration, regulatory initiatives, and ongoing technological advancements.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/rk-C0ucJyx.png" alt="Image 2" width="640"/> <figcaption>Figure 2: Open Banking Global Market Size</figcaption> </figure> </div>

Pioneered by the revised Payment Services Directive (PSD2) framework in Europe, Open Banking is currently being pursued by more than 35% of countries worldwide<sup>[3]</sup>. This includes the Account Aggregator framework in India, the Consumer Data Right (CDR) in Australia, and similar frameworks in the US, Singapore and Brazil. Some of the geographies implementing Open Banking are listed and compared in Figure 3.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/rJa7T9cy1l.png" alt="Image 4" width="640"/> <figcaption>Figure 3: Open Banking volumes and growth across key geographies<sup>[4]</sup> </figcaption> </figure> </div>

While Open Banking initiatives are regulatory-driven in a majority of countries, certain regions, especially in Asia, lack formal regulation, as shown in Figure 4.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/B1ylTc91ke.png" alt="Image 3" width="640"/> <figcaption>Figure 4: Adoption of Open Banking across the globe<sup>[5]</sup></figcaption> </figure> </div>

- #### UK's Open Banking journey
The UK has experienced a remarkable Open Banking journey, with approximately 14 billion API calls in 2023 and a year-on-year growth rate of 25%<sup>[4]</sup>. Estimates suggest that the UK's Open Banking penetration is significantly higher than in other major European markets, including France, Spain, Italy, and Germany. By breaking data silos and providing clear, accurate and immediate financial insights, Open Banking has greatly impacted both customers and businesses, as seen in Figure 5. This success can be attributed to regulatory support, seamless and standardised API infrastructure, higher awareness and continuous innovation.
![Image 14](https://hackmd.io/_uploads/SkmW4mT11x.png)
<div style="text-align: center;"> <figure> <figcaption>Figure 5: Impact of Open Banking in UK<sup>[6][7]</sup> </figcaption> </figure> </div>

With this success, the UK market plans expansion to Smart Data, covering sectors such as energy, finance, telecom, retail and more. The Data Protection and Digital Information (DPDI) Bill in the UK also sets out enabling legislation for Smart Data<sup>[8]</sup>, which will facilitate private sector data sharing across the UK economy.

- #### What lies ahead
Building on the success of Open Banking across the globe, there has been a transition towards Open Finance and Open Data, the former encompassing non-banking financial data such as investments and insurance, and the latter extending the framework to non-financial data such as healthcare, utilities and more. This wider, more comprehensive coverage of data would give a more holistic view of the customer, allowing hyper-personalised services spanning multiple sectors. Overall, Open Finance can help achieve several macroeconomic and societal objectives, such as empowering small businesses, improving the financial well-being of individuals, driving innovation and economic growth, advancing financial inclusion, and ultimately strengthening digital trust and global competitiveness.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/SJHpRO5kJx.png" alt="Image 1" width="640"/> <figcaption>Figure 6: Evolution of Open Banking to Open Finance and Open Data</figcaption> </figure> </div>

## 3. Open Finance: Key Enablers

<div style="text-align: left;"> <figure> <img src="https://hackmd.io/_uploads/r1Mf7-iyJx.png" alt="Icon 1" width="40"/> <figcaption> </figcaption> </figure> </div>

- **Data Portability and Interoperability**: Achieved through standardized data formats and APIs, enabling secure, consistent data exchange across financial platforms. These APIs, developed with common standards, ensure compatibility, security, and ease of integration between financial institutions and third-party providers, driving a unified user experience.

<div style="text-align: left;"> <figure> <img src="https://hackmd.io/_uploads/r1Cu7Ws1kg.png" alt="Icon 2" width="40"/> <figcaption> </figcaption> </figure> </div>

- **Consent Management**: Consumer control over data is crucial to Open Finance, ensuring transparency and trust by allowing individuals to decide who can access their data, for how long, and for what purpose.

<div style="text-align: left;"> <figure> <img src="https://hackmd.io/_uploads/By39X-jy1l.png" alt="Icon 3" width="40"/> <figcaption> </figcaption> </figure> </div>

- **Data Regulations**: Robust data protection, security standards, and adaptable privacy laws are essential to balance innovation with consumer protection in evolving data-sharing models. A harmonized regulatory environment fosters innovation, enabling secure cross-border data sharing and expanding Open Finance's global reach.

## 4. Open Finance: Opportunities for Advancing Governance and Transparency

In any Open Banking framework, multiple stakeholders interact for the seamless flow of data, including data custodians, data owners, financial service providers and technology service providers. In the process of achieving the goal of creating a competitive, innovative and inclusive ecosystem, each party faces distinct challenges, as summarised in Figure 7. Understanding these pain points is crucial when redesigning for robustness and privacy.
<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/Hk_Y3YQbyx.png" alt="Image 7" width="640"/> <figcaption>Figure 7: Opportunities for various stakeholders in Open Banking ecoystem </figcaption> </figure> </div> :::info 1. #### Data custodians: Incentivisation to Incease Participation and Technology Adoption to Enhance Trust From a data custodian’s perspective, building the technical infrastructure to support Open Banking APIs brings both exciting opportunities and unique challenges. Establishing a robust system to standardize data into specified formats and handle a high volume of daily requests is a demanding yet rewarding endeavor. Furthermore, they need to ensure compliance with regulations, which could be difficult in case there is deviation from consent terms or duplication of data across multiple parties. For a custodian to actively participate and innovate in such an ecosystem, they must be fairly compensated and incentivised, at least enough to recover their costs. Currently, data availability, throughput and performance is hindered due to the absence of such compensation structures for data providers, offering them little incentive to contribute meaningfully to this ecosystem. The economic value of open finance ecosystems can truly be tapped by harnessing the economies of scale through network effects, and therefore, onboarding more data providers and custodians is crucial for improving and innovating services for customers.  ::: :::info 2. #### Data owners: Deepning understanding of privacy, awareness and control For a data owner (or the customer), the way they interact with Open Banking ecosystem is via consent. Consent envisions control of data sharing with the customer, however often falls short in achieving this objective, as highlighted in Figure 8. Numerous studies have shown that consent mechanisms fail to educate the customer, offer them complete control of their data or provide complete transparency & auditability, ending up becoming a mere check in the box exercise. For example - a survey conducted by PwC<sup>[9]</sup> in 2023 for websites of 100 organisations showed that consent that was free, specific and informed was sought by only 9% of organisations, and only 2% organisations provided multilingual consent. Similarly, a recent survey conducted by Silence Laboratories<sup>[10]</sup> indicated an alarming gap between the customer’s perception of the level of awareness and control, and the ground reality. <div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/SJWVVbi1Jg.png" alt="Image 8" width="640"/> <figcaption>Figure 8: Challenges with current consent mechanisms </figcaption> </figure> </div> ::: According to the Silence Laboratories' [survey](https://report.silencelaboratories.com), although a majority of customers (~80%) believe they have a clear understanding of data-sharing terms, their clarity diminishes when confronted with specific, granular details, especially around consent revocation and frequency of data access , a shown in Figure 9. 
<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/H1aKJKckkx.png" alt="Image 9" width="640"/> <figcaption>Figure 9: Clarity of data sharing terms via consent mechanisms </figcaption> </figure> </div> As shown in Figure 9, with only 1 in 20 people having control over all aspects of data sharing, there is a strong desire for greater control, with ~47% of customers seeking control at the document level and ~40% wanting the ability to select specific data points. Satisfaction with the level of control in the consent process is lower compared to other aspects of the consent experience, as per the [survey](https://report.silencelaboratories.com/). <div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/HJv5yYc1Je.png" alt="Image 10" width="640"/> <figcaption>Figure 10: Clarity and control gaps in data sharing terms. </figcaption> </figure> </div> Furthermore, even though every 3 in 4 respondents believe that financial institutions collect more data than what is actually required, some of them indicate a willingess to share data for clearly demonstrable benefits. :::warning 3. #### Data fiduciaries: Compliance and privacy and utility tradeoffs Today, data fiduciaries and processing institutions either cannot access data (which could help them understand their customers better) due to privacy and compliance reasons, or are subjected to compromise on the data utility due to tradeoff between privacy and data-driven offerings. Furthermore, centralisation of data on their end makes them vulnerable to single point of failure, increasing the attack surface especially when plaintext data is being processed. Currently, there are no guarantees that the data with the fiduciary is processed only for the pre-decided purpose. ::: :::info 4. #### Regulators: Governance tools to establish accountability guidelines Finally, the involvement of multiple entities in handling customer data, including certain unregulated participants such as technology providers makes it challenging to establish clear guidelines for accountability in case of breaches or non-compliance. Regulatory fragmentation, lack of transparency, traceability, and trust gaps are other challenges which are throttling the adoption of Open Finance. ::: While the discussed challenges and opportunities outlined may seem straightforward and manageable, it is vital to address them proactively as the ecosystems scale. Ensuring that these issues are tackled from the outset is key to building trust, fostering greater adoption, and ensuring the smooth operation of the system in the long term. A successful rollout will require close collaboration between key stakeholders across both public and private sectors, including governments, regulators, and industry players. In the following subsections, we will outline how the challenges discussed above can be addressed through cryptographic techniques and corresponding design adaptations. These solutions aim to bridge the trust gap, encouraging more data providers to participate in the ecosystem, with the **Account Aggregator** model as an example. Ultimately, these advancements will help align ecosystem participants towards shared economic benefits. ## 5. Open Finance Case: Account Aggregator One of the successful examples of Open Finance: the **Account Aggregator (AA)** is a consent-driven platform designed to securely facilitate the sharing of financial data between various institutions. 
It plays a crucial role in Open Finance by acting as an intermediary between **Financial Information Providers (FIPs)**—such as banks, insurance companies, and mutual funds—that hold customer data, and **Financial Information Users (FIUs)**—including lenders, fintech companies, and investment platforms—that need this data to provide services like loans, wealth management, or personalized financial advice. The AA does not store the data but enables secure, encrypted transfers with the customer's explicit consent, ensuring privacy, security, and transparency. This system empowers consumers to control their financial data, fostering trust and enhancing innovation in financial services.

In the current design of the AA ecosystem, FIPs transmit encrypted data $E=\mathsf{Enc}(\mathsf{pk},d)$ to FIUs, with the AAs acting as a facilitating pipeline. Although $E$ is exposed to the AA, the data $d$ stays hidden from the AA as it does not have knowledge of the key $\mathsf{sk}$ needed to decrypt the ciphertext, as specified by ReBIT<sup>[12]</sup> and represented in Figure 11.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/H1kQNb7R0.png" alt="Image 11" width="640"/> <figcaption>Figure 11: Current Account Aggregator design </figcaption> </figure> </div>

Let's explore how the growth of the Account Aggregator (AA) ecosystem can be fueled by cryptographic guarantees and enhanced auditability. Once the AA model is optimized, similar designs and frameworks can be adapted for Open Finance in other regions. We propose a design adaptation at the Financial Information User (FIU) level to ensure that the FIU never gains plaintext visibility of data. In the following sections, we'll discuss potential improvements in the AA's design, particularly how data governance can be integrated with consent auditability, thereby enforcing strict ***use-limitations***.

:::info
As stated in the Digital Personal Data Protection Act, 2023, India<sup>[13]</sup>: "Data fiduciaries must specify a purpose of processing and describe the personal data involved in the processing (Sec. 5(1)). When relying on consent, the processing must only be necessary to achieve the stated purpose (Sec. 6(1))."
:::

## 6. Empowering Account Aggregator with Silent Compute: A Secure Computation Model for Open Finance

While the AA ecosystem has experienced significant growth (1059% in FY 2023-24<sup>[14]</sup>, making it the fastest-growing Open Finance network in the world), adoption can grow exponentially with increased participation and a stronger foundation of trust. As in the broader Open Finance landscape, Financial Information Users (FIUs), the data users and processors, receive access to the **plaintext financial information** of their customers (the true data owners).

:::success
As of August 2024, the Account Aggregator ecosystem had 155 FIPs (Financial Information Providers) across banking, insurance, investments, pensions and taxes, and 475 FIUs (Financial Information Users)<sup>[15]</sup>.
:::

### 6.1 Challenges and Opportunities for Privacy-Driven Data Governance

As previously discussed, the plaintext visibility of complete financial information and the absence of purpose limitation enforced by code create a trust gap between FIPs, FIUs and data owners (users), which leads to friction in the overall data flow. The transmission of plaintext financial data from Financial Information Providers (FIPs) to FIUs also imposes significant liabilities and fiduciary duties on FIUs.
Since FIUs hold a copy of the financial data, they are responsible for any data leaks or security breaches. This leads FIPs to have reservations about the quality of FIU data governance, as FIUs become the true custodians of the data. This, in turn, introduces reputational risks for FIPs, which adds to the slower response times in consented data movements and limits optimal participation.

Moreover, governance within the ecosystem may be strained by a trust gap between FIPs, data owners, and FIUs. There are theoretical risks of FIUs cloning the data undetected, or using it for purposes *not explicitly consented* to by the data principals. Thus, FIUs are attractive targets for malicious entities intent on exploiting them as a single point of failure, undermining data security and trust within the AA ecosystem. We will keep these opportunities for improvement in mind as we set the design constraints for a more powerful version of the AA ecosystem.

### 6.2 Setting Design Constraints in the Architecture of the AA Framework and Workflows

Based on the aforementioned insights, we establish a set of governing design constraints that form the backbone of the privacy-preserving computation version of the AA proposed in the subsequent sections. The constraints, along with the data sharing and inference architecture, are explained in the following subsections. First, let us enumerate the design constraints for strengthening the governance architecture:

:::success
**1. Zero exposure of data in use (analytics):** User data should never be exposed in plaintext to *any* party other than FIPs.

**2. No single point of failure:** Data should not reside in one place, whether at rest or in use.

**3. Purpose limitation, transparency and auditability:** Data usage must be restricted to the consented purpose.
:::

### 6.3 Silent Compute: Driven by Privacy and Auditability for Open Finance

Silent Compute adds privacy-preserving computation modules as an adapter in the AA's architecture to ensure that the aforementioned constraints are satisfied. We will begin by describing the overall system architecture of the proposed method, followed by a detailed exploration of each component in the subsequent subsections. Figure 13 gives an overview of the data flows, computation and inference generation at the request of an FIU. The overarching goal is to ensure that FIPs and AAs can adopt the proposed design without modifying any part of the existing protocol.

As in the existing architecture, the FIP responds with the encrypted data $E=\mathsf{Enc}(\mathsf{pk},d)$. Unlike the existing approach of decrypting the data at the FIU, $E$ is passed to a network of three nodes wherein distributed privacy-preserving computations are performed at the request of the FIU, and only insights are revealed at the end. We refer to these compute nodes as Financial Information Compute Units (FICUs). As shown in Figure 13, FIUs can only send inference computation requests. They do so by writing the logic using pre-defined and optimised opcodes. The scripts are decomposed by a compiler, verified for specific purposes through a consent prover, and subsequently delegated to the FICU network by an orchestrator.
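To make this consent-to-compute binding concrete, the sketch below models in Python how a purpose-bound opcode sequence might be registered at consent time and checked by a consent prover before execution. The opcode names, purpose code, and `verify` routine are hypothetical illustrations, and an HMAC stands in for a real digital signature over the consent artefact; the actual opcode set and signature scheme of Silent Compute are not specified here.

```python
import hashlib
import hmac
import secrets

# Hypothetical opcode sequence that an FIU's script compiles down to.
OPCODES = ["LOAD_STATEMENT", "FILTER_MERCHANT", "COUNT", "AVG"]
PURPOSE_CODE = "101"  # illustrative purpose code

def consent_digest(purpose: str, opcodes: list) -> bytes:
    """Canonically hash the purpose code together with the opcode sequence."""
    material = (purpose + "|" + "|".join(opcodes)).encode()
    return hashlib.sha256(material).digest()

# At consent time: the user's agent signs the digest (HMAC used here only
# as a stand-in for a real signature over the consent artefact).
user_key = secrets.token_bytes(32)
signature = hmac.new(user_key, consent_digest(PURPOSE_CODE, OPCODES), "sha256").digest()

# At compute time: the consent prover recomputes the digest for the
# submitted script and verifies it matches what was consented to.
def verify(purpose: str, submitted: list) -> bool:
    expected = hmac.new(user_key, consent_digest(purpose, submitted), "sha256").digest()
    return hmac.compare_digest(expected, signature)

assert verify(PURPOSE_CODE, OPCODES)                       # consented logic runs
assert not verify(PURPOSE_CODE, OPCODES + ["EXPORT_RAW"])  # any deviation is rejected
```

Because the signature covers the exact opcode sequence, any script that deviates from the consented logic fails verification before it ever reaches the compute nodes.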
In the following subsections, we will first establish how these modules meet the previously defined design constraints, and then delve into the specific functionalities of each module in detail.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/BJLO6U9eyl.png" alt="Image 13" width="940"/> <figcaption>Figure 13: Secure Computation in Open Finance </figcaption> </figure> </div>

- **Compute on Encrypted and Sharded Data (<span style="color: green;"> Zero exposure of data in use and no single point of failure</span>):** Encrypted data from the FIP is sharded (split into multiple pieces—three in the proposed design) across legally isolated computing nodes, the FICUs, before any computation is performed. This ensures that the user's data is never exposed in plaintext to _any_ party other than the FIP and is not stored as a whole with any single entity, thus eliminating a single point of failure. Silent Compute uses a privacy technique called Multi-Party Computation (MPC) to achieve computation on encrypted data, the details of which are discussed in the next sections.

- **Binding Consent with Inference (<span style="color: green;"> Purpose limitation-transparency and auditability</span>):** As highlighted in Figure 13, consent is registered in the form of signed, decomposed sequences of purpose-bound operation codes. Any financial inference by the FIU is performed only if the *opcode* sequence for the corresponding logic can be proved and verified by the consent prover. Thus, Silent Compute enables end-to-end **verifiable** consent that is tightly coupled with computations.

- **Consensus for Inference:** Additionally, no single party (FIU, AA, Sahamati) can operate on user data without the permission of the other two parties. This restriction is enforced cryptographically, as user data remains entirely undefined within the scope of any individual party's view—requiring two or more parties to actively cooperate to unlock the data.

:::info
The Account Aggregator ecosystem has proposed a Fair Use Policy framework<sup>[16]</sup>, establishing comprehensive guidelines for data collection, defining boundaries for consent attributes and aligning use cases with purpose codes. By setting specific rules that must be satisfied for any operation to proceed, the framework aims to integrate policy as code into the Account Aggregator ecosystem. While consent terms may adhere to the Fair Use Policy, Silent Compute ensures that the actual data usage strictly follows the consented terms. By binding consent with compute, the framework guarantees that data is fetched according to established rules and that the data is utilised strictly in accordance with the request.
:::

In the subsequent subsections, we will briefly explain MPC and delve into the architectural nuances of the proposed system.

:::info
#### A Modern Privacy Enhancing Technology: A Primer on Multiparty Computation (MPC)

Fundamental cryptography—encryption, digital signatures, etc.—can be thought of as mathematically guaranteed enforcement of policies that govern access to data. This was well suited to the era of internet applications that required data to be secured only in transit and at rest. The next generation of applications, such as Open Finance, requires data to be secured *while in use*. This warrants the adoption of advanced cryptographic tools and Privacy Enhancing Technologies (PETs).
Multiparty Computation (MPC) is one such PET, encompassing the extraction of utility from private data that is jointly held across different parties. In its most general usage, MPC refers to the evaluation of arbitrary functions upon such data without any individual party—or specified subsets of parties—learning any information beyond its own input and the output of the computation. Under a well-specified non-collusion assumption, an MPC protocol that is run amongst a group of parties emulates an oracle that accepts each party's private input, performs some computation on them, and returns private (or public) outputs to each party. Importantly, the oracle does not leak any intermediate state or other secret information.

The theoretical feasibility of securely computing any function on distributed data was established in the 1980s by the seminal works of Yao, Ben-Or et al.<sup>[17]</sup>, and Goldreich et al.<sup>[18]</sup>, among others. The field of MPC has come a long way since then, making rapid and empirically validated progress, as surveyed in a recent ACM Communications article<sup>[19]</sup>. While general secure function evaluation has served as an efficiency benchmark for MPC over time, the last few years have seen a blossoming of targeted applications of MPC for specific tasks such as threshold signatures, machine learning, and private set intersection. As documented in case studies by Archer et al.<sup>[20]</sup>, MPC can be used to meet data privacy compliance requirements and mitigate the risks of single points of failure in data custody.

In this paper, we apply these principles to the domain of Open Finance. As a target application, we demonstrate how MPC can be used to exercise fine-grained control over data exposure in the Account Aggregator (AA) ecosystem.
:::

#### 6.3.1 Workflow of the Proposed Design in the AA Model

Figure 14 shows the revised workflow<sup>[21]</sup> for a financial information request. Our proposal is to *distribute* the storage of the ephemeral key $\mathsf{sk}$ amongst Financial Information Compute Unit (FICU) nodes. In practice, as an example in the context of this white paper, these compute nodes can be operated by the AA, the FIU, and Sahamati, each running one instance. Let's see the adaptations in the context of the existing workflow<sup>[21]</sup>.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/r109Gsql1x.png" alt="Image 16" width="2000"/> <figcaption>Figure 14: Adapted Data Fetching and Analytics Workflow </figcaption> </figure> </div>

- **A. Initiating a Data Request with Distributed Key Generation (DKG):** As shown in Figure 14, after the user's consent is obtained, the FIU sends a Financial Information (FI) request to the AA to retrieve the data from the relevant FIPs. This request contains the details of the data required (such as bank account statements, loan details, etc.). The Data Request comprises the details of the consent and the key material ($\mathsf{pk}$) that is to be shared with the FIP to encrypt the data sent in response. The Data Request is digitally signed by the FIU. As shown in Figure 14, FICU nodes perform DKG to generate an ephemeral Curve25519 key pair comprising the FIU public key $\mathsf{pk}$ and the FIU distributed private key shares $\mathsf{sk}_1,\mathsf{sk}_2,\mathsf{sk}_3$. These keys are valid only for one data exchange session. The key shares are equivalent to splitting $\mathsf{sk}$ into secret shares $\mathsf{sk}_1,\mathsf{sk}_2,\mathsf{sk}_3$.

The parameters of the secret-sharing-based DKG are set so that no individual $\mathsf{sk}_i$ reveals *any information at all* about $\mathsf{sk}$ itself, while any pair of shares $\mathsf{sk}_i,\mathsf{sk}_j$ fully specifies $\mathsf{sk}$. This ensures that any party who wishes to reconstruct $\mathsf{sk}$ (and therefore any data it encrypts) will need to convince another party to collude with it.
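As a minimal illustration of this property, the sketch below models only the secret-sharing aspect of such a DKG: a 2-of-3 replicated additive sharing of a scalar modulo the Curve25519 group order. The replicated structure is an assumption made here for illustration; the actual DKG additionally derives the joint public key $\mathsf{pk}$ and verifies the distributed shares, both of which are omitted.

```python
import secrets

# Order of the prime-order subgroup used with Curve25519-based schemes.
L = 2**252 + 27742317777372353535851937790883648493

def replicated_dkg():
    """Each node i contributes a random scalar r_i; the ephemeral key is
    sk = r_0 + r_1 + r_2 (mod L), never materialised in one place.
    Node i holds the pair (r_i, r_{i+1}): one share alone is just two
    independent uniform scalars, while any two nodes jointly know all
    three summands and hence fully specify sk."""
    r = [secrets.randbelow(L) for _ in range(3)]
    shares = [(r[i], r[(i + 1) % 3]) for i in range(3)]
    return shares, sum(r) % L  # sk returned here only to check the sketch

shares, sk = replicated_dkg()
(r0, r1), (_, r2) = shares[0], shares[1]  # any pair of nodes suffices
assert (r0 + r1 + r2) % L == sk
```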
:::warning
**Secret sharing** refers to methods for securely distributing a secret among a group of parties. The guarantee is that no individual party holds any intelligible information about the secret, but when a sufficient number of parties combine their 'shares', the secret can be reconstructed. While secret sharing offers secure storage, MPC protocols offer a method to *use* the underlying secret for a computation without having to reconstruct it—at every intermediate stage of the MPC, all sensitive state remains secret shared.
:::

- **B. Distributed Decryption:** As a further adaptation, shown in Figure 14, upon transmission of the ciphertext $E$, the FICU nodes engage in an MPC protocol to jointly decrypt $E$ and obtain secret shares of $d$: $d_1,d_2,d_3$ as their respective private outputs.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/B1rS3Z6yJx.png" alt="Image 12" width="640"/> <figcaption>Figure 15: Secret sharing of ciphertext E across multiple nodes </figcaption> </figure> </div>

Although "MPC-Decryption" is depicted as an idealized trusted third party in the above diagram, in reality this functionality is emulated by means of a distributed protocol. In particular, $d_1,d_2,d_3$ are computed and delivered as private outputs to each of the nodes, while leaking no information about $\mathsf{sk}_1,\mathsf{sk}_2,\mathsf{sk}_3$.

- **C. Secure Distributed Multi-Party Computation:** Subsequent to this distributed decryption of $d$, each query upon the dataset is answered in a distributed fashion as well, without physically reconstructing $d$ in any single place. In particular, to compute a function $f(d)$ upon the data, FICU nodes run an MPC protocol (using $d_1,d_2,d_3$ as private inputs) at the end of which they output $f(d)$, while leaking no additional information about $d$ to each other.

Recall the diagram from the previous section that described the flow of information in the Account Aggregator ecosystem. With our new MPC-enabled data processing and governance framework, the FIU's side of the diagram is further decentralized, as depicted below:

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/SyCsysak1e.png" alt="Image 15" width="640"/> <figcaption>Figure 16: Computation of f(d) without reconstructing d in a single place </figcaption> </figure> </div>

This system allows the AA, FIU, and Sahamati nodes to securely store the data in a decentralized fashion, and to unlock computations on it as relevant and consented to by the user/data principal. A non-compliant node is prevented from accessing any information about the user data if the other nodes do not explicitly agree. In combination with a robust consent management framework, the new MPC-enabled design is able to deliver strong cryptographic guarantees for binding computation on user data with consent.
<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/HkqqhXCbyg.png" alt="Image 17" width="440"/> <figcaption>Figure 16: Preventing access of input information incase of a non-consented data usage attempt </figcaption> </figure> </div> :::success :wrench: **Computation Over Secrets for Financial Inference** - **Workflow** - FICU nodes (Sahamati, FIU, and AA for an example or FIU and TSPs as another configuration) each can operate a (proxy) node to form a compute network. - AA continues to collect consent for computation/inference from user, and convey it to FIU. - User consent is mapped to precisely defined script/commands. The privacy leakage of these commands is analysed by auditors, who may periodically suggest updates. - FIU submits consented commands to compute network. - Compute network nodes verify consent, and evaluate corresponding functions on distributed data, with logs saved for audit purposes. - No individual node in the network has the authorization—or indeed the information—to operate on user data. ::: ## 7. Synergies with SahamatiNet In order to create a seamless infrastructure for robust performance, trust and reliability, [Sahamati](https://sahamati.org.in/) has developed the [SahamatiNet](https://developer.sahamati.org.in/sahamatinet) infrastructure. Sahamati hopes to achieve interoperability, compliance to fair use of customer’s consent and derive actionable insights from the operational and technical data of the ecosystem. Silent Compute can augment the SahamatiNet infrastructure, especially in areas of fair use compliance, and governance via cryptographically auditable mechanisms.  ### 7.1 [Fair Use Compliance](https://developer.sahamati.org.in/sahamatinet/fair-use-compliance) SahamatiNet has proposed a Fair Use Policy framework, establishing comprehensive guidelines for data collection, defining boundaries for consent attributes and aligning use cases with purpose codes. By setting specific rules that must be satisfied for any operation to proceed, the framework aims to integrate policy as code into the Account Aggregator ecosystem. While consent terms may adhere to the Fair Use Policy, Silent Compute ensures that the actual data usage strictly follows the consented terms. By binding consent with compute, the framework guarantees that data is fetched according to established rules and the data is utilised strictly in accordance with the request. **![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXfuchY5mUWd5vwNNU-cfjZvqaIJMEhJ2ToypQI0lybGqb5F0oxptO1_ES9mwgfbG2cBhpEoQ4iFlV5534LSnVW1bIIJeqMTmpfP0BkXBVgh6QDF1A9kFc2vcgYPG63Nn0pvs69zxw?key=nUWnRQLeheLo6Rfte_ek0K0q)** In the illustration above, while the fair use policy ensures consent-fetch terms stay within the specified policy boundaries, Silent Compute further enforces the use of pre-defined operation codes for the designated purpose during data computation. This safeguards the customer data from being misused by the FIU diverging from the intended purpose. In simple terms, while the current model links purpose with the boundaries of consent terms, the proposed model introduces an additional layer by connecting consent terms directly to specific data usage operations. ### 7.2 [Observability](https://developer.sahamati.org.in/sahamatinet/network-observability/observability-services) The Silent Compute Network is designed to support all telemetry and statistical logs. 
Additionally, as the proposed model enables collaboration on inferences, it can enhance existing telemetry data to provide deeper and more accurate insights for use cases such as billing, fraud detection, and more. With verifiable mechanisms to audit data usage, Silent Compute ensures transparency across the entire value chain, facilitating the development of an economic model that attributes value to each stage of the data journey.

### 7.3 [Data Governance](https://developer.sahamati.org.in/sahamatinet/network-observability/data-governance)

Finally, SahamatiNet defines certain guiding principles for governance in the context of network observability, including data minimisation, privacy by design, the principle of least privilege, and data as a shared asset. These principles align directly with Privacy Enhancing Technologies (PETs), the foundational element of Silent Compute and the proposed solution. Integrating the privacy compute network within the AA infrastructure not only adheres to privacy-by-design principles, but also allows computation on encrypted data without exposing private inputs. This approach alleviates concerns around data minimisation and the principle of least privilege, as sensitive data remains with the owners/custodians and is never aggregated in a single location. While SahamatiNet suggests using non-identifiable data or techniques like anonymisation and pseudonymisation for privacy, processing based on MPC offers significantly stronger privacy guarantees without compromising data utility. Lastly, the decentralised nature of processing, which ensures that no party gains full ownership of or access to others' inputs, aligns with the principle of treating data as a shared asset.

## 8. Application-Agnostic Technical Deep Dive: Silent Compute Modules and Protocols

Introductory protocols for MPC are intelligible to an audience with only a basic understanding of cryptography; advanced protocols (such as those relevant to applications in Open Finance), however, require a substantial amount of technical background knowledge. The difference between such introductory and advanced protocols is primarily in efficiency rather than functionality. We therefore provide in this section a detailed account of a vastly simplified protocol, with the intention of conveying a flavour of the protocols that comprise the Silent Compute platform. The protocols actually implemented in the platform can be found in the technical whitepaper; we will provide references to the literature for further reading later in this section. The writeup in this section can serve as a guide to understanding certain standard design principles in MPC protocols, including the advanced ones to which we refer.

### 8.1 Secret Sharing

The most basic unit of an MPC protocol is its underlying secret sharing format. Simply put, a secret sharing scheme allows a secret value to be split into multiple shares, so that any individual share (or specified subset of shares) reveals nothing about the original secret. A simple example is *additive secret sharing*, shown below for a secret bit $x\in\mathbb{Z}_2$:

$$ x_0,x_1,x_2\gets\mathsf{AdditiveShare}(x), \text{ such that } x_0\oplus x_1\oplus x_2=x$$

In plain words, $x_0,x_1,x_2$ are randomly chosen bits subject to their XOR being the secret. We elide rigorous mathematical details, but observe that knowledge of any pair of shares $x_i,x_j$ leaves the original secret $x$ entirely unspecified.
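A minimal Python sketch of $\mathsf{AdditiveShare}$ (`additive_share` is an illustrative name, not a library function):

```python
import secrets

def additive_share(x):
    """Split a secret bit x into three bits whose XOR equals x."""
    x0, x1 = secrets.randbelow(2), secrets.randbelow(2)
    return x0, x1, x ^ x0 ^ x1

x0, x1, x2 = additive_share(1)
assert x0 ^ x1 ^ x2 == 1
# Any two shares are uniformly random bits; only all three together
# determine the secret.
```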
One way to conceive of this is that if $x_0,x_1$ are known shares, then $x_0\oplus x_1=x\oplus x_2$, meaning that the missing share $x_2$ is essentially a one-time pad that encrypts $x$.

While additive secret sharing serves as a helpful stepping stone—and is in fact used directly in many protocols—for our Account Aggregator application we will use a different secret sharing scheme that supports MPC protocols tailored to the collusion model. In particular, there are three parties that will each hold a share of the secret, and it is assumed that no pair of parties will collude. The tool best suited to this model is *replicated secret sharing*, depicted below:

$\mathsf{ReplicatedShare}(x)$:
1. $x_0,x_1,x_2\gets\mathsf{AdditiveShare}(x)$
2. Set $\mathsf{sh}_0=(x_0,x_1)$, $\mathsf{sh}_1=(x_1,x_2)$, and $\mathsf{sh}_2=(x_2,x_0)$
3. Output shares $\mathsf{sh}_0,\mathsf{sh}_1,\mathsf{sh}_2$

In plain words, the replicated secret sharing algorithm produces three secret shares, and each share consists of a pair of bits, such that each pair of shares has exactly one bit in common. There is a degree of redundancy (unlike plain additive secret sharing) which will support substantially more efficient MPC, and yet each individual share contains only two additive shares, which as shown previously reveals nothing about the secret.

Going forward, we will use the shorthand $[x]$ to denote a replicated secret sharing of a secret bit $x$ jointly held by $P_0,P_1,P_2$, i.e. that $P_0$ holds $(x_0,x_1)$, $P_1$ holds $(x_1,x_2)$, and $P_2$ holds $(x_2,x_0)$. In later parts of the article, we will denote these values with variable subscripts (e.g. $x_i$) where the subscript is implicitly always taken modulo 3. In addition, we define the following instructions:

* $\mathsf{Share}(x)$, executed by a party $P_i$ that holds a secret $x$: sample $\mathsf{sh}_0,\mathsf{sh}_1,\mathsf{sh}_2\gets\mathsf{ReplicatedShare}(x)$, and send each $\mathsf{sh}_j$ to party $P_j$.
* $\mathsf{Open}([x])$: each party $P_i$ sends $\mathsf{sh}_i$ to both other parties. Upon collecting $\mathsf{sh}_0,\mathsf{sh}_1,\mathsf{sh}_2$, if this constitutes a valid secret sharing—i.e. $\exists x_0,x_1,x_2$ such that $\mathsf{sh}_0=(x_0,x_1)$, $\mathsf{sh}_1=(x_1,x_2)$, and $\mathsf{sh}_2=(x_2,x_0)$—then each party outputs $x=x_0\oplus x_1\oplus x_2$.

Observe that the $\mathsf{Open}$ instruction can detect when the shares it receives are faulty; at least two shares are guaranteed to be correct and fully specify the secret, meaning that the third share will trigger an error if it is inconsistent. In addition, observe that it is simple to derive a canonical sharing of a public constant. In particular, a public value $x$ implies a trivial sharing $[x]$ where $x_0=x_1=0$ and $x_2=x$.
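These two instructions can be sketched in the same toy Python model; the consistency check in `open_shares` mirrors the fault detection described above:

```python
import secrets

def additive_share(x):
    x0, x1 = secrets.randbelow(2), secrets.randbelow(2)
    return x0, x1, x ^ x0 ^ x1

def replicated_share(x):
    """ReplicatedShare: party P_i receives the pair (x_i, x_{i+1})."""
    x0, x1, x2 = additive_share(x)
    return (x0, x1), (x1, x2), (x2, x0)

def open_shares(sh0, sh1, sh2):
    """Open: reconstruct x, raising an error on inconsistent shares."""
    (x0, x1), (y1, x2), (y2, y0) = sh0, sh1, sh2
    if (x0, x1, x2) != (y0, y1, y2):
        raise ValueError("inconsistent replicated sharing")
    return x0 ^ x1 ^ x2

sh0, sh1, sh2 = replicated_share(1)
assert open_shares(sh0, sh1, sh2) == 1
```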
### 8.2 Manipulating Secrets: Addition and Multiplication

It is a well-known fact that any efficiently computable function can be expressed as a Boolean circuit consisting only of NAND gates. It is typically more efficient, however, to work with circuit representations that use AND and XOR gates, as we will explain below. A standard paradigm in MPC is to express the function to be evaluated as a Boolean circuit, and then process this circuit gate by gate under the invariant that each intermediate value is secret shared. In more detail, any function $f$ can be expressed as a collection of topologically sorted two-input Boolean gates $\vec{g}$, where (abusing notation) the $i^\text{th}$ gate $g_i(g_j,g_k)$ takes as input the outputs of gates $g_j$ and $g_k$. Evaluating this circuit in plaintext entails progressively evaluating each gate in topological order, starting from the input layer, all the way to the output layer.

Secure function evaluation via MPC essentially replicates this procedure in secret shared form. In particular, rather than evaluating $g_i(x,y)=z$ in the clear, parties execute a secure protocol to derive $[z]=g_i([x],[y])$ without leaking any extra information about $x,y,z$. Therefore, once we construct protocols to securely evaluate $[z]=[x]\oplus[y]$ and $[z]=[x]\land[y]$, it is straightforward to compose them to securely evaluate $f$ in its entirety, gate by gate.

#### 8.2.1 Evaluating XOR Gates

As hinted earlier, not every gate is equally complex to compute securely. In particular, *linear* gates, such as addition or XOR, tend to be easier to handle. This is because most natural secret sharing schemes tend to permit some kind of linear homomorphism. Let us begin with this easier case. Assume that parties hold two secret shared values $[x],[y]$, and wish to derive $[z]=[x\oplus y]$. They can execute the following protocol:

$\mathsf{Eval}\text{-}\mathsf{XOR}\text{-}\mathsf{Gate}([x],[y])$:
1. Each party $P_i$ parses its shares of $x$ and $y$ as $\mathsf{sh}^x_{i}=(x_i,x_{i+1})$ and $\mathsf{sh}^y_{i}=(y_i,y_{i+1})$ respectively.
2. Each $P_i$ computes $z_i=x_i\oplus y_i$ and $z_{i+1}=x_{i+1}\oplus y_{i+1}$
3. Each $P_i$ outputs its share of $z$ as $\mathsf{sh}^z_{i}=(z_i, z_{i+1})$

Observe that as per the above protocol,
$$z_0\oplus z_1\oplus z_2 = (x_0\oplus y_0)\oplus(x_1\oplus y_1)\oplus(x_2\oplus y_2)$$
$$= (x_0\oplus x_1\oplus x_2)\oplus(y_0\oplus y_1\oplus y_2)=x\oplus y$$

This means that $(z_0,z_1),(z_1,z_2),(z_2,z_0)$ in fact constitutes a valid replicated secret sharing of $z=x\oplus y$. Security is trivial, as parties don't even interact to achieve this effect. Going forward, we will use the shorthand $[z]=[x]\oplus[y]$ to denote that $[z]$ is computed as the result of executing $\mathsf{Eval}\text{-}\mathsf{XOR}\text{-}\mathsf{Gate}([x],[y])$.

#### 8.2.2 Evaluating Affine Functions

Much like XOR, computing affine functions—i.e. $[z=(a\land x)\oplus b]$ where $a,b$ are public—is straightforward, as shown below:

$\mathsf{Eval}\text{-}\mathsf{Affine}\text{-}\mathsf{Gate}([x],a,b)$:
1. Each party $P_i$ parses its shares of $x$ as $\mathsf{sh}^x_{i}=(x_i,x_{i+1})$
2. Each $P_i$ computes $z_i=(x_i\land a)\oplus b$ and $z_{i+1}=(x_{i+1}\land a)\oplus b$
3. Each $P_i$ outputs its share of $z$ as $\mathsf{sh}^z_{i}=(z_i, z_{i+1})$

One can verify that
$$z_0\oplus z_1\oplus z_2 = ((x_0\land a)\oplus b)\oplus((x_1\land a)\oplus b)\oplus((x_2\land a)\oplus b)$$
$$= ((x_0\oplus x_1\oplus x_2)\land a)\oplus(b\oplus b\oplus b)=(x\land a)\oplus b$$

We will use the shorthand $[z]=(a\land[x])\oplus b$ to denote $\mathsf{Eval}\text{-}\mathsf{Affine}\text{-}\mathsf{Gate}([x],a,b)$, when it is clear that $a,b$ are public constants.
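Both evaluations are purely local, which the following sketch (in the same toy model as above) makes explicit:

```python
import secrets

def replicated_share(x):
    x0, x1 = secrets.randbelow(2), secrets.randbelow(2)
    x2 = x ^ x0 ^ x1
    return ((x0, x1), (x1, x2), (x2, x0))

def open_shares(shares):
    (x0, x1), (_, x2), _ = shares
    return x0 ^ x1 ^ x2

def eval_xor(xs, ys):
    """Eval-XOR-Gate: each party XORs its own share components locally."""
    return tuple((xi ^ yi, xj ^ yj)
                 for (xi, xj), (yi, yj) in zip(xs, ys))

def eval_affine(xs, a, b):
    """Eval-Affine-Gate: shares of (a AND x) XOR b for public bits a, b."""
    return tuple(((xi & a) ^ b, (xj & a) ^ b) for (xi, xj) in xs)

for x in (0, 1):
    for y in (0, 1):
        assert open_shares(eval_xor(replicated_share(x),
                                    replicated_share(y))) == x ^ y
    assert open_shares(eval_affine(replicated_share(x), 1, 1)) == x ^ 1
```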
#### 8.2.3 Evaluating AND Gates

Unlike XOR gates, AND gates are non-linear, and therefore evaluating them securely is more involved. In fact, the technical meat of most MPC protocols lies in how they handle such non-linearity. At a high level, evaluating an AND gate will proceed in two phases: a "semi-honest" phase that succeeds only if all parties follow the protocol, and a "verification" phase that checks that all parties did indeed follow the protocol. Let us begin by writing out the full expression:

$$
\begin{aligned}
x\land y &= (x_0\oplus x_1\oplus x_2)\land(y_0\oplus y_1\oplus y_2) \\
&= (x_0\land y_0)\oplus (x_0\land y_1) \oplus (x_0\land y_2) \\
&\quad\oplus (x_1\land y_0)\oplus (x_1\land y_1) \oplus (x_1\land y_2) \\
&\quad\oplus (x_2\land y_0)\oplus (x_2\land y_1) \oplus (x_2\land y_2)
\end{aligned}
$$

Observe that each of the $(x_i\land y_j)$ terms above is in fact entirely known to some party, and so that party can contribute this term as an input to a protocol that securely combines all terms. There are nine $(x_i\land y_j)$ terms, and each of the three parties can contribute its knowledge of three of them. For instance, if each $P_i$ contributes $(x_i\land y_i)$, $(x_i\land y_{i+1})$, and $(x_{i+1}\land y_{i})$, then the joint collection of all parties' contributions will span all terms required to compute $x\land y$. This idea yields the following protocol:

$\mathsf{Eval}\text{-}\mathsf{AND}\text{-}\mathsf{Gate}([x],[y])$:
1. Each party $P_i$ parses its shares of $x$ and $y$ as $\mathsf{sh}^x_{i}=(x_i,x_{i+1})$ and $\mathsf{sh}^y_{i}=(y_i,y_{i+1})$ respectively.
2. Each $P_i$ computes $t_i=(x_i\land y_i)\oplus (x_i\land y_{i+1})\oplus (x_{i+1}\land y_{i})$
3. Each $P_i$ executes $\mathsf{Share}(t_i)$, as a result of which the parties jointly hold secret shared values $[t_0],[t_1],[t_2]$
4. Parties derive the output secret sharing $[z]=[t_0]\oplus[t_1]\oplus[t_2]$

As discussed previously, the terms $t_0,t_1,t_2$ span all of the terms required to compute $x\land y$. This implies that $[z] = [t_0]\oplus [t_1]\oplus [t_2] = [x\land y]$ if all parties follow the protocol, which is the desired outcome of this phase of the computation. The only interaction required in this protocol is during the $\mathsf{Share}$ subprotocol, which as argued earlier does not leak any information about its constituent secret.

Now let us reason about the case where a party might have deviated from the protocol. As $\mathsf{Eval}\text{-}\mathsf{AND}\text{-}\mathsf{Gate}([x],[y])$ is an interactive protocol, it does open the prospect of a compromised party sending malformed messages. Fortunately, within the scope of the protocol, all secrets stay hidden even in this event, as honest parties' messages are independent of what a corrupt party might send. However, the output may not be *correct* in the event of a cheat, in that parties may not terminate with a valid secret sharing of $[x\land y]$. In isolation, this may not be an issue. However, as this output will be fed into a larger computation—which in turn might be insecure if its intermediate state is incorrect—we must construct a mechanism to verify correctness of the output.
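A sketch of this semi-honest phase in the same toy model follows. Note that it runs all three parties in one process for illustration; in a real deployment, the re-sharing of the cross terms $t_i$ is where interaction occurs:

```python
import secrets

def additive_share(x):
    x0, x1 = secrets.randbelow(2), secrets.randbelow(2)
    return [x0, x1, x ^ x0 ^ x1]

def replicated_share(x):
    s = additive_share(x)
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def open_shares(shares):
    (x0, x1), (_, x2), _ = shares
    return x0 ^ x1 ^ x2

def eval_and(xs, ys):
    """Semi-honest AND phase: each P_i computes its cross terms t_i from
    the shares it already holds, re-shares t_i, and the parties combine
    the three sharings with local XORs: [z] = [t_0] ^ [t_1] ^ [t_2]."""
    ts = [(xi & yi) ^ (xi & yj) ^ (xj & yi)
          for (xi, xj), (yi, yj) in zip(xs, ys)]
    t_sharings = [replicated_share(t) for t in ts]
    # Party j's output share XORs, component-wise, its shares of t_0, t_1, t_2.
    return [tuple(a ^ b ^ c for a, b, c in zip(*pair_triple))
            for pair_triple in zip(*t_sharings)]

for x in (0, 1):
    for y in (0, 1):
        z_shares = eval_and(replicated_share(x), replicated_share(y))
        assert open_shares(z_shares) == (x & y)
```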
### 8.3 Detecting Active Protocol Deviations

The true difficulty in evaluating AND gates lies in the second phase, i.e. verifying the well-formedness of a putative triple $[x],[y],[z=x\land y]$ once it has been computed via $\mathsf{Eval}\text{-}\mathsf{AND}\text{-}\mathsf{Gate}$. We will introduce a basic version of the check here; the full check used in the Silent Compute platform is much more efficient, but quite involved. Verification combines two concepts: triple sacrifice, and cut-and-choose. Triple sacrifice is the idea of using one putative triple to check another. Subsequent to this check, one of the triples must be discarded, hence the "sacrifice". The cut-and-choose technique assures us—probabilistically—that the sacrificed triple is well-formed, which is necessary for the triple sacrifice step to be meaningful. Let us explore these ideas in further detail.

#### 8.3.1 Triple Sacrifice

First, we must define the notion of using one triple to multiply another, employing a technique introduced by Beaver<sup>[22]</sup>. The idea is to compute $[z=x\land y]$ using *another* triple $[a],[b],[c=a\land b]$, by simply opening a linear function of the two. Consider the values $\alpha=a\oplus x$ and $\beta=b\oplus y$. Assuming that $a,b$ are never reused, $\alpha,\beta$ are effectively one-time pad encryptions of $x,y$, and therefore safe to reveal. They allow for the following computation:

$$ [x\land y]= (\alpha\oplus [a])\land(\beta\oplus [b]) = (\alpha\land\beta)\oplus (\beta\land [a])\oplus (\alpha\land[b])\oplus [c] $$

The reason the above equation is useful is that it computes $[x\land y]$ without having to invoke the $\mathsf{Eval}\text{-}\mathsf{AND}\text{-}\mathsf{Gate}([x],[y])$ protocol; indeed, all multiplications in the above equation involve at most one secret, and so they can be performed with the $\mathsf{Eval}\text{-}\mathsf{Affine}\text{-}\mathsf{Gate}$ protocol—which we know to be private and correct. In particular, the multiplication will be *guaranteed correct* if $[a]$, $[b]$, $[c=a\land b]$ is a well-formed triple.

A validation method for $[x]$, $[y]$, $[z=x\land y]$ then begins to take shape: use $[a]$, $[b]$, $[c=a\land b]$ to compute $[z'=x\land y]$, and then check if $[z]=[z']$. The latter check is simple: parties execute $\mathsf{Open}([z]\oplus[z'])$ and check if the result is 0. We give the triple sacrifice protocol below.

$\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}([x],[y],[z],[a],[b],[c])$:
1. Parties compute $[\alpha]=[x]\oplus[a]$ and $[\beta]=[y]\oplus[b]$
2. Parties reveal $\alpha=\mathsf{Open}([\alpha])$ and $\beta=\mathsf{Open}([\beta])$
3. Parties compute $[z'] = (\alpha\land\beta)\oplus (\beta\land [a])\oplus (\alpha\land[b])\oplus [c]$
4. Parties compute $[v]=[z]\oplus[z']$
5. Parties reveal $v=\mathsf{Open}([v])$ and return $\mathsf{PASS}$ if $v=0$, or $\mathsf{FAIL}$ otherwise

An underlying assumption thus far has been that we are given a randomly sampled, well-formed triple $[a]$, $[b]$, $[c=a\land b]$. The first step towards removing this assumption is to reason about what happens if $[a]$, $[b]$, $[c]$ are not in fact well-formed. While $[z']$ may not be correct, as long as $a,b$ are random, revealing $\alpha,\beta$ is still safe; they are one-time pad encryptions of $x,y$, and therefore $\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}$ still maintains their privacy.

Equipped with the knowledge that it is safe to execute $\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}$ with an incorrect $[a],[b],[c]$, we can relax the requirement on this triple to *probabilistic* correctness. In particular, consider a scenario in which $[a],[b],[c]$ are well-formed with probability $1/2$, and incorrect with probability $1/2$. There are two cases:

* **Case 1**: $[a],[b],[c]$ are well-formed, and so the output of $\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}$ is either a true $\mathsf{PASS}$ or a true $\mathsf{FAIL}$. A malformed $[x],[y],[z]$ is caught with certainty in this case.
* **Case 2**: $[a],[b],[c]$ are malformed, implying that the output of $\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}$ may be correct or incorrect, depending on whether $[x],[y],[z]$ itself is well-formed.
As we are in Case 1 with probability $1/2$, we can infer that

$$ \Pr\left[\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}([x],[y],[z\neq x\land y])=\mathsf{PASS}\right]\leq \frac{1}{2} $$

Recall that in both cases, the privacy of $[x],[y],[z]$ is preserved. We can therefore execute $\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}$ repeatedly with fresh randomness in order to tighten the probability of catching a malformed $[x],[y],[z]$ to our satisfaction, at no detriment to privacy. If we denote by $\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}^{\kappa\text{-}\mathsf{reps}}$ the process of repeating $\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}$ a total of $\kappa$ times (with independently sampled $[a],[b],[c]$ triples) and returning $\mathsf{PASS}$ only if all invocations do, then we have that:

$$ \Pr\left[\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}^{\kappa\text{-}\mathsf{reps}}([x],[y],[z\neq x\land y])=\mathsf{PASS}\right]\leq \frac{1}{2^\kappa} $$

In practice, one can set $\kappa=80$ to obtain a statistically negligible probability that a malformed $[x],[y],[z]$ passes the check. But now, how does one obtain $\kappa$ different $[a],[b],[c]$ triples that are each independently well-formed with probability $1/2$? This need is precisely what cut-and-choose satisfies.

#### 8.3.2 Probabilistic Correctness via Cut-and-Choose

Cut-and-choose is a classic technique that underlies many zero-knowledge proof and actively secure MPC constructions. In its simplest form, it involves generating a batch of objects, opening and checking some subset of them, and utilizing the unopened objects if all of the opened objects are well-formed. Applying this concept to generating $[a],[b],[c]$ triples yields the following template: generate a pair of random putative triples $[a_0]$,$[b_0]$,$[c_0=a_0\land b_0]$ and $[a_1]$,$[b_1]$,$[c_1=a_1\land b_1]$, toss a coin $\sigma\gets\{0,1\}$, open $[a_\sigma]$,$[b_\sigma]$,$[c_\sigma]$, and verify that $c_\sigma=a_\sigma\land b_\sigma$. If this equality holds, then $[a_{1-\sigma}]$,$[b_{1-\sigma}]$,$[c_{1-\sigma}]$ is output as a probabilistically correct triple.

First, let us concretely specify how a random secret bit can be jointly generated.

$\mathsf{Gen}\text{-}\mathsf{Random}\text{-}\mathsf{Bit}():$
1. Each party $P_i$ samples a random bit $r_i\gets\{0,1\}$
2. Each $P_i$ shares its bit so that the parties jointly hold $[r_i]=\mathsf{Share}(r_i)$
3. Parties define and output $[r] = [r_0]\oplus[r_1]\oplus[r_2]$

We will use the shorthand $\mathsf{Gen}\text{-}n\text{-}\mathsf{Random}\text{-}\mathsf{Bits}$ to denote that $\mathsf{Gen}\text{-}\mathsf{Random}\text{-}\mathsf{Bit}()$ is invoked $n$ times to generate $n$ independent random bits. Now, we can invoke this procedure to generate a probabilistically correct triple via cut-and-choose as follows:

$\mathsf{Gen}\text{-}\mathsf{Cut\&Choose}\text{-}\mathsf{Triple}():$
1. Parties jointly sample $[a_0]$,$[b_0]$,$[a_1]$,$[b_1]$,$[\sigma]$ $\gets\mathsf{Gen}\text{-}5\text{-}\mathsf{Random}\text{-}\mathsf{Bits}()$
2. Parties compute $[c_0]=[a_0]\land[b_0]$ and $[c_1]=[a_1]\land[b_1]$
3. Parties reveal $\sigma=\mathsf{Open}([\sigma])$
4. Parties then reveal $a_\sigma=\mathsf{Open}([a_\sigma])$, $b_\sigma=\mathsf{Open}([b_\sigma])$, and $c_\sigma=\mathsf{Open}([c_\sigma])$
5. If $c_\sigma=a_\sigma\land b_\sigma$, then parties output the triple $[a_{1-\sigma}]$, $[b_{1-\sigma}]$, $[c_{1-\sigma}]$; otherwise, they output $\mathsf{FAIL}$
### 8.4 Putting it All Together

Given a putative triple $[x]$,$[y]$,$[z]$, one can verify up to statistical certainty (i.e. except with probability $2^{-\kappa}$) that $z=x\land y$ by executing $\mathsf{Triple}\text{-}\mathsf{Sacrifice}\text{-}\mathsf{Check}^{\kappa\text{-}\mathsf{reps}}([x],[y],[z])$. This protocol requires as additional input $\kappa$ triples $([a_i],[b_i],[c_i])_{i\in[\kappa]}$ such that for each $i\in[\kappa]$ it holds that $\Pr[c_i=a_i\land b_i]\geq 1/2$. This exact guarantee is delivered by $\kappa$ independent executions of the $\mathsf{Gen}\text{-}\mathsf{Cut\&Choose}\text{-}\mathsf{Triple}$ protocol. Once $[x]$,$[y]$,$[z=x\land y]$ obtained via $\mathsf{Eval}\text{-}\mathsf{AND}\text{-}\mathsf{Gate}$ has been verified to be well-formed, secure evaluation of the AND gate has been completed, and $[z]$ is safe to use in the larger circuit evaluation.

This overall procedure delivers a strong guarantee: even a party that deviates from the protocol cannot tamper with the result except with probability $2^{-\kappa}$. However, the verification subprotocols given above incur quite a bit of overhead; while $\mathsf{Eval}\text{-}\mathsf{AND}\text{-}\mathsf{Gate}$ requires only a few bits of communication, verifying its correct execution requires over a hundred bits. More advanced techniques for the cut-and-choose phase can guarantee much higher probabilities of well-formedness for the $[a_i],[b_i],[c_i]$ triples used by the triple sacrifice phase, meaning that fewer of them need be sacrificed. The protocol of Araki et al.<sup>[23]</sup>, implemented in the Silent Compute platform, improves the overall amortized cost to roughly 7 bits per AND gate.

Equipped with methods to securely share secret inputs and open them verifiably, evaluate XOR and AND gates in secret shared form, and compose these gates securely, we are able to construct an MPC protocol to evaluate any function $f$ on private inputs.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/Hkjk1iPyyl.png" alt="Image 18" width="640"/> <figcaption>Figure 18: Secure Function Evaluation by Composition of XOR/AND Gates </figcaption> </figure> </div>
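Composing the two sketches above yields the full verified AND gate: compute $[z]$ once, then run $\kappa$ sacrifice checks, each consuming one cut-and-choose triple. The sketch below continues the same toy Python session and assumes `share`, `open_shares`, `triple_sacrifice_check`, `gen_cut_and_choose_triple`, and `honest_and` from the previous snippets are in scope.

```python
KAPPA = 80  # statistical security parameter, as in the text

def verified_and(x, y, eval_and):
    """Evaluate [z] = [x AND y] and verify it up to statistical certainty:
    a malformed z survives with probability at most 2**-KAPPA."""
    z = eval_and(x, y)                       # putative product (Eval-AND-Gate)
    for _ in range(KAPPA):                   # Triple-Sacrifice-Check, kappa reps
        triple = gen_cut_and_choose_triple(eval_and)
        if triple is None:                   # cut-and-choose caught cheating
            raise RuntimeError("ABORT: malformed cut-and-choose triple")
        if not triple_sacrifice_check(x, y, z, *triple):
            raise RuntimeError("ABORT: AND gate evaluation was tampered with")
    return z                                 # safe to use in the larger circuit

x, y = share(1), share(1)
assert open_shares(verified_and(x, y, honest_and)) == 1
```

The hundred-plus bits of verification per gate mentioned below correspond to the $\kappa$ openings this loop performs; the optimizations of Araki et al.<sup>[23]</sup> amortize that cost away.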
### 8.5 Implementing Instructions on Private Data in the Silent Compute Platform

The MPC protocol outlined in the previous section allows for the secure evaluation of any tractable function, and in principle suffices to implement any instruction. In fact, there are many functions for which it is not known how to do much better than to invoke such a general purpose MPC protocol; these include comparing secret values and evaluating symmetric-key ciphers on secret inputs. However, executing instructions on secret data in the Silent Compute platform typically involves a combination of generic components and tailored protocols to achieve substantially better performance than generic evaluation. We discuss two instructions pertinent to the Open Finance setting, and the Account Aggregator use case in particular: distributed decryption, and answering a query on the private bank statement of a user.

#### 8.5.1 Distributed Decryption of a Data Stream

Let us revisit the "distributed decryption" phase of the MPC-enabled data management framework we proposed for FIUs in the Account Aggregator ecosystem. Briefly put, the parties wish to compute a secret sharing $[d]$, where $d=\mathsf{Dec}(\mathsf{sk},E)$, given $[\mathsf{sk}]$ and public ciphertext $E$ from an external source. One could of course represent the function $f(x)=\mathsf{Dec}(x,E)$ as a circuit consisting of AND and XOR gates, and securely compute it with the MPC protocol described previously. The issue with this approach is that due to the structure of most standard encryption schemes, including the one used in the AA ecosystem, such a circuit would consist of billions of gates, which would require substantial resources to securely compute with MPC.

Roughly, the component of the decryption function that would bottleneck a generic approach has the structure $f(x)=\mathsf{AES}(x\cdot G, E)$, where $x\cdot G$ corresponds to a scalar multiplication of a public elliptic curve generator $G$ by a secret scalar $x$. Efficient MPC protocols to operate over elliptic curves are known (to handle the $x\cdot G$ component), and $\mathsf{AES}$ can be securely computed with reasonable efficiency by the generic MPC protocol described earlier. However, the two classes of MPC protocols (those for elliptic curves and those for AES) operate with fundamentally different secret sharing formats under the hood. Using either one class of MPC protocols to execute the entire computation would require a large amount of "non-native" operations, such as emulating XORs and ANDs with the elliptic curve group law, or the other way around. In the technical whitepaper, we describe how one can still use the "native" MPC protocol for each component, and connect them efficiently with techniques inspired by those of Mohassel and Rindal<sup>[24]</sup>, developed in the context of privacy-preserving machine learning.

Note that this MPC protocol could be made substantially simpler if a custom MPC-aware encryption scheme were to be used. For instance, Agrawal et al.<sup>[25]</sup> propose several such schemes, based on standard cryptography, in a related setting. Similar techniques could be adapted for the AA ecosystem.

#### 8.5.2 Securely Answering a Query on a Secret Shared Bank Statement

Suppose the data referenced above corresponds to a bank statement of a user. In particular, $[d]$ obtained following distributed decryption is a secret shared bank statement consisting of a list of transactions. Consider the following query, the answer to which a user consents to reveal:

> "How many purchases were made with merchant *example.com*, and what is the average transaction value?"

We will describe how this query can be broken down into components and handled efficiently via MPC protocols. For the sake of simplicity, assume that $[d]$ is a list of $n$ records structured as $([\mathsf{id}_i],[\mathsf{amt}_i],[\mathsf{aux}_i])_{i\in[n]}$, where $\mathsf{id}_i$ is a unique identifier, $\mathsf{amt}_i$ is the transaction amount, and $\mathsf{aux}_i$ is auxiliary data pertaining to the transaction, in this case the name of the merchant.

Consider how this operation might be implemented over plaintext: initialize two variables `count=0` and `total=0`, and update them as `for i from 1 to n: {if (aux_i == "example.com") {count+=1; total+=amt_i;}}`. Finally, output `count` and `total/count` as the number of purchases and the average transaction value, respectively. The above logic can be compiled to a circuit naively and evaluated with generic MPC; however, it will likely scale very poorly as `n` increases due to its conditional branching. Note that one cannot reveal the outcome of the comparison `aux_i == "example.com"`, as this would leak the positions of the transactions that pertain to *example.com*. It may not be immediately obvious why this leakage is dangerous, but in order to rule out sophisticated attacks that combine leakage from different sources, it is imperative to avoid such leakage altogether.

We can therefore tweak the logic to be more MPC-friendly as follows: `for i from 1 to n: {b=isEqual(aux_i, "example.com"); count+=b; total+=b*amt_i;}`. The adjusted MPC-friendly logic is free of conditional branching, and can be compiled into a circuit consisting of XOR and AND gates in a straightforward fashion; there are standard, highly optimized circuits for comparison and addition that can be composed easily. Subsequently, the compute nodes that hold $[d]$ can execute an MPC protocol (like the one outlined earlier in the section) to securely answer the query at hand.
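The transformation is easiest to see in code. The sketch below runs the branch-free logic in the clear on made-up records; in the deployed protocol, every operation acts on secret shares and the equality test is a standard comparison circuit.

```python
# Branch-free query logic, run here in the clear for illustration only;
# in the real protocol each step operates on secret-shared values.
records = [
    {"id": 1, "amt": 250, "aux": "example.com"},
    {"id": 2, "amt": 400, "aux": "other.shop"},
    {"id": 3, "amt": 150, "aux": "example.com"},
]

def is_equal(a, b):
    """Equality as a 0/1 value; in MPC this is a comparison circuit whose
    output bit remains secret shared and is never revealed."""
    return int(a == b)

count, total = 0, 0
for record in records:          # loop bound is public; no data-dependent branch
    b = is_equal(record["aux"], "example.com")
    count += b                  # secret-shared addition in the real protocol
    total += b * record["amt"]  # multiplexer: adds amt only when b = 1

print(count, total / count)     # -> 2 200.0
```

Note that `b` is consumed arithmetically rather than as a branch condition, which is exactly what keeps the positions of matching transactions hidden.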
The Silent Compute platform can handle substantially more complex queries. An overview of the approach is available [here](https://drive.google.com/file/d/1wlRDElks4k2RR95tD7e6HhgL2bGBhMY7/view?usp=sharing). Further designs and deep dives are detailed in a technical whitepaper, which is available on request (please write to: info@silencelaboratories.com).

:::info
#### :wrench: Optimizing the Ecosystem for MPC-enabled Data Management
The design outlined in the previous section delivers strong guarantees without requiring any changes to existing AA standards and protocols. The rest of the ecosystem can in principle be agnostic to how the FIU chooses to manage its data. However, if existing AA protocols could be adjusted to accommodate MPC-based data management, certain aspects of the MPC could be substantially simplified.

In particular, much of the complexity of distributed decryption is an artefact of MPC-unfriendly design decisions baked into encryption standards: the cost of distributed decryption depends heavily on the structure of the encryption scheme, and encryption standards tend to be optimized for efficient hardware implementations and similar considerations, which unfortunately makes them require substantial effort to handle within MPC protocols. A similar effect is seen in internet infrastructure with signature schemes: the standard ECDSA induces relatively complex MPC protocols, whereas the equally secure and efficient EdDSA/Schnorr is very MPC-friendly, but not as widely supported.

One could instead employ an alternative "MPC-aware" encryption scheme, constructed by composing existing building blocks. For example, the encryption scheme could first compute secret shares of the plaintext itself, and then encrypt each individual secret share under the public key of the intended recipient node. Such a construction would fully inherit the security of the original encryption algorithm, while making distributed decryption a simple matter of having each node decrypt its share. The tradeoff is that such an MPC-aware encryption scheme is less efficient, with its costs scaling with the number of nodes involved in the MPC (e.g. $3\times$ in the proposed architecture).
:::
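A minimal sketch of this share-then-encrypt construction follows, assuming three compute nodes as in the proposed architecture. The per-node cipher below is a toy XOR placeholder standing in for whichever standard public-key scheme the nodes actually use; only the sharing structure is the point.

```python
import secrets

NODES = 3  # matches the three compute nodes in the proposed architecture

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def share_bytes(plaintext):
    """Split the plaintext into NODES XOR shares before encryption."""
    shares = [secrets.token_bytes(len(plaintext)) for _ in range(NODES - 1)]
    last = plaintext
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

# Toy stand-in for each node's public-key encryption (placeholder cipher);
# any off-the-shelf scheme would slot in here unchanged.
node_keys = [secrets.token_bytes(64) for _ in range(NODES)]
enc = lambda key, m: xor_bytes(m, key[: len(m)])
dec = lambda key, c: xor_bytes(c, key[: len(c)])

statement = b"txn:example.com,250"
ciphertexts = [enc(node_keys[i], s) for i, s in enumerate(share_bytes(statement))]

# "Distributed decryption" is now trivial: each node decrypts only its own
# share, and no single node ever sees the plaintext.
decrypted_shares = [dec(node_keys[i], c) for i, c in enumerate(ciphertexts)]

# Reconstruction shown only to demonstrate correctness of the sharing.
recovered = decrypted_shares[0]
for s in decrypted_shares[1:]:
    recovered = xor_bytes(recovered, s)
assert recovered == statement
```

The $3\times$ ciphertext expansion visible here (three full-length ciphertexts for one plaintext) is exactly the efficiency tradeoff noted above.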
### 8.6 Further Readings

There are several textbooks that cover the fundamentals of MPC. The foundations of secret sharing and MPC, including replicated secret sharing, are covered by Cramer et al.<sup>[26]</sup>, and Evans et al.<sup>[27]</sup> provide an introduction to a large class of MPC protocols for an applied audience. The MPC protocol outlined in this section is drawn from folklore techniques rather than a specific construction, and can be inferred from works as far back as Damgård and Orlandi<sup>[28]</sup> or SPDZ<sup>[29]</sup>, although Furukawa et al.<sup>[30]</sup> optimize the techniques for the three-party case.

---

<!--## 9. Supported Use Cases in AA [WIP] ## 10. Roadmap [WIP] <div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/rJKeRrdy1l.png" alt="Image 19" width="640"/> <figcaption>Figure 19: Roadmap for Silent Compute </figcaption> </figure> </div> -->

## 9. Achieving Regulatory Compliance using Silent Compute in Open Finance

In recent years, there has been a growing emphasis on privacy from governments, regulatory authorities, and organisations worldwide. With approximately 80% of countries now having either active or draft privacy legislation in place<sup>[31]</sup>, ensuring compliance ranks as one of the key priority areas for businesses. This regulatory landscape is driving a surge in demand for privacy-compliance solutions, with global spending on tools and technologies aimed at meeting these standards expected to reach $8 billion<sup>[32]</sup>. Organisations are increasingly investing in privacy infrastructure to avoid penalties, protect customer trust, and maintain competitive advantage in an era of heightened data sensitivity.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/BJea1Fckke.png" alt="Image 20" width="640"/> <figcaption>Figure 19: Adoption timeline of privacy laws globally </figcaption> </figure> </div>

Silent Compute is designed to align seamlessly with the core principles outlined in privacy regulations across various jurisdictions. By prioritising data privacy, transparency, and user control, this solution addresses key compliance requirements, making it easier for organisations to meet the stringent standards set by privacy laws. By embedding privacy by design, Silent Compute not only helps businesses stay compliant but also builds trust with customers, who can be confident that their data is being managed responsibly.

There are certain fundamental principles common across most privacy regulations, namely:
- **Purpose limitation:** Cryptographically binding consent with computation ensures strict adherence to the agreed purpose, down to the precise opcodes that are directly mapped to the data request (a sketch of this gating follows this list)
- **Storage limitation:** Movement of, and collaboration on, inferences rather than raw data eliminates duplication of data and, consequently, reduces storage requirements
- **Accountability:** An immutable audit trail tracking the exact usage of data not only ensures that organisations handle data responsibly, but also enables traceability in case of breach or misuse
- **Data encryption:** Silent Compute allows computation over encrypted data, ensuring that inputs are never revealed during storage, transit, or processing. This guarantees privacy at all stages of data handling.
- **Data minimisation:** Serving inferences in response to specific queries ensures that only the minimal amount of data necessary for the specified purpose is used
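As a toy illustration of how purpose limitation can be enforced mechanically, the sketch below gates opcode execution on consented purpose codes. All names, codes, and the mapping itself are hypothetical; in Silent Compute the binding between consent and computation is cryptographic rather than a simple dictionary lookup.

```python
# Hypothetical purpose-limitation gate: a consent artefact lists approved
# purpose codes, and a dictionary maps each purpose code to the opcodes the
# compute nodes may execute. All identifiers below are illustrative only.
PURPOSE_TO_OPCODES = {
    "LOAN_UNDERWRITING": {"COUNT_TXNS", "AVG_TXN_VALUE", "INCOME_ESTIMATE"},
    "PFM_INSIGHTS": {"SPEND_BY_CATEGORY", "AVG_TXN_VALUE"},
}

def authorize(consent_purposes: set[str], requested_opcode: str) -> bool:
    """Permit an opcode only if some consented purpose covers it."""
    return any(
        requested_opcode in PURPOSE_TO_OPCODES.get(p, set())
        for p in consent_purposes
    )

consent = {"PFM_INSIGHTS"}
assert authorize(consent, "AVG_TXN_VALUE")        # covered by consent: runs
assert not authorize(consent, "INCOME_ESTIMATE")  # not consented: rejected
```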
Regulatory authorities worldwide are strongly advocating the use of Privacy Enhancing Technologies (PETs) for scenarios that involve collaboration over sensitive data. Organisations such as the FCA and ICO in the UK, IMDA in Singapore, the UN, HKMA, and others have actively promoted PETs through guides, whitepapers, sandboxes, and tech sprints. The PET market is expected to grow at a CAGR of 26.6% over the next decade<sup>[33]</sup>, and Gartner forecasts that 60% of large organisations will adopt at least one PET by 2025<sup>[34]</sup>. PETs have also been featured in Gartner's "30 Emerging Technologies That Will Guide Your Business Decisions"<sup>[34]</sup>.

## 10. Value Propositions: Before vs Now

Data custodians can imagine a system where they never have to worry about losing control of sensitive information. Their data stays securely within their environment, never leaving their boundaries, yet they can still participate in a collaborative ecosystem. Privacy concerns are no longer a burden, as guarantees of no duplication or misuse of data allow them to monetize customer insights confidently, all while maintaining the highest standards of privacy and security. Data custodians can unlock new revenue streams without compromising the trust they've built with their customers.

<div style="display: flex; flex-direction: row; gap: 10px;"> <div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/HkenD36Zye.png" alt="Image 20" width="640"/> </figure> </div> <div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/HkdtO26bJg.png" alt="Image 20" width="540"/> </figure> </div> </div>

*Figure 20: Benefits to data custodians and fiduciaries*

On the other side, data fiduciaries gain the unprecedented ability to generate powerful inferences without ever needing access to the underlying raw data. Privacy compliance becomes seamless, with privacy laws and key principles like data minimization and purpose limitation baked into the solution. They can now tap into additional data sources that were once out of reach, transforming their models and services with richer insights, all while ensuring the highest level of data privacy.

For the data owner - the customer - the future is equally exciting. Their data remains fully encrypted throughout its journey, even during computation, ensuring that their privacy is never compromised.
They have complete control over how their data is shared, thanks to purpose-bound consent mechanisms that empower them to decide who can access their information and for what purpose. This new level of transparency and auditability lets them track how their data is used at every step, fostering trust and confidence in the system. And most importantly, their financial information is never exposed beyond the custodian they already trust, creating a secure and personalized experience. The possibilities of this future are immense, transforming the way data is shared, protected, and utilized across industries.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/SyPLa13y1e.png" alt="Image 21" width="640"/> <figcaption>Figure 21: How the proposed model fares vs current implementation </figcaption> </figure> </div>

Furthermore, enhancing the Fair Use Policy by not only mapping use cases to purpose codes but also establishing a dictionary that maps purpose codes to opcodes, which execute only if approved during consent, would create the guardrails for consent-driven data usage, effectively minimising the risk of misuse.

## 11. Enabler of More Business Opportunities in Open Finance

:::success
The productivity benefits of the digital economy depend on economies of scale in data utilization—where larger datasets yield greater insights—and network effects driven by an increasing number of users adopting common standards. Data usage generates positive externalities, as the social value derived from data exceeds its private value.
:::

Data in the digital economy, and in Open Finance particularly, serves as a critical driver of value creation. Data has unique economic characteristics such as non-rivalry (data can be used by multiple parties simultaneously without depletion) and increasing returns to scale (the value of data increases as more data is aggregated), as highlighted in the research by Charles I. Jones and Christopher Tonetti<sup>[11]</sup>.

![image](https://hackmd.io/_uploads/r18Tt4RZJl.png)

Data can generate positive externalities, where the social value of data use often exceeds its private value due to network effects and the enhanced capabilities that come with larger datasets. Consequently, the market equilibrium is suboptimal, leading to underinvestment by data holders in the maintenance and sharing of data. Data sharing arrangements, whether through private agreements or regulatory mandates, aim to expand data usage and address this *market failure.*

A significant challenge within the economic model of the Open Banking ecosystem lies in the uneven distribution of the value generated. Data fiduciaries and processors derive substantial value by utilising data to offer personalised services to customers. However, the upstream players - namely, data custodians and enablers such as AAs - do not equally share in these benefits. They can be incentivised with equitable distribution of the value generated within this ecosystem using a usage-based compensation model. In this framework, the ultimate beneficiaries (customers) could bear a portion of the compensation cost, or the revenue generated could be shared between fiduciaries, processors, providers, and aggregators. By tying compensation to the actual usage of data, rather than a one-time data transfer, this approach would more fairly distribute the wealth created. Additionally, it would incentivise better performance and availability from data custodians and providers, as their financial outcomes would be directly linked to the quality of their service delivery.
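To make the usage-based model concrete, here is a deliberately simple, entirely illustrative sketch in which custodian payouts accrue per compute request and are weighted by request category (anticipating the categorization discussed in the next subsection). The category names and rates are invented for the example.

```python
# An illustrative (hypothetical) usage-based compensation model: custodians
# and operators are paid per compute request, weighted by request category,
# rather than once per bulk data fetch. Categories and rates are invented.
CATEGORY_RATES = {
    "SIMPLE_AGGREGATE": 0.10,
    "CREDIT_MODEL_FEATURE": 0.50,
    "FULL_STATEMENT_ANALYTICS": 1.25,
}

def custodian_revenue(requests):
    """Sum per-request fees; each entry is (category, request_count)."""
    return sum(CATEGORY_RATES[cat] * n for cat, n in requests)

monthly_requests = [
    ("SIMPLE_AGGREGATE", 120_000),
    ("CREDIT_MODEL_FEATURE", 8_000),
]
print(f"custodian payout: {custodian_revenue(monthly_requests):,.2f}")
# Contrast with today's model: a single fee at data-fetch time, with no
# visibility into, or payment for, downstream reuse of the data.
```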
#### 11.1 How would a fairer distribution of economics be enabled by PETs?

Currently, data fiduciaries and their technical service providers benefit from strong revenue streams, largely due to the financial analytics business cases they have developed, whether through subscriptions or technical licensing for lending models, personal finance management tools, and more.

::: warning
As of now, data providers, custodians, and protocol maintainers are largely compensated per data fetch request; the pricing variables are at best functions of the volume of data requested by the fiduciaries, and are applied only once. Once collected, typically as a dump spanning a long duration, processors can keep using the data without any visibility or economic benefit accruing to the custodians. Since the data was originally collected by the custodians, transparent and auditable governance models like Silent Compute can eventually provide better telemetry and incentives to them.
:::

As discussed before, an architecture like Silent Compute can enable fairer economics because fiduciaries do not have access to the data as a whole. We can assign variables, and attach responsibilities, to the custodians on which direct incentives can be built:
- *Volume of data*
- *Quality of data*
- *SLAs* (response to fetch requests, uptime, etc.)

These variables govern how efficiently data processors can build business cases for their customers. Similarly, the custodians can earn downstream revenues based on indirect incentives, which are closely linked with the value extracted from the data:
- *Number of compute requests sent to the FICU network:* Data custodians and network maintainers (operators) should be paid, at base, as a function of the number of data usage instances (compute requests) sent to the network.
- *Categorization of the compute requests:* Not all compute requests carry the same business value or incur the same computational cost. Hence, the payout to the custodians and operators should be a function of the type of compute requested, as in the toy sketch above.

These models introduce greater granularity in upstream revenue streams and ensure transparent economics. Collectively, data custodians and collectors, including banks and financial institutions, will benefit from attractive incentives as they become active participants in this growth, hence amplifying their earnings. At an advanced level, fair use policies, like those developed by SahamatiNet for the AA ecosystem, can be mapped to predefined opcodes and associated with the cost of inference. The network will support all telemetries and statistical logs by design. Hence, policies can be enforced and protocol maintainers can be incentivised for the same.

#### 11.2 How will more business opportunities be created?

Silent Compute enables the creation of a marketplace for audited and verified functions, optimized for computational efficiency by FICU. In this marketplace setting, TSPs can host functions, facilitating a revenue-sharing model among all stakeholders based on usage and contribution. Overall, the proposed model helps narrow the trust gap between custodians, users, and processors, encouraging more data providers to join the ecosystem. With a higher volume of data (given its non-rivalrous nature), the quality of analytics improves, unlocking additional business use cases and opportunities.
## 12. Interplay of Privacy, Consent and Trust

Understanding the trust flywheel is crucial for ensuring the smooth functioning of the ecosystem. When data owners are provided with absolute control, transparency, and auditability over their data through robust consent mechanisms, it fosters a strong sense of privacy assurance. This assurance, in turn, builds trust in the system, encouraging data owners to share more information in exchange for improved services. The availability of more data enables data fiduciaries to deliver increasingly personalised offerings, ultimately enhancing customer satisfaction.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/r1jR1Y5Jyx.png" alt="Image 22" width="940"/> <figcaption>Figure 22: Trust flywheel onboarding users </figcaption> </figure> </div>

This is wonderfully demonstrated by research studies conducted by IBM<sup>[35]</sup> and IAPP<sup>[36]</sup>, which study consumers' perceptions of privacy.

![Image 20](https://hackmd.io/_uploads/SymHLXayke.png) ![Image 21](https://hackmd.io/_uploads/BkjO8maJkg.png)

<div style="text-align: center;"> <figure> <figcaption>Figure 23: Studies on factors affecting customer trust </figcaption> </figure> </div>

The World Economic Forum has developed a framework<sup>[37]</sup> aimed at fostering trust in an increasingly digital world. Centred around key principles such as security and reliability, accountability and oversight, and inclusive, ethical, and responsible use, this framework is highly relevant to Open Finance ecosystems. As volumes soar and the ecosystems scale and mature, encompassing a broader range of banking, financial, and non-financial applications, addressing critical challenges related to privacy, transparency, auditability, and accountability will be essential for their long-term success.

<div style="text-align: center;"> <figure> <img src="https://hackmd.io/_uploads/B19Q9s1gyl.png" alt="Image 24" width="640"/> <figcaption> Figure 24: WEF's framework dimensions of digital trust </figcaption> </figure> </div>

<!-- ### Auditable Consent - Roadmap to connect with share use policy ## Archived content > Design 1: The flow of data and compute would be as follows: 1. FIP sends encrypted data $E=Enc(d, k)$ to the network, at the request of FIU, 2. FIU get the decryption key $k$, 3. AA's computing node or Sahamati's computing node/proxy shards the encrypted data using **[Secret Sharing](https://en.wikipedia.org/wiki/Secret_sharing).** After secret sharing, AA, FIUs and Sahamati's proxy gets secret shared of the encrypted data: $e_1$, $e_2$, $e_3$ respectively. 3. FIU does **[Secret Sharing](https://en.wikipedia.org/wiki/Secret_sharing).** of the key so that AA, FIUs and Sahamati's proxy gets secret shared of the key: $k_1$, $k_2$, $k_3$ respectively. By now: * AA has $E$ and $k_1$ * Sahamati has $E$ and $k_2$ * FIU has $E$ and $k_3$. * Hence no party any tangible information-->

<!-- ![image](https://hackmd.io/_uploads/HJJR_9ls0.png) 4. The network now runs a protcol called *[Threshold DeCryption](https://en.wikipedia.org/wiki/Threshold_cryptosystem#:~:text=With%20a%20threshold%20cryptosystem%2C%20in,the%20decryption%20or%20signature%20protocol.)* where AA's Sahamati proxy, Sahamati and FIUs'Sahamati proxy computing node talk to each other through a defined protocol with aim of getting distributed shares of the encrypted data, $E$, [1](https://www.ieee-security.org/TC/SP2017/papers/96.pdf). At the end: * AA gets $d_1$, * Sahamati gets $d_2$, * FIU gets $d_3$.
* Hence, no party any tangible information on the decrypted financial data as well after this step as well. 6. Now the network is ready to received requests for any inference. Now whenever FIUs wants to run a query, the corresponding compute request is sent to the network. * Network checks for validity of the users'/owners' consent for the purpose. * Once consent verification and scope test passes, *the nodes enter to Secure MPC protocol based function calculation* and output is revealed at FIU. All of the three parties are required to participate for successful run of the distribued computation, else the protocol will abort with no result.-->

:::info
Through the proposed design, the Open Finance ecosystem can guarantee that data is never held in one place as a whole, and that end-to-end privacy is ensured by design and backed by cryptographic claims.
:::

<!-- ### Add WIPs like economics, forth coming elemts, hosted functions ### Important points to added and discussed - Value proposition for FIUs, FIPs and TSPs should be clearly defined. ## Trust in claims? ## Attestations ## Public/Secret functions ## Trust: third parties not learning more than what they should.-->

## 13. Excerpts from Experts in Open Finance

::: info
*"There's a strong need for a privacy preserving compute layer to support data minimisation and to mitigate any potential malicious use of data for which end users had not consented. However, any intermediate layer would add a compute cost, add to the latency of data and insight delivery, and would be a cost overhead to FIUs. All FIUs therefore need to be sensitised on the same and need to be on the same page for why implementation of such a layer is beneficial in the short and long term. Incentives need to be aligned for all stakeholders and we need to instil more trust in these open networks. Privacy preserving techniques like multiparty compute can provide us just that. At Finarkein, as an early adopter and enabler of the AA ecosystem, we have followed privacy preserving techniques from day zero, such as in-memory data processing only, complete data encryption in transit and at rest, and no storage of data whatsoever."*

### Nikhil Kurhe, **Co-Founder, Finarkein Analytics**
:::

::: info
*"Open Finance is entering a pivotal phase of growth, where the realization of its full potential depends on a deeper respect for consent and data privacy. These elements are becoming increasingly essential to encourage data custodians to participate, thereby driving the industry toward socially optimal outcomes.*

*I believe that Privacy-Enhancing Technologies (PETs), such as the proposed solution leveraging Multi-Party Computation (MPC), are both highly relevant and immediately implementable. I support the idea of FIUs and TSPs forming closer partnerships to implement PETs. By doing so, they will enable the responsible use of data, ensuring secure participation in computations without compromising privacy."*

### Vamsi Madhav, **CEO, Finvu**
:::

:::info
*"As the Account Aggregator framework expands and its usage increases, the importance of safeguarding the system against data leaks and misuse becomes paramount. The adoption of the Multi-Party Computation (MPC) method marks a significant advancement in protecting user data. It ensures that access is granted strictly when necessary, and sensitive details are not exposed to entities that do not require them.
As an FIU, Fold recognizes the potential of such technologies to enhance security and remains committed to implementing these measures responsibly to maintain trust and ensure privacy within the ecosystem."*

### Akash Nimare, **CEO, Fold Money**
:::

## 14. Authors

<div style="text-align: center;"> <figure> <img src="https://md.silencelaboratories.com/uploads/39ba5b86-39e4-471c-9f04-70e0d54c9b41.png" width="640"/> </figure> </div>

<figure> <img src="https://hackmd.io/_uploads/HJkqw4mzyl.png" width="640"/> </figure>

<figure> <img src="https://md.silencelaboratories.com/uploads/6ff9c95b-ed5e-4116-a60b-c679ba1b94bb.png" width="640"/> </figure>

<figure> <img src="https://hackmd.io/_uploads/HJN8i47fkl.png" width="640"/> </figure>

<figure> <img src="https://md.silencelaboratories.com/uploads/dbb2f932-49d0-4b0e-adf6-7c352207fae5.png" width="640"/> </figure>

## 15. References

1. McKinsey Global Institute: https://www.mckinsey.com/industries/financial-services/our-insights/financial-data-unbound-the-value-of-open-data-for-individuals-and-institutions
2. Global Market Insights: https://www.gminsights.com/industry-analysis/open-banking-market#:~:text=Open%20Banking%20Market%20was%20valued,major%20drivers%20in%20the%20market.
3. Open Banking Expo: https://www.openbankingexpo.com/news/konsentus-open-banking-underway-or-live-in-68-countries/#:~:text=The%20study%20by%20Open%20Finance,of%2035%25%20of%20the%20world.
4. A. UK: https://openbanking.foleon.com/live-publications/the-open-banking-impact-report-2024-march/executive-summary B. EU: https://www.kontomatik.com/blog/open-banking-statistics-across-europe; https://thefintechtimes.com/open-banking-sixth-anniversary/; https://www.juniperresearch.com/press/open-banking-use-to-surge-as-open-banking-api/; https://www.konsentus.com/open-banking-api-transactions; https://qwist.com/en/resources/blog/ndgit/insights-and-outlooks-after-three-years-of-psd2/ C. India: https://sahamati.org.in/wp-content/uploads/2024/09/Account-Aggregator-Adoption-update-for-website-31st-Aug-2024.pptx.pdf D. Brazil: https://dashboard.openfinancebrasil.org.br/transactional-data/api-requests/evolution E. Australia: https://www.cdr.gov.au/performance F. USA: https://thepaypers.com/expert-opinion/an-overview-of-open-banking-and-open-finance-in-the-us-in-2023--1266123
5. BCBS member survey 2024: https://www.bis.org/bcbs/publ/d575.pdf
6. Trovata: https://trovata.io/blog/open-banking-opportunity/
7. Open Banking Impact Report June 2022: https://openbanking.foleon.com/live-publications/the-open-banking-impact-report-june-2022/ultimate-outcomes
8. TechUK: https://www.techuk.org/resource/smart-data-the-uk-s-new-data-sharing-laws-will-spur-innovation-and-improve-consumer-outcomes.html
9. Readiness of India Inc. for the Digital Personal Data Protection Act, 2023: A PwC analysis: https://www.pwc.in/assets/pdfs/consulting/risk-consulting/readiness-of-india-inc-for-the-digital-personal-data-protection-act-2023.pdf
10. Unraveling blind spots in financial data sharing: User perception vs reality by Silence Laboratories & Scaling Trust: https://report.silencelaboratories.com/
11. Nonrivalry and the Economics of Data: https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.20191330
12. ReBIT API Specs: https://api.rebit.org.in/
13. The Digital Personal Data Protection Act, 2023: https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf
14. Account Aggregator growth: https://sahamati.org.in/media-article/indias-account-aggregator-framework-crosses-100-million-consents-in-three-years/
15. Account Aggregator adoption statistics: https://www.medianama.com/2024/08/223-100m-consents-on-indias-account-aggregator-framework/
16. Fair Use Compliance: https://developer.sahamati.org.in/sahamatinet/fair-use-compliance
17. Completeness theorems for non-cryptographic fault-tolerant distributed computation: https://dl.acm.org/doi/10.1145/62212.62213
18. Proofs that yield nothing but their validity and a methodology of cryptographic protocol design: https://ieeexplore.ieee.org/document/4568209
19. ACM Communications article: https://dl.acm.org/doi/pdf/10.1145/3387108
20. From Keys to Databases – Real-World Applications of Secure Multi-Party Computation: https://eprint.iacr.org/2018/450.pdf
21. Sahamati FI Request Workflow: https://developer.sahamati.org.in/buildaathon-2024/network-scenarios/fi-request-workflow
22. Efficient Multiparty Protocols Using Circuit Randomization: https://link.springer.com/content/pdf/10.1007/3-540-46766-1_34.pdf
23. Optimized Honest-Majority MPC for Malicious Adversaries — Breaking the 1 Billion-Gate Per Second Barrier: https://ieeexplore.ieee.org/document/7958613
24. ABY<sup>3</sup>: A Mixed Protocol Framework for Machine Learning: https://dl.acm.org/doi/10.1145/3243734.3243760
25. DiSE: Distributed Symmetric-key Encryption: https://dl.acm.org/doi/10.1145/3243734.3243774
26. Secure Multiparty Computation and Secret Sharing: https://www.cambridge.org/core/books/secure-multiparty-computation-and-secret-sharing/4C2480B202905CE5370B2609F0C2A67A
27. A Pragmatic Introduction to Secure Multi-Party Computation: https://securecomputation.org
28. Multiparty Computation for Dishonest Majority: From Passive to Active Security at Low Cost: https://link.springer.com/chapter/10.1007/978-3-642-14623-7_30
29. Multiparty Computation from Somewhat Homomorphic Encryption: https://link.springer.com/chapter/10.1007/978-3-642-32009-5_38
30. High-Throughput Secure Three-Party Computation for Malicious Adversaries and an Honest Majority: https://link.springer.com/chapter/10.1007/978-3-319-56614-6_8
31. UNCTAD: https://unctad.org/page/data-protection-and-privacy-legislation-worldwide
32. Gartner: https://www.gartner.com/en/newsroom/press-releases/2020-02-25-gartner-says-over-40-percent-of-privacy-compliance-technology-will-rely-on-artificial-intelligence-in-the-next-three-years
33. Future Market Insights, PET Market Growth: https://www.futuremarketinsights.com/reports/privacy-enhancing-technology-market
34. Gartner Forecast & Emerging Technologies: https://www.gartner.com/en/newsroom/press-releases/2022-05-31-gartner-identifies-top-five-trends-in-privacy-through-2024
35. Consumer Attitudes Towards Data Privacy, IBM (2019): https://newsroom.ibm.com/download/IBM+Data+Privacy.pdf
36. IAPP Privacy and Consumer Trust Report: https://iapp.org/resources/article/privacy-and-consumer-trust-summary/
37. World Economic Forum's Digital Trust Framework: https://initiatives.weforum.org/digital-trust/framework