Whitepaper 2021 (Ver 0.1)


Based on the revolutionary Rho calculus, RChain solves a series of problems preventing blockchain platforms from realizing mainstream adoption.

Its fast and scalable conflict detection algorithm, combined with Casper CBC Proof of Stake consensus, allows:

⇨ all nodes to produce and verify blocks concurrently without global epochs so it becomes the first smart contract platform to achieve single-shard scalability; and

⇨ large data to be stored directly on chain, removing the dependency on other data storage solutions such as IPFS.

Built on Rholang, RChain’s native programming language, RChain is the first chain to allow arbitrarily complicated cross-shard transactions to be verified and finalized atomically and concurrently by all involved shards. This means that cross-shard transactions can be carried out as seamlessly and safely as transactions within a single shard. RChain will also be equipped with a Rho calculus based behavioural type system.

This allows smart contracts to be formally verified quickly in a concurrent and sharded setting, which makes possible the orchestration of large quantities of smart contracts. Its unique reactive smart contract system is more suitable than other chains’ active systems for time-sensitive applications such as DeFi. RChain’s unique technology makes it the best candidate for building a functional world computer.

Introduction and Motivation

How difficult are these times? By many estimates we have lost over 70% of insects (by biomass) globally. Stop and let that sink in. How many different ecosystems depend on insects? We are not just talking about pollinators like bees, which would be devastating enough; we are talking about the full range of insects. The correct response to this is for your mouth to get dry and a very sick feeling to arise in your gut. This is the harbinger of the 6th mass extinction on planet Earth.

Even Goldman Sachs is acknowledging the global scale of climate change. Their cheery notion of how to profit from it might be adorable if it weren’t so misguided. Neil deGrasse Tyson is not wrong when mentioning on CNN that we don’t have the technology to move our coastal cities inland by 20 miles in the next 20 years, but don’t let this view lull you into thinking climate change is all about sea-level rise. Look no further than the LA fires (such as the current one on the Warner Bros lot). Fires will come with the floods.

Our drinking water is also in peril. The major aquifer under India is down to one-quarter of what it was. Due to glacier melt, it will not be refreshed. Think about what this means for farmable land in the region, then extrapolate. Arable land tomorrow will be very different from what it is today. This will have massive impacts on supply chain management.

Lest you get the wrong idea, this document is not all doom and gloom. This document is about hope—the kind of hope that is beyond reason and opens the way for the transformative power of love. Still, everything presented here will be grounded in reason with a sound mathematical basis. In fact, we are going to need both reason and love, head and heart, and maybe even a third force, working in concert to find our way out of the mess we have created for ourselves.

Coordination Technology

If we are going to get through the upcoming decades and build a world based not on a merely sustainable culture, but on a regenerative one, we are going to have to coordinate in a manner that we never have before in the history of humanity. Fortunately, coordination is Homo sapiens’ superpower. One hairless biped didn’t stand a chance against a woolly mammoth; by working together in groups, early hunters drove them to extinction.

In the modern era, we have amplified our superpower with three major tools: capital, governance, and social telecommunications. Unfortunately, all three are now badly in need of a reboot. Capital, whose original purpose is to help us take care of each other and the planet, is now stuck in the hands of a few. Regardless of your politics, you must acknowledge that this dramatically reduces the number of coordination models we can explore using capital, at a time when we need to be expanding our modes of coordination.

Governance is already under transformative pressure because of Internet technologies. These technologies, which can bring fact-checking and video records of political behavior to the fingertips of virtually every citizen, make transparency and accountability so imminent that resistance to them must attack truth itself. The fake news epidemic is a symptom of the death throes of dictatorial impulses. World governments are moving too slowly to address the problem, and many, such as the US, are actually in denial of the existential crisis we are facing despite the overwhelming scientific consensus, turning a blind eye to the flooding, fires, and famine already in progress. This inability to act in a timely manner on behalf of their citizens is due to the majority of them being corrupted to the core.

Likewise, the centralized social media have reached the point where, like Facebook, they can be weaponized by foreign powers to influence the outcomes of elections in major democratic countries. Further, the draconian responses to these abuses (such as Facebook’s frequent flirtations with censorship) are just as worrisome. Democracy vitally depends on unrestricted access to public information.

Self-sovereign identity, data privacy, and decentralization

This last point has been a concern of the market and the public sector alike. The market has responded to the dominance of digital asset management platforms by a few centralized companies by pursuing three key alternatives: self-sovereign identity, data privacy, and decentralization. These pursuits have all converged on the technology known as the blockchain.

Self-sovereign identity

The market has experienced increasing concern over Facebook and Google’s dominance of online identity. Virtually every online service offers (in fact, prefers) registration and login via Facebook or Google. This means just two companies control the online identities of hundreds of millions of people on the Internet. The response to this serious threat to online independence and personal privacy is the development of self-sovereign identity—a term referring to the idea that people should identify themselves and reveal just what they want to reveal in any particular communications context. The data they reveal should be verified via a combination of cryptographic services and a web of trusted parties rather than any single provider.

Data privacy

As with self-sovereign identity, the market has grown increasingly anxious over the hoarding of personal information by the large online companies, especially in the wake of the security breaches these companies have experienced in the past several years. Many terms describing this growing public anxiety have entered common parlance. One example, surveillance capitalism, names the fact that the consumer (and their personal data) has become the product that these major online providers sell to advertisers, as well as to more nefarious agencies such as Cambridge Analytica, the company widely acknowledged to be the architect of Britain’s Brexit vote and to have been involved in the election of Donald Trump.

Solutions for personal data vaults have come from many sectors, including the World Wide Web’s inventor, Sir Tim Berners-Lee. Simultaneously, the public sector has responded with fairly strong regulations, such as the European Union’s GDPR.

Decentralization and the blockchain

More generally, the market has been exploring decentralized alternatives for building the kinds of online services that have sprung up and insinuated themselves into modern life over the last few decades. None is more prominent than the blockchain.

At the core of every blockchain (at least those that deserve the name) is an economically-secured, leaderless consensus algorithm. Essentially, algorithms of this type—be they proof of work, proof of stake, or some other kind—allow computer programs that do not trust one another to come to agreement on a value. Since they agree on the value, they can each store a local copy for easy access and re-run the algorithm only when that value changes. Such a capacity, if it were scalable enough, makes it possible to deploy a global decentralized data network.

The importance of such a breakthrough needs to be called out. The last 15 years of development of online services have been all about digital asset management platforms, such as GitHub, Spotify, Facebook, Instagram, Twitter, Dropbox, GMail, Google Maps, etc. They help billions of people upload, disseminate, and manage zettabytes of data. Even the second wave of Internet services, including PayPal, AirBnB, and Uber (sometimes called the sharing economy), is filled with digital asset management platforms that just happen to connect to physical and other kinds of assets.

The fact that it’s all about the data has consequences for our need to coordinate in the face of climate change. In just the same way that the concentration of capital in the hands of a few limits the number of coordination models we can implement via capital, the concentration of data in the hands of a few has the potential to limit the coordination models we can implement via data and social telecommunications. While many of the online digital asset management platforms began with more open models, they are becoming less open every day. Google no longer has the motto “Don’t be evil.” Facebook has gone through several waves of censorship. When Microsoft acquired GitHub, developers from several geopolitical regions were disenfranchised from their code.

The emergence of a decentralized global data network represents a major shift, yet the potential of the blockchain to completely upend the market doesn’t stop there. Having developed the consensus algorithm known as proof-of-work, the Bitcoin network chose to use it to store a ledger recording the balances at Bitcoin holders’ addresses. This choice has obvious but limited utility. A more sophisticated choice for what to store with your consensus algorithm is the state of a virtual machine. This choice, originally conceived and developed by Ethereum, turns the global data store into a global computer. This computer runs everywhere and nowhere. Any time a local instance of the computer is killed, two more can take its place somewhere else on Earth.

Blockchain and scalability

Of course, the promise of a real shift towards decentralization of these services built on blockchain technology is predicated on scalability. If we are to base our analysis of this question on the proof-of-work blockchains like Bitcoin and Ethereum, we might reach a resounding “no, it cannot scale” as our conclusion. Proof-of-work is inherently wasteful, spending inordinate compute cycles. Essentially, the protocol trades heat for security, and only a tenuous security at that.

Further, even if the proof-of-work consensus algorithm could scale, Ethereum’s choice of virtual machine is a sequential machine, meaning that all transactions must be processed one after another. However, most transactions are isolated and can be processed concurrently. Someone buying an empanada in Chile is touching different financial resources than someone else buying grilled tofu on the streets of Shanghai. They can and do proceed independently. To understand what the introduction of a sequential virtual machine does, think of an eight-lane freeway. What happens when all of those lanes get funneled down to one? Worse still, because the machine is sequential, adding more server nodes to the network only adds contention; adding computational resources actually slows the network down.
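To make the point concrete, here is a minimal Python sketch (with hypothetical account names) of why isolated transactions need no global ordering: transfers with disjoint footprints commute, so a concurrent machine can process them in parallel without changing the outcome.

```python
def transfer(balances, src, dst, amount):
    """Apply one transfer and return the resulting balance map."""
    updated = dict(balances)
    updated[src] -= amount
    updated[dst] += amount
    return updated

def touches(tx):
    """The set of accounts a transaction reads or writes (its footprint)."""
    src, dst, _ = tx
    return {src, dst}

balances = {"chile_buyer": 10, "empanada_stand": 0,
            "shanghai_buyer": 10, "tofu_cart": 0}

tx1 = ("chile_buyer", "empanada_stand", 3)   # empanada purchase in Chile
tx2 = ("shanghai_buyer", "tofu_cart", 4)     # tofu purchase in Shanghai

# Disjoint footprints mean the transactions are isolated...
assert touches(tx1).isdisjoint(touches(tx2))

# ...and isolated transactions commute: either schedule yields the same
# state, so a concurrent machine is free to run them in parallel.
order_a = transfer(transfer(balances, *tx1), *tx2)
order_b = transfer(transfer(balances, *tx2), *tx1)
assert order_a == order_b
```

A sequential machine forces an artificial ordering on such pairs; a concurrent one only serializes transactions whose footprints actually overlap.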

Finally, even if it were possible to scale such a virtual machine, this kind of platform, one that enables developers to write programs on the global computer, exposes itself to a massive security risk, namely those very user-defined smart contracts. The core protocol could be delivered from heaven on high, perfect in every way, yet because pesky human developers are writing programs on top of this global computer, those programs will have errors. The DAO bug, in which $50M was drained from a $150M pool, was a problem in user-level code, not core protocol code. The fact that these incredibly damaging errors have been relatively sparse is only because no one is building serious applications on Ethereum, because it doesn’t scale. In fact, the DAO bug itself was recoverable precisely because Ethereum doesn’t scale. Imagine what would have happened to the network if Ethereum had been running at Visa speeds of 40K transactions per second instead of the 10 transactions per second it was clocking at the time.

To recap, the blockchain comes along at a time when we really need it, and we need it not just because there is a global concern for the centralization of capital and data assets, but because we need to reboot our coordination infrastructure to address the consequences of climate change, which poses an existential threat to humanity. RChain was developed in this context: to answer these questions in a timeframe that will matter.


RChain brings together five major technological components, described in the sections that follow, in delivering its RNode software.

Each of these components and the innovations they realize is designed to meet specific market needs. In general, RChain would much rather follow state of the art practice and engineering where it makes sense to do so, rather than spend on costly R&D. At the same time, RChain recognizes that if we are going to build a platform sound enough to rebuild the world’s data and financial networks on it, it needs to be of a completely different kind of quality than we find in most Internet software.

Taking this recognition seriously, RChain uses a radically different development methodology, sometimes called correct-by-construction. This method extracts programs from proofs of their correctness. The aim of this approach is that ultimately the running, production code has been formally verified, and proven correct. This is the level of software quality and reliability necessary for a platform with the aims and goals described above.

Note that correct-by-construction does not harken back to waterfall models of software development. That is, it doesn’t wait for either the math or the software to be perfect. A mathematical model may be internally consistent and proven correct, yet not be applicable in a production setting because there are new or subtly different requirements in production. Seasoned mathematicians, as well as seasoned software developers, know that both proofs and programs only work to the extent that they accurately model and encode their requirements. In a developing market a deep understanding of requirements happens iteratively, much in the same way that viable adaptation in Nature happens iteratively. As such, correct-by-construction fits perfectly well with agile development methods, which are all about managing the iterative nature of software development.


RSpace: a new kind of store

In the last decade the technical communities, especially those involved in big data, have rethought storage and retrieval. In particular, a dialectic has developed around the no-SQL alternative to relational data stores. First, a wave of storage systems emerged based on the key-value paradigm, together with map-reduce. A backlash followed, levying critiques of the key-value store paradigm in terms of the semantics of both queries and transactions. RSpace threads the needle, offering a no-SQL store, but with a clear query semantics and a clear transactional semantics. It goes beyond this by offering a critical feature necessary to support user-controlled concurrency in queries: the ability to store both code and data. In fact, putting code and data on equal footing in the storage layer derives from a consistency constraint coming from one of the oldest rules of logic, the law of the excluded middle.
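As a rough illustration of storing code and data on equal footing, here is a toy Python sketch (the names `ToySpace`, `produce`, and `consume` are illustrative; real RSpace is far richer, persistent, and transactional): a channel may hold either data waiting for code, or code (a continuation) waiting for data, and a match fires the stored code.

```python
from collections import defaultdict

class ToySpace:
    """A toy keyed store in which channels hold data or continuations."""
    def __init__(self):
        self.data = defaultdict(list)           # channel -> pending data
        self.continuations = defaultdict(list)  # channel -> pending code

    def produce(self, channel, datum):
        """Send a datum on a channel; fire a waiting continuation if any."""
        if self.continuations[channel]:
            k = self.continuations[channel].pop(0)
            return k(datum)                     # match found: run stored code
        self.data[channel].append(datum)        # otherwise store the data

    def consume(self, channel, continuation):
        """Wait for data on a channel; fire immediately if data is present."""
        if self.data[channel]:
            datum = self.data[channel].pop(0)
            return continuation(datum)          # match found: run on stored data
        self.continuations[channel].append(continuation)  # otherwise store the code

space = ToySpace()
served = []
space.consume("orders", lambda d: served.append(d))  # code arrives first, is stored
space.produce("orders", "empanada")                  # data arrives, fires the code
assert served == ["empanada"]

space.produce("orders", "tofu")                      # data arrives first, is stored
space.consume("orders", lambda d: served.append(d))  # code arrives, fires on stored data
assert served == ["empanada", "tofu"]
```

The symmetry between the two branches is the point: the store makes no privileged distinction between code waiting for data and data waiting for code.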

Lest this seem too heavily slanted toward the theoretical, it is important to understand that this is third generation technology. Meredith devised a version of this for Microsoft’s BizTalk process orchestration, a business process automation platform, then updated the idea to use so-called delimited continuations in SpecialK, and finally proposed the RSpace design as a further refinement to allow RChain to scale. To learn more about this we invite you to explore RSpace on RChain’s GitHub repository, which, like all of RChain’s software, is open source.

In the context of our previous discussion, in which we identified the need to organize and handle the world’s data in a decentralized way, beginning with a store that builds and improves upon what we have learned from the last decade of big data makes sense. Essentially, a component like this is sitting as a private asset inside all the major digital asset management platforms, from Google to Facebook. One key difference, however, is that our store is open source, and fits into a decentralized, public infrastructure, and its features and functions are derived from a specific concurrency semantics, embodied in rholang.

Rholang: a new kind of programming language

Rholang speaks directly to the requirement identified above that the virtual machine, i.e. the model of computation, whose state is stored on the blockchain, must be fundamentally concurrent, rather than sequential. A careful analysis of the various models of computation, from Turing machines to lambda calculus, from Petri nets to the π-calculus, shows that there are four properties we are interested in relative to this market’s requirements. Specifically,

                  Completeness   Compositionality   Concurrency   Complexity
Turing machines        ✔︎               X                 X             ✔︎
lambda calculus        ✔︎               ✔︎                 X             X
Petri nets             ✔︎               X                 ✔︎             ✔︎
CCS                    ✔︎               ✔︎                 ✔︎             X
π-calculus             ✔︎               ✔︎                 ✔︎             ✔︎

A quick glance at the table shows that the π-calculus, and more generally the family of models of computation known as the mobile process calculi, is the only one that has all four features. Likewise, a quick glance at this table is essentially all one needs to judge which blockchain projects on the market have what it takes to scale. If they are not based on a model that has all four features, they will not scale.

We could add other lines to this table: whether the model has already been used as the basis of enterprise-grade products built to support the previous generation of smart contracts; and whether it has been used as the basis of Internet standards specifying the previous generation of smart contracts. Again, the π-calculus is the only one that meets these requirements as well. Specifically, the author of this prospectus, Greg Meredith, was the principal architect of Microsoft’s BizTalk Process Orchestration, as well as of the language XLang, which was not only the basis for enterprise- and Internet-scale business process automation, but also the basis for a number of industry standards, including BPEL and BPML, as well as informing the W3C WS-Choreography standard. All this is a way of saying that the choice to go with a model like the π-calculus is not just theoretically sound, but also practically sound and in alignment with industry standards.

We cannot leave this topic without also mentioning that it’s not just the blockchain that demands concurrency as the model of computation. The programming model for Internet-scale programs has been under considerable pressure to move to a concurrent model of computation for quite some time. For the last two decades, two trends have been putting pressure on the programming model from below as well as from above. From below, the free ride of Moore’s law ended in the early 2000s. In the previous era a developer could write some C code, sit on their hands for a year, and their code’s performance would double because of advances in processor speed. That trend ended when limits to sequential processing speeds were hit, and the predominant way computation has sped up since has been to put more cores per die, more chips per box, more boxes per rack, and more racks per data center. Code that doesn’t take advantage of this concurrency doesn’t scale. Likewise, from above, the commercialization of the Internet has resulted in user demand for programs that are globally accessible by millions of concurrent users, 24x7. Again, code that isn’t essentially concurrent will fail to be responsive.

This means that the programming model used for Internet-scale applications, whether they are blockchain-savvy or not, must evolve to be concurrent. The mobile process calculi form the basis for that evolution, not only because they represent significant advances in language design, but because they provide a sound basis for static analysis of programs. The importance of this feature is hard to overstate. Concurrent programming is many times harder than sequential programming. Without significant support from static program analysis, the bugs in concurrent code will become overwhelming as the number of programmers writing concurrent code increases.

Rho-calculus vs π-calculus

As mentioned, the π-calculus is just one example of a family of models enjoying all of the features necessary to address this market. Since Turing award winner Robin Milner put forward the model, several models of computation sharing many of the π-calculus’ features have been identified and studied, including the join calculus, the blue calculus, and the ambient calculus. Each of these has interesting properties, but ultimately fails in one way or another to map as well to programming the Internet as the π-calculus. There is one model, however, that derives from the π-calculus, but fixes a small lacuna and at the same time adds some powerful features that are quite common in programming the Internet, namely Meredith and Radestock’s rho-calculus.

The rho-calculus plugs a hole in the π-calculus by making names first class elements of the model. The π-calculus is parametric in a theory of names; that is, given a theory of names, the π-calculus will produce a theory of processes that get computation done in terms of communications that use those names as channels. The π-calculus is agnostic as to whether names are telephone numbers, email addresses, blockchain addresses, or all of the above. However, the pure π-calculus can only exchange names between processes. It’s like people getting work done entirely by exchanging phone numbers. It turns out it is possible, in theory, to do just that, but it’s a lot of work! What happens on the Internet, however, is that not only data, but code gets shipped around from process to process, and the rho-calculus supports this feature.
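The contrast can be sketched in a few lines of Python (illustrative only, not rholang syntax): a π-style channel carries only a name, a reference the receiver must then go use, whereas a rho-style channel can carry the process itself, which the receiver runs directly.

```python
# pi-style: only a name (a reference) crosses the channel.
pi_channel = []
pi_channel.append("addr_of_greeter")   # send a phone number...
name = pi_channel.pop()                # ...the receiver holds a reference, not the work

# rho-style: the message is a quoted process; the receiver can run it.
def greeter(who):                      # a "process", modeled here as a function
    return f"hello, {who}"

rho_channel = []
rho_channel.append(greeter)            # send the code itself
process = rho_channel.pop()
result = process("world")              # the receiver runs the shipped process
```

The π-calculus can encode code mobility through elaborate protocols of name exchange, but the rho-calculus supports it directly, which is how work actually gets done on the Internet.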

Namespaces and sharding

The rho-calculus achieves this ability to ship processes by making names the codes of processes. Once names are code, it is possible to encode all of the common telecommunications notions of address into the rho-calculus setting; everything from email addresses to blockchain addresses embeds nicely, and thus it is straightforward to embed the most common addressing scheme on the Internet: URIs and URLs. The latter is important not just because it is the way the entire world wide web is organized, but also because it identifies a very powerful feature that the rho-calculus refines: namespaces. URIs organize the web into a tree of resources (a forest, actually, but trees will suffice for our discussion). Each URI is a path from the root of the tree along the branches to the leaf, or endpoint, that holds the resource. Because of this path structure, you can indicate entire groups or spaces of resources using only partial paths. This allows us to organize and search spaces of resources in terms of the tree and path structure. The rho-calculus kicks this paradigm up a notch by identifying these spaces programmatically.
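A minimal sketch, using some hypothetical resource paths, of how a partial path denotes a whole namespace of resources:

```python
resources = [
    "//shard/finance/accounts/alice",
    "//shard/finance/accounts/bob",
    "//shard/social/posts/123",
]

def in_namespace(uri, prefix):
    """A resource lies in a namespace when the namespace path prefixes it."""
    return uri.startswith(prefix)

# The partial path "//shard/finance/" names the whole space of financial
# resources at once.
finance = [r for r in resources if in_namespace(r, "//shard/finance/")]
assert finance == ["//shard/finance/accounts/alice",
                   "//shard/finance/accounts/bob"]
```

This sketch selects namespaces lexically, by string prefix; the rho-calculus goes further and identifies such spaces programmatically, as predicates over names.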

The reason this feature is so critical in this market is, again, because most transactions are isolated. We need a programmatic way to segment, organize, and reorganize transactions so that they are grouped in terms of the resources they have in common. This is commonly called sharding in the blockchain space. The rho-calculus namespace capability provides an extremely powerful approach to sharding. Specifically, the same approach can be used to rejoin forks, to interoperate with other networks, and to speed up transactions that are isolated.

Operational semantics and correct-by-construction language design

This discussion wouldn’t be complete without mentioning the interplay between the correct-by-construction methodology and the typing system for rholang. The semantics of rholang the programming language is grounded in best practices. The rho-calculus provides a Turing-complete operational semantics for rholang’s core features, and each of rholang’s additional features is defined in terms of a mapping back to the core calculus. This means that the language is correct-by-construction. Contrast this with Ethereum’s Solidity and the EVM: Solidity doesn’t yet have a formal semantics. When it does eventually acquire one, there will be a proof obligation that any compiler in production compiles Solidity to EVM bytecode in a way that preserves the semantics. Without such a proof, what safeguard prevents bytecode injection attacks? Who would be able to tell that an attacker’s compiler doesn’t emit bytecode that siphons off small bits of Ether to their account?

Rholang cannot suffer such an attack because the language and the execution mechanism (which is simply RSpace!) are both derived directly from the rho-calculus. These are the kind of benefits that come from the correct-by-construction methodology. Also note how we can reap the benefits of these results without having to know about the wider market requirements related to distribution and consensus. This reflects the point made earlier that correct-by-construction fits with iterative and agile requirements gathering and development processes.

It is also worth noting that a clean operational semantics is necessary to identify when one program is substitutable for another, in other words, when it is safe to put a different program in place of another in a wider execution context. This notion goes all the way back to Liskov’s substitution principle, and in the practical terms of the long-term maintenance of a system, it is a critical feature. You need to know when you can swap in upgrades or fixes for old or failing components. In software this is predicated upon having a semantics for the programming language, and operational semantics are by far the most widely used form of programming language semantics in theory and practice.

This principle of substitutability is closely connected to static analysis, and especially the form of static analysis commonly called type checking. All programs inhabiting a given type are substitutable in any context requiring that type. This is the basis of correct plug-n-play software and is used in all serious production settings. However, the type systems of most popular modern languages are relatively weak, only ensuring that the data structure supplied matches the data structure expected. They say nothing about program structure, let alone program behavior. Over the last two decades a new class of type systems has emerged, but it has yet to find its way into a mainstream programming language. Rholang’s behavioral types are designed to change that.

Behavioral types

The 2005 paper “Namespace logic” describes both a logic and a type system for programs in the rho-calculus. These types can be used to describe an incredibly wide range of phenomena: from a compile-time firewall, ensuring that a process can and will only communicate on a specific range of channels, to deadlock-freedom, ensuring that a process doesn’t get stuck, to encodings of common data formats, such as XML Schema. The logic is part of a development that goes all the way back to Brouwer’s intuitionistic mathematics programme, which insisted that a proof of the existence of a mathematical object must construct it, rather than infer it by reductio ad absurdum arguments.
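To give the flavor of the compile-time firewall idea, here is a deliberately tiny Python sketch (it checks channel names lexically against an allowed prefix; the real namespace-logic types work on process structure, not strings): a process “typechecks” only if every channel it mentions lies in the allowed namespace.

```python
ALLOWED_PREFIX = "internal/"   # the "type": communicates only on internal/* channels

def channels_used(process):
    """A 'process' here is modeled as a list of (operation, channel) pairs."""
    return {chan for _, chan in process}

def typechecks(process):
    """Static check: every channel mentioned lies inside the allowed namespace."""
    return all(c.startswith(ALLOWED_PREFIX) for c in channels_used(process))

safe  = [("send", "internal/log"), ("recv", "internal/queue")]
leaky = [("send", "internal/log"), ("send", "public/exfiltrate")]

assert typechecks(safe)        # accepted: stays within the firewall
assert not typechecks(leaky)   # rejected at "compile time": reaches outside
```

The check runs before the process ever executes, which is the point: the firewall is a property of the code, enforced statically, not a runtime monitor.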

Other highlights along the way include Curry and Howard’s correspondence between types in the simply typed lambda calculus and formulae in intuitionistic logic; Abramsky’s domain theory in logical form; Robin Milner and Matthew Hennessy’s modal logics for process calculi; Caires’ spatial-behavioral logics; Girard’s linear logic; and Wadler’s session types. It’s a long and illustrious development that has only been partially mined by programming language designers.

Rholang completely embraces the approach, and for good reason. In 2015, Meredith warned Vitalik Buterin and anyone in the Ethereum community who would listen that the major security exposure for their platform – indeed, for any smart contract platform – was in the user-defined contracts. Shortly after this warning was articulated, Ethereum was hit with a bug in the DAO contract. The DAO smart contract implemented a kind of decentralized Kickstarter. The idea was so popular that $150M worth of ETH was sent to the contract. Then an attacker found an exploit and drained $50M from the contract. Fortunately, Ethereum was only running at about 10 tps; otherwise all $150M would have been gone in a few seconds.

As soon as the bug was reported, Meredith and Pettersson showed that when the buggy contract was translated into rholang and typed, it wouldn’t type check. As a result, the problematic code would never have compiled, let alone been checked in and deployed. In rholang the bug shows up as a race between updating the contract’s state and serving the next client request; the exploit amounts to being able to access stale state, and the types coming from namespace logic catch exactly such race conditions. The value loss and community upheaval could have paid for the development of such a type system for Ethereum’s smart contracting language, Solidity, more than ten times over. This is why RChain begins with such a type system on its roadmap.
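The shape of the bug can be sketched in Python (a hypothetical vault, not Solidity or rholang): the buggy contract hands control to the caller before updating its state, so a reentrant call observes stale state, which is exactly the race the behavioral types flag.

```python
class BuggyVault:
    """Pays out before updating state: the DAO-shaped mistake."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount, callback):
        if self.balance >= amount:
            callback(self)           # hand control to the caller first...
            self.balance -= amount   # ...state updated too late

class FixedVault(BuggyVault):
    """Updates state before handing over control: no stale state to exploit."""
    def withdraw(self, amount, callback):
        if self.balance >= amount:
            self.balance -= amount   # update state first
            callback(self)           # then hand over control

drained = []
def attack(vault):
    # Re-enter once: against BuggyVault, the balance check sees stale state.
    if len(drained) < 1:
        drained.append(True)
        vault.withdraw(50, attack)

vault = BuggyVault(balance=50)
vault.withdraw(50, attack)
assert vault.balance == -50          # two withdrawals honored against a balance of 50

drained.clear()
safe_vault = FixedVault(balance=50)
safe_vault.withdraw(50, attack)
assert safe_vault.balance == 0       # the reentrant attempt finds no stale state
```

In rholang terms, the buggy version races the state update against the next request; a type system that tracks such races rejects the first class and accepts the second.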

Operational Semantics in Logical Form (OSLF)

What is hinted at by the many developments along the way to behavioral types is the existence of an algorithm that generates type systems. The OSLF algorithm, developed by Meredith and Stay, is such an algorithm. Given the operational semantics of a model of computation, OSLF generates a type system. This type system is much richer than the type systems found in Java, or Haskell, or Scala. It can see things about program structure and program behavior that the older type systems cannot. Yet, it is decidable for an interesting subset of programs.

Because of this richness, the types can also be thought of as a query language. If digital asset management is the cornerstone of the Internet as a global coordination technology, then there is one digital asset that is both vitally important and vastly underserved: code. Code is the dark matter of the Internet. It accounts for most of the function, is stored and managed as a data asset, and yet is opaque to the standard query mechanisms for structured data. Behavioral types, by contrast, allow us to filter code, i.e. select code on the basis of how it is structured and what it does. This capability not only revolutionizes services like GitHub, it also prepares the way for a smart contracting platform successful enough to host billions of smart contracts, because it makes it possible to find what one is looking for amongst billions of possibilities.

Casper: a new kind of consensus algorithm

Prior to blockchain, and specifically proof-of-work, distributed consensus algorithms like PAXOS favored lockstep consistency over availability. At scale this choice is unworkable, and almost all global platforms, such as Facebook and Twitter, use some form of eventual consistency. The blockchain takes this idea much further, and tethers it to an even more radical idea: securing the protocol economically. The trick is to make it too costly to attack the network. While the famously “impossible” 51% attack against proof-of-work based networks has long been shown not only to be possible, but to have happened to prominent proof-of-work networks, the idea of making it too costly to attack is still quite valid. What is less tenable is wasting cycles on guessing numbers as a means of demonstration of work.

Instead, one can continue with the correct-by-construction methodology, and follow logic’s lead. In the late 80’s and early 90’s Girard’s linear logic was revolutionizing our notion of what both logic and proof are. Specifically, linear logic is resource sensitive. Unlike either classical or intuitionistic logic, where the proposition A & A is the same as the proposition A, linear logic takes into account the resources necessary to establish a proposition. One might think of it in terms of establishing properties of chemical compounds. Many assays to establish that a compound has a given property require modifying or even destroying a quantity of the compound. More prosaically, in classical logic, saying “I have a dollar” & “I have a dollar” isn’t sensitive to the possibility that this could mean “I have two dollars.” In point of fact, valid linear proofs are balanced in the sense that all resources must be carefully accounted for, and linear proofs prevent double-spend. Needless to say, it seems natural to look to linear logic for clues about consensus, especially economically secured consensus.
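The resource sensitivity of linear logic can be mimicked in ordinary code. The sketch below is an illustration only, not part of RChain: it models a dollar as a use-once value, so the double-spend reading of the “I have a dollar” example is rejected at runtime.

```python
class LinearToken:
    """A use-once resource: consuming it a second time fails, just as a
    valid linear proof must account for every resource exactly once."""
    def __init__(self, value):
        self._value = value
        self._spent = False

    def consume(self):
        if self._spent:
            raise RuntimeError("double spend: token already consumed")
        self._spent = True
        return self._value

dollar = LinearToken(1)
assert dollar.consume() == 1   # "I have a dollar" -- spent once, fine
try:
    dollar.consume()           # spending the same dollar again
except RuntimeError as err:
    print(err)                 # double spend: token already consumed
```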

Among the different semantics for linear logic, game semantics stands out as both intuitive and pluripotent, with many variations illuminating a wide variety of features of the logic. Hyland and Ong’s game semantics provides an extremely faithful interpretation of the logic: each move of player and opponent must be justified by previous moves, and it is this justification structure that their semantics uses to establish the single-threadedness of strategies. In 2009, Meredith proposed using this kind of structure to secure network protocols in the communication between instances of an earlier version of RSpace known as SpecialK. Following these insights, CBC-Casper imposes and exploits a justification structure on blocks to detect equivocation and to provide liveness and fairness constraints. These properties are typically ensured by imposing certain patterns of communication, and the justification structure gives a view into just enough of the history of communication that these properties, or their violations, can be detected.
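As a rough illustration, greatly simplified from CBC-Casper and with invented names, equivocation can be detected from justifications alone: two blocks from the same sender, neither of which appears in the other’s past, constitute a violation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    sender: str
    blk_id: str
    justification: frozenset = frozenset()  # ids of blocks this one has seen

def in_past(a: Block, b: Block, index: dict) -> bool:
    """Is block a in the justification closure (the 'past') of block b?"""
    seen, frontier = set(), set(b.justification)
    while frontier:
        bid = frontier.pop()
        if bid == a.blk_id:
            return True
        if bid not in seen:
            seen.add(bid)
            frontier |= set(index[bid].justification)
    return False

def equivocates(a: Block, b: Block, index: dict) -> bool:
    """Two distinct blocks from the same sender, neither justified by the other."""
    return (a.sender == b.sender and a.blk_id != b.blk_id
            and not in_past(a, b, index) and not in_past(b, a, index))

g  = Block("genesis", "g")
b1 = Block("alice", "b1", frozenset({"g"}))
b2 = Block("alice", "b2", frozenset({"g"}))   # ignores b1: equivocation
b3 = Block("alice", "b3", frozenset({"b1"}))  # extends b1: honest

index = {b.blk_id: b for b in (g, b1, b2, b3)}
print(equivocates(b1, b2, index), equivocates(b1, b3, index))  # True False
```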

Armed with a means to detect violations reliably, proof-of-stake can be economically secured. Participants put up stake in the token used to prevent denial of service and have their stake slashed, i.e. partially depleted or entirely forfeit, if they commit a violation. In the long run, only participants that play by the rules are able to stay in the game. This is considerably more efficient than wasting cycles guessing numbers.
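Once a violation is detected, slashing itself is simple arithmetic; a minimal sketch, with made-up stakes and fractions:

```python
stakes = {"alice": 100, "bob": 100}

def slash(validator: str, fraction: float) -> None:
    """Deplete a violator's stake: partially (fraction < 1) or entirely (fraction = 1)."""
    stakes[validator] -= int(stakes[validator] * fraction)

slash("alice", 0.5)   # partial depletion for a lesser violation
slash("bob", 1.0)     # entire stake forfeit
print(stakes)         # {'alice': 50, 'bob': 0}
```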


If we are talking about a global, economically secured, decentralized compute and data storage network, there are critical questions about how the network is governed that are inescapable. First and foremost, RChain follows nature, quantum mechanics, and the emergent practices of software development in open source projects. Just as beehives fork when there is a new queen, or the wave function splits in the many worlds interpretation of quantum mechanics, or open source projects branch where there is sufficient technical disagreement, forks are natural. The question is how to relate those forks to each other in such a way that a healthy ecosystem of speciation and integration emerges.

Again, the idea is to follow natural tendencies. We already have evidence from existing blockchains that communities will hedge on both sides of a fork. It’s the safest thing to do, and it allows both sides to access the network effects of the other. It’s also what happens in beehives: hives that fork are genetically rejoined during the summer flights of the queens. In RChain, the simplest thing to do is to treat forks as shards. Sharding, as mentioned above, is an elegant mechanism for establishing the economic bridge between the two networks.


Beyond technological and market-driven solutions, there are still matters that require socially organized governance. RChain draws on the long-standing tradition of cooperatives. In the US, Europe, China, and all over the world, cooperatives have provided a natural, grass-roots means of governance of common resources. In North America, famous instances include REI, the Tillamook Dairy Co-op, and many utility companies. At the heart of the cooperative movement are democratic principles, not the least of which is one member, one vote. In RChain the membership votes directly on items of business and elects the board, which in turn appoints the officers of the company to handle day-to-day affairs.

As such, RChain Cooperative is not yet a direct democracy, but a representative democracy that allows for natural differentials in levels of engagement.

We hypothesize that this model can be made to scale in much the same way REI has scaled, or Puget Sound Community Cooperative has scaled, or … the beehive scales. When there is a significant division of interests, even if that division is not rooted in conflict, but in differences in language, or time zones, it makes sense to split up the governance into local Cooperatives. As an example, RChain is actively exploring with the Chinese community the establishment of RChain China.

Impact on token status

One of the most important features of the Cooperative structure is its relationship to securities laws, especially US securities laws. The Cooperative structure gives members voting control over the code, the treasury, and the board. As long as tokens are sold only to members, even if the token has no established utility, it still passes the major checks of the Howey test. In the case of RChain’s RHOC/REV tokens, utility is evident: RHOC/REV is both the DoS prevention mechanism and the means to effect computation and storage on the network.

Which jurisdictions?

One absolutely critical reason to explore these more scalable governance structures is that blockchain technology cuts across jurisdictions. In the same way that the commercialization of the Internet brought about global commerce and global markets, the blockchain goes further. Consider that even in a single shard, nodes can be running in the US, Europe, and Asia. It is perfectly possible to write a smart contract that cryptographically locks up a digital asset in a manner that all parties to the contract agree is unfair or undesirable. In which jurisdiction is it reasonable to address any conflict about how to resolve the situation? Further, even if this were possible to decide, an adjudicator might render a decision, but there is no actual remedy because the code doesn’t admit one.

Geopolitically based jurisdictional boundaries are of no practical use in these situations. Communities sharing common interests and common goals, despite being geographically, culturally, and even politically diverse are more aligned with the basic organizational components of the blockchain. Make no mistake, blockchain will provide fundamental challenges to global jurisprudence. In light of the crises we are facing, this development seems timely: Climate change also cuts across jurisdictional boundaries. We must work together across municipal, provincial, and national boundaries. The Earth is humanity’s commons. We cannot afford to destroy entire global ecosystems because we will all suffer the consequences, regardless of any particular geopolitical allegiances.


RChain Cooperative is set up to accept all kinds of engagement. We value all forms, whether it is work on community development, business development, or software development; whether it is financial guidance, business partnership, or acquiring tokens, we value all of it. You will find that the RChain community is one of energy and diversity, inspiration, commitment and hard work. If you find these ideas are speaking to you and want to get involved, in the section below we identify the opportunities and speak to ways different communities and stakeholders can get involved.


The single biggest factor in driving network adoption is the total volume of transactions on the network, and the single biggest factor in driving the volume of network transactions is decentralized apps, or dApps. As with any adoption strategy, part of it is magical mystery: which narrative will capture the imagination of the market; and part of it is hard-core analytics. In the section below we want to speak to both, but emphasize the analytics.

Developer engagement

Developers are the thought leaders in technology driven markets, like the blockchain. Creating buzz within the technical communities attracts the attention of the business community, as well as the public sector. Among the many factors, the combination of economic opportunity and technological innovation is a powerful attractor for the developer community. The initial blockchain combination of a new insight into distributed consensus, together with the potential to reinvent money was irresistible for many developers and we witnessed a gold rush. Now, however, with the inability of proof-of-work to scale widely accepted and the exposure of many projects as scams, the ICO rush is over.

RChain offers a very different, and more grounded, combination of technical innovation and economic opportunity. Despite the field being decades old, and Moore’s law having ended more than 15 years ago, concurrency theory has not yet penetrated mainstream programming language design, though we do see signs such as the growing popularity of Go and Rust. Likewise, neither the benefits nor the brain candy of behavioral types have penetrated mainstream programming language design, nor the mainstream developer’s mindset. With the current emphasis in standard practice on microservices, Internet-scale applications, and blockchain’s revelations about distributed consensus, a shift toward protocol-oriented programming (POP) is highly likely.

The mobile process calculi have simply dominated the fields of protocol design and protocol analysis for decades. RChain’s rho-calculus-based language, Rholang, and correct-by-construction methodology offer developers a chance to come into the modern world where programming language semantics meets protocol design. Yet, it goes much further than that. Rather than stopping at a data-network-scale blockchain, RChain recognizes that data acquisition and management is more subtle than simply storing blobs on disk. Data comes with metadata and rights management.
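The flavor of programming with named channels, loosely in the spirit of the rho-calculus’s sends and receives, can be sketched even in Python, with queues standing in for channels and threads for processes. This is an analogy only, not Rholang.

```python
import threading, queue

# Channels as queues, processes as threads: a minimal nod to the
# send/receive primitives of the mobile process calculi.
payments = queue.Queue()
receipts = queue.Queue()

def merchant():
    # roughly: for (amount <- payments) { receipts!("received ...") }
    amount = payments.get()
    receipts.put(f"received {amount} REV")

t = threading.Thread(target=merchant)
t.start()
payments.put(10)          # roughly: payments!(10)
ack = receipts.get()      # roughly: for (ack <- receipts) { ... }
t.join()
print(ack)                # received 10 REV
```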


RChain has built a next-layer tool, RCat, short for RChain Asset Tracker. RCat allows developers to package large data blobs with metadata assets, such as audio or video files packaged with information about the creators and rights holders of that data. The metadata is vital for providing search capabilities over data blobs that do not enjoy the semantic search functions one finds in relational data.
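RCat’s actual API is not reproduced here; the following is a hypothetical sketch of the idea, bundling a blob with creator/rights metadata and searching a catalog by that metadata rather than by the opaque blob contents.

```python
import hashlib

def package_asset(blob: bytes, metadata: dict) -> dict:
    """Bundle a data blob with its creator/rights metadata; the content
    hash lets the chain reference the blob, the metadata makes it searchable."""
    return {
        "content_hash": hashlib.sha256(blob).hexdigest(),
        "metadata": metadata,
        "blob": blob,
    }

catalog = [
    package_asset(b"...audio bytes...",
                  {"title": "Song A", "artist": "Ada", "rights": "Ada 80% / Label 20%"}),
    package_asset(b"...video bytes...",
                  {"title": "Clip B", "artist": "Grace", "rights": "Grace 100%"}),
]

def search(catalog, **wanted):
    """Metadata search over blobs that have no semantic structure of their own."""
    return [a for a in catalog
            if all(a["metadata"].get(k) == v for k, v in wanted.items())]

print([a["metadata"]["title"] for a in search(catalog, artist="Ada")])  # ['Song A']
```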

RChain has also run experiments building both in-house and third-party dApps. The in-house dApp, RSong, factored into the RCat backend and an RSong player front end to allow artists to offer songs to their audiences. The audio data is stored on-chain and delivered from the chain. The payments for audience access to the audio data go directly to the artist on the basis of use and the rights management data in the associated metadata. The third-party dApp, Proof, provided crowd-sourced verification of news stories, again with the data stored on-chain and delivered from the chain.

One of the most important measures in the development of these two dApps is the time to MVP (minimum viable product) and how many resources were needed to get there. In both cases, we saw on the order of two months and two engineers. This means that for roughly the resource investment it cost a project to assemble the assets for an ICO, a startup could already have an early-stage working product. These are the kinds of metrics that developers and the business community pay attention to, as they represent demonstrable, measurable economic opportunity, while at the same time weeding out the scams.

Stepping back to consider the wider view, RChain offers a more solid and stable way to reinvigorate the developer interest in blockchain by creating a dApp factory. Decentralizing digital asset platforms was the impulse that had developers exploring technologies like blockchain in the first place. RChain reminds us of that objective and shows that it is in reach. This brings our aim of building a new coordination infrastructure, one capable of global, grass roots coordination around the problems of climate change, much closer into view by attracting the developers with the drive and incentives to make it happen.

Mainstream adoption

Keeping an eye on the main point of this section, the real adoption of the network is going to come from high transaction volume. This comes in turn from the network effects associated with wider adoption of dApps running on the network. If developer engagement is the spark, what is the oxygen that fans the flames of mainstream adoption? Again, the Internet has been very clear: the social use of digital assets. Look no further than TikTok, YouTube, Instagram, or WeChat. This phenomenon is incredibly fortuitous because it means dApps can focus on high-volume, low-risk transaction markets. There are trillions of Facebook posts, the loss of any 10,000 of which has virtually no impact on market perception of the robustness or utility of the network.

By contrast, in the high-volume financial markets, the loss of 10,000 trades would be immediately noticed and cause for serious concern. The loss or corruption of a single high-value transaction would bring immediate repercussions in terms of the market perception of the viability of the network. The angst of the Ethereum community around the DAO bug would be nothing compared to Wall St or Beijing’s reaction to the loss of hundreds of millions in foreign exchange trades. But, even with correct-by-construction methodology, we must acknowledge that all blockchain technologies are in their infancy. There will be bugs. Again, look at the history of software. Whether we are talking about Microsoft products or organically grown Internet projects, such as Python, it is typically not until the 4th iteration of a project that we see the kind of maturity and robustness needed to deploy it on mission-critical applications involving human lives or high-dollar-value transactions.

So, dreams of deploying blockchain on supply chain management infrastructure, or manufacturing, or financial markets must wait a good decade before the technology is robust enough to meet those standards. Ironically, while the fate of the ecosystems of the entire planet should by all rights be considered mission critical, the application of blockchain technologies to these problems is not direct. Instead it’s all about the communication and coordination of humans who are focused on these problems. In short it’s about the social use of digital assets. That’s what makes blockchain a critical component in our urgent need to coordinate our global response to Climate Change.


More ironically, the fair treatment of the creative classes, especially artists in the entertainment sectors, may be the shortest and cheapest path to mainstream adoption. In much the same way that developers end up being the thought leaders for the technically oriented markets, artists in the entertainment sectors are often the thought leaders for mainstream adoption of certain platforms. No market sector is more organized in this way than music. By way of example, when Taylor Swift shared her sentiments regarding engagement in the political process in November of 2018, 65,000 people registered to vote that day.

This observation leads to some very basic arithmetic. Take 100 artists with a million followers each. If there is not too much cross-over between audiences, those 100 artists together command an audience of 100M followers. If there were a dApp allowing those artists to share their music with their audience, and those followers listened, quite conservatively, to 10 songs / month, that would be a minimum of 1B transactions / month on the network, generated by that one dApp. By contrast, Ethereum and Bitcoin together generate about 1M transactions / day. That’s 30M / month, a tiny fraction of what would be generated by a single music dApp.
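The arithmetic is easy to check:

```python
artists = 100
followers_per_artist = 1_000_000
plays_per_follower_per_month = 10   # a conservative listening rate

dapp_tx_per_month = artists * followers_per_artist * plays_per_follower_per_month
eth_btc_tx_per_month = 1_000_000 * 30   # ~1M combined transactions/day, per the text

print(f"{dapp_tx_per_month:,} vs {eth_btc_tx_per_month:,}")   # 1,000,000,000 vs 30,000,000
print(dapp_tx_per_month // eth_btc_tx_per_month)              # roughly 33x the volume
```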

The cost of user acquisition is also much smaller in the entertainment sector than in social media: WhatsApp spent roughly $20/user on acquisition, while Facebook spent more than twice that. This means the acquisition of a million users costs between $20M and $50M. In music, the artist sells to their followers. So, if the artist adopts the platform, they bring their followers, especially if content is released exclusively to the platform. This means that one could spend $100K / artist on the top 100 artists and still cut the user acquisition spend to anywhere between ½ and ⅕ of those figures.
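Again, the back-of-the-envelope numbers (using the $20-$50/user range above) bear this out:

```python
users = 1_000_000
acq_low, acq_high = 20 * users, 50 * users   # $20-$50 per user, per the text

artist_spend = 100_000 * 100                 # $100K on each of the top 100 artists
print(artist_spend / acq_high, artist_spend / acq_low)   # 0.2 0.5 -> between 1/5 and 1/2
```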

Of course, RChain’s aim is not a cynical reduction of market dynamics to numbers. This discussion primarily serves to illustrate the point that careful consideration of how to support the creative classes will bring about a massive shift in platform adoption; and this fits very well with an aim to accelerate global coordination around climate change. The artists and creative classes more generally are going to be people who are most likely to spearhead the change that must take place. Helping them find their way to an autonomous, self-directed engagement with their audience and supporters and helping them find their own regenerative economy provides a high profile example of how a community can self-organize and move away from co-dependence on a corrupt industry to something healthier and more lucrative.

In general, a movement toward a decentralized, green economy will have these kinds of benefits across the board, not just in the arts and entertainment sector. Leading with the arts and entertainment sectors provides an approach to the wider markets that is organized to minimize risk. The reader who feels compelled to build decentralized apps should look to the arts and entertainment as the first wave of blockchain. Just like the development of the Internet, begin with dApps that appeal to mainstream audiences but don’t risk everything on a single failure.


As with the gold rush to become a Bitcoin miner, the market is already seeing a surge to put together staking services for proof-of-stake nodes. RChain’s CBC-Casper is proof-of-stake, and we are seeing tremendous grass-roots interest in running RChain validator nodes. Initially, the Cooperative will run the majority of the nodes in the root shard, but eventually this should shift to a more decentralized distribution of nodes. However, to reach the contemplated transaction throughput, the hardware requirements for the backbone of the root shard are very high-end.

As with all offerings built around network effects, there is a cyclic dependency that is both an initial source of resistance and later becomes an engine of economic regeneration: to be economically attractive to validators the network needs high transaction volume; this means dApps, plain and simple. Yet, to be economically attractive to dApps the network needs a critical mass of validators. Of course, once critical mass is achieved, validators will flock to the network because they can make 10X what they were making on mining Ethereum while only charging users 1/100 of the transaction fees. Likewise, dApps will flock to the network because it’s the only network offering this transaction throughput with a trust model similar to Bitcoin’s that also addresses user-level smart contract security.

We believe the first wave of validators will be a mix of people who understand the wider mission, to build a coordination infrastructure that will support the global coordination necessary to deal with climate change, as well as people who see the real economic potential. This is one place where RChain and Goldman Sachs agree. Moving to a regenerative economy will create enormous wealth. It doesn’t matter which camp you see yourself belonging to, and you can certainly belong to both. If you want to engage by running validator nodes, don’t hesitate to contact the Cooperative. We are ready to support you.


When RChain achieves critical-mass network effects, the REV token will be needed by every dApp on the network, and those dApps will constitute a vibrant ecosystem of user services, from decentralized maps and location services to decentralized payments to a decentralized Uber and AirBnB. We want to work with those who understand that the management of this resource is a virtualization of managing similarly precious resources in the natural world. Every living organism on this planet respires; oxygen is a critical resource for all life. Likewise, water is essential to all life. Practicing responsible stewardship of a resource that is essential to every key user service on the network becomes a laboratory for learning how to work with more precious resources.

As mentioned several times, responsible management of such a resource is not at odds with wealth creation and abundance. Quite the contrary; learning how to manage critical resources responsibly is at the very heart of the regeneration of life on planet Earth.

Summary of opportunities

If you want to be involved with RChain, the first thing to do is to join the Cooperative. As a member, you are able to participate in the governance of the protocol, get rebates on use of the REV token to run smart contracts, and participate in member sales of tokens. Additionally, you can run validator nodes to help secure the network, and earn transaction fees. Finally, you can build dApps that offer key user services and bring transactions to the network.

None of these are mutually exclusive. Every dApp will want to run its own validator nodes to ensure delivery of transactions to the network and earn transaction fees to recoup some of the cost of running them. Every validator will need tokens for staking, and every dApp will need tokens to run their smart contracts. All of these roles in the ecosystem benefit from being a member of the Cooperative, and taking an active role in the governance of the protocol.

Technical Status

RChain main net is live and has been in operation since February 2020. Network statistics are available at https://revdefine.io.

We are continuously improving the network with new features and performance improvements. You can join the weekly tech updates and community debriefs on Wednesdays.

All of RChain’s code is open source and is available on GitHub. The websites www.rchain.coop, https://developer.rchain.coop/ and http://blog.rchain.coop/ contain more information.


https://rchain.coop/team.html contains details about the current management, board, and dev leadership.


“In strange and uncertain times, sometimes a reasonable person might despair. But Hope is unreasonable. And Love is greater even than this.” Robert Fripp

Without a doubt we are facing catastrophic collapse of ecosystems. We will not get through this unscathed. We cannot continue doing business as usual. Blockchain, as with other key technologies, shows up at just the time we need technological support for massive global coordination. All the ingredients we need to transition to a better world are available in our present moment. It’s up to us to recognize them and put them to use.

RChain is not just a network of computers and innovative software. RChain is a network of people, people who understand the situation and rather than succumbing to despair are willing to work together to find solutions. Please join us, not because it’s reasonable, but because you also feel that unreasonable, uncontainable sense of Hope and the warmth of Love beyond it.

copyright © 2022 RChain Cooperative, All rights reserved
RChain and REV are registered trademarks of RChain Cooperative
Privacy terms and conditions