Welcome to Hyperledger Fabric¶
Hyperledger Fabric is a platform for distributed ledger solutions, underpinned by a modular architecture delivering high degrees of confidentiality, resiliency, flexibility and scalability. It is designed to support pluggable implementations of different components, and accommodate the complexity and intricacies that exist across the economic ecosystem.
Hyperledger Fabric delivers a uniquely elastic and extensible architecture, distinguishing it from alternative blockchain solutions. Planning for the future of enterprise blockchain requires building on top of a fully-vetted, open source architecture; Hyperledger Fabric is your starting point.
It’s recommended for first-time users to begin by going through the Getting Started section in order to gain familiarity with the Hyperledger Fabric components and the basic transaction flow. Once comfortable, continue exploring the library for demos, technical specifications, APIs, etc.
Note
If you have questions not addressed by this documentation, or run into issues with any of the tutorials, please visit the Still Have Questions? page for some tips on where to find additional help.
Before diving in, watch how Hyperledger Fabric is Building a Blockchain for Business:
Prerequisites¶
Install cURL¶
Download the latest version of the cURL tool if it is not already installed or if you get errors running the curl commands from the documentation.
Note
If you’re on Windows please see the specific note on Windows extras below.
Docker and Docker Compose¶
You will need the following installed on the platform on which you will be operating, or developing on (or for), Hyperledger Fabric:
- MacOSX, *nix, or Windows 10: Docker version 17.03.0-ce or greater is required.
- Older versions of Windows: Docker Toolbox; again, Docker version 17.03.0-ce or greater is required.
You can check the version of Docker you have installed with the following command from a terminal prompt:
docker --version
Note
Installing Docker for Mac or Windows, or Docker Toolbox will also install Docker Compose. If you already had Docker installed, you should check that you have Docker Compose version 1.8 or greater installed. If not, we recommend that you install a more recent version of Docker.
You can check the version of Docker Compose you have installed with the following command from a terminal prompt:
docker-compose --version
Go Programming Language¶
Hyperledger Fabric uses the Go programming language 1.7.x for many of its components.
Given that we are writing a Go chaincode program, we need to be sure that the source code is located somewhere within the $GOPATH tree. First, you will need to check that you have set your $GOPATH environment variable.
echo $GOPATH
/Users/xxx/go
If nothing is displayed when you echo $GOPATH, you will need to set it. Typically, the value will be a directory tree child of your development workspace, if you have one, or a child of your $HOME directory. Since we'll be doing a bunch of coding in Go, you might want to add the following to your ~/.bashrc:
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
Node.js Runtime and NPM¶
If you will be developing applications for Hyperledger Fabric leveraging the Hyperledger Fabric SDK for Node.js, you will need to have version 6.9.x of Node.js installed.
Note
Installing Node.js will also install NPM; however, it is recommended that you confirm the version of NPM installed. You can upgrade the npm tool with the following command:
npm install npm@3.10.10 -g
Windows extras¶
If you are developing on Windows, you will want to work within the Docker Quickstart Terminal, which uses Git Bash and provides a better alternative to the built-in Windows shell; you typically get it as part of installing Docker Toolbox on Windows 7. However, experience has shown Git Bash to be a poor development environment with limited functionality. It is suitable for running Docker-based scenarios, such as Getting Started, but you may have difficulties with operations involving the make command.
Before running any git clone commands, run the following commands:
git config --global core.autocrlf false
git config --global core.longpaths true
You can check the setting of these parameters with the following commands:
git config --get core.autocrlf
git config --get core.longpaths
These need to be false and true respectively.
The curl command that comes with Git and Docker Toolbox is old and does not properly handle the redirect used in Getting Started. Make sure you install and use a newer version from the cURL downloads page.
For Node.js you also need the necessary Visual Studio C++ Build Tools which are freely available and can be installed with the following command:
npm install --global windows-build-tools
See the NPM windows-build-tools page for more details.
Once this is done, you should also install the NPM GRPC module with the following command:
npm install --global grpc
Your environment should now be ready to go through the Getting Started samples and tutorials.
Getting Started¶
Install Prerequisites¶
Before we begin, if you haven’t already done so, you may wish to check that you have all the Prerequisites installed on the platform(s) on which you’ll be developing blockchain applications and/or operating Hyperledger Fabric.
Install Binaries and Docker Images¶
While we work on developing real installers for the Hyperledger Fabric binaries, we provide a script that will Download Platform-specific Binaries to your system. The script will also download the Docker images to your local registry.
Hyperledger Fabric Samples¶
We offer a set of sample applications. You may wish to install these Hyperledger Fabric Samples before starting with the tutorials, as the tutorials leverage the sample code.
API Documentation¶
The API documentation for Hyperledger Fabric’s Golang APIs can be found on the godoc site for Fabric. If you plan on doing any development using these APIs, you may want to bookmark those links now.
Hyperledger Fabric SDKs¶
Hyperledger Fabric intends to offer a number of SDKs for a wide variety of programming languages. The first two delivered SDKs are the Node.js and Java SDKs. We hope to provide Python and Go SDKs soon after the 1.0.0 release.
Hyperledger Fabric CA¶
Hyperledger Fabric provides an optional certificate authority service that you may choose to use to generate the certificates and key material to configure and manage identity in your blockchain network. However, any CA that can generate ECDSA certificates may be used.
Tutorials¶
We offer four initial tutorials to get you started with Hyperledger Fabric. The first is oriented to the Hyperledger Fabric application developer, Writing Your First Application. It takes you through the process of writing your first blockchain application for Hyperledger Fabric using the Hyperledger Fabric Node SDK.
The second tutorial is oriented towards Hyperledger Fabric network operators, Building Your First Network. This one walks you through the process of establishing a blockchain network using Hyperledger Fabric and provides a basic sample application to test it out.
Finally, we offer two chaincode tutorials. One oriented to developers, Chaincode for Developers, and the other oriented to operators, Chaincode for Operators.
Hyperledger Fabric Samples¶
Note
If you are running on Windows you will want to make use of the Docker Quickstart Terminal for the upcoming terminal commands. Please visit the Prerequisites if you haven’t previously installed it.
If you are using Docker Toolbox on Windows 7 or macOS, you will need to use a location under C:\Users (Windows 7) or /Users (macOS) when installing and running the samples.
If you are using Docker for Mac, you will need to use a location under /Users, /Volumes, /private, or /tmp. To use a different location, please consult the Docker documentation for file sharing.
If you are using Docker for Windows, please consult the Docker documentation for shared drives and use a location under one of the shared drives.
Determine a location on your machine where you want to place the Hyperledger Fabric samples applications repository and open that in a terminal window. Then, execute the following commands:
git clone https://github.com/hyperledger/fabric-samples.git
cd fabric-samples
Download Platform-specific Binaries¶
Next, we will install the Hyperledger Fabric platform-specific binaries. This process was designed to complement the Hyperledger Fabric Samples above, but can be used independently. If you are not installing the samples above, then simply create and enter a directory into which to extract the contents of the platform-specific binaries.
Please execute the following command from within the directory into which you will extract the platform-specific binaries:
curl -sSL https://goo.gl/byy2Qj | bash -s 1.0.5
Note
If you get an error running the above curl command, you may have too old a version of curl. Please visit the Prerequisites page for additional information on where to find the latest version.
The curl command above downloads and executes a bash script that will download and extract all of the platform-specific binaries you will need to set up your network and place them into the cloned repo you created above. It retrieves four platform-specific binaries:
cryptogen, configtxgen, configtxlator, and peer, and places them in the bin sub-directory of the current working directory.
You may want to add that to your PATH environment variable so that these can be picked up without fully qualifying the path to each binary. e.g.:
export PATH=<path to download location>/bin:$PATH
Finally, the script will download the Hyperledger Fabric docker images from Docker Hub into your local Docker registry and tag them as ‘latest’.
The script lists out the Docker images installed upon conclusion.
Look at the names for each image; these are the components that will ultimately comprise our Hyperledger Fabric network. You will also notice that you have two instances of the same image ID - one tagged as “x86_64-1.0.X” and one tagged as “latest”.
Note
On different architectures, the x86_64 would be replaced with the string identifying your architecture.
Introduction¶
Hyperledger Fabric is a platform for distributed ledger solutions underpinned by a modular architecture delivering high degrees of confidentiality, resiliency, flexibility and scalability. It is designed to support pluggable implementations of different components and accommodate the complexity and intricacies that exist across the economic ecosystem.
Hyperledger Fabric delivers a uniquely elastic and extensible architecture, distinguishing it from alternative blockchain solutions. Planning for the future of enterprise blockchain requires building on top of a fully vetted, open-source architecture; Hyperledger Fabric is your starting point.
We recommend first-time users begin by going through the rest of the introduction below in order to gain familiarity with how blockchains work and with the specific features and components of Hyperledger Fabric.
Once comfortable – or if you’re already familiar with blockchain and Hyperledger Fabric – go to Getting Started and from there explore the demos, technical specifications, APIs, etc.
What is a Blockchain?¶
A Distributed Ledger
At the heart of a blockchain network is a distributed ledger that records all the transactions that take place on the network.
A blockchain ledger is often described as decentralized because it is replicated across many network participants, each of whom collaborate in its maintenance. We’ll see that decentralization and collaboration are powerful attributes that mirror the way businesses exchange goods and services in the real world.

In addition to being decentralized and collaborative, the information recorded to a blockchain is append-only, using cryptographic techniques that guarantee that once a transaction has been added to the ledger it cannot be modified. This property of immutability makes it simple to determine the provenance of information because participants can be sure information has not been changed after the fact. It’s why blockchains are sometimes described as systems of proof.
Smart Contracts
To support the consistent update of information – and to enable a whole host of ledger functions (transacting, querying, etc) – a blockchain network uses smart contracts to provide controlled access to the ledger.

Smart contracts are not only a key mechanism for encapsulating information and keeping it simple across the network, they can also be written to allow participants to execute certain aspects of transactions automatically.
A smart contract can, for example, be written to stipulate the cost of shipping an item that changes depending on when it arrives. With the terms agreed to by both parties and written to the ledger, the appropriate funds change hands automatically when the item is received.
Consensus
The process of keeping the ledger transactions synchronized across the network – to ensure that ledgers only update when transactions are approved by the appropriate participants, and that when ledgers do update, they update with the same transactions in the same order – is called consensus.

We’ll learn a lot more about ledgers, smart contracts and consensus later. For now, it’s enough to think of a blockchain as a shared, replicated transaction system which is updated via smart contracts and kept consistently synchronized through a collaborative process called consensus.
Why is a Blockchain useful?¶
Today’s Systems of Record
The transactional networks of today are little more than slightly updated versions of networks that have existed since business records have been kept. The members of a Business Network transact with each other, but they maintain separate records of their transactions. And the things they’re transacting – whether it’s Flemish tapestries in the 16th century or the securities of today – must have their provenance established each time they’re sold to ensure that the business selling an item possesses a chain of title verifying their ownership of it.
What you’re left with is a business network that looks like this:

Modern technology has taken this process from stone tablets and paper folders to hard drives and cloud platforms, but the underlying structure is the same. Unified systems for managing the identity of network participants do not exist, establishing provenance is so laborious it takes days to clear securities transactions (the world volume of which is numbered in the many trillions of dollars), contracts must be signed and executed manually, and every database in the system contains unique information and therefore represents a single point of failure.
It’s impossible with today’s fractured approach to information and process sharing to build a system of record that spans a business network, even though the needs of visibility and trust are clear.
The Blockchain Difference
What if instead of the rat’s nest of inefficiencies represented by the “modern” system of transactions, business networks had standard methods for establishing identity on the network, executing transactions, and storing data? What if establishing the provenance of an asset could be determined by looking through a list of transactions that, once written, cannot be changed, and can therefore be trusted?
That business network would look more like this:

This is a blockchain network. Every participant in it has their own replicated copy of the ledger. In addition to ledger information being shared, the processes which update the ledger are also shared. Unlike today’s systems, where a participant’s private programs are used to update their private ledgers, a blockchain system has shared programs to update shared ledgers.
With the ability to coordinate their business network through a shared ledger, blockchain networks can reduce the time, cost, and risk associated with private information and processing while improving trust and visibility.
You now know what blockchain is and why it’s useful. There are a lot of other details that are important, but they all relate to these fundamental ideas of the sharing of information and processes.
What is Hyperledger Fabric?¶
The Linux Foundation founded Hyperledger in 2015 to advance cross-industry blockchain technologies. Rather than declaring a single blockchain standard, it encourages a collaborative approach to developing blockchain technologies via a community process, with intellectual property rights that encourage open development and the adoption of key standards over time.
Hyperledger Fabric is one of the blockchain projects within Hyperledger. Like other blockchain technologies, it has a ledger, uses smart contracts, and is a system by which participants manage their transactions.
Where Hyperledger Fabric breaks from some other blockchain systems is that it is private and permissioned. Rather than an open permissionless system that allows unknown identities to participate in the network (requiring protocols like Proof of Work to validate transactions and secure the network), the members of a Hyperledger Fabric network enroll through a Membership Service Provider (MSP).
Hyperledger Fabric also offers several pluggable options. Ledger data can be stored in multiple formats, consensus mechanisms can be switched in and out, and different MSPs are supported.
Hyperledger Fabric also offers the ability to create channels, allowing a group of participants to create a separate ledger of transactions. This is an especially important option for networks where some participants might be competitors and not want every transaction they make - a special price they’re offering to some participants and not others, for example - known to every participant. If two participants form a channel, then those participants – and no others – have copies of the ledger for that channel.
Shared Ledger
Hyperledger Fabric has a ledger subsystem comprising two components: the world state and the transaction log. Each participant has a copy of the ledger for every Hyperledger Fabric network they belong to.
The world state component describes the state of the ledger at a given point in time. It’s the database of the ledger. The transaction log component records all transactions which have resulted in the current value of the world state. It’s the update history for the world state. The ledger, then, is a combination of the world state database and the transaction log history.
The ledger has a replaceable data store for the world state. By default, this is a LevelDB key-value store database. The transaction log does not need to be pluggable. It simply records the before and after values of the ledger database being used by the blockchain network.
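The world-state-plus-log structure can be modeled in a few lines of Go. This is an illustrative sketch, not Fabric's actual LevelDB-backed implementation; all names here are invented:

```go
package main

import "fmt"

// logEntry records the before and after values of one update, mirroring
// the role of the transaction log described above.
type logEntry struct {
	Key    string
	Before string // value prior to the update ("" if the key was absent)
	After  string // value after the update
}

// ledger pairs the world state (a key-value database) with the
// transaction log (the update history of the world state).
type ledger struct {
	worldState map[string]string
	txLog      []logEntry
}

func newLedger() *ledger {
	return &ledger{worldState: map[string]string{}}
}

// put updates the world state and appends the change to the log.
func (l *ledger) put(key, value string) {
	l.txLog = append(l.txLog, logEntry{key, l.worldState[key], value})
	l.worldState[key] = value
}

// replay rebuilds the world state purely from the transaction log,
// showing that the ledger is the combination of the two components.
func (l *ledger) replay() map[string]string {
	state := map[string]string{}
	for _, e := range l.txLog {
		state[e.Key] = e.After
	}
	return state
}

func main() {
	l := newLedger()
	l.put("asset1", "owned by A")
	l.put("asset1", "owned by B")
	fmt.Println(l.worldState["asset1"]) // current value from the world state
	fmt.Println(l.replay()["asset1"])   // the same value, derived from the log
}
```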
Smart Contracts
Hyperledger Fabric smart contracts are written in chaincode and are invoked by an application external to the blockchain when that application needs to interact with the ledger. In most cases chaincode only interacts with the database component of the ledger, the world state (querying it, for example), and not the transaction log.
Chaincode can be implemented in several programming languages. The currently supported chaincode language is Go, with support for Java and other languages coming in future releases.
Privacy
Depending on the needs of a network, participants in a Business-to-Business (B2B) network might be extremely sensitive about how much information they share. For other networks, privacy will not be a top concern.
Hyperledger Fabric supports networks where privacy (using channels) is a key operational requirement as well as networks that are comparatively open.
Consensus
Transactions must be written to the ledger in the order in which they occur, even though they might be between different sets of participants within the network. For this to happen, the order of transactions must be established and a method for rejecting bad transactions that have been inserted into the ledger in error (or maliciously) must be put into place.
This is a thoroughly researched area of computer science, and there are many ways to achieve it, each with different trade-offs. For example, PBFT (Practical Byzantine Fault Tolerance) can provide a mechanism for file replicas to communicate with each other to keep each copy consistent, even in the event of corruption. Alternatively, in Bitcoin, ordering happens through a process called mining where competing computers race to solve a cryptographic puzzle which defines the order that all processes subsequently build upon.
Hyperledger Fabric has been designed to allow network starters to choose a consensus mechanism that best represents the relationships that exist between participants. As with privacy, there is a spectrum of needs; from networks that are highly structured in their relationships to those that are more peer-to-peer.
We’ll learn more about the Hyperledger Fabric consensus mechanisms, which currently include SOLO and Kafka and will soon extend to SBFT (Simplified Byzantine Fault Tolerance), in another document.
Where can I learn more?¶
We provide a number of tutorials where you’ll be introduced to most of the key components within a blockchain network, learn more about how they interact with each other, and then you’ll actually get the code and run some simple transactions against a running blockchain network. We also provide tutorials for those of you thinking of operating a blockchain network using Hyperledger Fabric.
We also offer a deeper look at the components and concepts brought up in this introduction, as well as a few others, and describe how they work together in a sample transaction flow.
Hyperledger Fabric Capabilities¶
Hyperledger Fabric is a unique implementation of distributed ledger technology (DLT) that delivers enterprise-ready network security, scalability, confidentiality and performance, in a modular blockchain architecture. Hyperledger Fabric delivers the following blockchain network capabilities:
Identity management¶
To enable permissioned networks, Hyperledger Fabric provides a membership identity service that manages user IDs and authenticates all participants on the network. Access control lists can be used to provide additional layers of permission through authorization of specific network operations. For example, a specific user ID could be permitted to invoke a chaincode application, but blocked from deploying new chaincode. One truism about Hyperledger Fabric networks is that members know each other (identity), but they do not know what each other are doing (privacy and confidentiality).
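The access-control example above (a user ID permitted to invoke chaincode but blocked from deploying it) amounts to a simple ACL lookup; the identities and operation names in this Go sketch are invented for illustration:

```go
package main

import "fmt"

// acl maps each enrolled identity to the network operations it is
// authorized to perform. Hypothetical identities and operations only.
var acl = map[string]map[string]bool{
	"user1": {"invoke": true, "deploy": false},
	"admin": {"invoke": true, "deploy": true},
}

// authorized reports whether an identity may perform an operation.
// Unknown identities fall through to a nil map and are always denied.
func authorized(id, op string) bool {
	return acl[id][op]
}

func main() {
	fmt.Println(authorized("user1", "invoke"))   // true
	fmt.Println(authorized("user1", "deploy"))   // false: blocked from deploying
	fmt.Println(authorized("mallory", "invoke")) // false: unenrolled identity
}
```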
Privacy and confidentiality¶
Hyperledger Fabric enables competing business interests, and any groups that require private, confidential transactions, to coexist on the same permissioned network. Private channels are restricted messaging paths that can be used to provide transaction privacy and confidentiality for specific subsets of network members. All data, including transaction, member and channel information, on a channel are invisible and inaccessible to any network members not explicitly granted access to that channel.
Efficient processing¶
Hyperledger Fabric assigns network roles by node type. To provide concurrency and parallelism to the network, transaction execution is separated from transaction ordering and commitment. Executing transactions prior to ordering them enables each peer node to process multiple transactions simultaneously. This concurrent execution increases processing efficiency on each peer and accelerates delivery of transactions to the ordering service.
In addition to enabling parallel processing, the division of labor unburdens ordering nodes from the demands of transaction execution and ledger maintenance, while peer nodes are freed from ordering (consensus) workloads. This bifurcation of roles also limits the processing required for authorization and authentication; all peer nodes do not have to trust all ordering nodes, and vice versa, so processes on one can run independently of verification by the other.
Chaincode functionality¶
Chaincode applications encode logic that is invoked by specific types of transactions on the channel. Chaincode that defines parameters for a change of asset ownership, for example, ensures that all transactions that transfer ownership are subject to the same rules and requirements. System chaincode is distinguished as chaincode that defines operating parameters for the entire channel. Lifecycle and configuration system chaincode defines the rules for the channel; endorsement and validation system chaincode defines the requirements for endorsing and validating transactions.
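A rule of the kind described, where every ownership transfer is subject to the same requirement, might look like the following plain-Go sketch. The asset shape and the rule are invented for illustration; real chaincode enforces such rules against the ledger rather than an in-memory struct:

```go
package main

import (
	"errors"
	"fmt"
)

// asset is a hypothetical ledger record with an owner.
type asset struct {
	ID    string
	Owner string
}

// transfer encodes the single rule every ownership change is subject to:
// only the current owner may hand the asset to a new owner.
func transfer(a *asset, caller, newOwner string) error {
	if caller != a.Owner {
		return errors.New("transfer rejected: caller is not the current owner")
	}
	a.Owner = newOwner
	return nil
}

func main() {
	car := &asset{ID: "CAR1", Owner: "alice"}
	if err := transfer(car, "bob", "carol"); err != nil {
		fmt.Println(err) // bob does not own CAR1, so the rule rejects him
	}
	_ = transfer(car, "alice", "bob") // alice owns CAR1, so this succeeds
	fmt.Println(car.Owner)           // bob
}
```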
Modular design¶
Hyperledger Fabric implements a modular architecture to provide functional choice to network designers. Specific algorithms for identity, ordering (consensus) and encryption, for example, can be plugged in to any Hyperledger Fabric network. The result is a universal blockchain architecture that any industry or public domain can adopt, with the assurance that its networks will be interoperable across market, regulatory and geographic boundaries. By contrast, current alternatives to Hyperledger Fabric are largely partisan, constrained and industry-specific.
This section was translated by Liu Boyu; last updated 2018.1.3 (original link).
Hyperledger Fabric Model¶
This section outlines the key design features of Hyperledger Fabric and how they fulfill its promise of a comprehensive, yet customizable, enterprise blockchain solution.
- Assets - Asset definitions enable the exchange of almost anything with monetary value over the network, from food to antique cars to currency futures.
- Chaincode - Chaincode execution is partitioned from transaction ordering, limiting the required levels of trust and verification across node types, and optimizing network scalability and performance.
- Ledger Features - The immutable, shared ledger encodes the entire transaction history for each channel, and includes SQL-like query capability for efficient auditing and dispute resolution.
- Privacy through Channels - Channels enable multi-lateral transactions with the high degrees of privacy and confidentiality required by competing businesses and regulated industries that exchange assets on a common network.
- Security & Membership Services - Permissioned membership provides a trusted blockchain network, where participants know that all transactions can be detected and traced by authorized regulators and auditors.
- Consensus - A unified approach to consensus enables the flexibility and scalability needed for the enterprise.
Assets¶
Assets can range from the tangible (real estate and hardware) to the intangible (contracts and intellectual property). Hyperledger Fabric provides the ability to modify assets using chaincode transactions.
Assets are represented in Hyperledger Fabric as a collection of key-value pairs, with state changes recorded as transactions on a channel ledger. Assets can be represented in binary and/or JSON form.
You can easily define and use assets in your Hyperledger Fabric applications using the Hyperledger Composer tool.
Chaincode¶
Chaincode is software defining an asset or assets, and the transaction instructions for modifying the asset(s); in other words, it is the business logic. Chaincode enforces the rules for reading or altering key-value pairs or other state database information. Chaincode functions execute against the ledger's current state database and are initiated through a transaction proposal. Chaincode execution results in a set of key-value writes (a write set) that can be submitted to the network and applied to the ledger on all peers.
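The way execution produces a write set, rather than mutating the state database directly, can be sketched as follows. This is an illustrative model with invented names, not the Fabric chaincode API:

```go
package main

import "fmt"

// simulator stands in for the machinery that runs a chaincode function:
// reads go against the current state database and are recorded; writes
// are buffered into a write set instead of being applied immediately.
type simulator struct {
	state    map[string]string
	readSet  map[string]string // key -> value observed during execution
	writeSet map[string]string // key -> value to be applied on commit
}

func newSimulator(state map[string]string) *simulator {
	return &simulator{state: state, readSet: map[string]string{}, writeSet: map[string]string{}}
}

func (s *simulator) getState(key string) string {
	v := s.state[key]
	s.readSet[key] = v
	return v
}

func (s *simulator) putState(key, value string) {
	s.writeSet[key] = value // buffered; the state database is untouched
}

// commit applies a buffered write set to one peer's copy of the ledger;
// the same write set is applied on every peer.
func commit(state, writeSet map[string]string) {
	for k, v := range writeSet {
		state[k] = v
	}
}

func main() {
	db := map[string]string{"marble1": "blue"}
	sim := newSimulator(db)
	color := sim.getState("marble1")  // read recorded in the read set
	sim.putState("marble1", "red")    // write buffered in the write set
	fmt.Println(color, db["marble1"]) // blue blue: nothing applied yet
	commit(db, sim.writeSet)
	fmt.Println(db["marble1"]) // red
}
```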
Ledger Features¶
The ledger is the sequenced, tamper-resistant record of all state transitions in Hyperledger Fabric. State transitions are a result of chaincode invocations ("transactions") submitted by participating parties. Each transaction results in a set of asset key-value pairs that are committed to the ledger as creates, updates, or deletes.
The ledger is comprised of a blockchain ("chain") to store the immutable, sequenced record in blocks, as well as a state database to maintain the current state. There is one ledger per channel. Each peer maintains a copy of the ledger for each channel of which they are a member.
- Query and update the ledger using key-based lookups, range queries, and composite key queries
- Read-only queries using a rich query language (if using CouchDB as the state database)
- Read-only history queries: query ledger history for a key, enabling data provenance scenarios
- Transactions consist of the versions of keys/values that were read in chaincode (the read set) and keys/values that were written in chaincode (the write set)
- Transactions contain signatures of every endorsing peer and are submitted to the ordering service
- Transactions are ordered into blocks and are "delivered" from the ordering service to peers on a channel
- Peers validate transactions against endorsement policies and enforce the policies
- Prior to appending a block, a versioning check is performed to ensure that the states for assets that were read have not changed since chaincode execution time
- There is immutability once a transaction is validated and committed
- A channel's ledger contains a configuration block defining policies, access control lists, and other pertinent information
- Channels contain Membership Service Provider instances allowing for crypto materials to be derived from different certificate authorities
See the Ledger topic for a deeper dive on the databases, storage structure, and "query-ability".
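The versioning check from the list above can be sketched as comparing each version recorded in a transaction's read set against the committed state. This is a simplified MVCC-style model with invented types, not Fabric's implementation:

```go
package main

import "fmt"

// versioned is a committed value carrying a version number.
type versioned struct {
	Value   string
	Version int
}

// valid reports whether a transaction may be committed: every key it
// read must still be at the version it read, i.e. the asset states must
// not have changed since chaincode execution time.
func valid(state map[string]versioned, readSet map[string]int) bool {
	for key, ver := range readSet {
		if state[key].Version != ver {
			return false // stale read: another transaction updated this key first
		}
	}
	return true
}

func main() {
	state := map[string]versioned{"asset1": {"owned by A", 1}}
	readSet := map[string]int{"asset1": 1}
	fmt.Println(valid(state, readSet)) // true: nothing changed since execution

	state["asset1"] = versioned{"owned by B", 2} // a conflicting update commits first
	fmt.Println(valid(state, readSet))           // false: the transaction is invalidated
}
```

This is the check that guards against double spends: of two transactions reading the same version, only the first to commit passes.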
Privacy through Channels¶
Hyperledger Fabric employs an immutable ledger on a per-channel basis, as well as chaincode that can manipulate and modify the current state of assets (i.e. update key-value pairs). A ledger exists in the scope of a channel: it can be shared across the entire network (assuming every participant is operating on one common channel), or it can be privatized to include only a specific set of participants.
In the latter scenario, those participants create a separate channel and thereby isolate and segregate their transactions and ledger. In order to bridge the gap between total transparency and privacy, chaincode need only be installed on the peers that must access the asset states to perform reads and writes (in other words, if a chaincode is not installed on a peer, that peer will not be able to interface with the ledger). To further obfuscate the data, values within chaincode can be encrypted (in part or in total) using common cryptographic algorithms such as AES before being appended to the ledger.
Security & Membership Services¶
Hyperledger Fabric underpins a transactional network where all participants have known identities. Public Key Infrastructure (PKI) is used to generate cryptographic certificates that are tied to organizations, network components, and end users or client applications. As a result, data access control can be manipulated and governed on the broader network and on channel levels. This "permissioned" notion of Hyperledger Fabric, coupled with the existence and capabilities of channels, helps address scenarios where privacy and confidentiality are paramount concerns.
See the Membership Service Providers (MSP) topic to better understand Hyperledger Fabric's cryptographic implementation and its sign, verify, and authenticate approach.
Consensus¶
In distributed ledger technology, consensus has recently become synonymous with a specific algorithm within a single function. However, consensus encompasses more than simply agreeing upon the order of transactions, and this differentiation is highlighted in Hyperledger Fabric through its fundamental role in the entire transaction flow, from proposal and endorsement to ordering, validation, and commitment. In a nutshell, consensus is defined as the full-circle verification of the correctness of the set of transactions comprising a block.
Consensus is ultimately achieved when the order and results of a block's transactions have met the explicit policy criteria checks. These checks and balances take place throughout the lifecycle of a transaction, and include the usage of endorsement policies to dictate which specific members must endorse a certain transaction class, as well as system chaincodes to ensure that these policies are enforced and upheld. Prior to commitment, the peers employ these system chaincodes to make sure that enough endorsements are present and that they were derived from the appropriate entities. Moreover, before any block containing transactions is appended to the ledger, a versioning check takes place during which the current state of the ledger is agreed upon. This final check provides protection against double-spend operations and other threats that might compromise data integrity, and allows functions to be executed against non-static variables.
In addition to the multitude of endorsement, validity, and versioning checks, ongoing identity verifications happen in all directions of the transaction flow. Access control lists are implemented on hierarchical layers of the network (from the ordering service down to channels), and payloads are repeatedly signed, verified, and authenticated as a transaction proposal passes through the different architectural components. To conclude, consensus is not limited to the agreed-upon order of a batch of transactions; rather, it encompasses the ongoing verifications that take place during a transaction's journey from proposal to commitment.
Check out the Transaction Flow diagram for a visual representation of consensus.
This section was translated by Liu Boyu; last updated 2018.1.10 (original link).
Building Your First Network¶
Note
These instructions have been verified to work against the version "1.0.3" tagged Docker images and the pre-compiled setup utilities. If you run these commands with images or tools from the current master branch, it is possible that you will need to modify your configuration or will encounter errors.
The Build Your First Network (BYFN) scenario provisions a sample Hyperledger Fabric network consisting of two organizations, each maintaining two peer nodes, and a "solo" ordering service.
Prerequisites¶
Before we begin, if you haven't already done so, you may wish to check that you have all the Prerequisites installed on the platform(s) on which you'll be developing blockchain applications and/or operating Hyperledger Fabric.
You will also need to download and install the Hyperledger Fabric Samples. You will notice that there are a number of samples included in the fabric-samples repository. We will be using the first-network sample. Let's open that sub-directory now.
cd first-network
Note
The commands in this document MUST be run from the first-network sub-directory of the fabric-samples repository. If you elect to run the commands from a different location, the various provided scripts will be unable to find the binaries they need.
Want to run it now?¶
We provide a fully annotated script, byfn.sh, that leverages these Docker images to quickly bootstrap a Hyperledger Fabric network comprised of four peers representing two different organizations, and an orderer node. It will also launch a container to run a scripted execution that joins peers to a channel, deploys and instantiates chaincode, and drives the execution of transactions against the deployed chaincode.
Here is the help text for the byfn.sh script:
./byfn.sh -h
Usage:
byfn.sh -m up|down|restart|generate [-c <channel name>] [-t <timeout>]
byfn.sh -h|--help (print this message)
-m <mode> - one of 'up', 'down', 'restart' or 'generate'
- 'up' - bring up the network with docker-compose up
- 'down' - clear the network with docker-compose down
- 'restart' - restart the network
- 'generate' - generate required certificates and genesis block
-c <channel name> - config name to use (defaults to "mychannel")
-t <timeout> - CLI timeout duration in microseconds (defaults to 10000)
Typically, one would first generate the required certificates and
genesis block, then bring up the network. e.g.:
byfn.sh -m generate -c <channelname>
byfn.sh -m up -c <channelname>
If you choose not to supply a channel name, the script will use the default name mychannel. The CLI timeout parameter (specified with the -t flag) is an optional value; if you choose not to set it, your CLI container will exit upon conclusion of the script.
Generate Network Artifacts¶
Ready to give it a go? Okay then! Execute the following command:
./byfn.sh -m generate
You will see a brief description of what is about to occur, along with a yes/no command line prompt. Respond with a y to execute the described action.
Generating certs and genesis block for with channel 'mychannel' and CLI timeout of '10000'
Continue (y/n)?y
proceeding ...
/Users/xxx/dev/fabric-samples/bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
org1.example.com
2017-06-12 21:01:37.334 EDT [bccsp] GetDefault -> WARN 001 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
...
/Users/xxx/dev/fabric-samples/bin/configtxgen
##########################################################
######### Generating Orderer Genesis block ##############
##########################################################
2017-06-12 21:01:37.558 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.562 EDT [msp] getMspConfig -> INFO 002 intermediate certs folder not found at [/Users/xxx/dev/byfn/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts]. Skipping.: [stat /Users/xxx/dev/byfn/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts: no such file or directory]
...
2017-06-12 21:01:37.588 EDT [common/configtx/tool] doOutputBlock -> INFO 00b Generating genesis block
2017-06-12 21:01:37.590 EDT [common/configtx/tool] doOutputBlock -> INFO 00c Writing genesis block
#################################################################
### Generating channel configuration transaction 'channel.tx' ###
#################################################################
2017-06-12 21:01:37.634 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.644 EDT [common/configtx/tool] doOutputChannelCreateTx -> INFO 002 Generating new channel configtx
2017-06-12 21:01:37.645 EDT [common/configtx/tool] doOutputChannelCreateTx -> INFO 003 Writing new channel tx
#################################################################
####### Generating anchor peer update for Org1MSP ##########
#################################################################
2017-06-12 21:01:37.674 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.678 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 002 Generating anchor peer update
2017-06-12 21:01:37.679 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 003 Writing anchor peer update
#################################################################
####### Generating anchor peer update for Org2MSP ##########
#################################################################
2017-06-12 21:01:37.700 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 21:01:37.704 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 002 Generating anchor peer update
2017-06-12 21:01:37.704 EDT [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 003 Writing anchor peer update
This first step generates all of the certificates and keys for all our various network entities, the genesis block used to bootstrap the ordering service, and a collection of configuration transactions required to configure a Channel.
Bring Up the Network¶
Next, you can bring the network up with the following command:
./byfn.sh -m up
Once again, you will be prompted as to whether you wish to continue or abort. Respond with a y:
Starting with channel 'mychannel' and CLI timeout of '10000'
Continue (y/n)?y
proceeding ...
Creating network "net_byfn" with the default driver
Creating peer0.org1.example.com
Creating peer1.org1.example.com
Creating peer0.org2.example.com
Creating orderer.example.com
Creating peer1.org2.example.com
Creating cli
____ _____ _ ____ _____
/ ___| |_ _| / \ | _ \ |_ _|
\___ \ | | / _ \ | |_) | | |
___) | | | / ___ \ | _ < | |
|____/ |_| /_/ \_\ |_| \_\ |_|
Channel name : mychannel
Creating channel...
The logs will continue from there. This will launch all of the containers, and then drive a complete end-to-end application scenario. Upon successful completion, it should report the following in your terminal window:
2017-05-16 17:08:01.366 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-05-16 17:08:01.366 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-05-16 17:08:01.366 UTC [msp/identity] Sign -> DEBU 006 Sign: plaintext: 0AB1070A6708031A0C08F1E3ECC80510...6D7963631A0A0A0571756572790A0161
2017-05-16 17:08:01.367 UTC [msp/identity] Sign -> DEBU 007 Sign: digest: E61DB37F4E8B0D32C9FE10E3936BA9B8CD278FAA1F3320B08712164248285C54
Query Result: 90
2017-05-16 17:08:15.158 UTC [main] main -> INFO 008 Exiting.....
===================== Query on PEER3 on channel 'mychannel' is successful =====================
===================== All GOOD, BYFN execution completed =====================
_____ _ _ ____
| ____| | \ | | | _ \
| _| | \| | | | | |
| |___ | |\ | | |_| |
|_____| |_| \_| |____/
You can scroll through these logs to see the various transactions. If you don't get this result, then jump down to the Troubleshooting section and let's see whether we can help you discover what went wrong.
Bring Down the Network¶
Finally, let's bring it all down so we can explore the network setup one step at a time. The following will kill your containers, remove the crypto material and the four artifacts, and delete the chaincode images from your Docker Registry:
./byfn.sh -m down
Once again, you will be prompted to continue. Respond with a y:
Stopping with channel 'mychannel' and CLI timeout of '10000'
Continue (y/n)?y
proceeding ...
WARNING: The CHANNEL_NAME variable is not set. Defaulting to a blank string.
WARNING: The TIMEOUT variable is not set. Defaulting to a blank string.
Removing network net_byfn
468aaa6201ed
...
Untagged: dev-peer1.org2.example.com-mycc-1.0:latest
Deleted: sha256:ed3230614e64e1c83e510c0c282e982d2b06d148b1c498bbdcc429e2b2531e91
...
If you'd like to learn more about the underlying tooling and bootstrap mechanics, continue reading. In these next sections we'll walk through the various steps and requirements to build a fully-functional Hyperledger Fabric network.
Crypto Generator¶
We will use the cryptogen tool to generate the cryptographic material (x509 certs) for our various network entities. These certificates are representative of identities, and they allow for sign/verify authentication to take place as our entities communicate and transact.
How does it work?¶
Cryptogen consumes a file, crypto-config.yaml, that contains the network topology, and allows us to generate a set of certificates and keys for both the Organizations and the components that belong to those Organizations. Each Organization is provisioned a unique root certificate (ca-cert) that binds specific components (peers and orderers) to that Org. By assigning each Organization a unique CA certificate, we are mimicking a typical network where a participating Member would use its own Certificate Authority. Transactions and communications within Hyperledger Fabric are signed by an entity's private key (keystore), and then verified by means of a public key (signcerts).
You will notice a count variable within this file. We use this to specify the number of peers per Organization; in our case there are two peers per Org. We won't delve into the minutiae of x.509 certificates and public key infrastructure right now. If you're interested, you can peruse these topics on your own time.
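For reference, the per-organization section of crypto-config.yaml is shaped roughly like this (abridged from the sample file; consult the file itself for the authoritative version):

```yaml
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2   # two peers: peer0.org1.example.com and peer1.org1.example.com
    Users:
      Count: 1   # number of non-admin user identities to generate
```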
Before running the tool, let's take a quick look at a snippet from crypto-config.yaml. Pay specific attention to the "Name", "Domain" and "Specs" parameters under the OrdererOrgs header:
OrdererOrgs:
#---------------------------------------------------------
# Orderer
# --------------------------------------------------------
- Name: Orderer
Domain: example.com
CA:
Country: US
Province: California
Locality: San Francisco
# OrganizationalUnit: Hyperledger Fabric
# StreetAddress: address for org # default nil
# PostalCode: postalCode for org # default nil
# ------------------------------------------------------
# "Specs" - See PeerOrgs below for complete description
# -----------------------------------------------------
Specs:
- Hostname: orderer
# -------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ------------------------------------------------------
PeerOrgs:
# -----------------------------------------------------
# Org1
# ----------------------------------------------------
- Name: Org1
Domain: org1.example.com
We follow a convention of "{{.Hostname}}.{{.Domain}}" for naming network entities. So, using our ordering node as a reference point, we are left with an ordering node named orderer.example.com that is tied to an MSP ID of Orderer. This file contains extensive documentation on the definitions and syntax. You can also refer to the Membership Service Providers (MSP) documentation for a deeper dive on MSPs.
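The convention is easy to see with a trivial substitution (purely illustrative; cryptogen performs this templating internally from the Specs entries):

```shell
# "{{.Hostname}}.{{.Domain}}" for the Specs entry shown above
SPEC_HOSTNAME="orderer"
ORG_DOMAIN="example.com"
echo "${SPEC_HOSTNAME}.${ORG_DOMAIN}"   # prints: orderer.example.com
```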
After we run the cryptogen tool, the generated certificates and keys will be saved to a folder titled crypto-config.
Configuration Transaction Generator¶
The configtxgen tool is used to create four configuration artifacts:
- orderer genesis block,
- channel configuration transaction,
- and two anchor peer transactions, one for each Peer Org.
Please see Channel Configuration (configtxgen) for a complete description of the use of this tool.
The orderer block is the Genesis Block for the ordering service, and the channel transaction file is broadcast to the orderer at Channel creation time. The anchor peer transactions, as the name suggests, specify each Org's Anchor Peer on this channel.
How does it work?¶
Configtxgen consumes a file, configtx.yaml, that contains the definitions for the sample network. There are three members: one Orderer Org (OrdererOrg) and two Peer Orgs (Org1 & Org2), each managing and maintaining two peer nodes. This file also specifies a consortium, SampleConsortium, consisting of our two Peer Orgs. Pay specific attention to the "Profiles" section at the top of this file. You will notice that we have two unique headers: one for the orderer genesis block, TwoOrgsOrdererGenesis, and one for our channel, TwoOrgsChannel.
These headers are important, as we will pass them in as arguments when we create our artifacts.
Note
Notice that our SampleConsortium is defined in the system-level profile and then referenced by our channel-level profile. Channels exist within the purview of a consortium, and all consortia must be defined in the scope of the network as a whole.
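For orientation, the Profiles section of configtx.yaml is shaped roughly as follows (abridged; the YAML anchors such as *Org1 and *OrdererDefaults are defined elsewhere in the same file):

```yaml
Profiles:
  TwoOrgsOrdererGenesis:            # passed to configtxgen -outputBlock
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
    Consortiums:
      SampleConsortium:             # defined at the system level...
        Organizations:
          - *Org1
          - *Org2
  TwoOrgsChannel:                   # passed to configtxgen -outputCreateChannelTx
    Consortium: SampleConsortium    # ...and referenced at the channel level
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
        - *Org2
```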
This file also contains two additional specifications that are worth noting. Firstly, we specify the anchor peers for each Peer Org (peer0.org1.example.com & peer0.org2.example.com). Secondly, we point to the location of the MSP directory for each member, in turn allowing us to store the root certificates for each Org in the orderer genesis block. This is a critical concept: now any network entity communicating with the ordering service can have its digital signature verified.
Run the tools¶
You can manually generate the certificates/keys and the various configuration artifacts using the configtxgen and cryptogen commands. Alternatively, you could try to adapt the byfn.sh script to accomplish your objectives.
Manually generate the artifacts¶
You can refer to the generateCerts function in the byfn.sh script for the commands necessary to generate the certificates that will be used for your network configuration, as defined in the crypto-config.yaml file. However, for the sake of convenience, we will also provide a reference here.
First, let's run the cryptogen tool. The binary lives in the bin directory, so we need to provide the relative path to where the tool resides.
../bin/cryptogen generate --config=./crypto-config.yaml
You will likely see the following warning. It's innocuous; ignore it:
[bccsp] GetDefault -> WARN 001 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
Next, we need to tell the configtxgen tool where to look for the configtx.yaml file that it needs to ingest. We do this by setting the FABRIC_CFG_PATH environment variable to our present working directory:
export FABRIC_CFG_PATH=$PWD
Then, we'll invoke the configtxgen tool to create the orderer genesis block:
../bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
You can ignore the log warnings regarding intermediate certificates, certificate revocation lists (CRLs) and MSP configurations. We are not using any of those in this sample network.
Create a Channel Configuration Transaction¶
Next, we need to create the channel configuration transaction artifact. Be sure to replace $CHANNEL_NAME, or set CHANNEL_NAME as an environment variable that can be used throughout these instructions:
export CHANNEL_NAME=mychannel
# this file contains the definitions for our sample channel
../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME
Next, we will define the anchor peer for Org1 on the channel that we are constructing. Again, be sure to replace $CHANNEL_NAME or set the environment variable for the following commands:
../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP
Now, we will define the anchor peer for Org2 on the same channel:
../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org2MSP
Start the network¶
We will leverage a docker-compose script to spin up our network. The
docker-compose file references the images that we have previously downloaded,
and bootstraps the orderer with our previously generated genesis.block.
Make sure the line that runs script.sh remains commented out in docker-compose-cli.yaml, as shown in this snippet from the CLI service definition:
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
# command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'
volumes:
If left uncommented, that script will exercise all of the CLI commands when the network is started, as we describe in the What’s happening behind the scenes? section. However, we want to go through the commands manually in order to expose the syntax and functionality of each call.
Pass in a moderately high value for the TIMEOUT
variable (specified in seconds);
otherwise the CLI container, by default, will exit after 60 seconds.
Start your network:
CHANNEL_NAME=$CHANNEL_NAME TIMEOUT=<pick_a_value> docker-compose -f docker-compose-cli.yaml up -d
If you want to see the realtime logs for your network, then do not supply the -d
flag.
If you let the logs stream, then you will need to open a second terminal to execute the CLI calls.
Environment variables¶
For the following CLI commands against peer0.org1.example.com
to work, we need
to preface our commands with the four environment variables given below. These
variables for peer0.org1.example.com
are baked into the CLI container,
therefore we can operate without passing them. HOWEVER, if you want to send
calls to other peers or the orderer, then you will need to provide these
values accordingly. Inspect the docker-compose-base.yaml
for the specific
paths:
# Environment variables for PEER0
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
CORE_PEER_ADDRESS=peer0.org1.example.com:7051
CORE_PEER_LOCALMSPID="Org1MSP"
CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
Create & Join Channel¶
Recall that we created the channel configuration transaction using the
configtxgen
tool in the Create a Channel Configuration Transaction section, above. You can
repeat that process to create additional channel configuration transactions,
using the same or different profiles in the configtx.yaml
that you pass
to the configtxgen
tool. Then you can repeat the process defined in this
section to establish those other channels in your network.
We will enter the CLI container using the docker exec
command:
docker exec -it cli bash
If successful you should see the following:
root@0d78bb69300d:/opt/gopath/src/github.com/hyperledger/fabric/peer#
Next, we are going to pass in the generated channel configuration transaction
artifact that we created in the Create a Channel Configuration Transaction section (we called
it channel.tx
) to the orderer as part of the create channel request.
We specify our channel name with the -c
flag and our channel configuration
transaction with the -f
flag. In this case it is channel.tx
, however
you can mount your own configuration transaction with a different name.
export CHANNEL_NAME=mychannel
# the channel.tx file is mounted in the channel-artifacts directory within your CLI container
# as a result, we pass the full path for the file
# we also pass the path for the orderer ca-cert in order to verify the TLS handshake
# be sure to replace the $CHANNEL_NAME variable appropriately
peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Note
Notice the --cafile flag that we pass as part of this command. It is
that we pass as part of this command. It is
the local path to the orderer’s root cert, allowing us to verify the
TLS handshake.
This command returns a genesis block - <channel-ID.block>
- which we will use to join the channel.
It contains the configuration information specified in channel.tx
.
Note
You will remain in the CLI container for the remainder of
these manual commands. You must also remember to preface all commands
with the corresponding environment variables when targeting a peer other than
peer0.org1.example.com
.
Now let’s join peer0.org1.example.com
to the channel.
# By default, this joins ``peer0.org1.example.com`` only
# the <channel-ID.block> was returned by the previous command
peer channel join -b <channel-ID.block>
You can make other peers join the channel as necessary by making appropriate changes in the four environment variables we used in the Environment variables section, above.
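For example, to target peer0.org2.example.com from the CLI container you would override the four variables as below before issuing the join. The Org2 paths here are inferred by analogy with the Org1 values shown earlier, so verify them against docker-compose-base.yaml:

```shell
# Environment overrides for peer0.org2.example.com (paths inferred by
# analogy with the PEER0/Org1 values; verify against docker-compose-base.yaml)
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
export CORE_PEER_ADDRESS=peer0.org2.example.com:7051
export CORE_PEER_LOCALMSPID="Org2MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt

# then, inside the CLI container:
# peer channel join -b <channel-ID.block>
```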
Install & Instantiate Chaincode¶
Note
We will utilize a simple existing chaincode. To learn how to write your own chaincode, see the Chaincode for Developers tutorial.
Applications interact with the blockchain ledger through chaincode
. As
such we need to install the chaincode on every peer that will execute and
endorse our transactions, and then instantiate the chaincode on the channel.
First, install the sample Go code onto one of the four peer nodes. This command places the source code onto our peer’s filesystem.
peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
Next, instantiate the chaincode on the channel. This will initialize the
chaincode on the channel, set the endorsement policy for the chaincode, and
launch a chaincode container for the targeted peer. Take note of the -P
argument. This is our policy where we specify the required level of endorsement
for a transaction against this chaincode to be validated.
In the command below you’ll notice that we specify our policy as
-P "OR ('Org0MSP.member','Org1MSP.member')"
. This means that we need
“endorsement” from a peer belonging to Org1 OR Org2 (i.e. only one endorsement).
If we changed the syntax to AND
then we would need two endorsements.
# be sure to replace the $CHANNEL_NAME environment variable
# if you did not install your chaincode with a name of mycc, then modify that argument as well
peer chaincode instantiate -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -v 1.0 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"
See the endorsement policies documentation for more details on policy implementation.
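As an aside, the two-endorsement variant of the policy above would be expressed by swapping the operator (shown for illustration only; the commands on this page use the OR form):

```
-P "AND ('Org1MSP.member','Org2MSP.member')"
```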
Query¶
Let’s query for the value of a
to make sure the chaincode was properly
instantiated and the state DB was populated. The syntax for query is as follows:
# be sure to set the -C and -n flags appropriately
peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'
Invoke¶
Now let’s move 10
from a
to b
. This transaction will cut a new block and
update the state DB. The syntax for invoke is as follows:
# be sure to set the -C and -n flags appropriately
peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n mycc -c '{"Args":["invoke","a","b","10"]}'
Query¶
Let’s confirm that our previous invocation executed properly. We initialized the
key a
with a value of 100
and just removed 10
with our previous
invocation. Therefore, a query against a
should reveal 90
. The syntax
for query is as follows.
# be sure to set the -C and -n flags appropriately
peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'
We should see the following:
Query Result: 90
Feel free to start over and manipulate the key value pairs and subsequent invocations.
What’s happening behind the scenes?¶
Note
These steps describe the scenario in which
script.sh
is not commented out in the
docker-compose-cli.yaml file. Clean your network
with ./byfn.sh -m down
and ensure
this command is active. Then use the same
docker-compose prompt to launch your network again
- A script - script.sh - is baked inside the CLI container. The script drives the createChannel command against the supplied channel name and uses the channel.tx file for channel configuration.
- The output of createChannel is a genesis block - <your_channel_name>.block - which gets stored on the peers' file systems and contains the channel configuration specified from channel.tx.
- The joinChannel command is exercised for all four peers, which takes as input the previously generated genesis block. This command instructs the peers to join <your_channel_name> and create a chain starting with <your_channel_name>.block.
- Now we have a channel consisting of four peers, and two organizations. This is our TwoOrgsChannel profile.
- peer0.org1.example.com and peer1.org1.example.com belong to Org1; peer0.org2.example.com and peer1.org2.example.com belong to Org2.
- These relationships are defined through the crypto-config.yaml and the MSP path is specified in our docker compose.
- The anchor peers for Org1MSP (peer0.org1.example.com) and Org2MSP (peer0.org2.example.com) are then updated. We do this by passing the Org1MSPanchors.tx and Org2MSPanchors.tx artifacts to the ordering service along with the name of our channel.
- A chaincode - chaincode_example02 - is installed on peer0.org1.example.com and peer0.org2.example.com.
- The chaincode is then "instantiated" on peer0.org2.example.com. Instantiation adds the chaincode to the channel, starts the container for the target peer, and initializes the key value pairs associated with the chaincode. The initial values for this example are ["a","100","b","200"]. This "instantiation" results in a container by the name of dev-peer0.org2.example.com-mycc-1.0 starting.
- The instantiation also passes in an argument for the endorsement policy. The policy is defined as -P "OR ('Org1MSP.member','Org2MSP.member')", meaning that any transaction must be endorsed by a peer tied to Org1 or Org2.
- A query against the value of "a" is issued to peer0.org1.example.com. The chaincode was previously installed on peer0.org1.example.com, so this will start a container for Org1 peer0 by the name of dev-peer0.org1.example.com-mycc-1.0. The result of the query is also returned. No write operations have occurred, so a query against "a" will still return a value of "100".
- An invoke is sent to peer0.org1.example.com to move "10" from "a" to "b".
- The chaincode is then installed on peer1.org2.example.com.
- A query is sent to peer1.org2.example.com for the value of "a". This starts a third chaincode container by the name of dev-peer1.org2.example.com-mycc-1.0. A value of 90 is returned, correctly reflecting the previous transaction during which the value for key "a" was modified by 10.
What does this demonstrate?¶
Chaincode MUST be installed on a peer in order for it to
successfully perform read/write operations against the ledger.
Furthermore, a chaincode container is not started for a peer until an init
or
traditional transaction - read/write - is performed against that chaincode (e.g. query for
the value of “a”). The transaction causes the container to start. Also,
all peers in a channel maintain an exact copy of the ledger which
comprises the blockchain to store the immutable, sequenced record in
blocks, as well as a state database to maintain a snapshot of the current state.
This includes those peers that do not have chaincode installed on them
(like peer1.org1.example.com in the above example). Finally, the chaincode is accessible
after it is installed (like peer1.org2.example.com
in the above example) because it
has already been instantiated.
How can I see the transactions?¶
Check the logs for the CLI Docker container.
docker logs -f cli
You should see the following output:
2017-05-16 17:08:01.366 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-05-16 17:08:01.366 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-05-16 17:08:01.366 UTC [msp/identity] Sign -> DEBU 006 Sign: plaintext: 0AB1070A6708031A0C08F1E3ECC80510...6D7963631A0A0A0571756572790A0161
2017-05-16 17:08:01.367 UTC [msp/identity] Sign -> DEBU 007 Sign: digest: E61DB37F4E8B0D32C9FE10E3936BA9B8CD278FAA1F3320B08712164248285C54
Query Result: 90
2017-05-16 17:08:15.158 UTC [main] main -> INFO 008 Exiting.....
===================== Query on PEER3 on channel 'mychannel' is successful =====================
===================== All GOOD, BYFN execution completed =====================
_____ _ _ ____
| ____| | \ | | | _ \
| _| | \| | | | | |
| |___ | |\ | | |_| |
|_____| |_| \_| |____/
You can scroll through these logs to see the various transactions.
How can I see the chaincode logs?¶
Inspect the individual chaincode containers to see the separate transactions executed against each container. Here is the combined output from each container:
$ docker logs dev-peer0.org2.example.com-mycc-1.0
04:30:45.947 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Init
Aval = 100, Bval = 200
$ docker logs dev-peer0.org1.example.com-mycc-1.0
04:31:10.569 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Invoke
Query Response:{"Name":"a","Amount":"100"}
ex02 Invoke
Aval = 90, Bval = 210
$ docker logs dev-peer1.org2.example.com-mycc-1.0
04:31:30.420 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW]
ex02 Invoke
Query Response:{"Name":"a","Amount":"90"}
Understanding the Docker Compose topology¶
The BYFN sample offers us two flavors of Docker Compose files, both of which
are extended from the docker-compose-base.yaml
(located in the base
folder). Our first flavor, docker-compose-cli.yaml
, provides us with a
CLI container, along with an orderer and four peers. We use this file
for the entirety of the instructions on this page.
Note
The remainder of this section covers a docker-compose file designed for the SDK. Refer to the Node SDK repo for details on running these tests.
The second flavor, docker-compose-e2e.yaml
, is constructed to run end-to-end tests
using the Node.js SDK. Aside from functioning with the SDK, its primary differentiation
is that there are containers for the fabric-ca servers. As a result, we are able
to send REST calls to the organizational CAs for user registration and enrollment.
If you want to use the docker-compose-e2e.yaml
without first running the
byfn.sh script, then we will need to make four slight modifications.
We need to point to the private keys for our Organizations' CAs. You can locate
these values in your crypto-config folder. For example, to locate the private
key for Org1 we would follow this path - crypto-config/peerOrganizations/org1.example.com/ca/
.
The private key is a long hash value followed by _sk
. The path for Org2
would be - crypto-config/peerOrganizations/org2.example.com/ca/
.
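Because the key file name is a generated hash, scripts typically locate it with a shell glob rather than hard-coding it. The snippet below simulates that lookup against a throwaway directory; the file name used is a placeholder, not a real key:

```shell
# Simulate the ca/ folder layout with a placeholder key name
DEMO=$(mktemp -d)
mkdir -p "$DEMO/peerOrganizations/org1.example.com/ca"
touch "$DEMO/peerOrganizations/org1.example.com/ca/0123456789abcdef_sk"

# Locate the private key by its _sk suffix
KEYFILE=$(basename "$DEMO"/peerOrganizations/org1.example.com/ca/*_sk)
echo "$KEYFILE"
```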
In the docker-compose-e2e.yaml
update the FABRIC_CA_SERVER_TLS_KEYFILE variable
for ca0 and ca1. You also need to edit the path that is provided in the command
to start the ca server. You are providing the same private key twice for each
CA container.
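The edit looks roughly like the following for ca0 (the org1 CA); repeat it for ca1 with the org2 path. This is a hedged sketch of the compose entry, not the complete service definition, and <replace-with-hash> stands in for the actual key file name from your crypto-config folder:

```yaml
ca0:
  image: hyperledger/fabric-ca
  environment:
    - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/<replace-with-hash>_sk
  command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/<replace-with-hash>_sk -b admin:adminpw -d'
```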
Using CouchDB¶
The state database can be switched from the default (goleveldb) to CouchDB. The same chaincode functions are available with CouchDB, however, there is the added ability to perform rich and complex queries against the state database data content contingent upon the chaincode data being modeled as JSON.
To use CouchDB instead of the default database (goleveldb), follow the same
procedures outlined earlier for generating the artifacts, except when starting
the network pass docker-compose-couch.yaml
as well:
CHANNEL_NAME=$CHANNEL_NAME TIMEOUT=<pick_a_value> docker-compose -f docker-compose-cli.yaml -f docker-compose-couch.yaml up -d
chaincode_example02 should now work using CouchDB underneath.
Note
If you choose to implement mapping of the fabric-couchdb container port to a host port, please make sure you are aware of the security implications. Mapping of the port in a development environment makes the CouchDB REST API available, and allows the visualization of the database via the CouchDB web interface (Fauxton). Production environments would likely refrain from implementing port mapping in order to restrict outside access to the CouchDB containers.
You can use chaincode_example02 chaincode against the CouchDB state database
using the steps outlined above, however in order to exercise the CouchDB query
capabilities you will need to use a chaincode that has data modeled as JSON,
(e.g. marbles02). You can locate the marbles02 chaincode in the
fabric/examples/chaincode/go
directory.
We will follow the same process to create and join the channel as outlined in the Create & Join Channel section above. Once you have joined your peer(s) to the channel, use the following steps to interact with the marbles02 chaincode:
- Install and instantiate the chaincode on peer0.org1.example.com:
# be sure to modify the $CHANNEL_NAME variable accordingly for the instantiate command
peer chaincode install -n marbles -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/marbles02
peer chaincode instantiate -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -v 1.0 -c '{"Args":["init"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"
- Create some marbles and move them around:
# be sure to modify the $CHANNEL_NAME variable accordingly
peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble1","blue","35","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble2","red","50","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["initMarble","marble3","blue","70","tom"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["transferMarble","marble2","jerry"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["transferMarblesBasedOnColor","blue","jerry"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C $CHANNEL_NAME -n marbles -c '{"Args":["delete","marble1"]}'
If you chose to map the CouchDB ports in docker-compose, you can now view the state database through the CouchDB web interface (Fauxton) by opening a browser and navigating to the following URL:
http://localhost:5984/_utils
You should see a database named mychannel
(or your unique channel name) and
the documents inside it.
Note
For the below commands, be sure to update the $CHANNEL_NAME variable appropriately.
You can run regular queries from the CLI (e.g. reading marble2
):
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["readMarble","marble2"]}'
The output should display the details of marble2
:
Query Result: {"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50}
You can retrieve the history of a specific marble - e.g. marble1
:
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["getHistoryForMarble","marble1"]}'
The output should display the transactions on marble1
:
Query Result: [{"TxId":"1c3d3caf124c89f91a4c0f353723ac736c58155325f02890adebaa15e16e6464", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"tom"}},{"TxId":"755d55c281889eaeebf405586f9e25d71d36eb3d35420af833a20a2f53a3eefd", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"jerry"}},{"TxId":"819451032d813dde6247f85e56a89262555e04f14788ee33e28b232eef36d98f", "Value":}]
You can also perform rich queries on the data content, such as querying marble fields by owner jerry
:
peer chaincode query -C $CHANNEL_NAME -n marbles -c '{"Args":["queryMarblesByOwner","jerry"]}'
The output should display the two marbles owned by jerry
:
Query Result: [{"Key":"marble2", "Record":{"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50}},{"Key":"marble3", "Record":{"color":"blue","docType":"marble","name":"marble3","owner":"jerry","size":70}}]
Why CouchDB¶
CouchDB is a kind of NoSQL solution. It is a document-oriented database where document fields are stored as key-value maps. Fields can be either a simple key/value pair, list, or map. In addition to the keyed/composite-key/key-range queries supported by LevelDB, CouchDB also supports rich queries over the full data content, such as non-key queries against the whole blockchain data, since its data content is stored in JSON format and is fully queryable. Therefore, CouchDB can meet chaincode, auditing, and reporting requirements for many use cases that are not supported by LevelDB.
CouchDB can also enhance security for compliance and data protection in the blockchain, as it is able to implement field-level security through the filtering and masking of individual attributes within a transaction, and can grant read-only permission where needed.
In addition, CouchDB falls into the AP-type (Availability and Partition Tolerance) of the CAP theorem. It uses a master-master replication model with Eventual Consistency
.
More information can be found on the
Eventual Consistency page of the CouchDB documentation.
However, under each Fabric peer there are no database replicas; writes to the database are guaranteed consistent and durable (not eventually consistent).
CouchDB is the first external pluggable state database for Fabric, and there could and should be other external database options. For example, IBM has enabled a relational database for its blockchain. CP-type (Consistency and Partition Tolerance) databases may also be needed, so as to provide data consistency without application-level guarantees.
A Note on Data Persistence¶
If data persistence is desired on the peer container or the CouchDB container,
one option is to mount a directory in the docker-host into a relevant directory
in the container. For example, you may add the following two lines in
the peer container specification in the docker-compose-base.yaml file:
volumes:
- /var/hyperledger/peer0:/var/hyperledger/production
For the CouchDB container, you may add the following two lines in the CouchDB container specification:
volumes:
- /var/hyperledger/couchdb0:/opt/couchdb/data
Troubleshooting¶
Always start your network fresh. Use the following command to remove artifacts, crypto, containers and chaincode images:
./byfn.sh -m down
Note
You will see errors if you do not remove old containers and images.
If you see Docker errors, first check your docker version (Prerequisites), and then try restarting your Docker process. Problems with Docker are oftentimes not immediately recognizable. For example, you may see errors resulting from an inability to access crypto material mounted within a container.
If they persist remove your images and start from scratch:
docker rm -f $(docker ps -aq)
docker rmi -f $(docker images -q)
If you see errors on your create, instantiate, invoke or query commands, make sure you have properly updated the channel name and chaincode name. There are placeholder values in the supplied sample commands.
If you see the below error:
Error: Error endorsing chaincode: rpc error: code = 2 desc = Error installing chaincode code mycc:1.0(chaincode /var/hyperledger/production/chaincodes/mycc.1.0 exits)
You likely have chaincode images (e.g. dev-peer1.org2.example.com-mycc-1.0 or dev-peer0.org1.example.com-mycc-1.0) from prior runs. Remove them and try again:
docker rmi -f $(docker images | grep peer[0-9]-peer[0-9] | awk '{print $3}')
If you see something similar to the following:
Error connecting: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
Error: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
Make sure you are running your network against the “1.0.0” images that have been retagged as “latest”.
If you see the below error:
[configtx/tool/localconfig] Load -> CRIT 002 Error reading configuration: Unsupported Config Type ""
panic: Error reading configuration: Unsupported Config Type ""
Then you did not set the FABRIC_CFG_PATH environment variable properly. The configtxgen tool needs this variable in order to locate the configtx.yaml. Go back and execute export FABRIC_CFG_PATH=$PWD, then recreate your channel artifacts. To clean up the network, use the down option:
./byfn.sh -m down
If you see an error stating that you still have “active endpoints”, then prune your Docker networks. This will wipe your previous networks and start you with a fresh environment:
docker network prune
You will see the following message:
WARNING! This will remove all networks not used by at least one container. Are you sure you want to continue? [y/N]
Select y.
Note
If you continue to see errors, share your logs on the fabric-questions channel on Hyperledger Rocket Chat or on StackOverflow.
Writing Your First Application¶
Note
If you’re not yet familiar with the fundamental architecture of a Fabric network, you may want to visit the Introduction and Building Your First Network documentation prior to continuing.
In this section we’ll be looking at a handful of sample programs to see how Fabric
apps work. These apps (and the smart contract they use) – collectively known as
fabcar
– provide a broad demonstration of Fabric functionality. Notably, we
will show the process for interacting with a Certificate Authority and generating
enrollment certificates, after which we will leverage these generated identities
(user objects) to query and update a ledger.
We’ll go through three principal steps:
1. Setting up a development environment. Our application needs a network to interact with, so we’ll download one stripped down to just the components we need for registration/enrollment, queries and updates:
2. Learning the parameters of the sample smart contract our app will use. Our smart contract contains various functions that allow us to interact with the ledger in different ways. We’ll go in and inspect that smart contract to learn about the functions our applications will be using.
3. Developing the applications to be able to query and update assets on the ledger. We’ll get into the app code itself (our apps have been written in JavaScript) and manually manipulate the variables to run different kinds of queries and updates.
After completing this tutorial you should have a basic understanding of how an application is programmed in conjunction with a smart contract to interact with the ledger (i.e. the peer) on a Fabric network.
Setting up your Dev Environment¶
First thing, let’s download the Fabric images and the accompanying artifacts for the network and applications...
Visit the Prerequisites page and ensure you have the necessary dependencies installed on your machine.
Next, visit the Hyperledger Fabric Samples page and follow the provided instructions. Return to
this tutorial once you have cloned the fabric-samples
repository, and downloaded
the latest stable Fabric images and available utilities.
At this point everything should be installed. Navigate to the fabcar
subdirectory
within your fabric-samples
repository and take a look at what’s inside:
cd fabric-samples/fabcar && ls
You should see the following:
enrollAdmin.js invoke.js package.json query.js registerUser.js startFabric.sh
Before starting we also need to do a little housekeeping. Run the following command to kill any stale or active containers:
docker rm -f $(docker ps -aq)
Clear any cached networks:
# Press 'y' when prompted by the command
docker network prune
And lastly if you’ve already run through this tutorial, you’ll also want to delete the
underlying chaincode image for the fabcar
smart contract. If you’re a user going through
this content for the first time, then you won’t have this chaincode image on your system:
docker rmi dev-peer0.org1.example.com-fabcar-1.0-5c906e402ed29f20260ae42283216aa75549c571e2e380f3615826365d8269ba
Install the clients & launch the network¶
Note
The following instructions require you to be in the fabcar
subdirectory
within your local clone of the fabric-samples
repo. Remain at the
root of this subdirectory for the remainder of this tutorial.
Run the following command to install the Fabric dependencies for the applications.
We are concerned with fabric-ca-client
which will allow our app(s) to communicate
with the CA server and retrieve identity material, and with fabric-client
which
allows us to load the identity material and talk to the peers and ordering service.
npm install
Launch your network using the startFabric.sh
shell script. This command
will spin up our various Fabric entities and launch a smart contract container for
chaincode written in Golang:
./startFabric.sh
Alright, now that you’ve got a sample network and some code, let’s take a look at how the different pieces fit together.
How Applications Interact with the Network¶
For a more in-depth look at the components in our fabcar
network (and how
they’re deployed) as well as how applications interact with those components
on more of a granular level, see Understanding the Fabcar Network.
Developers more interested in seeing what applications do – as well as looking at the code itself to see how an application is constructed – should continue. For now, the most important thing to know is that applications use a software development kit (SDK) to access the APIs that permit queries and updates to the ledger.
Enrolling the Admin User¶
Note
The following two sections involve communication with the Certificate Authority. You may find it useful to stream the CA logs when running the upcoming programs.
To stream your CA logs, split your terminal or open a new shell and issue the following:
docker logs -f ca.example.com
Now hop back to your terminal with the fabcar
content...
When we launched our network, an admin user - admin
- was registered with our
Certificate Authority. Now we need to send an enroll call to the CA server and
retrieve the enrollment certificate (eCert) for this user. We won’t delve into enrollment
details here, but suffice it to say that the SDK and by extension our applications
need this cert in order to form a user object for the admin. We will then use this admin
object to subsequently register and enroll a new user. Send the admin enroll call to the CA
server:
node enrollAdmin.js
This program will invoke a certificate signing request (CSR) and ultimately output
an eCert and key material into a newly created folder - hfc-key-store
- at the
root of this project. Our apps will then look to this location when they need to
create or load the identity objects for our various users.
Register and Enroll user1¶
With our newly generated admin eCert, we will now communicate with the CA server
once more to register and enroll a new user. This user - user1
- will be
the identity we use when querying and updating the ledger. It’s important to
note here that it is the admin
identity that is issuing the registration and
enrollment calls for our new user (i.e. this user is acting in the role of a registrar).
Send the register and enroll calls for user1
:
node registerUser.js
Similar to the admin enrollment, this program invokes a CSR and outputs the keys
and eCert into the hfc-key-store
subdirectory. So now we have identity material for two
separate users - admin
& user1
. Time to interact with the ledger...
Querying the Ledger¶
Queries are how you read data from the ledger. This data is stored as a series of key/value pairs, and you can query for the value of a single key, multiple keys, or – if the ledger is written in a rich data storage format like JSON – perform complex searches against it (looking for all assets that contain certain keywords, for example).
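To make the “complex searches” point concrete, here is a sketch of filtering JSON-formatted ledger records by a field value. The `Car` struct follows the fabcar record shape shown below; `carsByOwner` is an illustrative client-side helper, not part of the Fabric SDK.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Car mirrors the JSON shape of a fabcar ledger record. Because the
// value is structured JSON rather than an opaque blob, applications
// (or a rich-query-capable state database) can filter on any field,
// not just the key.
type Car struct {
	Colour string `json:"colour"`
	Make   string `json:"make"`
	Model  string `json:"model"`
	Owner  string `json:"owner"`
}

// carsByOwner returns the keys of all records whose owner field matches.
func carsByOwner(records map[string]string, owner string) ([]string, error) {
	var keys []string
	for key, raw := range records {
		var c Car
		if err := json.Unmarshal([]byte(raw), &c); err != nil {
			return nil, err
		}
		if c.Owner == owner {
			keys = append(keys, key)
		}
	}
	return keys, nil
}

func main() {
	records := map[string]string{
		"CAR0": `{"colour":"blue","make":"Toyota","model":"Prius","owner":"Tomoko"}`,
		"CAR4": `{"colour":"black","make":"Tesla","model":"S","owner":"Adriana"}`,
	}
	keys, _ := carsByOwner(records, "Adriana")
	fmt.Println(keys)
}
```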
This is a representation of how a query works:

First, let’s run our query.js
program to return a listing of all the cars on
the ledger. We will use our second identity - user1
- as the signing entity
for this application. The following line in our program specifies user1
as
the signer:
fabric_client.getUserContext('user1', true);
Recall that the user1
enrollment material has already been placed into our
hfc-key-store
subdirectory, so we simply need to tell our application to grab that identity.
With the user object defined, we can now proceed with reading from the ledger.
A function that will query all the cars, queryAllCars
, is
pre-loaded in the app, so we can simply run the program as is:
node query.js
It should return something like this:
Query result count = 1
Response is [{"Key":"CAR0", "Record":{"colour":"blue","make":"Toyota","model":"Prius","owner":"Tomoko"}},
{"Key":"CAR1", "Record":{"colour":"red","make":"Ford","model":"Mustang","owner":"Brad"}},
{"Key":"CAR2", "Record":{"colour":"green","make":"Hyundai","model":"Tucson","owner":"Jin Soo"}},
{"Key":"CAR3", "Record":{"colour":"yellow","make":"Volkswagen","model":"Passat","owner":"Max"}},
{"Key":"CAR4", "Record":{"colour":"black","make":"Tesla","model":"S","owner":"Adriana"}},
{"Key":"CAR5", "Record":{"colour":"purple","make":"Peugeot","model":"205","owner":"Michel"}},
{"Key":"CAR6", "Record":{"colour":"white","make":"Chery","model":"S22L","owner":"Aarav"}},
{"Key":"CAR7", "Record":{"colour":"violet","make":"Fiat","model":"Punto","owner":"Pari"}},
{"Key":"CAR8", "Record":{"colour":"indigo","make":"Tata","model":"Nano","owner":"Valeria"}},
{"Key":"CAR9", "Record":{"colour":"brown","make":"Holden","model":"Barina","owner":"Shotaro"}}]
These are the 10 cars. A black Tesla Model S owned by Adriana, a red Ford Mustang
owned by Brad, a violet Fiat Punto owned by Pari, and so on. The ledger is
key/value based and in our implementation the key is CAR0
through CAR9
.
This will become particularly important in a moment.
Let’s take a closer look at this program. Use an editor (e.g. atom or visual studio)
and open query.js
.
The initial section of the application defines certain variables such as channel name, cert store location and network endpoints. In our sample app, these variables have been baked-in, but in a real app these variables would have to be specified by the app dev.
var channel = fabric_client.newChannel('mychannel');
var peer = fabric_client.newPeer('grpc://localhost:7051');
channel.addPeer(peer);
var member_user = null;
var store_path = path.join(__dirname, 'hfc-key-store');
console.log('Store path:'+store_path);
var tx_id = null;
This is the chunk where we construct our query:
// queryCar chaincode function - requires 1 argument, ex: args: ['CAR4'],
// queryAllCars chaincode function - requires no arguments , ex: args: [''],
const request = {
//targets : --- letting this default to the peers assigned to the channel
chaincodeId: 'fabcar',
fcn: 'queryAllCars',
args: ['']
};
When the application ran, it invoked the fabcar
chaincode on the peer, ran the
queryAllCars
function within it, and passed no arguments to it.
To take a look at the available functions within our smart contract, navigate
to the chaincode/fabcar/go
subdirectory at the root of fabric-samples
and open
fabcar.go
in your editor.
Note
These same functions are defined within the Node.js version of the
fabcar
chaincode.
You’ll see that we have the following functions available to call: initLedger
,
queryCar
, queryAllCars
, createCar
, and changeCarOwner
.
Let’s take a closer look at the queryAllCars
function to see how it
interacts with the ledger.
func (s *SmartContract) queryAllCars(APIstub shim.ChaincodeStubInterface) sc.Response {
startKey := "CAR0"
endKey := "CAR999"
resultsIterator, err := APIstub.GetStateByRange(startKey, endKey)
This defines the range of queryAllCars
. Every car between CAR0
and
CAR999
– 1,000 cars in all, assuming every key has been tagged properly
– will be returned by the query.
Below is a representation of how an app would call different functions in chaincode. Each function must be coded against an available API in the chaincode shim interface, which in turn allows the smart contract container to properly interface with the peer ledger.

We can see our queryAllCars
function, as well as one called createCar
,
that will allow us to update the ledger and ultimately append a new block to
the chain in a moment.
But first, go back to the query.js
program and edit the constructor request
to query CAR4
. We do this by changing the function in query.js
from
queryAllCars
to queryCar
and passing CAR4
as the specific key.
The query.js
program should now look like this:
const request = {
//targets : --- letting this default to the peers assigned to the channel
chaincodeId: 'fabcar',
fcn: 'queryCar',
args: ['CAR4']
};
Save the program and navigate back to your fabcar
directory. Now run the
program again:
node query.js
You should see the following:
{"colour":"black","make":"Tesla","model":"S","owner":"Adriana"}
If you go back and look at the result from when we queried every car before,
you can see that CAR4
was Adriana’s black Tesla model S, which is the result
that was returned here.
Using the queryCar
function, we can query against any key (e.g. CAR0
)
and get whatever make, model, color, and owner correspond to that car.
Great. At this point you should be comfortable with the basic query functions in the smart contract and the handful of parameters in the query program. Time to update the ledger...
Updating the Ledger¶
Now that we’ve done a few ledger queries and added a bit of code, we’re ready to update the ledger. There are a lot of potential updates we could make, but let’s start by creating a car.
Below we can see how this process works. An update is proposed, endorsed, then returned to the application, which in turn sends it to be ordered and written to every peer’s ledger:

Our first update to the ledger will be to create a new car. We have a separate
JavaScript program – invoke.js – that we will use to make updates. Just
– that we will use to make updates. Just
as with queries, use an editor to open the program and navigate to the
code block where we construct our invocation:
// createCar chaincode function - requires 5 args, ex: args: ['CAR12', 'Honda', 'Accord', 'Black', 'Tom'],
// changeCarOwner chaincode function - requires 2 args , ex: args: ['CAR10', 'Barry'],
// must send the proposal to endorsing peers
var request = {
//targets: let default to the peer assigned to the client
chaincodeId: 'fabcar',
fcn: '',
args: [''],
chainId: 'mychannel',
txId: tx_id
};
You’ll see that we can call one of two functions - createCar
or
changeCarOwner
. First, let’s create a red Chevy Volt and give it to an
owner named Nick. We’re up to CAR9
on our ledger, so we’ll use CAR10
as the identifying key here. Edit this code block to look like this:
var request = {
//targets: let default to the peer assigned to the client
chaincodeId: 'fabcar',
fcn: 'createCar',
args: ['CAR10', 'Chevy', 'Volt', 'Red', 'Nick'],
chainId: 'mychannel',
txId: tx_id
};
Save it and run the program:
node invoke.js
There will be some output in the terminal about ProposalResponse
and
promises. However, all we’re concerned with is this message:
The transaction has been committed on peer localhost:7053
To see that this transaction has been written, go back to query.js
and
change the argument from CAR4
to CAR10
.
In other words, change this:
const request = {
//targets : --- letting this default to the peers assigned to the channel
chaincodeId: 'fabcar',
fcn: 'queryCar',
args: ['CAR4']
};
To this:
const request = {
//targets : --- letting this default to the peers assigned to the channel
chaincodeId: 'fabcar',
fcn: 'queryCar',
args: ['CAR10']
};
Save once again, then query:
node query.js
Which should return this:
Response is {"colour":"Red","make":"Chevy","model":"Volt","owner":"Nick"}
Congratulations. You’ve created a car!
So now that we’ve done that, let’s say that Nick is feeling generous and he wants to give his Chevy Volt to someone named Dave.
To do this go back to invoke.js
and change the function from createCar
to changeCarOwner
and input the arguments like this:
var request = {
//targets: let default to the peer assigned to the client
chaincodeId: 'fabcar',
fcn: 'changeCarOwner',
args: ['CAR10', 'Dave'],
chainId: 'mychannel',
txId: tx_id
};
The first argument – CAR10
– reflects the car that will be changing
owners. The second argument – Dave
– defines the new owner of the car.
Save and execute the program again:
node invoke.js
Now let’s query the ledger again and ensure that Dave is now associated with the
CAR10
key:
node query.js
It should return this result:
Response is {"colour":"Red","make":"Chevy","model":"Volt","owner":"Dave"}
The ownership of CAR10
has been changed from Nick to Dave.
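What changeCarOwner does to the stored record can be sketched as a read-modify-write of the JSON document. In the sketch below, the plain `state` map stands in for GetState/PutState and is not part of the shim API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// changeOwner sketches the changeCarOwner flow: read the car's JSON
// record, replace the owner field, and write the document back under
// the same key. The state map is a stand-in for the peer's state DB.
func changeOwner(state map[string]string, key, newOwner string) error {
	raw, ok := state[key]
	if !ok {
		return fmt.Errorf("car not found: %s", key)
	}
	var car map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &car); err != nil {
		return err
	}
	car["owner"] = newOwner
	updated, err := json.Marshal(car)
	if err != nil {
		return err
	}
	state[key] = string(updated)
	return nil
}

func main() {
	state := map[string]string{
		"CAR10": `{"colour":"Red","make":"Chevy","model":"Volt","owner":"Nick"}`,
	}
	if err := changeOwner(state, "CAR10", "Dave"); err != nil {
		panic(err)
	}
	fmt.Println(state["CAR10"])
}
```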
Note
In a real world application the chaincode would likely have some access control logic. For example, only certain authorized users may create new cars, and only the car owner may transfer the car to somebody else.
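A minimal sketch of that access-control idea follows. For illustration the caller's identity is passed as a plain argument; a real chaincode would derive it from the submitter's certificate rather than trusting an argument.

```go
package main

import "fmt"

// transferIfOwner only reassigns the car when the caller is the current
// owner. Illustrative only: in real chaincode the caller identity comes
// from the client's enrollment certificate, not a function parameter.
func transferIfOwner(car map[string]string, caller, newOwner string) error {
	if car["owner"] != caller {
		return fmt.Errorf("%s is not the owner and may not transfer this car", caller)
	}
	car["owner"] = newOwner
	return nil
}

func main() {
	car := map[string]string{"make": "Chevy", "owner": "Nick"}
	if err := transferIfOwner(car, "Mallory", "Mallory"); err != nil {
		fmt.Println("rejected:", err)
	}
	if err := transferIfOwner(car, "Nick", "Dave"); err != nil {
		panic(err)
	}
	fmt.Println(car["owner"])
}
```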
Summary¶
Now that we’ve done a few queries and a few updates, you should have a pretty good sense of how applications interact with the network. You’ve seen the basics of the roles smart contracts, APIs, and the SDK play in queries and updates and you should have a feel for how different kinds of applications could be used to perform other business tasks and operations.
In subsequent documents we’ll learn how to actually write a smart contract and how some of these more low level application functions can be leveraged (especially relating to identity and membership services).
Additional Resources¶
The Hyperledger Fabric Node SDK repo is an excellent resource for deeper documentation and sample code. You can also consult the Fabric community and component experts on Hyperledger Rocket Chat.
Chaincode Tutorials¶
What is Chaincode?¶
Chaincode is a program, written in Go, and eventually in other programming languages such as Java, that implements a prescribed interface. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages ledger state through transactions submitted by applications.
A chaincode typically handles business logic agreed to by members of the network, so it may be considered as a “smart contract”. State created by a chaincode is scoped exclusively to that chaincode and can’t be accessed directly by another chaincode. However, within the same network, given the appropriate permission a chaincode may invoke another chaincode to access its state.
Two Personas¶
We offer two different perspectives on chaincode. One, from the perspective of an application developer developing a blockchain application/solution entitled Chaincode for Developers, and the other, Chaincode for Operators oriented to the blockchain network operator who is responsible for managing a blockchain network, and who would leverage the Hyperledger Fabric API to install, instantiate, and upgrade chaincode, but would likely not be involved in the development of a chaincode application.
Chaincode for Developers¶
What is Chaincode?¶
Chaincode is a program, written in Go, that implements a prescribed interface. Eventually, other programming languages, such as Java, will be supported. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages the ledger state through transactions submitted by applications.
A chaincode typically handles business logic agreed to by members of the network, so it is similar to a “smart contract”. Ledger state created by a chaincode is scoped exclusively to that chaincode and can’t be accessed directly by another chaincode. Given the appropriate permission, a chaincode may invoke another chaincode to access its state within the same network.
In the following sections, we will explore chaincode through the eyes of an application developer. We’ll present a simple chaincode sample application and walk through the purpose of each method in the Chaincode Shim API.
Chaincode API¶
Every chaincode program must implement the
Chaincode interface
whose methods are called in response to received transactions.
In particular the Init
method is called when a
chaincode receives an instantiate
or upgrade
transaction so that the
chaincode may perform any necessary initialization, including initialization of
application state. The Invoke
method is called in response to receiving an
invoke
transaction to process transaction proposals.
The other interface in the chaincode “shim” APIs is the ChaincodeStubInterface which is used to access and modify the ledger, and to make invocations between chaincodes.
In this tutorial, we will demonstrate the use of these APIs by implementing a simple chaincode application that manages simple “assets”.
Simple Asset Chaincode¶
Our application is a basic sample chaincode to create assets (key-value pairs) on the ledger.
Choosing a Location for the Code¶
If you haven’t yet programmed in Go, you may want to make sure that you have the Go Programming Language installed and your system properly configured.
Now, you will want to create a directory for your chaincode application as a
child directory of $GOPATH/src/
.
To keep things simple, let’s use the following command:
mkdir -p $GOPATH/src/sacc && cd $GOPATH/src/sacc
Now, let’s create the source file that we’ll fill in with code:
touch sacc.go
Housekeeping¶
First, let’s start with some housekeeping. As with every chaincode, ours implements the
Chaincode interface, in particular the Init
and Invoke
functions. So, let’s add the Go import
statements for the necessary dependencies for our chaincode. We’ll import the
chaincode shim package and the
peer protobuf package.
Next, let’s add a struct SimpleAsset
as a receiver for Chaincode shim functions.
package main
import (
"fmt"
"github.com/hyperledger/fabric/core/chaincode/shim"
"github.com/hyperledger/fabric/protos/peer"
)
// SimpleAsset implements a simple chaincode to manage an asset
type SimpleAsset struct {
}
Initializing the Chaincode¶
Next, we’ll implement the Init
function.
// Init is called during chaincode instantiation to initialize any data.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
}
Note
Note that chaincode upgrade also calls this function. When writing a
chaincode that will upgrade an existing one, make sure to modify the Init
function appropriately. In particular, provide an empty “Init” method if there’s
no “migration” or nothing to be initialized as part of the upgrade.
Next, we’ll retrieve the arguments to the Init
call using the
ChaincodeStubInterface.GetStringArgs
function and check for validity. In our case, we are expecting a key-value pair.
// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data, so be careful to avoid a scenario where you
// inadvertently clobber your ledger's data!
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
	// Get the args from the transaction proposal
	args := stub.GetStringArgs()
	if len(args) != 2 {
		return shim.Error("Incorrect arguments. Expecting a key and a value")
	}
}
Next, now that we have established that the call is valid, we’ll store the initial state in the ledger. To do this, we will call ChaincodeStubInterface.PutState with the key and value passed in as the arguments. Assuming all went well, return a peer.Response object that indicates the initialization was a success.
// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data, so be careful to avoid a scenario where you
// inadvertently clobber your ledger's data!
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
// Get the args from the transaction proposal
args := stub.GetStringArgs()
if len(args) != 2 {
return shim.Error("Incorrect arguments. Expecting a key and a value")
}
// Set up any variables or assets here by calling stub.PutState()
// We store the key and the value on the ledger
err := stub.PutState(args[0], []byte(args[1]))
if err != nil {
return shim.Error(fmt.Sprintf("Failed to create asset: %s", args[0]))
}
return shim.Success(nil)
}
Invoking the Chaincode¶
First, let’s add the Invoke
function’s signature.
// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The 'set'
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
}
As with the Init
function above, we need to extract the arguments from the
ChaincodeStubInterface
. The Invoke
function’s arguments will be the
name of the chaincode application function to invoke. In our case, our application
will simply have two functions: set
and get
, that allow the value of an
asset to be set or its current state to be retrieved. We first call
ChaincodeStubInterface.GetFunctionAndParameters
to extract the function name and the parameters to that chaincode application
function.
// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
// Extract the function and args from the transaction proposal
fn, args := stub.GetFunctionAndParameters()
}
Next, we’ll validate the function name as being either set
or get
, and
invoke those chaincode application functions, returning an appropriate
response via the shim.Success
or shim.Error
functions that will
serialize the response into a gRPC protobuf message.
// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
// Extract the function and args from the transaction proposal
fn, args := stub.GetFunctionAndParameters()
var result string
var err error
if fn == "set" {
result, err = set(stub, args)
} else {
result, err = get(stub, args)
}
if err != nil {
return shim.Error(err.Error())
}
// Return the result as success payload
return shim.Success([]byte(result))
}
Implementing the Chaincode Application¶
As noted, our chaincode application implements two functions that can be
invoked via the Invoke
function. Let’s implement those functions now.
Note that as we mentioned above, to access the ledger’s state, we will leverage
the ChaincodeStubInterface.PutState
and ChaincodeStubInterface.GetState
functions of the chaincode shim API.
// Set stores the asset (both key and value) on the ledger. If the key exists,
// it will override the value with the new one
func set(stub shim.ChaincodeStubInterface, args []string) (string, error) {
if len(args) != 2 {
return "", fmt.Errorf("Incorrect arguments. Expecting a key and a value")
}
err := stub.PutState(args[0], []byte(args[1]))
if err != nil {
return "", fmt.Errorf("Failed to set asset: %s", args[0])
}
return args[1], nil
}
// Get returns the value of the specified asset key
func get(stub shim.ChaincodeStubInterface, args []string) (string, error) {
if len(args) != 1 {
return "", fmt.Errorf("Incorrect arguments. Expecting a key")
}
value, err := stub.GetState(args[0])
if err != nil {
return "", fmt.Errorf("Failed to get asset: %s with error: %s", args[0], err)
}
if value == nil {
return "", fmt.Errorf("Asset not found: %s", args[0])
}
return string(value), nil
}
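Because set and get touch the ledger only through PutState and GetState, their logic can be exercised against a plain map standing in for the state database. The `mockState` harness below is an illustration only (it is not part of the shim API); it mirrors the argument checks and error paths of the functions above.

```go
package main

import "fmt"

// mockState stands in for the peer's state database so the set/get
// logic can be exercised outside a running Fabric network.
type mockState map[string][]byte

// set mirrors the chaincode's set function, including argument checks.
func set(state mockState, args []string) (string, error) {
	if len(args) != 2 {
		return "", fmt.Errorf("Incorrect arguments. Expecting a key and a value")
	}
	state[args[0]] = []byte(args[1])
	return args[1], nil
}

// get mirrors the chaincode's get function, including the not-found case.
func get(state mockState, args []string) (string, error) {
	if len(args) != 1 {
		return "", fmt.Errorf("Incorrect arguments. Expecting a key")
	}
	v, ok := state[args[0]]
	if !ok {
		return "", fmt.Errorf("Asset not found: %s", args[0])
	}
	return string(v), nil
}

func main() {
	state := mockState{}
	set(state, []string{"a", "10"})
	v, _ := get(state, []string{"a"})
	fmt.Println(v)
}
```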
Pulling it All Together¶
Finally, we need to add the main
function, which will call the
shim.Start
function. Here’s the whole chaincode program source.
package main
import (
"fmt"
"github.com/hyperledger/fabric/core/chaincode/shim"
"github.com/hyperledger/fabric/protos/peer"
)
// SimpleAsset implements a simple chaincode to manage an asset
type SimpleAsset struct {
}
// Init is called during chaincode instantiation to initialize any
// data. Note that chaincode upgrade also calls this function to reset
// or to migrate data.
func (t *SimpleAsset) Init(stub shim.ChaincodeStubInterface) peer.Response {
// Get the args from the transaction proposal
args := stub.GetStringArgs()
if len(args) != 2 {
return shim.Error("Incorrect arguments. Expecting a key and a value")
}
// Set up any variables or assets here by calling stub.PutState()
// We store the key and the value on the ledger
err := stub.PutState(args[0], []byte(args[1]))
if err != nil {
return shim.Error(fmt.Sprintf("Failed to create asset: %s", args[0]))
}
return shim.Success(nil)
}
// Invoke is called per transaction on the chaincode. Each transaction is
// either a 'get' or a 'set' on the asset created by Init function. The Set
// method may create a new asset by specifying a new key-value pair.
func (t *SimpleAsset) Invoke(stub shim.ChaincodeStubInterface) peer.Response {
// Extract the function and args from the transaction proposal
fn, args := stub.GetFunctionAndParameters()
var result string
var err error
if fn == "set" {
result, err = set(stub, args)
} else { // assume 'get' even if fn is nil
result, err = get(stub, args)
}
if err != nil {
return shim.Error(err.Error())
}
// Return the result as success payload
return shim.Success([]byte(result))
}
// Set stores the asset (both key and value) on the ledger. If the key exists,
// it will override the value with the new one
func set(stub shim.ChaincodeStubInterface, args []string) (string, error) {
if len(args) != 2 {
return "", fmt.Errorf("Incorrect arguments. Expecting a key and a value")
}
err := stub.PutState(args[0], []byte(args[1]))
if err != nil {
return "", fmt.Errorf("Failed to set asset: %s", args[0])
}
return args[1], nil
}
// Get returns the value of the specified asset key
func get(stub shim.ChaincodeStubInterface, args []string) (string, error) {
if len(args) != 1 {
return "", fmt.Errorf("Incorrect arguments. Expecting a key")
}
value, err := stub.GetState(args[0])
if err != nil {
return "", fmt.Errorf("Failed to get asset: %s with error: %s", args[0], err)
}
if value == nil {
return "", fmt.Errorf("Asset not found: %s", args[0])
}
return string(value), nil
}
// main function starts up the chaincode in the container during instantiate
func main() {
if err := shim.Start(new(SimpleAsset)); err != nil {
fmt.Printf("Error starting SimpleAsset chaincode: %s", err)
}
}
Building Chaincode¶
Now let’s compile your chaincode.
go get -u --tags nopkcs11 github.com/hyperledger/fabric/core/chaincode/shim
go build --tags nopkcs11
Assuming there are no errors, we can now proceed to the next step: testing your chaincode.
Testing Using dev mode¶
Normally chaincodes are started and maintained by the peer. In “dev mode”, however, chaincode is built and started by the user. This mode is useful during the chaincode development phase for a rapid code/build/run/debug cycle.
We start “dev mode” by leveraging pre-generated orderer and channel artifacts for a sample dev network. As such, the user can immediately jump into the process of compiling chaincode and driving calls.
Install Hyperledger Fabric Samples¶
If you haven’t already done so, please install the Hyperledger Fabric Samples.
Navigate to the chaincode-docker-devmode directory of the fabric-samples clone:
cd chaincode-docker-devmode
Download Docker images¶
We need four Docker images in order for “dev mode” to run against the supplied docker compose script. If you installed the fabric-samples repo clone and followed the instructions to download-platform-specific-binaries, then you should have the necessary Docker images installed locally.
Note
If you choose to manually pull the images then you must retag them as latest.
Issue a docker images command to reveal your local Docker Registry. You should see something similar to the following:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hyperledger/fabric-tools latest e09f38f8928d 4 hours ago 1.32 GB
hyperledger/fabric-tools x86_64-1.0.0 e09f38f8928d 4 hours ago 1.32 GB
hyperledger/fabric-orderer latest 0df93ba35a25 4 hours ago 179 MB
hyperledger/fabric-orderer x86_64-1.0.0 0df93ba35a25 4 hours ago 179 MB
hyperledger/fabric-peer latest 533aec3f5a01 4 hours ago 182 MB
hyperledger/fabric-peer x86_64-1.0.0 533aec3f5a01 4 hours ago 182 MB
hyperledger/fabric-ccenv latest 4b70698a71d3 4 hours ago 1.29 GB
hyperledger/fabric-ccenv x86_64-1.0.0 4b70698a71d3 4 hours ago 1.29 GB
Note
If you retrieved the images through the download-platform-specific-binaries instructions, then you will see additional images listed. However, we are only concerned with these four.
Now open three terminals and navigate to your chaincode-docker-devmode directory in each.
Terminal 1 - Start the network¶
docker-compose -f docker-compose-simple.yaml up
The above starts the network with the SingleSampleMSPSolo
orderer profile and
launches the peer in “dev mode”. It also launches two additional containers -
one for the chaincode environment and a CLI to interact with the chaincode. The
commands for create and join channel are embedded in the CLI container, so we
can jump immediately to the chaincode calls.
Terminal 2 - Build & start the chaincode¶
docker exec -it chaincode bash
You should see the following:
root@d2629980e76b:/opt/gopath/src/chaincode#
Now, compile your chaincode:
cd sacc
go build
Now run the chaincode:
CORE_PEER_ADDRESS=peer:7051 CORE_CHAINCODE_ID_NAME=mycc:0 ./sacc
The chaincode starts, and both peer and chaincode logs indicate successful registration with the peer. Note that at this stage the chaincode is not associated with any channel. This is done in subsequent steps using the instantiate command.
Terminal 3 - Use the chaincode¶
Even though you are in --peer-chaincodedev mode, you still have to install the chaincode so the life-cycle system chaincode can go through its checks normally. This requirement may be removed in a future version of --peer-chaincodedev mode.
We’ll leverage the CLI container to drive these calls.
docker exec -it cli bash
peer chaincode install -p chaincodedev/chaincode/sacc -n mycc -v 0
peer chaincode instantiate -n mycc -v 0 -c '{"Args":["a","10"]}' -C myc
Now issue an invoke to change the value of “a” to “20”.
peer chaincode invoke -n mycc -c '{"Args":["set", "a", "20"]}' -C myc
Finally, query a. We should see a value of 20.
peer chaincode query -n mycc -c '{"Args":["query","a"]}' -C myc
Testing new chaincode¶
By default, we mount only sacc. However, you can easily test different chaincodes by adding them to the chaincode subdirectory and relaunching your network. At this point they will be accessible in your chaincode container.
Chaincode for Operators¶
What is Chaincode?¶
Chaincode is a program, written in Go (and eventually in other programming languages such as Java), that implements a prescribed interface. Chaincode runs in a secured Docker container isolated from the endorsing peer process. Chaincode initializes and manages ledger state through transactions submitted by applications.
A chaincode typically handles business logic agreed to by members of the network, so it may be considered as a “smart contract”. State created by a chaincode is scoped exclusively to that chaincode and can’t be accessed directly by another chaincode. However, within the same network, given the appropriate permission a chaincode may invoke another chaincode to access its state.
In the following sections, we will explore chaincode through the eyes of a blockchain network operator, Noah. For Noah’s interests, we will focus on chaincode lifecycle operations; the process of packaging, installing, instantiating and upgrading the chaincode as a function of the chaincode’s operational lifecycle within a blockchain network.
Chaincode lifecycle¶
The Hyperledger Fabric API enables interaction with the various nodes in a blockchain network - the peers, orderers and MSPs - and it also allows one to package, install, instantiate and upgrade chaincode on the endorsing peer nodes. The Hyperledger Fabric language-specific SDKs abstract the specifics of the Hyperledger Fabric API to facilitate application development, though they can also be used to manage a chaincode’s lifecycle. Additionally, the Hyperledger Fabric API can be accessed directly via the CLI, which we will use in this document.
We provide four commands to manage a chaincode’s lifecycle: package, install, instantiate, and upgrade. In a future release, we are considering adding stop and start transactions to disable and re-enable a chaincode without having to actually uninstall it. After a chaincode has been successfully installed and instantiated, the chaincode is active (running) and can process transactions via the invoke transaction. A chaincode may be upgraded any time after it has been installed.
Packaging¶
The chaincode package consists of 3 parts:
- the chaincode, as defined by ChaincodeDeploymentSpec or CDS. The CDS defines the chaincode package in terms of the code and other properties such as name and version,
- an optional instantiation policy, which can be described syntactically by the same policy language used for endorsement policies (see Endorsement policies), and
- a set of signatures by the entities that “own” the chaincode.
The signatures serve the following purposes:
- to establish ownership of the chaincode,
- to allow verification of the contents of the package, and
- to allow detection of package tampering.
The creator of the instantiation transaction of the chaincode on a channel is validated against the instantiation policy of the chaincode.
Creating the package¶
There are two approaches to packaging chaincode. The first is for when you want to have multiple owners of a chaincode, and hence need to have the chaincode package signed by multiple identities. This workflow requires that we initially create a signed chaincode package (a SignedCDS), which is subsequently passed serially to each of the other owners for signing.
The simpler workflow is for when you are deploying a SignedCDS that carries only the signature of the identity of the node issuing the install transaction.
We will address the more complex case first. However, you may skip ahead to the Installing chaincode section below if you do not need to worry about multiple owners just yet.
To create a signed chaincode package, use the following command:
peer chaincode package -n mycc -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -v 0 -s -S -i "AND('OrgA.admin')" ccpack.out
The -s option creates a package that can be signed by multiple owners, as opposed to simply creating a raw CDS. When -s is specified, the -S option must also be specified if other owners are going to need to sign. Otherwise, the process will create a SignedCDS that includes only the instantiation policy in addition to the CDS.
The -S option directs the process to sign the package using the MSP identified by the value of the localMspid property in core.yaml.
The -S option is optional. However, if a package is created without a signature, it cannot be signed by any other owner using the signpackage command.
The optional -i option allows one to specify an instantiation policy for the chaincode. The instantiation policy has the same format as an endorsement policy and specifies which identities can instantiate the chaincode. In the example above, only the admin of OrgA is allowed to instantiate the chaincode. If no policy is provided, the default policy is used, which only allows the admin identity of the peer’s MSP to instantiate chaincode.
Package signing¶
A chaincode package that was signed at creation can be handed over to other owners for inspection and signing. The workflow supports out-of-band signing of a chaincode package.
The ChaincodeDeploymentSpec may optionally be signed by the collective owners to create a SignedChaincodeDeploymentSpec (or SignedCDS). The SignedCDS contains 3 elements:
- The CDS contains the source code, the name, and version of the chaincode.
- An instantiation policy of the chaincode, expressed as endorsement policies.
- The list of chaincode owners, defined by means of Endorsement.
Note
Note that this endorsement policy is determined out-of-band to provide proper MSP principals when the chaincode is instantiated on some channels. If the instantiation policy is not specified, the default policy is any MSP administrator of the channel.
Each owner endorses the ChaincodeDeploymentSpec by combining it with that owner’s identity (e.g. certificate) and signing the combined result.
A chaincode owner can sign a previously created signed package using the following command:
peer chaincode signpackage ccpack.out signedccpack.out
Where ccpack.out and signedccpack.out are the input and output packages, respectively. signedccpack.out contains an additional signature over the package, signed using the Local MSP.
Installing chaincode¶
The install transaction packages a chaincode’s source code into a prescribed format called a ChaincodeDeploymentSpec (or CDS) and installs it on a peer node that will run that chaincode.
Note
You must install the chaincode on each endorsing peer node of a channel that will run your chaincode.
When the install API is given simply a ChaincodeDeploymentSpec, it will default the instantiation policy and include an empty owner list.
Note
Chaincode should only be installed on endorsing peer nodes of the owning members of the chaincode to protect the confidentiality of the chaincode logic from other members on the network. Members without the chaincode can’t be endorsers of the chaincode’s transactions; that is, they can’t execute the chaincode. However, they can still validate and commit transactions to the ledger.
To install a chaincode, send a SignedProposal to the lifecycle system chaincode (LSCC) described in the System Chaincode section. For example, to install the sacc sample chaincode described in the Simple Asset Chaincode section using the CLI, the command would look like the following:
peer chaincode install -n asset_mgmt -v 1.0 -p sacc
The CLI internally creates the SignedChaincodeDeploymentSpec for sacc and sends it to the local peer, which calls the Install method on the LSCC. The argument to the -p option specifies the path to the chaincode, which must be located within the source tree of the user’s GOPATH, e.g. $GOPATH/src/sacc. See the CLI section for a complete description of the command options.
Note that in order to install on a peer, the signature of the SignedProposal must be from one of the peer’s local MSP administrators.
Instantiate¶
The instantiate transaction invokes the lifecycle system chaincode (LSCC) to create and initialize a chaincode on a channel. This is a chaincode-channel binding process: a chaincode may be bound to any number of channels and operate on each channel individually and independently. In other words, regardless of how many other channels a chaincode might be installed and instantiated on, state is kept isolated to the channel to which a transaction is submitted.
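The per-channel isolation just described can be pictured with a toy model in plain Go: the same chaincode name maps to an independent key-value namespace on every channel. This is an illustration only, not Fabric’s ledger code, and the type names are invented:

```go
package main

import "fmt"

// ledger models per-channel state isolation: each channel keeps its own
// key-value store, so writes on one channel never affect another.
type ledger struct {
	channels map[string]map[string]string
}

func newLedger() *ledger {
	return &ledger{channels: map[string]map[string]string{}}
}

// put records a key-value pair on the named channel only.
func (l *ledger) put(channel, key, value string) {
	if l.channels[channel] == nil {
		l.channels[channel] = map[string]string{}
	}
	l.channels[channel][key] = value
}

// get reads a key from the named channel; other channels are invisible.
func (l *ledger) get(channel, key string) string {
	return l.channels[channel][key]
}

func main() {
	l := newLedger()
	l.put("myc", "a", "10")
	l.put("otherc", "a", "99")
	// The same key "a" holds different values on the two channels.
	fmt.Println(l.get("myc", "a"), l.get("otherc", "a"))
}
```

A transaction submitted to one channel reads and writes only that channel’s namespace, which is why instantiating the same chaincode on several channels produces fully independent states.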
The creator of an instantiate transaction must satisfy the instantiation policy of the chaincode included in the SignedCDS and must also be a writer on the channel, which is configured as part of channel creation. This is important for the security of the channel: it prevents rogue entities from deploying chaincodes or tricking members into executing chaincodes on an unbound channel.
For example, recall that the default instantiation policy is any channel MSP administrator, so the creator of a chaincode instantiate transaction must be a member of the channel administrators. When the transaction proposal arrives at the endorser, it verifies the creator’s signature against the instantiation policy. This is done again during the transaction validation before committing it to the ledger.
The instantiate transaction also sets up the endorsement policy for that chaincode on the channel. The endorsement policy describes the attestation requirements for the transaction result to be accepted by members of the channel.
For example, using the CLI to instantiate the sacc chaincode and initialize the state with john and 0, the command would look like the following:
peer chaincode instantiate -n sacc -v 1.0 -c '{"Args":["john","0"]}' -P "OR ('Org1.member','Org2.member')"
Note
Note the endorsement policy (the CLI uses Polish notation), which requires an endorsement from either a member of Org1 or a member of Org2 for all transactions to sacc. That is, either Org1 or Org2 must sign the result of executing the Invoke on sacc for the transactions to be valid.
After being successfully instantiated, the chaincode enters the active state on the channel and is ready to process any transaction proposals of type ENDORSER_TRANSACTION. The transactions are processed concurrently as they arrive at the endorsing peer.
Upgrade¶
A chaincode may be upgraded any time by changing its version, which is part of the SignedCDS. Other parts, such as owners and instantiation policy, are optional. However, the chaincode name must be the same; otherwise it would be considered a totally different chaincode.
Prior to upgrade, the new version of the chaincode must be installed on the required endorsers. Upgrade is a transaction similar to the instantiate transaction, which binds the new version of the chaincode to the channel. Other channels bound to the old version of the chaincode still run with the old version. In other words, the upgrade transaction only affects one channel at a time: the channel to which the transaction is submitted.
Note
Note that since multiple versions of a chaincode may be active simultaneously, the upgrade process doesn’t automatically remove the old versions, so the user must manage this for the time being.
There’s one subtle difference from the instantiate transaction: the upgrade transaction is checked against the current chaincode instantiation policy, not the new policy (if specified). This ensures that only existing members specified in the current instantiation policy may upgrade the chaincode.
Note
Note that during upgrade, the chaincode Init function is called to perform any data-related updates or to re-initialize it, so care must be taken to avoid resetting state when upgrading chaincode.
Stop and Start¶
Note that stop and start lifecycle transactions have not yet been implemented. However, you may stop a chaincode manually by removing the chaincode container and the SignedCDS package from each of the endorsers. This is done by deleting the chaincode’s container on each of the hosts or virtual machines on which the endorsing peer nodes are running, and then deleting the SignedCDS from each of the endorsing peer nodes:
Note
TODO - in order to delete the CDS from the peer node, you would need to enter the peer node’s container first. We really need to provide a utility script that can do this.
docker rm -f <container id>
rm /var/hyperledger/production/chaincodes/<ccname>:<ccversion>
Stop would be useful in a workflow for performing upgrades in a controlled manner, where a chaincode can be stopped on a channel on all peers before issuing the upgrade.
CLI¶
Note
We are assessing the need to distribute platform-specific binaries for the Hyperledger Fabric peer binary. For the time being, you can simply invoke the commands from within a running Docker container.
To view the currently available CLI commands, execute the following from within a running fabric-peer Docker container:
docker run -it hyperledger/fabric-peer bash
# peer chaincode --help
This shows output similar to the example below:
Usage:
peer chaincode [command]
Available Commands:
install Package the specified chaincode into a deployment spec and save it on the peer's path.
instantiate Deploy the specified chaincode to the network.
invoke Invoke the specified chaincode.
package Package the specified chaincode into a deployment spec.
query Query using the specified chaincode.
signpackage Sign the specified chaincode package
upgrade Upgrade chaincode.
Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
-C, --chainID string The chain on which this command should be executed (default "testchainid")
-c, --ctor string Constructor message for the chaincode in JSON format (default "{}")
-E, --escc string The name of the endorsement system chaincode to be used for this chaincode
-l, --lang string Language the chaincode is written in (default "golang")
-n, --name string Name of the chaincode
-o, --orderer string Ordering service endpoint
-p, --path string Path to chaincode
-P, --policy string The endorsement policy associated to this chaincode
-t, --tid string Name of a custom ID generation algorithm (hashing and decoding) e.g. sha256base64
--tls Use TLS when communicating with the orderer endpoint
-u, --username string Username for chaincode operations when security is enabled
-v, --version string Version of the chaincode specified in install/instantiate/upgrade commands
-V, --vscc string The name of the verification system chaincode to be used for this chaincode
Global Flags:
--logging-level string Default logging level and overrides, see core.yaml for full syntax
--test.coverprofile string Done (default "coverage.cov")
Use "peer chaincode [command] --help" for more information about a command.
To facilitate its use in scripted applications, the peer command always produces a non-zero return code in the event of command failure.
Example of chaincode commands:
peer chaincode install -n mycc -v 0 -p path/to/my/chaincode/v0
peer chaincode instantiate -n mycc -v 0 -c '{"Args":["a", "b", "c"]}' -C mychannel
peer chaincode install -n mycc -v 1 -p path/to/my/chaincode/v1
peer chaincode upgrade -n mycc -v 1 -c '{"Args":["d", "e", "f"]}' -C mychannel
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","e"]}'
peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
System chaincode¶
System chaincode has the same programming model except that it runs within the peer process rather than in an isolated container like normal chaincode. Therefore, system chaincode is built into the peer executable and doesn’t follow the same lifecycle described above. In particular, install, instantiate and upgrade do not apply to system chaincodes.
The purpose of system chaincode is to shortcut gRPC communication cost between peer and chaincode, and tradeoff the flexibility in management. For example, a system chaincode can only be upgraded with the peer binary. It must also register with a fixed set of parameters compiled in and doesn’t have endorsement policies or endorsement policy functionality.
System chaincode is used in Hyperledger Fabric to implement a number of system behaviors so that they can be replaced or modified as appropriate by a system integrator.
The current list of system chaincodes:
- LSCC Lifecycle system chaincode handles lifecycle requests described above.
- CSCC Configuration system chaincode handles channel configuration on the peer side.
- QSCC Query system chaincode provides ledger query APIs such as getting blocks and transactions.
- ESCC Endorsement system chaincode handles endorsement by signing the transaction proposal response.
- VSCC Validation system chaincode handles the transaction validation, including checking endorsement policy and multiversioning concurrency control.
Care must be taken when modifying or replacing these system chaincodes, especially LSCC, ESCC and VSCC since they are in the main transaction execution path. It is worth noting that as VSCC validates a block before committing it to the ledger, it is important that all peers in the channel compute the same validation to avoid ledger divergence (non-determinism). So special care is needed if VSCC is modified or replaced.
Videos¶
Refer to the Hyperledger Fabric channel on YouTube. This collection contains developers demonstrating various v1 features and components such as ledger, channels, gossip, SDK, chaincode, MSP, and more.
Membership Service Providers (MSP)¶
This document provides details on the setup and best practices for MSPs.
A Membership Service Provider (MSP) is a component that aims to offer an abstraction of a membership operation architecture.
In particular, an MSP abstracts away all cryptographic mechanisms and protocols behind issuing and validating certificates, and user authentication. An MSP may define its own notion of identity, and the rules by which those identities are governed (identity validation) and authenticated (signature generation and verification).
A Hyperledger Fabric blockchain network can be governed by one or more MSPs. This provides modularity of membership operations, and interoperability across different membership standards and architectures.
In the rest of this document we elaborate on the setup of the MSP implementation supported by Hyperledger Fabric, and discuss best practices concerning its use.
MSP Configuration¶
To setup an instance of the MSP, its configuration needs to be specified locally at each peer and orderer (to enable peer, and orderer signing), and on the channels to enable peer, orderer, client identity validation, and respective signature verification (authentication) by and for all channel members.
First, a name needs to be specified for each MSP in order to reference that MSP in the network (e.g. msp1, org2, and org3.divA). This is the name under which the membership rules of an MSP representing a consortium, organization or organization division are to be referenced in a channel. This is also referred to as the MSP Identifier or MSP ID. MSP Identifiers are required to be unique per MSP instance. For example, should two MSP instances with the same identifier be detected at system channel genesis, orderer setup will fail.
In the case of the default MSP implementation, a set of parameters need to be specified to allow for identity (certificate) validation and signature verification. These parameters are derived from RFC 5280, and include:
- A list of self-signed (X.509) certificates to constitute the root of trust
- A list of X.509 certificates to represent intermediate CAs this provider considers for certificate validation; these certificates ought to be certified by exactly one of the certificates in the root of trust; intermediate CAs are optional parameters
- A list of X.509 certificates with a verifiable certificate path to exactly one of the certificates of the root of trust to represent the administrators of this MSP; owners of these certificates are authorized to request changes to this MSP configuration (e.g. root CAs, intermediate CAs)
- A list of Organizational Units that valid members of this MSP should include in their X.509 certificate; this is an optional configuration parameter, used when, e.g., multiple organisations leverage the same root of trust, and intermediate CAs, and have reserved an OU field for their members
- A list of certificate revocation lists (CRLs) each corresponding to exactly one of the listed (intermediate or root) MSP Certificate Authorities; this is an optional parameter
- A list of self-signed (X.509) certificates to constitute the TLS root of trust for TLS certificates.
- A list of X.509 certificates to represent intermediate TLS CAs this provider considers; these certificates ought to be certified by exactly one of the certificates in the TLS root of trust; intermediate CAs are optional parameters.
Valid identities for this MSP instance are required to satisfy the following conditions:
- They are in the form of X.509 certificates with a verifiable certificate path to exactly one of the root of trust certificates;
- They are not included in any CRL;
- And they list one or more of the Organizational Units of the MSP configuration in the OU field of their X.509 certificate structure.
For more information on the validity of identities in the current MSP implementation, we refer the reader to MSP Identity Validity Rules.
In addition to verification related parameters, for the MSP to enable the node on which it is instantiated to sign or authenticate, one needs to specify:
- The signing key used for signing by the node (currently only ECDSA keys are supported), and
- The node’s X.509 certificate, that is a valid identity under the verification parameters of this MSP.
It is important to note that MSP identities never expire; they can only be revoked by adding them to the appropriate CRLs. Additionally, there is currently no support for enforcing revocation of TLS certificates.
How to generate MSP certificates and their signing keys?¶
To generate the X.509 certificates to feed its MSP configuration, the application can use OpenSSL. We emphasise that in Hyperledger Fabric there is no support for certificates including RSA keys.
Alternatively, one can use the cryptogen tool, whose operation is explained in Getting Started.
Hyperledger Fabric CA can also be used to generate the keys and certificates needed to configure an MSP.
MSP setup on the peer & orderer side¶
To set up a local MSP (for either a peer or an orderer), the administrator should create a folder (e.g. $MY_PATH/mspconfig) that contains six subfolders and a file:
- a folder admincerts to include PEM files each corresponding to an administrator certificate
- a folder cacerts to include PEM files each corresponding to a root CA’s certificate
- (optional) a folder intermediatecerts to include PEM files each corresponding to an intermediate CA’s certificate
- (optional) a file config.yaml to include information on the considered OUs; the latter are defined as pairs of <Certificate, OrganizationalUnitIdentifier> entries in a yaml array called OrganizationalUnitIdentifiers, where Certificate represents the relative path to the certificate of the certificate authority (root or intermediate) that should be considered for certifying members of this organizational unit (e.g. ./cacerts/cacert.pem), and OrganizationalUnitIdentifier represents the actual string expected to appear in the X.509 certificate’s OU field (e.g. “COP”)
- (optional) a folder crls to include the considered CRLs
- a folder keystore to include a PEM file with the node’s signing key; we emphasise that currently RSA keys are not supported
- a folder signcerts to include a PEM file with the node’s X.509 certificate
- (optional) a folder tlscacerts to include PEM files each corresponding to a TLS root CA’s certificate
- (optional) a folder tlsintermediatecerts to include PEM files each corresponding to an intermediate TLS CA’s certificate
In the configuration file of the node (core.yaml for the peer, and orderer.yaml for the orderer), one needs to specify the path to the mspconfig folder, and the MSP Identifier of the node’s MSP. The path to the mspconfig folder is expected to be relative to FABRIC_CFG_PATH and is provided as the value of the parameter mspConfigPath for the peer, and LocalMSPDir for the orderer. The identifier of the node’s MSP is provided as the value of the parameter localMspId for the peer and LocalMSPID for the orderer.
These variables can be overridden via the environment using the CORE prefix for
peer (e.g. CORE_PEER_LOCALMSPID) and the ORDERER prefix for the orderer (e.g.
ORDERER_GENERAL_LOCALMSPID). Notice that for the orderer setup, one needs to
generate, and provide to the orderer the genesis block of the system channel.
The MSP configuration needs of this block are detailed in the next section.
Reconfiguration of a “local” MSP is only possible manually, and requires that the peer or orderer process is restarted. In subsequent releases we aim to offer online/dynamic reconfiguration (i.e. without having to stop the node, by using a node-managed system chaincode).
Channel MSP setup¶
At the genesis of the system, verification parameters of all the MSPs that appear in the network need to be specified, and included in the system channel’s genesis block. Recall that MSP verification parameters consist of the MSP identifier, the root of trust certificates, intermediate CA and admin certificates, as well as OU specifications and CRLs. The system genesis block is provided to the orderers at their setup phase, and allows them to authenticate channel creation requests. Orderers would reject the system genesis block, if the latter includes two MSPs with the same identifier, and consequently the bootstrapping of the network would fail.
For application channels, the verification components of only the MSPs that govern a channel need to reside in the channel’s genesis block. We emphasise that it is the responsibility of the application to ensure that correct MSP configuration information is included in the genesis blocks (or the most recent configuration block) of a channel prior to instructing one or more of their peers to join the channel.
When bootstrapping a channel with the help of the configtxgen tool, one can configure the channel MSPs by including the verification parameters of the MSP in the mspconfig folder, and setting that path in the relevant section of configtx.yaml.
Reconfiguration of an MSP on the channel, including announcements of the certificate revocation lists associated with the CAs of that MSP, is achieved through the creation of a config_update object by the owner of one of the administrator certificates of the MSP. The client application managed by the admin would then announce this update to the channels in which this MSP appears.
Best Practices¶
In this section we elaborate on best practices for MSP configuration in commonly met scenarios.
1) Mapping between organizations/corporations and MSPs
We recommend that there is a one-to-one mapping between organizations and MSPs. If a different type of mapping is chosen, the following needs to be considered:
- One organization employing various MSPs. This corresponds to the case of an organization including a variety of divisions, each represented by its own MSP, either for management independence reasons, or for privacy reasons. In this case a peer can only be owned by a single MSP, and will not recognize peers with identities from other MSPs as peers of the same organization. The implication of this is that peers may share through gossip organization-scoped data with a set of peers that are members of the same subdivision, and NOT with the full set of peers constituting the actual organization.
- Multiple organizations using a single MSP. This corresponds to the case of a consortium of organisations that are governed by a similar membership architecture. Be aware that peers would propagate organization-scoped messages to the peers that have an identity under the same MSP, regardless of whether they belong to the same actual organization. This is a limitation of the granularity of the MSP definition, and/or of the peer’s configuration.
2) One organization has different divisions (say organizational units), to which it wants to grant access to different channels.
Two ways to handle this:
- Define one MSP to accommodate membership for all of the organization’s members. Configuration of that MSP would consist of a list of root CAs, intermediate CAs and admin certificates; and membership identities would include the organizational unit (OU) a member belongs to. Policies can then be defined to capture members of a specific OU, and these policies may constitute the read/write policies of a channel or endorsement policies of a chaincode. A limitation of this approach is that gossip peers would consider peers with membership identities under their local MSP as members of the same organization, and would consequently gossip with them organisation-scoped data (e.g. their status).
- Define one MSP to represent each division. This would involve specifying, for each division, a set of certificates for root CAs, intermediate CAs, and admin certs, such that there is no overlapping certification path across MSPs. This would mean that, for example, a different intermediate CA per subdivision is employed. The disadvantage here is the management of more than one MSP instead of one, but this circumvents the issue present in the previous approach. One could also define one MSP for each division by leveraging an OU extension of the MSP configuration.
3) Separating clients from peers of the same organization.
In many cases it is required that the “type” of an identity is retrievable from the identity itself (e.g. it may be needed that endorsements are guaranteed to have been derived by peers, and not by clients or nodes acting solely as orderers).
There is limited support for such requirements.
One way to allow for this separation is to create a separate intermediate CA for each node type - one for clients and one for peers/orderers - and configure two different MSPs - one for clients and one for peers/orderers. Channels this organization should be accessing would need to include both MSPs, while endorsement policies will leverage only the MSP that refers to the peers. This would ultimately result in the organization being mapped to two MSP instances, and would have certain consequences on the way peers and clients interact.
Gossip would not be drastically impacted, as all peers of the same organization would still belong to one MSP. Peers can restrict the execution of certain system chaincodes to local MSP based policies. For example, peers would only execute a “joinChannel” request if the request is signed by the admin of their local MSP, who can only be a client (an end-user should be sitting at the origin of that request). We can get around this inconsistency if we accept that the only clients to be members of a peer/orderer MSP would be the administrators of that MSP.
Another point to be considered with this approach is that peers authorize event registration requests based on membership of the request originator within their local MSP. Clearly, since the originator of the request is a client, the request originator will always belong to a different MSP than the requested peer, and the peer would reject the request.
4) Admin and CA certificates.
It is important to set MSP admin certificates to be different from any of the certificates considered by the MSP for the root of trust, or intermediate CAs. This is a common (security) practice to separate the duties of management of membership components from the issuing of new certificates, and/or validation of existing ones.
5) Blacklisting an intermediate CA.
As mentioned in previous sections, reconfiguration of an MSP is achieved by reconfiguration mechanisms (manual reconfiguration for the local MSP instances, and via properly constructed config_update messages for MSP instances of a channel). Clearly, there are two ways to ensure an intermediate CA considered in an MSP is no longer considered for that MSP’s identity validation:
- Reconfigure the MSP to no longer include the certificate of that intermediate CA in the list of trusted intermediate CA certs. For the locally configured MSP, this would mean that the certificate of this CA is removed from the intermediatecerts folder.
- Reconfigure the MSP to include a CRL produced by the root of trust which denounces the mentioned intermediate CA’s certificate.
In the current MSP implementation we only support method (1), as it is simpler and does not require blacklisting the no-longer-considered intermediate CA.
6) CAs and TLS CAs
MSP identities’ root CAs and MSP TLS certificates’ root CAs (and related intermediate CAs) need to be declared in different folders. This is to avoid confusion between different classes of certificates. It is not forbidden to reuse the same CAs for both MSP identities and TLS certificates, but best practices suggest avoiding this in production.
Channel Configuration (configtx)¶
Shared configuration for a Hyperledger Fabric blockchain network is stored in a collection of configuration transactions, one per channel. Each configuration transaction is usually referred to by the shorter name configtx.
Channel configuration has the following important properties:
- Versioned: All elements of the configuration have an associated version which is advanced with every modification. Further, every committed configuration receives a sequence number.
- Permissioned: Each element of the configuration has an associated policy which governs whether or not modification to that element is permitted. Anyone with a copy of the previous configtx (and no additional info) may verify the validity of a new config based on these policies.
- Hierarchical: A root configuration group contains sub-groups, and each group of the hierarchy has associated values and policies. These policies can take advantage of the hierarchy to derive policies at one level from policies of lower levels.
Anatomy of a configuration¶
Configuration is stored as a transaction of type HeaderType_CONFIG in a block with no other transactions. These blocks are referred to as Configuration Blocks, the first of which is referred to as the Genesis Block.
The proto structures for configuration are stored in fabric/protos/common/configtx.proto. The Envelope of type HeaderType_CONFIG encodes a ConfigEnvelope message as the Payload data field. The proto for ConfigEnvelope is defined as follows:
message ConfigEnvelope {
Config config = 1;
Envelope last_update = 2;
}
The last_update field is defined below in the Updates to configuration section, but is only necessary when validating the configuration, not reading it. Instead, the currently committed configuration is stored in the config field, containing a Config message.
message Config {
uint64 sequence = 1;
ConfigGroup channel_group = 2;
}
The sequence number is incremented by one for each committed configuration. The channel_group field is the root group which contains the configuration. The ConfigGroup structure is recursively defined, and builds a tree of groups, each of which contains values and policies. It is defined as follows:
message ConfigGroup {
uint64 version = 1;
map<string,ConfigGroup> groups = 2;
map<string,ConfigValue> values = 3;
map<string,ConfigPolicy> policies = 4;
string mod_policy = 5;
}
Because ConfigGroup is a recursive structure, it has a hierarchical arrangement. The following example is expressed for clarity in golang notation.
// Assume the following groups are defined
var root, child1, child2, grandChild1, grandChild2, grandChild3 *ConfigGroup
// Set the following values
root.Groups["child1"] = child1
root.Groups["child2"] = child2
child1.Groups["grandChild1"] = grandChild1
child2.Groups["grandChild2"] = grandChild2
child2.Groups["grandChild3"] = grandChild3
// The resulting config structure of groups looks like:
// root:
// child1:
// grandChild1
// child2:
// grandChild2
// grandChild3
Each group defines a level in the config hierarchy, and each group has an associated set of values (indexed by string key) and policies (also indexed by string key).
Values are defined by:
message ConfigValue {
uint64 version = 1;
bytes value = 2;
string mod_policy = 3;
}
Policies are defined by:
message ConfigPolicy {
uint64 version = 1;
Policy policy = 2;
string mod_policy = 3;
}
Note that Values, Policies, and Groups all have a version and a mod_policy. The version of an element is incremented each time that element is modified. The mod_policy is used to govern the required signatures to modify that element. For Groups, modification is adding or removing elements to the Values, Policies, or Groups maps (or changing the mod_policy). For Values and Policies, modification is changing the Value and Policy fields respectively (or changing the mod_policy). Each element’s mod_policy is evaluated in the context of the current level of the config. Consider the following example mod policies defined at Channel.Groups["Application"]. (Here, we use the golang map reference syntax, so Channel.Groups["Application"].Policies["policy1"] refers to the base Channel group’s Application group’s Policies map’s policy1 policy.)
- policy1 maps to Channel.Groups["Application"].Policies["policy1"]
- Org1/policy2 maps to Channel.Groups["Application"].Groups["Org1"].Policies["policy2"]
- /Channel/policy3 maps to Channel.Policies["policy3"]
Note that if a mod_policy references a policy which does not exist, the item cannot be modified.
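The resolution rules above can be sketched with a small helper. This is an illustrative model only, not Fabric code: it represents group nesting as slash-separated path strings, where a mod_policy beginning with / is absolute and anything else is resolved relative to the group that defines it.

```python
def resolve_mod_policy(mod_policy, group_path):
    """Resolve a mod_policy string relative to its defining group.

    group_path is the path of the group the policy is defined at,
    e.g. ["Channel", "Application"] for Channel.Groups["Application"].
    """
    if mod_policy.startswith("/"):
        return mod_policy  # absolute reference, taken from the root
    # relative reference, resolved against the current level of the config
    return "/" + "/".join(group_path + mod_policy.split("/"))

# The three examples from the text, defined at Channel.Groups["Application"]:
print(resolve_mod_policy("policy1", ["Channel", "Application"]))         # /Channel/Application/policy1
print(resolve_mod_policy("Org1/policy2", ["Channel", "Application"]))    # /Channel/Application/Org1/policy2
print(resolve_mod_policy("/Channel/policy3", ["Channel", "Application"]))  # /Channel/policy3
```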
Configuration updates¶
Configuration updates are submitted as an Envelope message of type HeaderType_CONFIG_UPDATE. The Payload data of the transaction is a marshaled ConfigUpdateEnvelope. The ConfigUpdateEnvelope is defined as follows:
message ConfigUpdateEnvelope {
bytes config_update = 1;
repeated ConfigSignature signatures = 2;
}
The signatures field contains the set of signatures which authorizes the config update. Its message definition is:
message ConfigSignature {
bytes signature_header = 1;
bytes signature = 2;
}
The signature_header is as defined for standard transactions, while the signature is over the concatenation of the signature_header bytes and the config_update bytes from the ConfigUpdateEnvelope message.
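The byte ordering matters here: each signature covers the signature_header bytes followed by the config_update bytes. A minimal sketch of the signed payload, where toy_sign is only a hash-based stand-in for a real MSP signature (which would use the signer's private key):

```python
import hashlib

def config_signature_payload(signature_header: bytes, config_update: bytes) -> bytes:
    # The bytes each signer actually signs: the signature_header bytes first,
    # then the config_update bytes from the ConfigUpdateEnvelope.
    return signature_header + config_update

def toy_sign(key: bytes, signature_header: bytes, config_update: bytes) -> bytes:
    # Illustrative stand-in for a real signature over the concatenated payload.
    return hashlib.sha256(key + config_signature_payload(signature_header, config_update)).digest()
```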
The ConfigUpdateEnvelope config_update bytes are a marshaled ConfigUpdate message, which is defined as follows:
message ConfigUpdate {
string channel_id = 1;
ConfigGroup read_set = 2;
ConfigGroup write_set = 3;
}
The channel_id is the channel ID the update is bound for; this is necessary to scope the signatures which support this reconfiguration. The read_set specifies a subset of the existing configuration, specified sparsely, where only the version field is set and no other fields need be populated. The particular ConfigValue value or ConfigPolicy policy fields should never be set in the read_set. The ConfigGroup may have a subset of its map fields populated, so as to reference an element deeper in the config tree. For instance, to include the Application group in the read_set, its parent (the Channel group) must also be included in the read set, but the Channel group does not need to populate all of the keys, such as the Orderer group key, or any of the values or policies keys.
The write_set specifies the pieces of configuration which are modified. Because of the hierarchical nature of the configuration, a write to an element deep in the hierarchy must contain the higher level elements in its write_set as well. However, for any element in the write_set which is also specified in the read_set at the same version, the element should be specified sparsely, just as in the read_set.
For example, given the configuration:
Channel: (version 0)
Orderer (version 0)
Application (version 3)
Org1 (version 2)
To submit a configuration update which modifies Org1, the read_set would be:
Channel: (version 0)
Application: (version 3)
and the write_set would be:
Channel: (version 0)
Application: (version 3)
Org1 (version 3)
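The sparse sets in this example can be derived mechanically from the path of the group being modified. The sketch below is a hypothetical helper over a nested-dict stand-in for ConfigGroup (only version and groups are modeled; real messages also carry values and policies):

```python
def make_update_sets(path):
    """Build sparse read_set and write_set for modifying one group.

    path: [(group_name, current_version), ...] from the root group down to
    the group being modified. Parents appear in both sets at their current
    versions; the modified leaf appears only in the write_set, with its
    version incremented.
    """
    parents, (leaf, leaf_version) = path[:-1], path[-1]

    def nest(groups, innermost):
        # Wrap innermost in one level per parent group, outermost last.
        node = innermost
        for name, version in reversed(groups):
            node = {name: {"version": version, "groups": node}}
        return node

    read_set = nest(parents, {})
    write_set = nest(parents, {leaf: {"version": leaf_version + 1, "groups": {}}})
    return read_set, write_set
```

For the configuration above, make_update_sets([("Channel", 0), ("Application", 3), ("Org1", 2)]) reproduces the read_set and write_set shown.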
When the CONFIG_UPDATE is received, the orderer computes the resulting CONFIG by doing the following:
- Verifies the channel_id and read_set. All elements in the read_set must exist at the given versions.
- Computes the update set by collecting all elements in the write_set which do not appear at the same version in the read_set.
- Verifies that each element in the update set increments the version number of the element by exactly 1.
- Verifies that the signature set attached to the ConfigUpdateEnvelope satisfies the mod_policy for each element in the update set.
- Computes a new complete version of the config by applying the update set to the current config.
- Writes the new config into a ConfigEnvelope which includes the CONFIG_UPDATE as the last_update field and the new config encoded in the config field, along with the incremented sequence value.
- Writes the new ConfigEnvelope into an Envelope of type CONFIG, and ultimately writes this as the sole transaction in a new configuration block.
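The second of these steps (collecting the update set) can be sketched as a walk over the two sparse trees. This is an illustrative nested-dict model, not the actual Fabric implementation, and only group versions are modeled:

```python
def compute_update_set(read_set, write_set, prefix=""):
    """Collect every element of write_set that does not appear at the same
    version in read_set, returning (path, new_version) pairs. Elements present
    in both sets at the same version are unchanged and only recursed into."""
    updates = []
    for name, node in write_set.items():
        path = prefix + "/" + name
        in_read = read_set.get(name)
        if in_read is None or in_read["version"] != node["version"]:
            updates.append((path, node["version"]))
        sub_read = in_read["groups"] if in_read else {}
        updates.extend(compute_update_set(sub_read, node.get("groups", {}), path))
    return updates
```

For the earlier Org1 example this yields [("/Channel/Application/Org1", 3)], since Channel and Application appear at matching versions in both sets and are therefore not part of the update set.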
When the peer (or any other receiver for Deliver) receives this configuration block, it should verify that the config was appropriately validated by applying the last_update message to the current config and verifying that the orderer-computed config field contains the correct new configuration.
Permitted configuration groups and values¶
Any valid configuration is a subset of the following configuration. Here we use the notation peer.<MSG> to define a ConfigValue whose value field is a marshaled proto message of name <MSG> defined in fabric/protos/peer/configuration.proto. The notations common.<MSG>, msp.<MSG>, and orderer.<MSG> correspond similarly, but with their messages defined in fabric/protos/common/configuration.proto, fabric/protos/msp/mspconfig.proto, and fabric/protos/orderer/configuration.proto respectively.
Note that the keys {{org_name}} and {{consortium_name}} represent arbitrary names, and indicate an element which may be repeated with different names.
&ConfigGroup{
Groups: map<string, *ConfigGroup> {
"Application":&ConfigGroup{
Groups:map<String, *ConfigGroup> {
{{org_name}}:&ConfigGroup{
Values:map<string, *ConfigValue>{
"MSP":msp.MSPConfig,
"AnchorPeers":peer.AnchorPeers,
},
},
},
},
"Orderer":&ConfigGroup{
Groups:map<String, *ConfigGroup> {
{{org_name}}:&ConfigGroup{
Values:map<string, *ConfigValue>{
"MSP":msp.MSPConfig,
},
},
},
Values:map<string, *ConfigValue> {
"ConsensusType":orderer.ConsensusType,
"BatchSize":orderer.BatchSize,
"BatchTimeout":orderer.BatchTimeout,
"KafkaBrokers":orderer.KafkaBrokers,
},
},
"Consortiums":&ConfigGroup{
Groups:map<String, *ConfigGroup> {
{{consortium_name}}:&ConfigGroup{
Groups:map<string, *ConfigGroup> {
{{org_name}}:&ConfigGroup{
Values:map<string, *ConfigValue>{
"MSP":msp.MSPConfig,
},
},
},
Values:map<string, *ConfigValue> {
"ChannelCreationPolicy":common.Policy,
}
},
},
},
},
Values: map<string, *ConfigValue> {
"HashingAlgorithm":common.HashingAlgorithm,
"BlockHashingDataStructure":common.BlockDataHashingStructure,
"Consortium":common.Consortium,
"OrdererAddresses":common.OrdererAddresses,
},
}
Orderer system channel configuration¶
The ordering system channel needs to define ordering parameters, and consortiums for creating channels. There must be exactly one ordering system channel for an ordering service, and it is the first channel to be created (or more accurately bootstrapped). It is recommended never to define an Application section inside of the ordering system channel genesis configuration, though this may be done for testing. Note that any member with read access to the ordering system channel may see all channel creations, so this channel’s access should be restricted.
The ordering parameters are defined as the following subset of config:
&ConfigGroup{
Groups: map<string, *ConfigGroup> {
"Orderer":&ConfigGroup{
Groups:map<String, *ConfigGroup> {
{{org_name}}:&ConfigGroup{
Values:map<string, *ConfigValue>{
"MSP":msp.MSPConfig,
},
},
},
Values:map<string, *ConfigValue> {
"ConsensusType":orderer.ConsensusType,
"BatchSize":orderer.BatchSize,
"BatchTimeout":orderer.BatchTimeout,
"KafkaBrokers":orderer.KafkaBrokers,
},
},
},
}
Each organization participating in ordering has a group element under the Orderer group. This group defines a single parameter MSP which contains the cryptographic identity information for that organization. The Values of the Orderer group determine how the ordering nodes function. They exist per channel, so orderer.BatchTimeout for instance may be specified differently on one channel than another.
At startup, the orderer is faced with a filesystem which contains information for many channels. The orderer identifies the system channel by identifying the channel with the consortiums group defined. The consortiums group has the following structure.
&ConfigGroup{
Groups: map<string, *ConfigGroup> {
"Consortiums":&ConfigGroup{
Groups:map<String, *ConfigGroup> {
{{consortium_name}}:&ConfigGroup{
Groups:map<string, *ConfigGroup> {
{{org_name}}:&ConfigGroup{
Values:map<string, *ConfigValue>{
"MSP":msp.MSPConfig,
},
},
},
Values:map<string, *ConfigValue> {
"ChannelCreationPolicy":common.Policy,
}
},
},
},
},
},
Note that each consortium defines a set of members, just like the organizational members for the ordering orgs. Each consortium also defines a ChannelCreationPolicy. This is a policy which is applied to authorize channel creation requests. Typically, this value will be set to an ImplicitMetaPolicy requiring that the new members of the channel sign to authorize the channel creation. More details about channel creation follow later in this document.
Application channel configuration¶
Application configuration is for channels which are designed for application type transactions. It is defined as follows:
&ConfigGroup{
Groups: map<string, *ConfigGroup> {
"Application":&ConfigGroup{
Groups:map<String, *ConfigGroup> {
{{org_name}}:&ConfigGroup{
Values:map<string, *ConfigValue>{
"MSP":msp.MSPConfig,
"AnchorPeers":peer.AnchorPeers,
},
},
},
},
},
}
Just like with the Orderer section, each organization is encoded as a group. However, instead of only encoding the MSP identity information, each org additionally encodes a list of AnchorPeers. This list allows the peers of different organizations to contact each other for peer gossip networking.
The application channel encodes a copy of the orderer orgs and consensus options to allow for deterministic updating of these parameters, so the same Orderer section from the orderer system channel configuration is included. However, from an application perspective this may be largely ignored.
Channel creation¶
When the orderer receives a CONFIG_UPDATE for a channel which does not exist, the orderer assumes that this must be a channel creation request and performs the following:
- The orderer identifies the consortium which the channel creation request is to be performed for. It does this by looking at the Consortium value of the top level group.
- The orderer verifies that the organizations included in the Application group are a subset of the organizations included in the corresponding consortium, and that the ApplicationGroup is set to version 1.
- The orderer verifies that if the consortium has members, the new channel also has application members (creating consortiums and channels with no members is useful for testing only).
- The orderer creates a template configuration by taking the Orderer group from the ordering system channel, and creating an Application group with the newly specified members, specifying its mod_policy to be the ChannelCreationPolicy as specified in the consortium config. Note that the policy is evaluated in the context of the new configuration, so a policy requiring ALL members would require signatures from all the new channel members, not all the members of the consortium.
- The orderer then applies the CONFIG_UPDATE as an update to this template configuration. Because the CONFIG_UPDATE applies modifications to the Application group (its version is 1), the config code validates these updates against the ChannelCreationPolicy. If the channel creation contains any other modifications, such as to an individual org’s anchor peers, the corresponding mod policy for the element will be invoked.
- The new CONFIG transaction with the new channel config is wrapped and sent for ordering on the ordering system channel. After ordering, the channel is created.
Channel Configuration (configtxgen)¶
This document describes the usage of the configtxgen utility for manipulating Hyperledger Fabric channel configuration.
For now, the tool is primarily focused on generating the genesis block for bootstrapping the orderer, but it is intended to be enhanced in the future for generating new channel configurations as well as reconfiguring existing channels.
Configuration Profiles¶
The configuration parameters supplied to the configtxgen tool are primarily provided by the configtx.yaml file. This file is located at fabric/sampleconfig/configtx.yaml in the fabric.git repository.
This configuration file is split primarily into three pieces.
- The Profiles section. By default, this section includes some sample configurations which can be used for development or testing scenarios, and refer to crypto material present in the fabric.git tree. These profiles can make a good starting point for constructing a real deployment profile. The configtxgen tool allows you to specify the profile it is operating under by passing the -profile flag. Profiles may explicitly declare all configuration, but usually inherit configuration from the defaults in (3) below.
- The Organizations section. By default, this section includes a single reference to the sampleconfig MSP definition. For production deployments, the sample organization should be removed, and the MSP definitions of the network members should be referenced and defined instead. Each element in the Organizations section should be tagged with an anchor label such as &orgName which will allow the definition to be referenced in the Profiles sections.
- The default sections. There are default sections for Orderer and Application configuration; these include attributes like BatchTimeout and are generally used as the base inherited values for the profiles.
This configuration file may be edited, or individual properties may be overridden by setting environment variables, such as CONFIGTX_ORDERER_ORDERERTYPE=kafka. Note that the Profiles element and profile name do not need to be specified.
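The variable naming follows the pattern implied by the example: prefix CONFIGTX_, drop the Profiles element and profile name, upper-case the remaining YAML path, and replace the separators with underscores. A sketch of that mapping (generalized from the single documented example, so treat it as an assumption rather than a specification):

```python
def configtx_env_var(yaml_path):
    """Map a configtx.yaml property path (dot separated, without the Profiles
    element or profile name) to its overriding environment variable name."""
    return "CONFIGTX_" + yaml_path.replace(".", "_").upper()

print(configtx_env_var("Orderer.OrdererType"))  # CONFIGTX_ORDERER_ORDERERTYPE
```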
Bootstrapping the orderer¶
After creating a configuration profile as desired, simply invoke
configtxgen -profile <profile_name> -outputBlock orderer_genesisblock.pb
This will produce an orderer_genesisblock.pb file in the current directory. This genesis block is used to bootstrap the ordering system channel, which the orderers use to authorize and orchestrate creation of other channels. By default, the channel ID encoded into the genesis block by configtxgen will be testchainid. It is recommended that you modify this identifier to something which will be globally unique.
Then, to utilize this genesis block, before starting the orderer, simply specify ORDERER_GENERAL_GENESISMETHOD=file and ORDERER_GENERAL_GENESISFILE=$PWD/orderer_genesisblock.pb or modify the orderer.yaml file to encode these values.
Creating a channel¶
The tool can also output a channel creation tx by executing
configtxgen -profile <profile_name> -channelID <channel_name> -outputCreateChannelTx <tx_filename>
This will output a marshaled Envelope message which may be sent to broadcast to create a channel.
Reviewing a configuration¶
In addition to creating configuration, the configtxgen tool is also capable of inspecting configuration. It supports inspecting both configuration blocks and configuration transactions. You may use the inspect flags -inspectBlock and -inspectChannelCreateTx respectively, with the path of a file to inspect, to output a human readable (JSON) representation of the configuration.
You may even wish to combine the inspection with generation. For example:
$ build/bin/configtxgen -channelID foo -outputBlock foo_genesisblock.pb -inspectBlock foo_genesisblock.pb
2017-11-02 17:56:04.489 EDT [common/tools/configtxgen] main -> INFO 001 Loading configuration
2017-11-02 17:56:04.564 EDT [common/tools/configtxgen] doOutputBlock -> INFO 002 Generating genesis block
2017-11-02 17:56:04.564 EDT [common/tools/configtxgen] doOutputBlock -> INFO 003 Writing genesis block
2017-11-02 17:56:04.564 EDT [common/tools/configtxgen] doInspectBlock -> INFO 004 Inspecting block
2017-11-02 17:56:04.564 EDT [common/tools/configtxgen] doInspectBlock -> INFO 005 Parsing genesis block
{
"data": {
"data": [
{
"payload": {
"data": {
"config": {
"channel_group": {
"groups": {
"Consortiums": {
"groups": {
"SampleConsortium": {
"mod_policy": "/Channel/Orderer/Admins",
"values": {
"ChannelCreationPolicy": {
"mod_policy": "/Channel/Orderer/Admins",
"value": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Admins"
}
},
"version": "0"
}
},
"version": "0"
}
},
"mod_policy": "/Channel/Orderer/Admins",
"policies": {
"Admins": {
"mod_policy": "/Channel/Orderer/Admins",
"policy": {
"type": 1,
"value": {
"rule": {
"n_out_of": {
"n": 0
}
},
"version": 0
}
},
"version": "0"
}
},
"version": "0"
},
"Orderer": {
"mod_policy": "Admins",
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "MAJORITY",
"sub_policy": "Admins"
}
},
"version": "0"
},
"BlockValidation": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Writers"
}
},
"version": "0"
},
"Readers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Readers"
}
},
"version": "0"
},
"Writers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Writers"
}
},
"version": "0"
}
},
"values": {
"BatchSize": {
"mod_policy": "Admins",
"value": {
"absolute_max_bytes": 10485760,
"max_message_count": 10,
"preferred_max_bytes": 524288
},
"version": "0"
},
"BatchTimeout": {
"mod_policy": "Admins",
"value": {
"timeout": "2s"
},
"version": "0"
},
"ChannelRestrictions": {
"mod_policy": "Admins",
"version": "0"
},
"ConsensusType": {
"mod_policy": "Admins",
"value": {
"type": "solo"
},
"version": "0"
}
},
"version": "0"
}
},
"mod_policy": "Admins",
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "MAJORITY",
"sub_policy": "Admins"
}
},
"version": "0"
},
"Readers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Readers"
}
},
"version": "0"
},
"Writers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Writers"
}
},
"version": "0"
}
},
"values": {
"BlockDataHashingStructure": {
"mod_policy": "Admins",
"value": {
"width": 4294967295
},
"version": "0"
},
"HashingAlgorithm": {
"mod_policy": "Admins",
"value": {
"name": "SHA256"
},
"version": "0"
},
"OrdererAddresses": {
"mod_policy": "/Channel/Orderer/Admins",
"value": {
"addresses": [
"127.0.0.1:7050"
]
},
"version": "0"
}
},
"version": "0"
},
"sequence": "0",
"type": 0
}
},
"header": {
"channel_header": {
"channel_id": "foo",
"epoch": "0",
"timestamp": "2017-11-02T21:56:04.000Z",
"tx_id": "6acfe1257c23a4f844cc299cbf53acc7bf8fa8bcf8aae8d049193098fe982eab",
"type": 1,
"version": 1
},
"signature_header": {
"nonce": "eZOKru6jmeiWykBtSDwnkGjyQt69GwuS"
}
}
}
}
]
},
"header": {
"data_hash": "/86I/7NScbH/bHcDcYG0/9qTmVPWVoVVfSN8NKMARKI=",
"number": "0"
},
"metadata": {
"metadata": [
"",
"",
"",
""
]
}
}
Reconfiguring with configtxlator¶
Overview¶
The configtxlator tool was created to support reconfiguration independent of SDKs. Channel configuration is stored as a transaction in configuration blocks of a channel and may be manipulated directly, such as in the bdd behave tests. However, at the time of this writing, no SDK natively supports manipulating the configuration directly, so the configtxlator tool is designed to provide an API which consumers of any SDK may interact with to assist with configuration updates.
The tool name is a portmanteau of configtx and translator and is intended to convey that the tool simply converts between different equivalent data representations. It does not generate configuration. It does not submit or retrieve configuration. It does not modify configuration itself; it simply provides some bijective operations between different views of the configtx format.
The standard usage is expected to be:
1. SDK retrieves latest config
2. configtxlator produces human readable version of config
3. User or application edits the config
4. configtxlator is used to compute config update representation of changes to the config
5. SDK signs and submits the config update
The configtxlator tool exposes a truly stateless REST API for interacting with configuration elements. These REST components support converting the native configuration format to/from a human readable JSON representation, as well as computing configuration updates based on the difference between two configurations.
Because the configtxlator service deliberately does not contain any crypto material, or otherwise secret information, it does not include any authorization or access control. The anticipated typical deployment would be to operate as a sandboxed container, locally with the application, so that there is a dedicated configtxlator process for each consumer of it.
Running the configtxlator¶
The configtxlator tool can be downloaded with the other Hyperledger Fabric platform-specific binaries. Please see download-platform-specific-binaries for details.
The tool may be configured to listen on a different port, and you may also specify the hostname, using the --port and --hostname flags. To explore the complete set of commands and flags, run configtxlator --help. The binary will start an HTTP server listening on the designated port and is then ready to process requests.
To start the configtxlator server:
configtxlator start
2017-06-21 18:16:58.248 HKT [configtxlator] startServer -> INFO 001 Serving HTTP requests on 0.0.0.0:7059
Proto translation¶
For extensibility, and because certain fields must be signed over, many proto fields are stored as bytes. This makes the natural proto to JSON translation using the jsonpb package ineffective for producing a human readable version of the protobufs. Instead, the configtxlator exposes a REST component to do a more sophisticated translation.
To convert a proto to its human readable JSON equivalent, simply post the binary proto to the REST target http://$SERVER:$PORT/protolator/decode/<message.Name>, where <message.Name> is the fully qualified proto name of the message. For instance, to decode a configuration block saved as configuration_block.pb, run the command:
curl -X POST --data-binary @configuration_block.pb http://127.0.0.1:7059/protolator/decode/common.Block
To convert the human readable JSON version back into proto, simply post the
JSON version to http://$SERVER:$PORT/protolator/encode/<message.Name>, where
<message.Name> is again the fully qualified proto name of the message.
For instance, to re-encode the block saved as configuration_block.json
, run
the command:
curl -X POST --data-binary @configuration_block.json http://127.0.0.1:7059/protolator/encode/common.Block
Any of the configuration related protos, including common.Block
,
common.Envelope
, common.ConfigEnvelope
, common.ConfigUpdateEnvelope
,
common.Config
, and common.ConfigUpdate
are valid targets for
these URLs. In the future, other proto decoding types may be added, such as
for endorser transactions.
Config update computation¶
Given two different configurations, it is possible to compute the config update
which transitions between them. Simply POST the two common.Config
proto
encoded configurations as multipart/form-data
, with the original as field
original
and the updated as field updated
, to
http://$SERVER:$PORT/configtxlator/compute/update-from-configs
.
For example, given the original config as the file original_config.pb
and
the updated config as the file updated_config.pb
for the channel
desiredchannel
:
curl -X POST -F channel=desiredchannel -F original=@original_config.pb -F updated=@updated_config.pb http://127.0.0.1:7059/configtxlator/compute/update-from-configs
Bootstrapping example¶
First start the configtxlator
:
$ configtxlator start
2017-05-31 12:57:22.499 EDT [configtxlator] main -> INFO 001 Serving HTTP requests on port: 7059
Next, produce a genesis block for the ordering system channel:
$ configtxgen -outputBlock genesis_block.pb
2017-05-31 14:15:16.634 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-05-31 14:15:16.646 EDT [common/configtx/tool] doOutputBlock -> INFO 002 Generating genesis block
2017-05-31 14:15:16.646 EDT [common/configtx/tool] doOutputBlock -> INFO 003 Writing genesis block
Decode the genesis block into a human editable form:
curl -X POST --data-binary @genesis_block.pb http://127.0.0.1:7059/protolator/decode/common.Block > genesis_block.json
Edit the genesis_block.json file in your favorite JSON editor, or manipulate
it programmatically. Here we use the JSON CLI tool jq. For simplicity, we
are editing the batch size for the channel, because it is a single numeric
field. However, any edits, including policy and MSP edits, may be made here.
First, let’s establish an environment variable to hold the string that defines the path to a property in the JSON:
export MAXBATCHSIZEPATH=".data.data[0].payload.data.config.channel_group.groups.Orderer.values.BatchSize.value.max_message_count"
Next, let’s display the value of that property:
jq "$MAXBATCHSIZEPATH" genesis_block.json
10
Now, let’s set the new batch size, and display the new value:
jq "$MAXBATCHSIZEPATH = 20" genesis_block.json > updated_genesis_block.json
jq "$MAXBATCHSIZEPATH" updated_genesis_block.json
20
The genesis block is now ready to be re-encoded into the native proto form to be used for bootstrapping:
curl -X POST --data-binary @updated_genesis_block.json http://127.0.0.1:7059/protolator/encode/common.Block > updated_genesis_block.pb
The updated_genesis_block.pb
file may now be used as the genesis block for
bootstrapping an ordering system channel.
Reconfiguration example¶
In another terminal window, start the orderer using the default options,
including the provisional bootstrapper which will create a testchainid
ordering system channel.
ORDERER_GENERAL_LOGLEVEL=debug orderer
Reconfiguring a channel can be performed in a very similar way to modifying a genesis config.
First, fetch the config_block proto:
$ peer channel fetch config config_block.pb -o 127.0.0.1:7050 -c testchainid
2017-05-31 15:11:37.617 EDT [msp] getMspConfig -> INFO 001 intermediate certs folder not found at [/home/yellickj/go/src/github.com/hyperledger/fabric/sampleconfig/msp/intermediatecerts]. Skipping.: [stat /home/yellickj/go/src/github.com/hyperledger/fabric/sampleconfig/msp/intermediatecerts: no such file or directory]
2017-05-31 15:11:37.617 EDT [msp] getMspConfig -> INFO 002 crls folder not found at [/home/yellickj/go/src/github.com/hyperledger/fabric/sampleconfig/msp/intermediatecerts]. Skipping.: [stat /home/yellickj/go/src/github.com/hyperledger/fabric/sampleconfig/msp/crls: no such file or directory]
Received block: 1
2017-05-31 15:11:37.635 EDT [main] main -> INFO 003 Exiting.....
Next, send the config block to the configtxlator
service for decoding:
curl -X POST --data-binary @config_block.pb http://127.0.0.1:7059/protolator/decode/common.Block > config_block.json
Extract the config section from the block:
jq .data.data[0].payload.data.config config_block.json > config.json
Edit the config, saving it as a new updated_config.json
. Here, we set the
batch size to 30.
jq ".channel_group.groups.Orderer.values.BatchSize.value.max_message_count = 30" config.json > updated_config.json
Re-encode both the original config, and the updated config into proto:
curl -X POST --data-binary @config.json http://127.0.0.1:7059/protolator/encode/common.Config > config.pb
curl -X POST --data-binary @updated_config.json http://127.0.0.1:7059/protolator/encode/common.Config > updated_config.pb
Now, with both configs properly encoded, send them to the configtxlator service to compute the config update which transitions between the two.
curl -X POST -F original=@config.pb -F updated=@updated_config.pb http://127.0.0.1:7059/configtxlator/compute/update-from-configs -F channel=testchainid > config_update.pb
At this point, the computed config update is now prepared. Traditionally, an SDK would be used to sign and wrap this message. However, in the interest of using only the peer cli, the configtxlator can also be used for this task.
First, we decode the ConfigUpdate so that we may work with it as text:
$ curl -X POST --data-binary @config_update.pb http://127.0.0.1:7059/protolator/decode/common.ConfigUpdate > config_update.json
Then, we wrap it in an envelope message:
echo '{"payload":{"header":{"channel_header":{"channel_id":"testchainid", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' > config_update_as_envelope.json
Next, convert it back into the proto form of a full fledged config transaction:
curl -X POST --data-binary @config_update_as_envelope.json http://127.0.0.1:7059/protolator/encode/common.Envelope > config_update_as_envelope.pb
Finally, submit the config update transaction to ordering to perform a config update.
peer channel update -f config_update_as_envelope.pb -c testchainid -o 127.0.0.1:7050
Adding an organization¶
First start the configtxlator
:
$ configtxlator start
2017-05-31 12:57:22.499 EDT [configtxlator] main -> INFO 001 Serving HTTP requests on port: 7059
Start the orderer using the SampleDevModeSolo
profile option.
ORDERER_GENERAL_LOGLEVEL=debug ORDERER_GENERAL_GENESISPROFILE=SampleDevModeSolo orderer
The process to add an organization then follows exactly like the batch size example. However, instead of setting the batch size, a new org is defined at the application level. Adding an organization is slightly more involved because we must first create a channel, then modify its membership set.
Endorsement policies¶
Endorsement policies are used to instruct a peer on how to decide whether a transaction is properly endorsed. When a peer receives a transaction, it invokes the VSCC (Validation System Chaincode) associated with the transaction’s Chaincode as part of the transaction validation flow to determine the validity of the transaction. Recall that a transaction contains one or more endorsements from as many endorsing peers. VSCC is tasked to make the following determinations:
- all endorsements are valid (i.e. they are valid signatures from valid certificates over the expected message)
- there is an appropriate number of endorsements
- endorsements come from the expected source(s)
Endorsement policies are a way of specifying the second and third points.
Endorsement policy design¶
Endorsement policies have two main components:
- a principal
- a threshold gate
A principal P
identifies the entity whose signature is expected.
A threshold gate T
takes two inputs: an integer t
(the
threshold) and a list of n
principals or gates; this gate
essentially captures the expectation that out of those n
principals
or gates, t
are requested to be satisfied.
For example:
- T(2, 'A', 'B', 'C') requests a signature from any 2 principals out of ‘A’, ‘B’ or ‘C’;
- T(1, 'A', T(2, 'B', 'C')) requests either one signature from principal A or 1 signature from B and C each.
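The recursive structure of principals and threshold gates can be modeled directly in code. The sketch below is an illustrative evaluator only, not Fabric's implementation; the Policy, Principal, and Threshold names are invented for the example.

```go
package main

import "fmt"

// Policy is either a principal or a threshold gate over sub-policies.
type Policy interface {
	Satisfied(signers map[string]bool) bool
}

// Principal is satisfied when the named entity has signed.
type Principal string

func (p Principal) Satisfied(signers map[string]bool) bool { return signers[string(p)] }

// Threshold models T(t, subs...): satisfied when at least t of the n
// sub-policies (principals or nested gates) are satisfied.
type Threshold struct {
	T    int
	Subs []Policy
}

func (g Threshold) Satisfied(signers map[string]bool) bool {
	n := 0
	for _, s := range g.Subs {
		if s.Satisfied(signers) {
			n++
		}
	}
	return n >= g.T
}

func main() {
	// T(1, 'A', T(2, 'B', 'C')): either A signs, or both B and C sign.
	policy := Threshold{1, []Policy{
		Principal("A"),
		Threshold{2, []Policy{Principal("B"), Principal("C")}},
	}}
	fmt.Println(policy.Satisfied(map[string]bool{"A": true}))            // true
	fmt.Println(policy.Satisfied(map[string]bool{"B": true}))            // false
	fmt.Println(policy.Satisfied(map[string]bool{"B": true, "C": true})) // true
}
```

Note how the evaluation recurses: the inner gate counts as a single satisfied input to the outer gate once its own threshold is met.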
Endorsement policy syntax in the CLI¶
In the CLI, a simple language is used to express policies in terms of boolean expressions over principals.
A principal is described in terms of the MSP that is tasked to validate
the identity of the signer and of the role that the signer has within
that MSP. Currently, two roles are supported: member and admin.
Principals are described as MSP.ROLE, where MSP is the MSP
ID that is required, and ROLE is either one of the two strings
member and admin. Examples of valid principals are
'Org0.admin' (any administrator of the Org0 MSP) or
'Org1.member' (any member of the Org1 MSP).
The syntax of the language is:
EXPR(E[, E...])
where EXPR is either AND or OR, representing the two boolean
expressions, and E is either a principal (with the syntax described
above) or another nested call to EXPR.
For example:
- AND('Org1.member', 'Org2.member', 'Org3.member') requests 1 signature from each of the three principals
- OR('Org1.member', 'Org2.member') requests 1 signature from either one of the two principals
- OR('Org1.member', AND('Org2.member', 'Org3.member')) requests either one signature from a member of the Org1 MSP or 1 signature from a member of the Org2 MSP and 1 signature from a member of the Org3 MSP.
Specifying endorsement policies for a chaincode¶
Using this language, a chaincode deployer can request that the
endorsements for a chaincode be validated against the specified policy.
NOTE - the default policy requires one signature from a member of the
DEFAULT MSP. This is used if a policy is not specified in the CLI.
The policy can be specified at deploy time using the -P
switch,
followed by the policy.
For example:
peer chaincode deploy -C testchainid -n mycc -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a","100","b","200"]}' -P "AND('Org1.member', 'Org2.member')"
This command deploys chaincode mycc
on chain testchainid
with
the policy AND('Org1.member', 'Org2.member')
.
Future enhancements¶
In this section we list future enhancements for endorsement policies:
- alongside the existing way of identifying principals by their relationship with an MSP, we plan to identify principals in terms of the Organization Unit (OU) expected in their certificates; this is useful to express policies where we request signatures from any identity displaying a valid certificate with an OU matching the one requested in the definition of the principal.
- instead of the syntax AND(., .) we plan to move to a more intuitive syntax . AND .
- we plan to expose generalized threshold gates in the language as well, alongside AND (which is the special n-out-of-n gate) and OR (which is the special 1-out-of-n gate)
Error handling¶
General Overview¶
The Hyperledger Fabric error handling framework can be found in the source repository under common/errors. It defines a new type of error, CallStackError, to use in place of the standard error type provided by Go.
A CallStackError consists of the following:
- Component code - a name for the general area of the code that is generating the error. Component codes should consist of three uppercase letters. Numerics and special characters are not allowed. A set of component codes is defined in common/errors/codes.go
- Reason code - a short code to help identify the reason the error occurred. Reason codes should consist of three numeric values. Letters and special characters are not allowed. A set of reason codes is defined in common/errors/codes.go
- Error code - the component code and reason code separated by a colon, e.g. MSP:404
- Error message - the text that describes the error. This is the same as the input provided to fmt.Errorf() and errors.New(). If an error has been wrapped into the current error, its message will be appended.
- Callstack - the callstack at the time the error is created. If an error has been wrapped into the current error, its error message and callstack will be appended to retain the context of the wrapped error.
The CallStackError interface exposes the following functions:
- Error() - returns the error message with callstack appended
- Message() - returns the error message (without callstack appended)
- GetComponentCode() - returns the 3-character component code
- GetReasonCode() - returns the 3-digit reason code
- GetErrorCode() - returns the error code, which is “component:reason”
- GetStack() - returns just the callstack
- WrapError(error) - wraps the provided error into the CallStackError
Usage Instructions¶
The new error handling framework should be used in place of all calls to
fmt.Errorf() or errors.New(). Using this framework will provide error
codes to check against as well as the option to generate a callstack that will be
appended to the error message.
Using the framework is simple and will only require an easy tweak to your code.
First, you’ll need to import github.com/hyperledger/fabric/common/errors into any file that uses this framework.
Let’s take the following as an example from core/chaincode/chaincode_support.go:
err = fmt.Errorf("Error starting container: %s", err)
For this error, we will simply call the constructor for Error and pass a
component code, reason code, followed by the error message. At the end, we
then call the WrapError()
function, passing along the error itself.
fmt.Errorf("Error starting container: %s", err)
becomes
errors.ErrorWithCallstack("CHA", "505", "Error starting container").WrapError(err)
You could also just leave the message as is without any problems:
errors.ErrorWithCallstack("CHA", "505", "Error starting container: %s", err)
With this usage you will be able to format the error message from the previous error into the new error, but will lose the ability to print the callstack (if the wrapped error is a CallStackError).
A second example to highlight a scenario that involves formatting directives for parameters other than errors, while still wrapping an error, is as follows:
fmt.Errorf("failed to get deployment payload %s - %s", canName, err)
becomes
errors.ErrorWithCallstack("CHA", "506", "Failed to get deployment payload %s", canName).WrapError(err)
Displaying error messages¶
Once the error has been created using the framework, displaying the error message is as simple as:
logger.Errorf("%s", err)
or
fmt.Println(err)
or
fmt.Printf("%s\n", err)
An example from peer/common/common.go:
errors.ErrorWithCallstack("PER", "404", "Error trying to connect to local peer").WrapError(err)
would display the error message:
PER:404 - Error trying to connect to local peer
Caused by: grpc: timed out when dialing
注解
The callstacks have not been displayed for this example for the sake of brevity.
General guidelines for error handling in Hyperledger Fabric¶
- If it is some sort of best effort thing you are doing, you should log the error and ignore it.
- If you are servicing a user request, you should log the error and return it.
- If the error comes from elsewhere, you have the choice to wrap the error or not. Typically, it’s best to not wrap the error and simply return it as is. However, for certain cases where a utility function is called, wrapping the error with a new component and reason code can help an end user understand where the error is really occurring without inspecting the callstack.
- A panic should be handled within the same layer by throwing an internal error code/start a recovery process and should not be allowed to propagate to other packages.
Logging Control¶
Overview¶
Logging in the peer
application and in the shim
interface to
chaincodes is programmed using facilities provided by the
github.com/op/go-logging
package. This package supports
- Logging control based on the severity of the message
- Logging control based on the software module generating the message
- Different pretty-printing options based on the severity of the message
All logs are currently directed to stderr
, and the pretty-printing
is currently fixed. However global and module-level control of logging
by severity is provided for both users and developers. There are
currently no formalized rules for the types of information provided at
each severity level, however when submitting bug reports the developers
may want to see full logs down to the DEBUG level.
In pretty-printed logs the logging level is indicated both by color and by a 4-character code, e.g., “ERRO” for ERROR, “DEBU” for DEBUG, etc. In the logging context a module is an arbitrary name (string) given by developers to groups of related messages. In the pretty-printed example below, the logging modules “peer”, “rest” and “main” are generating logs.
16:47:09.634 [peer] GetLocalAddress -> INFO 033 Auto detected peer address: 9.3.158.178:7051
16:47:09.635 [rest] StartOpenchainRESTServer -> INFO 035 Initializing the REST service...
16:47:09.635 [main] serve -> INFO 036 Starting peer with id=name:"vp1" , network id=dev, address=9.3.158.178:7051, discovery.rootnode=, validator=true
An arbitrary number of logging modules can be created at runtime, therefore there is no “master list” of modules, and logging control constructs can not check whether logging modules actually do or will exist. Also note that the logging module system does not understand hierarchy or wildcarding: You may see module names like “foo/bar” in the code, but the logging system only sees a flat string. It doesn’t understand that “foo/bar” is related to “foo” in any way, or that “foo/*” might indicate all “submodules” of foo.
peer¶
The logging level of the peer
command can be controlled from the
command line for each invocation using the --logging-level
flag, for
example
peer node start --logging-level=debug
The default logging level for each individual peer
subcommand can
also be set in the
core.yaml
file. For example the key logging.node
sets the default level for
the node subcommand. Comments in the file also explain how the
logging level can be overridden in various ways by using environment
variables.
Logging severity levels are specified using case-insensitive strings chosen from
CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG
The full logging level specification for the peer
is of the form
[<module>[,<module>...]=]<level>[:[<module>[,<module>...]=]<level>...]
A logging level by itself is taken as the overall default. Otherwise, overrides for individual or groups of modules can be specified using the
<module>[,<module>...]=<level>
syntax. Examples of specifications (valid for all of
--logging-level
, environment variable and
core.yaml
settings):
info - Set default to INFO
warning:main,db=debug:chaincode=info - Default WARNING; Override for main,db,chaincode
chaincode=info:main=debug:db=debug:warning - Same as above
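The grammar above can be illustrated with a small parser that decomposes a specification into a default level plus per-module overrides. This is a sketch of the decomposition only (the function name is invented), not the peer's actual parsing code.

```go
package main

import (
	"fmt"
	"strings"
)

// parseLoggingSpec splits a spec like "warning:main,db=debug:chaincode=info"
// on ':'. A field containing '=' assigns a level to one or more modules;
// a bare field is the overall default level. Illustrative sketch only.
func parseLoggingSpec(spec string) (defaultLevel string, overrides map[string]string) {
	overrides = map[string]string{}
	for _, field := range strings.Split(spec, ":") {
		if i := strings.Index(field, "="); i >= 0 {
			level := field[i+1:]
			for _, module := range strings.Split(field[:i], ",") {
				overrides[module] = level
			}
		} else {
			defaultLevel = field // a bare level is the overall default
		}
	}
	return defaultLevel, overrides
}

func main() {
	def, ov := parseLoggingSpec("warning:main,db=debug:chaincode=info")
	fmt.Println(def, ov["main"], ov["db"], ov["chaincode"]) // warning debug debug info
}
```

Running the parser over the two equivalent specifications above yields the same default and overrides, which is why they behave identically.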
Go chaincodes¶
The standard mechanism to log within a chaincode application is to
integrate with the logging transport exposed to each chaincode instance
via the peer. The chaincode shim
package provides APIs that allow a
chaincode to create and manage logging objects whose logs will be
formatted and interleaved consistently with the shim
logs.
As independently executed programs, user-provided chaincodes may technically also produce output on stdout/stderr. While naturally useful for “devmode”, these channels are normally disabled on a production network to mitigate abuse from broken or malicious code. However, it is possible to enable this output even for peer-managed containers (e.g. “netmode”) on a per-peer basis via the CORE_VM_DOCKER_ATTACHSTDOUT=true configuration option.
Once enabled, each chaincode will receive its own logging channel keyed by its container-id. Any output written to either stdout or stderr will be integrated with the peer’s log on a per-line basis. It is not recommended to enable this for production.
API¶
NewLogger(name string) *ChaincodeLogger - Create a logging object for use by a chaincode
(c *ChaincodeLogger) SetLevel(level LoggingLevel) - Set the logging level of the logger
(c *ChaincodeLogger) IsEnabledFor(level LoggingLevel) bool - Return true if logs will be generated at the given level
LogLevel(levelString string) (LoggingLevel, error) - Convert a string to a LoggingLevel
A LoggingLevel
is a member of the enumeration
LogDebug, LogInfo, LogNotice, LogWarning, LogError, LogCritical
which can be used directly, or generated by passing a case-insensitive version of the strings
DEBUG, INFO, NOTICE, WARNING, ERROR, CRITICAL
to the LogLevel
API.
Formatted logging at various severity levels is provided by the functions
(c *ChaincodeLogger) Debug(args ...interface{})
(c *ChaincodeLogger) Info(args ...interface{})
(c *ChaincodeLogger) Notice(args ...interface{})
(c *ChaincodeLogger) Warning(args ...interface{})
(c *ChaincodeLogger) Error(args ...interface{})
(c *ChaincodeLogger) Critical(args ...interface{})
(c *ChaincodeLogger) Debugf(format string, args ...interface{})
(c *ChaincodeLogger) Infof(format string, args ...interface{})
(c *ChaincodeLogger) Noticef(format string, args ...interface{})
(c *ChaincodeLogger) Warningf(format string, args ...interface{})
(c *ChaincodeLogger) Errorf(format string, args ...interface{})
(c *ChaincodeLogger) Criticalf(format string, args ...interface{})
The f
forms of the logging APIs provide for precise control over the
formatting of the logs. The non-f
forms of the APIs currently
insert a space between the printed representations of the arguments, and
arbitrarily choose the formats to use.
In the current implementation, the logs produced by the shim
and a
ChaincodeLogger
are timestamped, marked with the logger name and
severity level, and written to stderr
. Note that logging level
control is currently based on the name provided when the
ChaincodeLogger
is created. To avoid ambiguities, all
ChaincodeLogger
should be given unique names other than “shim”. The
logger name will appear in all log messages created by the logger. The
shim
logs as “shim”.
Go language chaincodes can also control the logging level of the
chaincode shim
interface through the SetLoggingLevel
API.
SetLoggingLevel(LoggingLevel level)
- Control the logging level of
the shim
The default logging level for the shim is LogDebug
.
Below is a simple example of how a chaincode might create a private
logging object logging at the LogInfo
level, and also control the
amount of logging provided by the shim
based on an environment
variable.
var logger = shim.NewLogger("myChaincode")
func main() {
logger.SetLevel(shim.LogInfo)
logLevel, _ := shim.LogLevel(os.Getenv("SHIM_LOGGING_LEVEL"))
shim.SetLoggingLevel(logLevel)
...
}
Architecture Explained¶
The Hyperledger Fabric architecture delivers the following advantages:
- Chaincode trust flexibility. The architecture separates trust assumptions for chaincodes (blockchain applications) from trust assumptions for ordering. In other words, the ordering service may be provided by one set of nodes (orderers) and tolerate some of them to fail or misbehave, and the endorsers may be different for each chaincode.
- Scalability. As the endorser nodes responsible for particular chaincode are orthogonal to the orderers, the system may scale better than if these functions were done by the same nodes. In particular, this results when different chaincodes specify disjoint endorsers, which introduces a partitioning of chaincodes between endorsers and allows parallel chaincode execution (endorsement). Besides, chaincode execution, which can potentially be costly, is removed from the critical path of the ordering service.
- Confidentiality. The architecture facilitates deployment of chaincodes that have confidentiality requirements with respect to the content and state updates of its transactions.
- Consensus modularity. The architecture is modular and allows pluggable consensus (i.e., ordering service) implementations.
Part I: Elements of the architecture relevant to Hyperledger Fabric v1
System architecture
Basic workflow of transaction endorsement
Endorsement policies
Part II: Post-v1 elements of the architecture
Ledger checkpointing (pruning)
1. System architecture¶
The blockchain is a distributed system consisting of many nodes that communicate with each other. The blockchain runs programs called chaincode, holds state and ledger data, and executes transactions. The chaincode is the central element as transactions are operations invoked on the chaincode. Transactions have to be “endorsed” and only endorsed transactions may be committed and have an effect on the state. There may exist one or more special chaincodes for management functions and parameters, collectively called system chaincodes.
1.1. Transactions¶
Transactions may be of two types:
- Deploy transactions create new chaincode and take a program as parameter. When a deploy transaction executes successfully, the chaincode has been installed “on” the blockchain.
- Invoke transactions perform an operation in the context of previously deployed chaincode. An invoke transaction refers to a chaincode and to one of its provided functions. When successful, the chaincode executes the specified function - which may involve modifying the corresponding state, and returning an output.
As described later, deploy transactions are special cases of invoke transactions, where a deploy transaction that creates new chaincode, corresponds to an invoke transaction on a system chaincode.
Remark: This document currently assumes that a transaction either creates new chaincode or invokes an operation provided by one already deployed chaincode. This document does not yet describe: a) optimizations for query (read-only) transactions (included in v1), b) support for cross-chaincode transactions (post-v1 feature).
1.2. Blockchain datastructures¶
1.2.1. State¶
The latest state of the blockchain (or, simply, state) is modeled as a
versioned key/value store (KVS), where keys are names and values are
arbitrary blobs. These entries are manipulated by the chaincodes
(applications) running on the blockchain through put
and get
KVS-operations. The state is stored persistently and updates to the
state are logged. Notice that while a versioned KVS is adopted as the state
model, an implementation may use actual KVSs, but also RDBMSs or any other
solution.
More formally, state s
is modeled as an element of a mapping
K -> (V X N)
, where:
- K is a set of keys
- V is a set of values
- N is an infinite ordered set of version numbers. The injective function next: N -> N takes an element of N and returns the next version number.
Both V and N contain a special element \bot, which is in
case of N the lowest element. Initially all keys are mapped to
(\bot,\bot). For s(k)=(v,ver) we denote v by s(k).value,
and ver by s(k).version.
KVS operations are modeled as follows:
- put(k,v), for k \in K and v \in V, takes the blockchain state s and changes it to s' such that s'(k)=(v,next(s(k).version)) with s'(k')=s(k') for all k'!=k.
- get(k) returns s(k).
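A minimal sketch of this state model in Go, with integer versions standing in for N (version 0 plays the role of \bot). Illustrative only; it is not Fabric's state database.

```go
package main

import "fmt"

// entry is the (value, version) pair a key maps to; version 0 models \bot.
type entry struct {
	value   string
	version int
}

// versionedKVS models the state: put advances the key's version (next),
// leaving all other keys untouched; get returns the stored pair.
type versionedKVS map[string]entry

func (s versionedKVS) put(k, v string) {
	e := s[k]
	s[k] = entry{value: v, version: e.version + 1} // next(s(k).version)
}

func (s versionedKVS) get(k string) entry { return s[k] }

func main() {
	s := versionedKVS{}
	s.put("a", "100")
	s.put("a", "90")
	fmt.Println(s.get("a")) // {90 2}
}
```

Note that each put bumps only the written key's version, matching the requirement that s'(k')=s(k') for all k'!=k.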
State is maintained by peers, but not by orderers and clients.
State partitioning. Keys in the KVS can be recognized from their name to belong to a particular chaincode, in the sense that only transactions of a certain chaincode may modify the keys belonging to this chaincode. In principle, any chaincode can read the keys belonging to other chaincodes. Support for cross-chaincode transactions that modify the state belonging to two or more chaincodes is a post-v1 feature.
1.2.2 Ledger¶
Ledger provides a verifiable history of all successful state changes (we talk about valid transactions) and unsuccessful attempts to change state (we talk about invalid transactions), occurring during the operation of the system.
Ledger is constructed by the ordering service (see Sec 1.3.3) as a totally ordered hashchain of blocks of (valid or invalid) transactions. The hashchain imposes the total order of blocks in a ledger and each block contains an array of totally ordered transactions. This imposes total order across all transactions.
Ledger is kept at all peers and, optionally, at a subset of orderers. In
the context of an orderer we refer to the ledger as the
OrdererLedger, whereas in the context of a peer we refer to it as the
PeerLedger. PeerLedger differs from the
OrdererLedger in that peers locally maintain a bitmask that tells
apart valid transactions from invalid ones (see Section XX for more
details).
Peers may prune PeerLedger
as described in Section XX (post-v1
feature). Orderers maintain OrdererLedger
for fault-tolerance and
availability (of the PeerLedger
) and may decide to prune it at
anytime, provided that properties of the ordering service (see Sec.
1.3.3) are maintained.
The ledger allows peers to replay the history of all transactions and to reconstruct the state. Therefore, state as described in Sec 1.2.1 is an optional datastructure.
1.3. Nodes¶
Nodes are the communication entities of the blockchain. A “node” is only a logical function in the sense that multiple nodes of different types can run on the same physical server. What counts is how nodes are grouped in “trust domains” and associated to logical entities that control them.
There are three types of nodes:
- Client or submitting-client: a client that submits an actual transaction-invocation to the endorsers, and broadcasts transaction-proposals to the ordering service.
- Peer: a node that commits transactions and maintains the state and a copy of the ledger (see Sec. 1.2). Besides, peers can have a special endorser role.
- Ordering-service-node or orderer: a node running the communication service that implements a delivery guarantee, such as atomic or total order broadcast.
The types of nodes are explained next in more detail.
1.3.1. Client¶
The client represents the entity that acts on behalf of an end-user. It must connect to a peer for communicating with the blockchain. The client may connect to any peer of its choice. Clients create and thereby invoke transactions.
As detailed in Section 2, clients communicate with both peers and the ordering service.
1.3.2. Peer¶
A peer receives ordered state updates in the form of blocks from the ordering service and maintains the state and the ledger.
Peers can additionally take up a special role of an endorsing peer, or an endorser. The special function of an endorsing peer occurs with respect to a particular chaincode and consists in endorsing a transaction before it is committed. Every chaincode may specify an endorsement policy that may refer to a set of endorsing peers. The policy defines the necessary and sufficient conditions for a valid transaction endorsement (typically a set of endorsers’ signatures), as described later in Sections 2 and 3. In the special case of deploy transactions that install new chaincode the (deployment) endorsement policy is specified as an endorsement policy of the system chaincode.
1.3.3. Ordering service nodes (Orderers)¶
The orderers form the ordering service, i.e., a communication fabric that provides delivery guarantees. The ordering service can be implemented in different ways: ranging from a centralized service (used e.g., in development and testing) to distributed protocols that target different network and node fault models.
Ordering service provides a shared communication channel to clients and peers, offering a broadcast service for messages containing transactions. Clients connect to the channel and may broadcast messages on the channel which are then delivered to all peers. The channel supports atomic delivery of all messages, that is, message communication with total-order delivery and (implementation specific) reliability. In other words, the channel outputs the same messages to all connected peers and outputs them to all peers in the same logical order. This atomic communication guarantee is also called total-order broadcast, atomic broadcast, or consensus in the context of distributed systems. The communicated messages are the candidate transactions for inclusion in the blockchain state.
Partitioning (ordering service channels). The ordering service may support multiple channels, similar to the topics of a publish/subscribe (pub/sub) messaging system. Clients can connect to a given channel and can then send messages and obtain the messages that arrive. Channels can be thought of as partitions: clients connecting to one channel are unaware of the existence of other channels, but clients may connect to multiple channels. Even though some ordering service implementations included with Hyperledger Fabric support multiple channels, for simplicity of presentation, in the rest of this document we assume the ordering service consists of a single channel/topic.
Ordering service API. Peers connect to the channel provided by the ordering service via the interface it exposes. The ordering service API consists of two basic operations (more generally asynchronous events):
TODO add the part of the API for fetching particular blocks under client/peer specified sequence numbers.
- broadcast(blob): a client calls this to broadcast an arbitrary message blob for dissemination over the channel. This is also called request(blob) in the BFT context, when sending a request to a service.
- deliver(seqno, prevhash, blob): the ordering service calls this on the peer to deliver the message blob with the specified non-negative integer sequence number (seqno) and the hash of the most recently delivered blob (prevhash). In other words, it is an output event from the ordering service. deliver() is also sometimes called notify() in pub-sub systems or commit() in BFT systems.
Ledger and block formation. The ledger (see also Sec. 1.2.2) contains all data output by the ordering service. In a nutshell, it is a sequence of deliver(seqno, prevhash, blob) events, which form a hash chain according to the computation of prevhash described before.
Most of the time, for efficiency reasons, instead of outputting individual transactions (blobs), the ordering service will group (batch) the blobs and output blocks within a single deliver event. In this case, the ordering service must impose and convey a deterministic ordering of the blobs within each block. The number of blobs in a block may be chosen dynamically by an ordering service implementation.
In the following, for ease of presentation, we define ordering service properties (rest of this subsection) and explain the workflow of transaction endorsement (Section 2) assuming one blob per deliver event. These are easily extended to blocks, assuming that a deliver event for a block corresponds to a sequence of individual deliver events for each blob within a block, according to the above-mentioned deterministic ordering of blobs within a block.
Ordering service properties
The guarantees of the ordering service (or atomic-broadcast channel) stipulate what happens to a broadcasted message and what relations exist among delivered messages. These guarantees are as follows:
Safety (consistency guarantees): As long as peers are connected for sufficiently long periods of time to the channel (they can disconnect or crash, but will restart and reconnect), they will see an identical series of delivered (seqno, prevhash, blob) messages. This means the outputs (deliver() events) occur in the same order on all peers and according to sequence number, and carry identical content (blob and prevhash) for the same sequence number. Note this is only a logical order, and a deliver(seqno, prevhash, blob) on one peer is not required to occur in any real-time relation to a deliver(seqno, prevhash, blob) that outputs the same message at another peer. Put differently, given a particular seqno, no two correct peers deliver different prevhash or blob values. Moreover, no value blob is delivered unless some client (peer) actually called broadcast(blob) and, preferably, every broadcast blob is delivered only once.
Furthermore, the deliver() event contains the cryptographic hash of the data in the previous deliver() event (prevhash). When the ordering service implements atomic broadcast guarantees, prevhash is the cryptographic hash of the parameters from the deliver() event with sequence number seqno-1. This establishes a hash chain across deliver() events, which is used to help verify the integrity of the ordering service output, as discussed in Sections 4 and 5 later. In the special case of the first deliver() event, prevhash has a default value.
Liveness (delivery guarantee): Liveness guarantees of the ordering service are specified by an ordering service implementation. The exact guarantees may depend on the network and node fault model.
In principle, if the submitting client does not fail, the ordering service should guarantee that every correct peer that connects to the ordering service eventually delivers every submitted transaction.
To summarize, the ordering service ensures the following properties:
- Agreement. For any two events at correct peers deliver(seqno, prevhash0, blob0) and deliver(seqno, prevhash1, blob1) with the same seqno, prevhash0==prevhash1 and blob0==blob1.
- Hashchain integrity. For any two events at correct peers deliver(seqno-1, prevhash0, blob0) and deliver(seqno, prevhash, blob), prevhash = HASH(seqno-1||prevhash0||blob0).
- No skipping. If an ordering service outputs deliver(seqno, prevhash, blob) at a correct peer p, such that seqno > 0, then p already delivered an event deliver(seqno-1, prevhash0, blob0).
- No creation. Any event deliver(seqno, prevhash, blob) at a correct peer must be preceded by a broadcast(blob) event at some (possibly distinct) peer.
- No duplication (optional, yet desirable). For any two events broadcast(blob) and broadcast(blob'), when two events deliver(seqno0, prevhash0, blob) and deliver(seqno1, prevhash1, blob') occur at correct peers and blob == blob', then seqno0==seqno1 and prevhash0==prevhash1.
- Liveness. If a correct client invokes an event broadcast(blob), then every correct peer "eventually" issues an event deliver(*, *, blob), where * denotes an arbitrary value.
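The hash-chain and no-skipping properties can be illustrated with a toy single-channel ordering service. The names OrderingChannel and verify_chain, and the choice of SHA-256 for HASH, are illustrative assumptions, not part of the specification:

```python
import hashlib

def chain_hash(seqno_prev, prevhash0, blob0):
    # prevhash = HASH(seqno-1 || prevhash0 || blob0), per the hashchain-integrity property
    return hashlib.sha256(str(seqno_prev).encode() + prevhash0 + blob0).digest()

class OrderingChannel:
    """Toy single-channel ordering service: broadcast(blob) in, deliver events out."""
    def __init__(self):
        self.events = []                  # list of (seqno, prevhash, blob)
        self.prevhash = b"\x00" * 32      # default value for the first deliver()

    def broadcast(self, blob):
        seqno = len(self.events)
        self.events.append((seqno, self.prevhash, blob))
        self.prevhash = chain_hash(seqno, self.events[-1][1], blob)

def verify_chain(events):
    # No skipping: sequence numbers must be consecutive from 0.
    # Hashchain integrity: each prevhash must commit to the previous event.
    prevhash = b"\x00" * 32
    for i, (seqno, ph, blob) in enumerate(events):
        if seqno != i or ph != prevhash:
            return False
        prevhash = chain_hash(seqno, ph, blob)
    return True

ch = OrderingChannel()
for tx in [b"tx1", b"tx2", b"tx3"]:
    ch.broadcast(tx)
assert verify_chain(ch.events)
# Tampering with one delivered blob breaks hashchain integrity downstream.
tampered = [(s, p, b"evil" if s == 1 else b) for (s, p, b) in ch.events]
assert not verify_chain(tampered)
```

Any peer can re-run verify_chain locally over its received deliver events, which is exactly how the hash chain helps verify the integrity of the ordering service output.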
2. Basic workflow of transaction endorsement¶
In the following we outline the high-level request flow for a transaction.
Remark: Notice that the following protocol does not assume that all transactions are deterministic, i.e., it allows for non-deterministic transactions.
2.1. The client creates a transaction and sends it to endorsing peers of its choice¶
To invoke a transaction, the client sends a PROPOSE message to a set of endorsing peers of its choice (possibly not at the same time; see Sections 2.1.2. and 2.3.). The set of endorsing peers for a given chaincodeID is made available to the client via a peer, which in turn knows the set of endorsing peers from the endorsement policy (see Section 3). For example, the transaction could be sent to all endorsers of a given chaincodeID. That said, some endorsers could be offline, and others may object and choose not to endorse the transaction. The submitting client tries to satisfy the policy expression with the endorsers available.
In the following, we first detail the PROPOSE message format and then discuss possible patterns of interaction between the submitting client and endorsers.
2.1.1. PROPOSE message format¶
The format of a PROPOSE message is <PROPOSE,tx,[anchor]>, where tx is a mandatory and anchor an optional argument, explained in the following.
tx=<clientID,chaincodeID,txPayload,timestamp,clientSig>, where
- clientID is an ID of the submitting client,
- chaincodeID refers to the chaincode to which the transaction pertains,
- txPayload is the payload containing the submitted transaction itself,
- timestamp is a monotonically increasing (for every new transaction) integer maintained by the client,
- clientSig is the signature of the client on the other fields of tx.
The details of txPayload differ between invoke transactions and deploy transactions (i.e., invoke transactions referring to a deploy-specific system chaincode). For an invoke transaction, txPayload consists of two fields, txPayload = <operation, metadata>, where
- operation denotes the chaincode operation (function) and arguments,
- metadata denotes attributes related to the invocation.
For a deploy transaction, txPayload consists of three fields, txPayload = <source, metadata, policies>, where
- source denotes the source code of the chaincode,
- metadata denotes attributes related to the chaincode and application,
- policies contains policies related to the chaincode that are accessible to all peers, such as the endorsement policy. Note that endorsement policies are not supplied with txPayload in a deploy transaction; rather, the txPayload of a deploy contains the endorsement policy ID and its parameters (see Section 3).
anchor contains read version dependencies, or more specifically, key-version pairs (i.e., anchor is a subset of KxN), that bind or "anchor" the PROPOSE request to specified versions of keys in a KVS (see Section 1.2.). If the client specifies the anchor argument, an endorser endorses the transaction only if the read version numbers of the corresponding keys in its local KVS match anchor (see Section 2.2. for more details).
The cryptographic hash of tx is used by all nodes as a unique transaction identifier tid (i.e., tid=HASH(tx)). The client stores tid in memory and waits for responses from endorsing peers.
2.1.2. Message patterns¶
The client decides on the sequence of interaction with endorsers. For example, a client would typically send <PROPOSE, tx> (i.e., without the anchor argument) to a single endorser, which would then produce the version dependencies (anchor) that the client can later use as an argument of its PROPOSE message to other endorsers. As another example, the client could directly send <PROPOSE, tx> (without anchor) to all endorsers of its choice. Different patterns of communication are possible and the client is free to decide on these (see also Section 2.3.).
2.2. The endorsing peer simulates a transaction and produces an endorsement signature¶
On reception of a <PROPOSE,tx,[anchor]> message from a client, the endorsing peer epID first verifies the client's signature clientSig and then simulates the transaction. If the client specifies anchor, the endorsing peer simulates the transaction only if the read version numbers (i.e., readset as defined below) of the corresponding keys in its local KVS match the version numbers specified by anchor.
Simulating a transaction involves the endorsing peer tentatively executing the transaction (txPayload), by invoking the chaincode to which the transaction refers (chaincodeID) against the copy of the state that the endorsing peer locally holds.
As a result of the execution, the endorsing peer computes read version dependencies (readset) and state updates (writeset), also called MVCC+postimage info in DB language.
Recall that the state consists of key-value (k/v) pairs. All k/v entries are versioned; that is, every entry contains ordered version information, which is incremented every time the value stored under a key is updated. The peer that interprets the transaction records all k/v pairs accessed by the chaincode, either for reading or for writing, but the peer does not yet update its state. More specifically:
- Given state s before an endorsing peer executes a transaction, for every key k read by the transaction, the pair (k, s(k).version) is added to readset.
- Additionally, for every key k modified by the transaction to the new value v', the pair (k, v') is added to writeset. Alternatively, v' could be the delta of the new value to the previous value (s(k).value).
If a client specifies anchor in the PROPOSE message, then the client-specified anchor must equal the readset produced by the endorsing peer when simulating the transaction.
Then, the peer forwards tran-proposal (and possibly tx) internally to the part of its (peer's) logic that endorses a transaction, referred to as endorsing logic. By default, the endorsing logic at a peer accepts the tran-proposal and simply signs it. However, the endorsing logic may implement arbitrary functionality, e.g., interacting with legacy systems, taking tran-proposal and tx as inputs to reach the decision whether to endorse a transaction or not.
If the endorsing logic decides to endorse the transaction, it sends a <TRANSACTION-ENDORSED, tid, tran-proposal, epSig> message to the submitting client (tx.clientID), where:
- tran-proposal := (epID, tid, chaincodeID, txContentBlob, readset, writeset), where txContentBlob is chaincode/transaction-specific information. The intention is to have txContentBlob used as some representation of tx (e.g., txContentBlob=tx.txPayload).
- epSig is the endorsing peer's signature on tran-proposal.
Otherwise, in case the endorsing logic refuses to endorse the transaction, an endorser may send a message (TRANSACTION-INVALID, tid, REJECTED) to the submitting client.
Notice that an endorser does not change its state in this step: the updates produced by transaction simulation in the context of endorsement do not affect the state!
2.3. The submitting client collects an endorsement for a transaction and broadcasts it through ordering service¶
The submitting client waits until it receives "enough" messages and signatures on (TRANSACTION-ENDORSED, tid, *, *) statements to conclude that the transaction proposal is endorsed. As discussed in Section 2.1.2., this may involve one or more round-trips of interaction with endorsers.
The exact number that counts as "enough" depends on the chaincode endorsement policy (see also Section 3). If the endorsement policy is satisfied, the transaction has been endorsed; note that it is not yet committed. The collection of signed TRANSACTION-ENDORSED messages from endorsing peers which establishes that a transaction is endorsed is called an endorsement and denoted by endorsement.
If the submitting client does not manage to collect an endorsement for a transaction proposal, it abandons this transaction with an option to retry later.
For a transaction with a valid endorsement, we now start using the ordering service. The submitting client invokes the ordering service using broadcast(blob), where blob = endorsement. If the client does not have the capability of invoking the ordering service directly, it may proxy its broadcast through some peer of its choice. Such a peer must be trusted by the client not to remove any message from the endorsement, or otherwise the transaction may be deemed invalid. Notice, however, that a proxy peer may not fabricate a valid endorsement.
2.4. The ordering service delivers transactions to the peers¶
When an event deliver(seqno, prevhash, blob) occurs and a peer has applied all state updates for blobs with sequence numbers lower than seqno, the peer does the following:
- It checks that blob.endorsement is valid according to the policy of the chaincode (blob.tran-proposal.chaincodeID) to which it refers.
- In a typical case, it also verifies that the dependencies (blob.endorsement.tran-proposal.readset) have not been violated meanwhile. In more complex use cases, the tran-proposal fields in the endorsement may differ, and in this case the endorsement policy (Section 3) specifies how the state evolves.
Verification of dependencies can be implemented in different ways, according to the consistency property or "isolation guarantee" that is chosen for the state updates. Serializability is the default isolation guarantee, unless the chaincode endorsement policy specifies a different one. Serializability can be provided by requiring the version associated with every key in the readset to be equal to that key's version in the state, and rejecting transactions that do not satisfy this requirement.
- If all these checks pass, the transaction is deemed valid or committed. In this case, the peer marks the transaction with 1 in the bitmask of the PeerLedger and applies blob.endorsement.tran-proposal.writeset to the blockchain state (if the tran-proposals are the same; otherwise the endorsement policy logic defines the function that takes blob.endorsement).
- If the endorsement policy verification of blob.endorsement fails, the transaction is invalid and the peer marks the transaction with 0 in the bitmask of the PeerLedger. It is important to note that invalid transactions do not change the state.
Note that this is sufficient to ensure that all (correct) peers have the same state after processing a deliver event (block) with a given sequence number. Namely, by the guarantees of the ordering service, all correct peers will receive an identical sequence of deliver(seqno, prevhash, blob) events. As the evaluation of the endorsement policy and the evaluation of version dependencies in readset are deterministic, all correct peers will also come to the same conclusion whether a transaction contained in a blob is valid. Hence, all peers commit and apply the same sequence of transactions and update their state in the same way.
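A minimal sketch of this validation step, assuming a simplified blob shape and treating endorsement-policy verification as a boolean (the real check is described in Section 3):

```python
def commit_blob(state, bitmask, blob):
    """Validate a delivered blob under the serializability check and apply
    its writeset. state maps key -> (value, version); blob is a simplified
    stand-in: {"tran_proposal": {"readset": [...], "writeset": {...}},
    "policy_ok": bool}."""
    tp = blob["tran_proposal"]
    # Serializability: every readset version must equal the key's current version
    deps_ok = all(state.get(k, (None, None))[1] == v for k, v in tp["readset"])
    if blob["policy_ok"] and deps_ok:
        for k, v in tp["writeset"].items():
            old_version = state[k][1] if k in state else 0
            state[k] = (v, old_version + 1)   # apply write, bump version
        bitmask.append(1)                     # valid / committed
    else:
        bitmask.append(0)                     # invalid: state unchanged

state = {"radish_price": (5, 7)}
bitmask = []
good = {"tran_proposal": {"readset": [("radish_price", 7)],
                          "writeset": {"radish_price": 6}}, "policy_ok": True}
stale = {"tran_proposal": {"readset": [("radish_price", 7)],
                           "writeset": {"radish_price": 9}}, "policy_ok": True}
commit_blob(state, bitmask, good)
commit_blob(state, bitmask, stale)   # version moved to 8, dependency violated
assert bitmask == [1, 0]
assert state["radish_price"] == (6, 8)
```

Because the same deterministic checks run on every peer over the same deliver sequence, each peer computes the same bitmask and the same resulting state.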

Figure 1. Illustration of one possible transaction flow (common-case path).
3. Endorsement policies¶
3.1. Endorsement policy specification¶
An endorsement policy is a condition on what endorses a transaction. Blockchain peers have a pre-specified set of endorsement policies, which are referenced by a deploy transaction that installs specific chaincode. Endorsement policies can be parametrized, and these parameters can be specified by a deploy transaction.
To guarantee blockchain and security properties, the set of endorsement policies should be a set of proven policies with a limited set of functions, in order to ensure bounded execution time (termination), determinism, performance and security guarantees.
Dynamic addition of endorsement policies (e.g., by a deploy transaction at chaincode deploy time) is very sensitive in terms of bounded policy evaluation time (termination), determinism, performance and security guarantees. Therefore, dynamic addition of endorsement policies is not allowed, but may be supported in the future.
3.2. Transaction evaluation against endorsement policy¶
A transaction is declared valid only if it has been endorsed according to the policy. An invoke transaction for a chaincode will first have to obtain an endorsement that satisfies the chaincode’s policy or it will not be committed. This takes place through the interaction between the submitting client and endorsing peers as explained in Section 2.
Formally, the endorsement policy is a predicate on the endorsement, and potentially further state, that evaluates to TRUE or FALSE. For deploy transactions the endorsement is obtained according to a system-wide policy (for example, from the system chaincode).
An endorsement policy predicate refers to certain variables. Potentially it may refer to:
- keys or identities relating to the chaincode (found in the metadata of the chaincode), for example, a set of endorsers;
- further metadata of the chaincode;
- elements of the endorsement and endorsement.tran-proposal;
- and potentially more.
The above list is ordered by increasing expressiveness and complexity, that is, it will be relatively simple to support policies that only refer to keys and identities of nodes.
The evaluation of an endorsement policy predicate must be deterministic. An endorsement shall be evaluated locally by every peer such that a peer does not need to interact with other peers, yet all correct peers evaluate the endorsement policy in the same way.
3.3. Example endorsement policies¶
The predicate may contain logical expressions and evaluates to TRUE or FALSE. Typically the condition will use digital signatures on the transaction invocation issued by endorsing peers for the chaincode.
Suppose the chaincode specifies the endorser set E = {Alice, Bob, Charlie, Dave, Eve, Frank, George}. Some example policies:
- Valid signatures on the same tran-proposal from all members of E.
- A valid signature from any single member of E.
- Valid signatures on the same tran-proposal from endorsing peers according to the condition (Alice OR Bob) AND (any two of: Charlie, Dave, Eve, Frank, George).
- Valid signatures on the same tran-proposal by any 5 out of the 7 endorsers. (More generally, for chaincode with n > 3f endorsers, valid signatures by any 2f+1 out of the n endorsers, or by any group of more than (n+f)/2 endorsers.)
- Suppose there is an assignment of "stake" or "weights" to the endorsers, like {Alice=49, Bob=15, Charlie=15, Dave=10, Eve=7, Frank=3, George=1}, where the total stake is 100: the policy requires valid signatures from a set that has a majority of the stake (i.e., a group with combined stake strictly more than 50), such as {Alice, X} with any X different from George, or {everyone together except Alice}. And so on.
- The assignment of stake in the previous example could be static (fixed in the metadata of the chaincode) or dynamic (e.g., dependent on the state of the chaincode and modified during execution).
- Valid signatures from (Alice OR Bob) on tran-proposal1 and valid signatures from (any two of: Charlie, Dave, Eve, Frank, George) on tran-proposal2, where tran-proposal1 and tran-proposal2 differ only in their endorsing peers and state updates.
How useful these policies are will depend on the application, on the desired resilience of the solution against failures or misbehavior of endorsers, and on various other properties.
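Two of the example policies, rendered as deterministic predicates over the set of peers that produced valid signatures (a sketch; actual Fabric policies are specified over endorsement messages, not bare names):

```python
def majority_stake_policy(stake, signers):
    # Valid signatures from a set holding strictly more than half of the total stake
    return sum(stake[p] for p in signers) > sum(stake.values()) / 2

def alice_or_bob_plus_two(signers):
    # (Alice OR Bob) AND (any two of: Charlie, Dave, Eve, Frank, George)
    others = {"Charlie", "Dave", "Eve", "Frank", "George"}
    return bool({"Alice", "Bob"} & signers) and len(others & signers) >= 2

stake = {"Alice": 49, "Bob": 15, "Charlie": 15, "Dave": 10,
         "Eve": 7, "Frank": 3, "George": 1}
assert majority_stake_policy(stake, {"Alice", "Bob"})         # 64 > 50
assert not majority_stake_policy(stake, {"Alice", "George"})  # 50 is not a strict majority
assert alice_or_bob_plus_two({"Bob", "Charlie", "Eve"})
assert not alice_or_bob_plus_two({"Charlie", "Dave", "Eve"})
```

Both predicates depend only on their inputs, so every correct peer evaluating them over the same endorsement reaches the same verdict, as Section 3.2 requires.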
4 (post-v1). Validated ledger and PeerLedger checkpointing (pruning)¶
4.1. Validated ledger (VLedger)¶
To maintain the abstraction of a ledger that contains only valid and committed transactions (that appears in Bitcoin, for example), peers may, in addition to state and Ledger, maintain the Validated Ledger (or VLedger). This is a hash chain derived from the ledger by filtering out invalid transactions.
The construction of the VLedger blocks (called here vBlocks) proceeds as follows. As the PeerLedger blocks may contain invalid transactions (i.e., transactions with invalid endorsement or with invalid version dependencies), such transactions are filtered out by peers before a transaction from a block is added to a vBlock. Every peer does this by itself (e.g., by using the bitmask associated with PeerLedger). A vBlock is defined as a block with the invalid transactions filtered out. Such vBlocks are inherently dynamic in size and may be empty. An illustration of vBlock construction is given in the figure below.

Figure 2. Illustration of validated ledger block (vBlock) formation from ledger (PeerLedger) blocks.
vBlocks are chained together to a hash chain by every peer. More specifically, every block of a validated ledger contains:
- The hash of the previous vBlock.
- vBlock number.
- An ordered list of all valid transactions committed by the peers since the last vBlock was computed (i.e., list of valid transactions in a corresponding block).
- The hash of the corresponding block (in PeerLedger) from which the current vBlock is derived.
All this information is concatenated and hashed by a peer, producing the hash of the vBlock in the validated ledger.
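A sketch of the vBlock hash computation; the concrete hash function and serialization are implementation choices, assumed here to be SHA-256 over a simple concatenation:

```python
import hashlib

def vblock_hash(prev_vblock_hash, vblock_no, valid_txs, peerledger_block_hash):
    # Concatenate the four listed components and hash the result
    h = hashlib.sha256()
    h.update(prev_vblock_hash)                # hash of the previous vBlock
    h.update(str(vblock_no).encode())         # vBlock number
    for tx in valid_txs:                      # ordered list of valid transactions
        h.update(tx)
    h.update(peerledger_block_hash)           # hash of the corresponding PeerLedger block
    return h.digest()

genesis = b"\x00" * 32
v1 = vblock_hash(genesis, 1, [b"txA", b"txC"], b"block1hash")
v2 = vblock_hash(v1, 2, [], b"block2hash")    # vBlocks may be empty
assert v1 != v2 and len(v2) == 32
```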
4.2. PeerLedger Checkpointing¶
The ledger contains invalid transactions, which may not necessarily be
recorded forever. However, peers cannot simply discard PeerLedger
blocks and thereby prune PeerLedger
once they establish the
corresponding vBlocks. Namely, in this case, if a new peer joins the
network, other peers could not transfer the discarded blocks (pertaining
to PeerLedger
) to the joining peer, nor convince the joining peer of
the validity of their vBlocks.
To facilitate pruning of the PeerLedger
, this document describes a
checkpointing mechanism. This mechanism establishes the validity of
the vBlocks across the peer network and allows checkpointed vBlocks to
replace the discarded PeerLedger
blocks. This, in turn, reduces
storage space, as there is no need to store invalid transactions. It
also reduces the work to reconstruct the state for new peers that join
the network (as they do not need to establish validity of individual
transactions when reconstructing the state by replaying PeerLedger
,
but may simply replay the state updates contained in the validated
ledger).
4.2.1. Checkpointing protocol¶
Checkpointing is performed periodically by the peers every CHK blocks, where CHK is a configurable parameter. To initiate a checkpoint, the peers broadcast (e.g., gossip) to other peers the message <CHECKPOINT,blocknohash,blockno,stateHash,peerSig>, where blockno is the current block number and blocknohash is its respective hash, stateHash is the hash of the latest state (produced, e.g., by a Merkle hash) upon validation of block blockno, and peerSig is the peer's signature on (CHECKPOINT,blocknohash,blockno,stateHash), referring to the validated ledger.
A peer collects CHECKPOINT messages until it obtains enough correctly signed messages with matching blockno, blocknohash and stateHash to establish a valid checkpoint (see Section 4.2.2.).
Upon establishing a valid checkpoint for block number blockno with blocknohash, a peer:
- if blockno > latestValidCheckpoint.blockno, assigns latestValidCheckpoint = (blocknohash, blockno),
- stores the set of respective peer signatures that constitute a valid checkpoint into the set latestValidCheckpointProof,
- stores the state corresponding to stateHash to latestValidCheckpointedState,
- (optionally) prunes its PeerLedger up to block number blockno (inclusive).
4.2.2. Valid checkpoints¶
Clearly, the checkpointing protocol raises the following questions: When can a peer prune its PeerLedger? How many CHECKPOINT messages are "sufficiently many"? This is defined by a checkpoint validity policy, with (at least) two possible approaches, which may also be combined:
- Local (peer-specific) checkpoint validity policy (LCVP). A local policy at a given peer p may specify a set of peers which peer p trusts and whose CHECKPOINT messages are sufficient to establish a valid checkpoint. For example, LCVP at peer Alice may define that Alice needs to receive a CHECKPOINT message from Bob, or from both Charlie and Dave.
- Global checkpoint validity policy (GCVP). A checkpoint validity policy may be specified globally. This is similar to a local peer policy, except that it is stipulated at the system (blockchain) granularity, rather than peer granularity. For instance, GCVP may specify that:
  - each peer may trust a checkpoint if confirmed by 11 different peers.
  - in a specific deployment in which every orderer is collocated with a peer on the same machine (i.e., trust domain) and where up to f orderers may be (Byzantine) faulty, each peer may trust a checkpoint if confirmed by f+1 different peers collocated with orderers.
This section was translated by 刘博宇, last updated 2018.1.4 (original link)
Transaction Flow¶
This document outlines the transactional mechanics that take place during a standard asset exchange. The scenario includes two clients, A and B, who are buying and selling radishes. They each have a peer on the network, through which they send their transactions and interact with the ledger.

Assumptions
This flow assumes that a channel has been set up and is running. The application user has registered and enrolled with the organization's certificate authority (CA) and received back the necessary cryptographic material, which is used to authenticate to the network.
The chaincode (containing a set of key-value pairs representing the initial state of the radish market) has been installed on the peers and instantiated on the channel. The chaincode contains a set of transaction instructions and the business logic for the agreed-upon price of radishes. An endorsement policy has also been set for this chaincode, stating that every transaction must be endorsed by both peer A and peer B.
1. Client A initiates a transaction
What's happening? Client A is sending a request to purchase radishes. The request targets peer A and peer B, which represent Client A and Client B respectively. The endorsement policy states that both peers must endorse any transaction, so the request goes to peer A and peer B.
Next, the transaction proposal is constructed. An application leveraging one of the SDKs (Node, Java, Python) uses the SDK's APIs to generate a transaction proposal. The proposal is a request to invoke a chaincode function so that data can be read from or written to the ledger (e.g., writing new key-value pairs for the assets). The SDK packages the transaction proposal into the properly architected format (protocol buffers over gRPC) and uses the user's cryptographic credentials to produce a unique signature for this transaction proposal.
2. Endorsing peers verify the signature and execute the transaction
The endorsing peers verify that the transaction proposal:
- (1) is well formed,
- (2) has not been submitted already in the past (replay-attack protection),
- (3) carries a valid signature (using the MSP), and
- (4) comes from a submitter (Client A, in this example) who is authorized to perform the proposed operation on the channel (namely, each endorsing peer ensures that the submitter satisfies the channel's writers policy).
The endorsing peers take the transaction proposal as input to the invoked chaincode function. The chaincode is then executed against the current state database to produce transaction results, including a response value, a read set, and a write set. No updates are made to the ledger at this point. The set of these values, along with the endorsing peer's signature, is passed back to the SDK as a "transaction proposal response", and the SDK parses the payload for the application to consume.
{The MSP is a peer component that allows peers to verify transaction requests arriving from clients and to sign transaction results (endorsements). The writers policy is defined at channel creation time and determines which users are entitled to submit transactions to that channel.}
3. Transaction proposal responses are inspected
The application verifies the endorsing peers' signatures and compares the proposal responses to determine whether they are identical. If the chaincode only queried the ledger, the application will inspect the query response and would typically not submit the transaction to the ordering service. If the client application intends to submit the transaction to the ordering service to update the ledger, the application determines before submitting whether the specified endorsement policy has been fulfilled (i.e., both peer A and peer B endorsed). Architecturally, even if an application chooses not to inspect the proposal responses, or otherwise forwards an unendorsed transaction, the endorsement policy will still be enforced by the peers at the validation and commit phase.
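The client-side check in this step (compare proposal responses, then check the endorsement policy) can be sketched as follows; the response shapes and the ready_to_submit helper are simplifications for illustration, not SDK APIs:

```python
def ready_to_submit(responses, policy):
    """Decide whether to submit a transaction to the ordering service.

    responses: dict peer -> (readset, writeset) from that peer's simulation;
    policy: the set of peers whose endorsement is required."""
    payloads = set(responses.values())
    same_results = len(payloads) == 1      # all endorsers simulated identically
    policy_met = policy <= set(responses)  # e.g., both peer A and peer B endorsed
    return same_results and policy_met

rs_ws = (("radish_price", 7), ("radish_price", 6))   # simplified read/write sets
assert ready_to_submit({"peerA": rs_ws, "peerB": rs_ws}, {"peerA", "peerB"})
assert not ready_to_submit({"peerA": rs_ws}, {"peerA", "peerB"})
```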
4. The client assembles the endorsements into a transaction
The application "broadcasts" the transaction proposal and responses within a "transaction message" to the ordering service. The transaction will contain the read/write sets, the endorsing peers' signatures, and the channel ID. The ordering service does not need to inspect the entire content of a transaction in order to perform its operation; it simply receives transactions from all channels in the network, orders them chronologically by channel, and creates blocks of transactions per channel.
5. The transaction is validated and committed
The blocks of transactions are "delivered" to all peers on the channel. The transactions within a block are then validated to ensure that the endorsement policy is fulfilled and that the set of variables read from the ledger state has not changed since the read set was generated by the transaction execution. Finally, the transactions in the block are tagged as valid or invalid.
6. The ledger is updated
Each peer appends the block to the channel's chain, and for each valid transaction the write set is committed to the current state database. An event is emitted to notify the client application that the transaction (invocation) has been immutably appended to the chain, as well as whether the transaction was valid or invalid.
Note
See the swimlane diagrams to better understand the server-side flow and the protobuffers.
Hyperledger Fabric SDKs¶
Hyperledger Fabric intends to offer a number of SDKs for a wide variety of programming languages. The first two delivered are the Node.js and Java SDKs. We hope to provide Python and Go SDKs soon after the 1.0.0 release.
Bringing up a Kafka-based Ordering Service¶
Preface¶
This document assumes that the reader knows how to set up a Kafka cluster and a ZooKeeper ensemble. Its purpose is to identify the steps you need to take so that a set of Hyperledger Fabric ordering service nodes (OSNs) can use your Kafka cluster and provide an ordering service to your blockchain network.
Big picture¶
Each channel maps to a separate single-partition topic in Kafka. When an OSN receives transactions via the Broadcast RPC, it checks to make sure that the broadcasting client has permission to write to the channel, then relays (i.e., produces) those transactions to the appropriate partition in Kafka. This partition is also consumed by the OSN, which groups the received transactions into blocks locally, persists them in its local ledger, and serves them to receiving clients via the Deliver RPC. For low-level details, refer to the document on the Kafka-based Fabric ordering service (original article); Figure 8 there is a schematic representation of the process described above.
Steps¶
Let K and Z be the number of nodes in the Kafka cluster and the ZooKeeper ensemble, respectively:
- K should be set to at least 4. (As we will explain in Step 4 below, this is the minimum number of nodes necessary in order to tolerate crash faults, i.e., with 4 brokers, you can have 1 broker go down, all channels will continue to be writeable and readable, and new channels can be created.)
- Z will either be 3, 5, or 7. It has to be an odd number to avoid split-brain scenarios, and larger than 1 in order to avoid single points of failure. Anything beyond 7 ZooKeeper servers is overkill.
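The sizing rules above, together with the 1 < M < N < K constraints from the broker configuration steps below, amount to simple arithmetic that can be sanity-checked in a few lines (a sketch, not a Fabric tool):

```python
def kafka_sizing_ok(K, M, N):
    # Constraints from the steps below: 1 < M < N < K
    return 1 < M < N < K

def write_fault_tolerance(N, M):
    # Writes keep working as long as at most N - M replicas are down
    return N - M

def channel_creation_fault_tolerance(K, N):
    # Creating a channel needs N brokers up, so at most K - N may be down
    return K - N

K, M, N = 4, 2, 3               # the minimum values discussed in the text
assert kafka_sizing_ok(K, M, N)
assert write_fault_tolerance(N, M) == 1
assert channel_creation_fault_tolerance(K, N) == 1   # 1 of 4 brokers may crash
```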
Then proceed as follows:
1. Orderers: Encode the Kafka-related information in the network's genesis block. If you are using configtxgen, edit configtx.yaml, or pick a preset profile for the system channel's genesis block, so that:
   - Orderer.OrdererType is set to kafka.
   - Orderer.Kafka.Brokers contains the addresses of at least two of the Kafka brokers in your cluster in IP:port notation. The list does not need to be exhaustive. (These are your bootstrap brokers.)
2. Orderers: Set the maximum block size. Each block will have at most Orderer.AbsoluteMaxBytes bytes (not counting headers), a value that you can set in configtx.yaml. Let the value you pick here be A, and make note of it; it will affect how you configure your Kafka brokers in Step 6.
3. Orderers: Create the genesis block using configtxgen. Your settings in Steps 3 and 4 above are system-wide settings, i.e., they apply across the network for all the OSNs. Make note of the genesis block's location.
4. Kafka cluster: Configure your Kafka brokers properly. Ensure that every Kafka broker has the following keys configured:
   - unclean.leader.election.enable = false. Data consistency is key in a blockchain environment. We cannot have a channel leader chosen outside of the in-sync replica set, or we run the risk of overwriting what the previous leader produced, and as a result rewriting the blockchain that the orderers produce.
   - min.insync.replicas = M. Choose a value M such that 1 < M < N (see default.replication.factor below). Data is considered committed when it is written to at least M replicas (which are then considered in sync and belong to the in-sync replica set, or ISR). In any other case, the write operation returns an error. Then:
     - If up to N-M replicas become unavailable, operations proceed normally.
     - If more replicas become unavailable, Kafka cannot maintain an ISR set of M, so it stops accepting writes. Reads work without issues. The channel becomes writeable again when M replicas get back in sync.
   - default.replication.factor = N. Choose a value N such that N < K. A replication factor of N means that each channel will have its data replicated to N brokers. These are the candidates for the ISR set of a channel. As noted in the min.insync.replicas section above, not all of these brokers have to be available all the time. N should be set strictly smaller than K, because channel creation cannot go forward if fewer than N brokers are up. So if you set N = K, a single broker going down means that no new channels can be created on the blockchain network; the crash fault tolerance of the ordering service is gone.
     Based on the above, the minimum allowed values for M and N are 2 and 3, respectively. This configuration allows for the creation of new channels to go forward, and for all channels to continue to be writeable, even if failures occur.
   - message.max.bytes and replica.fetch.max.bytes should be set to a value larger than A, the value you picked for Orderer.AbsoluteMaxBytes in Step 4 above. Add some buffer room for headers; 1 MiB is more than enough. The following condition applies: Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes. (For completeness, we note that message.max.bytes should be strictly smaller than socket.request.max.bytes, which is set by default to 100 MiB. If you wish to have blocks larger than 100 MiB, you will need to edit the value of brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and rebuild from source. This is not advisable.)
   - log.retention.ms = -1. Until the ordering service adds support for pruning of the Kafka logs, you should disable time-based retention and prevent segments from expiring. (Size-based retention, see log.retention.bytes, is disabled by default in Kafka at the time of this writing, so there is no need to set it explicitly.)
排序服务(Orderers): 将每个OSN都指向创世区块 ,编辑
orderer.yaml
中的General.GenesisFile
,以使其指向步骤5中所创建的创始区块。(在此过程中,确保YAML文件中,其他的所有键值,都被正确的设置过。)排序服务(Orderers): 调整轮询间隔和超时 (可选步骤)
orderer.yaml
文件中的Kafka.Retry
一节,允许您调整 元数据/生产者/消费者 请求的频率,以及 socket 超时。(这也是Kafka生产者或消费者的所有配置。)另外,当一个新的频道(Channel)被创建,或者当一个现有的(Channel)被重新加载时(例如,刚刚重新启动的排序服务(orderer)),排序服务(orderer)将以下面的方式与Kafka集群进行交互:
- 它为该频道(Channel)所对应的Kafka分区,创建一个Kafka生产者(写入者)。
- 它使生产者发布一个
CONNECT
消息到该分区。 - 它为该分区创建一个Kafka消费者(读取者)。
如果这些步骤,有任何一个失败,您可以调整重复的频率。具体来说,他们将会不断地重新尝试在
Kafka.Retry.ShortTotal
中,以Kafka.Retry.ShortInterval
为间隔,然后,在Kafka.Retry.LongTotal
中,以Kafka.Retry.LongInterval
为间隔,直到成功为止。请注意,直到上述所有步骤都成功完成之前,排序服务(orderer)将无法写入或读取频道(Channel)。
设置OSNs和Kafka集群,以使其可以通过SSL进行通信 (可选步骤,但强烈建议),请参阅 the Confluent guide 以更进一步了解Kafka集群,在每个OSN的
orderer.yaml
中,为Kafka.TLS
进行设置。按照如下顺序启动各服务节点:ZooKeeper集合,Kafka集群,排序服务节点(Ordering Service Node)
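As a sketch of the constraints above, the relevant settings might look like the following for a cluster of K = 4 brokers with M = 2 and N = 3. The broker addresses and the block size are illustrative assumptions, not defaults:

```yaml
# configtx.yaml (orderer section); addresses and sizes are placeholders
Orderer:
  OrdererType: kafka
  Kafka:
    Brokers:                    # at least two bootstrap brokers, IP:port notation
      - 10.0.0.11:9092
      - 10.0.0.12:9092
  BatchSize:
    AbsoluteMaxBytes: 10 MB     # this is "A"; it drives the broker settings below

# server.properties on each of the 4 Kafka brokers:
#   unclean.leader.election.enable=false
#   min.insync.replicas=2               # M, with 1 < M < N
#   default.replication.factor=3        # N, with N < K
#   replica.fetch.max.bytes=11534336    # A (10 MiB) plus 1 MiB of header headroom
#   message.max.bytes=11534336          # A < replica.fetch.max.bytes <= message.max.bytes
#   log.retention.ms=-1                 # disable time-based retention
```

The two byte-size properties satisfy the ordering constraint stated above; any values that preserve that ordering would do.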
Additional considerations¶
- Preferred message size. In the step where you set Orderer.AbsoluteMaxBytes (see the Steps section above), you can also set the preferred size of blocks via Orderer.Batchsize.PreferredMaxBytes. Kafka offers higher throughput when dealing with relatively small messages; aim for a value no bigger than 1 MiB.
- Using environment variables to override settings. When using the sample Kafka and ZooKeeper Docker images provided with Fabric (see images/kafka and images/zookeeper respectively), you can override a Kafka broker's or a ZooKeeper server's settings with environment variables: replace the dots of the configuration property with underscores, e.g. KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false allows you to override the default value of unclean.leader.election.enable. The same applies to the OSNs' local configuration, i.e. what can be set in orderer.yaml. For example, ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s allows you to override the default value of Orderer.Kafka.Retry.ShortInterval.
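As a sketch, such overrides in a Docker Compose service definition might look like the following. The service names are hypothetical; the environment keys follow the dot-to-underscore convention described above:

```yaml
# docker-compose.yml fragment; service names are placeholders
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    # overrides unclean.leader.election.enable in the broker config
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3

orderer0:
  image: hyperledger/fabric-orderer
  environment:
    # overrides Orderer.Kafka.Retry.ShortInterval in orderer.yaml
    - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
```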
Supported Kafka versions¶
Fabric uses the sarama client library and supports the following Kafka client versions:
- Version: 0.9.0
- Version: 0.10.0
- Version: 0.10.1
- Version: 0.10.2
The sample Kafka server image provided by Fabric contains Kafka server version 0.10.2. Out of the box, the Kafka client embedded in Fabric's ordering service nodes (OSNs) is configured to match this version, so it works without further tuning. If you are not using the sample Kafka server image provided by Fabric, ensure that the Kafka.Version key in orderer.yaml is configured to be compatible with your Kafka server version.
Debugging¶
Set General.LogLevel to DEBUG and Kafka.Verbose to true in orderer.yaml.
Example¶
Sample Docker Compose configuration files that contain the recommended settings above can be found under the fabric/bddtests directory. Look for dc-orderer-kafka-base.yml and dc-orderer-kafka.yml.
This section was translated by 刘博宇; last updated 2018-01-08 (original link).
Channel¶
A Hyperledger Fabric channel is a private "subnet" of communication between two or more specific network members, for the purpose of conducting private and confidential transactions. A channel is defined by members (organizations), anchor peers per member, the shared ledger, chaincode application(s) and the ordering service node(s). Each transaction on the network is executed on a channel, where each party must be authenticated and authorized to transact on that channel. Each peer that joins a channel has its own identity given by a membership services provider (MSP), which authenticates each peer to its channel peers and services.
To create a new channel, the client SDK calls the configuration system chaincode and references properties such as anchor peers and members (organizations). This request creates a genesis block for the channel's ledger, which stores configuration information about the channel policies, members and anchor peers. When a new member is added to an existing channel, either this genesis block or, if applicable, a more recent reconfiguration block, is shared with the new member.
Note
Refer to the Channel Configuration (configtx) section for more details on the properties and data structures of config transactions.
The election of a leading peer for each member on a channel determines which peer communicates with the ordering service on behalf of the member. If no leader is identified, an algorithm can be used to identify the leader. The consensus service orders transactions and delivers them, in a block, to each leading peer, which then distributes the block to its member peers, and across the channel, using the gossip protocol.
Although any one anchor peer can belong to multiple channels, and therefore maintain multiple ledgers, no ledger data can pass from one channel to another. This separation of ledgers, by channel, is defined and implemented by configuration chaincode, the identity membership service and the gossip data dissemination protocol. The dissemination of data, which includes transactions, ledger state and channel membership, is restricted to peers with verifiable membership on the channel. This isolation of peers and ledger data, by channel, allows network members that require private and confidential transactions to coexist with business competitors and other restricted members on the same blockchain network.
Ledger¶
The ledger is the sequenced, tamper-resistant record of all state transitions. State transitions are a result of chaincode invocations (‘transactions’) submitted by participating parties. Each transaction results in a set of asset key-value pairs that are committed to the ledger as creates, updates, or deletes.
The ledger is comprised of a blockchain (‘chain’) to store the immutable, sequenced record in blocks, as well as a state database to maintain current state. There is one ledger per channel. Each peer maintains a copy of the ledger for each channel of which they are a member.
Chain¶
The chain is a transaction log, structured as hash-linked blocks, where each block contains a sequence of N transactions. The block header includes a hash of the block’s transactions, as well as a hash of the prior block’s header. In this way, all transactions on the ledger are sequenced and cryptographically linked together. In other words, it is not possible to tamper with the ledger data, without breaking the hash links. The hash of the latest block represents every transaction that has come before, making it possible to ensure that all peers are in a consistent and trusted state.
The chain is stored on the peer file system (either local or attached storage), efficiently supporting the append-only nature of the blockchain workload.
State Database¶
The ledger’s current state data represents the latest values for all keys ever included in the chain transaction log. Since current state represents all latest key values known to the channel, it is sometimes referred to as World State.
Chaincode invocations execute transactions against the current state data. To make these chaincode interactions extremely efficient, the latest values of all keys are stored in a state database. The state database is simply an indexed view into the chain's transaction log; it can therefore be regenerated from the chain at any time. The state database will automatically get recovered (or generated if needed) upon peer startup, before transactions are accepted.
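Because the state database is just an indexed view into the chain, regeneration amounts to replaying the transaction log. The following is a minimal, self-contained sketch of that idea in plain Python (not Fabric code; the data layout is invented for illustration, and the detail that Fabric skips transactions marked invalid is omitted):

```python
# Rebuild the key/value "world state" by replaying an append-only transaction
# log: each transaction is a list of (key, value) writes, where value None is
# a delete marker.
def rebuild_state(chain):
    state = {}
    for block in chain:          # blocks in commit order
        for tx in block:         # transactions in order within each block
            for key, value in tx:
                if value is None:
                    state.pop(key, None)   # apply the delete marker
                else:
                    state[key] = value     # create or update
    return state

chain = [
    [[("k1", "v1"), ("k2", "v2")]],     # block 1: one tx creating k1 and k2
    [[("k1", "v1x")], [("k2", None)]],  # block 2: update k1, then delete k2
]
print(rebuild_state(chain))  # {'k1': 'v1x'}
```

Replaying the same log always yields the same state, which is why the database can be dropped and regenerated at any time.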
Transaction Flow¶
At a high level, the transaction flow consists of a transaction proposal sent by an application client to specific endorsing peers. The endorsing peers verify the client signature, and execute a chaincode function to simulate the transaction. The output is the chaincode results, a set of key/value versions that were read in the chaincode (read set), and the set of keys/values that were written in chaincode (write set). The proposal response gets sent back to the client along with an endorsement signature.
The client assembles the endorsements into a transaction payload and broadcasts it to an ordering service. The ordering service delivers ordered transactions as blocks to all peers on a channel.
Before committal, peers will validate the transactions. First, they will check the endorsement policy to ensure that the correct allotment of the specified peers have signed the results, and they will authenticate the signatures against the transaction payload.
Secondly, peers will perform a versioning check against the transaction read set, to ensure data integrity and protect against threats such as double-spending. Hyperledger Fabric has concurrency control whereby transactions execute in parallel (by endorsers) to increase throughput, and upon commit (by all peers) each transaction is verified to ensure that no other transaction has modified data it has read. In other words, it ensures that the data that was read during chaincode execution has not changed since execution (endorsement) time, and therefore the execution results are still valid and can be committed to the ledger state database. If the data that was read has been changed by another transaction, then the transaction in the block is marked as invalid and is not applied to the ledger state database. The client application is alerted, and can handle the error or retry as appropriate.
See the 交易流程 and Read-Write set semantics topics for a deeper dive on transaction structure, concurrency control, and the state DB.
State Database options¶
State database options include LevelDB and CouchDB. LevelDB is the default key/value state database embedded in the peer process. CouchDB is an optional alternative external state database. Like the LevelDB key/value store, CouchDB can store any binary data that is modeled in chaincode (CouchDB attachment functionality is used internally for non-JSON binary data). But as a JSON document store, CouchDB additionally enables rich query against the chaincode data, when chaincode values (e.g. assets) are modeled as JSON data.
Both LevelDB and CouchDB support core chaincode operations such as getting and setting a key (asset), and querying based on keys. Keys can be queried by range, and composite keys can be modeled to enable equivalence queries against multiple parameters. For example a composite key of (owner,asset_id) can be used to query all assets owned by a certain entity. These key-based queries can be used for read-only queries against the ledger, as well as in transactions that update the ledger.
If you model assets as JSON and use CouchDB, you can also perform complex rich queries against the chaincode data values, using the CouchDB JSON query language within chaincode. These types of queries are excellent for understanding what is on the ledger. Proposal responses for these types of queries are typically useful to the client application, but are not typically submitted as transactions to the ordering service. In fact, there is no guarantee the result set is stable between chaincode execution and commit time for rich queries, and therefore rich queries are not appropriate for use in update transactions, unless your application can guarantee the result set is stable between chaincode execution time and commit time, or can handle potential changes in subsequent transactions. For example, if you perform a rich query for all assets owned by Alice and transfer them to Bob, a new asset may be assigned to Alice by another transaction between chaincode execution time and commit time, and you would miss this ‘phantom’ item.
CouchDB runs as a separate database process alongside the peer, therefore there are additional considerations in terms of setup, management, and operations. You may consider starting with the default embedded LevelDB, and move to CouchDB if you require the additional complex rich queries. It is a good practice to model chaincode asset data as JSON, so that you have the option to perform complex rich queries if needed in the future.
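For example, if assets are modeled as JSON documents with owner and docType fields (an illustrative schema, not a Fabric convention), a rich query passed to the chaincode API GetQueryResult might use a CouchDB selector such as:

```json
{
  "selector": {
    "docType": "asset",
    "owner": "alice"
  }
}
```

As noted above, the result set of such a query is not protected against phantom reads, so it belongs in read-only queries rather than update transactions.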
CouchDB Configuration¶
CouchDB is enabled as the state database by changing the stateDatabase configuration option from goleveldb to CouchDB. Additionally, the couchDBAddress needs to be configured to point to the CouchDB to be used by the peer. The username and password properties should be populated with an admin username and password if CouchDB is configured with a username and password. Additional options are provided in the couchDBConfig section and are documented in place. Changes to core.yaml take effect after restarting the peer.
You can also pass in docker environment variables to override core.yaml values, for example
CORE_LEDGER_STATE_STATEDATABASE
and CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
.
Below is the stateDatabase
section from core.yaml:
state:
# stateDatabase - options are "goleveldb", "CouchDB"
# goleveldb - default state database stored in goleveldb.
# CouchDB - store state database in CouchDB
stateDatabase: goleveldb
couchDBConfig:
# It is recommended to run CouchDB on the same server as the peer, and
# not map the CouchDB container port to a server port in docker-compose.
# Otherwise proper security must be provided on the connection between
# CouchDB client (on the peer) and server.
couchDBAddress: couchdb:5984
# This username must have read and write authority on CouchDB
username:
# The password is recommended to pass as an environment variable
# during start up (e.g. LEDGER_COUCHDBCONFIG_PASSWORD).
# If it is stored here, the file must be access control protected
# to prevent unintended users from discovering the password.
password:
# Number of retries for CouchDB errors
maxRetries: 3
# Number of retries for CouchDB errors during peer startup
maxRetriesOnStartup: 10
# CouchDB request timeout (unit: duration, e.g. 20s)
requestTimeout: 35s
# Limit on the number of records to return per query
queryLimit: 10000
The CouchDB docker images supplied with Hyperledger Fabric support setting the CouchDB username and password via the COUCHDB_USER and COUCHDB_PASSWORD environment variables, using Docker Compose scripting.
For CouchDB installations outside of the docker images supplied with Fabric, the local.ini file must be edited to set the admin username and password.
Docker compose scripts only set the username and password at the creation of the container. The local.ini file must be edited if the username or password is to be changed after creation of the container.
Note
CouchDB peer options are read on each peer startup.
Read-Write set semantics¶
This document discusses the details of the current implementation of the semantics of read-write sets.
Transaction simulation and read-write set¶
During simulation of a transaction at an endorser
, a read-write set
is prepared for the transaction. The read set
contains a list of
unique keys and their committed versions that the transaction reads
during simulation. The write set
contains a list of unique keys
(though there can be overlap with the keys present in the read set) and
their new values that the transaction writes. A delete marker is set (in
the place of new value) for the key if the update performed by the
transaction is to delete the key.
Further, if the transaction writes a value multiple times for a key, only the last written value is retained. Also, if a transaction reads a value for a key, the value in the committed state is returned even if the transaction has updated the value for the key before issuing the read. In other words, read-your-writes semantics are not supported.
As noted earlier, the versions of the keys are recorded only in the read set; the write set just contains the list of unique keys and their latest values set by the transaction.
There could be various schemes for implementing versions. The minimal requirement for a versioning scheme is to produce non-repeating identifiers for a given key. For instance, using monotonically increasing numbers for versions can be one such scheme. In the current implementation, we use a blockchain height based versioning scheme in which the height of the committing transaction is used as the latest version for all the keys modified by the transaction. In this scheme, the height of a transaction is represented by a tuple (blkNum, txNum), where txNum is the height of the transaction within the block. This scheme has many advantages over the incremental number scheme - primarily, it enables other components such as statedb, transaction simulation and validation to make efficient design choices.
Following is an illustration of an example read-write set prepared by simulation of a hypothetical transaction. For the sake of simplicity, in the illustrations, we use the incremental numbers for representing the versions.
<TxReadWriteSet>
  <NsReadWriteSet name="chaincode1">
    <read-set>
      <read key="K1", version="1"/>
      <read key="K2", version="1"/>
    </read-set>
    <write-set>
      <write key="K1", value="V1"/>
      <write key="K3", value="V2"/>
      <write key="K4", isDelete="true"/>
    </write-set>
  </NsReadWriteSet>
</TxReadWriteSet>
Additionally, if the transaction performs a range query during
simulation, the range query as well as its results will be added to the
read-write set as query-info
.
Transaction validation and updating world state using read-write set¶
A committer
uses the read set portion of the read-write set for
checking the validity of a transaction and the write set portion of the
read-write set for updating the versions and the values of the affected
keys.
In the validation phase, a transaction is considered valid
if the
version of each key present in the read set of the transaction matches
the version for the same key in the world state - assuming all the
preceding valid
transactions (including the preceding transactions
in the same block) are committed (committed-state). An additional
validation is performed if the read-write set also contains one or more
query-info.
This additional validation should ensure that no key has been
inserted/deleted/updated in the super range (i.e., union of the ranges)
of the results captured in the query-info(s). In other words, if we
re-execute any of the range queries (that the transaction performed
during simulation) during validation on the committed-state, it should
yield the same results that were observed by the transaction at the time
of simulation. This check ensures that if a transaction observes phantom
items during commit, the transaction should be marked as invalid. Note
that this phantom protection is limited to range queries (i.e.,
GetStateByRange
function in the chaincode) and not yet implemented
for other queries (i.e., GetQueryResult
function in the chaincode).
Other queries are at risk of phantoms, and should therefore only be used
in read-only transactions that are not submitted to ordering, unless the
application can guarantee the stability of the result set between
simulation and validation/commit time.
If a transaction passes the validity check, the committer uses the write set for updating the world state. In the update phase, for each key present in the write set, the value in the world state for the same key is set to the value as specified in the write set. Further, the version of the key in the world state is changed to reflect the latest version.
Example simulation and validation¶
This section helps with understanding the semantics through an example
scenario. For the purpose of this example, the presence of a key, k
,
in the world state is represented by a tuple (k,ver,val)
where
ver
is the latest version of the key k
having val
as its
value.
Now, consider a set of five transactions T1, T2, T3, T4, and T5
, all
simulated on the same snapshot of the world state. The following snippet
shows the snapshot of the world state against which the transactions are
simulated and the sequence of read and write activities performed by
each of these transactions.
World state: (k1,1,v1), (k2,1,v2), (k3,1,v3), (k4,1,v4), (k5,1,v5)
T1 -> Write(k1, v1'), Write(k2, v2')
T2 -> Read(k1), Write(k3, v3')
T3 -> Write(k2, v2'')
T4 -> Write(k2, v2'''), read(k2)
T5 -> Write(k6, v6'), read(k5)
Now, assume that these transactions are ordered in the sequence of T1,..,T5 (could be contained in a single block or different blocks)
- T1 passes validation because it does not perform any read. Further, the tuples of keys k1 and k2 in the world state are updated to (k1,2,v1'), (k2,2,v2').
- T2 fails validation because it reads a key, k1, which was modified by a preceding transaction - T1.
- T3 passes validation because it does not perform a read. Further, the tuple of the key, k2, in the world state is updated to (k2,3,v2'').
- T4 fails validation because it reads a key, k2, which was modified by a preceding transaction - T1.
- T5 passes validation because it reads a key, k5, which was not modified by any of the preceding transactions.
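The validation outcomes above can be reproduced with a small simulation of the version check in plain Python (not Fabric code; the version numbers follow the simplified incremental scheme used in this example):

```python
# Simulate MVCC validation: a transaction is valid iff every key it read still
# has the version it observed at simulation time; valid transactions then
# commit their writes, bumping each written key's version.
def validate_and_commit(world, transactions):
    results = {}
    for name, reads, writes in transactions:
        valid = all(world[key][0] == ver for key, ver in reads)
        results[name] = valid
        if valid:
            for key, value in writes:
                prev_ver = world[key][0] if key in world else 0
                world[key] = (prev_ver + 1, value)
    return results

# World state snapshot all five transactions were simulated against:
# key -> (version, value); every read observed version 1.
world = {k: (1, "v") for k in ("k1", "k2", "k3", "k4", "k5")}
txs = [
    ("T1", [], [("k1", "v1'"), ("k2", "v2'")]),
    ("T2", [("k1", 1)], [("k3", "v3'")]),
    ("T3", [], [("k2", "v2''")]),
    ("T4", [("k2", 1)], [("k2", "v2'''")]),
    ("T5", [("k5", 1)], [("k6", "v6'")]),
]
results = validate_and_commit(world, txs)
print(results)  # {'T1': True, 'T2': False, 'T3': True, 'T4': False, 'T5': True}
```

T2 and T4 are rejected exactly as in the walkthrough: each read a key at version 1 that a preceding valid transaction had already advanced.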
Note: Transactions with multiple read-write sets are not yet supported.
This section was translated by 刘博宇; last updated 2018-01-05 (original link).
Gossip data dissemination protocol¶
Translator's note: gossip is a remarkable protocol. It is commonly used for peer-to-peer communication and is modeled on the way rumors spread among people. Briefly: spreading a rumor requires seed nodes. Every second, each seed node randomly sends other nodes the list of nodes it knows about, together with the message to be disseminated, so any newly joined node quickly becomes known across the whole network. The remarkable part is that the protocol was never designed to guarantee immediate delivery to every node, yet as time passes, at some final moment, the whole network converges on the same information.
Hyperledger Fabric optimizes blockchain network performance, security and scalability by dividing workload across transaction execution (endorsing and committing) peers and transaction ordering nodes. This decoupling of network operations requires a secure, reliable and scalable data dissemination protocol to ensure data integrity and consistency. To meet these requirements, Hyperledger Fabric implements a gossip data dissemination protocol.
Gossip protocol¶
Peers leverage gossip to broadcast ledger and channel data in a scalable fashion. Gossip messaging is continuous, and each peer on a channel is constantly receiving current and consistent ledger data from multiple other peers. Each gossiped message is signed, so forged messages sent by Byzantine participants are easily identified, and distribution of such messages to unwanted targets is prevented. Peers affected by delays, network partitions or other causes resulting in missed blocks will eventually be synced up to the current ledger state by contacting peers in possession of the missing blocks.
The gossip-based data dissemination protocol performs three primary functions on a Hyperledger Fabric network:
- Manages peer discovery and channel membership, by continually identifying available member peers, and eventually detecting peers that have gone offline.
- Disseminates ledger data across all peers on a channel. Any peer with data that is out of sync with the rest of the channel identifies the missing blocks and syncs itself by copying the correct data.
- Brings newly connected peers up to speed by allowing peer-to-peer state transfer updates of ledger data.
Gossip-based broadcasting operates by peers receiving messages from other peers on the channel, and then forwarding these messages to a number of randomly selected peers on the channel, where this number is a configurable constant. Peers can also exercise a pull mechanism rather than waiting for delivery of a message. This cycle repeats, with the result that channel membership, ledger and state information are continually kept current and in sync. For dissemination of new blocks, the leader peer on the channel pulls the data from the ordering service and initiates gossip dissemination to the other peers.
Gossip messaging¶
Online peers indicate their availability by continually broadcasting "alive" messages, with each containing the public key infrastructure (PKI) ID and the signature of the sender over the message. Peers maintain channel membership by collecting these alive messages; if no peer receives an alive message from a specific peer, this "dead" peer is eventually purged from channel membership. Because "alive" messages are cryptographically signed, malicious peers can never impersonate other peers, as they lack a signing key authorized by a root certificate authority (CA).
In addition to the automatic forwarding of received messages, a state reconciliation process synchronizes world state across peers on each channel. Each peer continually pulls blocks from other peers on the channel, in order to repair its own state if discrepancies are identified. Because fixed connectivity is not required to maintain gossip-based data dissemination, the process reliably provides data consistency and integrity to the shared ledger, including tolerance for node crashes.
Because channels are segregated, peers on one channel cannot message or share information with any other channel. Though any peer can belong to multiple channels, partitioned messaging prevents blocks from being disseminated to peers that are not in the channel, by applying message routing policies based on peers' channel subscriptions.
Notes:
- Security of point-to-point messages is handled by the peer TLS layer and does not require signatures. Peers are authenticated by certificates assigned by a CA. Although TLS certificates are also used, it is the peer certificates that are authenticated in the gossip layer. Ledger blocks are signed by the ordering service and then delivered to the leader peers on a channel.
- Authentication is governed by the peer's membership service provider (MSP). When a peer connects to a channel for the first time, the TLS session binds with the membership identity. This essentially authenticates each peer to the connecting peer, with respect to membership in the network and channel.
Hyperledger Fabric FAQ¶
Endorsement¶
Endorsement architecture:
- How many peers in the network need to endorse a transaction?
A. The number of peers required to endorse a transaction is driven by the endorsement policy that is specified at chaincode deployment time.
- Does an application client need to connect to all peers?
A. Clients only need to connect to as many peers as are required by the endorsement policy for the chaincode.
Security & Access Control¶
Data Privacy and Access Control:
- How do I ensure data privacy?
A. There are various aspects to data privacy. First, you can segregate your network into channels, where each channel represents a subset of participants that are authorized to see the data for the chaincodes that are deployed to that channel. Second, within a channel you can restrict the input data to chaincode to the set of endorsers only, by using visibility settings. The visibility setting will determine whether input and output chaincode data is included in the submitted transaction, versus just output data. Third, you can hash or encrypt the data before calling chaincode. If you hash the data then you will need to provide a means to share the source data. If you encrypt the data then you will need to provide a means to share the decryption keys. Fourth, you can restrict data access to certain roles in your organization, by building access control into the chaincode logic. Fifth, ledger data at rest can be encrypted via file system encryption on the peer, and data in-transit is encrypted via TLS.
- Do the orderers see the transaction data?
A. No, the orderers only order transactions; they do not open the transactions. If you do not want the data to go through the orderers at all, and you are only concerned about the input data, then you can use visibility settings. The visibility setting will determine whether input and output chaincode data is included in the submitted transaction, versus just output data. Therefore, the input data can be private to the endorsers only. If you do not want the orderers to see chaincode output, then you can hash or encrypt the data before calling chaincode. If you hash the data then you will need to provide a means to share the source data. If you encrypt the data then you will need to provide a means to share the decryption keys.
Application-side Programming Model¶
Transaction execution result:
- How do application clients know the outcome of a transaction?
A. The transaction simulation results are returned to the client by the endorser in the proposal response. If there are multiple endorsers, the client can check that the responses are all the same, and submit the results and endorsements for ordering and commitment. Ultimately the committing peers will validate or invalidate the transaction, and the client becomes aware of the outcome via an event, that the SDK makes available to the application client.
Ledger queries:
- How do I query the ledger data?
A. Within chaincode you can query based on keys. Keys can be queried by range, and composite keys can be modeled to enable equivalence queries against multiple parameters. For example a composite key of (owner,asset_id) can be used to query all assets owned by a certain entity. These key-based queries can be used for read-only queries against the ledger, as well as in transactions that update the ledger.
If you model asset data as JSON in chaincode and use CouchDB as the state database, you can also perform complex rich queries against the chaincode data values, using the CouchDB JSON query language within chaincode. The application client can perform read-only queries, but these responses are not typically submitted as part of transactions to the ordering service.
- How do I query the historical data to understand data provenance?
A. The chaincode API GetHistoryForKey()
will return history of
values for a key.
- How do I guarantee the query result is correct, especially when the peer being queried may be recovering and catching up on block processing?
A. The client can query multiple peers, compare their block heights, compare their query results, and favor the peers at the higher block heights.
Chaincode (Smart Contracts and Digital Assets)¶
- Does Hyperledger Fabric support smart contract logic?
A. Yes. We call this feature chaincode. It is our interpretation of the smart contract method/algorithm, with additional features.
A chaincode is programmatic code deployed on the network, where it is executed and validated by chain validators together during the consensus process. Developers can use chaincodes to develop business contracts, asset definitions, and collectively-managed decentralized applications.
- How do I create a business contract?
A. There are generally two ways to develop business contracts: the first way is to code individual contracts into standalone instances of chaincode; the second way, and probably the more efficient way, is to use chaincode to create decentralized applications that manage the life cycle of one or multiple types of business contracts, and let end users instantiate instances of contracts within these applications.
- How do I create assets?
A. Users can use chaincode (for business rules) and membership service (for digital tokens) to design assets, as well as the logic that manages them.
There are two popular approaches to defining assets in most blockchain solutions: the stateless UTXO model, where account balances are encoded into past transaction records; and the account model, where account balances are kept in state storage space on the ledger.
Each approach carries its own benefits and drawbacks. This blockchain technology does not advocate either one over the other. Instead, one of our first requirements was to ensure that both approaches can be easily implemented.
- Which languages are supported for writing chaincode?
A. Chaincode can be written in any programming language and executed in containers. The first fully supported chaincode language is Golang.
Support for additional languages and the development of a templating language have been discussed, and more details will be released in the near future.
It is also possible to build Hyperledger Fabric applications using Hyperledger Composer.
- Does the Hyperledger Fabric have native currency?
A. No. However, if you really need a native currency for your chain network, you can develop your own native currency with chaincode. One common attribute of native currency is that some amount will get transacted (the chaincode defining that currency will get called) every time a transaction is processed on its chain.
Differences in Most Recent Releases¶
- As part of the v1.0.0 release, what are the highlight differences between v0.6 and v1.0?
A. The differences between any subsequent releases are provided together with the Release Notes. Since Fabric is a pluggable modular framework, you can refer to the design docs for further information on these differences.
- Where to get help for the technical questions not answered above?
- Please use StackOverflow.
This section was translated by 刘博宇; last updated 2018-01-08 (original link).
Glossary¶
Terminology is important, so that all Hyperledger Fabric users and developers agree on what we mean by each specific term (what a chaincode is, for example). The documentation will reference the glossary as needed, but feel free to read the entire thing in one sitting if you like; it's pretty enlightening!
Anchor Peer¶
A peer on a channel that all other peers can discover and communicate with. Each member on a channel has an anchor peer (or multiple anchor peers, to prevent a single point of failure), allowing for peers belonging to different members to discover all existing peers on the channel.
Block¶
An ordered set of transactions that is cryptographically linked to the preceding block(s) on a channel.
Chain¶
The chain of the ledger is a transaction log structured as hash-linked blocks of transactions. Peers receive blocks of transactions from the ordering service, mark the block's transactions as valid or invalid based on endorsement policies and concurrency violations, and append the block to the hash chain on the peer's file system.
Chaincode¶
Chaincode is software, running on a ledger, that encodes assets and the transaction instructions (business logic) for modifying the assets.
Channel¶
A channel is a private blockchain overlay which allows for data isolation and confidentiality. A channel-specific ledger is shared across the peers in the channel, and transacting parties must be properly authenticated to a channel in order to interact with it. Channels are defined by a configuration block.
Commitment¶
Each peer on a channel validates ordered blocks of transactions and then commits (writes or appends) the blocks to its replica of the channel ledger. Peers also mark each transaction in each block as valid or invalid.
Concurrency Control Version Check¶
Concurrency Control Version Check is a method of keeping state in sync across peers on a channel. Peers execute transactions in parallel, and before commitment to the ledger, peers check that the data read at execution time has not changed. If the data read for the transaction has changed between execution time and commitment time, then a Concurrency Control Version Check violation has occurred, and the transaction is marked as invalid on the ledger and values are not updated in the state database.
Configuration Block¶
Contains the configuration data defining members and policies for a system chain (ordering service) or channel. Any configuration modification to a channel or overall network (e.g. a member leaving or joining) will result in a new configuration block being appended to the appropriate chain. This block will contain the contents of the genesis block, plus the delta.
Consensus¶
A broader term overarching the entire transactional flow, which serves to generate an agreement on the order and to confirm the correctness of the set of transactions constituting a block.
Current State¶
The current state of the ledger represents the latest values for all keys ever included in its chain transaction log. Peers commit the latest values to the ledger's current state for each valid transaction included in a processed block. Since current state represents all latest key values known to the channel, it is sometimes referred to as world state. Chaincode executes transaction proposals against current state data.
Dynamic Membership¶
Hyperledger Fabric supports the addition and removal of members, peers, and ordering service nodes, without compromising the operationality of the overall network. Dynamic membership is critical when business relationships adjust and entities need to be added or removed for various reasons.
Endorsement¶
Refers to the process where specific peer nodes execute a chaincode transaction and return a proposal response to the client application. The proposal response includes the chaincode execution response message, results (read set and write set) and events, as well as a signature to serve as proof of the peer's chaincode execution. Chaincode applications have corresponding endorsement policies, in which the endorsing peers are specified.
Endorsement Policy¶
Defines the peer nodes on a channel that must execute transactions attached to a specific chaincode application, and the required combination of responses (endorsements). A policy could require that a transaction be endorsed by a minimum number of endorsing peers, a minimum percentage of endorsing peers, or by all endorsing peers that are assigned to a specific chaincode application. Policies can be curated based on the application and the desired level of resilience against misbehavior by the endorsing peers. A submitted transaction must satisfy the endorsement policy before being marked as valid by committing peers. A distinct endorsement policy for install and instantiate transactions is also required.
Hyperledger Fabric CA¶
Hyperledger Fabric CA is the default Certificate Authority component, which issues PKI-based certificates to network members and their users. The CA issues one root certificate (rootCert) to each member and one enrollment certificate (eCert) to each authorized user.
Genesis Block¶
The configuration block that initializes a blockchain network or channel; it also serves as the first block on a chain.
Gossip Protocol¶
The gossip data dissemination protocol performs three functions:
- manages peer discovery and channel membership;
- disseminates ledger data across all peers on the channel;
- syncs ledger state across all peers on the channel.
Refer to the Gossip topic for more details.
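The dissemination idea can be sketched as a push-style simulation: in each round, every peer that already holds a block forwards it to a randomly chosen peer, so the block spreads exponentially. This is a toy model of gossip in general, not Fabric's actual protocol (which also uses pull-based state transfer and leader election).

```python
# Toy simulation of gossip-style block dissemination (illustrative only).
import random

def gossip(num_peers, rounds=100, seed=1):
    random.seed(seed)                    # deterministic for the example
    has_block = {0}                      # peer 0 receives the block first
    for _ in range(rounds):
        if len(has_block) == num_peers:
            break                        # everyone has the block
        for p in list(has_block):
            # each informed peer pushes the block to one random peer
            has_block.add(random.randrange(num_peers))
    return len(has_block)

print(gossip(8))  # 8: all peers eventually receive the block
```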
Initialize¶
A method to initialize a chaincode application.
Install¶
The process of placing a chaincode on a peer's file system.
Instantiate¶
The process of starting and initializing a chaincode application on a specific channel. After instantiation, peers that have the chaincode installed can accept chaincode invocations.
Invoke¶
Used to call chaincode functions. A client application invokes chaincode by sending a transaction proposal to a peer. The peer executes the chaincode and returns an endorsed proposal response to the client application. The client application gathers enough proposal responses to satisfy the endorsement policy, and then submits the transaction results for ordering, validation, and commitment. The client application may choose not to submit the transaction results; for example, if the invoke only queried the ledger, the client application typically would not submit the read-only transaction, unless it wishes to record the ledger read for audit purposes. An invoke includes a channel identifier, the chaincode function to invoke, and an array of arguments.
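The client-side flow above can be sketched end to end. The peer and endorsement objects here are simplified stand-ins assumed for illustration; a real client would use a Fabric SDK and submit the assembled transaction to the ordering service.

```python
# Hypothetical sketch of the client-side invoke flow: send a proposal to
# endorsing peers, collect responses, check an endorsement threshold,
# then assemble the transaction for ordering.

def invoke(peers, channel_id, fn, args, min_endorsements):
    proposal = {"channel": channel_id, "fn": fn, "args": args}
    responses = [peer(proposal) for peer in peers]          # endorsement phase
    endorsed = [r for r in responses if r["endorsed"]]
    if len(endorsed) < min_endorsements:
        raise RuntimeError("endorsement policy not satisfied")
    return {"proposal": proposal, "endorsements": endorsed}  # sent to orderer

# Two simulated peers that endorse every proposal.
peers = [lambda p: {"endorsed": True, "result": "ok"} for _ in range(2)]
tx = invoke(peers, "mychannel", "transfer", ["a", "b", "10"], min_endorsements=2)
print(len(tx["endorsements"]))  # 2
```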
Leading Peer¶
Each Member can own multiple peers on each channel it subscribes to. One of these peers serves as the leading peer for the channel, communicating with the network's ordering service on behalf of the member. The ordering service delivers blocks to the leading peer on a channel, which then distributes them to the other peers belonging to the same member.
Ledger¶
A ledger is a channel's chain and its current state data, maintained by every peer on the channel.
Member¶
A legally separate entity that owns a unique root certificate for the network. Network components such as peer nodes and application clients are linked to a member.
Membership Service Provider (MSP)¶
The Membership Service Provider (MSP) is an abstract component of the system that provides credentials to clients and peers so they can participate in a Hyperledger Fabric network. Clients use these credentials to authenticate their transactions, and peers use them to authenticate transaction processing results (endorsements). While strongly connected to the transaction processing components of the system, this interface allows alternate membership services implementations to be plugged in smoothly without modifying the core of the system's transaction processing components.
Membership Services¶
Membership Services authenticates, authorizes, and manages identities on a permissioned blockchain network. The membership services code running in peers and orderers both authenticates and authorizes blockchain operations. It is a PKI-based implementation of the Membership Service Provider (MSP) abstraction.
Ordering Service¶
A collection of nodes that orders transactions into blocks. The ordering service exists independently of the peer processes and orders transactions on a first-come-first-served basis for all channels on the network. It is designed to support pluggable implementations; SOLO and Kafka are currently available. The ordering service is a common binding for the overall network; it contains the cryptographic identity material tied to each Member.
Peer¶
A network entity that maintains a ledger and runs chaincode containers in order to perform read and write operations against the ledger. Peers are owned and maintained by members.
Policy¶
There are policies for endorsement, validation, chaincode management, and network/channel management.
Proposal¶
A request for endorsement that is aimed at a specific peer on a channel. Each proposal is either an instantiate or an invoke (read/write) request.
Query¶
A query is a chaincode invocation that reads the ledger's current state but does not write to the ledger. The chaincode function may query certain keys on the ledger, or a set of keys on the ledger. Since queries do not change ledger state, the client application typically does not submit these read-only transactions for ordering, validation, and commit. Although not typical, the client application can choose to do so, for example if it wants auditable proof on the ledger chain that it had knowledge of a specific ledger state at a certain point in time.
SDK¶
The Hyperledger Fabric client SDK provides a structured library environment for developers to write and test chaincode applications. The SDK is fully configurable and extensible through a standard interface: components such as cryptographic signature algorithms, logging frameworks, and state stores can easily be swapped in and out. The SDK provides APIs for transaction processing, membership services, node traversal, and event handling. The SDK will come in multiple flavors: Node.js, Java, and Python.
State Database¶
To support efficient reads, writes, and queries from chaincode, current state data is stored in a state database; supported databases include LevelDB and CouchDB.
System Chain¶
Contains the configuration blocks defining the network at a system level. The system chain lives within the ordering service and, similar to a channel, has an initial configuration containing information such as MSP information, policies, and configuration details. Any change to the overall network (e.g. a new org joining or a new orderer being added) results in a new configuration block being added to the system chain.
The system chain can be thought of as the common binding for a channel or group of channels. For instance, a collection of financial institutions can form a consortium (represented through the system chain) and then proceed to create channels relevant to their distinct and varying business agendas.
Transaction¶
An invoke or instantiate result that is submitted for ordering, validation, and commitment. Invokes are requests to read and write data from the ledger. Instantiate is a request to start and initialize a chaincode on a channel. Application clients gather invoke or instantiate responses from endorsing peers and package the results and endorsements into a transaction that is submitted for ordering, validation, and commitment.