Create your own blockchain using Python (pt. 10)

Publishing and testing

Guillaume Belanger
10 min read · Nov 13, 2021

In the tenth and final part of this tutorial, we finally deploy instances of our node, expose them to the outside world, and connect them together. We will also execute transactions on this network and validate that it behaves as expected. At that point, I believe we can pride ourselves on having created a blockchain using Python.

Warning: For the first nine sections of this tutorial, I tried my best to make it as simple as possible for anybody, anywhere, to follow the guide. Here we make our lives a bit more complicated by increasing the number of tools necessary to complete the tutorial. This section assumes basic knowledge of cloud infrastructure, Amazon Web Services, Kubernetes and networking. If you’re already familiar with all of that, you might find it a bit boring. And if you’re not, you might find it a bit overwhelming. As you can see, there’s a bit to love for everybody. While I am fully aware that this section is not the most exciting, it just might be the most important one. For most software development projects, it is relatively easy to get a proof of concept working, but the last mile of pushing a reliable piece of software to the public is always difficult. Not that what we are building here is reliable, by the way; I just felt it was a nice thing to say.

Infrastructure Overview

Cloud

We will deploy our node on a public cloud provider. While we could have deployed it on our own personal computers, everybody’s machine is different, and using the public cloud makes for a simple, standardized experience. It will also be very useful when we expose our nodes to the outside world. As our public cloud of choice, we will use Amazon Web Services (AWS), but you could obtain the same results with any other provider. Inside AWS, we will only use the Elastic Compute Cloud (EC2) service.

The virtual machine that we will create on AWS will run Ubuntu, mainly because it’s free and simple to use.

Kubernetes

We will containerize our blockchain. Again, while this is not absolutely necessary, doing so will make it easy for anybody, anywhere, to simply “pull it and run it”. Here is the Dockerfile for the main node application:

FROM python:3.9
COPY . .
RUN pip3 install -r requirements.txt
ENV PYTHONPATH="src"
ENV FLASK_APP=src/node/main.py
EXPOSE 80
CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0", "--port=80"]

And here is the one for the miner application:

FROM python:3.9
COPY . .
RUN pip3 install -r requirements.txt
ENV PYTHONPATH="src"
ENV FLASK_APP=src/node/main.py
CMD ["python3", "src/node/miner_app.py"]

Images for those services have been pushed to Docker Hub as gruyaume/my-blockchain and gruyaume/my-miner.
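This means you don’t have to build the images yourself; they can be pulled directly (assuming you have Docker installed on your workstation):

guillaume@thinkpad:~$ docker pull gruyaume/my-blockchain:1.0.0
guillaume@thinkpad:~$ docker pull gruyaume/my-miner:1.0.0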

Those two services will be managed by Kubernetes. If you’re not already familiar with the thing, here’s a short introduction to Kubernetes, courtesy of Wikipedia:

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management.

Kubernetes will act as a layer of abstraction between our blockchain and its host. This will make it possible, and easy, to deploy our blockchain on pretty much any hardware and operating system. There are many different distributions of Kubernetes; here we will be using MicroK8s, which is small, fast and free. For our use case, we will need three Kubernetes resources:

  • Deployment
  • Service
  • Ingress

Our Deployment contains two containers (the node and the miner), both with access to a shared volume that will contain the mem_pool, the known_nodes file and the blockchain itself:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-blockchain
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-blockchain
  template:
    metadata:
      labels:
        app: my-blockchain
    spec:
      containers:
        - name: my-blockchain
          image: gruyaume/my-blockchain:1.0.0
          imagePullPolicy: Always
          env:
            - name: MY_HOSTNAME
              value: "node00.example-blockchain.com"
            - name: MEMPOOL_DIR
              value: "/mem/mempool"
            - name: KNOWN_NODES_DIR
              value: "/mem/known_nodes"
            - name: BLOCKCHAIN_DIR
              value: "/mem/blockchain"
          volumeMounts:
            - name: mem
              mountPath: /mem
        - name: my-miner
          image: gruyaume/my-miner:1.0.0
          imagePullPolicy: Always
          env:
            - name: MY_HOSTNAME
              value: "node00.example-blockchain.com"
            - name: MEMPOOL_DIR
              value: "/mem/mempool"
            - name: KNOWN_NODES_DIR
              value: "/mem/known_nodes"
            - name: BLOCKCHAIN_DIR
              value: "/mem/blockchain"
          volumeMounts:
            - name: mem
              mountPath: /mem
      volumes:
        - name: mem
          hostPath:
            path: /mem

The Service declares which port of our deployment gets exposed; its LoadBalancer type will let MetalLB (which we enable later) assign it an external IP:

apiVersion: v1
kind: Service
metadata:
  name: my-blockchain
spec:
  ports:
    - name: http
      targetPort: 80
      port: 80
  selector:
    app: my-blockchain
  type: LoadBalancer

And the ingress makes sure that our service is available from outside the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-blockchain
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-blockchain
                port:
                  number: 80

Infrastructure Setup

1. AWS Virtual Infrastructure with EC2

Now, time for work. Here we build the infrastructure necessary to host our node. This part of the tutorial is very AWS-centric; we will detail the necessary steps to:

  • Allocate public IP addresses
  • Create a Virtual Private Cloud (VPC)
  • Create a Security Group
  • Launch an EC2 Instance
  • Associate an IP address with our EC2 instance

Let’s start by allocating ourselves public IP addresses reachable from the internet. We will need two of them: one for the NAT gateway and one for our instance. Go to EC2 -> Elastic IP Addresses -> Allocate Elastic IP Address, then simply click allocate. Do this a second time.
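If you prefer the command line over the console, the same allocation can be done with the AWS CLI; a sketch, assuming the CLI is installed and configured with your credentials:

guillaume@thinkpad:~$ aws ec2 allocate-address
guillaume@thinkpad:~$ aws ec2 allocate-address

Each call returns an AllocationId; note both down, as we will need them shortly.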

Let’s then create a Virtual Private Cloud (VPC). Head to the VPC Dashboard and click on “Launch VPC Wizard”. Select “VPC with a Public and Private Subnet” and click “Select”. You will have to fill in a form; you can use the same values as I used below. For the Elastic IP allocation ID, select one of the two IPs that you just created.

Now let’s create a VPC security group, which is a virtual firewall for our instance that controls inbound and outbound traffic. For our use case, we need to add inbound rules to our VPC security group that allow traffic to connect from the internet. To do so, go to VPC -> Security Groups -> Create Security Group. You can copy the information displayed below. Make sure the VPC you select is the one you just created. Note that our inbound and outbound rules are highly insecure, as we allow all TCP traffic. We could have restricted them to just the ports we are using, but for testing purposes, I find it useful to leave them open.
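For reference, here is a rough CLI equivalent of what the console does; a sketch only, where the VPC and group IDs are placeholders for your own:

guillaume@thinkpad:~$ aws ec2 create-security-group --group-name my-blockchain-sg --description "my-blockchain" --vpc-id vpc-0123456789abcdef0
guillaume@thinkpad:~$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 0-65535 --cidr 0.0.0.0/0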

Now create an EC2 instance with the following specifications:

  • Image: Ubuntu Server 20.04 LTS
  • Type: t2.medium

In the “Configure Instance” section, make sure you select the network associated with your newly created VPC, as well as your public subnet. Disable auto-assign public IP, since we’ll be using our Elastic IP shortly.

In the “Add Storage” section, bump up the storage amount to 50 GB and in the “Security Groups” section, select the security group you just created. Click launch.

Now we need to associate the second public IP address with our newly created EC2 instance. Go back to EC2 -> Elastic IP Addresses, select the unassociated address and click Actions -> Associate Elastic IP Address. Make sure “Instance” is selected and enter the newly created instance’s ID.
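The CLI equivalent, for those keeping to the terminal (the instance and allocation IDs shown are placeholders for your own):

guillaume@thinkpad:~$ aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0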

2. Validating connectivity

Open a terminal on your workstation and try to connect to your instance using its public IP; in my case it’s 54.204.132.8, yours will be different.

guillaume@thinkpad:~$ ssh -i guillaume.pem ubuntu@54.204.132.8

3. Setting up Microk8s

Once you’re inside the virtual machine, install microk8s:

ubuntu@ip-10-0-0-54:~$ sudo apt-get update && sudo apt-get upgrade
ubuntu@ip-10-0-0-54:~$ sudo snap install microk8s --classic

Update your user’s permissions by adding it to the microk8s group:

ubuntu@ip-10-0-0-54:~$ sudo usermod -a -G microk8s ubuntu
ubuntu@ip-10-0-0-54:~$ sudo chown -f -R ubuntu ~/.kube

After changing those permissions, you’ll have to create a new shell for them to take effect. You can do this by running:

ubuntu@ip-10-0-0-54:~$ newgrp microk8s

Now enable some add-ons on your MicroK8s cluster:

ubuntu@ip-10-0-0-54:~$ microk8s enable dns ingress storage

We will be using MetalLB as our load balancer for Kubernetes. It can be enabled the same way as the other add-ons:

ubuntu@ip-10-0-0-54:~$ microk8s enable metallb

You will be asked for a range of IPs to provide; answer with the range of private addresses we set up earlier: 10.0.1.1-10.0.1.254.
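If you’d rather skip the interactive prompt, MicroK8s also accepts the range directly as an argument to the enable command:

ubuntu@ip-10-0-0-54:~$ microk8s enable metallb:10.0.1.1-10.0.1.254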

4. Deploying our node using kubectl

MicroK8s uses a namespaced kubectl command to prevent conflicts with any existing installs of kubectl. In our case, we don’t have an existing install, so we will add an alias like this:

ubuntu@ip-10-0-0-54:~$ alias kubectl='microk8s kubectl'

Note that this alias won’t survive exiting your shell session, so you’ll have to re-run the command every time you log back in. Now clone the code from GitHub and head to the deploy directory:

ubuntu@ip-10-0-0-54:~$ git clone https://github.com/gruyaume/my-blockchain.git
ubuntu@ip-10-0-0-54:~$ cd my-blockchain/deploy/

Here, you can look at the deployment.yaml file:

cat kubernetes/deployment.yaml

Before deploying, change the value associated with the MY_HOSTNAME environment variable from node00.example-blockchain.com to the actual hostname you will be using for your node. Once this is done, you can deploy the node using kubectl apply:
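If you’d rather not open an editor, a quick sed one-liner does the job (substitute your own hostname for node00.your-domain.com):

ubuntu@ip-10-0-0-54:~/my-blockchain/deploy$ sed -i 's/node00.example-blockchain.com/node00.your-domain.com/g' kubernetes/deployment.yaml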

ubuntu@ip-10-0-0-54:~/my-blockchain/deploy$ kubectl apply -f kubernetes/

Voilà, you now have an instance of the blockchain running on AWS.

5. Validating the Kubernetes deployment

You can validate that our service is correctly created:

ubuntu@ip-10-0-0-54:~$ kubectl get svc
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP      10.152.183.1     <none>        443/TCP        22m
my-blockchain   LoadBalancer   10.152.183.158   10.0.1.1      80:32301/TCP   20m

And that our deployment is also up:

ubuntu@ip-10-0-0-54:~$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
my-blockchain-b9d844446-9hnzg   1/1     Running   0          21m

6. Validating Networking

Still from inside the virtual machine, validate that our service returns something when we call it. Here, the IP is the EXTERNAL-IP associated with the my-blockchain service. Make sure you use the correct one:

ubuntu@ip-10-0-0-54:~$ curl 10.0.1.1/block

Now, from your workstation, do the same thing, but pointing at the instance’s public address:

guillaume@thinkpad:~$ curl 54.204.132.8/block

Both calls should return identical answers: the blockchain, in list format. A typical answer looks like this:

[{"header":{"merkle_root":"98594c8a1ac83eb053175e2ff943cf0d4a4f6c1613197c449344bdbb0d7d64ea","noonce":5,"previous_block_hash":"4400febaaadddb843e42649713d984b0f90355e8decaa8b16c005359c3a88796","timestamp":1320797473.333},"transactions":[{"inputs":[{"output_index":0,"transaction_hash":"53f41fef3b00c05448f3f1bd37265588b601b7b9c75c4b51a9876539bc2a3ecd","unlocking_script":""}],"outputs":[{"amount":5,"locking_script":"OP_DUP OP_HASH160 7681c82af05a85f68a5810d967ee3a4087711867 OP_EQUAL_VERIFY OP_CHECKSIG"},{"amount":25,"locking_script":"OP_DUP OP_HASH160 1ebcc7a0c357bdf3f2ffbfa327e7ff572a08c229 OP_EQUAL_VERIFY OP_CHECKSIG"}],"transaction_hash":"e10154f49ae1119777b93e5bcd1a1506b6a89c1f82cc85f63c6cbe83a39df5dc"}]},{"header":{"merkle_root":"0f27ecab1c6ce084cbdc3c1eb507c7b8c8aca1d4523b73d75a63aeb762f14d96","noonce":4,"previous_block_hash":"25585f9c51e46ea95e1f626871175e4c46fa1ce1a7482d73547eaaf7fbcd2ecc","timestamp":1320624313.222},"transactions":[{"inputs":[{"output_index":1,"transaction_hash":"53f41fef3b00c05448f3f1bd37265588b601b7b9c75c4b51a9876539bc2a3ecd","unlocking_script":""}],"outputs":[{"amount":10,"locking_script":"OP_DUP OP_HASH160 7681c82af05a85f68a5810d967ee3a4087711867 OP_EQUAL_VERIFY OP_CHECKSIG"}],"transaction_hash":"e1d7553f03fd2b578116c6ac7c72356c364535a120dbbb40ec182deb8d408961"}]},{"header":{"merkle_root":"d2cda94b5e5b81b83268d645d8c823268305486b35b65bbf0a7685e979662b6a","noonce":3,"previous_block_hash":"621e7d1964bf042ddb6e11a1860a71b6d3ebe534ca6c5868b8db5bb4037fb2b6","timestamp":1320365123.111},"transactions":[{"inputs":[{"output_index":0,"transaction_hash":"dbcf31ea959ccd541d24ac7f397e5b25bbe7078ac28521658db9a9689f1153a7","unlocking_script":""}],"outputs":[{"amount":30,"locking_script":"OP_DUP OP_HASH160 1ebcc7a0c357bdf3f2ffbfa327e7ff572a08c229 OP_EQUAL_VERIFY OP_CHECKSIG"},{"amount":10,"locking_script":"OP_DUP OP_HASH160 a037a093f0304f159fe1e49cfcfff769eaac7cda OP_EQUAL_VERIFY OP_CHECKSIG"}],"transaction_hash":"53f41fef3b00c05448f3f1bd37265588b601b7b9c75c4b51a9876539bc2a3ecd"}]},{"header":{"merkle_root":"a2ea00a3aa67dcd0562555329ba0cbcee88d8a075fdde975bc09baa4a1c7c21b","noonce":2,"previous_block_hash":"1111","timestamp":1320365123.111},"transactions":[{"inputs":[{"output_index":0,"transaction_hash":"abcd1234","unlocking_script":""}],"outputs":[{"amount":40,"locking_script":"OP_DUP OP_HASH160 b'Albert' OP_EQUAL_VERIFY OP_CHECKSIG"}],"transaction_hash":"dbcf31ea959ccd541d24ac7f397e5b25bbe7078ac28521658db9a9689f1153a7"}]}]

7. DNS

At this point, if you want, you can associate a domain name with your instance’s public IP address. This can be done using AWS’s Route 53. This step is optional, so I won’t walk you through it. For our first node, I’ve associated the domain name node00.my-blockchain.gruyaume.com with my Elastic IP address. This means that from anywhere in the world, I can query node00.my-blockchain.gruyaume.com/block and, unless there’s a bug, I will retrieve the blockchain. Note that I have since shut down those instances, so you won’t get any response from those endpoints.
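Once the record has propagated, you can confirm that the domain name resolves to your Elastic IP before hitting the API (a quick sanity check, using your own domain name in place of mine):

guillaume@thinkpad:~$ dig +short node00.my-blockchain.gruyaume.com
guillaume@thinkpad:~$ curl node00.my-blockchain.gruyaume.com/block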

Testing transactions

For the purpose of testing in a realistic environment, you can create two other nodes. To do so, allocate two new Elastic IPs, create two new EC2 instances, associate the IPs with those instances, and then follow steps 2-7 again for each of them. On my side, I ended up with three nodes, named like this:

  • node00.my-blockchain.gruyaume.com
  • node01.my-blockchain.gruyaume.com
  • node02.my-blockchain.gruyaume.com
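With all three nodes up, a quick loop from your workstation confirms that each one answers on the /block endpoint (a sketch, using my hostnames; substitute your own):

guillaume@thinkpad:~$ for node in node00 node01 node02; do curl -s -o /dev/null -w "$node: %{http_code}\n" $node.my-blockchain.gruyaume.com/block; done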

A series of 17 integration tests has been written to validate transaction acceptance and refusal, transaction propagation, block creation, block propagation and node initialization. I won’t go through each of them, since that would take too much time, but here’s one that validates that a transaction is refused when it references a non-existent UTXO:

import time

import pytest
import requests

from blockchain_users.camille import private_key as camille_private_key
from common.node import Node
from common.transaction_input import TransactionInput
from common.transaction_output import TransactionOutput
from integration_tests.common.blockchain_network import DefaultBlockchainNetwork, NODE00_HOSTNAME
from wallet.wallet import Owner, Wallet, Transaction


@pytest.fixture(scope="module")
def camille():
    return Owner(private_key=camille_private_key)


@pytest.fixture(scope="module")
def default_node():
    return Node(NODE00_HOSTNAME)


@pytest.fixture(scope="module")
def blockchain_network():
    return DefaultBlockchainNetwork()


@pytest.fixture(scope="module")
def camille_wallet(camille, default_node):
    return Wallet(camille, default_node)


def test_given_user_points_to_non_existant_utxo_when_process_transaction_then_transaction_is_refused(
        camille_wallet, blockchain_network):
    time.sleep(2)
    blockchain_network.restart()
    time.sleep(2)
    utxo_0 = TransactionInput(transaction_hash="5669d7971b76850a4d725c75fbbc20ea97bd1382e24fae43c41e121ca399b660",
                              output_index=0)
    output_0 = TransactionOutput(public_key_hash=b"a037a093f0304f159fe1e49cfcfff769eaac7cda", amount=5)
    with pytest.raises(requests.exceptions.HTTPError) as error:
        camille_wallet.process_transaction(inputs=[utxo_0], outputs=[output_0])
    assert "Could not find locking script for utxo" in error.value.response.text

Those tests are located under the integration_tests directory, and they assume that you have an environment with three nodes running. You can adapt the tox.ini file to point to your nodes’ hostnames. Once that is done, run tox -e integration. If everything goes well, you will get a green message telling you that all 17 tests passed.
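If you’d rather bypass tox, you can usually invoke pytest directly; a sketch, assuming pytest is installed and, as in the Dockerfiles above, the source is importable from src:

ubuntu@ip-10-0-0-54:~/my-blockchain$ PYTHONPATH=src python3 -m pytest integration_tests/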


Guillaume Belanger

Guillaume is a software developer from Montreal who writes about bip bop stuff.