Before diving into this article, please read the two disclosures about my involvement (1, 2) and the one on data accuracy (3) at the bottom of the article.
I only understood about 10% of this, being totally uninformed about blockchain (my fault)...but I still have to <3 and applaud, because I know research effort when I see it! Great job.
I run a Parity node with --tracing on and --pruning archive and have done so since June of 2016. Since the Byzantium hard fork (October 16), the chain data (in this admittedly extreme case) has grown more than 125 GB. If one wishes to do what I call a "deep, full, audit level accounting", one needs the traces. It's unclear if one needs the archive from this article. If you're running tracing and archive, the chain will blow past 1TB very soon.
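For the curious, the invocation I'm describing is roughly this (both flags straight from Parity's CLI):

parity --tracing on --pruning archive

Tracing stores the execution traces of every transaction, and archive pruning keeps every historical state, which is why the disk usage grows so much faster than in the default configuration.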
Hi Afri, thanks for the write-up!
I have a question. You write that configuration 07 (warp sync without ancient blocks) should be discouraged. Could you explain why?
Because it does not hold historic blocks. And you can only verify the integrity of the chain, the transactions, the state, and balances, if you have access to all historic block data. That is available in Configurations 00-06, and partially in 08, but not in 07.
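For reference, that configuration is warp sync with the ancient-block download disabled; if I recall the flag correctly (please double-check with parity --help), it is something like:

parity --no-ancient-blocks

Such a node can serve recent state but cannot re-verify the whole chain from genesis.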
Great article!
If I understood correctly, a full but pruned node would need all the blocks + the recent state of each account (and smart contract)?
Yesterday, Dec 6th, 2017, the number of Ethereum accounts grew to 13.4 million. Based on your article, the size of the chaindata of a pruned Ethereum node is now, depending on the mode, somewhere between 25 and 40 GB?
As more and more apps meant for global use rely on users to have cryptocurrencies and tokens, the number of Ethereum accounts can start to approach that of Internet users in general.
How big would even a pruned node grow if we had a billion-plus accounts? Proportional growth from 35 GB at 13.4M accounts would give a 2.6 TB node size.
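A quick back-of-the-envelope check, assuming node size scales linearly with the number of accounts:

echo 'scale=1; 35 * 1000 / 13.4' | bc
# 35 GB at 13.4M accounts, scaled to 1,000M accounts: 2611.9 GB, i.e. ~2.6 TB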
Not trying to invoke FUD, just asking?
Hey, thanks for reading it! Without running the numbers, I want to say: it's not impossible. I am just highlighting that we are far away from this and this is not happening soon.
There are a lot of proposals for scalability; I don't really have an overview, and I don't currently feel technically qualified to discuss them. But regarding the state size, or let's say state bloat, you might want to read about state-cleaning or dust-cleaning. There are proposals to simply purge entries from the state that are provably non-recoverable accounts (i.e., with a balance smaller than the lowest possible transaction fee).
The good thing is that, in that regard, we still have time to discuss proposals and eventually implement them.
Great article!
But I'm still confused by a few things.
What is the point of the archive node?
What happens when the archive node surpasses 100 TB, which it is projected to do by early 2020?
If the fast full node is about 10% of the size of an archive full node, again, what happens when, say by 2020, the archive node is over 100 TB?
And if the archive node is to some degree necessary to solidify the network, how does it affect centralization when, say in 5 years, the archive node is over 5,000-10,000 TB?
Thanks in advance
Just finished syncing a full node with parity on archive (ID 02 in your table) and I can confirm the current size of parity's db folder is 954GB with 14,751 items.
Manuela Lisa
Hello Rando,
Thank you very much for the information. I am checking your table data for Full / No-Warp --no-warp (05). I'm installing Parity on a DigitalOcean Droplet: CentOS, 2 vCPU, 2 GB RAM, 60 GB storage.
My installation of Parity on CentOS 7 went like this:
1.- Docker search:
docker search parity/parity
2.- We are looking for the latest version of Parity:
curl -sS 'registry.hub.docker.com/v2/reposit...' | jq '."results"[]["name"]' | sort
-bash: jq: command not found
We are missing a package; install jq with yum:
yum install jq
curl -sS 'registry.hub.docker.com/v2/reposit...' | jq '."results"[]["name"]' | sort
Now it works! We see the latest versions:
[root@centos-s-2vcpu-2gb-lon1-01 ~]# curl -sS 'registry.hub.docker.com/v2/reposit...' | jq '."results"[]["name"]' | sort
"beta"
"beta-release"
"nightly"
"stable-release"
"v1.10.0-ci5"
"v1.10.0-ci6"
"v1.8.10"
"v1.8.11"
"v1.9.3"
"v1.9.4"
3.- We pull the Parity Docker image (latest version):
docker pull parity/parity:v1.9.4
Done, it's pulled:
[root@centos-s-2vcpu-2gb-lon1-01 ~]# docker pull parity/parity:v1.9.4
Trying to pull repository docker.io/parity/parity ...
v1.9.4: Pulling from docker.io/parity/parity
c954d15f947c: Pull complete
c3688624ef2b: Pull complete
848fe4263b3b: Pull complete
23b4459d3b04: Pull complete
36ab3b56c8f1: Pull complete
ecd224a1ca24: Pull complete
c6053fbd9bf9: Pull complete
52846da88991: Pull complete
Digest: sha256:a1b992c63edabd240cc5d77f914651b280e4dc5df55f7ec1c5ff07065ed4827a
4.- OK, before letting it run, we map the ports (8180 UI, 8545 JSON-RPC, 8546 WebSockets, 30303 TCP/UDP p2p):
docker run -ti -p 8180:8180 -p 8545:8545 -p 8546:8546 -p 30303:30303 -p 30303:30303/udp parity/parity:v1.9.4 --ui-interface all --jsonrpc-interface all
[root@centos-s-2vcpu-2gb-lon1-01 ~]# docker run -ti -p 8180:8180 -p 8545:8545 -p 8546:8546 -p 30303:30303 -p 30303:30303/udp parity/parity:v1.9.4 --ui-interface all --jsonrpc-interface all
2018-03-09 05:15:26 UTC Starting Parity/v1.9.4-beta-6f21a32-20180228/x86_64-linux-gnu/rustc1.24.0
2018-03-09 05:15:26 UTC Keys path /root/.local/share/io.parity.ethereum/keys/Foundation
2018-03-09 05:15:26 UTC DB path /root/.local/share/io.parity.ethereum/chains/ethereum/db/906a34e69aec8c0d
2018-03-09 05:15:26 UTC Path to dapps /root/.local/share/io.parity.ethereum/dapps
2018-03-09 05:15:26 UTC State DB configuration: fast
…
According to the table you indicate, I am in cell 5, Fast --no-warp. I've been synchronizing for about 2 days, and according to the table there should be little left:
[root@centos-s-2vcpu-2gb-lon1-01 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 62903276 27952700 34950576 45% /
devtmpfs 920968 0 920968 0% /dev
tmpfs 941688 0 941688 0% /dev/shm
tmpfs 941688 98812 842876 11% /run
tmpfs 941688 0 941688 0% /sys/fs/cgroup
tmpfs 188340 0 188340 0% /run/user/0
If it manages to synchronize, I will let you know how much of the disk is used, so that you can update or double-check the data.
Thank you very much!
Now the only thing I'm looking for is how to create the new account with Docker and the parity signer new-token command (it does not work so far; I think it should be something like docker run parity/parity:v1.9.4 signer new-token or similar, but I don't have the exact invocation yet).
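In case anyone else is stuck here, a sketch of what I will try next (my assumption: the token must be generated against the same data directory the syncing node uses, so it likely has to run inside the already-running container; I have not verified this):

docker exec -ti <container-id> parity signer new-token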
Regards,
Boris Durán
I want to develop a server that deploys smart contracts through web3j. Can I ask you: is my server considered to be a node of the Ethereum decentralized system? And which parameters does the server require?
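For example, my server would only talk to a node's JSON-RPC endpoint, something like this (hypothetical local endpoint):

curl -X POST -H 'Content-Type: application/json' --data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}' http://localhost:8545

Would that make the server itself a node, or just a client of one?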
Hi Rando,
Here is my data for you to update or contrast with the table, on a DigitalOcean Droplet with 2 GB RAM, 60 GB storage, 2 vCPUs (approx. 48 hours synchronizing; IP in London).
12.03.2018 Parity Ethereum synchronized node (Fast --no-warp (cell 5)):
cd /root/.local/share/io.parity.ethereum/docker
[root@centos-s-2vcpu-2gb-lon1-01 docker]# du -sh chains
43G chains
[root@centos-s-2vcpu-2gb-lon1-01 docker]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 62903276 47366608 15536668 76% /
devtmpfs 920968 0 920968 0% /dev
tmpfs 941688 0 941688 0% /dev/shm
tmpfs 941688 102368 839320 11% /run
tmpfs 941688 0 941688 0% /sys/fs/cgroup
tmpfs 188340 0 188340 0% /run/user/0
[root@centos-s-2vcpu-2gb-lon1-01 io.parity.ethereum]# du -sh docker
44G docker
Regards,
Boris Durán
If by 'anytime soon' you mean, one quarter later...
When I restart the REST server, I face the following error:
Discovering types from business network definition ...
Connection fails: Error: Error trying to ping. Error: make sure the chaincode landregistry has been successfully instantiated and try again: getccdata composerchannel/landregistry responded with error: could not find chaincode with name 'landregistry'
It will be retried for the next request.
Exception: Error: Error trying to ping. Error: make sure the chaincode landregistry has been successfully instantiated and try again: getccdata composerchannel/landregistry responded with error: could not find chaincode with name 'landregistry'
Error: Error trying to ping. Error: make sure the chaincode landregistry has been successfully instantiated and try again: getccdata composerchannel/landregistry responded with error: could not find chaincode with name 'landregistry'
at _checkRuntimeVersions.then.catch (/usr/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:806:34)
Hello, I'd like to use the trace_replayTransaction API with a Parity client node. The CLI param I currently use is --pruning=archive, but the synced data is too large, more than 1.0 TB. So I want to switch to --tracing on --fat-db on and redo the sync.
The question is whether I can still use the trace_replayTransaction API with --tracing on --fat-db on?
If not, could you please tell me the best alternative params? Thank you!
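In other words, would an invocation roughly like this one (flags as listed by parity --help) still support trace_replayTransaction:

parity --tracing on --fat-db on --pruning fast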
Hey, I'm about to sync with "--tracing on" on a 40 GB SSD.
Did you figure out if it's enough for "trace_replayTransaction"?
This is one of the most linked articles on dev.to, with a total of 176 referring domains. And like Jason above, I understood about 10%.
It did not. Ha-ha! :)
Hi Afri, can you give us an update here and tell us what you meant by saying "We are running at capacity" on Twitter?
This did not age well
ETH blockchain is more than 1TB now :P
It's ~70GB now. I recommend reading the article beyond the headline.