Need a mechanism to compress data to reduce storage costs #126
I also think it's quite necessary to act on this.
Thanks for the feedback and for opening this issue. Yes, currently it's not possible to prune the node, which would decrease the required storage space. Leaving this open and logging this feature request as I'm sure others would also like to have this option should it become available in the future.
You can run the op-geth with [...]. We don't yet have a snapshot available for full nodes, so it will take a while to sync. Will update this issue if and when we have full snapshots.
@mdehoog I see. Thank you!
@mdehoog, are there any updates here?
@wangtao19911111 Hello, full node snapshots are coming very soon.
What about cloud or AI computation assistance?
Any update? Maintaining a full node these days is no longer an easy task.
@laptrinhbockchain @MindlessSteel @wangtao19911111 Hi there! Just wanted to update that Base Node now supports snap sync by default! Simply: [...]
Whereas the current archive chain data is about 2.5TB, you'll be able to snap sync for less than 250GB. If you all need any help or support, please let us know!
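For reference, here is a rough sketch of what a pruned, snap-synced configuration looks like with a plain op-geth binary. This is not the literal set of steps from the comment above; the Base node repo normally wraps these flags in its docker-compose and .env files, so the exact knobs there may differ, and the data directory below is only a placeholder.

```sh
# Rough sketch only, assuming a bare op-geth binary rather than the Base
# docker-compose setup. The flags are standard go-ethereum flags:
#   --syncmode snap : fetch recent state via snap sync instead of replaying
#                     the whole chain from genesis
#   --gcmode full   : keep a pruned (non-archive) state database
#   /data/op-geth   : hypothetical data directory
geth --datadir /data/op-geth --syncmode snap --gcmode full
# The rollup / engine-API flags needed to actually follow Base are omitted here.
```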
@wbnns Hi there. I had a fully synced full node but almost ran out of disk space, so I've just followed your instructions above, set the sync mode to snap, and rebuilt the Docker container. How long should I wait for the geth data dir usage to drop from 4TB to 0.25TB? I assume it happens automatically?
Fastest way would be to just delete the old DB and start a fresh snap-sync. I'm not sure it will go back and prune the old data automatically.
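Following up on that suggestion, a hedged sketch of deleting the old database and re-syncing from scratch. The volume and data-directory names are hypothetical and depend on your compose file, so confirm them before deleting anything.

```sh
# Hedged sketch of "delete the old DB and start a fresh snap sync" for a
# docker-compose based node. Volume names are hypothetical; confirm with
# `docker volume ls` before removing anything.
docker compose down                    # stop the node
docker volume rm base-node_geth-data   # hypothetical volume holding the old chain data
docker compose up -d                   # restart; geth snap-syncs into a fresh datadir

# For a bare-metal setup, go-ethereum's `geth removedb` does the same thing
# (it prompts before deleting the chain and state databases):
geth removedb --datadir /data/op-geth  # hypothetical data directory
```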
@wbnns Why does the main GitHub page say [...]?
Heya, this is factoring in room for the archive chain data to grow, plus unallocated space in the environment (in case it needs to be utilized).
Currently the Base node needs about an additional 100 GB of storage every day. With data growing this fast, it creates huge obstacles in terms of infrastructure costs.
I think a mechanism to compress data is needed to reduce storage costs. This would help more nodes join the network and increase the decentralization of the project.
Currently, I don't know if such a mechanism exists. But is there any way, like node configuration or some other solution, to minimize this capacity? Such as: [...]
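Not part of the original issue, but one existing mechanism in this direction is upstream go-ethereum's offline state-pruning command, which shrinks a full node's state database in place. op-geth inherits the subcommand, though whether it is recommended for Base nodes is something the maintainers would need to confirm; the node must be stopped first, and the path below is a placeholder.

```sh
# Offline state pruning from upstream go-ethereum. The node must be stopped
# first, and the datadir path below is hypothetical. Check with the Base
# maintainers before relying on this for op-geth.
geth snapshot prune-state --datadir /data/op-geth
```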