
docs(readme.md): shows usage with a single binary
pedro bufulin committed Mar 28, 2024
1 parent 9d87279 commit 3c20bc4
Showing 3 changed files with 27 additions and 12 deletions.
17 changes: 9 additions & 8 deletions Readme.md
@@ -27,7 +27,7 @@ Here are some examples of how to use the commands:
1. To validate flat files in a folder, a start epoch and an end epoch must be provided. The `-d` flag can be used for debugging or log information.

```
-cargo run --bin flat_head -- -d era-validate --dir ~/firehose/sf-data/storage/merged-blocks/ --start-epoch 0
+cargo run --bin flat_head -- era-validate --store-url file:///<full-path-to-folder> -s 0
```

Flat files should come compressed with Zstandard (zstd) from Firehose. Flat_head handles decompression by default, but if you need to disable it, pass `-c false`. This is the same for all other binaries.
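For example, a hypothetical invocation with decompression disabled (same `era-validate` subcommand as above; the placement of `-c false` is assumed, not taken from the repo):

```
cargo run --bin flat_head -- era-validate --store-url file:///<full-path-to-folder> -s 0 -c false
```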
@@ -37,31 +37,32 @@ Passing `--end-epoch` is not necessary, although without it, `flat_head` will on…
2. To fetch flat files from a WebDAV server and validate each file as it arrives:

```
-cargo run --bin fetch-webdav -- --url <server-url> -s 0 -e 1
+cargo run --bin flat_head -- era-validate --store-url http:///<full-path-to-folder> -s 0
```

This command will skip files that were already verified and recorded in `lockfile.json`. It stops abruptly if verification of any file fails. If files are compressed as `.zst`, it can also decompress them.
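For instance, re-running over an explicit range is safe, since epochs already recorded in `lockfile.json` are skipped (the `-e` short form of `--end-epoch` is assumed here, carried over from the older binaries):

```
cargo run --bin flat_head -- era-validate --store-url http:///<full-path-to-folder> -s 0 -e 1
```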

3. To fetch flat files from a gcloud bucket and validate each epoch as it arrives:

```
-cargo run --bin fetch-gcloud --bucket --fist-epoch 0 --end-epoch 1
+cargo run --bin flat_head -- era-validate --store-url gs:///<full-path-to-folder> -s 0
```

4. To fetch flat files from an S3 bucket and validate each epoch as it arrives:

```
-❯ cargo run --bin fetch-s3 -- -s 0 -e 2 --endpoint http://localhost:9000
+cargo run --bin flat_head -- era-validate --store-url s3:///<full-path-to-folder> -s 0
```

`era-validate` will skip files that were already verified and recorded in `lockfile.json`. It stops abruptly if verification of any file fails. If files are compressed as `.zst`, it can also decompress them.

An optional endpoint can be provided when running in a local environment or against another S3-compatible API.
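A sketch of running against a local S3-compatible server (assuming the `--endpoint` flag carries over from the old `fetch-s3` binary):

```
cargo run --bin flat_head -- era-validate --store-url s3:///<full-path-to-folder> -s 0 --endpoint http://localhost:9000
```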

AWS environment variables have to be set for S3 in this scenario. An example is provided in `.env.example`.
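A sketch of what such a file plausibly holds (these are the standard AWS variable names, not copied from `.env.example`):

```
# Assumed contents -- standard AWS variables, not copied from the repo
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_REGION=us-east-1
AWS_ENDPOINT=http://localhost:9000
```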



## Goals

Our goal is to provide The Graph's Indexers with the tools to trustlessly share flat files with cryptographic guarantees…
4 changes: 2 additions & 2 deletions src/era_verifier.rs
@@ -19,7 +19,7 @@ pub async fn verify_eras(

```
 ) -> Result<Vec<usize>, anyhow::Error> {
     let mut validated_epochs = Vec::new();
     for epoch in start_epoch..=end_epoch.unwrap_or(start_epoch + 1) {
-        let blocks = get_blocks_from_dir(epoch, store_url, decompress).await?;
+        let blocks = get_blocks_from_store(epoch, store_url, decompress).await?;
         let (successful_headers, _): (Vec<_>, Vec<_>) = blocks
             .iter()
             .cloned()
```

@@ -48,7 +48,7 @@

```
     Ok(validated_epochs)
 }

-async fn get_blocks_from_dir(
+async fn get_blocks_from_store(
     epoch: usize,
     store_url: &String,
     decompress: Option<bool>,
```
18 changes: 16 additions & 2 deletions src/store.rs
@@ -2,7 +2,8 @@ use anyhow::Context;

```
 use bytes::Bytes;
 use decoder::handle_buf;
 use object_store::{
-    gcp::GoogleCloudStorageBuilder, local::LocalFileSystem, path::Path, ObjectStore,
+    aws::AmazonS3Builder, gcp::GoogleCloudStorageBuilder, local::LocalFileSystem, path::Path,
+    ObjectStore,
 };
 use std::sync::Arc;
 use thiserror::Error;
```
@@ -26,7 +27,20 @@ pub fn new<S: AsRef<str>>(store_url: S) -> Result<Store, anyhow::Error> {

```
     match url.scheme() {
         "s3" => {
-            unimplemented!("s3://... support not implemented yet")
+            let bucket = url.host_str().ok_or_else(|| anyhow::anyhow!("No bucket"))?;
+            let path = url.path();
+
+            let store = AmazonS3Builder::new()
+                .with_bucket_name(bucket.to_string())
+                .build()?;
+
+            Ok(Store {
+                store: Arc::new(store),
+                base: match path.starts_with("/") {
+                    false => path.to_string(),
+                    true => path[1..].to_string(),
+                },
+            })
         }
         "gs" => {
             let bucket = url.host_str().ok_or_else(|| anyhow::anyhow!("No bucket"))?;
```
