Backend dev docs (#26)
* fix: merge conflicts

* fix: merge conflicts

* feat: add backend dev docs

* fix: failing test

* fix: failing test
dPys authored Dec 27, 2024
1 parent 7d60b54 commit 0e6f850
Showing 5 changed files with 172 additions and 74 deletions.
105 changes: 105 additions & 0 deletions doc/usage.md
@@ -114,3 +114,108 @@ docker-compose -f docker/docker-compose.cpu.yaml run --rm nxbench --config 'nxbe
```bash
docker-compose -f docker/docker-compose.cpu.yaml run --rm nxbench --config 'nxbench/configs/example.yaml' benchmark export 'nxbench_results/9e3e8baa4a3443c392dc8fee00373b11_20241220002902.json' --output-format csv --output-file 'nxbench_results/results.csv'
```

## Adding a New Backend

> **Note:** The following guide assumes you have a recent version of NxBench with the new `BackendManager` and associated tools (e.g., [`core.py`](../nxbench/backends/core.py) and [`registry.py`](../nxbench/backends/registry.py)) already in place. It also assumes that your backend follows the [guidelines for developing custom NetworkX backends](https://networkx.org/documentation/stable/reference/backends.html#docs-for-backend-developers).

### 1. Verify Your Backend is Installable

1. **Install** your backend via `pip` (or conda, etc.).
   For example, if your backend library is `my_cool_backend`, make sure it installs cleanly:

```bash
pip install my_cool_backend
```

2. **Check import**: NxBench detects backends by calling `importlib.util.find_spec("my_cool_backend")`, so if Python cannot find your library, NxBench will conclude it is unavailable.
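
   This detection can be reproduced directly with the standard library (`my_cool_backend` is a placeholder name here):

   ```python
   import importlib.util

   def backend_available(import_name: str) -> bool:
       """Mirror NxBench's availability check: a backend counts as installed
       only if Python can locate a module with that import name."""
       return importlib.util.find_spec(import_name) is not None

   print(backend_available("json"))             # stdlib module: True
   print(backend_available("my_cool_backend"))  # False unless actually installed
   ```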

### 2. Write a Conversion Function

In NxBench, a “backend” is simply a library or extension that **converts a `networkx.Graph` into an alternate representation**. You must define one or more **conversion** functions:

```python
def convert_my_cool_backend(nx_graph: networkx.Graph, num_threads: int):
    import my_cool_backend

    # Possibly configure multi-threading if relevant:
    # my_cool_backend.configure_threads(num_threads)

    # Convert the nx graph to your library's internal representation:
    return my_cool_backend.from_networkx(nx_graph)
```

### 3. (Optional) Write a Teardown Function

If your backend has special cleanup needs (e.g., free GPU memory, close connections, revert global state, etc.), define a teardown function:

```python
def teardown_my_cool_backend():
    import my_cool_backend
    # e.g. my_cool_backend.shutdown()
    pass
```

If your backend doesn’t need cleanup, skip this or simply define an empty function.

### 4. Register with NxBench

Locate NxBench’s [registry.py](../nxbench/backends/registry.py) (or a similar file where other backends are registered). Add your calls to `backend_manager.register_backend(...)`:

```python
from nxbench.backends.registry import backend_manager
import networkx as nx  # only if needed

def convert_my_cool_backend(nx_graph: nx.Graph, num_threads: int):
    import my_cool_backend
    # Possibly configure my_cool_backend with num_threads
    return my_cool_backend.from_networkx(nx_graph)

def teardown_my_cool_backend():
    # e.g. release resources
    pass

backend_manager.register_backend(
    name="my_cool_backend",  # the name NxBench will use to refer to it
    import_name="my_cool_backend",  # the importable Python module name
    conversion_func=convert_my_cool_backend,
    teardown_func=teardown_my_cool_backend,  # optional
)
```
```

**Important**:

- `name` is the “human-readable” alias in NxBench.
- `import_name` is the actual module import path. They can be the same (most common) or different if your library’s PyPI name differs from its Python import path.
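
The registration mechanics can be illustrated with a minimal stand-in for the real `BackendManager` (a sketch only; the actual class in `nxbench/backends/core.py` carries more responsibilities):

```python
import importlib.util

class MiniBackendManager:
    """Toy registry mapping a human-readable alias to import info and hooks."""

    def __init__(self):
        self._registry = {}

    def register_backend(self, name, import_name, conversion_func, teardown_func=None):
        # `name` is the alias NxBench reports; `import_name` is what Python imports.
        self._registry[name] = {
            "import_name": import_name,
            "convert": conversion_func,
            "teardown": teardown_func,
        }

    def is_available(self, name):
        return importlib.util.find_spec(self._registry[name]["import_name"]) is not None

manager = MiniBackendManager()
# Example of `name` differing from `import_name`: the PyPI project
# "graphblas-algorithms" is imported as "graphblas_algorithms".
manager.register_backend(
    name="graphblas",
    import_name="graphblas_algorithms",
    conversion_func=lambda nx_graph, num_threads: nx_graph,
)
```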

### 5. Confirm It Works

1. **Check NxBench logs**: When NxBench runs, it will detect whether `"my_cool_backend"` is installed by calling `importlib.util.find_spec("my_cool_backend")`.
2. **Run a quick benchmark**:

```bash
nxbench --config my_config.yaml benchmark run
```

If you see logs like “Chosen backends: [‘my_cool_backend’ …]”, NxBench recognized your backend. If it fails with “No valid backends found,” make sure your library is installed and its name is spelled correctly.

### 6. (Optional) Version Pinning

If you want NxBench to only run your backend if it matches a pinned version (e.g. `my_cool_backend==2.1.0`), add something like this to your NxBench config YAML:

```yaml
environ:
  backend:
    my_cool_backend:
      - "my_cool_backend==2.1.0"
```

NxBench will:

- Detect the installed version automatically (via `my_cool_backend.__version__` or PyPI metadata)
- Skip running your backend if the installed version doesn’t match `2.1.0`.
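
The version gate can be approximated with `importlib.metadata` (a sketch; the exact comparison NxBench performs may differ):

```python
from importlib import metadata

def matches_pin(dist_name: str, pinned_version: str) -> bool:
    """True iff `dist_name` is installed at exactly `pinned_version` (an `==` pin)."""
    try:
        installed = metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        # Not installed at all: treat the pin as unsatisfied.
        return False
    return installed == pinned_version
```

A backend that is missing, or installed at any other version, would then be skipped rather than benchmarked.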

---

### That’s it

You’ve successfully added a new backend to NxBench! Now, NxBench can detect it, convert graphs for it, optionally tear it down, and track its version during benchmarking.
13 changes: 13 additions & 0 deletions nxbench/backends/registry.py
@@ -31,6 +31,19 @@ def teardown_networkx():
 # ---- Nx-Parallel backend ----
 def convert_parallel(original_graph: nx.Graph, num_threads: int):
     nxp = import_module("nx_parallel")
+    from multiprocessing import cpu_count
+
+    total_cores = cpu_count()
+
+    n_jobs = min(num_threads, total_cores)
+
+    nx.config.backends.parallel.active = True
+    nx.config.backends.parallel.n_jobs = n_jobs
+    nx.config.backends.parallel.backend = "loky"
+    if hasattr(nx.config.backends.parallel, "inner_max_num_threads"):
+        nx.config.backends.parallel.inner_max_num_threads = max(
+            total_cores // n_jobs, 1
+        )
 
     return nxp.ParallelGraph(original_graph)
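
The core-allocation arithmetic above can be checked in isolation (a plain-Python sketch; `allocate_threads` is a name introduced here for illustration):

```python
def allocate_threads(num_threads: int, total_cores: int) -> tuple[int, int]:
    """Mirror convert_parallel: cap the worker count at the physical core
    count, then give each worker an equal slice of cores for inner threads."""
    n_jobs = min(num_threads, total_cores)
    inner_max_num_threads = max(total_cores // n_jobs, 1)
    return n_jobs, inner_max_num_threads

print(allocate_threads(4, 8))   # (4, 2): 4 workers, 2 inner threads each
print(allocate_threads(16, 8))  # (8, 1): requests beyond the core count are capped
```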

8 changes: 6 additions & 2 deletions nxbench/benchmarking/benchmark.py
@@ -36,10 +36,12 @@
     "PREFECT_API_DATABASE_CONNECTION_URL",
     "postgresql+asyncpg://prefect_user:pass@localhost:5432/prefect_db",
 )
-os.environ.setdefault("PREFECT_ORION_DATABASE_CONNECTION_POOL_SIZE", "5")
-os.environ.setdefault("PREFECT_ORION_DATABASE_CONNECTION_MAX_OVERFLOW", "10")
+os.environ.setdefault("PREFECT_ORION_DATABASE_CONNECTION_POOL_SIZE", "10")
+os.environ.setdefault("PREFECT_ORION_DATABASE_CONNECTION_MAX_OVERFLOW", "20")
 os.environ.setdefault("PREFECT_API_URL", "http://127.0.0.1:4200/api")
 os.environ.setdefault("PREFECT_ORION_API_ENABLE_TASK_RUN_DATA_PERSISTENCE", "false")
+os.environ.setdefault("PREFECT_CLIENT_REQUEST_TIMEOUT", "60")
+os.environ.setdefault("PREFECT_HTTPX_SETTINGS", '{"limits": {"max_connections": 50}}')
 os.environ.setdefault("MAX_WORKERS", "4")
 
 run_uuid = uuid.uuid4().hex
@@ -108,6 +110,8 @@ def run_algorithm(
         "OMP_NUM_THREADS",
         "MKL_NUM_THREADS",
         "OPENBLAS_NUM_THREADS",
+        "NUMEXPR_NUM_THREADS",
+        "VECLIB_MAXIMUM_THREADS",
     ]
     for var_name in vars_to_set:
         original_env[var_name] = os.environ.get(var_name)
5 changes: 4 additions & 1 deletion nxbench/benchmarking/tests/test_benchmark.py
@@ -198,6 +198,7 @@ def test_configure_backend_success(backend, example_graph):
         assert result is example_graph
 
     elif backend == "parallel":
+        nx_parallel = pytest.importorskip("nx_parallel")
         mock_module = MagicMock()
         mock_module.ParallelGraph.return_value = "parallel_graph"
@@ -209,6 +210,7 @@ def test_configure_backend_success(backend, example_graph):
         assert result_p == "parallel_graph"
 
     elif backend == "cugraph":
+        cugraph = pytest.importorskip("nx_cugraph")
         mock_module = MagicMock()
         mock_module.from_networkx.return_value = "cugraph_graph"
@@ -219,7 +221,8 @@ def test_configure_backend_success(backend, example_graph):
         result_cu = configure_backend.fn(example_graph, "cugraph", 2)
         assert result_cu == "cugraph_graph"
 
-    else:  # "graphblas"
+    else:
+        graphblas = pytest.importorskip("graphblas_algorithms")
         mock_module = MagicMock()
         mock_ga = MagicMock()
         mock_ga.Graph.from_networkx.return_value = "graphblas_graph"
115 changes: 44 additions & 71 deletions nxbench/configs/example2.yaml
@@ -16,27 +16,21 @@ algorithms:
     groups: ["centrality", "path_based"]
     validate_result: "nxbench.validation.validate_node_scores"
 
-  # - name: "betweenness_centrality"
-  #   func: "networkx.betweenness_centrality"
-  #   params:
-  #     normalized: true
-  #     endpoints: false
-  #   requires_directed: false
-  #   groups: ["centrality", "path_based"]
-  #   min_rounds: 5
-  #   warmup: true
-  #   warmup_iterations: 20
-  #   validate_result: "nxbench.validation.validate_node_scores"
+  - name: "betweenness_centrality"
+    func: "networkx.betweenness_centrality"
+    params:
+      normalized: true
+      endpoints: false
+    requires_directed: false
+    groups: ["centrality", "path_based"]
+    validate_result: "nxbench.validation.validate_node_scores"

# - name: "edge_betweenness_centrality"
# func: "networkx.edge_betweenness_centrality"
# params:
# normalized: true
# requires_directed: false
# groups: ["centrality", "path_based"]
# min_rounds: 5
# warmup: true
# warmup_iterations: 20
# validate_result: "nxbench.validation.validate_edge_scores"

# - name: "approximate_all_pairs_node_connectivity"
@@ -93,12 +87,12 @@ algorithms:
# requires_directed: false
# groups: ["paths", "all_pairs"]

# - name: "all_pairs_shortest_path_length"
# func: "networkx.all_pairs_shortest_path_length"
# params: {}
# requires_directed: false
# groups: ["paths", "distance"]
# validate_result: "nxbench.validation.validate_scalar_result"
- name: "all_pairs_shortest_path_length"
func: "networkx.all_pairs_shortest_path_length"
params: {}
requires_directed: false
groups: ["paths", "distance"]
validate_result: "nxbench.validation.validate_scalar_result"

# - name: "all_pairs_shortest_path"
# func: "networkx.all_pairs_shortest_path"
@@ -113,21 +107,21 @@
# requires_directed: false
# groups: ["paths", "weighted"]

- name: "all_pairs_dijkstra_path_length"
func: "networkx.all_pairs_dijkstra_path_length"
params:
weight: "weight"
requires_directed: false
groups: ["paths", "weighted"]
validate_result: "nxbench.validation.validate_scalar_result"
# - name: "all_pairs_dijkstra_path_length"
# func: "networkx.all_pairs_dijkstra_path_length"
# params:
# weight: "weight"
# requires_directed: false
# groups: ["paths", "weighted"]
# validate_result: "nxbench.validation.validate_scalar_result"

- name: "all_pairs_bellman_ford_path_length"
func: "networkx.all_pairs_bellman_ford_path_length"
params:
weight: "weight"
requires_directed: false
groups: ["paths", "weighted"]
validate_result: "nxbench.validation.validate_scalar_result"
# - name: "all_pairs_bellman_ford_path_length"
# func: "networkx.all_pairs_bellman_ford_path_length"
# params:
# weight: "weight"
# requires_directed: false
# groups: ["paths", "weighted"]
# validate_result: "nxbench.validation.validate_scalar_result"

# - name: "johnson"
# func: "networkx.johnson"
@@ -169,58 +163,37 @@ datasets:
       directed: false
       weighted: false

- name: "erdos_renyi_small"
- name: "watts_strogatz_small"
source: "generator"
params:
generator: "networkx.erdos_renyi_graph"
generator: "networkx.watts_strogatz_graph"
n: 1000
p: 0.01
k: 6
p: 0.1
metadata:
directed: true
directed: false
weighted: false

- name: "watts_strogatz_small"
- name: "barabasi_albert_small"
source: "generator"
params:
generator: "networkx.watts_strogatz_graph"
generator: "networkx.barabasi_albert_graph"
n: 1000
k: 6
p: 0.1
m: 3
metadata:
directed: false
weighted: false

- name: "watts_strogatz_small"
- name: "powerlaw_cluster_small"
source: "generator"
params:
generator: "networkx.watts_strogatz_graph"
generator: "networkx.powerlaw_cluster_graph"
n: 1000
k: 6
m: 2
p: 0.1
metadata:
directed: false
weighted: true

# - name: "barabasi_albert_small"
# source: "generator"
# params:
# generator: "networkx.barabasi_albert_graph"
# n: 1000
# m: 3
# metadata:
# directed: false
# weighted: false

# - name: "powerlaw_cluster_small"
# source: "generator"
# params:
# generator: "networkx.powerlaw_cluster_graph"
# n: 1000
# m: 2
# p: 0.1
# metadata:
# directed: false
# weighted: false
weighted: false

# - name: "erdos_renyi_large"
# source: "generator"
@@ -274,13 +247,13 @@ environ:
   backend:
     networkx:
-      - "networkx==3.4.1"
+      - "networkx==3.4.2"
-    graphblas:
-      - "graphblas_algorithms==2023.10.0"
+    # graphblas:
+    #   - "graphblas_algorithms==2023.10.0"
     parallel:
       - "nx_parallel==0.3rc0.dev0"
   num_threads:
     - "1"
     - "4"
     - "8"
   pythons:
     - "3.10"
     - "3.11"
