The paper and project look very interesting to me, but a few points confused me; I've listed my questions below.
Questions
1. In the example/spmm folder, the Python code evaluates the kernel for unweighted SpMM, which is used in GCN (the corresponding DGL kernel is `dgl.ops.copy_u_sum(g, x)`). Is there any code to test the weighted SpMM used in the GAT case? For example, DGL provides weighted SpMM as `update_all(fn.u_mul_e('ft', 'a', 'm'), fn.sum('m', 'o'))`. Does SparseTIR provide a similar kernel, and how can we compare their performance? (The two DGL baselines are sketched side by side after this list.)
2. Is there any code in this repo that can run GAT end-to-end directly?
3. For GCN, the paper says it was integrated into a framework for end-to-end training. Could you provide more information about this framework? For example, which framework is used, DGL or PyG?
4. The paper says that format decomposition is applied to SpMM only. Could we also apply it to SDDMM and evaluate the resulting kernel running time?
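For concreteness, here is a minimal sketch of the two DGL baselines I mean in Q1, assuming a recent DGL version; the random-graph setup is only for illustration, and `dgl.ops.u_mul_e_sum` is DGL's fused equivalent of the `update_all` form (not something from this repo):

```python
import dgl
import dgl.function as fn
import torch

# Small random graph with node features and one scalar weight per edge.
g = dgl.rand_graph(1000, 10000)
x = torch.randn(g.num_nodes(), 64)   # node features
w = torch.randn(g.num_edges(), 1)    # per-edge weights (e.g. attention scores)

# Unweighted SpMM (GCN-style aggregation): y[v] = sum_{u in N(v)} x[u]
y_unweighted = dgl.ops.copy_u_sum(g, x)

# Weighted SpMM (GAT-style aggregation) via message passing:
g.srcdata['ft'] = x
g.edata['a'] = w
g.update_all(fn.u_mul_e('ft', 'a', 'm'), fn.sum('m', 'o'))
y_weighted = g.dstdata['o']

# The same computation as a single fused op from dgl.ops:
y_weighted_ops = dgl.ops.u_mul_e_sum(g, x, w)
assert torch.allclose(y_weighted, y_weighted_ops, atol=1e-5)
```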
Looking forward to your response. Thank you.
For Q3: the end-to-end evaluations are available at https://github.com/uwsampl/sparsetir-artifact.
Regarding Q1 and Q2: yes, the same technique also applies to weighted SpMM, and SparseTIR can be used for GAT if you compose the weighted SpMM and SDDMM kernels it generates. However, I don't have the bandwidth to implement that at the moment.
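To make the decomposition concrete, here is a sketch of one single-head GAT layer split into an SDDMM step and a weighted-SpMM step, written with DGL's built-in ops as stand-ins; the two graph ops marked below are exactly where SparseTIR-generated SDDMM and weighted-SpMM kernels would be swapped in. `gat_layer` and its parameter names are illustrative, not an API from this repo:

```python
import dgl
import torch
from dgl.nn.functional import edge_softmax

def gat_layer(g, x, W, attn_l, attn_r):
    """One single-head GAT layer decomposed into SDDMM + weighted SpMM.

    The two graph ops below (u_add_v and u_mul_e_sum) are the points
    where SparseTIR-generated kernels would replace the DGL built-ins.
    """
    h = x @ W                                    # dense projection
    el = (h * attn_l).sum(dim=-1, keepdim=True)  # per-source logits
    er = (h * attn_r).sum(dim=-1, keepdim=True)  # per-destination logits

    # SDDMM: one attention logit per edge, e_uv = el[u] + er[v]
    e = torch.nn.functional.leaky_relu(dgl.ops.u_add_v(g, el, er))
    a = edge_softmax(g, e)                       # normalize over incoming edges

    # Weighted SpMM: aggregate projected features scaled by attention
    return dgl.ops.u_mul_e_sum(g, h, a)
```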
Regarding Q4: yes, composable formats should also apply to SDDMM.