Benoit/eng-362-update-ndc-postgres-to-ndc_models-020 #666
base: main
Conversation
Note: failing tests. We expect failing tests related to the deprecation of the root column comparison.
These will be fixed in a separate PR, to be merged into this one before merging to main.
This has now been merged.
```rust
        },
    )
})
.collect(),
)
}

/// Infer scalar type representation from scalar type name, if necessary. Defaults to JSON representation
fn convert_or_infer_type_representation(
```
V0.2.0 requires type representation.
Type representation comes from introspection configuration, and may be absent.
So, if type representation is missing, we infer the type based on the name and fetch the corresponding type representation from the default introspection configuration.
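The inference described above can be sketched as a simple name-to-representation mapping with a JSON fallback. This is an illustrative standalone sketch, not the actual ndc-postgres code: the `TypeRepresentation` variants and function name here are assumptions.

```rust
// Hypothetical sketch: infer a type representation from a scalar type
// name when the configuration does not provide one. Variant names are
// illustrative, not the actual ndc-postgres / ndc_models types.

#[derive(Debug, PartialEq)]
enum TypeRepresentation {
    Int32,
    Int64AsString,
    Float64,
    Boolean,
    String,
    Json,
}

/// Map well-known Postgres scalar names to a representation;
/// anything unrecognized falls back to JSON.
fn infer_type_representation(scalar_type_name: &str) -> TypeRepresentation {
    match scalar_type_name {
        "int4" => TypeRepresentation::Int32,
        // int8 values exceed f64 precision, so they travel as strings
        "int8" => TypeRepresentation::Int64AsString,
        "float8" => TypeRepresentation::Float64,
        "bool" => TypeRepresentation::Boolean,
        "text" | "varchar" => TypeRepresentation::String,
        // unknown scalar types default to JSON representation
        _ => TypeRepresentation::Json,
    }
}

fn main() {
    assert_eq!(infer_type_representation("int4"), TypeRepresentation::Int32);
    assert_eq!(infer_type_representation("interval"), TypeRepresentation::Json);
}
```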
ed42b7e to 6fa57df
Note: we are pointing to a specific SDK revision. We should tag a release and point to that.
…n does not include a type representation, we infer one based on the scalar type name, defaulting to JSON representation if we don't recognize the scalar type. The mapping is pulled from the default introspection configuration. This should enable a smooth upgrade, but we may later need to publish a new version of the configuration with a mechanism to guarantee type representations.
Note! This is a regression with regard to named scopes, which replace the previously supported RootTableColumn. There was technically no way to consume this API from the engine, so this is not a major issue; it will be addressed in an upcoming PR.
Type representations are no longer optional. The schema response now includes a reference to the scalar type to be used for count results. AggregateFunctionDefinition is now an enum, so we map based on function name.

Note! We are currently lying by omission about the return types: Postgres aggregates return NULL when aggregating over no rows, except COUNT. We should have a discussion about whether we want to change aggregate function definitions to reflect this behavior, whether all these scalars will be implicitly nullable, or whether we want to change the SQL using COALESCE to default to some value when no rows are present. Arguably, there are no proper default values for MAX, MIN, or AVG.

As for SUM, ndc-test expects all SUM return values to be represented as either 64-bit integers or 64-bit floats. Postgres has types like INTERVAL, which is represented as a string and can be aggregated with SUM; we cannot represent intervals as float64 or int64. We need to discuss whether any of the above needs to be revisited.
…, so that we may count nested properties using field_path
…eign key may be on a nested field. For now, we do not support relationships.nested, so we error out in that case.
Add reference to configuration.schema.json. Add missing type representations. Add missing scalar types (v4 did not require all referenced scalar types to be defined).
Note: this feature is still not implemented, so the test still fails.
…only non-null rows, instead of COUNT(*) which would count all rows
The ndc spec expects sum aggregates to return a scalar represented as either f64 or i64. Because ndc-postgres represents i64 as a string, we only mark sum aggregates returning an f64; any other sum aggregate will function as a custom aggregate and have no special meaning. Additionally, we wrap SUM with `COALESCE(SUM(col), 0)` to ensure we return 0 when aggregating over no rows. Similarly, we only mark avg functions returning an f64, and treat any other avg as a custom aggregate.
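The COALESCE wrapping described above can be illustrated with a minimal sketch. The real connector builds a SQL AST rather than strings; the string rendering and function name here are simplifications for illustration only.

```rust
// Sketch of the COALESCE wrapping for sum aggregates: emit
// COALESCE(SUM(col), 0) so that aggregating over zero rows yields 0
// instead of NULL. The actual connector constructs an AST, not text.

fn render_sum_aggregate(column: &str) -> String {
    // COALESCE picks the first non-NULL argument, so an empty input
    // set (where SUM returns NULL) falls through to the literal 0.
    format!("COALESCE(SUM({column}), 0)")
}

fn main() {
    assert_eq!(render_sum_aggregate("price"), "COALESCE(SUM(price), 0)");
}
```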
…y tables in scope for an exists, instead of only root and current. (#674)

### What

`ComparisonTarget::RootCollectionColumn` was removed, to be replaced by [named scopes](https://github.com/hasura/ndc-spec/blob/36855ff20dcbd7d129427794aee9746b895390af/rfcs/0015-named-scopes.md). This PR implements the replacement functionality.

### How

This PR replaces RootAndCurrentTables with TableScope, a struct that keeps track of the current table and any tables in scope for exists expressions. See the accompanying review for details on the code itself.
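The idea behind TableScope can be sketched as a stack of tables brought into scope by nested EXISTS expressions, where a named scope resolves some number of levels up from the current table. This is a self-contained illustration under assumed names and semantics, not the connector's actual struct.

```rust
// Illustrative sketch of the TableScope idea: instead of tracking only
// the root and current tables, keep every table brought into scope by
// nested EXISTS expressions. `scoped_table(Some(n))` resolves a named
// scope n levels up; all names and the scope encoding are assumptions.

#[derive(Debug, Clone)]
struct TableScope {
    /// Outermost (root) table first, current table last.
    tables_in_scope: Vec<String>,
}

impl TableScope {
    fn new(root: &str) -> Self {
        TableScope {
            tables_in_scope: vec![root.to_string()],
        }
    }

    /// Entering an EXISTS pushes a new current table.
    fn enter_exists(&self, table: &str) -> Self {
        let mut tables = self.tables_in_scope.clone();
        tables.push(table.to_string());
        TableScope { tables_in_scope: tables }
    }

    /// Resolve a named scope: `None` means the current table,
    /// `Some(n)` means n levels up from the current table.
    fn scoped_table(&self, scope: Option<usize>) -> Result<&str, String> {
        let levels = scope.unwrap_or(0);
        let idx = self
            .tables_in_scope
            .len()
            .checked_sub(1 + levels)
            .ok_or_else(|| format!("scope {levels} exceeds nesting depth"))?;
        Ok(self.tables_in_scope[idx].as_str())
    }
}

fn main() {
    let scope = TableScope::new("albums").enter_exists("tracks");
    assert_eq!(scope.scoped_table(None).unwrap(), "tracks");
    assert_eq!(scope.scoped_table(Some(1)).unwrap(), "albums");
}
```

Keeping the whole stack (rather than just root and current) is what allows any enclosing table, not only the root, to be referenced from inside a nested exists.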
4234509 to 829886f
V0.2.0 requires a type representation, but older configurations did not require one to be present.
To maximize compatibility with older configuration versions, we infer missing type representations based on the scalar type name, defaulting to JSON representation when the name is not recognized.
```diff
@@ -29,22 +29,22 @@ impl ComparisonOperatorMapping {
         ComparisonOperatorMapping {
             operator_name: "<=".to_string(),
             exposed_name: "_lte".to_string(),
-            operator_kind: OperatorKind::Custom,
+            operator_kind: OperatorKind::LessThanOrEqual,
```
The default introspection configuration changed to tag the lt(e)/gt(e) operators.
This only affects new configurations, so deployments with existing configuration will see no change in behavior.
```rust
        function_name.as_str(),
        function_definition.return_type.as_str(),
    ) {
        ("sum", "float8" | "int8") => {
```
v0.2.0 adds standard aggregate functions. These have specific expectations, such as `sum` needing to return a scalar represented as either `Float64` or `Int64`.
We check for specific aggregate functions returning matching data types, and mark applicable functions as such. Non-compliant functions (e.g. `sum` on interval types, which are represented as strings) will be tagged as custom aggregate functions.
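The classification above can be sketched as a match on the (function name, return type) pair, mirroring the hunk shown. The enum and helper names here are illustrative assumptions, not the `ndc_models` types.

```rust
// Hedged sketch of classifying aggregate functions: pair the function
// name with its Postgres return type and mark only compliant
// combinations as standard aggregates; everything else stays custom.
// `AggregateKind` and `classify_aggregate` are illustrative names.

#[derive(Debug, PartialEq)]
enum AggregateKind {
    Sum,
    Average,
    Custom,
}

fn classify_aggregate(function_name: &str, return_type: &str) -> AggregateKind {
    match (function_name, return_type) {
        // the spec expects sum to return a Float64- or Int64-represented
        // scalar, so only these return types qualify as standard sum
        ("sum", "float8" | "int8") => AggregateKind::Sum,
        ("avg", "float8") => AggregateKind::Average,
        // e.g. sum over interval (string-represented) remains custom
        _ => AggregateKind::Custom,
    }
}

fn main() {
    assert_eq!(classify_aggregate("sum", "float8"), AggregateKind::Sum);
    assert_eq!(classify_aggregate("sum", "interval"), AggregateKind::Custom);
}
```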
```rust
    Ok(models::SchemaResponse {
        collections,
        procedures,
        functions: vec![],
        object_types,
        scalar_types,
        capabilities: Some(models::CapabilitySchemaInfo {
```
Adding this is required, but it also means we will see a change in returned schemas even if the configuration has not been changed.
```rust
            field_path,
            scope,
        } => {
            let scoped_table = current_table_scope.scoped_table(scope)?;
```
Apply scope, if any, before traversing path
```rust
                args: vec![column],
            }
        }
        OrderByAggregate::CountStar | OrderByAggregate::Count => {
```
Count Star and Count actually behave the same: we only count left-hand rows that actually exist.
This is important, as a left join + COUNT(*) would count all rows, even if there were no matching left-hand rows.
I believe those semantics are correct, but it's something to double check.
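The distinction can be illustrated with a minimal rendering sketch: counting a joined column instead of `*` means unmatched LEFT JOIN rows (which carry NULL in that column) are not counted. The function name and string rendering are assumptions for illustration; the connector builds an AST.

```rust
// Sketch of the counting semantics noted above: render both Count and
// CountStar as COUNT(<joined column>) rather than COUNT(*), so a LEFT
// JOIN with no matching rows counts as 0 instead of 1.

fn render_order_by_count(joined_column: &str) -> String {
    // COUNT(col) skips NULLs; an unmatched LEFT JOIN row carries NULL
    // in the joined column, so it contributes nothing to the count.
    format!("COUNT({joined_column})")
}

fn main() {
    assert_eq!(render_order_by_count("tracks.id"), "COUNT(tracks.id)");
}
```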
```rust
}

enum OrderByAggregate {
```
We created a new enum for the various ordering aggregates
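A minimal sketch of such an enum, replacing ad-hoc `Function::Unknown("COUNT")` strings with a closed set of ordering aggregates. The variant set and `apply` helper are assumptions for illustration, not the exact connector code.

```rust
// Illustrative version of the new ordering-aggregate enum: a closed set
// of aggregates used in ORDER BY translation. Variant and method names
// are hypothetical.

enum OrderByAggregate {
    CountStar,
    Count,
    /// Any other aggregate, referenced by name (e.g. MAX, MIN, AVG).
    Single(String),
}

impl OrderByAggregate {
    /// Render the aggregate applied to a column expression.
    fn apply(&self, column: &str) -> String {
        match self {
            // CountStar and Count share semantics here: count only
            // rows where the joined column is present.
            OrderByAggregate::CountStar | OrderByAggregate::Count => {
                format!("COUNT({column})")
            }
            OrderByAggregate::Single(name) => format!("{name}({column})"),
        }
    }
}

fn main() {
    let agg = OrderByAggregate::Single("MAX".to_string());
    assert_eq!(agg.apply("albums.title"), "MAX(albums.title)");
    assert_eq!(OrderByAggregate::CountStar.apply("id"), "COUNT(id)");
}
```

An enum like this lets the translation layer exhaustively match on ordering aggregates instead of comparing strings.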
```diff
@@ -703,10 +740,10 @@ fn translate_targets(
                 // Aggregates do not have a field path.
                 field_path: (&None).into(),
                 expression: sql::ast::Expression::Value(sql::ast::Value::Int4(1)),
-                aggregate: Some(sql::ast::Function::Unknown("COUNT".to_string())),
+                aggregate: Some(OrderByAggregate::CountStar),
```
We used our new ordering aggregate enum instead of a direct SQL AST function.
```rust
/// this test should be ignored unless explicitly invoked
#[ignore]
#[test]
fn generate_query_request_schema() {
```
Added these utilities to generate query and mutation request schemas, to validate test files. These tests are not invoked unless explicitly called upon. There are probably better ways to do this, and we can remove them if there are any feelings against keeping them around.
```diff
@@ -21,9 +21,9 @@ pub async fn create_router(
     )]);
     let setup = PostgresSetup::new(environment);

-    let state = ndc_sdk::default_main::init_server_state(setup, &absolute_configuration_directory)
+    let state = ndc_sdk::state::init_server_state(setup, &absolute_configuration_directory)
```
No idea what I did here or why; it just made the SDK/compiler happy.
### What
This PR updates ndc-postgres to ndc-spec v0.2.0.
This includes a lot of changes to tests; these are justified in the individual commits.
### How