Add replication support for ClickHouse state table + Fixes ListMigration query for ClickHouse #520
Conversation
@mfridman Huge thanks for maintaining this library. I'm hoping to contribute the fixes back, so if you can take some time to review and provide feedback, that would be great :)
Thanks, it'll take me some time to get through this one. Having said that, I'll drop a few comments to get the conversation started.
```diff
@@ -34,6 +59,20 @@ func (c *Clickhouse) GetMigrationByVersion(tableName string) string {
 }

 func (c *Clickhouse) ListMigrations(tableName string) string {
-	q := `SELECT version_id, is_applied FROM %s ORDER BY version_id DESC`
+	q := `SELECT version_id, is_applied FROM %s ORDER BY tstamp DESC`
```
I believe this is fixing a bug? (I think you mentioned this in the description.)

The core goose library relies on a non-user-defined order for when migrations get applied; this is typically a sequential id in all other databases. It seems here we were using `version_id`, but this is incorrect. In lieu of a sequential id, we should be using time.
Yeah. There is no equivalent of an auto-incrementing ID in ClickHouse, and it doesn't provide any way to get a globally incremented unique ID across its nodes.

So I relied on timestamps. With just second precision, ordering was non-deterministic because migrations could have the same timestamp. With nanosecond precision, that chance is almost non-existent, and the implementation further augments it with use of the KeeperMap engine.
```go
// QuerierOptions is an interface to provide specific options to a dialect.
// For ex: providing replication support to underlying migration table.
type QuerierOptions interface {
```
When I initially added a "store" and "querier", the idea was to have the core goose library only interact with the store, and have no idea about the underlying queries. The store abstracts the underlying querier, which holds the raw string queries for the different database technologies.

Not sure if this made it easier or harder to evolve, but we should probably think about whether there's a more elegant way to thread options from the core library to those two internal layers.
Yeah, I struggled with this as well. I didn't find an easier way to hook up options all the way down to the implementation-specific detail. I tried to minimize the leaky abstraction(s), but I think it will have to be exposed in one way or another.

If you have any suggestions, I can try those out.
```diff
 )
-ENGINE = MergeTree()
-ORDER BY (date)`
+ENGINE = KeeperMap('/goose_version')
```
I've been coming back to this PR on/off, trying to think through the implementation from the perspective of a "general purpose" migration tool. On the surface, everything looks reasonable with the exception of changing the engine.
For context, another PR #530 came in requesting another engine.
Should the default engine be `MergeTree`, but allow this to be configurable?

Maybe we could put our heads together on how to solve this so the tool can be useful for all or most ClickHouse users? cc @iamluc @arunk-s

I think PR #530 is on the right track to setting a custom engine, and afaics this would satisfy:

- default engine: `MergeTree()`
- `ReplicatedMergeTree()`
- `KeeperMap()`
- ... etc

Other questions: does the `ORDER BY` or `PARTITION BY` need to be configurable? Hopefully not, so we can make this as generic as possible.

Last thought: leaving `ENGINE = %s` to the user may be a footgun, and I wish we could do more upfront validation, but then we have to account for all the possible ClickHouse engine types, and this seems like a lot of work to maintain.

Maybe there's a world where the user is allowed to supply a template for the table, as long as it contains:

- `version_id`
- `is_applied` (legacy field, deprecated)
- `tstamp`

Not sure I like this idea, since goose should own the table it creates to ensure the identifiers match up across all implementations.
Hi,

I had a quick look at the PR and it looks good to me (it could replace my own PR #530).

My only concern is about the engine. I'm not a ClickHouse expert and didn't know about the `KeeperMap` engine, but it seems it must be enabled (https://clickhouse.com/docs/en/engines/table-engines/special/keeper-map), which may not be possible with hosted services.

I suggest adding the engine as a new option.
Having an option to toggle the underlying table engine can definitely work!

> Other questions, does the ORDER BY or PARTITION BY need to be configurable? Hopefully not so we can make this as generic as possible.

I think these settings are affected by the choice of engine, so yes, they'll need to be configurable if a custom engine is allowed.

I can understand the concern about hosted services not having `KeeperMap`, though I doubt that platforms like ClickHouse Cloud disallow creating tables with this table engine.

> Last thought, leaving ENGINE = %s to the user may be a footgun and I wish we could do more upfront validation, but then we have to account for all the possible ClickHouse engine types and this seems like a lot of work to maintain.

Definitely. Convention over configuration is generally desirable, but in this particular case I'm afraid the abstractions will leak a bit in order to properly support the underlying database. In any case, we need a sensible default and only a selected number of choices.

The library does quite well supporting different databases, but that does come at a cost, I'm afraid.

I'll be happy to lend a hand with ClickHouse-specific topics, though it can take a bit of time for me to catch up.
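The "sensible default and only a selected number of choices" idea could look something like this (a hypothetical sketch, not goose's API): validate a user-supplied engine against an allow-list instead of accepting arbitrary `ENGINE = %s` input.

```go
package main

import "fmt"

// allowedEngines is an illustrative allow-list; the real set of supported
// engines would be a project decision.
var allowedEngines = map[string]bool{
	"MergeTree()":                 true,
	"ReplicatedMergeTree()":       true,
	"KeeperMap('/goose_version')": true,
}

// resolveEngine falls back to a sensible default and rejects anything
// outside the allow-list, avoiding open-ended ENGINE = %s substitution.
func resolveEngine(engine string) (string, error) {
	if engine == "" {
		return "MergeTree()", nil // default
	}
	if !allowedEngines[engine] {
		return "", fmt.Errorf("unsupported table engine: %q", engine)
	}
	return engine, nil
}

func main() {
	e, _ := resolveEngine("")
	fmt.Println(e) // MergeTree()

	_, err := resolveEngine("Memory()")
	fmt.Println(err != nil) // true
}
```

The trade-off, as noted above, is that the allow-list has to be maintained as ClickHouse adds engines.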
Dropping a thought here. We're soon (#379) going to add a goose provider, which unlocks new capabilities. One option available to us is exposing the "sql store" interface so users could bring their own. The goose package offers a few default ones, but for more advanced use cases you can implement a thin interface while goose handles the migration logic. Let me know what you think, as that would satisfy a lot of requests like this one.

**Example**

The "store" (unsure if that'll be the final name) is the interface goose leverages for all its migration logic, and currently all implementations are internal: sqlite3, postgres, mysql, clickhouse, etc.

```go
type customStore struct{}

var _ Store = (*customStore)(nil)

func (s *customStore) CreateVersionTable(ctx context.Context, db sqlextended.DBTxConn) error {
	return errors.New("not implemented")
}

func (s *customStore) InsertOrDelete(ctx context.Context, db sqlextended.DBTxConn, direction bool, version int64) error {
	return errors.New("not implemented")
}

func (s *customStore) GetMigration(ctx context.Context, db sqlextended.DBTxConn, version int64) (*GetMigrationResult, error) {
	return nil, errors.New("not implemented")
}

func (s *customStore) ListMigrations(ctx context.Context, db sqlextended.DBTxConn) ([]*ListMigrationsResult, error) {
	return nil, errors.New("not implemented")
}
```

The `sqlextended.DBTxConn` is an interface to combine `*sql.DB`, `*sql.Tx`, and `*sql.Conn`:

```go
type DBTxConn interface {
	ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
	QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
	QueryRowContext(ctx context.Context, query string, args ...any) *sql.Row
}

var (
	_ DBTxConn = (*sql.DB)(nil)
	_ DBTxConn = (*sql.Tx)(nil)
	_ DBTxConn = (*sql.Conn)(nil)
)
```

And with the new goose provider, we could have something like:

```go
store := yourpkg.NewStore(...)

goose.NewProvider(goose.DialectCustom, db, fsys,
	provider.WithCustomStore(store),
)
```

By setting a custom dialect, goose would use the supplied store instead of a built-in implementation.

Do you think that would satisfy your requirement?
Yeah, I think having a custom store could work. I still think there will need to be a helper function that can be called from goose that passes arguments or options to the internal implementation. In this MR that role is fulfilled by `AttachOptions`.
Maybe not? Most (if not all) options can be defined when constructing the store. In this MR you implemented the `Store` interface:

```go
type Store interface {
	CreateVersionTable(ctx context.Context, db sqlextended.DBTxConn) error
	InsertOrDelete(ctx context.Context, db sqlextended.DBTxConn, direction bool, version int64) error
	GetMigration(ctx context.Context, db sqlextended.DBTxConn, version int64) (*GetMigrationResult, error)
	ListMigrations(ctx context.Context, db sqlextended.DBTxConn) ([]*ListMigrationsResult, error)
}
```

And then in your implementation / code you'd have:

```go
type CustomStore struct {
	OnCluster    bool
	ClusterMacro string
}

var _ Store = (*CustomStore)(nil)

func NewCustomStore(onCluster bool, clusterMacro string) *CustomStore {
	return &CustomStore{
		OnCluster:    onCluster,
		ClusterMacro: clusterMacro,
	}
}

func (c *CustomStore) CreateVersionTable(context.Context, sqlextended.DBTxConn) error {
	return errors.New("not implemented")
}

func (c *CustomStore) InsertOrDelete(context.Context, sqlextended.DBTxConn, bool, int64) error {
	return errors.New("not implemented")
}

func (c *CustomStore) GetMigration(context.Context, sqlextended.DBTxConn, int64) (*GetMigrationResult, error) {
	return nil, errors.New("not implemented")
}

func (c *CustomStore) ListMigrations(context.Context, sqlextended.DBTxConn) ([]*ListMigrationsResult, error) {
	return nil, errors.New("not implemented")
}
```

And then when you start your app, you would:

```go
func main() {
	// Bring your own Store implementation:
	//  - CreateVersionTable
	//  - InsertOrDelete
	//  - GetMigration
	//  - ListMigrations
	store := NewCustomStore(true, "clusterMacro")
	p, err := provider.NewProvider(CustomDialect, db, fsys, WithCustomStore(store))
}
```

Based on registered migrations, goose knows how to use the 4 store methods to do various migration-related tasks.
I like your proposal @mfridman! Also, a bonus point would be that the custom store could access the regular store and decorate it, instead of rewriting all the methods.
@mfridman, I see. Yeah, that might work. I can try to give it a go, but I'm not yet sure when I'll be able to get to it.
No worries whatsoever, the provider API is still internal and under active development. It'll land sometime in the next few weeks with a blog post about what's new and the more advanced use cases. I think it'll open a lot of opportunities for us to build upon.
Closing the MR as the provider support has landed and any additional requirements such as replication support on ClickHouse state table can be added via a custom store. |
This MR adds replication support for the state table (`goose_db_version`) in ClickHouse.

- Adds `AttachOptions` to goose to provide additional parameters. `AttachOptions` allows free-form variables to be passed to applicable drivers; currently only the ClickHouse dialect inspects it.
- The ClickHouse dialect looks for an `ON_CLUSTER` key in the options and, if enabled, picks the cluster name provided or uses a `{cluster}` macro that is expected to be present in the ClickHouse configuration.
- `AllowMissingMigrations` support for ClickHouse.
- Uses `DateTime64` with nanosecond precision to ensure as deterministic results as possible when returning rows for `ListMigrations`.
- Uses the `KeeperMap` engine.
- Adds `testcontainers-go` and `google-cmp` as dependencies for end-to-end tests. Testcontainers offers a much better API for providing disk mounts and ensuring cleanup.

A fork of this library with these changes lives at https://gitlab.com/gitlab-org/opstrace/goose, which is used in https://gitlab.com/gitlab-org/opstrace/opstrace.