metrika_logic_fake.xml
<yandex>
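    <!-- Fake test configuration: a three-node ZooKeeper ensemble for coordination
         and a two-shard, two-replica cluster named "test" for Distributed tables. -->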
    <zookeeper>
        <node index="1">
            <host>192.168.0.1</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>192.168.0.2</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>192.168.0.3</host>
            <port>2181</port>
        </node>
    </zookeeper>
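    <!-- Cluster definitions referenced by name from the Distributed table engine. -->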
    <remote_servers>
        <test>
            <!-- Inter-server per-cluster secret for Distributed queries.
                 Default: no secret (no authentication is performed).
                 If set, Distributed queries are validated on the shards, so at a minimum:
                 - such a cluster must exist on the shard,
                 - such a cluster must have the same secret.
                 More importantly, the initial_user is then used as the current user for the query.
                 Right now the protocol is fairly simple and only takes into account:
                 - the cluster name
                 - the query
                 It would also be useful to implement the following:
                 - the source hostname (see interserver_http_host); this would depend on DNS.
                   An IP address could be used instead, but then it has to be obtained correctly on the initiator node.
                 - the target hostname / IP address (same notes as for the source hostname)
                 - time-based security tokens -->
            <secret>foo</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.0.1</host>
                    <port>0</port>
                </replica>
                <replica>
                    <host>192.168.0.2</host>
                    <port>0</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.0.3</host>
                    <port>0</port>
                </replica>
                <replica>
                    <host>192.168.0.4</host>
                    <port>0</port>
                </replica>
            </shard>
        </test>
    </remote_servers>
</yandex>
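<!-- Usage sketch (illustrative only, not part of this config): a Distributed table over the
     "test" cluster defined above could be declared roughly as

         CREATE TABLE dist_hits AS local_hits
         ENGINE = Distributed(test, default, local_hits, rand());

     where dist_hits and local_hits are hypothetical table names. Queries against such a table
     are fanned out to the shards listed above, and because a per-cluster secret is set, the
     shards validate the request against the same secret and run it as the initial_user. -->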