ClusteredCache (com.bowman.cardserv.ClusteredCache)
---------------------------------------------------

Cache-sharing with the csproxy is a powerful tool that can be used in a number of different ways. Before contemplating
the various possible scenarios, the basic cache model needs to be understood.

Whenever a proxy node receives a new ecm request from a client, it will query the cache to see if this is the first
time this particular request has been received. One of three things can happen:

1. The ecm has been received before, and a cw reply has already been fetched from a card. This causes an instant hit.
2. The ecm has been received before, but the card transaction is still ongoing and the reply is not yet available.
   This causes the request to be held in the cache for a maximum of 'max-cache-wait' seconds. As soon as a reply is
   available, all threads waiting for it will be released with their own copy of the cw.
3. The ecm has not been seen before, and no card transaction is pending for it. This will place the ecm in a list of
   pending requests, and it will be up to the proxy service mapper and connection manager to find a card to handle it.

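The three outcomes above can be sketched roughly as follows. The names and structures here are hypothetical, not the actual ClusteredCache internals (the real cache also expires entries after cw-max-age and enforces the max-cache-wait timeout):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch of the three cache outcomes (hypothetical names, not the
// actual ClusteredCache internals).
public class CacheSketch {
  enum Outcome { INSTANT_HIT, WAIT_FOR_PENDING, NEW_PENDING }

  private final Map<String, byte[]> resolved = new ConcurrentHashMap<>(); // handled ecm -> cw pairs
  private final Set<String> pending = ConcurrentHashMap.newKeySet();      // ecms awaiting a card

  Outcome lookup(String ecmHash) {
    if (resolved.containsKey(ecmHash)) return Outcome.INSTANT_HIT;  // outcome 1
    if (pending.contains(ecmHash)) return Outcome.WAIT_FOR_PENDING; // outcome 2
    pending.add(ecmHash);                                           // outcome 3
    return Outcome.NEW_PENDING;
  }

  void reply(String ecmHash, byte[] cw) { // a card transaction completed
    pending.remove(ecmHash);
    resolved.put(ecmHash, cw); // waiting threads get their own copy of the cw
  }
}
```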
The cache knows nothing about which ca-system the ecm belongs to, what sid it is for, or who is asking for it. It will
simply check its list of already handled ecm -> cw pairs and its list of pending ecm requests, and see which (if any)
contains the new ecm. If two different profiles, or multiple services within one profile, happen to use the same ecms,
the cache will score hits. The chances of a false positive are insignificant enough to be safely ignored.

When using remote cache-sharing, all of the above remains pretty much the same. However, whenever something is added
to the list of pending requests or the list of requests with cw replies, the cache will broadcast the new addition to
the remote proxy (or proxies) that it has been configured to talk to. This broadcast can be done in several ways, but
all involve udp communication and standard java object serialization.

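The broadcast itself can be pictured roughly like this, in the serialized-object style (superseded in 0.8.13 by a custom protocol, see the note further down; the class and method names are assumptions for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sketch of pushing a cache update to a peer over udp using standard java
// object serialization (illustrative names, not the real transport code).
public class CacheUpdateSender {
  static byte[] serialize(Serializable update) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
      out.writeObject(update); // e.g. a pending ecm, or a resolved ecm -> cw pair
    }
    return buf.toByteArray();
  }

  static void send(Serializable update, InetAddress peer, int udpPort) throws IOException {
    byte[] data = serialize(update);
    try (DatagramSocket socket = new DatagramSocket()) {
      socket.send(new DatagramPacket(data, data.length, peer, udpPort)); // one update per datagram
    }
  }
}
```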
Note that in any cache cluster, there can still be occasions when two or more proxies will attempt to query a card for
the same ecm. This will occur when the ecms arrive at both proxies at exactly the same time (which is of course often
the case). Depending on the roundtrip ping time between the proxies, this can be enough to reduce the effectiveness
of the cache significantly. Typically, though, you have multiple proxies for the sake of redundancy, so you probably
want the same ecm processed in several places: if one proxy fails to produce a reply in time, another may succeed.

That said, it is possible to achieve strict synchronization between cache instances by using the arbitration feature,
which introduces a negotiation procedure for each ecm to determine which proxy is best suited to handle it (only that
proxy will then proceed with forwarding; the others will wait). See sync-period below.

NOTE: As of 0.8.13 the ClusteredCache no longer uses default java object serialization for the transport protocol.
The new custom protocol is briefly documented in ClusteredCache.java (it should be about 20-40 times more efficient).

------------------------------------------------------

The following settings are available, see proxy.xml for separate examples:

<cache-handler class="com.bowman.cardserv.ClusteredCache">

- To use the ClusteredCache as the cache-handler for the proxy, use the class name: com.bowman.cardserv.ClusteredCache
Changing cache-handlers requires a proxy restart.

<cache-config>
<cw-max-age>19</cw-max-age>
<max-cache-wait>7</max-cache-wait>

- Inherited from DefaultCache (see proxy-reference.html).

<remote-host>peer.proxy.host.com</remote-host>
<remote-port>54278</remote-port>

- One way of specifying where to send cache-updates, when there is only a single target proxy.

<multicast-group>230.2.3.2</multicast-group>
<multicast-ttl>2</multicast-ttl>
<remote-port>54278</remote-port>

- Another way to configure targets for cache-updates. Multicast typically only works in a LAN environment.

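A minimal sketch of the receiving side of this multicast setup, using the example values above (illustrative only, not the actual implementation):

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.util.Arrays;

// Sketch: join the configured multicast group and wait for one cache update
// (hypothetical helper, not the real ClusteredCache code).
public class MulticastReceiveSketch {
  static byte[] receiveOneUpdate() throws IOException {
    InetAddress group = InetAddress.getByName("230.2.3.2"); // multicast-group
    try (MulticastSocket socket = new MulticastSocket(54278)) { // remote-port
      socket.setTimeToLive(2); // multicast-ttl: max router hops for outgoing updates
      socket.joinGroup(group); // start receiving updates sent to the group
      byte[] buf = new byte[1500];
      DatagramPacket packet = new DatagramPacket(buf, buf.length);
      socket.receive(packet);  // blocks until a cache update arrives
      socket.leaveGroup(group);
      return Arrays.copyOf(buf, packet.getLength());
    }
  }
}
```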
<tracker-url>http://cstracker.host.com/list.enc</tracker-url>
<tracker-key>secretkey</tracker-key>
<tracker-update>10</tracker-update> <!-- minutes -->

- A third (and perhaps the best) option for configuring multiple targets for cache-updates. The ClusteredCache will
fetch a plain text list of hostnames and port numbers, and send updates to every entry in the list. The list can be
fetched automatically at regular intervals (or, if tracker-update is 0, only when proxy.xml is modified/touched).
This approach allows proxies to be added to a cluster without modifying the configuration of already existing nodes.
The list must have the following format (lines starting with # are comments):

# ClusteredCache list file. Syntax: hostname_or_ip:udp_port_nr
proxy1.host.com:54278
proxy2.host.com:54278
192.168.0.3:54275

The list can be stored anywhere, as long as it can be accessed via url (e.g. file://, http, https, ftp).
Optionally, the list can also be blowfish encrypted using the included tool fishenc.jar (found in lib, run with:
java -jar fishenc.jar). If encrypted, tracker-key must be set correctly.

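Parsing the list format above is straightforward; a sketch (illustrative, not the real tracker code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of parsing the tracker list: one hostname_or_ip:udp_port_nr per
// line, with '#' lines as comments (hypothetical helper class).
public class TrackerListParser {
  static Map<String, Integer> parse(String listText) {
    Map<String, Integer> peers = new LinkedHashMap<>(); // host -> udp port
    for (String line : listText.split("\n")) {
      line = line.trim();
      if (line.isEmpty() || line.startsWith("#")) continue; // skip comments and blanks
      int sep = line.lastIndexOf(':');
      peers.put(line.substring(0, sep), Integer.parseInt(line.substring(sep + 1)));
    }
    return peers;
  }
}
```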
<local-port>54278</local-port>

- The UDP port where this ClusteredCache instance will listen for incoming updates.

<local-host>csproxy3.host.com</local-host>

- The external hostname or IP of this ClusteredCache. This is only needed when using the above tracker setup. The name
should match the one in the tracker list, so the cache instance can identify itself in the list and avoid sending
updates to itself.

<debug>true</debug>

- Set debug to true to enable additional cache information in the status-web. This can impact performance; use with care.

<hide-names>false</hide-names>

- If configured to send to one or more remote caches, this controls whether the names of the connectors are censored in
the outgoing cache updates (they will appear as "remote: unknown" to the target). This only makes sense when dealing
with untrusted proxy peers.

<sync-period>0</sync-period> <!-- milliseconds -->

- Set this > 0 to enable the strict arbitration procedure. For example, if you set it to 200, the ClusteredCache
would wait 200 ms for every new (previously unseen) ecm, synchronizing with as many other proxies in the cluster
as possible to determine which one is best suited to handle it. Only that proxy would proceed with a forward to a card.
This adds 200 ms to every single transaction, but should ensure that a cluster of proxies will only ask one card in one
proxy, once, for the same ecm. Probably only usable in a cluster where all nodes are fully trusted, and where the
network is reliable with fairly fixed ping times and no congestion.

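The selection step of the arbitration can be pictured like this. Note that the names and the "lowest score wins" rule are assumptions for illustration; the actual negotiation protocol is not documented here:

```java
import java.util.Map;

// Hedged sketch of the arbitration idea: during the sync-period each node
// collects (peer id, score) claims for a new ecm, and only the best-scored
// node forwards it to a card; the others wait for the cache reply.
public class ArbitrationSketch {
  static String winner(Map<String, Integer> claims) {
    // assumed rule: lowest score is best suited (e.g. lightest load)
    return claims.entrySet().stream()
        .min(Map.Entry.<String, Integer>comparingByValue()
            .thenComparing(Map.Entry.<String, Integer>comparingByKey())) // deterministic tie-break
        .map(Map.Entry::getKey)
        .orElseThrow(IllegalArgumentException::new);
  }

  static boolean shouldForward(String selfId, Map<String, Integer> claims) {
    return selfId.equals(winner(claims)); // only the winner queries a card
  }
}
```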
<auto-add-peers>false</auto-add-peers>

- Added in 0.9.1. Set to true to automatically add as a peer any remote cache that is sending updates to this one
(so updates will be sent there from this node as well).

<cw-validation checksum="true" zero-counting="true" log-warnings="true"> <!-- all default to true if omitted -->
<bad-dcw>00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00</bad-dcw>
<bad-dcw>01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10</bad-dcw>
</cw-validation>

- Added in 0.9.1. Blocks incoming cache pairs where the dcw has a bad checksum, or contains 5 or more zero bytes (but
fewer than 8, so as not to affect ca-deployments where the opposing cw is blank). Specific cws known to be bad can also
be listed, but only do this if absolutely necessary and there is no way to fix the problem at the source.
Use the CacheCoveragePlugin overwrite analysis to determine whether there are recurring bad cws that aren't filtered by
the checksum or zero-count rules, and only add entries once they have been traced to the origin server and the cause
is known.

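The two built-in filters can be sketched as follows. The checksum rule (every 4th byte equals the sum of the previous three, mod 256) is the standard dcw convention; the exact zero thresholds follow the text above, and the helper names are illustrative:

```java
// Sketch of the checksum and zero-counting validation rules for a 16-byte dcw
// (hypothetical helper, not the real cw-validation code).
public class CwValidation {
  static boolean checksumOk(byte[] cw) {
    for (int i = 0; i < 16; i += 4) { // each 4-byte quad ends in a checksum byte
      int sum = (cw[i] & 0xff) + (cw[i + 1] & 0xff) + (cw[i + 2] & 0xff);
      if ((byte) sum != cw[i + 3]) return false; // sum of previous three, mod 256
    }
    return true;
  }

  static boolean zeroCountOk(byte[] cw) {
    int zeroes = 0;
    for (byte b : cw) if (b == 0) zeroes++;
    return zeroes < 5 || zeroes >= 8; // 5-7 zeroes = suspicious; 8+ = blank half, allowed
  }
}
```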
</cache-config>

------------------------------------------------------

NOTE: It is possible to set up the ClusteredCache without any remote targets (receive-only mode). If no remote-host/port
is set using any of the various methods, the default behaviour is to not attempt any sending of updates.
This is useful when creating a cache-only proxy node, receiving updates from multiple other proxies but sending to none.
It can also be used to augment local cards with additional services (which will only be available through the cache).

With 0.9.0, ClusteredCache in receive-only mode is the default in the auto-generated config template. If nothing is
received, it behaves exactly like the DefaultCache.