Opened 9 years ago

Closed 9 years ago

#3328 closed enhancement (fixed)

make CSP protocol compatible with multics R69

Reported by: knasson Owned by: knasson
Priority: minor Component: Cache-EX
Severity: high Keywords: cache multics
Cc: Sensitive: no

Description

Reason for enhancement

make cache forwarding from multics R69 to oscam work

Possible impacts on other features: unknown

Attachments (2)

oscam-csp.patch.gz (797 bytes ) - added by knasson 9 years ago.
oscam-csp.patch (1.8 KB ) - added by knasson 9 years ago.


Change History (17)

by knasson, 9 years ago

Attachment: oscam-csp.patch.gz added

comment:1 by Deas, 9 years ago

Why did you attach a .gz file? Just attach the .patch file directly, so everybody can click on it and have it shown immediately in the browser...

by knasson, 9 years ago

Attachment: oscam-csp.patch added

comment:2 by bowman, 9 years ago

So responding to the ping requests is all that's needed to make it work with multics?
I added that in r8666 just now (although for other reasons and not exactly like in the suggested patch), so possibly this can now be closed.

Note that r8666 also adds experimental, automatic push back to csp (or, I suppose, multics), which may need more work before it's usable.
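For reference, a minimal sketch of what answering a cache ping might look like, assuming a plain UDP exchange; the message-type constants (MSG_PING, MSG_PONG) and the one-byte packet layout are assumptions for illustration only, not the actual CSP/oscam wire format:

#include <stdint.h>
#include <stddef.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define MSG_PING 0x01   /* assumed type byte for an incoming cache ping */
#define MSG_PONG 0x02   /* assumed type byte for the reply              */

/* Answer a cache ping with a pong on the same UDP socket, so the
 * remote peer (multics here) keeps the link alive. */
static void handle_cache_packet(int sock, const uint8_t *buf, size_t len,
                                const struct sockaddr_in *from)
{
    if (len < 1)
        return;

    if (buf[0] == MSG_PING) {
        uint8_t pong = MSG_PONG;
        sendto(sock, &pong, sizeof(pong), 0,
               (const struct sockaddr *)from, sizeof(*from));
    }
}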

comment:3 by japie, 9 years ago

Just finished a short test with r8666 connected to 4 multics peers, and indeed it looks like the basics are working.

When adding the cache port (the same one my multics is using), all cache peers normally connected to multics are connected and receiving cache; so far so good, but there is no cache being sent. (Logical, since the peers aren't in my oscam config; I don't know how or where to put them, not implemented yet, right?)

I did another test to see if it can all work and started multics again (claiming the cache port). After that I started oscam, which then claims the port, and all 4 peers are sending and receiving cache for a few minutes; after that, cache is only sent.

I don't know whether the sent cache is usable for the client, but I suppose so. Maybe CSP cache peers can be added in the future as a reader?

[reader]
label = CSP example
protocol = csp
device = csp_hostname,csp_port
inactivitytimeout = 30
group = 1,2

Edit: peers are sending cache and hits are happening; it seems to be working very well! (No clue how oscam knows which cache peer port to send to ;)

Last edited 9 years ago by japie

comment:4 by bowman, 9 years ago

It's actually part of the protocol (the pings contain the port number).

Someone may want to give the user more control over whether to push back, however (in the old csp clusteredcache there is an "auto-add-peers" setting that defaults to true but can be disabled).

Internally in oscam it is now implemented (by corsair earlier) as cacheex level 2. The incoming udp updates from a csp peer are seen as an anonymous user, which sort of makes sense, but it doesn't give you an obvious way to control where to send updates (you'll send back to anyone who is sending to you, except for those updates that originated with them).
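To illustrate that mechanism (not actual oscam or csp code): the receiver could keep a small peer table keyed by the ping's source address plus the port number carried inside the ping, gated by an auto-add switch like csp's "auto-add-peers"; all names here (cache_peer, auto_add_peers, remember_peer) are hypothetical:

#include <netinet/in.h>
#include <stdint.h>
#include <time.h>

#define MAX_PEERS 16

struct cache_peer {
    struct in_addr addr;      /* source address of the ping              */
    uint16_t       port;      /* port number advertised inside the ping  */
    time_t         last_ping; /* used later to expire idle peers         */
};

static struct cache_peer peers[MAX_PEERS];
static int peer_count;
static int auto_add_peers = 1;   /* analogous to csp's "auto-add-peers" default */

/* Remember (or refresh) a peer based on a received cache ping. */
static void remember_peer(struct in_addr addr, uint16_t advertised_port)
{
    if (!auto_add_peers)
        return;

    for (int i = 0; i < peer_count; i++) {
        if (peers[i].addr.s_addr == addr.s_addr && peers[i].port == advertised_port) {
            peers[i].last_ping = time(NULL);
            return;
        }
    }
    if (peer_count < MAX_PEERS) {
        peers[peer_count].addr = addr;
        peers[peer_count].port = advertised_port;
        peers[peer_count].last_ping = time(NULL);
        peer_count++;
    }
}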

comment:5 by japie, 9 years ago

That explains a lot. Great job, both of you!

comment:6 by japie, 9 years ago

Update: initialization of the connection is not fully OK; when restarting oscam, only cache is sent. With oscam running I start multics (with the same cache port settings as oscam, on the same machine/ip), which connects to the peers; after killing multics, oscam takes over the peers and they function as they should.

I don't know whether this is multics specific or also happens with CSP.

Last edited 9 years ago by japie

comment:7 by superg1972, 9 years ago

Tested with MultiCs R69. With 8641 and the patch that I found on the multics.info forum, oscam receives the cache from MultiCs R69, but it is not pushing the cache to multics.

I tested 8667. With multics active, I started oscam; for about 10 seconds getting the cache from multics worked fine, but after that oscam does not get anything from multics, multics loses the ping and oscam continues only to "push" (but to where....). Multics during all of this is not receiving anything from oscam.

comment:8 by bowman, 9 years ago

Some caveats for the push experiment (although none of this affects responding to pings from multics, which is what the ticket and patch were about):

  • oscam will only push back to cache peers that it has received a cache ping from; there is no way to configure the target(s) or manually trigger it
  • if there is no incoming traffic from one source in 120 secs, it will stop pushing to that one until a new ping is received (see the sketch after this list)
  • only reply type updates (with cw) are actually sent atm, meaning that a very high sync/wait time is probably required at the peer side to actually make use of the cache (unknown whether pending notifications will be possible)
  • unless the ecm originated from local dvbapi at the oscam side, onid will be set to a dummy value of 0xffff (requiring block-caid-mismatch to be set to false in csp)
  • oscam itself is not currently sending any cache pings while pushing (but that should be added)
  • cache updates that originated with camd35 or cccam peers may behave differently (or not work at all, untested)
  • if multiple csp cache peers are involved, loops are not checked for or avoided (so make sure not to test against one that might also indirectly be sending to you)
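A rough sketch of the inactivity rule from the second bullet, building on the hypothetical cache_peer table from the earlier sketch; the 120-second figure comes from the list above, while PUSH_IDLE_LIMIT and send_cache_update are made-up names:

#include <stddef.h>
#include <time.h>

#define PUSH_IDLE_LIMIT 120   /* seconds without incoming traffic before pushing stops */

/* Return 1 if this peer has pinged us recently enough to keep pushing to it. */
static int peer_is_active(const struct cache_peer *p)
{
    return (time(NULL) - p->last_ping) <= PUSH_IDLE_LIMIT;
}

/* Push a cache update only to peers that are still active; a new ping
 * (handled elsewhere) refreshes last_ping and re-enables pushing. */
static void push_to_active_peers(const void *update, size_t len)
{
    for (int i = 0; i < peer_count; i++) {
        if (peer_is_active(&peers[i]))
            send_cache_update(&peers[i], update, len);  /* hypothetical sender */
    }
}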
Last edited 9 years ago by bowman

comment:9 by superg1972, 9 years ago

Tested 8668. The issue persists as described above. If multics is linked to an oscam that pushes the cache using the patch posted on multics.info, and you kill this patched oscam and start oscam 8668, then for the first ~10 secs oscam gets the cache from multics and the ping is shown in multics (green, with the value). As soon as the ping is lost from multics, multics stops sending the cache to oscam. At this point, if you restart oscam, the ping is completely lost and oscam stops receiving the cache.
Oscam and multics are on the same server.

comment:10 by superg1972, 9 years ago

tested 8669 ... same issue

comment:11 by superg1972, 9 years ago

Tested version 8677 and it works great. Moreover, I have removed the change that you did in 8668 regarding the cacheex module and it works fantastically. I can push and get cache on multics to and from oscam and vice versa without loops. I can send all the cacheex cache to multics and vice versa.
Could you remove it or make it a parameter?

Thank you

(I don't know if the row numbers below are correct or not)

160  160                if (er->cacheex_src != cl) {
161  161                        if (get_module(cl)->num == R_CSP) { // always send to csp cl
162       -                             if (!er->cacheex_src) cacheex_cache_push_to_client(cl, er); // but not if the origin was cacheex (might loop)
     162  +                             cacheex_cache_push_to_client(cl, er);
163  163                        } else if (cl->typ == 'c' && !cl->dup && cl->account && cl->account->cacheex.mode == 2) { //send cache over user
164  164                                if (get_module(cl)->c_cache_push // cache-push able

comment:12 by bowman, 9 years ago

Re-forwarding other remote cacheex updates should probably be a setting (that defaults to false)... especially if someone later adds the ability to configure targets in oscam, and one of those targets ends up being another oscam, there will inevitably be loops.

I'll hold off on any further changes until someone better acquainted with the oscam internals can weigh in.
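Purely as an illustration of that suggestion (no such option exists in oscam as of this ticket): the lines quoted in comment 11 could be gated by a config flag that defaults to off, e.g. a hypothetical csp_reforward_cacheex:

static int csp_reforward_cacheex = 0;   /* hypothetical setting; default: do not re-forward */

if (er->cacheex_src != cl) {
        if (get_module(cl)->num == R_CSP) { // always send to csp cl
                // push updates of local origin; re-forward remote cacheex updates only if explicitly enabled
                if (!er->cacheex_src || csp_reforward_cacheex)
                        cacheex_cache_push_to_client(cl, er);
        }
}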

comment:13 by japie, 9 years ago

Great work Bowman!
r8677 works splendidly out of the box and is pretty fast too. I think you are probably right about the loop/oscam_client point, but I think that happens with csp and multics as well.

comment:14 by bowman, 9 years ago

An unmodified csp clusteredcache never re-forwards any cache updates of remote origin (i.e. it has the same behaviour as the current oscam implementation: anything sent is obtained from local readers/connectors). In an original csp cache cluster, all updates are sent directly from their origin to all other peers. It is an entirely flat model with no relaying, hence no need for loop prevention at the protocol level.

comment:15 by bowman, 9 years ago

Resolution: fixed
Status: new → closed

Implemented as of r8677
