Store the size in the context because different streams may have different
maximum payload sizes. For example, an SRTP stream with RTP authentication
enabled has a smaller payload size than a plain RTP stream.
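A minimal sketch of the idea. The sizes are illustrative assumptions, not kvzRTP's actual constants; the 10-byte tag corresponds to HMAC-SHA1-80, the default SRTP authentication tag length from RFC 3711:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative header sizes for an Ethernet/IPv4/UDP/RTP stack.
constexpr size_t ETH_MTU       = 1500;
constexpr size_t IPV4_HDR      = 20;
constexpr size_t UDP_HDR       = 8;
constexpr size_t RTP_HDR       = 12;
constexpr size_t SRTP_AUTH_TAG = 10;   // HMAC-SHA1-80 tag (RFC 3711)

// Per-stream maximum payload: an authenticated SRTP stream must leave
// room for the authentication tag, so its payload is smaller.
size_t max_payload(bool srtp_auth)
{
    size_t payload = ETH_MTU - IPV4_HDR - UDP_HDR - RTP_HDR;
    if (srtp_auth)
        payload -= SRTP_AUTH_TAG;
    return payload;
}
```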
Right after ZRTP has finished and the media reception has been started,
there may be a few ZRTP ACK/ConfACK messages coming in which are
perfectly valid.
Failing to check this caused an infrequent error where kvzRTP
started a fragmentation unit but, because there were not enough bytes,
could not finish it, so the receiver never returned the frame
to the user because it only received the first fragment (which was the full
frame).
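The missing guard could look something like this. `MAX_PAYLOAD` is illustrative, and the 3-byte per-fragment overhead assumes the HEVC PayloadHdr (2 bytes) plus FU header (1 byte) from RFC 7798:

```cpp
#include <cassert>
#include <cstddef>

constexpr size_t MAX_PAYLOAD = 1446; // illustrative per-packet payload size

// A NAL unit is fragmented only when it cannot fit in a single packet;
// otherwise a fragmentation unit would be started but never finished.
size_t fragment_count(size_t nal_size)
{
    if (nal_size <= MAX_PAYLOAD)
        return 1; // send as a single packet, no FU needed

    // each fragment loses 3 bytes to the HEVC PayloadHdr + FU header
    size_t fu_payload = MAX_PAYLOAD - 3;
    return (nal_size + fu_payload - 1) / fu_payload;
}
```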
Dropping new inter frames while an intra frame is in progress is a little
too aggressive, considering the intra might be finished by the time
all inter frame packets have been received, so we would essentially lose
a perfectly valid frame for no reason.
Overall code cleanup and removal of unnecessarily complex logic.
Now the RTP frame delays are more adaptive to Kvazzup's needs:
the frame delay for intra frames is the intra period, so an intra frame
can be late by e.g. 2 seconds, whereas inter frames must be received
within the 100 ms limit or they are dropped.
This change should remove the gray screens completely.
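A hedged sketch of that deadline logic; the names and the 2-second intra period are illustrative, not the actual kvzRTP code:

```cpp
#include <cassert>
#include <cstdint>

constexpr uint64_t INTER_DELAY_MS = 100; // fixed window for inter frames

// Intra frames may be as late as the intra period, because a new intra
// will arrive by then anyway; inter frames get the strict 100 ms limit.
uint64_t frame_deadline_ms(bool is_intra, uint64_t intra_period_ms)
{
    return is_intra ? intra_period_ms : INTER_DELAY_MS;
}

bool should_drop(bool is_intra, uint64_t intra_period_ms, uint64_t age_ms)
{
    return age_ms > frame_deadline_ms(is_intra, intra_period_ms);
}
```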
Also remove the global frame array for HEVC fragments and use a
frame-specific map to store them. I noticed that with high-quality streams
the number of packets used by one frame was getting close to UINT16_MAX,
which caused kvzRTP to overwrite previous frames' fragments
with new ones and thus drop frames by itself. Now each frame
has its own fragment buffer, so the frame size no longer matters.
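The change can be sketched roughly like this; the class name and the choice of the RTP timestamp as the frame key are assumptions for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

using fragment = std::vector<uint8_t>;

// Instead of one global array indexed by the 16-bit RTP sequence number
// (which wraps at UINT16_MAX), keep a fragment buffer per frame, keyed
// here by the frame's RTP timestamp.
struct frame_assembler {
    // timestamp -> (sequence number -> fragment payload)
    std::map<uint32_t, std::map<uint16_t, fragment>> frames;

    void add(uint32_t ts, uint16_t seq, fragment frag)
    {
        // fragments of different frames can no longer clobber each other,
        // even when their sequence numbers collide after a wrap
        frames[ts][seq] = std::move(frag);
    }
};
```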
At least on Windows (and possibly on Linux), FD_SET must always be
called before select() or else it won't work, and on Linux
the struct timeval values are changed to reflect the amount of time
spent waiting on the file descriptors.
34 ms is too strict a deadline for some use cases. Increasing the
default frame delay to 100 ms does not hurt cases where the frames
are received within 34 milliseconds anyway, but it allows users
to stream at high resolutions/low QP values without recompiling the library.
Making this adjustment dynamic based on packet timestamps is
a planned future task, but this shall act as a band-aid until then.
MinGW seems to employ some very aggressive optimizations that
sometimes create invalid packets in which both the Start and End
flags are set simultaneously.
This fixes the problem where an intra frame seemed to be dropped
every now and then. Nothing was actually dropped, but kvzRTP
received an invalid packet and thus discarded the whole frame.
To be honest, I have no idea why this fix worked, but it did.
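The sanity check that catches such packets can be sketched like this, using the HEVC FU header layout from RFC 7798 (bit 7 = Start, bit 6 = End); the function name is illustrative:

```cpp
#include <cassert>
#include <cstdint>

// In an HEVC FU header, a fragment claiming to be both the first and
// the last fragment of a NAL unit is invalid and must be discarded.
bool valid_fu_header(uint8_t fu_header)
{
    bool start = fu_header & 0x80; // S flag, bit 7
    bool end   = fu_header & 0x40; // E flag, bit 6
    return !(start && end);
}
```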
The initial idea was to integrate Crypto++ into kvzRTP to make usage
very easy, but as it turns out, compiling that library is quite
complex, so it's better to use the Makefiles they provide.
This means that kvzRTP shall have one extra dependency, Crypto++, IF the
application wishes to use SRTP/ZRTP. The compilation and linking should
be quite straightforward: if the application wants to use SRTP/ZRTP,
it must make that decision when kvzRTP is compiled by providing the
-D__RTP_CRYPTO__ flag to the compiler and by adding -lcryptopp to
the link list of the application.
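A sketch of the resulting build steps; the directory names and Makefile variables are assumptions, only the -D__RTP_CRYPTO__ and -lcryptopp flags come from the text above:

```shell
# Build Crypto++ with its own Makefile, then kvzRTP with crypto enabled.
# (Exact Makefile variables and paths are hypothetical.)
make -C cryptopp
make CXXFLAGS="-D__RTP_CRYPTO__"

# Link the application against both libraries:
g++ app.cpp -o app -lkvzrtp -lcryptopp
```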
The chances that ZRTP is enabled are quite high, and because those
messages are received on the same socket as media, the error messages
would likely flood the log, so it is better to turn them off.
Now for each call (or IP) there will be a separate session which
shall contain one or more multimedia streams. Each session has a
single ZRTP object, and each multimedia stream shall have a single
socket which both the sender and receiver use to enable hole punching
on all platforms.
Each multimedia stream shall also have a single SRTP instance which
derives its keys from the common ZRTP session.
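An illustrative-only sketch of that layout; these are not the real kvzRTP classes, just the ownership relations described above:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// One ZRTP context per session; one SRTP context per stream, keyed
// from the session's ZRTP. Placeholder types, not the real library.
struct zrtp_ctx {};
struct srtp_ctx { std::shared_ptr<zrtp_ctx> key_source; };
struct socket_t { int fd = -1; };

struct media_stream {
    socket_t sock;   // single socket shared by sender and receiver
    srtp_ctx srtp;   // per-stream SRTP instance
};

struct session {
    std::string remote_ip;
    std::shared_ptr<zrtp_ctx> zrtp = std::make_shared<zrtp_ctx>();
    std::vector<media_stream> streams;

    media_stream &add_stream()
    {
        // every stream's SRTP derives its keys from the shared ZRTP
        streams.push_back({ socket_t{}, srtp_ctx{ zrtp } });
        return streams.back();
    }
};
```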
It is highly likely that an invalid fragment will be received at some
point, so stopping the receiver and forcing the call to be restarted after
each invalid fragment is very user-hostile.
Making the configuration global was moronic considering there are
different types of media streams per session (e.g. Opus and HEVC)
which have very different needs. For example, setting the
receiver's UDP buffer size to 40 MB would make no sense for Opus.
Now each connection can be configured individually, which is also
a needed feature for SRTP.
This change reverted the changes made earlier to the global API.
The security layer is injected between reading a datagram from the OS and
RTP/RTCP payload processing, so the obvious place for that layer is the socket.
Make all recv/send calls go through the socket API so the security
layer's function calls don't have to be copied everywhere.
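A hedged sketch of that design choice: one wrapper owns all I/O, so the encrypt/decrypt hooks run in exactly one place. All names are illustrative, and the `wire` buffer stands in for the actual UDP socket:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using buffer = std::vector<uint8_t>;

// Optional hooks installed when crypto is enabled; null means plain RTP.
struct srtp_hooks {
    buffer (*encrypt)(const buffer &) = nullptr;
    buffer (*decrypt)(const buffer &) = nullptr;
};

// Every send/recv funnels through this wrapper, so the security layer
// does not have to be duplicated at each call site.
struct rtp_socket {
    srtp_hooks hooks;
    buffer wire; // stand-in for the actual UDP socket

    void send(buffer pkt)
    {
        wire = hooks.encrypt ? hooks.encrypt(pkt) : pkt;
    }

    buffer recv()
    {
        return hooks.decrypt ? hooks.decrypt(wire) : wire;
    }
};
```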