From YouTube: Jitsi Community Call - January 28, 2019
Description
The Jitsi Community Call is held every other Monday at 10:30AM Central US time at https://meet.jit.si/TheCall. The Jitsi team provides updates and community members can ask the team questions.
See https://community.jitsi.org in the meantime for questions.
Subscribe for our new videos: http://bit.ly/2QVLZCG
Learn more About Jitsi at https://jitsi.org
Try Jitsi Meet now at https://meet.jit.si
See the code at https://github.com/jitsi
Okay, excellent — so welcome everyone to this edition of the Jitsi community call. We're going to start, as usual, with the most recent updates. First and foremost, if you are running a Jitsi Meet deploy, by all means go and update it to the latest stable as soon as possible, if it's not already there — recent builds fix the problem. We're talking about the Chrome 72 problem: Chrome 72 requires us to pass an SDP semantics parameter, and that has been in builds for a while.
So we have been working hard, as you know from all the previous calls, to make sure that we have better error reporting, with bridge failures better handled and bridges better replaced. All of that is now deployed on meet.jit.si in production. On the private mobile build that I currently have checked out, I can see all the messages that are being sent.
It won't look good — so sometimes that will happen. Still, we think it's better to use this version of the bridge; we think you're going to end up with less crappy video. But just so you're warned, all the issues have not disappeared. George is currently looking at finding and fixing the simulcast handling, so that even if this bug happens it won't be causing problems in the bridge. And we still have to look into why certain scenarios — especially scenarios involving start-muted, which is the case on this call, for example — trigger it.
That's a good question. So, a long, long time ago, in a galaxy absolutely not far away, this thing happened at the IETF, where a bunch of people were trying to agree on how you tell a browser, using the dated, declarative SDP language, that you wanted to do things more complicated than one-to-one. There were a bunch of proposals on the table, but two that really matter for this discussion. One was called Plan B.
The other approach that came out of those discussions used to be called Unified Plan. It had a very pretty name — unified, unification, it's amazing. The implementation is a whole different story, and I'm going to keep my opinions to myself on that point. The fact is, this is where the IETF settled: this is what we're all going to do. It took a while, but eventually Chrome started supporting Unified Plan, and what's happening now with Chrome 72 is that Unified Plan also became the default, unless you explicitly request Plan B. Our plan is the following: Plan B is still supported in Chrome, and for the time being we're going to stick to it. It doesn't really imply anything different — absolutely nothing a user would notice.
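To make the Chrome 72 change concrete: `sdpSemantics` is a real Chrome-specific `RTCPeerConnection` configuration option, and from Chrome 72 onward its default flipped. The helper below is an illustrative sketch (the function name is ours, not a browser or Jitsi API):

```typescript
// Illustrative: which SDP semantics Chrome uses by default, per the
// change described above. "sdpSemantics" itself is a Chrome-specific
// RTCConfiguration extension; defaultSdpSemantics is our helper name.
type SdpSemantics = "plan-b" | "unified-plan";

function defaultSdpSemantics(chromeMajor: number): SdpSemantics {
  // From Chrome 72, Unified Plan is the default; earlier it was Plan B.
  return chromeMajor >= 72 ? "unified-plan" : "plan-b";
}

// An app that wants to keep Plan B on Chrome 72+ must ask explicitly:
// new RTCPeerConnection({ sdpSemantics: "plan-b" } as RTCConfiguration);
```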
Let's see what just arrived in the channel: Chrome 72. That's it.
We have learned a bit from using it and we think we can make it better, so we started working towards this goal. I have a little slide deck with what the plan is, so I'm going to publish it in the community forum later tonight or tomorrow, so you can see where the SDK is going. This should be ready within the next few weeks.
If everything goes according to plan — I mean, it's pretty much a refinement on how you use it; it should be easier to consume by native applications. It will be released with version number 2.0.0, because we will completely break backwards compatibility, and we will commit to semver and stick to it: if we make a backwards-compatible change, the next one will be 2.1.0, and if we break it, it will be a 3.0.0, and so on and so forth.
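The versioning policy just described can be sketched as a tiny function (illustrative only, not part of any Jitsi tooling):

```typescript
// Illustrative semver bump following the policy described above:
// a breaking change bumps the major version, a backwards-compatible
// change bumps the minor version. nextVersion is our helper name.
function nextVersion(current: string, breaking: boolean): string {
  const [major, minor] = current.split(".").map(Number);
  return breaking ? `${major + 1}.0.0` : `${major}.${minor + 1}.0`;
}
```

So a compatible change after 2.0.0 yields 2.1.0, and a breaking change after that yields 3.0.0, exactly as stated in the call.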
So versions will be more predictable, and all of them will be available in Maven — unlike right now, where you saw there was a jump between 1.16 and 1.19, I think, and then 21. So we're hoping to get that on track.
A little bit of this effort can already be seen in the fact that we silently published an iOS pod — it's available on CocoaPods now as well — and we plan on maintaining it going forward, together with the Maven repo with the Android builds. And that's all I've got.
It goes through this SDP interop layer and it gets converted to Unified Plan. Now, obviously, in theory we could do the same here for Chrome, but we don't want that — we don't want an extra layer that just makes things more complicated. So we've decided to not bother with that path, and we'll rewrite everything. So basically, the new Unified Plan approach will natively cover both browsers, which answers your question.
Just go to download.jitsi.org/stable and you'll find it there. Thank you. Yeah — and obviously, using Unified Plan natively, we expect, will give us better Firefox support altogether, because we're doing things the same way now. One can always hope — obviously we won't know until we actually hit the implementation and see, but yeah, we hope. Okay, well, at that point we're open to questions from the floor.
Who else would like to chime in and ask questions?
Yeah, they are really, really old, and so we've actually made it a habit to not commit to roadmaps, just because objectives change and we need to refocus. We can always talk about the things that are being worked on right now, of which I mentioned a few at the start of the call. There's, obviously, the bridge rewrite, which involves better performance — more participants per conference, more participants per machine — working toward a bandwidth-bound objective, so that what blocks you on your server is not CPU but bandwidth.
This is the purpose of the rewrite of the lower layers of the bridge. We also have things like more investment in Octo. This is our multi-bridge protocol that allows bridges to be cascaded, so you would notice that on this specific call people are joining on bridges across the world, and the bridges are talking to each other. We're planning on developing that feature so that it will allow for hundreds of participants in a single call, distributed across several bridges. And then there's a bunch of other stuff. Is there anything in particular?
That was me — this is Nico from one of the Denver offices. I actually kind of had a question around WebRTC in an Electron environment, trying to screen share Microsoft 365 products. I found an open bug on the Chromium page, and I'm kind of curious if you guys have had the same issue. Basically, the behavior is: someone tries to screen share just the application — any 365 product — and you end up getting the first frame, and then the UI doesn't update. You'll see mouse movement; you'll see everything behaving like normal.
It's an Electron thing. So in Chrome the workaround is to run Electron without hardware acceleration — I think that solves it, by passing some command-line flag. It's like a ping-pong thing, where Electron seems to tell Microsoft to do something, and then... There's been an open issue in the Electron repo for a while, and as far as I know — I don't follow it daily, but every once in a while it moves.
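For reference, a sketch of the flag-based workaround mentioned here. `--disable-gpu` is a real Chromium/Electron switch (and `app.disableHardwareAcceleration()` does the same from the main process), but whether it fixes this particular 365 screen-share freeze is the speaker's tentative suggestion; the helper function is ours for illustration:

```typescript
// Illustrative: build the argv for launching Electron with GPU
// acceleration off, the workaround suggested in the call.
// launchArgs is our helper name, not an Electron API.
function launchArgs(disableGpu: boolean): string[] {
  const args = ["."];
  if (disableGpu) {
    args.push("--disable-gpu"); // real Chromium switch: software rendering
  }
  return args;
}

// Usage: electron ${launchArgs(true).join(" ")}
// From the main process, calling app.disableHardwareAcceleration()
// before the "ready" event has the same effect.
```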
Gus, hey! Yeah, it's been pretty good — how are you? I'm fine. Will you be a father? Not sure yet. Maybe I'll come for the beer. Awesome.
All right, I've got two questions, one of which won't surprise a couple of people in here. I've been working with Brian — who is aware of it, at least since this afternoon — on an issue that I have because I'm trying to run Jitsi code in an environment that has a Bouncy Castle version that is newer than what you guys are using, and that introduces a compatibility issue now.
I'd have to look at it, but yeah — I saw your email this morning; I'll try and look at it this morning. I'm sure it's probably fine. I won't be able to test it, but if you've tested it and you think it's looking good, and there are no regressions or anything that you can tell, then yeah.
I don't know if you looked, but on the react-native-webrtc repo, issue 176 is where the commentary is. Pretty much the last comment — or the one before last — contains a branch from the guy that apparently did it, and when he mentioned it, they said: well, I'm not going to maintain your patches. So as long as this is not upstream, or cannot be done on the side, it's not going to happen.
This is Paul. A follow-up to something — a new question I posted in the community forums. So we've made some progress with leveraging the RecorderRtpImpl here — yeah, it's exciting. However, we are experiencing some significant choppiness, even with an audio-only use case.
You know, using Opus — and we're trying to better understand what the cause of that might be. We do something very similar in Jigasi, using just a Recorder and the RecorderRtpImpl — it's like the mixer — and we didn't experience any similar choppiness, and I believe that's also using Opus.
I assume the jitter buffer configuration is about the same. We've been tweaking the various knobs — the jitter buffer and the RTP packet queue — and it's helped, but I think fundamentally what we're trying to understand is where this comes from. Could this be a blocking issue? Is there something that is synchronized, or blocking, sort of downstream on the recorder side?
It's just audio — we're not doing anything else. There are a few things that we needed that were a little bit different, use-case-wise, so we're doing things more downstream. We're actually taking this audio data and pushing it out into the Kinesis queue and doing additional post-processing on it. But we're also extracting the RTCP sender reports and some other RTCP packets, to get improved timing data that we can use in our post-processing.
So yeah, one of my questions is: could this be contention? We tried pushing things into separate threads in the recorder, just to see if maybe there's contention in the recorder that's sort of blocking things — things just getting backed up — and we did notice dropped packets, it looks like in the jitter buffer.
We were also wondering about this: what we initially had was RED and FEC disabled in the RecorderRtpImpl, because we were getting some exceptions; we were able to bring that back. So I was wondering if maybe it's something related to that, since the code has probably changed pretty significantly over the last years.
Or is this more of a problem with the older JMF approach? Because the RecorderRtpImpl uses, you know, this older style of creating processors and configuring and realizing them, that type of thing, and it uses the audio silence device and the active speaker detection. I was wondering if maybe the transcoding on that side was being a problem — but it doesn't really make sense; I assume the same thing is happening in Jigasi.
That's what I'm trying to understand: what is different about this approach versus what's happening in Jigasi? Because it's still transcoding — the transcoder is using the audio silence media device and the audio mixer — and we were using the receive-buffer listeners to get access to the individual audio streams. So: any theories on what might be causing this bottling up, this contention? And could it be some sort of problem with this JMF style?
That'd be the first place I'd draw the line, yeah. And if it's the performance backing up like you're saying, then I'd say there's a decent chance — if you find packets are getting dropped at, like, the system queue — that some thread somewhere must be holding something up, and maybe there's some task that needs to be offloaded to another thread, so that it doesn't block the main read thread. If it's on the inside of the bridge, then it's probably something similar.
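The offloading pattern Brian suggests — keep the packet-read loop free and hand slow work (file I/O, post-processing) to another worker — might look like this sketch (illustrative only, not Jitsi code; the class and method names are ours):

```typescript
// Illustrative offload pattern: the reader enqueues slow work instead
// of doing it inline, so the read loop is never the stage that backs
// the whole pipeline up. A dedicated worker drains the queue.
type Task = () => void;

class Offloader {
  private pending: Task[] = [];

  // Called from the hot read loop: must return immediately.
  submit(task: Task): void {
    this.pending.push(task);
  }

  // Called by a worker on its own thread/loop: runs queued work.
  drain(): number {
    const tasks = this.pending.splice(0);
    for (const task of tasks) task();
    return tasks.length;
  }
}
```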
It has its own transform engine to intercept the RTCP and RTP packets, I guess, right? And then from there, I guess it does the regular JMF flow, right? So — if this is potentially a compatibility problem with some of the older JMF stuff, is there a way that we could still use the translator, but with the audio silence media device, and do something similar to what Jigasi is doing, so we can get the best of both?
My suggestion was — right, because you also tried the Jigasi approach and that didn't work — my suggestion would be: why don't you write your own endpoint instead of Jigasi? Something very simple that does what you need, because you just need a recorder implementation that's going to be doing things at the packet level, basically. You'd reuse a lot of the code that you get in the bridge right now, connect it as a separate endpoint, and make it much simpler, so that it's easier for you to debug — and you go down that path.
So we just kind of peeked at a couple of things, but I wonder if, since we already have the translator there, it might be simpler to just leverage that. It was good for our use case — predominantly, really, that's one of the main reasons we're trying to get the audio; we're kind of trying to kill two birds with one stone. So that's it.
We know that, and then I think at that point it probably helps us, because then we'll have a simpler infrastructure — we're actually mostly Prosody-based anyway — so I think getting things to be simpler would just be great. We've been watching Brian's work, and it all seems really great, but we're also spending a lot of time on this and trying to get to a solution as quickly as we can, so, you know.
Is it better to just keep tweaking the buffers and pushing things into separate threads, or is there also a way to sort of — you have the translator there — pull the RTCP packets that we need from that, and then use just an existing MediaStream with the audio silence media device, to get around whatever badness is happening in JMF? Or should it just work? I mean, are there...?
I never really interacted with this recording code directly — it was kind of before my time — but my thought in general would just be: yeah, I think your instincts are right. It's probably not that FMJ is all of a sudden magically unable to handle this. So to me, my gut tells me that it's some interaction of some threads not playing nice.
So then it's just a question of what thread isn't able to do the work that it needs to get done, because it's being slowed down by something. And if you treat it like that, you can kind of look at the queues in each of the places. Somewhere there's a queue that's overflowing — so you have a clue.
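The debugging pattern described — instrument each queue so the overflowing stage shows up in metrics instead of silently losing packets — can be sketched like this (illustrative, not Jitsi code; names are ours):

```typescript
// Illustrative bounded queue with a drop counter: the stage whose
// counter keeps growing is the one being starved by a slow consumer.
class DroppingQueue<T> {
  private items: T[] = [];
  public dropped = 0; // the "clue": monitor this per pipeline stage

  constructor(private capacity: number) {}

  offer(item: T): boolean {
    if (this.items.length >= this.capacity) {
      this.dropped++;
      return false; // full: drop instead of blocking the producer
    }
    this.items.push(item);
    return true;
  }

  poll(): T | undefined {
    return this.items.shift(); // FIFO order
  }
}
```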
So if you could add some metrics or something to some queues — for the system one, I think there's actually a Linux command, I can't remember it off the top of my head, but I'd looked at it before — it'll tell you the status of the queue and how many things have been dropped. So that might give you some clues, and then you can work from there.
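The Linux command Brian is reaching for is probably `netstat -su` (its UDP section reports receive-buffer errors); the kernel also exposes a per-socket `drops` counter as the last column of `/proc/net/udp`. A small illustrative parser for that file — the function name is ours, not part of any Jitsi tool:

```typescript
// Illustrative: sum the per-socket "drops" counter, which is the last
// whitespace-separated column of each socket line in /proc/net/udp.
function parseUdpDrops(procNetUdp: string): number {
  return procNetUdp
    .trim()
    .split("\n")
    .slice(1) // skip the header line
    .map((line) => Number(line.trim().split(/\s+/).pop()))
    .reduce((sum, drops) => sum + drops, 0);
}
```

A non-zero and growing total means the kernel is discarding datagrams because the application is not reading its socket fast enough — exactly the "blocked upstream" symptom discussed here.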
But to me it seems like it's a thread — like a resource problem — and something's blowing up.
If you can get to that, then you can find out, okay, everything this thread is doing, and it'll probably be pretty obvious — like, oh yes, we're doing this thing where we're doing a bunch of file I/O on the thread that needs to read from this queue as quickly as possible, and maybe that'll be your problem. Yeah.
I think — and, you know, because we don't have that much data coming in, it's more likely one of the internal queues, right? So I think there's the jitter buffer, and then there's this RTP packet queue. Are there any other queues that we should be aware of — like a separate queue for transcoding, or for the processors, or anything else?
And you mentioned that you were doubtful that this was where packets were being dropped, because there's not a lot of data — but keep in mind that if they were dropped, it wouldn't be because the kernel isn't somehow able to cope; it would still be because we're blocking things upstream somehow. However, if those are bursty things, it could still be a solution for you to just increase that number.
Yeah, that sounds great, and yeah, I think just being aware of where the queues are — they probably mostly have logging statements at a debug level, so maybe if we just turn on those logs and check every single one and see... My guess is that we're missing one of those queues, and that's the source of the blocking somewhere.
It's affecting other streams too — you know, at that point where things are still coming through. Probably, right? Like, if it was isolated, it would be someplace where a thread is only doing that work, and I think most of the layers in the bridge use dedicated threads where they need them. So if it's affecting all the streams, then there's a thread that's doing work for everyone that's getting held up. — Okay, that's good, yeah. So it's probably then at the RTP level, right, where that happens.
...what's doing what. But yeah, it might be even lower, at the ice4j level or something — ice4j threads may be queueing up and doing work. It might be some of those threads — maybe Boris remembers — but something's coming up and doing some of the work, and if it's getting held up in one of those paths, you know, doing the file I/O or something, then it could be affecting the others. — Okay.
So beyond that, we haven't really tried. Oh — we did resolve a bunch of issues where... actually, it wasn't so much the scalability, but the leakage was just big, so that if you went through multiple calls, then basically when a third participant would join, things would start to cramp up. So, with whatever experimentation...
And the other thing is: we've seen something about the redundancy of Jitsi Videobridge, but it was implied that Jicofo wasn't something that could easily be made redundant — if your Jicofo went down, then you would lose all of the calls that it was handling, and there wasn't a way of having it talk to another Jicofo and keeping that one updated about what conferences it has. Is that something that's in the pipeline, or did I misread something? Yeah.
But it does that automatically — that's handled in the Jitsi Meet code. And Jicofo is a fairly stable part, so it fails rarely enough that we didn't look into making that part particularly unnoticeable. Basically, it's the same place where you would handle problems with the web server that serves your page, and things like that. So yeah, there's really...
Well, Jicofo has all the state on who was allocated what streams on which ports, whether they had video on or off, and all that — so you can't really do that; that information is just not in Prosody. So no, that won't work that way. But what is your objective? We're talking about a 15-second interruption — and that's if it's not automatic.