From YouTube: Jitsi Community Call - March 11, 2019
Description
The Jitsi Community Call is held every other Monday at 10:30AM Central US time at https://meet.jit.si/TheCall. The Jitsi team provides updates and community members can ask the team questions.
See https://community.jitsi.org in the meantime for questions.
Subscribe for our new videos: http://bit.ly/2QVLZCG
Learn more About Jitsi at https://jitsi.org
Try Jitsi Meet now at https://meet.jit.si
See the code at https://github.com/jitsi
I
I think I reinstalled the app, so I don't think I have my stuff set up. But, like Boris said, he's been working on the new bridge, and a bunch of fixes are in. I've been looking at memory and performance stuff, which has been a little stubborn but is slowly coming together, so we're winding down. But we have a lot of perf stuff left, and that can be tricky. So Boris is going to be working on finishing up a bunch of feature stuff, and I'm looking at performance stuff.
J
So it's not really a question, but I'm curious if there's sort of a new, most immediate timeframe for when an early beta might be ready. But my main question is: we're working on, we've kind of moved past, for some years we had a recording setup using Jibri, or a slightly modified version of it, for recording.
J
One of the things that we're trying to support is more accurate mixing of the audio streams, specifically in the recording. As the initial implementation, we were just relying on, you know, sender reports and RTCP packets for timing information, but basically we can't rely on the NTP clocks being in sync, or the local clocks being in sync. We're wondering if there's been any investigation of supporting an implementation that tends to synchronize those clocks, to get more accurate timing data across streams from separate endpoints. I know there's some...
J
There are standards that exist for leveraging SDP, or stream-level specification of a clock, and then there are other sorts of JavaScript approaches that try to use a data channel. But yeah, wondering if this has come up before. I think most of the thinking right now is happening just in the jitter buffer.
D
So, to answer the first question first: in the next couple of days we're going to create an environment and install the new bridge there for testing purposes. You can already build it and run it from the branch; we're trying to keep it at least running, though there are bugs remaining. You will need to manually download a couple of dependencies and install them. I think in the coming one or two weeks we should have progress; we are making this more exposed.
D
Ah no, the synchronization is used between two streams from the same source, that would be audio versus video, rather than to reduce the drift or delay between multiple endpoints.
J
So that's how, if the local clock of the client is way off, that's how that would be normalized: basically it's the RTT, sort of the transfer time, that latency, coupled with the anchor time that represents the receive time, that can be used to kind of estimate what the local time is on the machine relative to the server.
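The estimate described here is essentially the classic NTP-style offset calculation. A minimal sketch, assuming four timestamps captured on each side of a probe round trip (the names are illustrative, not from any Jitsi component):

```python
def estimate_clock_offset(t_send, t_server_recv, t_server_send, t_client_recv):
    """Classic NTP-style RTT/offset estimation between a client and a server.

    t_send        -- client clock when the probe left the client
    t_server_recv -- server clock when the probe arrived
    t_server_send -- server clock when the reply left the server
    t_client_recv -- client clock when the reply arrived
    """
    # Round-trip time, excluding the server's processing time.
    rtt = (t_client_recv - t_send) - (t_server_send - t_server_recv)
    # Offset of the server clock relative to the client clock,
    # assuming a symmetric path (half the RTT each way).
    offset = ((t_server_recv - t_send) + (t_server_send - t_client_recv)) / 2
    return rtt, offset
```

A client running 5 units ahead of the server, with 1 unit of one-way latency, yields an offset of -5, which the client can then apply to map its local timestamps onto the server's timeline.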
J
Okay, that's great! So sorry, one last thing: one of the things we were looking at, in terms of getting even more accuracy, because there's always going to be some level of fuzziness, just because you can't perfectly estimate RTT, and usually that RTT isn't constant either. Is there a recommended way to determine which peers are local?
J
Just in those cases, I was wondering whether we could optimize it further by doing a sync between just all of those local clients. So, for example, if there were five clients, five peers on the same network, we could try to get those clients synced together, and the RTT between them would be much smaller, given that they're on the same local network.
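One common heuristic for finding co-located peers is to compare the public (server-reflexive) addresses they surface, for example in ICE candidates: peers behind the same NAT usually share one public IP. A minimal sketch under that assumption (not part of Jitsi Meet; note that carrier-grade NAT can make unrelated peers share an address):

```python
from collections import defaultdict

def group_local_peers(peers):
    """Group peers that likely share a LAN by their public IP.

    peers -- mapping of peer id to the public IP observed for it
             (e.g. from ICE srflx candidates or the signalling server).
    Returns only groups with more than one member, i.e. probable
    co-located peers worth syncing directly.
    """
    groups = defaultdict(list)
    for peer_id, public_ip in peers.items():
        groups[public_ip].append(peer_id)
    return {ip: ids for ip, ids in groups.items() if len(ids) > 1}
```
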
I
Understood. Yeah, I mean, there's obviously plenty of ways to implement that, but we don't have any p2p mesh sort of thing in Jitsi Meet; that'd be, like, an entire application layer you'd have to put on top, basically, yeah.
J
You'd need, basically, like a separate connection; probably establish a separate data channel connection between each pair of peers, which is probably a lot of work, not that it isn't worth it. Yeah, I'm just trying to figure out if there's a way to optimize the timing further, because we will have situations where there's a bunch of local...
I
Yeah, I would think, if you were gonna do that, you might as well just implement something there and then do the data channel syncing to the bridge, because at least with that you end up with a generalized solution that works regardless of the client. That works anyway, and it's probably just as complex as it is to handle a bunch of the same peers happening to be on the same network type of thing.
I
You're spot on, I mean, it's a hard problem, and as you guys clearly know, it's kind of tricky. I wonder: are you getting serious drift between clients, like the playback of the audio between clients is way out of sync from how it originally was, or is it pretty close but just not perfect?
J
And then I think the last leg would be: we can leverage the active speaker data to adjust the volumes in the mix, which I think will get us the rest of the way. I mean, the only other option, I think, would be to do some sort of post-processing echo cancellation to try to... because I think that's...
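The volume-adjustment idea could look roughly like this: when mixing, keep the reported dominant speaker at full gain and attenuate everyone else. This is an illustrative sketch only, not the Jitsi recorder's implementation; the `duck_gain` value is an assumption:

```python
def mix_with_speaker_gain(frames, dominant_id, duck_gain=0.3):
    """Mix equal-length PCM frames, ducking everyone but the dominant speaker.

    frames      -- mapping of endpoint id to a list of float samples in [-1, 1]
    dominant_id -- endpoint currently reported as the active speaker
    duck_gain   -- attenuation applied to non-dominant streams (assumed value)
    """
    n = len(next(iter(frames.values())))
    mixed = [0.0] * n
    for endpoint, samples in frames.items():
        gain = 1.0 if endpoint == dominant_id else duck_gain
        for i in range(n):
            mixed[i] += gain * samples[i]
    # Clamp to avoid clipping after summation.
    return [max(-1.0, min(1.0, s)) for s in mixed]
```
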
J
Yeah, no, that's a good idea. I mean, I think I've done a fair amount of research into what other WebRTC approaches, or even things outside of WebRTC, do to see how people are syncing clocks, and I think it's basically just constant syncing from, you know, peer to server, server to peer, back and...
A
So basically, all the connections from the client to the server use BOSH, and by default BOSH works on a 60-second timer. So if, for 60 seconds, there are no BOSH packets coming into Prosody, that connection will be dropped and you will see the participant leave. So this is what we currently have. We plan to work more on that in the following months, to implement something more, so it can be even instantaneous, not having to wait 60 seconds, but this is some future work.
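The 60-second window described here behaves like a per-session inactivity watchdog: every incoming BOSH request resets a timer, and the session is dropped when the timer expires. A minimal sketch of that logic (illustrative, not Prosody's code; Prosody's mod_bosh exposes the window as the `bosh_max_inactivity` option):

```python
import time

class BoshSession:
    """Drops a session after `max_inactivity` seconds without any packet."""

    def __init__(self, max_inactivity=60.0, now=time.monotonic):
        self.max_inactivity = max_inactivity
        self._now = now          # injectable clock, for testing
        self._last_packet = now()
        self.dropped = False

    def on_packet(self):
        # Any BOSH request from the client counts as activity.
        self._last_packet = self._now()

    def check(self):
        # Called periodically by the server's timer loop.
        if self._now() - self._last_packet >= self.max_inactivity:
            self.dropped = True  # the participant disappears from the room
        return self.dropped
```
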
L
So even if the client did not send the hang-up, then we should not see some phantom participants or something like that. Currently you will see that for 60 seconds. Okay, so that was one. I have just one little question. So in our scenarios we have speakers sometimes not using headsets, or they're not really close to the microphone. Do you have any recommendations, for example disabling the noise suppression or one of those settings in the client? Is there any recommendation you have for that?
A
I think everything is on by default. I know we have some more flags, like enabling stereo, and if you enable stereo, the echo cancellation stuff stops working. That's why everything is currently the default one; it will activate everything in the browser. So if the browser is not removing the noise, then I'm afraid we cannot do anything about it.
K
Hi, this is Luis. I have two questions; they're kind of unrelated. The first one is around best practices for scaling Jigasi, or if scaling Jigasi is even necessary.
B
Okay, are you specifically talking about incoming or outgoing calls? Incoming, okay. So for incoming, your provider is the one that's selecting which Jigasi, is that correct? At least in our current model, we basically have an algorithm where we decide which Jigasi to use during the incoming call, in logic that is outside of Jigasi.
B
So, for outgoing, the Jigasis sit in a MUC and Jicofo will load-balance them, and Damian, correct me if I'm wrong, but I think it's based on the streams they carry. For incoming calls, I believe it depends entirely on whatever algorithm you're using to choose your Jigasi for the incoming call, and that is gonna depend entirely on, you know, how you're managing your Jigasis, whether you or your provider selects one.
K
It performs... okay, so, perfect, thank you, that's the exact answer, I appreciate that. The second question is around the JVB REST API, and if there is a way, if you wanted to delete a conference or remove a single user from a conference, without using the kick mechanism from the UI; more from an administrative standpoint, from like a back-end REST API, only, you know, accessible from our internal VPC or whatever.
I
Doing this at the JVB level, like, you could force-expire, you know, an endpoint, basically, but that's kind of gonna be cutting the endpoint out from under it, without Jicofo knowing, because Jicofo won't know about it. So it won't be the most graceful way to do it, going directly to the JVB, because that's kind of at the media level. Usually what happens is, when an endpoint leaves, Jicofo knows about it, and then Jicofo tells the bridge, like, hey, basically you don't need to know about this one anymore.
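The force-expire route described here would go through the bridge's COLIBRI REST API: fetch the conference, set `expire: 0` on the target endpoint's channels, and PATCH the result back. A hedged sketch of building that PATCH body, based on the shape of the older JVB COLIBRI REST API (field names may differ by version, and as noted above, Jicofo is not informed, so this is not a graceful kick):

```python
def build_expire_patch(conference, endpoint_id):
    """Build a COLIBRI PATCH body that expires all channels of one endpoint.

    conference  -- dict as returned by GET /colibri/conferences/<id>
    endpoint_id -- the endpoint to cut out of the conference
    """
    patch = {"id": conference["id"], "contents": []}
    for content in conference.get("contents", []):
        channels = [
            {"id": ch["id"], "expire": 0}
            for ch in content.get("channels", [])
            if ch.get("endpoint") == endpoint_id
        ]
        if channels:
            patch["contents"].append({"name": content["name"], "channels": channels})
    return patch
```

The resulting dict would be sent as the JSON body of a `PATCH /colibri/conferences/<id>` request against the bridge's private REST port.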
J
Just, I guess, sorry, a quick follow-up before we lose it: where did you say that that line was, to capture the first frame, right?
J
What is preventing us from doing that now? Like, can we safely rely on the levels in the RTP packets, instead of computing the audio levels by analyzing the audio itself? And I guess the question on top of that is: why was that the approach for the recorder, and why not use the audio levels in the header? Wouldn't that have been simpler?
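The "audio levels in the header" are the RTP client-to-mixer audio level extension from RFC 6464: a one-byte header extension whose top bit is an optional voice-activity flag and whose remaining seven bits encode the level as -dBov (0 is full scale, 127 is silence). A minimal decoder sketch (illustrative, not Jitsi's parser):

```python
def parse_audio_level(ext_byte):
    """Decode an RFC 6464 ssrc-audio-level extension payload byte.

    The most significant bit is the optional voice-activity flag;
    the remaining 7 bits encode the level as -dBov
    (0 = full scale, 127 = silence).
    Returns (voice_activity, level_dbov) with level_dbov in [-127, 0].
    """
    voice = bool(ext_byte & 0x80)
    level = ext_byte & 0x7F
    return voice, -level
```

Reading this byte is much cheaper than decoding the Opus payload and measuring energy, which is the trade-off being discussed.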