From YouTube: Jitsi Community Call - May 20, 2019
Description
The Jitsi Community Call is held every other Monday at 10:30AM Central US time at https://meet.jit.si/TheCall. The Jitsi team provides updates and community members can ask the team questions.
See https://community.jitsi.org in the meantime for questions.
Subscribe for our new videos: http://bit.ly/2QVLZCG
Learn more About Jitsi at https://jitsi.org
Try Jitsi Meet now at https://meet.jit.si
See the code at https://github.com/jitsi
A: We have most of our blockers closed down. We have some work with Jigasi and some work with garbage collection that we're doing now. We discovered that, with the new code, we get freezes due to garbage collection that are too long: maybe two 300-millisecond pauses during which we drop packets. That's too long, and we're trying to figure it out.
A: This conference is being livestreamed to YouTube right now; I'll see about getting the link in a second.
B: If I were to highlight something, I think it's the fact that we now have a flag that allows us to make a libre build of the Jitsi Meet app. This build type doesn't include any analytics, any crash reporting, or any proprietary Google libraries. What's going to happen, hopefully soon (I'm in touch with the guys), is that it's going to be available on F-Droid. So whenever we release to the store, a build will be triggered on F-Droid with the libre build flag set.
C: I can go next. Okay, I've been mainly working on Spot for the last two weeks. We are going to release an Electron version of Spot. That was needed for a few reasons; one of them is to provide some native functionality for Spot, for example volume control for the host OS, and beacon support. That's mostly what the last two weeks were about.
A: That reminds me: just last week, or the week before, we changed the way that Jicofo does load balancing. Instead of an estimation based on the number of video streams, it uses the current bitrate of the bridges, so the result is that the bitrate is evenly distributed. Right now there is a downside when we migrate conferences from a bridge which failed, and that's something we'll be addressing in the future.
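The selection change described here amounts to picking the bridge with the lowest measured bitrate instead of estimating load from stream counts. A minimal sketch of that idea, with invented function and field names (this is an illustration, not Jicofo's actual code):

```javascript
// Hypothetical sketch of bitrate-based bridge selection (not Jicofo's real code).
// Each bridge reports its current total bitrate; we pick the least-loaded one,
// so new conferences keep the bitrate evenly distributed across bridges.
function selectBridge(bridges) {
  if (bridges.length === 0) {
    return null; // no bridge available
  }
  return bridges.reduce((best, b) =>
    b.currentBitrateKbps < best.currentBitrateKbps ? b : best);
}
```

With stream-count estimation, a bridge carrying many idle streams can look busier than one pushing heavy video; using measured bitrate avoids that, at the cost of the failover gap mentioned in the call.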
L: All fingers crossed, everyone. So, I know that many people were interested in this, and hopefully this gives you a sense of how we're thinking about it. And then I was wondering: does anyone in the community here know what Spot is, and are you interested in that work, and to what level? I'd love to hear any feedback there.
L: You run the Spot app, or the Spot page, in full screen, and then you have a separate controller, usually a tablet, Android or an iPad (I guess not a Huawei, too soon), and then you use that tablet to control the meeting: start it, mute, share the desktop, all of that. The point being that this lets you equip a place in a way that's user-friendly, that doesn't require mucking with a keyboard and a mouse. Does that make sense?
L: Yeah, that sounds really interesting. So is that something that, in general, people here on this call are struggling with? Is that a request that some of you often run into? Or has that been mostly, you would say, on the lower end of things priority-wise that you would like Jitsi to do?
L: So, Paul, you think that you would deploy that and use that if it existed? I think so; I think we would have use for that. Would you be signed up for it already, or is that just your first impression? Is that something that you believe you would use internally as a conferencing tool, or more like it's part of whatever else you're doing with IT? That would also fit there.
L: Thanks for sharing the feedback. Just to be clear and completely transparent, we haven't yet decided exactly how the open-source aspect of this would work. It was rather complicated to think through everything, with the back-end support being open source and generic. So at this point our stance is that we're going to open source the clients.
L: Those are the front-end apps, and then there's the whole back-end part, part of which runs on Prosody, and part of which is entirely new stuff that helps you pair your remote controls to an existing room and helps the remote controls talk to the actual computer running the meeting. I guess we should just do a presentation of the entire architecture once we push it out; yeah, that would be cool.
L: Right, we have, I suppose, at least one, probably two months ahead of us before we're ready for this. But yes, good to know there's some interest. If anyone else is also interested and wants to hear more details, we can probably talk about it on the next call and answer questions.
E: And one idea, Emil, since you talked about the open source aspect being, you know, something we're still thinking about: it would be to do the architecture presentation like you speak of, and then ask how people might want to consume it as open-source consumers when we get there, to help with input and drive that decision. Yeah, we could, probably.
M: But there's also, you know, the end-to-end ping. So my first question is about the stats that show up in the filmstrip, where it shows each participant's RTT: that's just the timing from the data-channel-based end-to-end ping, right? And does it make sense for us to leverage that, or the calculation that happens with the bandwidth estimation on the video side?
M: Not that one. There's a general RTT calculation that happens, but it seems to only happen in the video RTCP termination. For audio it never gets called, just because it's tied to one of the bandwidth estimations, and I noticed in testing that it doesn't actually get called if you just have an audio channel.
M: So in the case of audio-only mode, it looks like that method of calculating RTT doesn't actually get called, but we can just use the end-to-end ping; I think it's configurable, so you can turn it off, and it gives you a simple thing to use. I'm just curious, since there are a few different ways of getting roughly the same metric, whether there's one that would be better to use over another.
A: The end-to-end ping might be a little confusing, because if you just look at it, you're not able to tell which endpoint is the one that has a long round-trip time. So if you want just the leg from a participant to their bridge, you can always get that from the ICE pair that is used. I'm not sure how it's exposed, but it's in the many stats somewhere, I guess. There's another one for the video channel that is used somewhere, I don't remember anymore, but even without video you should be able to get it.
H: Your observations are correct: we don't calculate it, our RTCP termination doesn't produce it. So if you don't have any video traffic, or rather if you don't have any video channels or video sources, then we don't have an RTT for it, right. So regarding RTT: if you can get it from the stats, that would be the best option, I think.
H: There's a connection stats object somewhere, at least in the old ones, the legacy stats; they had it. So they have an object, connection stats, and there you get an RTT that's independent from the video stream RTT or the audio stream RTT. I'm not sure if it's in the new promise-based stats, though. If you don't find it there... no, do you know where that is? Often it's in the peer connection stats or something.
H: That would be in the peer connection stats, which I'm not sure that we expose. Yeah, that's another complication: I'm not sure whether we expose that, or whether we provide access to this kind of statistics from the user application. Does anybody know if we expose the WebRTC stats to a user application?
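The per-leg RTT being discussed is available in standard WebRTC stats: the selected `candidate-pair` entry carries `currentRoundTripTime` (in seconds). A minimal sketch of pulling it out, modeling the stats report as a plain array of entries as `RTCPeerConnection.getStats()` would yield:

```javascript
// Extract the ICE round-trip time from WebRTC stats entries.
// Per the W3C webrtc-stats spec, the nominated, succeeded 'candidate-pair'
// entry exposes currentRoundTripTime in seconds. This measures the
// participant-to-bridge leg, independent of any audio/video stream RTT.
function iceRttMs(statsEntries) {
  const pair = statsEntries.find(
    s => s.type === 'candidate-pair' && s.nominated && s.state === 'succeeded');
  return pair && pair.currentRoundTripTime !== undefined
    ? pair.currentRoundTripTime * 1000 // convert to milliseconds
    : null;
}
```

In a browser this would be fed from `(await pc.getStats()).values()`; whether lib-jitsi-meet surfaces these raw entries to the application is exactly the open question in the call.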
A: So, a few months ago we started a project where we are rewriting big chunks of the videobridge, in order to stop using some legacy code that we were using, and in order to be able to improve performance. It pretty much replaces libjitsi with new code, so we have a completely new RTP stack and a new pipeline for handling media.
B: Hopefully not for the first stage; I would say no, because it would look similar to the way things are done on Firefox now. But to add to what Boris said: the jury is still out on RID and such to begin with, because the current state is pretty bad; basically, there is no way to do simulcast.
B: I forgot who, but someone was writing a draft to keep the SSRCs in the signaling, so we can still do simulcast, let's say, the old-school way, quote-unquote, and the migration can be done, as Boris mentioned, in two stages. If Emil is still here, I think he remembers who it is and what the draft name is; but otherwise that's the current state of affairs, let's say.
B: The part that does the conversion from Jingle to SDP can take care of this, I think. It is not impossible that changes are needed; I'm pretty sure that when we do RID, changes will be needed. But I'm hopeful, at least, that we will be able to go to Unified Plan first and do RID later. So in a two-stage approach, the migration to Unified Plan should be transparent.
B: It's not even a flag, right; no, it's a constraint. So at this point the plan, pun intended, is that we modify the client code to use Unified Plan and start using that SDP, and then the bridge needs no changes at all. In the lib-jitsi-meet code there is a setting, what was the name, there's a setting to select which SDP plan you want, and right now it is set to Plan B.
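The setting being described maps to Chrome's `sdpSemantics` option on `RTCPeerConnection`; where exactly lib-jitsi-meet sets it may differ, so treat this as a generic illustration of the switch:

```javascript
// Generic illustration of selecting the SDP dialect in Chrome
// (not the actual lib-jitsi-meet code). Flipping this from 'plan-b'
// to 'unified-plan' is the client-side half of the migration discussed;
// the bridge itself needs no changes.
function peerConnectionConfig(useUnifiedPlan) {
  return {
    sdpSemantics: useUnifiedPlan ? 'unified-plan' : 'plan-b',
  };
}
// In a browser: new RTCPeerConnection(peerConnectionConfig(true));
```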
O: But without Octo it's supposed to work, right? I think so. That is strange, because I don't have Octo in my deployment. I tried it with the latest version of Chrome and it doesn't work; I tried it even on meet.jit.si and it doesn't work. I can see on the webrtc-internals tab that my browser sends at the same resolution and uses the same bandwidth when I use it with another participant.
H: There are rough edges. Brian will know better, but I think there were bugs and issues around that; there was a constraint on a specific version of Chrome, and below that version it wouldn't work, but we're well beyond that version now, it has hit stable. I think the requirement was Chrome 69 or Chrome 70. But apparently something is broken and it doesn't work as expected.
P: No problem. A couple of questions here: is it possible for a participant that is sharing their screen to also still be seen by the other participants? I know one way around it is to log in on another device and maybe use the other device for sharing your screen, but is there a way that you can do both at the same time? Not at this point? Okay, but so that is a workaround.
P: The link is in the chat, fantastic. Okay, so then it is possible: if we have these rooms, the users could see which one is vacant, and then maybe they could go in there, right, if they want to create a little one-on-one or a many-to-many conversation. The SDK that is used for the Jitsi mobile app: do you guys know if that is open source and available somewhere as well? Yeah, sure.
B: So the jitsi-meet repo is actually, right now, the source for three different things. One is the Jitsi Meet website that you're used to when you go to meet.jit.si, the front end for the conference; but it is also the mobile app, which uses React Native, so it uses JavaScript for all the internal logic. And then the mobile apps are kind of just a very thin wrapper around the Mobile SDK, which is a native SDK for mobile.
B: So if you want to modify the JavaScript files to do something else, or make something look somewhat different, you can build the artifacts and deploy them to your own Maven repo in the case of Android, or to different CocoaPods, and create an app of your own and publish it to the store; you can do whatever you want. Okay.
P: Regarding the desktop app: is it possible that it can be customized so that there is an API that grabs the existing username? Because on our system we have a database of users and little mini profiles of who they are: their name, their occupation, their interests and so forth. So we need to have a way of taking, you know, who the participants in a discussion are, and being able to just pass that particular username to our system, so that we can show people some more information about who's talking.
G: Sorry, I was trying to unmute. Can you help me understand your use case a little more? So are you imagining that a user would make a click inside the Jitsi Meet user interface, but that you would somehow be displaying extra information that's coming from a third-party source? Exactly.
P: I don't know if this is something that we could do. I mean, right now we have a workaround: when somebody logs into our system, and we have Jitsi Meet embedded, and then they come into the Jitsi room, it will automatically say what their username is, so you see it on the screen. You know, they don't have to type in their name; it's actually there already, their username and their real name, but we don't have a way of getting that out.
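The prefilled-name behavior described here is what the iframe (external) API's `userInfo` option provides when embedding Jitsi Meet. A small sketch of building the embed options; the domain, room, and user fields are placeholders for illustration:

```javascript
// Build options for the Jitsi Meet iframe API so the embedded conference
// starts with the logged-in user's name already set (no typing needed).
// The user object shape (realName/username) is this site's own, not Jitsi's.
function buildEmbedOptions(user, roomName) {
  return {
    roomName,
    userInfo: {
      displayName: user.realName || user.username,
    },
  };
}
// In the page, roughly:
//   new JitsiMeetExternalAPI('meet.example.com',
//       buildEmbedOptions(currentUser, 'team-room'));
```

This sets the name going in; getting the name back out on click or hover is the part the call says has no ready-made hook.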
P: You know, by clicking on it. We have a little box underneath that says: enter a username to get their mini profile. So you have to basically physically write that name in there, press enter, and then the mini profile will appear. But it would be nice if you could just click on somebody, or even hover over them, and the mini profile appears. Does that make sense? Yeah.
G: But there's a number of components there. You have to have the tokens working; you have to generate tokens for your users and have them join via those tokens, and your back end would have to trust those tokens; and then you'd have to do all of the JavaScript necessary to expose all those details. So there's nothing that you could easily plug in at the moment for that, that I'm aware of, right.
P: Is that something that could easily be done on meet.jit.si? Let's say, I even offered one of my developers to help out, if you guys are too busy. Is that something that maybe they could work on in a copy, and let you know what lines of code need to be added, and you guys upload them? Is that possible or not?
G: Because what you're imagining right now is that you have a central identity provider, and that every user that uses the system would somehow have access both to query that provider and to be a user within it and authenticate themselves. So there's a number of sort of generic components there: an identity provider, a user's credentials, and some amount of trust that, for every user coming in with a name, only that user could come in with that name, so that people can't impersonate each other.
G: Once you have identity associated, you have a lot of other issues that come up that anonymous systems don't, and Jitsi Meet in general has tried to keep the identity thing separated. So I'd be happy to have discussions with your developer about the best way to do that in a generic fashion, but it's not going to be a case of they write some diff and we upload it; they'd have to look at the jitsi-meet project and look at the changes.
P: Okay, no problem. One last thing: if we were to actually create our own version of Jitsi, I guess, do we need to have a dedicated server for this? Any idea what the hardware requirements are for it? And a second question related to this is: if we do that, does that also enable us to white-label it, so that the words Jitsi and Slack are not visible externally? I guess it might still be internal in the code, but not to users.
B: Let's say that if you made this change, it would probably mean changing the jitsi-meet source code itself, so of course you would need to deploy it to your own infra; and since you're deploying it to your own infra, you are of course allowed to remove the logos and every mention of Jitsi, or anything. For people who use our existing deployment by embedding it with the iframe, we kindly request that you not remove them, in exchange for the free service. It's a gentlemen's agreement.
G: To answer your question about what size of server: it really can depend on what kind of bandwidth you need to have, how many users you have, all of that. We run this across tens of servers, across six regions, and do some auto-scaling type operations. So it really is just going to depend on the scale of your customer base and how much traffic it generates.
P: Right, yeah, I mean, that's something I want to explore in the future, but for now we're very happy just to use the existing system, and our link is just a manual one. You know, like I said before, people will see who's talking when they log in through our URL, through our domain. Everybody has to have a unique user ID.
P: You know, from the username from the signup process; and then when they go into the audio-video room, it says who that user ID is, so we figured that part out, who's talking. People just have to manually type that name into the little box in order to see the mini profile, but from there they can see, you know, who the person is, their occupation; they can chat with them privately on our own system; they can even chat publicly. There's a number of things they can do, but it's a little bit... it's just a link.