From YouTube: Jitsi Community Call - November 5, 2018
Description
The Jitsi Community Call is held every other Monday at 10:30AM Central US time at https://meet.jit.si/TheCall. The Jitsi team provides updates and community members can ask the team questions.
See https://community.jitsi.org in the meantime for questions.
Subscribe to our new videos: http://bit.ly/2QVLZCG
Learn more about Jitsi at https://jitsi.org
Try Jitsi Meet now at https://meet.jit.si
See the code at https://github.com/jitsi
B: Yeah, sure. We've done some work around making sure that when Jibri encounters an error, it reports that up so it's taken out of the pool. We were running into issues where that never happened, and we would keep selecting that same Jibri over and over. So a lot of the work is around that stickiness of errors.
A: Great. And for everyone: recording is now officially enabled on meet.jit.si. We're just not announcing it yet, because we want to make sure we have all the reliability work done so the experience is perfect, but so far it's behaving pretty well. It's on meet.jit.si, feel free to use it. You just need a Dropbox account; that's where we store the recordings right now.
E: Yes, so it's kind of stacked on the PR. The first one is for the JVB, which fixes a regression with conference re-invitation and a problem where the data channel would not reopen. And then I have another PR with the actual ICE restart logic. When ICE fails, it's not really an ICE restart on the JVB side, because Jicofo allocates a new channel, but I have this logic. It's still not in a PR, because it depends on the one for the JVB. I don't know when it gets merged, because right now I'm on vacation.
A: So, as some of you might have noticed, we also went through meet.jit.si's redesign; things should be looking a little bit tidier right now. Expect a little bit more work there. We've been working with Chad, specifically around the fact that we need a somewhat better first-time user experience: something that takes users through a few explanatory steps that say, hey, here's what the site is about.
A: Here's how to use it, very briefly, so that our users, and all your users, can be slightly less confused when they get there. We're also planning a little bit more work around letting people hook in their own analytics engines. I think I mentioned that already last time, but it's worth repeating: it's so that people who are looking at this from a product perspective, trying to understand user behavior and which features are getting more users or fewer, can optimize for that.
G: The two issues that I have PRs for are about the bitrate allocation not upscaling the on-stage participant on certain occasions, and fixing that should improve the user experience and the video quality. Of course, we don't really know how often this was happening, but the hunch is that it has been affecting a lot of calls. So yeah, that's going to be an important improvement; it's fixing a regression, actually.
A: Great. I think those are the main points from the past couple of weeks. Would anyone from the team like to add something, in case I'm forgetting anything important, before we move to the next item of the agenda? All right. So, a brief note there: a significant number of us are currently working through the very mundane logistics of moving to a new place.
A: So if you're travelling to any of these places, reach out and give us a shout. Obviously all of this is part of the aftermath of the big announcement last week, which is that the entire Jitsi team is continuing our work as part of 8x8. I know that a number of people have reached out to me privately with concerns: what is the future of Jitsi now that Atlassian doesn't actually need the technology, and why would they continue supporting it?
H: I have a quick question. So yeah, as I mentioned on the last couple of calls, we've been working on extending Jigasi to support a few custom music use cases. Last call, because we were experiencing some problems, I guess jitter-related issues, when multiple people were present in the same room, the recommendation was to move to the RTP translator. And I just want to get a recommendation as far as either...
H: We have no use case for the video; we just keep the audio, and Jigasi directly serves the RTP Opus data. The way we're doing it right now is we're just piggybacking on the translator. I'm just wondering whether it would make sense for us to use the audio mixer, or whether it's possible to get the raw audio separately. Aside from using a filter to sort of snoop on the packets, is there an approach that you would recommend to get access to that data?
H: I'm not sure I have a good answer here. If you're using the translator mode and you want to record, there's the recorder RTP implementation, and that handles a lot of it. If you don't want to use that, it would be pretty hard, because you need to somehow connect to it and make sure that FMJ is doing the decoding for you.
H: We are using that; that was kind of what we discussed last time. But there's not an easy way, when you use that, to still get the audio. So it solves our use case for getting the RTP data in sort of its raw form, but is there a better approach? Currently in Jigasi we were using the audio mixer to grab, per participant, the raw, already-decoded audio. To continue supporting that use case, is there a recommendation, like, either...
A: It sounds like you just need two things that run in different modes. It would be the same resources anyway, because you need to do two different jobs; so rather than trying to get this all to fit into the same audio flow, where you're not really gaining anything, you might as well just have two participants and have them configured differently.
H: It's a per-participant stream, so it's effectively like a file, yeah. And for the most part the recorder gives us all of that directly out of the box; I just didn't see an easy way. Also, in the default implementation it's using the audio mixer for transcription, and then the receive stream buffers are getting, per participant, the raw audio data. So yeah, there wasn't any obvious way to do it.
A: Not at all, no. Actually, the only level checking is based on RTP header extensions that carry the audio level, right? So there's no decoding there. Now, my memories of libjitsi itself are kind of dusty; I didn't think that you needed to have the mixer in every individual stream in order to know.
H: I think you can have a mixer and a translator in one call; it's just that the code isn't structured this way. But if you want to transcribe without using a mixer, that shouldn't be too hard to do: you just need to plug the input of the transcriber into the raw audio from the RTP recorder, RecorderRtpImpl. That should work; it just needs some glue there.
I don't remember exactly how it's structured, but it does save everything to mp3, I believe, and there's an audio silence effect that has access to the raw audio. If you plug your code in there and sort of pass it as a buffer to the transcriber, that should work, instead of taking the buffers from the mixer.
H: Right, and so that works; that was working for us. But the use case is to still have the RTP streams as well as the audio, and it wasn't really clear what the simplest approach was. Okay, yeah! So if we can just piggyback into the audio silence device, that should give us the best of both worlds, which is to still be able to get those individual file streams, the RTP streams, but also get the decoded audio.
H: Okay, and yes, if we do this other approach, it should be no problem; we'll just make a setting so that we can, on the fly, per call, activate either translator mode or non-translator mode, and just have them both be there. That's what you're saying, right? And then, I think having a translator and a mixer for a single call would be major surgery there. No.
H: Okay, yeah, I think I can simplify my code then. Okay, great; that's really helpful.
J: Hey everybody, this is Louis. I had a question regarding video quality. I'm using lib-jitsi-meet directly and was wondering how I can get the best possible video quality from the JVB if I'm not using the active speaker API, where the active speaker always has the high resolution and the non-active speakers are low resolution; so when it's like a Brady Bunch tile mode, like you have here in Jitsi.
G: So you want to have, for example, 720p for all participants? I mean, you could do it by disabling simulcast, for example, but I'm not sure that's going to lead to a good user experience, because in a call with more than five people it's going to explode the receiver. I mean, it's going to make it receive a lot of data and process a lot of data.
A: He wasn't saying that; this is not about peer-to-peer. Just receiving six 720p streams, where they're all 720p, means about two to two-and-a-half megabits of data per stream. So you end up having to handle 15 megabits. Without peer-to-peer you get all of them from the bridge, but because they're all 720p, they all carry a lot of data, right?
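The arithmetic above can be sketched as a quick estimate. The per-stream figure (about 2.5 Mbps for a 720p stream) is an assumption taken from the discussion, not a measured value:

```typescript
// Rough estimate of the total receive bitrate for a tiled ("Brady Bunch")
// layout where every remote participant sends the same resolution.
// perStreamMbps is an assumed figure (~2.5 Mbps for 720p, per the discussion).
function totalReceiveMbps(remoteParticipants: number, perStreamMbps: number): number {
  return remoteParticipants * perStreamMbps;
}

// Six remote 720p streams at 2.5 Mbps each: 15 Mbps of incoming video.
const estimate = totalReceiveMbps(6, 2.5);
console.log(`${estimate} Mbps`); // prints "15 Mbps"
```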
B: We select all endpoints and then we limit it by setting the max incoming resolution. I think that's what we do, pretty much; that should be the answer.
K: Can you say that again, please?
B: So if you send a selected-endpoint message for an endpoint, that means the bridge will send you that participant in high quality. So what we do for Brady Bunch is this:
B: We send selected-endpoint messages for everyone, but then there's another message you can send, which is the max incoming resolution you want to receive, and we can limit that. So in Brady Bunch mode we select everyone, and we'd get them all higher than 180, but we can cap it to, say, 360 or something if we have a lot of people up.
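As a sketch of what those two messages might look like on the bridge channel: in this era's API they were JSON objects with a `colibriClass` field, sent to the bridge over the data channel. The exact class and field names used here (`SelectedEndpointsChangedEvent`, `ReceiverVideoConstraint`, `maxFrameHeight`) are assumptions from memory of that API and should be checked against the jitsi-videobridge version in use:

```typescript
// Sketch of the two endpoint messages described above, as sent over the
// bridge data channel. Field names are assumptions based on the 2018-era
// API and should be verified against your jitsi-videobridge version.
interface SelectedEndpointsMessage {
  colibriClass: string;
  selectedEndpoints: string[];
}

interface ReceiverConstraintMessage {
  colibriClass: string;
  maxFrameHeight: number;
}

// Ask the bridge to treat every listed endpoint as "selected" (high quality).
function buildSelectedEndpointsMessage(endpointIds: string[]): SelectedEndpointsMessage {
  return {
    colibriClass: "SelectedEndpointsChangedEvent",
    selectedEndpoints: endpointIds,
  };
}

// Cap the resolution the bridge forwards to us, e.g. 360 for tile view.
function buildReceiverConstraintMessage(maxFrameHeight: number): ReceiverConstraintMessage {
  return {
    colibriClass: "ReceiverVideoConstraint",
    maxFrameHeight,
  };
}

// Tile view: select everyone, then cap the incoming height at 360.
const selectMsg = buildSelectedEndpointsMessage(["ep1", "ep2", "ep3"]);
const capMsg = buildReceiverConstraintMessage(360);
// Each of these would be JSON.stringify'd and sent over the bridge channel.
```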
C: This is also in lib-jitsi-meet. Actually, if you look at the RTC module, modules/RTC/RTC.js, they are right next to each other: selectEndpoints, where you pass an array of endpoints, and the other one is the receive video constraint, with the max frame height. They are side by side, and they do have JSDoc, so you can check for that there. And if you grep around the code of lib-jitsi-meet, you should be able to see how we use them.
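A minimal sketch of how those two calls could be used together for tile view. The method names (`selectEndpoints`, `setReceiverVideoConstraint`) follow the speaker's description of modules/RTC/RTC.js and may not match every lib-jitsi-meet version, so the RTC object here is a hand-written stub, not the real library:

```typescript
// Stub mirroring the two side-by-side calls described for lib-jitsi-meet's
// modules/RTC/RTC.js. Method names are taken from the discussion and may
// differ in your lib-jitsi-meet version; check the JSDoc there.
interface RtcLike {
  selectEndpoints(endpointIds: string[]): void;
  setReceiverVideoConstraint(maxFrameHeight: number): void;
}

// Apply tile-view ("Brady Bunch") settings: select every remote endpoint,
// then cap the incoming frame height so the bridge sends smaller streams.
function applyTileViewConstraints(
  rtc: RtcLike,
  endpointIds: string[],
  maxFrameHeight: number
): void {
  rtc.selectEndpoints(endpointIds);
  rtc.setReceiverVideoConstraint(maxFrameHeight);
}

// Minimal stand-in implementation that just records the calls.
class RecordingRtc implements RtcLike {
  selected: string[] = [];
  maxFrameHeight = 0;
  selectEndpoints(ids: string[]): void { this.selected = ids; }
  setReceiverVideoConstraint(h: number): void { this.maxFrameHeight = h; }
}

const rtc = new RecordingRtc();
applyTileViewConstraints(rtc, ["ep1", "ep2", "ep3"], 360);
```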
H: Sorry, go ahead. So, one more very quick question, a corollary to my last one: as we're testing some of these changes, we're trying to leverage the unit and integration tests as much as we can, since that obviously simplifies any code changes. There seems to be sort of a disparity between the mocks and the kind of testing infrastructure in the different modules.
A: Of course, yes, it is planned to be open source; in fact, I don't know, Brian, if you've already opened up some of it. The reason is that libjitsi has diverged, or rather the use cases that we have for libjitsi and for the bridge are different, so more often than not we find ourselves in a situation where we have to work around a lot of libjitsi's architecture in order to add new functionality to the bridge, while we don't actually use it; like, we're never decoding. Now that we know our use case better, we know that the whole mixing thing in the bridge is not really a good idea altogether, and that for all the use cases where it is necessary there are better approaches with individual components like Jigasi, Jicofo and Jibri. So we never need to handle filter graphs in there, and mixers, and all that; on the other hand, we do need to simplify the code enormously, as we are concentrating more and more on things like...
A: No, I don't think we have that. What I was referring to does spin up an entire call, and it just captures the images and compares them for noise introduced in the images. So that works on a very, very high level; it's not a mock, but it still would catch some regressions in RTP. But that's about as much as we have there.
L: Yeah, I just started an implementation of a Jitsi Videobridge client with the COLIBRI protocol, and what I found difficult is, how to say it, that there is a need for more documentation, for example on allocating a conference and channels: what are those data models that I need to send, and what are the default values that I need to put in, some for the videobridge, and also...
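To make the allocation question concrete, here is a sketch of what a COLIBRI-style conference allocation payload could look like. The schema used here (`contents`, `channels`, `expire`, `initiator`) is an assumption modeled loosely on the JVB's COLIBRI data model, not a verified wire format; consult the COLIBRI documentation for your jitsi-videobridge version before relying on it:

```typescript
// Hypothetical COLIBRI-style conference allocation payload. The field names
// below are assumptions modeled on the COLIBRI data model and must be
// checked against the actual jitsi-videobridge schema.
interface ColibriChannel {
  expire: number;     // seconds until the bridge expires the channel
  initiator: boolean; // whether we act as the ICE/DTLS initiator
}

interface ColibriContent {
  name: "audio" | "video";
  channels: ColibriChannel[];
}

interface ColibriConferenceRequest {
  contents: ColibriContent[];
}

// Build a request allocating one audio and one video channel.
function buildConferenceRequest(expireSeconds: number): ColibriConferenceRequest {
  const makeChannel = (): ColibriChannel => ({
    expire: expireSeconds,
    initiator: true,
  });
  return {
    contents: [
      { name: "audio", channels: [makeChannel()] },
      { name: "video", channels: [makeChannel()] },
    ],
  };
}

const request = buildConferenceRequest(60);
// This object would be serialized and sent to the bridge's COLIBRI endpoint.
```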
A
B
L
L
L
L
A
A
So
obviously,
looking
at
I,
don't
like
looking
at
jakar
is
going
to
be
that
helpful
cuz.
The
couple
uses
kallipolis
or
XMPP,
so
I
don't
know
if,
if
that
would
be
particularly
easy
to
deal
with,
but
and
it's
it's
cold,
so
you
have
to
kind
of
run
it
through
your
head.
Maybe
an
easier
way
would
be
to
set
it
your
own
GTV
instance
and
I
believe
jvb.