From YouTube: 2022-08-03 Monthly Multicast W3C Community Group Meeting
notes here: https://docs.google.com/document/d/1vop1sEctzAGANR6zG37_6QxnUuP-z3R6-LdWXIQ-yfk
meeting series here: https://github.com/w3c/multicast-cg/tree/main/meetings
A: Okay, great, we are now recording. Thanks, everyone, for coming. So today I was just gonna go over a couple things. I just put the notes in the chat there, and that's got the agenda in it so far.
A: As part of the IETF discussion, I might want to have a little digression about the security thing that we were talking about with the origin: guaranteeing that the multicast traffic in the QUIC proposal is coming from the same origin.
A: Yeah, right. I haven't written anything either yet; I just had a few more thoughts about it as well. So I guess my main update is: we had some presentations in MBONED and QUIC. We made some progress at the hackathon, and I think we've got some updates about what we're doing in the implementation that are likely to come from that.
A: Luke Curley's proposal and running demo for Warp using WebTransport — I think it would actually mesh really well with our current QUIC proposal, because it uses the same kind of server-push-like idea for server-initiated streams that we would be relying on. And a little bit about the side discussions we've had.
A: So I guess — Max and Kyle, you were there, correct me if I'm wrong — but I think the reception at QUIC was perhaps slightly confused, but not hostile.
A: I don't think we've really got the buy-in from people who have been working on QUIC that I would like to see, but we did have a nice comment from Lars, the IETF chair, who was there at the mic, about how glad he is to see this work moving forward.
A: Yeah, he found it interesting and useful to make it not hard for applications to use multicast — so, doing this at the transport layer. I think it was a good insight, and one that matches well with at least my own insights as I've been working on this.
A: It is a nice feature that any WebTransport application that can use server-initiated data can make use of the multicast stuff happening underneath it, without having to be specifically aware of it. So that's kind of valuable in some way. I think the same is true of server push, so even request-based things...
A: If we had any browsers that were going to continue supporting server push, then that would work the same way. Or offline applications — the sort of fat-client applications, right? You can do server push to build a cache in the receiver, and then sort of have a player that wants to make requests that you feed out of your server-push cache.
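The push-fed cache pattern described here can be sketched in a few lines. This is a minimal illustration only — the class and method names are invented for this sketch, not a real WebTransport or server-push API:

```python
# Sketch of a push-fed cache: server-initiated streams fill the cache,
# and a local player later serves its requests out of it. All names
# here are illustrative, not a real WebTransport/server-push API.

class PushCache:
    def __init__(self):
        self._objects = {}

    def on_server_push(self, url, payload):
        # Called when a server-initiated stream delivers an object.
        self._objects[url] = payload

    def fetch(self, url):
        # The player's request path: served from cache if the object
        # was pushed; otherwise a real client would fall back to a
        # unicast request.
        if url in self._objects:
            return self._objects[url]
        raise KeyError(f"{url} not pushed yet; would issue a unicast request")

cache = PushCache()
cache.on_server_push("/video/seg42.m4s", b"\x00\x01\x02")
segment = cache.fetch("/video/seg42.m4s")  # served without a round trip
```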
B: I wouldn't be shocked if the same basic thing — whether it's the same implementation or interface or not — reappears at some point, once enough use cases that actually make good use of it appear. It's just one of those things. It was developed in, what, 2014 or something like that, and it just never really got used — and it's a lot of code to maintain for something that isn't used.
A: That's what they're saying. I don't know if anybody else has been following the threads where they've been discussing that — I've seen it in Chromium; I haven't seen the Firefox discussions about it — but the basic issue is that there's a lot of complexity there, and (a) most people aren't using it, and (b) the people that are using it are not getting the performance benefits that they were expecting.
A: Right, but I think that would be something where, at this point — having been rejected on grounds of the benefit not justifying the complexity — the onus is more on the people who want it to prove that it's necessary, right? So yeah, I think if we start actually shipping content with multicast, then we might have a case to make for server push again, just off the scalability. Right now their plan is basically Early Hints instead, which is: the server can give a header to the... I think.
A: Right, and if the client thinks that's true, then it's gonna make the request earlier, and that way the server can sort of achieve the same thing in terms of how long it takes to get that thing delivered.
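For reference, the Early Hints mechanism being described is the HTTP 103 interim response (RFC 8297): the server sends `Link: rel=preload` headers before the final response, and the client can start those fetches early. A rough sketch of what the exchange looks like and how a client would pick out the preload targets:

```python
# Sketch of an HTTP 103 Early Hints interim response (RFC 8297). The
# server names resources the client will likely need; the client may
# begin fetching them before the final 200 response arrives.

early_hints = (
    "HTTP/1.1 103 Early Hints\r\n"
    "Link: </style.css>; rel=preload; as=style\r\n"
    "Link: </app.js>; rel=preload; as=script\r\n"
    "\r\n"
)

def preload_targets(interim_response: str) -> list:
    """Extract the URLs a client could start requesting early."""
    targets = []
    for line in interim_response.split("\r\n"):
        if line.lower().startswith("link:") and "rel=preload" in line:
            targets.append(line.split("<", 1)[1].split(">", 1)[0])
    return targets

print(preload_targets(early_hints))  # ['/style.css', '/app.js']
```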
B: The current hacks to deal with this are, you know, referencing an object early in a page so that, as the renderer — or as the DOM code — is parsing it, it can be like: oh yeah, I'm going to want to request this, right? It's horrifying, terrible hacks. JavaScript is another one used to do this stuff.
A: The point is that I think there's some synergy with the ongoing work in MoQ.
A: That's MoQ — Media over QUIC — which is trying to make use of the capabilities of doing server push for low-latency video in particular. I think we can leverage that and not have to write a player when we get to the point that we're delivering video data.
A: What else did I want to talk about? There was also a question at QUIC. It was not, like, universally happy — I think most people would just like to ignore it. I did get a question: is QUIC really the right place to do this?
A: Maybe I should write up a document justifying why I think it's the right place to do this. I think there are some good reasons, and they include: I think we can start looking forward to some hardware offload capabilities for decrypting QUIC packets that we'd be able to reuse.
A: Likewise, we can reuse, I think, a lot of the packet processing code, once we find the right place to plug that in to existing QUIC implementations. And, like I said, the applications that are built on it have a much lower burden to make use of those. Oh, we lost Kevin — anyway.
A: That was in the QUIC session, is what I'm thinking of. It was Alex — I forget his last name; it's about three or four syllables. It might start with an A, or have a K in it. I can find it out.
B: I was listening remotely — I got the remote pass; I wasn't there, but I was able to attend the QUIC and MBONED sessions at least — and that was one of the things I was going to bring up. There was kind of a question about why QUIC, and I believe you did mention that it's with a goal of browser support. But kind of as you just described it, maybe a paragraph somewhere that just answers that question, you know?
B: Because, I mean, it seems like that was one of the first questions. Just something kind of simple, but it's the paragraph saying...
A: Yeah, I agree. I think a paragraph write-up, maybe in the draft itself, could be a valuable addition — so maybe I'll open an issue on that. I think there are some good answers there, and maybe there's more.
A: In some sense it's possible to do this not inside QUIC and get the same level of security, but you'd have to add a lot more pieces that you'd have to define from scratch. Whereas QUIC does define how it's going to be encrypted, and what the structure of the packet's going to be, and what all is going to be there — and we're making use of that.
B: Well, also — I'm not sure whether it was you who said this or somebody else — but multicast itself is just an addressing scheme, right? This is something that puts guardrails around the way that multicast is able to be used alongside web traffic, which assumes something about secure transport. So it's good in that sense too, that it says: these are...
B: ...this is the type of thing we're going to support, and it is going to make use of this address family to make it more efficient to distribute data, but otherwise it's going to work the same general way that the web does, and carry a lot of the properties that the web has with it. So I think that is also a compelling argument for some people. I don't think we're ever gonna find a single argument.
A: That seems plausible, yeah. I guess on the other side of that: the comments we got in MBONED were on the whole more supportive and excited, but also reflected a little bit of confusion about how QUIC works in general.
A: I think these might be better handled by conversation with people who want to know more than by writing it up specifically, but some of the questions had to do with comparing it to existing multicast transport protocols like FLUTE or NORM.
A: I think the QUIC version is obviously different in many ways, but in terms of the sort of feedback of information, I think it's more similar to NORM than to other multicast transport protocols as it stands today. And I think there are extensions that could be made — once we're over the first hurdles — that would improve the efficiency, such as defining NACK frames in addition to the ACK frames, so you can do negative acknowledgements at a smaller rate.
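The negative-acknowledgement idea can be illustrated with the bookkeeping a receiver would do: instead of acknowledging everything received, report only the gaps in the packet-number sequence. This is just a sketch of that bookkeeping — not the draft's actual frame format:

```python
def missing_ranges(received, highest):
    """Return (start, end) inclusive ranges of packet numbers in
    0..highest that were NOT received -- the content of a hypothetical
    NACK report, as opposed to acking everything that did arrive."""
    got = set(received)
    ranges, start = [], None
    for pn in range(highest + 1):
        if pn not in got and start is None:
            start = pn                      # a loss run begins
        elif pn in got and start is not None:
            ranges.append((start, pn - 1))  # the loss run ends
            start = None
    if start is not None:
        ranges.append((start, highest))
    return ranges

# A receiver that saw packets 0-2, 5, and 9 out of 0..9 would NACK:
print(missing_ranges([0, 1, 2, 5, 9], 9))  # [(3, 4), (6, 8)]
```

With low loss this report is far smaller than per-packet ACKs, which is the scaling point being made for multicast.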
A: I think it's almost the same thing as the aggregated ACKs that we've got already, except in the case of high loss or something. Also, the way NORM does it, there's a delay on the NACK, so you might be able to reduce the number of servers you need in order to do the serving of the multicast-based QUIC data, if you applied NACKs instead of ACKs. Likewise, there are possibilities to add FEC frames.
A: I think there's been some discussion of that kind of thing in QUIC already. I think the early versions of it used bad FEC schemes, and so they concluded it was a bad idea. But some of the people who were investigating that — Ian Swett in particular, I know — did go talking to the people in the Network Coding Research Group about real FEC schemes, like Raptor, that can provide a good benefit.
A: There was some work on Tetrys — that's T-E-T-R-Y-S — which had a draft on how to do a sliding-window FEC-based recovery in TCP.
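As a toy illustration of the FEC idea: a single XOR parity packet over a block. This is far weaker than Raptor or the Tetrys sliding-window codes being discussed, but it shows the recovery mechanics — a lost packet is rebuilt locally, with no retransmission:

```python
def xor_parity(packets):
    """Compute one XOR parity packet over equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Recover a single missing packet: XOR of the parity and all
    survivors. One parity packet repairs at most one loss per block."""
    return xor_parity(list(received) + [parity])

block = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_parity(block)
# Suppose b"bbbb" is lost in transit; it can be rebuilt receiver-side:
print(recover([b"aaaa", b"cccc"], parity))  # b'bbbb'
```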
A: You know, all this is relatively straightforward to extend in QUIC if there is value in doing so. But I think the first step here is just the most basic version of a multicast transport protocol we can get, which is what we tried to put up in the draft that's there. Once that's operational, then I think there are some down-the-road future ideas to explore further if we want to improve on the efficiency.
A: But I don't want to get sidetracked on that. That was some of the kind of questions we got in MBONED, and I'm not sure how well I did explaining it on the spot, but I don't think it needs a write-up at this point. Probably we can just ignore it until later and then add it in as an optimization once we've got some deployment — but first things...
A: ...first, in my opinion. So yeah: add something justifying QUIC, and leave the rest aside for now. Maybe a document should make some mention of it, I'm not sure. Actually, I guess it'll be in the notes and in this recording, but outside that I'm not sure how much is necessary.
A: Today, let's see — hackathon: Kyle did a great job, got some good stuff done. Max and I — well, Max hit a bit of a wall on finding the right place in Chromium to receive the UDP packets. I talked to Victor Vasiliev about it briefly, too. He said that, well, he's not totally sure; there's a bunch of places. But by the way, he's also going to be removing the epoll server that we've been using and putting in something based on libevent.
A: So watch for that at some point. Yeah, hitting the moving target of having an extension built on top of Chromium is kind of a pain, and threatens to be maybe more of a pain going forward. Max had, I think, a good idea, and started — I believe; Max, maybe you want to jump in here — on looking at aioquic, which is somewhat easier to work with. I think we've also discussed using, like...
A: ...instead, once they have WebTransport support. I talked to Martin Thomson about that. He said they have an implementation, but it's not integrated yet.
A: So I was looking at maybe trying to keep an eye on that, in terms of where the demo is going to be for playing video over WebTransport.
A: I don't know, but maybe there are some earlier steps we should be shooting for — like getting any data transported at all — which is perhaps more viable to get running under aioquic. So.
B: I mean, I'd kind of like to know if the code I wrote works, you know — tests.
A: Oh sure, yeah — I would too, actually, and I am kind of tempted to keep hacking on Chromium. But maybe the dev team should in general make a switch to aioquic. I think we'll be discussing that more on Friday.
A: I don't know what it means in terms of how we would get video out of it. It might be that we could plug it into an external video player, or use some of the video rendering features that are in Python somewhere, to do just sort of a fat client that could be launched from a browser that would do the receive — which is how we're gonna have to deploy it anyway, until we get some actual interest from browsers. So yeah, I think there's a high possibility.
A: But, like, a secured connection — so I think there's some work to do on figuring out exactly what's the easiest path forward. But the main thing we want to get first, I think, is actually the multicast transport using QUIC, and then leverage that into a case that, well, we should be doing this in a browser. So yeah, that's one of the outcomes from the hackathon.
A: So on the server side it could be a reasonable concept to use it in production, I think. But in terms of a receiver, I'm not sure. I mean, I know they keep talking, in the QUIC presentations, about how there are like 19 different implementations of QUIC out there, right? Some of them are server-only, some of them are client-only, but regardless, there's a whole interop table where they tested them all against each other and stuff.
A: The other honorable mention I'd put out there is Christian Huitema's one, which is supposed to be designed for extending and for experimentation.
A: If we ever found a way to plug that into a browser and use it underneath, then I think it would be very likely to be much, much cleaner than, like, the Google QUIC implementation, which I know.
A: I think that was — I have the links to it. I think there's a plugin to it in Go, actually, but no — I think the main implementation of QUIC itself proper, where we'd have to be adding the frame stuff, was in... I should go find the link.
A: And we might take a look. Oh, it looks like he's now got QUIC multipath up, because I know he's been working on the multipath stuff too.
A: Well, okay — but it's probably better in some ways than, you know, the C++ Google quiche implementation, which also has all their leftover stuff from before it was the IETF...
B: Yeah, standard. So, I mean — having gone down the wrong road on the gQUIC stuff on a few occasions, as I mentioned, and then had to throw those changes away — it's a little annoying. It sounds like they also want — I mean, according to my conversation with Ian — to tear all of the gQUIC code out as well. So at some point it will disappear and hopefully be a lot simpler. I mean, honestly, I kind of like the quiche code. It's pretty good!
A: Hear you. Great — let's see what else we're gonna talk about. So, I did talk about the push player and MoQ.
A: Oh — my results on the hackathon. I did, by the end of the week, get it working where I can push a file and have it show up in the browser in their WebTransport implementation. So I thought that part was pretty neat. If we do ever get back to the Chromium stuff, maybe we'll get back to that push API, but I imagine in the meantime we'll probably port it over to interoperate with the aioquic server.
A: ...because that one doesn't have the same deployment problem. So, okay, anyway: some progress, but also some confusion, and some indications that maybe we started down a path that's harder than we want.
A: I think I did convince — well, maybe that's too strong a term — I did talk to a few people and get some acknowledgement that there's a scaling problem that's worth considering solving with multicast.
A: I think I got into at least a couple of people's heads that making use of broadcast links — like in GPON and cable end-user loops — is a thing that you can't do with just a different caching strategy, and that this is a real problem for network delivery. It's not just about saving money on your servers for the live events; there's also a delivery problem that can't be solved in really any other known way besides making use of those broadcast links.
A: So I'm hopeful that that will lead to reduced opposition — further reduced opposition — in QUIC. It may already have led to reduced opposition in the meeting we were in, because that was before the meeting, and then nobody really spoke up against it, except in terms of: should we be doing this in a different way, instead of trying to inject it into QUIC?
A: So I guess that's the main thing about side discussions. Was there anything else?
A: I think there were separately some interesting developments on multicast in general. With regard to the QUIC implementation stuff — and I don't know if Max wanted to say anything about this — we got some feedback from one of the network operators that had been using multicast for TV delivery and tried to add it to the home networks. We found that apparently there's a problem in the IGMPv3 implementations that are widely deployed — which I've seen before in the lab.
A: But I didn't realize it was such a problem for their user experience. When they see an IGMPv1 membership report, they'll stop the sending of IGMPv3 reports and queries and such. And so when you're using SSM to join the TV flows, from, like, a Wi-Fi-connected device that's sharing your network in the home...
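The SSM join mentioned here is what the TV receiver issues to pull a specific (source, group) flow — and it is these joins whose IGMPv3 reports get suppressed. A minimal sketch of what such a join looks like at the socket layer (Linux `struct ip_mreq_source` field order assumed; the actual `setsockopt` is left commented out since it needs a live network):

```python
import socket
import struct

def ssm_join_request(group: str, source: str, ifaddr: str = "0.0.0.0") -> bytes:
    """Pack a struct ip_mreq_source for a source-specific (SSM) join:
    multicast group, interface address, then source address, each an
    IPv4 address in network byte order (Linux field order)."""
    return struct.pack(
        "=4s4s4s",
        socket.inet_aton(group),
        socket.inet_aton(ifaddr),
        socket.inet_aton(source),
    )

# SSM join for group 232.1.1.1 from source 198.51.100.7, as an IPTV
# receiver would issue; this is what gets reported upstream via IGMPv3.
mreq = ssm_join_request("232.1.1.1", "198.51.100.7")

# On a live socket the join would look like (Linux; 39 is
# IP_ADD_SOURCE_MEMBERSHIP where the socket module doesn't expose it):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# opt = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)
# sock.setsockopt(socket.IPPROTO_IP, opt, mreq)
```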
A: What this means is that the TV service stops working because you've gone and plugged in, like, a 15-year-old printer that's sending packets that disable it — which is an unfortunate side effect. So Max and I took a look through the spec.
A: I think, if I remember right, it was section 7.2.2 of the IGMPv3 spec that said you MAY suppress further IGMPv3 membership reports when you see an IGMPv1 membership report on the LAN. And they are currently engaged in an IGMPv3 standardization effort to move it from Proposed Standard to full Standard.
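The failure mode being described can be modeled in a few lines: a node that implements the MAY-suppress behavior drops back to v1 compatibility as soon as any v1 report is heard, which silently kills source-specific (v3-only) joins. This is a toy model only — it omits the real spec's Older-Version-Present timers:

```python
class Igmpv3Node:
    """Toy model of the 'MAY suppress' compatibility behavior: hearing
    a single IGMPv1 membership report on the LAN pushes the node into
    v1 compatibility mode, and IGMPv3 (source-specific) reports stop.
    Deliberately omits the spec's Older-Version-Present timer."""

    def __init__(self):
        self.compat_mode = 3  # start in full IGMPv3 operation

    def hear_report(self, version: int):
        if version < self.compat_mode:
            self.compat_mode = version  # fall back to the oldest heard

    def can_send_ssm_report(self) -> bool:
        # SSM (source-specific) joins only exist in IGMPv3.
        return self.compat_mode == 3

node = Igmpv3Node()
assert node.can_send_ssm_report()      # TV flows join fine at first
node.hear_report(1)                    # a 15-year-old printer speaks up
assert not node.can_send_ssm_report()  # SSM reports now suppressed
```

Changing the MAY to a SHOULD NOT, as discussed below, amounts to deleting the fallback in `hear_report` for reports (as opposed to queries).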
A: I think we're probably — I'm thinking — gonna make a case for changing that MAY to a SHOULD NOT, and perhaps submitting a patch to Linux, if we can find anybody willing to do that — Linux and maybe FreeBSD, because that's what RDK-B uses. So that'll take a few years to roll out, but I think it's the right thing to do: to try to fix that and make multicast in general more reliable in home network environments.
A: Yeah, so that was another interesting point I learned — and thanks, Max, for pointing out its relevance to the IGMP discussion. My first take was: oh well, we'll never get that rolled out in time for me to care about this year's deployment prospects. But while true, that doesn't mean we shouldn't do it, and it's a timely moment to get in there and get the spec changed, I think, and use that to justify changing the implementations as well.
A: Okay, so — any questions about any of that, anybody? Anybody have anything else they wanted to go over?
D: They said that, if that is included, it would already break a bunch of implementations and make them not compliant to the standard anymore. So, sure, changing a MAY to a SHOULD NOT would also make more implementations non-compliant. I mean, it's the right thing to do, but I'm not sure how hard they're gonna fight us on that. But yeah, we should definitely do it anyway.
A: Yeah, we should at least make that case. I think Eric also had a very great presentation in MBONED — I watched it; it talked about the work they're going to be doing at Vivo, so that was pretty cool to see. Handed out some neat Vivo stuff.
A: You should come to his presentations if you want any of that. Anything you wanted to say about that, Eric?
C: Well, I appreciate the opportunity, and the side conversations were definitely very interesting — gave me a lot of rethinking about how I could have made a stronger case to some of those folks who were pushing back on you. But I think we'll see some pretty compelling business drivers, you know, after...
A: Sure — and thank you, by the way, for your use-cases doc. I did get that, and I'll be adding it somewhere in the CG repo, I think, soon.
A: Yeah, okay — I'm not a big Node fan, really, but maybe. Certainly if you wanna be contributing and you wanna be working in Node, that would contribute to the discussion. But I like Python better for myself — I don't know. We can all vote: anyone who comes to the dev team meetings that we've been having weekly is certainly welcome to make a case for using a different platform while we're considering switching, yeah.
C: Sorry — I'll change the topic back for just a couple of minutes, just to say that that is hugely disappointing about the IGMPv3 and IGMPv1 thing. But we encounter this kind of thing all the time, really, in networking. So I appreciate you guys making the effort and the push to get that changed.
A: So yeah, I definitely want to push for it. The other thing I'd like to do, perhaps, that can be rolled out more quickly, is to make the case that the Linux implementation that's out there can be tuned from its current defaults and still remain in spec — because what it says now is that there's a MAY suppress.
A: I think the general impression among operators is: well, this implementation is according to spec; we don't want to touch it. But I think that if we can make the case that there are settings you can set that will solve this kind of a problem — where, you know: oh, we're observing old devices that are disabling our use of this...
A: ...there are settings you can push that would make that stop happening, and the operation would still be in spec. So I think this might be a faster path to deployment for some of the operators that are experiencing problems — so I might want to also do that too, but...
A: The longer-term thing, I think, would be to get the default behavior to do the right thing, which is to not suppress based on these old things. Yeah.
A: Yeah, that seems absolutely worthwhile to me, and it's one of the reasons that I'm glad I made it to the in-person IETF, because I'm not sure when I'd have had that conversation otherwise. Right, so — nobody has anything else? If you do, please interrupt — but the only other thing I was going to talk about is where I want to go and what I want to do with TPAC. My current plan is to try to justify to my management chain that I should in fact go.
A: Some of the people that I had those conversations with — or who are on the same teams as people I had those conversations with — will be reappearing at TPAC, I think. So: continuing those conversations, and getting sort of a wider buy-in on the web side for the need to solve the scalability problem that we're trying to solve, especially transparently to the applications; and also getting a better grasp on what it takes to not be mixed content, sort of from the web side.
A: That was one of the objections I've heard: that if we did this in QUIC, it would have to be considered mixed content because of the differences in the security properties. I do not think this is true, but if that's the perception, then we need to...
A: ...discuss why, and see if we can pin that down and fix it in the security considerations doc, I think. So that's, you know, that. And I want to have a better grasp of what TPAC is like for next year, because maybe I'll have more presentations, or a group meeting where we can get more people in next year. I do not think I'm going to run a meeting for this community group at TPAC, unless it's remote or with whoever happens to be there.
A: ...as I can on the topic, and to take from that whatever we can ahead of the November IETF meeting. But I think the bulk of the work right now is still on the IETF front, and this TPAC and W3C work is more like a warm-up, for being able to hit the ground running after we have a good demo running for the IETF of the QUIC multicast implementation. Chris in particular — but anyone — want to argue me into a different position on that point?
E: There's an option, if you want to do it, to organize a breakout session. Part of the schedule, I think for the Wednesday, includes a few hours dedicated to breakout sessions. So you could raise this as a topic — you could propose a topic.
E: That's, you know, "multicast to the browser" — or you could have a more general topic around definitions of mixed content, or kind of combine those. So in addition to having side conversations around meetings, there is this option, I think, to propose a breakout.
A: How late can you propose a breakout session? I need to get approval to travel first, I think.
E: Yeah, I'm not sure — I would have to check. I don't think they've announced...
E: Well, I haven't seen an announcement yet that invites proposals for the breakout sessions.
A: Yeah, I should do that. I'm not sure I'm on the right list to be getting all the announcements. Do you know what list you'd be expecting that on?
A: Okay, yeah — why don't I send him an email? I don't think I need to burden you with the action item to own that, but I will send an email to Dominique and ask about...
A: ...ask about the TPAC 2022 breakout session scheduling process and deadlines, and what I should watch for. Yeah — thank you for that suggestion; that's very helpful. So that breakout session is sort of an open invite, is that the idea? And then it'll be on the list of "here's all the breakout sessions", and people who are there will just sort of wander in?
E: That's exactly it. There'll be multiple sessions running in parallel, and then whichever topic catches people's interest is the one that they go to. So if you're going to propose a session, then it's worthwhile talking with people ahead of time, so that you know you've invited some of the right people to gather around the table.
E: Okay, yep — because there'll be competition, you know, with multiple sessions going on. All right.
A: Okay, so the pieces I have, then, are: convince my management that I should indeed travel to this; get some buy-in for having that breakout session from people that I want to talk to who are going to be there; and then run the session itself. So what is the...?
A: Yes, that's what I would assume too. I've seen it done a few times at IETF, certainly. I mean, it almost sounds like a reprise of the bar BoF that I ran for — was it IETF 112, I think? — which was a fully online thing, but...
A: Okay, I will try. I think what I'd want to do there is make the case for the scalability problem that we have, and then...
A: Okay, I will start trying to make that happen and see where I can get. It is possible that the answer will be no and that I just won't go, but I think it does sound valuable to me, and hopefully I can make the case to my management that it is valuable for me to go. So I'll try to do that. Thank you for the feedback on that — very much appreciated. Any other comments or questions? I think we only have three minutes left, and...
A
It's
going
to
shut
it
down
good,
okay!
Thank
you.
Everyone
for
coming
really
appreciate
it
good
to
talk
to
you
all
and
those
that
just
left
as
well.
Okay,
and
thanks
for
stopping
the
recording
now.