From YouTube: CORE WG Interim Meeting, 2021-06-09
A: So, welcome everyone to this virtual interim meeting of the CoRE Working Group. My name is Marco Tiloca; my co-chair is Jaime Jiménez. As usual, this is an official IETF meeting, so the Note Well applies — be sure to be familiar with it if you're not already. It is not just about IPR and patents; it's also about the code of conduct, so, as usual, be nice to each other. The agenda for today is about two documents on group communication, and we'll spend most of the time of the session actually on the group documents.
A: Sure, and before we actually start, a question to you, Christian: any status update on the echo-request-tag document?
B: I'm working through the list, trying to slot it in between other things that are also urgent right now. What would help, Jordan, as you asked, is if you could have a look at the point-by-point responses that I'm preparing in parallel to editing things in. But I don't see any large things; it's just things that need to be done.
B: Things are tracked as we go. Basically, all the issues are tracked the same way I did it with the resource directory: there's a file in the GitHub repo where all the comments are tracked, and as I resolve things in the document I also add the responses there — or, if things are already resolved or just need a response, then those are already in there.
A: Thank you, okay. Then we can go with the first document, groupcomm-bis, and Esko will present.
D: Let's see — if there are any complaints, let me know. So, now the progress on this. Okay, let's move on to the next slide.
D: Yeah, so we got a review — it's already two months ago — it was from John, and I recently made a response to that and also created a couple of new issues to take up the things that I think we need to improve in the draft.
D: Yeah, so the first one is to provide more detail on what exactly is updated or replaced in the documents that we update — like RFC 7252, for example.
D: Yeah, it's actually requiring, I think, a little bit of feedback to make progress — maybe from John, or maybe from others who want to chime in. So, what would be the relevant attack scenarios? For example, spoofing messages that are protected with Group OSCORE using a different source IP address, and replaying packets.
D: I know it is mentioned. I think it points to the draft, basically, and says that, well, with Group OSCORE you can mitigate this type of attack.
C: I wonder if we could use some source there — I don't have anything off the top of my head — but I don't think we need to start from scratch. We could look a little, find some existing analysis that has been done, in particular looking at multicast, I suppose, and see what is applicable, where Group OSCORE provides a solution, and where it does not.
D: Altogether, yeah, that's the point. So if you use the Echo option, then you don't just have a group of servers potentially sending something back — you get the amplification effect. The response packet stays quite small, but still, you get multiplication, and I think the idea was to make that explicit as well.
D: We can also refer to it in this case.
D: Number 14 was a question that came up: what is the interaction between the Observe option and the No-Response option? Typically, when doing an observe, you expect, of course, to get responses back from the servers; with the No-Response option, at the same time, you can say to the server: please don't send me responses of a certain class. So we clarified this a bit. The idea was basically that only if an observer is added at the server should that server actually send a response to the client that was just added.
D
It's
also
signals
to
the
client
that
okay,
this
server
request
was
successful
and
that's
why
it
should
normally
not
suppress
the
response,
but
the
user
can
still
override
this
by
including
a
specific,
no
response
option,
so
that
will
basically
override
the
default
response.
Suppression,
as
is
already
the
case,
so
this
can
be
combined
and
not
all
combinations
would
make
sense,
of
course,
but
you
can
do
it.
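The class-based suppression Esko describes here comes from the No-Response option of RFC 7967. As a minimal sketch — assuming only the standard bitmap values, not any group-specific extension from the draft — the server-side decision looks like this:

```python
# Sketch of the No-Response suppression rule (RFC 7967): the option value
# is a bitmap telling the server which response classes the client does
# not want to receive.

SUPPRESS_2XX = 2   # bit for 2.xx success responses
SUPPRESS_4XX = 8   # bit for 4.xx client-error responses
SUPPRESS_5XX = 16  # bit for 5.xx server-error responses

def is_suppressed(no_response_value, response_code):
    """response_code like '2.05' or '4.04'; True means the server
    should withhold a response of that class."""
    cls = int(response_code.split(".")[0])
    bit = {2: SUPPRESS_2XX, 4: SUPPRESS_4XX, 5: SUPPRESS_5XX}[cls]
    return bool(no_response_value & bit)

# A value of 0 (or an absent option) means the client wants all responses;
# 26 (= 2 | 8 | 16) suppresses everything.
```

So a group observe registration would normally omit the option (or use 0), while a client that really wants silence can send 26 and override the default behaviour, as discussed above.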
B: I'm just a bit curious as to why this would behave any differently than regular responses, especially given that the observe request can be a confirmable request, so the client would get an ACK back; and, as in all other response cases, if there's an ACK but nothing else, it obviously, probably went through.
D: Ingress devices that take traffic into a 6LoWPAN network — because, from things like earlier simulations that I've done many years ago, you could see that fragmented multicast traffic is normally very bad for the performance of those nodes. You have to reassemble many fragments at the 6LoWPAN layer and then create another multicast IP packet from that, and, yeah, we can't expect very good performance; so it's kind of normal for a device that allows you...
D: Yeah, so for number 17 we had a little discussion issue. The question was: what are actually valid cases of having a forward and a reverse proxy, with either end-to-end security or what they call two-leg security? So that's not end-to-end security, but just the two legs individually protected, basically.
D: And this is a topic that, you could say, more or less belongs to the new draft on group proxying — and I think so too — but my proposal was still to include something about it at a high level in this draft, just to mention what the valid combinations are here. So you can, for example, have a forward proxy combined with end-to-end security, or you could have a forward proxy case with two legs of security: an example could be one leg being DTLS, to reach the proxy itself.
D: Right, and then the final one here, number 19: what is the consideration on how to handle the Q-Block options? We talk about the Block options, but since then the Q-Block options have also been added, and there we basically take over the text that was already defined for Q-Block: it's not to be used in CoAP group requests, so the servers must also ignore it. That we've added to the editor's copy.
D: So, there is a lot of text here; it's part of issue number eleven, about the rules for caching at origin clients — quite some discussion there. The general problem is: if you are a client, you can basically cache the results of a group request, that's no problem. But what if you want to do a new request — can you serve that from cache, partly or entirely, or not?
D: If you can serve a request entirely from cache, then you miss those new members. So that's why some rules landed here, and now we edited this in the copy. I proposed a simple solution: in all cases, the request needs to be sent out. So you can serve from cache as defined in RFC 7252, but you also need to send out the request, because servers could have joined in the meantime.
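The rule proposed above — use fresh cached responses, but always re-send the group request anyway — can be sketched as follows. This is only an illustration of the stated rule, with hypothetical names; the actual normative text is whatever groupcomm-bis ends up saying.

```python
# Hypothetical sketch of the client-side caching rule: fresh cached
# responses may be used right away (per RFC 7252 freshness), but the
# group request is still sent out, because servers unknown to the cache
# may have joined the CoAP group in the meantime.

def handle_group_request(cache, send_multicast, now):
    """cache: dict server -> (response, expiry_time);
    send_multicast: callable that puts the group request on the wire."""
    served = {srv: resp
              for srv, (resp, expiry) in cache.items()
              if expiry > now}          # only still-fresh entries
    send_multicast()                    # always sent: the cache cannot
                                        # prove current group membership
    return served
```

Responses that later arrive from servers not in `served` would then be new members (or servers whose entries had expired), which is exactly the gap the rule closes.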
D: Or, yeah, this is — let's see the code. Yes.
D: Basically, where the request goes to — these are, you could say, the group members that matter in this. If somebody joins the security group but not the CoAP group, then that's — yeah.
D: If the proxy is in the position that it sees all the new members that are joining — for example, if it's a border router that receives all the requests for joining the CoAP group and can just keep track of all the members in its network — then that helps to make those decisions. But this is saying that, unless you have that kind of knowledge, the only thing you can do is basically send out the request again, just to catch servers that may have joined in.
D: It could also be based on application-level knowledge: if you know that your group members only join at midnight, or so, you can use that knowledge. But still, in that case, you need to be sure that you have fresh responses from every server.
D: Okay, and then the last slide was also about number 11 — the aspect of placement of these caching functions. We have caching at proxies, and all of that moved to the proxy draft, at least for the details; and the caching at origin clients is what we now discuss in groupcomm-bis. The same goes for revalidation: if there's a proxy involved, for all three we put it in the proxy draft.
D: Yeah, that's basically number two of the numbered list here — what I mentioned about using the ETag option.
D: So the design we have there is that in this draft we would like to have a final solution for how a client can do revalidation with the group of servers; and once we have that solution, we can hopefully also reuse it in the group proxy draft, so that the proxy can use the exact same mechanism — to keep it simple.
B: Three was about — so, I didn't remember what number three was about. It was basically a way of compressing down the Multi-ETag, so that at least we don't have to send the addresses around, and we still have a bit more granularity of how likely we want it...
B: We want, kind of, to reduce the information, accepting some likelihood of not getting a particular change. But that — like the Multi-ETag — probably depends on a very strong use case, based on which we could sharpen this. Because, yeah, if an application is sending addresses around in ETags, and sending so many ETags around that it makes sense, that makes those requests really large — and we had that issue just before.
B: So I do remember what this is — it is a form of compression — but I don't think that we have a good use case for this, nor for number one. Number four would be something like "if not modified within the last so-and-so many seconds", which might make sense.
D: Yeah, that was, I think, a similar reasoning we had there: there are potential use cases, but no strong ones that we see at the moment; it's not that clear what exactly it will be used for. I think that was also the motivation for option two: you should use something that's already there, that is backwards compatible and quite simple to use, without defining anything — or not much — new for it. The ETag option can be reused.
D: There's basically one instance per server in the group you want to revalidate against. So if there are three of these servers you want to revalidate with, then there are three Multi-ETag options in the request, and each option basically contains a number of CBOR-encoded ETag values — some sequence of ETag values.
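The per-server layout described here — one option instance per server, each carrying a CBOR sequence of that server's ETags — can be sketched like this. The exact wire format is for the draft to define; this sketch only assumes plain CBOR byte strings (major type 2), and only covers ETags shorter than 24 bytes, where the CBOR header is a single byte `0x40 + len`.

```python
# Sketch: one Multi-ETag-style option value per server, each carrying a
# CBOR sequence of that server's ETag values.

def cbor_bytes(b):
    """CBOR-encode a short byte string (major type 2, length < 24)."""
    assert len(b) < 24, "sketch only covers short ETags"
    return bytes([0x40 + len(b)]) + b

def multi_etag_value(etags):
    """Concatenate one server's CBOR-encoded ETags into one option value."""
    return b"".join(cbor_bytes(e) for e in etags)

def build_options(per_server_etags):
    """One option instance per server the client wants to revalidate with."""
    return [multi_etag_value(etags) for etags in per_server_etags]
```

With three servers, `build_options` yields three option values to place in the request, matching the one-instance-per-server rule above.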
D: Ten is okay — yeah, sorry, maybe I was a bit slow; it's okay. But these are the options; one, two, three, four are listed.
D: So this could be something application- or group-specific, or, well, anything the server likes; it should be chosen in such a way that there's a low probability of conflict with other servers, but also so as to select this server-specific ID.
D: In that case it knows: I cannot revalidate using that ETag — or, at least, there will be potential conflicts in that case. But that's a bit of extra burden on the client, to use these ETags in a smart way.
D: The nice thing about this is that if you have legacy servers that follow RFC 7252 and are not aware of this new ETag usage, they will just ignore the option, because it's not specified to be disallowed for multicast; so they will ignore it as an elective option, which is still okay — you may get a little bit of extra traffic, but that's fine. That's the text we have proposed in the editor's copy.
D: Keep it simple and reuse what we have. And now the question is: do we want to explore options three and four further? Because, as Christian said, three is maybe similar to one, and we'd need a really good use case for that.
D: Yeah, but it had been moved into an appendix with the idea that it would be removed from the draft. Okay — so we moved it somewhere else, but we still wanted to keep it. So it's not in an appendix; in the latest published version it's still present in the main text.
C: Sorry, okay — so I was reading in the GitHub, but on the wrong branch. Okay.
A: So, okay, but this Group ETag — will that remain? That has been moved to groupcomm-proxy, of course, and the same for the big caching model for the proxy; all moved out. But let's close it together, yeah.
B: Just a brief question to confirm, because I didn't find it quickly in the pointed-to document: that server ID that the server picks does not necessarily have any particular structure inside the ETag? So the ETag is still an opaque option, but the server kind of phrases its ETag in a way that is unlikely to collide with any other server's ETag, right?
D: Yeah, that's right. So maybe this text I have here is not entirely correct — it's not necessarily about embedding, yeah. So...
D: Same — so in that case nothing is embedded, but it's still used to create it, so with some probability it will be unique to that server. Yeah, thank you. And indeed, there's a choice — an application choice: you can make very long ETags, which are, of course, more likely to be unique but lead to more bytes over the wire, or you could have short ones. You can make a trade-off there, just like for the normal ETag.
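The length trade-off mentioned here can be put into numbers with the usual birthday bound: if `n` servers each pick a random `L`-byte ETag, the chance that any two collide is roughly `1 - exp(-n(n-1) / (2 * 2^(8L)))`. This is only a back-of-the-envelope aid, not anything the draft specifies:

```python
import math

# Birthday-bound estimate for the ETag length trade-off: n servers each
# pick a uniformly random L-byte ETag; longer ETags cost bytes on the
# wire but collide far less often.

def collision_probability(n_servers, etag_len_bytes):
    space = 2 ** (8 * etag_len_bytes)           # number of possible ETags
    exponent = -n_servers * (n_servers - 1) / (2 * space)
    return 1 - math.exp(exponent)
```

For example, a 1-byte ETag is hopeless for a few hundred servers, while an 8-byte ETag makes collisions negligible for any realistic group size.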
D: Yeah, and those are the next steps. So, you're seeing the comments from the review that John did; for those we created some issues with the tasks to do. Also, the other GitHub points are being worked on, and specifically this last discussion — so, to decide now on the revalidation method between client and origin servers. We already proposed an ETag-based approach, but I think we should also give that a chance to be discussed on the mailing list, if people want to say something about it; and then, finally, to submit version 4.
C: I mean, we know there is misuse of CoAP, and denial-of-service attacks have been executed already, and I just wonder why we want — I mean, I understand the point of view that things that work in CoAP should work with group CoAP as well, but I just wonder: if there were a version two of CoAP, I would ask the same question — why do we now have a NoSec security mode?
D: Yeah, I think we do say something about authentication, so...
B: Yes — it might be isolated, or it might also be secured by some layer that we, as CoAP users, don't see, like a layer-two encryption or something, or even IPsec; although that doesn't literally count as not being NoSec, it behaves like that for group purposes.
C: Yeah, I mean, I think that's reasonable. But if you want to apply group CoAP like that, surely you could do that — why do we, in the specification, write that there is a NoSec mode? Why do we allow it? I mean, people may deviate from specifications if they know it's an isolated network — does it matter if they deviate? Or maybe, yeah.
B: So, for example, if we're looking at cases of early discovery, where a device is looking for its group manager, or for a resource directory to join initially: there is only one of those on the network — there will be only one, or maybe two, but not an exploitable number of group managers or resource directories. But the device doesn't know that, and uses the multicast address a bit in the sense of an anycast address.
B: There is no aspect of amplification unless there is a great number of those devices deployed, in which case they have to coordinate anyway, and then some of them can decide not to join the group, but kind of negotiate which of them is the one that responds first, with the others just as fallback in case of failure, or whatsoever.
C: And that's not — I mean, the problem is not only for the particular application; the problem is for all the other applications that might be affected.
D: That — sorry, okay, there's a difference, of course. So, John was mentioning that even with security we still have issues left. Basically, we are now talking about NoSec mode used for discovery; that's because, for discovery, you typically have, by definition, no security context to start with — for example, to find your group manager, or likewise to find another node that offers certain services. But that's a bit of the problem today; it can't be countered by an OSCORE solution.
D: Using multicast to discover a single particular group manager or resource directory, for example, is better than starting to discover all devices with any CoAP service and getting hundreds of responses in that case. So you can design it in a good way or in a bad way, of course.
A: Okay, then we can move to the other document. I'll give an update here, mostly on the status of the editing, and then we also have some open points, mostly on reverse proxies and improvements for those; the discussion we just had is also covered.
A: So, as a recap: groupcomm-bis is, of course, defining how a group communication setup can use proxies, either forward or reverse proxies, but it keeps that at a high level, leaving the mechanics and the details to other specifications, while still highlighting the issues that one has to handle in that case. This groupcomm-proxy document, instead, defines specific mechanics to address those issues, and it's especially about a signaling protocol between the client and the proxy, through two new CoAP options.
A: Transferring what Esko mentioned before involved especially the overall response caching model for the proxy, which has now been moved into this document. The important thing here is that proxies — other than having a good configuration on their side — are also required to clearly identify a client that wants to send a request out to a group, which implies that client and proxy need to have a kind of security association; and that opens interesting things for later. So, quickly, how it works with the two options.
A: The client also includes this new option for multicast signaling, confirming to the proxy that the client knows what it's doing and what it wants. The option also has a time indication, telling the proxy for how long to collect responses to be forwarded back to the client. The proxy will remove this option before forwarding the request to the servers, so nothing really changes for the servers.
A: So, when the client starts collecting these responses, it can first of all distinguish the responses from one another, and it knows which exact server produced which resources; and that has the further advantage that, later on, the client can individually go and talk to that server again, through the proxy or bypassing the proxy.
A
If
the
network
setup
allows
for
that,
and
we
take
as
baseline,
the
the
the
known
secure
setup
is
end-to-end
but
of
course,
pretty
seamlessly
you
can
use
grouposcore
to
have
end-to-end
security
between
client
and
servers
and,
as
we
mentioned,
we
need
also
security
association
between
client
and
proxy,
and
that
can
be
achieved
through
oscore
or
dtls,
for
instance,
but
I
have
more
on
that
later
when
it
comes
to
allscore.
A
So
what
we've
done?
As
mentioned
in
the
previous
presentation,
a
lot
of
content
was
moved
here.
The
general
caching
model
and
the
response
revalidation
exactly
between
the
client
and
the
proxy,
so
above
above
other
points.
These
two
points
where
before
in
group
convince
reasons
to
update
rfc
7252
and
having
moved
them
here,
they
are
making
this
document
also
eligible
to
to
update
the
cost
specification
and
like
group
compass,
was
doing
in
fact
before
this.
A
Transferring
the
caching
and
the
property
is
possible
also
in
the
presence
of
end-to-end
security
with
grupo
score,
and
that's
also
discussed
at
the
high
level,
but
pointing
at
the
separate
document
on
uncashable
score
for
the
technical
solution
to
achieve
that.
A: Okay, and as for the caching model at the proxy, which, again, was transferred here: this roughly works as follows, with no big changes from what was in groupcomm-bis already at the previous IETF meeting.
A: The proxy has a number of individual cache entries related to a resource — one for each server.
A: Each is also ready to be accessed by a client. And then there is an aggregated cache entry that, in a sense, covers the whole group of servers, where the responses to a group request coming from any server in the group are kept. It can possibly be kept up to date by a response coming from one server in the group, but as a reply to a request intended only for that server, too — which makes it interesting how you update the overall lifetime of the aggregated cache entry. This is documented too.
A
At
the
moment
and
a
heat
to
this
aggregated
entry
is
possibly
produced
by
a
request
instead
intended
to
the
whole
group
in
the
first
place.
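The cache model described in the last few turns — per-server individual entries plus an aggregated entry for the group request — can be sketched as below. The class and method names are hypothetical, purely for illustration; in particular, the draft's actual rules on freshness and lifetime are not modeled here.

```python
# Hypothetical sketch of the proxy cache model: one individual entry per
# (server, resource), plus an aggregated entry per resource that just
# records which servers have answered a group request.

class GroupProxyCache:
    def __init__(self):
        self.individual = {}   # (server, resource) -> response
        self.aggregated = {}   # resource -> set of servers that answered

    def store_group_response(self, resource, server, response):
        # A response to a group request feeds both views.
        self.individual[(server, resource)] = response
        self.aggregated.setdefault(resource, set()).add(server)

    def store_unicast_response(self, resource, server, response):
        # A unicast response refreshes the individual entry; through it,
        # the aggregated view is updated too, if that server is in it.
        self.individual[(server, resource)] = response

    def group_view(self, resource):
        # What a later group request could be served from the cache.
        return {srv: self.individual[(srv, resource)]
                for srv in self.aggregated.get(resource, set())}
```

This also illustrates the point made later in the discussion: the aggregated entry carries no content of its own — it is essentially a set of pointers into the individual entries.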
A: So, this is the model — and, may I, briefly, here... sure.
B: Can you go back one slide? Yes. Now, the freshness model of the other document says that you can only have a cached response if you know all the members of the group. Isn't that aggregate cache entry now replaced with the knowledge of what the group is? So either you can have kind of an individual cache entry for each of them, and then it's fine, or you do not, and then you have to send out anyway.
A: Basically, it's something we wanted to do all along, but it wasn't clear enough in the text. The proxy is indeed supposed to forward back responses as they come, and to possibly update the aggregated cache entry only when the time interval indicated by the client expires and all responses are supposed to have come in — that's when the entry is also updated. The second point is, instead, what you've mentioned now.
A
I
think
your
suggestion
is
to
start
with
caching
at
the
client
that
we're
leaving
group
this
lot
like
esther
showed,
and
then
you
mentioned
okay,
something
similar
should
happen
also
for
for
the
aggregated
cash
entry,
but
perhaps
something
more
can
be
done
and
at
least
in
the
it
was
copy
with
cash.
Something
like
that
you
can
think
of
a
proxy
that
has
a
lot
of
knowledge
in
real
time.
A
Almost
of
the
group,
for
instance,
if
it
sits
on
on
a
multicast
router
and
can
see
new
members
join
the
co-op
group
and
that's
the
best
thing,
because
you'd
be
able
to
produce
a
very
reliable
aggregated
cache
entry
and
possibly
invalidate
or
refresh
it
as
it
sees
are
entering
the
next
best
thing,
but
but
not
not
as
good
would
be,
that
the
proxy
has
more
knowledge
of
the
application
or
network
or
or
network
context
like
how
long
you
take
for
a
new
server
to
join
the
group
and
how
much
uncertainty
time
is
tolerable
for
client
to
live
with
an
aggregated
cache
entry
that
is
not
100
align
with
the
current
group
membership
and
as
long
as
you
stay
within
this,
you
can
still
be
fine
but
lacking
any
of
this
knowledge
or
acceptable
enough
knowledge.
A: I think Christian was open even to forgetting about it. Right now it's defined with these, let's say, rules, because the only conclusion was: we need rules on the best way to handle it. The impression was that, under some hypotheses, it can work well, and if the hypotheses are not met, it's just better not to have it — which is what we are trying to describe now.
B: Yeah, I think this will need a bit of alignment with the other document, because, now that we have that statement in the other document... The practical implications would still be the same, because the proxy might know that, hey, devices only join, say, at midnight, or every five minutes, and then we're back to practically this; but it can be phrased in a more precise way.
A: Yeah, I mean, a more aggressive wording, to me, would be just saying here exactly what groupcomm-bis says for the origin client.
D: Yep. Still, I'm thinking it would be good to have a little more detailed use cases here for when you keep those aggregated entries — at least the one I recognize here: if it's located on the router, so that it's also handling multicast routing and handling, maybe, the registrations to these multicast groups.
D: Yeah, exactly. So it would be good in this draft to maybe explicitly explain those kinds of typical use cases. And then this second bullet — this was the five-minute example: if the proxy knows that group members can only join every five minutes, or every day, or if it knows that, for the application, you can ignore the servers that joined in the last 10 minutes — let's say something like that as well.
D: Then it knows that it has a response from all of them — because there could be one group member that joined ten days ago but still didn't get the chance to receive the request.
A: Right. I think, by the way, you listed those use cases, as you call them, for the origin client in groupcomm-bis, so we can bring them in here.
B: I think that we should aim for a proxy to be just a server attached to a client, so whatever we have in application rules that a proxy could know of should also be reflected in client rules — if we're not talking about a proxy but about a client that is caching. So I think we shouldn't —
B
We
should
try
to
avoid
having
special
rules
that
apply
to
proxies
that
don't
work
just
as
well
also
for
the
client
caching
case,
and
if
we
need
something
better
than
you
can
have
out-of-band
knowledge
of
where
the
rotary
of
of
which
members
may
or
may
not
have
joined.
Then
maybe
this
should
better
be
in
in
group,
compass
and
here
just
kind
of
pointing
towards
that.
A: So, for the proxy as well: either the proxy knows everything — because, for instance, it sits on the router — or it has to have context knowledge, like any possible other client, to be on the safe side.
D: Yeah, it basically means: just send it out — always have a new request.
D
Did
not
basically
introduce
the
aggregate
gas
entry
because
it
so
far
it
didn't
seem
necessary,
but
the
client
itself
could
also
keep
something
like
an
aggregate.
B
From
the
client
thing,
the
aggregate-
the
aggregate
entry
is
nothing
more
than
a
pointer
to
the
to
the
cached
states.
We
have
so
if
and
now
that
the
the
requirement
has
come
in
from
the
client
side
or
maybe
kind
of
maybe
we
can
reshape
it,
but
in
some
way
the
client
knows
what
the
servers
are,
which
servers
are
they're
expected
to
be,
then
the
aggregate
kind
of
state
vanishes
into
something
that
has
no
state,
because
we
know
somewhere
else.
What
members
are
there
and
we
know
in
the
cache
what
their
individual
contents
are.
A: As a matter of phrasing, it's probably easier to just not consider an explicit aggregated cache entry, but to think, instead, that a group request from a client may hit all the individual, already existing cache entries at the proxy — because a client friendly to this would already have knowledge of the whole set of servers in the group anyway, if I got Christian correctly. Yeah, okay. So the functionality is interesting to have; under some hypotheses, it can be presented in a more efficient way.
D: So, this talks about HTTP-to-CoAP proxies, and it defines a nice way of encoding a CoAP request in a URI that's included in a request to the proxy. There's an example here, in the second bullet, just to show that — in this case it's not an HTTP proxy, but a CoAP-to-CoAP proxy. So it's accessed over DTLS at myproxy.example.com, and it has a resource /p.
D
That's
the
resource
that
offers
the
oxy
functionality
and
through
the
proxy.
Now
the
client
wants
to
access
specific
group.
So
that's
three
dot
examples
on
that.
That's
and
of
that
group
it
was
to
access
the
resource.
Lite
example
to
do
a
get
request
there.
D: It's a customizable template, so that can all be done also for the CoAP-to-CoAP proxy case and also, in this case, for CoAP-to-CoAP group proxying. Here I listed some alternatives as well — there's the light green line and the purple line; that's, for reference, how to do it via an HTTP proxy, also going through another resource, /ac.
D
But
it's
just
an
example
and
that's
yeah,
basically,
something
that
would
be
nice
to
be
added
to
the
existing
reverse
sports
example
of
that,
because
it
uses
a
completely
different
ui
structure,
yeah
different
innovation
and
with
some
questions
on
these
reverse
proxies
specifically
listed
here
in
the
slides.
D
So
one
question
is:
can
the
multicast
signaling
option
be
received
by
the
first
boxing
and
then
also
used,
so
it
uses
the
sample
the
timeout
there
to
time
the
group
request
we
assume
the
answer
is:
yes,
that's
not
not
necessarily
a
forward
proxy
that
in
principle
a
reverse
box.
You
can
also
do
that,
and
second
question
was:
can
the
response
forwarding
option
be
used
also
by
reverse
proxy
and
there
we
assume
a
similar
way
that
the
answer
is
yes.
D: It would be useful to allow that, at least because the option is elective. It means that a client that doesn't know anything about these signaling options and gets the response-forwarding option can safely ignore it — that's what its CoAP implementation will also do. In this case, there's no harm done.
D: Yeah, in this case, I was thinking of the reverse proxy: can it do that? Yes, and the reason is actually that a reverse proxy is configured with some application-level knowledge.
D: Yeah, so I think we discussed that before. One assumption is that, okay, you have a client that is accessing this proxy, and it must at least be aware, at some level, that it's accessing a proxy and that it's accessing a group resource — especially if you look at the example above, it's very clear that the client knows what it's doing; it's even encoding it specifically, right?
B
So
the
thing
is
that
your
that
reverse
proxy
uri
might
have
been
passed
to
the
client
by
some
other
by
some
other
party.
So,
for
example,
there
might
be
a
dime
link
pointing
to
that
resource,
and
the
client
has
no
idea
that
there
is
kind
of
that.
This
thing
in
there
is
a
is
a
is
a
is
a
uri
on
its
own
actually.
B
So
what
I
think
should
happen
personally
is
that,
if,
if
the,
if
the
request
comes
in
with,
if
the,
if
the
thing
encoded
in
the
reverse
proxy
uri
is
a
multicast
address-
and
there
is
no
multicast
signaling
request,
then
kind
of
there-
there
might
be
options.
B
There
can
be
various,
you
need
unique
answers,
but
there
should
only
be
one,
but
if
the
client
is
aware
that
this
is
a
multicast
uri
and
it
might
have
been
told
but
told
that
so
kind
of
in
a
side
channel
to
the
uri,
then
it
would
send
a
multicast
signaling
option
and
then
the
server
could
forward
the
many
responses
and
other.
B
And
if
not,
then
it
would
be
just
the
same
case
as
if,
if
a
regular
proxy
got
a
request
for
something
that
the
proxy
knew
was
a
multicast
atmosphere,
the
client
might
not
know,
in
which
case
also
we
I
think
we
prefer
not
to
not
to
send
multiple
requests,
even
though,
unless
explicitly
configured.
B: You know, I think the options are the same as with a regular proxy. The easiest thing is to say: sorry, error — that's a kind of request I can't process for the requested URI. It might also use some kind of application-specific aggregation.
D
And
the
way
that's
like
safer
solution.
D
Yeah:
okay,
that's
what
we
have
currently
yeah
so
and
depending
yeah.
You
need
to
depend
on
this
signaling
option.
Yeah.
I
was
thinking
for
the
reverse
proxy
case,
so
that
it's
something
we
don't
discuss
in
detail
yet,
but
what
we
can
actually
say
for
the
security
reasons
to
meet
the
expectations
of
the
client.
D
And I was thinking that maybe we need a specific option value for the reverse proxy case; the client might not know what the right time value is.
D
Yeah, I was thinking you can, as a client, also include the option without a specific value. I think it's a uint option, so that would make the value zero. You could say, for example, a zero meaning: okay, you figure out the time. I don't know whether that only works in the reverse proxy case, or who can determine a sensible one.
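The "empty value means zero" point follows from how CoAP encodes uint option values (RFC 7252, Section 3.2): a non-negative integer in minimal-length big-endian form, so the value 0 is simply the empty byte string. A small illustration (helper names are my own, not from any library):

```python
def encode_uint_option(value: int) -> bytes:
    """Encode a CoAP uint option value per RFC 7252: minimal-length
    big-endian bytes, with 0 encoded as the empty byte string."""
    if value < 0:
        raise ValueError("uint option values are non-negative")
    length = (value.bit_length() + 7) // 8  # 0 -> length 0 -> b""
    return value.to_bytes(length, "big")

def decode_uint_option(data: bytes) -> int:
    """Decode a uint option value; the empty string decodes to 0."""
    return int.from_bytes(data, "big")
```

So a client including the option with no payload is indistinguishable, on the wire, from explicitly sending the value zero.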
B
B
And on the topic of whether the reverse proxy can send a response forwarding option in responses even without multicast signaling, which would be the case if it forwarded only one response: I think in theory, yes, but we should carefully evaluate this against other ways of introducing this, because it would be, in effect, introducing aliasing. Now, this is okay, because that's what a reverse proxy does, but this might need careful coordination.
C
B
Okay, that's what you meant by this, because I mean this could just as well... the proxy address could just as well... I thought I understood this third point to indicate that if the request happens to carry a unicast address, and a multicast signaling option may or may not have been added, then the proxy might still include that response forwarding option, and that would be basically like saying: yes, but by the way, my host name is actually this; and that would matter for that consideration.
D
A
Okay, thank you. And I think we covered all the big things; we have just a few more heads-ups, also tracked with issues. This is related to work ongoing in the CoRE Href document, where the final details are hopefully converging.
A
A
And another open issue, I mentioned this at the beginning: we need a security association between client and proxy, one way or another, for client identification.
A
This, of course, can be DTLS in principle, but especially if you have Group OSCORE used end-to-end between client and servers, it's just more convenient, also from a code-size point of view, to have OSCORE instead between client and proxy.
A
So the conclusion from IETF 110 was that this is only one of a few more use cases where something like this is needed, and anyway it's some work that requires a bit more attention and consideration for proper analysis and design. So we agreed to take it out from this document and have it as a separate draft, of which we are about to finalize a version zero.
A
So, after that, the plan is anyway to remove the appendix from this document, so that this document can instead point to the new one about this feature.
A
A
So, thinking of the proxy as a client, we will most likely converge again to an adapted version of the simple introduction. And then we still have the relatively old issue that keeps staying behind, because more urgent things continue coming up, about enabling these same mechanics and signaling protocol in case our proxy is an HTTP-to-CoAP proxy; that would ultimately enable an HTTP client to talk through the proxy to a group of CoAP servers. So we have a rough idea how to do that.
A
Other things just keep preempting it, but it's in the queue, yeah. So we'll try to address as many open points as we have, also thanks to the feedback we got today, and submit a version for the cutoff. In the meanwhile, more feedback or even reviews are welcome.
A
Thanks, and thank you also for the very good feedback. So we are at half past the hour, but I remember Christian mentioned some any-other-business at the beginning of the meeting.
B
B
So if things come up with users that you talk to again and again, or you just want to look at what is out there and what questions come up, especially if you have opinions that might differ from what's in there: look there, start a discussion, or just edit it, because it's a wiki. It's not supposed to be authoritative; it just contains a few common pitfalls and patterns that emerge.
B
Nothing too specific; pretty general CoAP stuff. So there are things like... I mean, there are also controversial questions in there, like: do I have to send a Uri-Host option or not, which to me was the most common question I've heard about. Yeah: how do you do redirects? Why do you have plain-text responses to errors when this is super wasteful on a constrained device? Things like that. How do I send incremental updates over an observation? (You don't.)
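On the Uri-Host question: RFC 7252 (Section 5.10.1) gives Uri-Host a default value, namely the IP literal of the destination address, so the option is only needed when the URI authority differs from that, for example a DNS name used for virtual hosting. A hypothetical helper, just to make the rule concrete:

```python
def needs_uri_host(uri_authority: str, destination_ip: str) -> bool:
    """Return True if a CoAP request must carry an explicit Uri-Host
    option. Per RFC 7252, Uri-Host defaults to the destination IP
    literal, so it can be elided when the authority matches it.
    (Sketch for illustration; real code must also normalize literals.)"""
    return uri_authority.lower() != destination_ip.lower()
```

So `coap://[2001:db8::1]/sensors` sent to `2001:db8::1` needs no Uri-Host, while `coap://node.example.com/sensors` resolved to the same address does.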
B
But this is why... okay, thanks... I'm trying to capture what I think emerges as things that you come across once you've been on the CoRE mailing list for some time, but newcomers might not. You can't get it from the archive; you have to be part of the discussion, or have someone digest that.