From YouTube: IETF105-NETMOD-20190722-1330
Description
NETMOD meeting session at IETF105
2019/07/22 1330
https://datatracker.ietf.org/meeting/105/proceedings/
A: Welcome to Montreal and NETMOD. I'm Lou Berger; with me are my co-chairs, Kent Watsen and Joel Jaeggli. The meeting material is in the usual spots. We are doing Etherpad notes; you can see the Etherpad link, which somehow drifted to the bottom. You can also find it off the tools page. Please join in and help us capture what's being said at the mic, both the comments at the mic and the responses. We don't need to capture everything that's said in the slides.

A: Let's go to the next page. We are at the IETF, which means we have rules that govern our participation. You can find those under our Note Well; this slide is a summary of them. If you're interested in those rules, or are unfamiliar with them, please go to https://www.ietf.org/about/note-well.html, which will also give you a pointer to the governing documents. Since it's Monday, it's worth taking a look to make sure you're familiar with it. As usual, we're doing video streaming as well as audio streaming and recording. Please make sure you state your name if you come to the mic; that's very important for those who are remote, as well as anyone taking notes. I will be jumping on Jabber. If there are people who are willing to jump on Jabber and help channel those who are remote, we'd appreciate it.

A: I think we have had an agenda change; I'll talk about that in a moment. We wanted two and a half hours, because we thought two hours was a little tight, but we tried to squeeze into two hours. Because we asked for two hours, they bumped us into a second slot, which means we have a lot more time than that. So in theory we're here until almost 6 o'clock. That seems like a long time from now, and I don't doubt that we'll use all of that time. That said, we have added a couple of topics that we think are a useful use of the working group's time. If we're having a good conversation, we're not going to break it, which means we're not exactly sure what's going to line up with the session break; we're just going to go in the order that's in the published agenda, and that's captured here. This slide is unchanged: the agenda is unchanged, the design team items aren't changed, the non-charter items are as previously published, but we've added a topic that's come up.
C: So there's no update to YANG-Next since the last IETF meeting, where we met. I think the main issue is that we don't have someone to drive the discussion. In particular, folks are probably looking at me, not just because I'm in the front of the room, but because I had coordinated several of those meetings before.

C: But at the moment I don't have the bandwidth to actually drive those discussions, much less coordinate the meetings for the discussions to be had. I did ask someone to do that, but they also hadn't really signed up to do it, and so that's where we're at right now: it's just sort of in limbo, waiting for someone to have the bandwidth to actually drive the discussions. So the bottom line is that there's been no progress since the last meeting.
A
Yeah
I
also
think
that
the
design
team
activity
that
we're
gonna
hear
about
is
probably
taking
a
bunch
of
attention
from
those
who
might
also
participate
in
the
egg
next
site.
My
personal
view,
not
as
someone
who
is
contributing
to
that
work,
my
personal
view
is
ones
that
the
current
design
team
winds
down.
There
may
be
some
more
energy.
A: That's the one; in fact it's listed right here, so thanks for mentioning it, I appreciate that. So, at the last meeting we had an update on the YANG types work, and we were actually hoping to move that document through really quickly, but we haven't seen an update. Jürgen is online if he feels like mentioning something, but we're really gated by the authors pushing that forward.
A
So
so
we
don't
take
it
all
right.
Thanks
duplicated,
so
there's
been
a
recent
discussion
on
the
list
about
an
errata
and
then
we
realized
there's
actually
a
whole
slew
of
virata's
that
we
really
haven't
verified
as
a
working
group.
Now,
technically,
the
aetyi
is
the
one
who
verifies
an
errata,
but
generally
the
ADEs
take
input
from
the
working
group.
So
we
think
it's
worthwhile
to
go
through
each
of
these.
A
We're
not
going
to
go
through
each
of
them
now,
but
we
think
it
is
worthwhile
to
go
through
each
of
these
as
individuals
as
the
working
group
and
agree
on
what
we
think
the
right
answer
is
of
whether
or
not
these
should
be
verified
or
not.
Now,
I
think
we
had
a
couple
of
them
that
we
had
identified
it
being
to
be
objective.
A: The other one that should be rejected, we took a look at, and it's actually a technical change. One thing to keep in mind is that if you see something in an RFC and would like to modify the behavior of that RFC, such that others implement that changed behavior, the right way to do that is in a bis or a new RFC that updates the existing RFC. You can't make a technical change through an erratum, and this one is asking for that. The particular one that's being rejected is asking for a substantive technical change that would impact all implementations of the document. I mean, it's just adding a default, so it's not like a controversial thing, but you just can't make that type of technical change in an erratum, just by our process. So that's something to keep in mind.
A
It
is
going
to
be
important
that
we
go
through
each
of
these
and,
as
a
work
group
have
a
response
will
probably
push
that
as
chairs
that
if
there
isn't
something
that
has
a
clear
discussion
on
the
other
157
84,
which
is
the
recent
discussion,
clearly
that's
gonna.
We
think
that's
gonna
be
rejected.
Will
close
the
conversation
if
it's
not
otherwise
closed?
D: Okay, so: more reviews, please. Beyond that, one of the issues has been raised by Acee. I've had a verbal comment from him yesterday, and I think he's happy, but we'll discuss that today. And then there's a separate issue that was raised, not related to this document specifically but more generally, where I'm proposing that we discuss a potential change to this draft for that.
D: So, the issue that Acee raised here: his actual text is there, but basically he's saying that RFC 3635 defined a load of Ethernet-like counters and statistics, and maybe we should include a subset of those in this draft. In particular, he noted that this draft has one counter for reporting destination MAC drops, but none of the other ones. Well, there's some background history to this, and that's really that we looked at that RFC and another one; we looked at the current EtherLike-MIB.
D: But there were two exceptions, ones that aren't included in there, that are worth discussing. The first one is whether we want to add a sub-interface demux drop counter. This would be on the trunk parent interface: if you've got many sub-interfaces, each classifying traffic in different ways, and a frame comes in that can't be classified to any sub-interface, you drop it. We don't currently have a drop counter for that. I think that might be worth adding in; that's my opinion.
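A minimal sketch of what such a counter could look like in YANG. The module name, leaf name, and augment target here are illustrative assumptions based on this description, not text from the draft:

```yang
module example-subif-demux-counter {
  yang-version 1.1;
  namespace "urn:example:subif-demux-counter";
  prefix exdemux;

  import ietf-interfaces {
    prefix if;
  }
  import ietf-yang-types {
    prefix yang;
  }

  // Hypothetical augment: a drop counter on the trunk parent
  // interface for frames that match no sub-interface.
  augment "/if:interfaces/if:interface/if:statistics" {
    leaf in-subif-demux-discards {
      type yang:counter64;
      description
        "Frames received on the parent interface that could not
         be classified to any sub-interface and were dropped.";
    }
  }
}
```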
D: So that's relatively easy to add in, I think, unless anyone has an opinion on it or is opposed to it. Okay, if not, I'll assume I'll add it in, and that's fine; that's fairly easy. Then the second one is more interesting. It's about the Ethernet histogram statistics, and this is what they look like. This is defined in one of the existing MIBs or RFCs; I think it's a breakdown of the number of packets received, based on the length of the frame. Also, 802.3 has an interesting view of how you handle jumbo frames: I don't think they're specifically disallowed, but I don't think the spec really wants to talk about them or standardize them either. Whereas if you want to fill out this table properly, you want to go up to about 9k as well.
D: There was discussion when we did the work with 802.3, and I didn't follow that through all the way to the end; I was involved in the early parts of it. As I say, they didn't really want to standardize these histogram counters, for various reasons. And then the other sort of issue that comes up is that, because there hasn't been a good standard here that goes to these higher ranges, different vendors' hardware does different things.
D: So, basically, the proposal here is that, rather than having strict bucket definitions, we could return a list of bucket entries, where each bucket entry defines what range it is covering, low end and high end, inclusive, and a count of the packets that match that bucket range. It's a bit more verbose in terms of the data model, but it's more flexible. At the same time, we would give recommendations in the description to say: we recommend you choose these bucket sizes, trying to encourage heading towards consistency somewhere.
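A rough YANG sketch of the bucket-list idea being proposed. All names here are invented for illustration; the actual draft text may differ:

```yang
module example-ethernet-frame-histogram {
  yang-version 1.1;
  namespace "urn:example:ethernet-frame-histogram";
  prefix exhist;

  // Hypothetical read-only histogram: the server reports the
  // bucket boundaries it actually implements, rather than the
  // model fixing them in advance.
  container frame-size-histogram {
    config false;
    list bucket {
      key "min-frame-size";
      description
        "One histogram bucket; ranges are expected not to
         overlap. A description statement could recommend the
         classic RMON boundaries (64, 65-127, ..., 1024-1518)
         plus jumbo ranges up to about 9k.";
      leaf min-frame-size {
        type uint32;
        units "octets";
        description "Lowest frame size counted, inclusive.";
      }
      leaf max-frame-size {
        type uint32;
        units "octets";
        description "Highest frame size counted, inclusive.";
      }
      leaf in-frames {
        type uint64;
        description
          "Received frames whose size falls in this range.";
      }
    }
  }
}
```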
D: So the question I had is: do I add these in now, or can it be deferred? I think the question is: do we try to do this now, in this working group last call, at all? If we do want to try to do this, I can speak to David Law and find out whether he's happy for us to do it; in fact, I can knock it up and show what it would look like. But that's my question: is this the right time to even try to do this?
F: Tim Carey, Nokia. The question I would really have is that those sizes that were there were identified for various reasons, for good reasons at the time, and they have tools and tests that go around them. And so what we said is: okay, we're going to provide some abstraction, if you will, or the ability to do some meta work on that, such that we can define the ranges themselves and then provide some guidelines for the different pieces.
F: The problem is that when you say you have a guideline, you don't have anything that you can standardize, that I as a client and I as a server can really agree upon. I can't guarantee that certain servers that would implement these things would implement, say, packet size 256 through 511, so I can't really rely on that. I have to use a mechanism; maybe an operator has to get involved, and stuff like that. So that's the biggest concern that I have with using this.
E: Yeah, Joel Jaeggli. I definitely see the existing values as being a product of historical decisions that were made in the IEEE. I mean, you know, when you decide that you need something bigger than fifteen hundred because you added a VLAN tag, that is a specific historical decision. There are a lot of increments that you could come up with above 1500 that have various kinds of historical meaning but are not specifically anchored to IEEE standards.
E: So you're going to end up with, you know, 1519 to 1548, then 1540 to, say, 4470, and then nine thousand, nine thousand fourteen, 9190, 9216, and then every Ethernet vendor's specification up to there, and then Intel at 16K, because it's a round number. So I actually see the likelihood of us getting a really good list that goes above this, in a short period of time, one that the IEEE will look at and go "yeah, that's cool", as being pretty low.
E: Yeah, I mean, I think this is good enough. From my vantage point, as a jumbo frame user, you know, either there are jumbos or there aren't. But that's my network: I don't spend a lot of time distinguishing between, say, the ninety-one-hundred-byte packets and the nine-thousand-five-hundred-byte packets, because I know what my MTU is set to.
D: But I guess my point is: there's no reason for us to add these counters if we're going to stop at 1518 or 1519 or whatever. That doesn't seem a good reason for why we would choose to do it, because they're choosing not to standardize these counters; they could have done these if they wanted to. So if we want to do it, we probably want to at least fix it so it works for other use cases, higher up.
H: I can tell you from some debugging that the MTU sizes come in so many variants of 1500. It is just really annoying, and then the thing doesn't work, because the MTU size is just a slightly different number on your links. The 1500-ish numbers are all over the place, yeah, but they are always definitely below 1600. And then the next question is: oh, are those jumbo frames?
I: What I would add to this discussion: maybe we should also add packets-in and packets-out counters, because we don't have those. We have unicast and multicast packets, and, for various reasons, a lot of vendors just don't implement differentiation between unicast and multicast packets. So, really, the ietf-interfaces model is impossible to implement on those devices. There are many devices like that, OpenFlow devices, for example.
I: You cannot implement the interfaces model for OpenFlow devices; you cannot implement it on traffic generators either. And these are supposed to be flexible devices where you actually should not have any problem implementing ietf-interfaces. And I think I am against adding Ethernet-specific counters in this draft. Maybe there should be another draft that adds them, but this one should be kept as compact as possible. I think it's too much already. Okay.
D: Yeah. On John's question about the total packet counts in and out: I could be mistaken, but I have a feeling that went into 802.3.2. I think they might have total packets in and out there, so either that count should be added to ietf-interfaces, or it should be noted that it's in 802.3.2.
A: I'm stating the opinion now because I'd like to hear if anyone in the working group disagrees with it. I think we're going to leave this meeting with the position that we're not going to add the histogram counters, including any of the other Ethernet ones, so these are not going to show up in the document. If you disagree with that, now is the time.
D
All
right
can
I
can
I
ask
one
more
question,
which
is
who
thinks
that
we
should
try
and
stand
ice
if
we
don't
do
now
as
a
separate
draft
or
separate
work
item
to
try
and
standardize
this
and
these
counters
in
what
something
like
it.
D: Next, the MTU issue, which I hope will be easier, although MTU is always a contentious thing. So there's a NETMOD thread titled "question regarding RFC 8344", and basically the premise of what came up there is that the Linux default loopback MTU is 65536 bytes, whereas all the MTUs in the IETF models are limited to uint16, that is, 65535. Somebody pointed out that an IP layer can't have an MTU above 65535 anyway, and this model does define an L2 MTU.
I: In the mail that actually brought that up, which was my mail to the group, there was, in the end, another suggestion in addition to the L2 MTU. I think it makes sense to have an MTU leaf which matches the MTU definition actually used by Linux: when you look at the interface configuration on Linux, what you get is actually the MTU without the header altogether. That's very important, because most people who use Linux are used to that value.
I: So this has to be in the model, I think, and there is an advantage: when you define protocol configuration based on ietf-interfaces, you want to know the packet payload you can use. So when you compare parameters, when you are configuring the other end or other protocols, you can put a must statement there, limiting the size of that protocol's MTU to that MTU.
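As a rough illustration of the kind of constraint being described, here is a hedged YANG sketch. The leaf names (`payload-mtu`, `bind-interface`) and the augment are invented for this example; they are not from ietf-ip or the sub-interface draft:

```yang
module example-payload-mtu-constraint {
  yang-version 1.1;
  namespace "urn:example:payload-mtu-constraint";
  prefix expmtu;

  import ietf-interfaces {
    prefix if;
  }

  // Hypothetical Linux-style payload MTU on the interface:
  // the maximum L2 payload, excluding L2 headers.
  augment "/if:interfaces/if:interface" {
    leaf payload-mtu {
      type uint32;
      units "octets";
      description
        "Maximum L2 payload size, excluding L2 headers.";
    }
  }

  // A protocol model could then constrain its own MTU
  // against the payload MTU of the interface it runs over.
  container example-protocol {
    leaf bind-interface {
      type if:interface-ref;
      description "Interface this protocol instance runs over.";
    }
    leaf mtu {
      type uint32;
      units "octets";
      must ". <= /if:interfaces/if:interface"
         + "[if:name = current()/../bind-interface]"
         + "/expmtu:payload-mtu" {
        description
          "The protocol MTU must fit within the payload MTU
           of the bound interface.";
      }
    }
  }
}
```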
I: So it makes a lot of sense to have it there, and it's more elegant: just interfaces/interface/mtu, rather than an inelegant l2-mtu. And especially, the convention of taking out the VLAN in the description statement creates complications, because you have MACs that actually just don't care about it: whether a VLAN header is added or not, the MTU is the same register value.
I
So
if
you
standardize
this
in
the
young
model,
it
will
be
difficult
implemented
like
you,
won't,
have
a
future
not
sending,
but
it's
bigger
than
that
sizing
harder,
and
you
have
this
difference
that
a
v1
edit
should
not
be
a
problem.
It
should
go
through.
You
cannot
do
it
with
the
existing
hardware.
That's
another
point.
I: So a general MTU that corresponds to the Linux definition of MTU is something that I proposed.
D
So
I'm
nervous
about
that
for
several
reasons.
One
is
that
I
think
losing
the
l2m
to
your
configuration.
Value
is
probably
a
mistake.
I
think
there's
lots
of
systems
that
use
that,
as
there
is
a
configurable
value
on
an
interface,
so
I
think
changing
that
to
an
l-3
MTU.
So
payload
m
to
you,
as
you
say,
would
be
probably
a
poor
choice.
D
I
think,
then
the
question
is:
could
you
have
both
in
coexisting,
but
that
would
also
you
could
you
could
probably
allow
either
of
the
two
to
be
configured
potentially
but
I'm,
not
sure
how
the
constraints
would
work?
So
if
your
constraint
was
against
a
payload
based
em
to
you,
but
the
user
to
configure
than
L
to
him
to
you,
then
it
wouldn't
necessarily
work.
I
can
certainly
see
how
you
can
report
both
values
in
the
operational
state.
I: My argument is that the payload MTU is what all the RFCs up to now in the IETF are using. So it is strange that we are not going to have that MTU as part of this. Instead, we are going to use the L2 MTU, which you can actually derive from the type of the interface: if the interface is Ethernet, there is no doubt what the difference is between the MTU and the L2 MTU. So why are we going to bind ourselves to the interfaces, if the interface type is not going to be used as an information source for that calculation? It is going to create confusion. Someone on this mailing list was trying to configure the MTU for the payload, and he was confused that he cannot do that.
D: But again, it comes back to what the hardware will police. Often the hardware framers and things might police a single value, and the value for everything is the L2 frame size, not necessarily the IP, the L3, or the L2 payload. So that's one complexity. And in terms of the same discussion about whether, historically, they might have standardized the L3 MTU: they did do that. They did that for L2VPN, and they've ended up in a world of pain because of it.
D
So
in
the
l2
pn
specs
the
MTU
negotiate
across
the
wire
is
the
payload
m
to
you,
but
you
don't
know
what
size
those
headers
are.
So
you
can't
easily
agree
that
value
is
very
hard
to
calculate
when
you
want
L
to
frame
coming
in
to
say
well,
this
amount
of
its
headers
without
having
to
analyze
the
PAC.
You
don't
have
tags
on
there,
so
it's
a
very
strange
value
to
use
so
I'm
still
nervous
of
of
moving
something.
A: That's my memory; my memory is that it's the maximum IP size, and the use here in the document, saying "L2 MTU", is including the L2 headers. I think that maps back, again, to the IEEE terminology "max frame size", and it may be helpful, I don't know, it seems it may be helpful to use that term.
K: My observation as an operator, and maybe things have changed over the last couple of years where I haven't cared about interface configuration, is that the MTU thing is very confused. In the configuration languages it shows up, as far as I remember, and there are different interpretations between vendors; there are vendors that have different interpretations regardless, depending on the line of operating system running on the hardware. And my observation is that the usual configuration is rough.
K: So the question is whether the concept being addressed here in the model should be a precise one, or more a mirroring of the existing situation: well, okay, we only give a rough number, and let the ops people someday, perhaps, figure out that, yes, there is one MPLS label too many.
D: So different people implement this in different ways. On some OSes, as you say, you give it an L3 or an L2 payload MTU, and then they add on some slop and say anything that's between this is fine. There are other ones that have a value like this L2 calculated one, and they will check it strictly.
A: And it looks like there are two RFCs, both from basically the same set of authors, in the last year or so, in that space that you talked about, that use the layer-2 MTU context. I think, for something that's so general, we should not introduce it here, but stick with figuring out whether the previous RFCs help, and use "maximum frame size"; I think that's probably the safer term. I'm not going to say that we're going to clean up the confusion, because I think it's there, but we won't make it worse by using the term "layer-2 MTU". That's partly okay, and I'm sorry to see that one.
D
That
was
all
the
what
questions
are
had
on
that
the
southern
face.
This
is
also
working
group
last
call
this
one
is
much
shorter
in
terms
of
what
I've
said
to
review,
support
publication
and
no
comments
received
as
I
say.
Possibly
that
means
that
I
could
flawless,
but
otherwise,
if,
if
you
interested
it'd,
be
useful
to
have
a
review
even
you're,
happy
with
how
it
stands
now
and
this
Palace
worker
Glasgow
process,
I,
don't
know
when
the
work
loss
was
meant
to
finish,
but
we
need
more
reason.
A
That
said,
as
Shepherd
I'm
not
going
to
be
able
to
look
at
this
until
at
least
next
week
and
technically,
all
the
comments
are
closed.
You
still
have
some
comments
that
need
addressing
before
will
be
ready
to
go
and
there's
going
to
be
an
IDF
last
call
that
a
people
can
spit
comments
to
so,
rather
than
to
be
really
strict
about
it.
I
say:
if
you
have
comments,
it's
not
too
late
to
send
them.
B: So, the concept was that we have many use cases where we want to document instance data, so not the models themselves, but the actual values, the integers and strings, and we want to document them offline and potentially hand them out to customers or store them somewhere. These are just some of the use cases; I think there are seven or eight, some of them detailed in the document. And it was decided that, at the least, we need metadata about the instance data: when was it produced, what models is it documenting?
B: Earlier, the modules that define the content were called "target modules". A number of people didn't like that, so now it's called the "content schema", and in the one or two cases where I have to refer to the individual modules, they're called "content-defining YANG modules". The terminology changed wherever it came up inside the draft. The YANG instance data set itself has a name, which most of the time, I think, is needed; some people insisted that they don't always want to have that, so it became optional.
B
There
were
some
up.
This
draft
is
using
the
yang
data,
and
your
yang
structure
is
that
it's
now
cooled
it
was
updated
according
to
that
draft,
including
yang
trees,
and
all
that-
and
there
was
a
comment
that
entity
tags
and
last
modified.
The
time
stamps
that
are
very,
quite
useful
in
in
rest,
conf
are
actually
encoded
in
HTTP
headers
and
we
can't
use
HTTP
headers
in
these
two
formats.
So
now
they
are
defined
as
metadata
and
if
they
are
used
they
can
be
encoded
as
metadata
and
next.
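For concreteness, metadata annotations of this kind are usually declared with the `md:annotation` extension from the YANG metadata mechanism (RFC 7952). The following is a hedged sketch of how the two values discussed here might be declared; the module and annotation names are illustrative, not taken from the draft:

```yang
module example-instance-data-tags {
  yang-version 1.1;
  namespace "urn:example:instance-data-tags";
  prefix extags;

  import ietf-yang-metadata {
    prefix md;
  }
  import ietf-yang-types {
    prefix yang;
  }

  // Hypothetical annotations carrying what RESTCONF normally
  // transports in HTTP headers, so the values can survive in
  // offline instance data files.
  md:annotation entity-tag {
    type string;
    description
      "Opaque entity-tag for the annotated data node.";
  }
  md:annotation last-modified {
    type yang:date-and-time;
    description
      "Timestamp of the last modification of the annotated
       data node.";
  }
}
```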
M: ...annotations, because I can see, of course, a good use for many other annotations that are available or that may be defined in the future. So I don't really see any need for this, because, as you said, currently the entity-tag and last-modified are used by the RESTCONF server as part of the HTTP headers. So if somebody wants to define them as annotations, that's fine, but in that case the handling can be the same as for any other annotation.
B
They
are
quite
useful
bits
of
information
and
they're
just
doing
a
module
to
define
these
two
tags
when
clearly
we
use
them
here,
I,
don't
see
the
reason
to
split
them
out,
so
they
won't
harm
anyone.
They
are
obviously
useful
in
my
view,
I
didn't
want
to
define
them
because
I
thought
the
rest
comfort
handles
that,
but,
as
we
can't
use
the
restaurants
and
coding
solution,
I
like.
M: What I am saying is that we should treat these two and other annotations in the same way, and the proper, neutral way is to define a YANG module that defines these annotations and include it as part of the content schema for this instance data. That's then published normally, either inline or in any other way, but it's done just in the normal, standard way, rather than being mentioned specifically in this document.
A: I read it as being sort of a tunneling of the metadata you get from the NETCONF protocol or the RESTCONF protocol, so you were trying to add a parallel piece of information, where, you know, we don't have it on the wire before that point. It was parallel to that; that's what I was thinking, which is quite different from where a lot of this is going, where it becomes part of the content.
C: As a contributor: I think a lot of what you're saying is that, in RESTCONF, the top-level node must have the etag and last-modified tags, and inner nodes may have them, and so, to your point, any node may have them. So there's that; but then, separately, there was the question of why they need to have this information.
M: But my point is that we needn't really care about use cases for these two annotations. If somebody does have a use for them, they can define the module defining the annotations and just go ahead and use them. So that's fine, but it needn't be an issue for this document, because it can accommodate any annotations.
D: Rob Wilton again. I'm not sure how important this data is, so from that point of view I don't really care that much either way. At the moment it just needs to be one sentence in the draft saying that these are done the same way as RESTCONF, which is fairly minimal text. I take Lada's point about: if we're going to do it, why don't we do it generally. But I still also see that you could have this line in the draft saying this is how they're done, and it could still be done generically for anything else, even these.
A
I'll
say
it's
less,
then
the
first
one.
None
of
these
are
statistically
significant
numbers,
but
still
it
seems
like
there's
a
very,
very
slight
preference
of
room
to
stay
as
it
is
I
would
say
it.
Let's
keep
it,
but
also
ask
the
bank
again
on
the
list.
If
people
have
any
comments.
Okay
on
this,
so
I'm
just
looking
down
to
see
if
there's
anything
from
from.
B: Please, okay, so here is an example; someone wrote an example of how this would look. The most interesting part here is that we have some metadata, like the name, revision, and description of the instance data set, and then we have the specification of the content schema. Here we have an example where the content schema is specified inline.
B: Okay, so never mind. Here we have the inline content schema definition. There is also a possibility to just put a reference to the content schema, if you don't want to repeat it: in a case where you have, I don't know, diagnostic state data every five seconds, you don't want the content schema repeated every time. And that's it. I think I would like to bring this to working group last call.
D: Rob Wilton, Cisco. So I think three choices, almost, is what I'd prefer. One is a very simple one, which is just: this is the list of the modules and their revisions, and that defines the schema, so without even needing the inline content schema, just that list. That's one choice. The second one is what you've done here, where you specify what the schema is in that way, and the third one is a remote schema.
B: I don't agree with your first method, sorry, because even in a simple case you need to have a place for supported features, you need to have a place for deviations, and you also need to at least specify which version of the YANG library you are using. You say that you just want the list; yeah, but you want to say what defines the format of that list.
D: I'm not saying take away what you have here; I'm saying add another, third option that's a simpler version of it, that doesn't worry about deviations, doesn't worry about features. It's just the data that you're uploading; you could potentially just have the features enabled by default.
I: I would rather have multiple files containing multiple datastores, so it's more atomic and it's not overly complex. But in the discussion against having it done this way, more people should have contributed to keeping a simple, single-datastore format. Now we have a new YANG library which is obsoleting the simple, single-datastore one, so it's very difficult: are we going to use the old one to make new RFCs?
I: It is going to create even more confusion. So I regret not having more support when there was a discussion that we should keep the same YANG library and use different mechanisms to achieve the goal. Then it was only me, and maybe Andy Bierman, who were opposing, and everyone else was agreeing. So now we just have to use the new YANG library, I guess.
N: So, what is this draft about? Actually, the draft defines a new RPC, which we call the factory-reset RPC, and we also introduce a new factory-default datastore. It is a read-only datastore. There are typical use cases: we can use the factory default when all settings need to be reset; we can use it in the zero-touch provisioning stage; and also, in some cases, you may hit an error during provisioning, and you can leverage it there.
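A rough YANG sketch of the two pieces being described, an RPC plus a read-only datastore identity. The names follow the general idea presented here, but the details are written from this description and should be treated as illustrative, not as the draft's actual module:

```yang
module example-factory-reset {
  yang-version 1.1;
  namespace "urn:example:factory-reset";
  prefix exfr;

  import ietf-datastores {
    prefix ds;
  }

  // Read-only datastore holding the factory-default
  // configuration of the device.
  identity factory-default {
    base ds:datastore;
    description
      "Read-only datastore containing the configuration the
       device will use after a factory reset.";
  }

  // RPC that resets the device to its factory-default state.
  rpc factory-reset {
    description
      "Reset the device's configuration (and, per the
       discussion, possibly stored files, logs, and security
       data) to the factory-default state.";
  }
}
```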
N: The factory-reset RPC resets the device to the factory-default state. So, the current status: we have had two calls for adoption for this draft, and we resolved some issues. Among the changes we made in this draft, one issue was about terminology: people had some concerns about "the YANG server", so we actually tried to reuse the existing terminology, like "server", as defined in the NMDA architecture.
N: The second major change is about how the factory reset applies to the datastores. We could apply it to all the datastores, but people had some concerns; maybe we should, you know, take out the candidate. So we added some text to clarify this. And in the second call for adoption, we also raised some issues; there were two issues we tried to resolve.
N: One is the security issue; we made a proposal, which I'll get to, so that is still an open issue. The other is the copy-config: we had actually extended the copy-config operation to support copying the factory default for settings, but that is not, you know, factory-default specific, so the result is that we removed the copy-config.
N: So this just reflects the discussion on the mailing list: we removed the copy-config, and, as I mentioned, it is not factory-default specific. Also, we actually reviewed several NMDA protocol documents, like NETCONF for NMDA and the RESTCONF support.
N
Actually, those documents do not define a &lt;copy-config&gt;-like RPC, but it would be useful to have &lt;get-config&gt; support: because we have a factory-default datastore, it would be useful to allow &lt;get-config&gt; to access this datastore. So, following the mailing-list discussion, we removed the &lt;copy-config&gt; extension from the module in the draft, and we defer that question for now.
N
So the second issue is the security issue. The factory-reset RPC is mainly focused on resetting the device to the factory-default state, but it would also be useful to use this RPC to clean out files or restart some of the software processes. You can also set the security passwords or data to the default values, but all of this information may be sensitive, so we need to address that. There was also a relevant comment that this is related to the keystore draft.
N
We think the factory-default content and the reset operation could be useful for the keystore draft, and we tried to resolve these concerns. So, the proposal: we proposed some text saying we can use some encryption or signing mechanism. We also talked with the authors, and we think maybe we should rely on the access control rules to protect the sensitive data. So that's where we are, but we are not security experts.
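To make the operation under discussion concrete, here is a toy sketch. It is a hypothetical illustration only: the class, the in-memory datastore layout, and all names below are invented for this example and are not taken from the draft. The idea shown is that a factory-reset handler copies the read-only factory-default datastore into each writable configuration datastore.

```python
import copy

class ToyServer:
    """Toy in-memory server with a read-only factory-default datastore."""

    def __init__(self, factory_default):
        # The factory-default datastore holds the preconfigured defaults
        # and is never written to.
        self._factory_default = factory_default
        # Writable conventional configuration datastores start out
        # populated from the factory defaults.
        self.datastores = {
            "startup": copy.deepcopy(factory_default),
            "candidate": copy.deepcopy(factory_default),
            "running": copy.deepcopy(factory_default),
        }

    def edit_config(self, datastore, leaf, value):
        # Stand-in for a normal configuration edit.
        self.datastores[datastore][leaf] = value

    def factory_reset(self):
        # Reset every writable configuration datastore to the factory
        # defaults; the factory-default datastore itself is untouched.
        for name in self.datastores:
            self.datastores[name] = copy.deepcopy(self._factory_default)

server = ToyServer({"hostname": "default", "ssh-enabled": True})
server.edit_config("running", "hostname", "router-42")
server.factory_reset()
print(server.datastores["running"]["hostname"])  # back to the default value
```

As the mic discussion below notes, a real server might additionally clear files or restart processes; only the datastore-copy aspect is sketched here.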
J
Joe Clarke, Cisco. Any input on any of these bullets, or just the last one?
J
I don't have input on the last one, but I really don't like bullet number two. The way it's defined in the draft, I could have my device reboot if I send this RPC, and that seems squishy as an operator. I would rather these things be more atomic: I use the RPC to reset the config, and then I might send another RPC to reboot the device.
B
Balazs Lengyel, Ericsson, as a co-author. About the last point, that factory default might contain security data: I think that's actually not a question about the factory-default datastore, because the same data will be available in &lt;running&gt; after reset. So it's the responsibility of the data model to somehow protect the security-critical items, and that should be handled the same way in &lt;running&gt; and in the factory-default datastore. So I don't see why this is a problem specific to this draft.
C
So the actual data lives in &lt;operational&gt;, but if desired, it would need to be promoted to configuration in order to be referenced by configuration. And of course, since it's shipped from manufacturing, it would be ideal for it to be in the factory-default datastore, or perhaps in &lt;startup&gt;; but the problem with &lt;startup&gt; is that it could be deleted thereafter. I mean, it could be a choice, one or the other; it would be a convenience. It's not a security issue, though, because if the data is hidden, it's hidden.
F
Tim Carey, Nokia. Two points. One, I kind of agree with the last speaker on the fact that, for the factory default, I don't understand the security issue, because I would understand that when we reset something to the factory default, the factory information is going to be used to populate the startup, right? That's effectively what's happening. The other question is about the options. There are other protocols where we've done this: we've done factory resets for CPEs for going on 20 years now, in a standard way.
F
We actually allow for the other things that you're talking about, cleaning up files or restarting to a known state, simply by making them options that go into the RPC. So you just simply say: hey, look, I'm going to do a factory reset; by the way, restart this thing when you're done.
A
To answer your question, Tim, of why this is a security issue: I think something that's a little different here is that this has to be done completely remotely, and many of the factory-reset options that come on equipment have to be done locally; you can't do them over your network management. Well, there are some that do allow it over network management, but there are some systems that don't, and there are definitely security implications in allowing remote access to reset a network device. Sure.
O
A quick reminder of what this draft is about: at the center of this draft is an RPC to compare NMDA datastores. The idea is that this provides a tool to report all the differences between datastores without needing to upload the entire contents and compare them yourself. The applications proposed are around troubleshooting conditions that are due to unexpected failures, sync issues between datastores, lag due to change propagation, and so forth.
O
So we did post a few new revisions; the current revision is -02. The main change that was applied: the YANG Patch format used to report differences was updated to add a new item, "source-value", that shows the values on both sides of the comparison. This follows a comment that was made earlier. Basically, we have a source and a target, and the comparison is reported in terms of a patch.
O
That patch is what would be applied to the source to reach the target, but by itself it would not tell you the value on both sides. In instances where a value is replaced, you would then only know the value in one of the datastores; you would know that it is different in the other one, but not what it is. So basically, that is the thing that has been added.
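The patch semantics just described can be sketched in a few lines. This is an illustrative toy, assuming dictionary datastores; the field names mimic the draft's idea but are not its actual YANG encoding. The edit list transforms the source into the target, and "source-value" additionally reports the source side for every edit where a source value existed.

```python
def compare(source, target):
    """Return patch-style edits that transform `source` into `target`.

    The operation alone describes how to reach the target; "source-value"
    additionally reports the value on the source side. It is present for
    replace and delete, and absent for create, since in that case the
    value did not exist in the source.
    """
    edits = []
    for key in source.keys() | target.keys():
        if key not in target:
            edits.append({"target": key, "operation": "delete",
                          "source-value": source[key]})
        elif key not in source:
            edits.append({"target": key, "operation": "create",
                          "value": target[key]})
        elif source[key] != target[key]:
            edits.append({"target": key, "operation": "replace",
                          "value": target[key],
                          "source-value": source[key]})
    return edits

intended = {"hostname": "router-42", "mtu": 1500}
operational = {"hostname": "router-42", "mtu": 9000, "uptime": 12345}
for edit in compare(intended, operational):
    print(edit)
```

With source-value carried along, a client sees both sides of each difference without a second retrieval, which is the requirement the augmentation is meant to address.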
O
So this is basically a snippet of the main difference, the new portion in the YANG data model that defines the differences format. What you see here is that YANG Patch is augmented to include a new "source-value", which is basically an anydata value that indicates the value of the source data item that is being replaced.
O
The source-value is present whether you're deleting something from the source side of the comparison, merging it, moving it, replacing it, or removing it; it's obviously not there if there's a create, because that value did not exist earlier. Next slide. This already brings me to the discussion items: basically, a couple of items we'd like to confirm and discuss with the group. The first thing concerns the patch format, which is what has been newly proposed here.
O
So this augmentation, we do believe, actually addresses the requirement to show the values of both sides of the comparison. Per the earlier discussion, it was also believed that we should not allow for different formats; we should basically agree and settle on one, to help interoperability. So the question is basically whether this is the format that we should go forward with. Other formats are possible, but we just need to settle on one. That's really the question for the room.
O
The second discussion item concerns the origin metadata of the data items. Basically, the idea was: if the operational datastore is used as a comparison target, then it would be useful to indicate what the origin of the data is. For instance, if the assumption is that the data came from &lt;intended&gt;, yet it actually comes from the system, that might offer an explanation for why the data is different.
O
Knowing why the data is different might be useful in troubleshooting. But one question was basically whether it should always be included, or whether we should have an option to control whether to actually include it or omit it. Currently we do not have such a knob; basically, the more knobs you add, the more complexity you add. The opinion here is that it is probably best to just include it by default.
O
Currently, the comparison filter is defined using subtree and XPath as per NETCONF, and the question is basically whether there would be a requirement to also allow for the definition of filters relating to target resources as per RESTCONF. And then the final item is one that was just brought up recently by Tim on the list, concerning potentially adding a performance-considerations section. The performance considerations are in a way implied in the security section. The concern is that, basically, comparisons aren't free: there is potentially a performance hit on the system.
O
That hit lands on the system that is doing the comparison, and the request is to add a section which just makes this more explicit. We can add that, but it has not made it into the current revision yet. And that concludes what I have, so let's go through these items and get opinions in the room.
D
Rob Wilton, Cisco. I have one other question that's not related to these at all, so maybe I could raise that first. (Go ahead.) So, Alex, looking at the draft, I'm still not entirely sure the diff is doing quite what I would look for. You have an "all" option that can be turned on or off, and if the "all" option is off, you compare nodes that exist in both datastores, so in both &lt;intended&gt; and &lt;operational&gt;. Is that right? (Yes.) And you only compare nodes that exist in both?
D
So you compare the values where they exist in both. And if the "all" option is on, it says you would do a diff of the full contents of both datastores. I think that would mean that for &lt;operational&gt; you get all the data back, all the operational state. Or would you still apply a filter, so that only config-true items would come back? (Oh no.)
O
Okay, so the way this is defined right now, that data would be included. So maybe what you're saying is that we need to have something in between these two options, right? Because right now, either you include everything, or you exclude data from the comparison that does not pertain to both; but you're saying you would want to have it restricted a little bit further, to the nodes that are not operational-only. Yeah.
O
Yeah, I mean, obviously you also have a filter spec that you would then specify as well. So when you say that all differences are returned, that would only be the case if your filter spec is empty, or if you're asking to compare the entire tree, which in general might not be the case. Of course, if you do this with "all" true, everything will come back; but typically you would have a filter spec as well.
D
Well, but if you consider, for example, that you're checking the configuration for one interface: you might have three or four lines of interface configuration, and in &lt;operational&gt; you'd have that plus hundreds of counters and other operational data. That would automatically always be returned, because it'll never be in &lt;intended&gt; or &lt;running&gt;, so it'll always be reported as a difference if you specified your "all" option, which may not be what you're after.
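The concern raised here can be sketched as follows. This is a hypothetical illustration: the helper and the config-true flags below are invented for the example and are not the draft's mechanism. Without restricting the comparison to config-true nodes, every config-false counter in &lt;operational&gt; shows up as a difference.

```python
def diff_keys(source, target, config_flags, config_only=False):
    """Return the sorted keys whose values differ between two datastores.

    `config_flags` marks which nodes are config true; with `config_only`
    set, config-false operational state (counters, uptime, ...) is
    excluded instead of being reported as a difference on every run.
    """
    keys = source.keys() | target.keys()
    if config_only:
        keys = {k for k in keys if config_flags.get(k, False)}
    return sorted(k for k in keys if source.get(k) != target.get(k))

intended = {"eth0/enabled": True, "eth0/mtu": 1500}
operational = {"eth0/enabled": True, "eth0/mtu": 1500,
               "eth0/in-octets": 123456, "eth0/out-octets": 654321}
flags = {"eth0/enabled": True, "eth0/mtu": True,
         "eth0/in-octets": False, "eth0/out-octets": False}

# Full comparison: the counters are reported even though the
# configuration itself matches.
print(diff_keys(intended, operational, flags))
# Restricted comparison: no differences remain.
print(diff_keys(intended, operational, flags, config_only=True))
```

An option between "all" and "both-only", as suggested at the mic, would correspond to the `config_only` switch here.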
O
Okay, I cannot give you an answer right now; it's clearly something that we could control. I think you're asking: should we remove this "all" option, or keep it and just include another option as well? (Yes, that's my question.) Okay; I don't have a strong opinion on the "all" right now.
C
Okay, so that drained my queue, and we'll go back to going over these points and asking the room for opinions. So, first was the patch format. I think we had already reached an agreement last time that there should only be one format that should be returned for the diff.
C
The question is: what should that format be? The current proposal is an augmentation to the YANG Patch format, and the question is whether that is sufficient, or if there should be another format, in particular something that might be called a "YANG diff" format. So instead of augmenting YANG Patch, maybe we should actually have a format that's very specifically customized for returning diffs. Did I capture that correctly, Alex? (Yeah.)
D
I also don't mind multiple formats. If you want to define a diff format, again, I still wouldn't define it in this document; it's something that's better to define generically and reference from this one. If the proposal is to use a YANG diff instead of YANG Patch, that also is fine, but it's going to slow down this work if you do that, by whatever time it takes to define a YANG diff format. Okay.
C
Let me ask a question then. Let's start with that, because if we are able to support multiple diff formats, or return different formats, then we can almost let go of the other remaining questions. So we're going to ask two questions: first, who supports, and second, who does not support. For those who support the idea of just returning a single format, where there's no option for the client to specify the format, please raise your hand.
C
For a single format, please raise your hand: there's very few, thank you. For those who support multiple formats, please raise your hand: also a few, but statistically more, a lot more. So then, okay, that effectively reverses the decision that we had from last time. And if we allow for multiple formats, then my objection to moving forward with this format is obviated; I no longer worry about it, because I know we can fix it later. So then I think we don't need to ask any more questions; we can move forward with this format.
O
One question: how would we do that practically? Earlier, basically, we had a flag that allowed specifying the format, the preference for a format. But if you're saying we need to allow for future formats, I'm not sure how we could say it's only this one, or one of the other formats which have not been defined yet.
C
Returned by default. Okay, actually, in case that wasn't clear, let me restart that, because I'm not sure everyone was clear about it. So: if you think that, for the origin metadata, there should be no parameter and origin should be returned by default, please raise your hand. Okay, there's no one!
D
Rob Wilton, Cisco. So, not all devices will necessarily support origin metadata. The question really is whether it's better to have that as an input parameter, such that you'd fail the request if you can't support it, or whether you just don't return it if you don't have it. That's why I prefer having a parameter: because then, as a client, at least you know whether you're going to get this data or not.
O
The next item is something that has been in the draft for a while: basically, the comparison filter, the filter spec that we include as part of the request, where we say which part of the datastore to include in the comparison. This one is basically defined as per NETCONF, that is, the way a filter is defined there, using subtree and XPath. And the issue that was brought up in the past was whether we need more than that.
O
Obviously, when we defined it, we thought that the filter specs that we have would be sufficient for what we need to accomplish. So from that perspective, yeah, this is a question for contributors, if you will. I don't see the need for that, but it was brought up by the group before, and we have listed it as an item in the draft.
C
Anyone else have an opinion about this? Actually, I don't think this is something to poll the room on; we're not going to poll the room, we're just discussing this as a discussion point. Anyone want to comment? All right. So, Alex, I think we should probably take this one to the list. It's pretty complicated, but some examples would help.
O
Right, yeah. Or, I guess, if nobody's coming forward with a reason why subtree and XPath would not be sufficient, then we can probably just close this issue. (Sure.)
C
And actually, we may raise a wider question: should we support both subtree and XPath? Okay, that's, I guess, a second question. Yeah, because in NETCONF, subtree is mandatory to implement, whereas XPath is not, of course. If you're a RESTCONF server and you're not implementing NETCONF, that might be unfortunate; you may not want it to be required, as some people do not implement XPath.
F
Yeah, Tim Carey, Nokia. So when we read the draft, there were some concerns in terms of an implementation, because we have some very constrained servers, right? If I was given a request to do a diff on some datastores where I don't have the compute resources to return the information being requested, the question that came back was: what do we do? What is the appropriate response that we should give back?
O
We had a question on this as well. One thing: obviously, if you cannot fulfill the request, you can always just decline it; you can just deny it. But I guess the underlying question is: do you want to have some kind of throttling operation, or something like that, where you can have only so many requests per time unit, or what have you? Is that something that you would want?
F
We weren't worried so much about a metering or a throttling aspect of it; someone else might be. We were just saying: look, you know, we might not necessarily be able to meet the request coming in. What we wanted was for the RFC to specify the behavior specifically, so that people who are implementing this will know what to do. (Sure. Yeah.)