From YouTube: IETF109-NETCONF-20201118-0500
Description
NETCONF meeting session at IETF109
2020/11/18 0500
https://datatracker.ietf.org/meeting/109/proceedings/
A: This is the NETCONF working group, and let me pull up...

A: Looking good. All right — good morning, good afternoon, good evening, or very late night for some folks in North America. This is the NETCONF working group meeting for IETF 109.
A: If you are aware of any IETF contributions that are covered by patents or patent applications that are under your control, you must disclose that fact. Of course, you agree to abide by any IETF rules and regulations as they pertain to an attendee.
A: Just a quick recap on the administrative trivia: Kent and I will be trying to monitor the Meetecho chat window for comments, but if you want to present, or have one of us speak into the mic for you, make sure you preface the comment with "mic:" at the front, so we know that one of us needs to take it to the mic.
A: The hum window, which is the one you'll use for polling, is right next to the chat tab on the left side. No need to do any virtual blue-sheet signing; Meetecho will record your presence. When you need to queue up to ask any questions, make sure you click on the raise-hand icon, and we will then call you and put you in the queue to speak.
A: Make sure you unmute yourself by clicking on the microphone icon with the play symbol. Right — Jabber, we'll talk about on the next slide. We do encourage minute takers, as many as we can, to take the minutes. You will find the link for the CodiMD minute-taker tab at the top side of your screen.
A: Right, and for Jabber, I have this link to use if you want to set up Jabber. Don't ask me too many details — whatever information is there is probably what I used to set up Jabber. But also note that the Meetecho chat window and Jabber will cross-post to each other, so if you are on Jabber, we will see it in the Meetecho chat window. Logs for the meeting will, of course, be available after the meeting.
A: All right, status of the chartered working group items. YANG-Push notifications went past second last call — it has been last-called a second time, and we are still waiting on a pending authors' update.
A: We hope we can get the authors to come back and finish the document. The crypto-types, trust-anchors, and keystore documents are in working group last call. I guess Kent will give an update — something about a secdir review and author responses — so let him speak to it when he gets to the status update.
A: The client-server suite of drafts will go into working group last call once the security drafts listed above clear last call; we don't want to inundate the working group with more documents until we have cleared what's on the deck. HTTPS-notif is nearing working group last call — we'll talk about it when I present that draft. YANG-Push notification messages is again waiting on the HTTPS draft; we will resurrect it once we are ready to send HTTPS-notif to last call.
A: The remaining documents are all work in progress, and we'll hear about them in this meeting.
C: So just a quick question on the meeting notes. I went to CodiMD, but it gives me a blank new page — it looks like the fact is that I've created it. So I can add some meeting notes to it, but it doesn't have the normal structure that you would expect. Maybe that'll need some sorting out afterwards, unless you know.
A: Right, so here's the agenda for the meeting. Kent will walk through the status and issues with all the client-server suite of drafts, including also the security set of drafts.
A: And in order to minimize exchanging too many screens, again we'll continue with the SZTP CSR bootstrapping draft, and then I'll take over, followed by Pierre and Thomas. For the non-chartered items, we have Peng Liu and Chen coming back with the drafts they had presented before, and then we have one new draft from Jan Lindblad, followed by Kent and Qin talking about a list pagination mechanism.
A: There is no draft currently posted; I guess they will give us an update on that. Right — any questions, any agenda bashing, anything else we want to see?
B: This is the update to the client-server suite of drafts since IETF 108 — just a few small, high-level updates. In crypto-types, we added the password grouping to define a union between a cleartext password and an encrypted password, and you'll see that grouping is now being used in, I think, three of the other drafts. We also added feature statements for the encrypted formats, specifically password-encryption, symmetric-key-encryption, and private-key-encryption — those are the names of the features.
B: Okay, those are the names of the features, and of course they control whether or not the server supports the encrypting of passwords, symmetric keys, and private keys, respectively. Also, for certificate expiration notification, we added a feature controlling whether or not the server supports sending notifications when certificates are expiring — in trust-anchors, and actually really for the remaining... well, you'll see.

B: I guess you might even say that to some degree they were editorial, and so the changing and improving of the way the content was being presented was reflected in all the drafts. That's essentially the change that was made in trust-anchors and in all the others as well. Also, in the keystore draft, Section 4, which is entitled "Encrypting Keys for Configuration", was pretty much entirely rewritten.
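The crypto-types password grouping described above is a union/choice between a cleartext and an encrypted password. As a minimal sketch of what that means for a consumer of the configuration, the following dispatches on whichever case is present — the dictionary keys here are illustrative assumptions, not the draft's normative node names:

```python
def classify_password(node: dict) -> str:
    """Return which case of a cleartext/encrypted password union is
    configured. Key names are hypothetical stand-ins for the grouping's
    actual YANG leaf names."""
    if "cleartext-password" in node:
        return "cleartext"
    if "encrypted-password" in node:
        # The encrypted case would carry a reference to the key used
        # plus the encrypted value itself.
        return "encrypted"
    raise ValueError("no password case present")
```

A server advertising the password-encryption feature would accept the encrypted case; without it, only the cleartext case is usable.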
B: Please — moving on, we have TCP client-server. Here was the case: in the SOCKS GSS-API there was a field for a password, so we modified it to use the password grouping, and now the password may be cleartext or encrypted in that configuration data model. We also added a missing "mandatory true".

B: In that particular node, when you selected whether or not it was going to be a password-based SOCKS, it didn't have it that the username and password were actually mandatory leaves, so that got added.
B: That password could previously only be cleartext, and now it's using the password grouping, so either a cleartext or an encrypted password can be configured. In TLS client-server, in both the client authentication and server authentication for PSKs, it got converted from being a presence container to a leaf of type empty.
B: There were a number of FIXMEs that got cleaned up, and that was the main update. Okay — in the HTTP client-server draft there were also some FIXMEs that got removed. In the HTTP client, when you're using basic authentication there's a password; it was previously just cleartext, and now it's using the password grouping, so it can be encrypted. And strangely — oh okay, I see it now, hovering over the screen; it blocks the text of what's being presented — in both the NETCONF and RESTCONF client-server drafts...
B: So I don't normally spend a lot of time on the changes made since the last IETF meeting, because there's usually a lot more to talk about, but in this case there isn't. There was just the secdir review from Sandy, and then Yoav Nir submitted a secdir review for the truststore draft which I've yet to work through, though the comments made there were fairly high level and didn't appear that they would have any significant impact on the drafts.
B: In general, we've been waiting — just putting on my chair hat for a moment — Mahesh and I have been waiting for these first three drafts to stabilize from a secdir perspective and also from a YANG doctor perspective. I think there's a YANG doctor review outstanding for these drafts, so we're hoping to get those updates completed for these first three core drafts, and then whatever cascading changes would occur, such as the ones you saw already.
B: This presentation regards an update. Of course, we just adopted this draft — well, actually, remember the NETCONF chairs did an adoption-suitability poll, a number of email threads...

B: I think for nine drafts that were potentially queued up for adoption, and this was one of the drafts that went through that, and it was adopted. So immediately I posted the -00, which was identical to the -01 from before — the kwatsen individual ID — and that's really it, up until Monday.
B: I noticed an editorial note and found myself needing to make a quick fix, and that's why I posted a -01 — there was an editorial note in the security considerations section. So the update that you saw posted on Monday was just for that -01 update.
B: Sorry for that edit to fix that small editorial comment that I left in the security considerations section. Next slide, please. Okay — when this draft was presented last time, we had said that the authors had made every effort for the draft to basically be ready for last call. It was already ready for last call; we had taken the time and effort to do everything, including security considerations and IANA considerations and everything. We thought we were done and ready.
B: But then one of the authors had an exchange with another IETF contributor from a different company — not someone who's commonly in the NETCONF group — regarding CRMF, which stands for Certificate Request Message Format. It's not technically a Microsoft format anymore — it's an IETF format — but it was originally supported by Microsoft, and they wanted to ensure that CRMF was being supported, and so we thought we were going to need to do an update for that.
B: But to the second bullet point: it turns out that the draft already supports CSRs with CRMF. In particular, the draft supports CSRs in three formats: the first being your standard P10 format (PKCS#10), and then also CMP and CMC. Both CMP and CMC themselves support both P10 and CRMF, and hence the draft already supports CRMF vis-à-vis its support for CMP and CMC.
B: So with that resolved, in essence there was no update to the draft, it turns out, other than that editorial edit I mentioned on the previous slide. The draft was in fact complete, done, and ready for last call previously.
B: However, to bullet point number three: the same comment from that author — sorry, IETF contributor — in a different working group spun off a number of other drafts. You'll notice in the LAMPS working group a couple of drafts by Russ Housley on updating CRMF algorithms, and yet another one for updating the AES-GMAC algorithm.

B: And so, beyond that, there is in fact no remaining update to be made. If there are any comments, questions, or concerns regarding this work, please mention them now.
A: All right, so I will talk about updates to the HTTPS-based transport for subscribed notifications draft. There were two updates: one just a quick editorial, and a quick update in -06, which I'll talk about on the slides. Next slide, please.
A: All right, so updates since -04. Really there are only two main changes: the last update, to -06, updated the security considerations section to bring it into compliance with RFC 8407, and the second change is regarding examples for receiving event notifications. Next slide, please.
A: We did add another example — the corresponding example for XML in the draft. So with those two examples, and the fact that we have completed the security considerations section — let's go to... actually, I don't have this at the end of the slide, yeah — with those two changes, we believe that this document is also ready for working group last call. But rather than us chairs calling for it, we'll both step down and see.
C: So, just so people know, there are tabs on the left, just under the hand icon — it looks like three little bars — and you can either click to raise your hand, or to not raise your hand, or choose to do nothing.
C: So we've got 10 people that say they're happy if this goes to working group last call, and there's no one who's chosen not to raise their hand, out of 34 participants. So it's 11 now — I think that's fine, it's going up slightly — so I think we can take this forward and kick off the working group last call. I will work with Kent and Mahesh to work out exactly what the process should be in this particular case.
B: Yes, I was just saying, if Pierre could come to the queue — I'll bring up their presentation right now. It does take a little bit of effort to switch presentations; that's why there's a delay. You have to load it and then ask to share the application window.
A: So here — yeah, I guess you don't need to share your screen, so I'm going to cancel, but...
F: So next slide, please, Mahesh. I'll make a quick reminder of the goal of the draft, then I'll show the main differences from the -00, trying to cover the most important comments that we received on the mailing list since then. Then the main point is to shut my mic off and let you guys discuss and take notes on this. So next slide, please.
F: Next slide, please, Mahesh. So, our first set of changes, based on the comments we received in the last month: we renamed, for obvious reasons, the fragmentation option into the segmentation option, because that's basically what we do; and in order to be consistent with the NETCONF distributed-notif draft, I changed the generator-id term into observation-domain-id.
F: Then we reworked the applicability section of the draft to align with RFC 8085 on UDP usage guidelines. Mostly what we covered is congestion control, or the lack thereof, dealing with MTU, and the lack of reliability of the draft. Basically, what we did is show that the context in which this protocol would be deployed is aligned with the recommendations that are defined in the RFC. So I did not list all the strict recommendations and guidelines of the RFC, but I've been sticking to the actual guidelines for the context of application of this protocol. Next slide, please. Then we proposed some changes to the notification message header. What you can see first is that we stole a bit from the version field to introduce what we call an encoding-space flag. When it is unset...
F: ...it means that the encoding-type field is standard — the value in there defines an encoding that is standard — and when this flag is set, it means that we are falling into the private encoding-type space. What we mean by private is, for example, GPB, which is not a standard. What would go there is any encoding type that a vendor would support that is not a standard; the vendor would decide on its own which encoding type to use.
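The flag mechanism described here — one bit taken from the version field selecting between a standard and a private encoding-type space — can be sketched as simple bit packing. The field widths below are illustrative assumptions for the sketch, not the draft's normative header layout:

```python
def pack_first_octet(version: int, space_private: bool, encoding_type: int) -> int:
    """Pack an assumed 3-bit version, 1-bit encoding-space flag (S),
    and 4-bit encoding-type into one header octet."""
    assert 0 <= version < 8 and 0 <= encoding_type < 16
    return (version << 5) | (int(space_private) << 4) | encoding_type

def unpack_first_octet(octet: int):
    """Inverse of pack_first_octet: recover (version, S, encoding-type)."""
    version = octet >> 5
    space_private = bool((octet >> 4) & 1)  # set => private encoding space
    encoding_type = octet & 0xF
    return version, space_private, encoding_type
```

With the S bit unset, the encoding-type value is looked up in the standard registry; with it set, interpretation is vendor-defined.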
F: I agree that relying on vendor documentation to figure out what your favorite vendor is sending to you would feel a bit archaic, so we have been having a discussion on whether we would need NETCONF to be able to retrieve a description of what the vendor is using as an encoding type. This is open for discussion. Me personally, I don't care too much about non-standard encoding types, so I'm really open to that discussion.
F: The main critical reason to do that is that I would like the big-data people, who are further downstream the communication channel, to not have to deal with segmentation.

F: I need those two fields in order to do a consistent job there, so that's the main reason. Another reason is quality of life: when I have those fields up there, I can easily do load balancing by preserving consistency on the generator, so I can do load balancing based on the generator, and that eases distributed uses of the collector.
F: To be honest, if you don't mind, I would like to keep it at zero until the header has converged to a stable state, so I would suggest that we do that in the version of the draft just before working group last call. If you have a strong opinion on that, I'm okay to change it, no problem.
F: There was quite a bit of discussion on how to deal with this private space. I already said that we can either rely on vendor documentation or write a NETCONF draft to retrieve that information. I would like you to discuss this, and honestly we'll follow the decision of the working group — I will not fight against any decision that you guys would make on this. Sure, Kent, if you want to go ahead.
B: This is Kent as a contributor. Regarding the version being zero, I do recommend it. In fact, there is a separate RFC — I don't know the number offhand — but there is an IETF recommendation that in any enumerated field, both the zero and the max values are reserved. So yes, both zero as well as, I guess, seven — the value seven — should be reserved values for us, if possible.
A: Right — so Kent actually came to ask one of the questions I was going to ask, just what he suggested. The question on the private space is obviously, with the discussion around it, how you're going to learn of the encoding.
F: No, what I was trying to say is that if you want, we can define one. I'm okay to provide a draft — a companion draft — to do that. Personally, I don't really care about non-standard encodings, but if that's needed — if we consider here, as a working group, that basically-archaic vendor documentation would not work — then we'll do the work of providing this mechanism.
F: So if you want to poll the working group on how they want to deal with that, I'm fine with that. Currently, the people I'm interacting with mostly care about the standard ones. And so this was the reason why we made this change: to deal with the fact that GPB will be supported by vendors but is not standard, and so it felt weird to have it listed in the draft and given a code point in the standard encoding field. That was our way of dealing with it.
F: If you want to change this, I'm also open to it. We also had, for example, a suggestion to not use a space flag, but to reserve the last values of the encoding-type field and let vendors use the reserved space. I wanted to make things more clear, you see, and not have to use reserved, non-documented bits, so I did that for clarity reasons. If you want me to roll this back, I'm fine with that; I just would like to know.
G: Okay, perfect. I just want to raise that this is a general problem: do we need to care about private space or not? If yes, then both transport protocols — HTTPS-notif and UDP-notif — should support it.
A: All right, Andy.

I: Well, I don't know what AD reviews and other later reviews are going to produce as objections, but I don't know if there's a problem with how it's referenced or something like that — whether it's a documentation issue or not. Then hopefully you just put the enumeration back, and it'll be important for people to work with it. My original objection was just that there was no real reference given in the document, not that it shouldn't be there.
F: Actually, you might be right that if we take it out and use the approach where we reserve some values for private use, there would not be any reference to non-standard encodings. But then I'm afraid the question will once again be raised: how does an operator realize, in a multi-vendor and multi-release context, which box is sending what? This is the main problem with non-standard stuff.
C: I wasn't necessarily going to give an opinion as an AD on this — I think potentially some further discussion is needed. I had a question really as an individual: is what he's just saying sufficient in terms of knowing what that encoding actually would be? Because I assumed that with YANG there are a couple of different ways that you can encode this data.
C: You can either have a generic GPB encoding of YANG data, or, in some cases, I thought some people use specific GPB models generated from the YANG data. So is there an issue there — that GPB is not just an issue of whether the encoding is known or not, but of what that data will actually look like? Maybe that could be solved by being more specific about exactly what the encoding is. But I do worry about having something that says it's done in this way, but it's unspecified.
F: Yeah, I completely agree with you. The good thing is that for this draft it does not matter, because I would like to answer with a "not my problem" answer: with this draft, I'm reassembling messages when they are split at the transport layer, and then I'm passing them down towards the big-data people, who know — when they register, when they use a distributed subscription to connect to the box — on their side how they are going to deal with that.
G: Yeah, what I just wanted to mention here — and of course I agree — is that maybe another option could be to have, in the standard space, a fourth option where we just say "other". That might solve the problem as well. Or then, yeah, the question is, and we need...

G: I think the feedback from the AD here is whether we can have an area like this where we just do GPB, or any — we can list any non-standard encoding there, and in the end it's up to the network operator to figure out how to decode those.
B: As a contributor: there was a comment made about the HTTPS-notif draft, that it would also need to support private encodings if this were necessary, and I just wanted to state that I believe that's already the case, in two ways. First, it's using HTTP media types — it does define, or I should say use, the standard media types for JSON-encoded YANG data and XML-encoded YANG data — but of course private media types could be generated, created, and used. And then, secondly, when subscribed notifications are being used and they're being configured — for configured subscribed notifications — there's a YANG identity called "encoding", I think, and currently there are sub-identities for JSON and XML, but of course private or other standard sub-identities could be defined as well. So I think encoding private encodings is already supported there for HTTPS-notif.
F: Last update: we shrank the segment number space to 16 bits, because we were not needing 32, and then I tried to bring more clarity on boxes that would rely on IP fragmentation instead of supporting the segmentation option. I will simplify this even more and basically do like IPFIX and say we should not do fragmentation.
F: I was just trying to not allow this — to say you can do this, but not by default. Based on what we discovered last week during the hackathon, etc., I would like to change this and simply say you should not do fragmentation.
F: Then I received a bunch of comments on the relationship between the last-segment flag of this option and the sequence number, mostly from Andy. I agree with all of them, and the requested clarifications will be done. You were asking for details, and I agree with you — it wasn't clear in some parts — and the changes you were asking for, I agree with all of them, so you will find them in the -02. Next slide, please.
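The segmentation option described above — a segment number (now 16 bits) plus a last-segment flag — implies collector-side reassembly along these lines. This is a rough sketch under the assumption that all segments of one message carry the same message id and are numbered from zero; it is not the draft's specified algorithm:

```python
def reassemble(segments):
    """segments: iterable of (segment_number, last_flag, payload) tuples
    for a single message id. Returns the reassembled payload, or None if
    any segment is missing (UDP offers no reliability, so gaps happen)."""
    by_num = {}
    last_num = None
    for num, last, payload in segments:
        by_num[num] = payload
        if last:
            last_num = num  # the last-flag marks the highest segment number
    if last_num is None or set(by_num) != set(range(last_num + 1)):
        return None  # last segment or an intermediate segment never arrived
    return b"".join(by_num[i] for i in range(last_num + 1))
```

Segments may arrive out of order; indexing by segment number makes reordering free, and an incomplete message is simply dropped.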
F: Okay, so the current implementation status: during the hackathon we were working in an environment where, within the Swisscom lab, we had a Huawei implementation of this version — so we were following -01. Based on the feedback that I receive, I will update it accordingly. On the collector side, we have a Golang version of the collector and a C version of the collector.
F: The C version is being validated and integrated within pmacct right now. For our next steps, we will probably provide DTLS support for this — it's not our top priority, because in all the deployment scenarios we won't need it — and then I will apply the changes that you guys recommend, based on the discussions on the mailing list that we will have following this meeting. Thanks.
C: Okay, go ahead, Joe. — Yes, it's actually on the previous issue, if we still have time. I'm not sure whether we need to discuss it further now; I think it needs further discussion on the list and can potentially get resolved during working group last call. I think the question was really about whether the IESG would accept a protocol where this is unspecified, and the answer is: I don't know. I think it will just depend on the AD reviews at the time, or the IESG reviews when they happen.
F: Yes, that helps a lot. So what I would suggest is: I remove those bits and I reserve the last few values, so that I don't need any reference to anything, and we proceed this way — and, following the comment that this stuff is too complicated for no reason, I do that in the -02.
F: I would not create a private space; I would just reserve values, so that the header will no longer change. If, based on the review, we can leave a reference to a value being GPB, then we just leave it there, and that will just be a fixed value. And if GPB won't work because it's not standard, then I leave it in the reserved space — that is the usual response that we define at the IETF.
F: For this reason, so that I don't overcomplicate stuff with this S bit, I leave room for GPB in the standard space. If we get a review further down — the IESG review — that says GPB is not a standard, you cannot put it there, then I will leave GPB being used in the reserved space that I would define.
C: So I don't think it matters whether GPB is a standard; I don't think that will be a problem from that perspective. I think it's just a matter of whether the specification is clear or not as to what's used. And it may be that, effectively, if these are reserved, you can have some area in a space that actually defines what these fields are for the ones that aren't done in the document, and hence that can be extended with future documents that specify this behavior, if required.
G: Exactly, exactly — I'm sorry for that. So next slide, please. Compared to -00, these are the changes we made in the draft, mainly in two areas: the terminology and the motivation part.
G: In the terminology, we are basically borrowing the terms from RFC 7011, the IPFIX export protocol, Sections 2 and 3.1 — basically using the terms "observation domain" and "observation domain ID", and consequently replacing the term generator-id with observation-domain-id in the distributed-notif, udp-notif, and notification-messages drafts. In the motivation section, as requested on the working group mailing list, we specified the reasoning for the observation-domain-id: mainly, on the data-integrity side, to preserve it across multiple publisher export processes, and likewise to be able to recognize lost and corrupt YANG notification messages across multiple publisher export processes.
G: For mapping the publisher export process to the ietf-hardware and iana-hardware YANG models: looking up the iana-hardware YANG model, the closest match I saw was "cpu", but it's unclear what "cpu" actually means — whether it also matches the term "network processor". Here I'd like to get more feedback from the working group, and also from the authors of ietf-hardware and iana-hardware, on whether this is really the right place to augment — whether we want to map to the export process, or just to the network processor, or to the CPU only.

G: So that's one question. The second one is in regard to the observation-domain-id — whether it's needed in the udp-notif header — and I think Pierre already answered that question in his previous presentation. I just want to emphasize here that, basically, on the data-collection side, if the observation-domain-id were not within the udp-notif header, then in order to enable data integrity we would need to look into the notification message itself, and I think that would be a cross-layer violation which we would like to avoid, yeah.
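The data-integrity argument above — detecting lost messages per observation domain directly from the transport header, without peeking into the notification payload — can be sketched as per-domain gap tracking. This assumes, for illustration only, a per-domain message id that increments by one per message:

```python
from collections import defaultdict

class LossDetector:
    """Track per-observation-domain message ids and count gaps.
    A hypothetical collector-side sketch, not code from any draft."""
    def __init__(self):
        self.expected = {}               # domain id -> next expected message id
        self.lost = defaultdict(int)     # domain id -> messages presumed lost

    def observe(self, domain_id: int, message_id: int) -> None:
        exp = self.expected.get(domain_id)
        if exp is not None and message_id > exp:
            # A jump in the sequence for this domain means loss; sequences
            # from other domains (other line cards / NPUs) are unaffected.
            self.lost[domain_id] += message_id - exp
        self.expected[domain_id] = message_id + 1
```

Because each export process numbers its own stream, interleaving messages from multiple publisher processes at one collector stays unambiguous.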
C: So this is a comment as an individual, and I have to say I've not read the specifics here, but I think I would probably regard CPUs and NPUs as generally being different.
C: The only other question I have — and again, I might be completely off the mark here — is what the observation-domain-ids are tied to. Obviously, some line cards may have multiple separate NPUs, so I don't know if that's something that needs to be considered or not, or whether that is not an issue.
G: Exactly — so, as you nicely pointed out, it's not only about the naming; it's then also about being able to model it down to chassis, line cards, and NPUs, and down the whole hierarchy, yeah. Absolutely.
A: If I understand, I think Andy's question was how you would identify — and I don't want to speak for him; he can certainly speak also — but how do you identify a line card that has been unplugged and put back in?
G: That's why I'm asking for feedback from the working group and the authors, on how you intend, or what you think is the best way, to model this. Because once we have feedback on this, we can help here to extend or augment the YANG model and then basically merge the observation-domain-id there.
G: Exactly — if that is the intent, if that is what we are aiming for, then absolutely. For instance, in IPFIX they did not resolve that problem: there, the IDs were generated, and you could not map them down to the line card or the network processors, and that was already sufficient to ensure data integrity. So for data collection it's not needed to map down to the specific network processor — but it is in order to troubleshoot further.
K: First, we removed the max-node-per-sensor-group and max-sensor-group-per-update from the YANG model, because there may be some vendors that don't have, or don't need to have, these characteristics. We also removed the subscription mode, because we think we can add some specific functions or parameters without this subscription mode, and it can be more simplified and simple, yeah. And the third one is that we added adaptive-interval support and removed the sampling-interval list definition, to support the adaptive interval collection, which will be introduced later.
K: It notifies the transport protocol, encoding format, and security protocol, and then the NMS can subscribe to the YANG notifications according to its demand — for example, it needs the UDP protocol, binary encoding format, and DTLS security protocol; then the server will send the notifications over UDP and also satisfy the other parameters or functions.
K: It just shows the parameters that are needed. In fact, the server will notify all the capabilities to the network management system, yeah. And here are two pictures showing the version — okay, maybe I guess it's not showing; no problem. Yeah, next page, please.
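The capability-advertisement idea described above — the server announces which transport/encoding/security combinations it supports, so the client can pick matching subscription parameters without trial and error — can be sketched as a simple matching step. The capability fields below are illustrative assumptions, not the draft's data model:

```python
def select_params(server_caps, wanted):
    """Return the first advertised capability satisfying every constraint
    the client cares about, or None. Avoids the guess-and-retry
    subscription iterations the advertisement is meant to eliminate."""
    for cap in server_caps:
        if all(cap.get(k) == v for k, v in wanted.items()):
            return cap
    return None

# Hypothetical advertisement from a server:
caps = [
    {"transport": "udp", "encoding": "binary", "security": "dtls"},
    {"transport": "https", "encoding": "json", "security": "tls"},
]
```

A client wanting UDP transport would call `select_params(caps, {"transport": "udp"})` and subscribe with the returned parameters, rather than probing the server with subscriptions it may reject.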
K: So, to reply: the answer is that one of the principles set by the YANG-Push RFC is to minimize the number of subscription iterations between the subscriber and the publisher, and to discourage random guessing of different parameters by a subscriber. Our idea is to try to prevent the problem at the stage of the negotiation of subsequent subscriptions, in order to minimize the number of iterations, yeah. So maybe it can improve the efficiency and reduce the loss rate, yeah. And the second question is about the static parameters.
K
Yeah, and the third question was that the sensor group seems a very vendor-specific capability, so we removed the two parameters, max-node-per-sensor-group and max-sensor-group-per-update; we have taken them out in the latest version. These three questions are the three main questions discussed on the mailing list, so I think all of them are resolved now. Okay, next slide, please.
B
This is Kent, the co-chair. This is one of the documents that we did the adoption suitability poll for before, and there was an objection that was raised, I think by Andy. I just want to know: was that covered? How did that objection get resolved?
G
L
A
C
L
Yeah, Andy, we actually clarify your issue in the first bullet. I think we sent updated slides. You know, we could use our PC and use the arrow option to show what was wrong. But if we define this capability advertisement,
L
we can actually avoid, you know, unnecessary negotiation. So that's the way: we reference some of the text in YANG Push, and it also highlights that we should avoid unnecessary subscription exchange iterations. Yeah.
L
A
L
C
So just one quick comment on that: I think what this draft is providing is to allow the capabilities to be expressed up front, as they may be in an instance data document, so a client could be coded at design time to know what the capabilities of the server are likely to be.
C
So that's one comment on that. But the other question I had was, in terms of the structure of the YANG: it looked like you could just define, for example, one transport that's supported, or one encoding, whereas I'd have thought that servers have different sets of capabilities that they support, and hence it may be that the structure of the model needs to be more flexible. Otherwise I don't understand why the YANG is only expressing one of those, rather than the set of different transports or encodings that are supported.
L
B
L
Okay, so this is Qin. I'm here to discuss the adaptive-subscription-to-YANG-notifications draft update. Next.
L
So, current status: this draft has been presented at two previous IETF meetings, and it was suggested to align it with the ECA model. We also introduced a new subscription mode, besides the periodic subscription and the on-change subscription, so we need to better characterize this adaptive subscription.
L
So at the last IETF meeting we also discussed this, and we got a lot of support. In the latest version we tried to remove the dependency on the ECA model: we removed the path-target definition, we clarified the XPath external evaluation node in the YANG model, and we rewrote the usage example to align with these YANG model parameter changes.
L
So, for people who don't know what adaptive subscription is: it is an extension to subscribed notifications and YANG Push subscriptions. Those support two different modes: one is the periodic subscription, which allows you to publish the data periodically, and the other is the on-change subscription, which allows you to publish data when the data gets changed, or when a protocol operation changes the data. But in some cases the server and client may support multiple,
L
you know, periodic intervals, so the server may need to switch to a different interval according to the network conditions or resource usage. The typical example is wireless performance
L
monitoring: the wireless signal strength can be weak or strong, and because air-interface resources are very expensive, when the wireless signal strength is very strong we can collect the data at a lower rate; but when the signal strength is very weak, we can collect more data, since we need sufficient data to do the troubleshooting. So this can greatly reduce the data exported to the client.
L
So we introduce this adaptive subscription and enable the subscription for the publishing of events, so the server can adjust the telemetry traffic. Next.
L
We propose a set of parameters, for example period and anchor-time. The period specifies a new duration for pushing data, and this period is triggered when conditions change; the anchor-time specifies the starting point for each update interval. At the same time, we introduced a watermark and an XPath external evaluation.
L
The XPath external evaluation provides the evaluation criteria, and the watermark is part of this evaluation expression; it can be used to express the condition that must be satisfied to trigger the interval switching. Here we just give an example: when the condition expressed by the XPath external evaluation changes and the condition becomes satisfied,
L
the server will send the adaptive periodic update notification immediately, so the client can adjust the telemetry collection rate accordingly.
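A minimal sketch of what such an adaptive subscription might look like on the wire is below. The parameter names (period, anchor-time, the XPath external evaluation) follow the presenter's description, but the exact element names and structure are assumptions for illustration, not quoted from the draft:

```xml
<!-- Illustrative establish-subscription with two adaptive periods;
     element names are assumptions based on the talk, not the draft text. -->
<establish-subscription>
  <datastore-xpath-filter>/wifi/radio/stats</datastore-xpath-filter>
  <adaptive-subscription>
    <adaptive-period>
      <!-- while the signal is weak, push frequently for troubleshooting -->
      <xpath-external-eval>/wifi/radio/signal-strength &lt; -75</xpath-external-eval>
      <period>100</period>
      <anchor-time>2020-11-18T00:00:00Z</anchor-time>
    </adaptive-period>
    <adaptive-period>
      <!-- while the signal is strong, collect at a lower rate -->
      <xpath-external-eval>/wifi/radio/signal-strength &gt;= -75</xpath-external-eval>
      <period>6000</period>
      <anchor-time>2020-11-18T00:00:00Z</anchor-time>
    </adaptive-period>
  </adaptive-subscription>
</establish-subscription>
```

When the evaluated condition flips, the server would switch to the matching period and send the update notification immediately, as described above; the expression only triggers the interval switch and does not filter the event record output.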
L
Here is the issue: we received Andy's comments on the list about the filter, that this filter-and-trigger combination was maybe not a good design, so we made some changes accordingly. For example, we took out the datapath target, since we don't think it is needed, because it would impact or influence the event record output. In this model we also changed the naming: we changed the condition expression to the XPath external expression.
L
This XPath external expression keeps track of data-object changes, but it does not affect the event record output; it can only be used to trigger the interval switching in the server.
L
B
That was the main change, and it was also the reason why the adoption poll did not succeed. I don't recall it being discussed on the list, but maybe, Andy, since it was your objection? I haven't read the latest draft and don't really have any objections; you know, that's not really our focus right now with YANG Push.
B
So, as a contributor: Andy, I'm kind of curious about that comment you just made, because I think the reason for the periodic mode was that there are some values that just don't change very often at all, maybe once a day or even less often, but because they're periodically pushed, a subscriber that comes in later still gets the value. But I guess there's also a sync-up that occurs, right?
B
So when they first subscribe there's a sync-up, so they get all the values up front, and then I guess it's just changes from that point forward. Okay, so maybe I do understand; that answers my only question.
B
Okay, thank you. Robert, you're in the queue.
C
So I just again wonder whether this is quite a complex solution to the problem, and whether a simpler solution might just be to mark some subscriptions as being effectively higher-priority data and others as lower priority, and then to have one setting that says "allow this to be adaptive in some way", and then just leave it to the device to try and reduce the rate at which it's sending this information out, if necessary, without specifying specific periods or paths or that sort of thing. So have it as a more coarse-grained type of QoS, rather than having to specify specific conditions and things like that. You put more intelligence into the device: you're relying on it to do some things, rather than giving it very specific instructions as to how to manage this data.
L
G
I just wanted to follow up on Robert's comment; I was thinking the same. I think, ideally, for a periodic subscription we could have a range, say from one to ten, and then basically the publisher decides, depending on the state, which value it chooses for the export interval.
L
Yeah, the assumption we made is that you can support multiple update intervals, and the server may have the capability to switch between these intervals to reduce the volume of data to be exported to the client, so you can address this performance bottleneck.
L
So yeah, that's what we proposed there.
A
Okay, I think we are running behind, so maybe take it to the mailing list again.
A
So, Qin, I would suggest that you take the discussion to the mailing list, including the question of adoption. Let's try to.
L
H
So now that we have NETCONF spreading and managers actually starting to use it a lot, one of the use cases I see very often is that the manager wants to keep in sync with the configuration changes on a large set of devices.
H
I have actually seen several vendors implement their own proprietary mechanisms for providing a shorter way to convey this information, like a timestamp of the last change, or some checksum of the config, something that you can read in order to see if anything has changed at all in the device. We are using those mechanisms, and they are kind of nice.
H
They are better than nothing, but they are not quite good enough, because in many cases there have been small changes somewhere, but maybe not in the area that this particular client is interested in. So the config will be reported as changed, even though nothing changed in any area that this client cares about, and we are back to square one.
H
Essentially (if you go to the next slide) this mechanism also has the problem that it often, or sometimes, gives false alarms. You notice a supposed change, so you issue a get-config, you get a complete reply, and you spend a lot of CPU time computing: is this the same as last time?
B
As a contributor: I think with NETCONF the traditional pattern is lock, followed by get-config, followed by edit-config, so you're ensured that you're only editing the data that you had gotten inside the lock. Can you speak to that?
H
That would take a long time then, because getting the config and computing whether there's a change can take several minutes, so then you would have to hold the lock for a long time, and I think that's not what we see happening in real networks.
H
Okay, so I'm proposing a solution where we have the concept of a transaction-id attribute that the server may report for every container and list element. We don't want to have this on every leaf and everywhere, just on containers and list elements. And we make sure that whenever there's an edit-config, the server updates the transaction-id value for every container and list element that has been touched by this transaction, so that the client can rely on this transaction id being something new.
H
So, in an edit-config, clients can specify this transaction id, and then the server will apply that transaction-id value to everything that has been touched. But also, during a get-config, the client can specify the transaction ids it expects, to say: "I believe the contents of this container, or this list element, has this transaction id", and if it matches, there's no need for the server to send the content of that subtree. On the next slide, please,
H
I have an example of what that looks like. Here we have an initial sync where the client issues a get-config with a filter for interfaces, and on the interfaces element it says, "Hey, can I have the transaction ids for this, please?" by setting the tag attribute to a question mark. Next, please. And this is what a reply might look like: the server has decorated the reply with these transaction tag values, and you can see them on the various containers.
H
At the top level you have the data element, which is for the entire datastore, saying what the global transaction id for the datastore is; the interfaces container, the whole set of interfaces, has its own value; a particular interface here, GigabitEthernet0/0, has def88; but another interface, GigabitEthernet0/1, has an older tag called abc123.
H
Next, please. And then later the client can say: "Okay, I do a get-config here on the interfaces with this tag value abc123, which is what I expect to be here." And next, please.
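Putting the two get-config steps together, the exchange might look roughly like this. The txid attribute name is an assumption for illustration (the draft may use a different name); the abc123 value is the one from the slide:

```xml
<!-- Sketch of the described exchange; "txid" is an assumed attribute name. -->

<!-- Initial sync: ask the server to return transaction ids -->
<rpc><get-config>
  <source><running/></source>
  <filter><interfaces txid="?"/></filter>
</get-config></rpc>

<!-- Later: the client asserts the txid it already knows -->
<rpc><get-config>
  <source><running/></source>
  <filter><interfaces txid="abc123"/></filter>
</get-config></rpc>
```

If nothing under the interfaces container changed, the server can prune the reply instead of returning the full subtree, which is the quick-verification behavior being described.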
H
I think that was all my slides, basically. Let's take the next one. Yeah, okay, right: in an edit-config, the client could also say, "Hey, I expect the transaction id of this interface to be ghi5555. If it is, go ahead and run this transaction" (and delete this interface, in this case), "but if it isn't, if one of these transaction ids has a mismatch, then abort the whole transaction and report that things are not the way I expected," since things have moved since you last synchronized.
H
D
I have a question: can you address the issue of the race condition where, you know, you set a transaction id of foo, and by the time you come back with the get-config, elements that you have changed have been changed again, either by the system or through some other client? Is that the intention?
H
Yeah, this is exactly the mechanism to handle that problem. With the client proposing a transaction id when it's making an edit-config, and, certainly, with the server guaranteeing that whenever something changes, a new transaction id goes onto the touched containers and list elements, a client can be sure: it can read updates by using get-config with these transaction ids, and it can edit-config safely by adding the transaction id for what it expects, and the edit proceeds only if all the tags mentioned in the edit-config match.
D
Yeah, but as a client I won't know; I don't have any assurance that what I changed under this transaction id is indeed still what's there when I do a get-config right after I make my changes, or after I wait three seconds or three minutes, right? Is that expected, or is that a hole in the concept? Because with lock, everything's atomic, right? Everything's there and it's atomic. With this,
D
this is kind of, well, not as safe in terms of the transactional set, right? With the lock you know what has changed, which elements, which leafs have changed.
H
D
You're saying, like, on the delete? Sorry, I was talking about just the original change with the get; I was back on the previous slide when I entered the queue. So I come in and I do an edit-config, right? I'm not doing a delete operation yet; I'm just trying to figure out what changed. So I do an edit-config and I say tag foo, right?
D
E
D
There's a chance that I will never really understand what actually changed, right? Particularly if it's a quickly changing set of nodes.
H
Right, I may not be understanding your intent exactly here. I mean, if you're interested in knowing the exact state of the device, you're welcome either to do a get-config of everything or to do a subscription of some sort; you can do that in order to follow everything.
H
But what this is about is to ensure that when a client has a view of the state of the server, it should be able to verify that view quickly by saying: "Okay, I know this about the configuration, and these are the tags I'm aware of; just report the differences versus what I know." That's the mechanism I'm looking for here.
D
H
A
Okay, sorry guys. I know I'm in the queue and Balázs is after me, but we are really out of time on this. So sorry, Balázs, can you take your question to the mailing list? Actually,
B
E
my point is that we have something very similar in RESTCONF, and it would be good to understand: is this actually the same mechanism? If we have a datastore that is visible both over RESTCONF and NETCONF, do we have to support two similar but not identical mechanisms? So describe somehow the relationship with the RESTCONF ETags.
H
Yes, this was definitely greatly inspired by the ETags, and my intention is that server implementations should be able to join the two. But things are still early in this draft, and we will see where it takes us; I don't want to guarantee that it will be exactly the same as, or fully compatible with, ETags. But yes, it is the same mechanism that we already have in ETags that I'm trying to describe here.
A
I'll take that as a yes. So again, I understand the optimization in terms of trying to tag at least at the container and list nodes, but couldn't this be a little more constrained? You just say that for a given config, or a transaction, you have a tag, and if you're trying to edit it and the tag doesn't match, the request is rejected.
A
H
Yes, it's up to the client to decide for which parts of the transaction it really cares about things being the same. It can say: for this part of the tree, I'll just go ahead with a traditional edit-config; but for the interfaces list, I'm really interested in making sure that all the interfaces that I touch are untouched, or that no interfaces have been changed, or that a particular aspect of the details of one interface is intact. It's up to the client to decide where the tags have to
B
match. So, as chair: can you go ahead and bring up my last presentation? But as a contributor: Jan, I would recommend trying to align this work exactly with RESTCONF, if possible, because I know that some servers that present both RESTCONF and NETCONF actually build one of those interfaces on top of the other, and I think typically RESTCONF is built on top of NETCONF. Anyway, I guess the question is: why couldn't it be aligned?
B
Why wouldn't it be aligned; what would the reasoning be for not aligning? I think I'm taking it to the list, but please try to address that comment later. Certainly. All right, Mahesh, can you go full screen, please?
B
Okay, so this is the last presentation. As everyone probably saw on the list, there was some interest in introducing support for list pagination in both the NETCONF and the RESTCONF protocols. I sent out a call for participation and did get some responses. Thank you, Qin, Olof, and Wei; there are actually also other respondents, but they haven't yet contributed, so they're not yet listed here. Hopefully that will improve by the next presentation. Next slide, please. The motivation for this work is, first, to better support user-facing client interfaces interacting with data from potentially large lists.
B
Examples of potentially large lists include traffic logs, or really any time-series data, which might include the audit log; in general that's "config false", operational-state type data. But also within configuration there are some very large lists: interfaces, or firewall rule bases, can sometimes grow to be in the thousands. Of course, it's all very manageable today with existing NETCONF: the client gets the entirety of the large configured list and then handles it itself, in its own memory.
B
The solution... okay, I'm just looking at the graphics. No, it's not right; the bottom line is all crossed out. It's supposed to be: filter goes to sort, sort goes to direction, etc.
E
B
The little arrows are not looking quite right, anyway. Let me go back up to the top of the slide. The solution proposal is to introduce to both NETCONF and RESTCONF an API for list pagination. There are five control points that have been discussed, and this was on the list, so I'm repeating a bit, but since this is the first presentation introducing the work, I wanted to make sure there's a slide for it. First, there's the ability to limit the number of results that are returned in the response.
B
There's the ability to control the point at which the result set begins; you know, maybe you don't always want to start at the very beginning of the list, you might want to begin somewhere in the middle. There's the direction in which the results are returned:
B
are they returned in the forward or the reverse direction? And potentially there's the ability to sort the results, maybe on a particular node (in SQL terms, on a column), and then the results would come back in that order; or, if it's an "ordered-by user" list, by default the order is the configured order, as defined by that list. And then, lastly, there's potentially the ability to filter the items.
B
Maybe the client is only interested in zooming in on a particular subset of the data. Now, these control points are actually ordered in processing; there's a processing order, and it is in fact the reverse order. So first the results are filtered. Filters in general are very fast; hopefully the node or the leaf that you're filtering on has been indexed by your backend database.
B
So it's a pretty fast operation to do the filter. Then, if there is a need to sort, that would happen after the filter, so you're only sorting the subset of the data that has gotten through the filter.
B
So that is the processing order, and for those familiar with SQL, that is exactly what SQL does, and I imagine it is the same for most backend databases. That is in fact what this author list reflects: some of the authors are more implementation-oriented, so we have a representation of different backend databases, and we're doing prototypes of all of this to ensure that it's mappable to the various backend databases. Next slide, please.
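As a sketch of how the five control points might appear in a request, a paginated NETCONF get-config could look like the following. The element names here are purely illustrative assumptions, not taken from any adopted draft:

```xml
<!-- Hypothetical pagination parameters; names are illustrative only.
     Server-side processing order is the reverse of the numbering:
     filter first, then sort, direction, offset, and finally limit. -->
<get-config>
  <source><running/></source>
  <filter type="xpath" select="/firewall/rules/rule"/>
  <list-pagination>
    <where>priority &gt; 100</where>   <!-- 5: filter the items -->
    <sort-by>name</sort-by>            <!-- 4: sort on one node/column -->
    <direction>forwards</direction>    <!-- 3: forward or reverse -->
    <offset>200</offset>               <!-- 2: where the result set begins -->
    <limit>50</limit>                  <!-- 1: cap the number returned -->
  </list-pagination>
</get-config>
```

This mirrors the SQL-style pipeline described above: the filter runs against (ideally indexed) backend data, the sort operates only on the filtered subset, and offset/limit slice out the requested page.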
B
So, first: how important is it to iterate over stable result sets? I think it was Jan who had posted a comment about this to the list, but essentially, should something like cursors or snapshots be supported? Just thinking out loud: for configuration, with RESTCONF at least (and, per the last presentation, its NETCONF equivalent), there's an ETag, or timestamps.
B
Is that good enough? In the sense that if you're doing, say, a get on the data and you're saying, "Okay, I know the ETag is supposed to be this," or "I know the timestamp is supposed to be this," and you're indexing, paging, into the data set, but now the data set has been changed under the hood: wouldn't it be good enough for the client to just get back an error
B
saying, "Hey, the data set's changed; you need to restart your pagination"? Or is that not good enough, and we actually need to create something like a cursor or a snapshot? So that's my thinking out loud for configuration. Then, thinking out loud for op state:
B
again, I'm assuming we're talking about read-only time-series data, like the audit log or traffic logs, and I'm guessing it's stable enough; it's read-only. I mean, sure, logs can expire, depending on how much resource (memory or storage) the server has on board, but you can imagine logs sticking around for at least a few hours or days or whatever.
B
So all the logs that came in after the pagination began would be excluded, and hence, effectively, the remaining logs are a stable data set; again, not accounting for purging. Okay, so for numbers two and three (again, I'm going to mention these quickly and then go back to one): for two, I'm wondering about "offset" versus "skip". We're trying to figure out what the name should be; is it "offset" or "skip"?
B
And that's actually the comment for number three; but for the offset/skip parameter, should it be an integral amount, an integer value, which would be consistent with SQL's OFFSET parameter? Or should it be a key-based lookup? So you begin your pagination, and then somehow the last entry of the results that you got back
B
contains a key value, and then your subsequent request for the next page of data is actually by key: you're asking for the next set of data beginning with a key, as opposed to the next set of data indexed by an integer. And then, for number three, it's really a naming question: should it be called "skip", or should it be called "count"? I think SQL at least has count; and likewise, should it be called "offset" or "limit"?
B
I'm sorry, I think I have this backwards; it's skip-or-offset and count-or-limit. Anyway, the question is how aligned with the SQL names we should be. Of course, going back to number two: if we're doing the key lookup, i.e. we're not going towards SQL but moving away from that approach, then the whole notion of whether or not we're aligned with SQL doesn't matter.
B
So that's how three is related to two, and of course two is connected to one. So, going back to one, and to the group: please, if anyone has any comments, how important is it to iterate over a stable set of results? We only have five minutes, so if there's any urgent comment, please raise your hand.
B
Okay, I don't see anything coming; I'll take that to the list. For number four, the question is: should sub-sorts be supported? Currently the design, as just presented on the screen, supports very much just a single sort, which is, by and large, 95% of what clients want. Think about your email client: you typically sort on a single column at a time; that's very common. But it is noted that SQL supports sub-sorts.
B
So, for instance, you can do an ORDER BY foo ascending and then by bar descending. So there's this question as to whether or not sub-sorts should be supported. I do see Balázs in the queue; please, Balázs, go ahead.
E
B
Okay, so I think Martin has been involved with this comment, with the discussion on the list previously, and in general the thinking is that we should be using the XPath query language for this, so whatever it supports would be, by and large, what we would be doing. But I'll take this to the list.
E
B
All right. So, continuing with the protocol-independent questions: how many drafts should there be? This one I'm really hoping to get a response on right now. Should there be one draft, i.e. one that contains three parts: the general definition, the NETCONF specifics, and then the RESTCONF specifics?
B
The pro of this is that it's a nice package; it brings it all together. The con is that it's not a great RFC target for future work. Or there could be two drafts, one each for NETCONF and RESTCONF; the pro is that they're decoupled, so they make better RFC targets for future work. And of course you can imagine that if a third protocol were to come along,
B
it could then define its own draft, which is very nice. But the con is that there would be some redundancy between the two: for instance, those five control points we talked about before would have to be defined individually in each of those drafts, and likewise the examples. You can imagine an example module, an example data set, the example data results that you would expect,
B
you know, result-set examples in each. Or option three is to have three drafts: one draft for the general definition, and then another for NETCONF and a third for RESTCONF. The pro: this is completely decoupled, much like the NMDA work. In fact, if we were to do this, we might consider moving that general-definition draft to the NETMOD working group.
B
E
B
It's the hand icon in the upper-left corner. Okay, I don't see any comments. So, Mahesh, there is just one more slide; I know we're out of time, but just quickly, can we do the next slide? So those are the protocol-independent questions; I'll take them all to the list. For the RESTCONF-specific ones,
B
I do have some, and then there's another slide for NETCONF, but I don't have any NETCONF-specific questions, so this is truly the last slide, if you will. The question is: what should the scope of the list and leaf-list targets be? In particular, are we just doing the GET method, which would be the minimally invasive thing (because, after all, we're talking about list pagination, so it's just GET that we're talking about), or should we define... sorry, I said leaf,
B
I meant list: list and leaf-list, those two node types, as being targets for all HTTP methods? So not just GET, but POST, PUT, DELETE, PATCH, etc., all of them. Of course, the consideration for increasing the scope would be that it's more complete, or pure, but I question it; there's actually little value. I mean, PUTting or POSTing or PATCHing to the enclosing container is equally good. Do you ever need to actually target the list itself?
B
Of course, that's shown, or illustrated, by the fact that we don't do it today. I think DELETE actually has a benefit: you could delete the entire list in one go. In case the list hasn't been wrapped inside a dedicated container, instead of having to delete the entire container, you could delete the entire list that way. So DELETE has some benefit, but it is limited.
B
That concludes everything for this meeting. So, switching my hats: taking off my contributor hat, putting on my chair hat. Thank you, everyone, for joining. Do you have any closing comments? I don't.
A
B
Okay, very good. Thank you, everyone, and we'll virtually see you next time, or on the list before then. Okay.