From YouTube: IETF115-NETCONF-20221107-1530
Description
NETCONF meeting session at IETF115
2022/11/07 1530
https://datatracker.ietf.org/meeting/115/proceedings/
A: All right, it is time. This is the NETCONF working group meeting at IETF 115. You should be familiar with the Note Well, in particular BCP 79, which covers the patents and participation aspects of the IETF, but we also want to make special note of BCP 54, which is the code of conduct. Basically, the idea is that all participants are expected to participate with grace in the meetings.
A: The queue, of course, for those speaking, whether in the room or remotely, will be managed through Meetecho. Make sure you queue yourself up using the icon with the hand symbol, and when you do speak remotely, make sure you use the icon with the play window. And finally, do make sure that you remove yourself from the queue once you're done talking.
A: The authors have gone through and resolved all the received shepherd review comments. There is just one shepherd review on my plate that I need to complete for the complete suite of drafts.
D: This is Alex, on behalf of the UDP-notif authors, and I am presenting an update today. Next slide, please. So, on the agenda today I'm presenting the different changes we made in the last submitted draft. I wanted to discuss and check the discussion with the working group, because I did not receive a lot of feedback on the mailing list. I also wanted to present the planned changes for the next iteration, and, last, the gap we found in YANG-Push.
D: So, next slide, please. In this last revision we updated the requirements language. At the last IETF I also met Sean Turner, and thanks for the feedback. We merged the first three subsections of the DTLS section, and we added some more statements, such as that 0-RTT data must not be used in UDP-notif, since it is not encrypted, and also that on non-secure networks DTLS must be used instead of plain UDP. Then, of course, there are changes on the YANG module.
D: The DTLS container has been made a presence container; the flag to enable DTLS has been removed, and the DTLS 1.2 parameters have been removed from the grouping.
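The presence-container change described here can be sketched roughly as follows; this is an illustration of the pattern, not text from the draft:

```yang
container dtls {
  presence
    "When present, DTLS is enabled for UDP-notif;
     when absent, messages are sent over plain UDP.";
  description
    "DTLS transport parameters.  A presence container
     replaces the earlier boolean enable flag, and the
     DTLS 1.2 parameters are no longer referenced.";
}
```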
D: So, next slide, please. There are some discussions going on on the mailing list. The first one concerns the YANG module. There is Tom Petch, who thinks that the current YANG module prefix is not right.
D
The
current
prefix
is
UN
for
UDP
notif,
but
he
thinks
that
it
should
have
a
common
pattern
for
the
whole
family.
So
subscribe
notifications
down
push
that
comes
notifications
and
so
on
our
software.
We
don't
have
any
program
to
change
this,
these
prefix
so
maybe
SN
and
something
specifying
that
it's
UDP
native
since
the
jam
module
is
augmenting
social
notifications.
D
But
then
also
the
question
would
be
okay.
If
we
do
that,
what
are
the
rest
right
on
the
shcps
native,
with
hnt
Jan
bush,
with
YP
and
so
on,
so
yeah
I
would
like
some
feedback
from
the
working
group
on
what
to
do
with
that.
Should
we
change
un
to
SN,
un
or
or
stay
like
that
or.
E: Rob Wilton, just as a participant. I'm not sure this matters too much to me, in terms of it's only a local prefix anyway, so I quite like these being short. So I'm not so convinced; I think 'un' is probably fine. If you were to change it, I would certainly suggest you put a hyphen in the middle — so if you do 'yp-un' or something like that, that would be my recommendation — but I don't think this should block the draft. Really, you could solve this. Okay.
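For reference, the prefix in question is the one declared in the module header; a minimal sketch (module and namespace names illustrative, statements abridged):

```yang
module ietf-udp-notif-transport {
  namespace
    "urn:ietf:params:xml:ns:yang:ietf-udp-notif-transport";
  prefix un;        // alternatives discussed: "sn-un", "yp-un"

  import ietf-subscribed-notifications {
    prefix sn;      // the module augments this one
  }
}
```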
D: Then also, to configure the receiver IP — so, to which IP we are sending these YANG-Push notification messages — we are using the ip-address type, which by default comes with a zone, allowing one to specify, for example, an IPv6 link-scoped address.
D: The question on the mailing list was why we are using this one. As authors, we don't mind changing it to ip-address-no-zone, so the same question here: what do you think? Should we change it to ip-address-no-zone, or should we just stay with the current one?
E: Rob Wilton again. This one, contrary to the first, is trickier, because I'm not sure there's a clear indication of exactly which way we should go with the IP addresses. I think using ip-address-no-zone is a fairly safe thing to do at the moment, so that doesn't cause any harm. Yeah, that seems like a pragmatic solution; it may be revisited longer term.
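The two candidates differ only in the leaf's type, using the typedefs from RFC 6991's ietf-inet-types module; a sketch (the leaf name is illustrative):

```yang
import ietf-inet-types {
  prefix inet;
}

leaf remote-address {
  // Current choice: permits a zone index, e.g. "fe80::1%eth0",
  // so link-scoped IPv6 receivers can be specified.
  type inet:ip-address;

  // Alternative under discussion: no zone index permitted.
  // type inet:ip-address-no-zone;
}
```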
D: So here, in my opinion, HTTPS-notif and UDP-notif are both transports for NETCONF notifications. Well, I know that UDP-notif is a little more specific, but they are both transports, and what I think is that they should be configured the same way.
D
The
current
young
module
from
UDP
native
is
based
on
an
example
of
a
subscribe
notification
RFC,
but
https
native
took
actually
another
approach
to
configure
that,
so
they
have
receivers
a
young
module
which
defines
depending
on
the
transport
the
parameters
and
then
an
actual
the
receiver
Leaf.
They
are
referencing
the
actual
receiver
instance.
D: So here it would be the same question: should UDP-notif be configured the same way as HTTPS-notif, or should we stay as is? Because, yeah, as I said, both are transports for NETCONF notifications, and I like the idea of having to configure them the same way. So yeah.
C: The reason why HTTPS-notif did it that way was because it is a TCP-based transport, and hence the desire for the server to have a single transport connection to the receiver, regardless of the number of subscriptions that were pointing to it or using it. For UDP, being sessionless, the technical requirement isn't there, but for consistency's sake it makes sense to me that it would actually use that same pattern.
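The HTTPS-notif pattern being described — receiver instances configured once, then referenced per subscription — can be sketched roughly like this (names and paths are my own illustration, not the actual modules'):

```yang
// Transport-specific receiver instances, configured once.
container receiver-instances {
  list receiver-instance {
    key "name";
    leaf name { type string; }
    // choice of transport-specific parameters would go here
  }
}

// Each subscription's receiver then just points at an instance.
augment "/sn:subscriptions/sn:subscription"
      + "/sn:receivers/sn:receiver" {
  leaf receiver-instance-ref {
    type leafref {
      path "/receiver-instances/receiver-instance/name";
    }
  }
}
```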
D: Yeah. At the last IETF I also had Joe Clarke, who approached me saying that maybe the DTLS layer might impact performance, and that it might be interesting to add a section saying that, yeah, implementing the DTLS layer might impact performance. As authors we do think that this is actually pretty obvious, but yeah, I do want some feedback on whether it is actually necessary for the draft.
D: So we added a phrase that messages may be dropped. We plan to add two examples, one for the configuration of UDP-notif and one for the notification message. And then, since we had some feedback from the developers that the message ID and the observation domain ID were not clear, or not in the draft, we proposed some text.
D: This text has already been sent to the mailing list, so it would be nice if I could get some feedback. And since we think the draft is already stable, we do think that the current draft is ready for security review.
D: Next slide, please. Yeah, that's all for UDP-notif, and then I just wanted to point out a gap we found in YANG-Push, or rather in the specifications. The NETCONF event notification header is defined in RFC 5277 using an XML Schema — you know, it's the orange one on the slide — while the YANG-Push part, the yellow one, is defined with a YANG module.
D
So
when
the
message
is
actually
recorded
in
XML,
there
is
no
problem
because
you
can
get
the
the
net
of
every
notification
from
the
XML
schema
and
the
young
push
header
from
The
Young
module.
But
the
problem
arise
when
this
message
is
encoded
in
Json,
maybe
feature
in
in
the
future
iteration
in
civil,
that
we
don't
have
the
netconf
even
notifications
in
in
in
young
module,
which
does
not
allow
us
to
actually
parse
it
using
young
Jason
or
young
silver.
D: So yeah, I just wanted to point that out, and that we want to write a new draft by the next IETF filling that gap.
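To make the gap concrete: in the XML encoding below, the outer notification envelope with eventTime comes from RFC 5277's XML Schema, while only the inner payload is YANG-defined (RFC 8641); values are illustrative:

```xml
<notification
    xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
  <eventTime>2022-11-07T15:30:00Z</eventTime>
  <!-- only this payload is YANG-defined (ietf-yang-push) -->
  <push-update
      xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-push">
    <id>1</id>
    <datastore-contents/>
  </push-update>
</notification>
```

A JSON (or CBOR) encoding of the same message has no YANG module to validate the envelope against, which is the gap being described.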
F: Since I was called out, I had to refresh my memory of why I wanted DTLS called out. The use case you referred to last time was distributing it to the line cards, and that was what I was wondering: if the line cards, off of the control plane, are now tasked with encrypting and sending this out, would there be a performance impact on what the line cards are otherwise doing from a data-plane perspective?
B: So, Joe, I checked IPFIX, all right, and the section about DTLS: it does not speak about performance there. So I took this as a reference that we don't need to speak about performance either — just as a comparison.
D: So, regarding the question from Thomas: I don't mind updating the NETCONF event notifications, but I don't know the actual IETF process, so I'd like — if anyone has any proposals on that, we are open to that.
D: So, no, there are no updates on the distributed notif draft and, as I said at the last IETF, this draft has only one reference, in UDP-notif, so it is waiting for UDP-notif to be last called, in order to be last called along with it. So next — oh yeah.
J: Okay, hi, my name is Paul and I'm going to present the status on a few discussion points about list pagination, for a set of drafts for NETCONF.
J: You can go to the next slide. So the three drafts are a main draft for YANG-driven protocols, and then two protocol drafts, one for NETCONF and one for RESTCONF, for the specifics of the protocols. We can take the next slide.
J: So there has been no published change since the last IETF meeting, but there have been a few questions raised in this small group that is working on it, and I also heard that there's another question that is not in these slides.
J: So the questions that have been raised are whether we should support cursor-based pagination, instead of just a limit and an offset without snapshots, and whether we should also work on supporting paginating a snapshot of the datastore. And the other question I think I heard was about locale-based sorting, with different languages and locales, and how that should be solved — or whether we have thought about that, or if we're going to work on that. Next slide, please.
J: So the idea for the cursor-based pagination now is to have one new query parameter, called 'cursor', which would be a base64-encoded position. For the start position there's then the question of how it should be encoded: should it just be empty, should it be some sentinel value, and so on. And then you could use the limit to size the pages, down to paging through it. Next slide.
J
So
having
a
base,
64
encoded,
opaque
value
would
give
some
it
would.
It
would
be
a
generally
unique,
well
obviously,
a
unique
value,
of
course,
but
for
instance
the
instantiated,
the
key
or
something
in
the
in
the
list,
for
it
could
bring
some
more
encoding
that
you
could
put
in
there
for
the
underlying
database
next
yeah,
and
so
this
could
be
one
example,
then,
with
the
rest,
conf
request.
J: So the cursor is then the base64-encoded string 'Alice', and then we limit it to two posts, and then you get these annotations for the attributes of the list pagination. And we have the next cursor and the previous cursor; the start, in this case, is an empty leaf, and the next cursor is then also base64-encoded. Next.
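As a quick sketch of the opaque-cursor idea (my own illustration, not code from the drafts): the server base64-encodes the last list key it returned, and decodes it on the next request to know where to resume.

```python
import base64

def encode_cursor(last_key: str) -> str:
    """Encode the last returned list key as an opaque cursor."""
    return base64.b64encode(last_key.encode()).decode()

def decode_cursor(cursor: str) -> str:
    """Recover the list key to resume iteration after."""
    return base64.b64decode(cursor.encode()).decode()

# The "Alice" example from the slides:
cursor = encode_cursor("Alice")
print(cursor)                 # QWxpY2U=
print(decode_cursor(cursor))  # Alice
```

A server could pack more than the bare key into the cursor (for instance a hint for the underlying database), which is why the draft discussion treats it as opaque to the client.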
J: So this is the discussion about whether we want to support paginating snapshots of the datastore. They can be very costly, so how would that work, and what constraints and limitations would that push down to the server? Yeah.
K: Jason Sterne. Sorry, I haven't followed the draft as closely as I should, but is this a discussion about snapshots for config versus state? Because state, for the very large lists where this is probably applicable, strikes me as probably not very feasible or practical, but maybe config might be. So I don't know, maybe it'd be two separate discussions.
J: Yeah, so that is not really decided yet, but that question has been raised — that they can be very large, and how would such a snapshot work for operational data, for instance.
C: Hi, Kent, as a contributor and also an author. To Jason's comment: actually, interestingly, I think the reason — or need, or desire — for snapshots is primarily for the operational state lists, and your point's very well taken. But for configuration, it's been — how long now, over a decade? — that clients have been able to completely load all the configuration; the need for paging over configuration data is not extensive.
C
The
the
need
for
pagination
is
primarily
I,
think
for
the
operational
data
and
and
I
think
when
this
was
discussed
previously,
the
concerns
were
that
the
operational
data
would
be
changing
very
quickly
underfoot,
and
that
was
the
reason
for
wanting
to
track
it.
C
E: Rob Wilton, no hats. So just a question on the snapshot: are you snapshotting the entire data of the things being retrieved, or is it a choice just to snapshot the keys — so you make sure you get a consistent view of the keys in the database, while allowing the data itself to potentially be changing and updating?
J: So I've not been into those discussions — maybe, Kent, you have some opinions?
C: Yeah, Kent as a contributor again. I think that's an implementation detail, right? From the client's perspective, they just would snapshot; but to your point, implementation-wise it would make sense to maybe just track the keys — it'd be easier to do that.
E
So
I
guess
my
question
is
all
about
the
consistency
of
the
data.
That's
being
returned.
What
are
you
guaranteeing?
Are
you
guaranteeing
that
you're
going
to
return
other
snapshot,
Point
you're,
going
to
return
all
the
stuff
that
all
that
data
in
a
consistent
way,
or
are
you
just
going
to
guarantee
you're
going
to
return
the
latest
values
for
each
entry
in
that
list?
I'm.
K: Jason Sterne here, I guess just commenting on that. So I kind of liked the idea Rob was proposing at first, but then it does introduce a bit of a hybrid. Either we're standardizing and saying it has to be a snapshot, or it's implementation-specific. I mean, if it's fully implementation-specific, then probably an implementation can do keys; but if we standardize a snapshot, it might be a bit of an odd hybrid to have just the keys snapshotted and not the rest of the data.
K: Just back to the original point about config versus state: I can see your point, Kent, about it being more needed for state. I guess I'm still struggling with — if it's a case where it's really needed, it's probably a very big and changing data set, so it's either going to be very expensive to snapshot, or it's going to be churning for you as you page. So I don't have a solution; just saying the use case seems very difficult to solve.
C
Canton
as
a
contributor
again
to
replying
to
Jason
but
I,
think
the
strongest
use
case
is
for
logs.
You
know
where
they're
always
being
appended
to
the
end
of
the
log
and
for
the
most
part,
they're
read
only
hence
you
know
earlier
Lodge
not
being
deleted
as
you're
moving
through
them,
I
I
I.
C
Imagine
there
probably
are
cases
where
items
are
being
deleted,
sometimes,
but
and
maybe
there's
other
operational
State
lists
that
are
enormous
and
and
they're
not
like
logs
that
where
the
data
is
being
appended
to
the
end,
but
I
think
that
by
Far
and
Away
the
largest
use
cases
for
logs.
K
Yeah
Jason
here
again,
I
I
can
see
your
point
for
logs,
where
that's
kind
of
just
entries
being
appended
to
an
end
of
a
flat
buffer,
but
it
would
be
interesting
to
see
if
other
people
are
thinking
of
well
I'm.
You
know
I'm
going
to
dump
my
rib
this
way
and
give
me
a
snapshot
Etc,
something
that's
Dynamic,
huge
and
dynamic
that
the
operation
model
look
at
for
whatever
reason.
L: We have an operation called get-bulk, which has been deployed for about six years or so. It is used mostly for counters and operational state — I have never heard of it being used for logs — but it returns the last keys, and then on the next request the client hands those keys back, and that's where we continue from. Otherwise you can't use a cursor, because there are constantly entries being added and deleted, so the cursor is meaningless over time.
L: So unless you're using a snapshot — in which case you've taken the data off to the side and you're letting the client iterate through it. But if you're just using a stateless solution, where there is no snapshot, a cursor doesn't work.
M: From Juniper. So the thing that makes this messy — having just, you know, briefly glanced through the draft — is sorts and limits, and depending on where the stream of the data is. So for things like files, you know, if you're applying a sort or a limit on that, you theoretically can regenerate all the necessary state, as long as the files are not constantly changing. The case of large data, like a BGP RIB, is an example.
M
That's
a
case
where,
in
a
lot
of
circumstances,
as
long
as
you're,
not
trying
to
apply
sorting
or
limiting
there's
a
good
chance
that
keeping
track
of
the
last
displayed
key
is
sufficient
to
allow
you
to
resume
your
iteration
through
the
module.
As
long
as
the
container
order
is
something
that
makes
sense
that
can
be
continued
and
for
things
that
are
very
large,
like
HP
ribs,
that's
generally
how
things
tend
to
be
implemented,
so
I
guess
one
of
the
questions
is:
should
the
use
cases
be
split
up
so
that
you
can
decide?
M: Can things be iterated based on keys, versus cases where the only way to generate that is basically taking the snapshots? So does the snapshot need to happen, or can you generate the necessary thing just simply by using an iterator?
C: Again, as a contributor, just quickly: the solution currently in the draft is to allow the server to specify, for each operational state list, which functions it supports. And so the draft might define an ability to do snapshots, but a server can specify whether it supports it on a list-by-list basis. So for those append-only, log-type lists, where snapshots really don't make that much sense, perhaps it's not there; but for a RIB, the server might say that it does support the snapshot for it.
N: There we go, thank you. Yeah, so the draft has been adopted now, finally, and I'm very happy for that — thank you all for supporting it. But in the adoption process we were blocked by an IPR disclosure issue, so this thing stopped for a while. That claim has now been cleared, so that was enabling the adoption, but that also means that nothing really happened to the transaction ID draft since IETF 114.
N: So, what should I spend the ten minutes on? One of the things I've been getting a lot of questions about is how this one overlaps with several other drafts that are in motion right now, so I thought I'd spend a few minutes on trying to clarify that. Next, please. In order to not fill the slides with these extremely long names, I shortened them a bit; you see on the left-hand side my abbreviations for these drafts. So 'trans ID' is the one that I'm talking about.
N: So one of the issues that the transaction ID draft is trying to work with is to reduce the locking time, so that clients wouldn't have to hold a lock for a long period of time, and that is something that this private candidate draft is also trying to address. ETags, of course, do not, because in RESTCONF there are no locks.
N: Another thing that transaction ID is trying to solve is to look at allowing clients to synchronize what has changed: "only get me the diffs". The config trace draft is also talking about that use case, but it's not really specifying anything on its own, and I think — I hope — it means that it depends on, assumes, that transaction ID or something like it already exists. They're nodding here at the front row.
N: And, I would say, the reverse use case: where you have a configuration in the server already, and you have a client that's trying to update something and is interested to detect whether somebody else has been writing in the same area as this client. That is an important use case when you have multiple clients to the same server. And again, the config trace draft is also discussing this use case, and it seems to me that it's depending on a transaction ID or something like that — and we have similar discussions in the private candidate draft.
N: Something that I think is important also is this YANG-Push case: when you're pushing configuration updates to a YANG server and you have a YANG-Push subscription, you will hear an echo of what you just committed to it, and it's very useful for the client to know "oh, this is actually what I just sent down, so I don't need to think so much about it" — or, the other way around, "I do want to care especially much about it".
N: It is that number being mapped to these actions on these devices, and I want to see what the impact was in the network — maybe for billing, or for forensics; there are many, many reasons why you would want to do that. The first version of the transaction ID draft had the functionality for that, but from feedback from this forum in earlier meetings it was said "remove that", so I did; it's not included now. But now, instead, there are two separate drafts that are addressing this particular problem.
N: We have the config trace one and the W3C trace context one, which are looking at exactly this: how do you trace configuration — or actually, it's not just configuration changes in these cases, it's generally actions towards a device — and what impact do they have on the devices? So those config trace and W3C trace context drafts are bigger solutions than what transaction ID had.
N: But it's a question here: do we want to go further and continue this discussion with these two drafts, and leave it out of transaction ID? Maybe things are possible here, I don't know. If there are any immediate comments about how you think that particular tracing should happen, you're welcome to say.
C: As a contributor — well, I mean, if the transaction ID draft can check all the boxes, it seems convenient for one solution to solve them all, so to speak. But what I don't understand yet is: are these other solutions checking the boxes in a better way? I don't understand that part just yet. I think the transaction ID direction solves rows two and three, which are very immediate needs, in a nice and easy way; these are not addressed in the others, so I see the need for this draft. It could be somehow harmonized with the trace drafts. And one more thing: if we put back this removed capability, that has a potential for conflicts — what if the proposed transaction ID is not suitable for some reason? So yeah, it has those problems. Thanks.
O: I did a similar table when looking at all these three, and additionally, when I read them, I said "oh, there is an overlap". But then, when I thought about individual use cases for each one of them, it was obvious. One was transaction ID — it's saying conditional configuration, etc. Then the other one is hierarchical.
O: Tracing between multiple layers. And then the third one, it's application tracing, which is a separate use case again. So I don't believe you can do conditional configuration without transaction ID. But then again, maybe it's not either/or — maybe there is a need for all three to coexist. Absolutely.
P: Hello, Jean Quilbeuf. It's the same comment as Olga's, basically. I think the transaction ID can continue on its own, and maybe we need to have some alignment for the other drafts as well, but I think it makes sense to have just the transaction ID on one side and then the configuration tracing, with whatever method, on the other side.
N: Thank you, yeah. My personal opinion is that I think it makes sense to have this tracing in a separate draft and not have it in transaction ID.
N: Configuration trace and W3C trace context are larger and give maybe a better result, even though I think, for implementers that would implement transaction ID, we would probably want to implement the tracing at the same time. I think it's closely related, so implementers would have to deal with two or maybe three drafts in that case.
N
So
the
well
after
you
have
another.
You
have
your
hands
every
day.
Okay,
the
the
remaining
time.
I
will
just
do
a
recap
of
what
this
use
case
is
our
life.
So
we
can
take
the
next
slide,
so
allow
clients
to
get
conflict
changes
and
then,
of
course,
without
blocking.
You
have
this
tradition
like
on
the
left.
Here,
a
client
is
doing
a
get
config
to
a
server
and
getting
a
block
of
data
back,
and
if
you
want
to
see,
is
there
any
changes?
N
It
would
do
a
good
config
again
and
get
the
same
block
of
data,
or
maybe
some
changes
deep
down
somewhere
and
that's,
of
course,
a
massive
effort
to
do
that
continuously,
whereas
with
the
transaction
ID,
you
would
do
a
get
config
and
ask
for
transaction
ID
and
you
get
a
block
of
data
associated
with
transaction
IDs.
And
then
you
do
a
get
config
again
and
you
tell
the
server.
This
is
what
I
already
know
and
those
parts
that
are
the
same
are
not
sent
back
slide.
Please.
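A rough sketch of that flow (the 'txid' attribute name here is my own placeholder, not necessarily the draft's exact syntax):

```xml
<!-- reply to a get-config that requested txid annotations -->
<data>
  <interfaces txid="abc123"> ... </interfaces>
</data>

<!-- next get-config: "this is what I already know"; subtrees
     whose txid still matches are pruned from the reply -->
<get-config>
  <source><running/></source>
  <filter type="subtree">
    <interfaces txid="abc123"/>
  </filter>
</get-config>
```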
N: And here's a case where you try to avoid configuration clashes. So a client might want to synchronize with the server: do a get-config and get the block of data, and then intend to do an edit-config which is changing something in that.
N: Whereas on the right side, you do the get-config, you get the transaction ID back, and then you do an edit-config with the condition that the transaction ID is this-and-that; and if it has changed, you will get an error returned back from the server. Next slide, please. And here's the YANG-Push subscription case, where clients are subscribing, and if somebody does an edit-config, you get an update with a transaction ID in it.
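The conditional edit-config can be sketched the same way — the client asserts the txid it last saw, and the server rejects the edit if the subtree has changed since (again, names are my illustration, not the draft's exact syntax):

```xml
<edit-config>
  <target><running/></target>
  <config>
    <!-- placeholder condition: apply only if the subtree still
         carries the txid this client last retrieved -->
    <interfaces txid="abc123">
      <interface>
        <name>eth0</name>
        <enabled>true</enabled>
      </interface>
    </interfaces>
  </config>
</edit-config>
<!-- if another client changed the subtree meanwhile, the server
     returns an rpc-error instead of applying the edit -->
```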
N: "Oh, the thing that came back now was not my config change — it was actually something else that happened." And in the right-hand case here, you send an edit-config with a transaction ID, and then you get an update back with the same expected transaction ID.
N
Oh,
this
is
actually
what
I
already
knew
next
yeah,
and
these
are
the
some
numbers
that
I
did
it's
probably
a
year
ago
now,
when
I
apply
this
in
the
lab
environment,
we
had
a
real
world
management
application
running
in
our
lab
for
a
particular
use
case
in
for
one
hour,
and
it
made
569
requests
in
the
original
case.
Q: Hi, I'm Sean Turner. Nice to see all your masked-up faces again. Hey, I'll keep this simple — I've only got one slide. Next. So we had a -00 version that came out that was pretty much identical to the individual draft, and so -01 is basically addressing anything that we received during working group adoption. There were a couple of editorial comments that I got off-list, so I kind of made those; you can follow the links if you're really interested. One of the comments we also got was from Jürgen.
Q
Like
hey,
you
know
the
draft
in
which
you're
you're
basing
this
on
is
RC
58,
50,
75
89
and
it
already
gets
rid
of
or
obsolete's
RFC
5539.
So,
while
you're
doing
it
again
here
and
he's
right,
so
we
just
dropped
all
references
to
it
and
then
one
of
the
things
that
happened
was
Jeff.
Doty,
basically
came
and
said:
hey.
This
draft
really
applies
to
pce
as
well.
Q
Can
we
just
take
the
draft
and
apply
it
over
there
and
we
were
like
sure,
so
they
basically
got
a
couple
of
comments
that
clarified
the
text
a
little
bit
to
make
sure
that
you
know
it's.
It's
really
clear
that
you
can
do
TLS
1.3
without
zero
rtt.
So
it's
perfectly
okay
to
do
that,
and
then
we
wanted
to
clarify
also
that
the
requirements
apply
to
implementations,
that
support
1.3
and
it's
not
that
we're
musting
or
making
TLS
1.3
the
mandatory
to
implement,
because
that
wasn't
kind
of
the
goal
of
the
draft.
Q
So
now
we
have
these
two
drafts
they're,
basically
kind
of
a
line.
At
this
point
we
have
no
known
issues.
I
know
it
is
just
a
zero
one
version
and
it's
like
brand
spanking
new.
But
at
this
point
I
don't
have
anything
else
to
do
so.
Can
we
get
a
working
group?
Last
call
I,
don't
know
what
the
what
the
process
is
that
you
guys
want
to
do
with
this.
But
at
this
point
I
don't
know
of
any
known
issues,
and
so
basically
smart
people
that
do
Nick
conf
take
a
read.
R: So hello, everyone — this is Wei Song from Huawei, and this presentation is about adaptive subscription to YANG notifications. So next slide, please. Yeah, for people who are not familiar with this work: as we all know, YANG-Push has provided a way to allow the server to continuously push updates to the receiver, and for a periodic subscription the updates are streamed periodically, based on a configured time interval. But sometimes we might find it hard to configure the period interval, because usually it is a very high frequency.
R
So
our
goal
is
want
to
seek
the
balance
between
the
expensive
debt
management
cost
and
the
real-time
streaming
stream
Telemetry
data
for
the
top
shooting
and
our
main
idea
is
to
perform
the
Adaptive
subscription
a
policy
built
on
top
of
the
young
push
mechanism
and
allow
the
servers
to
switch
to
different
update
intervals
automatically
based
on
the
network
condition
chance.
So
next
slide,
please
yeah.
So
since
last
item
meeting
I
think
Adrian
has
had
a
very
thorough
and
detailed
review
of
this
document.
R
So
the
authors
would
like
to
thank
Adrian
for
the
helpful
comments
and
most
of
the
comments
are
the
editorial
change
to
improve
the
readability
and
beside
that.
We
also
update
the
Adaptive
subscription
modules.
For
example,
add
some
at
the
contact.
Editor
information
fix
the
ITF
trust
copyright
statement
and
fix
the
validation
errors
Etc,
and
besides
that,
we
also
clarify
that
the
pure
parameter
must
coexist
with
the
x-pass
external
evaluation
expression
parameter.
R
And
another
update
is
that
we
also
clarify
that
existing
RPC
sellers
defined
in
the
RFC
18639
and
86
41
are
still
appliable
to
this
document.
For
example,
if
any
config
period
is
not
supported
by
the
the
publisher,
so
it
can
still
send
a
period.
Unsupported
RPC,
error
response,
so
that's
also
okay
and
the
last
one
is
that
we
have
defined
a
a
PC
error
in
our
draft,
which
is
named
multiple
expats
Criterion
conflict,
and
this
episode
can
be
used
when
you
have
multiple
experts,
evaluation,
expression
and
the
evaluated
as
conflict.
R
That
also
means
that
more
than
one
condition
is
evaluated,
I
evaluated
as
true
at
the
same
time.
So
the
the
draft,
the
previous
draft,
said
that
such
a
RPC
error
could
also
cause
an
ongoing
adaptive
subscription
terminated,
but
instead
of
saying
that
the
latest
version
of
the
job
just
said
that
if
it's
during
and
the
life
cycle
of
an
Adaptive
subscription
and
the
server
can
still
push
updates
at
the
shortest
streaming
period.
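As an illustration of the mechanism being described — per-condition update intervals layered on a YANG-Push subscription — a rough sketch (node names are my own, not the module's actual definitions):

```yang
augment "/sn:subscriptions/sn:subscription" {
  list adaptive-period {
    key "name";
    leaf name { type string; }
    leaf xpath-condition {
      type yang:xpath1.0;
      description
        "Condition evaluated against the datastore; at most one
         condition may evaluate to true at a time, otherwise the
         multiple-xpath-criteria-conflict RPC error applies.";
    }
    leaf period {
      type uint32;
      units "centiseconds";
      description
        "Update interval used while this condition holds.";
    }
  }
}
```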
G: Thomas from Swisscom. So this is a new document which we're presenting here at NETCONF, introducing support for versioning and semantics in YANG notifications. Next slide, please.
G
So
in
a
date,
data
mesh,
big
data
architecture,
where
different
domains
can
exchange
data
with
about
the
context
panel
context
realize
that
we
always
have
semantics
and
versioning
and
semantics
means
basically
that
we
know
the
difference
between
when
we
see
a
value,
if
it's
a
Google
account
or,
for
instance,
an
IP
address
or
August
ring
so
for
a
counter.
For
instance,
we
need
to
do
monotonic
increasing
counter
normalization
where
a
goalk
value
we
can
simply
visualize
it.
G
The
versioning
is
needed
that
we
not
only
understand
that
semantic
has
changed,
but
whenever
we
introduce
our
new
semantics
that
we
know,
basically,
if
the
new
version
is
actually
a
background
compatible
or
not.
So
with
that,
we
are
preventing
that
we
are
actually
breaking
the
end-to-end
data
processing
pipeline,
so
in
Yang
push
defined
in
RFC
8641.
We
are
actually
missing
semantics
and
versioning,
and
within
this
documentary,
I'd
like
to
address
this
next
slide,
please
so
Network
operators
need
to
control
semantics
in
its
data
processing
pipeline.
G
That
includes
yank
push
that
today,
it's
only
possible
that's
only
possible
during
the
Yankees
subscription,
but
not
when
nodes
are
being
upgraded
or
messages
are
being
published.
So,
for
instance,
if
I
do
a
subscription,
I
cannot
really
include
the
the
version
or
the
semantic
version,
so
basically
I'm
just
subscribing
to
an
X
pass
and
when
the
node
is
being
upgraded,
that
version
of
the
Yang
model
could
change
and
could
potentially
introduce
a
new
version
which
is
not
Backward
Compatible
to
the
previous
one.
G: So with this extension we can now subscribe to either a specific revision of an XPath, or we can map it to a revision label. So we can say that, when the node is being upgraded, it needs to be backward compatible with that revision label.
G: On the other hand, when the notification message is being pushed, we have a YANG-push header, and today we only have a reference to the subscription ID; we do not have a semantic reference. In this document we are proposing to add the revision and the revision label, but also the module, the namespace, and the XPath or subtree filter as a reference, so that we clearly have a semantic reference and we understand not only the message itself, but also what the fields and the dimensions mean. Next slide, please.
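As a rough sketch of what such a semantic reference might look like in a push-update message — field names here are illustrative guesses on my part, not the draft's actual YANG node names:

```python
import json

# Illustrative sketch only: a YANG-push notification envelope carrying, besides
# the subscription id, a semantic reference (module, namespace, revision,
# revision label, filter). Field names are hypothetical, not from the draft.
def build_push_update(subscription_id, datastore_contents):
    return {
        "ietf-yang-push:push-update": {
            "id": subscription_id,
            # proposed additional metadata (hypothetical names):
            "module": "ietf-interfaces",
            "namespace": "urn:ietf:params:xml:ns:yang:ietf-interfaces",
            "revision": "2018-02-20",
            "revision-label": "1.0.0",
            "datastore-xpath-filter": "/ietf-interfaces:interfaces",
            "datastore-contents": datastore_contents,
        }
    }

msg = build_push_update(1011, {"interfaces": {"interface": []}})
print(json.dumps(msg, indent=2))
```

With such a header, a collector can persist each update together with the exact schema version it was produced under.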
G: Here is just an example of a notification message: on the left-hand side in JSON, on the right-hand side in XML. I highlighted in yellow the additional metadata we are adding in YANG push. Next slide, please. Do you recognize the problem statement? Network operators need to control semantics in the data processing pipeline, and therefore want to persist the revision or semantic version also when we upgrade nodes, and to have a semantic reference in the notification message. We are working on sample implementations for the IETF 116 hackathon. Yes, please.
K: Jason Sterne. I guess I haven't thought about this problem a lot yet, and I might be missing something, but it worries me a little bit to pack all the compatibility information into each update. It seems like a lot of overhead for something that may not change very frequently, whereas notifications arriving is a very high-throughput data stream. So I'm not sure whether we should be exchanging information about versions of the data somewhat out of band, relative to the actual notifications.
K: Coming back, I mean, we do have other mechanisms in NETCONF to advertise what module versions are being used on a server and to indicate when those have changed, so I don't know if we want to maybe somehow tie into those rather than encoding that right into the data stream. That's just an initial thought. Sure, thanks.
E: Rob Wilton, Cisco, as a contributor. You presented this, or work related to this, earlier in the side meeting, so I'll give you lots of comments back on that. Generally I think this is good work and useful work to do — that's the high-level comment I gave there, and it comes back here.
E: I think you may need to either have a list of modules in the identifier, rather than a single module, or a YANG package that sort of identifies something bigger. To go back to Jason's point — and I think it's a valid comment he's made in terms of efficiency — I think the key point here is: you need to be versioning the individual values that are coming through.
E: So if the schema changes, you can know, and store in your database, the fact that it's changed, and you've got different values of different versions in your database. I think that's probably the key thing that would make this the more efficient way of doing it; that's the key point being solved. Thank you.
G: That is addressing basically a similar need, but there it's about having additional metadata describing the data collection — how the data is being collected on the node itself — while here the focus is actually on having the semantic reference. So they're both adding metadata, but different metadata.
P: Just to complement what Thomas said: this is actually exactly what we need. For instance, the path is exactly what we need to map the data we just collected to the manifest, so in that sense they are complementary, because the information you get from YANG push here will be the one that you actually need in order to do this mapping afterwards.
N: So this is one of the drafts that I was mentioning in the transaction ID talk: the W3C Trace Context. It's an entirely new draft that we are posting. Next slide, please. When we deleted the tracing parts from the transaction ID draft, we realized that something like this would be needed anyway — but okay, let's do that in a separate draft, as a separate mechanism. And actually that was pretty good, because then we started to look around at what's already out there, and we found this.
N: There is actually something very similar, provided by W3C, that's already applicable to RESTCONF at least, and we then decided: okay, let's make a proposal to bring those REST headers into a NETCONF context here, and reuse the full stack as exactly as possible, so you can have the same implementation for this that you might already have. And you see, down there on the bottom, we have a reference to the OpenTelemetry specification, which talks about this specifically.
N: So let's say we have a BSS that is sending down an order — that's the easiest use case to imagine — and this is an edit-config, but it could be basically any RPC or whatever operation that BSS is doing. It tags that operation with a traceparent, and then it goes down to an orchestrator, which then sends work with the same trace ID down to its controllers and down to the devices.
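The propagation described above can be sketched in a few lines. Per the W3C Trace Context format, a traceparent is `<version>-<trace-id>-<parent-id>-<flags>`: every hop keeps the same trace-id but mints a fresh parent (span) id. The helper names are mine, not from the draft.

```python
import secrets

# Sketch of W3C Trace Context propagation down the
# BSS -> orchestrator -> controller -> device chain.
def new_traceparent():
    trace_id = secrets.token_hex(16)   # 32 hex chars, shared by the whole chain
    parent_id = secrets.token_hex(8)   # 16 hex chars, this hop's span
    return f"00-{trace_id}-{parent_id}-01"

def propagate(traceparent):
    """Keep the trace-id, mint a fresh span id for the next hop downstream."""
    version, trace_id, _old_parent, flags = traceparent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

bss = new_traceparent()            # tagged onto the original edit-config
orchestrator = propagate(bss)      # same trace-id, new parent-id
device = propagate(orchestrator)
assert bss.split("-")[1] == device.split("-")[1]  # one trace, end to end
```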
N: Thanks. And this is what it might look like in some tools. Since this is a well-known principle — the headers and the extensions — there are a lot of tools that can work with this stuff, so we can then combine traces from the orchestration level, control level and device level in a single tool, and you can see how much time and effort, and whatever effects, this has in the entire system.
N: So having this trace is useful in many situations; as I said, sustainability, debugging and forensics come to mind for me. We separated this from the transaction ID mechanism now, even though I think implementers will care about both at the same time; it's also separate from that work.
N: We thought long and hard about whether to propose implementing this using attributes or as YANG-modeled leaves. I was originally a proponent of doing it as YANG leaves, because then you can use augment and deviate and other things that we already know about. In the end, when we really dug into it, it makes sense, in my opinion, right now, to do this as attributes.
N: This overlaps greatly with the other trace draft that's ongoing right now, but maybe we can discuss together.
K: Jason Sterne. I guess I'm not really understanding how you'd do it with a leaf — that would have to be a convention, I guess, in everybody's data models, no?
K: But you want to store this tag, or whatever, against individual elements in the data?
N
K: But then what would this ID be stored against in the database?
N: Yeah, I mean, these attributes are not tagged to a specific leaf, really; they are stored against the operation, or tracked against the operation. So an entire edit-config, an entire get-config, an entire lock, or whatever operation it is that you are doing — that's what you need to track here.
H: All right, so, Ahmad Al-Hassani. I have a question: with OpenTelemetry you usually have the trace ID, and then inside your application you have a lot of spans so that you can narrow down what's happening inside your application. Do you envision that, later on, some of that will be standardized inside YANG somewhere?
N
A
O: Sorry, I just wanted to ask: in your example here you have the BSS and the clients, and you have a kind of trace ID and the parent ID. In the examples you're saying you can look at the trace ID and find all the things that are related. How would you deal with multiple applications? Let's say you have a provisioning versus an assurance application, which may have conflicting trace contexts.
N: Yeah, yeah, that's right. On this last slide here there are also a few more specifications from W3C that are related, like the baggage specification, and that may also be interesting to bring in later; but I thought we'd start with one, and if people think this is a nice idea, we can continue with the other ones.
S: Yeah, hi, Charles Eckel from Cisco. I think it's great, kind of reusing and leveraging this mechanism from W3C. However, I don't participate in W3C — I don't know if you, or some others in here, do. Is this a mechanism that W3C is happy with, and that is being used and working out well? Or, you know, we have some examples of things in IETF that we standardize and no one uses, so I want to make sure this is something that, you know...
N: I'm not a W3C expert, but I think this is current; it's not very old — actually it's just a year or something like that, the Trace Context. And baggage is actually not a specification yet; that's why we decided to hold off on that a bit, since it's still in motion. But so it's at least the current direction of W3C. How well used it will be in the future remains to be seen, but it seems it has been out for a while.
P: Yes, hello. I'm Jean Quilbeuf from Huawei, and I'm going to present this draft, which is addressing basically the same problem as the one we just saw. Next slide, please. So, for instance, we have this.
P: This is, let's say, a schema of an architecture with several orchestrators, controllers and several NEs. If something happens, for instance in NE 2, that is causing an issue, and we can track it to a configuration level, what we would like to do is to be able to understand: where does the error come from? What was the original service request that caused it? And there are several use cases for that: it could be that there was a mistake somewhere.
P: It could be that we have two NMSes that are targeting the same NE, and so they keep reverting each other; or it could be that there is actually an error in the intent, even higher up, and there is something that is actually conflicting. So we need to be able to track that, especially if we hope to automate the network. Next slide, please.
P: So we currently have a lot of information already stored. In most devices we have a single ID that maps to the config changes, so we are able to retrieve that already, and there is some configuration metadata that usually comes with it, such as the timestamp of the configuration, the protocol that produced it, and so on.
P: So we have the information here to find what is wrong with the configuration, and to find the corresponding — what we call — local commit ID. The next step would be how to map that local commit ID to something that corresponds to a service request, for instance.
P: Next slide, please. The other existing piece is the transaction ID draft, which actually does exactly that: mapping the configuration sent by your client to the configuration in the server, because they will have a common transaction ID. And this is the idea that we want to use in this solution.
P: The idea in this solution is to keep track of the IDs so that we can then go back from the server that was configured to the client, and so on; so the idea here is to keep, inside the server, who configured it last and with which ID. Next slide, please.
P: So we have a little bit of vocabulary to explain that. Basically, if we take transaction TX 1 here: for the controller, TX 1 is the northbound transaction, because it's going to configure the controller, and for the orchestrator it's a southbound transaction. And the idea is just to keep, in each element, for every local commit ID — so for every configuration change — whether that configuration was caused by someone.
P: If whoever caused it has this mechanism enabled, then we can track the corresponding transaction ID. So we can map, to the local commit ID, the northbound transaction ID, if any, and the southbound transaction IDs, if any. Next slide, please. The idea is to store that in a YANG model, which is fairly simple — exactly what I said before: we have the local transaction ID and we have the northbound transaction ID.
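The per-element mapping just described can be sketched as a small lookup table, walked upward from the NE to recover the originating request. All names here are my own illustration, not the draft's YANG node names.

```python
# Sketch of the commit-tracking idea: each element records, per local commit,
# the northbound transaction that caused it and any southbound transactions it
# issued in turn. Walking the tables upward recovers the originating request.

# commit table on the NE: local commit id -> northbound tx that caused it
ne_commits = {"ne-commit-7": {"northbound-tx": "ctrl-tx-42"}}

# commit table on the controller: its southbound tx ids map back to a local
# commit, which in turn records the orchestrator's northbound tx.
controller_commits = {
    "ctrl-commit-3": {"northbound-tx": "orch-tx-9",
                      "southbound-txs": ["ctrl-tx-42"]},
}

def find_origin(ne_commit):
    """NE commit -> controller commit -> orchestrator-level transaction."""
    nb = ne_commits[ne_commit]["northbound-tx"]
    for rec in controller_commits.values():
        if nb in rec["southbound-txs"]:
            return rec["northbound-tx"]   # the orchestrator-level request
    return None

print(find_origin("ne-commit-7"))  # → orch-tx-9
```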
P: So this is how it would be used. For instance, we have an anomaly detection system that is able to find an error in an NE — that would be the first step at the bottom here — and then, if we identify the configuration, we can find the corresponding northbound transaction ID. So we can go to the controller, and the controller can then return to us which southbound transaction ID — which is actually a local commit ID on the controller — matches this transaction ID, and the corresponding northbound transaction ID; that's step four.
P: So there was one feature that Jan was talking about before: that we could set, from the client, the transaction ID when configuring the server. That would actually simplify our work a lot because, instead of having several transaction IDs per transaction, we would have a single one, and that would also possibly avoid collisions between southbound transactions. So that would be good for us.
P: And then the question is also — because that draft started as something kind of generic: we need to trace the IDs, we need to have a transaction ID between the domains — is NETCONF really the right scope, or do we...? The current decision is to target NETCONF, but maybe we need to include more.
P: So maybe — I mean, here we're in NETCONF, but I will also be presenting it in OPSAWG, which is actually more the target for this draft. So this is the kind of question maybe for there, and then there's the link with the other draft.
P: I think this is the main point we should discuss. For that one — I think we had a very good presentation of it, and I think it's a very nice solution as well — the main difference, in my opinion, is that it's nice to also have the information kept in the router, because in that case, even if we didn't capture the trace, we can still find back the original service request.
P: Basically, we can use the same mechanism of augmenting the RPC to send the information, and then we could maybe store it so that we have it locally, but we can also send it to the server. I think that would be the kind of ideal solution because, in that case, even if, for instance, the top orchestrator does not know which collector it has to send it to, we can still retrieve the information afterwards. So next slide — the last one, to finish.
P: But this is something that needs to be provided only in the case when we actually get the transaction ID, which means it's a client that supports the transaction ID. And clients that support the transaction ID mechanism, if we want to implement this draft, will have to send their client ID as well.
C: Kent, as chair. To your last question, with regard to whether we should apply this to RESTCONF as well: I think it is the case that the NETCONF working group should attempt feature parity between NETCONF and RESTCONF.
B: So initially we posted this in OPSAWG, but we're not really sure about that, right? We thought it could be a generic mechanism. There is a YANG model and — by the way, this is something we believe — having the leaf in the router is a good thing, even if we could also have it, you know, with OpenTelemetry. So it's a YANG module, it's a generic mechanism, and there is no normative reference to NETCONF in it, so we're not too sure where to put it. Any guidance here would be helpful.
A: All right. So, as a chair, I would also like to see the linkage and the dependency between the three drafts — the transaction ID, the W3C Trace Context draft, and this one — to really understand how we want to progress: whether you're going to progress all three documents and, if so, what the dependency between the three of them is.
P: No, just — you can read the slide. Just maybe: there is a repo, and actually Med made some comments on the repo, so if you have some comments you can also go there. I will try to bring them back into the group as well, so that they're not lost.
T: Hi, I'm James. I'm going to introduce the private candidates draft. It's the first time we're going through it and presenting it, and Rob Wills is in the room as well; he's going to present too, halfway through. Here we go. So, the problem space we looked at here is the ability to try to streamline some of the operations in NETCONF configuration.
T: Currently, most NETCONF servers have the shared candidate concept, and multiple clients can make changes to that shared candidate. There are a couple of issues with that, the biggest one, of course, being that one client may unwittingly push changes to the server that they hadn't made themselves. And we have the kind of locking solution available to us.
T: But what that does, of course, is serialize everything that happens on that device, and maybe that's not appropriate if multiple clients wish to configure entirely separate areas of the configuration.
T: So this just graphically demonstrates the current issues with locking: you end up with a completely serialized process. That might be okay in some cases, but in a lot of cases, where maybe there are different clients configuring different areas, it can really hamper operational behaviour.
T: So the high-level aims of the draft are: defining what a private candidate is; looking at how we can create one and manipulate it over NETCONF; and then how we deal with conflict resolution if and when it arises. We just wanted to stop at this first question and check with the working group whether this is a problem space that people have some interest in.
T: Okay, great, thank you. So, firstly, what is a private candidate? A private candidate configuration would be a candidate configuration that's not visible to, or editable by, anyone else. We kind of feel that that is per session — so an individual session — and it gives users a private workspace to stage new configurations prior to committing.
T: It would contain a full copy of the running configuration. Now, whether or not you would actually store that is an implementation detail; but in terms of how it would represent itself to the outside world, it would be a full configuration containing all changes made by that session. I've graphically described how we'd go about creating that in the next couple of slides.
T: If you think of the compare draft that exists, I used that just as a kind of representation to help understand what might happen. So the bottom line is our running configuration, and I've done this in a source-code type way, so that there's some common understanding in terms of terminology that hopefully makes it a bit clearer. As authors and contributors we spent a bit of time hashing this out, and we felt this was probably the clearest way of representing it.
T: So at the point you issue an RPC that requires a candidate configuration — or a private candidate configuration — we would branch the running configuration into its own branch, and then any operation from that session would manipulate that branch.
T: So you see multiple edit-configs, and if you were to compare the current state of your private candidate branch, you could compare it with the head of that branch, to see what changes have been made, or you could compare it with the running configuration, to see the differences between your current working branch and running. And of course running might change in the background.
T: To that end, there are two methods that we've thought of for how you might want to deal and interact with this branch. One we called static branch mode, which is kind of akin to the source-code approach, where those multiple edit-configs make a change to your branch, and then at some point you commit — and there are some more details on potential other RPCs later, and in the draft.
T: At the point you commit, you then merge it back to the running configuration and then, essentially, rebase and move the head further up your branch. So we wouldn't destroy that private candidate; we would just update it, and from that point on any more edit-configs and any more compares would compare against that new head of the branch.
T: The other option that we thought about was a continuous rebase mode, and this is where, again, we start off the same: we create our branch, and edit-configs make changes to that branch. But every time a commit from another client pushes their data into the running configuration, we would automatically rebase the current working branch, and that means, therefore, that any compares would go against the current head of that branch.
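The two modes can be sketched with a toy model — this is my own illustration of the git-like semantics being described, not code from the draft. A static branch keeps the baseline it was created from until commit, while continuous rebase refreshes the baseline on every running-config change; a commit fails if running has diverged, under our feet, on a key we also edited.

```python
# Toy model of the two private-candidate modes (illustration only).
# The "configuration" is a flat dict of leaf -> value.
class PrivateCandidate:
    def __init__(self, running, continuous_rebase=False):
        self.baseline = dict(running)   # head of the branch at creation time
        self.edits = {}                 # this session's changes
        self.continuous_rebase = continuous_rebase

    def edit_config(self, key, value):
        self.edits[key] = value

    def on_running_commit(self, running):
        # In continuous-rebase mode the branch follows running automatically.
        if self.continuous_rebase:
            self.baseline = dict(running)

    def commit(self, running):
        # Collision: running diverged from our baseline on a key we edited.
        collisions = {k for k in self.edits
                      if running.get(k) != self.baseline.get(k)}
        if collisions:
            raise ValueError(f"conflict on {sorted(collisions)}")
        running.update(self.edits)      # merge back, then rebase our head
        self.baseline = dict(running)
        self.edits = {}

running = {"mtu": 1500}
pc = PrivateCandidate(running)          # static branch mode
pc.edit_config("mtu", 9000)
running["hostname"] = "r1"              # another client commits elsewhere: fine
pc.commit(running)
print(running)                          # → {'mtu': 9000, 'hostname': 'r1'}
```

Had the other client changed `mtu` instead, the commit would raise, which matches the "abort on conflict" behaviour discussed later in the session.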
U: You could, in theory, implement both. The first option is a capability: when that capability is specified, the existing NETCONF candidate datastore behaves like a private candidate, and so the get-config and edit-config operations, etc., whose target is the candidate, would actually be acting against the private candidate.
U: The other option is a new NMDA datastore: we could define a datastore for the private candidate, and then the get-data and edit-data operations could operate against that datastore. Next slide, please.
U: So James was talking, with those diagrams with the branches, about what the private candidate looks like at any point in time and the relationship between the private candidate and the running datastore. If two clients are operating on completely different areas of the configuration tree, then clearly there's no real conflict between the clients. But of course the fun starts happening when two clients want to change the same configuration, and there are various places where this could occur.
U: With the first mode that James presented, the static branch: your static branch is sort of unaware of changes to the running datastore, and so the point where you could have collisions is when you then try to commit your private candidate, because at that point you have to reconcile the changes that you've made in your private candidate with the running datastore. In the second mode, the continuous rebase mode...
U: ...in that case, every time the running config changes, that's brought up into the private candidate, and clearly, therefore, there are chances for collisions there. And this list is not necessarily exhaustive. So that's kind of where collisions could occur. How might you resolve them? This is something that we've been actively discussing and will continue to do so.
U: We've also discussed the possibility of a force option, so that, at the point where you reconcile the collisions, you could have it so that the changes in your private candidate are sort of forced through. Just to reiterate: we're still thinking about this, and the draft goes into more detail about the considerations that we discussed. Next slide, please. And so, just to bring this to an end...
U: The things we're actively working on at the moment are the reconciliation methods — so that's how we resolve the collisions.
S: Hi, Charles Eckel, Cisco. One of the things I wasn't sure about, with the NMDA candidate datastore approach: would it be possible for multiple clients to use the same private candidate datastore — a group of clients, but not all the other random clients? Is that a use case that you're actually trying to consider, and that you think has value?
U: That's not a use case we've considered, so at the moment the private candidate is very much specific to a session.
T: I would say we have a similar concept with the persist ID from confirmed commits, but it's not something we had focused on, unless the working group feels there are some strong use cases for that.
E: Rob Wilton, as a participant: I think there's interest in this work, from the hands that were raised earlier.
F: From Cisco — maybe a little related to what Rob was saying: do you have to have one or the other reconciliation method? Could the client not decide? This actually came up the other day in the NOC, where one user wanted to do some more long-running type of config testing, and there was automation that was pushing config. If there were two private candidates, one may want something more long-running, whereas with continuous integration the automation wants something where its private candidate does the test and does the commit. And on the other hand...
F: On the conflict resolution: for me, I would think you'd rather have something where, on merge, if there's a conflict, it aborts — it fails. It seems anything else is going to get into a git-merge-hell type of thing, and it seems the biggest use case is just: can I do my candidate push — is it going to work or not? — versus trying to do conflict resolution and merge resolution.
U: One of the things that James and myself and the other contributors have tried to do is to define a sort of common set of terminology, and also to find commonalities between the two reconciliation methods.
T: And then, I think you're right, Joe, in that it's useful for the client to understand if there is an issue and to know what that issue is. I think that's the kind of starting point: you know, don't go committing stuff if there's going to be an issue, and don't suddenly kind of auto-deal with it — that's probably not the approach.
T: We want the client to be aware that there's an issue. How you then deal with that issue could be as simple as: you fail, and they have to go fix the private candidate, and then you're all good again. Or it could be that there's some override, or that maybe you just need to update your baseline, and that's where the update RPC would come in. Some of those are documented in the draft, and certainly it's an area where we'd welcome contributions.
L: A comment: I really think the working group needs to clearly specify all the requirements before diving into a solution. I think the session-based approach doesn't support RESTCONF, for example, because RESTCONF doesn't have sessions, and I think it's important that it be protocol-independent and support both NETCONF and RESTCONF. And it needs to be per user, not per session — because of that, but also because there's no reason why an application can't have multiple sessions working on this.
L: You know, if it's the same user, then that's easy for the server to control. Also, it needs to be explicitly managed, because you can't fire up multiple copies of the running config that easily, so resource management is critical. And then, on the details of how it would work, I think you'd probably end up with something like Rob suggested, where there are, like, three variants.
L: The original NETCONF has the three variants of datastores because that's what the vendors supported, so it wasn't possible to do one-size-fits-all, and that is probably the case here. That's it.
C: Kent, as a contributor, just speaking to the idea of continuous rebase — and I think others have spoken about this already — but this concerns me.
C: It's unexpected for clients that the server would change the datastore underneath them without them buying in or opting into it. By analogy, in netmod there's a draft dealing with the system-defined datastore, and there was an idea that the system datastore might be changing underfoot and could actually impact the running datastore without the client's supervision. The fix for that was for the client to pass a parameter into its RPCs which basically said: I opt into the idea that the server will dynamically, or automatically, implicitly change the contents of the datastore underneath. And so...
T: If I read that right: essentially you're saying that the static branch mode would be the default, and that you would drop into continuous rebase mode if you sent some kind of operation to the node?
C: There are a number of legacy clients out there that will not be expecting the change to occur; but if the client can advertise that it's okay with the idea that it could occur, and that it can handle that case, then it's okay.
V: I just wanted to mention: it would be necessary, I think, to consider the interaction with certain scenarios, like confirmed commit, for example. Let's say there's a confirmed commit: you already received the first commit, you didn't get the confirming commit yet, and then you try to rebase before getting that second commit.
T: In the auto-rebase mode it probably wouldn't matter as much, because you would auto-rebase twice if it rolled back; but in the static branch mode, yeah, you would have an out-of-date baseline.