From YouTube: IETF113-NETCONF-20220321-1330
Description
NETCONF meeting session at IETF113
2022/03/21 1330
https://datatracker.ietf.org/meeting/113/proceedings/
A
Yeah, all right, it's time for the NETCONF working group session. So let's get started.
A
This is the first hybrid meeting for all of us, and I'm sure we're all trying to get used to the system a little bit, so bear with us if not everything works quite perfectly, as you would expect.
A
Okay, this is the IETF 113 meeting for NETCONF.
A
Okay, moving on: a couple of administrative things. For people who are in the room, first, do remember that you need to sign in to Meetecho, not only for you to speak at the mic, but also to be registered as a participant on the blue sheets.
B
Yes, thank you, Mahesh. So IEEE 802.1 reached out to the NETCONF working group about half a year ago with regard to the keystore draft, and in particular some language about promoting or copying private keys from a built-in or system datastore to the running datastore. A couple of weeks ago, Rob Wilton and I attended the YANGsters meeting on March 10th, which was during the IEEE 802.1 plenary, and we had a good discussion.
B
So the actual impact would be, of course, updating the keystore draft and potentially also the crypto-types draft. So that's all I wanted to say about this, but I know that Scott Mansfield, who's part of the IEEE group, is in the room and maybe would like to say something as well. Yes, go ahead, Scott.
C
I don't want to cut in line; Rob has something to say first, and then I can close off.
D
Rob Wilton from Cisco. So yes, I think Kent sums it up well; I think it was a good discussion. In terms of the liaison response, I raised this with the IESG and IAB, suggesting that I'd write this response, and they're quite happy with that. So I think that's fine.
D
I think it might end up coming from the working group, but I'm happy to work on the text and then work out what the exact process is, and probably we can run it by Scott and Mick beforehand to check that they're happy with what we're writing there as well. Thanks.
C
Yes, Scott Mansfield, Ericsson; I also work on liaisons and such. First off, I wanted to really thank Rob and Kent for reaching out and trying to get this issue resolved. The IETF-IEEE liaison officer, I believe, is Russ Housley, so we should include him to make sure that it goes through the proper channels.
C
If you need any help from me, since it is an 802.1-specific thing, I can also help with the wording, or help get it to the right people. But other than that, it was a great discussion. The next YANGsters meeting, for those that care, will be on the 29th of March, so if there's any follow-up or questions, please feel free to join that meeting. Thank you.
A
All right, so continuing.
A
The chairs' status: on the chartered working group items, the YANG-Push notification capabilities draft is now an RFC. Congratulations to the authors; it is RFC 9196.
A
Rob?

D
Yes, I've got a shepherd lined up, so that's good, and I need to do the write-up. So it's just waiting on me; I will try to get to that this week, hopefully. Thanks.
A
Thanks, Rob. The client-server suite of drafts has completed working group last call; there are a couple of last-call issues that are being resolved as we speak.
A
The UDP-notif draft is a work in progress and will be discussed in this meeting, and the distributed-notif is at version three; there's been very little discussion on the list.
A
So here is the agenda for today. We have, of course, Kent with the client-server suite of drafts, and UDP-notif (sorry, UDP-based transport for configured subscriptions), which is the other chartered working group item. On the non-chartered item list we have a few presentations, so it should be a fairly packed session, with enough time for questions at the end. Right, I think that's pretty much it for the chair presentation. Any questions?
C
Hi. Maybe it's just me because I am remote, but, Mahesh, your sound: it sounds like he's underwater or something. I don't know if I just need to turn up my volume, but I have no trouble hearing Kent or Rob or anyone else, so I just wanted to raise that. Thanks.
B
Okay, all right, Mahesh, I'm going to revoke your sharing. Oh, you did it yourself, fine! Let me start sharing the next slides.
B
Okay, so my name is Kent. I'm presenting the client-server suite of drafts. This has been a long project for the working group, and we're almost to the end of it.
B
This is actually the first time that the client-server suite of drafts has been presented since IETF 110, though they were touched on, as chair-slide amendments or, you know, notes during the chair slides, in previous IETF sessions. So this time I'll just do a little recap going back to 110. It's not much, actually, because we've just been doing fit-and-finish for the most part. But for the crypto-types draft, we did accommodate some SecDir review comments from Valery Smyslov, and we added the hidden-keys feature.
B
The concept of hidden keys existed before; it's just the feature statement "hidden-keys" that didn't exist before. In the trust-anchors draft, we added prefixes to the path statements, regarding a per-trust-anchor issue, and we renamed the "truststore-supported" feature to "central-truststore-supported"; that was from Jürgen, I believe. We removed two unnecessary, slash, unwanted "min-elements 1" statements and added a "presence" statement, and we added an informative reference to the NETMOD with-system draft.
B
We removed the "tcp-connection-grouping" grouping, and so now we just use the "tcp-common-grouping" directly; that was actually done quite a while ago, back in the 110 time frame. We added a security considerations section for the "local-binding-supported" feature. And then, for the SSH client-server draft, we removed the supported authentication methods from the client-authentication grouping, and we moved algorithms away from the ietf-ssh-common module to more general IANA-maintained modules, and this is kind of a big thing.
B
So that's what that item there is. We also added a "config false" list for the algorithms supported: while the IANA registry defines hundreds of algorithms, a particular server may only support a subset of them, so this "config false" list enables the server to identify which subset it supports. And we also added, to be discussed in this slide presentation, a generate-public-key RPC; we'll be discussing that in a moment, as well as for the TLS client-server draft.
B
So, moving on: there were really no updates to the HTTP client-server draft in this past year, just nits. In the NETCONF client-server draft, we did augment a "mapping-required" flag into the client-identity mappings, only for the SSH transport; so, just locking things down, making the validation more correct. And then, lastly, we removed Appendix A, which before had fully expanded tree diagrams; but those tree diagrams were enormous.
B
I mean, literally, the tree diagram itself spanned five pages, I think, and so we just removed those. "Fully expanded" means all the groupings have been expanded, so the uses statements have been expanded. So now both the NETCONF and RESTCONF client-server drafts have just the unexpanded trees, where the groupings are displayed as uses statements.
B
So those are the updates since 110. I did mention a couple of items that we're going to circle back on. The first is this generate-public-key RPC, which is an open issue, and I'm hoping to get some discussion. So, about three years ago, folks may remember that the crypto-types draft attempted to define actions for generating private keys, and, after much discussion, we abandoned these action statements when it proved not possible to define a set of algorithm identifiers that spans protocol stacks, so, for instance, SSH and TLS.
B
So we did abandon that idea about three years ago. But since then, as mentioned earlier, both the SSH client-server and TLS client-server drafts now have their own IANA-maintained algorithm identifiers, pulled from IANA-maintained registries, and so it becomes possible again to maybe define an RPC, but this time make it protocol-specific RPCs in each draft.
B
So for the SSH draft, you can see it's just an RPC; it's very straightforward. There's an input, but notice the algorithm at the very top, on the right-hand side, in yellow: it's an algorithm-identifier identity reference. So that is a reference to, as you can see, "ssh", of course the protocol, and "pka", which stands for public-key algorithm. So in SSH there are four different kinds of algorithm-identity base classes; PKA is one of the four, and it is the only one that really matters when it comes to generating private keys.
B
Do you want the key to be returned as cleartext, or should it be encrypted by another key, or would you like, in fact, the system to hide the key, such that it becomes a built-in key, not ever visible outside the server? And the response is effectively exactly the same as what you'd find inside the keystore, but in the form of output for the RPC response: the format of the key, how it's been encoded, and then, of course, the different kinds of key. If it's cleartext, you just get the key.
B
If it's hidden, then it's just empty; you don't really get the key, it just says it's empty. And if it's encrypted, you get back the encrypted data, but also a reference to the other key that it was encrypted by.
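The three response forms just described (cleartext, hidden, encrypted) can be sketched as a small client-side check. This is only an illustration: the field names are modeled loosely on the ietf-crypto-types naming, and the flat dictionary layout is an assumption for readability, not the drafts' actual XML/JSON encoding.

```python
def interpret_generate_key_output(output: dict) -> str:
    """Classify a generate-public-key RPC reply by which private-key
    form it carries (names modeled loosely on ietf-crypto-types)."""
    if "cleartext-private-key" in output:
        # Key material is returned directly to the caller.
        return "cleartext"
    if "hidden-private-key" in output:
        # An empty/presence node: the key stays inside the server,
        # like a built-in key, never visible outside it.
        return "hidden"
    if "encrypted-private-key" in output:
        # Encrypted blob plus a reference to the key that encrypted it.
        enc = output["encrypted-private-key"]
        assert "encrypted-by" in enc and "encrypted-value" in enc
        return "encrypted"
    raise ValueError("no private-key form present in RPC output")
```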
So that's all very straightforward, and it's easy; I mean, doing this RPC took me probably no more than 45 minutes to put together. Very straightforward. Then I moved on to TLS, to try to do the same there, and discovered some complication.
B
The issue is that with TLS there's only one registry, and that registry defines what are called cipher suites. These cipher suites are, you know, not standalone private-key algorithms like in SSH; each is more of a combination of a private-key algorithm, an encryption algorithm, blocking, padding, and so on. An example would be, for instance, TLS_RSA_WITH_AES_256_CBC_SHA256.
B
So you can just see how many different algorithms have been encoded in there, but the main thing is RSA: this is in fact an RSA key, and if you were to ask the system to generate a key for that cipher suite, it would be expected that the system could say, "oh, what you want me to do is generate an RSA key". Anyway, it's a little bit... it's not perfect, right? It's not like with SSH. So the question, and again I'm hoping for some discussion, is: should we move forward?
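The inference just described, a server deriving the private-key type from a cipher-suite name such as TLS_RSA_WITH_AES_256_CBC_SHA256, can be sketched as simple string parsing. This is only an illustration of the idea: a real server would consult the IANA TLS Cipher Suites registry rather than parse names, and the token rule below is an assumption.

```python
def key_algorithm_from_cipher_suite(suite: str) -> str:
    """Guess which private-key algorithm a TLS 1.2-style cipher-suite
    name implies, e.g. TLS_RSA_WITH_AES_256_CBC_SHA256 -> RSA.
    Illustrative only: a real server would consult the IANA registry."""
    # The tokens between "TLS_" and "_WITH_" name key exchange and
    # authentication, e.g. "RSA" or "ECDHE_ECDSA".
    head, _, _ = suite.partition("_WITH_")
    kx_auth = head.removeprefix("TLS_")
    # The authentication (certificate key) algorithm is the last token.
    auth = kx_auth.split("_")[-1]
    if auth in {"RSA", "ECDSA", "DSS"}:
        return auth
    # TLS 1.3 suites (no "_WITH_") encode no key algorithm at all,
    # which is part of the complication being discussed here.
    raise ValueError(f"cannot infer key algorithm from {suite!r}")
```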
B
Should we continue and define this RPC, passing in a cipher-suite algorithm, assuming that the server can identify which private-key type it implies and generate the appropriate key from there? Or should we just back out of it? I mean, I'm kind of adding in this generating-a-private-key at the last hour; again, it was removed about three years ago. At that time it was the authors' preference, and I decided I preferred not to move forward with it.
B
That's kind of what I was thinking at that time, but now it's so easy, because of the IANA-defined registry. So I'm willing to give it another small go, if people can help me get past this one issue. Any comments on this?
E
Brian here. So I was thinking: would "generate-key-pair", or something along those lines, be clearer? I'm just thinking of it as analogous to ssh-keygen, something like that.
B
Okay, possibly. The actual name of this RPC, generate-public-key, I think I stole from OpenSSH; it mimicked their language. And remember, I mentioned that SSH has four different kinds of key elements, sorry, algorithm identifiers; this is the public-key algorithm, and it's actually called, quote, "public key", so I was trying to leverage that. But you make a good point: we don't have to, necessarily; we could just say generate-key-pair, an asymmetric key pair. That would be an option. Yes, okay.
F
I forgot to register in the queue. I was checking what I remember: in the good old days, there were cipher suites that were allowed to have a NULL value for some of the parts. I know that they are not valid for TLS, but you could use a cipher suite only to identify one particular part of the suite, the relevant part here; that would be doable.
F
I'm not sure whether this has been deprecated by the people in the TLS group, but if the purpose is to identify a concrete private key, for example, I think it would be valid for the intention, while not valid as a TLS-acceptable cipher suite, anyway.
B
I'm going to transition now to the slides for the update to the TLS draft, and I have a co-presenter, Jeff Hartley, who's joining me. And, Jeff... yeah, perfect, I see he's joined the microphone queue. So, Jeff, actually, if you'd like, I'm going to pass slide control to you, which you now have. If you hit your right/left mouse buttons, you should be able to increment the slides.

G
Gotcha.
G
So, just a quick summary; I'll try not to take too much time here. Some feedback had arrived via previous chatter, before I joined the team. Think of me as an informal liaison from BBF: we have a lot of client/server-type applications that we're looking at rolling out in the next year or two, in some of our own standards, and having a standards-based framework for integrating those would be perfect; things like gRPC, Kafka, HTTPS transport, things like that.
G
So some of the chatter that had been active, at the time that I threw some time over the wall here, was that TLS 1.2's and TLS 1.3's usages of the term "PSK" are vastly different; and that proved to be the case.
G
So in the original specification there was, of course, already a PSK in place, and it sort of had the assumptions of TLS 1.2 and before. So it was quickly deemed necessary to split that out and designate which one was 1.2 and which one was 1.3. In fact, this notion of an external PSK is prevalent throughout the TLS 1.3 RFC.
G
One noteworthy change is this notion of zero round-trip time, as they call it. It's that the entire handshake is short-cut: once the client has authenticated itself, the server has accepted that, and they effectively agree upon the parameters embedded in the external PSK, they can immediately start sending data.
G
The server can immediately start fulfilling requests. And while that's a gross oversimplification of the full description, which you can of course find in the 1.3 RFC, it really did need some attention and call-out here. So the action items, as described: just split the 1.2 and 1.3 PSKs apart from each other, plus a few additional identities and features, of course, so we could use feature statements to apply some constraints to them; but nothing too earth-shattering beyond that.
G
Hopefully the font is legible, but I kept only sort of the main body; you can see the full text of these drafts via Kent's GitHub and that sort of thing, so all the descriptions and references, I assure you, are there. In fact, a couple of the notes are pretty handy reading, I would say. So, in the case of the TLS 1.2 PSK, that was the original structure; it really is unchanged.
G
It's just renamed from the original "psk" there, and you'll notice the concept of identity hints and things like that, which don't exist in 1.3, when we get into the 1.3 "epsk". And again, those of us who are sticklers for syntax will notice that the "epsk" is different from the "psk" above it.
G
I assure you that's consistent with the RFCs, and it helps really point out the difference: there's no parity between these things and how they're used. The actual cryptography underneath it was really not the big challenge here; in other words, we could still reuse this notion of keystore.
G
Of course, for the actual hash itself, you can see right there that there's a new type for that, i.e. which hash algorithm you're going to use. There's a limit of exactly two that are currently defined; that being an identity, it could be extended pretty easily in the future, should TLS 1.3 add additional types of supported hashes. And a lot of this is sort of context-specific: there's the actual leaf for context, "target-protocol", for example. If you had, say, an HTTPS client, or some other form of client that is effectively manipulating this, then that information can be populated. And the key derivation function is another mandatory piece in here. Again, the descriptive text had to be trimmed to fit on the slides, but this is the structure of what was added and differentiated from the TLS 1.2-and-prior PSKs.
G
The server, of course... when we're looking at the client, the server is the inverse of the client, and the client simply specifies which of these features you are going to support when you're talking to a particular server.
G
If you look at the server, it's of course the inverse of the client: the exact same structures, with opposite positioning. So the TLS server grouping contains the case statements that we were just speaking of, and the container specifying what client authentication it will support, in terms of PSKs, is there. And, Kent, I believe... did I have the... I did, I did include the additional identities here; there was a bit of a nomenclature cleanup there.
G
We went for the most simplistic structure, of just "tls12" and "tls13"; that got cleaned up throughout the files. And we talked about the new typedef for EPSK supported hashes, which is easily extensible (I had said "identity" earlier, but it's a typedef for that). And that's about it; the references.
G
Okay, well then, I'll end my section by just stating that, if folks do have time to look at these later and would like to make comments, I do participate in the NETCONF mailer. So, thanks.
B
Okay, here we go; sorry about that, zooming ahead to the end. Thank you, Jeff. And, by the way, Jeff was just absolutely awesome: we worked together, I think, over the course of maybe three months, possibly meeting every other week on all this; so, a really huge contribution. Thank you, Jeff. Last slide for me: next steps. As mentioned, Jeff just talked about updates to the TLS draft; we should definitely be validating the correctness of these updates. It's significant, security-wise.
B
You know, a SecDir review beforehand; before doing that, we could discuss that as well, as mentioned. Also, there's the IEEE liaison: some minor updates need to be made there. And, lastly, resolving the generate-key RPC/action issue that we just discussed; and then we're done. I mean, amazingly, the working group chairs can publish the entire set to the AD at this point, in just a few weeks. And I'll be noting here: we started this work in 2014, so eight years ago.
B
This has been a large, long effort, and I'm happy to be near the end. And that's it for this presentation; thank you. Oh, I do see Rob in the queue. Go ahead, Rob, before I queue up the next thing.
D
Rob Wilton, Cisco. I was just going to ask about an early SecDir review: you have been working quite closely with the security folks anyway, I understand, so I think you're probably pretty good to do that just as part of the general reviews. I'm not sure we need an early review here, unless you think otherwise.
B
What I would do, Rob, is, in your write-up, just bring note to it, so some extra attention is brought to that section. I think it's reasonable to just have people take an extra-careful look at the way TLS 1.3 is supported.
B
Okay, and now we're moving to the next presentation: this is UDP-based transport for configured subscriptions.
B
If I can... I'm not able to, so just tell me when you want to see the next slide. Oh, I hear it; I can't do it, sorry about that.
I
Yes, it's working! Yes, thank you. So, this is Alex Huang Feng from INSA Lyon, and I am presenting today the few changes we made in the -05 of the udp-notif draft.
I
On the agenda: I will explain shortly the diff we made in this new draft, and we would like to start a discussion on how to configure DTLS from the YANG module, on which we got a little feedback on the mailing list.
I
We also have been talking with the chairs about the DTLS encryption part of this draft; we got feedback that it should be...
I
So we would like to actually start a discussion in this meeting: whether it is something that is expected by the community (I would guess yes), and then whether just importing the YANG modules implemented in the draft-ietf-netconf-tls-client-server would be enough. But, well, let's start the discussion in this meeting.
I
I would say that would be all. Just a quick presentation, too, about the distributed-notif draft: we changed a leaf, so we did some minor changes in the YANG model, and we also think that the draft is already pretty stable, and we would like to last-call it when this udp-notif is last-called. So that would be it from my part, and we could start the discussion.
B
Thank you; sorry. So, yes, it is a good idea to modify the configuration model so that the actual TLS part of DTLS can be configured. I mean, we're asking the server, in some cases, to initiate an outbound TLS connection to a remote TLS server, and, you know: how does it authenticate that server's certificate?
B
Does it need to authenticate itself to the server? Where is the configuration for these things specified? It is necessary; it cannot, I don't believe, be just a flag like "DTLS is on". There's configuration that has to be specified, and I think that's the request.
B
So, yes, exactly: what it would come down to is that the YANG module that is in this draft would import the TLS client module, and maybe also the TLS server module, if it turns out that it needs to actually listen for connections as well; and then that should be most of it. You would want to also add examples that illustrate those configurations being set, and I think that's the ask. Does it make sense?
A
Thank you. All right, so I had similar comments as Kent for the udp-notif, but also for the distributed-notif: I would love to see some examples of the configuration aspect. I know there are some examples of subscribed notifications, but actual configuration-part examples, at least in the last version...
A
I didn't think I saw anything in particular, so that would be nice to have. And then, in summary: would you say that the distributed-notif draft is really about having this ID that identifies the node the notifications are coming from? Because that's pretty much the summary of what I understood of the draft.
A
From what I remember seeing in the last version of the draft, essentially what you're trying to do is put in, and I don't remember the name you gave to it, a node ID that identifies where the notification is coming from. Is that right?
J
Okay, is it now better? Yes? Okay. The domain ID is basically an identifier which identifies the export process on the router, if there is more than one, and this is needed for the segmentation, okay. In a nutshell, that's what distributed-notif is, and it's transport-agnostic; we need to describe that in a separate draft.
A
Okay. So, the comment on it being transport-independent...
J
That's basically where we can implement it: it's not implemented in HTTPS, but that's for future drafts. That's why.
A
Okay, I'll send you the comments on the mailing list, sure, because we are having some problems at our end.
B
Should we move on to the next presentation now? Yeah, okay. And, just to let people know: there was a small agenda bash while we were between slides here, and we've moved the adaptive-subscription presentation to the end. Everything else will be the same.
H
So let me present this. As you introduced, this is the per-node capabilities for optimum operational data collection. That's a draft that started about two years ago, and the goal today is to see if there is still interest to continue with this draft. So, disclaimer: I have not updated it; this is just a refresh.
H
So, if you go to the next slide: you know, if we're looking at automation, we know that, in the end, automation is only as good as your data models; so we created many models, in the IETF and in different places in industry, so we're on the right track there. And your automation is only as good as your tool chain, and we worked on the tool chain in hackathons, in different places.
H
This is what we did with this new RFC 9196, which Mahesh presented in the intro. What we have in there is two YANG modules: the first one is the placeholder, and the second one offers some capabilities. So, if I look at the last four entries on the slide, it tells us, per node, what is the minimum update period that you could have on telemetry, right, for a specific object.
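The per-node capability idea can be illustrated with a toy lookup table. The paths, leaf names, and flat structure below are simplified assumptions for illustration; they do not reproduce the exact YANG structure of RFC 9196.

```python
# Hypothetical, flattened view of per-node telemetry capabilities,
# keyed by datastore node path (illustrative names, not RFC 9196's
# exact YANG layout).
CAPS = {
    "/interfaces/interface/statistics": {
        "minimum-update-period-ms": 100,
        "on-change-supported": False,
    },
    "/interfaces/interface/oper-status": {
        "minimum-update-period-ms": 1000,
        "on-change-supported": True,
    },
}

def can_subscribe(path, period_ms, on_change):
    """Check a requested subscription against advertised capabilities."""
    cap = CAPS.get(path)
    if cap is None:
        return False  # no capability advertised for this node
    if on_change:
        return cap["on-change-supported"]
    return period_ms is not None and period_ms >= cap["minimum-update-period-ms"]
```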
H
It tells us what the supported update periods are, and whether on-change is supported, right? Let's say you want to do telemetry with on-change.
H
Then there is the suggested observation period, right? It's like: yeah, maybe the router supports all these different periods for observation, but maybe there is one that the vendor suggests, and it's different from the minimum one; because maybe the minimum one is, like, one millisecond, but maybe it's not very practical, or the router will have some problem sustaining it.
H
Well, let's say I want to stream the FIB content: we could say the minimum is one millisecond, or one second, but actually it depends how big your FIB is, right? So just giving something like a generic answer is not always practical; so the router telling us, under these conditions, knowing the number of entries in the FIB, as one example, "we advise you to stream it every five minutes", or whatever.
H
This one is about the corresponding YANG object and MIB OID, if we know it, right? We know SNMP still works, right; we know it's not going to be replaced. So sometimes, just knowing that a specific YANG object refers to an OID is interesting. Now, you could say, "I'm going to do it offline", and I tried. Now, here's the thing: if we take interfaces, which we all know, it seems very simple; but for an interface in SNMP, we know the admin status and oper status.
H
If we come to YANG, it depends: because we could have the interfaces YANG module pre-NMDA, with two different containers; we could have the NMDA YANG module, which is one container but different datastores; and we could have the OpenConfig one, which is yet a different one.
A
Feedback? So we could try to do a show of hands, if what you're interested in is to find out if there's interest.
K
Hello. Yep, hear you good. So, I will say, meanwhile, that we have seen similar problems in the past to the ones you're talking about, with, like, the MIB OID, but actually for other protocols as well; like IPFIX, for example, the IE number there that would be assigned to the node. So, you know, we do see some usefulness for those features, to be able to have the source correlate it back with other identifiers for other protocols.
K
But when you do that: if we could somehow make that kind of a list, and allow for, you know, the identities to be done for one or more types of things, so you can do, you know, IPFIX, or SNMP, or both, right; where you're giving an OID, or the IPFIX number, for whatever purposes the system needs.
H
Very good; thank you, Tim. For the IPFIX it is slightly more complex, because there is semantics behind it, depending on your key-field definitions; so there is not always a one-to-one mapping. I could go into more details later; it's slightly more complex for flows, yeah.
A
Oh, on the online tool: is there an ability to do it on the phone? Yes, I believe there is.
D
Thanks very much; it's very interesting. So I had one comment on the idea of sort of a recommended polling interval, rather than a minimum. I definitely think that makes sense, in the context that, as you say, a minimum polling interval doesn't always make sense, depending on the capacity and things. One of the things, though, even with the recommended one, is that it might depend on what other subscriptions you have at the same time.
D
So if you've only got a subscription against the FIB, then that might give you one interval; whereas if you're polling a lot of other data at the same time, then that will change. And I think that's one aspect that's also related to, like, the adaptive-telemetry drafts; the same issues. How do you...
D
How do you advertise not just what you can do at the extreme, but what you can do in a sort of more generalized sense? I think that's an interesting problem. I don't know whether the recommended one, because that would change dynamically, is what you want, or whether you need something to be able to say: "these are all the subscriptions...
D
...I'm interested in; tell me what you can do here." And it's interesting that what gNMI does here is they have an option of saying "just do the best", so it's just whatever the server can provide; but again, if more subscriptions come along, it will then change over time, and how does the client adjust to that, or not? So I think there's some interesting stuff here. Your other question is: are you interested in this? Now, I've not put my hand up, by the way.
H
So, "do your best" is what we want; now, we want to slightly formalize this. Because if, as an example, with flows, you know, IPFIX, you do sampling: great, you're able to get your sampling rate back to your flow collector, so that you can do your own computation. Now, in this world of adaptive, where you do your best, if you don't know what the best is, and it keeps changing: if you don't advertise this to your data-collection system, then you cannot reconcile the information easily.
H
If we speak about some data, that's fine, because you could kind of average that; if we speak about multiplying counters, because you are doing some sort of sampling, then we have a problem. But this is the idea: do your best based on what you can do now, and, by the way, send me the information, because I need to get the context that will follow the data.
L
Yeah, this is Charles, but I'm channeling a question from Jürgen, because we weren't able to confirm with him about asking it. But he just asked us: the node-tags work in NETMOD provides the means to obtain the metadata; do we really need a new RPC?
E
Joe Clarke, Cisco; not a question, a comment. So I think I raised my virtual hand; I think it's interesting, I'm kind of curious, and I can leave more details on email. Like, I think what Rob said about the polling, yes. The SNMP OID: I mean, you and I know it's useful, but there's a lot of challenges there; like, just knowing the OID alone doesn't really help me, I need to have the index mapping, and that's on a per-instance basis.
H
That's a very good point, Joe; actually, I tend to agree with you. Because whenever I go into the details of getting an index in the MIB, or actually multiple indices, and then we've got, like, a YANG which has different indices, it's going to be complex. So this one was put in there because we want... I do want to store that. Where we arrive, I'm not sure. So maybe, if we have to do things in steps, that might be the right answer.
F
I was trying to raise my hand and go to the queue at the same time, and it's not... well, yeah. It's simply that, I mean, this is the kind of feature that we have been demanding.
F
Quite often, I'm talking among us, precisely because it would be really helpful in warranting, well, I would not say a comfortable, but a reasonable path for migration, and for integrating different elements; instead of using what we are using right now, which is the best guess, or whatever we get from experience or from interaction with the manufacturers.
F
H
F
In principle, if you're asking me, well, is it the letter to Santa Claus for everything? Come on. No, no, seriously, no. If we had to experiment with something: personally, from my experience, for the purely operational people, the one about the OID will probably be more interesting. The one about the related node is something that interests me personally more, but I would say that the migration path will probably be the most interesting for the people that are running the real networks.
A
Thank you. We do have one more question, but just to let everyone know, I ended the poll and we had 29 participants, with 24 saying they had some interest. And we have Balázs in the queue. Go ahead, Balázs.
B
All right, the next presentation will be the transaction ID, Jan Lindblad. Let me get the slides cued up.
B
Go ahead.
N
Thank you. So this is an update on what's going on with the transaction ID, which some of you may have followed earlier. I wanted to do a quick problem-and-solution recap, then talk about the simulated results that we have since earlier, and then mention a few words about what's in the works right now. So this work is trying to address three problems.
N
The second problem is with YANG-Push. So if a client is subscribing to updates on some parts of the server configuration, and then getting updates after that when something happens on the server, that's of course great. But what happens is that if the client is configuring the server, it will get updates about its own change as well, and that is kind of unnecessary, and it takes a lot of computation to figure out: oh, actually, this...
N
And the third thing, well, actually more than CPU consumption and network bandwidth consumption: a problem that's important in many cases is that we want to detect when multiple clients are making changes at the same time to the same server, so that if they are clobbering each other, they would want to know. And the current method of using get-config is both costly and it has a hole, right.
N
So a lot of people have talked about: yeah, let's introduce a sort of simplistic top-level transaction ID, where every server keeps track of, for example, the timestamp when it was last changed, or some sort of global number at the top that reflects the contents, so that instead of a get-config you can ask "what's your transaction ID?", and when you do get the content with get-config, you'll also get that number, like 4711.
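The top-level scheme being described can be sketched roughly as follows. This is a hedged illustration only: the class and method names (`ConfigServer`, `get_txid`, and so on) are invented for the example, and the draft's actual protocol operations and encodings differ.

```python
# Sketch of a top-level transaction-id check: the client asks for the server's
# current txid and only re-fetches the configuration when it has changed.
# All names here are invented for illustration, not taken from the draft.

class ConfigServer:
    def __init__(self):
        self.txid = 4711          # opaque number reflecting current contents
        self.config = {"interfaces": {"eth0": {"mtu": 1500}}}

    def edit(self, path, value):
        self.config[path[0]][path[1]][path[2]] = value
        self.txid += 1            # any successful commit bumps the txid

    def get_txid(self):
        return self.txid

    def get_config(self):
        return self.txid, self.config


class Client:
    def __init__(self, server):
        self.server = server
        self.cached_txid = None
        self.cache = None
        self.full_fetches = 0

    def sync(self):
        # Cheap round trip: compare txids before pulling the whole config.
        if self.server.get_txid() == self.cached_txid:
            return self.cache      # still in sync, no full get-config needed
        self.cached_txid, self.cache = self.server.get_config()
        self.full_fetches += 1
        return self.cache
```

After the first sync, repeated syncs against an unchanged server cost only the small txid round trip instead of a full configuration retrieval.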
N
Let's say we have a VPN client, a security client, and an underlay client that care mostly about different parts of the config of the server. If all you get is this top-level ETag number, 4711, then with all the changes, these clients don't know if something that affects them, or that they care about, has changed or not. And in many cases that ETag number is changing very frequently because of changes that are going on on the device.
N
So that's why we think a transaction-ID or ETag mechanism that goes deeper into the tree is very valuable, so that you can synchronize the part of the configuration, or track changes to the partial configuration, that you actually care about. And the draft also talks about an edit conflict mechanism that is lock-free and detects clobbering without any sort of vulnerability windows.
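The per-subtree version idea, with lock-free conflict detection on edit, can be illustrated with a small compare-and-swap sketch. This is an assumption-laden illustration: the names and the flat three-subtree layout are invented, and the draft's real mechanism carries the IDs inside the protocol operations.

```python
# Hedged sketch of per-subtree ETags with a compare-and-swap edit: a client
# sends the etag it last read, and the edit is rejected if another client
# changed that subtree in the meantime -- no lock, no vulnerable window.
# Invented names; not the draft's actual encoding.

class TxidConflict(Exception):
    pass


class Datastore:
    def __init__(self):
        self.data = {"vpn": {}, "security": {}, "underlay": {}}
        self.etags = {"vpn": 1, "security": 1, "underlay": 1}

    def get(self, subtree):
        # The client learns the data and the etag of just this subtree.
        return self.etags[subtree], dict(self.data[subtree])

    def edit(self, subtree, expected_etag, new_data):
        # Reject if the subtree changed since the client last read it.
        if self.etags[subtree] != expected_etag:
            raise TxidConflict(subtree)
        self.data[subtree] = new_data
        self.etags[subtree] += 1
        return self.etags[subtree]
```

Note that an edit to the `vpn` subtree leaves the `security` etag untouched, which is exactly what lets each client track only the part of the tree it cares about.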
N
In order to measure what this draft would give you, we did a measurement on a real-world application that was running in one of our labs for an hour. This application was doing a lot of management of devices in the network, and it was doing 569 round trips.
N
But if we implemented the sort of mechanism that we talk about in this draft, those 569 would go down to 378 round trips, so about one third would be gone. That of course removes both network load and delay in applications, because round trips take a lot of time. It also reduces the amount of traffic from one megabyte to just a little more than half a megabyte, which again reduces network load and processing time on both sides.
N
This is something that is not mentioned in the current -01 version at all, and we are looking at doing in-house prototype implementations of this, just to verify it with a broader class of use cases than the simulation that we did.
A
Yeah, did you want a poll to see if there was interest in the work? That could be a good idea, yeah. If you can arrange that, please.
B
Poll, okay, great. Jan, so I'm highly supportive of this work. But first, can you just clarify: you didn't use the word RESTCONF in your presentation, but this, by and large, in my mind, is enabling NETCONF to have feature parity with RESTCONF, at least in terms of the ETag, and maybe also the modification time, you know, within the tree, deep in the tree. Is that a fair characterization, that it's effectively enabling NETCONF feature parity with RESTCONF in that regard?
N
That's right, and it's going beyond that; it is also applicable to RESTCONF.
N
Yeah, I mean, we have this... let's see, I can actually go back here in the slides. You know, I have this poll in front of me, so I can't really see the slide very well, but okay. So when you subscribe to configuration updates using YANG-Push for a part of the tree that you're interested in, whatever the client itself is pushing to that part of the tree will be echoed back to it.
N
But it's difficult to understand if this is another update from some other transaction that happened on the server, or if it is my own thing. Since there's no way of knowing, you have to actually parse this whole thing and say: yeah, actually, it matches exactly what I already had.
B
Yeah, this may be an echo of your problem number one, I forget if it was or not, but you mentioned the client getting back a notification for something that it had already... it knows it did it.
B
So why would it need to get a notification for it? And as you were saying that, I was thinking to myself that maybe it's not so bad. I mean, if the bandwidth and, you know, processing isn't huge, it's still probably valuable to have a record of the thing having occurred, and so in that regard I probably would say it's not a big deal. So if it's the same issue, then maybe it's not a big deal.
N
I think that, actually, if you look into how we could do this with the transaction ID, you could do exactly that, but much more efficiently than getting the full data back and having to process it and compare it and all that. So getting the update is great, but if you have the YANG-Push mechanism integrated with transaction IDs, you could do that.
A
Yes, so... oh.
B
As I understand it, this draft actually presents the use cases for the adaptive subscription presentation, which is after this one. Is Xiaomin here?
O
Thanks. So, okay, on behalf of the authors and contributors, I would like to give a presentation about the adaptive subscription to YANG notifications.
O
For people who are not familiar with this work: usually, high-frequency data collection leads to more resource consumption, while low-frequency data collection is insufficient for fault localization, and people sometimes may find it hard to balance the need between expensive data management cost and better fidelity for troubleshooting.
O
This work was first proposed about two years ago, and we have received a lot of comments over the past two years. About two months ago the working group adoption call was initiated, and we received different opinions. I think we got a lot of support, and we also received concerns and objections from Andy and Per. Some of the objections are about the problem statement, while others are about the evaluation of the XPath expression, like the usage of the watermark, the evaluation of XPath, external evaluation, etc.
O
So the authors investigated these concerns very carefully and updated the draft accordingly, which includes the following changes. We have defined new RPC errors: at the last IETF meeting, I think Rob suggested that we define a new RPC error to report when a server cannot parse the XPath syntax defined in the XPath evaluation expression, that is, when it is more complicated than it can handle. And Andy asked us what happens if multiple XPath criteria conflict.
O
So we have removed this parameter. There are also some other questions asked by Andy, like how to evaluate the XPath expression, how to compare a targeted data object in a specific list entry, and how often the server checks whether the period should change. Clarifications have been made in the updated draft, and I will also explain some of them later in my slides. Another piece of progress is that we have proposed a hackathon project in this IETF meeting to provide some implementation results.
O
We used gRPC-based telemetry to collect data from different access points, the network devices in our campus, and we evaluated the following data collection methods: high-frequency periodic telemetry, low-frequency periodic telemetry, and adaptive-frequency telemetry. For each data collection method, two cases were evaluated.
O
One is to report the RSSI values, so as to detect in real time whether roaming events occur across different APs, and the other is to stream the bytes sent from the AP uplink, so as to detect potential congestion. We also used the ELK stack to collect, analyze, filter and visualize the data. So in this case we tried to...
O
So our proposal is to configure the adaptive policy into the server and allow the server to switch between different intervals automatically. We have compared these two different ways, and the results show that for the first way, with the condition evaluated by the subscriber, the low-frequency data collection at a 30-second interval prevents the subscriber from capturing roaming events, which last only two to four seconds, because the subscriber can only use the data collected every 30 seconds as the input of the condition evaluation, and it will realize it should increase the frequency only when it happens to receive an RSSI value which is less than the threshold.
O
But for the second, adaptive way, the server will evaluate the condition at the end of each high-frequency interval, which is two seconds in this case. So it will check whether it needs to switch to another frequency every two seconds, even during the low-frequency streaming. So no important events or data will be missed in our proposed way.
O
Another case we tried is to stream the bytes sent from the AP uplink, so as to detect possible uplink congestion. Similarly, adaptive-frequency data collection is able to capture as many traffic bytes as possible, while when the monitored operational data is normal, the frequency can be decreased.
O
The subscriber can only use the data collected every 30 seconds as the input for the condition evaluation, and such a time interval is very likely to cause the loss of important data. And the third point is that when tens of thousands of network devices need to be managed, the frequent modifications are prone to errors.
O
Then people may ask about the proposed server-driven method: how often does the server check if the period should change?
O
I think it sounds more like an implementation decision, because the more frequently the condition evaluations are performed, the faster the server can react to the network condition change. But it's recommended to be at the end of each high-frequency streaming update period, which means that the data is reported at a high frequency only when the network suffers, but the server should periodically check the condition change at this high frequency, even when it is currently reporting data to the collector at a low frequency. And this reduces the frequency of the evaluation.
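The server-driven loop being described, where the condition is evaluated at every high-frequency tick but updates are pushed at the low period while the network looks healthy, can be sketched as below. This is an illustration under assumed parameters (an RSSI threshold and 2-second/30-second periods); the function and its exact semantics are not taken from the draft.

```python
# Illustrative sketch (not the draft's YANG model) of server-driven adaptive
# push: the threshold condition is checked every high-frequency tick, and a
# sample is pushed either because the network is degraded (high frequency)
# or because the low-frequency reporting period has elapsed.

def adaptive_push(samples, threshold=-65, high_period=2, low_period=30):
    """Return the (time, value) pairs that would be pushed to the subscriber.

    `samples` holds one measured value (e.g. RSSI in dBm) per
    high-frequency tick.
    """
    pushed = []
    next_push = 0
    for i, value in enumerate(samples):
        t = i * high_period
        degraded = value < threshold        # condition evaluated every tick
        if degraded or t >= next_push:
            pushed.append((t, value))
            # While degraded, push every high_period; otherwise wait
            # low_period before the next routine update.
            next_push = t + (high_period if degraded else low_period)
    return pushed
```

With a short degraded burst in otherwise healthy samples, every burst sample is pushed at the high frequency while healthy periods produce only sparse updates.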
O
So we think that, in order to reduce the complexity, we have recommended that implementers use the comparison of a specific data object value against a threshold. But for a server with more powerful capabilities to handle a complex XPath syntax, we think that it's okay to use mathematical operations in the expression; the XPath expression syntax supports that.
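As a concrete illustration of the kind of simple threshold condition being recommended, a subscription configuration might look roughly like the fragment below. This is hypothetical: the element names and the centisecond period units are invented for the example and do not match the draft's actual YANG module.

```xml
<!-- Hypothetical adaptive-subscription fragment; element names invented -->
<adaptive-subscription>
  <datastore-xpath-filter>
    /access-points/ap[name='ap1']/radio/rssi
  </datastore-xpath-filter>
  <!-- Simple comparison of one data object against a threshold -->
  <adaptive-period>
    <condition>/access-points/ap[name='ap1']/radio/rssi &lt; -65</condition>
    <period>200</period>   <!-- 2 s while the signal is degraded -->
  </adaptive-period>
  <adaptive-period>
    <condition>/access-points/ap[name='ap1']/radio/rssi &gt;= -65</condition>
    <period>3000</period>  <!-- 30 s while the signal is healthy -->
  </adaptive-period>
</adaptive-subscription>
```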
D
It just feels to me like it's a very computationally expensive solution to this problem. So I think one of the things that I'd be interested in, and maybe the other presentation on the sort of use cases, if that's related to this, might be interesting, is: how many cases do we actually have of solving this generically? Are there a few point cases that we're trying to solve, and would it be better to build solutions and concrete data models to solve those particular cases, rather than building a generic infrastructure here?
D
And then you can monitor that at a high frequency and leave it to the server to respond. So I still have questions as to what data you're trying to capture. Do you actually need all this data at a high rate, or are you just trying to pick out when these particular events are occurring, more generally? And hence, if you had a summarized value in the data model, that might give you the same solution with far less complexity.
O
Okay, well, for your first question, I think I have the slide... yeah. For the adaptive subscription, the condition can be evaluated at the subscriber side and also at the server side. If we want to leave the complexity with the subscriber, then maybe we have some issues collecting enough data to identify the important events and data. And for the second way, where the adaptive subscription evaluates the condition at the server side, we can collect enough data, but yes, we do add some complexity to the server.
O
We don't really have performance statistics about the network device's load to implement this adaptive subscription, but the results show that it seems okay, because the server only adds some if-then logical programming code for the adaptive subscription, and there wasn't as much load as we expected. And also, about the streamed data in this case: the evaluated data is the RSSI signal data, which is defined as a leaf data node in the module in the draft's appendix section.
O
So the RSSI is the streamed data, and it's also the evaluated data; that is what the filtering is about here. And I think that, for adaptive subscription, the RSSI signal data streaming to identify the terminal devices' roaming events is maybe the most solid use case for this draft. I also know that there is another draft with a dedicated discussion about the use case and the problem statement of adaptive subscription, but we can see...
O
I think that it seems to focus more on the data collection, the traffic data collection, but our focus is on the YANG-Push mechanism, which is to stream data from a particular YANG datastore. But we can see whether the use cases in that draft can fit some of the use cases in this one; we can see that.
D
So, just two concrete examples I could think of. One is for interface statistics. Often, and I don't know if the IETF YANG model has this, I don't think it does, but other vendors have this: you have a rate calculation that's telling you what the current load is on that interface, and that decays over time. So it's giving you sort of a point load value, and you could potentially sample that at a reasonable frequency. And another case I can think of is to do with interface flaps.
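The decaying load value described here is commonly implemented as an exponentially weighted moving average of the byte rate, so that one sampled leaf summarizes recent load instead of requiring high-frequency counter collection. The sketch below is an illustration of that idea, not an existing IETF YANG leaf; the smoothing factor is an assumed parameter.

```python
# Sketch of a decaying interface-load value: an exponentially weighted
# moving average (EWMA) of the byte rate, computed from periodic counter
# samples. One smoothed leaf can then be polled at a modest frequency.

def ewma_rate(byte_counts, interval_s, alpha=0.3):
    """Return the EWMA'd bits-per-second load after each counter sample."""
    rates = []
    smoothed = 0.0
    prev = byte_counts[0]
    for count in byte_counts[1:]:
        instant = (count - prev) * 8 / interval_s   # instantaneous bps
        smoothed = alpha * instant + (1 - alpha) * smoothed
        rates.append(smoothed)
        prev = count
    return rates
```

A larger `alpha` tracks bursts more closely; a smaller one decays more slowly, which is the trade-off any such summarized leaf has to pick.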
D
If you monitor the link-layer flaps at the very low hardware layers, you may be getting thousands of interrupts per second coming through, but you don't want to notify all of those. Instead, you could notify a counter of how many flaps have occurred at each point in time. So you can still spot when these flaps are occurring, and you can still spot whether they are occurring at high frequency or not, but you don't necessarily have to notify every single flap that occurs, so you're reducing the amount of data you're pushing off the device.
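The flap-counter suggestion can be sketched as simple per-period bucketing: the device keeps a count per reporting period, and the collector recovers the flap rate from those counts instead of receiving one notification per interrupt. This is a hypothetical illustration, not an existing YANG model.

```python
# Sketch of summarizing link-layer flaps: instead of notifying every flap
# interrupt, publish one (period, count) record per reporting interval.

from collections import Counter

def summarize_flaps(flap_timestamps, period_s):
    """Map each reporting period index to the number of flaps seen in it."""
    buckets = Counter(int(t // period_s) for t in flap_timestamps)
    # One small record per interval replaces thousands of individual
    # flap notifications, while still revealing high-frequency episodes.
    return dict(sorted(buckets.items()))
```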
A
Okay, if there are no other comments... certainly there is interest in the use case draft. I think we would have really liked to hear that today, but considering the author is not here to talk about it, one other way that actually...
B
I'm sorry, I did receive a private chat message from Xinwu, who says that the presenter, Xiaomin, will be presenting, or joining shortly to present, imminently, I'm hoping. So this may still happen.
P
Yeah, this is actually... I do see him online, and I did talk with him, and he will present.
A
All right, Xiaomin, if you see your name on the top... okay, you see that? Yes, okay.
B
Over to you. You can advance the slides with the left or right mouse buttons.
R
Okay, let me introduce this draft: the problem statement and use cases of adaptive traffic data collection. Next slide, please.
R
Motivation and objective. The motivation is that in a carrier network, providing real-time traffic visibility helps network operators quickly and accurately locate network congestion and packet loss, and make timely path adjustments for deterministic services to avoid congestion. At the same time, sampling at millisecond intervals will generate a constantly large amount of data, which might claim too much transport bandwidth and resources, and will load the servers for data collection, storage and analysis.
R
Okay, problem statement. First, to observe the evolution of network traffic, operators have obtained traffic visibility from the NMS using interface counter statistics, which cannot reflect short-lived events; the counter statistics are used for traffic-based billing, and legacy SNMP is widely employed to collect network traffic at five-minute intervals.
R
Second, in spite of low link usage on a five-minute-average utilization basis, complaints have still been received about 4K video delivery applications, because of their sensitivity to delay and packet loss: micro-bursts can occur well below the average level. Operators carry such delay-sensitive workloads in networks such as the IP RAN, metro networks and backbone networks, and in the data centers; with millisecond-level telemetry, we can capture the micro-burst traffic.
R
Micro-burst phenomena occurring in the network will cause port congestion and packet loss, which will seriously affect latency-sensitive applications. The ability to detect micro-burst traffic on an interface lets network operators quickly and accurately locate network congestion and packet loss, and make timely path adjustments for deterministic-delay services, in order to avoid the congested ports and links. Triggered by events such as packet loss or queue depth beyond a threshold, the sampling interval must be tuned to milliseconds to capture a micro-burst on an interface.
R
Real-time traffic visibility based on adaptive traffic collection techniques can accurately detect micro-burst congestion and quickly share the instantaneous congestion state of an interface, and with millisecond-level real-time traffic visibility, an automatic optimization tool or AI can make timely path adjustments for delay-sensitive flows.
R
An adaptive approach can be used, based on the network condition, to dynamically adjust the sampling rate. In a normal network state, a low sampling rate is enough to roughly reflect network performance; in case of network congestion, the sampling rate is dynamically adjusted to a very high level, so as to acquire real-time measurement data such as latency data and packet loss.
A
Please.
H
So, thank you for coming to the IETF to express your problems. Now, something: when I read the slides, it seems it's mainly about data-plane monitoring, right? And sometimes, I mean, it would be easier for me to know if we speak about the data plane, and then, like, in-band OAM, like what you described in the last use case, or if we speak about flows, or if you speak about something else, as opposed to, you know, just telemetry.
B
All right, thank you. I don't believe there are any other comments. So, we did have to take these last two presentations out of order.
B
The idea was that this presentation was going to provide the use cases that would help justify the potential adoption of the adaptive subscription draft, which was the previous presentation. But now that we have had this presentation... I know there were some objections that were raised previously on list, by Andy and also Per, with regard to concerns with the adaptive subscription draft, and Andy's not online.
B
I don't know... I think I see... if the use case or other... oh, Jan is joining the queue. Perfect, go ahead, Jan.
N
Yeah, Per is not able to speak at this moment, but we are working together. So I think it's perhaps better to discuss the details on the list, if that's possible, but I think many of the concerns we have are still relevant.
B
Okay, all right, thank you. So I do want to put a show-of-hands poll to the working group; it's on screen now.
B
All right, I'm going to end the poll now, because it's pretty much good enough. We just got 24 participants in that poll. It's about half and half in terms of whether or not the working group should...
B
You know, continue looking at adopting this as a Proposed Standard versus an experimental... sorry, experimental draft. So again, this is not an adoption call on either front, but it looks like we'll hold the door open longer for the possibility of it being Proposed Standard. I think, as Jan just mentioned, more discussion on list needs to occur to try to resolve the concerns of those that have raised them, and I think that's it for that presentation.