From YouTube: IETF100-NETCONF-20171116-1550
Description
NETCONF meeting session at IETF100
2017/11/16 1550
https://datatracker.ietf.org/meeting/100/proceedings/
C
Before we get started, some administrative things: there is a red box next to the mic. You probably would want to stay within that box if you want to be viewed. When you go to the mic, please make sure you state your name slowly and clearly for the people who are going to be the note takers, so that they can record it.
D
Is that better? Yeah, okay. So I'll be presenting on these three drafts as a set, because they're all closely related, and I'll actually spend more of the time on the YANG library one; that's the one that has the more interesting questions. I'm presenting on behalf of all the NMDA authors. Could we have the next slide? Kent... Kent? Thanks.
D
So, effectively, this is just a reminder of what these three drafts are trying to cover. They are aimed at producing updates to NETCONF, RESTCONF and YANG library to support the NMDA datastore architecture that's going through NETMOD at the moment. The aim of these protocol updates is minimal extensions only, so we're trying very hard not to add anything in beyond what is required to support NMDA, to make it easier to implement and to get the drafts through more quickly. The functionality that we're adding to both NETCONF and RESTCONF is equivalent.
D
So the aim is that the drafts should look quite similar; I think it's a little more effort to make the text in the two drafts more closely aligned. The YANG library changes are more interesting and more extensive, in terms of how the draft has currently been described, and we've had further discussion beyond that point.
D
I'll also be presenting what is a proposed change to that, and another idea of how to incorporate it to cover schema mount better. As I said, we're aiming to address these completely and quickly, and we're aiming for working group last call before the next IETF. So hopefully, once these issues are closed and we align the text, we should be done. So, a quick summary of what changes have been made: we've clarified the origin metadata encoding and aligned the text between the two drafts.
D
This encoding is a fairly simple one, where you report the origin metadata status for each node; but if the status is the same as the parent's, then you don't need to report it. So the intention is that this should be fairly compact and easy to include.
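The parent-inheritance rule just described can be sketched in a few lines (an illustrative sketch in Python, not text or code from the drafts; the nested-dict tree shape is my assumption for illustration):

```python
# Illustrative sketch: prune redundant "origin" metadata so a node's
# origin is only reported when it differs from its parent's origin,
# mirroring the compact encoding described for NMDA.

def prune_origins(node, parent_origin=None):
    """Return a copy of `node` keeping 'origin' only where it differs
    from the inherited (parent) origin. A node is a dict of the form
    {"origin": str, "children": {name: node}} (assumed shape)."""
    out = {}
    origin = node.get("origin", parent_origin)
    if origin != parent_origin:
        out["origin"] = origin
    children = {
        name: prune_origins(child, origin)
        for name, child in node.get("children", {}).items()
    }
    if children:
        out["children"] = children
    return out

tree = {
    "origin": "intended",
    "children": {
        "mtu": {"origin": "intended"},         # same as parent: omitted
        "oper-status": {"origin": "learned"},  # differs: reported
    },
}
pruned = prune_origins(tree)
```

In the pruned result, only the root and the `oper-status` node carry an explicit origin, which is what keeps the encoding compact in the common case.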
We've clarified the test for NMDA: the basic way to check whether you're talking to an NMDA server is achieved by querying the new YANG library, using the new get-data operation, on the operational state datastore. So that effectively validates that the device is supporting it.
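That test amounts to building one RPC. A hedged sketch of constructing it follows (the namespace URI for the NMDA operations is a placeholder here, not necessarily the registered one, and the subtree filter is elided):

```python
# Sketch: build a <get-data> RPC targeting <operational>, as described
# for probing whether a server supports NMDA by reading the new YANG
# library from the operational state datastore.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
NMDA = "urn:example:nmda-get-data"  # placeholder namespace, assumption

rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
get_data = ET.SubElement(rpc, f"{{{NMDA}}}get-data")
ET.SubElement(get_data, f"{{{NMDA}}}datastore").text = "ds:operational"
# A subtree filter selecting only the YANG library data would go here.

xml_text = ET.tostring(rpc, encoding="unicode")
```

If the server answers with YANG library contents from `<operational>`, the client can conclude it is talking to an NMDA-capable server, which is the pragmatic capability test being described.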
D
The new RPC is also supporting the operational state datastore, and it gives you back the information that you require. Andy has expressed that he would quite like there to be an explicit capability for this as well, but it would effectively be the same as doing this operation. So the authors are not quite clear whether there's any great benefit of doing that, whether you actually need an extra capability as well, because the pragmatic way of testing for this is to do that operation. So, if anyone's got any opinions on that,
D
please speak now or forever hold your peace. The same exists in RESTCONF. We clarified that the with-defaults extension does not apply to operational: the operational datastore has a different way of handling defaults. The default values are regarded as being for informational use only, and the operational state will still contain all the default values that are in use by the server. So in the NETCONF draft there's this definition of the data that is "in use", and that effectively replaces the use of with-defaults. And then finally, we've clarified:
D
For the next steps, there isn't actually that much to do here: align the structure and descriptive text between the two protocol drafts. This is really just a stylistic thing; it's not changing the content. We just really need to make sure that we've got the same sections in both drafts, called the same things. I think the NETCONF draft is missing a conformance section at the moment, but generally, other than that,
D
there are these two burning issues to agree to close on: the conformance questions being discussed, and the structure of the YANG library. And then, hopefully, we're at working group last call. So, on to the first of the two issues, and they're quite closely related really: conformance. The NMDA architecture document states that all conventional datastores must have the same schema, and that the schema for the operational datastore must be a superset of all configuration datastores, but data nodes may be omitted through deviation.
D
So we must strike a balance here between not making this too complicated, but actually ensuring that it accurately reflects what the devices will do. And a key consideration here is: we want to be able to relate between the data that's in running or intended, and the data that's in operational, for config true nodes. So on the next slide... actually, just one second.
D
Next slide, please; this one's my next slide. Yeah. So this is saying the same thing, but in a diagrammatic format. What I'm trying to do is show you the schema between the datastores. Here on the left, you have the conventional datastores: running, intended, candidate and startup. And on the right, you have operational. The blue boxes represent the regular schema nodes that are represented by the YANG modules, and then, in orange,
D
I've done a device-level, or server-level, deviation. So that'd be a normal deviation where you're modifying the type, or you're adding properties, adding values into the value space; and those deviations apply everywhere. So that's the requirement of NMDA. Then I've also shown here, in the empty boxes, the case where you're allowed to have datastore-specific deviations. In this case, the only thing you're allowed to do, either through a deviation or in the case of using features, is to remove some of those nodes.
D
So in the diagram on the left, for the intended datastore, you can see that the last node is obviously not implemented in that datastore, or in that datastore's schema; and the one on the right, operational, shows you that the middle node is not implemented in the schema for that datastore. But the idea is that, for all the ones that are here in blue and orange, you can take the contents of the running and intended datastores and you can check whether those exist in operational.
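That relatability property can be sketched as a simple containment check (illustrative only; modeling datastores as flat path-to-value dicts is my assumption, not anything from the drafts):

```python
# Sketch: for config-true nodes, values applied in <intended> should be
# relatable to values reported in <operational>. Here each datastore is
# modeled as a flat {path: value} dict for illustration.

def missing_in_operational(intended, operational):
    """Paths configured in intended that operational does not report."""
    return sorted(p for p in intended if p not in operational)

intended = {"/interfaces/eth0/mtu": 1500,
            "/interfaces/eth0/enabled": True}
operational = {"/interfaces/eth0/mtu": 1500,
               "/interfaces/eth0/enabled": True,
               "/interfaces/eth0/oper-status": "up"}  # extra state is fine
gaps = missing_in_operational(intended, operational)
```

An empty `gaps` result reflects the superset requirement: operational may carry extra state nodes, but everything configured should be checkable against it.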
D
So the authors' opinion is that the current conformance is correct and it's required. I think we are still debating whether there should be more clarification in the draft; that may help with the questions that Andy's been asking. There has been some expression that it would be nice if it were possible to have a simpler conformance, in particular using the same schema for all the datastores, with no per-datastore features or deviations; but we think that there are reasonable use cases for both of these.
D
The per-datastore features are useful when we're migrating. There is, I think in RFC 8022, a routing-config or router-id feature; and when you align the config and state trees together into a single tree, you want to retain the capability of the device: you have to choose whether or not you're allowed to configure the router ID, or whether it is purely operational data. And so allowing that feature keyword to express "just in operational, not config" allows that capability. Then, in terms of deviations,
D
we think that they're important, partly as a sort of migration path: when people start implementing this, it's not necessarily the case that everyone from day one will be able to implement all of the operational state for the configuration. So we're expecting that there may be some deviations in that case for an interim period, while vendors catch up with the work that's required here. So that's what's expected there. Longer term, it's desirable if there aren't any of these differences; it's not meant to be a way of actually modifying schema.
D
It's just meant to be, as I said, a migration path. Next slide, thank you. So all this boils down to what the YANG library structure looks like, and so we've been trying to get the structure right; we've had several goes. The aim is to cover the required functionality, obviously, with the per-datastore features and deviations. We also wanted it to be as simple as possible, and, in the most recent revision,
D
that's not in the draft, but it's been discussed on the NETCONF list and it's included here, we want to try and optimize it for the mainline case. So in the case where you don't have any per-datastore deviations, and the case where you don't have per-datastore features, keep the output very simple.
D
So what I'm going to do here is go through, effectively, the iterations of the four versions of the YANG library structure. The last one I've included is one that's been extended to effectively support the schema mount requirements as well, as I say. So the following four slides: first of all, I'll start with the current YANG library, the RFC 7895 tree diagram, just to see where I'm starting from. Then I present the current YANG library bis.
D
So that's the current bis version's tree diagram; that covers the required functionality but is potentially a bit more complicated. Then, number three, I propose a simplified version of that, which we think is better. And then finally, that simplified version tweaked to extend it very slightly, to mean that it can be reused for schema mount.
D
So, this is the current YANG library version. The bits in red that I'm pointing out are the fact that currently it's called "module-state", so we're going to rename that, dropping the "state" part and actually just making it more generic, as the new version of YANG library won't just cover modules; it will also list the datastores.
D
One other change that's sort of coming through in these various things at the moment is that the key for the modules in the current YANG library is name and revision, and because of that, the revision is actually a union, because you may or may not have a revision for a module. So if you don't have a revision for a module, there's a special case of using an empty string to represent that. And that structure is because the same module list has been used to represent both implemented and import-only modules.
D
So this is the current version in the YANG library bis draft, and effectively this is the one that we've been using for a while. The key change here, effectively, is that the module key now is an ID, a string, rather than being name and revision. And that's because you may require, with this structure, separate module entries if you have per-datastore deviations and per-datastore features,
D
if you needed it. In terms of that set of modules, you list under here the entire set of modules supported on the device, and then you have a module-set, in effect a list of module sets, that builds subsets of those modules into particular schemas. So, where you need a separate schema for each datastore, you can build one or more module sets using that; and then, for each datastore, you report which module set is associated with it.
D
So in the mainline case, you'd expect to have one list of modules; you define one module set that has all the modules that are being implemented, and then the datastores would reference that single module set. So you'd have entries in the datastore list for the conventional startup, candidate and running; they would all point to the same module set, and so would operational. If you had deviations that differed between the datastores, then all your modules are listed, with multiple entries potentially for the same module. You'd then have two module sets: you'd have one for,
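The module-set indirection just described can be sketched as a small data model (an illustrative Python sketch; the names and shape are informal stand-ins, not the draft's actual YANG structure):

```python
# Sketch: servers define one or more module sets, and each datastore
# entry references the module set that forms its schema. In the
# mainline case all conventional datastores share one set.

yang_library = {
    "module-set": {
        "common": ["ietf-interfaces", "ietf-ip"],
        "operational-only": ["ietf-interfaces", "ietf-ip",
                             "ietf-hardware"],  # extra state module
    },
    "datastore": {
        "ds:running": "common",
        "ds:intended": "common",
        "ds:operational": "operational-only",
    },
}

def schema_for(datastore):
    """Resolve a datastore's schema via its module-set reference."""
    ref = yang_library["datastore"][datastore]
    return yang_library["module-set"][ref]
```

The point of the indirection is that the common case stays compact (every datastore names the same set), while a server with per-datastore differences just defines a second set, as the speaker goes on to describe.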
D
maybe, the conventional datastores, and a separate one for operational; and then the datastores would actually reference those two separate module sets. So this covers the requirements, but potentially is more complex for clients to deal with. So, for the proposed and simplified version of this, there are two simplifications happening here. One is that I've split out the list of modules that you implement from the list of modules that are import-only. Given the server can only implement a single revision of a module,
D
then that list can now be keyed by the module name, and the revision becomes simplified because it's optional, as it is in the schema, and the data type can then be simplified as well. And separately from that, you also have the import-only module list. Now, because you can import multiple revisions of different modules, you have to keep the name-and-revision key there, effectively. But I think,
D
actually, it feels to me that the more interesting list of modules that is actually being reported here is the list of implemented modules; I think that's what the device cares about, more so than the imported list. Then, again, by simplifying the key of the modules list, it allows us to have
D
a way to say that, for a given module, you're choosing not to implement it in some datastores. So in the default case, you'd expect the "not-implemented-in" leaf-list to be empty, if you've implemented it in all datastores; and again, if you had a feature that was enabled in all datastores, then you would again have an empty "not-implemented-in" leaf-list.
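The "empty in the mainline case" behavior can be sketched directly (illustrative Python; the leaf name follows the slide informally and the datastore set is my assumption):

```python
# Sketch: each implemented module carries a "not-implemented-in"
# leaf-list; it stays empty when the module is implemented in every
# datastore, so the common case remains compact.

ALL_DATASTORES = {"running", "intended", "operational"}

def not_implemented_in(implemented_in):
    """Datastores where the module is absent; empty when the module is
    implemented everywhere."""
    return sorted(ALL_DATASTORES - set(implemented_in))

everywhere = not_implemented_in(["running", "intended", "operational"])
oper_only = not_implemented_in(["operational"])
```

So a module implemented everywhere contributes no extra data at all, while an operational-only module lists just the datastores it is missing from.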
D
And I think that's the trade-off, effectively: if you have one list here, then there is a compromise. The one change that I think we might consider making on top of this is, at the moment you've got this single "not-implemented-in" list; I wonder whether that should be a choice statement that either lists the datastores it is implemented in, or not implemented in. But actually,
F
I think that, in the version that you are going to show next, where you have a list of schemas, it would be easily possible to have one schema that is the NMDA one, which is what we have here, and then have the possibility to define other schemas alongside it, that can be assigned somehow to other datastores that appear in this list. Okay.
G
Speaking as a contributor: have you been thinking about a way to export from a server what the full list of supportable YANG models is? I mean by that: if you would enable a license, if you would have all the line cards, etc. I don't specifically want to have it in there, but is there a way to group things, so that you could, maybe in a different language or somewhere, tell "this device does support this"? Not now, but assuming you have the right license, assuming you do the right thing, you would have way more.
B
Before you move to the next slide: Kent Watsen, as a contributor. Going back to yesterday's discussion in the MUD working group, I'm wondering if this would be extensible to supporting the revision handling, the semver discussions. I see here you have the revision as a key field, and also you said earlier that currently there's an assumption that you can only have a single revision implemented at a time. Both of those are no longer true, so I wouldn't want to hold that up.
D
So this is the final proposed extension to this. The text in purple here is an augmentation, so that wouldn't be directly in YANG library; that would be an augmentation by schema mount. The only change that's been made here is that, rather than having a single list of modules, you actually have separate lists: at the top level you define a list of schemas, and each schema has a set of modules as part of that schema.
D
So when Benoit raised this issue, a question about whether you wanted to have different licensing or something like that: certainly you could imagine that you could define multiple schemas in terms of what licenses were turned on; you could say this is the set of modules that would be represented in that schema, effectively. And the plan here is that, in the case that you have just a normal device that only has one schema, so you're not using schema mount,
D
then the expectation here is you'd have one well-known schema; you might call it something like "primary" or "default". So you'd have a well-defined name here for what the default schema would be, and for simple devices, or normal devices, you'd expect there to only be one. The expectation is that you get more schemas later, and the one that I was considering here in particular was when schema mount comes along; this effectively should remove most of the requirement of what's in the schema mount YANG module.
F
I would certainly support this last version, not only because it supports schema mount, and it is basically aligned with what I proposed yesterday during the NETMOD session, but I think also it's really more future-proof. Because, you know, in YANG, if we have a container, then it's really hard to make it a list if you want to do that later.
F
So even if we don't do anything else, this list, even if it has just one entry, is still future-proof, and we can use it for different sets of modules due to licenses, or for support of other datastores, or of course for support of schema mount. So I think this really is a very minor change and shouldn't cost much to make. Okay.
B
First and foremost, we reverted back to the device always sending its IDevID certificate to the bootstrap server, even when it doesn't trust the bootstrap server. The reason for doing this was that it was felt that, from a security perspective, it was better for the device to give its identity to a potentially bad bootstrap server than it would be for the bootstrap server to give device-specific configuration to a potentially spoofed device.
B
This is called out in the security considerations section, so I'm sure it will be looked at carefully by SecDir when they do their review. Secondly, we moved the content: it was previously defined as a hierarchy of data-accessible protocol nodes, sorry, protocol-accessible nodes, in the tree, and that got moved into an RPC called get-bootstrap-data.
B
In that RPC we added a parameter, a flag called "untrusted-connection". This is an ability for the device to alert the bootstrap server that it doesn't trust the bootstrap server; that it is in the scenario where it's blindly accepting the bootstrap server's TLS server certificate, and therefore the only way the device would accept, or be able to process, the data the bootstrap server returned was for that data either to be signed, or for it to be unsigned redirect information. We always have this provision that redirect information does not have to be signed.
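The acceptance rule just described boils down to a small predicate. Here is my paraphrase of that behavior as a sketch (illustrative only, not the draft's normative text; the type names are informal):

```python
# Sketch: on an untrusted connection, a zero-touch device can only use
# bootstrap responses that are signed, or unsigned *redirect*
# information (which never needs to be signed).

def device_may_use(data_type, signed, trusted_connection):
    """Whether the device may process data from the bootstrap server."""
    if trusted_connection or signed:
        return True            # trusted channel or verifiable signature
    return data_type == "redirect-information"  # unsigned redirects OK

rejected = device_may_use("onboarding-information",
                          signed=False, trusted_connection=False)
```

Unsigned onboarding data over an untrusted connection is the one combination that must be rejected, which is exactly what the flag lets the bootstrap server anticipate.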
B
We did add, per Martin's review, another module called the ietf-zerotouch-device module. It is a standards-based module, so it can be implemented, but it's more exemplary, in that it provides a mechanism for devices to have a flag called "enabled", for whether or not the zero-touch service is enabled by default.
B
So, the primary couple of last call comments that we received: first was that the redirect information needs to support returning partial certificate chains, rather than just a single root certificate, to support deployments using public CAs. This actually came from an operator that was using a public CA, not necessarily the one on the screen; and so from that public CA they were issued a server certificate, but through the mechanisms they could only put in the public CA certificate, you know, the root certificate. And the reason why is because of the tools that were being used.
B
Not all tools support the notion of partial certificate chain verification; OpenSSL supports it, but not all tools do. So what we need to be able to support is for the server, when it's returning the redirect information, to provide that partial chain. This would be the red box at the top, the trust anchor, if you will; we're crossing out the "binary".
B
The binary actually represents an X.509 certificate, so it's becoming a PKCS#7, which actually just contains the partial chain of certificates. And then, of course, the blue box below is what happens when you're making the TLS connection to the bootstrap server and it's doing the handshake: the certificate chain in the blue box would be what the TLS server would return, and therefore the entire chain can be resolved and complete certificate verification can be performed.
B
Okay, so this is from the last comment. The only comment I want to make is, as you can see here, we're using a type called PKCS#7. In the next presentation, on the keystore draft, you'll see that there's a suggestion that we might want to define another YANG module called something along the lines of ietf-crypto-types.
B
Well, okay, so the DHCP option currently specifies allowable URI contents, and error handling that can be improved. The proposal is to remove the "MUST" in the URI description, so that there's no implication of server-side processing. What we've been told is that whenever DHCP options go for review, they inevitably get kicked back if there's any implication that there's a requirement for server-side processing; so we wanted to eliminate that outright, so we don't have that problem. And then, to accommodate that,
B
we then added additional language to cover how clients handle errors when processing lists of URIs. So, you know, it kind of goes over the fact that the URI has to be of a certain structure, and if it's not of that structure, then parts of it can be thrown away; and for other parts, if they're there, then it has to skip over that entire URI, as if it weren't present in the first place. So again, that full proposal was sent to the list. We view this, and actually even the previous comment, as somewhat editorial comments.
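The skip-the-whole-URI rule can be sketched as client-side filtering (an illustrative sketch; the https-and-host check is my stand-in for the option's "certain structure", not the actual rule text):

```python
# Sketch: when parsing the list of bootstrap-server URIs from the DHCP
# option, a URI that does not match the expected structure is skipped
# entirely, as if it had not been present.
from urllib.parse import urlparse

def usable_uris(uris):
    keep = []
    for uri in uris:
        parsed = urlparse(uri)
        if parsed.scheme == "https" and parsed.netloc:
            keep.append(uri)   # well-formed enough: keep it
        # otherwise: skip the entire URI, as if it weren't present
    return keep

uris = usable_uris(["https://boot.example.com/ztp", "not a uri"])
```

Pushing the validation onto the client this way is what lets the option text avoid any implication of server-side processing.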
B
So here we are on the final stretch. All last call comments have been addressed on list. It seems that a simple draft update is all that's needed now before being forwarded to the IESG for consideration. And even if we wanted to adopt this ietf-crypto-types module, if we wanted to introduce a normative reference to that, it's just the adding of an import statement and referencing that type, rather than a type that's embedded into the module itself. So that's effectively an editorial change of sorts.
C
So, a chair comment coming to you on this document: it has gone through multiple rounds of discussions and has cumulatively picked up words of support to go into last call; of course, every time there were a few more comments to be addressed. Do you believe at this point that the draft is ready, with all the last set of comments and questions that people might have?
B
I do. Again, I just believe that there are these final editorial-level changes that could be made, effectively, as last call updates. I don't think they're technical. You know, they are a little bit technical, but not to the threshold of needing to trigger another last call, I don't think. But, I guess, I believe we're ready with this draft. Okay.
B
Okay, the next set of drafts: the keystore and friends drafts; so, one presentation to cover them all. To recap: we had a last call; it was unsuccessful. There was only one review, from Juergen, thanks Juergen; it was a doozy, and in fact it was calling for a major restructuring of the modules, which we'll get into in just a little bit here. In fact, I think it was because of his review calling for that major restructuring that there weren't any other reviews anyway.
B
The groupings themselves are still defined inside this keystore module, and then are being used by both the TLS client/server and the SSH client/server modules. And actually this does make sense, I believe, because, as was pointed out, if you actually look at implementations, no one actually has a keystore where they're storing their private keys for SSH and TLS. So while we were trying to introduce an abstraction that seemed to make sense as the ideal, it actually didn't map to reality, to current implementations.
B
So I think this is a good change. However, in italics here you can see: does it mean that we need to rename the module, since it actually no longer stores any keys? That's actually a somewhat rhetorical question, right; I mean, you can't have a module called a keystore module when there are no keys in it. And you might be wondering, well, what should we call it? Well, if you look at the next bullet point, the most significant other update is defining reusable crypto types.
B
This is what I mentioned in the previous presentation: a module, for instance, called ietf-crypto-types that would define these would be helpful. And also there was a discussion about moving algorithm identities into another module. Where would we put those identities? I mean, we wouldn't put them into a module called crypto-types, because that's, I think, only supposed to have typedefs in it, not identities. So then maybe we'd have a more generic module called something like ietf-crypto, which would have both typedefs and identities.
B
So, I mean, that's not necessarily answering the question of what the name of this module should be, but it is sort of, you know, suggesting that we should move some of this information out of this module and into others. For the SSH and TLS client/server drafts, the most significant update was the inlining of those keys; so it's using the aforementioned groupings, and also it's been updated to use typedefs wrapping leafrefs to the common keystore paths.
B
So from a readability perspective, you may or may not think this is better, but it's okay. And lastly, we removed compression algorithms; this is because Gary, my co-author, didn't feel like there was enough commonality across vendors to support the configuration of compression algorithms, and it could be added at a future date. For the NETCONF and RESTCONF client/server drafts, the most significant update is that now there are containers, in addition to groupings, for both the client and server modules.
B
So we've always had the groupings, but Juergen felt it was important to have containers as well. I've continued to maintain that I don't... I mean, the server container does make sense, I understand; but the client container I never really understood, because I feel that the application itself would probably embed the client grouping into its own configuration model, and that would be the container that's being used, not a global container. But it's okay, because when the module is being implemented, the conformance type doesn't have to be "implement".
B
No, the containers are top-level containers that are using the groupings. Finally... so, yeah, if you look at the YANG module, you'll see a container; it's like three lines: it just uses the grouping, with a description statement. Okay. And then lastly, there have been several additions of must and mandatory statements; that's a testament to the depth of the reviews. I mean, these were actually rather deep, low-level reviews.
B
The comments are all in the keystore module, so we do need to focus on that. As mentioned, there may be a need to do some additional refactoring in the keystore module; but we also need to be aware of the fact that there are a number of dependencies lining up to these drafts. I was actually surprised the other day: Benoit presented a dependency map, and it showed there's eight or nine other drafts that were depending on, mostly, the TLS client/server modules. So we do need to get these modules done soon.
I
It's going to stay low, I guess. Hi, I'm Eric Voit, and I'm presenting on behalf of a lot of the people who have been working on the subscription set of drafts. For those of you who've been around for a while, you know that there's a bunch of people who have been working in a design team, in the previous iteration when we had the first four drafts, people like Balazs and others. We also have a bunch of new people who have joined, and you're going to see some of their drafts at the end of the optional section.
I
This is really just to thank the people who've been working on this for a while. Now, this is the second hackathon; this is just doing a report out on what the hackathon results were from last weekend. At IETF 99 we did an event where we had two different vendors; we intended to interop. This time we did a second one, plus telemetry with the YANG push hackathon element, and we had participation from NETCONF and the CORE working group, and we ended up winning the best cross-working-group collaboration.
I
So it seems to be having some traction, and the fact that we're using, you know, a production system to feed information to another working group is pretty cool. Now, there's a whole bunch of drafts, and it's easy to quail under the set of them, so I'm going to try to break it down. The ones on the left are the ones I'm going to be talking about here, and then I'm going to pass it off to other people who have different drafts and they'll go through them. The majority of those are not adopted.
I
The first one is adopted, and you'll hear from each of the authors. Again, I'm speaking to give you an overview of the top five on the left. The other draft, which Kent requested a while ago, was an overview draft; if you want to get my idea of what the drafts are, see the NETCONF subscription and notification overview draft. The goal there is just to give people an easy entry to what they are, and it's intended as an entry point.
I
Even if it's not adopted, it gets people some access. Now, you've seen this slide before in earlier versions; this just adds the new adopted drafts and the functions that are supported. So this just really color-codes what's in subscribed-notifications, what's in YANG push, what's in the different transport drafts. I'm going to kind of not go in depth here, because we'll be talking about the deltas in the different slides. So, subscribed-notifications: probably the most significant stuff on the mailing list has been the changes
I
to the draft, driven by some excellent comments from many people in the working group. On subscribed-notifications, there have been a number of fairly good changes. For example, you know, we had things become features that were originally mandatory; that's useful for people over in the CORE working group, where IoT is not going to be needing that kind of stuff. We returned to some of the things that we had in earlier drafts: we had a return to a string for a stream, which was in earlier drafts.
I
We had explicit filter subtyping in our earlier drafts; we went back, and then we returned that. There were also some excellent comments that were made on RFC 5277 and its use for one-way notifications, and we clarified that as well. So there's a whole bunch of cleanup changes; the structure of the document stays fairly consistent, but now it is definitely more readable, and probably more applicable than before the review. Now, there are three issues that are open with subscribed-notifications, and we're trying to look at ways of closing them.
I
So what I'm going to try to do is, and I don't know how you would do this, like do a hum poll or just sort of raise hands, but at least address the potentials. This is not definitive, but at least it gets an idea on the three issues, and whether people in the room have a preference for one versus the other. For the most part, I don't really have a strong preference; the ones that are still open can go either way, but we've just got to figure out what we want to do as a community. So,
I
I
So the first question really is: will the transport vary for different receivers of a notification, and why would people want to do that? Well, let's say you are using HTTP 1.1 and you have an upgrade; you go to an HTTP/2 transport. Do you have to actually create a new subscription and then manually change everybody over to the new subscription, or can you just add the new transport to that receiver? So there's value in some cases in having a transport configured per receiver, rather than a single transport for the whole subscription. The current draft has that as a benefit, but it is also possible to not allow the transport to vary within a subscription. That makes for a simpler model, but it does add extra complexity on the application side. I did send a preview of this issue out to people on the list.
I
If anybody really wants to fight for the other one, please don't, at least for now. All right, second issue, and this is kind of a general one that we're the first implementer of, and I find this one fascinating. The question is: how do you represent a source VRF for a configured subscription? We're among the first people, I guess, that are going to be doing a leafref to the network instance model, where they effectively have the identifier for a VRF. But it requires that you have schema mount, and don't get me wrong, I love schema mount, but I can imagine that IoT people don't want to have an import dependency on schema mount for an optional leafref. So the question is: the current draft has a string, and you can populate it with the same value that would be used by the leafref. But do we want to actually make an explicit leafref link to a model that uses schema mount? This really is an issue which should probably be addressed with guidance from the people doing the network instance model. But the current draft says: why not just populate it with the name, which would be the same as the leafref? I'd like to see if people have opinions on this one as well. So, Rob.
D
Rob Wilton, Cisco. My preference is option 3. Effectively, I've emailed the alias on this, and I think the suggestion is: you do use a leafref, but you put it under a feature statement. And it might be good if the network instance model defined a feature for VRF capability, effectively in its own separate module. Then you have a dependency on that feature's module, but not a dependency on schema mount directly, nor on the network instance module itself.
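A rough sketch of the "leafref under a feature" option being proposed here. The module, feature, and node names are invented for illustration only and are not from any adopted draft; the path follows the general shape of the network instance model, and the comments mark the assumptions.

```yang
// Hypothetical sketch of option 3: a leafref to the network
// instance model, gated by a feature. All names are invented.
module example-subscribed-notifications {
  namespace "urn:example:subscribed-notifications";
  prefix exsn;

  import ietf-network-instance {
    prefix ni;  // assumed to provide /ni:network-instances
  }

  feature network-instance-binding {
    description
      "Advertised only by servers that implement network
       instances; servers without VRFs simply do not advertise
       this feature. Per the suggestion above, this feature
       would ideally live in a small module of its own.";
  }

  container subscription-config {
    leaf network-instance {
      if-feature network-instance-binding;
      type leafref {
        path "/ni:network-instances/ni:network-instance/ni:name";
      }
      description
        "VRF in which configured subscription traffic is sent.";
    }
  }
}
```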
D
This is a fascinating issue for how YANG models get managed in general, because we have a belief that the format of another model should be a certain way, and do we have the ability to influence how that other model is structured or not? At this point I've been assuming we don't. If we can't get the change, what do we do? And if we can get the change, we can probably adopt that; that would be great.
E
To repeat your name: my name is Jill Maui. Yeah, we faced a similar situation. Actually, in a NETCONF YANG model draft we proposed, we think the network instance model is not stable enough, so instead of using a network instance leafref, we actually use an identity to represent the VRF. So this is another option we used.
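The identity-based alternative mentioned here can be sketched as follows; everything is invented for illustration, and the point is that an identityref needs no import of the network instance model or of schema mount, at the cost of losing referential integrity.

```yang
// Hypothetical sketch of the identity-based option: the VRF is
// named via an identity rather than a leafref. Names invented.
module example-vrf-identity {
  namespace "urn:example:vrf-identity";
  prefix exvrf;

  identity network-instance-type {
    description "Base identity for network instance references.";
  }

  identity vrf-red {
    base network-instance-type;
    description "An example VRF known to this deployment.";
  }

  leaf subscription-vrf {
    type identityref {
      base network-instance-type;
    }
    description
      "Loose reference to a VRF; no referential integrity with
       the routing configuration is enforced.";
  }
}
```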
I
But there is one interesting issue here, because the routing working group has a very deep understanding of routing, and they have more reason to model it deeply, versus a management system that just wants to casually match on the VRF. So the need for the deep embedding in the routing working group will be higher than in the general case of people who say, "oh, maybe I want a VRF," and then they have the linkage.
I
If it's a must statement, then must you also import the other module that has the target? It does the same thing. I've actually almost changed the network instance model to be a string with that reference, but the ultimate issue is that you're still importing it and getting all the baggage, which many people aren't going to want.
I
It's a dependency. So I think that, if nothing else, we can take it to them and say there should be some guidance on how to do this. If everybody in the whole world has to have a feature for VRF, we're actually mandating a fairly large dependency across all the models that would reference it. All of a sudden any leafref is going to become a feature, and that's going to proliferate through our whole modeling environment. I'd just encourage them to define the feature.
B
I agree, and on what you said earlier: for the routing people's models, of course it would be comfortable and natural to have the network instance model there, whereas a controller-type application maybe wouldn't necessarily have it. So it creates maybe an unnecessary dependency. I'm sympathetic to this, but I think we probably need to take this to the list as well, to try to figure out what the impact is: how many applications reference VRFs.
F
From the Jabber room: someone asks, what is the cost of the import in running code? And Martin first says he agrees we should ask for guidance from the routing working group, and then he says: "I also don't understand the dependency issue. The fact is that there is a dependency if you need a VRF."
M
If you're building a box and you want to support network instances, you're going to have support for it, and you're going to want to be able to reference it, so you'll use the feature that has the VRF reference. And if you're building a box that doesn't use VRFs, you just won't import that; you won't implement that feature, but that feature can gate the VRF reference in your model.
M
From my standpoint, it should be a feature in yours, and you should set that pattern for modules that want to reference it. Then it makes sense that sometimes they reference VRFs and sometimes they don't. That sounds like a feature in that model; I'll salute it.
F
I'm not sure I really understand the proposal by Rob. So even if the feature is not supported, the device still needs the string, right? So it's like a choice, either/or. So maybe you would need to have a choice already in the original model, and then maybe augment this leafref into it under a different case.
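The choice-plus-augment idea raised at the mic could look roughly like this. All module and node names are invented for illustration: the base module carries only a string, and a separate companion module (shown in comments) could augment in a leafref case for servers that implement the network instance model.

```yang
// Hypothetical sketch of the choice-plus-augment idea. The base
// module needs no imports; a companion module adds the strict
// leafref case. Names are invented.
module example-subscription-base {
  namespace "urn:example:subscription-base";
  prefix exb;

  container receiver {
    choice network-instance-ref {
      case name-only {
        leaf vrf-name {
          type string;  // always available, no imports needed
        }
      }
      // A companion module could then add:
      //
      //   augment "/exb:receiver/exb:network-instance-ref" {
      //     case strict {
      //       leaf vrf {
      //         type leafref {
      //           path "/ni:network-instances/"
      //              + "ni:network-instance/ni:name";
      //         }
      //       }
      //     }
      //   }
    }
  }
}
```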
I
We could still do a leafref. The point Rob was making earlier is that if they changed their model, then we wouldn't necessarily have to worry about the schema mount import. So both approaches could be made to work. All right, we really have to close this, because we have a number of other topics, but it looks like we have the guts of an answer.
M
So I would like the augment solution if use of a feature in the model that sometimes references VRFs turns out to be a real problem. But I'm not sure I understand the problem, because it seems to me that it's a development-time compilation reference check, as opposed to actually having to write any extra code; the device doesn't really have to implement schema mount. It just has to be able to compile this feature that it's not using. Perhaps I'm missing something, which is quite possible, but it doesn't sound like a big problem.
I
All right, so it looks like we have the guts of a solution there, and we'll try to write it up. Now, there's one other topic that we actually declared rough consensus on: a lot of people chimed in on string versus integer for a subscription ID. I showed two options here; just like all the other issues, there are always four or five options, so I showed the two most different ones. One was an integer, to avoid collisions and reuse. The other, the most different, was using a string for configured subscriptions and an integer for dynamic ones. That wasn't the only way to do it; there was another model that Robert was talking about, a combination of the two, which still had some issues. In the end, I think the issue worked out fairly well on the list, even though not everybody had the same answer.
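The two most different options being compared can be sketched side by side in YANG. This is an illustration only, with invented names; it is not the structure of any of the drafts under discussion.

```yang
// Hypothetical sketch of the two subscription-id shapes discussed:
// one shared integer space, versus string names for configured
// subscriptions and server-assigned integers for dynamic ones.
module example-subscription-id {
  namespace "urn:example:subscription-id";
  prefix exid;

  // Option A: one integer identifier for every subscription.
  typedef subscription-id {
    type uint32;
  }

  // Option B: configured subscriptions keyed by name, dynamic
  // subscriptions keyed by a server-assigned integer.
  container subscriptions {
    list configured {
      key "name";
      leaf name { type string; }
    }
    list dynamic {
      config false;
      key "id";
      leaf id { type uint32; }
    }
  }
}
```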
I
All right, so moving on to YANG Push. We have version 11 out there, based on the revised datastores draft; catching up, we went ahead and integrated that model. We also had the explicit filter subtyping that we had in versions 00 to 05; we returned to that, based on comments, to get rid of the generalization we had attempted. We scrubbed the examples based on Balázs's comments. We had the recent on-change RPC that got inserted, and the multi-line-card issue that we were originally putting in there.
O
I mean a state version, like we do in the appendix: will there be a state version of the model as part of the appendix? I think that's what we kind of decided on when we were trying to bring this out to the community, because I know there are going to be some modules, some implementations, some solutions, that will still have the separate state trees for a while, and so I just wanted to make sure that's really the plan.
B
It's not that they won't have it; it's just that we have some flexibility as to whether or not we will do it. I think the guidelines said it's based on market demand, or something along those lines. So if it turns out that this is a module or feature that needs to be implemented on non-NMDA-compliant servers, then that would be the reason for us to go ahead and produce the separate state tree, but I think, generally speaking, we should defer.
O
I hope so, because, particularly, these ones are getting kind of close, right? We certainly would want to adopt the feature, but sometimes it's going to be a little bit longer before implementations can get onto NMDA, because they've got to change their entire solutions. So my preference would be that you would create state versions for a period of time, as you said, as the market deals with it.
I
That was one of the benefits of actually doing the conversion late: because we already had the non-NMDA version, by having it there we have a choice between going down this path versus doing a shift and then going back and forth, so we'll have something that's pretty close. Plus, we have implementations out there; there are several open-source implementations of this, it's already documented, there's production code. So we still have it, but at least there's no process requirement I'm committing to here that we will have both.
I
That makes life easy. All right, so NETCONF event notifications: there's a big set of changes here. Basically, I always cared more about HTTP, but I had to pick up NETCONF in order to bring this draft up to last-call quality. So we did a whole bunch of changes, moving the text around and putting all the normative text up front, in sections 2 through 4.
I
We removed the JSON, we put in the call home text, and we moved all the examples to an appendix, including scrubbing them and learning how that works for NETCONF. So I think there are a lot of changes and a lot of fixes since the last version. We've done our best cut; comments are very appreciated and welcome, for sure. Now, I guess we can take at least a status check on the email list, where we had said that these three drafts would go forward.
K
(Inaudible) from Nokia. Just one minor comment on the NETCONF event notifications: I think the examples probably need to be scrubbed one more time, because there were a couple of places where one of them needs to match the latest model in subscribed-notifications, and there was another place where it was still using get, which another draft says is going to be obsolete.
I love hearing that. Somebody told me just a few days ago that there's a yanglint to scrub these examples, and I didn't know that tool existed, so I've been doing it manually and trying to scrub them, which is not a great idea, but that's what I had. So the fact that you've gone and done that look is comforting, and we'll make the changes and send them to the list.
H
(Inaudible), Ericsson. I was part of this group, and I think these three are good enough for working group last call; I am quite happy with them, and I definitely see these three as the first set. Based on these, you can make something work now. UDP and HTTP and all the others are useful, but not as urgent. Okay.
B
So I think there are a few questions here. First: what is the package of drafts we might take to working group last call? That particular question I'd like to wait on until after you've presented the full set of drafts. The second is: when might we take them to last call, assuming this is the set? I think you said there are updates that need to be made on the event notifications, subscribed-notifications, and YANG Push drafts.
I
It's like two weeks; we'll get it in. All right, so other issues, other topics: NETCONF and RESTCONF notifications. There were a couple of technical issues and other things. Even though I started it, probably one of my first drafts in this space, the HTTP/2 and RESTCONF one, there are complexities that we found. As those of you who have been here a lot know, I've always asked: can we have help with gRPC?
I
There's a new draft, a new problem statement, on CoAP transfer, and you'll hear about that later. So the idea of YANG Push for CoAP also means that we have another draft in the general space; what's the actual division? The one that really got me going, and the one that I think is the most important to me, is the issue with DDoS protection. With a configured subscription, to stop DDoS, there's a dependency on sending a subscription-started message, which then gets an OK reply.
I
You don't send the updates until after you receive that OK. But do you send the updates with the current notification format, or with the new notification-messages draft? There's nothing there that says. If we wait, since nobody's asking for this draft yet, if we wait for the subscribed-notifications work along with the notification messages, then we can just mandate that everybody using HTTP/2 uses the new notification message, and we don't have that extra complexity.
I
Well, we have NETCONF in Cisco's production release, XE 16.6, so you can subscribe, and that's what we used in the hackathon demos. There are open-source implementations out there in Python and Java, and we had an implementation used at a previous hackathon. So there are a number of implementations out there. And you work with NETCONF, right? I think you used NETCONF in the hackathon at IETF 99. Yes, yes, theirs is NETCONF as well.
B
So we have about three minutes until we get to the next presentation, so first we'd like to get a pulse from the room, the working group that is in the room. The first question I'd like to ask is to get a sense of how many people in the room are active contributors to the design team that's been working on these drafts. So if you've been active on the design team, can you raise your hand, please? All right.
B
A second question is for the whole room: how many people have read these three drafts we're talking about, and that includes the design team? Okay, good, that's a good number, thank you. And then finally, assuming that these issues are resolved, how many people believe that these drafts would be ready to go to working group last call? That's also a good number, okay, good. And on what timeline do you think you'd be ready?
J
Yeah. First, here we extend NETCONF for distributed collection. The adopted draft has two parts: one is for multi-stream originators, the other is for a UDP publication channel. As we discussed in the design team meeting, we think it should be split into two drafts: one keeps the UDP publication channel, the other is for multi-stream originators. The first is the UDP-based publication channel for streaming telemetry; why a UDP-based publishing channel?
J
Firstly, YANG Push separated the management and control of a subscription from the transport that is used to actually stream and deliver the data, and YANG Push already mentions existing transports, including NETCONF, RESTCONF, and HTTP/2. There are some points we should consider. First, each collector will suffer a lot of TCP connections from the many line cards equipped on different devices. The second is that with UDP no connection state needs to be maintained.
J
The diagram shows the solution in two main parts: the bottom is the message layer and the upper one is the content layer. The message layer uses UDP, which is faster, and for security it uses DTLS, reusing existing functions over UDP; the message header keeps the most important information.
J
The notification header includes the encoding method, like GPB, CBOR, JSON, or XML, and the message length, timestamp, and sequence number. Fragmentation may be kept as an open issue; maybe we can discuss it in the future, along with some other options. For the notification message itself, the new message is the YANG Push notification; the data may be included inside another notification header, the one defined in the other draft that was just mentioned for the NETCONF notification messages, and then encoded with the content.
J
The next draft is for the multi-stream originators. We will begin with the two use cases. The first is for a device which is designed with a main board and multiple line cards. As we know, the data may be generated from the line cards, and if we collect the data from the line cards, it is aggregated up to the main board.
J
The main board may easily become a bottleneck, so in this case, maybe let every line card send its own YANG Push stream directly to the collector; then the performance may become better. The second use case is distributed collection, like the possible mechanism I mentioned, where every node sends its own messages.
J
That is, sending the data directly to the collector rather than assembling the data centrally. In a traditional device like a router or switch, if we assemble the data on the router, the CPU load may be too high, because the router itself is focused on its own business, such as protocols. So here, if we send the data directly from each node to the collector, it may be better. Here is the suggested solution; in this diagram there are several parts.
J
There is first a collector, and on the other side the publisher. For the publisher there are some nodes and a master node. First, on each node we already start a subscription server, and on the master there is coordination with the subscription server. The collector sends a subscribe message to the subscription server to establish the subscription, and the subscription server...
J
Here are the issues being worked. First, for subscription decomposition: keep track of resources and the associated publishers, make decisions, and decompose the global subscription into multiple component subscriptions. For provisioning composition, compose the components back into one subscription. Maybe we add some mechanism for the notifications related to the subscription decomposition and component subscriptions, and then notifications on subscription state changes. Each component subscription maintains its own subscription state and is responsible for sending its own notifications.
B
To confirm: right, okay. And then for the second document, this is not a chartered working group item, just as you mentioned; it also hasn't been posted to the list, and there's no discussion on the list just yet. So I think it would be premature to ask for working group adoption at this point. We need to have some discussion on the list first. Yes, discussion first.
N
Just a brief remark: actually, the model of multi-stream originators was originally contained in the other draft, and the notification headers wrapper also has allusions to it, so it was part of that, but it was pulled out as it was quite separate and more general. So while we have not discussed this particular individual draft, the issue has actually been there for some time.
P
So currently we have three related documents here: event notifications, YANG Push, and the smart filters. Based on those already-existing documents, we are trying to expand. The objective is to generalize from the existing trigger-then-send-notification pattern to a concept of event, condition, and action. The event could pretty much come from the existing triggers of YANG Push; the condition would cover some logic expression evaluated against the datastore state; and the action would go beyond the current simple notification and cover more actions.
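The event-condition-action generalization described here can be sketched as a small YANG structure. This is purely illustrative: the module name, node names, and the choice of string-typed expressions are all invented for this sketch and are not taken from the draft.

```yang
// Hypothetical sketch of an event-condition-action policy rule.
// All names and types are invented for illustration.
module example-eca {
  namespace "urn:example:eca";
  prefix eca;

  list policy-rule {
    key "name";
    leaf name { type string; }

    container event {
      // e.g. a YANG Push style trigger on a subtree of interest
      leaf subtree { type string; }
    }
    container condition {
      // logic expression evaluated against datastore state
      leaf expression { type string; }
    }
    container action {
      choice action-type {
        case notify {
          leaf notification-name { type string; }
        }
        case configure {
          // beyond simple notification: e.g. a configuration
          // change or scheduled RPC
          leaf edit-target { type string; }
        }
      }
    }
  }
}
```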
P
Such an event-condition-action triple would fit into a policy framework, allowing you to specify policy rules, and the generalized action would cover more than the simple notification which we have today. We could also do network configuration, scheduled RPCs, and even link to some subtrees in the datastore, because such structures would fit in the policy framework.
P
Then we can utilize some policy capabilities: decompose the condition into smaller conditions and do the same for the actions. Such a framework brings us some benefits: we can have a more responsive network, it will be more scalable, and it enables more automation. So next I will try to get feedback from the working group, and then we can start working on some models.
R
Comments. Frank: sorry, I haven't yet read this draft, but I found that you mentioned a new event-condition-action policy model for the policy design. So I have a question for you: I'm wondering if you know that there is existing policy design work in the SUPA working group and in the I2NSF working group, which uses an ECA model. So I want to know what your model's relation is with those existing working groups.
R
In addition, a lot of the SUPA policy work is still inherited and extended in the I2NSF working group. We are working there on the information model for ECA in our capability information model. So please take a look at that draft, and I think we can sit together and discuss what the relation between them is; maybe we can at least contribute to each other or something. Yeah, you can take a look. Definitely, okay.
P
Right, okay, so we're trying to do the telemetry model not over NETCONF; we have another option here, over CoAP with CoMI, which is pretty much the CORE working group's equivalent. And the purpose here is to have a more efficient transport, trying to make it work in the most simple, constrained environments.
P
So they can have binary YANG transfer, and we use that for constrained devices. The subscription would go through CoAP and CoMI, and for the configuration we also use what we have here, the call home, and we try to utilize some features to provide the observe capability. What we have is a sort of problem statement; after adoption we want to work on the list so that we can complete the solution.
B
So I think they were talking about having the adoption in that working group, and that's what they discussed there, but it's just a problem statement right now. In the end, the question of some of the mechanisms that are here being adopted in other places is an interesting question that I think the working groups will have to work through. Okay.
N
Actually, the other one first... this one. Yes, thank you, okay. So basically this is the problem statement for smart filters for push updates. This is basically another extension; it builds on top of the YANG Push subscription drafts, as was listed in the earlier overview. To give a little bit of background on the purpose: the YANG Push filters allow clients to select which nodes to subscribe to. However, many monitoring applications need a little bit more.
N
They are also interested particularly in values: for instance, filtering based on whether things are within the normal operating range or outside of that range. For instance: has a critical threshold been reached, is utilization above a certain percentage, and so on. Those are things that you would not be able to provide on the device, but would have to be computed by an application that processes the stream. Filtering on values is currently not covered in YANG Push.
N
The reason for this was to not needlessly stretch implementation complexity, and also, in many cases you would need additional things on top, rather than just value-based filtering, and so forth. So this is currently not addressed, and this smart-filter problem statement is a proposal to address the gap, and essentially to transition, this way, from simple updates to actual events that indicate conditions of interest. In terms of how it fits in with the other drafts:
N
This is just, very briefly, the overview; we went through these. Essentially, the smart filters operate on top of, or in conjunction with, YANG Push, and they also feed into any automation that you want to pursue. In terms of what would be included in this model: stateful filters, basically filters based on values, so basic metric filters, comparators, those sorts of items, and also certain selected stateful filters, such as threshold crossings and recent high watermarks, where the objects generally move in and out of filter conditions, and so on.
N
These are basically, yeah, pretty much boilerplate, table-stakes things that you need to have for service assurance, and the proposal here is to focus on those. There are, of course, even smarter filters possible, and one concerns aggregates, forming aggregates over time: things such as simple statistical aggregation, computation of maximums, averages, and so forth. We thought we would keep that outside the scope.
N
However, there's been some feedback to maybe also incorporate that. Things that we view as being well outside this problem statement are more complex: aggregates across objects, full alarm qualification, or forming the equivalent of the Expression MIB and Event MIB in the YANG space. Those things would go beyond what we are proposing here.
N
So what is within the proposed scope is, first of all, refined on-change update semantics. There was also another reason why we did not want to include this with plain YANG Push: basically, we need a distinction between an object disappearing from an update because the object was created or deleted, versus the object still being there but having merely filtered in and out of range. So for this,
N
we will need additional update notifications. Then, for the stateful filters, the proposal is to have a few selected ones. One is to enable threshold-crossing alerts, including hysteresis via a counterpart threshold; another is possibly multi-level threshold-crossing alerts, for which there are certain requirements from operators; and the third includes recent high watermarks, where we would basically keep the maximum and update it, but with some expiration, so that it's the equivalent
N
of a graphic equalizer, which you can use also for performance types of applications. Again, outside of the scope are the more complex things that were mentioned earlier. So, in our view, this is a logical extension: it builds on top of the existing YANG Push notification work, it's an enabler for assurance applications and a building block for network automation, and basically we wanted to assess the interest of the working group here, whether it makes sense to define a solution for this problem.
H
(Inaudible), Ericsson. When we go into thresholds and high watermarks, I see a strong connection between this and the alarm YANG module that we pushed into CCAMP. I think it would be very natural that, say, if utilization goes over 99%, I want an alarm about that. So I think we should connect them up.
N
Well, we have a solution proposed in the back of our heads, not inside the draft, if there's interest to take it up. We believe this is basically a natural extension on top of YANG Push: where YANG Push currently has filter constructs, we would add a smart-filter augmentation into the YANG Push update streams. This is also the reason why we were thinking of NETCONF as the working group; maybe CCAMP is another option, but this is basically it.
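The augmentation being described, a smart filter added next to the existing YANG Push filter constructs, could look roughly like this. Everything here is invented for illustration, including the augment target path, which is only a placeholder for wherever the subscription filters actually live in the push model.

```yang
// Hypothetical sketch of a smart-filter augmentation of the
// YANG Push subscription model. Names and paths are invented.
module example-smart-filters {
  namespace "urn:example:smart-filters";
  prefix exsf;

  import ietf-yang-push {
    prefix yp;  // assumed to define the subscriptions tree
  }

  grouping threshold-filter {
    leaf object { type string; }  // node whose value is evaluated
    leaf rising-threshold { type int64; }
    leaf falling-threshold {
      type int64;
      description
        "Provides hysteresis around the rising threshold, so
         objects do not flap in and out of the filter.";
    }
  }

  // Placeholder target: the real location would be the filter
  // choice inside the push model's subscription.
  augment "/yp:subscriptions/yp:subscription" {
    container smart-filter {
      uses threshold-filter;
    }
  }
}
```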
B
Okay, good. I would actually like to get a poll from the room regarding the level of interest in this draft. Well, I don't know; without a solution we can't bring it to adoption, and not to mention we have a number of drafts in the queue that we probably should focus on first. But there's a question at the mic.
O
Could I just ask a quick question? I was going through and reading the draft, and I just want to make sure: is this draft actually tied to the event-condition-action architecture? Because you mentioned this is something that could be extended off of YANG Push. I want to make sure about what you're thinking: is it YANG Push, or is it that architecture?
N
O
B
As a contributor, I do think, I view this as a natural progression of the YANG-Push filtering mechanisms: smart filters, better filters, more and better. Okay, back to the interest level from the room: if you think this is interesting work, can you please raise your hand? Okay, that's a very good number. Okay, thank you.
B
G
N
B
N
N
This basically addresses something that we feel will be useful as NMDA takes hold: to be able, for instance, to troubleshoot certain conditions more easily, and so forth. Basically, the issue is that within NMDA, the same data can be represented across different datastores. The question is: what happens if there are unexpected discrepancies that persist between, say, values in the operational and intended datastores?
N
And, okay, I don't have much time, but basically, to cut to the chase: the draft defines an RPC, with an accompanying data model, that allows you to compare NMDA datastores and essentially see whether there are any unexpected discrepancies between, for instance, intended and operational, meaning that something is either not propagating, or maybe something that was intended is suddenly learned, or whatever.
N
N
Defining this NMDA-compare operation, we basically let you specify a source, a target, and a filter specification, as well as a dampening period, which basically tells you how long a discrepancy would need to persist in order to be reported. The last one is optional; it is conditional on a feature, based on feedback from Rob. So this is basically where it is: it is a -00 draft right now. We believe this is a straightforward and useful addition.
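The compare-with-dampening idea described above can be sketched roughly like this. Everything here is an assumption for illustration, not the draft's data model: datastores are flattened to path-to-value maps, and `first_seen` stands in for whenever the server first noticed each discrepancy.

```python
# Hypothetical sketch of an NMDA-style compare: diff two datastores
# (e.g. intended vs. operational) and report only discrepancies that
# have persisted for at least a dampening period.

def compare(source, target, dampening, first_seen, now):
    """Return the sorted paths whose values differ between source and
    target, where the discrepancy is at least `dampening` seconds old."""
    diffs = []
    for path in source.keys() | target.keys():
        if source.get(path) != target.get(path):
            seen = first_seen.get(path, now)
            if now - seen >= dampening:
                diffs.append(path)
    return sorted(diffs)

intended = {"/interfaces/eth0/enabled": True, "/interfaces/eth0/mtu": 1500}
operational = {"/interfaces/eth0/enabled": True, "/interfaces/eth0/mtu": 9000}
# The MTU mismatch was first noticed 30 seconds ago; dampening is 10 s,
# so it is old enough to be reported.
result = compare(intended, operational, dampening=10,
                 first_seen={"/interfaces/eth0/mtu": 100.0}, now=130.0)
```

The dampening check is what keeps transient states, such as config that simply has not finished propagating yet, out of the report.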
D
D
Rob Wilton, Cisco. So I've reviewed this draft and provided some feedback on it. I think it's useful work; I think it's something that various people have been asking for, to be able to do a diff comparison between the different datastores, and I think this solution is on the right lines, so.