From YouTube: IETF106-NETMOD-20191119-1000
Description
NETMOD meeting session at IETF106
2019/11/19 1000
https://datatracker.ietf.org/meeting/106/proceedings/
B
Welcome to NETMOD, our first session; we have two sessions today. I'm Lou Berger. We have Kent Watsen, who is remote. At some point Joel, the other co-chair, I think is going to show up; hopefully he does. Mahesh is going to be sitting here with me, helping out with the session and with Jabber, although I'm on Jabber as well. As usual, we're using Etherpad; the URL is here. Please do join in and help with our collaborative note-taking.
B
It's very helpful to do that, and it's also a good opportunity for anyone who speaks at the mic to make sure that their name is perfectly captured and their comments are accurately captured. So please jump on this URL. You can also find it off the tools page, or off the datatracker page, or off our agenda; any number of places.
B
This is the IETF, which means we have some rules governing what we do here, what is said here, and what makes it into our minutes and becomes part of our process. Basically, anything you say in this room during this session is part of our permanent record. We are using YouTube for video as well as recording audio, so please be aware of that.
B
The blue sheets are going around. As I mentioned, myself and Mahesh should be on Jabber; if you do see something show up in Jabber and we don't take note of it, feel free to come to the mic and relay the question. As you see it, the agenda has changed a little bit; I'll get into those details in a moment. So we have two sessions; the really important thing to note is we have a room change, so this afternoon we are not in here. Why, I don't know, but just be aware.
B
On to document status. Since the last meeting we do have one RFC. I have to say, when I saw that this was since the last meeting, I thought there was a mistake, because I felt like we were done with this a long time ago; but it does sometimes take a long time from when we finish something in the working group to when we actually have the RFC. Thanks to all who contributed to this; it was an important piece of work. Of course I'm biased, because I think it's useful.
B
We have two documents that have been submitted for publication. I thought we were going to have an update on that from the authors; I think they decided they're just going to speak at the mic and give us a brief update. Now, Adrian, I thought, had volunteered to do it, but I don't see him in the room. Adrian, are you in the room? You're not. Kent, do you want to say anything about artwork folding?
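The artwork-folding draft being discussed defines a convention for folding over-long lines in drafts and RFCs and rejoining them losslessly. A minimal sketch of the idea, assuming the single-backslash strategy and omitting the draft's header line and edge cases (this is an illustration, not the normative algorithm):

```python
# Simplified illustration of artwork folding: break long lines with a
# trailing backslash, and rejoin on unfolding. Real folded artwork also
# carries a header line announcing the convention; omitted here.
def fold(text: str, width: int = 69) -> str:
    out = []
    for line in text.split("\n"):
        while len(line) > width:
            out.append(line[: width - 1] + "\\")  # mark continuation
            line = line[width - 1:]
        out.append(line)
    return "\n".join(out)

def unfold(text: str) -> str:
    # Remove each backslash-newline pair to restore the original line.
    return text.replace("\\\n", "")

long_line = "x" * 200
folded = fold(long_line)
assert all(len(line) <= 69 for line in folded.split("\n"))
assert unfold(folded) == long_line
```

The round-trip property (unfold(fold(x)) == x) is the essential guarantee the draft is after; the real strategies also handle lines that already end in a backslash.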
C
B
Thank you. I'm going to drop you off; come back and queue up if you want. They did it for me. We also have the YANG data structure extensions; that's been going through processing, and I don't think there's anything really that interesting to talk about there. Post last call we have a few documents. The first is module tags, which actually left the working group and then came back to the working group. That was because it didn't have the...
B
...NMDA-related state appendix in it. I believe that's been produced; I should know that answer. I am at least a contributor, maybe a co-author, on that, but really Christian Hopps is driving that work. It came back to the working group, it's been updated, and we expect to have a second last call on it next week. We have a couple of documents that have been through last call; they did get some comments.
B
That was a little bit of an extended last call; we're going to hear from Rob Wilton on that in a moment. I clicked the wrong button; let's see if this goes the right way. Okay, one more document that's post last call is factory default. We've had, I think, some discussion on that; the last call ended on the 15th. I don't believe the YANG doctor review came in; it was assigned, but we're still waiting on it. There is an IPR poll in progress; we're missing one response.
B
At least there was when I wrote that on the slide, and that will, of course, block submission to the IESG, but we expect it to progress nicely, and Kent, who's the shepherd, is going to wait until he gets the final revision before doing his write-up. Not on the agenda, that's interesting: geolocation was not on the agenda when we put these slides together; it's actually on the agenda now, so that's incorrect. But we have a couple of other slides, a couple...
B
...that are several years out of date. I don't remember who the standard was from, but we have an RFC that references, maybe, an ITU standard that's literally three revisions out of date. Joel made the nice quip that at least we have stable time, which, you know, it's good to have stable time, but we probably should figure out how to update the reference as we move forward. I'll mention that to you. Again, the agenda is pretty tight; and with that, the first item comes up.
B
We've added this schema comparison document. That's noteworthy because it's the last building block necessary to satisfy all the requirements from the revision handling design team, so they now have a complete set of documents covering the requirements. We're going to spend a lot of time on that, because that's a really important work item for the working group.
B
Our second session we've also managed to fill up. Interesting font conversion here: the stuff at the top, I guess, must be more important, since it's larger here. So we have updates on a couple of working group documents, and then a couple of other documents which are individual contributions. You may have seen the list discussion related to ECA and the two different documents.
B
I think the authors have been working hard to figure out how they combine their work, and I suspect it's a merged contribution at this point, but we'll hear that in the afternoon. We have one liaison; it's actually a communication that came in from the ITU, since the IETF doesn't have a formal liaison relationship with that group, at least I don't believe we have one, and it's really just to be aware of what work they're doing.
B
D
Okay, so I'll try to give a very quick update on these two models; they are post working group last call. First, apologies: I've been quite slow processing the working group last call comments. I had intended to over the last couple of weeks, but I wanted to get the final versioning draft out, to try and get that work to progress at a steady pace; so that's why the sub-interfaces draft updates haven't yet been finished.
D
For the interface extensions draft I've applied most of the last call comments. There are a few open issues waiting for confirmation from the people who submitted them; I'm going to cover some of those today, just in case anyone doesn't give any feedback, though I believe they've mostly been covered anyway. The sub-interfaces draft is still in progress, and hopefully it should be completed fairly soon, say the next couple of weeks, or four weeks, a month, and then we'll get those done. So, on to the issues.
D
The first one is: do we rename the carrier-delay function? This is a feature that delays, normally, a hardware state change, so that you can allow some other protection equipment to kick in, or, in the case where an interface is coming up, you can allow the state to stabilize before you start running traffic over it. Carrier-delay is the name that we've used within Cisco. The suggestion was maybe to change that name; a couple of possibilities are link-flap suppression or state-flap suppression. I don't know.
D
Does anyone have any comments on this or not? If nobody stands up, I will try to progress this on the list; otherwise I'll either keep it the same or change it. No? Nobody seems to care particularly. The next issue: there was a proposal to add an in-discards-overflow counter; this would be a subset of the in-discards counter, so I'll add that one, and an in-discards-unknown-encaps counter; again, that was discussed before, so I'll...
D
...add both of those counters; the definitions of what they would be have been sent to the list. There was also discussion about whether to add in-pkts and out-pkts counters. The way the counters are defined today, there is a split between unicast, multicast, and broadcast, and the expectation is that packets that are okay, well-formed and not dropped, fit into one of those three buckets. For interfaces that don't have that split between those three, it's not quite so useful.
D
It would have been nicer if they had defined in-pkts and out-pkts counters, and then the broadcast and multicast counters were subsets of those. But, that said, I don't think this document is the right place to add those, and I think further discussion would be required. So the plan is not to do that now, and to put it into a future revision of ietf-interfaces if it were required. Any comments?
E
D
The case where this came up: the concern was that, for some interface types, you may not know the split into unicast, multicast, and broadcast, so which bucket do you put those packets in? It's going to be misleading. I think it was maybe the policy models that wanted to refer back to this counter, possibly, and their concern was that doing the maths in XPath becomes sort of convoluted, whereas if you had a single counter that would be easier. So there was justification as to why this could be used.
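To illustrate the point being made: without a single total-packets leaf, every consumer has to repeat the same arithmetic over the three per-class counters (awkward in XPath, trivial here). A sketch, using the ietf-interfaces statistics leaf names; the derivation itself is illustrative, not something the draft defines:

```python
# Deriving a total ingress packet count from the per-class counters in
# the ietf-interfaces statistics container. Absent counters count as 0.
def total_in_pkts(stats: dict) -> int:
    """Sum the three per-class ingress counters."""
    return (stats.get("in-unicast-pkts", 0)
            + stats.get("in-multicast-pkts", 0)
            + stats.get("in-broadcast-pkts", 0))

stats = {"in-unicast-pkts": 1200, "in-multicast-pkts": 30, "in-broadcast-pkts": 7}
print(total_in_pkts(stats))  # 1237
```

This is exactly the sum a policy model would otherwise have to express in XPath, which is the convenience argument for a single counter.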
D
Next is max frame size. This is reporting the maximum size of frames that you can send and receive over an interface; normally a physical interface, but it could be a sub-interface as well. Before, the definition had a tweak to be fairly closely in line with 802.3; there was some flexibility for this size changing by four or eight bytes, to account for VLAN tags in the packet. In the end I decided to actually take that out and make it more generic.
D
B
G
H
It's getting quite stable, I would say, by now. First of all, I'm sorry for publishing the draft just the day before yesterday. I think the concept is still the same: we want to document data that will be available offline; server capabilities, preloading data, a lot of use cases; XML and JSON encoding, with multiple new ones possible to add; metadata for the data set and content. It will be similar to what you would get from an rpc-reply to a get operation.
G
H
So what has changed since the last IETF? First of all, we had this entity tag and last-modified timestamp, which came from RESTCONF. A number of people thought it useful; another number of people thought it uncertain how it should be used, and it's quite RESTCONF-specific. After the last IETF there was an email discussion on this, and it was decided to remove it, and maybe later reintroduce it if there's a real need. Then, just very lately, on these suggestions, I added this YANG instance data version.
H
Then there was some discussion about the full inline schema specification method. It is now made optional. And just yesterday, maybe, Andy at least came up and said he thinks it's too flexible; he wants just YANG library as the base. But earlier, I think two IETFs earlier, there was a discussion on this where it was decided that we want a very flexible solution, so it's not just the YANG library module format that can be used, but any other format as well.
H
Now it's optional, based on a feature. Then I added, based on the last IETF's discussions, the simplified inline method, where for each module you just have to specify a single string, like in this example; we will come to an open issue at the end about the exact format of this string. Next, please. Then, on Martin's proposal, I added the wrapping container around content-schema, to wrap the whole choice.
H
There are three methods in the container, so we cleanly separate them, and some of the options were renamed. The blue parts are the ones that changed: simplified-inline was added; inline-module and inline-schema, I think, were just called module and schema before; and then we have a feature for inline itself. Next, please.
H
My English needs some updates as well. Security considerations: this is not a normal YANG module, because it is not intended to be accessed online. Basically, it doesn't involve any way of modifying the server's or the publisher's behavior; it's purely read-only, in a sense, and it's for files, so file handling security should apply.
H
Then there's the '.yang' in schema naming; I'll come back to this in the open issues. And there was a statement, which I made, that the YANG revision date for content-defining YANG modules should be mandatory, because otherwise modules can change greatly between revisions; but it was commented that some modules might not have a revision date at all, so in that case we can't have a revision date. That was added. Next, please. And this is an example of how it would look after all these changes.
H
So you see the format version, 1, which is always 1 for this RFC; we have YANG version 1.1, fixed to 1.1 per RFC 7950; and we have here the simplified method. Next one, please; this is next, maybe; yes. We have one major open issue, raised by Andy and Martin, about the format I use in the draft for the simplified inline method...
H
For me, this is a simple, short method, but we could, as an alternative, just say ietf-yang-library@&lt;date&gt; and nothing more; to me that's somewhat unusual, but it can be used. Or we could use a more complex solution, where we have a list with two leaves, module name and revision. I don't want that, because I was specifically asked to have a very simple and short solution. So it's either one or two; I can live with two.
H
D
Rob Wilton, Cisco, in this case. Just a quick comment on the module naming: with the YANG versioning work, you'd sort of be better naming YANG modules using the revision label as well. I think I sent a comment to the list yesterday; it might be nice to have this slightly more flexible, so it's not tied to having to be the date. You could, for example, have a YANG semver in there, or anything else.
H
First of all, I agree with you. Second, I would rather not make this dependent on, and wait for, the full revision/versioning work. Yes; I don't know; maybe I can put in something later, that a revision label may be used instead, but that's rather uncertain at this point. I don't care, either way. So if I put in a sentence here that, instead of the date, a revision label, which is undefined at this point, can be used, is that acceptable?
D
That's it. Okay, so just an update, maybe, on what the design team has been doing. I'll give an overall summary of the solution space, and then we've got individual drafts covering the various aspects of the solution that's being proposed here. So, in general terms of what the design team is covering: I'm going to give a quick update, Bo is going to talk about the updates to the revision modeling draft, Joe is going to be talking about updates to the semantic versioning draft, then YANG packages, and the schema version...
D
Selection
draft
wish
I
would
have
been
doing
that,
but
he's
not
here
and
then
I'm
going
to
talk
about
the
draft
that
I've
and
we
published
last
week
or
I
should
move
the
weekend
on
schema
comparison,
and
then
it
may
be
to
be
sometime
for
the
next
steps.
Discussion
in
the
end,
so
in
terms
of
design
and
teen
update
and
we've
been
meeting
on
a
sort
of
semi
regular
weekly
basis
in
scientific
work
has
been
done.
So
I'd
like
to
thank
you
to
everyone,
who's
been
participating
in
that
work.
D
There are lots of various people involved, in one role or another. In terms of the main output: the solution overview draft has been trivially updated; that's not particularly interesting, the shape of that hasn't changed. There have been updates, and some relatively minor changes, to the module revision handling draft, which I will talk you through. There have been quite significant updates to the semver draft, the packages draft, and the version selection drafts, which we'll talk through, and then there's an early revision of the schema comparison draft.
D
The requirements draft is stable; there have been no changes since IETF 105 and none are anticipated, so that's just sitting there at the moment. Again, in terms of the solution overview, the updates have been fairly trivial, just updating the references to the fuller solution drafts; it hasn't yet been updated with the schema comparison draft, because that was too fresh. But it's worth pointing out here that the shape of the solution, the component parts, hasn't changed in scope or size.
D
Primarily, that's about being able to notify when NBC (non-backwards-compatible) changes have occurred, through updates in the revision history. It allows a revision label to be associated with a revision, and that mechanism is then used to put semantic version numbers in, and it also allows a branched revision history. RFC 7950 has a linear revision history for modules; at least, I think that was the intention and the expectation when it was written. So this clarifies that you can have a branched revision...
D
...history. As I said, it adds revision labels, so that's sort of the core draft for updating YANG modules. Then, overlaid on top of that, is a semantic version numbering scheme, and that allows the use of semantic version numbers for labeling, or for versioning, modules, and it's also used in package versioning as well. The YANG packages draft then talks about, rather than versioning single modules, versioning sets of modules together.
D
Then there's the schema comparison draft: that explains how you can compare two modules, or two YANG schemas, or YANG packages, to detect what the changes are between them, and it defines some annotations to make that tooling work more efficiently. That's still an early draft; there's still more work to work out exactly what things need to go into there, but I think the aim here is that it shows you the shape of that solution. And then this is sort of showing you what the dependencies are between the various drafts. You can see the module revision handling...
D
...one sits at the top, and the packages draft uses that. The semantic versioning scheme is optional, so if you're making use of it, you have those dependencies; the package version selection depends on the packages draft, obviously. So that's what they look like. And then, as I said, the potential next steps: the outcome of this is that we've been working on this, the design team's overall solution, for quite a number of IETF cycles, and I think we're at the stage where we would like to know that the working group supports the direction we're going in.
D
So that's why I think we might be at the stage where working group adoption of this set of drafts, to agree that this is the right direction, would be a good discussion to have. Now, I'm not suggesting we necessarily have that discussion right now; hopefully there's time after we've presented on the drafts to have it.
D
B
I think that's a perfect question for sort of setting up the next batch of slides, and we should ask this question at the end. Okay, I think you are actually next, and I don't know which of you, or if it's Bo, is better placed to present the last one.
B
K
So here is the YANG module revision update, the main updates to YANG 1.1, RFC 7950. The core enhancement to YANG 1.1 is that these updates explicitly say that nonlinear module development is accepted; and also, since it is now nonlinear, there could be non-backwards-compatible changes in each revision. So this document defines the non-backwards-compatible-changes extension to revision, so that for each revision it must be specified whether there are non-backwards-compatible changes; and the draft also defines that it must specify the revision label.
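A sketch of how tooling might consume per-revision NBC markers like the ones just described. The draft models this as a YANG extension statement on each revision; the dictionary fields here are hypothetical stand-ins for that, purely to show the upgrade-compatibility check the marker enables:

```python
# Hypothetical in-memory view of a module's revision history (newest
# first), with each entry carrying the draft's NBC marker as a flag.
REVISIONS = [
    {"date": "2019-11-01", "nbc": True},   # a breaking revision
    {"date": "2019-05-01", "nbc": False},
    {"date": "2018-12-01", "nbc": False},
]

def compatible_upgrade(history, old_date, new_date):
    """True if no revision after old_date, up to new_date, is marked NBC."""
    between = [r for r in history if old_date < r["date"] <= new_date]
    return not any(r["nbc"] for r in between)

print(compatible_upgrade(REVISIONS, "2018-12-01", "2019-05-01"))  # True
print(compatible_upgrade(REVISIONS, "2019-05-01", "2019-11-01"))  # False
```

This is the practical payoff of the extension: a client can decide whether a module upgrade is safe by scanning the markers, without diffing the schemas.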
K
This is a change to the previous rule, where on import you either specify the revision date or it's optional; this one gives another option. The other changes are: the draft improves the YANG status changes, makes what counts as non-backwards-compatible clearer, and also updates the guidelines for updating YANG module revisions.
K
So here is a recap. Since the last meeting the major changes are the revision label, such that when you use a revision label it must take the YANG semver format; and the other major one is that each IETF module with a new revision must include a revision label that conforms to YANG semver. So those are the two major changes, and the other minor changes are: the draft imports the revision identifier when defining the YANG, and minor improvements to the text and modules. These are the main changes.
D
Just a quick clarification on the revision label: it's not compulsory, you don't have to have one of those; but it's saying that if you do have a revision label in there, and it looks like a YANG semver number, then it must be interpreted that way, so tooling and clients can interpret it that way. It's still optional as to whether you include a revision label; you'd be allowed to use just revision dates on your own modules. The same isn't true for IETF modules: they would have to use revision...
K
RFC 7950 has an ambiguity here: when implementing YANG modules, the existing rule is that when the imported module revision is ambiguous, you choose the latest revision. In this draft we propose a different definition: if the imported module revision is ambiguous, then choose an implemented revision rather than the latest one; otherwise, when there is no implemented revision, resort to the latest imported revision.
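The proposed import-resolution rule just described can be sketched as follows. The data layout is illustrative (not from the draft); the logic is the rule as stated: prefer an implemented revision, fall back to the latest available one:

```python
# Sketch of the proposed rule for resolving an import that names no
# revision: prefer a revision the server implements; otherwise fall
# back to the latest available (import-only) revision.
def resolve_import(available):
    """available: list of {"revision": "YYYY-MM-DD", "implemented": bool}."""
    implemented = [m for m in available if m["implemented"]]
    pool = implemented if implemented else available
    # ISO dates sort correctly as strings, so max() picks the latest.
    return max(pool, key=lambda m: m["revision"])["revision"]

mods = [
    {"revision": "2019-11-01", "implemented": False},
    {"revision": "2019-05-01", "implemented": True},
]
print(resolve_import(mods))  # 2019-05-01: implemented beats newer import-only
```

Under the old RFC 7950 rule the same input would resolve to 2019-11-01, which is exactly the difference the draft is raising with the working group.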
K
So this is a question to the working group. The other open issue that this draft is still working on is the non-backwards-compatible changes, and what the backwards-compatible changes would be; we added a bit more, because existing YANG 1.1 has defined a list of what the backwards-compatible changes are, but in our draft we're thinking we may add more, to improve this definition. We're also thinking about whether we give an exhaustive list or some generic wording.
L
Questions? Just a clarifying comment: Joe Clarke, Cisco. On this: as we've discussed it, Balázs in particular has come up with other things that potentially break backwards compatibility or haven't been considered. I honestly don't feel an exhaustive list is possible; I think we'll always find some corner cases. I think we should probably err on the side of non-exhaustive, with some kind of clarifying verbiage there to say we're trying to do the best thing we can for the client, and, if we aren't certain, better to err on saying something is NBC, non-backwards-compatible.
D
L
All right, that's me, Joe Clarke, and this is the version of the YANG semver work that the design team was doing, that Bo alluded to. So we had a little module dance here, or draft dance: we originally made module versioning the related-to draft document, then we had to go back, so now semver does stand alone. But there have been some changes; there have also been some things that have stayed the same, in particular between -01 and -00.
L
We have not changed the syntax and rules, so the modified semver (and I'll recap with an example here in a second) that Rob described at a previous IETF, I think 104 or 103, has stayed the same. The notion and the definition of NBC (non-backwards-compatible), backwards-compatible (BC), and editorial changes stays the same; we did officially, or formally, define editorial changes in the -01 draft. And modules can still have a semantic version associated with them through the revision label, as Rob and Bo were talking about.
L
So we obey the semver 2.0.0 syntax, and in fact one of the changes we made in -01 fully recognizes that syntax. We have a major version component, a minor version component, and a patch version component; that's all semver 2.0.0. And then we add this lowercase m or uppercase M modifier, and you can see there how those are applied, and those are sticky: once you add a lowercase m or an uppercase M...
L
...those are sticky, and this allows us to tag specific branches while we're using the YANG module versioning rules, in terms of that lineage that we talked about at the last meeting. The one exception here is beta, or pre-release, versions; that is, if the major version is a zero, all bets are off: as you're developing, before an initial release, you can continue to make backwards-compatible and non-backwards-compatible changes, you just use a zero to denote the major version number. Formatting aside, this is an example.
L
This was previously presented; I just egregiously stole it from one of Rob's previous presentations. You can see how the version numbers, or components, are applied, where NBC changes show up, and then how the uppercase M and the lowercase m are applied; and those again would be sticky within those sub-branches, like 1.1.1m: that would stay sticky as 1.1.1x keeps going on.
L
'Artifact' tends to be used in some coding parlance within the industry, and the reason we wanted to generalize this a little bit was to recognize that semvers can be applied to other things, in particular YANG packages, which Rob will talk about in a few minutes; we wanted those to also be able to be versioned with a YANG semver. So where 'module' is needed, like when we're doing an import by revision or derived semantics, we still use the word 'module', but in general we genericized 'module' to 'artifact'.
L
The semver construct is no longer a top-level extension whereby that is what you directly import from; instead, it is now a revision label, and we'll take a look at how that transitioned. You can still, if you choose, import by a semver; you're just saying, 'I want to refer to this particular revision of a YANG module by the semver', but the linear, or lineage, import is still how the tooling will resolve that.
L
Since we're no longer updating the core tenets of YANG, we pulled out the 'updates RFC 7950' part. We did add full support, just for completeness, for the full semver 2.0.0 spec (I'll talk about that in a minute), and we formally defined the regular expression, the typedef, for what a YANG semver looks like. And we restated the rule that we talked about and that Bo mentioned: if something looks like a YANG semver, then tooling needs to treat it as such.
L
The one thing we hadn't addressed previously is that semver 2.0.0 offers this metadata for both pre-release and build, so we just expanded our regular expression to allow that, if you want to have it as part of your version string; but it has no applicability to the YANG tooling, so YANG tooling will effectively ignore any of that build or pre-release metadata. But we will allow it.
L
M
Vendors are using software version numbers in their YANG models that sort of look like semver, aren't they? You guys have probably put some thought into this: have you thought about the negative ramifications? Because those models aren't all going to be, like, renamed, to add some string to the version. I mean, those YANG models are out there, and they're going to remain out there and be used for a long time, right?
L
So, what's going to happen? I thought about that; fair point. Let me go back to this. The revision label would have to be a new property that they would add anyway; we're not saying they need to change the name of the module. But if they wanted to adopt this, meaning if they wanted to adopt the whole lineage-based, provenance-based import and the non-backwards-compatible/backwards-compatible change semantics, everything we're presenting here, essentially they would have to mark in their new revisions that this is...
L
...the revision label that we're going to use. And then they have to be aware that, if they were to do something like 16.3.x, like I said, and they don't want users (because this is mainly for users, to look at this and say, 'okay, I understand that between this version and the last there have been some non-backwards-compatible changes'), they would have to use something that does not look like a semver, that doesn't match that regular expression. So if they don't want to adopt this, they have to do nothing.
L
D
H
B
Just as a heads up, right now we are running ahead, which means folks who are in the second session might be bumped up to this session. But, you know, sometimes we end up running behind as the discussion continues, and we end up going back on schedule; this is just a heads up for those folks. So right now I think we have Rob.
D
So, the third of our drafts that we're presenting on: YANG packages. I've presented this one at least once before, so I'm going to overview what it is again, just to remind you what they look like, and then I'm going to talk about what we've changed in here. This one has had more significant updates in terms of the details, and more things added to it; I don't think the overall solution has changed in terms of what it's trying to achieve, it's just more refinements. So, what is a YANG package?
D
We know what a YANG module is; a YANG package is where you take a set of YANG modules together and use them to define a schema. So what a device might currently report in the YANG library, via module sets and things, you could also report via YANG packages; it could define the same thing. So why do we do something new here? Well, there are two things we're trying to do.
D
One of the other key changes that has come in here is to add checksums, for integrity checks, both of the modules and of the packages themselves. The idea here is that, if you know what the package is at design time, for a particular version, then your client doesn't need to download the full set of modules, or the full schema, from the device and check whether it matches what you need.
D
If you know that it's what you expect the device to have, you can just check that it has a package that is either what you expect it to be, or backwards-compatible with what you expect it to be, and avoid that sort of more complicated checking of the schema. So it moves some of the work that you would naturally do at runtime into an option of doing it at design time.
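The integrity-check idea can be sketched quite directly: a package carries a checksum per asset, so a client can verify a cached module without re-downloading it. The manifest layout below is illustrative only; it is not the draft's actual YANG structure, and the module text is a made-up example:

```python
import hashlib

# A package lists a checksum per module, built once at design time.
def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

module_source = b"module example-bgp { namespace 'urn:example:bgp'; prefix bgp; }"
manifest = {"example-bgp": sha256_hex(module_source)}  # design-time artifact

def verify(name: str, cached: bytes, manifest: dict) -> bool:
    """True if the cached module bytes match the package's checksum."""
    return manifest.get(name) == sha256_hex(cached)

print(verify("example-bgp", module_source, manifest))  # True
print(verify("example-bgp", b"tampered", manifest))    # False
```

This is the sense in which runtime schema comparison is replaced by a cheap design-time artifact: matching checksums mean the client already knows the exact schema contents.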
D
It matches what you want: all the features are implemented as you expect, all the deviations are what you expect. By having these schemas defined off-box, and available as instance data files, you can move that work off to be done just once. So when your client connects to the router, it says, okay, I'm expecting this device to be running such-and-such a package from this vendor at version 2.7, or whatever it happens to be, and then you can check that, yes, actually, that device is running the package 2.7 that you expect, and that's fine.
D
So, an example here: I've got an example of a device package, and here it's listing, in this package, three modules that it implements and a couple of import-only modules. That's listed in the package definition, and the definition includes metadata about the package: like where do you find the package, where do you find the modules, what features are mandatory, i.e. what features are you required to implement to say you conform to this package definition.
D
It can also import packages; that's not shown in this example, it's shown in the next one. And when it implements modules, it implements specific versions, or revisions. So the idea here is that a package defines an exact schema: whenever you download a package at a particular version, you know exactly what every single data node looks like; that's the intention, if you know what features are enabled. It allows import-only modules, versions, and revisions, and then the things that have been added here...
D
More
recently,
our
check
sums
that
allows
you
to
know,
without
necessarily
downloading,
on
the
assets
that
you've
got
the
correct
copy
of
them
and
also
the
other
thing
that's
been
refined.
More
recently
is
more
works
been
done
on
the
import
conflict
resolution
and
the
basic
principle
that's
being
applied
here
is
that
you
resolve
any
conflicts
explicitly,
so
the
conflicts
might
arise
when
you're
building
up
packages
from
other
sub
packages
and
they
are
implementing
or
importing
different
versions
of
modules.
D
D
D
So
if
you,
for
example,
have
a
case
where
one
of
the
package
was
implementing
BGP
at
version
X
and
another
one
was
implementing
BGP
version,
X
plus
one,
you
say
when
you
pull
those
two
in
what
does
that
combined
package
effectively?
Do
what
does
it
use
and,
as
it
says
here,
any
version?
Conflict
change
must
be
explicit
result.
So
you
always
want
to
be
very
clear
when
you're
reading
the
package
definition,
whether
or
not
you
are
implementing
those
packages,
you
pull
them
in
faithfully
or
whether
there
being
any
changed.
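The "resolve conflicts explicitly" principle described above can be sketched as follows. This is an assumption-laden illustration, not the draft's mechanism: sub-packages are modeled as plain dicts, and a combined package must name a winner for any module referenced at two different revisions.

```python
def combine(sub_packages, explicit_resolutions=None):
    """Merge the module lists of several sub-packages.

    If two sub-packages reference different revisions of the same module,
    the combined package must name the chosen revision explicitly in
    `explicit_resolutions` ({module-name: revision}); otherwise we refuse,
    mirroring the principle that version conflicts must be resolved
    explicitly rather than silently.
    """
    explicit_resolutions = explicit_resolutions or {}
    chosen = {}
    for pkg in sub_packages:
        for mod in pkg["module"]:
            name, rev = mod["name"], mod["revision"]
            if name in chosen and chosen[name] != rev:
                if name not in explicit_resolutions:
                    raise ValueError(
                        f"conflict on {name}: {chosen[name]} vs {rev} "
                        "-- an explicit resolution is required")
                chosen[name] = explicit_resolutions[name]
            elif name not in chosen:
                chosen[name] = rev
    return chosen
```

In the BGP-at-X versus BGP-at-X+1 example from the talk, `combine` would refuse the merge unless the combined package explicitly states which revision wins.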
D
So
now
going
over
the
main
changes
and
since
oh
one
there's
been
quite
a
lot,
so
the
ones
I've
put
with
an
asterisk
down
there.
The
ones
I
talked
about
in
more
detail.
So
some
of
the
some
of
the
train
changes
a
fairly
and
formulaic
that
the
fact
that
the
work
on
yang
module
updates
moved
from
using
semantic
version
numbers
all
the
times
using
revision
labels
and
semantic
version.
Numbers
optionally
applies
to
packages
as
well.
D
So
if
you,
your
company,
wanted
to
use
just
revision
labels
and
didn't
want
to
use
yang
cember,
then
it
would
use.
It
can
also
define
packages
using
revision
labels
with
a
similar
versioning
scheme
in
terms
of
how
the
modular
version,
so
this
supports
both
as
support
for
check
sums
I'll
talk
about
in
a
bit
more
detail
as
support
for
locally
scoped
packages.
So
previously,
all
the
package
definitions
were
globally
scoped
available
off
the
box.
This
defines
packages
that
are
scoped
to
a
single
device,
I'll
explain
why
they
required
and
why
they.
D
This
improves
the
performance
that
I'll
talk
through
as
well,
and
the
use
of
packages
as
definitions
of
instance,
data
file
schema.
So
so
again,
I'll
talk
to
this
bit
more
detail,
but
this
isn't
just
about
using
a
package
putting
a
package
in
tune
into
an
instance
data
file.
It's
using
a
yang
package
as
the
definition
of
a
schema
for
an
yang
instance
data
file.
D
Package and module checksums. This was a request that came in: to effectively have some way of knowing that the YANG modules that you're referencing by URL, or the packages you're referencing by URL, are actually what you expect them to be. The solution that we've added here is to use a SHA-256 hash of either the module or the package definitions, and, to avoid you having to download them each time, these checksums are written into the package definition files.
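The checksum mechanism described here can be sketched in a few lines. This is a minimal illustration of the idea — hash the raw file bytes with SHA-256 and compare against the value recorded in the package definition — not code from the draft.

```python
import hashlib

def module_checksum(yang_text: bytes) -> str:
    """SHA-256 over the raw bytes of a YANG file, as described in the talk;
    note this makes the checksum sensitive to whitespace differences."""
    return hashlib.sha256(yang_text).hexdigest()

def verify(yang_text: bytes, expected: str) -> bool:
    """Check a downloaded module against the checksum recorded in the
    package definition, so a client holding a local copy can trust it
    without re-downloading."""
    return module_checksum(yang_text) == expected
```

A client that already has the module cached locally only needs the 64-character hex digest from the package definition to decide whether its copy is the expected one.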
D
When
you,
when
you
reference
a
package,
you
can
optionally
include
the
sha-256,
checksum
and,
and
likewise
with
the
modules
again,
when
you
provide
a
URL,
you
can
also
provide
a
checksum,
and
so
that
means
that
with
you,
obviously,
if
you've
got
those
things
locally
within
your
your
processor
or
server,
you
may
not
need
to
download
these
things
again.
You
can
be
sure
that
they
match
what
you
expect
them
to
be.
In
the
case
of
modules,
the
checksum
is
calculated
on
the
yang
file
so
effectively.
D
This
means
that
it
includes
whitespace
changes,
and
the
expectation
here
is
that
all
instances
would
match
if
you
had
a
URL
and
you
can
find
it
from
various
places
and
for
packages.
The
checksum
is
calculated
on
the
yang
instance,
data
file.
So
the
same
thing
and
again,
that
would
include
any
whitespace
changes
and
it
include
the
metadata
information
at
the
top
of
that
package.
Yes,
you.
B
D
Because we don't trust the URL. In the package definition you're providing a URL for where you can go and find that package, and so you want to check that what you actually download from that URL matches what you expect it to be. That's one of the cases. The other case that is useful is, again, when a device says "I'm using the package ietf at 2.0.0".
D
F
I wonder how stable this module checksum is, because modules are often extracted from RFCs, and different extraction tools just add or remove different amounts of whitespace. So I think it would be useful, maybe, to transform the module to some canonical whitespace and then compute the checksum, because otherwise it won't be reliable.
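The canonicalization idea raised at the mic could look something like this. It is only a sketch of one possible normalization (strip trailing whitespace, collapse runs of blank lines) — no canonical form is defined by the draft, and a real scheme would need to be specified precisely.

```python
import hashlib

def canonical_checksum(yang_text: str) -> str:
    """Checksum over a crudely whitespace-normalized form of a YANG file,
    so extraction tools that differ only in trailing whitespace or blank
    lines yield the same digest.  This normalization is an illustrative
    assumption, not part of the draft."""
    lines = [ln.rstrip() for ln in yang_text.splitlines()]
    out, prev_blank = [], False
    for ln in lines:
        blank = (ln == "")
        if not (blank and prev_blank):  # collapse repeated blank lines
            out.append(ln)
        prev_blank = blank
    return hashlib.sha256("\n".join(out).encode()).hexdigest()
```

Two extractions of the same module that differ only in this kind of whitespace would then hash identically, at the cost of extra complexity in defining and implementing the canonical form.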
D
Possibly
and
adds
complexity,
the
one
I
was
really
hoping
to
bind
it
to
is
the
fact
that
this
has
URLs
at
list
where
those
modules
could
be
found.
So
really
is.
The
key
for
me
was
trying
to
bind
that
the
files
that
are
downloaded
from
those
URLs
match
the
checksum
with
them.
So
whether
that's
still
required
I,
don't
know.
D
So
next,
the
next
change
is
the
relationship
between
packages
and
schema.
So
talking
about
local
packages,
the
the
aim
in
terms
of
what
this
works
trying
to
do
is
for
each
data
score.
Datastore
schema
to
be
fine
by
one
package,
so
you
have
a
one
package,
definition
for
that
datastore
schema.
That
makes
it
very
easy
for
the
device
to
advertise
for
each
of
the
data
source
schema.
What
the
package
is
that
defines
that
schema
and
it's
easy
for
clients
to
know
that
off
the
box.
So
an
ideally
like
just
really
to
be
useful.
D
You
want
names
to
be
available
offline
and
you
want
it
really
to
be
available
design
time,
but
there
are
cases
where
that
becomes
quite
tricky.
So
one
of
the
cases
is
that
your
software
itself
might
be
made
up
of
different
software
components
that
could
be
optionally
installed
and
added
or
removed,
and
hence
the
packages
that
you
can
generate
on
the
device
to
represent
the
combination
of
software
components
that
we
install
that
point
in
time
can
change
and
be
more
dynamic.
D
So
in
this
case,
you
wouldn't
expect
to
be
able
to
define
offline
packages
for
all
of
those
things.
It
might
be
helpful
to
define
a
local
package
that
device
that
says:
okay,
I'm
installing
these
sub
packages
and
it's
those
sub
packages
that
each
are
available
off
the
box
and
the
local
package
is
just
the
top-level
definition
to
pull
those
all
together
and
combine
them.
D
Similarly,
if
you
apply
software
bug
fixes
that
change
the
scheme
and
that's
another
case
that
we
think
where
you
might
deploy
particularly
an
advertised
particular
package
for
a
given
software
release
and
say
this
is
the
standard
version
of
software.
But
if
some
bug
fixes
have
come
along,
then
the
scheme
has
been
changed.
It
doesn't
no
longer
quite
reflects
that
what's
been
advertised
as
the
as
the
package
with
that
software,
and
so
you
use
a
local
package
to
say
actually
it's
that
it's
the
same
as
the
package
release
of
the
software.
B
We
have
a
couple
of
comments
from
jabber
really
on
the
previous
slide
about
format.
You
can
suggest
using
XML
as
it
is
lossless
and
he's
saying
that
as
a
contributor,
Martin
is
saying
actually
the
the
new
RFC
text
format
non
paged
is
lossless
for
text
lieu,
as
Charis
Walt
says,
whatever
we
decide
should
be
in
the
document.
Yes,.
B
D
D
Great. So I think I've covered local packages. Just to recap, in terms of the idea of local packages, the two key changes are, first, that the name of the package is no longer globally scoped. For all the other package definitions, the idea is that the package name is effectively globally scoped, but here, for a local package, the device could choose to define its own name for that package, which may collide with the same package name on another device, effectively.
D
So
that's
one
change
and
the
other
one
is
the
offline
definition
may
or
may
not
be
available
for
the
device
may
allow
you
to
download
an
instance
data
file
containing
an
offline
definition.
Perhaps,
but
that's
not
necessarily
expected
the
idea
really
is
it's
just
a
way
of
combining
package
packages
together
at
the
top
level,
if
required,.
D
So
the
idea
here
is
that
when
you
look
at
a
package
definition-
and
you
look
at
all
the
packages
that
it
includes,
you
should
be
able
to
know
whether
or
not
the
package
faithfully
implements
those
included
packages.
So
if
you
had
a
top-level
vendor
package
that
included
IHF
routing
a
particular
version,
you'd
have
to
clearly
indicate
whether
or
not
you
faithfully
implement
the
ITF
routing,
as
defined
by
its
package
definition
or
it's
been
modified
in
an
MVC
way,
perhaps
because
you've
got
some
deviations
or
perhaps
because
you've
included
some
different
versions.
D
We had this sort of thing in the draft before for the import-only modules — you could say which ones you no longer needed — but now, for the implemented modules, you can say "I'm implementing module version X, and I'm also effectively replacing other module versions of Y". That really matters for the import-only case, where you want to have this dependency on an import-only module. So feedback on that would be very useful. Next: what we think about packages as the schema definition for an instance data document.
D
So
this
also
goes
back
a
little
bit
to
what
balázs
was
presenting
on
the
idea
of
packages
is
that
they
define
a
yang
schema,
so
they're
meant
to
be
a
canonical
representation
of
yang
schema
instance.
Data
documents
obviously
have
a
schemer
associated
with
them.
Packages,
I
think,
would
be
a
good
way
of
associating
a
schema
with
an
instance
data
document.
The
reason
I
think
it's
good
is
because
the
idea
is
that
these
package
names
are
globally
globally.
Scoped
and
I
have
revision
numbers
and
you
have
a
checksum
associated
with
them.
D
So
you
need
relatively
little
information
to
guarantee
that
you
get
the
right
schema
and
it's
what
you
expect
it
to
be
the
one
thing
that
needs
to
be
resolved
with.
That,
though,
is
sort
of
like
the
bootstrap
scenario.
So
if
you
say
you're
referencing
up
to
a
package,
if
you're
saying
the
schema
for
your
particular
instance,
data
document
refers
to
a
yang
package.
Well
that
yang
package
itself
is
defined
an
instance
data
document.
What
does
it
use
as
its
schema?
D
Is
that
something
that
it
then
has
another
reference
to
a
another
package
or
a
module
set,
or
does
it
is
just
hard
coded
that
the
instance
data
library
understands
the
instance,
data
documents
Beast
to
understand
packages
as
a
native
construct
or
not?
So
that's
one
area,
I
think
that
needs
a
little
bit
of
worker
refinement
to
make
sure
that
doesn't
get
too
complicated.
So.
D
So
the
first
one
is
where
the
packages
should
use
a
different
structure
for
the
instance,
data
file,
representation
versus
what
you
get
out
of
the
device
eg
from
yang
library
or
or
similarly,
the
current
approach
is
sort
of
try
to
optimize
for
readability
in
the
file
and
optimized
to
minimize
data
transfer
from
the
device.
So
to
that
effect,
the
package
definitions
are
on
the
device
reused,
the
module
sets
from
the
yang
library
so
rather
than
having
affecting
the
same
equivalent
information
in
a
separate
tree
for
the
Yang
pakka
geez.
D
They
just
got
references
back
to
the
angle.
Library,
module
sets
the
idea
here
being
that
you
could
potentially
allow
those
same
word
row
sets
to
be
used,
define
to
define
the
young
library
schema
and
also
you
packages,
so
clients
have
the
option
using
both,
so
that
has
some
advantages
in
terms
of
affected
that
minimizing
the
data.
There's
a
disadvantage
of
doing
this,
though,
which
is
the
sort
of
more
complexity
in
structures
and
the
fact
that
the
structure
is
different
differ.
D
So
one
of
the
bits
of
feedback
from
balázs
was
it'd,
be
nice
to
use
the
same
structure
for
both
and
I.
Think
there's
obviously
two
ways
you
could
do
that
one
is.
You
could
try
and
augment
yang
library
with
the
packages
information
I'm,
not
sure
that
that
easily
works
and
I
think
it
fundamentally
the
hierarchical
nature
of
yang
packages,
I
think
with
them
break
the
yang
library
I.
Don't
think
you
can
easily
do
that
so
I
think
if
we
wanted
to
use
the
same
structure,
I
would
instead
go
for.
D
The
format
is
used
in
the
instance.
Data
document
and
use
that
on
the
devices
as
well,
so
you'd
have
more
repetition
of
this
data.
In
terms
of
of
defining
the
modules
that
comprise
the
packages,
however,
I'm
not
sure,
that's
really
a
problem,
because
the
intentional
yang
packaging
is
is
that
clients
shouldn't
have
to
download
this
information.
Is
there
if
they
need
it
and
they
want
it.
But
the
idea
here
is
that
you're
using
yang
packages,
you
know
what
they
are
off
the
box
and
you
avoid
having
to
download
this
information.
D
That's
that's
one
of
the
key
aims
here.
So
the
fact
there
is
a
hypothetical
duplication
of
that
operational
data
may
not
matter
in
reality,
and
so
currently
my
I'm
leaning
towards
changing
this,
but
I
think
this
would
be
something
again.
That
is
one
of
those
issues
that
we
need
to
sort
out.
It
doesn't
have
to
be
done
again
before
the
working
before
it's
doctored
by
the
working
group.
It
could
be
done
afterwards.
It's
not
a
significant,
an
issue,
but
it's
something
that
needs
to
be
considered.
D
Another
on
this
that
that
we've
considered
talked
about
is
the
yang
library
definition
requires
that
module
name
spaces
be
specified
in
terms
of
the
yang
package
definitions
they
allow
the
module
name
space.
We
specified
if
you
want
to
so
in
terms
of
the
yang
structures
being
used.
It's
it's
included
there,
but
it's
optional
rather
than
mandatory,
and
the
idea
here
is
I.
Think
that
that
with
the
JSON
encoding
effectively
is
almost
moved
to
the
point
that
the
module
names
are
globally
unique.
D
Anyway,
they
identify
the
data
of
the
namespace,
so
I'm
not
sure
whether
the
XML
namespace
is
still
that
useful
anymore.
For
these
things,
so
I
think
in
the
module
name,
the
region,
label
path
and
checksum
as
effectively
sufficient
and
by
path
or
II
mean
like
the
URIs.
But
you
can
fix
these
things
from
a
sufficient
to
go
to
identify
to
pull
these
things
down,
and
if
you
need
the
namespace,
you
can
get
that
out
of
the
module.
D
D
This one is about the checksums — I was explaining how they're used. So the question here is: the examples in the draft, I think, use the full SHA-256 checksum, which is 64 characters long — I think that's right. So these are quite long and verbose in the files. One thing I was thinking about was, rather than using the full SHA-256 checksum, you could allow prefixes to be specified, in the same way that Git allows you to use prefixes of the commit hash to identify particular commits.
D
You
could
potentially
do
the
same
thing
for
young
packages.
The
downside
with
that
is
that
in
get
it's
really
just
using
the
prefix
to
uniquely
identify
a
file,
it's
not
using
it
to
validate
the
integrity
of
that
file,
so
I
think.
If
we're
using
prefixes,
we
would
break
that
integrity
check.
Probably
so
the
proposal
is
actually
let's
keep
the
full
sha-256
checksums
in
the
files,
rather
than
allowing
prefixes
but
again
I'd
be
interested
in
one.
Has
opinions
going
the
other
way.
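The identify-versus-verify distinction made here can be shown in a short sketch. This is an illustration of the trade-off, not anything defined in the draft: a prefix is fine for naming an object, but forging a file that matches a short hex prefix takes only about 16^len(prefix) attempts, versus the full 256-bit digest.

```python
import hashlib

def full_digest(data: bytes) -> str:
    """The full 64-hex-character SHA-256 digest, as kept in the files."""
    return hashlib.sha256(data).hexdigest()

def matches_prefix(data: bytes, prefix: str) -> bool:
    """Git-style prefix matching: adequate for *identifying* an object,
    but much weaker as an *integrity* check, since an attacker only has
    to match the short prefix rather than the whole digest."""
    return full_digest(data).startswith(prefix)
```

Passing the full digest as the `prefix` degenerates to an exact-match integrity check, which is why the proposal keeps the complete SHA-256 value in the package files.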
D
Use
of
module
tags,
so
the
draft
allows
you
to
use
module
tags
to
associate
additional
metadata
with
yang
packages.
It
doesn't
define
any
mechanism
to
talk
to
the
device
to
add
or
remove
or
modify
the
tags
associated
with
a
package
solely
the
module
tags
draft
allows
you
to
define
tags
within
a
module
definition,
and
it
also
allows
you
to
update
those
tags
associated
with
modules
on
a
particular
device.
You
can
dynamically
modify
them.
D
So
the
question
here
is
whether
this
work
should
be
added
now,
should
we
add
support
for
doing
adding,
removing
and
modifying
package
tags
to
this
draft,
or
would
it
be
reasonable
to
defer
that
to
future
work?
I'm,
not
sure
how
displays
I
might
ask
the
author
of
the
yang
one
of
the
author's
if
he
has
any
thoughts
on
this
I.
B
Think
it's
a
low
priority
feature,
so
it
this
is
Liu
Berger
answering
as
contributor
I
mean
it's
a
low
priority
feature,
so
I
would
leave
it
towards
the
end
and
if
we
decide
that
the
group,
besides
that
it's
important
enough,
someone
will
will
write
some
text
and
if,
at
the
end,
there's
no
text
I
think
that
that's
our
answer,
it
could
always
be
done
later.
Yeah.
D
Packages
for
schema,
so
this
is
an
interesting
one.
The
idea
for
each
package
is,
it
represents
a
schema
and
it
says
here
potentially
incomplete.
So
am
I
that's
one
things
I
didn't
mention
here
is
in
terms
of
the
yang
package
definitions,
the
schema
that
it's
representing
doesn't
have
to
be
complete.
It
could
represent
an
incomplete
schema,
so
it
represents
say
a
set
of
modules
that
they
themselves
have
dependencies
on.
Other
modules
aren't
defined
as
part
of
that
package
and
there's
a
couple
reasons
that
those
was
incomplete.
D
Schemas
are
useful
there
useful
in
the
case
that
you
might
have
a
dependency
on
maybe
I
Anna
I
have
types
where
you
don't
binding
to
a
particular
version
to
leave
it
loose
in
the
package
definition
and
then,
when
the
package
has
been
used,
it
would
specify
exactly
which
version
it's
using
and
again.
It's
also
where
we're
defining
things
like
packages
for
a
bug
fix
or
something
you
just
want
to
include
the
modules
that
been
changed
in
that
package.
D
You
don't
have
to
include
the
whole
scheme
each
time
so
again,
that's
an
example
where
a
package
might
represent
an
incomplete
schema
in
the
package.
Definition,
it
would
specify
whether
or
not
schema
it
represents
a
complete
scheme
or
an
incomplete
schema,
but
they
actually
issue
here
is
to
do
with
nmda
and
datastore,
so
each
data
store
defines
its
own
schema.
So,
as
such,
each
data
store
would
have
its
own
yank
package
definition.
D
They
might
be
the
same
for
the
same
data
sources
or
they
could
be
different,
but
in
the
destination
in
our
c83
for
to
the
nmda
RC,
it
sort
of
implies
the
existence
of
an
uber
schema
that
represents
a
common
parent
scheme
across
all
data
stores.
What
it
actually
specifies
is
it
says
that
the
schema
for
the
operational
state
data
store
must
be
a
superset
schema
of
all
the
configuration
data
stores,
except
you
can
remove
some
things,
so
you
can't
deviate.
You
can
deviate,
remove
things
to
take
it
out.
D
You
can
turn
features
off,
but
otherwise
you
can't
change
the
data
types.
You
can't
change
the
meaning
of
notes,
so
I
think
what
that
really
means
is
the
existence
of
this
uber
schema
on
a
device
where
the
schema
for
each
data
store
must
be
a
subset
of
that
schema.
So
it
might
have
things
missing,
you
might
have
features
turned
off
nodes
missing
and
it
might
have
deviations
remove
nodes,
but
otherwise
everything
else
is
always
a
subset
of
this
effect.
D
This
uber
schema-
and
this
has
come
up
as
being
something
that's
potentially
useful
in
the
versions
of
selection
work
rather
than
trying
to
select
sets
of
schemas
for
the
data
stores,
it
might
be
more
appropriate
to
try
and
select
using
these
uber
schemas
identify
the
schemas
across
all
these
data
stores,
rather
than
individual
ones.
So
that's
something
that
we
still
sort
of
talking
about
of
looking
at,
as
well
as
the
data
tools
and
advice.
The
same
sort
idea
applies
to
these
sort
of
schema
families.
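The subset rule being discussed can be sketched by modeling each schema as a set of (module, revision) pairs. This is a deliberate simplification for illustration — real schema subsetting also involves features, deviations, and individual data nodes, not just whole modules.

```python
def is_subset_schema(datastore_schema: set, uber_schema: set) -> bool:
    """Each schema is modeled as a set of (module-name, revision) pairs.
    Per the rule discussed here, a datastore's schema may omit modules
    (features off, nodes deviated away), but every module it does
    implement must appear, at the same revision, in the uber-schema."""
    return datastore_schema <= uber_schema

# Example uber-schema and a <running> schema that omits ietf-ip.
UBER = {("ietf-interfaces", "2018-02-20"), ("ietf-ip", "2018-02-22")}
RUNNING = {("ietf-interfaces", "2018-02-20")}
```

A device could apply this check to each datastore's package to confirm it never strays outside the advertised uber-schema — e.g. a datastore implementing an older revision of ietf-ip would fail the check.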
D
So
if
you
had
a
set
of
packages
representing
so
the
ITF
modules
or
open
config
modules
related
modules,
the
same
principle
applies
that
for
those
schemas,
the
schema
for
the
individual
data
stores
may
differ,
but
they
still
logically
have
the
same
uber
schema.
That
represents
all
the
stuff
in
all
of
them
at
the
top
level.
So
again,
we
think
that
these
may
be
useful
to
describe
those
things
and
again
it's
really
the
packet.
D
But
you
can
calculate
it
and
you
can.
You
can
generate
it
by
merging
everything
together.
So
I
think
there's
a
question
whether
whether
that
would
be
useful
as
well,
and
so
that
one
is
still
in
his
open
discussion
on
what
we
do.
Those
and
I
think
it's
really
the
version
selection
draft
that
drives
that
I
think
maybe
it's
my
laughs.
Hopefully
my
last
slide
on
this
one
and
is
once
you've
got
these
packages.
D
One
of
the
principal
aims
is
to
try
and
add
some
more
conformity
between
what
ITF
produces
so
rather
than
producing
this
yang
modules
for
inderal
individual
features
can
ITF,
starts,
produce
and
sets
of
yang
modules
that
work
together
to
provide
functions
for
particular
services
and
things
not
service,
though
yang
modules,
but
implementing
those
services
on
devices.
So
so
I
would
like
the
packages
work
gets
adopted.
We
also
want
to
then
be
thinking
about.
Can
we
try
and
start
defining
what
these
things
look
like?
D
Does
it
work,
and
can
we
come
up
these
definitions
and
then
there's
a
question
of
how
do
you
manage
those
packages?
Do
we
need
some
ion
a
registry
for
those,
and
the
other
side
of
that
is
I
would
like
this
package
of
different
definitions
to
be
globally
unique.
So
again,
how
do
you
manage
that
namespace
I'm,
hoping
that
simple
registry
of
prefixes
on
the
package
names
is
sufficient
rather
than
using?
D
You
are
eyes
that
make
them
more
for
base,
but
against
there's
more
thought
about
this
and
how
we
do
that
and
how
that
works
and
questions
and
process
and
things.
So
this
is
just
all
early
days
and
this
not
really
draw
I
think
the
draft
mentions
the
idea
you
need
to
do.
This
doesn't
talk
about
the
details,
but
again
I.
Don't
think
this
is
something
needs
to
be
solved
for
working
group
adoption.
It's
just
part
of
work
as
this
work
evolves
in
the
working
group.
That's
my
last
slide
on
this
part
great.
D
B
K
Please. OK — a question from the floor: when I read through the YANG package draft, I think it's quite useful for implementations, but right now I think there are no standards defining how we can form a YANG package. For the uber-package it seems clear, but we could, for example, use only two modules to form a YANG package. So right now there's no standard defined in the YANG package draft.
D
Okay,
I'm
not
be
added
I
think.
The
idea
here
would
be
that
the
I
wouldn't
want
to
define
any
actual
packages
within
the
packages.
Draft
has
a
couple
examples,
but
the
idea
would
be
to
have
separate
RCS
to
define
an
ITF
base
package
for
what
modules
would
go
into
that
and
one
for
who
our
eyes
have
rats
and
that
sort
of
thing
so
but
I
think
yes,
I,
think
I'll
be
tricky
to
define
those
but.
F
Hooked
up
one
idea-
maybe
it
might
be
useful
in
some
use
cases
to
include
a
PGP
signature
of
the
content
so
that
it's
somehow
a
sure
that
it's
the
right
packages
that
somebody
received
so
maybe
as
an
optional
item.
It
could
be
useful
to
add
some
kind
of,
let's
say
PGP
signature,
to
say
to
sign
the
checksum
so
that
it's
real
the
content
that
that's
supposed
to
be
there.
F
L
L
Specifically,
this
is
about
addressing
this
requirement
from
the
requirements
draft.
We
need
to
allow
for
a
way
that
existing
clients
have
a
way
of
interacting
with
a
yang
driven
server.
That
is,
is
a
way
in
which
they
expect
a
way.
That's
not
going
to
break
those
existing
clients,
and
we
also
need
a
way
to
be
able
to
distinguish
now
that
we've
introducing
yang
packages,
we
need
a
way
of
being
able
to
distinguish
what
version
of
a
package
we
may
want
to
use
if
a
device
happens
to
support
multiple
packages.
L
What
do
we
want
that
schema
to
look
like,
so
these
are
the
the
goals.
This
is
the
wherefore
of
the
version
selection
draft
in
particular.
The
solution
here
will
allow
servers
to
do
these
non
backwards-compatible
changes
and
clients
do
not
necessarily
then
have
to
always
track
the
latest
and
greatest
so,
for
example,
a
server
could
support
version,
two
of
a
given
package
and
version
one
of
a
given
package,
so
the
clients
that
understand
version
1
of
that
package
can
select
that
that
is
the
version
by
which
they
want
to
interact
with.
L
That
is
the
schema
that
they
want
to
see.
Obviously,
then,
therefore,
this
makes
use
of
the
yang
packages
that
Rob
just
presented,
and
we
need
to
have
a
way
for
the
servers
to
advertise
this
support.
What
packages
do
they
support
at?
What
version
do
they
support
and
we
have
to
be
able
to
say
this
is
the
default
version
and
we'll
talk
a
little
bit
about
how
we're
going
to
do
that?
That's
one
of
the
open
items,
and
then
additionally,
then,
how
does
the
client
make
that
selection?
How
does
the
client
say
this?
L
Is
the
package
the
schema
I
want
at
the
version
I
want,
so
that's
what's
laid
laid
out
in
this
particular
draft
servers
are
not
required.
This
is
something
that
we
debated.
Quite
a
bit
on
the
working
group
servers
are
not
required
to
concurrently
support
clients
using
different
schema
versions.
In
reality,
it
may
be
very
difficult
for
a
single
server,
a
given
server
to
support
two
major
revisions
of
a
given
package.
L
So,
for
example,
a
server
or
a
packaged
version,
3.0
may
come
out
3.00,
for
example,
but
not
all
servers
need
to
support
that
and
as
well,
if
you've
got
a
server
that
supports
version
2.0
and
1.0,
there
could
be
non
backwards,
compatible
changes
there
that
the
server
can't
reliably
render,
and
there
has
to
be
some
deviation
to
indicate
that,
for
example,
we
are
not
going
to
support
a
node
at
string
when
it
used
to
be
int.
We
can't
do
both
at
the
same
time.
L
L
We talked about a NETCONF solution whereby a client selects a specific version of a schema by using a different TCP port number, and then we thought about that, and we thought, well, that's going to really proliferate ports as we go — and proliferating ports quickly is tough to do. So we pulled that out, and we picked an RPC-based approach in order for NETCONF clients to be able to select a specific schema.
L
In
particular,
we
initially
started
by
saying
a
client
selects
this
particular
package
at
this
particular
version,
but
we
then
started
saying-
and
this
led
into
some
of
this
uber
schema
talk
or
uber
package
talk
that
Rob
mentioned.
How
does
a
client
string
together
multiples
of
these
packages?
So,
for
example,
if
they
have
a
l2
VPN
package
and
an
l3
v,
how
do
they
bring
these
together
to
come
up
with
a
overall
cohesive
schema
that
that
client
may
care
about?
So
we
added
support
and
still
an
open
issue
for
discussion.
L
L
So here we go. A versioned schema is associated with, as we talked about — it could be a semantic version or a revision label — but it is associated with those YANG packages, and within that we have this notion of sets of schemas that string together to form one cohesive schema that the client is interested in using, at specific versions of the sub-packages. Within that, we can do multiple things here.
L
But
then
we
run
into
some
issues
with
how
and
Rob
mentioned
this,
how
do
we
resolve
some
of
the
intentionally
inherent
conflicts
between
different
schema
that
might
use
or
different
packages
that
might
use
different
modules
or
different
modules
and
different
versions
of
those
modules?
But
what
we
want
to
be
able
to
do
is
have
a
way
of
Netcom
clients
being
able
to
say
this
is
the
set
of
packages
and
versions
I
want
or
the
schema
that
I
want,
and
the
same
for
Netcom
or
sorry
with
rest
cough
and
with
rest
comp.
L
We
have
a
offshoot
branch
in
which
the
client
will
make
a
query
to
be
able
to
say
this
is
the
set
of
packages,
or
this
is
the
schema
at
this
particular
version
that
I'm
interested
in
so
different
route
for
rest,
kampf
and
the
RPC
for
net
conf.
This
is
the
version
selection,
the
yang
tree
output,
of
that
you
can
see
how
this
breaks
down
we'll
go
into
a
little
bit
more
details.
L
We
look
at
examples
specifically
of
how
the
RPC
works
and
that's
gonna
lead
us
into
some
of
the
open
questions
that
the
design
team
has
been
having.
This
is,
for
example,
how
a
server
will
advertise
support
for
specific
packages
at
specific
versions,
so
this
happens
during
the
capabilities
exchange,
so
the
server
will
say
that
I
have
the
capability
for
these
sets
of
packages,
so
example
ITF
routing
at
a
specific
version
or
two
specific
versions,
a
vendor
and
a
vendor
package
at
two
specific
versions.
So
this
could
be.
L
N
L
On the different mechanism for RESTCONF versus NETCONF: with RESTCONF, we had the ability to use a different URL. For NETCONF, we talked about a few different things, and we thought the RPC seemed more natural with respect to what a NETCONF client would expect to do. So that is why — and again, that is one of the things that we debated most recently: what should we do?
L
L
And we'll get to that — personally, I agree with you, and we're going to get to that, in particular with the open questions in a minute here. So this is the example of the NETCONF RPC. Balázs just kind of hinted at something that we're going to get to in a second here: what happens — in this example maybe there wouldn't be conflicts — but what happens if the client selects a set of packages, or a set of schemas, that inherently conflict?
L
Obviously
you
could
just
nak
this
and
instead
of
turning
returning
an
okay,
you
could
return.
An
error
replied
to
the
the
RPC
request,
but
it
might
be
better
if
there
was
a
way
of
having
a
single,
some
vetted,
probably
the
wrong
word,
but
a
single
definition
for
the
overall
schema
that
the
client
wants
to
use
so
that.
N
Anima
has
a
continuation
on
jabber,
so
Martin
says
config
false
data
instead
of
special
for
protocol
capability,
to
which
Rashad
says
Martin.
Do
you
mean
to
have
a
different
solution
from
what's
currently
in
the
document
and
Martin
responds
saying
why
not
use
config
faults
data
instead
of
special
protocol
capability,
I
thought
RC
was
using
a
config
fault
tree,
perhaps
I'm
mistaken.
D
Confused
at
the
at
the
question
so
robertson,
cisco,
so
I
think
that
the
information
of
what
you
could
choose
would
be
in
config
force
for
both
neck
get
from
restaurants
is
foam
in
both
cases.
The
reason
we
put
it
in
for
capabilities
exchange
for
neck
confers.
We
thought
that'd
be
easier
for
a
client
when
it
connects
to
know
what
to
vote
straight
away
and
deburr
to
choose
on
that
initial
RPC.
Beginning.
A
D
...when you want to choose the schema, whereas in the RESTCONF solution, because it's done on a path-based thing, effectively you just get the data and then choose the right paths. So it's to do with: at what point in time do you choose the schema you're using, and getting that early enough in the process. Yeah.
L
One
of
the
things
we
did
discuss
thanks
robb.
One
of
the
things
we
did
discuss
was
wind
is
when,
when
does
the
capabilities
exchange
occur
and
could
the
client
simply
say
in
its
capabilities
what
it
wanted
to
use,
but
the
capabilities
exchanged
can
occur
simultaneously,
so
we
wanted
to
it
had
to
happen
early.
We
had
to
have
some
way
of
of
having
in
the
Netcom
session
this
happened
early
and,
and
so
that
was
the
other
thing
that
we
another
reason
why
we
went
forward
on,
at
least
on
the
net
coincide
with
the
RPC.
N
B
L
L
So this is — I've already touched on a few of these, and Balázs brought up the fact that, when you are arbitrarily allowed to chain together schemas, the server may not be able to — you know, "I'm sorry, Dave, I can't let you do that". The server may NAK it, because the strung-together schemas that are selected don't really work together.
L
They
can't
be
simultaneously
supported
by
the
device.
Maybe
another
client
in
another
session
has
already
made
their
choice
and
the
device
can't
concurrently
handle
both
versions,
say
of
a
particular
package
or
you
might
be
trying
to
set
change
this
on
the
fly,
meaning
you
did
it
once
at
the
beginning
of
the
session
and
now
you're
trying
to
send
an
RPC
again
for
a
different
schema
version,
and
then
the
server
can't
support.
That
here
is
a
config
example.
Rashad
is
very
meticulous
and
generating
the
full
example
of
the
of
the
yang
module
within
the
draft.
L
But
let's
get
to
the
open
item
since
I.
Think
we
are
running
behind.
Do
we
allow
multiple
schema
sets
to
be
selected?
Polish
already
mentioned,
there's
a
problem
with
that.
It
might
be
better
to
say
that
we
want
to
either
define
this,
maybe
at
a
config
time
where
the
the
client
has
to
resolve
those
those
conflicts
explicitly
as
Rob
was
mentioning
in
his
presentation,
or
maybe
it's
something
where
we
have.
These
uber
schema
on
the
device,
and
there
is
just
one
or
a
set
of
schema
that
are
supported
and
maybe,
for
example,
IETF
version.
L
One
IETF
version
two,
and
that
includes
all
of
the
IETF
modules
and
those
are
a
package.
I
should
say
at
a
specific
revision
and
version.
That
means
the
conflicts
are
resolved.
It's
clear
what
the
client
would
be
getting
in
terms
of
an
overall
schema,
and
you
don't
have
this
kind
of
frankensteining,
of
putting
together
different,
potentially
incompatible
sets
of
packages
in
terms
of
recommendations.
We've
been
I
said
earlier
myself
personally,
I,
like
the
kind
of
uber
or
or
predefined
schema,
that
is
free
of
conflicts
that
we
know
is
going
to
work.
L
I don't know if we have design team consensus on that, but that is one of the things we've been talking about very seriously in some of the latest meetings I've been a part of. That's an item for discussion too. Rob mentioned the datastore relationship: we now have a one-to-N relationship between the datastore and the schema that could be defining it. How do we handle those conflicts? This potentially goes away.
L
This is: how do I handle the potential conflicts between the modules? As Rob mentioned, either the devices just define a... or the YANG servers, the YANG-driven servers, support a set of kind of overarching or uber packages that define their schema, and that is the level at which the client can select. So we go back to still a one-to-one relationship. Related: do we need that superset schema? Rob mentioned the IETF, OpenConfig, and native vendor schemas.
L
That
is
one
way
of
resolving
these
conflicts
where
the
vendor
pre.
Does
it
and
says
that
we
support
Oh,
a
package
called
ITF
100
and
we
have
offline
this
yang
instance
data
that
shows
what
that
package
is,
and
the
client
therefore
knows
what
to
expect
same
thing
with
a
2.0,
so
that
could
include
sub
packages
around
l2
VPN,
l3
VPN.
But
the
client
selects
from
a
version
selection
standpoint
that
overarching
package
that
overarching
schema.
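The sort of vendor-published "uber" package described here might be modeled along these lines. This is a minimal sketch only: the container and leaf names are illustrative assumptions, not taken from the presentation or from any adopted draft.

```yang
// Hypothetical sketch of an overarching package advertised by a server.
// All names here are illustrative, not from an adopted draft.
container package {
  leaf name {
    type string;        // e.g. "ietf"
  }
  leaf version {
    type string;        // e.g. "1.0.0" or "2.0.0"
  }
  // Every module is pinned to a specific revision, so the overall
  // schema is fully determined and conflict-free by construction.
  list module {
    key "name";
    leaf name     { type string; }
    leaf revision { type string; }
  }
}
```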
L
The other thing is: how do we indicate what is a default schema and version, so a default package, I should say. One of the things we talked about was having the semicolon notation: just say "semicolon default", and say, as part of the capabilities exchange, this is the default. If you don't do that RPC, if you don't do anything, this is what you'll get. Likewise, something that we need to discuss is what recommendations we might want to give to implementers that say how to decide for clients that don't know anything about this. So this is a client that understands version selection, and what they'll get by default; but what if a client doesn't yet understand anything about this? How do we support that client? We can give recommendations that say the default, non-selected schema should be something that maintains backwards compatibility, let's say. For example, we could give that recommendation.
D
I'll try to be quick, okay. So this is the last draft of the set of five; it completes the solution. So, what is YANG schema comparison? Effectively, this draft is defining algorithms to compare YANG modules and YANG schema, to determine the scope of changes between different arbitrary revisions and versions. It's similar to what we have all been talking about a lot: updating and using YANG semver as modules change, with NBC (non-backwards-compatible) changes and backwards-compatible changes.
D
But the idea here is to define the tooling for how you do that, and the tooling would work both between modules within their history, but also, if you have some sort of branching occurring between modules, the ability to compare versions between different branches. The reason this is important is because semver doesn't, in all cases, solve all the issues.
D
This is a -00 revision; it was written last week and published on Saturday or Sunday. In terms of the actual solution, this has been discussed quite a long time; it was really a matter of writing it down, but this is a relatively new draft. So why do we want this? Well, revision labels and YANG semver work in the mainline case: if you're updating along a linear revision history, then it works quite well, but in the case where you get to where it's branched, it is not so useful.
D
You can't rely on just those semver numbers. The second reason this is useful is in terms of actually getting the right semver numbers, or labeling modules with the correct NBC labels: it's useful if you've got tooling that can actually identify those, rather than relying on humans doing it, as humans generally get it wrong. The third reason this is useful is that clients aren't impacted if the schema has changed in the bits that they're not using.
D
So by having it defined in a standard, it means that any tools can work with those same definitions. In terms of the details: it's a generic tree comparison algorithm; it's not particularly magical in what it's doing. It's just walking down the trees, the schema trees, and comparing them. The comparison is performed via identifiers rather than the ordering; that's the difference from what's in RFC 7950. This means that you're allowed to reorder statements, and that's not a problem, but it does add some other complexity. So that's one choice.
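To illustrate the identifier-based comparison with a made-up fragment (not taken from the draft):

```yang
// Revision 1.0.0
container system {
  leaf hostname { type string; }
  leaf location { type string; }
}

// Revision 1.0.1: statements reordered, nothing else changed.
container system {
  leaf location { type string; }
  leaf hostname { type string; }
}
// An identifier-based comparison matches nodes by name ("hostname",
// "location") and reports no change; a line- or order-based diff
// would flag both leaves as modified.
```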
D
I'll talk about those in a minute and give examples. So, the filtered version of the full YANG schema comparison: this covers the case where, as I was saying, it tries to answer the question from a client's perspective: if moving from one software release to another, okay, there might be some number of collateral changes; am I going to be affected by those? The suggestion here is that you could filter out some of these aspects, because they are probably less interesting for clients.
D
So if groupings have been changed, in terms of the actual names of those groupings having moved around, that doesn't affect the schema that's constructed, so clients probably won't care very much. If the module metadata information has changed, you probably don't care. You can restrict the comparison to the subset of features you actually care about, the ones you are using.
D
You could restrict the comparison to the subset of the schema that's being used by the clients: you could feed in some instance data document that says "this is the configuration I use", or you could feed in some XPath saying "these are the trees I'm interested in", so that the results of that comparison are actually tuned to what you're interested in. And, finally, you could filter out editorial changes.
D
How do these annotations work? So, I've got an example of fixing a description. I've gone from revision 1.0.0 to 1.0.1; that's the standard module versioning update rules, with the semver being used there. And then, at the bottom, in that container foo, you can see that I've changed the description from "do some stuff", with a misspelling, fixed that, added a full stop, and I've now labeled that as an editorial change, and said which particular revision that editorial change occurred in.
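A reconstruction of that example from the description given here. The `rev:revision-label` style of statement comes from the module-versioning work, but the annotation name for marking a change as editorial (`cmp:editorial` below) and the concrete date are guesses, not the draft's actual definitions:

```yang
revision 2019-11-02 {
  description "Fixed a typo in the description of container foo.";
  rev:revision-label "1.0.1";
}

container foo {
  description
    "Do some stuff.";     // typo fixed, full stop added
  // Hypothetical annotation: this change is editorial only,
  // and it occurred in revision 1.0.1.
  cmp:editorial "1.0.1";
}
```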
D
Another example: this is a different label, a different annotation. The draft defines an annotation for effectively being able to rename a node. So it's changed here from foo to bar. This change has been done in a non-backwards-compatible way, so the module version has gone from 1.0.0 to 2.0.0, but you've got a label under the new container bar to say that it was related back to foo.
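Sketching the rename case as described. Again, the annotation name `cmp:renamed-from` is a placeholder assumption for whatever the draft actually defines:

```yang
// Version 2.0.0: container "foo" renamed to "bar", an NBC change.
container bar {
  // Hypothetical annotation telling the comparison tooling that
  // "bar" used to be "foo", so it diffs bar against the old foo
  // instead of reporting a delete of foo plus a create of bar.
  cmp:renamed-from "foo";
  leaf value { type string; }
}
```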
D
So when the tooling is doing the comparison of the two trees, first of all it would look for container bar in the old module and wouldn't find it. It then sees it got renamed from foo, and does the comparison against foo. So it allows you to do smarter comparisons, whereas by default, otherwise, you would flag it up as a delete and a create. We haven't necessarily figured out exactly what all of these things should be at this stage; these are just ideas of the sorts of things you could do.
D
So, I think to clarify: I think that's a detail that we need to work out, and we need to work out what these extra annotations should be, which ones are useful. So, the next steps. One question is: this defines various extensions; they could be done in the module versioning draft, ietf-yang-revisions, or in a new module within this draft, so it's questionable where that goes. We need to work out exactly which of those annotations are needed and useful, again.
D
I think that could be done after working group adoption. And there's one question: do we need an annotation to mark something as NBC? At the moment, it only... it assumes NBC by default, if it doesn't know, and then adds annotations to say it's either backwards-compatible or editorial. And then there's another question that's come up.
B
Right, thank you. We see that people are already streaming out, because we are out of time. We're going to take the first ten minutes of the next session to discuss adoption. There has been some interesting conversation; if anything took place in Jabber that wanted to be channeled and we ran out of time, we're going to do that in the first ten minutes. So please come back. Thank you. Thank you. Oh.