From YouTube: IETF114 NETMOD 20220727 1900
B: Thank you, okay. It is time to start our session. Welcome to NETMOD, in Philadelphia and online. I am Lou Berger. With me are Kent Watsen and Joel Jaeggli, my co-chairs. They are both online, and Kent will be the one coordinating the slides and handing off to the different presenters.
B: So I think this will all work. Well, this is the IETF, so we have our usual Note Well. The short of it is that anything you say here becomes part of our permanent record and is considered a contribution. If you're unfamiliar with the Note Well, please take a look at the main IETF page and follow the links to familiarize yourself with it.
B: We've also been asked to remind participants that masks are required in session, and to keep them on; the mics seem to work just fine with a mask on. Remote participants: if you're hearing me, you've already figured this out, you're on Meetecho. I'll also point everyone to the joint note taking; the link is here, and there's also a very nice way of getting to it from Meetecho.
B: It is great if you capture someone else's comments, and it's even better if you go make sure that your name is captured correctly in the minutes and that your comment was accurately represented. All material has been uploaded.
B: We're going to spend much of our time today talking about YANG versioning. This is a topic I'm sure most of you are really familiar with; it's been going on for a bit. We have done a last call on the first two documents, and we had talked about holding those documents up until the full set of five documents was ready for last call. In discussion with the authors and the AD, we're actually going to do something a little different.
B: Basically, the consensus among those working on the documents is that they're not close to ready and they need a bit of time. Rather than hold up these other documents for what we originally thought was going to be a short time, but now looks like a long time, we're going to hear an update today, discuss it, resolve any of the open issues, and get them out.
B: Okay. So that's going to be the majority of our discussion today. Then we still have some time available: we're going to talk about a couple of unchartered documents, and we had 25 minutes left on the schedule, so we thought about what we might do with that time to make good use of us all being together.
B: We have two topics. The first topic is going to be YANG-next, which we've talked about a few times; we're going to have an informal discussion led by the chairs. The other topic is that our AD presented some ideas on how to move documents faster to the IESG, and we're going to talk about that as well. I think we have 25 minutes for those conversations.
B: We have one recently published RFC; thank you to everyone who worked on it. We're here to publish documents. It's always great to see everyone and meet and talk, but our product is the documents, so thank you to all who made that happen.
C: All right, so, Rob Wilton. I just want to speak to the two documents, the interface extensions and sub-interface VLAN model, that I've been sitting on for a long time. I'm glad to say that Scott and Donov kindly agreed, or offered, to help me progress those forward. I've sent an email on the first one to them today; I've had a quick chat with them this week, and the aim is to try and get an updated version of the first one out quite quickly.
B: The next document is the bis of RFC 6991. That one is through last call, and we should expect a write-up soon from the shepherd, so nothing really interesting there. Then we have the two documents that we talked about already; those are the post-last-call documents that we have. The new plan: syslog was returned to us a little while ago; it needs a minor change, and that's going to be talked about, I believe, next.
B: They are just informing us of an activity, and in this one they've said: we really want these documents to be published (the ones we just talked about), and please let us know when you expect them to be published. We had actually had the whole conversation about not holding these documents up before this came in, so it lines up well. So, you know, I don't think we have too much to discuss in preparing a response.
B: We expect to draft one and send it, and if someone feels like giving us a draft rather than waiting for the chairs, we're welcome to take a look at it. The way to submit that is to send it to the working group list and say "we propose this; this is a response." If no one does it, the chairs will probably get there. Oh yes, all right: he said that he will help us with that.
B: Thank you, and again, just send it to the list and say this is a draft, a proposed response. And this is a little boring: we all know how to work remote now; we're well seasoned at that at this point. The main point in showing the slide is: if you would like to use the working group resources for interims or informal meetings, they are available to you. And with that we're going to move over to...
D: So I took that on, unfortunately. Next slide, please. Kent was only able to give me the rev-23 XML, while the draft was currently at rev 26, so I had to backport a lot of the changes that had happened into it, so I could at least get an XML that looked like what was published in 26. We did that; Mahesh was generous enough to provide his build system, and he helped clean up some of this.
D: So we got to 26 there, and then we updated things so that it had the right boilerplate and the right year and all of that, and it passed idnits and everything looked good. But then it was still broken, and the thing I want to focus on the most is what you see there in orange: we replaced the old keystore grouping with the crypto-types asymmetric-key-pair-with-cert grouping. So, next slide, and I'll show you; I realize this doesn't give enough context.
D: These quote-unquote cert changes in the draft are under the signed-syslog container area. If you support the feature of signing your syslog messages, you need a way of reflecting what cert you are going to use in order to sign, and what key you want to use in order to sign, or to use that cert.
D: What it now is (and this seems to restore the semantics of what this grouping should do) allows you to configure and support signing syslog messages. So I would love to hear whether I got it right or got it wrong; but if I got it right, I feel that the draft is where it needs to be to do a last call and hopefully push it through to ultimate ratification. And I see Kent coming to the queue. Kent?
A: Yeah, hi Joe, thanks for taking on this work; really appreciate it. I do think that you got it right. Maybe just one minor thing: you have the asymmetric-key-pair-with-certs (plural) grouping, and it might be the asymmetric-key-pair-with-cert (singular) grouping. Both groupings exist, but what you want, I think, may be the singular, not the plural. Regardless, I'll be shepherd for this and we'll look at it again.
D: I did look at both, and now that you're saying it and I'm on the hot seat, I can't remember why I chose this one; but I consciously chose this one after reading through the crypto-types draft. I would appreciate your insight; that was kind of what I was hoping to get, so thank you. But that is it. The next slide is just my call to action on this: I would like the working group to weigh in on whether we're ready.
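For readers following along without the slides, the grouping swap under discussion would look roughly like this in the syslog model. This is only a sketch: the container and feature names are invented for illustration, and the `ct:` prefix is assumed to be bound to the `ietf-crypto-types` module; the grouping name is the one Kent and Joe are debating (singular `-cert` vs. plural `-certs`).

```yang
// Hypothetical excerpt under the signed-syslog-messages area:
// reuse the crypto-types grouping, instead of the old keystore
// grouping, to hold the signing key pair and its certificate.
container signing-credentials {
  if-feature "signed-messages";            // invented feature name
  uses ct:asymmetric-key-pair-with-cert-grouping;
  description
    "The asymmetric key pair and associated certificate used
     to sign outgoing syslog messages.";
}
```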
E: Hello, everyone. My name is Qin, and I want to talk about node tags in YANG data models. This is a working group draft; the current version number is 08. Next. So, a little bit of recap: what is this draft talking about? We know a YANG module can comprise a set of data nodes, and within a single YANG data model, or across several YANG data models, data nodes may share the same common characteristics or features.
E: For example, both can be seen as some kind of KPI data, and node tags are meant to capture this kind of characteristic. Currently the node tags draft has already got a review from the YANG doctors, and the working group last call has been initiated. We got a lot of comments; thanks to all the reviewers. One substantial comment was about the mechanism defined in this draft: whether it is generic enough or tied to some specific use cases.
E: For these comments, we took an offline discussion with the reviewer, Jürgen, and we got a lot of suggestions; we've listed several questions here. We will give a detailed discussion of these questions and our answers, and you can see that the current version 08 tries to address all the comments from working group last call and the YANG doctor review. Next.
E: So these are the changes we made. Starting from version 06 we made two updates. The main change: we changed the title, since node tags apply not only to schema-level tags but also to instance-level tags.
E: So we changed the title to reflect that, and we also updated the introduction and abstract to try to make the objective clearer, covering both cases: schema-level tags and instance-level tags. And to make it more generic: in the original version we had several YANG extensions.
E: We had three extensions, like metric-type and opm-tag; those are not generic enough terms, so we tried to consolidate them into one general YANG extension. Also, in an early version we had an object tag and a property tag.
E: We think they are not important anymore, so we just removed them. We also documented clearly which tags can be captured, and how an application can be supported to re-synchronize after deploying any update, in Section 3, based on the comments from Jürgen. We also clarified that schema-level tags can be used in XPath queries, in Section 3, and we made several other changes in the appendix; you can see that some of them are just editorial changes. Next.
E: So, to revisit the questions we discussed with Jürgen: we have five questions, and I'll try to go through them. The first question is about node tags: are they schema-level tags or instance-level tags? Our answer is that we cover both cases. Originally we focused on schema-level tags, but based on the mailing-list discussion...
E
We
think
it
would
be
useful
the
current
both
cases
actually
so,
for
example,
our
design,
we
define
the
node
node
id
actually,
as
the
node
instance
identified,
so
this
can
fully
support
these
both
cases.
So
we
update
the
abstract
introduction
several
sections
to
try
to
reflect
this
so
for
this
one.
Actually,
if
anyone
seems,
has
any
different
opening,
please
let
us
know
next.
E: The second one is about how node tags can be changed: do the tags change frequently or not? We know there is another RFC, the YANG module tags RFC, which is RFC 8819.
E: It supports both system tags and user-configured tags, so in this draft we also support both kinds. System tags can be seen as schema-level tags: the system can add tags into the data-node list of the node tags module if a model writer used the YANG extension to define the tags. This is usually used to tag specific schema nodes, and usually this kind of tag is static and does not change frequently. But for instance-level tags...
E
So
the
client
can
dynamic
to
add
or
remove
an
attack
on
day
node
instance
during
the
running
time
stage.
Actually,
so
this
one
actually
the
is
possible.
So
you
can
see
it's
possible
to
to
change
the
tag,
but
we
think
also
not
frequently.
Actually
you
can
see
plan
a
can
add
tag.
Client
b
actually
can
track.
These
tag.
Changes
by
you
know
subscribe
these
kind
of
tag,
changes
so
client
a
actually
change
tag.
Changing
can
be
automatically
synced
to
the
to
plan
b.
So.
E: So that was the discussion, actually.
E: Okay, so you can see that we tried to reflect this discussion in the introduction and the sample-use-case section in the current version 08. Next.
E: So next is how tags are retrieved. Node tags can be retrieved, but the question is whether the tags and the data should be retrieved together or separately. Our answer is that tag retrieval should be decoupled from data retrieval. We have two ways to retrieve the tags; the first is that we define the ietf-node-tags module...
E: Question four is about tag-retrieval scale. I think this question also relates to question two. In the current version we think we can use standard protocol operations, like get, get-config, and get-data, to retrieve the tags, and we use standard filter operations. We have no intention of extending the filter operation for tag retrieval, so we will not use a special selection filter.
E: We clarified this. Secondly, we have two types of tag: schema-level tags and instance-level tags. For schema-level tags, the system only adds one entry per node, so tag-retrieval scalability is not a big issue. For instance-level tags, the client can use, for example, edit-config to add or remove a tag, so the number of tags the client manages is controlled by the client itself; I think the scalability can still be controlled.
E: Here we also discussed that a data node instance can be tagged with several different tags...
C: Rob Wilton. I've just got a quick question on the first one, about get-data. You're saying you're going to use an XPath filter as the operation to basically select, to just return, data with a particular tag set. Is that right?
C: And so why did you choose to do it that way, and not add an augmentation into the filter to say "I want to receive just the data that has this tag set"? Because, to me, XPath is quite an expensive way to do this, whereas a filter operation that just said "I'm looking for the data that's marked or tagged this way" seems a lot cheaper, computationally, for the backend servers. So I was just wondering why you went that way and not the other way.
E: As we clarified, we just try to use a standard selection filter; we don't make changes to it, so we think we can go this way. But on the complexity: we use this kind of tag to capture some of the common characteristics of the data. So what do you suggest?
C: So my suggestion is... I think you're saying you're going to use XPath, and I guess it's doing it with attributes or something, filtering the data by checking the attributes against the tag. But I always see XPath filtering as quite an expensive operation, and I'm not sure whether all implementations necessarily support it, or whether they just support subtree filters; I'm not sure how you'd do this with a subtree filter.
C: Whereas if you look at, like, the NMDA NETCONF extensions: they augmented the get-data operation with new fields, new options, for things like limiting the depth of the data to be returned, or filtering on things like the origin metadata. I was wondering whether another option here would be another augmentation to that, to say "filter on these tags", so we just pull out the data that has that tag set, or maybe the descendant data that has a tag set, rather than using the XPath mechanism.
G: In the draft you have these mask tags, and you can use these mask tags to remove, or mask, tags that the model designer put there. I see that as a very uncertain mechanism, because maybe the module designer knew that this must be tagged X, and then the user comes and says "no X". I see that as a big problem.
E: Schema-level tags, actually, are kind of static, yes.
E: The other question is how this is different from a simple YANG extension statement, for things that are static. So we compared metadata annotations with the mechanism we define in this draft. We think metadata annotations are usually tied to a given data node instance, and usually the value of a metadata annotation is assigned by the server.
E: Here we give an example: you can tag the schema node so that it applies to a whole list, so this can be differentiated. The second point is how this differs from YANG extension statements. We also give an analysis there: instance-level tags only tag data node instances, rather than statements in the data model.
E: So we don't use node tags to tag a revision statement in the data model, but we can definitely provide some additional auxiliary information, or properties; we give an example in the appendix. And a schema-level tag can be deleted via the node tags module and removed from the operational datastore, but because that kind of tag is static, it still exists in the YANG module. So this can still be differentiated from simple YANG extension statements. Next.
E: So we would love to hear all the comments and try to resolve them, and for the next step we want to hear guidance from the chairs.
B: Yeah. There are clearly some comments in the room and on the list. It wasn't clear to me that Jürgen's comments were all addressed, and I know he did not send any comments after the last update, so it would be good to make sure that his comments are addressed, as well as any other comments that came in as part of that discussion.
E: Yeah, we took this offline with Jürgen and had several iterations, and on the latest version he hasn't confirmed all the...
E: So we will ping him and take the conversation to the list again.
I: Hello, everyone. Yes, this is the beginning of the three pieces on the YANG versioning solution update. Next slide, please.
I: So we're going to have a little overview from myself, then we'll go on to the individual drafts with Balázs and Joe. Next slide.
I: So, a quick recap on the versioning solution. The complete solution consists of five drafts: updated YANG module revision handling; the YANG semantic version number scheme; YANG schema comparison tooling; versioned YANG packages; and, my favorite, protocol operations for package version selection. The working drafts can be found on GitHub at the moment, with all the issues logged there. Next slide, please.
I: And just a quick note: if you're having a look and wondering where some of these drafts are, those are the URLs. Next slide, please. So, a brief overview of what we're doing on the weekly versioning calls at the moment: authors and interested parties are meeting every single week. These meetings are open to all; it's not a fixed list of attendees. We're having regular participation from five different companies. It is quite vendor-heavy; I believe there's only myself and maybe one other who represents an operator.
I: So if you are interested in versioning, package versioning, or anything of the sort, please do consider coming and joining in from those backgrounds. We're obviously bringing key issues back to the working group mailing list, and the times are there at the moment. So, of course, if you'd like to join us, and you can and you're able on those time zones, please do. Next slide. Thank you.
I: To dive into it a little more deeply: the main focus of the weekly meetings at the moment, number one, is processing the feedback from the last call on the module versioning and YANG semver drafts. We've had a lot of complex discussions around this.
I: It has taken quite a bit of debate, and a number of the weekly calls have been taken up with this. We're in the process at the moment of addressing each of those comments and posting responses back to the mailing list, and quite likely there will be some updates to the drafts required.
I: We may move some parts of schema comparison into module versioning: per-element non-breaking changes or breaking changes, for example. We haven't yet looked at the packages or version-selection drafts.
I: I believe... next slide, please. I think there's a note on here that we are heading in that direction.
I: We are expecting to do another working group last call for both of those drafts, with the intention to bring them to RFC.
I: The work is taking quite a lot of time to converge, but the AD is, of course, asking us when the drafts will be completed, which has, I think, kicked things into higher gear. And of course, as I mentioned, more work will then continue on schema comparison, packages, and version selection.
G: We had one last call, with some interesting comments and some basic comments.
G: We released a new version, but that was mostly just to keep it up to date, so as not to let it expire. These are the main issues. We dealt heavily with the first one: whether we should have the data-node-change marking at the module level, or whether it's enough, or better, to have it at the individual data node or schema node level.
G: So we've got the module-versioning marking that says a change is not compatible; it basically just indicates non-compatibility. And if you have semver, a revision label, for it, that would also give you an indication of whether changes are compatible, non-compatible, or maybe just editorial.
G: And this is how we believe a node-level or schema-node-level statement would look. It indicates that one leaf, or one container, or one item has changed, and when it changed, so you can track back through the revision statements and the development steps, and maybe have some description as well.
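Sketched in YANG, the two levels of marking being compared might look like this. The module-level `rev:non-backwards-compatible` and `rev:label` statements are the mechanisms from the module-versioning and YANG-semver drafts; the per-node `nbc-change` extension, and its `vrev:` prefix, are placeholders for the node-level idea, which is still an open design.

```yang
revision 2022-07-27 {
  description "Widened the retry interval.";
  rev:non-backwards-compatible;  // module-level NBC marking
  rev:label "2.0.0";             // semver label: major bump says the same
}

container retry-policy {
  // Hypothetical per-node marking: records, on the affected schema
  // node itself, what changed and in which revision.
  vrev:nbc-change "2022-07-27: type of 'interval' changed
                   from uint8 to uint16";
  leaf interval { type uint16; }
}
```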
G: These two are not either/or: the node-level and module-level markings can live side by side and help each other. Maybe editorial changes can also be marked, but that's an open question.
G: So the alternative proposal was that we would have to mark every NBC change, every non-backwards-compatible change, with some annotation. That means that if there's no annotation, then the node is unchanged, or changed in a backwards-compatible or editorial way, and consumers of the module would have to scan through the module to find out what kinds of changes happened last time.
G: We think that this has problems. First of all, it might end up with a lot of such annotations, and we have to remember that some other SDOs (3GPP, O-RAN, and I think the Broadband Forum as well) release modules multiple times a year. That means they will have more changes, they will have more NBC changes, and that might clutter up the module; it will be difficult to find the containers between all these annotations.
G: Also: can we convince all these groups to use this? It's not sure. And how do you handle deleted nodes? If the node is deleted, you can't put the marking that it is an NBC change on something that's not there. And we also see that it might be a problem if the author forgets to mark something as an NBC change: that would automatically mean something. It might be just a mistake, but now it became a backwards-compatible change.
G: So the weekly group came to a consensus that in cases where per-node change annotations are good, we should use them, and authors may add them in any place; but we should not say that if one is missing, that implies something. If you have a list of 20 leaves and you forget to annotate the 15th, that doesn't mean you meant something.
G: So mostly, tooling can apply the versioning and other NBC rules that are already defined in the RFC and in the versioning draft.
G: It is not reliable everywhere, for multiple reasons. One is that sometimes it's just not possible to find out: if plain English text in a description changed, that might or might not be backwards compatible; and for some complex statements, like a regular expression, a when, or a must statement, figuring out whether a regular-expression change is compatible or not is very tough.
G: But that would allow the author to explicitly put in statements like "I changed the description, and this is actually an incompatible change, because it really works differently." We have not fully decided there yet. And yes, we need to hurry up, because the BBF has already chosen a versioning scheme; we don't want two other groups to choose versioning schemes independent of the IETF. Next.
G: Another case is when we really correct mistakes. What if I allowed an IP address to contain 355 instead of 255? Yes, the pattern is changed incompatibly, but it is still not a real change, because IP addresses were never 355 earlier. And in some cases, when all the clients are known, strictly for vendor modules, even an incompatible change might be a don't-care. Next.
G: So that concludes my statement about this breaking proposal from the working group last call: yes, per-node annotations are good, we like them, but not generally, and they won't replace the module-level annotations; they serve different purposes, and both are needed.
G: Trying to restrict what is imported using comments (or, not comments, but description statements and references) is not very good. Or we can have import by exact revision, but that means that if the imported file is updated, then you need to update the importer; that's a lot of work, very hard to coordinate, and it's not a good solution, which is shown by practically no one using it. I scanned all the IETF modules and I think I only found one usage of import by exact revision. So we are asking for import by derived revision.
G: The basic idea is that there are one or two new things that the importer needs and wants; otherwise I don't want to update, and I want to follow the revisions.
G: And this just enhances the compiler. So let's say I have five revisions: without this, any of them can be used; but if I tell the compiler that two are actually unsuitable, that's a big improvement. And if the compiler doesn't understand this import-by-derived-revision extension, we're no worse off than before; it's still the same five that can be used. Next.
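The import being asked for would look roughly like this, using the `rev:revision-or-derived` extension from the module-versioning draft (the module name and date are examples only):

```yang
import example-types {
  prefix ext;
  // Accept the 2021-03-01 revision or anything derived from it,
  // ruling out older revisions that lack the definitions we need.
  // A compiler that does not recognize this extension ignores it
  // and falls back to plain RFC 7950 import behavior.
  rev:revision-or-derived 2021-03-01;
}
```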
G: I also want to follow the updates, if there are error corrections, or just to avoid using very old modules, and I don't want to update all my importing modules all the time. And yes, there is a trade-off: either you specify a very precise set of modules that you are willing to import, but then this is complicated and restrictive; or you specify something simple, like what we propose, and then there are some risks.
G: In earlier versions of this draft we had very specific ways, specific statements, of saying what we want to import: we had ranges of module revisions, excluding or including additional ones, open-ended ranges, closed ranges. All these niceties had logic behind them, and they became so complicated that no one understood them, except maybe the author.
G: If a tool doesn't understand import by derived revision, it just ignores it. The consensus was: let's go with the RFC 7950 behavior, because to fail on this unknown extension would need updating the tools, and if we update them, why don't we update them fully? So we expect that these will just be ignored if not recognized by a tool. And then we have a small, separate problem; the lower point is about the revision-label scheme.
C: So, Rob Wilton from Cisco. I want to speak actually back to the previous section. At the moment, how the draft is specifying it, it's effectively saying, on revision-or-derived: if you support that extension statement, you can only select a module version that is derived from that specification. So it limits the set you can choose, and we play a bit of a game in terms of saying that you're not really changing the behavior for a compiler that doesn't support this extension statement, because it could have chosen that module version anyway.
C: It's also quite ambiguous what the behavior is. More recently, in the authors' group, we've had a discussion about a slightly alternative interpretation of that: rather than revision-or-derived being strict, that it has to be a later version or not...
C: ...it becomes a sort of suggestive approach: it suggests a version that you should use, but doesn't mandate or require it. So a compiler would still be allowed to pull up and use an older version, or a different version that isn't in that classification of revision-or-derived. Effectively, the suggestion there is: if you can, do choose a version that's compatible and within that selection criteria.
C: Otherwise you can't compile things together. But it also might mean that if you have, like, branched histories, it allows you to use a YANG module on a different branch that might actually have the type you want, because it's been added in two places but it's not derived from that original thing. So it sort of solves that problem. But the reason that I quite like it is...
C: It also means that the behavior now is completely consistent with not understanding this extension, because you're not ruling out which selection of modules you're using; you're just giving a hint to the compiler as to which one is better. So I don't know if anyone in the room, assuming they understood my explanation, has any thoughts on which way might be the better way to go; but we'd like to hear it if you do. Thank you.
G: And yes, we agree that this should go into YANG 2 and should become a mandatory part of YANG 2, but that doesn't mean we should stop working and wait for YANG 2 or YANG-next, because that takes a lot of time. So do it now, in YANG 1.1, as extensions, and integrate it into YANG-next whenever that happens. So yes, it's needed urgently.
B: This is Lou. I went through my notes on this one, because I felt like we discussed this at the last meeting, but I didn't see it in the minutes. So I thought it would be good to state here again that I believe this was the consensus at the last meeting, although it's not supported by the minutes.
B: Yes: how do we get to the point where the document is done and we have no more questions, so that we can press forward with last call and publish?
G: I think the idea is to have one more last call, at least; we have a few decisions on these issues. Can you go back to slide one, please?
G: I think we kind of settled on most of the issues. Maybe where to place the per-node extension, that's an open one, and we need to add that. If we settle that, then we need to adjust the current versioning draft, and then maybe a second last call to agree that it's finalized, and then, yeah. Okay.
B
G
B
K
H
D
D
D
So we did that to make a visual disambiguation with the "semver 2.0.0" spec — and I use the air quotes, for those not in the room who can see on video, because they did in fact change 2.0.0 multiple times but never increased their own revision or version number. But we did anchor to the one that we are using, and we clarified some of the use of the concept of semantic versioning, as opposed to semver the spec, and the YANG semver.
D
Specifically, there is more to do there — there's more text to be added; there is a GitHub issue on that. We've been focusing a lot more, as Balázs mentioned, on addressing some of the issues, but to—
F
D
—the question: we need to shore up the text. So folks should have, like, a focused session on each one of these drafts, shore up the text with the responses to the comments, and then get another, more detailed revision out. We also fixed some typos in the prefix — small things, like I said — and we compressed the acknowledgements section. Not to say we removed acknowledgements; we just made it take up less space on the page. Next slide, please. So that's the boring stuff; more interesting are the outstanding issues.
D
The big thing — and Balázs hinted at it — that was raised: semver. If you look at a semantic version, a YANG semver string, let's say, you can't tell unambiguously and authoritatively where it derived from, and that was never the intention; we'll get to that. Revision labels are not designed to be linear; they're designed to anchor you to a revision. And in fact I shamelessly stole Balázs's slide — we'll look at that again. It's semver, but it isn't, and we'll get to that.
D
I have a slide for it — you know, for those who can't wait for the movie: it isn't, but it can be. Is YANG semver trying to encode the release train? It may seem that way — we have a lot of vendors on the call — but you could in fact create your own revision label scheme that honestly does encode the release train. And Andy brought up an issue: there was too much work involved, or it was too noisy, to bump the YANG semver for work in progress. We'll talk about each one of these.
D
The next slide, please, Kent. And this is Balázs's slide — he did a great job on this one. The idea here is: no, you can't tell just by looking at the semver. In the top example, if you have a 1.0.0 and you have a 1.1.0 and then you see 2.0.0, you can't say that 2.0.0 contains everything that 1.1.0 has, because, as you can see from the slide, it derived directly from 1.0.0.
D
So while your brain might logically assume that, well, they made a minor revision to 1.1, surely they just added some things on top of it — there's no way to know that. What you do know is that 2.0.0 points to a revision; I can look at that revision and then work my way up using the YANG module versioning work and say it derived directly from 1.0.0, so I know there may be stuff in 1.1.0 that isn't reflected in 2.0.0.
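The derivation walk described above can be sketched roughly as follows. This is an illustrative sketch, not code from the draft: the `DERIVED_FROM` map is a hypothetical stand-in for the derision-history metadata that the YANG module versioning work provides.

```python
# Hypothetical sketch: map each revision label to the label it derived
# from, mirroring the derived-from chain of the module's revision history.
DERIVED_FROM = {
    "1.1.0": "1.0.0",   # minor release on the 1.x branch
    "2.0.0": "1.0.0",   # major release branched directly off 1.0.0
}

def ancestry(label):
    """Walk the derived-from chain back to the root revision."""
    chain = [label]
    while chain[-1] in DERIVED_FROM:
        chain.append(DERIVED_FROM[chain[-1]])
    return chain

def may_contain(newer, older):
    """A revision can only be assumed to include another revision's
    changes if that revision appears in its derivation chain."""
    return older in ancestry(newer)

# 2.0.0 derives from 1.0.0, not 1.1.0, so nothing guarantees it
# includes what 1.1.0 added -- exactly the situation on the slide.
print(may_contain("2.0.0", "1.0.0"))  # True
print(may_contain("2.0.0", "1.1.0"))  # False
```

Comparing the label strings alone (2.0.0 > 1.1.0) would suggest containment; only the chain walk gives the authoritative answer.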
D
And yes, it would be nice if we knew directly that the 2.0.0 must include it, but because branching exists at all, we can't know that. It's not the labeling that breaks this knowledge; it's the fact that branching is allowed. Even in base semver I can only do this as long as I constantly work linearly — as long as I always say that, once I come up with a new major release, I'm never going to do anything back on the previous major.
D
D
D
It's the fact that branching can happen. What we're trying to do is give you a more human-readable anchor point — the revision label — that gives you something that says: okay, I can look at that, and from what I know of semver, I know that between 1.1.0 and 2.0.0 there are some potentially breaking, non-backwards-compatible things that might affect me, and now I'm going to go and look at that further. That's what we're doing with the revision.
F
D
D
We have additional additives: we have the _compatible and the _non_compatible. You don't have to use them. If you want to do things in a way that is more in line with a straight capital-S, capital-V SemVer, you can do that. In fact, in the IETF we don't ever expect to see _compatible or _non_compatible.
D
What we do expect to see is things that look and behave more like the true semver 2.0.0 spec that we reference. So YANG semver is a superset of semver 2.0.0, and you can choose to use the pure semver spec. Though there is an action for us: we have to clarify what is unique.
D
We have it in there, but we need to spell out in no uncertain terms that _compatible and _non_compatible are unique to what we're proposing; the rest is completely in line with the rules of semver. You can use the rules of semver with your revision label scheme; you can declare a revision label scheme that is semver.
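The split described above — a pure semver 2.0.0 core plus the YANG-specific _compatible/_non_compatible modifiers — can be sketched with a small parser. The exact grammar in the draft may differ; this is an assumption based only on what's said here.

```python
import re

# Hypothetical parser: a semver 2.0.0 core (MAJOR.MINOR.PATCH) optionally
# followed by the "_compatible" / "_non_compatible" modifiers that are
# unique to the YANG semver proposal.
LABEL_RE = re.compile(
    r"^(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
    r"(?P<modifier>_non_compatible|_compatible)?$"
)

def parse_label(label):
    m = LABEL_RE.match(label)
    if m is None:
        raise ValueError(f"not a YANG semver label: {label!r}")
    return {
        "core": tuple(int(m.group(g)) for g in ("major", "minor", "patch")),
        "modifier": m.group("modifier"),          # None for pure semver
        "pure_semver": m.group("modifier") is None,
    }

print(parse_label("2.0.0"))
print(parse_label("1.2.3_non_compatible"))
```

A label without a modifier is indistinguishable from plain semver, which matches the expectation that IETF modules would never carry the modifiers.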
D
D
Aha — I hope I understood this correctly. We are not necessarily encoding the release train of software into the YANG semver. What we are doing is reflecting, at a high level, what changed in the module. We're saying — if you remember that picture two slides ago — that if I had 1.1.0 and I was using it, and now I produce a module 2.0.0, there are potentially — well.
D
D
We are not necessarily reflecting the underlying software of the server — meaning the features and capabilities of the software release train of the vendor. A vendor could come up with their own scheme. I work for Cisco; Cisco could come up with our own revision label scheme that really reflects the IOS XR version.
D
If we wanted to. But what we're trying to do here with YANG semver is give that high-level understanding. You can look at it at a quick glance and say: I know that there are — and you can read the draft — non-backwards-compatible changes between this version of a module and that version of a module; or I know there are potentially those minor, backwards-compatible changes, things I might be interested in learning about; or I know there are only editorial changes.
D
D
Okay,
I
actually
like
this
bumping
the
the
work
in
progress,
but
I
I
realize
that
that
it's,
it
is
work
from
what
I
have
seen
working
at
a
vendor.
There
is
a
tendency
to
implement
work
in
progress
drafts,
especially
ones
that
are
long
running
people.
Look
at
that
and
they
go
that's
interesting.
D
I want you to do that. Okay, and so a product manager goes: yeah, we want to make money, we'll do this. How do you know what they implemented — what version they implemented? Well, there might be, in the documentation, some draft reference that says: okay, I implemented, in this version of our product, draft-netmod-something — maybe that was a bad example there, but whatever — you might see that. But how would you reflect that with a YANG module?
D
D
So we bake in the draft name and its revision. We put the name in there because there can be parallel work streams revising the same YANG module. So we have the draft name in there with its revision, and that keeps it unique, so we can always use it as an anchor point to find the revision, find the inheritance, and find the changes that happened within that module.
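The idea of baking the draft name and revision into the label can be illustrated as below. The concrete label syntax here is invented for the example — it is not taken from the draft — but it shows how the draft-name-plus-revision pair keeps parallel work streams unique.

```python
# Hypothetical work-in-progress revision label: a base version plus the
# Internet-Draft name and its two-digit revision number.  The separator
# and layout are assumptions for illustration only.
def wip_label(base_version, draft_name, draft_revision):
    return f"{base_version}-{draft_name}-{draft_revision:02d}"

def split_wip_label(label):
    """Recover (base_version, draft_name, draft_revision) from a label."""
    base, rest = label.split("-", 1)
    name, rev = rest.rsplit("-", 1)
    return base, name, int(rev)

label = wip_label("2.0.0", "draft-ietf-netmod-example", 3)
print(label)  # 2.0.0-draft-ietf-netmod-example-03
```

Two parallel drafts revising the same module at the same base version produce distinct labels, which is the uniqueness property the speaker describes.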
D
D
D
The revision label didn't change, and I said: well, we didn't make changes to the module, so if you implement this, nothing has changed. We don't want to gratuitously bump the revision, and we don't want to gratuitously bump the semver revision label either.
D
D
That was costly. That is the last slide. Okay, so that is the update on YANG semver, with some of the work that we still have to do. Questions?
D
Are you assuming that you never use the date? You could use the date. What do you— you could.
D
D
G
F
L
D
J
[Name unclear]: is there anything wrong with gratuitous version bumping?
D
I don't think there's necessarily anything wrong with it. I guess this gets to the noise bit that Andy was raising: if you keep bumping these, you keep incrementing. There are a few sets of tools that extract YANG modules and add them to the file systems of, like, Git repositories.
D
You just keep adding on to that, and you could interrupt people — like, oh, I've got this new version, I've got to look at it — only to find out that it's the same as before. It just adds work where, it seems, Git doesn't really help you. I say Git just because I've been using it a lot this week.
J
Sure. I'm just trying to take the parallel from the pure development space, where I'm also trying to get my head around the whole version tracking — and what's semver and not semver there — and I just think it might actually help if we worked towards this idea that we don't have these version branches. I mean, if we look at the development flow, you've got branching, but you—
F
J
—have merging: you merge your features back into the parent thing, and then that follows semver properly. For me personally — and I'm new to the room — it just feels like we're adding confusion. But I do take your point about gratuitously bumping stuff and why that might not be a good thing.
D
The merging is interesting, because you can look back at that history and see that a branch has been merged back in. Here we don't have that linearity when looking back in a YANG module. You could still reflect that: if you did in fact merge, you could say, okay, I merged one—
D
I merged some of these features — and if you had the revisions, you would in fact be able to see that, okay, this 2.0.0 really did extend from 1.x. But looking just at 2.0.0 compared to 1.x — just looking at those two strings together — you don't know for certain where they inherited from. You have to do the upward tracking to find out if there was a merge, and where and when that merge happened. So I think we still get that; it's just not by visually comparing the two semver strings.
D
K
Yeah, Charles Eckel. And, you know, I just think there are different consumers of these drafts and the YANG models they contain, and I think some of the consumers of those will be following—
K
—you know, the work of the IETF datatracker; they'll know when a new draft comes out, so for them it's probably not a big deal. But I think there are a lot of consumers of this that really just care about the YANG modules, and for them it's just better if they don't see a change when nothing changed. I would save them the extra work and the churn and the "hey, why does this thing keep changing?" So I think it's better for them.
K
C
Rob Wilton, Cisco — I was just going to reply back to Andy's comments and things. So I think our aim and our goal here, even for the vendors, is to try to get a sort of linear development of the YANG models. That's what you want; that's what you're aiming for. And the case where the branching turns up is because we've shipped a release and we've got a bug fix we need to put onto that, and we can't—
C
We can't easily update that older release to the latest YANG model, because it's got a load of other code that we don't want, and that's where we get the branching. So it's sort of quite limited stub branches, and I would expect the stuff you're adding into those branches — bug fixes — either goes into the main line anyway, or the other way around, whichever way you put them. So you could mark the merging to say you've done that, but it depends which one gets written first, I guess.
C
B
So I just brought up the outstanding issues to make sure the group, the room, and everyone understands the plan: you're going to do an update, and you think you're going to be ready for last call once that update's done — or do you think you have more issues to discuss?
D
You want my honest opinion? Yes. I honestly think getting the people on a call who are going to be vocal about this would probably work best to close these out once and for all, rather than just doing another round of email last call — even though I know, process-wise, we probably have to do that. That's my honest opinion.
D
Well, I would like to do a new version. I think the last call is important, but rather than going back and forth on the last call, we might be able to hammer more things out if we did, like, an interim — getting the passionate people together to have that conversation more live than over email.
C
So, Rob Wilton, not with my AD hat on. There's just been an observation here that the people who've been raising a lot of these issues during the weekly calls with us, unfortunately, are not able to make this meeting, so they're not here. We're discussing all these issues, but the main people we want to have those conversations with, on the other side, are not here, so it's making it hard to resolve them — and we can try to resolve them over email.
E
C
So my view as a contributor is: we need to put the updates into new versions of the documents, we need to do another working group last call — a proper, full one — to say, look, this is where we're going to get to, and then flush the issues out. Whether we need to do an interim after that to resolve them, I don't know, but—
F
B
If everyone is okay with that — and I know maybe you would prefer not, but you can live with it — then we're going to proceed that way. So we look forward to seeing the update, and the working group should look forward to the last call — the second last call. Thank you. No, thank you. All right.
B
We are now going to move on to the non-chartered items. Qiufang?
H
Okay. Okay, thank you. So hello, everyone. This is Qiufang from Huawei, and on behalf of the authors and contributors I will give a presentation about the system-defined configuration work. I will present remotely.
H
Yeah. So actually this work has already been presented several times, and at the same time there has been a lot of good discussion on the mailing list — many folks have shared their excellent ideas and provided very great input, so thank you all. Based on a lot of that discussion, I think we have reached some agreements, which are already written into the current version of the draft. The very first one is that a config true, read-only system datastore is defined to hold configuration which is provided by the system itself.
H
H
We had a really rigorous discussion about whether offline validation of running alone is required. To avoid changing the definitions in RFC 8342, and also not to break any existing clients that rely on the validation of running alone, we agreed that the validation of running is required — that is to say, any referenced system configuration in system must be present in running. The third one is about a resolve-system parameter, which controls whether to allow a server to copy any referenced system-defined configuration automatically.
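The two agreements above — referenced system configuration must be present in running, and a resolve-system parameter letting the server do the copy — can be sketched as follows. This is a minimal sketch of the described behavior, not the draft's normative algorithm; the data shapes and names are assumptions.

```python
# Sketch: validate an edit to <running> that references system-defined
# config.  Without resolve-system, the client must have copied the
# referenced config into running itself; with resolve-system, the server
# copies it in automatically.
def validate_commit(running, system, references, resolve_system=False):
    """references: names in running that point at system-defined config."""
    running = dict(running)                     # don't mutate the caller's copy
    for ref in references:
        if ref in running:
            continue                            # already present, nothing to do
        if ref not in system:
            raise ValueError(f"dangling reference: {ref!r}")
        if resolve_system:
            running[ref] = system[ref]          # server resolves the reference
        else:
            raise ValueError(f"referenced system config {ref!r} not in running")
    return running

system = {"lo0": {"type": "loopback"}}
# With resolve-system the server copies "lo0" into running for the client.
print(validate_commit({}, system, ["lo0"], resolve_system=True))
```

Without the parameter, the same commit fails validation, which is the "must be present in running" rule stated above.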
H
H
It was a very strong restriction to say that the server MUST NOT modify running if not asked, but to avoid any potential non-backwards compatibility, and after some discussion on the mailing list, we weakened it and just said that it SHOULD NOT modify running if not asked. So it is preferred that a client can control configuration in running as much as possible, and that running is only updated by the client. A client can benefit from such behavior, which should work as a best practice.
G
H
We used to have some discussion on the mailing list, and there was some agreement on a client-control objective — to say that when the client retrieves running, it would like to get exactly what was sent to the server. So we used to have a client-control objective.
H
G
C
Rob Wilton, Cisco, as a participant. I think the "should not" is right here; otherwise it's sort of breaking how the whole YANG ecosystem has been designed to work. And I do take Balázs's point, though — I would say, in that case, you've got automation running on the device: you treat that as another client, and that's fine, I have no issues. Then it's modifying it, but it's still ultimately under the client's control, and they can turn that off or on and do whatever they're doing. That's fine.
A
I also think this is right, but I want to note that there's an existing RFC — the one that defines the crypt-hash, specifically for user management and for how to specify user passwords — and it says specifically, it defines it in the description statement, that if the password sent from the client to the server begins with dollar-sign zero dollar-sign, it means that the text thereafter is cleartext, and it's expected that the server will replace it with an actual hashed password and store it on the server that way.
A
And then, when retrieved back to the client, it will of course not be the same as what the client had sent; it will then be hashed. So I think that's an example of an already existing RFC that doesn't really follow the "should not" principle, but it's okay, because they described it. I mean, the best practice stands, even with the description statement there. I think it's perfectly reasonable for that to be the case.
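The crypt-hash convention Kent describes can be sketched as below. The "$0$" cleartext marker is from his description; the hash algorithm and the "$5$" stored prefix here are stand-ins for illustration, not the RFC's actual algorithm identifiers.

```python
import hashlib

# Sketch: a password sent as "$0$<cleartext>" is replaced by the server
# with a hashed form, so what the client reads back differs from what it
# sent -- a documented, deliberate server-side modification of config.
def store_password(value):
    if value.startswith("$0$"):                     # "$0$" marks cleartext
        cleartext = value[len("$0$"):]
        digest = hashlib.sha256(cleartext.encode()).hexdigest()
        return f"$5${digest}"                       # stored (and retrieved) hashed
    return value                                    # already hashed: kept as-is

stored = store_password("$0$hunter2")
print(stored.startswith("$5$"))  # True -- the client reads back a hash
```

The point in the discussion is that this divergence is acceptable because the behavior is spelled out in the module's description statement.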
M
M
Jason Sterne. I agree — I like that second bullet, basically saying that running should be valid and should be able to be offline-validated. My big concern is that at some point — and maybe we're never going to look at it — but, you know, NMDA mentions that some of the translations from running to intended—
M
—include templates, configuration templates, and that for me is a big problem here. I'm hesitant to tangle up this work in templates, but I'm worried that we might say running always has to be valid, and then, when we tackle templates at some point, I don't know how that can work with running being valid.
M
So I know you won't have an answer, Qiufang, but it's just a concern that maybe needs some discussion as part of the work.
H
Yeah. We said that running must be valid, as referenced from RFC 7950, and instead of that we can also just say that any referenced system configuration in system must be present in running — but whether that would ensure that running must always be valid, because we can have some template-expansion issue.
H
M
M
M
C
H
C
All right, so I'm going to try to avoid going down the rat hole that Jason's putting in front of us. I think, actually, NMDA already says that running has to be valid, but that's part of that, and templates are outside. I just want to go back to Kent's comments: yes, I think that's a great example of why this is a "should not" at the bottom and not a "must not".
C
H
So, let's move on. Since the last IETF meeting—
H
H
H
We have also made several editorial changes for clarification and explanation, which include: a clearer definition of system configuration, stating that system configuration is created in system and appears in intended; applied system configuration also appears in operational with origin equal to system; making it clear that system must always be valid; and making it clear that updating system will not cause an automatic update of running—
H
—even if some of the system configuration has already been copied into running, explicitly or automatically, before the update. And finally, we clarified the relationship between "read-only to clients" and overriding system configuration, which seems contradictory at first glance. When we say it is read-only to clients, it refers to the contents of the system datastore, which the client is not allowed to modify directly; but the client may override system-defined data by writing the intended configuration into running. So "read-only" and overriding system configuration look conflicting, but they are different things.
B
Okay, so a question to the group — I should have given you a little more warning — but we would like to know if you are interested in the working group continuing to discuss this.
B
So we're going to hold for just a moment. We have 37 people; hopefully we can get a little bit more participation.
B
So, the way this is trending, it's clear the group is interested in your work. We look forward to seeing the next revision of the document and to you continuing to discuss it on the list, as you have been. Thank you very much — and you have another slot now.
F
H
H
There are two examples. The first one, shown on the left side of the slide, is an interface data tree. The root is the container node "interfaces", and its child, an interface list node, exists with two interface entries. Inside each one we have the name as the key node, the interface type, and an MTU value. We can see these two interfaces as the system-defined ones, present when the device is powered on and the related hardware is present, and a client may try to edit some of the system-
H
predefined values. For example, a client may want to change the interface type from ethernet to tunnel with an edit-config operation, or try to modify the MTU value. But if a client tries to set the type of interface te-0/0 to something other than the predefined one, which does not match the real type of the interface, the server will reject that request.
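The interface behavior just described can be sketched as follows. This is an illustrative sketch, under the assumption that the server knows the real hardware type of each system-created interface; the interface name and inventory here are invented for the example.

```python
# Hypothetical hardware inventory: the real type of each system-created
# interface, as known to the server.
HARDWARE = {"te-0/0": "ethernet"}

def edit_interface(config, name, leaf, value):
    """Apply a single-leaf edit; reject type values that contradict
    the actual hardware, as the server in the example does."""
    if leaf == "type" and value != HARDWARE.get(name, value):
        raise ValueError(f"type {value!r} does not match hardware of {name!r}")
    config.setdefault(name, {})[leaf] = value
    return config

cfg = {"te-0/0": {"type": "ethernet", "mtu": 1500}}
edit_interface(cfg, "te-0/0", "mtu", 9000)   # tunable leaf: allowed
print(cfg["te-0/0"]["mtu"])                  # 9000
```

An attempt like `edit_interface(cfg, "te-0/0", "type", "tunnel")` raises, mirroring the rejection described above.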
H
H
While in the right diagram the case is different. Let's say that we have an applications module to define some application-layer protocols. In this case, the application is defined as a list which is a child node of "applications", and within the application list there is a protocol name, which is the key, the underlying transport protocol — TCP or UDP — and the port number, for the convenience of the users.
H
So in this case the application list can exist in multiple instances, but some of them are immutable while others are not. There are two different kinds of use cases for the immutable flag. Here I'm using system configuration as the examples, but remember that the immutable concept can be used outside of system configuration.
H
So, to be more specific: we agree that it is already the case today that a server can reject any configuration for any reason — for example, when a client is trying to modify an immutable configuration. But this work tries to provide more visibility to the client as to which nodes are immutable. I think the client can benefit from such a standard mechanism, which allows it to see what configuration is immutable on devices.
H
H
It means that the client is allowed to create an instance of that node, but modification and deletion are not allowed. And the immutable metadata annotation is used to indicate that, once a particular instantiated data node is created, the client cannot update or delete it. This annotation is only applied to list or leaf-list entries, or to instances inside particular list entries, and currently it is defined as a boolean type. But if we can agree on this solution, we may discuss later whether a simple true/false is enough.
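The annotation semantics just described — creation is allowed, but later update or delete of an annotated instance is refused — can be sketched as below. This is an assumption of how a server might track the flag internally, not text from the draft.

```python
# Sketch: entries named in the `immutable` set may be created once, but
# subsequent update/delete attempts on an existing instance are refused.
def apply_edit(store, immutable, key, op, value=None):
    if key in store and key in immutable and op in ("update", "delete"):
        raise PermissionError(f"{key!r} is immutable: {op} not allowed")
    if op == "delete":
        store.pop(key, None)
    else:                                  # create or update
        store[key] = value
    return store

apps = {}
apply_edit(apps, {"ssh"}, "ssh", "create", {"port": 22})   # creation allowed
print(apps["ssh"])   # {'port': 22}
```

A later `apply_edit(apps, {"ssh"}, "ssh", "update", {"port": 2222})` raises, while entries not in the immutable set remain freely editable.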
H
So, since immutability can be applied to a node both in the schema tree and in the data tree, we make a statement that the server should not return the annotation if a particular node is already marked as immutable by the YANG extension, without exceptions for update or delete operations — because that means updating or deleting any instance of the data node is not allowed, so there is no need to annotate the instances repeatedly; that would provide no additional information.
J
Hi, [name unclear]. "Immutable", by its very name, means you cannot modify, update, or delete, and I just noted there that you're saying exceptions for specific operations, and then you've given examples of create, update, and delete.
J
I think we might just need to clarify that — if we're talking about things like sub-objects, which we may be able to create but whose parent cannot be touched — because otherwise "exception" suggests that the object is not immutable.
H
G
Yes, I think this might be a misunderstanding, because this "immutable with exceptions" means that you can create it but you can't delete it, for example; or you can say that you can create it or modify it, but not delete it. So maybe "immutable" is not the best name — if we can agree on that, any suggestions are welcome — but okay, yeah, we could call it "float".
H
B
And Balázs, you just gave an example where there were two exceptions, and it's not clear in the draft how you support that — but I'm glad to hear it does.
C
As a participant, I'm really conflicted with this work, because I don't like the fact that you're potentially stopping clients from modifying or controlling the configuration, which is something that they own. But at the same time, I can also understand that if servers are doing this and are going to reject it anyway, then it's just adding extra information to make it easier for them to do it.
C
So my conflict is: are we going to then encourage more server implementations to choose not to do this, or to have more restrictions, and make it harder for clients to manage? Another way of thinking about this, though, is about the split between the running configuration, which a client has control over, versus what's in, like, system configuration: even if you delete it from the running config, you're saying, well, it's going to still be there in system — and that's another way of thinking about it.
C
So that could be another way of phrasing it. And the last point I want to make is, again, in interfaces and things: be very clear about the fact that you can delete the interface and its type, but you just can't delete the type itself — once that's been set, you can't change the type. Maybe the draft is clear on that point, but I'm not up to date with the latest version.
B
G
D
Joe Clarke, Cisco. I think Rob touched on the thing that I was concerned about looking through this, and that is: I see a lot of language like "the server must reject", "the server must reject", and I was going to get up here and say, well, if this is an extension, certain YANG clients can ignore it — and you also mention SNMP and other non-YANG-related things.
D
It's really not "must reject" — okay, they will reject, but they would have rejected anyway. And I think that is probably worth calling out more explicitly in the text: to indicate that this is, like Balázs just said, more of a documentation, more of a clarification — that by the nature of this extension, the server is already enforcing this; the server would already enforce this. And that wasn't clear to me in reading the draft.
B
Yeah. So we are actually at the end of our time, Qiufang. I think the takeaway that I see from this poll is that there's still interest in hearing more, and I think your plan was just to do another update and then discuss on the list — is that correct?
H
B
F
B
—of the poll. I don't know why it's still showing — oh, there we go. Thank you, whoever fixed that. In the last 10 minutes we had two topics we were going to try to hit: one of them is YANG-next and the other one is from Rob. Kent, rather than having a large discussion on YANG-next, do you want to just summarize sort of where we are and what our discussion and our thinking has been?
A
Yes, absolutely, thanks. So on Monday the NETCONF working group had a chair-led discussion at the very end, prompted by AD Rob Wilton, on whether or not — on what we should do about the RESTCONF-next and NETCONF-next issue—
A
—trackers. And one of the things that was mentioned is that in netmod there's also the YANG-next issue tracker, and that any update to YANG would necessitate updates to NETCONF and RESTCONF. Anyway, the takeaway is that the NETCONF working group is going to do what it can to make updates to the protocols without requiring updates to YANG — but then, here in this working group, what to do there is—
A
It was mentioned this has been discussed before. I think, actually, initially we started talking about YANG-next — gosh, four years ago, it seems — and then we touched on it again not too long ago, perhaps a year ago. And the takeaway is that it's intrusive, and we're unsure if the market will accept an update at this time. But at the same time, we see that, you know, it's getting old and fraying at the edges, and a refresh seems imminent, or looming.
A
And one thing also that was noted: any update to YANG — or, let me be more specific, any update to RFC 7950 — would necessitate first a refactoring, or factoring out, of all the NETCONF and XML bits that are in 7950, to basically make it a protocol-independent specification not tied to NETCONF. And so some of the discussion that's been going on with the chairs and the AD is that the netmod working group should proceed with a 7950-bis—
A
—that does not change the YANG version — it would still be YANG 1.1 — but would basically remove all the NETCONF- and XML-specific bits from it. And that would lay the groundwork for the serious YANG-next updates to come in, for those non-compatible changes that we know are coming. But, you know, we're—
A
We need to do this first part first. So that's the current plan, or thinking, and I'm just wondering from the room if there's any agreement to that, and also if there's any interest in working on it. Certainly, if anyone's interested in working on that, please approach the chairs. Thank you.
B
C
So I actually think that Martin might already have some starting text for this; we should sync up with him as well, so that might help — but I don't know. So, this is five minutes just to talk about: I gave a presentation to the IESG at the retreat about how we manage YANG models in the IETF, and I had some proposals there about how we potentially try to do things a bit differently. They were quite positive about trying to do things differently and said, well, off—
C
—you go: take it to netmod and see what they say. So I don't think we've got time to get many answers here, but hopefully I'll open up what the issue is and the ideas, and we can maybe start and see if there are people interested in trying to go down this path. So, today the IETF has a really slow publishing model — we sort of know that, that's fine — and we work on lots of different YANG stuff. So if we go to the next slide — I don't think, in this audience—
C
—we need to say much. The goal of what we're trying to produce here is not individual YANG models for individual protocols; we're trying to develop a cohesive API between the management client and the device for managing it. So although we definitely have separate YANG models, really what matters is: do these things work together, solve all the problems, and implement all the functionality you need? And we're doing some work in there, with YANG packages, to help bring these things together — but still, in the IETF—
C
—we have this goal, as does OpenConfig, of producing a cohesive API. Next slide, please. We've got some problems, though: we're not very focused in the IETF on doing this cohesive API.
C
The way we split it out into different working groups, we parallelized getting the YANG models done, but we don't have the sort of focus on one API that OpenConfig had. Some of the RFC 7950 update rules make it hard to fix the models — we're working on the versioning things to help that — but the IETF is way, way too slow.
C
I mean, maybe it's massively too slow here. OpenConfig is getting more traction in the market, and if you want the IETF to remain relevant in terms of the YANG models it's producing, we need to change. And finally, the fractured market, I don't think, is good. I don't think that's helping YANG adoption; I think that's making it harder for operators to choose which path to go. Next slide, please. And OpenConfig, they have problems as well, because they churn, they churn very frequently. And, in terms of their participation,
C
it's not so open, even though it's named OpenConfig, and the models and the design of them are less technically flexible. So they have issues; they're not perfect either. It's not like we can just say, take OpenConfig and we're done. So, next slide, please. Next slide! I can skip that, because that's fine. What are all the remaining problems to deal with? So, this issue about the IETF not being focused on a cohesive management API, that's something we definitely need to fix.
C
I think the YANG packages work, I hope, will help with that, because we'll be defining sets of YANG models that work together, and hopefully finding some of the gaps and fixing those. So that's one thing. We're already fixing the 7950 rules to allow you to version and upgrade stuff. But the key one is this: the IETF being slow to standardize. So, next slide, please.
C
So the idea is to fundamentally change how we manage YANG models in the IETF, and the idea is to stop publishing them in RFCs and instead start putting YANG models in GitHub, and version the YANG models themselves in GitHub as standard code assets. That raises lots of interesting questions, and this is not a trivial, easy change to make, because the IETF process is all about publishing RFC documents. But these aren't documents; these are code assets, and we should be treating them as such.
C
So the idea, therefore, is: we still need some RFCs to sort of support the YANG models, to provide the sort of descriptive text you get today that describes the model behavior and things like that. That's all useful, that's all great; that stuff shouldn't necessarily have to change at all.
C
But if you're putting in a minor bug fix... the example that came up recently in NETMOD is the fact that, with the ip-address type that we use, either the definition of the address type is wrong, or we've got 50 RFCs that are using it wrong and we need to fix those. And I look at publishing or republishing 50 RFCs and go, I don't want to do that, because the amount of work for it to go through the IESG makes that not a pleasant idea.
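For context, the type being discussed is the ip-address typedef from ietf-inet-types (RFC 6991), whose pattern permits an optional zone index, which many of the modules importing it arguably do not intend. A simplified sketch of the IPv4 part (comments added; not from the meeting audio):

```yang
typedef ipv4-address {
  type string {
    // Dotted-quad IPv4 address...
    pattern
      '(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.){3}'
    + '([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])'
    // ...optionally followed by a zone index ('%' plus identifier),
    // which is the contested part of the definition.
    + '(%[\p{N}\p{L}]+)?';
  }
}
```

Modules that use this type in places where a zone index is meaningless (e.g. routing protocol configuration) are the ones that would need republishing under the current rules.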
C
So this is the idea: to try and do this, and hopefully try and evolve this a bit more. I think the next slide has a few more details, and this isn't massively thought out;
C
I don't have all the answers. Really, what the pitch here is, is to get a feel of the room: yes, should we be heading in this direction, and is there somebody who's willing to help write some drafts here, experimental drafts, to actually test the waters and get people to review this and comment on it? There'll be a lot of feedback, and there'll be pushback in various places, but we've got an opportunity to try and do this and make it better. Some ideas here: you effectively have some level of stable branches, and you can put in bug fixes and things, and you'd have some level of review for minor changes that happens within the working group, and just keep them there.
C
Like a working-group last call, to say: look, this looks good, this is done. And then, maybe every few years, if it's very stable, you might then publish an updated version and say we're good; or, if you're making major version changes, then you'd have to go through like a full IETF last-call review. So it's trying to get a balance between stuff that a smaller group of people can agree is right, versus ones that actually really benefit from a wider IETF review. So that's my quick pitch and comment.
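As an illustration of that minor-versus-major split, here is a hypothetical sketch of how revisions on a stable branch might be labelled using the rev:revision-label extension from the YANG versioning drafts under discussion (the module name, dates, and version numbers are invented):

```yang
module example-acme-system {
  yang-version 1.1;
  namespace "urn:example:acme-system";
  prefix acme-sys;

  // Extension defined by the YANG module versioning work
  import ietf-yang-revisions { prefix rev; }

  // Minor, backwards-compatible fix on the stable 1.x branch:
  // working-group-level review only.
  revision 2022-07-27 {
    description "Bug fix to a constraint.";
    rev:revision-label "1.0.1";
  }

  // Initial major release: full IETF last-call review.
  revision 2021-03-01 {
    description "Initial version.";
    rev:revision-label "1.0.0";
  }
}
```

The semantic-version labels make it visible which revisions could flow through the lighter-weight review path being proposed.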
B
C
I think, yeah, I think so. I think what would be helpful is, I'd like somebody to help write an experimental process, a process-experiment RFC, to say: if you want to try and do this. Because I think we need to write down what it is, how it's going to work, what the rules are, and then try and test it, refine it, and be able to take that forward. And I think we would have the support, I'll have to check with the IESG, for running experiments.
C
If I can get some help writing some of the text, I think, again, it could go reasonably fast. If it's stuck with me writing the document, given how quickly I've progressed the two YANG models, with me and my workload, that's the tricky thing. But I think this really matters; this is important, and we have to do this.
B
I chair a different group, and in that group we have a really small document change, and I, as co-chair, want to move faster than the authors, which is weird. But I would love to tie together that draft and that module with this process, get those authors to help you out, and let's run it fast. I mean, it's like adding two identities to a module. This should.
B
F
L
The last meeting, bernoullis... actually, I support the idea, and how fast could it be? The YANG catalog was done actually with that exact idea in mind: that if the IETF was not fast enough, we just forget about that. There is a blog there, with the paper from 2015, which says disrupt the IETF process. So if we just post the YANG module in the catalog, and they arrive by default there, what do we need, kind of, thumbs up?
L
A
Kent, as chair: one person did not raise their hand, or rather, they raised their hand saying that they would not like to see this work proceed. I'm just wondering if that individual would be willing to speak as to why they did not raise their hand.
B
So, thanks for those two lightning talks, and we are now at the end of our session. I appreciate all the really good contributions in the room, online, and on the list, and I look forward to seeing you in all those places, but hopefully in person at the next meeting. Thank you all.