From YouTube: IETF110-OPSAWG-20210312-1200
Description
OPSAWG meeting session at IETF110
2021/03/12 1200
https://datatracker.ietf.org/meeting/110/proceedings/
A
Okay, welcome everyone to this fifth day of the IETF 110 meeting. This is the meeting of the joint Operations and Management Area Working Group. Our chairs today are Tianran Zhou and myself, Henk. There is a notice here, next slide please, that this meeting is being recorded; everybody should be aware of that. Also, if you haven't seen it already, which is maybe unlikely on the fifth day, there is the Note Well under which we operate.

So if you are curious how IPR works, and about our anti-harassment and code of conduct rules, there are links here that you can follow in the slide material found on the Datatracker. As you apparently have all read and agreed to that, we can go to the next slide, please, which is again making sure everyone knows how to work through these meetings if you have never done a virtual one; that's just informational text here. So next slide, please.

There is already a scribe duty assigned. Do we have a second scribe already? Sorry.
B
Minutes duty.

Right, Eliot Lear agreed to scribe for us. Well, he's not talking; I don't know if Eliot has made it. I'm checking... he has. Hi Eliot, thank you. While he's talking, I will scribe for the minutes.
A
Again, the other material should be apparent from the Datatracker here. So next slide, please, which is the agenda. Sorry, the mic stutters already. So, yeah, over the past iteration of four months since the last meeting we did achieve some milestones and goals. RFC 8969 is now actually published; thank you for all the great work, and thanks to the authors for being so vigilant going through all the sections.

We have moved two publications to the IESG process: the TACACS+ module, which of course went very quickly, publishing using the fast process; and Finding Geofeeds, which passed working group last call and is now in the queue to be moved on and finished in the process. We adopted a lot of drafts, which is also very good to see. Two of them, actually three of them, but two of them are really closely related, are from Michael.

There is one that is also about MUD, but that's the SBOM stuff, on finding them. And the last ones, finally, are about the YANG modules for VPN services, which had a great reception here in the working group as a vital building block useful for operations in general. Next slide, please.

Yeah, as you can see, we have a lot of presenters, and this is only the first three of them. I'm not going through all of them, but if you could just move slowly through the agenda, everyone can have a small overview here.

Then you can see a lot of things happening at the end. Of course, we are joined with the open session of the OPS area, so the last slot is going to our most dear AD here, Rob, and we will have the open mic, but that is probably mostly run not by the chairs but by the ADs. So now we can actually move to the first presentation.
D
Thank you. Thank you, Alex. We'll start presenting the status of the three drafts on the VPN work. We'll start with the common part, then Oscar will take on the L3NM, and Samier the last one. Next slide, please.

This is one slide that provides, I would say, a summary of the current status of the VPN common draft. The basic message is that we are ready. We received, I would say, important comments and reviews for that one; one of the important reviews we received was from the YANG doctors, and basically there was no major issue found by the reviewers.

There were some proposals to fix some parts, mainly editorial or, I would say, minor stuff; we fixed that and released a revision of the document. There was, at the last meeting, a comment from Joe saying that the model should provide, I would say, more annotations to ease its readability, because that is not in the document itself.

So what we have done is reorganize various parts of the model itself and include a lot of annotations, so that it can be, I would say, easier for people to digest, and we think that what we have so far is really better and we have more clarity on this part.

We also received, I would say, some interesting comments from Julian and also from Kireeti, to include additional transport protocols, and that was really a valid comment. So we took that and included the references into, I would say, the model, because even if our initial scope was not on the data centers, I think it's really good to see that various use cases can make use of this model.

So that's one: we added these extensions and these references into the model, and then we added more text to describe, I would say, the various data nodes that we have in the draft. So for this one, we think there are no open issues so far on the document, and the next step that we would like to suggest to the working group, and also for ourselves, is to initiate a working group last call for this.
F
Okay, so thank you. So now we're going to... I think we should go to the previous slide, please; essentially this one, okay. So, as you know, the L3NM model has been here for a while, and we received a lot of feedback from implementations; it has now also been taken near to production. So I think we are now very comfortable with the content. The work that we did for the latest version was, on the one hand, completing the editorial work that it needed.

We received some comments from Kireeti about that part: it was not understood how it was to be done in the model. We didn't need to change the model itself; we just added a description of how you can manage the loopback interfaces at the VPN network access level. Also, we made some changes to the appendix to do the examples not only with IPv4 but also with IPv6 services. So the examples that are provided in the appendix are updated.

You can go there and check all the discussions, so just to give you a summary of the most important ones that have been incorporated into the latest version: we received requests to add additional attributes for static routes, and for how to handle external connectivity, which was also present in the service model. We incorporated a profile to add it; it's a profile just because we have different solutions today, so we added a profile to cover that, for better readability.

We reordered the leaves in the module; it was a suggestion that, as we came from the very early versions adding new leaves, we needed to do this reorganization. Also, for the multicast use case: this model is being used today, and it's being put into production to model a multicast network for TV. From there came a lot of inputs, and we now have support for PIM and IGMP at both the VPN node and the VPN access level.

So the support there is now full. Then there is the CE security part, some fields that came from the service model into the network model, and some generalization of the concept of maximum routes allowed. If you want to go to the next one, please: we are happy with the state of the issues. There is a small comment added in the GitHub yesterday, or a couple of days ago, that we think is useful, and we already have a proposal to cover it; it is an enhancement.

So it doesn't change anything fundamental in the model; it's an enhancement. We do think the document is ready for working group last call. There's also some complementary work to this piece of work: the service mapping in TEAS uses this model as a base and then adds on it and plugs in all the traffic engineering information. So go to TEAS for those discussions.
G
So please go to the next slide.

Hello, I will just briefly describe the status of the L2NM. The L2NM, as highlighted in the first slide, is in the same Git repository as the other two drafts.

The document has followed the same structure as the L3NM, as Oscar has mentioned, to increase the readability and help people understand the model. We have done that by editing several parts, including the connectivity and the routing protocols, and everything is explained in the document.

During the last months, we have received the Routing Directorate review; the issues that they raised were solved, and we have a list of open issues indicated in the Git repository that we are working on. Some of the issues that are listed here in this slide: the first one is related to the support of Diffserv MPLS, from RFC 3270.

Issue 204 is related to the clarification of the EVPN flavors. The next one, 202, is related to the split horizon support at the VPN network access level, and, finally, the IP fast reroute support. So next slide, please. In addition to the revision of the issues from the Routing Directorate, we have closed additional issues, more related, as I commented before, to the EVPN support.

So we have included some Ethernet segment identifier parameters, control word support, and a kind of configuration for targeted LDP sessions. We have also included a precedence type in order to define the availability of the paths, we have included additional examples in the document on how to use the L2NM, and we have included a node list, just in order not to limit the number of elements in the service.
H
My question was just from a process point of view. I know you pulled the common stuff out into a separate module, and you're saying that that's ready for working group last call, as is the L3NM model. My question is: is it okay to progress those two with this one still being open and being actively developed? Are there going to be any issues in terms of wanting to subsequently churn stuff that is lower down in those specs?
B
That was one of my questions. I see, Med, you were coming up; but Med, or any of the authors, do you want to tackle Rob's question?
D
Yeah, this is actually a good question, one that we already discussed last time. The agreement we had is that we will try to, I would say, stabilize the three documents by this IETF meeting, and if we fail to have, I would say, an acceptable state of the L2NM, we will progress only the common and the L3NM. That's what we, at least among the authorship team, decided.

What we decided is that, for the common, so far it will be frozen. If there is any specific, I would say, item that is introduced or detected when we go into specific detail for the L2NM, that will be covered directly in the L2NM document itself. So that's why we are confident in progressing only those two; I would say there is no dependency on the L2 one.

Personally, I don't see... I would say, again, this will be a decision from, I would say, the working group, but for me I don't see a need to wait for the L2NM to progress the two other ones. The initial, I would say, rationale that we had for the common is to identify the common data that can be reusable from the various models, at the L2 and the L3, but also at the network and the service level, and I think that we have done, I would say, that exercise.

We can continue to stretch that exercise, but it will never be, I would say, perfect. So, yeah, we can wait, but I don't personally see a value in that. We have the common; we have the L3NM, which, I would say, exemplifies how we can use the common model. If there is anything that, I would say, needs to be moved to a new level, that can be handled in that one as well.

So perhaps I'm repeating myself, but I don't see a need to wait for it.
J
Thanks for this presentation. I just have a process question myself: I haven't noticed a lot of chat on these drafts on the group's list. Maybe I'm just not seeing it, or it's getting misfiltered in my email, but are the issues being resolved on list, or are they being resolved by the authors independently?
B
So, before you jump in, Med: that was going to be another one of the things I brought up; I'd asked previously to bring some of that back. I see some of the GitHub work here, and yes, it's the official GitHub, but we need to bring some of these items back to the list as you discuss them in your author group and as you agree on resolutions. I saw around issue 109 it came back, but, to Eliot's point, I haven't seen a lot.
D
Yes, I think this is also another point that we have discussed in previous meetings, and yes, you are right. We have regular meetings that are organized every week among, I would say, the people who are interested in this work, and these meetings are advertised and, I would say, sponsored by the working group, and the information to connect is available.

I think, at the working group level, some of the issues, yes, we discuss them sometimes either on the GitHub... For example, yesterday I sent a note to the list about one of the, I would say, comments that we received from an implementer, and for which we would like to, I would say, hear more comments. So for us it's always doubling the effort; I agree with you.

But what can be done, at least by, I'd say, our chairs, is that you can configure the GitHub to send, I would say, a summary of the issues on a weekly basis, and then people can take that and jump in if there is any point to be further discussed.
B
Thanks, yeah; that's what we've been doing in NETMOD with some of the YANG versioning work: every week we just post the minutes back to the list. I think that would give a little bit more immediate visibility.

I would say yes: with a summary of, like, the title of the issue and the link, I've seen that work; but where there are decision points or specific discussion points, I think a brief summary as part of the minutes would be useful.
H
I think it's sometimes helpful to split the issues between ones of an editorial nature, which I think you can just handle, and ones that are sort of a design change, or something that requires consensus or agreement from the working group. There, I don't think you need to spread all the discussion back to the working group necessarily, but I do think it's really helpful to actually give the conclusion of how the issue was resolved and, if possible, if the resolution is one block of text, I think it's helpful to actually then copy that to the working group and say: look, this is the resolution; is anyone who's not been following the actual design meetings able to comment on that? I think it's helpful to get as much coverage of those as you go along, where possible.
K
Thanks, yeah; largely what Rob and you said. If there is a decision made, that needs to be posted to the list, with a short summary of sort of what the question was and what the decision was, slash how it was arrived at. Obviously we don't need the whole discussion, but "hey, somebody raised this as a concern, we discussed it, we decided X; here's the issue if you want more background", I think, is reasonable.
B
And, oh, Med: did you want to comment on that? Nope. The last thing I had that wasn't covered: I jotted down the working group last calls, which we talked about, and the expert review again for the L2NM. By the way, I did find those GitHub issues; they're actually under L3NM in the repository, so that was a little bit of the story with the 200-level issues. But for that expert EVPN review, did we find someone in the routing directorate who could... who did that?

Okay, I thought we found... the routing directorate pointed us to someone. I'll double check, and we'll get that taken care of after 110. Thanks.

Okay, you mentioned something, but we're running short on time, so I'm going to follow up on list about that comment, Oscar. But thank you.
L
Yes, okay. My name is Bo, and I'm going to provide the updates on this VPN service performance monitoring draft; I'll present on behalf of all the other authors. Next slide, please. Okay, here's a recap of this VPN service performance monitoring model. Since the last meeting, this draft has been adopted as a working group draft, and here you can see the figure on the left.

This model is used between the service orchestration and the network controller, and it's a similar interface to, like, the L3NM and the L2NM that were just presented. On the right is the modeling approach of this model: the underlying network has been modeled by RFC 8345, and this model augments it.
L
The VPN and network performance statistics nodes are augmented onto the basic network and topology model, and in that way the network and the association between VPN and performance monitoring are connected together. Next slide, please.
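To make the augmentation approach above concrete, here is a rough instance-data sketch. Only the `ietf-network:networks`/`network`/`node` structure comes from RFC 8345; the `example-vpn-pm:pm-statistics` container and its leaf names are made-up placeholders, not the draft's actual schema.

```python
# Hypothetical RFC 8345-style instance data: a VPN network topology whose
# node carries an illustrative (made-up) PM statistics container.
instance = {
    "ietf-network:networks": {          # top-level container from RFC 8345
        "network": [
            {
                "network-id": "vpn-1",  # one network per VPN service
                "node": [
                    {
                        "node-id": "pe1",
                        # Placeholder augmentation; the real draft defines
                        # its own module and node names.
                        "example-vpn-pm:pm-statistics": {
                            "one-way-delay-ms": 12,
                            "packet-loss-percent": 0.01,
                        },
                    }
                ],
            }
        ]
    }
}

node = instance["ietf-network:networks"]["network"][0]["node"][0]
print(node["node-id"], node["example-vpn-pm:pm-statistics"]["one-way-delay-ms"])
```

Because the PM data hangs off the same tree as the topology, a consumer can correlate a VPN's performance with its structure in a single retrieval.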
L
Here are the open issues from the adoption call. I just list the major ones; there are some other editorial ones we are going to resolve. The first major one is to add a reference to RFC 8309, because right now this draft only refers to RFC 8969, about the modeling automation framework.

We, the authors, think this is a reasonable one, and we will add this reference. The next major one is that, besides Layer 3, Layer 2 VPN is also in the scope.
L
Currently,
the
model
mainly
gave
the
like
at
the
nodes
about
the
layer,
3
vpn,
so
the
comments
raised
about
earlier
two
vpns
should
also
add,
like
some
mac
table,
entry
statistics
and
also
other
static
is
like
different
oem
protocol
source
should
also
be
added
to
to
to
add
the
association
to
the
two
vpn
and
the
third
one
is
need
to
add
more
examples
to
illustrate
the
usage
of
your
model,
and
particularly
one
is
percentile.
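Since percentile reporting is mentioned as one of the examples to add: a percentile over raw delay samples can be computed in a few lines. This is only a generic sketch using linear interpolation between closest ranks, one common convention; the draft may end up specifying a different method.

```python
def percentile(samples, p):
    """Return the p-th percentile (0-100) of samples, using
    linear interpolation between the closest ranks."""
    s = sorted(samples)
    k = (len(s) - 1) * p / 100.0   # fractional rank
    f = int(k)                     # lower neighbour index
    c = min(f + 1, len(s) - 1)     # upper neighbour index
    return s[f] + (s[c] - s[f]) * (k - f)

# e.g. 95th percentile of one-way delay measurements (milliseconds)
delays = [10, 12, 11, 50, 13, 12, 11, 14, 12, 13]
print(percentile(delays, 95))
```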
L
I think the next step is mainly to address the pending issues I just mentioned, and also we would like to work together with the L3NM and L2NM teams to check whether there are some additional data nodes to be supported in this model, so that the data are more complete.

So that's all for my presentation; we would like to collect more reviews and comments from the working group. Any questions about this?
B
I think one from the previous presentation: as I understand it, you've also been involved in the authorship meetings for the common, L3NM and L2NM. Just having some more of the discussion that's happening there as part of those minutes could be helpful to stir more comments here.
L
Oh, this draft hasn't been, like, discussed regularly in the design team of the L3NM and L2NM, but we will provide the update decisions to the working group list; we will see to that. Okay.
M
Yes; can you hear and see me? Yeah, okay, excellent. So next slide, please.

So this draft has already been presented at OPSAWG at IETF 108, so I'll just go very briefly through the draft and give you, at the end, an update on what has been changed recently.

So, in a nutshell, what's missing is all the network protocols which have been developed at the IETF for MPLS segment routing; these are missing in this registry. Next slide, please.
M
So I collected various feedbacks from the SPRING, MPLS, LSR and OPSAWG mailing lists. I also presented at MPLS at IETF 109. Originally, in the -05 version, the previous version, there was also an additional code point concerning the segment routing SID type.

So, therefore, it's not needed to cover that as well in IPFIX. And, as I said previously, the IE 46 registry should be corrected anytime soon, and once that's done, basically, the additional code points could be added there. I received positive feedback as well from MPLS, and I would like to call again for adoption in OPSAWG.
N
Okay, then, that's all from my side. We have...
H
Oh, that's okay. So, just to check, and Ben was here, he probably has a definitive answer: am I right in thinking that, for adding these attributes, anyone can do this anyway? I think that registry allows anyone to add them. So I guess what I'm thinking is: we shouldn't make the bar too high. If these are useful fields to add to the IPFIX registry, then I think that's a useful thing to do. And Benoit is going to comment.
P
On the IANA-registered information elements, though: whenever we've got, like, multiple IPFIX information elements that belong together, the added value is that it's worth still having a published document, I believe, to explain some background. And I agree with you that the bar should not be too high for adoption, nor for requesting them.
M
Yeah, exactly. Actually, before I was creating this draft, I went directly to IANA requesting to add this, because the code points refer to existing RFCs, but IANA requested from me that there be an official document, and that a draft should be present there; therefore I created this draft.
P
So, if the question is from you, Rob: I mean, I believe we should just quickly adopt it and publish this, and I can commit to reviewing it one more time, if you want.
H
Thanks, Benoit. Yes, I think... actually, I was coming to the same conclusion: I agree that adopting this is the right thing to do, I think, so I'm supportive of that.
B
Thanks for agreeing to review; we'll put this call out after 110, and I haven't forgotten, Benoit, that you'll review; I'll mention that as well. So thank you.
J
Oh, okay. So let me just pause for ten seconds while somebody takes over the minutes-taking from me. And as I'm doing that, I'd like to just introduce this draft, on behalf of myself and Scott Rose. This is a just recently adopted draft on software bills of materials. Next slide, please.

Okay, so what's happened since the previous IETF? Obviously, as I mentioned, it's now posted as a working group draft. Scott and I are already working on some editorial changes, just editorial changes, for -01, and I'll come to some of the bigger issues, based on bigger feedback. And without further ado, let's get to the bigger issues. Next slide, please.
J
Before working group last call, we want to get a fair amount of operational experience before we run it through that process, so you might expect some implementation drafts, à la what goes on in the QUIC or HTTP working groups, before we're really going to push hard to get this thing advanced.

So, if you're looking for a fast RFC, this is not the draft. We don't yet know, for instance, based on some of the format work that's continuing, whether it will be necessary to discover one or multiple SBOMs out of a system, and so we're still trying to work that out. This amounts to, you know: do we have an array, or do we have a single object? And, you know, it's not a big deal, but it is if the SBOMs are going to live in different places.
J
How
to
return
that
information
in
some
reasonable
way?
The
second
major
point
that
scott
and
I
discussed
and
there's
one
of
three
there
are
a
couple
of
others-
is
that
an
s-bomb
on
its
own?
What
an
s-bomb
delivers
to
you
and
what
discovery
of
an
s-bomb
delivers
to
you
is
an
inventory
of
the
device
with
that
inventory,
especially
if
you're
using
common
names
of
of
software.
J
What
that
doesn't
tell
you
is
whether
or
not
the
software
has
already
been
repaired,
and
so,
for
instance,
it
could
be
that
the
vendors
spent
a
lot
of
time
working
on
all
these
vulnerabilities,
and
you
just
see
the
same
version
of
a
particular
piece
of
software,
and
yet
you
get
a
false
positive
when
you
go
to
test
so
other
work
is,
is
looking
at
a
concept,
it's
really
a
concept,
not
so
much
a
specification
of
something
called
vex,
which
is
vulnerability,
exploit
exploitability
exchange
and
in
fact
the
acronym
is
something
that
everybody
bemoans,
that
we
have
this
acronym
now
you're
talking
the
person
who's
speaking
created
something
called
mud.
J
So
I
am
no
person
to
judge
such
things.
However,
the
concept
basically
is:
can
you
say
that
for
a
given
vulnerability
on
a
product,
actually
that
vulnerability
has
been
repaired
or
is
otherwise
inapplicable
for
this
device?
J
J
Another
is-
and
this
really
just
does
answer
you
know
the
question
in
in
great
detail.
Actually
is
this
thing
vulnerable
to
a
particular
cve?
What
you
know
is
it
patched
in
and
how
is
it
patched
and
what
versions
do
you
need
to
to
go
fix
it,
and
so
one
and
there's
a
separate
format-
and
this
is
the
wonderful
thing
about
standards-
is
that
there
are
so
many
of
them
called
cyclone
dx
that
roughly
covers
the
same
ground.
J
The point is that if you put out a software bill of materials for your product and you don't put this thing out, people are going to start calling, saying: are you really vulnerable, because you happen to have this version of OpenSSL in your code and I just saw this announcement? And so, to keep the phones from ringing off the hook, you'd like to be able to provide this either on your website or programmatically, and this would be the programmatic answer. Next slide, please. The other thing, and this is a simpler matter, is that we're thinking we want to remove the SUIT references.

The people that are working on the various proofs of concept of this technology have said, you know, SUIT isn't really useful in this context, but CycloneDX is, so we might just want to switch that. But our goal in the draft, just to be clear, is to be a little bit technology-neutral about all this; there are going to be a lot of formats.
J
We
really
want
to
rely
on
the
http
content
type
headers
accept
headers,
and
things
like
that,
so
that
the
information
can
come
across
in
whatever
way
it
can
come
across,
but
that
desires.
That
requires
a
little
bit
of
thought
and
scott,
who,
I
don't
believe,
is
here
on
a
call
would
say.
The
other
thing
that
we
have
to
do
is
examine
the
work
of
rowley,
which
I'm
not
familiar
with
to
see.
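The content negotiation being described could look roughly like this. A sketch only: the `/.well-known/sbom` path is an invented placeholder (the draft's actual discovery mechanism may differ), while the two media types are the registered SPDX and CycloneDX ones.

```python
import urllib.request

# A client asks for an SBOM and lets the server pick a format it has,
# using a standard Accept header rather than a format-specific endpoint.
req = urllib.request.Request(
    "https://device.example/.well-known/sbom",  # hypothetical URL
    headers={
        "Accept": "application/spdx+json, application/vnd.cyclonedx+json"
    },
)

# No request is actually sent here; urllib.request.urlopen(req) would
# perform the GET, and the response's Content-Type header would say
# which of the offered formats came back.
print(req.get_header("Accept"))
```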
J
Oh, Scott is on the call. Maybe, Scott, you want to comment a little bit about ROLIE; and I know we're running out of time, so if you want to briefly comment, maybe you could be a little bit clearer than I am. But this is another issue that we have to talk about, and that's really all I have.
B
There
was
a
comment
from
jeff
haas
of
relay
for
him
in
chat,
should
the
s-bomb
actually
be
a
recursive
set
of
manifests.
A
component
may
contain
other
components
that
have
manifest
groups
of
components
are
used
to
build
a
system,
etc.
J
Yes, Jeff, it's a great question, and our problem in answering that question is that that is a matter of how the SBOM format is organized, and we're trying to be format-neutral. So that gets us into: well, suppose you're referencing... if you've got one SBOM and you're referencing another SBOM, right?

How does that referencing work? I think we have to add an example of that sort of thing going on, but I need a live example, and right now the people who are doing the format work aren't yet up to that task. And that's another reason why we have to slow-roll this: this is really live technology work across several different industries.
A
Actually, so, hi, this is Henk as an individual, no hats on, with a question. For one or more SBOMs to be discovered: that is, I think, strongly related to the dependencies of SBOMs on each other. If they are somehow related, there might be dependencies among them, and you have to retrieve all of them. Some subsets of these might be from one authority, so discovered at one location.
A
There might be more than one, yes, that already somehow depend on each other, which could be done via a native nesting of documents; that would solve it. With, I don't know, binary stuff, you could do a CBOR sequence; that's relatively easy. With other stuff... yeah, the SPDX side of the modeling that's happening here: they have this artifact concept, which again can relate to other SBOMs. So, yes, these things can point to each other.
A
So, if you go with an example like CycloneDX, I would at least make it three, the golden number, to show a little bit of diversity here. I'm fine with removing SUIT as the top-level SBOM reference, because by itself it's, like a CVE, not so super useful; it will be used by other things. So that is, therefore, I think, totally fine.
J
Okay, so thank you very much, and look for more comments on the list.
O
So, talking about OpenMetrics, I need to talk about Prometheus for a second; of course, Prometheus is where all of this is coming from. Prometheus was inspired by the original monitoring system within Google. It's a time-series database; the effort, or the protocol underlying this effort, has been stable since 2014.
O
So the summary of what I want to say here is that there is a really wide base of adoption of what we are talking about; millions of installations is something we have heard data for. Next slide, please. So now, coming to OpenMetrics and what it is.
O
But it's still a competing thing, and so we wanted to have a neutral name for the whole effort. Having done networking for 15 years myself, I consider the IETF and RFCs still the gold standard of how to communicate standards on the internet, so that is also one of the founding goals of OpenMetrics: basically, to do a really careful evolution of what we have. We had input from dozens of people; we have used OpenMetrics in production in Prometheus for three years now; and we had outside vendors, or other vendors, who implemented from our reference code.
O
This is how the whole thing looks, and, yes, I appreciate that this doesn't give you the full depth of the thing, but basically you have key-value pairs which you attach to your metric data, as in your numeric data, in your exposition.
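For readers who have not seen the exposition format being described, it is essentially a metric name, key-value labels, and a sample value per line. A minimal sketch that renders one counter in an OpenMetrics-style line; this is simplified, since a real exposition also carries HELP/UNIT metadata, escaping rules, and a trailing `# EOF` marker.

```python
def render_counter(name, labels, value):
    """Render one counter sample in OpenMetrics-style text:
    a TYPE comment line, then name_total{labels} value."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return "\n".join([
        f"# TYPE {name} counter",
        f"{name}_total{{{label_str}}} {value}",
    ])

print(render_counter("http_requests", {"code": "200", "method": "get"}, 1027))
```

The label pairs are exactly the attached key-value metadata mentioned above; a scraper parses these lines back into (name, labels, value) samples.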
B
So we do have one in the queue, if you want to take a question right now.
E
I have a question; actually, what I'm missing here is: what do you want to actually standardize here? I mean, the data model, or the transport protocol?
O
Both. It is a wire format, a transport protocol based on HTTP, but it inherently carries a data model within itself, too. I mean, yeah, so it is a data model and also, at the same time, a wire format slash transport.
O
Yes, this is one thing which is basically ready to be used to transmit data. If you might be implying that this is better split up into different parts within the context of the IETF: if I'm right with this assumption, absolutely no issue from our end; getting feedback on how to do this properly is my main intention.
P
Right, thank you. So, thank you very much for coming to the IETF from an open-source world where you develop and deploy projects. So thanks for that; you're very welcome.
P: Now, there is just one thing, and interfaces are a simple example, right: we have had many definitions of interfaces, whether in MIBs, in YANG, etc. So we might be facing the issue of yet another model that we have to map somehow, somewhere. But since this is anyway widely deployed, thanks for coming to the IETF with this.
O: I agree that it's super icky to introduce new standards, and it's not very likely that the old standards will fully go away if you introduce new ones. We had this debate for quite some time before even starting this, but given the adoption even back then, we decided that there is a need in the industry to standardize on this, and we see quite some confusion around people not doing it precisely correctly, or not at all. So yeah.
J: Hi, thanks for bringing this to the IETF. We have a bit of experience in how we take on this sort of work that has been developed elsewhere, and what works and what doesn't work.
J: What doesn't work is if one part of the ecosystem wants to come to the IETF with the work and another part doesn't; what you end up having then is an open source war. And so one of the questions is: if you're showing up here, will other people show up here to participate, so they can get their work done and use the mechanisms that the IETF has in order to produce the standard?
J: That's a question, and then a comment: one thing that we normally do in these processes is, the first thing we do is just document existing practice, so as not to perturb what's going on. You usually produce an informational document first and then a standard second, and the idea is that if we want to do the evolution, people know what they're evolving from, from an IETF process standpoint.
J: Usually that can be pretty lightweight, as long as you can write out the description, and I would absolutely support getting the informational document out quickly. That is: you know what the wire format is today, you know what your information model is today, and then people can evolve from that. The other thing that this group is usually pretty good about is understanding that, oh yeah, there actually is an installed base, so they don't do crazy things.
J: Even though, as Benoît says, we do have all these other information models, if what you have is working for you, I don't think there's a lot of religion in the IETF on that point, because we've just been talking about, for instance, Wireshark as a separate example, and how to standardize pcap and pcapng. So it's not a religious point, but that would normally be the path. Thanks.
O: To address the first point: that's basically our intention, to document what is already adopted. That is the 1.0, and that is the basis which everyone can then agree on and use, going from there with an evolution.
O: The point... no, let me mentally restart. For this one, whether this should be informational or whether this should be BCP, I don't know; I need guidance here, simple as that. I will follow whatever the group consensus is, because you know this better than I do. See? Simple.
O: Initially there were some others, but there's mainly one other major initiative in the same region, and that is OpenTelemetry. The two groups are working with each other, where OpenTelemetry is trying to achieve 100% compatibility with OpenMetrics, for the simple reason of adoption. It's probably going too far to give the complete dump of the history and everything right now, but we can do this on the mailing list, or in whichever form, or I can go there now; I don't care.
K: Warren here, so thanks; this is Warren. Yeah, I mean, I think that it would be really good if we could get this documented. Clearly it's very widely deployed, and having the official way to do it written down would be really good. But I think this needs to start off as an informational document, and like Elliot, and I think Benoît, I have some concerns about whether we're getting full representation from everybody who's actually participating.
K: We should make it clear when the document gets adopted that we are simply documenting an existing standard, or sort of an existing deployed system, and that the working group can't really change much about it, because what we're doing is documenting a deployed thing; but the working group does have change control once it becomes an IETF document.
K: So we'll have to be careful that we don't try and change the substance of it, but there is going to be, I think, a fair bit of back and forth making sure that the document is actually clear and understandable. And what I'd like to confirm is that the group who's working on it will actually be able to participate fully, and, you know, will be able to devote the necessary time to it.
O: That's a fair point, and yes, it's currently four people doing the main work. That is a commitment from our side; the intention is to do it as such. We took a bit of a breather after this marathon over the new year, and we are back.
B: I'd hate to do this, but we're going to have to cut the mic in order for us to get through the remaining three sessions and leave any time for ops area. So I would ask Chin and Elliott if you could take your additional comments to the list; we're going to need to move on. But the good news is there's a lot of discussion and a lot of interest. So thank you, Richard, for coming to present, and let's keep this going on the mailing list. Thank you.
R: All right, great. Thank you all for giving me a little bit of time to talk about qlog. Next slide, please.
R: Now, what you would typically do for something like TCP (next slide) is, you would of course take something like a packet capture somewhere in the network and then analyze that using a tool like, for example, Wireshark. You can still do that for QUIC and other encrypted protocols, but it's more difficult. Next slide.
R: QUIC now already encrypts a lot of its transport metadata as well. So if you want to do this, you would have to store the entire packet capture, including the very large payloads, and then also, of course, the TLS decryption secrets to get the final information out of there, which could lead to problems at scale. And there's a second long-standing problem with this approach. Next slide.
R: So, next slide: what we eventually ended up doing for QUIC was take a different approach, and instead of logging in the network or taking packet captures, we extract this information from the implementations directly, at all of the endpoints, or let's say vantage points, because we can of course also look at intermediate devices as well.
R: This approach then allows us to only log the things that we actually need, so it is better for privacy and also keeps the overhead lower. This is, of course, not a fantastic new idea; most implementations have some kind of debug logging output. But the idea of qlog was to have a single way, a single format, a single schema that all the different implementations can reuse.
R: That means that qlog is far from rocket science. Basically, what we have now is a mapping onto JSON, and we just have a schema that defines how you should log several individual event types: for example, what should a "packet received" event look like, or, if you indeed want to log some congestion control stuff, what that should look like.
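Since qlog events are, per this description, schema-constrained JSON, the mapping can be sketched in a few lines of Python; the event names and field keys below follow the general shape of qlog but are illustrative rather than copied from the draft schema:

```python
import json
import time

def qlog_event(name, data, reference_time):
    """Build one qlog-style event: relative timestamp, event name, event data.
    Exact key names in the real schema may differ; this is a sketch."""
    return {
        "time": round((time.time() - reference_time) * 1000, 3),  # ms offset
        "name": name,
        "data": data,
    }

ref = time.time()
trace = {
    "title": "example connection",
    "events": [
        qlog_event("transport:packet_received",
                   {"header": {"packet_type": "1RTT", "packet_number": 42}}, ref),
        qlog_event("recovery:metrics_updated",
                   {"congestion_window": 24000, "smoothed_rtt": 33.2}, ref),
    ],
}
serialized = json.dumps(trace)
```

Every implementation emitting events of this shape means one set of tools can analyze traces from all of the stacks.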
R: This approach turned out to work quite well for QUIC, with both the common format and some public tools, at least, leading to most of the QUIC stacks currently actually outputting the format directly. There's also some experience with large-scale usage by Facebook, who are using this to monitor and analyze their QUIC deployments at scale.
R: So, given this relative success for QUIC, we are now moving to adoption of this work inside of the QUIC working group (next slide), which is intended to be part of the rechartering of the working group. There are two goals: the main one is, of course, finalizing this for QUIC and HTTP/3, but the secondary goal, which is one of the reasons why I'm talking to you right now, is that we believe qlog can be used for more than just QUIC and HTTP/3.
R: Of course, in doing something like this, there are a lot of different challenges that arise.
R: But the thing is, if we're honest, not many people in the QUIC working group are all that into this type of work, or all that experienced with these types of formats, so we're hoping to get a bit of feedback from you on these things. I'm going to highlight just three of the main ones that I think might be most relevant to you all here. Next slide.
R: The first one is that we're currently using JSON as a serialization format, which is very nice and very flexible. It's also not the most performant approach that you can have, so there are some discussions about maybe moving this to something else, which is what we've currently done in the draft to kind of bypass that.
R: The second aspect is, of course, privacy. It's very simple to say that in theory we can scrub logs for sensitive information, because we're logging at the endpoints, but of course, for some use cases you still need, for example, the raw IPs or the URLs that users connect to, and so on and so forth.
R: So what we're thinking about now is having some kind of sanitization-level approach, where different use cases are more or less strict in how to sanitize, and we give some guidelines on how to do this: which hashing to apply, or which fields should be left out completely. Again, this is something that I think you here have probably encountered at some point or another when collecting information; it would be interesting to hear of similar approaches.
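One minimal way to sketch such a sanitization-level approach, assuming invented level names and field names (the draft itself does not define these), is a keyed hash for fields that must stay correlatable across events, and outright removal for fields that must not appear at all:

```python
import hashlib
import hmac

# Illustrative levels: fields to drop entirely, and fields to replace with a
# keyed hash so the same value still correlates across events without being
# directly recoverable. These policies are assumptions for this sketch.
LEVELS = {
    "strict":   {"drop": {"url", "src_ip", "dst_ip"}, "hash": set()},
    "moderate": {"drop": {"url"}, "hash": {"src_ip", "dst_ip"}},
}

def sanitize(event, level, key):
    policy = LEVELS[level]
    out = {}
    for field, value in event.items():
        if field in policy["drop"]:
            continue  # field must not appear at this level
        if field in policy["hash"]:
            digest = hmac.new(key, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # pseudonymized value
        else:
            out[field] = value
    return out

event = {"src_ip": "192.0.2.7", "dst_ip": "198.51.100.9",
         "url": "https://example.com/x", "packet_number": 42}
moderate = sanitize(event, "moderate", b"per-deployment-secret")
strict = sanitize(event, "strict", b"per-deployment-secret")
```

A per-deployment HMAC key keeps the pseudonyms stable within one deployment while preventing dictionary attacks across deployments.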
R: Finally (next slide), as you might have heard, there are some questions about QUIC's manageability, given that a lot of things like, for example, latency or packet loss are no longer simply deducible from the wire image like they are with TCP, and there are also some problems with the current approaches around the spin bit and loss bits.
R: Some might not even want to deploy them. So one option we might think of is using qlogs to kind of help circumvent this, where you can imagine an idea where parties share qlogs, or very sanitized, very redacted qlogs, amongst themselves to help monitor end-to-end encrypted connections. Whether or not that's a very good idea, or workable in practice, or anything like that is at this point quite unclear, but it's definitely something that we will be talking about. And it's also something where, in a sense, as far as I understand about these things, we have some parallels with things like IPFIX and similar projects that are being worked on here, so also there.
R: I hope we can leverage your experience and opinions on these matters, to let us know if this can be done, how it should be done, and whether it's a good idea or not. Finally, last slide.
R: If this turns out to all work quite nicely and can be generalized, the idea is to go for a separate working group down the line that helps finalize all of this. But for now, for practical reasons, we're going to keep this inside of the QUIC working group, where the work will start in a couple of months. So I'm hoping some of you might be willing to join us there to give us some insight.
S: Thank you, I'll try to keep this quick. I'm going to apologize for not being familiar with your material; it's the first time I'm seeing related stuff. The two comments I'd give you: first, I suggest you consider looking at YANG, not specifically because I think that's the best match for things, but one of the features that the language is enjoying right now is that it's a modeling language; the actual serialization formats that YANG is allowed to output are, you know, multiple, so you can get JSON format, XML, and more.
S: You can even get, you know, other, denser formats as well. So this gives you one level of abstraction, to be able to do the modeling and do the exchange elsewhere.
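A small sketch of the point being made: the same modeled data instance can be emitted in several encodings, so the model stays the stable abstraction while the wire format varies. The container and leaf names below are invented for illustration:

```python
import json
from xml.etree import ElementTree as ET

# A single modeled data instance; the model is the abstraction, and JSON and
# XML are two interchangeable encodings of the same data (analogous to the
# RFC 7951 JSON and NETCONF XML encodings of YANG-modeled data).
data = {"connection-stats": {"packets-received": 42, "smoothed-rtt-ms": 33}}

# JSON encoding
as_json = json.dumps(data)

# XML encoding
root = ET.Element("connection-stats")
for leaf, value in data["connection-stats"].items():
    ET.SubElement(root, leaf).text = str(value)
as_xml = ET.tostring(root, encoding="unicode")
```

Either encoding round-trips to the same modeled values, which is what lets the modeling and the exchange format be chosen independently.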
S: My second comment is: I think the majority of your work is going to be figuring out how to define a common header that can be used for the exchange model, for passing these objects around. Once you have that, the protocols for doing subscriptions for things of interest will, I think, fall out from there.
A: Okay, also making this very quick, so to speak. Sorry. You were talking about pcap with TCP, old school, how you would do it, and Wireshark was an icon there. There is an effort that wants to document pcap here; unfortunately, the pipe that is fueling that effort was a little bit congested for this idea.
A: There was no update on this, but maybe that's something to synergize on, because I guess people who are used to that probably want to use the dissection tools at some point with your logs, and it might be interesting to look at, and of course the adjacency beyond the IETF and that stuff; offline discussions for that will most certainly emerge. Thanks for that.
B: On interest in OPSAWG, let's start the poll; you should see a raised-hand session. So raise your hand if you support this work in OPSAWG, or do not raise your hand if you do not support it.
H: And just while it's going on, the background here of why I'm asking this question is because, obviously, if this work is in the QUIC working group, there's obviously a lot of other work going on there, and it may be that the people who are interested in this in particular find that they get swamped with other stuff. And so maybe we should have discussions, Martin and I, as to whether splitting out another working group for this work makes more sense, and whether it would be better in a different home.
B: Well, having a "do not raise" for the negative is probably not the greatest idea; if we wanted to, we could say the opposite. But so far, in the interest of time, I'll just call it here: 12 hands have gone up in support of interest here, out of the 53 participants, so there's clearly some interest in that.
T: Hello, working group folks. I'm Chaitanya, and I'll be presenting updates to the draft on behalf of my fellow co-author Shiva. Next slide, please.
T: This can really help network operators immediately understand the root cause, and can facilitate automation to fix the underlying issues without any manual intervention. So this proposal actually focuses on extensions to IPFIX for exporting the dropped-packet exception information in IPFIX format. This was introduced at IETF 109.
T: Some of the information elements already exist today. This draft floats a couple of new information elements, which are the forwarding exception code and the next hop ID, and the corresponding types are defined. Next slide, please.
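To make this concrete, here is a rough sketch of encoding one IPFIX data record carrying two such information elements. The element IDs, field widths, and values below are invented for the example; the actual registrations and types are the ones defined in the draft:

```python
import struct

# Hypothetical IE assignments for this sketch (not the draft's real values):
FORWARDING_EXCEPTION_CODE = 1001   # unsigned32
FORWARDING_NEXTHOP_ID     = 1002   # unsigned32

def encode_record(exception_code, nexthop_id):
    """Pack one IPFIX-style data record (network byte order).
    In real IPFIX, a template record sent separately tells the collector
    which information element each field position corresponds to."""
    return struct.pack("!II", exception_code, nexthop_id)

record = encode_record(exception_code=7, nexthop_id=42)
code, nh = struct.unpack("!II", record)
```

Data records carry only the values; the template/data split is what keeps the per-record overhead low on the export path.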
T: So we have addressed a few comments received since IETF 109. There was a comment from Joe on the next hop ID, specifically a request to add additional description and examples of how to populate this field.
T: We have added section 4.2.2 for the forwarding next hop ID, with some additional description and an L3VPN network example of how to populate this field in some use cases. Additionally, we have added section 4.2.1 with some additional justification for the forwarding exception code.
T: And then the next comment that we received was from Robin, which says this is a really useful use case and will be a lot of help in troubleshooting the forwarding errors in any network.
T: So there are no changes corresponding to that. And there was a comment from Rüdiger as well, to explore CBOR/CDDL; I've confirmed with Rüdiger, and he mentioned that this is not applicable to this draft. Next slide, please.
T: So, as for the next steps, we are looking for more feedback and comments on the draft, and possibly working group adoption.
S: Hi, a very brief question after looking through the document. I'm seeing some of the codes for why drops are happening. Has there been discussion about providing drop information about what layer it's happening at? Like, you know, maybe a layer-2 forwarding exception, a layer-3 forwarding exception, etc.
S: Okay, I think there may be some mismatches in there; since we share email domains, I'll say we'll follow up there.
M: What I would like to see is a bit more on why you're introducing new fields. What is the main benefit? Because, for instance, the IPFIX entity 89 is focusing on dropped, forwarded, and consumed, and from what I understood, your main motivation is basically to introduce an enterprise bit and also increase the code-point space for dropped, but you're not referring to any other use cases like forwarded or consumed.
U: One of the reasons, I mean, we are also looking at it in a different way, in the sense that if you look at it, for a given box, right, there is a relationship between the IPFIX records eventually sent to a collector, and then there is a relationship between this collector and the node which is reporting this IPFIX packet, and the nature of the node, depending upon whether it's, for example, a Swisscom, or a Cisco, or a Juniper, or an Arista, right.
U: It's pretty much well known what a given drop code means and what its behavior is on the box. For example, on a given box, the TTL expiry might be a dropped packet, but maybe on a different box the TTL expiry might be one which is actually a consumed exception, in the sense that the packet is sent to the control plane, which will respond with an ICMP unreachable.
U: So in that sense, what we initially thought is that it is probably the drop code which in itself will explain what kind of exception it is. But in addition to that, what we also considered is: remember, all of this encoding is consuming the forwarding-path bandwidth. So what we want is only the organic data, which only the data path knows, to be encoded, and no other extra information which can theoretically be derived.
U: For example, if a given type of code can implicitly mean that it is dropped or consumed, then we don't want to have an extra bit and consume data-path bandwidth in order to do that extra encoding. So that was one of the motivations why we didn't add it. But at the same time, I do understand what you're also indicating: if we have a marriage of the two, it kind of provides backward compatibility, where, I mean, some entity might not want to report both of them together.
U: For the second part, I think the concern is mainly, see, forwarding status in the current form is very, very limited, and again, what we are trying to indirectly do with forwarding status is, we are trying to standardize the drop codes, right. And the problem is, we have so many varieties of ASICs in the networking domain today, with each of them having their own proprietary pipelines, right; some of them use a hard-coded ASIC pipeline, others are software-driven.
U: There are yet others which are microcode-driven, and all of them have their own categories and sets of exceptions. To give an example, on the Juniper side, one of our ASICs reports about 200-odd exceptions in the current format, and that is still increasing, because of the fact that there is a bunch of state; I mean, our pipeline is very flexible.
U: We can enhance it to report more sets of errors and exceptions. So in that sense, kind of generalizing it is a big challenge, and what is an exception for me might not be an exception for somebody else, right. So in that sense, what we thought was that having a separate ID gives us that flexibility, gives us that sample space to do it. And just to quote some examples, I mean, I think there was a question earlier about the layer-2 and layer-3 exceptions.
U: From a forwarding point of view, yes, we do have layer-2 and layer-3 packet exceptions, data-plane state exceptions, and so on, but in most of the cases the exception code in itself implicitly defines whether it is a layer-2 or a layer-3 exception. Again, remember that this will not work standalone.
M: I think that makes complete sense. So then it's understood that there are basically many reason codes, and I also welcome that you're introducing the enterprise bit. But the one piece of feedback, maybe, that you could consider is to increase the scope: so not just look at the dropped packets, but maybe also the forwarded and consumed packets.
P: Right, so I want to present on Service Assurance for Intent-Based Networking. We've got 20 minutes, and I want to take a small part of those 20 minutes, maybe seven or eight minutes, because the second part of it will be presented by a co-author, in this case at the University of Liège, who wants to show what he's been doing in development and research. So what I want to do here (next slide, please) is present those two slides, so we present those two tracks very quickly.
P: I will not rehash what I presented multiple times; I want to evaluate the group's interest. You have the working documents, and most of the time will be used by the professor. All right, next slide. So, flying over the slides, because I presented them multiple times: what is the issue?
P: The issue is that whenever you configure a service in a network and the service degrades, you wonder where the fault is, what the symptoms are, and, in the end, the root cause; or, once you've identified a degrading component or a failing component, which service is impacted. And it's just not easy. So this architecture decomposes the problem into what we call subservices, smaller components; through an assurance graph, we're going to assure those subservices independently, and then we're going to infer a health score for the service composed from the different subservices.
P: Now, this approach complements end-to-end monitoring, whether that is synthetic, like OWAMP, or, you know, hybrid, with IFIT or in-situ OAM, right. Because with this end-to-end monitoring the network is a black box, and we won't understand what's happening in the network. All right, next slide.
P: This is a mock-up (I'm not sure if you could increase the font to full screen); this is a mock-up of what you could be doing. At the top there is like a terminal interface, and to get the health of that service, we decompose it into multiple subcomponents: you know, an IP connectivity, let's say v4 or v6.
P: We decompose it into the health of an interface, a loopback interface on which there's a tunnel interface, and the health of the device. I explained that already in the past. So this is basically the assurance graph, and every single subservice in there should be assured, and we infer the health of the service by reporting the health of the components up to the top, to the service. So if you've got a failing component in that assurance graph, it's going to report the inferred health of the device.
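The bottom-up health inference described here can be sketched as a small recursive walk over a dependency graph. The min() aggregation rule and the node names are assumptions for illustration only, since the architecture leaves the scoring of each subservice open:

```python
# Each node has its own health (0-100) and depends on subservices; in this
# sketch a node's overall score is the minimum of its own health and that of
# everything it depends on, so a degraded component propagates to the top.
own_health = {
    "service": 100,
    "ip-connectivity": 100,
    "interface-loopback": 100,
    "device": 40,          # the degraded component
}
depends_on = {
    "service": ["ip-connectivity"],
    "ip-connectivity": ["interface-loopback"],
    "interface-loopback": ["device"],
    "device": [],
}

def score(node):
    """Infer a node's health from its own health and its dependencies'."""
    deps = [score(d) for d in depends_on[node]]
    return min([own_health[node]] + deps)

service_health = score("service")
```

Here the failing device (40) pulls the inferred service health down to 40 even though every other subservice is healthy, which is exactly the "report up to the top" behavior described.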
P: There are not many networks in the world that are only single-vendor, and this architecture and those YANG modules foresee this case, where we could just augment the module and have even proprietary, vendor-specific components this way. It's also open and flexible in the sense that it covers multiple domains; I believe in closed-loop automation, where we could do it inside a device, inside a controller, inside a domain, or even above that. Now, the idea is to link those assurance graphs that are domain-specific, and because we have references in the YANG module, we could just link those graphs together if you've got services that are multi-domain; and in the end they all are multi-domain, right.
P: You take one application that runs over a VPN, that goes into a cloud, that goes into a server, and these are different worlds. So we could connect those together. Next slide.
P: So if we believe that the answer to this question is yes, then you might consider adopting those working documents. I will stop here now; my co-author will present the second part of the slides, and maybe we should just take all the questions and answers at the end of this presentation.
V: Okay, so I guess you can hear me, guys. Thank you, Benoît. Just a quick introduction: I'm from the University of Liège in Belgium, so good morning or good afternoon, everybody, depending on where you are on Earth. The objective here is just to share with you our experience with this intent-based networking stuff, with what we call the diagnostic agent. So next slide.
V: So, as I said, our main objective behind that is just to evaluate the part of the architecture Benoît has just presented a few seconds ago, and to try to push it forward. And how do we want to push it forward? By providing open source tools, so building a community around the tools, a community from the industry but also from the academic world, and we want to push that stuff further through use cases, and demonstrate through use cases the validity and the interest of this architecture for both worlds, industry and academic.
V: So up to now we have worked on basic use cases, but we are moving forward with that, as you'll see in a few seconds. Our current use case is pretty much to monitor VPP within a VM; VPP, Vector Packet Processing, is a fast packet processor at the user level. So next slide, please.
V: So, to position ourselves in the architecture presented by Benoît a few seconds ago: if you remember, there was this SAIN agent; that's where we implement stuff, or what we call the diagnostic agent. The monitored entity, up to now, is just a VPP running inside the VM; we have this implementation, and we can possibly talk with some SAIN collector through remote procedure calls to send back some information about what we have monitored. Next slide.
V
Please
you
okay,
thank
you,
so
it
does
move
a
little
bit,
but
that's
the
architecture
of
the
diagnostic
agent,
so
the
red
rectangle
is
the
diagnostic
agent.
You
have
seen
on
the
previous
slide,
which
is
based
on
pretty
much
three
parts:
input,
metrics
and
rules.
So
in
terms
of
let's
say
input
we
with
through
the
input
we
discover
data
sources
and
collect
data
such
as
bar
metal,
virtual
machine
information,
vpp
data.
V
Then,
from
this
input
we
can
have
the
metrics,
which
are
actually
normalized
the
input
which
is
normalized
input,
data
which
is
normalized,
and
we
additionally
discover
subservices
the
dependency,
if
required,
we
build
the
graph
of
dependencies
and
so
on
and
again
up
to
now,
those
metrics
comes
with
through
a
file,
a
csv
file
as
a
kind
of
list
of
vendor
independent
metrics.
V
And
then
we
got
the
rules
which
are
based
on
the
metrics
and
the
input.
They
are
just
there
to
check
for
symptoms
based
on
well
the
normalized
metrics
and
so
on
those
rules.
Now
we
have
a
very
basic
language
for
them,
which
is
the
input
file
which
is
csv
file,
and
we
just
parse
it
in
python,
and
those
rules
are
just
it's
just
pythons
with
the
metrics
used
as
variables.
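The rule pipeline described here (CSV in, Python expressions evaluated with the metrics as variables) can be sketched in a few lines; the column layout, symptom codes, and metric names are invented for this example:

```python
import csv
import io

# Rules come in as CSV rows: a comment, a symptom code, and a boolean
# expression over metric names.
rules_csv = """comment,code,rule
high CPU,red,cpu_utilization > 90
rx drops,orange,rx_drops_per_s >= 100
"""

# Normalized metrics collected from the inputs (values invented).
metrics = {"cpu_utilization": 95.0, "rx_drops_per_s": 3}

symptoms = []
for row in csv.DictReader(io.StringIO(rules_csv)):
    # eval() with a restricted namespace mirrors "rules are just Python with
    # the metrics as variables"; production code would want a safer evaluator.
    if eval(row["rule"], {"__builtins__": {}}, metrics):
        symptoms.append((row["code"], row["comment"]))
```

With these values, only the CPU rule fires, so the agent would report a single "red" symptom; this is also where the rigorous rule language mentioned next would slot in.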
V
So
that's
pretty
much
what
we
have
if
we
want
to
push
forward.
It's
obvious
that
we
will
need
to
define
a
rigorous
language
for
the
rules
which
might
be
comes
with
some
grammar
stuff
and
implement
some
kind
of
compiler
within
the
diagnostic
agent
to
execute
the
rules
and
so
on.
V: Please... so dxtop is just a console application which will display the data that's been collected, that has been processed, and that has been decided based on the rules and so on. You have multiple screens; you can switch between screens, you can scroll up, you can scroll down. Obviously, maybe you don't see that much on the slide, but you have a bunch of metrics there with their associated values, and you can have a look at what happens.
V: The ddx web is a web interface which presents the dependency graph. That's pretty much the same graph as Benoît presented a few minutes ago, which was a mock-up; this one is from the real world, I mean, our experiments, and we have a color code so that the operator can quickly have a look at the health of the system. If everything is green, well, that's perfect; if everything is red, there is a problem.
V: So that's just a quick overview. A deeper look at the notion of rules, which are there to highlight symptoms: that's the way it is presented, pretty much, within the CSV file as input. We have a comment associated with the rule; then we have the code associated with the problem, like red, orange and so on; and the rule per se on the right, which, for the moment, is a simple rule with simple operators like greater-than or less-than-or-equal.
V: So that's an example of what we observed; that's our use case I told you about before, the monitoring of the VPP in a virtual machine instance, with two screenshots. In the top screenshot everything is going along smoothly, and you can barely see it, but there is a red circle on the bottom figure, in which we see a drop in the utilization of the machine. So something is happening there, and that's what we can see with our diagnostic agent. Next slide, please.
V: I already quickly mentioned that we have IOAM in there, so that's for traffic telemetry. We want to push that further, so we are active on IOAM, and we believe that putting together IOAM and this diagnostic agent will allow us to have some, let's say, advanced telemetry or advanced observability tooling for your network, for your services, for your apps, if you have that. So with IOAM you can get, just for instance, buffer occupancy, queue depth and so on, which are interesting features you want to monitor.
V
We
have
pushed
a
little
bit
further
iom
with
what
the
so-called
cross-layer
telemetry,
which
is
a
way
to
let's
say,
mix
iom
and
open
telemetry.
So
the
the
idea
is
to
to
have
the
view
of
the
entire
stack
from
l2
l3
up
to
l7
layer
7.
So
that's
visible.
That's
for
any
distributed
tracing
tool,
so
basic
idea
is
to
have
within
iom
the
trace
id
and
spy
span
id
of
open
telemetry
within
the
packet
of
iom,
so
that
we
have
the
full
information.
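The core of that cross-layer idea, carrying an OpenTelemetry trace context inside packet telemetry so L2/L3 data can be correlated with L7 spans, can be sketched as a simple byte layout. OpenTelemetry trace IDs are 16 bytes and span IDs 8 bytes; how exactly they are placed inside an IOAM option here is an assumption for illustration:

```python
import struct

def pack_trace_context(trace_id: bytes, span_id: bytes) -> bytes:
    """Pack a W3C/OpenTelemetry-style trace context into a fixed 24-byte
    field (illustrative layout, not the project's actual wire format)."""
    assert len(trace_id) == 16 and len(span_id) == 8
    return struct.pack("!16s8s", trace_id, span_id)

# Example IDs in the usual hex form.
trace_id = bytes.fromhex("4bf92f3577b34da6a3ce929d0e0e4736")
span_id = bytes.fromhex("00f067aa0ba902b7")

option = pack_trace_context(trace_id, span_id)
recovered_trace, recovered_span = struct.unpack("!16s8s", option)
```

Any node on the path that exports this field lets a distributed tracing backend join the network-layer record to the application span with the matching IDs.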
V
So
that's
nice,
we
have
students
working
on
a
python
wrapper
for
a
the
one-way
measurement
protocol
which
we
want
to
plug
in
with
the
diagnostic
agent,
and
we
are
currently
thinking
about
more
complex
and,
let's
say
mature
use
cases
for
this
observation
and
service
assurance,
which
is
a
an
application
running
in
a
cloud
data
center
edge.
V
Whatever
you
want
through
multiple
instances,
and
we
have
customers
who
express
some
specific
service
level,
slo
between
ourself
and
the
app-
and
we
have
our
diagnostic
agent,
possibly
with
iom,
with
cross-layer
telemetry,
which
will
be
in
charge
of
discovering
the
most
appropriate
app
instance
within
all
the
possibilities
and
the
most
appropriate
path
to
the
most
appropriate
app
instance.
V
And
then
we
connect
both
the
app
instance
and
the
selected
path
and
the
client
through
some
protocols.
We
have
not
yet
decided,
that's
something
we
want
to
explore
like
segment
routing
like
possibly
multipath,
tcp
or
multi-pass,
quick
or
something
else,
and
obviously
we
will
control
control
the
loop
there.
We
will
make
sure
that
the
path
that's
infected.
The
app
instance
that
has
been
selected
is
still
the
most
appropriate
one
over
time
and
this
will
possibly
redirect
the
customer
to
the
up
to
now
most
appropriate
instance
or
path
or
whatever.
V
I
repeat:
our
objective
is
to
build
a
community
community
around
that
around
advanced
telemetry
from
both
the
industry
and
the
academic
world,
and
we
are
all
for
the
open
source,
so
you've
got
the
links
to
some
githubs
where
we
get
the
diagnostic
agent
implementation,
the
iom,
implementation,
the
iom,
agent
implementation
for
the
diagnostic
agent,
but
I
think
you
will
have
also
a
link
there
towards
our
iom,
linux,
kernel,
implementation
and
cross-layer
telemetry
implementation,
and
that's
it
for
me.
B
We're at ten till, and I want to make sure that ops area gets some time for the open mic, but if there is any quick question we can answer it. There was something in chat, but Benoit addressed it. Actually, I think I saw another thing pop up: Med was asking about how the service assurance is designed.
V
Not really, in the sense that, as Benoit mentioned, the agent will communicate with the same architecture through some YANG module or whatever API. So whatever the service, it seems okay to me; or maybe I didn't get your point.
P
I think the question was more about how you compose the graph, and composing the graph basically depends on what you are trying to monitor; there is no free-form language there. You have the intent for the service, and it is going to decompose the assurance graph automatically.
H
Rob, you entered the queue very quickly. So, Benoit asked three questions at the beginning, and I can't remember them exactly. I think the first one was: is this a valid thing to be working on? The second one was: should we do this in the IETF? The third one was: is this the right approach? My view is: yes, this sort of assurance of devices is definitely something we should be working on, and I would like to see this work at the IETF.
H
I think it's a good thing to do. As to whether this is the right approach: it certainly seems to have merit, but obviously there needs to be further discussion. I think what is of interest is not just the YANG models between the devices; the expression language for defining the relationships between the different services is also important.
P
I would agree with that, and this is why the university is also working on this, right?
P
What is the right formal language to do that? Right now, we decided that maybe it's not something we want to solve on day one, because if we report the health score of the subservices, the components, that is already good, and somehow you don't care how you get it, as long as you can say: okay, my component, maybe a virtual firewall or whatever, works fine or doesn't work fine; and if the same holds for the rest, then we're good.
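[Editor's note] The health-score decomposition being discussed can be sketched as follows. This is a minimal illustration, not the SAIN YANG model: the graph shape, the 0-100 scores, and the choice of `min` as the aggregation function are all assumptions made here for the sketch.

```python
# Minimal sketch of propagating per-subservice health scores up an
# assurance (dependency) graph: a service's effective health is the
# minimum of its own score and the scores of everything it depends on.
def propagate_health(graph, scores, node):
    """graph: node -> list of dependencies; scores: node -> 0..100."""
    deps = graph.get(node, [])
    return min([scores[node]] +
               [propagate_health(graph, scores, d) for d in deps])

# Hypothetical assurance graph: a VPN service depending on a CE router
# and a virtual firewall, which in turn depends on a VM host.
graph = {"vpn-service": ["ce-router", "vfw"], "vfw": ["vm-host"]}
scores = {"vpn-service": 100, "ce-router": 100, "vfw": 100, "vm-host": 40}
assert propagate_health(graph, scores, "vpn-service") == 40
```

Taking the minimum is the simplest worst-case aggregation; which function is right (min, weighted average, something expressed in a formal language) is exactly the open question raised above.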
W
I do say this work is actually very valuable, and I think it follows a top-down approach to decompose the service-level data into lower-level data objects. So one of the merits I see is that it can really reduce the data objects it collects from the underlying network. I think it's very useful.
B
Thank you, Benoit. And with that we conclude. I think there were a few other questions that came up in chat, from Eduard and Adam; if you could take those to the list, that would be great, on the same work. And with that we will conclude OPSAWG; the ops area working group section, Rob and Warren, is all yours.
H
The only question is: is Tianran able to share the slides? Right, let Warren start talking first, then.
K
Okay, thank you. Thanks, Elliot; thanks, Joe. So welcome to ops area; this is the shorter part of OPSAWG. Next slide. That's Rob on the left, and this is me on the right. I realize that we've been meeting virtually for a long time, and so some set of people might not know what we look like, but this is us. So, next slide.
K
IoT Ops had its first working group meeting this time. This is a working group that we've been wanting to get chartered for a really, really, really long time, seven or eight years: basically a place where we can discuss IoT operations type stuff in the IETF.
K
What bits are we missing in our protocols? You know, if you take all of the protocols that are developed in the IETF and squish them together, do you end up with a real architecture, or are there some important bits that are missing? It met yesterday; it was a fascinating session.
K
There were, I wrote it down, around 80 people there, and the agenda was incredibly full, and what we realized is that there were enough people and enough agenda items that we're planning on having an interim relatively soon; more on that upcoming. And next slide, which I think might be the last one: so, questions and abuse.
H
Well, it's just a comment, really. I wanted to say that it'd be useful, if anyone is interested in doing a sort of working-group-chairing type role, whether they've done it before or are potentially newer to the IETF and interested in that sort of thing, if they could drop Warren or me an email. That's not to say there are necessarily any open doors right at the moment, but it's useful for us to know people who are interested in that sort of thing.
H
So we can try and plan that out, and maybe we can try and find some opportunities for people to get some experience. So, if you are interested, then please do let us know; that would be really helpful. Thank you. Any other questions? Okay, just a clarification on that.
K
That's not specifically for OPSAWG, right? Oh, you're not trying to scare the chairs, right? Of course: in general, within the ops area, if you're willing to consider being a chair, et cetera.
H
Thank you. Any other questions or general comments? Oh, the other thing I should also say is that we've got an ops hours session for half an hour just after this meeting. So if there's anything you want to discuss and relay back to Warren and me in a slightly more private setting, that is available, and the Webex link for it is in the IETF 110 agenda.
K
Yes, as Rob said, we will be in the Webex room in a bit, if you want to come along and, you know, provide feedback in a more private session.