From YouTube: IETF104-SACM-20190327-1120
Description: SACM meeting session at IETF 104, 2019/03/27 11:20
https://datatracker.ietf.org/meeting/104/proceedings/
C
This is the Note Well, which you have seen, which you agreed to when you signed up for the IETF, and which you will have been reminded of a number of times since. If you have any questions about it, be sure to see Chris next time. This is our agenda for the day. Is there any agenda bashing? Nope? Okay. We have scribes and we have a minute taker, so we are good to go.
E
I only have one other item, all right; this should be very short and very easy. I recently pushed a new version of the ROLIE Software Descriptor draft. Technically I actually pushed two versions in quick succession: the first one was just updating the draft because it had expired, and the second one was a very small number of editorial fixes. There are no major content updates, and then there were one or two to-do placeholder texts that I filled in as well. Next slide.
H
Things like that, and then instruct the endpoint to perform an assessment. All I had was an Assessor, and not a decoupled collector/evaluator, so I just had to use the tools that I had on hand. A little bit before the hackathon I had some communications with Henk, and so he brought a colleague, Karl-Heinz, to the hackathon, and brought an implementation of a MAP client and server: basically a pub/sub broker.
H
The MAP client can publish information to the MAP server, which then stores that CBOR-encoded information, along with links and metadata, into an object-based graph data model. So, at the beginning we had our introductions, went over what each of our pieces of software did, and talked a little bit about how we could integrate the two components and see how they could work together.
H
So we decided on a couple of workflows that we wanted to try. The first was basically to take the stuff that I brought, the XMPP clients that were there, and implement one of them as a MAP client as well, so we could publish information to the MAP server and store it in the graph, which we were able to do. Then the more complicated example, which we did on the second day, was actually to do the reverse of that.
H
We took the MAP client and made that into its own little XMPP client, and then we orchestrated a full-on publish/subscribe workflow: an Orchestrator would publish guidance to collect, a collector would do the collection (we stubbed out a little bit of the collection activities) and publish those results back, and the MAP client, subscribed to that topic, would grab that information and push it into that graph data model.
H
Yeah, so I guess I just covered these two slides in the last one. Again, we started with making my XMPP client able to talk to the MAP client, and then the more difficult part was adding the XMPP adapter to the MAP client so the publish/subscribe workflow would work.
H
So, what we learned, first off in a lot of the introductions, is that the MAP server and that graph data model were actually really great for the repository information. We could store any of that collection information, any of that data, along with metadata and with links between all of it, and it looked really promising to move forward. It also helped to inform the current architecture document: a lot of the diagrams and things like that are focused more on the transport aspect.
H
The idea of having interim hackathons and working in a distributed way was talked about a little bit as well, and we really built on those lessons learned to focus on defining the interactions between the components, and on using that to help refine the draft and things like that. We also had a realization.
H
We have kind of missed chances for reviving the information model, but we want to take another look at some of the information elements there and start to get that down to a minimum level of information and data elements to help facilitate the interactions. The next steps are really to start creating those data models and defining those interactions between the components. And I just wanted to give a big shout-out to Carleton for his contributions to that component.
H
We simplified one of the diagrams to take a little bit of the bulk out of it. It had been mentioning all of the XMPP connectors and different flavors of collection capabilities; we narrowed that down and called it one collector component with the XMPP adapter in it, and we put in a lot of descriptive information about some of the XMPP extension protocols that we think are useful in this sort of data flow.
H
In part, building on some of the things that were in the MILE working group with publish/subscribe, we found in previous hackathons, which sort of helped inform this, that other extensions would be useful, for example to leverage larger policy definition structures. OVAL definition files, XML-based checklists, or anything like that are kind of large data sets which have a hard time going through it.
H
With the standard publish/subscribe you have to work around it a little bit to move that bulk of information, and other potential extensions would be helpful in the workflow for onboarding new endpoint components and things of that nature. So we added a bunch of descriptive information to that section, and then in section 3.3 we added a very, very large diagram, a great amount of ASCII art, to show the Endpoint Posture Collection Protocol in the actual model as one of those collectors.
H
It was very beneficial to have the MAP client and that graph data model to use as a repository, and to allow for interoperability between the different components and different ways to implement those components. So it reinforced the idea that, again, we really need to focus on the interactions instead of on how each piece is being built.
H
We want to start building those interactions out and continue working on them at future hackathons, even if that's just one component talking to another component. Going forward, I'm happy to entertain any suggestions, and we want feedback on the draft to keep moving it forward. That's, I guess, what we need, and any amount of constructive criticism on the list we're happy to integrate going forward.
B
Can I ask a quick question? Sure. What is the graph data model?
K
Okay. Inside the graph there is no prescription for how to store the data; you can do it however you want, on a piece of paper, whatever. So we only define data in motion, and the data model for data in motion is basically a nested message protocol that does not require anything but a secure channel such as TLS. There will be messages, and the messages are either client-created or server-created; that's the first distinction. There's always an application association, and there is the data model.
K
Inside, it is basically always a container containing two identifiers, or the two nodes, and one link or metadata item, and all three of these items can have a multitude of attributes associated with them when being published to the graph. They also have two kinds of lifetime: one of them is infinite, so even if the application association dies, the data will still remain in the graph; or you give it an expiration date, so it will be removed automatically from the graph.
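The publish container just described (two identifier nodes, one metadata link, attributes, and a lifetime that is either infinite or timed) can be sketched like this. The structure and every field name here are illustrative assumptions, not the actual wire format.

```python
# Hypothetical sketch of the publish container described above: two
# identifiers (nodes), one link/metadata item between them, each able to
# carry attributes, plus a lifetime that is either infinite or timed.
import time

def make_publish_container(id_a, id_b, metadata, lifetime="forever", expires_in=None):
    container = {
        "identifier1": id_a,   # first node
        "identifier2": id_b,   # second node
        "metadata": metadata,  # the one link/metadata item
        "lifetime": lifetime,
    }
    if lifetime == "timed":
        # timed data is removed from the graph automatically after expiry
        container["expires_at"] = time.time() + expires_in
    return container

c = make_publish_container(
    {"type": "ip-address", "value": "192.0.2.7"},
    {"type": "mac-address", "value": "00:00:5e:00:53:01"},
    {"type": "ip-mac", "attributes": {"discovered-by": "dhcp"}},
)
# infinite lifetime: the data survives even if the session dies
assert c["lifetime"] == "forever" and "expires_at" not in c
```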
I
I think that's also my question, and I assume there is some benefit to using a model for what kind of information you need to collect, right? If you use this kind of model for the data, it can bring you a lot of benefit in showing the relations. So I need to review the document, yeah. Thank you.
K
Also, it is the place to get a bigger picture of everything that's happening or supposed to happen, and of the state the processes are in. It was originally produced for network admission state: which devices are authenticated, which devices get to go into the network, and why. That was a very specific purpose, and we just took that, were not forced to put all the pieces in as before, and it works quite well.
C
So, Adam asked a little while ago if there was anybody committing to review, and we didn't really get an adequate response. So you're committed to review; who was the other person that was committed to review this one?
K
I think after seeing the output of the hackathon we have a better understanding of how the expected operation would work for the architecture, and given the time of the next four to eight weeks, we might even be able to give you contributions rather than just reviewing it, because now we really understand what we are doing; we actually have done some coding, yeah.
K
I think, again, the terminology draft has not progressed, as was highlighted at the last IETF. When we come to a point where we are able to extract the XMPP-Grid out of the architecture, we will come back to revisiting some of the concepts. I was reading the complete terminology two days ago and it's still quite applicable, surprisingly.
K
What we have not yet considered, and which was always outsourced into the terminology document, where it does not belong, is the target endpoint characterization profile. So what is that? It's a very long term; first of all, it's complicated, and the name doesn't convey its meaning; the whole description is in the terminology. What does it do? If you encounter a discernible target endpoint in your network that you want to assess, and you don't know anything about it, you learn about it with different specific collectors, for example a profiler or, I don't know, an intrusion detection system or something like that. Then you get superficial information, but that's the best you can get, so you aggregate this information somewhere, and you also have to label it so that you can find it again, so that you can re-recognize this device, which you actually do not really know, when it reappears.
K
So this is, of course, a best-effort scenario. If people are smart enough, and this is an attack, they will change the identifying information about the target endpoint, so there won't be a consistent profile about it, but the concept exists. It is somehow now, I don't know, offloaded to the terminology.
K
I hope there will be textual considerations that will enable us to bring that back into, for example, the architecture or some solution, but otherwise at some point we will have a gap around this concept, and the continuous monitoring of things we do not know will not be a thing. So that's just my consideration here; I hope we will pick up on this again. And let me reiterate: the terminology draft will proceed alongside the progress of the architecture. That's the only way we can do it at the moment.
K
Otherwise it would be the leading document, defining how the architecture works, and I don't think that's the way it should be. The other thing is that there is still this one profile, the characterization profile thing, stored in there, which should not be in there; it should go into a, well, higher-level draft, I think. And that's my report, and next time I'll request terminology time if there is something to report.
L
So this is the IETF SACM CoSWID draft. We haven't made any changes since the last posted datatracker version, but we're in the process of making them; a number of changes were made this week on the draft. Based on some of Chris's feedback, we worked to reduce some of the representational complexity of the media type. One of the challenges we're facing is that it's a little underspecified from an ISO SWID tag perspective: the ISO SWID tag basically just points to the W3C specification for media type.
L
We were attempting to correct that in the prior versions of CoSWID, but we just didn't have a lot to go on, so instead what we're going to end up doing is just treating it as a text field and then again pointing to the W3C specification on the use of media type, so we'll have parity, essentially, with the ISO spec. We've enhanced the ability to include signature schemes using COSE, so that now you can have multiple signatures; you can do co-signing and a few other capabilities.
L
So that's in the editor's copy right now.
L
We were looking through the normative language of the draft and noticed that there wasn't really any sort of top-level normative language for the model. So we added a MUST statement that basically requires the minimal set of required attributes, to make sure that, since it's a standards-track document, there's some top-level requirement to drive everything within the CDDL. We also reordered the attribute names and sort of refactored how the integer labels are being declared.
L
One thing that we talked about, I think last time, that still remains is the creation of an IANA attribute registry to support extension. We want to take the current attributes that we have, the 50 or so, put them in a registry, and then have a registration process to allow updates to the model over time. That will make enhancements to the CoSWID model easier to do, in a much more controlled way, and it will allow for some expert review of those changes as well.
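The registry idea described above can be sketched as a mapping from stable integer labels to attribute names, where extension happens by registration rather than by editing the model. The labels and names below are purely illustrative, not the draft's actual assignments.

```python
# Hypothetical sketch of the proposed IANA-style attribute registry:
# CoSWID attribute names keyed by stable integer labels, with new
# attributes added through a registration step. Labels here are
# illustrative, not the real CoSWID assignments.
coswid_registry = {
    0: "tag-id",
    1: "software-name",
    12: "version",
}

def register(registry, label, name):
    """Registration step: reject collisions with existing labels."""
    if label in registry:
        raise ValueError(f"label {label} already registered")
    registry[label] = name

# a later extension is registered instead of modifying the base model
register(coswid_registry, 100, "example-vendor-attribute")
assert coswid_registry[100] == "example-vendor-attribute"
```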
C
Is there anybody objecting to going to working group last call? I will remind folks that when we get to working group last call, we need people to actually state publicly that they've read and reviewed this document, to show some consensus for it, and we can't show consensus if it's just the two authors and one other person; that's not a very big consensus. So, thank you. Thanks, yeah.
B
I have a quick question, maybe Henk knows: if you put a YANG model in a document, there's automated review and it comes through in the datatracker. Is there anything even a little bit close to that for CDDL in a document?
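There is no YANG-style automated CDDL check in the datatracker today. As a stand-in, this hypothetical sketch shows the kind of structural check such tooling would perform, applied to an already-decoded CBOR map; the required labels are illustrative, not the real CoSWID ones.

```python
# Hypothetical sketch of an automated structural check on a decoded CBOR
# map, of the kind a CDDL validator would perform. The required labels
# below are illustrative, not the actual CoSWID integer labels.
REQUIRED_LABELS = {0: "tag-id", 1: "software-name"}

def check_instance(decoded_map):
    """Return a list of problems; an empty list means the instance passes."""
    problems = []
    for label, name in REQUIRED_LABELS.items():
        if label not in decoded_map:
            problems.append(f"missing required attribute {label} ({name})")
    return problems

assert check_instance({0: "example-tag", 1: "example-sw"}) == []
assert check_instance({0: "example-tag"}) == [
    "missing required attribute 1 (software-name)"
]
```

A real CDDL validator checks types and structure from the grammar itself; this only illustrates the "automated review" shape the question is asking about.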
M
There aren't standards for a lot of those things, though, which was making the draft kind of incomplete, so we reduced the scope to focus really just on how you get the posture information from the endpoint and how to communicate that information to a centralized server. We still mention the other components that we had originally discussed in the draft before, but it's now all very clearly identified as future work: here are things you could do with this collected information, but we don't really define what you ought to do.
M
We received feedback that the draft was long. We agreed, and we tried to improve its readability, mostly because the idea is to transition this to a best-practices document. The readers really care about the technical details, not just our great ideas about what you ought to do with the data, so we tried to change the draft to make it read more like a BCP.
M
We moved all that into the intro, so readers know what they are looking at. A lot of our examples pertain particularly to specific use cases, so we moved the supporting enumeration of those cases to an appendix. We did a lot of fixing nits, addressing, oh, just commas, and noting in the privacy considerations that information may not be owned by the enterprise.
M
When we originally wrote this, it was sort of with the intention that the enterprise owns the data, but that's not always true, so we updated that, removed a lot of duplicative text, and fixed some of the diagrams. We elevated the API to an architectural component, but I did see your comments that that was confusing; we will work to make that seem better, and we'll address your other comments as well. I feel like a lot of them centered around one issue.
M
Understanding whether the posture is pushed from the endpoint or queried: it's both, right? Yeah, that wasn't super clear, so we will fix that up. Next steps: Jessica's losing her will to live on this document, so presumably Danny has to... but Danny's so calm, I don't know, maybe he's fine, but I need this to be over. So Adam and Bill had said they will help with some architectural alignment.
B
Fine, fair enough; you are correct, and it talks about how to, maybe not, as a separate comment from my review, maybe not coherently. It talks about the back-and-forth between having NETCONF running push versus the NEA components running, right? It says they should just go into the same data store, but there's no profound thought past that.
B
Fair comment, but to the point of review and other folks reading the document: it actually states some very interesting things about what this would look like in future parts of SACM, which you may or may not agree with, but you should absolutely read it in order to get their vision of how these things go together. And so some amount of this is me being at the mic as a contributor here.
Q
Hi, Eric Voit here, just as an extra point of focus: you've mentioned NETCONF and push. Right now at the IESG it's not just NETCONF and YANG Push; it's also HTTP push, and we've been talking about having a concise type of push as well. So if you're looking for other pushing mechanisms, don't think it's limited to just the NETCONF transport.
B
I know; that's why you're a little bit further that way. All right, so just some thoughts on the previous work we did on the information model. It was very ambitious: there are four hundred and seventeen elements in the -10, which was the last revision of that document. It felt like further refining that was a bit of an effort, probably because there were actually 417 data elements or information elements in there.
B
I personally felt like it was really hard to understand what was important in it, because we defined everything. It became super hard to make any kind of trade-offs, to think about data models, about what you would actually do for an actual implementation, because I really felt like you'd have to boil the ocean to make progress there.
B
So, building on the things that came up at the hackathon: what if we approached it from a "can we do the minimum viable set of things in order to make some of this stuff work" angle? I spent 15 minutes jotting down some thoughts on what the minimum viable set is. There are some super obvious ones: IP addresses, host names; we absolutely know we need those.
B
We need to be able to handle date-times, so the sample time and event time, and some real basic things like CoSWIDs with firmware revisions. To the point of saying, hey, leverage what's in the Endpoint Posture Collection Profile (I'm just going to say the name, because it's actually easier than remembering the acronym).
B
We would need to know what those core main points are, what is in the kind of YANG profiles that we really find as being the most important. And then we need a handful of basics on what people really use, in practical terms, for identifying an endpoint: what's the naming anchor that's currently used? So I think that's the really basic minimal set, and I'm hoping the hackathon, you know, Bill and Karl-Heinz's work, helps there.
B
As they glue some of these disparate pieces together, I think there's huge, huge knowledge gained from that, and understanding like, no, we only actually figured out how to exchange, as the key fields, these three fields, because that's what went from what we have as a back end to what you have as a back end, and it's not clear to me that we need that much more.
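The "minimum viable set" jotted down above (network identifiers, the two timestamps, software IDs, firmware revision, and a naming anchor) can be sketched as a single record. Every field name here is illustrative, not a proposed schema.

```python
# Hypothetical sketch of the minimum viable element set discussed above:
# network identifiers, sample/event timestamps, and basic software and
# firmware inventory. Field names are illustrative, not a proposed schema.
from datetime import datetime, timezone

minimal_endpoint_record = {
    "ip-address": "192.0.2.7",
    "hostname": "host-1.example.com",
    "sample-time": datetime(2019, 3, 27, 11, 20, tzinfo=timezone.utc),
    "event-time": datetime(2019, 3, 27, 11, 19, tzinfo=timezone.utc),
    "software-ids": ["example-coswid-tag-id"],  # e.g. CoSWID tag IDs
    "firmware-revision": "1.0.3",
}

# the "naming anchor": the field implementations actually key on in
# practice (hostname is an assumption for this example)
naming_anchor = "hostname"
assert naming_anchor in minimal_endpoint_record
```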
E
This is Steven; I'm channeling Adam Montville via Jabber. So, is another way to say this to look at the workflows we're interested in tackling and extract the minimum set of things we want to see from those? If so, I completely agree. We are making good progress on exchange mechanisms; now we just need to do what you said. Yes.
B
So I would say I'm 90% in agreement with Adam's summarization, only in that I don't know that I'd want to capture every detail of information we want to exchange. If we have to exchange information that is maybe more complete in one of those information exchanges between systems, then I just need enough to be able to know what to ask for from one side to get to the other, and an ability to express how I'm sending the rest of the data, so that both sides can agree that they can decode the larger part.
K
I have to respond to Adam, who cannot see that I'm waving right now: if you start with the minimum of items and then, at trustworthiness, realize you have to refactor the whole thing... this was the experience from earlier work; I'm reiterating it across different domains. So maybe just keeping that in the back of your head while assessing the minimum viable set will save a lot of time.
B
So, as we do trustworthiness or any other forms of attestation, as we decide that we should tackle, say, health and welfare, somebody needs to figure out what it means to be within the envelope versus without; some vendor, or somebody else, can do that, right? It doesn't need to happen in the working group to make that work, but we need to have enough of these meta-elements. So, kind of this, and I don't know what this set is.
B
This is just me noodling, right: name, basic data type, byte length; those things that are necessary. We need to know: is this a standard element, or is this a vendor-specified element? We eventually move this into a registry, so a bunch of vendors can say, yeah, hey, I exchange that same piece of information.
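The meta-elements just listed (name, basic data type, byte length, standard versus vendor origin) can be sketched as a descriptor feeding such a registry. All names here, including the vendor and element names, are illustrative assumptions.

```python
# Hypothetical sketch of the meta-elements mentioned above: each
# information element carries a name, a basic data type, a byte length,
# and whether it is standard or vendor-specified, so one registry can
# hold vendor entries alongside standard ones. All names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ElementDescriptor:
    name: str
    data_type: str   # e.g. "uint", "string", "bytes"
    byte_length: int # 0 for variable length
    origin: str      # "standard" or "vendor"
    vendor: str = "" # set only for vendor-specified elements

registry = [
    ElementDescriptor("ip-address", "bytes", 16, "standard"),
    ElementDescriptor("acme-health-score", "uint", 4, "vendor", "Acme"),
]

# vendor entries coexist with standard ones and are easy to pick out
vendor_elements = [e for e in registry if e.origin == "vendor"]
assert [e.name for e in vendor_elements] == ["acme-health-score"]
```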
B
So, thoughts: is this a workable way forward for the folks in the room? Can you see how this would or would not work for you? Could you build a repository that's smart enough to deal with this? Effectively, how painful would it be, potentially, to have a repository that might have significant hunks of blobs in it as a possibility?
K
What we brought has a feature that eliminates metadata and identifier checking and just stores it into the map, and if you are doing something crazy there, like gigabytes of data, it will break. So maybe you can have some minor filters with size restrictions, or possibly it can be smart and record the types it has in there; and even if it does not know the type, you can still use it. So this is a developer mode.
K
We call it that because having it on in production is probably not the best idea ever, but if you start experimenting, using it with your own data model and extending it, sometimes it's just the metadata, and the graph really doesn't know what the semantics are; you can still allow it to basically categorize the thing, by typing, as an identifier or as metadata, and then it's in there.
K
So we have this today, and I think that's a way to go forward with development and with vendor-specific extensions, and to enable lazy engineers to just start and throw something in there, but I don't think it works as a standard way to use it. So maybe there are different opinions about this.
K
The question, let me rephrase it as I understood it: does the needed information have to be the same as we had? No; maybe give more structured information, like how to nest things, how to name them, and how to extend it, and have a minimal set of them basically working as an example, as a viable minimum. Would that be a good middle ground for an information model? I would say yes, that is the answer. Yes, that's it.
C
That's kind of appropriate, actually. All right, so this is the part of the agenda where we talk about next steps. I think we've identified two documents that will be close to going to working group last call in the very short future, as soon as we get a couple of updates; we really need reviews of those. We have identified further work for the architecture draft and a new information model draft.
C
We could do a quick exercise in assigning some dates to some of this stuff. Let me find my... all right, so for ROLIE we should be in working group last call by the end of the week.
L
Dave Waltermire maybe proposed something like three weeks, because that'll give people time to decompress after the IETF, yeah.
C
Usually, if I do working group last calls close to an IETF, I make them at least three weeks, and sometimes I've had to extend them, so yeah: at least three weeks for that one. So if we go to working group last call in April, then we should look at submitting to the IESG before the next IETF meeting; there should be no reason why we can't do that. Okay.
M
For the document formerly known as ECP, I can have another draft out maybe in three or four weeks, trying to build in a buffer.
H
Yeah, we'll build another revision around that same timeline, three or four weeks, in conjunction with some of the work on those information model elements that Chris talked about. That will help us move that forward together as well. Okay.