From YouTube: 20190604 sig arch conformance
C
Annotating it additionally, which we could potentially get to, but for now just chewing through the schema, basically looking at each field in any given spec, more or less, and generating a list of a sort of behaviors. I think what we can sort of do is say that there's a default behavior, how a given spec field is going to behave when a resource is created with the defaults, and then we have to, you know, describe how the system is going to behave as we alter each mutable field.
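A minimal sketch of what generating that behavior list could look like, assuming a simplified dict-based schema; the field names and the `mutable` flag here are hypothetical simplifications, not the real published OpenAPI schema:

```python
# Hypothetical sketch: emit one behavior stub per field of a resource kind,
# covering the default-creation case plus one stub per mutable field.
# The schema shape and the "mutable" flag are assumptions for illustration.
def behavior_stubs(kind, schema):
    yield f"{kind}: default behavior when created with defaults"
    for field, props in schema.get("properties", {}).items():
        if props.get("mutable", False):
            yield f"{kind}: behavior when '{field}' is mutated"

# Toy schema; a real one would come from the cluster's published OpenAPI spec.
pod_schema = {
    "properties": {
        "activeDeadlineSeconds": {"mutable": True},
        "nodeName": {"mutable": False},
    }
}

stubs = list(behavior_stubs("Pod", pod_schema))
for stub in stubs:
    print(stub)
```

Tooling in this shape could also emit test skeletons per stub, which is the generation step mentioned in the next turn.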
C
A lot of that can be generated by tooling, to the point of test skeletons and everything like that. So I was kind of playing with that: I put some little hacky tooling together as a proof of concept, and then I'll update the KEP that I have to describe that tooling and how that process might work, and we can discuss it on GitHub when I get that uploaded.
B
And I'm trying to pull up the links really quick, but we had just some questions that we came up with. I wanted to get some discussion around them; there wasn't really a good forum for discussion in presentation mode. Yeah, it could be, but I wanted to kind of bring those up, and I'm trying to pull the links up now, so let's kind of do API.
B
This must be an old version of the presentation, but I had some links, and then this next one, I'll go and paste it in, if someone else wouldn't mind fixing the amazing font size. I think these are the core questions that we tried to answer in that presentation; unfortunately I have an old copy, unless I find the actual presentation itself.
B
Yeah, sorry, I'm not really good at editing the text on here; see if this pastes. So this link here is looking for the e2e tests, to see which stable APIs are not being tested, and we also have the stable core APIs which are not conformant, which is this one. And then we have that last one, which is what's untested that's being hit by core.
B
Thinking that's the document that we were working on, and that would definitely help to show this line. But as far as answering the question, John: what we're doing on that first one is we're limiting the user agents to just what's displayed from the e2e tests, what is actually being hit by the e2e.test binary, and we're not going to show everything; we're gonna zoom into just the stable-level stuff.
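That user-agent filtering could be sketched roughly like this, assuming newline-delimited JSON audit events; the field names follow the apiserver audit Event schema, but the exact `userAgent` prefix and the `/api/v1` check for "stable" are assumptions:

```python
import json

# Sketch: keep only audit events issued by the e2e.test binary that hit
# stable core endpoints (/api/v1). Both filters are assumptions here;
# the real tooling may match user agents and API groups differently.
def stable_e2e_hits(audit_lines):
    hits = set()
    for line in audit_lines:
        event = json.loads(line)
        if (event.get("userAgent", "").startswith("e2e.test")
                and event.get("requestURI", "").startswith("/api/v1")):
            hits.add((event.get("verb"), event.get("requestURI")))
    return hits

# Two toy audit events: one from e2e.test, one from the kubelet.
sample = [
    '{"userAgent": "e2e.test/v1.15", "verb": "get", "requestURI": "/api/v1/pods"}',
    '{"userAgent": "kubelet", "verb": "get", "requestURI": "/api/v1/nodes"}',
]
hits = stable_e2e_hits(sample)
```

Only the e2e.test event survives the filter; the kubelet traffic is excluded, which is the "limiting the user agents" step described above.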
B
Now, instead of that, the first one is just not getting tested at all, where we're hitting all of these endpoints and they're not tested; and then, if you actually click on the link, it gives us a list of those endpoints. So we know which 95 endpoints we're talking about, and if you want to change your perspective on that, you can go around. But this is stuff that is tested but not conformance tested.
A
Why
do
we
open
issues
or
like
first
sections
of
the
API,
where
it's
not
as
specific?
It's
a
it's
more
of
a
higher
level
feature?
Why
don't
we
just
open
up
issues,
or
at
least
an
umbrella
issue
for
this
one
and
then
write
down
the
list
for
the
higher
level
features?
We
don't
need
to
go
to
the
details
of
the
95
specific
endpoints,
because
they're
all
grouped.
B
We
thought
about
grouping
by
kind
or
having
a
limit
by
kind
or
other
type
of
things.
We
can
focus
on
those
those
specifics.
Yes,
that
seems
to
be
the
action
out
of
this
list
is,
is
taking
the
endpoints
and
creating
a
super
ticket
around
stable,
endpoints
being
hit
during
our
release,
job
that
aren't
being
tested
at
all.
B
Put
under
them
action
item,
actionable
means
and
then
the
the
other
other
two
big
questions
were
about
stable,
core
they're
tested
but
not
conform.
It
so
stable
core
endpoints
that
have
tests
that
we
could
tag
and
then
the
last
one
was
which
untested
things
are
being
hit
by
core
components.
So
this
is
untested
stuff
hit
by
specifically
see
what
are
we
looking
at
now?
This
is
the
yeah,
so
this
is
anything
with
cube
star
in
the
list.
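The kube* filter just described could look roughly like this; the tuple shape, component names, and endpoint names are illustrative simplifications, not the actual data model of the audit-log tooling:

```python
# Sketch: group untested endpoints by the core component (user agent)
# that hit them, limiting to kube* agents as described above. The input
# shape is a simplification of what audit-log tooling would produce.
def untested_by_component(rows):
    groups = {}
    for agent, endpoint, tested in rows:
        if not tested and agent.startswith("kube"):
            groups.setdefault(agent, []).append(endpoint)
    return groups

# Toy rows: (user agent, endpoint, already covered by a test?)
rows = [
    ("kube-controller-manager", "patchCoreV1NodeStatus", False),
    ("kube-scheduler", "createCoreV1Binding", False),
    ("e2e.test", "listCoreV1Pod", True),
]
groups = untested_by_component(rows)
```

Grouping by component (or by kind, as proposed earlier) gives the buckets an umbrella issue could be opened around, instead of 95 individual tickets.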
B
So
if
you
go
to
the
go
back
to
the
case
new
page
and
go
to
the
very
top
just
so
it's
really
clear
what
we're
looking
at.
On
the
left
hand
side,
it
shows
you
the
list
to
see
all
matches
undo.
The
little
angle
thing
there
on
the
grey
bar
on
the
left
go
down
to
the
little
dot
down
a
little
farther.
Underneath
the
pattern
match
down
there,
you
go
and
up
to
the
CL
matches.
Just
it's
a
tree
and
those
are
the
components
that
we're
looking
at
right
now
filtered
down
to
that.
A
Same
comment
still
applies.
These
are
both
actionable.
We
should
open
up
an
issue
with
regards
to
basically
the
kind
and
at
point
with
the
details
over
to
here.
We
can
prioritize.
We
can
prioritize
them,
maybe
for
like
the
next
session
planning
I
think
one
of
the
things
that
we
should
do.
Probably
if
you
look
at
our
backlog
because
we're
kind
of
locked
out
of
115
now
115
we're
not
gonna,
be
able
to
get
anything
else
in
at
this
point,
yeah.
B
One was the core but not tested, and then the other one was stable core tested but not conformance, stuff we could upgrade, and the last one was core things that are completely untested. Those are the three areas; so this middle one, tested but not conformant, those are our candidates for promotion.
F
So, basically, a couple of months ago we talked about how we have too many images being used by Kubernetes and the e2e tests, and we talked about maybe centralizing some of that, and that effort is being tracked by the issue right there; thank you for that. So one proposal was that we could centralize most of the images into the agnhost image, and basically I have a couple of pull requests which do just that; I can see them right here. Basically, at this moment there's something like 12 or 13 images, and the total image size is currently around 40 megabytes. Considering the fact that the webhook image by itself is at least 30 megabytes, I think it's pretty decent, having so many images at that size. In regards to testing the changes: as far as conformance goes, it passed for me. That is to say, I've run conformance using the new image and it seems to be fine. But of course some of those images are being used by literally thousands of tests, for example the six Felicia and six for each artist, which is basically a material set of tests.
A
We can't do anything, so it's gonna have to wait till 1.16, but in the meantime that doesn't mean that we can't potentially get some of the images into Google registries, and I'll have to poke possibly Dims and some other people to make that happen, because that's still under Google control as far as I'm aware; let folks know otherwise.
F
What will have to be done after the first part, not just this, is that the agnhost image will have to be rebuilt and pushed. Basically, we can have the CI run over the first part of the centralization, so we know that that's going to be fine, and if there are any issues which arise, we'll fix them before we actually replace all of the images.
F
The first part is the one that's a little bit bigger and does a lot of the logic that's required; the other parts are more straightforward, I think. The other thing that would remain is that the image build process that is currently being used by the systems we have at the moment should include a Windows node as well, so they can be built and published under the GCR registry.
A
A
A
Are
there
any
other
topics
that
folks
want
to
discuss
today?
We
could
go
through
the
project
board,
but
what
I
was
thinking
is
to
get
updated
issues
and
to
take
a
quick
past
role
offline.
He
synchronously,
multiple
folks.
We
could
do
that
via
slack
and
try
to
prep
for,
like
an
actual
planning
session
in
the
next
meeting,
seems
like
more
appropriate
because
my
context
is
totally
swapped
out.
I,
don't
know
if
other
people
are
up-to-date
but
I'm
still
suffering
from
post
coupe
Khan
trying
to
swap
pages
back
in,
because
I
was
out
too
as
well.
C
No, I think that makes sense. I think that there are several issues that, you know, now that 1.15 is closed, are gonna stall a little bit. I think there's some sort of work that can be done to retarget things to milestone 1.16, little administrative things that need to be done, and, you know, I think then, as I recall, and maybe this is what we need to go through and check.
C
There were a number of issues still pending updates, waiting for the PRs to be updated with some tweaks. Claudiu, do you know, are there any requisite PRs that didn't make it in? We had a bunch of promotion PRs that were waiting on other changes to get in, reorganizing some things. Did all of those get in, or are we still locked right now?
F
Basically, the purpose of the agnhost image is to be able to run certain things independent of the platform, and one of the things, for example, basically has a different behavior on Windows and a different behavior on Linux, and that should be treated in agnhost in some way, and I haven't started on that yet, since I've been busy pulling in all the images. Okay.
B
That
is
an
update
to
the
documentation,
but
also
runs
the
commands
that
bring
up
what's
expected
inside
the
documentation
and
having
that
being
captured
with
the
audit
logs,
so
that
we
can,
in
a
concise
form,
define
a
behavior
based
on
walking
through
our
own
documentation,
because
that
would
allow
us
to
scale
out
if
we
can
find
a
way
to
have
people
contribute
to
ensuring
the
documentation
does
what
it
says.
That
would
also
allow
that
documentation
section
to
actually
describe
a
behavior.
B
That would say: here is how this documentation interfaces with the API server, and it would allow us to have really, you know, very well-informed test-writing information for how this behavior affects a cluster. We went through pretty far, but we didn't concisely wrap it up. Where we have it: the document is, Jean-Marc exported the markdown that included the output of the different commands and the audit log, and at that point we're kind of handing it off to: now, how do you analyze those audit logs to either give us something actionable for how we write some tests, or hopefully, at some point, automating our, you know, analysis of all of those behaviors to get something useful. I don't know, it's kind of weird where I stopped, but I wanted to get some feedback on: what interesting ways can we go and define these behaviors, where we can distribute this and include more people in helping to easily define them, by writing no code and just following the documentation?
C
Yeah
I
mean
I
think
that
I
I
think
that
the
executable
documentation
thing
is
an
interesting
idea.
I
think
that
my
my
concern
about
it
with
respect
to
the
conformance
test
I
think
we
talked
about
this-
that
cube
con
is
at
least
part
of
the
goal
of
what
I
was
with
that
kept
em
working
I
was
trying
to
do
is
create
a
system,
a
single.
C
I don't want the... I guess, one of the things is, I don't want the documentation to necessarily be the system of record, or the document of record, of what the behavior should be. And so how do we make sure of that? How do we use that to inform the record, but not necessarily be the record? You understand?
B
It's not saying this is the definitive list; it's just saying here is a really well-informed definition of conformance, based on what we say Kubernetes is. Yeah.
C
And
I
mean
if
nothing
else
like
having
documentation
that
that
has
some
validation
right
that
runs
through
some
automated
process
and
shows
that
it
actually
works
so
that,
like
you're
saying
when
onboarding,
if
you're
going
through
an
onboarding
thing,
your
documentation
is
wrong.
It's
very
frustrating,
so
this
would
be
mouthful
of
matter
guard
I'm,
not
sure
what
what
it
would
take
to
get
it
there.
I
guess:
that's
what
you
were.
You
took
a
few
of
those
we're
gonna
go
through
and
try
to
try
it
out.
How
did
that
go.
B
We
we
kind
of
developed
some
other
tooling
around
it,
where
we're
working
with
someone
else
so
you're
pairing
together
and
as
you're
going
through
the
documentation.
It
actually
creates
a
little
session
pointing
out
a
cluster,
and
you
just
kind
of
hit
go
on
the
the
code
blocks
within
the
documentation
because
we
started
with
the
markdown
for
the
links
you
sent
us
and
then
just
kind
of
converted
that
into
executable
documentation.
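The conversion being described, pulling runnable code blocks out of markdown so each can be executed against a cluster, might look roughly like this; which fence languages count as runnable is an assumption, and the real tooling surely does more:

```python
import re

# Sketch: extract the commands inside shell code fences from a markdown
# document, in order, so each block can be run and its output captured.
# Treating only shell/bash/console fences as runnable is an assumption.
fence = "`" * 3  # a markdown code fence, built to avoid literal nesting

def executable_blocks(markdown):
    pattern = re.compile(
        fence + r"(?:shell|bash|console)\n(.*?)" + fence, re.DOTALL
    )
    return [block.strip() for block in pattern.findall(markdown)]

# A tiny two-block document in the shape of the docs pages mentioned above.
doc = (
    f"{fence}shell\nkubectl run nginx --image=nginx\n{fence}\n"
    "Some narrative text.\n"
    f"{fence}shell\nkubectl get pods\n{fence}\n"
)
blocks = executable_blocks(doc)
```

Each extracted block would then be run in the paired session, with its output (and, as the next turn suggests, the surrounding audit-log capture) recorded back into the document.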
B
So it's exactly the same, except that it says: here's the code block, you run it, and it captures the output. And we didn't quite get to the point of: before you start running everything, start the audit log capture, and then, when we stop, stop the audit log capture and bring that back inside; that's kind of our next step in that POC. But before we went too far, I just wanted to get some feedback from this group.
B
If
we
wanted
to
spend
a
little
time
bringing
that
so
there
was
this
executable
documentation
that
brought
in
audit
logs,
which
allowed
us
to
have
some
definition
of
conformance
and
with
the
intention
of
that
flowing
into
an
IB.
You
know
in
the
ammo
file
a
possible
do
we
want
this
to
merge
into
the
defined
set
of
complex
behaviors
that
you're
working
on
John.
B
Concept... I guess you've got something; it's trying to get to a POC, yeah, to see if it has value to us, and then, if it has value to us (it's pretty close, like a week or two more), taking that to SIG Docs and going: hey, we found this useful for us to help define what it is. But we don't want to be the people writing the documentation and creating new behaviors.
B
Is
this
useful
for
onboarding
more
people
if
you
raise
community
and
allowing
them
to
walk
through,
and
maybe
they
have
the
combination
of
cross
group
ideas?
Eventually,
this
becomes
a
way
we
define
what
it
is,
what
it
does
we're
doing
here.
It's
generated
by
us
instead
of
kind
of
kind
of
you
know
we're
guessing
pretty
hard.
Dude,
there's
really
fun
things,
but
I
think
it
would
be
really
cool
to
include
everybody
in
trying
to
define
what
it
is.
We're
doing.
B
Put it under APISnoop; it's not really a KEP, no, but I'm gonna add one here. For example, Bella Marek, he gave us some links, and I think it's about John's links, and this will go through and it will show the DNS steps: extend the documentation for the DNS steps, and then execute the steps, and then capture the audit logs. And I guess the last thing was just to display the APISnoop graph of that, for now, but more interesting would be... there's lots of interesting stuff.
C
Just as context for everybody: for at least two of the three links, one was sort of a narrative documentation, right? It's like the regular documentation you read, with examples given to you. The other one was the same, a little more of a superset of the material, but in more of a reference format. So I thought that it would be interesting to see how the tooling that Hippie is working on works on those two different types of documentation.
A
I would definitely notify SIG Testing ahead of time, because they will be the first to notice, as well as the release team. So you might want to... I would plan it very early in the 1.16 cycle, and at least give a PSA to the release team, SIG Testing, and maybe SIG Architecture in general, just so that other people are aware.