From YouTube: 20210112 SIG Arch Conformance
A
Hello, everyone. This is... it's not the July 26th, it's the January 12th meeting of the CNCF Kubernetes conformance meeting, and Clayton and the ii team are here. We've got a short agenda. We have some images that are failing, and I think people are on to that. There's a large discussion in a sig-testing thread that, hopefully, will unblock it; there are some Windows containers that don't have that. Another quick one is the ephemeral containers support.
A
We have three endpoints, patch, read, and replace pod ephemeralcontainers, that have promoted to GA. We need to figure out if that's an accident or whether they need to get tested for that.
A
I was thinking about updating the KEP process so that everything needs the feature flag, and then noting where that feature flag is in the process, as a sort of gate to tie together the API endpoints and their feature flag.
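As a sketch of the gate being proposed here: assume a hand-maintained map from endpoint to feature gate and stage. The map and the `conformance_eligible` helper are hypothetical, not existing KEP tooling; the operationIds are the three endpoints under discussion.

```python
# Hypothetical map tying each API endpoint (by operationId) to the
# feature gate that guards it and that gate's stage per the KEP.
ENDPOINT_GATES = {
    "patchCoreV1NamespacedPodEphemeralcontainers": ("EphemeralContainers", "alpha"),
    "readCoreV1NamespacedPodEphemeralcontainers": ("EphemeralContainers", "alpha"),
    "replaceCoreV1NamespacedPodEphemeralcontainers": ("EphemeralContainers", "alpha"),
}

def conformance_eligible(endpoint):
    """An endpoint is only eligible for conformance testing once the
    feature gate it is tied to is GA; anything still alpha or beta is
    flagged, which would catch endpoints mislabelled as GA."""
    _gate, stage = ENDPOINT_GATES.get(endpoint, (None, "ga"))
    return stage == "ga"
```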
A
Clayton notes that this is likely alpha. I need to research the metadata on why it shows up as GA. Thank you for that; that's an action item for us. Riaan, you've got the right list, the endpoints piece, here.
A
Give me a second to get Riaan back on... please rejoin. Clayton's on. All right, the page goes down at the bottom, so we're going to be quick; we're going to start with what we can get Clayton's help with. Oh, it's this, just saying that it's actually merged. Oh, it's merged! Oh, that's just the plus story. This is our plus-three endpoints. Clayton, one thing on the API group itself: it looks like we have a whole bunch of endpoints, but it's not true.
A
We have three, and then, due to the metadata and how we've been calculating things, this is actually part of two releases. It was updated, and we only look at the latest runs, which means it shows up as 1.21 for all of those endpoints. I'm not sure of a clean way for us to handle that, other than to just note that a lot of these were actually tested in 1.17.
A
Because if you go to APISnoop now, we go to conformance progress and we see all that red: 27. So there are 19 endpoints that we're actually testing, except for the three that we just updated. This happens when we update an existing test rather than creating new tests.
A
Our go-ahead has been... we're...
E
If you just go to the bottom... yeah, Clayton, if you're just looking through: before, John raised an issue around, was it not the patch status? Even though we're only changing the spec and the metadata, when we look at the API, APISnoop shows that it is actually hitting the patch status endpoint.
E
Sorry, if we go back up to the... this is down the bottom, where I did a check before and after the actual patch is done. But the YAML that we're putting through as part of the change is... if we go back to the very top, for the Go code. Oh yeah, further down the Golang, yeah.
E
Waiting for the service to start, and then, a bit further down, now we're setting up the change in the actual metadata and the spec, so that does actually patch the status.
E
Yeah, that's okay! It was just a bit confusing: when you look at the response coming back, there is a subsection that shows status, with the load balancer, and it's just a little confusing whether the actual change of just the metadata and the spec is a suitable change.
B
Looking down here... yeah, I mean, if you're patching status, you may mutate both status and metadata, even though it is a status endpoint; there are historical reasons for that, and we couldn't redefine it. Okay, and in the spec, the full object should be returned, but only mutations in status and metadata should be accepted.
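The rule Clayton describes (the full object comes back, but only status and metadata mutations are accepted on the /status endpoint) can be sketched roughly like this. `apply_status_patch` is a hypothetical helper illustrating the semantics, not the real apiserver code, and it glosses over strategic-merge details.

```python
def apply_status_patch(obj, patch):
    """Simulate patching a /status subresource: return the full object,
    but only accept mutations to 'status' and 'metadata'. Changes to
    'spec' (or anything else) are dropped, mirroring the historical
    behavior that couldn't be redefined."""
    allowed = {"status", "metadata"}
    result = {key: dict(val) for key, val in obj.items()}
    for key, changes in patch.items():
        if key in allowed:
            result.setdefault(key, {}).update(changes)
        # anything outside status/metadata is silently ignored
    return result
```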
A
It's nice to hit quite a few with one test. And this is, again, before the PR. You want me to? Yeah.
E
If you just want to quickly... oh, I'm sorry, this is actually just looking at the code. Really, there's some feedback that was given way back in May, the end of May last year, and I need to look at that particular recording. But really, the only thing in the code, a little bit further up, is that there are some Docker images that have been used.
E
But we would... yeah, yeah, I've worked out how to change the mock tests so that they actually do use the new ones as well. So how can Clayton help us? I think there's not... I think that was just more of a status of where I was with this, yeah. There's probably no direct request for this one specifically; I think we've already got another test that's using the watch tooling.
A
I'll go back to this. This is what I've been trying to figure out, and it's... it's been here for a while. It's seven endpoints.
A
And it kind of jumps around. It's probably best for us to stick with Liggitt: there were some concerns initially. This is kind of your PR, obviously; kind of navigating it.
A
And so we thought we addressed that. I saw what you've done there, Stephen, yeah, about updating it so it cannot be empty, and his response back here was that things aren't right. I guess this is where I wanted to get another set of eyes, Clayton. Are we...
B
Yeah, I mean, Jordan's asking good questions which I don't know the answers to, but I would probably say the implication here is that the behavior in the proxy might be confusing.
E
The PR... it's an issue, I think, that's opened, that one there. If you go to the code change... code change, and we'll potentially look at... yeah, if you look at the version before. Can you go back in that stream, one version before? Oh, sorry, if we look at the viewer of the file, isn't it... then can't we go back to the parent of that commit? I'm not sure how to get there from here. Can we try looking at the view of the file?
E
...is what should be used. But because of the question that Jordan came up with, I was getting a little confused about how I was interpreting the feedback from my tcpdump and a few other things that I was using to test it, and then that's where I went down the other track. So does this look a little bit safer, or closer to what should be used?
B
I'm probably going to have to look at this afterwards. I can't bring it all into cache, but I could maybe see that. Let me... I'll put this on my list of things to look at. I may not get to it until tomorrow afternoon, though. No stress. Oh, which one is it? What's the PR?
C
Yeah, let me back out here and drop this, and then...
B
All right, I'm going to leave that Slack message unread, and I don't know about you... I don't know about you guys, but I tend to do that a lot. I'll have, like, giant sections of red, and then I get this anxiety about having all the red, so I don't want to start into it. But when I do go look at it, I mark them, all right? So yeah, Slack is the to-do list.
A
This is our last one: HEAD and OPTIONS don't show up in the API server logs. I forget how far we got, but it keeps getting pushed around, around who owns it. Nobody... nobody wants it.
E
And I think there's a document that they're linking to. Is this it?
E
Go ahead. It got kicked off at the start, and we haven't really had any sort of feedback from them on it, yeah, yeah.
B
Yeah, it's unfortunate that everybody reinvents all of these wheels, and nobody... we never like anybody else's framework for this, and every framework is different, and everybody's interpretation of the framework... I'd probably say this is the API machinery side, but there are two parts of this, which is...
B
We transform, in the API machinery, from the HTTP methods to what we call the internal verbs, or internal methods, and that is not a one-to-one transformation. And because we're doing that transformation, I would probably say that we should be logging the internal verbs, not the external verbs, and so this is intentional. If I were to...
B
...after the kube stack picks up. So we are not audit-logging HTTP requests; we're audit-logging kube changes, and because we do that subtle mapping between HTTP and kube, we're losing a little bit: the likes of HEAD and OPTIONS are lost.
B
It's pretty complex to carry them through all the way. So some of this might just be "conformance defines the behavior", but I'm not sure that it's... I don't know that I'd consider either of these... I could buy an argument, if someone made it, that this is actually just the way that kube works; it could be changed, but it's a big change.
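The HTTP-to-internal-verb transformation being discussed can be sketched as follows. This mirrors the documented request-verb mapping used for audit and authorization; the detail that matters for this agenda item is that HEAD and OPTIONS fall through with no internal verb at all, which is why they never appear in the audit logs.

```python
def internal_verb(method, is_collection=False, is_watch=False):
    """Map an HTTP method onto the Kubernetes internal verb that the
    audit log records. The mapping is not one-to-one, and HEAD/OPTIONS
    have no internal verb, so they are lost by the mapping."""
    method = method.upper()
    if method == "GET":
        if is_watch:
            return "watch"
        return "list" if is_collection else "get"
    if method == "POST":
        return "create"
    if method == "PUT":
        return "update"
    if method == "PATCH":
        return "patch"
    if method == "DELETE":
        return "deletecollection" if is_collection else "delete"
    return None  # HEAD, OPTIONS, etc.: no internal verb exists
```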
A
Our way forward, rather than trying to create a huge change to support that showing up in the audit logs, is probably to find a one-off way of counting them in our code. If we hit some... I don't know how we're going to do it, but if we hit this one-off case where it's HEAD and OPTIONS, there should be another mapping. Here's one thing: that's what I was really hoping we could do, and it seemed like a really simple change, and we got pushback, either from Liggitt or lavalamp.
A
And then in OpenAPI, is it the request, or the request...?
A
It's the operationId. Operation ID, oh, because that's what we really want. I'm not that interested in the HEAD and OPTIONS themselves, but if we could add that to the audit log, that would fix everything and reduce our complexity of trying to figure out what they were trying to do by, well, what combination of HTTP verb and verb option inside of Kubernetes. Just give me the ID for what code was called, and that's the operationId.
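What "just give me the operationId" amounts to: the OpenAPI document already names the exact operation for each path and method, so a lookup like the sketch below is all the audit entry would need to carry. The `SPEC` fragment is an illustrative slice shaped like the Kubernetes OpenAPI document, not the full spec.

```python
def operation_id(spec, path, method):
    """Look up the OpenAPI operationId for an HTTP method on a path.
    'spec' is a parsed OpenAPI document; returns None if the path or
    method is not present."""
    return (
        spec.get("paths", {})
        .get(path, {})
        .get(method.lower(), {})
        .get("operationId")
    )

# Illustrative fragment of the Kubernetes OpenAPI document.
SPEC = {
    "paths": {
        "/api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers": {
            "get": {"operationId": "readCoreV1NamespacedPodEphemeralcontainers"},
            "patch": {"operationId": "patchCoreV1NamespacedPodEphemeralcontainers"},
            "put": {"operationId": "replaceCoreV1NamespacedPodEphemeralcontainers"},
        }
    }
}
```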
A
I'd have to go find that PR, but it sounded like they didn't... whoever it was, there was some pushback: they didn't want to increase the surface area of the API for a one-off use case.
A
So I'm not sure if it's worth pushing back down that road again, since we're hitting this option, because it seems like we would have the operationId at the moment of generating the audit log.
B
Yeah, this is a... this is a thorny one. I can understand why people are... it's like extra work for something that is kind of specific, yeah. I don't have a strong opinion right now.
B
Okay, yeah, I'm just thinking about... I don't have a strong opinion on this, but I'm putting myself in the spot of someone who's like, this is tough, and it isn't like one of those... Both sides of this, whether it's HEAD and OPTIONS showing up or the operationId: there are a ton of other use cases clamoring right now, so I can buy the argument.
A
I mean, the test... so, for this particular couple of endpoints, rather than looking at the operationId, we look at the user agent, and note that this particular test, we know, hits these two endpoints manually, and change the way we calculate it in this one-off, rather than trying to fix it. That feels unlovely, but it is the way to do it that doesn't require us to add to the audit log.
A
That... we'd have to write some pretty gnarly exception code for it, yeah.
B
Yeah, the audit one... the audit one's rough. I can't... that is something that has come up in other contexts, but because we're doing the mapping it's a little uncomfortable, and the operationId is a bigger scope. So, yeah. Unfortunately, I have to drop.
D
Were there other questions? I'll follow up on the one... this has been really useful. Follow up with that one, and see if you can let this other one sit in the back of your mind; if something comes up, that would be great. Thank you for your ongoing and consistent support. We really appreciate you.
B
Inconsistent support, I think. We can all be honest with each other: it is occasional and somewhat absent-minded support. So I appreciate everything you all have done, and welcome back to the mud of 2021.
A
Checking... oh good, glad you're here in the meeting. I just noticed that he's here. Hi!
A
Hey, we had a little... it started at ten minutes after; we went ahead and tried to move stuff around, just to push the agenda to next week, but then Clayton showed up, and then I saw you come in. Good to see you. How are you? Good, how are you guys? Enjoying the summer, and... I hurt my leg on the beach with the boys.
A
I think we went through the agenda, but I also wanted to check and see if there was anything... how things were going with the...
F
Mainly, right now there's a big effort to kind of onboard, like, the internal coding.
F
So I don't have anything right now, today, but I'm kind of thawing out, trying to figure out what's going on myself. I'll probably have a better answer to that next time.
A
Okay, we don't have test runs from 1.17 to compare, so I think we'll just have to update that. But we're still on track to try to do 30 endpoints, and, in parallel, we're also trying to get our tooling up to a point where we're using it to do more Prow stuff as well, for testing, for a working group case in the k8s-infra working group, to get more vendors being able to support it, because right now all of that is Google.
F
Awesome. Quick question, as far as the mechanics of how this works: do you just run the e2e binary to collect this stuff, or do you also run other tests outside of e2e, is what I'm wondering?
A
Whether we hit the APIs too, is that the question? Yeah, yeah... sorry, yes. Yep, I'll do a quick dive into that. Excellent question. Not sharing a pair, but CNCF... no, no! It's k8s, from Kubernetes.
A
...pushing, which is our underlying images that we use: conformance gate, conformance... I'm pretty sure which one's the main one.
A
Here, and here. So we do use these two, and, just to get really clear on where it comes in with APISnoop and how those are connected... I'm hitting the Apple key, but it's not making a new...
A
Let me stop my sharing, sure, and stop my video, and see if that helps. I'll talk, since I'm having issues sharing. In our APISnoop repository, we have some SQL code that retrieves the latest successful jobs for several e2e runs, and those runs are... I think they're run by kubetest, so the test goes through and eventually does call the e2e binary with the conformance flag, right? Right.
A
I think you're asking us how all of that links together. Whenever my machine comes back, I can show you exactly how those work, but the end material out of that is the audit logs written to disk; those are written and saved to buckets, GCS buckets. We retrieve those logs from the GCS buckets, in JSON, and we slurp them into a PostgreSQL database, using all of the information from the raw logs.
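The pipeline just described, audit logs as JSON lines pulled from GCS and aggregated, is done by APISnoop with PostgreSQL, but the core aggregation can be sketched in a few lines of Python. The sample events in the test are made up; the field names follow the audit event JSON.

```python
import json
from collections import Counter

def count_endpoint_hits(audit_log_lines):
    """Parse newline-delimited audit-event JSON, as fetched from the
    GCS buckets, and count hits per (verb, resource, subresource),
    which is roughly what the APISnoop SQL aggregates."""
    counts = Counter()
    for line in audit_log_lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        ref = event.get("objectRef") or {}
        counts[(event.get("verb"), ref.get("resource"), ref.get("subresource"))] += 1
    return counts
```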