From YouTube: 20201201 SIG Arch Conformance
A: Welcome to the first-of-December conformance meeting for the conformance subgroup of SIG Architecture. I'm your host today, Hippie Hacker, with Riaan taking notes behind the scenes, and we have a code of conduct, which I think everyone here is familiar with.
A: First on our agenda is fallout from a recent CVE, so I will bring up the technical advisory for that. It looks like the containerd shim API is exposed to host-network containers when using containerd. This is not so great; "probably don't do host networking when you're running Kubernetes" is the very, very short version of it. And right now we have conformance tests that require that, so there was a suggestion from them to remove those tests.
A: There were some thoughts around updating our verbiage on not requiring it, I think in that same issue. There was a conversation on whether we should update our requirements so that any test that needs the cluster to be insecure in order to pass should be discouraged. I didn't want to make a call on that, so the few people on the call agreed we'd probably give it time and have it discussed on the mailing list. Does anybody have any thoughts around this particular CVE or the conformance test removals that are suggested?
D: Just one idea. I'm not sure, I've not looked at the particular tests, but can you modify them not to use host networking? Do they absolutely require host networking rather than cluster networking? That's something we could look at, or the other alternative would just be to demote those tests, I guess.
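For context, the setting in question is a single pod-spec field. Below is a minimal, illustrative manifest (the pod name and image are hypothetical, and the advisory under discussion is presumably CVE-2020-15257) showing what such a test would have to avoid setting:

```yaml
# Illustrative only; not taken from the conformance suite.
apiVersion: v1
kind: Pod
metadata:
  name: host-network-example
spec:
  # The risky setting: the pod shares the node's network namespace,
  # which is what let host-network containers reach the containerd
  # shim's abstract unix socket.
  hostNetwork: true
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2
```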
A: Absolutely, I think we should definitely look at those tests and see if they can be modified. And I think we're in a long cycle; at this point we can't demote something during the last week before test freeze, and I don't know if I would want to do that anyway. So we have probably an entire three months before we hit test freeze again for 1.21.
A: An AI here, then, might be to create a list of those tests and see how hard it is to remove the use of host networking.
A: I have two links here in the document. One is the HTML, which I'll go through, and one is the markdown. I'll copy the markdown and paste it into our chat somewhere; where do I get the chat? There it is, so you can follow along there, and then I'll go through the presentation in this style. Maybe, how do I launch a...
E: Launch a tab: right-click, open link. There we go.
A: One of the things I thought we'd bring to the top of the chart this time is the graphs, and this is really just copy-pasting the numbers from the graph.
A: It's sometimes nice just to see the raw numbers, to see that since 1.14 we went from below 20 percent to above 60 percent in, what is this, about two to two and a half years, and that's exciting. So we don't have too much longer to go. And just so we can see where that data comes from...
A: There's a link here for APISnoop, and the conformance progress tab has our list of the totals for this release; you've got untested and tested, you see 155, 179, 216, and back on this graph, 155, 179, 216 and so on. It's just a ratio of the entire number of endpoints that we do test versus the untested ones that remain. You'll notice that the tested ones go up, the untested ones go down, and the total increases as we add new endpoints.
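A minimal sketch of that ratio (the function name and numbers are illustrative; this is not APISnoop's code, and the transcript doesn't label which quoted figure is tested versus untested):

```python
# Sketch of the "conformance progress" percentage described above.
def conformance_coverage(tested: int, untested: int) -> float:
    """Fraction of eligible endpoints hit by a conformance test."""
    return tested / (tested + untested)

# Illustrative values only: 6 tested and 4 untested eligible endpoints
# gives the "above 60 percent" shape of the current graph.
print(f"{conformance_coverage(tested=6, untested=4):.0%}")  # -> 60%
```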
A: Our primary OKR for the 1.20 release was, obviously, to increase stable coverage, and it'll probably always be that. We really, really wanted to get to 30 new endpoints, and we had hoped we would get to 40.
A: The reality is we got 24. This is a link to all of those, and I'll open them all up real quick just to give a quick list: here is the controller lifecycle test, the pod resource status lifecycle test, and the v1 apps Deployment test. These all list the endpoints and our flow of promotions for making sure those are complete. We'll talk about why we didn't reach all of them in a moment; the ones we weren't able to reach are related to some of the other conformance news.
A: There were eight new endpoints promoted to GA. I think this is part of the enhancement issue for, which one is this? This is RuntimeClass. The RuntimeClass feature was started in 2018 by Tim Allclair, and if we get down here to the bottom, it just got GA'd in 1.20, and there was a list of conformance tests that needed to be promoted as well. The link we follow here is actually the link inside of APISnoop at the top level.
A: It was a little difficult in that the promotions occurred within the last two weeks of the release cycle, and the team for RuntimeClass did not know that they needed two weeks of soak time on their tests to do promotions. So I think it was a little bit rushed to get these through, and we put a few exceptions in there, but we did work with the SIG Release team to update the KEPs to include the need for them to give us more time to get those GA endpoints out.
A: In addition to the eight new endpoints that came with tests, we removed 13 endpoints of debt by marking them ineligible, and that was based on community feedback. First, there are now 75 endpoints that are themselves ineligible, and that large list is pretty much the node proxy area; here is our conversation with the community establishing that the node proxy endpoints cannot be conformance tested, and the full list is there.
A: The feedback happened in several different ways. I won't go into the details of that, but we could consider those 13 endpoints a reduction in debt. We don't currently calculate it that way, because it wasn't work that was included in a test written, promoted, and coordinated by this team, but it does reduce our overall debt by reducing that end target number.
A: This lists all of the endpoints, grouped by stable (which, for the most part, means a lack of an alpha or beta in the URL) and then by group, version, kind. This is the group, so let me use the group here, and underneath that are all of the endpoints. If we have an endpoint that is marked tested, it means the e2e binary set the user agent to itself, e2e, and we detected it, but it wasn't a conformance test. If a conformance test hits it, we mark it as conformance tested.
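A sketch of the detection rule just described, assuming Kubernetes audit events arrive as JSON lines with the standard userAgent and requestURI fields; the matching below paraphrases the description above, not APISnoop's real implementation:

```python
import json

def classify_endpoint_hits(audit_log_lines):
    """Map requestURI -> "conformance" or "tested", per the rule above."""
    hits = {}
    for line in audit_log_lines:
        event = json.loads(line)
        agent = event.get("userAgent", "")
        uri = event.get("requestURI", "")
        if "e2e.test" not in agent:
            continue  # request didn't come from the e2e binary
        # Conformance tests carry a [Conformance] tag in their names,
        # which the e2e framework appends to the user agent.
        level = "conformance" if "[Conformance]" in agent else "tested"
        if hits.get(uri) != "conformance":  # conformance outranks tested
            hits[uri] = level
    return hits
```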
A: I think it's going to get harder from here on out; we're hitting that 60 percent barrier. There are some reasons we had a lot of difficulty this release, and I'll close all these extra windows so it's a little more clear what we're focusing on. We've had lots of flaking, with jobs going up and down, and a lot of it seems to have been infrastructure. So, these two promotions that we were actually able to get through...
A: They weren't from this release; they were from two releases ago.
A: They're older than that, even; if you look at when we started working on these tests, it was a couple of releases ago. But we had flakes, and as we've looked into it, it was probably due to setting the timeouts too low on provisioning our pods. After we were able to look at those flakes and bring the timeouts up, we finally got to include them, figuring out where those edges are.
A: Flaking is something we've had to learn. I'll close those particular issues. We've also had policy changes, and some of those policy changes are related to logging.
A: We lost our audit logging because of a change in the configuration, and we also weren't logging particular types of events. We depend on audit logs, and you can set up your audit policy to only log a few things; we were missing several endpoints, and by changing that policy we were able to actually get them back. So that was a bit of a hindrance for us this release.
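The failure mode described here is a property of the audit policy file the API server is started with. A minimal sketch of the shape of the problem, assuming the audit.k8s.io/v1 policy format (illustrative, not the actual cluster configuration):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # A narrow rule like this records only pod requests, so requests to
  # every other endpoint silently never reach the audit log ...
  - level: Metadata
    resources:
      - group: ""            # core API group
        resources: ["pods"]
  # ... which is why a measurement cluster wants a catch-all final rule
  # (rules are matched in order, first match wins):
  - level: Metadata
```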
A: This particular release we also ran into pushback because there are upstream bugs, and so it's not just writing a test anymore; it's trying to make changes upstream. For instance, "limit API server redirects" is a request because the API server is currently allowing possibly too many things.
A: The request is that we don't write tests for things that still need to be fixed, so as we hit those, all of that research kind of gets put on hold as far as progress output from our team. We also have some areas that are still open for HEAD and OPTIONS. All three of these upstream bugs are obviously holding back a few of our endpoints, and that's the main information on this.
A: It's a pretty heavy slide at this point, but I just want to make the point that things are getting harder, not easier. We also have to build new images. This job is still not fixed, and it's not easy to trigger; I was on the call with SIG Testing this morning trying to figure out how we can make it easier, because this is blocking 12 endpoints here, to make those images available for our tests to use. This isn't a bad thing, but it is...
A: It's requiring a lot more community interaction, and we're getting a bit of latency, likely due to the world being in a stressful place and people having a lot of things going on. But the latency on our communications with folks tends to have an impact as well. To summarize: it's getting harder, and that means the remaining endpoints are likely going to take a bit more effort, to be clear.
A: Some really cool news for our key results. On key result one we got 24 out of 30, which is 80 percent; by the Google standard, where around 75 percent counts as a success, that's still a success, right? So I'm still happy with that. But we did knock it out of the park on our second key result, and that was cleaning up technical debt.
A: We had a goal of clearing debt back to 1.15, and we were hoping to clear it back to 1.14, but what we actually did was clear it all the way back to 1.11. So, back to 1.11.
A: Here we go, it's back here. Everything from 1.12 forward, anything promoted from then on, is now conformance tested. This is really cool: over a year of technical debt erased in a single release.
A: We could try to do it all in one PR, but I think the number of things that would need to occur for that is too high, so we decided instead to do a release-informing, then release-blocking job, where CI Signal can say "oh, we're going to have to revert that before the release, because the requirements in your KEP proposal for the release require that you have this." So we're making it more human than technical.
A: Some progress on that. I'll go ahead and close our links. Here we have our Prow job running and gaining traction for the conformance gate. This first one is where we actually merged that job into kubernetes/test-infra, so that's on prow.k8s.io, and its status is still currently failing.
A: I think it's because we probably need to update our underlying job. Here we have two new endpoints, get service account issuer and get service account OpenID configuration, which likely need to be added to our do-not-test list or updated to beta.
A: We also have our, hmm, I'm not sure why that one came up; that was "gaining traction". Oh, it's in use! You can see that the APISnoop links for the gate were used to catch RuntimeClass: Sergey, definitely close to the release date, got informed, "hey, we're going to need something to get this across the line and test it." That's when I was able to drop links to APISnoop, so the gate was able to quickly turn the wheels of "let's be sure these other endpoints get tested."
A: Let me make this the right size; there we go. The last one is that Prow status, where we need to get onto those few new endpoints that just came through. Further automation is still in progress, to the point where we'd actually create a pull request against APISnoop's underlying data, which is kind of the final part.
A: So when there's a change in coverage, whether it's the introduction of a new endpoint like this, or a test that got a new endpoint, or a promotion, that would then live inside a PR we can actually discuss and annotate. Currently it all gets kind of lost in the noise, and we have to pull it up and point to it in our presentations.
A: All right, other important news in case you were not aware: 1.20 wasn't a short cycle, it was a long cycle, but the test freeze was fairly short; we only had from November 23rd to December 8th. Our code freeze is coming up, and it looks like we're probably releasing next week. As far as 1.21 goes, the release date is still under discussion; we'll go there and check that out.
A: This is the HackMD where they're weighing the choice: do we want three releases a year or four releases a year? We don't know yet, so hopefully we'll hear about that from SIG Release after a while. The next one: Taylor Waggoner is the human at the CNCF who has been managing the conformance program from the CNCF's perspective since the beginning, and when companies submit their results, she has manually gone through and checked the PRs.
A: We added support through a Prow job that we wrote, which runs in the CNCF cluster, to check that there were no failed tests and that we actually got the right number of tests in each submission, so that it's a very accurate representation.
A: We've had people before submit the right number of tests, but they weren't the right tests themselves, and that's too much of a burden to put on a human to assess. So we thank Taylor for her time and are glad to be of service. On to our conclusions for the entire 1.20 cycle.
A: We did get 32 newly conformant endpoints, eight of which were brought along by a promotion (with the endpoints and the tests together) and 24 by us, and we had 13 newly ineligible. So that's 45, if you want to look at it as debt cleared or kept off the board. Overall, realizing percentages aren't the best metric, we got about a nine percent eligible-coverage increase in this release, coming up from 1.19.
A: I'm super happy with these results and I'm proud of the team. Looking forward to 1.21: no radical changes. I think we're on the right track with how we're engaging. It is going to be hard, but even though we didn't reach our goal of 30, I still want to keep the same goals as 1.20.
A: So with that in mind, our key results. Key result one for 1.21 is that we're still going to try for 30 new endpoints, in spite of it being harder, and I'd love it if we could get to 40. That would take a combination of things going just right and us trying really hard, but we definitely want to have that stretch goal in place.
A: We would love to clear out debt all the way back to 1.9. That's going to require at least these seven specific endpoints getting off the board, and we're actually engaging with API Machinery, I think, to try to get these cleared. So, any other questions or feedback for our team as we wrap up 1.20?
C: I have a question. When you say that X amount of code debt has been cleared, is that from the code base, the Kubernetes code base?
A: Let me show you where that sits. On the APISnoop website, under conformance progress, there are two main charts we use. One is "at time of release", which shouldn't change too much; there are ways it does change, because of how we tag metadata, but for the most part, once the release is cut, we shouldn't see much of a change in that graph. That's as opposed to, down here, "conformance coverage by release".
A: We should probably call that one "today", because you'll see that it's as of today. If we look back, debt is created or introduced like this, this orange stuff; this shouldn't happen. So in 1.14, all of these were promoted, and they were promoted without tests. But if we sort, we can see we cleared a lot of that in 1.17, then we hit a few in 1.19, then we cleared a few in 1.20.
A: And that's been cleared. However, if we go back further and look at stuff that was introduced, let's just go all the way back. Well, let's go see those endpoints here; I think here's the one endpoint for 1.11 that...
A: ...right, and 1.11 here. So, going back from now: we cleared this one 1.12 endpoint that was promoted in 1.12; we got it in 1.19, so it's no longer going to appear in the debt. However, we've got these two endpoints that were promoted in 1.11 that still do not have tests, again related to that service status and API registration.
A: The other ones are here; it's kind of mixed for 1.10. We have these five endpoints that aren't tested, but the rest were captured in the 1.17 release. Rather than clicking through all of those, if we just want to see the current debt, you just scroll down to the bottom and can tell that it's all cleared up to here.
A: So for the remaining debt, like 1.9 (which is what we'll probably try to intentionally focus on in a later release), if we go to 1.9, it'd actually be these two columns combined together, like here, where the endpoints were introduced without tests.
A: Here, untested. I'll still have to go back and do some more data mangling to get us further back than that, but that's the full list of debt here, very specific down to the list of tests. I think Riaan has actually done some more research specifically into this, because most of this is all apps; underneath apps, this is definitely DaemonSet, so there'll be a lot of DaemonSet work, Deployment stuff, ReplicaSet stuff. These are old APIs, they're definitely core, and they're completely untested.
A: This is not to scare anyone; these are just the facts. This is the technical debt we've accrued by being a very successful restaurant: we've had to keep going, the kitchen's got a bit messy, and there are dishes to be washed. We're the dishwashers coming in and making sure the kitchen can keep operating and serving up the delightful plates of food that we serve to the community.
A: That's our wrap-up of 1.20. Any thoughts? I think the main caveat is that we're going to try again for 30 endpoints; we think it's going to be hard, but depending on how long the release is, maybe we can put in that extra stretch.
A: I think that actually mostly covers our details here. If you want to look at the actual list of 24 endpoints, it's this red area, old endpoints covered by new tests. Here are our 24 tests, so they were promoted back...
A: A lot of this was 1.5 debt, 1.8 debt, 1.9 debt, and all of it was tested in this release, previously untested.
This
group-
and
this
call
are
kind
of
where
we
talk
about
that,
like
we
said
we're,
we're
gonna
try
to
get
rid
of
those
five
endpoints
for
those
last
two
releases
to
try
to
slowly
work
backwards
in
time
on
the
debt
we
had
talked
about
at
different
points
running.
I
think
it's
actually
with
with
with
the
vmware
and
the
constructing
something
where
we
run
something
like
sono
boyd,
but
we
collect
the
application
data
to
see
which
apis
what
applications
are
popular.
A: Sorry, there were several ideas we could deconstruct there. We wanted to look at popular Helm charts, trying to look at the logs for popular Helm charts based on the downloads from the Helm repository. That's going to be less useful now, because the Helm repositories have since been distributed out to the various projects themselves (think "helm repo add"), so we can't do that anymore.
A: We also thought about, anyway, so: prioritization methods.
A: There's been a focus on Pod in the past, I think, and we looked at going beyond just hitting an endpoint, because Pod itself is a super complex and core type. But that would require looking at and measuring conformance differently, based on the number of fields, and I think when we did some initial research there were 77,000 different field-depth combinations that Kubernetes could use from the top level of the API. I think we all just shied away from that.
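A sketch of the sort of enumeration that produces a figure like that ~77,000 (illustrative, not the original research script): walk the definitions in the published swagger.json and count distinct field paths, capping depth because the schema is recursive.

```python
import json

def field_paths(defs, ref, prefix="", depth=4):
    """Collect dotted field paths reachable from one schema definition."""
    paths = []
    for name, prop in defs.get(ref, {}).get("properties", {}).items():
        path = f"{prefix}.{name}" if prefix else name
        paths.append(path)
        # Follow references to nested object or array-item schemas.
        child = prop.get("$ref") or prop.get("items", {}).get("$ref", "")
        if child and depth > 0:
            paths += field_paths(defs, child.split("/")[-1], path, depth - 1)
    return paths

# swagger.json as published in the kubernetes/kubernetes repo.
with open("swagger.json") as f:
    defs = json.load(f)["definitions"]
print(len(field_paths(defs, "io.k8s.api.core.v1.Pod")))
```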
A: So right now the focus and the prioritization are clear if you just click on that conformance area: we're simply working from the newest back to the oldest. When we get back here to this 1.5 area, I'll do a bit more research and try to break it down into what was 1.5, 1.4, 1.3, 1.2, and all the way back to 1.0.
A: That way we can actually see when an API was introduced and whether we've got tests for it. If you've got ideas for prioritization, feel free to throw them out there. Part of it has been on us, probably, when I say it's going to get harder.
A: It is a bit of shoving the harder things down the road. If there are things that you find important, writing tests for them would be a great way to prioritize them, right? Makes sense.
A: Our 1.20 roadmap: we've done that, and we just looked at this. We had a KubeCon talk; if you get a chance, check that out. I don't know where the videos are yet, though we have a link to the talk itself here.
A: So go look at that. We also have an exported markdown, which might be easier to go through; that's inside a link here, and I can drop it into the chat as well.
A: It goes through the conformance project itself. I don't know if anybody noticed, but at the beginning of the talk we submit a PR to the CNCF conformance repo, at the end we go through and look at the job that validates it, and in between those two we write a test to add some coverage. So we hit a lot of different aspects of the Kubernetes conformance program and of Kubernetes itself.
A: It may be a bit dense, but we only had 22 minutes within which to get the presentation out. There's also a static slide deck; what is this link address? This is the slides as they were presented at the conference, not in markdown mode.
A: We don't have much time left for our PRs, so I'll go through these, and I'll just kind of gloss over this one. We have some push-e2e-images jobs that are still failing. This was one of our blockers for eight endpoints; this one has a possibility of 12 endpoints in total, and eight of those are blocked because the underlying image we need for the test has not been promoted. SIG Testing is aware of it; we had a conversation this morning on their agenda.
A: We have another issue for limiting API server proxy redirects, and I think Clayton was on the call for a moment and we missed him. I think we should probably try to move these items up to the beginning and try to fit in Clayton and those people who can only come in for a moment, earlier, but...
B: The current plan for the redirect at the moment is just to carry on redirecting only for GET and HEAD, and my plan was to look at having a case in the .go file. The only thing is, the logic seems to bail out, and I'm just needing a little bit more clarity around where my understanding is probably lacking. I've had some feedback from Jordan already, so hopefully, if that's unblocked, then we can get, I think there's a...
B: What, seven endpoints? Yeah, I think it's at least seven endpoints that become available once this is fixed, and then we can start the conversation about what happens with some of the other endpoints that are currently getting blocked.
A: I don't know if they have any other thoughts on that one. So far we need to get you feedback from Clayton and respond to Liggitt's information there as well, in parallel with getting the image updated. We also talked about policy changes; when we say policy changes in this case, we're talking about the audit policies that are part of audit logging. The HTTP HEAD and OPTIONS verbs are not showing up in logs, and this isn't because of the policy.
A: This one is because the way the verbs get changed before they're logged is inconsistent, and trying to find the right people to speak with has been a bit of an issue. I'll open up the main issue that we have open right now. The last information we had was about the Kubernetes verb associated with the request being different than the HTTP verb, and there is a reference underneath the auth docs that describes the mapping between the HTTP verb and the request verb.
A: I can show you our code where it makes this distinction, but it seems there are several different places in the Kubernetes source code where this occurs, and I'm afraid it would have an impact everywhere if we tried to change it. So this may be something where we just have to change the way we're recording our hits, because from an OpenAPI perspective, coverage of the swagger.json endpoints and operations, we are hitting it from the front end; it's the way the logging occurs.
A: It's more API surface area for a one-off need, but it's quite hard to figure out in an authoritative manner from the logs which URLs are associated, because it's not just the URL, it's the verb, and when the way we map that to an API operation is inconsistent, we get inconsistent results.
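For reference, a rough rendering of the documented mapping mentioned above (following the Kubernetes authorization concepts page; the HEAD and OPTIONS behavior is exactly the open question in this discussion):

```python
# HTTP method -> possible Kubernetes request verbs in the audit log.
HTTP_TO_REQUEST_VERB = {
    "POST": ["create"],
    "GET": ["get", "list", "watch"],  # depends on whether the request hits
                                      # a single object, a collection, or a
                                      # watch
    "PUT": ["update"],
    "PATCH": ["patch"],
    "DELETE": ["delete", "deletecollection"],
    # HEAD and OPTIONS are the murky cases: they don't surface
    # consistently in the audit log's verb field, which is what breaks
    # mapping log entries back to OpenAPI operations.
}
```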
A: We don't have much time left, but I know that Vlad and Binay have been working on some interesting things too, and I wanted to give a bit of time to them.
D: At VMware we're looking at getting our 1.19 stuff conformance tested, so we're kind of busy with that, but fortunately there's no dependency on the upstream side.
C: I was going to add that, you know, with the disruption of the holidays, things kind of slowed down for me, but I definitely want to revisit the conformance profiles. That's not on the agenda, but it's something we're still interested in and want to keep driving.
C: Yeah, it has been tabled, but eventually, hopefully, we can circle back to see whether it's something that should stay tabled or keep moving. And then, you know, next year I definitely want to see how Sonobuoy can help with this. I'm not sure yet, with what you're doing; I know you've used Sonobuoy to exercise the conformance tests themselves, but I don't know if there's anything else we can do, how we can integrate Sonobuoy into any of this stuff.
A: It would be lovely to be able to grab, and this is what we've had trouble with for a while, when the API server is trying to log, to dynamically redirect the logs. We lost the dynamic audit sink in 1.19, and we need to find a replacement that hooks into the API server's logging framework somehow.
A: It may be a feature we have to resurrect somehow, and I don't know, but I think Sonobuoy having a tool where we can gather information about what's occurring in the cluster at an API level would be super lovely, and there's some really fun tooling there.
C: Yeah, you know what, I'll make a note to circle back with you to understand this a little bit better, because I'm looking for ideas for that.
C: We'll revisit and see how we can help out there, because I'm pretty sure that if you have a need for it, the rest of the community probably does as well. Okay, yeah, so we'll definitely circle back.
A: Well, I will give everyone back the last five minutes of the hour and stop the recording, and we'll see everybody in a couple of weeks; I don't think we'll meet after that. The only thing we'll probably do before the next meeting is send out our OKRs, and I'll probably run through this at the SIG Architecture meeting, just to make sure the roll-up is accepted by the architecture team, so that we can blow it out of the water.