From YouTube: 20201020 SIG Arch Conformance
B
Check — hello, everybody. This is Hippie Hacker, and I'm your lead — your host today — for this SIG Architecture Conformance subproject meeting.
B
We have an agenda today that's quite full and also has a lot of unblocking. I'm glad to see we have good representation here today, so I will go ahead and share my screen and we'll get down to it. Share screen, desktop one — pardon if you see yourself for a moment. Overall, one of the first things I wanted to make sure we got through on the agenda: every month or so we go through and update our roadmap and our OKRs, which are objectives and key results.
B
This first link that I'll share — I think I can share it within the chat — is the presentation that I'm going to go over, in markdown form; but I'm going to do the presentation in HTML, which is that second link. We've not been changing much from our approach, just trying to iterate on working consistently to increase velocity. Our primary objective was to increase new conformance-stable endpoints — we'd love to get to 40. It's been a hard quarter, a hard release, so I'm still sticking with 30.
B
Current status: we've got 19 that are written and four that are in flight, as far as stuff that I feel pretty confident on, and we'll — I think we'll go through some of those PRs later. One of them is just a promote PR, one's a test that is getting some feedback, and one's at issue.
B
I'm having trouble finding the slide — and there's our 17 points here. One of the hardest things was something we've been trying to debug: we're writing tests and they're not showing up, and it comes down to the audit policies on the main CI jobs — they're filtering the events out. So if I click on this link and give it a little bit of a walk-through: our policy actually has a section that says do not log Events, because of the performance impact.
B
Events do not show up, and I think we're going to have to pivot from trying to update our existing GCE-based conformance PR-blocking or release-blocking jobs over to kind, and probably just create a new CI job based on that. I actually just dropped the link to what I just wrote, so I'll bring that up.
B
This is my hacked e2e that goes through and attempts to modify that approach: lay down a very generous audit policy that says log everything's metadata when you get to ResponseComplete, then modify our control plane, when we bring it up, to bring in that audit folder and update the API audit logs to include the audit policy. I also have to do a few more updates there.
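The "generous" policy described here can be sketched as a small script that writes an audit policy logging Metadata for everything once the response completes. This is a sketch assuming the `audit.k8s.io/v1` schema; the file path is an illustration, not the real job layout.

```python
# Sketch of the generous audit policy described above: log Metadata for
# every request, skipping the RequestReceived stage so each event lands
# once the response is complete. The output path is an assumption.
POLICY = """\
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - RequestReceived
rules:
  - level: Metadata
"""

def write_policy(path="audit-policy.yaml"):
    """Write the policy file the control plane would be pointed at."""
    with open(path, "w") as f:
        f.write(POLICY)
    return path
```

A single catch-all `level: Metadata` rule with no filtering is deliberately the opposite of the performance-minded policy the CI jobs ship with.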
B
We've had some issues with proxy redirect. There are some bugs that we have found because we're writing these tests — there are now bugs that need to get cleaned up in how we do proxy redirects. It's a lower priority, because it's not part of our mandate to update the API itself, so I'm hesitant about the way forward. But it was noted that we need to fix some of these things, and I just wanted to give a kind of update.
B
In conclusion, I still think we're going to be on target for this release, but we are having lots of flake, policy, and redirect problems. Flakes are difficult to debug and reduce; getting the audit policy changes in effect in the CI jobs that we consume has been slow and hard; and the proxy redirect bugs that we've established have also slowed our momentum down. But I'm still quite positive on the release.
B
Some super exciting things: we have cleared our debt — let's go back to here — we will clear our debt all the way back to 1.11. I think we're going to get these five in here, and I'm not sure if we're getting the ones on 1.11 or not; take a look at that here. So we've got five endpoints for 1.19, and there's a priority lifecycle test that I really want to see promoted — it's nice and clean, and it says "feel free to promote"; that was seven days ago.
B
And that would get us to 1.10. Our release-blocking job has been difficult as well; there are some issues around how it's being interpreted by — I think pod-utils is what we came up with.
B
So we have it running on prow.cncf.io and it's working; but if you look at the same job — if, instead of cncf.io, we go to prow.k8s.io and run the same job — it's failing, and what we've come to after a little bit of research still leaves us unsure how to get to it quickly.
B
It's failing because of entrypoints, and so that's going to require a bit of debugging. We're not quite there yet, though I think that will come soon.
B
SIG Network actually mentioned us in their request for a testgrid group for blocking jobs — they noted it and even put in a screenshot — asking, if they'd like to get some, whether they should be using APISnoop to do that. That was from September 12, but it might be nice to follow back around on it.
B
Presentation — I don't think I clicked on the link. Here are the two jobs: here's the one from cncf and the one from prow. The one on k8s.io was failing, and it has something to do with how we wanted this routed, and some issues on this to get involved with. We engaged with SIG Testing on the call this morning and got some good feedback. Other important news, just so we know the 1.20 timeline: we've got code freeze in about four weeks, and the release date is December 8th.
B
And, so, there again: the conformance gate is up and running, though I think we have a bundle we're fixing on that today. We are speaking at the radius on Kubernetes conformance coverage; there's a link on the schedule, and I can drop that into our chat somehow.
C
Hi — so, first question: I know there were many PRs for the audit stuff. Where does it stand now? What is the last one standing that we need to look at?
B
So it sounds like what we need to do — having the audit policy created via that env var is failing, and we've tried hard to figure out why. It might be easier to create a new job based on kind conformance, and I think I just pasted it into our chat at the beginning.
B
Maybe I didn't type it into this one.
C
Hippie, why don't we do that in our—
B
I'll drop that in — there's a thread in SIG Testing, but I'll drop it in #k8s-conformance.
C
So the first problem is we are not able to inject the custom audit policy — that was the first problem. And the other problem is — you said the conformance gate — APISnoop is not working in our prow; it's working in the other prow. So those are the two problems that I heard as really critical right now for your work.
B
Yes — I got onto SIG Testing's call this morning, and with the help of BenTheElder and spiffxp we walked through what that issue might be, and I was able to generate a proper audit policy using the env vars. But debugging...
B
...it is quite difficult, and rather than trying to update that existing job — because there might be performance problems — it was suggested that I create a new job, and rather than basing it on the gci-gce job, base it on the kind conformance job. There is a SIG Testing thread about that right now, opened this morning, and I can share the link to it inside our #k8s-conformance channel.
C
Okay, so you're going to try that — you're going to try, yeah, a kind base, but with a custom audit policy? That's correct. Okay, so you don't need my help there, right?
C
Then the other one was the conformance gate — do you have some hints?
B
This — the next part of the conversation was mainly around that kind hack here. Let's go around to the next one.
B
And it seems to be something to do with the audit — with the, sorry, it's called pod-utils, and decorators. We're not using those decorators, and when those decorators are applied, something changes about how it's executed. I think we just learned from Ben on the call that there's an issue where, when we use the decorators, the entrypoint sidecar ignores any entrypoint set by the container itself when we create the Docker container. So it's treated differently.
C
Okay, so let's do this: if you are still stuck tomorrow, then ping me back and I'll help there.
B
Yeah — it was a good thing to catch up with SIG Testing this morning and get Ben and Aaron at the same time, and that has pushed us forward a bit. I did not expect to get that help. I was actually walking on the beach when I realized the call was coming, so I jumped on the call from my morning walk.
B
To clarify the stuff that Dims just went over — that was kind of the next point. The e2e jobs do not deliver the Event endpoint results, and we need those events to show up in order for us to calculate conformance correctly. It's likely we're going to create a new job — I'm going to close all these many, many tabs, a little less confusing — based on this job here.
B
This job runs the normal test image — I think that's the Kubernetes runtime test environment; maybe that's what krte is — but they download kind and then, within that, run e2e-k8s.sh, and that's what I just wrote an updated version of and posted into the channel.
B
So hopefully I can create a new job based on that to override these lines, because these lines here are where they create the config that does not yet create audit logs inside the artifacts. Hopefully that will allow us to be in full control of our own job and not have to get permission from outside of SIG Architecture — look for that PR later today. For our release-blocking job, as we just mentioned, I think we're just going to need to debug that a bit and reach out to them.
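A new kind-based job along these lines would hand the apiserver an audit policy through the cluster config. Below is a minimal sketch of generating such a config; the mount paths, flag values, and the note about extraVolumes are assumptions based on kind's kubeadmConfigPatches mechanism, not the actual job definition.

```python
# Sketch of a kind cluster config that mounts an audit folder into the
# control-plane node and points the apiserver at the policy. A real job
# would also need apiServer extraVolumes so the static pod can see the
# mounted files; all paths here are illustrative assumptions.
KIND_CONFIG = """\
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: ./audit
        containerPath: /etc/kubernetes/audit
    kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        apiServer:
          extraArgs:
            audit-policy-file: /etc/kubernetes/audit/audit-policy.yaml
            audit-log-path: /var/log/kubernetes/audit.log
"""

def write_kind_config(path="kind-audit.yaml"):
    with open(path, "w") as f:
        f.write(KIND_CONFIG)
    return path
```

Owning a config like this, rather than patching the shared GCE jobs, is what would let the subproject control its own audit settings.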
B
This morning we found a possible issue with the pod-utils entrypoint not calling our Dockerfile entrypoint. And finally we'll get to some stuff beyond that, and that's the node proxy with path test; promoting that should give us an extra four points of coverage. It kind of depends on kubelet, and I'll let Stephen take over this part. Here's a bit of a write-up on it.
F
Okay — can you hear me okay?
F
Yeah, cool. If you just go down a little bit further, into the actual Go test itself—
F
Yeah, yeah, about there — thanks, Hippie. So here's where we're doing the URL string in my mock test; so far the endpoint is going to configz, and I did some checks, and of course that's relying on some stuff that's part of kubelet — and there's also metrics.
G
Is proxy still disableable on the kubelet side? Like, the two endpoints you mentioned can be disabled, I think. So that's one consideration.
G
I know of people who disable proxy to nodes. It's not required for any correct operation of the system today. It's had challenges over the last couple of years, right — we did the tunneling, right, because people don't want to allow the control plane to be able to reach out to their internal network. There used to be vulnerabilities there.
A
I mean, I guess the question would be: do we want it to be part of the overall surface of Kubernetes that's required as part of conformance? If it's not really that central and people don't rely on it, then, you know, I think leaving it out might be appropriate.
G
Yeah — under the old criteria, right, something that was application-centric, I would say clearly no. You can make a case that node proxy is useful for establishing that your application is going to run consistently across Kubernetes clusters, but I don't think it's a strong one. Under the broader one — where we've been trying to ensure that you have a reasonable expectation both as an admin and as an end user — it's not heavily used.
B
We keep track of these endpoints just in case — if anybody wasn't aware, on APISnoop, under the conformance project, we have our list of ineligible endpoints, and they're here. There are 61, and we link to why each is not available for testing. That's the result if we decide to not have it be part of conformance. What would—
B
How often do we see it disabled? I guess that's — we talked about this in earlier meetings: getting a consensus on how often a setup deviates from the norm, and whether there's more than a 20 — you know, a 50 or 20 percent — group of people who do not allow it. The thing would be what it might take to identify things that we would want to have as part of conformance, or as part of policies.
A
We talked about — I mean, that we can get into. We may want to consider some sort of provisional status — not just flagging things conformance, but some sort of beta conformance or something like that, to say that it's a part of... I agree with that thought process.
A
In this particular case — it sounds like we don't even necessarily have a consistent endpoint to point it at. Can we create an endpoint on the node where you can get the node's health check? I think that would probably be more correct — the kubelet's health check, yeah.
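For reference, the kubelet health check being discussed is reachable today through the apiserver's node proxy subresource. A small sketch of building that proxy path (the node names used below are just examples):

```python
def node_proxy_healthz_path(node_name, port=None):
    """Build the apiserver proxy path for a node's healthz endpoint.

    Without a port, the apiserver proxies to the kubelet's default
    port; with one, the nodes/{name}:{port}/proxy form targets it
    explicitly -- the "you can pass a port" point raised here.
    """
    name = "{}:{}".format(node_name, port) if port is not None else node_name
    return "/api/v1/nodes/{}/proxy/healthz".format(name)
```

A DaemonSet-based check, as suggested next, would verify this behavior from inside the cluster on every node.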
G
Oh great — I think you can pass a port. I think we could do a DaemonSet type of thing and verify it works. I'm kind of tempted to put this in the "not eligible" bucket — like, it's totally reasonable to go after the behavior of it, but there's more than just that: you need to test some of the port semantics too, to truly test conformance.
G
I'm actually now worried that anything we did could back us into a corner, so yeah — provisional status, or "not yet ready." I'd probably go for "not yet ready," and then we need to get SIG Node to get an e2e test in there — or yeah, SIG Node ultimately owns this.
B
Let's take a quick verify of this, right? So I'm going to go through and see — here are the endpoints that were listed there. I think if we go down to the bottom we'll see — there's a query we do: is it hit by an e2e at all? And it's not: it's false on those four. One of them, connect core v1 get node proxy with path, has a T, which means there is a test that hits that one, but the other four we do not.
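The check just read off the screen — which of the listed endpoints is hit by any e2e test — can be sketched as a simple filter over APISnoop-style records. The field names are assumptions for illustration, and only the Get operation's "tested" status comes from the meeting; the operation IDs follow the real naming pattern.

```python
# Sketch of the ineligible-endpoint check described above: given records
# like APISnoop's, list the operations never hit by an e2e test. Field
# names are assumptions; only the Get operation is marked as tested,
# matching what was read off the screen.
records = [
    {"operation": "connectCoreV1GetNodeProxyWithPath", "hit_by_e2e": True},
    {"operation": "connectCoreV1HeadNodeProxyWithPath", "hit_by_e2e": False},
    {"operation": "connectCoreV1OptionsNodeProxyWithPath", "hit_by_e2e": False},
]

def untested(recs):
    """Return the operation IDs with no e2e test coverage."""
    return [r["operation"] for r in recs if not r["hit_by_e2e"]]
```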
B
We will add that as the action item: to involve them and let them see our ticket, because we haven't written a test for this. This was just us wanting to get feedback on the approach and whether or not it was valid for conformance, and it sounds like we have that status. Any other comments on node proxy with path?
B
All right, we'll go to one of the last ones here, and that is our HEAD and OPTIONS — somehow I didn't click on it correctly — our HEAD and OPTIONS. There are proxy verbs that are part of our namespace, and this is the find and the head.
B
Do you want me to cover this, Stephen, or do you want to cover it?
B
As far as I can tell — note right here, on a new request we have a verb, which is strings-to-lower on the request method, and that verb is the kube verb associated with the request, not the HTTP verb. First we did a translation directly to lower, and then we have some special verbs, which is what we're hitting in this outlier here: our proxy verbs. They're supposed to be able to do some really non-CRUDdy things, because they're proxies.
B
Something doesn't add up for us, because when we do a HEAD, this is what we get in the log: we get a get. And what's hard for us is that we don't have, within the audit logs, the actual Kubernetes operation exactly; we have to rely on a combination of the verb as stored by the audit logs and the request URI to match up and find the operation ID — which means if that verb isn't head...
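The matching problem described here can be sketched as a lookup from (audit verb, subresource) to an OpenAPI operation ID. The table below is a simplified illustration — the operation IDs follow the real naming pattern but are assumptions — showing why a HEAD that is audited as "get" can never match the HEAD operation.

```python
# Simplified sketch of APISnoop-style matching: the audit log stores a
# Kubernetes verb rather than the HTTP method, so the operation ID is
# recovered by joining on (audit verb, subresource). Because a HEAD
# through the proxy is audited with verb "get", the Head operation is
# unreachable from the audit data alone.
OPERATIONS = {
    ("get", "proxy"): "connectCoreV1GetNamespacedPodProxy",
    ("head", "proxy"): "connectCoreV1HeadNamespacedPodProxy",
    ("options", "proxy"): "connectCoreV1OptionsNamespacedPodProxy",
}

def match_operation(audit_verb, subresource):
    """Look up the OpenAPI operation ID for an audit event."""
    return OPERATIONS.get((audit_verb, subresource))

# What the meeting observed: a HEAD request shows up in the log as "get".
audited_verb_for_head = "get"
```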
G
Yeah, and I'm not even sure that the current behavior is correct, if I were implementing a proxy. But I don't remember whether we had a discussion — it's been five years now, probably, since the last time we really touched this. I don't remember us saying that we weren't going to support HEAD on proxy, and it's kind of a weird thing to do. Now, the risk would be — you know, it's up to proxies to decide how they want to handle HEAD.
G
We don't use the proxy verb — so there is a PROXY, or technically CONNECT, as a proxy verb in HTTP. We're not doing any of that clever stuff; we're just passing it through. This is an API server — I feel like this is an API server bug, actually. We need to make a decision about whether this is a bug or not.
G
And I would expect OPTIONS to work for proxy, and there are some implications where it's possible there's other code intercepting OPTIONS before it gets to, you know, the server. But I do feel like — I don't remember. We had a discussion on how many verbs we would support through the proxy, and I think we chose the known set, and then it got refactored a couple of times, and it does not surprise me at all that... yeah.
G
There's a bunch of assumptions here that I think proxy is actually interfacing with, and I bet custom aggregated API servers might actually have this issue as well. But yeah, I think at some point we conflated two things on the back end. So this really does feel like request info — in the kube model there's no head, but in a proxy scenario...
G
I can — if you at-me on it... well, I won't see email notifications anymore, but I can definitely add context on that. Yeah — okay, we'll do that. Go ahead; I will relate what I can remember.
B
Create a ticket — and, you see, there you go. Thank you for that. Now we're actually getting down to our PR review stuff. So this is the priority class lifecycle test, and I'll also pull up its testgrid. Let me close our other items we've gone over today and look at our priority class lifecycle test. This has merged, which is great, and I think there were a few — let's go down to the very bottom — so this will be a conformance test in a week.
B
It's 12 points — it's one of our larger tests — and we hope everyone is happy with it. If we go down to the bottom on this: currently, just on a technical level, we just need an approve.
B
The last comment that we got — we had Oomichi, who's been interwoven through a lot of this writing, and he has LGTM'd it.
B
We just got a review from Luciano — a few comments on the test itself, but he was wondering why we're not using an existing test, or maybe why we're not picking it up, and we can look at that really quick. They just say: we've got this "should proxy a service and a pod," and they expose a few ports here. Stephen, I think you had mentioned the difference between your test and what he was mentioning — do you want to bring that to mind again?
F
It's the fact that, at the end of his comment, he was talking about patching and deleting, where the image we're using is just a straight echo server, right? So that's not actually possible, from what I can see. But there is the comment from Aaron at the top there, around confirming that the request that's going through is actually the right request — but then the current porter doesn't seem to have a way to give that as feedback. So I've just added some code.
F
Is that a valid option — updating porter so that we can have this extra check that Aaron's after?
A
So you're saying that the image — the pods you're proxying to — doesn't return enough data for you to verify that the proxy didn't alter the HTTP method on the way to the pod. Is that what the issue is?
B
All right, that is great feedback, and we'll push this test out a little further. I want to note that I feel like any points we get this release are hard-earned, and I'm still hopeful that we'll get our 30.
B
Is there anything else that anybody has thoughts on? Oh, we've got a few more, sorry — oh, this is good, this is points, yay, promote. This is a test we wrote a while back; it was flaky, and we have submitted a promotion for it, because testgrid shows, for the very longest time, that it is super green — there are no reds all the way through the history right now.
B
I did re-run these jobs just once, so you can look in the history of when we ran it. So this was the initial creation.
B
I did run the test pull-kubernetes job once, I ran conformance parallel once, and both of the image tests — it's hard; I'm unsure what that job does, but it only took two retries to make it pass, and we would find it here.
B
I think we promoted it, and because there were infrastructure problems at the time, the signal — it was flaky, to be honest — and so we just said: so much work, we're not going to promote it; hopefully the infrastructure over time will be able to handle this kind of thing. It looks like the infrastructure has done so, and now we've got a nice, clean, non-flaky test, and it's been written for conformance. Let's get our seven points.
B
This is because Aaron was concerned that, while it is less flaky now, we might be pushing too hard just to promote it. I think, because I've seen a lot of errors on kind IPv6 that are not specific to us, this is why it tends to get re-run a bit. This is one of those flaky tests that we're having trouble debugging to understand how to get it across the line. So if there's debugging help, or if you think we're far enough along, then go ahead.
B
Thanks. So we'll go to — for the whole PR, right, so—
E
But it's in the PR history, because he's not mentioned, so—
F
Go ahead — if you zoom out a little bit to see the number of runs. I think, just time-wise, there's been a lot of stuff that has come through green since all the CI got cleaned up a lot about two months ago; that's actually quite green. So a lot of the historical flakiness is from, like, July, roughly, yeah. And I've tried to do as much checking offline as I can.
F
So if you scroll up just a fraction more — it's the bit about which images were causing issues: an image couldn't actually be pulled properly.
B
So I think his comment was probably around running these three, yeah. I guess what I would like is feedback from the conformance group as a team, since we're here together: do we need to go through and specifically address the legit things here first? Because what we had was: we merged this, then we got a revert.
B
At some point this month — don't stress. And then the next one is this update/create apps lifecycle deployment test.
B
It should grab from the scheduler — it doesn't look related, but this would be the other one where it would help if we can have someone look at it.
B
Kind of — is it flaky or not now? So these last two: are they flaky anymore, and if we do consider them flaky, can we get a bit of help debugging them? We have, by the way, listened to all of the SIG Architecture and SIG Testing how-to-debug-flaky-tests material as well, and we're still not getting very far.
B
Thank you for your help on that — and that concludes our meeting, with six minutes to go, unless anybody has any other thoughts. We do have — we're recording our conformance testing...
B
...that's our conformance KubeCon video this week, and I'm not sure if — I feel like I should be able to share it with our group beforehand to get some feedback.
B
The other thing is, we have a roll-up, and I just wanted to ask if it was okay to present that small markdown presentation at SIG Architecture in two days' time.
B
All right, thank you for that. Thank you, everyone, for showing up — we couldn't do this otherwise; this definitely takes a village — and I thank everyone for showing up and doing their part. Enjoy your week, and we'll see you in a few. All right, thank you. Bye.