From YouTube: Kubernetes SIG CLI 20220420
Description
Kubernetes SIG CLI bi-weekly meeting on April 20th, 2022.
Meeting Notes and Agenda: https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit#bookmark=kix.ejo8kx3zlcof
B: Let's start with the announcements; there are quite a few of them. The version 1.24 release has been delayed, and this is going to affect us, because our code's not going to be able to get in until after that gets shipped and the code is unfrozen. It's been delayed until Tuesday, May 3rd. I put a couple of other items on there to give us an idea of how we can track how this release is going.
B: Okay, let's move on to the next item. It has to do with a proposal by the tech lead for SIG API Machinery, who is asking all of the Kubernetes developers to basically raise the bar, because there have been some reliability issues with Kubernetes.
B: Raise the bar on our reviews, that is. The examples that he uses are that we should not allow, or he is asking us not to allow, PRs to be submitted without tests. So if somebody asks to add tests in a separate PR, Daniel's position is that we shouldn't allow that in the near term, and that any argument that a particular PR is not making the code any worse is also not a particularly convincing one. We should be ensuring that the code base has sufficient test coverage and that the code base is getting better, because of these reliability issues.
C: Yeah, it's probably worth adding that this is basically in line with the overall approach of increasing the reliability of the entire platform. You've probably seen emails from Wojtek and a couple of other folks introducing KEPs for increased reliability. So this is more about the ask towards all the reviewers and approvers: ensure that every single code addition or change is properly covered with sufficient tests; if possible, add extra tests to the area that the submitter is expanding, building, or fixing, so that this way we will slowly be improving the test coverage; or, if you know that a test isn't reliable enough or isn't covering the entire area, improve it. So: a greater focus on increasing the stability of the platform as a whole, basically.
B: Okay, yeah, I guess I'll have to update that link. My apologies.
B: Okay, so the next announcement is that tomorrow we have a community meeting, and I put some information for us there, including the Zoom link and the draft agenda.
C: If I remember correctly, yeah; if you haven't opened the link, two topics are probably worth calling out, having quickly glanced through the agenda earlier today: there will be a discussion about the tech lead versus chair separation, and a discussion about terms for chairs. Currently we don't have any bounded terms for how long you can be a chair.
B: Great, hope to see you there. So, let's move on to introductions. This is the part of the meeting where we can actually get to know others that we haven't met yet, and hopefully make it a little bit more collaborative by raising the profile of folks who might not get much exposure.
B: And of course this is just optional; so if you haven't been here, or if it's been a long time, this is the opportunity to say hi.
A: I can jump in here and introduce myself. Hello all, my name is Corey Jakerting; long-time Kubernetes user, big advocate of Kustomize, and just generally really passionate about developer experience and good documentation. Looking forward to helping however I can in this SIG. Welcome.
A: Yeah, hi. This is my first time joining this call. I've started working on some issues, reviewing some ongoing issues, and contributing some PRs to kubectl, so I just wanted to see how we proceed here, so I joined this call. Hello.
B: If I might ask, how is your name pronounced? Har... Harjos?
D: Yeah, I just had two quick ones, hopefully. So, this PR: we've been talking over Slack, and they want to make kubectl rollout status support multiple resources. Their PR actually doesn't add the recursive flag like they mentioned, but I was actually okay with this, and I wanted to know: does anyone know the history behind the restriction we have to only allow a single resource, when multiple could work and not cause a problem?
C: I guess a lot will depend on the per-command basis. There will be some commands where it's perfectly fine to work with several resources, but there will be others where I would be very cautious doing so. So per se, no, as long as it works the same as it did before.
C: It basically reuses the usual pattern, where we have the visitor and then just apply the code to every single resource. That's perfectly fine, so yeah.
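For context, the pattern being referenced is the resource builder and visitor from k8s.io/cli-runtime, which kubectl commands use to run one function over every resource matched from the command-line arguments. A minimal sketch, assuming default kubeconfig flags; the deployment names are placeholders:

```go
package main

import (
	"fmt"

	"k8s.io/cli-runtime/pkg/genericclioptions"
	"k8s.io/cli-runtime/pkg/resource"
)

func main() {
	// Standard kubeconfig/context flags, wired up the same way kubectl does it.
	configFlags := genericclioptions.NewConfigFlags(true)

	// Build a result over every resource named in the arguments. Accepting
	// several type/name args is what lets a command act on multiple
	// resources while reusing the same per-item code path.
	result := resource.NewBuilder(configFlags).
		Unstructured().
		NamespaceParam("default").DefaultNamespace().
		ResourceTypeOrNameArgs(true, "deployment/foo", "deployment/bar").
		ContinueOnError().
		Flatten().
		Do()

	// The visitor invokes this function once per matched resource.
	if err := result.Visit(func(info *resource.Info, err error) error {
		if err != nil {
			return err
		}
		fmt.Printf("visiting %s %s/%s\n",
			info.Mapping.GroupVersionKind.Kind, info.Namespace, info.Name)
		return nil
	}); err != nil {
		fmt.Println("error:", err)
	}
}
```

In other words, a command built on this machinery gets multi-resource support mostly for free; the caution is about commands whose semantics only make sense for a single target.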
C: It will always depend on a per-command basis, but most of them should be perfectly fine, especially those that are viewing or, I don't know, giving you information rather than modifying. For those it's perfectly fine, as long as the picture isn't somehow murky because it'd be throwing out so much information. Theoretically, even if you do describe... let's say, I don't know, you do kubectl describe pods, even with --all-namespaces, that will work; it will actually generate a ton of requests, but it will describe all the pods it can find in your cluster.
C: So, does it make sense? Yeah, I probably would argue: what's the value of, I don't know, describing 100-plus pods? But there will be some cases where it does make sense, like, I don't know, doing describe nodes in a couple-node cluster. Yes, it will generate quite a few requests, because describe node is pretty heavy.
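Both invocations mentioned here are standard kubectl; the commands and flags shown exist today, nothing is hypothetical:

```sh
# Describe every pod across all namespaces: this works, but it fans out
# into separate API requests per pod (plus their events):
kubectl describe pods --all-namespaces

# Describe every node: each description aggregates node status, pod,
# and allocation details, so it is comparatively heavy per request:
kubectl describe nodes
```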
D: Okay, so the next thing: we have skew tests that are testing the -1 version of Kubernetes against the master branch. This has been the source of many release-blocking issues that have come up in the past couple of weeks and months.
D: I kind of quietly fix these, and it just turns out that, like, someone merged a PR and they changed a test, but the test isn't backported, even though the code base has changed, right? So basically the fix here is usually to backport the test to a previous branch, which always feels wrong to me, but it sounds like we do it often enough that it's a normal thing.
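For anyone unfamiliar with the mechanics, the backport being described is a cherry-pick of the test commit onto the release branch. A rough sketch with placeholder branch and commit names follows; kubernetes/kubernetes also ships hack/cherry_pick_pull.sh to automate this flow:

```sh
# Hypothetical example: port a test-only change back to release-1.23.
git fetch upstream
git checkout -b test-backport-1.23 upstream/release-1.23
git cherry-pick -x <test-commit-sha>   # placeholder SHA
# ...then open a pull request against the release-1.23 branch.
```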
Yamato is nodding his head. The question I want to bring up, and the conversation I really want to have, is around this: I think these used to be run on presubmit, as opposed to periodic like they are now. They run twice a day on master, and we usually only find this at release time, when the release tags get updated. There are these tags in GCS, like k8s-stable-1, which is like the minus-one branch, and whenever these get updated is usually when the skew tests start failing on master-blocking.
C: Okay, can we... because, and I'll be honest, I rarely or almost never look at the post-submits; that's probably why those creep in. Can we make the skew job at least an informing job, but only for kubectl-related changes? If I remember correctly, Prow has an option to trigger a job based on the directories that are being changed. Given that the majority of the code that we have now lives under staging in k8s.io/kubectl, we could easily trigger them, and we could even potentially make them required for all kubectl changes.
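The Prow feature being recalled is real: a presubmit job can set a run_if_changed regex so it only triggers when matching files are touched, and optional: true keeps it informing rather than required. A rough sketch of what such a stanza could look like; the job name, image, and command are placeholders, not an actual test-infra definition:

```yaml
presubmits:
  kubernetes/kubernetes:
  - name: pull-kubernetes-kubectl-skew   # hypothetical job name
    optional: true                       # informing only, not required
    # Trigger only when kubectl's staged code changes:
    run_if_changed: '^staging/src/k8s\.io/kubectl/'
    spec:
      containers:
      - image: gcr.io/example/e2e-runner:latest   # placeholder image
        command: ["./run-skew-tests.sh"]          # placeholder command
```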
C
This
will
be
especially
important
for
us
once
we
will
go
to
separate
repo
or
we
will
be
potentially
thinking
about
releasing
on
a
different
cadence
and
that
will
imply
that
we'll
have
to
support
more
than
just
plus
minus
one
release.
D
Yeah,
the
problem
is
that
it
only
happens
when
the
tags
get
updated
so
until
those
times
that
that's
how
we're
determining
what
a
version
sku
is
is
when
they
cut
the
new
tag,
it's
still
the
same
test.
It's
just
that
file
that
it's
grabbing
the
check.
You
know
the
the
tag,
the
git
tag
version
to
be
testing
against
updates,
and
that's
when,
like
you
know,
it's
actually
doing
the
skew
testing.
Oh.
C: That would be a reasonable skew test, because you always run the previously released version, and you're always getting master, because that's what we run from the current PR.
D
So
we
can
absolutely
do
that,
but
that
means
we'll
have
to
keep
track
of
those
tags
ourselves
most
likely.
Okay,
which
is
fine.
We
can.
We
can
do
that,
but
maybe
we
can
have
some
jobs
that
open
pr's
to
update
those
right,
but
you
see
that
that's
the
problem
is
it's
all
dependent
on
those
tags
which
oh,
I
actually
have
the
docs.
Let
me
paste
these
in
here,
because
I
always
forget
where
this
lives
I'll
put
it
in
here
and
then
I'll
put
it
in
the
dock.
D: No, this one. There's a docs page, actually, in the sig-release repo: a kubernetes-versions markdown file.
D: So maybe this is a bigger thing that we need to talk about with SIG Release, but yeah, it's just that we only find these things because the tags get updated, and that's when the actual skew testing happens. So the answer also could be that there's nothing we can do about it, and we just have to fix it when it comes up. But yeah, that's when I start getting, you know, blown up by failing testgrid jobs and paged, and it's like: oh, something happened here.
C: Well, in the worst case, what will happen is, for example, we will create this job against 1.23 (master versus 1.23) and we will continue keeping master as it is, or whatever the current PR is.
C: So the worst thing that can happen is that we don't update this skew test to run against 1.24, which basically means we automatically keep on running it against 1.23, which is kind of a good thing, until we get a failure, at which point we should be able to pick up: oh yeah, we forgot to update the tag. But that's not a bad thing; I guess by accident we would be expanding the requirements for skew testing.
D: I know we talked in the last leads meeting about, or I know Josh Berkus has brought this up, doing better skew testing in general, so maybe I'll sync with him too and see if he has ideas here. I just wanted to put this on people's radar, because it's usually, you know, not an actual master-blocking problem, but it just starts failing.
B: Yeah, this kind of fits into that theme we were talking about earlier with reliability. I had also seen a particular entreaty from Dims asking us to try to fix some flaky tests.
D: Yeah, that's a good call. Like, for that rollout PR, Philip is working on the test suite for it right now, because I don't think we had any tests for that command. So...
B: Cool, thanks for bringing that to our attention, Eddie. Should we move to the next topic, if there is one? I don't see that there is.
D: Oh, we have our KubeCon talk we actually have to put together; I just remembered that the other day. So if anyone has things they want to hear about, or topics they think we should talk about, or work that they did that we'd like to highlight in our KubeCon State of the SIG CLI, please let us know, for sure.
B
So
this
is
in
the
kubecon
europe
correct,
that's
coming
up
in
may
or
yes,
because
because
they've
also
asked
for
requests
for
proposals
for
kubecon
in
detroit
in
north
america
already.
B: Well, why don't we give you back your time. So thanks, Eddie, for dropping those topics in at the last minute and bringing up those important items. Before I bring this to a close, I'll ask if there's anything else anybody would like to bring up.
B: Okay, thanks for your time, and we'll see you... actually, next week is the bug scrub, is that correct, Eddie?