From YouTube: Kubernetes SIG CLI 20230322
Description
Kubernetes SIG CLI bi-weekly meeting on March 22nd, 2023. Agenda and Notes: https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit#bookmark=kix.5tnh4bwwgoub
A
We'll jump straight into the announcements. The code for version 1.27 has been frozen, so I hope we got in the code that we needed to get in, and it's scheduled to be released on Tuesday, April 11th. We've also got KubeCon coming up, April 18th through the 21st. Unfortunately I won't be there, but I hope that everyone who is has a good and productive time.
A
The next item is the SIG CLI annual report, which is the responsibility of the leadership. I'm pretty sure I saw Eddie mention that he had already started our SIG CLI annual report. If there's something that you think merits a mention in our annual report, please bring it to our attention, so that we can bring it to the attention of the rest of the Kubernetes community. It needs to be started within a couple of days, and that's already happened. Thank you very much.
A
No? Okay, so we'll go to the next part of our meeting, which is where we introduce new members, if they would like to introduce themselves. This is an opportunity to meet your SIG CLI colleagues. Is there anyone else who hasn't been here before, or hasn't been here for a while, and would like to introduce themselves?
D
Right, I hope my microphone is working. Hi, I'm Matthias. I'm not part of SIG CLI; this is my first time listening in here. I'm a staff engineer at Anchorstore, and we are running quite a lot of Kubernetes tooling. I'm primarily interested in Kustomize and kubectl.
E
I can briefly intro myself. Hi folks, I'm Arthur, a software engineer at Lacework. It's my first time here at the meeting; later on I'll be presenting, or discussing, an idea that we've had internally, so I'm keen to hear from you in a few moments. I'm not part of SIG CLI either, so this is just my first meeting here. Nice meeting you all.
A
Cool, what a great contingent of new folks, or folks who haven't been here for a while.
A
Okay, so let's move to the next section, as long as I didn't miss anybody and there's no one else who would like to introduce themselves.
A
Okay, so are there any KEP or subproject updates that we would like to address?
G
Just a quick one: there was a PR we got recently for Kui, asking to add the append-server-path option to our use of the proxy. I know there are some known issues there, but other than that, are there any other implications? We can take it offline, but if you have any insights into the implications of just enabling it by default, let me know.
G
The --append-server-path option to kubectl proxy: if you've got a proxy accessible, it's just going to turn it on, even though the default is on.
A
Okay, so why don't we move on to our open discussion? We're going to start with Arthur. I'm sorry if I'm mispronouncing that: is it Arthur? Arthur? That's perfect. Okay!
A
So there was a discussion on Slack, and there's a link to that discussion, about a possible new flag to kubectl drain that's being proposed, and Arthur has graciously accepted the invitation to join us to present this. He's considering a KEP, is that correct, Arthur?
E
That's correct, thanks for the invitation. First of all, I'm mostly looking for direction here, and above all: is this a good idea in the first place? I'm happy to do a KEP for it if that's what the group feels is the best way forward. But Sean, going back to your suggestion, I think it would be a great opportunity to discuss it in a more open-ended fashion here first, and then get into the details in the KEP as well.
E
May I give a brief intro about where I'm coming from and what I hope to achieve? (Please do.) Cool.
E
So the background here is around kubectl drain. At Lacework we need to drain particular nodes for many reasons, all the way from trying to keep things as close to what we expect them to be after a human jumps in and does something, to upgrades of Kubernetes itself.
E
We might need to change the image of the node in the first place. So we do those drains, and for the most part it's automated and it goes well, except for when we have pod disruption budgets that I'm calling "impossible" here (I'm sure the documentation says something else about the PDB, I'm just remembering now). Basically, that's when it's either maxUnavailable zero, or when you have a minAvailable equal to the exact number of replicas that you have on the controller itself.
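The two "impossible" shapes Arthur describes fall out of how a PDB's allowed disruptions are computed. A rough sketch of that arithmetic (an illustration of the idea, not the actual disruption-controller code):

```python
def disruptions_allowed(healthy, replicas, min_available=None, max_unavailable=None):
    """Roughly how many voluntary evictions a PDB permits.

    - minAvailable: at most (healthy - minAvailable) pods may be disrupted.
    - maxUnavailable: at most (maxUnavailable - already unavailable) more
      pods may become unavailable.
    """
    if min_available is not None:
        return max(0, healthy - min_available)
    if max_unavailable is not None:
        return max(0, max_unavailable - (replicas - healthy))
    return 0

# The two "impossible" shapes from the discussion: both permit zero
# evictions even when every replica is healthy, so drain spins forever.
print(disruptions_allowed(healthy=2, replicas=2, max_unavailable=0))  # 0
print(disruptions_allowed(healthy=2, replicas=2, min_available=2))    # 0
# A budget with headroom lets the eviction proceed:
print(disruptions_allowed(healthy=3, replicas=3, max_unavailable=1))  # 1
```

With maxUnavailable: 0, or minAvailable equal to the replica count, the budget never permits a voluntary eviction, which is exactly why the drain never makes progress.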
E
It could be a Deployment or a StatefulSet and so on. I'll keep naked pods to the side for a second; I know they have some specifics that we'd potentially need to discuss too. But basically, for reasons outside of my control, we do have those impossible PDBs, and what happens when you try to drain those nodes is that, until those pods somehow get evicted from the node, the drain will keep retrying every five seconds.
E
It keeps printing "cannot evict pod because it would disrupt the PDB", and that's coming from the API. It comes out as a 400, I think, as the HTTP code, but it's exactly because of the PDB if you look at the spec. So what we've been doing out of band is basically looking at the log messages as they come from the API: okay, which pod is this? Let me look up its owner.
E
What's the controller of this pod? Let me actually do a rollout restart of that Deployment or StatefulSet, and that will in most situations take care of it (again keeping aside the situation of the naked pods themselves), because we might have a rollout strategy on that Deployment or StatefulSet that actually allows us to surge: spin up a new pod, and then the old pod can safely be evicted, respecting the PDB. So we are doing that out of band, and it works tremendously well.
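The out-of-band workaround described above amounts to: when the eviction is rejected by the PDB, walk the pod's ownerReferences to its workload and restart that. A toy sketch with hypothetical pod metadata (the real flow would query the API server; nothing here is kubectl code):

```python
def rollout_restart_target(pod):
    """Derive the workload to `kubectl rollout restart` from pod metadata.

    A ReplicaSet owner implies a Deployment; trimming the pod-template
    hash from the ReplicaSet name gives the Deployment name. Hypothetical
    helper for illustration only.
    """
    for ref in pod["metadata"].get("ownerReferences", []):
        if ref["kind"] == "StatefulSet":
            return f"statefulset/{ref['name']}"
        if ref["kind"] == "ReplicaSet":
            deployment = ref["name"].rsplit("-", 1)[0]
            return f"deployment/{deployment}"
    return None  # naked pod: no controller to restart

pod = {"metadata": {"ownerReferences": [
    {"kind": "ReplicaSet", "name": "web-7d4b9c6f9d"}]}}
print(rollout_restart_target(pod))  # deployment/web
```

A surge-capable rollout of that target brings up a replacement pod first, after which the original pod can be evicted without violating the budget, which is the behavior the proposed drain flag would automate.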
E
So that's where I'm coming from in bringing the proposal back to drain. When I was thinking about it, I was a little divided on whether this should be in the kubectl CLI itself. Most of the drain logic lives there; there's no drain counterpart on the API side, so all of that logic of evicting pods one by one lives in the CLI. But I could also be persuaded otherwise.
E
I,
I
I,
don't
know
much
about
the
internals
that
perhaps
it
should
be
part
of
the
eviction
API
or
part
of
the
PDP
logic
itself.
Right.
I
know
that
they're
like
competing
priorities
here
between
the
pdb
and
kind
of
the
the
rollout
or
the
strategy
spec
in
the
deployment,
for
example
and
I,
think
I'll.
Stop
there
because
there's
a
lot
to
unpack
and
just
take
some
questions
at
least
to
begin
with
Sean
is
that
sufficient
information
to
get
us
started.
A
I think so. Just to recap: when the pod disruption budget interacts with drain, there are some situations where it basically hangs, where you can't make any progress, and so you're proposing a new flag that will automatically do this rolling restart, which will fix this particular condition. That's...
E
That's one avenue, right: look at the controller behind that pod. As kubectl drain is doing an eviction pod by pod, if I get a status code from the API that represents the impossible PDB (we cannot drain), then do that automatically: do the rollout restart of the controller itself, so the Deployment, StatefulSet, and so on and so forth.
E
I haven't bumped into that specifically. I wonder if it leads to the same situation. In my case, to be specific, it can be a single PDB with maxUnavailable zero, for example, leading to this issue. It's very easy to simulate it.
C
Multiple PDBs is a separate issue, yeah, and on the server side we added warnings that the selectors are matching similar pods, or that pods are covered by more than one PDB. I can't remember in which version, but it will show as a warning, as an event, on either of the PDBs.
C
What happens currently if you drain and you run into the PDB issue? It will just fail?
E
It will loop forever. Unless you pass more options, for example to forcefully remove the pods, it will just loop forever. There's a loop in the drain logic that will keep trying to evict, and it will stay there until the operator does something.
C
Right, but that basically means that the PDB is there to ensure that the user's application is always running with the minimal number of pods, right? Assuming the PDB is properly configured with some minimal requirements, then...
C
...there's always sufficient room for performing a particular upgrade, and it will eventually result in a proper drain. How long it takes is a matter of configuring the PDB accordingly, such that you have sufficient room for rolling out your application. So I'm not sure how you would like to see this resolved, other than that the controller and the PDB will ensure it, because the node itself is already marked for deletion, so the pods will eventually be removed from the node.
C
It's just that the PDB serves as a safety boundary, ensuring that the application isn't disrupted, not all at once, but rather in a fashion that ensures it is always available, right?
E
So here's the minimal example: imagine you have three nodes and a Deployment of two replicas, each on one node. You set the tolerations and things accordingly so that they land on two different nodes, and you put a PDB with maxUnavailable zero on the selector for the pods of that Deployment.
E
If you issue a drain on one of the nodes that has a pod, it will indefinitely try to evict that pod from that node, and nothing will happen until you do something. What I'm basically proposing, and what I've been doing in situations where this occurs, is a rollout restart, as long as there are places for that pod to go.
E
In this case we have a third node that's completely free that it can go to; or if you have an autoscaler, then after the pending pod appears it will scale in a new node so that capacity becomes available. That will allow things to move and the drain to continue, eventually being successful.
C
So in this particular case, because we've run into similar issues in the past in OpenShift (if you've ever worked with OpenShift, or if you get a chance): OpenShift has an alert which will complain to the operator that the PDB is at its limit, because either maxUnavailable is zero, or... there are a couple of conditions that are checked to determine that the PDB is misconfigured.
C
Sorry, so what we do is warn the cluster operator that there are PDBs which will stop the upgrade from happening, because it's not a problem with the drain logic or any of the components, but rather a problem with the PDB misconfiguration. We cannot stop users from doing this, but we can warn cluster administrators. So that's what I would personally suggest, and I can probably find the alert that we have configured.
E
Thanks for the feedback there; I agree, and that's why I'm calling it an impossible PDB. But I'm also trying to look at the two perspectives here: myself, as perhaps the administrator of the cluster, and the user's perspective. They set maxUnavailable zero; I could get into a situation where I try to persuade them to move away from it, and I can even argue that with maxUnavailable zero, maybe you thought it did something that it does not, because when a rollout restart happens, for example, you're going to have a surge (or likewise in the cases where you have minAvailable equal to the number of replicas). But at the same time, as a cluster operator, I see this as...
E
...having this ability to match the expectations of the user and make my life easier at the same time, if I opt into that behavior. I agree that a warning would be great, and that's what we proactively try to do; we try to get that count down to zero as much as possible, at least at Lacework. But here we still are, and I suppose, from the reactions in the thread, others have found themselves in a similar situation as well.
J
Yeah, I just want to say that even though the PDB is wrongly configured from the cluster admin point of view, there might be applications which use the PDB, which are actually maintaining the deployment and maintaining the PDB, and which might set the PDB to zero. For example, they have storage tied to their pods, and they do not want to get them evicted at all, at any cost.
J
And if you do the rollout restart, you are actually going behind the eviction API and deleting the pods directly, without the manager of the deployment, because you are actually not the manager of the deployment; some other entity is. I think if we want to support this use case, it would probably be best to enhance the eviction API somehow.
E
Yeah, that's the other thought that I had. Perhaps it could be a new option in the PDB itself, right? Just coming up with a name: something like "if needed, rollout restart", something along those lines.
C
I linked in the chat the alert that we have configured in OpenShift, if you want to look into it. It's definitely a good starting point for you, as the administrator, to have something in place. But yeah, I would probably agree that this is more of an educational topic, which is what's required most frequently on those occasions.
A
So Katrina had a good point in the chat: it's often not understood that the workload controllers' rolling restarts and updates are not using the eviction API or respecting the PDBs, which is why a rolling restart can get past this situation, but it might not be what you actually want.
A
Did you have anything else you wanted to add to that, Katrina, or to join in on? Or no?
K
Oh, that was exactly my point, Sean, thank you for bringing it up. Yeah, we're using a bypass here in the proposed solution.
A
So what I'm hearing is that it's probably not a good idea to add this particular flag.
K
I think a KEP can be helpful to anchor discussions, because, since we're again talking about the eviction API versus workload controller rollout behaviors, which are not strictly related, I wouldn't necessarily see that particular solution being accepted on the eviction API any more than in drain. So it would be more useful, I think, to have an overview of what the fundamental problem is, so that folks can think of alternatives as well.
A
Okay, so the next item on the agenda is one I put on myself. I just wanted to bring a particular PR to our attention, since there are several of us who I know are interested in this topic, the topic being transitioning from SPDY to WebSockets as a bidirectional streaming protocol. SPDY has been deprecated for eight years, and it actually causes quite a few problems with gateways and load balancers and proxies that don't understand or support SPDY, and we've been trying to move to WebSockets.
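For context on the streaming protocols being compared: in the existing WebSocket path, exec/attach streams are multiplexed by prefixing each binary message with a one-byte channel ID (0 stdin, 1 stdout, 2 stderr in the v4 channel subprotocol). A small sketch of that demultiplexing, as an illustration of the framing rather than the actual client code:

```python
def demux(messages):
    """Split channel-prefixed WebSocket messages into per-stream buffers.

    Each binary message carries a one-byte channel ID followed by its
    payload (0=stdin, 1=stdout, 2=stderr in the v4-style framing).
    """
    streams = {}
    for msg in messages:
        channel, payload = msg[0], msg[1:]  # first byte selects the stream
        streams[channel] = streams.get(channel, b"") + payload
    return streams

frames = [b"\x01hello ", b"\x01world", b"\x02oops"]
print(demux(frames))  # {1: b'hello world', 2: b'oops'}
```

SPDY instead multiplexes named streams natively, which is part of why reaching feature parity (half-close, stream resets, and so on) takes protocol work on the WebSocket side.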
A
Actually, for many, many years. We made some progress (when I say "we" I mean Mikhail Missouri; he made quite a bit of progress last summer), but it stalled, and so I've resurrected this PR to start our new effort to get this across the finish line. I know that so far Brian, Arda, and Marley have mentioned that they're interested in this topic, and I've added them to the PR to show where we are. I've also tried to enumerate where I think WebSockets does not achieve feature parity with SPDY.
A
There are at least two or three areas we need to work on in order to get it to the same level as SPDY, so that we can transition to WebSockets.
A
So I wanted to bring this to our attention. Does anybody have anything that they wanted to say before we move on to Eddie?
I
The WebSockets feature parity: is that the stuff you had noted would be solved by updating from V4 to V5?
A
Yes. It appears, and again this is where I'm just trying to level-set and make sure that we're all on the same page, that there are about three deficiencies in the current V4 subprotocol.
A
That
would
need
to
be
addressed
so
that
it
would
have
the
same
would
have
feature
parity
with
speedy
and
one
of
those
being.
We
have
to
half
close
standard
in
in
certain
situations
in
order
to
signal
to
the
other
side
that
the
data
is
not
coming
anymore,
so
that
it
will
not
hang
and
if
you
look
at
the
pr
near
the
bottom.
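The half-close being discussed is the same idea plain TCP exposes via shutdown(SHUT_WR): one side says "no more data from me" without tearing down the connection, so the peer's read reaches EOF instead of hanging, while the reverse direction stays open. A plain-socket illustration, unrelated to the actual SPDY or WebSocket code:

```python
import socket

# A connected pair of sockets standing in for client and server.
a, b = socket.socketpair()

a.sendall(b"request body")
a.shutdown(socket.SHUT_WR)  # half-close: signal EOF on the write side

# The reader drains everything, then recv() returns b"" (EOF) rather
# than blocking forever waiting for more input.
received = b""
while chunk := b.recv(16):
    received += chunk
print(received)  # b'request body'

# The other direction still works after the half-close:
b.sendall(b"response")
print(a.recv(16))  # b'response'
a.close()
b.close()
```

Without an equivalent signal in the streaming subprotocol, the remote side of an exec/attach session has no way to know stdin has ended, which is the hang being described.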
A
...I've actually put two examples where this happens, and strangely enough, when I did some research, they ran into the exact same problem with SPDY eight years ago. I put that issue from eight years ago, and the PR that resolved it eight years ago, in the PR. Maciej, you have your hand raised?
C
Don't
remember,
hold
on
I
haven't
finished,
yet
I,
don't
remember
how
far
back
the
cube.
Api
server
supports
the
websocket,
so
I'm
not
sure
if
we
can
just
drop
it
right
away,
because
websocket
is
being
supported
for
I,
don't
know
like
three
five
releases.
C
If
so,
then,
probably
we
don't
even
need
to
care
about
that,
and
we
just
have-
and
we
can
just
switch
this
because
it
will
be
fully
supported
and
what
are
the
downside
and
probably
a
caps
describing
all
the
pros
and
cons,
because
there
is
a
reason
why
this
hasn't
happened.
Yet
in
the
past
attempts
I
mean
every
single
one
of
them
failed.
So
having
this
written
down
and
and
explained
why
that
was
the
case.
Is
there
are?
C
Are
there
still
missing
bits
in
the
in
the
implementation
that
we're
going
to
pick
that
are
preventing
us
from
switching
one
over
to
the
other,
or
are
we
okay
with
having
partially
incomplete
implementation,
which
we
will
be
just
hiding?
We
have
an
environment
variable
for
a
certain
amount
of
time,
we're
still
relying
on
the
deprecated
speed
by
default.
So.
A
Yeah,
those
are
all
great
points
and
actually
that
that
is
the
the
goal,
which
is
why
I'm
bringing
this
up
for
128
right.
This
is
months
in
the
future.
I
want
to
make
sure
that
so
I
I
consider
kind
of
the
the
first
steps
being
exactly
what
you
said
to
try
to
like
bring
up
all
of
the
issue.
A
...the history of it. I have a timeline of all of the efforts so far, to give us the context of where we've been and where we're going. This is actually a resurrection of what looked like one of the most promising paths, and you're right: as I mentioned, there will almost certainly be a KEP about this new subprotocol.
A
That would address the three deficiencies in WebSockets. So yeah, there's quite a bit going on here, and I recognize that it's been years that this has been on the radar and hasn't happened. I think that we've actually made significant progress, and hopefully we'll be able to push something over the line for 1.28. But those are all good points that you made.
A
That was, if I remember correctly, Mikhail's; he was the one who proposed this, so he already had a KEP, and he had the PR, which is the one that I resurrected. As I mentioned, I think this is the most promising and productive path so far. It's just that, for Mikhail...
A
Whatever
the
logistics
didn't
work
out,
he
he
lives
in
Australia
from
what
I
understand
and
I.
Don't
think
that
he
was
able
to
bring
this
to
the
attention
of
the
the
people
that
needed
to
to
see
this
and
and
it
lost
traction
and
was
closed,
so
I'm
trying
to
kind
of
get
it
kick
started
with
with
that
effort
and
with
his
PR
and
his
cap.
A
And as you mentioned, Maciej, the PR that you've just put in the chat: that also is listed in the PR as what would be fixed if this works. I think I've got three or four pages of PRs and links, and I'm in the process now of actually trying to make a document, just a straight document, to get everybody on the same page: what does this all mean, here's the history...
A
What's the problem we're trying to solve, etc. I started with just resurrecting this PR, but the next step will be to create a document to organize it all, because I know there's massive history behind this. I'm hoping to bring this to everybody's attention well before any of the KEP deadlines for 1.28, so that time will be on our side, hopefully.
A
Okay, so maybe we could move on to Eddie now, if that's okay. I've said all I was going to say about that issue, so can we move on to Eddie?
B
Does anyone have any interest in participating with this, coming up with content? I'm happy to do the recording and video editing and all that. It'd be great if we could not put a white male on the stage at KubeCon, so if anyone else wants to volunteer, I'm happy to support and help with that.
A
So that's this particular email.
B
Okay, well, if anyone does become interested, feel free to DM me; I'm very happy to have anyone help, and again, it doesn't have to be me on the screen. But otherwise I will go ahead and just record something to highlight some of our work.
C
Can
we
use
I
remember
that
some
of
the
Caps
that
we
push
over
the
past
two
releases,
at
least
or
at
least
the
last
one,
some
of
them
we
were
discussing
in
a
couple
places
where
we
would
want
to
and
put
up
more
in
in
more
Spotlight,
maybe
just
picking
the
ones
where
we
want
to
bring
the
biggest
attention
to
and
maybe
outlining
the
the
most
pressing
issues
and
focusing
only
on
those
rather
than
everything
would
be
reasonable.
C
For
example,
the
the
translations
in
Cube
card
will
be
one
topic
and
maybe
something
around
the
cube.
Qrc
files
and
I.
Remember
was
the
third
topic
that
I
was
thinking,
but
maybe
something
along
the
the
pruning
that
Katrina
and
Justin
are
working
on,
which
is
another
topic
that
I
remember
that
we
want
to
add
a
little
bit
more
publicity
around.
B
Yeah, that's a great idea. The other thing is that we get to put in calls for action, or asks for help, and I'm sure we all have lots of areas where we would like help and new faces and maintainers. So thank you for volunteering, Katrina.
B
Awesome, okay, we can sync on Slack; if anyone else wants to help or has ideas, feel free to DM me. The second thing on the list is our annual report. In the same vein, this is a chance for us to highlight our work from the past year, and to highlight our needs and our asks. I honestly think that we could just do the annual report briefly and summarize some of that stuff in the video highlight. There is a link to it, and I'll drop it in the chat.
B
Again, this is our chance to highlight it at the CNCF and executive level, because what happens is that the governing board will take all of the aggregated SIG updates and annual reports and put those into a report that gets presented to the governing board. So this is our chance to get in front of the executives who sign checks and allocate headcount for open source.
B
So this is very important at a project level. If no one else has any updates or things to share, or stand-up stuff, I figure we could just spend the next 15-20 minutes going through this, and anyone else is welcome to drop if they don't want to do that.
B
All right. So the data at the bottom are numbers that I was starting to pull; they're pretty easy to get. We have our KubeCon talks; some of it's just raw Slack numbers. For this one, we need to pull our approvers and reviewers from our OWNERS files, so that's easy, I can take care of that.
B
I
think
the
big
things
that
we
could
help
with
as
a
group
is
just
getting
down
those
notes
of
what
we
want
to
highlight
from
the
past
year
and
some
of
the
things
we
are
thinking
of
so
does
anyone
have
Caps
or
work
that
they
want
shouted
out
here.
A
There
were
a
couple
that
I
was
going
to
bring
to
our
attention
number
one.
Is
we've
just
made
the
aggregated
discovery
beta
in
127,
so
the
the
previous
Discovery
storm
that
we
would
always
see.
A
That's
been
resolved,
I'm,
not
sure
if
that
should
be
brought
to
any,
should
be
included
in
the
in
the
report
and
then
there's
been
multiple
efforts.
There's
a
cap
for
moving
open,
API
V2
to
V3
or
we're
kind
of
in
the
middle
of
it,
we're
still
using
in
Coupe
control
V2
as
well
as
V3,
but
we're
in
the
process
of
transitioning
to
the
new
version
of
open
API.
A
It's kind of at the intersection: there had to be a client part of that as well as a server part, and so I implemented the client part.
K
Just maybe a meta suggestion: every time we do the maintainer talk at a KubeCon, we've had slides with the biggest changes in there, so that could be a useful source of items.
K
So, for instance, from Detroit we called out the kubectl completions improvements, which was pretty cool. I would be fine with that. I don't know; I haven't made it very far. I mean, I have a huge PR, but I've got to break it up and actually get the changes into the code base. Okay.
A
So
so
I
also
know
that
there's
been
some
progress
on
server-side
apply
in
Coupe
control.
Should
we
mention
that
I
mean
we
haven't
gotten
it
I
mean
we're
still
working
to
get
it
as
the
default
and,
of
course,
there's
plenty
of
hurdles
there,
but
I
think
that
we
there
is
some
migration
that
happens.
K
Yeah, that is associated with a KEP that didn't actually end up moving in 1.27, but there have been discussions around it, so hopefully in 1.28 there will actually be more details of the proposal by the time KubeCon rolls around. I don't know about the annual report, but yeah, that's a good point.
K
I
think
the
work
that
did
actually
happen
in
the
past
year
on
that
front
in
Cube
Kettle
is
mostly
to
do
with
some
edge
cases
in
the
migration
path
like
when
a
user
upgrades
from
clients
I
apply
to
server
side
apply.
In
some
cases
they
would
end
up
in
a
a
kind
of
a
stuck
state
through
some
fields,
and
we
fixed
that
with
some
patches
in
126,
I
think.
A
For server-side apply as the default, migrating from client-side apply. Got it, yeah.
C
There's also the plugins for create subcommands, and that's...
C
36-38
it's
an
alpha,
but
if
people
are
interested
and
I
know
that
with
the
proliferation
of
crds
there
are,
there
has
been
several
interests
to
be
able
to
create,
create
something.
Some
questions.
B
All right, I'll put that down for now; we can reword it. Sounds...
K
Not on the list yet: the subresources support in kubectl. It's called out in our slides.
B
Resources
yeah
what
Nikita
was
working
on
right.
C
Yeah
and
they
promoted
that
to
Beta
just
recently,
I
was
hoping
it.
G
Sticking to the release notes: for the past year we had two major releases, although the second one was really just because of a React bump. This past year has mostly just been maintenance mode. There was the addition of the tray menu feature that we discussed on a prior call sometime last year, and a few mostly refinements and bug fixes beyond that; nothing...
G
Sure, there's actually a cool site that just summarizes it all; I pasted it in the chat. Where is it... oops, sorry, sent it to the wrong person.
G
Looks
like
there's
around
700
a
month
by
us,
Mac
OS,
probably
not
expected
Linux
arm
is
the
least
you
know
you
can
look
at
the
stats
about
700
per
month.
It
seems,
or
so
it's
of
course,
how
many
of
those
are
bots,
so
how
many
people,
how
many
thoughts
are
automatically
downloading
GitHub
releases?
Looking
for
Secrets,
who
knows.
K
Honestly, for Kustomize this release there was one really big feature. Similar to Nick, we did lots of just maintenance work on Kustomize last year: bug fixes, minor features, performance improvements, that sort of thing. But kustomize localize, being a new subcommand, was the biggest new feature that we added last year, and it might be worth calling out specifically. That was part of 5.0.
K
There's also the ever-running options and flags refactor.
K
No,
it
has
a
meta
issue
and
a
spreadsheet
and
it
didn't
move
much.
Last
year,
Moon
moved
a
little.
So
it's
worth
calling
out
yeah.
B
Cool
with
that,
we
are
up
on
time,
feel
free
to
drop
thoughts
or
notes.
In
this
talk,
as
you
go
about
your
week,
we
have
until
Sean
you
said
it
earlier.
April.
A
We can pick this up at the next meeting, if we like.