From YouTube: Kubernetes SIG CLI 20220615
Description
Kubernetes SIG CLI bi-weekly meeting for June 15, 2022.
Agenda and Notes: https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit#bookmark=kix.ue12xfwyaqks
A
Hello, I'm Sean Sullivan, I'm your host. This is our June 15th edition of the SIG CLI bi-weekly meeting, and why don't we get directly into some announcements, so we can make sure to spend time on our topics and get through them all today. Most of our announcements have to do with the upcoming 1.25 release: the enhancement freeze has been delayed for a week, so it was supposed to be tomorrow and is now a week from tomorrow, on the 23rd.
A
That email just went out earlier. I'll try to add them; if somebody can figure out what those two KEPs are, that'd be nice to add. And of course, down the road, we've got our code freeze on August 2nd; let's keep that in mind, as well as the final release of 1.25 at the end of August, on August 23rd.
A
Okay, so why don't we move on to introductions. This is the part of our meeting in which we try to get to know some of our colleagues, especially those who are just joining us. So if you haven't been to a meeting before, or if it's been a long time, or if you just want to say hi, please introduce yourself. And of course this isn't mandatory; if you don't want to, that's fine too. Is there anybody who would like to introduce themselves?
B
Yes, I guess that's my call. So, I'm Noah. I guess the only person I've met yet is Eddie, at KubeCon, but for everyone else: my name is Noah, and I'm working in Germany for a consulting-based company, helping them to become cloud native. That's what I do in my day job, and besides that I'm involved with the Kubernetes community.
B
For almost a year now, I guess, I've been working with the ingress-nginx folks, and Ricardo advertised SIG CLI to me. I plan to do some code contributions, and that's why I thought I should join this meeting.
C
I'll do an intro, because it's been a minute. It's been a minute! Hello, everyone. I am Paris. I sit on the Kubernetes steering committee and I'm also still a maintainer of a ton of stuff in ContribEx, so all of the meta infrastructure for how the project runs. I'm going to be talking a little bit about mentoring more reviewers for y'all, but hey, hey everybody.
B
Hello, I'll go next. I'm at Harvard; this is my first time here. I've been contributing to the ContribEx project. I am also here for the SIG mentoring cohort program; I will be on the coordinator side and will be discussing it more in this meeting.
A
D
So this was, I think, just carry-over from the last meeting. I don't think there's anything to discuss right now; the concerns I had were brought up on the mailing list. I just wanted to make sure we weren't doing a bunch of work for the WebSockets if we were just going to move everything to HTTP/2 anyway, but I think that was addressed, so we can probably punt this to the mailing list.
A
So should we spend just, like, two minutes giving an overview of what the whole thing is first?
D
So right now the Kubernetes API server uses a protocol called SPDY. It's kind of like an addition to the HTTP/1.1 spec; it's for streaming and bi-directional communication.
D
It didn't gain the traction that it was intended for, but it was what was hot when Kubernetes was being built, and so we have a bunch of old stuff using the SPDY protocol. There has been a long-standing issue to upgrade and convert all of that to HTTP/2, which has full bi-directional streaming and headers and trailers and all that jazz.
D
So this issue came up on the mailing list: someone wanted to convert it to WebSockets, so an actual WebSocket implementation instead of the SPDY protocol. And my original concern was that I wanted to make sure we weren't going to do a bunch of work around the WebSocket implementation if the goal was to get to HTTP/2 anyway, because that's a better place to put those efforts. But it sounds like that is being discussed on the mailing list, so I don't think there's any action for us to take.
A
Okay, so just a couple of quick notes: the kubectl commands that use SPDY are the ones that are connecting, or trying to connect, to nodes through the API server. I think "cp", "exec", and "attach" are those. So does anybody else have any information about that? I only have some basic information. Is that correct, Eddie?
D
B
A
Cool. So, sorry to bug somebody else: can we re-share the notes?
C
A
Thank you, Paris. So we're going to move on to Jeff's aggregated discovery. Just as a quick intro: Jeff had previously demoed discovery using cache busting, and after talking with API Machinery, we've actually updated and modified the KEP. So, Jeff, would you like to take it over?
E
Yeah, sure. So I just wanted to bring awareness that a couple weeks ago...
E
I think I did a demo around the discovery cache busting, but after talking to API Machinery, we decided to go with the approach of aggregating the discovery. This is to solve the discovery storm problem that we have right now, where, on startup, a kubectl client would need to send a request to every single group version present in a Kubernetes cluster. This could add up to hundreds of requests, causing numerous issues, and even things like increasing the timer for when the caches are refreshed.
E
This proposal, aggregated discovery, basically outlines that instead of publishing the group versions separately, we want to publish an aggregated version of the discovery document at the /apis endpoint, such that only one request is needed for the client to fetch all the discovery information, rather than going through multiple group versions and sending a bunch of parallel requests, where there's potential for timeouts and stuff. So API Machinery has, I think, endorsed this method.
E
I just wanted to bring this up here. If anyone has any comments or feedback around this, feel free to comment on the KEP as well, but we are planning to bring this to alpha in the next release.
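To make the "discovery storm" contrast concrete, here is a minimal sketch. This is hypothetical illustration only, not the real client-go implementation; the endpoint paths are simplified.

```python
# Hypothetical sketch of the discovery-storm contrast discussed above.
# The real discovery client lives in client-go; paths here are illustrative.

def unaggregated_requests(group_versions):
    """Legacy behavior: one GET per group version on client startup."""
    return [f"GET /apis/{gv}" for gv in group_versions]

def aggregated_requests(_group_versions):
    """Proposed behavior: a single GET returns one aggregated document."""
    return ["GET /apis"]

gvs = ["apps/v1", "batch/v1", "networking.k8s.io/v1",
       "rbac.authorization.k8s.io/v1"]
print(len(unaggregated_requests(gvs)))  # 4 requests, one per group version
print(len(aggregated_requests(gvs)))    # 1 request, regardless of cluster size
```

With hundreds of group versions (each CRD adds more), the unaggregated count grows linearly, while the aggregated approach stays at a single request.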
D
A
Yeah, thanks for that, Jeffrey. So just another quick note from the kubectl side: a lot of times we see the discovery surfaced through the RESTMapper. We don't actually see a lot of code calling the discovery cache or the discovery client directly; it's actually the RESTMapper that surfaces it, when it's trying to figure out, you know, which particular GVKs or GVRs are available on an API server.
A
That's where we see this discovery, and if you go into .kube/cache you'll see a lot of this stored on your disk. So, Jeff, can I move on to the next topic?
B
Can I ask a question on that? (Sure, sure.) I think this is a really good idea, because it really simplifies the discovery logic, but I want to ask: why doesn't API Machinery like the client-side cache busting?
E
Okay, so we actually have a document linked there with the comparison between cache busting and aggregated discovery, and I think it mainly comes down to performance in certain scenarios. For example, with discovery cache busting we would still incur the discovery storm on startup; the whole point of discovery cache busting is incremental changes to the API server.
E
In those cases the storm of requests would not be present, but the problem still exists on startup, and also, if a lot of group versions are added, we still have the same problem of needing to send a request to every single group version that was updated. For large clusters this could be pretty big, whereas with the aggregated approach, at the end it just comes down to one request. The disadvantage of aggregating everything...
E
...is obviously the size. But after doing some digging into the size: on average, the size of a fully aggregated discovery document for just the built-in types is around 30 kilobytes, and this is much smaller than, let's say, OpenAPI, where the base document is already around four megabytes, and it would be really hard for clients to keep downloading that on every update.
E
B
A
Great. So I'm going to turn it over to Katrina, to talk about our deep dive, the SIG CLI deep dive talk (talks, sorry: presentations) for the upcoming North American KubeCon. Katrina, would you like to take over?
F
I don't need to present anything; just the agenda there is good. This is just a quick one. The call for proposals for the maintainer track is open right now, so, as leads, we were kind of having a conversation about what we wanted to present, and we were thinking, you know, this is a session for the community, for SIG CLI. So why don't we ask you what you would like to hear from us? What kind of topics would you like us to cover?
F
So I threw some ideas in there about things that we haven't covered recently in the sessions, but if you have something that you would love to hear us talk about, or if you would like to join us to convey a message to the SIG CLI community, we're super, super open to having that happen. So please give us your ideas; you can put them in the doc.
F
I'm also going to post this in the SIG CLI Slack after the meeting. The deadline is the beginning of July, if I remember correctly, and that's all I have. Oh, again: the number-two suggestion that came to mind is actually that discovery work that we were just talking about, because I think that's a pretty cool topic. Anything related to SIG CLI is game, though.
A
Great, thanks, Katrina. So I'll actually just read out the four that are suggested right now. The first one is: should we have a presentation that digs into kubectl apply, especially server-side apply?
A
The second one is the discovery work, which was just discussed by Jeff. And then the third one: there's a kuberc proposal for config, you might say. I'll ask Eddie to make sure that I'm saying this correctly, but this is for some kind of a config file that's going to have more information than the current config file.
A
And then kubectl completion; we've actually had some good demos on (sorry) code completion for kubectl, completing commands, etc.
F
Yeah, the idea behind that last one, to explain, is that we've done a lot of work to improve it recently, actually in the underlying Cobra library, and that was some pretty cool work. So maybe there's something to say there from the perspective of, like, encouraging other CLI tools in the ecosystem to adopt the patterns that we have benefited from. That's kind of the angle I was going for there.
C
That fourth one actually sounds like a good main-track talk, by the way, Katrina. It's too late for that, though. (I know, I know, it's too late.) But there's always another KubeCon around the corner, you see; in just about a month the CFPs will probably be due for the next one. So anyway: I've got a lot of y'all on the phone right now, and I know that some know what I'm talking about and other people don't know what I'm talking about.
C
So I most likely will repeat some things for the folks that have heard this already; I apologize to the vets for hearing this rant. Okay. We have a program; it's a shadow reviewer mentoring program. Any time I say the words "mentoring program", people assume that it's one-on-one, that it's a lot of hours, and that it's a lot of burden, and I'm here to tell you that it's not this. Not this!
C
The idea here is a group of people who are all taking the same journey together. Think of, like, back when you were in uni or college: you had these group projects, but you were all on the same journey together, and you all had the same goal at the end. And in this case, for CLI, I heard that y'all need more reviewers.
C
Because in order to get to reviewer, you already have to be a Kubernetes org member (unless you're, like, expediting them, and that's a whole other weird thing), but at the end of the day, if you need to get to reviewer, you need to start off with the two sponsors and the trust building of the org-member review, right?
C
So the idea here is: over a three-month period you get together with a group of ten people. It's anywhere between five and ten, just depending on how many mentors we can get; it should be one mentor per five people at most.
C
So if we have a group of ten people, we should have at least two mentors in the group, and when I say "in the group", it just means in the private Slack channel. And then every other week for three weeks we rotate on different structures. For instance, one week is a Slack stand-up every Tuesday or Wednesday, meaning people check in with you, you as the mentor, and then there's the following week.
C
After that we would have some kind of structure, and the structure is you, as the mentor, being a reviewer, covering reviewer topics for CLI. Those reviewer topics are things like: "let me take the next 20 minutes and go through a live reviewer workflow with you". You, the mentor, actually live-review a PR, and you're talking out loud, you're telling them how you're doing things, and they're asking you live questions. But there's a little bit more to it than that.
C
This is the HackMD that I just literally hacked up for y'all (bad pun). So the idea with this: it's a little template that I put together that we have to answer the questions for, and then, once we've answered the questions, we've got a very nice schedule for the session. So this is what we would talk about every other week, and these would be guest speakers as well.
C
For some of these, you folks don't even have to do anything; you literally either just show up or don't show up as mentors. But the mentees would show up, and they would hear things every other week, like things that reviewers need to know and understand about the enhancements process and the release cycle. And then one could be a deep dive into a subproject area that you all really need help on, and, like I said, how to do triage, things like that. So each one of these bullets is essentially a week.
C
So this is what we need right now to get y'all launched for a reviewer group mentoring program. Okay: we need to ID your mentors and ID a kickoff date, and our advice here ("our" being ContribEx's) is usually to kick off sometime close to the next release.
C
That could be, like, a month before the next release, or what have you. And the reason why? It seems to be pretty cool when the cohort goes with the release, because then you can have the cohort jump in and do things for you, like maybe fix tests on certain things. It just seems like more of a good way to...
C
It seems like more of an easy way to give out "homework" (I say homework in quotes, but learnings and things like that) when it's going with the release and not necessarily at an odd time. And then, sure, there's the structure, which I just showed you, which is: let's get the first few weeks' questions answered. And like I said, all you have to do is answer the questions, and then ContribEx does everything else, meaning ContribEx will schedule and coordinate all the speakers; we'll get all the folks together for you.
C
So as a mentor, you literally just need to show up after the fact. So after we pick a date and sort out the structure, ContribEx (that's either myself or one of the other ContribEx folks on the phone) will send out a note to your mailing list that, hey, we're doing this: if you're interested in this, fill out this issue. And on the issue, hilariously enough, we already have at least 10 folks that have said "hey, we're interested". I would say half of them are qualified; again, the qualification bar is that org member status, right.
C
So we already have quite a few folks that have said that they were interested. So now we need to get the outreach out on the ContribEx mailing list and the CLI mailing list, as well as k-dev, and then we pick the mentees and we're done, and then you've got a cohort for three months. If we have 10 people, which is what I think we should try for, then based on past cohorts, it's about 50 percent that graduate into an OWNERS file.
C
So that's my pitch to you all. And, like I said, it seems like it's a lot, right, because it's new; it's stuff that we don't usually do that much in open source, which is, like, group mentoring and feeding off of peers and things like that, to help each other, boost each other up.
C
So it seems like it's a lot, but that's just because it's new, and, like I said, ContribEx does a large part of the coordination here. So yeah, just let me know what y'all think and we'll get this started.
F
Yeah, first off: you mentioned that I was in a similar program last year, and I wanted to echo that that was a really good program, very useful. I've been a lead since last, what was it, September or October, and it was that program that really prepared me to step up into that role. So absolutely, thank you to the contributors.
F
Putting that together, it was great. And the second thing is, I have a question for you: I see it's like a joint SIG Apps, SIG CLI program?
C
F
So if the goal is to get people into a reviewers file, I'm wondering, like, are we targeting a particular one? Because in SIG CLI we have at least kubectl and kustomize that need reviewers. Maybe some other subprojects could chime in, I'm not sure if others need it as well, but at minimum those two, and qualifying in the two places would be independent. So I just wonder if we had a particular goal, or if, like, we'd split the cohort based on interest, or what.
C
However, I know that there's a lot of y'all that do both, that are owners in both groups, and the initial folks that came to me wanted to kind of try to do sort of a "both" approach. I definitely don't want...
C
I definitely think that we should make this two groups, though, because I think both of you have such a great need and a lot of content that it would be too much on the mentee to have them both together. And then, as far as the subproject need: yeah, we can do as many as you want with the structure as it is now, meaning we could do all of your subprojects, meaning, like, we could just do deep dives and then wherever they land, they land. Or we can do a better, more scoped approach, which I definitely think is good, meaning, like, take three to five of your subprojects instead of all of them.
C
So that's up to y'all to decide, but we'll run with whatever. But yeah, I definitely agree: a more scoped approach is better for you, as the mentor, and for the mentee, with expectations and things like that.
C
Yep, that literally is what happens with everyone, and that's okay. I would say, if we're putting some hard deadlines on here: let's try to have maybe the cohort picked a few days after this, so like, you know, August 4th, maybe, so after you all get out of code freeze. We'll have that picked, right, we'll have the cohort picked, and then maybe, you know, sometime in between, like, the 2nd and the 23rd, we have the kickoff.
B
A
C
For mentor ID, let's try to have that, you know, I guess sometime right after the enhancements freeze, and then we'll do a week's worth, at least a week's worth, of outreach on your mailing lists.
C
So yeah, let's try to get it out; I would say at least between five and ten days, because it's summer and people are on vacations and stuff like that. So yeah, let's try to... I'll come to your next meeting too; I'll keep coming until we have this kicked off. And same with Apps: I couldn't make it to the Apps meeting the other day, but I'll try to. We'll try to run both of the cohorts on, you know, similar deadlines and paths and things.
C
So we can just have two groups, you know, chugging along doing the same stuff. But yeah, that's kind of where my head is. What do you think?
A
That all sounds good to me. I've put in the notes that we're looking to ID the mentors after the freeze, so after the 23rd, and we're going to have the cohort picked, it sounds like, directly after the code freeze; I guess you said August 4th. Does that all sound decent? Yeah.
F
C
For everybody listening that is not a current reviewer in SIG CLI: there is that issue that's already open, collecting interest; feel free to add your interest to the issue. One of the most popular questions that I've gotten so far is: "I'm not an org member, but I do want to be a committed maintainer here, or reviewer, approver, etc."
C
"Can I expedite my org membership?" The answer is yes; however, good luck, because we are in enhancements freeze right now and I think most folks are not paying attention to anything outside of the release. So it's just going to be hard for folks to get those two sponsors and two merged PRs in, but if you're listening to this, you can dang near try.
C
There's plenty of "help wanted" issues out there that you can grab, and then, once you merge your PRs, the people that have helped you with the merge can be your sponsors. So good night and good luck. Thank you.
A
Okay. So, if we can, we'll move on to Eddie's kuberc. Is that okay, Eddie?
D
Yeah, for sure. So if you click that (thank you), and then on the right there, next to the view checkbox, that little page... yeah, that one, thank you. So I'm not going to read the whole KEP. This is one that we've been talking about for a while, and it just never got prioritized to push through. I'm pretty determined to get it in for 1.25 as a really early alpha, so we can start getting some feedback.
D
The goal here: we have our kubeconfig files, and they hold everything from user credentials to cluster endpoints and certs, and there's a problem that arises, usually, when you create a new cluster: you get a new kubeconfig. So managing kubeconfigs and merging them is a whole other topic (I should probably add that to the non-goals here), but the idea is to have a place for user configs and preferences.
D
That is a separate file that is opt-in, so we can introduce some new breaking changes that won't break backwards compatibility, because you're opting into it. And so the idea is to create this kuberc file with its own version and its own kind of API definition for new overrides and other settings, right. So again, we're separating user preferences out from kubeconfig; that's the main goal here. And if you scroll down, I have a... there we go, yeah, the code block, yeah, right there, cool.
D
So this is the kind of design draft that I put together with Jordan. Jordan and I were talking about this for a bit, about how to introduce this and how to structure it, and what we realized was: you know, delete confirmation is something I've been trying to push into kubectl for a long time, because people accidentally delete their clusters. We can't just introduce delete confirmation, because it will break existing CI pipelines.
D
Detecting CI pipelines isn't something that's really feasible, because most CI pipelines will fake being a real terminal to get the right output and pretty colors and all that. So what we came up with was the second part here, this command override section. This was an idea where we realized we could center everything around flags, by defining default flags per command.
D
So if you look at the second one there, that's command "delete", flags "name: confirm, default: true", right. And so the more we modeled all the behavior we wanted, the more...
D
We realized that, as long as the flag existed, we could set defaults for that command. And so there's still an open question as to how we do subcommands: for example, if we wanted to do "get pods" as a command, I'm not sure if that will look like a single string or if we need to have, like, a nested object there. That's something I have to figure out, but if anyone has thoughts or feedback on this, please comment on the KEP, or shoot me a message on Slack.
D
The other thing we're trying to address here (if you go back up, please, Paris), that top part, was command aliases. So this has been a request from a bunch of people; most notably, Tim Hockin has always been poking me about how we can get this in. And so this was an idea to create your own command aliases that will expand to a default thing, right. So here this is, like, "get-db-prod", and it will expand out to "get pods"...
D
...you know, databases in production. So this one, you know, we could still discuss the shape of, and there's a question on the KEP about precedence: what takes precedence if you try to alias a built-in. But the idea, again, is that this file will be a way for us to introduce types of overrides, or, like, fix old behavior that we all agree we wouldn't write that way today but just can't change because it would be breaking.
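Taken together, the two pieces described here (aliases and per-command flag defaults) might look something like the sketch below. This is illustrative only: the apiVersion, kind, and field names are assumptions drawn from the discussion, not the schema in the KEP.

```yaml
# Illustrative sketch only; not the final kuberc schema from the KEP.
apiVersion: kubectl.config.k8s.io/v1alpha1
kind: Preferences
aliases:
  # "kubectl get-db-prod" would expand to the full command below.
  - name: get-db-prod
    command: get pods --selector=app=database --namespace=production
overrides:
  # Per-command default flags; opt-in, so existing scripts and CI keep working.
  - command: delete
    flags:
      - name: confirm
        default: "true"
```

Because the file is opt-in, a CI pipeline that never creates a kuberc sees today's behavior unchanged, which is the backwards-compatibility argument made above.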
D
Yes, thank you. So here's the other piece: I started trying to proof-of-concept this a bit. We don't have much flexibility in how Cobra does things, so we'll have to potentially get creative. Like, commands in kubectl don't know that they're a subcommand, right; they don't really know their parent command's name. And so we might have to get a little creative; we might have to make some changes to Cobra.
D
Honestly, thankfully, we met the maintainer of Cobra and he's a great dude, and I'm sure he's willing to help here. But yeah, so this is the proposed idea; when we go to implement it, things might change just because of limitations.
A
Very cool. Yeah, I kind of feel like the current kubeconfig, which, believe it or not, already lets you set the namespace for a particular context, already feels like one defaulting mechanism, like "here's your default", and it actually feels like it would be much more...
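The existing per-context defaulting the host refers to looks like this in a standard kubeconfig; the `namespace` field under a context is what kubectl uses as the default namespace while that context is active (the names here are made up for illustration):

```yaml
# Standard kubeconfig: namespace is an existing per-context default.
apiVersion: v1
kind: Config
current-context: prod
contexts:
- name: prod
  context:
    cluster: prod-cluster
    user: prod-admin
    namespace: team-databases   # kubectl defaults to this namespace
```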
A
D
Yeah. And then the other piece: Phil had a request for a long time. If you don't know Phil, he was an emeritus lead of SIG CLI and steering, but he wants to be able to do these per context. So he wants to have different overrides and different aliases per context in your kubeconfig: maybe you want to force delete confirmation on your production cluster, but not on your development cluster, right. And so that's not something that I've really come to a great decision on.
D
F
I have a question that kind of ties into that, which is: if we're overriding the default behavior and what you'll see documented in the command docs, how are we going to make the user experience around that not confusing? Particularly if we take it to the extent of it being different per cluster, and you run your command, or whatever, and you're scratching your head about why it did this thing.
D
Yeah. So we can add something that logs out the command that's being run, or, you know, "hey, we're running with these flags". We can make that visible by default, we can put it behind another flag, or put it in, you know, -v=2 or something. That's a great point, though; I'll make sure to add something on the KEP for that.
D
Yeah, and that's the thing with... so I have a whole other soapbox to stand on when it comes to the template here. Like, the KEP template is so specific to API server features, because ninety percent of the PRR stuff is "what metrics are you adding?", "what server flag is this behind?". So it's just not relevant to other types of changes.
D
Yeah, I want to get this in as soon as we can, even if, you know... I think we're going to hide it behind an environment variable, just as an experiment or something for now. But if we don't have something that people can play with and give feedback on, there's no way that we could shape it the right way. So I want to ship as soon as we can.
B
I mean, I think I'll just try and put those comments on the KEP; I think that's the right place, because not everybody... yeah, I mean, those are just, I think, my usability experiences, right? Like emacs, rc files, whatever; those are things that are, like, for the client. That's where I usually put my own shortcuts that are for me as a human.
B
D
B
D
The other open question that Jordan brought up: he linked to some issues around config.d and XDG.
D
These were things that we decided not to go with in the past, right. Like, people have wanted XDG config, which is, you know: in your home directory you have your .config folder (basically, that's the default) and a kube folder in there, instead of populating the top level. There was a KEP for this that Doug was working on.
D
Ultimately, we decided it's too vast of an ecosystem change, because that work needs to be done in, like, client-go, and any tool that does an upgrade to a client-go version that ships it... someone upgrades, and you'll be operating on different clusters than you think, and that's a scary, confusing thing. So that's an open question on whether this is something we want to support here; I still think it's probably too dangerous.
A
Cool, thanks, thanks for that, Eddie. And as mentioned before, if you have any detailed comments, please put them on the KEP; it sounds like that is the one KEP that SIG CLI is tracking. Is that correct, Eddie? I said two before and you corrected me: the events one is not going to make this particular release.
D
Correct, yeah; Maciej is pushing it back.
A
Okay, great. So we've got another 12 minutes; we could do stand-ups, or, if there are no stand-ups, we can give you guys back time. So now I'll ask: does anybody want to do a stand-up for their particular subproject?
A
Okay, cool. I'm going to give you guys... we'll give you guys (sorry, you all) some time back. Appreciate you joining us; it was great to see Paris, and again, Paris, thanks for saving the day, if you're still on. Have a good day.