From YouTube: Kubernetes SIG CLI 20230111
Description
Kubernetes SIG CLI bi-weekly meeting on January 11th, 2023.
Agenda and Notes: https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit#bookmark=kix.knnov04oprwo
A
Okay, let's move on to welcoming new members; there are a couple of new faces. This is the time where you get to introduce yourself to your SIG CLI colleagues, if you want to; this is totally voluntary. Does anybody want to say hi and introduce themselves?
B
Yeah, I am pretty new to SIG CLI; I have attended one meeting so far. Just to put it out there: I'm joining from India, I have contributed to Kubernetes docs earlier, and now I'm pretty darn interested in the Kubernetes CLI. That is why I'm here, trying to contribute to some good first issues. Thank you.
A
Okay, so this is where we're gonna have the subproject updates. We were gonna be talking KEPs, so I guess we don't have to, unless we want to talk about KEP updates.
C
A lot of bug fixes, minor stuff, but this bumps the version of React and the version of Electron, so a major bump. A couple of bug fixes, nothing deep.
A
Great, thanks Nick. If you'd like to, please update the notes with more detail on that update. (Sure thing.)
D
Related to that: for Kustomize we're also planning a major version release. I believe I mentioned this before the holidays, because this is a plan that's been in the works for a while. We were actually hoping to get the release out before the holidays, but for various reasons we weren't able to, and we're now targeting the end of the month. We'll provide more information on what exactly lands in the release when we do it, but it's going to be an exciting one.
A
If not, we could move into the open discussion. Let me open up the KEP on the apply prune redesign. Justin, I made you a co-host in case you want to present anything; Katrina is a co-host.
D
Thanks Sean. Yeah, so this is still a draft PR. It's still something that we're very much working on, and we wanted to present it early, both because there's only a month left before enhancements freeze and we want to allow lots of time for feedback, and because we want to invite other people to get involved, now that we think we have enough information set down in the KEP to provide the background for people to understand what we're thinking and all the context that has gone into the process.
D
So far, notably, we tried to synthesize several previous conversations that have happened in this forum and at KubeCon North America, including the great conversation that Sean led with me there at the Contributor Summit.
D
We took some notes from that time, put that into the background section, and also attempted to use it as the starting point for the plan we made here. So far the authorship of this document is me and Justin; it's been really great to collaborate with Justin on this, and we would love to have more people who are passionate about this topic.
D
Join us on this KEP whenever you're able to; please feel free to jump in and comment on the KEP. The main point of presenting today is that we would like to invite questions and discussion, and also, if you want to reach out to us in Slack to get involved, that's great too. So, to give a quick overview for those who might not have been present at the previous conversations:
D
There is currently a feature in kubectl called pruning, which is part of the apply command and lets you manage a set of objects declaratively. You want to be able to declare that you have a set of objects that should exist, and if you then remove something from your set and apply again, the object that you removed from the configuration should disappear from the cluster.
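The prune semantics described above reduce to a set difference. A minimal illustrative sketch (not kubectl's actual code; object identity is simplified to tuples):

```python
# Illustrative sketch of apply --prune semantics, not kubectl's implementation.
# Objects are identified by (kind, namespace, name) tuples.

def plan_apply(previously_applied, desired):
    """Return (objects to create/update, objects to prune)."""
    to_apply = set(desired)
    # Anything that was part of the set before but is no longer desired
    # should disappear from the cluster.
    to_prune = set(previously_applied) - to_apply
    return to_apply, to_prune

prev = {("Deployment", "default", "web"), ("Service", "default", "web")}
want = {("Deployment", "default", "web")}
applied, pruned = plan_apply(prev, want)
```

Here the Service was removed from the configuration, so it ends up in the prune set.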
D
So it adds that deletion ability, and this is something that's existed in kubectl since version 1.5, so very long-standing. But it's been in alpha since then, and there are a bunch of reasons for that which are outlined in the KEP, mainly around the implementation that was chosen having some fundamental limitations, such that the bugs in question are very, very difficult to fix without completely changing it.
D
So there have been some previous attempts to come up with alternatives, and there have been some experiments within SIG CLI, in a different repo, not in kubectl itself, to prove out these different approaches. What the SIG was thinking at KubeCon is that we're at this point where we really need to get rid of the alpha.
D
It's not good for our users for us to continue providing an important feature with alpha-level functionality, and we can draw on these experiences to make a plan that we can agree to and hopefully get into 1.27: a new alpha that will work better and that we can iterate on. So part of the design that we're going for here is something that is fairly constrained; as part of the plan we locked down the places where we're not sure what we should be doing, so that we can get something in that will be a very good foundation for iteration at a low level, something that we can build on, get feedback on, be able to move forward with, and eventually completely get rid of the older alpha.
D
So there is really quite a lot of information in there about the background, the feature history, and all the different alternatives that we saw in the community, with the discussions that we've had so far summarized. I don't want to spend the whole meeting going over that, so if you're interested in that background, or if you're interested in getting involved in the effort, please take a look; I think it's hopefully very useful.
D
We can go over a bit of what we're proposing, though, at a high level, to set a foundation for discussion, and I don't want to take up the whole time myself. Justin, did you want to cover that part?
E
Sure, yeah, I'm absolutely happy to. I think that was a perfect introduction. I'd just emphasize that it's building on the discussion that we had at KubeCon; I think we're trying to do something very much in that spirit: start with an alpha and discover things. Like, I was saying we should have all these different objects, and Katrina was like, no, no, let's just pick one to start with; we can always expand it later, but you can never contract.
E
So that's the spirit we're going with here, I think. The other thing that was a big lesson for me from the talk which Sean and Katrina led was an idea that came up around porcelain versus plumbing: that we should try to create some plumbing that can be used by lots of tooling. So we are proposing something that we think is hopefully pretty lightweight and universal, that all the tooling will be able to use, and that does not preclude...
E
The idea is, we have a parent object, which could be, as it says there, a ConfigMap, Secret, or CRD of the tool's choice. We might constrain that for the alpha, but let's call it a parent ConfigMap. It will have a label, so you'll be able to do one of the accelerated queries to get those objects.
E
You discover this apply set key, and then, in order to find the child objects, as it were, you do have to go and do a query on a list of GVKs; I think that's a weakness of this approach.
E
That's something we've wrestled with a bunch, but there are some things we've done specifically to avoid the problems we identified in the earlier alpha. We now have a specific list of GVKs that we're going to query, so it's scoped in that sense, and we have a specific list of namespaces that we're going to query; those are both annotations on that apply set object.
E
So we think, by doing that, we avoid a lot of the problems that happened before, and there's an explicit label that we propose on those child objects, so it becomes a set. It is still a set of queries, but it is a relatively efficient, accelerated set of queries to discover the child objects.
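The discovery scheme described above can be sketched roughly as follows. The label and annotation names and the list function here are hypothetical stand-ins, purely to show the shape of the scoped, label-selected queries:

```python
# Illustrative sketch of apply set member discovery as described in the KEP
# draft. Label/annotation names and the client are hypothetical stand-ins.

PART_OF = "applyset.example/part-of"          # label on child objects
GROUP_KINDS = "applyset.example/group-kinds"  # annotation on the parent
NAMESPACES = "applyset.example/namespaces"    # annotation on the parent

def discover_members(parent, apply_set_id, list_fn):
    """Find child objects with scoped, label-selected list calls.

    list_fn(group_kind, namespace, selector) stands in for an API list
    request; the GVK and namespace lists on the parent keep the number
    of queries bounded.
    """
    group_kinds = parent["annotations"][GROUP_KINDS].split(",")
    namespaces = parent["annotations"][NAMESPACES].split(",")
    selector = {PART_OF: apply_set_id}
    members = []
    for gk in group_kinds:
        for ns in namespaces:
            members.extend(list_fn(gk, ns, selector))
    return members

# Fake cluster contents, to make the example self-contained.
CLUSTER = [
    {"group_kind": "Deployment.apps", "namespace": "default",
     "name": "web", "labels": {PART_OF: "set-1"}},
    {"group_kind": "Deployment.apps", "namespace": "default",
     "name": "other", "labels": {}},
]

def fake_list(group_kind, namespace, selector):
    return [o for o in CLUSTER
            if o["group_kind"] == group_kind
            and o["namespace"] == namespace
            and all(o["labels"].get(k) == v for k, v in selector.items())]

parent = {"annotations": {GROUP_KINDS: "Deployment.apps",
                          NAMESPACES: "default"}}
members = discover_members(parent, "set-1", fake_list)
```

Only the labeled object is returned; the unlabeled Deployment in the same namespace is ignored.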
E
So that's just a little bit further down: efficient listing of apply set contents. But the comments here are super welcome, particularly if you're a porcelain tool, on whether these would work for you. We think they're pretty universal and not very demanding, and that they give us the functionality we need at the same time. It's actually in keeping with a lot of the implementation that we have today, more a matter of fixing what's there, so I think it's good from that perspective as well. But there are a lot of details in this KEP, and obviously, if you see anything that we could defer, that's super welcome too: if there's something that we don't have to do right away in the alpha, that would be awesome, because then it leaves options open to us. So the goal is: get this alpha out into people's hands, see what works, see what doesn't, and move forward and evolve as we go.
D
And, as you can see as we scroll, there's a bunch of places where we just tagged something because we're kind of still having the conversation, so that's a good place to jump in as well. Another useful thing for framing: like Justin was saying about porcelain and plumbing, kubectl will have a reference implementation of the standard that we're proposing here. A lot of the KEP is talking about that standard, because we want to make sure that the label, basically, is going to work for tools beyond kubectl, and that this makes sense on a fundamental level as the plumbing. And then at the end here, which is now on screen, we have a fairly skeletal proposal right now for what the reference implementation itself will look like.
D
So that's another area that's great to get SIG CLI input on, because that is where it all shakes out in our tool. The basic idea here is that we can actually integrate this pretty seamlessly alongside the current alpha, so that people have an easier transition mechanism; that should actually be fairly safe, based on the way we see the proposal right now.
D
Of course, when we start coding we'll probably discover new things, but the idea is that we sort of have two modes. If you want a different framing (not one we'd use in any docs, but one that occurred to me while we were working on this): there are sort of two different modes for using pruning right now. There's the --all flag, and the label selector flag that constrains things a different way, and this would be sort of a third mode that you could use when you enable the apply set flag. It's mutually exclusive with the other two; we'd throw an error if you try to specify more than one of those three. That opts you into this new mode, and we should be able to have the two alphas alongside each other during that transition period.
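The mutual-exclusivity rule could be validated along these lines. This is an illustrative sketch; the final flag name for the new mode is still being decided in the KEP:

```python
# Illustrative validation: the three pruning modes are mutually exclusive.
# Flag names mirror kubectl's --all / --selector plus the proposed
# apply set flag (name hypothetical here).

def validate_prune_flags(all_flag=False, selector=None, applyset=None):
    modes = [bool(all_flag), selector is not None, applyset is not None]
    if sum(modes) > 1:
        raise ValueError(
            "--all, --selector, and the apply set flag are mutually exclusive")
    return True
```

Specifying exactly one mode validates; specifying two or more raises an error, matching the behavior described above.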
C
I'm curious what you see the overlap here being with Helm, because Helm has some similar mechanisms for managing an installation, right? They have server-side state in Secrets or ConfigMaps or whatever, some way of storing what that unit of work is. Do you see those as just two parallel worlds, or do you think maybe there's some overlap?
E
I can take this. Somewhere in this doc there is a section that tries to look at Helm and Carvel kapp, I think it's called, and what they actually do. What we're hoping is:
E
Labels are lightweight enough that Helm, if they want to, would be able to add those labels as well. Then what you'd be able to do is: if you were trying to prune an object which Helm was managing, kubectl might be able to say things like, hey, hold on a minute, this isn't your object, are you sure you really want to do that? That sort of thing.
E
I don't really know what potential there is here, but this is the plumbing for people to do all this stuff. One of the reasons to do labels is that Helm could do this, I think, relatively easily as an additive thing: they wouldn't have to change how they store their objects, they wouldn't have to fundamentally re-architect, they'd just be adding a couple of labels to say, hey, I own this. It's sort of like an owner reference, right? An owner ref is something they could add without changing a lot of what they're doing. There's a description of why we can't use owner refs, or why we think we can't use owner refs, in there as well. So that's broadly what I would hope: that all the tooling out there could opt in to start using these labels, and would get the benefits of, for example, kubectl not stomping over them.
E
One of the reasons we can't go further: there's a long-standing thing that Helm is separate from kubectl, and so we can't integrate the two. So from that direction it feels like plumbing in kubectl is the right way to go: plumbing, and a reference implementation that is ideally compatible with Helm. But I don't know if we could do anything deeper than that.
D
The name "apply set", by the way, is taken directly from a suggestion made during the live session at KubeCon, and I think it was made by a Helm maintainer. Is that right, Justin? Do you remember?
A
So I had a quick question, just as a clarification. My understanding of this (I've already talked to Justin and Katrina, and we also talked in that meeting) is, if I understand correctly, we're storing the GVKs and the namespaces in an object in the cluster, and of course we need this GVK and namespace information in order to be able to query to determine what the set of previously applied objects is. Is that correct?
A
So I had already mentioned to Justin, I have a slight concern with this. It's very similar to how it's done now, and my concern is with the performance, especially retrieving objects multiple times. When we do the first apply step, if it's client-side apply, we're going to be retrieving all of the objects in order to calculate the patch, or to see whether or not they exist. Then in the second, prune step, with these GVKs and namespaces, we would also retrieve a set of objects, and if there's nothing to prune, those objects are completely overlapping, so we're retrieving them twice. Is that correct?
E
Yeah, I mean, yes. I think you raised a good point here, and this is something we are trying to figure out, what we can do there. I think the structure of kubectl means that we probably would be retrieving the same object twice, simply because of how it's implemented.
E
If we wanted to do a deeper refactoring, which I don't think we want to do for alpha, you might say that those two steps could probably be combined. There are a lot of things going on here such that I would like to try it and see how bad it is in practice. For example, we don't expect there to be that many different kinds in a typical apply set.
E
We don't expect there to be that many namespaces in a typical apply set either, so we may actually come out cheaper, in that a list can be fewer requests than per-object gets. We lose some on one side, but we gain some on the other, because we're using a list which is scoped by the only accelerated query in the API server, which is label queries. The other thing I was thinking about is that we do want to get everyone onto server-side apply, right?
E
That's the other thing that I think motivated your involvement here, right, Sean? And I think we're really thinking about server-side apply. Katrina and I were talking this morning about whether there's some flexibility here; we're debating whether a tool should name itself.
E
So, in other words, should the apply set encourage tools to put in an ID? And there's some concern over whether a server-side apply would still need that initial get, because you want to detect overlapping apply sets. We were wondering if it's possible to do something with the field manager and map that to the name of the tool; that would actually be a really nice synergy. But I feel like this is sort of the thing where we have to write the code and see what shakes out. Yes, there is certainly the potential that this could be, in the worst-case scenario, inefficient compared to other approaches like the inventory, where we store a list of every GVK and name. But my expectation is that when we build that alpha and put it into users' hands, we'll find that in typical cases it is not actually a significant performance issue.
A
Okay, so do we have any other questions or comments for Justin and Katrina?
A
Okay, well then, let's move on to the next item, since we do have a large agenda. Maciej, would you like to go over some of the localhost issues with kubectl and the PRs and issues involved? Yeah.
G
So the original issue, the first one linked here, popped up during our bug scrub last week, so I basically pulled all of them into a single place and decided to discuss this today.
G
It started basically from the general idea that the current message we give a user when there is no valid kubeconfig is rather unhelpful, and one of the reasons for that is that we are defaulting, in client-go, to localhost:8080.
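The defaulting behavior being discussed, and the proposed change, amount to something like this sketch (illustrative Python, not client-go's actual code):

```python
# Illustrative sketch of the client-go defaulting under discussion:
# today, with no kubeconfig, the client falls back to localhost:8080;
# the proposal is to fail with a helpful message instead.

def resolve_server(kubeconfig, legacy_default=True):
    if kubeconfig and kubeconfig.get("server"):
        return kubeconfig["server"]
    if legacy_default:
        return "http://localhost:8080"  # current, unhelpful fallback
    raise RuntimeError(
        "no valid kubeconfig found; set KUBECONFIG or create ~/.kube/config")
```

With the legacy behavior the user gets a confusing connection-refused error against localhost:8080; with the proposed behavior they get an actionable message up front.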
G
So to tackle this problem, there are actually two issues we would have to solve. One would be, and it was approached in the past (the attempts are linked in the agenda), to get rid of the localhost defaulting entirely. Unfortunately, there is other code that relies on it: if I remember correctly, --local, if you specify that and we drop the defaulting to localhost, stopped working.
G
So, basically, if you don't have a kubeconfig, --local does not work properly, and we should probably just make it so that --local works without requiring a kubeconfig at all; the majority of the code we currently have relies on the presence of that file.
G
So solving both, somewhat in parallel or one after another, would be desirable, and we're just looking for someone who would be interested in picking this up. There will be a lot of digging through existing code and ensuring this works as it should. I'm not sure how much code we have with regards to --local, and I think, based on past experience, that we don't have any tests around the --local invocation, so yeah.
G
That would have to be double-checked before we proceed with eventual moves, and then we'd also need some kind of reasonable mechanism. Currently we have a mechanism that should let us know whether the --local flag was specified, because that flag is one of the ones we track in the generic, sorry, genericclioptions. Maybe there would be an option to not require a kubeconfig there, because one of the PRs linked approached the problem by adding conditions across all the commands, ensuring that if it's --local it should do something else, which I don't think is the right approach. I want to see something more structural with regards to supporting --local or not inside of the CLI options somewhere.
G
Whenever there's no config, we would prefer to show something reasonable. Currently you will see "the connection to the server localhost:8080 was refused" or something like that, so maybe we should provide a more helpful message.
G
Something like "we weren't able to find a valid kubeconfig" or "your kubeconfig is empty", anything that gives a little more of a hand to the user about what we tried, and at the same time getting rid of the localhost:8080 defaulting. I'm not sure anyone is running Kubernetes on localhost anymore.
G
That goes back to the very early days, when we used to do so; as you can see, the issue from Brian dates from 2016, when folks basically ran clusters on their local machines. Since then it's just pointless: usually you either have a valid config or you don't, and if you don't, we shouldn't be guessing anything.
C
One thing that surprised me was when I ran kubectl in the cluster itself, inside of a pod. I wasn't sure if it would work, but it does, because the secrets are mounted somewhere, so it's not an issue. It surprised me because I wasn't expecting it to work; I was expecting it to do nothing, and it just magically worked. So it seemed like there was a bit of a gap between expectation and reality there.
G
I remember making a presentation during one of the past KubeCons about the possibilities, or hidden options, let's call them that, within kubectl, although actually that particular bit is even present inside client-go itself.
G
So whenever you're building your tools on top of client-go, the defaulting logic figures out that you're already running inside of the cluster. It is looking at two or three environment variables, which point to the mounted service account secrets and the kube-apiserver address, and based on that it can easily pick it up.
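A rough Python stand-in for the detection logic described here. client-go's rest.InClusterConfig checks the Kubernetes service environment variables and the mounted service-account token; this sketch simplifies the details:

```python
# Simplified sketch of in-cluster detection, mirroring the checks that
# client-go's rest.InClusterConfig performs. Not the real implementation.

# Token path injected into every pod with a service account mounted.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def in_cluster_server(environ, token_exists):
    """Return the API server URL if we appear to be running in a pod."""
    host = environ.get("KUBERNETES_SERVICE_HOST")
    port = environ.get("KUBERNETES_SERVICE_PORT")
    if host and port and token_exists:
        return f"https://{host}:{port}"
    return None
```

Passing the environment and token check as parameters keeps the sketch testable; the real code reads os.environ and the file at TOKEN_PATH directly.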
H
Yeah, so this one is gonna touch a lot of parts of the stack, from loading the kubeconfig to warning the user. So if there's anyone who's interested but intimidated by working on this, feel free to let us know; you can let me know. I think this might be a good one to get a pairing group together on, with a few people working on it together, because this will really touch all parts of kubectl, from loading to warning and running.
B
Oh, just to add to that, there's a similar issue, issue number 1340, that was assigned to me. I thought it would be a simple issue where I'd have to change the warnings, and I added a follow-up comment on that issue, 1340. I believe this is where I needed help, and that is what is being discussed right now. So if someone could just quickly look at the last comment over there.
H
Yeah, we should sync up there; I'd like to show you how to find that file.
A
Okay, let's do that. Eddie, take it away.
H
Yeah, so this was an older one that Howard was working on.
H
This was adding extra column support, so you could tack on extra columns. His implementation was good, except it dropped a bunch of client-side calculated columns, and so ultimately he closed it and didn't want to move forward on it. I think Varsha wanted to pick this up; I don't know if Varsha is on the call right now.
H
No? Okay, I can poke Varsha on Slack. This was originally refreshed by a user requesting that a specific column be added; I'll put the link in the chat. We don't really want to add more flags for small, specific columns; we'd rather have a blanket "people can add the columns they need" feature, and that's where this stemmed from. I put it on the agenda because there was something we had to talk about, but I didn't write it down on the agenda.
G
I remember looking at the code, and in general it shouldn't have any problems, because the idea I had in my head was that the code for the extra columns would only be built on top of the current printing mechanism, where you would only add the additional values specified by a user.
G
The printing is driven by the cluster: the cluster returns table objects which provide you with the specific values that should be printed. To support the extra columns, we would have to explicitly request that the full object be included as the last column, which is not the case by default, but which we do, for example, when we request the full resource for sorting purposes. So I would imagine something similar to what sorting does with regards to retrieving the resource; then the regular printing should happen, and only the additional columns would be added at the end of the output. I'm happy to help someone revive this PR and provide some hints on how to get it to completion, especially since, during the bug scrub, there was a request to add additional values to the get command, where we would prefer not to expand the current set of columns, but rather allow users the freedom to pick those additional columns themselves.
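The flow described above (server-returned table, full objects fetched alongside, user-chosen columns appended at the end) might be sketched like this; the column-path handling is a deliberately naive illustration, not kubectl's printer code:

```python
# Illustrative: append user-requested extra columns to a server-side table.
# Column values are read from the full objects via simple dotted paths.

def with_extra_columns(table_headers, table_rows, full_objects, extra):
    """extra: list of (header, dotted_path) pairs picked by the user."""
    headers = table_headers + [h for h, _ in extra]
    rows = []
    for row, obj in zip(table_rows, full_objects):
        values = []
        for _, path in extra:
            cur = obj
            for key in path.split("."):
                # Naive traversal; a real implementation would use JSONPath.
                cur = cur.get(key, {}) if isinstance(cur, dict) else {}
            values.append(cur if not isinstance(cur, dict) else "")
        rows.append(row + values)
    return headers, rows

objs = [{"spec": {"nodeName": "node-1"}}]
headers, rows = with_extra_columns(["NAME"], [["pod-a"]], objs,
                                   [("NODE", "spec.nodeName")])
```

The server-provided columns are left untouched; user-chosen values are appended at the end, as described.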
H
The next one was mine too. Kristoff has joined us; I hope I got your name right. Thank you for joining us.
H
This was a PR that was opened to add priority support when draining pods. I believe right now the order of draining pods is either non-deterministic or just based on Go's range order, and Kristoff had a PR that wanted to add a drain-by-priority flag, where we respect pod priorities when choosing which ones to drain.
G
Trying to recall: I think the original issue was about what kind of guarantees we currently provide with regards to prioritizing, and on top of that there was a question about whether we want to have a flag for sorting the pods for drain. But I started wondering, if we add respect for priorities, whether there will be questions like, oh, I want yet another field, or something else, and what kind of options other users might want to see. Rather than adding a one-off, we would try to tackle this in a more generic way, so that users have a little bit more freedom with regards to that.
G
Yes, and basically we would have to figure out whether that kind of approach would be better, and what the current defaults we have are. We could probably change the defaults as they are, if we don't have any particular guarantees currently; and then, rather than a one-off, a more generic approach would probably be better.
G
I wouldn't say just within the SIG; it would be nice to send an email to kubernetes-dev or the users list and try to figure out what users would want, or ask whether this is something they would be interested in, and whether they've considered any prioritization, any kind of sorting or ordering, for the drain command.
J
To deal with it at all: well, the one thing I considered but never really implemented, or tried to implement, was specifying a list of priorities that it would kind of bucket by, looking at the pods based on the list rather than draining by priority for each single value. So, for example, you could drain the critical pods, then the second-level critical pods, then, let's say, priority above 1000, or something like that.
G
I remember when we were discussing this last time, the first thing that popped up in my head, and you just said it the same way: I started wondering whether people will be interested in draining by priorities, whether they will be interested in draining criticals first, or in a different order where the criticals would be the last ones.
G
And that was one of my first questions, and then the generic mechanism basically followed from it, because I think there are questions in both directions: to be able to drain either criticals first or last.
J
The other thing it changes is the speed of draining the node, because people could have learned to expect it to be fast, but with this it kind of pauses on every level of priority. So it can potentially take a few minutes instead of, let's say, half a minute.
D
An option that might be suitable, instead of "respect priorities" or something completely generic: maybe we have something like an order-mode flag that takes an enum, where we have, say, "all at once" or "by priority". That would give us an extension point, like Maciej is looking for, to potentially support different ordering targets or strategies in the future, without having to support everything arbitrarily, like ordering on arbitrary fields, which would probably be pretty difficult to implement.
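The order-mode idea can be sketched as follows; the mode names and the bucketing are hypothetical, purely to show the extension point an enum would create:

```python
# Illustrative sketch of a drain order-mode enum; names are hypothetical.

def drain_batches(pods, mode="all-at-once"):
    """pods: list of (name, priority). Returns eviction rounds in order."""
    if mode == "all-at-once":
        # Current behavior: one round, no ordering guarantee implied.
        return [[name for name, _ in pods]]
    if mode == "by-priority":
        # Evict lowest-priority pods first, one round per priority level,
        # so critical pods are torn down last.
        rounds = {}
        for name, prio in pods:
            rounds.setdefault(prio, []).append(name)
        return [rounds[p] for p in sorted(rounds)]
    raise ValueError(f"unknown order mode: {mode}")

pods = [("critical", 1000), ("web", 0), ("batch", 0)]
```

A new strategy later just becomes another enum value, rather than another one-off flag.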
J
Yeah, that sounds good. The default could be the simplest: just, you know, group by critical priorities and then everything else, and then there could be a detailed ordering based on the exact value.
G
I'll try talking with our cluster folks. I know that they are happily relying on the drain mechanism; they are reusing the current drain implementation. I'll see if they've experimented with a different sorting mechanism; if they did, I'd be curious, because we rely heavily on this mechanism during OpenShift upgrades. So I'll check with them on what kind of options they've considered, and I'll leave some comments on the issue.
I
One thing to consider: we've had discussions about drain, about whether we could also prioritize the pods that are not ready, which has an impact especially when you are using PDBs and you can prioritize the unready pods first. That could even come first: you could prioritize first according to the unready status and then according to priority, or some combination.
A
Oops, I was on mute. So, do we have enough information to move forward? Kristoff, do you have the information you need to make progress?
A
Okay, so we have about 10 minutes left, and I'd like to move on to the next topic.
A
Okay, would you like to take over on the KEP about built-in command shadowing support by plugins?
F
Yes. This KEP is basically about allowing external plugins as subcommands of the kubectl create command. I just need some feedback and review; it would be great if you have some time. That's why I put it in the agenda.
G
I'll make sure to have a look at it; I promised him earlier.
A
Okay, great, thanks for bringing that up. And now for the final topic: is Noah here?
K
Yeah, so basically I recognized that some of the documentation might be outdated, and I brought it up, and I guess Katrina was kind of confirming that at least that one particular document was outdated.
K
So this is just a draft where I tried to do some simplification, but there are a lot of questions that basically have to be answered by you guys. I just wanted to bring it up so that someone can maybe have a look there and give me some feedback, so I can somehow help you finalize it.
K
Yeah, actually, yes. I guess I made some comments inside the documentation in this draft where I was not sure; maybe someone can check the questions and also the comments in there.
A
Okay, well, it looks like we may have gotten through this very full agenda with a couple of minutes left. Is there anything else we'd like to address before we sign off?
J
Yeah, actually, I just thought of something now: an example of using enums in kubectl.
J
It's always the same thing: I think it should always be ascending, because those classes like system-critical usually go to really critical pods that should not be torn down before all the rest. For example, I was unable to shut down my machine because it was not draining in the proper order.
J
Actually, I'm thinking it could be kind of like a list of values, comma-separated, so it would first go by the first type, then by the second, and so on. Then you could, you know, drain specifically by priority, ignore priorities, or, as you noted, order based on status: if it's not ready, then just take those first.
G
Basically, you went from a simple by-priority, over to enums, through to a list, which introduces an even more complicated way of specifying how the draining should look. I'm worried that by following this pattern you basically arrive at a small DSL just to be able to specify that priority, and that's something I would not want to see happening in the first place. So I would prefer to go back a step and try to figure out the actual problems that are there before solving them.
G
Figure out what they are looking for, before coming up with the variety of ways that we can solve the issue.
D
Right, we understand that, but at the same time, when we introduce a flag to kubectl it's a pretty big deal. So we want to make sure we're introducing something that is going to serve us well in the future. Once we have that flag, is it going to give somebody the idea: well, I wish it was ordered this way?
D
So that's why Maciej is suggesting that you send an email to k-dev to solicit opinions on what those other things, those unknowns, might be, so that it can inform what your flag looks like from the beginning, and we don't end up painting ourselves into a corner where the flag doesn't actually work for a lot of other legitimate use cases in this exact same area. Does that make sense?
C
Yeah, yeah. Because the scheduler, for example, already takes into account, you know, gangs, groups of pods, pod groups; it takes into account preemptability and priority classes. So maybe even, from an alpha perspective, a kubectl drain --preempt, and then it just asks the scheduler, in this case the co-scheduler in my example, to do a preemption for you.
A
Okay, well, we've gone slightly over time. I appreciate everybody joining us for a full and interesting discussion; hope to see you again in two weeks. Goodbye, bye.