From YouTube: Kubernetes SIG CLI 20210331 - bug scrub
B: Does anyone have any issues or PRs they want to start with? Done.
C: I was just mentioning that I was looking for, you know, PRs stuck in the kubernetes/kubernetes repo, and I have a bunch of long-running issues. So I'm wondering if we should go and start looking at those right now or later.
C: Different, like a schema... let me try to remember this one. He ended up using kubectl diff; he created a kubectl plugin to work around it.
C: I asked him, yeah, I asked him: I need to review it again and see if it can help him move forward.
A: Okay, it looks like... I'm looking at the kubectl diff one, and almost at the top it mentions the managed fields, which are now hidden by default in kubectl.
A: So I think one of the issues they were trying to address was just to get rid of some of the unnecessary managed fields. I'm inclined to say, based on what was written in the last comment, that I'd be supportive of closing this one.
B: Cool: "kubectl create configmap: forbid multiple --from-env-file."
A: That doesn't look like the right approach, because he's basically checking whether... Because if you look under the covers, the EnvFileSource variable is just a string, yeah. So his approach is basically checking whether the string is longer than one character, yeah.
A: So, it's just like Sean mentioned: there has to be an internal mechanism within Cobra which allows passing a flag multiple times, and we only read the last one. I'm not sure what other tools are doing and what might be the answer to this. But this won't be a problem with kubectl, but with Cobra itself, so that might be a little bit deeper. I'd be curious to see what happens with tools such as, I don't know...
A: So the default way of passing the impersonation flag might not work for these commands, because the impersonation is, by default, a header that is then being...
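For context on the point being made: Kubernetes impersonation really is carried as HTTP request headers (`Impersonate-User`, `Impersonate-Group`), so it works for free through the normal client path but has to be wired up explicitly by anything opening its own connection. A minimal, illustrative sketch in Python (the helper function and names are made up, not kubectl code):

```python
# Sketch only: impersonation as plain request headers.
# "Impersonate-User" / "Impersonate-Group" are the real header names the
# API server understands; the helper itself is illustrative.

def with_impersonation(headers, user, groups=()):
    """Return a copy of `headers` with impersonation headers added."""
    out = dict(headers)
    out["Impersonate-User"] = user
    # The group header may be repeated; model that as a list of values.
    if groups:
        out["Impersonate-Group"] = list(groups)
    return out

base = {"Authorization": "Bearer <token>"}
req = with_impersonation(base, "jane", groups=["system:masters"])
print(req["Impersonate-User"])  # jane
```

Any command that bypasses the shared client transport would have to add these headers itself, which is the failure mode being described.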
C: Yeah, I can do it; which issue do you want? Actually, I need to leave this meeting a little bit early, and I have another meeting, so I cannot do that.
C: Actually, after this one I have one other one, but maybe it's good to have you guys around; let's discuss it as well. I just put it in the chat.
B: I'm gonna leave us time, because it will require touching kubectl at some point. If we take it, I will open yours, Doug.
E: Okay, yeah, that's the comment on the one from two issues ago that we were looking at, with the Cobra repeated flags. They wanted us to throw an error, but it would also be possible to just consume values from all copies of the flag. Cobra would make it possible, like in an example.
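kubectl's flag handling goes through Cobra/pflag in Go; as a language-neutral sketch, Python's argparse exhibits the same two behaviors being weighed here: the default action silently keeps only the last occurrence of a repeated flag, while an "append" action consumes the values from all copies.

```python
# Two ways a CLI can treat a repeated flag, illustrated with argparse.
import argparse

last_wins = argparse.ArgumentParser()
last_wins.add_argument("--from-env-file")  # default: last value wins silently

collect_all = argparse.ArgumentParser()
collect_all.add_argument("--from-env-file", action="append")  # keep every copy

argv = ["--from-env-file", "one.env", "--from-env-file", "two.env"]
print(last_wins.parse_args(argv).from_env_file)    # two.env
print(collect_all.parse_args(argv).from_env_file)  # ['one.env', 'two.env']
```

The issue's third option, erroring out on repetition, would need a custom action (or, in pflag, a custom `Value`) that rejects a second call.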
C: Yeah, so I'm just wondering what you guys think about it, whether we should handle this one to have the same behavior between the CLI and the API. You know, this guy, if I understood correctly, works for Weaveworks, so for a CNI provider, and this is just one patch for him.
C: He wants to change... This one was just about documenting what he got.
A: ...default, or to have the library default similar to what the kubectl default is.
A: We would provide a sort of constructor which is compatible with what kubectl drain does, and we could just use that in kubectl, right? So, yeah, definitely, that should be a simple thing to do.
A: Clayton wrote it; it's called observe, which basically is just that.
C: Yeah, he just shared with me before the meeting started that he has a conflict with another meeting, and lunchtime, etc., so he'll be working on whatever we assign to him, he said.
A: Okay, I think I know what Jordan still wants to be implemented, because what we currently have is client-side validation based on the OpenAPI.
A: The problem with this approach is that we will only report typos and stuff like that. I'm not sure if unknown fields are still reported, if they will raise errors. So, for example, if you try to create a pod and you make a typo, and instead of "spec" you type "spec" with two c's, I'm not sure if the client-side validation will pick it up; the server-side validation will most likely silently ignore this bit.
E: Yeah, that one seems to be explicitly about the server side. Client-side, like you said, we use typed entries to check for it; I think it would catch unknown fields on the client. But if it made it to the server, then the fields just get dropped, and that's what it's complaining about. But that's not us.
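The client-side check being described boils down to: a typed schema knows which fields exist, so a typo like "specc" can be rejected before the request is sent, while an untyped server path just drops the unknown key. A tiny illustrative sketch (the top-level field names are the real Pod fields, but the validator itself is made up):

```python
# Illustrative unknown-field check against a known schema.
POD_TOP_LEVEL = {"apiVersion", "kind", "metadata", "spec", "status"}

def unknown_fields(manifest):
    """Return the top-level keys the Pod schema does not know about."""
    return sorted(set(manifest) - POD_TOP_LEVEL)

typo = {"apiVersion": "v1", "kind": "Pod", "metadata": {}, "specc": {}}
print(unknown_fields(typo))  # ['specc']
```

The complaint in the issue is about what happens when such a manifest reaches the server untyped: "specc" is silently discarded rather than rejected.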
A: One where we have... Sean has an issue in kubectl which is basically not so grand, but basically it says: decouple kubectl from Cobra.
D: Yes, it's decoupling the options from the Cobra command, so that you can reuse the options, say, in a controller somewhere, if you wanted. And, yeah, I'm not sure how closely that applies to what Brian is writing about here.
D: Actually, I think I'm going to get permission, and time, from my company to be able to dig into this, like, within the next week. Cool.
A: Can you go back to the original example? Because what we currently have is: a lot of the workloads support picking a default container, a default pod within it, and most of the commands do so similarly. So if you do kubectl exec deployment, it will pick the right pod and get into it.
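The behavior described, kubectl exec on a deployment picking a pod for you, amounts to resolving the workload's label selector to a matching pod and defaulting to one of them. A purely illustrative sketch (all names are invented; this is not kubectl's actual pod-picking code):

```python
# Illustrative: resolve a label selector to the first matching pod.
def pick_pod(selector, pods):
    """Return the name of the first pod whose labels satisfy `selector`."""
    for pod in pods:
        if all(pod["labels"].get(k) == v for k, v in selector.items()):
            return pod["name"]
    return None

pods = [
    {"name": "web-abc", "labels": {"app": "web"}},
    {"name": "db-xyz", "labels": {"app": "db"}},
]
print(pick_pod({"app": "db"}, pods))  # db-xyz
```

Extending commands like label to accept a workload would mean reusing this kind of selector-to-pods resolution, which is the method being looked for behind exec.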
A: And with labels, I think it might be a little bit more complicated. I think it would be nice if someone could give it a try. Actually, let me quickly go into the code, what we have behind exec, and see if we have there the necessary method to pick the pod.
A: I'm inclined to close it; if somebody complains that it doesn't work, or there are some issues, we can always revisit that one.
E: Given the exec docs mention the ability to exec into services and deployments, so...
B: I think one of the asks in that issue, though, was to be able to exec on multiple pods at the same time, but that's a bigger feature request, and, yeah.
A: Yeah, that's not something that we want to do.
C: I need to disconnect, but do you mind assigning this one to me? Because it looks like it's related to the work I did, right? That's...
A: That's the problem: if we had come up with this sooner, it would have been easier. It's the scope of work, not only for us, but for the entire... basically.
B: Yeah, all right, well, we'll have to close that gap at some point; then I'll put it on the agenda to talk about it, finally, so we can make a decision there. I did not know you could put code blocks in a title; that's cool.
A: So the execution, the invocation, turns out to be -f one.yaml two.yaml.
A: Yeah, but that still needs... I would want to have, like, a backlog with "good first issue" issues, and then... Well, theoretically, we already have "help wanted" versus first-time issues: everything that is not a first-time issue is good for contributors to work on, but those with an explicit "good first issue" label are good for newcomers to pick up.
A: For both of the commands being called out: first of all, I agree with the statement that Clayton mentioned when you were scrolling, where he said that we don't want to do long-running server-side operations. Secondly, we changed kubectl so that people can actually vendor in the code behind particular commands.
A: If someone wants to do it, kubectl apply is my answer. I recently closed a similar request against OpenShift by saying: use apply, basically.
D: So there are people there, I think there are some, who complain about the very large last-applied annotation, and that you have these massive... I think their use case is that they have these massive config maps that have, you know, doubled in size because of the last-applied annotation, and this will resolve itself with server-side apply, as long as the managed fields aren't too big.
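To make the "doubled in size" point concrete: client-side kubectl apply stores a full JSON copy of the applied configuration in the kubectl.kubernetes.io/last-applied-configuration annotation, so an object's serialized size grows by roughly the size of the object itself. A rough sketch (the annotation key is real; the ConfigMap is invented):

```python
# Rough illustration of last-applied annotation overhead.
import json

configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "big", "annotations": {}},
    "data": {"blob": "x" * 4096},  # stand-in for a large payload
}

plain_size = len(json.dumps(configmap))
# Client-side apply embeds the whole applied object as an annotation value.
configmap["metadata"]["annotations"][
    "kubectl.kubernetes.io/last-applied-configuration"
] = json.dumps(configmap)
annotated_size = len(json.dumps(configmap))

print(round(annotated_size / plain_size, 2))  # roughly 2
```

Server-side apply drops this annotation in favor of managedFields, which track field ownership rather than a full copy of the object.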
A: Yeah, the managed fields will always be created; it doesn't matter whether you're using create, apply or whatever. Yes... oh, so the replace is going to have the managed fields too, yeah, yeah. So the answer is: use server-side apply. Close.
E: There's somebody in there asking... They actually don't want cooperative field management; they're trying to overwrite all the server-side state when they do this. There's a...
D: Sorry to interrupt; I interrupted, my bad. There's a force, to basically say "I own it all now," or, on any conflicts, --force-conflicts, just override them with server-side apply.
A: Yeah, it's definitely something that I would want to see. I'm not sure how much we can do and how much is required to be done on the server side.
A: Yeah, but the problem is... yeah. We talked about it for edit, to provide comments for fields, but I think this is about maintaining the comments, the user-provided comments on the resources, which is currently not possible, because down the road this gets translated into protobuf, which is stored in etcd, which would mean that protobuf would have to have the ability to store the comments, and I'm not sure if that's doable.
A: I'm inclined to close it; it seems pointless to have them open as an umbrella issue. If someone wants to, you know, improve that situation: open PRs actually improving the docs per command, and I'll be happy to merge them. Not typos, but PRs where you're improving the docs; stuff like that I'm more than happy to merge.
A: I think we go in alphabetical order.
D: Yep. So even if, for instance, a CRD gets applied before the custom resource, there still has to be work done in the background to make sure that... there still has to be, like, a wait for that CRD to be, you know... Just because you get a 200 back, or it applied correctly, doesn't mean that all the work has happened in order to get the CRD in, I think, yeah.
A: Close it, and just say that this is possible with server-side apply; it will merge whatever stale input you have.
D: It looks like there's... This is a decent... kube-capacity is a decent plugin, and so the path of creating a plugin first might be... You know, we might be able to push...
A: Yeah, this comment nicely describes my fear: "In order for this tool to be truly useful, it should detect all Kubernetes device plugins deployed on the cluster and show usage for all of them. CPU/mem is definitely not enough; there are also GPUs, TPUs for machine learning, Intel QAT, and probably more I don't know about. Also, what about storage? I should be able to easily see what was requested and what is used, ideally in terms of IOPS as well." That's a comment from last year. Not going to happen.
G: It seems like there are a couple of different issues in this one, though. Like, one of them is: my pod is pending, I want to know, will it ever start? And the other one is the more general issue of: I want to see the resources of all my nodes. So I don't know if that can be separated out, or whether one is worth doing.
A: On top of that, to answer your first question, Brian, about why my pod is pending, or why my pod landed on this node and not the other one: I have a team member working in SIG Scheduling who will be doing just that, because currently the decisions from the scheduler are very deep in the scheduler logic and are hidden even from the cluster administrator; you can only get the decisions if you run the scheduler with, like, -v10, which is insane. We're hoping to expose some kind of reasonable information about why my pod landed here, most likely through an annotation.
A: But that's, again, not SIG CLI, but rather SIG Scheduling, because they know the reasoning behind a particular decision.
B: Yeah, that sounds good to me. I'm throwing a little Amazonism in there: "bias for action." All right, I'm submitting that. I imagine... The last comment on this was 22 minutes ago, so this is definitely showing up in Google search results. So I'm expecting blowback for closing this, but it's not a productive thread.
A: Eddie, you have to get used to it; I'm still getting smacked for removing lots of run stuff, export, and similar. Okay, I think that was all; we're three minutes past the top of the hour. Thank you very much, all; have a good one, and see you in one week, actually.