From YouTube: Kubernetes SIG CLI 20220330 - Bug Scrub
B
Especially the tests, which are still left to merge for next week, if I remember correctly, yeah.
A
So Brian added a really cool benchmark test to kubectl get, and it outputs the memory, time, and allocs for a get, and yeah, his idea was that we could track it over time. So I don't know if he plans to get that set up as a prow job, but that's definitely something I want to get in there, so we can record the results.
A
Brian, this was an awesome idea. Dude, way to go. Do you have plans to get this set up as a periodic in prow?
C
Yep, and there's actually another command. So I was looking at the golang encoding package, and in there somehow I came across another command where you can take two benchmark runs and, like, diff them essentially, and it'll show you, like, hey, this is like 20% faster or whatever. So yeah, that might be helpful as well.
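The workflow being described, running a benchmark twice and diffing the runs, sounds like Go's benchstat tool. A rough sketch; the package path is an assumption:

```shell
# Benchmark before a change; the path is a placeholder for the kubectl get
# package. benchstat comes from golang.org/x/perf/cmd/benchstat.
go test ./pkg/cmd/get/... -bench=. -benchmem -count=5 > old.txt
# ...make a change, then benchmark again...
go test ./pkg/cmd/get/... -bench=. -benchmem -count=5 > new.txt
# benchstat diffs the two runs and reports statistically significant deltas
benchstat old.txt new.txt
```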
E
Is it running currently? Because I think we should add some go test --bench commands in the Makefile to run these tests, as you said, but I don't see it. Am I missing something?
A
I can dig up the command for you later, but yeah, it should work if you pass the bench flag to make test.
D
Without changing the Makefile, the current make commands would work with this. It would actually simulate this entire command using the go flags. I think that's what you're looking for.
G
I myself use a compound kubeconfig. It does combine them when you put a list like that in the environment variable.
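For reference, the merging behavior mentioned here is kubectl's standard handling of a colon-separated KUBECONFIG list; the file names below are made up:

```shell
# kubectl merges every file listed in KUBECONFIG (colon-separated on
# Linux/macOS); the paths here are examples only.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/dev-cluster.yaml"
# Contexts from both files are now visible:
kubectl config get-contexts
```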
B
Seems reasonable, probably a good first issue. That's a rather simple thing.
B
I would check what we have, because I remember some time ago there were tags added to types marking which fields should be treated as secrets, passwords, and all that. We could probably reuse this mechanism instead of hard-coding specific fields, looking at the metadata of that type and learning from it, because I think...
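A minimal sketch of the tag-based approach being suggested: use reflection to discover sensitive fields instead of hard-coding them. The AuthInfo type and the datapolicy tag name here are illustrative assumptions, not the actual kubectl types:

```go
package main

import (
	"fmt"
	"reflect"
)

// Hypothetical config type: a "datapolicy"-style struct tag marks fields
// whose values should be masked in output.
type AuthInfo struct {
	Username string
	Password string `datapolicy:"password"`
	Token    string `datapolicy:"token"`
}

// maskedFields returns the names of fields tagged as sensitive,
// discovered via reflection instead of a hard-coded field list.
func maskedFields(v interface{}) []string {
	t := reflect.TypeOf(v)
	var out []string
	for i := 0; i < t.NumField(); i++ {
		if _, ok := t.Field(i).Tag.Lookup("datapolicy"); ok {
			out = append(out, t.Field(i).Name)
		}
	}
	return out
}

func main() {
	fmt.Println(maskedFields(AuthInfo{})) // prints [Password Token]
}
```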
E
I think it's also worth looking at the logs; maybe this confidential data is written into the logs.
B
I don't recall any of the kubectl subcommands having an option to daemonize, which... I'm not sure if the author or someone else put it together at the bottom, but there are actually two issues with it. One is the writing part, writing out the ports, and the second one would be the daemonizing once the forwarding is established. I'm not envisioning something like that being added to kubectl; maybe a plugin or something. Yeah, a plugin is perfectly fine.
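A plugin along those lines could be little more than a wrapper that backgrounds the forward and records its state; everything below (service name, port, file paths) is a made-up sketch:

```shell
# Hypothetical wrapper: background the port-forward, then record the PID,
# the forwarded port, and a log file for a later teardown.
kubectl port-forward svc/my-service 8080:80 >"$HOME/.pf.log" 2>&1 &
echo $! >"$HOME/.pf.pid"
echo 8080 >"$HOME/.pf.port"
# Stop the backgrounded forward later:
kill "$(cat "$HOME/.pf.pid")"
```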
B
Especially since this particular set would be very limited to just the service resource, whereas the current ones that we have are a little bit broader, because they are capable of dealing with several resources. If I remember correctly, they allow setting environment variables, which basically applies to anything that has a pod template in it, and setting images, which again, similarly, applies to anything that has a pod template in it.
B
I've seen that one; that's not gonna fly, ever, because of the way convert currently works. It requires internal types, and we explicitly put it in the main kubernetes repo because the internal types were required. It will only work for built-in types; it will never work for any of the custom resources.
B
I mean, I know what you mean, but you need to take into account that the cache there is also read every time you do any kind of get operation and you're outside of the TTL. For discovery we extended that TTL from 10 minutes or so to a couple of hours, six hours, yeah, probably. I was the one doing it, but I only vaguely remember what number I picked.
B
Kube cache, something like that: kube-cache or kube-cache-dir, something along those lines. I don't have any specific name in hand, just something that will be consistent with what we have.
B
She was referring to the issue that she was working on: that the author possibly misused set-context where they meant use-context.
A
Yeah, I think we talked about this last time, and I think we said something like the kuberc aliases would help with this.
B
Also, there's the kubectl generated aliases from Ahmet's project; not sure how accurate that still is. It has a bunch of pretty...
B
Which is a viable option, but yeah, the kuberc aliases will probably be something that we will want to do.
G
The newer issue that you have showing... so the first one I pasted is specifically calling out aliases, whereas the other one is just any resource name, not specifically aliases. But it strikes me as the same problem, where users are making kubectl commands using an ambiguous resource reference, one kind or the other of ambiguity in the resource reference, and we're not warning them.
G
We just pick whatever is first according to the priority in the API document that we get back, which I think is deliberate, and we've been taking advantage of that in the past. For example, when there was a group migration for deployments, it would have been silly to warn them that, you know, this also exists in extensions/v1beta1, etc., etc.
G
So I was wondering if we should close the alias one and reopen the more generic one, for starters. And the PR is just trying to output a warning when no resources are found, which I also think is probably not what we want; we would want it to happen during, like, discovery, whether or not resources are found at the end.
G
Sorry, I didn't mean during discovery, like the actual call to get the information; I meant, like, when we are looking up, resolving, the identifier they gave us, yeah.
G
The complaints are always specifically about CRDs; they don't seem to even acknowledge that this can happen with your...
G
So I guess, if we wanted to do a warning, theoretically we could exclude the built-in API groups, but that sounds pretty sketchy to me. I don't know.
B
Theoretically, the assumption is that nobody will use priorities lower than the built-ins, for obvious reasons, but it's not being enforced in any way on the external API servers, and one can easily write something like that. Well, maybe not as easily as CRDs, but it's not that hard at the end of the day. So it's theoretically possible to introduce different ones, in which case even that warning wouldn't be working correctly. That's why I'm very hesitant to introduce something like that.
G
It feels like it's almost a CRD installation-time problem. That is the underlying issue in the bug reports here; I imagine they're getting name conflicts, they just don't explicitly mention that. But if you looked at their CRDs, I wonder if there'd be a name-conflict error in the status, or if there's anything more we could do at that stage.
G
It's that it matched two groups, and the first group is the one with the highest priority.
G
It
so
happened
that
in
the
target
name
space,
there
was
nothing,
so
it
correctly
says
there
was
nothing,
whereas
what
they
actually
happened
to
want
was
the
lower
priority
resource
which
had
things
so
they
are
not
realizing
that
they're
getting
a
response
about
thing
a
when
they
wanted
a
responsible
thing
b
that
that's
what's
happening,
so
it
could
equally
well
happen
in
reverse,
where
they
wanted
thing
b
and
they
we
tell
them
they're,
they're.
Sorry,
they
want
to
think
a.
We
tell
them.
There
are
things
when
they're
actually
no
things.
G
So it's not a problem about resource-not-found; it's a problem about the duplicate naming.
B
That's basically a similar bad user experience. I'm requesting a deployment; I'm fully aware that there are two deployments. I'm requesting the built-in, and I'm getting the built-in deployment every single time, and I'm getting a warning, and I'm super pissed, because I don't care about the warning; I'm fully aware. Do I need to pass an ignore-warning flag? No, that's not the point; you just broke me. That's basically the same thinking in reverse.
A
I still think we should be warning. I think there's no breaking behavior if we start warning, right? If we keep the same existing behavior of choosing one and start writing a warning... we don't version, we have no guarantee on the output of kubectl, right? So starting to warn is not a breaking change there, and...
B
Yeah, but you are breaking the output for the people that are intending to get the built-ins in the case where a second, non-built-in one is installed. Yeah.
G
And the naming conflict when you install the CRD, presumably, like this, is a bad cluster configuration, in a way. If the administrator has chosen to install multiple things with the exact same name, then all the users are going to need to be specific about which one they mean. I think that's an unusual and explicit decision on their part.
D
I'm... I want this warning. I'm not sure that's the best experience either, but that would keep the current behavior and allow someone who wanted to know that this particular collision is happening to then see the warning.
G
Like an extremely verbose mode, or something like that, that you could opt into with kuberc. I don't know; I'm also worried about the explosion of complexity and the flags that we have for every little thing, because that's...
B
I mean, it's a similar statement to saying that there are no resources when, you know, there are resources in a different namespace; yeah, you just didn't pick it. We can't save everyone from every single possible edge case, and this one is an edge case. If you're fooling around with installing different things in a random order and you end up messing up your environment, I'm sorry, but we can't help you.
G
Do we, though? Like, think back to the example of when we had deployments and we migrated them from extensions to apps. There were many releases where it was in both extensions and apps, and they all referred to the same resource. And as far as the information that we got from discovery was concerned, we could not tell that those were all the same resource, and we would be emitting this warning, and that would be...
B
Similarly, every single scale operation would throw that information, because we currently have autoscaling v1, v1beta1, v1beta2, then autoscaling v2, v2beta1, and multiple of those.
B
Not in the output; Katrina is stating it's in the status of the custom resource, because only then will you be able to verify. When you're applying, you don't have a fully verified API available yet; only after the API extension server verifies the CRD can it tell you that information, and it does so through the status. Okay, because beforehand, if there were an error in validation, you would be warning that you are installing a duplicate when actually there is an error and it would never be installed.
B
So
that's
why
it's
it's
actually
happening
in
the
status,
not
in
the
on
the
clock
on
the
client
side
of
things.
C
But the real sticking point is that when it says status Running, that doesn't correspond to something in the JSON; it gets computed on the fly. So, like, there's no way for people... and we've had other issues where people want to manipulate based on that status, and there's no way to get that.
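For context, this is the gap between the printed STATUS column, which the printer computes on the fly, and the fields actually stored in the object; the closest raw field is status.phase, which does not capture every printed state. A sketch, assuming a reachable cluster:

```shell
# STATUS here is computed by the printer, not stored as one field:
kubectl get pods
# The closest thing reachable with jsonpath is the stored phase:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
```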
D
Can we try the kubectl apply --prune one next? And then the next one is there in the kubernetes... yeah, that one.
B
The localhost 8080 is a default that we still have hardcoded in client-go. So if you don't have anything, as in you don't have a valid kubeconfig, localhost 8080, for historical reasons, is what we're trying. I've tried to remove it, but we have a ton of debt still behind it, and it wasn't that easy. I think I have a PR that I need to review which is dropping those defaults, but I just didn't get there yet.
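The fallback being described is visible with any unusable kubeconfig; the exact wording may vary by version, but the failure points at the hardcoded default rather than at the missing configuration:

```shell
# With no usable kubeconfig, client-go falls back to the hardcoded default
# server address, so the error mentions localhost:8080:
KUBECONFIG=/dev/null kubectl get pods
# Typically fails with a message like:
#   The connection to the server localhost:8080 was refused - did you specify the right host or port?
```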
B
And the error messages that he's seeing about leader election failing and whatnot, those are perfectly valid logs in the kube-controller-manager specifically, or the scheduler as well.
E
Okay, I totally recalled this issue, but the subject looks pretty different to me, and this is... yeah. As you said, this is the defaulting issue, not related to the node being applied without a taint or something, yeah.
B
I don't know, I'm not sure what he did with the taint, but in the steps that he put in the description, he did a get and he got an empty config, which is a perfectly reasonable thing, and the symptoms that he's seeing are what I would expect.
B
We
should
probably
we
we
need
to
get
to
the
point
where
we
don't
default
to
localhost
8080,
but
rather
complain
that,
oh,
we
we
don't
know
where
you
want
to
talk
to,
and
then
the
information
will
be
much
much
more
explicit.
B
Yes, and I believe someone created a PR; it's just waiting on my list, and since it wasn't a big priority, it keeps sitting on that list of mine.