From YouTube: Kubernetes SIG CLI 20210623 - bug scrub
A
Good morning, good evening, good afternoon, depending on where you are. Today is June 23rd, and this is another of our monthly bug scrubs. My name is Maciej, but that's irrelevant, because our host today is Eddie. So, Eddie, take it away from here.
B
We talked about this at one point, and Lau took a look at it. The issue actually turns out to be that the file descriptor that gets created when you do a standard-in read like this gets read twice, and those file descriptors from a named pipe can only be read once. So that's actually the issue that's going on. Lau traced it back through the code base and did a really good write-up here. Do you want to drop any comments or thoughts now?
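The read-once behavior described here can be reproduced with an ordinary OS pipe. A minimal Python sketch (illustrative only, not kubectl's actual code; kubectl itself is written in Go):

```python
import os

# Anonymous pipe; a named pipe created with os.mkfifo behaves the same
# way for this purpose: bytes written to it can be consumed only once.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"token-from-stdin")
os.close(write_fd)

first = os.read(read_fd, 4096)   # drains the pipe
second = os.read(read_fd, 4096)  # nothing left and writer closed: b""
os.close(read_fd)

print(first)   # b'token-from-stdin'
print(second)  # b''
```

Anything that tries to read the same descriptor a second time sees empty input instead of the credentials.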
C
So, every time you need to talk to the API, we need to set up the REST client, and every time we set up a new client, we have to read from the file descriptor, and it will be read twice every time we set up a client. So for a get, we will have to do that about six to seven times. That's what's giving us the errors.
A
Well, yeah, actually, I've noticed somewhere in my GitHub emails that there is a PR where someone is trying to make it so that we only create the clients once for the entire lifetime of kubectl.
A
I haven't looked at the PR, so I'm not sure if it could theoretically help with this situation; I would have to look it up.
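The PR mentioned here isn't identified in the recording, but the idea it describes, creating clients once for the lifetime of the process, can be sketched as a cached factory. Everything below (names, structure) is a hypothetical illustration, not kubectl's real code:

```python
import io

def read_credentials(stream):
    # In the bug discussed above, something like this ran on every
    # client construction, failing once the pipe had been drained.
    return stream.read()

class ClientFactory:
    """Build at most one client per API group per process."""

    def __init__(self, credential_stream):
        # Read the credential source exactly once, up front.
        self._credentials = read_credentials(credential_stream)
        self._clients = {}

    def get_client(self, group):
        # Reuse the cached client instead of re-reading credentials.
        if group not in self._clients:
            self._clients[group] = {"group": group, "creds": self._credentials}
        return self._clients[group]

factory = ClientFactory(io.StringIO("secret-token"))
a = factory.get_client("apps")
b = factory.get_client("apps")
print(a is b)  # True: the second lookup reuses the first client
```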
B
All right. Well, that's a good job figuring that out. We'll have to... I guess we'll wait to see if that PR exists. Otherwise, that might just be a "not fixing right now" type of thing.
B
Okay, any other call-outs we want to look at first?
B
D
I did, okay; it was more of an idea there. Yeah, I kind of forgot about that, actually. One thing I saw on, I think it was Terraform's repo: they had this nice little note, like community notes, on every issue. And I thought, well, a lot of times we get low-effort issues that are opened, and part of this was to motivate people to put in enough details; to say: hey, this is your chance to make your case about this thing.
D
Just an idea; I'm not trying to say that we have to adopt this necessarily. And I saw that, I think it was Tim Bannister, tagged this as contributor experience or something like that, and that probably makes sense as well. Maybe it's not something we want to do alone, and it's something we should just kind of follow.
C
A
Right, it is in line with what we were saying with regard to all of the plugins or bug reports or everything: if you care about it, just, you know, load it up.
D
Oh really? I thought, for some reason, that they were repo-specific, or GitHub-specific actions, or, not actions, but...
A
Yeah, but they are within the repo; but the full contents of that repo are being mirrored from the main kubernetes repo. So if you look under the staging page...
D
Yeah, yeah, I knew that for the code; I didn't know if that was the case for the repo metadata stuff, but I'll...
D
B
Yeah, no, I'm totally in for having that note there, for sure. We can run it by ContribEx too; they might want to do something for... we have our own issue template anyway, but they might want to do something broader.
B
A
So, theoretically, this could solve the issue, although I'm not sure how I feel about having a client for every single possible...
A
...resource. Because, basically, well, not necessarily per resource, but what we are doing is creating separate clients per every group. So, for example, get may be a little bit different, because, if I remember correctly, get uses the resource builder; I can't remember how the resource builder works, but under the covers I think it uses the dynamic client.
A
B
Okay, okay, cool: I guess we'll walk through the backlog.
A
I mean, the author explicitly mentioned the exclamation mark and the HTTP proxy password; but I'm guessing that his problem with the exclamation mark could be related to bash.
B
A
E
B
A
I'm literally... I'm super focused on getting it in for 1.22.
A
It's at the top of my list; as soon as I'm done with all the OpenShift Origin stuff for Red Hat, I'm gonna block out an hour or two and go through it. I want it in, basically, and it has to be in for 1.22, for sure.
A
Currently we have too many specifics per each thing, but there is a more generic approach where I'm handling those OS-specific paths versus those always-Linux paths, and there's an open PR which could potentially solve this.
A
B
A
Initially, so in the first run, the host path and the DSN are initiated, and that's being done as one action by bash; that's why the DSN does not have those filled in. But when you are invoking that again, they are properly expanded.
A
The moment we use 1.21 with a new kustomize, it stops working, so I'm guessing they are using some deprecated or removed behavior.
E
A
E
B
A
B
A
B
E
Yeah, the one that I remember, the PR that I think is still open somewhere, was "wait for conditions", I think.
A
E
B
A
B
A
B
A
B
A
B
A
And in those rare cases where shadowing happens, there are two ways for you: either you specify a full group-version-kind to get, or you...
A
Yes, that's what you're trying to say, yes. I can speak from experience, because that popped up in OpenShift: we have, for example, projects, and a friend of mine was reaching out to me earlier today because he was getting projects, and was getting different projects than he would normally expect.
A
And when we started looking through it, it appeared that one of the API servers was gone, which changed the priorities, and he was getting the other one, which you would normally not get; but because the primary one was not available, it just fell back.
A
E
Isn't it true, though, that there are very normal, certainly historical, cases where we wouldn't want to warn, because we are making a valid choice across multiple resources with the same resource name, because they've migrated group versions? Take Deployment, for example: it migrated from extensions to apps, and then there are several versions within that. Like, we want to silently pick there. How could we distinguish between that case and this one?
A
Yeah, that's a very good observation, because that's where the priorities came into play. For example, if I'm gonna do get pods, I can easily imagine someone coming up with a pods resource living under, for example, an example.com group, and in 99% of cases, if you do kubectl get pods, you will actually care about the pods from core, which will have a higher priority by the fact of being a built-in, versus getting the pods which are either a CRD or an extension API server.
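The priority rule described here (built-ins shadow same-named CRDs unless you fully qualify the resource) can be sketched as follows; the data and the resolve function are invented for illustration, while kubectl's real logic lives in its discovery and RESTMapper machinery:

```python
# Two resources share the short name "pods": the core built-in and a
# hypothetical CRD under example.com.
candidates = [
    {"name": "pods", "group": "example.com", "builtin": False},
    {"name": "pods", "group": "", "builtin": True},  # core group
]

def resolve(name, resources):
    matches = [r for r in resources if r["name"] == name]
    # Built-ins sort first; a fully qualified request such as
    # "pods.example.com" would bypass this choice entirely.
    matches.sort(key=lambda r: not r["builtin"])
    return matches[0]

winner = resolve("pods", candidates)
print(winner["group"])  # "" -> the core built-in wins
```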
A
Right, that's a good thing, and I could definitely see that particular priority being exposed in api-resources or api-versions.
A
If I remember correctly, they are part of the discovery API, although I might be wrong.
B
A
B
Well, Ricardo... did you see Ricardo's reply yesterday? I have not. Ricardo commented on the KEP thread: Ricardo accidentally left off a "c" when doing a delete pvc --all --namespace. He dropped off the "c", so he was deleting pv: he deleted all the persistent volumes. Thankfully it was a staging cluster, but he made the mistake, so experienced people are making this mistake too.
A
I honestly don't know of any developer who hasn't accidentally removed some kind of resource. Personally, I think it was my first or second year: I did a delete all from a table, whatever the name was, and then panicked when I realized it was a production cluster, not my environment. So I'm pretty sure that every single one of us did something similar to this, or eventually will.
B
I definitely have. I feel like we have to bring it back up again with Jordan, because Jordan was so opposed to making a breaking change there. But I wonder if we're just strawmanning, right? Like, no one's come forward and said this would break my stuff. So how do we get those people to come forward?
A
The problem is that there are a lot of folks behind that who will not comment in either direction. If you're looking at the issue from "oh, we have this many complaints", you're actually still missing that there are this many people who never had problems with it, because they either have different approaches, or they never even commented on that issue.
A
So that's the other problem: we are always exposed to only a certain degree of the problems that people are facing, and you're still missing the big picture. Maybe the, I don't know, 50 or 100 people that are struggling with this are like a drop in the sea of the many people for whom it's working fine, and who will be more upset when we change it. And you will only get those people to speak up when you do change it.
A
So it's always a balance of "do we want to do it, do we not", and that's where Jordan is coming from. And if you recall, my initial reaction was similar to Jordan's: I was a little bit skeptical, especially about something so fundamental as kubectl delete, which was one of the first commands that was added. So, yeah.
A
I honestly feel both ways, because I've been in a position where I did a delete all, but thankfully that was in a testing cluster; still, yes, it happened to me. I even approached this problem with the PR that I put up, so yeah. But at the same time, I do understand Jordan's and David's and Clayton's and Tim's words that we would be breaking users one way or the other too.
A
It's a tough one, I know it's a tough one. It seems so simple, because it's "oh, it's just another flag", but actually, if you look at the big picture, it's a very, you know, close call in both directions.
A
So I remember talking... I think I stumbled upon both types of customers: those who, before approaching every single upgrade, would go line by line through the release notes, and others that were more like "oh, we will figure out what broke as we go". So, yeah, you never know which one you're going to work with.
E
You also have to keep in mind that there could be two different personas involved: the person who decides to update the cluster might be totally different from the primary set of end users that actually invoke kubectl on a day-to-day basis in an enterprise setting.
B
E
Well, we came up with a plan, right? And, yeah, that plan is a logical step towards making the bigger change, if we get buy-in for it after we've shown the value. That's true.
B
A
Right, that's... why would you want to do something like that?
A
Well, conflict is one thing, but the amount of data that describe reads... so get only reads pods and nothing more.
A
If you think about get pods versus describe pod: describe actually reads, under the covers, at least events. With other resources we might be scraping some additional data, but usually it is at least the particular resource and all the events related to that particular resource. So even with 100 pods, that means 100 gets, because, if I remember correctly, we are doing them one by one; or even if we do it by list, then there will be a hundred follow-up requests for events, one for each particular pod.
A
So you might end up with hundreds of requests from a single describe all, and if you multiply that by namespaces and whatnot... yeah.
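Back-of-the-envelope arithmetic for that fan-out; the exact request pattern is an assumption, but it shows why describe is so much heavier than get:

```python
def request_counts(pods, namespaces=1):
    # get pods: roughly one list request per namespace.
    gets = 1 * namespaces
    # describe pods: roughly one read per pod plus one events query
    # per pod, in every namespace.
    describes = (pods + pods) * namespaces
    return gets, describes

g, d = request_counts(pods=100)
print(g, d)  # 1 200
```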
B
Okay. "I don't know how I would live without spell check... get pods takes a long time after context switch." Well, that is a discovery issue.
A
C
A
Because loading and follow-up actions are pretty fast. The initial... so we are loading the config file, and the next one is a curl to get the API, and that is taking two minutes. Yeah, that curl call is taking two minutes, but the timeout on this one is... okay... oh, it's 32 seconds, so there should be a timeout of 32 seconds, and it should be ended sooner than two minutes by the server.
A
Unless
the
amount
of
of
information
he's
scraping
from
the
server
is
so
big
like
he
has
the.
A
We might want to suggest that he have a look at what's causing it, looking at the networking, at why the request is taking this long. He might want to try with higher verbosity; I think 10 prints the entire request and response bodies, and I haven't seen the response in his initial curl.
B
All right, I'll write a comment for that. I'm gonna call it there, a few minutes early; I can use the bathroom before my next meeting. Yeah, sure thing.