From YouTube: Kubernetes SIG CLI 20200527 - bug scrub
Okay, good morning, good evening, good afternoon, depending on where you are. Welcome to the SIG CLI bug scrub; today our host will be Eddie. Oh, I forgot to mention: it's May 27th today, and the bug scrub is hosted by Eddie, so I'll pass the baton to Eddie, who's already sharing his screen. Awesome, yeah. So.
There are metrics, and I'll try to prepare a presentation for next time about how SIGs are dealing with their backlogs. I know that, for example, what SIG API Machinery is doing is, aside from going through bugs, they're also going through PRs and assigning the PRs to people for reviews and approvals.
That one I know, but I'm not sure I'm ready for that much of a commitment yet.
Yeah, so what happens is: when the job gets created, it'll retry, I guess, five or six times or something and then end up failing. But then at the very bottom of the output it says that the duration is 15 minutes. I mean, the age is 15 minutes, because 15 minutes ago I tried to run it, but it hasn't been running for 15 minutes.
If I remember correctly, the duration for the job is counted from the minute the first pod started all the way until the last pod ends. The reason for that is that we can have one or multiple pods running; we would have to sum all of those, and the simplest approach was to pick the first and the last, because the start and end time for the job we have within the job resource, and calculating the duration per pod that is actually executing might be too time consuming.
I mean, each of those pods ran for 30 seconds, if I recall, and then failed. I think... oh, I created a custom Docker image that would just fail after 30 seconds. So each of those is 30 seconds, and then I let it sit there for like 15, or, you know, like 10 more minutes. So the total duration from the first pod to the last pod is probably like 3 or 4 minutes.
E
E
E
A
More
that
what
we
have
and
the
code
is
that
we
have
a
fixed
chunk
size
that
people
can
control
and
we
are
just
touching
all
of
the
data
from
the
server
which
might
take
a
significant
amount
of
time.
If
there's
many
of
that
particular
resource
I,
don't
know
if
you
have
five
thousand
and
our
chunk
sizes
are.
If
I
remember
correctly.
If
the
default
is
500,
we
do
ten
requests
each
time
requesting
500
and
only
when
we
get
all
the
5,000th.
A
do we only then display them. And at least from a quick read, what Antoine thinks is that we should be printing as we are getting them: you're getting the initial chunk, you print it; in the meantime you're getting the additional one, you print that additional one, and so forth. Yeah, that's reasonable, because you are getting constant feedback.
A
What's
going
on
instead
of
oh
I'm,
I,
don't
know
what
it
what
what
what
is
happening
and
then
suddenly
you're
being
thrown
that
with
with
gazillions
of
people
of
of
data
that
you
so
and
instead
you
can.
You
know
halfway
through
you
can
wreck.
Oh,
no,
no
I
forgot
that
there's
this
many
resources
out
there
aboard
the
the
command
halfway
through.
Oh
it's!
It's
definitely
something
worthy
of
doing
it's.
Definitely
something
related
with
pagination
I
come
se
off
my
step
of
at
what
it
looks
like,
but
someone
yeah.
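The streaming behavior proposed above can be sketched with a simulated paginated list. The `fetchChunk` function below imitates the Kubernetes API server's limit/continue list semantics but is a self-contained illustration, not client-go's actual API:

```go
package main

import "fmt"

// fetchChunk simulates one paginated list call: it returns up to limit
// items starting at offset cont, plus the next continue offset
// (-1 once everything has been returned).
func fetchChunk(all []string, cont, limit int) ([]string, int) {
	end := cont + limit
	if end >= len(all) {
		return all[cont:], -1
	}
	return all[cont:end], end
}

func main() {
	// 5,000 resources with a chunk size of 500, as in the example above.
	all := make([]string, 5000)
	for i := range all {
		all[i] = fmt.Sprintf("resource-%d", i)
	}
	requests := 0
	for cont := 0; cont != -1; {
		var chunk []string
		chunk, cont = fetchChunk(all, cont, 500)
		requests++
		// Print each chunk as soon as it arrives, instead of buffering
		// all 5,000 items before showing anything to the user.
		fmt.Printf("chunk %d: %d items\n", requests, len(chunk))
	}
	fmt.Println("total requests:", requests) // total requests: 10
}
```

Printing inside the loop is what gives the constant feedback and lets the user abort halfway through, rather than waiting for all ten requests to finish.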
So, a while back, I went and just kind of tried to organize things a little bit, but I don't know if that was the right thing to do or not. Like, if I put kind/feature, am I sort of implying that we've agreed that it's something we're going to work on? Because I wasn't trying to imply that. I don't know what the right...
The actual shape is a separate discussion, because I can think of either having a no-cordon flag, or I can see a taint flag for drain, which would also taint, so you don't have to do taint separately and then drain separately, but you can do it at once. Well, there are multiple options for how to implement it, but it's definitely a feature request.
Just to clarify what you're really asking: there's "what versions of the Kubernetes API server do we support", right, which might be the last three releases; I don't quite remember what the cluster version policy is. And then there's "what is the skew between client and server support". So, theoretically, the oldest version of kubectl that we would be responsible for would be the oldest supported version of the server minus one version of kubectl.
Someone could actually be running that. However, I don't think that means we fix every bug in every one of those versions. I think the question is: what is the version we'd fix minor bugs in? We're probably not going to patch anything older than the most recent, if it's a minor thing, yeah.
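The skew arithmetic described above can be made concrete with a small sketch. The "last three server releases, client may lag by one minor version" policy is as stated in the discussion; the exact numbers and the helper below are illustrative, not the authoritative Kubernetes skew policy:

```go
package main

import "fmt"

// oldestSupportedKubectl returns the oldest kubectl minor version covered
// under a "latest three server releases, client may be one minor version
// behind the server" policy. Versions are represented as minor numbers of
// the 1.x series for simplicity.
func oldestSupportedKubectl(latestServerMinor int) int {
	oldestServer := latestServerMinor - 2 // last three releases: N, N-1, N-2
	return oldestServer - 1               // client may lag one minor version
}

func main() {
	// With 1.18 as the newest server release, the oldest supported server
	// would be 1.16, and the oldest kubectl we would be responsible for 1.15.
	fmt.Printf("oldest supported kubectl: 1.%d\n", oldestSupportedKubectl(18))
}
```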
Cool, one thing I noticed is: when you create a PR, the system, the bot or whatever, tells you to assign it to somebody. But I think usually the people that it tells you to assign it to are either not working on CLI stuff or they haven't been active for a while. So, yeah, where does that list come from? Yeah.
It doesn't assign automatically; it's suggested. It tells you to do, like, /assign somebody. Maybe I should... I usually just assign it to whoever it tells me to, but to be honest, I haven't had a lot of luck getting attention from those people. Not to blame them, but I just don't think they're active.
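For context on the question of where that list comes from: in Kubernetes repositories, the bot draws its suggested reviewers and assignees from the OWNERS files nearest the changed code, which is why a stale OWNERS file produces suggestions of inactive people. A minimal OWNERS file looks roughly like this (the names are placeholders):

```yaml
# OWNERS files drive the bot's reviewer/assignee suggestions for the
# directory they live in and its subdirectories.
approvers:
  - alice
reviewers:
  - bob
  - carol
labels:
  - sig/cli
```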