From YouTube: Kubernetes SIG Node 20200817
A
Hello, it's the SIG Node CI subgroup meeting. Welcome, everybody. We don't have much attendance today, and it's KubeCon week, so maybe that's the reason. Let's go into the agenda; I will share my document right now.
A
Yeah, and it's not the morning for you, you're like three hours ahead of us. So yeah, I put some items on the agenda; please add more items if you want to discuss something. I want to start with some triage: there are links to issues that we care about. I didn't do anything with the project board that we have, because I don't know, Victor, how did you manage this project before?
A
Is it all manual, like you just put items there? How is it working?
C
Yeah, so here's what I did before. We had a spreadsheet and we were tracking things there, and I was getting a flood of emails, and I said, hey, we need to do a better job of tracking. That's when I created that board. The way that cards are added to the board is a manual process. To add cards, here is the little search that I did.
C
The search is PRs that are open with the label sig/node and the label area/test. So to add cards, I would go through and say, yes, these are relevant to us and our team, and when I added a card I would put it in one of the columns, in progress or review, based on where it was. That's sort of how the board got initially populated.
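For reference, the search C describes presumably looks like this as a GitHub query, assuming the standard kubernetes/kubernetes label names (sig/node, area/test):

    is:pr is:open label:sig/node label:area/test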
C
And then, once the card is on the board, the progression is sort of automated, based on whether it has an LGTM, or somebody has started review, or it's approved, just based on the GitHub project board automation tools. But the process of adding cards is manual. I don't know if there's an automated way to do that.
A
Okay, thank you. So yeah, I looked at this set of issues that we have, and out of these 57 issues, unfortunately, a lot didn't move because of the 1.19 release, so we have like 17 that are ready to be merged, but they're not merged. Out of the 57, there are plenty of issues that are interesting for us, so maybe we can just add all of them to the board and then pick up from the board. Would that work?
A
Is that how it's intended to be used? I'm sorry, I missed that. So out of these 57 issues, should we just put all of them into the first column?
C
Well, if you look at this, some of these may not be, and probably are not, specific to SIG Node. For example, when you do that search, if that's what you just did, let me make my own a little bit bigger, there's a further refinement needed, because, for example...
C
This one about linked analytics, I mean, that's not SIG Node. So there's still some more filtering needed, because, see, this matches the filter, but just looking at it, that's clearly not SIG Node.
A
For the ones that match our group charter, do we need to put all of them into the first column and then assign a person, or wait for somebody to pick up an item and only then put it into the first column?
C
Yeah, what I would do is look at the card, see what stage it is at, and then put it in the appropriate column.
C
Right, yeah, let's take an example. Let me pull up the board; I don't have it up, sorry, I was running a little bit late.
C
I'm just scrolling down on mine to see if I see one that jumps out that looks like SIG Node.
C
That was 92165.
C
Yes, and typically, for an issue to be found in the first place, either a PR or an issue has to be opened. So when you say, hey, do we want to assign an owner to it, I tend to think it may be that the owner is the one that created the issue, or, if they need some help, then we could perhaps get some of the volunteers to look and see if they're interested in helping out.
A
Some reviews may take a while, so I was thinking, if we can start getting some ownership, then after this meeting we'd have a clear understanding of who owns which item, and having some status updates from people will be useful and will give some dynamic to the meeting.
C
Yeah, sure. Let me look at the agenda again. I apologize, I came back late; I got back last night and I'm still...
A
So my suggestion: I like this in-progress column, and maybe we can just go through the in-progress column, and if the person working on an item is in the chat or in the meeting, we can discuss the item really quickly. That will be our triage, and maybe some items will be moved to in progress, and that's how we will finish triage and then go to other discussions.
C
Yeah, I think that might be more efficient for this meeting, just to go through what's already there and help if anything is blocking, or if there needs to be some discussion on it. I think that's a great idea. So, for example, in the reviewer-approved column, I think that one...
C
Yeah, let's just start going through the columns. Okay.
A
Okay, so Morgan started this nice PR about moving some SIG Node tests into the blocking area, which is what James recommended last time. It's not assigned to anybody; Morgan started it. Do we want to find an owner besides Morgan to review it?
A
I actually already looked at it, and it's quite... I can assign myself.
E
As he said, well, I don't know who the original author was, but he made a very nice problem statement, as you can see on the right side of the screen there: if a person who's not part of SIG Node looks at the test grid, can they tell whether everything is okay or not? So I just went through and tried to make a tab group, and that's the result. I've got some comments of my own on it about how we might go forward, but I really just wanted to create it to get some other people to look at it, because I can't merge any of this stuff myself.
C
As I read this, this is really a quick way to be able to look at the test grid, because we're sort of thinking that release-informing and release-blocking are probably some of the higher issues that may need attention quicker than others. So this is a way to go to testgrid and, at a click, see those jobs at a glance, without having to reverse engineer some code or have some kind of secret magic decoder ring to do that for us.
C
Yes, and from my input, I think it's a great idea, because we had been sort of talking about this before. If we could do that with a testgrid tab and say, yes, these are the ones that are priority, and the other ones we could work on later. So I think that's a great idea.
A
So what do we need? I put it on the agenda. I discussed this with Jorge as well, and his suggestion was exactly like you said, Victor: everything that is already release-blocking, so already on the release-blocking dashboard, we can put in our blocking dashboard as well, because it's guaranteed to be blocking. I also looked at the spreadsheet that you created before. In the spreadsheet you have this column, severity or something like it. Let me open it up.
A
What is this? So, this dashboard, one of the columns here is priority, and you marked some of them as high. What is this 'high' based on? Do we need to mark them as high or not based on whether they are blocking or not blocking?
C
Yeah, so a little history on the spreadsheet. When this effort first started off, this was an initial cut at looking at all the jobs, really for all of the volunteers to understand where the jobs were and how to find them, and then the intent was for each volunteer to go through and populate all of these. So, regarding priority...
C
When I created the spreadsheet, I said, clearly some of these are more important than others. As we have spent more time on it, I haven't updated the spreadsheet myself in several weeks now, and I think, as a group and as a team, we have come to a general consensus that release-blocking, merge-blocking, and release-informing are probably the top priorities.
A
Anyway, so this 'high' roughly translates into probably being release-blocking.
A
It may not necessarily be release-blocking; it's about whether the node itself is healthy, as the SIG Node tests see it. So if something is critical enough, we can put it in the blocking set, and this is a dashboard that we will be watching very closely.
A
Is it in this category?
A
I'm sorry, I'm not quite following the question. The idea is not only to duplicate the release-blocking effort; we don't only want release-blocking tests in this dashboard. We also want to have tests that are critical for SIG Node. This is the SIG Node blocking dashboard, and if topology manager is critical enough to be in the SIG Node blocking test group, we need to add it to the dashboard, even though it may not be release-blocking.
C
Okay, so if we back up for a minute, can you pull up testgrid? This job is already there. If you look, I think it's under... yeah, there.
C
It is right there, so it's already there under kubelet. If you look to the bottom right, there's the serial GCE e2e topology manager job, and there's one for CPU manager a couple to the left. So these jobs are already on testgrid, and I think I'm missing something or misunderstanding your comment.
A
So in Morgan's PR the categorization into kubelet didn't change; it's exactly the same. What will happen is that there will be another dashboard right here at the top, and clicking on that dashboard you'll get into, I mean, maybe the same tests will be duplicated there, but this will be all the blocking tests.
A
So just looking at the summary of this SIG Node blocking dashboard on this page, if you see something not green, it means something clearly needs to be fixed right away. Into this SIG Node blocking category we want to put all the release-blocking jobs and pre-submits, but we also want to put everything that is critical for SIG Node.
E
Yeah, I think we're constantly getting into this, not argument, but this mini-conflict of: do we want one that's just the exact same stuff as the release-blocking tests, or do we want to have our own extra dashboard? So maybe we need two dashboards. We need one that just duplicates release-blocking, but we also want our own of, well, we think these should be blocking, or maybe release-blocking or merge-blocking eventually, but we think these are critical to SIG Node.
C
Okay, I got you. So does anyone have a link handy for the release-blocking dashboard?
A
For Kubernetes in general, yeah, the release-blocking one. Okay.
C
So what we end up with is a new testgrid dashboard specific to SIG Node: something that, at a glance, we can look at and say, these are the tests that are important, they should always be green, whether or not they're flagged in SIG Release blocking. We think they're important for SIG Node and they should always be green.
A
James also suggested that we can take a gradual approach: we will make all the tests that are currently green SIG Node blocking, and then we'll create another dashboard, SIG Node work-in-progress, for tests which we want to make blocking but which are not green right now, so we just put those into the second dashboard.
C
Okay, I see. So Morgan, you've got PR 8831 with your first cut at it.
C
But the jobs are specific to kubelet, right? Or...
E
The way testgrid works is basically you make a tab and then you add existing jobs to it, and so you can see what he's got up there. As you see right at the end, you can see it's in sig-release-master-blocking and sig-node-kubelet, which is another dashboard, and then I made a new dashboard, sig-node-blocking. So you sort of agglomerate jobs onto a tab group, a dashboard.
E
Whatever the thing is called, a dashboard. And you can do that in two ways. You can go to the actual job and say this job is part of that dashboard; or, which I think we might need to do just so we're not touching everybody else's jobs all the time, you go to the testgrid dashboard config and declare it there. I didn't do it that way here.
E
But if you were to expand the bottom of the file, you would see, yeah, there you go, you can see that this has the test groups. So you can sort of agglomerate the tests in this file, and we might want to focus on doing it that way.
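A hedged sketch of the two approaches E describes for getting a job onto a testgrid dashboard, following kubernetes/test-infra conventions; the dashboard, tab, and job names here are illustrative:

    # Option 1: annotate the prow job itself
    annotations:
      testgrid-dashboards: sig-node-blocking

    # Option 2: reference the job's test group from the central testgrid config
    dashboards:
    - name: sig-node-blocking
      dashboard_tab:
      - name: node-kubelet-serial
        test_group_name: ci-kubernetes-node-kubelet-serial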
E
The reason I did it the way it is here is because that's basically how I found which jobs are release-blocking: I just looked at what's release-blocking and added it. It makes it obvious why we're doing it, but I'm touching other people's files, so yeah, I can go either way on it. I don't really care.
C
Okay, yeah. For me, I just need to, I guess, put some comments on the issue and the PR.
A
Yep, and you can see I put together some statistics. It's like 42 percent of tests: we're running only about 105 jobs right now, and of them around 42 percent are already passing. So we have a pretty big chunk of tests that we can potentially mark as blocking, if we decide to do that, and it would be just a bulk move; then everything else needs to be looked at individually. But it's only around 105 test jobs, so it's not that many.
A
Okay, let's go to the testgrid dashboard. So the node end-to-end remote runner is a blocker, and it's broken. Jorge, are you on the call?
A
Yeah, I don't have anything to say about either of these three items. Even though they are in progress, we probably need to just ping Jorge on these two or three, because he opened them, and ask if he wants some help from us.
A
I want to talk about image policies. There is this doc from Roy about how to pick an image for your tests, and there is a recommendation at the end on how to pick an image for a test. It basically splits tests into four categories: release-blocking, pre-submits, continuous integration with other technologies, and latest-and-greatest, kind of future-looking integration. I really like this...
A
...this division of tests. But currently, the way we structure our test configs right now, we have, I think, three image files: one image file for most of the tests, and a couple of image files for benchmarks and for something else. There are two pull requests trying to update our images for tests, but those pull requests are hard to...
A
Let's see, is it this one? I think it's from Morgan as well. Yeah, Morgan.
A
Yeah, so my issue is that the pull request itself is not the problem. I think this pull request definitely needs to go in; we need to update the COS image just so we're not on a very old one. But then we may also need to split this one image file into multiple files, and I was looking for suggestions on how to do it better, and maybe somebody already thought about it.
A
So, let's see: there is an image-config.yaml that is used in many jobs, including pre-submits, release-blocking, and feature tests.
A
So if we want to follow the recommendation from Roy on which images to use when, we will probably need three separate files: something like image-config-presubmits, image-config-ci, and image-config-forward-looking. All of them would use a different set of images: pre-submits probably need a very stable, exact version of the COS image, while for feature work we would need something like an LTS image family.
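A minimal sketch of what that split might look like, assuming the node e2e image-config format with the image / project / image family fields described later in this meeting; the file names, keys, and the pinned build number are illustrative assumptions, not the exact schema:

    # image-config-presubmits.yaml: pin one exact build so pre-submits stay stable
    images:
      cos-stable:
        image: cos-81-12871-69-0    # hypothetical pinned build
        project: cos-cloud

    # image-config-ci.yaml: follow the LTS family so CI picks up new builds automatically
    images:
      cos-lts:
        image_family: cos-81-lts
        project: cos-cloud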
C
So why would we want to have different images based on the job type? For example, you said a different image for the pull requests and a different one for the periodic CI jobs. I don't understand what the benefit would be there. I would think we would want to run the same image for each job type.
A
Exactly what I wanted to ask. The idea here is that you don't ever want to fail pre-submits, unless we explicitly want to migrate to the next image, and then we...
A
I'm sorry, I was muted. The idea here is that we want pre-submits to be as stable as possible, so we want to lock a single image for pre-submits and only update it rarely, when we have to. This way we guarantee that the pre-submits will not be flaky because of the image. So we don't want to use an image family for that; we just want one stable image.
C
I see, okay. So this is based on input from Roy, and he is on the COS team. Let's see, have you given that any thought, or anyone else?
E
I've given this plenty of thought; I'm not sure I've ever coherently managed to tell anybody my thoughts. I can agree that maybe for some of the release-blocking stuff we should have a specific image, just so that it's not something we need to change all the time, and it's stable. But these images come out so often and they change so much, even once they're in an LTS version, that I'm not sure there's stability either way.
E
Because we've done this: we've upgraded LTS images and gone from one to another, and it's broken a bunch of stuff. The fact that we're using one for a while seems to be calcifying; we're locked to that image, even though in theory upstream isn't using that specific image anymore, they're using one that's 17 versions newer, because they come out like once a week. So that was my goal in trying to use one of these image family things.
B
For what it's worth, I'm with Morgan here. I think my only comment would be that instead of using cos-stable, we should just point it to cos-81-lts.
B
It is rare that a new build of an LTS image would actually cause an issue, and if it does, I'm not exactly sure breaking it out by type of job necessarily helps, unless Kubernetes itself certifies, for example, Kubernetes 1.18 with cos-81 or with a specific build, which probably is not going to happen. At least internally, even in GKE, this is how we're testing new images: pinning to an LTS version.
B
Are you asking whether that has happened? Possibly; sometimes with kernel patches there might be a new kernel refresh that changes, for example, eBPF behavior somehow, or changes how TCP memory handling happens.
B
It's the second-level things that we do, changes to our own kernel modules and stuff like that, that might cause issues, or that have historically. For what it's worth, COS also does some testing using open-source Kubernetes: they package their own kubelet and run whatever tests they run before releasing an image. So I think just pinning it to LTS in all places probably makes sense, at least in pre-submits.
A
Right, and it's not about being deterministic. The problem is, once we hit an issue with a newer image, all pull requests are stuck, unable to merge, because until you fix the issue you're stuck with failing pull requests.
B
I think, knowing how rare it is, though, I would err on the side of not having more PRs go in before the issue is fixed.
E
As long as we have a notification thing, to make sure we can fix it real quick, because yeah, we don't want to keep blocking everyone. But I don't want to be managing these images.
E
Whatever it pulls, it pulls the most recent version of the family, right? If you look at these columns at the bottom third of the screen, it's image name, which is a specific image; image project, which is cos-cloud; and then image family. So if you pick cos-81-lts, then until cos-81 is end of life, which is like December or November of next year, we don't have to touch it.
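For what it's worth, the family-to-image resolution E describes can be checked by hand with gcloud; a small sketch:

    # Show which concrete image the cos-81-lts family currently resolves to
    gcloud compute images describe-from-family cos-81-lts --project cos-cloud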
B
A new LTS comes out, I think, around March-April and August-September, so cos-85 will be stable, I believe, in the next three or four weeks, and end of life is 15 months.
A
Another concern: when I talked to Roy, he said that there are two stable images that we test in every job. Do we actually need to test these two stable images, or do we just need one? I have no idea.
E
I couldn't divine the inspiration from looking at the history of this file; it has changed so much over time. Originally, as you can see from the deleted comments on the left side, it looks like at some point we were trying to pick specific versions of Docker. But by picking specific images like that, you're doing one thing by doing a different thing; you're not really achieving your...
E
...goal, because you're choosing an image in order to choose a Docker version. Well, then you should install the specific version of Docker you want to test. Are we covering specific versions of Docker, and specific versions of containerd, and runtimes, and whatnot? Then we need to do that explicitly, and not pick an image that happens to have them at a specific point in time.
A
Yeah, okay. One reason for two stable images may be that one is marked as LTS and another is pinned to a specific image, and this way we can always tell whether a failure happens only for LTS, meaning the newer image caused it, or it's a general issue that affects either.
A
So we can go with this approach, or we can just save some money and only have one stable image. I'm fine either way.
A
And lastly, I wanted to ask: do we want to have tests on the beta images? We could run maybe the same tests, maybe a subset of tests, with the beta images, meaning that we would test with the latest versions of the kernel and tools, and it may give us some insights for the future. I don't think we need to do it for pre-submits, for sure.
A
As a separate job, maybe we can copy-paste all the pre-submits and run them with the latest image as separate jobs in a separate dashboard. Has anybody thought about it? Is it something that we want to do, or do we just completely disregard this one and go with stable for now?
C
Yeah, I would say that, as I'm getting caught up and reading through all the comments and the doc that Roy put together, and input from everybody, it looks like the latest LTS image would work for all of the jobs, which would be, for example, cos-81-lts. Because if you stop and think, what's the purpose? Why are we testing?
C
Why are we running these images? Are we trying to test each and every version of the LTS or COS image? I think the answer is no. We want to be able to say, hey, the LTS version works on all the SIG Node end-to-end tests.
C
I think, Karan, if you can correct me: aren't these images being tested in some other test suite as well, probably as each image comes out, and wouldn't they perhaps catch these failures, instead of relying on the SIG Node end-to-end tests to catch them, or no?
B
They are tested; the COS team does run some tests. Each COS image has a kubelet, and whatever kubelet version they have in that image is the one they validated with. If we want to test basically at tip, I would say cos-dev might be better than cos-beta, but even then, I think cos-dev might be very unstable. So we should...
B
We should definitely not be opening bugs and notifying people on failures on cos-dev.
A
Yeah, I think it's less about testing the latest LTS; it's about how SIG Node features will work with a future version of containerd or a Docker update. I remember a couple of issues: one was a rollback of a cherry-pick into 1.18, because 1.18 used a previous version of containerd that didn't support some feature, I don't remember what it was, but it happened just recently. Another one was when the latest version of Docker didn't support some flag and we tried to pass that flag anyway, and this caused some issues.
A
So this tip-of-technology kind of test may be very interesting, but I don't know how to do it. We probably don't want to put it in our regular image file, so we don't run every single test with these images, but we want to run all the tests with them maybe once in a while. I don't know how to implement that.
E
I think this is something we should put off for the future and write down as a thing that maybe we want to consider, but I don't think we can make a decision right at this moment.
A
Okay. So, Morgan, would it be possible for you to update the pull requests to target LTS?
E
I'd have to look at them; I think they're probably different files or something, but I'll update the PRs.
A
And then it will be easier; that will fix my experience later. Karan?
B
Yeah, I just wanted to mention, I noticed this in our internal testing, but GKE has enabled Shielded Nodes by default for 1.18 and above clusters.
B
Mostly it's fine, but custom images don't support Shielded Nodes, and we use custom images. There will be a fix sometime in the future on the server side to disable it, but for now kubetest will fail on 1.18 and above if you use the GKE provider, so I just have a PR disabling Shielded Nodes for that.
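Presumably the fix amounts to passing the corresponding gcloud flag when the test cluster is created; a hedged sketch, with an illustrative cluster name:

    # Create the GKE test cluster with Shielded Nodes explicitly disabled
    gcloud container clusters create test-cluster --no-enable-shielded-nodes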
B
I was trying to look on testgrid to see if there were any 1.18 failures that looked similar, and I guess that's why I joined late, but I didn't find many of them, or any bugs reported, which makes sense, because the tests don't run on GKE.
C
Karan, can you tell us, for folks like me, what's a Shielded Node?
B
I'd have to look at the docs. Shielded Nodes is basically a security feature where, I guess, they enable Secure Boot and other security things that don't allow you to, for example, load kernel modules once the VM is booted up. Let me paste it... oops.
B
Let's see: it's actually doing integrity checks for VMs. You can enable it with Secure Boot, but it's basically making sure the VM that comes up is actually a GCE VM and that nothing in between has been tampered with.
B
Yeah, basically, what will happen is there needs to be some stuff done to an OS image for Shielded Nodes to work, and because we're using custom images, those base images don't have that stuff done to them, so the nodes will just not come up. It's not really easy to debug those, because kubelet itself is not aware of anything related to the hypervisor or Shielded Nodes.
B
But this will only fail tests that run against GKE. I don't believe any tests in prow right now run on GKE; I think they're all GCE.
A
Okay, thank you for the update. Any other topics or any questions, anyone?
A
If you have some time, I wanted to ask this question; I was really surprised by this note. Here, somebody sent a pull request and suggested that he can run the test for this test name and get some results. I was really surprised, because I thought that the /test command can only run tests out of these pre-submit hooks, like those ones.
A
But do you know if this is a thing? Can you actually run /test on some test that is defined somewhere else in the repository, and it will just run this test on your behalf somehow?
E
It depends on the job configuration, and yeah, it is possible to have prow run a test that is not an automatically-run pull test. This one should be automatically run, I would think, because it's got 'pull' in it, but maybe not.
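What E describes matches how prow presubmits are configured: a job with always_run: false is only started when someone comments /test <job-name> on the PR. A rough sketch, with a hypothetical job name:

    presubmits:
      kubernetes/kubernetes:
      - name: pull-kubernetes-node-e2e-example  # hypothetical job name
        always_run: false   # not started automatically on every PR push
        optional: true      # does not block merge
        # trigger it manually by commenting on the PR:
        #   /test pull-kubernetes-node-e2e-example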
C
...whether it runs on each PR or not. So yes, I agree with what Morgan said, because when I did one of the end-to-end tests for topology manager and CPU manager, this is the way I had it: not always run. We were waiting to make sure it was all stable, because we didn't want to block jobs on it, and this is the way we were manually running it, right inside a PR.
A
Yeah, I'm asking because I remember this discussion about personal accounts for fixing test infra. I was thinking, if this is a way for us to go: you can send a pull request with a fix that you believe fixes something, and then run any test you want, not just from the pull configuration, but from any configuration.
C
Well, my understanding is you would have to have an existing PR, and then from within it, it could be just a debug PR, to say, hey, I'm running this just to be able to debug my pull request job. So this is a way to manually trigger it and run in that environment, so that you don't have to have your own private...
C
...Google Cloud to do that. But if you do have your own private cloud, you can run it there; there's just no guarantee that if it passes on your private one, it's also going to run the same in the GKE or GCE environment, at least based on what I've seen.
A
Yeah, so we're out of topics. We discussed images and we have action items here. We discussed the triage process, and I will reach out to Jorge asking what is happening with those two issues and whether he needs some help, and I will try to put more issues into the first column, so we can start moving items from the first column to the in-progress column.
A
Okay, going once, going twice. Happy week! I hope this week will be energizing; there is lots of fun happening at KubeCon. Stay tuned for all these activities, and have a great week. Bye-bye.