From YouTube: Kubernetes SIG Node 20210518
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Hello everyone, and welcome to today's edition of SIG Node. It is Tuesday, May 18, 2021. Sergey, do you want to give us the update on PRs?

B
Yeah, we did a great job last week with KEPs in the enhancements repository; we have many KEPs approved and merged. Also, thanks to Dawn approving community contributions, we have a CONTRIBUTING.md now and some updates there, like archiving last year's notes, that kind of PR. Unfortunately, we don't have much progress in the k/k repository, but it's easy to explain: we've been busy with KEPs. Hopefully we can pick up the pace and start working on that; there are plenty of PRs that need approval.

B

C

C
I can make a pass at the other one as well, but if we have a combined view, it will be easier.

B

C

B
I filed a bug on GitHub asking whether they can allow us to query by column. I didn't get any replies; I mean, GitHub is not very good at replying, but...

D

A
Yeah, Sergey, we do have... I filed an issue against, I think, ContribEx, and I think some folks from the Kubernetes project, from ContribEx, have been talking to GitHub about improving support for boards and that kind of thing. But I don't know where exactly those things have landed.

A
Cool, anything else on that?

A
Yeah, I didn't have any updates on PRs. I've been mostly heads down in the enhancements repo, so, yay, almost everything that node proposed got approved, which is very exciting.

A
So, looking at today's agenda, there are two things from me. I don't know if anybody else has any agenda items, but feel free to add them to the backlog; I will paste the link in the chat.

A
Otherwise,
I
guess
we'll
have
a
short
meeting
and
my
first
item
is
a
short
thing,
which
is
a
node
bug
scrub.
So
I
was
looking
again
at
the
node
backlog
and
we
have
something
like
500
issues
in
kubernetes
kubernetes
and
I
haven't
been
doing
anything
really
to
try
to
stave
back
the
the
tides
of
issues
because
there's
just
so
many
and
they're
incoming-
and
you
know,
there's
such
a
big
backlog.
It
feels
kind
of
impossible
to
start.
So
I
was
thinking
we
could
hold
a
event
where
we
do
a
bug
scrub.
A
So
basically,
like
a
two-day
event,
we'll
try
to
find
some
sort
of
like
mentors
or
leads
in
different
time
zones.
We'll
have
a
bunch
of
documentation
for
what
to
do
and
set
up
with
tooling
and
whatnot.
A

F

A
Awesome, and I'm seeing lots of plus ones in the chat, which is really exciting, so I hope to see everybody there. My plan is that for those two days I'm going to be doing nothing but scrubbing bugs and running this, and work is very happy to let me do that. So I'm going to continue with the assumption that it's going to be June 24th to 25th.

A
I will send a placeholder invite to the mailing list and I'll start getting details organized. Since it's more than a month out there's still some time, but we might use Triage Party or something like that. I will need both folks who are interested in volunteering to squash the bugs and folks who are interested in being mentors, sort of leading for given time zones and whatnot, since we'll probably have people from all over the globe who want to help.

A
Cool, okay, people say just pick a date. The date is June 24th to 25th, that's the date, and I hope that the two days will be enough. If not, hopefully the outcome of this will be that we get the bug backlog into reasonable shape, and then we can start adding issues to our weekly triage.

A
So we're not just looking at PRs, we're also looking at issues every week as well, which will make it much easier to maintain that steady state. And then, if we want to do something like this once a release, I think it shouldn't be too hard to keep up going forward. So yeah, I'm so happy that everybody in the chat is so excited about this. Great.

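The weekly triage mentioned above is typically driven off GitHub label queries. Below is a minimal, illustrative Go sketch that counts open sig/node issues via the public GitHub search API; the exact label filter (sig/node without triage/accepted) is only an assumed starting point, not the SIG's agreed triage criteria or the tooling it has settled on.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// searchResult holds the only field we need from the GitHub search API response.
type searchResult struct {
	TotalCount int `json:"total_count"`
}

func main() {
	// Assumed query: open issues in kubernetes/kubernetes labeled sig/node
	// that have not yet been accepted by triage. Adjust labels as needed.
	q := "repo:kubernetes/kubernetes is:issue is:open label:sig/node -label:triage/accepted"

	resp, err := http.Get("https://api.github.com/search/issues?q=" + url.QueryEscape(q))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var res searchResult
	if err := json.NewDecoder(resp.Body).Decode(&res); err != nil {
		panic(err)
	}
	fmt.Printf("open, untriaged sig/node issues: %d\n", res.TotalCount)
}
```
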
A
The other thing I had on the agenda is a work breakdown for swap, if folks want to talk about that. I made a Jira epic on the public Red Hat Jira, if people want to see how I broke it down, and I will paste that in the chat. Basically, there's some coding work that needs to be done and there's a huge amount of CI work.

A
That needs to be done, and I am very happy, especially since I have made API changes before and wrote the KEP, to go ahead and do the code changes. But I have sort of a prereq: I need some CI changes done first, like we need images with swap on them and we need to be able to provision those in end-to-end jobs. I know some folks at Google and possibly elsewhere were interested in working on things. Is the CI stuff something someone else could pick up?

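For context on why the CI images matter: the kubelet refuses to start while swap is active unless --fail-swap-on=false is set, and it detects swap from /proc/swaps, so an e2e image that never enables swap cannot exercise the new code paths. Here is a minimal sketch of that kind of check, assuming Linux; it is an illustration of the idea, not the kubelet's actual implementation.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// swapIsEnabled reports whether the host has any active swap devices by
// reading /proc/swaps, which lists one device per line after a header.
// This mirrors the kind of check the kubelet performs for --fail-swap-on.
func swapIsEnabled() (bool, error) {
	f, err := os.Open("/proc/swaps")
	if err != nil {
		return false, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	lines := 0
	for scanner.Scan() {
		if strings.TrimSpace(scanner.Text()) != "" {
			lines++
		}
	}
	// More than just the header line means at least one swap device is active.
	return lines > 1, scanner.Err()
}

func main() {
	enabled, err := swapIsEnabled()
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("swap enabled:", enabled)
}
```
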
G
Yeah, hi Elana, I can help with the swap work here. I guess: do we have a ticket for that particular CI requirement already?

A
So it's documented in the KEP. I haven't filed anything in the upstream Kubernetes repo; I have put together this Jira epic, which is public, so you should be able to view it. It just tracks an overall breakdown, because I needed to do that anyway. But I wanted to check in and make sure: how do people feel about this breakdown, this division of work?

A
Okay, and then, if so, maybe we can go ahead and start filing upstream issues for things, for tracking.

A

A
It can sometimes confuse the release team if we have, you know, five different issues for the KEP, so sometimes it's easier to just track it as a comment on the KEP saying this piece of work is this PR, and this piece of work is this PR, and so on.

A
Okay, then I will do that: I'll edit the first comment in the enhancement tracking issue to sort of have that work breakdown, and then we can just start filling in PRs as they come in.

A
And I know that there have been some folks who have been interested in testing, so once we have some CI infrastructure and the code implementation is done, I am happy to go and write the documentation to enable this for testing. So if people want to jump in with that, just send me a note. I'm happy to help out with that if you have any difficulties.

C
Hey, so I wanted to quickly chat about this issue. I'm not sure if folks have been following, but there was a runc rc94 update and it caused regressions. One thing we realized, and Harshal can give some more details, he just joined, was that if we had the CRI-O blocking job enabled in CI, it would have kind of caught that. Harshal has been running that job on the PRs for a while.

H
Yeah, so we are running node conformance and feature jobs upstream for our job, and what happened is that yesterday, about 22 or 23 hours ago, the runc update was merged in Kubernetes, and immediately the only job that is there right now, node e2e with the remote runtime, that's the one I pinged here in the chat, started failing. And it turns out...

H
The runc update was causing that, and if this job were blocking, we could have caught it. So right now, from the signal point of view, the only node job running is using Docker, not a remote runtime; nothing that is blocking. If we could make this job blocking, then we could catch such regressions early enough with the remote runtime.

H
Yeah, so this job uses cgroup v1 with the systemd driver on Fedora CoreOS. So this would be another testing scenario that we can cover with this job.

B
Would switching to containerd help, or is it purely...?

C
So, Sergey, I guess, since we are already adding another job with CRI-O... I mean, we don't mind if we have even more jobs, but I think there might be a budget issue that we'll need to check. So we're just kind of bringing this back up again, and we feel that we've been running this job long enough, so we wanted to check with SIG Node.

C
If
there
are
any
objections
to
making
this
blocking,
we
feel
it'll
be
beneficial
if
we
can
prevent
regressions
of
this
sort
running
this
job
and
another
thing
that
came
up
in
the
discussion
over
there.
But
I
probably
I
I
admit
I
don't
have
enough
insight
into
is
the
node
e
to
e
serial
would
have
also
helped.
But
apparently
the
state
of
that
job
is
not
good.
A
So, just for a little bit of context, because I think I had bumped this conversation at some point: I had chatted with Ben, who was concerned about adding a new blocking job, but we had kind of figured all of that out.

A
I think there's a goal at some point to not have separate blocking jobs for, for example, containerd or Docker or CRI-O; he'd prefer them to all be one single thing, because it's a bit expensive. But right now we do have the job, and it's already running on every PR.

A
It's currently not reporting, but the hope, I think, is to set it to reporting and then, after a little bit of signal there, to set it to blocking. I think it just fell off the radar for a couple of months until now.

H
Sorry to break in: from the expense point of view, whether you have one job or multiple jobs blocking, it doesn't make any difference, because even when we have one job, we are spinning up different VMs in the background.

A
So we already have all of the jobs, so that's sort of moot. I think the expense was mostly in terms of build time, right: we have to have these three separate builds running in parallel, and we can't just reuse the one build because we've moved away from Bazel. That was Ben's concern, but we're already running these things, so it's not really a blocking argument at this point.

B

D

B

H
I don't know; the default Docker job didn't get affected, so I'm assuming it has no effect on that cgroup driver, because...

I

D
I'm trying to ask because we also have the containerd tests, right, that are also running in conformance, and that one isn't using the systemd cgroup driver. So if it's not a cgroup driver issue, we should see the same failure in the containerd tests, right? But yeah, if it's the systemd cgroup driver, then I imagine it would only show up here in the CRI-O test, right, yeah.

I

C

F
This kind of thing was discussed a couple of years ago; we have had systemd-related regressions in the past. The problem is we cannot always stay on the latest systemd, so which version? We don't want to make that conformance test, even at the node e2e level, become a blocker, because in reality users basically have to rely on different versions, right; we have different versions of systemd.

F
We
used
to
be
have
the
core,
as
always
block
load
like
the
constant
of
the
field,
because
they're
always
up
to
latest
kernel
natives
of
this
knee,
but
that's
really
give
us
like
a
lot
of
force
or
not
signal
to
the
signal.
So
we
spend
a
lot
of
time
to
debug
system
debug.
So
that's
why
we
decided
we
had
decided
to
remove
after
core
osp,
but
before
we
decided
remove
the
core
as
the
node
e2e
test,
we
also
want
to
make
the
they
are
not
blocker.
We
want
the
most
reliable
staff.
F

F
Okay, here it is: for the SIG, the features and all the dependencies, we try to find the most reliable things. For example, we could adopt an OS image with cgroup v2 and test on that one, because that is a SIG Node feature, but we are not open to testing different versions of systemd, because that's not our feature, that's more the OS. So that's the decision we made: this is more a vendor conformance test for production, so we should not shift that kind of overhead onto the community to drive, because each vendor may have different requirements.

F
That's
kind
of
the
decision
we
make
in
the
past
same
so
so
I
just
want
to
share
with
here
so
like
that
we
do
have
the
fedora
of
the
image
in
the
past.
That's
even
a
couple
years
ago.
We
because
we
are
open
for
the
secret
version
too,
and
because
that's
the
feature
we
sign
up
and
also
we
have
this
cri
test
and
the
cri
tool
all
those
kind
of
things
it's
the
builder.
F
So
then
we
can
hide
off
the
different
of
the
container
runtime,
because
back
then
there's
several
container
runtime
came
to
the
signal
and
we
cannot
really
in
this
community
and
effecting
test
all
the
container
runtime.
But
of
course
we
did
later
because,
due
to
the
maturity
reliability,
we
we
graduate
continuity
and
acquire
to
the
cncf
for
on
the
container
runtime
right.
So,
but
at
that
time
we
did
the
we
did
the
different
decisions.
So
after
that
one
which
one
we
should
be
testing,
we
can
come
up
this
one
after
we
did
the
doing
that.
F
The
darker
shame
deprecation,
because
until
today,
basically
default
testing
still
goes
to
the
docker
share,
because
we
defer
to
that
discussion.
So
I
just
want
to
give
some
history
here.
C
Yeah
yeah
don
so
like,
I
think,
like
two
factors
here
so
so
in
run
c
upstream,
we
have
sierra
covering
at
least
a
couple
of
distributions
like
fedora.
C
And
like
we
have,
I
think
we
have
more
folks
participating
to
trying
to
keep
to
keep
this
screen
like
we
image
like
herschel,
is
monitoring
this
job
and,
like
our
hope,
is
like.
If
we
get
it
blocking,
then
we
don't
break
it
like
right
now.
We
are
in
this
cycle
that
every
time
we
try
to
update
ranci,
we
have
this
back
and
forth
so,
and
we
are
shipping
system
dc
group
driver
and
openshift
in
production.
C

C
We updated runc, it broke us, and now we're going to go back to the previous rc. So every time, the delta to the next rc becomes bigger and bigger, because meanwhile we get bug fixes and the like, CVE fixes, for which we have to update runc anyway. So I feel that if we get this job blocking, it will make our job of updating runc easier in general.

F
I
want
to
I
want
to
the
I
want
to
make
sure
this
is
not
just
like
the.
If
it's
not
just
like
the
density
problem
sounds
like
that's
the
more
like
the
runs,
the
integration
with
the
system,
the
integration
with
cryo
problem.
So
this.
C

C
But since we haven't updated runc in so long, while he was doing the updates he probably missed some code parts that needed to be updated in the kubelet, and we don't have a blocking test for that, which is why it got merged: nothing was testing the systemd cgroup integration from the kubelet side. But if we had a blocking job over there, then we would have more coverage there, right? Unless we say that the kubelet doesn't support it.

C

A
So I'm a little bit of a newcomer to this, but I read through the SIG Node minutes, and my understanding, based on previous meetings that we had last year, was that this has sort of continually been a problem. And the goal upstream is not just to support containerd or to support Docker, it's to support the CRI, and in order to do that...

D

A
...we need to be testing more than one container runtime. So the fact that we don't have a blocking job for anything other than containerd right now is probably a problem from that perspective.

D
Yeah
I
mean
I
I
understand
like
if
we're
trying
to
if
the
coolest
supports
the
system
dc
group
driver
right
makes
sense.
We
should
have
some
type
of
test
for
that
right.
That
does
make
sense.
Yeah
yeah.
C

D

F
This has kind of been discussed many times, even for CRI-O; I think Mrunal was there. So we do want to have the CRI-O test, but it has to be owned by the people implementing CRI-O, and for the last many years there was no such effort to add it, until recently. So that's the problem for the SIG Node e2e. We're also open to it, but then we just want to have the owner, right.

F
So,
every
time
we
talk
about
this
kind
of
problem,
every
time
came
here:
people
their
development
being
blocked.
I
tried
to
play
the
evil
decker
here.
So
the
people
just
said-
oh
add
this
one.
So
we
this
is
where
we
in
the
past.
We
either
a
lot,
but
then
nobody
really
met.
Nobody
really
have
this
kind
of
things.
So
that's
why
I
have
to
deprecate
it
all.
We
have
to
remove
that's
the
problem.
F
So
if
we
think
about
the
we
do
talk
about
this
for
continuities,
that's
why
earlier
I
said:
okay,
we
need
we
differed
discussing.
We
said:
okay
go
with
the
docker
shim
deprecation,
let's
figure
out
how
we
are
going
to
make
turn
off
the
cr
ci
related
container
runtime,
and
and
do
we
need
to
double
those
effort?
Can
we
go
to
the
node
e
to
e,
like
the
reduce
of
the
end
of
the
e
to
e,
like
the
class
level
of
ete?
F
So
we
didn't
draw
conclusion
yet
so
we
can
open
to
open
this
discussion
and
discuss,
but
the
other
hand
it
is
if
we
come
to
blocker
if
and
also
is
not
really
maintained,
carefully
maintained
that
will
block
a
lot
of
the
people's
pr.
This
is
what
we
have
considered
in
the
past.
Of
course,
the
staff
always
block
everybody's
merging
so
that
end
up,
we
always
have
to
debug
the
system.
The
issues
for
everybody
yeah.
C

F

C
Dawn, the difference is, when we started CRI-O, we were two people working on getting it working. Now we have a much bigger team; you can see a lot more folks from CRI-O participating and working on these efforts. So we are definitely on board to be responsible for keeping any such job green.

A
Yeah,
I
know
those
concerns
were
raised
by
ben
don,
and
so
I
think
our
goal
was.
This
is
why
we
added
that
job
and
it's
on
every
pr
running
right
now
to
see
to
make
sure
that
it's
stable
enough,
that
it's
not
just
going
to
be
randomly
flaking
and
therefore
like
blocking
ci
unnecessarily,
and
I
think
until
we
saw
this
breaking
change,
the
the
signal
was
quite
good.
A
It
was
it
was
green,
and
so
the
fact
that
the
signal
was
quite
good
until
it
was
green
and
then
something
merged
and
it
broke
it
like.
That
is
a
good
indicator
that
we
should
probably
consider
making
it
a
blocking
job.
But
if
those
criteria
aren't
being
met,
then
obviously
we
don't
want
to
have
it
a
blocking
job.
F

B
I think Ben's point is that we need to minimize the number of jobs and have some established way to look at jobs. That's right, yeah, because if you don't have a way to track jobs that are running asynchronously, we would never scale to the...

B

D

D
What I kind of wanted to mention also is that the job I think we're talking about here is the conformance job, right. So it sounds like really the jobs that would cover most of the regressions for runc are the serial jobs, which are broken in general and can't be made blocking because they're serial and whatnot, right. So just another thought: maybe it makes sense to spend some time investing in those tests, as opposed to...

D

C
And I think there's one more comment from Brian in the chat, about the fact that this was exposed only after the runc rc landed. Brian, that's a great point, but the challenge is that we don't have enough bandwidth on the runc side; Kir is doing most of the work to keep the runc CI green, and having runc folks be responsible for Kubernetes CI may be a challenge.

C
What
we
could
do
is
like
somehow
absorb
ranty
head
here
in
kubernetes,
but
that's
just
one
more
job
and
the
question
will
come
up
like
who
owns
that
job
and
who
is
looking
at
it
so
like
at
least
a
release
has
some
checkpoints
where,
when
we
try
to
update
it,
if
we
have
like
enough
ci
that
gives
us
assurance
that
the
update
is
not
going
to
break
us,
but
it
will
be
better
than
that.
Vr.
F
I
totally
agree
with
your
brand.
This
is
basically
what
I
argue
is
just
blocker
for
the
every
pr
and
I
totally
agree.
We
should
have
like
the
periodical
ci
to
run
against
this
one.
We
even
talked
to
rancid
folks
in
the
hopefully
they
we
talk
to
them
and
maybe
and
identify
some
conformance
tests
or
cri
test.
That's
the
original.
We
talked
about
the
ci
test
and
the
share
with
them
to
become
to
their
blocker
in
the
past.
C
Yeah,
I'm
happy
to
facilitate
that
I'm
one
of
the
ransom
maintainers
like
so
I
think
one
again,
this
I've
had
this
conversation
with
david
and
a
few
others
like
one
more
challenge
we
have
had
with
run
c
is
like.
I
would
like
more
frequent
updates
of
run
c
into
kubernetes,
but
we
insist
on
getting
tags
and
because
of
that,
like
every
time
we
track
tags
try
to
tag
something
in
run
c
like
people,
oh,
no.
We
need
to
fix
this
little
thing,
this
little
thing
and
by
the
time
we
get
a
tag
out.
C
It's
like
three
four
months
and
so
for
solid
three
four
months.
We
never
update
anything
into
c
advisor
or
kubernetes
and
by
the
time
we
try
to
absorb
it.
It's
like
a
whole
lot
of
changes
and
it
makes
it
harder
so
like,
and
I
don't
know
like
how
we
can
change
that.
I
would
really
love
it
if
we
can
frequently
update,
run
c
into
c
advisor
and
upload
it.
C

C

J
Sorry, this is Lantao. I think today, at least in the containerd repository, containerd does update runc periodically, and all the updates need to go through the node e2e tests, because we have node e2e presubmits there. So although runc doesn't have presubmit blocking things, containerd does pick it up periodically, and eventually everything goes through.

K

C

D

K

F
Yeah, wait, it sounds like we are converging: we do want to improve our SIG Node e2e tests and have that test scenario cover this kind of thing, and also have the continuous integration tests running periodically here, right. And for this kind of integration it doesn't matter whether it's for containerd or for CRI-O, because anyway we do have the node image tests that include both CRI-O and containerd right now.

J

C
So, like, David is open to taking whatever tags, but I think in the main Kubernetes repo the policy is that we don't take anything that doesn't have a tag, and that has been the challenge, because too much time passes in between tags and then we have to absorb those changes and try to adapt to the library changes. The goal, at least my hope, is that with all the changes Kir has made recently, we shouldn't see a lot of changes after we get past this.

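For readers following along, the mechanics being discussed are ordinary Go module pins: kubernetes/kubernetes and cAdvisor only bump runc when there is a release tag to point at, roughly like the go.mod fragment below. The module path is real; the version shown is just the rc mentioned earlier, purely as an example, not a recommendation.

```
// go.mod fragment (illustrative only)
require (
	github.com/opencontainers/runc v1.0.0-rc94 // bumped only when runc cuts a tag
)
```
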
C

K

C

K

K
Well, it would make sense, then, to run both, if you have the bandwidth: both the tagged version that you're potentially going to ship and also master, so that we could have results at least in an optional mode, and, you know, push that back to runc: your PR broke us.

D
That's
actually
the
question
yeah,
that's
the
question
I
wanted
to
ask
because
we
have
in
the
container
d
job
today
it
does
there's
a
master
version
that
uses
the
latest
version
of
container
d
and
latest
version
of
run
c
like
the
binary.
So
the
question
is
maybe,
if
it's
possible
for
us
to
modify
that,
has
to
also
use
the
latest
version
around
c
vendored
in
right.
That
sounds
like
kind
of
what
we're
trying
to
go
after
right.
That's
yeah.
F
So
the
problem
is,
I
understand
the
menu
just
say:
okay,
we
could
do
whatever,
but
it
just
kubernetes,
because
once
you
start
to
integrate
with
kubernetes,
I
think
statewide.
There
should
be
okay,
easy
because
the
totally
under
signal
control,
but
the
kubernetes
actually,
because
and
also
it's
just
library
right,
so
we
could
address
airport
but
for
the
kubernetes
integration.
Even
we
build
off
the
node
e2e
test
there
we
there's
the
rule
we
have
to
validate
sounds
like,
and
so
that's
why
we
need.
F
We
need
to
partner
with
the
run
safe
folks
community,
unless
they
can,
they
introduce
tag
more
frankly,
they
need
is
the
possible.
They
could
tag
that
daily.
So
then
we
could
have
the
daily
job
run
and
we
could
automate
this
kind
of
the
rendering
things
for
our
periodicals.
They
are
just
for
this
kind
of
cases.
Can
we
do
that?
Maneuver?
Since
you
are
not.
C
Yeah
yeah,
I
I
think
don
what
what
we
can
do
is
like.
I
can
try
to
invite
some
of
the
run
see
folks
here
to
a
signaled
meeting,
or
we
can
have
like
a
special
meeting
that
works.
Good
lucky
hero
is
one
of
them,
and
I
think
this
time
is
not
great
for
him
and
we'll
probably
need
alexa
car
and
I
and
then
we
can
see
if
we
can
figure
out
a
plan.
Yeah
yeah.
A
I
just
spoke
with
dims
to
try
to
get
his
take,
and
the
issue
with
like
trying
to
integrate
things
like
sort
of
in
between
run
c
releases.
Is
that
then
there's
no
coordination
between,
for
example,
c
advisor
container
d,
the
other
run
times,
and
so
dimms
is
pro
more
tags,
because
trying
to
like
go
in
between
without
tags
is
going
to
just
make
that
integration
and
standardization
difficult.
C

A

F

F
Actually, at that time we wanted to partner with the containerd community and also the runc community, and we wanted to define some of the CI tests. I know this doesn't cover the vendoring things, but I think we could evolve that; that's the discussion from that time. We could simplify our node e2e tests, have some basic tests, and work with containerd: can we include those in the containerd community, and can we include them in the runc community?

K

K

K
Then we'd always be in sync with runc's master, at least in an optional test, and that's all I'm saying. I'll talk to Dims and see if there's a way that we can get an optional test for runc to be updated in the cAdvisor test bucket. But you're right, Dawn: we've got critest, and it's got a good subset, but it's not good enough. We need to add more tests to it to cover some of the additional...

K
You
know
use
use
cases
that
that
we're
seeing
now
in
kubernetes,
but
if
we
can
keep
main,
if
everybody
knows
hey,
you
know
that's
being
tested
over
there
and
that's
making
run
c
better
when
we
when
we
consume
it
then
yeah.
I
think
I
think
we'll
be
better
off
right
and
I
know
we
just
moved
a
lot
of
the
stats
work,
which
is
why
a
lot
of
the
test
cases
we're
using
see
advisors
that
might
be
a
little
bit
more
cleaned
up
now.
F

C

K
Thank you. You helped set that up with Yuju and [unclear].

K

C
So I think Brian had one more comment about lots of different runtime configurations. Brian, I also would love that, but to give some background: Kir has been spending a lot of time on runc, and the fixes he has been doing are for issues we have seen at scale over years. So I wish every issue were fixed, but the changes that are going in are very, very appreciated.

C
I
I
think
part
of
the
problem
in
this
particular
case
is
the
number
like,
where
there's
no
way
for
us
to
test
every
permutation,
that's
going
to
be
run
against
run
c.
However,
if
we
have
a
set
of,
I
think
don
might
have
mentioned
something
like
this
having
a
set
of
node
configs
that
are
well
known,
like
here's.
What
aks
uses
here
is
what
eks
uses
here's,
what
gce
uses
here's
what
openshift
needs
like
like
run
those
through.
D

F
I don't want to, because the systemd regression came once and then for the last three years I didn't see anything, so sometimes I also don't want to overcorrect. Last time it came up we quickly added a test, and then it turned out it became a burden for us later, because there weren't many issues, but the test itself had tons of issues and slowed things down.

F
So
so.
This
is
why
I
believe,
because
ben
also
there
back
then
so
that's
why
he's
the
working?
Not
at
the
too
many
new
tasks,
new
type
of
the
things
as
the
blocker,
we
are
all
open
to
add
more
cr
tests
and
carry
more
configurations
and
make
this
better
deliver.
But
we
don't
want
to
add
this
kind
of
things
become
to
the
text
for
everyone.
That's
all.
K
We'll add some systemd test buckets that we'll have to configure, I guess, in our runtimes somehow. It's very doable, it's just going to be some work. Yes.

A

I
Sure, yeah. So in virtual kubelet we end up importing some things from the kubelet. Some things have been moved out now, which is great; like the stats types are now out in a separate repo, which is fantastic. One thing specifically that we still have an issue with, and that we're just copying at the moment, is, backing up: we don't want to have to import k8s.io/kubernetes, because it causes a whole bunch of problems, and we currently have to for things related to the downward API. Currently we just copied some stuff from the files there and use what we need. I was thinking this would be a nice thing to have as a common shared implementation.

I
That is, outside of k8s.io/kubernetes. The problem is there are important things in there like fieldpath, and there are things that are going to be used by other components that are in k8s.io/kubernetes, so it's kind of funky. But I kind of wanted to throw that out there in terms of: can we move downward API parsing somewhere else?

I
Like, I don't know if it's in k8s.io/kubelet or if there's a better place for that, off the top of my head, but I just wanted to throw that out there.

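To make the ask concrete: the piece virtual kubelet keeps copying is the small bit of logic that turns a downward API fieldRef such as metadata.name into a value from the pod. Below is a self-contained sketch of that shape; it is not the k8s.io/kubernetes fieldpath package itself, and the Pod struct is a stand-in for the real API type.

```go
package main

import (
	"fmt"
	"strings"
)

// Pod is a stand-in for the real corev1.Pod type, carrying only the
// metadata fields the downward API can reference.
type Pod struct {
	Name        string
	Namespace   string
	Labels      map[string]string
	Annotations map[string]string
}

// resolveFieldRef resolves a downward-API style fieldPath (for example
// "metadata.name" or "metadata.labels['app']") against a pod.
func resolveFieldRef(pod Pod, fieldPath string) (string, error) {
	// Handle the subscripted forms metadata.labels['key'] and metadata.annotations['key'].
	if strings.HasPrefix(fieldPath, "metadata.labels['") && strings.HasSuffix(fieldPath, "']") {
		key := strings.TrimSuffix(strings.TrimPrefix(fieldPath, "metadata.labels['"), "']")
		return pod.Labels[key], nil
	}
	if strings.HasPrefix(fieldPath, "metadata.annotations['") && strings.HasSuffix(fieldPath, "']") {
		key := strings.TrimSuffix(strings.TrimPrefix(fieldPath, "metadata.annotations['"), "']")
		return pod.Annotations[key], nil
	}

	switch fieldPath {
	case "metadata.name":
		return pod.Name, nil
	case "metadata.namespace":
		return pod.Namespace, nil
	default:
		return "", fmt.Errorf("unsupported fieldPath: %q", fieldPath)
	}
}

func main() {
	pod := Pod{Name: "web-0", Namespace: "default", Labels: map[string]string{"app": "web"}}
	for _, fp := range []string{"metadata.name", "metadata.namespace", "metadata.labels['app']"} {
		v, err := resolveFieldRef(pod, fp)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s => %s\n", fp, v)
	}
}
```
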
A

I

F
Yeah, Brian. I found the issue, and it includes the pros; I can share a little bit of the cons. I recently discovered that a lot of things moved out of the main Kubernetes repo, which is good, right, that's what we want, but it also adds a lot of cost for people: maintenance, mental overhead, and the downstream services. At least as one of the vendors, I realized that the qualification cost for customers, as a vendor, is much, much higher right now.

F

F
So basically it's very easy for us to say, oh, here's the SIG that owns whatever, and then we have the big community, groups of people, and a lot of things moving out, and during each Kubernetes release we may forget. The SIG actually did a good job; we have the same thing with NPD. But honestly, a lot of the time...

F
Last
couple
years
I
try
very
hard
to
identify
new
owner
when
the
people
move
on
with
their
carrier
and
the
job
new
owner
for
said,
whatever
thanks
david
and
and
also
the
new
owner
for
npd,
so
so
so,
and
also
new
owner
for
some
other
component.
So
so
so
it's
a
so.
This
is
the
challenge
I
just
want
to
show
here
to
the
community,
because
we
as
a
community
a
lot
of
things,
move
out.
We
still
have
to
have
to
identify
proactively,
attentive
and
a
new
owner
who
is
on.
F
It
is
really
easy
for
open
source
community.
It's
easy
to
miss
like,
for
example,
reason
I
found
have
a
kind
cluster.
I
just
don't
have
clear
ownership
in
the
community
and
used
to
be
actually
it's
not
that
that,
because
during
the
development
phase
and
now
actually
it
is
much
harder
to
to
find
okay.
Who
is
all
those
kind
of
things?
F
So
so
this
is
downward.
Api
is
quite
important
for
kubernetes,
customer
and
a
user.
Of
course.
Maybe
it's
not
won't
be
falling
apart,
but
but
I
have
this
concept
because
people
take
for
granted
and
then
don't
carefully
own.
Those
kind
of
things
end
up
could
be
like
the
renzi
issue
within
upgrade
they
have
their
own
release
pipeline
and
we
didn't
upgrade
frankly
enough
and
because
that's
the
actual
overhead
not
like
today,
bundled
together
and
in
the
end
up,
we
have
the
more
compatibility
issue,
integration
issue,
testing
issue.
F

I

F

F

A
Do we have anything else for today? I think some people already dropped earlier, so we should probably call it, but I'll see everybody next week, if not sooner. Cheers.
