From YouTube: Kubernetes SIG Node 20200721
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
So, do you want to start the topic? Yeah.
B
Yes, okay, cool. So I'm going to propose this node resource interface as an extensible way to manage resources for containers, and I have this quick little slide deck; we'll go through the problem that we currently have, and that's resource management: cgroups and topology are hard.
B
We have a lot of different workload requirements: batch, latency-sensitive. And then, if you're giving Kubernetes to customers, you have SLAs and SLOs, and then different priorities of workloads that they're going to be running. And we have many different things that are classified as resources: CPUs, NUMA, L3 cache, huge pages, and then going down even farther.
B
You want to have your workload scheduled close to a GPU or network card, and so on. So this creates a large matrix, and we have a couple of current solutions. I did a lot of research: the kubelet today has CPU Manager and Topology Manager, and there are a few KEPs proposing improvements to CPU Manager, like adding NUMA support and so on. But one of the things I noticed is that it's a really weird UX for enabling CPU Manager.
C
Well, it has a compatibility mode, so we have a policy which can emulate and migrate CMK workloads, but the implementation is completely different.
B
Everyone has different requirements, so let's focus on APIs and not implementations; that's the main focus of this presentation. Having things like CPU Manager and Topology Manager in the kubelet makes it hard for people to build different resource types and extend them. One thing we have in the container space is CNI, and I think CNI is a very simple, elegant, extensible interface, and it works well across all these different network backends.
B
I don't know of any controversy that we've had in the past with CNI: you have plugins, you add them, and it just works. I think it's something we need to look into; let's make a CNI for resources. So I'm proposing this thing called NRI, because CRI was taken. I wanted "container resource interface," but I'll settle for node resource interface. And I believe the kubelet is not the right abstraction to have things like CPU Manager in the core.
B
It's something we could take out into an extensible API, either in the kubelet or at the CRI level. We can have hooks, just like CNI, implemented into the lifecycle of our containers or pods, and then be able to implement plugins: vendor-specific, deployment-specific, or general plugins that come from the Kubernetes community, like ripping out CPU Manager into a separate binary. So I've modeled this heavily after CNI: you have a global system config where you can have multiple plugins, and order matters.
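The global-config-with-ordered-plugins model described here can be sketched roughly like a CNI conflist. This is a hypothetical illustration only; the schema, field names, and plugin names ("topology", "clearcfs") are assumptions, not the actual NRI spec:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PluginConf and GlobalConf are hypothetical types modeled on a CNI
// network configuration list; they are not the real NRI schema.
type PluginConf struct {
	Type string          `json:"type"`           // plugin binary to invoke
	Conf json.RawMessage `json:"conf,omitempty"` // plugin-specific settings
}

type GlobalConf struct {
	Version string       `json:"version"`
	Plugins []PluginConf `json:"plugins"` // invoked in order, like a CNI chain
}

// defaultConf returns a two-plugin chain: one plugin handles NUMA
// placement, the next clears CFS quotas, mirroring the chaining
// described in the talk.
func defaultConf() GlobalConf {
	return GlobalConf{
		Version: "0.1",
		Plugins: []PluginConf{
			{Type: "topology"},
			{Type: "clearcfs"},
		},
	}
}

func main() {
	out, err := json.MarshalIndent(defaultConf(), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Because the plugins array is ordered, "order matters" falls out of the config shape itself, just as it does for chained CNI plugins.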
B
You can chain them together, where different plugins handle different things: this one manages NUMA, this one manages CFS quotas, and so on. And then I built some skeleton code so that, just like in CNI, you can easily create these plugins; it helps people with all the boilerplate. Then we can put it at the CRI level, or decide after getting more feedback.
B
So, as I was working through this, trying to make sure my ideas weren't totally crazy, I built this "confine" plugin, and what it does is handle dynamic topology and QoS management. It supports both cases: you launch a pod, and you can use annotations to say "this is batch" or "this is latency-sensitive," and then it will place the pod accordingly. If it's latency-sensitive, it gets whole-core guarantees, and then that other plugin, clearcfs, removes the CFS quotas on it.
B
So this plugin I just worked on builds a dynamic node topology and dynamically places workloads based on their QoS class, with NUMA support as well. So I think NRI, overall, has a lot of pluses. We don't have to wait for Kubernetes release cycles to get updates to different resource types. Or, if you have very vendor-specific hardware that you need to integrate with, you can just build a small binary that handles that resource type, and if things don't work, you can fork those plugins, like in Intel's CRI Resource Manager.
B
You don't have this huge CRI interface to implement; you can implement a very small, specific binary, just like a CNI plugin for macvlan or bridging. So, next steps: I have a formal proposal of the spec up on containerd, and I have these demo plugins that I've been working on within containerd.
D
Michael, how do you think this will play with, or rather address, the device plugins? Do you think this can be extended to incorporate device plugins as well, so maybe we can provision specific hardware, like virtual functions, using this API?
B
It's kind of at the CNI level, where it's very low-level and you can hook into container lifecycles. I didn't want to overstep the bounds too far and say, "oh yeah, you can totally do away with device plugins as well and just use a generic interface." But maybe device plugins are too specific, and we could have something more generic that manages resources like CPUs, devices, GPUs, things like that. So it would work; I don't see anything in the API design that would prevent that.
A
And Michael, everything on your problem list is a real problem. In the past there were several attempts to consolidate those things. One of them was extended resources, and we tried to propose the resource class concept, where a resource class combines all those kinds of technology, CPU, memory, all those things, so we could go further.
A
I understand you mentioned all those good qualities of CNI, but recently a couple of network team folks also mentioned to me the same CNI problem. For them, the exact problem with using that API in production is that it is too extensible and too abstract, and it doesn't describe what exactly the responsibility of the network plugin is, a lot of the time.
A
Folks stick with one plugin, like they stick with Flannel, and that's kind of the only option: switching from one CNI plugin implementation to another single implementation. I totally agree with all your points about API abstraction, but that's the problem for them. This also describes a problem we saw and initially tried to avoid: when we started CRI, we also wanted it to be generic, but the reasons we didn't make it that way in the end were mixed.
A
It's just because we had to support Docker, because Docker was the only runtime qualified for production use at the time, right? People wanted to take the CRI and have it influence Docker, but there was still the problem that Docker was the only production-ready one. So we took a different approach. But during that time we also realized that the initially more abstract, more extensible way might not describe what we exactly want.
A
Even today, people want something more concrete defined for the CRI instead, so it's the same kind of thing. CSI took a different approach, because when we first started talking about CSI, basically the thing CNI didn't define was how plugins interact with Kubernetes, how to interact well with the CRI. So there is another effort from the network community; they want to redefine CNI.
A
So I want to share those things with you. All the problems you described are there, and the focus on the API is a good direction, and I'm really looking forward to continuing this discussion, because the pain is real, the problem is real, the challenge is real. We've attempted this a couple of times, going back to topology management, CPU, memory.
A
So far I have a personal preference: I like Intel's latest CRI plugin type of resource management, but I also see the potential problems. So, looking forward to continuing the discussion at the resource management working group and figuring out what the problems are and how we are going to make this evolve.
A
Just to share, as a vendor: right now I'm not wearing the open source hat, well, the Google hat. I feel comfortable telling my team to switch to containerd, or maybe even to CRI-O, and I don't need to worry about it in great detail, because any implementation may have bugs, but I feel comfortable at a high level with switching, because of the effort that went into making the CRI a consistent interface. Just sharing this.
C
So for us, the main problem we were trying to solve was dynamic reconfiguration of all workloads: migration between nodes, migration between CPUs, and automatically readjusting things. For example, if you have unused capacity on a machine, you can use it for various workloads, and then suddenly you get a workload which requires a particular device.
C
At this moment, you'd better put this device-needing workload close to the device, but the rest of the workloads need to be migrated dynamically. A CNI kind of plugin, in a sense, is hooked to the lifecycle of one particular container, whereas CRI allows us to get the whole information about the system: the whole set of sandboxes, and the whole set of containers within those sandboxes.
C
So for us, instead of trying to hack two separate projects, we just went to a simple model: okay, let's sit in between them, and if somebody wants to actually implement a plugin, we don't require them to fork the whole project. We have the idea of policies, kind of like plugins inside of it.
C
So that's the reason why we went to the CRI level, and I would like to improve it. I believe we need to move all these hardware-related things below the kubelet, because I think the kubelet should be a really generic implementation; it shouldn't be polluted.
C
These hardware dependencies, or hardware-related things, how to implement them is an open question. Specifically, right now the community has two most active projects, CRI-O and containerd; maybe we can come up with some kind of mechanism where we can collaborate on this kind of hardware information. But the whole thing shouldn't be scoped to a single container; it should be the whole system, in our opinion.
B
Yeah, I actually ran into that exact issue myself, because I started out modeling this after CNI, where you get an invocation for just a single container. But when you're thinking about a pod, or even going down the route of VM-based containers, you kind of need to know the entire resource request for that pod as a whole.
B
At the time you modify things. So it's kind of a two-step thing that we have to have: we need a good API for developing plugins, and we also need support at the CRI layer, from containerd or CRI-O, to push the whole pod spec as a payload to those plugins, instead of doing it as a one-off, once per container, and then trying to aggregate those. So yeah, it's something I'm aware of as well, and I've been working through it.
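The pod-as-a-whole payload discussed here could look something like the sketch below. The type and function names are assumptions for illustration, not containerd's or NRI's actual API:

```go
package main

import "fmt"

// ContainerRequest is one container's resource ask within a pod.
type ContainerRequest struct {
	Name      string
	MilliCPU  int64
	MemoryMiB int64
}

// PodPayload is the whole-pod view a CRI runtime would hand to a
// plugin: the sandbox plus every container's request, so the plugin
// can place the pod as a unit instead of container by container.
type PodPayload struct {
	SandboxID  string
	Containers []ContainerRequest
}

// totalMilliCPU aggregates the pod-wide CPU request, the kind of
// pod-level decision a per-container hook cannot make on its own.
func totalMilliCPU(p PodPayload) int64 {
	var sum int64
	for _, c := range p.Containers {
		sum += c.MilliCPU
	}
	return sum
}

func main() {
	pod := PodPayload{
		SandboxID: "sandbox-1",
		Containers: []ContainerRequest{
			{Name: "app", MilliCPU: 1500},
			{Name: "sidecar", MilliCPU: 500},
		},
	}
	fmt.Println("pod-wide CPU request (millicores):", totalMilliCPU(pod))
}
```

With a per-container hook, the plugin would see 1500 and 500 in separate invocations and have to reconstruct the aggregate itself; a pod-level payload makes the whole request visible up front.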
A
So actually, I think Alexander and Michael, you both align on at least one thing: we all think Kubernetes should try to provide a generic solution, and try not to have all those hardware details pollute Kubernetes. Different vendors and different providers have different requirements, right, based on their problems.
A
It's not just a pod and a container; there's a fundamental difference depending on which level we attack. If we attack it from the CRI, it is literally through the CRI to integrate with the pod and container lifecycle, but you still need a way to integrate with Kubernetes and with the pod and container lifecycle before you can manage those resources.
A
We have to get into the details; we can iterate on this. In the past we tried to say, okay, scheduling should be hierarchical: there's the cluster scheduler, and it could even be a global scheduler across multiple clusters, which is what we were talking about a long time ago. Then the node also has a scheduler, right? So Kubernetes is one scheduler, and then, going down to the node level, the kernel is another scheduler, at the process level.
A
So that's at least the picture in my mind. Then how do we coordinate those resources? Should we separate it out? That's why a while back we talked about resource management, but then the question is basically how Kubernetes interacts with this separate resource management.
A
At that time, I hoped we could use the cgroup namespace directly, because it should reflect those kinds of things at a very low level, but that didn't go very well. I guess the excuse at the time was the adoption phase: Kubernetes itself was in its adoption phase. And another thing is that cgroups were also evolving at that time; cgroup v2 was still being debated, with many things still coming.
A
So people asked the question: how do we reliably make sure we can export all the resources? Because even for the device plugin, I think whoever asked that question, devices cannot really be fully represented by cgroups, right? Then you have to do a lot of things, especially with cgroup v2 at that time. I haven't followed recent development, but even two years ago it didn't support the device cgroup yet, and so that was an unknown to us.
A
So we paused that effort; we didn't move forward with it. Instead we tried to be even more abstract; that's what I mentioned I can share with you offline, some discussions about the resource class and how to represent those kinds of workloads. And let me answer your question from earlier: you mentioned the UX issue and the usability issue. That is because abstracting at that level makes things even more complicated, since you have so much detail and so many different cases, and then you end up...
B
So yeah, that part, the UX on the user's side, and then how that gets broken down into how we express it on the node side, can vary a lot. But I also look at it as kind of a two-step scheduling problem: you schedule onto the node, and then the node schedules onto its resources. One thing that I was weighing is: what is the right place to put an interface like this? Is it the CRI, or is it in the kubelet, where...
B
...we need more of the entire view of it. Or, if it were in the CRI, one of the problems is error propagation back: what if we have to reject this workload because of something? I don't think there's a way today for the CRI to say, "no, I'm not even going to try this"; it would just be a runtime error and not a scheduling decision if it was at that level. So that creates some complexities.
A
Well, actually, there was recent discussion at the resource management group about basically that problem: at scheduling time, Kubernetes wants to reject it. We tried to move forward, even with things like vertical autoscaling for pods, and we implemented some rejection at init. But there's a balance, right? The scheduler cannot make that decision, because the scheduler doesn't know...
A
It doesn't worry about the actual resource usage on the node at all. Our scheduler is not a usage-aware scheduler; it makes all those placement decisions based on requests, which is not intelligent enough, because the user could be lying, or maybe users just don't know. There has been discussion around how we are going to adjust for that kind of situation, how to build that feedback loop, punish abusive users; all those kinds of things are missing.
A
So then we add more functions on top; we add more functionality, and the system overall becomes hard to manage. So I'm looking forward to continuing that discussion, especially on resource management as a whole, because it's not just the node: it should include the scheduler, and the control plane has a big influence on all those kinds of things.
C
One step toward solving that problem was actually done by the Virtual Kubelet folks, where an actual implementation reports to the control plane, saying "I have this amount of resources," and when the Virtual Kubelet actually executes the workload, it takes the resource request, which contains the amount of resources the workload wants to use, and just passes it down to the actual plugin, which does the container execution.
A
Yes, okay. So Michael, please attend our resource management working group meeting; I will let you know when. It's currently cancelled, but we are going to come back and discuss this more. And also, welcome to SIG Node, and we can continue this. Thank you; we have something good here. So could you put your slides in the SIG Node agenda so people can reference them?
A
Thank you, thank you. Nice to see you here, and let's move to the next topic. I think the next topic is about the pod delete collection, I believe the pod DeleteCollection test. Can you...
E
I can speak to that topic, if I can put it in. It's from our team; we're busy dealing with conformance testing, and we've been running this in testgrid for quite a while.
E
If you open the issue or the pull request, you will see the testgrid mentioned at the top, and if you follow the testgrid, there are actually some flakes every now and then. Stephen, who actually wrote the test, evaluated it, and if you look at the last comment at the bottom, you'll see that it builds quite well and we have very good performance, but there are some timeouts.
E
It takes only 12 to 16 seconds to perform what it should be doing for the delete of the pods, but at one stage it actually times out before the minute is over, and we're not sure what the fix is. We don't think it's the test; we think there might be something else. Can you help us figure out what's going on?
A
I guess we just saw this one today, so we have to look at it in detail. We definitely want to look into this one.
E
What I'll do is drop you a message in Slack and then we can discuss; I'll copy Stephen in as well, he's the guy who actually wrote the test, and we'll appreciate it, because we're running short of time for 1.19. It's got to be in within the next few days, otherwise we're going to miss the cutoff. I'd appreciate it.
A
So thank you for this one, and maybe you can carry on with it. Jorge has a Slack channel for the SIG Node e2e tests; they already have that channel. Can we use that one?
A
Yeah, so you can add the people there and then carry on the discussion. One thing I really want to make sure of: Kubernetes is an open source project, and for an open source project to be sustainable...
A
...we really try to make it inclusive and transparent. Otherwise, that's a big lesson learned too, right? Company folks may join or may leave, and a company may put out effort, or maybe an individual puts out effort, but to make an open source project sustainable, we need to try our best every time to make it transparent. Michael Crosby has been leading the effort on containerd in the Docker community; I think he's also really been an advocate for the open source community in the past.
A
He can share more, but at least I personally feel that for an open source project to succeed and be sustainable, transparency and inclusiveness are pretty important. I personally learned a lesson here too.
F
Yeah, absolutely. To that end, besides helping out with this, I guess what you just said sounds more or less like we are on the same page. The thing I was planning on is to actually take this, bring in as many people as possible, helping everyone on Slack, keep the conversation going, and make sure that everything is organized in GitHub as well.
G
Thank you, thanks for your time. Hey Jorge, just one more question.
G
I just wanted to ask about the components test, which I didn't quite understand. There are some people interested in doing that. It would be great if you could document the steps and have some instructions for the community to follow later on, if we see similar requests in the future.
A
Okay, let's move to the next topic. Are we still missing something? Is there anything to move forward about your feature, the 1.19 feature? I think we discussed this and we agree.
A
Okay, so let's move to the next topic. I think we already addressed that one; this morning I even approved his PR. So let's move to the next one: Jorge, do you want to continue about your CI subproject?
F
I guess it was something like: let's get some people, let's get some initial documentation, let's try to fix a lot of the tests. And a lot of really amazing work came out of it. We have better documentation for the images that we use, better documentation for how the node end-to-end tests actually work. A lot of dashboards went from completely red to green, or only flaky every now and then, which is a huge improvement.
F
And since then, I've been hearing from a lot of people that they are interested in learning more and becoming more active members of SIG Node, and I wanted to propose actually making this what I call the CI subproject. The name in this case is just a name: the CI subproject, SIG Node CI, node end-to-end tests, whatever the best name is for it.
H
Yeah, I just wanted to add a couple of things. Jorge, what you're saying is absolutely true. This all started up with a fire drill: some of the SIG Node end-to-end tests were release-blocking, and a few of us jumped on to debug, which we had to do again last week for a different issue, and it has been great. A lot of folks volunteered.
H
A lot of things have been fixed, but there's still work to be done, as you say. I'm not quite sure how subprojects work in Kubernetes. You know, this meeting, and also the SIG Node resource management meeting, sort of came to a pause.
H
We could sort of start that back up, but from the testing perspective it would be absolutely good to continue this; there's been a great start. Is a subproject the right thing to do? I'm not sure; maybe Dawn has more input on that, or somebody who's been in Kubernetes and knows subprojects. I don't know quite how that works.
H
We were doing this informally under SIG Node, because it was really just touching SIG Node stuff, and I think that was the plan. So continuing it is good; the right way to continue it, how to move forward, I'm not sure. I would support something like this, if a Kubernetes subproject under the SIG is the way to do it, which is the part I don't know.
H
I know there are a lot of folks interested; there's still a handful of people that actively jump on issues when they happen, just like we did last week, and I think some of the other folks that want to contribute could certainly jump in and help and learn from that too. So I think it's good. That's sort of my thinking on it at this point.
I
I have a quick note. I think it was a great effort, Victor; thank you so much for kicking that off. By the way, when we make this more official, I would like there to be some described goals and explicit problems we want to solve, just so we're not flailing as much, and so that when we have volunteers, we're all committed to the same goals.
I
To that end, I've been thinking about writing down high-level problems we want to solve with this. I haven't gotten around to finishing that up yet, but Neng and I were going to take a stab at it sometime next week. So, if it's okay, we can send that around and crowdsource what other community members want to get out of a subgroup like that, just so we're all on the same page and working towards the same goal. Is that reasonable?
H
Yeah, I think that sounds great. This was something that we had just started off, and making it a little more formal and soliciting more input would, I think, be great.
F
Okay, sorry. The other thing that I wanted to add, just to make the group a little bit more present: what would you all think about, well, what Karan mentioned, sorry if I mispronounce your name, I guess it sounds like a charter for the subproject.
F
So while we work on that, besides that, we could also have some weekly or bi-weekly meetings just to check in on things. There is also a project board on the Kubernetes organization that Victor created, which we can standardize around, and just make sure that issues and PRs get added to it and periodically groomed, making sure that things are actually moving along.
F
And I guess this is the main motivation for proposing a subproject: to really standardize on something that we can all agree on as the process around the code, so that we can all collaborate, make sure that we have complete visibility into what is going on, and keep pushing for things when we don't see any changes over long intervals of time, at least those kinds of things.
H
Yeah, I would certainly second that project board. When we were going through that, my mailbox was flooded with review requests, and I was like, which PR do I need to look at, which one is where? So I created that project board and I was like, aha, this is it. Now we had all the stuff up there, and you could see what was to be done, waiting for reviews, waiting for approval, and that was one of the greatest tools.
H
So I would strongly encourage that. Even though we didn't spend a ton of time on it, because that was sort of toward the end before I took a little time off, I really enjoyed that project board, and I think it'd be a great tool to help manage and keep up with what needs to be done and where it's at.
A
That's the challenge for us. I've been in this SIG since day zero, and I've seen people come and go. For any subproject to succeed, we need a good start. In the past, when we succeeded with a subproject, it had a clear definition of the responsibilities and the deliverables, and came up with an executable plan, and so far I haven't seen that here. Obviously we can make these tests the high-level goal.
A
That is: improve SIG Node test coverage, make the SIG Node tests reliable, and de-flake the SIG Node tests. Obviously those are good goals, and in the past, in SIG Node, we tried to align all the SIG Node tests and put engineers clearly on this work, right? SIG Node started the first version of the conformance tests, and then we promoted a lot of our SIG Node tests into the conformance tests.
A
They became the Kubernetes conformance tests, but then later even conformance was getting weaker, and the community put dedicated people in to own that responsibility. The SIG Node tests were falling apart, and they fell apart just because some vendors had internal reorgs and new people didn't pick the work up. So I want to avoid that happening again.
A
People have to own something or pick up some work, but on the other hand, we have to build ownership. The problem is how we build that ownership and make it a little bit sustainable, so that even as people come and go, the ownership is still here, and there is still a group of people who can take over. How can we do that? I think, if we can do that, that's a really important charter for this subproject.
A
Otherwise it's just temporary: fix some bugs, find some time to fix some tests. We've been doing that many times. We have done SIG Node fix-its, we call them fix-its, and engineers really looked forward to them; in the past, everybody would even spend a whole week just doing test fixes for the community. But still, the tests can go broken again because of new stuff. To enforce those kinds of things, we need to come up with...
A
...a plan, a more executable and sustainable plan, and how to measure deliverables is also important in that plan. I'm just sharing the past experience I have, because we've been doing this a couple of times over the past six years. I think many people have stayed here a long time and been through this in the past with the common goal, so how are we going to do it?
I
Yeah, I think that's kind of where I was coming from: defining some goals. I think tests will always be flaky, but if we have tools to debug them, and if we have ways of starting work, managing work, and getting alerted, that should give us quite a bit of headroom for fixing the underlying problems. And Ling just posted that in chat: we can do a small brainstorm session sometime in the next couple of weeks and agree on the baseline problems we want to solve, and then, as a follow-up, we can come up with ideas for solving them and shard the work.
F
I also want to add on to the comment that Dawn just made about people coming and going. I guess one of the efforts is to make sure that people can come and go and are completely free to do so; if they want to take time away, please do so, but we have to reduce the loss of knowledge from that.
F
Another thing that I was thinking is that we might also need some people to volunteer to be, I guess, subproject owners or something of the sort, to actually manage it, manage the team, and make sure that we have a ladder for people that are interested, so they gain the knowledge that they need to actually do things, but also have someone looking over who's covering what, and make sure that we can...
F
...shuffle things around. In that case, I'm just mirroring what I've seen in other SIG subprojects, and I guess that will also be kind of up to the SIG leads, in this case Dawn and Derek, and to anyone who wants to volunteer and thinks that might be a good idea.
A
Yeah, always a good idea. Michael, do you want to comment on something? I saw your name here.
J
Oh yeah, I just think this is an exciting area. We've got a lot of test buckets sitting in SIG Node testing and in conformance, and we probably need to align them better. They need to be more stable; we need to get rid of the flakes.
J
It's really node conformance that we're trying to work on, right? All the runtimes that are CRI-enabled are using the CRI tests, and we try to get our Prow jobs in on making sure that it works there as well, against SIG Node. But yeah, I think we've got a lot of overlap in the test cases.
J
We probably need to refactor a lot: come up with a different way to write the test cases, share them, and make sure we're not testing the same thing five different ways and wasting time and effort. Right now we've got a lot of CRI and node APIs that aren't really even being tested, and rules that aren't enforced. So I think there are a lot of holes.
J
I think we need to work together as a working group and focus on cleaning it up, so that we can hand it off to people in the future. When they want to add a new test case: how do they do it? Where do they do it? How does that test propagate up through conformance, as opposed to just cutting and pasting?
J
You
get
something,
that's
flaky
and
you
don't
have
a
way
to
fix
it
right,
because
you
have
to
fix
it
in
too
many
places
or
you
don't
know
why
it's
flaky
right.
We
need
ownership.
A
Yeah, I'm looking forward to the proposal next week. Then we can discuss it in the community meeting and here, and talk about how to move forward, in whatever format: it could be a subproject, a working group, or anything else. We can discuss once we see the executable plan and the proposal in written form, shared with the community for review. I want to leave two minutes for the next topic, the e2e test.
K
Yeah, I just wanted to bring up that, as part of other work, we noticed that static pods' termination grace period was not being honored: if you moved the manifest out of the static pod directory, the pods were killed immediately. So we made some changes and, along with that, introduced an e2e test, since we had never validated that terminationGracePeriodSeconds was honored.
K
This is the first e2e test to do that, and it exposed a race in the kubelet. Especially in the e2e environment, where there's lots of pod creation and deletion going on on every node, we can get stale data from the runtime cache, and the delete-pods path thinks there's a pod with no containers running when actually the container has started and the pod has reported ready to the API server.
K
I've been working with Jordan, who uncovered that we have about a five percent flake rate when we fall into that window. We've been working on an approach, David has chimed in on the PR now, and I think we're close to a solution. I posted the breadcrumbs in the agenda for anyone who wants to follow along, but we want to get that fixed soon, because it's causing lots of problems in node e2e.
A
Thank you, and thanks for the update. Also, we've run out of time. I really want to hear your opinion about the node e2e project, but we don't have time. Anyway, we haven't seen the proposal yet; once we have it, I want to hear your opinion on those things, because you are one of the tech leads in this group.
L
Yes, we have a short status update. We prepared a KEP for pod resources, and David already took a look at it; it looks good to him, but we want other people to review the KEP as well. We also prepared an implementation for it, and right now we are waiting on feedback from the node-feature-discovery community to continue.
M
Just some background on this: we've been exploring the container runtime interface as a way of gathering the resources that are currently allocated, and there are shortcomings across runtimes. Containerd allows us to expose the allocated resources, whereas, as I found while experimenting with CRI-O, it does not. I'm also familiar with the vertical pod autoscaling work, which is in progress.
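For context on what "exposing allocated resources" means here: the kubelet's PodResources gRPC endpoint reports per-container CPU and device assignments, which is the kind of data node agents such as node-feature-discovery want to consume. The structs and helper below are a simplified, hypothetical rendering of that response shape, not the real API types:

```go
package main

import "fmt"

// ContainerResources loosely mirrors a per-container entry in a
// PodResources-style List response (simplified for illustration).
type ContainerResources struct {
	Name    string
	CPUIDs  []int64  // exclusive CPUs assigned by the CPU manager
	Devices []string // device IDs handed out by device plugins
}

// PodResources groups the per-container allocations for one pod.
type PodResources struct {
	Namespace, Name string
	Containers      []ContainerResources
}

// allocatedCPUs flattens the exclusive CPUs a pod holds, the kind of
// information a monitoring agent would derive from the List endpoint.
func allocatedCPUs(p PodResources) []int64 {
	var cpus []int64
	for _, c := range p.Containers {
		cpus = append(cpus, c.CPUIDs...)
	}
	return cpus
}

func main() {
	pod := PodResources{
		Namespace: "default", Name: "latency-sensitive",
		Containers: []ContainerResources{
			{Name: "app", CPUIDs: []int64{2, 3}, Devices: []string{"gpu-0"}},
		},
	}
	fmt.Println("exclusive CPUs:", allocatedCPUs(pod))
}
```

The gap being discussed is that an agent can only fill such a structure if the runtime surfaces the allocation data in the first place.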
A
I just want to say that we've run past the scheduled time. People should feel free to stay here to continue discussing; this part is still being recorded, and I will be here to finish the conversation, but anyone who needs to can feel free to drop.
A
Definitely look at Mako's proposal as well; Mako also put forward another proposal today, right? So I think we need to spend time on that too, and of course, after Mako goes over his thoughts and puts them together, we can discuss more. We agreed at last week's SIG Node meeting that we are going to continue the resource management working group at least for some time; that's basically what Derek and I agreed upon, but I believe Derek is on vacation.
A
I think Derek is on vacation this week, so we will come back next week and maybe schedule another five weeks of continued discussion. Is that okay? I will discuss this with Derek. The problem is that the previously proposed time is six o'clock, which is just impossible for the East Coast.
A
A lot of people cannot attend, so that's another problem. I will discuss it with Derek and the other people, and we can come up with some other solution. Six o'clock should be okay if most people are on the East Coast or in Europe, but I just want to make sure: we can check how many people that time works for and find the best slot, so we can continue these discussions. I will try to figure it out. Maybe we can resume the discussion next week. Is that okay?