From YouTube: Kubernetes SIG Node 20210817
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Okay, good morning, everyone. Today is August 17th, and this is the SIG Node meeting. Today we are continuing with the 1.23 planning. Last week we went through a lot of our KEPs — I'm not going to have a lot of planning items — but we didn't finish because we ran out of time.
C: Yeah — technically they are not related to each other, but I think it just makes sense to promote to beta after this feature is merged. We had another feature which could have benefited from an alpha stage, but it seems the author doesn't have the bandwidth, so — yeah. I will follow it through and sync with Kevin.
B: All right, so yeah — can you sync with Kevin and update it over here?
C: Yes. And during the discussion which led to the CPU manager policy, Kevin shared that he has some code which he wants to open source, and he said he's going to write a KEP to implement the part described there — it's going to be a new option — and I think he should be more active starting next week. So this is what I have, and of course I'm going to help and assist. I'm not sure what it would look like from a review perspective — I mean from an approver perspective, sorry — but we'll figure it out.
D: Looking forward to the KEP — I'll just assume this is so, yup. I don't know if this is tied to power management or something else, but if you were doing NUMA alignment with a device, it feels like you'd naturally balance. I guess this is handling the case where you don't require a device.
C: I don't have many more details, because he just mentioned it — looking forward to the KEP. I will help coordinate, though, because Kevin can rarely join this meeting, so I will be happy to bridge the gap. Okay.
E: Yeah, so we announced the deprecation in 1.22; theoretically, we can remove it in 1.23. I just need to understand how to gather feedback. I didn't hear any vocal concerns, but we may hear some, so I don't want to start this work too early in the release. Maybe.
E: Absolutely — removal includes the removal of usage as well, yeah. There is a PR from rtm already doing some of this work, but we need to just clean up everything; yeah, definitely part of the removal work. I just commented on the fact that the big problem with a deprecation is if somebody is using it actively, and they just don't have enough time to migrate to the new way to distribute configuration.
E: That's the biggest worry — that we'll get less adoption of new Kubernetes versions, because people will be stuck using dynamic kubelet config on 1.22 or something. So if you know anybody using this in real life, outside of tests, please let me know.
A: But Sergey, now I remember — when you worked on this, you did find some users in the comments. So you did comment that we are going to deprecate, but they never came back to say "oh, we're still using it," yeah.
A: Until we really seriously announce it, people aren't concerned, right? Then after we announce, if people are concerned, we adjust our deprecation timeline. So I feel we shouldn't hold off the announcement; we should just announce, and then people who were behind and didn't know may stand up and say "oh, we are using it." Then we can understand what the use cases are and make the decision — but not at this moment.
E: We already announced it, and there is a warning on startup when the kubelet is dynamically configured. I just worry that not everybody has tried it yet, so people may not realize that they are in deprecated territory right now, because we just made this announcement with the 1.22 release.
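For context, a minimal sketch of the usage pattern being deprecated here, using the 1.22-era core/v1 types (Node.Spec.ConfigSource and the kubelet's --dynamic-config-dir flag, since removed); the node and ConfigMap names are hypothetical placeholders:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node-1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Point the node at a kubelet configuration stored in a ConfigMap; a
	// kubelet started with --dynamic-config-dir would pick this up at runtime.
	// This is exactly the pattern whose users Sergey is trying to find.
	node.Spec.ConfigSource = &corev1.NodeConfigSource{
		ConfigMap: &corev1.ConfigMapNodeConfigSource{
			Namespace:        "kube-system",
			Name:             "my-kubelet-config",
			KubeletConfigKey: "kubelet",
		},
	}
	if _, err := cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```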
B: Okay, all right — so we can wait for that announcement and then assign a reviewer to the work later on. So, dockershim removal: again, we're still waiting, and the plan is for 1.24, right, Sergey?
E: Yeah. So EKS just announced very recently that they support containerd now, so customers can choose to run with containerd. In GKE we've been pushing customers to migrate for a while now, and we discovered so many issues — in edge cases and big deployments — with containerd and how everything works; one of the issues we'll discuss today, with the metrics in cAdvisor, anyway.
E: So right now — another aspect of it is that GKE and Azure support Windows with containerd. Right now, I think Azure says it's in preview; I don't think GKE is saying it's preview — I may be wrong on the details. Anyway, it's just getting started. As far as I know, there is not much adoption in either cloud, but it's already available to customers.
E: So we expect that with HostProcess in 1.22, more people will want to use it, and we'll see more adoption by the end of the year, and it will be easier for us to deprecate dockershim, since Windows customers will have at least some way to not use dockershim, yeah. I can prepare more information for next time, if...
G: Like I said, yeah — in AKS, for Linux nodes, containerd is the default and generally available; for Windows it's in, like, a public preview, so anybody can opt in. But, as the others mentioned, we're not seeing a whole lot of adoption at this point.
E: ...than you would like to see. I will say the good thing is we already have a way for customers to not use dockershim, and 1.24 is April next year, so we still have time. If we see that we would significantly harm customers, we'll need to reconsider.
B: Okay, so that sounds good. Moving on to the next one: pod overhead. I remember I collected some information from Red Hat — Red Hat announced a tech preview of sandboxed containers, which is based on Kata, and we're using pod overhead there.
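For context, pod overhead is declared on the RuntimeClass and added to every pod that runs under it. A minimal sketch using the k8s.io/api node/v1 types; the handler name and resource amounts are hypothetical placeholders, not numbers from this discussion:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "kata"},
		// Handler names the CRI runtime configuration that runs pods of
		// this class (e.g. a Kata shim).
		Handler: "kata",
		// Overhead.PodFixed is the static per-pod cost the scheduler and
		// kubelet add on top of container requests — the "static pod
		// overhead" whose accuracy the group debates below.
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("160Mi"),
			},
		},
	}
	fmt.Printf("runtime class %q adds %v overhead per pod\n", rc.Name, rc.Overhead.PodFixed)
}
```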
B: So, Sergey, did you get anything back from gVisor?
E: gVisor is currently not using pod overhead. I think the biggest issue is the dynamic nature of the overhead that we see; it's hard to get the numbers right. And I'm curious what Kata's experience is with static pod overhead — how does it work? This is the kind of feedback we are looking for. I think Eric sent another note about what he believed...
E
Whatever
hat
needs
to
implement,
saying
that
like
it
may
be,
I
don't
know
what
they
are
gonna
call,
but
it
may
be
released
as
this
g8
is
is,
but
there
may
be
some
improvements,
but
as
far
as
I
know,
right
now
the
only
use
case
is
kata,
and
I
wonder
what
the
feedback
is.
A: I also want to add that, in gVisor's case, the overhead is actually just comparable to the standard containerd shim overhead. So it's no different from when we're using a regular container with the containerd shim — it's the same overhead. That's why they are not too keen on using this one until the Kubernetes community, by default, adopts pod overhead for the regular container runtimes too. So that's why there's no dependency.
E: And I'm saying that it grows with gVisor usage, so it's really hard to predict what this overhead will be, and you just cannot set a static number. Okay.
A: It's all kind of, like — can the Kubernetes community signal some default option for pod overhead? Yeah, I'm not sure. But Kata, or other things which rely on a hypervisor, may have stronger use cases here. Yeah, yeah.
A: I feel the reason pod overhead was initially initiated by the SIG is that we had a lot of uncharged, unaccounted usage, right? Especially in the old days, when we first defined the CRI at the earlier stage, there was a lot of pod overhead — and so that's why.
A: But over time we did a lot of optimization on our container runtimes — I can see that everybody has been working on that, and there's been a huge, dramatic improvement. I believe the same thing applies to CRI-O, and the same thing for gVisor and also the hypervisor-based runtimes, all those kinds of things. So do we still feel so strongly that we have to immediately solve this pod overhead problem?
A: So far I haven't seen a lot of cases where the system ran out of memory or out of CPU because of unaccounted usage from user space — though of course there is always unaccounted resource usage, like memory backed by the kernel. But we do have node allocatable, which tries to capture some of those things. Do we still see this one in production, caused by not-very-good node configs?
A: All I'm saying is: if we do think this pod overhead is huge, we will have to account for it — beyond just the Kata use cases — we have to account for it somewhere, so that we could have better eviction and see what will protect the node from system out-of-memory or out-of-CPU situations.
D: Pod overhead went beta in 1.18, but I think I would rather move forward with graduating it in its present capability than be stuck in perma-beta status. I'm trying to find the SIG Architecture KEP on perma-beta, but we should make sure we're not, by delaying this, putting at risk the KEP that I placed in the Zoom chat.
J: I think that can be challenging, but just for the memory side it's pretty critical. So I think, as-is, there's plenty of utility, and I'd be happy to move it forward. I'm not sure.
B: Maybe Eric thinks, like: okay, we have one overhead per container, so it's not really static, and in CRI-O it's more dynamic. So anything that is more dynamic than the static overhead we defined today would be, like, a phase two, and we don't have to address it right now — we can do it separately from what we have. We just go GA with this for now.
B: Sounds good — I captured that. Thank you; thanks, folks. So the next one is Windows privileged containers. Mark?
G: Yeah, I can talk to this. So this enhancement hit alpha in 1.22. For the alpha implementation we relied on annotations over most of the CRI calls, and for beta we are going to remove those annotations and plumb all of this through containerd and hcsshim directly. So the majority of the work for taking this enhancement to beta is in other components outside of Kubernetes, but I'm going to start updating the KEP for a beta release, regardless of whether we hit it.
G: Okay, that's good to know. Yeah — there are also a number of updates in hcsshim that we're working through, mainly around how the container volumes are set up for those privileged containers too. So, okay, that is good to know.
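For context, a minimal sketch of the HostProcess pod shape this enhancement introduces, as of the 1.22 alpha API (behind the WindowsHostProcessContainers feature gate); the image and user name are hypothetical placeholders:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hostProcessPod() *corev1.Pod {
	hostProcess := true
	user := "NT AUTHORITY\\SYSTEM"
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hp-demo"},
		Spec: corev1.PodSpec{
			// HostProcess containers must share the host network namespace.
			HostNetwork: true,
			SecurityContext: &corev1.PodSecurityContext{
				WindowsOptions: &corev1.WindowsSecurityContextOptions{
					// In alpha this intent was also mirrored through
					// annotations on the CRI calls — the plumbing being
					// replaced for beta.
					HostProcess:   &hostProcess,
					RunAsUserName: &user,
				},
			},
			Containers: []corev1.Container{{
				Name:  "agent",
				Image: "example.com/windows-agent:latest",
			}},
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
		},
	}
}
```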
B: So the next one we discussed last time, and it needs a follow-up in SIG Node, so maybe we'll wait for that presentation before we make any decisions on it. So — kubelet credential providers. Is either Aditi or Andrew on the call?
L: Yeah, I can comment on this one. I think we were targeting beta in the last release, and we made quite some improvements in the caching and the concurrency of the exec plugins.
L: One of the things that we just missed was getting an actual, you know, CI job to have some good test coverage for this plug-in mechanism. I think Aditi is going to be doing some work to add the CI job for 1.23, and hopefully that will put us in a good spot to promote this to beta.
L: So this feature kind of cuts across multiple things. I think from a SIG Node perspective maybe it's not super high priority; from the cloud provider SIG, what we're trying to prioritize, it's pretty high, because it's part of the whole effort of removing cloud-provider capabilities that are built into, or compiled into, all the core components — and this is one of the lingering features that has first-class integration with certain public clouds. So from that perspective it's going to be high, from a t-shirt-size perspective.
L: Yeah, Aditi is working on it, I think, and I'm going to be helping her and trying to get this into beta. Okay, cool, right. So.
L: Yeah — that's part of the complexity of this. We can't rely on the kubelet to pull an image that has the binaries and the plugins in place, because the kubelet would need the plugin to authenticate and get the image. So we have to do a bunch of work deep in the weeds of, like, cloud-init and the way the VMs are provisioned on Google Cloud, to get the plugin installed on the nodes so that the kubelet can then actually use the plugin as part of authenticating to image registries.
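For context, the credential provider mechanism under discussion is an exec plugin: the kubelet writes a CredentialProviderRequest JSON document to the plugin's stdin and reads a CredentialProviderResponse from its stdout. A minimal sketch of such a plugin — the structs below are hand-rolled stand-ins for the credentialprovider.kubelet.k8s.io/v1alpha1 wire types, and the credential lookup is a hypothetical placeholder:

```go
package main

import (
	"encoding/json"
	"os"
)

// request mirrors the fields the kubelet sends on stdin.
type request struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Image      string `json:"image"`
}

type authConfig struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

// response mirrors what the plugin must print on stdout.
type response struct {
	APIVersion string                `json:"apiVersion"`
	Kind       string                `json:"kind"`
	Auth       map[string]authConfig `json:"auth"`
}

func main() {
	var req request
	if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
		os.Exit(1)
	}
	// In a real plugin, this is where you would call the cloud's metadata
	// or token service for short-lived registry credentials — which is why
	// the binary has to already be on the node, per the bootstrap problem
	// described above.
	resp := response{
		APIVersion: "credentialprovider.kubelet.k8s.io/v1alpha1",
		Kind:       "CredentialProviderResponse",
		Auth: map[string]authConfig{
			// Key is a registry match pattern for the requested image.
			req.Image: {Username: "oauth2accesstoken", Password: "<token>"},
		},
	}
	json.NewEncoder(os.Stdout).Encode(resp)
}
```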
B: Okay — sorry, who spoke?
B: Okay, okay — we are down to the last item on the list: exec probe timeout. Sergey, you're going to talk to that one? Yeah.
E: We see many people being broken by the enforcement, and even worse, if you just set a really high timeout it still changes the behavior — especially for containerd, where before, the probe's response value wasn't taken into account. People may have been failing the probe while it still reported success; with a really high timeout, those containers will start receiving the failure and start crashing. So it changes the behavior.
E
In
any
case
like
we,
if
you
enforce
a
flag
and
just
set
a
very
high
value,
it
will
be
a
problem
and
if
we
just
enable
it
and
keep
the
default
one
second
value,
it's
still
crushing
like
may
break
customers
and
so
usage
wise.
There
are
so
many
people
using
grpc
exec
prop
just
too
many
of
them,
so
still
breaking
customers.
E
That's
why
I
just
put
it
into
the
list
to
make
sure
that
we're
taking
this
work-
and
we
take
in
some
like
resolution
like
pushing
customers
to
use
something
else.
But
I
don't
know
like
when,
when
we
can
lock
this
feature
and
just
don't
give
an
option
to
fall
back
to
original
behavior
any
longer.
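For concreteness, a minimal sketch of the pod-spec field in question, using the 1.22-era k8s.io/api types (where the probe handler field is still named Handler; it was later renamed ProbeHandler). The command path and thresholds are hypothetical placeholders:

```go
package main

import corev1 "k8s.io/api/core/v1"

func livenessProbe() *corev1.Probe {
	return &corev1.Probe{
		// Embedded handler: an exec probe runs a command in the container.
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{
				Command: []string{"/bin/healthcheck"},
			},
		},
		// Before the ExecProbeTimeout feature gate was enforced, a slow
		// /bin/healthcheck could run past this limit and, on some runtimes,
		// still count as a success; with enforcement it fails the probe —
		// the behavior change breaking users above. Set it explicitly.
		TimeoutSeconds:   5,
		PeriodSeconds:    10,
		FailureThreshold: 3,
	}
}
```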
K: Oh, Sergey — we'd also like to work on the probe timeout values. It would be nice if they were a little more atomic, as opposed to per-second granularity. There's probably a lot of improvements we can do here. Maybe you and I can get together and talk about it, and try to navigate a way forward from, you know, the existing behavior to something more fine-grained, and, you know...
E: Okay, yeah — definitely good directions. Like, maybe if you give extra features to customers, that will — exactly — push them to migrate.
B: Okay, so maybe we can come back with the results of that discussion, yeah. That sounds good.
A: So, I think — thank you, everyone, for participating in this discussion, and let's move to the next one, because of timing. Kamala, do you want to start talking about the... just one second, sorry, let me see.
N: Yeah, okay — this was discussed before, but on the PR I got a few comments from Clayton and thought to discuss them at SIG Node, but I don't see Clayton here. Maybe I'll check with him and shift this to the next meeting; since he has most of the comments, I don't want to just have a discussion without him.
H: Sure, yes — that means me; Chintan and I wanted to bring this up. Yeah, so kind of the context here is: we've been working, on GKE, on a lot of the containerd work and migrating customers to containerd, and we've noticed some issues in cAdvisor where some metrics are missing for various things. And so, to get these metrics...
H: Basically, we actually need to talk to containerd over the CRI API from cAdvisor — and traditionally cAdvisor has never really talked to the CRI API; it's only talked to the containerd API, or the Docker API, or the CRI-O API directly.
H: So the problem comes in that we basically have kind of a dependency issue, because cAdvisor is currently vendored into Kubernetes every release, right — it's compiled in — and the CRI API is actually also in Kubernetes itself, in the staging repo. So, basically, the issue is that cAdvisor is vendored into Kubernetes while the CRI API is part of Kubernetes staging itself.
H: If we include a dependency on the CRI API — like the proto definitions — in cAdvisor, we kind of have a circular dependency, and we can't actually include it. So this presents our current challenge, which basically means we can't include or use the CRI API at all from cAdvisor.
H: So that's kind of where we are right now, yeah. And then, Chintan, do you want to add anything — kind of like the issues you've been working on and why this is helpful, maybe?
O: Also, if we integrate CRI, the filesystem metrics can be fetched in a more efficient way, rather than, you know, doubling up — because CRI has an internal cache for the filesystem metrics. So, if you're depending on that, it'd be cheap to get the metrics in cAdvisor. Yeah, yeah, that's the context.
H
Yeah
yeah
exactly
so
yeah
like
we
kind
of
have
two
issues
already
where
we
kind
of
want
to
use
the
cri
container.
The
api
from
c
advisor
so
yeah
like
the
file
system.
Metrics,
is
one
because
otherwise
we
can't
catch
the
results
and
the
other
one
is
to
get
some
of
the
metadata
about
the
run.
Containers
like
the
restart
tone
and
stuff,
so
anyways
kind
of
what
we
were
proposing.
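For concreteness, a minimal sketch of the kind of call being proposed — fetching per-container stats from containerd over the CRI API (k8s.io/cri-api, the staging module in question) — assuming a default containerd socket path:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI runtime over its unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Filesystem and resource stats for every container the runtime knows
	// about — the data cAdvisor currently reconstructs on its own, and the
	// call it cannot make today because cri-api lives in k/k staging.
	stats, err := client.ListContainerStats(ctx, &runtimeapi.ListContainerStatsRequest{})
	if err != nil {
		panic(err)
	}
	for _, s := range stats.GetStats() {
		fmt.Println(s.GetAttributes().GetId(),
			s.GetWritableLayer().GetUsedBytes().GetValue())
	}
}
```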
H: I chatted a little bit with Dims, who's kind of like a big expert on a lot of the Go module and dependency stuff, and one of the things he brought up — apparently, and I wanted to get a little more context — is an idea that's been going around in the community for a while: potentially moving the CRI API out of the staging repo and into its own GitHub repo, so outside of the k/k repository directly.
H: So that was one idea that Dims brought up as a solution to this circular dependency issue. So I kind of wanted to ask, I guess: has this been brought up before, and what are people's thoughts on this? Are there any big issues with it, and what's the cost associated with doing something like that? Yeah.
D: When the CRI can report usage stats for your containers... my understanding is — I thought we were on a path to try to eliminate cAdvisor from being in that flow at all on the kubelet side, and then the CRI implementation itself could have done it. So I'm wondering: what is expected to be the source of truth for these metrics, and what topology are you all envisioning?
H: Yeah, yeah, that's a great question. I mean, I think long-term you're completely right. Long-term, we have a KEP that I'm working on with Peter, for example, where we're trying to move those metrics out to the CRI implementation, in such a way that the CRI implementation itself will be responsible for reporting these metrics, not cAdvisor. So, long-term...
D: I haven't looked at this code, but — and maybe, Renaud, you can help jog my memory here — I'm trying to think: our deployment is basically that we run CRI-O, but we're still using cAdvisor as the source of truth for metrics within Kubernetes, and I was trying to think back to when we wrote the cAdvisor CRI-O integration — why didn't we have the same issue?
H: Yes — I took a look at the CRI-O integration in cAdvisor, and it looks like it doesn't include the changes directly; it talks to the HTTP API of CRI-O and gets some data there, so it doesn't include any of the definitions or anything like that.
Q: Hey, so this is Peter. So yeah, with CRI-O, the way that works is cAdvisor basically pings a totally separate endpoint, independent of the CRI, right now, to get all that it needs to basically then pretend it's the libcontainer handler — like the PID of the container, to get the network stats, and where the rootfs is, to get the disk stats.
H: To get there — oh, sorry, right, they're recording — yeah. So I think that's the question, and I'm guessing... because I guess this is kind of a general issue with any other dependency. Basically, the current story is that any dependency that's vendored into Kubernetes can't depend on any staging module. So if there's any other use case where cAdvisor, or some other vendored dependency, needs to depend on cri-api, it can't do that, because it's in staging. So I guess...
H: Just in general — is moving cri-api out of staging something that was ever considered? Because Dims said this was actually an eventual goal that we wanted to do, just independent of this. So I'm just trying to understand if that's... either just another...
P: So it's just that, based on what Jordan told me, we haven't upgraded the dependency yet; once we update, we will face this issue immediately.
H: Yeah, that's a great point. I mean, yeah — that's kind of the challenge with splitting it out: it would have its own release cycle. But the way I could see it is, like, you know, toward the end of the cycle people would make their changes in this CRI repo, update the CRI, and then, back in Kubernetes, just bump to the next version. So, something like that.
A: CSI also has a lot of, like, lingering dependency issues and also release issues, so maybe we can get some sharing from the CSI folks. At least, I think — and David, I understand the staging dependency issue, but it looks like CSI has even more of them — so maybe they can share how they are going to address that problem.
R: I'm here — hi, everyone, thanks for having us over here, Dawn. So, long story short: Shin and I are working on a KEP to try to enable users to send arbitrary signals — sorry, send signals to arbitrary containers running on any node. This KEP is called container notifier.
R: It has been there — I don't know, three years, maybe.
F: Okay, okay, yeah — so we do have a dependency on the container storage interface. If you look at the k/k repo, right, we do have that in the go module. So every time we have a release, we go to k/k and update that. Basically, I think that's what we do — I'm not sure what else you are looking for.
H: Around, like, versioning — how do you sync that external release to the Kubernetes release in terms of timeline? And so, if you...
F: So we do — like, sometimes we do some development on the, you know, CSI repo. So let's say we are developing some new feature, right, say for 1.22: if we need to make some changes in the CSI, we try to align that. So there is definitely coordination to get that in, so that we can use it in k/k.
A: Thanks. Actually, one of the CSI folks recently shared with me that they rather regretted putting CSI in a separate repo, but I will ask them to get more data on why — because of the regressions.
A: So the main concern is, like, how to coordinate the releases and, like, the timeline — all those kinds of things. Actually, it's a lot of process work.
F: So yeah, I think there are definitely pros and cons, right. I mean, if everything's in-tree... we actually moved — we are in the process of moving all the in-tree drivers out of the tree as well, because of the size of the tree with all the drivers in it, and to decouple the drivers from Kubernetes. Because if they're together, you have other issues: if there's any problem in your driver, then you have a bug in Kubernetes itself, right. So that's also a big problem.

F: So I'll say there are problems either way you go.
P: I'm not a Go expert, but I don't know whether it's possible to do some Go module magic to, like, keep the CRI in the repo but break the dependency — for example, defining a separate go.mod file, something like that. Yeah, I'm not sure — just some idea.
A: Oh, sorry, David — can we watch the time a little bit? I'm just doing time tracking, and I think we cannot solve this problem here; we have two more topics to discuss, yeah.
A: Let's start an email thread on this one and then come back to the SIG — if we reach some consensus and collect enough pros and cons, then we'll come back here and discuss it again. Is that okay? Yeah, it makes sense, yeah.
K: Thanks — yeah, I'll set up a call with Lantao. And the answer to Lantao: yes, we can do that with a go.mod in a subdirectory, right. So, yeah.
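For reference, this is the pattern being agreed to: any directory carrying its own go.mod is a separate Go module, which is how k8s.io/cri-api is already published out of the k/k staging tree. A minimal sketch; the consumer lines are a hypothetical illustration, not an actual cAdvisor change:

```
// staging/src/k8s.io/cri-api/go.mod — the nested go.mod makes this subtree
// its own module, versioned and consumable independently of k8s.io/kubernetes.
module k8s.io/cri-api

go 1.16

// A consumer such as cAdvisor (hypothetical, for illustration) could then
// require the published module directly, with no dependency edge back into
// the k/k main module:
//
//   module github.com/google/cadvisor
//
//   require k8s.io/cri-api v0.22.0
```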
A: So let's move to the next topic — thanks, everyone, for the participation. Then — do you want to quickly talk about the next one? And I know you sent us the doc, but I haven't looked at it yet; I'm not sure who else has taken a look, and...
D: ...the KEP yet. What I was hoping was if we could have some time to review the KEP — and I don't know if we could fit that discussion in seven minutes — but I did want, if it was okay, to see if we could also make sure that we had time to share some questions and feedback on the KEP.
S: Sure — I did the PR today, so we can discuss in the PR. If you can just assign someone to review it, then we can advance asynchronously.
D: Okay, yeah — I think, great. So, if it's okay, Dawn: Seth and I caught up this morning on the container notification KEP, and he hadn't had a chance to — oh, I see he just put it on the PR now — but I was hoping that the KEP authors were maybe able to talk through some of the questions. So, Seth, do you want to guide us through this, or do you want me to? It's up to you.
T: Yeah, I mean, I can. So, I mean, we're looking at it, and while the implementation can be done in phases, the design kind of has to consider all three phases — all three phases are going to have to be considered, because phase one makes the change to the pod spec, and once we start down that road it'll be kind of hard to undo. And so I was compiling these questions this morning — I meant to put them in the KEP earlier — but yeah, basically, there have always been scalability concerns...
T: ...that I've had with this, in that in phase one they're talking about writing a controller whose logic would eventually be incorporated into the kubelet.
T: Yeah, Zoom is messing with me — if I switch workspaces it shrinks the Zoom window down real small. So, just in terms of, like, the notification mechanism: it's kind of a one-shot thing, whereas lots of things in Kubernetes are kind of reconciling. You know, you've got spec, you've got status — but in this case the spec you're defining is more like a Job, right? It's more like a one-shot thing.
T: ...the time you created the pod, not a notification, right? And so it's very ordered, and some of that is concerning — especially if there's, like, a higher-level notification controller that is watching for pod creation, right, and it's going to send a notification to the pod immediately upon its creation. And this is something that's not specified in the spec: does the pod get the notification when the container that has the notifier becomes ready, or when the whole pod becomes ready, or what? Either way, there's something that's going to be racing to exec into that pod as soon as it either gets created or becomes ready, and that is concerning. And then we were talking with Derek about it, and I mean — there need to be rate-limiting mechanisms, several of them.
T: Where, like, you can only create so many pod notifications at a time — otherwise you can deny service to the kubelet and the container runtime. You know, if you happen to create 100 pod notifications against a hundred pods running on a single node, well, that kubelet is going to get swamped, and if it is the kubelet's job to update all of these pod notification resources, then it's going to exhaust its QPS and fall behind in updating, like, actual pod statuses and things like that.
D: Yeah, so I would love for us to have a way of saying — like, two things. So, right now, pod notifications: if we had a way to tie that to the Kubernetes node authorizer, so that we knew a kubelet was only listing and watching notifications that were bound to pods on that kubelet, then at least we would know that the kubelet would have seen that pod before acting on it. I also...
D: There are issues of, just, liveness probes, readiness probes, and execs generally eating up a large amount of resources on the node, and this just felt like another scalability concern that was unclear how to size. And so I was curious if there were any motivations or thoughts on adding some type of admission control that would say, for any given pod, there is only allowed to be one or two active pod notification resources for it — and then, at the API server, at admission time...
D: It was unclear whether a container had been restarted or not — whether, by reading the API, you could tell that a notification was received by a particular container ID or not. So I think, in one of the pieces of feedback you'll see in the note — hopefully that's what Seth and I conveyed — was: is there a way to know, in the pod notification status, which running container instance received that notification or not?
R: Thanks, Derek and Seth. Just to try to answer a couple of your questions: in terms of admission control to restrict the number of actively executing notifications for a particular pod — this is not in the first phase, because the first phase is going to rely on an external controller to take care of all this. But rate limiting is certainly a concern — it's certainly considered.
D: I don't know if that's sufficient, and I only feel that way because, like, we should document some garbage-collection strategy for it and not defer it till later — because understanding the garbage-collection strategy would inform how clients then use this.
D: As an example, I've been administering clusters where other add-on components to our environment very quickly destroyed etcd, unintentionally, and I'm just trying to think: when we add a new per-pod, per-node API, if we don't have some garbage-collection strategy up front, I'm not sure we know our usage pattern well enough. And so I would just push back, softly: could we define a garbage-collection strategy, even if that's putting a TTL on that resource?
R: Okay — I will take a look at that and find a way to come back on that, okay. Also, the second question, regarding whether a container got notified or not: this is actually included in the status. There's a condition — sorry, there's a conditions array in the status of the pod notification; it can tell you whether a specific container has been notified or not.
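To make the shape under discussion concrete: a purely hypothetical sketch — the KEP is still in review, none of these types exist in Kubernetes, and all field names are invented for illustration — of a PodNotification whose status carries the per-container conditions array just described, including the container ID that Derek asked about:

```go
package notifier

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// PodNotificationSpec names the pod, the containers to notify, and which
// predefined notifier (e.g. a quiesce hook) to run. Hypothetical shape.
type PodNotificationSpec struct {
	PodName    string   `json:"podName"`
	Containers []string `json:"containers"`
	Notifier   string   `json:"notifier"` // key into a predefined handler list
}

// ContainerNotificationCondition is one entry in the status conditions array.
type ContainerNotificationCondition struct {
	ContainerName string `json:"containerName"`
	// ContainerID would let a consumer distinguish which running instance
	// (across restarts) actually received the notification — the gap
	// raised in the feedback above.
	ContainerID string      `json:"containerID,omitempty"`
	Notified    bool        `json:"notified"`
	LastUpdate  metav1.Time `json:"lastUpdate,omitempty"`
}

// PodNotificationStatus records delivery, one condition per target container.
type PodNotificationStatus struct {
	Conditions []ContainerNotificationCondition `json:"conditions,omitempty"`
}
```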
R: That's a great one — okay, I'll take a quick look at that and come back on it, okay. Other than that — maybe...
A: I want to ask one thing. I think what Seth raised early on — the concern, which is a valid concern — is that the first phase is just an update of the pod spec. So once we update that — even if we only implement this one piece — we still have to live with it, right? So that's kind of, like, the one concern he raised; it's valid. But on the other hand...
A: I also want to say that I do think about the first phase relative to, say, some of the phases together — because, Shin, please correct me — I think the phase split is a little bit misleading. Because later, when I read the alpha/beta planning, I think the alpha actually contains phase one plus some of phase two, because you did mention the container notifier controller somewhere in your alpha. So that's — but on the other hand...
A: I still think maybe it's okay, because in your current API object you did say we want a list of the container notifiers. But then — and please explain, maybe I misunderstand — I do think that for the notifier handler we are defining a list of commands; we are not allowing arbitrary users to customize those commands. And so that made me think we are tied to one use case or two use cases — it's not arbitrary things there.
R: There are a couple of phases over here. In the first phase, more or less, the targeted use case is to be able to execute quiesce commands against a particular container, and the second phase is a limited set of signals — basically a predefined list of signals that a user can send to the container. So, to answer your question: yes, it's going to be a limited set.
A: Exactly — so that's why, at least for now — maybe for the entire lifecycle of this feature, or maybe just at least initially — I think this one dramatically narrows down the scope and has a bounded impact on the node. But I understand your concern, so...
A: Also, it was one of those risk concerns initially, but I think, at least with the current proposal, we basically could do a bounded set of things. We could just say what kind of command is safe and what kind of use case has high demand, so we could support only those things, and the rest of the stuff is...
D: What I talked through in this — it's not antagonistic toward the feature; it was really more trying to scope it from a sizing, like a resource-sizing, standpoint. And then, just like I said, we weren't clear whether you had a way, as a consumer of the API, to know that the actual existing running container had been the one that received the notification or not.
D: But I would be hard-pressed to put this feature into production in my own environments right now unless I could better understand how I can constrain its usage on a per-pod basis.
D: So, like, the one tweak — if we could explore it — was just the number of pod notifications per pod, which felt tractable in admission. And it's not something we could handle cleanly with resource quota today, because pod notifications could span many pods in a namespace that might span many nodes, and so I kind of wanted a way of capping...
D: ...usage per node, per pod. And execs in particular are super expensive, we're discovering in practice in many environments, so having some way to rate-limit that was our main feeling.
R: Shin and I really appreciate all the feedback — right, don't worry about us; we will make whatever adjustments fit best. Let me go back and discuss this a little bit more, and see whether we can change some of the APIs to, first of all, identify the container using the ID instead of the container name — that's one. And the second thing is — I think the major concern really lies in the performance, and also rate limiting.
R: How can we effectively not let a single controller break everything, or take a node down, et cetera? That's...
R: Right — we will have that documented in the KEP, and I'm glad that this came back to the discussion. The only thing is that it has been there for a little while; it typically takes a bit of a long round trip to get the feedback. So I would really appreciate it if you — Dawn and Derek, or whoever in this community — respond to the KEP updates.
D: Thanks for hearing the feedback.
A: I have to join another meeting, so I think — thanks, everyone, for attending today's meeting, and yeah, yeah.