From YouTube: Kubernetes SIG Node 20220510
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
A
Do you want to quickly give an update here? Many people may have missed the testing meeting last Wednesday. Could you give a one- or two-sentence summary, so that people who are interested in helping with the reliability effort will pay extra attention to your report?
D
I was there, so: we had a kickoff last week, a broad discussion on what we could do as a community to improve the reliability of a Kubernetes node. As for the recording, I was actually in the process of uploading a backlog of recordings to the SIG Node playlist on YouTube today, so for those who want to view it, it'll be there shortly this afternoon. For those who don't have the time, I would say we spent much of the time trying to reach consensus on what we actually thought reliability meant, and I think there's still maybe some clarity we can draw on that going forward, particularly on whether issues may exist in the kubelet, in your runtime, or in your Linux or Windows operating environment, wherever you're running Kubernetes. But if folks are interested in helping to raise the reliability bar of a Kubernetes node generally, I'd encourage you to attend or participate in these discussions going forward, and hopefully we can find a few key, meaningful things to tackle.

I think one of the first things to do is just to increase the scope of test coverage in areas where coverage at the unit or end-to-end level isn't as strong as desired, and then from there figure out the next steps. I think Dawn, you were there as well, and I know some other community members were there, but hopefully that's an accurate summary.
A
Yeah, really great summary. The main thing is just to call out the activity and ask people to please help with that effort, and we can follow up more on what kinds of things are needed. Daniel actually asked whether this is the SIG's priority. I think both Derek and I are clear: honestly, we want to endorse this one, and this is one of our top priorities for this quarter and also for H2 this year. So please help.
E
Yes, I just wanted to add something regarding the reliability effort. Maybe that's something you can discuss at KubeCon next week, because I think Antonio responded to the open letter to Kubernetes reviewers and approvers from Daniel Smith, and they wanted to organize a one-hour meeting at KubeCon during the contributor summit. So maybe we can link those two.
D
Yeah, and I'm speaking for myself here: we did already record a SIG Node discussion for KubeCon. I'm attending virtually myself, I know, not in person, but both Dawn and I did endorse Daniel's appeal, and I view the meeting last week as the first step in trying to put that into action. So yeah, thank you.
A
I also want to add that Derek, Sergey, and I also recorded the half-year SIG Node community report. In it we talk about reviewer status, maintainer status, and approver status, but we also note that a lot of the heavy lifting on reliability still needs work there. Maybe it's not a negative, because we hadn't started this effort yet, so it's not a spotlight issue, but you can see that the SIG is aligned on it altogether.
A
Yeah, Derek will talk in particular about the SIG Node update and other things. If people review our doc, which the other approvers and the leads we invited contributed proposals to, you can see that the reliability work and the heavy-lifting work play a big role there. So I just want to share that with everyone.
E
It's that time of the quarter where I come and speak about this KEP again and again. Last time we missed the PRR review because of a misunderstanding between Elena and myself, so of course I've put the KEP up again for this release, but this time Aditi is not available to work on it during this release, because she's on maternity leave. I can handle it myself, if that's okay with you. And Rodrigo told me that you and Derek wanted to speak about this, so that's why I came this week.
A
In the last two weeks we basically did the 1.25 planning, so we didn't know the status of this one: who is going to work on it now, and whether it will continue to be driven to completion in this quarter, this release. That's where we called that one out. So if you will reserve the bandwidth: what's the target, like the milestone we want to achieve, and then we can identify the reviewer and approver.
A
Yeah, I will tag you. Maybe you can provide more information in our planning doc there, so you can find our planning doc.
C
I guess one more issue here: Dawn, I'm not sure if we reached agreement on the design in the KEP itself.
A
Yes, yes, but we called this out last time. The first thing we need to figure out is who is going to come here to talk about the KEP, right, and then we can discuss. So it looks like maybe you want to drive this one: come here, continue the discussion, and then we can talk about it. Then the next thing is the planning, right: what's the milestone we want to achieve?
A
We want to make alpha, beta, those kinds of things; that's the big milestone. But is there a smaller milestone we want to move toward, for example, the community reaching a certain consensus? That could also be the milestone. That's why I hope we can discuss it there.
E
Okay, so I will update the document and, if you want, I can come back in a few weeks, because I think KubeCon is going on, so maybe the week after, or the week after that. Okay.
A
So the next one, I'm not sure about the next one; I think the people for it cannot attend the meeting.
C
Yeah, I think I can make a pass at it. I know some downsides of the current CRI format, so maybe they're trying to address that or something, you see.
G
Yeah, so I'm Ryan, I'm the team lead for the node team at Red Hat. We have an initiative right now for getting more pods and more pod density on the nodes. Dawn, could you share the document on your screen?
G
I can talk to it. Right now we're trying to improve the PLEG performance in the kubelet. There's a loop in the kubelet that polls every 10 seconds, and let me see if I... I'd have to mess with permissions in Zoom; it's okay, I'll just talk to it. So we're trying to increase the pod density on the nodes, and one facet of that is making it so that the PLEG doesn't re-list pods every 10 seconds and instead gets evented notifications from the runtime. There's one issue upstream currently (oh, thanks) that is talking about this, and so we're sort of formalizing it and working on it.
G
There are a couple of gotchas with it currently, but we feel this is going to be a great thing for the kubelet, so that we don't have to poll the containers frequently. I just wanted to evangelize this with everyone. Harshal on my team is working on this; Mrunal and I are also working on it. So if anyone in the community wants to talk to us on Slack or email, just send us a note, and it's something we can collaborate on.
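For readers, a minimal Go sketch of the contrast being described between the kubelet's current relisting PLEG and an evented PLEG; the types and function names here are illustrative stand-ins, not actual kubelet or CRI code:

```go
package main

import (
	"fmt"
	"time"
)

// ContainerEvent is an illustrative stand-in for a CRI lifecycle event
// (created, started, stopped, deleted) pushed by the runtime.
type ContainerEvent struct {
	PodID string
	State string
}

// relistLoop mimics today's PLEG: wake on a fixed interval and re-list
// every pod from the runtime, whether or not anything changed, so the
// cost grows with pod density.
func relistLoop(interval time.Duration, listPods func() []string, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			for _, p := range listPods() {
				fmt.Println("relist saw pod:", p)
			}
		case <-stop:
			return
		}
	}
}

// eventedLoop mimics the proposed evented PLEG: block until the runtime
// pushes an event, so the cost tracks actual state changes rather than
// pod count. A long relist interval remains as a safety-net reconcile.
func eventedLoop(events <-chan ContainerEvent, stop <-chan struct{}) {
	for {
		select {
		case ev := <-events:
			fmt.Printf("event: pod %s is now %s\n", ev.PodID, ev.State)
		case <-stop:
			return
		}
	}
}

func main() {
	events := make(chan ContainerEvent, 1)
	stop := make(chan struct{})
	go eventedLoop(events, stop)
	events <- ContainerEvent{PodID: "nginx-1", State: "STARTED"}
	time.Sleep(50 * time.Millisecond) // let the event print
	close(stop)
}
```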
C
Yeah, I think, Mike, I'm calling on you from the containerd perspective: if you can take a look at the CRI changes that we are proposing and see if those make sense to you.
D
I don't know if we have any data we can share right now, but was there any? There were two things related to this: one was improved density, but then also reducing kubelet overhead broadly.
G
I guess I'll answer that: we don't have metrics behind it yet, but Harshal is getting some really good results; not much to share yet on it.
I
Hey, this is Vinay, a question about this. One of the things I came across when I was doing the in-place pod vertical scaling part is that we have to get the resource updates, and we polled for that. I'm wondering if that can be addressed as well. Right now I think I ended up adding a new interface to the PLEG to relist, which is heavy; I think it relists everything, all the containers, I believe. So if the CRI can, you know, tell us that the resource update has happened and give us the new update... I don't know if that's something that's planned as part of this.
C
Yeah, I think that could make sense; maybe you can comment on it.
B
So I have another question: after this change, if a pod or a container inside the pod is created, you know, much faster... if we don't depend on the polling mechanism, the container creation completion state will be reported more accurately, or at an earlier time point, right, to match the real time point.
D
Yeah, so correct me if I'm wrong, I know it's been a little while since we discussed this, but this wouldn't necessarily entirely eliminate the need for the kubelet to occasionally do a full reconcile loop against the runtime. It would let us extend the PLEG relist interval much more widely.
G
So that was basically it. We're going to start formalizing the enhancement coming up and taking feedback from the community. You can reach us on Slack and email if you have any questions and comments. I did take a note on the comment earlier about the resources.
H
Yeah, hi, it's Paul, you here too? I'm here. All right, Paul's in our OSS group; he's a maintainer for Knative, and we brought him in to also work on some of these Kubernetes features.

Specifically, he was excited about this one, so he's volunteered. The idea here is that these probes were way too coarse, you know, whole seconds; you couldn't be granular. In a faster scenario we wanted to be able to get down to maybe 100 milliseconds, maybe 200, or at least some value you would all agree isn't thrashing the system any further, especially in scenarios and host environments where you're really just running Knative kinds of services and you're expecting these things to happen much faster. So we're hoping to get this improvement in. It's gone through a couple of reviews, and we asked Tim to take a look at it, and I think it's time to move forward if we could; we've probably had five different versions of it. So if you could, you know, take a look at the last one.

Basically, where we left off is that we're going to have this new milliseconds field that will be used as an offset, if you will: we sum the seconds you selected with these milliseconds, and you can make the actual time smaller if you use negative milliseconds in the addition, so it'll come down, or you can make it a little bigger.
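For concreteness, a minimal Go sketch of the offset arithmetic being described; the function and parameter names are illustrative, not the actual field names proposed:

```go
package main

import (
	"fmt"
	"time"
)

// effectivePeriod sums the existing whole-seconds probe period with a
// hypothetical signed milliseconds offset: a negative offset shrinks the
// period below one-second granularity, a positive one stretches it.
func effectivePeriod(periodSeconds, offsetMilliseconds int32) time.Duration {
	return time.Duration(periodSeconds)*time.Second +
		time.Duration(offsetMilliseconds)*time.Millisecond
}

func main() {
	fmt.Println(effectivePeriod(1, -750)) // 250ms: sub-second probing
	fmt.Println(effectivePeriod(1, 250))  // 1.25s: slightly bigger
}
```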
H
We think that was very important too, because we're finding way too many situations where everybody's waiting for everything to happen at the first second or the second second. But when you want to check your probe, it's probably about a quarter of a second after that first or second second, not somewhere in between. You don't want to wait an extra second, and being right at that one-second mark will probably make it go through another iteration, so you'll have to do two probe executions, which is more expensive than you probably needed in a good scenario. So yeah, we think this is a good solution. We'd like you to take a look at this HackMD, and if everybody agrees that this is a good direction, take note: we'd like to go ahead and maybe take it out of work-in-progress and try to see where this is going to fall, if not 1.25 then, you know, 1.26.
D
Yeah, so Mike, I think this is a good topic area for a broader discussion on what we want to offer as a quality of service generally around probes, because it's actually a way people will measure us, and my fear on this is absent providing some quality-of-service guarantee around probes. My own experience at Red Hat generally is that we have use cases similar to yours, where you're expressing intent, which is: I need faster responses on probes so my functions can appear to be running faster and be happier.

What I was hoping we could maybe discuss, with Mrunal and Ryan from Red Hat volunteering to drive this forward a little bit, is that we've been doing a lot of similar work.
D
What I was going to suggest was: under a broader reliability umbrella, could we reach some agreement on whether we have an established norm for how we measure probe performance generally, get that as a baseline, and maybe understand where the probe overheads are occurring, whether in the Go runtime, in the runc launch, or in the communication from the kubelet to the CRI, and share with the broader community how we've tried to bring probe time down significantly by doing something independent of the kubelet generally. So I don't know if we could queue that discussion up with Mrunal for an afternoon.
D
I'm just putting that experience out right now to say: hey, does this make sense to everyone? There's an overhead to even running a probe, I think something like 15 megs of overhead per probe right now. So I just want to look at probes holistically a little bit and make an appeal: this is what we've measured at Red Hat; I don't know if IBM or Google or others have done similar measurements, but let's get that shared and then tackle it.

That's a long way of saying we'd love to support this discussion going forward. I just want to make sure we all have an understanding of what we can actually achieve and what knobs we'd have to turn to even achieve it.
C
Yeah, I think at Red Hat we kicked off deeper work with the perf team to measure and compare, and my hope is that we as a community can eventually come up with some guidelines. Right now we don't have guidelines on probes, on how frequently you can run them on a typical setup, and customers just abuse them, right, and they expect everything to work. They keep on adding more probes. So maybe we get some baseline data and understanding with the community: okay, what is the direction, do we want to fix it on the runtime or the kubelet side, how do we measure it, and what is a good number of probes to run, and then we can support that, frankly.
D
Is it charged to the kubelet, or is it charged to the pod? And if it's charged to the pod, probes are then subject to CFS constraints on that node as well. We've taken various strategies at this in our own experience, trying to meet particular needs, and I'm sure everyone in the community has done similar, but that's a key thing: if you're a best-effort pod asking for sub-second probes, those things are kind of in conflict.
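To make the conflict concrete: if probe execution is charged to the pod's cgroup, each probe run eats into the pod's CFS quota for that period. A rough back-of-the-envelope sketch in Go, where the per-probe CPU cost is an assumed figure, not a measured one:

```go
package main

import "fmt"

func main() {
	const cfsPeriodUS = 100_000 // default CFS period: 100ms, in microseconds

	cpuLimitMilli := int64(100) // pod CPU limit of 100m
	// cfs_quota_us = limit (in cores) * period
	quotaUS := cpuLimitMilli * cfsPeriodUS / 1000 // 10,000us of CPU per 100ms

	probeCostUS := int64(5_000) // assumed CPU cost of one probe execution
	probesPerPeriod := int64(2) // e.g. a 50ms probe interval

	leftUS := quotaUS - probeCostUS*probesPerPeriod
	fmt.Printf("quota per period: %dus, left for the workload: %dus\n", quotaUS, leftUS)
	// With these assumed numbers the probes alone consume the entire quota,
	// which is the conflict described above for sub-second probe periods.
}
```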
A
I want to echo what Derek said. I think we discussed this in the past, even when we discussed the CRI API. We kind of agreed, I think we didn't write any document, but we did agree that it's charged to the pod. We even agreed, because at that time we already had the sidecar container discussion, that we want some way to charge it to the container itself, because it's the user who specifies those probes, right?

But we never really agreed on how to charge it. This is why part of the pod overhead initiative was not just for the base pod and the Kata container, those kinds of things; it was also for the kubelet serving logs, those streams, and also probes. But we never really had a good way to describe how you would define it here: this is the reserved resource for the pod for probes, and here's the reserve for logs, and at what kind of level, and what its relationship to the quality of service is. I think we talked about those things in the container runtime interface discussions, but we never settled it, because at that time nobody really had advanced usage like this; people were still trying to understand containers, so we didn't continue that discussion. We could reopen this one.
D
Just as a preview, my own experience is that where the charge is best placed is use-case specific, and we've taken approaches at Red Hat to be flexible, letting a deployer figure out where that charge should go. Similar to things like image pulls, we're not perfectly consistent on who that is charged to. But for probes in particular, if you're trying to raise the bar, we need to rationalize it against the total resources available to that pod generally. So anyway, I view this as a key reliability discussion; probes are one way we get measured on reliability. So Mike, if you and your team want to help us all reach consensus on what we can do here, we're happy to help out.
H
Definitely agree. Let's, I guess, create a working doc or something, in fact, if you don't already have one, or we can do it in the PLEG one, or some combination. Sounds great. Yeah, we're seeing it: we're all trying to make things fast, and we have to make them reliable as well; I agree completely. So yeah, maybe the initial bottom number would be 500 milliseconds instead of 100, if we're too concerned, as we go forward, but yeah, we'll talk.
D
Well, that works on the transparency side. The other way we could tackle this too, because a lot of these topics are operationally specific, is that maybe a LimitRange, for example, should be able to restrict it, or maybe we should have practices for how a gatekeeper could prevent probe abuse in particular scenarios.
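As an illustration of that idea, a sketch of the kind of admission/gatekeeper check being suggested; Kubernetes has no such policy mechanism for probes today, and the five-second floor is an arbitrary example:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// minProbePeriodSeconds is a hypothetical cluster policy floor; nothing in
// Kubernetes enforces this today, per the discussion above.
const minProbePeriodSeconds = 5

// validateProbes rejects pods whose probes poll faster than the cluster
// allows, the kind of check a gatekeeper/admission webhook could apply.
func validateProbes(pod *corev1.Pod) error {
	for _, c := range pod.Spec.Containers {
		for name, p := range map[string]*corev1.Probe{
			"liveness":  c.LivenessProbe,
			"readiness": c.ReadinessProbe,
			"startup":   c.StartupProbe,
		} {
			if p != nil && p.PeriodSeconds > 0 && p.PeriodSeconds < minProbePeriodSeconds {
				return fmt.Errorf("container %q: %s probe period %ds is below the %ds floor",
					c.Name, name, p.PeriodSeconds, minProbePeriodSeconds)
			}
		}
	}
	return nil
}

func main() {
	pod := &corev1.Pod{}
	pod.Spec.Containers = []corev1.Container{{
		Name:          "app",
		LivenessProbe: &corev1.Probe{PeriodSeconds: 1},
	}}
	fmt.Println(validateProbes(pod)) // rejected: 1s < 5s floor
}
```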
J
Yes, hello, it's Markus from Intel. I presented this class resources KEP and a demo like one month ago or so. I thought I had sent an email to the SIG Node mailing list at least two weeks ago, but today we realized it was sitting in my drafts folder, so I sent it just before the meeting. I just wanted to continue a bit on the discussion that we had last time.

First, there was a bit of a misunderstanding on my side. I think you, Dawn, suggested that one way to do it would be to use pod annotations for setting the class, and only extend the CRI protocol in the first place. Actually, that was our very original approach to introducing this: having a very minimal KEP, in a similar way to how, for example, seccomp and others were introduced. I think it was last October or November that we first presented this idea, and the feedback in that SIG Node meeting was just that. But of course, if people say that annotations are a better or preferable way to slowly introduce this type of resource, I think we're totally fine with that approach as well; it's also mentioned in the KEP as an alternative approach. So: using annotations, and only extending the CRI protocol.

And then probably one misconception or misunderstanding in the discussion was about what we are aiming at here with these class resources. I would see them as some sort of opaque, non-inventory resources, so that they would be mostly opaque to Kubernetes. We're trying not to have any complex characteristics or logic like in, for example, the resource class proposal from a few years back; the naming is a bit unfortunate there, I guess, since it's easily confused with that.
A
The reason we really suggested annotations is just that most of the proposal is decoupled from Kubernetes, right. But the proposal is to introduce this class resource as a field in the resource spec, so it is really hard to reach consensus. That's why I suggested it: even though we know annotations for those things can, a lot of the time, introduce version-skew and backward-compatibility transition issues, given the current design it would actually work for many cases. There are already plugins to support those things, and even the scheduler can now support them. So I suggest: let's just start with annotations, then start the work and get people to start using the features, downloading the plugin and the rest of those features. Then we can have more feedback from customers.

Then we can have more understanding, come back to the community, and make it a first-class field in the pod spec, or maybe we just keep it that way, because keeping it that way also reserves some flexibility. That's just to fill the community in a little bit on the background context here. Do you want to comment?
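For readers following along, a minimal sketch of the annotations-first pattern being discussed, mirroring how seccomp profiles were originally set via annotations before becoming a pod-spec field; the annotation key and class value here are hypothetical, not the KEP's actual names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// classAnnotation is a hypothetical key; the KEP would define the real one.
const classAnnotation = "example.com/blockio-class"

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "demo",
			// The class rides along as an annotation, opaque to Kubernetes;
			// only the CRI runtime interprets it, so no pod-spec change is
			// needed and the scheduler is untouched.
			Annotations: map[string]string{classAnnotation: "throttled"},
		},
	}
	fmt.Println(pod.Annotations[classAnnotation])
}
```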
C
Yeah, and one more thing I think we asked for in the last meeting was a walkthrough here of how the block I/O will work end to end, right.
J
From the kubelet's point of view at this point, or the CRI protocol's, it's really opaque to Kubernetes; it's just an opaque resource, as any other class would be. But yeah, we can do that, of course.
J
I mean, the KEP contains a lot of future steps. So yeah, we took this really step-by-step, small-steps approach, and using annotations is just one more step in that direction. It's just the CRI changes first, then the pod spec changes, then what the status is right now, with resource quota, access control, and so on as future work. So there are a lot of future steps for that; we try to take small steps to get something done.
A
Please follow up on that KEP if you have anything to follow up on. And the next one: Vinay, do you want to give an update?
I
Hi, so this is just a quick update. I think we have the PR; it's been sitting in that state for a little while. I'll have to do a catch-up on it and make small updates as well. I'm probably going to add a test: there's an issue that we found last week, and I think it requires a test. It's not an alpha blocker, but I think it should be fixed; it's better to have it sooner than later. And I want to see if there's some small code change we need to make, because it will fix itself in beta, but I'm looking at it a little more closely, so I might make one update. The main thing at this point is Derek's review; I think it's halfway done on the kubelet side, so let's see if we can close the loop on that.

My time is going to get very limited in June. I have a conference in Texas where multiple papers and talks have been selected, and I'm preparing for that, so June is going to be pretty busy. I don't want to run into a situation like last time where, during the core implementation time, I'm not available, and then it comes to code freeze and we just don't have enough time. So I'm wondering, Derek, can you devote some cycles, probably this week or next, so we can finish that review, see if there's anything critical that needs to be addressed right now, and get it to a shape where it's acceptable?

Okay, so I'll keep an eye out for it. I'm on Slack, so if you need something really quick these next few weeks, I'll devote time; most likely it'll be in the evenings or on weekends that I get to your questions and respond, but I'm hoping we can get this done by the end of this month.

There's also a housekeeping item: I've merged the two KEPs, the CRI KEP and the main KEP that we have, and I've sent out a PR for that. Could you please give it an ok-to-test? I think there are a couple of issues I need to fix with the doc; I'm going to do that, and then hopefully we'll merge it. It probably needs a quick review, but it's just a combination of the two KEPs: I took the sections from the old KEP, dropped them into the one I kept, and made some small adjustments to the wording, and that's about it. But yeah, please take a look.
A
Do you want to... you have questions? Yeah.