From YouTube: Kubernetes SIG Node 20200714
Description
Meeting Agenda: https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
B: Yeah, hello. I just wanted to do a quick update. As we talked about in passing, I'm folding the notes, alternatives, and suggestions into the design callouts, so I can open the PR with, hopefully, answers for all the weird cases. I also understand that the motivation and the use cases are an important part of a KEP, especially for this KEP, so I was reaching out to the community — Linkerd and other companies that carry a fork with this functionality — to make sure that the design changes I'm about to propose fit everyone. Yeah.
A: Yeah, this started up here. Derek, I remember we did talk about this discussion — who was the shepherd last time?
C: Yeah, I shepherded the original KEP, so I'm still happy to keep taking on that responsibility. I think I have most of the history, so I'm happy to continue to do that.
D: Yes, Dawn. I am working on a KEP. Currently, anyone who wants to access pod events has to register at the API server.
D: What I'm working on is that we can send them directly from the kubelet to the clients, because for consuming the events it's not necessary to have a centralized approach. I also have a prototype ready — something like adding the clients' IPs through kubelet flags.
D: From that, the kubelet can start sending events to the registered clients so that they can react according to what happened. Say a pod gets deleted and we have to clean something up on that node — then a daemon process can have access to those events and react according to what is happening in the kubelet.
D: Something like a pod getting deleted, a container starting, or a pod failing.
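A minimal sketch of the subscription model described here — a node-local daemon registering with the kubelet and reacting to pod lifecycle events. All the names (`KubeletEventBus`, the event reasons) are hypothetical; no such kubelet API exists in upstream Kubernetes, so this only illustrates the proposed flow.

```python
from collections import defaultdict

class KubeletEventBus:
    """Node-local registry: daemons subscribe, the kubelet pushes pod events."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # event reason -> list of callbacks

    def register(self, reason, callback):
        self.subscribers[reason].append(callback)

    def publish(self, reason, pod_name):
        # Best-effort fan-out to every daemon registered for this reason;
        # events with no subscriber are simply dropped.
        for cb in list(self.subscribers.get(reason, [])):
            cb(reason, pod_name)

bus = KubeletEventBus()
cleaned = []
# A node-local daemon that removes its files when a pod is deleted.
bus.register("PodDeleted", lambda reason, pod: cleaned.append(pod))
bus.publish("PodDeleted", "web-0")
bus.publish("ContainerStarted", "web-1")  # no subscriber registered: dropped
```

Note the delivery is fire-and-forget, which is exactly the property C raises next: nothing here guarantees a subscriber ever sees an event.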
C: Just to distinguish: are you referring to what a kubelet watches in order to figure out the desired state it should realize, or—?

D: No, no.
D: Whatever events the kubelet is sending to the API server — the same kind of event it can send to another daemon process, so that if anything gets cleaned up, or a pod fails, they can react in some way. It is more of an optimization, because it's not necessary to have this centralized approach.
D: Oh yes — something like: a pod is running on a node, and there is also a daemon process running on that node, subscribed to the kubelet. You might want to clean up your files or something like that on the node if something fails.
C: Events are largely a mechanism to inform humans who are reviewing the system, to help them understand its state, and so we would discourage people from programming against, or responding to, pod-create or pod-delete events as a guaranteed communication vehicle. Events are largely best-effort delivery.
C: The key point I'm trying to make is that, in the Kubernetes community, events are not intended to be things that you program against. They do not have guaranteed delivery, and the individual types, reasons, and messages associated with an event are not API contracts.
A: All right, I just want to say that if you want to build a program based on this — that's why you want to optimize — we'd just ask that you not rely on it as the way to address the problem.
D: No, that's pretty much it — that's what I wanted to say. Yeah.
G: Hello, this is Mauricio. This is my first time speaking in this meeting. This is just a reminder that we at Kinvolk are working on taking over the user namespaces support. Basically, what I want to say is that we are starting to write the KEP — we are starting with the first section, the motivation and the summary, mainly based on the old work and the old enhancement proposal.
I: Yeah, hi, sorry I was late — I was at the doctor's office with a possible broken rib; I hope it's not. The thing I wanted to talk about: I think David has already been responding to some of the questions that Tim Hockin raised on the reopened review of this KEP, and I think his main point is: why do we need resources allocated in the spec? We've been working to convince him — I don't know if it's been convincing enough — and if he doesn't want that, then the question is what we can do about it. A secondary issue I was following up on with Tim — Derek had also mentioned this earlier — is emptyDir: when you use memory as the backing store for the emptyDir filesystem, a tmpfs is mounted into your container. I did some experiments with that, now that we have the implementation with the restart policy available. I saw that when a container creates a file in the tmpfs, the memory it uses for the file is charged to its cgroup quota, and when the container exits — either voluntarily with exit 1, that is without vertical scaling, or because you do a scaling with the restart policy and lower the limit —
I: — the cgroup quota for that container is cleared upon restart, and the file's charge moves to the top-level pod cgroup. In this case, if you try to reduce the pod cgroup quota — with a single container — you cannot update the pod's memory limit below its usage, and that usage is never really cleared until the pod goes away.
I: So I think a reasonable thing to do here is to say that you cannot use restart as a resize policy with emptyDir medium=Memory — that's a validation block — and we should also just abort in case some request like this comes in. Realistically, a component like the VPA should never do this: it should observe that the usage has dropped and then set the new limits somewhere a little bit above the usage.
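The validation block proposed here could look something like the following, expressed over a plain dict shaped like a Pod manifest. This is a hypothetical sketch: the field names (`resizePolicy`, `restartPolicy: RestartContainer`) are illustrative assumptions, not the settled API of this KEP.

```python
# Hypothetical admission-time check: reject a container-restart resize
# policy when the pod mounts a memory-backed emptyDir.
def validate_resize(pod):
    memory_emptydir = any(
        vol.get("emptyDir", {}).get("medium") == "Memory"
        for vol in pod["spec"].get("volumes", [])
    )
    restart_resize = any(
        policy.get("restartPolicy") == "RestartContainer"
        for ctr in pod["spec"].get("containers", [])
        for policy in ctr.get("resizePolicy", [])
    )
    if memory_emptydir and restart_resize:
        return ["restart-style resize is not allowed with emptyDir medium=Memory"]
    return []

bad_pod = {"spec": {
    "volumes": [{"name": "scratch", "emptyDir": {"medium": "Memory"}}],
    "containers": [{"name": "app",
                    "resizePolicy": [{"resourceName": "memory",
                                      "restartPolicy": "RestartContainer"}]}],
}}
errors = validate_resize(bad_pod)
```

Rejecting the combination up front, as sketched, avoids ever reaching the state where a restart silently strands tmpfs charges at the pod level.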
I
But
let's
say
something:
bad
is
going
on
some
misconfiguration
or
something
a
bug
in
the
vpa
or
the
user
is
trying
to
do
something
like
game
the
system.
We
probably
should
just
not
do
this.
We
abort
it
and
then
keep
the
limits
where
they
are
so
that
for
billing
purposes,.
I: Yes, that is true — but vertical scaling does introduce an angle where they can intentionally lower it. Today, what happens in that case? Let's say there's no vertical scaling, and I try to write a program that games the system: I come up, create a file, exit 1, let Kubernetes restart me with my restart-on-failure policy, and then I have a clean slate.
I: But then, if I try to write more than what my pod cgroup allows — say I have one gig, and I went down to like 50 MB; this is without vertical scaling — if I create a 900 MB file and then exit and come back up, I effectively don't have one gig available, I have 100 MB available.
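The arithmetic in this example can be written down as a back-of-the-envelope model of the accounting being described (assumed semantics, not kubelet code): charges for files in a memory-backed emptyDir survive container restarts by migrating to the pod-level cgroup.

```python
def writable_after_restart(pod_limit_mb, tmpfs_file_sizes_mb):
    # The container's own cgroup charge resets on restart, but the tmpfs
    # pages stay charged against the pod cgroup until the pod is deleted.
    pod_level_charge = sum(tmpfs_file_sizes_mb)
    return pod_limit_mb - pod_level_charge

# 1000 MB pod limit, one 900 MB tmpfs file left behind by the exited
# container: the restarted container has only ~100 MB of headroom.
remaining_mb = writable_after_restart(1000, [900])
```

So the pod still "holds" its full limit from the scheduler's point of view, while the restarted container sees only the remainder.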
C: Well, the pod cgroup isn't visible to the cluster scheduler when you resize the request, right? This goes back to the debate — it would still be visible if resourcesAllocated were somehow exposing it — but I think there's probably a gap here, because resourcesAllocated was still on a per-container basis, and the charge here would be lost when it got transferred to the pod.
C: But even if we don't do resourcesAllocated, we'd still have a gap here, because with kubelet checkpointing we would still have to represent the charge to the pod. And this can also happen with the whole pod resource overhead thing — the per-pod resource overhead proposal, whatever name we gave it.
C
Have
a
way
of
capturing
observed
usage
at
the
pod
boundary
that
may
not
actually
represent
some
container
usage
so.
I: So the new thing that we want to add — yeah, I was considering adding that mainly in case you are trying to resize, which is the new thing coming in here. We'll say: okay, you can resize your requests down, which is fine — your request is the reservation — and then you have the limits, and we will not resize the limits down until your usage actually drops. So yes, there is a problem here. Let me think more about this and see, because the other option is—
I: If this occurs, then we restart the pod — like, instead of erroring out: if we see that the user is requesting a memory limit lower than the usage, we treat it as a critical failure and restart the pod in place.
J: I'll have to check, but the emptyDir exists for the lifecycle of the pod. What I'm not sure about is: if we restart the pod, does that, you know, create a new lifecycle for the pod?
A: So in other cases — so that's why you suggest not using restart: if you are using the tmpfs type, the emptyDir case, then the resize policy has to be restricted. That was your earlier proposal. But your restart policy actually only applies to the container itself, not the pod.
A: Yes, but okay — if it's Burstable, then that's the problem: behind the scenes this container actually steals some of the resources. And the worst part — this goes back to the original argument — is that our scheduler doesn't have the usage, no estimation; it doesn't use usage after scheduling, so those kinds of problems cannot be spotted. So we have to prevent it in Kubernetes.
I: We prevent this by doing eviction. If the usage is more than what its limits are, then for the Guaranteed class I think it's OOM-killed, and for Burstable I think it becomes a candidate that's pretty high up on the eviction list. Is that correct?
C: If we don't do resourcesAllocated, the fallback would be that we need to do checkpointing, and so I'm wondering if we need to enrich a potential checkpointing design and say: okay, if we don't do resourcesAllocated, this is the new thing the kubelet will do instead — and start to get more eyes on that problem.
I: Sure, okay — I'll start thinking about what a good approach for that would be. I know you mentioned another possibility: creating a new object in the store, backed by the API server. That could be one way of doing it — sort of keeping your own track.
C: An object, I think, is too hard. There was checkpointing in the kubelet that just got removed recently — we used to checkpoint mirror pods — but there is still some checkpointing logic throughout, and the CPU manager uses it, that type of thing. So if we had to do a resource-consumption checkpoint, all I would suggest is to look at what the CPU manager does today for what it chooses to checkpoint, and maybe use that to help.
C: I don't know — it would provide another option if we can't reach agreement on resourcesAllocated, yeah.
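The node-local checkpointing being suggested can be sketched roughly in the style the CPU manager uses: a JSON state file guarded by a checksum, written atomically. The file name and schema below are made up for illustration; only the pattern (checksummed state file, atomic replace) mirrors the real kubelet code.

```python
import json, os, tempfile, zlib

def save_checkpoint(path, entries):
    body = json.dumps(entries, sort_keys=True)
    state = {"checksum": zlib.crc32(body.encode()), "entries": entries}
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename: a crash never leaves half a file

def load_checkpoint(path):
    with open(path) as f:
        state = json.load(f)
    body = json.dumps(state["entries"], sort_keys=True)
    if zlib.crc32(body.encode()) != state["checksum"]:
        raise ValueError("corrupt checkpoint")
    return state["entries"]

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "pod_resources_state.json")
    save_checkpoint(p, {"pod-a/ctr-1": {"memory_mb": 256}})
    restored = load_checkpoint(p)
```

The checksum lets the kubelet detect a torn or corrupted state file on startup and fall back to rebuilding state, which is the failure mode that makes checkpointing designs hard.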
I: At the minimum, I think we can do that — I'll look into it and understand how it works. I'm also increasingly leaning towards Dawn's point about having usage reflected in the status or something; that's starting to make more and more sense, to give the real picture of what's going on. This information should be available through other channels — the metrics and stats that Prometheus or the metrics API gives out — and the consumer for this is essentially your VPA.
I
It
should
be
getting
this
information
already,
but
I'm
wondering
if
it's
helpful
to
be
reflected
in
the
fault
status,
so
that
scheduler
can
make
more
intelligent
decisions
as
well
as
has
not
mentioned
it,
doesn't
have
to
be
part
of
this
gap.
I
don't
want
to
over
complicate
it,
but
it's
something
we
want
to
think
about
having
there.
I: Okay. For now, let's see what Tim says. I kind of like your "resources allocated in the spec" for multiple reasons, and I think all of us on this SIG Node call mostly agree with that, I hope.
A: I'm still chewing on it either way, but I obviously don't like the name — a failed name. What we call resourcesAllocated sounds like a status; if we used something more like resourcesToAllocate, maybe it's more like the desired state. But anyway, I think in that meeting we kind of agreed about the checkpoint — let's look at more research.
A
I
personally
want
to
look
into
more
on
the
resource
to
allocate
and
because
I
kind
of
the
united
meeting
agree
like
the
to
checkpoint,
but
after
the
meeting
I
think
about,
there's
the
reason
we
want
to
in
the
spike
right.
So
I
want
to
refresh
my
memory
more
because
initially
our
decision
and
then
put
effort
on
that
one,
and
can
you
continue
research
on
the
check
point
and
then
look
at
what's
the
potential
problem?
What
kind
of
things
other
things
so
we
could
parallel
to
do
this
kind
of
things
yeah.
C
So
it's
like
having
an
inspect
like
having
quota
read
status
is
just
as
bad
to
me
as
putting
this
in
pod
spec
like
both
feel
unnatural,
yeah,
and
so
like
that.
C
That
is
that's
definitely
true
and
it's
like
which,
which
code
base.
Do
you
spend
more
time
looking
at
that
you'd
be
like
makes
you
get
that
I
guess
what
did
tim
say,
gives
you
a
bad
odor
or
a
bad
sniff
test
or
something
but
like.
C
Either
way,
I
do
actually
like
the
idea
that
don
ray's
there
as
a
resource
to
allocate
it
does
sound
appropriately
representative
of
desired
state,
but
I'm
absolutely
very
sympathetic
that
you
might
be
sitting
in
your
car
right
now
with
a
broken
rib,
and
so
I'm
happy
to
pick
this
up
afterwards.
I
Well,
the
doctor
just
said
that
let's
not
do
x-ray,
it
doesn't
look
as
bad,
it
could
be,
but
you
know
if
it
is,
then
the
treatment
would
be
the
same.
Give
it
a
lot
of
rest.
Don't
go
crazy,
running
up
mailbox
peak
or
something
like
that,
so
I'm
just
gonna
have
to
sit
and
drink
beer
for
the
next
couple
of
weeks.
I
guess.
I
No,
no,
it's
all
right.
It's
not
it's
not
as
bad.
It's
manageable
stupid
thing
I
did
in
the
bike
park
yesterday.
That's
okay,
so
yeah!
Okay,
for
my
actions,
are
I'll.
Take
a
look
at
how
checkpointing
is
done
on
the
cpu
manager
side
and
this
one.
Maybe
you
will
look
at
resource
to
allocate
some
some
naming
that
better
describes
the
eventually
okay
minimum
resources
and
then
desired
ideal
resources
that
we
that
david
had
mentioned
as
potential.
You
know
this
is
the
minimum.
We
need
to
start
with.
I
The
reason
I
like
it
in
the
spec
is
scheduler
just
looks
at
this,
and
this
is
the
point
I've
made
with
tim
as
well
with
the
other
case.
If
it's
in
status,
then
you
have
to
do
like
max
of
desired
versus,
what's
actual,
which
can
lead
to
periods
where
scheduler
is
not
able
to
schedule,
so
it
reduces
the
throughput.
So
there
are
some
real
performance
implications
of
doing
that,
whereas
in
this
case
there
is
no
ifs
of
bugs
resources
are
okay.
Kubelet
says
this
is
what
it.
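The "max of desired versus actual" cost being described can be illustrated with a tiny sketch: if allocated resources lived only in pod status, a scheduler would have to combine the spec with the last-observed status for each resource (the field shapes here are simplified for illustration).

```python
def effective_request(spec, status):
    # Conservative view the scheduler must take when status may be stale:
    # take the larger of the desired spec value and the observed status value.
    keys = set(spec) | set(status)
    return {k: max(spec.get(k, 0), status.get(k, 0)) for k in keys}

# During a downward resize the stale status still holds the old, larger
# value, so the pod looks bigger to the scheduler than the user asked for.
req = effective_request({"cpu_m": 500, "memory_mb": 256},
                        {"cpu_m": 1000, "memory_mb": 256})
```

While status lags, the pod occupies 1000 millicores in the scheduler's view instead of the requested 500, which is exactly the throughput penalty being argued against.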
A: Yeah, well, the argument that was brought up that time when we discussed it — I think too much, but in fairness — one of the arguments that really convinced me at the time was when he said the spec belongs to the user and it's not changeable, right? But actually, in this situation, never mind the VPA use cases — some field in the spec has to change anyway; that's the nature of the feature.
A
So,
but
at
that
time,
do
you
think
the
convince
me?
I
didn't
really
think
about
it.
Our
feature,
I'm
just
kind
of
thinking
about
all
the
features
we
introduced,
which
is
so
true.
That's
d14
level
changing
like
the
people
come
after
know
the
name
in
that
disk
thing
and
but
then
I
realized
after
meeting.
I
realized.
Actually,
that's
not
you,
because
the
whole
vpn
introduced
field,
no
matter
what
is
updatable
the
whole
department.
A
It
is
the
make
this
resource
so
so
derek
come
up
with
another
example,
which
is
the
container,
but
the
all
feature
introduced
the
resource
and
the
design
resource
request,
which
is
we
just
want
to
make
a
container
the
request.
The
end.
The
limit
is
mutable.
This
is
nature,
so
that
argument,
I
totally,
I
think,
that's
wrong.
A
But
I
do
think
about
that
time
because
we
are
called
that
is
resource
located.
It's
some
unique
to
me.
Also
it's
like
a
status,
so
I
I
have
to
try
my
best
to
refresh
my
memory
because
I've
been
about
this
project
like
the
three
years
ago
and
then
two
years
ago,
I've
been
roughly
only
talk
about
the
stigma,
so
I
was
kind
of
trying
very
hard,
but
then
later
after
the
meeting,
I
think
about.
A
Actually
that's
the
reason,
because
we
do
talk
about
three
years
ago,
say:
usage
based
off
the
scheduling
and
make
this
is,
but
we
did
the
cautious
diseases.
The
that
didn't
change
is
too
hard
and
too
long,
and
maybe
for
kubernetes.
It's
not
a
good
way,
at
least
for
now.
So
so
when
we
come
after
all,
those
kind
of
the
resources
are
located
and
all
those
kind
of
things
we
back
of
us.
So
I
will
look
at
this
personally
and
thinking
about
more
and
more
and
also
same
thing
for
the
temple
fs.
I
Yeah,
I
think,
in
that
pr
that
tim
has
raised
to
the
cap.
It
is
asking
a
bunch
of
questions.
I
responded
to
this
one.
I
brought
up
the
link
to
an
older
version
of
the
kep
where
we
had
resources
in
the
status,
so
maybe
they
will
refresh
your
memory
because
we
looked
at
that
and
then
at
that
time
we
realized.
Oh
status
can
be
lost,
and
in
that
case
you
know,
scheduler
cannot
recover
its
cash.
Googler
cannot
recreate
the
state
before
if
both
of
them
were
to
restart,
and
then
we
have.
I
We
have
a
major
problem
here
and
then
we
discussed
whether
we
wanted
to
move
this
to
one
option
was
to
move
this
back
and
really.
We
should
have
changed
the
name
when
we
moved
to
the
spec
and
then
the
other
option
was
local
node
check,
pointing
which
I
believe
you
had
drained
a
lot
of
problems
in
the
project.
You
did
not
want
to
do
that
unless
it
was
the
last
resort.
I
So
that's
that's
my
rough
history
timeline
about
this,
but
please
take
a
look
at
that
response.
With
that
particular
conversation,
I'll
I'll
send
an
email.
Once
I
get
home
I'll
post
a
link
to
that
particular
thread
to
the
status
to
our
signal
meeting
notes.
I: Yeah, I'll do some more work on that emptyDir issue and, by next week, see what concrete options we can potentially review, and if I have more data I'll share that in a slide or something. Sure, thanks. Thank you.
A: All right — so can we change the time? Otherwise I can't help run the meeting: six o'clock in the morning is just too much for me, and then I cannot work the whole day. If we changed it to, say, eight o'clock, then I could get up earlier, but it's a reasonable time to run the meeting. I also definitely want to join the resource management working group, but six o'clock is just too much.
C: I guess — I know Victor had a family, child-care challenge, and he owns the invite; I don't know if Victor is here at the moment. But to the earlier point: the idea wasn't necessarily to keep it going in perpetuity, so I don't want to fall back into that. If there are items we don't have sufficient time to discuss on this SIG call — and we have an example now — we could just spend 20 minutes discussing these topics.
C
We
can
follow
up
with
victor-
I
don't
know
if
kevin's
on
here,
but
like
I'm
personally
inclined
to
bring
discussion
back
into
this
call
unless
we
think
there's
an
overwhelming
amount
of
discussion
ahead
of
us.
A: I just want to comment on this one — I totally agree. We used to have the resource management working group weekly meeting, and it was really exhausting; sometimes we even ran off topic for the whole day, because those discussions became open-ended back-and-forth without making any decisions.
A
So
so
so
thanks
direct
this
time
and
make
cautious
decision
to
make
the
okay
review
like
the
not
like
for
our
schedules,
meeting
and
instantaneously,
oh
pause
and
review
status,
can
we
bring
the
topic
back
to
the
signal?
So
then
we
could
make
a
progress,
make
decision
and
achieve
of
the
consensus.
A
But
I
also
heard
like
alexandra
mentioned,
that
there
are
certain
things
we
just
don't
have
enough
time
and
one
time
because
resume
those
discussing
sounds
like
it
is
also
overhead
is
pretty
high.
So
can
we
locate
the
neck
one
time
thing
like
the
three
hour
and
achieve
some
agreement
and
then
move
forward
and
then
come
back
to
regular
signal?
K: From my point of view, one three-hour slot might not work. I'm thinking more in sessions: first we discuss; then people come back, digest a bit of what we just discussed, and think it through; and then maybe one or two more sessions to actually reach consensus.
C: The topics on the potential list to discuss right now are memory management alignment — and I believe Swati has a proposal on topology-aware scheduling. If they're longer topics, we'll hold another round of discussion and get the invite updated, and I'll try to help with Victor. But are those the three hours of discussion you're looking to have, Alex?
L: That's definitely going to be one of them, and it might be recurrent. We consciously haven't been putting them on the SIG Node agenda, because we have the resource management working group — so there was just a bit of confusion for us as well.
C: I'm going to keep correcting the language: it's not a working group, because that is a different governance term — it was largely just a second SIG Node meeting for when we didn't have time. I'm very cautious about that, because we had the working group before, and it was successful, but it also reached a point where it was appropriate to expire. So I'm happy for us to discuss topology-aware scheduling ideas in this forum and then do a deeper dive.
K: Yeah, I still think what we need are breakout meetings, because the memory manager discussion is not only about memory — it's about what we are doing in the near future for CPU and memory management overall, like where we are going to be over the next three, four, five years. Same thing with Swati's project: there are certain technical issues about how resources are discovered, so I always expected there would be some dedicated time to actually discuss the best options to discover them.
C
Them
the
reason
I'm
biased
to
still
doing
as
much
as
we
can
in
this
forum
is
the
previous
discussion
we
just
discussed
like
vertical
pod.
Auto
sizing
is
directly
related
to
the
same
topical
area
right
and
so
like.
In
my
view,
vinaya
is
a
new
expert
in
this
domain
that,
like
his
presence
here
in
this
call,
is
like
equally
as
valid.
When
discussing
how
to
do
some
of
these
topics,
you
know
so
doing
as
much
as
we
can
here
is,
is
ideal
and
so
just
to
ensure
we're
not
bifurcating.
C
Well,
what
does
this
happen
when
we
do
the
bpa
and
oftentimes,
I
feel
like
a
few
of
us
are
acting
as
the
bridge
for
like
different
meetings
that
where
we
can't
have
it,
let's
have
it
here
and
if
we
run
at
a
time
after
the
idea
is
first
presented,
maybe
we
then
begin
separately
in
a
deep
dive,
but
like
consistently
at
least
with
what
we
discussed
previously,
I
can
see
there's
topics
on
the
agenda
that
we
could
discuss
in
the
resource
management
forum.
We
can.
C
We
can
talk
through
those
and
I'll
off
up
with
victor
on
getting
something
scheduled,
ideally
this
week,
because
I
know
for
myself
I'll
be
out
next
week.
A: I like this one. Actually, we already have several separate meetings for the SIG — there's also the e2e test one, and I'm not sure, Derek, whether you are involved with that one or not. It was kind of pulled together by different people, but I didn't attend that meeting — there were too many meetings for me — though occasionally I was pulled in.
A
Okay,
what's
the
solution,
and
so
so
so
I
like
that
we
have
the
separate
meeting
breakout
meeting
and
but
just
do
you
come
back
to
the
signal
because
anyway
I
have
to
operate
with
community.
What's
going
on
and
what's
the
next
step
and
what's
the
decision,
all
those
kind
of
things
and
also
community
will
be
agree
upon
and
take
action
together.
C: Okay, so I'll take the follow-up to ensure we have a forum to discuss the topics on here, and I apologize, Alex — I know if Victor were here... I know he had some issues with the meeting.
K: I saw his message in Slack, and actually one thing he mentioned: the presence of the SIG leads is kind of crucial to this meeting, so it's not only about the timing or the calendar invites — it's more about the availability of you, Derek, or you, Dawn. Yeah, so I—
C: I have definitely been attending them, right? So I agree, it's important. Yeah, I'll follow up on what we just discussed — I'll at least get another meeting scheduled, and we can continue from there. And full disclosure: the reason the meeting fell off was due to some personal matters.
A: Okay, so we are going to — at least, Derek, you can come back next week, or maybe send an email to the SIG Node group mailing list, to say what's the next meeting and what's the next decision, and I will try my best to attend the meeting too. Okay, so let's move to the last topic today. David, you just called in — do you want to talk to everyone?
M: Yeah, I just wanted to bring up quickly: there was an issue from a while ago, back in March, that Clayton and Seth worked on. Basically, the kubelet could report pods that had actually exited with a failure exit code as succeeded to the API server.
M: This was a problem because some controllers depend on the container status — for example, Jobs: jobs could be marked succeeded even if the pods actually failed. So that's the issue, and it was fixed in 1.18. I was wondering — since it sounds like a big issue, and I've seen some customers hit it —
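The failure mode described here can be reduced to a toy reproduction (not real controller code): a controller that trusts the pod phase reported by the kubelet will mark a Job complete even though its container exited non-zero. The function names and the `kubelet_misreports` flag are illustrative stand-ins for the bug.

```python
def reported_phase(exit_codes, kubelet_misreports=False):
    # With the bug (fixed in 1.18), the kubelet could report Succeeded
    # regardless of the actual exit codes.
    if kubelet_misreports:
        return "Succeeded"
    return "Succeeded" if all(code == 0 for code in exit_codes) else "Failed"

def job_condition(phase):
    # The Job controller derives its condition from the reported phase.
    return "Complete" if phase == "Succeeded" else "Failed"

buggy = job_condition(reported_phase([1], kubelet_misreports=True))  # wrong
fixed = job_condition(reported_phase([1]))                           # correct
```

This is why the bug mattered beyond cosmetics: downstream controllers act on the reported status, so a mis-reported phase propagates into wrong Job outcomes.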
M: — I was wondering if anyone had thoughts on cherry-picking this to 1.17. I'm not sure if SIG Node has a more official cherry-pick policy beyond the official Kubernetes one, but I just linked the PR and wanted to get some thoughts on whether this is critical enough to cherry-pick to an old release.
C: No, I was going to say: if we're fixing a bug in behavior — which I believe is what Seth and Clayton were definitely exploring here — then yeah, this would be perfectly appropriate to pick back, and we could just look at the PR to see whether it picks cleanly. I have no objection to picking this back. I mean, I'm sure that at Red Hat we've probably picked this back as well, so there's no reason the community can't benefit.
M: Actually, I pinged the original author, Clayton, and I think he did mention that in OpenShift you already cherry-picked it back, so I'm thinking it's good to do an official cherry-pick in Kubernetes as well. Yes.
C: To make sure it's fixed, I'm happy to tag this. I think Seth didn't have the proper rights on the current branches until now, so yeah, I'll take a look right after the call. Thanks.
A: Thanks — that's all for today. Any other topic people want to bring up? We still have ten more minutes.
N: Just a quick update — this is Tina. We are now working on the documentation and addressing Dawn's comments. I think we will just need one or two weeks to get all the comments addressed; then we can come back and revisit whether this is ready to claim that ARM64 is supported.
N: Yeah, we discussed it, I think, the week before.
O: This is Ning. Hey, just one question: Victor's not here today, but we usually have the weekly SIG Node test meeting, and somehow it hasn't been happening in recent weeks. I want to know what the plan is moving forward — should we resume some of those meetings, maybe not weekly but bi-weekly or monthly, since we have lots of issues on the tests?
C: Yes — Victor and Jay Pipes, I believe, were kind enough to kickstart this effort when we first discussed it, if my memory serves. I don't know if Jay is on the call today.
C: I suspect that the challenge we had was related to the same issue we were having with our other discussion: Victor, who owned the calendar invite, had not been able to work for a couple of weeks. I would still like to see this work go on, and in my own private discussion with Victor—
C: — I encouraged him to continue to shepherd this work, so I will follow up. Just know that it was probably an oversight due to the realities of life and the stuff that happens, which is why the invite either expired or the meeting wasn't held.
C: I thought Jay might have been helping, so I will follow up, but I believe Victor ran the calendar invite. Since Jay Pipes isn't on today's call and Victor isn't here either, we're kind of short on information. But I was talking to Victor earlier in the week, so hopefully I can get back to him.
C: I definitely liked how things were proceeding, and I thought we should continue. I can't speak for Victor here, but my communication with him was that he was still interested in continuing to shepherd it, so there's no reason to believe otherwise. I'll follow up.
A: So that's all for today, everyone. Thank you for attending today's meeting, and have a great day. See you next week — bye.