From YouTube: Kubernetes SIG Node 20220927
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20220927-170428_Recording_1920x1026
A
Hello, this is the SIG Node weekly meeting, September 27, 2022. Welcome, everybody. Today we have quite a long agenda, but before we go into the agenda items I want to make a few reminders.
A
The number of open PRs is growing. That's expected at this stage of the release cycle, because we want to fix bugs and do enhancement work now, but I think it's unhealthy to have that many. I know there are many PRs that just haven't been looked at yet. I'm trying to triage them, but there are still about 102 untriaged PRs. The easiest way to help is to review something from the needs-review column here; that means it was already triaged.
A
So if you want to pick something up, pick it from here. If you're an approver, please take a look at the needs-approver column; items there have probably been LGTM'd by somebody already, so it's safe to start approving.
A
Anyway, if you're interested in what happened last week, you can go ahead and click on these links, for example the PRs that were closed. Thank you to everybody who is closing PRs as well. It's good for PRs that are no longer necessary, or that were work in progress, to be closed; it doesn't skew our statistics too much. There are merged PRs as well, if you're interested in what's going on and what people are working on.
A
Just click on this link and you'll see what happened last week. With that, I will switch to Raven.
C
Is Raven speaking through somebody's laptop?
B
This is just a quick reminder on the 1.26 planning.
B
So if you have anything that you plan to work on in the 1.26 cycle, please update the status in the 1.26 planning doc I linked there, and if you are planning to work on something that is not tracked there, please add it. Today and tomorrow I will ping the owners who haven't updated their status, and try to find out whether they're planning to work on it or not. The KEP freeze is at 6 PM PDT, October 6th, so yeah, just a quick reminder on that.
A
Thank you, Raven. Moving on: Alexey, talking about CRI image pulling progress and notifications.
E
No, no, it's fairly straightforward; there is no diagram or presentation to be shown at this moment. Basically, the idea is that when an image is pulled through a CLI, for example when we do a docker pull with an image URI, we see the progress of the layers of the container image being pulled, and some tools, like containerd's ctr, do the same.
E
When you do a ctr image pull, you also see the progress. That doesn't happen with CRI-based tools, for example the crictl tool that operates over the CRI protocol between the runtime and the client. When you issue the command, the runtime has the information about how the image pull is progressing, but it's not exposed through the CRI interface to the consumer who wants to know the progress. That has at least two important use cases, in my opinion.
E
One is when a person, an admin or whoever the actor is, pulls an image manually with the tool; it gives some visibility. The other one is when the kubelet requests to pull the image. Sometimes, in the HPC and AI area at least, the images can be 8, 10, 15 gigabytes big, and if the connection is not fast enough, that might take ten minutes to pull the image. And while this is happening, there is no visibility on the Pod.
E
There is no notification going on there, so it would be very good to see either some sort of progress exposed, or maybe even notifications. But for that, the CRI protocol needs to expose this information. So for that purpose, I suggest we create an API that will expose at least one, if not both, of the progress information and the notifications.
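A minimal sketch of the shape such an API could take follows; it is purely illustrative, since the KEP is still a draft, and the type and method names below are invented for this example and do not exist in the real CRI:

```go
// Hypothetical sketch of a CRI extension for pull progress. None of
// these names exist in the actual CRI; they only illustrate the idea.
package cri

import "context"

// ImagePullProgress is one update emitted while a pull is in flight.
type ImagePullProgress struct {
	ImageRef        string // image being pulled
	BytesDownloaded int64  // bytes fetched so far across all layers
	BytesTotal      int64  // total bytes, or 0 if the registry doesn't say
	LayersComplete  int32
	LayersTotal     int32
}

// ProgressReportingImageService is what a runtime could implement: the
// channel delivers updates until the pull completes or ctx is canceled.
type ProgressReportingImageService interface {
	PullImageWithProgress(ctx context.Context, imageRef string) (<-chan ImagePullProgress, error)
}
```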
E
So I created an issue for that and a draft of the KEP, and I'm planning to work on that. My colleague Yuka, who is present here, has started prototyping, I believe, if he hasn't started already, and I will join him starting next week, probably. So there is a fairly good chance we want it to be part of 1.26. I will update the planning document accordingly. I'm just bringing up the topic here; I brought it up already in the CRI-O weekly meeting and they liked the idea.
F
Yeah, so this came up in CRI-O. If I remember correctly, earlier, with dockershim, we had a way for the kubelet to get updates that the image pull was still in progress, so it wouldn't cancel or fail the pull. My motivation is to bring that feature back over the CRI for bigger image pulls, so there's no cycle of cancellation and retrying, cancellation and retrying, for bigger images, and we can perform much better.
G
That makes sense. The one question I have is: this has come up in the past, where folks wanted to see the progress of an image pull by inspecting either an event or metadata on the pod.
G
I would still push back on that a little bit, because I question which human being would typically look at that, versus maybe a metric or something else. Sometimes we optimize for a debugging scenario, but it would still be a lot of information coming back in the happy path, where it would drive a lot of load. So we've got to be careful on that, is all I'm going to say.
G
The other thing I saw listed here was that there are folks still using the serialized image pull policy. I was actually surprised to see that it still exists, because in the very early days of the project it was unsafe to pull concurrent images from a runtime, but I'm not aware of any runtime that actually has that limitation anymore.
C
I think, with that one: we're actually looking to switch to parallel pulls internally, and we looked into the current configuration, which is QPS based. I'm not sure why QPS is the best way to configure this, because even if we limit it so that you can only start one image pull per second, each image pull could take several minutes, and if you still start one pull every second continuously, then eventually you'll accumulate a lot of in-flight pulls.
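For context, the QPS-based knobs under discussion are existing kubelet configuration fields, paraphrased below along with a note on why a start-rate limit alone does not bound concurrency:

```go
// Paraphrased from the kubelet's v1beta1 KubeletConfiguration; these
// fields gate how image pulls are *started*, not how many run at once.
package config

type KubeletConfiguration struct {
	// SerializeImagePulls pulls one image at a time (the legacy behavior).
	SerializeImagePulls *bool
	// RegistryPullQPS limits pull starts per second when pulls are parallel.
	RegistryPullQPS *int32
	// RegistryBurst allows short bursts above RegistryPullQPS.
	RegistryBurst int32
}

// Why QPS alone is not a cap: with RegistryPullQPS = 1 and pulls that
// each take ~5 minutes, roughly 1 start/s * 300 s = 300 pulls can be in
// flight at once; bounding concurrency would need a separate limit.
```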
C
Yeah, we do support parallel pulls, yeah. Let me go back and check whether there's anything beyond QPS. If there's only QPS, at least we can try to tweak that, though it's hard to make it meet our requirements; and if there's already a cap there, then that's much better, but if not, maybe we should consider adding one.
G
I guess all I'm saying is, I don't necessarily know if progress like that is best reported through the Pod API, versus maybe reported through a metrics code path.
C
Yeah, actually, another issue we found: I'm not sure how CRI-O implements this, by the way; this is containerd versus, previously, dockershim. At the CRI level we don't have a timeout for the image pull event, if I remember correctly, because an image pull can take a long time and it's hard to set a fixed timeout. So in the end, what we implemented at the dockershim level was to just check whether Docker reported progress.
C
If you don't report progress in two minutes, we treat it as a timeout, but that implementation is currently missing in containerd. So what we're observing is that, for a bad container image, the pod may be stuck pending there for two days before eventually something updates. I'm not sure whether CRI-O has this issue or not.
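The dockershim behavior described here, failing a pull only when no progress has been reported for two minutes rather than after a fixed total deadline, can be sketched as a small watchdog. This is a hypothetical illustration, not the actual dockershim or containerd code:

```go
package imagepull

import (
	"context"
	"time"
)

// progressWatchdog cancels a pull when no progress event arrives within
// idleTimeout. Unlike a fixed total deadline, a slow-but-alive pull of a
// huge image is never killed, while a genuinely stuck one is.
func progressWatchdog(ctx context.Context, cancel context.CancelFunc,
	progress <-chan int64, idleTimeout time.Duration) {
	timer := time.NewTimer(idleTimeout)
	defer timer.Stop()
	for {
		select {
		case _, ok := <-progress:
			if !ok {
				return // pull finished; stop watching
			}
			// Progress arrived: reset the idle timer.
			if !timer.Stop() {
				<-timer.C
			}
			timer.Reset(idleTimeout)
		case <-timer.C:
			cancel() // no progress for idleTimeout: treat the pull as stuck
			return
		case <-ctx.Done():
			return
		}
	}
}
```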
A
I think we're doing it in containerd; at least I remember somebody has been working on a PR for that. I see Raven, yeah.
B
Yeah, there is ongoing work, and I think, though I'm not 100% sure, it is going to be released with 1.7. I can't guarantee that, but there is some ongoing effort on progress-based timeouts on image pulls.
A
And now, for those kinds of timeouts, we currently set them on the runtime side. So it's a runtime configuration, rather than something the kubelet controls. I'm not sure if that's the right approach long term, especially if we start doing this progress-based reporting as well.
G
With progress reporting, we would have to think about how that works with image pull policies, and then whether we'd want a more granular condition. For example, once we have the ability to pull secured images, do we want conditions to know whether the actual pull was authenticated or not? Either way, it's just a lot to think through on the progress bits, and I don't know how far we can get with metrics-based solutions, but that seems probably better than abusing the kube-apiserver.
J
What's QPS?
G
Basically, it's the number of calls the kubelet is allowed to make back to the kube-apiserver. So it's queries per second, yeah.
E
Okay, another option, instead of reporting the progress: once we've started pulling the image, we know the speed at which we download, so we can at least publish an ETA if there are, say, 7 or 10 or 15 minutes left to wait for the image. That gives at least some visibility to the user of the pod.
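The ETA arithmetic suggested here is simple once bytes-downloaded-so-far is observable; a sketch, assuming the runtime can report bytesDone and bytesTotal:

```go
package imagepull

import "time"

// etaForPull estimates time remaining from the throughput observed so
// far. Returns false when there is no usable rate yet.
func etaForPull(bytesDone, bytesTotal int64, elapsed time.Duration) (time.Duration, bool) {
	if bytesDone <= 0 || bytesTotal <= bytesDone || elapsed <= 0 {
		return 0, false
	}
	rate := float64(bytesDone) / elapsed.Seconds()    // bytes per second
	remaining := float64(bytesTotal-bytesDone) / rate // seconds left
	return time.Duration(remaining * float64(time.Second)), true
}
```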
A
Speaking of status updates: it has also come up whether to report which registry the image was actually pulled from, and maybe some more details. I think right now there is very little visibility into whether the runtime configuration was applied correctly.
G
So, with some of these things, I'm starting to wonder how that would even correlate with some of the requests that y'all were presenting here; it seems complicated, is all I'm trying to think through. And that was an edge case that I know you all had discussed, and Ronald, you and I at least have discussed this in the past.
K
It won't only benefit us this way; it also won't cause any issue for large images. So the last thing to say is: I hope we can start smaller. I really like what was mentioned earlier about the top issue. The top issue, basically, is that we just don't know whether a pull is stuck or still making progress, so let's start from there. Before we get into any API change, I think that even without any API change we could at least provide something like what dockershim had.
K
Also, I noticed that somebody asked for something like a metric to see the image pulling latency, and it actually doesn't count those cases correctly, so maybe sometimes it will indicate some problems. Anyway, that's for serialized pulling, not for concurrent pulling, but no matter what, we should count those kinds of things to say whether we really have the issue and how big the benefit is.
A
Yeah, okay. So please submit your KEP properly and we'll try to get it into 1.26, if you will be working on that.
A
I'll type up the summary. And the next one is yours; thank you.
L
Yes, there is a PPT; you can just open it. Yes, okay, I have it open here. Okay, next. So today my topic is kubelet support for custom probe protocols. Next page, please. Okay, the motivation is that currently we have TCP, HTTP, and now gRPC probes, but they all depend on the network. That means the kubelet must be able to use the network to probe the container's status. But our own Kubernetes implementation is in a multi-tenancy, VPC-enabled environment; it uses an OVS/OVN network plugin, and in that case the kubelet cannot access the pods in the VPC for security reasons. So the customers can only use exec probes, and that is not good enough for them. I also found that if you want to add a new protocol, such as gRPC, as a new probe type, it's a lot of work.
L
Next page, please. So our proposed solution is that the kubelet should mainly focus on the type of probe of a container; now we have liveness, readiness, and startup, and maybe even more later. We can let the users describe the probe details freely and implement them freely. In our case, we create a CRD to describe the probe, and we use our own probe manager to do the probing of the pod.
L
So our concrete implementation is to add fields in the pod spec to communicate between the custom prober and the kubelet. Next page, please. This is our example; you can see we add two fields. One field is in containers; it's called customProbes.
L
It declares that the given container supports custom probe types; we can have liveness, readiness, and startup. We also have another field, added in the pod status, for the prober to write back the results of the probes to the pod, with each probe's result. Next page, please.
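A rough sketch of the two fields as described in the presentation; this is a vendor-specific proposal, so everything below is hypothetical, and none of these fields exist in the upstream Kubernetes API:

```go
package customprobe

// CustomProbe, in the container spec, declares that this probe type is
// performed by an external prober rather than by the kubelet.
type CustomProbe struct {
	Type string // "Liveness", "Readiness", or "Startup"
	// Probe details live in a separate CRD in the presenters' design
	// and are opaque to the kubelet.
}

// CustomProbeStatus is written back by the external prober (into the
// pod spec, per the final design discussed); the kubelet consumes it
// as if it had executed the probe itself.
type CustomProbeStatus struct {
	ContainerName string
	Type          string
	Result        string // e.g. "Success" or "Failure"
	// RestartCount guards against acting on results from a previous
	// incarnation of the container.
	RestartCount int32
}
```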
L
What we have found: first, the customProbes field in containers cannot be modified after the pod is created, and if you turn off this feature, or you do not have this field, the container customProbeStatus field is ignored. Also, as the previous page says, we have the container customProbeStatus field for writing results back. The reasons we did not put it elsewhere:
L
First, we tried it in the pod status, and I found that we cannot use kubectl to modify it, because kubectl does not support modifying the pod status; and even if I use the API directly, using the API server to modify the pod status, the kubelet cannot watch the pod status immediately.
L
It takes minutes for the kubelet to do the full reconcile and see the results. Next, we tried to put it in the container spec, and I found that, because the container spec is hashed, and because this value will be modified a lot, it's not a very good place to put it either. So finally we put it in the pod spec.
L
So for these values, we hope we can modify them with kubectl and also have them watched by the customers' controllers. And the third point is that restartCount is important: if I write the probe status as failure, the container will be restarted, and the prober cannot know that immediately. So we only want to use values from the current pod run: if the container restarted, old values whose restartCount is stale are simply ignored, which keeps the guarantee that we only take the most recent value of the probe result. So that's our whole presentation. Thank you.
G
So you delegate the probing to a third-party system, and then allow that third-party system to report status on pods, in addition to the kubelet acting on the pod's status. Yes? Okay. This is the first request of this nature, I guess; I haven't thought about this before. The one question I had was: in your scenario, who is charged for the probe? Is it...
L
The pod is running in the Kubernetes node, you know, but the network is not reachable, for two reasons. Because it's a VPC, the pod's network namespace is in the VPC, and the kubelet is in what you can call a management network, so the kubelet cannot directly access the pod's network, yeah.
G
I guess: is the compute cost of running the probe charged to the management process in your node setup, or is it charged to the container, the end user's container?
G
If you did an exec probe, the runtime could launch that exec action inside the user's pod cgroup, and so the user's pod is charged for running the probe. I wasn't sure. We've had requests in the past to provide some QPS guarantee around probes, which implied that the management part of a node should be the one who does that; versus in this solution, where you have it delegated...
L
You mean who is in charge of executing this probe? It's another controller that does the probing. Because it runs in the pod network, it's in the VPC, so it has the network ability to probe the pod. It's not the kubelet.
K
I understand what you're requesting, because this was actually mentioned in the past in SIG Node. But there are still other ways you can achieve similar things, right? You are, anyway, already running this controller as a kind of representative of the prober, to poke the list of pods running in your VPC. So your controller, which also does some lifecycle management of those pods to some extent, could actually do this.
K
So you don't need, you see... the problem for me is changing our API like this. I understand that in your use cases it would be very valuable, but the problem is that making that flexible at the node level could let people install some really dangerous probers. That's the security concern to me. So your case, I think, can be achieved at the cluster level, unless you think there are particular things that have to be connected at the node.
K
If you want to reach some balance between node coordination and cluster-level coordination that you couldn't achieve, let us know so we can help. But so far, at least based on what you described, and also based on my understanding, you can still move forward at the cluster level instead of complicating the node here, yeah. I have a question: is there anything preventing you from creating a proxy to translate the signal from this custom probe into a standard, equivalent probe?
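The translation proxy being suggested can be sketched simply: a small endpoint, reachable by the kubelet (for example on the management network), that re-exposes the in-VPC prober's latest verdict as a standard httpGet probe target. This is hypothetical, only to illustrate the translation:

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// healthy holds the custom prober's latest verdict; how it gets updated
// (watching a CRD, a push from the in-VPC controller, etc.) is omitted.
var healthy atomic.Bool

func main() {
	healthy.Store(true)
	// The kubelet's ordinary httpGet livenessProbe points here, so no
	// new probe protocol is needed on the kubelet side.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if healthy.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})
	http.ListenAndServe(":8080", nil)
}
```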
G
I think that's the part that I find most interesting. So maybe, if not today then in the future, if you could spend some time talking through what the trust boundary is that you want to have between the kubelet and the workload, I would find that interesting. I don't know if this is a confidential-computing scenario you're exploring, but naively, right now in the community, the kubelet is basically an admin over all the workloads it launches. It sounds like you want to be able to take away some rights.
G
I,
just
don't
know
what
partial
rights
you
are
are
not
taken
away,
which
is
why
I
was
asking
like
in
a
world
where
the
cubic
can
launch
the
Pod
and
destroy
the
Pod
and
attach
storage
and
get
an
IP
like
understanding
the
Nuance
of
the
security
boundary
that
you
need
that
that
would
probably
be
really
enlightening,
and
if
it
was
tied
to
like
something
around
confidential
containers
and
like
if
there
was
unique
stuff
around
Secrets
I'd
also
find
that
interesting.
It's.
K
Yeah, what you describe, the actors you just mentioned here: there definitely are use cases for Kubernetes where we don't want the workload running in the container to access daemons like the kubelet, right? This is why we have the gVisor project, I mean in Google, or in GKE; that's for untrusted workloads. If you're running a multi-tenant cluster, we think about the admin and the other workloads running...
K
...whether they are trustable, and of course we have monitoring, metrics, all those kinds of things to detect whether they behave or not. But still, sometimes you bring in some third-party, untrusted workload and you might want to share more, and namespaces are definitely not enough isolation. That's why people have been using Clear Containers, and Google has gVisor; that's basically where all of this came from, yeah.
G
That's just what seemed unique. Either way, I kind of agree with Dawn that it's hard to go and add this flexibility just now, particularly if the trust boundaries that motivate it aren't well understood. Maybe Dawn understands them better than I do, but I'm missing some of them.
L
Or CSI: you cannot just assume that all these users use the same disk volumes. It's not enough.
G
Separating the security dimension from "allow me to support custom probes", that makes sense; it was just the intersection of the two that was confusing me right now. And then the implication of supporting custom probes, or custom probe plugins, I think needs some more thought, particularly around: how do I know that the prober is present on a node before the pod can be scheduled there?
G
What's the planning that needs to be done on a cluster to support that? So I think we would need a little more context on those questions.
L
Yeah, maybe I understand what you mentioned: it's more at the CNI or CSI level, such as how to add custom probe controllers like this one, and more areas around the solution to think about, such as how to enable a probe and how to disable the custom probe, right, from my understanding.
A
Yeah, maybe we can wrap up this discussion today with this: if you can bring some more requirements, maybe first split it into different types of probes and the security problems, and...
A
...see if we can solve it another way, think of the alternatives there. But then the next stage is to explain the security requirements better, and why there is such a pushback on the kubelet being able to access the user workload.
L
Yeah, okay, okay. Because this is our first time bringing it, I just wanted to know the community's opinion about it, and also because we have strong requirements for it: you cannot reuse the existing probes; it's a user requirement, users' expectations. So now we can write more docs to describe our scenarios in detail. Okay, thank you. And I still think that this is the right path.
A
...and split them into different types of probes: maybe streaming probes, or maybe sub-second probes. There are multiple directions we could go in; we just need to understand the scenarios better and understand how they all fit into a longer-term strategy. Thank you. Let's switch to the next topic, so we'll have time for everything. Kevin, do you want to go ahead?
A
Okay, thank you. The next topic is about SIG Node recordings; I think, Derek, it's for you.
D
Yeah, usually I batch these up, because it's actually a time-consuming process to do this. So I will try to get all the meetings up to this week uploaded by next week.
G
He batches them because it takes time for each one.
D
It's actually kind of restrictive: you need the Zoom credentials to get to the meeting recordings.
G
And then adding to the YouTube playlist is also restricted, so it's probably a small set of folks who could do it. But if others wanted to do that...
A
Okay, the next topic is about per-container restart policy. Alright, I wrote this document, so let me switch to it for some explanation.
So we've been discussing sidecar containers for a long time. For sidecar containers we need to solve many problems: they need to shut down with the main containers; they need to stay alive while the main container runs, even when the restart policy is Never, which is the case for jobs.
A
For instance, sidecar containers must start before main containers in many cases, maybe even before init containers, and containers may have weird dependencies on each other. Some containers may control the network, some containers may control the logging, and they both need to start early, but the network one needs to start earlier than the logging one, for instance. There are those kinds of strange interdependencies between sidecar containers, but also between regular containers as well.
A
So we've been discussing introducing sidecar containers, as in other types of containers, for a long time, and we were never able to solve all the questions and all the problems. But the community really wants some of these problems to be addressed, at least some of them. So this proposal limits the problem to just the restart policy, and the proposal here is to introduce a per-container override for the restart policy. Today you can only set the restart policy on a pod, and it applies to all containers in the pod.
So the proposal is to allow overriding this per container: it allows setting a restart policy override for certain containers, for example Always. Let's say you have a job, and this job has a logging container or a metrics-upload container as a sidecar. You can mark the job's main container to run and never be restarted, while the sidecar may be restarted. Then, to solve the second problem, or the first problem...
A
We also propose to introduce a new flag on this override, which will be called terminatePod, and this flag will say that once this container no longer needs to be restarted, the whole pod needs to be terminated. So there is a small explanation here, a one-sentence TL;DR explanation, and the whole document outlines more details.
A
So please take a look and comment if this sounds good. This may be a very good first step; it is so far the least controversial of all the proposals on sidecar containers and similar container types.
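A rough sketch of the override as described; the proposal was a draft at this point, so the per-container restartPolicy field and the terminatePod flag below are rendered illustratively, not as a real API:

```go
package podspec

// RestartPolicy mirrors the pod-level values: Always, OnFailure, Never.
type RestartPolicy string

// Container sketches the proposed per-container override. Today only
// the pod-level restartPolicy exists; these two fields are hypothetical.
type Container struct {
	Name string
	// RestartPolicy, if set, overrides the pod-level policy for this
	// container, e.g. an Always sidecar inside a Job pod whose
	// pod-level policy is Never.
	RestartPolicy *RestartPolicy
	// TerminatePod, if true, terminates the whole pod once this
	// container exits and will not be restarted.
	TerminatePod bool
}
```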
G
Yes, okay. So, one: I know Tim Hockin and I have been talking about this the last few days in the background as well, and I know you reached out there.
G
The part I still like was that I had pointed Tim to the BindsTo directive that you can use in systemd, which allows you to say that this process's lifetime is bound to another's, so that when one dies, the other one can die too.
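For reference, this is roughly what the systemd directive being referenced looks like; a minimal unit-file sketch, with main.service and sidecar.service as hypothetical names:

```ini
# sidecar.service (hypothetical): BindsTo ties this unit's fate to
# main.service; if main.service stops or dies, systemd stops this too.
[Unit]
BindsTo=main.service
After=main.service

[Service]
ExecStart=/usr/local/bin/sidecar
```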
G
I still think that's useful, and it might be a thing that can overcome Tim's feedback on your proposal there, which was that it's weird that a restart policy can basically result in the termination of something else. I don't know if you had thoughts on the BindsTo semantic. The other part of it that was appealing to me was:
G
I think it'd be great if the kubelet could delegate more to the operating system for some of these things, and so if we could express this in a way that we know a runtime could easily map to popular init systems, that would probably shed load from what the kubelet needs to manage. So, either way: I know Tim has concerns on BindsTo too, but I was just curious if you could review that part afterwards; I saw Tim had edits. Give your thoughts, but yeah, anyway.
F
I think we need to experiment with this BindsTo to see how well we can rely on direct integration there. But this sounds appealing to me, and I'm in favor of delegating to the OS and systemd where possible.
G
Even a restart policy is basically like "if I finish, terminate everyone else", and the feedback from Tim on that was that it's weird to have a restart policy express the fate of other containers. So then I pointed Tim to the systemd BindsTo directive, which basically allows a unit file to say that the fate of this process is tied to another's, and either...
A
Yeah, BindsTo is another alternative I was thinking about. Maybe we need to write down the pros and cons of one versus the other; I can do that.
A
Okay, if there are no more comments on that, please read the doc and comment there, whether you like it or don't like it. More on sidecars: there is an issue that somebody raised; basically, it's another issue about sidecar containers. I just put it here to highlight that we have many issues with sidecar containers.
A
We calculate the OOM score adjustment in the kubelet, and the current calculation is based on the amount of resources you want from the system. So, on a node, we calculate the OOM score adjustment for every single container, and if a container is small, the OOM score will be higher for that container, because we believe this container is less important than maybe some bigger container that is harder to recreate.
A
It leads to some unexpected problems, where all sidecar containers get terminated right away when an OOM situation happens. We basically paint a big target on every sidecar container, saying this container is likely very unimportant, and it leads to all sidecar containers being terminated very fast. It doesn't help solve the OOM problem, because they are typically very tiny, but at the same time it interrupts the normal work of the bigger pods.
A
So maybe we can adjust the OOM score calculation to help with sidecar containers and account for the entire pod's request, rather than the individual container's. I was wondering if anybody has thoughts on it, and whether we need a KEP for that or can just go with a smaller change here.
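For context, the heuristic being discussed is the kubelet's burstable OOM score adjustment, which scales with a container's memory request relative to node capacity, roughly as paraphrased below (boundary clamping omitted). The worked example shows why a tiny sidecar becomes the OOM killer's first victim:

```go
package qos

// Paraphrased burstable heuristic from the kubelet: a container that
// requests a small fraction of node memory gets a high oom_score_adj,
// so the kernel's OOM killer prefers it. (Guaranteed pods get -997,
// best-effort 1000; the clamping of edge cases is omitted here.)
func burstableOOMScoreAdj(memoryRequestBytes, nodeCapacityBytes int64) int64 {
	return 1000 - (1000*memoryRequestBytes)/nodeCapacityBytes
}

// Worked example on a 64 GiB node:
//   64 MiB sidecar:        1000 - 1000/1024 ≈ 1000 (clamped to 999)
//   32 GiB main container: 1000 - 500      = 500
// The sidecar is killed first, even though killing it frees almost no
// memory, which is exactly the problem described above.
```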
K
On the topic you mentioned here: actually, in Kubernetes, initially the restart policy was not per pod, and the QoS policy also was not per pod; both started from per container, and there was a long debate in the past. I'm not sure whether it's captured in a PR, since at that time there were no KEPs; maybe it's in email. In the past we had a long debate, and it finally settled down on the pod level.
K
If you look at the early QoS, and I'm not sure whether you were around at that time or not, it was not per container at first, and we spent a huge amount of time converting it. So, Sergey, your latest proposal basically changes this to per-container again, and that's a huge change once you start it. Of course, what you mean is not a single small container; it's a set of containers. This is why I feel so strongly about the sidecar container.
K
Every time it's brought up, because it doesn't literally fit into the original pod definition, and also it's out of band, people inject them. I want to understand how many people today are using sidecar containers, because the last time I talked to ASM and Istio, they were moving away from the sidecar container. And the people who insisted on sidecar containers in the past didn't want per-container QoS and per-container restart policy. I want to understand what made them change here, because that's a huge change for a lot of things, for policy, for those things.
K
You can see that originally we only agreed that the namespaces are at the pod level, but a lot of the resource management, naturally, for me, is per container, and so is restart, because that's the thread-group leader, right: when you kill that one, at the same time you actually kill other things. But we changed that; we literally changed it that way. One of the reasons we changed it that way is that we also don't want the pod to become another scheduling destination.
K
I want to say that because initially I was thinking the pod could even be a scheduling destination, a standard scheduling unit, so you could end up scheduling a container into a pod, not just onto a node, right. We were trying to avoid that too in the design, and that's why we changed it.
F
Definitely, yeah, that could be challenging, right? Because you want your pause container to have a specific score, and how does that reconcile with a different score for the sidecar one?
G
Yeah, I would rather move towards that than try to think about how to change the present state. Sidecars in general, particularly when they're injected by people who didn't know about them in their pod spec, introduce all sorts of interesting quirks.
G
So we had to reason our way through them carefully, I guess.
K
The sidecar container exactly matches my original thought about the pod being a scheduling destination, right? But initially we said: oh, we don't want to do that, because we really want to make the pod the minimum scheduling unit. We basically had a long discussion on that one, and we settled it that way. But then, later, we introduced the ephemeral container...
G
I think the ephemeral containers also introduced the pod as a scheduling unit.
K
Because that's kind of okay for debugging and other one-time things, so most people don't use them much. But the sidecar is actually a full, continuously running container, and it would even control other containers' fates. That's why I keep connecting this to the scheduling semantics: it doesn't match.
M
We've discussed this many, many times over the last weeks, and we had a separate session, a meeting, about it last Thursday. I think there now starts to be some clarity on the subject, and I updated the KEP with a figure that illustrates what it's about and the different places involved, so I urge people to take a look at that updated KEP PR.
H
Well, this is just a quick question. This is about some current behavior, but in the document we have comments noting that this may be a bug, and I'm not sure if we will fix it, plan to fix it, or should just document it.
A
So the action item here is to take a look, whoever is interested. Thank you. Maybe if nobody replies by next week, bring it up again. Thank you. Yeah.
F
So, I think Kevin and I met with Marcus and folks, and I think this picture is a very clear diagram of what it's trying to achieve, and then there's another diagram that shows what phase one is. Hopefully we can have something similar on the DRA side, so we can compare and contrast what all these efforts are trying to achieve.
A
Thank you, and sorry, everybody, for running over time. This was the SIG Node weekly meeting. I think we've covered everything else.