From YouTube: Kubernetes SIG Node 20200128
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
B
A
A
B
So that was the first question, and I think maybe the KEP was updated to address that. The second feeling I had on it was, given the prerequisite on secrets: should we focus our energies on secrets first and hold that KEP until it can be placed in a particular release window, just so that we could either iterate on that PR or merge it in some lightweight form.
B
A
A
But the problem is that some of those components need privileged access, and as we iterate, I think some of those kinds of things are known to be things it cannot support, and the proposal needs a certain way to support them: for example, resource management. That's obviously one that I have a concern about.
A
We are moving in a direction that addresses our full potential here: reducing the privileges for those component owners. But at the same time we lose functionality. Many may not know that, for example, we can't give you an option for certain events, just as a simple example, and I think you cannot really do resource management, and the OOM kill and overcommit handling for the oversubscribed situation cannot be inherited either. There are too many things like that, and also the kernel itself may not have the support underneath it. So I'm not sure; at this moment I'm not convinced that this is the right approach to go. So I'd like the proposal owners to define what is impacted, what cannot get full support, what will never work, all those kinds of things, and also have the community define and work out the scenarios and the products in which they want this to be a part. Maybe later we can move forward.
C
If I can add a few things to that: I'm definitely in favor of rootless control plane components. I think that we can run most of the control plane as non-root without making any changes to the functionality, and there are already some efforts underway to do that. More generally, I think we should push towards running more components with least privileges, so not necessarily rootless, but if there are capabilities we can drop, or seccomp profiles we can add, to harden the components.
C
I think that can go a long way, and there are actually some cases where I think running components as non-root would actually weaken the security. So, for instance, the kubelet talks to the CRI, or to Docker, over a UNIX domain socket that's owned by root, and so we sort of assume that you need to be root on the host in order to have privileged access to run things over that socket. If we were to run those components as non-root, that would have to be a socket owned by a non-root user.
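The root-owned socket assumption above can be sketched as a small check. This is illustrative only; the function and the uid/mode values are made up for this example and are not kubelet code:

```python
import stat

# Hypothetical helper: given the owner uid and permission bits of a UNIX
# domain socket, decide whether only root can write to it -- the implicit
# assumption described for the CRI socket (root-owned, typically mode 0600).
def root_only_socket(owner_uid: int, mode: int) -> bool:
    group_or_other_writable = bool(mode & (stat.S_IWGRP | stat.S_IWOTH))
    return owner_uid == 0 and not group_or_other_writable

# A root-owned socket with mode 0600 is only writable by root:
print(root_only_socket(0, 0o600))     # True
# Chowning it to a non-root user (uid 1000) breaks that assumption:
print(root_only_socket(1000, 0o600))  # False
```

Running components as non-root would mean relaxing exactly this kind of check, which is the weakening the speaker is pointing at.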
D
E
A
C
C
But yeah, actually that's a good point. If we're going to start working on running things as non-root, I think storage plugins would be a good place to start focusing that effort.
A
So in your KEP, I think, I like Patrick's suggestion to define it: define what functionality is affected here, and what you need to use. Based on that, and on the similar situation I mentioned earlier, I want to say: okay, what exactly do you need? Because I think whatever mechanism it needs has to be made explicit, and I think even the kernel has those problems. So, so we... I don't want.
A
We have addressed this kind of problem before, and it always feels incomplete: we introduce a different model, but we don't understand the use cases, and then the community records that this model is the more secure model, which confuses the users. So everyone thinks, oh, that's a good model to deploy. And then the community has to explain that it has lost a lot of functionality that is necessary. Maybe we even have the problem with resource management and all those things, I think, also in your KEP.
A
I suggested this exploration to make this the next step: start from the storage plugins and see whether that's possible, and in your KEP we can start with a first version to see how all those control plane pieces interact with the runtime. Then we can explore more.
B
F
B
B
F
A
H
C
C
A
H
D
D
I think it's asking for a retest as well, to run the tests again, and I have pinged Jordan. I'll ping him on Slack as well and see if he's available to take a look at this and then run the review of the KEP. Sorry about the additional work; I didn't realize the error in the KEP's YAML formatting was there until you gave the LGTM and ran the tests.
A
D
Let's try and do that, and I think for the code PR that I sent out, Tim is already looking at it and gave a few comments, and I responded to it, so I'll continue working on that. So to get the KEP merged, I guess... I don't know what we need. I guess we need another LGTM label, because I made a commit after the last LGTM. That's the only thing, I'm hoping, that we need: that, and Jordan's formal approval.
J
Well, so it's not just Windows. We talked about this briefly last week, but Mike was able to give comments then, and so this is continuing the discussion around how we can pass RuntimeClass down through the CRI for the purposes of pulling images and checking if they exist. KK did some updates there. Did you want to talk through those, KK? And double-check: are you actually here, Mike?
K
One of the concerns that was brought up was that ListImages and the stats calls do not return an ImageSpec. So should we consider including ImageSpec in the return? Is that something which we need to bother about? That was one of the things in the review, and I wanted folks' inputs on it.
K
K
G
Yeah, I think, when I check the API, we have the Image type: ImageStatus is returning the Image type, and ListImages too, so maybe it doesn't matter as much. I think at least we can put the annotation into the Image message, I mean the one returned by ListImages, and we can also consider right away whether we should plumb the whole ImageSpec back there, but I think we should be returning that information.
G
J
Yeah, I just don't know if I agree with that, because, you know, the feedback that we had last cycle was that this was getting too broad and we needed to narrow it down. And so, you know, if we make it as narrow as possible, all we're doing is passing down the annotations, which would, from the CRI, just basically include runtime class down to the pull and ensure-image operations. That is the narrowest change we could possibly make. And so, are you proposing that we broaden this again, or not?
G
J
G
J
K
So is it okay to include this into the KEP and kind of take a phased approach, where we don't modify the larger APIs as such, but the ones which have ImageSpecs we start adding to, and then in the next phase we kind of change the list call? Is that an OK approach to take, so that we don't have the broader impact right away, but we'll get what we need to help?
K
What I'm suggesting, then, is that we make it part of the KEP so that we have this as a goal, but when we implement, we implement it in phases, where the first phase is always like: we just modify the ones that we need, and then in the next phase we come back and go about modifying the rest. This way we will be able to move forward, rather than, you know, coming back to the same debate about whether we need to go broader or not.
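The phased approach above could look roughly like this sketch. The annotation key and helper are hypothetical placeholders, not an agreed-upon CRI change, and the real CRI ImageSpec is a protobuf message, shown here as a plain dict:

```python
# Hypothetical annotation key for illustration only -- not a defined CRI key.
RUNTIME_CLASS_ANNOTATION = "example.cri/runtime-class"

def image_spec_with_runtime_class(image: str, runtime_class: str) -> dict:
    # Phase 1 of the phased approach: only calls that already carry an
    # ImageSpec (e.g. pull / status) gain the annotation; list-style calls
    # would be left untouched until a later phase.
    return {
        "image": image,
        "annotations": {RUNTIME_CLASS_ANNOTATION: runtime_class},
    }

spec = image_spec_with_runtime_class("registry.example/app:v1", "gvisor")
print(spec["annotations"])
```

The point of the sketch is only the shape of the change: annotations piggyback on the existing ImageSpec rather than widening every API at once.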
G
E
No, I would just agree with the approach. You know, just putting the annotations in there, if they're stored, that's good. It provides the ability to do experimentation, extensions, and then, when those extensions become, you know, more solid, we'll move them out of annotations into, you know, an explicit predefined value, so that we can version-check, right?
E
K
J
K
A
The next topic is more to ask for our attention, and I know you already looked at that, and also the content. I also want to know the ownership of those areas. So I already talked to them individually; the proposal is all here, and then we could go through it.
N
O
O
The amount of CPUs that are reserved is being deducted from the overall allocatable capacity on the node. So then, if the number of reserved CPUs is small, non-guaranteed pods can only run on this small set of CPUs, which can be very small, and they can choke on their own resources. And then, if the number of reserved CPUs is large, so that the amount of isolated CPUs is, I mean, relatively balanced with this capacity, then we'll get into a scheduling problem, because that capacity will be deducted from the overall allocatable capacity.
O
So then, instead of using reserved CPUs, the user suggested to use a static set, which would allow users to specify which CPUs are isolated, and then guaranteed pods would only be able to get scheduled on these static CPUs, and it would not reduce the overall capacity of the node. And the non-guaranteed pods would be able to run freely on the non-static CPUs, or on static CPUs if those CPUs are not used. So that's the larger story for this PR.
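The trade-off being described can be shown with toy arithmetic. The numbers are invented for illustration; this is not kubelet code:

```python
# A 16-CPU node, for the sake of the example.
NODE_CPUS = 16

# Reserved-CPUs model: reserved CPUs are deducted from allocatable,
# so the scheduler sees a smaller node.
reserved = 12
allocatable_reserved_model = NODE_CPUS - reserved

# Static-CPUs model: a static set is carved out for guaranteed pods,
# but allocatable capacity is left untouched.
static_cpus = set(range(12))
allocatable_static_model = NODE_CPUS

print(allocatable_reserved_model)  # 4
print(allocatable_static_model)    # 16
```

The first model shrinks what the scheduler can place; the second keeps allocatable intact but, as discussed below, hides the static pool from the scheduler.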
B
B
My reading of this PR meant that, of the allocatable capacity recorded by the node, only a portion of that capacity could be used by pods not in a guaranteed QoS class. I first wanted to confirm that that is the right understanding of the behavior, and then I didn't understand why that was a preferred behavior versus what I viewed the CPU manager doing today, which was dynamically shifting the set of CPUs it could hand out as exclusive cores based on the pods presently scheduled to the node.
B
L
Derek, I think it's the opposite of what you were saying. It's that pods... so, and I get where Derek is going: there's a scheduling problem, right, where a guaranteed QoS pod can come in with an integer request, so it qualifies for static CPUs. The scheduler goes: okay, there's enough allocatable on the node. But when it gets there, all the static CPUs that you specified in the kubelet argument are occupied.
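The race described above can be sketched as a toy model with invented numbers; this is not the real scheduler or kubelet admission logic:

```python
# The scheduler only checks node-level allocatable CPU.
def scheduler_admits(allocatable_cpus: int, request: int) -> bool:
    return request <= allocatable_cpus

# The kubelet additionally checks the static CPU pool for guaranteed
# pods with integer CPU requests.
def kubelet_admits(free_static_cpus: int, request: int) -> bool:
    return request <= free_static_cpus

allocatable, free_static = 8, 0  # static pool already fully occupied
request = 2
print(scheduler_admits(allocatable, request))  # True: pod is sent to the node
print(kubelet_admits(free_static, request))    # False: node rejects the pod
```

Because the two checks disagree, the pod lands on the node and is then bounced, which is the Pandora's box mentioned next.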
L
Yes, we have somewhat opened this Pandora's box of, you know, having situations where the kubelet has information the scheduler doesn't, and can bounce pods. But I do think that this is fundamentally creating a separate type, a separate CPU-type resource, yes. And yeah, I get your argument that the topology manager has somewhat created that for CPUs and memory and devices and stuff like that, but yeah, it's complicated.
O
B
B
B
G
A
B
M
O
B
B
B
O
C
Yeah, I mostly just wanted to bring attention to this. I'm hoping to get this into 1.18, but awareness today: a handful of people have already looked at it. So back in, it was Kubernetes 1.6, we had this feature called streaming proxy redirects, and what we were trying to do was enable the exec, attach, and port-forward streaming requests to work with CRI while trying to avoid proxying those requests through the kubelet.
C
Because of the way we need to sort of inspect the response from the CRI inline before following the redirect. And it also opens it up to server-side request forgery, which is where you can basically get the server, in this case the API server, to make a request to some other endpoint that it shouldn't be talking to. So if you were to compromise a kubelet, you could use this to propagate through the cluster by having the API server send an arbitrary exec command to a different node.
C
So we added the ValidateProxyRedirects feature to prevent the SSRF attack; it basically says that the redirect URL has to be to the same host as the original request, but allows the port to change. This means the CRI streaming server has to serve not just on the same interface, but also, when it returns the URL, it has to use the same format: an IP address on the same interface, in the same IP version. Actually, I'm not sure if it has to be the same version. So anyhow, that was sort of the current state.
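The same-host check being described can be sketched roughly as follows. This is an illustrative simplification, not the actual apiserver code:

```python
from urllib.parse import urlsplit

# Allow a redirect only if it keeps the original request's host;
# the port is free to change.
def redirect_allowed(original_url: str, redirect_url: str) -> bool:
    return urlsplit(original_url).hostname == urlsplit(redirect_url).hostname

# Same host, different port: allowed.
print(redirect_allowed("https://10.0.0.5:10250/exec", "https://10.0.0.5:35123/s"))  # True
# Different host: blocked, which is what stops the SSRF pivot.
print(redirect_allowed("https://10.0.0.5:10250/exec", "https://10.0.0.6:35123/s"))  # False
```

Note how a hostname-string comparison like this is exactly why the returned URL must use the same format (e.g. the same IP address) as the original request.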
C
And then we wanted to add authentication to that streaming server, and there were a number of different proposals for ways to kind of plumb credentials through for the API server to use mTLS when connecting to the CRI streaming server, and it was all very complicated. And we said: the API server already has an mTLS connection with the kubelet. What if the kubelet just follows that redirect locally and proxies back to the API server, sort of getting back to how things used to work, by adding that kubelet step back in there?
C
This has worked pretty well, and so the proposal now is to basically make this the only way of doing these exec requests to the CRI, and that lets us delete a bunch of complicated code and potential attack surface. The disadvantage to this approach is that now the kubelet is proxying through to the container.
C
My argument against those concerns is that we're already proxying through the API server, and the API server, a single instance or maybe multiple, is much more of a bottleneck than the kubelet. The kubelet has a maximum of 110 pods for the node, whereas the API server is potentially serving thousands of pods, and so any concerns about resource isolation, I would argue, apply more to the API server, which this doesn't actually fix. And as for the concern about latency, I haven't heard any concerns about latency; again, we're already proxying through the API server.
C
So I think that if you really need a low-latency, very high-bandwidth connection to the pod, I don't think that using the Kubernetes exec is necessarily the right way to engineer that anyway. So, yeah, the proposal is to remove that: the API server just proxies to the kubelet over mTLS, and the kubelet connects to the CRI streaming server over localhost.
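The local-redirect idea can be sketched as a check like this. Illustrative only; the helper name and URLs are made up, and this is not the kubelet's implementation:

```python
import ipaddress
from urllib.parse import urlsplit

# The kubelet can follow the CRI streaming redirect itself when the target
# is loopback, proxying the stream back to the API server instead of
# handing the redirect upstream.
def follow_locally(redirect_url: str) -> bool:
    host = urlsplit(redirect_url).hostname
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False

print(follow_locally("http://127.0.0.1:44321/exec/token"))  # True
print(follow_locally("http://10.0.0.6:44321/exec/token"))   # False
```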
C
B
C
C
So that's marking redirect-container-streaming, which controls whether the kubelet proxies locally, marking that as deprecated, and so the default behavior, which is to proxy locally, is the way going forward; and similarly marking streaming proxy redirects as deprecated. Now, the StreamingProxyRedirects feature needs to persist for much longer in order to handle version skew between the API server and the kubelet, and so that code can't actually be removed until 1.22 for a more aggressive timeline, or 1.24 for guaranteed safety.