From YouTube: Kubernetes SIG Node 20190813
A: You will have to provide the required keys if you are using encrypted images. And how do we do it? We realigned certain things. Earlier we were only sending the required decryption keys with the PullImage call, and we were doing the image verification check here, after this, yes, here. But now we have moved that authorization check into CRI-O, or into containerd, into the runtime service. So when a CreateContainer happens, you will have to send the decryption keys, and CRI-O or containerd will try to fetch the image annotations and check the authorization. So this is how we are handling it.
There is a sample implementation of this in... I'm having some trouble with my laptop.
A: ...that I kept here, along with this KEP. If you want to try it out, this is the CRI-O runtime you have to use, and soon we'll add it to containerd as well. Although right now, since we are doing it as part of every single CreateContainer call, we are thinking we'll probably add a check here, which could be a configuration flag for the runtime that can disable this image authorization.
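The flow described above can be sketched roughly as follows. This is a minimal illustration, not the real CRI types: the function and annotation names are invented, and the "unwrap" step stands in for the actual cryptographic key-unwrap a runtime like CRI-O would perform.

```python
# Hypothetical sketch of the described flow: the client sends decryption keys
# with every CreateContainer call, and the runtime fetches the image
# annotations and checks authorization before creating the container.
# All names here are invented for illustration.

ENCRYPTION_ANNOTATION = "org.example.image.enc.keys"  # assumed annotation key

def can_unwrap(key: str, wrapped_keys: str) -> bool:
    # Stand-in for a real unwrap attempt (e.g. a PGP/PKCS#7 operation).
    return key in wrapped_keys

def runtime_create_container(image_annotations: dict, decryption_keys: list) -> str:
    """Runtime-side check: refuse to create a container from an encrypted
    image unless a key that can decrypt it was supplied with the request."""
    wrapped_keys = image_annotations.get(ENCRYPTION_ANNOTATION)
    if wrapped_keys is None:
        return "created"  # image is not encrypted; nothing to authorize
    if any(can_unwrap(k, wrapped_keys) for k in decryption_keys):
        return "created"  # caller supplied a usable key
    raise PermissionError("no key provided can decrypt this image")
```

The point of putting the check here, rather than in the pull path, is that it runs on every container creation regardless of whether the image was just pulled.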
B: So, earlier, if I remember correctly, you mentioned an alternative consideration. Basically, what you have right now is the alternative consideration from your initial KEP: you mentioned that each container... when you are using containerd, for example. But you also pointed out...
A: Some things are still true. As you can see here, we are still using Kubernetes secrets; we are still going to use the Kubernetes secrets infrastructure to store the private keys. What has changed here is that earlier we were not using the CRI runtime service to send the decryption keys; because of that, we were only sending the decryption keys to the PullImage call here. And what was happening was, if you don't specify the image pull policy as Always... let's say, if you specify it as Never, it will never go down this pull path, and you will directly just skip past this without the image authorization check. And this problem exists even for image pull secrets. Up to now we thought we could handle this for image pull secrets by doing this, and yeah.
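The bypass being described can be made concrete with a small sketch. The names and control flow are invented for illustration; the point is only that when the authorization check lives inside the pull path, any policy that skips the pull skips the check too.

```python
# Sketch of the problem: if the decryption-key authorization only runs inside
# PullImage, an imagePullPolicy of "Never" (or "IfNotPresent" with a cached
# image) skips the pull entirely, and with it the check.
# All names here are hypothetical.

def start_container(pull_policy: str, image_cached: bool, authorized: bool) -> str:
    def pull_image():
        # The check happens only here, inside the pull path.
        if not authorized:
            raise PermissionError("decryption key check failed")

    if pull_policy == "Always" or (pull_policy == "IfNotPresent" and not image_cached):
        pull_image()
    # "Never", or a cached image: no pull, therefore no authorization check.
    return "running"
```

With `pull_policy="Never"` an unauthorized caller still gets a running container, which is exactly why the discussion moves the check into CreateContainer.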
A: After the last SIG Node meeting I had a discussion, as usual, and found that although the problem in question looks fairly similar, it is still different from the image pull secrets one. With image pull secrets, the check only matters when you have to pull the image; but for decryption, really, as long as you have the image, you should always verify that you can decrypt it, no matter what the image pull policy is.
B: Normally what happens is we coordinate around the SIGs, we are going to approve your KEP, and then the next step is to go find and assign a reviewer. A lot of the time the reviewer and the approver come from the same SIG; I think that's how it is here, with a reviewer and an approver assigned from this SIG. So then we can start on the implementation. I think the first thing is... okay.
H: So, this is continuing the follow-up from last week. I posted an addition to the thread with the analysis on how we would handle the case of two-plus pods if the kubelet were to restart during the process of an update being requested, and I was wondering if Dave... I don't know if Derek is here today.

I just took a look and responded to that specific thread. Okay, but I do think... it looks fine to me. I think that either if we do a local checkpoint or some other way of keeping state across restarts, we should be able to solve that problem.

Okay, I think we still need some guidance from Dawn and Derek on the approach we want to take for keeping that state.
H: Okay, so the advantage with local state is that you're not extending the pod spec, and it falls in line with, it is an extension of, what we are currently doing for discovery of running pods; and you reduce the overhead of talking to the API server during the update process. I was leaning towards the local store for that reason. Putting it in the pod spec is similar to what we do with the scheduler using a subresource: just like we set the hostname, we would be setting this.
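The "local state" option above amounts to checkpointing the in-flight request to a file on the node so a restarted kubelet can resume it without the API server. A minimal sketch, with invented file layout and function names:

```python
# Minimal sketch (hypothetical names) of node-local checkpointing: persist
# the in-flight update state to a file so a restarted agent can pick it up
# without a round-trip to the API server.

import json
import os
import tempfile

def save_checkpoint(path: str, state: dict) -> None:
    # Write to a temp file and rename, so a crash mid-write
    # cannot leave a torn checkpoint behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path: str):
    # Returns None on a fresh start (nothing to resume).
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```

The trade-off discussed is exactly this: the file avoids API-server traffic and extending the pod spec, but the state now has to be discovered and validated locally after a restart.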
H: So that's also a fair approach; we just have to decide. So I guess we'll take the word of Dawn, or... what do you feel? What does Derek feel?

I don't know if Derek has had a chance to follow up with it. Maybe you and David can kind of do an offline thread. I can't be involved in a private discussion right now, for various reasons, so avoid that if possible, although it's okay to do it with the companies involved.
H: That would be great. So once I get guidance on which way to proceed, I'll update the main KEP with whatever design we decide to go with. The advantages and disadvantages of each are kind of laid out; if you need me to put them in bullet points, I'll do that. Let me just do that, and add another comment to that thread saying: what's the advantage of doing it locally versus storing it in the pod spec?
J: So it means that you can make checkpoints that you then cannot use, and the whole point was that we were trying to make checkpoints so that we can restore functionality on a node without a connection to the API server. And so my pull request, the fix I'm suggesting, is just that we restore the node selection information when we read the checkpoint information back.
B: So we need to understand... based on the problem you stated here, I can clearly see there's a problem and something needs a fix, but I'm not sure this is the right way to fix it. The potential downside is that it could expose more problems in production, or not. So I would say: let's sync up with Eugene. I'm following what you've started here, so we will talk to her and make her the reviewer for this one.
B: The problem is, at this moment, that means we don't know whether fixing this particular scenario may expose other problems. So we want to understand the original root cause and see all those kinds of things, because does everyone have this problem, right? So obviously, from what you describe, I can see there's a problem for that particular use case, and the cleaner the testing and understanding is, the better.
J: There may not be a huge crowd of people saying: oh yes, I'm definitely using this, and this is causing a problem. I think a lot of people are just not using it. We have a particular scenario where we want to ensure that, for example, after a power-cycle reset event, things can still come back up even if there's no network connectivity. That's not a usual use case for most people, I understand, so I can believe that maybe people will think it's just not worth putting any effort into fixing.
B: Yes. The reason I mention Eugene is not just because she already looked into this; it is because, for that checkpoint feature, she actually used to be the reviewer, and as you know, the other reviewer with that background who was known to the SIG has left, and I'm looking into that feature, so she should know more than the rest here. Okay, the rest of the folks in the SIG have another background, so whether this is the right fix or not... okay.
L: Yeah, so this is Kevin here. Okay, yeah. So the issue: we're wrapping up the majority of the topology manager PRs so that we can get everything in by the code freeze at the end of the month, and one of the issues we've come across is that the different hint providers that we have for the topology manager, in the worst case, need to enumerate all possible socket masks across all of the different sockets that you might have on the machine, which in the worst case results in two to the n, minus one, masks.
L: ...where n is the number of sockets that you might have on your machine. You know, most machines tend to have a small number of sockets, so this isn't an issue. But if you do happen to have a machine with a lot of sockets, the state can explode once you get to those larger numbers of sockets. So the question came up as to what the best way to safeguard against this is, so that if we do encounter a machine, maybe a virtual machine that someone set up, that has, say, 20 sockets on it...
L: ...we want to do something to limit it to only allowing 8 sockets or fewer. That's just kind of a number we came up with, because it keeps the computation pretty small. And so I was just kind of wondering what your recommendation is for the best way to safeguard against this. Is it something we should put in place as the kubelet tries to come online, where we say: okay, the topology manager is enabled, and the machine that we're running on has more than eight sockets...
L: ...and we want to fail fast right now, so that they can, you know, go look at the logs and say: okay, why did this just fail? Okay, it's because I'm on this kind of machine; I guess I can't use the topology manager here. Or is it better, at the time that we try to launch a pod that wants some aligned set of resources, to have it fail launching the pod, saying: hey, we can't launch this; even though you turned on the topology manager, there are too many sockets on this machine?
L: That's kind of what we're proposing right now. There are no systems that I know of out there that have more than eight sockets, and so that's why we kind of picked that number. It actually will still work fine up to 16 sockets, you know, because two to the n minus one for 16 is still 65535 combinations. But if you get to, say, 32 sockets, that, you know, explodes. So 8 or 16 would be fine, but we want some way to just limit the number of sockets that the topology manager supports.
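The growth Kevin is describing is just the count of non-empty subsets of the sockets. A tiny sketch of the arithmetic (the function name is invented):

```python
# With n sockets, a hint provider may in the worst case have to enumerate
# every non-empty socket bitmask: 2**n - 1 of them.

def socket_mask_count(num_sockets: int) -> int:
    # Each socket is either in or out of a mask; exclude the empty mask.
    return 2 ** num_sockets - 1

# 8 sockets stays tiny (255), 16 is the 65535 mentioned in the discussion,
# and 32 already explodes past four billion masks.
```

This is why the cap of 8 (or even 16) keeps the computation cheap, while anything much larger does not.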
B: This is relevant to something we talked about earlier and never really figured out how to do: when you query the node, the node could display what kind of features it supports. I'd love to have that kind of query feature. If we had that one, then even if you turn on the topology manager, because it is, say, a machine with more than eight sockets, we could basically say: sorry, that's disabled, and then you can...
B: ...you can figure out why it's disabled. Same thing for certain other resource management features: it is because you are missing a certain kernel module, for example, and so even though you enabled it, you cannot actually use that resource management. So this is kind of a similar thing to that one, but we never figured out how to do it.
B
So
I,
don't
have
a
strong
opinion.
Just
say
are:
we
is
either
is
just
disabled
or
is
just
give
their
warning
or
but
I
think
we
just
need
a
sink
apart.
You
subpoenaed
here
like
the
user,
how
easy
to
discover
and
the
we
don't
want
to
support.
Oh
it's
just
an
awful.
This
is
awful
ugly.
They
hold
the
integration,
it
is
for
you,
sir.
B
L: My inclination, at least in the alpha release, is to do my first suggestion there, where, you know, we look for the feature gate, we see if the topology manager is turned on, we look at machine info, and if we see more than eight sockets, we fail the kubelet right there and say: sorry, you can't enable the topology manager if your machine has more than eight sockets, at least for this alpha release. And we could think about, going forward, whether there's something better or more robust. I'm curious what your guys' thoughts are there.
B: In this case, at least from my own product perspective, either approach works, because we don't use every feature. But I think there are some potential users who want to try the alpha feature, and we need to think about their deployment model, though I don't have a strong opinion here. Because, for me, failing the kubelet loudly at startup, that's good by my philosophy; I think that's the good thing, and it doesn't affect me a lot, but I'm not sure about other people.
L: True, yeah. I guess the other thing we can do is just leave the limit in and emit a warning for this sort of thing. But, yeah, I'm not sure it makes sense to add a config knob for this, because the future I see is: we limit it to eight, and then someone says, oh, I want to break that limit, and then we add a new config knob, and it's a knob that only five people understand, and... yeah.
L: I tend to agree with that, yeah, because hard-coding it to eight is very arbitrary. So, yeah, how about this: it sounds like maybe the right answer, in some ways, is to stick with doing nothing for now, because again, going back to the argument that this is an alpha release, just have it in the release.
B: My concern is, I always think about other deployments. Actually, in our deployment we don't have this; we shouldn't run into this problem, because that feature won't be enabled anyway, no matter what. But other deployments may have that concern, like: there's no way for them to fix it; it would require them to redeploy those kubelets, and I'm not sure everyone has dynamic kubelet config enabled, so that they could disable this feature on the nodes with more than eight sockets and enable it on the rest... you know.
B: I think it is your call for the alpha. Because, if you fail the kubelet, it's easy for you to get user reports, so you can see how many people depend on it and want to have more than eight sockets, or whatever number gets picked. And if you just don't fail the kubelet, they definitely won't surface the problem. That's all I wanted to mention, thank you.
B: On the other hand, some deployments may have a single configuration deployed to all the nodes. So then, basically, for this kind of thing, if one node has the crash, they may have to disable this feature completely, because of the single configuration, unless they build their own dynamic kubelet configuration infrastructure. We have had the dynamic kubelet config feature for a while, but they'd need to build their own infrastructure around it anyway. So both options have their pros and cons.
L: Yeah, I think that was my inclination as well when we started the discussion, because of exactly that: I don't want us to forget about this once it moves to beta or beyond, because we just decided to leave it without, you know, any error or warning, and then there's just an issue lying around somewhere saying that this should be addressed at some point. Yeah.
B: So, you have kubeadm: you turn on Kubernetes with the command line, and you can remotely invoke it, connect to it to initialize the node, and it validates the node as well. So, thinking along those lines, you could imagine checking what kind of features you could enable against your configuration. But we didn't go that way; instead, based on that idea, people proposed kubeadm.