From YouTube: Kubernetes SIG Node 20230516
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230516-170539_Recording_640x360.mp4
A
Hello, hello. It's May 16, 2023, the SIG Node weekly meeting. Welcome, everybody.
B
C
I remember: I finally approved, but I do remember we have some open questions. We approved because we do agree that's the alpha, and we also agree about the use cases. So that's why, at the end, I approved, but I do know there's an open question there; I even commented on it.
C
Like I said: okay, that's the open question, take a note, not converging yet. But on the other hand, I really want to help him go through this, because he's new and the use cases are real. The one thing is that what he proposed is not just a change in Kubernetes, and not only in the kubelet and the controllers we manage; they also change the CRI, containerd and CRI-O, and there are even some other people thinking about OCI changes.
C
So that's why I want to help him, because in the past we had so many people working across Kubernetes, right? Even I was part of the containerd community in the past, but I haven't done that for a while. So for the new people, how are we going to help them bridge those communities? That's why I kind of think: let's discuss that part. Otherwise I don't think all this can move forward.
C
One thing also: if containerd doesn't have those changes, what happens to this feature? How are we going to help? And this one is not included in the 1.28 planning; maybe we should include that and identify reviewers, like a CRI representative and a Kubernetes representative, and then of course there's the approver. That's easy; I can still be the approver, but I wanted to see how we are going to find the people to help him here.
C
A
This is about a KEP. Somebody discovered this problem with supplemental groups being inconsistently applied, so you may not get what you expected from the pod spec and the Docker image definition.
A
So the KEP is trying to unify the way we apply these supplemental groups and be able to enforce them. And changing the default of how containerd and CRI-O work may break some scenarios. We don't know anybody who will be broken, but we believe that changing the default that aggressively may not be ideal. So it's mostly about changing the default carefully and allowing customers to migrate more easily. Next one, the third item.
D
Yeah, so cgroups v2 has a new feature called the cgroup-aware OOM killer, and basically, when that is enabled on a per-cgroup basis, it causes all processes in the cgroup to be killed simultaneously if you experience an out-of-memory kill, instead of an individual process.
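For context, the kernel interface being described is the cgroup v2 memory.oom.group file; below is a minimal sketch of enabling it for one cgroup. The cgroup path used is hypothetical, and real paths depend on the node's cgroup driver.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// enableGroupOOMKill turns on the cgroup v2 group OOM-kill behavior for one
// cgroup: when the kernel OOM killer fires inside this cgroup, it kills every
// process in it (and its descendants) instead of picking a single victim.
func enableGroupOOMKill(cgroupPath string) error {
	f := filepath.Join(cgroupPath, "memory.oom.group")
	if err := os.WriteFile(f, []byte("1"), 0o644); err != nil {
		return fmt.Errorf("enabling memory.oom.group on %s: %w", cgroupPath, err)
	}
	return nil
}

func main() {
	// Hypothetical container cgroup path, for illustration only.
	if err := enableGroupOOMKill("/sys/fs/cgroup/kubepods.slice/example-container"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```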
D
The consensus seems to be that that behavior is better for just about all users, so much so that it'll probably make for a pretty good default. And then the real sticking point is that there may be some users that rely on the old behavior; some discussion has brought up HAProxy or Postgres, which actually might handle the current behavior well, where they can support individual processes being killed in the container. But sort of the assumption is that the vast majority of users don't run applications that can handle that.
D
Well, so the discussion is sort of: can we make it the default and just go with it, or is that a breaking change, and we need a KEP to actually cover the changing of that behavior?
B
I think not. As far as I know, with the current implementation there is no guarantee that the Postgres scenario will work, right? The kernel is going to pick a process at random; there's no definitive way where we can say: okay, don't kill PID 1 in the container.
D
B
Yeah, exactly. If it picks some other process, then it can recreate the worker process that got killed or something. So right now it's the kernel picking at random; I think that could be a policy, if you were to come up with some way to express it in the API. And I think another thing to consider here is that I was talking with David Porter about the next steps on cgroup v2 PSI. Eventually we'll have a user-space killer like oomd, and then we'll hopefully have more control over picking which process within the container gets killed.
B
So one way here is to go with Clayton's suggestion and introduce an enum: maybe "kill all the processes", and "kernel picks a process", which is random today, could be the name of what happens now, and then in the future we can add other values, like a user-space, policy-based one, something like that.
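For context, a hypothetical sketch of the enum idea; none of these names exist in the Kubernetes API, they only illustrate the shape of the suggestion.

```go
// Hypothetical API sketch; not part of Kubernetes.
package api

// ContainerOOMKillPolicy names what happens when a container hits an
// out-of-memory condition.
type ContainerOOMKillPolicy string

const (
	// SingleProcess: the kernel picks one victim process (today's behavior,
	// effectively random from the workload's point of view).
	OOMKillPolicySingleProcess ContainerOOMKillPolicy = "SingleProcess"
	// AllProcesses: cgroup-aware group kill; all processes die together.
	OOMKillPolicyAllProcesses ContainerOOMKillPolicy = "AllProcesses"
	// Future values could describe user-space, policy-based killers (e.g. oomd).
)
```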
A
D
Yeah, I don't think there are any explicit guarantees anywhere that it works the way it does. It's just been that way for a long time, and sort of my thought when seeing this was that it's probably better for more users to have the entire container killed and restarted, as opposed to having some portion of processes or a single process killed and then getting into a sticky debugging situation, trying to figure out: well, the container's still running because there happens to be a Bash.
D
That's still, you know, running, but all the actual workload is dead now. So that was the thought: to just try to make it a little more reliable.
B
I mean, I just read Clayton's comment, and that's what led me to think about what's coming next on the user-space side. So if you're going to add different kinds of behavior, maybe it makes sense to just retain the existing one and expand, though initially I was leaning towards just changing it. There are two more things here I want to add. So there is a kernel knob that says: kill the allocating task.
B
So in that case, what happens is that the process that triggers the OOM killer gets killed, even though it may not be the one that is using the most memory. I'm not sure if we have any specific use cases for that.
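For context, the knob being described appears to be the Linux sysctl vm.oom_kill_allocating_task; a minimal sketch of enabling it on a host (a node-level kernel setting, not a Kubernetes API):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent to: sysctl -w vm.oom_kill_allocating_task=1
	// With this set, the OOM killer kills the task that triggered the
	// allocation rather than scanning for the "worst" task.
	err := os.WriteFile("/proc/sys/vm/oom_kill_allocating_task", []byte("1"), 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, "setting oom_kill_allocating_task:", err)
	}
}
```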
C
C
Yes, honestly, if I remember correctly, that's even what I suggested back then, but that's an enhancement to me. This is my personal opinion, but I just want to say how the proposal sounds to me: it is just an optimization of the current behavior, making it correct, because as our default, in the past we always said: oh, we care in terms of the container.
B
C
We can start talking about user space and all those kinds of things. Okay, I just feel like the current one is just an optimization, and maybe it's overkill to introduce an API or introduce a config to decide, because those use cases still need a little bit more time to understand, and we do have experience in the past, but we never really made that work. What I mean by previous experience is real-world user-space OOM management. So that's the kind of thing where I feel like...
C
C
D
So, just to summarize what I think I've heard: probably changing the default behavior just to behave better for most users is fine, but maybe not documented, or not called out as a promise that we're always going to do this, just that we're going to try to do this, and sort of leaving open the option of, in the future, adding a policy for, you know, user-space out-of-memory kill or something like that.
C
A
Hey, next one. Michal?
G
G
So this is about when the kubelet kills a pod to make room for a critical pod; then it's like pod preemption. This can happen, for example, for stateful sets that need a pod allocated, and in the pod failure policy KEP we introduced the DisruptionTarget condition. So it can be argued that it's a disruption, and this was like an omission of this scenario when we were looking at the different scenarios, so I would call it a bug. But yeah, I prepared the KEP update; I think the code is to-do.
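For context, a minimal sketch, not the kubelet's actual code, of what attaching such a condition to a pod status could look like with the core/v1 types; the reason string is hypothetical.

```go
// Sketch only; the kubelet's status-manager plumbing is omitted.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markPreempted appends a DisruptionTarget condition when the kubelet
// preempts a pod to make room for a critical pod.
func markPreempted(status *corev1.PodStatus) {
	status.Conditions = append(status.Conditions, corev1.PodCondition{
		Type:               corev1.PodConditionType("DisruptionTarget"),
		Status:             corev1.ConditionTrue,
		Reason:             "PreemptionByKubelet", // hypothetical reason string
		Message:            "kubelet preempted the pod to admit a critical pod",
		LastTransitionTime: metav1.Now(),
	})
}
```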
G
This is pretty simple, and it's up for discussion whether we want to eventually cherry-pick this back, but it's an open question. It's not a very high priority.
G
G
So here the situation is that when the pod exceeds the timeout, it is sent SIGTERM, but it can handle it and exit with a zero exit code, and pod failure policy doesn't allow matching such failed pods: it allows matching by exit codes, but it doesn't allow matching by an exit code of zero. So if this happens, and this is like an edge case, we cannot match and then assign custom handling for such pods.
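For context, a sketch of an exit-code rule built with the batch/v1 Go types; it illustrates the limitation being described, since API validation rejects exit code 0 for the In operator. The values listed are arbitrary examples.

```go
package sketch

import batchv1 "k8s.io/api/batch/v1"

// examplePolicy matches failed pods by container exit code. Note that a pod
// that handles SIGTERM and exits 0 after activeDeadlineSeconds cannot be
// matched this way: 0 is not allowed in Values with the In operator.
func examplePolicy() *batchv1.PodFailurePolicy {
	return &batchv1.PodFailurePolicy{
		Rules: []batchv1.PodFailurePolicyRule{{
			Action: batchv1.PodFailurePolicyActionFailJob,
			OnExitCodes: &batchv1.PodFailurePolicyOnExitCodesRequirement{
				Operator: batchv1.PodFailurePolicyOnExitCodesOpIn,
				Values:   []int32{1, 2, 137}, // 0 is rejected by API validation
			},
		}},
	}
}
```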
G
So there is an issue raised, and this was discussed also by Jordan, who suggested that the proper way is not to... because we originally thought about modifying pod failure policy to allow it to match by exit code zero, but as a result of the discussion, the conclusion is that it's better to actually introduce a dedicated condition. But then the question is: this condition would need to be added by SIG Node, by the kubelet, and then the question is whether it should be part of the pod failure policy KEP.
G
So in general the view is that if it's simple, then we could probably wrap it under the KEP, but if it requires some refactoring or non-obvious changes, then we will defer it for later, for future KEPs. And I already got word from the SIG Apps approvers in this discussion that this wouldn't block the pod failure policy graduation or anything.
G
But the question remains whether, if it's a simple change, we can consider handling this under the KEP. So again, I prepared a POC, and in this POC what I do is add the condition whenever we set the phase: whenever the pod exceeds the timeout, we set the phase to failed and we set the reason and message, and aside from this, based on the reason, I add the condition.
G
So this is the comment from April 13, and this is not what my implementation is doing, and I think to do this some refactoring would be needed. So we would prefer that, if we want to go this way, then we leave this condition for the future, to be driven by people who need this condition. So I don't know; it's an open discussion. What are your thoughts on that? I would like to discuss.
G
Do you have some suggestions on whether it's okay to just add the condition whenever we set the phase, as currently, without changing the mechanics, or whether it's indeed a case where we want to...
G
If we surface this as a condition, then we may want to review the API more and then maybe modify it, and in that case we will probably drop this improvement. Okay, so now I would be happy to hear some comments regarding both issues.
G
Yes, that's why I want to avoid it. So I think the refactoring is out of the question; I mean, it's out of the question for the pod failure policy KEP, because it's scary to me as well. But the question remains whether it's okay to surface, as currently, the reason and message, but also promote it to the condition in a way, or whether we want to delay that for a dedicated KEP adding this condition.
A
G
Yeah, those are all good questions, and I don't think we have a good answer for now. The way the code is currently structured, the kubelet makes the decision when it sees that the pod exceeded the timeout, so it sets the phase to failed, and based off the phase it starts the termination procedure. But then two scenarios are currently possible: first, that the containers just terminate on their own before it actually attempts the killing or even sends the SIGTERM, and the second is that it sends the signal.
G
But it's, you know, handled and repackaged as exit code zero, and in both cases you can argue that they weren't really killed, but we would still set the phase as failed, and I think this might be a long discussion during reviews. So...
G
That's why I'm thinking of dropping this for now as a future improvement, but I also...
H
The question I have: does the job controller use the deadline exceeded for its timeout, or does it have its own? Like, who usually uses this activeDeadlineSeconds in the pod spec? Is it a widely used feature? Because I haven't seen it used too widely on pods themselves, but I'm trying to understand if maybe some controller uses it.
G
So, first of all, Job does have its own activeDeadlineSeconds timeout, and then it's the job controller that marks the job and kills the job; that's one. Coincidentally, I think they are called the same. And the second is kubelet-level, and the one for the kubelet, I think, is maybe more...
H
H
G
I think this issue is raised by Job users, but maybe more by those who use restartPolicy: OnFailure, because then the restarts are controlled by the kubelet, not by the Job controller, although in this case it's weird, because here it actually is the Job.
G
That's why we don't actually have a clear motivation now in the job controller to do that, because it's not a widely used feature, I think, and it's an edge case. So on the other side, considering the investment, I don't think it's very much needed for Job users, but there...
H
I
I
So, similarly, if you set up a deadline for the pod and it reaches the deadline, it can be considered expected in that case. So that's why it's important to detect it, as opposed to the pod being disrupted by priorities or something else.
I
A
I
But I guess we need to do the best possible detection, because this means wasted compute, right?
A
I don't know if anybody has other comments or questions; please jump in. I think I haven't heard definitively that this is what's happening, this is a scenario that needs to be fixed. I feel that it's still not clear whether the fix is needed. Am I understanding correctly?
G
Yes, so yeah, as you asked whether it's okay from the user's perspective, if I'm reading your question correctly, that we set the phase to failed even though the pod exited on its own: then I think from the user's perspective it should be okay and the fix wouldn't be needed. But it was raised that it's better to only set failed if it was actually killed, so I was thinking that it may also be raised during the implementation or KEP review, etc., and committing that we do...
G
This may risk that, you know, we get stuck or get involved in refactorings that we don't have the capacity to commit to. So yeah, that's why I'm exploring how SIG Node sees this and whether it's needed. So, as I understand from the comments so far, you don't see a need to change the behavior and what the kubelet is doing in setting...
A
The deadline, regardless of actual killing. To be fair, I only base my questions and the answers on the information I get from this description. So maybe, if you have a better description, like specific use cases where it's very bad, then it will be a different conversation. But based on what you just presented, I don't feel it's a critical issue.
C
I feel the same way, and I have to admit I'm slow on this case; what you presented here I'm still reading. So I'm not fully convinced that the problem you state here is critical and we need to address it, and I'm more concerned about the potential behavior change and potential misuse on this one. So to me this is not a really critical problem.
C
G
C
G
Okay, sure. Thank you.
I
Yes, so going to speak a little bit about the current state: the current state is that there is already a reason and a message when the active deadline is exceeded, and that is buggy. I mean, sure, it's not critical, but it's buggy.
I
I
So maybe it's okay not to change the status quo, or, you know, the reason and message, but we are adding an extra feature on top of it, in this case the condition, and the concern from Jordan is: why are we adding something that we know is buggy? So I guess the stance from Jordan is: we either don't add it, or we add it but make sure it's not buggy, kind of like the two extremes. And the other possibility is that we add it knowing that it's buggy and we document in which scenarios it can be valid.
I
I
I
H
And I just have one follow-up question. You mentioned it's buggy; is it buggy because the actual behavior in the kubelet is buggy, or because there's a natural race with that sleep 60 and the active deadline being 60? Is that a fixable problem, or is it naturally racy?
G
G
I'm not, you know, an authority here, and it was contested by Jordan, who suggests a different interpretation, and then, following this interpretation, the current behavior is buggy, right? So...
A
Let's take the discussion to the KEP, because we have a few more topics today. Okay, I see your thumbs-up. Moving on.
G
Yes, yes, I'm okay with just raising awareness.
A
Thank you for bringing it up. Yeah, so, the sidecar KEP: I wanted to highlight the sidecar work we're doing. We have a working group and we meet every week before this meeting. Recently the meetings weren't very long, because we discovered fewer and fewer issues with the PR that we sent. The PR is in good shape: it implements the skeleton functionality, it's ready to be reviewed, approved or whatever, and we tested it with Istio.
A
It's already working; it's implementing the new features, like sidecars via init containers and the restart of sidecars, so it's all working. It's great! Please spend time on reviewing it. We really want to get in early in the release stage this time, so we have a few follow-ups that need to be made; we don't want to overcomplicate the big PR, but the skeleton is ready. As I said, Istio is already working with that, so it's supposed to be in very good shape. So yeah, any questions about sidecars?
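For context, a sketch of the pod shape the sidecar KEP proposes, assuming the API lands as designed in the PR under review: an init container with restartPolicy: Always keeps running alongside the main containers instead of blocking startup. Container names and images are hypothetical.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// sidecarPod shows the proposed sidecar shape: a restartable init container.
func sidecarPod() *corev1.Pod {
	always := corev1.ContainerRestartPolicyAlways
	return &corev1.Pod{
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{{
				Name:          "proxy-sidecar", // hypothetical name
				Image:         "example.com/proxy:latest",
				RestartPolicy: &always, // the new field under review
			}},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "example.com/app:latest",
			}},
		},
	}
}
```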
B
So I started reviewing it, Sergey. I would also ask more folks here to help with the review, because it's big, and it'll be good to get some eyes from the folks more familiar with the lifecycle part. Maybe David, if you have some time, it would be useful to have your thoughts on it as well. Yeah.
C
A
From the API perspective, Tim was the API reviewer for that, but I think he's expecting SIG Node to get more approvals first and then he'll jump into the API review, or we can wait for Derek as well. Anyway, thank you, everybody, I really appreciate it. Next one is Mo. Mo, are you here?
F
Yep, I'm here. Can you hear me? Yeah? Okay. So, I actually forgot what I wrote in the agenda, but let me just introduce the topic. One of the subprojects of SIG Auth is the Secrets Store CSI driver, which effectively exists to make it so that you can consume secrets from external vaults in Kubernetes without first putting them into Kubernetes Secrets, right? So they basically show up as ephemeral volumes in your containers and you just consume those as files; I'll just link to the repo.
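For context, a sketch of how such an inline CSI ephemeral volume is typically declared using the core/v1 Go types; the driver name is the Secrets Store CSI driver's, while the provider class name is a hypothetical example.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// secretsStoreVolume declares an inline CSI ephemeral volume whose driver
// fetches secrets from an external vault and mounts them as files.
func secretsStoreVolume() corev1.Volume {
	readOnly := true
	return corev1.Volume{
		Name: "secrets-store-inline",
		VolumeSource: corev1.VolumeSource{
			CSI: &corev1.CSIVolumeSource{
				Driver:   "secrets-store.csi.k8s.io",
				ReadOnly: &readOnly,
				VolumeAttributes: map[string]string{
					"secretProviderClass": "example-vault-provider", // hypothetical
				},
			},
		},
	}
}
```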
F
So we have the capability of actually putting those external secrets into Kubernetes Secrets as well, primarily for use cases such as Ingresses, which require a Kubernetes Secret reference, and we're not really trying to change any of that. The specific use case that we're trying to make better is that, a lot of times...
F
...people will sync into Kubernetes Secrets purely for the sake of having them easily available as environment variables in their pods, and this is a case that we want to sort of discourage and move away from, because it kind of nullifies a lot of the benefits of the CSI-based approach: again, your secrets are in Kubernetes Secrets instead of, you know, the much more carefully orchestrated environment that your vault could provide.
F
So I wanted to sort of raise this question with you guys, and I know there were some comments made on my little post there, but: if I wanted to have some external entity provide dynamic environment variables to containers, what would be the right way to do that?
A
So I posted a link to the KEP that is currently in, like, proposal state. This KEP is looking at consuming environment variables from some volume, so you can write a file and this file will be used as a source for environment variables. Is that the kind of thing you're looking for?
F
So I guess, theoretically, that could work, in the sense that if that KEP was implemented in a way where the CSI API got to go first and thus could get all its volumes ready, and then it could be consumed that way, I think that would work. My overall question, though, is: is that the right way to go about this, or is there some other layer or other mechanism...
F
...that would make more sense? Because for me, if I'm going to make dynamic environment variables, I don't necessarily need the file on disk at all. Like, I'm happy to make it work; that's not the issue. It's more: what would you expect the system to do? How would you build it if nothing existed right now and the use case was just dynamic environment variables?
E
Sure. So, at the lowest layers of DRA there's a standard that we use called CDI, the Container Device Interface, and that is the thing that eventually makes this abstract notion of a device or resource available to a container. One of the things that CDI allows you to do is inject environment variables into the container spec. So in theory, just in the same way you're creating one of these CSI drivers today for secrets, you could create a DRA driver whose job it was to call out to Vault and figure out...
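For context, a sketch of the CDI mechanism being described, assuming the standard CDI spec layout; the vendor kind, device name, environment value, and output path are hypothetical.

```go
package main

import (
	"encoding/json"
	"os"
)

// Minimal structs mirroring the CDI spec fields used here.
type containerEdits struct {
	Env []string `json:"env,omitempty"`
}

type device struct {
	Name           string         `json:"name"`
	ContainerEdits containerEdits `json:"containerEdits"`
}

type cdiSpec struct {
	Version string   `json:"cdiVersion"`
	Kind    string   `json:"kind"`
	Devices []device `json:"devices"`
}

func main() {
	// A driver writes a CDI spec whose containerEdits inject environment
	// variables when the runtime starts the container.
	spec := cdiSpec{
		Version: "0.5.0",
		Kind:    "example.com/secret", // hypothetical vendor/class
		Devices: []device{{
			Name:           "db-credentials",
			ContainerEdits: containerEdits{Env: []string{"DB_PASSWORD=from-vault"}},
		}},
	}
	// CDI spec files are typically read from /etc/cdi or /var/run/cdi.
	out, _ := json.MarshalIndent(spec, "", "  ")
	_ = os.WriteFile("/var/run/cdi/example.com-secret.json", out, 0o600)
}
```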
E
F
And what is the security model for that information? Is it treated as confidential, at least at some level?
E
F
The host... I mean, that might be okay; I'm not necessarily too stressed about, like, if you can do host mounts and you can see the file, maybe that's okay, not 100% sure. It's more that I want to make sure there's no way to observe these contents from the Kubernetes API, outside of literally exec'ing into the pod and running env.
E
E
...you know, creating the container spec. So yeah, if you implemented the node side of this to actually be the piece that pulls from Vault, gets the secret, and generates the CDI spec with those contents in it, that would all stay local to the disk, and then only once the container runtime came along would it pick up those edits and push them into the container when the container started. Nothing would ever go through the Kubernetes API. Okay.
J
But if you are worried about exec'ing into the container, then environment variables are probably not what you're looking for at all, because you can always read from, like, /proc/PID/environ and get the values that were injected into the container.
F
F
B
B
F
I mean, if you can break out to the host, then you could probably see the files that we were writing in our ephemeral volumes anyway, so I'm not that concerned. I mean, maybe it's a little easier because it's an actual file on disk, instead of, like, a memory-mounted file or whatever, I'm forgetting the right name.
F
Okay, the other question I had is: what exactly is the interface for DRA, in the sense of, can I have, like, a DaemonSet at runtime that adds capabilities, or does it have to be statically given to the kubelet, like on startup?
E
Yeah, so I gave a talk on this at the last KubeCon a few weeks ago; I can forward you the link. The title of the talk was "How to Build a Driver for Dynamic Resource Allocation", so it goes through all the pieces that are needed to build one of these drivers and how it interfaces with the system.
F
Yeah, okay, I can definitely look at that. Okay, I think that's basically all I have. I can look at DRA, and then I can also look at that other KEP and see if they make sense. And I think DRA is alpha right now, right? So it's early, it's still alpha. Yeah, I mean, in a sense that's good for me, because if I need something changed, now is the time to ask, but...
E
I'd need to understand your use case a bit more, but based on what you're describing, to me it would be: if you're already comfortable building a CSI driver to solve this problem, then moving forward, writing a DRA driver would probably be the more correct way, if what you're looking for is environment variable injection.
E
Oh right, yeah, so I just have a quick one. I mostly just added this to see who the right person to be an approver for this would be. We're planning on adding a field for CDI devices to the device plugin API in 1.28. I've already added this to the planning doc, and it's really just a simple extension, given that the CDI devices have now already been added to the CRI in 1.27, and now we just want to make sure that device plugins...
E
...you know, traditional device plugins, have the ability to pass CDI devices if they want to, which then get forwarded down the CRI. So I was really mostly just wondering who would be the right person to be the approver for this. I assumed it would be Mrunal, just because he approved the CRI changes, but I just wanted to check who the right person is.
F
A
A
Okay, if nothing else, have a good rest of your week. Bye.