From YouTube: Kubernetes SIG Node 20221101
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
B
Sure, I can run it. Either is fine. Yeah, okay, cool, go ahead, thanks. All right, so the first item on the agenda is already taken care of. Thank you, Don, for the update on the KEP for dynamic resource allocation. The second one is kubelet context plumbing from David, and I noticed that Don, you approved that PR, so I think that's also all set, and hopefully that will give us end-to-end visibility into the spans when using it.
A
Yeah, I think David was involved with OpenTelemetry a long time ago, and then later he had the time for Kubernetes, so he is joining today's meeting. I'm so excited, and I think you have the time, right? It's the Kubernetes community, right, so it's for everybody. So that's an exciting moment, and we definitely get to hear from you, David. Do you want to say something, or do you want to ask SIG Node anything?
C
Can you hear me now? Yes? Hello, okay. No idea what was wrong. So I just, I mostly just wanted to provide context, because it was a multi-thousand-line code change. This actually doesn't do any tracing at all.
C
This is just context plumbing, which is a prerequisite for any tracing that we want to do. I'm hoping... I'll probably actually hold off on adding spans and maybe make some updates to the KEP, so everyone can discuss what they'd like to see from the tracing, but this will make it a lot easier if we have context plumbing.
C
So, in theory, with something like an exec call, it might be possible to do end to end, because that goes from the API server to... then, or not, let's see: from the API server to the kubelet itself. But for, like, creating a pod or something, this definitely wouldn't be end to end, and that's not part of what's planned in the near future, at least. So this is just, actually, I...
C
This is what we have right now, which is just each CRI call in its own trace. If you add a parent span, it doesn't really matter where, as long as it's at the beginning of when you're starting to do something; then you can actually connect all of the CRI spans together in a single graph. So this is mostly what I'm after with the context change, but while implementing it, I think there are a lot of other things we could trace as well.
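To make the pattern concrete, here is a minimal, hypothetical Go sketch of what is described above, assuming an OpenTelemetry tracer is configured: a parent span is started at the beginning of an operation and the resulting context is passed down, so every instrumented CRI call becomes a child span in the same trace. The names (syncPod, createContainer, startContainer) are illustrative placeholders, not the kubelet's actual code.

package main

import (
	"context"

	"go.opentelemetry.io/otel"
)

// tracer is a no-op unless an OpenTelemetry SDK is installed.
var tracer = otel.Tracer("kubelet-sketch")

// syncPod stands in for any kubelet operation we want as the trace root.
func syncPod(ctx context.Context, pod string) error {
	// Parent span: everything called with this ctx is linked under it.
	ctx, span := tracer.Start(ctx, "syncPod")
	defer span.End()

	// Each CRI call made with ctx (for example through a gRPC client wired
	// with the otelgrpc interceptor) becomes a child of "syncPod", so all
	// of the CRI spans connect into a single graph.
	if err := createContainer(ctx, pod); err != nil {
		return err
	}
	return startContainer(ctx, pod)
}

func createContainer(ctx context.Context, pod string) error {
	_, span := tracer.Start(ctx, "CreateContainer")
	defer span.End()
	return nil // placeholder for the real CRI call
}

func startContainer(ctx context.Context, pod string) error {
	_, span := tracer.Start(ctx, "StartContainer")
	defer span.End()
	return nil // placeholder for the real CRI call
}

func main() {
	_ = syncPod(context.Background(), "example-pod")
}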
C
For example, you could see a pod eviction in a single trace if you wanted, including getting stats from the CRI and then actually making the calls to terminate the pod in the container runtime. So all of that is possible here, but the context plumbing is important so that you have a single golang context that ends up being passed to all of these calls.
A
So we have more and more... David, since you have also been involved long term with the GPU support and, like, the GPU policies, you know about the device plugin, right? We have more and more components that might go into, like, a separate daemon, like the device plugin. So how do you think about those kinds of things, like a given pod or a given container, managing their lifecycle, or associating them with the device plugin and the CSI plugin, all those kinds of things?
A
At least, I want to see, like, the stateful side, how you help to debug the stateful side. Then, once you get that, or maybe with, like, an ML workload, just give us an idea of how to look at all of it together: besides Kubernetes and containerd there are some other plugins, so all together that can get huge, and it would help with debugging there. Yeah.
C
So, definitely, it's easy to do other node-level plugins, like the device plugin, CSI, CNI; those sorts of things should be really simple to add, as long as we can plumb context. The more difficult piece would be connecting this to, like, a higher-level Kubernetes concept like a StatefulSet. That's not possible today, because we don't have context propagation implemented for objects, so probably to start with we'll focus on the easy things to do, the low-hanging fruit, and maybe someday revisit context propagation for objects.
D
Hey David, yeah, thanks for the PR. I'm mainly interested because I sent out a PR for adding spans on the containerd side. So, while we are adding spans on both the kubelet and the runtime side, are there any recommendations, or is there any point where we need to work together to figure out, like, which parts of containerd the kubelet side is more interested in, or anything specific that we need to collaborate on?
C
The best places to add spans are generally wherever you cross some communication boundary, so, like, maybe you're making syscalls, or if there's communication between the containerd shims and, like, the overall containerd process, or something like that. That would all be very useful to cover, I think, if I remember correctly.
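As an illustration of covering such a boundary, here is a hedged sketch using the stock OpenTelemetry gRPC client interceptors; the endpoint is a placeholder, and containerd's real tracing configuration is wired differently than this.

package main

import (
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Every outgoing RPC on this connection gets a client span, and the
	// trace context is carried to the server in gRPC metadata, covering
	// the process boundary in one place.
	conn, err := grpc.Dial(
		"unix:///run/example.sock", // hypothetical endpoint
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithUnaryInterceptor(otelgrpc.UnaryClientInterceptor()),
		grpc.WithStreamInterceptor(otelgrpc.StreamClientInterceptor()),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}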
D
They do use gRPC. Like, are you talking about when it talks to runc, like, through ttrpc?
C
Yes,
that
sounds
familiar,
but
that
might
be
something
that
that
might
be
a
boundary
that
you'll
want
to
trace
and
also
might
be
something
that
last
time
I
tried
this,
which
was
four
years
ago,
didn't
like
you,
couldn't
use
the
regular
grpc
open,
Telemetry
plugins
for
because,
like
they'd
gotten
rid
of
headers
or
something
I,
don't
remember
what
it
was,
but
I
don't
know
if
it's
trivial
or
not
to
make
open,
Telemetry
propagation
work
across
that.
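If the stock gRPC plugins can't be used across a boundary like ttrpc, propagation would have to be done by hand. Below is a minimal sketch of that mechanism with the OpenTelemetry propagation API; the plain map stands in for whatever request metadata the transport actually carries, which is an assumption here, not a statement about ttrpc's API.

package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/propagation"
)

func main() {
	prop := propagation.TraceContext{}

	// Caller side: serialize the current span context (W3C traceparent)
	// into a carrier that travels with the request.
	carrier := propagation.MapCarrier{}
	prop.Inject(context.Background(), carrier)
	fmt.Println("outgoing metadata:", carrier)

	// Callee side: rebuild a context whose parent is the remote span, so
	// spans started from it join the caller's trace.
	ctx := prop.Extract(context.Background(), carrier)
	_ = ctx
}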
D
I see, okay, thanks a lot. I'll probably reach out to you, and if I have any other questions I'll ask you. Cool, thank you. Thanks.
B
All right, thanks David. Moving on to the next topic: Vinay?
E
All right, yeah, okay, great. So for in-place pod resize, I think we are one week away from code freeze now, and I figured out there was a flake in one of the tests that we added towards the end of the code freeze phase of the last release, figured out the issue, and fixed that. What we need now is, I think... we added a couple of tests. I think Raven has one test job enabled that runs every two hours.
E
It runs all the alpha tests, and I added one that's focused on only running the in-place pod resize tests. It's about a one-and-a-half-hour job, I believe, and it runs on a 24-hour cadence, so either Raven's or both; they seem to be a little bit complementary in some sense. If we add that, I'm wondering whether that's sufficient for us to merge this code. Where are we at with thoughts on merging? It's getting...
B
I'll talk with Derek, Vinay. Okay, oh yeah.
E
I think it pulls from main, so... the main containerd branch. David Porter, if you're there, please correct me if I'm not saying the right things, but both jobs seem to pull the latest version of containerd from the main branch, which should have the support. And what I need to do for the test, which I'll complete today, is detect whether it's containerd 1.6 or an older version; COS 97 has 1.6.6, and if we have 1.6.9 or above, then run the full e2e test, else do the limited e2e test. Because we already run something every time you do a pull request: it still runs that job, which runs the current tests, and that does everything; it looks at the cgroup to make sure that the resize has happened. The only thing that it skips is verifying that the pod status resources have been updated. That part is the one that gets added when the CRI support is in.
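A minimal sketch of the version gating described above, assuming the containerd version string is available to the test; the 1.6.9 threshold is taken from the discussion, and the helper name is hypothetical rather than the actual e2e test code.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// runFullResizeE2E reports whether the full in-place pod resize e2e checks
// should run, based on the containerd version reported by the node. The
// 1.6.9 cutoff comes from the discussion above.
func runFullResizeE2E(containerdVersion string) bool {
	// semver.Compare expects a leading "v".
	return semver.Compare("v"+containerdVersion, "v1.6.9") >= 0
}

func main() {
	for _, v := range []string{"1.6.6", "1.6.9", "1.7.0"} {
		fmt.Printf("containerd %s -> full e2e: %v\n", v, runFullResizeE2E(v))
	}
}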
B
Yeah, so I took a look at the test, Vinay. I'll make another pass today and...
E
We don't, and I would prefer it to be sooner. I'm out in December, so we want to get this in before then. If there are any problems, I want to be here, and we should probably try to make it sooner rather than later.
B
Great, that brings us to the end of the agenda. Do folks have any other topics they want to bring up?
B
So I would just like to say that I think a bunch of us went to KubeCon, and we had a mini SIG Node meetup over a few days, and it was pretty good seeing everyone in person. I'm hoping some of the discussion will spill over into this meeting over the next few weeks.
B
I don't think we took notes. Probably Sasha... Sasha from Intel was everywhere. Sasha, did you take notes? It's all in his mind. Well...
F
Actually, George Berkus took notes from his contributor summit session. He said it will be published at some point, like, to some place, but I can share a document.
B
Right, I know.