From YouTube: Kubernetes SIG Node 20230314
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230314-170537_Recording_1572x1120.mp4
A
Hello, hello — it's March 14, 2023! It's the SIG Node weekly meeting; welcome, everybody. I want to start this meeting with the announcement that code freeze is tomorrow for 1.27. If you want your PR to be in 1.27, please work on it and make sure you have somebody who will review and approve it.
It's 5 PM Pacific today. Thank you, Mark — you know better, because you're a release manager. Thank you.
Okay, it changes things for me a little bit, but anyway. Okay — today is code freeze, so if you haven't merged today and your bug fix is not critical enough, it's out. We have 200 PRs, and we did an amazing job merging things. Thank you to all the approvers who looked at numerous PRs.
A
Thanks also to all the reviewers who prepared the PRs that have been approved. I wanted to remind everyone that there are still 41 PRs with the lgtm label but without the approved label.
A
So, just a reminder for approvers that there's still something to approve. It doesn't mean that all of them are critical — I think some judgment needs to be applied — but there is a queue of unapproved PRs that may need to be looked at.
I also wanted to look at the issues. First, let's start with the critical-urgent issues — typically priority/critical-urgent. This is something that we have to fix in this release.
A
Yeah — from Clayton, the static pods issue.
A
No, now I'm talking about this one: kubelet doesn't restart static pods when the API server is down. I have it on my screen.
A
So this issue has survived at least a few releases, and it's still marked critical-urgent. I don't know if Clayton is on the call right now.
A
Okay, so we need to clean up our critical-urgent list. In the CI group meeting over the last couple of weeks we've been cleaning up important-soon issues, so there are plenty marked important-soon; we can take a look at those. We still have a lot — we just cleaned up many of them — and we have 40 marked important-soon.
A
Some of them are for KEPs that were targeting this release, so it's expected to have them, but some of them are long-standing, stale issues that may need to be removed from this milestone. So if you have time to review those, please do. We need to make sure that something is marked important-soon only if it's really important soon; otherwise it just defeats the purpose.
A
Okay. And — do you want to talk about the bug you posted in the chat, or do you want to go to the next item on the agenda?
C
I'll just briefly mention what's going on with 116262. I think we found another null-pointer access, and I fixed that; however, I'm not able to verify it, because I don't have the repro environment that Clayton has — it was on internal GKE with this kubelet. I tried different things, hacking around, but I can't get to it. But there is a defensive fix, which probably is even the right fix at this point, given that we don't process updates for static pod manifests.
C
So what we do is: if it's a static pod, we skip calling the two functions — compute pod resize and do pod resize — in the pod sync path. So that's the fix, and the more I look at it, the more it seems right, given the bug I just posted: if we're not handling updates for images, why shouldn't we skip over static pods? Why would you even want to call that function? Jordan has some objections — or rather, I think he has some concerns about whether it affects existing static pods.
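The guard being described — skip the resize computation entirely for static pods — can be sketched roughly like this. The type and function names below are hypothetical stand-ins for illustration, not the kubelet's actual identifiers:

```go
package main

import "fmt"

// Pod is a simplified stand-in for the kubelet's pod object.
type Pod struct {
	Name     string
	IsStatic bool // true for pods sourced from manifest files, not the API server
}

// syncPodResize sketches the guard discussed above: static pods are
// skipped entirely, so the resize computation never runs for them.
func syncPodResize(p Pod) string {
	if p.IsStatic {
		// Static pod manifests are the source of truth and their
		// updates are not processed, so there is nothing to resize.
		return "skipped"
	}
	// For API-server-managed pods, the resize path would run here
	// (compute pod resize followed by do pod resize in the discussion).
	return "resized"
}

func main() {
	fmt.Println(syncPodResize(Pod{Name: "etcd", IsStatic: true}))
	fmt.Println(syncPodResize(Pod{Name: "web", IsStatic: false}))
}
```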
A
Okay, yeah — I don't think something that is not a regression should be fixed that urgently, yeah.
C
I might have to go ask for an exception to give people more time to review this — I'm probably going to raise a few days' exception for it. And, Tim, I think one thing that's not here is that there is another PR outstanding for Tim to look at. Let me just post it in the chat as well. Please add it to the doc — there are so many things going on; I'm just trying to knock things out one at a time. Look at this one.
C
That's the one where, a week after this code freeze — after the merge — I'd be drinking beer wondering what we missed. All right, so this particular one: Tim asked for follow-up changes. We want to rename the variables for the resize policy. This is not a risky change; I think it can go in — it's just a question of bandwidth. Tim suggested some things, I made those changes, it's been sitting for a few days, and I wonder if we can get Tim to weigh in on this one.
D
If he has this kind of renaming request, why did he approve the original implementation in the first place? There has been discussion about reworking the original ones, and of course I'd prefer not to do so, but this adds complexity for everyone.
E
The API one — I know that we need it; I can't single-handedly approve that one. I know it was Tim's desire to, yeah.
A
Say it again — so we have this null pointer, right, which is in standalone mode?
C
It panics further down the road when, for some reason — the issue with this is that, even though the fix looks like it should fix it, I don't have a repro. I tried different manifests, and—
C
I think the root cause of this is that, for whatever reason, for a static pod the compute-resize action decided that this pod needs to be resized, and we really need to get that particular set of manifests to see what's going on in there. The fix I made here — initially it looked like a defensive fix, where I don't even invoke compute pod resize or do pod resize for static pods, but in light of this bug I just created, it looks more like the right fix: if you're not going to process updates, like an image update, from the manifest, why would you want to unnecessarily call compute pod resize or do pod resize at all?
C
Is it on or off? I'm sorry, I couldn't hear — on? Okay. So when it's on, and you are in kubelet-only mode, and there is a static pod — I don't know what kind of manifest they used that triggered this — the checks that I've added here, these nil checks, should take care of it, because I saw one path in existing code that I hadn't really encountered before.
C
So when we fill out the resource config for pods, we fill out the structure depending on the kind of pod — if it's best-effort, if it's burstable. For a burstable pod with only CPU requests specified and no CPU limits, some fields may end up as null pointers, and we are accessing one of those.
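The QoS-dependent nil fields being described can be illustrated with a small sketch. The field and function names are illustrative; the kubelet's real resource-config structure differs:

```go
package main

import "fmt"

// ResourceConfig mimics the shape of a cgroup resource config where
// fields are pointers and may be nil depending on the pod's QoS class.
type ResourceConfig struct {
	CPUShares *uint64 // derived from CPU requests
	CPUQuota  *int64  // derived from CPU limits; nil when no limit is given
}

// describeQuota shows the defensive nil check: a burstable pod with
// only CPU requests leaves CPUQuota nil, and dereferencing it blindly
// is exactly the kind of nil-pointer panic discussed above.
func describeQuota(rc *ResourceConfig) string {
	if rc == nil || rc.CPUQuota == nil {
		return "no CPU limit set"
	}
	return fmt.Sprintf("quota=%d", *rc.CPUQuota)
}

func main() {
	shares := uint64(512)
	// Burstable pod with requests only: CPUQuota stays nil.
	fmt.Println(describeQuota(&ResourceConfig{CPUShares: &shares}))
	quota := int64(100000)
	fmt.Println(describeQuota(&ResourceConfig{CPUShares: &shares, CPUQuota: &quota}))
}
```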
C
The call stack in the report is not really helpful in telling which exact parameter was accessed. I really need a repro for this to be sure, but this should fix it. I think Jordan has a problem with the exclusion of static pods, and there are a couple of cases that I want to verify in there.
F
But I have a kind of naive question about static pods in general. Has anybody ever talked about a more general way of managing static pods — something like, I don't want to say it, but kubectl edit of the actual static pods on the kubelet, or—
C
We can't do that, yeah. What I used — I use patch. Sorry.
D
Every time I see this — you want to allow VPA to work on static pods? Thinking about it, I'm not comfortable, I have to admit, because the problem is that your source of truth for a static pod is that file. So we actually discourage people from using that feature.
D
The reason is that it is really difficult to manage — that's kind of the problem. Otherwise I would have already approved your PR, because I reviewed it, but I couldn't figure out how we are going to handle the standalone kubelet case: in standalone mode the kubelet is not registered to the API server, so there's no mirror pod created in the API server.
D
So pod admission happens only at the node level. This is really difficult for me to think about — to connect all the dots on how we are going to do this when only node-level admission handles the pod. That's why — I'm still... I read your fix and then went back to the original implementation, which is hundreds of files.
D
So I tried to figure out how we plan to support static pods, and this is why I haven't approved yet. If we have the mirror pod, at least you could do pod admission at the scheduler — you basically say, this pod has already occupied the resources, and you schedule the rest of the stuff accordingly. But here you could basically end up overcommitted at the node level.
D
This is why I'm not sure — because a remote controller could do a better job here, adjusting the static pod's resource usage and then making the decision, rather than doing it only at the node level. That's why I'm still struggling; otherwise I would have already approved. Yes, yes, I agree with you, Derek — this is why we've talked about this many times.
D
We want to use static pods for the bootstrapping scenario — this is exactly what we are doing here. But if we allow this powerful feature — a feature to dynamically update static pods' resource requests — it makes me... I have to think about it more, to make sure everything is covered. So that's why I'm okay with your current null-pointer fix, but the comments — comments you commonly see, like "this one will work with static pods" — make me really nervous.
C
Yeah, I was thinking more from a user use-case perspective. Let's say you configure a static pod — some kind of system-critical pod — but it's over-provisioned by a lot, and you want to have VPA manage that. Well — I'm not saying that we should do it for everything, but maybe for resources. And the right thing to do for now, at this point, is to block any updates to static pods — resources or anything else. The "anything else" part of it is taken care of by a bug in the kubelet.
E
For pod definitions where the node operator is the source of truth — VPA is only in the business of things that are managed by a control plane. Okay, what—
C
In the CI — well, in the CI, I believe it's in GKE-internal; I don't have access to that. I can't even see the logs to make more sense of it. I have what Jordan gives me in the bug — that's all.
E
Just saying: as part of graduation, it seems like we have a test gap in what we all see in community e2e. Maybe, for these items that are coming up late, we'll get the null-pointer fix merged, but the follow-up work should probably be to enrich our e2es where we're missing coverage.
A
A follow-up on this: I created this PR to introduce standalone-mode tests in e2e-node. Unfortunately, it didn't catch this — it caught another nil pointer that we might also have hit, which is already fixed, but it didn't catch this one. And I wonder — I mean, we can add this one and start improving coverage for standalone mode, and documentation as well. The question would be: what level of technical debt is okay for us to accept for a feature in a milestone? So we have this one.
A
We have a lot of follow-ups — I know there are at least a few GitHub issues to follow up on with more fixes. One thing I saw when I reviewed, though, is that everything is protected by the feature gate, so it feels like, if we just disable the feature gate, everything works as before. I think this is a reasonably safe assumption — I looked at the code twice, so it should be fine.
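The "disable the gate and you get the old behavior" property can be sketched like this. The gate lookup below is a toy map, not the kubelet's real feature-gate machinery, though InPlacePodVerticalScaling is the gate under discussion:

```go
package main

import "fmt"

// featureGates is a toy stand-in for the component's feature-gate map.
var featureGates = map[string]bool{
	"InPlacePodVerticalScaling": false,
}

// handleResize takes the new code path only when the gate is on,
// which is why disabling the gate restores the old behavior.
func handleResize() string {
	if !featureGates["InPlacePodVerticalScaling"] {
		return "legacy path"
	}
	return "resize path"
}

func main() {
	fmt.Println(handleResize()) // gate off: behaves as before
	featureGates["InPlacePodVerticalScaling"] = true
	fmt.Println(handleResize()) // gate on: new resize path
}
```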
C
I think it's safe. I look at it from this angle: okay, this is a fairly big feature, and this particular standalone-kubelet case was kind of sitting in our blind spot.
C
I'm sorry I knocked it over, but if I had known... It's internal GKE — I had no clue; it doesn't show up in any of our CI. I was watching CI and didn't find anything, and I asked around, and Dims and somebody else mentioned: oh, sorry, we don't have any of these. So I don't feel that badly about it.
C
Frankly, besides these two — it would be nice if we had not had this at all, but besides these two — I think it's hopeful, and CI looks pretty much unaffected. It's hard to say, because there are quite a few CI jobs that were running red, and when they do that, they mask potential issues that this might bring in, right. So—
D
Yeah, yeah — just one minute. Also, can you file that exception? I think I will approve the exception. Okay — so make sure to give us a little bit of time to review your new API change. If we can't agree on the API change later, we'd have to revert all those things for this feature. Yeah.
D
Hopefully we can get all the fixes for the panic in. We'll try to merge your panic fix — I'm still connecting the dots here — so: your fix for standalone mode and the other nil-pointer things, and then we try to allocate the time for your API change. Is that okay? So we basically still move forward — is this okay? Please.
D
So then we're still trying to move forward with this feature as an alpha feature in 1.27, because it's been so long — and also it's an alpha feature behind the feature gate everywhere; maybe we need to double-check that. But if we couldn't get the API change in — so obviously you'd also prefer—
G
Well, don't—
E
I have trouble holding everyone else back on that one, I guess. With respect to that — a big plus one to getting the exception process going if we can't get this in. But the idea of reverting this and then putting it back is probably personally painful for Renee, and personally painful for me, having spent a lot of time trying to review it. I'd rather be able to knock out where we have test gaps, like some of the cross-component—
E
—integration flows that are being called out here. I find that not surprising for an alpha-level feature — for any feature going forward. I'm actually surprised that this particular flow, or this particular issue, was even discovered.
E
I know — I don't disagree in general. I just want to be real with everyone about the emotional weight behind this, which is that it would hit me on a personal level. Yes.
D
I don't see really convincing points otherwise, but the problem is the API change — it actually isn't addressed at this moment. So — unless you folks all think, okay, we can bump the new API change to 1.28 — because I don't think any current issue we've found is a blocker; we need to go, as I said earlier. But the API change — this is why I ask: do you want these ones in 1.27? The API change is actually unknown to me.
C
Yeah, I think that's okay. I think Sergey has some context on that — he already discussed with Tim renaming that particular variable, why we chose it — and really, it's a pretty safe change. If we don't merge it in 1.27 and instead go to 1.28 and try to make that change, we have to do a bunch of extra work for protobuf. That seems like an unnecessary headache to take on, given that the nature of the change is pretty innocuous.
C
It's just renaming X to X-bar — that's all it does — and, of course, adding the defaulting that Tim wanted. All of this, the renaming, came up in the last month or so. We still have to update the KEP to fix the documentation in this case. I think I'll ask for an exception for both these issues, to give us more time. I would like to see it in, but yeah, it's okay with me.
A
And I think Tim approved the original PR because he's equally emotionally engaged — involved — in this PR. So it feels like there are many emotionally involved people; I just want to be level-headed and understand. Yeah.
A
And weigh the amount of technical debt that we accumulate. Yeah.
C
You know, I agree with that. My feelings aside, the way I look at it is: is this hurting more than it's helping? It's hurting a little bit on the blind-spot scenario — the standalone kubelet mode — but overall I'm totally fine; I'm just wary about it. That's all I'm trying to say. I know, I know — you guys are not... it's too much.
A
So, for this nil-pointer exception: do we have any expectation that a test will be added, or how do we want to reproduce it — in-house, or something like that — or are we just saying that we will merge the nil-pointer fix as is? Yeah.
C
I think I want to get someone who has knowledge of the CI that Jordan has been using — whichever internal one it is. If there is someone who knows what kind of manifests were used to produce this, they can take a look and see if we can reproduce it, at least in-house, internally at Google. If we can reproduce it, that would be great. I have tried a bunch of things; I just need the manifest, and I don't know what it looks like.
A
A specific manifest from Jordan — I tried to reproduce it with different things, like updates to static pods and different creation strategies, those kinds of things. No luck.
D
Thanks — thanks, David; David also offered to follow up on this. So, I reviewed it — I almost approved. It's just that I try to connect all the dots, like I said: why we are thinking about whether VPA can work with static pods. That's why I didn't approve — because I don't want to... This is also why I'm trying to figure out why we're even hitting those problems.
C
Yeah, let's hold on — let's hold on with that. I just want to convince Jordan; we don't want to push on that without his approval. I think last night what I did was use the debugger, so it's not a very formal way of verifying. But let me do it in a very formal way, like, okay, I'll—
C
—do these scenarios: I'm going to use this particular static pod, or two different static pods, try with the feature gate enabled and disabled, post updates to them, and then see. I think the exclusion-of-static-pods change is safe — it's a good change; I've just sort of got to convince Jordan about it. Maybe he's seeing something that I don't see, so I'll take a look. I'll take another look.
A
I don't think it's about whether he sees something that we don't see — it's just that once it graduates, it needs to work. And if we don't have enough testing in alpha, we can start breaking people in beta, and—
A
Yeah, we just covered this for sidecars as well: this change doesn't have any tests — not even a unit test — and any change without tests scares me, especially if it fixes some panic situation.
A
Okay, I'll suggest we go to the next topic. I want to bring up the status of the sidecar KEP. I know there is a lot of interest, and we have a working group working on sidecars. There were some delays on sidecars because we struggled at times: we needed to rework something after in-place update, and we also struggled with finding approvers for the early pre-work PRs. But we just had a meeting this morning, and we had a—
A
—list of what we're missing, and I think everything we're missing we can get done. So we'll have the PR ready, and we have a very extensive end-to-end testing PR. The only question is — code freeze is tomorrow. So I thought that maybe we have a chance, if there is enough interest, for the sidecar KEP to be merged in 1.27. So I wanted to bring it up: is there any immediate blocker — is it—
A
—definitely not going to be in 1.27? I want to know whether people believe there may be a chance. Let's discuss, and then let's see.
E
Yeah — I mean, I would struggle with this one, versus: let's just do it, not at the last minute — particularly given the prior conversations we had, where we thought we had a lot of stuff going great and—
A
Okay, yeah, okay. I think it's something that we can merge very early in 1.28, because we have all our ducks in a row. I hope it won't be the same kind of "early in the milestone" as the in-place update.
A
I tried — I tried. Okay, Parker, do you want to talk about your PR?
H
Oh yes — I replied in the chat. This is about a special sysctl change.
D
I agree, and I'm totally okay with this. The only thing is, I think someone proposed enforcing that at promotion time, right? I feel like we haven't discussed that. Besides that, I'm totally okay from the SIG Node perspective. Actually, in-house, if I remember correctly, for GKE we already made that sysctl namespaced a while ago, so I didn't face the internal issue based on the kernel version, yeah.
D
So I think I'm okay with this one being safe, because, if I remember, a while back we did set the minimal kernel version to 3.18 — that was a while back. The only thing is that I just happened to notice it; I didn't get time to find that change yet. Also, I'm not too sure we actually enforce it, given the huge code base — obviously we don't enforce it everywhere; it's only in some system tests invoked by kubeadm.
D
So we did enforce that part there. But, on the other hand, do we need to check the kernel version here? I don't see that we're checking the version — do we want to? That's also the question, right? What I'm saying is that the check that has been put in is in kubeadm, and many people don't use kubeadm — that's also another thing. So should we actually add a check of the kernel version from that perspective?
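A minimal sketch of the kind of kernel-version check being debated, assuming a "major.minor" comparison against the 3.18 floor mentioned in the meeting. The parsing helper is hypothetical, not existing kubelet or kubeadm code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// kernelAtLeast compares a kernel release string like "5.15.0-generic"
// against a required minimum major.minor version.
func kernelAtLeast(release string, major, minor int) bool {
	parts := strings.SplitN(release, ".", 3)
	if len(parts) < 2 {
		return false
	}
	maj, err1 := strconv.Atoi(parts[0])
	min, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return false
	}
	return maj > major || (maj == major && min >= minor)
}

func main() {
	// In a real component the release string would come from uname(2).
	fmt.Println(kernelAtLeast("5.15.0-generic", 3, 18)) // modern kernel passes
	fmt.Println(kernelAtLeast("3.10.0", 3, 18))         // below the floor
}
```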
D
Yeah, I remember — around 1.18 we did have this talk; that's at least more than two years ago. Everyone agreed on 3.18 back then. So that's why I kind of feel this is super safe from our node perspective — the resource-management perspective.
J
Yeah, sorry — I just wanted to say it'd be nice if we documented somewhere the minimum kernel version for various features; I think this has come up a couple of other times. I don't think we have it very clear anywhere. So, Dawn, if you mentioned we said 3.18 somewhere before — if we can have a pointer to where that is, we can maybe make it more prominent so that we can use it.
D
It only takes looking at our Kubernetes release versions, right?
E
I wouldn't make anything drastic right now, but I think, as an action item, we should come out with a go-forward policy. I tend to think 3.16 is old enough that I'm not too concerned about this.
G
Yeah, I agree, Derek — it's not concerning. We have so many features that require 4.x and even 5.x kernels that it's not, you know—
E
—this isn't a problem. Yeah — I mean, I'll put my Red Hat hat on right now: if we had a minimum kernel version of 4.18, that would cover our RHEL latest-minus-one, which I think would cover anybody following the RHEL family of derivative or Red Hat distros — probably a decent policy. I don't know for the Ubuntu family or the Google OS families, but—
A
Yeah — removing the Raspberry Pi builds, right, ARM — so maybe small devices would be affected, but we already removed builds for them, so I don't know.
K
Hey — Kevin isn't here, but I can speak for this item. Basically, he is closely reviewing and monitoring this PR, and I also reviewed it, and it looks in good shape; there's approval for the feature gates. Everything else — I think we can make it happen, meeting the standards and the timeline. So, just raising attention to the approvals for this.
L
Folks — yeah, so I'm just trying to figure out what the next steps for merging this node log query PR are. This was discussed at the end of January in this meeting. At the moment, Jordan wants more reviews from SIG Node to get this merged, and I did get it reviewed by Renault and Ryan Phillips, and, for the Windows pieces, Mark Rossetti, who's in the meeting, also reviewed. But it's unclear to me at this point what Jordan's definition of "I need a more in-depth review" is. I feel like this is safe enough to merge, given that it requires a feature flag and then additional configuration to enable it even after the feature flag — and this is alpha, so I feel like we should be okay. I just wanted to raise it here, in case folks think otherwise or have thoughts.
B
Yeah, I commented on the issue too, and I feel like this architecture is exactly what we discussed in the SIG Architecture meeting with Jordan and Tim. It seems like a lot of the concerns are about how we're putting together the queries, but I feel like that shouldn't block an alpha implementation.
I
Yeah — and the only outstanding Windows comment can be handled if we find a better way, I think. On the Linux side, we introduced some code yesterday to make it more secure.
L
Yeah — in fact, that's general, not just on the Windows side. Now you will not be able to query services with newline characters or with dots in their names. I've made it very conservative, so that, actually, even on the Linux side there are valid service names that are going to get flagged. But given how concerned Jordan is about this, I thought: okay, I'm just going to be conservative about it. So I don't know what more I can do with that.
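The conservative service-name validation described can be sketched roughly as follows. The exact pattern in the PR may differ; this just illustrates rejecting names with dots or newline characters:

```go
package main

import (
	"fmt"
	"regexp"
)

// safeName is a deliberately conservative allowlist: letters, digits,
// underscores, and hyphens only. Dots and control characters (including
// newlines) fall outside the class and are rejected.
var safeName = regexp.MustCompile(`^[a-zA-Z0-9_-]+$`)

// serviceNameOK reports whether a service name passes the allowlist.
func serviceNameOK(name string) bool {
	return safeName.MatchString(name)
}

func main() {
	fmt.Println(serviceNameOK("kubelet"))     // accepted
	fmt.Println(serviceNameOK("bad\nname"))   // rejected: newline
	fmt.Println(serviceNameOK("my.service"))  // rejected: a valid Linux name gets flagged
}
```

The last case shows the over-conservative behavior mentioned above: some legitimate service names are flagged in exchange for a simpler, safer rule.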
F
Yeah, and I think it's similar to the logic we were talking about earlier. I mean, these are alpha features, so if they work, and they're not obviously making things worse, and they're doing what we agreed on, it seems like we should be able to short-circuit some class of conversations like this.
E
So, just to focus: it looks like you gave a pretty good review, and Arvin, you made a bunch of updates — it's just a matter of whether you tag it. I trust your reviews and all.
L
All right — once you do that (or not), I'll reach out to Jordan, maybe in API reviews or someplace like that, and ask him to approve. Thank you.
B
I started — I started talking to Xander early on about this too, as a heads-up that this may be coming — so, yeah.
C
Yeah, Mark — I'll bribe him with, you know, a beer at the pub.
A
Yeah, okay — so could you just close this one? And yeah — it became about some monitoring agent rather than the orchestrator, but I think this is where the root cause of Jordan's complaints is: we're still struggling with the cAdvisor pod-level metrics that we expose through the kubelet and with trying to replicate them. Anyway — yeah, we discussed it; let's continue as we discussed. Next one — Mark.
B
Yeah, this is a very, very simple change. Some folks in SIG Windows were doing a lot of performance testing and found that setting the perf-counter update period for the Windows stats collector from one second to ten seconds prevents a lot of extra CPU usage.
B
So we tested this pretty thoroughly and didn't see any downsides to going to 10 seconds. We're just wondering if SIG Node has any concerns with this.
B
Because it's in the kubelet — there are a lot more details, like perf reports, in the issue.
A
From my perspective, it should be SIG Windows' responsibility to decide on that.
B
We discussed it in a couple of SIG Windows meetings, and I think James and Jay covered some of the details.
C
I just wanted to mention that I added a comment to the VPA panic bug, based on our discussion, summarizing that we don't encourage manifest-driven pod updates, because the source of truth is going to be the file — and, in light of that, the change I made looks not only defensive but more correct.
A
Thank you, everybody — is that... yeah. Let's conclude. Bye.