From YouTube: Kubernetes SIG Node 20221213
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20221213-180519_Recording_1920x1120
A
Hello, this is the SIG Node meeting. Today is Tuesday, December 13th, 2022. Welcome, everybody. I think it's the last meeting of this year. We can discuss the next meeting later, but we will probably cancel it. So, the first and most important topic for this meeting is the 1.26 retro and 1.27 planning. I congratulate everybody on the release of 1.26.
A
So I think Dims is doing a great job of cleaning up PRs, and Ronald, I know, is approving stuff and closing PRs that are not really needed, and some long-standing issues. So yeah, we're down to 196, below 200, which is great news. Let's keep working on that, and I think we can clean up a lot of things before the New Year.
A
Hey, let's walk through the list of KEPs that were prepared for us, the ones that were merged in 1.26, and then once we go through that, we can discuss what went well and what went badly, so we can help ourselves with the next releases.
A
So first, the kubelet credential provider: we keep making progress. This is a very important KEP because we want to minimize the in-tree dependencies, so this is needed for everybody. The community will definitely benefit from it, but it's hard to find people for this kind of work, and I'm glad we are making progress here.
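For context on what "credential provider" means mechanically: the kubelet execs a plugin binary, writes a JSON request on stdin, and reads a JSON response from stdout. A minimal sketch of that exchange, with locally defined structs that mirror (as an assumption, trimmed) the credentialprovider.kubelet.k8s.io/v1 wire shapes; the registry name and credentials are made-up placeholders:

    package main

    import (
        "encoding/json"
        "os"
    )

    // Trimmed mirrors of the v1 exec-plugin wire types; field names match the
    // JSON the kubelet sends and expects, but this is a sketch, not the real package.
    type request struct {
        APIVersion string `json:"apiVersion"`
        Kind       string `json:"kind"`
        Image      string `json:"image"` // image the kubelet is about to pull
    }

    type authConfig struct {
        Username string `json:"username"`
        Password string `json:"password"`
    }

    type response struct {
        APIVersion   string                `json:"apiVersion"`
        Kind         string                `json:"kind"`
        CacheKeyType string                `json:"cacheKeyType"`
        Auth         map[string]authConfig `json:"auth"`
    }

    func main() {
        var req request
        if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
            os.Exit(1)
        }
        // Resolve credentials for req.Image here (cloud metadata, token exchange, ...).
        resp := response{
            APIVersion:   "credentialprovider.kubelet.k8s.io/v1",
            Kind:         "CredentialProviderResponse",
            CacheKeyType: "Registry",
            Auth: map[string]authConfig{
                "registry.example.com": {Username: "oauth2token", Password: "<token>"},
            },
        }
        json.NewEncoder(os.Stdout).Encode(resp)
    }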
A
Metrics: we made some progress on metrics as well, which is great news. Are there any comments on this KEP?
A
Okay, no comments here. Evented PLEG: we keep making progress on minimizing kubelet load, and I think Evented PLEG will help us a lot there. We entered Alpha, so if you want to, try it and give feedback, especially on the reliability of these events; that would be great. Dynamic resource allocation: yeah, we made progress here as well, solid improvements around devices.
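Since Evented PLEG is Alpha and the ask here is reliability feedback, a sketch of how one could watch the new CRI container-event stream directly. This assumes the CRI v1 definitions shipped with 1.26 (the GetContainerEvents method and ContainerEventResponse message in k8s.io/cri-api) and a containerd socket path; both are worth double-checking against your setup:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Runtime socket path is an assumption; adjust for your CRI implementation.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        // The streaming RPC Evented PLEG consumes instead of polling list calls.
        stream, err := client.GetContainerEvents(context.Background(),
            &runtimeapi.GetEventsRequest{})
        if err != nil {
            panic(err)
        }
        for {
            ev, err := stream.Recv()
            if err != nil {
                panic(err)
            }
            fmt.Printf("container %s: event %s at %d\n",
                ev.ContainerId, ev.ContainerEventType, ev.CreatedAt)
        }
    }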
A
Pod failure policy: it got to Beta, which is very good news. Many HPC workloads will benefit from that by saving resources: when something has been running all night and suddenly hits some issue, we will no longer waste all the customer's resources, as they can configure it properly.
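To make the "stop wasting resources on a known-bad run" point concrete, here is roughly what the beta podFailurePolicy looks like when built with the k8s.io/api/batch/v1 Go types; the exit code, image, and registry name are illustrative placeholders:

    package main

    import (
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        job := batchv1.Job{
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        // podFailurePolicy requires restartPolicy: Never on the template.
                        RestartPolicy: corev1.RestartPolicyNever,
                        Containers: []corev1.Container{
                            {Name: "main", Image: "registry.example.com/sim:latest"},
                        },
                    },
                },
                PodFailurePolicy: &batchv1.PodFailurePolicy{
                    Rules: []batchv1.PodFailurePolicyRule{
                        {
                            // A "software bug" exit code: fail the whole Job at once
                            // instead of burning retries overnight.
                            Action: batchv1.PodFailurePolicyActionFailJob,
                            OnExitCodes: &batchv1.PodFailurePolicyOnExitCodesRequirement{
                                Operator: batchv1.PodFailurePolicyOnExitCodesOpIn,
                                Values:   []int32{42},
                            },
                        },
                        {
                            // Infrastructure disruptions don't count against backoffLimit.
                            Action: batchv1.PodFailurePolicyActionIgnore,
                            OnPodConditions: []batchv1.PodFailurePolicyOnPodConditionsPattern{
                                {Type: corev1.DisruptionTarget, Status: corev1.ConditionTrue},
                            },
                        },
                    },
                },
            },
        }
        out, _ := yaml.Marshal(job.Spec)
        fmt.Println(string(out))
    }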
A
Okay, multi-NUMA alignment in the topology manager: I have very little to say about it, beyond that the topology manager needs to go GA soon.
D
Actually, this is slightly different. This was an addition to the topology manager: topology manager policy options were introduced as part of this KEP, and this specific prefer-closest-numa-nodes option was the first policy option that was introduced.
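For readers following along, the policy option being described lands in the kubelet configuration. A sketch of the relevant fragment, using local structs that mirror (as an assumption about the 1.26 shape) the KubeletConfiguration fields this KEP touches; see kubelet.config.k8s.io/v1beta1 for the authoritative definition:

    package main

    import (
        "fmt"

        "sigs.k8s.io/yaml"
    )

    // Local mirror of the handful of KubeletConfiguration fields involved here.
    type kubeletConfigFragment struct {
        APIVersion                   string            `json:"apiVersion"`
        Kind                         string            `json:"kind"`
        TopologyManagerPolicy        string            `json:"topologyManagerPolicy"`
        TopologyManagerScope         string            `json:"topologyManagerScope"`
        TopologyManagerPolicyOptions map[string]string `json:"topologyManagerPolicyOptions"`
    }

    func main() {
        cfg := kubeletConfigFragment{
            APIVersion:            "kubelet.config.k8s.io/v1beta1",
            Kind:                  "KubeletConfiguration",
            TopologyManagerPolicy: "best-effort",
            TopologyManagerScope:  "pod",
            // The first policy option mentioned above: on multi-NUMA boxes,
            // prefer sets of NUMA nodes with the shortest distance between them.
            TopologyManagerPolicyOptions: map[string]string{
                "prefer-closest-numa-nodes": "true",
            },
        }
        out, _ := yaml.Marshal(cfg)
        fmt.Println(string(out))
    }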
A
Yeah, what I meant is that the topology manager itself would be great to take to GA. And it's new, which is always great, yeah.
A
Thank you, Swati. CPU manager to GA: so we started GA'ing the managers. We are already discussing resource manager plugins, and the first step is to graduate whatever we have in-tree, so we have a solid foundation to build on. So CPU manager and device manager will be graduated, yay. That's all the KEPs, I think. Compared with historical statistics, how are we doing?
A
Okay, we're doing, I would say, an average release in terms of the number of KEPs, and I'm not sure whether we have data on how many we missed. Yeah, I think we missed a few; at least I missed some myself, yeah.
A
So if anybody has a comment on what went well and what didn't go well this release, please speak up.
E
I'm trying to look at the doc here. I feel like we've kind of settled into a sustainable pace, like between maybe eight and twelve things in a given release.
A
I can say that one thing that didn't go well is that we needed to ask Mark to apply milestones to all the KEPs during the filing stage. Mark, I can't say more than: thank you for your time.
H
No problem. Yeah, I think there were a few issues this round with a bunch of SIGs, not just us, with the chairs and TLs not having the right access, but we're getting that sorted out on the release team.
A
Were there any troubles with API reviews this time around? I feel that we only had one KEP that needed an API review, but it was the CRI API, not the core Kubernetes API.
A
Yeah, the process went very, very smoothly. Even though we had some late PR reviews this time around, it still went very well.
A
How about code freeze? I haven't been around during the code freeze stage. Were there any troubles getting things in at the last minute? I remember it was constantly a problem previously, and we even introduced a soft freeze at some point to make sure that people do not leave KEPs to the last minute.
E
I don't think we did. I mean, some of the items we tried to push along, so, like, I think until we get the In-Place resource resizing thing over the hump, we should always identify that as "could have gone better." But I think we actually made a great effort to get that ready to be good to go.
A
Yeah, understood. We had some CI issues this time around, right? We had more test failures. I think the cgroup v2 stuff is actually a little bit unrelated; that was just a test-infra change, and I don't think it's related to the release necessarily. We were trying to update the OS images, and it just happened to coincide with the release.

I can add another one here. In general, looking overall, I feel like the SIG has capacity for about one really big KEP in any given release. This release, we had two, right? We had DRA and in-place resize, and so we had to pick. You know, DRA was in better condition, more tested and more ready at that point, but I think that's something we'll have to keep in mind moving forward.
E
Yeah, I was trying to think about the budget comment. I feel like we struggle with a budget on external-facing changes, but we seem to do better with internal-facing stuff. So, like, I think the PLEG stuff, even when it spans components outside of the direct SIG here, seemed to work smoothly. I don't know if that's just a symptom of it being harder to change the API of Kubernetes than to improve the internal implementation.
E
I wonder how many CRI changes actually happened, and whether they were actually... yeah.
A
You don't need to think about it: I have an agenda topic for this, and it has a list.
A
So I wrote up some CRI policy outline, since this is a problem. In 1.26 we had Windows pods, pod host network (just a documentation change), and Evented PLEG. So we had four, I mean three, big changes and one small one.
E
Yeah, I'm sure your KEP is going to say why that's too big or too bad, or too good or too awesome. But at least the transition to the CRI doesn't appear to be slowing our ability to evolve too much.
E
Yeah, I mean, part of me wondered, when we moved CRI to v1, if it was just going to be done, but it seems that we are open to continuing to iterate and add capability.
A
Okay, I think we are at time; I wanted to allocate about 15 minutes for this discussion. If there is any more feedback on how the KEP process went, or if you want to share it in private somehow, we can start a Slack conversation. As for action items, I don't see any specific action items out of this, beyond just doing better planning, and maybe trying a soft freeze again to advocate for faster and earlier reviews.
E
I think the only thing I'll say I openly struggle with, and maybe we could all figure out ways to help, is that people will reach out to discuss their KEP, and with all the things that we have in a release, it is hard to keep all of this in our individual heads, and so...
E
I'm wondering, well, one thing we should probably think about as we get to 1.27 is whether we can expand the list of folks, maybe within domains, that we are comfortable having as an approver on a KEP.
E
If we look at the past two releases, I guess... I just know, personally, it can sometimes be hard for me to keep everything in my head, and for those who I give feedback to, I'm like, you know, can we just pick one of these two or three different things right now? It's largely just a consequence of trying to absorb everything that people bring forward, but...
E
Just top of mind here: if there are people who have been working on particular domains, we want to continue to refine that. Like, I'm thinking about the device space; we did the Resource Management Group and such. Maybe Kevin or Swati or others can start thinking about, you know, do you want to take ownership in that area around KEPs going forward? Just, what can we do to give a little more specialization, so that we...
E
I pick on the DRA one because I think that was one where I gave feedback that there were like three or four different concurrent resource-management-related ideas, and personally I struggled to absorb all of them.
A
Okay, that's a good addition. So we have two of them. Are you mostly talking about earlier review by experts, so that there are fewer details to think about?
E
To grow people's scope in the SIG, right? Okay. And so, you know, we're all working on really complicated areas, and so, particularly as we started the resource management group, maybe we can think about ways to grow enhancement scope with that, because...
A
And there is also some secretarial work as well that needs help. Okay, is there any more feedback?
A
Perfect. Let's move on to 1.27 planning.
A
Oops, wrong one. This one.
A
I think what happened is that everything that didn't fit into 1.26 got carried over here, and people started adding more KEPs to the table.
F
Yeah, sure, I mean, we can... oh yeah, I think maybe we could have ordered it, but it's okay. Let's try to get through as many of them as we can, because I think we want to give priority to things like in-place VPA that didn't make it into 1.26. All right, so first on the list we have the kubelet plugin model based on DRA. Marlo, are you around to speak to it, basically?
L
We have a PoC that works, but we haven't based it off of DRA, so we have some more actual work to do to get that through. But we have dedicated resources to work on this, and I think, as we've spoken already, we've had our first working group meeting, which included Derek, Swati, Kevin, and Sergey; we were all there, and a variety of others: Francesco, myself, and Thomas. But we are also happy to help take some load off going forward.
F
So, for this particular KEP, do we have reviewers and approvers identified?
L
There is the recording; we need to get that to Sergey to get it posted. There have also been multiple posts in the channel with times, and requests for people to give times. Yeah, there's...
E
A Calendly link, I think, but there's only been one meeting, Alex, to my knowledge.
E
Yeah, so I don't think you... if you missed something, it's a shame, but yeah, please join Thursday if you can. I still think there's more discussion to have, but for those who weren't there: it was a general discussion of what we can do to allow pluggable resource managers for those things that we don't have in-tree, or maybe ways we can evolve the things that we do have in-tree to fit a pluggable model. It was a good discussion, so, yeah, great to have you there.
E
Yeah, Swati, thanks so much for volunteering. I also... and then I assume Kevin will be pretty active here as well.
F
He is not around; is Sparkle around? I know they have a HackMD or a KEP here, I'm...
A
Yeah, that's a very straightforward improvement that would help with image pulls.
J
I mean, it's close enough, so yeah. This is pretty much what I know from a brief, quick conversation with Kevin. Basically, they want to expose the DRA allocation through the existing introspection endpoint, and as soon as I get more details from Kevin, I will be reporting, or he will be reporting; someone will be reporting. And yes, I will be happy to review.
F
I made a pass at this one; it seems, again, simple enough. We may need some API review, but from the node side it seems straightforward.
D
Yeah, for this one, I remember last time we were kind of double-checking and reiterating on whether we want to move ahead in this direction, so I wanted to discuss that; I have an agenda item about that as well. But if everyone is happy, I'm happy to volunteer to graduate this feature to GA.
E
I think we should graduate it to GA, and anything we're discussing on the first topic that Marlo raised needs to be inclusive of evolving this feature, right? So I don't think that, because we're discussing those other things, we should stop signaling to the community that it's safe to use this function, given its broad usage.
E
But I don't want to put fear in people's hearts that it's not safe to use this function, given that I know it's widely used.
E
Maybe, as part of the resource plugin stuff, we can think about how that makes the policy options not need to be in perpetual non-GA. But I think we kind of recognized that that was never able to graduate when we had the discussion, so...
A
Perhaps, Marlo, you can recommend somebody as a reviewer for this KEP, so it will be reviewed from the perspective of resource manager plugins.
F
Actually, on kubelet pod resources to GA: Francesco?
J
Yes, this is a simple one. It's about the kubelet server itself: a feature gate to actually expose and answer on the pod resources endpoint, whose API was already GA in 1.20. Actually, it's on me; I didn't notice this before, so thank you, Sergey, for noting it. So we want to graduate the server endpoint, so the kubelet can actually answer the APIs. What I expect is mostly documentation and handling the PRR review, so code-wise I expect this to be very simple.
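For reference, the pod resources endpoint being graduated is the kubelet's local gRPC service. A minimal client sketch, assuming the v1 API from k8s.io/kubelet/pkg/apis/podresources/v1 and the default socket path (which varies by distro):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
        // Default socket location; an assumption that varies by setup.
        conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := podresourcesv1.NewPodResourcesListerClient(conn)
        resp, err := client.List(context.Background(),
            &podresourcesv1.ListPodResourcesRequest{})
        if err != nil {
            panic(err)
        }
        // Per-container view of exclusively allocated CPUs and devices.
        for _, pod := range resp.PodResources {
            for _, c := range pod.Containers {
                fmt.Printf("%s/%s/%s: cpus=%v devices=%v\n",
                    pod.Namespace, pod.Name, c.Name, c.CpuIds, c.Devices)
            }
        }
    }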
J
Yeah, about that: Kevin raised comments about the interaction of this work with the DRA work he is proposing, that they are doing. I do think there is pretty much no interaction, because the actual interaction would have been with the pod resources API proper, which is already GA, and there are conversations in progress. If there are any updates in this area, I will notify you folks.
F
All right, so we haven't heard any updates on this one, and we didn't hear anything in 1.26 either, so this is basically waiting on the author. This is "split stdout and stderr in the log stream," so, yeah.
F
All right, QoS-class resources, Sasha and team: is anyone going to give an update on this one?
F
Yeah, I think we probably need to have a discussion in SIG Node to make sure we are all on the same page, so maybe we can queue something up for the next meeting on this topic.
B
A question regarding timing: if we don't have any more SIG Node meetings here, and the next opportunity will already be in January, what is the deadline to get these KEPs approved?
K
Yeah, I wanted to say something here. Hi, I'm Dikshita; I work on the node runtime team at Google. I might be taking up this KEP, by the way. So if you could also include me in the Thursday meetings, it would be great. I'm just starting to ramp up on this; by the way, it was brought up in one of our meetings.
L
Yeah, one thing to note is that we ran out of time as it was, so I suspect we'll use the rest of it. If we need other meetings for other things, we should do that, though. All...
E
I said it was my honest mistake, but I thought maybe there was one macro organization within Intel pursuing a set of topics, and I thought that this would have been one of those topics. I appreciate the plight of big companies everywhere. So if it disrupts anything, we could just do this in our next SIG Node meeting.
F
Thanks. So, moving on to the next one: probe-level termination grace period to GA. Ryan is taking it on; any reviewers?
A
Mike and Paul, this is... if you want to contact me, let's discuss. But yeah, this feature, I just want to GA it. Any previous approvers?
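For readers unfamiliar with the feature just mentioned: it lets a liveness probe failure use a shorter grace period than the pod-wide one. A sketch using the core/v1 Go types; the image, port, and values are illustrative placeholders:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "sigs.k8s.io/yaml"
    )

    func main() {
        podGrace := int64(3600) // e.g. a long drain for normal deletion
        probeGrace := int64(10) // but kill quickly when the liveness probe fails

        spec := corev1.PodSpec{
            TerminationGracePeriodSeconds: &podGrace,
            Containers: []corev1.Container{{
                Name:  "app",
                Image: "registry.example.com/app:latest",
                LivenessProbe: &corev1.Probe{
                    ProbeHandler: corev1.ProbeHandler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/healthz",
                            Port: intstr.FromInt(8080),
                        },
                    },
                    // The probe-level override this KEP graduates.
                    TerminationGracePeriodSeconds: &probeGrace,
                },
            }},
        }
        out, _ := yaml.Marshal(spec)
        fmt.Println(string(out))
    }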
G
Yeah, so for this one, you know, ideally we'd move it to Beta and define criteria for testing and performance benchmarks for both of the CRIs, to be collecting the metrics and stats. So it would largely be refining the approach that we found when we did the Alpha stage last time, yeah.
G
Yeah, I think on the kubelet side, it'll mostly be, you know, we have casually talked about having validation tests for all of the cAdvisor metrics, and then also having perf comparisons. So there won't be as much work on the kubelet side, but kind of just administrative stuff, to make sure that we're dotting all our i's and crossing our t's.
A
I wonder where we are in terms of documentation. I think documentation around metrics in the kubelet is very weak in general, so it would be good if we can handle that with better documentation during this KEP.
G
Absolutely, that's a great shout-out. That was also on my mind, and I forgot to mention it.
F
Great. Node log plugin support for pods... probably the same status.
A
And what is interesting here?
F
So there was one... I know, David, you are interested in memory QoS. Would you be able to work on it, to take it to Beta? I know Paco also came up last week with a question around the threshold.
I
Yeah, feel free to add me, and Dixie, on the memory QoS. I don't know if it's here or not; if not, we can add it to the doc.
A
And if you want the details here, I can send a document in the chat.
A
I was going to say, about memory swap: I plan to work on that. I was planning to work on it before, but now... Okay, so is there anybody who wants to be a reviewer or approver?
A
Yeah, it wasn't touched for a long time, so maybe we can remove it. I discussed this with sig-docs; they mostly need help with documentation; the feature can be released as-is. So if anybody is interested in doing this work, please speak up or reach out to me. Otherwise, I may have time to help do that.
A
Is this the one we have in this KEP, topology manager? Yeah, it's in the release. This one, downward API hugepages, yeah.
A
Yeah, what we can do is at least make it cleaner, and see how we do in terms of capacity of reviewers and approvers, yeah.
A
So we have seven minutes left. Let's try to get through the items relatively quickly. I think we covered this one, unless we're going to have more comments here.
O
I made a mistake when I was filling out the doc, in the other-features section: I added a feature that I presented at the batch working group. Can we go back to that SIG Node one? I added it last week, and I guess I missed it, sorry. Under other features, there was an option about adding a condition for when pods are stuck in Pending. I presented this at the batch working group, and they suggested I work on a potential KEP for this release.
O
No, no, sorry: the SIG Node KEP planning doc. There was an other-features option that I think I added last week, and then this doc completely changed. Scroll down. Sorry.
O
Yeah, so this was... I presented this at the batch working group, where it's kind of related to the retriable KEP. This is my first proposal for Kubernetes, so I was mostly hoping to start a KEP process in 1.27. I doubt it will be fully done, but I wanted to know if that would be possible.
O
So, mostly, this is for batch users. You run into problems where people mess up an image name, or you have an invalid ConfigMap or Secrets, or volumes that aren't getting mounted correctly, and your pod stays in Pending. I've noticed that, at least on my team, we have a lot of code with regular expressions matching against events, which is not very stable from release to release. Yeah, I understand we can do this separately, but I'm not really sure of the process yet.
A
So the idea is to improve status reporting for pending pods, yeah.
O
A condition that kind of reflects pods that are stuck due to configuration errors. Some of these might be involved with, like, a ConfigMap volume missing. I noticed this is an interesting one, where volumes don't give information in the condition or reason in the container status.
O
They just have an event that says whether or not it failed; this one says "MountVolume.SetUp failed," and this is a case where we match on the event with a regular expression rather than looking at a container reason. Some of these other ones are related to an invalid image name, or ErrImageNeverPull; those are errors that happen, and then the pod gets stuck in Pending.
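To ground the gap being described: today a consumer has to combine machine-readable container reasons with free-text event scraping. A client-go sketch of the status-side half (the pod name and namespace are placeholders; the final loop is where a new "stuck in Pending" condition, if this proposal lands, would show up as a stable field rather than an event):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig the usual way; error handling trimmed for brevity.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        pod, err := client.CoreV1().Pods("default").Get(
            context.TODO(), "my-batch-pod", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        if pod.Status.Phase == corev1.PodPending {
            // Today: container statuses carry machine-readable reasons for
            // image errors (e.g. ErrImagePull, ErrImageNeverPull)...
            for _, cs := range pod.Status.ContainerStatuses {
                if w := cs.State.Waiting; w != nil {
                    fmt.Printf("container %s waiting: %s: %s\n",
                        cs.Name, w.Reason, w.Message)
                }
            }
            // ...but volume/ConfigMap mount failures surface only as Events
            // ("MountVolume.SetUp failed ..."), which callers end up
            // regex-matching. A pod condition, as proposed here, would make
            // that a stable field read instead:
            for _, c := range pod.Status.Conditions {
                fmt.Printf("condition %s=%s reason=%q\n", c.Type, c.Status, c.Reason)
            }
        }
    }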
N
Hey, one quick note on that: we are tracking, I think, one KEP on conditions around whether a pod is getting successfully started and when that's happening. I think that was the last row in the table. You can maybe take a look at that, and maybe we can also discuss if there's something common that may be solved, based on what you're observing and your use cases.
N
Exactly right. I mean, the name has been discussed quite a bit, and we came up with a new name. But yeah, I was wondering if you'd already taken a look, and if...
O
I will see, yeah, that might be it. I have... yeah, okay.
A
Yeah, and sorry, I forgot who was speaking.
A
Yeah, if you can take a look at this KEP and see if those can be combined together or somehow improved, that would be great; we will save on some paperwork. If not, come to the next meeting and we'll have more time.
A
Okay, let's go through the topics really quickly. So, do you want to discuss it now and then move to the next one?
D
Yeah, so the first one is already addressed; we've decided to move ahead with the GA graduation. The second item that I have here is the lack of multi-NUMA systems, which could be a key blocker for this work. The feature is beta, but actually the end-to-end tests are not executed in CI, because the machines don't have multi-NUMA. Sergey, we discussed this in the SIG Node CI meeting last week; I ended up creating an issue in the test infrastructure repo. It would help if people could take a look at that.
D
Yeah, I think as part of test infra itself, we will be having discussions on possible machines. I looked into AWS; Sergey helped out and provided some machines from the GCP side that could be suitable for this, but again, it depends on what the cheapest option is.
D
If Equinix provides us the cheapest machine, I think we can go with that. I'm just looking for input in terms of: as SIG Node, do we want to move this forward, and, you know, I don't want to say push or put pressure, but maybe influence the test infra group to make such nodes available in the test infrastructure.
A
Yeah, I can help with that. We can always play with the frequency of test execution and run some weekly, for instance. It's not ideal, but if the price concern is too big, it can help.
A
Next topic: I wanted to discuss CRI API policies; we can move that to the next meeting. Next meeting we may have an update on in-place vertical scaling; I hope you don't mind moving that into the next meeting, since the author is not here. I updated the GitHub teams based on OWNERS files; if you can approve this, it would be great. It's part of a year-end cleanup of all the files. And yeah, Marlo...
A
If you want to say more about the meeting on Thursday, or what you've been doing, you have maybe one minute.
L
Yeah, basically, we had the initial discussion. There are some concerns that we need to address, but the model seems sound; we'll have to address them. So, basically, we need some way to bootstrap at the very beginning, before the resource managers come online.
A
Great, okay. With that, we are out of time. Thank you, everybody, for staying two minutes late, and have a great holiday. I think it's the last meeting this year; see you all in January. Bye-bye.