From YouTube: Kubernetes SIG Node 20230131
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230131-180532_Recording_2560x1284
A
Hello, hello. It's the SIG Node weekly meeting. It's the last day of January 2023; welcome, everybody. We have a very long agenda today, and we also want to go through our KEPs document. So let's try to fit all agenda items in under 30 minutes, because most agenda items are KEP-related anyway, so it will be a good discussion.
B
Hey folks, yeah, so my name is Arvind. I know many people here might not be familiar with my name: I work at Red Hat on OpenShift, mainly on the Windows node side. I'm here to talk about this KEP about adding a new feature called node log query, which allows you to query for node logs using kubectl. This KEP is quite old; it's a couple of years old, and the implementation has been going on for a couple of years.
B
The main reason for the feature was on the Windows side. We run into issues where the kubelet goes to ready, but you're not able to bring up pods. You typically need to look at containerd logs, CNI logs, and kubelet logs, and this feature helps us debug much, much more easily. If you have similar issues on the Linux side, you can use the same feature to do that initial debugging. But the latest feedback we got from Jordan was:
B
He's a little uncomfortable about introducing this feature, because they're going to disable the log endpoint in the kube-apiserver by default. So his stance is: if this feature does move forward, he wants it to be disabled by default, and if it is enabled, there's a warning, and it puts the cluster in a non-conformant state. So I just wanted to get, you know, me and SIG Windows, we wanted to get some feedback from the SIG Node folks on how we can move forward with this.
A
I'm sorry, I didn't look at the KEP. Is there some alternative suggested, like third-party agents that will do the same in terms of log collection?
B
Yeah, I'm sure there are alternative ways of doing this. You could have a full-fledged log collector running on the node. However, these are sort of heavy services that you typically end up running, and at least on the Windows side, we definitely don't want the customer to have to do something extra to allow us to debug. We would like to be able to debug these sorts of issues right out of the box, without the customer having to do anything extra on their side.
C
So, Sergey, the way I think about this, because I was asking Arvind similar questions yesterday, or, I forget, last week when we were chatting: if I contrast this with Jordan's feedback on the kube-apiserver log endpoint being disabled, a cluster admin is not necessarily an admin on how the control plane is configured or operated in all contexts. Obviously, there are situations where people externalize the control plane, and as a cluster admin in those scenarios you would typically just have root on every worker node in that cluster.
C
The scenario that Arvind gave back to me, which I found interesting, was how to help users overcome situations where they are unable to start a pod, and that's where I think this node log use case is interesting: if, as a user, you're trying to troubleshoot an operational deployment in an environment where the pod can't even start, whether that's your very first pod or your log-forwarding pod. So I can see there being an interest in this.
C
I didn't necessarily see an elevated privilege posture on this, because as cluster admin I can run privileged processes on nodes generally. So it was really just a convenience factor to me, which I think is different from what Jordan might have been looking at from the kube-apiserver side. But I just wanted to give that context, Arvind, from my own probing on this back and forth.
D
And I can maybe provide a little bit more detail. On Windows, there are a lot of things that log to either the Event Tracing for Windows infrastructure or the Windows system events, and this KEP was kind of a way to unify access to node logs.
D
So if you're targeting a Linux node, it'll kind of shell out to journalctl, and if you're going to a Windows node, it'll figure out whether it needs to use the PowerShell APIs to get the Windows event logs and things like that, and kind of unify it for the user to make it a little bit easier.
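For reference, the feature surfaces these logs through the kubelet's node proxy endpoint; a rough sketch of what querying it can look like with kubectl (the node names and exact query parameters here are illustrative, since the KEP was still under review):

```shell
# Fetch kubelet service logs from a Linux node (served via journalctl).
kubectl get --raw "/api/v1/nodes/node-1/proxy/logs/?query=kubelet"

# The same request against a Windows node would be served from the
# Windows event log providers via the PowerShell APIs instead.
kubectl get --raw "/api/v1/nodes/win-node-1/proxy/logs/?query=kubelet"
```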
D
So there are ways to get all of the logs from nodes, either on Linux or Windows, that do have, you know, custom logging solutions and all of that. And also, if a malicious user wants to get the logs, they can probably do that too. But this is really a convenience feature for operators who would like to get logs without having to deploy those solutions, and unifying that interface is what we were thinking of when the KEP was initially envisioned.
D
Like, you know, containerd logs, system logs, all the Windows infrastructure for certain containers, that sort of thing.
C
If you want to summarize today's discussion on the KEP for Jordan's feedback, that would be good, but if others could chime in with their concerns, or lack of them, I think that's probably helpful to unblocking this or getting traction going forward. And thank you, Mark, as well.
B
Yeah, I can summarize on the PR where Jordan has left a comment, and I think I've linked to the PR here. Folks can add to this, and that might help Jordan get more clarity on this. Yeah.
C
Just top of mind, things that we could make sure are covered in the KEP or have been enumerated: one would be clarifying who the intended consumers of this are. I don't think we want to have each kubelet be hit at high QPS rates as the main log emitter, rather than advertising centralized log forwarding away from the node itself.
C
So to me this is no different from exec or attach traffic flows. And then I do feel that we need the endpoint to be able to be disabled in those environments that do want to disable it, at least speaking for the user base I know I would represent.
C
The same users who want to disable exec and attach would likely disable this endpoint, but that's not necessarily the majority of all users in the ecosystem. So if those from EKS and similar providers would see the world similarly, I think that would be also good to see.
A
Yeah, I think it's important to see who wants this and how it works, because, I mean, we've been involved in many log systems. The next thing that people will ask for is some sort of hiding of sensitive information out of these logs, and some filtering on top of it, and I don't think we want to get into the business of building a full-fledged logging system out of the kubelet.
B
Yeah, I don't think the intent is to do that either; at least so far it's not to build a full-fledged logging system. But thanks, Derek. I can definitely summarize what we spoke about here. And I have not added the code to actually make this an option to disable it, but I can definitely do that. I just wanted to get, you know, feedback, and also get confirmation from Jordan that he's okay with that. So thanks, folks.
A
Okay, the next item is about the KEP changes for retriable and non-retriable pod failures.
F
The owner of the KEP is SIG Apps, because it's mostly to improve handling of pod failures for jobs, but SIG Node is a participating SIG because it also touches a couple of aspects of how pods and their failures are handled. And now the KEP is in beta, but we found some issues that we want to improve in the second iteration of beta, before we approach graduation to GA.
F
So this is an update: not the whole KEP, but just the PR that contains the update. One important thing in this KEP update is that we reviewed the situation. We give users the ability to control how pod failures are handled by the configured pod failure policy, and this pod failure policy can match exit codes or pod conditions; in general, an end state that includes the exit codes or the pod conditions.
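To make the matching concrete, a Job's pod failure policy has roughly this shape (the image and exit code values are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  backoffLimit: 6
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: example-image   # illustrative
  podFailurePolicy:
    rules:
    # Fail the whole Job on a non-retriable exit code.
    - action: FailJob
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    # Do not count pod disruptions against the backoff limit.
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget
```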
F
However,
we
want
to
match
the
Pod
failure
policy
against
the
configured
profiler
policy
against
the
power
and
State
only
once
the
Bob
is
in
terminal
phase
and
after
the
first
iteration
of
beta,
this
wasn't
guaranteed.
So
we
reviewed
the
situation
in
which
this
may
not
be
happen,
and
then,
with
one
case
where
the
body
is
pending
and
terminating
and
is
already
scheduled
to
a
node,
then
it
would
get
stuck
in
this
phase
if
the
portfolio
policy
is
configured.
F
And there is also a POC implementation of this. I don't know, but if there are some questions now, then I am also happy to answer.
A
The update was written in a little bit cryptic language, so I think I understand all the terms, but at the same time I'm not sure whether I understand all the terms. So if you can make the update a little bit more detailed, with more explanation and maybe more examples, it will be much easier to review and give feedback. I mean, in general, I think it's a good direction.
H
I think this has some intersection with the kubelet pod lifecycle refactor that folks discussed with Clayton, so maybe Ryan and David, you guys may want to take a look.
J
Oh yes: the APIs were approved from the original design, and we enabled this feature in the 1.25 version and then reverted it, and I'm not sure if there is anything else I need to do to re-promote this feature to beta, because until now we have no other bugs remaining for this feature, and this item seems to be the blocking issue for promoting it to beta.
I
We discussed this briefly in, I think, last week's meeting, or the week before.
I
There are other issues with the feature, where a user can go in and change the project ID of the directory and effectively get around quotas, and so there's some discussion around whether we really want to bring this feature to beta. That was an ongoing discussion; we hadn't made a decision about it yet. So we can certainly get the bug fixes in, but I'm not sure if the feature is ready for beta.
J
Clayton provided a discussion in the meeting, I think last week, or the meeting the week before.
J
Okay, I will get back and see what that was about. Oh, thanks.
I
Yeah, okay, I'll ping you on Slack, Kubernetes Slack, and we can go over the PRs and perhaps get them in. Maybe we can figure out a plan on getting the feature to beta too.
A
I remember we've been discussing some limitations of this feature. All right, yeah, we're looking for some write-up: is it working or not? And if it's working, and we can make it work, let's promote it. Otherwise, let's get it deprecated.
A
Okay, thank you, Parker and Ryan. I posted it in the notes: we discussed it on January 17th, it's the very first topic, and the recording is available.
A
Okay, Kevin, are you on the call?
E
Oh, it's actually my PR, so I can answer questions. That's a simple change, just passing the CDI IDs through the CRI down to the runtime. So it was reviewed, and it's pending final approval.
A
Okay, another one, Kevin again.
A
Yeah, I guess it's another enhancement that you need to review. I know that there was a meeting at 6 a.m. this morning, 6 a.m. for me. Go ahead and talk about it.
L
Sure, yep. So this is the KEP for the container compute interface, the former kubelet resource plugin KEP. We basically opened the PR to the enhancements repo; you can find it attached.
L
So, just to give context for new listeners, this is basically an approach where we would like to enable attaching drivers for resource management to pods which require such drivers to get resources allocated. In terms of the status, we now have the initial KEP summary done, so the KEP form is mostly filled out. We are gathering feedback from the community, and also questions.
L
We're trying to cover that through the KEP; the review is ongoing.
L
There were some opens raised in the community meeting that we had this morning. One was whether we can add some description of what happens if the kubelet restarts, to describe this scenario just to illustrate it better. Then there was another open: we were describing a lot about the architecture for the alpha phase, and there was a request to give a little bit better overview of the long term:
L
What are the expected architectural changes for beta and graduation to GA? And there was an ask to illustrate the failure and corner cases a little bit better. We had troubleshooting information provided in the KEP, but most probably we need to go into details with some examples: how we plan to handle failure cases when a plugin, or basically a driver, fails, and how you can deploy other pods and keep the cluster up and running if there are failures on the plugin side. Yep.
A
I think during the meeting you said that it was reviewed by some people.
L
Yeah, the changes were a little bit driven by some of the feedback in the community meeting. We had feedback that we should think about bootstrapping and fail safety, and one of the issues is: if you're starting the Kubernetes cluster and something happens with your plugin, and you're going completely through a driver to handle the whole allocation process, what do you do then?
L
So that's why we thought it's good, basically, to apply the drivers only to pods which really need the drivers, and for that we will need some sort of association technique, similar to the resource classes you see in DRA, or another option in the pod spec. This will allow us to identify pods which require the drivers, and basically we can always start the cluster with the normal CPU manager. And if a driver is required, the pod which is to be scheduled will check...
L
...or there will be a check whether the driver is running, and if the driver is not accessible, you will get a failure. So with that we are trying to deal with some of the corner cases and bootstrapping scenarios and issues.
A
I think it's a big change compared to what it was before, and I just want to set expectations: for me, at least, it blows my mind trying to fit how this KEP will fit into the longer-term direction for the kubelet, and at the same time whether there are any immediate applications that it will have, like with reliability and such. So for me personally it will be hard. And also, what user stories and use cases will we solve today, and which will we not?
L
So that is also one argument for this kind of change of direction: we are coming from the argument that 90% of the use cases you have today are covered by the standard components. You have the CPU manager, memory manager, topology manager; you don't want to lose them, and you don't want your standard pods not to be running. So the cases where you need plugins are specialization cases. Those are concrete examples where you need certain specialization on the driver side.
L
Yeah, so basically we don't want to reinvent the wheel. Just to summarize it: when we need specialization, we are thinking that users would like to have a way to mark the pods which will require some specialization in terms of resource management, and then you apply the drivers. In many cases you are fine with the standard CPU management concepts inside Kubernetes, in 90% of the cases.
L
We have some examples for advanced cases; they are not the majority of the cases, but there are such cases.
C
I would echo those who wanted a better time for the meeting. I also wasn't able to be there this morning, and I apologize, but it was not a great time for me either. But I'd like to catch up on the latest discussions.
A
Yeah, maybe we need to do a separate meeting at least once if you want to expedite reviews. Thank you. Thank you.
A
The next topic is Claudio. Hello.
G
Yeah, so I'll try to keep it short. I proposed a change like two months ago in regards to how the kubelet registers plugins, and especially re-registers plugins. I don't know if you remember, but basically, currently, the kubelet plugin registration is based on timestamps. That's how the reconciler is going to notice there's a plugin that has to be re-registered: based on the timestamp, it will detect whichever plugin has to be re-registered. But there are a couple of issues with this implementation on Windows.
G
That's mainly due to the fact that the clock granularity on Windows is a lot coarser.
G
So, for example, if you do two consecutive time.Now calls on Windows, you will most probably get the same timestamp; it only gets updated every 15 milliseconds or so. That is causing a couple of issues on Windows, and a couple of unit tests also fail on Windows. So currently the pull request tries to add a UID instead of the timestamp: whenever a UID is seen as being different, that means the plugin has to be re-registered, instead of relying on the timestamp. So my question is whether there are any concerns regarding this update, because I am interested in hearing if there are any issues, so I can collect them as soon as possible.
A
Just on this UID: is it something you store in the kubelet, and does it require any changes in how people approach plugins? Do they need to change anything, or is it automatic?
G
It is automatic, in the sense that there is a desired state of the world which gets updated with the new plugin. So all the calls to register new plugins basically get added to the desired state of the world, and the reconciler will then iterate over the list of plugins that the desired state of the world has and check it against the actual state of the world. If it detects a plugin which doesn't have a matching timestamp, then it will update it and re-register it. So basically my proposal updates the check from checking the timestamp to checking a UID. The mechanism stays the same; nothing changes when it comes to the APIs themselves.
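The failure mode and the proposed fix can be sketched outside the kubelet. This is a minimal Python model, not the actual kubelet code: it simulates a coarse clock where consecutive reads fall into the same tick, as described for Windows, and shows that a timestamp comparison misses a re-registration while a per-registration UID comparison catches it.

```python
import itertools
import uuid

# Model a coarse clock: consecutive reads within the same ~15 ms
# window return the identical timestamp, as described for Windows.
_ticks = itertools.count()

def coarse_now(resolution=15):
    return (next(_ticks) // resolution) * resolution

def needs_reregister_by_timestamp(desired, actual):
    # Old scheme: re-register when the desired-state timestamp
    # differs from the actual-state one.
    return desired["timestamp"] != actual["timestamp"]

def needs_reregister_by_uid(desired, actual):
    # Proposed scheme: every registration carries a fresh UID,
    # so a re-registration is always detectable.
    return desired["uid"] != actual["uid"]

def register():
    return {"timestamp": coarse_now(), "uid": uuid.uuid4()}

first = register()
actual = dict(first)   # reconciler has synced the first registration
second = register()    # plugin re-registers within the same clock tick

# Timestamps collide, so the old check misses the re-registration...
assert second["timestamp"] == first["timestamp"]
assert not needs_reregister_by_timestamp(second, actual)
# ...while the UID check still sees it.
assert needs_reregister_by_uid(second, actual)
```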
O
Yeah, I was looking at the code delta. You actually removed the older API. Is that necessary, or do you think you could maybe just provide the additional UID?
G
Okay, yeah, it's a rename, basically; yeah, of a public function.
G
Sure, I can change the name back, but technically it won't be a correct name, since it's not going to...
G
That's fair, okay. Then I can have them both.
G
Then I'll add the plugin-exists-with-correct-UID check as well, so we'll have both for a while. But basically the reconciler, with my pull request, is going to use the check with the correct UID, since that's what determines whether a plugin has to be re-added.
A
Okay, thank you. If there are no more questions here: Vinay, do you want to talk about in-place vertical scaling?
N
Yeah, hi. So, two things. One is: can we get this KEP officially tracked for 1.27? I think Tim pointed out, asked this question, and I noticed that it's not in the tracking list. Mark, are you able to do this, or do you need something from... I don't know.
A
I think the first question is about the enhancement being tracked for the release, yes.
C
Yeah, we can do that. I just want to get... yeah, that's the next practical problem, which was: did we have Tim ready to merge?
N
It was my idea to merge the API first and then merge the implementation later, just in case. That was like early January, when things were quiet; I saw that there was not much activity, and it's easier to merge that way: if anything goes on and the API breaks something, then we can roll back just the API. But Tim kind of brought up the point that if we do merge, and then there are other PRs that pick up the API changes and build on top of them, unwinding it will be really hard.
N
So he prefers to merge the whole thing, and I don't have a strong opinion. The main reason I wanted two different PRs merged was that it's easier for someone looking back, for posterity: okay, this PR is the API changes as one set, and then this is the implementation. But I guess anybody looking at it will probably have to go through both anyway.
N
So I will... I've been sick for the past few days, but today evening I'll rebase. I don't think it needs a rebase at this point, but it's a good idea to revisit once before we hit approve. What do we need? Your approval is already there; we need an LGTM and approval from Tim, and you, right?
N
Yeah, okay. So let me... I don't see any conflicts at this point, but I do prefer to do one rebase before hitting the approve and merge, just to have one more pass. It's been almost two weeks since the last rebase, and there was a unit test break that happened because of some new things that came in, and I made a change to fix that, but that's about it. Yeah, golangci-lint, that was the issue.
N
What do you prefer? Should we just wing it and hit approve today and see if it goes through? Yeah.
C
Today. I mean, if you're ill and you're not feeling well, then tomorrow, or whenever you do feel well, right? But you...
C
...want to ping me and Tim again, and we can say: are we good? Hit the button? Let's do it. I just want to let you know I'm around this week to support you.
N
Okay, after this meeting I'll just ping you; we have that Slack thread going, so I'll just update on that. Let me take a quick look at whether there are any major potential conflicts coming; I'll just do a quick rebase and see what happens.
N
So we'll go with just merging the whole thing at this point, and I'll close the API PR. Okay. Does...
A
...that sound good? Thank you.
N
It might break things, but this is a good time to do it; we have six weeks. And sorry, I didn't mean to interrupt you.
H
Yeah, Mrunal, I don't know: do you guys have folks identified who can help?
H
Thanks. So, the limit on parallel image pulls: I think it has LGTMs; it needs a final approval, so Dawn or Derek will have to make a pass for approval. Okay.
H
So then, the next one: I think we just discussed both of them, right? One is waiting on approval for the CRI change, which Dawn is going to take, and the other one needs a review.
H
So yeah, we can continue reviewing and updating that one, the fine-grained supplemental groups control. I'm reviewing it; I think it's getting close. There's some discussion around the API, so if folks have some thoughts, take a look, and we can close it out. Basically, maybe I can just bring it up here: he is proposing an API where an enum is representing the two states of a boolean. So what is our stance there?
H
Should it be a boolean, or should it be two different items in an enum?
C
I thought we were always trying to avoid the use of booleans in the API conventions. Okay, okay.
H
Yeah, so this is like: either you ignore the groups in your pod image or you respect them. Currently we are respecting them, and then there are like two enum states, ignore or respect. I think it's okay for this one, because we can't imagine a third state. But in a case where there is a third option, it becomes tricky if we have to break only one of the items in the enum while more than one are feasible.
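The trade-off being discussed shows up directly in the field shape; the field and value names below are hypothetical, since the API was still under review:

```yaml
# Hypothetical securityContext snippets, for illustration only.

# Boolean form: permanently limited to two states.
securityContext:
  ignoreImageDefinedGroups: true

# Enum (string) form: the same two states today, but a third value
# could be added later without breaking the field.
securityContext:
  supplementalGroupsPolicy: Ignore   # or: Respect
```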
Q
Yeah, yeah, I think the doc was out of date, so, well, I volunteered myself to take a look, and I'm halfway through. The only challenge for me is that the previous stage, the beta stage, was a long time ago, so the format has totally changed. Please correct me. So that's why I have to figure out what the previous one was, because a lot of format changes means a lot of diffs. So that's why the process is a little bit slow.
Q
So, since you have the background, if you want to take over, I'm okay with that; or if you are too busy, I can finish it.
P
Yeah, pretty much; the changes I made are related to the formatting, so it's just that, yeah.
H
Right. So the next one is Swati's, the kubelet pod resources API to GA; we're pending on that one.
K
Yeah, yeah, thank you. So basically, there were a bit of missing bits in the state. For example, we cannot just remove the GA lock on the feature gate to go to GA. We need to solve a Windows bug, a bug manifesting itself only on Windows, actually, so I asked about it on SIG Arch.
K
I think the PRR people will have some comments about an item which probably slipped through the cracks, about DoS prevention. Long story short: please, if you can help review, please do. Thank you. I totally want to do that in 1.27, but the scope is increasing a bit, so let's see. I do totally want to work on that in 1.27, but I won't be surprised if it slips. Thanks.
H
Right, thanks, Francesco. So then the next one doesn't have a link, but I know Sasha has a PR open, so I'll get a link here, and I think it needs a final approval. It's a little bug fix; Dawn or Derek for that one, just for graduating downward API in-place updates. We just covered that. Split stdout: can you click through that one? I don't think we have an update on that.
A
Yeah, I remember the implementation had only very minor feedback items to address; I'm surprised it's not moving forward.
A
It was because of what will be returned from the CRI, right? I think the API change was approved for this, at least.
H
The next one, Mike: the sub-second granularity probes.
O
Pretty much where we got to with it was the acknowledgment that, yeah, when you run more probes, you're probably going to use more system resources. We wanted to wait for the probe event APIs to be written, so we can have better performance analysis.
O
I think it's probably ready. We just have to decide: is this something we want to do at this point in time or not? I'm ready to rebase it or whatnot and get it through. I see Antonio Ojea had done some more detailed analysis on it; there are some additional comments added two weeks back.
A
Yeah, we found a nice bug with regular probes, when you exhaust all the sockets on the host, even with regular probes and the existing port limits and conntrack limits. Regarding sub-second: maybe, if it is needed later and redesigned with long-running connections, something like that, maybe it could be more beneficial or easier to...
O
Yeah, I mean, for me it's really a question of whether you're going to begin doing proper tuning. You can actually use up more resources if your cycle time is only at the one-second range, right? You can hold resources for half a second, for example, on average, that you didn't need to hold, because your probe cycle time should have been 1.5 seconds, not two, right?
A
Yeah, I think, if you can address this section, like scalability and reliability... but I mean, it needs to be tested somehow.
O
Okay, yeah, it does improve the test buckets that are there. This KEP itself doesn't add a whole lot of end-to-end test buckets, but yeah, we could do that as well if we need to.
H
Right, good. So again, the next one should be dropped.
A
In terms of sidecars: it's cool, we have the sidecars KEP, and we just need to put it in the proper milestone. I started a new issue instead of reusing the old one, because if you don't own the issue, it's hard to do all the tracking properly.
H
I made a pass; I'll make another pass and try to close on it this week, and other folks, if they want to take a look at it: that's the QoS-class resources one. I know Markus pinged me; I have to review it. I would definitely appreciate other folks reviewing it as well, because I'm not sure whether we agreed on the API changes being proposed. Markus, you can correct me, but now it includes API changes, like pod spec changes, right? Yeah.
R
Yeah, that's correct. So yeah, I changed the scope of the first implementation somewhat from the state that we had: the container-runtime-level UX was dropped from there, and it goes directly to the Kubernetes API changes, so you're correct. There is no real consensus on the API changes yet, and not much feedback either, so yeah, please, please take a look.
H
So I think we will definitely need to continue next week, but maybe we can try to asynchronously go through the KEPs. Folks, if you have something which is close, please poke folks for reviews and approvals. I know at least a couple more which are close; I think the seccomp graduation, and another one that's on Ryan's plate, where even the PR is ready for that one.
A
Okay, so the KEP freeze is this Friday, I think. So if you want to go through PRR review, make sure that... by the way, again, we need to make sure that our KEPs are in the proper milestone; otherwise the PRR reviewers will not pick them up, and then they will not review them. So that's the extra work that needs to happen. Maybe we can synchronize on Slack on that.
Q
I remember last week I fixed a bunch, but maybe some things were missed or messed up. Yeah, I can go over them one more time this week. Yeah, thank you, yeah. Also, let me find the AppArmor KEP; it already looks good, I believe. And sorry, earlier people asked but I was distracted: on the implementation, I'm not sure, because I haven't gotten a reply yet on whether they have the time to finish the implementation or not. Yeah.
A
Okay, with that, I suggest we close the meeting. If there are any last comments, please speak up; otherwise we are always open on Slack and other ways of communication. Thank you, everybody. Thanks, bye.