From YouTube: Kubernetes SIG Node 20200526
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
B
Sure. Are you ready? So, we had been making some progress in the testing meetings. The meeting was rescheduled to right after this one. The test spreadsheet has been updated; there are still a few items open — a few slots where tests haven't been signed up for — so if you're interested, please do that. We had sort of been looking at—
B
—you know, what priority we should give these tests. From the last meeting we sort of narrowed it down to focusing on the merge-blocking tests first, then release-blocking, and then release-informing. So we've got an idea, and I need to go through, update the spreadsheet, and mark those appropriately. I'll do that — I was out Friday and Monday — but one of the more interesting parts is—
B
I think these were the release-blocking ones. The email was sent out, and there were a few of us that got on it — it was me and Ed and Morgan — and we debugged this thing. What we realized is that some of the COS image testing had been silently failing for about the past four weeks, and the PR — 17617, I think — had updated some images.
B
We saw, hey, we've got some new tests running here — and why are they intermittently failing? That's when we sort of gained insight into: well, they had been silently failing for four weeks, and now they're failing intermittently. It turned out, as we debugged, we sort of figured out: hey, is it this one particular image? I think it was 73. That was the only one failing, after we had speculated about many other things, and so we replaced it with a new image. So now, unless something's happened in the past day or two, these should be running and they should be passing. It just goes to show — I think we probably have maintenance work to do here, to keep an eye on these images, make sure they are updated regularly, and avoid this fire drill going forward, because that's not much fun. So—
C
B
Let me ask a question: previously, has there been anyone monitoring these images and doing maintenance — to say, hey, these things are about to fall out, here come some new ones, let's test them and update the images in the file? Is there a process for that, or no?
C
Previously we did have the process, actually. The team — I mean, internally at Google the team actually closely monitors not just the COS images but, you know, all of this; we basically carry most of the load. But I believe recently there has been a lot of change, both inside Google and outside Google, and in SIG Node also. So that's why this fell through the cracks.
B
D
D
The COS images — interesting, yes.
B
So, just a quick recap: what we realized is that — it was 17617, I think — that PR updated some of the COS images, and what we noticed is that there seemed to be some new tests running. After we looked at it a bit, we realized that those COS tests had not been running for approximately the last four weeks.
B
So, basically, the COS image tests were not running for the past four weeks; we updated the image. But all this was triggered by sig-node-kubelet-master failing, which I think is release-blocking. So we got an email, and, you know, we jumped on it. What we really need is to be more proactive about, I think, keeping the test images updated and also, just in general, monitoring the tests themselves.
D
D
Yeah, I know — thank you for organizing that effort to fix those tests and stuff. So there are multiple issues: one is the short term and one is the long term, right? For the short term: can you please send me the list of tests that need a COS image update or a fix? I think we have — I'm not sure if Roy joined today — Roy joins from the COS team. Plus COS is open source, so everybody can play around with it.
D
It's an open-source operating system from Google anyway. So I think we have somebody from the COS team who can take a look as well, to help us in the short term. But long term, for those test failures: do we have any alerting mechanism, so that people can get alerted and know about it beforehand, before it turns into these blocking issues? Yeah.
B
That's a good question, and I think, you know, in the SIG Testing meetings we are talking about what the alerting mechanism is. Right now, if you look in some of these job files, some have an email alert and some list individuals. Probably what we need to do is go and clean that up, and maybe even better than that would be to come up with an email alias that would send an email to folks who are really proactively monitoring this. That's how I got—
B
F
B
G
There is a possibility to specify a regex for the images, and that's better, because we can just specify the series of images for a milestone — like COS 81: you say cos-stable-81 — and then, if COS releases a new image inside that milestone, the test will automatically pick it up and use it. However, if they stop releasing, for example, the milestone images that we are—
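The milestone-pinning idea described above can be sketched roughly as follows. This is a minimal illustration only: the image names, the regex, and the "newest build wins" rule are assumptions for the sketch, not COS's actual release scheme or the test-infra implementation.

```python
import re

# Hypothetical image names; the real COS naming scheme is an assumption here.
AVAILABLE_IMAGES = [
    "cos-stable-81-12871-69-0",
    "cos-stable-81-12871-103-0",
    "cos-stable-77-12371-274-0",
]

def latest_in_family(images, family_regex):
    # Keep only images that belong to the requested milestone/family.
    pattern = re.compile(family_regex)
    matching = [img for img in images if pattern.fullmatch(img)]

    # Compare the numeric build fields so a newly released build wins.
    def build_key(name):
        return [int(part) for part in re.findall(r"\d+", name)]

    return max(matching, key=build_key, default=None)

# Pinning to the cos-stable-81 milestone picks up new builds automatically.
print(latest_in_family(AVAILABLE_IMAGES, r"cos-stable-81-.*"))
# -> cos-stable-81-12871-103-0
```

The point of the regex over a hard-coded name is exactly what the speaker describes: when a new image lands inside the milestone, the selection changes without anyone editing the test config.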
B
H
I have — you know, recently I just joined, frankly, the SIG Node workshop — COS has APIs to monitor all those releases. Yes, I agree with that: every time COS releases an image, having us keep updating it by hand is not scalable. Also, I noticed in the code — I added a comment — there's already a COS image regex, right? I'm not sure why that's not used; it's supposed to pick up the latest, but it's cut off to a single fixed image.
H
So my question is whether it's useful to do something like what Ed suggests: if we look at this COS image, it's cos-stable — look at the history — and COS has no LTS, so should we just specify a wildcard so it always gets the latest image? Because a COS release within a milestone is mostly a bug fix or a security fix.
C
I'm sorry to interrupt — can we separate the Google-hosted COS from the open source? In the past we had a policy that we wanted the COS image to be a minimal upgrade, not a full upgrade. What we want is minimal support for the open-source offering, but right now the open source has become sort of a side effect of the Google production pipeline.
C
The interest here is from SIG Node, the open-source side. We know that in the past, a long time back, COS caused a lot of stability issues for the open source. So we had a policy to update the SIG Node release-blocking tests to the latest COS image, and then GKE is a separate story.
C
Those are two different qualification pipelines. Otherwise, basically, the open source always has to fix the GKE-specific problems, which blocks the open source from moving forward. So this is just a reminder of the old policy — this is how we defined it, and over the past couple of years we've been following those things. Sorry, yeah.
E
C
Okay, so I would strongly suggest we define — and make sure we enforce — the policy: figure out which part of the policy is working and which is not, make sure we have a COS image rollout policy that is separate from GKE's, and make sure each operating-system-image-related problem doesn't land on us, because stability has been a key issue for the open-source SIG Node. That's more important, so that the open source can move faster and move forward on a more reliable operating system.
A
I guess what I would like to understand, and maybe capture in this forum, is: there's obviously some institutional knowledge around the rollout of these images and tests, and as long as the right representatives from that institution are on the right mailing list to get the alert, I think that's the first step. I know I personally do not know how.
A
B
A
A
I
So let me introduce myself, if you want. I'm part of the Kubernetes organization, but I have never worked in SIG Node before, and I have not been very active in the recent past. But right now we're working with a client that will need this sidecar ordering functionality, as proposed in that KEP.
I
Especially about some concerns: I've been reading the whole KEP discussion and trying to follow all the links — it's quite long — but there are concerns to address, so we can see what they are and propose ways to address them. Also, I'd like to keep in mind that that KEP, for example, helps with our client's case, but it's not perfect, and maybe it will fall short in the future when different teams try to automatically inject multi-level side—
I
—sidecar dependencies. Also, I understand that some of the concerns were related to the termination grace period. But yeah, basically, I would like to have a place with all the concerns, so we can see what's proposed and how to address them, and maybe have more real-world use cases in mind — so that even if the KEP doesn't address them, it's extensible, or it seems reasonable for it to address other, more complicated use cases in the future. But yeah, I wanted to say hi here and ask what you think. Well—
A
A
Folks might say: that's not a concern, I'm not worried about that, I can work around it. But then there were other use cases — like, if you were using it for log forwarding, you might actually lose data if we didn't respect some of those scenarios. And so I thought that Tim Hockin — and I don't know where you go—
A
—if you were the group that came afterwards, was going to find some folks at Google to help flesh out that problem, and someone had volunteered outside support for that — I'm losing track of names. Rodrigo, you were brought in by Tim, or — and, Murnau, maybe you can fill in what meetings might have happened since.
A
A
A
J
A
The concern was that there were a lot of users running Kubernetes in, like, a virtual private cloud, and everything was great. But then I'm also aware of many users running Kubernetes in trains or planes or automobiles, or just in different environments, where the normal way one would do maintenance on that host — I don't want to ignore that. And so I want to know that, like—
C
Sorry — okay, let me finish my sentence. Yeah, so I agree: the major concern is the shutdown sequence, but beyond that, basically, this adds another complexity into pod lifecycle management. We've talked about pod lifecycle management in the past — no matter whether it's the restart policy or the pull-image policy, all those kinds of things — and there have been many attempts to revisit pod lifecycle management and container lifecycle management.
C
I
A
A
It's been thought about. Debug containers were the other type of container that has been introduced, where you ask: okay, well, what does it mean when I add that container later? If the kubelet looks at that, there's a line of use cases and stuff. So, in general, there's not one single thing I can point to.
C
A
What would be useful for the team to provide is: what is the use case that your client is progressing, and in what domain, where the KEP as defined would make sense? Is it a use case not identified in that KEP? Is it an Envoy/Istio-style use case? Is it something else? Maybe your client's experience would be the useful net-new information. Yeah.
I
Service mesh is a use case. Another is gathering logs — like uploading logs after container termination — so, as you mentioned, it interferes with not only shutdown. There's also a kind of certificate sidecar that runs as a daemon and renews certificates; and another type of sidecar also runs alongside, fetching certificates for the pod to use and rotating them.
A
Mm, okay. Is it possible to say which industry vertical it's in? As an example — I don't know if Kevin's on the call — Kevin this morning was talking about: well, when I run an AI/ML workload on a Kubernetes cluster, I set up my kubelet as follows, and this is the type of pod I know is running on this cluster.
A
A
I
Regarding the general complexity — something I had in mind when looking at this, and I don't know if it makes sense, but I was thinking of making sidecars part of the regular path. I've been looking at the PR, and it's basically an alpha feature: if enabled, and if it's a sidecar — and maybe, of course, it will eventually be a full feature — but there will be that "if". For example, I think right now there is a bool, like: is this container a sidecar or not?
I
So we have this bool semantics, but maybe something more general could also be part of the regular path and satisfy the sidecar requirements. For example, let's say instead of a bool it's an int, and basically that int defaults to the same number when you don't specify it. Let's say it's like an order: each container has one, it has a default value, and what do you do—
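The int-ordering idea being floated can be sketched as follows. This is a hypothetical illustration of the semantics, not the KEP's actual API: the `order` field, its default, and the container names are all assumptions for the sketch.

```python
from collections import defaultdict

# Hypothetical pod spec: each container may carry an integer "order";
# containers that omit it share a default, preserving today's behavior.
DEFAULT_ORDER = 0

containers = [
    {"name": "istio-proxy", "order": -1},   # sidecar: start first, stop last
    {"name": "app"},                        # no order -> default
    {"name": "log-uploader", "order": -1},
]

def startup_sequence(containers):
    # Group by order and start groups in ascending order;
    # containers sharing an order value start together.
    groups = defaultdict(list)
    for c in containers:
        groups[c.get("order", DEFAULT_ORDER)].append(c["name"])
    return [sorted(groups[k]) for k in sorted(groups)]

def shutdown_sequence(containers):
    # Shutdown is the mirror image: highest order first, sidecars last,
    # which is what log-forwarding and cert-renewal sidecars need.
    return list(reversed(startup_sequence(containers)))

print(startup_sequence(containers))   # [['istio-proxy', 'log-uploader'], ['app']]
print(shutdown_sequence(containers))  # [['app'], ['istio-proxy', 'log-uploader']]
```

The attraction of the int over the bool is visible here: a bool only expresses "before/after everything else", while an integer order generalizes to multiple levels of injected dependencies.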
I
A
I think that was some feedback that was originally offered when the KEP was first proposed. I think I had asked: were we reaching the need to have a kind of systemd-level, pre-start/post-stop type of hooks or dependencies across units, but now within a pod? And I meant that in the positive sense, not pejoratively, because—
A
A
—complexity for the most productive outcome, I guess, if you were to ask some folks who had reviewed it. And there was one particular class of use case that kept being seen, so maybe we didn't need to do the full systemd-style approach I talked about. I think, whether you do the systemd-style approach or you do sidecars as defined right now — to me, what's important is that, as an API, if we express a startup and shutdown sequencing order, our implementation needs to be rock solid on that and not sacrifice any of its ability to do that.
A
E
A
K
A
E
K
So, starting from the demo: you can see that, basically, this YAML defines a static pod — the scheduler and the cert syncer. So we're looking at — please look at — the first four rows. Oh well, we have three masters; at the bottom, one is 248, and it's not patched. So let me show the behavior on this node first, so we can see. Hopefully you can see it at the top of the screen: two containers corresponding to the scheduler and the cert syncer. So—
K
K
So, by the way, my operation is within the grace period. What I'm showing the audience is that this is a cancelable grace period: within the grace period, if the request for a scheduler comes back, the previous containers will be shut down. So you can see the difference — the three hours versus twenty seconds — and this value is the pod ID: you can see it went from ebe... to 29d..., and this is the container ID; you can also look at the first column, from a13d... to 678-2... — new containers, yeah.
B
A
A
A
K
K
So yeah, basically I'm showing the major portion of the patch — this is the same pod-kill caller. The code in the middle — I don't know if the font is big enough for the audience, but I can briefly go over it. Before calling the kill-pod, this middle code gives the termination some grace period, and during the waiting I try to poll whether there is an intention to add the underlying static pod back; if that's the case, I come out of the waiting and log it.
K
A
Okay,
I'm
gonna
have
to
look
at
this
closer
I,
guess,
I.
Think.
If
the
aggregate
question
is,
if
a
static
pod
expresses
a
termination
grace
period
on
the
pod
spec
itself,
I
think
there
would
be
no
disagreement
that
the
Keyblade
should
respect
that
I'm,
not
sure,
if
that's
what
you're
you're
doing
here,
though,
versus
just
delaying
sending
sick
kill
for
that
period,
verse
so
things
that
kill
and
then
waiting
that
period
for
I'm.
Sorry
before
sending
sick
timer.
Vice
versa,.
K
Yeah, so during the demo I've shown that the current behavior doesn't respect the grace period. The second part of the demo shows that — I just want to save some time. Basically, there are two scenarios I can demo: one is, without moving the YAML back, the two containers from the previous lifecycle should terminate after the grace period; since that involves waiting, I'll skip that part.
K
K
There's not much time for everybody to digest my proposed solution, so I'm just briefly showing my code. The second issue I want to bring to the community's attention is pull request 91211. For this one I can also do a brief summary. Basically — I forgot, I don't know the name of the user who opened the issue — however, I myself and Matt have tried out the approach proposed in 91211 multiple times. So, Matt, if you go to the underlying issue — I think I linked to just one.
A
So, just to close on the first issue: the first issue is important, given its broad use, and I want to make sure we don't lose that one. It sounds like, if we are not respecting the grace period, then that's a definite P1 issue, and I'll work with you, Ted, to unblock that. I just — I'll have to see what your PR is doing, because that would have an impact on kubeadm and a variety of users of static manifests.
C
My understanding, Ted — I'm sorry, I haven't looked at your issue description — but my understanding, based on what you described here: if you remove that manifest file, then the component won't respect it, because the object the kubelet wrote to the API server is immediately removed, right? You just saw the code; I think that's what it is. And then there's the termination grace period — oh, that won't be respected. But if you're not removing the manifest file, you're basically just trying to drain—
C
The second scenario is: you can remove that manifest from the node, or you basically just drain the node. You're doing the drain, and you're missing the SIGTERM, all those kinds of things, and then, I think, the component will respect your termination grace period there, right?
A
—that when it iterates over all the pods on that node, it will skip a pod backed by a DaemonSet or a pod that's backed by a static manifest. You won't actually get a delete action — last I recall — even sent to the API server; anyway, if you did, it would just be ignored. So the part I'm trying to work through is: when you move the file out, does the kubelet have enough information?
B
C
Because I can obviously see that in the code: when we move the file, the kubelet notices that and immediately deletes the mirror pod, so then in the API server there's no such object anymore. Obviously that's the problem — that's why we cannot respect the termination grace period: the mirror pod has already been removed.
C
A
I want to make sure I'm properly understanding the sequence, but I feel like there's enough smoke here, with a potentially broad enough impact, that I want to make sure I understand the scenario, and I want to thank Ted for bringing it forward. On to the next topic — Ted, what did you want to discuss about logs? Yeah.
K
K
So, in my first comment — yeah, if people can see the chat, I'm switching to the second PR. You can see the pod definition proposed by Matt, which basically keeps logging to the given directory, /var/log/pods/default..., so applying this definition along with the patch lets us test the effects. In summary: without the fix, basically, let's say after the pod has been running for some time, we have 0.log, 1.log, etc., and of course with—
K
—a significant chunk of the logs for dead containers will not be cleaned up. The current behavior is that the pod can be long-running and it can be talkative, so the current criteria is: for any given pod, we only keep logs for the live container and one dead container — or, if there's no live container, then we allow logs for two dead containers. Yeah. So I hope David or Lantao can review this. Yeah, that wraps up my portion — thanks.
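The retention rule just described — keep the live container's logs plus one dead container, or the two newest dead containers if nothing is live — can be sketched as follows. This is a simplified illustration; the function and the data shapes are assumptions, not the kubelet's implementation.

```python
def logs_to_keep(containers):
    """containers: list of (name, is_live) pairs, most recent first.
    Keep logs for every live container plus one dead container; if the
    pod has no live container, keep the two most recent dead ones."""
    live = [name for name, alive in containers if alive]
    dead = [name for name, alive in containers if not alive]
    dead_budget = 1 if live else 2
    return live + dead[:dead_budget]

# Long-running pod: one live container plus a pile of dead restarts.
print(logs_to_keep([("app-3", True), ("app-2", False), ("app-1", False)]))
# -> ['app-3', 'app-2']

# Completed pod: nothing live, so the two newest dead containers survive.
print(logs_to_keep([("job-2", False), ("job-1", False), ("job-0", False)]))
# -> ['job-2', 'job-1']
```

Everything outside the returned set is what the fix under discussion would garbage-collect, bounding disk usage for talkative, frequently restarting containers.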
A
A
M
Then it's just a status update: I'm working on the main implementation to make the vertical resizing happen — the pod vertical scaling. I do have the code review: the API code review changes have been out for a while, and Tim and David have already looked at them. I have shared a change for the CRI — you can see where I kept that other KEP that we discussed last week — and I implemented the changes based on our discussion last week. Hopefully David, or anyone else interested, can look at this.
M
Please, please take a look at it. Once I have the main code fairly close to ready, I'm going to send all three distinct PRs out against the main master branch instead of within my branch. So this is the API and code change — the number-one PR — yeah, that is the one that is ready for review. That's the CRI KEP changes: changes to the CRI to support Windows and to return information about the currently configured limits from the runtime.
M
So it's a pretty small, contained change and hopefully should be easy to review. The unit tests themselves are kind of barely what's needed, just in case there's a lot of feedback and I need to change things — I don't want to throw away a lot of work. And for the end-to-end tests, Shen Rank from IBM volunteered to help me, so I think we're going to make the code-freeze date. But the unit tests — it looks promising; I'm surprised at how — so that's where we are.