From YouTube: Kubernetes Resource Management WG 20170725
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: All right, so welcome to the July 25th meeting of the Resource Management Working Group. On the agenda today, the first item is the CPU manager; Connor and the team are here to take us through it, and we'll hard-stop at 30 minutes, if it takes that long, to make sure we can get to the second item, the last big discussion, around device plugins.
B: ...when needed. And then the other use case we had in mind here was, you know, scheduling virtualized network functions on top of Kubernetes in containers, which I know some people are starting to do now, and the latency that's added by the operating system process scheduler can blow latency budgets for some of those workloads.
B: The main requirement... the second requirement we're getting to is that you may have sidecar containers, such as log forwarding or processing or a metrics exporter: any sort of non-critical workload that runs as a sidecar container. We want to support the use case where one container is treated as latency-critical and the other one is not, so we don't necessarily dedicate an entire exclusive core to something that's just doing work on the side and isn't important to actually serving requests.
B: Number three is that we want to avoid capping CPU quota for guaranteed containers, because that would be counterproductive with respect to number one, and also to take physical processor topology into account. So there's a big section down here about... oh, I guess we can go through the block diagram first, but there's another section, which I won't go through, that basically motivates why CPU topology is important. If anyone on the call wants to go through that we can, but I think everyone is pretty much on the same page with that topic.
B: I just want to give... go on? Okay, cool. So, in addition to the proposal, we've also been working on a POC, mostly between Intel and Red Hat, and what we've done here is outline the pieces the code has, I guess the shape it has taken in that POC. So in this picture we have a block diagram where all of the light blue boxes are new, and the arrows indicate where we're either interacting with existing components or existing components interact with us.
B: But that's the pluggable piece that we'll talk about later. We have multiple policy implementations, and, inside the container registration path, what you do as a policy implementer is update cpuset mappings in a state abstraction. That state abstraction is read by the CPU manager in order to write cpuset settings back through the CRI, using a new method in the CRI called UpdateContainerResources. I'm suspecting that this piece will be one of the more contentious issues, but yeah, we can talk about that when the time comes, or now.
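The flow just described, where policies write cpuset assignments into a state abstraction and the CPU manager reads that state to push updates out through the CRI, can be sketched as a toy model. This is not the kubelet code; the names `State`, `update_container_resources`, and `reconcile` are illustrative stand-ins:

```python
# Toy model of the CPU manager flow described above: a policy writes
# cpuset assignments into a state abstraction, and the manager reads
# that state to push cpuset updates through a (stubbed) CRI call.
# All names here are illustrative, not the real kubelet identifiers.

class State:
    """Maps container ID -> assigned cpuset; tracks the shared pool."""
    def __init__(self, all_cpus):
        self.assignments = {}            # container_id -> set of cpu ids
        self.shared_pool = set(all_cpus)

    def assign(self, container_id, cpus):
        self.assignments[container_id] = set(cpus)
        self.shared_pool -= set(cpus)

    def release(self, container_id):
        self.shared_pool |= self.assignments.pop(container_id, set())

cri_calls = []  # records (container_id, cpuset) pushed through the stub CRI

def update_container_resources(container_id, cpuset):
    """Stub standing in for the new CRI method the POC adds."""
    cri_calls.append((container_id, sorted(cpuset)))

def reconcile(state, running_containers):
    """Push current assignments; shared-pool containers get the shared set."""
    for cid in running_containers:
        update_container_resources(cid, state.assignments.get(cid, state.shared_pool))

state = State(all_cpus=range(4))
state.assign("latency-critical", {2, 3})   # exclusive cores for one container
reconcile(state, ["latency-critical", "sidecar"])
```

In the real POC the write-back goes through the CRI's UpdateContainerResources; here it is stubbed so the data flow between policy, state, and manager is visible.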
B: Yeah, well, for the static policy, there's a lot more detail below, but what it does is: before you're running any of these special containers that are supposed to get exclusive cores, everything is as before. So if you have a Burstable pod, or even a Guaranteed pod with non-integer CPUs, or a BestEffort pod, they run on all of the cores, just like they do now.
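The eligibility rule being described, where only Guaranteed pods requesting a whole number of CPUs are candidates for exclusive cores while everything else stays on the shared pool, boils down to a small predicate. A sketch, with an invented function name and the usual millicpu convention:

```python
def wants_exclusive_cores(qos_class, cpu_request_millis):
    """Static-policy eligibility as described above: Guaranteed QoS
    plus an integer CPU request (e.g. 2000m = 2 CPUs, not 1500m).
    Function name and pod representation are invented for illustration."""
    return (qos_class == "Guaranteed"
            and cpu_request_millis > 0
            and cpu_request_millis % 1000 == 0)

# Everything else keeps today's behavior and runs on the shared cores.
assert wants_exclusive_cores("Guaranteed", 2000)
assert not wants_exclusive_cores("Guaranteed", 1500)   # non-integer CPUs
assert not wants_exclusive_cores("Burstable", 2000)
assert not wants_exclusive_cores("BestEffort", 0)
```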
G: [inaudible] What I'm asking is: hypothetically, if the cpuset mask applied only at the pod level and not at the container level, what sort of use cases would that break?
E: Well, it depends on which way you're moving it, right? Connor kind of had a clear way of saying this and I'm going to muddle it, but: if you are making the cpuset more restrictive, then you have to start at the leaf and work back, and if you're making it less restrictive, you get to start at the root and work towards the leaf.
G: Yeah, I guess it's really an API issue at the cgroup level then, because if you're not setting any explicit restrictions at the leaf level, the general expectation is that the leaves would just follow the restrictions applied at the parent level, or somewhere up the tree. But if each leaf is getting defaulted, and then you have to go update those defaults, that's sort of annoying, yeah.
B: So it knows which one to update. And then there's an internal interface that abstracts just a couple of methods from the kubelet to get the machine info (we're using cAdvisor's existing CPU discovery mechanisms to populate a topology construct inside the CPU manager), and we also use it to get pods. So, basically, the CRI is called with this UpdateContainerResources in two places.
B: One is at container registration, and the other is part of a reconcile loop that runs; that's basically how, in the case of the static CPU manager, the containers in the shared pool reconcile with the current cpuset for the shared pool. Sorry if that was a meandering description, but I hope it made sense.

A: So the one question I had, then, is: say I have a pod with two containers. One of the containers gets created and starts just fine; the other container creates but doesn't start, and somewhere in between, let's just imagine, like, Docker or something, it times out, and the kubelet then says: oh shoot, let me delete the container, create a new container, and then try to start that one. Is my CPU assignment changing each time there, or what would I expect in that type of flow?
E: Not entirely clear on that, but we need to make sure that in all those error paths we don't, basically, allocate CPUs to a pod that then goes away without going down the CRI path that we already intercept for the free. Basically, make sure that for every allocation there's a deallocation, and you don't have pods that are erroring out, taking up CPUs, and eventually depleting the pool, yeah.
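The invariant described here, that every allocation on an error path must be paired with a deallocation or the pool slowly drains, is the classic acquire/release pattern. A minimal sketch, assuming an invented `CPUPool` model rather than the real kubelet structures:

```python
# Sketch of the invariant discussed above: every CPU allocation must be
# paired with a deallocation, even on container-creation error paths,
# or repeated create/delete cycles would drain the exclusive pool.
# Illustrative model only, not kubelet code.

class CPUPool:
    def __init__(self, n):
        self.free = set(range(n))
        self.used = {}  # container_id -> set of cpus

    def allocate(self, container_id, count):
        if len(self.free) < count:
            raise RuntimeError("exclusive pool exhausted")
        cpus = {self.free.pop() for _ in range(count)}
        self.used[container_id] = cpus
        return cpus

    def release(self, container_id):
        self.free |= self.used.pop(container_id, set())

def start_container(pool, container_id, count, start_fn):
    """Allocate, then start; release on any failure so nothing leaks."""
    pool.allocate(container_id, count)
    try:
        start_fn()
    except Exception:
        pool.release(container_id)
        raise

def failing_start():
    raise TimeoutError("runtime timed out creating the container")

pool = CPUPool(4)
# A container that repeatedly fails to start never leaks CPUs:
for _ in range(10):
    try:
        start_container(pool, "flaky", 2, failing_start)
    except TimeoutError:
        pass
```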
G: I didn't ask this question, but I was hoping that, as part of bringing up a pod, you would have a phase where you figure out what CPUs it's going to get, if any, and then what other devices it's going to get, and you would have a static allocation map prior to even attempting to start the pod. You want to make sure that you can satisfy all the pod's resource requirements before you...
E: That's what we do, and, because it's the static policy, there's no convergence. I mean, if we can't allocate the CPUs (and we talked about this) I'm not sure if we bail out, or we just log something like: hey, you had a Guaranteed pod with integer resources but we didn't allocate CPUs for whatever reason. But we don't want to block your pod from starting because of that, so you just basically get the no-policy behavior for your Guaranteed pod, I think.
E: We still need to write this part of the proposal, but the design right now is to allow exhaustion of the shared pool and, when that happens, to evict the BestEffort pods first of all, since they don't have CPU requests: they're basically the ones that, as far as the scheduler is concerned, occupy zero CPU and can therefore land on any node even if there is no free CPU. And then set the node condition CPUPressure to indicate to the scheduler that it shouldn't assign...
B: I think you have to leave one as shareable, or maybe public, because take the example of Docker and all the daemons running: they pretty much did not reserve anything. Unless we want to reserve CPU for those, they will pretty much compete for the CPU resources with all the Burstable and BestEffort pods. So you have to, I think, by design.
E: Yes, so we're honoring kube-reserved and system-reserved when we create the initial pool of CPUs. So if you set, say, kube-reserved or system-reserved equal to one, it's going to pull out the ceiling of the reserved CPUs, and those won't be part of the shared pool; and guaranteed pods, these pinned guaranteed pods, won't be pinned to them either, but...
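The accounting described, where kube-reserved plus system-reserved is rounded up to whole cores and withheld from both the shared pool and exclusive assignment, might look like the following sketch (illustrative names, not the kubelet implementation):

```python
import math

def reserved_cpu_count(kube_reserved_millis, system_reserved_millis):
    """Whole cores withheld for the system: ceiling of the summed reservations."""
    return math.ceil((kube_reserved_millis + system_reserved_millis) / 1000)

def initial_pools(num_cpus, kube_reserved_millis, system_reserved_millis):
    """Split cores into a reserved set (low-numbered, includes core 0)
    and the shared pool that unpinned pods actually run on."""
    n = reserved_cpu_count(kube_reserved_millis, system_reserved_millis)
    cpus = list(range(num_cpus))
    return set(cpus[:n]), set(cpus[n:])   # (reserved, shared)

# 500m kube-reserved + 500m system-reserved rounds up to one whole core:
reserved, shared = initial_pools(8, kube_reserved_millis=500,
                                 system_reserved_millis=500)
```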
B: We start at the high end and work down, so that if you have anything reserved for kube or system, it will include core zero. That also allows you to set up things like, you know, setting the cpuset for your system slice, or potentially in the future even using something like a simple isolcpus policy; you can do that, and your high-numbered physical cores could potentially be reserved in that way. Yeah.
E: The only gotcha for just blindly reserving core 0 is that the math doesn't add up for the scheduler. If you don't set kube-reserved or system-reserved, then capacity equals allocatable, and you could try to schedule, on a four-core box, four one-CPU guaranteed pods for pinning, but there are only going to be three available in the cpuset, because you reserved core zero but didn't tell allocatable, and therefore the scheduler, about it. Yes.
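The gotcha here is plain arithmetic: the scheduler places pods against allocatable, while pinning happens against the physical cores minus whatever was silently withheld. A worked version of the four-core example:

```python
# Illustrative arithmetic for the gotcha described above: on a 4-core
# node with core 0 silently withheld for pinning but no kube/system
# reservation reported, the scheduler believes 4 CPUs are allocatable
# while only 3 cores are actually available for exclusive assignment.

capacity = 4
kube_reserved = 0          # operator did not set kube-reserved
system_reserved = 0        # ... or system-reserved
allocatable = capacity - kube_reserved - system_reserved  # what the scheduler sees

pinnable_cores = capacity - 1      # core 0 blindly withheld on the node

scheduled_guaranteed_pods = allocatable   # scheduler places 4 one-CPU pods
overcommitted = scheduled_guaranteed_pods > pinnable_cores
```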
C: Yes, yes, could we, you know...
H: Okay, so I think I might start with the motivation again, as Connor did. The objective for us at Nvidia is to be able to provide some kind of way to actually enable GPUs, and to do that we need a few things, at least what we consider the minimal set of features that would make GPUs available in production clusters: to be able to make the GPUs available in the container, to be able to health-check the GPUs, and to be able to run some kind of pre-start step, to clear the memory and run some tests before starting the containers on the GPU.
H: This is actually what we're doing right now with our customers: before they launch long-running jobs, for example deep-learning training jobs, we usually have a testing period where we make sure that the GPU is in a sane state. And to do that, we've been discussing for a month, I think, or maybe two, a plugin system that would allow other vendors to do that too, so it would not just be specific to Nvidia. I think this is the basic objective. Should I present everything in the PR?
L: Thank you, I think you can tell, yeah. So it looks like you already have a document; just, yeah, go down to this section, I think. The main question we discussed is the upgrade path, whether we want to support an Allocate phase in 1.8. I think you have explained the reasoning: basically it's an important feature to support in the initial prototype, and we definitely want to support upgrades; we definitely want to support them, I feel, as a team.
L: Our major concern with this would be that the corner cases are complicated. So you mentioned, I think, the policy you would like to implement: if for some reason the device plugin becomes unavailable while the devices are already allocated, you will assume they are still in an allocated state, but you will make the allocatable device count drop to zero until the device plugin connects back. Is this a correct summary?
H: Right, my thinking was that the goal of the design document is to be able to offload device handling to a device plugin, and so my thinking is that if we do that and a device plugin is not available, then we should not allocate containers that request its devices. If, however, containers are already running, we should not kill them unless they fail. That's the basic idea.
H: That is: if device plugins are not here, since they are in charge of allocating devices, then that means there's probably something wrong. Or maybe there's nothing wrong, maybe we're just updating, but in that case, my thinking is that we should not be able to allocate, or we should not even advertise those devices.
I: I mean, imagine I'm going to update your device plugin, whatever format it uses; the question people will ask is whether we're going to make sure their devices stay assigned. Say I have a long-running container: I don't want it to wave goodbye because you're doing an update, I want to make sure what I had allocated is kept.
H: So if a device is marked as failed, and it's not running in a container, then it's made unavailable, and the capacity of the node decreases by one for that device. If the device fails while running, then the capacity of the node is decreased too; it is removed, at least, from the device capacity.
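The bookkeeping described, where a failed device that is not attached to a running container simply stops being advertised, could be modeled like this (the `DeviceInventory` name and shape are invented for illustration):

```python
# Toy model of the device-health bookkeeping described above: the node
# advertises a capacity equal to its healthy devices, and a device that
# fails is removed from that count; a container already using it keeps
# running until it fails on its own.

class DeviceInventory:
    def __init__(self, device_ids):
        self.healthy = set(device_ids)
        self.in_use = {}   # device_id -> container_id

    def capacity(self):
        return len(self.healthy)

    def mark_failed(self, device_id):
        # Stop advertising the device; do not touch any running container.
        self.healthy.discard(device_id)

inv = DeviceInventory(["gpu0", "gpu1", "gpu2"])
inv.in_use["gpu2"] = "training-job"
inv.mark_failed("gpu1")          # idle device fails: capacity drops by one
```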
G: I mean, I think there's a little bit of thinking to be done here: what are the reasons an Allocate call can fail? I think we need to walk through that. And would we retry? That's the question to answer, and if so, how many times do we retry, or how long do we wait before we give up on a device? Because you don't want to keep ping-ponging...
L: I think my question, no, my proposal, was about amending the registration flow, and not about the Allocate phase at all. So I agree this is an important feature, and I think it's okay to make it available in the initial prototype. But I just want to make sure that, by introducing this feature in 1.8, we also have proper error handling, or at least reasonable error handling.
A
Nothing
we
will
hit
ping-pong
any
like
I'm
I'm
thinking
in
my
head,
that,
like
we
say,
sometimes
we'll
hit
it
I'll
feel
like
we're
hitting
it
all.
The
time
like
I
just
know
it's
going
to
happen,
and
so
like
I
would
like
us
to
mitigate
the
effects
of
oscillating
conditions
or
ping
pong
device
skates
either
by
doing
what
we
do
for
other
things,
which
is
like
require
some
steady
state
before
transitioning
out
of
that.
A
But
that
seems
like
a
pretty
minimal
requirement,
so
just
make
sure
we
capture
in
a
design,
so
is
likely
say
we
won't
hit
this,
but
those
of
us
like
supporting
us,
will
feel
like
we're,
hitting
it
all
the
time
right.
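The mitigation being asked for, requiring a device to hold a new health state for some steady period before the transition is acted on, is a standard debounce. A minimal sketch, with an invented class name and an arbitrary three-sample threshold:

```python
# Debounce sketch for the ping-pong mitigation discussed above: a raw
# health reading must persist for `required_streak` consecutive samples
# before the reported state transitions, damping oscillating devices.

class DebouncedHealth:
    def __init__(self, initial="Healthy", required_streak=3):
        self.state = initial
        self.required = required_streak
        self._candidate = initial
        self._streak = 0

    def observe(self, raw_state):
        if raw_state == self.state:
            self._streak = 0               # no change pending
        elif raw_state == self._candidate:
            self._streak += 1
            if self._streak >= self.required:
                self.state = raw_state     # held long enough: transition
                self._streak = 0
        else:
            self._candidate = raw_state    # new candidate, restart count
            self._streak = 1
        return self.state

flapper = DebouncedHealth()
flap_result = [flapper.observe(s)
               for s in ["Unhealthy", "Healthy", "Unhealthy", "Healthy"]]

steady = DebouncedHealth()
steady_result = [steady.observe(s)
                 for s in ["Unhealthy", "Unhealthy", "Unhealthy"]]
```

A flapping device never transitions, while one that stays unhealthy for three consecutive samples does.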
H: So I think my question was more about... not about the ping-ponging; there are some failure cases that we might not have thought about, and my question was more: is it okay to not be able to handle some errors, or is it better that we not implement Allocate at all?
I: Personally, I feel we need to handle Allocate, with parameters, for the TPU cases, and I can also think of some other devices that need it, unless we find some other way. For example, a long time ago we mentioned the possibility of initializers, and maybe initializers could be pluggable, so it could be extensible: I can expect a normal Allocate callback or [inaudible].
L: Yeah, I agree Allocate is an important feature for the pod, and it's fine to make it a feature in the 1.8 release, I think. Just a couple of things: is it possible to make this Allocate callback optional? Because some devices don't really need a device-specific allocation operation, for example [inaudible] to reset the device before running on it. So people would have the option to choose whether they want to provide the Allocate interface or not. And in some situations, for example where you have [inaudible] and a UNIX socket to cooperate over, it's possible that it doesn't need to make this callback during allocation.
I: Actually, this is a good point; I want to ask one thing. So normally, [inaudible] when we bind and unbind devices: is it possible that, at the device's coming-up stage, we can already get those devices ready, so that maybe we don't need Allocate at all, but [inaudible]?
I: But, okay, okay, [inaudible]. So when you unbind the device [inaudible], can we do those at coming-up? In that case you don't need an actual allocation step in your allocation phase, but anyway you have to put that in reclaim, right? So anyway, even if we don't explicitly put an Allocate-only case in the phase, you actually intend to have that phase, but then...
G: It does come down to what we can do in 1.8. Honestly, we have five weeks, and we can't take a whole big PR and get it done; but I think we cannot compromise on quality either. If we just accept a huge PR, afterwards we're going to miss some issues, and then fixing those is a huge pain, as we have seen in the past.
G: Sorry, I guess I moved on; maybe I should expand on that. [inaudible] What I'm saying is that, in terms of the implementation of the API, that can happen after the things we are now saying we'll discuss offline are actually discussed, and we settle on all the specifics.
H: So I agree, but then what I wanted to actually understand is: if we go and implement Jiang's proposal, is it realistic to think that it is going to be quicker? Jiang's proposal is different from this one; there are a lot of different parts, it's not implemented, and there's no real way forward from it to this proposal, and there are a lot of things that we are actually going to have to discuss on that proposal too.
L: And so I think we only have, like, less than [a week of] community review left, and I don't really want to argue whether it's one way or the other. I'm okay to include the allocation phase in the initial proposal and the initial implementation; we just need to, like I said, put something down on the failure cases. I actually want to move to the second topic; I think it's more important.
L: That is: how we can recover from kubelet restarts, because the current implementation basically uses the node status and container data for checkpointing the allocation state, and I hope we can reach an agreement on that. Because, as I recall from our past discussions, we decided that we don't want to introduce too much API change between two releases, and the agreement was, in the 1.8 release, to use extended resources instead, to avoid that. So, in order to recover from kubelet failure, we can implement some local check...
H: Actually, I have just one other concern on this: your argument is that we want to break it down into multiple PRs and that we don't want to have a lot of code, but then you're actually adding checkpointing, which is probably more code, in my understanding. And I just think that, maybe I don't have a clear understanding of what it takes, what it really takes, to change the API, but my understanding is that we could just mark those features as alpha and then move to checkpointing in 1.9.
G: That's perfectly fine; I would much rather do it properly, once, over time, rather than do something now, get rid of that code, and then do something else. Like, I do not want to go delete the existing implementation for NVIDIA GPUs all of a sudden; it's not that people would be left stranded, but for our current users, even if we have this alpha feature, I don't think their life is going to significantly improve. So, given that user-side assumption, I would much rather avoid making that sort of change, but...
G: You won't get, like, production experience; but basically all I'm saying is: I don't want us to make decisions now that we just throw away. [inaudible] If you're going to do checkpointing in 1.9, you might as well do it now, and we have the people to do it right: we have at least two or three people working on this, so I don't see why we can't do it, but...
I: My statement here: Jiang actually is okay with introducing Allocate; they only asked about one possibility, making it optional rather than required, and they don't have a strong preference about it either. My understanding is that they are more concerned about the new API you introduce, trying to avoid checkpointed state, which is what your proposal is: you try to avoid the checkpoint on the local node, and you do the remote checkpoint through the API server.
I: So that's where Jiang's proposal comes in, because I have the same concern. And I actually like the change you put there; the only thing that is unclear to me is that you introduce that device struct, and I'm not sure the struct is abstract enough to cover other devices. I understand it covers your current GPU cases, but I'm not sure, so I want to just... I like that.
I: You have these container records, and you record which devices are bound to the container, and you have that information, which should help us improve debuggability and also auditing, all those kinds of things. [inaudible] I'm just not familiar with that device struct; it was [inaudible], so...
I: One thing I actually like about the one you proposed: it's nice to have the container stats carry some device information; that's what I already miss so much, I think. I'm just not comfortable adding the device struct at this moment, because it's really GPU-specific at this moment, but we do want that information for whatever the backend is. Can we just simplify that structure? Okay.
G: [We could add] a debug API in the kubelet that exposes the allocation state; I can link to this, you can see here [inaudible], so it's a kubelet API.
H: Let's focus; we only have a few minutes left. Let's focus on what the decisions are. My understanding is that we're all okay with an Allocate phase; it's just that we would like to get your notes. I'm fine with it, and I think it's a good thing. And at this point, how we implement checkpointing or failure recovery, whether it's checkpointing, or the API, or another method, I think can be discussed after, right?
G: As long as people on the call do not have opinions on that, I think it can be discussed offline, but note that we are all on the same page that it has to be done. So, to be clear, let's have a deadline for the proposal review: if anyone has any opinions on it, I think by next week [inaudible], and then we can focus the next four weeks on just implementation, but...
L: I don't particularly... I still have a couple of questions, like around device discovery and the like, [inaudible] but I guess we can discuss that over the comments, like the PR comments. Oh yeah, yeah, and hopefully [we can finalize] next week, because [inaudible] a single PR, like that is the plan going forward. That sounds okay.