From YouTube: GMT 2018-04-17 API WG
Description
No description was provided for this meeting.
B
That was finishing this, which I found one, but I think what's in.
B
We already have that today, so the scope of this work is hoping to extend a little bit further, but to focus on the CPUs first. Now let me describe it. I think everyone attending this meeting right now has a pretty good grasp on how we use the CPU cgroup to isolate CPU time, but I'll just give a very quick overview. By default, we use CPU shares to guarantee the amount of CPU runtime allocated to each container.
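As a rough sketch of the mechanism being described here, CPU shares are a relative weight on the cgroups v1 cpu controller. The constant below mirrors the conventional 1024-shares-per-CPU mapping; the helper name is ours, not Mesos code:

```python
# Sketch of how a fractional CPU request maps onto cgroups v1 cpu.shares.
# By convention, 1024 shares correspond to one CPU's worth of weight.
SHARES_PER_CPU = 1024

def cpu_shares(cpus: float) -> int:
    """Relative weight used when containers contend for CPU time."""
    # The kernel enforces a floor of 2 shares.
    return max(2, int(cpus * SHARES_PER_CPU))

print(cpu_shares(0.5))   # 512
print(cpu_shares(2.0))   # 2048
```

Shares only matter under contention: a container can consume idle CPU beyond its weight, which is exactly why shares alone give a guarantee but not a limit.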
B
For the longest time we kind of rationalized the behavior from a service-centric point of view, which is: as a user, or as the application owner, you would like to model the amount of CPU that's allocated to each instance, so that in aggregate you can say, okay, I have 100 instances, and that'll be sufficient for me to host some number of overall requests per second.
B
And so forth, and you use that to justify each instance. So if each instance's performance is somewhat predictable, then you'll be able to base the calculation on that. But if each instance's capability varies depending on how many neighbors it has, then it becomes very unpredictable. So far Mesos has traditionally been used primarily for services.
B
From that kind of perspective it kind of makes sense, but as we try to utilize Mesos for a multi-tenant environment that accommodates both services and batch-type workloads, it changes, because batch is really a greedy kind of workload, where you would prefer to give as much CPU runtime to your batch tasks as possible. So different tasks will have different preferences in this case. With CPU being the most obvious compressible kind of resource, you naturally would think:
B
Okay, how can we enable both and have tasks opt in for one quality-of-service class versus another? So this is the proposal. By utilizing CPU shares and CFS quota we can already achieve that, so it's more of an API design thing at this point, I think, than a containerization thing, because the mechanism is already there. Let me use the terms from Kubernetes to describe what we wanted to model: basically, in Kubernetes you have three classes of QoS.
B
I can move over to the API proposal. There are obviously two options to take. One is to model it the same way that Kubernetes models it, which is to give a resource guarantee and a resource limit. The catch here is that, I think, with Kubernetes it's pretty simple: the resources are basically just numbers. Our resource types have a pretty rich set of metadata in there: there's role, there's type, there's allocation versus reservation. So if you look at this proposed additional field...
B
There I can sort of come up with problems that I can see could arise, which is the question of: how do you provide another field that specifies the resources, but make it obvious that you should put only quantities in there, or only the limits? Or, for example, say I have my resources, I have my guarantees for CPU, memory, and disk, and I only want to set limits for my CPU. How do you express that?
B
Should you list all three, but have resources equal limits for memory and disk, and only have different numbers for CPU? How do you do that? And another insight that I got from Kubernetes is that in their design doc they kind of already admitted that the rules they set up to infer the QoS class from the resource requests and limits are pretty subtle. So basically, Guaranteed means all of the resources have matching requests and limits, and if it doesn't match that...
B
So yeah, and then if it doesn't qualify for that, but you also have requests, then you're in the second category, which is Burstable. And if you don't qualify for that, meaning you're not setting requests, then you're in the third category, which is BestEffort. So the rules are not that straightforward. In addition to that, Kubernetes has this Burstable behavior for incompressible resources, and in this case right now only memory is supported.
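The inference rules being summarized can be sketched roughly as follows. This is a simplified illustration, not the actual Kubernetes types; in particular, real Kubernetes also defaults a missing request to the limit, among other subtleties:

```python
# Sketch of the Kubernetes QoS-class inference rules described above.
# `resources` maps resource name -> {"request": x, "limit": y}.

def qos_class(resources: dict) -> str:
    requests = {r: v.get("request") for r, v in resources.items()
                if v.get("request") is not None}
    limits = {r: v.get("limit") for r, v in resources.items()
              if v.get("limit") is not None}
    # Guaranteed: every resource has a limit, and every request matches it.
    if limits and requests == limits:
        return "Guaranteed"
    # Burstable: not Guaranteed, but at least one request is set.
    if requests:
        return "Burstable"
    # BestEffort: no requests at all.
    return "BestEffort"
```

For example, `{"cpu": {"request": 1, "limit": 2}}` comes out Burstable, while matching requests and limits on every resource come out Guaranteed, which is exactly the subtlety the design doc admits to.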
B
So if you're using revocable resources, you should be prepared to be killed, and, as I will mention later, maybe there's another kind of killing, which is being preempted, but that's kind of a separate feature. At least when preemption is not happening, Mesos, under current circumstances, wouldn't kill you. And I think if I were to design something that's Burstable, my purpose would be that I don't want...
B
So if my batch jobs utilize more resources than their minimum guarantee, I don't want them to be killed. So that's how I think about the Kubernetes behavior. And the second option is, okay, how about we just define the QoS class explicitly? In this world you would have a QoS class, Guaranteed, Burstable, and potentially also BestEffort, within the task info, and you specify the class you want your task to be seen with.
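The shape of that second option might look something like this. This is a hypothetical sketch; the field and enum names are illustrative and not the actual Mesos protobuf:

```python
from enum import Enum

# Hypothetical enum mirroring the proposed explicit QoS field on TaskInfo.
class QoSClass(Enum):
    GUARANTEED = 1   # hard-capped at the requested amount, never throttled below it
    BURSTABLE = 2    # may consume idle CPU beyond the request
    BEST_EFFORT = 3  # runs on spare/revocable capacity only

# A task states its intent with a single field instead of
# having the class inferred from request/limit arithmetic.
task_info = {
    "name": "batch-task",
    "resources": {"cpus": 2, "mem": 4096},
    "qos_class": QoSClass.BURSTABLE,
}
```

The point of the single field is exactly what the speaker says next: the intent is direct and explicit, rather than inferred from subtle rules.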
B
So that is the intent, and the intent is very direct, in my opinion. And also, that's just the intent: it doesn't mean that you will always get Guaranteed or Burstable behavior. What Guaranteed guards is sort of certain, because that's the basic Mesos guarantee, but for Burstable, depending on your resources, you may not be able to burst at all; still, that's your intent. Anyway, I think this kind of API allows you to use just one field to explicitly set your intent.
B
So actually the second option is the one I'm preferring here. I'd like to solicit feedback, if not right here then maybe later, when I publish the doc. A little bit of a concern, maybe, is that right now we're just going to do CPU, so... but I think this is so that a task can say, okay, for my task...
B
Yeah, I think, given the time constraint, we can just leave it at that for this part of the API. Just one thing that occurred to me: with the priority tiers design, we're going to have to model, okay, when is it possible for a task that's supposedly in the Guaranteed QoS class to be preempted? My understanding is that these two actually don't conflict. So one is...
B
So yeah, that's the summary. I kind of wanted to get a sense of how people think of this feature overall, and of the direction in choosing between the two, and whether people have something else in mind. Then I will certainly extend the doc to think about all of the edge cases, implementation, and transition stories, all of that.
A
One thing that comes to mind is that, when we're comparing to Kubernetes and their implementation... I don't know about this side of Kubernetes too much, but I think one additional dimension we have in Mesos is the configuration of the isolators on a particular agent. So I think the behavior that the user observes will depend on the isolators that have been loaded on that agent, right? Sure, so, like in the case you mentioned, the memory case...
A

B
Not to? Maybe it's possible, yeah, but in that world I think it will mean the memory is not isolated. So no matter what you set, everybody is just contending for memory resources, and the kernel, for example in its OOM logic, in its reclamation logic, would have to treat these processes as if they were just normal system processes. Yeah.
A

B
So right now, I think, when you're using resources... well, it's not written down, I don't think, but I guess people on this call can probably provide an opinion. I think the current semantics is really: what's guaranteed is what's allocated to you, it's offered to you and you choose to use it. So it's a guarantee, but the fact that different agents could have different...
B
...isolator configurations is, in effect: okay, that's my promise, but how much I can fulfill that promise, how stringent I want to be about that promise, is up to the operator to configure. So I don't think that changes the semantics in the API. In terms of my task info specifying one gig of RAM...
B
...I think it still means I should be guaranteed the one gig of RAM. And I take your point: okay, that was somewhat implicit, and now, if I choose a Guaranteed QoS class and I don't have an isolator to do that, then it kind of is not guaranteed, right? I guess that's what you're saying, yeah.
A

D

E

B
So yeah, I think I would agree. Like, for an operators' guide, you should probably say that, yes, in this cluster, if these types of resources are configured, you should isolate them. If you don't, then the Mesos semantics are not guaranteed, and you do that at your own peril, right? So that doesn't change how Mesos is supposed to work, right? Well...
A
I
think
jun-hyung
is,
if
I'm,
correct,
chun-hyang,
suggesting
that
we
could
have
the
agent
actually
fail,
for
example,
a
task
that
attempts
to
explicitly
limit
memory.
If
a
suitable
memory
isolator
is
not
installed
on
the
agent
I
mean
that's
an
interesting
idea.
We
would
have
to
have
some
way
like
I,
don't
either
we
would
just
have
some
hard-coded
set
of
Isolators
that
work
for
particular
purpose
or
an
isolator
could
like
advertise
what
limits
it
provides
in
a
systematic.
D
...way. I imagine that we could probably pass some metadata that relates these constraints to the isolators, and then decide whether the task can run or not. Well, yeah, I haven't been thinking about details yet, but it seems to me that we can declare that the QoS classes have these behaviors, and if the agent cannot fulfill these behaviors, you should fail the tasks and tell the user that, oh, this cannot be done. It's not doable.
B
If they fail only when the guarantee is not matched, it may be unfortunate timing or something. I can see the point there. But also, I've been hearing conversations about, okay, maybe agents should provide their isolation configuration to the scheduler, so that the scheduler would know what kind of isolation is on a host and choose offers accordingly. I don't think anyone has proposed anything in more detail, but I think that would solve it in a reverse way, right? So...
B
A variation of that, if not the list of isolator names, would be maybe some capabilities and so forth provided there. Yeah, I can think of cases where that would be useful, but I think it's a little bit orthogonal to this proposal here, because right now, as a user, I really think that when I specify the resources in the task info, it's really something...
B
I
I
think
the
cluster
is
supposedly
gonna
guarantee
for
me
and
if,
if
not
also,
for
example,
before
the
port
isolator,
that
James
recently
did
I
mean
I
can
have
I
can
claim
this
port
and
somebody
else
it
could
could
just
use
it.
I
will
I
will
fail,
but
what
I
you
see,
what
I'm
saying
it
so
that
will
solve
that.
B

A
Yeah, I think that may be the main difference for me: adding something like this to the API would make the limiting explicit, so it seems a little more significant if the agent couldn't satisfy it. I could imagine us implementing it either way, either just leaving it up to the operator to configure the agents appropriately, or having logic which will fail tasks that an agent can't limit appropriately. I'm not sure immediately which seems more suitable to me, but it seems like a high-level question that is worth considering.
C

B

C

A

C

B
No, the owner of the task being okay with being allocated more resources than in the task info... I was just describing the case where the task owner is saying it's not okay, because I want to model my CPU or other resource consumption more precisely. So if I want my distributed application to serve more traffic, I would rather increase my instance count from 100 to 200 than unpredictably have more CPUs per instance.
D
So
sorry,
so
when
you
say
using
different
keywords,
classes,
we
may
want
you
out
a
resource
based
on
a
different
classes
like
for
guaranteed.
We
allocate
the
tasks
for
what
they
like,
so
I
can't
get
lost.
You
mean
like
say
for
first
or
what,
which
one
like
for.
First
of
all,
where
that
image
margin
guarantee,
you
should
allocate
their
allocated
with
the
limit,
read
and
guarantee
I.
Don't.
B
Right, so if I have launched a task onto an agent with 32 cores and I specified cpus equals one, in a Guaranteed class I will only use one core, even though the other 31 cores are totally idle. But perhaps I want that. So that's the Guaranteed semantics, and that's actually what the original agent flag provides, and Twitter uses it, and...
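The hard cap in that 32-core example comes from CFS bandwidth control, which can be sketched the same way as the shares mapping earlier. The constant is the standard default period; the helper name is illustrative, not Mesos code:

```python
# Sketch of CFS quota: capping a task at exactly its requested CPUs,
# regardless of how many idle cores the agent has.
CFS_PERIOD_US = 100_000  # default cfs_period_us (100 ms)

def cfs_quota_us(cpus: float) -> int:
    """Max runtime per period; cpus=1 means quota equals one full period."""
    return int(cpus * CFS_PERIOD_US)

# cpus=1 on a 32-core agent: throttled to one core's worth of time
# per period, leaving the other 31 cores untouched even if idle.
print(cfs_quota_us(1.0))  # 100000
```

Shares without quota give the Burstable behavior; shares plus quota give this Guaranteed, hard-capped behavior.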
B
It's
the
best
effort
will,
in
our
world,
be
equivalent
to
revocable
resources
so-
and
it's
not
100
percent
clear
to
me
how
we
want
to
consult,
reconcile
the
two
or
consolidate
the
two.
Should
we
have
the
genome
here
and
translate
revocable
resources
to
the
scene
on
so
after
that
you
will
be
able
to
refer
to
the
type
kuis
class,
or
should
we
not
have
best
effort
here
and
solely
use
revocable
resources,
but
that
would
be
equivalent
of
using
revocable
resources.
B

D

B

D

B

A

B
Probably, but the challenge, as I was alluding to when I was describing it, and I kind of glossed over this on option one, is: what do you put there? Like I'm saying, the resources in our world are a class with tons of metadata. Should you just use a quantity there? What should you use there? It's...
F

B

F
Like, that's possible with the Kubernetes API, but it's not possible with QoS classes unless you introduce some kind of... or maybe that's what you have with Burstable, I'm not sure: you have the category where you can burst up to something with no limit, but it just doesn't have the category where I can burst, but only up to a finite amount. Is that the limitation of that approach? Yes.
F
F
F
F
Well,
I
might
I,
don't
know.
I
mean
I
might
say
that
if
I'm
charging
based
on
consumption
or
if
they're,
paying
ahead
of
time
for
how
much
they
can
consume,
if
I
have
a
model
where
I
charge
them
based
on
what
they
use
after
the
fact
should
be
fine
right
because
they
used
a
lot
of
CPUs
they
get
charged
and
they
pay
for
it
right.
If
I'm
out
front,
saying:
okay
well,
you
guys
have
paid
to
get
like
access
to
100
cores
50
of
those
are
like
bursted,
so
they're
cheaper,
but
I'm,
not
gonna.
F

C

F

A

F
The first thing is basically: what's the limit on the amount of guaranteed CPUs you're going to get in the cluster? And the second thing is: what's the limit on the total amount of CPUs you could consume in the cluster? So it's like a two-tiered quota, based on the guarantees and the limits of a pod.
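That two-tiered quota can be illustrated with a small admission check. This is our sketch of the idea as stated, not an existing Mesos or Kubernetes API:

```python
# Sketch of a two-tier quota: one cap on the sum of guarantees (requests)
# and a separate, larger cap on the sum of limits (total possible consumption).

def admits(pods, guaranteed_cap, limit_cap):
    total_requests = sum(p["request"] for p in pods)
    total_limits = sum(p["limit"] for p in pods)
    return total_requests <= guaranteed_cap and total_limits <= limit_cap

pods = [{"request": 10, "limit": 20}, {"request": 30, "limit": 30}]
print(admits(pods, guaranteed_cap=50, limit_cap=100))  # True: 40 <= 50, 50 <= 100
```

The first tier bounds what the cluster must always be able to deliver; the second bounds the worst-case burst, which is what a consumption-based chargeback model would care about.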
F

A

F

A

F

A

F

B
Right, so I've been thinking about Ben's and James's points on, you know, the bill. In that case, wouldn't the guarantees be what you'd be concerned about? Because ultimately it depends on the chargeback model, right? So if you want to limit your bill, can't you just choose the Guaranteed mode?
F

C
One of the ways we use the Burstable CPU at the moment is as a, quote, poor approximation to vertical scaling, and you can think of it like that, as vertical or auto scaling. So you deploy something which is idle most of the time, but then has, you know, lunchtime traffic or breakfast traffic or whatever you have, and in the absence of any sort of auto-scaling controller, Burstable CPU does the job, right?
F

C
I think this causes at least two classes of service, because there are customer-facing production services, and there is the random web service thing, okay, that teams are using internally. Let's say that the auto-scaling behavior makes more sense for the latter class than for the former class, I think.
F

C
And I do like the simplicity of modeling it as a generic QoS class. Option two feels like basically bumping the existing Mesos flag from an agent level to a task level, and that feels pretty simple and pretty understandable, and, you know, I can see a clear path to getting that worked on and deployed, yeah.
C

B
So from what I've gathered, the two options are actually not mutually exclusive. If we start with the second model, and that's probably where more discussion is needed, we can extend from option two to include option one, but I imagine at that point we'd probably need to plug that into the whole cluster-wide quota and allocation arithmetic, yeah.
F
I
think
you
more
I
think
yeah
I
think
for
the
first
approach.
There's
probably
some
refinement
needed
as
well
like.
Maybe
what
we
put
in
is
specifically
like
CPU
limit
and
memory
limit
things
that
we
know
can
have
an
amount.
That's
larger,
like
we
wouldn't
care
like
there's.
No
port
limits,
there's
no
like
volume
limits
right,
so
we
we
might
actually
want
to
start
with
a
restricted
model
there.
If
we
were
to
go
with
that,
rather
than
trying
to
make
it
really
generic
and
just.
A

C

F

B

F
Also, how you set up the isolators and everything, I guess it would be nice to validate that, ideally. But what I was going to say, to Greg's point: I do agree it would be nice to keep in mind that at some point we're probably going to want to be able to have an operator control how much total consumption something could have, at least for the billing use case you talked about, and possibly for some other reasons.
F
And
that's
that's
the
one
piece
of
the
qs1
that
I'm
a
little
less
clear
on
how
that
would
work
and
I
I
guess
the
other
thing
that
probably
isn't
clear
to
people,
including
myself,
is
and
Ian
mentioned
this.
The
interaction
with
revocable
resources
like
I,
didn't
I,
didn't
actually
think
that
best
effort
was
revocable
because
we're
not
going
to
destroy
the
container
I
assume
it's
like
a
there's,
different
kinds
of
revocation
I.
Guess:
there's
like
a
temporary
kind.
F

B

F

B

F

B

F

A

G

A
Yeah, yeah, I apologize, I guess, that we did not get a chance to cover those items. You can see you got the emails, and I also included a link in the agenda to a particular review that I thought might be of interest, where we introduced some new operation states that will be used for reconciliation, yeah.