From YouTube: Kubernetes SIG API Machinery 20180815
A
Hi everyone, welcome to the August 15th, 2018 API Machinery SIG meeting. I'm back from paternity leave. For the first half of this hour we're going to do a joint discussion with the multi-tenancy group. I see folks on the call. David, is everyone you were hoping to have here, here?
A
Great, let's get started. Mike, do you want to kick us off?
F
So yeah, I think this is not actually a new issue, so I'll try to represent what I understand the previous discussions to have covered as well. What launched me down this path is, as you guys know, I'm working on building a control plane. I consider my project to be building a control plane for something that is not Kubernetes, using Kubernetes API machinery; specifically, I'm focused on building the back end for a very traditional infrastructure-as-a-service sort of server system.
F
Fairness is a concern. Obviously we've got our own schedulers for the managed resources, so the issue is not whether our schedulers are fair. The issue I'm bringing to this particular forum is: is our control plane fair? That is, can one tenant, or some subset of the tenants, make the control plane so busy that it doesn't give a decent amount of attention to the other tenants?
F
So those are the basics of the problem. The previous discussion in the multi-tenancy working group was mostly centered on the long-standing issue of exactly how we define a tenant anyway. I see Craig here; he brought up one of the more challenging scenarios, which he's written up and posted to the multi-tenancy working group mailing list. I actually think, Craig, that your scenario can be handled by a fairly simple concept of multi-tenancy, as long as it allows controlled access between tenants.
F
Your situations won't work with total isolation, but if there can be controlled access, both in terms of users accessing the API objects and, in the data path, in terms of controlling what on the network is able to access what, then as long as the relevant parties can control access in the desired way, I think that can cover your scenarios. Basically it can.
F
Yeah, fair point, right, I should back up and be more comprehensive. We also discussed issues about protection against bugs or failures. So even in a single-tenant system there's a concern with something going wrong in a controller; for example, we don't want that controller overloading the API servers or the etcd servers. On that topic I think we centered on the idea of using rate limits.
F
Engineering rate limits protects against controllers that go bad, but rate limits are not really an adequate solution for fairness between tenants, because you don't simply divide the total capacity amongst the tenants and be done with it: you would have no possibility of multiplexing or anything. You want the rate limits set higher than one Nth of the capacity.
B
Yeah, I think we need to distinguish between protecting against denial of service and trying to enforce some quantifiable guarantee, which I think is a little different. The first one has a squishier and probably easier-to-implement definition, just making sure everybody gets some service, versus having a very sophisticated prioritization and allocation scheme where, if I pay twice as much, I can get twice as much of the scheduler CPU time, and I'm guaranteed scheduling QoS of five pods per second while you get ten pods per second.
A
I'm a little surprised that we're talking about fairness. That's not what I was expecting the conversation to be about, because that's not what I think of when I think of adding a tenant concept, and I think we can add fairness to our existing system without adding a tenant concept.
B
Well, just to be clear, a tenant concept was kind of an orthogonal discussion. It came up in discussing this because you have to define what the principals or entities are that you're isolating, but we can define pretty much anything as the isolation domain: we can say it's namespaces, or service accounts, or, you know...
C
You can do it by subject as well. Are you using that as an example of what you would like a rate limiter to have flexibility on? I guess I'm standing back and asking: what decisions are you trying to make, at what level, with what information? There are lots of different options for how you manage that, and obviously a webhook-based rate-limiting decision defeats the purpose of your rate-limiting decision.
F
Right, so I completely agree with David's comments. I guess different people read more or less into the word "fairness"; I was not trying to talk about something very precise. My concern is simply that, as a provider of infrastructure as a service, I really don't want a busy subset of my customers locking out, more or less, the rest of my customers. So it's more like a denial-of-service issue, and not even a deliberate one, just accidental.
F
The problem, the connection, is that, as was mentioned, we need to have a definition of the entities that we're providing fairness between, or preventing denial of service between. In particular, I don't have a really strong opinion, but what I need is that these entities line up with my customers, more or less, in some useful way. It seems to me that namespace is the most obvious candidate here, but it doesn't necessarily have to be namespace at the end of the day.
C
I don't think that would be sufficient. If you were trying to add a rate limit for subjects to the API server, it allows you to express one dimension of the things you are operating on. But if we look at the choices we made when we very first built authorization, we looked at what information is cheap on a request, and then at how we can describe a policy from that.
A
The client can't send requests faster than that limit, so the queue develops in the client. All the requests are still queued; they're sent eventually, at the rate limit. Yes, but if the queue allows, like, if somebody is generating events at a far larger rate...
A
My broader point was that we also have a problem with our controllers, maybe not your controllers, Mike, but with our controllers: eventually we're going to get clusters that are big enough that the controllers don't all fit on one machine, and we have no sharding strategy at the moment. Right, I think there is a natural sharding strategy, which is to assign particular namespaces to particular shards of controllers running the same thing, and we could potentially also shard by, I'm not sure if service account makes sense.
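The namespace-based sharding idea in this exchange might be sketched like so. This is purely illustrative (there is no such mechanism in Kubernetes, which is the point being made), and the shard count and namespace names are made up.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor deterministically assigns a namespace to one of n controller
// shards by hashing the namespace name. Every shard would run the same
// controllers but only watch the namespaces assigned to it.
func shardFor(namespace string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(namespace))
	return h.Sum32() % n
}

func main() {
	for _, ns := range []string{"default", "kube-system", "tenant-a", "tenant-b"} {
		fmt.Printf("%-12s -> controller shard %d\n", ns, shardFor(ns, 4))
	}
}
```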
B
Even without sharding, I thought your point was that we would have to make our existing controllers aware of tenants and do this kind of internal fairness in order for the overall system to have that property, even if they all fit on a single node. Right? When we talk about control plane fairness, it's not just about the API server; we also have to...
B
I'm not sure this is a practical strategy. Think about a software-as-a-service provider who's supporting, say, a thousand clients. They might not all be active at the same time, but they're all running on a cluster simultaneously. You're not going to run a thousand copies of the controller manager and a thousand copies of the scheduler; I think the practical strategy is to...
B
A rate limit may not be the right concept, because it doesn't give you an easy way to oversubscribe. If you have two users in the cluster, you don't really want to limit them each to 50%, because you want something that's, I forget, there's some fancy term, "work conserving" or something: you want something so that somebody can use the slack resources when the other one isn't.
F
I think that's still too crude. David was right; the word is "work conserving," and it's a concept in queueing systems, where you say: you want your controls to not waste anything, so that the guys who are asking for work are able to use up the system even if there's a bunch of guys not asking for any work.
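A toy version of a work-conserving dispatcher, to make the contrast with a fixed 1/N split concrete. Everything here (tenant names, queue contents, the budget of six requests per round) is invented for illustration: idle tenants' capacity is redistributed to busy ones instead of being wasted.

```go
package main

import "fmt"

// serve is a work-conserving round-robin dispatcher: capacity (budget)
// is never left idle while any tenant still has pending requests,
// unlike a fixed per-tenant rate split.
func serve(queues map[string][]string, budget int) []string {
	order := []string{"tenant-a", "tenant-b", "tenant-c"}
	var served []string
	for budget > 0 {
		progress := false
		for _, t := range order {
			if budget == 0 {
				break
			}
			if q := queues[t]; len(q) > 0 {
				served = append(served, q[0])
				queues[t] = q[1:]
				budget--
				progress = true
			}
		}
		if !progress {
			break // all queues empty, nothing left to serve
		}
	}
	return served
}

func main() {
	// tenant-b is idle; tenants a and c absorb its share of the budget.
	queues := map[string][]string{
		"tenant-a": {"a1", "a2", "a3"},
		"tenant-b": {},
		"tenant-c": {"c1", "c2", "c3"},
	}
	fmt.Println(serve(queues, 6)) // prints [a1 c1 a2 c2 a3 c3]
}
```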
F
Sorry, "oversubscribing" might not be the right word; I think "multiplexing" is more like the right word. The work-conserving concept is: you've got a cluster with a certain capacity, and if you've got a thousand customers but only 300 of them are busy right now, you want to let those 300 use all your resources. Yeah, okay, that makes lots of sense.
A
Brian has very old issues about this, but we've never found time for somebody to go out and do a design or actually start implementation. So I don't think anyone is opposed to the concept. I mean, due to all the stuff that we mentioned here, it is tricky to get right, and there will probably be a vigorous discussion on making a design, but I think actually this is...
F
To say that, yes, so that would be me and the other David, which is hard, right? Yeah, okay, great, so I'll see you three, and we'll start arguing amongst ourselves and we'll proceed from there. Yeah, I think that, I think...
C
So Marko, at the beginning of the summer, started a Google Summer of Code project where he was looking at the challenges that you face when trying to write an aggregated API server and provide it as a unit for cluster admins to install into their cluster. He targeted how you manage storage for your API server in a way that you don't have to have ever-expanding buckets of storage that need management, and I guess from there I will...
D
Can you see it? No? Yes? Okay. So hi there, I'm Marko, a Summer of Code student; my mentors are David and Stefan, who is not here today. I was working over the past few months on the etcd proxy controller, which represents a solution to the storage problems of aggregated API servers.
D
If you want to get started with an aggregated API server, you need to point it at an etcd cluster. You can't reuse the existing cluster, because there's a high risk of mutating each other's data, especially if you have multiple API servers, or if you want to use the etcd cluster that Kubernetes uses. In this project we are using the proxy feature of etcd to create a proxy for the etcd cluster. That proxy maps to a namespace, or cluster prefix: it prefixes put, get, and watch keys, for example.
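The effect of that prefixing can be sketched with a toy key/value wrapper. The real mechanism is etcd's grpc-proxy namespacing, not this in-memory map, and the prefixes and key names below are invented.

```go
package main

import "fmt"

// prefixKV sketches what etcd proxy namespacing does for an aggregated
// API server: every key the client reads or writes is transparently
// confined under a per-server prefix, so two API servers sharing one
// etcd cluster cannot touch each other's data.
type prefixKV struct {
	prefix string
	store  map[string]string // stands in for the shared backing etcd
}

func (kv prefixKV) put(key, val string) { kv.store[kv.prefix+key] = val }

func (kv prefixKV) get(key string) (string, bool) {
	v, ok := kv.store[kv.prefix+key]
	return v, ok
}

func main() {
	backing := map[string]string{}
	a := prefixKV{prefix: "/sample-apiserver/", store: backing}
	b := prefixKV{prefix: "/other-apiserver/", store: backing}

	a.put("widgets/default/my-widget", "{}")
	if _, ok := b.get("widgets/default/my-widget"); !ok {
		fmt.Println("b cannot see a's key") // prefixes isolate the two servers
	}
}
```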
D
So what is happening here is that with this approach you're not running multiple etcd clusters; you can use one, or even the main one that is used for Kubernetes. That means you don't need to manage things like certificate rotation for multiple clusters, and you can share the same backup strategy for the one cluster. It also makes administration of the backing etcd cluster easier. Today we are going to show how this works, and I'm going to demo it.
D
So here right now I have a local cluster running that we are going to use for the demo, with the etcd proxy controller deployed in its namespace. What we are going to do right now is deploy the OpenShift service serving cert signer. It is a project from the OpenShift team that helps provision the certificates for aggregated API services, so you don't have to do it yourself, but it is an OpenShift project.
D
It doesn't have dependencies on OpenShift, though, and you can use it on a classic Kubernetes cluster. So we are going to deploy the RBAC rules for that, here, and then under that you can deploy the cert controller that is going to handle these certificates for our aggregated API servers.
D
Okay, at this point we can deploy our controller. Before it is running: the namespace is called kube-apiserver-storage; this is configurable by providing a flag to the controller, but let's see, this is fine. Before we can use it and create EtcdStorage resources, we need to deploy the certificates for the core etcd. Those certificates are the serving CA certificate and the client certificate and key pair, and we do that by deploying manifests such as this one. That's fine, okay.
C
So, just so that everyone's clear, this is the spot where a cluster admin would end up having to integrate with whichever single etcd cluster he wanted to work with, right? This is a custom-fitting sort of piece that needs to be done by the cluster admin to make use of this, and for the purposes of the demo it's hard-coded, right? Yes.
D
Okay, with this one we are deploying a namespace and a service account, but also two ConfigMaps and a Secret. The idea is that the etcd proxy controller handles certificates fully automatically; that includes certificate generation as well as certificate distribution. How that works: when we create the EtcdStorage resource that represents a new etcd proxy, the certificates are generated. The ConfigMaps and Secrets are empty in the beginning; when the EtcdStorage is created and is ready to be used, and the certificates are generated, the controller updates the ConfigMap and the Secret.
D
Okay, here we deploy the RBAC rules for the aggregated API server, and beside that we deploy the RBAC rules for the etcd proxy service account to modify the ConfigMaps and the Secrets; you can define that it can only modify the ones to be created. So right now we're going to reconcile those roles, and that's it. Now that we have this ready, we are going to deploy the EtcdStorage and the aggregated API server. So let's take a quick look at this manifest.
D
This is just before we deploy the aggregated API server. What we have is the EtcdStorage resource: when we create an EtcdStorage resource, it starts the proxy. The cluster is configured to work with the etcd proxy controller, which creates a deployment with, let's say, the etcd proxy pods. Each pod creates a proxy to the etcd cluster, and it uses a namespace which is named the same as the EtcdStorage resource. Kubernetes will take care that we can create only one EtcdStorage with a given name, so nobody will be able to create an EtcdStorage on top of an existing one; that is one more security point here. In the EtcdStorage spec we need to provide the name of the ConfigMap where the serving CA certificate will be stored, that's the ConfigMap we created earlier in the API server namespace, and also the name and namespace of the Secret we created earlier for storing the client certificate and key pair.
D
Currently it requires the operator to restart the API server manually, because there is no mechanism in Kubernetes for this, but it is a known problem and a solution for API servers should come in some time. Okay, now that we have this ready, we're ready to deploy our API server. This is how it looks: it is a simple deployment manifest, but what's important, you mount the ConfigMap we created earlier and the Secret, so the pod can obtain the needed certificates. For the rest, we are specifying the etcd server...
D
When the EtcdStorage is created, it exposes the etcd proxy over a service. The service gets a name such as "etcd" plus the name of the EtcdStorage resource, and its namespace is the etcd proxy controller's namespace. Once the etcd proxy is created, you point your API server's storage at it, depending on your cluster's service DNS and port. Then we mount the serving CA for the EtcdStorage and the client certificate and key pair; these are generated and provided by our controller in the ConfigMap and Secret we created earlier and mounted here.
C
Are you asking about how it's done for the backing etcd that the etcd proxy pod connects to? Yeah, it's using a list of valid names; this work here hasn't exploded it out into the individual ones, although my memory of the etcd client is that it actually attempts to locate the others, maybe through a public peer listing. I see.
A
It might be interesting: our Services have this feature where, if you set clusterIP to "None" (capitalized), then instead of giving you a cluster IP, the DNS system publishes an A record for each pod that backs the service. So if you only have, like, three backing pods or something, that's a great way to tell the client about all three of them. Yes.
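For reference, the headless Service being described would look roughly like this; the name, selector label, and port are illustrative, not taken from the demo.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd-proxy
spec:
  clusterIP: None        # headless: DNS publishes one A record per ready pod
  selector:
    app: etcd-proxy      # hypothetical label on the proxy pods
  ports:
  - port: 2379
    targetPort: 2379
```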
F
Yeah, but that's the effect; it's not what's happening here, right? If I understand correctly, what he's showing us right now in this command-line flag is a reference to a domain name that the sample API server is going to look up, just one A record from this guy, and that's going to be backed by just the one pod, the proxy pod.
D
Because we are using the namespacing feature of the etcd proxy: when we start the etcd proxy, then, for example, when you write to it you write the key, but for the etcd in the cluster the key is prefixed, and you can only access keys that have that prefix. So you can't really access a key that doesn't have it, and it is secured from that side: no other API server can access another's data, and the main Kubernetes data can't be accessed through it either.
A
So as long as there's no path-traversal bug in the proxy, it should be safe from that perspective. But, following up on our earlier question, what about fairness? Can one client send so many etcd requests that the other clients can't get a word in? Yeah.
D
Okay, so let's continue. With the certificates in place, the API server will use them to securely access etcd and to securely serve itself. Also, we are going to deploy a Service exposing the API server, and an APIService object to register the aggregated API server with Kubernetes. So what we can do is deploy it to the cluster and see the magic.
D
It's in the etcd proxy controller namespace. Okay, we have a new deployment whose name includes the name of the EtcdStorage resource, and it also created the three pods of the etcd proxy. Let's see if it's working in the API server. It is running; here we have only one or two resources, but you can see it works.
D
It works. So this aggregated API server, while it's running, uses an etcd cluster that is behind the etcd proxy. That means you can add as many aggregated API servers as you want that will use the same etcd cluster, so only one cluster is needed. This could also make it possible to use the main etcd cluster used by Kubernetes, if you really want that, but having a dedicated etcd cluster for the aggregated API servers is better for security and easier to access.
D
What is also important to mention here is that those manifests are mostly static. So here, for example, the deployment of the API server is fully static; you can easily move it to another cluster whenever you need. Then all you need to do is deploy the etcd proxy controller, configure it to use the correct etcd cluster for your cluster, and deploy the aggregated API server; the controller will create the etcd proxy for it and you're ready to use them.