From YouTube: Kubernetes SIG Multicluster 20180911
B: Great, hi guys, nice to meet all of you. I'm John Murray, a product manager working on Kubernetes here at Google, and in particular I've been working on something we call Google Kubernetes Engine policy management, alongside my colleague Ray Colleen, who's here and who's the engineering lead on the project. It's something we've designed to deal with some of the issues that our customers have run into with the federation of policies, particularly around access control and quota management.
B: So, let's see... okay, I'll just go through some background really quickly to give you some context. This is the problem that we see with a number of people who are using Kubernetes today when they've got multiple clusters, so these gray boxes on the right would be the clusters, and then they have multiple teams.
B
So
what
it
is
is
a
way
to
centrally
define
policies
and
kubernetes
in
a
single
central
source
of
truth.
The
one
that
we're
going
to
talk
about
here
today
is
a
git
repository,
but
we've
designed
it
in
such
a
way
that
you
know
whatever
system
that
can
store
the
the
definitions
of
those
policies.
We
should
be
able
to
support
it
as
a
central
source
of
truth.
As
long
as
the
clusters
can
reach.
B
With
it,
it
applies
across
multiple
clusters
and
so
there's
kind
of
a
hub-and-spoke
model
where
you've
got
the
the
central
policy
stored
and
then
all
the
clusters
talk
to
that.
We
use
namespaces
for
tenants
and
right
now,
it's
used
to
manage
role
based
access
control,
so
role
by
names
and
roles,
cluster
role,
bindings,
cluster
roles,
namespaces
and
resource
quota,
but
we're
in
the
process
of
kind
of
broadening
it
out
so
that
it
will
essentially
handle
any
kubernetes
resource
that
people
want
to
federate
out
into
multiple
clusters.
B
So
right
now
we
supports
two
sources
of
truth:
get
or
GCP.
So
for
customers
who
are
Google
cloud
customers,
we
support
calm,
syncing
their
policies
over
from
cloud,
but
if
you're,
just
a
Cabrini's
customer
that
doesn't
have
any
relationship
with
Google
cloud,
the
get
version
works
without
any
kind
of
you
know
face
with
Google.
B
The
one
of
the
advantages
we
have
by
using
git
is
that
we
can
look
at
this.
These
policies
is
cooked,
and
this
is
something
where
we're
using
the
git
workflow
to
manage
kind
of
the
workflow
around
policy
updates
for
administrators,
and
so
in
the
same
way
you
would
come
manage
a
change
to
code.
You
branch,
the
policy,
you
can
run
pre-commit
validation
on
it,
so
we've
included
in
policy
management,
a
validator
that
looks
at
these
policies
and
looks
at
them.
You
know
not
only
syntactically
but
ensures
that
they
make
sense
in
relation.
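The kind of cross-policy check described above can be sketched in a few lines. This is not the actual GKE policy management validator; the function name, data shapes, and the specific check (every RoleBinding's roleRef must resolve to a Role somewhere in the repo) are illustrative assumptions.

```python
# Hypothetical sketch of pre-commit policy validation: beyond syntax,
# check that policies make sense in relation to one another, e.g. a
# RoleBinding must reference a Role that is actually defined in the repo.

def validate(policies):
    """policies: list of dicts parsed from the repo's YAML files."""
    errors = []
    role_names = {p["metadata"]["name"] for p in policies
                  if p.get("kind") == "Role" and "metadata" in p}
    for p in policies:
        if "kind" not in p or "metadata" not in p:
            errors.append(f"malformed object: {p!r}")
            continue
        if p["kind"] == "RoleBinding":
            ref = p.get("roleRef", {}).get("name")
            if ref not in role_names:
                errors.append(f"RoleBinding {p['metadata']['name']} "
                              f"references unknown Role {ref!r}")
    return errors

policies = [
    {"kind": "Role", "metadata": {"name": "pod-creator"}},
    {"kind": "RoleBinding", "metadata": {"name": "alice-pods"},
     "roleRef": {"name": "pod-creator"}},
    {"kind": "RoleBinding", "metadata": {"name": "bob-jobs"},
     "roleRef": {"name": "job-creator"}},  # no such Role in the repo
]
print(validate(policies))
```

Run as a pre-commit hook or in CI, a check like this rejects the change before it ever reaches a cluster.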
B: This flow also supports code reviews, so multiple administrators can look at the same change and sign off on it, and then it gets merged back into the code base, at which point it's automatically deployed out to all the clusters. So we've taken some of the workflow that's inherent in source control and tried to apply it to configuration, in particular the policy configurations.
B
So
what
that
looks
like
can
get
is
a
hierarchy,
that's
based
on
the
folder
structure,
and
so
that
hierarchy
allows
you
to
attach
policies
at
higher
levels
and
have
them
inherit
down
to
the
leaf
nodes
which
are
namespaces.
So
you
can
see
kind
of
an
example
directory
structure
on
the
right
places
where
you
have
a
namespace
channel.
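A minimal sketch of what such a repo layout might look like; the directory and file names here are invented for illustration, not the product's actual conventions:

```
config-root/
├── viewers-rolebinding.yaml        # attached at the root: inherited everywhere
└── shipping/
    ├── quota.yaml                  # applies to all shipping namespaces below
    ├── shipping-dev/
    │   └── namespace.yaml          # leaf: this directory is a namespace
    ├── shipping-staging/
    │   └── namespace.yaml
    └── shipping-prod/
        └── namespace.yaml
```

A policy placed in `shipping/` inherits down to all three leaf namespaces, while one placed in `shipping-dev/` applies only there.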
B
So
as
you
go
through
this
and
making
changes
to
it,
if
there's
an
error
or
a
problem,
let
you
know
about
that
ahead
of
time.
So
what
I'd
like
to
do
is
kind
of
go
into
a
little
bit
of
a
demo
here
kind
of
show
you
guys
what
it
looks
like,
and
you
know
we
can
step
through
code.
Some
example
use
cases
with
some
clusters.
I've
got
running
and
look
at
some
of
the.
B
Manage
policy
this,
this
demo
uses
a
repository
that
I've
got
running
actually
on
github
up
in
the
cloud.
It'll
show
us
kind
of
managing
quota
across
namespaces,
so
as
part
of
our
installation
of
policy
management,
we
also
include
a
admission
controller
that
deals
with
cross
namespace
resource
quota,
so
you
can
set
a
set
of
research
quota
policies
that
apply
to
a
pool
of
namespaces,
so
to
speak,
versus
kind
of
the
the
inbuilt
resource
quota,
admission,
control,
which
men
just
things
at
individual
namespace
level.
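The distinction being drawn is that the built-in ResourceQuota admission controller caps each namespace independently, while the one described here enforces a shared limit over a pool of namespaces. A toy sketch of that pooled check, with invented names and CPU as the only tracked resource:

```python
# Illustrative sketch (not the actual admission controller) of evaluating
# a quota policy across a pool of namespaces: a request is denied when the
# pool's *combined* usage would exceed the shared limit, something the
# per-namespace inbuilt ResourceQuota cannot express.

def admit(pool_usage, namespace, request_cpu, pool_limit_cpu):
    """pool_usage: {namespace: cpu_used} for every namespace in the pool.
    Returns True and records the usage if the request fits the pool limit."""
    total = sum(pool_usage.values()) + request_cpu
    if total > pool_limit_cpu:
        return False  # reject: the pool as a whole would be over quota
    pool_usage[namespace] = pool_usage.get(namespace, 0) + request_cpu
    return True

usage = {"shipping-dev": 2, "shipping-staging": 3, "shipping-prod": 4}
print(admit(usage, "shipping-dev", 1, pool_limit_cpu=10))   # fits exactly
print(admit(usage, "shipping-prod", 1, pool_limit_cpu=10))  # pool now full
```

The second request is rejected even though shipping-prod alone is nowhere near 10 CPUs, because the pool as a whole is.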
B: Bear with me one second. Okay, so I've got a tmux session set up here. I'm running two clusters on GCP; they're pretty much basic clusters. They've got this GKE policy management system installed on them already, but there's nothing else running on the clusters other than the basic stuff, and they are synced to the repository, which you can see on the right there.
B
So
I've
got
a
local
version
of
that
repository
here
on
my
machine
and
and
that's
just
a
tree
view
of
that
local
set
of
files
and
that's
representative
of
what's
sink
down
to
the
clusters
right
now.
So
there's
only
a
couple
of
basic
roles
and
role
bindings
that
apply
kind
of
across
the
cluster
I,
don't
have
any
namespaces
set
up.
B
I,
don't
have
any
kind
of
tenants
on
my
cluster
yet
so
what
we
can
do
is
kind
of
get
that
working
so
I'm,
going
to
switch
to
the
switch,
find
cute
cuddle
context
to
the
cluster
I
have
on
the
bottom
left
there
and
then
I'm
gonna
copy
over
some
policy
definitions.
I've
got
sitting
in
another
directory
just
to
get
us
started.
D
B
Then
you
can
see,
there's
also
some
policy
definitions
in
here.
Shipping
def
has
a
job
creator
role
and
role
binding,
and
then
we've
got
a
quota
policy
that
applies
across
these
three
namespaces
and
some
other
good
stuff
in
that
so
I'm
gonna
commit
this
and
push
it
up
to
the
github
repository
and
when
that
happens,
you
should
see
these
namespaces
getting
created
on
the
individual
clusters
and
there
we
go
so
what's
happening
here.
B
So
I'll
create
a
a
pod
courier
role,
binding
for
shipping
at
back-end,
and
we
can
see
you
know
how
to
to
set
up
access
for
some
of
our
users.
So
this
is
for
Alice
at
a
food
court
and
she's
got
this
for
across
all
of
shipping
out
back
end,
and
so
this
role
binding
should
apply
to
all
three
of
those
namespaces
and
the
way
we
accomplish
that,
because
role
bindings
are
namespace
level
resources
kubernetes.
So
we
flatten
this
out
when
we
pull
down
that
policy
to
the
cluster
and
actually
results
in
three
separate
role
bindings.
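The flattening just described, where a binding attached at a non-leaf node of the hierarchy is materialized as one namespaced copy per descendant namespace, can be sketched as a small recursive function. Data shapes and names here are illustrative, not the product's internals:

```python
# Sketch of "flattening": RoleBindings are namespace-scoped in Kubernetes,
# so a binding attached at an interior node of the policy hierarchy is
# copied into every leaf namespace beneath it.

def flatten(binding, tree, node):
    """tree maps each node name to its child node names; leaves are
    namespaces. Returns one namespaced copy of binding per leaf."""
    children = tree.get(node, [])
    if not children:  # leaf: emit a copy scoped to this namespace
        out = dict(binding)
        out["metadata"] = dict(binding["metadata"], namespace=node)
        return [out]
    result = []
    for child in children:
        result.extend(flatten(binding, tree, child))
    return result

tree = {"shipping-app-backend":
        ["shipping-dev", "shipping-staging", "shipping-prod"]}
binding = {"kind": "RoleBinding",
           "metadata": {"name": "pod-creators"},
           "subjects": [{"kind": "User", "name": "alice"}]}
copies = flatten(binding, tree, "shipping-app-backend")
print([c["metadata"]["namespace"] for c in copies])
# -> ['shipping-dev', 'shipping-staging', 'shipping-prod']
```

One binding in the repo becomes three on the cluster, which is exactly what the demo shows next.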
B
So
if
I
wanted
to
test
that
by
looking
at
the
shipping,
dev
I
can
see
that
I've
got
that
shipping
app
back-end
pod
Cretors
role,
binding
that
it's
inherited
I've
also
got
a
local
role
binding
for
job
creation
and
I've
got
another
inherited
viewers
role.
That's
inherited
from
the
root
of
that
directory
structure.
I
can
test
that
by
trying
to
pull
some
secrets
as
Alice,
which
fails
as
it
should.
If
I
try
to
retrieve
some
pods
that
should
be
authorized,
which
it
is,
although
I
don't
have
any
pods
at
the
moment.
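What those two access checks are exercising is ordinary RBAC evaluation: a subject may perform a verb on a resource in a namespace only if some role bound to them there grants it. A heavily simplified sketch (invented data shapes, nothing like the real authorizer's internals):

```python
# Simplified RBAC check: allowed only if a binding for this user in this
# namespace points at a role whose rules grant the verb on the resource.

def can_i(user, verb, resource, namespace, roles, bindings):
    for b in bindings:
        if b["namespace"] != namespace or user not in b["subjects"]:
            continue
        for rule in roles.get(b["role"], []):
            if verb in rule["verbs"] and resource in rule["resources"]:
                return True
    return False

roles = {
    "pod-creators": [{"verbs": ["get", "create"], "resources": ["pods"]}],
}
bindings = [
    {"namespace": "shipping-dev", "role": "pod-creators",
     "subjects": ["alice"]},
]
# Mirrors the demo: secrets are denied, pods are allowed.
print(can_i("alice", "get", "secrets", "shipping-dev", roles, bindings))
print(can_i("alice", "get", "pods", "shipping-dev", roles, bindings))
```

In a live cluster the same question can be asked with `kubectl auth can-i get pods --as=<user>`.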
B
C
B
D
B
D
D
B: If I look at the role bindings for shipping-app-frontend, I can see I get that inherited viewers role, but I don't have much else there, so if we had developers assigned to it, they wouldn't have access to do much. What I'm going to do is create access for Alice, the developer, there by moving an existing role binding up the tree: that pod-creator role binding was at shipping-app-backend, so if I move it up a level...
A: I assume that there is some kind of controller running in each of the clusters, which is basically watching the repo, or wherever these policies are placed, and pulling from there; is that right? So that controller, or whatever it is, is like an add-on kind of thing which can be installed on each of the clusters, something like that? Oh yeah.
B
Terms
of
access,
the
importer
it
has
to
look
at
the
repository
needs
to
be
able
to
connect
to
the
repository.
It
also
needs
to
be
able
to
write
the
CDs
and
the
system's
syncing
to
the
cluster
itself
has
a
KSA
which
is
relatively
highly
privileged
because
it
needs
to
be
able
to
create
and
manage
these
resources.
B
So
the
typical
kind
of
setup
is
to
install
these
things,
provision
the
service
account
for
the
policy
management
piece
and
and
then
hand
that
over
to
the
pod
and
then
once
that's
done
it
can
create
all
the
resources
it
needs
to
do.
In
addition
to
those
two
things,
we
also
ship
as
I
said
mission
controller,
which
handles
the
resource
quota
evaluation
of
across
namespaces.
So
when
it's
when
we
need
to
look
at
policies
that
are
specific
to
a
namespace,
these
two,
the
resource
quarter,
we
use
that
inbuilt
admission
controller,
but
we
have.
H: Right, so what you've got there is the git policy importer, and it contains both the git sync code and the code that John mentioned that reads from that repo and writes out CRDs, or CRs, that are defined by GKE policy management and that represent the objects we are synchronizing; we use the status of those objects to report on synchronization state. The policy admission controller is the hierarchical... sorry, the policy admission controller is a safety valve for us.
H
So
if
bad
stuff
comes
from
the
get
the
git
repo,
we
will
not
let
things
apply
to
the
CRS
that
we
create
internally
and
so
think
of
it
as
like
a
validator
to
make
sure
they're
rocketing
garbage
from
the
from
the
git
repo
and
letting
it
into
the
system.
The
resource
quoted
mission
controllers,
as
John
described,
that
does
the
hierarchical
evaluation
of
quota.
The
sinker
piece
is
what
reads
those
CRS
and
writes
out
to
the
kubernetes
api
server.
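The syncer's job as described, reading declared objects and driving the cluster toward them, is a reconcile loop. A rough sketch with plain dicts standing in for the CRs and the API server; names and shapes are illustrative, not the real component:

```python
# Sketch of a reconcile step: make the cluster match the declared state,
# applying anything missing or changed and deleting anything no longer
# declared. The real syncer does this against the Kubernetes API server.

def reconcile(desired, cluster):
    """desired/cluster: {name: object}. Mutates cluster in place and
    returns the list of actions taken, for reporting sync status."""
    actions = []
    for name, obj in desired.items():
        if cluster.get(name) != obj:
            cluster[name] = obj          # create or update
            actions.append(("apply", name))
    for name in list(cluster):
        if name not in desired:
            del cluster[name]            # prune undeclared objects
            actions.append(("delete", name))
    return actions

cluster = {"old-binding": {"kind": "RoleBinding"}}
desired = {"viewers": {"kind": "RoleBinding"}}
print(reconcile(desired, cluster))
print(cluster)
```

Because the loop runs continuously against the repo-derived desired state, a missed or failed pass is simply retried, which is what makes the system self-correcting.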
B
Don't
think
we've
run
into
it
in
the
wild.
It
is
a
case
that
we
want
to
be
aware
of
right,
there's
a
number
of
reasons
why
something
might
not
work.
For
example,
if
there's
a
collision
in
name,
you
know
we
wouldn't
apply
on
a
particular
cluster,
and
so
one
of
the
things
we're
working
on
and
should
release
shortly
is
kind
of
status
updates
on
all
this.
B
So
right
now,
if
there's
an
error,
we
do
publish
logs,
and
it
is
something
that
you
can
kind
of
dig
in
and
understand
what
happened,
but
you're
planning
on
kind
of
adding
status
information
to
all
of
our
CRTs
and
making
it
much
easier
to
speak.
The
last
sync
successful
same
day
for
all
of
them
and
you
know
giving
people
the
information
to
deal
with
cases
like
that.
All
the.
H: There's going to be, you know, a central role: the idea here is you'll know the state of every cluster. We expect, especially in distributed environments where clusters may have poor connectivity, that you could see some amount of latency difference between applications, and you could also see breakages, but the system is always trying to pull the repo, so it is self-correcting.
H: So what you're looking at here is our alpha version. I don't know if you could hear, Christian, but the idea here is that this is meant to work in heterogeneous environments. So you could imagine that in a given region where you don't need that much capacity, you might only have a small cluster, versus in your main region.
H
You
might
have
a
large
cluster
and
you
may
only
want
to
give
certain
teams
access
to
that
small
cluster
because
they
have
critical
workloads
that
have
to
run
there,
and
so
the
way
that
we
think
about
that
is
is
that
the
namespaces,
my
tags
will
exist
right
on
all
of
those
clusters,
but
the
quota
would
be
minimized
or
the
things.
So
you
might
have
zero
quota
on
that
small
cluster.
H
If
you're,
not
one
of
the
teams
means
to
run
there,
and
so
that
features
what
we
call
per
cluster
addressability
right
and
the
idea
there
is
to
is
to
allow
people
to
take
like
resource
quota
and
say:
okay
I
want
to
apply
it
to
my
large
cluster
and
then
I
want
to
apply
this
different
quota
to
a
small
cluster
for
the
same
namespace,
and
then
that
allows
you
to
deal
with
the
heterogeneous
nature
and
across
all
policies
that
that
feature
would
work.
That's.
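A hypothetical sketch of what per-cluster addressability could look like in the repo. The kind name, the `clusterSelector` field, and the label values are invented for illustration; the talk does not specify the actual schema, only that labels select which clusters a policy applies to:

```yaml
# Invented example: same namespace, different quota per cluster,
# selected by cluster labels.
kind: ResourceQuotaPolicy        # hypothetical kind
metadata:
  name: shipping-quota-large
clusterSelector:                 # hypothetical field
  matchLabels:
    size: large
spec:
  hard:
    cpu: "64"
---
kind: ResourceQuotaPolicy
metadata:
  name: shipping-quota-small
clusterSelector:
  matchLabels:
    size: small
spec:
  hard:
    cpu: "0"                     # teams not meant for the small cluster get nothing
```

The namespace exists everywhere, but the effective quota differs per cluster depending on which selector matches.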
B: Yeah, that is something we can do with that mechanism, and that's how the mechanism works: it uses labels to say these clusters are part of this environment and those other clusters are part of this other environment, and then you apply the policy to that label via the selector.
J: One last question for me: can someone explain how the quota controller works between two clusters? You did an example showing, hey, if I have a cap limit of one gig or four gigs of memory and I try to overcommit it, you guys were able to say, hey, you are over your limit across multiple clusters. Can you explain how that works, the mechanisms for that?
B: Yeah, thanks a lot, guys. I just wanted to mention: if anyone is interested in taking a look at this or playing around with it, feel free to get in touch with me; we're testing it with folks right now. My email address is my name, John Murray, at google.com. Happy to discuss it further with folks if there's interest.
F
One
thing
I'll
just
mention
is,
in
our
last
meeting:
I
took
an
action
item
to
make
a
change
to
the
cluster
registry,
to
add
something
that
marries
a
cluster
registry
record
with
a
cute
config
so
that
you
can
express
in
the
cluster
registry
what
what
credentials
to
use
to
contact
the
cluster
I
have
not
had
a
chance
to
do
that
yet.
But
it
is
on
my
list
of
things
to
do
by
the
end
of
this
week.
Sorry
for
the
delay.