From YouTube: Kubernetes SIG Multicluster 2020 August 11
C
You know, Thursday used to be my meeting hell day. It's shifted to Tuesday now.
C
The
well
the
meeting
hell
day
is
the
day
on
which
I
not
only
have
meetings
straight
through
from
7
a.m
until
noon,
but
in
fact
I
have
several
conflicts
within
that
setup.
C
C
A
All
right,
we'll
give
it
another
minute
for
folks
to
join.
A
Richard
I
see
your
nick
in
the
chat.
Are
you
the
same
richard
that
put
an
item
on
the
agenda.
A
Okay,
great
all
right!
Well,
why
don't
we
go
ahead
and
get
started?
This
is
the
august
11th
sig
multi-cluster
richard
you're.
First
on
the
agenda
with
the
loblaw
project
or
multi-cluster
model.
D
Yeah, so should I share my screen?
A
That would be great. Oh, I probably need to enable you to do that. Yes, all right, you are enabled to share your screen. Perfect.
D
So we have things like loyalty programs and quite a bit of Canadian traffic, and we've chosen to run our workloads primarily on Kubernetes. But I wanted to cover a little bit about what we saw in our line of work as it pertains to multi-cluster, and some of the approaches that we went down and tried to explore.
D
Definitely. I only learned about SIG Multicluster a couple weeks ago, so we've been hard at work reading up on your documentation and your position statements, and I thought this would be a great opportunity to share our notes as we come across things.
A
That's fantastic! Thank you.
D
Awesome. So, because we are part of a larger enterprise, there are a couple of architectural requirements that were handed to us, and our challenge was to design our cluster, or multi-cluster, solution around them.
D
So
the
first
thing
is,
you
know
a
private
network
sort
of
the
the
private
ip
space
has
to
be
so
we
couldn't
use
a
lot
of
say,
service
type,
load,
balancers
out
of
the
box,
because
you
know
we
need
a
firewall
at
all
points
of
ingress
into
into
this
private
network
space
and
that's
sort
of
how
you
know
a
lot
of
these
out-of-the-box
solutions
start
to
break
down
is
when
you
need
to
put
a
layer
in
between
your
load,
balancer
and
the
internet.
D
His
and
configurability
historically
has
been
sort
of
a
struggle
for
us.
We
need
mtls
between
clusters
and
workloads,
and
I
know
that
this
this
sig,
isn't,
you
know
specifically,
for
you
know,
meshes
or
whatever.
I
saw
tim's
comment
on
on
how
we're
trying
to
build
building
blocks
for
these
meshes
but
sort
of
just
you
know
a
point
there
single
we're
working
off
of
a
single
region,
container
native
routable
solution
and
also
we
are
using
istio
as
a
service
mesh.
D
So
that
is
our
main
method
of
ingress,
after
all,
like
the
firewall
stuff
and
the
north
south
stack.
So
the
main
problem
statement
that
we
wanted
to
solve
was
that
upgrades
on
a
cluster-wide
basis
is
very
terrifying
if
all
of
your
workloads
are
on
that
same
cluster,
so
anything
that
requires
a
control
plane.
Anything
that
requires
custom
resource
definitions
right
when
you
touch
that
resource
definition,
inherently
the
whole
cluster
gets
impacted
by
that.
So
you
know
we
thought
about.
D
How
can
we
limit
that
blast
radius
of
a
of
a
of
a
tool,
or
you
know
istio
or
any
of
the
other
platform
tools
that
we
rely
on,
and
so
we
thought
about?
You
know
what,
if
we
blew
green
the
cluster,
and
so
this
was
a
a
little
bit
of
a
road
map
or
sort
of
the
the
algorithm
that
we
wanted
to
follow.
As
we
went
ahead
and
blue
blue-greened,
our
clusters,
you
know
obviously
phase
zero.
D
Obviously, phase zero is creation of the green cluster. Then make sure that the clusters can connect, then make sure that services are discoverable, then a north-south switchover on the ILB, and then finally decommissioning of the blue cluster. That's a lot of points, so I drew a diagram to help everybody understand what I mean by that. So we have blue clusters and green clusters, and I especially appreciated your statements on, you know, namespace sameness.
D
Both
clusters
should
be
the
same
and
at
least
look
the
same
in
in
any
consuming
service
and
also
tenant
two
tenant
two.
D
So
specifically,
because
I
talked
about
mtls
with
our
service
mesh,
the
only
cross-cluster
discoverable
service
that
in
in
our
current
design,
is
the
ingress
gateway,
because,
when
blue
cluster
needs
to
connect
over
to
green
cluster,
the
mtls
is
verified
by
the
ingress
gateway
and
how
those
interact
with
each
other
so
and
then
the
the
the
last
point
of
of
ilb
switch
over
the
north
south
switchover
is
really
about
the
the
internal
load
balancer
up
here
so
phase.
D
One,
I
think,
is,
is
kind
of
less
relevant
to
the
the
work
of
this
group,
because
it's
mostly
about
sort
of
signing
certificates
and
rotations
and
stuff
like
that
phase
two,
which
was
the
service
discoverability
I
thought
was-
was
bang
on
with
the
work
here
at
mcs
api.
D
I,
as
I
said
before,
the
only
service
that
that
we
need
to
be
discoverable
between
two
clusters
is
the
ingress
gateway,
and
so
we
actually
ended
up
writing
our
own
service,
that
watches
service
and
endpoint
events
and
then
takes
the
definitions
and
reapplies
them
to
another
cluster.
It's
you
know
relatively
quick
and
dirty
solution.
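[Editor's note: a minimal sketch of the kind of quick-and-dirty syncer described here, assuming client-go and two kubeconfigs: it watches Service events in the source (green) cluster and reapplies stripped-down copies into the target (blue) cluster. The flags, the copied fields, and the create-only behavior are illustrative assumptions, not the actual Loblaw code.]

```go
package main

import (
	"context"
	"flag"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	source := flag.String("source", "", "kubeconfig of the source (green) cluster")
	target := flag.String("target", "", "kubeconfig of the target (blue) cluster")
	flag.Parse()

	src, dst := mustClient(*source), mustClient(*target)

	// Watch Service events in the source cluster; a real syncer would
	// also watch Endpoints and handle deletions, as described above.
	factory := informers.NewSharedInformerFactory(src, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()
	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { reapply(dst, obj.(*corev1.Service)) },
		UpdateFunc: func(_, obj interface{}) { reapply(dst, obj.(*corev1.Service)) },
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // run until killed
}

func mustClient(kubeconfig string) *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

// reapply copies a cleaned-up definition of the source Service into the
// target cluster (create-only here; a real version would update too).
func reapply(dst kubernetes.Interface, svc *corev1.Service) {
	out := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: svc.Name, Namespace: svc.Namespace},
		Spec:       corev1.ServiceSpec{Ports: svc.Spec.Ports},
	}
	if _, err := dst.CoreV1().Services(svc.Namespace).Create(context.TODO(), out, metav1.CreateOptions{}); err != nil {
		log.Printf("reapply %s/%s: %v", svc.Namespace, svc.Name, err)
	}
}
```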
D
I
have
also
made
the
the
source
code
available,
but
I
would
be
really
interested
in
you
know,
and
I've
already
started
a
little
bit
of
it,
comparing
the
two
approaches
and
seeing
how
how
similar
or
how
different
they
are
and
here's.
I
guess
one
of
the
questions
that
I
wanted
to
discuss
today
is
like
I
I
don't
know
if
I'm
missing
something
in
the
documentation,
but
I
couldn't
figure
out
how
clusters
were
actually
registered
with
each
other,
because.
A
I
was
just
gonna
say:
that's,
that's
a
really
good
question
and
the
the
thesis
of
our
sig
multi-cluster
intro
is
that
cluster
registry
is
easy
to
describe
like
as
a
thought
right
in
practice.
It's
been
super
difficult
to
to
build
something
that
that
makes
that
idea
real
in
software
and
one
of
the
things
that
characterizes
our
approach
that
we're
taking
in
the
sig
right
now
is
that
there
is
an
existing
project
out.
A
There
called
the
cluster
registry,
that's
in
the
kubernetes
github
org,
but
we
really,
I
think,
sort
of
put
the
cart
before
the
horse
with.
A
And
we
have
two
things
happening
right
now.
One
is
a
that
were
we
have
a
worldwideable
document
where
we're
trying
to
collect
specific
use
cases
that
people
have
for
registration
as
part
of
like
a
rethink
of
the
concept
and
the
other
one
is
that
in
multi-cluster
services-
and
I
think
you
kind
of
happened
on
to
this
dimension,
but
it's
also
in
the
the
the
work
that
we've
been
doing
on
the
work
api.
Is
that
we're
trying
to
avoid
prematurely
characterizing
a
registry
and
overfitting
to
the
problems
that
are
directly
in
front
of
us.
A
So
I
would
be
super
interested
in
in
use
cases
that
you
would
characterize
for
what
you
think
registration
means,
and
what
would
you
want
to
do
with
it?.
D
Right,
I
think,
just
in
terms
of
thinking
about
the
the
problem
set
of
mcs
api.
First
of
all,
I
guess
I'm
interested
in
how
the
sort
of
implementation
right
now
does
it.
I
know
I
I
was
able
to
find
that
you
know
declaring
a
service
import
does
indeed
import
a
cluster,
I'm
just
a
little
bit
confused
as
to
where
it
sources
that
information
right
now
as
it
stands
in
the
in
the
github
right,
jeremy.
B
Yeah,
so
this
kind
of
goes
in
the
you
know:
building
blocks
first
approach,
but
basically,
like
we've
hand,
waved
away
the
cluster
registry
for
now,
with
with
mcs
we've
also
hand
waved
away
the
actual
implementation,
so
we've
created
the
crds
on
both
sides.
Then
we've
defined
the
behavioral
spec
for
how
you
map
service
export
to
service
import,
but
there
is
no
canonical
implementation
for
how
that
actually
happens.
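[Editor's note: to make the ServiceExport/ServiceImport split concrete, here is a simplified Go sketch of the two CRD shapes being described. Field names are abridged and illustrative; the real alpha types live in the sigs.k8s.io mcs-api repository.]

```go
package mcs

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ServiceExport is created by the service owner in the exporting cluster.
// It carries no spec of its own; it simply marks the like-named Service
// in the same namespace as visible to the rest of the cluster set.
type ServiceExport struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}

// ServiceImportType mirrors the two multi-cluster service types discussed
// in this meeting: a VIP-backed service and a headless one.
type ServiceImportType string

const (
	ClusterSetIP ServiceImportType = "ClusterSetIP"
	Headless     ServiceImportType = "Headless"
)

// ServiceImport is what an implementation materializes in each importing
// cluster after observing exports across the cluster set; the behavioral
// spec constrains what it must contain, not how it gets created.
type ServiceImport struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              ServiceImportSpec `json:"spec,omitempty"`
}

type ServiceImportSpec struct {
	Type ServiceImportType `json:"type"`
	// IPs holds the VIP(s) assigned to a ClusterSetIP-type import.
	IPs []string `json:"ips,omitempty"`
}
```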
B
So
there's
there's
a
spec
for
what
an
implementation
should
do
when
it
sees
service
exports
in
in
your
clusters
that
are
attached
to
your
imaginary
cluster
registry,
that
we
will
have
also
one
day
and
and
the
service
imports
that
should
be
created,
but
yeah.
We
that
canonical
implementation
doesn't
exist.
But
I
know
that
like
submariner
has
been
looking
into
this
and
what
that
would
look
like-
and
I
know
that
there's
a
bunch
of
issues
open
to
implement
the
api
and
there's
actually
so.
B
This
is
something
that
is
probably
even
more
relevant
to
you,
but
I
know
that
istio's
been
interested
in
implementing
this
the
api
as
well
right.
So
I
don't
know
if
you've
talked
to
the
networking
working
group
or
anything
with
istio,
but
but
yeah.
So
there
is
no.
There
is
no
implementation
today
for
how
it's
done,
but
the
the
assumption
and
and
hope
is
that
there
will
be
multiple
right,
but
that
they
will
behave
in
similar
ways
right.
According
to
this
fact,.
D
Yeah,
I
mean
so
I'll
sort
of
go
over
briefly
how
we
decided
to
solve
the
problem,
and
I
don't
think
it
is
the
best
way,
but
it
was
sort
of
the
first
to
to
mine
was
basically
when
we
install
the
service
we
target
another
cluster,
and
that
was
sort
of
our
our
quick
and
dirty
method,
and
one
of
our
thoughts
going
behind
that
is
like
a
service
registry
is
another
sort
of
moving
part
to
it.
It
seemed
to
fit
our.
D
You
know
we're
a
relatively
simple,
where
there's
only
ever
two
clusters
at
once,
so
obviously
that
that
won't
fit
everybody's
use
case.
But
you
know.
A
I'm
sort
of
trying
to
line
up
your
description
just
a
moment
ago
with
the
content
on
the
slide
and
when
you
say
that
you
target
another
cluster,
would
that
be
like
the
blue?
One
knows
about
the
green
one
or
vice
versa,
and
so
your
service
syncer
thing
is,
is
just
copying
from
the
target.
D
Right
so
it
would
be
a
service
primarily
on
the
on
the
green
cluster
that
would
copy
its
services
into
the
blue
one,
because
primarily,
we
wanted
to
touch
the
blue
one
as
little
as
possible
during
the
the
changeover,
obviously
for
rollback
purposes,
if
we
could
just
kill
the
green
cluster
and
leave
our
blue
running
in
production,
that'd
be
amazing
right.
So
we
wanted
to
install
as
little
as
possible
in
terms
of
services
on
a
blue
cluster,
but
still
have
it
routable
over
to
the
green.
If
that
makes
sense,.
A
So
it
I
I
think
it
does.
I'm
hearing
you
make
a
new
green
cluster
when
you're
gonna
do
a
deployment
you
make
a
new
green
cluster.
It
knows
about
the
blue
cluster
right
once
once
your
checkout
process
on
the
green
cluster
gives
you
the
the
green
light.
A
D
Right, exactly, yeah. Okay, that's a good characterization. And the other part I wanted to touch on today was sort of the phase three of it, which is the internal load balancing at the top, the north-south traffic. The other question I wanted to ask was: obviously, the specifics of how the load balancers are handled are up to the cloud provider or the external load-balancing software.
D
But
do
you
guys
have
any
thoughts
on
how
sort
of
describing
the
behavior
or
how
the
behavior
should
be
from
the,
and
I
say
that
because
right
now,
a
service
of
type
load
balancer
has
one
load
balancer
attached
to
it
in
a
in
a
sort
of
single
cluster
way.
D
When
we
expand
that
definition
to
more
than
one
cluster,
are
we
still
having
one
load
balancer
ip
or
will
there
be
multiple
or
do
you
guys
have
any
thoughts
on
that.
B
This
is
a
really
good
question.
I
think
I've
certainly
had
some
thoughts.
I
don't
think
we've
gotten
there
yet
with
ncs,
and
I
think
this
is
probably
a
good
time
to
start
this
conversation.
Actually,
it's
also
a
good
segue,
because
I
think
then
one
of
the
next
items
on
the
agenda
for
today
andrew
brought
up
the
idea
of
node
port
multicluster
service
types
and
what
that
could
look
like,
but
I
think
I
think
we
need
to
have
a
conversation
about
what
what
other
service
types
might
look
like.
B
So
what
is
a
a
multi-cluster
service
type
load
balancer
like
I
think
it
seems
reasonable
that
there
might
be
one
single
load,
balancer
ip,
that
routes
to
the
multi-cluster
service
across
the
cluster
set,
and
then
the
mechanics
would
hopefully
then
be
similar
to
to
a
service
type
load
balancer
within
a
single
cluster.
But
I
think
there's
with
any
of
these
external
routing
concepts.
I
think
we
have
to
figure
out.
How
do
we
actually
describe
that
like?
B
Currently,
the
only
two
service
types
are
for
for
a
super
cluster
or
sorry
cluster
set.
I'm
gonna
probably
do
that
for
a
little
while
now
that
we
have
a
real
name,
but
the
cluster
set
ip
and
head
list
and
those
are
easy
to
derive
from
a
service.
But
then
how
do
you?
How
do
you?
B
How
do
you
dictate
that
this
should
be
a
load,
balancer
type
multi-cluster
service?
I
think
it's
a
question
that
we
we
need
to
answer,
but
we
should
answer
it.
So,
let's
see.
D
Our
our
toy
solution
is
finding
a
way
to
manually
manage
that
by
using
the
external
ip
service
type
and
then
just
getting
our
our
load
balancer
to
point
to
the
the
the
the
nodes,
the
node
ips
of
each
indiv,
each
different
cluster
and
and
because
it's
a
you,
know,
layer,
four,
we
don't
have
the
opportunity
to
sort
of
do
percentage-based,
writing
or
header-based
writing
or
all
those
other
things
which
is
which
is
okay.
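[Editor's note: a minimal sketch of that toy approach, assuming an Istio ingress gateway Service whose spec.externalIPs carries a hand-managed VIP, with the upstream L4 load balancer configured out of band to forward to each cluster's node IPs. Names, ports, and addresses are placeholders.]

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// ingressGatewayService builds the manually managed Service: because the
// external IP is plain layer four, there is no percentage-based or
// header-based routing, exactly as noted above.
func ingressGatewayService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "istio-ingressgateway",
			Namespace: "istio-system",
		},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "istio-ingressgateway"},
			Ports: []corev1.ServicePort{{
				Name:       "https",
				Port:       443,
				TargetPort: intstr.FromInt(8443),
			}},
			// Hand-managed VIP that the upstream internal load balancer
			// forwards to on every member cluster's nodes.
			ExternalIPs: []string{"10.0.0.100"},
		},
	}
}
```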
A
Jeremy, I think you said a few words that, like, connected that in my brain. Just like we have the open call for use cases for what we think the properties of cluster registration are and what we want to do with primitives around registration, I wonder if this is a good time for us to have a similar open call for use cases for cluster sets. We've got a name now, and people probably have different ideas about what they mean, so maybe it's time to start writing down what we actually want them to do and allow us to express.
D
Awesome,
well,
that's
that's
pretty
much
the
end
of
my
presentation,
but
I
did
want
to
say
I'm
I
I
was
really
excited
to
find
you
guys.
You
know
it's
this
type
of
of
work.
D
You
know
really
needs
a
community
around
it
and-
and
you
know
finding
you
guys
was
sort
of
like
hey.
It's
not
just
me,
I'm
not
crazy,
but
I'm
really
excited
to
to
be
a
an
active
contributor
in
this
group.
A
Thanks
a
lot
for
the
presentation
this,
this
was
super
interesting
to
me,
so
I
really
appreciate
you
coming
and
sharing
your
experiences
with
us.
That's
that's
the
kind
of
input
that
we
really
need
to
to
find
the
things
that
are
going
to
be
useful
to
provide
from
the
community.
So
it's
really
appreciated
thanks
a
lot
richard
yeah.
Thank
you.
This
is.
This
is
great.
B
Yeah
and
I
think
I'll
go
quickly
because
andrew
added
a
couple
bullets
that
I
think
are
probably
more
interesting,
so
I
just
wanted
to
let
everybody
know-
and
I
linked
it
in
the
notes,
but
we
actually
have
a
sigs
repo
now
for
the
alpha
implementation,
which
means
we
can
take
pr's,
so
there's
some
work
to
do
before
we
can
get
to
beta
quite
a
bit
of
work.
Obviously
we
need
some
implementations
for
one.
So
that's
that's
a
good
place
to
start
so
that
we
can
answer
hey.
B
Is
there
a
canonical
implementation?
Yes,
maybe
or
multiple,
so
we
have
somewhere
to
point.
Dns
is
a
big
open
question
that
I
think
we've
alluded
to
in
the
in
the
cap.
I
think
at
a
high
level
it
doesn't
seem
too
controversial,
but
let's,
let's
see
about
implementing
it
and
see.
If
that
holds
true,
I
think
I
think
we'll
uncover
some
surprises
there.
B
I
I
would
expect,
and
so,
if
anyone
wants
to
help,
we
actually
have
a
place
where
you
can
so
pr
is
welcome
and
I
see
that
andrew
has
pr
one,
which
is
great
so
that
took
no
time
at
all.
So,
thank
you
yeah,
so.
D
Yeah, so as we're talking about implementations: I don't know what folks here think about Cluster API and whatnot, but I can see a Cluster API implementation being maybe an obvious candidate for this, because everything related to clusters is represented as CRDs in what they call the management cluster. So it'd be pretty trivial to just, you know, use that CRD as the cluster registry and then go from there.
A
I
I'm
wary
of
attaching
to
any
provisioning
thing
with
the
hard
dependency
because,
for
example,
we
have
a
provisioning
api-
that's
not
part
of
cluster
registry
that
we
use
for
openshift.
As
I
understand
it,
also
like
the
thing
that
is
called
cluster
register.
Sorry
cluster
api
in
our
upstream
is
not
directly
supported
by
any
vendors,
so
I
think
vendors
put
additional
apis
in
front
of
that
in
order
to
use
it.
A
I
think
that
that
that's
sort
of
a
challenge
that
we'll
have
to
cope
with
and
one
of
the
the
things
I've
sort
of
wondered
to
myself.
If
we
should
think
about
this,
as
a
principle
is,
is
it
a
principle
that
we
want
the
work
in
this
sig
to
be
independent
of
any
provisioning
api?
D
So
I
agree
with
that.
I
think
cluster
api
is
a
little
weird
in
that
it
is
a
provisioning
api,
but
because
everything's
you
know
in
crds,
like
you
can
layer
anything
you
want.
On
top
of
that.
So,
like
I,
don't
see
a
cluster
api
implementation
kind
of
violating
that
principle,
because
it's
it's
not
like
it's
not
strictly
provisioning,
I
guess
but
yeah.
I
think
in
general,
like
I
tend
to
agree
with
that
that
we
shouldn't
couple
to
provisioning
it
guys
too
much.
B
Yeah
it
yeah-
I
like
that
as
well.
I
do
think
there's
probably
room
for
like
I.
I
can't
tell
if
you're
talking
about
like
the
just
mcs
or
like
a
whole
cluster
registry,
like
I,
I
think
tying
cluster
registry
to
cluster
api
strongly
might
be
yeah.
B
So
I
think,
there's
probably
room
for
that.
Yeah,
I
think
like
having
an
implementation
of
mcs
based
on
cluster
api,
could
be
interesting
to
see
how
that
would
look
for
sure.
D
From my perspective, it was mostly just: there's a project that implements clusters as CRDs, and so it would be an easy first implementation, or we could use it as a reference implementation to actually go through the cycles of implementing the entire MCS API and figuring out, you know, what parts of the API feel awkward and what parts of it we like. That was really the angle I was coming from.
A
Okay, yeah. I think I'd probably just want to make sure that we don't let the apparent convenience of something let us skip a step on collecting: is there a cluster registration use case here, and exactly what is it?
B
Yeah, and I want to be wary about making changes to services to accommodate Cluster API when they're not otherwise necessary, making sure that the changes we make are for the right reasons and not just to go along with that convenience.
A
So,
let's
is
it
cool
if
we
maybe
zoom
in
on
what
the
registration
use
case
is
here,
so
it
sounds,
it
sounds
like
there
is
like
we're
we're
having
this
conversation,
because
there's
a
registration
use
case
is
the
use
case.
Is
it
that
we
just
need
to
have
a
record
of
like
endpoints?
Is
it
something
deeper
than
that?
Is
it
that
we
want
to
run
a
reconciler
that
will
watch
the
list
of
clusters
exactly
what
use
case?
Are
we
looking
for
from
registration
here.
D
So
the
whole,
so
I'm
missing
a
lot
of
context
of
the
class
registration.
I
also
have
a
heart
stop
in
like
a
minute,
so
I
might
have
to
drop,
but.
D
Go
ahead:
sorry,
yeah,
but
yeah.
Basically,
what
I'm
saying
is
like
yeah
like
if,
if
a
client,
if
a
controller
was
implementing
like
the
api,
that
like
reads
the
service
import
and
creates
a
service
export
and
all
that
stuff
like
you,
would
need
to
be
aware
of,
like
yeah
the
list
of
clusters
and
then
based
on
that
list,
it
should
be
able
to
like
generate
a
keep
config
or
a
way
to
access
and
talk
to
each
cluster
and
pull
information
out
of
it.
D
And
so
like,
like
cluster
api,
seems
like
a
good
starting
point
because,
basically,
like
any
cluster
resource,
represents
a
cluster
and
there's
a
matching
secret
with
this
like
admin,
key
config
and
so
like.
That
is
an
already
good
starting
point,
but
we
can
use
that
pull
in
all
the
service
exports
and
then
reapply
the
service
imports
based
on
that.
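[Editor's note: a sketch of that Cluster-API-as-registry idea. It assumes the Cluster API convention of a "<cluster-name>-kubeconfig" Secret in the management cluster; the helper name and the "value" data key are illustrative.]

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// clientForCluster resolves a member cluster's admin kubeconfig from the
// management cluster and returns a typed client for it. A controller
// implementing MCS this way would watch Cluster objects, build a client
// per member, pull ServiceExports, and reapply ServiceImports.
func clientForCluster(ctx context.Context, mgmt kubernetes.Interface, namespace, clusterName string) (*kubernetes.Clientset, error) {
	secret, err := mgmt.CoreV1().Secrets(namespace).Get(ctx, clusterName+"-kubeconfig", metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("fetching kubeconfig secret: %w", err)
	}
	cfg, err := clientcmd.RESTConfigFromKubeConfig(secret.Data["value"])
	if err != nil {
		return nil, fmt.Errorf("parsing kubeconfig: %w", err)
	}
	return kubernetes.NewForConfig(cfg)
}
```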
A
So, I know you've got a hard stop. Maybe think about how you'd characterize exactly what the specific use case is, and why don't we talk about that in next week's meeting, as a good way to seed that use case doc.
B
Thinking about these things is really good. I think, out of that, it sounds like we've come up with a couple use cases for that cluster registry doc, and we'd do well to think about that over the next week and really get them down at the next meeting. And then also, you know, NodePort and then LoadBalancer types, too: I think we should start thinking about what those service types could look like in a multi-cluster world.
A
So
it's
too
bad
andrew
had
to
leave.
I
I'm
sort
of
wondering
if
anybody
else
heard
andrew
describing
properties
of
a
push
scheme
where
there's
one
thing:
that's
getting
an
admin,
cube
config
on
the
registered
clusters
and
pushing
to
those
clusters
resources.
Did
anybody
else
hear
that.
B
A
One of the lessons that we at Red Hat take out of our initial experience, and what we've done up to this point, is that for a lot of users it seemed like push had a couple of significant limitations. There was a perception that a push scheme is a single point of failure.
A
So
if
the,
if
you're
pushing
from
some
hub
that,
if
that
hub
goes
down
your
and
you
don't
have
a
a
way
for
another
hub
to
pick
it
up-
that
the
the
hub
is
effectively
a
single
point
of
failure.
A
As
just
for
my
own
subjective
thoughts
like
I
find
the
term
single
point
of
failure,
often
times
to
just
be
confusing,
and
to
eliminate
details
and
nuance
where
they're
usually
very
important.
So
I
am
more
relaying
that
as
something
that
people
have
told
to
me
rather
than
an
articulation
of
my
own
personal
opinion.
But
it
does
seem
to
be
a
concern
that
people
have
the
the
other.
One
is
that
there
are
many
environments
that
users
have
where
push
doesn't
work
for
them
for
whatever
reason,
because
it
crosses
a
an
addressability
boundary.
A
So
many
many
of
the
users
that
I've
spoken
with
have
topologies
where
the
their
fleet
is
able
to
dial
a
hub
like
they
can
dial
externally,
but
they're
extremely
limited
in
terms
of
what
inbound
traffic
they
can
take,
and
so
that
has
kind
of
pointed
my
own
thinking
to
be
positioned
around
pull,
where
the
difference
between
push
and
pull
is
that
in
push
you've
got
something
that
is
that
is
reaching
out
and
programming
a
cluster
and
in
pull
you
have
something
running
in
the
cluster
that
is,
that
is,
instead
of
being
pushed
to
is
reaching
out
to
something
else
like
watching
an
api
and
pulling
information
from
that
and
materializing
that
onto
the
cluster.
A
So
I
don't
I,
I
am
pretty
roundly
convinced
overall,
that
one
or
the
other
is
not
like.
We
can't
make
a
binary
choice
that
will
work
equally
well
for
everyone,
so
just
wanted
to
call
that
out
about
push
versus
pull.
B
Yeah-
and
I
think
we
we
made
a
strong
point
of
including
that
in
the
mcs
cap
as
well,
the
idea
push
versus
pull
and
centralized
versus
decentralized
are
like
that
was.
That
was
a
key
conversation.
I
think
it's
important
whatever
we,
whatever
direction,
we
we
go.
We
should
make
sure
that
we're
accommodating
both.
D
I
actually
might
have
a
a
another
viewpoint
on
this
push
and
pull
primarily
based
on
sort
of
even
deployments
and
and
how
we
approach.
Our
ci
is
very
much
pull-based,
because
we
found
that
a
pull-based
method
of
of
sort
of
artifact,
retrieval
and
and
other
things.
They
just
tend
to
be
more
controllable
right
having
credentials
on
another
on
on
something
that
can
push
seems
very
dangerous
to
just
to
us
right,
and
so
we
prefer
a
pole-based
methodology
as
well.
A
And
that's
a
that's
another
piece
of
feedback
that
we
heard
around
previous
efforts
in
federation
is
that
if
you've
got
a
place
in
the
control
plane
that
holds
admin
credentials
or
cluster
route
credentials
for
a
bunch
of
clusters,
one.
That's
a
really
really
high
value
target
that
you've
added
to
your
api
surface
in
that
cluster
and
related
to.
A
That
also,
is
that
there
are,
if
you,
if
you
start
to
stack
multiple
use
cases
like
we're
talking
about
connectivity
and
multi-cluster
services
now,
but
if
we
stack
like
scheduling
onto
that
and
maybe
policy
enforcement
that
two
things
happen,
one
is
that
it
becomes
very
clear
that
there
are
multiple
service
account
like
entities
that
are
doing
things
at
very
different
degrees
of
privilege.
A
So
we
need
to,
I
think,
be
careful
to
account
for
subtlety,
nuance
and
decomposition,
in
terms
of
which
service
accounts
are
doing
what
for
which
use
cases,
and
it's
very
likely
that,
if
we're
running
an
agent
that
is
programming
network
stuff,
that
that
needs
a
really
really
different
level
of
permission
on
particular
resources
within
the
cluster
besides
or
other
different
than
what
I
might
need.
If
my
job
is
to
deploy
helm,
charts
or
if
my
job
is
to
enforce
a
policy.
B
C
A
Any
other
perspectives
people
want
to
present
about
that.
We're
we're
nearing
the
end
of
our
time,
so
maybe
we're
talked
through
and
and
I
will
as
an
action
item
besides
getting
all
the
videos
uploaded,
which
I
guess
there's
a
bit
of
a
backlog-
it's
surprisingly
difficult
to
get
them
uploaded
and
put
into
the
playlist,
but
I
that
is
supposed
to
change
soon.
Now,
aside
from
that,
I
will
open
up
a
use
case
document,
for
what
do
we
think
cluster
sets
should
mean
and
what
should
they
do.
A
All
right
sounds
like
we
maybe
talk
through
thanks
everybody
thanks
a
lot
richard
for
joining
us,
and
I
will
see
you
all
soon.