From YouTube: Kubernetes SIG Multicluster 2021 Jan 19
A: We're having construction done — like re-siding our house — so during the week it's pretty much like living in a universe of pounding sounds.
C: We're following suit and getting re-siding done, just like you. Yeah, after that woodpecker went to town, I don't think we have a choice.
A: Well, we'll start the festivities shortly, but Jeremy, I'm curious to know: are you going Masonite or Hardie plank?
A: Yeah, hashtag plank chat. All right, let's go ahead and get started. Today is January 19th, 2021, and this is SIG Multicluster. We've got a couple of things on the agenda today. The first one I put on there from our discussion last time: Jen and I spent some time looking at KEPs and trying to see whether we felt the work would fit as a KEP, and we thought it really didn't.
A: I had remembered that at some point in the past they were writing KEPs for that kind of stuff, so I'm very happy not to make it a KEP. But that raises the question of where we put it if it's not a KEP. Jen and I had talked about maybe keeping it in a Google Doc, or maybe introducing a git repo under the kubernetes org that will eventually hold code and, for now, can hold the proposals.
A
So
I
wonder
if
anybody
has
any
opinion
about
that,
I
I'd
probably
strongly
prefer
to
keep
it
in
git
rather
than
having
it
kind
of
floating
around
in
a
google
doc.
But
I'm
curious
to
know
what
other
people
think.
C: Yep, I think so — sure, all right, fantastic. On a related note, before we change topics: Tim, did we figure out where to put your diagram of the various parts of a workload system?
D: No, I still have it open here on my desktop; it's in my queue of things to get to when I have time to think about it. So no, we do not have a place for it to go, and I haven't even really decided whether ASCII art or SVG or PNG or something else is the right format.
A: Okay, well, I will make a request for a git repo later on. And Yuri, you have the next agenda item.
A: I will make sure that you can share your screen — give me just a moment — there we go, you should be able to share.
E: Yes, perfect. Yeah, so I just wanted to give a short demo of an open-source project that we've been busy with at Absa for a while. It is quite relevant to the multi-cluster scenario, because it's a global load balancer.
E: Basically, we tried to build — and we built — a cloud-native global server load balancing solution that is Kubernetes-aware. It originated at Absa, a South African financial organization, a pretty huge one. Meanwhile, we have a very strong investment in open source and overall strong open-source engineering support.
E: So the task that appeared roughly a year ago was to replace one specific vendor solution for GSLB with an open-source version that is cloud native. I won't name it, for obvious reasons, but apart from being proprietary, it actually didn't work properly for our Kubernetes clusters.
E: So we built something that is Kubernetes-aware, operates on top of standard Kubernetes primitives, and extends the Kubernetes API with a CRD. It's important to mention that it doesn't have any dedicated control cluster, nothing like that — it's actually scattered over multiple clusters. I'll get to the CRD in a bit, but here are the main concepts, with some pictures. Basically, k8gb as an operator is deployed on every cluster we want to include in the global balancing scenario.
E: It watches over the associated workloads that are enabled for global load balancing, and controlling that is actually very simple, using the special CRD of kind Gslb, which is exactly what this operator watches. The spec is pretty simple: basically it's an embedded Ingress spec plus the strategy parameters. So in this sample on the front page we're specifying an Ingress — the backend service to watch — and the associated global load balancing strategy, in this case failover.
E: We also specify a geo tag, which is really part of the k8gb configuration: we pin one of the geographically disparate clusters as the main one, and the other one acts as the secondary, in case of failover.
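To make that shape concrete, a Gslb manifest along the lines being described might look roughly like the following. This is a sketch: the host, service name, and zone are hypothetical, and the field names reflect a reading of the k8gb CRD rather than the exact resource shown in the demo.

```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: test-gslb-failover
  namespace: test-gslb
spec:
  ingress:                               # embedded Ingress spec: the backend service to watch
    rules:
      - host: failover.test.k8gb.io      # hypothetical hostname
        http:
          paths:
            - backend:
                serviceName: frontend-podinfo   # hypothetical service name
                servicePort: http
  strategy:
    type: failover                       # global load-balancing strategy
    primaryGeoTag: eu                    # pin the EU cluster as the main one; the other acts as secondary
```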
E: So yeah, that's roughly the concept, and we built it to be as environment-agnostic as possible, meaning k8gb runs a CoreDNS process as part of itself and actually responds to DNS requests on its own. It uses external-dns for only a single purpose, which I will show in the demo.
E: It only configures zone delegation in specific DNS providers — we support several of them — and the rest is self-contained and testable, so we can be pretty much assured in our pipelines that the operator behaves properly. So: only the zone delegation goes out to the external world, and the rest is self-sufficient. As for the implementation, we are based on the Operator SDK / Operator Framework — a pretty convenient project for creating operators nowadays — obviously the Go flavor, because the Helm and Ansible parts of it are not flexible enough for operating this kind of relatively complex logic.
E: Here CoreDNS, as I mentioned, is the part that responds to the actual DNS requests, and the operator dynamically modifies the DNS responses according to the load-balancing strategy. The interface to a DNS provider — we internally call it edge DNS in the project — is Route 53 or Infoblox, the enhanced one in our case; well, that's the current support.
E: Meanwhile, thanks to this design, we're pretty much provider-agnostic: even if there is no direct support for some DNS provider — let's say GoDaddy or whatever — it is enough for an operator, I mean a human operator in that case, to manually configure zone delegation to the GSLB-enabled hosts and it will work.
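As a rough sketch of the per-cluster configuration this relies on — the geo tag, the zone to serve, and the edge DNS provider used only for delegation — the k8gb Helm values look something like the following. Key names are from memory and should be treated as approximate; zones and addresses are placeholders.

```yaml
k8gb:
  dnsZone: test.k8gb.io          # zone answered by the k8gb-embedded CoreDNS (placeholder)
  edgeDNSZone: k8gb.io           # parent zone at the edge DNS provider where delegation is created
  edgeDNSServer: 1.1.1.1         # resolver used to query the edge zone (placeholder)
  clusterGeoTag: eu              # this cluster's geo tag
  extGslbClustersGeoTags: us     # geo tags of the other participating clusters
route53:
  enabled: true                  # let k8gb/external-dns manage the delegation in Route 53
```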
E: This integration basically makes the zone delegation fully automated. There's also an etcd operator as part of what we run: CoreDNS uses the etcd plugin, and we run a small dedicated etcd cluster in the k8gb namespace to populate these DNS entries — the DNS records — dynamically. And yes, as I already showed, there's only the single CRD. We integrated with three DNS providers.
E: I already mentioned them. Infoblox was the first one, because obviously we first thought about our own business challenges, and a large part of what we operate is on prem, so Infoblox is our internal solution. Then we started to integrate with more cloud-oriented and more popular solutions like Route 53, and recently we collaborated with the NS1 folks and enabled integration with them.
E: And Admiralty is a multi-cluster scheduling solution; we had a very nice collaboration and basically enabled a composition of scheduling by Admiralty and load balancing by k8gb. It worked pretty nicely.
E: So I think I'll show a very quick demo. I prepared the setup beforehand, for the sake of time. We are operating two Kubernetes clusters — one is in Europe (there are some shortcuts here) and one is in the US — and in both of these clusters k8gb is already deployed.
E: As you can see, there's the k8gb namespace where everything related to the k8gb operator's operation is running: the k8gb controller itself, the CoreDNS process, an internal external-dns that populates the etcd backend, and an external-dns dedicated to Route 53 to dynamically configure the zone delegation.
E: k8gb also uses an additional resource behind the scenes, apart from the standard primitives: the DNSEndpoint from the external-dns project. That external-dns is configured with the CRD source, so it doesn't watch Ingress and Service as in the very standard scenario for external-dns.
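Configuring external-dns with the CRD source, as just described, corresponds to flags along these lines on its container; this is a sketch, not the exact deployment from the demo, and the zone is a placeholder.

```yaml
# container args for an external-dns instance that watches DNSEndpoint objects
args:
  - --source=crd                                          # watch CRD objects instead of Ingress/Service
  - --crd-source-apiversion=externaldns.k8s.io/v1alpha1   # API group/version of DNSEndpoint
  - --crd-source-kind=DNSEndpoint
  - --provider=aws                                        # this instance talks to Route 53
  - --domain-filter=k8gb.io                               # hypothetical parent zone
```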
E: Instead, it watches this kind of resource — I'll show you the contents right now — it's kind DNSEndpoint, right? So external-dns watches for this object to appear and converges the state of the DNS provider it points at. In this specific scenario, we configure the zone delegation dynamically.
E: So basically it's the test k8gb.io zone that we delegate to the k8gb-enabled clusters. It's an NS record — and yeah, we actually had to contribute NS record support to external-dns to enable that — plus standard glue A records, which are populated for the NS entries.
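A DNSEndpoint of that shape — the NS record delegating the test zone to the k8gb-enabled clusters plus a glue A record — would look roughly like this; the zone, names, and addresses are placeholders.

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: k8gb-zone-delegation
  namespace: k8gb
spec:
  endpoints:
    - dnsName: test.k8gb.io              # delegated zone (placeholder)
      recordType: NS
      recordTTL: 30
      targets:
        - gslb-ns-eu.test.k8gb.io        # NS names pointing at the k8gb CoreDNS endpoints (placeholders)
        - gslb-ns-us.test.k8gb.io
    - dnsName: gslb-ns-eu.test.k8gb.io   # glue A record for this cluster's CoreDNS endpoint
      recordType: A
      recordTTL: 30
      targets:
        - 203.0.113.10                   # placeholder IP
```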
E: The European one does exactly the same but populates its own glue record. Just to quickly show it — I know it's a shrinking font — you see the NS record, which points to NS servers that are effectively the CoreDNS pods controlled by k8gb running in our clusters. So that's roughly the setup, and here in the test-gslb namespace we have a sample workload.
E: A standard podinfo — you know, a pretty popular testing application — is just running there, with its associated service. And so, how do we actually enable Gslb for it?
E: So here is the failover custom resource example: kind Gslb, obviously with a name and namespace, and the spec — that's exactly the one from the front page. I already applied it, so it already exists in the test-gslb namespace. There should be a couple of them, one for the round-robin strategy and one for failover. Let's look at the failover one — and feel free to interrupt me if you have questions, right in line.
E: So we have this Gslb resource. It watches the state of the pods through a kind of transitive path: the associated Ingress, which has a link to a Service, and eventually the Endpoints, and down to basically the pods' liveness and readiness probes. That's the idea behind it: it's not a standard global load balancer with end-to-end HTTP health checks — that doesn't work really well, in our experience, in a cloud-native environment.
E: Instead, we're making a very cluster-aware load balancer and operator, and we're giving control to the teams through this CRD — the ability to create specific probes for their applications and basically control the load balancer behavior that way.
E: So here we see that the status is populated with an IP address from the EU.
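Roughly what that status looks like: k8gb records per-host health derived from the probes and the healthy IPs it will answer with. Field names here are approximate, from memory of the Gslb status shape, and the host and address are placeholders.

```yaml
status:
  geoTag: eu                           # tag of the cluster reporting this status
  serviceHealth:
    failover.test.k8gb.io: Healthy     # derived from the readiness of the backing pods
  healthyRecords:
    failover.test.k8gb.io:
      - 203.0.113.10                   # EU load-balancer IP (placeholder)
```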
E: We can quickly check that: in the HTTP response of this sample app I put a message that shows the geographical location, just for convenience, and in the right pane we continuously query it, just for demonstration purposes. So currently it responds per the failover strategy, with the pinned main cluster being the EU one. And yeah, it's important to note that we are currently in the EU cluster — Europe — and we have this failover resource there, just for clarity.
E: Now we are in the US cluster, and exactly the same resource is applied there — exactly the same spec, without any modifications — but in the status it's visible that it's another cluster with a different geo tag; the primary geo tag is still Europe. So it's aware, and it also returns the failover response from its own CoreDNS process. They actually propagate the state and the information between each other, also over the DNS protocol.
E: For the sake of simplicity — this information is not sensitive — we get rid of a whole class of problems around cross-authentication and all that stuff; DNS is more lightweight for that specific purpose than some HTTP-based exchange. So we can switch back to Europe and try to emulate the failover by scaling to zero, right?
E: So let's get the deployments and scale the deployment, in this context, to zero.
E: Yeah, and we can observe the status of the Gslb. It already became unhealthy, and it's already returning the IP address for America. In the case of failover there is a small downtime in that scenario, given the limitations of the DNS protocol, and if you have some additional caching it may take a little bit longer; but basically here we are keeping a 30-second TTL, so there is unfortunately some time to update. In a global scenario, at least for us, that's not super critical.
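The emulation just walked through can be summarized like this: scale the EU deployment to zero, and within roughly the 30-second TTL both clusters answer with the US addresses only. Names and addresses are hypothetical, and the status fields are approximate.

```yaml
# In the EU cluster:  kubectl -n test-gslb scale deployment frontend-podinfo --replicas=0
# After reconciliation the Gslb status flips and the DNS answers follow it:
status:
  serviceHealth:
    failover.test.k8gb.io: Unhealthy   # EU backend has no ready pods
  healthyRecords:
    failover.test.k8gb.io:
      - 198.51.100.20                  # US load-balancer IP (placeholder)
```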
E: So if we switch to the US again and check the status — oh, interesting, it's still kind of converging, because they talk to each other, and at that moment Europe was apparently still returning one of the IP addresses. Now it has fully converged, and America also returns its own IP addresses from a front-ending load balancer — an NLB, a network load balancer, a standard one.
E: So while it tries to get there — yeah, sorry, I was just scrolling and it wasn't showing — it's already returning it properly. So it already fixed itself, pretty quickly.
E: So, is there any question at this stage?
A: It's not a question, but I wanted to point out that I think the original feature for DNSEndpoint in external-dns was added during some of the Federation v2 work.
A: It's really cool to see that you're using it, because we didn't really wind up using it ourselves — we just got it in — so I'm really glad that somebody's been able to use it.
E: Oh, thanks a lot for that, because initially, when I was putting this together, the CRD source was basically not in the documentation at all — it was a small doc somewhere in a deeply nested directory — and I was really happy to find it in the external-dns CLI help, because it solved a lot of problems for me. So yeah, kudos.
A: So the use case for this is that you want to run a global load balancer for a single app, right?
E: Well, if we actually look at the round-robin example — yeah, that's the one — here we actually have a multi-service ingress spec, and this is for demonstration purposes. It's round-robin, and it references multiple apps: one is healthy, another exists but is unhealthy, and another is non-existing. To this point that's a relatively rare case, at least for us. By the way, we're already running k8gb in production, for about half a year now.
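The round-robin resource being shown references several backends under one Gslb; a sketch of that shape follows, with hypothetical hosts and services — one healthy, one existing but unhealthy, one that doesn't exist — and field names that are approximate rather than taken from the demo.

```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: test-gslb-roundrobin
  namespace: test-gslb
spec:
  ingress:
    rules:
      - host: roundrobin.test.k8gb.io       # healthy backend
        http:
          paths:
            - backend:
                serviceName: frontend-podinfo
                servicePort: http
      - host: unhealthy.test.k8gb.io        # service exists but has no ready pods
        http:
          paths:
            - backend:
                serviceName: unhealthy-app
                servicePort: http
      - host: notfound.test.k8gb.io         # service does not exist -> reported as NotFound
        http:
          paths:
            - backend:
                serviceName: non-existing-app
                servicePort: http
  strategy:
    type: roundRobin
```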
C: That's pretty cool — that is cool. I'm curious: have you been following the Gateway API coming out of SIG Network at all? Because it looks like there's a lot of overlap, in the CRD in particular, and it might be interesting to sync up there and see how this fits.
D: V2 is the lazy name for it, but it's much bigger than that.
E: Let me show this round-robin one, which is multi-service. So one is healthy; only one is populated — for some reason it should actually be mixed... ah yeah, one of the clusters — okay, one of the applications in that cluster — is dead, so right now it returns only the American ones; and the other one is unhealthy, which is...
E: ...which is totally unhealthy, okay. Usually I demonstrate this with an app scaled down, but yeah, I think you've got the point: it's really just unhealthy versus non-existent. These are the associated statuses — Healthy, Unhealthy, and NotFound. So if we go back to Europe, which is currently scaled to zero, and double-check...
E: Okay, and here we go — so now it's flipping from cluster to cluster in round-robin fashion. And that's pretty much the two strategies we support, because we try to keep things simple from the application teams' perspective. These two strategies are currently enough for us, but we definitely want to gather more cases from the community. We have some things on our roadmap, but I believe that until we have a real consumer it's not that interesting to implement them.
C: This is really cool, thank you — yeah, thanks, this is a great demo. I really do think it would be worth showing this to the Gateway folks too, because ideally it would be possible to express your CRD via a GatewayClass, and fitting into that model would be good.
A: Have you had a lot of outside contribution on this yet?
E: Yeah, we do — we've collaborated with several external folks. The Rancher guys: we had meetings with them, got very nice feedback, and I think they wanted to package the Helm chart in their Rancher catalog, but I'm not sure whether that was done or not. Also the NS1 guys, and the Admiralty folks as well — that's where the multi-cluster scheduling integration came from; it's virtual-kubelet based and works pretty nicely with k8gb.
E: In a standard scenario, k8gb creates the Ingresses for you out of its own spec, right? But if you're already running some workloads and you already have Ingresses — which is a pretty common case at Absa — we keep the Gslb-generated Ingress and the local Ingress in parallel, just for safety. But some folks wanted something different; I think it was actually a direct Admiralty requirement.
E: They wanted to reuse the existing Ingress. I can show it in the docs, actually, because it's super simple: if you want to reuse an existing Ingress, there is the possibility to just set a special k8gb annotation, and the k8gb controller will pick it up and create the Gslb resource — so it's a bit more universal — and then it closes the loop and operates in the standard way.
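Reusing an existing Ingress, as just described, is a matter of adding a k8gb annotation that the controller picks up to generate the Gslb resource. The annotation key below is as I recall it and should be treated as approximate; host and service names are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: existing-app
  namespace: test-gslb
  annotations:
    k8gb.io/strategy: roundRobin        # approximate key; tells k8gb to create a Gslb from this Ingress
spec:
  rules:
    - host: existing-app.test.k8gb.io   # hypothetical host
      http:
        paths:
          - backend:
              serviceName: existing-app
              servicePort: http
```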
A: Awesome, this looks really cool. I have some questions about it that I'll probably ping you about offline.
A: So, I think that's it for the agenda. Is there anything else folks want to discuss while we're still here, or do you want 10 minutes back?