From YouTube: Kubernetes SIG Multicluster 2020 Jan 14
B: All right, it's two past, so I think we can go ahead and get started. Welcome, everybody, to the January 14th, 2020 meeting of Kubernetes SIG Multicluster. First on the agenda today, Jake is going to give us a demo of Razee. Before you get started, I'll just say that if you have a demo you would like to give, please don't hesitate to reach out, or just put it onto the agenda; I'm hoping we can get a demo per meeting, basically. The agenda is world-writable, so don't hesitate. All right.
A: I'm back again. Let's see if it will work now. Let me share. Yes? Sure, excellent, cool. And do-not-disturb is on, yes. All right, cool. Well, so I'm Jake Kitchener. I'm the lead architect for the IBM Cloud Kubernetes Service and Red Hat OpenShift on IBM Cloud, the two essentially managed Kubernetes offerings from IBM Cloud. Mike McKay is here with me also; he is basically the development lead for Razee and the team. And yeah, I guess, you know, we've sort of been getting interested in what the community at large is doing from a multi-cluster standpoint, and since we've open-sourced Razee we're definitely interested in getting feedback from others and hearing more about what would be useful as we move forward in our journey with this open source project. So yeah, that's basically it in a nutshell.
A: ... what got pushed out there, or opening up privileged sessions into production systems from laptops and whatever, which is obviously highly discouraged in public cloud, and going and looking to see what the heck was actually there. So, you know, necessity is the mother of invention; we rapidly figured out we needed some way to easily see what the heck is running everywhere. Yeah.
A: That's where those microservices are running, but we also have all the clusters that we're deploying and managing for our customers, and there are many thousands of those. So we thought it would be fantastic to help SRE and ops teams do troubleshooting and manage customer clusters if we had the same level of visibility for those, so we basically enabled it out of the box for every single one of our customer clusters.
A: So, you know, we sort of put our money where our mouth was and said: let's just go for it and scale this thing way up. Initially that was just visibility, and then we got to the same realization: wow, there's a lot of content that we're deploying into these clusters as part of the provisioning process for our customers, and we could really streamline the release process of those things if we leveraged Razee. As I go through the demo, you'll get to see some examples of what those things are. So after much happiness and success in internal use, we said: look, we've talked about what we're doing with a lot of our customers and other teams in IBM, and they're all very interested in this thing. Let's build an open-source variant of it and basically start pursuing the open-source path so that others can have access to this thing.
A: ... for us to make that thing sort of cleaned up for open-source use, so that it's functional beyond just our specialized environment. So yeah, we'll talk a little bit more: we'll go into some of the details of how we're using it today, what it looks like in our internal use, a little bit more information about how the open-source project works, and kind of what the future of the project looks like.
A: Yep, you know, we talked about the global footprint, so obviously this was a huge driver. I think I remember from the poll that went out that there are a lot of folks using multi-cluster for geo-distribution, and that's exactly what we're using it for: we have points of presence for the service all over the world, and therefore lots of clusters and lots of stuff to keep managed.
A: I think I'm probably going to do the demo real quick now, and then we can keep talking, because I think it'll give a better view of what the heck this thing is and what it looks like. So this is basically our internal version of Razee that we're running to manage the service. I demo this one because there's a lot more cool stuff to show you than there is in just a little open-source demo thing.
A: It gives you a much better feel of what is possible with Razee versus just a quick demo. Real quick: some of this stuff is obviously bespoke to our internal version. This kind of data would exist inside an open-source Razee implementation, but these are views that we've built into the UI that are customized to us; region names and stuff are not something that's baked into the Razee concepts.
A: It's just data that Razee has that we've built custom views on top of, so we can basically see cluster counts that we're managing in various regions around the world, and we can see what the versions of all those Kube clusters are. It's just data that we're always curious about, so it's nice to have it on the front page.
A: I would say probably about 85% of all the function that exists in this basically comes from somebody showing up at Mike's desk and saying "it would be really great if...", and that somebody might be me, I'm just saying. So yeah, that's the front page, but I think the more interesting thing is to look at the resource view. A great example would be the Armada API.
A: You know, it tells me basically what container image it's using. We have some slick stuff: it has vulnerability scan results that are included here. This is also part of why this demo is better when I show you the production thing than when I show the open source; some of this stuff, again, is customized things that we built, like the vulnerability scan integration.
A: In that timeline, initially all I could do was see all this stuff, and what you're seeing right now is the visibility side of it. I'll show you in a minute what it looks like to deploy something using Razee. But yeah, these particular things are monitored and deployed.
D: Anything that can be represented as a Kube resource. Today those are obviously the primitives, like ConfigMaps and Deployments and stuff like that, but there are also some CRDs that allow you to deploy Helm charts with Kube resources; there's a Helm chart CRD that I think Bitnami has.
A: And that brings up a good point. Basically, what we're doing is piggybacking off some of the logic that exists in the add-on manager to do reconciles, so that Razee is able to correct for drift. So essentially, you know, Bob may go log on to a system and start monkeying around with a bunch of resources that were originally deployed with Razee.
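The drift correction described here, reapplying the originally deployed desired state over whatever was hand-edited, can be sketched in a few lines. This is a minimal illustration only; the function names and the dict-based "cluster" are made up for the example and are not Razee's actual API:

```python
def fetch_desired_state(store):
    """Desired resources as last deployed (for example, from object storage)."""
    return {name: dict(spec) for name, spec in store.items()}

def reconcile(cluster, desired):
    """Overwrite any drifted resource with its desired spec; return what changed."""
    corrected = []
    for name, spec in desired.items():
        if cluster.get(name) != spec:
            cluster[name] = spec  # stomp the manual edit
            corrected.append(name)
    return corrected

# Example: someone hand-edited replicas on a deployed resource;
# the next reconcile pass puts it back.
desired = {"api-deployment": {"image": "api:abc1234", "replicas": 3}}
live = {"api-deployment": {"image": "api:abc1234", "replicas": 7}}
drifted = reconcile(live, fetch_desired_state(desired))
```

Running the reconcile on an interval is what makes Bob's manual changes temporary.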
A: Okay, but I think some other interesting things to point out here: you'll see "version" and you'll also see "LD version", which is LaunchDarkly. Part of the secret sauce of how our implementation of this works is that we use LaunchDarkly as a component behind the scenes to distribute rules information to all the clusters about what should be applied. And we'll talk more about this.
A: When we talk about the future: it's what we're using now, because that's what we got started with, but we've realized it's probably not a dependency that everybody wants to deal with. We have these remote resource concepts that we'll talk more about, but it's just not very much fun without something that can do some sort of rules-engine pub/sub stuff, and so that's some of what we're working on going forward.
D: ... it does, but it does it better, okay. Because here we just rely on some naming standards: you have to tag the images with a GitHub commit hash for the new version, you just add an annotation to your resources, and then we'll pick up that special annotation and show it as the version. So you could put a link back to GitHub, or back to Bitbucket, or back to, you know, your laptop's FTP drive or whatever, yeah.
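As a rough illustration of the naming convention just described, a Deployment might carry the commit hash both as the image tag and as an annotation that a dashboard surfaces as the version. The annotation key, image name, and URL below are invented for the example; they are not Razee's actual conventions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
  annotations:
    # Hypothetical annotation: picked up and shown as the "version",
    # linking back to the commit in GitHub, Bitbucket, or wherever.
    deploy.example.com/commit-url: https://github.com/example/api/commit/abc1234
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          # Image tagged with the git commit hash per the naming standard.
          image: registry.example.com/api:abc1234
```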
A: It's just nice to have it in the dashboard. If you're going in and debugging a problem, you're like: oh crap, something broke earlier today. What got deployed? You know, four days ago we went from this commit to this commit; well, let's go see. You click on it, you can bring up the diff, and you can basically go see directly: hey, these are the changes that were made as part of that push, which is really nice for debug.
A: Yeah, just from an operational standpoint it was pretty life-changing to have this kind of data even before we could make changes with it; just being able to see all that stuff was pretty amazing. So that is basically giving you a view of the deployments in this case. Now, the cool thing is: let's say I wanted to go push out a change for this particular microservice. This is where the rules come into play.
A: So basically, if you look through here, you'll see there are rules that say things like "cluster name starts with" or "region of cluster equals this". So basically: metadata starts-with blah blah blah, whatever predicate you define. And then you can select what build, or commit, or whatever you want to call it, meaning what version you want to have deployed out to that thing. So if it's dev-mex, and I'm like: oh, you know, I want to go roll out a new version of this thing.
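The rule shape described above, predicates over cluster metadata selecting a pinned version, can be sketched like this. The predicate names, rule layout, and cluster names are hypothetical, not Razee's real schema:

```python
def matches(cluster, rule):
    """True if the cluster's metadata satisfies every predicate of a rule."""
    for key, op, value in rule["predicates"]:
        field = cluster.get(key, "")
        if op == "starts-with" and not field.startswith(value):
            return False
        if op == "equals" and field != value:
            return False
    return True

def desired_version(cluster, rules):
    """First matching rule wins and pins the version (commit) to deploy."""
    for rule in rules:
        if matches(cluster, rule):
            return rule["version"]
    return None

rules = [
    {"predicates": [("name", "starts-with", "dev-mex")], "version": "abc1234"},
    {"predicates": [("region", "equals", "us-south")], "version": "def5678"},
]
```

So a cluster named `dev-mex-k1` would be pinned to `abc1234` by the first rule, while other `us-south` clusters fall through to the second.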
A: I can basically pick the new commit and request a change. More voodoo: we have integration with our change-management system, which is not in open source. So essentially you say: okay, I'm going to push out this change. You click start, and then it basically tells you: hey, this thing is waiting to be approved. In this case it's auto-approved, there was no change management, and then if I click "start deploy" it'll actually push that change out to that cluster.
D: So what this means is that we actually don't go out to any cluster when we do the start deploy. All we're doing is updating the rule once, directly, and any cluster that matches that rule, in this case any cluster whose name starts with dev-mex, k1 through k5, will automatically pick up the change and apply it. So this could be one, ten, or a thousand clusters, and they'll all pick it up at the same time and apply it.
D: I think someone asked a second ago what the new ID shows. The new ID typically won't update until the cluster has actually gotten the rule update, downloaded the new YAML resource from cloud object storage, applied that resource, and then sent the information about what's running on that cluster back to the dashboard here. Typically that takes about 60 seconds on average, which is pretty fast, so most of our deployments are right around that average.
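The pull-based flow just described (each cluster picks up the rule change, fetches the matching YAML from object storage, applies it, and reports back; nothing is pushed to clusters) could be sketched roughly as below. Every name here is hypothetical, standing in for an in-cluster agent, not Razee's actual components:

```python
def desired_version_for(cluster, rules):
    """Which build this cluster should run, by name-prefix rule."""
    for prefix, version in rules.items():
        if cluster["name"].startswith(prefix):
            return version
    return None

def run_agent_once(cluster, rules, storage, dashboard):
    # 1. Pick up the current rules (pull, not push).
    version = desired_version_for(cluster, rules)
    if version is None or cluster.get("running") == version:
        return
    # 2. Download the new YAML resource from object storage.
    manifest = storage[version]
    # 3. Apply it to the local cluster (simulated here as dict updates).
    cluster["manifest"] = manifest
    cluster["running"] = version
    # 4. Report what's now running back to the dashboard.
    dashboard[cluster["name"]] = version

# Updating one rule fans out to every matching cluster on its next poll,
# whether that's one cluster or a thousand.
rules = {"dev-mex": "v2"}
storage = {"v2": "kind: Deployment ..."}
dashboard = {}
clusters = [{"name": f"dev-mex-k{i}", "running": "v1"} for i in range(1, 6)]
for c in clusters:
    run_agent_once(c, rules, storage, dashboard)
```

The fan-out happens entirely on the cluster side, which is why one rule edit can update thousands of clusters at once.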
A: A good example of how this really benefits us at scale: if I go back out to the main dashboard, I've got 9,000 1.14-version clusters out there, and a CVE might get released with a fix for 1.14. Now, we don't use this for the master-side components that we manage for customers, but let's say there was a CVE in the Kube dashboard, which is an add-on that we manage through this for customers, or in the Calico stuff.
A: That's running in customer clusters, so if a CVE comes out for Calico, we would basically build the fix for it, push it out to our object storage as, basically, "here is a new version of Calico that's available", and then we would just change the rule that says: hey, if you're a 1.14 Kube cluster, you should go from Calico 3.8.1 to 3.8.2. And automatically, all 9,000 of those clusters will go pull down the 3.8.2 version and update themselves to the latest version.
B: In terms of time, let's try to wrap this up by the hour. Okay, one thing that I was wondering as we're talking about this: is there a community meeting or a community Slack people can join if they want to find out more about contributing? How do people contribute to Razee?
D: Go to razee.io. There's actually a Slack link in there for a Slack channel that we have on the IBM Kubernetes Service Slack instance; get started there. Any issues, even open pull requests, are obviously always accepted, but yeah, the "connect with us" link shown there should get you started. Now, as far as community meetings go, that's kind of part of the maturing process we are going through.
A: What we're working on right now in the community version is this idea of channels and subscriptions, which is basically: there's an API on Razee now that allows you to publish new channels, you can publish versions to those channels, and then you can create subscriptions to them for specific clusters that meet a specific set of requirements through tags, essentially. It gives us the vast majority of the power that we have today with LaunchDarkly, but without having to have all of LaunchDarkly.
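The channels-and-subscriptions model described above can be sketched as a tiny data model: channels hold published versions, and subscriptions bind a tag selector to a channel so that matching clusters receive the channel's latest version. The function names and tag values are illustrative, not Razee's actual API:

```python
channels = {}       # channel name -> published versions, newest last
subscriptions = []  # each binds a set of required tags to a channel

def publish(channel, version):
    """Publish a new version to a channel."""
    channels.setdefault(channel, []).append(version)

def subscribe(required_tags, channel):
    """Clusters carrying all required_tags should track this channel."""
    subscriptions.append({"tags": set(required_tags), "channel": channel})

def versions_for(cluster_tags):
    """Latest version from every channel this cluster is subscribed to."""
    out = {}
    for sub in subscriptions:
        if sub["tags"] <= set(cluster_tags) and channels.get(sub["channel"]):
            out[sub["channel"]] = channels[sub["channel"]][-1]
    return out

# Example mirroring the Calico CVE scenario: publish a fixed version,
# and every cluster tagged kube-1.14 picks it up via its subscription.
publish("calico", "3.8.1")
publish("calico", "3.8.2")
subscribe({"kube-1.14"}, "calico")
```

Publishing a new version to a channel then plays the same role as flipping a LaunchDarkly rule: every subscribed cluster converges on the new version on its next poll.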
D: I mean, I think we can target first quarter to have a working solution on there. And to what Jake was mentioning: today, with the Razee components, you can do what we've shown in terms of the pull-based deployments, except it's just a lot more effort, because you have to manually set up your objects or buckets and push things up there manually, and this is going to help automate that and make it easier. Yep.
A: All right. Well, like I said, I'm on the community Kubernetes Slack if you have questions, or, like I said, if you go to razee.io there is a link to our own sort of bespoke Slack for the service, and the open source project is supposed to have a channel on there, yeah.
C: Right, yeah. So I'm working for D2iQ, which you may also know as Mesosphere from the past. Basically, we are using KubeFed for one of our core components, which is the platform that allows you to deploy multiple clusters across multiple providers and create federations and stuff. So for us KubeFed is kind of a core component, and we were seeing that there was not much activity.
C: We opened a couple of PRs and they got merged, but it seems there are no active reviewers or active maintainers, based on the response time on issues and the response time on PRs. So we were offering ourselves, in this case myself, to help maintain the project and to help take care of issues and stuff, because it is one of the core pieces of our platform, of one of our products. So yeah, we want to keep using it, and we want to keep improving it.
B: Yeah, I'm definitely interested in the offer of help. I personally have not had time to devote to KubeFed lately, so maybe, Hector, you and I can have an offline conversation later today or tomorrow about the specifics, but I'm definitely interested if people want to help maintain KubeFed, and in facilitating that. Now, I think you also had a PR that you were interested in getting merged; is that right?
C: So, about the PR: I understand that it won't be in a release until there is some traction on it, but yeah, the PR is basically to use aggregated roles instead of having to create a specific role per tool and stuff. So we are using role aggregation in Kubernetes.
C: So it adds, on top of the default standard Kubernetes roles' permissions, the permissions to also touch the federated resources; that's what it does. Tests seem to be passing; they passed a couple of weeks ago. CI was failing entirely, but it wasn't really due to my changes. I also think that others probably share my need as well. Yeah, so I'm here to help and to contribute it either way. Okay.
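The aggregated-role approach described here relies on standard Kubernetes RBAC aggregation: instead of creating a specific role per tool, a ClusterRole carries an aggregation label, and the controller-manager folds its rules into an existing built-in role. A generic sketch follows; the resource name and API group are illustrative, not necessarily what the actual PR uses:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-federated-types-admin
  labels:
    # The rules below are automatically aggregated into the built-in
    # "admin" ClusterRole by the controller-manager.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["types.kubefed.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```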
B: I definitely don't want you to be blocked by a PR that's hanging, so let's talk about that offline when we connect, and figure out what the best way is for that to get merged and to move forward. Sounds good. Okay, so I think that is everything on the agenda for today. Does anybody have anything they want to raise with the group while we're here? Or you can get some time back.
B: All right, sounds like that's it for today. Thanks a lot for joining, everybody, and thanks for the demo, Jake, appreciate it. I'll just remind everybody: if you've got a demo that you'd like to give, don't hesitate to reach out on Kubernetes Slack, or the agenda is world-writable, so you can put something in there. I'll see everybody in two weeks. All right, take care, everybody.