A
Welcome to this week's Cluster API office hours. Today is Wednesday, June 21st. Cluster API is part of SIG Cluster Lifecycle, and we're part of the CNCF, so please respect one another and abide by the Golden Rule as part of the CNCF code of conduct. Raise your hand if you'd like to speak; we'd love to hear from you. And without further ado.
A
The first thing we invite folks to do in the meeting is to introduce yourselves, if you're new or really for any reason. If you want to say hi, tell us who you are, who you work for, and what you're working on; we'd love to hear from you. So I'm going to pause and see if anybody wants to raise their hand and do that.
A
The first item is open proposals, and I think we've only got one, which is mine, so I can briefly mention that. We don't yet have LGTM consensus; previously I suggested we might be starting lazy consensus soon, but we're not quite ready to do that. So that's really all to share there. Stay tuned; the link to the PR is there.
B
Okay, so before we begin, I have a couple of slides that describe what our project is and what we want to demonstrate today. First of all, I want to thank the community for the opportunity to show a demo of our operator today. I hope you will be interested to learn what features it offers, how it works in general, and so on. My name is Mikhail Fedosin.
B
You can call me Mike. I work at New Relic, and I am one of the maintainers of this project. Before we begin, it's necessary to understand what the Cluster API Operator is. This operator allows you to manage the installation, configuration, and lifecycle in general of Cluster API providers, all done using the declarative approach adopted in Kubernetes. Like any other operator, it provides a set of CRDs (custom resource definitions) for Cluster API providers that administrators can use to manage them, and this allows you to automate provider management using GitOps best practices.
B
So let's take a look at what features this operator provides. As I said, it has a set of CRDs that allows you to declaratively describe which providers you want to use, which versions, and so on. Then, one of the most important features, I think: it allows you to upgrade and downgrade providers easily; I will demonstrate how this works in the demo. It also supports air-gapped environments, so you don't need to connect to GitHub or GitLab: you can create a ConfigMap, put all the manifests there, and they will then be used by the operator.
B
Also, it leverages the controller-runtime configuration API, which means you can fine-tune your providers; for example, you can change the metrics address. You can also add environment variables, which is important for us at New Relic, or add a sidecar container to your provider. Everything is managed there, and currently we support several dozen different options, with more being added, so you can easily configure whatever you want in your Cluster API provider. Overall, it provides a transparent and effective way to interact with various Cluster API providers on the management cluster.
B
So in the demo I want to demonstrate three things: first, installing and upgrading the core Cluster API provider; then customization of CAPA, the Cluster API provider for AWS; and finally, air-gapped installation of the Cluster API provider for Azure.
B
Can you see it now? Okay. So here I have a kind cluster with cert-manager pre-installed; it's a requirement for the operator. I also installed the operator itself.
B
At this moment, one important thing: currently I'm using the operator from the main branch, so it's still unreleased. Technically everything I present today should work on the released version, 0.3, but we've implemented a lot of fixes and features since then, so I decided it's better to use the main branch. We are going to release a new version early next week, though, so everything will be there soon.
B
Okay, here I have a k9s view of my kind cluster, here I have the logs of the operator, and here is my management console. The first thing I want to install is the core provider. One thing I want to add first:
B
Currently we have four CRDs: CoreProvider, InfrastructureProvider, BootstrapProvider, and ControlPlaneProvider, one for each corresponding Cluster API provider type, unlike clusterctl, which packs everything into one Provider CRD. Okay, so this is the template for the core provider, and this is actually all we need. We need to pre-create the namespace, and I'm going to install it in capi-system. It's the minimal configuration: you only need to provide the version, and everything will be created in your management cluster. Let's do it and create the core provider from its YAML.
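A minimal CoreProvider manifest along the lines of what the demo applies might look like this; the API group/version (v1alpha1) and the version number are assumptions based on the operator around the time of this recording, not taken verbatim from the screen:

```yaml
# Pre-create the namespace, then apply the CoreProvider resource:
#   kubectl create namespace capi-system
#   kubectl apply -f core.yaml
apiVersion: operator.cluster.x-k8s.io/v1alpha1
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi-system
spec:
  version: v1.4.1   # the only field needed in the minimal setup
```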
B
Now it parses the objects that we want to install. Again, it takes about 10-15 seconds, and the last part is installing, and you see the provider is successfully installed. We see the installed version; I can open it.
B
And we see the CAPI controller manager installed here. That's all, so it's relatively easy. What I'm going to demonstrate next is upgrading it to the next version; I think our current release version is 1.4.3.
B
So this is how you perform the upgrade: just update the version here from one to three, and now it starts reinstalling the operator; well, the provider.
B
Again it's fetching; it takes about 10-15 seconds, but normally everything happens pretty fast. Now we are installing, and done: we upgraded the core provider from one version to another. We can check the YAML again, why not, and you see that it now has the new image. Everything is done relatively easily.
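The upgrade shown here is just an edit of the same resource's spec.version; a sketch, with the resource name and version numbers assumed rather than read off the screen:

```yaml
# Bumping spec.version makes the operator fetch and roll out the new release, e.g.:
#   kubectl patch coreprovider cluster-api -n capi-system \
#     --type merge -p '{"spec":{"version":"v1.4.3"}}'
spec:
  version: v1.4.3   # was v1.4.1
```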
B
It's here in the capa-system namespace: the AWS encoded credentials, and a GitHub token. The token is optional but recommended, to prevent throttling or rate limiting by the GitHub API, so I added it here as well. Then we deploy the infrastructure provider.
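The credentials secret described here could look roughly like the following; the secret name and the exact variable keys (AWS_B64ENCODED_CREDENTIALS, GITHUB_TOKEN) are assumptions based on CAPA and clusterctl conventions, not read from the demo:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-variables
  namespace: capa-system
type: Opaque
stringData:
  AWS_B64ENCODED_CREDENTIALS: "<base64-encoded AWS credentials profile>"
  GITHUB_TOKEN: "<personal access token>"  # optional; avoids GitHub API rate limiting
```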
B
Okay, and this is what I'm going to change, at least something in it: it's the CAPA controller.
B
Hit send, okay. Here, these are the default values for the provider. What am I going to do? First, I'm going to change the metrics bind address; it's a good way to demonstrate how this works.
B
I want to change the metrics address, I want to change the synchronization period, and then I'm going to add a couple of new options, new flags, to the manager itself. They're related to AWS, but this is what we can do.
B
You see that we updated the metrics bind address, added the options here, and added a sync period equal to 500 seconds. This is just a small example of what you can change in the deployment. Technically you can change everything: init containers, environment variables, ports, affinity, everything, which is really cool. And, for example, if you update the version of your provider, all your changes are automatically applied to the newer version, so you don't need to carry them over manually.
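The customizations demonstrated here map onto the provider spec roughly as follows; the field names and values are a sketch assuming the operator's v1alpha1 API, not a transcript of the on-screen manifest, and the extra flag is purely illustrative:

```yaml
apiVersion: operator.cluster.x-k8s.io/v1alpha1
kind: InfrastructureProvider
metadata:
  name: aws
  namespace: capa-system
spec:
  version: v2.1.4
  secretName: aws-variables      # credentials secret shown earlier
  manager:
    syncPeriod: 500s             # custom resync interval
    metrics:
      bindAddress: ":8080"       # custom metrics bind address
  deployment:
    containers:
      - name: manager
        args:                    # extra manager flags (illustrative)
          "--awscluster-concurrency": "10"
```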
B
And here is the infrastructure manifest, sorry.
B
This is the version, and this is the secret name; again, it's required for infrastructure providers, for instance. But here is another section that tells our operator where it should pick the manifests from, and this is a ConfigMap: it looks for ConfigMaps with these labels, and then it picks this version when it finds it.
B
Create azure. Of course, I first need to create the capz-system namespace.
B
Yeah, and by the way, the operator automatically creates its own ConfigMaps, so it downloads data from GitHub only when it installs your provider. After that it takes the data from the local ConfigMap, like here.
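The air-gapped flow described here pairs a ConfigMap holding the provider manifests with a fetchConfig selector on the provider; the label name, data keys, and versions below are illustrative assumptions rather than the exact manifests from the demo:

```yaml
# ConfigMap with the provider components, created ahead of time (no GitHub access needed)
apiVersion: v1
kind: ConfigMap
metadata:
  name: v1.9.3
  namespace: capz-system
  labels:
    provider-components: azure     # must match the selector below
data:
  metadata: |
    # contents of the release's metadata.yaml
  components: |
    # contents of the release's infrastructure-components.yaml
---
apiVersion: operator.cluster.x-k8s.io/v1alpha1
kind: InfrastructureProvider
metadata:
  name: azure
  namespace: capz-system
spec:
  version: v1.9.3
  fetchConfig:
    selector:
      matchLabels:
        provider-components: azure
```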
B
Here it is; we have five items here, as described in Cluster API, and the final step is to create the provider itself.
B
It's part of SIG Cluster Lifecycle, a special interest group, and it's available on GitHub. We have a dedicated Slack channel, called cluster-api-operator, so please join if you haven't yet. And here is a list of the current Cluster API Operator maintainers on GitHub, so you can ping us if you have any questions, requests, or ideas; we're always happy to discuss them. Thank you for your attention.
E
Hi, well, great demo, great work everyone, and the operator looks amazing. I just want to ask, from your perspective, what is the difference, or what is the benefit, of using this operator versus, for example, having a Helm chart that installs everything? I know there is a Helm chart for the operator, but it's kind of a two-step process: you install the operator, and then you tell the operator what providers to install. So what is the payoff of using the operator pattern versus a direct Helm chart that installs everything?
B
It's much easier to upgrade and configure. As I said, for example, if you want to make some custom changes to your provider, you don't need to do anything to the operator: you just describe them in your CRs, and after that they are automatically applied to all new manifests that are released.
B
Then, air-gapped environments: technically you can achieve this with clusterctl, but here you just deploy a ConfigMap, and after that you can reuse it. And in general, I think upgrades and downgrades are much easier with this operator.
E
Yeah, so one clarifying question: would you say it's much easier around upgrades because the operator performs certain changes in a certain order? I'm assuming it's because it's coordinated, so it's safer, whereas Helm will just apply everything at once. Is that right? Yeah? Okay, exactly, that makes a lot of sense. Okay, well, thanks.
F
I was just going to say I linked to the issue that I created there, proposing that the operator would have a Helm chart that would allow either continued individual installation or a more seamless operation, so folks could ideally install one or all of them, and then we don't have tons of different Helm charts to maintain.
B
Yeah, sorry for interrupting you. I'm not sure if Alex is here, but he is working on this right now; it's available in his repository at this moment. So technically you can install everything with one click, and we are going to merge it soon. And another feature we want to add: currently you must specify a version, and that's what we are going to address.
B
And I'm not sure if Alex is here; maybe he is. Okay, just to speak about how it works, because I saw it.
E
Yeah, yeah, I'm here. I think we can make another demo in...
B
Maybe two weeks, maybe, actually, yeah; especially how to install the operator and providers using the Helm chart.
B
I don't know, maybe a year later we can discuss how to integrate it, because, as I remember, the original plan was to develop this operator in the cluster-api repository and allow clusterctl to deploy everything, so they should share a lot of things.
B
I don't know how to proceed with this right now, but we can definitely discuss it next time.
A
Long term, I would think for the user community we'd love to maybe have one tool that does the same thing, but this is definitely really great, and I think it solves more problems than the command-line interface that clusterctl provides.
B
Yeah, just to give some brief context: we have more than 200 clusters, and each one has CAPI with CAPA/CAPZ installed, and in our company we needed to find a good way to manage that. clusterctl is good, but it's better to have something automated that you can use in a GitOps way, and that's why we developed this.
A
Okay, Jonathan, you have an agenda item about MachinePool machines: the spawning of three new PRs from one PR. Do you want to take it?
G
Yeah, I just wanted to give a quick update about the MachinePool machines implementation. I've closed the big PR that I had opened and split it up into three smaller ones, so it'd be easier to test and review. I'm really hoping we can get it in before the next release, or before the code freeze, so if you have some time, I'd really appreciate it if you could give these a quick look over.
G
Yeah, so the main one is the core CAPI components one; that's the first one we want to look at. The second one is the Docker MachinePool machine implementation, which is rebased off of the CAPI one, and the last one is just the clusterctl discovery, which is a pretty straightforward change, and that one is independent of the other two.
G
Then, yeah, I think the CAPI components one is the most important, because in order for any providers to implement MachinePool machines, we would need that one open. The Docker implementation is more of a reference, just to show that it works.
A
Okay, Stefan, it's yours, about scale testing.
H
Yep. Can you open this?
H
Works? Okay, good. Yeah, I just want to give a quick update: we merged the in-memory provider, I think one or two weeks ago, and now we are busy continuously running scale tests on our local machines. If you scroll down a bit, that's basically the list of issues, and a set of PRs already came out of that, with improvements to performance. So whoever is interested in that, just feel free to tag along.
H
Take a look at the issues that are coming up; some of them are open for volunteers, let's say. We have some issues that we are immediately addressing ourselves, like opening PRs and all that sort of stuff, but for some we just opened issues, so if someone has time and wants to pick something up there, yeah. Otherwise, I think we're making good progress. We already made a lot of improvements, and I think it really helps to start looking at the situation with actual data, versus, like, I...
H
I don't know how it worked in the past, but it was mostly guessing a bit about what could be performant and what couldn't. I think it wasn't too bad, but we really found some things that are just not a good idea, especially if you have controllers which are reconciling a lot of objects, like a machine controller, and you're doing a lot of uncached calls: that doesn't work well if you get into a case where you have like one, two, three, four thousand machines, or something like that; everything gets very slow. So, yep, just wanted to mention it.
I
Yeah, this was just brought up during the release team meeting; I think Christian brought it up. I just wanted to know if these are active issues that are going to go into 1.5, and if these milestone tags are still being used. I just wanted to bring it up to the community to make sure they're not missed, as the 1.5 beta release will probably come out on July 4th, I think, and then the code freeze is July 11th, so I wanted to make sure these weren't missed.
H
I think I can speak for the second and the third one. The second one won't happen in this release, unless someone picks it up and it's very quick; I basically didn't get to it. But I think it's also not something that we necessarily have to do in 1.5, because it's basically just about dropping code paths for Kubernetes versions that we don't support.
H
I can't really speak for the first one; that's something that was opened a while ago. But if we open it, I think we were both saying that we would want to make further progress with ClusterClass, for example ClusterClass plus MachinePools, before we graduate ClusterClass in some way. So, in my opinion, we shouldn't do this for 1.5, but that's just my opinion.
I
And again, just to be clear, I'm sorry: the code freeze is July 11th, so I think Jonathan mentioned trying to get it in there before code freeze.
A
Stefan, have you said your piece on this? Yes? Cool, all right, awesome. On to provider updates: CAPZ. Matt?
F
Thanks, Jack. Nothing really to add to this, just some bug fixes, so we did patch releases; come and get them. That's it.
I
Yep, we just released a quick update to support the serverless Kubernetes offering, and I'll try to give a demo of that next week. It's basically a pretty cool feature that Sean put together, so I wanted to call that out.