From YouTube: 20190130 cluster api
A: Hello, and welcome to the Wednesday, January 30th edition of the Cluster API office hours. We have a relatively light agenda today, so if you have any topics, please go ahead and add them; the doc is linked in the chat. To start with, it looks like we have an announcement that the dependencies for Cluster API have been updated to 1.13.
B: So if you have more feedback on the proposal, please put it in the document. I've heard some folks discussing what some of the pieces mean and the repercussions of those. I'll chat with Robert about this later, but I think we want to reword it so that it's mostly "this is the goal of where we want to be," and leave discussion of implementation details and how we want to get there until after v1alpha1 gets delivered.

B: But if the concepts and the goal — pretty much having different types whose responsibilities are clear — are agreed on, then we can move forward into some of the topics that are up for discussion right now. One of them is how to run multiple clusters in a namespace, and this will clear the road for that. So if you have any questions, feel free to ask or leave comments. Also, I will reword some things later this afternoon after I talk with Robert.
B: That's a very broad change. I mean, we will introduce two more API groups at this point, and the providers will also all need to change. So, given the v1alpha1 timeline — I believe we have two months — that's kind of a stretch given the backlog that we have right now. But I do think that, after we agree on where we want v1alpha2 to be...
D: This is a very small bug, and it had been pending for a long time. I think it was discussed in the last meeting, and finally we agreed to have this providerID field in the spec of the Machine object. I think there's an item to add more description for the field. So it's just an update from my side.
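As a sketch of where that field would live — the exact shape was still under review at the time, so the group, version, and value below are illustrative only:

```yaml
# Illustrative only: a Machine whose spec carries the cloud-assigned
# provider ID, so controllers can match the API object to the instance.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: worker-0
spec:
  # Opaque identifier set by the provider once the instance exists,
  # e.g. an AWS instance path like the one below.
  providerID: aws:///us-west-2a/i-0123456789abcdef0
```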
E: Yeah, so I think this is related to the splitting of the API, and I had a few thoughts that I thought I'd share while we're discussing this potential split, which could be a big change, and whether there are any other things we may consider. I've previously drafted some ideas and kept them as a draft — haven't quite gotten the draft into a shareable state yet — but I think maybe now is a good time to have a bit more of a discussion around it.
E: We sure can, yeah. So, what is the generic layer versus a provider-specific layer? I think it should answer some simple questions easily. For example, if you walk up to a management cluster that manages some set of clusters, you should be able to easily figure out: how many clusters do you have there? How many nodes are in each of those clusters? What are their versions, and how do you connect to any particular cluster?

It seems like a simple set of questions to begin with, right? So, in terms of whether there are common fields across different providers in an absolute sense, I believe there are really no absolutely common fields which would be required in all the providers, and it's really hard to define them — we've been trying to do that. I just looked at the spec fields like the CIDRs, and those CIDRs are actually required parameters currently; in my particular case, which was eksctl, I couldn't actually make good use of them. There are other sorts of CIDRs that users care about, but those are not exactly these CIDRs anyway — we can get into the specifics later, but that was one example. And the cluster DNS is another required field for all providers, which is pretty odd, I think. So I think all that's really common is just the metadata — the notion of some clusters and their names, basically — and potentially how to connect to them: the API endpoint, the CA certs, and maybe the status, whether the cluster is ready or still creating, something like that.

Essentially, I'm thinking that it would be reasonable to consider whether we have any common fields at all other than the metadata, and what does the user care about? They care about ease of use, a meaningful API, and reliable, declarative behavior. So when they look at the API, it should speak to them. When I started looking at the providers out there — I've looked at a few of them — the API didn't speak to me. There's some cluster object that has a bunch of CIDRs defined in it, and then there are some nodes that happen to represent masters, and then also actual workers, and I can't really picture the whole thing very easily. It's a pretty verbose object, and my main concern with the current approach is that, as implemented, it's not very clear what the path for the common component is, or how I'm supposed to look at the common components. Essentially, the last point on this slide is that the product of the common repo — the cluster-api repo — isn't exactly a library, and neither is it a deployable component. As far as I could tell, I couldn't see bits that you could use as a library or as a deployable component.

And I felt like trying to rely on that and figure out how you can use the common repo — the code that is in the common repo — seemed kind of fruitless, actually. If you take a look at eksctl, you will see we basically did it all from scratch, in a rather short period of time, for what we wanted — especially just the create-cluster functionality and the rest. We have other challenges to deal with, and those are not something that Cluster API people would help us with. In a way, you could think eksctl could have been a form of clusterctl, but I'm not entirely sure what exactly that would bring, and it's not that hard to create a CLI, you know. So, yeah.

Some common fields are not applicable to all providers, as I said earlier, and there is no common CLI that is actually useful to the end users. So it's not exactly clear what the common components are for. So, what is being suggested? I think these would be reasonable goals to consider: we should try to define how common components relate to provider implementations, create a common CLI, and make the common API do a simple, meaningful thing — it's not supposed to do magic.
E: So, in terms of architecture, the kind of thing that I had in mind is something where, first, there is a cluster that runs a generic cluster controller of some sort, and a provider-specific controller comes in and registers with that cluster controller. There is something useful that the cluster controller provides to the provider that simplifies the provider implementation, and this generic controller maintains a set of objects that contain metadata about provider-specific options. So when a user goes to create a cluster, they define a provider-specific object and create that, and then the provider does what it does and eventually reports back to the generic layer: OK, we have a cluster, it's called this, and here's its status and so on. So here's an example: basically, a provider comes in and says, OK, I'm called this thing —
E: — eksctld, for example — and I have these two kinds that I care about; please keep track of those for me. Basically, that's what it does. Then a user goes in and says: OK, I want an EKS cluster. I'm going to find that kind and say that I want it in this region, and I need, say, one node group in there. Then I define the node group with, say, 10 nodes, and it has an instance type parameter; everything else the user chooses to keep as default —
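A minimal sketch of what such a provider-specific object might look like — the kind, API group, and field names below are hypothetical, not an actual eksctl API:

```yaml
# Hypothetical provider-specific cluster definition; anything the
# user omits falls back to the provider's own defaults.
apiVersion: eks.example.com/v1alpha1
kind: EKSCluster
metadata:
  name: my-cluster
spec:
  region: us-west-2
  nodeGroups:
    - name: ng-1
      instanceType: m5.large
      desiredCapacity: 10
```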
E: — whatever the defaults are for the given provider. Then, in the generic layer, if they walked up to the cluster and did `kubectl get clusters`, they could see an object that represents the cluster they've just defined; it references the actual provider-specific definition, and it has the status fields. They could potentially use that, perhaps via some tool, to create a kubeconfig file or something like that, and then they'd be able to go and see where the actual object is and what it says about itself. Similarly for the node group: it would map to the common idea of a node set, and similar type information would be available for it. Note that there isn't even a single scale parameter on this, because scale isn't a single thing — you actually have the minimum, the maximum, and the desired capacity.
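So instead of one replica count, a node-group-style object would carry the three values separately — again a hypothetical shape, matching how autoscaling-backed node groups behave:

```yaml
# Hypothetical: scaling as a range plus a desired value, not a
# single number, since the actual size floats between min and max.
nodeGroups:
  - name: ng-1
    minSize: 1
    maxSize: 20
    desiredCapacity: 10
```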
E: Unfortunately, I pasted the wrong image here, but here is the actual object of a cluster from another provider that happens to have this set of fields — much more specific about networking setup, stuff around DHCP and DNS — and it has a feature for disabling SSH that's specific to this provider: basically, it just disables SSH after it's done installing all the Kubernetes components, or whatever it has to do. It happens to have cluster DNS as well. And then there is a provider-specific type definition for the kind of node that this provider can manage — it has a pool of machines which it keeps; it has, for example, a whole rack, and it can hand machines out and map them to any cluster that a user requests. So a user goes and creates a bare-metal node, and they get it fairly quickly — much quicker than some other type of node that this provider may have, which may require manual steps or something like that. So yeah, that's kind of the idea, and then obviously there's a generic definition, which is very much the same. So these are the ideas that I had in mind that may help us.
E: Well, sure. I mean, I was hoping to use the Cluster API — the product of the cluster-api repo — in eksctl, and I figured, well, it's easier to go and roll a thing of my own. It took me really very little time to actually do the first pass at it; the other things are details of the particular provider, the bits that happen to be a bother there. So the kind of thinking here is to basically provide some feedback on the ideas that crossed my mind after I'd spent some time thinking about it. I'm definitely happy to collaborate more on this, and I'm glad to see there's a movement towards shuffling things around and reworking different concepts. It's definitely good to see, yeah.
E: Thanks, yeah. So, one more thing I kind of want to talk about: the high-level thinking is that at the moment, the style of object that we have tries to be a union, essentially, and it isn't quite. I'm not entirely sure what the status of using multiple providers in a single cluster is, and whether we can confirm that works, but it seems like it could work, though there are some gotchas, it seems.
B: So, one thing that I want to add: for the deck you shared — since there were a few points you made — I think it would be great to put those into issues that distill this a little bit, so that we can sync on the issues offline and people can chime in more into the discussion of where we want to go forward. I mean, you can do it at whatever level you want, but you brought up some very good points.
C: Yeah, I'm really interested in the EKS use case. There are a number of important issues you brought up in terms of "the API should speak to you." There are certain areas right now where, for instance, "master" is implied by the existence of a field, and that's an example of where the API is not speaking to you, right? It requires you to come to the system with a priori knowledge to understand what the existence or absence of a field means for the controllers.
E: No, because we started with code that existed in the CLI that drove CloudFormation, and it did it in a particular way that we were quite happy with. I think it may be possible to just throw in a machine actuator — actually, a machine-set actuator — potentially; it's just that we have a few existing use cases that we'll port in. Managing everything through CloudFormation, and relying on the remote state to be managed by CloudFormation, seems like a good idea, so that we don't actually have to have a daemon, right? So, I mean, in fact, that's what it was all about to start with: we wanted what Cluster API doesn't cater for at the moment — being able to create clusters just based on a config file, using a CLI, without adding more to it, without having to have a controller in another cluster and that type of thing. So we kind of wanted to take the existing functionality of the CLI that we had, which basically drives CloudFormation, add config-file support to that, and then extend into a daemon controller for it later.
C: I think that makes sense. So then I just have one comment, and that's that I think you're right that we could make the API easier to understand and read, both for humans and computers. One thought I have in particular: EKS is, I think, a good example to vet out these ideas, because it's very different. The control-plane resource suggestion might be one way to handle EKS; overriding the machine-set actuator, or creating a machine-set actuator, would be another; and then there are a few other ideas. But I think it's important that we use upstream resource types for this, and not ad-hoc provider-specific ones, because otherwise integrations with the cluster autoscaler won't work. So I guess my only feedback is that I understand the desire to create your own objects — and it certainly makes things easier — but to the extent that ours diverges from upstream top-level objects, it makes it harder to build higher-level functionality, and that's something I've been concerned about with the project since the beginning.
G: So — correct me if I'm wrong, but as I understand it — basically, you would propose that each spec which is now embedded into the generic object be moved out and made a reference? So basically, whatever is common should remain common, and the high-level tools would only use the common parts. And moving the spec out as an individual object probably makes a lot more sense, because then you can have operators that act on that object without having to go through the generic object and extract it first. That's my understanding: you're basically taking out the spec as an individual object, which is provider-specific, right? Yeah. So —
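To illustrate the difference being described (both shapes here are hypothetical sketches, not the agreed design): today the provider payload is embedded in the generic object; the idea is to make it a standalone object that the generic one references:

```yaml
# Today (roughly): provider config embedded as an opaque blob inside
# the generic Cluster object.
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  providerSpec:
    value:
      region: us-west-2   # provider-specific, opaque to generic tools
---
# Proposed (hypothetical): the generic object holds a reference to a
# first-class provider-specific object that operators can watch directly.
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  infrastructureRef:
    apiVersion: eks.example.com/v1alpha1
    kind: EKSCluster
    name: my-cluster
```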
E: Exactly, yes. So I mean, what I was thinking is that, currently, we do have our own top-level object in eksctl, and the idea was to just create that to gain experience — that's our alpha-3, as it happens; we kind of virtually skipped our alpha-1 and alpha-2, in a way. And that object could basically fit into the envelope of the Cluster object as it's currently defined; it would just be the provider spec. But, you know, what's the user benefit of using that envelope? I'm not entirely sure. They just need to type a lot of things before they can use the thing, and then, whenever they copy it to create a new, slightly different object, they have to copy all of that along, and it's not entirely clear what the benefit of it is at the moment.

So I'm just trying to figure out how we arrive at a point where there is more benefit to everybody in using the common pieces, or where there's a clear path to how that evolves and benefits everyone. Right now, it was kind of hard for me to extract the benefit from looking at all the different provider implementations, and I wasn't sure why. For me it was like, well, I just need config-file support; all the other problems are things everybody agrees on — you know, a bunch of different things. So I'm just wondering: right now we kind of have some common fields, some metadata, and some bits that are nested within that sort of common envelope, and I wonder what people think about the shape we can go forward in. At the moment, I do see the proposal that is being put forward, and I think that's very good; I think this is a bit related — just put forward as an extension of the idea, you know.
F: So I think a point of order: we can bikeshed on the details here, but I think what we should probably do is akin to what the upstream semantics are. One of the problems that I've faced with Cluster API in general is that we keep on pushing the goalposts, and what I want to avoid is yet another goalpost push. So what I think makes a ton of sense — and what we do upstream for everything else — is to go through a proposal cycle. So after we get v1alpha1 out the door, go through a standard proposal cycle: in truth, it needs the democracy of open source. We make the case, you write the proposal, we vote on what we consider best, and we go forward to make that happen. Yes, I think that makes a ton of sense from an administrivia perspective, and we can —
E: Just to sort of wrap up — I gathered that there was discussion going on about, essentially, major changes to the API structure, so I wanted to forward some of the ideas that I've been sitting on for a while and see what people think, and whether I should keep working on any of that, or think of something else, or whatever. So this is good to hear. It sounds like there are a few things that we'll need to discuss in more detail, but I definitely understand the v1alpha1 priority there, yeah.
H: I filed this request — if people could take a look; I think a few people already have. The idea is to remove the Kubernetes-specific fields from the Machine object. I'm not sure — this might be implied by the splitting of the machines API from the cluster API. Is that true? Is it implied that the Kubernetes-specific fields would be removed?
B: Not the versioning field — we haven't talked about that. Also, you left a comment in the PR already; I was going to challenge this a little bit and ask: is this out of scope for this first iteration? Because I understand some people want to use Cluster API to manage any cluster.
H: Absolutely, yeah. I think so — I left a comment in the PR, and the high-level summary is that I think there's an issue with the way the actuator today does both the machine provisioning and the software provisioning, and it can be useful to separate those two. If you do separate the two, then the Machine object might be the appropriate object for doing the machine provisioning, and a separate object might be the appropriate thing for doing the software provisioning. So I think there are separate motivations: one that's been brought up is, hey, use the machines API to bring up non-Kubernetes nodes. But another motivation is to allow separating the machine provisioning from the software provisioning, which is a step toward having a common software-provisioning piece that multiple providers can use. I think, to date, most providers have reimplemented the same software-provisioning pattern — calling kubeadm, installing various dependencies — and it could be useful to have that be common, but there are other ways to do that. So I just wanted to bring this up and get eyeballs on it. That's it, thanks.
B: OK, so for this, I was thinking of keeping things as they are for now, given the proposal that's out — discussion of that is for later — and the question is how we improve the current status with the actuators. One way to solve this issue is to pretty much just add a field selector or a label to the Machine to link it to a cluster, and this was also the problem of having multiple clusters per namespace.
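A sketch of the label-based linking being described — the label key here is illustrative:

```yaml
# Hypothetical: a Machine linked to its Cluster by a label, so a label
# selector can find all machines of one cluster even when several
# clusters share a namespace.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: worker-0
  labels:
    cluster.k8s.io/cluster-name: my-cluster
```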