From YouTube: Kubernetes SIG Cluster Lifecycle 20180801 - Cluster API
A: Hello, and welcome to the Wednesday, August 1st edition of the Cluster API breakout meeting of SIG Cluster Lifecycle. Today we are working on fleshing out our agenda, and our first agenda item is from Justin. It looks like you talked to some folks over at AWS, so I'm going to pick on you first, if you could give us an update there.
B: Absolutely. Yes, so, last week there was a question; I think you, Chris, suggested that there may be different rate limits on the AWS APIs when you're launching instances through auto scaling groups, as opposed to when you're launching them directly, one instance at a time. I was able to confirm that there are definitely different rate limits, at least for the top-level APIs. They didn't immediately...
A: Cool, okay, thanks for the update there. That's part of the continued discussion that we brought up last time as well regarding, you know, what types of patterns we're going to start seeing in the controllers as we move forward. Do we look at following a group approach, where we would use something like an auto scaling group, or do we look at doing instance-by-instance creation and mutation over time? We can add that to the end of the agenda today.
A: If folks have any thoughts on that as well that they would like to bring up... but right now we'll move on. Our next item is the AWS design document. If you want to; looks like we have a link there. Let's see if I can make that link work. There we go. So we can pull that up, and if you want to give us a quick overview of what you have here, that would be helpful.
D: I mainly wanted to bring it to everyone's attention. I've started looking at it now, and I think it looks pretty good. A primary question I have is: what about the cluster provider configuration? A number of the fields we're specifying for a machine, I think, should not be specified on a per-machine basis. So, little things like that. And then, now that everyone's aware of it, I guess I wonder if there's a plan for when we would agree on the design, or at least the types, and whether we have AWS representatives here to check.
E: To add to that a little bit: the work that we were doing at Heptio for our Cluster API POC on AWS just wrapped up, so we're working on documenting the lessons learned out of that, and one of the things on my plate for this week is to actually add some details about the Cluster API provider to that document, and to help provide some more detail in that document in general around where we see the proper approach to start being.
A: Okay, Jason, out of the work that you did with your POC on AWS, you mentioned you were going to put together a document. Does it touch on the API shape, and perhaps the directives that you used and didn't use, at all? And then, is that going to be shared publicly?
E: Yeah, so we're trying to balance documenting all of the lessons learned and also trying to produce the public design document. So yes, we do plan to eventually publish the lessons-learned doc, but I think the priority right now may be to go ahead and try to help evolve the design spec first. Okay.
G: So, okay, this goes back to the discussion we had a while ago about whether the cluster and the machine needed to be provided to the actuator, and whether there was a good way of linking them to each other, right? The only thing linking a cluster and a machine is: is there a cluster in the namespace, and is it the only cluster in the namespace? Otherwise the machine controller errors out, right?
G: So the idea was that maybe we could do without specifying the cluster in the call to create machines in the actuator; anything that you would normally get from the cluster would sort of be trickled down to the machine. Then you would get rid of the ugliness of what happens right now if you just add a cluster to a namespace that already has a cluster.
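A minimal sketch of that namespace-based lookup, assuming a hypothetical listClusters helper in place of the real cluster-api clientset wiring (the v1alpha1 import path matches the 2018 repo layout, but treat the whole thing as illustrative):

```go
package link

import (
	"fmt"

	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

// clusterForMachine resolves the Cluster a Machine belongs to purely by
// namespace, which is the only link available without an explicit field.
func clusterForMachine(listClusters func(namespace string) ([]clusterv1.Cluster, error), namespace string) (*clusterv1.Cluster, error) {
	clusters, err := listClusters(namespace)
	if err != nil {
		return nil, err
	}
	// The machine controller errors out unless the namespace holds
	// exactly one Cluster.
	if len(clusters) != 1 {
		return nil, fmt.Errorf("expected exactly 1 Cluster in namespace %q, found %d", namespace, len(clusters))
	}
	return &clusters[0], nil
}
```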
E: Okay, so I think another benefit you get from being able to specify those on a per-machine type is that, even if you do keep the cluster-to-machine relationship, it gives you the ability to override it, either on a per-machine, per-MachineSet, or per-MachineDeployment basis as well. So if you wanted, say, a subset of your machines to be CPU-oriented instances versus memory-oriented instances, that sort of thing.
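As a sketch of that override semantics (the type and field names below are illustrative assumptions, not the proposal's actual types), a per-Machine or per-MachineSet config, when present, wins over the cluster-wide default:

```go
package sketch

// AWSMachineProviderConfig stands in for the provider-specific config
// embedded in a Machine's spec.providerConfig.
type AWSMachineProviderConfig struct {
	InstanceType string
	AMI          string
}

// effectiveConfig applies the override semantics: any field set on the
// per-machine (or per-MachineSet) config replaces the cluster default.
func effectiveConfig(clusterDefault AWSMachineProviderConfig, override *AWSMachineProviderConfig) AWSMachineProviderConfig {
	cfg := clusterDefault
	if override == nil {
		return cfg
	}
	if override.InstanceType != "" {
		cfg.InstanceType = override.InstanceType // e.g. "r5.large" for a memory-oriented subset
	}
	if override.AMI != "" {
		cfg.AMI = override.AMI
	}
	return cfg
}
```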
F: My comment about the status was the fact that, right now, the usual practice when you have a status is that it's a separate subresource of your main resource, and in this case, with your suggestion of having the status in the provider configuration, it means that every single time you update the status...
F: Every time you update the status, it will, what was the word, it will trigger a new generation; it will increment the generation in the metadata. So I think the most proper place for this thing is to have another subresource on the machine, like a provider status, and there...
G: Sorry, it's probably not explicit in the doc, but these are separate types. One is meant to go in the provider config part of the machine spec, and the other one, the machine provider status, is meant to go in the providerStatus field of the machine status. So they are meant to go in different places, but okay.
G: Right, there are just two different types: one is meant to go into the AWS provider config, which goes in the spec; the other one, the AWS provider status, is meant to go under status. The exact shape, I guess, needs to be filled out in the doc to make it more clear that they're meant to go into different parts of the machine. Okay.
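In other words, something like the following (field names on the AWS types are illustrative; the placement is the point): one type serializes into spec.providerConfig and the other into status.providerStatus, so desired and observed state stay in their proper halves of the Machine object.

```go
package sketch

// AWSMachineProviderConfig is meant to go into Machine.Spec.ProviderConfig:
// desired state, so changing it legitimately bumps metadata.generation.
type AWSMachineProviderConfig struct {
	InstanceType string
	AMI          string
	KeyName      string
}

// AWSMachineProviderStatus is meant to go into Machine.Status.ProviderStatus:
// observed state, which, updated through a status subresource, would not
// increment metadata.generation.
type AWSMachineProviderStatus struct {
	InstanceID    string
	InstanceState string // e.g. "pending", "running"
}
```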
B: I have one sort of question, which is: suppose kops is going to use this, and there are more fields that kops wants. Is the thought that kops would embed this, or, you know, how would additional tooling extend this? Are there any thoughts on how that might work?
A: I think that's the point of the provider config: you can have your own versioned set of fields there. In this case, if we're trying to declare a standardized provider config, I mean, we could talk about the pros and cons of that approach, but ultimately, because it is an ambiguous field, we could adopt this, a portion of this, something completely different, or something that contains this and has something else in addition to it as well. So there's some flexibility there; it's such a leaky abstraction that I think we can extend it pretty easily.
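That flexibility falls out of providerConfig being an opaque value (a runtime.RawExtension in the v1alpha1 types, if memory serves). As a hedged sketch, a tool like kops could embed the standard type and add its own fields on top; KopsProviderConfig and its extra field are hypothetical:

```go
package sketch

import (
	"encoding/json"

	"k8s.io/apimachinery/pkg/runtime"
)

type AWSMachineProviderConfig struct {
	InstanceType string `json:"instanceType"`
	AMI          string `json:"ami"`
}

// KopsProviderConfig is hypothetical: it contains the standard fields
// and adds tool-specific ones in addition to them.
type KopsProviderConfig struct {
	AWSMachineProviderConfig `json:",inline"`
	InstanceGroup            string `json:"instanceGroup"`
}

// toRawExtension marshals whichever variant a tool chooses into the
// opaque spec.providerConfig field.
func toRawExtension(cfg interface{}) (runtime.RawExtension, error) {
	raw, err := json.Marshal(cfg)
	if err != nil {
		return runtime.RawExtension{}, err
	}
	return runtime.RawExtension{Raw: raw}, nil
}
```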
H: If I'm not wrong, in that discussion wasn't there already a kind of consensus that we could add an optional field on the machine spec which can point back to the cluster? The conclusion was more or less coming from the fact that, in case we want to run both of the controllers, the machine controller and the cluster controller, separately in the future, then that should also be possible. We just don't want to make it compulsory, but as an optional field it would solve the problem; we would have a link from the machine to the cluster.
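A sketch of that optional back-reference (the field name is hypothetical; no such field existed in the v1alpha1 types at the time of this meeting):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

type MachineSpec struct {
	// ...existing v1alpha1 fields elided...

	// ClusterRef optionally names the Cluster this Machine belongs to.
	// When nil, controllers fall back to the one-Cluster-per-namespace
	// convention; when set, the machine and cluster controllers can be
	// run separately without relying on that convention.
	ClusterRef *corev1.ObjectReference `json:"clusterRef,omitempty"`
}
```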
A: Yeah, I think we talked about making that an opt-in feature and solving it like how we track environment variables in other parts of the Kubernetes API definitions. I think, you know, again we're getting off into implementation detail for the controller; if the controller is going to respect such a link, so be it, that's up to the controller folks to decide. But yeah, if you have an optional field, there's another viable avenue as well.
B: And we can always evolve this, right? If someone wants to do some coding, I don't think we should block on some sort of rubber stamp. We can evolve this as we go, and I don't think we're going to declare it stable v1 immediately, so yeah.
A: If somebody is interested in opening up a pull request... I don't know, can we open up a pull request with just the proposal to the repo, without actually writing any compilable code, as a first step, and then maybe have a second PR that brings it in after we get that one through? What are folks' thoughts there?
A: I think that's a good starting point. I think having the issue for the link is a good place for folks to have that discussion, and if that ultimately turns into a PR downstream, that seems very independent of this other effort to get the AWS cloud-specific definition in, get the ball rolling there, get folks looking at the API shape like Justin mentioned, maybe actually try to code something with it, see how it works, and move forward.
A: I mean, I don't think we need to standardize on a formal proposal template yet. I think if it becomes a problem we can, you know, standardize on something then; for now, just best effort should be fine. You know, I trust most folks here have written some sort of engineering proposal before; we don't have to go for a KEP or anything like that.
C: Yeah, just to sum up the procedure we said: basically, at every point our main task is to update the design and the comments for the actuator API, so we can start implementing as soon as we can; basically anything that works, that we can iterate on, and in the meantime folks can complain and say: oh, oh no!
A: Yeah, that's why, you know, having more than one helps: I know there's the work going on at Heptio, I know there's this proposal, and it sounds like you also have one as well. That's why I suggested we do sort of a proposal as a first step that has these definitions in there; that gives us an opportunity to still have opinions and change things, and potentially start prototyping in the repos.
A: I also haven't seen that either, so I've been missing them as well. Okay, I know Robert's on a plane right now; I can ping him or try to add it myself, and then I'll bring this up again next week, and probably at the end of a couple of calls, maybe four, just to make sure folks are getting what they need out of attending these. Cool, anything else? In general, we have a pretty short call today.
B: One thing on this: I think they're actually more like per-second, like on the order of seconds of granularity. The one which everyone always hits is Route 53, which has, I think, a global limit of five requests per second. Now, I don't know how that's actually enforced, whether it's enforced over a ten-second window or something, but it's that sort of order of granularity, and that's a very low limit. Sorry, yeah, yeah.
F: But right now, for example, with the controller you can specify how many parallel workers can work on, for example, machines; each one can reconcile a single machine. So if you have five, this means that you're going to do at most five calls every second, or, I mean, it depends on how fast you're doing things. So I'm not sure that you would be affected greatly, and even in that case, we can back off.
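A sketch of those knobs using the client-go workqueue pattern of the time: the worker count caps in-flight reconciles, and a bucket rate limiter additionally caps overall throughput so the controller stays under a cloud API budget (the 5/sec figure just mirrors the Route 53 example; reconcile is a hypothetical callback):

```go
package sketch

import (
	"time"

	"golang.org/x/time/rate"
	"k8s.io/client-go/util/workqueue"
)

// newMachineQueue builds a work queue whose rate limiter combines
// per-item exponential backoff with an overall 5 reconciles/sec bucket.
func newMachineQueue() workqueue.RateLimitingInterface {
	limiter := workqueue.NewMaxOfRateLimiter(
		workqueue.NewItemExponentialFailureRateLimiter(time.Second, 5*time.Minute),
		&workqueue.BucketRateLimiter{Limiter: rate.NewLimiter(rate.Limit(5), 10)},
	)
	return workqueue.NewRateLimitingQueue(limiter)
}

// startWorkers runs n parallel reconcile loops against the queue.
func startWorkers(q workqueue.RateLimitingInterface, n int, reconcile func(key string) error) {
	for i := 0; i < n; i++ {
		go func() {
			for {
				item, shutdown := q.Get()
				if shutdown {
					return
				}
				if err := reconcile(item.(string)); err != nil {
					q.AddRateLimited(item) // retry later with backoff
				} else {
					q.Forget(item)
				}
				q.Done(item)
			}
		}()
	}
}
```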
B: But don't hold me to that. The actual rate limits are not documented anywhere. It is documented that they are in different classes for mutating requests versus non-mutating, read-only requests, and it is documented that they vary based on the load in a particular region, I believe. So if a region is under higher load, you will get a lower rate limit, but you can't actually see your rate limit anywhere, other than by trying until you hit it.
A: Yeah, I mean, I think we're going to have to solve this, as well as a number of other AWS API issues, as we start moving forward. Justin's smiling because we hit a lot of this stuff in kops, but yeah, I think we should start calling it out now, and it might even be a meaningful exercise to go through the kops backlog and see some of these other concerns that we came across while building and creating and managing clusters with kops over the past two years.
B: The huge one was Route 53, which has really low limits. The biggest other surprise was, I think, that there are some resources which you create that are very asynchronous, like network load balancers, and some IAM things, I think roles; but I don't think either of those is going to be in the sort of synchronous path, so we should be okay there. The instances seem to be relatively straightforward in terms of launching an instance, so fingers crossed, yeah.
A: Yeah, I think even, like, I know in kops we have some retry logic baked into the task resources, I forget the word; but looking at things like retries, looking at things like how we structure API dependencies, which API call do we call first, and passing things back and forth: that's going to be a lot of work to solve here, and a lot of that's going to be super specific to Amazon, yeah.
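For instance, a hedged sketch of that kind of retry logic against the AWS SDK for Go v1: since, as noted above, the only way to discover your rate limit is to hit it, each AWS call can be wrapped in exponential backoff (the throttle error codes shown are the common EC2/general ones; withBackoff is a hypothetical helper):

```go
package sketch

import (
	"time"

	"github.com/aws/aws-sdk-go/aws/awserr"
)

// withBackoff retries fn on AWS throttling errors, doubling the wait on
// each attempt; any other error is returned immediately.
func withBackoff(maxAttempts int, fn func() error) error {
	delay := time.Second
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		aerr, ok := err.(awserr.Error)
		if !ok || (aerr.Code() != "Throttling" && aerr.Code() != "RequestLimitExceeded") {
			return err // not a rate-limit error, so do not retry
		}
		time.Sleep(delay)
		delay *= 2
	}
	return err
}
```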