From YouTube: Kubernetes SIG Cluster Lifecycle 20181024 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.vtryp5oq72xt
Highlights:
- Support for phases in clusterctl
- Renaming ProviderConfig to ProviderSpec
- Demo of GitBook for Cluster API
- Updating provider-skeleton to CRDs and moving into kubernetes-sigs?
- Literal struct fields cannot be defaulted via webhook; make them pointers
- Replacing glog
A: Hello, and welcome to the Wednesday, October 24th edition of the Cluster API subproject from SIG Cluster Lifecycle. We have a decent agenda today, so let's go ahead and get started.
First up is action items from last week: there was a PR open for upstreaming the provisioning scripts, with no link in the agenda notes. Does anybody know which PR number that is offhand, or want to speak to it?
A: Okay, so that's the second week in a row; I'm gonna put your name in here so I know who to call on next time. Mark left some comments on the provisioning scripts doc last night. I don't know if we've had a chance to look at those, but it would be worth maybe taking another pass at talking about exactly what we want in the provisioning scripts that we upstream. I think we talked about the doc either last week or the week before.
A: If it was last week. Looking at the notes, I think I agree that the bootstrapping scripts are similar, but I think we should also maybe think about the bootstrapping scripts in two layers. One layer basically gets the underlying operating system up to snuff: make sure we've got the container runtime installed, make sure we have the kubelet installed, kubeadm installed, etc. The other part of the provisioning scripts is the actual cluster formation part, which, you know, for a lot of providers...
C: So as we're building out the AWS provider and we start demoing it to people, we found that there's a lot of confusion when you start talking about Cluster API and using clusterctl as a kind of entry point for it, because there are a lot of concepts that are kind of interleaved. Just during the bootstrapping process, you're having to explain the process of the bootstrap cluster and why minikube is involved in some cases, and you're also having to describe pivoting the cluster, and then also the actual Cluster API components.
A: So currently we create the bootstrap cluster, by default a minikube, and we stick the Cluster API stack onto that minikube. He's saying that he just creates a master directly and sticks the Cluster API stack onto that master. So I think we have to be a little bit careful in terms of forcing people into a particular workflow. I know that Loc and some of the other folks from VMware have also been interested in experimenting with different ways to do the bootstrapping flow.
A: I'm a little concerned about codifying what we have now before we allow the different alternatives to be explored, because there might be a significantly better way than what we're doing now. And especially if people start to expect this command-line interface to be somewhat consistent over time, or we start writing configuration files, people expect compatibility, and that might tie our hands a little bit if we find a better way to do it.
D: The trick that kops did here is feature flags. The idea is that you export an environment variable to turn the feature flag on, or something like that, and then we basically don't offer guarantees if a feature flag is on. So it lets us do this sort of stuff without committing to it, and it's pretty clear that you're stepping outside the bounds, but it still lets you get at it.
E: Another option, too: what kubeadm did for a very long time was just to say it's alpha phases, and there are no guarantees underneath anything that's explicitly declared as alpha. The command was literally "kubeadm alpha phase", and then you execute each individual phase. That's also very clear. Yeah.
A: Yeah, I think either of those approaches would satisfy my concerns. My concern is mainly that I don't want to get locked into what we think of as the current set of phases if we find a better set of phases in the future. I think breaking things down into smaller bite-sized chunks makes a lot of sense; the code is already starting to partition that way, and the people that we're talking about wanting to change different parts are basically trying to change the implementation of specific phases, I think, as outlined in Jason's issue. Maybe we do pivoting, maybe we don't do pivoting; maybe we have a bootstrap cluster, maybe we don't. So being able to selectively apply those phases and swap in your own bits for other ones makes a lot of sense.
C: Yep, so I think I'll probably go for the alpha approach. I think the feature flags may be a little bit more difficult to implement at this stage, so I'll go ahead and update the outstanding PR that I have now to implement it as clearly alpha, and then we can always adjust granularity and the individual steps as we need to.
A: For the KubeCon talk, I'm currently the only presenter signed up, but I think it would make a lot of sense to have two presenters, in particular somebody not from Google, so that we have a little bit of company diversity representing our SIG during the talk. So I wanted to call out to folks that if you are interested in being a co-presenter and you haven't reached out yet, please do so. I'll select folks by the end of the week and try to come up with some sort of fair and equitable process for selecting a co-presenter.
A: There's been more than one person who's reached out so far, so clearly not everybody's gonna get it, and we'll try to figure out a way that people feel happy with the process. It's less relevant to this group of people, but the same thing is happening on the kubeadm side: our SIG also has a deep dive at KubeCon Seattle for kubeadm, and Tim is also looking for a co-presenter for that, in a similar vein.
F: But there was a request on the PR that if we change this to ProviderSpec, we bump the version from v1alpha1 to v1alpha2, and the corollary of that is that, because we're on CRDs now, we would have to drop v1alpha1, because we can only serve one API version at a time. So I just wanted to get feedback.
H: CRDs do support multiple versions from, I think, 1.10... no, 1.9 actually. But the problem is I'm not sure about renaming fields. I think there was some very simple idea of how to do this, but unfortunately I'm not sure that you can actually rename fields; you'd have to get conversions. My understanding is you can have multiple versions as long as they're all the same.
H: There is a versions field that was added, which was kind of a breaking change, but that's not a problem. One of the versions that you specify is the storage version, and that's the one that's actually saved in etcd, yeah.
D: So v1alpha1 and v1alpha2 would have to be the same today, and the recommended workaround is that we would add providerSpec to alpha1, I guess, and alpha2 would have to add providerConfig, so we'd have both fields in both versions today. I believe that's the workaround. How is that any different from just adding those fields?
A: I think our APIs are alpha and they're likely to keep changing in subtly breaking ways, and in explicitly breaking ways, for a little bit longer as we stabilize. So I don't think there's a lot of value in waiting; I think we should get it in and start fixing the things we know need to be fixed now.
I: Okay, thank you. So I didn't want to go through the whole thing, because a lot of it's incomplete, but I had some questions that I wanted to get feedback on, to see if they were acceptable to the group. One of the first things is: do we like the idea of using GitBook? I personally think it's a great choice, because it allows us to write in markdown but still gives us pretty formatting, search, and other things like that.
I: So the gist of it is: there are one or two JSON files, but everything else is markdown. The markdown references other markdown files, so there's a certain expected structure, and then there's a command, gitbook build, which renders HTML that you then check into the repository and can serve through something like GitHub Pages. I think kubebuilder uses something called Firebase.
J: Neat, okay. In SIG kubeadm we were planning at some point to use this for writing documentation for contributors, with everybody being able to contribute to the book. If you want feedback on how this went, we should go to SIG ContribEx and ask them; I'm sure Paris Pittman and Aaron Crickenberger have knowledge of this. I think this is a good solution and definitely something we should consider.
I: We have to decide on our own audience. When you look at the kubebuilder book, they target just about every audience possible. I would kind of like to do that too, but there are some stumbling blocks. If you see the audiences that I've started out with: I talked about Kubernetes users, people that just want to deploy a Kubernetes cluster quickly so that they can deploy applications. I talked about infrastructure engineers, who may want to be able to deploy multiple Kubernetes clusters to build hybrid cloud environments. These are different.
D: Is it the case that we expect clusterctl to evolve into such a tool? Because my, maybe biased, understanding was that we treat it as more of a proof of concept, and there would be more polished tools, like GKE or kops or other things, and I'm sure kubicorn or others will evolve, that use the Cluster API, and that's what the user documentation should be about.
D: You know: what are MachineDeployments and how does that translate? That's not necessarily focusing on how the cluster first gets bootstrapped, but more on what the experience of managing machine infrastructure through the declarative API is, which should be consistent across everything.
I: I think that's a fair point. When you think of QuickStart documentation, I think it's a little much to ask people to think about what it means to delete a machine versus increase the number of replicas in a MachineSet. Really, what they need to know, in five lines or less, is: how do I get a kubeconfig so I can start my cluster? And that's where I think having common tooling around cluster deployment is extremely important. I am not convinced clusterctl is that tooling. I really don't believe it is now, and I'm not sure that it will be in the future; it could be that we have something better. But I do believe that having uniformity of deployment is part of the value of Cluster API. I think it's somewhat essential value, really.
A: One thing I was gonna say is that maybe this first audience is aspirational rather than tactical, and we say it might make sense in the future, if clusterctl evolves to the point where we are trying to explain to end users how to use it directly. But at this point we're maybe more targeting the second two audiences, which is certainly the developers, the people that are creating provider implementations, and then also maybe people that are more about cluster management than just straight cluster deployment.
A: And, as you said, I don't think we've decided as a group yet whether we have the aspirational goal to make clusterctl be that deployer tool. I think that's one possible future, where we could say we'll make it good enough to be that deployer tool.
K: We have other tools that actually exercise the Cluster API, though, to run through the lifecycle operations. Right now we have kubectl and clusterctl, but neither of those is very well documented on how to use them against the Cluster API. So I think for somebody coming into the project new, they see the API, they see the objects, but they don't know how to use it. If we don't document clusterctl, we should at least document how to use it with kubectl to start.
A: I like that. Yeah, and I'm not saying we shouldn't document clusterctl; we should document enough that people trying to develop for the Cluster API can certainly use it. I just don't know that we need polished end-user documentation, where we expect somebody who's not a Kubernetes developer, or not either familiar or trying to become familiar with the cluster-api repository, to use it. The question is whether we're really targeting that audience at this point, or whether that audience is aspirational down the road, or out of scope entirely.
A
Alright,
that's
the
other
option.
So
I
do
like
the
the
different
audiences
here
and
I
think
leaving
that
first
audience.
There
makes
a
lot
of
sense
because
we
are
gonna
start
to
get
pieces
of
documentation
that
do
apply
to
end-users
at
least
indirectly,
even
if
they're
not
using
cluster
petal
for
deployment
and
then
I
think,
maybe,
as
you
said,
we
have
a
big
gap
on
the
current
developer
section
right
now
and
there's
there's
also.
I: I don't think what we have right now goes deep enough. It's at a very high level in terms of describing what the objects are and what their purpose is, but I don't think it gives you quite the detail necessary to build your own provider. So to give you an example: I go through and I quote the code, and then I have some documentation about the purpose of the object, and you'll have something like this TODO here. This TODO is actually fairly critical.
I: If you want to use clusterctl to deploy, there are about two or three boxes like this where I feel like I would really rather not have to explain to people why they need to be done; not because I don't think it's interesting, but because I think there's a long history of tickets behind them and it would take a while.
I: So let me tangent on that point just briefly. This is a slightly condensed version of what I believe the expectation of the machine actuator, or the machine controller, is. I think this documentation is essential for someone implementing a provider, and I'd say it's concise. Maybe the language isn't perfect; I wrote it crazy late last night, and hopefully others can help improve it. The bigger concern is things like this TODO.
I
There's
more
to
dues,
but
things
like
this
to
do
so
later
on
in
the
agenda.
There
was
a
question
about
whether
or
not
the
skeleton
repository
I
created
should
be
moved
under
cumin
any
stings.
This
has
come
up
a
number
of
times,
I'm
convinced
we
should
not
do
that.
I
think
that
we
can
build
something
much
simpler,
just
using
hoof
builder
and
then
add
just
the
couple
of
two
dudes
like
the
status
annotation.
A: I'm curious, when you say to not use that and to just have the documentation here: do you think it would be sufficient for the bootstrapping documentation to say "create a repo, run these three commands, modify these three files, and now you're up"? Or is there more scaffolding we're able to provide to people, in which case having a repo to copy would give us a place to put the other parts?
I: I think there's a trade-off. I think the kubebuilder instructions, if we explain just how to use kubebuilder to create your providerSpec, et cetera, could be described succinctly. It's a manual process, but it makes it very clear what you're doing; I think you leave the process with a better understanding of why you did what you did. The problem is that it doesn't deal with these TODOs, like the status annotation, and so we have a choice there.
A: Okay, there are two things I'll say. One: I think the steps are greatly simplified now that we're using kubebuilder and CRDs. Before, trying to explain in prose what you need to do to bootstrap a provider repo was really difficult, which is why you created the skeleton in the first place.
A: I think it's a lot simpler now, and I think the rate at which new providers are being created has also declined significantly since you created the skeleton repo. We saw it used a number of times very rapidly, but the rate of people looking to fork it and create new ones has also decreased significantly. So I don't know that the effort of keeping it up to date is particularly useful for us at this point. I think that's a good point.
I: Okay, so that's primarily what I had. I could go over more of this, what I've written so far and copied from others, but I think the big question is whether I should PR it. Part of the reason we're doing this is that we've got a lot of people interested in writing documentation, and I think this would give us a good framework for others to begin contributing. There are certain areas, in particular around MachineSets and MachineDeployments, where I think Hardik and others should probably write those sections.
A: Yeah, I think I would vote yes. Maybe a different way to phrase what you're asking is: is it worth your effort to get that PR ready? Because it's gonna take some time to get it ready, and if you then find out that people don't want it, it's been a waste of your time. So you're floating it here, looking for: does anyone think this is a bad idea?
A: Is there a different way people would suggest doing it? So far in chat I'm seeing lots and lots of plus-ones and no alternate suggestions, so I think moving on to sending the PR makes a lot of sense at this point. Okay, actually, that's all I have. One corollary to that: there is the open PR that adds some markdown documentation to the repo, which I was reviewing yesterday and was about ready to merge. I don't object to merging it, but I will say that I think I've already copied the most important parts. My hope would be, no matter what the state of this is, to have a PR up by the next meeting, and hopefully even sooner, so that next week I'd like others to be able to start contributing.
I: So that's where I've been developing it now, and I think that's the right place for it: I have it under the docs directory, in a book directory, where it's all based. I think the big question is whether it lives there or in a different repo. I think it should live with the repo, so that the code is there and the links make more sense, but the big question would be: where is it hosted, right?
M: So, to use GitHub Pages, I assume you'd have to actually write the HTML to a branch and push it, because out of the box GitHub Pages only supports Jekyll, right? Yeah, it can do a very basic Jekyll build; if you want anything custom, or anything other than Jekyll, you have to commit the output HTML into a branch and push that.
M: I don't think so. I mean, we could probably use the new GitHub Actions thing in the future to extend things, I don't know. It's probably better to ask what people are building with at the moment, and maybe there is something shared we can use, because the only way I've used GitHub Pages was for really basic Jekyll stuff. Just to give you an example, we have a thing where we basically just render a readme in a prettier format.
A: Somebody mentions in chat that GitBook.com is the SaaS the kubebuilder folks are using for their hosting, so I can ask Phil if we can maybe piggyback on their account. I don't know how they're paying for that; if it's free, that would be nice. Or, Jason says, maybe it's just free for us since we're open source, which would be even better, and we could create our own account.
N: One additional question around the GitBook, more specific to the document: do we also intend to add, let's say, provider-specific sections, like for all the known providers that have implemented the Cluster API, maybe a smaller section for each explaining the provider-specific things in the same documentation? Is that also a goal or something we want to do?
J: So for kubeadm, as an example, we decided not to include any cloud-provider-specific bits in the kubeadm documentation, and the long-term plan is to delegate documentation work to the cloud providers themselves, as in: "hey, this is the link for AWS." I think in this case it should be similar. You could have a landing page explaining the overview of a cloud provider, but the provider-specific settings can be pretty huge for certain cloud providers, and ideally you're not going to want that in the book.
A: So just to clarify: are you suggesting having something in this GitBook that says "here are some supported places where you can run this," listing AWS, GCP, DigitalOcean, etc., and when you click on the AWS one it takes you to a different place that describes in detail how you configure the provider-specific parts for AWS, rather than including those inline? I think that's what you're suggesting; I just want to clarify. Yeah.
A: Yeah, I like that model. It's also in line with what we have today, where the main repo has links out to the different known provider implementations but doesn't describe them in detail. The other thing I'll point out is that in some ways it's similar to the actuator interfaces, where we're gonna need a somewhat standardized way for the providers to present the information that gets linked to.
A: We want it to be a somewhat consistent experience where, if you say "I want to see how we do this on AWS," it says "here are the provider-specific fields," instead of going to AWS and getting dumped into something very different from what you'd see for the other providers. So it would be nice if we can maintain at least some consistency there for the provider-specific bits, but that might also be hard to keep in sync long-term. Yeah.
J: I guess for the common ground between the providers, it's great to have something that is shared between them. But if you have something that is completely unique to a certain provider, it shouldn't be in the main repo's book. And also, not having provider-specific PRs in the book is going to reduce the noise, wherever the book is hosted as a repository.
A: So the next thing on the agenda we had sort of punted on, which was about the provider skeleton, but we did touch on that quite a bit during this conversation, so I filled in some notes there and I think we'll mark that as done. Let's see, next. We're running a little bit low on time, so I'll try to go quickly here. Next there's a question about defaulting with webhooks and using pointers. I linked some folks into this conversation who should know.
G: Basically, yes, because that would solve the problem. Just to quickly summarize: right now, if you have a struct field that is a literal rather than a pointer inside some type, you cannot default it via webhook. That was also confirmed by some other folks, and I think the best solution would be to simply make it a pointer. It's optional already, so yeah.
H: One thing about that: there's a comment about Deployments and strategy, where it's a literal struct, and I actually talked with one of the developers at the last KubeCon about this exact thing. They mentioned that they wanted to change it; however, it would break every single controller using those types out there, so they pretty much don't want to do it and they're stuck with it. So we can try to avoid that.
O: For our e2e tests, we set up the bootstrap cluster with our cluster controller, machine controller, and deployment. We want to share the same bootstrap cluster, and I'm wondering whether we can just deploy multiple instances of the machine controllers to the same bootstrap cluster, and whether that will work. I know the CRDs support namespaces, but I'm wondering whether anyone is already doing that.
A: That's a fair question. I know a lot of people have talked about wanting to do that; I don't know if it works. There were some open issues about making sure that machines and clusters in the same namespace were correctly linked together, so that was certainly a use case people wanted to support. I haven't actually tried it myself. Has anyone else on the call tried this to see if it works?
A: Yes, I think that works. Okay, so I think the answer, Hui, is: it may work. You should go try it and let us know, because I think it's something we want to be able to support. All right, thanks. Okay, glog: how much do we like it? And Justin has a link to another one. Oh yeah.
K: We developed a logging package for our last open source project here at VMware that allows you to do tracing through your code. It has operational tracing, so you can actually trace an operation from one component to the next component, et cetera, and you can also log to a remote syslog server.
D: I don't think we're stuck with glog. I think, at last count, we want to fix a bunch of things, and different people want to fix it to different degrees. I personally want to do something that sounds more like what you're talking about: I want to have logging integrated with tracing, in a sort of OpenTracing type of world, with tracing spans.
D: It sounds like that's similar to what you've built in your previous project. I don't know what the right way is to move forward on this. I feel like one approach would be to prototype it in a smaller project like Cluster API. Another would be to take it to SIG Architecture and try to get their input, but I don't think that would be very productive, in that without something to look at it's difficult to form an informed opinion.
D: Yes, there's a bunch of that. Yeah, I mean, I'd love to look at it; if you want, put a link in the notes and I'll certainly take a look. I think we want to avoid requiring a tracing package, but I think it would be great to interact with a tracing package, personally. It is politically fraught, because there are lots of people whose business is built around "you should use our tracing package," and so OpenTracing is a generic API.
P: No, and there are several things going on there that make it hard, because we depend on cAdvisor, and cAdvisor uses glog, so there are recursive dependencies. It's harder to extract the glog package. But I would definitely say that if you have a separate repository with just the basic minimum that you need for logging, that would be a good start, and we can prototype it either in the cluster-api repository or in the k/k repository, whichever you feel comfortable with, and maybe...
P: All the options are on the table, but at this point what I'm trying to do is fork glog as the first step, because, remember, in the golang log package there is a SetOutput method where you can redirect to anything, any other logging backend. We need something similar in glog first, so we have to fork glog first. That was the point.
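For reference, the standard library hook being described is `log.SetOutput`, which glog lacks (hence the proposed fork). A minimal sketch of what it enables:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
)

// captureLog redirects the standard library logger into an in-memory buffer
// via log.SetOutput (the hook glog is missing) and returns what was logged.
// The same call could just as well point the logger at a syslog or tracing
// writer, since SetOutput accepts any io.Writer.
func captureLog(msg string) string {
	var buf bytes.Buffer
	log.SetOutput(&buf)
	log.SetFlags(0) // drop timestamps so the output is only the message
	log.Println(msg)
	return buf.String()
}

func main() {
	fmt.Print(captureLog("hello from the redirected logger"))
}
```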
P: I was trying to make that point to Tim, and hopefully that got through yesterday, but he has to look at the PR, and then, if he's okay with that, our options open up. We could use the repository that he started, with an interface for logging; that is one option. And if Loc's stuff from VMware comes through, then we'll be able to consider that too.
P: So, Loc, the first thing you should do is, if you can, separate the code out into its own repository and throw us a URL here in the cluster-api Slack, or just DM it to me. I'll take a look and then try to see where we can go from there. Okay. And once you have a separate repository, you can also propose a PR straight to cluster-api, and we can debate about that.
A: Just to say, we're a few minutes over time, so I want to be sensitive to people if they need to drop. There was one more thing on the agenda, and I wanted to check with Sitar if it's okay to punt that till next week. Then, if people want to finish off this conversation, I have a couple of minutes before I have to run and kill the recording, but I do want to make sure people realize that the meeting time is over.