From YouTube: 20190911 - Cluster API Office Hours
C: There's nothing too much going on right now. We're adding support for additional testing, and also adding support for external etcd and a partially supplied set of cluster CAs. There's a reasonably large PR open right now that I'd love some feedback on, but otherwise nothing major there. And then for CAPT — same, no major developments.
A: All right. For the AWS provider, I've started working on rebuilding the e2e test suite for v1alpha2. I have a link to the work-in-progress PR that I have right now. As of right now it deploys all the v1alpha2 components, and that's about it, but I want to get it to where it's also spinning up a cluster as well before I remove that work-in-progress label. So if you're interested in the e2e testing process there, please provide feedback.
A: Hopefully it'll provide a decent model for other providers to follow for testing as well. And it looks like Vince added a topic there too, about adding the automated Prow image build on master PR merge. This is something that can be applicable to any of the providers, granted they are using the Kubernetes staging repositories for Docker images and the image promoter process.
A: The tooling now that creates the staging buckets also grants permissions for Google Cloud Build, and we have template jobs in place that can automate the building of images through Prow. So all of the underlying infrastructure is in place to enable this for anybody using the staging buckets now. CAPV — Andrew? Thanks.
D: I know we had a discussion about this at VMware a few months ago — or, I wasn't there for it, but Vince was — and I think at the time the decision was not to go with the image promotion model, for a few reasons. Vince can't attend, and Travis can talk about those, but it's neither here nor there; we're investigating it again to see if it fits our needs. But yeah, we've been using Prow for a bit.
D
We
just
didn't
find
it
like
the
github
actions
or
other
services
worked
as
well
for
image
promotion,
so
cat
vv5
one
was
released,
which
included
think
to
make
an
edit
here
which
included
V
copy
zero
to
one,
and
there
are
currently
outstanding
PRS
for
support
for
the
auditory
vSphere
cloud
provider.
So
I
encourage
people
working
on
CAPA
and
other
and
for
providers
to
reach
out
to
Andrew
psyche.
D: I think we're doing a lot of front-running here in CAPV for some of the things that we're doing. The vSphere cloud provider is probably one of the more advanced cloud providers, and so we're looking to change to the default — the external one — and so we're having to solve some unique problems, like how you add manifests to target clusters, but not the bootstrap cluster. And how do you version those, and how do you apply those?
D: Currently we use client-go, but we're investigating maybe using unstructured data as a way to create those. So yeah, I encourage people to reach out to Andrew. I think, eventually, all infra providers will have to go down that route.
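The "unstructured" idea mentioned above — building arbitrary manifests as plain nested maps instead of compiled-in typed structs — can be sketched roughly like this. This is illustrative only: the real work would be in Go against apimachinery's `unstructured.Unstructured`, and the manifest names here are made up for the example.

```python
import json

def set_nested_field(obj, value, *fields):
    # Walk (and create) intermediate maps, then set the leaf value.
    # Loosely modeled on apimachinery's unstructured.SetNestedField.
    cur = obj
    for f in fields[:-1]:
        cur = cur.setdefault(f, {})
    cur[fields[-1]] = value

# Build a manifest for the target cluster without importing its Go types:
cm = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "example-addon-config", "namespace": "kube-system"},
}
set_nested_field(cm, {"conf": "{}"}, "data")
print(json.dumps(cm, sort_keys=True))
```

The appeal for providers is that addon manifests can be versioned and applied without the management controller taking a compile-time dependency on every type it ships to workload clusters.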
D: We also have an open PR from the Essene. Unfortunately, he asked me to review it first thing in the morning, when I'm generally at my most inattentive, and he got 48 pieces of feedback from me — but it's support for the load balancer KEP that Moshe proposed a while back.
D: It adds support for load balancers to CAPV when it's running on VMC on AWS. It isn't hard-coded into CAPV to, you know, use this feature; it's a separate controller that we were going to use as a POC for maybe, potentially, a new service in Kubernetes — but it's something that we need CAPV to be able to do.
D: HA testing — and since our platform is VMC, this gives us that. And the last thing that I was going to type out — I honestly can't remember what it was now. I think it was related to the e2e test. I know Andrew has been working on a framework for that as well. Jason, I don't know if you and he have worked together.
D: I feel like there are just maybe opportunities for more communication. It sounds like CAPA and CAPV have been doing a lot of the same things, but I don't see the PR at the moment, so I think Andrew may have closed it and is going to reintroduce it since we refactored to v1alpha2. Yeah — if that wasn't clear, v0.5.1 is v1alpha2. So that's all I have right now for CAPV.
D: They're not really issues, and I don't really think issues work for findings — maybe a Google Doc to document, like, findings in CAPV or other infra providers that might be applicable to v1alpha3. Yeah — if there are, like, shortcomings... it's not a shortcoming, it's something CAPI can't even satisfy. I mean, it's more of a pattern than anything. It's like, how do you... yeah, it's not something...
D: I didn't necessarily think it needed a KEP — I'll leave that to Andrew or Vince, because he handled it, so he can best decide how to do that. Okay, but either way — however we document it — I totally agree with you that it definitely is something worth discussing, like common patterns. I mean, it's not necessarily part of an API contract, but moving forward there are these common patterns we're gonna see between providers, and there's clearly a lack of communication that we need to reconcile, so that we don't, like, reinvent these designs. Yeah.
E: The reason I mentioned that is because, like, I remember we talked about, like, management for when we bring up a control plane, and you mentioned, like, applying some YAML to the target cluster — which we do have, like, some kind of way to think about. So this might be a good thing to bring up there. It's a new... another use case, absolutely, yeah.
H: Yeah, so I started work this week on kind of just really plugging back in again after being out for a few weeks — CAPI stuff — but yeah. It looks like, as far as Talos goes, our provider will basically become a bootstrap provider, and hopefully we can strip out all the infrastructure things that we implemented and use the upstream stuff, so I'm looking forward to that. And yeah, we're hoping in the next month or so to have everything kind of out there. So, work goes on.
A: Okay, I was gonna say: it might be good to bring up some of that stuff when we're talking about v1alpha3, to figure out how we can automate things — especially around, like, upgrade and all of that — to help ease that burden. But as you get further along, we can tackle that as we come across it as well.
J: I linked the work-in-progress PR in the highlights, so you can follow progress there if you're interested. It's definitely very much a work in progress, so let me emphasize that. So far what I did is just, like, update all the imports and hunt down the relevant changes where things moved, and then I also updated the code to the new APIs. So today I'm hoping to run it against v1alpha1 and smooth out some of the bumps, and then after that I can actually test — so fingers crossed.
B: In terms of priority and milestone, this might be nice to try to get into the 0.2.x series, so that we fix this with a v1alpha2 minor release. What do y'all think? Yeah — I see Jason shaking his head and a thumbs up, all right, so we'll go 0.2.x. I'm thinking priority: soon — it's not critical-urgent.
C: I was just gonna say that this probably needs a little more discussion around whether this is the direction we want. I don't think I need this anymore, but if we were trying to get rid of the static client, then this issue could become sort of generic: get rid of the static client in favor of the controller-runtime client, right?
B: ...that providers might be using, and I don't know that we have an official documented policy. Given that we're pre-GA — so we're still alpha — things tend to break as we're working on stuff, but I think we probably should open up an issue to discuss what sort of library-level guarantees we want to have in Cluster API, and I don't think that's anything we can decide quickly, given how much discussion went on yesterday.
C: Maybe... maybe... I think there's a little more to think about with this issue — maybe someone who's been more familiar with the project. Okay, first...
E: Yeah, sure. So for CAPG, we redid the whole repo for this — it's like a big breaking change from what was in there before — and we moved all the old code to the release-0.1 branch. Unfortunately we haven't based a release on it yet for the v1alpha1, but we'll try to get v1alpha2 capabilities in, in time — like in the next few weeks.
E: If you have any experience with GCP and you would like to contribute, please reach out — we do need new contributors and maintainers, so this would be a great opportunity to just get involved. And lastly, I will work with Justin to kind of, like, clean up the open pull requests and issues, so just stay tuned. We don't have any docs yet either, so stay tuned for that as well.
D: I don't want to be blocked — I don't think anyone wants to be blocked — on pulling in 0.2.x or 0.3 or whatever release. I know we're alpha, right, but there's a security fix or a bug fix, or there's some other issue that's come in that requires additional technical debt in the meantime — that's the situation with which I'm concerned, you know. And Andy, to your credit — I mean, in that Slack conversation, right — you're like, "yeah, we need to avoid this, I just didn't know."
D: It really relies on your cluster controller — or some controller in your repository, right — to be the thing that updates the cluster's API endpoints, which is what that kubeconfig secret gets generated from. And so the situation is: what if you set up a load balancer in advance, and you're deploying machines with static IPs, and your load balancer is set up to point to those IPs?
D: Well, what ends up happening — sorry, with your services outside — what ends up happening is, of course, that there's no way that the cluster knows about that control plane endpoint. And so my PR does actually loop over control plane machines and look for the kubeadm config's controlPlaneEndpoint — which I don't like; I don't think I want to read that, as it assumes knowledge of those types — but I don't know a great way around this. We'd had a conversation here, and I...
D: Yeah, there's a proposal out — not really a proposal, but you can see the YAML. If we added something like that, then, you know, when we're generating manifests the cluster would know about the control plane endpoint as well, and it could set that to its API endpoint — or could set its API endpoints to that value. Since — again, Vince, I think — I kept going in loops on this, and I realized...
D: What was happening in this case was that the kubeconfig secret had the IP address of the control plane node, not the control plane endpoint specified ahead of time. So yeah, I encourage people to take a look at this, because maybe you'll encounter it with your infra provider. I don't think CAPA ever will, because it creates its own load balancers, yeah.
B: I'd add there's also just the aspect of, if you do that separation, having the infrastructure provider generate and maintain it, versus you bringing your own API load balancer URL to the equation — because even if we split it out, you still need a way for the user to supply it and for the system to use it. Yeah.
D: And I guess what I would say is that if there was a separate control plane endpoint field, then the API endpoints become strictly whatever the cluster controller puts there with respect to the actual API endpoints on the control plane nodes. There may be cases where that matches up to the control plane endpoint, but most likely they won't, and the control plane endpoint may end up being derived from one of those API server endpoints — but in CAPA's case, the control plane endpoint is going to be the ELB FQDN.
D: So yeah, I think this is, like, a good v1alpha3 issue, and like I said, we have a PR. It just requires reading the kubeadm config, which I don't like doing because, like I said, it assumes — you know, I'm going to assume the kubeadm bootstrapper. I know I could check and pull in more types if I need to, but that's what I'm doing for now. The other option is that all bootstrappers should support, like, a standard field pattern for control plane endpoint — all bootstrap providers.
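The fallback being debated above might be sketched like this. It's illustrative only — the actual PR is Go code inside the provider controllers; the `controlPlaneEndpoint` field name follows kubeadm's ClusterConfiguration, and everything else here is a made-up example.

```python
def control_plane_endpoint(kubeadm_cluster_cfg, machine_addresses):
    # Prefer an explicitly configured controlPlaneEndpoint (e.g. a load
    # balancer set up ahead of time); otherwise fall back to the first
    # control plane machine address -- which is exactly the
    # node-IP-ends-up-in-the-kubeconfig problem described above.
    endpoint = kubeadm_cluster_cfg.get("controlPlaneEndpoint", "")
    if endpoint:
        return endpoint
    return machine_addresses[0] if machine_addresses else ""

cfg = {
    "apiVersion": "kubeadm.k8s.io/v1beta1",
    "kind": "ClusterConfiguration",
    "controlPlaneEndpoint": "lb.example.com:6443",
}
print(control_plane_endpoint(cfg, ["10.0.0.4:6443"]))
print(control_plane_endpoint({}, ["10.0.0.4:6443"]))
```

A dedicated control-plane-endpoint field on the Cluster (or a standard field across bootstrap providers) would remove the need for this kubeadm-specific lookup entirely.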
K: Yeah, I have a question. So I have this requirement: we need to create labels on nodes. There's even a field on Machine — in the machine spec, named ObjectMeta — which, in the documentation, it says helps set labels on nodes.
B: Then we will set those initial node labels. It's a one-time set, at kubeadm node registration; they are not reconciled and kept up to date, and there is at least one, if not two, issues in Cluster API talking about labels. So again, for right now, if you're using kubeadm, you can just get the initial ones set, yeah.
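For reference, that one-time labeling at registration typically flows through kubeadm's nodeRegistration. A minimal, illustrative fragment — the label keys and values are placeholders, and nothing reconciles them after the node joins:

```yaml
# Illustrative kubeadm JoinConfiguration fragment: the kubelet registers
# the node with these labels once, at join time.
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-labels: "example.com/pool=workers,env=dev"
```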
B: And so I will say, as someone who has been a remote participant in a six- or eight-hour meeting in the past: generally what's happened, in my experience, is you can hear about 20% of the conversation and you can't see anything that's going on. I'm 100% in favor of trying to set up a Zoom or some other conference for people to dial in, but I will caution you that you probably won't hear much.
D: Andrew, I'll say I've done that as well. And, I mean, it's possible; it just has to be a priority and a focus of the face-to-face. And I feel like, when it is a distributed community like this, it should be a priority from the get-go to include those who can't travel for various reasons but would like to attend. So, yes — I mean, face-to-face is ideal.
A: So I will say that the output of the face-to-face isn't something that is going to be dictated down to the community. The output of the face-to-face is something to be consumed by the community, to give feedback on, and then that results in the outcome as well. So I don't want to make it seem like, you know, the face-to-face is the end-all-be-all of what v1alpha3 and the planning are going to be; it should just be a way to help facilitate higher-bandwidth conversations to get things started.
E: ...at all. But yeah, I guess the outcome will be, like, KEPs, and those will go through the same kind of iteration that we did last time — maybe a lighter one, because we already have, like, a bunch of discussion with the larger community. But yeah, definitely — the more comments, the better. I want to make everybody happy in the community and, like, involve everybody.