A
All right, I think it's recording. Welcome everyone, today is Wednesday, the 10th of March, and this is the Cluster API project meeting. Cluster API is part of the Kubernetes SIGs and we are following their community guidelines for the meeting, which basically means treat everyone as you would expect to be treated, and please raise your hand if you would like to talk and I will call on you. Looks like we don't have any PSAs today.
A
So there's been a long-standing idea to get scale-from-zero support into the autoscaler for Cluster API, and I'm getting very close to having a proof of concept to demo here. But what I wanted to do was collect the information that I've got so far, from what I've been building and from what we've been talking about, and share it with the group, so hopefully we can get some more ideas. A lot of this has been discussed before, although it was discussed last year.
A
The general idea here is that, to enable scale from zero, there's a little bit of information that the autoscaler needs to know once something like a MachineSet has had zero replicas: it essentially needs to know the CPU and the memory, and it also likes to know if any GPUs will be needed, and it uses this information to predict how many nodes it should add when it scales up. Now, we've implemented this in OpenShift, but we've done it by adding information to the MachineSets that we use, and, based on the way MachineSets and MachineDeployments reconcile and how they're owned, it's not really convenient to have cloud providers putting that information into the MachineSet directly.
A
It proposes the idea that, in the infrastructure machine templates, a user or a cloud provider could add this resource hint, which would then get reconciled into the status for MachineSets and MachineDeployments, and then the cluster autoscaler could read that information from the status and make decisions based on it. There's also the idea of an override here, so a user could add these annotations to their MachineSet or their MachineDeployment if they wanted to override whatever information was coming out of the machine template.
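For reference, a minimal sketch of the kind of capacity hint and override annotations being described; the annotation keys and field names below are illustrative assumptions, not the proposal's final API:

```go
// Sketch only: annotation keys and field names here are illustrative
// assumptions, not the proposal's final API.
package scalefromzero

import corev1 "k8s.io/api/core/v1"

// Hypothetical override annotations a user could put on a MachineSet or
// MachineDeployment to tell the autoscaler what a node would provide.
const (
	CPUCapacityAnnotation    = "capacity.cluster-autoscaler.kubernetes.io/cpu"    // e.g. "4"
	MemoryCapacityAnnotation = "capacity.cluster-autoscaler.kubernetes.io/memory" // e.g. "16G"
	GPUCapacityAnnotation    = "capacity.cluster-autoscaler.kubernetes.io/gpu-count"
)

// ResourceHint is the kind of information an infrastructure machine template
// could expose, to be reconciled into MachineSet/MachineDeployment status
// where the cluster autoscaler can read it.
type ResourceHint struct {
	// Capacity lists what a Machine created from this template would provide.
	Capacity corev1.ResourceList `json:"capacity,omitempty"`
}
```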
A
So yeah, I've tried to collect everything here. I see Yacine is raising his hand; go ahead, Yacine.
B
Yeah, I have a question, which is: do we cover cases where users might want to use any other device plugins, such as accelerated NICs, or any other resources than GPU?
A
Yeah, that is currently not supported in the autoscaler as resource callouts at the top level. That would be done more by using labels or taints to make sure that, if you had a node that was labeled with, say, an accelerated network card or something like that, all your jobs would go to those nodes. So, unfortunately, the autoscaler only makes predictions based around CPU, memory and GPU, and then it will also consider taints and labels.
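As a rough illustration of the labels-and-taints approach mentioned above (all names here are invented for the example):

```go
// Illustrative only: steer workloads that need an accelerated NIC onto the
// matching nodes with a label (for selection) and a taint (for exclusion).
package nicscheduling

import corev1 "k8s.io/api/core/v1"

// Label a node group might carry, and the taint that keeps other pods off it.
var (
	nicLabel = map[string]string{"example.com/accelerated-nic": "true"}

	nicTaint = corev1.Taint{
		Key:    "example.com/accelerated-nic",
		Value:  "true",
		Effect: corev1.TaintEffectNoSchedule,
	}
)

// PodScheduling returns the matching pod-side settings: a node selector for
// the label and a toleration for the taint.
func PodScheduling() (map[string]string, []corev1.Toleration) {
	return nicLabel, []corev1.Toleration{{
		Key:      nicTaint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    nicTaint.Value,
		Effect:   nicTaint.Effect,
	}}
}
```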
A
So I guess the call to action here is: if anyone would like to take a look and add comments, please do. I just wanted to try and get this in before our March 14th deadline, in hopes that this might be included in the next version.
A
I'm seeing a question here from Ace in chat: curious how this would account for capacity versus allocatable, and also node similarity in the cluster autoscaler, which is per cloud provider. So, the autoscaler accounts for capacity in that, when you create the cluster autoscaler, you tell it the max number of nodes that you'll allow it to scale to, and you tell it the max CPU and memory, like cores and memory, that it can consume in your cloud.
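For reference, those limits correspond to the upstream cluster autoscaler's global flags; a sketch with placeholder values:

```go
// Example only: the kind of global limits passed to the cluster-autoscaler
// binary. The flag names exist upstream; the values here are placeholders.
package main

import "fmt"

func main() {
	args := []string{
		"--max-nodes-total=50",  // never scale the cluster beyond 50 nodes
		"--cores-total=0:256",   // min:max CPU cores across autoscaled nodes
		"--memory-total=0:1024", // min:max memory in GiB across autoscaled nodes
	}
	fmt.Println(args)
}
```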
A
So the autoscaler will use those as its upper limits when it's making these calculations. The other side of this is that, presumably, you'll have some sort of resource quota on the cloud that you're operating on, so if you didn't set those, or you set them too high, you would most likely only get stopped by your quota.
A
Okay.
So,
let's
see
sorry
node
allocatable
versus
node
capacity,
talking
about
cube
reserved
for
node
templates
when
doing
scale
from
zero,
not
pool
size
yeah,
I'm
not,
I'm
not
sure.
I'm
following
that
does
it
does
anybody
else.
We
can
follow
up
offline
for
sure.
I'm
just
I'm
not
sure.
I'm
following
the
distinction
that
you're
making
there
is
anyone
else
is
anyone
else
following
that.
C
I'm not sure I'm following either, but are you talking about the potential for using some resource for storing information outside of the node for scale from zero? Or is this about something completely different?
D
Yeah, I think I've seen this in EKS, amongst other things: the kube-reserved capacity. You might have the memory that's available on the machine, but a certain amount of it is going to be used for system services. How do we take that into account?
A
Yeah, okay, that I understand now. No, these are just the resources that the machine, the instance itself, would provide; it does not have any subtraction for the Kubernetes processes that run on top of that. And, well, the cluster autoscaler uses the same scheduler that comes out of core Kubernetes, so if the core Kubernetes scheduler takes that into account when it does pod scheduling, then that would also occur in the cluster autoscaler.
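For context, the distinction being asked about is roughly the standard kubelet accounting, sketched here with invented numbers:

```go
// Rough illustration of node capacity vs. allocatable, with invented values:
// allocatable = capacity - kube-reserved - system-reserved - eviction threshold.
package main

import "fmt"

func main() {
	capacityMiB := int64(16384)         // what the instance type provides
	kubeReservedMiB := int64(1024)      // reserved for kubelet/container runtime
	systemReservedMiB := int64(512)     // reserved for OS daemons
	evictionThresholdMiB := int64(100)  // kubelet eviction headroom

	allocatableMiB := capacityMiB - kubeReservedMiB - systemReservedMiB - evictionThresholdMiB
	fmt.Println(allocatableMiB) // 14748 MiB is what the scheduler can place pods against
}
```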
A
Again, Jason's adding a good point here, which is: in the autoscaler proper, these values are usually hand-coded, or hard-coded, into lookup tables by the other cloud providers. A couple of cloud providers might do it dynamically through the cloud APIs, but for the most part it's just a lookup table.
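The shape of those static tables, purely as an illustration (instance names and numbers are examples, not taken from any provider's real table):

```go
// Illustrative only: the kind of static instance-type -> capacity table that
// several cluster-autoscaler cloud providers keep hard-coded.
package main

import "fmt"

type instanceCapacity struct {
	CPU       int64 // cores
	MemoryMiB int64
	GPU       int64
}

var capacityTable = map[string]instanceCapacity{
	"m5.large":   {CPU: 2, MemoryMiB: 8192, GPU: 0},
	"p3.2xlarge": {CPU: 8, MemoryMiB: 62464, GPU: 1},
}

func main() {
	fmt.Printf("%+v\n", capacityTable["m5.large"])
}
```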
E
Yeah, just really quickly: I read through this before the meeting and I also made one minor comment about MachinePools in general. I like the idea. The only concern I would have is that some cloud providers, as you mentioned, have specific implementations in the autoscaler and also support scaling to zero there with, for example, whatever cloud-provider-native implementation you have, and we have MachinePools for that.
E
The concern I would have is re-implementing functionality that is already in the major cloud providers again in Cluster API. That's a little bit smelly to me, because I feel like then we have two different places where we basically implement the same thing. Obviously, I know that for Red Hat that might be a different concern, because they're most likely not running on the major cloud providers, but that is something we need to decide: how we go about it in Cluster API.
A
Go ahead.

C
Yeah, I just wanted to comment on one of the potential things in terms of duplication of code locations for the scaling purposes: there's also another location.
C
I think there's also work going on around using out-of-tree code, and we probably should have a look at that, because they are trying to centralize cloud management in a single location and the autoscaler bits could be in there. But it definitely needs some specific resource which will allow storing the scale-from-zero information, whether that would be MachinePools or what Mike McCune is currently proposing.
C
I don't know, but just to keep the conversation going.
A
Yeah, that's a good point, Daniel; it'd be interesting to see if there are some common efforts we could use. David, go ahead. Thank you very much. I definitely feel the desire to have the cloud provider, or the clouds, be able to have their own MachinePool implementations and try to do it the best way they know how. We do run into difficulties with this at times.
A
So
it's
some
of
them
don't
take
into
account
that
they're
working
with
kubernetes
so
and
we
don't
always
get
all
the
events
that
we
need
so
sometimes
that
push
model
where
we're
telling
the
provider
hey,
we
need
to
you,
know,
coordinate
drain
this
node
and
then
remove
it.
Instead
of
just
telling
the
provider
hey
lower
my
capacity
by
one
and
not
let
anybody
know
that
workload's
getting
dumped
here,
it
really
depends
on
like
what
the
provider
offers
you
to
be
able
to
hook
into
those
kind
of
events.
A
I
mean
excellent
point
yeah
that
that
is
a
huge
issue.
You
know
the
auto
scaler
will
try
to
drain
nodes.
You
know
that
it's
removing
and
whatnot,
but
yeah
you're,
absolutely
right
for
some
of
those
abstractions.
It
doesn't
make
sense-
and
I
want
to
you
know-
jason
said
something
here
in
in
chat
that
I
want
to
call
out,
because
I
tried
to
capture
this
as
one
of
the
alternatives,
and
I
think
it
is
the
way
forward.
So
long
jason
says
longer
term.
A
I
would
love
to
see
us
have
an
extension
mechanism
that
would
allow
for
provider
implementations
to
be
able
to
provide
this
data
directly
rather
than
have
to
try
and
do
this
through
user-provided
values.
I
I
completely
agree
with
you
jason,
and
I
think
the
last
alternative
that
I
called
out
here
is
something
that
mike
gugino
was
was
kind
of
getting
at
in
the
other
auto
scale
from
zero
proposal,
and
that
is,
I
think,
that
the
future
path
for
this
is
for
us
to
create
an
auto
scaling.
A
You
know
crd
or
reuse,
one
that
already
exists
that
can
sit
between
the
cluster,
auto
scaler
and
cluster
api
and
then
all
the
information
you
know
about
scaling
contracts
and
whatnot
can
exist
in
that
object,
so
that
it's
not
tied
directly
into
our
infrastructure,
but
it
can
also
speak
directly
to
some
of
these
needs.
We
have
about.
You
know
different
different,
auto
scaling,
backings
and
whatnot.
So
I
would
like
to
see
us
go
that
way
long
term,
but
I
think
it's
a
really
big
step
to
take
for
the
first
implementation.
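A very rough sketch of what such an intermediate autoscaling resource could look like; the type and field names are invented for illustration and are not part of any existing proposal:

```go
// Invented for illustration: a hypothetical CRD sitting between the cluster
// autoscaler and Cluster API, carrying the scaling contract and capacity info.
package autoscalingcrd

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type NodeGroupAutoscalingSpec struct {
	// ScaleTargetRef points at the scalable Cluster API resource
	// (e.g. a MachineDeployment or MachineSet).
	ScaleTargetRef corev1.ObjectReference `json:"scaleTargetRef"`
	MinReplicas    int32                  `json:"minReplicas"`
	MaxReplicas    int32                  `json:"maxReplicas"`
	// NodeCapacity is what a node from this group would provide,
	// used when scaling up from zero.
	NodeCapacity corev1.ResourceList `json:"nodeCapacity,omitempty"`
}

type NodeGroupAutoscaling struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              NodeGroupAutoscalingSpec `json:"spec,omitempty"`
}
```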
A
So
I
think
we
could
do
this
now
and
then
we
could
work
towards
kind
of
the
second
implementation.
If
people
you
know
if
we
agreed,
that
was
the
best
way
to
go.
F
Hey Mikey, it might be worth at least putting these bits into the proposal, maybe as future work that you're thinking about, or maybe as an alternative here, because if that's where we want to go, we'd probably want to have that written down, and then do it as an amendment later.
F
Yeah, maybe something more concrete, like if we want to get it done, maybe someone else could take it over. I just want to make sure that we don't forget about it, you know.
A
Okay,
for
sure
I
mean
the
this-
is
kind
of
modeled
off
the
approach
that
we've
taken
in
openshift
to
how
we
deploy
the
cluster
auto
scaler.
So
some
of
it
is
kind
of
similar
to
things
you
know
we
have
a
cluster
auto
scaling
operator
that
kind
of
watches
some
of
this
stuff.
So
I
this
is
kind
of
where
some
of
the
ideas
are
coming
from
for
us,
but
yeah.
A
All right, I'm not seeing any hands, so let's move on. Cecile's got questions about the load balancer provider proposal. I'm not going to click on this link because Slack blows up, yeah.
G
Oh yeah, no, this is more for reference, if anyone wants to see the thread. But yeah, so we started talking in Slack; I think this came up from Joel asking about supporting multiple load balancers in providers, as this might be a use case for external infrastructure providers, and this, I think, raised a bunch of questions and started a whole discussion, so I just wanted to bring it up here to clarify some stuff.
G
I haven't read the last few messages, so I know Jason might have answered some things already, but the first question I had is: first of all, I haven't really followed up with this proposal recently, so what's the status? Are we aiming for v1alpha4? Is it more longer term? Is it ready for review soon? What's going on with that, if anyone can answer?
H
Yeah, I can get into some of that, at least as far as the status of the proposal. Anybody who has a chance to review it, please do. I will be picking back up on addressing any issues and trying to get it moved over to PR form as soon as possible, targeting the end of this week, mainly because I've been distracted dealing with some firefighting on some unrelated projects.
F
Just a quick question for you before that: I haven't had time to review it yet, but are there any breaking changes in here?
F
Okay, so do we want to consider this a release blocker for 0.4.0, or potentially for release 0.5.0? Now, for 0.4 again, that would be a first, but yeah, we could do that too.
G
The other question I had was: how does this overlap with what's going on with the external infrastructure proposal, and has there been discussion between the two groups on how to make the changes work together?
A
All right, so I see Vince and then Naadir; go ahead.
D
Yeah, it speaks directly to the external infrastructure proposal; it actually came out of a discussion with me and Joel. So Joel spotted that, for Cluster API AWS, for example, the machine controller today is responsible for attaching control plane machines to the ELB, and OpenShift rightly wants to not use ELBs anymore and use the newer network load balancer, I think, and one of the ways we can do this is actually through the separate network load balancer construct.
D
So there is a question, however, around whether we are going to continue supporting the current method as a sort of upgrade strategy. So for OpenShift there's a question of adding that implementation into CAPA today, and whether they'd have to go down the full new route or whether or not we just sort of cut over to the external infrastructure.
F
To ask a follow-up: my only concern here is about the breaking changes, and, given that 0.4.0 is coming up quickly, we could reconsider pushing it back. I don't know; I mean, this would be the third time that we do that, and that's okay if we want to do that, I just want to point it out, which means we have to get better at planning. But that's another point; maybe we can discuss it another time.
F
It would be great if we had a point person, and given that you spoke up, we're probably going to nominate you, Naadir, just joking. But the point person would be kind of looking at the experience between these two proposals and making sure that they fit nicely together, as you mentioned. But I would like to know if we want to keep this as a release blocker, possibly by this week, or if we want to wait for the next version.
C
Yeah, I just wanted to comment on the part about the AWS provider implementation. So, in terms of correlation between those two proposals, I think they shouldn't overlap very much, but eventually we definitely want to move to the load balancer external proposal. For now, though, I don't think that this would be a breaking change.
C
If
we
implement
an
additional
field
in
the
machine
for
optional
configuration
load
balancers,
that's
how
I
see
it
and
put
like
users
would
usually
experience
and
user
will
use
their
default
classic
load,
balancing
implementation
unless
they
explicitly
specified
that
there
was
used
the
new
and
all
by
implementation,
and
essentially
this
will
mean
that
a
management
construction
proposal
would
be
possible
as
a
placement
for
creators
and
albert
and
set
the
machines
or
set
their
machines
back
with
those
fields
enabled.
A
Thanks, Daniel. So, Joel, go ahead.
I
Now, at the moment, the only way you can do this with AWS, if you look at the spec and the status and stuff on the resource, is that it has to be an ELB, and also there's only one of them, which is a bit... we can discuss that later. And yeah, at the moment it seems like there's logic in all of the AWSCluster, AzureCluster, whatever-cluster, to add those attachments to the machines as they get created, but only for control plane machines.
I
That is fine. I've completely lost track of where I was going with this, sorry.
A
Cecile has her hand up; why don't we pass the mic?
G
Yeah, so this kind of ties into the other question I had, which was... so there's a bit of confusion around what the plan is. At first I thought the proposal was to enable providers who don't have native load balancer support, like vSphere, to leverage external load balancing.
G
But
now
it
seems
like
we're
talking
about
completely
replacing.
You
know
how
load
balancing
happens
in
providers,
and
so
my
question
is:
how
do
we
make
that
painless
for
the
infrastructure
providers
that
don't
need
that
external
load,
balancing
like
azure,
for
example,
just
at
the
top
of
my
head,
and
also
like?
How
do
we
make
sure
that
whatever
we're
doing
is
like
going
to
be
generic
enough?
G
That
it
allows
users
to
configure
load
bouncers
with
what
is
available
in
each
cloud
provider,
because
it
seems
like
a
lot
of
the
examples
in
the
proposal
are
all
based
on
aws,
and
so
I'm
just
a
bit
concerned
that
we're
doing
things
in
a
very
aws
way.
Right
now
and
not
really
looking
at
how
other
providers
might
be
doing
things.
A
Yeah, great questions. I see Daniel, then Vince, then Yacine, so Daniel, go ahead.
C
I basically forgot to lower my hand, but in my opinion, if we do an API field addition to machines, it shouldn't be breaking for any providers so far, and the load balancer proposal could go its own way and definitely implement some other implementation, like an external implementation; it depends on the direction it is all going. So far this is not going to be a breaking change or a blocker for the release; this is just a feature which could be implemented.
A
Okay, thanks. Vince, go ahead.
G
Not an AzureCluster... it's not that I'm opposed, it's just that, you know, right now things are working, and so, if we're not really adding any functionality, then we want to make it as painless as possible, right? We don't want to add a bunch of extra work, and it's not just Azure, just in general, like GCP too: for providers that don't need it, it'd be nice to not require re-implementation or anything like that.
F
Yeah, I think that makes sense. I need to really go deep down into this proposal, but I would assume that we need a way for the current setup of things to keep working and then remove that function later, so, like, provide a path to actually get to the external load balancer. Otherwise we could break things; I mean, we could even make things more complicated if we require all users to specify a load balancer for their providers.
F
So
maybe
you
know,
one
of
the
answers
could
be
like
that,
like
azure,
for
example,
that
yes,
would
create
one
for
users
if
one
is
not
specified,
but
I
would
love
if
that
happens
in
like
at
least
two
releases,
which
would
give
us
a
way
like
an
upgrade
path.
That
makes
sense.
A
All right, thanks, Vince. Next Yacine, and then Joel.
B
Yeah, so last time we were discussing this with Jason and Naadir, if I recall correctly, one of the points was to actually preserve the way we read the control plane endpoint from resources.
B
So technically, if an infrastructure provider wants to use the same behavior they were using before, they can still set the control plane endpoint on the infra cluster, and then it gets copied. Now, where the control plane endpoint is set might be changed.
B
But that's the only change that we would really require as a breaking change. And yeah, Jason or Naadir, feel free to chime in.
A
Yeah, I think Jason actually mentioned that in chat; he mentioned the concept of having the fallback. But yeah, all right, go ahead, Joel.
I
Sorry, I'm just going to quickly jump back to where I was earlier. So something Daniel said was about having load balancers on machines; I just wanted to clarify that I don't think any of this proposal is talking about adding load balancer attachments or anything to machine specs. It's more about having some sort of separate resource that is then going to have the configuration and do that instead. So the point I was trying to make was that at the moment the machine controller is attaching control plane machines to load balancers.
I
So
like
there
shouldn't
be
any
changes
to
machines,
it's
more
around
the
control
plane
resource
and
how
the
endpoints
are
coming
from
that
and
then
I
also
had
a
follow-up
question
for
cecile,
which
was
just
sort
of
clarifying-
I'm
not
very
good
with
azure,
but
like
some
of
the
examples
in
the
proposal
we're
talking
about,
you
know
having
vsphere
deployed
on
aws
and
then
using
an
aws
load
balancer
for
that.
So
you'd
want
to
use
like
the
vsphere
provider
with
an
aws
load
balancer.
G
I believe that's something you can do; I haven't tried it myself. But I think my comment was more in general: if we're going to eventually go down the path of, even if it's not right away in v1alpha4, but if eventually the plan is to make all providers provision load balancers this way, it would be nice to have examples from other providers, just to have a good variety and a good idea of how this would work more generically.
A
I
guess
does
that
answer
your
question
in
chat
joe,
I
saw
that
this
had
come
up
in
chat
too
about
the
examples
other
than
aws.
A
You
seen
you
had
your
hand
up
again.
Unless
I
don't
know
joel,
were
you
finished
or
did
you
have
something
more
to
say
sorry.
A
All
right,
you're,
seeing
europe.
B
I was wondering how, you know, even if an infra provider is not going to leverage this capability of mixing an infra provider and the load balancer, but still wants a resource to write integrations against, like if they have an external client that wants to write integrations against the load balancer or whatnot.
B
We probably need to document a way to generate these resources for existing clusters on upgrade; that would be something where we need to document the steps, or it would be up to, say, CAPZ, CAPV or CAPA to write a controller to create these resources.
G
That was more for Joel, if he wanted to bring that up, because I know that was another thing that came up in chat. I don't know if there was a resolution or not.
I
Yeah, if I can speak about that briefly: so in OpenShift, and I think in other deployments of Kubernetes as well, you can separate the traffic that goes towards the API server. So, for example, we have an internal load balancer that is used for something like the kubelet, that is all private IPs within AWS, which means that we can secure it in a different way; we haven't got that kubelet traffic going over the internet.
I
I think it also reduces your egress costs, because it doesn't egress, and also we then have the public API server load balancer, which is used for admins and other clients. I think this is a use case that's going to be useful for Cluster API, especially as we talk more about the externally managed cluster infrastructure. So if you've got something like, I don't know, maybe kops does this as well, and they want to build some externally managed AWS cluster integration with CAPI, then maybe that's going to cause a problem there as well.
I
So I was wondering, really, about initially having extra load balancers attached to the control plane machines, and that's what I talked to Naadir about, but then the more I've talked with engineers, I've actually realized that we kind of need to have a definitive internal/external separation, because there are different parts of CAPI that need to know how to talk towards the API. So, for example, the machine health check and the bootstrap provider might need to know different URLs so that they can configure and talk to the API server in a different way.
I
So
part
of
the
conversation
that
we
were
having
just
on
slack
was
this
new
low
balance
proposal
that
jason's
been
working
on
actually
makes
this
a
lot
easier
to
talk
about,
because
you
can
then
just
attach
as
many
load
balancers
as
you
want.
I
We
could
make
it
so
that
in
the
like
infra
cluster
resources,
we
have
two
references
to
load,
balancers,
one,
that's
internal
one,
that's
external
which,
as
far
as
I'm
aware,
could
also
just
be
the
same
load
answer
and
then,
depending
on
whether
it's
internal
external
depends
where
it
then
gets
used
through
the
rest
of
caffeine.
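A rough sketch of what two such references might look like on an infra cluster type; the field names are invented for illustration:

```go
// Invented for illustration: separate internal/external load balancer
// references an infra cluster spec could carry.
package infracluster

import corev1 "k8s.io/api/core/v1"

type LoadBalancerReferences struct {
	// Internal serves in-cluster traffic such as kubelet to API server,
	// typically over private IPs.
	Internal *corev1.ObjectReference `json:"internal,omitempty"`
	// External serves admins and other clients; it may reference the same
	// load balancer object as Internal.
	External *corev1.ObjectReference `json:"external,omitempty"`
}
```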
I
So
I
guess
the
the
sort
of
point
for
this
is:
I'm
gonna
try
and
I
guess,
whip
up
some
sort
of
proposal.
That's
off
the
back
of
jason's
just
to
try
and
capture
some
of
these
ideas.
If
other
people
think
this
is
a
good
use
case,
then
I'd
love
to
hear
more
about
your
thoughts
on
it.
So
yeah.
A
Okay, thanks, Joel. I mean, that sounds good to me. Is there anything else we want to discuss on this topic, or should we roll on to the next one?
A
Awesome, yeah, good discussion. Okay, not seeing any other hands, so, Fabrizio, wrap up on the CABPK changes.
J
Thank you, Mike. If you can click on the link... So I have a CAEP out about a change for CABPK, for getting a custom copy of the kubeadm types, and I would like to basically highlight what the impacts are. So CABPK is basically generating the kubeadm config file.
J
Okay,
as
of
today
so
copy
version
lower
than
z,
zero.
Three,
six,
sixteen!
Basically
we
don't.
We
don't
support
oldest
version
so
lower
than
113
of
kubernetes.
J
Basically,
the
the
main
difference
is
that
for
kubernetes
greater
than
115,
you
will
get
a
kubernetes
v1b2
file
generated
on
the
machine.
This
will
move
away
the
warning
and
and
and
of
course
this
is
also
future
proof.
So
when
command
mean
we'll
get
a
new
api
version,
we
will
be
ready
to
manage
this
without
having
impact
on
the
custom
api
users.
What
I
would
like
to
point
out
a
part
of
the
this
one
is
you
move
to
the
next
line?
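Roughly this selection, as a sketch of the behavior described (not CABPK's actual code):

```go
// Sketch of the behavior described: pick the kubeadm config API version from
// the target Kubernetes minor version. Illustrative only, not CABPK's code.
package kubeadmconfig

func kubeadmAPIVersion(minor int) string {
	if minor >= 15 {
		// Kubernetes 1.15+ understands kubeadm.k8s.io/v1beta2.
		return "kubeadm.k8s.io/v1beta2"
	}
	// Older supported versions keep the previous kubeadm config version.
	return "kubeadm.k8s.io/v1beta1"
}
```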
J
Slides,
please,
is
that
important.
If
someone
is
doing
is
using
pro
precoc
change,
the
kubernetes
config
on
the
machine.
Please
you,
you
should
validate
your
code
against
the
the
dv1
alpha
with
the
beta2.
J
There
are
no
many
difference,
but
please
do
do
test
the
the
change
and
also
the
last
slide
very,
very
quickly.
There
was
a
discussion
with
before
there
was
a
request
from
microsoft.
Then
also
json
raised
the
point
on
on
the
car
and
the
cab
that
the
new
the
kuben
mean
the
one
beta
two
types
introduce
a
new
field
which
is
in
in
in
your
profile,
error
that
could
be
useful,
and
so
the
plan
now
is
to
make
it
available
in
in
cluster
api
as
well
in
capstone
api
b1
alpha
four
as
well.
J
The
only
caveat
is
that
if
you
are
basically
installing
a
kubernetes
cluster
which
is
older
than
115,
this
field
will
be
ignored
by
kubernetes,
because,
basically
it
it
does
not
understand
it.
So
this
and
and
yeah
that's
our
that's
all,
that's
all
for
the
skype,
and
I
thank
you.
Every
everyone
for
the
feedback,
very
valuable.
A
Okay,
we'll
move
along
again
talking
about.
Can
we
backport
the
rollout
strategy
go
ahead?
Take
it
away
young.
K
Okay, thank you, yeah. This feature got merged into master yesterday, and we have been discussing, mainly with Fabrizio, I've been asking: is it possible to backport this into v1alpha3, sorry, 0.3, any upcoming versions? Especially if we are kind of postponing v1alpha4, or do we have any kind of dates for v1alpha4 coming out?
A
Yeah, good question: does anybody have an idea about this?
G
Well, I was... I thought Vince was going to speak up on this, but I can. I think what we had said previously, and I think it still holds for backports in general, is that as long as it's not a breaking change and it's not involving a major refactor,
G
It's
fine
to
open
a
pr
for
re
for
backboard,
so
next
step
would
be
if
you
or
someone
can
open
a
backpack
pr
and
then
we
can
review
it
and
make
sure
that
there's
nothing
that
breaks
the
contract
there,
but
I
don't
see
any
reason
to
not
backboard
as
long
as
it
follows
those
requirements.
K
Yeah, we can do that, and if nobody has anything against it, we will do it quite soon anyway.
A
Okay, great, thanks.
A
Okay, then, Cecile, you've got the last topic here, with a call for reviewers.
G
Yeah, just before we get to that, I think there was a question in chat; I'm not sure if it's for Fabrizio about kubeadm, or if it's about the last backport that we just talked about.
J
Oh, it seems that Chris misunderstood the change, probably, so... but if you want, you can contact me.
G
Okay, great, sounds like that solved it. So for my topic, I just wanted to raise this: I know a lot of people here are trying to get more involved in contributing to Cluster API, and that's really great, and we're seeing a lot of newcomers open PRs and assign themselves to good first issues, so that's really awesome. We also need a lot of help with reviews.
G
So,
if
you're
looking
to
get
more
involved,
you
know
contributing
commits
is
not
the
only
way
to
do
that.
If
you
want
to
help
with
code
reviews,
even
if
you're
not
you
know,
super
familiar
with
the
whole
system,
you
know
sometimes
just
going
through
prs
and
looking
at
code
and
reviewing
documentation,
and
things
like
that
can
also
really
help
lighten
the
load
on
the
maintainers
and
help
us
get
to
other
pr's
faster.
G
So
yeah
just
wanted
to
erase
that,
and
if
you
want
to,
if
you're
interested
in
becoming
an
official
reviewer
and
adding
your
name
to
the
owner's
file
and
maybe
eventually
becoming
a
maintainer,
that's
a
really
good
way
to
get
started
to
show.
Your
involvement
is
just
to
start
reviewing
prs
and
if
anyone
is
interested
in
that
and
wants
to
know
more
about
the
process
or
has
questions
I'm
happy
to
like
pair
with
you
or
like
mentor,
your
whatever,
and
just
like
help
you
get
to
that
owner's
status.
A
Any questions or concerns? Okay, well, I'm not seeing any hands raised, and I guess that means we should take about 10 minutes back.