From YouTube: 20190710 scl cluster api office hours
B
A couple of questions came up yesterday about the infrastructure and bootstrap providers: how multiple providers would run in the same namespace, or how objects from multiple clusters would get reconciled in the same namespace. I reached out to Vince on Slack, and I thought it would be nice to record this somewhere, at least in the interim, and it may be something that we want to think about.
B
Just one thing to share, and I hope I'm not jumping the gun: we're not released, and we're not really far enough along to talk about conformance; we're just working on implementing the first one. But I thought I'd get it written down, because I couldn't find a place for it. I thought maybe we could add it to the book.
B
Is anybody working on that? Because we have a section on building a provider, but it's specifically for v1alpha1, and v1alpha2 looks quite a bit different. Is anybody working on that, is it too early, or do we maybe want to start building the scaffolding for that as we're implementing?
C
Yes, so the blocker here right now is that we need to modify the Netlify config to be able to do per-branch subdomains, and I'm waiting for David to get back to me to find out whether he has access to that, so that we can get it federated out to additional people and make the changes needed, or whether he was coordinating with somebody else on the Netlify config. Once we have that, we can document it; we just need to drop in a separate Netlify config for the release.
G
Yeah, just a quick one from me, and also offering my help. For the release-0.1 branch I opened a PR to add Go module support. This is just to ease the development process a little bit between v1alpha2 and v1alpha1, especially backporting patches. It adds Go module support, but it doesn't remove the Gopkg files — the lock and TOML files — so as to stay backward compatible.
H
Yeah, I can speak to that. We were just discussing yesterday the possibility of building a backlog for the kubeadm bootstrap provider, specifically because I was trying to find something to work on there, and there were just a few small doc issues open at the time — nothing really focused on implementation or anything. So we thought maybe it would be a good idea to have a kickoff or some sort of planning meeting around that, Chuck.
D
Yeah, so I added a few issues yesterday — I think you saw those. As far as the planning meeting goes, we kind of have the plan outlined in the v1alpha2 bootstrap provider doc, and I think we're just trying to keep it to what was outlined in the document. Once we see the implementation there, maybe we can start to talk about a further plan after that.
A
I'd prefer to keep it in Cluster API — we already have so many as it is; that's my personal opinion — because in the fullness of time this would probably reduce to a smaller set of people, and it would just be used as a common component across providers.
E
Hi, just to announce that there is a proposal for an issue that has been open for some time, which is externalizing what are now the provider-specific aspects in the spec as object references. I'd really like to have some feedback — thanks to the people who have taken the time so far. There doesn't seem to be any contentious point, except maybe deciding whether to make these references mandatory or optional, because that may considerably affect the design in the proposal; the rest are mostly things that can be handled easily.
E
Probably sooner than that, because the changes are not that big. Then I will work on translating the sequence diagrams — I wrote them in ASCII art as a shortcut — into proper sequence diagrams; I will do that over the weekend. We have time before then, so any early comments people have are welcome.
I
Thank you so much for putting this together, Pablo. I will echo that and encourage everybody who's interested to please read it and provide feedback. It would be fantastic if we could get this approved, start implementing it, and include it in v1alpha2, because then we'd have Cluster and Machine looking similar, instead of Clusters with inlined provider specs the way they are in v1alpha1 and Machines with object references. So I agree — I don't think this is going to be too controversial. Please take a look.
I
So, we talked about this last week, but we didn't have a big crowd because of the holiday, so I sent a message to the SIG Cluster Lifecycle mailing list late yesterday afternoon and wanted to bring it up again today. We are interested in putting together a face-to-face to do planning and design for v1alpha3, or whatever comes after.
I
We'll get a sense of who plans on attending from the poll. We can potentially host it in a VMware office, or at other locations people are interested in — it doesn't have to be a VMware office. I mentioned in the email that we could potentially do Bellevue in Washington State, Palo Alto, or the Bay Area around San Francisco. There may be some other places as well, but please look for emails with polls and whatnot over the next couple of days and respond if you're interested.
E
Brief question: for those of us overseas, since a meeting like this one is not related to a conference or something that we can somehow use to justify travel budget, would there be any possibility of remote participation, at least for the plenary sessions? The opening and closing sessions could probably use video; for the working sessions that may make no sense, but at least the beginning and end of the meeting — or at least a recording of those.
A
What has been remarkably consistent every time with this is that it's not necessarily a feature of Cluster API proper, but that doesn't mean that external autoscalers can't use the features inside of Cluster API. So it's a consumption model, where autoscalers could use the features of Cluster API, versus Cluster API having that within its own capabilities. Jason, this was your interest?
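The consumption model described here can be sketched roughly as follows; every type and field in this example is a simplified, hypothetical stand-in, not real Cluster API or cluster-autoscaler code:

```go
package main

import "fmt"

// MachineSet is a hypothetical, simplified stand-in for the real type.
type MachineSet struct {
	Name     string
	Replicas int32
}

// ExternalAutoscaler stands in for a component outside Cluster API
// (e.g. an external autoscaler) that consumes the API, rather than
// Cluster API shipping autoscaling within its own capabilities.
type ExternalAutoscaler struct {
	PendingPods int32
	PodsPerNode int32
}

// DesiredReplicas computes how many machines the autoscaler wants,
// growing the set just enough to absorb the pending pods.
func (a ExternalAutoscaler) DesiredReplicas(ms MachineSet) int32 {
	if a.PendingPods == 0 || a.PodsPerNode == 0 {
		return ms.Replicas
	}
	extra := (a.PendingPods + a.PodsPerNode - 1) / a.PodsPerNode // ceiling division
	return ms.Replicas + extra
}

func main() {
	ms := MachineSet{Name: "workers", Replicas: 3}
	as := ExternalAutoscaler{PendingPods: 5, PodsPerNode: 2}
	// The autoscaler only writes the Replicas field; Cluster API's own
	// controllers remain responsible for actually creating machines.
	ms.Replicas = as.DesiredReplicas(ms)
	fmt.Println(ms.Replicas)
}
```

The point of the sketch is the division of labor: the scaler decides a number, and the Cluster API machinery reconciles toward it.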
J
Sure. Well, I had a note there for cluster autoscaler: OpenShift has been utilizing Cluster API to scale MachineSets, so that might be generally interesting to people here, and I think it would also be helpful for everyone to give more feedback on that PR and jump on that bandwagon, because I don't know if the cluster autoscaler folks are quite convinced on that particular use case. But I would also add a comment: as far as the cloud-provider-specific stuff in this case goes, I would rather see this be a separate provider.
L
Yes, a couple of things. I think it is just about using the infrastructure, and doing a separate provider is definitely possible. I think right now it's a little bit tricky, but if we have that decomposition between infrastructure and bootstrapping, that's maybe one way we could achieve this in a slightly cleaner way across providers.
L
I do think, to Daniel's point in the doc, though, that this should be kind of consistent across providers and Kubernetes-aware. I think it's less about autoscaling necessarily using those; I know for Azure specifically there are just some features of VMSS over individual VMs that make them a little bit easier to manage.
K
Yeah, I was going to say I think I have also been consistent on this, which is: we can't stop anyone from building providers that implement other APIs, but I think we can give them fair warning that they have to be entirely conformant with a TBD (to-be-determined) conformance test. We want things like MachineSets to behave consistently across all providers; at some stage in the future we will likely have a test, and it will be hard.
K
It is a lot of work to implement the exact same behavior on top of a cloud API which doesn't necessarily expose all the same knobs that we will likely exploit so that we can be more Kubernetes-aware. It's sort of the same set of challenges — and I'm not going to conflate the issue with the autoscaler being aware of, say, unschedulable pods, right? That's why cluster autoscaler is different from, say, an auto scaling group — which is a horrible example, because it just conflates everything again — but I would very much urge caution.
I
As I said earlier, I totally agree, Justin. I don't know that the solution of using something like a VMSS or an ASG is possible unless we adjust Cluster API to expect that they are there. That potentially could mean modifying MachineSets to support either machine-based or auto-scaling-group (VMSS, or whatever it's called) based scaling, or creating a parallel MachineSet-like resource that is specifically for scaled groups of machines provided by cloud providers. I think that's TBD.
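As a very rough sketch of the options just mentioned — a strategy field, or a parallel MachineSet-like resource, for cloud-provided scaling groups — here is a minimal illustration; every type and field name in it is hypothetical and not part of Cluster API:

```go
package main

import "fmt"

// ScalingStrategy is a hypothetical field distinguishing the two options
// discussed above. None of these names exist in Cluster API today.
type ScalingStrategy string

const (
	// MachineBased: the controller creates and deletes Machine objects itself.
	MachineBased ScalingStrategy = "MachineBased"
	// CloudGroupBased: the controller sets the desired size on a
	// provider-managed scaling group (ASG, VMSS, etc.) and reconciles
	// from that group's status.
	CloudGroupBased ScalingStrategy = "CloudGroupBased"
)

type MachineSetSpec struct {
	Replicas int32
	Strategy ScalingStrategy
	// CloudGroupRef would point at the provider's scaling-group resource
	// when Strategy is CloudGroupBased; nil otherwise.
	CloudGroupRef *string
}

// desiredActions returns a human-readable description of what a reconciler
// would do for the given spec — purely illustrative.
func desiredActions(s MachineSetSpec) string {
	switch s.Strategy {
	case CloudGroupBased:
		if s.CloudGroupRef == nil {
			return "error: CloudGroupBased scaling requires CloudGroupRef"
		}
		return fmt.Sprintf("set size of %s to %d", *s.CloudGroupRef, s.Replicas)
	default:
		return fmt.Sprintf("manage %d Machine objects directly", s.Replicas)
	}
}

func main() {
	vmss := "my-vmss"
	fmt.Println(desiredActions(MachineSetSpec{Replicas: 3, Strategy: MachineBased}))
	fmt.Println(desiredActions(MachineSetSpec{Replicas: 5, Strategy: CloudGroupBased, CloudGroupRef: &vmss}))
}
```

Either shape forces the question raised above: Cluster API would have to expect the scaling group to exist before delegating to it.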
M
So, this is Craig from Microsoft, and my team is where this VMSS puzzle comes from. We definitely want to do this in a way that's consistent with the autoscaler and with Cluster API — back to Justin's point, we'd love to be a part of figuring out what conformance looks like and do that at the same time, together. I just don't know if the timing is just bad right now or what; we need this capability for our customers, and that's what's driving the motivation to get it.
K
Yeah, I hear what you're saying. I guess I would say that I don't think we're obligated to support every piece of functionality of every underlying cloud. So if we can deliver the same user functionality without using MIGs, without using auto scaling groups, without using VMSS, then I think we will be in better shape. If there is functionality that can only be delivered through VMSS, I think that's an interesting discussion, and—
A
A
Exactly — I think you just opened Pandora's box there. So what I was going to say is: why don't we table this discussion and follow up next week, so that people have reviewed the proposal (which I have not at all), and if folks can leave comments and feedback, maybe we could have a more informed discussion and hopefully deduplicate the overlap in naming.
C
So, we've had a few changes that have been backported to the release-0.1 branch, and we're also being asked to help guinea-pig the image promoter process that the k8s-infra group has been working on. Because of that, I want to go ahead and try to cut a 0.1.x release, either today or tomorrow. So I wanted to see if there were any additional backports, or any change that anybody feels is necessary, that we need to get in before that release is cut.
A
I think it might be easier for people who might not even be able to attend the meeting if they view the repository as the canonical source of truth. So if there's going to be a release coming up and people want to chime in, the repository becomes the means by which they can communicate that.
G
I just wanted to give an update on v1alpha2 progress: all the controller logic for Machines and MachineDeployments has been implemented, and, as others have mentioned, there have been some changes in the contracts compared to what was in the proposal. We still need more extensive documentation, but we have made a lot of progress, and also a lot of progress on tests.
G
Compared to before, in v1alpha1, we now actually have proper tests running on the reconciliation loops, etc. If anyone wants to work with myself and others on the documentation, please reach out. I was thinking of doing a code walkthrough on July 16th during office hours — is anybody interested, or should we do it later? Any thoughts on that?
A
(I close tabs all the time just because I have too many tabs.) Next: requirements for conformant infrastructure and bootstrap providers. I think this is long-term, maybe even backlog, but I do think that once we get there — well, what commitments do we want to make about guarantees? Typically some for GA, absolutely; for beta, maybe. Are there any thoughts there?
G
All providers have to check whether that reference is actually nil and fail if it is. The user experience is also not great, because if you miss the label there's no validation on it, so the controller will just say: yeah, I got a Machine, but I don't have a Cluster, so I can't do much with it. And I don't think we want Machines and MachineDeployments to be effectively broken.
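A minimal sketch of the kind of guard being described — the types and label key here are illustrative stand-ins, not the actual Cluster API controller code:

```go
package main

import (
	"errors"
	"fmt"
)

// Machine is a hypothetical, simplified shape — not the real type.
type Machine struct {
	Name   string
	Labels map[string]string
}

// clusterNameLabel stands in for the label a Machine must carry to be
// associated with a Cluster; the real label key may differ.
const clusterNameLabel = "cluster.k8s.io/cluster-name"

var errNoCluster = errors.New("machine has no cluster label; nothing to reconcile")

// getCluster mimics the nil-reference check described above: with no
// validation on the label, a reconciler can only notice the problem at
// runtime and bail out.
func getCluster(m Machine) (string, error) {
	name, ok := m.Labels[clusterNameLabel]
	if !ok || name == "" {
		return "", errNoCluster
	}
	return name, nil
}

func main() {
	orphan := Machine{Name: "worker-0"} // label forgotten by the user
	if _, err := getCluster(orphan); err != nil {
		fmt.Println("reconcile skipped:", err)
	}
}
```

Making the cluster reference required, as discussed next, would push this failure from runtime in every provider to a single validation at admission time.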
G
If you don't use the Cluster, the nodeRef cannot be set — for instance, if you are running the controllers in a management cluster. So I would love to solve this in v1alpha2 and make the Cluster object as thin as possible. This is mostly related to Pablo's proposal: if that goes through, then the Cluster could be made required on a Machine, and all of these issues would go away.
E
Actually, yes, it should be. I remember this probably being the most contentious point since I joined — every time the topic comes up it hits a wall, and I don't know why. I agree with this; it's probably because it implies having a cluster, and clarifying that would probably help close it out.
A
I'd like to specifically table this conversation, in part because until we have Pablo's proposal in place and can look at the thin wrapper, as well as a more discrete set of well-defined states for bootstrapping similar to Machines, I think it's hard to clarify its use case. On the current state, it's fair to say that you'd want them separated; I think the long-term state has not been fully vetted, and we could potentially get into a contentious topic there that could never end.