A: Hello everyone, this is the Kubernetes SIG Cluster Lifecycle, Cluster API office hours. Today is May 17th, 2023. Before starting, the usual PSAs. We have an office hours document that I'm sharing; let me paste the link in the chat. If you don't have access to this document, you have to subscribe to the SIG Cluster Lifecycle mailing list. Let's see, copy link.
A: And I'm pasting the link to the mailing list as well. Next: this meeting abides by the CNCF Code of Conduct, so please be kind to each other. In this document we have the agenda; please add your name as an attendee, and if you have a topic, feel free to add it to the list below. So let's start. As usual, at the beginning of every meeting we reserve some time for new attendees.
A: So if there are new attendees in this meeting, please feel free to raise your hand and speak up: introduce yourself, say hello to everyone, and maybe say why you're attending this meeting. I'll give folks some time to show up.
A: Okay, it seems that we don't have new attendees today. So the next topic on the agenda is open proposals. We have only one proposal out, which is making the infrastructure cluster resource optional, slash making it possible for the control plane to provide the control plane endpoint. I don't know if there are updates on this proposal. Richard... sorry, Jake?
A: Yeah, thank you. If you need some help making progress here, please reach out. Moving on to the discussion topics: the first one today is Yuvaraj's.
D: Thank you. So right now in MachineDeployment we have this field called spec.paused, and its behavior is a little inconsistent with how the rest of the controllers can be paused. A MachineDeployment right now can be paused in two ways: by adding the pause annotation, and by setting spec.paused to true, and the two behave differently. With spec.paused you can still scale machine deployments up and down; you just can't roll out machine deployments. That seems inconsistent with how the other types within core CAPI behave. So the idea, and it was suggested on this issue, is that we might want to just deprecate the field and not have it, because it seems odd and out of place. I wanted to bring this up in the office hours to gather thoughts on deprecating the paused field in MachineDeployment.
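For readers following along, here is a minimal sketch of the inconsistency being described, using simplified stand-in types rather than the real CAPI objects (only the cluster.x-k8s.io/paused annotation key matches upstream): the pause annotation short-circuits reconciliation entirely, while spec.paused only skips the rollout step, so scaling still happens.

```go
// Minimal sketch of the two pause mechanisms discussed above, with simplified
// stand-in types (not the real CAPI API).
package main

import "fmt"

// pausedAnnotation matches the upstream CAPI annotation key.
const pausedAnnotation = "cluster.x-k8s.io/paused"

type MachineDeployment struct {
	Annotations map[string]string
	SpecPaused  bool // stand-in for spec.paused
}

func reconcile(md MachineDeployment) {
	if _, ok := md.Annotations[pausedAnnotation]; ok {
		fmt.Println("annotation set: skip all reconciliation")
		return
	}
	fmt.Println("reconcile scaling") // still runs when spec.paused is true
	if md.SpecPaused {
		fmt.Println("spec.paused=true: skip rollout only")
		return
	}
	fmt.Println("reconcile rollout")
}

func main() {
	reconcile(MachineDeployment{SpecPaused: true})
	reconcile(MachineDeployment{Annotations: map[string]string{pausedAnnotation: ""}})
}
```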
E: Yeah, I mean, I think this definitely seems inconsistent, especially in the sense that we have the annotation already, so having basically two ways of doing things is pretty confusing. I think one top-level pause that pauses everything at the cluster level, plus an annotation to act on the specific object, is more consistent.
F: Yeah, so if I remember correctly, this was copied from the Deployment code in Kubernetes back in the day, because you could do the kubectl rollout pause command or something like that, and it's specifically to pause the rollout, not necessarily the reconciliation of that deployment. So I think it's probably fine to deprecate.
F: Yeah, the reason I was suggesting the mailing list for this particular one, and maybe some others, is because it does have a slightly different promise of behavior than the other field, or the annotation that we have, or the cluster field. So if there are people relying on it, we might want to think differently.
D: One last thing to add: I believe this field is used in clusterctl alpha rollout pause, so we'll have to switch that over to the annotation, and probably have to support both for some duration. Because if a MachineDeployment is paused using an old clusterctl version, you might not be able to unpause it with the new clusterctl version; something like that could happen. But since it's only being deprecated, it's fine; we just might have to support both at some point.
A: Okay, does that answer your question? Yeah, okay. I have added a comment; maybe Stefan or Yuvaraj, I don't know, can follow up on the clusterctl comment.
D: Okay, so this issue talks about the MachineSet controller, and trying to make the MachineSet controller a little more stable by having it look at the current state of the cluster before it tries to create new machines, so that it is safe to create new machines. Can you go back and open the slides that are linked in the doc?
D: Yeah, all right. So this just gives you an example of how it could happen. Imagine you have a control plane and a MachineDeployment, let's say at version 1.25.3, and then you try to upgrade the cluster, and then the MachineDeployment scales up: either the user scales it up, or the autoscaler scales it up, or MHC kicks in, and it tries to create new machines, right?
D: Joining the new control plane version could fail because it falls outside of kubeadm's version skew policy, and it's generally unsafe to create these machines and then try to join, because they could potentially fail. And not only that: once these machines are created and, let's say, successfully join, these new machines will probably eventually be replaced anyway by machines for the newer version of the cluster.
D: If you look at the whole cycle, we created machines that were not safe to create in the first place, because they could potentially fail to join the cluster, and we also created machines that will just eventually be replaced. So we created a lot of machine churn. Because of this, the idea is basically to try and harden the MachineSet controller a little bit by making sure that whenever it tries to create a machine, it is safe to create a machine.
D: And we can do this by adding some pre-flight checks in the MachineSet controller that look at the state of the cluster to figure out if it's in a good state. Some of those pre-flight checks would include looking at the state of the control plane, ensuring the Kubernetes version skew policy, and, if the MachineSet is using the kubeadm bootstrap provider, ensuring the kubeadm version skew policy, and so on.
D: The issue has a little more detail on what the pre-flight checks are trying to look for, and there's also a PR associated with it, if you want to take a look at how some of these pre-flight checks are implemented. I also want to mention that all of them are optional, so users can opt out if needed, but the idea is basically to harden the MachineSet controller by making sure it is safe to create the machines before it tries to create them.
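As a rough illustration of what such a version-skew pre-flight check might look like (a sketch based on the discussion, not the code in the PR): with the kubeadm bootstrap provider, join effectively requires the joining machine's minor version to match the control plane's, so the check can boil down to a minor-version comparison.

```go
// Sketch of a kubeadm version-skew pre-flight check, assuming the strict
// "same minor version" join policy described above. Not the PR's code.
package main

import (
	"fmt"

	"golang.org/x/mod/semver" // semantic-version helpers ("vMAJOR.MINOR.PATCH")
)

// kubeadmSkewOK reports whether creating a machine at machineVersion is safe
// to join a control plane at cpVersion under the strict kubeadm join policy.
func kubeadmSkewOK(cpVersion, machineVersion string) bool {
	return semver.MajorMinor(cpVersion) == semver.MajorMinor(machineVersion)
}

func main() {
	fmt.Println(kubeadmSkewOK("v1.26.3", "v1.26.1")) // true: safe to create
	fmt.Println(kubeadmSkewOK("v1.26.3", "v1.25.3")) // false: defer machine creation
}
```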
F: So I know I had some comments about this in general. I was looking at the proposal here; I don't know if folks have seen it, but I know Yuvaraj has a section on current limitations, and maybe we should talk about that specifically. But in general, from a user perspective: say I have a control plane.
F: Say the control plane, as in the example, is at 1.26, and I have worker nodes, or a worker pool, at 1.25, and I would rather not upgrade, but I would like to scale up those worker nodes.
F: I should potentially be able to do so, especially given that in the skew policy KEP that I linked, 3935 or whatever the KEP number is, I think we're talking about an n-3 control plane/node skew. It's a little larger than n-1, but it does allow folks to upgrade on a yearly cycle if they want to. So I'm curious.
F: Do we have alternative paths to get there first? Especially given that this check in particular will apply to both new clusters and old clusters, that is, any MachineSet and any MachineDeployment.
D: So yes, let me give you some context on that. First:
D: If the machine pool is at an old version and it tries to scale up, and it is using the kubeadm bootstrap provider, the scale-up could potentially just fail because of the kubeadm version skew: join might not work. And to add to that, all the pre-flight checks are optional, so the user can just opt out if they know that the versions they're using for the control plane and the workers are compatible with each other and join works.
D: If they're not using the kubeadm bootstrap provider, we're still going to have a Kubernetes version skew check, which will do the n-3 or n-2 support, probably depending on the Kubernetes version. We'll have to take a look at that, but.
D: Yeah, so the PR and the issue are proposing different pre-flight checks depending on the state of the cluster and on what it is using. So if it is not using the kubeadm bootstrap provider, it falls back to the Kubernetes version skew, which is not as strict as kubeadm's version skew.
F: Yeah, I think it does, but what I was trying to ask is: instead of fixing the problem in the MachineSet going forward, can we fix the problem in kubeadm instead? Because this is a really poor user experience. Could we somehow not disallow it, and let the user decide if they want to do upgrades? The whole point of CAPI is to make this easier, so can we do better, basically; that was my original question.
A: So having this pre-flight check is a kind of stop-gap, and we have to figure out how to fix this specific problem in kubeadm. With regard to the others, the pre-flight checks are still good no matter what happens with this issue. For instance, today, if you try to join a machine that is at a Kubernetes version greater than your control plane's, nothing stops you in Cluster API, so I think this will allow us to handle that gracefully. And these apply also when you are doing an upgrade and something happens, I don't know, the autoscaler kicks in, etc., etc.
A: So basically what we're doing is trying to fix a set of edge cases that will surface. For the one about kubeadm, I think we all agree that we are not fixing it in the ideal way, but it is the fastest fix, and the one that this team can implement without getting the kubeadm community to commit to having what they call future compatibility, which requires some more discussion.
D: It's on a per-cluster basis. You can opt out of the pre-flight checks by listing the checks that you want to opt out of in an annotation on the machine set.
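Hypothetically, the opt-out could then look like an annotation carrying a comma-separated list of check names; the annotation key and check names in the sketch below are made-up placeholders, see the PR for the real ones.

```go
// Sketch of parsing an opt-out annotation for pre-flight checks. The
// annotation key and check names are hypothetical placeholders.
package main

import (
	"fmt"
	"strings"
)

const skipAnnotation = "machineset.cluster.x-k8s.io/skip-preflight-checks" // hypothetical key

// skippedChecks returns the set of pre-flight checks the user opted out of.
func skippedChecks(annotations map[string]string) map[string]bool {
	skipped := map[string]bool{}
	for _, name := range strings.Split(annotations[skipAnnotation], ",") {
		if name = strings.TrimSpace(name); name != "" {
			skipped[name] = true
		}
	}
	return skipped
}

func main() {
	ann := map[string]string{skipAnnotation: "KubeadmVersionSkew, ControlPlaneIsStable"}
	fmt.Println(skippedChecks(ann)["KubeadmVersionSkew"]) // true: check is skipped
}
```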
E: So I guess once kubeadm eventually fixes this, the behavior wouldn't change, because ultimately, even if you keep the annotation, I'm assuming we would remove the code that is triggered by that annotation, right?
B: The skew policy, but I mean, then it's n-3, so you can adjust the logic to validate that instead. We have to see if that's basically the same as the Kubernetes check; maybe you don't need to. But let's say there are some improvements; I'm pretty sure they're not going from 'we need exactly the same version' straight to supporting n-3, so yeah, I think we can just evolve it depending on how kubeadm evolves.
F: There are two questions. Would this stop remediation? I guess it would: if you have old nodes, remediation would fail. And then a couple of other points: maybe there is an action item for us to take this to the SIG Cluster Lifecycle level; I think that's a pretty important topic to discuss there.
F: Maybe we'll get the kubeadm maintainers there, and from our side, from the CAPI side, we do have e2es, right? So we can potentially do upgrades as well, and potentially test at least the minimum amount of the skew and go from there. And potentially we could also have a better error message that says 'this is not allowed because of XYZ', at least until kubeadm maybe enforces this skew policy a little more in line with Kubernetes.
D: Just to answer one of Vince's questions regarding remediation: yes, it will block remediation, as in, with the current proposed PR it will not delete the unhealthy machines, because it knows that it can't create new machines that would successfully join. It's probably better to keep the machine that's already there when we know that any new machine we create would potentially fail. So that's how remediation will be blocked; but yes, it will be blocked.
A: Yeah, I think the right term is that we are deferring remediation until we are certain that the remediation can succeed. But yeah, then it is just working as expected, if I read that right.
E: Yeah, another data point we might want to consider here: each provider will likely need to consider this on their own, but depending on how fast you can roll out the control plane, you might have a maintenance window, or a window where you cannot upgrade your workers. Typically some providers might take some time to actually, you know, clone or create the instances. So I think there's probably some discussion needed at the provider level to see whether this can be enabled or not, depending on their use cases.
I: Hey, I just want to provide some quick context on this problem from the kubeadm side. I believe Daniel originally raised this problem, where he discovered that you cannot use an older version of kubeadm to join workers in Cluster API to an existing kubeadm cluster. So the problem is that we basically need a KEP; somebody has to step up to write a KEP. It is possible, but it's undefined behavior: you can potentially join to a 1.26 kubeadm cluster, something like that.
I: A 1.25 kubeadm, with a 1.25 kubelet. The problem is essentially the API that kubeadm stores in the cluster, and basically a bunch of strict policies that you have to apply. Nobody has stepped up to write the KEP, and the kubeadm team currently not only doesn't have the bandwidth, but also is not that concerned about this problem.
I: So if somebody from the CAPI community wants to step up to write the KEP, we will happily take a look. It boils down to maintenance bandwidth at the end of the day: how do we support this, how do we make sure we don't break it in the future, what are we supposed to do with the e2e tests? We need a plan for a test that actually makes sense.
I: How do we end-to-end test such a policy? In terms of the other KEP, which is still in progress, to support kubelets at n-3: this is interesting, because kubeadm currently supports kubelets at n-1. It supports the current version of the kubelet, the one matching the Kubernetes version, plus one kubelet older, and that is strictly, quite frankly, with maintainer sanity in mind. Because imagine the kubelet deprecates v1beta1 and now has a new API, or something changes: how do you...
I: How do you even manage supporting so much? We could argue that the kubelet doesn't change so much, it's a stable component, but again, that's several kubelet skews, and when Jordan joined one of these discussions, the main question was basically: how do we maintain this, what is the sane way to maintain this? So everything comes down to KEPs, discussions, documents. It's doable, but we have to basically share the effort, I guess.
I: Yes, 1.26 supports kubelet 1.26 and kubelet 1.25.
F: Yeah, but that's just the kubelet, not necessarily joining a 1.25 node to a control plane that's on 1.26.
I: Well, yes, but the version of kubeadm that joins has to match; that's the problem. When you want to join a kubeadm-created cluster, or a recently upgraded cluster, you have to match the kubeadm version. So if the cluster was created by kubeadm 1.26, you have to use kubeadm 1.26 to join, and that's the biggest issue right now, because we do not support, for example, joining with a 1.25 kubeadm. The kubelet version doesn't matter.
I: That's an issue in CAPI. What you can do, actually, is use an image that has kubeadm 1.26.
F: Yes, but that would really mess things up if it's an air-gapped environment or anything like that. Yeah, but maybe we should talk about it at the kubeadm office hours or something. The reason I'm fixating a little bit on this is that when we do upgrades, we generally want to keep the cluster running. So if we're saying that once I upgrade the first node of a control plane, then while the control plane is still upgrading, all scaling operations up and down... well, I guess you can scale down, but you cannot scale up, and remediation for all the worker nodes is affected.
B: I'm totally fine with making it just work. Currently we just have issues; we've had this over the last one or two years, where here and there somebody brings up: hey, I was doing this and that, and then I got a remediation that doesn't work and blocks other stuff, and that's also a very bad experience. But agreed, ideally it just works.
I: Yeah, I'm perfectly supportive of basically having a way to change this kubeadm skew. I agree it's problematic; it doesn't make any sense to have it, especially given Kubernetes is expanding its support flexibility. It's just an artifact of the past, essentially; this is what was in kubeadm, and this is what makes sense for us to maintain right now as a very small team. So, you know, happy to look at proposals on what we can do.
B: Oh yeah, just wanted to comment: I think just getting to n-1 support would already fix most of our problems, because most of our problems today are not that we want to have n-3 machines joining. It's mostly: okay, we have an upgrade going on, and we want to make sure that the older version still works, and that would already be possible if we get n-1 to work. Yeah, but that's it.
A: Okay, if you all agree, I will move on to the next point on the agenda. So, okay, the next one is mine. This is a follow-up of a KubeCon discussion about stress testing CAPI. There are also a bunch of issues about this topic, and the gist is that we don't have a good signal about Cluster API running at scale. We don't know what the bottlenecks are, what we can support, etc., etc.
A: Today the best option, the best proxy for this signal that we can get, is to use CAPD or Kubemark or, yeah, to run big clusters on some cloud provider.
A: We are looking for something that can run either locally or in Prow. Long term, we are looking for something that ideally can even support chaos testing: so not only see that Cluster API runs at scale, but also that Cluster API can handle issues at scale, because we have faced some problems in the past. So at KubeCon...
A: We had a couple of brainstorming sessions about this, and we discussed a possible solution which is described in a set of slides. I will not go into the details, but the TL;DR is that this solution is basically to implement a mock cloud provider where everything is in memory. Since everything is in memory, there will be no real workload cluster, and so we also need a mock API server/etcd server that will be just enough to keep CAPI happy.
A: So, for instance, it will provide a list of nodes for the mock workload cluster, which is required for, I don't know, the health check providers and stuff like that. It will be very minimal and very specific to this use case; the goal is only to test the management cluster, not to support running pods. We have discussed this with a couple of folks and the idea sounds doable, so I'm giving a little update on progress: we are doing a little bit of prototyping in this repository.
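To make the "just enough to keep CAPI happy" idea concrete: the health checks mostly need the workload cluster to answer simple queries such as listing Nodes, so an in-memory client pre-loaded with fake Node objects can stand in for a real cluster. The sketch below uses controller-runtime's fake client purely as an illustration; the actual prototype implements a mock API server/etcd instead.

```go
// Sketch: serve a Node list entirely from memory, no real workload cluster.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func main() {
	scheme := runtime.NewScheme()
	_ = clientgoscheme.AddToScheme(scheme)

	// Pre-load one fake Node, as a real workload cluster would report.
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "fake-node-0"}}
	c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(node).Build()

	var nodes corev1.NodeList
	_ = c.List(context.Background(), &nodes)
	fmt.Println(len(nodes.Items), "node(s) served from memory") // 1 node(s) ...
}
```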
A: It is in my personal repository now. Some of this work is going to be merged by end of year, and the plan is to move it back under /test in the main CAPI repository as soon as we have a working proof of concept, which we are going to demo in this office hours.
C: Oh, this looks really fun. The new provider is named Goofy; does that suggest this is a kind of chaos-monkey type way of stress testing race conditions, state transitions, edge cases, that kind of thing? Is that the primary...
A: The primary goal is to implement the scale testing, and in a second iteration to make it capable of doing chaos testing. And Goofy comes from, yeah, from chaos, but the name we can figure out. I want to clarify that chaos testing will be a second iteration, as soon as we have all the scaffolding in place.
C: Yeah, cool. I was actually thinking something along the lines of Stefan's comment in chat, that this sort of supports scale testing, but it's kind of different. I mean, this is super important, you need both, but I think for pure scale testing we need scale, and that is hard to do in a mock way.
C: I mean, yeah, I don't know if anybody's stepping up from the provider community to offer to build five-thousand or ten-thousand-node clusters; it's really expensive, but something like that would be super useful to get the scale in all the dimensions, like I/O, actual controllers, you know, re-queuing, things happening at a significant scale. Go ahead.
G: Funny you mention that, Jake. Currently we are scale testing CAPI, but we're doing, I guess, the brute force method. We are spinning up actual clusters using CAPA, using multiple back-end machines, and just trying to spin them up by creating lots and lots and lots of machines in AWS. So I'm really interested in this, but we are also doing our own scale testing as well.
E: Yeah, so upstream in CAPV we started discussing the equivalent of this for CAPV, in the sense that we're mocking, for example, the infrastructure APIs of vSphere to stress test the controllers. We're also starting to explore doing an end-to-end scale test, depending on the size of the environments, so we're still doing the dimensioning to see how far we can stretch. I think there's definitely value in stress testing CAPI with any infrastructure provider.
E: As long as that infrastructure provider has a certain level of latency. Because when you test with CAPD or the mock provider, you assume that in the underlying implementation everything goes smoothly, but in reality that's the ideal state, and the ideal state never exists. So while we might have a signal, providers also have to play their part and, you know, do some scale testing up to their levels. So in CAPV we're at least starting to explore that, to support this initiative.
A: A quick comment about this: what was just said is totally true. A mock provider has a certain number of corner cases that it does not cover, but it has the advantage that if you find a problem in CAPI, and you're sure that it is in CAPI, you can probably isolate it quickly. So yeah, it does not cover the whole story, but it is a good improvement from where we are, and I totally agree that providers should also look at similar ways to do stuff. Stefan?
B: We start with some sort of happy path, but we also want to make it relatively realistic over time, in the sense that, I don't know, machine creations should also take a few minutes or something like that. We start simple, of course, but I think over time we want to actually make it realistic.
A: Great, thank you everyone for the comments on this one. It seems, like Stefan was saying in chat, that we have a business case for it, and yeah, we will keep you updated as we progress on this. So we have a couple more topics on the agenda and 15 minutes left; let's move on. Killian?
H: Yeah, very quickly, because this has been mentioned a few more times as we head towards the release: I'm working on a PR to stop serving v1alpha3. v1alpha3 has been end-of-life for over a year now and was officially deprecated in v1.4, so once 1.5 is released, you won't be able to create resources in v1alpha3, and so on.
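For context on what "stop serving" means mechanically (an illustrative sketch, not the actual CAPI manifests): a CRD version can stay defined for storage and conversion purposes while its served flag is turned off, after which the API server rejects reads and writes at that version.

```go
// Illustrative sketch of a CRD version list where v1alpha3 is no longer
// served. In CAPI the real change lands in the generated CRD manifests.
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	versions := []apiextensionsv1.CustomResourceDefinitionVersion{
		{Name: "v1alpha3", Served: false, Storage: false}, // deprecated: API server stops serving it
		{Name: "v1beta1", Served: true, Storage: true},    // current storage version (assumed here)
	}
	for _, v := range versions {
		fmt.Printf("%s: served=%v storage=%v\n", v.Name, v.Served, v.Storage)
	}
}
```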
A: Thank you for the call-out. Yeah, let's keep that in mind, folks, as soon as we get to the release, and also provide feedback as we go down this path of implementing it. If there are no comments, the next one is... yeah.
E: So if, as a consumer, you're pulling in CAPI and a given provider, and your project wants to use the latest CAPI, and you have a provider that is stale in terms of versions, then you have a version conflict between CAPI and the providers. It renders CAPI pretty much unusable as a library if you don't have the provider also consuming the right version of CAPI. So yeah, it seems, from skimming and also discussing this with folks...
E: It seems like the release... sorry, the release process has a beta tag, where any changes pertaining to providers are basically delivered by then, and that's, I think, pretty much four weeks before GA. And we said that we were comfortable releasing a GA of CAPV at least two weeks after the GA of core CAPI. So this leaves us about a month and a half to absorb any changes related to CAPI; I think that's what we're going to roll with.
A: If there are no comments, let's move to the next point. Vince?
F: I'll keep it short. We started toying with the idea at KubeCon last year in Valencia. Flatcar is currently incubating into the CNCF, and, for folks that don't know, the bootstrap provider relies on cloud-init.
F: It does have some support for Ignition, though. If Flatcar goes through the incubation process and becomes a full CNCF project, it would be interesting to adopt it as, like, the operating system for CAPI. What do I mean by that?
F: We have a common issue, which is: I'm bootstrapping a Kubernetes node, I'm bootstrapping a machine and the operating system, and then there is the whole story about upgrades, both of the operating system, like in-place upgrades of the operating system through a reboot, and of the Kubernetes components as well, you know, to fix CVEs quickly.
F: Right now, doing replacements of machines has worked just great to bootstrap the project, but at the same time it creates a lot of machine churn, and it creates problems, especially when machines are not as infinite as they are in clouds, more like on the bare metal side. So I just wanted to put it out there as one of the options. Of course, the nice thing about it is that Flatcar has A/B-style upgrades; it can upgrade in place.
E: Vince, yeah, I think once, or if, we consider this, there's probably going to be a round table that we need to do at the provider level, because some providers, especially the on-prem ones, might be doing networking configuration through a kind of netplan/cloud-init metadata format, and we need to see how that's going to work, for example, with Flatcar, since it's all Ignition-based.
F: Oh yeah, the main issue that I see with Ignition today is that it's really a blob of text that then somehow tries to take the cloud-init config and shove it into an Ignition config, which, you know, was good for kind of seeing whether that would work at all. There is, I forgot to link it, but the main configuration project now is actually a separate repo, and this...
F: This repo has actual types, Go types; they're stable, they're versioned, and so on. And I reached out to the maintainers to say: hey, can I generate CRDs out of this? That would be pretty interesting and awesome, especially if I can also run validation through this repo, because it's very lean.
F: Literally validate when you bootstrap machines, when you're creating them, and maybe even through a webhook. So that was kind of what got me excited about the idea of integrating it.
E: Just to follow up, Vince: I think at least one of the advantages that I see is that we could get support at the OS level. We probably also need to check with the image-builder folks, because today, for example, we're using Ubuntu and other OSes, but I don't think we have anyone from the Ubuntu community involved in Cluster API. So usually, at least at the provider level...
E: Whenever we have an issue, especially at the OS level, we have to reach out to image-builder; sometimes they have to reach out to Canonical folks, and so on and so forth.
F: Yeah, exactly; Flatcar is already in image-builder. I think CCO can speak more to that.
F: Cool. Anyway, feel free to reach out. This is an early thought; it will require a whole CAEP and, you know, probably new types and so on, thinking it through, and a lot of testing. Yeah, Flatcar is already in there, yeah.
A: Great, thank you, Vince, for raising the discussion. We have six minutes left and another point on the agenda: managed Kubernetes feature group updates. Jake?
C: A real quick one: there's a PR open to formally retire the feature group. Thanks, everybody, for the various meetings we had. We've got a CAEP out now, and that will be what we focus on going forward, plus the work that stems from it.