From YouTube: SIG Cluster Lifecycle 2021-04-06
A
Okay, I don't see any new meeting participants. We have a fairly large agenda and a lot of participants today as well.
B
Yeah, thank you. I guess I sort of wanted a sanity check on this, which is: we have some people in the kOps community who are a little bit confused — I'm not so confused, but they're hearing from SIG Networking that kubenet is going away. I've put a link to the issue, and it seems like they are actively saying that it is going away. I'm trying to get them to realize that there is a distinction between removing the flags and removing the functionality.
B
The functionality will remain. It's pretty core to, I think, every Kubernetes cluster, and so I'm worried this is another one of these dockershim-like "we're removing Docker" type things where it gets confusing. I think it's less bad because kubenet is less well known, but I don't know how others feel about this.
B
I feel like I've made my point on this particular issue — I might be overreacting — but if there are other things that we should be thinking about, please let me know. Essentially, I don't think they're actually removing what I consider the functionality of kubenet, which is networking using pod CIDRs, just relying on Linux networking and relying on your cloud or infrastructure to route packets, rather than, you know...
B
In the old days we had some CNI providers which tried to do their own IP allocation, and we had some CNI providers which tried to do user-space tunneling, and I think kubenet basically won and all the CNI providers stopped doing those things. But now they are saying they want to remove kubenet — though they are only removing the flags, which is fine. The functionality will continue to exist; you just have to go through a sort of CNI setup. But it is confusing.
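[Note: kubenet is essentially the kubelet driving the standard bridge and host-local CNI plugins, so the "sort of CNI setup" mentioned above amounts to writing that configuration yourself. A rough, illustrative sketch of such a conflist follows — the plugin choice, bridge name, and subnet here are assumptions, not an official migration recipe:]

    {
      "cniVersion": "0.3.1",
      "name": "kubenet-like",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cbr0",
          "isDefaultGateway": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.1.0/24" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        }
      ]
    }

[The subnet would be the node's pod CIDR; routing of pod CIDRs between nodes would still be handled by the cloud or infrastructure, as described above.]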
B
I hope this will be less significant. One way to handle it would be to say in the release note that the kubenet flag is deprecated — and we emphasize the word "flag" — and then say: if you want to continue to use kubenet, here is how, and we just link to Ben's comment there, where he talks about how to actually configure it. So we make it clear that it's not going away.
A
Yeah, okay. Do you want to comment further on this, or should I draft something?
B
I think we're okay. I don't know whether anyone else feels particularly strongly that I'm either overreacting or under-reacting or whatever — do let me know. And then, if dims is going to write a release note, I think he can see that there's discussion there.
B
I don't have a strong opinion there. I mean, as long as the workaround in terms of configuring CNI works, which I assume it will, that seems okay to me. If they can get a nicer flag, that would be nice, but I imagine there's going to be some complexity in just moving to using the external dockershim.
A
It feels like it just adds to the complexity for users that will have to migrate to this external dockershim module. They have to run it as a service, so if they are currently passing these flags to the kubelet, they now have to — I don't know how they're going to do it — they have to use a workaround on this external dockershim somehow, with this external component.
B
Yeah, it's certainly reasonable for dockershim, if they want to, to write this CNI configuration, or whatever the appropriate thing is for dockershim, but I think that's almost up to them, right — what they want to do there.
A
We have a tracking issue in Kubernetes for how we are going to start handling the whole dockershim problem. It's going to be a matter of docs, in my opinion: we're just going to instruct users how to install dockershim externally as part of the Docker setup on a particular node, and that's all we have to do. I think we are trying to minimize interactions with the CRI runtimes; for instance, we have zero interaction with containerd configuration.
A
The same goes, for instance, for configuring dockershim in some way. I don't know what the future is of all those flags that the kubelet currently has related to dockershim, but basically Kubernetes will pretty much stop configuring dockershim with respect to some of these options, so the user has to set up the service externally, and kubeadm has to be compliant with that setup and just start.
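[Note: a minimal sketch of what "setting up the service externally" could look like, assuming the externally maintained dockershim (cri-dockerd) runs as its own service and exposes a CRI socket — the socket path and flags below are assumptions based on how the kubelet talks to any remote CRI runtime, not documented kubeadm guidance:]

    # cri-dockerd assumed to be installed and running as a systemd service on the node
    kubelet --container-runtime=remote \
            --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock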
A
Okay, I don't think I have any more topics. Does anybody else have any?
D
Maybe the most interesting one still pending is the load balancer proposal from Jason, and also we are labeling issues that we are considering release-blocking.
D
Second, another note that may be interesting for people in this meeting is that AWS is leading work on supporting external etcd in Cluster API, and this work is based on etcdadm. So I think this is interesting for Justin and also for the people working on etcdadm.
D
Also for kOps, because what etcdadm is developing now is kind of kOps-dependent. So, if people are interested, I've linked the doc where we are shaping up the proposal — feel free to comment.
A
It's
it
is
becoming
a
bit
tricky
for
new
contributors,
even
if
they
work
for
big
companies
like
rws,
for
instance,
if
the
maintainers
of
the
involved
projects
don't
have
the
bandwidth
to
help
them
to
coordinate
them
into
executing
some
of
these
bigger
changes.
I
saw
that
rajishri
created
this
proposal,
which
is
with
a
lot
of
detail
already,
but
I
also
saw
that
she
has
questions
on
slack
to
for
its
for
the
ncda
adm
maintainers
for
the
course
repair.
Maintainers.
A
I don't know how we can organize this better. Do you think that dedicated sessions can help out? But that consumes bandwidth from the same maintainers, who are busy with other work, and this is maybe not a high priority for them. So I don't know how these projects can help new contributors drive new features in.
D
Because
the
idea
was
a
little
bit
rough,
so
it
requires
another
final
and-
and
so
we
agreed
to
work
on
on
the
proposal
and
then
also
on
a
prototype
that
that
the
person
from
aws
is
is
developing
on
its
own
personal
branch.
So
we
are
kind
we
are
supporting
these.
It
is
low
priority,
but
I
think
that
from
the
cluster
api
side
that
we
we
have
banded
to
support
this,
like
any
other
community
proposal,.
B
Yeah, I just — I mean, Slack has turned into what it set out to replace, and I look forward to the next Slack, which will then grow until it becomes Slack, and then the Slack v3; there's a perpetual cycle of these things, I think. And I think we welcome people at our sort of office hours, right? So the etcdadm meeting is a great place, I think, and we can really deep-dive there.
B
If,
if
that
time,
isn't
convenient
for
contributors,
we
can
try
to
move
that
time
or
like
have
a
separate
meeting,
but
until
the
next
slack
arises
until
we
launch
coop
coin
and
ico
our
own
slack
thing,
okay
slack,
then
we
should
presumably
stick
with
sort
of
synchronous
things
in
in
our
meetings
and
that
I
will
go
into
the
sack
and
and
dig
up
this
thread
that
I
guess
died
and
find
out
where
find
out
if
they
can
attend
one
of
the
the
bi-weekly
at
the
adm
meetings.
A
I myself had the experience of coordinating with a Google Summer of Code student over mailing lists entirely, and we didn't use IRC at the time at all — I don't think Slack existed — and we still succeeded in making something very good over email communication. We didn't have Zoom, we didn't have Slack. So I don't think some of these chat platforms are that good for communicating complicated topics at all; I think they're good for pinging people, that's it.
A
Okay, any questions or comments on this?
D
Yeah, a few notes for kubeadm. In the last meeting we had a planning session, and the main topic for the next release is that we are dropping the v1beta1 API version and we are going to add v1beta3, the new v1beta3.
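[Note: for kubeadm configuration files this is essentially a header change; a minimal sketch, with all other fields omitted:]

    apiVersion: kubeadm.k8s.io/v1beta3   # previously kubeadm.k8s.io/v1beta1 or v1beta2
    kind: ClusterConfiguration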
D
As a side note, I had some time during the Easter PTO and I played a little bit with the idea of the kubeadm operator, and maybe in the next couple of meetings, or as soon as possible, I would like to demo what I have and also try to understand the best way to move this work off my laptop, off my Mac, and into the open, so people who are interested can join it.
A
Yeah, so some context here: kubeadm used a ConfigMap to enumerate the list of API server endpoints. This was the legacy design, and then we switched to annotating the static pods instead — the API server static pods — with an endpoint, which means that using something like kubectl you can get the pod, check the annotation, and understand what the endpoint of this API server is.
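[Note: an illustration of reading that annotation, assuming the annotation key kubeadm currently puts on the kube-apiserver static pod; the node name is a placeholder:]

    kubectl -n kube-system get pod kube-apiserver-<node-name> \
      -o jsonpath='{.metadata.annotations.kubeadm\.kubernetes\.io/kube-apiserver\.advertise-address\.endpoint}'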
A
Related to that, I wish I could check some of the flags for the API server, but unless I have access to the pods — and the API server is a static pod — I cannot understand the configuration. And I don't think SIG Architecture or the component-owning SIGs wish to enable such an API. Also, we have feature gates, and some of the feature gates are relevant only to certain components.
A
Well, okay, so I'm going to watch this, see how the maintainers respond, and we can go from there. But you're basically saying that, in any case, you think it's safe to remove it in v1beta3.
A
Great, thanks. I have another update for kubeadm: during the weekend I wrote a tool to automate our end-to-end test workflow and test job creation — basically a tool to generate these. I know that kOps has something sophisticated that generates a matrix.
A
We
didn't
have
anything
like
that
in
cuba,
so
we
had
to
manually
edit
all
the
ammo
and
contributors
that
try
to
understand
this.
They
basically
make
a
lot
of
mistakes.
We,
the
mistakes
slip
through
some
of
the
tests,
release
informing
the
release
team
starts
poking
at
us,
and
I
wrote
this
too
and
I'm
going
to
show
it
after
121
is
released.
I
guess
something
like
that,
but
it's
already
like
working.
A
For kubeadm, yeah. But I think something that is interesting here, that I wanted to showcase in front of the group, is the topic of APIs.
A
I found a problem with the kubeadm API we added. I mean, this is a discussion for the kubeadm office hours, but I want to showcase this particular problem: embedding a work-in-progress API in another API is always problematic. In kubeadm we have a couple of API groups. The first one is the output API, which is a small API group for structured output — for instance, when you list bootstrap tokens, you can view them as JSON; that's the responsibility of this API group. The other one is the configuration API.
A
So this is the direct example here: the bootstrap token output API embeds the kubeadm configuration API's BootstrapToken structure, so that the output API doesn't have to redefine the whole bootstrap token structure. The problem with that is that every time you bump the configuration version, you also have to bump the output API version — they are coupled together — for example, when you remove this API, the kubeadm v1beta2 one.
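[Note: a minimal Go sketch of the coupling being described — the package path and fields are simplified for illustration, not the exact kubeadm source:]

    // output API group (e.g. output.kubeadm.k8s.io/v1alphaX)
    package output

    import kubeadmv1beta2 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2"

    // BootstrapToken is the structured output for listing bootstrap tokens.
    // Because it embeds a type from the kubeadm configuration API group, the
    // output API is pinned to that config version: removing or bumping
    // kubeadm.k8s.io/v1beta2 forces a bump of the output API as well.
    type BootstrapToken struct {
        kubeadmv1beta2.BootstrapToken
    }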
A
So it's a coupling between two work-in-progress APIs: one is stuck on the other, and they have to move forward simultaneously. I think this slipped through review, and I really don't like this approach. I just wanted to mention this in front of this group, because if you're working on APIs you have to be aware of this problem, and, Fabrizio, maybe we can discuss this more in the kubeadm office hours — I have some ideas on how to solve it.
B
Yes — just before that, though: the kOps grid generation — I would not call it sophisticated, because what we actually did was, we don't try to merge, we just regenerate everything every time. So it's actually a little easier to reason about, I think, than other tools; for example, the image bumper goes in and updates the image in place, which is a little bit harder to think about.
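[Note: a tiny sketch of the "regenerate everything instead of merging" approach, with hypothetical names throughout — the point is that the whole output file is rewritten from the matrix on every run, so the result is deterministic and easy to review as a plain diff:]

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        versions := []string{"1.19", "1.20", "1.21", "latest"} // hypothetical version matrix
        out, err := os.Create("e2e-jobs.yaml")                 // hypothetical output file
        if err != nil {
            panic(err)
        }
        defer out.Close()
        // Emit every job from scratch; never patch existing entries in place.
        for _, v := range versions {
            fmt.Fprintf(out, "- name: e2e-kubeadm-%s\n  kubernetes_version: %q\n", v, v)
        }
    }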
B
So
it's
it's
a
it's
a
it's,
a
quick
and
dirty
approach
that
works.
Well,
I
think
on
etsy
adm
we
are
getting
the
builds
going
into
the
like
staging
working
group,
gates,
infra
infrastructure
or
the
kubernetes
infrastructure,
and
we
are
starting
to.
We
will
be
that's
almost
working.
I
think
we'll
have
one
more
one
more
pr
to
go
in
and
we
should
be
able
to
consume
those
in
k
up
shortly.
So
we
will
then
have
moved
the
location
of
the
lcd
manager
tool
that
we
consumed
to
the
kubernetes
six.
B
It's
both,
but
yes,
there
is
a
container
image
build
and
there
there
is
or
are
binaries
at
cd
adm
and
yes,
they're
they're.
Both.
A
I think I saw a comment from you that maybe we should start building the Kubernetes etcd image as part of the etcdadm repository, or something like that.
B
Yeah
there
was
a
there
was
some
debate
about.
Where
are
we
going?
Where
should
we?
So
there
are
some
images
in
the
clusters
subdirectory
of
the
kk
repo,
which
is
of
course
very,
very
deprecated,
and
there
was
some
debate
about
what
we
should
do
about
those
going
forwards.
B
So I don't know if, like, we would give them — you know, anyway, I don't know whether, organizationally, that felt a little bit harder, even if we could just give them a staging repo and suggest they do it. I guess a third option would be that we just spin up a new project, a new kubernetes-sigs project, that builds etcd for Kubernetes. But yeah, I didn't notice any more discussion on that; I should actually go and see whether anyone replied.
B
Yeah, and I think that's fine — I think we can always put it in a subdirectory.
B
Yes, I mean, I think there's a narrow question and there's a really big question. The narrow question is, I think, that currently there's some tooling to help you upgrade between etcd versions, and technically that isn't upstream in the etcd repo, and does it need to exist, you know, blah blah — so we just need to figure that out.
B
The
really
interesting
question
is
when
we
start
well
now
that
we
are
separating
out
its
kubernetes
to
the
point
where
what
was
in
v1
considered
core
functionality
is
no
longer
in
the
core.
Should
we
we're
going
to
test
with
these
images?
B
Should
we
just
test
with
random
images
that
we
pull
off
the
internet,
or
should
we
actually
like
start
building
them
and
making
sure
that
they
are
built
according
to
our
community
code
of
practices
and
everything,
and
they
are
hosted
in
a
reliable
place
that
is
less
likely
to
be
backdoored
or
shared
fate?
Backdoors
right,
you
know,
is
that
something
that
we
consider
in
scope
or
not,
and
I
honestly
don't.
B
I don't think the five of us can answer that question, but that is the much bigger topic that we are maybe walking towards.
A
Yeah, from my perspective here, we are already consuming an etcd binary — well, okay, we're building that server binary from source; I think we are not getting it from their releases, right?
A
Yeah
but,
but
even
if
we
build
it
from
source,
we
essentially
we
are
trusting
the
source.
We
are
not
validating
the
entire
hcd
source
code
and
I
think
if
they
also
build
the
image
themselves,
we
are
just
going
to
trust
it.
I
mean
I'm
making
a
comparison,
comparable
argument,
because
we,
the
the
cooperatives
project,
does
not
validate
the
entire
company
hcd
source.
Basically,
we
are
just
trusting
that
the
icd
server
works.
B
I agree with that, except I feel like we could verify a SHA; we could protect against certain things, right? We could prevent the etcd project from just changing what an image is. You know, we have all this work that we've been doing on image promotion in WG K8s Infra, to promote an image with a very particular SHA-256, and if we say, oh, but you know, just put in whatever etcd you want...
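[Note: for context, the image promotion process pins images by digest; an illustrative (not real) promoter manifest entry looks roughly like the following, and "just put in whatever etcd you want" would bypass exactly this digest pinning:]

    - name: etcd
      dmap:
        "sha256:0000000000000000000000000000000000000000000000000000000000000000":
          - "3.4.13-0"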
A
Yeah
yeah,
I
see
I
see
it's
it's
going
to
be
interesting.
How
we're
going
to
do
this
with
the
split
of
the
monolith
yeah.
B
Yeah, it's probably more of an architecture — well, I think our SIG actually has a strong claim on it, because it is related to how you actually use Kubernetes, how people actually run Kubernetes, whereas SIG Architecture isn't really concerned with that — sorry, I apologize, API Machinery isn't really concerned with that. It could be a steering committee or SIG Architecture type question, or a SIG Release type question.
A
Yeah, the usage in Kubernetes falls under API Machinery; in fact, the owners of the image are members of API Machinery as well, which is a bit unfortunate, but that's how we treat etcd. But yes, this SIG has to participate in the discussion about how we're going to consume it and build it.
A
Also.
The
building
and
the
releasing
part
is
also
seek
release.
Maybe
like
are
we?
Are
we
going
to
stop
building
ncd
completely,
I'm
going
to
move
it
to
ownership
of
the
hcd
project?
I
mean
honestly,
I
think
that's
the
project
should
do
all
the
things
and,
like
you
say,
maybe
we
should
just
verify
the
community
side
to
make
sure
that
we
trust
this
particular
image
and
and
then
we
had
called
a
particular
version
into
our
add
to
it
this,
as
the
latest
supported
trusted
version.
A
Okay, should we move on?
E
Yeah, we have a release out — we release once a month. Well, the release hasn't gone out yet; we're a little behind, but the beta is going out today.
E
There's nothing that exciting, but we're slowly making containerd a more viable default container runtime by giving it feature parity with Docker, and we're slowly benchmarking that — we're benchmarking start times and different operations with containerd compared to Docker on the inside — just in case people are more comfortable with that as a default going forward.
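[Note: minikube already lets users opt into containerd explicitly today, e.g.:]

    minikube start --container-runtime=containerd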
E
We
don't
have
any
pressing
need
to
switch
to
container
d
as
a
default
soon,
but
eventually
it's
going
to
be
that's
going
to
be
the
default
in
kubernetes,
so
we
want
to
make
that
a
viable
option.
So
that's
what
we've
been
working
on,
there's
other
there
are
other
like
stability
fixes
and
stuff,
but
for
the
most
part
it's
a
it's
a
it's
a
big
release
with
not
a
lot
in
it.
If
that
makes
any
sense.
A
You
always
saw
basically,
I
have
multiple
questions
like
what.
What
do
you
think
about
the
whole
couplet
slow
down?
I
we
we
saw
that
it's
not
going
to
be
merged
in
dot
zero
one.
Twenty
one
did
you
see
many
many
complaints
from
the
users.
I
know
that
ben
is
delaying
the
kind
release
because
of
that.
A
We only have CI that can catch the overall slowdown of the Kubernetes startup, yeah. We also have plans to make containerd the default in kubeadm, maybe at some point. Yeah, I think what we should do is just create another survey for the users, to see how much containerd usage has increased over time, because we have been seeing this increase in the number of containerd users over time based on multiple surveys.
E
Yeah, the only reason we're even worried is because we have documentation for it, and it's a pretty common practice, to use the Docker daemon inside of the minikube VM, and you can't really do that with containerd. So we have to add minikube commands to approximate what they're doing.
E
And
so
our
goal
is
really
just
to
be
confident
that
whenever
we
have
to
flip
the
switch,
we
can,
with
a
very
little
like
suffering,
very
few
consequences
and
yeah
we're
just
pinned
to
a
version
of
kubernetes
until
that's
fixed
and
we're
I'm
like.
E
I
don't
know
about
medya
the
the
tech
lead,
but
he
I'm
I'm
willing
to
to
test
out
like
a
bit
like
a
test
version
of
kubernetes,
and
we
can
run
it
through
our
testing
to
see
what
start
time
looks
like
and
what
our
test
coverage
looks
like
with
a
with
it
with
like
a
a
build
version
of
of
kubernetes.
A
Yeah,
it's
just
it.
It
goes
back
to
the
previous
times
it
just.
It
takes
10
seconds
to
start
yeah.
It's
good.
D
Yeah,
I
have
a
question
for
sheriff
about
the
difference
of
performance
between
docker
and
continually,
because
I'm
interesting
about
this,
given
that
there
are
discussion
in
cluster
api
for
placing
docker
with
container
d
inside
kappa
d
for
our
end-to-end
testing
and
yeah,
I
I
will
be
really
interested
in
your
feedback
in
terms
of
performance
memory.
Consumption
stuff,
like
that.
If
you
have
data.
E
We
have
data-
I
I
don't
have
access
to
it
right
now,
but
if
I
can
get
it
and
it's
and
it's
in
a
good
shape
to
be
shared,
I
can
share
it
either
I'll
share
it
in
this
doc,
like
next
in
two
weeks
or
whatever
I
can.
B
Yeah, I mean, I'm also interested to know if there are any issues that we need to address, so if that data can be shared, that'd be wonderful. I wanted to share a little bit of what we're doing in kOps, which is: we are moving forwards with it, because what we have learned in the past is that at some stage people will just stop supporting anything but this, and we want to be ready.
B
Ideally,
we
want
to
have
made
it
the
default
prior
to
that
release,
and
ideally,
we
want
to
have
made
it
the
default
two
releases
prior
to
that,
so
that,
if
that
default
proves
to
be
a
disaster,
we
have
an
extra
like
release
and
three
months
to
like
fix
it,
and
we
tried
to
do
this
with
basic
oauth,
for
example,
and
we
didn't
quite
pull
it
off,
but
like
the
basic
oauth
removal
hurt
a
bunch
of
people
and
it
turns
out
like,
even
though
we
started
like
nine
months
early,
it
was
not
enough
time
to
like
to
get
it
rolled
out
to
people
find
the
problems
fix.
B
And
so
like
that's
why
we,
I
think,
as
I
understand
it
in
the
latest-
release
it
defaults
to
container
d
for
new
clusters
and
we
are
planning
on
in
the
next
release,
which
isn't
necessarily
like
aligned
with
kubernetes.
But
in
the
next
release
we
will
default
to
container
d
unless
you
specify
otherwise.
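[Note: in kOps terms this is the cluster spec's container runtime field; a user who wants to stay on Docker would pin it explicitly, roughly like this (fragment of a cluster spec, other fields omitted):]

    spec:
      containerRuntime: docker   # "containerd" is the default being described above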
E
Yeah,
I
we
don't
have
a.
We
don't
have
a
road
map
like
for
when
we'll
actually,
when
the
default
will
be
container
d
and
you'll
have
to
specifically
ask
for
docker
yeah
we're
just
gonna
like
for
docker
for
the
docker,
shame
we're
just
gonna
switch
to
whatever
is
being
maintained.
We
don't
actually
have
a
strong
opinion
about
that,
but
we
don't
we'll
flip
the
switch
when
it
makes
sense.
E
Our
big
thing
is
is
making
sure
that
our
test
cover,
like
our
test
results,
look
the
same
because,
right
now
the
docker
container
runtime
is
more
stable
just
internally
in
terms
of
mini
cube
right
now,
the
start
times
are
almost
exactly
the
same.
We
have
a
bug
somewhere
where
sometimes
it's
very,
very
slow
to
start.
E
So
that's
like
our
big
issue,
but
that's
we're
we're
we'll
we'll
flip
the
switch
at
the
same
time
as
everyone
else.
Basically,
we
just
don't
have
a
strong
opinion
about.
A
Yeah, the same person pointed out that this might actually be due to some runc bug or something.
A
Okay, do we have any more comments for minikube?