From YouTube: sigs.k8s.io/kind 2019-03-25
A: It's a bit tricky. We have to settle on a UX for this, like what is the best option, and just go for it. I mean, there has to be some sort of field that says "enable ipv6", and from there we have to automatically pick the rest. If the user said some of the other options should be ipv6, we should transparently support it. They should have a single flag, like "enable support for this", and possibly this way we can also have dual stack working eventually.
A: I mean, we have to get dual stack working for us. I don't know what the state of the proposed KEPs is at the moment. Yeah, people are still trying to figure out how to support it in core components, to my understanding, so it's not only about kind or kubeadm. There are some questions about the API server, for instance.
B: Well, the proposal basically added this field, say, ipFamily, and with that type of family you can define ipv4, ipv6, or dual stack, and kind should validate that everything is correct and deploy it in the cluster. Because dual stack is not clear, I didn't know; I added the things that are in the KEP. You know that they suggest using a comma-separated list for some fields.
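As a rough illustration of the field being discussed, a kind cluster config might look like the sketch below. This is only a sketch based on the proposal as described here, not the final schema; the field name and valid values could differ from what actually merged.

```yaml
# Hypothetical kind config sketch: the proposed ipFamily field.
# Values discussed in the meeting: ipv4, ipv6, or dual stack.
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
networking:
  ipFamily: ipv6
```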
B: That's why I suggested moving all these parameters that depend on ipv4 or ipv6 to the new API, so we can validate the config the user needs to configure. We have the validation process in that function, and we can stop the creation of the cluster. Because if you send these parameters via the kubeadm patches or the JSON patches, I don't know if we are able to run the validation there.
E: Let me say that, okay, upgrade testing is a priority zero. As soon as we can test upgrades, we want to test upgrades also for HA clusters, because you have to test the machinery that upgrades the secondary control plane nodes, which is different than the machinery that upgrades the first control plane.
D: We should absolutely do this. I just wanted to be clear that, hopefully, that's not the final state. Hopefully we also get some other signal covering these things with real clouds, particularly for CI signal, where it's not as much of a big deal to have some, you know, comparatively expensive cloud resources.
E: I'm expecting, for instance, that some folks that are doing cluster API, as soon as kubeadm HA graduates to beta, they support it and they test upgrades. But it is kind of: if we don't reach beta, they don't adopt it, and so on and so on. Okay, so this is the background. The other piece of the background is that kubernetes-anywhere, which was the tool that was used in the past to test upgrades, is not an option anymore.
E: So kind is the most concrete option we have on the table today, and yeah, this is the background. Then I think that we should plan a meeting to focus on these. The context is that we have two different layers of problems. One layer is that kubetest, and kind, and this new version, which is kubetest2, somehow should be structured to support the upgrade scenario.
E: Let me try to get there. This is just food for discussion, so it's not a definitive solution. First of all, my starting point is that kubetest now executes up, test, and down, okay. Basically, those are the three macro phases that we can play on. That means that one viable option is to fit upgrades into one of these three phases.
D: I mean, the other idea is that part of the reason they're in different steps is that you can run an up, and then you can come and do a test, and then you can tear it down, and you may not choose to tear down when you up and test, or you may even choose to up and then not test for some reason. Maybe you want to do something else first, some other kind of testing.
D: I don't think we're gonna have a lot of progress on kubetest2 in this month. It's not a priority for me. I know it's gonna come up as somewhat of a priority for one of my teammates for some other testing purposes, but that's probably not until some point actually in Q2, and I can't make any promises about what we'll be able to make on that. It's really a nice-to-have.
A: Can I say something at the end, please? So, I know the constraints of what we can and cannot do, and what the priorities of testing are for us as well as for kind. You know, Tim talked about the plans for phases and stuff in kind, and we want to keep this tool a Swiss Army knife and stuff, but the reality is that if we want to get the signal in 1.15, we basically have to start planning on kinder.
A: You know, the other tool as a back-up plan for this. And also, the tool itself: we have to execute it directly from a script, completely bypassing kubetest, the same way you are currently doing with the regular bash script that you execute in the jobs. Using this mechanic we can implement whatever we want for upgrades. The concern from Fabrizio and Tim is that we fragment the two projects at the same time.
D: I would also add, though, that that script that we're using for kind testing today is not really meant to be the pattern. It's a pattern, and probably the easiest way to get started, but I would recommend everyone to write a well-defined program, probably closer to, say, kinder, as opposed to an ever-growing script.
D: Well, I think all this is fine. I will also add that phases specifically is something that I plan to land very early, but I don't think that upgrade is an up phase. I think upgrade is gonna need to be just some totally separate concept from those, because phases are going to be some kind of alpha experimental thing that kind offers as part of up, and not necessarily particularly extensively, but even just logistically from a testing perspective.
D: And, just from the perspective of making sure the tests are good, I would also recommend that maybe not the full end-to-end suite, but some kind of limited smoke test be run as the first part of the upgrade test, then the actual upgrade, which itself is a test: does upgrading work, and then do things work after the upgrade. Then you can run the normal test suite.
D: Yeah, I mean, you could even have something like: you call your test program, and maybe your test program calls ginkgo, or maybe you don't want to implement calling ginkgo, so your test program calls kubetest. That's relatively fine; the ginkgo testing can kind of be its own isolated chunk of this, and I don't think that particular abstraction matters that much right now.
D: So as far as 0.3 is concerned, I've been looking at the requests I'm getting from everyone, and also the use cases that I know, and I think these are roughly the things that will fit in about a month's time frame. Not necessarily all of them will land, but definitely the top priority ones should. I think we really need phases to unblock some local development of kubeadm. Getting the multi-node, local dynamic volume stuff in seems like a real win for various use cases and shouldn't be particularly problematic.
D: We've had people testing this for a while now without it being integrated, but, you know, manually deploying it. We already have a PR for building from a release tarball, thank you Andrew, and I think that's going to save some CI headache, so that we don't have to do quite as much building of kubernetes. We also don't necessarily have to worry about publishing images, because it will make it cheaper and easier to build an image from a particular CI tarball without actually doing the kubernetes build.
D: So that will let us patch over that a little bit. Traffic serving is kind of a nebulous one; I'm not putting that down to anything in particular, but just making it easier to do tests where you have something on your host talk to something running in your kind cluster. I've seen a lot of use cases blocked on that. We have a couple of options.
D: This one is not user-facing for the most part: finishing abstracting away docker. We've still got various pieces of the code that do things like pass flags to docker as just strings, and things like that. That's going to bite us later as we keep trying to add features, or if we want to support podman and so on. I have some partial work on this left over and I'd like to finish it. It's not super high priority, but I think it's important that we have well-defined interfaces.
D: This will also leave us open the option, in the future, of using an in-process docker client as opposed to shelling out to docker. It will also make it easier for us to support special modes with a given runtime.
D: Maybe we find that we need to do things slightly differently if we start trying to support remote docker. And if we have our own interface for "I want to make calls to docker on the host for talking to the node containers", fully as the thing that's injected all the way through, then it'll be easy to do. Okay.
C: To be super clear, this is talking about the runtime that is used to run the node containers. You made a statement, and I'm not gonna hold you to it ad infinitum, but I think you made a statement that internally there's no promise that it will use docker to run the pods, but externally the thing that will run the nodes, at least for the foreseeable future, with no plans to change, you said is docker.
D: So that is by far the most important and common one, but leaving it open where we're not tied at the hip to that is super useful, and we've been making progress. In that case, podman is the easiest one because it mimics docker, but right now it's pretty ugly to plumb through and try to make everything use podman instead. I get where you're going.
C: Pretty separated today, yeah. That's actually what I posted last Friday night, like at 8:00. I'm pretty thoroughly using, you know, docker exec to mimic SSH-ing into nodes, and so if they're gonna support more than just docker, I want to make sure that either that has the ability to exec into nodes, or we consider what I suggested, which is being able to add information to the nodes in kubernetes that pulls data from the node's operating system.
D: You, as a kind user, should always be able to use docker on the host. If you'd like to also use this additional stuff, you might need to port to it. We already today have library tooling you can use in kind for: I want to get a list of all the nodes, and then I want to be able to take actions against the nodes, like run commands on them. That today calls docker exec, but it is wrapped under an abstraction.
D: Potentially we can ship a lighter image that way, and, you know, the kubernetes lifecycle is moving towards being on the CRI path all the time. So I want to look at: can we take the containerd we already ship underneath docker and use that as the runtime on the node? That will also help us shake out some things where stuff is depending inherently on docker on the node. I see that one as actually long-term.
D: Say the daemon's been restarted and they still have this kind cluster around, and they want to be able to restart it as opposed to creating a new cluster. Initially the argument was: well, you should just make a new cluster; these are supposed to be cheap and not stateful. But there have been a few points around things like "well, I needed to pull some third-party images to my cluster and that's time-consuming", and it will be a more straightforward win to add restart support than to optimize away pulling third-party images, which is difficult to do.
D: It's possibly a little bit much to expect that user to go through and figure out what all the images in the manifest are, pull them to their host, and load them into their cluster. It should be relatively simple for us to have something that restarts the containers properly for the nodes.
D: The minor release, I did not put it there. I thought that might be a tad ambitious. I think we should work towards it; I don't know if we should say we're definitely gonna have it working. Some things I think we still need more time to design, like this CNI abstraction, where you want to be able to configure that. I think we should absolutely have work towards it.
D: We're making that ipv6 PR rebase, so I think we might need to break it up: we're gonna fix pieces of it and lay down some of the tooling and things to start making the code aware, to not assume ipv4 all over the place. But I'd like it to not block things like, say, refactoring for phases or something. I don't think they will clash, but we have...
D: We have a lot of other almost-there things, and I'd like to make these releases shorter, which I guess is something we didn't explicitly talk about here but have brought up in the past. I think about a month is roughly where we've been at, but slipping a little bit. I'd like to tighten it up to actually about once a month for the minor releases, possibly faster once we stop breaking things.
D: Not as much; that's more of a completeness thing on our end. People have asked for CRI-O, for example, and I think containerd is an easier route for us to start exploring, because once we add CRI-O we are definitely going to have to deal with the base image being different, but we can actually do some exploration of containerd with images that we've already published, hopefully. Okay, I think it really depends on who you talk to; in general there's less interest in this.
D: So, not necessarily finish, but ship something. If you look at how our bring-up is today, I think we've relatively stabilized these things, so it's probably okay to expose some kind of experimental or alpha flag to create this, like "these are the steps I want taken". And we can do some cleanup to try to make sure that they're reentrant, so that if you run, say, three of the steps and then you run another step, that other step will be able to rediscover all the information. I think we're relatively there.
C: To revisit the priority of the tarball one: I don't know that he wanted it so much as he was just involved in the commenting, very great feedback on the PR. If I'm the only one that wants it... I kind of responded to it being pointed out to me, and I do find it useful, but it's not something that I'm blocked on either, I think.
D: We need this. We are starting to have more CI jobs for kubernetes, and they're duplicating building kubernetes, and despite the bazel cache, that still consumes considerably more resources. We can make the pods and the tests cheaper if we don't actually need to build kubernetes. A normal kubernetes CI job depends heavily on the fact that you can spin up a cluster from these tarballs, so if we can make kind closer to that equivalent...
D: Originally the plan was we were gonna have a CI job that pushed kind images, but there are some logistics with that, and we already have a job that pushes tarballs. So if we have kind build from tarballs, we can tell everyone that's doing a CI job: either use a stock kind image or build from a tarball, please.
D: And reserve building from, like, bazel or the make files or something for presubmits, the kubernetes presubmit specifically. Even that can potentially switch to downloading one of the CI tarballs instead of building; that one's a little iffy, since we do need to test kind itself building. But an average project should be able to test a recent kubernetes snapshot without having to build kubernetes, and this is probably the most viable way to unblock that for now. Okay.
D: Yeah, that one, I think the reason I put it a little bit lower is because the other two actually are probably less design and work. We already have it broken into phases internally; we just need to expose a mapping somewhere, and Fabrizio already had a reasonable plan for that before: we can have a command line flag with a comma-separated list.
D: We just need to reintroduce that, now that we've kind of negotiated some of the stability stuff, and now that we've refactored these things for a while and it's relatively stable what the steps are. I think that may change slightly as we have to deal with, say, installing a provisioner for volumes or something like that.
D: One more question I have for this group: the local dynamic volumes. That one trends into kind of iffy territory of what a minimum viable kind cluster is, but I've looked around and I think pretty much everyone's kubernetes distribution or deployment has working dynamic volumes. I think we should probably bundle that one as a default-enabled thing, so that we have a default storage class that actually works. We can do the hostpath storage class, which is built in; the downside of that is...
D: I will do this, but it's more of a "well, you're going to need some kind of boundary here". For example, someone requested the metrics server; I'm not sure if that's something we definitely should ship. That kind of seems in the territory of "you could add that yourself if you're needing it". But the volume one seems like a blocker for a lot of testing: anywhere the thing you're testing needs storage, they're going to request just the default storage class for a lot of simple testing, and ours...
A: You can draw the line at important core API objects; you can say, okay, we are going to include this because this is important for a lot of people, but things like, you know, the metrics server are not like that. That is definitely a customizable part of a certain user's cluster, and it's not important for everyone, right.
D: The thing we need to ship is, like, this storage class: if I'm going to run a cluster on vSphere, I need the vSphere storage out of the box; it's pretty useless without that. So I think having the local provisioner is our closest thing to that, whereas, say, the dashboard is totally up to your use case and not actually integrated into the cluster necessarily.
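For reference, a default storage class along the lines discussed might look like the sketch below. This is illustrative only: the provisioner shown is a placeholder for whichever local/hostPath provisioner gets bundled, not necessarily what kind ships.

```yaml
# Sketch of a default-enabled storage class for a kind cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # Marks this class as the cluster default, so PVCs that request
    # "the default storage class" just work.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/local-path  # placeholder provisioner name
```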
D: That one's a bit of a special case; we can revisit. There's actually a separate thing with that, where it's being configured in a similar way to some other kubernetes components and it's preferring the host names. But that one you run in the cluster, not on the host, so it's on the cluster network, so it doesn't need any special...
D: We still have a couple of agenda items. We have the dual stack support, which is really interesting, and there's a pretty detailed doc for this. If you haven't seen it, I request that you review it. I took a pass over it; I think we might bikeshed a few things, some of the configuration, but generally this makes sense and is something that we want. It's another area where we just don't have CI testing, because no one has a CI cluster for this.
C: I mean, wouldn't that be a default, and wouldn't the CNI, if there was no CNI default, then just be part of the phases approach, where it was in the "adding the overlay network" phase? Because right now it's just baked in, but I imagine that just gets changed to some default, and so there's...
D: Well, so let's say you want to do this offline or locally: we might need to add some, not even CNI-specific, support for saying "when you do bring-up, I want these images to be side-loaded" or something like that, because alternatively you have to build a custom node image with this. This is for "I have a node image, and what this configuration will let you say is: I want it to come up in dual stack mode and I want it to use this alternate CNI."
D: Yeah, so maybe if we find that, for example, calico works and is good enough for us, then that may be simpler than solving some of these, because there are other issues, like how do we get the images pre-loaded. We have nice options for that right now for the pre-built ones, but doing this at cluster bring-up time, maybe... I think it's something we should eventually support, but it may need a little bit of thought, like how do you specify the images?
D
Well,
but
it's
so
specifically
for
CNI,
because
you're
bringing
this
up
during
the
cluster
bring
up
even
even
like
that,
doesn't
exactly
work,
because
you
have
a
chicken-and-egg
problem
with
like
okay.
D: CNI is a bit different, whereas if this were "I want to test my application", I'd say: okay, get the cluster up, load the images in yourself, then test the app, and that should be fine today with the tooling we have. But for things that are part of kind itself coming up, that's trickier. If you think we can get a calico switch in earlier, that would be great, so that we can test it out a bit before we actually release, and if we do that, that might unlock some things.
D: It's worth exposing in the config, but I don't know that it's necessarily worth blocking everything on right now, because I also think it's something that we want to think about: that one's probably gonna need more design of what it looks like. I don't think just putting the URL in is actually going to be super valuable long-term; I think that's one option, but there are also local manifests, or the images considerations, and so I think it's something we should get to.
D
But
if
we
can
just
say
a
bit
like
calico
is
great:
it's
gonna
support.
Ipv6
too,
we
just
switch.
We
can
revisit
the
cni
options
and
focused
more
on
the
I.
Think.
Probably
the
more
interesting
task
for
us
here,
which
is
getting
ipv6
up,
I,
think
that
testing
will
work
better
and
there
are
workarounds
today
for
like
I,
want
to
test
my
CNI.
In
that
case
you
can
build
a
you,
can
take
a
kind
image
and
you
can
put
your
manifest
in
it
and
that
works
pretty
well.
If
you're
specifically
focused
on
testing
CNI,
the.
D: Up phases is also one, and I would say it doesn't belong in the config; it'll probably be an optional command line flag. Because if you're describing your cluster, the phase that you take it through is orthogonal: you're going to use the same cluster configuration for all of the phases. So it's kind of similar to name, in that the name of the cluster is orthogonal to the configuration of the cluster.
D: That's still only for a few rather specific use cases, and in those cases it should be fine to use a flag, or import kind as a library and do it there, or something. For the broad cross-section of kind users, these phases are things like kubeadm init, which they really shouldn't have to worry about.
H: Hey, hi, good to meet y'all. Thank you for your work on this project so far, very useful for me. I'm a contractor that's getting people into docker and containerization and k8s, and I have been using minikube as kind of the development environment for that, as many people do. I'm looking into kind as the replacement for that on developer laptops, and I'm wondering what words of caution, if any, there are.
D: Okay, because the nodes themselves are containers, and they're not binding arbitrary ports from the container to the host outside of the, like, docker for Mac VM. I think minikube has some special tooling for this, and we don't currently; you have to use something like kubectl proxy. On Linux this is less of a problem; you should be able to talk to the containers over the bridge network.
D: Yeah, and I guess, for example, minikube also has, I think, this built-in concept of add-ons for various things. We don't have that, so if you want to toggle an add-on by a flag, you're gonna have to actually install it to the cluster instead. I don't think that's a big blocker, but I do think the biggest blockers have been the storage thing and the network stuff.
D: Including the host itself, yeah. So, like, I'm running kind, and then the next step, I want to run curl against some service in the cluster. If you're on, say, a Mac, that's not gonna work; you're gonna need to run something like kubectl port-forward first. I think we can add some support for this, but it may be tricky. The additional consideration that makes this more awkward for us is that we support multiple nodes, so we can't just do something like...
D
We
want
to
explore
options
besides,
like
let's
just
bind
all
of
the
ports
that
you
can
possibly
have
something
on
to
the
host,
because
it
actually
takes
support
on
the
host
and
passes
it
through
the
container
I'm
gonna
be
exploring
options
for
that
this
cycle,
but
right
now
that
has
been
a
pain
point
for
people
that
are
using
it.
Like
many
cute
got.
D: Yeah, awesome. I will also add that replacing minikube for all use cases has not been the first priority. I think we have a scoping doc now, and I do think it could work pretty well for that, but our first one is testing kubernetes: we want to test things like kubeadm, and we want to test applications on it. We want to do the bootstrap clusters for cluster API, and then anything else is just a really nice-to-have.
D
D
A
A
Network
service
meshes,
you
can
find
it
in
github.
This
is
it
yes,
it's
like
a
frog.