Description
This was an unconference session and therefore has no proper description.
A
That is planned. The plan is that all the state we're going to track is either really simple things that are just labels, like the cluster name, or things that are stored as files in a directory in the container, and we have tools to dump a bunch of those things. There's a command to export a lot of metadata, so we'll probably have that command also export the config. It exports some runtime info about Docker and all of the container logs and things.
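As a sketch, the export flow described here looks roughly like this (the output path is just an example; check `kind export logs --help` on your version for the exact behavior):

```shell
# Dump runtime metadata and all of the container logs (Docker info,
# per-node container logs, kubelet/systemd logs) into a local directory.
kind export logs ./kind-logs

# The files land under ./kind-logs, one subdirectory per node container.
ls ./kind-logs
```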
C
I think there's been a few people talking the other day about getting together some user stories and kind of understanding what everyone's use cases are. We've all come to this room because we all want to be able to test something. I guess, is there any particular use case right now that anyone's got to share, that you might not already see the project trying to cover, or are we all doing the same thing?
D
Yeah, so one of the big things that we're using kind for, and I don't know if this counts as something you don't know about, because I am a kind contributor and I talk to Ben all the time, but we're using it specifically to test an application that runs on Kubernetes. Testing that in CI otherwise would involve having an existing cluster or spinning one up somehow on GKE or something. Using kind, we can spin up a cluster in a matter of minutes, instead of the tens or hundreds of minutes that some of the other tools will take. So I think that's a big one, even outside of the core Kubernetes system, because a lot of people are developing apps for Kubernetes, and they want to know that these apps actually run properly in these environments.
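The CI workflow described above can be sketched in a few commands (depending on your kind version, you may also need to point `KUBECONFIG` at the path kind generates):

```shell
# Stand up a single-node Kubernetes cluster in Docker, typically in a
# minute or two, then run the application's tests against it.
kind create cluster

# Verify the cluster is reachable (older kind versions require:
#   export KUBECONFIG="$(kind get kubeconfig-path)"
# before this step).
kubectl cluster-info

# Tear the cluster down when CI is done.
kind delete cluster
```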
A
Using this for cert-manager, that is not the original use case, but it's definitely one that we are actively supporting. Again, I think these things need some work, things like UX, log output, that kind of thing, but supporting running other applications and testing them on it is a big thing. We've brought cluster boot time down to about 40 seconds now, so it's a lot more tolerable for testing. For us personally, not having to wait 10 to 15 minutes for a GCE cluster to boot is a huge one.
A
That is supported today. Multi-node is not yet, but multi-cluster is, and Katherine has actually done a fun thing with that; you should attend her talk. She does some stuff with basically turning all of Kubernetes into unit tests and getting coverage information out of it, and for soak tests she has a fun script that spins up something like six kind clusters on her machine and runs the tests against them.
C
And testing Kubernetes itself could really do well, which obviously needs multi-node. I wonder, though, if this is something that the wider community needs and wants. It's absolutely something that's happening, without doubt; I think it's already done, in fact, thanks to... yeah, yeah. I mean, I'm just wondering: what are other people's needs for multi-node?
A
You know, Fabrizio has... there's actually a PR out for most of this now. He has one where it runs an haproxy container locally and actually runs a full HA cluster. He did a pretty awesome demo, which I wish I could replicate here, where he drained away access to some of the masters by exposing the haproxy control plane, going to that UI, and disabling the network to one of the masters. So I'm just going to create a default cluster here so we can see real quick, I guess. Oh, maybe I'm not.
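A hypothetical sketch of what the multi-node HA setup from that PR might look like as a kind config; the schema (the `kind:`/`apiVersion:` values and role names) is an assumption and depends on the kind version that eventually ships it:

```shell
# Illustrative only: write a multi-node config (three control-plane
# nodes behind the haproxy load balancer, plus a worker) and create
# the cluster from it.
cat > ha-config.yaml <<EOF
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
EOF

kind create cluster --config ha-config.yaml
```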
A
No, so it auto-names everything so that, for example, the nodes have a consistent node name, hostname, and container name. If you create a cluster again and don't specify a name, then you're going to have a name collision. We want to add a pre-flight check for that; we just haven't gotten to it yet. You can set a flag for it and go over to this other one.
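Concretely, the flag mentioned here is the cluster name; `second` below is just an example value:

```shell
# The default cluster name collides if you create a second cluster,
# so give the second one an explicit name.
kind create cluster --name second

# Clusters are listed and deleted by that same name.
kind get clusters
kind delete cluster --name second
```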
A
Basically, I'm actually really hoping... kubeadm is now GA. We have a beta config; we've switched to that now for new enough Kubernetes. Ideally we get to the completely stable config, and then we can remove a lot of logic that needs to select over different configurations and things. There are small differences between them.
A
The way we're doing it right now, there's actually the Bazel one, and the one I didn't mention: it can install from the released apt packages. When we do that, we're actually running some installation, so that doesn't work as well. With that, we might switch to the binaries, in which case we will switch to a more normal build. There are some upsides and downsides to both: we get a little bit more coverage for testing and a little bit more complexity in kind, but it was a pretty small trade-off.
A
All of the code for doing that is shared with the code used for manipulating the normal containers, and it's fairly hermetic; we stage everything into a temp directory that exists for the duration of the build, so we don't have any major issues there. Normally people are worried about docker commit, mostly because it's not very reproducible.
A
There was a previous iteration that a colleague did, who's working somewhere else now, and that's how it always shipped: it ran a cluster inside a container, so you had two layers of Docker in Docker. We removed that because it turned out to not really be very helpful, and it's a lot more annoying to work with locally. With this one, you can kind of just docker exec into each node, and from that point you have a bash shell, and it's what you'd expect.
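For example, shelling into a node looks like this; the container name shown assumes a default cluster, so check `docker ps` for the actual names on your machine:

```shell
# Each "node" is just a container on the host.
docker ps

# Get a bash shell inside the control-plane node and poke around
# (kubelet, static pod manifests, container runtime state, etc.).
docker exec -it kind-control-plane bash
```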
A
Minikube is a significantly more stable, more mature tool. I see it as a very polished experience for "I want stable, released Kubernetes and I want to be able to do everything": I can run a local load balancer, ingress, or a GPU plugin. Something that we haven't figured out how to ship yet is multiple container runtimes and selecting over those. You have all of those things already with minikube, and they work great.
A
Yeah, I mean, I think that's possible, but I haven't really wanted to delve into anyone else's space there, and there's a lot to do for CI at the moment. I do want it to have a reasonably polished user experience, because the other goal is that when you run a presubmit, someone should be able to look at that, grab one line, run it on their machine, and not worry about it.
A
And it has a default storage class backed by host path. We have some people looking at other, more advanced things, but nothing yet; I'm not sure how active they are on that either. I'm not targeting it right now, because I'm kind of terrified of someone using this for a stateful workload. That sounds like a really bad idea.
A
I believe so, once we have that ready. I also know Federation is using it. I think the other thing is, we have some tests right now that are targeting scalability and the GPU plugin, and the GPU plugin might be a good target to be a presubmit again; it's been pretty stable. But that gets into telling people I'm going to move your signal around and stuff, so it will need some discussion with them on that.
A
In SIG Testing, we are also looking at how we can improve the experience around postsubmits. Something that I'm hoping to push, independently of this, is to have SIGs have a dashboard for their tests with a subscribed alias. SIG Cluster Lifecycle has this today; most SIGs don't have an alias for this. So if we move their tests into a postsubmit and no one is maintaining them, then it's all on the release team again. We need to avoid that, but that's something that we kind of need to fix anyhow.
A
For kind, so I've got one out now. Semver, ideally; we will hopefully actually keep with that. We are using it, but we only have one release. We're putting binaries on GitHub for the command line, and we have images on Docker Hub. When we specify an image as a default, we pin to the image hash, so that it won't change out from underneath you.
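Pinning by hash means referencing the node image by digest rather than by tag; the tag and digest below are placeholders, not real values:

```shell
# A digest-pinned image reference: even if the tag is re-pushed,
# this exact image is what gets used. Digest shown is a placeholder.
kind create cluster \
  --image "kindest/node:v1.13.2@sha256:0000000000000000000000000000000000000000000000000000000000000000"
```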
A
It nominally does this, though I found that for something like the Kubernetes CI, it's going to be a lot more straightforward to publish a stable binary, and then, whenever we find that there's some breakage in Kubernetes master, we can just switch out the binary underneath, rather than running a vendor update and potentially updating a bunch of other libraries incidentally when kind updated. I think it will be much easier if we keep that decoupled.
A
There's an open request for this; we've just held off on adding that until we have the better config to add it to, because again we don't want to have a super large surface with the flags. The other thing is, we're trying to keep most things in config, where we have versioning: we use the Kubernetes API machinery, and we can attempt to avoid breaking you when we make changes to how config works.
A
There are only a few flags right now: there's one for specifying the config, one for specifying the image, and one for specifying the cluster name. There are two advanced ones: one for retaining the nodes if, for some reason, cluster up doesn't work and you want to debug them afterwards (the default is that it will clean up), and one for setting how long you want it to wait for the nodes to be ready, which is sometimes useful in CI, whereas locally you could probably go about what you were doing and come back to using the cluster.
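Putting those flags together, a sketch of a CI invocation; flag spellings may vary slightly by version, so check `kind create cluster --help`:

```shell
# --config : cluster config file
# --image  : node image to use
# --name   : cluster name
# --retain : keep node containers on failure so they can be debugged
# --wait   : how long to block waiting for the nodes to be ready
kind create cluster \
  --config my-config.yaml \
  --name ci-cluster \
  --retain \
  --wait 5m
```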
A
Kind waits for Docker to be ready, and then kubeadm going through its steps takes a little bit: setting up all the manifests, starting containers, waiting for containers to be ready. kubeadm internally runs some things as add-ons, so it's waiting for things to be ready before doing that, and then we have to apply the CNI.
A
So, sigs.k8s.io/kind. We've got an issue tracker up; James and I are going to be spending some more time on that this week, especially considering all the influx. We've got some milestones up and some issues that exist; I think a few of them are marked Help Wanted, and we'll be moving more. There's also a docs/todo.md that kind of has a wish list.
A
There is a release cut; you can grab that from the releases page for Mac or Linux. I might put up Windows binaries later, after I get a chance to make sure that works; I haven't recently. You can also go get sigs.k8s.io/kind; if you have a recent-ish Go, that should work fine. There is a Slack channel, #kind, on the Kubernetes Slack. There is no mailing list or regular meeting.
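The two installation paths mentioned, as commands (the releases URL is the project's GitHub page; binary filenames on it vary by release):

```shell
# Option 1: download a released binary from the GitHub releases page at
# https://github.com/kubernetes-sigs/kind/releases (Mac or Linux).

# Option 2: build from source with a recent Go toolchain.
go get sigs.k8s.io/kind
```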
A
So I think I'm relatively fine with that issue. I think the main thing is that I operate mostly under SIG Testing, and that was also somewhat of a nice way to avoid politics. For a long time, kind was quietly being built in testing; we'd sorted things out and made sure things worked, and around this week was when we were planning to start asking people to take a look at it.
A
Yeah, I imagine there's going to be a lot of overlap. I'm happy to try to work with people on that; I also was hoping to avoid it as much as possible. There are a lot of niches for this kind of tool. I just saw one today that uses copy-on-write VMs and is implemented as a Go library you import into your unit tests, and they're using it for every unit test for MetalLB, which is a bare metal load balancer.