From YouTube: sigs.k8s.io/kind 20190204
B
Hi, welcome to the second kind subproject meeting. I know things are still pretty ad hoc; we only just managed to have the first one last week. I'm hoping people have some topics, even though they're not actually in the agenda doc yet, just because we had some problems getting that out to everyone.
C
Yes, this is me. That's the topic we were discussing the other day over the Slack channel, and you suggested that we add a document with the use case as a separate document. The document I started doing is still an early draft. I just basically wanted to know whether it was the kind of document you were expecting, and whether I should open an issue, or link the issue on GitHub to the document, or how to drive the conversation around this document. That's basically my question, I think.
B
The thing we're really missing is what we're actually trying to test or cover with this. We know that we can make kind sort of provision a machine, but there are some problems around, like, the config needing to be aware of things with kind. So, if we're not fully provisioning Kubernetes, it's like the cluster API, for example.
C
I mean, there's some introduction in the document; I can elaborate on that. But basically the specific use case I'm having, and then describing in the sequence, is around the Cluster API. I am developing some operators which are complex, I mean they are non-trivial, and there is a lot of logic which is independent of the actual provider you are using. I don't want to test providers; I want to test the controllers that manage the lifecycle of the cluster and the lifecycle of machines.
C
Create cluster is covered; I actually mentioned that in the document, that part is basically covered. However, as I said, the controller needs to create individual nodes and append them to the cluster, and that means repeating the provisioning logic. I mentioned the other day that I was, in a hacky way, trying to find out if it is possible, and from looking around the code, the code basically is reusable, I would say to 95% or so, probably, because of the way it is structured in actions and steps and whatever.
C
So it's like having a provisioner, or common, or whatever way to call it. The only tricky part there is that I will actually need some additional information once you create a cluster. It is there somewhere, but it is not readily available, because it's all in memory: you create this in-memory representation of the cluster, and you have all this process of waiting for the nodes, but once the cluster is done I would have to somehow recreate this information.
B
Right now one use case is people being able to run commands on the nodes themselves. We've kind of backed away from that because, I mean, unlike the cluster API, we're not actually giving you a VM. Whatever is going on in that image is just what we need to make it possible to run Kubernetes inside of Docker, and it is an unusual environment, and we may have to change it over time. So, you know, what if something else is taking over all the steps that run inside of the machine?
C
From
the
perspective
of
these
particular
use
case,
that
is
not
an
issue,
because
what
actually
the
machine
provided
provides
is
something
that
is
able
to
connect
to
the
it's.
Basically,
what
we
do
I'm
is
that
he's
obstructing,
actually
all
that's
open,
whether
the
provider
is
something
that
make
curated
notes
appear
somehow
I
mean
the
rest
of
the
components,
don't
really
care
how
this
happened.
C
That's what makes this architecture interesting: you actually can do that. I mean, if I had to know something about the particular implementation, then it means that somehow the architecture is broken, because I cannot decouple the generic management from knowing how you bring this particular node to life. So I think in this particular use case it is not really an issue, because I don't need to know that; the only thing I need is that somehow a node appears that the cluster can use, you know.
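The decoupling described here, where the generic management logic only needs nodes to appear somehow, can be sketched as a small interface. This is an illustrative sketch, not kind's or Cluster API's actual Go API: the `NodeProvider` name, its methods, and the fake implementation are all hypothetical.

```go
package main

import "fmt"

// Node is a minimal view of a provisioned node; the generic
// management logic only needs a way to address it.
type Node struct {
	Name string
	Role string // "control-plane" or "worker"
}

// NodeProvider is a hypothetical abstraction: implementations make
// nodes appear somehow (Docker containers, VMs, ...) and callers
// never learn how.
type NodeProvider interface {
	CreateNode(name, role string) (Node, error)
	DeleteNode(name string) error
}

// fakeProvider is an in-memory implementation, enough to show that
// controller logic can be exercised without any real infrastructure.
type fakeProvider struct {
	nodes map[string]Node
}

func newFakeProvider() *fakeProvider {
	return &fakeProvider{nodes: map[string]Node{}}
}

func (p *fakeProvider) CreateNode(name, role string) (Node, error) {
	n := Node{Name: name, Role: role}
	p.nodes[name] = n
	return n, nil
}

func (p *fakeProvider) DeleteNode(name string) error {
	if _, ok := p.nodes[name]; !ok {
		return fmt.Errorf("no such node: %s", name)
	}
	delete(p.nodes, name)
	return nil
}

// scaleUp is "controller" logic written purely against the interface;
// it cannot tell a fake provider from a real one.
func scaleUp(p NodeProvider, count int) ([]Node, error) {
	var out []Node
	for i := 0; i < count; i++ {
		n, err := p.CreateNode(fmt.Sprintf("worker-%d", i), "worker")
		if err != nil {
			return nil, err
		}
		out = append(out, n)
	}
	return out, nil
}

func main() {
	p := newFakeProvider()
	nodes, _ := scaleUp(p, 2)
	fmt.Println(len(nodes), nodes[0].Name) // 2 worker-0
}
```

Swapping the fake for a real provider is the point of the argument above: if `scaleUp` had to know which implementation it was given, the architecture would be broken.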
B
Do you mean fully configured, or just as a machine? Because it sounded like, from your use case, you didn't want to be configuring the node with, say, kubeadm, and there are some details that we have to plug through. For example, in the past we've had to work around DNS not resolving properly during setup because of the Docker-in-Docker environment's funkiness with the networking, and I wouldn't expect a usual cluster API implementation to be doing something like that.
C
Okay, I mean, that's fine. Eventually there is the other use case, which is more for my colleague Alvaro, the person you were also interacting with. His is kind of the opposite case: in the doc I'm talking about, he was more prone to test the initialization process, because we are also testing that, so I understand him there.
B
I probably need some more details of what this integration is supposed to look like, and I do want to point out that we are trying to get all of the existing use cases that were in the original design finished and fleshed out, and we have quite a bit in our backlog still before we can get those things done.
B
So
actually,
the
next
topic
I
wanted
to
discuss
today
is
getting
another
minor
release
out
with
some
alpha
fixes,
and
we
have
some
more
things
that
I
think
we
should
really
sort
out
for
that
one
as
well,
but
I
I
I
like
this
idea
a
lot.
I
thinkI
it's
not
completely
obvious
to
me
how
much
this
helps
the
cluster
api,
but
it
could
be
really
interesting
to
see
a
like
local
implementation
of
this.
C
Then we need something like that because, eventually, it's a lot easier for demoing, and training and whatever, and development, because it's something you can test without any sophisticated external setup, basically, because you don't want to really test that part; you want to show your own stuff, which is the controller. So hopefully.
B
Could you add some more details about what you're looking for from kind on that and come up with an issue? I think we're going to need to; we kind of intermixed a bunch of discussions on the previous issues, and I think we're going to need to close those out in favor of some new, more focused issues.
A
Okay, can I jump in for a second? So, if I pull way back out, this sounds a little bit, and correct me if I'm wrong: the state of kind today is that it creates clusters and it deletes clusters. It doesn't actually do anything to manage a cluster in place, right? And so I could potentially look at this as trying to extend kind to update the cluster in place, when it seems like we haven't quite wrapped up the create and delete stuff. So that's separate. And apart from that, this reminds me of a fun time.
C
Yeah, that's a good point. There is no such, I mean, sorry, I know there is no state; I mean, the state is the cluster itself. But when you delete, what you do is basically go and filter the nodes based on labels. So as far as I know, if you respect that labeling of the cluster, I think the delete part should be safe, because there is no other state anywhere.
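The label-based delete described above can be shown as a tiny pure function: given a container listing, selecting the nodes that carry a given cluster's label is the only "state" lookup needed. This is a sketch; the `container` struct and the exact label key are illustrative, not kind's real schema.

```go
package main

import "fmt"

// container is a minimal stand-in for an entry in a Docker container
// listing: a name plus its labels.
type container struct {
	Name   string
	Labels map[string]string
}

// clusterLabel is illustrative: kind stores the cluster name in a
// Docker label on each node container, but the real key may differ.
const clusterLabel = "io.x-k8s.kind.cluster"

// nodesForCluster filters containers down to the nodes belonging to
// one named cluster. Delete only has to remove what this returns, so
// no state beyond the labels themselves is required.
func nodesForCluster(all []container, cluster string) []container {
	var out []container
	for _, c := range all {
		if c.Labels[clusterLabel] == cluster {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	all := []container{
		{Name: "kind-control-plane", Labels: map[string]string{clusterLabel: "kind"}},
		{Name: "other-control-plane", Labels: map[string]string{clusterLabel: "other"}},
		{Name: "unrelated", Labels: map[string]string{}},
	}
	fmt.Println(len(nodesForCluster(all, "kind"))) // 1
}
```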
D
So, in a post-v1alpha1 world: like, I'm trying really hard in the cluster API world to punt everything to post-v1alpha1. In fact, my whole goal at every single meeting is just for me to sit there and say "this sounds like a post-v1alpha1 thing" for one thing after another. And this also sounds like a core piece of technology that sits in the core of cluster API, to be able to spin up.
D
You
actually
test
myself,
because
it's
more
of
a
framework
than
it
is
an
API
to
have
a
default
kind
provider,
and
when
we
do
that
there
would
be
a
lot
more
people
actually
working
on
their
piece.
It
that's
that's
months
away,
which
will
give
folks
enough
time
to
get
create
and
delete
sort
of
in
a
state
where
they
feel
happy
with,
but
I
do
I
do
totally
empathize
and
agree
that
there
are
pieces
of
update
that
are
non-trivial
and
may
require
what
could
be
very
large
changes
at
this
time.
B
And skew across control planes is something that we should be able to do, but haven't tried out today; things like upgrade and actually changing the clusters are very much untested, I would agree. I'd appreciate coming back to some of those things after we've polished up some of the more core pieces, but it would be really good to start exploring how we might get there. Yeah, I don't think we want to make those architectural changes yet.
D
I do think there are probably small modifications that are beneficial to kind in general and that actually help us solve some of the core problems. I know Fabrizio showed up with, like, a chunk of code, but we could probably break pieces of it apart to address, you know, the one piece that you need today that would be beneficial anyway, and then reevaluate at a later date. Oh no, sorry.
C
You just mentioned something that we need to consider here; I mentioned that on Slack as well. The thing is, I think it probably makes more sense, if kind is reused as a library, that maintaining the state and the metadata of the cluster is delegated to somebody else. Because the way the kind CLI is implemented right now makes a lot of sense: it's one atomic action, you create the cluster, you delete it. But tweaking this tool to fit these other use cases, where you probably need to maintain or keep track of the state of the cluster, is too complicated. So I'm actually thinking that, in my particular use case, I would probably be more interested in having a library that can be called from the controller, because the controller already has all the information, it is the actual source of truth about the cluster, and it's only using kind as a library for provisioning the nodes. I just mention that because I agree with what was said: we would probably be getting entirely outside the scope of kind as a tool, but the internal pieces can be reusable as a library. So I'm particularly interested in probably moving to the library approach, because I think it's easier to get there; it's basically reorganizing the code so that it is reusable, without having to refactor a huge amount of sources. So.
B
That's happening some today in some tools; like, cluster API uses it for the bootstrap cluster. But it's in a similar spot, where there's still some coupling that hasn't been fixed between kind the command line and kind as a library. And so one of the things I want to call out is in a doc: it talks about the actions stuff that's internal, and I think that is a really cool abstraction and I'm happy we have it, but it needs a bunch of architectural changes. So I'd rather, if we think that, say, the cluster API needs this kind of sequence of actions to do stuff, we prototype something outside of kind, use it, and at some point reconcile them. But I don't want to try to ship that outside of kind as a public interface right now. I very intentionally moved that to be internal to create, because the one-shot action, like you said, of create is a fairly thought-out thing today. It needs a little bit of cleanup, but the guts of what's happening are there.
D
What might be beneficial, which we've done outside of k8s for our tools, is that we create two phases of libraries, where we have a client tooling interface library. So if a person is going to import our code, they import the client library, and the client library has just a basic set of supported functions. But then there's a separate experimental package, right, which does not actually follow, you know, Kubernetes versioning underneath it, right; anything underneath experimental gives you breaking capabilities, right. So that way, if you're looking at it from a client tooling perspective, that is actually super useful.
B
I like that idea a lot. I do also just want to be clear that, for some of the things we're talking about moving into a place like that, I am 100% about it; it just hasn't happened yet because they're not the highest priority. But they need to be broken up, and actions is a big one, and they're probably going to be broken pretty significantly. So I would just as soon say you might not want to build on that at all; you might build on something similar, and we can reconcile it later, and then you won't be broken.
B
On that note of exposing things, I think there are a couple of things that we really should land early on that are important to a lot of the use cases for create and delete. A really big one, I think, is being able to mount host paths. I'm interested if anybody has any good ideas there. I've looked at it: we can do something like just allow people to provide extra docker flags, but then we're locked into the docker command line being underneath, or we can attempt to shim it.
B
The
simplest
thing
might
just
be
allowing
those
paths
but
I
appreciate
some
other
people.
Thinking
on
this
I
might
knock
out
an
MVP.
Today,
it's
been
a
really
big
ask,
but
it's
because
docker
does
these
really
structured,
inline
values
for
volume
and
mount
flags?
It's
going
to
be
really
tricky
to
allow
people
to
configure
all
of
this
without
being
just
like
bound
to
whatever
format.
Docker
has
those
flags
and
at
some
point
I
imagine
we
might
want
to
move
to
like
the
client
library
instead
of
shelling
out
a
docker.
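One way to avoid binding users to docker's inline flag format, as discussed above, is to expose a structured mount description and generate the `-v` flags from it. This is a sketch only: the `Mount` struct is a hypothetical config shape, not kind's actual schema, though the generated `host:container[:ro]` syntax is docker's documented bind-mount form.

```go
package main

import "fmt"

// Mount describes a host path to expose in a node container. The
// struct is a hypothetical sketch of what a kind config field could
// look like.
type Mount struct {
	HostPath      string
	ContainerPath string
	ReadOnly      bool
}

// volumeArgs turns structured mounts into `docker run` -v flags.
// Keeping the structure on the config side means users are never
// exposed to the docker CLI's inline format directly.
func volumeArgs(mounts []Mount) []string {
	var args []string
	for _, m := range mounts {
		v := m.HostPath + ":" + m.ContainerPath
		if m.ReadOnly {
			v += ":ro"
		}
		args = append(args, "-v", v)
	}
	return args
}

func main() {
	fmt.Println(volumeArgs([]Mount{
		{HostPath: "/tmp/src", ContainerPath: "/src", ReadOnly: true},
	})) // [-v /tmp/src:/src:ro]
}
```

If the implementation later moves from shelling out to the Docker client library, only `volumeArgs` has to change; the structured `Mount` form survives.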
B
That is a really big task right now, and it can use some help. Also, if anyone's familiar with Hugo and Netlify: we do have those now, and it would be really good to get our docs restructured for that. The trick is we actually need to get the tooling set up to build it. We get a lot of questions now about, like, how things work and how do I do things, and we kind of have some docs, but it would be really good to get that fully structured. I guess.
B
Alternative tools should definitely be looked into as well, though. I know kubebuilder is using GitBook, and there are some other ones, but I don't think that's necessarily the big problem; it's just actually getting around to setting one of them up. The biggest thing with most of these is finding a theme that kind of fits and getting that set up in the repo.
B
I also have somewhat of an alternative in mind, in that I'm using this as a pilot for some others. Some projects in particular, like some of the testing tooling such as Prow, I really don't think should be on the main kubernetes website; but on the other hand, it's complicated enough that tossing some markdown around in a repo is not really making effective docs.
D
I'm adamantly opposed to the notion of federating out the documentation into the mainline repository, as this is such a nascent tool and likely subject to change at some high velocity in the near term. That would just become a maintenance burden, versus having the documentation directly applicable and then linking out from the main docs as a potential tool. Great.
B
The other reason is we're not asking the docs subgroup to own this, and I don't want to ask them to own this, but I would like to see some of these projects have better docs than they've been doing, and not just be like "oh, we have a readme". There's a lot that you can do with this, and we're looking to make it possible to do even more. I'd even like to have a subsection for, like, a developer guide; that's something that we've started working on a bit.
B
I
also,
don't
think
it'll
be
significantly
challenging
once
you
have
this
set
up
and
you
have
Matloff
I,
it's
very
easy
to
iterate
on
and
review.
It
will
post
a
preview
link
of
what
it's
going
to
look
like
on
every
pull
request
and
you
can
click
through
and
see
if
everything
looks
fine
without
even
having
to
actually
like
read
the
source
code.
B
Looks
like
we
had
some
more
so
I
guess
those
are
the
big
ones
that
I
have
for
the
next
one
I'd
like
to
cut
another
small
release,
maybe
end
of
this
week,
I'm
also
looking
at.
We
should
really
automate
pushing
images.
We
should
be
able
to
tell
people
the
communities
has
a
release,
and
you
know
within
a
very
short
time
window.
You
should
be
able
to
spend
up
a
kind
cluster
with
it
figuring
out
how
that
works.
Logistically
may
be
interesting.
A
We'd be interested in playing looser and faster if it would help other projects, if people showed up to help out. But the very, very first thing we locked down was really, really visible billing, because we're now spending money, and it's really important that we know what we're spending our money on, and there is a reticence, or hesitancy, to just open up the floodgates if we don't actually know where all the money is going to be spent. That's the biggest blocking factor there.
B
But to be clear, the blocking factor right now is not having a place to push; it's figuring out all the logistics of making sure that we actually run the push at the appropriate time and for all the releases. And it can get tricky, because we maybe need to do all of the different kubernetes tags, or backfill this, or something.
B
Yeah, so the biggest trick there: that could work, but we'd want... The way Prow gets away with this today for itself is, I believe, that it's using a trusted cluster where it can have those credentials, and I believe that cluster is actually just the main Prow cluster. That kind of makes sense for Prow itself, because if any of that is compromised, the whole thing is. But for, say, kind, we probably shouldn't be giving kind jobs access to that cluster.
B
And independent of that, there's also just: okay, how do we actually schedule it? Are we going to trigger every time a push happens to kubernetes and then try to figure out if it's tagged? What happens when we miss one of those pushes, because we do miss GitHub events? Do we want to schedule it regularly, and then have some kind of logic to compute which tags need pushing? That's an open problem, so.
B
So we can do that. What I'm saying is that there's a chance that it misses the event on the commit, and the other problem is that this needs to run against the kubernetes repo, for which we can't just, say, add Travis or something, even assuming Travis were somehow actually better at it. The secret management is probably not the most interesting problem; we do have some options for that. It's making sure that we're actually keeping up with all the tags and that we've backfilled.
B
I'm
thinking
probably
at
this
point,
the
best
solution
is
to
have
some
kind
of
hybrid
model
where
we
we
trigger
on
the
tag.
So
we
make
sure
that
we
push
quickly
after
a
tag
is
pushed,
but
we
also
probably
need
something
to
verify
that
we
didn't
miss
one,
because
it's
going
to
happen
so
my
books
from
get
up
all
the
time,
yeah.
B
But then we're going to need some logic to figure out which tags, and so that's something where we have to pick one of these and then tackle that actual bit, because it doesn't look like anyone's done this yet. We have the official release process, and that is totally manually triggered.
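The "logic to figure out which tags" described above reduces to a set difference: compare the tags Kubernetes has released against the images already published, and backfill the gap. A minimal sketch, with hypothetical function and parameter names:

```go
package main

import (
	"fmt"
	"sort"
)

// missingTags reports which release tags have no published image yet,
// so a regularly scheduled job can backfill even when a tag-push
// event was missed. Pure set difference; the inputs would come from
// the git tags and the image registry in a real job.
func missingTags(released, published []string) []string {
	have := map[string]bool{}
	for _, t := range published {
		have[t] = true
	}
	var missing []string
	for _, t := range released {
		if !have[t] {
			missing = append(missing, t)
		}
	}
	sort.Strings(missing)
	return missing
}

func main() {
	fmt.Println(missingTags(
		[]string{"v1.13.0", "v1.13.1", "v1.13.2"},
		[]string{"v1.13.0"},
	)) // [v1.13.1 v1.13.2]
}
```

Running this on a schedule complements the tag-triggered push: the trigger gives low latency, the periodic diff guarantees nothing stays missed.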
B
So
basically
we
can.
We
can
write
a
drop
that
matches
on
branch
pushes,
but
you
can
do
rest
tags.
I
can
do
that
again
that
that's
probably
the
tricky
part.
So
much
as
the
other
logic
for,
like
you
know,
making
sure
that
we
check
out
to
that
tag
and
that
we
like
run
the
appropriate,
builds
for
kind
and
push
and
maybe
off
to
docker
hub
and
those
sort
of
things
need
scripting
out.
B
I think I was the only one that raised my hand, and that's because I'm used to building kind locally all the time. But I think maybe, if we're auto-pushing images, it would be easier to say: hey, just, you know, run this, set this flag to this tag, and you'll now have an alpha build. But we're not going to keep up with that well unless it's automated.
B
We can do that. I have some concerns about using that in the main build cluster, because of things like kind, where we very much break a lot of the isolation in order to run things like docker-in-docker. It's not a super duper realistic concern, but it still feels iffy, and it feels more iffy for kind, because we in turn turn around and run that as a privileged container. I'd like to keep that process pretty locked down.
B
That's the other thing I'm wondering about, and I should probably follow up on it. A potential alternative is that, just in the short term, I go through the hoops of getting a Prow build cluster up and just set it up myself here at Google, with the intention that, once the k8s infra working group is sorted out, we will move that workload. But there is a similar problem of, like, you know...
A
Just speaking as a representative of the infra working group: I totally want to be helpful and supportive of projects and sub-projects alike. It's just more about: sure, each of these approaches sounds doable as a one-off; now do it for a hundred sub-projects, yeah, and does it scale appropriately? How much thought do we need to put into it up front, versus growing into something that's a little more scalable and manageable?
B
We can follow up on these things offline, but I think we should. I don't think we need to have this for, say, the next release. And also, to be clear, I'm not necessarily sure we should actually call this thing 1.0 on the timeline we have currently, but I'm trying to at least push towards it by then, and I think we really should be building kubernetes images on a regular basis, automatically, like with the release cycle.
A
What is less clear to me is... oh, the due date is the end of this quarter. Okay, never mind, I'm blind. Just maybe an administrivia suggestion: the way this template was set up is as if this is a meeting that has recurring topics, and you go through the recurring topics. We didn't do any of that; we just jumped straight into open discussion this time. But it might be useful for this crew, if we're trying to push towards a 1.0 release of kind.
A
Oh
release
of
kind
to
review,
what's
been
done
for
the
one
dot,
oh
and
what
is
like
the
next
most
important
thing
for
one
dot.
Oh
I've
seen
other
projects
like
walkthrough
I've,
seen
other
sub
projects
walk
through
a
project
forward
or
walk
through
a
backlog
of
nos
Dejan,
whatever
I'm
not
suggesting
you
have
to
do
that.
Right
now,
but
just
throwing
that
out
there
yeah.
B
That sounds like a really good idea, and we have actually made pretty good progress towards some of these things. It's also not super clear if they absolutely have to happen for 1.0. For example, we do have some resources from Packet, and we're also trying to figure out ARM CI, which is going to be another interesting question that we haven't solved anywhere else.
B
Yet
maybe
we
don't
have
to
have
our
sorted
out
necessarily
by
then
we
have
a
kind
of
optimistically
included,
but
a
lot
of
these
other
things
are
not
not
are
not
actually
super
involved
in
most
of
the
other
issues
are
tracking,
don't
necessarily
seem
like
things
that
are
going
to
be
in
it
like,
for
example,
maybe
mushy
and
provisioning
is
not
exactly
a
1.0
thing.
That's
a
follow
on
yeah.
D
So SIG Cluster Lifecycle has a well-defined process, which we now document in multiple locations, for how we do this, so that anybody who shows up to a repo says: oh, I'm in the next bucket, I have been prioritized; you know, I know when it's coming and know when to expect things to happen. This way, if you have a well-groomed backlog, it's very clear to any observer what's going on and what milestone things live in, yeah.
A
Basically, my ask is this: I've heard you say you run a really tight ship, and it's, like, well-known, but I personally haven't seen it documented. Because if it works really well for you, this sounds awesome; I'd love to hand it over to folks with less experience and say: please take this and copy-paste it for all sub-projects, even.
D
The minikube folks have taken my schtick and written it all into words, right, and the question is, I want to review it and hand it off. Okay, it'd be in the minikube repo, and I have to double check that. Right now, they told me that they had documented it in detail, and so I'm going to take a look at it and double check and see if it needs modification. Okay.
B
Yeah, I think that's also going to significantly improve build times, though I do think we need to document this as well. The first time someone pulls kind, they're going to be like: wow, this image is huge, kind must be super bloated. And the answer is: well, actually, we're just pulling down all of kubernetes up front, and it turns out that's huge, yeah.
B
So actually, I just mostly need to update the roadmap there; that one's actually pretty done, except maybe instructions for kubeconfig. That can get a little weird, because I think it's a bit more common on Windows to use shells that have different syntax for this than on Linux. On Linux we can pretty reasonably expect that you can just say export KUBECONFIG and that's it, but on Windows we don't know if you're using PowerShell or command prompt or bash, yeah.
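The per-shell difference described above can be made concrete with a small helper that produces the right environment-variable line for each shell. The function and its shell-name keys are hypothetical, but the three syntaxes themselves (POSIX `export`, PowerShell `$env:`, cmd `set`) are standard.

```go
package main

import "fmt"

// kubeconfigHint returns the line a user should run to point their
// shell at a kubeconfig file. The shell-name keys are illustrative;
// the emitted syntax is standard for each shell.
func kubeconfigHint(shell, path string) string {
	switch shell {
	case "powershell":
		return fmt.Sprintf(`$env:KUBECONFIG="%s"`, path)
	case "cmd":
		return fmt.Sprintf("set KUBECONFIG=%s", path)
	default: // bash, zsh, sh, ...
		return fmt.Sprintf(`export KUBECONFIG="%s"`, path)
	}
}

func main() {
	fmt.Println(kubeconfigHint("powershell", `C:\kube\config`))
	fmt.Println(kubeconfigHint("bash", "/home/me/.kube/kind-config"))
}
```

Detecting the shell reliably is the hard part the speakers allude to; printing all variants, or printing none, sidesteps it.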
B
We can outline how Windows works, or we can remove some of this from the command line output. I mean, right now, if you're on Linux, you can actually cut and paste the output from the end of kind create cluster and just use that, and it should behave properly. On Windows that's not necessarily true, and it would be good to fix that, or leave a note, or remove it entirely, or something.
B
So, like, actual common usages, doing things with it: right now we have some info on that, like how you can boot it and maybe how you might configure one particular thing, but we don't even, for example, show you how to do an HA cluster right now, even though that's possible. Not that you need one, but it is a thing you can do with kind that there are actually zero docs for currently, uh-huh.
B
It makes me think of that place where it currently tells you how you can use hack/local-up-cluster.sh; it would be really nice to mention that you can do it with this as an option instead. I know that's something that Timothy wanted as well, but we might want to include at least some amount of docs in ours as well, and I don't expect these to change that much, unless we see it really change kubetest at some point, in which case we'll need to update it everywhere anyhow, yeah.
B
And so the other thing we have under here is the GitHub Pages landing page. It looks like instead that's going to be Netlify, but it's basically the same thing, and we're most of the way there; we just need to actually make the pages. I mean, we even have content we can put in it, in pretty good format now, thanks to George's work. We just need to pick, like, Hugo or something and actually set that up. I don't expect that's a huge task. Then there's the logging and debuggability stuff.
B
But today we haven't actually fully done that; some things do still go through logging where we're not writing to a stream, and for people consuming it as a library in particular, you're stuck with logging to logrus's default instance and don't have good options for controlling that. So even if we're going to stick with logrus, we should make it easier to shim that out if you're going to consume it as a library, yeah.