From YouTube: sigs.k8s.io/kind 2019-02-11
A: So we've got quite a bit in here. I just started going through some of this; I didn't get a chance to look at the kubeadm backlog videos yet, but I did read through the doc on how you do that. I'm not remembering how you started, though: do you just go through all the issues? I was just going through them one by one. So.
A: ...a Kubernetes pod, in some cases, may cause the in-cluster DNS to get propagated through to the kind nodes, which gets a bit weird, because they're on another Docker network and they're trying to talk to, say, the CoreDNS in your cluster, and that's not ideal. You can work around that by changing the pod DNS settings, but kind probably has some use case to handle this as well.
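The pod-level workaround mentioned here is the standard Kubernetes `dnsPolicy`/`dnsConfig` mechanism. A minimal sketch; the pod name and the nameserver address are placeholders, not from the discussion:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kind-runner
spec:
  # Ignore the cluster DNS inherited from the outer cluster and
  # supply explicit resolvers instead.
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 8.8.8.8   # placeholder upstream resolver
  containers:
    - name: runner
      image: busybox
      command: ["sleep", "infinity"]
```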
A: Yeah, so for this one I definitely need to manually push some images for the next release; we're looking at automating that, but it's probably not going to make it into the next one. I would like to do this in the next release. We're doing work to improve the configuration; I'd like to keep it down to just two alpha versions each release, and eventually get to something that's, if not literally beta, at least beta-ish.
A: We don't want to just literally plumb through Docker flags, because then not just we, for our own purposes, but users will be broken when they try to use their config between different Docker versions or something. I did some digging, and I think we can do exactly what Kubernetes does for this and use the Container Runtime Interface to identify... I guess I don't actually have a good link for this here.
A: Anyhow, there's a specification in the Container Runtime Interface for what it looks like to configure DNS, or to configure mounts, or that sort of thing. We can use that to guide how we talk to Docker, without actually using the Container Runtime Interface itself yet. For example, it has a definition of what a mount looks like, and it maps pretty well to pretty much all container runtimes, because that's the entire idea behind the Container Runtime Interface. So we should add at least host path.
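For reference, the mount definition in the CRI runtime API looks roughly like this (paraphrased from memory of the proto of that era; exact field names and numbers may differ by version):

```proto
// CRI's Mount message: a host path mapped into a container.
message Mount {
    // Path inside the container.
    string container_path = 1;
    // Path on the host.
    string host_path = 2;
    // Whether the mount is read-only.
    bool readonly = 3;
    // Whether to relabel for SELinux.
    bool selinux_relabel = 4;
    // None / host-to-container / bidirectional.
    MountPropagation propagation = 5;
}
```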
A: So, another thing is that the dockershim in Kubernetes has a somewhat questionable future, but it shouldn't be a problem for us to maintain just the subset that we want; I mean, we essentially already have a Docker shim today. I'm not advocating that we drop that work, or that we start trying to actively support other container runtimes right away, but if nothing else it would avoid being coupled to specific releases of Docker. That was one of the problems that was solved in Kubernetes by defining this interface.
A: Host path mounting was the thing that drove this, but as I looked, it seems pretty applicable to basically everything we do when talking to Docker from the host. I think, long-term, it probably makes sense to actually use the Container Runtime Interface, and that will allow people to use alternate, possibly even, say, more secure container runtimes to run this. Yeah.
A: I think, in general, that will also solve some other things. Like, I know we have a PR that we still need to get back to about podman support, and instead of trying to hope that these command-line flags are never going to change underneath us, we can have just one layer of indirection in between. But right now the amount of things that we expose is pretty much: you get nodes, and that doesn't make any...
A: We don't make any agreement to users about what kinds of flags we're going to pass through, but as soon as we start doing things like host path mounts or DNS config, we're going to start saying: okay, we're going to set these values on Docker, and unless we add some sort of shim, it is literally Docker flags. So basically what I'm saying here is, as we start adding any sort of shim, we should look to the container runtime for what that shim should look like. Yeah.
A: This one, I'm not actually sure if we can solve. I'll have to look at what minikube does here; kubeadm just creates an admin user on each cluster, so the contexts are going to be the same. We'd have to add some kind of user provisioning, I think, if we were going to do this. I'm not sure if this is actually something we need or want to solve.
B: Yeah, this is a problem. I was following this because we hit the same situation they had, and I'm not in a proper environment, so I need to take a look at these. This morning I saw that this person was still having the problem, but the problem was he was using localhost as a proxy, and that made no sense, because localhost is not available from inside. So, right.
A: I mean, we can punt it out of 1.0 if we need to, but I think we can probably get it in before then, and it shouldn't be a huge thing to do. This one uses a container IP address; this one is actually definitely blocking some use cases. I did some more experimentation with using kind for some internal testing.
A: Over the past week, I found a number of cases where you want to run a workload inside the cluster and use the credentials, and if you're using the kubeconfig that we generate, that will look like the kubeconfig you need to talk from outside the cluster, and that won't work. So this would actually also be relatively straightforward, and it's definitely something we need. I don't really want to put too much more in the next release, though; I'm actually hoping to get one out this week.
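The core of the fix described above is just rewriting the API server address in the generated kubeconfig to one reachable from inside the cluster. A hypothetical sketch, not kind's actual implementation; the function name, the port, and the internal address are all illustrative:

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteServer swaps the host-visible API server address in a
// kubeconfig fragment for an address reachable from inside the
// cluster (e.g. the kubernetes Service DNS name).
func rewriteServer(kubeconfig, internalAddr string) string {
	re := regexp.MustCompile(`server: https://[^\s]+`)
	return re.ReplaceAllString(kubeconfig, "server: https://"+internalAddr)
}

func main() {
	external := "    server: https://127.0.0.1:43721"
	fmt.Println(rewriteServer(external, "kubernetes.default.svc"))
}
```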
A: That one alone will probably solve a lot of questions, and I think it's just easier if we can get the tool to detect it and suggest a workaround. Eventually, what I'd like to do once we have the site stabilized is actually link out to known issues there as well, so we can have more detailed information on the known issues page. Okay, so those are all the important ones for soon.
A: This one, I think we can also get into 1.0. I've actually been thinking about how we can support having the CNIs pre-loaded: basically just put the manifests in the image, allow selecting over those, have a default, and then also allow supplying your own manifest, with the understanding that it's not supported per se. But you can certainly supply your own and make sure it works for the CNI, and supplying your own version should be pretty simple.
C: I mean, we have a bit of a problem with the multitude of CNIs available; everybody wants a different one, that's the problem. So in an offline scenario, we can exclude all of these from the node image, and if the users pre-pull the images for the CNI plugin, and also have the manifest locally, we can allow them to apply it this way, right?
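The offline flow C describes maps onto the image side-loading kind exposes today. A hedged sketch; the CNI image and manifest names are placeholders, and the `kind load docker-image` command postdates this meeting:

```sh
# Pre-pull the CNI's images on the host (image name illustrative):
docker pull docker.io/calico/node:v3.5.0

# Side-load them into the kind node(s) so no registry access is needed:
kind load docker-image docker.io/calico/node:v3.5.0

# Then apply the locally stored CNI manifest:
kubectl apply -f ./calico.yaml
```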
A: I think so. CNI is an add-on that we absolutely have to ship, because you can't really have a multi-node cluster without it. For more general add-ons, I'd also like to find an answer, but there's a distinction in that, for other add-ons, I think we probably don't want to be shipping those in the image, whereas for CNI we definitely want to ship at least one CNI that you can use without any fuss.
C: The whole situation is super complicated at the moment. In kubeadm, we ship a couple of essential add-ons: that is kube-proxy, and the other one is kube-dns or CoreDNS, and we don't manage anything else. I think that, for now, kind should do the same, and eventually, once we have a better method for exposing others, maybe we can apply the same for kind, I believe.
A: There's at least one KEP open discussing an option for this, and I have reached out to the author, Justin Santa Barbara, to discuss what makes sense for that. Even if we don't formally adopt it, we can at least try it out; kind would actually probably be a pretty good way to test such a tool. And if it works well, we can either bundle it or recommend some options for this.
A: So, at minimum, we should document this; it shouldn't actually be a big deal. We should also look at whether or not we need to make it easier. I think a number of tools like minikube offer a flag to turn features like this on. We can probably at least add a configuration value if the patches wind up being unwieldy; I believe I took a stab at this and they were a little bit, but it might be better now. Yeah.
A: But it's not important; it might be a nice thing to have, like, say, the build also having one, or if we remove it from the main one, we should have the same in both places. This one, similarly, would be nice to do, but klog itself seems kind of up in the air. I wanted to sync with the cluster API folks about this at some point; I believe they've been using it.
A: Sounds good to me, yeah. I think klog is eventually going to have traction just by being the de facto thing, but it's not clear to me if it's worth the effort to switch just yet. I think it might be nicer for embedding; like, I know cluster API is, I believe, embedding kind now, and it would be nice to use the same logging back-end.
C: Here, so Patrick is bringing up the same problem I brought up. Basically, when you import the back end, it imposes a library on the front end. So currently, if you import it and you design some sort of front end, you have to use klog as well, and the same applies to the cluster API: everything does use klog, yeah.
A: It looks like all the rest of these are actually... I wonder, I don't know if there's a way to query if there isn't a milestone, but we could just take a pass through. These two are also unclear. I think we need to revisit whether or not those even need to be separate. It might actually be reasonable to have, say, an alpha subcommand for upgrade or something; I think we just need more details on what we're actually trying to do in upgrade.
A: So we need to build new images. I'd like to drop v1alpha1 and add v1alpha3, with at least host path mounting, and I'd also like to fix pre-loading the CNI, though that one might be okay to come back to, since hopefully the proxy fixes address most users' immediate concern. But I also think that dealing with CNIs is something that's increasingly coming up, and it should make the cluster boot faster, and I've been poking at how to do this for a while.
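The host path mounting discussed here eventually surfaced in the kind config as `extraMounts`. A sketch of what a v1alpha3 config with a host path mount looks like; the paths are placeholders:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
  - role: control-plane
    # Bind a host directory into the node container (paths illustrative).
    extraMounts:
      - hostPath: /path/on/host
        containerPath: /path/in/node
        readOnly: false
```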
A: So I think we can probably get that one in as well. We already fixed getting some issues up and switching the name flag behavior, so we can get those breaking changes in soon. The other one that's not in here is: I'd like to figure out some more about how to split out some of the networking config, like the load balancer, and not have the explicit load balancer in it. I know we have a PR out to make load balancers implicit on v1alpha2, but I'm kind of thinking more like, you know, v1alpha2 is also going to go away eventually; we can take it away pretty much any time we want, since it's alpha. Long-term, I think it doesn't make sense for the cluster load balancer to be configured as a node, because it's not a Kubernetes node, and even if we add support for other load balancers, the control plane load balancer is always going to be special, because it also affects things like the certificates.
A: If we do wind up needing them... I don't think that's super likely right now. We are going to wind up having other networking options that are global to the cluster, though, like the Docker network being used, and for that I'm thinking we'd probably want to sort out some other field in the config that handles these things, distinct from the node list. Yeah.
A: And I'm hoping to make some of those things more straightforward as well; we'll see. So, I want to get a couple of announcements in. Hopefully we have some KubeCon talks at Barcelona, if anyone's interested. Right now, James and I are signed up for KubeCon Barcelona; he couldn't make today's meeting. Potentially, depending on how that goes, we can also switch the speakers around. That's going to be a SIG Testing talk.
A: It was generally going to look at how kind works, how it's used for Kubernetes, and maybe some amount of how you can test other things on it, but I'm holding off on that. Hopefully, Liz and I will have a talk about testing Kubernetes apps on kind; I'll have to get more details about those out. Yeah, we'll see; we submitted a CFP, we'll see if it's through.
A: If we get that, then I'm also looking to do a clever thing and have kind.sigs.k8s.io/v1alpha3 link to the godocs, or redirect to the godocs for that type, which is what we need. If anyone has any ideas here, I'd really appreciate it. I think the next really big thing to do is flesh out the contributing docs, which I'm going to talk to George about; in particular, I think we should get some meta-docs for how to contribute to the docs.
A: What I mean is, we don't necessarily need to fill out all of this, but we should have a page, and it should link to these things. Similarly, in the repo, we'll need at least a little bit of our own details, because we need to cover things like: you should install Hugo, and here's the make command to run to browse the site locally. We don't need a heavy guide, but we should have at least something that covers the repo-specific details. So.
A: That's a good question. For now, I think, probably just updating the community testing doc: where we mention how to do local testing, we can mention that you can do this with kind, and have some limited docs. Long term, I'm not sure; I think, unless you're actually testing Kubernetes, I'd rather people not depend on kubetest, but...
A: Yes, I think that is entirely possible, especially if we stick to, say, the parallel-safe conformance tests. I'm a little loath to... I guess there's always an awkward thing in that I'm one of the main people that actually maintains what we run in presubmit, and I don't really want to be selling my own solution. But I'm happy to help with that, and I do think that it makes sense and is something we should look towards; I just want to make sure that this is, say...
F: I'm pretty much new to this project; my question was more around IPv6 support. Just to give you more context on what I've been working on: I work for Ericsson open source, and I've been working on adding IPv6 functionality to kube-router, and I'm also working on the forthcoming dual-stack integration that's going to happen with Kubernetes.
F: There's somebody by the name of Dave LeBlanc who had worked on a docker-in-docker based approach for IPv6 CI; something similar with kind would be really awesome. I personally use a dev environment which is based on Vagrant, but yeah, I guess in general my question is about IPv6 support.
A: So, it's something that I think would make a lot of sense to do in kind. It's not something that we're at yet, just because there are lots of other small things that need doing, like code cleanup, but it's definitely something I'd really love to see. If you're interested in working on that, just filing issues and proposing changes here and whatnot would be really helpful. I'm not exactly certain what all needs to go on for that. Similarly, I work on a lot of Kubernetes's CI and test infra.
A: Basically, everything we have access to is like kops on AWS, or kubernetes-anywhere on GCE, or other things on GCE, and we don't have IPv6, so most of us don't really have expertise with what needs to change for that to work. But that effort does seem pretty important, and it's unfortunate that the previous one kind of fell through; I'd love to see kind fix that.
F: That's a good point, actually. For my own dev environment, or my own dev cluster, I don't use kubeadm; I use a more conventional, or old, approach of just starting the various components inside VMs with the appropriate flags. But yeah, definitely, that's one of the things I can definitely go look at.
A: As I said, we do use kubeadm for everything, and I think we intend to stay that way. It's important that we work with other projects within the core community, so starting to figure out what needs to be fixed there would be a great step, and then whatever we learn from that we can apply to kind itself as well.
C: I think I saw somebody saying that IPv6 was broken, so they started sending PRs for kubeadm to fix IPv6, and we applied the PRs. So, unless I'm saying this incorrectly, kubeadm works fine with IPv6, except that we don't have test signal for it, and nobody knows what the state of that is.
A: Pablo, I also want to get back to you around the machine provisioning thing. I've been asking some more people with experience in that area about their thoughts on it, and generally I think everyone thinks that it's a good idea; it's just that it should probably come a bit later, once we've sorted out some other things.
A: Yeah, I do think that is a really great idea in general, and I've talked to a few more people that also think it would be pretty interesting to test the cluster API in this way; I think we're all in agreement there. It's just that I'm not sure what all we need to do, and how organized that is versus getting some of the other fixes in early releases. Okay.
E: We don't want to go overboard and test a whole bunch of stuff where we're basically just testing our mocks, but definitely, I think there's a lot of interest in having something, you know, just like: oh, you can actually list nodes, and the MachineSets actually work the way we expect them to. I think there's definitely interest. Which meeting is that, again?
A: About the cluster API: from what I do know, I think this does make sense for testing parts of it, and I'd love to help with that long term, but I'd also just independently like to get more up to speed on what's going on with the cluster API itself; I've been talking to some more people here about that. I think we're also interested in using this for testing actual cloud clusters.
A: Great, that's going to be great, yeah. Also, on that note, I'm going to be pushing updates to all of our CI and to kind today, because of the runc exploit, so that's fun. We are definitely keeping a close eye on the rootless work; I don't think it's going to land in Kubernetes for some time, because there's some major kubelet functionality that doesn't actually exist in the rootless patches. But it is possible to run Docker...
A: ...or containerd, or CRI-O rootless now, upstream, as of like a week or two ago, but Kubernetes is going to need more work. I also synced up with some node people about cgroups v2, someday, where we can actually properly nest these. The future for making this more secure is bright, but today it's kind of scary. I will send a note to the channel when we are using updated Docker inside of kind. Yeah.
A: These went into master; I'm not sure if they're in a release yet. But also, we can't really leverage the rootless mode yet, for a couple of reasons, including the fact that we can't run Kubernetes that way yet. Long term, though, we should definitely be looking to do that. There are some other things too; like, on Linux, some distros don't have the functionality you need for this enabled by default. Hopefully that will improve long term as well.
A
Version
is
run,
C
escape
applies
to
like,
basically,
all
released
versions
of
run
C
as
far
as
I
know,
which
is
essentially
all
major
container
runtimes,
and
it
has
two
modes
one.
You
run
a
malicious
image.
It
can
potentially
get
out
and
replace
the
run
C
binary
on
the
host.
If
you're
running
this
is
route,
the
other
one
is,
if
you
make
an
exec
to
an
image
that
has
had
malicious
modifications
to
its
file
system,
it
can
potentially
get
out.
Mitigations
are
things
like
SELinux
or
having
a
read-only
copy
of
these
binaries
on
the
host?