From YouTube: Kubernetes SIG Cluster Lifecycle 20171107
Description
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.1ci4n8ah9w90
Highlights:
- Discussion about what the sig should be doing / reworking the sig charter
- Signing up for a SIG session at KubeCon
- Getting started guides
- IPv6 support in Kubernetes / kubeadm
- Survey for when to do 1.10 planning
- Kubeadm reference doc PR
B
…how fast we're doing, but we're also seeing a lot of good progress on other items like the machines API and the cluster API and component config and add-on management and all those things. So I wanted to sort of make sure we weren't missing anything, and also try to understand what our big areas of priority should be. I think when we started SIG Cluster Lifecycle there was a big debate about why SIG Cluster Lifecycle is different from SIG Cluster Ops.

B
I think what we decided was that SIG Cluster Ops is about what the state of the world is today, like how we install Kubernetes today, and SIG Cluster Lifecycle is: how do we make it better? How do we want to change Kubernetes, or the API, wherever it is, in order to make the whole world of installing, operating and managing a Kubernetes cluster better? And so, I think, the things, I guess the five areas, that I think we can do better on, or that should be the focus of this SIG—
B
I drew up five sort of straw men, and I hope that people will chip in and say, you know, two of them are the same, or one API can do two of them, or what about this other thing. But the five I drew up as straw men were, I'll list them first: control plane configuration management, add-on management, infrastructure management, etcd management, and control plane software management. We can just drop the word "management" from each one of those, but if I want to traverse them—
B
Add-on management is like: oh, I want to add Helm, or I want to add an ingress controller, I want to add whatever it is. It's things that are not necessarily part of the core API that we have to bring up, but that we still want to manage and that are part of the cluster, as opposed to the application. Infrastructure management is, well, we typically want to run this on something, and how do we bridge to that?
B
And what are the correct integration points? That's what I think is happening in the machines API, among other things. And I put, sort of, last of all of them, etcd management: you know, we have created a central state store, and so the problem, as with all databases, arises as to how you run your state store, and we don't have a stellar… we don't have one…
B
…stellar answer that outshines all the others. And finally there's sort of plain old control plane software management, which is, I think, a lot of what we've focused on so far, but that's different from configuration management. This one is sort of trickier, where it's the actual version of maybe your operating system, of your kubelet, maybe your control plane software version, and that's where it gets tricky and overlap-y. But those are, that's sort of another thing.
C
Infrastructure management is quite interesting, because I think kubeadm regularly runs into exactly this quandary, right: how much do we interact with the OS, and to what degree is kubeadm responsible? I mean, is it responsible for sort of prepping the machine or the OS for kubeadm to run? We run into it with whether it should install, like, the CNI binaries and stuff like that, all of the systemd config.
C
We had that bash script in our documentation, and I have to regularly copy and paste it, because I just want some kind of minimal OS for kubeadm to run on. So I'm super excited about the idea of improving that user experience, but it comes back to the recurring debate of what the core responsibilities of kubeadm are, right? And it seems like our opinion up till now is that it's not its responsibility.
C
It's up to the user to configure the OS themselves, and then, from that point onwards, kubeadm, you know, can do its job. But I know that Jay has his new project that tries to get everything on that OS up and running, so yeah, I think that's a quite interesting discussion topic.
D
I want to break in there. I think it's important to recognize that kubeadm is one of the tools that this SIG is actually sort of driving, but it's not necessarily the only tool, right? And so I would say that if we want to start exploring some of these other areas, like the machines API or what have you, maybe there's a "kube-prep" tool that we come up with that does a bunch of this gunk, right? Like, these things can be composable.
B
I'm definitely in favor of adding other sections, right. I would say that, yes, I guess, Robert, you can probably pipe in more about this, but I think what I like about the machines API is it gives us a goal: like, we want to have a different machine, and then, you know, the bootstrapping tokens that have to happen to do that, or kubeadm versus who installs—
A
So the other section I want to call out is etcd management. I think that etcd is a real gray area in the system right now. Because we do upgrades, I think we've kind of gotten somewhat stuck with maybe etcd being our responsibility, but I think it better falls in the SIG API Machinery camp: they're the ones maintaining the etcd client library, they're the ones that maintain the etcd container.
A
You know, they should be helping drive etcd lifecycle in the same way that they are responsible for forward and backward compatibility of the API server. So, I mean, I think this is kind of a gray area, right, because we want to be able to do an upgrade, and if they don't give us the tools to upgrade etcd, then we kind of end up building them ourselves.
E
Because of some of the issues that have occurred with contributing changes, they willfully don't make modifications because of the amount of time it consumes. So it's left to people who have been around the project and know the code well enough, which is only a small handful of people. That's pretty much Jordan, myself and Wojtek that have made those changes over time. Yeah.
A
I wasn't talking about the CoreOS folks; I was talking about the API Machinery SIG, so like Daniel, Matt, Liggitt, you know, the folks from Google. I don't think they feel like they own it right now, but I think that is the right home for it. I think, as we're figuring out via the steering committee sort of the roles and responsibilities, and redoing SIG charters, I would like to see if we can't convince them to take and feel ownership of etcd. I've been pushing for that internally and it's been only moderately successful.
F
So with kubeadm we're going to do something, well, just kind of hacky with static pods for the upgrades, as we talked about last week. But still, one of my biggest concerns here is the scalability tests. From what we discussed last week, basically SIG Scalability drove the etcd support, the etcd3 support, along with SIG API Machinery a bit, but then the actual implementation of the upgrades and such, other than the built-in clients and validating the server version, more or less landed with Cluster Lifecycle.
F
So what would really help is if we could get the scalability tests running somewhere else that isn't, like, kube-up and Google-only features, or some Google-only configuration. Because right now, we discussed last week, should we go with the 3.2.8 etcd server, for example; myself and Tim had a chat about that, but at the same time we can't really do that kind of thing if the tests running on Google are on 3.1: then we have absolutely no coverage and are basically blind.
E
Another thing, too, is part of the scalability test suite. Eventually we want to, we've talked about this a number of times: Red Hat has their own jig for HA configurations, because I know, because I did it, I managed the team, and for being able to test it, and we don't have the equivalent thing in the community. As we're adding an HA configuration for etcd, it would be ideal to have that as a deployment, or maybe even the default deployment.
D
So I feel like we're drifting a little bit here. What I'm hearing is that, you know, as we start formalizing, looking at the charter and that type of stuff, it's not a slam-dunk that this group owns the etcd lifecycle management. I think that's a separate thing. I think it's worth calling out that, right now, this is a bit of a hot potato, and it's not something that we necessarily feel…
D
…you know, folks are signing up to do. Yeah. And as we think about things like, as we build out tools like kubeadm, right, one of the things that we could do with these tools, as part of our pre-flight checks, is: yeah, we'll install etcd, but we'll put a warning there — hey, if you're having us manage etcd, that's probably, you know, some sort of disclaimer around that — and, when we do upgrades, if we have to, ensure a certain version of etcd.
F
A quick reply to what you said: what we're doing now is all done in kubeadm init. If you don't specify external etcd, we will install the right etcd version for the, like, right Kubernetes version. Then, if you do upgrades, we're planning to have a flag, like kubeadm upgrade with an upgrade-etcd option as well, or something, to actually bump etcd at the same time we bump the control plane version. Because, well, 1.8 supports 3.0, 1.9 supports 3.1 probably, and then you're on your own.
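The pairing just described can be sketched as a tiny lookup. This is a hedged illustration only: the versions are the ones stated in the meeting, and the helper function is invented for this sketch, not kubeadm's actual code.

```python
# Default etcd version paired with each Kubernetes minor release, per the
# discussion above (illustrative values, not kubeadm source).
DEFAULT_ETCD_VERSION = {
    "1.8": "3.0",
    "1.9": "3.1",
}

def default_etcd_for(kubernetes_minor: str) -> str:
    """Return the etcd version kubeadm would install when no external
    etcd is specified; beyond the tested pairs, "you're on your own"."""
    try:
        return DEFAULT_ETCD_VERSION[kubernetes_minor]
    except KeyError:
        raise ValueError(
            f"no tested etcd pairing for Kubernetes {kubernetes_minor}"
        )

print(default_etcd_for("1.8"))  # → 3.0
```

The point of the lookup is the cliff at the end: past the versions the SIG has qualified, there is no default and the user has to pick.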
D
Let's be clear: API Machinery, they're doing God's work, but they're also not very end-user focused, by evidence of the fact of, like, the mess around client-go, and we don't have an SDK, and blah blah blah, right. So I think it's probably wishful thinking to think that they're going to actually put together something that's consumable by end users. It's just not in the mindset of that group. Yeah.
B
I'd sign up for that, I guess. I think what we can do is, like, what I was thinking: we can try to look at what's out there, do demos. Like, I did a little collection of the etcd things: there's a bunch of solutions out there, not just the etcd operator. There's one from Zalando, one from Monsanto; you know, kops has one, kube—
B
—kubeadm has one, and it sounds like Red Hat has one which actually does HA, which I would love to see. But, you know, if we can try to figure out what's out there, can we use one of them, can we get rid of it? That's great. But I think we don't necessarily need to build the solution for etcd management ourselves — maybe someone else will do it — but we still need a solution. Yeah.
A
Maybe it's a lot of the same idea as component config, where, if we set up the framework of "here's how you upgrade etcd, here's how you specify configuration", then maybe a different SIG, likely API Machinery, actually drives when we do the upgrades and qualifies etcd versions, but we write the machinery to actually do the upgrades and give them the way to configure it. Maybe a split like that makes sense.
D
I think one of the things that we can do as a SIG — it doesn't all have to be code: it can be documentation and guides and best practices, and just cutting through a lot of the noise. That's the first step before you actually write code: you at least get clarity on what it is that needs to be done. And, you know, that's even if we say, hey, there's no officially supported…
A
All right, I don't want to rat-hole on this too long. I think maybe the next step, Justin: if you could create a separate doc where we put these bullet points, then people can kind of flesh them out. I think it's going to be required for us to do anyway, from the steering committee, as part of redoing our charter, but this, I think, would be a great way to sort of kick-start that process.
D
Most of the rat-holing going on in the steering committee right now is around incubation projects, so that's kind of the first order of business that people are talking about. So there hasn't been a lot of talk in terms of what the SIG process is going to look like. I think that the SIG charter will probably include some level of clarity around sort of what's in and what's out of the SIG, so that would be sort of SIG responsibilities and areas.
A
Thank you, all right. Moving along: the next thing — I got an email this morning with the list of all of the SIG updates and SIG deep-dives scheduled for KubeCon, and we are not on that list, even though, after the last time the list went out, folks asked me over email to put us on the list. Paris said that there is space for us to do a deep dive or an update, or both. I wanted to poll people and see what they would prefer.
A
I think we need to do at least one of them to have our presence at KubeCon felt. I don't know if people want to sign up for doing both. At least one of them is sort of an 80-minute-long presentation slash Q&A type thing, so a little bit of work to put together, but I think it's important for us to do at least one.
F
For kubeadm we have things covered, kind of, because I'm speaking on Thursday about, like, kubeadm: what is it, how does it work under the hood, things like that. It's pretty advanced — it's in the advanced track — so we might want to have something more general that's not kubeadm-focused, probably as complementary, like how to contribute to kubeadm or SIG Cluster Lifecycle in general, or kops, or whatever. Also, you have a talk about the cluster API, right, Kristin?
A
Yeah. She didn't tell me what the available slots were, but she said there was some space available. I think, going back to Justin's point, we haven't spent a lot of time focusing on kubeadm; we're starting to spend some time focused on the cluster API. But if we step back, maybe this would be sort of a good forum to give an update on what we think our SIG is doing: here's what we're working on, and maybe, if it has already happened, "go see Lucas's talk about this".
A
"If you want more information, go see so-and-so's talk about that." But giving sort of a high-level view that's not too in the weeds, I think, might be useful for people — and maybe not, I don't know. It seemed like it was an opportunity for us to, if nothing else, have sort of a SIG gathering ourselves.
E
One thing I wanted to touch on as well, at least while we're on the topic, is to try to promote engagement, because we haven't necessarily been that good at that, and to help promote and foster how people can get started, right. If we have this opportunity and venue, we should spend some time, you know, trying to break that down.
F
I won't sign up for it right away — but maybe both, I don't know. Anyway, I'm not going to sign up this minute, but I'm positive about driving something along these lines.
F
So, over the latest two months or so I've been trying to engage more people, and have been mentoring a couple of different folks in various ways, and there's also a mentoring program coming up, like, officially soon-ish, but for now it has been from time to time. I think I would definitely like to drive something contributor-focused. Like, for example, I wrote up the doc on how to contribute to SIG Cluster Lifecycle.
F
That is a thing that should be more visible; I don't know what the right place is to put that kind of information. Right now I think it lives in the kubeadm repo, or things like that. So that's definitely a thing I'm interested in. Also, I kind of covered it in the blog posts — the latest two blog posts I've been giving for SIG Cluster Lifecycle — but this, like the charter and things like that, could always be iterated on further, to get more visibility into the process.
A
Okay, so Paris is telling me that an update is effectively a high-level summary of everything that's been going on. I think that was sort of what I was proposing doing: where we step back and we say, here's what our charter is, here's what we're working on, and then sort of pivot into what Tim suggested about here's how people can get engaged and where they can start contributing. And she says that a deep dive is more like a working session.
A
Exactly, and we might want to have sort of an evening drinks or something, to get together and chat as well, as long as people are there. Okay, I will sign up for a SIG update session then, which I think is half an hour long or so, and I'll coordinate with you, Lucas, about what we can put together for that. Cool.
F
A quick heads up on the getting started guide PR: I closed one, as I think this is what we don't want to do — expanding our vast mass of different guides. These get outdated so fast, with, like, naturally, only one maintainer or so, and we don't want to mislead people. So this is just: please take a look if you want to maybe add something. I mean, I don't want to throw away good work, but, well, it's no use merging and then deleting things.
E
…fix the docs — because I just turned off that keyword right after that, poor guy. But I think it's worthwhile every now and then to re-evaluate some of our docs, possibly from a new person's perspective, not necessarily our perspective, because we've been doing this for so long that we know all our ways around it, even if there are some minor things there. A new person's perspective is sometimes just really valuable. Yeah.
C
I completely agree. I think a lot of the work that Bret is doing helps out a lot in terms of the reorganization of our docs, but since I've started triaging, a lot of these issues — as you say, like these paper cuts which people are running into — started to become a lot more visible to me. So one thing I did recently is create a dedicated troubleshooting doc, which actually collects all of these common problems.
C
Yeah, I think there's a few different solutions of how to do that, but I think the most fully functional one is the one by Mirantis, right. So it's just whether we feel comfortable documenting that as a proposed solution for that use case, right, and making it very explicit that we don't manage or maintain this project, but, you know, if somebody wants to go ahead and do that and use kubeadm in that way, like, this is the dedicated repo over here.
D
So here's something, just trying to connect the dots between this discussion and the charter discussion: one of the things we should probably put in the charter is owning the documentation experience around installing Kubernetes, right. And if we claim that explicitly, and we get sort of the right people involved, I think we should look at the entire flow: somebody goes to kubernetes.io, they say, "I want to install Kubernetes" — what is the guidance we give them?
D
We point them at the set of tools. How do we help them make those decisions, right? How do we make sure that that stuff is up-to-date and clear? And then, you know, essentially step up and clean that mess up, and not be afraid to piss some people off if we need to sort of remove some stuff that's old, and then say, like, okay.
D
Right, well, I think we should be more ambitious in terms of curating, creating a, you know, a story there that says: look, here's the first-tier set of tools that are actively maintained, and, you know, who's sponsoring them, and why you — you know, maybe some sort of pros and cons that people can use — and then maybe have an archive of, like, here's a bunch of other stuff that's going on, and really help guide people towards finding the tool that's appropriate for them. Yeah.
D
I think some of it is just, like, I feel like I've been a little bit reticent to actually sort of do that, because it doesn't feel like it's part of the charter, and, you know, you want to be fair to everybody. So I think there's a certain amount of being frozen with indecision around that stuff, and I think, like, if we just say, hey, this is something that falls within this SIG…
A
I agree. I think that was definitely not something that we put in our initial charter, but, as we go back and reassess what we think we're doing as a group — and if you look at Justin's list — I think that does sort of naturally fall within the things that we should be owning. And I think, once it's explicit, there will be a lot less push-back or people asking why we're doing it.
F
So I don't think we need a formal, like, change of the charter; we just go ahead and do it. Well, I've been doing things like that in the name of SIG Cluster Lifecycle, I guess, right, because we've talked about it; we've said in early meetings that this is something we should do, but we haven't actually written it down either.
F
Yeah, exactly. Also relevant is that, on the docs side, there's a proposal out for restructuring the whole docs site to be, like, use-case and scenario driven, right. So we might also want to actually go ahead and attend SIG Docs meetings and ask: how far along is the MVP for this? How can we make Kubernetes installation look great on this new MVP site that you're building, and how does it fit in your different motifs that you're creating? Yeah, that's that.
H
That is pretty embarrassing, so please don't judge me by my lack of being able to see the green button. Anyways, my name is Damian Hanson; I've been involved in SIG Network for about a year. Myself and several others are part of an IPv6 working group. I think we actually have some of the working group members on the call, so I appreciate their attendance, but I just wanted to take—
H
There are issues, both recent as well as issues that date back, I think, close to two-plus years, of the community asking for IPv6 support. And so, about a year ago, when myself and some others got involved — and we wanted to get involved in the community and do some good work — this came across our radar, and we said, wow.
H
You know, this is an opportunity to contribute to the community, and something that's also important to customers. And I'm part of Cisco, and, Cisco being a networking company, it's important to Cisco as well — so it kind of met all those checkboxes. You know, IPv6 is no longer a corner case; again, there's another link in the notes section you can take a look at: just over the recent years, I think adoption is up to about 20 percent.
H
You know, government agencies are required to support IPv6, and so IPv6 is no longer this corner case that we could kind of just turn a blind eye to. And, you know, last but not least, v6 is different than v4, so it provides an opportunity for innovation: there's functionality related to security and mobility that can, you know, potentially provide the opportunity to change the way that applications are built on top of Kubernetes, and so forth. You know, the current status is—
H
For example, you see there's over 30 pull requests that have merged; I think there's another dozen outstanding in the main Kubernetes repo alone. That doesn't include the work that's been done in CNI, in DNS, in the Go client — I mean, the list goes on. So there's been a lot of work done up to this point. A couple of other things to point out: we've got both iptables as well as IPVS kube-proxy support.
H
I think the fourth bullet down is one of the reasons why I'm here talking to this team: early on, with the question of, okay, well, great, how do we actually get this deployed, we looked at the field and said, well, it seems like kubeadm makes sense, so let's just focus on that. And I think Lucas can attest that we probably have at least eight to ten pull requests related to kubeadm, to ensure that users can very easily deploy a Kubernetes cluster using IPv6 with kubeadm. And the last bullet—
H
We've got a working deployment and end-to-end tests, and you can go ahead and click on either of those and see the details. You know, there are caveats, right: we're basically having to cherry-pick the pull requests that are in flight; there's a few end-to-end tests that aren't passing, as well as maybe one or two additional end-to-end tests that should be created that are more specific to IPv6.
H
You know, the work outstanding is: we're trying to get IPv6 alpha support into 1.9; the documentation needs to be done; the end-to-end test integration needs to be complete. And I kind of perked up a few minutes ago when there was discussion around using kubeadm-dind-cluster. The requirement for us was deploying a cluster using kubeadm with everything being in containers — so your master and nodes are in containers — and we made a decision to move forward with kubeadm-dind-cluster.
H
You could click on that, and you could actually see that we put a significant amount of effort into adding IPv6 support in that project; the pull request for the work that we've been doing over the last few months just landed either today or yesterday, and you can see the amount of work that was done there. And that was really a requirement for us, because we decided to do end-to-end tests on GCE: GCE does not support IPv6, so we cannot deploy physical nodes and have them talk to each other over v6.
H
So we had to basically emulate a multi-node cluster, so that we can deploy that multi-node cluster as containers and use Docker networking, since Docker networking on a single host supports IPv6, and simulate a real cluster with that. We'd also like to see additional CNI plugin support; we do have an outstanding pull request to add support for kubenet.
D
So I'm excited to see IPv6 make progress; that's super cool. When it comes to kubeadm, I think one of the biggest problems we have right now is that kubeadm essentially, you know, assumes that the kubelet is already running, and there are networking flags that are passed to the kubelet that kubeadm doesn't actually know how to change, and because of—
D
You know, and it might — and this is going to change with the component config stuff that's coming, so that's exciting — but I think, in general, customizing the network around kubeadm is not as easy as we want it to be yet. So I think that's kind of a little bit of a precursor to really making this work. What do you think, am I up to date?
D
But I think, you know, the story that I'd like to see is kubeadm actually saying: hey, here's the network ranges you need to pick, here's the defaults that we use, here's how to override them — and then one of the overrides could be: oh, and if you want to run in v6 mode, here's how you do that. Yeah.
F
So, just to clarify, I've been reviewing, like you said, maybe eight to ten kubeadm pull requests that implement this. It's exciting to see the progress, and it's also very good that, as kubeadm matures, it's going to be the point that others send pull requests to in order to test their new stuff out — in the same manner as kube-up works today, kind of, but in a more streamlined way. So, yeah, I mean, kubeadm is basically going to get an IPv6 mode.
H
It would be one or the other. So, when we first started on this path, we were interested in dual-stack support, and we found that, okay, there's a design under review for being able to specify multiple networks for pods, and so we said: okay, well, let's just kind of focus now on IPv6-only, so we don't have to worry about that. And thankfully we didn't go the dual-stack route, because that design proposal is still going through iterations and we probably would have been hung up on that.
H
So yeah, if I understand you correctly, Justin — the bridging work: again, we have support in our reference implementation for the CNI bridge plugin and the host-local IPAM plugin, however you want to reference it, and that's fully functional. You know, I would really urge this team, if you are interested in the work: in the presentation I've got a link to a deployment guide that we've referenced here. Let me just share this again — share — sorry.
H
Can you see this? Right. So, you know, we've got basically a deployment guide, and, aside from, like, the basic prepping of physical nodes and all that good stuff, you get down to the meat of it as it relates to kubeadm — or, I guess, to your point, Justin, CNI. Yeah, you have to go ahead and create your CNI file, you have to create a kubeadm conf file, and then you just—
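As a rough illustration of the kubeadm conf file just mentioned, an IPv6-only configuration of that era might look something like this. This is a hedged sketch, not taken from the deployment guide: the addresses and subnets are made up, and the v1alpha1 MasterConfiguration API group is assumed from the kubeadm 1.8/1.9 timeframe under discussion.

```yaml
# Hypothetical kubeadm config for an IPv6-only cluster
# (API group assumed from the kubeadm v1.8 era; all addresses illustrative).
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "fd00::100"
networking:
  serviceSubnet: "fd00:1234::/110"
  podSubnet: "fd00:5678::/48"
```

Alongside this, the CNI file referred to would configure the bridge plugin with host-local IPAM over an IPv6 range, matching the reference implementation described earlier in the meeting.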
H
Yeah, so this was basically the v6 work that just landed for this project, and what we do, as part of this work, is: when you go ahead and specify that you want to deploy a v6 cluster, it will deploy a few additional containers. One of the containers is a DNS container that does DNS64, and another container is a NAT64 server, so that those container nodes and the container master, whenever they need to get out and curl or pull down an image, or what have you, it's—
B
It's like, I know that there are networking providers out there that, if you assign a CIDR to your nodes — which, you know, the controller can do anyway — will set that up for you with, like, GRE tunnels or VXLAN tunnels. I'm wondering, if we did that, whether, with your support, everything would then just work — shouldn't it?
H
One of the pull requests — if you look at the link with the over 30 pull requests; let me see if I can actually even find it quickly — Rob Pothier is the gentleman that added cluster CIDR support, so that you can specify, you know, a /40 v6 address for your cluster CIDR, and then, the way that works, as everyone here surely knows, the pod CIDRs are doled out amongst the nodes from that cluster CIDR. So, you know, there may be some corner cases or caveats.
H
Because, again, our focus has been on this specific reference implementation of the CNI bridge and host-local IPAM plugins. We do have an outstanding pull request to add kubenet support, but, if you're familiar with kubenet, it essentially just wraps those two CNI plugins, and so we're not expecting, you know…
A
All right, we've got basically zero time left, but I wanted to do a couple of quick PSAs. One: Jace has put a link in here — please fill out a very, very short survey. It's one question, like one radio button, about whether you would prefer to do 1.10 planning next week or three weeks from now. Please fill that out. Jace has volunteered to run that meeting; we're going to co-opt the majority of the Tuesday weekly meeting to talk about what we want to shoot for in 1.10.
A
This is, as I mentioned previously, part of the sort of new PM planning process, where each SIG is going to do their own planning; we're not going to have an overarching PM driving us. I also wanted to briefly mention that the SIG Testing folks said you can set up email alerts for failing tests in Testgrid. We have a number of failing tests right now, and it would be really nice to get alerts sent out for those.
A
I don't know if people are interested in having alerts spam our SIG mailing list, or if we should set up a different mailing list, or use, like, a plus-failing-tests address or something that people can email-filter on — so I'll mention that again next week; I just wanted to put that in the back of your minds. And then, very quickly: Fabrizio, did you want to talk about your doc or your PR?