From YouTube: kubernetes kops office hours 20190816
A: We're gonna tell you, sure, why: so I added back the permissions which I thought it should have, but probably we need to give some group more permissions to directly edit. I am adding people individually until we figure out what that group is, but certainly you can comment and I will click the tick button as quickly as I can, or, if you're a member of sig-cluster-lifecycle or sig-aws... oh yes, you can comment on those Google Groups and I will click the tick button as fast as I can. I didn't have permission myself earlier.
A: You're on mute. I'm gonna... yes, and I actually did some new AMIs as of this morning, and the really good news is that they actually are being built from... there's an image-builder subproject of sig-cluster-lifecycle, and so we are trying to move over there, and I built them from there right now. Basically, all the kube-deploy stuff is just in a subdirectory that we're essentially trying to rationalize.
A: But at least we are in a better location where all the image building is taking place. The PRs aren't up yet; there's a massive copy PR and then I'll do the actual PRs themselves, but that's in good shape. And so there are two PRs that went up... sorry, there are AMIs that went up, for 1.11, 1.12, 1.13 and 1.14, and there is a PR to put them in the alpha channel, and we can talk about it.
A: We can talk later about, like, agreeing the releases that we intend to do over the next two weeks, but they are there in the alpha channel, and if you want to try them, they should work. The date is 2019-08-16.
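Trying the newly promoted images from the alpha channel can be sketched as a kops invocation. This is an illustration, not from the meeting itself: the cluster name, zone, and state-store bucket are hypothetical, and it assumes the `--channel` flag on `kops create cluster`, which selects the channel whose defaults (including images) a new cluster inherits.

```shell
# Create a throwaway cluster whose defaults, including the freshly-built
# AMIs, come from the alpha channel rather than stable.
kops create cluster \
  --name test.example.com \
  --channel alpha \
  --zones us-east-1a \
  --state s3://my-kops-state-store   # hypothetical state store bucket
```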
So if you want to try that, you can. And more concerningly, this is zetaab, I guess? I'm not exactly sure how I pronounce that. I don't know if he's here; I don't see...
A: Zetaab has pointed out that there is a challenge with the newer version of Debian, Buster, the one after Stretch, which is not yet... I don't know, it's not fully stable or whatever, or not the official stable release yet. But in general, iptables 1.8 no longer uses iptables on the back end and now uses nftables, but it's still called iptables.
A: We can easily put a version of iptables into the AMIs that we bake and sort of stick to that version. The real pain point is switching, trying to change from one to the other, because it's the mixing that's the real problem. So we might be in better shape for people who've been using the AMIs. We want to find something that works for everyone to some degree, but it's not clear how that would work. So discussion is happening on that upstream, on Kubernetes issue number 71305.
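For context on the mixing problem: on Debian Buster, the `iptables` command from iptables 1.8 defaults to the nftables backend. A common workaround, which is a general Debian technique rather than anything decided in this meeting, is to pin a node to the legacy backend so the host and kube-proxy rules stay in one implementation:

```shell
# Show which backend the iptables binary currently uses,
# e.g. "iptables v1.8.2 (nf_tables)" vs "(legacy)".
iptables --version

# Pin the legacy backend via Debian's alternatives system; mixing
# nft-managed and legacy-managed rules on the same node is the
# failure mode described above.
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```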
A: Jessie also brings up, if there's nothing else on that... Jessie also brings up that we should think about cloud provider extraction. OpenStack is doing it, or has done it, and that's great news. The other in-tree providers are in the process of moving, and I don't necessarily want us to be the first ones to do it, but we might want to consider adding support. I don't believe that any of them are necessarily at the stage where we would recommend...
A: ...other than OpenStack, at the stage where we would recommend using the external cloud providers, but I could be wrong on that. I'm really looking at AWS and GCE here, where my understanding is the in-tree ones are still the golden, the primary, targets, and using the external ones would be problematic.
A: Technically the last supported, the currently supported, Kubernetes version is 1.13, and so 1.12 is not getting some security fixes anymore. So I was thinking of at least trying to... we have a mechanism, I could force people to upgrade; it's effectively deprecating an old version of Kubernetes. We basically mark it, there's a PR up, 7423, and what we basically do is, in the channel...
A
We
say
that
we
require
a
particular
version
of
kubernetes
and
then,
if
you
try
to
update
or
something
or
create
a
cluster
using
a
version
that
is
less
than
that,
then
we
emit
an
error
message
by
default
and
if
you
say
actually
I
do
want
to
run
it.
You
say:
cops
run,
obsolete
version
equals
1
as
an
environment
variable,
and
you
can
still
continue
to
do
it.
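The escape hatch just described would look roughly like this on the command line. The cluster name is hypothetical and the failure behavior in the comment is paraphrased from the meeting, not actual kops output:

```shell
# With the channel declaring a required minimum Kubernetes version, an
# update against an older cluster fails with an error message by default:
kops update cluster --name demo.example.com --yes

# Opting in anyway, via the environment variable mentioned above:
KOPS_RUN_OBSOLETE_VERSION=1 kops update cluster --name demo.example.com --yes
```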
B: (inaudible)
A: Good, yeah. Are you there? I wouldn't call them unsupported, but... yeah, I mean, certainly, yes, they're unsupported by upstream. We work with them, but they are not getting security fixes, which is a huge deal.
B: (inaudible)
A: I mean, eventually I hope we can get to... we've never deprecated a Kubernetes version; I think we eventually should think about deprecating, and this is sort of our first step to that. Exactly, yeah. But yes, we certainly, I think, should not make it trivial to run with versions that are that far out of support. I'm hoping it's just one or two people that just run big clusters, but we will find out. And then, yeah, there's a PR to do that.
B: (inaudible)
A: Okay, and then something else we've been doing, I think, the past couple of sessions, which I have found very helpful, is this idea of sharing, or agreeing, the releases that we're gonna do. So I was gonna share my screen window and we can have a look at how we're doing. Obviously, I think I have a tendency to be over-ambitious, but we did do a bunch. So these are the ones that we did; out of the other ones, we released one-three-oh... well, sorry, 1.3.0.
A: Now we promoted our AMIs from alpha to stable, the previous ones, a bunch of other Kubernetes versions, and we did an etcd-manager release with the etcdctl binaries; those are now available directly on GitHub. And we also did another etcd-manager release. And then we have not done these kops releases yet, and so this is the sort of proposed set that we aim to do, some of which... let's see, the first four of which actually already have PRs up, so I am relatively confident we can achieve them.
A: A Kubernetes release is another, like, carrot to get people to move forwards: putting the very latest Kubernetes release into the alpha channel. This is sort of the general thing; we just tend to roll these forwards as good hygiene. More importantly, in the stable channel I was proposing to push 1.13.9 sort of immediately. There is a CVE; it was fixed in 1.13.9, and so I think we should put that in there. I think the current version is 1.13.8.
A
So
it's
not
a
huge
jump
but
I
think
we
should
probably
do
that
and
then,
at
the
same
time,
I
figure
we
should
do
the
others
and
I
might
we
might
as
well?
Also
do
one
12
10,
but
I
want
to
highlight
one
12
is
not
support,
so
one
1210
does
not
include
the
CVE
fix
and
the
CV
fixes
for
CR
DS
sort
of
like
an
are
back.
A
Vulnerability,
I
guess
so,
if,
if
you
are
using
I
think
it's
non,
namespaced
are
back
non
namespace.
Cr
DS
then
definitely
be
aware
of
that
of
that
of
that
cv
e.
But
again
it
doesn't.
If
you
trust
your
workloads
and
trust
your
users,
it
is
not
as
far
as
I
know,
remotely
executable
or
monday--
exploitable
and
then
the
one
we
discussed
forcing
users
onto
the
most
recent
one
kubernetes
11110,
technically
111
and
112.
No-One
are
supported,
but
I,
don't
necessarily
want
to
immediately
push
users
onto
112,
because
112
is
somewhere.
A: ...we introduced etcd3, so that feels like a harder jump. But getting everyone onto 1.11, I think, will be a good start. It also means that then, you know, the etcd2-to-etcd3 migration is better. I have not tested it very thoroughly going, like, from 1.8 directly to 1.12, so it would be good to get everyone onto 1.11 as a baseline.
A: The 1.14.0 beta 1, I am hoping, will go today. We did all the big things... oh, there's an interesting one, actually; we should talk about the volumes. But we did all the big things, and then we're basically trying to get this last cherry-pick in there, just because Jessie's done a bunch of work, so we're getting that etcd-manager into there. And then I think we can also cut 1.15.0...
A
Alpha
1
I
would
try
to
cut
at
the
same
time
and
116
alpha
1
remains
problematic
on
the
no
tables
controller,
so
I
think
we're
in
better
shape
than
we
have
been
in
terms
of
backlog.
So
because
this
is
these
two
are
basically
more
less
ready
to
go,
and
so
I'm
gonna
try
to
work
on
the
node
labels
controller.
This
week,
which
is
you
know,
the
the
new
permissions,
the
new
restrict
around
node
labels
and
what
you
by
concept,
Justin.
B: (inaudible)
A: Yeah, it's not... we're gradually becoming less and less dependent on it, so that's good, yeah. If you want to do that, that would be wonderful. I don't know if we should do it for one...
B: (inaudible)
A: Yeah, 1.14; we can release 1.14 without bumping it this time. Sounds good. We'll probably have another beta at least soon, and we could actually release it even without it. I think I said this, yeah, but if we want to do it, if we would like to, okay. It used to be much more of a concern when we had some of the server stuff that was using aggregated API servers; it seems like it's much less important now.
A: We've also, like, winnowed, whittled, down our dependencies on Kubernetes. I think we're actually looking pretty good now in terms of the dependencies we have left. kubectl moved to staging, or at least most of it did, and so that's gonna be a great boon for us, in that I think one of our big last dependencies was on some kubectl libraries, and I'm also moving the drain code into an even more isolated bit of kubectl.
A
So
we
should
hopefully
pull
the
drain
code
is
one
that
I
think
everyone
vendors
in
so
hopefully
we
can
pull
that
into
a
different
package
with
absolutely
no
CLI
dependencies
at
all
and
we'll
probably
still
have
some
CLI
dependencies,
but
we
can
see
what's
left,
but
we're
certainly
getting
a
lot
better
and
you
know
we're
not
using
as
much
of
the
more
complicated
machinery
that
is
more
volatile.
So
it's
less
it's
less
important,
but
if
you
want
to
do
that,
it'd
be
great
to
update
Basil's,
also
causing
us
never-ending
fun.
A: I don't know if people have noticed this, but Bazel is marching towards 1.0, and that means that they are doing all their breakages now. So we are in maximum-breakage land. I don't know if anyone has anything that they want to bring up. I want to talk a little bit about the persistent volume thing, but I will give anyone else that wants to mention anything a moment.
A: ...around the idea that the latest instance types on AWS have a maximum of 28 attachments, it seems. So the volume limit is lower, and if you're also using the AWS VPC CNI provider, the limit is even lower, because the CNI... sorry, ENIs, network interfaces... count against your attachment limit, and so ENIs and volumes draw from the same pool of attachments.
A: I think that the reasonable compromise is this one that I've sort of been proposing, where we essentially say: rather than, like, fully packing each node to the max, let's just sort of decide that, of the 28 attachments, you're gonna have 20 EBS volumes, and the rest for ENIs and for instance devices and that sort of thing. We essentially pre-slice it, and we don't necessarily... so it's not fully optimal.
A: It would be nice if that was instance-type specific, and I think that sort of thing can come later, but for now this seems like the best option. I don't know if anyone has any other views, or has... Mike, have you hit this in the real world? Is anyone gonna be bummed if we only have 20 volumes and the rest for ENIs, or anything like that?
B: (inaudible)
A: Maybe, yeah; the math is sort of funny. If you keep the pod limit at 110, right, and you know that you have 50 IPs per ENI, you know you're never gonna have more than three ENIs, so we have one spare. But if you're on a smaller instance type... But also, you have to set one max-persistent-volumes limit across every node in the system, and you can't sort of mix and match based on the size of the nodes. So it's not perfect, but I think this...
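The back-of-the-envelope attachment math above can be written down explicitly. The 110-pod default and the 50-IPs-per-ENI figure are the illustrative numbers from the discussion, not constants of any particular instance type:

```shell
# Attachment budget on the newer AWS (Nitro) instance types discussed above.
max_attachments=28

# Default kubelet pod limit, and the illustrative per-ENI IP capacity
# from the discussion (real values vary by instance type).
pods_per_node=110
ips_per_eni=50

# Ceiling division: ENIs needed to cover the pod limit.
enis_needed=$(( (pods_per_node + ips_per_eni - 1) / ips_per_eni ))

# Pre-slicing the pool as proposed: 20 slots reserved for EBS volumes,
# one more for the root volume, the rest left over.
ebs_budget=20
spare=$(( max_attachments - ebs_budget - enis_needed - 1 ))

echo "$enis_needed $spare"   # 3 4
```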
A: Exposing this option lets people set a limit that will stop things breaking, and I think probably we're gonna have to start setting some sort of lower limit, some sort of limit that is lower, for when these instance types become more prevalent. So this is sort of a first step towards that, but it's not the final answer. And obviously it is... it's all the ones.
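For reference, upstream already had a blunt version of this knob at the time: the in-tree scheduler reads the `KUBE_MAX_PD_VOLS` environment variable to cap schedulable volumes per node. This is the existing Kubernetes mechanism, not the kops option proposed in the meeting, and the split shown (20 of 28) is just the example figure from the discussion:

```shell
# Cap the scheduler at 20 EBS volumes per node, leaving the remaining
# attachment slots for ENIs and the root volume. This is cluster-wide,
# which is exactly the "one limit across every node" drawback noted above.
KUBE_MAX_PD_VOLS=20 kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.conf
```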