From YouTube: Kubernetes kOps office hours 2020-08-14
Description
Recording of the kOps office hours meeting held on 2020-08-14
A
Hello, everybody. This is kOps office hours; today is Friday, August 14th. I am your moderator and facilitator, John Myers. I work for Proofpoint. A reminder that this meeting is being recorded and will be put on the internet. Please be mindful of the code of conduct, which is basically to be a good person. I've put a link to the agenda in the Zoom chat. Please feel free to put your name on the attendees list and add any items you'd like to discuss to the agenda in the appropriate place.
A
B
C
Yes, as far as I know it's only one organization, the kopeio organization, I think. So I am planning on paying for it, or getting the CNCF to pay for it.
C
I am also planning on not paying for it in perpetuity, and therefore switching; we should get onto k8s.gcr.io, which is coming very, very soon, and so we should do that. But I don't see any way we can avoid paying for it, and I don't think it's unreasonable to pay the five dollars a month they're asking for anyway. I don't know if anyone has seen this already.
C
Some people were saying that there was already rate limiting and, as I understand it, it would be per IP. So if you launched a big cluster behind a NAT gateway, in other words with private IPs rather than public IPs, you might notice that; you would likely notice it if it was big enough.
D
Yeah, I think they'd already implemented it, but at a far higher level, is my understanding, and it's enforced on the clients rather than the repos, I believe. But yeah, the messaging has been pretty atrocious in terms of clarity, I think.
A
Okay, Peter, if you'd like: e2e jobs.
E
Yeah, the e2e job for GCE is failing because of bucket permissions. Justin suggested a fix, and I'm pulling it up now, and it looks like... I implemented the fix and it's still failing. I haven't had a chance to look into it since.
F
F
So what's going on is that the nodes, the VMs that are running all the Kubernetes stuff, are authenticating to that bucket as the node service account. In GCE each node has a service account, and if you don't specify a service account, it's going to give you the default Compute Engine service account associated with that project.
F
So every project in Google is going to have a different default service account. If you do specify a service account ahead of time, then we could add permissions like that: we could grant that service account access to a certain bucket, and then we could run all of our nodes with that service account.
F
F
A different project, then, like the project generator thing that Prow uses.
C
It's also tricky because GCE, or GCS I should say, doesn't have the same permissions model as S3 in terms of subtrees. I think it's starting to come in, but it's still a little complicated.
C
I think I looked at it and filed some internal bugs, because I found it very difficult to understand and very difficult to make it actually do what I expected it to do. So I will see where those bugs got to. But without using those, what we should be able to achieve is granting access to the whole bucket.
C
So if you have multiple clusters in a bucket, we don't get the granularity or the isolation that we get on an S3 bucket, but that should work. And I'm interested, by the way.
F
Yeah, so what I would suggest is that we have a bucket, and then we just generate, you know, create a path that has some random thing at the end of it, and then we clean up after ourselves at the end. And then create a service account and make sure that that service account has full access, storage object admin, on that bucket, and then we should be good to go.
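(A rough sketch of the setup being described, using gcloud and gsutil; the project, service account, and bucket names are placeholders, not the actual test-infra values.)

```bash
# Create a dedicated service account in the test project (names are illustrative)
gcloud iam service-accounts create kops-e2e \
  --project my-gcp-project \
  --display-name "kops e2e nodes"

# Grant it storage object admin on the bucket used for cluster state
gsutil iam ch \
  "serviceAccount:kops-e2e@my-gcp-project.iam.gserviceaccount.com:roles/storage.objectAdmin" \
  gs://my-kops-e2e-bucket

# Run the node VMs as that service account instead of the default compute SA
gcloud compute instances create node-1 \
  --project my-gcp-project --zone us-central1-a \
  --service-account kops-e2e@my-gcp-project.iam.gserviceaccount.com \
  --scopes cloud-platform
```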
F
We should solve this problem once and for all, but the problem is getting that service account, and I found that very difficult to do. Maybe that's become a little bit easier now. I can tell you exactly; I can help. I have everything documented. I went through this a couple of months ago, and I'm happy to share my notes and get this thing done, because it's driving me crazy too.
F
A
Great. Hey, Ciprian: Docker defaults.
G
B
Okay, thank you. So in the past, Docker was started for Kubernetes without the iptables rules. This is a historic thing that I cannot find any reference for at the moment.
B
B
So I think my question would be: do we want to keep doing this? I really don't know why it was done in the beginning, and what do we do about it? Do we want to document something in the Docker configs saying that if you want Docker commands to work, you have to enable the iptables rules? Or, I don't know. So, any feedback?
C
A question on the iptables thing: you posted a workaround, which seems like a good thing we could put in as a release note, and I think that workaround was to essentially set these two options back to true in the kOps cluster config. Does that break iptables?
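(For reference, the two options in question appear to be the Docker ipMasq and ipTables settings in the kOps cluster spec; a minimal sketch of the opt-in workaround, values illustrative.)

```yaml
# kops edit cluster <clustername>  -- restore the upstream Docker defaults
spec:
  docker:
    ipMasq: true    # per the discussion, kOps has historically defaulted this to false
    ipTables: true  # per the discussion, kOps has historically defaulted this to false
```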
B
It's just the question of how you want to handle it: as a release note, as a documentation note, or by making it, let's say, probably less secure by enabling them by default, using the Docker defaults.
C
Fine. I double-checked and GKE is still setting these options to false, both defaults. I don't know if there's any reason why we're still doing that there, but as far as I know, basically everyone sets them. But yes, you're right, it could just be that everyone has carried forward these settings for all time.
B
C
They can talk to Docker locally anyway; they can run a privileged container anyway if they get access to Docker. But yeah, it's another roadblock, I guess. I can try to find out why we set these. I seem to recall it being important to set them originally; I don't know whether it's still important.
B
I think originally they clashed with the CNI firewall rules or even with kube-proxy, but recently, like the last two years, everybody fixed their CNIs to play nice anyway, so for now it's not that important. We had only two people complaining about it, so I guess most people don't run Docker commands on their hosts.
B
H
Regarding recovery, mine is a little bit different. For me it's mostly just modifying the volume. It's something that I was able to do, tricking kOps into thinking it can do it, by editing the cluster spec manually, directly on the S3 bucket, and then modifying the volume on AWS. So I didn't have to recreate anything.
H
It's just changing the volume type on AWS, for example from gp2 to io1, and then increasing the provisioned IOPS. Then I just had to go to the cluster spec, modify these fields directly on S3, and as soon as I uploaded it back to S3, kops update cluster was happy about everything. So I was just curious about why it's not supported.
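(Roughly what that manual workaround looks like; the volume ID and member names are placeholders, and editing the state store directly bypasses the kOps validation being discussed.)

```bash
# Change the volume live on AWS, e.g. gp2 -> io1 with provisioned IOPS
aws ec2 modify-volume \
  --volume-id vol-0123456789abcdef0 \
  --volume-type io1 --iops 1000

# Then edit the cluster spec object in the S3 state store so the etcd member matches:
#   etcdMembers:
#   - instanceGroup: master-us-east-1a
#     name: a
#     volumeType: io1
#     volumeIops: 1000
# After uploading it back, "kops update cluster" no longer reports a difference.
```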
C
Yeah, so as for why it's not supported, I suspect it's just that no one has done it; it probably used to be unsupported. I don't know if it used to be possible to change the type hot, as it were, and if it is now possible, I'm guessing it's just that...
C
...we have never updated the logic to allow that. So it would be great to get that allowed, particularly if you didn't have to do anything special other than bypass the validation logic.
H
Yeah, that's pretty much what I had to do. I was just more concerned because yesterday I did that in a test cluster, so I wasn't too scared of breaking it, but I was mostly concerned whether there are any unknown consequences or implications of doing so behind the scenes, and whether there's anything unexpected.
C
If EBS lets you do it live, then it's just whatever consequences there are of doing it live on EBS; we shouldn't care. But I don't know if there are any. I don't know if people here know of surprises.
A
The validation being out of date, that's kind of a different issue. I think Ciprian's issue is more about whether you lose the data.
C
A
B
B
B
A
B
C
Okay, I don't know; okay, that sounds like another story, but we'll not get distracted again. Would you mind opening an issue on etcd-manager describing sort of how to reproduce it, and I will look at what happened? It might be that you have to issue a command, because we consider it to be an unsafe operation.
C
I don't know that it would be an unsafe operation, so it might be... So please do open an issue, and I will look at exactly what state we got into.
C
G
B
Well, many people, when they start doing testing with kOps, want to see how the recovery works, disaster recovery. They don't care about the storage being on EBS; they delete everything and expect it to work, and then they come back and say: hey, this didn't work, why? What happens if I break it?
C
So it's a great thing to do, and I think I agree with them that they should be able to delete all their instances if they keep all their volumes.
C
Well, I mean, I guess we have the S3 backups as well. So yes, we should make sure that we don't get into a surprising case when it is easy to recover. The unsafe-operation thing is, for example: if they did delete all their volumes, it should still be possible to recover from S3. The challenge is that, because you lose something, we don't want to do it totally live, so we will require a command.
B
B
C
E
So our cluster spec allows users to provide an ACM certificate that gets attached to the API ELB, and that changes the listener from TCP to TLS. And when that happens, client certificate authentication no longer works, because the TLS session is established between the client and the load balancer rather than between the client and the API server. So client certs don't work. And what has been happening, unknowingly, is if you're using a kubeconfig file generated from kops export kubecfg...
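(For context, this is the cluster spec field being referred to, with a placeholder ARN; setting it switches the API ELB listener from TCP pass-through to TLS termination.)

```yaml
spec:
  api:
    loadBalancer:
      type: Public
      sslCertificate: arn:aws:acm:us-east-1:000000000000:certificate/example-id
```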
E
E
We would need to open up security group access on the instances to whatever was allowed to reach the ELB, but that's kind of one of my ideas; I haven't had a chance to test it. My main concerns are dealing with DNS TTLs, because we primarily use kops export kubecfg during upgrades and rolling updates, where the DNS records might be changing, and so if we're now relying on TTLs rather than load balancers adding and removing instances, that might cause additional issues.
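(A sketch of the security group change that idea implies, with a placeholder group ID and CIDR: allow whatever could already reach the ELB to reach the masters on 443 directly.)

```bash
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 \
  --cidr 203.0.113.0/24
```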
A
I think the general approach sounds reasonable, you know: changing behavior based on whether an ACM certificate is configured in the cluster spec. I think it would behoove a site that uses this to not use client certs and to install some other sort of authentication provider sooner.
C
C
A
C
E
So I had two ideas, two alternatives. One is we kind of just discourage the use of the ACM certificate; there's not really that much benefit to using it. The reason we adopted it was because, you know, we don't have to keep the cluster CA in everyone's kubeconfigs; it's one less field to manage. But we can easily just go back to doing that and switch back to the TCP listener.
E
My other idea was that we open an additional port on the ELBs that does not have the cert and points to the same masters, and so then, when we want to use client cert auth, you would hit this additional port instead of the 443 port.
A
A
C
There's also SSH tunnels. We have not used those anywhere, so I'm a bit reluctant, but that is another path. We don't rely on them anywhere to date.
C
And I personally test that a lot; I tend to actually run it that way. If you use a domain name and you are not using a load balancer, you get DNS load balancing, unless someone's changed that behind my back, and so I actually do that. That was the bug with the replacing or wrapping of client-go; that one we did see, but in general, until that happened, it was fine.
C
So it's not bad, and generally it used to work fine; the industry moved so much that it broke a little bit, and the reliance on DNS is fixable. I do worry about exposing the internal DNS or the internal load balancer publicly; we wouldn't be able to lock it down, right?
A
C
C
All right, maybe we should discuss more on the issue. I think we've identified a bunch of options. I'm feeling like SSH isn't actually terrible, even though in general it's terrible; it might not be the worst of the options. But yeah, I'm not wild about even what I personally consider to be the best of those options.
E
Okay. I didn't open an issue for this, so I will do that and we can discuss more there.
C
Sorry, just looking at this: the other option is to somehow use the authenticator, but, you know, since this is the break-glass option, that feels problematic.
A
Well, okay, so I have a PR open which changes the worker node bootstrap. To get the kubelet cert, instead of fishing it out of the bucket or using the node authorizer, nodeup will make a request to kops-controller, authenticating with cloud-provider-specific authentication; in AWS that's using a signed GetCallerIdentity request, similar to the way Vault works. Then it provides the keys, and kops-controller will sign those and issue certificates for the kubelet.
C
G
There's no flag. I mean, we possibly could, but I just...
C
C
A
I
C
And
I
think
one
of
the
one
of
the
things
that
john
and
I
had
very
brief-
we
didn't
go
back
and
forth
many
times.
Did
we
just
like
30
or
40?
I
think
the
very
brief
like
back
and
forth
about
was
whether
this,
like
the
interface,
would
work
for
other
cloud
providers.
So
we
should.
We
should
try
to
do
that
and
figure
out
whether
we
want
to
to.
F
A
Okay, Ciprian: arm64.
B
I did a trick for the generic image, the one without the -arm64 or -amd64 suffix: push the amd64 one to it for now, so it stays compatible until we decide on how to build the manifest. Other than that it should work, except etcd-manager, which is a different repo, so we'd have to handle it there.
B
But I see no issues with that. Ole and I have already tested it, well, on small clusters, and it works pretty well. So the question was: should I go ahead and merge it like this and figure out the manifest later, in the next few weeks? I guess there won't be any major roadblocks with that. Or do you have a different preference, or should I wait for something else?
C
I was just going to say I thought the idea of pushing it was very, very clever; that's genius. I like pushing it to the no-suffix name, because presumably, if you're going to use arm, you're going to expect you can use the suffix, and if you're going to use amd64, then you might not be using the suffix. So I like that approach a lot.
B
Yeah, it should work right now so as not to break anything. It works perfectly with sideloading right now. It won't really work if someone provisions it the way you would regularly do with kOps and expects to pull it from the bucket, because some of our components come up as, let's say, DaemonSet deployments, and I could do something there to put two DaemonSets with node selectors or something, the way flannel did it.
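(The flannel-style approach mentioned here is roughly two DaemonSets selecting on the node architecture label; a trimmed, illustrative sketch with placeholder names and images.)

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-component-amd64
spec:
  selector:
    matchLabels: {app: example-component, arch: amd64}
  template:
    metadata:
      labels: {app: example-component, arch: amd64}
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64
      containers:
      - name: example-component
        image: example.com/example-component:v1-amd64
# ...and a second DaemonSet identical except for the arm64 image and
# nodeSelector kubernetes.io/arch: arm64
```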
B
C
That makes sense to me. I think I've heard reference to a Go tool which people use for the manifests. I think that is the tool called skopeo, which, as someone that registered the domain name kope.io, you can imagine how I feel about this tool, but nonetheless we should probably use it, if it does the job. I've seen it used in the Kubernetes community repos, so I think this is the one, but I haven't actually got past my rage to figure out whether it actually is the one.
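(One possible way to publish a manifest list once the per-arch images exist, using docker manifest; the repository and tags are placeholders rather than the project's actual release tooling, and skopeo or a similar tool would be an alternative.)

```bash
docker manifest create example.com/kops/protokube:1.19.0-alpha.3 \
  example.com/kops/protokube:1.19.0-alpha.3-amd64 \
  example.com/kops/protokube:1.19.0-alpha.3-arm64

docker manifest annotate example.com/kops/protokube:1.19.0-alpha.3 \
  example.com/kops/protokube:1.19.0-alpha.3-arm64 --arch arm64

docker manifest push example.com/kops/protokube:1.19.0-alpha.3
```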
C
But I...
B
A
Well, I think this stuff is good to take now. We'll want to do a release soon after, so we can clean up some of the interim...
B
Yeah, anyway, we need a new release so that I can re-enable some of the tests, because the artifacts changed name for the image, unfortunately.
C
This could be the release where you propose a PR. I think... did we merge? I think we merged that. If this is a 1.19 release, we are on the new, more devolved process, whereby you can propose a PR to tag an alpha, and then I will do the remaining mechanics.
C
A
Okay. So, since I already enabled authorization-mode Webhook for 1.19 and up, the other question is: do we want to enable the kubelet authentication token webhook by default, so that tokens provided to a kubelet would then go to TokenReview instead of being rejected outright? I don't see any security downside; it's just enabling a feature that a lot of people are going to want to enable, just better defaults.
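(The defaults being discussed map to these kubelet fields in the kOps cluster spec; a sketch, not necessarily the exact final defaults.)

```yaml
spec:
  kubelet:
    authorizationMode: Webhook        # already being enabled for 1.19+
    authenticationTokenWebhook: true  # send bearer tokens to TokenReview instead of rejecting them
```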
C
A
Okay, better defaults. So then I have a PR for kops-controller to take the node labels from the cloud tags instead of out of the cluster and instance group specs. I believe we agreed to move forward on that in the last meeting; I just need somebody to review it so it gets merged. And then, Anders...
J
Yes. So currently there is some traction on the Windows worker support issue. We, or my company, have made a set of scripts to run Windows workers with kOps, but it's a bit hacky and sidelined. There seemed to be maybe some interest in trying to move kOps forward in that direction, so I'm just wondering how we proceed. I thought there were a few items listed that could be started with, like file paths, making sure that it works on Windows and Linux.
J
There's some CNI stuff that needs to be discussed and figured out, since Windows requires different settings for flannel. So I'm just wondering what the feeling about moving forward with such work would be.
F
If you have an interest in it and the time to do it, then we'd love it; yeah, jump in there, absolutely. Especially if there are issues open on it, we're happy to have contributions.
B
So the thing is that it would be pretty different from what we have now, or we would have to rewrite parts of nodeup. So I'm not against it; we already talked about it a few times in Slack. It's just that it may require an effort to move it to 1.20 or 1.21, because it will be a big change, so you have to be prepared to either have a separate branch and start testing things. We can help you and guide you, but it depends a lot on you, because, I don't know, at least for me...
J
Yeah, so for our part we have, well, not significant, but we have workloads that require Windows. So for us, we need to run Windows nodes.
J
Yeah, but that's also, I guess, a thing at some point then, with, you know, the automated test infrastructure, to be able to safely move forward with such a project as well, making sure that everything works and that new features affecting nodeup work on both sides. And yeah, I mean, it'll add more things. And I remember, one of the last times I talked to some of you about this...
J
...it was said to wait until kubeadm, or the cluster management tooling, because that should handle Windows or something, but I'm not sure what the status of that is either.
C
C
We should establish a realistic goal. As I understand it, running the API server and all those pieces on Windows is a lot harder, so we should presumably target just Windows nodes with Linux masters, and, since it looks like you're using flannel, only flannel or something. And then we could... but I would be in favor of creating an end-to-end test, having it fail for a while, and watching where it fails.
C
C
F
B
J
J
B
We have some way of running, let's say, even flannel. If you look at the flannel settings, when you create a cluster you can create it with flannel-vxlan or flannel-something, so we could create one with flannel-windows. Yeah, that works. It doesn't have to work for all the clusters, so happy.
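(For reference, this is how the flannel backend is picked at cluster creation today; a flannel-windows value would be a new, hypothetical variant, shown here only to illustrate the idea.)

```bash
# existing flannel backends
kops create cluster --networking flannel-vxlan ...
kops create cluster --networking flannel-udp ...

# hypothetical variant discussed here (not an existing option)
kops create cluster --networking flannel-windows ...
```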
B
C
C
Yeah, we should say it's only supported from whatever version we introduce it in, and no transition, no need to add it to existing clusters. That sounds like another one that would be good, yeah, whatever the next few releases are.
B
B
By the way, regarding this, Peter did some nice work with the mock for OpenStack, and I think at some point he was anxious to do it also for GCE. Not sure if he's still volunteering for that, but...
E
Yeah, I basically wrote the cloud mock for OpenStack, and it's helped catch a handful of bugs so far, and helps us kind of run through all of the cloudup tasks, which we've been unable to do without access to OpenStack. So I think it's been pretty helpful. Whether it's worth the 3000 lines of code, yeah, I don't know, but it's helped catch a few bugs, so that's good, and if we think it's reasonable, I think it'd be good to expand on that and add...
E
I was thinking GCE and maybe Spotinst, because that's kind of pretty low effort; there are only a few resources it needs to support, and it's kind of built on AWS, so it should be straightforward.
A
Okay, recurring topics: releases. 1.19 alpha 3: I think we're planning on Ciprian proposing one after merging his PR.
C
And if there are any problems with the docs, just let me know; it's the new release process. I think it's called new release.md or something.
B
B
B
How does that work? Did they add that support for reference, e3?
C
I
B
C
C
B
C
Yeah, if it is an internal flag you can pass, that seems like a reasonable thing people might want. And then it's a workaround; just looking at this PR, it's a workaround that users would have to opt into, and I think that's reasonable.
A
If you do everything except open up the security group, yeah, so...
C
A
C
C
I realized another thing we could do is we could create a... we could create a service account.
C
I was going to say, if we could get to the API server, we can create a service account and use a Kubernetes service account, and use the Kubernetes service account token, but there's a chicken-and-egg problem there.
G
G
C
All right, yes, we should definitely look; I think it's important to think about. Let's discuss it on the issue. I don't know if any of these are high enough priority. I guess the VXLAN one was interesting, but I guess it's not... it's...
E
E
B
Kops-controller or dns-controller, I think, sorry. This is why I put in the diff, because I didn't know if it's in. So kops-controller is not able to add labels or something; for it to work you have to rebuild the pods, and the new one will work. It's a simple fix and it's only OpenStack. So the decision is... I'm only explaining what happens.
A
C
If it's... I'm not opposed. The string fix, John, that you put in, it looks like... John? Yes, sorry, John. Sorry, there's an etcd restriction. Sorry, Ciprian, on this one.
A
C
A
No, actually, in some cases a node will fail to be created; it'll fail to join.
C
A
Because, if you enable aws-iam-authenticator, then some percentage of the time your nodes will not join, all right, because...
A
C
Okay, that seems... that seems like that justifies it. And then the string cast versus format, is that... it's...
C
Cool, all right. Well, I mean, I think we can do it: the etcd race seems like it justifies it, because that's a bit of a pain to track down as well, I imagine, and then the OpenStack thing. So I think it justifies the release, but it's nice, it's good to have a release without a ton of scary stuff in it; that's a good thing.
C
What is that? No, it's just that the automation only goes to 1.19 and later. Okay, okay.