From YouTube: Kubernetes kops office hours 20200605
Description
Recording of the kops office hours meeting held on 20200605
A
Hello, everybody. Today is Friday, June 5th, 2020. This is the kOps office hours. I am your moderator/facilitator, Justin Santa Barbara; I work at Google. A reminder: this meeting is being recorded and will be put on the internet, so be mindful of our Kubernetes code of conduct, which boils down to being a good person. We don't have a ton on our agenda, but we do have a bunch of things.
A
Please do feel free to add things to the agenda if you would like to be sure to get to them. I'll place the link right there in chat if anyone wants to, and you can also add your name to the attending list, which is helpful for people matching faces to names later, particularly when watching the video.
A
Otherwise, I suggest we jump right into the first item on the agenda, which is hakman talking about adopting GitHub Actions.
B
Yep, it's something I was discussing with Peter. He already has a few PRs to try things out, which helped a lot with setting up GitHub Actions, but we would like to make them blocking and remove the Travis jobs for anything other than ARM.
B
Yeah, I noticed that the kubernetes-dashboard people started adding Travis CI jobs, and they have an automated bot that runs the job. When it runs, it runs four jobs or something in a row, an hour and a half each, so we're pretty much blocked on PRs and everything if they start before us.
A
They're not? All right, that's disappointing!
F
For it, I think that Travis wasn't testing on Windows, just on macOS. Sorry.
A
Yeah, so is it possible to stop running the Travis jobs and start running the GitHub Actions jobs, but not make them blocking yet? Is that cool? Okay. And so the question is: are there older PRs? We can do a pass, maybe, and see how they're doing. Yeah.
G
Okay, you can actually also do that in Prow. They have a job that blocks the sinker; I'll try to find and drop a link. We can add specific things, and that might be the way to do it when you aren't a repo admin, since it automates that part.
A
Oh, put them in the chat, and I will... if you don't mind putting them in the chat, that would be wonderful, and I will take your Prow link.
C
Yeah, the only one I think that's mergeable is this one, but we'll see.
A
Chat, or the doc, yeah.
D
I'll do that, thank you. I was looking at the Prow settings where you can specify additional contexts to block on, and GitHub Actions are not part of their contexts API or Status API; they're totally different. So I think that Prow is not aware of the status of GitHub Actions jobs. Right, because they're the other type. Yeah, yeah.
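For context, the Prow setting being discussed is the branch-protection required-status-contexts list. A minimal sketch follows; the org/repo layout matches Prow's config schema, while the repo and context names here are only illustrative:

```yaml
branch-protection:
  orgs:
    kubernetes:
      repos:
        kops:
          branches:
            master:
              protect: true
              required_status_checks:
                contexts:
                  # Only commit *statuses* can be listed here. GitHub
                  # Actions report through the separate Checks API,
                  # which is why Prow cannot see them.
                  - "continuous-integration/travis-ci/pr"
```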
A
I mean, to be honest, yeah, we don't expect GitHub Actions or Travis to catch many things; most of the time it should be a flake, touch wood. Or, like, an OS dependency in kOps, in the kOps binary, which is less likely, maybe, but yeah, it's nice.
A
Let me put that on the... maybe I'll inject that into the discussion then, yeah. I...
A
I'm trying to get... there was a query, I think, two weeks ago; Peter asked about automation around release automation, and so I'm trying to get... We have a slightly complicated process by which we go through e2e. Basically, on the master branch, whenever code is merged, it triggers a post-submit job, which drops a label in Google Cloud Source...
A
Google Cloud Storage, rather: called latest-ci.txt, which points to a version. There's a periodic job, which runs every hour, that looks at latest-ci.txt, tries to bring a cluster up and down, and then writes latest-ci-updown-green.txt. There's another periodic job, which runs every hour, that looks at latest-ci-updown-green.txt, tries to bring a cluster up, run the e2e tests, and bring it down, and then writes latest-ci-green...
A
...dot txt, I think. Anyway, I'm not sure that we need all these steps, but I'm trying to get the same set of steps running from the release branches, using the staging builds, which are the builds that go into the staging buckets. The idea being that those staging builds we can then actually promote, and we won't have multiple sources of truth: the builds that we test will be exactly the same builds that we promote.
A
I don't know how we're going to do versions, but everything should hopefully be clearer. And I'm also looking at whether we can optimize: do we really need those three, at least three, different stages, or can we at least drop one of them? I'm not sure what up-down is for, to be honest. And yes, then we will have more coverage, more periodic coverage.
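The promotion chain described above can be sketched roughly as follows. The marker-file names match the ones mentioned; the `markers` dict stands in for the GCS bucket, and the cluster/e2e helpers are stubs, not the real kOps jobs:

```python
# Rough sketch of the marker-file promotion chain described above.
# `markers` stands in for the GCS bucket; the helpers are stubs.

markers = {}

def cluster_up_down(version):
    # Stub: bring a cluster up and tear it down with this build.
    return True

def run_e2e(version):
    # Stub: run the full e2e suite against this build.
    return True

def postsubmit(version):
    # Post-submit job: every merge to master publishes a candidate marker.
    markers["latest-ci.txt"] = version

def hourly_updown():
    # Periodic job: promote the candidate if up/down succeeds.
    candidate = markers.get("latest-ci.txt")
    if candidate and cluster_up_down(candidate):
        markers["latest-ci-updown-green.txt"] = candidate

def hourly_e2e():
    # Periodic job: promote further if the e2e run also succeeds.
    candidate = markers.get("latest-ci-updown-green.txt")
    if candidate and run_e2e(candidate):
        markers["latest-ci-green.txt"] = candidate
```

Each stage only reads the marker the previous stage wrote, which is what makes the tested build and the promoted build the same artifact.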
A
TMI? All right, good information; a little bit of overload, perhaps.
A
I don't know if there's any... oh, so, hakman, you said that we're going to keep Travis for arm64, because I presume GitHub Actions, slash Azure, doesn't yet support arm64. Is that the thing?
A
I have no inside information, so I cannot comment, but we can probably bug the Microsoft people about when they're going to start supporting it. I was impressed they supported macOS; I thought that was pretty cool. So I don't know... yet they do support...
F
They do support arm64 for self-hosted runners, so they're making the software available; they just don't have the GitHub-hosted runners available.
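As a sketch, a job pinned to a self-hosted arm64 runner would look roughly like this. The workflow name and step contents are illustrative; the `runs-on` labels are the standard self-hosted-runner routing labels:

```yaml
name: arm64-tests
on: [push, pull_request]
jobs:
  test-arm64:
    # Routes the job to a self-hosted runner registered with these labels.
    runs-on: [self-hosted, linux, ARM64]
    steps:
      - uses: actions/checkout@v2
      - run: make test
```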
A
We could run our own runners on AWS, I guess, but anyway, let's keep it simple for now. And yeah, I think we're okay for now with the Travis CI.
A
Okay, all right. So thank you for that, and yeah, it's super exciting to see that arm64 work as well, like some of the new stuff on Graviton; I don't know, the Graviton2 benchmarks are super impressive, so it'll be good to be able to use those, or at least try them out.
B
It should be available pretty quickly. At Mike's suggestion I split the PR into smaller ones, so pretty much all the prerequisites are in, with Peter's help, and now I'm working on rewriting the main thing based on the latest master, because things have changed since a month ago. Thank you; no problem, just tested it.
A
And it works great; that's great. You also have the next item, which is around a security advisory, around IPv4 and IPv6 confusion, it sounds like.
B
And Kubernetes did a release with CNI binaries 0.8.6 and applied it to 1.16+. I already made the PR for 1.18 and 1.19, and wanted to know if I should backport it also to 1.17 and 1.16.
B
I did it for 1.15+ because it was easier. Perfect. But yeah, it depends on what we want to release with this. So I can backport it even further; not sure if 1.16 makes sense, but 1.17, I guess we will have other releases, so...
A
I feel like we just released kOps 1.17, and I don't feel it's fair forcing people to upgrade because of a security issue. So if we can get it into kOps 1.16, I would certainly support another bump of that, just because we released 1.17 literally days ago. I mean, I can't remember when it was, but well...
A
I mean, technically our policy is: if we were a month down the road, we wouldn't put it into kOps 1.16, because people should be running 1.17.
B
I think it was two weeks since we started doing the last one.
B
I would like one, but anyway. So the other things are Calico, Weave, and Flannel, which need to be updated. I did the update for Flannel... for, sorry, for Calico, and I think I can do it for Weave, but Flannel doesn't have anything yet; they will probably be very late with their updates.
C
It's rogue router advertisements: the attacker says, "by the way, I'm an IPv6 router," and then most OSes will say, "okay, fine."
C
"Give my IPv6 traffic to you." And then you spoof DNS to say, "oh, by the way, this nice target has this additional address."
A
I was more just wondering: I know that if I don't have an IPv6 address, I find it very difficult to reach other IPv6 addresses. So I was wondering, if we don't give anyone IPv6 addresses, does this still happen? That's just...
A
Okay. I'm very wary of backporting Docker too far, like changing the Docker version on the older releases by default. Let's just do...
A
Yes, the third... yes, the patch, yes. But the bigger bumps: it sounded like even in 1.17 it was a minor upgrade, is that right or not? No, in 1.16 it would be... so in 1.17 it's a patch upgrade, and in 1.16 it's a minor upgrade. Is that right?
B
From 1.16 and 1.17 upwards we have Docker 19.03: 19.03.4, 19.03.8, whatever it is right now. So, basically, I'd say I would make it the default for 1.17, also because it's a patch release, and I would add it as an option in 1.16, as we did with the 0.8 version, so if anyone wants to enable it, they can do it manually.
A
That is... I agree entirely; I think that's certainly right. The only thing we might additionally want to do is evaluate whether it's bad enough to bump 1.16. I don't believe it is right now, but we can make that decision later, when we have support in there; manual support is a great first step.
B
We said that we would decide this week. Voting on GitHub went pretty well in favor of Ubuntu.
A
All right, well, there's another... I have a Buster image now, but there are gotchas there; I discovered another gotcha, so I am more inclined to also support Ubuntu. I might still build the Buster image, but it sounds like the overwhelming majority of people prefer Ubuntu. For people's information, the Buster challenge is that cloud-init, or rather the version of cloud-init in Buster, doesn't support IMDSv2, the Instance Metadata Service v2, the more secure EC2 instance metadata service.
A
So that's a little frustrating. I was able to backport the one from, I don't know what it is, Sid maybe, but yes, it was another gotcha with Buster that was sort of frustrating. So we should... all right, let's have five years of Ubuntu then, I guess, right? Yay. Okay, so, everybody...
A
That's absolutely fair, yes. I was just wondering, specifically for the thing you were about to talk about, Docker on startup, whether we need to build a custom image to do that.
B
I think I would prefer it not installed, and to just keep it in the cache.
A
Okay, well, look. I know there's been discussion in the past about the nftables/iptables problem, and that's what I'm wondering is going to hit us. But what I suggest is: why don't we just put Ubuntu 20.04 in, certainly in 1.19, like the master, which I think... I don't know if we've actually bumped that, I don't know if anyone got around to bumping it, but probably in 1.18 then. Yes? No? I don't know. At least in the channel, and then we can see.
A
Okay, well, let's put it in there and see. If we're going to make that change, let's do it now, so we can see.
A
Peter: DescribeInstanceTypes.
D
Yeah, so I have a PR open that'll replace some of our hard-coded instance type info with a call to ec2.DescribeInstanceTypes. I think it's mostly working. It's used in a couple of places: partly for API validation, but also in nodeup, for the VPC CNI provider, getting the max number of ENIs and IPs in order to set kubelet's max-pods limit.
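The ENI math being described is the usual AWS VPC CNI max-pods formula. A sketch, assuming nothing about the PR itself; the `NetworkInfo` field names are the ones DescribeInstanceTypes returns, and the helper function is hypothetical:

```python
def max_pods(max_enis: int, ipv4_per_eni: int) -> int:
    """AWS VPC CNI max-pods formula.

    Each ENI's primary IP is reserved for the ENI itself, and two
    slots are added back for host-network pods such as kube-proxy
    and aws-node.
    """
    return max_enis * (ipv4_per_eni - 1) + 2

# Example values as DescribeInstanceTypes reports them for m5.large:
# NetworkInfo.MaximumNetworkInterfaces = 3
# NetworkInfo.Ipv4AddressesPerInterface = 10
print(max_pods(3, 10))  # 29, the familiar m5.large pod limit
```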
D
So I think it's ready for review. My only concern is that adding that additional API call at startup in nodeup could possibly lead to throttling or rate limiting in large clusters, and given that there's kind of no way to opt out of this change if you're using the VPC CNI, I was just wondering if anyone had any concerns there.
I
We are actually suffering a little from that. We have four large clusters in the same AWS region, and I know that with the 1.6.0 AWS CNI version we were bitten pretty hard; I mean, our nodes would not come online for like 30 minutes because of the rate limiting. I'm not sure if there's a way around it, or if it's something that we have to resolve with AWS, like figuring out whether we can increase our limits, but yeah, that's definitely a very good point to consider.
D
I think one alternative we could do is have the kOps CLI make that call for each instance type in the instance group, and then, when it's doing an update cluster command, it could save the resulting information in the cluster state store somewhere, and then nodeup would get that information at startup rather than making the API call. But that would be quite a bit more complex.
I
Yeah, I saw that comment. I don't know... maybe, I guess, it'd be worth at least, in the documentation... sorry, my Slack blew up... maybe in the documentation just putting some sort of note there: "hey, heads up, if you have a large cluster, you might get throttled because of that," or something of that nature. I don't know.
A
The throttling that you saw: describe is normally not the bottleneck. I mean, we have done things in the past that have made it the bottleneck, but the describe calls tend to have very, very high limits, or relatively high limits. I don't know if you know: was it actually describe, or was it one of the other calls that was problematic?
I
Because my Slack is just blowing up right now... So I've noticed that, with DescribeInstances, when we run kops update cluster every now and then, we would see that we are being throttled by ec2.DescribeInstances. And the one that we had with the AWS CNI, I'm not sure, I don't remember if that was that call. Peter, if you remember the fix between 1.6.0 and 1.6.1, where they actually reduced the API calls, whether that had anything to do with that.
H
I actually sort of remember hitting this as well. We have our own CNI, but we kind of hit that limit as well; we had to implement something that caches, because, like you, we have very large clusters. So I know AWS tried to implement that in their CNI too, and they got pushback, so yeah.
A
There are other places we can put this. One of the things we could do is list all the instance types and just put them in a big JSON blob, right, so we don't have to be too smart about it. We could put it in the API, so we could put that in the state store in S3, or we could put it in the Kubernetes API. That's something we could do as, I don't know, a collaboration with AWS.
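The "big JSON blob" idea could be sketched like this: page through DescribeInstanceTypes once, at build or release time, and ship the result as a static file for nodeup to read instead of calling the API per booted instance. The function and file names are hypothetical; the paginated call is the standard EC2 one, and taking the client as a parameter keeps the sketch testable:

```python
import json

def dump_instance_types(ec2_client, path):
    # Page through ec2.DescribeInstanceTypes and write one JSON blob
    # that nodeup could read instead of calling the API at startup.
    types = []
    paginator = ec2_client.get_paginator("describe_instance_types")
    for page in paginator.paginate():
        types.extend(page["InstanceTypes"])
    with open(path, "w") as f:
        json.dump(types, f)
```

In real use, `ec2_client` would be a boto3 EC2 client; the blob would then be published alongside the release artifacts or into the state store.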
A
We could create a machine... an AWS instance-type CRD, I guess, and put it in there, and that would make it an API call. Except I don't know whether that would work for nodeup, because I presume this is very early in the bring-up. The other place we could put it is kops-controller; at some point, I think, we're eventually going to have either a REST or gRPC interface there.
K
Yeah, I think the cluster autoscaler certainly falls back to a hard-coded list if it fails to describe anything, so that sounds like a reasonable fallback.
A
One of the long-term things I want to do is get more of the logic for bringing up a node into kops-controller, so that we can... it would avoid a lot of this stuff, and I think when we eventually get to Cluster API, it will work nicely there.
A
It would avoid having to write the S3-bucket-type thing; there's less need for the nodes to access the S3 bucket or the GCS bucket, and the GCS bucket has different semantics around permissions, which is what I'm currently grappling with. So it may be useful, from a whole bunch of perspectives, to have a little bit more logic in kops-controller.
G
Peter, do you feel like, in its current state, you're worried that this PR could be throttled, right?
D
I mean, at a certain cluster size, I think it might, but...
G
That one is always throttled, because we have so much in it. So I'll try to make some time to test this there; anytime we do cluster changes there, you know, all of our API calls time out, and it takes hours to apply. So I'll see if I can do some testing there to kind of help you out. I don't want to get into why that exists, but it's not production.
A
It's occurred to me that one of the saving graces here might be that, because the call is one per instance booted, AWS would presumably be very amenable to raising limits, because, you know, you're paying money for each one of these API... like, a lot.
A
An amount of money for each one of these API calls, unlike, you know, in the past in Kubernetes, where we've had aggressive polling of DescribeVolumes with no corresponding money changing hands.
A
They should want you to pay for that instance, exactly; they're gonna... All right, so I think we have a plan of action and some reasonable fallbacks if we do encounter problems.
A
All right, what is next on the list? Next is John, talking about an unversioned API in the state store.
C
Yeah, so I've been looking at a project to move to a v1beta1 API, but I've been falling down the rabbit hole. The full version of the cluster spec is written with the unversioned API, and that needs to be changed to be the versioned API, according to API machinery rules, so that we can actually make changes to the unversioned API between releases.
C
So that means we would have to make the change in 1.19, and then 1.20 would be when we could actually create a new API version.
C
The other issue I've been running up against is that both the cluster and instance-group specs that are used by nodeup and kops-controller and whatnot are updated when you edit the cluster, before you do the update cluster. Yes. So we have some data which gets updated immediately, and then some data which doesn't get updated until update cluster. Yes. Which seems to be contrary to what we're trying to do.
A
It is, yes; we should fix that. Thank you for finding it and reminding me of it. I think this goes back to... so, one of the mechanisms we could use is to write it in more locations, and that could be, like, a full-cluster v1 and a full-cluster v1beta, you know, like the one...
C
Yeah, all right. So what I'm doing is: I'm still writing the old unversioned one, but I'm never reading it, and then I'm writing a new one, which is versioned.
C
Yeah, but for the instance-group specs, we were writing them versioned but reading and deserializing them as if they were unversioned. Okay, yeah, yep. And then, if we're going to want to update that at the appropriate time, we're also going to need a second copy of all the instance-group specs.
A
Yes, you mean the problem of uncoordinated changes happening at random times. Yes.
A
So that's where I'm wondering whether, when we do this, we can write it into a... not a version, but a per... a directory with a timestamp or something like that, and we basically avoid overwriting.
C
Well, I'm thinking as well: the other thing is, when you get to Terraform, you have the issue of the time between when you do the update cluster and when you actually execute the plan. And I was thinking it might make sense to put it into the user data, except for the fact that kops-controller needs it in order to label the nodes.
A
Yes. The other plan, which is potentially a longer-term one we could pull in, is that we have nodeup on the nodes source it from kops-controller instead of talking to S3. It doesn't solve everything, but it might help.
A
Yes, yes. We have, I think, some heuristics on instance templates where we delete after three or something, or five, but we can do something like that. They are so small, I would argue we don't need to delete them aggressively, but we can stop there being more than, like... 100 would be too many. Let's agree 100 is too many, so we say: delete after 100. Yeah.
A
Which API version do you mean, or which date? Yes. Well, I mean, we would then have to be much more disciplined, I think, about making sure that we match correctly. Yeah, so there we'd probably have to tag... well, we probably tag them anyway, but we'd have to tag the instances to associate them with a particular version, which I think is probably a good idea from the perspective of knowing exactly what is running at any one time.
A
Great. There is something in the launch template anyway; there's some similar logic on launch templates versus launch configurations, I can't remember which one it is today, but there's something along those lines anyway, so we may be able to share that logic.
A
I think you have the next and final topic, John: a port conflict.
A
Yeah, and I can do the other... I mean, I know exactly what's required. I can do this; it can be one PR, that's pretty easy. Okay, yeah, I'm just trying to think about backporting and stuff; maybe I'll do just two PRs.
A
It's not even available? Okay, that's good. Okay, cool, yeah. There is an idea that we could expose that port and start using it in... API server, sorry, in ELB health checks. There's been a long-standing issue saying those health checks could be more intelligent; they are currently at the TCP level.
A
Thank you for bringing that up. That is the end of our agenda, other than the release plan. Are there any other topics before we jump into the release plan?
A
I got it right, how about that. All right. And yes, I'm working a little bit on trying to get those release branches under more testing, as we talked about; I think that was one of the sticking points. But there was the security issue that was raised, so that's one of the reasons we pushed out all the releases. I'd encourage people to look at... well, it looks like we have another one coming as well, but yes, so we've got some of those.
A
Stretch, yes, I think it's Stretch, yes. And then, I don't know if it goes on the release plan: for 1.18, we're going to put Ubuntu 20.04 into the channels, to start using it for 1.18 and above.
A
Okay, yes, you definitely get the credit for driving that; thank you for that. Other than that, I don't know if we have any particular items on the release plan. I know Eric and I are looking a little bit at GCE testing, trying to reboot that after our little snafu with Workload Identity or something, but we are working on that; I don't think there will be a release impact per se.
A
Okay, so yeah, I guess we're going to... we are going to move the master branch. If no one has done it yet, I'm going to move the master branch to 1.19. So it will be... I don't know if there's any reason to cut a 1.19 alpha.1 tag.
A
Well, that is a good prospect; I like that, if you are on arm64, yeah. Okay, sounds like it's easier for people to test. Yes, that sounds great. We could also try... so I think we have the staging bucket now, automatically built by the post-submit, and if you set your KOPS_BASE_URL to that, you can download kOps from that staging bucket, set your KOPS_BASE_URL to it, and it just works. So that's pretty nice, in my opinion. Yep, yep, very nice, very nice.
A
Yes, and I have some scripts coming, which will also make it easier. I'm sure everyone has their own script, but I imagine we can put them in; I have a build-aws and a build-gce which just do the whole push for you, and that's...
A
I don't know if there's anything else for our group. I see the PR; thank you. Well, easier to do it right now, exactly. Oh, do we need to add arm64 as a separate thing in there? That's going to be interesting.