A: Let's see what's in the agenda first. We have test jobs for a PR from myself: currently our jobs are targeted at master, and we have to branch those into release-1.7/1.8, so we can get signal from the release branch as it eventually diverges from master, and so the release team has a clear signal for releasing alpha and beta 1, 2, 3.
A: Yeah, at least with this PR, once it passes the tests — it seems like I had a typo somewhere — we'll at least have the main signal for the kubeadm 1.8 upgrade.
A: Then we have the upgrade of bootstrap tokens, which went beta, when upgrading to 1.8. Basically there were some new features and resource renames between 1.7 and 1.8 — one of the new... actually two ClusterRoles changed name — so we have to handle that: delete the old RoleBinding or ClusterRoleBinding, since you can't change the role a binding points to.
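The rename handling described above can be sketched as a small decision function. Everything here is illustrative — the struct, names, and operations are stand-ins, not kubeadm's actual upgrade code — but it shows why a rename forces a delete-and-recreate: a binding's roleRef is immutable, so the API server rejects an in-place update.

```go
package main

import "fmt"

// Binding models only the fields of a Kubernetes (Cluster)RoleBinding that
// matter for the upgrade decision. Names below are illustrative.
type Binding struct {
	Name    string
	RoleRef string
}

// planBindingUpgrade returns the API operations an upgrader would issue when
// a referenced (Cluster)Role was renamed. Because roleRef is immutable, the
// old binding has to be deleted and a new one created.
func planBindingUpgrade(existing Binding, wantRole string) []string {
	if existing.RoleRef == wantRole {
		return nil // nothing to do
	}
	return []string{
		fmt.Sprintf("DELETE clusterrolebinding/%s", existing.Name),
		fmt.Sprintf("CREATE clusterrolebinding/%s -> roleRef %s", existing.Name, wantRole),
	}
}

func main() {
	old := Binding{Name: "kubeadm:example-binding", RoleRef: "old-role-name"}
	for _, op := range planBindingUpgrade(old, "new-role-name") {
		fmt.Println(op)
	}
}
```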
D: Is there a PR for the name change on bootstrap tokens for the release binaries, for the RPMs and stuff? Because don't you need this — or is this only on the API server, so it's just the API side?
A: Yeah, exactly. So these are RBAC things only, regarding bootstrap tokens. Also, Matt added a feature before code freeze that made it possible to allow custom groups, which is now interesting as well: each token will get the identity — the group — of system:bootstrappers-something. Let me see that.
B: Yes, I'm saying that instead of changing that constant, we should basically introduce a new one, so we can switch which one we use based on the version. I also just noticed that your issue for setting up tests for the 1.8 branch sets up a test for installing a 1.7 cluster on the 1.8 branch, no?
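The idea of keeping both constants and switching on the target version might look like this sketch. The group names and the version cutoff are assumptions for illustration, not the real kubeadm constants.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Illustrative constants: an old and a new name for a renamed identity.
const (
	groupPre18  = "system:bootstrappers"         // assumed pre-1.8 value
	groupPost18 = "system:bootstrappers:kubeadm" // assumed 1.8+ value
)

// minorOf parses the minor component out of a "vMAJOR.MINOR.PATCH" string,
// returning 0 when the string is malformed.
func minorOf(version string) int {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return 0
	}
	return minor
}

// bootstrapGroupFor keeps both constants around and picks one based on the
// target version, instead of mutating a single shared constant.
func bootstrapGroupFor(version string) string {
	if minorOf(version) >= 8 {
		return groupPost18
	}
	return groupPre18
}

func main() {
	fmt.Println(bootstrapGroupFor("v1.7.5"))
	fmt.Println(bootstrapGroupFor("v1.8.0"))
}
```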
A: Yeah, yeah — so it's for 1.8... oh well, now I looked at the wrong one, 401 — I've pasted it now. And yeah, I'm gonna do a quick little PR as well after the 1.8 branch is made.
A: So basically the concept changes. You can use 1.7 and it will work just fine with the old thing, but when you upgrade, it will add this system:bootstrappers group to the default node token; the old group stays on the secret.
A: He said he could take that one, and it's basically about that design doc I made for 1.7: it should be updated to 1.8, with all the new phases, that kind of thing. And we should also do some revisions of the main docs, like the kubeadm reference guide or something — I don't know, what do you think?
E: I could probably take at least part of this, because I was planning to do some docs related to the new CA pinning and the security properties of that, and then also the bootstrap token group change. Anyway, I can take a look at at least part of it; I don't know if I'm comfortable tackling the upgrade docs.
A: What's more — yeah, I created a document yesterday describing the process, how things are done in this SIG regarding kubeadm development: what we do the first month of the cycle, what we do the second month, what's important to think about before releasing, these kinds of things.
A: For example, one non-trivial — like, non-obvious — one is that before the 1.8 RC is cut, we should bump the default version. It's hard-coded to stable-1.7 right now, and that's the default used when you're executing kubeadm without telling it what branch to use; it looks that label up to resolve the actual version. In the future we could probably do some fancy things with looking it up in CI, but I don't know if that's worth it. It's a version bump every quarter, but it's something that has to be done.
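The label-to-version resolution described above could be sketched like this. The map stands in for the network lookup the real binary performs (resolving a label such as "stable-1.7" from a small published text file), and the concrete version numbers are placeholders.

```go
package main

import "fmt"

// publishedLabels fakes the remote lookup so the fallback logic is visible.
// Versions here are placeholders, not real published values.
var publishedLabels = map[string]string{
	"stable-1.7": "v1.7.5",
	"latest":     "v1.8.0-beta.1",
}

// The hard-coded default discussed above: bumped once per release cycle.
const defaultVersionLabel = "stable-1.7"

// resolveVersion turns a user-supplied version argument into a concrete
// semver tag. An empty argument falls back to the hard-coded default label;
// an already-concrete "vX.Y.Z" string passes through untouched.
func resolveVersion(arg string) (string, error) {
	if arg == "" {
		arg = defaultVersionLabel
	}
	if len(arg) > 1 && arg[0] == 'v' {
		return arg, nil // already a concrete version
	}
	if v, ok := publishedLabels[arg]; ok {
		return v, nil
	}
	return "", fmt.Errorf("unknown version label %q", arg)
}

func main() {
	v, _ := resolveVersion("")
	fmt.Println(v)
}
```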
B: You know, in GKE we always used to just say: okay, 1.4 is out, you're going to get new clusters at 1.4. We found that actually broke a lot of people, and so when 1.7 came out we said: you can launch 1.7, but you know, it's 1.7.0 — historically those have not been super stable — so we chose to launch, you know, 1.6.5 by default.
B: If you want 1.7, you can get it. At some point we may want to change kubeadm to behave more like that, where even the 1.8 version of kubeadm defaults to, you know, 1.7.5, but you can launch 1.8, and then at some point we flip which one is the default. We're probably not quite to the point where we're worried about that yet, but it's something to consider in the future: we shouldn't just assume that we'll always take, you know, the latest from the branch from CI.
D: That's the first time they actually get a consumable thing, but it'd be ideal if we could treat the dot-zero almost like a beta test — actually get real beta testing from people who would be willing to take the release candidates, right? Because the problem with dot-zero is that no one ever installs it, because they're afraid, right. So if we had a process around that — some kind of virtuous cycle for somebody who wants to be the guinea pig — that would be helpful, I think.
A: This is partly there already: now we have the upgrade command, where I explicitly added a flag to allow release candidates or something, based on Robert's comment on the proposal when we reviewed it some time ago, and I think that definitely makes sense. And with self-hosting in the future — I mean, for e2e testing or something — it would be really cool to just spin up a cluster once and then, inside of your cluster, you just do upgrades all the time, or something like that.
A: Exactly. And for the beta-tester process, I'd love to see something like a community effort — something like a badge: I'm a beta tester and I'm signed up on this mailing list or whatever — and then there's some coordinated program, like: this week we're gonna do beta testing, and everyone that has done it and provided some feedback gets, I don't know, a shout-out on Twitter or whatever. I mean, that's something for someone to coordinate — probably our SIG PM.
A: Yeah, anyway, that's going to improve over time. And as we said about self-hosting: when we have self-hosting, it will be far, far easier to do these automated upgrades, since we don't even have to assume some kind of infra — we just talk to the Kubernetes API and say: upgrade yourself, and it can do that.
F: In Bootkube right now we only turn on client certs for the kubelet API; we don't turn on kubelet authorization. If we do turn it on, then it's just a question of how the pod checkpointer should behave, because essentially it can't contact the kubelet API without there being an API server running, and it needs to make checkpointing decisions based on determining the local state — you know, what is and what is not running.
F: So the easiest option would just be, if we were to turn on the kubelet API authorization: if the pod checkpointer can't reach the kubelet API, it assumes something's wrong and it should start the checkpoints. I think that's relatively safe in a couple of cases, one of which is: if the only thing that we're checkpointing is the API server, I don't think it's that crazy to just try and start the API server.
F: That's fairly easy to reason about. If there's a port conflict, it's not that big a deal — at some point the other API server will start responding, and the checkpointer can just remove the checkpoint. The other case would be if the kubelet API actually gives a response of authorization failed, or times out; we could key off of that and then say: okay, it's safe to start — or rather, we should just start the checkpoint, because we can't determine local state.
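The checkpointer policy being debated can be summarized as a tiny decision table. This is a sketch of the options discussed, not Bootkube's actual pod checkpointer logic; the probe classification and the policy are assumptions.

```go
package main

import "fmt"

// probeResult classifies what a request to the kubelet API came back with.
type probeResult int

const (
	probeOK           probeResult = iota // kubelet API answered normally
	probeUnreachable                     // connection refused / no route
	probeUnauthorized                    // authorization enabled and we were rejected
	probeTimeout                         // request timed out
)

// shouldStartCheckpoints returns true when the checkpointer cannot determine
// local state through the kubelet API and should fall back to activating its
// checkpointed pods (e.g. a checkpointed API server).
func shouldStartCheckpoints(r probeResult) bool {
	switch r {
	case probeOK:
		return false // local state is observable; no fallback needed
	case probeUnreachable, probeUnauthorized, probeTimeout:
		// Can't tell what's running; assume the worst and start checkpoints.
		return true
	}
	return true
}

func main() {
	fmt.Println(shouldStartCheckpoints(probeUnreachable))
}
```

Starting checkpoints on an authorization failure is the "relatively safe" choice discussed above: at worst it causes a transient port conflict with a live API server.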
F: That's one option, and it would be the easiest. The other option is that we try to determine the local state not through the kubelet API, but maybe by reaching out to the CRI endpoints directly — and I'm not even sure if they're exposed right now — or by reaching out to the runtime directly, to just ask what Docker containers are running.
F: Ideally, I would say that the checkpointer should be able to operate only off of local state: it would be able to determine its local state and then checkpoint or not. Right now the checkpointer only reaches out to the API server for garbage-collection decisions: if it can reach the API server and it sees that pods are no longer scheduled to this node, then it will make a garbage-collection decision.
F: The other option — maybe this wouldn't work for Bootkube, but we could change the checkpointer for kubeadm — would be, and this is something Tim, I think, had looked into a little bit, just asking etcd directly. If the checkpointer is going to be co-located with etcd, or at least have access to etcd, it could just look at etcd for that information as well.
F: I don't think it's a good general behavior, because technically you could checkpoint whatever you wanted; but with the API server, I feel like that's not that risky. It would essentially just be a port conflict, and if it's a port conflict, then some API server — the checkpointed copy or a real copy — is going to start responding at some point, and then it'll just act normally: it'll be able to determine the local state.
F: It'll determine whether that checkpointed copy of the API server should be running or not, and it'll clean it up. Look, it's just that if people reuse this more generally, that's not the greatest behavior, because we can't reason about it. So we could — I mean, it could be something as dumb as adding a flag that says: if we can't contact the kubelet, then start the checkpoints, or change the behavior. I'm kind of open here; it's just that none of it is ideal.
A: Well, what about just adding that kind of behavior, so that it unblocks you from enabling TLS bootstrapping in full — like, also adding authorization on the kubelet API, right? If you did that small change behind a flag or whatever, then we could write in the reference docs: well, if you have checkpointing and you reboot, it's gonna burn to the ground; if you don't want that to happen — and this is the alpha — you can apply this manifest.
A: Well, we don't bind that — well, you'd have to use... do you have —
A: Well, yeah, we had some security discussions — I think Robert pointed out that localhost is maybe not the best practice, as you could do a lot of spoofing and man-in-the-middle things there. But I mean, in this case it's probably fine; it's more the general case, that a cert shouldn't have holes in it. But yeah, I think it's fine. What we're probably gonna do when we do HA next cycle is probably some kind of virtual IP, but yeah, we'll see.
C: I don't really have much right now that I need to push you guys on, but I know there's some pressure upstream on Kubernetes core to allow external cert issuers. So Kubespray supports Vault as an issuer of certs, but it's really just, you know, making API calls to Vault directly.
A: It's a reconciler of the internal advertise IP. So in the case where you have: this master is accessible on 1.2.3.4, it's gonna add that to the kubernetes internal Service. I don't know how you'd handle the kubernetes internal Service — it's never gonna delete the thing; it's gonna stay there forever. If your API server goes down, you have two IPs, one of them is down, and 50% of the traffic is gonna go into a black hole.
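A reconciler of the kind described — one where endpoints for dead API servers actually get removed — might look like this sketch. It is illustrative only, not the upstream endpoint reconciler; the never-delete behavior A describes is exactly the failure mode this avoids.

```go
package main

import (
	"fmt"
	"sort"
)

// reconcileEndpoints computes what the "kubernetes" Service endpoints should
// be, given which API server IPs currently pass a health check. A reconciler
// that only ever adds its own IP and never removes dead ones produces the
// "50% into a black hole" behavior described above.
func reconcileEndpoints(healthy map[string]bool) []string {
	next := make([]string, 0, len(healthy))
	for ip, ok := range healthy {
		if ok {
			next = append(next, ip) // keep (or add) every healthy master
		}
	}
	sort.Strings(next) // deterministic ordering for the Endpoints object
	return next        // unhealthy masters are simply dropped
}

func main() {
	healthy := map[string]bool{"1.2.3.4": true, "1.2.3.5": false}
	fmt.Println(reconcileEndpoints(healthy))
}
```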
D: I still point everything to the minion load balancer and just let it deal with the issues. I saw that code a while ago, and it was basically the internal service resolution, right? So if you're using the internal kubernetes Service, you want to be able to load-balance across the endpoints that are there. And I'm trying to remember — there was some issue with this; I'll have to come back to it. But this is part of an HA conversation.
A: Yes, here — I've pasted the link now. So I think — I guessed — that's what you're thinking about: the reconciler doesn't work the way we expect it to work, but this is gonna be fixed upstream, I don't know. I think this is reasonable. I mean, whatever kind of node-level thing we have that's load-balancing the kubelets' requests or sync requests, right — the service is gonna be updated in some way.
A: Like Matt just said, you'd have to go and update every static pod. But for us not to do that, we could use the endpoints of the kubernetes Service as a source of truth, with a reconciler that basically makes every API server agree by itself: is my apiserver healthy or not?
A: That would be nice. Hey, I played with it yesterday and, well, it's far from pretty, but after a lot of wrangling around I could get it working with a hundred-something-line bash script. It's a while loop: it does kubeadm init, grabs the build log, then applies Weave, runs the conformance binary inside of the cluster — runs whatever tests I want — and then I output the JUnit file there and basically push everything to GCS. And then I had to do —
A: Let me see if I find the PR — a really small PR that just added it to Testgrid, like: look at just this GCS bucket, and my results are showing up. So I mean, that's really cool; we should definitely do this for a lot of environments. As we said yesterday, when 1.7 was released —
A: We immediately got a lot of user feedback, like: this isn't working on AWS. And, oh okay — it turned out that the AWS cloud provider makes the hostname and the node API object name mismatch, and that breaks things. We would have caught such things with federated testing in this way. So there's —
D: We've only got a couple of minutes left, and I want to make sure we at least talk about this in this group, because we have the right people here. We had planned to update to etcd 3.1.10 this release cycle; there's a number of issues that were fixed. The client was updated before, but for whatever reason the actual PR to update all the manifests and images was not merged, and I think it was blocking on an image push from a Googler.
D: I know it's late in the cycle, but I don't exactly know what our plan here is right now, because we're so late. I don't think it's gonna affect — there's a minor... it could affect some behavioral things. We could get test runs under it, but this was LGTM'd — I think it was LGTM'd — but it didn't have the official —
D: — you know, all-the-way-through-the-approval-process sign-off, because it was waiting on people who owned the kube-up stuff, right. So I don't know what's going on here, but —
A: I think we can use kubeadm as kind of a guinea pig here. Oh — like, sure, can we add a flag like etcd-version or something? That could also be a way. I think an etcd-version flag would make sense; it's kind of nice, I think.
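The proposed flag could be as simple as this sketch. The flag name, the default etcd version, and the image repository are all assumptions for illustration — this is not kubeadm's actual flag handling.

```go
package main

import (
	"flag"
	"fmt"
)

// Assumed built-in default; a real binary would pin whatever etcd version it
// was released and tested with.
const defaultEtcdVersion = "3.0.17"

// etcdImage builds the image reference for the requested etcd version,
// using the gcr.io repository common in this era as an example.
func etcdImage(version string) string {
	return "gcr.io/google_containers/etcd-amd64:" + version
}

func main() {
	fs := flag.NewFlagSet("kubeadm-sketch", flag.ExitOnError)
	etcdVersion := fs.String("etcd-version", defaultEtcdVersion,
		"etcd version to deploy for the control plane")
	// In a real binary this would be fs.Parse(os.Args[1:]).
	fs.Parse([]string{"--etcd-version=3.1.10"})
	fmt.Println(etcdImage(*etcdVersion))
}
```

A flag like this would let testers pin a newer etcd (e.g. 3.1.10) for a single run without changing the built-in default.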
A: How easy is it to do some heavier testing, more than conformance, Robert? Could we do this now, and turn on more than the conformance tests in the kubeadm e2e runs as well? I mean, not the upgrade ones, because we don't upgrade etcd — just on the kubeadm defaults.
B: Good question. I think the SIG API Machinery folks are driving most of that. I know that Wojciech was driving that from sig-scale's point of view, to try and get us past the 2,000-node limit a few releases back, but I think it's mostly transitioned over to the API Machinery folks, so Joe, who just commented on that issue, would be the right person to ask about that.
D: No, it's pretty much a stabilization bump. As long as you bump by minors there, they guarantee backwards compatibility, and they do a fair amount of regression testing — although we have found issues, because Kubernetes is kind of a unique beast; it exercises etcd in a very interesting way. So, you know, I don't know — what version are the Bootkube folks at?
D: The client that is built into the Kubernetes API server right now is currently 3.1.10 — it's tested at head, and at head it's 3.1.10 — and you're running a back end of 3-point-whatever, right. That combination is supposed to be tested, but there's been no cycles on it, so you're bleeding edge there, and —