B
Yes, the other update was the same as yesterday. I need to get my PR refreshed. I have it mostly done; I need to go through and do some testing for the checkpointing stuff. I talked with Yuju, and it implements her version of it, and it's a lot simpler, so I'm going to get that pushed as soon as I can. I have some other things on my plate that are higher priority at the moment, so I'll finish those things first.
A
Cool. And speaking of finishing tasks, I'm also pushing the upgrade doc — I'll be helping you on that later. Yeah, I mean, well, in the new upgrade proposal I wrote that we could do — actually, I'm splitting the current upgrades proposal into two proposals: one for self-hosting generically and one for upgrades themselves. So basically, when not doing the surging-daemon-set strategy, we have two options we could discuss, and which one of them we think is safer.
A
So the current way is we duplicate the current daemon set and have that temporarily there as a fallback for when the actual API server pod is removed. The benefit there is that we could technically do it remotely, right? So I can execute kubeadm on my laptop against my GCE cluster that's running kubeadm. But the problem is, it's kind of fragile — at least that is my interpretation — because we're relying basically on the API server crash-looping without failing for a certain period of time: it tries to bind the port and stays up for a minute or so without actually exiting with one. So, yeah, I don't know. The other option is to downgrade it temporarily to a static pod, or let the static pod be the fallback thing. The disadvantage there is that you have to be on the node.
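(A minimal client-go sketch of that first, remote-friendly option — duplicating the apiserver DaemonSet as a temporary fallback. The object names and the kubeconfig path are illustrative, not what kubeadm actually uses.)

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Runs from a laptop: only API access is needed, no SSH to the master.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Fetch the self-hosted apiserver DaemonSet (name is hypothetical).
	ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "self-hosted-kube-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Duplicate it under a temporary name. The copy's pods crash-loop while
	// the original holds the secure port, but they take over serving as soon
	// as the original pod is removed during the upgrade.
	fallback := ds.DeepCopy()
	fallback.ObjectMeta = metav1.ObjectMeta{
		Name:      ds.Name + "-fallback",
		Namespace: ds.Namespace,
	}
	if _, err := cs.AppsV1().DaemonSets(fallback.Namespace).Create(ctx, fallback, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// ...then upgrade the original DaemonSet and, once it is healthy again,
	// delete the fallback copy.
}
```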
B
So, just for clarification for those who are listening and trying to understand it: this only affects single-node configurations. Yeah, it does not affect high availability. So if you're in high-availability mode, you can just roll away and you should be fine; it only affects the single one. The surge strategy exists because, if you had a single API server, you'd want the new pod to be coming up before the old pod is torn down. The problem is if we did a rolling upgrade with a—
B
It's like a chicken-and-egg problem, right? You need the API server to do the upgrade, and now it's, like, gone. So it's a weird scenario; it's an edge case. Well, it's not an edge case — it's a use case that exists, but it has corners that you can bonk your head on.
A
Yeah, so, well, what I implemented was the remote-friendly option that copies the daemon sets and makes it all executable remotely. The bad thing with this approach is that in case something fails, there's no rollback possibility — well, kind of — so it's kind of scary at the same time to upgrade. Well, in case something fails, you're on your own, right? So that's the current situation, and that's why it is alpha, right? Because, I mean, if we kill the API server and somehow it doesn't come up—
A
The main — well, what we could do is basically have the static pods be used as disaster recovery, right? So we'd do the normal first thing — we're not creating a single master with self-hosting — we duplicate the current 1.7 API server manifest as a static pod. The static-pod API server starts crash-looping because the self-hosted one is holding the port.
A
Then we upgrade the real API server, which removes the self-hosted API server pod and adds the new one, right? In that small time frame, the controller manager and scheduler will talk to the static-pod API server, and then the 1.8 API server comes up self-hosted and we can proceed. And then we repeat this procedure with the controller manager and scheduler. The only difference here is that, in case something goes wrong, we can revert easily by using the static pods.
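(A sketch of that static-pod fallback, which has to run on the master node itself: render the current apiserver pod spec into the kubelet's manifest directory so a fallback apiserver is around for the duration of the upgrade. Names and paths are again illustrative.)

```go
package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Reuse the pod spec of the running (e.g. 1.7) self-hosted apiserver.
	ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "self-hosted-kube-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	fallback := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "kube-apiserver-fallback", Namespace: "kube-system"},
		Spec:       ds.Spec.Template.Spec,
	}
	data, err := yaml.Marshal(&fallback)
	if err != nil {
		panic(err)
	}

	// Dropping a manifest here turns it into a static pod: it crash-loops
	// while the self-hosted apiserver holds the port, and starts serving the
	// moment that pod is torn down. Deleting the file reverts the pivot.
	if err := os.WriteFile("/etc/kubernetes/manifests/kube-apiserver-fallback.yaml", data, 0600); err != nil {
		panic(err)
	}
}
```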
A
Fun thing: oh yeah, so that is basically — I mean, while having the — well, we can't execute this remotely, but it feels way safer anyway. 'Cause, well, if something goes really wrong, we know what we had running as the self-hosted things; we can then, like, erase everything and just write the old manifests back to the static-pod path. Then we're, like, kind of back to stage one, where we're a static-pod cluster again. Then we can maybe even ask the user: should we upgrade to self-hosting again or not, right?
C
So basically, we pivot from static pods to self-hosting, and on upgrade we pivot back out of self-hosting into static pods — sort of as if we'd started a new cluster with static pods — and then we put it back to self-hosting. So the question is: do we use the same strategy for HA, or do we do it differently?
A
HA is just normal. So the single-master case would just insert some hooks in between — some hooks to fake this high-availability thing, where you have something that picks up the requests in between — but the generic thing would be the same: we just issue an update to the daemon set. In HA this just works; in the single-master case, we have to add some hooks to actually do the static-pod pivoting in between, right? So, well...
C
I also wanted to check in on the state of the doc. I haven't taken another pass — or a first pass — through the doc, and I haven't hooked other people at Google to take a pass on the doc, because when we discussed it last week it was still under sort of heavy iteration and not really ready for wider review. Where is it now? Is it sort of ready for wider review, for people to start thinking about it?
A
I mean, it is, for your use case, where you have something like kops, or maybe something that executes locally on your node: so you can generate certificates locally, distribute them to all masters — or whatever cloud environment you have, with whatever technique — then set up external etcd, or maybe etcd hosted somewhere already, and just run kubeadm init all around it. It's—
A
Yes, I was thinking more like we have some place where you generate the certificates — a place that is not a master — and then you distribute from that place to all masters. So, yeah, but I don't know if we have to have that in the doc. I mean, isn't this something that works today? Right, I don't know, I've been—
D
I was playing with it yesterday — I checked out the new Cluster API to see if somehow this fits — and the API basically introduced the concept of instance groups, as I understand it. So if I look at the bootstrap process, I need some kind of seed, some kind of point where everything starts. I can limit this to generating the certificates, or I can go as far as generating the first master node, but this is something that is separated from the regular master nodes — the other, additional master nodes.
D
So — and I agree with the fact that this is something which is related to the tooling we bootstrap with — but, on the other side, it's something that everyone needs, so it's a matter of where we put the trade-off. I don't have the answer, honestly. I would like — for instance, if I look at how kubicorn works...
D
My understanding from the Cluster API proposal — which is, let me say, more or less similar to what we converged on — is that you define groups of nodes: there is the group of nodes which is the masters, and the group of nodes which is the workers. And then, let me say, inside a group all the nodes are equal. That means, for instance, that all those nodes start from the same init script.
A
Yeah, and also the same goes for node joining. So we also have this — well, you can just copy a kubeconfig file with the discovery information, like the CA cert to trust and the master endpoint. You can do that just fine today, since 1.6 I think, for node joining. So the concern that you'd, well, execute different scripts for different nodes depending on when they join isn't that critical, because you have a file that has been distributed to the node, and then it's all the same.
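(For instance, a small sketch of generating such a discovery kubeconfig — just the CA cert to trust plus the master endpoint; the endpoint and paths are placeholders.)

```go
package main

import (
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// CA bundle the joining nodes should trust (path is a placeholder).
	caCert, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}

	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["kubernetes"] = &clientcmdapi.Cluster{
		Server:                   "https://master.example.com:6443", // the master endpoint
		CertificateAuthorityData: caCert,
	}
	cfg.Contexts["discovery"] = &clientcmdapi.Context{Cluster: "kubernetes"}
	cfg.CurrentContext = "discovery"

	// Every node gets the same file, no matter when it joins.
	if err := clientcmd.WriteToFile(*cfg, "discovery.conf"); err != nil {
		panic(err)
	}
}
```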
A
You can rotate the token if you want; that's not a problem. So, yeah, I think there are, like, two major use cases: one, interactively typing the kubeadm commands, for a new user; and two, the more automated one — with Terraform or whatever, or the Cluster API — the automated execution of kubeadm. And I think the automated HA thing works today, where we just punt on the HA stuff to the tooling, to say that, well, we assume you have an etcd cluster.
A
And, yeah, I mean, overall it has really been cleaned up and improved — like, we merged these two docs together — and it's something we can actually commit into Git. I'll address the last commit comments here, yeah, by today. So what will we still have outstanding? Yeah, what's the most important thing to talk about here?
C
That's a lot harder than just generating a new API serving cert, because, presumably, if you have the private key for the CA, you can just generate a new API serving cert on every upgrade and refresh the one-year thing, and then basically you're telling people: you have to upgrade at least once a year for your cert not to expire.
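(A standard-library sketch of that idea — re-issuing the serving cert from the CA key on each upgrade, pushing the one-year expiry out from the upgrade date. Paths, names, and SANs are illustrative, and the CA key is assumed to be PKCS#1 PEM.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// mustDecode reads a PEM file and returns the first block's DER bytes.
func mustDecode(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	return block.Bytes
}

func main() {
	// Load the CA; having its private key is what makes re-issuance possible.
	caCert, err := x509.ParseCertificate(mustDecode("/etc/kubernetes/pki/ca.crt"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("/etc/kubernetes/pki/ca.key"))
	if err != nil {
		panic(err)
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "kube-apiserver"},
		DNSNames:     []string{"kubernetes", "kubernetes.default"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0), // refresh the one-year window
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("apiserver.crt")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```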
C
I think what we've found, for most of our users at Google, is that people upgrade at least every other release, and a lot of people are realizing they should pick up sort of every release, because there are so many bug fixes, right? Things are changing so fast, and fixes are often not backported very far, so it's sort of prudent to keep, you know, your cluster up to date.
B
My problem is, when you support people that are not in a hosted environment, you can't tell them when to upgrade, right? So it's up to them to do it. You can advise all you want — and we're an open-source project, so we can say some things. I kind of like this idea, even though I might regret it.
C
Yeah, I'm not sure it's the right time. Like, we probably want to also build some sort of, like, certificate-rotation command and so forth, right? And we're gonna have to do that for the CA too, probably. I think that, you know, by default, if we just always replace a couple of these certs on every upgrade, that's gonna eliminate a lot of the issues. Everybody who's staying up to date will never have to think about doing it manually outside of that time frame, right?
A
Well, how do we achieve that with self-hosting? It should be pretty straightforward, right? Because we just update the secret. If we assume the secrets approach, we just update the secret in the cluster, we do a rolling upgrade of our masters, and that will do it. I mean, changing a config map doesn't reload the pod automatically, right?
A
So if we change the config map — if we change the secret resource — like, we generate a new API server serving cert, we update the API server serving-cert secret, and then upgrade from, let's say, 1.8 to 1.9 when using self-hosting. Then, when the 1.9 API server comes up, it will have the new config map and the new secret, right?
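(Roughly, with client-go: update the serving-cert secret, then bump a pod-template annotation so the DaemonSet controller rolls the apiserver pods and they remount the new cert. The secret, key, and DaemonSet names are made up for the sketch.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// PEM blobs produced by the issuance step sketched earlier.
var newCertPEM, newKeyPEM []byte

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Swap the fresh serving cert and key into the secret.
	sec, err := cs.CoreV1().Secrets("kube-system").Get(ctx, "apiserver-serving-cert", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	sec.Data["tls.crt"] = newCertPEM
	sec.Data["tls.key"] = newKeyPEM
	if _, err := cs.CoreV1().Secrets("kube-system").Update(ctx, sec, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Updating a secret does not restart running pods by itself, so trigger
	// a rolling update by changing the pod template.
	patch := fmt.Sprintf(`{"spec":{"template":{"metadata":{"annotations":{"certs-rotated-at":%q}}}}}`,
		time.Now().Format(time.RFC3339))
	if _, err := cs.AppsV1().DaemonSets("kube-system").Patch(ctx, "self-hosted-kube-apiserver",
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```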
B
It actually does — I just think it does a watch on the secrets and config maps, and it does a refresh — but I don't believe that it will tickle the pod unless the pod has some type of watcher for it, because there was a recent discussion where we were talking about this for config maps, where it actually does reload them. Oh—
A
We
need
to
check
that,
but
if,
if
we
well,
if
it
doesn't
reload,
which
I
don't
think
it
does
right
now
like
that,
it
doesn't
kill
the
pod
and
and
reload
the
contents
of
the
binding,
then
we
should
be
good
to
go
to
just
update
the
secret
with
the
serving
cert.
Then
do
a
roll
like
the
normal
upgrade
procedure
and
well
boom.
There.
We
go
API
service
star
that
wouldn't
use
secret.
Well,
we're
a
new
something
so
I.
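(If it turns out the pod does need its own watcher, a sidecar-style sketch like this would do it — watch the serving-cert secret from inside the pod and exit so the kubelet restarts the container with the remounted secret. The secret name is again hypothetical.)

```go
package main

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Runs inside the pod with its service-account credentials.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().Secrets("kube-system").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=apiserver-serving-cert",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if ev.Type == watch.Modified {
			// Exiting tickles the kubelet into restarting the container,
			// which picks up the updated secret contents.
			os.Exit(0)
		}
	}
}
```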
A
Yeah, so, I mean, a restart will happen in any case, right? Because if we are on 1.8, we have some kind of secret — like the old serving cert that's gonna expire — then we upgrade to 1.9 and we do this static-pod jiggery, right? Then the API server is gonna be torn down, and the one serving is gonna be the 1.8 one.
B
We've been avoiding the context for this release cycle, but we can start to talk about it with regards to how we want to do load balancing to the masters. And one thing I was thinking about — you know, at least to start to percolate in their minds — is the notion of potentially, in the future, deploying Envoy as a possible means to get around this, right? Because Envoy would give you the local client connection for the proxy that could auto-load-balance the API servers.
B
There are people working on it — there are various things going on now — and Istio is weird. It's neat and weird at the same time. It's not fully baked, but I think what we're looking for is a special-purpose use case, and I think there are a number of people working on it right now, so, yeah.
B
Can we maybe address it the next cycle? I don't think we're gonna have the bandwidth or time to address the sucker this cycle, but I just wanted to throw that out there as an idea, because I've been tinkering with the thought of it, and it's much cleaner than everything else I've thought of.
A
One question is: would it — like, well, we have a reboot condition here as well, right? Like, if you deploy Envoy as a daemon set or static pod or whatever, then reboot, and your kubelets can't find the master, because they're talking to a VIP that doesn't exist, and Envoy has to come up and it has to get information from somewhere — like, what endpoints did I talk to the last time? Yeah.
B
I think I'd have to dig into the details, but I'm pretty sure there's a — I don't know if there's checkpointing for it, for restarting, but there is a separate storage facility for Envoy, and I'd have to verify what that storage facility is and how it works. I am NOT an expert on it, by any stretch of the imagination — I'm basically playing with the blocks — and it sounds like a good idea, but, you know, the devil's in the details, and I have not dug in as of late. Okay.
A
That would force us to know whether we're using iptables or IPVS, for example, which is a big downside, and we'd have to checkpoint the — well, deploy some kind of network/iptables checkpointer for that to work for reboots. So, yeah, it has those downsides. Then the other proposal that has come up is, well—
A
We basically write our own static-pod thing that, well, checkpoints somewhere, right — some iptables rules or so for a VIP — and watches the kubernetes service. But that would be a lot of extra maintenance overhead for us, and, I mean, race conditions all around could happen. So, yeah, such details. So, absolutely.
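(The core of that homegrown option would be something like this — watch the default/kubernetes Endpoints and checkpoint them to disk so a fallback proxy on a rebooted node knows which apiservers it talked to last. The checkpoint path is invented, and all the iptables/VIP handling is elided.)

```go
package main

import (
	"context"
	"encoding/json"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The kubernetes service in the default namespace always lists the
	// current apiserver endpoints.
	w, err := cs.CoreV1().Endpoints("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=kubernetes",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		// Checkpoint every change to local disk; after a reboot the fallback
		// can read this file before the apiserver is reachable again.
		data, err := json.Marshal(ev.Object)
		if err != nil {
			continue
		}
		_ = os.WriteFile("/etc/kubernetes/checkpoints/apiserver-endpoints.json", data, 0600)
	}
}
```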
B
Experimenting and reading up is a good idea. I don't know where we're — I want to include you in the thought process. As someone I talk with all the time mentions: let the idea percolate and see if, logically, it makes sense, and then reevaluate once we have, like, some POCs available. But every solution I've seen thus far I don't particularly like, and it feels very, very hacky. But this might be — this poses an opportunity for a possible solution.
A
I think so. And so we're basically gonna — Fabrizio, can you send a quick email to the SIG Cluster Lifecycle mailing list, just saying that, well: I consolidated and unified Lucas's and Tim's docs, and here is the result; this week we're reviewing it and sharing it with a broader group; please take a look at it and we'll address comments as they come up?