C: So we missed the boat in the 1.8 series to get the update to 3.1, and we do not have time to qualify a 3.0.17-to-3.2.9 upgrade, I don't think. There are people contesting that, but I don't know if CoreOS has explicitly stated it; they have not done that regression testing, because I talked to Ji Young about that and he said that they have not done it. So I think, for new clusters going out at 3...
C: If we're gonna, especially [if] we're going to have the HA feature available, going out at 3.2 is probably pretty important. And I talked with Clayton as well, and he had mentioned that they are skewing the clients to the version that they're currently running: they're running a 3.2 client against a 3.1 [server], and we've done that on many releases. The client version doesn't need to match directly with the version that's deployed.
D: Yes, so while we're talking about etcd versions: in the review for the API [types], which just got merged, a really good point was raised. Right now we're not exposing our version choices to kubeadm users, so, you know, do we want to expose that choice to users inline, right? If I'm a user, is it reasonable for me to expect my etcd to run, like, 3.1-whatever, or should we, like...
D: Because one of the spec fields for the cluster [object] that's gonna be sent to the operator is an etcd version string, right. So I'm just trying to figure out the expected behavior of this sort of HA self-hosted etcd: whether we should even expose that version as an option to the user, or whether we should just sort of always assume that we know best.
C: Always assume that we know best for the time being, because we're still alpha. I mean, eventually, over time, we might want to have a knob to support it; we might want to have a knob with a default, right. And if, you know, it's super experimental: buyer beware, you can kind of blast your own foot off with a footgun, you know.
C: It can be skewed by one minor version, and we've done this across a number of releases, and I'm gonna push to get that change upstream so we get all the test cycles now. So long as we get all the test cycles now, we should be good. Matching the server version would be a harder shift, because then we'd have to go through the upgrade process, but we might want to make that case, given the issue that we've seen, yeah.
C: I think defaulting to 3.1 is the safest bet, and just smashing the YAML in, Hulk-smash style, is totally fine, because it'll restart itself.
B: I mean, I tested that locally and, well, it seems to work. Should we... well, in UX terms, should we do that etcd upgrade when you do upgrade apply, or should it be a separate procedure? Or can we somehow feature-gate or conditionally flag this, like: we'll do this by default, but [there's a flag] if you want to opt out of upgrading.
C: That seems like a legitimate thing to do. I think, as long as kubeadm can detect whether the configuration has a local etcd, which it should be [able to] from the config that's now stored in a ConfigMap, right, it should be able to detect that it's there and then automatically do the upgrade, if it laid down the version; and that seems like a legit thing to do. If it's external, then somebody else is gonna have to manage that separately.
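For reference, a minimal sketch of that detection, assuming the ConfigMap name kubeadm used around that time; the exact field layout varies by release:

    # Inspect the kubeadm configuration the cluster stores about itself:
    kubectl -n kube-system get configmap kubeadm-config -o yaml
    # If the etcd section lists no external endpoints, e.g.
    #   etcd:
    #     endpoints: []
    #     dataDir: /var/lib/etcd
    # then kubeadm laid down a local (static pod) etcd and could upgrade it;
    # a non-empty endpoints list means an externally managed etcd.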
D: I'm just trying to think in terms of the UX. I mean, backing up: there are two points there. First, it would be nice, to me as a user, to have a local backup just in case something goes wrong, just for future reference, perhaps like a tarball of my etcd state. And secondly, if we were to do rollback, say something went wrong with that etcd upgrade, we're gonna have to roll back to some state, right, in which case I think we would need [one].
E: I have a question; I apologize if this is not obvious to me, but how big can these backups get? I mean, can they be big enough, substantial enough, that you can run into a situation where you're backing up, taking a lot of disk space, and somehow interfering with the workloads going on on that master, isn't it?
D: It's definitely possible. I mean, it really depends on the size of your Kubernetes cluster and how many resources are being persisted in etcd, right. But I don't know; that is firmly something that the user is just gonna have to figure out by themselves. We can't really hand-hold them to that degree. I don't think we can sort of spec out hardware requirements, and, I mean, where would you draw the line there, right?
C: I think we should make it easy to do, and call out instructions where possible, but I think there's a line that we don't want to cross, which is: the backup aspects are separate, right. That's orthogonal to the behavior of kubeadm proper, and we can delegate that to other tools to manage that risk; or we're basically saying it's your problem. Yes.
B: Static pods: well, we might want to just copy /var/lib/etcd, right, like the actual directory. We might want to copy that, not getting into application-level backups; instead we'll do a file-system-level backup, maybe. Then we'll follow the normal operating procedure for control planes: we copy the current etcd manifest into a backup directory, we write the new one, etcd should restart, and everything should be fine.
B: If things fail, then we'll just move the file, the static pod file, from the backup directory back to the actual location, and we'll see if etcd comes online again with the previous version. And then we might, in the end, want to have this optional hook, like: okay, we tried to roll from 3.0 to 3.1, it didn't work; we tried to roll back to 3.0, but it didn't come up successfully either; so we know we're now...
C: You went through many steps. The problem I had was that along the way there's a piece that's missing: when you do the upgrade, do we want to have a save file of some kind for the previous version, and some metadata in the new one that specifies, you know, "the old one was this"?
B: No, I don't think so. Instead, the current upgrade procedure is basically shuffling files around on disk, right. We have the current 1.7 manifest of the API server; we'll just copy that, well, actually we'll move that, to a backup directory, and we'll write the new file, which is a 1.8 API server. Then, if something goes wrong, we'll just move that old file from the backup directory back to the actual, real location. And we could do the same with etcd, with the optional hook.
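A sketch of that manifest shuffle, with illustrative paths (kubeadm performs the equivalent internally; the backup directory name here is hypothetical):

    MANIFESTS=/etc/kubernetes/manifests
    BACKUP=/etc/kubernetes/manifests-backup
    mkdir -p "$BACKUP"
    # Move the old static pod manifest aside; the kubelet stops that pod.
    mv "$MANIFESTS/kube-apiserver.yaml" "$BACKUP/"
    # Write the new manifest; the kubelet starts the new static pod.
    cp /tmp/kube-apiserver-new.yaml "$MANIFESTS/kube-apiserver.yaml"
    # Rollback, if the new pod never becomes healthy:
    # mv "$BACKUP/kube-apiserver.yaml" "$MANIFESTS/kube-apiserver.yaml"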
B: Right, I actually like that a lot, because then we're not mucking with user data, right; we just took a snapshot. And if things did go wrong, it's basically one single command you'd have to run on the command line, so it's kind of straightforward. Yeah, I think that's the best, and then we'll leave this whole thing optional from the command line, right.
D: And then for the HA stuff, we can probably avoid this process entirely. We might not even have to back up, because I assume the operator backs things up anyway; it has mechanisms for backup. So, yeah, I think it's just going to be a case of executing an HTTP request to some endpoint, and then, I don't know, we might have to wait, perhaps, and do some health checks to make sure that state's been reconciled, and then we can proceed with the rest of the control plane upgrade, but yeah.
B: Yeah, that sounds good to me. Anything else along those lines?
B: And, yeah, to note: I don't think we have even one issue with kubeadm upgrade so far. Jamie, have you seen something? I haven't. So, I mean, this should be kind of really safe, because, well, CoreOS does a lot of regression testing between these [releases], and the overall kubeadm upgrade command seems to be stable, like the procedure we're doing. So, cool.
B: Well, it has basically been things like: if you set the advertised address of the API server to an IPv6 address, it will not try to health-check on localhost on the IPv4 address; it will use the IPv6 variant, and things like that. So, small changes that basically swap out IPv4 addresses for IPv6 in case you opt in. This is, from what I understand, one of the major themes here, and CNI 0.6 is required for it, so they kind of go hand in hand. Any questions or comments there?
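Concretely, the swap amounts to health-checking the IPv6 loopback instead of the IPv4 one (port is the kubeadm default; addresses shown for illustration):

    curl -k https://127.0.0.1:6443/healthz   # used with an IPv4 advertise address
    curl -k https://[::1]:6443/healthz       # used when --advertise-address is IPv6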
B: Yeah, and another one is Windows support for kubeadm join, which is cool. It's really the same, I mean, kubeadm join doesn't do much, really, right; nowadays it's basically doing the discovery and writing down the file. So it's basically just this validation flow, "can I trust the master", and...
B: That's actually my question. I mean, it's not kubeadm's [problem]; well, it's kind of this big thing: they have to somehow run the kubelet in a way... I don't know how they do it, right, nor kube-proxy either, or how they package CNI, things like that. So within this, kubeadm join support is just writing down this file so the kubelet can then bootstrap itself and do stuff. But, yeah, I mean, I...
D: ...[need] to read the comments on that. So, what do you think is the best approach moving forward? Should we encourage him to close it? Should we try and extend his commit and improve that output thing? Or, I mean, what do we need to do to get multi-master to work? Is it even a requirement for HA in 1.9? I guess it is, right? Yeah.
D: Yeah, I need to read through that HA proposal. I answered some of the questions about the operator in it, but, yeah, I need to go through and look at the implementation details. Has anyone actually volunteered to act on that document and to implement multi-master? It's still in review stage, that doc, right? Because I might have some time over the next week or so, if I clear up the operator work first, so that...
B: That was my next question and topic: just to state that, well, we now have a proposal for [HA]; it summarized my and Tim's thoughts from earlier and added some new stuff, as we discussed one or two weeks ago, and now we have to attack on this, otherwise it won't make the release. Well, I just LGTM'd some etcd self-hosting PRs, and the operator PR for the API types got merged; now we need to build more on that one.
D: So, on the topic of feature gates: when you said, you know, the next step is to actually implement the operator, I've pretty much done it. But one thing I've noticed is that there's a lot of overlap between the feature flags of high availability and self-hosted. So, do we ever expect a scenario where someone wants HA but not self-hosted? No?
C: I call them the circle features: we couldn't have one without the other, and they're starting to actually get progress now. So, no; my joke actually no longer applies, because we're actually making progress. But I think the one thing is the workflow from kubeadm that needs to get done, right; that's the big thing. All the other pieces: we're still going to default to having everything either using a VIP or a DNS name, so that'll be fine, right, and we'll probably have to...
B: Yes. So, why I didn't want secrets is because, well, as we've said, it's not secure right now, at least not as secure as we want it to be. So I knew it would just be, like, well, kind of suboptimal, right. [Secrets] would be the obvious [choice], because, well, I've researched there as well and tried to think about the different scenarios.
B: There's an ongoing, parallel feature from sig-auth and Jordan as well, regarding the node authorizer: like, how can I say which labels a node is [allowed to] register itself with; how can I let a node delete itself; how can I say that, well, you shouldn't be able to add this label after you have registered, things like that. Because we don't want the situation where a node, a node and its credential, can just say: well, I want to patch myself and add the master label; then I will get...
B: ...master-scheduled daemon sets, I will get access to all the secrets of the control plane, and, well, I've graduated from a node to a master, which makes a node token basically a root identity in the cluster; like, super easy to escalate, right. So that's one of the problems with this flow, and that is one of the reasons I wanted to go with the CSR thing and things like that. But in the end we also have to realize that sig-auth is working on this, and it will get better as we go.
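The escalation being described fits in roughly two commands; a sketch assuming a stolen node credential in node.kubeconfig (file name and node name are illustrative):

    # Label "my" node as a master using only the node's own credential...
    kubectl --kubeconfig node.kubeconfig label node "$(hostname)" \
      node-role.kubernetes.io/master=
    # ...after which master-targeted daemon sets, and the control plane
    # secrets they mount, get scheduled onto this node.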
C: As long as we just bring it in as alpha for some period of time, yeah, until all of these things are available; that's totally cool with me too. So I think, you know, if we're gonna go out with it for this release cycle, we'll probably just stay in alpha until that audit is good enough for most folks, and I'm sure there are gonna be other bugs we're gonna find, cuz, like, security's kind of fractal, right. It totally is.
B: ...which is obvious when thinking about how it's architected. That's why we need Mike's PR, which basically adds a new field to the authorizer, an optional field that says "denied", and if any of the, well, you-name-it authorizers returns denied, it will deny the request. Currently we union all authorizers, which means: well, if RBAC allows it, my ingress controller should be able to access all secrets...
B: ...then the authorization result will be "granted". But in this case, well, we'd have to do some kind of authorizer or whatever, probably a lot of code at first, some daemon running somewhere, I don't know, that basically answers: if this is a cluster credential created by kubeadm, we should deny this request unless it's coming from the superuser of the cluster.
B: So those are the two main ones, security-wise. But, yeah, this is just for getting the ball rolling, and if we end up in a place where we don't see any path forward, we just have to reconsider, stay in alpha, and re-architect, and not use secrets, if it turns out to be totally, well, not usable. Worth mentioning is that CoreOS is already...
D: So, I had a question about something completely unrelated; kind of related, because it's the operator. One thing I noticed is that we have our bootstrap static-pod etcd, which is deployed first, right, and then the etcd operator is deployed and [does] a pivot to the new self-hosted version. What's kind of interesting is that when the self-hosted cluster comes up, and then [during] that pivot, that migration from the bootstrap to the new version, the way that it advertises itself is through a DNS name; it's through a hostname.
D: But the problem is that that relies on cluster DNS, and the static pod isn't synced with the DNS at all, right. So my question is, how are we gonna get that to work? There are a few options. One is to make the DNS server reside on the host network, and I got it working using that: basically, by the time the pivot happens, the DNS server is up, and the static pod can resolve all of the DNS through the DNS server. We can create, like, a temporary DNS server...
D: ...that is on hostnet, and then pull it down and have it not on host [network]. Or we can try and do some /etc/hosts kind of magic, where we... no, I'm not sure that even works. Or we can try and send a pull request to the etcd operator to sort of make that hostname...
B: So, now I understand why you're asking; I didn't have the context and was like, "why?", but okay, yeah. It's really easy: we have no other choice but to have it as a normal workload. Anything else will make the cluster non-conformant and probably break conformance tests and everything; we can't do anything that special in this case. So, yeah, it has to be a workload with a pod IP, with a service backing it, and a stable service name that is the same one all kubelets use, and things like that, in the cluster.
C: Already defaulted, yeah. I don't even know why people said this was an issue with etcd2. I don't know why there are these weird conversation pieces still going on, because I think there's this API server parameter that says "enable quorum reads", but it's defaulted in the client. So I'd have to take a look at the plumbing and verify what they're doing with etcd3, but I know that that addition was made a while ago, and it makes no difference in its...
B: It shouldn't make any difference, yeah. So, let's start with adding that flag when we're behind the feature gate. Also, we need to use the new endpoint reconciler, written by Ryan Phillips or someone from CoreOS early in the cycle, which basically is a better way for the API servers to manage the internal "kubernetes" service endpoint.
C: Right; the stuff we have right now is for ingress, primarily. Eventually, you know, the ability... Envoy has so many capabilities, it's kind of really interesting. The potential use cases are pretty dramatic, and it solves an abstraction layer that I think would be cool to solve for different use cases. I'd say I'd like to encourage us to start tinkering with it.
B: So, Andrew, meanwhile we're planning to, well, in lack of a better solution, we're thinking that kube-proxy might be able to settle for its own VIP. So kubeadm would write down, well, it would make an iptables rule with the internal kubernetes service VIP, which is often 10.96.0.1.
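As an illustration of the kind of rule meant here, heavily simplified (real kube-proxy rules live in dedicated KUBE-SERVICES chains; the master address is a placeholder):

    # DNAT traffic to the in-cluster API VIP toward a real master:
    iptables -t nat -A OUTPUT -d 10.96.0.1/32 -p tcp --dport 443 \
      -j DNAT --to-destination 192.0.2.10:6443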
B: ...the iptables rules. And Kevin Fox had some proposal as well, somewhere, where he wanted to add checkpointing support in kube-proxy itself, which would be cool, I guess, but it hasn't been accepted or anything yet by [sig-]network, I think. So, yeah, those are the two next steps I think, Jamie, you're gonna want to see, yeah, before...
B: ...upgrading the self-hosted cluster in a slightly better way. Those are the four main tasks that have to happen inside of kubeadm for 1.9 to launch; then we have some smaller issues and improvements as well, but those are the four big ones: one, etcd operator support; two, making a node in the cluster able to address all API servers; three, upgrading etcd when you do kubeadm upgrade apply, from 3.0 to 3.1, which we discussed earlier; and four...
F: ...just to check whether an update is even available. See, we're trying to build a sort of web-API wrapper around kubeadm, so that people can sort of put up a website and a few buttons and create a cluster, sort of similar to how existing hosted UIs work. But currently it requires SSH-ing into the machine to do that, so, yeah.
F: So we're trying out the new 1.8 release. What we have done, essentially, I already told you: we copied some of the code from kubeadm, sort of like the version check and the version-get stuff, and tried to sort of simulate that from, like, an API. So, yeah, there are a couple of issues. The first one is that, remotely, it is hard to know what the currently installed version of kubeadm is.
F: I mean, so one thing we have done here is that we basically deploy a daemon set, and the daemon set essentially runs `kubeadm version -o short`, takes the output of that, and annotates the node [with it]. So we can look at the node annotations and see which version is now installed; that sort of fills a key [gap]. That's one part we did, and then the second part was just the code that was in kubeadm version.
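Roughly what each daemon-set pod runs on its node; the annotation key is illustrative, and NODE_NAME would come from the downward API:

    VERSION=$(kubeadm version -o short)
    kubectl annotate node "$NODE_NAME" --overwrite \
      example.com/kubeadm-version="$VERSION"
    # A controller can then read the annotation off the node objects.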
F: Basically, we look at the remote, like, the download file which details the stable [release] and stuff like that, and then just actually copied the logic that looks at the API server and all of the kube components from the cluster, and then essentially checks and says: this is the version available. So that part of the logic works.
B: Yeah, some initial comments. So, I mean, the API server version is easily gettable; kubeadm gets it from the /version endpoint of the API server. That's fine. It gets node versions from the node objects in the cluster; that's also fine. Remotely, it uses its own compiled-in metadata inside of the kubeadm binary to get its own version, which is the thing I don't think we can change, right, I mean...
B: ...we can't really get that information from anywhere else, because we're actually trying to resolve, well, "how old am I?", right. That's basically the query, and I have to answer that from my own metadata, not from anyone else's. But I don't think that's a huge problem, because the only place where the kubeadm version actually gets used is to detect some smallish cases, like...
B: If you want to upgrade a cluster from 1.8.1 to 1.8.5 and [you] have a 1.8.3 kubeadm client, it will say: well, you should upgrade kubeadm first, from 1.8.3 to 1.8.5, and then you can upgrade your cluster to 1.8.5. But in this case you can just append -f, or --force, and it will do the thing for you, right, because that's more of a hygiene thing: it's best practice to upgrade your kubeadm client before upgrading the kubernetes version, because a new kubeadm version knows whether it has to deal with the new kubernetes [version] in some special way, which actually hasn't happened in the latest cycles. So, I mean, that's kind of safe, so I think your versioning plan is actually pretty fine.
B: kubeadm upgrade apply will almost certainly always be run from the node, like, from the actual master itself, and there's a great rationale for this, which is safety: if kubeadm executes on the same node it operates on, it can actually make sure that everything goes smoothly, right. So, if you're upgrading your master and something goes wrong, it's easy to roll back, because I have access to the local files.
B: I can do whatever I want, shuffle files around. If I'm remote, then, with self-hosting, trying to do a rolling upgrade of my own API servers, [and] there's a bug in the new kubernetes version or whatever that makes these new API servers fail, I suddenly don't have the endpoint I was talking to anymore, and my cluster is dead. I have no ability to restore the previous environment, as I can't get into the kubelets and just run static pods or whatever to re-bootstrap everything; I can't roll back. So, I mean, remotely executable...
F: ...the key [problem] is that we have to SSH into the machine, and then, you know, I mean, it's obviously the only option right now. What I was especially [wondering] is whether there is a way we could restrict that: even when we have access to that machine, we don't get any additional permission, or way to do anything more than just, maybe, run the kubeadm commands. That would be [nice].
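One existing mechanism for that kind of restriction is an SSH forced command; a sketch, with the pinned command chosen for illustration:

    # In ~/.ssh/authorized_keys on the master: this key may only run the
    # one pinned command, with forwarding disabled.
    command="/usr/bin/kubeadm upgrade apply v1.9.0",no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA...key... upgrade-bot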
B: Well, I mean, I don't know if I should dig down into the implementation details; you can read the proposal that I have up as a PR right now. But, yeah, it will just talk to the API server; it's a rolling upgrade of itself, kind of. So, I mean, it does work, but it's not the safest possible way. If you want to stay safe, well, you should execute it on your local node, a local master.
B: 1.8's etcd client is 3.1, and the 3.1 client actually has a bug when having multiple etcd instances and multiple masters, which actually hit Monzo in their postmortem; you can read about that. So that's really unfortunate. It's probably gonna be fixed in some way in the next patch release of kubernetes, but also, Tim is, for 1.9, gonna bump this to 3.2, which has fixed this bug.
B: So it's nothing kubeadm-specific; it's just what the API server code of today uses for talking to etcd. But the etcd server was 3.0 in 1.8, and it will be, for new clusters, 3.2 in 1.9 for kubeadm, which is what... well. So I'm not sure if Tim is gonna bump every... I think the proposal was: for new clusters, use 3.2.5, while the client is 3.1. I'm...
H: ...not just for kubeadm, but for kubernetes in general, right. So we've had a problem in the past where it's not clear sort of who owns etcd. I would propose that it's the API machinery sig that owns etcd, and not us, in terms of selecting which version gets tested and validated for kubernetes releases. And so, if we want to use 3.2, we should probably go ask them to qualify 3.2 with 1.9, and sort of announce that that should be the default for 1.9 clusters.
H: I know that the impetus for bumping up to etcd3 in the first place was driven by the scalability sig, and from the Google side, because they wanted to be able to hit their target for 5,000-node clusters. Once we hit that target, as far as I can tell, the scalability sig sort of lost interest in continuing to push it forward, and the API machinery sig doesn't seem to pick up the torch, and so I...
H: ...don't think anybody is actively owning etcd and keeping the train rolling forward. And, you know, Justin's complained multiple times about how we don't have a great support mechanism for upgrades. So I think this is a broader issue, and maybe we should bring this up in the community meeting, or maybe with sig-architecture: that we feel like this is an area that doesn't have active ownership, and that we don't want to be the people driving this.
B: Yeah, this is definitely a sig-architecture [topic], and probably even a steering committee one, to talk about: how do we manage, how should we manage our sigs to divide this work between themselves, I mean.
H: It's all the components, right. It's like: do we bump the version of kube-dns, or do we let the networking sig bump the version of kube-dns? Do we choose kube-dns versus CoreDNS, or does the networking sig choose kube-dns versus CoreDNS, right? We are already an installer that has to have some opinion about what to install. Are we forming that opinion ourselves, or are we letting the other sigs tell us what we should be running, right? I think that that's really...
H: ...the question here. Etcd is one component; the bigger problem, that nobody seems to be owning etcd, is true for every component that we launch as part of kubeadm, right. And it's simple for things like the API server and the controller manager: you just pick the kubernetes version of those components. But when we have a cloud controller manager, how do we pick which version of that to run, if it's built out of tree? And, you know, CoreDNS is out of tree; kube-dns is out of tree. We aren't opinionated about installing the dashboard, or Heapster, or fluentd, or anything else today, but once we have add-on management, how do we specify those versions, and who tells us which versions to launch, right? So I think it's a bigger problem that we need to figure out how to solve, yeah. While we were talking, I looked it up: GKE is still running 3.0.17 for etcd, and [I'll ping] the person on our side that's been driving etcd.
H: Okay, I will check in with them today and see what they say. Cool. I would be surprised if they bumped kube-up and forgot to make the corresponding bump in GKE. I can't remember if that's a place where we've diverged from kube-up; some of it we just use verbatim, and some of it we have overridden, I mean.
B: Probably, and for all clusters or whatever right now, yeah. So, that was what we talked about. We concluded that, in kubeadm, in 1.9, kubeadm will upgrade etcd to 3.1, right, as part of the upgrade procedure, as that is the official embedded version for kubernetes 1.9, and this will happen before the control plane components.
B: It is a static pod, so we'll just do the same manifest shuffling around on disk: we basically rename the real manifest path into a backup directory, we write the new static pod manifest, and we check: is etcd coming up correctly? If it is, we'll just proceed; if it's not, we'll roll back the old static pod manifest and check: is it coming up correctly? If it isn't, luckily, we snapshotted the etcd data directory in the beginning, and we'll just tell the user.
H: ..."get pods" or something, yeah. Well, I think component status is actually... well: unless the API server itself sets itself unhealthy when it can't talk to etcd, component status doesn't actually touch etcd directly. All it does is send HTTP requests to the master components and ask them all if they're healthy. So, if you're already asking etcd if it's healthy, it's not actually doing any extra checking, unless the API server sets itself unhealthy.
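The two checks being compared, as commands (assuming you're on a master):

    kubectl get componentstatuses            # aggregated health of the master components
    curl -k https://127.0.0.1:6443/healthz   # the API server's own health endpoint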
B: Some kind of test there would be good, yeah. So, that was what we talked about first. Then, to conclude, just a couple of PSAs. CNI 0.6: we bumped the version from 0.5.1 to 0.6.0, and, well, of course, we encountered some kind of issue with the kubelet integration, so just surfacing that signal; sig-network is looking into this, and it's marked for 1.9. I mean, it...
B: It will be a critical bug, or... I don't know; the folks will fix it, I believe. Otherwise we'll have to roll back, and if we do roll back, it will get really tricky, because IPv6 support won't make it, and things like that. But, yeah, just so we know about it: at head, kubeadm is tested against the new CNI version, so that's okay. And, just to be clear, the CNI 0.6 thing is, from what I understand, kind of a scalability...
B: ...scalability issue; more that the latency... well, the issue is named "create pod latency increased with the new CNI version", and it actually seems like we have a PR up already, cool, yeah. So we have that in tracking as well. Then a PSA that IPv6 support is coming in 1.9, which also affects kubeadm; I've seen pretty many pull requests from the sig-network team lately adding IPv6 support to kubeadm, mostly so that where we use IPv4 addresses, we now [also] handle IPv6.
B: So, I mean, right now, I don't personally claim that I will, like, that I personally will support IPv6 in kubeadm. It's more that, well, I review the PRs and I approve them if they are good. Then, if sig-network has tests, as you said, that we see are green constantly, then I can say there's a fairly good chance that things work, I guess.
H: Cool. We should support Windows; we should support Solaris; whatever else people want to put in there.
B: ...[we heard] about it all in 1.5, that [was] the alpha version; then it stayed kind of unmodified, or something like "waiting for Windows upstream", during the spring, and I think they resumed the work in Q2 or Q3 or something, and now they're up to speed and wanting to make it be done for 1.9, I think. So, again, here we have the same situation: there's another sig actually owning this. They have added two PRs that have been, like, low-volume right there.
H: And I think one thing we have to figure out here is our support model, because when people run kubeadm and it doesn't work, they're gonna come to us first, even if it's a feature that someone from another sig has added and we've said, "yeah, that seems to work, and we've seen your tests and they looked green." They're still gonna come to us for support first. We need to figure out how we want to handle that: do we want to try and support them directly?
H: Do we want to redirect them to the other sig? Is there a way, if we want to do the latter, that we can do that automatically, without them hopping through us first and getting a slower experience? I think, as we add these things, that's gonna become more of a problem, cuz we have less direct experience: like, I will have never run kubeadm on a Windows node, and I won't know how it's supposed to look or how it's supposed to work, and I want to [be able to] help them debug, yeah.
B: I think, well, those issues probably have to be at least triaged first in the kubeadm repo, mentioning the right sigs. If it turns out to be a kubernetes issue, it will have to move to the kubernetes repo or whatever; if we have a kubernetes/sig-windows or some such repo, we can move it there, yeah.
H: I'm just trying to figure out what that looks like, and make sure that people who are coming in and just sending us PRs adding features are also signing up to do the rest of the process, right: they're signing up to do support and to do maintenance, and they're not just throwing code over the wall, saying, "great, it works, so now I'm gonna leave", and expecting us to carry the torch forward. The same way with etcd: someone needs to actually own that, and keep owning it, going forward.
B: I mean, as we concluded, this will be more and more the case, right. So, right now, when someone adds a new feature to kubernetes, they have to add it to kube-up or, like, the GKE code, the GCE code, to test it, and as we migrate these things to, well, kubeadm and, eventually, the cluster API, it will become more and more this, as you say, communication-redirection issue for issues. I mean, we don't want kubernetes/kubeadm to end up like the kubernetes main issue tracker.
B: He recently sent a PR; he would go with the official proposal as well. I had hoped he would be here today; he hasn't attended either meeting, so we'll see whether I convert the proposal, or whether we'll just leave it as a proposal one more week or whatever, in this form. I mean, the most important part now is that we start attacking it with code: start actually writing things, getting it behind the feature gates, getting the tests up, before it's too late, right.
B: This is going to be an extremely short cycle: we recently got kubernetes 1.8 out of the door, and now code freeze is coming up in three weeks, something like that, three or four weeks, and then we have KubeCon in the middle of, like, code freeze, or the stabilization period. So one of the critical parts is the etcd operator; Jamie just sent the PR, which I could link to here, so that's the thing he did there.
B: Cool. I think, for the PRs, well, I'll help him out as well, to answer things, but, yeah, from what I understand, he's owning that part, yeah.
B: Imagine you're rebooting your etcd operator cluster: your etcd operator master first comes up, the kubelet comes up, and, well, it wants to talk to the API server, but the API server isn't there. Then we have the checkpointer inside of the kubelet, the bootstrap checkpointing, which sees that "well, I was running these three pods; I will restore them."
B: Yeah, actually, that could be a feature request to the operator, right. The problem here is: we stated that we don't want to rewrite all the etcd-handling code, right, hence we're using the etcd operator. But the etcd operator is made for applications on top of kubernetes, not kubernetes itself, and for applications on kubernetes it's totally fine to run things as normal pods with backing services.
B: [It would need to] replicate the daemon-set kind of scheduling policy, or whatever, so it would assign this one pod to the first master, a second pod to the [second] master, and so on, but still handle the lifecycle of etcd, with backups and snapshots, restores, version upgrades, and these things. That's why we're using the etcd operator.
H: Yes, but if it's not designed to work for the case we need, then the question is: is it harder to fix it, or to not use it? And if we have to build a whole other layer of checkpointing, that seems kind of silly. Maybe we should figure out if we can make the etcd operator work for our use case, and talk to [them].
G: I guess the most interesting part is that we have to harden the bootstrap process, making sure that we are able to move from a single master to multiple masters, and, during the process, changing certificates and so on and so on. So there are, for me, a few questions there that warrant some work from you and the team.
B: Well, the master's pods [get] assigned to me, which will make the node authorizer grant me access to all the secrets in the cluster, including the CA key. So then, with basically two requests, right, basically two commands, I can turn my node credential, like a new node added to my cluster, into a CA-key credential.
B: That means, if someone hacks my ingress controller, they will also get my CA key for free; well, and other workloads in the cluster as well. Like, [say] I have my whatever-pod that I turned off RBAC for, basically granted cluster-admin; then, well, it indeed can get my CA key if it wanted to. But with Mike's new PR, for explicitly denying an authorization request for certain, well, certain types of requests, the goal here is to make some kind of authorizer...
B: ...heavily locked down, like, it won't affect any normal production user, and it's more for experimental testing right now. Is this something we can get to production greatness? If it isn't, we have to go back to the drawing board and see what we can do instead. But this is the MVP we're trying to achieve in 1.9; then we'll see: is this something we can proceed with in 1.10, like beta, and then continue to enhance toward GA?
B: So, what we must do now, if we want to have something for 1.9: as mentioned, this is a really short cycle. If we want to get this into 1.9, which we want, we need coding power. But that also concerns myself: it now seems I'm more, like, coordinating all our forces, and I won't have that much time to actually write code, which means my... my definite, like...
B: So, Jamie has the etcd operator right now. An hour ago, I asked Andrew to maybe do the kube-proxy [part], like resolving multiple masters' addresses from the VIP, from the node's point of view, so basically resolving multiple API servers from the node, and he may be able to tackle that. Then, what else: we need someone to enhance the way we upgrade self-hosted clusters, to be a bit safer, and we need someone to write the code for upgrading etcd, this 3.0-to-3.1 migration.