B: I know that they had created a build, 3.2.10 I believe was the version number, and I know that jpbetz was supposed to do the PRs, because I told him I didn't have time to do them, and I'm still burning through my backlog. To see where they're at, I told him to tag me on those PRs once they're in place, but that's primarily for the HA concerns, which don't affect this yet but will affect us in the future, and I'll have to take a look at that.
B: I don't know if the client patch will make the... there was a topic of conversation on API machinery that occurred, and I forgot about it until just a second ago. So, you mentioned that there was a conversation where they said they were willing to wait for a patch release, like a 1.9.1 patch release, before updating the client, and the reasoning there was that Daniel was mildly terrified of taking it this late in the cycle and wanted to give it more test time.
B: It's targeted now for the 1.10 release, which I'm okay with from this perspective, and I would bump both client and server. I would do both very early in the cycle, so you get the testing, yeah.
A: Not bad. I mean, this cycle is so short, and we got in valuable PRs targeted at component config and all the features that we want to enable the next cycle, as these features actually get stabilized by others in the community. The HA story relies on the etcd operator, pretty much as we discussed. I mean, we could have written our own thing, maybe squeezed that into the release, but that's not something we want to maintain for all eternity. So waiting a release for the etcd operator is just fine.
B: I mean, we can add secret checkpointing; it's not really hard to do. I think the hardest thing is getting people to buy in to the idea of secret checkpointing, and then you have to encrypt it and do all the other work there. So I think the facilities are all there and the machinery is all there; it's just a matter of adding secrets, making it secure, and making sure that it's an opt-in-only feature in some environments.
B: I think we're going to have some level of infighting with regard to the security level this entails, but I don't think the code is hard, just like the code for checkpointing wasn't hard. It's more the political battle, as well as getting people to sign off and buy in to make it happen. If I do make this modification, though, we should definitely talk about it now and start thinking about the things we need to do.
A: So, the serving cert... I mean, that's a really straightforward procedure with one master, because we basically just move the certs into an expired directory and write a new one, because we have the CA. But yeah, updating, say, the serving certs of three API servers in an HA scenario where we can't do anything, where kubeadm executing on some host can't access the machine where we want to rotate the cert, then we're kind of... that's crude, but it gets so much harder. I know.
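The single-master rotation described here (move the old serving cert into an expired directory, then issue a new one from the CA) can be sketched with openssl. Everything below is an illustrative stand-in, not kubeadm's actual PKI layout: the file names are hypothetical, and a throwaway CA and old cert are generated so the steps can run anywhere.

```shell
set -e
PKI="$(mktemp -d)"; cd "$PKI"

# Stand-in CA (on a real master this already exists, e.g. under /etc/kubernetes/pki).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=kubernetes-ca" -days 365 2>/dev/null

# Stand-in for the old API server serving cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout apiserver.key -out apiserver.crt \
  -subj "/CN=kube-apiserver" -days 1 2>/dev/null

# Step 1: move the old serving cert into an "expired" directory.
mkdir -p expired
mv apiserver.crt expired/

# Step 2: since we have the CA, issue a fresh serving cert.
openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -out apiserver.csr \
  -subj "/CN=kube-apiserver" 2>/dev/null
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out apiserver.crt -days 365 2>/dev/null

openssl verify -CAfile ca.crt apiserver.crt
```

In the HA case raised next, the same steps would have to run on every master that kubeadm cannot reach, which is why it gets much harder.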
B: I think there was a lot of consternation around the security aspects of storing checkpoints in files, and I don't recall the details. I think we should probably dig up those constraints, because I don't know if they all apply anymore, and it's very targeted to just the master nodes. If you escape the kubelet somehow, you already have the certs for the master anyway, so I don't see how it's any different.
B: Now, if this were a regular node and you were checkpointing and writing the certs to disk, I could see how that could be disconcerting, right? But because it is a master node, and tainted as such, and that's the only one that has that flag specified to the kubelet to checkpoint to local disk, I don't really see how that's much different from a security perspective, then.
A: Yeah, I mean, that's going to be special anyway, right? Because we might do something else, like the API server discovering from etcd storage, or from its own loopback client, that this is the configuration, as you do it. Yeah, that probably makes sense. We're probably going to use ConfigMaps for the controller manager and scheduler, but that's another thing, and it's far more straightforward.
B: The API server can be brought back online in the restart scenario for the bootstrap condition; that's all that really matters. So the API server is recoverable, and the time delay for a scheduler restart is kind of irrelevant for the control plane, because your data plane is still running.
E: Let's talk about the kubelet on the worker node: it starts pinging, looking for the API server using the VIP, okay, and this happens on the worker node. Now, on the master node, do I have to use the same approach or not? If the worker node and the master node use the same config, if we move the kubelet to component config, the answer is yes.
A: So I'm trying to look at the component config thing now, so yeah. Just to reiterate what Fabrizia said: the idea here, which we're looking into and experimenting with, is that kubeadm join creates an iptables rule pointing to the bootstrap master that you pass on the kubeadm join command line. There is one special Service in the default namespace, called kubernetes, which has the first available address in the service subnet range, and kubeadm creates this iptables rule to point it to the first master, the one you specify in kubeadm join. Then the kubelet talks to this VIP and kube-proxy talks to this VIP, and everything comes up. Everything proxies through this VIP; then kube-proxy comes up and notices, "I should take care of reconciling this." Then all the API servers reconcile all the time, adding themselves when they register and removing API servers that are stale. So this way we'll get an auto-updating thing.
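The join-time trick just described boils down to a single DNAT rule. The sketch below is a hedged illustration: the service VIP assumes kubeadm's default service CIDR of 10.96.0.0/12, whose first address, 10.96.0.1, backs the "kubernetes" Service, and the master address is hypothetical. Applying the rule needs root, so it is only printed here.

```shell
# Sketch of the bootstrap routing idea: DNAT traffic for the in-cluster
# "kubernetes" Service VIP to the bootstrap master, until kube-proxy
# comes up and takes over reconciling the real endpoints.
SERVICE_VIP="10.96.0.1"          # first address in the default service subnet
BOOTSTRAP_MASTER="192.168.0.10"  # hypothetical master passed to 'kubeadm join'

RULE="iptables -t nat -A OUTPUT -d ${SERVICE_VIP}/32 -p tcp --dport 443 -j DNAT --to-destination ${BOOTSTRAP_MASTER}:6443"
echo "${RULE}"                   # would be run with root privileges on the joining node
```

Once kube-proxy is reconciling Service endpoints itself, the temporary rule becomes redundant.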
B: You're in a weird chicken-and-egg situation, because what if you're the master and you want to bring it up, and you maybe have a scheduler that needs to talk to the API server? You still want to have it as a configuration to the proxy, so the proxy loads that value on startup and tries to set the route before it does anything else.
E: I think I can manage this on the master node as well, but it is much more complex. The idea there is that I create iptables rules that proxy the traffic, NATing the local address to the VIP, a kind of loopback. Yes, yes, and this will allow me to make the local kubelet point to the local API server and then bootstrap. I still have to work on this.
A: And, for the record, this is an experiment; we most probably are going to use something like Envoy, that is, a real load balancer. Still, Tim has raised the issue before with API server caching: basically, if we use this approach, it will be randomized which API server you are talking to. And also, iptables is not going to be the default for kube-proxy for all eternity; if we were using IPVS, that's a totally different story when joining. So this is more like an experiment.
A: Cool, but that's a really good experiment. Thank you for conducting it. I mean, we want to experiment at this time in the cycle for the next cycle, so that we know if it works and how it works, and whether it's something we can actually ship in some release, or whether we directly have to go and build a sidecar with Envoy or whatever.
B: Yeah, that sounds right. There's another question in the 1.10 cycle that goes beyond the HA stuff. It has to do with how, exactly, CNI is the one odd horse out, right? Because once we have Bazel online, we'll be able to build versioned RPMs and debs from the mainline repository, independently of the build products, for everything.
B: The one thing that's just weird is CNI, which kind of still lives weirdly in our repo somehow and isn't maintained separately, when in reality I don't see why we are publishing artifacts for it versus referencing artifacts for it, right? Like, the CNI group itself should be publishing their own stuff, or we could pull the binaries directly into our RPMs and debs. There's another possible way to deal with it, since we have this weird sub-packaging, right? Yeah.
A: Yeah, so then we have the other fun thing: basically, GKE and GCE don't want to depend on GitHub for stability reasons or something, so someone, maybe Jeff Grafton, just grabbed the builds from GitHub and pulled them into a GCS bucket. So in some places, where GKE code is involved, we're referencing the GCS bucket, and in some places, like the deb packages, I think we're referencing the GitHub artifacts. They are exactly the same binaries, but it's just two locations, which is confusing.
A: There we go. So those are the two main sources of truth in this matter. Right, and yeah, I mean, it's going to be fine, but it is suboptimal. It's getting better with the 1.10 release, as we can actually use official builds, not things we've built ourselves. But yeah, it's really not ideal.
A: Config there, okay, so that's the kubelet. Right now we bundle like twenty arguments to the kubelet, or something like that, in the kubeadm deb, which messes things up: if you upgrade kubeadm before the kubelet, you'll basically get a broken cluster. To mitigate that for future releases, there will be just these four arguments, and they won't change between minor releases, so that's good. Then, on kubeadm init, we basically reference the ConfigMap from the Node spec object, and then the kubelet, the master kubelet, will notice: well, I should grab this configuration from this ConfigMap. It basically downloads that to the dynamic configuration directory and checkpoints things locally there. So that's it for kubeadm init, and yeah, well...
A: The kubelet will start and do the TLS bootstrap, and then kubeadm join will use the locally TLS-bootstrapped credential, a kubeconfig file, to patch the Node object to use the dynamic config knob, and that's basically it. Then, if we upgrade our kubelets, we'll just switch the ConfigMap reference, and that's all, or update the ConfigMap. So I mean, this will be a much nicer way to do things declaratively instead of via the imperative command line.
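The flow just described, pointing the Node object at a ConfigMap that the kubelet then downloads and checkpoints, can be sketched as below. The node and ConfigMap names are hypothetical, the field layout follows the alpha dynamic-kubelet-config API of that era (it changed in later releases), and the kubectl call is left commented out because it needs a live cluster.

```shell
# Build the Node patch that points spec.configSource at a ConfigMap
# holding the kubelet's component config. On upgrade you would only
# swap this reference (or update the ConfigMap) rather than edit flags.
NODE="master-0"                  # hypothetical node name
CONFIGMAP="kubelet-config-1.10"  # hypothetical ConfigMap in kube-system

PATCH='{"spec":{"configSource":{"configMapRef":{"namespace":"kube-system","name":"'"${CONFIGMAP}"'"}}}}'
echo "${PATCH}"
# kubectl patch node "${NODE}" -p "${PATCH}"   # requires a running cluster
```

The kubelet watching its own Node object sees the new reference, fetches the ConfigMap, and checkpoints it to its dynamic config directory, which is the declarative path contrasted with the flag-based one above.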
C: Okay, can you hear me? Yep? Okay, that's... that's my baby. Yes, yes, because I basically needed HA.
A: I want to set up a production system, and I want to use kubeadm. I decided to do some manual work on this; I'm currently still in the process of getting things running, because yesterday and the day before yesterday I was trying to use etcd in Docker in Kubernetes, just as the default setup of kubeadm init does.
B: If you want to coordinate that, there are some other folks on SIG Cluster Lifecycle that we can poke, namely Craig Tracey, who has been working on the exact same thing. We should get him involved and have you guys sync up, so that way you can, you know, compare notes, because he has some Ansible scripts doing very similar things. And there are a couple of other open-source things, too, that we could kind of distill down.
C: Yes, I've seen it, but I really wanted to stick to kubeadm's simplicity, and since it's obviously going to be the standard in the future, I'd rather not set up many things manually, just as in Kubespray. I saw it, yes. I created an issue on GitHub; you can find it, issue number 546. Yes, I can just put that in the chat, and some people have already commented on it.
A: We basically decided, yeah, [inaudible], to do the same. So then we kind of chimed in on Slack and elsewhere that we should at least, like... we could have done this many releases ago, but now is at least the time to document this in an official place on the kubernetes.io docs and get more people aware that it's possible to do kubeadm HA if you basically copy your files, your certs, around and set up the load balancer. Mm-hmm, so yeah, that's great, I didn't know!
C: Really, no. I think the blocker I had today was basically fixed by Jamie's comments saying that I should rather stick to etcd running on the OS than trying to clusterize a single-instance etcd running in Docker while the cluster was still up; that caused some problems. Although I got very far with that.
A
C
A
So
I
would
run
it
CBE
in
Campinas.
Yes,
instead
of
Haute
instead
of
the
host,
but
I
would
not
use
the
like
I
would
from
Q
Barons
perspective.
I
would
use
externality
D
right
mm-hm,
so
so
I
would
run
in
some
containers
but
use
external
at
CD
and
not
try
to
convert
the
single
node
local
hosting
cubed
M
sets
up
for
you
to
in
the
static
port
to
an
age
18,
but
does
should
that
work
for
you
or
like
running
it
in
docker.
Well,.
C: I mean, I can always start it in Docker before I set up the cluster. Yes, yes, that is of course not a problem; that's really a matter of taste. I just thought it was rather elegant to have it in the manifest, so that Kubernetes actually tracks it and maintains it. But anyway, converting the single instance I have from kubeadm into a cluster while the thing was still up, that was tricky, and in the end I ran into problems getting the new masters to join the cluster, yeah.
A: So, what I think should work is: you specify external etcd from kubeadm's point of view, and you write a static pod for it. You generate certificates for etcd; that's the first step. You write a static pod for etcd with the right parameters and everything. Then you run external etcd, and when you do kubeadm init, you point to localhost, or your locally advertised address or something, or whatever other masters you have, and then kubeadm will start up.
C: Basically, these are just shell scripts; I normally try to document stuff in shell scripts, because then I can try the documentation out, which has some advantages. But anyway, eventually I have to write it up. I'm a pretty fast writer, so that shouldn't be a problem, but I would just like to have it all running once. I wasn't that far today yet.
A: One thing we still have to do is the other documentation. I will talk to a couple of other people to check who, if there's someone, will update our general docs. We have three or four pages on kubernetes.io or something; I'll sync with them separately. And now we're out of time. Cool.