Description
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.w4xw7yvzdgs
B
So yeah, we were just going over: we wanted to enumerate the potential blockers, or potential issues, of using the etcd operator, and I said that there were two right now. One was technical and one was more of a conceptual design issue. The technical problem should hopefully be solved in 1.8 or 1.9, and the technical issue is the way that the etcd operator manages the cluster.
B
Every time it scales up and down, it manually deploys a pod, so it needs to communicate with the Kubernetes API and then manually manage the pods in the cluster. The problem is that by the time it initiates that process, when it first boots on a kubeadm cluster, we don't have CNI installed, and we can't assume that CNI is installed. So the problem is that all of the worker nodes that the pods can be scheduled to are not ready.
B
So basically the etcd operator scales up and creates the pods, but they remain unscheduled; they'd stay in Pending forever. Somebody actually added a new scheduling feature in 1.8, and it should be working by 1.9, which adds more granular taints and tolerations. So basically, when we boot a new kubeadm cluster and then we add a node and we haven't installed CNI, we can still schedule workloads to that worker, as long as we have that granular toleration set on the pods. So yeah, that's something brand new.
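As a concrete illustration of the workaround described above, here is a minimal sketch (not an actual manifest from the meeting) of a pod carrying a toleration for the not-ready taint, so the scheduler will still place it on a worker that has no CNI yet. The taint key below is an assumption based on the TaintNodesByCondition feature; treat the exact key and image as placeholders.

```python
# Sketch of a pod manifest (as a plain Python dict) with a toleration
# for the "node not ready" taint, so it can be scheduled before CNI
# is installed. The taint key is an assumption, not a quote from the
# meeting.

def etcd_pod_manifest(name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                # Placeholder image; the operator would pick the real one.
                {"name": "etcd", "image": "quay.io/coreos/etcd:v3.1"}
            ],
            "tolerations": [{
                "key": "node.kubernetes.io/not-ready",
                "operator": "Exists",
                "effect": "NoSchedule",
            }],
        },
    }

pod = etcd_pod_manifest("etcd-0")
```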
B
It doesn't actually work in 1.8, I don't think; I tried to test it, and I think they included a bunch of the fixes tracked as working for 1.9, so yeah. That was the technical issue. And then Lucas was also a bit skeptical about us coupling ourselves to a third-party vendor and bringing it into kubeadm core. So that was the conceptual issue, so maybe, Lucas, do you want to talk more to that second point?
B
I think we probably won't record it. So, in terms of the dependencies: I remember somebody, I think it was Jordan, basically called that out in the initial pull request. He said that because kubeadm isn't in its own repo, it doesn't have full control of the dependencies it gets. And I think the only packages we needed to use the etcd operator were some structs and types, so the workaround was just to redeclare those in our own codebase for the time being.
D
So isn't there a security issue too? I'm just trying to enumerate the potential list of problems. Because the certs, in order for them to be running in the cluster, have to be stored on the cluster; they're not stored on the host machine. So isn't that the third one, the killer that kind of puts a fork in it?
C
So the current counterproposal for storing secrets is using custom types, a CRD, and putting them there, in order to have access not granted by default to, for example, Ingress controllers. Right now you can just pwn your Ingress controller and you have write access to etcd. So instead we'll use a CRD that will basically store the byte blobs with the certs, which will be downloaded to the host.
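A rough sketch of what such a custom resource could look like. The kind, API group, and field names here are hypothetical illustrations, not the actual CRD that was proposed; the point is only that the certs live as opaque base64 blobs under a type that default RBAC rules would not grant to, say, an Ingress controller.

```python
import base64

# Hypothetical custom resource holding cert blobs. The group, kind,
# and field names are illustrative only.
def cert_blob_resource(name: str, certs: dict) -> dict:
    return {
        "apiVersion": "kubeadm.example.io/v1alpha1",  # hypothetical group
        "kind": "ClusterCertificates",                # hypothetical kind
        "metadata": {"name": name},
        "data": {
            # Store each PEM file as an opaque base64 string.
            fname: base64.b64encode(pem).decode("ascii")
            for fname, pem in certs.items()
        },
    }

res = cert_blob_resource("master-certs", {"ca.crt": b"---PEM---"})
```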
B
So I actually asked some of the CoreOS team about the necessity for that, and they seemed to be of the opinion that it isn't just for the etcd operator that it's required; they said it's generally a requirement for self-hosting. Personally, I don't have an opinion on that, but I think that's part of the wider self-hosting discussion, whether we need to checkpoint, iptables, and so on.
D
There's a notion that we might even go away from it in the future, right? And there are other options too, other proxying and mesh-network solutions, that might just completely eliminate the dependency entirely. So it's very thorny, and I think if they have a hard requirement on that feature, we've enumerated enough issues to say: okay, we've done our due diligence, let's go on to option B, which is: let's do static pods, but now we need to own the lifecycle.
D
I just want to record it for posterity. And I know Fabrizio has a comment in here about the two docs, but what we should probably do after this meeting is take the information that I've started to enumerate in this doc, put it in our HA unification doc, and then commit that sucker to the kubeadm repo.
D
So why don't we talk about option B, the one that you proposed in your other document originally, Lucas? Let's try to poke holes in that one, and then just pick option A or option B and go with it. We need to just go now at this point, yeah.
D
We're only doing that so that, when you're doing a join, the other master can download it from a well-defined location. And then, after that period of time, how is the handoff going to occur? Because unless you know the number of masters, how do you know when to delete that thing? When does it get deleted?
C
Well, basically, we document it: "this is the command to run to delete it", like "stop bootstrapping" or something like that. We might even consider a kubeadm phase for it, or something; anyway. So the good thing with the static-pod alternative is that we kind of own the certificate generation process, and this will be done via the controller manager, which has the CSR signer and CSR approver.
C
Well, we don't want anyone but etcd to be able to peer with etcd, so using a different CA would be great: one CA just for etcd. But again, then we can't use the dynamic mechanism for writing new certs on the fly, because the CSR signer doesn't support multiple CAs as far as I know, which is one thing we might want to improve. But yeah, the problem is upgrades, right?
D
Here's the question. It gets weird because etcd is fundamental to the control plane, and there's a lot of tooling that already exists. Would it be possible for us to maybe make a little, very myopically focused tool that only does the etcd upgrade? Because right now a lot of the other pieces of the upgrade process say "punt", like: we won't handle it.
B
So to respond to that: it ties into one of the ways I think about this problem, which is everything we expect kubeadm to do when it deploys an etcd cluster, and what that cluster has the capacity of doing. It needs to be flexible, to scale up and down; it needs to be secure; and it also needs to be easily upgradeable in life.
B
Now, we can decouple that problem of security from the form factor of etcd. So if we do use CRDs, maybe we could use CRDs to deploy the TLS certificates to other nodes, and then use the etcd operator or static pods. That's a choice for the user, right? But I like the idea of using CRDs, because it securely sets up a node to host etcd. I think that's slightly orthogonal to what the form factor of etcd is, in my opinion.
C
What I have now, high level, what I had in the doc, was: kubeadm init sets up etcd, with one etcd client cert for the API server and one etcd peer cert for the initial node. This is single-node. We upload the certs to the CRD, and that's basically the first-master initialization.
C
Then we do the bootstrapping of the second master. We first get the CRD and we download all the common certs. Then we generate new ones via the CSR signer and CSR approver flow, unique for this master, with the right SANs and all that. Then, in the same manner, we generate a unique etcd peer cert, again with the right SANs. Then, what was the last step... yeah, we have to tell the initial etcd instance: "I now have two nodes, two instances."
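The cert step for the second master might look roughly like this as a manifest sketch. The object follows the shape of the CertificateSigningRequest API of that era; the name and the request bytes are placeholders (a real flow would put a base64-encoded PKCS#10 CSR in `spec.request`), so treat the specifics as assumptions.

```python
import base64

# Sketch of a CertificateSigningRequest-style object for a new master's
# unique cert. The CSR bytes below are a placeholder, not a real
# PKCS#10 request.
def master_csr(master_name: str, csr_der: bytes,
               usages=("server auth", "client auth")) -> dict:
    return {
        "apiVersion": "certificates.k8s.io/v1beta1",
        "kind": "CertificateSigningRequest",
        "metadata": {"name": f"etcd-peer-{master_name}"},  # hypothetical name
        "spec": {
            "request": base64.b64encode(csr_der).decode("ascii"),
            "usages": list(usages),
        },
    }

csr = master_csr("master-2", b"placeholder-csr-bytes")
```

The signer and approver in the controller manager would then turn this into a signed cert unique to that master, which is the flow the speaker describes.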
D
I slowly understand your workflow for initialization, but the problem that's going to happen, too, is that people are going to want to manage the lifecycle of this: they'll want to add and remove members, scale up and scale down, and do cert rotation, and that's where things get really weird, because once we own that... That's why I'm starting to think, in my head, of having a separate tool that just does this lifecycle aspect; but part of that separate tool is the etcd operator. So this is circular.
A
So Mike took a proposal to sig-auth for, I can't remember what we call it, something like an explicit deny. So the authorizers basically had an "accept", which would stop the authorizer chain, or an "I don't care", which would allow you to go to the next thing in the authorizer chain. The second one we called "deny", but it wasn't actually deny; it was really "I don't care". So he's adding a third state, which is explicit deny, as you're walking down the authorizer chain.
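The three-state chain described above can be sketched as a toy model (this is not the actual Kubernetes authorizer interface): each authorizer returns allow, explicit deny, or no-opinion; allow and explicit deny stop the walk, and no-opinion falls through to the next authorizer.

```python
ALLOW, DENY, NO_OPINION = "allow", "deny", "no_opinion"

def authorize(chain, request):
    """Walk the authorizer chain: the first ALLOW or DENY verdict wins;
    NO_OPINION falls through. Default-deny if nobody has an opinion."""
    for authorizer in chain:
        verdict = authorizer(request)
        if verdict in (ALLOW, DENY):
            return verdict
    return DENY

# Example authorizers. The "etcd-operator" credential name is a
# hypothetical stand-in for the special credential discussed later.
def secrets_guard(req):
    if req["resource"] == "secrets" and req["user"] != "etcd-operator":
        return DENY          # explicit deny: stops the chain
    return NO_OPINION        # fall through to the next authorizer

def allow_all(req):
    return ALLOW

chain = [secrets_guard, allow_all]
```

With only two states (allow and "I don't care"), `secrets_guard` could never veto a later authorizer; the explicit-deny state is what makes the veto possible.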
C
Then we have basically most of the security features we need to be able to self-host the way CoreOS does with secrets. I mean, if we had that, we could make an authorizer that would explicitly deny all cluster secrets to anyone requesting them, except for our special credential. And also, with the node authorizer improvements, we could make it impossible to automatically graduate from "I ran kubeadm join as a node"; right now, nodes can mutate the spec.
D
Security is an important aspect, but if I was doing the most secure thing, I would have a separate Ansible script and a totally separate etcd cluster, and I would never self-host etcd, and I'd have my CA set up and properly configured. So I'm not going to even pretend that this is gonna be the most secure option, but it should be the most user-friendly one. These are the trade-offs.
D
We're gonna have to weigh it. We'll provide the option: you can have your external etcd, and we will absolutely support that. But if you don't have that configured, or you don't have the tooling for that, then we will have this default option for you, and the default option should prefer ease of use over the most secure environment.
B
Okay, that answers my question. Yeah, okay, I don't want to go down the rabbit hole. I was just wondering: when a new master is set up, does it have local copies of the certs on disk? It doesn't sound like it does. No.
C
It's really important. I mean, there are ways to lock kubeadm clusters down, but again, we do have the node token thing. I mean, the token isn't that long; it could probably be brute-forced pretty easily, I don't know, if you have a really powerful machine. Anyway.
C
Yeah, I do agree with Tim. I think the secrets, without having this authorizer, might be a deal breaker for me, or at least two or three months ago it was. Now, in light of my lack of progress, I might be willing, given the promise that Mike is working on this; this feature might make it.
B
Yeah, so it might be worthwhile just syncing with them and finding out what their graduation criteria are, and what their current doubts are about making it fully stable right now. So that could be interesting, to put our discussion into context too. Yeah.
C
So if we had this thing while in alpha, Tim, would you agree on using the kube-proxy to do the load balancing initially? I mean, it is iptables, yes, but I guess it will take quite some time until we have this in production anyway. By then IPVS has probably graduated to beta, maybe to default, and then we can reconsider. But anyway, that would make us have something concrete at least, and not saying "well, go with your own."
D
So I think having a default option of the operator, knowing the certain limitations that we have, but working with CoreOS to get some of the other bits out of the way, seems like a reasonable approach to me, because that kicks us out of lifecycle management, which I don't want to do, as with some other pieces.
C
Just to be clear: everything here will stay behind the HA feature gate, true, yep; at least for like two cycles or something, at least for one cycle if we get something in for 1.9. I don't think the coding is actually that hard. I mean, we have self-hosting; we have self-hosting with secrets; Andrew and I worked on that last cycle, so most of that code is done. We need checkpointing, and we need the operator. Jamie has the operator work-in-progress PR; it probably has to be rebased and stuff.
C
Well, so when a node is not ready, when the CNI network is not set up, right now the full node status goes NotReady, and when the node condition is NotReady, the scheduler doesn't even consider that node for scheduling. So we're ending up in this loop where we can't schedule a new pod, because the scheduler filters out the only node that exists there, and we don't have CNI. So the node is not ready; hence the new granular taint. Thanks.
E
Theoretically, that means that we can also handle clusters which were started with a previous version, where the etcd is based on the static pod. So on our path we can kind of adopt those kinds of clusters as well. That means that we can eventually think of installing the etcd operator when the second node joins. I don't know if this makes sense.
D
On the second master: it seems totally legit. The original document actually had a bootstrap etcd and then it did a pivot, right? So the original document was that you have the original laydown of kubeadm init as normal, and only on the second master join do you start to do the deployment and then the pivot to the operator.
C
We might pivot on the first master: we might pivot from local etcd to self-hosted etcd on the first master by itself, in its initialization. Or we might do it, as Fabrizio said, on the second one; the second one would remove the issue with the CNI not-ready thing. So, I mean, is that all kind of agreed on right now? I mean, I'm not saying we should rip out the earlier things that are there, but I do think...
D
We can slim down the doc, and we can also trim it: say "we proposed this other option, and it has these faults", and then just nix it, right? The doc doesn't need to outline all the details; we can just punt and say "here's some background information." We don't need to outline all the minutiae and every other thing inside there.
C
Yeah, and then we'll work with them to improve the secrets situation. And the last thing: we could easily use the kube-proxy to update its own endpoint. So basically, when we talk to 10.96.0.1 on the worker nodes, kubeadm join for a node would write an initial iptables rule in the same way as the kube-proxy would, and the kubelet starts talking to the one endpoint of the API server.
C
The kube-proxy goes live, notices that, oh, now the kubernetes service has three endpoints, and rewrites, updates, the local iptables rules; and then the reconciler keeps track of which API servers are alive, and that automatically propagates to all nodes in the cluster. And then we'll just see what happens with IPVS or whatever in the future. Or, I mean, the other option is basically to build our own controller that does the same thing, but the downside there is...
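The behaviour described above, a well-known VIP that stays fixed while the set of live API servers behind it changes, can be modelled with a tiny sketch. This is purely illustrative; the real kube-proxy does this with iptables DNAT rules, not Python, and only the VIP (10.96.0.1, the default kubernetes service ClusterIP) comes from the discussion.

```python
import random

# Toy model of the kube-proxy behaviour: clients always dial the fixed
# service VIP; a rule table maps the VIP to whichever API server
# endpoints are currently alive, the way iptables DNAT rules would.
SERVICE_VIP = "10.96.0.1"

class EndpointTable:
    def __init__(self):
        self.rules = {}

    def reconcile(self, vip, live_endpoints):
        """Rewrite the rules for a VIP when the endpoint set changes."""
        self.rules[vip] = list(live_endpoints)

    def resolve(self, vip):
        """Pick one backend for a new connection to the VIP."""
        return random.choice(self.rules[vip])

table = EndpointTable()
# First master only, then the reconciler notices two more API servers.
table.reconcile(SERVICE_VIP, ["10.0.0.1:6443"])
table.reconcile(SERVICE_VIP,
                ["10.0.0.1:6443", "10.0.0.2:6443", "10.0.0.3:6443"])
```

The kubelet never needs to learn the new masters; it keeps dialing the VIP and the rewritten rules spread the connections.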
D
So I'm totally cool with experimenting behind an alpha feature gate: say, okay, if you want to experiment with the idea of how to load-balance the master connections, I think that's totally fine, so long as the default configurations for load balancers are plumbed all the way through, and they are right now, right? You can override and specify the single endpoint and it gets propagated through to the nodes.
D
It all has this feeling of being one of those hard problems where there are no good answers that I know of. I do like the idea; I think there is more cleanliness to the idea of having this local proxy, where nodes connect through a single endpoint and then that proxy does the magic for you. That's very similar to how Mesos works, actually: they deploy an HAProxy with every node, and that's also how they do their service magic.
D
So I'm not opposed to some of this, but I think gating it behind a feature, and who's going to be executing it and doing the reviews, is important; the logistics behind it. So we need a doc that outlines this piece, just so we're not randomly coding nonsense, I think.
D
This thing is getting too conflated. I think we should probably separate it out into two separate bits: one is the etcd bit, so we should have a doc on the etcd bit, and here's the other one, talking about how to connect to the masters. Does that seem legit? Because otherwise we're gonna have a monster document.
E
Let me see: we embed the load balancing in this, but this will be developed in another doc, if I got it right. And then we can move it also into a separated document, leaving the original document with the implementation detail written by Lucas about phases, or what to change. These are not design docs; these are implementation docs, if you agree. Yeah.
C
Let's go with that one first: lock secrets down as much as we can, see how secure it is at the end of the day, and let's use this to test, then see: is this something we're confident in promoting to beta, or do we have to start over? But in the current situation we're in a dead end anyway if we're not doing this, so at least we're trying to get something instead of nothing.
D
I'll poke on Slack too, and I'll try to get some testing going on this, because I want to, yeah. I surely want to test a lot more, earlier in the cycle this round, because last iteration was, I don't know... I'm not going to trust anything until 1.8.2 is released again.