B
So I'm currently in the process of trying to use the etcd operator for kubeadm, but I ran into a problem last week, because I realized that the current etcd operator relies on a working DNS: the way it advertises new members of the etcd cluster is through hostnames, and that's a problem for kubeadm because we don't have a working DNS server by that time.
B
So if we go to generate certificates based on pod IPs, we need to shuffle the logic around to make it work. Hongchao and I have been discussing on Slack the best way we can do that, and I think there are two ways. Number one is using the CSR model — currently, the kubelet sends a CSR request to the API server.
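The kubelet-style CSR flow mentioned here looks roughly like this on the wire — a sketch only: the object name is invented, and the API version shown is the certificates.k8s.io/v1beta1 of that Kubernetes era.

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: etcd-0-serving              # hypothetical name
spec:
  request: <base64-encoded PEM CSR>
  usages:
  - digital signature
  - key encipherment
  - server auth                     # serving cert; a client cert would use "client auth"
```

A user then approves it with `kubectl certificate approve etcd-0-serving`, or an auto-approval controller does.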
B
The API server generates either serving or client certs based on the root CA, and then either a user approves the request or you have an auto-approval policy in place. That's solution one. The second solution is to generate an etcd root CA in the etcd cluster itself, expose that to the operator, and then the operator can use that CA to generate new certs every time a pod comes up — we would do that in an init container.
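A minimal sketch of what that init container could do with plain openssl — every file name, subject, lifetime, and the pod IP below is invented for illustration, and the real operator would mount the CA rather than generate it on the spot:

```shell
# Sketch: mint a per-pod etcd serving cert from a root CA, as the init
# container described above might. All names and the pod IP are made up.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Root CA (in the real flow this is generated once and handed to the operator).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=etcd-ca" -days 365 -out ca.crt

# Per-pod serving cert, with the pod IP (not a hostname) in the SAN.
openssl genrsa -out peer.key 2048
openssl req -new -key peer.key -subj "/CN=etcd-peer" -out peer.csr
printf "subjectAltName=IP:10.96.0.11\n" > san.cnf
openssl x509 -req -in peer.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -extfile san.cnf -days 30 -out peer.crt

# The cert chains back to the CA, so etcd peers holding ca.crt will trust it.
openssl verify -CAfile ca.crt peer.crt
```

Because only IPs go into the SAN, nothing here depends on cluster DNS being up.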
B
We'd verify that this is the type of request we think it is before approving it. So yeah, that's my personal preference. That's the current state of it — Hongchao, did I miss anything out there? Do you have any personal opinion on what you think is the better solution in terms of the operator?
A
So you mean the certificate generation — which option to use, right? I would definitely prefer if we, from the get-go, made a distinction between the etcd CA and the API server / cluster serving CA, because, well, they are two different security boundaries. Granted, it makes things even harder, and means even more certs — I mean, we already have way too many of them.
A
But in the end that's probably the right way to go, and I think correctness is more important here. In the end we're the ones that are supposed to abstract away this complexity, right? And over time we might find something that achieves the same thing with fewer certs, or whatever — I don't know, SPIFFE might come in and help us, who knows. But yeah, that's that.
C
It doesn't need to create a secret. As long as you give it the root CA cert — and, on the etcd side, the signing cert and signing key — it can pass those along with the etcd pod, and then use an init container to generate the cert based on the host IP. So it doesn't need to create any secret; it's all self-contained, yeah.
C
Oh, I see what you mean. But the signing cert and the signing key are provided by some admin, who is an etcd user, and that user also owns the etcd side — so in our model, the user who provides the signing cert also owns the etcd. If you can see the signing certs, they're provided by the user and owned by the user.
B
Yeah, I mean, I guess that might also be an option. So — correct me if I'm wrong — when the operator creates new pods, it basically listens for new nodes, and at the point it detects a new node it creates a new pod for that node, right? So my point is that at that point, where it adds the new pod, it's going to know the new node IP. Is that right?
B
I was just saying — because, Lucas, if you have concerns about giving pods access to the root CA key, we would be able to get around that by having the operator generate the certs instead, and then the pod just grabs whatever has been generated by the operator, right? So you're reducing the attack surface that way, if we don't trust the users or whatever, yeah.
C
Yeah, even if you use a secret, the pod can still see it — that's the thing: if it's a secret, it gets mounted into the pod, and the pod is still going to see the cert, yeah.
B
So I think this discussion would benefit from a design doc or something like that. I know we mentioned in Slack that that would be the next stage, so I think writing all of this down and then figuring out the best implementation would be a good next step. The question I had is that code freeze is on the 22nd, which is in like two weeks, so if we need to change something in the operator, my question is: is it...?
B
In my opinion, there was no way we could deploy it. I'm trying to think of a possible hack — I mean, you could try to add an entry to the etcd pod's hosts file to temporarily allow that cluster hostname to resolve, but that sounds super janky to me; I don't think that's a good idea. I'd much prefer to see whether we can refactor the operator to use IPs rather than do those things, yeah.
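For the record, the hack being dismissed would amount to something like this — hostname and IP are invented, and it's shown against a scratch copy of /etc/hosts rather than the real file:

```shell
# Janky workaround sketch: pin an etcd member hostname to a pod IP by hand.
# The hostname and IP are made up; a pod would edit its real /etc/hosts.
hosts=$(mktemp)
cp /etc/hosts "$hosts"
echo "10.96.0.11 etcd-0.etcd-cluster.kube-system.svc" >> "$hosts"
grep etcd-0 "$hosts"    # the name now "resolves" via the hosts file
```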
A
That sounds good. And I mean, this is generic functionality anyway — "use advertised IPs or hostnames", for instance, is a general thing, and improving the security with pre-provisioned certs, as opposed to giving the root CA key away everywhere, is itself a general thing. So yeah, I think a design doc there sounds good, changing the operator in the next release, and then we'll just see how far we've come this cycle.
E
If we're going to include the operator as part of this, I think the thing we need to track and keep in our minds is getting the client inside of Kubernetes updated to 3.2.10 once 3.2.10 is available, as well as tracking the server to be 3.2.10 inside of the operator deployment that we will eventually use.
C
I see. Based on last week there's clearly no estimate yet — we had been keeping in sync every week, and I'd been asking about 3.2.10 as well — there's no estimate for now. I wouldn't recommend updating just for this short window, but it will pay off for the long term.
E
That's pretty dangerous, though. There's that deadlock constraint that exists on the client side that's really bad for HA clusters, right? So I know that we want to give people time to fix it properly, but we need to get a fix out in the field, because there's one blog post that's publicly available now outlining the issue that exists and what they bumped into, yeah.
C
Actually, we encountered this very early on, just two or three months ago, in our tests — we run a lot of tests there — but unfortunately the owner of that code just doesn't care about this and doesn't want us to touch it anymore, unfortunately. But that's definitely something — it had actually been hidden for a long time.
A
Yeah — and please put this in the meeting notes as well. So just to recap what we said: etcd 3.1.x has known issues that are even worse than 3.2.x, but 3.2.x still has known issues, at least in the client. So what we want to do is wait for 3.2.10 to be released, bump the client inside of the apiserver in time for 1.9 to go out, and preferably use a 3.2.10 etcd server as well.
E
In the ideal rainbows-and-unicorns world, yes. I don't know if we will achieve that within the timeframe, according to the feedback from Gyuho and Hongchao, so I think for the time being we are safe enough, in that we are still alpha, and I'm cool with that. This also actually gives us time to properly bump from a kubeadm perspective, because right now kubeadm went from 3.0.17 to 3.1.10, and this will allow us to properly bump per release cycle. I think we're still okay from our side.
C
Personally — I can only represent myself here — I want to apologize for all those things. I wanted to push the testing, and for HA as well it's just not doing well enough. I've spent a lot of time just focused on making etcd stable for these cases, and we haven't done a good job there, so sorry for the delay in all those releases.
A
Quickly, so the obvious differences are DaemonSets versus Deployments, but kube-up uses flannel unconditionally and kubeadm doesn't — and this is the source of the etcd operator issue we're seeing, right? When we don't deploy any networking solution, we don't have the DNS pods up and running; hence we can't use hostnames inside of the cluster to resolve the peers, and so on.
B
That should be fixed by that new scheduling functionality — well, it will be: it was added in 1.8 and works on 1.9 — in terms of not being able to schedule pods to network-not-ready nodes. So a lot of the reasons why we chose DaemonSets in the first place — we may not have that problem anymore.
A
Personally, I do think that DaemonSets are better for predictability and things like that — we don't have to scale things up and down — but sure, I mean, at least the technical blocker is fixed in 1.9, thanks to your work. But still, the fact that DNS is unavailable in kubeadm is still a thing, and we just have to use IPs instead.
B
I think Ryan wanted that as a feature of kubeadm: being able to run a render command that spits out all of the manifests, and then users can tweak them and deploy them, basically. But I'm not sure how feasible it is, because if we did render all the manifests and then the user runs kubectl create on a directory, that doesn't really guarantee that the control plane will come up in a proper way, right? Because it's more than just a blind execution of manifests. So yeah, I commented on that specific bit.
E
With regards to checkpointing, I talked with Yu-Ju last week Thursday. She had some comments about the current implementation: some minor things I need to reshuffle — ordering — and a couple of things to fix, as well as getting a node e2e test in place, which is a little thorny, because the way the node e2e tests work is that they have a pre-configured initialization routine, and I don't really want to enable that behavior for everything, but I could. So I'm going to have to get those two pieces in place.
E
My goal is to get that done this week, so that I have plenty of time before code freeze to just get it in, and then we're going to need cycles to enable it, test it, and fix any issues we find. But it's not a lot of code; it's just a matter of the initialization routine. The hardest part is that the way the kubelet brings its processes up is very difficult to follow. But that's really all that's up to date.
E
In the high-availability mode you don't care, because you're going to automatically re-sync with the main API server — you have a DNS name for the main API server, or the, what's it called, the VIP — yeah, it's going to reload the entire set of iptables rules; it'll do a full re-list. It's only a problem if you had a single-node configuration, and then you would have to checkpoint the iptables rules — but it's kind of bananas to deploy that way.
C
So currently our usual deployment was like this: we have three master nodes and we label them, and the etcd operator figures out whether there are more master nodes and adds new members when we deploy new pods, and waits for crashed pods to restart or something. But in the future the best solution would be — it's not pod-based, it's actually node-based, so it's kind of DaemonSet-like.
C
It knows how many master nodes there are and how many nodes there are, and then you can just add tolerationSeconds, right? Right, yeah, that's something we did as well. And that's nice, because even when you add tolerations you couldn't do that purely pod-based — you actually want to make it very node-based. So it's nice that the node is not deleted; it's tainted, or unready, and you can tolerate that for a while, yeah, I think.
E
Just setting something in between, yeah. And the configuration we're talking about here is not the soup-to-nuts bulletproof HA configuration, so I think it's good enough to get you a happily started environment, and I think updating the tolerationSeconds to some reasonably high value — kind of similar to what we did with the certs, not that high, not the ten years we use for certs, but high — is a reasonable thing to do, right?
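Concretely, bumping tolerationSeconds looks like this on a pod spec — a sketch only: the 300-second value is arbitrary, and the taint keys shown are the later GA names (the alpha-era names differed).

```yaml
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300      # tolerate an unready node for 5 minutes before eviction
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
```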
E
Well, we're kind of enabling self-hosting as the default, or, you know, version X of self-hosting — there are a couple of different versions — as well as adding HA as a feature-gated thing, right? And I think... maybe not even publicly stating that it's feature-gated, or maybe we should — I don't know, I don't know what we should do.
E
One thing I was contemplating — and I know you created a container for this too — is our initialization. We've had a lot of thorny, weird issues with regards to deployment of debs and RPMs, and at some point we could just have our own deployment container that host-mounts and does the right thing, because it's basically shuffling files onto your own system and doing a systemctl restart — and our RPMs and debs are kind of crap anyway.
A
I'll just comment that it was incredibly hard to find the time to maintain, so I eventually just deprecated the thing, and, well, yeah. So what I really want to do in 1.9, if possible, is to start reading the kubelet's configuration from a file, so that kubeadm can actually set the parameters for the kubelet in a well-known location — this way they can talk, right? Right now...
A
This is because, with the debs, the kubeadm package has the kubelet drop-in, right? So if we upgrade the kubeadm deb but not the kubelet deb, we're going to get new arguments for the kubelet while we still have an old kubelet, and this is kind of terrible. What we can do here is start using dynamic kubelet configuration, which should soon be a thing, I hope, but also start loading from a file.
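"Loading from a file" means pointing the kubelet at a config file via --config instead of baking flags into the package's drop-in. A sketch, using field names from the later kubelet.config.k8s.io/v1beta1 API (the alpha group of that era was named differently) and made-up values:

```yaml
# /var/lib/kubelet/config.yaml — a well-known location kubeadm could write to
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
clusterDomain: cluster.local
clusterDNS:
- 10.96.0.10
readOnlyPort: 0
```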
E
Long term — because of the whole split between Bazel and the release repo, and I don't want to put any effort at all into the release repo and its process, because it's insane — I'd rather just get out what we can get out and then move on to 1.10, and hopefully in the 1.10 cycle we'll have the auto-builds with Bazel in place.
A
So what we could do in time for next week is hopefully have the checkpointing PR ready to merge, with SIG Node approval. A recent change that merged was the kube-proxy ComponentConfig; a PR that refactors the whole kube-scheduler internal structure was merged also, today-ish, and that unblocks the kube-scheduler ComponentConfig. So hopefully in 1.9 we'll be able to embed both the kube-proxy and the scheduler's ComponentConfig structs inside of the kubeadm MasterConfiguration API.
A
Well, to have validation and defaulting for the different components automatically, right? Right now we have a kind of untyped string-to-string map of arguments, which are command-line flags, and that's kind of bad — no structure — so ComponentConfig here is preferable. The only thing that worries me, though, is customization. Say we have API servers and we use ComponentConfig for them, and we have an address field, for example — that's not going to be the same for all the API servers.
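The contrast in question, sketched against the kubeadm v1alpha1 MasterConfiguration of that era — the extra-args map is real, while the embedded ComponentConfig shape below it is hypothetical:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
# Today: untyped string/string map of flags — no validation, no defaulting.
apiServerExtraArgs:
  audit-log-path: /var/log/audit.log
# Proposed: embed the component's own typed config instead (hypothetical shape):
# kubeProxy:
#   config:
#     mode: iptables
```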
A
So, I just said that ComponentConfig for kube-proxy and kube-scheduler is kind of ready — they're in their own API groups, soon to be graduated to beta, hopefully. The same goes for the kubelet ComponentConfig API. And that's way better than the string-to-string maps of command-line arguments that we have now, with no validation or anything.