A
So it seems like the experiment was successful, and we now have more data on that it works and how it works. But we don't know if we really want to do it. This would probably use something like Envoy, which has real load-balancing capabilities, instead of using an iptables rule, where the load balancing is basically random.
A
A proxy type, so basically something that would run on the node and proxy the communication for the kubelet, for example. So when a kubelet should talk to the API server... well, the problem is that the kubelet can only talk to one address, and since we have multiple API servers, we have to have some place where we split this one point of contact into multiple points of contact to the different API servers. And that could be Envoy or something, or it could be...
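The split point described here can be sketched with an nginx `stream` block; the three API server addresses below are placeholder assumptions, not values from the meeting:

```nginx
# Sketch only: one local endpoint for the kubelet, fanned out to three
# API servers with real (least-connections) load balancing.
stream {
    upstream apiservers {
        least_conn;
        server 10.0.0.1:6443;   # master one (example address)
        server 10.0.0.2:6443;   # master two
        server 10.0.0.3:6443;   # master three
    }
    server {
        listen 127.0.0.1:6443;  # the kubelet's single point of contact
        proxy_pass apiservers;
    }
}
```

The kubelet would then only be configured against `127.0.0.1:6443`, and the proxy handles the fan-out.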
F
So I know that Martin, you're on the call, right? He basically wrote a kind of straw-man guide for setting up HA for kubeadm, or at least an integration between kubeadm and an HA environment, and he used nginx and keepalived as well. So yeah, that is an option too. I don't know, I mean, again, it comes back to the nature of kubeadm, right?
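For reference, a keepalived VRRP definition of the kind such a guide would use; the interface name, priority, and virtual IP below are placeholders:

```
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other masters
    interface eth0            # assumed interface name
    virtual_router_id 51
    priority 101              # lower value on the backups
    authentication {
        auth_type PASS
        auth_pass changeme    # placeholder secret
    }
    virtual_ipaddress {
        10.0.0.100            # floating VIP the clients would target
    }
}
```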
A
Yeah, I mean, this is all going to be docs in the first iteration that we're doing for 1.9. It has been possible to run HA clusters with kubeadm basically from the get-go, if you can set up an etcd cluster externally, and if you can create this load balancer in your cloud or whatever, using DNS, and then copy the certificates from master one to masters two and three.
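The certificate-copying step can be sketched roughly like this; the standard kubeadm PKI paths are assumed, and the host names are placeholders:

```shell
# Rough sketch: after kubeadm init on the first master, replicate the CA
# and service-account keys to the other masters. Not tested as-is.
for host in master2 master3; do
    ssh "$host" mkdir -p /etc/kubernetes/pki
    scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
        /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub \
        "$host":/etc/kubernetes/pki/
done
```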
F
You know, try to make it as modular as possible, so that we can have those switch boxes on the website and people can easily plug things in and out, you know: oh, this is your load-balancing setup step right here.
F
You could use a cloud load balancer, or you can use keepalived, or you can use, I don't know, Envoy, that kind of thing, and we can plug in different bash scripts depending on which of these people choose. And from the following steps onwards it doesn't really matter how they did that previous step; from that point on it's always the same experience, right?
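The "plug in different bash scripts" idea could look something like this small dispatcher; the script names are hypothetical, the point is only that every step after the load-balancer choice stays identical:

```shell
# Dispatcher sketch: map the user's load-balancer choice to a setup
# script; all following steps are the same regardless of the branch.
lb_setup_script() {
    case "$1" in
        cloud)      echo "setup-cloud-lb.sh" ;;
        keepalived) echo "setup-keepalived.sh" ;;
        envoy)      echo "setup-envoy.sh" ;;
        *)          echo "unsupported load balancer: $1" >&2; return 1 ;;
    esac
}

lb_setup_script keepalived   # prints: setup-keepalived.sh
```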
F
This ties into the HA doc as well. A few comments asked about the form factor for etcd in an HA environment, and I have no strong opinion about this. My opinion was that having sort of dedicated nodes would be preferred, because then you have hardware isolation, and if a master goes down it doesn't screw up your database, that kind of thing. So my natural assumption was to have them separate.
F
You know, like three VMs for your etcd layer and then three for your masters. But I think Martin and Lukas both said that we could co-locate that on the masters themselves, right? So yeah, I just wanted to gather feedback and try to figure out whether anybody has experience doing this, whether they have any observations about best practice and stuff like that.
G
Kubernetes is tied to the etcd life cycle, and etcd has, you know, particular constraints around how you want to roll and upgrade it, for instance, how you want to manage the general life cycle of etcd. And that's still one area where we're not really clear on how we're going to do this with kubeadm, I guess. I know that Jamie, you've been working on this, and I meant to touch base with you earlier this week, but I would love to touch base with you on your thoughts and findings around the etcd operator and such.
B
We had a discussion within our community, and some providers, some current cloud providers, would prefer to have etcd still hosted within a Docker environment and not install anything on the actual host. So their basic rule of thumb is: OS plus Docker, that's it. Nothing else goes onto the host in native mode. So if there is any way to kind of push etcd as a containerized solution, that would be good.
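Running etcd that way might look like the following sketch; the image tag, data path, and URLs are illustrative, and a real HA setup would need TLS and peer flags on top:

```shell
# Illustrative only: a single etcd member in a container, with its data
# kept on the host so the container stays disposable.
docker run -d --name etcd \
  --net host \
  -v /var/lib/etcd:/var/lib/etcd \
  quay.io/coreos/etcd:v3.1.10 \
  /usr/local/bin/etcd \
    --data-dir /var/lib/etcd \
    --listen-client-urls http://127.0.0.1:2379 \
    --advertise-client-urls http://127.0.0.1:2379
```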
F
Yeah, I mean, it's definitely becoming increasingly popular to do that. My question is: when you say it's favored by providers, you know, service or cloud providers, is this mass-market solutions or something more specific? I mean, do you see many HA environments that dockerize etcd?
F
So it's worth pointing out that, hopefully in 1.10, Kubernetes 1.10, we should have dockerized etcd via the operator, so that should be provided out of the box by kubeadm. But until that point, the question is just: how do we want to document this so people can set it up themselves? I think it's easier to do systemd, and if there's an easy way for people to dockerize etcd...
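A minimal systemd unit for etcd along those lines might look like this; paths and flags are an illustrative sketch, not a recommended production configuration:

```ini
# /etc/systemd/system/etcd.service (sketch)
[Unit]
Description=etcd key-value store
After=network.target

[Service]
ExecStart=/usr/local/bin/etcd \
    --data-dir /var/lib/etcd \
    --listen-client-urls http://127.0.0.1:2379 \
    --advertise-client-urls http://127.0.0.1:2379
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```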
F
...we can document that as well. But just to go back to what Martin was saying: if we did go with systemd as the sort of base form factor, then it doesn't really matter how you host etcd. You could dockerize it and have it in a container, right? It's just that then the lifecycle of that container is managed by the OS rather than by some kind of user.
A
You do have systemd these days, but again, there's always this question: well, what should we do if we don't have it? Also, I found that upgrades with kubeadm right now are really easy, as we can just shift around these plain-text files on disk; we can just bump the version and the kubelet will do all the stuff for us. We don't have to execute a lot of steps, like: first edit the drop-in file somewhere in a specific location,
A
then systemctl daemon-reload, systemctl restart kubelet, restart the etcd service, and things like that, and handle rollback. I think it gets a lot easier for us, and Sergey was also pointing out that if we get it dockerized, then we don't pollute the hosts in the same way, like between versions: well, we have etcd 3.1.10.
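The manual sequence being described can be written out as follows; the drop-in location is the usual kubeadm one, but treat it as an assumption:

```shell
# Sketch of the manual upgrade dance described above.
# 1. Edit the kubelet drop-in, e.g.
#    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload      # pick up the edited drop-in
systemctl restart kubelet
systemctl restart etcd       # if etcd runs as a systemd service
# ...plus rollback handling if any of the above fails.
```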
A
Now we want to upgrade to etcd 3.2-something or whatever; that's just a matter of config-file rewrites. So yeah, and again, to be clear about this effort: there has been talk upstream, like Kelsey's Kubernetes the Hard Way guide. This effort is basically a first baby step towards that, making something the Kubernetes developers could recommend. I mean, there's...
A
It's more about... I mean, this has been possible for a long time, but still, I get pinged many days a week on Slack by people asking when kubeadm HA is done, and it's like: well, you could do it, but then you have to do these three things yourself. So it's more about reducing that load.
F
Maybe he can confirm this, but for their Tectonic product, they've just moved to hosting etcd in systemd rather than the operator and dockerized. So I think there is a lot of interest in hosting on the OS. But if we do the same thing, where we can balance these use cases and offer both as documentation, that would be cool.
G
So, just on the point that I had made earlier about running etcd: I wasn't intending to say running it in systemd. I think we would prefer to run it in Docker under the management of Kubernetes. But the tricky part here with kubeadm is that it's managed with static pods, and when you get into the life cycle of etcd, like, you know, even joining nodes to the etcd cluster, or making etcd itself HA inside of Kubernetes under kubeadm, it gets kind of tricky.
F
I'm not sure either, so tomorrow I can have a look at that. I mean, if we just hosted them in static pods, I don't think much would change in the HA guide. I think we would still have to either manually generate the certificates and then try to figure out how member joining works. And one question I had for Lukas: do you think we could do a lot of this setup with the phases thing? I know that we have a phase for etcd setup, right? Does that generate certificates, or...?
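For context, kubeadm exposes its phases as individual subcommands; the exact names changed across releases, so the following is only indicative:

```shell
# Indicative only: phase subcommand names vary by kubeadm version.
kubeadm alpha phase certs --help                      # certificate phases
kubeadm alpha phase etcd local --config kubeadm.yaml  # local etcd static pod
```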
A
But as we've talked about in earlier meetings and at other times, the master is the boundary right now; if you tolerate normal workloads to run on the master, you violate the security layers anyway. But, I mean, for a really, really production thing, you probably want your etcd nodes, like dedicated etcd peers on dedicated nodes, then masters with the control plane, and then the worker nodes, and these should never touch each other, to keep the different boundaries clear.
A
But we have to balance between usability and security, and we've made this compromise for now. So no, kubeadm doesn't include anything to generate the certificates or similar, but I think static-pod hosting shouldn't change anything from kubeadm's point of view. I mean, from kubeadm's point of view you have to use external etcd, like, point it at the nodes, and...
F
Right, so in the guide: if we did host, say we have three masters and we hosted etcd in static pods, the way that the master configuration files would look, I assume they would reference the external IP of the master node, right? They wouldn't refer to localhost. Yeah. So, to all intents and purposes, it's external etcd. Yeah, okay, yeah, that's right!
A
Yeah, so a couple of months ago or so, we were actively discussing what we should do with etcd. So we want to have this kubeadm init, then kubeadm join master, or something like that, flow, so we could add a new master dynamically with the minimum amount of effort. And then we have to cluster etcd in some way, and we were thinking...
A
I don't think that excludes, per se, an experiment like the one Fabrizio has conducted on the kube-proxy self-hosting, that would tinker around with some etcd static-pod orchestration thing. I know Justin, that's Justin Santa Barbara, is really passionate about this topic as well, and from what I know he's looking into something like etcdadm, or I don't know what he calls it, but anyway, something that would cluster etcd in a more user-friendly way than the primitives that etcd actually gives us out of the box.
C
If I can give my opinion here: basically, my feeling is that we are moving away from static pods, also with the self-hosting thing, so the longer-term decision is not to propose or to manage an etcd cluster in static pods. So I prefer the solution based on the etcd operator, and for the scope of the guide that Jamie is writing,
C
I think that it is perfectly fine to propose the solution of a systemd-managed etcd. We can also propose the solution based on etcd on Docker, but it is not mandatory to have etcd on static pods. I think, the difference is, I would suggest etcd on pods only when we are able to use the operator.
F
Yeah, well, yeah, I mean, we've tested that the etcd operator can fulfill kubeadm's use case of clustering etcd and offering an HA capacity. I'm not sure the fact that it can't do that in a static-pod way takes away from that ability to solve the use case. The way I look at static pods right now is that they're a short-term solution.
F
It's a short-term stopgap until we have the operator, right, until we can host the cluster datastore in Kubernetes and use the Kubernetes lifecycle tools to manage etcd. Whether we still want to advocate static pods as a form factor after 1.10, that's a discussion we can have later on. But until we do have the operator, personally I don't see that much of a problem with static pods, because at the end of the day it's just a means to host etcd in Docker.
A
Okay, let's proceed then. Mm-hmm, yes, a 1.9 update: a general docs update might be needed, like just going through the general getting-started guide and checking whether there is anything we need to refine there. Something small that has changed, might not be the case, but generally there are two or three things that we change for each release. And I hope the man page and reference guide will be merged soon; Jamie just sent up a great PR, I'll look at it in a minute after the release is done.
A
Yeah. Please look at the release-cycle doc that I wrote up, in order to see where we are right now: it's stabilization. What should we do now? We have CI jobs which should be green; we should check that they are green. Release notes are coming up soon; getting bug fixes in is a priority.
A
Thanks to Jamie's deb script, I got the release managers to push all the debs, which is cool. So now we have everything in place; it should turn green on the next run. And I mean, the RPMs should be exactly the same, because when a release branch lead pushes the debs, he will remember to push the RPMs as well, but it's just to double-check for posterity with a script.
A
I will update the spreadsheet with that information, linked to the upgrade-doc PR you've sent now, just to have at least something for them to see. I mean, there are not many lines that have to change, but a couple. And yeah, maybe somebody going through the general getting-started guide and checking that it's up to date, just sanity-checking; that is the only thing I can think of outstanding on docs, because yours and Fabrizio's PRs are the two main ones.
F
So when Fabrizio first submitted his pull request, I remember going through the entire thing, which incorporates pretty much all of our docs, and correcting stuff anyway. So hopefully we don't even need to do that again, because that was like two or three weeks ago, so it should all be up to date, yeah.
A
...anymore. So do you want to discuss this right now, or should we invite people to a special meeting to discuss this: how we could inject failures into etcd and the HA installation more easily in e2e tests, so we can catch Kubernetes gRPC issues, for example, Kubernetes client issues?
A
For example, we had this HA failure where the etcd client built into Kubernetes had a bug, and gRPC had a bug, and then those combined with an etcd server bug made things pretty unstable in some environments when running an HA cluster; that led to some regressions at well-known companies. So yeah, I mean, most of these things will be fixed in 1.10, maybe already in 1.9. But as a post-mortem here, we want to increase the coverage, because we realized we have...
F
I think so. I think Tim just left some comments, so I'll have another look at it tomorrow and maybe add some more tabs based on our discussion today. So yeah, I think it's pretty much on track. Do you know when the doc freeze is, is it next week? Sometime around the first, I think; the freeze is like the first few days of December, so soon.
E
Blockers? No blockers from my side at the moment. I'm quite busy with some project work at the moment, so before next week I won't be able to do anything on this. Actually, it's good...
E
...that you asked me, because for me it would be interesting to know how I can contribute best, because now I've just written out my guide and written a couple of Ansible scripts. So if there are things that you would like me to do, or would like me to do in a different way, just tell me; I shall try. Next week I will be more involved in this again. Yeah. So...
F
I guess this ties into a meta-issue of what technologies we should include in official documentation, right? I mean, if we're going to put it in a tab, should we include Ansible? And then where does that leave people who use Puppet? Where does that leave people who use Terraform? So that's a very muddy area. I'm a big fan of providing a solution that users can copy and paste, but at the same time we also need to respect...
F
...you know, the agnostic nature of Kubernetes and kubeadm, right? So yeah, I don't have a great answer right now. I think you've already had a look at the pull request, right, Martin? Yes, yes, I have. So I'm just trying to think whether there's a way for us to... So I know that you want to address the load-balancing solution, right, actually...
E
Well, I mean, I am quite comfortable with keepalived and nginx for that. As far as Ansible is concerned, I don't think that we should push any tool forward. We should document what people need to do, and having some implementation of this is just an add-on. And so, since I am doing this anyway, I'm happy to publish it so we can link to it. But anyway, it will basically be my project, for my own stuff, in my own code base. Yeah.
E
I mean, I haven't really included any Ansible snippets, only some templates from Ansible, because they basically just show what people have to fill in, and under each Ansible template there is always one example of an already filled-in file, so I think that should be enough. You need some kind of placeholders, so why not use Ansible syntax? I don't see any harm in that.
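As an example of that pattern, a placeholder-only template next to its filled-in counterpart; the variable name `master_ips` is an assumption, not taken from Martin's actual playbooks:

```
# nginx.conf.j2: template with Ansible/Jinja2 placeholders
upstream apiservers {
{% for host in master_ips %}
    server {{ host }}:6443;
{% endfor %}
}

# nginx.conf: the matching filled-in example
upstream apiservers {
    server 10.0.0.1:6443;
    server 10.0.0.2:6443;
    server 10.0.0.3:6443;
}
```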
A
Okay then, yeah, and we'll definitely talk about how people could contribute in various ways. I mean, if there's someone here besides Martin that would be eager to step up and help contribute in more valuable ways, you can just ping me on Slack or whatever, and I'll reference some good things to take on.
A
I have a pile, I guess. Yeah, we will not meet next week; myself and Robert and the other leads are giving an in-person talk about how you can contribute to Cluster Lifecycle and related areas at KubeCon, basically at this time, I think. So yeah, the week after that, probably. And yeah, we have nine minutes left of our hour, which leaves us with nine minutes more to improve Kubernetes, right?