B
The automounted service account token is a kind of token for the API server, because anything that automounts a service account token is basically another secret mounted into the API server, which we don't need, and we don't have checkpointing for secrets. So it's like this weird circle where, like, I need the API server to get my secret, but my secret lives in my API server. So we don't need that and we should remove it.
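A minimal sketch of the change being suggested here, using the Kubernetes core types in Go; the pod itself is illustrative, but AutomountServiceAccountToken is the real field:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        off := false
        pod := corev1.Pod{
            Spec: corev1.PodSpec{
                // Opt out of automounting the service account token, so no
                // extra secret gets mounted into the self-hosted API server pod.
                AutomountServiceAccountToken: &off,
            },
        }
        fmt.Println(*pod.Spec.AutomountServiceAccountToken) // false
    }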
E
You have, like, one part for the scheduler, API server, and controller manager, and the second part is, like, testing that the actual HA functionality is working — which means that if you kill, like, part of the cluster, maybe, as part of the kill, some etcd servers, or, like, partially kill some schedulers, from the client perspective the cluster is still up and running, right. I'm not sure... maybe we need more focus on the second part, like, testing the HA functionality.
E
Actually, the existing tests are, like, HA setup or HA installation tests; but to test HA, like fault injection, we need to have the HA setup first. So that's why, like, I think maybe some kubeadm people are really interested, because, like, we could have that kubeadm setup also serve the fault-injection part. I think right now we have, from upstream, tests that cover some, like, fault injection.
E
But those tests are very basic, so I wrote a doc to just describe where I want this effort to go and what we should do. For example, like, we should inject failures between etcd peers — like, kill one etcd node while, like, still keeping the other two running — or, like, we should introduce some network partitions between nodes, and also look at the interaction between the other components and etcd. We can inject failures between and around etcd, and also, like, we...
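A minimal sketch of one such fault, assuming the injector runs directly on an etcd member and can shell out to iptables; the peer address is made up:

    package main

    import (
        "log"
        "os/exec"
    )

    // partitionFrom simulates a network partition: run on one etcd member, it
    // drops all traffic arriving from another member while the rest keep talking.
    func partitionFrom(peerIP string) error {
        return exec.Command("iptables", "-I", "INPUT", "-s", peerIP, "-j", "DROP").Run()
    }

    func main() {
        // Hypothetical address of a second member in a three-node etcd cluster.
        if err := partitionFrom("10.0.0.3"); err != nil {
            log.Fatal(err)
        }
    }

Healing the partition would just delete the same rule (-D instead of -I).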
D
Yeah, totally, I agree, Ben. So in terms of invoking the tests: with the end-to-end tests, you'd use kubeadm to deploy an HA etcd and then you'd run all of those tests, right? So it's contingent on HA etcd being in kubeadm. Or do you see, like, a stopgap measure to get this in now and then switch over to kubeadm when we have it?
B
There is no stopgap for deployments in a testing infrastructure. Doing that is a ridiculous amount of extra work and gives you a bad signal, and then the testing folks come and hunt you down. So it's not worth the effort. It's much more prudent to put the effort into setting up the full jig to do what you want, and then having it as a blocking PR job — that way, you know, it can't go bad. The good part is that we're actually testing it.
E
So right now it's a design proposal, and someone like Jack from Google is interested, but he's on vacation right now, so I haven't spoken with him. I know, like, some folks may be interested in this effort. So right now it's just a proposal, but if everybody agrees on a concrete design, like, then we can have some people working on that, and maybe some others also can help.
D
I think that dovetails into, like, my status update too. So the thing I'm currently working on is trying to get — well, trying to get kubeadm to deploy HA etcd clusters using the operator. We ran into a blocker because the operator assumes that there's a working DNS, which kubeadm doesn't do at that point, so I'm working with Hongchao to try and fix that, yeah. There's...
E
Another thing that I'm a little bit concerned about: actually, if you set up etcd with, like, the operator, it makes the story a little more complicated, because then you have, like, circular dependencies between etcd, the operator, and Kubernetes itself. So I think a better approach — I need to write a proposal — is, like, having a separate etcd cluster first.
E
Sorry — and the components also, like, couple with each other, and then we try to test the setup, and, like, after we can pass all of that, we can move on to the real testing. Because I believe most of the problems we will find initially are not really about how we set up Kubernetes, but more about, like, the internals of Kubernetes — how it uses the etcd client, or how the Kubernetes client interacts with etcd.
D
So this has been, like, a discussion item this dev cycle, in terms of, like, how we're gonna set up etcd HA, and we sort of came to the decision that, you know, we can try and implement the mechanism to self-host etcd ourselves, or we can try and use what's currently out there and save ourselves a lot of strife and work by just using the operator. And so far it looks as if it's feasible, it can work. Yeah, I don't know, I think it's... we're...
D
So I'm interested in sort of figuring out what the circular dependencies are and trying to figure out how they complicate our tests. Just trying to think of a good next step... I mean, maybe just carry on and implement it, and then try and run some initial tests to see how complicated it is.
E
Say the node cannot contact the API server, but etcd and everything else is still running, right. If it's a nice static etcd, like, everything will just keep on running; but if etcd is under the control of the kubelet, when the kubelet cannot connect to the API server, it may already decide to, like, kill all the pods on that node, because the node didn't really connect to the API server for maybe more than five minutes, right. There will be a difference there. This is not, like, supposed to happen.
E
Nothing will happen, because etcd is still up and running, right; but this time, like, when the node reconnects, the kubelet will really decide to kill that etcd pod. It's not really about, like, the operator, but about, like, running etcd under systemd being different from running it under a kubelet, because sometimes the kubelet decides to kill etcd, right — and when it kills etcd, is that right for kubeadm itself, right?
B
We're kind of going down a rabbit hole here. I think we can start the iteration in the process. There are other aspects that we can dive into with regard to the A/B comparison of self-hosted versus not, and we will support both with kubeadm — but kubeadm can't deploy the non-self-hosted version, at least not at this stage, and we don't even want to support that, because then we have to get into the lifecycle of it, which is different from deploying the operator.
B
I mean, as long as we have global visibility, we can use the SIG as a means to federate and to have other people get involved in specific issues. I think doing that is much more beneficial, and it has proven over time that it works, so long as we constantly do that: we write down the details and then we send an email to the list saying, hey, we have this thing.
A
Basically, the conclusion is to use kubeadm as a testing playground that is reproducible and runs tests easily for, like, the whole Kubernetes ecosystem — for example, in scalability tests, where we could tell it, like, five masters or something like that, and inject failures pretty easily by, like, killing or tainting masters or master pods, or, like, doing some disruptive workloads, easily. Still, we'd use the Kubernetes API, right; so the disruption framework would use the Kubernetes API, and then it would be kind of portable between different environments.
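A rough sketch of that kind of API-driven disruption with client-go (pre-context signatures, matching the era); the namespace and label selector are assumptions:

    package main

    import (
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Kill one control-plane pod through the API; the test then asserts
        // that the cluster keeps serving from the remaining masters.
        pods, err := client.CoreV1().Pods("kube-system").List(metav1.ListOptions{
            LabelSelector: "component=kube-apiserver", // hypothetical selector
        })
        if err != nil || len(pods.Items) == 0 {
            log.Fatal("no control-plane pods found: ", err)
        }
        if err := client.CoreV1().Pods("kube-system").Delete(pods.Items[0].Name, &metav1.DeleteOptions{}); err != nil {
            log.Fatal(err)
        }
    }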
A
I think one of the problems we have today is basically that, well, if you set up an HA etcd cluster, you're directly tied into your environment. So if we write a disruption framework for AWS, then it doesn't work on GCE — or actually maybe it's the other way around: you have something for GCE already, right?
B
So self-hosting provides a capability for doing platform-agnostic fault testing that we wouldn't have otherwise, because we have the full API at our disposal. So, as part of the framework that already exists there, we could do split tests and failure tests that we couldn't do otherwise — not without digging into, like, knowing the details of a platform, like if a node were to die, so to speak.
A
I think that's all I have today. So what about the GCE or GKE HA tests today? What do they test and how is that working?
C
So Philip pointed to a simple open-source test that he'd added in the doc, and I don't know exactly what the Tectonic folks have implemented internally in terms of testing, because I've been ignoring that — and none of them were available, unfortunately, to join the call today. So, no, I don't have any first-hand context there.
C
Interesting. Is it a nice bunch of well-testable... usable for e2e tests? That's the question — I don't know the answer to that. I mean, we've got something now. The last I tried, it's sort of demo quality: like, I can create and delete clusters. So, like, that's maybe close, depending on how reliable I think it is, yeah.
A
I would love to use that instead of kubernetes-anywhere if we start, like, finding the tests. I looked at what we have, and, well, it works, but I would really like to just do some kind of, like... in kube-deploy or wherever — somewhere where we could start writing the new upgrade tests and new HA tests.
A
Jamie, did you have something? I think we already covered it, yeah — but basically, to go through the proposals: I mean, I don't think we covered what you're planning to do with, what, IPs. Do we want to have a specific self-hosted etcd flag in the API or something, to make it use pod IPs instead of hostnames?
D
So, what do you mean, how does — so, basically, the operator detects that the etcd cluster is going to be self-hosted, and I think it determines that by the spec that you give it. So, you know, if you give it a spec object and it has the self-hosted struct embedded in it, it says: hey, this is self-hosted; from this point onwards, every time I add a new etcd pod, it uses pod IPs. So it's not using hostnames.
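Roughly, the shape being described — the exact field names here are assumptions, not the operator's published API:

    package spec

    // ClusterSpec sketches an etcd-operator-style cluster spec. A non-nil
    // SelfHosted acts as the flag: from then on the operator addresses new
    // etcd pods by pod IP rather than by hostname.
    type ClusterSpec struct {
        Size       int         `json:"size"`
        SelfHosted *SelfHosted `json:"selfHosted,omitempty"`
    }

    // SelfHosted marks the cluster self-hosted; its presence alone is the signal.
    type SelfHosted struct{}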
D
If it has that struct embedded in the spec... That's in review, you know, right now; it's quite a small pull request, so hopefully it'll get merged, maybe this week. And then the second part is to create a new sort of controller — an auto-approval controller for CSRs — which I submitted today. So, basically, when the operator launches a new etcd pod, it has an init container inside the pod that sends a CSR, and the operator has sort of a sidecar which auto-approves if the CSR looks a specific way.
D
So, if it matches, like, an organization field that says etcd cluster or something like that, and it can match the service name of the pod sent in the CSR, it will also approve it. So at that point the TLS generation can happen. So it looks as if, you know, it's pretty workable, and all of the TLS generation is going to go through the CSR mechanism in the API server, instead of us generating it ourselves — if that makes sense.
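A condensed sketch of that matching logic, against the v1beta1 certificates API of the period; the marker organization and the common-name convention are assumptions:

    package approval

    import (
        "crypto/x509"
        "encoding/pem"
        "strings"

        certsv1beta1 "k8s.io/api/certificates/v1beta1"
    )

    // shouldApprove mirrors the policy described above: approve only if the
    // CSR's subject carries the marker organization and its common name
    // matches the pod's service name. Both conventions are illustrative.
    func shouldApprove(csr *certsv1beta1.CertificateSigningRequest, serviceName string) bool {
        block, _ := pem.Decode(csr.Spec.Request)
        if block == nil {
            return false
        }
        req, err := x509.ParseCertificateRequest(block.Bytes)
        if err != nil {
            return false
        }
        for _, org := range req.Subject.Organization {
            if org == "etcd-cluster" && strings.HasPrefix(req.Subject.CommonName, serviceName) {
                return true
            }
        }
        return false
    }

Actual approval would then append a CertificateApproved condition to the CSR's status and submit it via the certificates client's UpdateApproval call.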
D
It'd be a sidecar in the operator pod, I think. This is all pending — this is mine, this is my assumption about how it'd work; if Hongchao has a completely different idea then we might have to change it, but my assumption would be that it's a container that runs as a sidecar in the operator pod and it listens out for CSRs that match a pattern.
D
I guess it could, yeah. I don't know... I mean, I don't have a strong opinion either way. I just thought it would be nice to have a sort of separation, also because, like, you know, the build artifacts would be separate. So, I don't know, maybe there's a case to be made for isolating it into a separate binary. I'm really not sure; I have no strong opinion.
A
Just, like, on first glance, I think it would make sense to have both a goroutine and a separate binary: like, create the library inside of the operator, and, like, have a binary as well as running it in the etcd operator — just so we don't raise the bar significantly in terms of runtime requirements.
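That layering might look like this — the package name and function are purely illustrative:

    // Package approver lives as a plain library inside the operator repo.
    package approver

    // Run starts the CSR watch-and-approve loop and blocks until stopCh closes.
    // The etcd operator can invoke it in a goroutine (go approver.Run(stop)),
    // while a thin cmd/approver main() wrapping the same call yields the
    // standalone binary.
    func Run(stopCh <-chan struct{}) {
        // ... set up the CSR watch and approval loop here (omitted) ...
        <-stopCh
    }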
E
What I worry about is, like, adding complexity to the core etcd operator, right. But if it's possible to run the thing as a separate binary — and I now assume that this module will be, like, decoupled from the etcd operator, pretty much all of it, right — then I guess that's fine, yeah.
A
Well, should we put new kubelet flags in? That's most probably not going to work, so we work around that right now in our docs. But the long-term plan here is to make kubeadm write a file with the kubelet args, and then this way the file will be persistent between all the minor versions, and kubeadm will just write different arguments for the kubelet to use — and then pivot to dynamic configuration once that's up. Yeah, but that's the short status update that we have.
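A toy sketch of that idea — the path and flags are invented for illustration:

    package main

    import (
        "io/ioutil"
        "log"
    )

    func main() {
        // Hypothetical: kubeadm persists the kubelet's extra args to a file
        // that survives minor-version upgrades; upgrades only rewrite its
        // contents, not the kubelet's systemd unit.
        args := "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd\n"
        if err := ioutil.WriteFile("/etc/default/kubelet-args", []byte(args), 0644); err != nil {
            log.Fatal(err)
        }
    }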
E
So I think you can instantly do a pull request, right, or create an issue — like, we can add something there, or we can give you an example if you want. If there's anything already... yeah, there is something already and we already use it. So probably just ping me on the GitHub issue, or I'm around on GitHub. Cool.
A
What else do we have on the list? Let me see. Yeah, then we have the kube-proxy component configuration, and one from Dan — Dan was attending the SIG Cluster Lifecycle meeting one week ago about IPv6 support; for that team it's important to get component configuration for kube-proxy in sooner rather than later, and I'm guiding a guy there on how to implement that. It's gonna be fairly simple: like, a kube-proxy ConfigMap with, well, the configuration in it, and just bind-mounting the ConfigMap into the pod.
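Sketched with client-go core types, assuming a ConfigMap named kube-proxy bind-mounted into the DaemonSet pod; the names and mount path are illustrative:

    package config

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // kubeProxyConfig bundles the kube-proxy configuration into a ConfigMap
    // plus the volume and mount that bind it into the pod read-only.
    func kubeProxyConfig(conf string) (corev1.ConfigMap, corev1.Volume, corev1.VolumeMount) {
        cm := corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "kube-proxy", Namespace: "kube-system"},
            Data:       map[string]string{"config.conf": conf},
        }
        vol := corev1.Volume{
            Name: "kube-proxy-config",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
                },
            },
        }
        mount := corev1.VolumeMount{
            Name:      "kube-proxy-config",
            MountPath: "/var/lib/kube-proxy",
            ReadOnly:  true,
        }
        return cm, vol, mount
    }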
C
There are quite a few open issues — 37 open issues. At some point we should triage those down and, like you said, actually kick things out to the next milestone that aren't going to make it, so we have a more concise, actionable burndown list there. Quite a few of these aren't assigned, for instance, which seems to make it really unlikely they're gonna be done by the code freeze next week, yeah.
C
And then I stuck one other thing on here that Jamie had mentioned yesterday during the SIG meeting: I would just like to go back and review the 1.9 planning doc for our sort of P1, and I guess also the P0 goals, and see where we're on track or not on track, and what things we should just sort of cut bait on now, to focus on getting a couple of them actually done.
B
By the way, I've been kind of curious about this. This brings up a meta topic which — I don't know if other people think about this, but I think about it all the time — is that we do four release cycles a year. I sometimes think three would be better, just to give us longer dev cycles in between, so we can actually pack in and do more, because a lot of times it's a panic attack right at the end of every single cycle.
D
So I'm looking at the planning. So another P1, which we haven't really talked about, is moving — or what we're gonna do with — the API. So it's currently on, what, v1alpha2 or something like that? Or do we want alpha 1 — I'm not sure what it currently is — but Lucas, I know you wanted to maybe sort of solidify that API and make it more stable. Do we have a guess at that?
C
I think both Chris and the other folks left their PRs open for comments, but also checked a version in so we could write code against it. So there's a version that's checked in that we are writing code against, and the PRs are left open for comments. Last I talked to Chris, he was trying to basically cut as much as possible out of the API — Justin had mentioned it's a lot easier to add things later than it is to take things away.
C
So we're trying to start with the most minimalistic set possible, because we know we're going to be adding things down the road, and so we're trying to figure out what's the smallest set that can get us started at this point. So I think, as a result, it's probably a lot slimmer than what you'd expect, and a lot smaller surface area than what kubeadm even has for the control plane configuration part, yeah.
A
If that's gonna happen, then we have that written down. I'm not sure if we have other items to discuss at this point. I mean, Fabrizio is gonna take a look at the temporary fix — like, making the kubelet address the master thing — if self-hosting the kube-proxy manifest works. Is there anyone looking into the other approach, which we consider the longer-term plan?
A
That's definitely the best way long term. Then Jamie is working on integrating the operator stuff, which is really cool. We have self-hosting pretty much okay, the component configuration PRs are in the works, we'll have etcd upgrades pretty soon-ish with the linked PR, and testing HA clusters is going to be a high priority going forward.
B
The only update I have, from conversations with Ji-yong yesterday, was that there may be a late-breaking client update, post code freeze, to the etcd client, but he is tracking the backport fix. There are two problems: there's a client problem, then there's a server problem, and David has a backport fix for some of the server issues that he's going to put into a 3.1 release. That's something — and then the client fix, or change, might not land until after code freeze, so I don't exactly know.
E
Yeah, I think we probably need to use a newer version, like 3.1.11, and we also want to use the latest version of the client, which is in the 3.2 release. So I think what we need is a release plan, so that we know what we're doing; right now it's, like, a little bit confusing, yeah.
B
You know, I don't know if this is really showstopper-worthy, like — I want to make sure that we get folks on the horn at Google to understand that this issue could be, is potentially, catastrophic. So I think we should hold up the train for this one. So I think it's worthy of that, you know, state, yeah.
E
For a long time we have been hitting it, but there's a workaround — like, you just restart and it recovers, yeah. But, like, we haven't really figured out the root cause of all the issues. We know that, like, fixing the etcd client will fix part of the issues, but we haven't, like, tried to make sure that all the issues that we have seen are fixed, right. So this...