From YouTube: kubeadm office hours 2020-03-18
A
Another quick PSA is that we managed to set up the kubeadm Windows e2e's, which was quite interesting, quite difficult, but unfortunately they are also quite flaky, and possibly later today I'm going to contact SIG Windows and ask them if they know about some of these failures. As you can see, there is some consistency around the different failures. A good part, of course, is that we were able to get the majority of the tests passing, which is going to help us graduate the feature to beta as planned for the 1.18 cycle. And that's all my PSAs.
D
I suppose I can start. First of all, I created the document as you proposed, and in particular Dan and the rest of you have already commented on it, which reveals that obviously just loosely wrapping these things is of course not enough; some look needs to be taken at issues related to the lifecycle. For instance, upgrading components, or what about growing or shrinking a cluster? So that is something that we have to look into as well, Fabrizio.
D
Yes, yes, let me just take a quick look at the document. I just jumped in, so I need to catch up a little bit, for five minutes, but then I shall be able to give you some more information. Or better: since you are sharing the screen anyway, why not just open the document, and then we can take a look together? Yeah, I'll show my screen.
D
Just a question... oh okay, well, I mean, from my perspective this should be fairly open. I just enumerated a number of solutions that I happen to know. But if you look at the last chapter in this document, you will see that I would actually like to find a very generic approach which would allow integrating even more things, not only limited to load balancing and virtual IP handling.
D
Yes, I agree. And in particular, when we think about things that go beyond just rolling them out, just bootstrapping them, then of course some deeper knowledge about what we are dealing with is actually required. How is the upgrade scenario going to work? This may be different from one software solution to another that we try to integrate. So it will not be totally generic in the end, I guess; that is something which is pretty open for now.
B
Yeah, sorry. It basically creates its own manifests for putting all of this together; that was kind of the idea. So the problem, I mean with keepalived and nginx and things like that, is that they're a little bit hard to manage: a bunch of different components that you need to, you know, scale up, and doing those sorts of things can be a bit of a pain. So I basically just kind of grabbed...
B
...the Raft algorithm, did some bits, and kind of followed the same sort of idea: just having a bunch of manifests so that, using perhaps kubeadm add-ons and without really changing the UI for kubeadm, it could also put those manifests in place, which would, on top of everything, put the VIP and then load balance between all the control plane members.
D
I mean, this is something that we might want to make some kind of decision about: whether we should restrict this whole idea to a particular component that does all the job but would not allow us to choose something else, or whether we should try to keep it as open as possible, so that basically people can choose whatever they want. I mean, I would be happy with kube-vip, which does the job for me. But of course, since it's a rather fresh component, I don't know whether it would be production-ready already or something like that.
B
Flexibility is probably key. You know, there are going to be some people who've already settled on HAProxy. I mean, if you look at the Cluster API project at the moment, a lot of their images, they've already kind of settled on HAProxy for a lot of these things, but you know, they're having to roll their own images and things like that to do these sorts of things. So I would opt for flexibility.
F
I have a question, Dan. So I read the kube-vip thing, and I understand that it's kind of a bootstrap thing, so this is not something you could really install after the fact, right? Like, you couldn't bring up Kubernetes and then schedule a DaemonSet to the control plane, because then we wouldn't have the VIP for provisioning, right? Yes.
B
There you'll find the steps for actually doing it. Basically, at the moment, to do that you'll need to install kubeadm and kubelet, kubectl, etc., and then create the manifest (it can create the manifest for you), and then you do a kubeadm init. As part of kubeadm coming up, it will start that manifest, and it will create the VIP, which kubeadm will then use to validate that the first control plane member comes up, and things like that.
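A rough sketch of what such a pre-created static pod manifest could look like; the image reference and arguments below are illustrative placeholders, not kube-vip's actual flags:

```yaml
# /etc/kubernetes/manifests/vip.yaml -- placed on the node before
# "kubeadm init", so the kubelet starts it as a static pod.
apiVersion: v1
kind: Pod
metadata:
  name: vip
  namespace: kube-system
spec:
  hostNetwork: true            # the VIP must live on the node's own network
  containers:
  - name: vip
    image: example.org/kube-vip:latest   # illustrative image reference
    args:                                # illustrative flags
    - --interface=eth0
    - --vip=192.168.0.100
```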
D
Actually, this would basically be the same approach for all the different software solutions that we would find for that, I mean like nginx or whatever. Actually, if I understand your question rightly, I think you were thinking about bootstrapping the cluster first and then trying to convert it into something load-balanced. And this is actually something we've been doing before, before kubeadm was capable of handling multi-master setups out of the box.
F
Well, you can throw stuff in the extra cert SANs ahead of time, if you know what the VIP is going to be, but it's pretty messy, because we don't support changing the cluster like that. Thanks, Martin, for the context there; I had completely forgotten about that work from a while ago. Yeah, I was just kind of trying to turn my gears a bit and think, well...
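For context, the extra cert SANs mentioned here are set in the kubeadm configuration; a minimal sketch, with a placeholder VIP address:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# Point clients at the VIP from the start...
controlPlaneEndpoint: "192.168.0.100:6443"
apiServer:
  certSANs:
  # ...and make the API server certificate valid for it.
  - "192.168.0.100"
```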
A
kubeadm can potentially request an external... like, maybe it could be a phase, it could be an external binary or setup of sorts, to set up the load balancer on each node, but you have to know the control plane endpoint; there's no way to provision it otherwise. There are ways to provision the cluster and go back and modify the control plane endpoint certificates, potentially, but I don't think we should follow this route.
D
Okay, let me just jump in here. First of all, I mean, what I proposed in this document is something that I've been doing before by hand, basically just having my Ansible scripts, which would create the virtual IP and load balancer configuration files first; the next step would create the manifest, and then run kubeadm init. So when kubeadm init is run, the manifests already exist.
D
It takes a little while until all the services are up, and then you have the virtual IP, you have the port open, and you can then configure your cluster based on that. And so my idea, which was based on this templating, was basically just coming from the Ansible world, where I would just use templating to implement this.
D
So if I teach kubeadm to create those files for me: what actually goes into those files, which is different depending on the solution that you are choosing, will all be the same facts that come in, something like the port numbers, the virtual IP or DNS names, etc. So you can just fill that in automatically, or you have to create the templates beforehand.
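The "facts plus templates" idea can be sketched in a few lines; the variable names and the nginx-style snippet below are made up for illustration:

```python
from string import Template

# Facts kubeadm already knows at init time (illustrative values).
facts = {"vip": "192.168.0.100", "port": "8443", "backend": "apiservers"}

# A user-supplied template using a predefined list of variables,
# here shaped like an nginx stream proxy stanza.
template = Template(
    "stream {\n"
    "  server {\n"
    "    listen ${vip}:${port};\n"
    "    proxy_pass ${backend};\n"
    "  }\n"
    "}\n"
)

def render(tmpl: Template, facts: dict) -> str:
    """Fill a load balancer config template with kubeadm's facts."""
    return tmpl.substitute(facts)

print(render(template, facts))
```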
F
The issue I'm wrestling with is, you mentioned facts, which is, I think, the common idea from Puppet and Ansible and so on. Some facts can be known ahead of time, just based off of the contents of the configuration, and then some facts could potentially only be known at runtime. So my question is: what facts are not known?
F
Yeah, I'm just thinking out loud: do you really just need to bootstrap the VIP? You can store the VIP's configuration, the IP addresses of the peers, inside of Kubernetes and then get that back onto the node, using a DaemonSet, or back out to the control plane. That would be one mechanism for a homogeneous control plane where you wanted all of them load balanced the same way.
D
Also, we are coming to the point where it probably isn't totally generic, because I don't know beforehand, for instance, whether the existing solutions that we might want to integrate support something like SIGHUP to reload a service, to reload the service configuration or something.
D
But we're not using systemd here; I mean, we are talking about hosting those services inside Kubernetes as static pods. But how about keepalived?
D
Well, first of all, I haven't tried it out with keepalived, because by then Dan had come up with kube-vip, which works so nicely for me. But if it is possible with kube-vip, I would expect to be able to run keepalived in the same way.
F
That's kind of what I'm thinking. For some things you won't even need the operator, because they'll reload when the file changes, like Envoy. But yeah, this is very interesting.
F
It's a neat problem to solve. The part that I'm most unsure about is just how you can bootstrap the VIP, because my theory is: if you can bootstrap the VIP so that the cluster can come up, then the cluster is operational, and as new control plane nodes join, they can be registered with the VIP, right? So we can support something like a ConfigMap, or we can use the node list and filter on the IP addresses of all the control plane nodes and then say, yeah...
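Filtering the node list down to control plane addresses, as described, could look roughly like this; the node objects are simplified stand-ins for what the Kubernetes API returns, and the label is the one kubeadm applied to control plane nodes at the time:

```python
# Simplified stand-ins for Node objects returned by the Kubernetes API.
nodes = [
    {"name": "cp-1", "labels": {"node-role.kubernetes.io/master": ""},
     "internal_ip": "10.0.0.11"},
    {"name": "cp-2", "labels": {"node-role.kubernetes.io/master": ""},
     "internal_ip": "10.0.0.12"},
    {"name": "worker-1", "labels": {}, "internal_ip": "10.0.0.21"},
]

def control_plane_ips(nodes):
    """Return the addresses the VIP should load balance across."""
    return [n["internal_ip"] for n in nodes
            if "node-role.kubernetes.io/master" in n["labels"]]

print(control_plane_ips(nodes))  # the VIP's backend pool
```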
F
I think the tweak is that normally you'd start off and say: hey, I want five nodes, right? Then manually you would already configure the VIP to have five nodes. So automating it gets a little weird: as the control planes asynchronously come up, you go from a VIP that's just one, to two, to three, to four.
F
So there's a little bit of lag in the VIP scaling up to the number of nodes, and I think we're pretty safe from race conditions there if we use the Kubernetes API to populate the VIP config on the node. But I just don't know; that's the part that I would be most skeptical of in what I'm talking through, whether that would work to accomplish configuring the entire control plane.
B
Yes. The first one comes up; it starts the Raft server and elects itself as the leader. As the leader, it will adopt the VIP, and it will start the load balancer with the instances underneath it. Then, as kubeadm runs through its steps, it will use that VIP to speak to the one working control plane node. Once kubeadm is complete, you can add nodes 2 and 3, and they'll hit the VIP, and it will bind the cluster together. We can then add the manifests on nodes 2 and 3, so that kube-vip comes up on those two nodes.
B
They join the Raft consensus. If node one goes down, two or three will take over the master status, at which point that node will adopt the VIP, and then traffic will go through it and route to the remaining control plane nodes. We can have it adopt config and so on: as nodes two and three join, we could do a ConfigMap watch, so anything else that becomes a control plane node would automatically join the backend as load balancer nodes, and things like that. There are a number of different ways we could go about doing it.
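A grossly simplified sketch of the failover behaviour described here: whichever member currently leads the Raft group announces the VIP. "Leader" is reduced to "first member not marked failed"; real leader election is of course far more involved:

```python
members = ["cp-1", "cp-2", "cp-3"]

def vip_holder(members, failed):
    """Return the member that should currently announce the VIP."""
    alive = [m for m in members if m not in failed]
    return alive[0] if alive else None

print(vip_holder(members, failed=set()))     # cp-1 holds the VIP
print(vip_holder(members, failed={"cp-1"}))  # on failure the VIP moves to cp-2
```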
B
We just need to start from somewhere; that's the only thing. So at this point we start from the file. We could start just from the manifest, have it stored in the Raft state, have it updated based upon the ConfigMap watch, and as nodes come in, they just adopt that config. So we would be able to scale up and scale down, and it would just basically reflect the config of the cluster.
F
I definitely think it's good not only to bootstrap to disk but also to write back, when I think about clusters going down and coming back up in the DR scenario. Because if you have an elastic control plane, the IPs that bootstrapped the VIP might not even exist anymore.
A
So I guess we have to discuss how we are going to phase the potential integration: if we execute this command, how is this VIP going to propagate to the config? And then I guess we have to have a separate phase for how to write this static pod manifest on the init node and the joining control plane nodes.
D
Right. Well, as I said, I'm not really too familiar with the way that kubeadm is organized internally. What I had written in my document was, as I already said, inspired by using configuration management, which I work with a lot. And I was actually even thinking, I don't know how it's been implemented now, I mean, of the etcd support in kubeadm.
D
In a way it could probably be modeled in the same way. I mean, in the end you could come out with something where you have an interface where you feed in templates, which use some predefined list of variables that is supported by kubeadm, and then you can just create any configuration file that you need. So that would be very easily extendable. However, what it doesn't cover is all those lifecycle issues, because they require some more knowledge about the software that you try to integrate.
A
Personally, I think if we go for this integrated VIP solution in kubeadm, we have to be opinionated, and instead of supporting templates for multiple different solutions, I think we should stick to one. The same way we support a single DNS server and a single proxy, I think we should do the same for this.
F
My opinion here is that we can publish an API to disk. Once the file is on the node... since the VIP is software that needs to be installed on the node either way, as a systemd unit, a static pod manifest, or, you know, if you're doing further lifecycle management, a DaemonSet: any of those things mutate the node, and once you have privileged software installation on the node, you can inflate a file and listen to changes.
F
You know, for the file, using inotify, or you can just go straight to the Kubernetes API if you have permission to do so. I think kube-vip, nginx, Envoy, HAProxy, you know, Traefik or whatever load balancer you want to use can be configured in that manner, and kubeadm should not own the integration that translates the facts into the file.
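The "listen to changes on the file" part might be sketched like this; a portable mtime poll stands in for inotify, and the reload action is left as a comment because it differs per load balancer:

```python
import os

def poll_changed(path, last_mtime):
    """One poll step: report whether the file changed since last_mtime."""
    mtime = os.stat(path).st_mtime
    return mtime != last_mtime, mtime

# In a loop, a detected change would trigger the balancer's own reload
# mechanism (e.g. sending SIGHUP): exactly the per-solution lifecycle
# knowledge discussed earlier in the call.
```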
A
We haven't decided anything. Basically, Martin presented what he has been doing for load balancing, and also we have got Dan, who is one of the authors of the kube-vip solution, to explain how kube-vip works. kube-vip is mentioned as an alternative to the combination of keepalived with nginx or HAProxy. And basically, maybe you can recap your comments; let's see.
G
Yeah, okay, I did a quick TL;DR of my comments. So my comment is that this work should, let me say, first focus on being a bootstrap provider, and it should be agnostic to load balancers. So personally, I'm not really keen to choose or to sponsor a particular load balancing solution if it is not sponsored by the Kubernetes project, as it were.
G
So this is the first point. In my opinion, we have to seek a generic solution, and generic means that it can work with the two or three or four solutions available on the market, and I know that it is complicated. So I was thinking about how we can get there without being stuck, and I think that a good step could be to document the work that Martin is doing already, so that we have a clear understanding of what the limitations or the possible constraints are.
G
For instance, the fact that we need to know all the members in advance, or the fact that, I don't know, upgrades are not managed, and so on and so forth. So we start, let me say, getting the idea out in the wild, we start getting feedback, and then, as soon as we get more feedback, we try to generalize, if we are not ready to do it now.
A
We'd have to support an abstraction across different load balancers, pretty much a generic configuration that has to be templated over a variety of different load balancers, and people are going to come to us because our template is not going to be enough anymore. So, of course, people are going to come to us and say: hey, can you please add support for this? And they are going to start creating tickets about it. Now, I just don't see how we can make this generic enough.
G
I agree, templating is not always the right path. This is why I was suggesting that we could also consider something like plugins, or something different. So we define some well-known integration points, and if someone wants to hook into such an integration point, then he can basically write his own code and do whatever he wants.
G
But yeah, I agree, it requires some design. So this is why, thinking about this, and after the feedback from Martin, I am leaning towards the idea that it is probably best, a good idea, to, let me say, make this approach designed by Martin available in the wild, see if we can get feedback, and so on and so forth. Maybe... I don't know how to make this more visible.
A
Maybe I should clarify what's going on at the website right now. Basically, SIG Docs just got an approval from SIG Architecture to remove, how do I say, any documentation about projects which are not CNCF-approved. So if you decide, for instance, HAProxy... sorry, if you decide, for the VIP solution, to document a whole page about it with kubeadm integration, they are going to request that this page is removed, unless it is a CNCF project. The situation there is getting slightly political.
D
I think that makes sense, I mean, in a way. And there is probably a lot of documentation elsewhere, which we could just treat as contributions or something; if we get third-party contributions, they could just be linked to. But I think it would make sense to have some central site where these things can be found, just like a lot of the contributions which are part of the kubernetes git repo on GitHub.
D
There's a lot of stuff on the kubeadm GitHub already, so I think the info there is pretty complete. But I was thinking that it would probably make some sense to have some entry point which is not yet specific to a particular solution but says something like: okay, in principle we approach it like this, and now you can choose whatever you like; people have tried kube-vip, people have tried the combination of keepalived and HAProxy, for instance. And then this could actually branch off to some more specific guides. Okay.
G
Okay, fair enough, to work something in, I mean, to show what is possible and what is the general, let me say, approach of the solution. But also we have to be crystal clear in defining what the limitations are. So: no upgrades, and the join workflow basically does not work, so you would need a static control plane; those are the main limitations. Until we have a VIP management that works nicely with the join workflow, in my opinion it is a no-go.
G
My suggestion is to take an incremental approach. We start with the documenting and see if we get feedback from people, and then, when we are mature and confident enough to do a generic solution that fits with the kubeadm init and join workflows, and that also has a good answer for the lifecycle, that means upgrades, only at that stage we can start embedding something in the code.
G
We can also have different pages in our docs: we can have a generic page, something that explains the generic approach, and then something else, another page, that says: okay, if you apply this generic approach to kube-vip, this is how it works; if you apply the generic approach to keepalived and nginx, this is how it works.
A
So in terms of actions: should we continue working on this in this document, or should we start drafting a new document with the proposed approach that we can merge into the kubeadm repo? I think I'd push for a new doc, and we can keep this one for the potential integration inside kubeadm. Maybe.
D
Totally agree, absolutely. And so I think I would like to do some more experiments, as I said, with keepalived, to see whether I can actually integrate it as a static pod as well, and then I am happy to write some documentation and propose it to you guys, maybe starting off at kubeadm/docs. And then we can see whether there's a part of it that we can actually integrate into, eh, the kubernetes.io site, if it's compliant with their policy. So that should be fine.
G
The only thing I can really ask of Martin is to make, at the top of the page, let me say, a clear statement that this is one of the possible alternatives supported by kubeadm. I don't want people to somehow complain that we are favoring one solution among the others; we are just proposing. The goal of the document is to explore one idea, to propose a solution that others can build on, and so on and so forth, but we should be crystal clear that it is not the only one supported by kubeadm. Yeah.
A
We have to be able to support the docs in terms of, you know, user requests. So I was thinking that maybe one day we should also think about having end-to-end tests for that, but that's an even bigger complication: where are we going to run the tests? Is this going to be a GCP setup? Maybe kinder can be extended one day. But we have to support the docs with solid signal.
G
I think that it is fair, for the first iteration, to have, let me say, a note on top of the doc that also explains the contract: this is experimental, support is best-effort, whatever we want to write. But for the first iteration, let's consider it like an alpha feature; we make it clear what the contract around this document is.