From YouTube: Istio environments WG meeting - 2018-11-14
Description
- Mesh and cluster attributes: doc
- Federated Istio introduction and demonstration
- Upgrades / CRD installation
D: So first, I want to take a few minutes to explain the use case that led me to this effort. At a high level, I'm looking at managing multiple Kubernetes clusters, where there would be an Istio mesh in each one of those clusters. For scale and all sorts of reasons, I don't want to share any state, whether it's Kubernetes state or Istio state, across the clusters. That essentially means each of those control planes, the Kubernetes control plane and the Istio control plane, is encapsulated within a cluster, and that's what I need for production.
D: There's the cluster configuration that's needed for Federation, versus a type configuration, which declares which API types Federation should handle across all the Kubernetes clusters. These API types can be native types, like a ConfigMap, or custom types, like a VirtualService or a Gateway. Then there's a cluster component which declares which clusters Federation should target. And within the type configuration there are really three fundamental concepts: there's the template, all right.
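A type configuration of the kind described here can be sketched roughly as follows. The API group/version, kind names, and field names are assumptions based on the Federation v2 API of that era, so treat this as illustrative rather than authoritative:

```yaml
# Hypothetical sketch of a Federation v2 type configuration telling the
# federation control plane to handle Istio VirtualService resources.
# Group/version, kinds, and fields are illustrative, not authoritative.
apiVersion: core.federation.k8s.io/v1alpha1
kind: FederatedTypeConfig
metadata:
  name: virtualservices.networking.istio.io
spec:
  target:
    version: v1alpha3
    kind: VirtualService
  propagationEnabled: true
  federatedType:
    group: networking.federation.istio.io
    version: v1alpha1
    kind: FederatedVirtualService
```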
D: So when you look at a template, it looks just like the standard manifest, again using the ConfigMap example. There are two additional fields in there, and the kind and the API version are different, but for the most part, if you look at and understand a ConfigMap, you can look at the federated twin of that ConfigMap and it's very easy for you to understand. And along with that template, there's also a placement type.
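As a rough sketch, the federated twin of a ConfigMap might look like this; the kind name and API group follow the naming pattern described in the talk but should be treated as assumptions:

```yaml
# Hypothetical federated twin of a plain ConfigMap. Note the changed
# apiVersion/kind and the extra spec.template wrapper around the
# original manifest; names here are illustrative.
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedConfigMap
metadata:
  name: test-configmap
  namespace: test-namespace
spec:
  template:
    data:
      key: value
```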
D
That
kind
of
marries
that
template
with
which
clusters
do
I
want
this
target
type
to
exist
right
and
then
what's
nice
too,
is
this
third
fun
fundamental
concept
and
that's
the
override?
So
if
we
can
imagine,
okay
I've
got
a
hundred
kubernetes
clusters
each
with
the
NS
do
mesh
in
there,
and
maybe
one
of
those
clusters
are
a
couple
of
them.
Have
some
variations
that
I
need
to
have
differences
in
well?
I
can
use
the
override
type
so
that
98
out
of
the
hundred
clusters
is
cookie-cutter.
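The placement and override concepts just described might be sketched like this; the kind names and fields follow the federated naming pattern but are illustrative:

```yaml
# Hypothetical placement and override resources for the federated
# ConfigMap example. Kind names and fields are illustrative.
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedConfigMapPlacement
metadata:
  name: test-configmap
spec:
  clusterNames:
  - cluster1
  - cluster2
---
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedConfigMapOverride
metadata:
  name: test-configmap
spec:
  overrides:
  - clusterName: cluster2
    data:
      key: cluster2-specific-value   # only cluster2 deviates
```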
D: All right, I have a simple script that first federates additional Kubernetes resource types. When you instantiate the Federation control plane, there is a handful of native Kubernetes types that Federation federates by default, again: ConfigMap, Service, Ingress, all that good stuff, and there are some additional types that need to be added.
D: I stood up the Federation control plane in cluster one, and let me pause for a second just to explain: what is this Federation control plane? Very simply, it's a controller manager. It's called the Federation controller manager, deployed as a StatefulSet; it runs as a pod, and it just has a bunch of control loops, very similar to the controller manager native to Kubernetes. It has an API, implemented by extending the Kubernetes API using CRDs.
D: With native or custom types, I can go ahead and say, you know, federate a VirtualService or a Gateway. And so going back to the installation, again, this is all familiar. The difference is, instead of adding custom resource definitions, we're adding federated custom resource definitions.
D: Right, so each of the clusters acts independently; the meshes act independently. So when I want to communicate with any of those clusters, I'm doing it the same way that I normally would: I can set my kubectl context, authenticate against the Kubernetes API, and start managing those resources individually. But I don't have to, because of how Federation works.
D: Yeah, so what happens is, when I do a kubefed join: in my example, again, I've got two clusters, and the host cluster is cluster one. It's hosting my Federation control plane, but it's also acting as a target cluster, and cluster two is just a target cluster. And to your question, when I join those clusters, cluster 1 and cluster 2, to the Federation, I'm joining cluster 1 because it's also my target cluster, not just my host cluster.
D: A component of Federation is the cluster registry. If you're unfamiliar with the cluster registry, it's a simple API extension to Kubernetes, a simple concept that says: here's cluster 1, here's cluster 2, whatever clusters I join to the cluster registry. It holds information about each of those clusters: simple metadata that I want to use, along with how I access that cluster.
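A cluster registry entry of the kind described might look roughly like this, assuming the clusterregistry.k8s.io v1alpha1 API of that era; the server address is a placeholder:

```yaml
# Sketch of a cluster registry entry. API group/version and fields are
# based on the cluster-registry project of that era; address is a
# placeholder.
apiVersion: clusterregistry.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: cluster1
spec:
  kubernetesApiEndpoints:
    serverEndpoints:
    - clientCIDR: 0.0.0.0/0
      serverAddress: https://cluster1.example.com:6443
```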
D
So
the
API
endpoint,
along
with
the
TLS
assets,
needed
to
access
that
cluster
and
one
of
the
things
that
that
the
Federation
control
plane
does
when
you
go
ahead
and
join
the
cluster,
not
only
adding
it
to
the
to
the
cluster
registry
is,
it
will
also
go
ahead
and
create
a
service
account.
You
know
in
each
of
those
target
clusters,
and
so
it's
going
to
use
those
credentials
it's
going
to
go
ahead
and
use
those
credentials
to
create
that
the
service
account
secret,
all
that
good
stuff
right,
so
that
it
was.
D: So: get my clusters up, get my Federation control plane up, and walk through the federated Istio bits. But as I mentioned, the Federation API is just a bunch of CRDs, and these are the native types, the core API types, that are supported when you stand up the control plane. But to your question of where we're at here: we join cluster one, we do some checks in cluster one, we register them to the cluster registry.
D: And it actually worked, so let me just go back to where I stopped talking. We finished the Istio control plane installation; I waited a little while for the control plane pods to run. What you see here is a control plane up and running in cluster 1 and cluster 2. I then go ahead and label the default namespace for automatic sidecar injection. I checked, and we see that for both cluster 1 and cluster 2 that's done now.
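Labeling the default namespace for automatic sidecar injection, as described, amounts to something like the following; the `istio-injection: enabled` label is the standard Istio mechanism, and the namespace manifest form is just one way to apply it:

```yaml
# Standard Istio automatic sidecar injection label on the default
# namespace (applied in each target cluster).
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled
```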
D: Then, let's see, I checked to make sure that they're running; they're not running just yet, but they will be. And then, okay, let's start testing it externally. So I create the Bookinfo gateway, a federated version of it, so that I create those gateways in each of the two target clusters. I create an external-dns controller, if you're unfamiliar with this project.
D: I also have to go ahead and, in order for... so, one of the things that we can do with Fed v2, and it's not just Fed v2, it's really this whole kind of solution, is cross-cluster service discovery. If you're familiar, which I'm sure everyone is, with how service discovery works within a cluster: same concept across clusters, so there's some way to do that.
D: If it's not local, I then just start to kind of chain up the list until I find that endpoint, and that could be anywhere else in the world, including cluster two, where I expose a service in cluster two. I'm not going to show that cross-cluster service discovery in this demo; I actually just got it working yesterday. But this, I believe, kind of addresses the question that I've been getting since I've been socializing this work.
D: Well, how does service A in cluster one talk to service B in cluster two? It's a similar concept to how we do service discovery within a cluster: we can do cross-cluster service discovery as well, and once we can do that, we can then start using Istio capabilities to identify the source and destination for doing rules and all that good stuff.
D: Yeah, let me actually just show you the manifest and how I do it, and then what I can do is, on the next call in two weeks, I'm happy to come back and really dive into that in more detail. Like I said, I just got it working, and so what I want to do is document it and then put together a simple demo script and a slide or two.
D: And so, to that point, what we need to do is create a config map. Both kube-dns as well as CoreDNS have added this support, and we're basically explaining to those in-cluster DNS providers: here are additional zones, and for the external DNS provider, if it's Google Cloud DNS, here are the IP addresses of those name resolution servers.
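The config map described here can be sketched with kube-dns's stub-domain support; the zone name and server IPs below are placeholders for whatever zone and external provider are actually in use:

```yaml
# kube-dns stub-domain configuration forwarding an external federation
# zone to an external DNS provider's name servers. Zone and IPs are
# placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"example.com": ["10.0.0.10", "10.0.0.11"]}
```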
D: So basically, after creating the federated domain, I create the Bookinfo service DNS records for the product page, and I'm good to go from the rest of the world. I've created the gateway, the virtual service; I created the DNS information that I need to, and now the rest of the world should be able to hit my front end. And what the rest of the world doesn't realize is this:
D: This front end is in two different Kubernetes clusters, running on two different Istio meshes. Let me just go ahead, and we see that, okay, it succeeded. You see that it takes a little while, because my client isn't pointing to Google DNS, so it takes a while for the propagation of the DNS resource record. But let's just do it manually really quick, and you can see that I get a 200 back. Let me share this with the group.
D: Yeah, I mean, this is just a demo, but I've tested this with multiple clusters across the world, and yeah, it works. For demo purposes, I've got basically us-west1-a and us-west1-b, so two different zones. But yeah, you see here: let me know if you're having any problems, but you should be able to, yeah.
D: Exactly, exactly. Now, one of the things that I would like, and maybe I'm just not aware of this, because the use case that I started to investigate is: how can we do live upgrades? Say we have two or three clusters, each with their own mesh, and I say, okay, well, let's go ahead and use Fed v2 to say: okay, I'm going to upgrade cluster three.
D: Look, okay, everyone, move your applications over to one and two, and that's very easy to do within Federation v2: you just patch the placement resources, and now those resources I had running across, you know, the three duplicates of the Bookinfo application, are now just running on clusters one and two. Okay, great, I've got no customer applications on cluster three; let me upgrade it, and now let's move that traffic back over. That's where I started to run into some issues, and I actually saw it on the agenda.
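Draining cluster three by patching the placement, as described, might look roughly like this; the kind name follows the federated naming pattern and the cluster names come from the example, but treat the details as illustrative:

```yaml
# Hypothetical patched placement: cluster3 has been removed from the
# list, so Federation withdraws the Bookinfo resources from it.
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedDeploymentPlacement
metadata:
  name: productpage-v1
spec:
  clusterNames:
  - cluster1
  - cluster2
  # - cluster3   # removed while cluster3 is being upgraded
```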
D: So, no doubt, even though my customers are going to Asia, you reprioritize and send them to Australia or wherever it might be. Because what I have to do now is essentially take down the gateway, or, sorry, actually the Istio ingress service, and, you know, it's workable, but I would prefer not to take that hammer approach and instead take more of a surgical approach. Well, the nice thing I want to demonstrate, to your question, is, right, so this is pretty cool: we've got multiple instances of the Bookinfo application.
D: No one in the rest of the world realizes that it's highly available, but what I want to do now is use the power of Istio. We've all seen this demo, but now we're doing it across multiple clusters and meshes: hey, route all the reviews traffic to reviews v1. And again, I've federated these route rules, so I don't have to go ahead and apply them everywhere.
D: If we have a hundred clusters, a hundred meshes, I don't need to have a hundred different instances of the route rule. So I can create the Istio samples Bookinfo routing, the route-rule-all-v1, and again, it's just like the route rule that everyone knows and loves, but it's the federated equivalent of it.
D: Let me just take a second here and show it to you. Here we go: Bookinfo routing, all v1. This is exactly what I went ahead and did, and what you'll see here is, well, the virtual service looks just like the normal virtual service for each of these. But it is a little different: I'm now using the Federation control plane by saying, here's the API that I want to use, and you see that it's no longer a VirtualService.
D: It's a FederatedVirtualService, but really the meat is still the same. There's the difference here: instead of spec.hosts, it's spec.template.spec.hosts. And then, for every federated type, we need to say where I want to go ahead and place this, and that's the FederatedVirtualServicePlacement. So it's that placement type that says: here are the clusters. And again, okay, this is interesting, but imagine the cluster names being 1 through 100. Still, for the most part it looks very similar.
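Putting the pieces just described together, the federated route rule might look roughly like this. The reviews-to-v1 routing mirrors the standard Bookinfo sample, while the federated wrapper kinds and API group are assumptions:

```yaml
# Hypothetical federated equivalent of the Bookinfo "reviews to v1"
# rule. Note spec.template.spec.hosts instead of spec.hosts, plus a
# separate placement resource naming the target clusters.
apiVersion: networking.federation.istio.io/v1alpha3
kind: FederatedVirtualService
metadata:
  name: reviews
spec:
  template:
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
---
apiVersion: networking.federation.istio.io/v1alpha3
kind: FederatedVirtualServicePlacement
metadata:
  name: reviews
spec:
  clusterNames:
  - cluster1
  - cluster2
```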
D: So what I have found personally is that it's easy to ramp up on Federation, on how to federate resources. Now, with that said, it was also very painful and error-prone, and all that kind of stuff, to do this manually, and so since presenting this I've been working within the Fed v2 community for the last two, three months.
B: You can get the IP address if you look at, probably, the detail there. A few comments, if I may. One is, there are some pieces that are extremely useful on their own. I'm very interested in the authentication part, because it would be good to align with what we are doing for multicluster; I think we have a similar approach. We also create a service account; we also connect to the other clusters.
A: The whole CRD federation thing is interesting, and we could use that as long as it doesn't result in a change of the APIs. And then I think the third thing, which Costin kind of alluded to, is there's a lot of contention around how to do discovery, and we sort of have a solution in mind that scales really well, so we'll have to sort that out.
D: Keep in mind, again, the use case. What I've seen so far is that this multi-cluster, multi-mesh space is just a topic where there are a lot of different views and different approaches, and I think as this space matures we'll see what approach kind of floats to the top. But going back to my use case: this pattern of how to manage and use it needs to be a consistent workflow, whether it's for Istio or non-Istio.
D
So
if
I
go
ahead
and
say,
I
want
to
go
ahead
and
just
stand
up.
A
high
availability
application
on
kubernetes
I
could
take
the
same
exact
workflow,
and
so
just
keep
that
in
mind
for
my
use
case
standpoint,
but
whatever
I
can
do.
However,
I
can
work
with
the
community
and
just
at
least
you
know
again
I'm
not
saying
that
this
is
the
right
or
only
way
to
do
it.
D: And what I'm showing you here really quick, and this is what I would come back two weeks from now and demonstrate a little bit more, is: I've got, again, the two clusters. In each cluster I've got the httpbin sample app, that's my server, and then the sleep sample app, that's my client, and I used Federation to deploy both of them.
D: So, all the same stuff we just saw with Bookinfo, and again, I'm using the standard mechanisms of Istio: create the service entry, create the gateway, and create the virtual service. And then, again, on the Federation side, I create a domain from the DNS standpoint and then basically connect DNS to the appropriate Istio component, which is the ingress gateway. And I've demonstrated and verified that I can have the sleep clients communicate with the other cluster's httpbin app, and again, I'm just using standard Istio mechanisms.
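The standard Istio mechanism mentioned here, a ServiceEntry that makes a remote cluster's service routable from inside the mesh, might be sketched like this; the host name and port are placeholders for whatever the remote cluster actually exposes:

```yaml
# Illustrative Istio ServiceEntry letting in-mesh clients (e.g. sleep)
# reach an httpbin service exposed via a remote cluster's ingress
# gateway. Host is a placeholder.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-cluster2
spec:
  hosts:
  - httpbin.default.cluster2.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
```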
D: When I go to do the Istio 1.0.3 install: what I did is I took the source of 1.0.3 and ran helm template to generate the manifest, and then at that point, it was a bit tedious, but I created the federated equivalent exactly as you see it in the standard Istio YAML. Now, one thing I did, let me pause there for a second, one thing I had to do in 1.0.3 was break things apart.
D: Let's see: the Istio types, the destination rules and all that kind of stuff, otherwise it's the Istio YAML and the Istio auth YAML. You combine those, and it's the same exact thing that you'd see from a standard install, with the main differences being: the API versions changed, because we're talking to the Fed v2 control plane; we add "Federated" to the standard type; and again, we have to add spec.template and then the associated placement resource. And there is everything.
D: So that I don't have to do this manually again, because it was painful, I'm looking at the Kustomize project: could we use that? And then, whether we do it internally or through Kustomize, take that and then, can we bring it into Helm, so that there's the ability, instead of just creating a standard manifest, to create the federated version?
C: I think, if I install Istio in a single cluster, and I have Bookinfo running, then, based on your YAML file, I think I would have to reinstall Istio for federation, and I'd have to reengineer my whole configuration YAML to be federated, to work with the federation mode. Is that standard practice for the federation community, to ask users to kind of reengineer everything, maybe reinstall everything?
D: Let me respond to question two first. What I demonstrated was an example implementation, and one of the components of that, which I briefly described, was the external-dns project, which supports multiple backends. I don't know all of them off the top of my head, but I am pretty confident that one of them is CoreDNS, so it doesn't have to be a cloud DNS backend.
C: Maybe I'm missing something. I'm looking at your Istio YAML and also the Bookinfo configuration for the gateway and the config map; it sounds like this is different from the standard Istio I'd get if I install Istio within a single cluster. So I'm thinking a user must already have cluster one before they even get to your cluster two, right? Let's say I already had cluster one, I installed Istio, I've got my Bookinfo or whatever app working; now I want to go to your federated setup.
D: Not at all. I mean, if you have three clusters and you say, well, cluster one is up and going outside of Federation, and I don't want Federation to touch it, then we would use placements and say: hey, let's create a Federation, but let's not even touch cluster one just yet; let's just do two and three.
A: Pretty clearly, the CRD part of the work probably won't work in every environment, and we don't want to rely on implementation details of how they do their work. So, because of that, we need some kind of workaround. I think you had the best solution, Costin, so maybe you can propose what you had worked up and we can go from there.
B: What I had, I'm happy to share; it was intended for 1.2 or 1.3. It was more like the federation things that we discussed earlier, more long-term than a quick fix for 1.1. And again, I'm surprised by the similarities to what was described earlier: the idea that, instead of upgrading the component as a whole, you say, you know, I have Istio Pilot running and I'm going to do a running upgrade.
B: The model is also that, hey, I start a new Istio Pilot in a different cluster, different namespace, different instance anyway, and then I gradually migrate every workload to use the new Pilot, and then I shut down the old one. So the approach I tried to take in my prototype is to break it down into components, so you can run separate namespaces for different versions and configurations of each, some subset of Pilot and telemetry, for example, while the rest remains in the istio-system namespace.
B: You start telemetry 1.1 in a namespace, you start Pilot 1.1 in a different namespace, you start the sidecar injector again with 1.1, and then you gradually move workloads to point from the old system to the new: to Pilot 1.1, with telemetry pointing to telemetry 1.1, and so forth. Then you would kind of migrate one namespace at a time.
B: It can be scripted. I mean, if you want to live dangerously, you can just run a script that creates everything at once; it's still a script. But if you are, you know, production-oriented, you probably want to have this kind of safety net where you just migrate one namespace, maybe not the most critical part of your business, and you see that everything is working, you see that you don't have any kind of failures or errors.
B: That is the main goal, and it's not only migration; it's also profiles. You may have some production workloads that are running with mTLS enabled, with whatever else is needed, and so forth, and you may have a bunch of other test jobs that have different requirements, where you want to have logging enabled by default, or have it turned off by default, or you want to have more metrics.
B: I mean, you have different profiles for environments, and I'm trying to call them environments, because you would have one Pilot in a different namespace for each environment, and one of the environments, maybe, is your master with everything turned on, and then you have the production environment where everything is locked down, with different resource allocation, different everything.
B: The current approach, when you upgrade your Istio system, is all or nothing. You start the upgrade, and then five minutes later you see the entire cluster go down, because there must have been a bug or a mistake or something wrong, and that's the core problem. And also, you have only one set of defaults: I mean, you configure your Istio system in a particular way, and that's, you know, "my way is the highway".
B: You don't have any easy way to customize. You could, you know, take the Helm charts and modify them, but if you want to turn on debug logging in one particular component, good luck with that. I mean, you need to use the Helm charts and then you need to tweak the templates and do other things. You don't have a way to easily set a particular profile on your namespace, and also the values files that we have are a nightmare.
B: We do have an operator now. We want to move away from Helm, because we keep hitting Helm problems, and we want to move to a model where more and more can be automated in an operator. So that's the direction, and it can be a simple script, like, I think, what you have. But it's also kind of moving to a split, because we have ordering issues: if you start things in a different order, you are kind of likely to get problems.
A: But if you install with Helm 2.10, you cannot upgrade if you've done nothing; there's nothing like that. If you do everything yourself, there's an upgrade path. So that's basically what the split does: it removes all the CRDs from the main install; we're going to add the rest later. I don't think that's the right way to break it up, though; actually, I think the right part is making the install process one that would be compatible with upgrades, rather than what we tell others today.
D: What I would be looking at is, Helm aside: okay, what resources need to get patched, which new resources are added, what configuration changes, all that kind of stuff. That's kind of the process I started going down a couple of weeks ago.
D: I was trying to do what I'm calling Istio live upgrades, and as I started going down there, I'm like, wow, there's a lot of things that need to change. I'd be interested in helping, potentially creating either an operator or just essentially a controller that can do the upgrade. But the way that I do things is: okay, first let's just get it down manually. How do I go from 1.0.2 to 1.1? What things change? Do it manually, capture that, and then automate it with a controller.
D: Even from a user standpoint, at this time it's like, okay, yeah, maybe it stinks that I have to do this manually, and I could, you know, create a simple batch script eventually. But until we have something that does it programmatically, at least that's an option: to say, hey, there's issue XYZ, we're going to automate this, but for the time being, go through this painful manual process. And maybe we even just create a simple script to help out in the near term.
B: That's exactly adjacent to having a small operator, a micro-operator, or however you want to call it, that's only responsible for CRDs. And the patch that is proposed is kind of that: it's basically a script, or whatever it is, that installs the CRDs independently of the installation.