From YouTube: kubeadm office hours 2019 07 10
Description
agenda
A
B
So, I'm trying to sort out the priorities of this cycle, and I think that it is important to disambiguate two different activities that we tend to run together. One is "change the cluster" and the other is Kustomize. So what is it that is ambiguous? "Change the cluster", for me, is a new kubeadm minimal flow that will allow modifying settings of an existing cluster: so I have a cluster, and I need a new workflow to change it.
B
It is something that is meant to replace what people are doing today, and that is, for instance, using phases, or using kubeadm upgrade while changing the config but not the version, or changing the manifests manually. So this is "change the cluster" for me: a change on a running cluster, or an existing cluster. Kustomize is a different topic, a different discussion that we always have in kubeadm, because people are continuously asking for new possibilities to configure the cluster even before creating it, and so typically they ask for new flags in the config, but it is something that we don't want to allow too much of, in order to keep it under control. So we discussed a few times taking into consideration the use of Kustomize, for allowing the user to do advanced customization without being forced to extend the kubeadm config. So basically it is another semantics for expressing it.
C
B
We forgot the declarative behavior, or the operator behavior. I think that is something that should be discussed, because at the end it boils down to two things. The first thing, in my opinion, is that kubeadm is the tool that generates some artifacts, just some static pods or certificates, or also in-cluster configurations, and kubeadm is also responsible for doing that. But the rest, let me say, in my opinion, belongs to the operator side.
D
So, to some extent, what I see is that kubeadm is the enabler for the operator: when you have to change some cluster-wide settings, you have the operator that will iterate through all the nodes and call into this kubeadm way of changing a running cluster. So it's like repeating the same command over and over on all nodes, right? Is that correct? Right, yes.
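(To make that division of responsibilities concrete, here is a minimal sketch, not part of kubeadm itself, of what such an operator loop could look like: it lists the nodes with client-go and would invoke a per-node, kubeadm-style action on each one. The applyNodeChange function and the idea of driving kubeadm per node are assumptions for illustration only.)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// applyNodeChange is a hypothetical per-node actuator: in a real operator it
// would trigger the atomic, single-node action (for example a kubeadm phase)
// on the given node.
func applyNodeChange(nodeName string) error {
	fmt.Printf("applying change on node %s\n", nodeName)
	return nil
}

func main() {
	// Build a client from the local kubeconfig (assumption: running out-of-cluster).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The operator, not kubeadm, owns the iteration over all nodes.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Repeat the same single-node action on every node, one at a time.
		if err := applyNodeChange(node.Name); err != nil {
			fmt.Printf("change failed on %s, stopping: %v\n", node.Name, err)
			break
		}
	}
}
```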
B
So I agree with Tim. Let me say, the orchestration part of changing the cluster is not a responsibility of kubeadm; kubeadm is a tool that can act only on one node. This is a limitation of its own architecture, it is by design. So performing the operation on many nodes of the cluster and orchestrating the operation is a part of the operator, but kubeadm should offer the actuators, let me say the atomic actions, or all the required tools for making the change happen. So I agree with Tim here.
C
A
There's also the naming, "kustomize", I mean. We shouldn't care how we name the two tasks that we're going to separate, but I just want to point out that "customize", by definition, means changing something to comply with a customer, so it implies change as well. It just doesn't matter how we call these features.
B
A
B
This is part of the reason for this big debate. They are two topics; in my opinion they are two different topics, even if they are related. In my mind, my intent is to try to divide them and scope down to something that is actionable, which we cannot do if we keep the two things together.
B
Let's deep dive a little bit into what "change the cluster" means. The first point that I want to discuss together is: what is a change? What do we define as a change of the cluster? I think that something is crystal clear in this scope: for instance, changing the configuration or the flags of the core components, the API server, the scheduler and the controller manager, in my opinion, is definitely part of this work. Or, for instance, changing kubelet flags.
A
So the kubelet configuration is interesting. I think in kubeadm, in general, we are making the assumption that all nodes are based on an image that is the same for all of them, and that's not true. Oftentimes people can provision different operating systems, bare metal stuff, and they have different resources; they have different requirements for some of the workloads.
C
You can specify a different config, though. That was one of the features that we, or Fabrizio, added in the beginning. I don't know if we can specify a different one now.
C
I'm actually on my A game today, which is rare; I actually got some sleep. So, it was around 1.13 and 1.14 that we added this feature, because of the exact scenario of the kubelet config not being fully up to date, and we wanted to have the heterogeneous capability: to be able to specify the location of where you could have kubelet configs stored. So if you had different pools, node pools, different groupings, you could pull config A, or config B, or config C.
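(As a purely hypothetical sketch of that "different pools pull different kubelet configs" idea: map a node-pool label value to the name of the ConfigMap that pool should fetch. The label values, ConfigMap names, and the mechanism itself are assumptions; kubeadm does not provide this out of the box.)

```go
package main

import "fmt"

// poolConfigMaps maps a hypothetical node-pool label value to the kubelet
// ConfigMap that nodes in that pool would pull. All names here are made up.
var poolConfigMaps = map[string]string{
	"general":   "kubelet-config-pool-a",
	"gpu":       "kubelet-config-pool-b",
	"baremetal": "kubelet-config-pool-c",
}

// configMapForPool returns the pool-specific ConfigMap name, falling back to a
// single shared config when the pool is unknown.
func configMapForPool(poolLabel string) string {
	if name, ok := poolConfigMaps[poolLabel]; ok {
		return name
	}
	return "kubelet-config"
}

func main() {
	fmt.Println(configMapForPool("gpu"))
	fmt.Println(configMapForPool("unknown"))
}
```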
D
A
F
A
The version part is also kind of broken, and that brokenness is by design, because if we want to support the official Kubernetes version skew for kubelets, we can't. I was talking with Andy the other day, sorry, one month ago, about something: currently we can only join nodes that match the version of the kubelet of the primary node, and that's a problem. It is there because we version the config map; the version is part of the config map name for the kubelet configuration.
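(For context, kubeadm stores the kubelet configuration in a ConfigMap whose name embeds a minor version, for example kubelet-config-1.15. A minimal sketch of how a node could derive that name from its own kubelet version, which is roughly where the skew problem comes from, assuming a simple semver-style version string:)

```go
package main

import (
	"fmt"
	"strings"
)

// kubeletConfigMapName derives the versioned ConfigMap name used for the
// kubelet configuration, e.g. "v1.15.3" -> "kubelet-config-1.15".
// The naming scheme mirrors what kubeadm uses; error handling is minimal.
func kubeletConfigMapName(kubeletVersion string) (string, error) {
	parts := strings.SplitN(strings.TrimPrefix(kubeletVersion, "v"), ".", 3)
	if len(parts) < 2 {
		return "", fmt.Errorf("unexpected version %q", kubeletVersion)
	}
	return fmt.Sprintf("kubelet-config-%s.%s", parts[0], parts[1]), nil
}

func main() {
	// A node whose kubelet minor version differs from the control plane's
	// would look up a ConfigMap name that may not exist, hence the join problem.
	for _, v := range []string{"v1.15.0", "v1.14.6"} {
		name, _ := kubeletConfigMapName(v)
		fmt.Println(v, "->", name)
	}
}
```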
B
Sorry to interrupt you, but this discussion about understanding how the kubelet config map is managed, how it follows the kubelet version, or whether the feature of groups of nodes is important, I will open an issue to track it. But, if possible, I want to go back to the topic. So, when we are talking about changing the cluster, what do we consider in scope for this activity?
A
B
C
F
C
I hear you, yeah. It depends upon how destructive the change is, right? Like, in an ideal world there should be some metadata applied to knobs, like: sure, go ahead and twiddle this bit, it's not going to matter, it's not going to make major changes to your cluster. OK, I would expect reconciliation to occur; we don't actually have sync-up processes inside of kubeadm, or those kinds of guarantees. You know, putting my old cluster-manager hat on, every other cluster manager on the planet has sync-up for something.
C
So the fact that we don't do this is depressing, with my old-man hat on, but you should be able to do it, right? You should be able to do a sync-up, a reconciliation, for a knob, and then have the destructive ones that require a process restart. And I do agree that you potentially want to have the capability of either doing it automatically, or the capability of doing it in a controlled fashion. Yeah.
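(As a rough illustration of the "metadata on knobs" idea, here is a minimal, entirely hypothetical sketch of how settings could carry a disruptiveness flag, so a reconciler applies harmless changes immediately and defers the ones that need a component restart to a controlled rollout. The Knob type and functions are assumptions, not anything that exists in kubeadm today.)

```go
package main

import "fmt"

// Knob describes a single configurable setting plus metadata about how
// disruptive changing it is.
type Knob struct {
	Name       string
	Disruptive bool // true if applying it requires a component restart
	Apply      func(value string) error
}

// reconcile applies non-disruptive knobs immediately and collects the
// disruptive ones so an operator (or a human) can roll them out in a
// controlled fashion.
func reconcile(knobs []Knob, desired map[string]string) (deferred []Knob) {
	for _, k := range knobs {
		value, changed := desired[k.Name]
		if !changed {
			continue
		}
		if k.Disruptive {
			deferred = append(deferred, k)
			continue
		}
		if err := k.Apply(value); err != nil {
			fmt.Printf("failed to apply %s: %v\n", k.Name, err)
		}
	}
	return deferred
}

func main() {
	knobs := []Knob{
		{Name: "log-verbosity", Disruptive: false, Apply: func(v string) error { fmt.Println("set log-verbosity to", v); return nil }},
		{Name: "service-cidr", Disruptive: true, Apply: func(v string) error { return nil }},
	}
	deferred := reconcile(knobs, map[string]string{"log-verbosity": "4", "service-cidr": "10.96.0.0/16"})
	for _, k := range deferred {
		fmt.Println("needs controlled restart:", k.Name)
	}
}
```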
C
F
B
C
Lee makes a good point where you'd want the option or capability to do that, but it could be potentially catastrophic, where you'd want an external component to orchestrate that for you, just like, you know, you have the node controller that does things in a piecemeal fashion. If you have all these configurations, and you don't want all the kubelets to magically restart and accidentally jack your whole cluster sideways, you'd want to do this in a controlled fashion.
F
A
F
B
F
I did have some implementation ideas that I left in the comments, if you'd like me to copy and paste those somewhere in the notes, or if you just want to paste them somewhere; I don't know if they're helpful. But the idea is having a phase set be configurable by the user: being able to run the same phase multiple times, in a declarative fashion, you know, ordered, and maybe there's some orchestration stuff in there too, with the multi-node control plane.
B
F
A
It would be limited, but I was thinking about something: the way we do upgrades right now is using apply, and then we do the parallel operation. We can do something similar here: we can modify the objects which are in question, that is, the cluster configuration and the kubelet configuration, and then we can have some sort of per-node command, something like a "kubeadm sync" or something like that, and it essentially is going to pull the changes and apply them locally: to the manifests, to the kubelet configuration, to flags, potentially to environment files.
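(A minimal sketch of what the pull-and-apply half of such a hypothetical per-node "sync" command could look like: it reads the kubelet ConfigMap from the cluster with client-go and writes it to the local kubelet config path. The command itself, the ConfigMap name, the data key, and the file path are assumptions for illustration; real kubeadm behavior may differ.)

```go
package main

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const (
	configMapName = "kubelet-config-1.15"          // assumed versioned name
	localPath     = "/var/lib/kubelet/config.yaml" // assumed kubelet config path
)

func main() {
	// Use the node's kubelet kubeconfig to talk to the cluster (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pull the in-cluster kubelet configuration.
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), configMapName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Apply it locally; restarting the kubelet (not shown) would pick it up.
	// "kubelet" is the assumed data key holding the serialized config.
	if err := os.WriteFile(localPath, []byte(cm.Data["kubelet"]), 0644); err != nil {
		panic(err)
	}
}
```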
F
You could even commit a phase config to the cluster in a ConfigMap, and then kubeadm join could load the phase config by ID, right? So you could say: oh, you know, kubeadm, change this config. It spits back an ID and says: hey, I uploaded this thing to the cluster. And then you can run that on every one of your nodes and say: hey, kubeadm, you know, change join ID this, or change worker ID this.
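(A rough sketch of that upload-and-reference idea, where everything, the names, the namespace, and the ID scheme, is hypothetical: the change is stored under a content-hash ID in a ConfigMap, and a node could later fetch it by that ID.)

```go
package main

import (
	"context"
	"crypto/sha256"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// uploadPhaseConfig stores the given config under a short content-hash ID and
// returns that ID so it can be referenced from other nodes (hypothetical flow).
func uploadPhaseConfig(client kubernetes.Interface, config string) (string, error) {
	id := fmt.Sprintf("%x", sha256.Sum256([]byte(config)))[:12]
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "phase-config-" + id, Namespace: "kube-system"},
		Data:       map[string]string{"config": config},
	}
	_, err := client.CoreV1().ConfigMaps("kube-system").Create(context.TODO(), cm, metav1.CreateOptions{})
	return id, err
}

// fetchPhaseConfig is what a joining node could call with the ID it was given.
func fetchPhaseConfig(client kubernetes.Interface, id string) (string, error) {
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "phase-config-"+id, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return cm.Data["config"], nil
}

func main() {
	// Wiring up a real client is omitted; see the operator sketch earlier.
	fmt.Println("uploadPhaseConfig / fetchPhaseConfig are illustrative only")
}
```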
B
I think, OK, the idea is a good one, but I'm still a little bit earlier in the discussion, because it is still not clear to me what we expect that change should do, and whether a change should be, let me say, declarative. For instance, there are certain changes that can disrupt or destroy the cluster.
F
C
Hey, people should be piloting the changes, and I don't want to put too many guardrails in, because that puts so much extra logic into kubeadm. I think we should... you know, that's the whole purpose of dry run, and of people taking a look at things to verify stuff. And if people put all their eggs in one basket with a change and don't pilot it before they run it, I'm kind of in one of those states of: OK, you ran everything, that's on you personally.
F
Just on the guardrails comment: I find a lot of the guardrails that we have in kubeadm to be anti-user and very frustrating. A lot of the stuff with networking, like... I really can't... I haven't been able to create a 1.15 cluster that's bound to localhost; it just doesn't let you do it.
B
D
Now, what I mean is, if someone is going to change, for example, the control plane endpoint: we should somehow have a whitelist of things we are allowed to change, or a blacklist of things we are not allowed to change. So the idea would be just to start with something simple, and probably something that people actually want, that is doable with a low effort, and then keep growing that list, maybe.
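(A minimal sketch of such an allowlist check; the field names here are chosen purely as hypothetical examples, since which fields would really be safe to change is exactly the open question in this discussion.)

```go
package main

import "fmt"

// mutableFields is a hypothetical allowlist of ClusterConfiguration fields that
// a "change the cluster" flow would accept; everything else is rejected.
var mutableFields = map[string]bool{
	"apiServer.extraArgs":         true,
	"controllerManager.extraArgs": true,
	"scheduler.extraArgs":         true,
	// "controlPlaneEndpoint" is deliberately absent: changing it is disruptive.
}

// validateChange rejects any requested change that touches a field outside the allowlist.
func validateChange(changedFields []string) error {
	for _, f := range changedFields {
		if !mutableFields[f] {
			return fmt.Errorf("field %q is not allowed to be changed on a running cluster", f)
		}
	}
	return nil
}

func main() {
	if err := validateChange([]string{"apiServer.extraArgs", "controlPlaneEndpoint"}); err != nil {
		fmt.Println(err)
	}
}
```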
C
I keep on thinking, in this conversation, that we're putting the cart before the horse: we don't have component config for a lot of the components in place. The API server is a concrete example, right? Because we don't have component config for it, people haven't taken a hard look at all the knobs that exist, at what things can change, what things would have metadata that would potentially say that this is destructive. I would put a requirement on the component side.
C
A
C
Actually, you could: if you separated out the behavior into different components, there is no reason an operator couldn't roll it out, try one, do a delayed timeout, and if it fails, roll back, right? That could be a responsibility of the operator, and that's typically the way modern automated deployments work. They would basically say: try one, give it some time, an interval, and if it doesn't work, roll it back with an error code, and your information from that is that your change failed.
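(A sketch of that try-with-timeout-then-rollback pattern, generic and not tied to any real kubeadm or operator API; the apply, healthy, and rollback callbacks are placeholders.)

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// tryChangeWithRollback applies a change, waits up to `timeout` for the health
// check to pass, and rolls back if it never does. This mirrors the
// "try one, delayed timeout, rollback on failure" flow discussed above.
func tryChangeWithRollback(apply, rollback func() error, healthy func() bool, timeout time.Duration) error {
	if err := apply(); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if healthy() {
			return nil // change took effect, nothing else to do
		}
		time.Sleep(2 * time.Second)
	}
	// The change never became healthy: undo it and report a failure.
	if err := rollback(); err != nil {
		return fmt.Errorf("change failed and rollback also failed: %v", err)
	}
	return errors.New("change failed and was rolled back")
}

func main() {
	err := tryChangeWithRollback(
		func() error { fmt.Println("applying change on node"); return nil },
		func() error { fmt.Println("rolling back change"); return nil },
		func() bool { return false }, // pretend the component never comes back healthy
		6*time.Second,
	)
	fmt.Println(err)
}
```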
C
A
Technically, for this feature, we want a subset of upgrade. We don't want the full thing; I don't think we want the full capability, we only want parts of it: like customizing, sorry, changing parameters in the cluster configuration and restarting all the components there. This is something that users want; that's why they are using the config for upgrades: they pass a new config that has differences from the one that was stored in the cluster.
A
C
So I want to take a step back here for a second, because I was looking back at the prioritization that we did for this cycle, and I know we've talked about changing a running cluster many times, but it wasn't actually on the priority list that I see for this cycle. We have Kustomize as P1, because that gets asked for every single cycle.
A
C
I agree that the users want this. What I see, though, is that there's a set of features that are very broad in scope that we don't have nailed down in KEPs yet, and I think I would like, maybe as part of this conversation, that we enumerate that set and try to get those as dependencies, and this changing of a running cluster turns into a new epic which could probably span multiple of these cycles.
F
So, like, we've got existing structure in the tool, and we should consider why those existing structures are not serving these use cases for people, and what we can do to improve the user experience and the onboarding experience, so that people understand, and the tool facilitates, their use case, all the pieces.
F
B
F
That's what I'm getting at: I believe that there are ways that we can aggregate these things simply and document them, so that some patterns emerge first. I do think that Tim's point, with regard to the fact that we don't have mature component config for every component, like graduated to beta, I feel like that's a really big deal.
F
If you want to talk about a higher-order user experience for doing cluster config ops, obviously I'm a little biased, but that would be my input on what we would need to focus on before really trying to build something dedicated around this inside of kubeadm. It's my opinion that we have existing structure in the tools that can be used to improve the documentation, onboarding, and, like, user experience.
A
A question related to secondary control planes and phases. I think that phases are a great approach for the initial control plane; we can definitely document how to, you know, modify the control plane components. But what happens for the secondary nodes is an interesting situation, because I am not sure if we have a phase for it; Fabrizio has to confirm whether a secondary control plane node can just pull the updated cluster configuration and apply the changes to the local manifests.
F
I think that that's a great example of questioning whether something is missing in how we factored out phases, because the answer to that should be yes, and it just might not be, for any number of reasons, and we can just file a bug, or a feature improvement, and then go fix that. And then that would fit within, like, going toward the priority of getting users what they need in order to do the thing that they need to do, I mean.
A
I think one big problem here is that I saw users using the config for upgrade as a workaround for this, instead of using phases. But nobody provided clear use cases for what they want to accomplish; I saw a couple of items in a ticket, but the reality is that we don't have coverage for all the use cases.
C
Yeah, one of the things that would be beneficial would be to bin and categorize. We don't have an area label, or whatever we're going to call it, for this; we could say "changing a running cluster", an area/foo label, so that we can bin all this stuff together. Because there are distinctly two bits, right? Like, as you mentioned earlier, there's the one-time pass for customization on the front end.
C
Then there are the runtime changes that people want to make after you've actually deployed a cluster. So if we can start to bin those, or bucket those, then we can have, I think, maybe a slightly more informed conversation of: look, this is what we see in the wild, we've seen the mass of issues. I know that, but I don't have a firm list of: these are the known knobs where people have broken their clusters, or themselves, trying to do this.
A
Yeah, we definitely should be curating a list for this, and maybe that is going to be an indicator that we should punt this work until we have more proof. I just thought about an example of something that users may want to do; I can give it here. For instance, imagine someone deploying the default CoreDNS, and at some point they want to modify it. The way they currently have to do it is changing the CoreDNS ConfigMap and restarting; I think the pods are going to restart automatically, but the problem there is the way it has to be done.
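(For reference, the workflow being described, editing the CoreDNS ConfigMap and then bouncing the pods so they reload it, could look roughly like this with client-go; the placeholder Corefile edit and the restart-by-annotation trick are illustrative assumptions, not a recommended kubeadm procedure.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// 1. Edit the CoreDNS ConfigMap (here: just rewrite the Corefile key unchanged).
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cm.Data["Corefile"] = cm.Data["Corefile"] // placeholder: apply the real edit here
	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// 2. Trigger a rolling restart of the CoreDNS deployment so the pods reload
	//    the ConfigMap (same effect as `kubectl rollout restart`).
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"example.invalid/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339))
	if _, err := client.AppsV1().Deployments("kube-system").Patch(
		ctx, "coredns", types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```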
A
F
A
B
A
I mean, you know, I agree: in an ideal world each component should have its config, like Tim is saying, stored somewhere, and if somebody modifies it, the component automatically reloads its configuration and restarts. The kubelet attempted that with dynamic kubelet configuration, but the problem is that it doesn't work as expected on the user side, and if we gate on this specific aspect of component config, maybe we are just going to delay this feature for a long time.
A
So I agree that we need the list of use cases, but maybe, like Lee said, we should start documenting how to customize all the stuff. We don't have to, you know, change anything: using phases, let's go document what we have right now, what is possible right now. Maybe this is the action item for this cycle, I think.
C
Having documentation to help guide the path for when people break themselves on config changes is a good path forward, because once they've tried to modify the config after they've done the deployment... I think we should have some documentation that helps them unbreak themselves, even though we know full well that a large number of issues have been filed against this.
D
This refers to something from a while back: so, one year ago we were, how should I say, kind of interested in self-hosting, but right now we are going with the static manifest way. So I was wondering if I should go ahead and clean up this self-hosting logic, because we are going to work on different things this cycle, and I think it would be great to remove that alpha code. Do we need any plan or any announcement to clean that up, if someone is interested in working on that or cleaning it up?
G
C
I'd love to leave that in there, specifically because for some people the pivot works fine: for HA deployments, for HA deployments that, you know, feel OK about the whole DC outage problem, right? And we left it as an alpha phase, so the person has to opt into that, and there's, you know, we even have that warning that says: once you do this, kubeadm is now out of support, there's no way back.
A
I think we already had a couple of users: one was friends of Lee's, the other one was the SUSE folks, and maybe another one, so three in total, I think. But we should keep it, and also the docs are now, thanks to that work, consolidated in a single location, so we don't even promote it as a feature that is supported.
C
But as we found out, this is a security hellscape, for lack of a better term, and so because of that we willfully punted. So it's a "we're not going to do it", and it's in the alpha-grade support for people who are OK with it, right? So there are categories of solutions that we're OK with people opting into, that we know about. Like, a person has an HA environment and they're fairly comfortable with their DC, and they feel like it's never going to go out, and we're just like, you know, that's a reasonable expectation, and they're OK with that approach. And I think having that capability there has proven itself useful to users of kubeadm.
D
All right, and the second one is about the upgrade docs. They got improved a lot; thank you for working on that, and to the others involved. Now, in 1.15, we have one command for the first control plane, and then the same command for secondary control planes and worker nodes. My idea was, and I'm actually working on this in another project, basically using the very same command for the whole upgrade story, both for the first control plane and for secondary control planes and worker nodes. It has to do some stuff like comparing the kubeadm config coming from the cluster with the one from the user, and so on. But do you think it would make sense for me to create a pull request, or maybe an enhancement? I don't know, maybe it's not that important to warrant an enhancement request. So do you think, would you be fine if I work on this during this cycle? I will keep the older commands; it's just about creating one that is the same for all of them.
A
So I raised the same topic before, and basically, to summarize, the summary is that we don't really care that much if the commands are the same; if we think about it, we have different commands for init and join. Do you have a specific problem with the different commands, or is it just a nice-to-have in general?
D
A
I'd recommend that, by the way: upgrade phases are nice, but we punted them outside of this cycle. So if anybody has time to work on this, please go ahead, because I'm seeing more and more requests about upgrade phases. We already have phases for "upgrade node", but not for "upgrade apply", and that's the tricky part.
I
This is nothing tied to dual stack, but in general they weren't being tested, so I added a couple of tests for that; please take a look. Let me know if I'm using a sound approach to writing these end-to-end tests, and again, if it's useful, then we can either keep it as part of this PR or pull it into a separate PR.
I
A
I
A
H
A
I
Yeah, that sounds great. And also, the second point was, I think Fabrizio mentioned that we should add some kind of a check, so that, let's say, for example, one of the fields is set to be, you know, dual-stack related... A lot of the changes with dual stack, in how they impact kubeadm, are that a lot of fields which were singular before are now sort of like slices; they're like an array of IP addresses, for example, right? And also these fields are sort of spread across different configurations: there's some in the cluster config networking, obviously you have the API endpoint, which is in the init config, and there are the kube-proxy changes. So it's very hard to do this sort of test, or this validation, where we say: oh, if one of the fields is off, you know, we throw an error, just because the fields are spread everywhere. It's a lot simpler to just, you know, treat each individual field individually, if that makes sense.

I don't think it will be a problem, primarily because, and I think that was one of the questions that came up in the PR, just the way that we're parsing the pods CIDR field, it should not in any way adversely impact things whether it's, you know, a single IP in there or a comma-separated slice of IPs. I personally don't see that being an issue. That's also sort of the approach that's been taken by the other PRs that are not kubeadm but are tied to dual stack in mainline Kubernetes.
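(For illustration, parsing a field that may hold either a single CIDR or a comma-separated list of CIDRs, as described for the dual-stack changes, can go through one uniform code path; this is a generic sketch, not the actual kubeadm validation code.)

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseCIDRs accepts either a single CIDR ("10.244.0.0/16") or a
// comma-separated list ("10.244.0.0/16,fd00::/48") and validates each entry,
// so singular and dual-stack values are handled the same way.
func parseCIDRs(value string) ([]*net.IPNet, error) {
	var result []*net.IPNet
	for _, s := range strings.Split(value, ",") {
		s = strings.TrimSpace(s)
		if s == "" {
			continue
		}
		_, cidr, err := net.ParseCIDR(s)
		if err != nil {
			return nil, fmt.Errorf("invalid CIDR %q: %v", s, err)
		}
		result = append(result, cidr)
	}
	return result, nil
}

func main() {
	for _, v := range []string{"10.244.0.0/16", "10.244.0.0/16,fd00:10:244::/56"} {
		cidrs, err := parseCIDRs(v)
		fmt.Println(v, "->", cidrs, err)
	}
}
```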
F
I
F
A
Well, wait a minute. We have a structure there; can you explain the name? We have a structure that is related to that; it basically has a port and an address for the component, and now this field has to become dual-stack aware as well? I can open the config quickly.
A
I
The endpoint, exactly, yes. So this dual-stack work is being done in different stages, so that the local API endpoint's advertise address will also become a slice, a comma-separated slice of IPs. So I don't imagine all of these changes coming in immediately in kubeadm; I expect follow-up PRs.
I
A
B
What I saw looking at the PR is that, for creating a dual-stack cluster, the user has to do configuration in many places, so what I'm concerned about is that this can become an issue for the user, because they can make misconfigurations. And here we have two options: one is to try to implement something that adds a validation that helps the user do things right, at least the basic checks. I don't know if there is a minimal set of configuration that is required for dual stack.