From YouTube: CAPZ office hours June 26 2020
A
Okay, so we're going to start with some PSAs: upcoming release plans. I'm assuming, Cecile, you want to talk about that? Don't worry, I'll talk about the next one. Yeah.
D
I got a few questions about when the next CAPZ release is going to be and how the ongoing refactors are going to affect that, so I just wanted to clarify. I think right now we're waiting for the 0.3.7 release from CAPI to move forward. We are currently developing on a 0.3.7 alpha, but we want to get the official release before we can cut a release for CAPZ, and the refactors are not going to impact the release, I hope.
D
Ideally, we would do it all in one piece and not release before it's done, but in practice I don't think that's going to be possible. So it's completely fine if we cut a release in the middle; it's all done in a way that the master branch is always releasable. So that's fine. There are a few PRs, though, that I would like to see merged before we release, like the one from Ace that removes the plain-text secrets from the templates. I think that one is pretty important.
A
Cool
thanks,
oh
I'm,
all
right
I
will
talk
about
the
next
one,
so
we're
gonna
be
looking
for
new
moderators
for
this
meeting.
This
is
specifically
because
I'm
going
to
be
taking
off
a
couple
months
and
then
starting
a
job,
so
we
were
thinking
of
doing
it
a
couple
ways.
One
way
we
can
do
it
is.
We
could
have
a
sign
up
on
the
top
of
our
agenda
and
basically,
if
you
want
to
be
the
moderator
for
that
month,
we
leave
two
meetings:
a
month.
A
Go ahead and sign up your name and you'll be the moderator. And honestly, it's pretty easy: show up, people will guide you through it, I'll give you the host key and everything. If you're coordinating, you'll also have to upload the video to YouTube, and I'll show you how to do that too. But otherwise you just go through the agenda items, hopefully put out a call-out for new agenda items, and then we also do triage.
D
Yeah, I just wanted to add: if we do want to do the rotation and have people sign up, I added a table at the top of the document for the next few meetings, so we can try it out if anyone wants to give it a shot. And if you don't want to do the recording and everything, I can take care of that; you just have to speak. That's good.
A
Maybe the last person that was the moderator can be the new note-taker for the next week, or something like that. Cool, thanks, guys. I'm glad we're set for at least the next month, that's awesome, and hopefully we'll just keep going like that. I'm guessing, Cecile, you'll keep adding new dates on there, so we'll be good to go for the future. I'll miss y'all; maybe I'll just come and watch the videos for fun, just to be here. Okay, let's see, what month are we in this month?
F
Well, this could be an open-ended discussion, but since Nader and Carlos are here, and you guys seem much more clued in as to what's going on upstream: I've been basically flailing around trying to get a good test written that mimics the AKS Engine one, that stands up an internal load balancer, an external load balancer, a service, and all that, and I've got it mostly working. But I was slowed down a little because there are some helper functions that were upstream in 0.3.7 that I copied locally.
F
Now that we have 0.3.7, I can take advantage of those, but those only helped a little. So, as I see people have already identified, there should be some other helper functions we can fall back on. So basically I went ahead and wrote some of those, but I'm not happy with how the code is structured, and I'm just kind of looking for some guidance about here's how we want to do it, since we don't have a lot of these types of tests.
G
Yeah, in my opinion, I was thinking to add some helper functions there upstream, in Cluster API itself, like create deployments, DaemonSets, and other basic functionality as a first step. And maybe, if something is too specific to CAPZ, we add it on our side; if it's generic, if CAPA or any other provider can use it, we put it in the framework upstream. This was my idea; I made a comment in the cluster-api channel.
G
I need to open an issue in their repo to specify all the helpers, or at least the initial idea of what to implement on their side. Like, I did a PR for the network policies, and I was trying to reuse the AKS Engine tests. I saw those, but I was not able to import the e2e testing from AKS Engine, so then I copied some part of the code to the CAPZ repo. But I didn't like that.
H
I also want to say, just my opinion, that adding helper functions upstream where there's no test using them is probably not the best idea. If we need some helper functions, we can add them in our repo, and whenever CAPI needs them, we can move them there, or rather to the framework or something. But just adding helpers to have them there, I don't think it's very helpful if nobody is using them. Yeah.
G
I was trying to identify similar tests in CAPA, for example, to see if we can have something like that on their side, and like I say, maybe we can have one generic one that lives upstream, and the AWS version and the Azure version both use it. I was checking the other testing code to see if we can create the helpers such that everybody can use them, not just one provider. I agree.
F
Yeah, and that's kind of where I got slowed down: I wanted to make sure I wasn't duplicating code, because my assumption was there's got to be helpers like this out there, but I'm not familiar with CAPI and all the other stuff. So I took basically the approach Nader is describing, which is I wrote the stuff I needed, but in such a way that hopefully we can promote it up to CAPI if it's useful more generally. Yeah.
F
I took the client-go approach just because at some point Cecile said go that way, and so I did; otherwise it would have been a lot easier to copy over the tests from AKS Engine. But this is nice and standalone. So anyway, I've probably talked enough. I'll just put the PR out there and we can all see if it's useful as is, or if... yeah.
F
But that's good to know. I just kind of wanted to know what the status was of upstream helper functions. It sounds like we just proposed it and don't have an issue, so maybe the few things I've written, plus the next set of tests we write, would make it obvious which things could be promoted up to CAPI. Mostly I just have helper methods around services and jobs right now. Yeah.
A
Let's move on: David, cycles in the reconciler. Hey.
C
Hey everybody, how's it going? Recently I came across a bug, probably slightly self-inflicted, so I went back through the reconcilers, and I added in the watches and the pause notifications for controllers. So as a cluster comes up, if it's paused, we don't reconcile the objects that are associated to it. Likewise, it takes into account the annotation for pause on any of these sub-objects.
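The pause logic described here reduces to one predicate checked at the top of each reconcile. A minimal sketch, assuming the upstream Cluster API convention of a `Spec.Paused` field on the Cluster plus the `cluster.x-k8s.io/paused` annotation on individual objects; `isPaused` is an illustrative helper name, not the repo's actual function.

```go
package main

import "fmt"

// Annotation key following the upstream Cluster API convention for
// pausing reconciliation of a single object.
const pausedAnnotation = "cluster.x-k8s.io/paused"

// isPaused mirrors the behavior described above: skip reconciling an
// object if its owning cluster is paused, or if the object itself
// carries the paused annotation.
func isPaused(clusterPaused bool, annotations map[string]string) bool {
	if clusterPaused {
		return true
	}
	_, ok := annotations[pausedAnnotation]
	return ok
}

func main() {
	fmt.Println(isPaused(false, map[string]string{pausedAnnotation: "true"})) // true
	fmt.Println(isPaused(false, nil))                                         // false
	fmt.Println(isPaused(true, nil))                                          // true
}
```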
C
So this is nice, but part of it was also an exercise where Jason DeTiberus provided me some guidance on perhaps what we were doing wrong in not listening to some events in our controllers. So this led me to reworking the way that our watches are set up; they are, I think, up to date now. This then introduced a subsequent issue that I found: the Azure cluster reconciler is kicking off an event, and then that event causes Azure machines to reconcile.
C
If somebody has an idea of what we could be doing there, that would be fantastic; I would love to hear it. Maybe something based off of observedGeneration: after we get the observedGeneration in the status, maybe we can just sit there and say, hey, we're watching this thing, has it settled, or something like that. Anyway.
A
Cool, I tried to capture that. All right, so Spencer, welcome; we've been seeing you in meetings. You're up next, right? Awesome.
B
Thanks, yeah. So I just wanted to bubble up a couple of things that have been blocking me this week. Cecille dropped a PR yesterday for removing some fields from the cluster status, the AzureCluster status, which I think works fine, but it looks like it's not going to pass; I think it's not going to get merged automatically, right, because it removes something from the API.
B
Okay, awesome. So yeah, I'll rebase my PR on that; that should fix the tests I ran into yesterday, and then I'm working through the default values and the validation stuff this morning, or this afternoon now. The other thing I wanted to mention: I just wanted to get a sense of the priority, and kind of, you know, whether it was going to come soon or whether I should start taking a look at it myself.
B
This is in regard to the Azure machines failing to provision when you allocate a public IP, right. The way we're using this, at least internally, is with Talos. Talos is a little weird because the OS self-publishes an API that you can use to interact with the operating system, on port 50000, right. And as part of that, I basically need public access to nodes. So yeah, I'm totally hitting this; I wasn't, and then I was, once I got past some of the other bugs that I ran into this week.
D
It would be very tricky to fix that and then, after the refactor, rebase and fix it again, basically. So I'm working on a branch that's based on the new public IP reconcile, and then trying to get that working. It's a little tricky, because basically this broke when we added outbound rules to the load balancer, which are required to provide outbound access to the VMs in Azure when you're using a standard load balancer and you don't have public IPs assigned.
D
The problem is that you can only have one public load balancer attached per IP configuration, and so you can't have those VMs, with public IPs, attached to the load balancer that has the outbound rules; but they also have to be attached to the same load balancer for exposing services with the cloud provider. So right now, I talked about it with David a little bit, and he had a pretty good idea of maybe trying to add a separate network interface configuration and using that for the public IP. That might change things on your end.
D
So I'll talk to you about how that might impact you, because basically it won't be the primary ipconfig anymore; it will be a secondary one. Yeah.
B
Okay, that's great. Like I said, I mean, I think these are really the only two things that are blocking us now. Yeah, I mean, we're getting really close to being, I would say, for real for real: you know, Talos works on Azure. So cool, yeah. That's it for me. Thanks.
A
Okay, cool, I have a couple of items, so let's get through them quickly. But by the way, Red Hat has a forked copy of CAPZ. Okay, yeah, sorry, CAPZ; they also have one of CAPI, and I've seen another one as well. I didn't know about it, though, so yeah, that's interesting. I don't think any of us have talked to Red Hat yet.
A
Cool, awesome. And then another thing I wanted to talk about was our notion of a community repo, sorry to bring that conversation up. So we were thinking of creating some sort of community repo on the side of CAPZ, like in some other organization, to support integrations that we know we won't necessarily support end to end in CAPZ. And so I think that's just an open question on how we do it. What kinds of integrations are we talking about?
A
Do we go all the way to the infra layer? Like, for example, would Flatcar support land there instead of in CAPZ, if that were decided to make sense? So things like that I'd love to start talking through, or thinking about at least. We don't have to do this now, but I'd like to hear if anyone has thoughts on where we should draw that line in the sand; it's something that we talked about. And if we don't want to do a community repo, that's also something we can talk about.
C
So I do have one thing that I would like to try to say about this. Perhaps this is germane in the context of the plugins that we've been talking about in clusterctl; maybe that is part of this conversation, because those could end up having, like, different hooks and customizations that folks want to get into.
F
I'm kind of curious about where we draw the line for documentation and add-ons and stuff, since there are probably going to be a lot of things where we're going to want to say: go install the Helm chart, that's the best way to do it. Is the documentation itself that talks about the Helm chart something we would want to have in, like, a contributor project, or, if it's just documentation, does it stay in CAPZ? Yeah.
A
I
think
we
want
to
have
pointers
to
the
right
places
so,
like
someone
should
blant
be
able
to
land
in
cabs
a
year
in
a
copybook
and
understand,
like
all
the
different
components
that
they
need
be
able
to
build
a
like
a
fully
functioning
solution,
and
so
maybe
like
some
sort
of
like
good
path
or
a
happy
path
for
them
for
all
the
peripherals
like
and
that
I
think
should
be
well
documented
for
all
the
different
options
like
David
said:
I
think
those
can
be
exposed
through
the
templates
and
more
documentation
through,
like
individual
templates
and
like
what
the
different
options
are
there
and
then
also
links
to
either
like
Azure,
documentation
or
charts.
A
I.
Think
it's
gonna
be
a
mix
of
all
of
that,
because
I
don't
want
all
the
document.
I,
don't
think
we
need
to
document
every
little
configuration
of
what
it
does,
but
we
should
link
so.
F
Yeah, that's kind of the trade-off I was imagining: to the extent that we document something like a Helm chart in detail, directly in CAPZ documentation, it sort of implies we're supporting it to whatever extent we support CAPZ, and that's not what we want to do. We want to be like, this should work, but you're on your own. On the other hand, just linking to separate documentation is pretty rude for most users. So there's a fine line there. Yeah.
A
I think, like, enabling them to, for example, deploy the stuff will still live in CAPZ, ideally through these template configurations we're talking about, with links to find out more about what each one does. Because we don't want to, we're not going to, be supporting every single option; for example, we're not going to be testing that, and so those templates could possibly live somewhere else.
A
Maybe in a separate folder, where the stuff that lives there is kind of unsupported, and other folks can create those and iterate on them, and ideally they can go as far as they want with documentation. But our stance will be: make sure people can get to the right information, and we should fully document one happy path. Yeah.
I
So it's like an all-in-one script to help you run off upstream Kubernetes end-to-end tests, but then we want to increase our test coverage, so I modified the script and renamed it to CI entrypoint. So not only can we run upstream Kubernetes e2e tests, we can also start running some Azure-specific tests, or any other community-related tests, against CAPZ clusters.
I
So if you go to our CAPZ repo and go to the development documentation, in the last section you can see we have a conformance testing section, which basically documents how we can use the script to run the conformance tests. So you can simply just call the script, and before you call the script you can set a couple of environment variables. To run the upstream Kubernetes e2e tests it's very simple: all you have to do is declare this upstream e2e tests variable and set it to true.
I
But anyway, all you have to do is declare that environment variable and simply call the script; you can cd into whatever project you have and call a make target to run the e2e tests. This way the script will help you create a CAPZ cluster, it will run whatever command you supply, and then it will help you tear down the cluster.
I
If you choose to. And let's see, you can also bring your own CAPZ clusters: you can define a variable to skip creating the cluster, so you can continuously run and test against your existing CAPZ clusters. And currently, besides conformance tests, we are also running a bunch of Azure-specific tests. So here, as you can see, we have conformance tests for MachineDeployment and MachinePool CAPZ clusters; in addition to that, we also run Azure Disk and Azure File tests, so those are storage-specific e2e tests.
I
So, let's see. So yeah, this is the e2e test for a CAPZ 1.16 cluster, and we also run the same set of tests against MachinePool, as you can see here. And apparently there are some errors in the Azure File one, so I'm going to take a look at that. But yeah, overall, the next step would be to continue increasing the test coverage.
I
What I will be working on is incorporating more Azure-specific tests; more specifically, I want to add cloud-provider-azure-specific e2e tests, to test stuff like load balancers or network security groups, stuff like that. And if you guys don't know, this is TestGrid; basically, it's a dashboard where we show our historic test results.
I
I can paste a link in the chat for you guys to check it out and look at the results, and you can play around with it. And last but not least, I also want to talk about the log collection that was worked on last month. So basically, for each job run, if you click on it, you can also click on artifacts, and you can check out all the logs: all the control plane logs, and the logs for all the machines in the cluster.
I
So, for example, if we go to one of the clusters, you can see that we have a control plane and some machine deployments, and if you click on the control plane, you can check out all the logs for all the pods in the control plane. So, for example, we can check out the API server log, yeah. So this is here because it'll help us debug the cluster, which is very useful. So yeah, that's pretty much it. Let me paste a link in the chat, in case you have any questions.
D
Okay, so I just wanted to talk a bit about the refactoring story, the stuff that I've been working on recently, in case people are curious about the big PRs in the queue, and talk a bit about the design behind it. So basically, what we're trying to do, and this is from an old issue that's been open for a long time, is simplify the way we do services.
D
So I'll talk about the goals first, before talking about the actual implementation. I think the main goals are, first of all, to get rid of this Spec interface for inputs and outputs; we only want to have a clean service looking like this, where Reconcile and Delete only take a context and return an error, and each AzureCluster, or each object's, reconcile and delete loop is a composition of the smaller services' reconciles and deletes.
D
So, for example, AzureCluster reconciles public IPs; it reconciles load balancers, it reconciles resource groups, etc. And so what I want to get to in the end is that each object is just a composition of calls to other reconciles, instead of what we're doing now, which is the services doing a bunch of logic, like setting the spec and then calling reconcile for each different resource.
D
So if you have two public IPs you have to provision, it will call reconcile public IPs twice, once for each public IP, with a different spec. And then the other goal, and I think this summarizes it pretty well, is we want to make it possible to share services between different objects. So, for example, I'm taking public IP as an example all along because this is what this PR does, but it's actually a really good use case; that's why I chose it. Public IPs are reconciled both in machines and in Azure clusters.
D
Right now, for Azure clusters, you need a public IP for the load balancer, for the public load balancer; and for machines, this is the use case that Spencer was talking about earlier: if you want a public IP assigned to your nodes, then that's going to get reconciled by AzureMachine. So we want to make it possible to share the service that reconciles public IPs between two different Azure CAPZ resources.
D
Also, the services shouldn't care about what object, what resource, they're reconciling against; they should just care about the Azure stuff. When we're testing, this is really important, because when we write unit tests (I know a lot of you have helped write unit tests for services) you've probably noticed you need to define all this scope stuff, and the services actually test more than just creating Azure resources.
D
So the public IP service should only care about the DNS name, the public IP names, how many public IPs it needs, and their specification; it shouldn't care about what VNet or resource group the cluster is in and all of that extra stuff. And that also leads to the third thing, which is the scope: the service should be very explicit about what it needs from the scope. Right now the scope is kind of like this.
D
That being said, this PR is really big, because it lays kind of all the groundwork for it, including some auth changes. So I'm just going to look at this smaller one, in which, basically, I took this PR and on top of it implemented the same thing for network interfaces. It's a lot smaller because it only contains the network interface changes, but I just want to show you what it looks like.
D
Only AzureMachine calls this function, but in the public IP case, as I said before, there are two callers, and there are also cases where MachinePool and Machine share the same services, or managed cluster and managed machine pool; there's some overlap there. So that's the idea. And then, this NICSpecs basically returns a list of network interface specs. So we don't assume that each... so we know that most services will usually have to create several resources.
D
So instead of having to call the reconcile service multiple times from the reconciler, from the controller, we only call it once, and it knows the list of services, sorry, the list of resources, that it needs to create. And so, for example, for the network interface, this is very similar to what used to be in the service itself.
D
Like here. And so now it knows that the scope has to have that information; what I was looking at before, that's in the machine scope, and so the machine scope defines this, and any other CAPZ resource that needs to reconcile network interfaces should also define this, and they might have a different implementation and return different things.
D
But I just wanted to give a kind of quick overview of what the idea behind the refactor was. And, oh, in terms of testing it's pretty cool, because now we can mock the scope, and so instead of having to define the scope in the unit tests, you can mock it, and then you can control what it returns. Right, let's look at this unit test.
D
So all of this is gone; this was basically defined for every service unit test, and it was shared amongst all the test cases, so you also didn't get that much flexibility. And so now we're not defining this anymore; instead, each test case has, you know, this mock for the scope, and so it's able to say: okay, if I call NICSpecs, then return this, and this is going to be my definition, and based on this I expect this...
D
You know, this network interface to be created, yeah. Anyone have any questions or comments or feedback, or are you all just confused?
A
I can attest to that. Okay, this one is in the backlog; I'm going to put it as in progress. Yeah.
A
Cool. "Test networking policies": in progress, okay. "Mock tests do not call ctrl.Finish()": so, I did say it looks a bit weird; okay, put that there. "Correctly handle creating separate route tables for node and control plane": to do; backlog, backlog, okay. "Create an outbound gateway when public IP is enabled on AzureMachine": to do, to do, yeah. Okay. "Refactor public IP service to get specs from scope": in progress.
A
"CAPZ should use out-of-tree cloud controller manager": backlog, okay. "Initial support for conditions for AzureCluster and AzureMachine": in progress, assigned to me. And then, "services: add Azure Bastion host service", okay. Cool, let's just double-check these to-dos; actually, the assignments for everything look right. Okay.
A
Yeah, we're starting to get into territory for the next milestone anyway. So I believe we have another month on this milestone, but for the next one, I think we're going to be closing it out in the middle of the month, so feel free to start picking stuff from the next one. Great.