From YouTube: 2019-03-05 Crossplane Community Meeting
A: All right, so the recording has started. This is the March 5th, 2019 Crossplane community meeting and, as always, we'll start this meeting with a milestone checkup. Let's take a quick look at the roadmap. We are in the middle, or actually towards the end, of this early milestone period: the first release of Crossplane, when the repo was taken public, was right before KubeCon Seattle around the beginning of December. In 2019 we're looking to get on a quarterly release cadence, which — after taking a nice little break following 0.1 and the holidays and everything — sets us up for a 0.2 release sometime later this month in March, and then a 0.3 release sometime around June, around KubeCon Barcelona. So we're in the middle of the 0.2 milestone right now, and the focus areas for 0.2 were around a few things. One of them was workload scheduling: to start influencing, and have the initial implementations of, the scheduler, which makes smart decisions, or user-influenced decisions, about where a workload should run — which cluster, which cloud provider, etc.
A: There is more discussion, I think on issue 279, about the scheduler being location-aware, and as Ilya has commented on that issue, there is a little bit more context that needs to be fleshed out: what does location mean, and how, from a user experience perspective and from a context-capturing perspective for the workload and the scheduler, can we capture that information about location and plumb it down through the scheduler. So I think there's more design work to do here that will be part of a grander vision for the scheduler design. Ilya, let me know if you have anything to add to that assessment.
A: OK, and then we had another focus on enabling support for new managed services that can be deployed, configured, and managed by Crossplane for workloads to consume. The set of managed services that we decided upon was influenced by our partnership with GitLab, because the longer-term vision — around 0.3 — is to be able to deploy the GitLab project and its entire stack in an automated fashion. Some of the managed services they rely on are object storage buckets, a Postgres database, and a Redis cache, so we wanted to focus on making sure that the resources GitLab requires to deploy are also supported in Crossplane, so that we can execute that fuller vision of having GitLab automatically deployed by Crossplane. Nick recently finished up the Redis support across all three major cloud providers that we support so far — Azure, AWS, and Google as well. Nick, that's in master now, is that correct?
A: Right, sweet. So that's a very recent update: we have Redis support across all three cloud providers, which is exciting, and we have Postgres support, which I think made it to master last month. With those two managed services we have the foundation of the services required for GitLab, and we can start focusing more on the other application requirements for GitLab — how to package the rest of their components, how to define the dependencies, and all that sort of stuff. I think Nick will be diving into that with the GitLab team and start fleshing it out, and that's targeted for 0.3 — it's not anticipated to be finished within the 0.2 milestone, and it will be a much larger effort. I'm excited to have Nick's expertise on that one. And then the third major focus for the 0.2 milestone, which is ongoing — this one is not completed — is about how we can integrate Crossplane with other types of services that deploy directly to Kubernetes using an operator.
A: So let's go ahead and take a look at the 0.2 milestone, since we've defined what the major focuses were for it. We have a set of tickets here in the milestone — we are more than halfway through — and about half of them, I guess, have owners, and the other ones are open for anyone from the community who wants to jump in and contribute to them.
A: Let's see — I don't think there are too many on here that I am very worried about. The biggest one we have left is still the managed services using the Kubernetes operator pattern; I think that one still has the most design work and things to worry about, but I don't have large concerns about the other issues in this milestone.
A: That sounds great, John. Yeah, I think the 0.2 project board should be in agreement with the 0.2 milestone — everything should be up to date there — and any of these issues that are unassigned should be pretty much up for grabs if anybody comments on them or expresses interest offline after the meeting.
A: So we can now move on to the community topics and questions. Omar, you added an item here to the agenda: will there be a need to implement and support multiple cloud provider services (S3, Redis/ElastiCache) for each provider? Do you want to talk a little bit about that, Omar, and kind of flesh out that question here in the group?
A: Yeah, that's a great question, Omar. There are a couple of different facets to it. One of them is that it would be nice to have support for as many cloud provider managed services as possible. The focus so far has been on services that are built on an open source technology with an open, standard wire protocol that a lot of applications take a dependency on — MySQL, Postgres, and object storage with the S3 protocol.
A: Those are all examples of resources or services that have a fairly standard open protocol that a lot of applications are already consuming. That helps with the goal of workload portability: an application can simply express the need that it has for a protocol, such as MySQL or Postgres, and then the underlying implementation, from whatever cloud provider it may be, will satisfy that particular protocol, thus making the application portable.
A: That's been the initial focus — portability is very powerful as a concept to enable for these applications, so implementing cloud provider services that are built on open source with open protocols has been the focus. That gets into scaling the community and scaling the project: with the numerous managed services that each cloud provider offers, you get into what's the best way to support those, or what are the limits of the support that we want to implement, and so there have been some discussions about that.
A: Is there any way to automate that, or do we have to do a manual implementation of the type and a controller for each service, which kind of slows that process down and doesn't scale very well? I think Nick actually had some ideas, or at least there's one potential project that can help automate that. Nick, do you want to talk about Magic Modules?
C: I can, but I don't have anything particularly intelligent to say. I'm sure plenty of people here are familiar with Terraform, the HashiCorp product. I think there's a couple of patterns that they've used to ease the burden of supporting all these different things. I just joined the Crossplane project and just added support for Redis, and it took a reasonable amount of time.
C: A lot of that was because we're still exploring patterns, and a lot of that is because I'm new to Crossplane. I imagine that it will get faster over time with practice, but it would still take a good amount of time to maintain all the different features and functionality of all the different things that cloud providers offer.
C: So one approach that other tools have taken to get that done, as they've become more popular and gotten traction, is to break out the responsibility of maintaining each cloud provider to a team of people who are really passionate about that cloud. Terraform, for example, is modularized: they have the Google provider, the AWS provider, and so on.
C: Those are maintained by somewhat different sets of people. Google, furthermore, has a piece of software called Magic Modules, which, to my understanding, is not quite complete code generation, but it helps with a lot of templating and things like that. It's essentially a module builder, as I understand it, for generating modules based on Google APIs — it generates Ansible modules and it generates Terraform modules.
C: I believe — and I haven't looked at it in great detail, so I could be wrong here — that it basically generates something that you could then take and fill in the gaps to have working modules. So we could eventually look into using that to generate some code, or a couple of other different patterns, but that's the most promising one I've seen so far.
A: Thanks, Nick. One final thought on that: Crossplane is very much a community-driven project, so the set of managed services that end up getting support, or the priority of implementation in the Crossplane project, is largely up to what the community wants to see.
A: If we hear from folks that want a particular service to be supported, or contributors come to the project from various sources wanting to implement support for a particular service, then that is absolutely, fully encouraged from everybody. It's community driven, and what the community wants to see will influence which direction the project goes.
A: Yeah, thanks for adding that question to the agenda document today. Alrighty, so we had started kicking around a topic here about some of the patterns and standards for the Crossplane repo. There is some variance across the different implementations from different developers. We have not fully standardized yet on how a reconciler or a controller should be structured, how it should be unit tested, and what the unit tests should look like. There is some consistency, but we do not have full consistency.
A: We don't have a full standard and patterns agreed upon by everyone, so I wanted to open this up to the community and have a bit of a discussion on that today about what some of these patterns should be, and I believe we have a good quorum to discuss that. Let's actually start with one of the potentially easier ones — let's start at the bottom here, with pull requests.
A
You
know
it's
obviously
there's
a
few
reasons
for
that,
one
of
them
being
that
when
you
have
a
you
know,
super
large
pull
request
with
a
whole
ton
of
code
changes.
It
gets
more
difficult
to
refuel
of
those
changes
at
the
same
level
of
quality
that
you
can
provide
when
you're,
focusing
on
a
smaller
set.
A: Say you want to do this as an API design: what would a portable MongoDB abstraction look like in CRD form, and then you've got support from various cloud providers or local operators, etc. So where do you draw the lines in terms of the pull requests that you submit? If you put all of that — every single cloud provider, all the APIs, etc. — into a single pull request, it's thousands and thousands of lines.
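The abstract-versus-concrete split mentioned here — a portable claim plus provider-specific implementations behind it — can be sketched as plain Go types. All names below are hypothetical illustrations, not Crossplane's actual API:

```go
package main

import "fmt"

// MongoInstanceClaim is a hypothetical portable abstraction: only fields
// every provider can honor live here, so applications stay portable.
type MongoInstanceClaim struct {
	Name    string
	Version string
}

// GCPMongoSpec is a hypothetical provider-specific type: it binds to a
// claim and adds details the application never sees.
type GCPMongoSpec struct {
	Claim       MongoInstanceClaim
	MachineType string
}

func main() {
	claim := MongoInstanceClaim{Name: "orders-db", Version: "4.0"}
	concrete := GCPMongoSpec{Claim: claim, MachineType: "n1-standard-1"}
	fmt.Println(concrete.Claim.Name, concrete.MachineType) // orders-db n1-standard-1
}
```

The PR-scoping question above is then: does the claim type, each provider's concrete type, and each controller land in one pull request or several?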
C: Yeah — well, let me take a step back. In my mind, the only reason the size of the pull request really matters is the potential for merge conflicts for the author of the pull request. I say that because I think it is important to review on a commit-by-commit basis. I think it's important that by the time the pull request is in front of you, it should have a commit history.
C: One that tells a story, makes sense, and is framed for the reviewer. So if there are four commits and the last commit is just fixing a typo from a previous commit, or if a particular piece of functionality is half working in one commit and then finished off in another commit, I think that's sort of a mess at review time. Sometimes, when I know a pull request is going to be very large, I will get the pull request in early.
C
What
it's
only
has
partial
functionality
with
a
work-in-progress,
prefix
and
maybe
it'll
have
two
commits
and
then
I
will
push
more
commits
as
I
add
more
and
more
functionality,
but
don't
try
and
make
sure
that
those
commits
tell
a
story
and,
to
my
mind
as
a
reviewer,
it
is
burdensome
to
have
to
review
a
lot
of
stuff
at
once.
But
if
someone
sends
me
a
large
full
request
with
five
commits
in
it,
I
can
just
review
that
as
five
smaller
commits
in
five
different
times.
I
can
review.
C: I'm going to call out some statistics that I half-remember: a couple of jobs ago I attended a tech talk on good pull requests and reviewing pull requests. I think the stat was something like 500 lines — after that, the average person just starts to glaze over and it's really hard to actually focus and catch anything important.
C: I think, though, that there's a follow-on thing there. For instance, my Redis PR was an example of a very large pull request that we had recently, and one thing that would have helped me to do that in bite-sized pieces is a faster feedback loop. I think I did get some comments on it.
C
My
API
is
when
I
first
put
them
through,
and
then
there
was
sort
of
nothing
until
the
pull
request
was
deemed
to
be
sort
of
ready
to
go,
and
it
could
be
that
there
was
a
it
could
be.
Maybe
there
was
a
sort
of
a
communication
there,
because
I
had
in
working
progress
status,
and
it
could
be
that
people
interpret
working
for
progresses,
I.
C
The
agreement
is
on
how
we
partially
intimate
beaches
in
crossplane,
because
to
my
understanding
there
is
no
feature
flags
or
anything
like
that
in
cross
plane
at
the
moment,
I
I
guess
you
could
just
not
apply
certain
see.
Are
these
so
that
so
the
API
server
is
not
aware
of
them,
but
part
of
my
concern
and
part
of
the
reason.
C
I
said
it
all,
as
one
big
pool
request
is
because
I
was
like,
do
we
really
want
to
have
Redis
support
that
actually
doesn't
have
any
cloud
providers
implementing
it
at
the
moment
like
just
the
api's
and
no
controllers,
or
do
we
have
just
a
client
for
do
CPU
no
control
or
do
we
just
have
full
support
the
gcpd,
but
not
for
Redis
and
sorry
not
for
AWS
and
has
EULA
for
potentially
a
week
or
two
or
something
like
that
in
the
codebase?
It's
totally
okay!
If
we
do
that,
I
just
yeah
the.
B
Other
thing,
I,
think
that
is
interesting
in
this
example-
is
getting
a
design
in
with
the
DRDs
on
both
sides,
like
the
abstract
ones
in
the
concrete
month,
helps
get
kind
of
all
the
early
issues
figured
out
before
the
the
rest
of
the
implementation
followed.
So
I
think
that
was
that
was
helpful,
and
we
should
probably
adopt
that
at
the
pattern.
E: I think it's all good — I don't think I have any specific concerns about the size. What I would like to call out is basically poor hygiene in terms of scope. It is very tempting, while working on an issue, to come across a bug or a misprint or something and attempt to fix it in the scope of the current pull request — to add a change which is extraneous to the scope of the request. Say I'm working on a feature — I'll use the recent example, Redis.
E
It
is
tempting
to
fix
additional
issues
as
you
come
across
them.
The
challenge
with
that
is
kind
of
introduce
this
cop
spread.
So
now
you
say
why
is
this
change
being
introduced
here?
Is
it
related
to
the
initial
effort
adding
ready,
so
it's
something
that
we
simply
lump
in
along,
because
we
came
across
a
flip
side
of
it.
E
It's
also
dangerous
in
the
sense
that
if,
for
any
reasons,
you
decided
to
roll
back,
this
pull
request
that
bug
fix
or
that
spelling
change
or
whatever
that
additional
miscellaneous
item
you
just
said
it
also
goes
with
it
out
of
the
bathroom.
So
what
I
know
it's
very
tempting
and
I've
been
biggest
offender
to
add
those
things
in
pull
requests
lately
after
the
policy
that,
even
if
the
change
is
being
very
small
and
benign
I,
would
actually
shell
out
and
create
new
pool
requests
dedicated
to
this
change
alone.
A: Yeah, that's a great point, Ilya, and I'm definitely guilty of that myself: there's some good feedback about something on the pull request, or somebody notices something, or I notice something, and since I'm in there right now — oh, I just want to add that in and take care of it right now, scratch that itch.
A
You
know
so
I
think
you're,
your
suggestion
there
or
the
workflow
that
you
have
adopted
where
you
open
up
a
separate
pull
request,
and
you
know,
fix
that
issue
in
isolation
and
get
that
adamastor
and
then
rebase.
Your
pull
request
is
a
great
way
to
do
it
and
keep
things
clean,
but
also
your
point
about.
If
you
need
to
roll
back
poor
request,
it's
got
all
these
other
things
lumped
into
it.
Then
that
gets
that
gets
messy
and
takes
away
functionality
that
you
know
might
not
have
been
the
issue
so
I
think
those
are.
A
Those
are
all
really
good
points.
Everybody
and
oh
yeah,
the
NIC
one
more
thing
to
say
to
you
is
that
you
know
I
when
I
see
work
in
progress.
It's
not
always
clear.
I
mean
when
a
not
the
review
is
necessary,
so
I
think
as
a
pattern
here
you
know
when,
when
the
author
of
a
pull
request,
when
they
push
some
more
commits
and
they
want
more
feedback,
you
know
@mention
somebody
on
the
progress
or
leave
a
comment.
Saying
hey.
This
is
ready
for
another
look.
E: Going forward — in the past we've been doing this practice of squashing the commits, so when I initially see a pull request with multiple commits in it, I'm not quite clear whether those commits will be squashed. Hence: is it okay for me to start reviewing them, or should I hold up until it has been marked as ready?
C
If
I
understand
that
I
mean
I,
think
I
could
be
jumping
to
it
assumption
here,
but
well,
they
obviously
should
not
be
a
like.
A
committing
to
pull
request
is
a
different
thing
right,
a
commit,
a
pull
request,
shouldn't
just
be
like
I.
Have
this
one
giant
commit,
but
that's
Redis
support,
because
that
that
gets
to
the
point
that
you're
like
like
you
say
if
you,
if
you
have
to
revert
something
like
you,
can
revert
a
commit
as
well
as
reverting
a
pull
request,
it's
much
more
granular.
C
So
at
that
point
you
you're
shooting
yourself
at
the
food.
If
you
like
here's
my
eleven
thousand
lined
if
it's
all
in
one
commit
so
I
can't
really
do
much
about
it.
So,
but
I
can
understand
that
there's
two
different
styles
of
how
people
use
commits
and
sometimes
people
use,
commits
by
those
cumulate.
C
So
in
my
life,
if,
if
a
committee
of
mine
makes
it
to
origin,
then
at
the
time
that
I'm
pushing
that
commit
I
feel
that
it's
ready
for
someone
to
look
at
if
Vanessa,
Borge,
West
open
and
it's
made
its
origin.
I
may
every
now
and
again
go
back
and
rebase
that
history
to
add
a
field
or
something
like
that,
but
I
typically
won't
be
making
any
dramatic
changes
getting
further
on
I
can
I
can
speak.
It
will
go
around
it
back
to
what
we
were
talking
about
with
a
pull
request.
C
Scope,
I
actually
slightly
disagree
with
that.
I,
of
course,
agree
that
multiple
changes,
I
of
course
agree
that
within
a
commit,
you
should
be
doing.
One
thing
I
personally
feel
like
if
the
contract
becomes
that,
if
you
see
a
typo
or
something
like
that,
while
working
on
a
pull
request,
you
have
to
go
and
open
another
pull
request.
I
won't
do
it
it's
too
much
effort
like
it's
an.
Why
would
I
do
that
now?
A
Yeah
I
think
I
thinkwell.
It
could
possibly
you
know,
get
to
a
discussion
to
about
the
the
scope
of
you
know.
Some
of
those
other
changes
like
if
it's
like
you
know
a
typo
or
something
that
that's
pretty
clear
you
couldn't
kind
of
you
know
include
that
and
or
visit
like
an
entire.
You
know
you're
changing
an
interface
and-
and
you
know
that
has
tendrils
they
go.
You
know
throughout
the
whole
rest
of
the
code
base
and
gets
kind
of
kind
of
messy.
Then
you
know
that's
that
scope
seems
too
much
to
be
included.
C
Think
mostly,
the
important
thing
is
here,
you
know
sort
of
early
and
I
may
have
slight.
You
know
disagreements
on
the
philosophy
here,
I
I
think
the
important
thing
is
for
us
as
a
project
to
have
her
stance.
I,
don't
personally
mind
what
the
stance
is
and
make
it
clear
what
the
stance
is,
so
that
people
know
how
to
meet
the
expectations
of
the
project.
A: Yeah, I think Ilya's original point is probably still the one I would side with: not letting things that are unrelated get involved in the same pull request. But I think that being able to define that more clearly is something we'd have to do, maybe after this meeting sometime.
A: All right, so I thought that was going to be the quick one. We have 22 minutes left. Let's take a quick break here before we get into this controller paradigm topic, because that's possibly going to have some meat to it. I want to open it up to the community: anyone else on the call — are there any other community topics or questions or discussions that anybody wants to bring up here?
A: Alrighty, let's dive in. Okay, so — hey, are you still in transit, or are you at a safe, comfortable place now? Yeah? Okay.
A: All righty, so let's take a relatively quick look at these controller paradigms. We have sort of three different approaches right now across the controllers and the reconcilers for how they interact with cloud provider APIs and how they are tested. I have a couple of examples on my screen here — there are three different ones to take a quick look at, and then we'll have a discussion.
A: The first approach is where the reconciler gets a provider-focused API interface: each reconciler gets an interface that is focused entirely on the cloud provider's API surface, and it uses that object throughout its reconcile loop to closely mimic what the cloud provider's API is. The second approach is where we have overridable member functions on the reconciler: the reconciler will connect, it'll create an instance, it'll sync that instance with the external provider resource, and it can delete that instance.
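The first approach — a provider-focused interface that a fake can satisfy in unit tests — might look roughly like this. This is a minimal sketch with made-up names, not Crossplane's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// RedisAPI is a hypothetical interface mirroring just the slice of a cloud
// provider's API that the reconciler needs.
type RedisAPI interface {
	GetInstance(id string) (status string, err error)
	CreateInstance(id string) error
}

// fakeRedisAPI stands in for the real cloud SDK during unit tests.
type fakeRedisAPI struct {
	instances map[string]string
}

func (f *fakeRedisAPI) GetInstance(id string) (string, error) {
	s, ok := f.instances[id]
	if !ok {
		return "", errors.New("not found")
	}
	return s, nil
}

func (f *fakeRedisAPI) CreateInstance(id string) error {
	f.instances[id] = "creating"
	return nil
}

// ensure is the reconcile-loop logic under test: create the instance if it
// does not exist yet, then report its status.
func ensure(api RedisAPI, id string) (string, error) {
	if s, err := api.GetInstance(id); err == nil {
		return s, nil
	}
	if err := api.CreateInstance(id); err != nil {
		return "", err
	}
	return api.GetInstance(id)
}

func main() {
	api := &fakeRedisAPI{instances: map[string]string{}}
	status, _ := ensure(api, "cache-1")
	fmt.Println(status) // creating
}
```

The cost of this approach, as noted below, is that a test of one code path must still provide the whole provider interface.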
A: Those functions can be directly overridden, or set to mocked implementations from the unit tests, which obviously helps with testability at the granularity of inside a reconcile function. And then we have a third pattern that creates an interface for each one of those functions — create, sync, delete — and keys off those interfaces as well.
A: Essentially, with a pattern like this where you have overridable member functions, you get to mock at a higher layer. If you wanted to test reconcile, you wouldn't have to implement all the provider APIs at a small, granular level — you can just stub out or mock, say, create with a single function that returns success: "it's okay, I'm not focusing on create, I'm focusing on sync." So that ends up being a more granular approach, and I think that mocking the entire provider-focused API is probably the weakest.
E: Jared, I want to jump in a little bit and further clarify your point about the provider interface. I think it's not necessarily better or worse — I would say it's slightly diagonal to the overall approach. We do use clients — provider classes, client structs — and we do provide them as interfaces with mockable or fake client objects, so that's one angle of it. The second pattern, the stub functions versus interfaces, is specifically geared towards testing the reconciler itself.
E
So
it's
not
necessarily
that
the
provider
was
in
or
weaker.
It
just
simply
addresses
slightly
different
topic
and
we
can
address
them
separately,
so
there's
a
client
site
with
their
own
marks
and
they
only
fake
clients
and
interfaces,
and
there
is
now
reconciled
side
of
it
which
we'll
use
those
in
turn,
but
for
now
probably
congest
simply
may
be
focused
under
entirely
itself.
With
these
stop
functions
and
interfaces,
and
with
that
said,
I
could
have
just
kept
my
two
cents
into
the
this
specific
topic.
E
It's
not
that
I
think
the
interfaces
is
the
goal
preferred
way
to
provide
testing
for
the
basically
exposing
internal
components
for
testing
and
I.
Think
there's
nothing
wrong
with
that.
The
reason
why
I
mean
or
initially
start
using
stop
functions,
just
sheer
simplicity.
How
simple
it
is
to
that
and
implement
with
the
least
possible
lines
of
earth
functionality
which
effectively
being
addressed
in
the
same
way
by
the
interfaces.
So
that's
kind
of
that's
the
only
meaningful
discipline
for
that.
E
If
we
adopt
the
interface
parent
I,
don't
mind
at
all,
except
to
just
again,
in
my
opinion,
was
a
little
bit
more
involved
with
that
set
with
caveat.
Goldang
itself
does
not
provide
really
good
meaningful
testing
when
it
comes
down
to
just
in
the
members
of
the
same
struggle
and
I.
Think
I'm
pointing
to
earth
I
actually
asked
that
in
the
sacral.
For
this
question,
and
basically
answer
was:
how
do
you
test
member
function
and
answers?
If
you
don't
I,
remember
you
fired
me
that
yeah
right,
it
was
downloaded
horribly
and
that's.
E
Okay,
point
is
that
if
you,
yes,
if
you
want
to
just
member
of
the
same
struck,
then
kind
of
you
get
stuck
and
adding
this
blood
of
the
interfaces
and
almost
gets
under
ways
and
yeah
I
just
only
want
to
stuff
out
this
one
a
single
function.
They
really
want
to
create
now
interface,
which
I'll
pass
as
it
into
a
new
constructor.
E
To
do
this,
all
these
things
just
to
step
out
the
function,
and
if
the
answer
is
yes,
I,
don't
see
anything
wrong
with
that
an
example
which
Nick
did
in
the
radius
I
think
total
health
want
to
be.
This
is
what
I
think
I
had
initially
and
the
reason
why
I
kind
of
stepped
away
from
that,
because
again
I
would
just
didn't
want
to
do
that
for
every
single
reconciler.
That's
all
so.
C: On those replaceable structs — I think part of why mine looks bigger is that I've broken it out into four separate interfaces, whereas in practice callers mostly just pass a creator around, for example. But it's Go best practice to offer a minimal scope: my tests, which pass a creator around, should actually accept a creator, or a deleter, or a syncer — pass around the smallest possible interface. As a side note, I'm currently working on introducing linting to our code base, which would actually call out things like that.
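The "smallest possible interface" idiom mentioned here can be sketched as one tiny interface per operation, so each test supplies only the capability under test. Names are hypothetical, not the real Crossplane API:

```go
package main

import "fmt"

// One small interface per operation ("accept interfaces, return structs"):
// consumers declare only the capability they actually need.
type Creator interface{ Create(name string) error }
type Syncer interface{ Sync(name string) error }
type Deleter interface{ Delete(name string) error }

// ensureExists needs only a Creator, so a test can pass a one-method fake.
func ensureExists(c Creator, name string) error {
	return c.Create(name)
}

// fakeCreator records calls instead of hitting a cloud API.
type fakeCreator struct{ calls []string }

func (f *fakeCreator) Create(name string) error {
	f.calls = append(f.calls, name)
	return nil
}

func main() {
	f := &fakeCreator{}
	_ = ensureExists(f, "redis-a")
	fmt.Println(f.calls) // [redis-a]
}
```

A linter such as the one being introduced could flag functions that accept a broader interface than the methods they call.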
C
It
also
looks
like
a
lot
more
code
because
there's
a
lot
more
comments,
because
I
tend
to
be
quite
a
stickler
for
putting
comments
on
everything.
So
the
reason
that
I
reason
functionally
that
I
switched
to
interfaces
arguably,
is
a
little
bit
more
philosophical
than
prank
Matic
or
a
little
bit
more
idiomatic
than
pragmatic,
I
feel
like
the
pattern
with
replaceable
stamp
functions,
does
a
very
good
job
of
what
it
was
designed
to
do,
which
is
succinctly
making
objects
more
testable.
C: But it doesn't actually separate out the concerns of what's happening. Basically, in the reconcile loop you've got this piece of reconcile code that takes a CRD and then calls out to another function that says: okay, here's my CRD; I have decided, as the reconcile loop, that the CRD is in such-and-such a state.
C: It decides the CRD needs to be deleted, or is in a state where it needs to be created, or a state where it already exists, and then it passes it off to a function to enact that; that function is aware of the cloud provider and the implementation. In the case of the replaceable stub functions, all of that functionality — all the logic, everything that needs to talk to Kubernetes and the cloud provider, everything needed to reconcile — is all just part of one reconciler struct. So I tried to break that out into two separate objects.
C
Basically,
so
in
my
design
the
goal
was
that
the
American
Pilar-
this
is
all
very
hypothetical.
This
is
how
my
mind
works.
Of
course,
in
practice
we
only
ever
want
to
reconcile
with
a
cloud
provider,
but
the
way
that
I
designed
this
is
what,
if
we
wanted
to
reconcile
with
what,
if
we
some
for
some
reason,
wanted
to
reconcile
our
as
your
Redis
spec
with
something
that
wasn't
as
your
could
we
just
easily
swap
that
out
with
something
else
that
could
take
a
read
and
as
you're
ready,
spec
and
apply
it
elsewhere.
C
Of
course,
in
practice
we're
not
going
to
do
that,
but
when
you
think
about
the
API
in
that
way,
it
tends
to
lend
itself
to
more
testable
api's
in
a
fashion
that
doesn't
feel
like
you've
written
it
purely
to
make
it
more
testable.
So
when
I
read
the
start
function
pattern
to
me,
it's
like
yeah,
this
works,
but
there's
this
there's
this
whole
pattern
and
whole
code
here.
That
is
only
really
useful
for
testing.
C
You
know
those
those
functions
are
only
ever
set
to
the
you
know,
there's
an
initializer
that
says:
like
am
I
not
in
a
test.
Okay,
add
these
functions
in
sort
of
thing
or
that
lets
you
override
them.
So
I
tried
to
take
the
pattern
and
sort
of
just
had
the
same
functionality
in
a
fashion
that
was
designed
to
morph
like
what,
if
this
was
a
real
API
library,
that
someone
was
trying
to
use
sort
of
thing,
and
maybe
they
want
to
swap
thank
you,
clap,
divider,
commentation
or
something
like
that.
E: Again, that's just my take on it, but I would definitely preface that this entire code was intended to address solely testability — not extending the reconciler to be an extensible API object used in that context. Hence my goal was to come up with the least possible assets to facilitate testing.
C: So if you look at the tests back to back, they look quite different, but I think that's just because I used a different test style. I don't personally like Ginkgo very much, so I wanted to give it a shot with just the base Go testing library and a tool that's basically a reflect.DeepEqual that gives slightly better human-readable diffs.
E
I think just two points there. Thank you for the positive comment about the client fake tests. When I started looking into implementing those, I borrowed heavily from Kubernetes itself and from controller-runtime, which introduced the fake client they provide with controller-runtime. I think it's a great idea, whenever you write something that has an interface, to always provide a fake client for it, so you can easily stub it out and mock it. On the table-driven tests and Gomega:
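The fake-client idea mentioned here can be sketched as follows (a simplified sketch; the types and names are illustrative, not controller-runtime's actual API, where `client.Client` plays this role): the reconciler depends on a narrow interface rather than a concrete client, and the test supplies an in-memory fake.

```go
package main

import (
	"errors"
	"fmt"
)

// client is the narrow interface the code under test actually needs.
type client interface {
	Get(name string) (string, error)
}

// fakeClient satisfies the interface from an in-memory map,
// so tests never need a running API server.
type fakeClient struct {
	objects map[string]string
}

func (f *fakeClient) Get(name string) (string, error) {
	v, ok := f.objects[name]
	if !ok {
		return "", errors.New("not found")
	}
	return v, nil
}

// describe is the code under test: it only ever sees the interface.
func describe(c client, name string) string {
	v, err := c.Get(name)
	if err != nil {
		return "missing"
	}
	return v
}

func main() {
	c := &fakeClient{objects: map[string]string{"db": "running"}}
	fmt.Println(describe(c, "db"), describe(c, "cache")) // prints "running missing"
}
```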
E
Gomega may have been used, I think, just out of inertia, because the initial kubebuilder scaffolding comes with Gomega-scaffolded tests, and we're basically using it only, if I'm not mistaken, for assertions. We don't do anything beyond that; we just simply assert values. I have no problem with letting Gomega go; we don't use any functionality beyond that. When it comes to table-driven tests, I'm actually fine with them as well. I think that's probably the preferred paradigm.
E
One thing I would definitely say is that it's definitely easy to add new table-driven tests, because you just simply add a new entry in the table. Sometimes it's a little bit harder to troubleshoot; again, it depends whether you have good hygiene for table-driven tests, where each test case entry is identifiable, so you can easily identify it. And perhaps another, maybe minor, nit I would have with table-driven tests:
E
Sometimes it's a little harder to rerun a specific table test case. Let's say you have a pretty large table, say ten test cases, and number nine is failing. If you want to step through it and debug it, you're basically forced to rerun the first eight just to get to nine and set it up again. Those are minor nitpicks.
E
I think I like all the benefits table-driven tests provide, so I'm willing to live with that inconvenience if that's the case. I also want to give a little bit of background and history here on kubebuilder, specifically on reconcilers. If you use plain kubebuilder for any project and you create and scaffold your own operator, you will notice that kubebuilder offers a slightly different process when it comes to testing.
E
Crossplane actually does not follow that same paradigm; we basically unit test the Reconcile method, or the methods within the reconciler struct, instead of doing an asynchronous run of the reconciler, watching for the events, and doing all this more integration-type testing. So I think what we're doing right now, what's emerging as our reconciler paradigm, is kind of on the right track. Whether it's table-driven, I'm totally fine to adopt it; I think I like it, and the way Nick wrote them.
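The synchronous-unit-test paradigm described above can be sketched like this (a simplified sketch; real code would use controller-runtime's `reconcile.Request` and `reconcile.Result`, but the shape is the same): the reconciler's collaborators are injected, so a test calls Reconcile directly with no manager, no etcd, and no event watching.

```go
package main

import "fmt"

// result mirrors the shape of controller-runtime's reconcile.Result.
type result struct{ Requeue bool }

// reconciler's collaborators are injected so Reconcile can be
// invoked synchronously in a unit test.
type reconciler struct {
	exists func(name string) bool
	create func(name string) error
}

// Reconcile creates the resource if it is missing, and requeues so
// the next pass can observe what it created.
func (r *reconciler) Reconcile(name string) (result, error) {
	if !r.exists(name) {
		if err := r.create(name); err != nil {
			return result{}, err
		}
		return result{Requeue: true}, nil
	}
	return result{}, nil
}

func main() {
	created := []string{}
	r := &reconciler{
		exists: func(string) bool { return false },
		create: func(n string) error { created = append(created, n); return nil },
	}
	res, err := r.Reconcile("bucket")
	fmt.Println(res.Requeue, err, created) // prints "true <nil> [bucket]"
}
```

The contrast with the scaffolded kubebuilder tests is that nothing here runs asynchronously: the test drives one reconcile pass and asserts on its return value and side effects.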
E
They're very clean and readable to me, so I don't have any strong preferences on table-driven versus non-table-driven. And I think, just to close the loop, I'm actually also good to accept the interface approach versus start functions; the amount of code doesn't bother me. Again, in terms of standardizing on one approach that's consistent and an idiomatic Go way of doing things, I think anyone who comes new to the project will understand it.
A
It was really only an artifact from the initial kubebuilder patterns, and removing that whole spinning up of etcd, making tests much faster and not having an integration focus, has been amazing, so I like that a lot better. So maybe we could finish and continue this conversation on Slack.
C
I was gonna suggest a similar action, and I would say, as a follow-on to that, I don't think this really blocks anything. Well, when I was new to the community and I was trying to build my own controller, I did find Ilya's pattern quite hard to read. So I would say that if we are gonna go back and retroactively rewrite or refactor the existing controllers,
C
I would obviously focus on the ones that don't use either the create/sync/delete method pattern or the interface one. I'd basically take the ones that use that first pattern, or no pattern, and refactor them, because I don't think the ones using create/sync/delete methods are broken at the moment, sort of thing. I think it would be a minor optimisation to refactor them, whereas the other ones are pretty hard to follow and pretty clunky, so they would benefit a lot from refactoring.
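The create/sync/delete method pattern referred to here can be sketched as follows (an illustrative sketch, not the exact Crossplane signatures): the reconcile entry point dispatches to exactly one small lifecycle method per pass based on observed state, so each method can be unit tested in isolation.

```go
package main

import "fmt"

// resource captures the observed state the dispatcher routes on.
type resource struct {
	name     string
	exists   bool
	deleting bool
}

type handler struct{ log []string }

// Each lifecycle step is its own method, so tests can target one at a time.
func (h *handler) create(r *resource) { h.log = append(h.log, "create "+r.name) }
func (h *handler) sync(r *resource)   { h.log = append(h.log, "sync "+r.name) }
func (h *handler) delete(r *resource) { h.log = append(h.log, "delete "+r.name) }

// reconcile routes to exactly one step per pass, based on observed state.
func (h *handler) reconcile(r *resource) {
	switch {
	case r.deleting:
		h.delete(r)
	case !r.exists:
		h.create(r)
	default:
		h.sync(r)
	}
}

func main() {
	h := &handler{}
	h.reconcile(&resource{name: "db"})                               // not yet created
	h.reconcile(&resource{name: "db", exists: true})                 // steady state
	h.reconcile(&resource{name: "db", exists: true, deleting: true}) // being deleted
	fmt.Println(h.log) // prints "[create db sync db delete db]"
}
```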
A
All right, so we have a hard stop in two minutes, so we're gonna move on to the pull requests. Daniel reached out to me and said that he will have an update to this, incorporating all of our feedback, by the end of this week, I think, is what he's targeting. So we should be able to review this again. It's for adding Azure resource groups, and we have some feedback about the logging infrastructure, or, you know, consolidating on what logging package we want to use.
A
So I don't know if you noticed it on the line here, but hopefully we will get an update on that sometime soon. We should add a comment to ping Beano and see if we get an update soon. And I think this is the pull request you opened very recently about binding state; I don't think anyone's taken a look at it yet, so we just have to follow up and take a look at that as well. Yeah.
C
It's pretty small, so.