From YouTube: 2019-04-30 Crossplane Community Meeting
A
Okay, alright! So just a quick status update here, then, on extending Crossplane with new functionality, for, you know, out-of-tree controllers and types and such. I've had a work-in-progress pull request for the design up for a little bit now, and I've gotten feedback from, I think, most everybody. I have not incorporated all that feedback into the document yet, but I have been working on the implementation, which is broadly in agreement with the design as well as people's feedback.
C
Kubernetes workloads, which are just arbitrary Kubernetes resource kinds on a Crossplane-managed Kubernetes cluster. So the idea is, you can say: hey Crossplane, go and spin up a Kubernetes cluster somewhere for me, and then go deploy all of these things to it. A little bit like Federation.
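For orientation, a resource like the one being described might look roughly like this. This is a hypothetical shape for illustration only; the actual fields are whatever the design pull request defines.

```yaml
# Hypothetical sketch of a complex workload resource; field names are
# illustrative, not the actual API from the design PR.
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesApplication
metadata:
  name: my-app
spec:
  # Select the Crossplane-managed Kubernetes cluster to deploy to.
  clusterSelector:
    matchLabels:
      app: my-app
  # Arbitrary Kubernetes resources to create on that cluster.
  resourceTemplates:
    - metadata:
        name: my-app-service
      spec:
        template:
          apiVersion: v1
          kind: Service
          metadata:
            name: my-app
          spec:
            ports:
              - port: 80
```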
C
So the pull request got sent yesterday. Illya has started reviewing it, and I'm in the process of fixing up some of the things they found and replying to that. There is going to be potentially further work to do on it after this pull request has landed; the pull request adds sort of the core functionality, but there are one or two more things. One of those things is validation webhooks.
C
There are some cases in complex workloads where a Crossplane user could submit a complex workload, a CRD called a KubernetesApplication, that was fundamentally broken and would cause Crossplane to race against itself. Our plan to stop that happening is to use validation webhooks, but Crossplane does not use any validation webhooks yet, so this would be the first time introducing that pattern. So I've just carved that out from this pull request, and we can add it in a separate one later.
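As a sketch of the kind of check such a webhook might run (hypothetical, not Crossplane's actual code): rejecting an application whose templates would make the controller fight itself over one object, for example two templates targeting the same kind and name.

```go
package main

import "fmt"

// resourceTemplate is a stand-in for one template entry in a
// KubernetesApplication-style workload (hypothetical shape).
type resourceTemplate struct {
	Kind string
	Name string
}

// validateTemplates rejects a workload whose templates target the same
// kind/name twice, since two templates reconciling one object would
// cause the controller to race against itself.
func validateTemplates(templates []resourceTemplate) error {
	seen := map[string]bool{}
	for _, t := range templates {
		key := t.Kind + "/" + t.Name
		if seen[key] {
			return fmt.Errorf("duplicate template for %s: controller would race against itself", key)
		}
		seen[key] = true
	}
	return nil
}

func main() {
	ok := []resourceTemplate{{"Deployment", "web"}, {"Service", "web"}}
	bad := []resourceTemplate{{"Service", "web"}, {"Service", "web"}}
	fmt.Println(validateTemplates(ok))         // <nil>
	fmt.Println(validateTemplates(bad) != nil) // true
}
```

A real validating webhook would run this kind of check in an admission handler and reject the object before it is ever persisted.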
C
The other thing that we need to look into is what I was calling server-populated immutable fields, such as a Service's cluster IP address. Basically, the way that complex workloads work is that we have a template for, say, a Service or a Deployment or whatever, as part of the complex workload.
C
That template then cannot be changed. So when we come back and try to update the resource, as part of that we say we don't have a cluster IP, and then the API server says: oh no, you can't change this field back to the empty string, because I set it to this specific value. That's where the complications are, the reasons it's more difficult than usual for us to do this, but I might have some ideas about that. Anyway, long story short: complex workloads at the moment will create, update, and delete almost every Kubernetes resource type.
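One common workaround for this class of problem, sketched here under assumptions (this is not necessarily the approach the pull request takes), is to "late-initialize" the desired object from the observed one: before issuing an update, copy any server-populated immutable fields that the template left empty.

```go
package main

import "fmt"

// serviceSpec is a stand-in for the template rendered from a complex
// workload; ClusterIP is populated by the API server on creation and
// is immutable afterwards.
type serviceSpec struct {
	ClusterIP string
	Ports     []int
}

// lateInitialize copies server-populated immutable fields from the
// observed object into the desired one when the template left them
// empty, so a subsequent update does not try to reset them.
func lateInitialize(desired, observed *serviceSpec) {
	if desired.ClusterIP == "" {
		desired.ClusterIP = observed.ClusterIP
	}
}

func main() {
	desired := &serviceSpec{Ports: []int{80}} // template: no ClusterIP set
	observed := &serviceSpec{ClusterIP: "10.0.0.7", Ports: []int{80}}
	lateInitialize(desired, observed)
	fmt.Println(desired.ClusterIP) // 10.0.0.7
}
```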
A
That's a great update, Nick. So, obviously, continuing to drive that pull request and the feedback from reviewers on it is going to be a priority, but I also liked the idea that I think you had yesterday about trying to directly use these complex workloads that you've created and implemented to deploy GitLab itself. So instead of, you know, going through a GitLab controller, kind of validate them with the real thing.
C
I'll probably, yep, like you say, I'll probably try and steward this pull request through first. And then, Ilia mentioned that he has made some good progress towards something like what actual resources would look like for a complex workload. That was good.
B
Bassam mentioned initial feedback on the Google doc, but after that was ported to the PR I think it's been a little bit dormant, which is fine; I think we've done a lot of stuff. So, since right now we have the forthcoming work from Nick for the actual implementation of complex workloads, I can start integrating GitLab: basically planning and writing the complex workload types, and, on the front of separating GitLab into manageable workload applications, there is also a PR going on there, already open, I think.
B
We generate and separate GitLab into the separate application groups required to install the GitLab application, generally characterized in two broad categories of core and dependencies: with dependencies being external-dns, Let's Encrypt, possibly even MinIO; and with core being all the internal GitLab components needed, like GitLab Unicorn, task runner, Sidekiq, secret generator, and so forth. So that kind of gives us the ability to manage and organize workloads, and I think they should probably be represented at that granularity of applications.
B
As for the concerns about GitLab, there are a few of them. Primarily, we are interested in coming up with a uniform deployment across multiple clouds. Specifically, right now we're dealing with AWS, and GCP is yet to be tackled; we haven't done anything for Azure, even at the concept level. And the biggest challenge will be coming up with dependencies and workloads which are generic enough to be working on both cloud providers, the way we had examples with the workload... sorry, with WordPress and workloads.
B
That would be the biggest challenge, and the reason, as I just stated, is that the GitLab application or service itself is opinionated about the cloud it is being installed on. There are actually hard if-else statements in there saying: if you run on AWS, then you do this, or you need to have this dependency; if you run in Google, you need to have this dependency and this format of secrets supported. So...
B
It also permeates into dependencies like ingress controllers and external-dns. The ingress controller for GitLab was initially considered to be a dependency; however, GitLab has specific requirements, and they specifically kind of tailored the ingress controller for their needs, with TCP port 22 open for SSH access. Hence it didn't remain just a requirement: it became a core component, and that one itself is also specific to a cloud provider. So when you deploy ingress for GKE, it's one set of settings and options; when you deploy ingress for AWS, it's another set of settings and other options.
B
I think these problems will become apparent, and we will need to solve them as we start writing the controllers; I mean, yeah, the controller for GitLab, to use Nick's new complex workloads. So basically, that's when we're going to be facing these problems face to face, and we'll need to solve them in one way or another.
A
We had, you know, we had wanted to kind of try to stick to a cadence of around quarterly releases, and the 0.2 release was a little bit behind that: I think that was more like early to mid April, and it should have been like mid to late March. So we're a little bit behind. I don't think we have a very clear date right now, Nick, for specifically when we're targeting 0.3; I think we need to do a little bit of thinking about that, and also a pass at scope.
A
That kind of work is definitely not out of scope either. I think a lot of that fits into running real-world applications, as well as, you know, having some more production-readiness types of attributes and behaviors that can go along with that. I think that's going to... I can imagine that will always have a nice long tail, so I'm thinking about it, you know, as we go through future milestones.
A
No, I think so. I'm not against that whatsoever. I think that being able to have the right patterns and the right level of quality and reliability, etcetera, in place to start, you know, enabling more adoption is definitely something I'm supportive of.
D
I think we should. There is an opportunity to talk about, you know, the roadmap for 0.3 and 0.4, and maybe that puts some of what we're talking about in perspective. So, as an action item, let's maybe discuss the roadmap and then come back in the next community meeting with a proposal, or a bit of a horizon here (0.3, 0.4, maybe 0.5) of where we're kind of heading and what we want to do.
A
All right, so we don't have to go into the full discussion here, but we can bring it up a little bit. You know, we currently have, on pull requests, two different code coverage checks from two different systems, Sonar and Codecov, and I wanted to see what the opinions are of the two different systems, because they both have what appear to be slightly different ways of measuring code coverage.
C
I think that, as far as I can tell, Codecov has what seems like a pretty obvious bug, in that it complains about code coverage going down when you change non-code. When you change, like, Markdown and things like that, it will complain that the code coverage is reduced, which seems...
B
The coverage numbers might always be slightly off to some extent, but it becomes very apparent when you change files that should not correlate with coverage. Case in point: with the design doc for the reconciler pattern, Codecov complained about coverage going down in some unrelated, unchanged service.go file.
B
I suspect that the analysis is slightly behind, or, as Nick mentioned, perhaps there's a bug in how they detect the changes, because technically it is obvious that no Go files changed. So the git SHAs they're comparing and running analysis on are slightly off, or sometimes it's not even understandable where those git SHAs come from. So yeah, I have a spotty understanding of how coverage with Codecov works. I think the interface itself is nice and intuitive, and when it works, it works.
B
However, even on a good day, if I compare actual coverage numbers and try to understand where the percentage even comes from, it would be really hard to rationalize where that sixty-six percent is coming from. Whereas, on the flip side, Sonar gives you a clear understanding: here is the total number of lines, here's the number of lines covered, here's how much remains to be covered, and overall this is the percentage. Which is pretty straightforward.
B
Pretty much. I think, overall, I would characterize Sonar as providing a little bit better coverage numbers. Tangentially to that, Sonar's coverage analysis on the CI side runs much faster: it's usually on the order of 10 to 15 seconds to submit the coverage report, whereas Codecov sometimes takes up to minutes to upload the coverage results. So I know it's mostly irrelevant, but I think it maybe speaks to the maturity of the platform itself. We can keep an eye on Codecov, but for now I've pretty much excluded it; I'm not personally looking into Codecov's numbers.
A
So it sounds like we might be at a point where... like, I haven't heard anything where Codecov is really needed, and it sounds like it's not something we can really even trust, because the numbers are unreliable. Are we at a point where we can remove support for Codecov and just take a dependency on Sonar going forward, do you think? Or are we not there yet?
C
I would personally like to get rid of Codecov. This starts to diverge a little bit towards my whole opinions about GitHub gates and checks and things like that, so I won't go too much into it, except to say that I personally find it really frustrating and confusing to see all of these little red X's on GitHub PRs that don't actually mean the PR is not ready to merge.
C
So, given that the majority of them are coming from Codecov, and that Codecov has different targets than Sonar, I would find it easier to rationalize the expectations if we just had one coverage target from one system. And it seems like Sonar is the better of the two options that we're trying at the moment.
A
You know, obviously, it gives us less of a burden to have to kind of dig through values that we don't even necessarily have trust in. And, yeah, I think you mentioned you've kind of stopped looking at the Codecov coverage numbers, so it seems like I don't hear anyone endorsing Codecov at all. It seems like we could just punt it out and get rid of it. I wouldn't mind.
B
I think this was the intent all along; maybe that's why not all checks were required: essentially, we did not know beforehand what a good watermark to use would be. Granted, roughly four weeks ago we had zero coverage reports in the system in Crossplane, so this was something like an evaluation period. I think I'm comfortable dismissing Codecov right now, but we can also keep an eye on it. Also, a side effect of Sonar is that it provides a little bit more than coverage reports.
A
I definitely, I totally agree with you that, you know, it was good to have an evaluation period where we had both of them running, and we could get a strong sense from experience of which one would actually meet our needs better. So I totally think it was great to do that integration, and I appreciate that integration as well. So yeah, it sounds like we... you know, unless anybody else on the call wants to speak up.
A
Alrighty, so I think that's the only topic we had here for the community topics. You know, we had some conversations on the dev channel of the Slack workspace yesterday about logging: some improvements, maybe, when running in debug, to get rid of some of the distracting or misleading stack traces. So we've already had that discussion; I don't think we need to have it again here. Were there any other community topics that anybody wanted to bring up before we move down to the PRs section?
B
If you don't mind: I think when we talked about code coverage, the Sonar vs. Codecov question was part of the discussion, and I think it's actually a small part. So, Nick and I, we had kind of a conversation in the PR, and then we followed up in email and in the general channel, which we can also move here, but overall I think it boils down to kind of two points, in my opinion.
B
I think what Nick raised is understanding what the GitHub pull request checks are about and how we're supposed to treat them, and I think we also kind of have a very similar understanding that, normally, if you have a check on your PR, that check should pass. Hence, it would be nice to have required checks, if we truly care about the quality of our PRs.
B
So I think Nick brought up the point that it's super annoying to see a red X on a given check and yet still accept the PR as a valid PR. Hopefully, by removing the offending check (at this point, Codecov), we can get rid of some of these red X's, but I think it brings up the larger conversation topic: what is our take on the checks for a PR? And while I agree with Nick, I think it could be...
B
Maybe, at this early stage, sometimes we can adopt a new check where we don't yet know whether or not we should make it required, because by making it required you kind of explicitly prevent the PRs from being merged. So that's kind of what I wanted to raise; maybe people have different opinions, and I'd like to see what they think about non-required checks.
C
I'd just like to, since I'm the one who sort of brought up this sort of optional-check thing: I think I provided a bunch of context to Illya and Jared in an email that was internal to Upbound, so just to sort of state my thoughts around this here in the sort of public forum. And this is definitely an area where I am happy to be told that I'm just a crazy person and that no one else thinks this way. So, I think about this from two perspectives.
C
One is my project-maintainer hat, and the other is my potential-contributor hat. And to me, it's not that I fundamentally don't like soft checks; this is more a complaint about GitHub's user experience around them. Like, if GitHub gave you an orange thing, or some different way to say, if there were some easy way to see, "all of the required checks have passed; this not-required check has not passed," then it would give a nice story: okay, cool, my PR is now good enough to be reviewed and potentially merged; it would be nice if this extra check passed, but it's not necessary. But as a reviewer, when you look at the list of pull requests in GitHub, there isn't a way to distinguish between a check failing that we don't really need to pass, and a critical check failing.
C
It's hard to tell which checks maintainers expect me to make pass and which they don't. So this has led to things before where I'll, you know, be adding a whole bunch of questionably useful tests to hit, like, a test coverage target for some project, and then they'll be like: oh, we didn't actually care about that; but you didn't have any way of knowing that they didn't care about that. So I just think it sets people's expectations much more clearly if the mental model is: if there's a check, the check must pass.
A
I think, if I'm not muted now, I can weigh in on that as well. You know, your thinking there, Nick: I do agree with you. When you have a set of checks for a pull request and one doesn't pass, I don't think right now that there's any real way to have any indication of that failure not being important. There's really no facility or allowance for that, and so that makes a difficult situation.
A
You know, as a contributor, or even as a reviewer as well, having to juggle in your head "okay, that red X is actually okay this time": you don't really want to get into situations like that. So in general, I think that driving for "if it's a check on a pull request, then it's meaningful" makes sense. If we remove Codecov, though, are we able to, like... do we have the other checks?
A
You know, obviously, the integration tests or unit tests have to pass, and the DCO has to pass. That leaves us with Sonar, and, you know, are we at a point where Sonar is a hard pass or not? Because if we can get to a state where all checks must pass and that's just what it is (green means yes, red means no), that's where I think we would want to be. So would we be able to get to that, do you think?
B
We are, and I think that's slightly, in my opinion, not the point. Yes, we want to have required checks, and we want them all to be green; no confusion there. I think, again, just for context: code coverage is a good example, which we started introducing and improving and evaluating over the last four weeks. Hence, if we had started right off the bat with a required check, we would not have been able to merge any of the PRs for the last four weeks, or not without overrides, right? So again, that's the ideal state, when the project is mature.
B
So I can envision future contexts like that, where we may want to add a check which we may not yet be comfortable making a hard requirement, and only after evaluating it say: yes, that must pass. If we always assume that every check we introduce must be required, then yes, that's probably fine, and I'm kind of on that path too; I just think it's a little bit rigid, especially in the early phases, as I mentioned with code coverage.
C
I totally get the point: you definitely don't want to just put a rule in place that you don't understand yet. And again, a lot of this is just me kind of complaining about the GitHub user experience rather than our decisions around it. I think one other option, in the case of code coverage or something like that, is to potentially set up the tests so that it, like...
B
I think that's the second segue to that, but the checks themselves are orthogonal to any specific check; it could be other checks not at all related to code coverage. The key takeaway is that when you look at the GitHub top level, you can have this red X which tells you nothing about what type of check is being violated until you open the PR, go into that level of view, and expand it.
B
You have the initial scaffolding generated by Kubebuilder, which is intended to be modified by users, and a second set of files, which are quite generic and not supposed to be modified, since they get regenerated every single time you do code generation. In our project today we already exclude a good set of files, specifically all the generated files (we don't collect coverage on those), and we exclude all test files.
B
The question is what we should or should not exclude, and Nick brought up a good point. Say I'm an author and I create a PR which requires one small controller-runtime-related infrastructure file, which has no business logic per se, plus two files controlling something else, which have heavy business logic. My controller-runtime file complains about zero coverage and brings my entire PR coverage down below the target rate. So the question is: can I exclude that file? And the answer is:
B
Yes, you can totally exclude that file. You just add the properties in the GitHub repo (we even have a Sonar properties file with the list of exclusions) and say: specifically exclude this file from coverage collection. It's understandable; you can do that, and then your coverage numbers go back up, great, your PR passes. The side effect is that this is a user-authored file, and while today, right now, it doesn't have heavy business logic, nothing prevents the next PR from adding business logic to it.
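For reference, the kind of exclusion being described typically lives in a `sonar-project.properties` file at the repository root. The keys below are real Sonar analysis parameters, but the project key and paths here are illustrative, not necessarily what the Crossplane repo actually uses.

```properties
# Illustrative sonar-project.properties coverage configuration.
sonar.projectKey=example:crossplane
sonar.sources=.
# Skip coverage collection for generated code and the test files themselves.
sonar.coverage.exclusions=**/zz_generated*.go,**/*_test.go
# Go coverage report produced by: go test -coverprofile=coverage.txt ./...
sonar.go.coverage.reportPaths=coverage.txt
```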
B
Unlike golint, you cannot simply annotate a function with "do not lint"; there's no analogue in go test or go cover, so this is handled at the level of the report: once a tool interprets the report, you tell it what to do with this file, e.g., exclude it. So I played with it quite a bit, and I found that it's actually easier for me, and frankly arguably more beneficial, to write a simple test for that file rather than figuring out how to exclude it efficiently and in a less error-prone way.
B
So that's kind of where the misunderstanding, or the slight disagreement, is happening. While I do understand some files may not be the focus of coverage requirements, I do think that, nevertheless, if you put tests on a user-authored file, no matter how little logic it has, there's still value in that test. Specifically, I made an example that in that file itself, if you omit the schema registration, you can actually break the build. Oh, not the build, but actually break the distribution: in fact, you can produce a product which will fail at runtime. So I do think that we need to...
C
So my thinking was mostly about whether there was a better heuristic to be targeting than n percent of the diff. So Codecov, or no, not Codecov: Sonar allows you to specify coverage in a couple of different ways, based on, like, lines, branches, yada yada, and one of them is to target the overall coverage of the project. So a pattern that I've seen in the past and have liked, which doesn't rely on being somewhat of an honor system for users:
C
We say that the PR passes if it doesn't decrease the overall coverage, but then I, as a reviewer, might say: hey, you added a thousand lines of this brand-new thing that has, you know, 20% coverage, so I would prefer you to raise the coverage a little bit; rather than the coverage check fundamentally failing. But again, I really think this is getting to the point where it would probably affect, like, one in twenty pull requests.
C
If we need to spend that much time optimizing, I would be pretty happy to make the coverage check required: make the Sonar check required, that is, getting rid of the Codecov check, set the Sonar check to 80%, and just monitor how often we come into cases where we feel like we would have to add tests to make it pass that we wouldn't otherwise add.
C
I also think that when you get into the whole... you know, that specific example of, like, the APIs and workloads: the reason I describe this as a philosophical thing is that different people have different approaches to testing, and few of them are fundamentally wrong, in my opinion. So it starts to get towards: person A and person B might disagree on whether a particular pull request warrants unit test coverage as opposed to end-to-end coverage or integration test coverage, etcetera, etcetera.
A
Yeah, a point that I would like to make, which Ilya had brought up as well, is that there is definitely value in coverage for things that can catch integration-type issues. It might be some kind of generated code or boilerplate code, but, for the example there: if you don't register your schema, then at runtime the controller won't be able to get your types, and it might even crash, etc.
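The failure mode described here can be illustrated with a toy stand-in for a controller's type registry. This is not the real Kubernetes `runtime.Scheme` API, just a sketch of why even "boilerplate" registration code deserves a small test: forgetting the registration compiles fine but fails at runtime.

```go
package main

import "fmt"

// scheme is a toy stand-in for a controller's type registry, mapping
// kinds to constructors.
type scheme map[string]func() interface{}

func (s scheme) register(kind string, ctor func() interface{}) { s[kind] = ctor }

// newObject fails at runtime for unregistered kinds: exactly the class
// of bug a small registration test catches before shipping.
func (s scheme) newObject(kind string) (interface{}, error) {
	ctor, ok := s[kind]
	if !ok {
		return nil, fmt.Errorf("kind %q not registered in scheme", kind)
	}
	return ctor(), nil
}

type kubernetesApplication struct{}

// addToScheme mirrors the generated registration boilerplate; omitting
// this call is the runtime-breaking mistake discussed above.
func addToScheme(s scheme) {
	s.register("KubernetesApplication", func() interface{} { return &kubernetesApplication{} })
}

func main() {
	s := scheme{}
	addToScheme(s)
	_, err := s.newObject("KubernetesApplication")
	fmt.Println(err == nil) // true
	_, err = s.newObject("Unregistered")
	fmt.Println(err != nil) // true
}
```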
A
So being able to have some tests around that, to make sure that it's properly registered, that the whole chain of schema registration is done there at the API level: that's reasonable to have as testing, to make sure that at runtime things won't completely blow up. And then another point is that in a ton of other projects I've seen, the quality gate around test...
A
The code coverage gate is about not reducing the coverage anymore: we get it to a certain level, and then no changes can bring it down. So once you're at a certain level, you inherently have a coverage requirement at a high level, because you've already got the project to a certain level and you can't take it down anymore; therefore, your new code has to be at that high level as well.
A
I'm okay, though, right now, with keeping the Sonar check at a certain height, higher than the current repo-wide level of coverage, and continuing to see if that forces us to write tests that don't make sense, etc. But it seems like a lot of tests do make sense, in that they have integration implications.
B
So some tests are more important than others: while all tests are created equal, some tests are more equal than others. So we probably need to focus, generally, as Nick also mentioned earlier, on addressing technical debt, to bring overall project health to some reasonable state where we can then set the bar and do some kind of enforcement. Right now it's kind of almost challenging to do that, because I can touch code with zero coverage, and now I'm holding the bucket: I have to write 80% coverage for it just to get my own change in, yeah.
C
Just to clarify on that point, because it's a good one: when Sonar does pull request coverage, let's say we have Sonar with this 80 percent target, does it look at 80 percent of the files that you touched, including what you didn't edit? I would really prefer it if it were possible for it to be, like...
B
Versus master? So, no; I don't know the answer to that off the top of my head right now, but from what I've seen so far, I've seen evidence of it doing that: I'd be right on the line of what's new in the file, and I'll get the error. Granted, that may be an artifact of how the tests or the linter run; maybe that's why, I don't know.
D
I put it out there this morning; it's not ready at all. I just wanted to get kind of the concept out there, so we can talk about it quickly in this call. Basically, you may have seen in Slack where Jared suggested validating the resource group names, and this could also extend to other resources within Azure, using the ARM kind of API: so you submit a deployment and it basically validates it.
D
The way the resources are structured is that they kind of wrap an Azure API in their Go library. So, for instance, I think in here somewhere it has, like, a resource group, and we have a client, the groups client; well, it's actually using a different client if you want to use the ARM validation. So my kind of idea for that was to have a client component for our resource group and then kind of a validator component, and I just want to get some feedback on that.
D
Whether people thought that was a good idea, or if there was kind of a better way to do it. When verifying a deployment, let's say we're just checking the resource group name: we'd kind of be passing a dummy valid deployment, and then it would not be failing because the deployment wasn't valid; it would just be because of the resource group name.
D
Allowed values, yeah; it would just be, I think, allowed values, I'm pretty sure. So, if you see in the, I guess it's in the client property... yeah, in the client, I believe it checks the resource group name against kind of a regex pattern that Azure has on their website. So, instead of that, we'd want to move to something where we're actually asking Azure, instead of having it hard-coded in there. So that's kind of the idea behind that, and that's why Jared brought it up.
D
It can be used for other ones; it's called the deployments client. So basically you just pass it, I think, a name and a location, and then it's kind of like, you know, Terraform or something like that, or CloudFormation: they're kind of templates for deploying resources. So it's validating one of those.
B
Mm-hmm, I see. Something I came across when I wrote the storage account was a similar thing: the storage account actually has a uniqueness validation, so it must not collide with any other storage account in the universe when you create one, just like a bucket, I guess, in Google. So I think it's fine to have this validation client, and perhaps create it and reference it from the resource clients; it could be evaluated as a unit. So I'm just kind of not opposed to adding that. I also understand the infrastructure is complex, and now, in your reconciler...
B
You need to create one more client: you can now either create two clients, or perhaps create one client which wraps both your resource client and your validation client, right? I'm open to that, as long as it's done in a convenient way; it could be worth evaluating. But, at the same time, it's maybe not necessarily super pressing, in that, worst comes to worst, even if you provide a totally invalid name, your provisioning is going to fail, right? So Azure is going to complain.
A
Yeah, the two things I would add here: you know, if the Azure API underneath requires a different type of client to do validation checks, then that's fine. We can create the two different clients and use them to perform the work, one for validating and the second for provisioning or deploying, if they're two different clients for that.
A
That's okay, no problem. And then the important thing for me is the experience perspective, both the user's experience and also, another experience to take into account, the developer's experience. So, from the user's perspective: if they create, like, a resource in Azure that's no good, are they going to get the right information they need to fix it, with up-front, early validation, before we create some resource with Azure that has to get cleaned up later? Is the error message clear? Like: this field...
A
...is this value, but it needs to be one of these values, you know. That type of experience is really the overall goal here: if our users make a mistake, will they be able to clearly understand what they have to change? Or can we even change it automatically? Is there any automation logic where we can make, you know, reasonable changes without changing the user's intent?
A
Obviously, if we can do that, that's great. And then the developer experience is also important as well: if we can wrap this validator client so that, you know, when we invoke the API to create or to validate, we don't have to think too hard about it, and managing that extra client doesn't become a super burden. Something like that is a goal, but, you know, that would require some thought, and it's...
A
It's not necessarily black and white, okay, but I think the direction here isn't totally wrong: you know, having a separate client that does the validation work, if that's what the Azure API requires, then that's okay; that's what the Azure API requires. Cool, thanks. All right, so we are out of time, and I think that was the last item here. So thank you very much today, guys, for the fruitful discussion on, you know, our open pull requests and milestones, etc.
A
So we're gonna keep executing on real-world applications, GitLab support, etc., and we will also be thinking (and I think, you know, everyone can do this) about the roadmap and some of the goals for the upcoming milestones, and what we want to put priorities on, you know: production readiness and technical debt and feature support and all that sort of stuff. And we will circle back about that soon, but the near-term priority, like this week and next week, is continuing executing on GitLab support.