From YouTube: 2022-05-05 Crossplane Community Meeting
A: And let's get going. So first we're going to do the milestone checkup; I'm just going to run through the agenda in order. The v1.8 milestone is due on May 17th, which is a little less than two weeks away, which means we just went into feature freeze for this release. So from this point onwards we typically don't merge any more major features for the release. We do merge bug fixes, and we have made exceptions for features in the past if there's stuff that people really want to get in.
A: But we try not to, just to give stuff a little bit of time to soak and let people try out features before they go into main.
A: So, looking at the board for 1.8: the folks working on... well, I believe it's [name unclear], an Upbounder, who is working on both of these "review in progress" ones, which I've been reviewing. I'll need to check in with him to see where it's at; I believe he is out today and not on the call, but these ones are likely to get in.
A: The proposal for implementing a plug-in mechanism for out-of-tree external secret stores is also an Upbounder, Hasan, who's working on that. That proposal looks good to me, and it'll be merged. I'm pretty confident it will be merged, I should say, but it's just a proposal at this point, so no actual feature coming yet.
A: Support for patching from common data sources: MisterMX, whose first name I think is Max, is working on this one.
A: I think that proposal is also coming along pretty nicely, so that might make it in. I honestly don't know; he's not here on the call.
A: Sadly, the one that Sergey and I are working on is custom compositions, which we've been calling composition functions now, to more accurately represent the way that we think it's going to work. Unfortunately, it's not going to make it in. I was rushing to get it ready by Tuesday, and it's like 40% ready, codebase-wise, or something like that. So, sorry to say, this one's been delayed a few times: it didn't make it into 1.7, and it's not going to make it into 1.8.
A: The good part is that we have got a prototype and we have got a good way along with the design, so it would be surprising to me if it didn't make it into Crossplane 1.9.
A: So that is what we've got going on at the moment, I'm pretty sure. Oh no, there's one or two other things that have gone into this. We have a steering committee meeting after this, and one thing we're noticing lately is that the Crossplane project, and core especially, has slowed down a lot, new-features-wise. We've had a couple of releases that haven't had a significant amount of stuff in them, and my hunch is that's because we have matured.
A: It takes a good amount of time, and we're releasing every eight weeks at the moment, which is pretty frequently. So something I'm going to raise with the steering committee later today is possibly to release a little bit less frequently, maybe go to a release every three months or something like that, just to basically make the releases a little bit more meaningful, and hope that we've got a bit more content in them.
A: Content that's built up over three months, rather than having sort of more frequent releases with less in them. Any thoughts or opinions on that, for the folks who are on the call?
A: All right. So we are going to... I won't start it right now, but food for thought: we are going to start planning out the 1.9 release, so there are obviously going to be some holdovers from 1.8 here.
A: I think the pluggable secret stores feature... we recently added alpha support for secret stores, which basically changes the way that Kubernetes secrets are written for connection details a little bit, so that you can push them to another cluster.
A: If you want to, as well as writing to the local one, and adds support for Vault. We thought we'd sort of start there to prototype it out, but now we're actually looking at almost a CSI-style extension model for Crossplane that will effectively let people build separate binaries that teach Crossplane how to write to different kinds of secret stores. So I expect that's going to be 1.9.
A: That's the design that Hasan has pretty close to done, or has done, and I feel like it's pretty much ready to be merged once he gets back, yeah. Another thing that's top of mind for me, that I personally might pitch for 1.9, is that we have a handful of features, including secret stores, composition revisions, and controller configs, that are in alpha at the moment. I would really like to spend some time graduating some of them; I think composition revisions especially.
A: Other than that, updates on providers. I don't know that we have too many active provider maintainers here, but one exciting thing that did happen recently was that the folks at DigitalOcean released the first version of their provider and made a little bit of news about that. There's a blog post over here on the DigitalOcean blog. It's got, obviously, Droplet support and a few other things; I think it might have had, yeah, managed Kubernetes in it.
A: Yuri, I see your name on it. Any commentary about that?
C: Yeah, so there is a very initial release of the jet Datadog provider, which intentionally covers only a minimal amount of tested resources: basically monitors and SLOs, and the associated JSON way to describe a monitor.
C: So you can already start creating, like, a Datadog configuration with the provider, at least creating the Datadog configuration resources that are very relevant to an application and its associated SLOs. So that's a very initial, but working, release.
A: Yeah, that's me. Nice. Is there anything to say about this one, apart from that folks should go watch it? It's awesome.
A: Let's see... we saw this tweet. I'm curious about this one; I don't know if anyone on the call knows, but Dan Garfield tweeted that the Large Hadron Collider is powered by Argo and Crossplane. I'm guessing that might mean, like, the computers around the Large... I don't know.
A: I'm presuming the literal collider itself is not running Crossplane, but maybe they're using it on the project. I believe they do use it at CERN, right?
A: KubeCon EU updates: Crossplane folks will be at KubeCon EU, right? Is anyone on the call going to be representing Crossplane at the conference? I will not be able to make it, unfortunately. I hope to be at the...
A: Sweet, cool. Yeah, I'm pretty sure we'll have... well, Upbound is sending quite a lot of people over there to represent both Upbound the company and Crossplane the project at KubeCon, so there will definitely be a bunch of folks who are familiar with Crossplane over there.
A: We're also totally happy... you know, we totally want community members at the Crossplane booth. So if anyone's interested, if anyone's going to KubeCon and would love to hang out at the booth, you know, please, please go ahead. Or let us know if you want to volunteer at the booth; that would be even better. Get in touch with me and I'll put you in touch with the right people to help out.
A: And then under here I'm seeing just this one topic: how should providers handle reconciling remote source content? Is this one that you added, Bob?
D: Yeah, this is me. So I've been doing a lot with provider-terraform and noticed that when we use a remote module source, it's pulling that remote module source and basically, you know, re-running everything, including terraform init, on every reconciliation. And there was a request in provider-kubernetes for remote manifest support, which I thought was a good idea.
D: But then, you know, the same question comes up, right? Is the expectation that the provider is reconciling against the state of the remote thing as it was when it was created? Or, you know, is the provider responsible for going out every minute and checking to see if the thing it created has changed, and reconciling against that, which seems a little bit overkill?
D: You know, you can do that 1,440 times a day. It seems like if you're referring to something remote, especially if it's a git reference, then maybe you can assume that the remote thing isn't changing unless the git reference changes, and so as long as the git reference in the manifest doesn't change...
A: There's a couple of things in there. One: supporting the... just shooting from the hip, I hadn't heard of it before, but supporting remote manifests in provider-kubernetes sounds like a good idea to me as well. That sounds very similar to the provider-terraform pattern, as you mentioned. Then the other part of it seemed to be about...
A: Your claim really is there in the Kubernetes API, right, and we have the poll interval, which defaults to 60 seconds, and we effectively check that every 60 seconds. There was a very, very old issue that Jared, I think, had actually written, that effectively said it would be really nice if we could subscribe to things, because we want to correct drift effectively. Like, if we're taking your remote Kubernetes manifest and using it to create a deployment or something, and someone changes that deployment, we want to fix that; we want to reset it as soon as possible, to force Crossplane to be the source of truth.
A: Now, when you're doing something with Kubernetes, we have the advantage that it has watches and all that kind of stuff, so we don't actually need to poll it; we could just watch it and see if it changes. But if it's something in AWS, it becomes a lot harder to sort of subscribe to an RDS instance and say, "hey, tell us if this changes so that we can fix it."
A: You can for some cloud resources, using some cloud technologies, but that's kind of tough. So, longer story short: I believe there was a patch set that I was working on a while ago, that went into some providers and core Crossplane, that made the poll interval tunable: the interval at which we poll stuff to see if it's drifted.
A: I actually forget whether that's in all providers at this point, or even if it got into provider-template, which we use to build these providers, but it is desirable to roll that out. That would allow us to basically have a flag on Crossplane, so if you were pretty comfortable saying "check for drift once an hour" or "once a day" or something like that, that would be configurable.
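(Editor's note: as a rough sketch of what that tunable looks like in practice, providers that have the patch set typically surface it as a command-line flag, and the alpha ControllerConfig mentioned earlier in the meeting can pass it through. The `--poll` flag name is an assumption here; whether a given provider supports it, and its exact spelling, vary by provider and version.)

```yaml
# Sketch: slow drift-detection polling to once per hour via a
# ControllerConfig. The --poll flag name is an assumption; check
# your provider's --help output before relying on it.
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: slow-poll
spec:
  args:
    - --poll=1h
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-example   # hypothetical provider name
spec:
  package: registry.example.org/provider-example:v0.1.0
  controllerConfigRef:
    name: slow-poll
```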
A: I think the other part of it is just, you know, hitting it so often means you're pulling, in cases like provider-terraform and provider-kubernetes, where the source of truth is potentially external to Crossplane. It's hitting a git repo or something to pull in that manifest and, as you say, we're hitting that thing a lot to get that source of truth. The immediate thing that comes to mind there is that we usually do cache that stuff, I believe.
A: Actually, I forget... Yuri knows provider-terraform a lot better than me, but we could cache that stuff at least. So I could imagine adding almost a pod-style or container-pull-policy-style option to these things, that sort of says, you know, when to pull the Terraform manifest down remotely.
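(Editor's note: to make the container-pull-policy idea concrete, such an option on provider-terraform's Workspace resource might look like the sketch below. `sourcePullPolicy` is invented purely for illustration; only `source` and `module` are real fields.)

```yaml
# Hypothetical sketch of a container-style pull policy for remote
# Terraform module sources. sourcePullPolicy does not exist today.
apiVersion: tf.crossplane.io/v1alpha1
kind: Workspace
metadata:
  name: example
spec:
  forProvider:
    source: Remote
    module: https://github.com/example/terraform-module.git
    # Invented field, modeled on the container imagePullPolicy:
    #   IfNotPresent - fetch once and reuse the cached checkout
    #   Always       - re-fetch on every reconcile (today's behavior)
    sourcePullPolicy: IfNotPresent
```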
D: Yeah, I mean, in the Terraform case it's sitting in memory under /tf, right, and, you know, it does get refreshed every minute. You know... and I guess so.
D: I guess that was actually part of the question, right: is the source of truth the remote? I mean, it is, right, but is it supposed to be the remote repo, or is the source of truth supposed to be the claim as it was instantiated the first time, barring any edits to the claim, of course?
A: Yeah, well, this is an interesting one, because I think about it like... what I'm imagining is you've got a claim, which acts as a proxy for a composite resource; let's, hazily, say they're the same thing. You've got a claim, and then the claim uses a composition to create a managed resource, which is a Terraform... what are they called? Just a Terraform configuration?
A: Workspace, yeah, yeah. So at that point I would say that the sources of truth are really a combination of the remote Terraform manifest (presuming you didn't use the inline option and put that in your composition) and the claim, because there are probably fields on the claim that are being used as, you know, inputs to that Terraform workspace. So they're kind of both; they all become the source of truth and get merged together to make things.
D: Yeah, I mean, that's why the remote Kubernetes manifest kind of raised the same issue with me, right? It's like, well, you know, great: I'm going to go get this remote Kubernetes manifest and instantiate it right now, but then I'm going to check it every minute to see if that remote thing changed, so that I can change what I instantiated.
D: You know, it almost makes sense to add an option, you know, make it configurable, right? So here's my remote URL; do I have an option? Do I get that once and never check it again? Do I check it at some given, configurable poll interval, right? So maybe I check it every hour, or I check it every day, or something.
D: I can see lots of different use cases where, in some cases, you never expect it to change and you always expect it's going to be the same thing, so you're wasting your time retrieving it once a minute. You know, in other cases maybe you do expect it to change, but it's only going to change once a day or whatever.
A: Yeah, yeah. The other exciting thing to that is that polling it frequently allows us to recover from losing that state that's in memory or, you know, potentially in, like, a persistent volume or something like that, where the /tf directory lives. So it allows us to be more stateless. If we were going to say we only want to pull this relatively infrequently, but we want to apply it much more frequently...
A: So we want to sort of get that state once, but use that state to reconcile stuff many times a day. Then that has the implication that we would potentially need to be more persistent about how we store that state, unless we framed it as, like, "only pull it once a day, unless you lose it, in which case..."
A: I think it's, you know, much more about figuring out what the API looks like to configure that, and all that kind of stuff, right. I feel like it's fairly reasonable to tune how frequently you hit the sort of source of truth there. It feels very close to the... imagePullPolicy, I think it was called, the pull policy on containers, yeah.
A: You might want something a little more like, you know... like you would sort of say "pull it once a day or something if you already have it, but no more often than that", right, yeah. But I'm not sure if you've raised an issue tracking this or anything like that; I would say it'd be very valid to do so. It seems like a feature that I would probably...
D: Well, I mean, I'm happy to contribute what I can to it as well. I did raise an issue on the fact that... I mean, basically what I'm looking at right now is the performance and scalability of provider-terraform, because we're using it a lot.
D: Excuse me. We're using it a lot, and we're finding that it slows down with more and more workspaces. I think part of that is because it's... I'm assuming it's single-threaded; I'm not a Go person in any useful context, so I'm assuming it's single-threaded and that it just, you know, runs everything every interval.
A: Yeah, I can send you some information. Off the top of my head, I believe that it kind of is single-threaded. I mean, in practice there's a bunch of goroutines in there that get spun up, but the number of concurrent reconciles defaults to one in controller-runtime.
A: The patch set that I... so I actually did a bunch of performance tuning for core Crossplane and providers, maybe a year ago now, almost. But the tricky thing about it was: I came up with a pattern to make them more performant, and I applied it to Crossplane and one or two providers, but it wasn't feasible for me to go to every provider. So I have an example of that pattern.
A: I think there was a PR to provider-gcp that might be fairly straightforward to sort of copy over to provider-terraform, and among other things it makes the parallelism of reconciles tunable. It does do it in a slightly interesting way, in that you tell the provider how parallel it should be... which you can tell... you can do here, I believe. Actually, am I... I might be...
C: Actually, specifically for provider-terraform, the first thing that we can try to do is to cache the terraform init output: like, this .terraform directory, per workspace, right? It currently resides within a go-getter-generated checkout, and then on every reconciliation...
A: Yeah. Well, I'll get back to you, and I also agree that there's definitely plenty of stuff to be done that is specific to this provider. And there is also probably that generic patch set, which... I cannot immediately diff this file in my head to remember whether it's been applied or not.
A: I don't think it has, though, because it added a flag that was effectively along the lines of "how many reconciles per second do you want this thing to be capped at", and then it will basically do at most that many reconciles and derive a bunch of other properties, such as how parallel it is and all that kind of thing, from that knob. So I will... Bob, what's your username on Crossplane Slack?
A: I don't think it is... yeah, sadly, it really should be.
A: That's... yeah, let me double-check.
A: Oh, nope, it is. Yeah: if you see this max-reconcile-rate flag, that is a good telltale that that sort of pattern has been applied. So, newer providers... I know it might not be part of Terrajet so far; I kind of vaguely remember saying "hey, this should go into Terrajet", and then...
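(Editor's note: for reference, where the pattern has been applied the knob can be set the same ControllerConfig way as any other provider flag. The `--max-reconcile-rate` spelling below matches the flag discussed here, but treat it as something to verify against your provider's --help output.)

```yaml
# Sketch: raising the reconcile-rate cap on a provider that has the
# performance patch set applied. Parallelism and client-side rate
# limits are derived from this one knob.
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: faster-reconciles
spec:
  args:
    - --max-reconcile-rate=10
```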
A: That's everything we have on the agenda for the day. Does anyone have any other thoughts or topics, or should we wrap up the meeting early?
C: Yeah, just to thank Bob for his contributions to provider-terraform as well. Thank you so much; all the PRs are amazing, and they really, like, increased the provider quality dramatically, you know, all your recent contributions.
A: Yeah, thanks to both of you. That was one that I started and then was like, "I need someone else to make this great", and then Yuri's done an awesome job. It's good to see you joining him, Bob.
A: What's your typical language that you work in?
B: Hey, hey, how's it going? So I have a customer who is trying to use Argo CD with Crossplane. It's pretty well documented, so it works for the most part. The problem is that when they are using the Argo CD CLI to create a cluster with, you know, Crossplane installed, they're getting, like, timeout errors.
B: I don't have, like, all the details yet; I will have, like, a meeting tomorrow to go over it, but I just kind of wondered, to see if you guys had, like, any... you know, issues like that. It only happens when you're using the argocd binary, the CLI.
C: Yeah. Is it possible that they're using Helm in a composition with Argo CD in that scenario, so that Argo CD isn't looking at anything Helm-related? I'm asking because we actually have, like, a similar customer request, and we identified that kubectl, as of recent versions, according to recent throttling patches... and Helm still, like... it has the same class of problems, but...
C: ...the relevant part is patched in its code base, so it's producing the same throttling... like, very similar throttling warning messages as kubectl in older versions.
D: Just to confirm what your description of the error is: they are successfully able to kubectl to the Kubernetes API, and they're able to interact with the cluster in all other ways, but talking specifically to the Argo daemon on the cluster is not working?
B: If they use the Argo CD CLI, right... Argo CD has their own select, right? If they use that, then essentially they are getting timeouts. I really want to share all that information with you, but I can't yet.
D: I know, it's... that particular case just sounds like a configuration issue with however they've set up ingress for Argo CD, not necessarily related to Crossplane.
B: You know, the reason they're pointing at Crossplane is that it works just fine if they don't have Crossplane in that environment. So, again, I don't have, like, all that information. I just kind of wanted to see if you had anything, just from, you know, describing the issue, anything that came to mind. But if not, then, you know, I'll definitely go get some more information, because, honestly, I don't have that much more information either. So...
A: Yeah. Once you've got more information... sorry, Aaron... we're definitely down to do more troubleshooting. But I did want to point out that we've got this issue on Crossplane, #2773, which is almost our aggregation issue for... there's a handful of known issues with Crossplane and Argo CD working together, so we've been using this to capture them all. So if this does turn out to be one of those, or (I don't think it will, but) a new one, please do comment here. Manabu, I had...
A: Actually, I just noticed: I had pinged Jesse, who's one of the Argo CD project leaders, to talk about where we work on these, and he actually replied a week ago, and I just noticed now. So that's good, good progress.
B: Yeah, actually, I saw that yesterday, and it looks like it's trying to solve the claim and composite resource relationship, right? Hopefully that gets resolved, because right now you essentially have to, like, ignore that composite resource relationship. Yeah, yeah.
A: Yeah, I'm pretty sure that one is one that we can just fix on Crossplane's side, and we just should fix it at some point, I hope. I think that one might already be labeled as a...
A: It's arguably a good first issue; we put that label on there. So if anyone wants to work on that one, I think it's a pretty easy fix. The only tricky thing is that we already do a bunch of label propagation, so coming up with a way to not propagate those labels, that is not a breaking change and that is also elegant (so it doesn't just look like having a sort of deny list of labels), is maybe a bit tricky. But code-wise it should be pretty easy to do.
B: If I have time, I might pick one up, like, next week or something.
B: The CLI, yeah. Like I said, you know, I'll get on the meeting tomorrow with that customer and I'll get my information and...
C: Yes... just to clarify: is it related to our conversation yesterday? Meaning, like, regardless of the amount of CRDs, right? So it's, like, actually performance-related, like API...
C: So do you know how it looks? Like, is it any Argo CD client invocation, or is it an Argo CD sync on some Argo Application, and something happens once?
B: So this particular example that they gave us is the argocd cluster add command. To be honest with you, I'm not too familiar with the Argo CD CLI; I didn't do, like, Argo CD stuff that way, I just use, like, a declarative setup. So I'll probably have to dig into that a little bit more, but yeah.
B: So it's happening with the argocd cluster add command.
A: All right, folks. Well, unless anyone else has anything on their mind, let's get, what, 24 minutes back or something like that, and end the meeting early. Thanks for coming, everyone. I'm sure Jared will be back to run this much better than I do next week.