From YouTube: 2022-02-24 Crossplane Community Meeting
Description
No description was provided for this meeting.
A
All right, hello recording folks. We just went for two minutes without recording; we were just, just, meeting and doing a check-in on the 1.7 release that's coming up soon. We just touched on external Secrets support and said there's a pull request open. Muvaffak, do you want to give an update on the webhook work you're doing?
C
Yeah, yeah. So the webhook work would be in two parts. The, uh, actually, the original design doc is not merged yet, but I already started prototyping, and I thought of separating the certificate part out as a separate PR, because whatever we do with the design, whatever choices we make, you have to have the certificate stuff figured out with the API server and providers. So that PR is open.
C
It's now in a draft state; once I manually test it and see everything works, it should be ready for review. What it does, essentially, is accept a TLS Secret, which is a known type for Kubernetes users, and mount it to Crossplane itself, and also mount it to all providers on a given path, provided via an environment variable, so that they can optionally use it in their webhook servers if they implement one.
C
So, you know, any provider or Crossplane maintainer wouldn't have to worry about any kind of certificate handling for developing a webhook server, and could just write the logic. One additional thing there is that we inject that CA bundle into the CRD's webhook conversion strategy field, because it is required to be there; we inject it if the Webhook conversion strategy is chosen.
C
We do that in the Crossplane package manager, and also in the Crossplane init container that applies those APIs. So, yeah, I'm planning to wrap that whole thing up in the next sprint, and I'm planning to do a demo about, you know, immutable fields, just to demonstrate it, and then go into other examples of conversion.
A
Nice, that's great. All right: removing deprecated APIs. I have no memory of this; did anyone on this call add this to the sprint?
A
All right. And "support layer descriptor annotations in the package manager", Dan, is this something that you added?
B
Yep. It is finishing, well, I guess, implementing, the xpkg spec, which will be part of 1.7. This isn't a huge effort, but I just need to get it done.
A
Makes sense.
C
Yeah, there is the identity-based authentication for providers. Alper was planning on opening a new issue for that, but he hasn't yet. At the same time, I mean, it's not going to make it into 1.7, because he had to focus on the scaling issues, which we will talk about in a minute. So, yeah, in the 1.7 time frame we would probably have only the discovery part of that problem.
A
Right, sounds good. All right: unless anyone has anything else, let's move on from the 1.7 project board.
A
There is a question we usually ask, which is basically about community priorities. So I just want to give a second to check whether anyone came here today to advocate for something that's on our roadmap, or not on our roadmap, or something that should be worked on soon, or whether there's anyone here who is interested in working on something on our roadmap sooner rather than later.
D
Can you tell us something about composition revisions? Because we are very interested in this in our setups.
A
That relates to, maybe touches on, a topic I've got a little bit later on.
A
Which is, roughly: there hasn't been much progress on it lately, but we have an idea of what to do next. I'm realizing that I personally don't have the bandwidth to drive a lot of things forward, so I'm kind of looking for volunteers.
A
So, you know, if that's something you'd be interested in working on, for example, I'd be happy to set up regular sessions with you and team up to get it implemented.
A
All right. So, yeah, there's a note there that those two PRs are up. Providers' CRD scaling issues: I'll let Muvaffak mostly touch on this one, but the background context is that for a while now we've noticed, with our bigger providers, mostly the new Terrajet ones, which add, you know, hundreds of CRDs, but even with provider-aws, that the Kubernetes API server and clients struggle to handle that many CRDs.
A
We've made, and have worked with the Kubernetes community to make, a bunch of fixes, but they've only been partial fixes. So, Muvaffak, take it away.
C
Yeah, yeah. So the fix that Nick just mentioned was about CRDs blowing up the API server, with very high CPU and memory usage for a while during installation. That problem is gone, and it's trickled down to all cloud providers as well. The problem we're facing right now is the client throttling stuff, which is, it's mostly, actually, a configuration issue in kubectl, where it just limits itself on the requests that it makes to the discovery service. And there is a PR, actually.
C
Let me link it here. There's a draft PR from Alper that does a deep dive on this issue and explains, you know, how it works, what affects it, and also the set of criteria that we expect from those kubectl calls, and the tools that will allow you to easily test your suggestions or solutions.
C
In addition to that client throttling stuff, there is another issue. I mean, yeah: another issue is that, apart from the first burst that we used to experience,
C
there's a continuous high memory usage when you install, like, a thousand CRDs. And that is usually okay; it's around three gigabytes, or three and a half gigabytes, for the API server.
C
But the problem is, some of the cloud providers, specifically GKE, are not able to scale in a manner that is invisible to the user. For EKS and Azure Kubernetes Service there is no disruption at all, but GKE regional and GKE zonal clusters struggle with scaling to that resource usage, so they start to give timeouts and stuff.
C
Another issue he is looking into right now is profiling the API server to see whether there's an apparent hot spot in the memory usage. I think in the stand-up today he shared that there is a conversion from v1beta1 to v1
C
in the CRD apiextensions API that is probably unnecessary but is happening anyway. So we might find something like that: something that is unnecessary but is still heating up the memory, and we could just fix it. But I don't have the details yet, because he hasn't updated the PR yet.
C
The other thing Nick just mentioned is that in 1.24 the fixes that we made, or, you know, accelerated to get merged on the client side, were not effective, and a new fix from someone else made them effective. So now, with kubectl 1.24, we mostly don't experience client-side throttling if you have, you know, one or two Jet providers. If you have three of them, you would still see the message, but it wouldn't be as bad as today. So, yeah.
C
This is, you know, a little summary of the issues. The PR is up there and we're planning to make it ready for review this week; there are some action items on it as well, and also the criteria set. So please go ahead and take a look at that criteria set and see if the criteria would work for you. For example, I suggested it be
C
a bit specific, like, you know, "kubectl get pods, let's say, should return within this time frame." These are the kinds of criteria by which we will call it: "Okay, we're done; the scaling issues are completely gone." Those kinds of things, from a UX perspective. Feel free to drop them as a comment on that PR. Yeah.
A
Yeah. On the client-side stuff, the underlying problem is just that clients make, I think it was, at least one request to the API server per API group, to discover what types are there. So you can imagine, if there are tens or hundreds of API groups, that takes a while. And the clients also currently have a somewhat naive, let's say, client-side throttling implementation, whose numbers were just picked
A
assuming that there would be ten CRDs or something like that in the system, not hundreds or thousands. So there's a bunch of client-side work that can be done.
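The effect being described can be sketched with a quick back-of-envelope calculation. The QPS-5/burst-10 numbers below are client-go's historical client-side throttling defaults, and the one-request-per-API-group cost is the discovery behavior mentioned above; all figures are illustrative rather than measured.

```python
# Rough sketch of the naive client-side throttling described above: a token
# bucket tuned for ~10 API groups, hit with hundreds of discovery requests.

def discovery_wait_seconds(group_versions: int, qps: float = 5.0, burst: int = 10) -> float:
    """Seconds a token-bucket client spends throttled while issuing one
    discovery request per API group/version."""
    if group_versions <= burst:
        return 0.0  # the initial burst absorbs small clusters
    return (group_versions - burst) / qps

# A vanilla cluster with a handful of API groups: effectively instant.
print(discovery_wait_seconds(10))   # 0.0
# A cluster with a couple of Terrajet-sized providers (hundreds of groups):
print(discovery_wait_seconds(300))  # 58.0 seconds spent throttled
```

This is why the fix discussed in the meeting is partly just "bump the client-side limits": raising `qps`/`burst` shrinks the wait linearly.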
A
I think just bumping up the client-side throttling is what's been happening. But another thing that we've seen folks suggest is potentially using API-server-side priority and fairness, where the API server will just tell you when it needs you to rate limit, effectively, as an option. And there's been some very rudimentary discussion of potentially just rethinking the whole discovery process, because it's effectively quite inefficient overall, and it's done at times when it doesn't need to be done.
A
It's worth noting that these improvements we're seeing are specific to the kubectl codebase, so for other clients that use client-go, I do think we probably need to move the fix down into the client libraries at some point; otherwise, anything else that uses them will be seeing these issues. But for things like Kubernetes controllers, people don't notice that much if they take a few more seconds to start up. It's mostly when it happens on the client side that it's that bad.
A
All right, I think I can maybe hear people in your working space.
A
I think, if you don't mind muting, there's someone talking behind you that's coming through on your headphones.
A
All right, although I believe we're going to ask Muvaffak to unmute basically straight away anyway, because the next topic is that some of us Crossplane maintainers are investigating options to dedupe the "big three" providers. It's mostly just something that Muvaffak and I have been speaking about so far, so we do need to loop in other maintainers of provider-aws, etc.,
A
like, you know, Christopher here. The general idea is: the team here at Upbound released and donated this Terrajet project a little while ago to help generate providers, to get a lot of resource coverage really quickly, by generating providers that use Terraform behind the scenes, under the hood. And because that was early alpha software and we wanted to get it out there and get people's feedback,
A
we launched those as their own separate providers, distinct from, let's say, provider-aws, provider-gcp, etc. In some cases, in cases where there were existing providers for a thing, we sort of just duplicated them using Terrajet, and I think that was the right call to make, because it gave us all the feedback we've now got about the CRD scaling and things like that.
A
So it was a good way to test the waters. But now we're starting to wonder whether we want to keep them as separate providers long term, or whether we should look into finding a way to merge them together; let's say, merge the Terrajet provider-aws and provider-aws together. Muvaffak and I are both working on some documents with thoughts about this that are going to be coming out soon.
A
We're not going to commit to anything yet, but keep an eye out for that, and if you have thoughts on the general topic, definitely let us know. Christopher, I'm particularly curious what you would think if, let's say, provider-aws was sort of a mix of the controllers that it has today and Terrajet. Let's say, if we took the 120 resources we have today in provider-aws, kept them, but then backfilled the rest with Terrajet resources. That's one option.
D
I think we in the company also thought about this, but our biggest issue currently is the CRD scaling issue. We say we need to fix this first, because nobody has thought about what happens if we add so many CRDs to the current provider-aws; because, fair enough, we have a lot of use cases running on our setups. The other thing we thought about is what the process looks like for getting more things implemented against the AWS API directly, and, after that, moving the Terrajet stuff out of the providers.
D
So we're seeing the issue, in general, that the community has stopped implementing directly against the APIs, because the Terrajet stuff is adding so many new resources. But in our setups we see the problem that we also have so many open issues in the Terraform world, and the HashiCorp folks are not merging them, because they don't have that many people working on their stuff.
D
So, I don't know; at the moment, with the scaling issues unresolved, we'd have a bit of a problem if you want to merge them.
D
And, fair enough, I think Muvaffak said something about seeing the issues in GCP, but we also see the issues in AWS, because we're talking with their professional services, for example, and we also see our master nodes, and so on, the API servers, using a lot of memory there. AWS does a great job in the background, but they ask us, "What are you doing on those servers?", because they're seeing so much memory usage.
D
We had a discussion with our technical account manager this week about what we're doing there, because we have, I don't know, 10 or 15 Crossplane instances running, and all of them use so much memory if we enable the Jet providers, for example.
D
I think AWS does a lot of work on the API servers that they run for us. We're not seeing issues there, but they see the issues, I think, and that's why we had a discussion this week.
A
Right, yeah. I have a friend who works on EKS, and he's definitely reached out to us before and been like, "Hey, what are you all doing?", when we've been running Crossplane tests and sending, yeah, no traffic.
A
Cool. Anything else on that topic from you, Muvaffak, or should we move on to talking about the next Terrajet release?
C
Yeah, not much. I mean, I was planning to actually open the proposal PR before the community meeting and present it, but it couldn't make it. So I will update this agenda with the proposal PR once it's ready, and of course we'll post on Slack about it, to get, you know, more thoughts and opinions about what to do there.
A
Yeah, definitely. And if anyone else has any feedback, please drop it on the tracking issue that Muvaffak linked there.
A
Okay, so Terrajet 0.4 is out. Any new or exciting features in that release?
C
Yeah, well, there are a bunch of, you know, bug fixes and other stuff, but there are two big things I think are worth mentioning. One is that, so, the Terrajet guide would direct you to import the Go module of the Terraform provider, so that Terrajet can process the schema to generate the CRDs.
C
But with this release, we are using the terraform CLI's schema command to get it to print the JSON of the schema of the whole provider, and we then process that JSON schema, which allows you to not import the Terraform provider at all. This is important because we have seen people having dependency issues with the Terraform providers they import; like, a bunch of unexpected errors you get with some of the specialist small providers that have dependency conflicts with their own stuff.
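The flow being described can be sketched as: run `terraform providers schema -json` and process the resulting document instead of importing the provider's Go module. The JSON below is a trimmed, hand-written imitation of that command's output shape, not actual Terrajet code.

```python
import json

# Sketch: consume `terraform providers schema -json` output to discover the
# resource types a provider exposes, without importing the provider itself.
sample = json.loads("""
{
  "format_version": "1.0",
  "provider_schemas": {
    "registry.terraform.io/hashicorp/aws": {
      "resource_schemas": {
        "aws_s3_bucket": {"block": {"attributes": {"bucket": {"type": "string", "optional": true}}}},
        "aws_instance":  {"block": {"attributes": {"ami":    {"type": "string", "required": true}}}}
      }
    }
  }
}
""")

def list_resources(schema: dict) -> list:
    """Collect every resource type, across all providers in the document."""
    names = []
    for provider in schema["provider_schemas"].values():
        names.extend(provider.get("resource_schemas", {}))
    return sorted(names)

print(list_resources(sample))  # ['aws_instance', 'aws_s3_bucket']
```

In a real pipeline the `sample` document would come from invoking the CLI against a Terraform working directory that requires the provider.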
C
So, yeah, that is a good improvement. The other thing is that Sergen has worked on, you know, opening up an interface for initializers, to add an initializer to the managed reconciler, and also a generic tagger implementation.
C
So you can use that generic tagger implementation if it works for you, or, you know, write your own initializer and use it on all of your resources whenever possible, or only on the ones that have a tags field, let's say; you can have such heuristics. So that is also, you know, another thing. And there are bug fixes; let me actually link the release here. And yeah, that's pretty much it.
D
Thank you, thanks for this, because it unblocks us for PagerDuty, for example, and mirror security. We have in-house Jet providers running for this, and it instantly fixes our issues.
A
All right, so we're into the sort of community topics and questions section. I have a topic that I added here, but before we touch on that, I want to check whether anyone else has a topic or a question that they'd like to ask.
B
I can do that one. I was hoping Darren might be here, because Darren did all the work on it, but: we use Elasticsearch at Upbound, and we use Elastic Cloud, and so we wanted to have a provider to manage our clusters, and Darren, who works on some of that stuff internally, built this provider. So we're definitely looking at having a 0.1 release, probably this week or next.
A
Yeah, thank you folks. All right, sorry about that. So, back to soliciting community topics and questions: anyone have anything on their mind?
D
Well, I added one issue, after your max-reconcile-rate one, I saw. Yeah, because we updated our provider-aws in our setup to the latest version yesterday and today, and, for example, we see an issue with syncing packages, where the CRDs are not controlled by the latest provider revision. And one question we had, or have, is: how many provider revisions do we normally see under the owner references?
A
This seems like something that might be Dan's area of expertise, but it should be, basically: I believe any provider revision that actually delivered that type of custom resource keeps an owner reference. So if you've had 10 provider revisions, and all 10 of them, the package for that revision, had, let's say, the RDS Instance CRD in it, I believe all 10 will keep an owner reference against that CRD, and only the one that is active will be the controller reference of it. Is that correct, Dan?
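A minimal sketch of the ownership model just described, with made-up revision names: many revisions hold plain owner references on the CRD, and at most one is marked as the controller.

```python
# Hypothetical ownerReferences on a CRD delivered by three provider revisions.
# Only the active revision sets `controller: true`.
crd_owner_refs = [
    {"kind": "ProviderRevision", "name": "provider-aws-1a2b", "controller": False},
    {"kind": "ProviderRevision", "name": "provider-aws-3c4d", "controller": False},
    {"kind": "ProviderRevision", "name": "provider-aws-5e6f", "controller": True},
]

def controller_owner(refs):
    """Return the single owner reference marked as the controller, if any."""
    controllers = [r for r in refs if r.get("controller")]
    assert len(controllers) <= 1, "Kubernetes allows at most one controller ref"
    return controllers[0] if controllers else None

print(controller_owner(crd_owner_refs)["name"])  # provider-aws-5e6f
print(len(crd_owner_refs))                       # 3 owners, but only 1 controller
```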
B
Yeah, that's correct. And also, a few releases back, we made a change to also have the top-level object, the provider configuration, own it, which is just an additional owner ref. So you'll have a number of revisions that own it, and then the provider configuration itself. That is just to guard against cases where you have to do some sort of manual intervention when you're doing an upgrade and you delete an older revision; just to make sure that the CRDs hang around.
D
Okay. So we had a short discussion with, what's his name, Carl Henry Glunder; he's also a maintainer on provider-aws. We talked about this because he had seen this issue before too, and what we did now is bump our max-reconcile-rate to 100, and it immediately fixed our issue that an old revision controlled the CRDs. And we don't know why this fixed the issue.
D
So what we really don't know is, for example: we have more than 10,000 managed resources in the cluster. Can we say something like, above 5,000 or 10,000, we need to bump the reconcile rates, or we need to bump the CPUs? Because this is the only thing that fixed our issues in the clusters.
D
So, in the end, we bumped, in all clusters, the CPU requests to 1,000 millicores and bumped max-reconcile-rate to 100, and it fixed the issue, on all of the clusters, where the CRDs were controlled by an old provider revision. And we don't know why.
B
I would guess that what's actually happening here is that when you bumped the reconcile rates, it just restarted the controller, which enqueued the reconciles, so the first time it came around, it picked it up. I don't think, well, I don't want to say absolutely, but I doubt it had to do with actually changing the reconcile rate to whatever value.
D
Yeah, but I can add something. We played around in a few clusters, so if we only, for example, bumped the CPUs to 1,000 millicores, we had the same issue; nothing changed. And
D
if we changed the reconcile rates, then the issue was gone. Yes, and then it's fixed for, for example, eight clusters, without issues.
A
The only other idea that comes to my mind, which I don't have high confidence in, is: could it be that, with so many things in the system, the reconcile is just not getting a chance to run?
B
Yeah, I think that's possible. It seems like it could either be, if it's, you know, getting CPU throttled, it seems like that could trigger the context deadline; or what you're saying, Nick, I guess, could also be the case if it's just getting rate limited really hard. But I am kind of surprised to hear that, so I definitely want to look into that a little more.
D
What we're seeing, for example, and I can add a lot more things because we snapshotted our monitoring systems: what we see in the Crossplane container is client throttling messages against the server, and if we bump the reconcile rates, it is completely gone. We see nothing like this. That makes it interesting.
A
So max-reconcile-rate actually has, it's intended to be, a simple flag in front of a bunch of related knobs. When you bump up the max reconcile rate, it is actually bumping up the amount of parallelism behind the scenes. It also, you know, literally affects the reconcile rate, in a rate limiter, but it also adds more parallelism, and it also adds the ability to do more requests.
A
So it bumps up the request burst threshold, basically, and the actual request rate threshold, to the API server. So if your problem was that you were being rate limited for too long, like, never being able to get around to succeeding with a request, I could see bumping max-reconcile-rate helping with that, because it's going to, basically, let you make more requests.
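The fan-out just described, one flag driving several knobs, can be sketched as below. The exact mapping and multipliers are internal to Crossplane and crossplane-runtime and may differ; the values here are illustrative assumptions, not the real implementation.

```python
# Illustrative sketch: --max-reconcile-rate as a front for several knobs.
def derive_knobs(max_reconcile_rate: int) -> dict:
    return {
        # how many reconciles may run in parallel per controller
        "max_concurrent_reconciles": max_reconcile_rate,
        # steady-state reconciles per second allowed by the global rate limiter
        "reconciles_per_second": max_reconcile_rate,
        # client-side request throttling toward the API server (assumed 10x burst)
        "client_qps": max_reconcile_rate,
        "client_burst": max_reconcile_rate * 10,
    }

# Bumping the one flag from the default to 100 raises everything together,
# which is why it can relieve client-side throttling, not just reconcile pacing.
print(derive_knobs(100))
```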
B
Okay, yeah, that makes a lot of sense, because, every iteration, and we changed the way the revisions reconcile, the frequency at which they do it, but every time through, they're going to touch every object they've installed, which is obviously quite a lot, and so that could definitely eat into your rates there.
A
Okay, thanks. No worries. All right.
A
The next topic is one that I added, which is, effectively: I have a couple of projects that I started that I am just not able to keep up with at the moment. My role here at Upbound has changed a lot, and I'm spending a lot of time reviewing architecture and trying to, sort of, broadly shepherd things across the Crossplane community, and I get very little time to code.
A
Sadly, of the three things that I've added here, this is basically a call-out for new maintainers for the first two providers, and, for people who are interested in becoming maintainers of or contributors to Crossplane, the composition-revisions and max-reconcile-rate functionality is sort of half implemented at the moment and needs to be taken past the finish line. I'll go through these in order; it's roughly, I think, from most to least needed.
A
provider-sql is a provider that models SQL databases; not instances, but actual databases, users, etc. I started this provider, and we did have two other maintainers on it, but those folks have since moved on, either to companies that don't use Crossplane or, in one case, onto a months-long vacation.
A
I can't keep up with the PRs that are open against it, and also, ironically for someone who was an SRE for 10 years, I actually don't know SQL that well. So people are adding a bunch of features, like advanced Postgres and MySQL user features, grants, and things like that, where I just don't understand the API calls, or the SQL calls, myself. So I definitely feel that this provider could use attention from, well, anyone.
A
So, if anyone's interested in any of these, feel free to, you know, pipe up now, or let me know at some point. The next one is provider-terraform, which, unlike the Terrajet providers, is a provider where you just pass it a big blob of Terraform and it goes and runs
A
Terraform. Yuri has helped out on this one a little bit, and I'm kind of hoping that Yuri might be interested in taking on a maintainer role, to help review PRs and move things forward on this. But we could totally use other folks as well. It is one of the more simple providers, I guess, because it only has one kind of managed resource, which is a Terraform configuration, but there's a lot of complexity behind the scenes there.
A
So
on
these
these
these
providers
are
both
on
the
I
believe
up
to
cosplay
control
auger
at
the
moment,
and
I
basically
I'm
just
gonna
focus.
I'm
just
gonna
make
you
a
maintainer
and.
A
All right. Then the other two things that we have here are features in either Crossplane or permeating the Crossplane community. One is getting composition revisions to v1beta1. So, composition revisions, which Chris was asking about before: we added it as v1alpha1, so it's behind a feature flag and it's turned off by default.
A
We have got some good feedback from folks that it's pretty useful. The most concrete request for it came from Jillian Hill's team, effectively asking for a way to consistently specify a composition revision across clusters, because at the moment, when you create a revision, it has a sort of randomly generated name. So it's not possible to create a claim that pins to a particular revision across many clusters. We have discussed some approaches; I actually forget where it was.
A
I think it was on a different issue, but we've discussed some approaches to do this using label selectors: you can put some magic version, for example, in a composition revision selector. So I think we have a good way forward on this; I just haven't had time to work on it. So this is another area where I'm happy to partner with someone, if they're interested in, you know, actually doing the work to add that feature, which would make us feel good about getting this to v1beta1.
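The selector idea can be sketched like this, with hypothetical revision objects and a hypothetical `channel` label (not an existing Crossplane API): the same label selector resolves deterministically in every cluster, even though the generated revision names differ.

```python
# Hypothetical CompositionRevisions: names are generated and differ per
# cluster, but labels can be applied consistently everywhere.
revisions = [
    {"name": "xnetworks-7f9c2", "revision": 1, "labels": {"channel": "stable"}},
    {"name": "xnetworks-b81aa", "revision": 2, "labels": {"channel": "stable"}},
    {"name": "xnetworks-e03d4", "revision": 3, "labels": {"channel": "testing"}},
]

def select_revision(revs, selector):
    """Return the newest revision whose labels satisfy the selector."""
    matches = [r for r in revs
               if all(r["labels"].get(k) == v for k, v in selector.items())]
    return max(matches, key=lambda r: r["revision"], default=None)

# A claim carrying the selector, rather than a pinned name, gets the same
# answer in every cluster.
print(select_revision(revisions, {"channel": "stable"})["name"])   # xnetworks-b81aa
print(select_revision(revisions, {"channel": "testing"})["name"])  # xnetworks-e03d4
```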
A
Yeah, yeah. I like the idea in general, but, rather than introducing a concept of a channel, I like the symmetry of keeping a selector next to the ref. I don't know if it'd be possible to have "channel" just be a convention with labels or something like that. But I don't know; this seems like something that needs, if not a one-pager, then a detailed comment proposal or something like that, before we go.
A
Yeah, I didn't think it through very deeply, to be completely honest. I was mostly just commenting that adding a new concept of a channel in there has less symmetry with all the other references than having a selector with labels. But yeah, something like that.
A
Yeah, that was kind of the big gotcha with this one, actually: most of our references aren't continuously resolved, by which we mean, when you set a label selector, that resolves a reference; it looks for something, and there's this bunch of references throughout Crossplane. So in this case, if you added a label selector, hypothetically, if we used the default behavior, what would happen would be:
A
it would select a composition revision based on the labels, set the reference, and then just stop doing resolution. The reference is set; "I don't need to check my label selectors again." That's how we would normally do it. But in order to actually make it work, so that if something else changed to become, like, the stable revision or something like that, we would need to, on every reconcile, resolve those label selectors, see if something changed, and then potentially propagate the updates.
C
Yeah, there's actually an issue in crossplane-runtime about references being optionally resolved. So maybe, you know, we can add another option like that: by default it would keep the current behavior, but, you know, you could make it refresh on every reconcile, controlled with a field there. Yeah, I'm not sure exactly which field it would go on; matchLabels is essentially a map of strings, so the ref could have it, because it's a struct. But yeah, it could be an option there.
A
All of that that we just talked about is one of the many things that I don't realistically see myself getting around to doing anytime soon. So, on to the final thing, which is maybe a
A
kind of a good first issue, or something. It's very boilerplatey, is what I'm saying, but it does touch a lot of code. We added the max-reconcile-rate flag to Crossplane core, but I want to add it to, effectively, all providers as well, and there's a couple of places where that hasn't been done
A
yet. I don't think we've yet updated Terrajet to add this flag when providers are generated; we haven't yet updated the provider-template that some folks use to write providers that aren't Terrajet based; and there's probably a handful of providers out there at the moment, I want to say provider-azure, and, you know, provider-sql, provider-terraform, providers that are fairly popular, that haven't had this flag added. I think I actually added a little list here of the ones that I wanted to get.
A
I don't think we need to have every provider in the world updated to do this; that would be a different task for someone. But basically, adding this flag is usually a case of looking at one of the existing pull requests, adding the flag, and then plumbing it down to the controller. So it's pretty boring work, to be completely honest, but it does need to, it would be good to, get it done at some point.
A
Does anyone know whether provider-aws has been updated with this yet? I don't think it has, right? I don't think the ACK generator understands it either.
A
That's all of them. If anyone wants to help out, please, please let me know, because, as I say, I personally probably am not going to get much time to work on these; at least not this quarter, and not likely next quarter either.
D
So, only to mention: I made, in provider-aws, the Go SDK bump. I will focus tomorrow on looking at the open points that were added to the review, and I think then we can go. I think I tested a lot of stuff in the provider with this.
D
And hopefully we can get it in, because a lot of people are waiting for this. I think the Essentia and Deutsche Bahn folks need this SDK bump for a lot of tagging stuff, and a few resources are also waiting for this.
D
Yes, and I think Yuri also added something we very much needed in the provider, for Karpenter, which makes it work in our EKS setup. Thanks for this.
D
Nice. And then I have one other question; we've not opened an issue, but it's more a question. So, for example, if you create a Kafka server in one region in AWS, and then you want to create a user for the Kafka server, normally in the same region as the server, but a user makes a mistake and uses the other region, then we see provider-aws hitting the API every second. And we had it last week that AWS blocked our IAM users, or our IAM user roles, on the API, because we were making so many API requests.
A
Generally, if Crossplane is making a really tight loop of requests against the API, my hunch would be that it is a bug or an issue. This is something that the max-reconcile-rate feature would help to protect against.
A
If we got that into provider-aws, that is. It depends: if you have it set to 100, it's going to allow, in total across all of the provider, 100 API calls per second. Well, technically it does 100 reconciles per second, so it could be a little bit more than 100 API calls, but it constrains that. So that's something you can do to protect against it in general. But my hunch is it's probably a bug in one of the Kafka managed resources.
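The throttling described here (a global cap on reconciles per second, which only approximately caps cloud API calls) can be illustrated with a minimal token bucket. The real limiter wired in by crossplane-runtime comes from client-go's workqueue package, so this is only a sketch of the idea:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenBucket is a minimal stand-in for the limiter behind
// --max-reconcile-rate: it refills rps tokens per second, with a
// burst of at most rps.
type tokenBucket struct {
	mu     sync.Mutex
	tokens float64
	rps    float64
	last   time.Time
}

func newTokenBucket(rps int) *tokenBucket {
	return &tokenBucket{tokens: float64(rps), rps: float64(rps), last: time.Now()}
}

// Allow reports whether one reconcile may start now. As noted in the
// meeting, this caps reconciles per second, and a single reconcile may
// make more than one cloud API call, so the API rate can be a bit higher.
func (b *tokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rps
	if b.tokens > b.rps {
		b.tokens = b.rps
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	tb := newTokenBucket(100)
	allowed := 0
	for i := 0; i < 1000; i++ {
		if tb.Allow() {
			allowed++
		}
	}
	// Roughly the initial burst of 100 passes; the remainder are throttled.
	fmt.Println(allowed)
}
```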
B
I believe I have seen, Chris, in alpha-level additions to provider-aws, cases where Observe calls run away: something breaks in the Observe call and it just immediately requeues, and you get kind of a flood of these requests.
A
Yeah, there's an unfortunate behavior in the... well, I don't know, "unfortunate" is maybe not the right way to put it, but there is behavior in the controller-runtime implementation where, if your reconciler returns an error, it just effectively immediately requeues. That is subject to some rate limiting that's set up by default, but it allows you to do quite a few requests, I believe, before it starts backing off.
D
Okay, I will open an issue for this, because we had users creating more than 100,000 users on the Kafka server, for example, and then we immediately hit the issues in the AWS API. So we need to investigate what the problem is, whether it's in the resource's Observe; I don't know exactly. Yeah, that sounds good. Okay, thanks. Thanks, Aaron, also.
A
All right, folks, any other topics before we wrap it up for the day?
E
If
you
back
up
to
the
crd
issue
for
the
lazy
loading
and
the
issues
with
the
api
server,
we
can
we've
been
working
with
aws
because
we're
seeing
on
our
clusters
granted
we
have
additional
software
on
it
versus
just
a
vanilla
cluster
that
we're
unable
to
even
load
the
terajet
provider
like
it,
fails
out
and
they're
feeling
that
there's
a
conflicting
software
or
conflicting
crds
that
are
causing
the
problem,
and
then
they
also
have
solutions
for
the
client
side,
throttling
that
you're
going
to
possibly
see
for
the
cash
which
I
think
is
the
another
issue
that
I
think
is
getting
resolved
in
a
later
version
of
kubernetes
on
how
to
update
it.
A
Is that in 1.24, maybe?
A
Cool. And sorry, Julia, did you say it was the AWS Solutions folks that you're talking to?
E
Yes, because we're struggling to get the Terrajet provider fully installed, and we're not seeing any good logs to even point to what's causing the problem or what could potentially be conflicting.
A
That's
different,
that's
good,
to
know
because
that's
different
than
what
we've
seen
with
eks.
So
far.
As
you
know,
christopher
mentioned
that
eks
works
for
him,
but
the
provisional
services
folks
are
like
hey.
The
memory
usage
is
high,
but
it
otherwise
works
and
in
our
experiments
we've
usually
found
eks
to
be
the
most
resilient
of
all
the
hosted
kubernetes
services
we
haven't
been
able
to.
We
haven't
been
able
to
break
it
ourselves
using
terajet.
Yet
so
I
wonder
if
it's
literally
not
finishing
the
install.
E
It's
not
finishing
it
basically
and
it
sits
in
a
failed
state,
and
so
then
it
it
never
it's
ready
false
and
then
it
just
sits
there
and
hangs,
even
though
we'll
still
continue
to
see
client-side
throttling.
But
that's
really
the
caching
issue,
not
so
much
the
install.
So
we've
worked
with
eks
about
you
know,
increasing
our
size
like
behind
the
scenes
on
the
node
they've
made
it
to
the
largest.
E
We
still
can't
install
it
so
we're
we
have
credits
on
the
professional
service
and
professional
services,
so
we
may
have
them
come
dig
into
our
clusters
to
see
what
potentially
could
be
preventing
the
install,
but
we
can't
get
it
installed.
D
I
I
can
also
add
something,
so
we
are
normally
working
in
eu
central
one
there.
We
have
definitely
no
issue
only
the
the
discussion
with
the
professional
services
guys,
but,
for
example,
in
eu
west
one.
So
in
ireland
we
we
had
also
the
same
issue
that
the
jet
providers
are
not
finishing
the
install,
so
it
I.
D
I
think
it
depends
on
how
the
aws
guys
are
looking
in
the
managed
control
planes,
for
example,
how
they
are
scaling
up
and
so
on,
and
I
don't
know
if,
if
this
could
be
also
an
issue,
if
how
big
are
the
the
instances
in
the
background
and
if
they
have
instances
available
to
upgrade.
E
Yeah
we
worked
with
them
and
they
gave
us
the
largest
one
in
our
account
just
so
that
and
I'd
spun
up
a
cluster
with
our
entire
platform
on
it
and
it
still
didn't
work
and
we
start
with
like
171
crds,
but
that
includes,
I
think,
the
aws
provider
that
we
have
already
installed
in
all
of
our
clusters.
E
A
Yeah, the regular AWS provider certainly shouldn't conflict with the Jet provider; I'd be surprised if that was the case. I'm wondering whether there's... have you...
A
I
don't
know
what
the
best
place
to
report
this
is.
I
know
that
that
one
of
the
folks
at
upbound
here
is
is
digging
into
these
in
general,
currently
in
the
process
of
opening
a
one
pager
describing
some
of
the
problems
we've
been
seeing
and
what
we
could
do
to
address
them,
but
this
is
the
first
time
I've
heard
a
report
of
tarajet,
literally
not
finishing
an
install.
D
Is this, for example, something we can add to the EKS roadmap on GitHub? We could create an issue there and see if the EKS folks will answer it.
A
Yeah
that
could
certainly
be
worth
trying.
I
think,
there's
multiple
fronts
to
look
at
this
on.
You
know
so
far
we're
looking
at
from
from
the
sort
of
crosstalk
community
perspective.
We
want
crossbane
to
obviously
work
with
any
kubernetes
cluster
within
reason,
so
we're
looking.
You
know
if
this
is
let's
say
just
pure
resource
usage
issues
on
the
api
server
side.
A
As
you
know,
just
memory
and
cpu,
and
things
like
that,
as
move
off
mentioned
earlier,
we're
looking
into
you
know
whether
we're
just
profiling
of
the
api
server
and
see,
if
there's
obvious
places
to
make
it
more
performant.
I've
heard
some
speculation
about
lazy
loading
of
crds
in
the
api
server.
So
we
basically
just
would
tell
the
api
server,
don't
serve
these
cids
until
someone,
you
know,
hits
the
endpoint
for
the
first
time,
don't
don't
start
a
a
handler
for
them
at
first,
but
that
here
has
a
bunch
of
possible
downsides.
A
In our experience, for instance, GKE is much less resilient to this than EKS. There's also...
D
Our security folks have problems if we add CRDs to our clusters, but I think we can have a chat about this, because I can also add the regulatory stuff from the banking sector in Germany. I think we can then add this to the ticket, and then everyone knows what the problem is for the regulatory stuff if we have so many CRDs there. Because then the auditors need to look into Kubernetes, into RBAC, into IAM roles, IAM policies, and so on and so on. And then we'd have a never-ending story.
A
Yeah,
I
do
remember
what
the
issue
was
that
we
were
discussing
this
scene
chris.
E
Yeah
yeah,
like
our
use
case,
is
that
we
don't.
We
don't
expose
all
of
those
resources
to
our
platform
users.
So
we
don't
not
everyone,
not
all
of
aws
resources
are
available
for
anyone
to
create
things
to
so
we
want
to
restrict
that
from
a
platform
delivery
perspective
and
we're
also
not
going
to
abstract
away
everything.
So
we
don't
see
a
need
to
have
that
many
that
not
something.
E
For
what
yes,
for
what
what
they
have
access
to
to
provision?
Yes,
just
in
general,
though,
and
again
from
again,
I
think
from
a
security
perspective-
is
that
they
don't.
We
don't
want
all
of
these
crds
loaded
in
our
clusters.
A
Yeah. Some of the stuff that we've talked about with potentially implementing this in the past... I believe someone actually opened a PR and it...
A
The other approach that we've flirted with would be something like what the AWS Controllers for Kubernetes folks do: instead of there just being one AWS provider, there would be, you know, 50 AWS providers. There'd be an S3 provider, an RDS provider, a yada-yada provider, etc., which is definitely somewhat cleaner to implement. But then we have the problem that references between those different types would not be able to happen at the sort of internal code level.
A
We
need
more
generic
references,
so
so
actually
fixing
this
is
tough.
So
definitely
getting
your
or
you
know
changing
this
behavior,
so
definitely
getting
use
cases
from
folks
is
is,
is
is
very
handy.
A
I
just
added
the
issue
there
in
the
folks
to
wait.
If
you
haven't
already
all
right-
and
we
are
now
a
little
bit
over
time,
so
I
have
to
run
that
so
I'm
gonna
call
it
for
this
meeting.
Thank
you
very
much.
Folks.