From YouTube: 2021-11-18 Crossplane Community Meeting
A: All right, the recording has started, and this is the November 18th, 2021 Crossplane community meeting. Let's see. So we are in the midst of the cycle for 1.6; there has not been a release since the last community meeting.
A: I think we had just gotten 1.5 out before the last community meeting, and there have not been any patches, at least that I'm aware of, to main Crossplane. There have been a number of provider updates, and things going on with the Terrajet effort that we'll get into in a little bit here, but for core Crossplane there have not been any releases since the last community meeting.
A: I think we're still a little slow here in terms of the cycle for 1.6. We have a release date scheduled for December 21st, so we'll get out another release before the end of the year, and we'll do our typical code freeze and feature freeze days before that, following the same cadence that we're all accustomed to. But I think we're a little slow in terms of identifying or committing to core features in core Crossplane.
A: There's a lot of effort going into provider coverage and generating providers, things like that, so I don't know how much specific effort there will be in 1.6. But we should take a real quick look at the roadmap and the 1.6 board, just to see if there are updates that folks want to make, or any sort of blockers or things that folks want to call out here.
A: The meat of the content that people are focusing on right now is in the next section, so that's okay. And for anybody from the community here who wants to add their opinions or feedback on things that they want to see some progress on, or that they think are a higher priority than we've been treating them, the floor is most certainly always open during this discussion for that as well.
A: Let me close these out, sorry. Okay! So let's get into the providers topic then, because that's where a lot of the attention is right now, and we have some announcements and some progress on that as well. So, Muvaffak, I see your little cursor running around there right now. Do you want to give us an update on the Terrajet efforts and everything that's going on with that right now?
D: Yeah, definitely. So the biggest news since the last community meeting is that all of the Kubernetes API server scaling issues that we were facing are resolved now, and the patch releases have gone out. That's the great news there. There is one kubectl fix still pending, but that one is only slowing queries down, so it's not a huge blocker like the ones we had with the API server.
D: It's all good now. The patch releases actually just went out yesterday, and I checked whether the fixes were included, and they were, so it's all good now. I asked the kind community about where the new image will be available, and I haven't gotten an answer yet, but we will also publish our own kind images, because it takes a while for patch releases to propagate to all the sources and all the cloud providers.
D: One build, 0.3.0, will have around 50 to 100 CRDs, so that everyone is able to install it; at the same time, we'll have a build with all the CRDs that we are able to generate today, which across the three providers is now about 1,800 CRDs.
D: So that's the other big news. This will also be the release where we have the external name configuration complete for the resources we added to the configuration, and I believe Hasan also has a PR for a guide on how to generate a provider. I think I forgot to write it here, but we also got a new Terrajet-based provider, called provider-tf-equinix.
D: So when we talk about native providers, people essentially mean the non-Terrajet providers. And the last point that I wanted to make is the provider strategy doc. A lot of people have asked what's going to happen to the native providers, whether it's worth using the Terrajet provider or its native counterpart, and in which cases we should use which one.
D: That, and what the long-term plans are for both kinds of providers, is what the provider strategy doc covers, and it's where you can voice your concerns and ideas. It's a bit of a long document, but you can skim it: it lays out the options, the proposal, and then the initial decision that we arrived at for the initial releases.
D: So you can leave comments there, add your feedback, and we can discuss what we're going to do in the mid to long term.
D: I would highly recommend taking a look at that document and providing feedback with your use cases, and with how your organization would receive the current strategy, which is having two completely separate providers for each cloud provider. I think that's all I have in mind related to Terrajet. There are lots of things we have fixed or added features for, but these are what's on my mind now. I'll pass it over here if anyone wants to add something too.
A: Those are some really important updates there. One thing I would emphasize: I think it was really quite impressive, the effort of folks on this call and in the community to identify the performance issues in upstream Kubernetes, in the Kubernetes control plane and the API server, where the work they were doing in Crossplane pushed the API server beyond what the upstream community as a whole had planned for or been ready for.
A: So it was kind of interesting that we hit that with what we're doing here and the scope of the coverage that we're building and generating. I was also really impressed with the effort from the community on driving fixes into upstream Kubernetes: navigating the release processes there and getting things accepted, approved, merged, backported, and released. All of that is not a trivial thing to do. So, very good work from our community to get upstream Kubernetes updated with things that we need for Crossplane to be successful, things that will in turn, in time, also benefit the greater upstream ecosystem around Kubernetes. That was actually pretty darn awesome.
D: And the folks in the upstream community were all very welcoming too; they made quite a good effort to make sure the Crossplane community is able to get those fixes in and expand the ecosystem.
E: Yeah, related to this topic, maybe I can ask a question. It looks like we will generate the kind images so that the community folks can give them a try, and my suggestion would be to push them under the crossplane organization. Could that be possible?
D: Yeah, I would say let's ping Ben, the kind maintainer, for an answer on when we'll be able to push the images. If not, we can consider doing that ourselves, and do it. But...
A: Yeah, I mean in general, if we need to do things to unblock ourselves, then I'm totally supportive of that, I would say.
A: People are using it, yeah. That's pretty cool, so obviously this is getting some eyes on it and things are happening with it. That's awesome. And that guide was used by Marcus for getting the two Equinix providers up and running yesterday. I hope Marcus's demo went well today, because he wanted to get those up so he could do a demo today. So I hope that all went well. Awesome.
C: Yeah, great. And if anyone was thinking in the past about a specific provider that they were looking into, anything that has a Terraform provider, please reach out and please give this guide a try.
C: We are really eager and looking forward to it, and Marcus already gave some nice suggestions (Git and GitHub integrations, etc.), so it's getting polished into a good state. I think that, at least for the adventurous folks, it's ready to be used, and we would love feedback. So if you have any provider that you were thinking about in the past, this would be a great time to join forces, give it a try, and give us feedback on the guide.
A: Perfect, all right. So there were a couple of releases recently, since the last community meeting, for AWS and for Azure as well; you can read about them in the release notes. Are there any fixes that folks wanted to call out as important in those? I know the Azure one was fixing a panic, so we could talk about that if we wanted to. Dan, I think you drove that fix; is there anything you wanted to highlight on that one?
F: Nope, this was just... well, I guess there is one thing I want to highlight. The storage APIs are quite old here, and they have a very strange pattern where the storage container uses a storage account as its provider config, or its provider reference in the legacy terms, which creates kind of a weird relationship there. We were basically erroring by just accessing the providerRef field, and obviously we use providerConfigRef more commonly now, so the fix basically just uses whichever one is defined. It was causing a panic; it wasn't actually a functional problem.
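The guard Dan describes (prefer the modern providerConfigRef, fall back to the legacy providerRef, and never dereference an unset reference) can be sketched as follows. This is a hypothetical illustration, not the actual provider-azure code; the field names mirror the Crossplane resource spec.

```python
def resolve_provider_config_name(spec):
    """Return the name of the referenced provider config.

    Prefers the modern `providerConfigRef` and falls back to the legacy
    `providerRef`; raises a clear error instead of crashing (the original
    bug was a panic from dereferencing the unset legacy field).
    """
    ref = spec.get("providerConfigRef") or spec.get("providerRef")
    if ref is None:
        raise ValueError("neither providerConfigRef nor providerRef is set")
    return ref["name"]
```

The point is simply that either reference style keeps working, and a resource with neither gets a readable error rather than a crash.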
A: Awesome. Daniel, thanks for getting that fix out there; that should unblock folks that were hitting it. All right, and then 0.21 for AWS had a number of fixes in it. Anything really important to call out here, Chris or Muvaffak, or anybody?
A: I'm sure they'd like a t-shirt. Awesome, okay. So let's drop into each one of the providers for a second here. Chris, I know you had added a bunch of updates here on provider-aws; do you want to talk through some of those that you think are important?
B: So this is the issue; I think we talked about this at the last meeting. We have an issue with the collision between CRD types and other types that we were experiencing in provider-aws: the situation where we have a first-class custom resource that is a security group, but security group is also a field on other custom resources, and those were causing collisions and problems with code generation.
H: One thing for me is that we currently have three PRs open to bump the SDK version and the code generator version. I think we can consolidate the three PRs into one to focus on this.
B: Yeah, 920 was opened simply as a draft to demonstrate the behavior of the latest code generator, so that we can collaborate on why we're seeing what we're seeing, with the new resources being generated on the wrong API and with several types disappearing along the way.
D: Yeah, Aaron, I just saw a PR merged, I guess yesterday, in the code generator that seems to be fixing these problems.
B: The PR itself mentions it doesn't fix all of them, and when I ran the tests it didn't result in a clean generate run. So I think one of the issues that we're facing here is that the people who are doing the refactors are kind of flying blind. I don't know that they're running code generation against our repo; they're just trying to make the code make sense from their perspective. So it's possible we may want to supply tests that they can use to make sure that any changes they make within the Crossplane command of the code generator are not breaking things, because right now they're just reading the code and trying to make it make sense; they don't have anything to compare the results to.
D: Yeah, I think that's a great idea and something that I want to do. I think there's some kind of integration test there. In most cases they don't need to touch that part of the code, but I think they changed a flag to be a different thing, and along the way some of the things that we had set up, like using a different group, got changed in the config.
B: It was weird. I looked at the history: basically, one person came through to change a flag and accidentally stomped on the Crossplane generate function that we specifically call out, and replaced it with the default model generator, and then someone else came through and made another refactor based on the already-stomped-on code. So we're a few changes deep, trying to unwind exactly what the original behavior was.
B: When I run the latest code generator, everything is still being generated for v1beta1, which is not what we want, and the v1beta1 generated CRDs are still being placed on the default AWS code generator API group and not on the aws.crossplane.io group.
A: Cool, all right. And Christopher, were there other things you wanted to add for provider-aws?
H: I think later in the document I added a few items from the community for DocumentDB. I think we merged a few PRs in the last days, but it now seems to be broken for default parameters in the API, so reconciliation hits an error every time; if you take a deeper look at the generated code, the reconciliation is running every second. So I think there's something strange under the hood now in DocumentDB.
A: Yeah, we should look into what's up with that. Okay, and then I'll call out here too that there is a release policy now defined for provider-aws, where we'll be doing a minor release every four weeks. Obviously we can do patches as needed for fixes of high priority or high severity, but we do now have a policy for doing releases on a regular cadence for provider-aws. All right, Alper, do you want to give us a quick update, then, on provider-azure as well?
E: Yes, sure, thank you for the opportunity. So we have a new managed resource merged this week from Sergen. Sergen, would you like to say a bit about it?
C: Yeah, sure, thank you. I worked on this new managed resource, the MySQL server configuration. There is already a managed resource for MySQL server, and we can manage MySQL servers with it; with the newly added MySQL server configuration resource, it is now possible to set MySQL server configuration options such as connect_timeout and others, and those configurations can now be updated via the resource.
A: Was this your first contribution to the Azure provider?

A: Nice, awesome, that's fantastic to see! Congratulations on your new contribution there. Yeah, absolutely.
A: Okay, anything worth calling out for the other ones as well?
E: Yeah, so the second one is a small fix: there was already an API available and exposed, but it was not working as expected. We can now configure firewall access to MySQL server databases, thanks to a relatively small contribution from the community. And maybe, if we could switch to the issue, I would like to discuss an issue that's affecting the native provider. Yes, that one. So it turns out that we can no longer provision Kubernetes clusters, AKS clusters, using provider-azure.
E: This was initially mentioned in the Slack channel, but it turns out that Victor had already opened an issue about this, and what was discussed in the Crossplane Slack channel is exactly the same issue Victor has reported. I spent some long debugging hours on this. The initial issue is relatively easy.
E: On the Azure side, there have been modifications in the authentication and identity APIs. When you are provisioning an application, you can specify an app ID URI, and previously, my understanding is, there was no validation on that app ID URI.
E: So what does this have to do with provider-azure? When we provision a Kubernetes cluster, we do a series of operations: we provision an application, then we make sure that a service principal for the tenant actually exists, then we assign a role to that service principal, and then we create the Kubernetes cluster. While we are creating the Kubernetes cluster, we specify the service principal profile, in which we share the secret of the application that we provisioned with the Kubernetes cluster service.
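The ordering of those operations can be sketched as below. Everything here is hypothetical (the client methods and the role name are illustrative, not the provider-azure API); it only captures the sequence described above, including the step where the new app ID URI validation now bites.

```python
def provision_aks(client, tenant_id, spec):
    """Illustrative sequence of the AKS provisioning steps described above."""
    # 1. Provision the AD application. This is where the app ID URI is set,
    #    and where Azure's new validation now rejects the fixed-format URI.
    app = client.create_application(app_id_uri=spec["app_id_uri"])
    # 2. Make sure a service principal for the tenant actually exists.
    sp = client.ensure_service_principal(tenant_id, app["app_id"])
    # 3. Assign a role to that service principal (role name is illustrative).
    client.assign_role(sp["id"], role="Contributor")
    # 4. Create the cluster, sharing the application's secret in the
    #    service principal profile.
    profile = {"clientId": app["app_id"], "secret": app["secret"]}
    return client.create_managed_cluster(spec, service_principal_profile=profile)
```

Because step 1 now fails validation, everything downstream never runs, which matches the observed "no AKS cluster can be provisioned at all" behavior.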
E: So that's the overview of the flow. What initially broke is that when we are creating the application, we are specifying an app ID URI, and that is now being validated; that was the initial problem, and it is relatively easy to fix. I have already attempted to fix it by specifying a URI that does not need validation. However, it turns out that we have further problems. I'm not 100% sure, but my feeling is that there are also some other...
E: ...changes on the Azure side, and because of those, it looks like we are running into some race conditions in the provider itself. We previously had similar situations with the PostgreSQL server.
E: So the investigation is not over yet. I've dug into it, but I still need to try some new things.
A: So, Alper, thanks for the details there. Does this result in 100% of AKS clusters not being successfully created, or is it a timing-related thing?
E: No; right now, with the released version, you cannot provision any AKS clusters, because we are using a fixed-format app ID URI, which is being validated, and it never passes the validation.
A: Got it. Since you're looking into this, do you want me to assign you to it, then?
D: Asking the user for that URI could make the problem easier; at least, instead of using a hardcoded one, it could be a spec field.
E: Yeah, that's one of the options; we can ask them. And if they are going to use the https scheme, it needs to be already registered or validated on the Azure side.
E: But, as I mentioned, the initial issue regarding this app ID URI is relatively easy to fix; it's the race conditions that we still need to find our way through. That said, it's something we can consider: making it a spec field and letting users provide this app ID URI.
E: Basically, my understanding is that these app ID URIs are used in cases, for example, in which the application redirects users to one of its endpoints (that's why the https endpoint, or https scheme, is involved), and you also use the URI as a prefix when specifying scopes. This application that we create is specific to Crossplane.
E: So I'm not sure whether it really makes sense to make it a spec field, because it's something internal. But if, for example, the customer has an already-validated domain, or can specify a valid one, it should be okay. Maybe we can continue the discussion offline.
A: Yeah, thanks for looking into it; that's helpful. Putting some priority on this is maybe not a bad idea, since it's resulting in AKS clusters failing to be created, so I'm glad you're looking into it now.
A: All right, thank you. So we'll move to the community section now. We've got 25 minutes left, so I'll try to pick up the pace to get to some of these other discussions. In terms of content this community meeting, there's a whole bunch of things going on. There was a live stream this morning from Anais, which I think started an hour and a half ago, about using Crossplane and Argo to do GitOps, and best practices around that.
A: So you can catch the replay of that. Victor put out another video, specifically about provisioning Kubernetes clusters that are production-ready, so that's a super interesting topic as well. There's also been some more writing recently from all over the community: Yan has a new blog post out, Peter does as well, and there are a few others. Those are all blog posts from people who have started using Crossplane, about the problems that they're solving with it.
A: So those are all pretty interesting. And then another one here: Dan created a new reference platform. I don't think it has an official release yet, but Dan, did you want to mention that use case real quick? I think it's actually quite interesting and might be defining a new pattern for folks going forward.
F: Yeah, for sure. So this is a reference platform that actually depends on another one, so it's showing how you can really utilize the package manager to resolve dependencies and install new implementations for types that are defined by XRDs. Folks may be familiar with the platform-ref-multi-k8s configuration that Jared wrote: it gives you a single cluster XRD type, with a cluster-scoped and a namespace-scoped type that you can create, and it can be satisfied by either an EKS cluster or a GKE cluster.
F: This new platform depends on that one, and all it installs is another configuration, which gives you another cluster implementation that's satisfied by just a Helm install of vcluster. So it basically allows you to run multiple clusters, each in a different namespace within a larger cluster. When you install this, it will bring in platform-ref-multi-k8s and all of its dependencies, as well as this composition, and allow you to stamp these out. You can use it by provisioning; you can create two clusters, say, one...
F: ...that's satisfied by EKS, for example, and then put this virtual cluster, or a number of virtual clusters, in it by referencing it. You can also use the in-cluster config, and this is actually an interesting case that some community members have been trying out recently: in the cluster where you're running Crossplane, you can just stamp out a bunch of virtual clusters, which is nice for testing things like provider-kubernetes or provider-helm, because you just have these lightweight clusters. Something else to note here: I believe Yan, in his blog post about Crossplane, also brought this composition into the repo, and he is demonstrating the control-plane-of-control-planes approach with it, where you basically have a Crossplane cluster that spins up other ones, and then you install Crossplane in those remote clusters.
F: So this is, once again, a really easy way to do that, even within the same cluster, which is really interesting. We'll have a more full-fledged README, with a diagram and things like that, coming up in this reference platform, and then it'll be in the Upbound registry as well.
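The dependency mechanism Dan describes lives in the package's crossplane.yaml. A minimal sketch, with the package name and registry coordinates invented for illustration (check the actual repo for the real ones):

```yaml
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-ref-vcluster   # hypothetical package name
spec:
  dependsOn:
    # The package manager resolves and installs the base platform
    # (the XRDs plus the EKS/GKE compositions); this package then only
    # adds the vcluster-backed composition on top.
    - configuration: registry.example.com/platform-ref-multi-k8s
      version: ">=v0.1.0"
```

Installing the top-level Configuration pulls in the whole chain, which is what lets a small package reuse everything beneath it while adding a single new composition.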
A: Yeah, right on, Dan. I think this is a really interesting pattern for being able to run a control plane of control planes within the same control plane, while getting some more strictly defined tenancy boundaries, and providing a surface area of APIs for multiple teams. So I think that was actually quite interesting.
A: This will continue to move forward here, and I also really like that this is the first example of an extension of a reference platform: an extension of a configuration package that just adds one small thing but reuses all the existing functionality from the package that it depends on.
A: Let's see. Then I think Dan was also writing another blog post himself, a deep dive into how Kubernetes (the control plane, the API server) validates custom resources, which is pretty interesting. And then, also really cool: Mauricio from the community here is writing a book about CD, continuous delivery, on Kubernetes, and there is a chapter specifically on Crossplane. His book is available in the early access program now, if folks are interested in that. I think it's super cool that there is a book being written that contains an entire chapter on Crossplane, so great work from Mauricio on that one; the link is available there.
A: Okay, cool. So there's lots of interesting content being generated, and people talking and writing about Crossplane, and it kind of pumps me up every community meeting to go through what has happened in the last couple of weeks and find all this interesting stuff that people are saying and doing with Crossplane. So I'm stoked on that. Christopher, do you want to bring up these DynamoDB reconciliation issues now?
H: I think you can see it with both of them: if you configure fields that have default values, for example, then you get issues in the reconciliation. In the first step, when we create the resource, everything is good, and after the first reconciliation we want to set fields that are already set in AWS.
H: We currently have the same configuration, and then we get an error that we cannot set the provisioned mode, because the provisioned mode is already set up, something like that. This is really strange, and we see it for a lot of fields. If you remove the default fields, then everything in the reconciliation is okay. I don't know where the problem is, because I checked the last PRs we merged and each of them works great on its own, but all together they now cause the problems.
H: You see something like this there: a ValidationException, "table has no stream to disable". But we set up the table with a disabled stream, and on the AWS side it is disabled. So yeah, it's strange behavior at the moment, and we currently see this with all of those fields.
H: I think for DynamoDB we have the behavior that we can currently patch one field at a time, and somewhere in there we have the problem; this is custom code there.
D: Okay, yeah, I will take a look at it, because I added some code there, I think it was last week, for an up-to-date check. I don't remember much about the late initialization behavior, but I will take a look.
A: That's the PR I was just looking at on another screen here, for the DynamoDB-related updates. As Christopher was describing, I think we've solved a couple of individual problems with DynamoDB, around late initialization or updating of fields later on, because I think the DynamoDB resource didn't even have update implemented in the first place. So now we've added some things, and it sounds like what Christopher is saying is that the aggregation of those updates, each focusing on specific fields, is causing strange behavior when they're all put together.
D: So this problem arises when you set those fields? If you just create the example, you don't see this, right? It works as expected.
H: I think this is a good example; you can see it here. If we configure, for example, the provisioned throughput, then the billing mode is automatically set to PROVISIONED, and when we then try to update the billing mode to PROVISIONED, we get an error from the API that we also need to set the provisioned throughput.
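One way to read that failure mode: DynamoDB treats an unset billingMode as PROVISIONED (the API default, which is also what gets late-initialized once provisionedThroughput is set), so a naive spec-versus-observed diff sees a "change" and issues a doomed UpdateTable call. A hedged sketch of a normalization step that would avoid the spurious update; this is an illustration of the idea, not the actual provider-aws code:

```python
def effective_billing_mode(table):
    """DynamoDB's API default for an unset billingMode is PROVISIONED."""
    return table.get("billingMode") or "PROVISIONED"

def billing_mode_up_to_date(spec, observed):
    """Compare desired vs. observed billing mode after normalization, so a
    server-defaulted PROVISIONED never triggers an update on its own."""
    return effective_billing_mode(spec) == effective_billing_mode(observed)
```

The same normalize-before-compare idea applies to the other late-initialized fields Christopher mentions (streams, defaults on the table, and so on).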
D: Yeah, I'll look into why it doesn't skip that update, and give those values some attention.
A: Is this also related, then, Christopher: DynamoDB adding default tags?
H: I think... no. I didn't add this item, but I think there was a question in the community about why Crossplane adds default tags to resources, like which provider the resource was provisioned from and so on. It's not really clear to folks why Crossplane does that.
I: Christopher, I feel like I know you, since I message you so often. Yeah, so we noticed, and it's not consistent across all resources, that on the DynamoDB resource we are adding default tags. I'm not saying they're not useful, but I feel like they may leak some implementation details that people may not want to surface.
D: Yeah, the default tags; I think that was quite a while ago.
D: We added this to crossplane-runtime, to be applied to all resources, so that you can search through those resources and make a relation between the actual CR and the external resource in the console: you have the relation in the cluster, but you don't have it in the console. It was very helpful, specifically for the resources whose name is assigned by the provider. For example, an AWS VPC gets an ID from AWS, and if you don't have the default tags there, you wouldn't have any idea which Crossplane cluster, and which CR, it belongs to. Those were, to the best of my knowledge, the reasons we added it. However, you're right that it's not implemented for all resources, because, as you may have noticed, the tagging behavior is not consistent in AWS: in some cases it's a map, in some cases it's an array of keys and values.
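The tag set being discussed is small: just enough to trace an external resource (say, a VPC whose ID was assigned by AWS) back to the managed resource and cluster that own it. A sketch of the shape, with the key names as I understand crossplane-runtime's external-tags helper to emit them; treat them as illustrative rather than authoritative:

```python
def external_tags(kind, name, provider_config):
    """Default tags relating an external resource back to its Crossplane CR."""
    return {
        "crossplane-kind": kind,              # the managed resource's kind/group
        "crossplane-name": name,              # the CR's metadata.name
        "crossplane-providerconfig": provider_config,
    }
```

This is the "relation in the console" Muvaffak describes: given only the tags on the cloud resource, you can find the CR that manages it.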
I: Yeah, I'd also make sure this makes it into the documentation, because this is a significant implementation detail. I don't hate the tags, but I also don't love them. Tagging within an organization carries lots of meanings, and people should, yes, be able to add their own custom tags; but for the most part, our end users don't know what Crossplane is, nor should they, since we abstract that completely away. So it leaks implementation details that could potentially be problematic.
I: So one, I think we should document that this is happening, because that's not done, and people who are building compositions and testing them, expecting certain behaviors, may be surprised to see the tags. Two, maybe give some thought to being able to disable the adding of those default tags, so that if somebody does not want them on their implementation, they can turn them off.
D: Got it. Yeah, in my opinion there isn't anything very special or confidential in those tags, but you're right that we could have done a better job of documenting them. So maybe the entry in the Crossplane docs where it talks about managed resources could use some help: something like "these are the tags that will get added automatically, and to have them removed, feel free to open an issue to start a discussion". But currently, at least, there is no plan to remove them.
I: There's an open PR (and sorry, because I was driving... well, parked, not driving, and then trying to make it to this meeting; what a busy day). There is an open PR for RDS, I think on the RDS instance, to add restoring from a snapshot; I think it's technically from a backup that's coming from S3.
I
There are also DBInstance and DBCluster, which should have that functionality too, and my question is: what are our plans for feature parity when it comes to RDS? We have DBCluster and DBInstance, which are all really Aurora-related, and then the vanilla RDSInstance, which is just the standalone one. We should probably look at how to get feature parity across them.
I
So if you're going to do a PR to support snapshots, or recovery, or restoring from a backup, it would be great to make sure we add that to DBInstance and DBCluster as well, either as a fast follow or included within the same work, since they're all very closely related resources.
D
Yeah, so if you look at those PRs, I believe that PR is also a contribution from the community, so you can add a comment asking them to add it, but in the end it's up to them to do that. When you see such disparities, feel free to open PRs; in some cases it's almost copy-paste code, because...
D
Yeah, it's very similar. So when you see that, feel free to open a PR, okay, and we can add those, yeah.
I
And then my next one is, and Christopher, I think you already answered the question on log groups for CloudWatch: I know AWS doesn't have an ACK controller for it yet, and I don't think I saw it on their roadmap, but do we think that's something we want to have in provider-aws, for all the CloudWatch-type things like metric alarms and subscription filters?
D
Yeah, the high-level goal is covering everything AWS has, so we wouldn't filter out CloudWatch or specific resources within it. But I just searched through the issues for CloudWatch and I didn't see a specific issue open for them, so feel free to open one, and maybe some folks will chime in to contribute.
I
That's good to know. And just from your perspective: we're looking to probably do a mix, with Crossplane and then utilizing KubeVela for other areas, to help with known fields. That's also probably how we're addressing things like VPC and security group IDs; we're going to have KubeVela pass them in, in the short term.
D
That provider will become the go-to place for the CRDs that are missing in the native provider, which could be used in such cases.
A
Yeah, it's definitely good to have that option for increased coverage in the Terrajet-generated provider-aws and all those options, and then feedback on whether they're covering the scenarios that are important is always really good, yeah.
A
All right, so we've got a few minutes left here, and I think I've seen more PRs being added here. Let's try to approach these with the context of whether they're being added as "hey, we want some eyes on these," or whether there are specific topics to go into on them.
H
The first one is from me, and it's more than "have an eye on it," because we need it. We have a transit gateway in place, and if you are not able to use private NAT gateways... We have an internet gateway deployed in all AWS accounts, and then a customer of ours can go through the internet gateway and not go through our transit gateway. This fixes it so that we can use private NAT gateways.
H
The reason behind it is that we set up all of our EKS node groups in the secondary CIDR ranges, and we need the NAT gateways to go through the public networks and then through the transit gateways. So it would be really cool and helpful for us if someone has a look at it and we can get it merged, so that we have private NAT gateways; then you won't need internet gateways and public IP addresses anymore for NAT gateways.
A
Yeah, right on, Christopher. And, Aaron or whoever's next on the call: is that something that we'll get into our review pass coming up here soon?
H
Yes. The next one is: we've implemented assume role for the ProviderConfig, so that we can use provider configs across all of our AWS accounts, because we have, I don't know, more than 100, and we need the ability to manage more than one AWS account with injected identity. And so we added the PR here.
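For context, an assume-role-capable ProviderConfig like the one this PR proposes could look roughly like the following. This is a hypothetical sketch: the `assumeRoleARN` field name, API version, and role names are assumptions here, and the actual schema is whatever the PR defines. The idea is one ProviderConfig per target account, each reusing the single injected identity:

```yaml
# Hypothetical sketch; field names are assumptions, not the merged schema.
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: account-a
spec:
  assumeRoleARN: arn:aws:iam::111111111111:role/crossplane-admin
  credentials:
    source: InjectedIdentity
---
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: account-b
spec:
  assumeRoleARN: arn:aws:iam::222222222222:role/crossplane-admin
  credentials:
    source: InjectedIdentity
```

Each managed resource then selects its account via `spec.providerConfigRef.name`, which is how one controller identity could fan out across 100+ accounts.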
F
If you want to request me on that one, I wouldn't mind taking a look at it. I've been looking at some of the multiple-IRSA kinds of situations, so I'd love to check this out, also in relation to the potential partitioning of providers, and see what the experience is like and whether it would satisfy some of those use cases.
A
All right, right on. All right, Daniel: you've been on the call since the beginning, so thanks for your patience here as well.
G
No problem. This first one is mainly just updating some old packages, since the Go version was like 1.13. We've had some issues with the new 1.17 adding some extra comments on top of the file, which fails the check. That's purely all this one does, along with updating some packages and whatnot that were out of date.
A
And Daniel, something for you real quick. First of all, I think I already said this, but I love your handle; cracks me up every time. I'm sure you hear that all the time. But for this one: you're a maintainer on this DigitalOcean one, right? You're kind of taking some ownership and driving it? ("I am, yes.")
A
("I work at DigitalOcean.") Perfect, awesome, man. Yeah, so we can also have a bit of a talk about how to accelerate you, so that you're not blocked on things, because there may be other folks at DigitalOcean, or more folks in the community, who are interested in this. You know, the Crossplane contributions there are in their earlier days, and we're happy to streamline things, so we can add more folks on to get review permissions, and you can move in a more autonomous fashion as you're owning and driving it. So we can talk about that, and find ways to accelerate your efforts here if you're, like, blocked on PRs.
F
Right now, that would be awesome. I have to drop here, but I just wanted to say: I have totally dropped the ball on reviewing these, so I'd love to get some more folks in there to help out, because I definitely hate being a blocker on that. So, yeah.
G
I've had to kind of drop it for the past month, since we've hit some deployment deadlines and whatnot. So, but yeah.
G
No problem. One of these is just adding the ability to spin up Kubernetes clusters in DigitalOcean; the other one is adding the ability to spin up any variation of the database clusters that we provide.
A
Yeah, and also, Daniel, I was really excited to first see the DigitalOcean provider when the effort was started a little while ago, and adding these useful resources is really cool too. This is becoming a more and more useful provider, and it's giving more people access to the services on DigitalOcean, which is really, really cool. So yeah, it's definitely good to see this.
A
How are you feeling with the patterns so far, Daniel? Do you feel like you've got a good grasp on adding features and keeping them consistent with the API patterns and coding practices in Crossplane, or do you feel like you need some more support from the team on that? How are you feeling?
G
Yeah, they're pretty good. It's pretty simple to implement, since we have a pretty robust Golang API for DigitalOcean; it's essentially a one-for-one copy of the DigitalOcean Golang API. So it's a lot of copy-paste, but it works.
A
Awesome, yeah. And that's another thing: if you're feeling good about the patterns here, that's even more reason to accelerate your efforts on that provider and unblock things for you, man.
G
Sounds good. I'll also be a little bit more pushy about getting other...
A
...people in there. Awesome, Daniel, yeah, love to see that stuff. Steve, you've got a couple of PRs here to round us out for the day.
J
And then, lastly: in the v2 release the endpoints changed, and there's a workaround, but it's pretty painful. So there's a bunch of requests around updating the SDK, because there's new endpoint support in the newer version.
J
So that's why I submitted this one, and that's also why I wanted to push it; I submitted, across the ACK generator and all the AWS PRs, upgrades to the newest version of the SDK, because it contains a lot of endpoint fixes, which I think can make the endpoint resolvers easier and kind of unblock us. Because we don't want to use the workarounds, we aren't running the latest Crossplane in GovCloud or other environments where we need to set endpoints.
J
So that's why I was very anxious to get... I guess at the last ACK meeting they agreed to accept the newer version, which is great, but that's why I was anxious to get these in: to kind of fix GovCloud, so you'd be able to use the FIPS endpoints.
A
Nice. And then, Steve, this one is just a newer version of v2, so we already took the big hit and got the v2 SDK integrated, with some progressions from that (there's refactoring in that), and this is a much more scoped, smaller one, because it's just incrementing the version of the v2 SDK. Exactly.
J
Yeah, and with the newer version you can see that they have newer endpoint methods, and they also support setting, I think, FIPS endpoints using either an environment variable or a boolean that you pass in the config. But you can't use that unless you're on the newest version of the SDKs.
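As an aside on what "setting endpoints" means here: GovCloud and FIPS deployments need service URLs resolved to non-default hostnames. A stdlib-only Go sketch of the hostname scheme involved; this is not the aws-sdk-go-v2 resolver API, just an illustration of the idea, and the function is mine:

```go
package main

import (
	"fmt"
	"strings"
)

// ResolveEndpoint sketches how an endpoint resolver might build a
// service URL. FIPS variants prefix the service with "-fips"
// (e.g. rds-fips.us-gov-west-1.amazonaws.com); the China partition
// uses a different domain suffix. Illustrative only.
func ResolveEndpoint(service, region string, useFIPS bool) string {
	host := service
	if useFIPS {
		host += "-fips"
	}
	suffix := "amazonaws.com" // GovCloud regions share this suffix
	if strings.HasPrefix(region, "cn-") {
		suffix = "amazonaws.com.cn"
	}
	return fmt.Sprintf("https://%s.%s.%s", host, region, suffix)
}

func main() {
	fmt.Println(ResolveEndpoint("rds", "us-gov-west-1", true))
}
```

The point of the SDK upgrade is that logic like this no longer has to live in per-provider workarounds; the newer SDK can be told which variant to use.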
J
Second piece: hopefully we'll be implementing those booleans to toggle the different endpoints.
J
It's actually that one, the internal endpoints v2 one, that contains the endpoint changes.
A
Which one is it here? Oh, this one.
A
Cool, all right, yeah. So those are definitely some interesting updates, I think, and it also sounds like they're addressing some scenarios that the general community probably has use for too, Steve, so thanks for driving this. And then I think it sounds like we're going to do a push soon, next week maybe, to go through some of these PRs by priority.
A
So it's good to have all of these surfaced here in the community meetings, and to kind of address those that have a more burning need. All right, so that's everything that was on the agenda here. I know we went a little over time, but definitely thank you to everybody for adding your comments and feedback here, and continue to drive the project as an entire community. Everyone's efforts are super appreciated.
A
So it's good to see everybody this week, and we'll follow up on a lot of these PRs and make more progress until we see each other again in two more weeks. My dog says hi also. See you all, everybody, bye!