From YouTube: 2019-08-20 Crossplane Community Meeting
A
Okay, the recording is started, and I think I need to share my screen. I don't think I did that yet; running a little behind today. Alright. So this is the August 20th, 2019 Crossplane community meeting, and the first note I want to make today is that a few of us have a hard stop today at 9:30 Pacific. So if the conversations are still going beyond 9:30, halfway through the hour, then I'll just go ahead and transfer host over to Dan, and Dan can continue driving it for the rest of the hour.
So let's go ahead and get started with a quick look at our roadmap. We published this, merged it into master, maybe a couple of weeks ago, and we are making progress on a fair amount of these right now. We've already accomplished a few of them, and the main focus for the next couple of weeks will be the individual cloud provider stacks: adding resource and network connectivity support for each one of those, and upgrading all the controllers in those cloud-provider-specific stacks to use the generic managed resource controller, such that at the end of the 0.3 milestone we will have each cloud provider stack in very good shape, demonstrating the best practices, the recommended way to write controllers, and the patterns for managing external resources from a Kubernetes cluster.
So what I expect, after the dust settles from implementing some of the specifics of those cloud provider stacks, is that we will definitely need to update all of our user guides, examples, and so on, for how to use those and how to be successful with them if you want to take them for a test spin. We definitely want to pay attention to the quality of that and make sure that the user experience here is up to par for people to try out the project and successfully use Crossplane and all the various clouds it supports. I think the project view here is pretty up to date, so let's just do a quick look at what we have in progress, and then we can move on past that, so to speak, through it all here.
A
Some
you
don't
have
to
take
turns
so
Marcus
is
going
to
be
focusing
on
the
implementation
for
all
of
the
features
that
we
want
to
add
to
the
stack
manager
in
the
0.3
time
frame.
So
the
security,
namespace
isolation,
work
and
also
being
able
to
add,
take
metadata
from
the
packages
the
stack
packages
and
add
those
as
annotations
to
CR,
DS,
so
higher-level
user
interfaces
or
tooling.
It
can
just
examine
the
CR
DS
to
get
all
the
configuration
and
metadata
information
they
may
might
want
for
each
one
of
the
CR
DS.
Excuse me. Javad is focusing on all things AWS, so Javad is going to own the AWS stack, and he's working on adding support for networking types like VPCs, subnets, security groups, and all that sort of stuff, to help automate the process of getting an application up and running and securely connected to the resources that it needs to consume.
A
Norfolk
is
doing
the
same
thing
for
G,
CP
and
Dan
will
be
doing
a
similar
effort
for
Asher.
Once
we
have
rolled
out
all
of
the
cloud
provider
specific
functionality
into
their
own
repos
Daniel
Susskind
is
working
on
an
example
stack
for
WordPress,
so
an
application-level
stack,
and
we
know
what
that
looks
like
you
know,
package
the
logic,
controller
and
see
RDS,
etc.
That will be able to deploy and manage an application and add that functionality to the Crossplane control plane. And then, let's see, Phil is working on getting a start on some of those user guides, blog posts, the marketing splash, and so on, for the 0.3 release. So that's everything that's in progress right now, and as we talked about before, that's a big focus on these three high-level epics, or items, for the 0.3 milestone. So I expect a lot of rapid progress in the next two weeks here; everybody's pretty focused and heads down and going to make a lot of progress. Alright, so that's the project update on 0.3 progress, and then we can move ahead now to the community topics section. So we have Glyn, and I think Mark joined as well. Yes, I see both Glyn and Mark. So, Glyn, do you want to go ahead and introduce this topic, and then we can...
B
Sure, yeah. Mark and I work on a project, it's actually building a FaaS, but that's kind of irrelevant; it's a collection of CRDs and controllers, and so it maps onto some interesting work to package these as stacks and extensions to Crossplane. I'm relatively new to Crossplane, so excuse me if I use the wrong words. What we've been wondering about is whether there's a requirement for stacks to support image relocation, so that you can ship a stack pointing at registries out on the internet, and then relocate those images to a private registry and tell the stack to map the images appropriately. Does that ring any bells with anyone? Has a requirement like this been looked at?

A
Well, it sounds plausible, oh yeah.
A
So yeah, thank you for bringing that up, Glyn. That type of functionality sounds like something that's desirable in the Crossplane project as well, and it sounds like the level of thinking that we've done on it is probably a bit less mature than the thinking that you've done. Basically, what I've been thinking about so far was pretty naive, I would say, in the sense of how Kubernetes pods work, where you can specify where to pull the image from and then provide something like imagePullSecrets to supply the credentials necessary to access that private registry. That's really the level of thinking I had done previously on this, so with the term image relocation, where you have images being mapped to a private registry, I'm not all that familiar with the depth of this topic.
B
So suppose we ship some software, and it has a dozen images. Some may be source images owned by other projects, and there'd be one shipped by this project, and so on. Basically, you ship some declarations that refer to these images, and then the customer wants to run those images out of their own private registry. So what they do is run a tool that we provide to relocate the images. What that does is it rewrites the image references and then pushes the images to their new locations in the private registry, and you end up with the relocation mapping: basically a map from the old image references to the new image references. And then you somehow have to get the software to accept that.
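The relocation flow described above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual pivotal/image-relocation API; the function names and the simple prefix-rewriting scheme are assumptions.

```python
# Illustrative sketch of the relocation step: given the image references a
# product ships with and a target private registry, build a relocation
# mapping (old reference -> new reference) and rewrite shipped declarations
# to use it. The real tool also pushes the images themselves; this sketch
# only covers the reference rewriting. All names here are hypothetical.

def build_relocation_mapping(image_refs, private_registry):
    """Map each source image reference to a path under the private registry."""
    mapping = {}
    for ref in image_refs:
        # Drop the original registry host, keep the repository path and tag,
        # and rehome the reference under the private registry.
        _, _, repo_and_tag = ref.partition("/")
        mapping[ref] = f"{private_registry}/{repo_and_tag}"
    return mapping

def rewrite_references(manifest_text, mapping):
    """Rewrite image references in shipped declarations using the mapping."""
    for old, new in mapping.items():
        manifest_text = manifest_text.replace(old, new)
    return manifest_text
```

In the air-gapped variant discussed below, the images would come from a shipped tarball rather than the public registries, but the mapping step would look the same.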
B
So essentially, you can then install the software and it will use the relocated images, pulling them from the private registry. The related aspect of this is: if you want to ship into a truly air-gapped environment, then the customer can't do the relocation from the internet. They can't pull from the public registries, because they probably don't have access across their network. So in that case you have an approach where you take the images and pack them all into a tarball, and then ship that with your product; then they can relocate from the tarball to the private registry and, again, apply the relocation mapping as they install the product. I've got more write-ups and examples I can post; I'll link those into the minutes.
A
Oh, I'm sorry. Yeah, that would be very useful to have, you know, some more information to read about that functionality. So it sounds like the relocation mapping, the information about that mapping, is that stored, are you saying, at the Kubernetes cluster level, as a ConfigMap or some other type of data? That it's not stored on the registry server or Docker distribution server side; it's more local, that information, local to the cluster itself?
B
That's the idea, yes.

A
Okay, I see. In the air-gapped environments, have you found that folks will pretty much run a container image registry, or a Docker distribution type of thing, on-premises there, so that it can be accessed over the internal network and not have to go out to the public Internet at all, because it's air gapped?

B
Yeah, precisely.

A
Got it, okay, that makes sense. Yeah, I think that for me, you know, maybe other folks already understand this all very well.
There are definitely folks on here that are faster at this than I am, but I'd like a better understanding of the complexity here, of what needs drive the necessity for this extra machinery to meet the enterprise requirements. I would love to do some more reading on that. So is that functionality part of the CNAB spec, or how does it come into play there?
B
Yes, interesting. We developed the machinery for image relocation before we got anywhere near CNAB, so we had a version where we had a CLI that could do image relocation of a distribution of our software. Then we pulled that code out into a separate repo and did some more experiments, and then we got involved in CNAB and plumbed the code into their reference implementation, which is called Duffle. So they're using the same approach. But essentially what I've been thinking these last few days is that biting off CNAB is probably a big jump for Crossplane, whereas just fixing the image relocation requirements is a lot simpler, because the heavy lifting is already done in a separate open-source repo. So it's just a question of taking the dependency, plumbing it in, and then dealing with the mapping.
A
I guess, Glyn, is Duffle a client-side-only thing, or does it have a server-side component to it?

B
It's client-side only.

A
Got it. And where is the implementation of the image relocation that you said is in a new repo? Is that part of the Duffle reference implementation, or is it in a different repo?

B
No.
B
It's in the pivotal org: image-relocation.

A
Got it. That sounds great, yeah. Any of those extra links, you know, for background information and some more of that detail, would be definitely really useful for me to better understand this space, and sure, maybe for other folks as well. Does anybody else want to add in some comments here, or have some thoughts about the image relocation?
C
How would this differ from, say, if a stack was... I'm not sure if stack is the right word here... if you had a pod that had an image, and you just changed the image, the repository that the image was referencing? What's the advantage of this over just dynamically changing the pod image?
B
It's really when you're managing a number of images from different sources. Our project was built on top of Knative, lots of the Kubernetes images, and there are some other dependencies, so we end up with, I don't know, a dozen or so images. We needed a uniform way to map them across, so having individual parameters and tweaking them wasn't really an option, and the set of images could change; it wasn't under our control.
A
It sounds like you provide some, I guess, scalability and automation to this whole need for private registries. Yeah, so I think it's pretty interesting, Glyn. I definitely appreciate you bringing this to the community meeting today to share this idea with us. I would love to do some more reading; I put the image-relocation repo link into the minutes already, and if there are other links that have more information, you can add those to the minutes and then I'll accept those changes. I would love to do some more research into this to understand it better, but I definitely like the approach here, where it's providing some scalability for this problem that probably a lot of enterprises run into one way or another. So I definitely appreciate the information here.
B
The bullet there, where it says the registry work for stacks is under way, was just a kind of side point, because I noticed the stacks design doc talks about storing a stack package as a tar file in an OCI registry, and I started to wonder, you know, is it using an image manifest? Is it just a layer, or what's going on? I couldn't find the code, right.
A
So the assumption so far is that you'd be able to store a stack in any OCI-compliant registry, like a Docker distribution server, and we're hoping we'll have a publicly accessible registry that can store all different sorts of stacks, so that people can publish their stacks and share them with other people. There's nothing that's special about that right now; we've been testing with Docker Hub so far, so to me the regular Docker distribution OCI registry is pretty much the go-to there. A regular container image format could go in there and be the packaging standard for the components that comprise the stack: the CRDs it has, the controllers it wants to run, the metadata about it, and all that sort of stuff.
B
Okay. I'm particularly interested in the representation of a stack in the registry, because we've done some similar work with Docker in the CNAB world, and they use an image manifest so that you can traverse the dependency between a bundle, in that case, or a stack in your case, and the images it needs, so that you can then GC images that aren't used, for example.
A
I see, I see what you're saying there. Yeah, I've used image manifests in other projects before, to help with multiple architecture support as well. It's really useful to be able to say, hey, I just want this image, and then the manifest tool and the manifest-list client-side stuff will automatically remap that to, okay, you need the ARM image for the particular architecture you're running on. That's the exposure I'd had to manifests so far; I haven't really done much with them for dependency fulfillment.
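The multi-architecture behavior described here can be illustrated with a small sketch. The dictionary shape loosely follows the OCI image index format, but the function is hypothetical, not a real registry client.

```python
# Sketch of manifest-list (image index) resolution: a client asks for one
# image name, and the index maps the client's platform to a concrete image
# digest. Field names loosely follow the OCI image index specification.

def select_manifest(index, os_name, architecture):
    """Return the digest of the entry matching the requesting platform."""
    for entry in index["manifests"]:
        platform = entry["platform"]
        if platform["os"] == os_name and platform["architecture"] == architecture:
            return entry["digest"]
    raise LookupError(f"no manifest for {os_name}/{architecture}")
```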
A
Alright, cool. So let's go ahead and move on to the next topic. We had already kind of mentioned this: the Crossplane repo has been upgraded to use kubebuilder v2, controller-runtime, and controller-tools v2. So we are now at a point where we're starting to break out the code into other repos, and something that Dan and I had discussed was going to be important for that effort is to maintain the history.
So when you're looking, for debugging or wanting to understand more about, say, the GCP stack, we'll have all that history from the development generations it went through to get to where it is now. So you can understand why changes were made and have more context around how the code got to where it is. Dan and I thought that was really useful, and that's the method we're using to split out the cloud providers into their own repositories.
D
For sure. So the main thing I wanted to bring up here is that this morning we merged the PR that converted all managed kinds to using strongly-typed resource classes. Previously we had a generic resource class with a parameters field, which was basically just an arbitrary map of strings that provided configuration details for any managed service, and then there'd be a function accompanying each managed kind that took those parameters and parsed out the ones relevant to that specific kind. The issue with that is, when you created a generic resource class, you weren't able to know whether you had provided appropriate values in terms of the types of the parameters. You also didn't know what fields were required, and that sort of thing. Basically, there was no schema validation, so that's one of the benefits of moving to strongly-typed resource classes.
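The difference Dan describes can be sketched like this. Crossplane's actual types are Go structs validated by CRD schemas; this Python sketch, with invented field names, only illustrates why a typed class catches errors that a generic string map cannot.

```python
# Contrast between the old generic resource class (arbitrary map of strings,
# no validation) and a strongly-typed class that checks required fields and
# value types up front. All names here are invented for illustration.

from dataclasses import dataclass

# Old pattern: nothing stops a typo'd key or a non-numeric size.
generic_class = {"parameters": {"version": "9.6", "sizeGB": "ten"}}

@dataclass
class DatabaseInstanceClass:
    """Hypothetical strongly-typed class for a managed database kind."""
    version: str
    size_gb: int

    @classmethod
    def from_parameters(cls, params):
        missing = {"version", "sizeGB"} - params.keys()
        if missing:
            raise ValueError(f"missing required fields: {sorted(missing)}")
        # int() fails loudly on values like "ten" that the generic map accepts.
        return cls(version=params["version"], size_gb=int(params["sizeGB"]))
```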
So we had originally thought about doing that after splitting out into the different repos, but because it was a pretty mechanical change, and it would just be nice to go ahead and get it done and not track that work separately, I went ahead and implemented it yesterday. So that is all ready to go. The one thing I really wanted to bring up here, though, is that for a very, very brief time, which I'm working on right now, the examples will not be functioning correctly in the Crossplane repo on the master branch. I'm updating all of those from the generic resource class to the strongly-typed resource class pattern; those should be done by the end of today or sometime tomorrow, and documentation will be updated as well. So that's the main thing to call out, in case anyone was testing anything on master.
A
Awesome, Dan, thank you. I was definitely very excited that you were able to execute on that very quickly yesterday, to just go ahead and implement the strongly-typed classes for all the cloud providers. So now, as we're breaking out the cloud providers into their own repos, that work will be included there, and that's one less thing that each of the individual contributors to those repos will have to worry about. So that's a really good start.
A
Okay, so I think we had one more pull request that I wanted to bring up here, that I hadn't gotten a chance to look at yet, but it looks really interesting. Javad opened it yesterday, and I wanted to look at it, but I also wanted to call attention to it to get other people's feedback if they can. Javad, do you want to talk a little bit about this pull request, about the need for mapping the AMIs to the Kubernetes EKS versions?

C
Yes.
Initially we had some AMIs which were compatible with the four different regions that EKS supported, and specifically with the Kubernetes version that was the default version of EKS at the time. But then what happened is, the default version of EKS was changed from 1.12 to 1.13, and potentially it's going to change again later as well, and those AMIs are version-specific.
The AMIs are specific to the cluster version, so now, every time you want to bring up a worker node, it looks up, using an EC2 client (the AWS Go client), what the proper AMI image is for that cluster version in that region, and then dynamically sets up and kicks off an instance of that worker node with that AMI, rather than using the hard-coded values in the code. That's basically the summary of the change.
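The lookup Javad describes might be sketched as follows. The real change uses the AWS EC2 Go client; here `describe_images` is a stand-in for an EC2 DescribeImages call, and the AMI name pattern is an assumption based on Amazon's EKS-optimized AMI naming convention.

```python
# Sketch of resolving an EKS worker AMI dynamically instead of hard-coding
# one per region: filter EC2 images by the cluster's Kubernetes version and
# take the newest match. `describe_images` stands in for an EC2
# DescribeImages call and is injected so the logic is testable.

def ami_name_filter(k8s_version):
    """Name pattern for EKS-optimized worker AMIs (assumed convention)."""
    return f"amazon-eks-node-{k8s_version}-v*"

def resolve_worker_ami(describe_images, k8s_version, region):
    """Return the newest AMI ID matching the cluster version in a region."""
    images = describe_images(ami_name_filter(k8s_version), region)
    if not images:
        raise LookupError(f"no worker AMI for Kubernetes {k8s_version} in {region}")
    # CreationDate is ISO-8601, so the lexicographic max is the newest image.
    return max(images, key=lambda img: img["CreationDate"])["ImageId"]
```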
A
Awesome. Thank you, Javad. I will try and take a look at that today as well, but anybody else who wants to dig into that is more than welcome; feedback is welcome as well. I made Dan the host now, because I need to peel off this meeting here, and I think we're almost out of topics that we had in the agenda, but I'll let Dan drive the rest of it.
C
And then, when you're saying relocation, and this is just out of curiosity, you didn't say remapping; you mentioned relocation. Is there a reason? Are you talking about dynamic relocation while the pod is up and running, or while Kubernetes is up and running, you know, redirecting? Or just, instead of reading from Docker Hub, reading from somewhere else?
B
So, before the pod comes up, it gets the image, copies it to the private registry, and then after that it goes from there.
C
Yeah, okay, got it. That's a very neat idea; I like it. Back in my previous job, we had that situation, where we actually had copied all the images directly to the ECR registry, but then we had to do some modification to let the worker nodes know that they had the right images.