From YouTube: CNCF end user technology radar, June 2021
A
So let's go from the beginning. My name is Cheryl Hung and I lead the CNCF End User Community. You can find me on the internet at @oicheryl. The CNCF End User Community is a group of more than 140 companies, featuring some of the biggest and smallest companies out there, who are all using cloud native and Kubernetes. The goal of the CNCF Technology Radar is to find out the ground truth: what is the reality of cloud native as it looks today? The CNCF Technology Radar typically looks like this.
A
We have three rings: Adopt, Trial and Assess. We're going to look at a specific topic, which the radar team has chosen, look at a few different tools and frameworks within that topic, and place them into each of these three levels. Adopt means a clear recommendation: many companies and many teams have used it successfully.
A
Trial means we've used it with success and recommend a closer look. Assess means we've tried it out and it seems promising; you should take a closer look at it when you find this need. I'd like to welcome members of our radar team now and ask them to introduce themselves. We're just going to go left to right, so Gabe, please go ahead.
B
Hi there, I'm Gabe Jackson. I work at Mattermost on the cloud platform team. Even though Mattermost has a history of developing a communications platform that's on-premise focused, recently we've also been delivering it as a service in the cloud, so that's what my team is responsible for.
C
A colleague, Simone, is unfortunately not able to join us today, but he was part of the radar team, contributing with content, ideas and his expertise.
D
Hello everyone, my name is Raja Rajan; people call me Rajan. I'm a VP at Fidelity Investments. My focus is mainly cloud platform architecture. We are responsible for building a next-gen cloud native platform for Fidelity, where application teams and development teams reap the benefits of the latest cloud native technologies without putting much effort into it; we try to make it easy for them. Currently we are managing somewhere in the range of 250 to 300 clusters. So that's me.
E
Yeah, thanks Cheryl. I'm Niraj Amin; I also work at Fidelity, leading the cloud platforms teams, primarily focused on the CSPs and the journey that Fidelity has taken to migrate applications to the cloud, specifically on the Kubernetes platforms that we're building out and architecting.
D
Yeah, I can start with it. I think this is one topic where it really depends on whether you're a small team in a small organization, or you have a lot of application developers in a large organization. Those sorts of things really affect your choice of how you would like to manage it. This is one of the areas where there are several options available, but there is no clear choice; people typically ask which one is the better one, right? So this is one of the topics where, based on the results, you will really find it useful: if you're doing something today in a certain way to manage clusters, you'd get the reassurance that others are also doing it that way, and if not, you get to know the reasons why that's not the case. It's a really interesting and important topic, because everything starts with cluster creation and provisioning.
B
Yeah, I can jump in regarding the topic itself. I think one of the things that maybe caught most of us off guard, or at least me, was that for some of the other things we had discussed as possible topics, there was a clear-cut top three or top five set of options; but when this topic was brought up, there was a lot of organic conversation immediately. A lot of people doing a lot of different things, so it was the perfect thing to dive into. We had the same experience at Mattermost: we use a bunch of different tools to do different things, and depending on our needs at the time we'll pick a totally different tool set, so that all ties into the topic.
C
There's also the fact that everyone starts with one cluster, then expands, and the journey continues, and you grow your environment into a large environment. There is no clear path out there, and this is an interesting journey to document for teams starting out: okay, I'm here now, I will probably end up with a large environment; what possibilities are there, what are others doing, and what can I learn from that, to avoid the pain that a lot of us have experienced, and maybe no longer have to experience?
E
I'll just add, and I kind of agree with Federico: I think the scalability concerns around trying to figure this out made picking this topic good. Clusters themselves are becoming more like cattle, more and more, and as we grow, especially at Fidelity as more teams adopt and move over to Kubernetes, I think for a platform team that's essentially trying to manage this, this topic is pretty important.
A
Fantastic, okay. So after picking the topic, we basically went out and asked the end user community: what are your thoughts on this? What are you doing right now? What things do you not use, or have you moved away from? And just to give you an idea of the different kinds of companies that responded:
A
Most companies fell into some generic software industries, which can cover a lot of different things, but I think there was a slight bias towards the larger companies, which perhaps makes sense: if you're talking about multi-cluster management, you're more likely to need it if you have a larger company and more complex infrastructure.
B
I can kick that off. The funny thing is, I didn't know what to expect. On one hand, I kind of expected that there'd be a lot of varying answers. I assumed that as the number of employees at the organizations skewed towards the higher end, they would have a clearer tool set and infrastructure stack, but it turned out that that wasn't necessarily the case.
B
I was expecting maybe there would be some hidden gems, some way of doing it that maybe we weren't expecting. When this topic initially came up, I was in the camp of thinking that perhaps Mattermost was doing it in a unique and maybe not completely optimal way, and I was definitely pleasantly surprised to see that that wasn't necessarily the case: a lot of people were using a lot of different tools, and this is definitely an interesting problem.
D
Yeah, I want to add to that. Our journey started about two years ago; I still remember creating the first cluster. We started with one cluster, and now we are at 250 to 300 clusters. In that journey, many times we have felt the same thing.
D
Are we doing things in the right way? Because we had to do some custom things, do things in a slightly different way, especially around scaling. I clearly remember that for the first six months we were only at ten clusters or so, doing a lot of experiments, making sure the stability aspects and all those things were there, and then we scaled quickly. At that time we had to do things in a slightly different way.
D
So many times we did feel the same thing: are we on the right track? Is it okay to do things that way? But looking at the results, I think it's definitely reassuring.
C
Yes, sorry. What I expected to learn, or was curious about, was this: since the end user community is spread over a variety of industries, with all different requirements, different policies, different rule sets, would we see a pattern emerging? If you're in this industry you manage your environments like this, and if you're in that industry you manage them like that. So I was expecting perhaps to discover some patterns there, or also, as Gabe said, hidden gems that are not really known, but that it would be good to give a larger platform, so they become known in the community.
E
And I'll add the same. Going into this, I thought there would be some conformity across some of these toolings. That was my expectation or opinion going in, so it was interesting to see the results.
A
Actually, I'll follow up with a question of my own too. Gabe, since you mentioned hidden gems: why do you think there aren't really those hidden gems? Why do you think everybody has deployed it and set it up in their own unique ways?
B
It's a good question; I'm just going with my gut on this one. I think it's just because it's a hard problem. As was discussed earlier, with the Kubernetes platform we've solved the idea of running apps and services as cattle, but we're now at the point where the clusters themselves need to go through that same uplift. I think it was just something that wasn't initially tackled in the same way as the core platform was, and in certain ways it's even more complex than the Kubernetes platform itself.
E
Yeah, I'll start on this one. I think it kind of just evolved; I don't think any of us were expecting to end up with two radars at the end of the day.
E
But as we went through the questions, and the radar itself, I think we started figuring out that two different radars would be required: one to handle the infrastructure piece, the cluster deployment aspect of it, and another one, tooling-wise, to answer what you do almost on day two, what you build on top once the infrastructure provisioning is done: the day-two operations of the cluster itself, outside of the infrastructure.
C
Yeah, we can take that. What we have seen is that organizations with a smaller number of clusters depend on the regular installers like kops, kubeadm and others. When the number of clusters grows, there's a tendency to move away from these installers and to use managed Kubernetes services. For organizations in the public cloud, that would be the offerings from the public cloud providers; organizations with their own data centers, not being in the cloud, even those tend to use packaged Kubernetes offerings.
C
That would be a managed Kubernetes offering that resembles the ones you would expect and see from the public cloud. So the pattern there is: whether you're in the cloud or in your own data centers, the more clusters you manage, the stronger the tendency to move over to managed Kubernetes offerings.
C
Another aspect that I saw in these results, compared to the other radars, is that we have the Adopt ring pretty much filled, while the other rings are a little bit empty compared to the other radars. During our discussions we said that if you're operating Kubernetes, and if you're in production with Kubernetes, you have found your tool set, and you will stick to it and continue to work with it, rather than experimenting a lot and switching a lot of these things out.
D
Yeah, and just to add to that: irrespective of whether it is private or public, I think the key word there is managed. As the number of clusters increases, the complexity of managing control plane components, etcd and stuff like that is going to get tricky. That's one aspect. But the other aspect, at least from the Fidelity side: what we looked at was where we wanted to spend that time.
D
Instead of spending it on that, we'd rather spend it on other stuff, where we add a lot more features that will benefit the application teams, things that will make it really easy for them to consume the technology. So we chose a strategy to focus that time on those things, so that things get better and easier for the app teams using the technology.
B
Yeah, for sure. Sorry, I was just going to say, regarding the radar itself and the fact that there are a lot of tools in Adopt: we actually really challenged ourselves on those assumptions. Do these all need to be in Adopt, and why are there so many? I think it actually is a good way to visualize just how tricky this problem still is, this cluster management issue. Over time we'll see things change, but right now you can see that it was almost, in a way, a forced adoption, where you have all these tools and each helps you in a very specific way, sometimes in a couple of ways.
E
Yeah, exactly. And I think both of these radars have custom in-house tools in the Adopt section. I think that ties back to our earlier point: there isn't a clear-cut winner yet, so folks are trying to bridge that gap where possible or where needed.
C
Yeah, and to give a little bit more detail on that: in the answers we have seen, even for organizations choosing the managed Kubernetes offerings, there was practically a hundred percent overlap with custom in-house tools. So while you're still using a managed service and trying to get the benefits out of it, it's not enough. The managed service only provides so much; you need to complement it with custom in-house tools that help you do the work and the setup that is needed for your own organization.
A
Awesome, this is really great commentary. I just want to move on now to the specific themes that we pulled out of this and look into those in a little bit more detail.
C
Yeah, this can be summarized as Gabe said: while there are these tools, there is no clear winner, and you need a combination of tools to do the setup that is required for your environment. As I said a couple of minutes ago about the managed Kubernetes services: they are not giving you the silver bullet. You need to complement them with extra tools, or with extra custom in-house developed tools, to overcome the lacking features, the lacking possibilities of what is out there. The other thing is also, since there are so many tools required for this, it feels like you need to come up with your own glue to put these things together, so that they stick together and work together.
B
Yeah, I definitely agree with that, and I think going back to the idea of the hidden gem, it basically ties directly to this point. We're all sort of hoping maybe there's a silver bullet out there, or something that's at least a little bit closer to that, that we could all start using. I don't think we necessarily saw that pop up, but I definitely agree that one of the common themes was that glue; as was mentioned, that's a really good point.
E
I mean, a lot of this, and this is kind of where I think it matters, maybe depends on the sector or the industry you're in, or the company, which has certain rule sets, etcetera. At Fidelity we have lots of regulations and security concerns, so part of the glue is to handle some of these. I know different companies have different hierarchies of how they set up accounts or subscriptions, etc. So all of that ties back to needing some custom tooling, or glue, to mesh a couple of tool kits together.
A
Cool, okay. Let's go on to the next topic, which you've discussed a little bit already: cluster management often requires custom-built in-house solutions. I'd like to know a little bit more about what those are.
D
I can probably start with that. Typically, when the problem statement is clearly defined, even though you start off with a number of tools, over a period of time you'll see clear winners. But in this case, I think the problem statement itself stretches a little bit here and there, depending on company policies and stuff like that. So I'll give you some examples.
D
For example, some companies might take an approach where the app teams actually go get the cluster and then manage it from there; they just go to the central team to get the cluster provisioned. Then you have the other set of teams that want the central team to manage the entire platform. At Fidelity, for example, that's the reason to have the custom in-house solutions. We took an approach where, instead of looking at clusters separately and the add-ons and features you put on top separately, we decided to look at it all as one platform. What I mean by that is: from an application team's or development team's standpoint, they look at one platform version, say Fidelity platform version 1.0, and behind the scenes that could be a Kubernetes 1.18 cluster, a specific set of versions of add-ons, a specific infrastructure setup and stuff like that. If you want to put all these things together, you sort of go down the GitOps route.
D
So in our case we came up with a custom solution where teams can just go and describe what they need in plain YAML files, and behind the scenes a lot of these tools work together to make that happen. That is one example. The other one is that we decided to take the infrastructure setup into account as well. For example, one of the tools that we built alongside the cluster provisioning does the infrastructure setup: it executes the CloudFormation templates and stuff like that. But the main point here is that the versioning is mapped: this particular version of the cluster provisioning works with this set of CloudFormation templates, the specific way you set up VPCs and so on. Everything is version controlled.
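[For illustration, a minimal sketch of the kind of "describe what you need" YAML being talked about here. Fidelity's real schema is internal, so every name below, including the platform.example.com group and the ClusterRequest kind, is hypothetical.]

apiVersion: platform.example.com/v1   # hypothetical API group
kind: ClusterRequest                  # hypothetical kind
metadata:
  name: payments-prod
spec:
  platformVersion: "1.0"   # pins Kubernetes 1.18 plus a tested, versioned set of add-ons
  cloud: aws
  region: us-east-1
  size: medium             # maps to version-controlled CloudFormation/VPC templates behind the scenes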
D
I'll give you another example, which we have open sourced from our side. We wanted a tool where developers can simply plug in their Active Directory credentials; they have an identity, which is their Active Directory credentials, and simply by plugging it in, we want them to get access to the cluster. So in our case we have a tool called kconnect. Developers plug in their AD credentials, and behind the scenes it automatically figures out, based on their credentials, what clusters they have access to, across clouds. It automatically lists: hey, you have access to five clusters in AWS, two clusters in Azure and five clusters in Rancher. They just select one, and behind the scenes it wires up the connection, so they don't have to manage kubeconfig and stuff like that. This might be trivial if you have a five-member team, but when you're talking about 10,000 developers in an organization, even small things like this add significant value.
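[kconnect is open source at github.com/fidelity/kconnect. A sketch of the flow described above; exact flags vary by identity provider and cluster provider, so check the project README.]

# Discover the clusters your AD identity can reach, pick one,
# and let kconnect write the kubeconfig entry for you.
kconnect use eks

# Reconnect later to a previously used cluster from your history.
kconnect to my-cluster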
B
At Mattermost, we had to develop a tool to basically allow us to scale our custom clusters. For the majority of our workloads, we decided not to use a managed solution, and we used kops, which is fairly flexible. If you're not familiar, kops allows you to pick a public cloud and deploy a Kubernetes cluster there. But one of the things that is inherent with kops is that you just run these commands and manage it that way, and we needed to scale: we needed to build a bunch of clusters, upgrade them and manage them, possibly in parallel. So we developed this thing we call the cloud provisioner. It was our custom tool and our way around this problem of how we retain control: we can choose our Kubernetes version, and we have access to the master nodes, and some of these things you have to give up if you go with a managed solution. So how do we keep all of that?
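[A sketch of the kops lifecycle Gabe is referring to, with placeholder names; the Mattermost cloud provisioner wraps this kind of workflow so it can run for many clusters in parallel.]

# Define a cluster (the definition lands in the kops state store)
kops create cluster \
  --name=demo.k8s.example.com \
  --state=s3://example-kops-state \
  --zones=us-east-1a \
  --node-count=3

# Apply the definition to the cloud provider
kops update cluster --name=demo.k8s.example.com --state=s3://example-kops-state --yes

# Roll nodes later to pick up version or config changes
kops rolling-update cluster --name=demo.k8s.example.com --state=s3://example-kops-state --yes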
E
Sure, I think this goes back to the second radar. At a certain point in time, at least at Fidelity, once the infrastructure piece and the cluster are there, we augment the cluster, as part of the platform, with a bunch of stuff. First and foremost comes certain security and RBAC that we apply; then there are other operators that we've custom built; there are ingress controllers, in terms of how you get connected into a cluster, etc. All of these things, from a post-provisioning or day-two perspective on the cluster itself, beyond the infrastructure piece, we actually handle today with GitOps, using Flux. We have certain repos at Fidelity that manage, based off the versioning of platforms, the set of add-ons that Flux and Helm then push to a cluster to get it to the proper state. So this makes use of the Kubernetes...
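[A sketch of the GitOps pattern Niraj describes, using a Flux v2 HelmRelease; the chart and namespace names are placeholders. Flux reconciles what is in Git against the cluster, so pushing a new add-on version to the repo updates every cluster that tracks it.]

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: platform-addons        # placeholder namespace
spec:
  interval: 10m                     # how often Flux re-checks for drift
  chart:
    spec:
      chart: ingress-nginx
      version: "3.x"                # the platform version pins this
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
  values:
    controller:
      replicaCount: 2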
C
We use that too, perhaps partly, not directly with Argo or Flux. But what I also wanted to mention from the answers: what you see on the cluster provisioning side is organizations tending to use the managed Kubernetes offerings and then gluing them together with their custom in-house tools. This even carries over to the day-two services, the core services, the add-ons. A naked cluster cannot be used by any organization: there's observability that needs to be added on, RBAC, ingress, and instead of those being part of the managed Kubernetes offering, you will see that organizations use the project-provided Helm charts. But that is not enough; again, you glue those together with custom in-house tools, which in most cases are operators. So the same problem that exists for provisioning the cluster exists on the other side, for the core services and add-ons: they need to be combined, they need to be adapted to the requirements of the organizations using them, and there is no standard way of really doing it, unless you see the operator pattern becoming a standard; but there are so many operators, and they are also configured in so many different ways.
D
Maybe I'll start off with an example, an interesting example, and then I think we can follow up from there. We had a requirement where teams had to exec into pods in production. Typically that's not allowed, at least in our case, but we had some really interesting use cases which basically warranted it. It was a very difficult thing, because typically when you do an exec, the connection stays open forever and stuff like that; it's a tricky problem. So one of the ways we solved it is we have an operator in our platform, present in all the clusters, where teams can actually go and request an exec pass. They just submit a YAML file, of a kind like ExecPass, and they say: I need a few minutes of exec access. Behind the scenes an operator grants the exec access to the specific team and then takes it away after a certain number of minutes. Without an operator, achieving something like this would be really tricky. We did think about having an API first, where they'd call it, but the moment you have an API, you have the authentication and authorization that you need to take care of. With an operator we can easily tie into the RBAC model, the Kubernetes RBAC model: if somebody can submit a request for an exec pass, the ExecPass YAML file, then we know that Kubernetes has allowed them to create it; it has gone through the Kubernetes RBAC, so we can tie into that. I wanted to start off with an example so that it becomes much clearer. I don't know if you want to add something to it.
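[A hypothetical sketch of the custom resource flow Rajan describes; Fidelity's actual schema is not public, so the group, kind and fields below are illustrative only.]

apiVersion: platform.example.com/v1   # hypothetical API group
kind: ExecPass
metadata:
  name: debug-checkout
  namespace: checkout
spec:
  durationMinutes: 15   # the operator grants exec access, then revokes it after this window
  reason: "investigate stuck payment worker"

# Because this is an ordinary Kubernetes object, whether a user may create it
# is decided by the cluster's existing RBAC; no separate API or auth layer is needed.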
E
Yeah, I would just say that's one example. I think we have at least four or five operators that we've built in-house, and I think we've open-sourced one of them. Operators are kind of the standard way to automate and complete concise, targeted tasks within a cluster. From a community perspective, almost all the new things have operators associated with them; I've seen them for Kafka, etc. Operators really make it easy and mask some of the complexity that would normally appear: instead of having to maintain or manage an entire Kafka cluster, you can have an operator that constructs the cluster itself for you.
E
And I think, to Raja's earlier point, we've also used operators to facilitate some of the work within a cluster. We have tiers of authority within a cluster: there might be a business unit cluster administrator that may be able to do certain things, whereas a normal namespace admin cannot. So tying RBAC to operators is really easy, and with custom resources, etc., it's extremely extensible. I think that's really beneficial for us.
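[Continuing the hypothetical ExecPass sketch above: RBAC can gate who is allowed to create the custom resource itself, which is how tiers of authority map onto the operator.]

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: execpass-requester
  namespace: checkout
rules:
  - apiGroups: ["platform.example.com"]   # hypothetical group from the sketch above
    resources: ["execpasses"]
    verbs: ["create", "get", "list"]      # a namespace admin can request exec, not self-grant it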
A
Cool, and you made a good point there about custom or in-house operators versus the operators that are widely available. I don't think we distinguish them on the radar itself, but that's something to look out for as well.
C
Yeah, during our discussion it was mentioned that the operator is like the resident expert for that piece of software, and it lives in the cluster. You can talk to that resident expert, the operator, in the same way as you do all other things in Kubernetes: with the same declarative way of writing your deployments and your services, you control the operator, the expert. That makes it a common pattern, and that is something that also makes it easier to switch from one task to the other when you operate and manage environments at large scale.
D
One point I wanted to add: I just want to talk a little bit about the downside as well. It's not like it's an easy thing to do. There is a decent learning curve initially, and once you get past that, things become okay, but there are some things which are not straightforward. For example, the versioning: let's say you bring up the first version of your custom resource, and then you want to make some changes on top of it. The migration really depends on what sort of changes they are, but especially when you have a lot of clusters and people are already using one version of it, the migration from one custom resource version to another is doable, yes, but it's not very straightforward.
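[A sketch of the mechanics behind that: a CRD serving two versions. Only one version can be the storage version, and moving existing objects between versions typically needs a conversion webhook or a manual storage migration, which is the non-trivial part. Names are hypothetical.]

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: execpasses.platform.example.com   # hypothetical
spec:
  group: platform.example.com
  names:
    kind: ExecPass
    plural: execpasses
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: false
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    - name: v1
      served: true
      storage: true    # new objects are stored as v1; existing v1alpha1 objects must be migrated
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook  # the webhook translates objects between the two versions
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: execpass-conversion   # hypothetical service
          namespace: platform-system
          path: /convert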
D
So sometimes you might want to take a look at the complexity of it versus the benefit you get out of it. If you just have a handful of clusters, maybe there is a different way which might be easier for you; maybe you don't need an operator. But in our case, given the number of clusters we have and the number of developers, it was worth it.
A
Yeah, I definitely appreciate that. Let's look forward now to our last theme: the community eagerly awaits the readiness of Cluster API. Just tell us a little bit about Cluster API.
B
Yeah, so I think anyone that's had the privilege of managing dozens, hundreds or thousands of Kubernetes clusters has probably heard of Cluster API at this point. It's a really exciting project that's being developed, it's coming along fairly quickly, and I think a lot of the community is waiting for it to be ready. It's probably the closest thing we have to a possible silver bullet to handle a lot of the issues we run into now. There are kind of two main points about Cluster API that I think tell the story a little bit. The first one is that Cluster API approaches cluster management with more of a desired-state, Kubernetes-focused, cattle-focused architecture, which is awesome because that has worked out well for Kubernetes itself, so it seems like a good fit for the cluster management side of things.
B
I'm sure there'll still be edge cases that are a little rough, but this is probably our best chance at getting a really good, singular tool to help us out with this cluster management issue. And I think what's interesting is that even though Cluster API has progressed quite a bit, as was mentioned, at this point everyone that has to manage clusters has built all this glue; we use all these tools, and we had to go through a lot of pain and effort to get to the point where we're at now, where things are working and scaling in the ways that we need them to. So I think one of the tricky things for Cluster API is going to be that it needs to get to the threshold where it's finally good enough to make it worth our while to really put the time and effort into trialing it. It at least has to match all the stuff we've built so far. It's definitely getting there, and a lot of people are waiting for it to get to that point.
D
Yeah, I just wanted to add from Fidelity's side: we are multi-cloud, so we use clusters in different cloud providers, and on-prem as well. So today we have something custom which sort of mimics this; we've been using it for a couple of years, and we really think it helped us scale to the number of clusters we have. So we have seen the importance of it. I'll give an example.
D
For example, if you are creating clusters in AWS, you have a tool called eksctl, which is very specific to that cloud provider. But from a user standpoint, we wanted to give users a single, simple interface, where they just go and describe, in a very neutral way, what they'd like.
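[For contrast, this is roughly what the provider-specific route looks like with eksctl; names are placeholders. Every provider has its own dialect like this, which is exactly what a neutral front-end spec hides from users.]

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3

# eksctl create cluster -f cluster.yaml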
D
They want to describe it in a very neutral way; we process that, and behind the scenes the tools do the actual work, but we don't have to expose each of those specific tools to the users directly. In that way, I think Cluster API putting a spec in front is going to help a lot. Another good thing about putting a spec in front is that that's when the ecosystem really starts to evolve: the moment you have a spec, a lot of supporting tools can evolve around it. So personally, and from Fidelity's standpoint, that's why I think we have definitely been waiting for this.
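[A sketch of the neutral spec Cluster API puts in front, using the v1alpha3 API that was current around mid-2021; names are placeholders. The Cluster object itself is provider-neutral, and the infrastructureRef points at a provider-specific object such as an AWSCluster.]

apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: demo
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster          # swap for AzureCluster, vSphereCluster, etc.
    name: demo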
C
Yes, it will kind of abstract away the lower part that you might otherwise have to deal with, and let you reason about Kubernetes and the underlying base deployment in the same way as you reason about your applications and your services. That makes it a really good candidate for starting to treat your clusters as cattle, the way you treat your pods and applications as cattle.
A
Nice, okay! Well, I definitely look forward to it. I think it's something that is quite interesting and is going to make quite a big difference in the next year or two. I think that wraps up our themes for today, so as a last question, I'd just love to hear a line or two from each of you about how you found the process of creating this radar. Was it something that surprised you, that you found interesting? Federico, do you want to start?
C
Yeah, it was very interesting. You never know how this is done, and rather than just watching a making-of or behind-the-scenes documentary of the tech radar, being part of it gives you the first-hand experience. I enjoyed very much the conversations that we had around the entire radar. As you mentioned, this was a process of a couple of weeks; it's not just this webinar, and it's not just the survey that we sent out. It's preparing it, discussing the topic, and then combining the results together, which gives you the possibility to look over the fence, over your own fence, since you're normally busy with your day-to-day stuff, and see what is out there. So I can really recommend it to everyone that might be invited at some point: say yes. I enjoyed it a lot.
E
Yeah, I'll just add: it was fun for me as well, and I think it's fascinating, especially on certain topics, to see what your peers are doing. It kind of allows you to gauge whether you have a chance to course-correct or improve upon things. For me it was a big learning experience, so yeah, it was really fun.
B
Yeah, I completely agree with that. Especially with the topic we chose, it was really reassuring just to hear that this is complicated, and then you get to see the perspectives of all the other companies tackling this issue. It helps you keep a long-term mindset about things, while also approaching the short term: what are we doing day to day, what's the next best step? The amount of perspectives we've had from our conversations has really opened up my eyes quite a bit, and I think it's been an incredible opportunity. I think it's great that we get to share all of these conversations in the form of the radar itself.
D
Yeah, I definitely found it interesting. I personally believe in creating tech radars; I think it's super useful. Niraj mentioned course corrections: in our experience, at least over the last two, two and a half years, we have done course corrections at several points, and most of the time when we did that, it was when we spoke to another set of companies, at a conference or through some other events or something like that. So in that aspect I really found it interesting, and I personally believe this is going to be very, very useful for many, many teams out there.
A
Just as a reminder to finish us off: you can go back and look at previous radars at radar.cncf.io. You can also look in a little bit more detail at the different kinds of votes and the different kinds of companies that submitted answers to this radar.
A
We'd love for you to get involved as well. If you want to have a say about what the next topic is, you can go to the CNCF tech radar page; this is just a GitHub issue where people have been posting the kinds of topics they're interested in hearing about from the community, and you can upvote and downvote things. We would love for you to come and be part of one of these future radars, be part of the team, and you can find out more about that at cncf.io/enduser.