From YouTube: CNCF SIG App Delivery Meeting 2019-10-09
B: Okay, so welcome everyone to the SIG Application Delivery meeting, October 9th, 2019. I hope you have already opened the meeting notes, because today we have, I think, three projects to present to SIG Application Delivery, and I don't want to waste your time, so we can just begin our project rotation. I think the first project listed on the agenda is the Argo proposal, so please go ahead.
A: Hello, sorry, I woke up late and was a bit delayed. Okay, everyone, let's go ahead with the Argo presentation.
First, what is Argo? Argo is a set of Kubernetes-native tools for running and managing jobs and applications, basically, on Kubernetes. Our tagline from the beginning of the project has been to get stuff done with Kubernetes.
We saw at the time (this was about two years ago) that a lot of people were starting to experiment on Kubernetes, creating Kubernetes clusters. Of course, some people started running scalable microservices and so on right away, but a lot of other people were trying to figure out, like, what do we do with Kubernetes? And we thought that, in addition to running long-lived applications, which Kubernetes obviously supported from the beginning, things like batch jobs, workflows, and event-based processing were also very important modes of computing, and that any large application would likely use a combination of these techniques.
So we wanted to create a toolset which would make it easy for people to create, orchestrate, and manage these kinds of more complex applications. So we focused on workflows first, then we did events, and then we did the more continuous-deployment, or CD, aspect of the project. So the Argo project consists of three main components.
The Argo community at this point is fairly large, and it is still growing rapidly, particularly right now on the Argo CD front. You can see marquee, recognizable brand names here, like Adobe, BlackRock, Google, and NVIDIA; they were all very early adopters of Argo, starting with workflows. And then more recently, SAP, Ticketmaster, Tesla, et cetera. And obviously Intuit is also a big, big user of Argo.
A lot of people start with one tool and then, as they develop their applications, they integrate other tools from the Argo family. So there are quite a few people using multiple Argo tools for their applications today. And here are just some examples of tweets and so on happening in the community.
At this point, the Argo project has about, you know, 5,000 stars, 900 forks, a total of 240 different individual contributors, etc. I guess the thing that I'm most proud of is that as the projects mature, community contributions have been increasing more and more. So at this point, for something like Argo Workflows, which has been around a little longer and is more mature, actually 60% of the contributions are coming from the community. So if you look at the pull requests, about 60% are from the community, and these contributions are not just bug fixes and so on.
Shortly thereafter, Applatix was acquired by Intuit, and at Intuit we started the Argo CD project, initially to meet the needs of Intuit internally. And then in May of 2018, actually earlier, we had always wanted to integrate kind of event-based integration with Argo, and we had opened a GitHub issue on Argo to discuss what this type of integration might look like, when BlackRock approached us and said: hey, we use Argo, and we also wanted event-based integration.
In fact, they had already been working on event-based integration for the past two months. So we got together and discussed it, and we really liked what they had done, and they also decided that it would be good to contribute Argo Events to the Argo project. So in May of 2018, that's when Argo Events was contributed by BlackRock as part of the Argo project. And then in June 2018 we got our first big user of workflows for the ML use case.
That was when Kubeflow decided to adopt Argo as the workflow engine: it is basically the workflow engine behind Kubeflow Pipelines. They did implement their own UI on top, but it's basically Argo underneath. And then in July 2018 we put Argo CD into production use at Intuit, and today it is deploying and managing thousands of applications and namespaces at Intuit.
Of course, Kubernetes usage at Intuit has been growing very rapidly. Right now Intuit consists of four major business units, you know, along the lines of some of our main products like TurboTax and QuickBooks, plus there's kind of a central team as well, and all of these major business units have adopted Kubernetes and are using Argo as well.
So the adoption of Kubernetes, and then of Argo, at Intuit has been very rapid. Okay, now to get into some of the more details. So we were discussing Argo as a collection of three main projects: how are they related, and what makes them a single toolkit? As I mentioned before, Argo makes it very easy to kind of combine workflows, events, and applications to create more complex applications; basically, to orchestrate the jobs and services related to these applications. So if you look at events, for example, events can be used to trigger workflows.
This was the original use case that motivated BlackRock to create Argo Events and to contribute it to Argo, and workflows can obviously also generate events. Events can be converted into messages which are processed by long-running applications, and those applications also generate events. Similarly, workflows can trigger deployments, and so on with the other Argo tools. We think that in large, complex applications you will really have to be using multiple modes of computing: you are not just going to have long-running services, and it is not just going to be event-based.
A little bit more on the definition of Argo Workflows. What is Argo Workflows? All of our components are implemented as declarative Kubernetes resources; that is what makes them Kubernetes-native. It allows us to build layers of abstraction in a very declarative way, starting with, you know, containers and pods, then deployments, services, applications, and so on and so forth. And with Argo Workflows, each step of the workflow is basically a pod, so it maps very well to the Kubernetes-native abstractions.
That does mean that a workflow step is kind of granular, right: the container does take a little bit of resources and time to start up, so it is not designed for lambda-style workloads. Argo also allows you to specify workflows in two different ways. You can specify them in a more step-based fashion, sequential with, you know, fork/join parallelism, or you can specify arbitrary DAGs, and the DAG form is particularly popular with folks using Argo Workflows for machine learning.
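As an illustration of the two styles just described, here is a minimal sketch of a DAG-form Workflow resource; the names, image, and parameters are illustrative placeholders, not from the talk, and the fields follow the general shape of the Argo Workflows CRD.

```yaml
# Minimal sketch of an Argo Workflow in DAG form.
# Names, images, and parameters are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: example-dag-
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: prepare
        template: echo
        arguments:
          parameters: [{name: msg, value: prepare}]
      - name: train                 # runs only after prepare completes
        dependencies: [prepare]
        template: echo
        arguments:
          parameters: [{name: msg, value: train}]
  - name: echo                      # each task runs as its own pod
    inputs:
      parameters:
      - name: msg
    container:
      image: alpine:3.10
      command: [echo, "{{inputs.parameters.msg}}"]
```

The same workflow could equivalently be written with `steps:` (sequential lists with nested parallelism) instead of `dag:`; the DAG form simply declares the dependencies explicitly.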
For example, Argo CD is declarative continuous delivery for Kubernetes, in the sense of the definitions of terms that SIG App Delivery is currently working on. You see a picture of that on the right side: you have an application which is declaratively specified, most likely residing in a Git repo, and there are rollout strategies which basically take those specs and realize them into running workloads.
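The declarative Application resource described here can be sketched roughly as follows; the repo URL, path, and namespaces are placeholders.

```yaml
# Hypothetical Argo CD Application: sync the manifests found at a
# path in a Git repo into a target cluster and namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/apps.git   # placeholder repo
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
```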
Obviously, Argo Events allows you to respond to various external events, calendar or time-based events, as well as internal events, so that one workflow, when it completes, may create an artifact, and the creation of that artifact may trigger another workflow, which becomes another stage. As workflows become more complicated, it becomes unwieldy to create just one huge monolithic workflow, and therefore Argo Events allows you to decompose those workflows into smaller workflows, which you can stitch together using events, to create a completely automated system for processing.
Kubeflow is another good use case for the Argo project: they use both Argo Workflows as well as Argo CD as part of their pipelines for GitOps-style machine learning. As I mentioned, they built their own UI, so this doesn't look like the Argo UI, but basically the underlying component is Argo.
Also, Seldon uses Argo Workflows and also Argo CD to build and deploy machine learning models as part of their product, so that is another machine learning use case. When we started the project, we saw that there were obviously other existing workflow engines, especially in the CI space. But since the CI space was already well established, we decided to build a more general-purpose workflow engine, targeted at other emerging applications at the time, which included, obviously, Kubernetes, but also the many folks doing ML and AI.
We saw that there wasn't a great workflow engine for ML and AI, so we did make a conscious effort to try to target those particular areas. Some people were trying to use Apache Airflow, for example, which other people use for data-processing purposes, but ML folks were really not using it, and what we found is that most people actually prefer doing something more Kubernetes-native, or more configurable, using something like Argo Workflows over Airflow in the ML space.
So at this point we have major platforms, like Kubeflow and NVIDIA's Maglev project, as well as Seldon.io, and now internal teams at Intuit, that have decided to standardize on Argo as the toolkit for doing ML and AI.
Some alternatives to Argo: these are the projects that we most often get compared with, and I want to describe them to provide some context in terms of how the Argo project may fit into the CNCF landscape.
Another project that we often get compared to is Flux, which is from Weaveworks. They kind of coined the GitOps term. The approach to the way we each do CD is, of course, a little bit different: ours was designed more from an enterprise perspective, managing many clusters and multi-tenant types of environments, whereas where Flux really excels is when you want to deploy something very quickly and you have full access to the cluster.
Or you want to bootstrap the cluster. So very different use cases were driving the core design of each, although you could obviously use either project for, you know, applications as well as cluster bootstrapping. Another project we sometimes get compared with, particularly on the Argo Events side, is Knative from Google. Obviously, Knative is more of a FaaS system rather than a workflow system; we focus more on coarse-grained workflows and on having good integration between Argo Workflows and long-running deployments.
D: Yeah, I had a few, okay. First of all, thank you for a fantastic presentation; very, very impressive technology, and a very lucid presentation, thanks for that. Two questions. One is: these workflows that are running inside this technology, how aware are they that they are part of it, and how do they interact with the Argo framework? That's the one question. And the other one is: the data that flows between these various workflow elements, what is the typical, you know, how much of that is just provided by Argo?
A: Yes. So one of the great things about Argo Workflows, and the whole suite of other tools, is its kind of native integration with Kubernetes. What this means is that you can use all of the Kubernetes features, including, like, volume mounts, or, you know, being able to create other related resources from workflows, all natively from inside Argo Workflows. So basically, you could include a Kubernetes spec.
I could also point at an Argo-native feature called artifacts, which automatically packages, say, the output of one step in a workflow, puts it in, you know, something like S3, and then automatically imports it as an input to another step in the workflow. For other types of applications, where you're doing a lot of data processing and you have very large data sets, oftentimes folks will store the data in something.
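The artifacts feature described above can be sketched roughly like this; the storage location (for example an S3 bucket) is configured separately in the workflow controller, and the names and image here are illustrative.

```yaml
# Illustrative two-step workflow: the first step declares a file as an
# output artifact; the second declares it as an input artifact, and
# Argo moves it through the configured artifact store in between.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: generate
        template: produce
    - - name: consume
        template: consume
        arguments:
          artifacts:
          - name: result
            from: "{{steps.generate.outputs.artifacts.result}}"
  - name: produce
    container:
      image: alpine:3.10
      command: [sh, -c, "echo hello > /tmp/result.txt"]
    outputs:
      artifacts:
      - name: result
        path: /tmp/result.txt
  - name: consume
    inputs:
      artifacts:
      - name: result
        path: /tmp/result.txt
    container:
      image: alpine:3.10
      command: [cat, /tmp/result.txt]
```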
A: Yes, Argo Workflows particularly has commonalities with Tekton. Obviously, a workflow engine is a workflow engine at the end of the day. Where I find that a lot of these workflow engines differ is that they kind of get defined by the original set of users and applications, or the community, that they attract, because features and so on get driven by those users. So when we started Argo Workflows, for example, there were other workflow engines that existed; most of them were not Kubernetes-native, because Kubernetes was just starting at that point.
So when Argo started, ML was starting up, so a lot of people doing ML use it. Similarly, as Tekton starts up and grows, I think they will have to decide what the main community is that they want to serve. Usually, as workflow engines become established, it's difficult to displace them, unless something weird happens with the community that has already accepted them. So I feel like there will always be newer workflow engines, and they will just be targeted towards the newer application areas that emerge.
A: Yeah, I think that's another approach that you can take: build a common denominator, and then perhaps other projects may use it. You know, like, for example, Kubeflow uses Argo. So yeah, these tools get incorporated into larger frameworks serving more specific user communities.
B: Now, I guess there is another question in the chat. It's actually asking that workflows seem very generic: should it be more of a delivery workflow engine? So I think the question is, what is the relationship between Argo Workflows and application delivery? Because I think the question is saying that workflows actually have a larger scope than application delivery.
A: I mean, you can use workflows for a lot of different things. There's definitely, I think, an interaction between workflows and applications, and vice versa, and it also depends on how you view the term "application." If you view it in a narrower scope, like a set of deployments, pods, and so on for running long-running services, then workflows is not strictly a part of that.
C: Yeah, my question was also how you would relate to what the other projects are doing, since your projects were created independently; I obviously see the differences in approach there. I wanted to hear your opinion, not just on what the differences between those may be; I mean, do you want to encourage the projects to collaborate together? Or, spinning it the other way around, how would you see being part of the CNCF, and what would be your interaction points with the other...
A: The workflow projects? Yes, well, I think that we should all kind of work on maybe creating a more abstract spec. I think the approach that SIG App Delivery is already taking, in terms of defining these terms, is right: you know, Kubernetes obviously didn't have a native concept of an application when it started, and what people want to do is run applications, not individual pods, at the end of the day. So you're defining what the basic abstractions and terms are; what do they mean?
What are the boundaries between these components? I think something similar could be done for workflows as well. So we're actually interested in that: we're creating a draft right now of some basic terms and abstractions in this area, kind of modeling it based on the definitions that the application SIG has already created for applications. And maybe, you know, we might also discuss in the future whether we should take a broader view on applications: is it just running pods and so on, or is it something a little bit broader?
Yeah, and we would really like to, you know, actively participate in the discussion, by the way; I'm sure lots of other folks would as well. Right now we are seeing massive adoption of Kubernetes at Intuit. For the past year, to be frank, we have had our hands full supporting Intuit's needs, but as the use cases grow, our group has been growing, and we now have much more bandwidth to interact with the community as well.
F: So, the way the application is defined is definitely much broader than just a deployment or a service. When it comes to the technologies, we actually use all three Argo projects to drive the transformation of Intuit into leveraging Kubernetes natively, and cloud technologies in particular, like all the CNCF projects, to deliver value for running these, like, myriad applications that we have at Intuit. Yeah.
A: And particularly as an end user, we would also like to help bring other end-user communities in, get their feedback and interest, and, you know, work with them as well, to kind of decide what the actual needs of the end users who are using Kubernetes are, in terms of application delivery. Is it only, you know, long-running services and pods, or are they also looking at eventing and workflows and so on, and do they want some way to integrate it all, right?
C: Yeah, I think there's a mix of things in this space. I think the event-based application piece I find interesting, and also the machine-learning part is, to me, the most interesting one, honestly; not that the other parts are not interesting, but this is something that we currently don't cover at all: what a machine-learning application would be, what a pipeline for this would look like, what delivery for these types of applications looks like. I think this adds a very interesting aspect to the entire discussion.
J: Go ahead. Okay, there was a question about the underlying backbone that Argo is using for these messaging pipelines; I think it probably got lost because there were multiple questions. So can you help us understand whether Argo is using more of the native Kubernetes storage and communication mechanisms, or any other external element for that?
A: Yes, actually, you can use both. We discussed a little bit about storage: you could use persistent volume claims, or even NFS or EFS; all of that, you know, Kubernetes already supports, and since Argo is built on top of Kubernetes, you can basically use all of those. That is one of the advantages, I think, of basing it on Kubernetes and making it native, rather than creating a wrapper around Kubernetes and then having to reimplement
all of these features that you want to integrate. But more specific to your question about messaging: we are kind of agnostic about what the underlying messaging backbone should be. It could be just Kubernetes events at a very simple level, or it could be, say, a NATS server you spin up for messaging. What Argo Events does
is it defines things like gateways and sensors, which allow you to create, for example, a gateway for input events into the system, and then also define more complex rules in terms of what sets of events can trigger, you know, messages, or workflows, or other types of activity in the system. So that's the basic abstraction Argo Events provides, but in terms of the actual messaging backbone, you could use NATS, Kafka, or just Kubernetes events; we are agnostic to that.
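The gateway/sensor abstraction described here can be sketched roughly as below; note that the field names varied across early Argo Events releases, so treat this as an illustration of the concept rather than an exact schema, and the gateway, event, and image names are hypothetical.

```yaml
# Illustrative Argo Events Sensor: when the referenced webhook gateway
# delivers a matching event, trigger the creation of a Workflow.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook-sensor
spec:
  dependencies:
  - name: payload
    gatewayName: webhook-gateway      # hypothetical gateway
    eventName: example
  triggers:
  - template:
      name: run-workflow
      resource:                       # resource to create on trigger
        apiVersion: argoproj.io/v1alpha1
        kind: Workflow
        metadata:
          generateName: triggered-
        spec:
          entrypoint: main
          templates:
          - name: main
            container:
              image: alpine:3.10
              command: [echo, "event received"]
```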
F: So we have taken a conscious decision; we made the choice to be agnostic to the configuration-management tool that you define the application in. So in that sense, the Kustomize support, the ksonnet support, and the Helm support are there; those are things that you can just use for defining your application, whichever actually fits your needs. What we are also doing in Argo CD, which will be in the next version, on the roadmap, is first-class support for Helm.
What that literally means is being able to have a Helm repository that you point Argo CD to, much as you would point Argo CD to a Git repository that you have your definitions written in; and we will actually understand the Helm chart from the repository and apply the same mechanisms that we would apply as if it were defined in, let's say, Kustomize. So in that sense, we are baking in first-class-citizen support for Helm repositories, like you would have for a Git repository, right.
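What that Helm-repository support looks like, in the later Argo CD versions that shipped it, can be sketched like this: the Application's source points at a Helm repository and a chart instead of a Git repo. The repo URL, chart name, and version here are placeholders.

```yaml
# Hypothetical Application sourcing a chart directly from a Helm
# repository, rather than from manifests in a Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: redis
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com    # Helm repo, not a Git repo
    chart: redis                           # chart name within the repo
    targetRevision: 1.0.0                  # chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: redis
```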
A: In general, Helm has multiple components or concepts. One is that it is a way of packaging applications; there is also a deployment component as well. But the Argo tools are kind of agnostic in terms of what configuration-management tool you use to package the application. We want to support all of the popular ways of packaging the application, whether it's a Helm chart or some ksonnet thing or some other type of thing. And then configuration management, obviously, is another entire broad area.
B: Guys, actually, I think we can stop the first topic here, because we still have two projects to present. So let's go to the next project. I'm really sorry that we discussed for so long here, and I will try to follow up with you guys offline. Okay, so I think the next project is the Operator Framework.
K: Thank you, thank you all. Right, so this is the SIG App Delivery proposal for the Operator Framework. My name is Daniel Messer; I work at Red Hat in the operator community space. What we are looking at here is something that emerged from us noticing, about two years ago already, that we are now pushing the third wave of Kubernetes applications, applications on top of Kubernetes, which we call advanced distributed systems.
What the Operator Framework is about is giving developers the tools to build, test, and publish Kubernetes operators in an iterative development cycle, and also helping owners and providers of clusters manage which operators are available and offered on their own clusters, in a self-service fashion. So, on a high level, the Operator Framework is divided into three parts.
The Operator SDK is targeting the developers of operators, and this is the build stage, where the SDK helps you skip over a lot of the regular boilerplate code that you need to have in place in order to write an operator: in order to interact with the API server, to register yourself for watches and reconciliation cycles. You just focus on the application code, the code that is specific to managing your application, which is the unique property of the operator. The Lifecycle Manager, on the other hand, is targeting Kubernetes admins.
So these are the people who are responsible for the stability of the cluster and for offering additional services on top of it to the users of those clusters; this is the component that deploys and runs operators. And then there is another upstream effort to provide developers, and people interested in searching for operators, a place to search for and publish their creations: this is OperatorHub.io.
The operator concept goes back all the way to 2016, when it was initially introduced by CoreOS, starting with three operators, for etcd, Prometheus, and Vault, and it then quickly took off in the community. So we had early adoption by very popular open-source projects in the storage space and in the database space.
Primarily, all of those workloads require active care and are otherwise fairly difficult to run on Kubernetes, with external systems or scripting that try to impose state or configuration on them. What the operator does, it does on-cluster, and it does it very specifically for the application. In 2018 we actually officially launched the Operator Framework as an upstream open-source project, with the SDK, OLM, and a metering operator.
It is licensed under Apache 2.0. And we are seeing that this really unlocks the potential for running stateful workloads in a very safe and predictable fashion on Kubernetes, so we had a ton of them forming around very popular workloads on Kubernetes: notably Redis, MySQL, and Postgres all have operators that leverage the operator pattern in order to define how the workload gets deployed and how it is managed, especially with a focus on day-two operations, like doing an instant backup of your database, a restore, or a more complex reconfiguration.
Sorry, was there a question? Okay, no question. At the same time, we also started to form a discussion forum under the OpenShift Commons umbrella (this is an open-source community for the OpenShift and OKD projects) and formed this SIG there, which has monthly community meetings with good participation, and we also have a mailing list in place that gets a lot of community contribution from the larger Kubernetes space.
In 2019, I think we reached an inflection point where, at a much broader level throughout the Kubernetes community, there were discussions around how add-ons to Kubernetes and the control plane are going to be managed, and one part of the Operator Framework, the Operator Lifecycle Manager, is one of the technologies that was under consideration in that particular SIG Cluster Lifecycle subgroup.
The Operator SDK is a project that is leveraged a lot by ISVs and authors of stateful applications in order to create operators. There's a whole ecosystem around this, where commercial vendors are using the framework in order to package their operators and deliver operators to their customers, in order to provide a really good service, and the session attendance at KubeCon, in Seattle and Barcelona, reflects that level of interest. Now, a little bit more detail on some key components.
The SDK is a tool that developers use on their workstations to scaffold code and provide a code structure for how to write operators. It heavily leverages controller-runtime, in order to let the author focus on the reconciliation logic of the operator. This is primarily happening for Go programmers, but the SDK also addresses other audiences, for example in the Ansible space.
This is accompanied by a testing framework. We believe that operators are important and impactful workloads on the cluster; they need to be tested very, very well, because users trust these services to manage their production applications, eventually. So the SDK also comes with a testing framework, and verification and scoring tools, that help the developer judge the maturity of the operator. It also integrates with the rest of the framework in terms of generating the bundle packaging format which is used by the Lifecycle Manager. The Lifecycle Manager is an on-cluster component.
It defines a packaging format for operators, and has the concept of catalogs that can be hosted in standard container-image registries; against those, you can state your intent: install an operator, keep it updated. The Lifecycle Manager introduces a concept where the author has direct control over the update graph of the operator. Again, operators, being long-running, important, impactful workloads on the cluster, should be updated very frequently, and we provide the means to do that with the Lifecycle Manager.
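The author-controlled update graph mentioned here is expressed in the ClusterServiceVersion metadata that ships with each operator version. A minimal sketch, with placeholder names and versions and the operator Deployment spec elided:

```yaml
# Illustrative ClusterServiceVersion fragment: the `replaces` field
# links this version to its predecessor, forming the update graph
# that OLM walks when it upgrades the operator.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.2.0
spec:
  version: 0.2.0
  replaces: example-operator.v0.1.0   # previous node in the update graph
  displayName: Example Operator
  install:
    strategy: deployment
    spec:
      deployments: []                 # operator Deployment spec elided
```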
Operators are also always workloads that have a global impact on the cluster, due to the built-in nature of CRDs, so the Lifecycle Manager also provides a lot of guardrails in order to do that safely on the cluster. There are checks against ownership of existing CRDs, against existing operators; there are measures in place that prevent privilege escalation from the usually highly privileged service accounts that operators are using, since these are on-cluster workloads.
There is also, as part of the catalog concept, the ability to implement segregation of concerns between operators. For applications that consist of multiple stateful services, there is a concept of dependencies between operators, which allows one operator to specify a dependency on another operator; and once you install such an operator, OLM will resolve the dependency at install time and make sure that operand is available and up and running as part of the larger stack. You can use this, together with the expressed update model, to control the updates to the managed application. What's displayed here is one variant of this.
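Stating the intent "install this operator and keep it updated" against a catalog is done with a Subscription; a minimal sketch, with placeholder package and catalog names:

```yaml
# Illustrative OLM Subscription: subscribe a namespace to an operator
# from a catalog source; OLM resolves dependencies and applies updates
# from the chosen channel.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: operators
spec:
  name: example-operator            # package name in the catalog
  source: community-operators       # CatalogSource to resolve from
  sourceNamespace: olm
  channel: stable                   # update channel chosen by the author
  installPlanApproval: Automatic
```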
The third part, OperatorHub.io, that I always wrap up on, is a joint effort between Red Hat, AWS, Microsoft, and Google. We launched this in February this year.
We now have over 75 operators published there, with multiple versions getting frequent updates. This is a community-curated place where people can just publish their operators. It comes with automated testing and a PR-based review process, and it is agnostic to the type of operator and the way it got created; so it is not dependent on an operator being created with the SDK.
Operators created with the SDK obviously have the advantage that a lot of the packaging is created for you. OperatorHub also gives users straightforward install instructions, given OLM is present; it captures the community's best know-how on how to install an operator, and it provides a public catalog of operators that people can use in their clusters.
A couple of statistics from the community: we have a couple of thousand GitHub stars, a lot of clones, especially on the SDK side, and very many people contributing: over 160 individual contributors, with 38 organizations contributing. This is also including the community operators that we have just seen, and there is also a very active mailing list in this space, the operator-framework special interest group, with over 80 subscribers.
The feedback has been very good. You see here a couple of mentions of the framework, the SDK, and OLM on Twitter, from Brandon Philips and others, and also from some of the companies that provide application packaging: Banzai Cloud is one such company, providing an enterprise-supported version of Vault. And I think, in general,
our community has reached a point where it has become clear that if you want to run the type of stateful application that has reasonable complexity, you should use the operator pattern in order to automate it sufficiently, with a workload that is actually running on the cluster, and not outside trying to impose itself on the cluster. This is a quote from one of the engineers here, from the last KubeCon in Barcelona.
Actually, we found there is huge momentum in the software-vendor space, obviously, because it is a very nice way to provide a very good user experience on-cluster, very well integrated into Kubernetes itself, which is also why we call these Kubernetes-native applications. A couple of examples here are Dynatrace, Couchbase, Sysdig, and Jaeger; all of these are providing operators as part of the way they ship their software, but we also have a lot of open-source contributions as well.
So every operator ships CRDs; the Lifecycle Manager is developed with operators as well; the installation is driven by CRDs; and the lifecycle model imposed by the Operator Lifecycle Manager, with its defined update graph, taps very nicely into the release management practices of this group.
We are also aligned with other CNCF and Kubernetes projects. Sorry, as I mentioned in the beginning, there is an add-ons subproject; it discusses how cluster add-ons for Kubernetes are to be managed and how those come to life, and OLM is discussed as an additional tool there.
We work very closely with Kubebuilder these days, in order to join forces on the Go-based operator side. Kubebuilder is also a project that focuses on helping Go-based operator developers write operators, and we look to leverage some of their work, as well as contribute jointly in the controller-runtime space.
There was also a question around the service-broker project. So service brokers in general are something where I personally do not see a lot of momentum anymore, overall, from the bigger players in this space as well as the overall community; the concept of the service broker has largely been replaced by the operator concept, I would say. And there are some commonalities between OLM and service brokers, but due to the nature of operators, and some of the specific aspects of running them on-cluster, I think it is only at a very high level. So, mostly...
J: Mostly I was just referring to the Open Service Broker. It has this concept of a plan, which is kind of like, you know, a bundle of provided services, I guess, and it kind of seemed similar to the bundle/package thing that the one slide about OLM was talking about. That's the reason I brought it up, yeah.
K: I think that makes sense. There is also an interesting concept in the broker space around binding: how would you actually consume a service that you just got from a broker? And we have similar discussions in the community around how you consume and inject connectivity information from an operator-managed service into your application, because that is eventually what the developer usually wants to do. So we are looking at dovetailing application binding as a concept in the operator space as well, and, I mean, the roadmap basically has...
B: Sorry, guys, we may want to... we have to stop the meeting, because we just ran out of time. I think if we have more questions about the Operator Framework or the SDK, especially the questions about how to choose operators between different implementations, we can actually follow up over the mailing list of SIG Application Delivery. I would be very happy to discuss more about all of these projects. Okay.