From YouTube: CNCF TOC Meeting - 2019-06-11
A
B
A
C
So it reconciles the contents of a git repo with the Kubernetes database, and really the effect of that is to extend the reach of your system of record to git, which means that you can operate on it using git operations. That is pretty useful, and it seems to have chimed with people, because it's a core mechanism of what has come to be known as 'GitOps'. And here is a diagram of the flow of stuff going from the repo being applied by Flux to Kubernetes.
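That reconcile loop can be sketched in a few lines of Python. This is a toy model, not Flux's actual code: plain dicts stand in for the git repo and the cluster's API objects, and the function name is invented for illustration.

```python
def reconcile(desired, cluster):
    """One reconciliation pass: make the cluster match the repo.

    `desired` models the manifests in the git repo; `cluster` models
    the live Kubernetes objects. Both are plain {name: spec} dicts in
    this sketch -- the real thing clones the repo and talks to the
    Kubernetes API server.
    """
    for name, spec in desired.items():
        if cluster.get(name) != spec:
            cluster[name] = spec      # the "kubectl apply" step
    for name in list(cluster):
        if name not in desired:
            del cluster[name]         # prune objects removed from git
    return cluster

# The repo is the system of record; any drift is corrected on the next pass.
repo = {"deploy/web": {"image": "web:v2"}}
cluster = {"deploy/web": {"image": "web:v1"}, "deploy/orphan": {}}
reconcile(repo, cluster)
print(cluster)  # {'deploy/web': {'image': 'web:v2'}}
```

Running the pass repeatedly is what makes git the effective system of record: manual edits and removed manifests are both corrected on the next sync.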
C
So git push instead of kubectl apply: we've effectively replaced kubectl apply, and we've effectively replaced helm upgrade, with git push. And in fact that capability has really driven a lot of people to the community, because I think it was something that's nicely complementary to the other stuff that comes with Helm, which I'll cover in a wee bit. Next slide, please.
C
So we think it's really important to set boundaries on the scope of what Flux does, and Flux is really aiming to be at an angle approaching 90 degrees to other things that are in the Kubernetes ecosystem. We want to do one, or maybe two, jobs well, and not try to get on other people's lawn. So we're not trying to replace continuous integration; we're trying to work with it.
C
We're not trying to replace other operators, or service meshes, or things like that; we're trying to work with them, and the examples are there. Another example is secrets management, the idea being that if you're using Sealed Secrets, for instance, Flux will create the resources for you, but then it stops there and lets Sealed Secrets, or whatever other operator, take over. We're not trying to be a packaging or templating solution.
C
What we're trying to do, as with the Helm support, is work with those solutions to extend their reach to git and make them declarative where necessary; and similarly Kustomize, which is on its way to becoming part of core Kubernetes. We're also not trying to do sophisticated rollouts, or really sophisticated continuous deployment; that is better done, at least in concert with other tooling, like Istio. And so — next, please.
C
So these are all users that have added themselves, or at least are in our README, as production users — and right in the middle there, we use it ourselves, of course. Okay, that's another kind of thing! Thank you. So, Kyle, if you don't mind decloaking and just spending one or two minutes on how Under Armour uses Flux, that'd be cool. Sure.
E
…it does this one thing really, really well, so we started to apply it to our cluster, and we saw the benefits almost immediately. We were never questioning what was in the cluster: we could just look at the master branch in the repo that Flux is looking at, and that's the source of truth, and Flux is making sure that the cluster is in sync with it. We weren't having problems anymore with "did this actually get applied?", or "I made a change and blew away someone else's manually applied change."
E
Everything was just fed back into git, and that was the source of truth, and it was so much clearer. And now we're starting to take this one step further, where we want to have multiple Kubernetes clusters that represent a region, and Flux has been super clutch in this situation. I'm an infrastructure engineer at Under Armour, and when we spin up new clusters we'd have to, like, tell other people: "hey, there's this cluster, here, you can put your stuff on it." Now, with Flux:
E
We don't have to do that anymore. We can just do that work and, unbeknownst to everyone, it's running, because Flux is making sure that it's applied to that cluster. So we have three clusters that represent the U.S. region. If things are going wrong with one, we can shift traffic to one of the other clusters; or if we take a cluster out, we can, you know, upgrade it, bring it back up, and we know that all the apps will get deployed onto it. So it's been a super clutch tool for us in our Kubernetes journey. Brilliant.
C
There's definitely overlap there — you know, I don't think it's out of the question that Flux and Jenkins X could sort of work together, but there's significant overlap there, so you could consider that an alternative. And Argo CD is also based on similar ideas to Flux, and works with some other things from the Argo project; like I say, it's close in spirit to Flux, it's just a bit newer, and had some newer ideas in some ways. Sorry — good, go to the next one.
C
Thank you. There are some other projects which may be less in the way of alternatives, but that people might think of when they're looking at this stuff. So Spinnaker is definitely one of them, and I think the main way Flux would compare with Spinnaker is that Flux is really trying to do one thing quite mechanically simply, whereas Spinnaker is — we are told, anyway — quite a complex platform, and Flux is not.
C
Kustomize is sort of newer to our support — in fact it's not even in a full release yet — but I would say some of the same things for Kustomize: we're not really trying to replace those tools, we're trying to be complementary to them, to sort of add the good bits of Flux to those other tools.
C
Another thing that might come up is: why don't I just write my own, or can't I just drive this from continuous integration? You can, of course, and that works pretty well. I think Flux is maybe more in the spirit of Kubernetes, of having a system of record and then a reconciliation process, whereas continuous integration tends to be more event-driven. And again, we are not trying to be a sort of general pipeline thing; we're just trying to do the one job.
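The contrast between event-driven CI and a reconciliation process can be made concrete with a toy sketch (hypothetical names, plain dicts for state, no real APIs): an event-driven applier that misses one notification stays wrong until another event arrives, while a level-triggered reconciler converges on its next pass regardless.

```python
# Hypothetical sketch: an event-driven deployer only acts when it sees an
# event, so a lost webhook leaves the cluster stale; a level-triggered
# reconciler re-reads full state every pass, so missed events don't matter.

def apply_event(cluster, event):
    # CI-style: act only on the single change we were told about.
    cluster[event["name"]] = event["spec"]

def reconcile(repo, cluster):
    # GitOps-style: the repo is re-read in full on every timed pass.
    cluster.clear()
    cluster.update(repo)

repo = {"web": "v1"}
ci_cluster, flux_cluster = {}, {}
apply_event(ci_cluster, {"name": "web", "spec": "v1"})  # event delivered

repo["web"] = "v2"              # a commit lands, but the CI webhook is lost
# ci_cluster is now stale; nothing fixes it until another event arrives.
reconcile(repo, flux_cluster)   # the next timed pass converges anyway
```

This is the "system of record plus reconciliation" point: correctness depends on current repo state, not on a complete event history.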
C
People can go back and refer to it if they are interested — except to say that, for quite a long time, Flux was really a tool driven by our own requirements, although it was always open source; and the one thing that really brought a lot of interest and built more of the community around it was the Helm support.
C
F
So yeah, as Michael said — and you can also see it from the graph — things changed dramatically since the initial Helm support was released. We've seen more and more people come in and contribute to the code, but also write documentation; there were blog entries. We had a number of different integrations built: one of the first ones was integrating Flux with Slack, and then everybody tried to figure out, like, how can we do things…
F
…the GitOps way. So Helm releases, doing canary deployments, OpenFaaS — people really wanted to figure this out, and people have given talks and workshops independently of us, and it's really nice to see. This is also why, over time, we had to start, you know, building more structure: having monthly calls, having a community mailing list, and having a bit more process. But in general, all our trends are going up.
C
Up to now, Flux has supported essentially YAML files — flat Kubernetes manifests in YAML files — and we have added support for driving things like Kustomize from Flux recently. And the theme continues: stuff that we're hoping to finish soon includes supporting git repos that we only have read access to, and we're also hoping to have a 1.0 release of the Helm operator. And stuff that's coming up: supporting more than one git repo, and Helm v3, which is no longer just looming on the horizon.
C
There's also, you know, quite a lot of alignment: if we think of Flux as sort of adding the git superpower to Kubernetes, it can also add that to Helm and Kustomize and other things that are in the family. And another reason, one that is particular to Weaveworks, is that it worked well for Cortex, which I think has, you know, gone from strength to strength since being adopted into the CNCF. Next, please. So here are some of the ways it aligns — I've actually covered…
C
…these, I think, largely, but just to reiterate: Flux is, in theory — it's abstract, it's not tied to Kubernetes — but the implementation is definitely tied to Kubernetes, and it's strongly influenced by how Kubernetes works as well. So that's another tie. And it can be used not only to run applications, but also to bootstrap and manipulate Kubernetes clusters, so it operates on the infrastructure level as well.
C
We have Helm support that's first-class — if not in quality; it's not at a 1.0 release just yet, although it's widely used — it's, you know, first-class support with the Helm operator, which is its own thing. And Kustomize is also part of the Kubernetes family: as I mentioned, we now support generating manifests in Flux, a scheme which would also work with other things, but it was designed largely to support Kustomize, first and foremost. Okay.
B
C
…via the command-line tool, fluxctl; so all that service is really doing is proxying there. It will also relay events — such as when it makes commits, or syncs commits — to the upstream service. You can run Flux standalone — and a lot of people do — without Weave Cloud at all. There's also an open-source implementation of the upstream service called Fluxcloud; Justin Barrick made it, it's on GitHub. Is that enough?
D
G
No, and I get that it can be run without Weave Cloud. I think one of the things — just through the lens of, sort of, the CNCF — is how tight the connection is there. So, like, looking through the repo, one of the things that I would be looking for is to have Weave Cloud be one option of many for these upstream types of services, which means, you know, removing some of the sort of hard-coding of Weave Cloud being the default thing — so, like, when you give it a token for authenticating to the service…
G
You know, I would expect that you would also have to say "I'm connecting to Weave Cloud", whereas right now there's this default assumption that, of course, you want to connect to Weave Cloud. That's natural when it's a project that's sponsored by a company, but as it moves into the community, I would expect that we would actually be making those relationships more explicit, both in code and in documentation and such. Yeah.
C
G
C
G
C
E
I'd also like to chime in: at Under Armour we do use Fluxcloud, the open-source tool, to send messages to Slack, and it is really nice. The integration was super easy: we set it up as a sidecar alongside the Flux pod — or inside the Flux pod — and Flux sends events to it. I just had to point it at the Fluxcloud pod, or container, and then hook it up to our Slack channel, and we have a Slack channel that all the Flux messages go into.
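As a rough sketch of what such a relay does, here is a hypothetical event formatter in Python. The event fields and channel name are invented for illustration and are not Fluxcloud's actual schema; the real tool receives events from the Flux daemon and posts formatted messages to a Slack webhook.

```python
def slack_message(event):
    """Turn a Flux-style sync event into a Slack webhook payload.

    The `kind`, `workloads`, and `revision` fields are illustrative
    assumptions, not Fluxcloud's real schema -- the point is only the
    shape of the idea: sidecar receives an event, formats it, posts it.
    """
    names = ", ".join(event.get("workloads", []))
    return {
        "channel": "#flux",
        "text": f"flux {event['kind']}: {names} @ {event['revision'][:7]}",
    }

msg = slack_message({
    "kind": "sync",
    "workloads": ["default:deployment/web"],
    "revision": "a1b2c3d4e5f6",
})
# msg["text"] == "flux sync: default:deployment/web @ a1b2c3d"
```

In a real deployment the returned payload would be POSTed to a Slack incoming-webhook URL by the sidecar.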
H
I have a question — full disclosure, I'm at Intuit, so we did the Argo CD project — and we've been looking at, you know, some possibilities of merging Argo CD with Flux a little bit. But I was kind of wondering: there seems to be, like, a philosophical difference in having each individual one of your clusters running the operator, looking back at a git repo. And so you kind of wonder, like, well — if you have, like, a pre-production cluster and a production cluster, how, you know…
H
…the philosophical difference with having a cluster that's kind of dedicated to continuous delivery over multiple clusters, which is kind of the Argo CD approach. And I'm wondering how you guys think about that — we can have a lot more discussion on that offline — but how do you guys think about that: essentially a managed, or central, CD service versus having the operator run on each individual cluster?
C
I don't know if I would describe them as philosophically conflicting — maybe more like coming from different places. And just to add a bit more on the other question, things like multi-cluster: I think it's sort of better, in a way, to stick to — at least, I think it's better to stick to — one thing, and let that be a component that other people can put together, perhaps.
H
So you're saying — the thing that stitches it together, I guess, would be, like, what Weaveworks provides. The philosophy would be to allow each cluster to, you know, keep its own state, have the service operators running in each cluster, but then have some central tool — more the user experience, or the developer or DevOps experience — that ties the multiple clusters together, and that way you can see what the state of the CD is across all of those. That's kind of the idea, yeah?
C
I think so. I think it crosses over into, sort of, user experience, user interface. You know, there are lots of ways of putting those things together, and different ones make different sense; it's up to, you know, vendors, or individual companies, or whatever, to do that, and use Flux as a piece in that rather than the whole thing — right, Flux being a specific bit of software.
I
Hello, this is Lee John from Alibaba, and we are evaluating advanced solutions, and we really like this approach. So one of the questions is: how much does Flux deal with the rollout strategy of your applications? Are you actually building interfaces between different kinds of continuous delivery systems, or does Flux have its own strategy, or is it just relying on the Kubernetes rollout strategy?
C
This makes some things tricky. So it's possible to detect, sort of, unqualified success of a rollout, but it's often difficult to diagnose failures — or, you know, whether something is just taking a while, or is never going to succeed — unless people have, you know, taken pains to configure it that way. So yeah: the superficial answer, if you like, is that we leave rollouts to Kubernetes, and we try to do our best to observe what's happening.
H
You might want to check out Argo Rollouts — there's an Argo Rollouts project that brings canary and blue-green rollout strategies — and this one might get interesting from a combination standpoint, right? We can have a project that really allows you to implement different rollout strategies, and then that's separate from Flux — or Argo CD — which is really about matching the cluster to git.
I
Yeah — Argo CD is also something we are looking at, and what we want — I'm really hoping to see if we can combine these two things together. So, for example, you could actually integrate Flux-based or Argo-based CD with something like Spinnaker, and so we would have different kinds of rollout systems, and of course Kubernetes itself. I don't know if this…
I
…works with a system like that — for example, we are hoping to see that if we can use Flux to integrate with Argo CD, then maybe it can also work with Spinnaker. So: is that out-of-the-box integration already in the community, or is there a plan to do something like an interface between these kinds of things?
C
I don't know whether Argo Rollouts works with Flux — if it does, it's very plausible. There's also Flagger, which is a Weaveworks project with a similar aim: it does rollouts — sort of higher-order deployments, like A/B, blue-green, canary deployments, that sort of thing — using service-mesh things, Istio and so on. I don't know about that one, again — sorry, okay.
G
It actually has two modes, right? It writes status back again to the git project, in terms of which clusters sync. And you just mentioned that as a workflow where, if you push a new image, it then goes through and actually rewrites the manifests to reference the new image — essentially git is kept updated with the new image, and Flux then gets triggered to deploy it. So there's sort of a write-git and a read-git flow that are built into Flux — is that a correct understanding?
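The "write git" half of that flow can be sketched as a toy manifest rewrite. This is a regex stand-in invented for illustration; the actual automation parses YAML, commits the edit back to the repo, and then the read/sync flow applies the new commit.

```python
import re

def bump_image(manifest: str, image: str, new_tag: str) -> str:
    """Rewrite `image: <image>:<old-tag>` lines to point at `new_tag`.

    A toy version of the write path: when a newer image is detected,
    the manifest in git is edited (and committed); the sync loop then
    reads that commit and applies it. Real tooling parses YAML rather
    than using a regex like this.
    """
    return re.sub(
        rf"(image:\s*{re.escape(image)}):\S+",
        rf"\1:{new_tag}",
        manifest,
    )

manifest = "containers:\n- name: web\n  image: example/web:v1.0.3\n"
updated = bump_image(manifest, "example/web", "v1.0.4")
# `updated` now references example/web:v1.0.4; the automation would commit
# this change to the repo, and the read-git flow would deploy it.
```

The key property is that the cluster is never updated directly: the image bump lands in git first, which is what preserves the audit trail discussed below.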
C
B
One observation, really — it's not so much Flux-specific, but I think it is an interesting aspect of GitOps — is the audit trail that you kind of get automatically by having all these cluster operations recorded in git commits, which is really nice from a security perspective. So that's just something I wanted to throw out there.
G
I actually think that there's an opportunity here to standardize on, sort of, the audit-sink format between different sets of tools. So that — you know, we're going to have a bunch of things that are acting on the cluster's behalf — being able to throw those into a common audit-log sink for tracking would be interesting. But that's a different project.