From YouTube: Kubernetes in 2023
Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from 18 - 21 April, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
B: Yep, pretty excited to talk about this in the ninth year of Kubernetes. It's amazing to be thinking about that. So where are we at in 2023? You know, I think the thing is that Kubernetes is now mainstream. It's not even, you know, in the mid-adopter phase; it's in the late-adopter phase, and it's really become the industry standard for running cloud native applications anywhere. So, you know, Stefan, what are you seeing, and what are the reasons why people are choosing Kubernetes?
A: Well, the reason I'm choosing Kubernetes, besides anything else, is its community. I'm really amazed at what the community has built around Kubernetes. This whole ecosystem is really great, and I feel very proud to be part of the Kubernetes ecosystem. But yeah, if you want to choose Kubernetes, you have so many reasons, and I think one of the major reasons today is the fact that all the cloud vendors out there are offering a mature managed Kubernetes as a service. I think that's quite critical for Kubernetes adoption, given the fact that, you know, Kubernetes is so complex and it's really hard to set it up well. If you can do it with the click of a button, then you can really focus on what's really important, which is, you know, creating this ideal continuous delivery pipeline.
B: Yeah, for sure. I mean, I think the interesting thing is, at this point we might not even say customers are choosing Kubernetes; I think you'd say they're just assuming Kubernetes. And I think you're seeing this all over, whether it's in AI or databases: a lot of people who are building software, ISVs, are just assuming that you have managed Kubernetes available to you. And I think it's a huge lift as well, in the sense that people can kind of forget about questions like: how do you manage machines? How do you deploy software? They can rely on this infrastructure and then innovate on top of it, and that innovation is another big part of the ecosystem, I think. And speaking of that sort of stuff, the truth is, when we look at our developers out there, what we're really seeing is that Kubernetes is a platform for building platforms. It's the base, it's the foundation, but it's not where a lot of people stop, because raw Kubernetes by itself, the basic objects that are in there, are a little bit like machine code: they're often too complicated for application development. Or, in some cases, it's just because the Kubernetes community proactively decided, you know, that's not a problem we're going to solve; we're going to defer to the ecosystem out there to solve it. So, Stefan, what are you seeing in terms of ecosystem projects that complement Kubernetes?
A: Yeah, and what about vendor lock-in? I mean, when Kubernetes started, right, the main idea was that it would get rid of vendor lock-in: you'd be free of it, and you could move from any cloud to any cloud, from any data center to any data center. Did we end up with this promise, or is it more nuanced?
B: Yeah, I think it's way more nuanced. I mean, we see this out here with various enterprises who've said: we're going to build an application platform, no matter where we go, and our developers will just see the application platform; it won't matter which cloud they're going to. And I think it's a great vision, but it takes a ton of discipline, right? What we're seeing, interestingly enough, is that really it's the value of consistent tooling, of not having to re-educate your developers in different ways of deploying software into different places, that is the real win. I sort of draw a comparison to the Java promise of "write once, run anywhere", which was really more like "write once, debug everywhere". So I don't think we want to sell people on the vision of perfect portability, but I do think that there's skills transfer, meaning you can have one set of CI/CD, one set of, you know, developer tooling, code review, all of that kind of stuff, and have that skill set be portable. And, getting back to that ecosystem thing, there's a real opportunity to hire people with skills immediately valuable to your application platform, even if, you know, they come from a different company or even a different academic environment. I think that consistent app deployment is something that is a real value proposition.
A: Yeah. As a Flux maintainer, I'm interacting all the time with Kubernetes from an API perspective. So, from my point of view, the real strength of Kubernetes is that no matter where it runs, whether it's Azure, AWS, whatever, the API is the same, and I have the same, you know, guarantees. If I want to look at the application and ask, is it healthy, there is a Deployment object, it has status conditions and so on, so I have that consistency of at least saying: is it running or not? Or, do I want to do something with it? I don't have to change, let's say, the operations to perform a rollout, even if, you know, maybe I have to change an Ingress controller or the load balancer implementation every time I'm moving from one cloud to another. There are so many other things around the application life cycle which are very consistent, even if I'm moving from one platform to another. And I think if something will live on from Kubernetes many years from now, it's the idea behind its API and how these things are...
B: Yeah, for sure. I mean, we always thought of it as being developer-oriented APIs instead of infrastructure-oriented APIs. And I think that ability you talked about, for the ecosystem to build a single set of tools on top, is a huge win too, right? We wouldn't have a great ecosystem if Flux didn't work with Kubernetes everywhere. If you had to say, this is the Azure Flux thing and this is the AWS Flux, it just wouldn't work. So that consistent API, even if apps aren't perfectly portable, enables these shared projects, these projects that the whole world can come and rally around. And what's really great, actually, is that in doing that we've also enabled people to forget about problems, right? So, actually, I don't know, Stefan, when you started in the industry, but when I started, figuring out how you shipped your code out to a server in the data center was a problem you had to solve. Am I going to use a tarball? Am I going to use, you know, an MSI? How am I going to install it? There were companies... You know, now it's just a consistent way of doing that, one that everybody knows and that we can rally tools around. So you can do image scanning on a specific image format; you can do all kinds of different stuff in a consistent way. And similarly, we've taken the idea of how you do a deployment and turned it into code that everybody can use, as opposed to maybe a checklist that somebody wrote down, or just tribal knowledge where people said: okay, this is the right way to do a zero-downtime deployment. I think that's really huge in terms of making everybody's lives better and easier: enabling most people to not think about it, and also enabling us, as a community, to come together and produce a single best-practices implementation, as opposed to hacky scripts left all over the place. But we've promised a lot on this slide, so, you know, do you think it actually can be this simple? Are we overselling this, or what are the complexities?
A: Yeah, I think the complexity comes in when you're not a single developer with a single git repo and just one app. More people have to collaborate on it, and it's not just the Kubernetes Deployment: you need, you know, policies around it, you need horizontal pod autoscalers, you need an Ingress, you need all these things. There's that idea that, okay, Kubernetes is declarative, so you can get away with a Dockerfile, a deployment.yaml, and a GitHub workflow in your repo, and you can actually make this all work with a Kubernetes as a service, let's say AKS, plus GitHub Actions. It's a great combination to deploy a simple app. But when you get to many teams and many applications, where maybe the application is made out of many microservices, so you have so many repos and so on, it's quite hard to maintain that simple workflow you started with. So one solution that me and the other Flux maintainers have been working on for, I don't know, five years now, going on six, is the idea of how you can take advantage of the declarative nature of Kubernetes and allow a more streamlined continuous delivery pipeline, something that works at scale. For example, you start with two clusters, staging and production, right? Then you realize: oh, I cannot have a single production cluster, because now my clients are not only in Europe, they are also in Asia, so I have to get the application closer to them. In the original pipeline, the legacy pipeline, let's call it, where you do everything from a single CI job, every time you add a new cluster you have to, you know, put the secret there and make the CI job aware of that cluster. So, to make this story better, our idea was to make the clusters look at the desired state, instead of having the CI process go to the cluster and tell it, with a kubectl apply or kubectl delete, or a helm install, or whatever command you're running: hey, now deploy my app. Instead of doing that imperative thing where you connect to the cluster, the cluster should look somewhere, and it can be a git repo, or more than one, and see if there is a change in the desired state: oh, the container image has changed, so I have to roll out a new version of the application. You don't have to connect to the cluster and tell it: hey, now run this new version. The cluster does it itself, through controller extensions; Flux is just an extension of the Kubernetes API, and there are other solutions out there doing the same thing, Argo CD as well, also in the CNCF. So the idea is basically: you have a new cluster, and that cluster knows from the start that it needs to reach a certain desired state. It needs to install add-ons, it needs to deploy applications and so on, without you having to tell it exactly what it needs to do. I think that's... yeah, it's not the solution that will all of a sudden make you scale infinitely, but it's a good step forward.
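The pull-based model described above can be sketched in a few lines of Python. This is not Flux's actual implementation, just a toy illustration of the idea: a plain dict stands in for the manifests fetched from a git repo, another for the live cluster state, and the agent converges one toward the other.

```python
# Toy illustration of pull-based (GitOps-style) reconciliation.
# `desired` stands in for manifests fetched from a git repo;
# `live` stands in for what is currently running in the cluster.

def reconcile(live: dict, desired: dict) -> list[str]:
    """Converge `live` toward `desired`, returning the actions taken."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(f"create {name}")
            live[name] = dict(spec)
        elif live[name] != spec:
            actions.append(f"update {name}")
            live[name] = dict(spec)
    for name in list(live):
        if name not in desired:          # removed from the repo: prune it
            actions.append(f"delete {name}")
            del live[name]
    return actions

live = {"app": {"image": "app:1.0"}}
desired = {"app": {"image": "app:2.0"},  # new image tag pushed to the repo
           "ingress": {"host": "example.com"}}
print(reconcile(live, desired))          # the agent pulls and converges
```

Run periodically, this is idempotent: once live state matches the repo, the loop takes no actions, which is what makes it safe for a new cluster to bootstrap itself to the full desired state.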
B: You know, I think one of the other interesting things we've seen is that it enables you to have a cluster that's really locked down from a security perspective. Because if you have an agent inside the cluster, reaching out to git to find out its new desired state and then adjusting the APIs inside the cluster, the number of times you need external users to go in and reach into that cluster and make changes is actually quite small. It's just, you know, maybe if there's a live-site issue, or you need somebody to come in and debug a problem. And that means, for example, when you use an AKS cluster with Active Directory, the way we set it up, our recommended production setup, would be that you have a group with no members in it that has access to the production cluster. So by default nobody has access to the cluster most of the time, and then, only in a just-in-time way, you add users into that group, just for a time-bound period, you know, like eight hours, and AAD will automatically remove them after that time bound. And GitOps is what makes that possible, because if you are using traditional CI/CD, as Stefan said, you have to keep that credential in a different place, and you also have that robot or agent, wherever it is, continuously making API calls into your cluster. So you usually need to put the cluster on the public internet, and you need to have that inbound access available to that robot. So I think GitOps not only makes it easier to scale and add new clusters; it actually also enables you to have a more secure footprint by default inside your clusters. I think that's a cool benefit. And, of course, the other thing we see, especially out at the edge: we have three clusters here, and maybe that's typical for an internet service, but we see retailers who have thousands of Kubernetes clusters in retail environments, and there's just no way they're going to manage that from CI/CD, both because of the scale and because of the intermittent connectivity. Reversing the connection, from being a push into the cluster to being a pull down from the cluster, means that if that cluster has intermittent connectivity, it still works. The Flux agent will try for a while, then it'll fail, and eventually, when you get connectivity back, it'll talk to the git repo and adjust things, as opposed to your entire pipeline crashing to a halt because you couldn't do a kubectl apply onto a particular cluster at a particular time. So I think there's a lot of benefit in that reversing of the flow.
A: One more thing I wanted to say here is about drift correction. When you adopt GitOps, you should, in a way, be comfortable with the idea that you shouldn't be going into a cluster and doing edits all of a sudden, right? Because what Flux tries really hard to do is undo all those edits. When it discovers one, it reports: hey, something has changed the state outside of my knowledge, and I will try to revert it to the last desired state. And it does. In this way, you have something to fight with during incidents, right?
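The drift-correction behavior just described can be sketched the same way: compare live objects against the declared state, report any out-of-band edit, and revert it unless reconciliation has been suspended, as you might do during an incident. A hypothetical sketch, not Flux's real logic:

```python
def correct_drift(live: dict, desired: dict, suspended: bool = False) -> list[str]:
    """Detect out-of-band edits to live objects and revert them unless suspended."""
    events = []
    for name, spec in desired.items():
        if name in live and live[name] != spec:
            events.append(f"drift detected on {name}")
            if not suspended:            # during an incident, pause the reverts
                live[name] = dict(spec)
                events.append(f"reverted {name} to desired state")
    return events

live = {"app": {"replicas": 5}}          # someone edited the cluster by hand
desired = {"app": {"replicas": 3}}       # what the repo declares
print(correct_drift(live, desired))
print(live)                              # back to the declared state
```

The `suspended` flag mirrors the pause-for-debugging escape hatch mentioned next: drift is still reported, but nothing is reverted until reconciliation resumes.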
A: So that's why we have this command where you can pause it for a particular time, so you can get in there and, you know, debug and resolve things. But what we are trying to encourage users to do is: if you want to change something, do it declaratively, and your whole team should be aware of the change, right? Because it also has a collaboration aspect to it. You should open up a pull request, someone from your team should, you know, approve the change, and only then will that config change actually be rolled out across the cluster fleet. And it's a hard change in terms of, you know...
B: I'll also say that, I mean, I've seen outages caused by people going in, fixing stuff manually, and then forgetting to move the fix into version control; then the next rollout comes along and overwrites whatever the fix was, and so you effectively have the same outage twice. So I totally think we should get people into that mindset of everything has to be checked in, even live-site fixes. I mean, every once in a while there's an emergency, but more often than not you want to go through the standard rollout procedure even for a live-site fix. So, of course, we talk a lot about GitOps, which is kind of a general way of deploying and configuring your cluster. But what do you think the ideal cloud native platform looks like? What are the pieces that people should be thinking about when they're building that cloud native platform?
A: There are so many people who are part of the delivery process who are not technical, so I think the ideal cloud native platform should allow all the people that are part of delivery and maintenance, and, you know, everything it really means to ship a product to clients, to collaborate on this platform. So this platform has to, in a way, abstract things out and make them obvious. That, I think, should be the driving force behind building a platform. Then, of course, standardization: if you have Node.js, say you have 10 Node.js apps, and every single app has a different Dockerfile, a different Helm chart, a different way of, you know, writing logs or stuff like that, it's not going to scale, because then you can't just create a runbook.
B: A hard lesson, I think, for engineers to learn at some level. But at scale, in large companies, there's just so much value that you get out of that consistency, even for people who know what they're doing, right? And I totally agree with you: a large number of people just want to get their job done. They just want to write some code, check it in, magic happens, it's deployed, and that's goodness, right? But even for the people who want to be down in the cloud native space, at some point you have to say: you don't get to choose which version of Node you run; there's one version of Node.js for our company, and that's it. This is, I think, how you get to the platforms.
A: Yeah, it's also about flexibility, right? At some point you have to allow someone to fine-tune something around their application, but you can't do that to the detriment of security and policy. You can't allow people to say: oh, I'm disabling the policy, and now my container runs as root, because I don't have time to look into this issue. There are different configurations and different levels of what you should be able to change, and I think the platform should be flexible enough, but also have a way of enforcing policies. So even if you click all the buttons and you made your deployment very insecure, something should, you know, tell you, block you, later on. There are great tools in the ecosystem that can do that even before your application gets on Kubernetes: you can do it as static analysis in your pull request, something that just scans your deployments and says: hey, you've set this deployment to run as root, and that's not okay. So yeah, policies are an important aspect of any kind of platform.
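The pull-request-time static check described above is ground that real tools such as Kyverno, OPA Gatekeeper, or kube-linter cover; as an illustration only, here is a tiny scanner over a Deployment-shaped dict. The field paths follow the Kubernetes Deployment schema, but the two rules are just examples:

```python
def scan_deployment(manifest: dict) -> list[str]:
    """Flag two common policy violations in a Deployment-like manifest."""
    violations = []
    pod = manifest.get("spec", {}).get("template", {}).get("spec", {})
    for c in pod.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("runAsNonRoot") is not True:
            violations.append(f"{c['name']}: may run as root (set runAsNonRoot: true)")
        if not c.get("resources", {}).get("limits"):
            violations.append(f"{c['name']}: no resource limits set")
    return violations

deploy = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "securityContext": {"runAsNonRoot": True},
     "resources": {"limits": {"cpu": "500m", "memory": "256Mi"}}},
    {"name": "sidecar"},                 # no securityContext, no limits
]}}}}
for v in scan_deployment(deploy):
    print("policy violation:", v)
```

Wired into CI, a non-empty result would fail the pull request before anything reaches the cluster.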
B: For sure. Yeah, we're seeing great use of policy for best practices. I think policy kind of got, maybe not a bad rap, but it got associated with compliance and regulation and all that kind of stuff. But the truth is that even just having a policy that says you need to have resource requests and limits set on your pods, right, that has nothing to do with compliance, but it has a lot to do with keeping your apps stable. And that is an area where we're seeing more and more use in the defining of these platforms. And again, I think it's not about trying to force developers; it's actually enabling them to think about less, right? They don't necessarily have to remember all this stuff, they don't have to have a checklist in their head of all the things they need to do; policy sort of helps. I say it's like the guard rails on a mountain road.
A: Yeah, I think setting good defaults is one of the most important things when you do standardization. Let's say, as most people do, you create this Helm chart, right, and in the Helm chart there are no defaults: you have to set your security context, you have to set resource requests and limits, you have to set all these things. Why not, you know, set some good defaults there, so people don't have to think about it? If their app doesn't run correctly, they will notice in, I don't know, the test environment, and then they get to fine-tune these things; but you should always set good defaults. And I've seen there is a trend now with policy where you can actually set these defaults at the admission control layer of Kubernetes: you create a deployment, the deployment has no resource requests, no limits, but you can actually inject them based on, I don't know, some conditions, some labeling, based on what namespace they end up in.
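Injecting defaults at admission, which in practice a mutating admission webhook or a policy engine's mutate rule would do, can be sketched as a function that fills in resources only where the author left them unset. The default values and the namespace-label opt-out are invented for illustration:

```python
# Invented defaults; a real cluster would set these per environment.
DEFAULTS = {"requests": {"cpu": "100m", "memory": "128Mi"},
            "limits": {"cpu": "500m", "memory": "256Mi"}}

def inject_defaults(manifest: dict, namespace_labels: dict) -> dict:
    """Mutate a Deployment-like manifest, adding default resources if unset."""
    if namespace_labels.get("defaults") == "disabled":   # hypothetical opt-out label
        return manifest
    pod = manifest.setdefault("spec", {}).setdefault("template", {}).setdefault("spec", {})
    for c in pod.get("containers", []):
        res = c.setdefault("resources", {})
        for key, value in DEFAULTS.items():
            res.setdefault(key, dict(value))             # keep the author's values
    return manifest

deploy = {"spec": {"template": {"spec": {"containers": [{"name": "web"}]}}}}
inject_defaults(deploy, {"env": "test"})
print(deploy["spec"]["template"]["spec"]["containers"][0]["resources"])
```

The key property is that `setdefault` never overrides values the author did set, matching the "defaults, not enforcement" intent described above.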
B: You know, we kind of have the components of it here. What are we thinking about when we're designing it?
A: So I think the first step is to acknowledge that, even before you start building the platform, it's already there, everywhere, right? Each team has their own scripts for deployment; maybe someone builds Helm charts, someone builds, I don't know, good Dockerfiles, whatever. All these pieces everywhere are the platform. So someone has to own the platform project, and in order to unify all that you actually have to create a team; maybe the team name is the platform team, but someone has to own it, especially at the beginning, and be the driving force toward consolidation and all of that. The second step is, of course, identifying all the parties involved in the delivery pipeline. That's very important: you shouldn't take into account only technical people, but everybody; you should know the whole aspect of it. And I think after you describe your process, how are we going to do this delivery, then you can go into Kubernetes, pick up the components that you need, define infrastructure blueprints and all of that. But the human aspect, the first step, is very, very...
B: Important, yes. I mean, one of the things that we've built into our deployment platform is a big red button, which basically says: I want to stop all deployments, right? It's Black Friday, we're going to do a conference, whatever it is. We all know that deploying code is the easiest way to cause an outage; that is the truth. And sometimes you just want to stop every team from being able to deploy code, and you don't want to have to go to every team and say: could you please stop, and here's the window. That's work for them, work for you, painful for everybody. So even just having that ability in the platform, to be able to hit that button and say: we're stopping, we're not doing anything, we're not deploying any code for two weeks or whatever (well, hopefully it's not two weeks, but whatever that time period is), is a human-factors thing, but it's a huge value. And then the other side, I'd say, and I don't know if you've seen this on the human side, is, you know, the people who are out there selling to our customers: they don't have to ask us, they know, and they do a better job talking to the customer. So I think there's a lot of that human-factor stuff that goes into designing that platform.
A: Yeah, that's part of the observability part, and observability is not only about telemetry. It should also be about: what software are you running in your platform? You should have, like, this unified software bill of materials, right? Not just "I'm running these apps"; well, those apps are definitely important, but there are hundreds and hundreds of packages from upstream, from open source projects. You should be able to look somewhere and see all these things. And of course you should also have observability into the deployment pipeline, and see: okay, this region is at this version, those regions are at that version. And this is part of the UI aspect, right? I wrote on the other slide that you should have self-service APIs as well; it's not only about APIs, though. APIs are great, and they are a requirement for building a UI, but at some point, even if you say you don't need a UI, you'll definitely need something, maybe read-only, that gives you that overview, so you don't have to call all these APIs on your own.
B: For sure, yeah. Or even just visualizing, right? One of the things we've done for our SREs is just give them a visual timeline of when each deployment happened, which they can then effectively overlay with the metrics. They go: oh, I see there's a spike in errors here, and I see that there's a deployment of this microservice here. You know, I wouldn't necessarily have thought they were correlated, but because I can visually see that they're correlated, I'm going to go investigate that thing.
A: Yeah, one thing we did in the Flux project is integrate it with Grafana annotations: every time Flux does a pull and then an apply, we annotate all the graphs in Grafana. So when you look at your, you know, business metrics or whatever, you see there: oh, this tag is now deployed. So you see: oh, this is version 2.0; maybe that's why the CPU has spiked and the memory is through the roof.
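Grafana exposes annotations over its HTTP API (`POST /api/annotations`); Flux's notification machinery drives this for you, but the shape of such a call is roughly the following sketch. The URL, token, and tag values are placeholders:

```python
import json
import time
import urllib.request

def build_annotation(revision: str, message: str) -> dict:
    """Payload shape for Grafana's POST /api/annotations endpoint."""
    return {"time": int(time.time() * 1000),   # epoch milliseconds
            "tags": ["flux", revision],
            "text": message}

def post_annotation(base_url: str, token: str, payload: dict) -> urllib.request.Request:
    """Prepare the request; urllib.request.urlopen(req) would actually send it."""
    return urllib.request.Request(
        f"{base_url}/api/annotations",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"})

payload = build_annotation("v2.0", "flux applied tag v2.0")
print(payload["tags"])
```

An annotation posted at apply time is what lets the dashboards show "this tag is now deployed" right on top of the metric graphs.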
B: For sure. And I think the other thing that's on the bottom here is just another really valuable piece of advice for anybody out there who's starting to get their feet wet in producing one of these platforms. There's a real tendency, since we're all in technology, we love technology, we like the cool stickers and all that kind of stuff, to throw the kitchen sink at your platform, and to kind of make sure that, on that CNCF landscape with I don't know how many logos on it now, you've checked off each one like it's a merit badge or something. But don't do that. Be very clear, for every component that you add into your platform, about what the benefit is and what the cost is, in terms of: can you secure it? Can you update it if there's a CVE? Can you operate it if it doesn't work? I think we have to shift from thinking that each component we add to our platform is a new cool thing, to thinking that each component we add to our platform is a potential liability. I went through that same mental model when we talked about software dependencies earlier: I went from a world, five years ago, of being like, cool, npm, whatever, to now asking: what bad actors am I inviting into my code by installing this dependency? And I don't think there are bad actors, necessarily, in the components of your platform, but there is an operations liability and a security liability that comes with each one. So start simple, know that it's going to evolve anyway, the platform is going to change, everything changes, and so anticipate that you can add things as you need them, instead of having to have them all in there at the start.

B: So, when we think about that, then what do we think about in terms of delivering the application? We've got our platform; now we're going to deploy code into it.
A: Yeah. So before we even get to piecing together our platform, I think we need to put on paper the delivery process of each application, or each group of applications. It really depends on the scale that you are running at, but one thing is very clear: not all apps are the same. Some apps are critical for your business needs, for, I don't know, whatever the product offers, and some are not in the critical path. And I think that decision should be made by defining some service level objectives, right? You can group apps based on that and create separate delivery processes. Maybe for something that's highly critical you want to go through dev, staging, test; you want to keep it running for a very long time with just 10 percent of your users on the new version, and so on. But for other components which are not critical, maybe you don't want to wait five days to deliver a patch. So it's quite important to group them; I'm suggesting service level objectives here, but there are so many other ways you can structure them.
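Mapping apps to delivery processes by SLO might look like this sketch; the availability thresholds and pipeline names are invented for the example:

```python
def pick_pipeline(slo_availability: float) -> str:
    """Map an app's availability SLO to a delivery process (invented tiers)."""
    if slo_availability >= 99.9:
        # critical path: dev -> staging -> canary at 10% with a long bake time
        return "progressive"
    if slo_availability >= 99.0:
        return "staged"          # dev -> staging -> production
    return "direct"              # ship patches without the long wait

apps = {"checkout": 99.95, "catalog": 99.5, "admin-tools": 98.0}
groups: dict[str, list[str]] = {}
for app, slo in apps.items():
    groups.setdefault(pick_pipeline(slo), []).append(app)
print(groups)
```

The point is only that the grouping criterion is explicit and written down, so a non-critical tool never has to wait five days behind the checkout service's canary bake.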
A: And another important part of the delivery process is humans. We usually think of a delivery process as a totally automated thing: it runs on its own, it's continuous, humans are not there. But, as you said, that red button has to stop everything; or, I don't know, "don't deploy on Fridays", or whatever. So for every step in the process you should consider: should I make this step optional? Can a human intervene here? Can someone do a rollback even if all the metrics are okay, even if the application is green, because maybe there is a business decision to delay some feature or whatever? So your delivery process should take human intervention into consideration at every step. I'm not saying add gates everywhere, but allow that possibility if you can.
B: Yeah, for sure, for sure. I think there's always that case where either your testing doesn't catch it but you see that there's a problem, or, you know, I always say you never know when somebody's going to be out demoing your thing, and you just don't want to do a rollout while somebody's doing a demo. So, cool. Before we even think about delivering code out to production, what are the pieces that people are thinking about as they go from the code on the disk to the container image that's ready to push?
A: Yeah, before we even get to Kubernetes, we have to build our apps and package them in a container image, and that process can be very simple: you throw in a Dockerfile, you do a docker build. But given the fact that your app is made out of so many external components, you should think very carefully about this initial step, which is very critical to the security of your whole system. It really depends on what kind of packages you are installing on the base operating system in the container. So maybe you should look into creating and attaching a software bill of materials to your container image; recently, for example, Docker BuildKit gained such a feature. Now, it's not perfect, it does not create a perfect SBOM, but it's a start, and if you have nothing today, you should enable it. Another important part is provenance: when you look at a container image, you should be able to tell when it was built, on which machine, and what tools were involved in creating it. This helps you discover, you know, CVEs. Maybe there was no public CVE when you built the image, but two days later there is a major one, and you should be able to tell, through the provenance file: oh, I used this software to build this image, and that software was compromised.
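As a toy illustration of why provenance helps: if every image records the tools that built it, a CVE disclosed later against one of those tools lets you find every affected image. The record format here is made up, loosely in the spirit of SLSA-style provenance, not any real attestation schema:

```python
def affected_images(provenance_records: list[dict], compromised: set[str]) -> list[str]:
    """Return images whose build used a tool later found to be compromised."""
    hits = []
    for record in provenance_records:
        tools = {t["name"] for t in record["build_tools"]}
        if tools & compromised:          # any overlap with the compromised set
            hits.append(record["image"])
    return hits

records = [
    {"image": "registry.example/app:1.4",
     "built_on": "builder-03", "built_at": "2023-02-01T10:00:00Z",
     "build_tools": [{"name": "buildkit", "version": "0.11.0"}]},
    {"image": "registry.example/job:2.0",
     "built_on": "builder-07", "built_at": "2023-02-03T09:30:00Z",
     "build_tools": [{"name": "kaniko", "version": "1.9.1"}]},
]
# Two days later a compromise is disclosed against one build tool:
print(affected_images(records, {"kaniko"}))
```

Without the per-image records, answering "which of our images did the compromised tool touch?" means guessing; with them it is a lookup.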
B
I
think,
there's
even
stuff.
You
should
shift
left
like
one
of
the
things
we've
you
know,
we've
done
is
shifting
left
with
things
like
from
tags
in
your
Docker
file,
right
I
think
people
are
like
I
got
a
private
registry
I
built
my
image.
I
pushed
it
to
my
private
registry,
I'm
good
to
go
and
you're
like
well,
that's
great,
but
what
was
the
base
image
that
you
built
from
right
and
if
that
base
image
is
like
random
user
at
Docker
hub?
B
You should probably be thinking very carefully about whether you want to support crypto mining, basically. And I think that's the other piece of it: because we're moving to these declarative formats, not just for our applications but for defining how the application is packaged together, we can start doing a bunch of the shift-left stuff that we would typically have thought of as being associated with code to our application packaging as well.
B
I can now say: look, you have to have a FROM image from an approved set. I've got these six blessed FROM images; you can start there, and that's it. And I'm not going to let your Dockerfile even check in if you're coming from a different FROM image. And I can even go through and look for more. It's great that Docker has the SBOM stuff in it, but I can also look for things like apt.
B
I can see that you're running a script to install packages with apt, or that you're running npm, and I can actually extract the packages that you're installing from that Dockerfile. So even before we start building the image, there's a bunch we can do to keep track of, and keep a handle on, exactly what's going in. Because I do think that somewhere along the way in the container and cloud-native world we forgot that downloading random binaries from the Internet is a bad idea, and somehow docker pull made us forget that. But I think we're getting it back; we're starting to realize.
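The kind of pre-build check described here can be sketched in a few lines. The allow-list, the helper names, and the sample Dockerfile below are all illustrative, not any particular tool's API:

```python
import re

# Sketch: pre-build checks on a Dockerfile, enforcing an allow-list of
# base images and extracting apt-installed packages. All names illustrative.
ALLOWED_BASES = {"ubuntu:22.04", "gcr.io/distroless/static"}

def base_images(dockerfile: str) -> list[str]:
    """Return every FROM image referenced in the Dockerfile."""
    images = []
    for line in dockerfile.splitlines():
        m = re.match(r"\s*FROM\s+(\S+)", line, re.IGNORECASE)
        if m and m.group(1) not in images:
            images.append(m.group(1))
    return images

def apt_packages(dockerfile: str) -> list[str]:
    """Best-effort extraction of packages named in apt-get install lines."""
    pkgs = []
    for m in re.finditer(r"apt-get\s+install\s+(?:-\S+\s+)*([^\n&|;]+)", dockerfile):
        pkgs.extend(p for p in m.group(1).split() if not p.startswith("-"))
    return pkgs

dockerfile = """\
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl ca-certificates
"""
# Gate the check-in on the allow-list:
assert all(img in ALLOWED_BASES for img in base_images(dockerfile))
```

A real linter would also handle multi-stage aliases and line continuations, but this is the shape of the idea: the Dockerfile is declarative, so policy can run before any build happens.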
A
Yeah, I've seen in the last two years a lot of going back to the basics of building images, and there are many projects and organizations out there trying to make this process as safe as possible.
B
I think the other thing we've seen as well is moving toward things like reduced-package images. If you start from, let's say, an Ubuntu or any other standard, traditional Linux, there are a lot of packages in that image that you might not be using, but that might trigger CVEs in your scanning. And what we found is that the more noise in your image scanning, where someone's like, oh yeah, that's a vulnerability, but we don't ever use that tool.
B
It just makes figuring out whether you're actually secure harder. So reducing the number of packages, going to something like a distroless image or other kinds of reduced images, or, if you're in Go, maybe even just a pure scratch image with a static binary, helps you make sure that when there is a CVE scan, the CVEs are really about your application and not about, you know, some ImageMagick binary that happens to be sitting in your image.
A
Yeah, I mean, it also depends on which programming language you are using. Languages that can be built statically are way more suitable for building from scratch, or from Alpine with no packages, nothing, not even a shell installed. But for interpreted languages you have to rely on OS packages: you have to have OpenSSL or libssh and all of that installed there. But instead of installing the whole dev suite and getting every single tool, only install the pieces you need, say just the SSH and OpenSSL client libraries.
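A minimal multi-stage sketch of that idea, with a statically built Go binary copied into an empty final stage (image tags and paths are illustrative):

```dockerfile
# Build stage: full toolchain, never shipped.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a fully static binary, suitable for scratch.
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: no shell, no package manager; scanners only see your app.
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

For interpreted languages, the `gcr.io/distroless/*` base images play the same role: a runtime and its libraries, but no shell or package manager.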
A
You don't need the whole suite. So there's a lot of stuff in there that isn't needed, that is just there to be exploited, in a way.
B
Yeah, I think it's a great bridge to thinking about security in general, because when people are thinking about images, there's been a lot of focus lately in the cloud-native space on image signing. But I think there's just so much more to the broader problem, because the truth is that if you sign an image that has bad software in it, your signature's not doing much. And so, one of the things I've been really excited about.
B
Lately, something we've put on a lot of our projects is Dependabot, which will come along and open a pull request to update your dependencies in GitHub. But I think in general, getting things into that PR flow kind of comes back to the GitOps thing. The more you can get stuff to just be: I pressed a button, it merged a pull request, and then magic happened and it deployed, the faster you're going to get to people being patched and secure.
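A minimal `.github/dependabot.yml` along those lines might look like this (directories and cadence are illustrative):

```yaml
version: 2
updates:
  # Keep the FROM images in Dockerfiles up to date.
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"
  # Keep application dependencies patched.
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
```

The `docker` ecosystem entry is what closes the base-image gap discussed earlier: Dependabot will open a PR when a newer tag of your FROM image is published.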
B
...request, it's all free, and it is actually also community-driven: a lot of the rules in there are community-sourced, which is kind of cool as well. I think it again comes back to the value of having people come together in these ecosystems to produce a unified set of best practices.
B
So the next thing I was really thinking about was: what are the pieces of Kubernetes that we should be thinking about in this platform? What does Kubernetes offer? What do we need to add? How do we figure it out? You know, I joke about the CNCF landscape being an eye chart, but there are, I don't know how many projects on that slide now. So how do we figure out which are the right ones to use?
A
It's a tough process, right? You define your delivery processes, and from there you can extract the features that you want. We serve applications to end users, and that has to go over the internet, so I have to expose my application on the internet. That requirement materializes itself into: I need an Ingress controller on Kubernetes. So first you have to identify the features, but for every single feature you have a gazillion options in the CNCF landscape.
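Once an Ingress controller is in place, the requirement itself is only a small manifest. Hostname, class, and service names below are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx   # whichever controller the platform installed
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```

The manifest is portable; the hard platform decision is which controller sits behind `ingressClassName`.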
A
There are so many Ingress controllers out there, so many CNIs, service meshes, and so on. There is no way around it: if you are building your own platform, you have to put effort into it, and you have to understand these tools, do your own benchmarks, make your own decisions. That's one way. Another way is to trust someone else with that recommendation. If you are an Azure customer, I bet Azure has opinions, and I know it supports some of the CNIs.
A
If there are components out there, or feature sets of Kubernetes that you want, that no cloud will support, I don't know, weird things around networking and all that stuff, then at some point you have to put something there yourself, and you have to understand what you are installing in Kubernetes. Kubernetes add-ons, as we call them, CRD controllers, or just controllers, or operators: these things extend Kubernetes, and you have to keep in mind that once you have deployed such a controller, you have to take care of its life cycle.
A
The same way you do with Kubernetes upgrades and so on. And being a controller, being something at the heart of Kubernetes, is very tied to Kubernetes versioning, how Kubernetes functions, and so on. So the more components you add to your cluster, the more features you have, but the greater the maintenance burden. So think very carefully before you install, I don't know, a CNI, a service mesh, an Ingress, a load balancer, all of that, just to expose a simple app to the outside. You may not need an advanced CNI or, I don't know, a service mesh for simple things.
B
For sure. I've always said the best thing you can choose is the thing that someone else will run for you. And again, sometimes we as engineers don't always love making decisions that way. We want to use the thing that is the coolest, or the thing we find the most intriguing, or the one that is written in a language that we like.
B
But the truth is that the ability to call someone up and say it's broken, please fix it, is just worth way more than anything else. So, thinking about all these different Kubernetes clusters: we've talked a lot in the context of a single cluster, but the truth is that there are actually a lot of clusters. We talked about staging and production earlier. How should we be thinking about the various environments that we might be deploying to?
A
So I think one of the most important things about your platform is that it should abstract away all the infrastructure complexity of setting up an environment. You should be able to say, I want a new test environment, and you shouldn't have to install all the things and create a cluster from scratch and so on. So I think it's important for a platform to offer environments as a service.
A
I don't know if it's feasible to do that for production environments, but you should definitely be able to onboard a new member of your dev team: they should be able to get their own test cluster or dev cluster or whatever, without putting them through all the burden of creating one from scratch.
B
At the extreme, I mean, you said maybe it's not possible for production, but at the extreme we actually have customers who do blue-green Kubernetes. They do a blue-green deployment, but they turn up an entirely new cluster, install all the software, test it, and then tear down last week's cluster, and they do that on a weekly cadence.
B
I think the power of the cloud at some level is that you can do that with an API, and the great thing about it is that you're practicing your failover, you're practicing your disaster recovery, every single week.
A
GitOps helps a lot in creating almost identical clusters, because it's basically a directory in a git repo where you have all the Kubernetes add-ons; everything is in there. So if you make small changes with, I don't know, a Kustomize overlay or with some templating or whatever, you can change the load balancer name or the DNS name and so on, and you can easily create an almost identical cluster next to the one you already have.
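A sketch of that layout, with a Kustomize overlay that changes only the DNS name for one environment (paths and names are illustrative):

```yaml
# Repo layout sketch:
#   base/                 kustomization.yaml + shared manifests
#   overlays/staging/     small per-environment patches
#   overlays/production/
#
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Ingress
      name: myapp
    patch: |-
      - op: replace
        path: /spec/rules/0/host
        value: staging.example.com
```

Everything shared lives once in `base/`; each environment is a thin overlay, which is what makes standing up a near-identical cluster cheap.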
A
That's a great first step to achieving blue-green deployments in production. But I think, from a platform perspective, you should start with being able to spin up test or dev environments fast, and work from there towards being able to replace production in a week.
B
Yeah, and I think that brings up another related discussion, which is: how do you perform the rollout across these various environments, whether it's into multiple dev environments or into staging environments? Say you need to upgrade your monitoring or whatever it happens to be. Clearly, blasting changes to the entire world all at once is a bad idea.
A
Yeah. So the GitOps way, because everything is declarative, is that you basically have to have some automation that moves changes from one declaration to another. In a really simplistic way of doing it, you have two deployment files: one for your staging environment, one for your production environment. You change only the staging one, and then Flux, or whatever is there, picks it up and deploys.
A
Then you need something that understands that a rollout to production may fail, but that maybe you shouldn't roll back, because if only one region out of three fails, it is better to freeze it there, fix it, and go forward than to roll back everything. So you can't have a single solution that works for everything. It's very much tailored to how business-critical the app you are deploying is and how many clusters you are deploying to. But with Flux and Argo and other GitOps solutions, there are basically two ways of orchestrating this.
A
There are two approaches, and neither one is perfect. I don't like the idea of having a management cluster, because in my mind that becomes a single point of failure: you have another cluster you have to maintain, and so on. But for many organizations that's the solution, because they want a single entity that can actually drive changes.
B
Yeah, we've seen a lot of people do GitOps on AKS, and we've seen a lot of people use tag-based workflows, where they manipulate tags in the repo. Like you said in the intro, it always scares me a little bit to have multiple copies of a file, because I've seen so many cut-and-paste mistakes: I remembered to fix it in one place, but I didn't remember to fix it in the other.
B
So then the characteristics of application rollout, I think, are all designed around this idea that we're delivering global applications out to the world. The Azure Kubernetes Service is present in something like 60 different regions around the world, with more every quarter. But we've made this promise that failures have to be local, because otherwise people can't rely on you. And you just mentioned deciding when to roll back, and then there's also the fact that every single change you ever make could break something. We went through a long list of the various ways things have been broken, but it feels to me like every single time we think we've found every possible way to break an application, we find a new way to break an application. So, given all of that, how do we...?
A
There are all sorts of dependencies in the deployment pipeline. For example, the most usual one is a database migration: the new version of the app runs the migration and it renames some columns, and after the app runs for a little bit it fails; it has some bugs and you want to roll back. Well, surprise: the old version does not work, because you have renamed the columns, or you have removed columns altogether. So you are not able to roll back.
A
You have to manually, I don't know, do a database restore; you lose data; it gets so complicated. So I think it's critical to think about all these dependencies and make at least two consecutive versions backwards compatible with their data stores, caching, and all the other dependencies. If you can't do that, you'll really have a hard time, no matter whether you are using Kubernetes or serverless; it doesn't matter, you still hit this issue.
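The usual way to keep two consecutive versions compatible is an expand/contract migration. A runnable sketch with SQLite, where the table and column names are illustrative:

```python
import sqlite3

# Expand/contract sketch: rename a column across two releases without
# breaking rollback. Table and column names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('ada')")

# Release N (expand): add the new column and backfill it. Old readers still
# see `name`; new readers use `full_name`. Either version can run now.
db.execute("ALTER TABLE users ADD COLUMN full_name TEXT")
db.execute("UPDATE users SET full_name = name")

# Release N+1 (contract) would run, only once release N is everywhere:
#   ALTER TABLE users DROP COLUMN name
# Dropping earlier is what makes rollback impossible.
row = db.execute("SELECT full_name FROM users").fetchone()
assert row == ("ada",)
```

The window between expand and contract is exactly the rollback safety margin: during it, both the old and new app versions work against the same schema.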
B
And practice it, too, I would say, because even if you think you can roll back, we do things like roll forward in a canary, then roll back, and then roll forward again, just to practice it. Just to make sure that you haven't introduced some new database, or some new file format, or some new whatever it is, where you thought you had it covered but you're now broken.
A
Flagger, part of the Flux project, does canary deployments, A/B testing, and so on. Early on, people noticed: hey, it's okay that Flagger can detect changes in my app version, but I changed something in a ConfigMap, that value was mounted as an environment variable for my app, and my app went insane because that variable was wrong; it couldn't understand it, it crash-looped, and so on. So now Flagger treats config changes, be it in Secrets or ConfigMaps, as code changes: it does the canary analysis and all of that. And I think it's important to treat code and config changes as a whole.
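A minimal Flagger Canary sketch of that behavior; the values are illustrative, and real deployments add metrics and webhooks to the analysis:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: myapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  # Changes to the Deployment, and to ConfigMaps/Secrets it references,
  # all trigger the same progressive analysis.
  analysis:
    interval: 1m      # how often to evaluate
    threshold: 5      # failed checks before rollback
    maxWeight: 50     # max traffic shifted to the canary
    stepWeight: 10    # traffic increment per step
```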
A
There are some ideas in this direction in the CNCF and in the Open Container Initiative group, where what we really want to achieve is to be able to create an OCI artifact that holds your code, as in the container image, but also your configuration. So when you deploy something, you have configuration along with code, and the two can be tested together, CI'd, and delivered as a single package, versioned in a single way.
B
I'm really excited; actually, a lot of the work on OCI artifacts has been driven out of one of my teams, and we're really excited about the things you can do with OCI artifacts. I think you're exactly right. I feel like one of the things that we didn't quite get right in the design of Kubernetes early on was integrating ConfigMap into Deployment more tightly. ConfigMap is kind of a byproduct of the fact that you have a pod template in there.
B
But it's not really a first-order thing, and I do think that having the tighter association between config and code is the right way to go about doing that.
B
Additionally, I think we're going to see this a lot with ML and AI as well, where people are going to want to ship images and models, and they probably are going to want to rev them independently. But you're going to run into exactly the same problems: my code expects a certain AI model, and my AI model is a different model than what my code expects, and either it doesn't work correctly and it crashes, or maybe it even just gives bad data back.
A
One second, I want to conclude here about OCI artifacts. There is a new API specification, the Referrers API, which has landed in OCI 1.1, where you now have a way to say: this image references this other layer in the registry, which is the config, which references this other thing, which is the machine-learning model, and so on. So we are getting closer, this year or next year, to having tools that will address this aspect, and we're working on it in the Flux project.
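With a registry that supports OCI 1.1, tools like the ORAS CLI already expose this. The artifact type and names below are illustrative:

```shell
# Attach a model to an existing image via the Referrers API
# (assumes oras >= 1.0 and a registry with OCI 1.1 support).
oras attach \
  --artifact-type application/vnd.example.model \
  registry.example.com/myapp:1.0 \
  model.onnx

# List everything that refers to the image: SBOMs, signatures, models...
oras discover registry.example.com/myapp:1.0
```

The attached artifact is versioned and addressed alongside the image without being smashed into its layers.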
B
For sure, and my guess is it will even extend some of the base-image stuff that we were talking about, where you could imagine saying: I'm going to have a base Java JDK image, and I might actually attach my jar as an artifact, not necessarily as a layer that I put right on top. Because at some level the JDK image is something that someone else produced, and my jar file, or my war file, or whatever, is going to be something that I produced.
B
Today it's hard to have a central team push an update: if you know that there's a vulnerability in your JDK image, you suddenly have to talk to all of the other teams and get them to rebuild their images, and it's very painful. You can set up CI/CD to do it, but it's painful, and I think it's because we don't have that ability to say: here's one thing, and here's a different thing, and they come together, but they're not smashed together.
So.
B
But with the jar you could do either one, and I think that's going to really help people manage a lot of this stuff in production a little bit better, where you might even be able to centrally patch something like Log4j without talking to every single application team. That's sort of the goal.
A
Oh, if you build your platform, if you go on that journey, I think you should be really, really careful about what you're choosing, and if you can choose a managed service, you should be doing that. Of course, it's also a business decision: you also have to take costs and everything into consideration. But in the long run, I think the more managed services you get into your platform, the easier it will be for you to maintain it.
A
But if you go, you know, bare metal and everything is on you, then you have to be very conscious that you need a team in the long run; you need a lot of maintenance, and platforms evolve. Everything in the CNCF landscape changes all the time, so you have to be prepared for that: the CNI you deploy today will not be the CNI you deploy tomorrow.
B
For sure, and I think, you know, it's easy when you get a price sheet to know what a managed service costs; it's a lot harder to understand the maintenance and engineering costs of owning it yourself. And I think it's incumbent on us, the people who are developing platforms, to be clear with everybody that it's going to cost you: open source is not free like beer, it's much more free like a puppy. So you should expect that you're going to have to do some work.
B
You know, in many cases it's actually a more efficient choice to use a managed service in a public cloud or with an ISV like Weaveworks. And I think the other piece we said, maybe the corollary, is that every component you pull into your platform is adding cost: it's either adding literal cost, or it's adding time that you're going to have to spend patching it, maintaining it, and operating it, and potentially reliability issues too. So be very careful, and embrace the change.
B
So I hope that was useful for everybody, and you can reach me on Twitter or GitHub at brendandburns.
A
And on GitHub. It was great talking to you, Brendan; I had fun. Absolutely. Hopefully, people will get something out of it.