Description
Upgrading to a new release is one of the most disruptive operations we regularly inflict on our Kubernetes clusters. There are multiple strategies for doing an upgrade, but they all require rescheduling workloads and restarting cluster components.
In this talk we will share lessons from a year of automated Kubernetes upgrades: how we upgrade, what can go wrong, and tips for keeping your workloads running smoothly through this disruptive process. We hope these lessons will help others avoid pain in their Kubernetes upgrades.
Presenter:
Adam Wolfe Gordon, Senior Software Engineer @DigitalOcean
Ariel Jatib: Okay, I'd like to thank everyone joining us here today. Welcome to today's CNCF webinar, "20,000 Upgrades Later: Lessons from a Year of Managed Kubernetes Upgrades." My name is Ariel Jatib. I'm a business development manager for cloud native technologies at NetApp, and also a CNCF ambassador, and I'll be moderating
today's webinar. I'd like to welcome our presenter, Adam Wolfe Gordon; he's a senior software engineer at DigitalOcean. A few housekeeping items before we get started. During the webinar you're not going to be able to speak as an attendee; there's a Q&A box at the bottom of your screen. Please feel free to drop your questions in there, and we'll get to as many of those as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF's code of conduct. Please do not add anything to the chat or questions that would be in violation of that code; basically, be respectful of all your fellow participants and presenters. Please note that a recording of this talk and the slides will be posted later today on the CNCF webinars page at cncf.io.
Adam Wolfe Gordon: But more importantly, I want to talk about some lessons that we've learned from doing upgrades for a year. These are lessons both for cluster operators, meaning people who are doing upgrades on Kubernetes clusters, and for developers and others who are deploying workloads to Kubernetes: things that will help your upgrades go better, make them easier, and keep your workloads running as expected as you upgrade your cluster. I want to start today with a little bit of background on how this talk came to be.
This was a very exciting new feature in our product. The thing we probably didn't tell you at the booth was that you couldn't actually upgrade yet, because we hadn't enabled any upgrade paths for our customers. We had tested our upgrade process a whole bunch; I had run hundreds and hundreds of upgrades on test clusters. But if you went to your cluster page on DigitalOcean, you would still see that your cluster was up to date, regardless of whether it actually was.
By the time KubeCon Amsterdam rolled around, where we were supposed to give this talk, we would have done about 20,000 upgrades, and that's a really nice big round number, so I put it in the title of the talk. In preparing for this webinar I ran the numbers again, and we've actually accelerated our upgrades a little bit: we've done more like thirty-five thousand upgrades now, in about a year. If you run the math, that's about a hundred upgrades a day, across thousands and thousands of clusters.
So that leads to my favorite slide, which is disclaimers. I have two disclaimers for everything I'm going to say today. First of all, the lessons I'm going to talk about are lessons from our upgrade process at DO, and there are lots of different ways to upgrade Kubernetes. I'm going to talk about some of the variations in how you can do upgrades, but depending on how you choose to do yours, you might see different things than we do, and only some of what I'll talk about today will be relevant to you.
If what you take away from this talk is that you want to do upgrades a different way than we do, because you don't want to hit the same issues we hit, that's a totally valid takeaway; I don't want to imply that what we're doing is the right process for everyone. The other disclaimer is that the lessons we've learned come from upgrading our customers' clusters, and their workloads are probably not the same as your workloads.
So let's start by talking about what you have to do when you want to upgrade a Kubernetes cluster. There are basically two parts to a Kubernetes cluster: the control plane, which is sometimes called the master, and the worker nodes. Upgrading actually sounds like a very simple process, and it fits on one small slide: first you upgrade the control plane, then you upgrade the worker nodes, and then you're done. That's it; you've got a freshly upgraded Kubernetes cluster. It sounds simple, of course, but that's an oversimplified
view of what you have to do. This is probably still incomplete, and it will vary depending on your exact environment, but when you're upgrading the control plane you're upgrading a bunch of things, and there's some ordering you have to be careful about, although some of these steps can be done in different orders. The first thing you're going to do is read through the release notes for your new Kubernetes release, figure out whether you're using anything that's deprecated in your current version and not going to be supported in the version you're upgrading to, and update
things if you need to. After migrating any resources that are no longer supported, you're going to upgrade etcd, if you need a new etcd. Then you can upgrade the actual control plane components: your API server, your kube-controller-manager, and your kube-scheduler. Then you can upgrade your CNI plugin for networking, if you're using one, and then you can upgrade any provider-specific things.
For the worker nodes, you drain each node and make any changes that are needed to the kubelet configuration. Once that's done, you can upgrade the kubelet, uncordon the node, and let workloads start being scheduled on it again. Then rinse and repeat for each of the nodes in your cluster, however big your cluster is.
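Sketched as a runbook (kubeadm-style and purely illustrative, not DigitalOcean's tooling; the node name, package versions, and flags are placeholder assumptions), the per-node loop looks something like this:

```yaml
# Illustrative in-place worker upgrade, one node at a time.
# Names and versions are placeholders; adapt to your environment.
- node: worker-1
  steps:
    - kubectl drain worker-1 --ignore-daemonsets --delete-local-data  # newer kubectl calls this --delete-emptydir-data
    - apt-get install -y kubeadm=1.15.3-00 kubelet=1.15.3-00          # install the target versions
    - kubeadm upgrade node                                            # refresh the local kubelet configuration
    - systemctl restart kubelet
    - kubectl uncordon worker-1                                       # allow scheduling on the node again
# ...then rinse and repeat for worker-2, worker-3, and so on.
```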
If you've got capacity to do a couple of nodes at a time, that can help speed up the process. Now, assuming that you're running Kubernetes on VMs and not on bare metal, and we are running on VMs for our managed product, there's a bit of a shortcut you can take: rather than upgrading each component individually in place on the nodes, you can just completely replace each of the nodes in the cluster.
So before you start, you still need to do that initial step of making sure that everything you're using is supported in your target version. But once you've done that, to upgrade the control plane you're going to destroy your old control plane node and create a new one that has the new versions of everything. This does assume that your etcd data is resilient to that: either you have multiple etcd nodes and they can be rebuilt when you destroy one and create a new one, or your etcd is outside your cluster, or you're
otherwise sure your etcd data is safe. Then you can blow away your control plane node and create a brand new one that has the new versions of everything. That's a much simpler process than trying to update each individual component of the control plane in place. Same thing for the worker nodes: you still need to do the draining, but once you've drained a node you can destroy it and create a brand new node in its place, and the new node is going to have the new versions of everything: new kubelet, new kubelet configuration, etc.
Now, if you've worked with Kubernetes for a while, or if you've upgraded clusters before, you can probably already see some potential issues with doing an upgrade this way. There definitely are some issues, and that's what we'll spend a lot of time talking about today. But there are also some advantages, and this is how we chose to implement upgrades in our managed product: for our customers' clusters, we do full node replacement of each of the nodes in the cluster, rather than upgrading things in place.
We chose to do it that way because it has a bunch of advantages. First off, if you upgrade by node replacement, then every node in the upgraded cluster is a clean slate. There's no chance that a customization made to a node is going to persist across an upgrade and cause a problem in the new version.
The other nice thing about doing upgrades by node replacement is that it's easier to automate. There aren't that many steps; there are basically four operations in this process: draining a node, deleting a node, creating a node, and waiting for a node to become ready. If you've already built automation for managing your clusters, for example automation to create a cluster or to do maintenance on a cluster, you've probably already automated these operations, so automating your upgrades is just combining primitives you already have in the right order.
Finally, this process works regardless of what kind of upgrade you're doing. You don't need to worry about whether a particular update requires a CNI upgrade or not, whether it requires a new etcd or not, or whether it's a minor version upgrade or a patch version upgrade. All of the component upgrades that are going to happen are encapsulated in the new nodes you create.
Likewise, at least on our platform, when you do an upgrade every node is going to have a new name in Kubernetes. It's going to have a new IP address, and it's not going to have any labels or taints that were on the old node. This really bit some of our customers who were scheduling their workloads directly based on node names, or indirectly based on labels, or who were directly accessing their nodes by IP rather than using our managed load balancer.
So some lessons for cluster operators here: if you're doing upgrades by node replacement, it is helpful to your users to reuse node names and IP addresses when you replace nodes, if that's possible. Workloads probably shouldn't expect that to happen, and it's not great to assume a node name is going to persist forever in Kubernetes, but scheduling by node name is a tool that exists, and someone will use it.
So if you can make that work, it will prevent some problems. Regardless of whether you're going to do that, or are able to do that, you definitely want to make sure you're retaining labels and taints in some way. People who deploy workloads to Kubernetes do want some level of control over how their workloads are scheduled and which nodes they're scheduled on, and labels and taints are the right tools to use for that in Kubernetes.
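For example, instead of hard-coding spec.nodeName, a pod can target a class of nodes with a nodeSelector and a toleration. The label and taint here are hypothetical, but the shape is standard Kubernetes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload
spec:
  # Schedule onto a class of nodes, not one specific node (avoid spec.nodeName).
  nodeSelector:
    workload-class: high-memory          # hypothetical node label
  tolerations:
  - key: dedicated                       # hypothetical taint on those nodes
    operator: Equal
    value: high-memory
    effect: NoSchedule
  containers:
  - name: app
    image: nginx:1.25
```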
So providing some way to set persistent labels and taints that will survive an upgrade is an important thing when you're building an upgrade process. Finally, and kind of along the same lines, providing a good ingress or load balancing solution that works with your clusters is important. Getting traffic into a Kubernetes cluster is actually kind of tricky; that's a whole talk on its own that I'm not going to give today, but almost everyone needs to do it. You usually have some kind of traffic coming into the workloads you're running in Kubernetes.
For developers, these are things that will really help make your workloads more resilient to upgrades and other kinds of node replacement. First off, if you need to customize things about a node, like the sysctl values I mentioned, you're best off using Kubernetes primitives to do it. Two good ways to do that are either a privileged daemonset that runs on every node and makes the customization you need, or an init container that is part of the workload that requires the customization. Either way,
what you end up with is something that gets scheduled by the Kubernetes scheduler on each node that needs the customization and makes that customization, so you're not doing it manually. That way, if your node goes away and a new one gets created, it's going to get the customization you need.
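A minimal sketch of the init container approach, with a placeholder sysctl (the classic vm.max_map_count tweak) standing in for whatever your workload actually needs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tuned-workload
spec:
  initContainers:
  - name: sysctl-tuner
    image: busybox:1.36
    securityContext:
      privileged: true                   # required to change node-level kernel settings
    command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
  containers:
  - name: app
    image: nginx:1.25
```

Because the init container runs on whatever node the pod lands on, a replacement node gets tuned automatically the first time the workload is scheduled there.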
Secondly, don't use node names for scheduling. I mentioned you can do it, but it really is not a good idea. The Kubernetes philosophy is that nodes are livestock, not pets: nodes are going to go away at some point.
If you're doing upgrades the way we are, they'll go away during an upgrade, but they may also go away at other times, for maintenance or because of hardware failure or whatever. If you want some control over scheduling, you're much better off using labels and taints, and learning how to set those in a persistent way in your environment. On some providers, or if you're managing your own cluster, that's going to mean just setting the labels or taints on the node directly through Kubernetes; on other platforms it works differently.
So read your provider's docs if you're using a managed Kubernetes service, or ask your cluster operator if you're not managing your own cluster. Make sure you understand what happens when a node goes away and gets replaced, and how to get labels set appropriately. Likewise, you're always best off using a supported ingress or load balancing service provided by your cluster operator.
If you can, that's going to make sure traffic keeps getting to your nodes when they're replaced, and that your traffic keeps flowing during an upgrade, and that's a good best practice to follow. There are always going to be use cases where that doesn't work and you need to build your own thing, and there are totally valid reasons to do that, but I would treat it as a last resort.
There are a few basic problems this causes, and they're all basically related to draining nodes. First off, if a customer or user is running right at the limits of their cluster, so the cluster is basically full to capacity, then it might not always be possible to drain a node's worth of workloads to another node.
We do have some users who have single-node clusters; hopefully they're not using them for production workloads, but they do exist, and if we try to drain their single worker node, there's just nowhere for those workloads to go. Either way, whether because of capacity or because you've decided to have a single-node cluster, you're going to end up with downtime for your workloads if they can't be drained to somewhere.
You can hit churn issues even if you don't have capacity issues at all. Another issue with break-before-make is the extra churn it causes for workloads. When you drain the first node in a cluster, in our scheme the workloads running on it are guaranteed to end up on another node that's still running the old version, and that node is going to have to be drained and replaced right away as well. So those workloads are going to be drained, or evicted, twice instead of just once, and that's just extra
downtime and extra chances for things to go wrong; not great for the workloads. So the lesson for operators here is pretty simple: if you're going to do upgrades by node replacement the way we do, it's best to figure out a way to create the new nodes before you delete the old ones. This might be a little more complicated to automate, as it is for us for various reasons, but it's really a much better experience, and if
we were building our upgrade process again today, this is how we would build it. If you really can't do that, for example if you're running a bare-metal cluster where you can't easily add nodes before you drain nodes, you might want to consider reserving some capacity for upgrades: having a node that isn't usually schedulable that you enable during an upgrade. That gives workloads somewhere to drain to if you're near capacity. It might be kind of expensive, but it might be worth it.
The lesson here isn't really specific to upgrades; it's just that your Kubernetes platform is eventually going to lose a node. A node is going to have to be replaced for an upgrade, or for maintenance, or for some other reason. So leave some capacity: make sure at least one node's worth of workload can be drained to somewhere, so there's somewhere for it to go when a node needs to go away. That's just good practice to keep your workloads running smoothly, through not only upgrades but also failures and other kinds of operations.
A related thing that we got wrong was that we replaced nodes exactly one by one: destroy one node, create one node, destroy one node, create one node, until they're all replaced. This is just fine for a three-node cluster or a five-node cluster. It's not great for a 300-node cluster, because it just takes a long time, and it gets really bad if you have a big cluster and the workloads don't evict quickly: if a node doesn't get drained quickly, we end up hitting a drain timeout.
If you take 15 minutes to drain each node, even twenty nodes is five hours just of draining, so your upgrade is going to take more than five hours, and that's a long time to be waiting for your cluster to do an upgrade. To restate that a little more concisely: replacing nodes one by one is just slow, and it can be even slower if you have workloads that get stuck. Upgrades can only be so fast, since draining takes some time, but we want to make them as expedient as possible.
Most users of Kubernetes are going to want to watch their upgrades, or keep an eye on their cluster during an upgrade, to make sure nothing goes wrong, because it is a very disruptive operation, and you don't want to leave them watching a cluster upgrade for 5 hours or 12 hours. You want upgrades
B
As
fast
as
you
you're
really
so
for
operators,
the
lesson
is
really
simple:
replace
multiple
nodes
at
once.
If
you
can,
that
will
just
help
you
out
great
a
big
cluster
quickly.
This
kind
of
requires
that
you
do
make
before
break,
not
what
we
did
break
before
make
so
users,
you
know,
may
have
capacity
in
their
cluster
to
absorb
one
node,
we're
kind
of
worth
of
workload
when
you
need
to
drain
a
node,
they
probably
haven't
set
aside
like
ten
nodes.
Eviction doesn't happen instantly; it takes some time for a process to respond to a signal and be evicted, but it also shouldn't need an hour. If you set a good timeout (like I said, I think our current timeout is 15 minutes), that's going to make sure you at least have an upper bound on how long it takes to replace a node, and that upper bound is somewhat reasonable. For developers, there's not a lot
you can do about how your cluster operator or provider replaces your nodes or how they do upgrades, but you can help with the draining aspect. There are two aspects to this. First, you want to make sure your workloads can be evicted safely, so use pod disruption budgets and the other mechanisms in Kubernetes to make sure enough pods of your workload stay up all the time.
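For example, a PodDisruptionBudget like the following keeps a drain from evicting too many replicas at once (policy/v1 on current clusters; the same object was policy/v1beta1 in the releases discussed in this talk):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2            # never evict below two ready pods
  selector:
    matchLabels:
      app: web               # hypothetical label on the workload's pods
```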
And second, you want to make sure that your workload and your application can handle being moved around. Now, back on the positive side of things, here's something we got wrong where we were very happy to be wrong. I mentioned earlier that when we did our GA, we only offered automated patch version upgrades: so, for example, 1.14.1 to 1.14.2, but not 1.14.2 to 1.15.0, which is a minor version upgrade. We started out with patch version upgrades because they're a bit simpler: resources and APIs in Kubernetes aren't supposed to change between patch versions, so everything
One decision we did make that really helped the minor version story go well was that we leave most Kubernetes alpha features disabled: they're disabled by default, and we don't change that configuration. Alpha features are the things most likely to change or be deprecated between releases, so if you leave them disabled, there's just a whole class of problems you're not going to have to worry about. Specifically, you won't have to worry about changing things that use alpha features to make them work in the new version.
The lesson for operators here is pretty simple: I would recommend leaving alpha features off, which is the default. They are much more likely to break between releases, and like I said, you're going to eliminate a whole class of problems by leaving them disabled. If you or your users do have a reason to use an alpha feature, just consider it as a trade-off: there's value to using the feature, but it's also potentially going to cause pain at upgrade time. It's something you're going to have to think about.
There is one alpha feature that we did enable, which is CSI snapshots. We felt it offered a lot of value for our users, and it's something they requested, so we did enable it in our clusters, and we're actually doing work right now to migrate away from those alpha snapshots. It is one extra piece we're going to have to take care of in a future minor version upgrade, to make sure we migrate from the alpha version of that feature to the beta version.
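Feature gates are flags on the control plane components. As a hedged sketch for a kubeadm-managed cluster of that era (the version and gate shown are assumptions for illustration, not DigitalOcean's configuration), enabling just the snapshot gate while leaving other alpha gates at their off defaults might look like:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
apiServer:
  extraArgs:
    # Leave alpha gates at their (off) defaults; opt in only where the value
    # justifies the upgrade risk. VolumeSnapshotDataSource was the alpha gate
    # for CSI snapshots in this timeframe.
    feature-gates: "VolumeSnapshotDataSource=true"
```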
For developers, I think the lesson is similar. This is something you have a lot of control over: regardless of whether alpha features are enabled, you can decide whether to use them or not. I would be reluctant to use them if you can avoid it; treat them as kind of a last resort, and if you do need to use one, be extra vigilant around upgrades.
I'll spend the rest of this talk today on two common classes of problems that we've seen with upgrades. The first one is issues with the Container Storage Interface, or CSI, a component in Kubernetes. For those of you who maybe aren't familiar with it, CSI is a pluggable way to provide storage to containers. It's an abstraction layer between Kubernetes (or other orchestrators; it's an orchestrator-agnostic framework) and storage providers
that allows you to present storage to containers in a standard, contracted way, so that storage support isn't built directly into Kubernetes. Kubernetes clusters on DigitalOcean, whether they're using our managed offering or are managed by a customer, can use our open source CSI plugin to attach our persistent block storage to their workloads. This is the mechanism we recommend for any user that needs persistence in their clusters on DigitalOcean, because your CSI volumes live completely outside of your cluster: they're going to survive an upgrade.
We've seen different problems with CSI, and I'll talk about a couple of them specifically as they relate to upgrades. The first issue we saw with CSI is just that it was generally immature when we started using it. The first release we supported in our product was Kubernetes 1.10, and that was the same release where CSI was promoted from alpha to beta. So in that 1.10 timeframe, the Kubernetes components that support CSI were relatively new, and most of the CSI drivers, including ours, were relatively new. Unsurprisingly, there were some bugs in both of those things.
The symptoms we would hit were that a node wouldn't be able to be fully drained (we'd hit the drain timeout because CSI was trying to detach a volume that actually wasn't attached to the node), or we would drain a node, try to reschedule the workloads on another node, and not be able to attach the volumes, because CSI thought the volume was still attached to a different node, or thought it was already attached. Those kinds of issues. The nice thing is that CSI has matured,
and on recent releases I wouldn't worry about this as much. The other problem we've hit related to CSI is also kind of related to the fact that it wasn't that mature when we started. Every CSI driver has a name, and the convention defined in the CSI specification is to name them based on a domain name, kind of like Java package naming. In the early versions of the CSI spec, that convention was reversed-FQDN naming.
B
So
if
you
were
the
example
corporation,
you
would
call
your
driver
to
combat
example
that
CSI
in
later
versions
that
changed
to
be
forward
FTD
n.
So
now,
if
your
example
corporation,
you
would
call
it
CSI
on
example.com
and
we
changed
our
driver
when
the
spec
changed
to
be
I
conduct
a
Trojan
to
digital
ocean,
and
this
name
ends
up
being
used
in
a
bunch
of
places
and
creates
it
gets
used
when
your
driver
is
registered
with
the
communities
subsystem
that
manages
drivers.
and so when we went to upgrade from a CSI driver release where we used the old name to one where we used the new name, we started hitting a problem. The problem was basically that Kubernetes no longer knew that the persistent volumes created with the old driver should be managed by the new driver, and those volumes became unmanageable: it could no longer attach them to workloads, and if you tried to drain a node, they wouldn't get detached and reattached. Our solution was to make the name configurable in our driver.
It defaults to the spec-correct thing, which is the forward-FQDN naming, but it can be overridden via an environment variable. What we do when we upgrade is detect whether the cluster was using the old name; if it was, then we configure the new version to also use the old name, so the name doesn't change. Unfortunately, this will probably be part of our upgrade automation forever, since we have to keep supporting clusters that have been upgraded through various minor versions of Kubernetes.
I guess that's one of the few version-specific things we've had to build into our upgrade process: detecting that change and persisting it. So, a couple of quick lessons, which I think are applicable to both operators and developers. If you're using CSI, I would just recommend carefully testing your upgrades and seeing what can go wrong. There is a lot that can go wrong in coordinating volume moves between nodes, and the data on your volumes is probably important to you;
that's why you put it on a persistent volume in the first place. So you want to make sure that your data is safe and your workloads run as expected. Watch out for any workloads that get stuck, or nodes that get stuck draining, etc.; like I mentioned, those are the common issues. And be especially vigilant if you're using an older Kubernetes release, I would say before 1.14. Use a newer release if you can, though getting there is sort of a catch-22.
I've saved the big one for last. This is probably the most common problem we see in Kubernetes upgrades to this day: problems with admission control webhooks. These problems are possible in any environment, with any upgrade process, so I'm going to spend a bit of time on them. This has been a big pain for us, and a lot of people hit it the way we did.
For anyone who hasn't seen admission control webhooks before, I'll give a quick overview. An admission control webhook is a configuration you can make in Kubernetes to have an external service determine whether a resource can be created or not. There are two kinds of admission control webhooks: validating ones and mutating ones. The mutating ones can modify a resource before it's created; a validating one just determines whether it can be created or not. For our purposes in the rest of this talk, there's no real difference. Okay, so
the slide here shows what happens when you try to create something in Kubernetes with an admission control webhook in play. You make your call to the API server to create your resource, and the API server makes a call out to the webhook service you've configured, and it's very common to run these webhook services inside your cluster as Kubernetes workloads.
The webhook returns a response that says allowed: true or allowed: false, and that's how the API server determines whether or not it's allowed to create the object. Assuming it is allowed, the API server goes ahead and creates the object.
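Concretely, the webhook's answer is an AdmissionReview object; a minimal deny response looks like this, with the uid echoed back from the request and a hypothetical policy message:

```yaml
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: "<uid copied from the request>"
  allowed: false
  status:
    message: "pods must define resource limits"   # hypothetical policy
```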
Everything's good. I want to be really clear that there are lots of good use cases for admission control webhooks: authorization is a common one, and validation and enforcement of best practices is a good one.
Injecting sidecars for things like service meshes is also common. There's nothing wrong with using admission control webhooks, and you should definitely use them; they're a great tool. I'm going to talk about the problems they can cause for your upgrades, and how to make them safe for upgrades.
The problems all relate to what happens if the webhook service is not running and can't respond to the API server. Looking at our sequence diagram again: what happens if you go to create a resource, the API server calls out to the webhook service, and it just doesn't get a response?
Well, what happens depends a little bit on how you've configured your webhook. First of all, it depends on the failure policy. The failurePolicy field can be either Fail or Ignore. If it's Fail and the webhook service isn't available or doesn't respond, the API server is going to act as if the webhook disallowed the creation, so your resource creation is going to fail.
The problem for upgrades is that during a Kubernetes upgrade we're going to update a bunch of system components that run in the Kubernetes cluster as workloads. These are mostly in the kube-system namespace, but they may be in other namespaces too, depending on how you configure your cluster. Some examples would be CoreDNS or kube-proxy: things that run on your nodes as Kubernetes workloads and are scheduled by the API server and various controllers.
Webhooks can prevent these updates from happening. They can prevent the definitions of your system components from being updated; they can prevent new pods from being created for your system components; and webhooks can prevent the services that back them from being scheduled. So if you're running your webhook service in your cluster, which, as I mentioned, is a very common configuration, it can potentially prevent itself from being started, and then your webhook service is never going to work again. That's a bigger problem.
So let's look at a webhook configuration for a minute. This one applies to pod creation, and it applies to pods in any namespace, so when you try to create any pod in your cluster, this webhook is going to be consulted. And it has a failure policy of Fail.
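A configuration along those lines looks something like this; the names here are hypothetical, but the dangerous combination is real: all pods, all namespaces, failurePolicy Fail:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy
webhooks:
- name: pods.policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail              # creation is rejected if the webhook is down
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE"]         # matches every pod in every namespace
  clientConfig:
    service:
      namespace: default           # the webhook service runs inside the cluster
      name: pod-policy-webhook
      path: /validate
```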
So let's look at what happens during an upgrade. Say our webhook service is deployed in the cluster as a deployment, and we're going to start doing our upgrade. We have a node that's running the webhook service and also the other normal cluster stuff.
When we start our upgrade, we're going to drain this node, and the webhook service pod is going to be killed. Then the deployment controller is not going to be able to create a new pod for the webhook service, because when it tries, the API server will try to reach out to the webhook service to ask whether it can create the pod, and that call is going to fail. The failure policy is Fail, so it's not going to create it.
So when we bring up a new node, the webhook service is not running, because the deployment controller was not able to create a new pod for it. And the daemonset controller now tries to create system components like kube-proxy and our Cilium CNI driver on the new node, and it's not able to, again: it tries to create the pod, the API server goes to the webhook service, the webhook service isn't running, and it fails.
There's an obvious solution to this, which is to set the failure policy to Ignore, and that actually causes another problem you might not expect, because of the timeout. It turns out that almost all of the default timeouts in Kubernetes are 30 seconds. That includes the timeout for webhooks: even if you don't specify a timeout for your webhook, it's going to get 30 seconds by default. It also includes the API server timeout: when you make a request to the API server, the default timeout for that request is 30 seconds. So a slow or dead webhook can eat the entire request budget even when its failure is ignored.
So I recommend keeping your webhook timeouts much lower than 30 seconds, regardless of what failure policy you're setting. This is actually recommended in the official Kubernetes docs, so this isn't just me saying it. The configuration I'm showing on the slide here will work just fine: it has a timeoutSeconds of five and a failure policy of Ignore, and that's never going to cause any problems during an upgrade.
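A sketch of that safer shape, changing only the two fields that matter in the earlier example:

```yaml
# Same webhook as before, with the two safety-relevant fields changed.
webhooks:
- name: pods.policy.example.com
  failurePolicy: Ignore            # a dead webhook no longer blocks creation
  timeoutSeconds: 5                # well under the 30-second API request budget
  # ...rules and clientConfig as before
```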
Now let's say you really do need your failure policy to be Fail, because your webhook is very important. You can still avoid upgrade problems by having your webhook not apply to the kube-system namespace or any other system-critical namespaces. A good way to do this is to set a label on your kube-system namespace and have your webhook ignore namespaces with that label, using a namespaceSelector.
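A sketch of that pattern (the label key is hypothetical): label kube-system first, then scope the webhook so labeled namespaces are skipped:

```yaml
# First: kubectl label namespace kube-system admission.example.com/exempt=true
webhooks:
- name: pods.policy.example.com
  failurePolicy: Fail
  namespaceSelector:
    matchExpressions:
    - key: admission.example.com/exempt
      operator: DoesNotExist       # only namespaces without the label are checked
  # ...rules and clientConfig as before
```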
As I said, it's open source. The other thing, like I mentioned: you might want to configure a mutating webhook that mutates webhook configurations to make them harmless. That's a great way to avoid the problem ever coming up in the first place, but if you're going to do that, you might want to consider running the service for that webhook outside of your cluster.
That's all my content for today. I have this slide with everything we talked about, and I'll run through it quickly just as a recap. My first lesson today was that you might want to consider upgrading your Kubernetes cluster by node replacement instead of upgrading your nodes in place: it's a simpler process, and it helps with automation.
There are some problems that can come up with that, so make sure to be aware of them. Consider retaining node names and IP addresses if that's possible in your environment; have your workloads assume that nodes are going to go away, and not refer to specific node names or specific node IP addresses; and create new nodes before you destroy old ones, if that's at all possible. That's really going to help with the draining problem and with having your workloads continue running through an upgrade.
The next lesson was: upgrade more than one node at once if you can. That's really going to help when you have a big cluster; it'll make the upgrade process faster and smoother, and that's a good thing. My lesson after that was: minor version upgrades are probably easier than you think, especially if you've avoided using or enabling alpha features. Don't worry so much about minor version upgrades.
The last two lessons were around specific problems that we've seen. One is that CSI is now becoming mature, and I would say it's quite mature nowadays, but on older versions of Kubernetes it was not, so take special care when you're upgrading if you use CSI. The final one was around webhooks: they can cause all kinds of trouble during an upgrade, and like I said, this is the most common problem we see with upgrades for our customers.
So if you're using admission control webhooks, check your targets, check your failure policies, check your timeouts, and make sure those are all configured according to the Kubernetes docs and what I told you today. You can use our clusterlint tool to check those if you want, and I'm sure there are also other tools that can check them. That's what I had for today, so we can go into Q&A, which I think Ariel is going to moderate.
Adam Wolfe Gordon: So the way that works is: when you define a pod, you can have the normal containers for the pod, and you can also have init containers. The init containers run before the normal containers start, and they run on the same node. So if you need to set a specific sysctl value on a node, for example because your workload wants a really big TCP buffer or something, you can have an init container that goes and sets that value before your workload starts, before your application starts.
There's a variety of things that we do. The biggest thing is that after we replace a worker node, we make sure it becomes ready before we start on the next one. That ensures we're not going to take down all the worker nodes in a cluster and leave it without anywhere to schedule its workloads. It's not a 100 percent guarantee that the workloads are okay; there's only so much we can do,
but we do make sure that the nodes become ready. And the same between the control plane upgrade and the worker upgrade: we make sure the control plane components are all up and healthy, that the CNI is healthy, our cloud controller manager is healthy, our kube-scheduler is healthy, all those things. So the biggest thing we do is rely on the Kubernetes health statuses to make sure things are happy.
If you control both your cluster and your workloads, then doing health checks on your workloads would make a lot of sense. That's not really something we can do, since, like I said, we don't control the workloads, and if someone wants to configure their workload really poorly, we don't want to end up with a stuck upgrade because of it. It's a bit of a trade-off, being a managed provider: we want our customers' workloads to be as safe as possible, but we don't have full control over them.
Adam Wolfe Gordon: There definitely is. We lean heavily on Prometheus metrics for this. The process we have that reconciles clusters and does the upgrades exposes a bunch of metrics internally to us, for example which cluster has been reconciling the longest, that is, the slowest upgrade currently in progress. We have alerts on those things that go to our ops team and eventually get escalated to us if there's a problem. That's our most basic mechanism.
Ariel Jatib: Is there a good way, a good practice, that you employ to determine, at a stuck point (we used to struggle with this a little bit), whether the customer didn't deploy their application to the cluster leveraging best practices, so that upgrades can potentially become problematic? Is there some practice you all employ to evaluate whether it's on the customer or whether it's something in the platform?
Adam Wolfe Gordon: That determination is mostly manual at this point. When we do have something get stuck or run across problems, it's really kind of human intervention: we'll go and look, and there are some problems we'll just fix for customers. Like the webhook ones, for example: we'll temporarily disable the webhook and then hook it back up. That's not something we can always do, though.
If we upgrade the control plane and it never comes up, that would probably be the last point at which we'd do a rollback. We don't have any automation for that; it's a manual thing. We have had to do it, but I can probably count on one hand how many times we've actually done it, because it's not a great thing to do, in particular if some of the control plane has come up and it has started, you know, converting resources to new formats in etcd, or things like that.
There's a lot of chance for things to go wrong if we roll back, so we try not to do it. But our basic mechanism for it is that we do take etcd snapshots, and also node snapshots, before we start the upgrade, and we make sure those snapshots are in place so that if we need them, and we need to roll back to them, we can.
Adam Wolfe Gordon: So at the moment we are still doing node replacement one by one. What we're working on right now is moving toward a system where we create new nodes before we delete old nodes, and that also lets us increase the number we upgrade at once. I think we're still working out exactly what those numbers will be; it's going to take a little bit of experimentation on our side. It also depends a little bit on
a kind of detail of our product that's not going to be applicable elsewhere: in our product, the worker nodes are owned by the user, and the user has full access to them, which means they count against the limit on the number of VMs we allow a particular user to have on our platform. So we have to be mindful of that and allow for the case where they hit their limit and we can't create any more nodes for them.
Ariel Jatib: And I think the same holds true at other cloud providers, believe it or not; we've all had experience with multi-cloud, where limits on what could be used impacted operations, if you will. An anonymous attendee asks: do you upgrade a certain percentage of nodes at a time, for instance upgrading a third of the nodes each day over a three-day period?
Adam Wolfe Gordon: Like I said just now, at the moment we're just doing one by one, but in terms of the time scale, we start an upgrade and we don't pause until it's finished, so we don't take multiple days or anything like that. Our goal is to do an upgrade in less than an hour. Right now, if you have a hundred-node cluster, it's definitely going to take more than an hour, but hopefully not in the future.
Ariel Jatib: I think we have one final one we can ask here: what problems do you think the other approach has? I guess a little bit of context: the approach of upgrading each component separately on each node, instead of replacing the complete node. I'm asking because on my previous project we used to see a lot of downtime for our workloads, as unfortunately we didn't have breathing room in our cluster to upgrade make-before-break.
Adam Wolfe Gordon: I think the big challenges of doing an in-place upgrade are, first, that it's just a lot of components to coordinate, and it's going to be somewhat different between different upgrades: sometimes you're upgrading your CNI, sometimes you're not; sometimes you're upgrading your cloud controller manager, sometimes you're not. So you have to build the automation differently depending on each individual upgrade you're doing. The other thing is that you're going to have to upgrade configuration in place, and you're going to have to deal with any customizations on the nodes, anything
that's changed sort of beyond your control. So, depending on your environment, if you have a tightly managed environment where you really control all the workloads and control the cluster, an in-place upgrade could be much less disruptive, and, like you said, you don't have to deal with the capacity concern quite as much. But for our environment, where we're managing thousands of clusters and we don't control the workloads on them, the node replacement strategy seemed safer to us, and I think that's played out pretty well.
Ariel Jatib: Great. I see another question came in, but we are unfortunately actually a little bit past time. On the screen you'll notice you can reach out to Adam via email; he's also on Twitter, in the lower left-hand corner, so feel free to reach out there. Thank you, Adam, and I want to thank all of you for joining today. The webinar recording and the slides will be online later today, and we're looking forward to seeing you at the next CNCF webinar. Have a great day. Thanks.