From YouTube: Webinar: What's New in Kubernetes 1.17
Description
The release team covered the details of the Kubernetes 1.17 release.
To download the slides, please visit https://www.cncf.io/webinars/kubernetes-1-17/
A
All right, I think we can start now. Welcome, everyone, and thank you for joining us today for a CNCF webinar, What's New in Kubernetes 1.17. My name is David McKay; I was the lead on the communications team, and I'd like to welcome our enhancements lead, Bob Killen, and our release lead, Guinevere Saenger. So thank you very much for joining me today. Before we get started, we do have a few housekeeping items. First, there is no speaking during this webinar, so please use the Q&A box at the bottom of your screen. We will try to get to as many of those questions throughout the webinar and cover off anything we can at the end as well, so ask them as we go and I'll do my best to get them covered. This is an official webinar of the CNCF, so because of that we are subject to the CNCF code of conduct.
C
Hi everyone, good morning, good day, good afternoon, wherever you are. May I introduce you to Kubernetes 1.17, with the subtitle of "the chillest release", because we only have one holiday-season release a year and Q4 is it. With that, we have our fantastic capybara mascot, who is one of the chillest animals in existence, and I got the fantastic Alison Downy and her partner Tyler to do a really cool logo for us. She's extremely talented and I'm so excited she made this artwork for us.
C
Can we do the next slide? All right. David, did you want to go over the agenda?
A
First: we each selected one feature that we're really happy with from the 1.17 release. We're going to talk about the stability changes that have happened in this release, moving on to snapshot and restore volume support, and ending with topology-aware routing. After that, we will travel through all the other SIG updates in the 1.17 release, followed again by the Q&A at the end.
B
Okay, this is where I'll kick in and take over for a little bit. For this release we had 22 total enhancements. That's much less than we've had in previous releases, but we sort of see this cyclical nature where, you know, towards the end of the year fewer and fewer enhancements will make it in, especially with the holidays and the shorter cycle, but they make up for it.
B
In the beginning of the year we'll probably have significantly more for the 1.18 release. Of those 22, we had 14 that went to stable, or GA, and most of those are, like, smaller feature things that just help the project as a whole; they aren't huge features. Four are graduating to beta and four are new as alpha.
B
So, of the 22, 14 went to stable. We'll dive right in and cover two of them here in a sec, but for a quick rundown, the ones that were promoted are: Taint Nodes by Condition, pod process namespace sharing, scheduling DaemonSet pods by the kube-scheduler, dynamic maximum volume count, Kubernetes CSI topology support, environment variable expansion in subPath mounts, defaulting of custom resources,
B
breaking apart the test tarball, watch bookmark support, behavior-driven conformance testing, finalizer protection for service load balancers, and a little thing of avoiding serializing the same object independently for every watcher. So with that, I will dive into sort of our first big featured enhancement, snapshot and restore volume support. So this is a part of... is my audio breaking up?
B
Can you hear me okay? Okay, cool. This is a beta feature. It was actually introduced as alpha in Kubernetes 1.12, and during the alpha it actually got rewritten from the ground up twice, and it's now moving to beta, so many more people get a chance to actually use it. So Kubernetes itself has proven to be a great abstraction for describing workloads programmatically.
B
So, like, you know, when you're working in AWS or Google or something like that, you can take a snapshot of a volume and possibly restore it later, but there hasn't been any sort of integration with Kubernetes itself, so you've sort of had to manage that sort of thing out of band. What this enhancement does is actually bring those primitives into Kubernetes, so you no longer necessarily have to,
B
you know, go to another console or something like that to script that backup and restore. Now, just to make something clear: you can't use this directly out of the box in a 1.17 cluster. It requires a bit more plumbing and the installation of an external snapshot controller. This is sort of similar to how you would install any CSI-based storage driver, and you can probably expect a lot of the cloud providers to just do this automatically for you.
B
So the external snapshot controller adds a few new CRDs, but the two big ones are the VolumeSnapshotClass and the VolumeSnapshot. The VolumeSnapshotClass is sort of similar to a StorageClass: it defines which CSI driver is used, how the snapshots are made, and their retention policy. And then the VolumeSnapshot itself is sort of an instance of the VolumeSnapshotClass for a provided persistent volume. And this might be a lot to take in.
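For illustration only, here is a minimal sketch of those two objects as they looked with the v1beta1 snapshot API in 1.17; the driver and claim names are placeholders, not anything shown in the webinar.

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
driver: ebs.csi.aws.com          # placeholder CSI driver name
deletionPolicy: Delete           # what happens to the underlying snapshot on delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: data-snap
spec:
  volumeSnapshotClassName: example-snapclass
  source:
    persistentVolumeClaimName: data-pvc   # placeholder PVC to snapshot
```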
B
So, touching on what was said before, this enhancement was sort of two separate ones, pruning and defaulting. Pruning was promoted to GA and defaulting was promoted to beta in the 1.16 release; this wraps it up by bringing defaulting to stable in the 1.17 release. So CRDs, or custom resource definitions, are used for sort of user-created extensions to Kubernetes, and you end up sort of registering them as their own API objects, and they will have their own API version and kind.
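As a hedged sketch of what defaulting means in practice (the group, kind, and field here are hypothetical, not from the talk): a default declared in the CRD's structural schema is filled in by the API server whenever the field is omitted.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # hypothetical CRD
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1         # applied server-side when spec.replicas is omitted
```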
B
So, this one and the next one sort of work together, and they aren't really user-facing, but it's watch bookmarks, and this improves the performance of the kube-apiserver. So when a client initiates a watch, and I mean a programmatic client, so something built with client-go or one of the Python libraries, something like that, it's watching a set of objects to get notified when something changes.
B
It gets a list with a resource version number, and that maps to sort of a set of changes or diffs from the previous resource version. And if the client happens to disconnect and tries to re-establish that watch, the kube-apiserver and client would have to sort of play back all the resource versions to get back to where it was, and this caused unnecessary load on the server, especially if you consider you might have a whole slew of things watching those objects; it escalates into a bit of a nightmare.
B
Then there's less object serialization. So if you have multiple watchers watching the same set of things, previously, for each client, the kube-apiserver would have to serialize that object for each one. Now there's a little bit of caching in play, so if you have multiple things watching the same thing and they'd be notified of the same sort of updates, those serializations are cached. We saw significant problems with this in the scalability tests, where we go up to, like, five thousand nodes, where this would start to cause some performance issues.
B
So: behavior-driven conformance testing. This is a non-traditional enhancement, but it's sort of part of a larger plan where we've agreed to tackle a better conformance testing project. Essentially, this is defining how our conformance tests should be built and documented. Right now there isn't a single explicit list of behaviors or source of truth out there; these are scattered amongst design docs, enhancement proposals, user docs, and a subset of the e2e tests, as well as the code itself.
B
This last one is the removal of the project-wide usage of the node role labels. This might impact people. So this was actually sort of an accident over time: the node role labels, node-role.kubernetes.io/*, that whole label namespace, were not intended for widespread use by the project itself, but several things actually started referencing them. They were introduced by kubeadm to help it manage the provisioning and the lifecycle of kubeadm-provisioned nodes, and they were not intended for use beyond that.
B
However, and this is because there are many sort of tools and provisioners that are not kubeadm-based, which means, you know, if nodes are missing those labels then certain things would probably have issues if they try to reference them. And certain parts of the project itself did; if I recall correctly, there were service load balancer tests that were referencing them, and because of that they wouldn't function in any sort of conforming cluster that wasn't provisioned by kubeadm.
B
So it's been decided that these labels will stop being used project-wide. They may continue to be used by kubeadm, but they may not be used for any conformance-related activities or tests, and this KEP simply outlines the plan to start the removal and deprecate them from the other places in the project.
B
Kicking over to cloud providers, with another fun label thing. So for quite some time now we've had the beta labels used by the cloud providers to sort of signify, you know, what instance type it is, what zone or region it is, and it's time to bring those to GA. So, as you can see in the list here, failure-domain.beta.kubernetes.io/zone will become topology.kubernetes.io/zone, and so on for the other ones, and the general deprecation plan for these is: both
B
labels will be applied to nodes through the 1.20 release, and in 1.21 they'll stop being referenced or applied, but they will not be removed from objects that already have them applied. If you are relying on these things, I would encourage you to check out this KEP and think about updating anything that you have that might reference them.
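As a rough sketch of what that update could look like in your own manifests (the zone value and image are placeholders, assuming a node-affinity use of the zone label): the only change is swapping the deprecated key for the GA one.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone   # was failure-domain.beta.kubernetes.io/zone
            operator: In
            values: ["us-east-1a"]             # placeholder zone
  containers:
  - name: app
    image: nginx                               # placeholder image
```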
B
This is a really short one, honestly: structured output from kubeadm. So, as kubeadm is sort of becoming the underlying substrate or underlying tool for many other things, it'd be really nice to actually have machine-consumable logs and other output that can be bubbled up easily or parsed easily. So this is adding the ability for kubeadm to put its output in, like, JSON, instead of just simply unstructured text.
B
Now it's time for some of the fun stuff: networking. And we're back to topology-aware routing of services. I covered this in a little bit more detail earlier, but essentially you can set a predefined list of preferences via the topologyKeys parameter in a service definition, and that will be the sort of preferred way the endpoints behind a service will be routed to.
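A minimal sketch of that parameter (alpha in 1.17, behind the ServiceTopology feature gate; names and selector are placeholders): traffic is preferred to endpoints on the same node, then the same zone, then anywhere.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc              # placeholder name
spec:
  selector:
    app: my-app             # placeholder selector
  ports:
  - port: 80
  topologyKeys:             # evaluated in order
  - "kubernetes.io/hostname"
  - "topology.kubernetes.io/zone"
  - "*"                     # fall back to any endpoint
```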
B
Next is an interesting one. So the IPv4/IPv6 dual-stack support was graduated to alpha in the 1.16 release; however, since there has been a significant amount of effort across many parts of the project, we are continuing to track it in this release. Any time there are very large changes like this, where, you know, tests and other things that impact the broader project might be updated, we opt to track it. It doesn't mean it's graduating; we just keep tracking it as an enhancement.
B
So some of the major changes that happened with the IPv4/IPv6 dual-stack enhancement: dual-stack support was added to kube-proxy in iptables mode, as well as support for dual stack in the downward API. So now, if you're doing a reference with the downward API, you'll have both the IPv4 and IPv6 addresses, and they're separated by a comma. The other thing was some changes made to the kube-controller-manager,
B
like node-cidr-mask-size-ipv4 and node-cidr-mask-size-ipv6, and those can strictly only be used in dual-stack mode. The other thing: with sort of all this effort going on right now, the plan is for it to push to beta in 1.18, which means that clusters will fully support dual-stack mode sort of out of the gate.
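For the downward API piece, a hedged sketch of exposing both addresses to a container, assuming the status.podIPs field path (image and names are placeholders); in a dual-stack cluster POD_IPS would carry the IPv4 and IPv6 addresses separated by a comma.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dual-stack-demo
spec:
  containers:
  - name: app
    image: busybox                     # placeholder image
    command: ["sh", "-c", "echo $POD_IPS && sleep 3600"]
    env:
    - name: POD_IPS
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs     # both pod IPs, comma separated
```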
B
Next is the new endpoint API, or EndpointSlice. You may have heard of this a little bit here and there, scattered all over the place. This is sort of the long-term replacement for the current core v1 Endpoints API. The current API actually has a lot of performance and scalability problems that impact multiple components of the control plane. The sort of gist of it is: instead of recomputing an entire list of endpoints and notifying all the watchers when one is updated, they are now sort of broken down into groups,
B
I believe of 100, and only the group that has an updated endpoint will be recomputed and updated. Small clusters didn't have too many problems with this in the past, but once you get to very, very large clusters, with potentially hundreds of thousands of pods, it becomes a significant performance problem. And you can imagine, soon, with, you know, the potential for pods to have two endpoints, IPv4 and IPv6, that will be magnified. EndpointSlices have actually been a requirement before IPv4/IPv6 dual-stack mode could be supported.
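For context, a sketch of what one slice looks like under the v1beta1 discovery API; these objects are normally written by the EndpointSlice controller rather than by hand, and the names and addresses here are placeholders.

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: my-svc-abc12                     # placeholder, controller-generated in practice
  labels:
    kubernetes.io/service-name: my-svc   # ties the slice back to its Service
addressType: IPv4
ports:
- name: http
  port: 80
  protocol: TCP
endpoints:
- addresses: ["10.0.1.12"]               # placeholder pod IP
  conditions:
    ready: true
```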
B
Another fun one: finalizer protection for service load balancers. So a service of type LoadBalancer requires Kubernetes and an external entity, usually a cloud provider, to work together to ensure the proper management of its lifecycle. In the past there have been a couple of conditions where the Kubernetes Service could be deleted before the actual external load balancer was deleted. So, if you can imagine, you deleted the Service but you still have an AWS load balancer provisioned; that's not so great.
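In practice the protection shows up as a finalizer on the Service object, which blocks full deletion of the Service until the cloud load balancer has been cleaned up. A hedged sketch, assuming the service.kubernetes.io/load-balancer-cleanup finalizer name used by this feature:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb                                    # placeholder name
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup   # removed once the external LB is gone
spec:
  type: LoadBalancer
  selector:
    app: web                                      # placeholder selector
  ports:
  - port: 80
```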
B
So: configurable pod process namespace sharing. So the default container behavior, and this is the same thing when you're, like, you know, working with Docker locally or anything like that, is that each container will exist and run in its own process ID namespace, with the entrypoint process serving as PID 1. So when you're in a pod and you have multiple containers, they can't see each other's processes. There have been certain ways to do it in the past, but it's been a little hacky. Now, with a shared process
B
namespace enabled, you can actually remove that boundary and let them share one sort of PID namespace. So this might seem like it removes some of the inherent security or isolation mechanisms that we see with containers, but it also opens the door to sort of more complex workflows and enables things like a debug container being attached to another container, where you might have a single Go binary. I also know several groups are looking into this as a means for better CI runners.
B
So this means that you could have your sidecar container sort of function as the init process that manages the job execution in another container with the various, like, build dependencies. It also means that that container could perform other actions: say, when the main process of the container it was watching terminated, it could do some cleanup actions before fully terminating the pod.
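A minimal sketch of turning this on (images are placeholders): with shareProcessNamespace set, the sidecar can see, signal, and wait on the other container's processes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true        # one PID namespace for all containers in the pod
  containers:
  - name: app
    image: nginx                     # placeholder
  - name: sidecar
    image: busybox                   # placeholder; can run `ps` and see nginx processes
    command: ["sh", "-c", "sleep 3600"]
```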
B
So: move the frequent kubelet heartbeats to the Lease API. So this is another back-end thing that end users generally won't interact with too much, but it will serve both as a performance boost and remove some of the scalability limits encountered with clusters of more than 2,000 nodes. So, on an interval, the kubelet will check back in and update sort of its records with a bunch of information about the current state of the node. This includes things like what images it has on there,
B
what volumes are mounted, as well as a slew of other things, and these individual updates can be, like, upwards of 15 kilobytes in size for a single update, and when you think about it, that's a lot for just a simple node status. So now you multiply that by a few thousand nodes, and those nodes checking in every 40 seconds, and you can start to get an idea of the load which that puts on both the API server and the backing etcd database.
B
So now, instead of including that full update, there will be a much smaller update, sort of a ready signal, that is sent as the node Lease object, and then a full update will be sent on a much longer interval, or if there's any recognized, meaningful change. And the biggest thing that you will actually probably see as most end users is, if you use kubectl and, like, look at the namespaces, you will see a new namespace, kube-node-lease.
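Roughly what one of those per-node Lease objects looks like in that namespace (the node name and timestamp are placeholders); it is a tiny object compared to a full node status update.

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node-1                               # matches the node's name
  namespace: kube-node-lease
spec:
  holderIdentity: node-1
  leaseDurationSeconds: 40
  renewTime: "2019-12-09T17:00:00.000000Z"   # refreshed by the kubelet as its heartbeat
```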
B
Let's see, this has sort of been in place since 1.12, but now it's just essentially graduating to stable, and some other, like, things have been done. The main reason for doing this via taints and tolerations is that way a cluster admin can sort of actually force a pod to run on a node even under one of those conditions.
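Assuming this refers to node conditions being surfaced as taints (Taint Nodes by Condition from the stable list above), a sketch of the toleration a pod would carry to keep running on, say, a node reporting not-ready; the image is a placeholder.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "node.kubernetes.io/not-ready"   # condition taint applied by the node controller
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300                # stay for up to 5 minutes after the taint appears
  containers:
  - name: app
    image: nginx                          # placeholder
```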
B
Okay, next one: scheduling DaemonSet pods by the kube-scheduler. So this one isn't a huge change for a lot of users, it's already in effect, but the sort of history behind it is that, before this was graduated to beta in 1.12, DaemonSet pods were not scheduled by the scheduler; the DaemonSet controller actually handled that itself. And this was because it previously caused some weirdness with the scheduler.
B
The spec.nodeName was already present when the DaemonSet pod was created, and being scheduled separately by this other process was causing some other problems where certain things weren't being respected. This is most notable when nodes are flagged as unschedulable. Now that they are being managed by the kube-scheduler, all those restrictions and everything like that will be handled properly.
B
Essentially, you'll now be able to manage snapshots and things like that within the cluster. This was covered at the beginning, so I'm not going to spend too much time on this one; it's here mostly for completeness' sake, and there are some useful links at the bottom that you can use to go through the docs on them.
B
Let's say, for example, on AWS you are limited to, you know, sort of 40 EBS volumes per instance, and you wouldn't want something to be scheduled on there where it already has, like, 40 attached and cause a problem. And so, by moving to a dynamic design, these various cluster-wide, you know, hard-coded settings have been deprecated and are now tied specifically to, like, the CSI driver or volume type that is associated with that node itself.
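The per-node, per-driver limit now shows up on the CSINode object; a hedged sketch of roughly what that looks like (the driver name, node ID, and count are placeholders).

```yaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: node-1                      # matches the node name
spec:
  drivers:
  - name: ebs.csi.aws.com           # placeholder CSI driver
    nodeID: i-0123456789abcdef0     # placeholder provider node ID
    topologyKeys:
    - topology.ebs.csi.aws.com/zone
    allocatable:
      count: 25                     # max volumes this driver can attach to this node
```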
B
This one is about preferences for the scheduling of storage. So you can sort of define, you know, labels and their acceptable values for use in volume placement. This is very useful for, like, if you want to make sure that your pod is consuming storage within a specific, you know, region or zone, versus an area where you don't necessarily have that.
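A sketch of one way that preference is expressed, via allowedTopologies on a StorageClass; the provisioner, key, and zone values are placeholders and depend on the driver you use.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal-ssd
provisioner: ebs.csi.aws.com                 # placeholder provisioner
volumeBindingMode: WaitForFirstConsumer      # let the scheduler pick a topology first
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone         # placeholder topology key
    values: ["us-east-1a", "us-east-1b"]     # placeholder zones
```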
B
Next is environment variable expansion in subPath mounts. This was introduced almost a year ago, in the 1.14 release, and was started to help some of the legacy workloads where you may, you know, need to write to a specific place on the host. The intention for this is to sort of work in combination with the downward API, so that you may use, say, for example, the pod name within a mount.
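A minimal sketch of that combination (paths and images are placeholders): the downward API feeds the pod name into an environment variable, and subPathExpr expands it so each pod writes to its own directory.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: logger
    image: busybox                  # placeholder
    command: ["sh", "-c", "echo hi > /var/log/app/out.txt && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name  # downward API: this pod's name
    volumeMounts:
    - name: workdir
      mountPath: /var/log/app
      subPathExpr: $(POD_NAME)      # expands to a per-pod subdirectory
  volumes:
  - name: workdir
    hostPath:
      path: /var/log/pods-demo      # placeholder host path
```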
B
Many of the in-tree volume types were built before the CSI, or Container Storage Interface, existed. There's been a long-term effort to get those out of tree, split out so they can be managed separately, but unfortunately, right now, we're sort of in this middle ground. This has led to having to support them both in tree, as the non-CSI version, and out of tree, as the CSI version, and, as you can imagine, there's been a large duplication of effort.
B
Testing. The only one from testing: break apart the test tarball. This is, again, another internal-facing-only KEP. Previously we had this large mondo tarball that had all the testing suites for, like, every platform baked into it, so whether you were on, you know, ARM or PPC, it didn't matter, they were all baked into there. Now they are their own separate artifacts, and it improved testing times and testing quite a bit.
B
As it stands, all of them are participating in it, but it should be, like, completely... it shouldn't impact end users at all. Right now, essentially, there's, like, a shim being built, so that way any requests internally will just essentially use the CSI driver behind the scenes. The big thing would be, like, you might need to spin up the CSI driver for the specific cloud provider in addition to what you would normally spin up in a new Kubernetes cluster.
C
Yes, so basically, as Bob mentioned earlier, there isn't really an official definition of control plane, or what makes a control-plane node, and every cloud provider sort of has their own take on it. So, in order to avoid having actual internal dependencies on these specific labels, it's basically just a clean-up thing, and to encourage people to create their own labels and their own behaviors.
A
Thank you. We have another question with regards to the cloud provider labels: when will the node name not be dependent on the cloud provider, and move to just being labels, and, in brackets, unify the hostname override option of the kubelet? Let's see what that is.
B
I don't know, but there is a KEP for that; well, they were discussing it. I can see about taking the link and dropping it in... I feel like I need to do the meme picture.
A
So that could be followed up with later. That's great. All right, thank you, Bob; thank you, Gwen; that was great. Thank you, everyone, for joining us today for this webinar. It was recorded and will be available online with the slides later today, and we look forward to seeing you at a future CNCF webinar. So have a great day, thank you once again to Bob and Gwen for joining us, and I'll catch you later. Thanks.