Description
In this session Yair Cohen will discuss insights and trends gathered by Datadog from examining more than 1.5 billion containers run by tens of thousands of customers in 2020. He'll explore container orchestration platforms, service meshes, workload autoscaling, networking and more. After demonstrating how Datadog’s monitoring and security platform helps teams to meet the demands of today’s fast-paced market, Yair will share his own predictions for 2021 based on insights from the containers report.
A
And we are live once again, ladies and gentlemen. Good morning, good afternoon, good evening, good night. My name is Michael Waite, and we are here for another episode of the OpenShift Commons briefings, Operator Hours. We have Yair Cohen from Datadog; he's a product manager, and we're going to be talking about containers and ecosystem trends.
B
Right, it's no longer snowing here.
A
No, no, no. You were, when we were trying to do the dry run for this conversation; you were down in Texas somewhere. We had to reschedule a couple of times. What happened down there?
B
Finally decided to take a short trip after, you know, many months at home, and wanted to get away from the freezing cold of New York. So I made it all the way to Texas and got stuck in another storm, apparently. So much for being in the desert.
B
That was a week. My flight was cancelled three times, and I got lucky on the fourth.
A
Yeah, Red Hat just donated, I think, ten or twenty thousand dollars to the American Red Cross to help out down there.
B
Yeah, well, I work at Datadog, which is a SaaS monitoring and security platform.
B
It's been a pretty exciting journey so far. I think we started the company almost 10 years ago; I joined a year and a half ago, and every year we're releasing new products, now focusing a lot on security as well. It's been very exciting, you know, given all the changes that are happening now with the digital transformation, which was really accelerated by an order of magnitude over the past year.
B
As you know, there are a lot of modern cloud-native technologies appearing there, which is more of my focus, and we're really trying to basically break the silos between teams: make the, you know, experience of running applications more efficient and more effective, and really bring everybody together. Devs, ops and security analysts, as well as business decision makers, and essentially everybody.
A
Now, you know what I asked you the other day. I was like: oh, okay, so Datadog, that's APM, right? That's application performance monitoring. And you're like: no, no, Mike! It's much, much more than that. You folks are pretty diversified as far as the types of software that you guys are making these days, right?
B
Exactly, yeah. I mean, we started Datadog as a monitoring platform that was focused on infrastructure monitoring and metrics, or a time-series database. Later on we launched our logs management product, as well as application performance monitoring, but Datadog is really a platform. It's a unified platform where we make it easy to correlate between all these types of telemetry and jump from one section to another.
B
It's not really different products; it's a unified experience. And, you know, over the past few years we've continued to launch new products, such as network performance monitoring, continuous profiling, security monitoring, compliance monitoring: all things monitoring.
A
But aren't there... so I work with a lot of software companies who certify their products and their offerings on Red Hat: you know, Red Hat Enterprise Linux, Ansible, OpenStack, OpenShift. It seems like there's no shortage of, you know, companies who do what you do out there. What makes you guys better than, you know, name any of the others?
B
Yeah, I mean, I think that the last year, right, with the unfortunate events of Covid and all these difficult times that we're experiencing, made me, and, I think, a lot of our customers, understand where we are unique in the industry: in the sense that, in order to move fast in this world, and in order to transform your organization and your business faster, you just need to have fewer tools. You want to, like, work together with the same view, and that's, I think, what sets Datadog apart.
B
The ability to scale really fast and adjust quickly and seamlessly to your business needs, whether you scale up or, sometimes, down in times like this, and to bring everybody together without, you know, difficult permissions or deployments, or cases where you need to hire more people, right? One of the main benefits of Datadog is to take away some of the complexity of running applications in the cloud.
B
The complexity of monitoring, where, for example, with, you know, cloud-native technologies and ephemeral infrastructure, traditional tracing, logs and metrics solutions quickly became inefficient.
B
With Datadog we really put the focus on the developers. We put the focus on being containers-first and cloud-agnostic, and we allow our customers to run on any runtime, on any type of infrastructure (cloud, on premises and so forth) using the same tools, the same agent, the same platform.
A
What is it about Datadog that, you know, makes it easier for people to manage their cloud environment? Meaning: if someone stands up an OpenShift environment and they don't have something like Datadog monitoring it, what's the experience like, as opposed to when you folks are involved?
B
Yeah, that's a great question, Michael. I think, you know, first of all, most companies are kind of still in this journey to the cloud, that digital transformation to modern architectures and, you know, modern cloud stacks, and that journey is where we're focusing the most, right? Traditional solutions did not, or, I think, still do not, support both legacy and modern cloud environments. With Datadog, you basically use the same agent and the same platform for all your infrastructure.
B
All your stack: both your legacy environments, your on-premises environments and your cloud ones, regardless of the runtime you're using. And I think, you know, just getting started with Datadog and being able to get everything together is what makes us really shine, really focusing on the experience here, right? Instead of moving between different tools for monitoring.
B
When you do this migration from legacy, or from on-premises, to the cloud, you have the ability to monitor and measure your performance as you're doing this journey: as you're lifting and shifting your architecture, as you're going to multi-cloud and hybrid architectures, with the same tool. I don't want to give specific examples of traditional tools yet, but those usually targeted one type of stack, either the legacy or the cloud, but usually not both. So the focus is really about the user experience.
B
Yeah, exactly. So, first of all, Datadog is a SaaS platform, like you said, for monitoring and security, which means that all your monitoring telemetry and data is sent to our cloud. We are running on multi-cloud, so it's not necessarily one cloud; we have data centers in different regions of the world, and we're running on Google, on Azure and on AWS, of course. And we have two types of integrations.
B
We have agent-based integrations, which use that unified single agent that runs on any runtime and collects metrics, logs and traces from within your workloads, containers and hosts. And we also have web integrations and cloud integrations that directly fetch data, using, you know, public APIs, from cloud providers and different technologies, mainly SaaS and PaaS.
B
We have more than 400 integrations; every day that I check, we're adding a bit more. It's part of what I also work on with my team, and we basically make it really easy, in a single page, which I'll definitely demo for you later, to add more and more integrations into your Datadog platform.
B
Oh, that's a good question. I think that's about five years. And it's not just, if I can correct you... what's unique about this study, I think, is the fact that we're really relying on real data. We're trying to provide visibility, to our customers and anyone in the community, into the latest container trends that we're seeing across more than a billion and a half containers and tens of thousands of customers.
A
Yeah, I was going to say it's probably five years. I know you guys must have a tremendous amount of telemetry information about the apps, yeah.
B
Also, you know, a great responsibility, right? We need, and we want, to stay ahead of those trends. To give an example, our move to Kubernetes started a few years ago, before Kubernetes was even, you know, very popular, and about a year ago we got to the point where we're running 100% of our workloads on Kubernetes.
B
So, you know, taking risks in betting on new technologies, and Kubernetes is just one example, is another thing that we're doing: being there, like, before our customers, and knowing, kind of, the best practices.
B
Yeah, we were running, you know, monolith applications, then running Docker containers, and now we're running all those containers in an orchestrated environment. We have, like, a multi-cloud architecture, where we have a pretty robust Kubernetes platform that we built ourselves, which allows, you know, running physical Kubernetes clusters on bare metal, but also,
B
I mean, on cloud VMs, and developers can create their own virtual Kubernetes clusters within those physical clusters, right? So you can think about it like child and parent, and that really, I think, amplifies the benefits of orchestration, right, which we focus on a lot in this report that we'll talk about later, as orchestration kind of abstracts the complexity of the cloud from users, with a container-first approach, which we're also taking.
A
Okay, well, we were happy to have you guys on the show here today. I was expecting Elon to be on, but he must just be a little too busy; he's your VP of marketing, he's usually the guy that we work with. I know, you know, we've been working with Datadog now for many years to make sure that your software runs, and runs well, with OpenShift and our other products.
B
Yeah, we have Dash around the summer, which is usually the most exciting event for our company, where we announce a lot of new products and features, and invite our customers to try them out, hear more about them and talk with us.
A
I went down there... well, of course we weren't there this year, because of the challenging times that we're all working in, but I was down there last time. It was down in New York, down on the waterfront there by the piers, and I've got to tell you, it was really impressive seeing how many customers were there, and the excitement around, you know, Datadog the platform and everything.
B
Of course. Hopefully this will be easier, but it doesn't look like, you know, all of us will be able to do that. But as soon as things get back to normal, I'm sure that those events will happen again.
A
Yeah, all right, well, anyway. So I was talking about Elon, and, you know, the reason why we have you folks on the show here today is not because you're just some random company; we consider Datadog, and the services that you guys offer, a pretty key workload for helping our customers be successful running, you know, Kubernetes, and specifically OpenShift, in production environments. And you folks have a Red Hat certified container.
A
You have a Red Hat certified operator for OpenShift, and I think that you're available in the Red Hat Marketplace as well; is that right? Okay. And you're a member of our OpenShift Commons community, which I know Diane Mueller is very excited about. I don't know if anybody who's listening has had an opportunity to meet Diane, but she's probably one of the most amazing people that I've had the pleasure of working with.
A
She actually is responsible for the OpenShift Commons program and all of their events, and those are all over the world; if anyone ever has a chance to attend one of the Commons briefings, they're pretty terrific. Anyway, so we have a demonstration that you're going to show us: you know, what it is, how it works. What do you need? Do we need a drum roll, or...
B
I can just go ahead and give you a quick demo. Definitely, I'll go ahead and share my screen there. Are we ready?
B
Sounds great. If you just give me one second, I'm going to start sharing my screen.
B
Cool, so we'll do a quick demo here. Again, this is Datadog, for those who haven't seen it before. Datadog is a SaaS monitoring and security platform that combines your metrics, traces and logs in a single place, to enable visibility across any kind of stack, for all teams and stakeholders.
B
This means that everyone, devs, ops, security teams, is able to break down silos and collaborate more efficiently. So what we're looking at here is a dashboard used to bring critical data, such as metrics, logs and traces from across your environment, into a single view. This specific dashboard is for our demo app, Shopist, which we'll use for this demo; it basically powers an e-commerce retailer that we've set up.
B
You asked me before, Michael, about our integrations, right? So we have the agent, we have cloud integrations; you can see the 400-plus integrations on this screen. Each of them can be set up quickly, using very few steps. Our agent, you know, supports Kubernetes and OpenShift, as well as all the other types of infrastructure and runtimes.
B
It's a single agent that usually can be deployed in, like, one or two steps. So once you set up some integrations in your environment, each of these integrations comes with out-of-the-box dashboards. For example, this is one of our many out-of-the-box dashboards for Kubernetes, where you can see an overview of your clusters, and for OpenShift as well.
B
Once your integrations are set up, you can also take a look at your infrastructure. For example, here we're seeing all the hosts, or the VMs. We can use tags to, like, group them and slice and dice, for example by cloud provider and availability zone, and choose any metric to color them. For example, we can choose user CPU, notice that there is one instance here that is pretty busy, and drill down to understand what is running on it and what the problem might be, if we want to switch to it.
B
We can also take a look at all our live containers, including our Kubernetes and OpenShift workloads. Right here, for example, I'm using, again, a tag, the cluster name, to group all my pods, right? So I can quickly get an aggregated view of how many of them are in each state, for example those that are in CrashLoopBackOff, and with a single click I can drill into a specific pod, look at all the containers that are running in it and correlate with logs.
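That aggregated view boils down to a group-and-count over a tag. Here is a minimal sketch of the idea in Python; the pod list, cluster names and states are invented sample data, not anything pulled from Datadog:

```python
# A tiny sketch of the live-container aggregation: pick pods matching a
# tag (here, a cluster name) and count how many are in each state.
from collections import Counter

pods = [
    {"cluster": "prod-east", "state": "Running"},
    {"cluster": "prod-east", "state": "Running"},
    {"cluster": "prod-east", "state": "CrashLoopBackOff"},
    {"cluster": "prod-west", "state": "Running"},
    {"cluster": "prod-west", "state": "Pending"},
]

by_state = Counter(p["state"] for p in pods if p["cluster"] == "prod-east")
print(by_state)  # 2 Running, 1 CrashLoopBackOff: one pod to drill into
```

The single-click drill-down he describes is then just filtering the same list down to the pods in the interesting state.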
A
Just... can I jump in here for a second? I did want to say, to the people who may be watching and listening: you know, we're live on YouTube, we're live on Facebook, as well as Twitch, and certainly on our bridge here.
A
If anyone has any questions, we'd like to play "stump the product manager" here today. There's... I wanted to share the screen, because I have something. May I share the screen for one second here?
A
We're going to play stump the product manager. If anyone has a question and they can stump Yair about something that's specific to his area of expertise... we're making up these t-shirts that I think everybody can relate to these days. You get the... can you see my screen? The "you're on mute" edition here.
A
If anyone has any questions for Yair, please put them in the chat, and then we'll get you one of our new "challenging times" t-shirts.
B
Michael, since we're very interested in getting any questions, I'll just maybe quickly explain what my domain of focus, or expertise, in Datadog is. So I'm the product manager for containers, which means a bunch of things. First of all, you know, focusing on making Datadog a container-first platform, which means, with ephemeral workloads such as containers and infrastructure, the ability to, like, basically run everywhere, on any runtime, as well as dealing with the challenges that the modern cloud stack brings.
B
We really focus on making those challenges disappear when you use Datadog. For example, with the number of workloads and containers and microservices, the tagging, the number of signals and how you classify them, explodes by an order of magnitude, and one of the things we're working on in Datadog is making it easy to, like, control that cost, that cardinality, right? So you can control those metric tags, you can control the traces and the logs that you index, and so forth.
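The cardinality point is easy to see with a toy calculation: each distinct combination of tag values becomes its own time series, so a single high-cardinality tag multiplies the series count. A sketch, with made-up tag names and values:

```python
# Each distinct combination of tag values yields a separate time series,
# so the series count is the product of the per-tag value counts.
# All tag names and values below are invented for illustration.
from itertools import product

tags = {
    "service": ["web-store", "payments", "auth"],
    "pod": [f"pod-{i}" for i in range(50)],          # ephemeral, high-cardinality
    "availability_zone": ["us-east-1a", "us-east-1b"],
}

n_series = len(list(product(*tags.values())))
print(n_series)  # 3 * 50 * 2 = 300 series for a single metric

# Dropping the high-cardinality 'pod' tag collapses the count:
cheap = {k: v for k, v in tags.items() if k != "pod"}
print(len(list(product(*cheap.values()))))  # 3 * 2 = 6
```

With containers churning constantly, the `pod` axis keeps growing, which is exactly the cost-control knob he is describing.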
B
The other thing is that my team and I work on different Kubernetes open-source projects, to contribute to the community, such as our ExtendedDaemonSet, the Datadog Operator, our Watermark Pod Autoscaler and so forth, which I can talk about later. So we're trying to, like, build developer tools to make monitoring Kubernetes and other environments easy.
B
The other thing is that, of course, we work with the major cloud providers and different CNCF projects on monitoring those with our integrations. And, lastly, we're working on all the different open-source standards to, you know, meet our customers where they are.
B
To help them where they are, and keep them from vendor lock-in, you know, such as the OpenMetrics standard, Prometheus, OpenTelemetry and so forth. So that's kind of my area.
A
Should we go back to... Michael? Yeah, yeah, I didn't mean to interrupt. Well, actually, I did mean to interrupt, but I do want to offer up these shirts; I think they're pretty cool. So we're gonna send some down; we're gonna have some co-branded with Datadog and Red Hat, and we'll send some down to you guys as well. So the stump-the-product-manager challenge starts today, and, having said that, please resume with your demo and I won't interrupt you again. I promise. Maybe.
B
The next thing I want to move to is our APM services. So we're now looking at the service map, and, you know, in today's paradigm of microservices, where, like, we run a high number of different services, it can be difficult to keep on top of the dependencies between them. So what we're seeing here is a map of all our services, and we can understand how each of these services behaves with any, you know, request that it receives.
B
For example, if I have an incident and I wake up in the middle of the night, I can quickly understand which service has high latency or a higher error rate, and which other services might be impacted, based on these different dependencies between them and the communication. We'll switch to the traces.
B
Sorry, we'll switch to the service page of one of those services that we just saw: the web store. Here you can see an overview of, basically, the application performance of the web-store service.
B
For example, we can see the requests that the service is receiving, where each color represents a different version, as well as the latency, which I can choose from, and many other things. And there are cool things, such as, like, comparing the performance of my recent version to the previous one, and understanding if there is any difference: maybe an application bug introduced a higher error rate, and I can investigate this really quickly, all the way down to the infrastructure itself, right?
B
So all this information is received via the Datadog agent, which collects the traces and sends them to Datadog. Here we can see basically all the traces that are received, and I can, for example, use one of those tags to filter the traces.
B
To only show me errors that are, like, of type "payment service unavailable", right? Let's click on one of those traces, one of these application requests, and, as you can see, with this flame graph I can quickly understand all the services that were involved, all the way down to the payment API that returned an error. Look how easy it also is to quickly pivot between infrastructure metrics, logs and so forth.
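The flame-graph walk he describes, following the erroring spans down to the service that actually failed, can be sketched as a tree search over spans; the span data below is invented for illustration, not Datadog's trace format:

```python
# Spans form a tree via parent ids. A common root-cause heuristic is to
# take the deepest span that carries an error: errors propagate upward,
# so the deepest one is closest to where the failure originated.
spans = [
    {"id": 1, "parent": None, "service": "web-store",   "error": True},
    {"id": 2, "parent": 1,    "service": "cart",        "error": False},
    {"id": 3, "parent": 1,    "service": "checkout",    "error": True},
    {"id": 4, "parent": 3,    "service": "payment-api", "error": True},
]

def depth(span, by_id):
    """How many hops from this span up to the trace root."""
    d = 0
    while span["parent"] is not None:
        span = by_id[span["parent"]]
        d += 1
    return d

by_id = {s["id"]: s for s in spans}
root_cause = max((s for s in spans if s["error"]), key=lambda s: depth(s, by_id))
print(root_cause["service"])  # payment-api
```

In the demo, the same chain of erroring spans is what the flame graph makes visible at a glance.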
B
One of the nice things that we added, again, when we think about container-first, is the ability to get all your traces, with no filtering and no sampling, live, for the past 15 minutes. Those are extremely helpful when you're, like, troubleshooting an issue in production, where you don't really need to index and store those traces for a long time; you just want to understand what's going on at a specific time.
A
Yeah, I have a question for you. So, in a distributed computing environment, people notice that, you know, there's something wrong, or that, you know, something's consuming too many, you know, processes, or somebody's consuming too much memory. How does Datadog help with effecting a change to fix that? Or is it just purely monitoring?
B
I mean, Datadog does not roll out changes to your own applications; it just, you know, receives telemetry from these applications. What Datadog does is make it very easy to detect issues, and also to investigate and understand the root cause when they happen, so that the application developers, or any other users, can, you know, get their applications back up and running as quickly as possible. Right: for example, if I deploy... you know, if we go back to...
B
I can use this screen, in real time, when I roll out a new version: we'll see how the rollout performs, if there are any errors, if all the replicas are up and running as expected, and view metrics alongside, and so forth. Right, and once the application is up and running, I can use application performance monitoring to compare between these versions. So I can, for example, open the active version, compare it to the previous one, and see if there is any higher number of errors, which I can then compare and look into, to investigate what the issue is.
B
There are, of course, also monitors that I can set up to automatically alert me when an error rate goes up, or when my replicas are not available, and so forth: to really, you know, reduce the time to detection and reduce the time spent investigating.
B
As you can see, the log itself, each log message, is tagged with all my infrastructure tags, as well as my application ones, and with the trace, which allows me to understand what happened before this log line. And, you know, with the logs, one of the nice things that we have here, in addition to the ability to filter and group by different tags, is the ability to understand what's happening: when I look at application logs, they're usually very noisy.
B
If I don't know exactly what I'm looking for, it's hard to understand and find what I need. With this pattern detection, I can quickly identify repetitive patterns that Datadog automatically discovers, and that helps me, you know, understand if there are any outliers, or specific issues that I can quickly look into.
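Pattern detection of this sort generally works by collapsing the variable parts of each log line into a template and counting templates. A rough sketch (the masking rules and log lines are invented, and Datadog's actual clustering is more sophisticated than two regexes):

```python
# Collapse variable fields so similar lines share one pattern, then count.
import re
from collections import Counter

def template(line: str) -> str:
    """Mask hex ids and numbers so similar lines share a pattern."""
    line = re.sub(r"0x[0-9a-f]+", "<hex>", line)
    line = re.sub(r"\d+", "<num>", line)
    return line

logs = [
    "GET /cart/123 200 15ms",
    "GET /cart/456 200 9ms",
    "GET /cart/789 500 102ms",
    "payment timeout for order 42",
]
patterns = Counter(template(l) for l in logs)
for pat, n in patterns.most_common():
    print(n, pat)
# The GET lines collapse to one pattern (3 occurrences); the timeout
# stands out as a rare pattern worth looking into.
```

The rare templates are the outliers he mentions: instead of scanning thousands of near-identical lines, you scan a short list of patterns.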
B
Similarly to application performance monitoring, which allows me to send traces without limits, our logs product does that as well. I can change to Live Tail, where I can see any logs that are received in my environment, from any containers and any cloud services that I'm using. Those logs are not indexed, so they're very cheap, and, you know, we built this because we understand that some logs matter more than others.
B
You need to keep and store and index some, which you can control and choose, and some are not that important; but in case of an incident they can be extremely important, right? So, with the Live Tail, you get all the logs, without limits, available to you.
B
Lastly, let's move to our security product. Our security monitoring product allows you to automatically detect issues. We collect and store those security signals that Datadog detects for up to 12 months, I think, so you can really understand the patterns in your environment and keep it safe. Here we're looking at one security signal, for an account takeover with a brute-force attempt, and we get a message that also tells us, like, how to triage and respond to it. Lastly...
B
Watchdog is a page that shows you a feed of all the unusual things that you would be less likely to detect yourself. We're using some machine learning and advanced algorithms to identify any issues in your services. For example, we're looking at a Watchdog story in one of our MongoDB databases that shows us a higher error rate for some queries at a specific time, and we can quickly create a Datadog monitor that will notify us, with alerts via Slack or any other notification system that you have, about this.
B
Any kind of size. We have a lot of customers; some of them are small and medium, some of them are very large. I can tell you that we run some of the biggest Kubernetes clusters, I think, in the world, and I'm talking about thousands of nodes and more per cluster.
B
That's a great question. You know, as I said, we're trying to stay agnostic to any cloud technologies, any tools our customers use; you know, there's a huge variety of tools that we support, right?
B
Some of them, for example, adopted the GitOps approach, where they keep everything in source control and deploy changes with CI/CD, and our agent, you know, provides a Helm chart and an operator, where you can keep those manifests, those YAML files, in your source control and deploy them across multiple nodes and multiple clusters, with Kubernetes, of course, and OpenShift. We use the DaemonSet approach, where the DaemonSet basically runs and updates the Datadog agent pod on each of your nodes.
B
We also support Ansible and Chef recipes, where, like, people use VMs directly and deploy the agent on them. So, you know, the goal is to provide a single agent; you can find everything in our documentation, which I can show, with support for any CI/CD and configuration-management tools that you have.
A
Okay. And does everybody run this in the cloud? I mean, or are there people who say: well, sorry, but our policies are that we don't want anything outside of our own infrastructure? So can people use Datadog on site? Do you have something other than a SaaS model?
B
We do not have anything other than the SaaS model, but we do provide a lot of capabilities that allow customers to securely and efficiently monitor their on-premises clusters. These features include things like automatic redaction of data and scrambling of sensitive data, using rules that remove any sensitive information.
B
Usually metrics are not that sensitive, but we also provide capabilities to, like, remove tags and things like that. But the point is that, you know, even if you're running on premises, you can keep all your sensitive information, and you can keep your applications running there, but you still want a unified and reliable monitoring solution in the cloud.
B
Our Datadog agent, you know, provides you all these capabilities to, like, customize and control what is being sent and what is being delivered, and I think, specifically for monitoring, having a reliable SaaS platform is really one of the main reasons for the SaaS model in the first place.
B
Yeah. For example, we were working on, and I think we announced, a cloud offering for government customers, right? So that kind of cloud, which we built for the government customers, is isolated from our public cloud offerings, and is more secured in some ways, or meets different compliance needs. That makes sense, sure.
A
Okay, so we said earlier that you were going to talk about the results of your survey. You put out a survey every year; I think it comes out in October or November, right?
B
Right; we're usually releasing the report during KubeCon.
B
Yeah, I mean, for the report we're basically examining more than 1.5 billion containers.
B
Right, yeah, that's a lot of data, as you can imagine. We have a really talented data science team that helped us, like, produce this report and find all these trends that we publish every year.
B
Correct, okay, yeah. So, you know, the first trend that we wanted to start with is about Kubernetes. Kubernetes, of course, has a lot of flavors, such as OpenShift, and our findings show that more than 50% of containers are now running in Kubernetes. It's really pretty exciting to see the rapid and steady rise of Kubernetes.
B
So, before, you know... Kubernetes is an orchestration platform, right, which, as I mentioned before, kind of abstracts some of the complexity of the cloud and of managing the infrastructure. Before that, you know, organizations still used containers, or in some cases they had not yet; they ran monolith applications and deployed them directly on the machines themselves. Right: so you needed to say, I am going to run this container, or this application, on host X or host Y. With Kubernetes,
B
Things are changing, and, you know, basically the orchestrator is responsible for scheduling those containers, on your behalf, on your infrastructure.
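That scheduling-on-your-behalf idea is the desired-state loop at the heart of orchestrators like Kubernetes: you declare what should run, and controllers repeatedly compare desired with observed state and act on the difference. A minimal sketch of just the comparison, with app names and replica counts invented for illustration:

```python
# A toy reconcile step: given desired replica counts and what is actually
# running, emit the start/stop actions that close the gap. Kubernetes'
# controllers follow this same desired-vs-actual pattern, though the real
# machinery (scheduling, health checks, rollouts) is far richer.

def reconcile(desired: dict, running: dict) -> list:
    """Return the actions needed to make `running` match `desired`."""
    actions = []
    for app, want in desired.items():
        have = running.get(app, 0)
        if have < want:
            actions += [("start", app)] * (want - have)
        elif have > want:
            actions += [("stop", app)] * (have - want)
    return actions

desired = {"web-store": 3, "payments": 2}
running = {"web-store": 1, "payments": 3}
print(reconcile(desired, running))
# [('start', 'web-store'), ('start', 'web-store'), ('stop', 'payments')]
```

The point of the loop is exactly what he says next: application teams state intent, and the orchestrator decides which hosts actually run the containers.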
B
One of, for example, the changes, in terms of, like, the user experience, or the behavior, is that the application teams do not need to even know, or care much, about the infrastructure, or where they are deploying, whether it's in cluster X or cloud provider Y; instead, they just, you know, tell Kubernetes: I want to run these applications, and Kubernetes goes to those machines and runs them. Before Kubernetes, to also complement this answer,
B
There were other orchestration services, right. One of the most popular orchestration services, from what we see, is Amazon ECS, which, you know, provides a simpler way to kind of run containers, in terms of, like, you know, the different types of options that you can customize, compared to Kubernetes; and Amazon was also one of the first companies that released a managed container orchestration platform, which became super popular.
B
All right, so fact number two was that, by now, we see that 90% of containers are orchestrated. That means, again, that all these Docker containers, and now we're also seeing, you know, the rise, the increased popularity, of other container runtimes, are simply managed by an orchestrator such as Kubernetes or ECS.
B
Sorry: about 30% of all containers are using less than 10% of the CPU they request, and 49% of containers are using less than 30%. With memory we've seen a similar case, and that's kind of, like, counterintuitive to, you know, Kubernetes being able to bin-pack and automatically schedule containers in the most efficient way, and there are several reasons, which, you know, we explain; I can talk quickly about why this is currently happening, right?
B
One of them has to do with what the journey to Kubernetes looks like, right. Most companies had their own applications that they ran before Kubernetes, and kind of the first phase of this journey, to Kubernetes or to orchestration, is more like a lift and shift of your applications to Kubernetes. During this process, like, you really try to preserve high performance; you want to scale, especially during, you know, these months,
B
the past year, where, you know, we see the digital transformation accelerating — and you do not want to, for example, risk your application being killed or throttled by, you know, Linux. So that's kind of the first phase, right.
B
The other thing is that, when you think about where the customers that we work with are now: most of them are relatively new to running Kubernetes, and we think that, you know, over the next year we'll see the focus shifting from performance — now that performance is good and automatic scaling is working — to cost optimization, which basically means utilizing the CPU and memory, usually some of the major expense factors in running cloud services and applications.
B
And, you know, if you think about it, right: if you already had your applications before moving to Kubernetes, those were not necessarily monoliths, but they were composed of a relatively small number of services. With Kubernetes you basically need to specify, for each container, how much CPU and memory it uses — you know, that's the requests.
B
The problem is that if you have very large containers and you want to schedule them, or bin pack them, efficiently on nodes, there is a limited number of large containers that you can, you know, bin pack onto a single node.
B
What helps, you know, basically, is an application architecture where you have a high number of services — a high number of containers that are smaller. It's kind of like: if you take a lot of small stones and put them in a glass, you'll probably have less air left than if you put a few large stones in a bowl, right — those will have a lot of gaps in between.
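The stones-in-a-glass point can be sketched with a toy first-fit packer. The node size and container sizes below are invented for illustration, not real scheduler behavior:

```python
# Toy first-fit packing: the same total CPU demand needs fewer nodes when it
# arrives as many small containers than as a few large ones.
# Numbers are made up; Kubernetes' scheduler is much more sophisticated.

def first_fit(items, bin_size):
    """Pack items (CPU cores) into bins (nodes) first-fit; return the bins."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= bin_size:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing node had room: open a new one
    return bins

NODE = 8                 # cores per node (assumed)
large = [5, 5, 5, 5]     # 20 cores total as 4 large containers
small = [1] * 20         # 20 cores total as 20 small containers

print(len(first_fit(large, NODE)))  # 4 nodes: only one 5-core container fits each
print(len(first_fit(small, NODE)))  # 3 nodes: small containers pack tightly
```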
B
Let's scroll down a little bit and talk about Fargate, right. So Fargate is a compute service by AWS that allows you to run containers on a serverless compute platform — it basically abstracts away the need to, like, manage and use hosts.
B
As you can see in this report, we've seen Fargate increasing to more than 30 percent — a pretty high level of usage of serverless containers in a single service such as Fargate. Pretty exciting.
B
You know, serverless containers, I think, will unlock a lot of use cases and benefits over the next few years. And it's worth mentioning here that Fargate is probably a good representation for a lot of other serverless compute and orchestration platforms that we will see — ones that are a bit earlier in their mission than Fargate, which was released, I think, a few years ago, but that will become popular as well, right. Even OpenShift and IBM have serverless container
B
services, such as OpenShift Serverless — sorry — that are using Knative, which is a really interesting technology as well. And serverless containers are especially interesting because containers are already ephemeral, and a host, you know, is not something that you tear down every second, right. So having the ability to scale your containers up and down and run them without any infrastructure — completely abstracting it away — makes a lot of sense in many interesting use cases. So that, that was about the serverless.
A
I was just — I was just pinging Chris Short to see how we're doing on time. I think he said we can go over a little bit if we need to.
A
Maybe about five minutes.
B
Sounds good. So a couple more trends here, right: Kubernetes node sizes, as we can see in this fact.
A
I'm sorry — Chris said we can actually run over, so we're good.
B
Cool. Node sizes in Kubernetes are changing as clusters become larger. What we found is that in small clusters the use of small nodes is still pretty common, but as you look at and move towards larger clusters, those small nodes kind of disappear and we see more of the larger nodes, with 16 cores or more — and of course that includes 32, 64 and even more.
B
That actually makes a lot of sense, because, you know, when you run a Kubernetes node you have kind of a sunk cost of processes — such as the kernel, the hypervisor, the container runtime, as well as very specific components like the kubelet — that take resources, which are expensive. And those basically do not scale linearly when you use larger nodes, right: you can run a lot of containers on a single large node, and your allocatable CPU and memory resources just increase.
B
The other thing is that with Kubernetes, you know, having a failure in a node is less of an issue, and with large clusters that have 1,000 and more nodes, the failure of a single node is probably not going to have a severe impact on performance — which is something that organizations are starting to accept more and more. So that's pretty exciting.
B
Kubernetes does a great job in abstracting, you know, abstracting the cloud complexity, but one of the things that is sometimes left to the application developers and the platform engineers is, you know, managing the networking between containers. That complexity also increases as the number of containers increases.
B
The main technologies that deal with, you know, container networking and security, as you can see here, help containers to discover each other and really simplify that communication for the application developers themselves.
B
One of the interesting findings we had was that, while Calico — which is a great networking technology — is the most popular, we see a lot of other technologies, and this segmentation, this diversification, shows us that, you know, this is an area that no one is dominating yet, and it will be very interesting to see what happens in the next few years. We believe at Datadog that the number of technologies for container networking and security will continue to increase.
B
We have some technologies, such as, you know, NGINX and Istio, that are super popular. For example, Istio is used by Red Hat and a lot of other companies, such as Google, to build service meshes, and that's something we don't think will change anytime soon. So that's related to networking technologies, and I think with that we will maybe wrap up the container report.
B
We also published a fact about service mesh adoption. Service mesh technology, you know, is really used as an abstraction over application networking, for applications that consist of a lot of small containers or small services. The infrastructure layer of application networking is not solved today by Kubernetes, right. So if you're using, for example, an AWS cloud, you might want to use AWS VPC for networking, right.
B
But if you're running your containers on other runtimes, such as on-premise or in virtual clusters, the underlying network infrastructure might be different. That's one of the core benefits and promises of service mesh technologies, which is really exciting. However, what we found in this report is that, while a lot of companies — compared to our report last year — are now experimenting with and trying service mesh technologies,
B
the adoption is still early, you know. If you look at how many containers — sorry, how many organizations — are actually running the majority of their workloads using service mesh technology, those numbers are still relatively low.
A
When we were talking about this the other day, I basically admitted that I'm no expert on service mesh. But is this because the sizes of the containers are rather large, comparatively speaking, and, you know, service mesh adoption is going to probably increase when the containers get smaller and smaller and smaller, and there are just millions and millions more of them? Or —
B
Exactly — I think that that is the core reason. Most containers are still relatively large, and when you're using a services architecture — like, not a microservices architecture; a services architecture, which is still way more popular — you already have solutions that provide some of the main benefits that service meshes do, such as, for example, blue-green and canary deployments. Right, you could use an ingress controller such as NGINX to route traffic between different application versions or different,
B
you know, replicas of services. However, once you move to a microservices architecture and the number of services grows by an order of magnitude or two, to thousands of microservices, right, an ingress controller — which is more like a centralized way to route traffic — is no longer scalable or granular enough for that. And we think that, as the number of services that organizations run increases, service mesh adoption will follow as well.
B
Sounds good. So one of our last facts focused on the most popular technologies running in containers today. Not a lot of new surprises here, since the dominating technologies are still NGINX, Redis and Postgres, but we had a few newcomers, right. I think one of the interesting ones is Vault, which came up 10th, I think, in terms of the order. Vault is a really exciting technology by HashiCorp that allows, you know, application developers and platform engineers to keep any secrets secure.
B
You know, for environments like production, where basically each workload carries an identity and fetches its secrets from, you know, the secure vault during deployment and continuous integration and deployment. And related to that, we saw that in Kubernetes, and specifically in OpenShift, the top container images running in StatefulSets — StatefulSets are for stateful applications that require some persistence of data — are databases, or, you know, data services, such as Redis, Elasticsearch, Postgres. And that's pretty interesting,
B
I think, given that, you know, Kubernetes in its early days was not very friendly to running those technologies. A couple of things changed over the years: of course, dozens or hundreds of improvements to Kubernetes, but also a lot of support that came from those open source technologies and the commercial vendors that maintain them, to make them easier to run on Kubernetes as well.
B
That also makes a lot of sense, because for organizations who use Kubernetes, you know, the benefits of running all your services — including the data that connects them all together — in a single cluster, in a single environment, in a single network, obviously matter a lot. So it makes a lot of sense that now we see all those technologies becoming popular, which means that, you know, the journey to orchestration and Kubernetes is safer and more predictable.
A
I think so, I think so. And that comes out every year, right? So, so —
B
A
The next one is going to be coming out November-ish, exactly. What are your predictions?
B
I mean, as I said, right, we think that more and more customers and organizations will, you know, move to Kubernetes and the different flavors of Kubernetes, like OpenShift and all those. We think that serverless containers are becoming more popular this year. I think that, you know, with service meshes — as microservices architecture is becoming, you know, the more recommended approach for cloud-native applications where you want to run containers everywhere — microservices,
B
well, you know, adoption will increase, and service meshes as well. And, you know, the last thing is about security, right. A lot of these technologies are meant and built and designed for containers, and they, you know, kind of support
B
the security requirements that running containerized applications at scale has. So that would probably be another major factor. We see a lot of open source and commercial solutions for securing containers, and I'm pretty excited to see what the dominant technologies will be a year from now.
A
Okay, we'll — we'll find out, we'll find out. Datadog, ladies and gentlemen. So yeah, you guys have a free trial; if people want to use the free trial, we have it here on the screen. You can't —
B
A
click on it, but you can type that in. And thanks for coming, Yair. I know, I know that, you know, you guys are a great partner of ours, and thanks for being on the show. Looking forward to having you folks back again in the near future.

Likewise.