From YouTube: Closing
Description
Join us for Kubernetes Forums Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30, and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Wrap-up
A
Just a quick word: thank you again to our sponsors. For this year, our diamond sponsors were Dynatrace, Lightstep, and New Relic, and our gold sponsors were Spotify, SignalFx, and xMatters. And definitely join us tonight: we're going over to the Whiskey House on 23rd Avenue, starting at 5:30. There are some other happy hours happening, but really, this is the one you want to be at; those are the sad hours, this is the happy hour. So please check that out. I would love feedback.
D
Thanks everyone, and thanks Rodrigo and Raja. I thought it would be good to start with a quick show of hands (oh my god) and some feedback. Okay, good. So I was wondering: can you raise your hand if you would currently self-identify as an academic? You guys need to raise your hands.
D
Literally nobody, okay. And do you feel like you have a reasonable understanding of roughly who the top five vendors in this space are and roughly what they do? Okay. And then the last question: would you like to understand more about the cutting edge of academic research in this area?
D
Very positive. So, we had a cancellation today, and I thought it would be really fun to bring... I happened to look at the attendee list, and it was like: oh, we have literally the two best people in the world to talk about the academic side of this and do some questions. And so here they are.
F
That's very kind of you, but yeah, thanks for the opportunity. I was pleasantly surprised when I walked in and you invited us to do this. So, my name is Rodrigo. I am currently faculty at Brown University, and my PhD in 2007 was on observability before it was called that, before it was called anything. We had this paper called X-Trace, which happened at about the same time as the Dapper work at Google, and I think it was one of the first things that kicked off this space.
B
Were there interesting ways we could cut this data, slice it? How can we diagnose different performance problems, and so on? And so we came up with a bunch of different tools and techniques to do this, and that formed the crux of my dissertation research. Then I went off to a completely different world for a few years, and for the past two or three years I've been back in the tracing world.
B
Because that really kicked off a lot of the work in tracing. I think, if you look at the academic side of things, that's what grad students looking at distributed tracing today cite. I think it's very interesting in that it espouses a different model for what traces look like than the OpenTracing and OpenTelemetry world, and I think there are interesting trade-offs there.
B
There is also, I think, Pip, from 2006 or 2007, which is very interesting. That was about using traces to compare and contrast developers' expectations of how things should work against the actual behavior, which I thought was very cool. They actually had a language they developed to express behavior, the developers' expected behavior, and compared that to actual traces. And that was before the word "tracing" was a commonplace thing; they don't actually use the word "tracing" at all in the paper, as far as I know, but...
F
The Dapper paper was hugely influential as well. It spawned pretty much all the initial implementations, and even though I would have preferred the other model, I think in the end they're expressing the same thing. An early work that was also interesting (we're like blowing each other's horns here) is Raja's Spectroscope work, which was in NSDI 2010 or '11, it doesn't matter. It was also really cool because it had this idea of: okay, I have two traces.
B
I have one paper I can add to that. It's actually sort of related to the Pivot Tracing paper. There's an entire line of research on instrumentation quality: how do you know what instrumentation should be in your system in the first place, to help you figure out all sorts of problems? And the one paper I'd point out there is Log20, which was at SOSP just two years ago. What they're doing is trying to figure out, automatically, where that instrumentation should go.
D
The speaker, this thing right here: should we move further away from it to get rid of the feedback? Good, okay, cool. Now on to the more challenging questions. So, your jobs are to advance the state of knowledge, I think, as I understand it anyway; that's the whole point, right? So what are the major obstacles standing in your way of doing that right now? I mean, not just having the ideas: what's making it difficult to do your work?
F
One thing in this area in particular: it's actually quite tricky to publish research on observability and tracing and debugging in general, because the bar is quite high. It has to be a real, demonstrated problem. So it shouldn't be testing on systems that you pretty much invented or set up yourself, or testing to find problems that you injected; it should be real problems on real systems. And being in academia, that's really hard.
F
Thinking about it today, right there, I came up with this idea of an "open traces" organization that we should create, a repository of anonymized traces. Because we want to see many traces, we want to see how they're different, so that we can test machine learning algorithms against them. Yeah, I think that's the top thing I would claim.
B
Along those lines, I think certain communities have datasets that are out there, and the bar that they have is: can you find new insights from this dataset that haven't been found yet? So perhaps there's something like that for monitoring or tracing data that all of us could collaborate on to create, and that would allow a lot of researchers to work in this area. Although...
D
It's interesting. If you're talking about getting access to datasets, that's appealing, and that would be a good place to start. But a lot of... I mean, you mentioned Pivot Tracing recently. Pivot Tracing is a really interesting piece of work, but it actually modifies the behavior of the system in a way. So it's not just the data: ideally, you'd want to be able to run in a way that's non-disruptive on a production system, which, obviously, is harder, but it would...
B
This comes up in infrastructure research for cloud computing, right? There's a lot of stuff that uses the cloud, and you can do a lot of research sitting on top of AWS, using AWS services. But then, if you want to modify the infrastructure itself, that's another huge beast, almost, right? And I think it's worth separating the two, and I'm betting there's a huge class of things that can be done.
D
That paper went to two conferences and was rejected from both, because it was "of no scientific merit", which is actually legitimate: the X-Trace work, which is the same thing in many ways, had already been published. So people were like, well, there's nothing new here scientifically. The only thing we were really adding was experience from real data, how it had worked at scale at Google, and that's the only contribution that Dapper had scientifically. So I actually thought...
D
It made sense that it was rejected as a scientific contribution. But for something to actually be published as science, it has to clear, I think you said it right, a very high bar. It needs to feel credible, which means it needs real data, and it also needs to introduce new techniques and test a hypothesis. Most people in industry aren't doing that, and most people in academia don't have the data. So it is a problem, right?
C
On the topic of data: Rodrigo and I were talking about releasing Uber data at some point, and obviously I had to go through the legal departments at Uber. And the problem is, no one really understands what it means to anonymize. Well, they know what it means to anonymize the data in principle, but what does it mean in practice? Is it still decipherable in some way? So everyone is afraid: who knows what people will be able to find through this data, even though it's anonymized?
C
F
That's totally true. There's the famous case of the Netflix challenge data, right? It was anonymized, but people were able to correlate it with the IMDB dataset and find exactly the behavior of people who had reviewed the same movies, even though their names were not in the Netflix data. But I think, you know, this is the kind of thing where, if there are people interested, we can get together and hammer out best practices for how to do this.
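The kind of linkage attack Rodrigo describes can be sketched in a few lines. The data, names, and matching threshold below are invented for illustration; they are not taken from the actual Netflix/IMDB study:

```python
# "Anonymized" ratings: names removed, but (movie, rating, date) tuples remain.
anon_ratings = {
    "user_17": {("MovieA", 5, "2006-03-01"), ("MovieB", 1, "2006-03-04")},
    "user_42": {("MovieC", 4, "2006-02-11"), ("MovieD", 2, "2006-02-15")},
}

# A public dataset where some of the same people reviewed movies under their names.
public_reviews = {
    "alice": {("MovieA", 5, "2006-03-01"), ("MovieB", 1, "2006-03-04")},
    "bob":   {("MovieE", 3, "2006-01-20")},
}

def reidentify(anon: dict, public: dict, threshold: int = 2) -> dict:
    """Link anonymized users to named ones by overlapping review tuples."""
    links = {}
    for anon_id, ratings in anon.items():
        for name, reviews in public.items():
            # Enough shared (movie, rating, date) tuples acts as a fingerprint.
            if len(ratings & reviews) >= threshold:
                links[anon_id] = name
    return links

print(reidentify(anon_ratings, public_reviews))  # prints: {'user_17': 'alice'}
```

The point is that removing names alone does not anonymize: the remaining attributes can still uniquely fingerprint a person once a second dataset is available to join against.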
B
It's worth looking out for, like, interns to come into these companies and try out these things at scale. Because I feel like a lot of the internships tend to be focused towards undergraduates, right, or people coming in to get different experiences. Whereas if there were focused internships, it would be like: all right, cool, you came up with this academic solution to something, why don't you come in here and see?
D
It strikes me that the results of these analyses, I would say typically, could be shared widely, and it would be fine from a compliance standpoint. It's just that people don't, for good reasons: around GDPR and so on, they're not eager to expose user data, even if it only leaks accidentally or can be inferred indirectly. Even if you hash things, you can still infer people, that sort of stuff. So I like the idea a lot; it's like putting the research on-prem, at some level.
B
I think there also has to be better communication, almost, I feel, from the researchers' standpoint. At the academic conferences, what you said is entirely right: people look at this as science, and it's very rigorous, right? You have to be pushing the state of the art and proving that you're pushing the state of the art. On the other side, when practitioners look at what researchers do, it feels like, you know, the code released is usually prototypes, right? They're at best a starting point for future efforts, right?
E
So, Raja, riffing off of what you said: you kind of painted the picture of an intern maybe coming into industry, pulling some dataset, keeping it locally, in-house, and extending that. What about a partnership where that intern is working with somebody to actually put things into production in a controlled way and do those experiments that Ben was alluding to? Are there barriers to doing that, to actually making the production environment part of the experiment?
B
I think that'd be fantastic; I have no issues with that. The only thing is, it shouldn't be something that's going to take like seven years to get to work, because the students do have to graduate in a set timeframe, and they want to be publishing papers and so on, right? So if there's some way to stage it, I think that'd be fantastic, sure.
B
It could be on the order of months, or even a year, right? Like when I did my internship at Google on the Spectroscope work, I actually spent a year there trying to get stuff to work. So I think that would be fine, and amazing, actually.
G
To both of you: most of the work you've been doing, with X-Trace and Spectroscope, has been tracing. Have you looked at general metrics? Because there are many more environments where there are metrics coming out, time-series metrics and all the things we've talked about. Are you looking at that as well? Because I think, Raja, you were alluding to some of the cloud-related work; I know there's been work going on at Berkeley as well. Do you look outside of tracing?
B
One of my favorite papers on just regular monitoring was something HP Labs did, I think back in 2006 or 2007, where they were looking at how to capture system state to represent behavior. So they were like: okay, there is a problem right now; can we come up with a good way to encode the metrics currently being observed as a signature, so that we can recognize the case where the problem recurs? That was really cool, I think.
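A minimal sketch of that signature idea, assuming a toy quantization scheme; the metric names, bucket size, and exact-match rule are illustrative, not from the HP Labs paper:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Incident:
    name: str
    signature: tuple  # quantized snapshot of metrics seen during the problem

def make_signature(metrics: dict, step: float = 10.0) -> tuple:
    """Encode the currently observed metrics as a coarse signature by
    quantizing each value into buckets of width `step`."""
    return tuple(sorted((name, round(value / step)) for name, value in metrics.items()))

def match(known: list, metrics: dict) -> Optional[str]:
    """Return the name of a past incident whose signature matches the
    current metric snapshot, if any."""
    sig = make_signature(metrics)
    for incident in known:
        if incident.signature == sig:
            return incident.name
    return None

# Record a signature while a known problem is occurring...
past = [Incident("db-connection-pool-exhaustion",
                 make_signature({"cpu": 36.0, "db_conns": 98.0, "p99_ms": 940.0}))]

# ...and later recognize a recurrence from a similar (not identical) snapshot.
print(match(past, {"cpu": 41.0, "db_conns": 102.0, "p99_ms": 943.0}))
# prints: db-connection-pool-exhaustion
```

Quantizing before comparing is what makes the signature robust to small fluctuations; a real system would use a distance metric rather than exact equality.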
I
Hi, I have a question. So we have these traces today, which are request-centric, and most of the work in tracing has been to capture more and more of this data in a richer fashion. I'm curious if you have any thoughts on, and this is more a pie-in-the-sky kind of thing, the time-travel-debugging-based research people have talked about; people have been looking at that too, because traces are capturing the execution. I was curious whether you have ever considered time-travel debugging, or applying some of those techniques to these systems, to either capture the state of the machine or extract a trace out of it, or something like that, something crazy: basically recording the state of the system and then reproducing it later. Do you have any thoughts on how that compares to tracing? Is it even feasible? Does it make sense in production? Is the overhead too much? Anything along those lines.
F
You record just the minimal necessary points in the execution that allow you to later reconstruct it. So essentially you capture all the non-determinism that comes from outside the system, and then you run the system again, and instead of calling a timer you basically give it back the recorded timer value, and so forth. So if you have the source code, you can actually record a lot less than a trace would, because you can later regenerate the trace exactly as it would have been generated. But there's this trade-off.
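A toy sketch of the record/replay idea Rodrigo describes, interposing on a single source of non-determinism (the timer); the class and function names are invented for illustration:

```python
import time

class Clock:
    """Interpose on one source of non-determinism (the timer).
    In record mode, real values are captured; in replay mode, the recorded
    values are fed back, so the same execution can be reconstructed."""
    def __init__(self, mode: str, log=None):
        self.mode = mode
        self.log = log if log is not None else []
        self._i = 0

    def now(self) -> float:
        if self.mode == "record":
            t = time.time()
            self.log.append(t)   # capture the non-deterministic input
            return t
        t = self.log[self._i]    # replay: return the recorded value
        self._i += 1
        return t

def handle_request(clock: Clock) -> str:
    # The application only sees the clock abstraction, so a replay
    # regenerates exactly the output the original run produced.
    start = clock.now()
    end = clock.now()
    return f"request took {end - start:.6f}s"

recorder = Clock("record")
original = handle_request(recorder)

replayer = Clock("replay", log=list(recorder.log))
assert handle_request(replayer) == original   # identical execution
```

Only the two timer values are logged, not the full trace; the rest of the execution is regenerated by re-running the code, which is the record-less-than-a-trace trade-off mentioned above.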
D
There's a group in the Midwest, I can't recall the names, unfortunately, so this is kind of useless, but they're doing this type of thing at the kernel/container layer, which allows them to reconstruct the state of the container, although it does have, I think, maybe 5% overhead or something like that. And then the other issue, of course, is that because it's at the container level, the application context is kind of gone. I mean, you can go into the container and try to figure it out, but it doesn't help with that. But...
B
Verification: so, well, first of all, there has been the application of tracing to different use cases. Like, Rodrigo, you have that Retro paper, which is about resource attribution: how can you figure out how to allocate the right resources to different clients, based on client IDs that are propagated along with requests? There are other debugging use cases I've seen too, like profiling specifically.
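The client-ID propagation described here can be sketched with Python's `contextvars`, standing in for the baggage mechanism a real tracing system would provide; the names and numbers are illustrative, not from the Retro paper:

```python
import contextvars
from collections import defaultdict

# The client (tenant) ID travels with the request via a context variable,
# so deep layers can attribute resource usage without explicit plumbing.
client_id = contextvars.ContextVar("client_id", default="unknown")

usage = defaultdict(int)  # per-client resource usage, e.g. rows read

def storage_read(rows: int) -> None:
    # Deep inside the storage layer we still know which client to charge,
    # because the ID was propagated along with the request.
    usage[client_id.get()] += rows

def handle_request(client: str, rows: int) -> None:
    token = client_id.set(client)   # attach the ID at the request boundary
    try:
        storage_read(rows)
    finally:
        client_id.reset(token)      # restore on exit

for client, rows in [("tenant-a", 500), ("tenant-b", 20), ("tenant-a", 300)]:
    handle_request(client, rows)

print(dict(usage))  # prints: {'tenant-a': 800, 'tenant-b': 20}
```

Once each shared layer charges work to the propagated ID, a scheduler can throttle or reallocate resources per client, which is the attribution idea the paper is about.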
B
There's a paper from SoCC a few years ago called Lineage-Driven Fault Injection, where they're using a very similar technique to figure out how to inject really weird faults into the system: they build up a notion of the system state and also tag the requests. I haven't seen anything on formal verification, to my knowledge, but...
F
I think there might be room for that. If you combine it with symbolic execution, you might try: okay, looking at this piece of code for this service, these are the possible types of traces it could emit, and you could perhaps correlate that with the traces you actually see. But I haven't seen anyone doing exactly that. Again, there are many uses of trace-like data and context propagation, and, I think, yeah, things like capabilities, all sorts of interesting things you can do as well.