From YouTube: Cloud Native End User Lounge
A: Again, yeah, Cheryl, go forward, please. Cool. So, Ricardo, I know you and I have kind of collaborated and worked together in a few different ways now through CNCF, but it'd be great if you could tell us a little bit more about your role, what you do within your team, and a little bit about the infrastructure at CERN.
B: Yeah, so first of all, thanks for the invitation. I'm super excited to participate in this, and always happy to tell people about CERN and our use cases. I'm a computer engineer in the CERN cloud team. I focus mostly on containerization and networking aspects, but also on accelerators (GPUs and other types of accelerators), and more recently I try to help our users with machine learning use cases.
B: So I've been here for a while. I've had different roles, but right now that's mostly what I'm doing. We have a pretty large infrastructure. I'm sure most people have heard of CERN: we are a big particle physics laboratory, and we have large scientific experiments that do cool things like accelerating protons to close to the speed of light and making them collide at specific points. The result of all of this is a lot of data, and that's why we have to have a large infrastructure to process it.
B: To analyze all of this we have a large private cloud with more than 300,000 cores, and actually that's not enough for our requirements. So, in the last 20 years we built a large distributed computing infrastructure that more than doubles that capacity, and this is done by collaborating with more than 200 centers around the world that help us with our tasks.
B: Yeah, so we've been doing big data things for many years now, and we are always looking for the next thing that can help us be more efficient and basically process more events per second, which is our main measurement. The main thing is that we saw the potential to use these kinds of technologies to improve our infrastructure and be more efficient. This was first in terms of simplifying our infrastructure, because we could just rely on a uniform API that is declarative and kind of separates the workloads from the infrastructure, and allows our users to just tell us what they need; the infrastructure itself will then decide how to replicate the services or the workloads, and also do things like cluster autoscaling. The second one was the ecosystem around it.
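Purely as an illustration of that declarative idea, here is a minimal Python sketch of a desired-state reconciliation loop (not CERN's actual tooling): the user only declares how many replicas they want, and the platform works out the actions needed to converge.

```python
# Minimal sketch of declarative reconciliation (illustrative only, not CERN's code).
# The user states *what* they want (desired replicas); the loop decides *how* to get there.
from dataclasses import dataclass


@dataclass
class DesiredState:
    replicas: int      # what the user declares


@dataclass
class ObservedState:
    running: int       # what is actually running


def reconcile(desired: DesiredState, observed: ObservedState) -> list:
    """Return the actions needed to converge the observed state onto the desired state."""
    diff = desired.replicas - observed.running
    if diff > 0:
        return ["start replica"] * diff
    if diff < 0:
        return ["stop replica"] * (-diff)
    return []          # already converged


if __name__ == "__main__":
    print(reconcile(DesiredState(replicas=5), ObservedState(running=3)))
    # ['start replica', 'start replica']
```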
B: Maybe a third one would be, since I mentioned this distributed computing infrastructure that needs to scale out, the fact that the Kubernetes API got so popular. It means that there are a lot of options for people to deploy elsewhere, and there are also public clouds supporting it. So we can more easily integrate external resources into our systems and then, instead of focusing so much on the infrastructure, just focus more on the physics side.
C: That's actually a very good introduction to your infrastructure. You've mentioned before that you manage more than 600 clusters at the moment, with thousands of nodes. Would you be able to dive a bit deeper into some of the technical challenges that you face managing infrastructure at that scale? Do you have any challenges around access management, networking, storage? Just some examples of what exactly you have to solve on a daily basis.
B: The challenges are, in some ways, really very different. One of the main challenges is that we have a huge variety of use cases we have to cover. We have the more traditional IT services, like the administrative services and the things that keep our campus going, which is not something people necessarily associate with CERN, but we actually have more than 10,000 physicists working here all the time.
B: So it's also a challenge to do that: we have to focus on availability and making sure that these services are always up. On the other hand, we have to scale out to do the physics analysis, so we need to be able to distribute the workloads really fast and push a huge amount of workloads into these clusters. I would say that having clusters that serve both use cases is a challenge; we need to be able to configure the clusters very differently depending on the targeted use cases.
B: As for the technical challenges, I think the main one we have, and have had for several years now, is managing upgrades. The way we do this for the application upgrades and deployments is that we promote GitOps as much as possible, to be able to deploy the applications in different clusters, and then we also promote treating clusters as cattle internally, so that if their cluster breaks for whatever reason, people don't completely lose the service; they might lose some capacity. They can also easily do upgrades onto new clusters by just moving the applications there. So I would say those are the two main areas where managing many clusters is kind of challenging for us.
C: You've actually mentioned two very good areas. I would like to ask a few more questions about GitOps and clusters as cattle. Now, managing multiple clusters, there is usually a choice between having multiple namespaces and distributing those to different teams, or having multiple clusters. Would you be able to elaborate a bit more on how you set up those clusters, and what your tools of choice are to manage your clusters overall?
B: Right, so we have two different types of deployments. One is one big, centrally managed cluster, which is using OpenShift; this does isolation using namespaces and is very much centrally managed. Then we offer a service that is more Kubernetes as a Service, where we are able to deploy Kubernetes on top of our lower-level OpenStack deployment, and users can create their own clusters using an API that we provide, which allows them to create the cluster but also to scale it, upgrade it and delete it. So they manage the whole life cycle like this, and then there is the layer above, the tools that we use for these clusters.
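To give a flavour of what such a cluster life cycle API can look like from the user's side, here is a hypothetical sketch; the endpoint, fields and template name are invented for illustration and are not the actual CERN or OpenStack API.

```python
# Hypothetical sketch of driving a cluster's life cycle through a REST API.
# The base URL, headers and payload fields are invented for illustration only.
import requests

API = "https://container-service.example.cern.ch/v1"   # hypothetical endpoint
HEADERS = {"X-Auth-Token": "<token>"}                   # auth details elided

# Create a cluster from a template.
cluster = requests.post(f"{API}/clusters", headers=HEADERS, json={
    "name": "physics-analysis",
    "template": "kubernetes-1.20",
    "node_count": 10,
}).json()

# Scale it up when more capacity is needed.
requests.patch(f"{API}/clusters/{cluster['id']}", headers=HEADERS,
               json={"node_count": 50})

# Delete it when the work is done ("clusters as cattle").
requests.delete(f"{API}/clusters/{cluster['id']}", headers=HEADERS)
```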
B: There we give internal tutorials and seminars to disseminate best practices, so that people can use something like Flux or Argo CD to manage the deployments in the multiple clusters. So it's kind of a several-step thing: on the infrastructure side we focus a lot on making the clusters as reliable as possible, and then on disseminating best practices, so that the end users can do the best for their applications.
C: Amazing, and that's actually a very nice segue into developer experience and how we can improve the application deployment life cycle, because I think there has been a lot of thought about how we can transition our engineers to work with YAML, or maybe to work with templates, like Kustomize or Helm, and now there's a big movement and shift towards GitOps, which you've mentioned is a practice you have already enabled.
C: So would you be able to showcase, or describe, a bit more how exactly an application is deployed with GitOps, whether you use any particular tools specifically, and what kind of support you provide for your developers?
B: Yeah, so as I mentioned, GitOps is not something new for us, I would say; we've been doing similar things with the other configuration management systems that we had, like Puppet, and previously others, even with other types of version control, so this is something we are used to. One of the main motivations here was to get people interested and to onboard them onto our Kubernetes deployments.
B: Sometimes the learning curve can be kind of hard, so if they can just go straight to the version control, where they have a bit of YAML describing how things should look, and then build on that, it actually kind of bootstraps their entry into the Kubernetes world.
B: So that was one motivation, but then what we try to do is to have a single source of truth for all the application definitions. This covers the defaults for all the environments that we have, but also allows customization per environment or per cluster. We used mostly Flux at the start, and there are quite a few users now that also use Argo CD, so we don't mandate one of them.
B: What we try to do is to have people using either Helm or Kustomize as much as possible. We still have a couple of deployments that use plain manifests, and that also works with GitOps, but it's not as customizable. So we do a lot of internal training, where we advertise repos with small tutorials that show how to use these tools internally.
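To make the idea of shared defaults with per-cluster customization concrete, here is a small, hypothetical Python sketch of the overlay-style merge that Helm values files or Kustomize overlays perform; the keys and values are invented and this is not CERN's configuration.

```python
# Illustrative sketch of layering per-cluster overrides on top of shared defaults,
# the same idea that Helm values files or Kustomize overlays implement.

def merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`; the override wins on conflicts."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out


defaults = {                  # single source of truth shared by every environment
    "replicas": 2,
    "image": {"repository": "registry.example.cern.ch/app", "tag": "1.0"},
    "resources": {"cpu": "500m", "memory": "512Mi"},
}

production_overlay = {        # per-cluster customization
    "replicas": 6,
    "resources": {"memory": "2Gi"},
}

print(merge(defaults, production_overlay))
# {'replicas': 6, 'image': {...}, 'resources': {'cpu': '500m', 'memory': '2Gi'}}
```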
A: Cool. I wanted to ask a little bit more, Ricardo, about observability, because we did a CNCF End User Tech Radar on observability in September 2020, where we asked people what observability tools they are using. Obviously, since you run 600 clusters, you must be observing those, so can you tell us about the tools that you're using and the approach that you take?
B: Yeah, so the approach we took was to try to integrate with the existing infrastructure we already have for other ways of deploying applications. This was crucial so that people could migrate their workloads gradually. Our infrastructure for collecting logs and metrics already existed, and we have gateways where we can push both logs and metrics. For logs, we use Elasticsearch as a backend for short-term log storage and querying, and then we have an HDFS backend where we store logs for longer-term analysis. For the metrics it's a similar situation: there's a gateway where we can push the metrics, and this splits into multiple destinations, which include InfluxDB and HDFS as well. When we started introducing Kubernetes, Prometheus took on a huge role, so we are able to push the same metric types from Prometheus sources into these gateways, and they are centrally collected.
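For the metrics path, a push-style gateway like the one described can be exercised with the standard prometheus_client library; the gateway address and metric below are hypothetical.

```python
# Minimal sketch of pushing a metric from a batch job to a Prometheus-style
# push gateway using prometheus_client; the gateway URL and job name are hypothetical.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
events = Gauge("events_processed", "Events processed by this batch job",
               registry=registry)
events.set(125_000)

# Push once at the end of the job; the central pipeline fans the data out to its backends.
push_to_gateway("pushgateway.example.cern.ch:9091", job="reco-batch", registry=registry)
```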
B: One interesting part here is that we offer our users two levels of observability. In-cluster, they get very fine-grained metric collection, and those metrics are kept for something like two weeks; this is configurable, but on average I would say two weeks. This allows us to debug very recent problems in detail. Then what we push centrally is an aggregation of these metrics for more long-term analysis as well.
B: We actually do this using standard Prometheus federation for now, so the registration of each cluster is still a manual step, but one of the things we are looking at as we speak is projects like Thanos or Cortex, where we could do more dynamic registration of the endpoints centrally.
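With standard Prometheus federation, a central Prometheus scrapes each cluster's /federate endpoint for a filtered, usually pre-aggregated, set of series. A quick manual check of that endpoint could look like the following sketch, with a hypothetical cluster URL.

```python
# Sketch of reading aggregated series from a cluster-local Prometheus /federate
# endpoint, the same endpoint a central Prometheus would scrape. The URL is hypothetical.
import requests

resp = requests.get(
    "http://prometheus.cluster-042.example.cern.ch/federate",
    params={"match[]": ['{__name__=~"job:.*"}']},   # only pre-aggregated recording rules
    timeout=10,
)
resp.raise_for_status()
print(resp.text[:500])   # Prometheus exposition format, ready for central ingestion
```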
C: The other thing I wanted to focus on is that currently you seem to have solved multiple challenges across different stacks, be it infrastructure or observability.
C: Another aspect I'd like to focus on a bit more is the community, because I know that CERN has been giving a lot of talks at past KubeCons and CloudNativeCons, including keynotes and regular sessions throughout, and we're going to have KubeCon Europe in May, from May 4th to 7th, where there are going to be three main talks delivered by CERN employees.
C: Of the first two talks I would like to focus on, the first one is going to be about CRDs and operators and how they can be used to manage Drupal websites. This is going to be given by a colleague of yours, so if you have any details about that one I would definitely ask you to elaborate on it. But the session that you'll actually be delivering is going to be on managing a centralized machine learning platform using Kubeflow.
B: Yeah, I'm happy to, of course. I would say that's also one of the things that is pretty special about this usage we have of cloud native: it kind of touches everything at CERN, all the groups. We are very much users of open source technologies, but having a set of tools that touches so many groups in our community is kind of special. So, for the talk from Konstantinos and Rajulia, they'll talk about managing Drupal sites. This is something that is quite important at CERN, because dissemination and communication are very important for us. We run tens of thousands of websites, some of them extremely visible, like the CERN home page, and they run on this infrastructure.
B: So what was developed is really exploring an operator that allows us not only to deploy a website but to manage the life cycle of these websites, so upgrades, backups and restores, things like this. They will give a pretty cool talk about this, I hope; I'm sure. And then for the machine learning: machine learning is really having a huge impact at CERN.
B: There are many groups looking at it for different areas. There are use cases from doing particle classification in the detectors, to generating simulation data at a very fast pace, and also things like reinforcement learning for beam calibration, things that I try to help with but don't always completely understand.
B: But then, if they want to scale out, they don't necessarily have the infrastructure or the knowledge to do that. So we wanted to offer a service that will help them out with this. Also, if we start thinking that there are multiple groups with these custom deployments, then the resource usage is not super efficient, so we also wanted to bring everything together so that we can improve the efficiency of resource usage.
B: So we try to offer a service, and we will describe how we do this, that will manage the whole life cycle of their machine learning workloads, from data preparation all the way to serving, but also how we integrate different types of resources, even types of accelerators we don't have on premises, like TPUs and IPUs, that we can get from public clouds. I'm looking forward to talking to people with similar requirements afterwards as well.
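As a flavour of what managing that life cycle looks like with Kubeflow Pipelines, here is a minimal, hypothetical two-step pipeline using the kfp v1 SDK; the steps are placeholders rather than CERN's actual workloads.

```python
# Hypothetical two-step Kubeflow Pipelines (kfp v1) sketch: prepare data, then train.
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def prepare_data(sample_count: int) -> str:
    # Placeholder: would read raw events and write a prepared dataset somewhere.
    return f"dataset-{sample_count}"


def train_model(dataset: str, epochs: int) -> str:
    # Placeholder: would train a model and export it for serving.
    return f"model-from-{dataset}-after-{epochs}-epochs"


prepare_op = create_component_from_func(prepare_data, base_image="python:3.9")
train_op = create_component_from_func(train_model, base_image="python:3.9")


@dsl.pipeline(name="ml-lifecycle-sketch",
              description="Illustrative pipeline: data preparation followed by training")
def pipeline(sample_count: int = 1000, epochs: int = 5):
    prepared = prepare_op(sample_count)
    train_op(prepared.output, epochs)


if __name__ == "__main__":
    # Compile to a workflow definition that a Kubeflow Pipelines instance can run.
    kfp.compiler.Compiler().compile(pipeline, "ml_lifecycle_sketch.yaml")
```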
B: Definitely, yeah. I think that's one of the advantages of working at CERN: we are able to be really open about what we do, and this is promoted, and we really want people to learn more about our main mission, but also about how we do things. So the Research User Group actually came up after a lunch break during KubeCon Barcelona, with Bob Killen and other people together, and then we thought of having a group that would be dedicated more to the research-oriented use cases, which are kind of different from the more traditional IT service deployments.
B: This includes things like batch, managing Spark and batch workloads, having queues and a way to do fair shares, and focusing on accelerators. Then we talked to you and, as usual, or as always, the response was very positive, and we kind of started this group. The goal is really to get together all these different institutions, academic institutions and companies, that have these similar requirements, to try to learn more from each other, but also to document the use cases that we have and the solutions that different people have, and then maybe give feedback to the wider community on how the systems can be improved to serve these use cases. So it's been really exciting to be part of it. I think we have pretty good momentum right now, with a set of regular attendees and newcomers all the time, so I think it's quite exciting.
A: It definitely is exciting. I mean, I really enjoy learning about it. As you said, we have this core group of people who have the same use cases, and therefore run into the same challenges, just very openly sharing what they do. And I want to mention also that you are one of the co-chairs for this group, so you've really been a leader for it, and it's been fantastic to see that.
B: Yeah, and it's very interesting to see, and it really shows another benefit of this cloud native community: CERN has had these big data requirements for many, many years, but if we look 20 years ago we were kind of by ourselves, and now there are so many people with similar requirements.
B: Yeah, so we have one internal challenge that is pretty big, which is that we have a big machine and we generate lots of data, but we are about to have an upgrade in a couple of years that will multiply this by 10, and it's not totally obvious how we can handle that. So we do as we always do.
B: We look around and try to find solutions, and we work together with everyone to try to tackle these challenges. If we look at the integration with cloud native technologies, the things I mentioned before, like supporting batch workloads in a more advanced way, things like queues and doing fair share on workload submissions, are very important for us.
B: Our systems do this today, and if we manage to integrate this functionality into the ecosystem, then it will allow us to have all the other benefits, from scaling out to integrating external resources in a much easier way, coming together. And then the second one, again: we are moving a lot of our workloads from CPUs to GPUs and other accelerators, so support for these types of resources is extremely important and we have to be able to do this.
B: Yeah, so I think the experience we have from doing it internally is that there are plenty of resources to learn about the details and all the tooling, but one important thing is to focus on doing the transition gradually. We've seen internally that any kind of dramatic transition will always be very complicated, so make sure that all the required integrations are done properly first, and then gradually transition the services to the new infrastructure.
C: Since we touched upon the community, I have another question about CERN's experience as an end user member. Would you be able to describe a bit how you find the end user community and your engagement within it? Has it been helpful for you to reach out to other companies within the same industry? Just a bit on how you find this experience overall.
B: These days it's more in calls and the different groups that are organized, so joining the special interest groups or the end user calls is a good way, or Slack, of course. But one thing that I really appreciate is also the conferences.
B: At events, the direct contact makes it really easy to engage with other communities, with other end user members, and we've had several follow-ups from lunch or coffee breaks at the conferences that turned into full days of sharing.
C: I can only echo that; I very much miss the coffee chats during the networking sessions at KubeCon. Hopefully again some time soon. Another thing I wanted to mention is that recently you have been elected to the TOC, the Technical Oversight Committee, so congratulations on your new position. Would you be able to tell us a bit more about your responsibilities within this role? Are you excited about it? How are you feeling about being in this position?
B: I'm very excited. It's been a very quick start, jumping in, but I'm learning about all the tasks and all the things to do. It's super interesting, because I get exposure to a much wider set of projects and communities that are part of the cloud native ecosystem. It's something I could do externally, but I wouldn't necessarily have all the time to focus on it; this way I can dedicate some time.
B: So really the task is to make sure that everything coming into the ecosystem integrates well together and that we are able to offer it. And especially in my case, I'm part of the TOC as an end user representative, to make sure that we take the requirements of the end users into account, and not just the wishes from the projects, of course, but try to match those against the expectations of the end users.
B: It's very important that we have vibrant new tech coming in all the time, but it's also important that we keep our end users happy and don't constantly break systems because there's a new one.
A
That's
so
critical
to
have
that
end
user
perspective,
because
I
think
there
can
be
a
lot
of
discussions
about
what
the
right
thing
is
to
do.
But
once
you
have
an
end
user
who
says,
I
actually
have
this
problem.
I
see
other
companies
who
have
this
same
problem,
then
that
cuts
through
a
lot
of
that
and
just
keeps
things
moving,
so
that
that
perspective
is
really
really
important,
and
it's
something
that
I
think
will
grow
over
the
coming
years.
B: Yeah, and one thing that I keep saying internally as well is that all the work that is done in the TOC around evaluating the projects, and even giving them a maturity level, is extremely helpful for end users, because you know what to expect and you know what the criteria are for the different levels. So you can already approach a project with some kind of assurance of what you can expect from it.
C: Another question I have, maybe to kind of wrap it up: since you are a TOC representative and CERN is actually a very big user of cloud native technologies, I would like to ask if there are any new projects, or maybe new methodologies or kinds of practices, on the horizon in the cloud native ecosystem that you'd like to use in the future, or something which draws your attention, or your team's attention, in particular. Is there anything that you would highlight so far?
B: Yeah, so I can give a bit of my personal experience, but also what I hear from my colleagues, which is maybe also relevant. One project that I've been hearing about is OpenTelemetry and how it aims to offer a more consistent way to handle all the observability parts, and this is something that is quite important: we have a good story for logging and metrics, but we probably don't have such a consistent story for tracing.
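As a small illustration of what a more consistent tracing story can look like, here is a sketch using the OpenTelemetry Python SDK, with a console exporter standing in for a real tracing backend; the span names are hypothetical.

```python
# Sketch of emitting nested spans with the OpenTelemetry Python SDK.
# A console exporter stands in for a real tracing backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("analysis.pipeline")   # hypothetical instrumentation name

with tracer.start_as_current_span("fetch-events"):
    with tracer.start_as_current_span("reconstruct"):
        pass   # the actual work would happen here
```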
B: It's also a nice way to integrate, again, existing infrastructure or existing services into the same ecosystem, and to try to make the APIs uniform and well managed in a central place. So it's another project that I'm kind of excited about as well.
C: Absolutely, especially with Crossplane; I think there's going to be a dedicated co-located event at KubeCon for it, so definitely looking forward to that one as well. And we actually have a question from Germa, who is asking: are you using different public cloud providers to deploy the whole infrastructure, and is there communication between them? Would you be able to elaborate a bit more on that?
B: For quite a while now we've had several use cases where we try to scale out, and we gave a keynote about this, where we managed to reproduce the Higgs discovery in just a couple of minutes using public cloud resources, in that case Google, and we keep pushing. In reality, what we want is to do the most with the resources we can access, so we will keep pushing for public clouds.
B: We often say that our workloads are embarrassingly parallel, because we can just distribute them and they don't need to talk to each other, and this allows us to distribute them even across different clouds in a much simpler way than if we had to have very tight interconnectivity between the different systems. So we are able to scale out in this way more easily. This is true for our traditional workloads.
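The embarrassingly parallel property he describes simply means that each unit of work is independent, so it can be fanned out to any worker, cluster or cloud without coordination; a toy Python illustration (not CERN's actual distribution system):

```python
# Toy illustration of embarrassingly parallel work: each chunk of events is processed
# independently, so chunks can be fanned out across workers, clusters or even clouds
# without any communication between them.
from concurrent.futures import ProcessPoolExecutor


def process_chunk(chunk_id: int) -> int:
    # Placeholder for reconstructing or analysing one independent chunk of events.
    return sum(i * i for i in range(chunk_id * 1000, (chunk_id + 1) * 1000))


if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_chunk, range(16)))
    print(f"processed {len(results)} independent chunks")
```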
B: Yeah, honestly, I didn't have time to go into detail on the schedule yet; I plan to do it very soon. But I'm sure the issue is really being able to watch all the talks, even later; it's very hard to find the time to go through all of it. But I think the last message I would put here is that the community around cloud native is making a huge contribution to our mission, as I mentioned.
B: We have this mission of doing fundamental research and understanding the universe better, but it implies a lot of technical challenges, including IT challenges, and the fact that we can get so much help from the wider community and all the institutions that collaborate in cloud native projects is a huge plus. It changes the way we work compared to, again, 20 years ago. So I would stress that message: the wider community is having a huge impact on what we try to do in our work.
A: Well, I want to say thank you to you, Ricardo, actually, because I really admire how much you've contributed in all these different ways: being a member of the TOC, sharing what CERN is doing, leading the Research User Group, and just being a really warm and welcoming part of our community. So thank you for being there.
C: And we actually have a very positive comment from, I'm trying to pronounce the name right, Ayayu, who mentions that the end user perspective at CERN and the support they've given to the community have been amazing.
C: So thank you very much for mentioning that. And with that, I think we're getting towards the end of this stream. I would like to thank everyone that attended and was with us for this first episode of the End User Lounge, and thank you very much, Ricardo, for being our first guest, and Cheryl for co-hosting this as well. I would also like to mention that if you have any questions, please ask them, and join us in the future streams as well.
C: We will try to deliver this every second and fourth Thursday of the month at 9:00 a.m. Pacific time, and please do join us at KubeCon and CloudNativeCon, which is going to be May 4th to 7th. Of course, we're also going to have the co-located events, if you want to focus on more targeted technologies or methodologies. And if you'd like to join our end user community: this is the community which is vendor neutral, and we'd like to showcase your usage of cloud native tools.