From YouTube: Deep Dive on the OpenShift Logging Stack, Gabriel Ferraz Stein, Red Hat | OpenShift Commons Briefing
Description
OpenShift Commons Briefing
Gabriel Ferraz Stein (Red Hat)
Recorded on 04-04-2020
Deep Dive on the OpenShift Logging Stack
Slides: https://blog.openshift.com/wp-content/uploads/A-deep-dive-on-the-OpenShift-Logging-Stack.pdf
Join OpenShift Commons here: https://commons.openshift.org#join
A
Well, hello, everybody, and welcome again to another OpenShift Commons briefing. This time we're going to get a deep dive on the OpenShift logging stack from someone who's been doing a lot of work with it: Gabriel Ferraz Stein, who's one of the Technical Account Managers for OpenShift. I'm going to let him introduce himself and his background and take it away. We'll have live Q&A at the end, and you can stage your questions in the chat. So Gabriel, please take it away.
B
So, let's start with the basic definitions of the logging stack. What is the logging stack? First, when you set up OpenShift and then deploy the logging stack, you have mainly three components working: Elasticsearch, Fluentd, and also Kibana.
B
Those three components form the acronym EFK: Elasticsearch, Fluentd, and Kibana. So those are the main components. And what does Fluentd do? It tails the logs from every node on which I have OpenShift running. Fluentd pods are deployed on every node, and these pods tail the logs from that node: node information, and also logs from the namespaces and applications which I'm running on top of OpenShift.
B
Elasticsearch is a Java application, so I will not go into a discussion of whether Java is good or not, but there are concerns: it reserves a lot of resources. So you should have plenty of RAM and CPU for it, and you should plan for that: do we have enough or not? And also disk, not necessarily a lot of it, but with good IO.
B
So you need to have fast disks so that you can store these logs with fast access, or at least access will be better when we use Kibana. And you might ask me: "Hey Gabriel, how much RAM or how much CPU should I have for this?" It's hard to go to a customer and say: "Hey, put in this amount of RAM and it will work for you, or this amount of CPU."
B
This is something that should be planned. You should first check how many logs you have in your OpenShift cluster, and from that amount of logs estimate how much RAM you need for it. We have some documentation, and in the documentation we have a kind of formula. With this formula you can see how many lines of logs will be stored, how many indices, and everything. But from my experience, you should allocate a lot of RAM, and CPU as well, to this logging stack from the start.
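The documentation's formula is more detailed, but a back-of-the-envelope estimate along those lines might look like this. All the numbers and the overhead factor below are illustrative assumptions for the sketch, not Red Hat's published formula:

```python
def estimate_storage_gib(lines_per_sec: float,
                         bytes_per_line: float,
                         retention_days: int,
                         replicas: int = 1,
                         overhead: float = 1.5) -> float:
    """Rough Elasticsearch storage estimate for a logging stack.

    replicas=1 means one extra copy of each shard (2 copies total);
    overhead is an assumed factor for index structures and segment merges.
    """
    raw_bytes = lines_per_sec * bytes_per_line * 86_400 * retention_days
    total_bytes = raw_bytes * (replicas + 1) * overhead
    return total_bytes / 2**30

# e.g. 1,000 lines/s at ~256 bytes/line, kept for 7 days, one replica
print(round(estimate_storage_gib(1000, 256, 7), 1))
```

The point of such a sketch is only to show that retention days and replica count multiply the raw log volume, which is why planning up front matters.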
B
If you are setting up Elasticsearch without a plan for how to do it, there is a need for resources, and then you will have problems ingesting these logs, storing these logs in Elasticsearch, and also producing indices. And if that happens, you will not be able to see these logs in Kibana, because Elasticsearch cannot prepare the logs so that I can check them in Kibana.
B
Fluentd is also a pod which will be running on my nodes, the OpenShift nodes, and the approach is that these logs are sent to Elasticsearch, and Elasticsearch will create indices and manipulate the logs so I can view and visualize them in Kibana. Kibana is just a web UI for Elasticsearch, which I use to check my logs produced on my OpenShift nodes. I have also some deployment considerations: what we should do before we deploy the logging stack on our OpenShift cluster.
B
The most important thing when deploying it on an OpenShift cluster is, as I already said, to plan. You should plan how big the amount of logs you produce is, and how many resources you will need for it. And of course, if you have split clusters, one OpenShift cluster for development and another split cluster for production, you should check that as well.
B
You need to plan the resources, and as I said, there is a basic calculation which I will show you later. There is also a solution from Red Hat, a Knowledgebase article, which you can use for troubleshooting the logging stack, and you can find some basic information there. It's also important that you check the configuration of Fluentd, mainly on OpenShift.
B
We have some plugins which are already deployed with the Fluentd configuration, running together with Fluentd, but unfortunately, or fortunately, we don't work on new plugins. If you start using different plugins together with the Fluentd pods on top of OpenShift, these will bring some complications for us.
B
So my advice is to take care with this: just don't install different plugins on top of the Fluentd pods, because you'll probably have some problems, and then you reach our support, and then we see that there is some plugin which is, let's say, not supported by us. Then you have some problems, right?
B
The second consideration, or another consideration, is for Elasticsearch: if you would like to have an HA setup, please have three OpenShift nodes running Elasticsearch pods, so that you have HA running in your cluster. The other thing is storage: do take care of how much consumption you have on your storage for these Elasticsearch indices. It can bring some complications, so keep it between 50 and 70%.
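As a local sketch of that 50-70% guidance: in a real cluster you would check the Elasticsearch data volumes rather than a local path, and the thresholds below simply mirror the numbers from the talk:

```python
import shutil

def disk_used_percent(path: str) -> float:
    """Used space of the filesystem containing `path`, as a percentage."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def watermark_status(used_pct: float, low: float = 50.0, high: float = 70.0) -> str:
    """Classify usage against the 50-70% rule of thumb from the talk."""
    if used_pct < low:
        return "ok"
    if used_pct <= high:
        return "watch"       # inside the recommended band: plan cleanup
    return "critical"        # above 70%: expect indexing problems

print(watermark_status(disk_used_percent("/")))
```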
B
No more than that. If you go above that, you start getting some complications, and you probably cannot consume these logs in Kibana. Another consideration is Docker: if you are doing the configuration for Docker, at least set json-file as the log driver. This is the standard in the newer versions of OpenShift, and journald gave a lot of complications for customers in the past.
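On an OpenShift 3.x node, that advice translates into the Docker daemon configuration, typically `/etc/docker/daemon.json`. The rotation sizes here are illustrative, not a Red Hat recommendation:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
```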
B
So we advise our customers to use json-file instead of journald, so that we can work better with the customer. Then there are replicas of my data in Elasticsearch, and please pay attention here: this is not about the nodes, or the pods which you have for Elasticsearch. You can have many Elasticsearch pods, but what matters here is the data which will be stored by Elasticsearch.
B
Mainly, if you have a replica count of 1, you have 2 copies of this data in Elasticsearch, and for 3 you have more copies. So you have replicas of this data, and it will be safer and more secure, so that if you have some problems on an Elasticsearch node, you can use another copy of this data. And another recommendation, or consideration, is a configuration which you can set on the daemonset for Fluentd: the merge JSON log setting.
B
This is causing a lot of problems in our customer setups, because there are a lot of applications running on these clusters from our customers which have different kinds of mappings for the data in the indices, and Elasticsearch sometimes doesn't understand it. Why doesn't it understand? Because you have some data stored in these indices as strings or binary data types.
B
So if you try to store a binary or an integer in a string data type there, you have some problems, and with merge JSON log set to true, this will bring you problems with the logs, because you cannot see some part of the logs, or part of these logs will not appear in Kibana. False means that you kind of ignore these mappings from string to integer, and they will not break your logging, and the logs will appear in Kibana.
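In the OpenShift 3.x-era logging stack, this setting is exposed as an environment variable on the Fluentd daemonset. A sketch of the relevant excerpt; container and daemonset names may differ in your version:

```yaml
# Excerpt of the Fluentd DaemonSet in the logging project (OpenShift 3.x era)
spec:
  template:
    spec:
      containers:
      - name: fluentd
        env:
        - name: MERGE_JSON_LOG
          value: "false"   # avoid type-mapping conflicts in Elasticsearch
```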
B
Now I have some tips to improve performance, and the first is something I just need to say again: please don't use NFS. This is in our documentation: please don't use NFS in production. It is the worst mistake you can make, deploying the logging stack and the data from Elasticsearch on top of NFS. As I already said, fast disks are a must; you need to have good IO.
B
And RAM: you need to have enough RAM so that you can have better log processing in Elasticsearch, and I will say why. If you don't have enough RAM and the consumption by Elasticsearch is too slow, the logs from Fluentd will start to pile up on the nodes, and then you cannot consume all the logs from Fluentd. So if you have enough RAM, you can consume these logs from Fluentd.
B
Then we have no problems with the output of logs and the consumption of these logs by Elasticsearch. Another component of our logging stack is the Curator. The Curator is the cleanup component which we have in the OpenShift logging project, and the Curator's main function is to do the maintenance on my logging stack. Mainly, it deletes the indices after a number of days. I can set it up for 10 or 11 or 30 days, and it will delete the indices after those days.
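In the OpenShift 3.x logging stack, those retention periods are set per project in the Curator configuration (held in a ConfigMap). The project names and periods below are illustrative:

```yaml
# Curator retention settings, one entry per project/namespace
myapp-dev:
  delete:
    days: 10
myapp-prod:
  delete:
    days: 30
.operations:          # the cluster's operations logs
  delete:
    weeks: 8
```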
B
So you have copies of the Elasticsearch data across a couple of Elasticsearch nodes, and the indices are split across the nodes which are running Elasticsearch; this splitting is done with shards. The thing is, if you have a lot of shards, more than usual, this will affect performance. So use the shards configuration with caution: don't just spread the shards across a lot of nodes or create a lot of shards, because it can affect performance and even break your setup.
B
Your logging stack setup, that is. I have customers that are just using the normal configuration for shards and also taking care of the deletion of the indices, and it just works really, really well. But if you have a really big setup, you may need to tune it; just do it with caution. And if you have support for our product from Red Hat, contact your support team or your TAM, Technical Account Manager, and ask about this. Then we have some common errors which we see with our logging stack.
B
This is what happens with a lot of customers. The first is that Fluentd doesn't work anymore, or doesn't send the logs to Elasticsearch, and mainly the customers perceive it as an error when they cannot see the logs in Kibana. "Oh, this is a problem." It's unfortunately resources: you need to check if you have enough resources for it. Another error is that Fluentd cannot send the logs to Elasticsearch quickly enough, and Elasticsearch cannot consume them as wished. And one other tip which I give to my customers is this:
B
I will speak more about the logging dump later. But if someone asks you to do the logging dump, which takes all the information from the running logging stack and puts it into files, including description files, and while producing this logging dump you get some errors like exit code 60 or 28, or some other exit code, it could be that you are having problems with resources, such that even the logging dump cannot gather everything.
B
This is the main article which we have on the Red Hat Knowledgebase, with a lot of troubleshooting hints that we should use to troubleshoot the OpenShift logging stack. There is also a link for the logging dump, how you can use the script for the logging dump, and also how I can debug Elasticsearch itself. There are different topics in this article, and I think it's really important to use it.
B
So far I was speaking about OpenShift 3, or at least the logging stack mainly from OpenShift 3, and I was talking with colleagues about OpenShift 4 and what we should expect from the OpenShift 4 logging stack, mainly if we compare the setup of the logging stack on OpenShift 3.11.x and 4.x. When you were deploying things on OpenShift 3, we were using Ansible for it.
B
We were using Ansible scripts, and on OpenShift 4 we are doing the deployment of the logging stack using an operator. We have a Cluster Logging Operator, so you can go to the web interface of OpenShift, go to the OperatorHub, search for cluster logging, and deploy our operator for cluster logging. Then you have Kibana and Fluentd and everything that you need. I will probably show you this in the OpenShift 4 web interface at the end of the presentation.
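After installing the operator, you create a ClusterLogging custom resource describing the stack you want. A minimal sketch; the node count, storage class, sizes, and schedule are illustrative and cluster-specific:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                  # three nodes for an HA Elasticsearch
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: gp2       # illustrative storage class
        size: 200G
  visualization:
    type: kibana
    kibana:
      replicas: 1
  curation:
    type: curator
    curator:
      schedule: "30 3 * * *"        # run Curator once a day
  collection:
    logs:
      type: fluentd
      fluentd: {}
```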
B
With log forwarding you will not need the whole infrastructure of the logging stack, and you can also use TLS between the collector which is collecting the logs, which is Fluentd, and the destination to which you'll forward the logs. In OpenShift 4.3 this is tech preview; you can start using it now and check it out, and it will be GA, generally available, in version 4.5.
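In the 4.3 tech preview, forwarding was configured with a LogForwarding custom resource. A sketch of what that looked like; the tech-preview API changed before GA, and the endpoint and secret names here are hypothetical:

```yaml
apiVersion: logging.openshift.io/v1alpha1   # tech-preview API in 4.3
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
  - name: remote-elasticsearch
    type: elasticsearch
    endpoint: elasticsearch.example.com:9200
    secret:
      name: remote-es-secret   # TLS material for the collector-to-destination hop
  pipelines:
  - name: app-logs
    inputSource: logs.app
    outputRefs:
    - remote-elasticsearch
```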
B
With log forwarding you can also handle audit logs: you can forward these audit logs from the systems to other external systems, and then you can check, like in the example here, who has accessed the cluster. You have these audit logs in the external tool, which can also send them to an SIEM system. There will also be an updated version of Elasticsearch and Kibana: we will have version 6 of both.
B
We move from Search Guard to Open Distro, and the new data model will also improve not just scalability but also performance. And you have a better separation between the operator for cluster logging and the one for Elasticsearch, and likewise for Kibana. This is planned for version 4.5.
B
A logging dump is just a script that you run on your OpenShift cluster, and this script catches the information from the openshift-logging project namespace and creates a directory with all this information, so that you can use it for debugging the logging stack: what is happening, whether you have enough resources, or whether you have problems with indices and so on. And if you are working actively with Red Hat support, you can also send it to support.
B
"Hey, I have a logging dump here; it's a file which I compacted into a single archive with the whole information for my logging stack. I would like to have some help on this." That's what we can do. So mainly the procedure for this is: you fetch the script, it's a shell script, and then you run the script as a cluster-admin, because it needs to gather this information, and then you have a directory with the information. I will also show you a little bit about the logging dump next.
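Once the dump script has produced its output directory, compacting it into the single file for the support case is just an archive step. A minimal local stand-in; the directory contents here are fabricated for illustration:

```shell
# Stand-in for the directory the logging-dump script produces
dumpdir=$(mktemp -d)
mkdir -p "$dumpdir/fluentd" "$dumpdir/es"
echo "sample fluentd pod log" > "$dumpdir/fluentd/pod.log"

# Compact the whole dump into one archive to attach to the support case
tar -czf logging-dump.tar.gz -C "$(dirname "$dumpdir")" "$(basename "$dumpdir")"

# List the archive contents to confirm everything was captured
tar -tzf logging-dump.tar.gz
```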
B
When I create this logging dump from a running logging stack, we have different directories, and every directory has a function. At the first level there is a directory for every component of the logging stack: the Curator, Fluentd, and so on.
B
There is also a scrape of the whole OpenShift logging project, so I will have in there the daemonsets, the deployment configs, the endpoints, the configuration maps, and so on: the whole set of components for my project in files, so I can read them and see how the deployment of this logging stack was made.
B
Now let's do something practical. I have here an example from the logging dump. I downloaded the logging dump file, and it's just a shell script which goes through the logging stack and writes the details and the logs from my logging stack into different directories. If I run the script, I will get a directory like this.
B
Going into the directory, as I said, you have these directories: curator, es, fluentd, kibana, and project. Going into the curator one, I will have mainly the description of the pods running the Curator. If you remember, the Curator is just a cron job which runs from time to time, and here I have some errors showing that it didn't run well or had some problems. Those are the errors which I'm having here on my logging stack, and if I go to the logs directory, I have some logs.
B
For the seven nodes running Fluentd, I open a file here and I will have the details of my Fluentd pod, exactly as if I ran `oc describe` against this Fluentd pod. And we see also that here merge JSON log is set to true, which is something we probably should change if you are having some complications with the logging from Fluentd being sent to Elasticsearch, if Elasticsearch cannot understand it.
B
The first file which I opened was the logs from Fluentd, and the communication with Elasticsearch is down, so the logs are not being sent. Here is the log from the pod, which is the Fluentd pod running on the node. So I have some errors and some problems here, or just some warnings, and you should check what is going on. As we saw a few minutes ago, my Elasticsearch pod is not running, so it says "I cannot send the logs to Elasticsearch", and so I do a check here.
B
I also have the information here about my nodes running the logging stack. One thing which I can also see here is whether a node from OpenShift is overcommitted: if the limits are really high, my node cannot serve the whole demand which I have on my logging stack. I can also see the events from the project. And as I said to you, you can change the merge JSON log configuration to false, and here it is on the daemonset.
B
In this case I would probably call my customer and say: "Hey, please change the daemonset and set it to false." Of course, it needs to be done on the cluster by the customer; I just see that it is wrong and I can advise what is better to do, and then you can fix this problem. So this is mainly the logging dump which we have. It is really helpful.
B
It will help you a lot to check what is going on with the logging stack, and I recommend that you also check the link to the troubleshooting article for the OpenShift logging stack and how to produce this logging dump, so you can start navigating it and check what is going on. And of course we have our support, which can help you with these demands.
B
So I would like to thank you for being here to watch this presentation. I presented a lot of things, and there is much more that could go into a presentation about the logging stack. I hope that you enjoyed it, and if you have some questions, I will check the questions now. Let's see. Yes.
A
There are a number of questions, and I think they're all good ones. The most recent one: an attendee is asking about forwarding logs to Splunk. They want to have the same indexing, i.e., namespace indexing, but they're not seeing anything written up about it, and they're wondering if this is possible and if you know.
B
Let's see, indexing, Splunk... good question. To be honest, I haven't used Splunk with the log forwarding until now; I don't have a customer doing that so far. So we probably need to check the documentation and then find a way to do it. Probably it's possible, and if not, we can also file a kind of request for enhancement and try to get it into the next versions of the OpenShift logging stack.
A
B
The first thing about the logging stack is that we are not using the most up-to-date version of Elasticsearch: we are using version 6, and as I remember, the latest Elasticsearch version is now version 7. Let me check. So we are not on the same version of Elasticsearch that you could just use outside the OpenShift logging stack. And I have never seen someone running it without the Curator; I think you need the Curator.
B
First, if you are using OpenShift, you also have access to the documentation for OpenShift, or even OKD. Reading the docs from OKD and what we offer there is the first point to start. Also, the documentation from Elastic is really good; you have really good documentation there. But something you need to pay attention to is that the Elasticsearch which you will find on the Elastic website is some versions ahead of the version used in OpenShift.
B
So you might try to implement something which is in the documentation on the Elastic website, and it's not compatible with our version, because we are a little bit behind on the versions of Elasticsearch. And also the documentation for Fluentd is really good, so also go to the Fluentd website. And let me check it out.
A
Thanks, Gabriel. I will collect some of those links and put them in a blog post on openshift.com, along with the presentation and the video for this. These are all great questions, and they're core to having more talks on this topic, obviously. Perhaps we can get the Elastic folks to come on and give a talk about what's going on in their latest versions and what we can anticipate; that would be a good follow-on to this as well. So thank you very much.
A
Thank you, Gabriel, for your time and for taking the time to walk through all of this. I will try to get all of this up in the next day or so, so look for it, probably on Monday, on the openshift.com blog. It'll probably be on the YouTube channel, RH OpenShift, sooner than that; it takes a little while to get the blog post published. So thank you again, Gabriel, and thank you all for attending.