Description
Please join us for Fluent Talks, our weekly webinar and office hours, Fridays at 2PM Central. Streaming live on YouTube.
#openshift
A
Okay, hello everyone, welcome to another Fluent Talks. Really excited to continue doing these; this is our fifth or sixth one, so this is pretty fun. Today we've got some pretty exciting stuff: we've got Pat joining us here from the Calyptia team. We're going to talk a little bit about Red Hat certifications and OpenShift for those who are using it, talk a little bit about how to send data from OpenShift for those who are familiar with it, and even give a demo of some of the solutions that you can send data to. Something we've seen become a bit more popular is Grafana Cloud, so we'll give a quick overview of that, talk a little bit about it and go through it.

So with that, Pat, I'll turn it over to you to talk about the Red Hat certifications and what's going on over there.
B
Yeah, thanks for that. Obviously we've done a little bit about OpenShift on previous talks, but I just wanted to have a quick catch-up, because there have been a few new things recently, and also I've been working on a little blog post about how you can use Fluent Bit to send metrics and logs from OpenShift to various backends.

So we'll do a demo with Grafana Cloud, but we've also got things like Datadog, Elastic, Splunk, all the big ones in there as well. It's probably a good point to do this too, because in the last two or three weeks OpenShift 4.10 came out, which has quite a few updates, and it actually synced quite nicely with Fluent Bit 1.9.0, which I think came out maybe the same week; I can't remember exactly.

We've obviously also got that as a Red Hat certified image in the Red Hat Container Catalog, so you can consume it directly as a certified image. With OpenShift you can consume the open source images if you want, but this way it's properly certified, it's all built on top of UBI, and it gets all the benefits of the nice Red Hat support stack. And actually, talking about certification, Red Hat has changed something recently.

What normally happens with OpenShift certification — and it was something I was going to do a blog post on, but it's good I didn't, because it's slightly changed — is you set up your git repo to build your container.
B
You
can
either
build
it
yourself
and
push
it
in
to
red
hat
container
catalogs,
where
they
scan
it
and
verify
it
and
check
a
few
things
or
you
can
get
red
hat
to
build
it
themselves
in
in
their
environment,
which
automatically
does
the
scanning
and
verification
for
you
as
well,
but
that
was
kind
of
limited
a
little
bit
in
that
you
know
it
doesn't
really
integrate
with
like
we're
we're
already
building
loads
of
stuff.
You
have
actions.
B
Why
do
we
have
to
then?
Have
this
whole
separate?
Other
thing
where
you
have
to
go
in
and
manually
like,
tell
it
to
rebuild
things
or
build
a
new
version
or
or
whatever,
and
also
it
didn't
support
arm
64,
which
is,
you
know,
flew
a
bit
out
of
the
box,
the
open
source,
one
supports
amd64,
arm64
and
arm32
and
you're,
seeing
more
and
more.
B
You
know
on
targets
so
so
recently
and
it's
something
I've
used
when
it
was
in
private
beta
at
my
previous
company,
but
they've
made
it
public
beta
now
as
well,
is
that
they've
provided
this
sort
of
pipeline
now,
where
basically
you
you
run
the
checks
and
stuff
yourself
and
you
upload
the
results
and
you
upload
the
container
image
as
well,
and
they
verify
if
the
results
are
accurate
and
all
that
stuff,
but
ultimately
it
means
it
feeds.
B
You
know,
you've
got
your
ci
cd
pipeline
already
set
up
you're
making
these
images,
because
that's
what
I
do
yeah
and
all
the
images
are
made
first,
and
then
we
push
them
to
red
hat,
to
build
them
again,
and
you
know
it's
it's
there's,
there's
no
real
need
to
do
that,
so
it
feels
like
it
should
be.
A
nicer
integration
into
see
icd,
so
yeah
you
can
just
plug
it
in
at
the
end.
A
I guess one question, because I'm actually not as familiar with it either: there's this idea of certification for containers, and containers with OpenShift. Maybe two questions I'll load in here. One: what's the need for it, why do it, and what are the differences, maybe, between that and a regular container? And then I know, Pat, you've done a lot of work with, like, government and others.
B
You know, essentially it's about following a process that is well defined, well tested and reproducible; that's what it's guaranteeing. There are other benefits as well: part of the certification process requires you to use a Red Hat based image, and there are benefits to that.

Certainly when you run on the platform, everything's running on a common base image, so you're only loading that base image once, and it's easy to update it for vulnerabilities and to scan for vulnerabilities. It's all got common infrastructure, so you can hook in loads of other stuff without having to go: this is running Alpine, this is running distroless, and how am I going to integrate this tooling with all these different bases? So those are some of the benefits of certification.

It keeps you in a nice, integrated stack. If you're certified, you have to have met certain criteria; export compliance is one of them, which we've been having fun with the last couple of weeks. It's saying: if someone uses my image, they're not going to run afoul of some kind of export law or licensing issue or something like that. The packages in there are compliant with whatever the specific needs are of that compliance process.
B
Some
of
its
exports,
some
of
its
security.
Some
of
it
will
be,
you
know,
maybe
process
driven
or
some
other
aspect
that
you
need
to
audit.
So
there's
quite
a
few
things
there
and
part
of
the
certification
process
is,
is
you
know
it
does
tick
off
some
of
these
things?
There's
some
best
practice
stuff,
like
don't
run
its
room,
make
sure
your
packages
are
updated.
Those
you
know
stuff,
you
expect
to
be
in
there
and
then
there's
other
stuff
that
make
sure
you've
got
all
your
licenses
included.
B
You
know
stuff
like
that
which
is
like.
Oh
that's,
a
pain,
but
actually,
if
you're
going
to
deliver
something
commercially,
you
kind
of
need
to
do
it,
and
you
need
to
know
that
if
I
consume
this
thing,
I'm
not
opening
myself
up
to
risk.
You
know
I'm
not.
I've
brought
this
thing
into
my
environment.
I've
become
dependent
on
it.
Oh,
it
turns
out
there's
some
kind
of
legal
issue
with
it
or
you
know
aspects
like
that.
You
you
know
it's
all
about
trying
to
say.
No.
When
you
consume
these
images,
you'll
be
fine.
A
B
Of
the
other
things
we
looked
at
as
well
is
yeah
with,
with
some
of
the
flip
side
and
those
kind
of
things
there's
like
you
need
certain
versions
of
like
open
ssl.
They
need
to
be
built
in
a
certain
way
and
stuff
like
that,
and
it
kind
of
guarantees
that
you've
got
that
to
say
you
can't
automatically
be
certified
just
because
you're
using
it,
but
it's
part
of
that
kind
of
stack
to
say
tick,
that
box
tick
that
box
now.
I
can
then
go
right.
Everything
below
me
is
certified.
I
can.
B
A
And I think one thing that we've done with Fluent Bit is try to use distroless containers. For those who aren't familiar, it's an image base created by Google: you don't have a shell, and it tries to minimize the footprint — or I'd say the attack vectors — that you could have with an image. And so with that, you can't really gain access to it, which is a blessing and a curse when you're trying to debug some things, for sure. But it's not like, if you don't use a certified container, you're all of a sudden at risk; it's just that these certification check marks adhere to some very specific requirements.
Maybe you could touch on that real quick?

B
Yeah, actually, probably. So previously I did a post — thanks to Tim for the little meme generator — talking about how we can improve the open source multi-arch images. Because, like we were saying, we were using distroless, but we were only using it for AMD64; the ARM images were, like, full shell images.
B
There's a Red Hat piece on securing your containers — obviously this is written by Red Hat, based on "this is the way we're doing it, and these are the decisions we've made" — and it goes into what the UBI image is and why they make it. So UBI is, you know, Universal Base Image: it's basically a version of Red Hat's container image that you can consume and use for free.
B
You can redistribute it and you don't have to license it, those kinds of aspects. The idea is, if you want to build something in open source, you can use it, and people consuming it don't need to worry about licensing. And there's a really good blog post that goes through some of the issues with distroless; if you go back to my blog post, I do try and call out the competing views.
B
The things that distroless, UBI, whatever, are trying to solve — there is a common problem: it's about securing the image and making it small. That's the short gist of it. There are different ways of solving that problem, and they each have trade-offs, and so there are two blog posts linked from my multi-arch one which go through, you know: here's the distroless view, here's the UBI view, what they're trying to achieve and why it works.
B
Is it as secure as you think it is? That's one of the things that's still there: when we're making the distroless image, I can still whack a load of packages in there and stuff like that. So when it's running, don't just assume, because it's distroless, that you can't do stuff — you can.

There are still exploits and attack vectors that you need to worry about; there are just fewer of them. And that's one of the benefits of either stack, of having that common minimal image: if something does happen, we just update the one base and it fans out to all of them, rather than everyone making their own kind of secure approach and having to patch it ten million times, like with Log4j or something like that. So there's quite a lot of good there.
B
Yeah — I don't want to get in trouble. They've also got the UBI Micro approach as well, which is probably one for another time; it's a trickier one to explain, but when you use it, to get packages into it you essentially pull them in from the host, and there's some extra work to do, because it's all focused more on — rather than Docker — some of the open source tools that Red Hat is a big proponent of, like Buildah and Podman, to make your images. There are different ways of making them, and then you've got other approaches like buildpacks and stuff like that. So there are about a million ways to skin the same cat; there's quite a lot there. But Red Hat have chosen this approach, which is: do the UBI.
B
If
you
want
a
certified
image,
it
either
has
to
be
ubi
or
or
a
red
hat
based
image
and
a
red
hat
based
image
would
require
some
kind
of
subscription
or
license,
or
you
know
approach
to
that.
I
think
now
they
change
their
licensing
all
the
time.
So
last
time
I
looked
to
stay
in
the
support
boundary.
So
if
your
organization
wanted
support,
you
needed
to
make
sure
all
of
your
red
hat
stuff
was
validly
subscribed.
B
So
if
you
start
using
red
hat
images,
you
need
to
make
sure
the
host
they're
on
is
subscribed
as
well,
and
things
like
that,
so
it
gets
a
little
bit
confusing.
But
if
you're
using
the
ubi
one
don't
have
to
worry
about
that.
It's
kind
of
I
guess
the
idea
is
like
as
well
like
if
you're
gonna
make
like.
B
If
we
look
at
fluid
bit
we're
making
well
we're
making
two
images
now
we're
making
the
distributors
one
for
three
architectures
and
we're
making
the
ubi
one
for
one
architecture,
but
we
can
make
it
for
the
others
when
they're
available
not
just
make
the
ubi
one
and
push
that
to
you,
know
docker
hub
and
that's.
The
kind
of
I
think
is
the
the
kind
of
positioning
around
it.
It's
like
well
you're
going
to
have
to
use
the
ebr
one
to
get
certified.
B
You know, you need just a little bit more configuration to get access to your logs and stuff like that, but the back end here — the stuff once it's in Fluent Bit — if you want to send it to Grafana, Elastic, Datadog, whatever, it's still the same; it's the same configuration for that, regardless of whether you're on OpenShift or not. So actually you could use this guide as: oh, I want to integrate with Grafana Cloud — here's a worked example of how you do it.

It just happens that we're getting the input from OpenShift, but you just have to arrange to get your input from wherever you need. And for me as well, one of the big things I have — because I'm getting old now and I forget stuff — is that I try and do everything as code now. So this example is linked from the blog, down here.
B
In
fact,
and
we've
got
examples
for
elastic
cloud
datadog
and
a
couple
of
other
helper
directories,
but
I'm
just
going
to
show
you
the
performance
cloud
one
later,
but
I've
got
a
nice
little,
I'm
using
the
helm
chart
as
well.
So
this
is
the
open
source
com
chart
in
this
case
I'm
using
the
certified
image,
but
it
will
work
with
the
unsatisfied
image
as
long
as
you
allow
that
into
your
cluster
or
it's
available
in
your
cluster
and
that's
probably
in
my
previous
life.
B
I
worked
in
defense,
one
of
the
big
reasons
why
we
stuck
with
red
hat
and
why
I
went
with
openshift
and
stuff
like
that.
The
red
hat
way
is.
It
was
pretty
much
the
only
solution
that
documented
how
to
do
things
offline,
there's
a
lot
of
stuff
that
says
it
is
possible
to
get
offline,
but
it's
not
documented,
and
I
did
a
readout
forum
2020.
I
think
november
2020,
where
I
kind
of
went
through
some
of
these
problems
we
have
and
how
my
policy
at
the
time
was.
B
If
it's
not
documented,
it's
not
supported.
I
need
to
support
this
for
20
years
20,
50
years,
whatever
the
lifetime
of
the
project
was,
and
it's
like
yeah,
I
can
figure
out
how
to
get
your
thing
offline,
but
I'm
effectively
evaluating
it
and
like
if
it
doesn't
work
straight
away
right,
I'll,
just
move
on
to
the
next
one
and
and
red
hat
was
really
good.
For
that
you
know
the
documentation
is
like
you
want
to
deploy
it
offline.
Here's
all
the
things
you
need
now.
There
are
still
problems.
B
We
had
a
big
problem
with
openstack
at
the
time
with
some
of
our
storage
and
the
only
way
to
debug.
It
was
to
go,
get
a
load,
more
images
and
every
time
you
had
to
go
get
more
stuff.
It
was
a
day
turnaround
because
you
had
to
burn
it
to
this.
You
had
to
scan
it.
You
have
to
then
walk
into
the
faraday
cage
load
it
into
another
machine
and
yeah.
B
It
was
just
a
nightmare,
so
it
was
like
I
don't
mind,
downloading
a
lot
of
things
in
one
go
and
getting
it
all
done,
but
I
really
hate
the
dependency
spaghetti
that
sometimes
you
have
and
you
go
get
one
thing
and
it
will
need
some
dependencies
at
build
time.
Some
at
run
time.
Some,
if
you
enable
some
other
aspect-
and
it's
like-
I
just
need
everything
in
one
go
so
so
that
was
one
of
the
benefits
we
went
with
with
openshift
as
well.
B
Going
back
to
your
previous
question
about
why
why
you
might
want
that
and
why
you
want
certification
as
well
and
then
yeah
certification
for
us
as
well
at
that
time
was
like
essentially
proving
that
stack.
You
know,
you're
saying
I've
got
all
these
components,
I
know
their
provenance.
I
know
how
they're
built,
I
know
they're
correct
and
I
don't
need
to
go
and
spend
effort
myself,
verifying
them
I
can
take.
B
I
can
transfer
that
verification
that
red
hat
has
done
and
just
focus
on
on
my
aspects,
and
that
is
kind
of
so
you
might
have
a
better
solution,
but
it's
not
certified,
and
it's
just
like
that.
The
effort
of
getting
it
and
verifying
it
and
supporting
it
and
maintaining
it
is
just
like.
No,
you
know,
whilst
it
may
be
better,
it's
easier
and
less
risky
to
go
with
the
certified
foods.
B
So
that's
that's
kind
of
the
approach,
sometimes
back
to
the
blog
yeah,
so
yeah
I
try
and
follow
the
policy
of
everything
as
code.
So
basically
here
I've
got
a
nice
little
walkthrough
of
what
you
have
to
do.
So
I'm
using
code
ready
containers
which
I'll
show
you
briefly
in
a
moment
but
which
is
just
a
way
of
running
red
hat,
open
shift
locally
so
effectively.
It
runs
a
vm
with
a
single
single
node
in
it
for
us
for
for
the
whole
cluster.
B
So
it
has
some.
You
know
it's
quite
good
for
development,
definitely
don't
use
it
for
production
and
yeah.
It
has
some
other
downsides
as
well.
You
know
you're
running
all
on
a
local
machine,
you're
going
to
be
resource
constrained,
there's
lots
of
problems
there,
but
for
the
purposes
of
of
testing
and
demonstrating
this
it's
it's
a
nice
quick
like
I
can
start
up
and
I
can
just
delete
the
vm
and
start
a
fresh
one,
and
I
know
it's.
You
know
it's
all
from
scratch.
You
know
it's
infrastructure,
it's
code,
kind
of
thing.
B
So
so
that's
what
I'm
doing-
and
I
can't
we've
talked
about
code
ready
containers
previously,
so
I
won't
go
into
too
much
detail,
but
it
kind
of
it
looks
like
openshift.
Basically
there's
a
big
big
warning
at
the
top
saying:
don't
use
it
in
production,
as
I
say,
but
it's
got
pretty
much
everything.
There's
some
red
hat
documentation
explaining
what
it
hasn't
got.
It
turns
off
a
few
things
by
default,
like
monitoring
and
stuff
like
that,
because
there's
no
point,
you've
got
a
single
node.
B
Why
would
you
run
monitoring
of
that
one
node
on
the
same
node?
You
know
if
it's
not
working,
it's
not
working
so
and
it
consumes
those
resources
as
well,
but
one
of
the
main,
the
main
things
we
have
to
do
with
with
openshift
is
that
because
it's
generally
a
more
secure
set
of
defaults
from
a
kubernetes
perspective.
B
The
main
thing
we
need
to
do
is
get
access
to
the
host
logs.
I'll
also
show
you
how
to
get
access
to
the
host
metrics,
because
obviously
flipbit
does
metric
exporting
as
well,
and
so
I'll
show
that
briefly,
but
yeah
that
it's
the
same
stuff
and
red
hat
openshift
has
this
concept
of
security
context
constraints
sccs,
which
are
effectively
a
security
profile
for
your
application.
So
you
have
like
restricted
one
which
I
think
might
be
the
default
one
which
is
like
you
pretty
much.
You
can
do
some.
B
You
can
do
general
container
stuff
stay
inside
your
container,
but
you
can't
do
a
lot
of
the
privileged
access,
so
things
like
being
able
to
to
mount
a
directory
from
the
host
opens
up
various
vectors
to
exploit
if
you're
trying
to
attack
the
system.
So
that's
not
allowed
by
default,
but
obviously
fluid
bit
wants
to
read
those
logs
from
the
host.
So
what
we
have
to
do,
and-
and
I
sort
of
walk
through
it
in
in
in
a
huge
amount
of
detail-
is
enable
that
kind
of
host
volume
mounting.
B
So
we
want
to
say,
take
this
host.
Take
this
path
on
the
host,
so
flash
file,
logs
containers
and
pods
and
that
allow
it
to
be
mounted
into
the
container
and
I
sort
of
there's
a
good
bit
of
documentation
in
red
hat
on
like
how
to
do
this.
But
for
me
you
know
I
don't
like
reading
docs.
So
it's
good,
I
wrote
one.
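For reference, the shape of such an SCC is roughly the following. This is a sketch, not the exact file from the repo; the name and the specific field values are illustrative:

```yaml
# Hypothetical SCC allowing hostPath mounts for log collection while
# otherwise staying close to the restricted profile. Name is illustrative.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: fluent-bit-hostpath
allowHostDirVolumePlugin: true    # permit hostPath volumes (/var/log/...)
allowPrivilegedContainer: false   # still no fully privileged containers
allowPrivilegeEscalation: true    # needed to read root-owned log files
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
volumes:
  - hostPath
  - configMap
  - secret
  - emptyDir
```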
B
It's also about best practice. You could turn on everything, which is quite easy to do — run as a privileged thing, as long as you've got admin privileges — but that's not very good, and really your ops and cluster admins should go: no, you don't need to run a log forwarder with full privileges to the whole system; that's ridiculous. You just need to be able to mount host volumes, maybe secrets and stuff like that. It doesn't need to run as root, and it doesn't need some other capabilities. So you drill down to all this, and it's actually quite nice.

It took me a while to get there, though. I was like: surely you've got the security context constraint, the SCC, and it's just allocated to — you know, I want to run these pods with that SCC. Oh no, it's not that straightforward.
B
What
it
does
is
allocates
it
to
the
service
account
and
then
the
service
account
is
associated
with
your
pod
spec
and
then,
when
your
prospect
runs,
it
runs
with
that
service
account
and
gains
the
privileges
from
the
sec.
So
it
took
a
while.
B
You
know
a
bit
of
a
bit
of
a
route
to
get
there,
but
actually
it
works
quite
nicely
because
it
means
I
can,
if,
if
you
wanted
to,
you
could
quite
easily
assign
it
to
more
pods
in
that
same
name,
space
or,
however,
you
wanted
to
do
it,
and
so
so.
This
is
like
the
full.
What
you
need
to
do
create
your
service
account
bind
it
set
the
scc
up,
but
I
wrap
all
that
up
for
you
into
just
a
simple
script
up
here.
B
So
we've
got
service
account
creation
which
just
spins
the
ammo
for
you.
So
you
can.
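The service-account piece that the script generates amounts to something like this (names are assumed, not taken from the repo):

```yaml
# Hypothetical service account for the Fluent Bit pods.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit         # assumed name
  namespace: fluent-bit    # assumed namespace
# Binding the SCC to it is then either adding
#   system:serviceaccount:fluent-bit:fluent-bit
# to the SCC's `users:` list, or running:
#   oc adm policy add-scc-to-user <scc-name> -z fluent-bit -n fluent-bit
```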
B
So what I did was install that, run our pods with that SCC to make sure it worked, and then figure out what it was doing. Obviously you could just do the same, but it's like: I don't want to install the whole fluentd logging stack as well, just to get access to a bit of YAML. So we figured out what it was, and parameterized it by namespace and things like that. And for all the examples, basically, I use that same thing, because it doesn't matter where you're sending to; as I said before, you just need access to the logs. Whether you send it to Datadog or into Elastic, it's just the same stuff, because with Fluent Bit it's a pipeline: multi-in, multi-out.
B
And then you need to configure your output for Grafana as well — I'll touch on the metrics in a minute. It's not very difficult to send stuff to Loki; it's slightly more difficult to send it to a Loki outside the cluster, but not that much more. I've parameterized it again, because I want to provide these in a .env file.
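In classic Fluent Bit configuration terms, the parameterized Loki output looks roughly like this inside the Helm chart's values. This is a sketch: the host and the `${...}` variables are placeholders filled in from the environment (.env) file, not values from the repo:

```yaml
# Sketch of the output section as it might appear in the chart's values.
config:
  outputs: |
    [OUTPUT]
        name        loki
        match       *
        host        ${LOKI_HOST}      # e.g. a Grafana Cloud Loki endpoint
        port        443
        tls         on
        http_user   ${LOKI_USER}
        http_passwd ${LOKI_PASSWORD}
        labels      job=fluent-bit
```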
B
Just
so
I
can,
you
know
quite
easily
change
the
username
and
obviously
not
detain
it
in
a
git
repo
for
everyone
to
hack
into
and
then
we're
just
using
the
help
script
as
well
to
to
deploy
it
all.
So
you
can
kind
of
see
there
we
go,
I'm
creating
that
service
account.
I
had
the
home
chart,
so
it's
just
the
default
ones
for
fluent,
so
it's
got
to
fluent
each
thing
in
there
and
you
might
notice,
I
add
them
to
add
two
slight
different
variants
of
it
now.
B
The
reason
for
this
is
that
if
you've
got
a
slash
on
the
end,
when
you
add
it,
it
will
fail
because
it
can
flicks.
So
I
try
adding
it
without
the
slash
and
I
try
adding
it
with
the
slash
just
in
case
you've
already
got
it.
It
just
goes
out
and
already
there
that's
fine,
if
it's
already
there
with
the
slash-
and
you
add
it
without,
it
will
error
out.
So
it's
a
bit
different.
So
this
script
is
intended
to
be
like.
I
can
run
it
a
million
times.
B
I
can
just
reuse
this
values
file
for
all
the
other
ones
and
just
change
the
output
values
I
don't
need,
then
I
don't
just
have
one
values
file
and
I'm
copying
and
pasting
part
of
it
and
changing
the
other
part.
So
I'll
just
show
you
the
other
values
file,
it's
thrilling,
I'm
sure
to
see
some
ammo,
but
it's
it's
very
straightforward.
The
bit
at
the
bottom
is
not
really
very
important.
B
It's
the
bit,
the
top
that
you
want,
so
so
we
match
up
so
when
you,
when
you
use
the
helm
chart,
so
this
is
very
specific
to
the
helm
chart.
This
is
the
animal
configuration
for
round
chart,
but
it's
got
options
for
creating
a
service
account
for
you
by
default.
You
don't
want
that
because
we
create
our
special
one.
That's
associated
to
rscc
and
then
you've
got
some
security
context
stuff.
B
Now
you
need
to
you
want
to
run
as
specific
users.
So
the
actual
logs
that
you
mount
from
the
host
are
accessible
by
root.
B
Only
there's
some
se
linux
stuff,
which
you
might
need,
if
you're
running,
with
some
additional
security
options
and
then
there's
some
best
privilege
best
practice
stuff
there
as
well
like
make
the
file
system
route
only
and
also
allow
us
to
us
escalate
our
privileges
to
root,
because
that's
what
we
need
to
to
read
the
log
files
so
yeah,
so
that's,
that's
all
it
is
and
I'll
show
you
what
I
had
to
do
to
so
this.
I
spun
up
crc
a
while
ago,
it's
running
410
somewhere.
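Pulled together, the top of that values file is along these lines — again a sketch with assumed names, not the repo's exact file:

```yaml
serviceAccount:
  create: false                     # use the pre-created account bound to the SCC
  name: fluent-bit                  # assumed name
securityContext:
  runAsUser: 0                      # host log files are readable by root only
  privileged: false
  readOnlyRootFilesystem: true      # least-privilege best practice
  allowPrivilegeEscalation: true    # required to read the mounted logs
```

The SELinux options he mentions would slot in under `securityContext.seLinuxOptions` if your cluster enforces them.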
Where does it say it... yeah, 4.10, there we are. So it runs a VM for 4.10 and does everything else for you, so it's quite nice, and then you get your web UI, which is all: here's your console, here's the passwords, all those kinds of things. Obviously, you can kind of see, this is not for production, but it's all there.

So now we just want to deploy. I've been testing before the demo, obviously, to make sure it works. This is our git repo — just the one I was showing you before — and I'm just going to use the script to deploy it. I've actually got my environment file with all my super secret credentials.
B
There's lots of stuff there; so that's how I'm deploying. If we go back to the pods, I'll get stuck into the Fluent Bit logging, just because it's easy and we can see what it's doing. There's a lot of debug output because there's a lot of logs: these are all the logs for the cluster.

Now, I cover it in my blog, but this really stresses my local machine out — trying to run a cluster, and also monitor that same cluster, in a VM. You can see there are quite a few logs there, because it's running the control plane and all the workers on one node, so it's pretty stressful for the single node solution. And then we can go to... so this has just started up.
You can see it's only just started — let's see the details, how long has it been running... created one minute ago — and then you've got all the events and stuff like that. It shows you there, and eventually it will struggle: back pressure will start building, because it just can't handle the full set of logs, all on my lowly laptop while I'm running Zoom as well. We'll see in a minute, but it's good enough for a demo to show you that logs are getting in.

So I'm in Grafana Cloud here; I logged in a while ago. Let's see if it's still logged in, or if I need to reload.
B
There
we
go
we're
getting
some
logs,
you
can
kind
of
see,
though,
let's
see
if
we
can
jazz
them
up
anyway.
There
we
go
so
these
are
the
logs
coming
from
openshift
at
766,
which
is
the
time
now
in
the
uk,
and
you
can
kind
of
see
it's
coming
through.
If
we
want
to
put
some
nice
filters
on
there,
it
would
be
nice
to
get
them
in.
You
know.
B
Is
this
an
info
level
stuff
like
that
which
we
could
probably
do
quite
easily,
and
you
could
do
it
on
the
on
the
loki
side
as
well,
and
you
can
kind
of
see
my
my
older
tester
there
as
well.
So
it's
it's
showing
you
what's
going
on
and
we
can
even
stream
them
live.
If
you
want,
you
know,
they're
coming
in,
you
can
see
all
the
logs
and
you
can
do
all
the
you
know
the
grafana
stuff.
You
want
to
do
with
the
set
of
dashboards
and
showing
all
that
but
yeah.
B
The
main
goal
here
is
to
say:
look
we
can
get
logs
in.
This
is
how
you
do
it.
It's
very
simple
and
that's
all
just
from
running
that
one
one
hell
script
and
one
other
thing
I
wanted
to
touch
on,
go
questions.
B
Yeah. So, in case people aren't aware — and it's probably something we need to publicize a little bit more — Fluent Bit doesn't just do logs. It's an observability tool: it does metrics and, hopefully soon (I'll pass it to you for more on that), we'll get traces as well. But we can get metrics in, and I'm just going to do host metrics — node exporter metrics, basically — which, let's have a look, is one of the metrics inputs that we can use in Fluent Bit.
B
So
no
exporter,
metrics
there
yeah
and
essentially
this
this-
gives
you
all
the
you
know
the
usual
node
exporter
stuff,
but
you
don't
have
to
run
node
exported
you're
already
running
for
a
bit
to
collect
your
logs.
It
can
collect
your
metrics
as
well.
Why?
You
know?
Why
did
you
know
why
spend
any
more
effort
or
run
a
whole
separate
stack
that
you've
got
to
maintain
and
version
separately
and
all
the
integration
that
goes
with
it
just
use
from
a
bit
to
do
it?
B
And
it's
it's
very
similar
to
logs,
I
mean
to
get
access
to
host
metrics.
You
need
access
to
like
the
proc
file
system,
basically
mostly,
and
so
that's
that's
what
what
we
do
in
in
one
of
the
other
examples
in
here
and
it
essentially
all
it's
doing
is-
is
generating
you
a
node
exporter
metric
that
uses
a
custom
directory
which
we
mount
from
the
host,
and
then
we
can
use
that
with
prometheus,
remote
right
output,
so
to
export
it
to
grafana
cloud
or
any
prometus
server.
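As a sketch, that metrics pipeline in classic configuration terms looks something like the following; the mount paths and the `${...}` endpoint details are placeholders, not the repo's actual values:

```yaml
config:
  inputs: |
    [INPUT]
        name            node_exporter_metrics
        tag             node_metrics
        scrape_interval 10
        path.procfs     /host/proc    # /proc mounted in from the host
        path.sysfs      /host/sys     # /sys mounted in from the host
  outputs: |
    [OUTPUT]
        name        prometheus_remote_write
        match       node_metrics
        host        ${PROM_HOST}      # Grafana Cloud / Prometheus endpoint
        port        443
        tls         on
        uri         ${PROM_URI}       # remote-write path on that endpoint
        http_user   ${PROM_USER}
        http_passwd ${PROM_PASSWORD}
```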
B
I did not, no. It would be nice — well, I didn't find a good way of doing it, effectively. You know, I need access to the host logs, but I don't need write access; I can mount them as read-only, but it'd be nicer if you could enforce that. As it is, that's on me to do, rather than the security context enforcing it.

As far as I can see, there isn't a straightforward way of doing it, and it's the same for metrics: you just want to read the data. So when I set it up, I make sure it's a read-only mount and stuff like that, but there's nothing that I could see that stops you making it a writable mount. It'd be quite difficult to write into /proc, but you could do it, and there probably is an exploit around that sort of thing.
B
But
no
there
was
no
yeah.
The
security
context
for
getting
access
to
the
logs
is
the
same
one.
For
metrics,
there
was
no
extra
work
required
because
it's
you're
just
mounting
something
from
the
host
into
the
container.
Basically,
if
you
look
at
it
in
simple
terms,
so
yeah
so
yeah,
the
grafina
cloud
one.
So
when
we
do
the
the
output
you
saw
it
before
so,
I've
created
a
separate
file
to
input
to
include
but
yeah.
We
use
the
promotes
prometa's
remote
right
and
then
we
can
kind
of
see
here.
B
B
Yeah,
so
you've
got
all
the
other
stuff
as
well
cpu
seconds,
it's
probably
a
nice
one
yeah,
so
you've
got
yeah
all
the
all
the
jazzy
stuff.
Obviously
you
know
you'd,
probably.
B
...because that's not in the list at the moment, but it is there. As you say, that's probably a good shout; I probably should add that, because then it would be quite nice to see — you could get all the kubelet information, and you'd probably see how terribly it's handling all these containers. Let's go back and see if it's falling over yet... yeah, there it is.
B
Certainly the Loki output is a lot more performant than the Elastic output and some of the others, so Loki seems to keep up, though it's struggling; some of the other outputs really don't like being run on a single VM, but they work fine in production. I did actually spin up an Azure cluster as well, with OpenShift on it, as a backup in case this didn't work, and I tested it on there too.
B
Yeah, because... have we uploaded the recording of that? I can't remember; I'll have to go check. It's worth it, because one thing that brought out was Windows, wasn't it? There were a lot of questions about what we're doing with Windows, and there was discussion about Windows containers as well, on ECS and things like that. So it's probably worth people jumping in and looking at the Google doc; it's got a link to the discussion where I talked about: do we want Windows containers, which ones do we want, and what should we do with them?
A
I
do
yeah
we'll
have
another
community
meeting
in
in
two
weeks.
It
happens
every
two
weeks
and
we'll
we'll.
I
think
we
should
just
do
a
whole
fluid
talk
session
about
it
as
well.
There's
a
lot
of
good
stuff
in
1.9,
especially
that
we
added
there
like
a
whole
our
windows
event.
Log
was
really
focused
before
just
on
like
three
channels:
application
security-
and
I
can't
remember
the
last
one,
but.
B
Yeah — and, to touch on briefly: there's a Prometheus scraper. It's not documented yet, but it is available, and I did have a little play with it; it probably should have been integrated. There are things like the new Kafka input as well that's kind of announced there, and we've got a few other things too, like the Nightfall redaction filter and the Apache SkyWalking plugin. So it's probably...
A
Yeah
yeah
exactly
exactly
okay,
I
think
I
don't
see
any.
I
don't
see
any
questions
in
the
in
the
chat
here,
so
I
think
we're
yeah.
A
B
I'm
hoping
this
little
open
shift.
Example
is
a
good,
so
we
did
it
just
for
like
specifically
for
openshift,
because
it's
about
doing
like
a
walkthrough,
because
you've
got
these
extra
security
steps
you
have
to
do
initially,
but
the
output
steps
can
just
be
used
by
anyone.
You
know
if,
if
you're
deploying
a
standard
helm
chart,
when
you
don't
need
any
of
the
screen
configuration,
you
can
just
use
the
output,
values
and
job
done.
Yeah.
A
Yeah
exactly
all.