From YouTube: Fluent Talks | 005 | Weekly Webinar and Office Hours
Description
Please join us for Fluent Talks! Our weekly webinar and office hours, on Fridays at 2PM Central. Streaming live on YouTube.
A: Okay, hi everyone, welcome to Fluent Talks number five. Eduardo, did you want to jump in and say anything?
A: Well, that's all the fun of a live event, isn't it? Just making sure nothing works first time.
B: Yeah, so, welcome everybody who's joining this session today. As you know, we run Fluent Talks every Friday at 2 p.m. Central time (sorry, 2 p.m. Pacific time), and today we have one of our internal guests, Pat, who's our master CI operations person, with a bunch of CI, OpenShift, Red Hat, a bit of everything. And with that, Pat, you can introduce your topic.
A: Yeah, so today I think the main focus is going to be the OpenShift stuff. We might delve a little bit into some of the CI at the end, if there are any questions as well; we'll see what happens there.
A: Maybe just keep an eye on the live stream, because with one screen on my laptop it's quite hard to watch everything.
A: So today we're going to cover the OpenShift side. I was going to give a little bit of an intro to OpenShift and then to some of the other technologies we're using, like Helm, and then explain some of the problems we have with how OpenShift is secured, and various other bits and pieces, along with a good example of setting it all up.
A: Some of the stuff I've been working on this week has been trying to sort out how-to guides for this, so we should be getting those up fairly soon, and I think there'll be a podcast as well towards the end of the month on some of that.
A: So I figured I'd just show you some of the stuff I've done, and then we'll have the more formal documentation up a bit later, but hopefully it'll answer a few questions. I'll just dive into what we've been doing with OpenShift. I'll try and share my whole desktop; let's see what happens. There we go.
A: You should be able to see my terminal at the moment. Is that big enough for everybody? Just let me check.
A: Yeah, so actually, a day ago we published the 1.8.13 fluent-bit image on the Red Hat Container Catalog. I think back in Fluent Talks two we discussed some of the reasons for having a certified image and stuff like this. So we released 1.8.13 of the open source version this week and pretty quickly got the certification through for the Red Hat image as well. I just wanted to show that; at the moment there's only AMD64 support in OpenShift.
A: There is a dev preview of Arm support, and I think OpenShift 4.10 Arm went GA for a couple of deployment types, on AWS and bare metal UPI I think, but not many of the others. Hopefully we'll get our Arm images certified pretty quickly once the pipeline's ready for that, because obviously we build the open source one for Arm32 and Arm64 as well, so there's no problem with getting it certified. So this is the latest image.
A: It's just the open source code on top of a UBI base image, all certified and running as a non-root user, with the other requirements you need for certification. And actually, some of those requirements lead into some of the problems we then have with the user it runs as.
A: So OpenShift itself is Red Hat's distribution of Kubernetes, basically. There are a few decisions made and integrations chosen for various bits of tooling, just to make it a nice, seamless, well-integrated stack, but effectively it's Kubernetes, so we can use a lot of the standard Kubernetes tooling with it. Now, what if you want to try it out, or you want to do local development and stuff like that?
A: If I weren't on OpenShift and I was doing Kubernetes development, I'd typically use kind (Kubernetes in Docker), just because it's easy to spin up, destroy, and just do testing without affecting real clusters and stuff like that, as long as you've got a powerful enough local PC. Red Hat provide a couple of similar solutions. If you just want to do a little bit of development and testing, you've got what they call the free OpenShift sandbox.
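The kind workflow mentioned here is roughly the following (a minimal sketch; the cluster name is arbitrary and kind/kubectl must already be installed):

```shell
# Spin up a throwaway Kubernetes-in-Docker cluster for local testing
kind create cluster --name fluent-dev

# Point kubectl at it and verify it is up
kubectl cluster-info --context kind-fluent-dev

# Tear the whole cluster down when finished
kind delete cluster --name fluent-dev
```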
A: For our purposes that's not ideal, because cluster logging requires some additional permissions: obviously you want to get the logs of the whole cluster, and if you're doing that on a multi-tenant solution like the sandbox, that doesn't work, because you might be looking at other people's logs that you're not supposed to. So we can't really do it there.
A: The other thing we have is Red Hat CodeReady Containers, and what that is, essentially, is a VM that you download, and it spins up OpenShift: a single node of OpenShift locally, with a few things turned off, like the monitoring stack and a few other bits and pieces. The idea is it's just for dev and test; it's not a production environment.
A: Single-node support was only recently added to OpenShift as well, and you don't really need monitoring if you've got a single node, because it's pretty obvious when it's down: you can't look at anything and your node is not there. So it's quite a straightforward way of doing it. It's probably a little bit more heavyweight than I'd like; coming from using kind and stuff like that, I can run a few different clusters quite quickly.
A: It does take a few minutes to start up, but when it's done you've got your standard OpenShift web UI. I don't know if people have seen this; essentially I just spun this up with the `crc start` command. You can see it takes a little while to go through, but it essentially runs a VM, and then you get some generated credentials at the end for an administrator and for a developer as well.
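The CodeReady Containers flow being demonstrated is roughly this (a sketch; the `crc` binary is downloaded from the Red Hat console first):

```shell
# One-time host configuration (virtualization, networking)
crc setup

# Boot the single-node OpenShift VM; prints kubeadmin and
# developer credentials when it finishes
crc start

# Re-print the generated credentials later if needed
crc console --credentials
```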
A: In the administrator view you've got all the usual Kubernetes stuff: you can see all your pods and the other bits and pieces. And then the developer view is straightforward and nice. Obviously, because it's ephemeral, it loses all my settings every time.
A: Helm is quite an easy way to install, upgrade, and generally manage multiple Kubernetes components for a single application.
A: So you have this concept of a chart, and you just say "install the chart", and you end up with all the different bits of YAML that you would need to run your application under the hood. So we have a fluent-bit chart, and to run fluent-bit it needs a few different things: you need a service account, you need a default configuration.
A: You maybe want to apply some various volumes to mount, and all these different bits and pieces come together with the DaemonSet and things like that. So rather than having to apply a big load of YAML, or manage it all from the command line, you can just do a single one-line install with Helm.
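The one-line install being described looks roughly like this with the fluent/fluent-bit chart (the release name is just an example):

```shell
# Register the official fluent helm-charts repository once
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

# One line: renders and applies the DaemonSet, service account,
# ConfigMap, RBAC, etc. that the chart templates define
helm install fluent-bit fluent/fluent-bit
```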
A
Now,
personally,
I
helm's
great
for
light
evaluation
parks
and
like
ephemeral
deployments,
where
you're
like
you're,
always
starting
from
scratch
and
you're,
deploying
it
and
you're
never
trying
to
manage
it
or
upgrade
it.
You
just
destroy
it
and
and
start
with
the
the
later
version.
It
gets
a
bit
more
difficult
when
you're
upgrading
themes
or
having
to
manage
large
scale
deployments
in
in
in
production
and
because
it's
it
kind
of
hides
a
lot
of
the
complexity
from
you.
So
it's
really
good.
A
If
you
want
to
just,
I
just
want
to
see
how
that
thing
works.
Does
it
meet
my
needs
and
then
I'll
investigate
it?
A
bit
more.
You
can
try
that
quite
easily
with
the
different
helm,
charts,
but
generally
I'd
always
recommend
like
do
that,
and
then
once
you've
decided
what
you
want
to
use
and
how
you
want
to
use,
it
then
figure
out
how
it's
actually
doing
it
under
the
hood.
So
you
understand
all
the
different
corner
cases.
What
yeah?
What?
A
What
are
your
upgrade
parts
stuff
like
that,
because
it's
quite
difficult,
sometimes
you
know
helm,
is
like
a
general
tool.
It's
not
specific
tool
for
a
specific
job
so
like
with
a
database
upgrade
or
something
like
that
there
might
be
moving
parts
that
have
to
be
all
managed
together
and
it's
quite
difficult
to
do
that
in
a
home
chart
and
do
it
well.
So
it's
good
to
understand
your
your
underlying
platform,
but
helm
is
a
great
way
to
to
just
deploy
stuff
and
particularly
for
demos
as
well
and
show
you
things
like
that.
A
I
I
do
follow
quite
a
kind
of
a
infrastructure
as
code
mantra,
so
I
do
like
having
this
kind
of
approach
like
with
you
have
an
ephemeral
cluster.
You
then
just
run
ansible
scripts
or
whatever
is
or
your
preference
for
scripting
and
automation
and
just
say
right:
we
just
blow
it
away
start
again
and
deploy
it
and
we
can
deploy
it
yeah
reproduce
it
100
every
time
we're
not
trying
to
manage
it,
tweak
it
manually.
A
You
know
that's
just
horrible
to
to
manage
longer
term,
particularly
when
you
do
something
leave
it
alone
for
six
months
and
come
back.
It's
quite
hard
to
then
get
get
your
head
around
it
again.
So
helm
is
quite
good
for,
for
some
of
that,
and
I
I
think
it's
a
tool-
that's
really
good
for
its
particular
use
case,
but
you
have
to
be
aware
of
its
limitations,
so
touch
on
that.
So
we've
got
helm
and
we've
got
a
helm
chart
for
fluent
bit.
A
You
can
send
the
fluent
helm,
charts
repo
and
you
can
kind
of
see
it
goes
into.
Essentially,
that's
that's
all
you
need
to
do
generally
to
install
fluent
bits
on
the
kubernetes
platform
and
that
will
spin
up
a
load
of
stuff.
You
can
see
all
the
templates
here.
You
know.
You've
got
roles,
conflict
maps,
network
policies-
if
you
want
to
use
them,
there's
a
load
of
monitoring
with
prometheus
support.
You
know
adding
dashboards
and
lua
scripts
and
all
this
stuff
can
all
be
done
quite
easily.
A
From
from
you
know,
just
a
declarative
conflict
which
is
quite
nice.
We
have
loads
of
default
values
and
with
helm
you
get
the
defaults
unless
you
override
them.
So
there
are
some
some
gotchas
with
that
like.
If
you
want
to
disable
something
generally,
you
have
to
like
override
it
with
a
null
value,
or
something
like
that.
So
it
can
be
a
bit
bit
funny
sometimes,
but
it's
pretty
straightforward
and
you
can
kind
of
see
here
as
well.
You
know,
we've
got,
we've
got
everything
you
generally
see
and
then
right
at
the
bottom.
A
You've
got
yeah
your
default
config
and
it
probably
looks
quite
quite
understandable
for
people
who've
come
from
like
fluid
bit
already
and
understanding
how
it
how
it
all
works,
and
actually
we
we're
discussing
this
today,
weren't
we
eduardo
about
like
with
the
new
yammer
config
coming
in
in
1.9,
once
it's
nice
and
stable,
we
might
look
at
replacing
the
config
we've
got
in
the
helm,
chart
with
the
gml
equivalent
just
to
keep
things
a
bit
more
consistent,
but
yeah.
It's
just
a
general
thing.
A
Here's
your
config
map
and
you've
got
one
for
your
service,
one
for
inputs,
filters
and
outputs
and
the
actual
content.
You
know
this.
This
just
goes
into
a
kubernetes
config
map
and
then
is
mounted
as
a
file.
So
it's
pretty
straightforward
and
it
leaves
nice
bits
like
if
you
want
to
override
the
output.
You
just
have
to
change
one
field
and
you
get
all
the
rest
of
it
for
free.
So
it's
quite
nice
there
now
I
touched
on
it
briefly.
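The values layout being described looks roughly like this in the chart's values (an abbreviated sketch following the fluent/fluent-bit chart's `config` keys; the tail path and stdout output are illustrative defaults):

```yaml
config:
  service: |
    [SERVICE]
        Daemon Off
        Log_Level info
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Tag kube.*
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
  outputs: |
    [OUTPUT]
        Name stdout
        Match *
```

Overriding just `config.outputs` in your own values file swaps the destination while keeping the service, inputs, and filters from the defaults.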
A
We've
got
well
I'm
pulling
together
these
how-to
guides
as
part
of
leading
up
to
this.
This
podcast,
with
with
red
hat
for
different
things,
and
the
first
thing
we
need
to
to
sort
out
is
like
getting
fluent
bit
to
read
the
logs
from
the
host,
because
that's
what
you
need
to
do
generally
on
kubernetes
yeah,
you
can
see
the
home
chart
there.
You
just
deploy
a
job
done
now
with
openshift,
it's
a
bit
more
secure,
usually,
and
there
are
a
few
issues.
A
So
I've
got
my
my
terminal
here,
so
we
can
do
all
this
through
the
web
ui.
But
I
just
want
to
show
you
like
you
know.
Open
shift
is
kubernetes.
You
can
use
the
kubernetes.
A
You
know
standard
command
line
tools
to
do
it,
so
we
want
to
install
the
helm
chart
it's
just
one
line
off
it
goes.
You
can
kind
of
look
at
your
earpods
and
see
what
they're
doing
yeah
it's
running
already
and
it
spins
up
quite
quickly.
We
can
even
go
to
it,
doesn't
help,
but
there
we
are
right.
So
we
look
at
pods
for
the
default
namespace
and
you
can
kind
of
see
what's
going
on
there
and
I'll
show.
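The terminal checks in the demo amount to the standard Kubernetes commands; a sketch, assuming the chart's default release name `fluent-bit` installed into the `default` namespace:

```shell
# List the fluent-bit pods created by the chart's DaemonSet
kubectl get pods -n default -l app.kubernetes.io/name=fluent-bit

# Stream fluent-bit's own output to see what it is doing
kubectl logs -n default daemonset/fluent-bit

# Inspect the DaemonSet (mounts, image, events) when something is wrong
kubectl describe daemonset/fluent-bit -n default
```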
A
There-
and
one
of
one
of
the
things
here
I
wanted
to
highlight
here,
is
like
so:
we've
mounted
our
logs
in
to
the
pod,
it's
all
showing
everything,
but
we
can't
actually
read
them,
which
is
a
bit
of
a
problem.
A
So,
like
you
know,
we
can't
actually
get
any
logs
out
of
our
cluster
at
the
moment,
and
if
I
show
you
in
here,
this
is
the
pod
that
we've
deployed
yeah.
It's
part
of
a
demonstrator
there's
a
lot
of
extra
stuff,
but
you
can
kind
of
see.
We've
mounted
the
various
bits
and
pieces
that
we
need.
These
are
all
standard
from
the
from
the
helm
chart
but
yeah
the
logs
are
saying:
no,
you
can't
can't
read
that,
and
actually
this
is
it
gets
even
worse.
A: If you do it outside the default namespace, or the default "project" as Red Hat call it (and they do recommend you don't use the default project, so that's fair enough), but if you use it outside of the default one... let me just remove it, and I'll just install it in another namespace, as you can see, with Helm.
A: So if we describe the DaemonSet, it's much clearer what the problem is here, whereas in the default one everything starts up and it just doesn't work, which is not very helpful. But here we've got this idea of security contexts. Security context constraints are something in Red Hat OpenShift which are like a security profile for various things. So you can say, you know, this application has a profile of privileged.
A
This
one
is
only
allowed
to
do
like
a
subset
and
by
default
it's
very
restricted,
because
things
like
accessing
host
volumes,
host
paths
and
stuff
like
that
yeah
there's
a
few
exploits
you
can
you
can
get
there
and-
and
quite
often
you
know
just
any
old
pod
shouldn't
be
allowed
to
do
that.
So
you
should
have
to
declare
that
you
know
this.
I
trust
what
this
part
is
doing.
I
want
it
to
execute
under
this
particular
profile,
and
I
want
it
to
have
extra
access
to
to
other
stuff.
A
So
how
do
we
solve
it?
Now,
there's
a
lot
of
documentation
on
security
contexts,
there's
a
lot
of
documentation
on
what
you
can
do
with
them,
how
you
configure
them,
what
it
means
and
to
be
fair.
A
You
know
once
you
understand
the
problem,
you
can
figure
out
what
the
documentation
was
trying
to
tell
you,
but
I
found
it
just
quite
easy
to
to
basically
reverse
engineers
from
from
what
red
hat
had
so
red
hat
openshift
provides
various
operators,
and
one
of
the
operators
they
provide
is
a
it's
a
logging
operator
this
one
here,
and
this
is
based
on
fluentd.
Actually,
as
the
log
collector,
so
fluid
d
runs
reads,
the
logs
sends
them
in
this
case,
usually
to
like
an
internal
elastic
search
instance.
A
So
that's
basically
what
we
want
to
do
so
what
I
did
this
week,
I
think
it
was
tuesday
afternoon
actually
deployed
this
and
then
tried
to
reverse
engineer
what
it
was
doing.
It
took
a
while
because
it's
like
with
kubernetes
there's
like
10
different
ways
to
do
everything
and,
of
course
you
always
look
through
them
in
order,
and
it's
always
the
last
one
that
you
you
decide
to
check
that.
A
That
explains
how
it's
done,
so
the
main
thing
it
was
doing
here
is
it's
creating
a
security
context
that
lets
you
mount
host
paths.
It's
then
binding
that
to
a
particular
service
account
through
role,
bindings
and
stuff
like
that,
and
then
it's
running
fluentd
with
that
service
account.
So
we
can
do
the
same,
and
that's
basically
what
this
particular
how-to
guide.
A
I've
got
here
in
cluster
log
access
is
about,
is
kind
of
showing
you,
so
I
sort
of
show
you
some
of
the
some
of
the
problems
we've
got
here
as
well,
and
then
I
go
into
like.
So
how
do
we?
How
do
we
solve
it
now?
The
main
the
magic
is
is
in
in
this
service
account
creation.
A
So
what
we
want
to
do
with
the
help
chart
is
we
want
to
create
the
security
context
and
associate
it
well
and
bind
it
to
the
service
account
and
then
use
that
service
account
with
the
helm
chart
and
the
help
chart
lets
you
do
that
by
default,
the
helm
chart
tries
to
construct
a
service
account
for
you,
because
it
needs
some
of
some
access
to
various
things,
and
it
also
does
some
rbac
stuff
as
well
for
for
the
kubernetes
filter.
But
you
can,
you
can
say,
don't
do
that.
A
I
don't
want
you
to
do
that,
because
I've
created
my
own
service
account
outside
and
I'll
show
you
so
in
the
in
in
the
values
for
the
server
for
the
fluent
bits
helm
chart,
there's
two
main
things
here
so
line
26
you've
got
create
me
a
service
account.
Well,
I
don't
want
you
to
do
that.
I
might
turn
it
off
and
specify
the
name
of
it.
So
that's
basically
what
what
I
ended
up
doing
for
this
example.
So
I'll
just
just
show
you
what
we've
got
there.
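The two values being toggled look roughly like this in an override file (a sketch; the account name here is hypothetical and should match the service account you created yourself):

```yaml
serviceAccount:
  create: false        # don't let the chart construct one
  name: fluent-bit-scc # hypothetical: the externally created account to use
```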
A
So
we
just
have
a
bit
of
stuff
to
to
create
a
name
space
with
a
service
account
in
it,
and
then
we've
got
a
security
context
constraint
and
the
big
thing
here
is:
let
me
mount
some
host
directories.
That's
that's
what
you
want
yeah!
I
just
want
you
to
do
that.
I
want
you
to
drop
a
load
of
other
stuff
as
well
and
try
and
keep
it
as
small
as
possible,
and
basically
I
took
this
from
from
what
the
the
the
logging
operator
was
doing
as
well.
A
So
so
it's
basically
following
what
red
hat
doing
and
and
just
just
replicating
it
ourselves,
and
then
we
bind
the
two
together
and
say:
yeah,
there's
just
there's
just
a
binding
there,
and
once
we
have
that
we
can
then
deploy
it
with
with
with
our
with
our
helm
chart
and
the
main
thing,
then,
is
we
just
override
the
values
so
there's
some
extra
stuff
at
the
bottom?
A
That's
just
me
dealing
with
picking
an
image
and
making
sure
there's
no
output,
but
yeah
we're
just
saying
override
the
service
account
settings,
don't
create
it
and
also
use
this
specific
named
one.
A
We
then
have
some
extra
security
context
details
just
for
our
pod
and
the
main
one
which
is
a
bit
confusing
here,
is
so
to
get
your
image
certified
by
red
hat.
You
can't
have
the
way
you
have
to
run
as
a
non-root
user.
A: Unfortunately, the logs are mounted as the root user, so essentially we've got a certified image and we then tell it to run as UID 0 so it can access the logs. We make sure it's read-only, and we try to lock down the rest of it, because ultimately we only need read access; we're not writing to the file system. So that's what we do. And there's a nice thing about Helm here as well; you can kind of see it a little bit.
A
But
there's
you
know,
there's
there's
ways
of
adding
multiple
values
files
you
can
just
chain
them
together,
so
I've
so
for
for
my
grafina
cloud
example
I'll
show
you
in
a
minute
when
we
deploy
helm,
we
actually
use
the
values
from
cluster
log
access.
There's
a
separate
one
for
for
metrics
as
well,
because
you
need
to
now
proc
consists,
but
it's
the
same
concept
and
then
there's
just
the
output
for
grafana
cloud.
A
So
what
we
do
is
we
mount
the
necessary
setup
to
to
to
get
access
to
our
logs
and
then
we
just
say:
here's
our
outputs.
So
I
can
reuse
this
values
file
for
every
single
output
for
every
different
destination.
A
I
want
to
use-
and
I
can
just
say
so-
use
the
values
from
this
and
also
the
values
from
this,
and
it
will
combine
the
two
together
so
we'll
end
up
with
a
nice
little
bit
of
output,
so
for
for
the
grafana
cloud
one
I
can
set
it
up
to
send
output
to
prometheus
and
loki
quite
easily.
I
don't
need
to
include
all
the
other
stuff
from
you
know.
I
don't
need
to
duplicate
stuff
from
the
other
values
file,
so
it
simplifies
things
a
lot,
but
anyway
back
to
the
example.
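Chaining values files works by passing multiple `-f` flags; later files override earlier ones where keys collide. A sketch (the file names here are hypothetical):

```shell
# First file: the SCC / service-account setup for cluster log access.
# Second file: only the Grafana Cloud [OUTPUT] sections.
helm upgrade --install fluent-bit fluent/fluent-bit \
  -f cluster-log-access-values.yaml \
  -f grafana-cloud-output-values.yaml
```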
A
So
this
is
not
working.
So,
let's,
let's,
let's
delete
this
sorry,
it's
just
easier
to
do
or
delete
the
fluid
bit.
We've
just
tried
to
deploy.
A
Where's,
the
sorry
it's
hitting
behind
the
zoo,
so
we
can
kind
of
see
I'm
not
sure
if
it'll
delete
the
actual.
What
do
I
call
it
test?
I
think
it
was
the
actual
project.
No
it's
still
there,
but
yeah
there's
nothing
in
it.
So
we
can
it's
all
gone,
and
now
I've
got
just
a
nice
example
of
deploying.
A
Deploying
everything
using
this
grifana
cloud
one,
so
I've
just
got
a
little
little
script.
That
does
everything
for
you,
but
ultimately
the
main
thing
it
does
is
just
use
the
helm
chart.
There's
some
other
funny
stuff
you
might
be
might
be
excited
to
know
that
when
you
add
a
helm
repo,
it
treats
an
ending
slash
as
a
different
name.
So
you
can't
add
the
same
name
with
or
without
the
ending
slash.
A
So
here
I
add,
you
might
already
have
one
or
the
other
already
included,
so
it
would,
it
will
pass
if
one
of
them
is
already
included,
otherwise
it
will
default
to
the
first
one
but
yeah.
So
we've
got
that
in
there
we've
got
some
environment
variables
because
things
like
the
credentials
and
stuff
like
that
are
not
obviously
not
storing
in
the
repo
providing
this
environment
variable
and
we'll
substitute
them
in.
A
So
you
can
kind
of
see
so
I've
added
the
debug
flag
as
well
to
helm,
so
it
prints
out
all
the
kind
of
stuff
for
the
demon
set
you're
going
to
get.
You
can
kind
of
see,
look
we're
running
as
user
0
for
the
containers.
So
there's
the
the
overridden
image.
I
provided
all
that
kind
of
stuff
and
hopefully.
A
Yeah,
so
we
can
see
we're
writing
to
prometheus
successfully.
We've
also
there's
no
commission's
errors.
Now
that's
the
main
thing.
So
if
we
look
in
in
here
a
bit
logging,
which
is
just
the
name
of
it,
everything's
running,
we've
actually
got
a
pod
now,
whereas
before
we
didn't
get
a
pod
because
the
demon
set
refused
to
create
it
because
the
permissions
problems-
and
you
can
see,
there's
there's
a
lot
of
files
and
but
that's
not
yeah.
A
It
is
a
full
production
system
running
on
a
single
vm.
So
it's
a
bit
a
bit
crazy,
but
we've
got
everything
in
there
and
we're
writing
to
prometheus,
and
you
can
kind
of
we
can
kind
of
have
a
look
at
grafina
cloud.
A
I'll
just
go
to
explorer,
it's
pretty
easy,
openshift
logs
there
we
go,
so
these
are
the
ones
that
are
just
coming
in
obviously,
look
you
can
just
see
there.
They've
only
just
started-
and
I
was
messing
about
before
just
before
we
kicked
off,
as
you
can
see,
just
make
sure
it
was
working,
but
you
can
kind
of
see
so
I've
not
really
done
anything
other
than
said
right,
the
right,
the
old
logs
straight
to
to
loki.
So
we've
got
those
and
you've
also
got
separately.
So
I
cover
it
and
eduardo.
A
Did
the
post
I
think
last
year
about
writing
your
metrics
to
to
grafana
cloud
as
well?
Let's
see
if
I
can
find
it,
but
the
metrics
are
coming
in
somewhere.
A
I
was
definitely
looking
at
before
anyway.
Metrics
are
coming
in
and
we've
got
the
logs
coming
in
so
so
everything's
working,
you
can
kind
of
see
it
there.
I've
actually
submitted
a
pr
to
this
is
quite
noisy.
You
know
on
success.
It's
it's
every
time.
It's
right
into
prometheus,
it's
ryan,
a
success
message
and
we
don't
really
need
that
info
level,
so
I've
dropped
that
to
debug
now
so
yeah.
A
So
that's
that's
what
we've
got
that's
everything
working,
so
I
just
wanted
to
cover
some
of
like
what
I've
done
this
week
with
with
openshift,
which
is
essentially
just
figure
out
that
security
stuff
by
reverse
engineering.
What
what
the
cluster
logging
operator
did
and,
as
I
say
it
is
all
documented,
but
it
is
a
bit.
A: ...a case of "how does it work?", and then, from that, pulling out the bits I need, which turns out to be, as you saw, just a small bit of YAML to set the service account and security context constraint, and it's pretty straightforward. And that repo, the OpenShift examples collection, is all public, and I'll keep extending it with more examples. We're looking at doing a few others, for Splunk and Datadog and whatever else anyone wants, maybe OpenSearch and stuff like that.
A
I
think
that
was
pretty
much
what
I
was
going
to
cover
there
just
to
let
you
know
as
well
that
like
fluentd
is
the
current
logging
collector
for
for
openshift,
but
there
is
a
plan
to
move
away
from
that,
so
it
might
be
nice
to
to
start
using
fluid
bits
and
migrating
to
that
from
from
from
flu
from
d,
just
because
it's
all
in
the
same
ecosystem,
it's
quite
straightforward
to
do,
and
probably
a
little
bit
more
performant
as
well
as
you
saw.
You
don't
need
an
operator
to
do
it.
B: This is really exciting; thanks, Pat, for sharing all of this. We know that sometimes putting all this work together is not just about the build itself; it's also about building the image, certified with the right base image for Red Hat, plus making sure that it can be deployed with Helm, and fixing all the complexity around it, right? So our goal is that the final user can just run one or two commands and it's ready to go. Hey, what is your vision? This is a question for you.
B: What is your vision about metrics collection in this space? Now, this is a kind of open question. I know that in Kubernetes the standard is the Prometheus pull-based mechanism, but we're also seeing a trend where users are thinking, hey, what about pushing through remote write? And we know that the OpenTelemetry operator is coming and wants to fit into this space. But I wanted to get your opinions and expertise on this matter.
A: So my opinion is that polling for stuff is always wrong; you know, the Prometheus scraping approach, I'm not a huge fan of it. Remote write is a lot better: rather than something going and pulling, you can push it out there, and I think that's better.
A
It's
still,
there's
still
it's
quite
easy
to.
I
think
we
touched
on
this
earlier.
I've
certainly
done
it
a
few
times,
but
like
denial
of
service
yourself,
just
by
incorrectly
configuring,
your
scrape
interval
or
something
like
that
yeah
and
things
like
cardinality
and
stuff,
like
that,
you
have
to
understand
your
storage
side
of
things
as
well.
So
for
me,
I
think
we
do
need
a
much
you
know.
A
lot
of
stuff
in
kubernetes
is
event
driven.
You've
got
auto
scaling.
You've
got
all
these
kind
of
things.
A
Metrics
is
weird,
it
seems
to
be
the
only
thing
that
isn't
or
you
know
it's
not
widely
adopted
any
I've
not
seen
a
widely
adopted
solution.
Yet
I
think
a
bit
of
it
is
like
prometheus
is
kind
of
the
incumbent
and
yeah
as
long
as
you
support
that
you've
got
a
lot
of
you
know
different
options,
because
there's
lots
of
different
tools
that
can
handle
it.
I
think
I
think
there
needs
to
be
a
different
approach.
A
Operators
as
well,
I
think,
are
a
bit
overused
yeah.
There's
there's
not
always
a
need
for
an
operator
like
here.
There's
like
a
helmet,
you
don't
need
an
operator
to
manage
that.
A
You
know
kubernetes,
manages
demon,
sets
and
and
and
does
all
that
yeah
that's
what
its
job
is,
and
I
think
there's
a
bit
of
overuse
of
our
operators,
sometimes
for
maybe
a
controller
or
just
a
simple
scheduler
will
will
handle
it,
but
also
things
like
so
one
good
thing
I
think
with
a
bit
now
is
like
you
can
do
logs,
you
can
do
metrics
and,
and
hopefully
soon
we
can
do
traces.
A
Well,
let's
see
that's
like
the
three
pillars
of
observability
and
rather
than
having
to
install
and
manage
three
different
agents
to
do
that,
and
all
the
craziness
of
three
different
life
cycles
how
they
talk
to
each
other,
the
resources
they
use,
how
they
use
the
network
and
all
that
kind
of
contention
as
well.
Can
we
just
have
one
agent
that
does
everything
and-
and
we
can
deploy
that-
and
we
just
have
to
worry
about
that
one
agent
and
that
one
pipeline
rather
than
okay,
I'm
getting
my
metrics
from
here.
A
I
think
if
we
can
get
to
that
point
where
there
were
some
good
talks
at
fluentcon,
the
last
two
where
they
were
using
like
fluent
bit
as
a
hotel,
collector
and
things
like
that,
so
it'd
be
really
good
to
get
some
of
that
kind
of
stuff
so
like
well,
you
don't
need
to
install
three
agents,
you
just
install
fluent
beer
and
also
you
can
do
stuff
like
there's
a
lot
of
legacy
stuff
where
metrics
are
in
logs
and
being
able
to
like
expose
them.
You
know
I'm
already
collecting
the
logs.
A
Let
me
just
transform
them
into
metrics
and
expose
them
and,
and
I'm
already
pushing
those
metrics
out
as
well.
You
know
it's
there's
a
lot
of
capability
there.
I
think,
to
simplify
that
whole
stack
and
not
have
10
million
different
options
for
every
choice.
It's
yeah,
it's
yeah!
We
can
probably
talk
about
it
for
a
while,
but
yeah.
A
Certainly,
coming
from
that
kind
of
space
of
like
let's
try
and
reduce
our
dependencies
and
understand
them,
particularly
for
like
yeah,
I
spent
a
lot
of
time
doing
air
gaps
and
stuff
like
that.
It's
a
nightmare
like
as
soon
as
you
add
one
new
tool
that
tool
has
like
a
thousand
dependencies,
they're,
not
well
documented,
and
you
don't
find
out
that
something's
broken
until
run
time
and
it's
just
a
nightmare.
A
So
if
you
can
like
narrow
it
down,
which
I
think
is
such
sort
of
like
what
red
hat
trying
to
do
is
like
they
make
those
choices
for
you,
you
don't
have
a
choice
over
x,
y
or
z.
It's
like
we
do
it
this
way.
We
do
key
cloak
for
identity
management
or
whatever,
and
it's
not
like.
Well,
you
can
do
whatever
you
want.
No,
it's
like
you
do
that
these
are
things
you
have
to
download
as
well
integrated.
We
don't
have
to
test
all
the
different
combinations.
A
We
only
test
the
one
thing
that
we
provide,
so
I
think
it
would
be
good
if
we,
if
we
had
that
and
for
a
bit
as
you
you
know,
we
I
think
we
talked
about
with
the
amazon
guys
earlier
as
well,
was
like
we
manage
a
lot
of
our
dependencies
ourselves
as
well,
so
they're
compiled
in
they're
not
brought
in
from
somewhere
else
and
and
stuff
like
that.
So
it's
it's
a
lot
easier
to
deploy.
That's
certainly
been
my
experience.
B: Then there's how instrumentation is done, and there are a lot of fun things here. The first thing is, if you look at users now moving to OpenTelemetry, it's not about OpenTelemetry itself; it's like, okay, now they have to instrument their applications, they have to get the right SDKs into the applications, so the applications can ship this telemetry data.
B: Now, if we go to fluentd, back in the day, years ago (and this is fun), fluentd is not just the agent: we have the protocol, which is called Forward, and we have SDKs for almost all languages, so you can already instrument your applications with fluentd: Node.js, Go, I think there's one for Rust right now, Python. So you can instrument your application and ship the logs over the wire by using the Forward protocol.
B: But what got Prometheus its wide adoption in a short period of time was that it just exposed an HTTP endpoint, and that's really interesting. We cannot compare logs with metrics: logs are quite a bit more intensive and add more overhead through the whole process of the pipelines.
B
I
would
say
that
matrix
is
quite
lightweight
compared
to
logs,
in
my
opinion,
but
it's
interesting,
and
I
think
that
now
put
in
my
my
oss
hat
maintainer
and
I
look
at
all
of
this
yeah.
So
we
have
users
who
go
and
stay
with
prometheus
doing
polling
mechanism
for
a
long
time
like
that's
the
industry
right
now
in
the
meanwhile
other
open
telling
people
is
going
to
start
implementing
with
open
telemetry.
B: How do we see this as the Fluent project? Personally, I see that the way to go is to allow connecting both worlds, from Prometheus to OpenTelemetry, back and forth, and to provide the flexibility to users to solve the problem, right? The problem is: I need to move my telemetry data and ingest this data into a vendor, a database, a cloud service, whatever.
B: So I'm happy to see that all this journey around metrics and logs, and the choices we made, is hitting that point. And now there's this work on OpenShift that you're doing, which I think is really interesting too, because it helps compose more use cases in that space. I think, at the end of the day, it's about bringing the data to the user, so they can extract the value from it, yeah.
B: Okay, thanks everybody for watching, thanks Pat for your time, and please subscribe to the channel. We're not just going to be talking about Calyptia but also Fluent, and we're going to have other guests from other projects and other ecosystems. So please keep watching and subscribe.