From YouTube: TGI Kubernetes 107: pod logging and fluent-bit
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
This week we will be exploring pod logging and fluent-bit
content used in the episode is available here: https://github.com/vmware-tanzu/tgik/tree/master/episodes/107
Hey, hey everybody, and welcome to episode 107 of TGIK. This week we're going to be talking about pod logging and container logging. We're going to dive into a little bit of the detail of how all this stuff works, and then we're going to explore some of the ways to get the logs that come off of your containers to some aggregated place, and we'll probably talk a little bit about why, and that kind of thing.

I was getting to why: I had the opportunity this morning to go and see Ian Coldwater and Brad Geesaman present an incredible talk at RSA Conference, and they'll be presenting that talk again at KubeCon this year in Amsterdam. The talk was really great. It was about different advanced persistent threats and how they work inside of Kubernetes, so if you're interested in that sort of stuff, definitely check it out and give them a follow on Twitter.

It was really great. But anyway, your ears were burning, Rory, because you were mentioned a few times. Who else have we got? We have Mike Merrill signing in from New Jersey, we've got Martin (good to see you, Martin, as always), and we have Matty. Thanks for picking this topic; it actually wasn't my original idea. I think it was you, George; I think George came up with this one, and I was like, you know, that's a really good point.

I was really surprised that we'd never really hit that one, so I figured, yeah, I've got to do that, because it's a really good topic. So shout out to George. Tim Downey is signing in from Santa Monica, and Moses is signing in from Dubai. And I mean Santa Monica, not wherever else you were thinking; Santa Monica, is that this place?
Rodolfo Sanchez is signing in; he's somebody I met last week in Palo Alto. The week before that was a VMUG conference, the VMUG leaders conference, that I was speaking at, which was a pretty amazing experience. It was people who are leaders of VMware user groups around the world all coming together to talk about what's coming, what's changing, and that sort of stuff from the VMware ecosystem, and that was pretty neat; I definitely enjoyed it. We've got Philipp signing in from Bonn, Germany. I don't know if I'm saying the Bonn part right, but I feel pretty good about the rest of the words. So yeah, let's dig into it here and see what the news of the week is.

So this week we get to see Mr. Joe, and some other notable people: Pat Gelsinger, the CEO of VMware, Raghu Raghuram, Ray O'Farrell, and Kit Colbert. They're all going to get together and record and talk about a launch event that gives you a view into the direction we're going with VMware and its stuff, so it's definitely worth checking out.

I always like to see Joe speak a little bit when he's here to actually present TGIK, but if you're interested in this sort of stuff, definitely go check it out. There is also a new podcast out, put together by a friend, Rich Burroughs, and they're looking to start presenting really interesting information about Kubernetes and the people who build and use it. I've been asked to be on one of the episodes; I'm not sure which episode it'll be, but it'll definitely be a cool one.
A
So
definitely
check
this
out
if
you're
interested
in
more
podcasting,
goodness
for
kubernetes
type
things
we
also
have
Keep
Calm
is
right
around
the
corner.
At
the
end
of
March,
beginning
of
April,
we
will
be
doing
oh,
it's
realize
I'm
gonna
have
to
change
the
date
on
something
that
we
will
be
doing
a
coupon
in
Amsterdam
and
I.
Believe
it's
still
unplaced
alone.
They
have
a
noir
novel
coronavirus
update.
A
You
know
if
you're
meeting
people
at
conferences
and
any
conference
doesn't
have
to
be
cubic
on
film,
you
know
feel
free
to
bump
elbows
or
that
sort
of
thing
rather
than
shake
hands,
wash
your
hands
a
lot.
Do
that
kind
of
thing:
it's
a
pretty
big
deal
out
there
and
we
want
to
make
sure
everybody
stays
healthy,
I'm,
actually
fine
to
Vegas.
Next
week
and
I'm,
like
oh
I'm,
just
gonna
like
carry
a
backpack
full
of
like
Purell,
and
that's
my
that's
my
plan,
but
keep
Kaunas
coming
up.
A
The
schedule
is
posted
I
have
a
talk
at
the
main
conference
talking
about
sec,
humph
and
security
profiles
and
that
sort
of
stuff
that
should
be
really
fun.
I'm
also
co-presenting,
with
Ian
Coldwater
again
at
security
day,
and
that
is
the
next
thing
we
have
the
schedules
for
our
cognitive
security
day.
Serverless
practitioner
summit
and
service
meskada
are
up
so,
if
you're
interested
in
these
things,
these
are
day
zero
events
they
actually
take
place
the
day
before,
keep
convicts
place
that,
if
you're
interested
in
understanding
more
about
what's
happening,
there
definitely
go
check
them
out.
A
The
next
one
I
found
this
week
is
the
new
dismiss
that
there's
this
new
application
manager,
which
brings
skin
ops
to
Google,
Cooper
and
Cooper,
need
attention
and
I
think
this
is
actually
pretty
awesome,
because
I've
been
really
kind
of
like
trying
to
get
to
get
up
stuff
out
there.
The
link
is
TGI,
K,
dot,
io
/
notes
it's
actually
up
above
was
look
where
you
can
find
it
up
there.
A
If
you
go
from
Holland
and
we
have
Marco
signing
in
from
bedrock
good
to
see
you
Marco
has
been
a
while,
since
you
were
here,
live
I'm,
glad
you're
here
and
we
got
Kristoff
from
düsseldorf
Germany.
So
this
application
manager
is
like
a
pretty
neat
thing
because
it
is
actually
bringing
kind
of
a
get-ups
flow
to
deploying
applications.
Inside
of
your
few
more
news,
cluster
I
was
really
kind
of
disappointed,
though,
because
it
looks
like
you
have
to
have.
You
can
only
do
this
on
a
gke
cluster,
so
it's
not
something.
A
I
can
explore
easily
with
like
a
kind
cluster
or
any
other
kübra
nice
cluster
have
around,
and
that's
so
I
was
kind
of
surprised
by
that
you
know,
but
it
is
what
it
is.
I
guess
and
so
I
just
I
did
want
to
call
that
out.
I
was
like
I
thought.
I
thought
it
was
great
to
see
like
more
folks
working
on
good
apps,
but
I
am
disappointed
to
see
that
this
is
a
closed.
One of the neat things about this new solution: the requirements are pretty recent, though. You have to have a Kubernetes 1.17 cluster with RBAC enabled, and the target deployments or pods have to have Prometheus metrics available on port 9090. So it looks like maybe it'll be scraping those endpoints directly, which is kind of interesting.

The next article up is a really good one by NeuVector, and NeuVector have actually been doing a lot of interesting articles in this space, so their blog is pretty decent. This one is about optimizing for I/O intensive containers on Kubernetes. What's interesting is that by default, Kubernetes doesn't really spend a lot of its time trying to police the I/O between containers. We have quota for memory and for CPU, but what about I/O? So definitely check this out.

This is a very good article, I thought. It gets into it, and they even get down into some of the primitives; they actually talk about the different CPU schedulers that can be used on the underlying node and how those things have an effect on tasks that are running inside of Kubernetes. Thank you, abc123, I appreciate it.

So yeah, they get into some really good details, they have some really good thoughts about how it works, and they have put together a really good article about how to approach this problem, which I think is definitely worth checking out. So, going back to our notes here: we've talked about the I/O intensive containers article. Okay.
So, when you contribute a project to the CNCF, lots of great things happen, including the ability for the CNCF to host webinars on your project and help get the word out. What's fascinating about this one is that you can already tell from the information here on the right that this is a developer advocate, a Helm maintainer, and another developer advocate, from different companies, all getting together to talk about an open source project called Helm. Excuse me, I might sneeze.

Okay, the neat thing about this is that they're doing this webinar talking about Helm: verifying your Helm installation, signing and verifying Helm charts, detecting and fixing vulnerabilities in container images (which is probably the Snyk stuff), and then Kubernetes security in your charts. They're also digging into the security model of Helm itself, so it's definitely worth checking out. That has changed a lot; introducing Helm 3 really changed it quite a bit.

And there was this article that came out on February 26 that was an overview of Fluentd, which is another project that has been donated to the CNCF. Fluentd and Fluent Bit are two different solutions that fit the same need, and they even have a very consistent way between them of solving this problem. So Fluentd and Fluent Bit are very similar, and we'll talk about the differences here in just a minute as we get into the actual episode.

It's a good article, I should say. Alright, that was all the news that's fit to print from the cloud native landscape. I'm sure there's more stuff out there, but I wanted to make sure we save enough time to get through the content today, and it'll be a fun one. So let's get this kicked off.
Alright, let's check the chat and see how everybody's doing. We've got Morteza from Tehran, and Eduardo Silva, who is one of the authors of Fluentd and Fluent Bit (I believe of both, but I was working with Eduardo to understand a little bit more about Fluent Bit), so, an awesome friend; I'm glad they're here to chat about this stuff and answer questions as well. We've got Salt from Finland. And Eduardo corrects me: he works on Fluent Bit only. Great. We have Bradley from Leicester, UK. Alright.

So containers really kind of changed the way that we think about logging, and this is actually one of those truths about containers that, I think, bothers some people and doesn't bother others, but I want to make sure we think about it. Oh, thanks, Rory, I like that one too; it's almost like the Iron Man logo for the Tanzu stuff. But here's what containers do for us.
The way that they inherently solve logging is that they expose standard out and standard error from the container, and they throw that stuff into files. Oh, Stephen, hey Stephen. So, let's take a look at how it works. Alright, so here inside of /var/lib/docker is actually where we're going to see the output of the logs of all the containers that we might have running at any time.

So if we look at these logs, these are just information about the container, including the output of its logs. This JSON .log file that is associated with that particular container, this guy here, is actually the capture of the standard out and standard error from that particular container. And so, let's do a docker ps.
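For reference, this is roughly what that looks like on a Docker host with the default json-file logging driver; the <container-id> placeholder stands in for the long hash that docker ps shows:

    # list running containers and note the full container ID
    docker ps --no-trunc

    # each container's stdout/stderr is captured as JSON lines in here
    sudo ls /var/lib/docker/containers/<container-id>/
    # expect to see <container-id>-json.log alongside the container's config files

    # the log file itself is one JSON object per line
    sudo tail -n 2 /var/lib/docker/containers/<container-id>/<container-id>-json.log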
So those are the containers that we just created, and you can see that the long string is related to what docker ps shows; it's the long string that represents the container ID. So that a94d7 prefix is that long hash that represents the actual container ID, and if we go into that directory we can see our log file again.

So now, back to docker ps, and I'm going to do docker logs, and we can see the output. This generator is just generating logs constantly to the standard out and standard error of the container. So now let's see if we see what we'd expect to see; this will be the interesting part. /var/lib/docker/containers... okay.

There we go. Alright, so it is wired up; I was just doing something wrong. Oh yeah, sorry, my bad. So, docker ps: I've started this container image called banzaicloud log-generator, and what it's doing is literally just kicking out a steady stream of logs to standard out and standard error. So if we do docker logs -f against that container, we can see what the output of that log looks like.
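A minimal sketch of that flow; the image name and container name here are assumptions based on what was shown, so substitute whatever log-generating container you have handy:

    # run a container that spews random log lines to stdout/stderr
    docker run -d --name log-gen banzaicloud/log-generator

    # follow its output the way the episode does
    docker logs -f log-gen

    # ...and confirm the same lines are landing on disk
    CID=$(docker inspect --format '{{.Id}}' log-gen)
    sudo tail -f /var/lib/docker/containers/$CID/$CID-json.log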
A
This
is
just
a
dumb
little
process
that
is
randomly
generating
logs
kicking
about
to
the
output
we're
gonna
play
with
this
more
later
but
effectively.
That
is
how
the
that
is,
how
containers
handle
locks.
So,
if
you're
running,
if
you're
used
to
running
you,
know
some
other
application
that
handles
logs
differently,
like
maybe
a
java
application
or
or
one
of
the
other
applications
that
are-
or
you
know
pretty
much
any
other
application
that
you
might
write
if
you're,
if
your
goal
was
instead,
you
know,
persist
your
logs
to
syslog
or
persist.
A
A
A
Think
about
handling
the
logging
problem
like
how
do
we
actually
aggregate
those
logs
how
to
get
them
off
the
machine?
How
do
we
present
them
to
our
developers
and
that
sort
of
stuff
kind
of
an
easier
way
or
lower
a
less
friction
in
a
way
that
results
in
less
friction,
I'm
gonna
go
ahead
and
stop
stop
this
container
because
I
don't
want
to
like
fill
up.
My
fill
up
my
disk
here.
A
So
that
is
our
test
container.
It
shows
how
containers
work
and
pods
work
much
the
same
way
right
like
if
I
was
gonna.
Look
for
the
log
for
a
pod.
It
would
work
the
same
way.
So
let's
go
ahead
and
try
that
out
so
I've
created
this
I
created
this
environment,
just
gonna
play
with
it
and
before
we
actually
get
too
far
here,
I
want
to
talk
about
like
what
is
configured
here.
So
this
is
the
configure
of
the
cluster
I'm
using
a
kind
cluster,
like
usual
you've.
B
A
It is good. I want to make sure we understand what's happening here in the background. One of the things I'm doing to this particular cluster is also configuring it to handle audit logs. I want to make sure that the audit logs are coming out of the control plane node and being persisted to disk in the form of a file called /var/log/kube-apiserver-audit.log. And I just spotted a typo, so let's fix that real quick.

Alright, so in my configuration I've got a few interesting things happening. Inside of kind, when I'm defining that node, I'm describing an extra mount, and that extra mount means that inside the node container I'm going to mount that file at /etc/kubernetes/policies/audit.yaml, and on the host, my laptop, I'm actually going to put that file in /tmp.
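Here's a rough sketch of what that kind configuration can look like. The host and container paths are reconstructed from the episode rather than copied from its repo, so treat them as illustrative:

    # kind-config.yaml (sketch)
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
      extraMounts:
      # audit policy authored on the laptop, mounted into the node container
      - hostPath: /tmp/audit.yaml
        containerPath: /etc/kubernetes/policies/audit.yaml
    - role: worker

    # The API server still has to be told about it, e.g. via kubeadmConfigPatches
    # that add --audit-policy-file=/etc/kubernetes/policies/audit.yaml and
    # --audit-log-path=/var/log/kube-apiserver-audit.log to the apiserver flags,
    # which is roughly what the episode's cluster config does.

    kind create cluster --config kind-config.yaml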
But my goal here is to make certain that the audit log is being kicked out in a way that I can then go and scrape it with Fluentd. And yes, they're not mutually exclusive, it's true: Fluent Bit and Fluentd are not mutually exclusive. That's a very good point. Someone asks whether Fluent Bit is better for containers than Fluentd, since they usually use Logstash. I wouldn't say that; they each have their strengths, and we'll talk about it.

The docs actually do a fairly reasonable job of describing the difference between the two. They're both functionally very similar; they're very flexible in the way you can configure them, and they lay out a very similar stream processing mechanism. Each is a log collector, processor, and aggregator. Actually, Fluentd is an aggregator; Fluent Bit doesn't have that feature. But the way they lay these things out (and we'll talk about it a little more when we get into configuration) is the same: you have inputs, you have filters, you have ways to manipulate the data that came from the input, and then you have outputs, ways to get that data out toward your given destination.

Now, some of the stuff that differs. Fluentd is written in C and in Ruby, and it takes quite a bit more memory. It is high performance, but depending on the amount of data that you're moving through it, you can really start to feel the burn. Fluentd is built as a Ruby gem, and it requires a certain number of other gems.
There are plugins for just about every obscure thing you can think of. The plugins in Fluentd give you the ability to really customize the different ways that the input is handled, that the processing is handled, and how the egress is handled, meaning how you send it out. Whereas Fluent Bit has around 35 plugins available; it's not completely compatible with all of those Fluentd plugins, but it also takes significantly less memory, it's written from the ground up in C, and it's meant to be very much more performant.

For this episode I want to do some manipulation of the logs, making sure that I have enough metadata when they show up in my aggregation endpoint that I understand a little bit more contextually what happened with them, and then I want to kick them out somewhere. In my example we're going to use an ELK stack, so we're going to output to Elasticsearch. Fluent Bit checks all those boxes for me, and that's what we're going to be exploring. They're both Apache 2 licensed, they've both been around for a while now, and there's a pretty major Fluent Bit release coming up that Eduardo is working on. From my perspective, that's kind of the high-level why-one-over-the-other kind of thing. But yeah, neat stuff.

So let's see how our cluster is doing. The cluster's up, so let's get onto the node and look around.
Well, that's weird: there's no /var/lib/docker. So I wonder how that works. But then I remember: in kind we use containerd, not Docker, so we ought to have some way of handling that. Does containerd handle logs in a similar way to Docker? Let's take a look at that real quick. If I do crictl ps, I can see the containers that are running here, and there's my log-gen container. If I do crictl inspect on that log-gen container, we can see where its log is going.
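Roughly what that looks like on a containerd node, run from inside the kind node; the container ID is whatever crictl ps reports for your pod:

    # list containers the CRI runtime knows about
    crictl ps

    # inspect one and look for where its log is being written
    crictl inspect <container-id> | grep -i log
    # the CRI log lands under /var/log/pods/... and is symlinked
    # from /var/log/containers/ (shown below)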
We also see where its output goes, into /var/log/containers. And if we do an ls -l here, we can see these are just symlinks. The symlink, I believe, is being created by the kubelet, or it might be done by containerd itself, but if you had multiple containers per pod, you would see the output of each container here, and you can see it's just linked into the actual /var/log/pods path.
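A sketch of that layout on the node, with made-up pod and container names standing in for whatever is running:

    # human-friendly names, one file per container...
    ls -l /var/log/containers/
    # log-gen_default_log-gen-<hash>.log -> /var/log/pods/default_log-gen_<uid>/log-gen/0.log

    # ...pointing at the files that actually get written
    ls /var/log/pods/default_log-gen_<uid>/log-gen/
    # 0.log  (a new numbered file appears after each container restart)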
So now the container just restarted, which means the log-gen container has a different ID. The question is: how long does the old one stay around? We still see both of them here; we still see our a176e one, and we see our b880db one. Now, what's interesting about this: I wanted to show you this other cool thing. If we do kubectl get pods, you can see it's running, and we've had one restart.

One of the cool features that I really like about kubectl is that you can do logs -p, for previous, and point it at that pod, and it will show you the previous log. So if you have a pod that is failing, you can use kubectl to see what the previous logs were. So it keeps those previous logs around for a bit. But let's go ahead and run that same command that we did before.
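For reference, the previous-log trick looks like this; the pod and container names are illustrative:

    # logs from the container currently running in the pod
    kubectl logs log-gen

    # logs from the previous instance of the container (before the last restart)
    kubectl logs -p log-gen

    # same idea for a specific container in a multi-container pod
    kubectl logs -p log-gen -c log-gen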
Yes, /var/log/containers, and we can see that we only have two around. So what we just noticed is that it only keeps the previous one and the current one around; it doesn't keep the ones before that. But what if there was information in that older log that was critical to me? Like, I actually didn't notice that it was failing over and over again until just recently, but I want to be able to go back and see what those logs were so I can understand what happened.

So, let's take a look at our chat and see how everybody's doing. I know I'm talking a lot and I've covered a lot of deep detail, but I hope it's useful to folks. We've got Eduardo saying that the original author of Fluentd is a strong C, C++, and Ruby developer; actually, before Fluentd he created, oh wow, he created MessagePack, the binary serialization data format. Yeah, totally. He wrote Fluentd in Ruby because it was the initial POC, but it did really well.

People are still using some of those plugins, and people kept creating plugins for Fluentd, which is actually pretty killer. I mean, that's kind of the really neat thing about the Ruby piece: people feel comfortable contributing to it and getting those things out there.

All right, let's kick back over here. We talked about logging for pods. Oh, there's one more problem that I wanted to highlight, and we kind of already hit it: these pods are ephemeral. They change a lot, and when they change a lot, that means these logs are going to get deleted from the underlying host.
Now, for the actual containers there are some tweaks on the kubelet to manage the amount of logs that it will keep around and that sort of stuff, but typically what you really want to make sure of is that you actually get those logs off of the hosts. So: we've talked about where pod logs go, we've talked about how long they stay around, we've talked about kubectl logs -p, which is one of my favorite logs commands, and then there's how we get them off and the choice of aggregator.
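The kubelet-side tweaks mentioned above are the container log rotation knobs in the kubelet configuration; a small sketch, with example values:

    # part of a KubeletConfiguration (e.g. /var/lib/kubelet/config.yaml on kubeadm nodes)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # rotate a container's log once it reaches this size...
    containerLogMaxSize: 10Mi
    # ...and keep at most this many rotated files per container
    containerLogMaxFiles: 5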
So, we're on the Kubernetes repo, and inside of it, underneath kubernetes/cluster, there is a directory called addons. Add-ons are the things that you have probably played with before, and there's lots of good stuff in here: the dashboard is hosted here, the device plugin for NVIDIA GPUs, the horizontal autoscaler for DNS, and the DNS plugin, CoreDNS, actually goes to here.

Now, what I did was pull a tar file of a release, which includes things like the add-ons, and so if we do a tar xzf on that tarball we get a bunch of content, including all of the add-ons and everything else related to the main upstream piece that hasn't been moved out yet. So if we go into kubernetes/cluster/addons, there are all the manifests, specifically the fluentd-elasticsearch ones. This setup will actually deploy Fluentd.
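If you want to poke at the same add-on, a rough way to get those manifests (grabbing the source tree instead of the release tarball from the episode) looks like this:

    # grab the manifests from the Kubernetes source tree
    git clone --depth 1 https://github.com/kubernetes/kubernetes
    ls kubernetes/cluster/addons/fluentd-elasticsearch/
    # roughly: es-service.yaml, es-statefulset.yaml, fluentd-es-configmap.yaml,
    #          fluentd-es-ds.yaml, kibana-deployment.yaml, kibana-service.yaml

    # and apply them (this version lands everything in kube-system)
    kubectl apply -f kubernetes/cluster/addons/fluentd-elasticsearch/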
Anyway, let's keep going; we've got that deployed. Let's take a look and see what we've got. So far we've got the elasticsearch-logging pods running in the kube-system namespace (the first one is up and it's working on bringing up a second one), we have the fluentd DaemonSet deployed, and we have Kibana running, but it's not completely ready yet, probably because it's waiting for Elasticsearch.

I want to actually connect to the service that is defined for Kibana. So let's do kubectl get svc -n kube-system, and we see our two important services: we have our elasticsearch-logging service, and we have our kibana-logging service. First, let's actually see if the elasticsearch-logging service is working correctly.
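One way to poke at that Elasticsearch service from the laptop, as a sketch using a port-forward against the service names the add-on creates:

    # forward the in-cluster service to localhost
    kubectl -n kube-system port-forward svc/elasticsearch-logging 9200:9200 &

    # ask Elasticsearch what indices it has; once fluentd is shipping data,
    # logstash-YYYY.MM.DD indices should show up here
    curl -s 'http://localhost:9200/_cat/indices?v'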
There we go. All right, so now what we're seeing is kind of what I was hoping to see earlier. Now we see our two indexes: we see a logstash index and we see a kibana index. The logstash one is being populated by our Fluentd configuration shipping up to Elasticsearch, and we can see it there, so now we know that that's working. Let's go ahead and connect to the Kibana UI to play around and look at the logs. So let's do the same command again, get services in kube-system, but this time we're going to look for the kibana one.

Let's try that out and then we'll kick over to it. Hey, we've got Kibana. Alright, so what we're doing here is just connecting through to the actual application: the application has defined a port as part of its service, and the kubectl port-forward command can determine, okay, there's only the one port exposed by this service, so it connects you to that port. And here we are, looking at Kibana.
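The equivalent for Kibana, again as a sketch against the add-on's service name:

    # Kibana listens on 5601 behind the kibana-logging service
    kubectl -n kube-system port-forward svc/kibana-logging 5601:5601

    # then browse to http://localhost:5601 to set up the index pattern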
So now we're going to do the next part: we're going to look at the create index pattern page, and we're going to see there are a couple of different indexes here. There's one by date, the logstash one, and there's also the kibana one, but we're going to go ahead and grab the logstash one, hit next step, and our time filter field will be @timestamp.

And these are all of the fields built into what I'll call the Fluentd configuration that is hosted as part of the Kubernetes upstream content. So we have fields like the Docker container ID, the Docker container ID as a keyword, the host, the labels, all kinds of information. We can see quite a lot of fields represented here, and now if we go back and inspect the data we can see the logs that are present on our cluster.

So these logs are now off the node, right? Well, sort of. These logs are now in Elasticsearch, but the Elasticsearch is actually hosted on the cluster, so it's not completely off the node, and in fact that raises a very good point: if you did host Elasticsearch on the cluster and you lose the cluster, then you also lose the logs, and that's going to be hard to troubleshoot.

Which is why you'd host Elasticsearch on some centralized cluster, or on a number of other clusters, or use some other service to actually host that log content; don't host it only locally. The point is, if the tools you use to debug a cluster are all hosted on that cluster and the cluster is misbehaving, well, you have discovered one of the many footguns. So make sure you think about that. All right, cool.
So, this is Octant. Octant is an open source project we work on here at VMware, and it gives us kind of a visual tool for looking at how things are working. I'm going to jump into the kube-system namespace, because that's where our Fluentd configuration is hosted, and we can see we have a few things. We have a DaemonSet, which is Fluentd, and what I wanted to look at was how that's configured. So if we look at how that one's configured, we see a couple of different things.

We have /var/log exposed as a mount, and we have /var/lib/docker/containers as a mount, but that's interesting, because it's actually mounting a directory that doesn't exist on the underlying host. We were just noticing that earlier: if we look at the nodes here, there is no /var/lib/docker/containers, but there is a /var/log/containers. Let's take a look at the configuration. We have our fluentd-es configuration, and it's hosted in a ConfigMap, so let's go look at that ConfigMap and see what it looks like.

All right, so here's our configuration for Fluentd. And, like I said before... I keep saying Fluent Bit; right now we're looking at Fluentd, and in a bit we're going to look at Fluent Bit. We're not at the Fluent Bit part yet, but much of this will look very similar when we get there. In this case we see our inputs, and it's a pretty well annotated file describing exactly how this is configured.
Let's get down to the configuration. We've configured a source, and we're looking at /var/log/containers/*.log, and that is interesting because it is exactly what we were looking at before. There's a type of tail, which means it's going to tail all of the files described by that path match.

On the underlying host, on each node, inside of /var/log/containers I have all of those container outputs, and we can see the logs there. We can tag these things, and we can have a position file, so if our Fluentd process comes to an end, or gets restarted, or runs out of memory or what have you, it can come back and continue on from where it left off. And I've had problems with that one before.

Look at all the metadata we have for this one message: we have the container image name, the image ID, the container name, the container host, the labels that are associated with it, the master URL that is in the environment, the namespace ID, the namespace name, the pod ID. All of this information is added to the log line when it pulls it out of the file.

That filter is adding all of this Kubernetes metadata to the content before kicking it up to Elasticsearch, and that is killer, because when you're debugging, you need that context. I think that's actually one of the coolest features of this whole Fluent thing: you can manipulate this data in a way that makes it more contextually relevant to you when it hits the log aggregation endpoint. That is a killer feature.

We also have things that are going to try and fix up the JSON fields, and there's a bunch of other material here that is just manipulating the content. There's a Prometheus exporter, and an input pulling from Prometheus, and a bunch of other content here that is actually pretty cool stuff, and it's even got a buffer. And then this is the part where we're sending the output: this section here is the output, and we're going to send it to elasticsearch-logging.
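Boiled down, the shape of that ConfigMap is roughly this; it's a heavily trimmed sketch of the upstream fluentd-elasticsearch configuration, not a copy of it:

    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos   # lets fluentd resume where it left off
      tag kubernetes.*
      read_from_head true
    </source>

    # enrich each record with pod/namespace/label metadata from the API server
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    # ship everything to the in-cluster Elasticsearch service
    <match **>
      @type elasticsearch
      host elasticsearch-logging
      port 9200
      logstash_format true    # this is what produces the logstash-YYYY.MM.DD indices
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
      </buffer>
    </match>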
Now, the host name here may not look super consistent with the way that we name things, but if you think about it, the service that we interacted with earlier to see the indices was elasticsearch-logging, and because Fluentd is deployed within the same namespace, it can route that traffic using just the service name; they're both in the same namespace. If I put this in a different namespace, I would have to use a bit more context: the hostname would have to be the IP address of that service, or possibly the more complete name, and elasticsearch-logging.kube-system.svc would do it. Then there's the port it listens on, and it's logstash format; that's actually how it determines the index name. So that's our handle on the output.

We're also grabbing the systemd stuff; there's a system.conf and a system input config. There are other examples in here too: it tries to pull logs from /var/log/etcd.log, which is pretty interesting (I like etcd, but I'm not sure that file is actually present; we'd have to look), and it also tries to pull a log from /var/log/kubelet.log, which I don't think is actually there either. Let's take a look at the worker's /var/log.

Here is a way to deploy this thing manually from a set of manifests, if you wanted to; you can see them here. And I highlighted this earlier: the extensions/v1beta1 API has been retired, so if you're going to deploy to a newer cluster, you have to make sure you use the apps/v1 API version instead, because of the 1.16 changes.

From the Fluent Bit documentation: it will consume all container logs from the running node; the tail input will not append more than 5MB into the engine until they're flushed to the Elasticsearch backend, which aims to provide a workaround for backpressure scenarios. There are some details about how that works; pretty cool. The kubernetes filter will enrich the logs with Kubernetes metadata, which is what we talked about, and the default backend in the configuration is Elasticsearch, set by the Elasticsearch output.
Now, what I noticed when I was looking at this configuration is that this content is also available in the Helm chart. But before we go there, let's just walk through the getting started material and paraphrase how this stuff works. A lot of this is going to be very similar whether it's Fluent Bit or Fluentd, but this is how it's laid out, and I want to talk through it real quick.

As I said before, think about it like stream processing. You have your inputs, which is where we get the data; the one that we've seen so far was about getting data from log files, so we use the tail mechanism to get that input. And we have a parser: how do we convert this unstructured data into structured data that we can kick up into Elasticsearch and make a usable log line? In some of the examples from the Fluent Bit side, we were using a parser to make sure that if it was a multi-line log file we would try and catch it and turn it into a single-line log entry, and we were catching things like making sure that when we pull these things apart into fields, we understand which field is the timestamp, which one is the severity, which one is informational, and that sort of stuff. And we have our filter, which gives us the ability to alter that data; that's the really killer part.

Then there's buffering. It means that if we are producing logs faster than Elasticsearch can ingest them, that's hopefully not going to be a situation where we overrun Elasticsearch or we lose content; it gives us the ability to manage it. And then routing: data ingested by an input is tagged, meaning a tag is assigned, and that is used to determine where the data should be routed, based on a match rule. Right now we only have the one routing rule, we're kicking it all to Elasticsearch, but you could do more.
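As a small illustration of that tag and match routing idea in Fluent Bit's classic config syntax, here are two inputs each routed to a different place; the backends and tags are just examples:

    [INPUT]
        Name  tail
        Tag   kube.*
        Path  /var/log/containers/*.log

    [INPUT]
        Name  systemd
        Tag   host.*

    # container logs go to Elasticsearch...
    [OUTPUT]
        Name   es
        Match  kube.*
        Host   elasticsearch-logging
        Port   9200

    # ...while host/systemd logs just go to stdout for debugging
    [OUTPUT]
        Name   stdout
        Match  host.*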
So this is the general layout of how Fluent Bit and Fluentd work; they are not really different in this particular way. This idea of multiple outputs, and handling routing and buffering, and all the stuff we just saw in Fluentd, is very similar if not the same, and there are a bunch of built-in options to handle these things. If you click on any of the inputs, you can see what some of the input plugins are. For example, we can pull input from collectd, we can listen for CPU and disk information, and we can exec a command and collect its content. We can listen to the kernel message log buffer, and then the tail one is the one that we've actually found a use for here. There's also a systemd one, which is pretty cool: being able to interact with systemd, leveraging journald, to pull logs. And if you have an older Linux system, or one that doesn't use systemd for those logs, you could also pull from syslog. There's even a serial interface.

I mean, really, just this small list of plugins accounts for quite a huge capability of getting data from different sources into this tool that lets you handle that stream processing and get it routed out. And the same thing goes for the parsers; there are different parsers. There's a parser named docker, which is the expected way that Docker is going to lay its data out. Then we have our filter plugins. The one we talked about already was kubernetes, but there's quite a lot you can do on the filter side of things, including Lua scripts and grep; definitely a pretty powerful combination. And then the output plugins: do you want to send it to Azure Log Analytics, to BigQuery, to a flow counter, to Elasticsearch, to a file, to Datadog? There are a lot of really great plugins here, and these are all the ones that are just built into Fluent Bit. Now, as we said before, if I were to pop over to the Fluentd set of things, the number of plugins over there is even crazier. So, Fluentd plugins.
I won't go through this whole page, but I wanted to show that there is a ton of plugins here that are hosted and managed by different authors as it relates to Fluentd. So if the case you have is getting these logs processed in a more unique way, where the tools that Fluent Bit exposes (which are very powerful) don't quite solve your need, or maybe you're just taking the easier route because you can see somebody has already solved it, then Fluentd's ecosystem might be the answer. But if your need is getting reasonable amounts of stream data in a Kubernetes cluster out into a place where you can actually manage it, and doing some metadata augmentation like the Kubernetes stuff, then Fluent Bit might be more your match. And again, there's a performance piece to take into account.

Oh yeah, that's not super helpful. Anyway, there are charts for all the stuff that we've got deployed here: they have a Helm chart for Elasticsearch, they've got Helm charts for different versions of Elasticsearch, all kinds of interesting stuff.
These are from the hub charts, not from Elastic; I was looking at Elastic for something else. If you go to hub.kubeapps.com, it's a way to see a bunch of upstream charts that are hosted, and these charts are generally maintained; you can see who the maintainers of a chart are. For the fluent-bit chart we have Kevin Fox, and we have edsiper, who is with us today: Eduardo Silva is a maintainer of this chart.

If you have the answers to these configuration questions, then you can go ahead and configure it however you want. You can enable different parsers through the configuration here, and there are a bunch of general options, like whether you want to capture the audit logs (this is actually why I added the audit log on my cluster, so that we could say, yeah, let's do that), and whether you want to capture the pod annotations or the labels; check whether those are on by default or not.

It also configures things like the kube URL (a bunch of these are at their defaults), the kube tag, and a bunch of other settings, like which image tag is going to be used, what the repository is if you're hosting it locally inside of your own image registry, and what the image pull secret is, if you care about that one. Then there's our input tail parser and path, so which log files we want to grab, those sorts of things. I believe these can be specified multiple times.

And you have rbac.create. This is really cool; I haven't seen it very often, but in this case the chart, if you enable it, will actually deploy a PodSecurityPolicy that is locked down to just the things that these particular pods need, which is pretty cool. And if you want to get really flexible with it, you can provide your own.
So: helm install logging, with the current directory as the chart. Being able to just give the release name like that is a Helm 3 thing; before, you had to do --name. I'm telling it to use the current directory as the chart directory, and then I'm overriding some of these values. I'm setting the systemd input to enabled, because I do want to collect from systemd the output of those units we were watching before, and I'm telling it the units I want to watch: I only have containerd on my host, I don't have Docker, so I want to watch the containerd service, and I also want to watch the kubelet service. I'm telling it the input tail parser is the CRI parser instead of the docker parser, because again I'm using containerd, not Docker, and I'm telling it to set the backend type to Elasticsearch instead of the default forward output.

So it will forward my stuff up to an Elasticsearch endpoint, and here in the output, just like we saw in the configuration, I'm setting the backend host to elasticsearch-logging.kube-system.svc.cluster.local. This is an FQDN that can be used from any pod anywhere inside of this particular Kubernetes cluster to address the elasticsearch-logging deployment that I have. I've told it to go ahead and enable the audit input, and I've given it a different image tag, the 1.3.9 tag, which is the more recent tag of Fluent Bit.
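Pieced together, the install command looks something like the following. The value names are my best reconstruction of the fluent-bit chart's values.yaml from what's said in the episode, so double-check them against the chart before copying:

    helm install logging . \
      --set input.systemd.enabled=true \
      --set input.systemd.filters.systemdUnit="{containerd.service,kubelet.service}" \
      --set input.tail.parser=cri \
      --set backend.type=es \
      --set backend.es.host=elasticsearch-logging.kube-system.svc.cluster.local \
      --set audit.enable=true \
      --set image.fluent_bit.tag=1.3.9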
We can see the ConfigMap for this, so here is our Fluent Bit configuration. It looks very similar to the Fluentd configuration that we saw before. We have our inputs: we're catching input from tail, we're watching /var/log/containers/*.log, it's told to use the CRI parser, and we're tagging it with kube.*. We have some configuration here for the refresh interval, for the memory buffer limit, and to skip long lines.

We have a different input for systemd, in which we're watching for the containerd service and the kubelet service. This one is the systemd piece, so it reads journald data rather than tailing a log file. And then there's one more input that is using tail again, to look for /var/log/kube-apiserver-audit.log.

So again we're using the kubernetes filter, and we're matching on kube, and that match comes from the input, where we're actually setting the tag: we're tagging all of that data as kube, and here down at the filter layer we're doing a match on kube, so anything coming from /var/log/containers actually matches it. And here, you know, we're going to authenticate to Kubernetes itself to get that metadata: we're merging the log, we're grabbing the token from a service account, and all that good stuff.
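For reference, a condensed sketch of what that rendered Fluent Bit config amounts to; it is trimmed, and the audit path is reconstructed from the episode, so treat it as illustrative rather than a copy of the chart's output:

    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            cri             # containerd/CRI log format, not docker
        Refresh_Interval  5
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On

    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=containerd.service
        Systemd_Filter  _SYSTEMD_UNIT=kubelet.service

    [INPUT]
        Name  tail
        Tag   audit.*
        Path  /var/log/kube-apiserver-audit.log

    [FILTER]
        Name       kubernetes
        Match      kube.*
        Kube_URL   https://kubernetes.default.svc:443
        Merge_Log  On              # parse the log field as JSON when possible

    [OUTPUT]
        Name            es
        Match           *
        Host            elasticsearch-logging.kube-system.svc.cluster.local
        Port            9200
        Logstash_Format On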
Except for that mount point: are we actually addressing that mount point? Interesting. What that means is that we're mounting a host path from this thing that we don't need, so we should probably get rid of it. But that was one thing I noticed when I was looking through it.

So, it's almost 2:40, but I want to actually finish this up. I want to show a little bit more, so I'm going to run through this real quick and we'll see how far we get. I appreciate you sticking around for this one; I realize that was a pretty intense place to lose it. So: kind delete cluster.
Yeah, I probably could have brought another machine to do it, but I'm glad you're all still with me. We'll be back up here in a second. What just happened is I realized I didn't put the audit policy piece back on the file system, so the control plane couldn't start; then I replaced it, and now we're back in play.

Boom, there's this thing. The neat part is the YAML support, which I thought was really cool. This is using a language server that can be used in a variety of IDEs; I use it from my editor, and I'm actually using basically this same setup to do it. So if you're interested in exploring that, definitely check it out, Eduardo; it's very cool.
If there's a kubelet there and it's registered with the cluster, put this pod on that kubelet: that's what this means, and that's what the toleration with the Exists operator is there for. It's a pretty neat feature, kind of a superpower of tolerations. I see a lot of people trying to define tolerations that match all the different configurations of clusters they've seen; this is the simple one, and sometimes simplest is best. That's the simplest way to do it. All right, cool. So now let's take a look at our indices.
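That run-everywhere toleration on the DaemonSet is just this, as a tiny fragment of the pod spec:

    # tolerate every taint, so the DaemonSet lands on every node,
    # including control plane nodes
    tolerations:
    - operator: Exists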
Let's open that up again, and again we're going to hit the elasticsearch-logging endpoint and look at that output. This is actually pretty handy, because if I see the index, it means I've got everything working; if I don't see the index, it means something isn't working and it's not going to show up.

So this is the audit log generated by the API server sitting on that control plane node, and it's broken up into fields, like this one is an allow reason, and some of the content that's being captured by the Kubernetes audit log. This is yet another really great thing to get the heck off of your control plane node: you shouldn't have your audit log hosted only on your control plane node, because if it goes away, then you no longer have an audit log to describe what happened. But with that audit data, we can see who ran what command and what they did, what the response was, whether it was allowed, and what the source IP address of the request was. Lots of really good information to see. These ones are basically just get requests, and this is all just a way of expressing that configuration.
Yeah, someone is struggling with another output; that's a different game, but I'm not going to play it right now. Anyway, we're able to see logs from all the things that are happening inside of here now, and that's really cool. The last thing I wanted to show you: basically, what I learned was that Debian, by default, doesn't persist journal logs to disk, so if I jump into the worker node...
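For context, the journald knob involved: on hosts where the journal is volatile by default, persistence is turned on roughly like this:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=persistent

    # then restart journald (or create /var/log/journal and restart)
    systemctl restart systemd-journald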
There we go. All right, so that entry is from the kubelet: the syslog identifier is kubelet, and it's actually coming from the kubelet. We can see that content, so all of our things are actually showing up. What I noticed before was that I wasn't getting those outputs, and so I needed to change the configuration; they're not in a different index, they're under a different syslog identifier, so I think that's where I missed it. In this case it's just the identifier kubelet.

Anyway, brutal: that's what I wanted to show you, and I hope it was helpful. I'm going to sign off. Right now those logs are not in a different index; we could make it so they were, by providing a different output, but at the moment they're all going to the same output. But yeah, that's what I wanted to show you, and I hope it was helpful. Let me kick back over here to the big screen.

I look forward to seeing you next time. Thanks again for all of your help, and I hope all of this stuff wasn't too much, but I look forward to seeing a bunch of you in Amsterdam. If you have not met me in person, please feel free to come by and say hello; I'll probably be at the booth, or I have a couple of presentations. I love to meet people who are out there in the space trying to do good things. It's been a pleasure.